

Automated certificates are relatively new and pretty neat. Killing off the certificate cartels is an added bonus.


You could try a path unit watching the cert directory (there are caveats around watching the symlinks directly), or, more reliably, use the post-renewal hooks most ACME clients provide.
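For example, certbot’s --deploy-hook only fires after a cert is actually renewed, which sidesteps the symlink-watching problem entirely (the service name is just a placeholder):

    # Runs the hook only when a certificate was actually renewed
    certbot renew --deploy-hook "systemctl reload nginx"

If you do want the path unit route, a rough sketch (unit name and path are assumptions), paired with a cert-reload.service that reloads whatever consumes the cert:

    # /etc/systemd/system/cert-reload.path
    [Path]
    # Fires on changes in the live dir; note the symlink caveats mentioned above
    PathChanged=/etc/letsencrypt/live/example.com

    [Install]
    WantedBy=multi-user.target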


You could try using the DNS challenge instead; I find it a lot more convenient as not all my services are exposed.
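For instance, with certbot and the Cloudflare DNS plugin (swap in whichever plugin matches your DNS host; paths and domain are examples):

    # DNS-01 challenge: nothing needs to listen on ports 80/443
    certbot certonly \
      --dns-cloudflare \
      --dns-cloudflare-credentials /etc/letsencrypt/cloudflare.ini \
      -d internal.example.com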


I have smart plugs from Innr, Samsung, Aqara (I think) and have never experienced the problem you’re speaking of. Mine are all ZigBee – not sure what yours are.
That said, I just got a bunch of Shelly EM Mini G4 units and put them in some power points and they work great.
If you don’t mind some basic wiring they’re easy to set up.
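If you want to poll them locally rather than through an app, the Gen2+ Shellys expose an HTTP RPC API; something like this should work for the EM Mini, though check the current docs for your generation (IP and component id are examples):

    # Read instantaneous power from the energy meter component
    curl "http://192.168.1.50/rpc/EM1.GetStatus?id=0"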


Not really with the same flexibility.
You only get the usable capacity of the smallest disk in a vdev, or you have to add a new vdev for your newly sized disks.
Unraid lets you mix and match however you like and get all the usable capacity (as long as your parity disks are your largest).
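A quick back-of-the-envelope comparison (hypothetical disks):

    ZFS mirror vdev:      4 TB + 8 TB          -> 4 TB usable (half the 8 TB stranded)
    Unraid, 8 TB parity:  4 TB + 8 TB data     -> 12 TB usable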


I can’t argue, but there are benefits.
If you need something running 24/7 then on-prem may work out cheaper for you. Keep in mind you need a team of server monkeys to keep that running, and your company’s security certifications will come nowhere near those of a major cloud provider.
Cloud is good for elastic workloads, and you can save money that way if you’re set up for it. A simple lift-and-shift will always be more expensive. But doing things like moving build tasks to spot instances and auto-scaling capacity in peak periods is a huge win (see the sketch below). No need to over-provision your DC and no need to upgrade your hardware; generally AWS releases new products at roughly the same price as the old but with increased performance, so you get upgrades “for free”* with no capex.
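As a sketch of the kind of thing I mean (hypothetical group name and schedule; the CLI call itself is standard AWS):

    # Scale a web tier up for business hours; a second action scales it back down
    aws autoscaling put-scheduled-update-group-action \
      --auto-scaling-group-name web-asg \
      --scheduled-action-name business-hours-up \
      --recurrence "0 8 * * 1-5" \
      --min-size 4 --max-size 20 --desired-capacity 8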
Again, I’m not saying cloud isn’t more expensive in your circumstances. But there are medium-term benefits.
AWS refused to offer hybrid as an option for years. They’ve changed their tune in the past five years or so. No reason not to take advantage and run whatever mix makes sense for you.


I’m legitimately curious to understand more (not challenging your assertions). They offer hosted Jira/Confluence and probably other stuff no-one cares about.
What’s the problem with adoption?


Cisco, HP, and many other “Enterprise” switches will take a minute or two to start forwarding frames after boot.
Doesn’t really excuse Ubiquiti but that’s what they’re trying for.
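Some of that is just the OS booting, but spanning tree adds another chunk per port while it walks through listening/learning. On Cisco kit you can skip that part on edge ports (IOS sketch; the interface is an example):

    ! Edge port only; never enable portfast on uplinks
    interface GigabitEthernet1/0/1
     spanning-tree portfast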


Why are you searching for a solution to a problem you don’t have?
There’s nothing wrong with systemd.


They’re not more effective. They might assist with speed of absorption but that’s it.


Never trust the network in any circumstance. If you start from that basis then life becomes easier.
Google has a good approach to this: https://cloud.google.com/beyondcorp
EDIT:
I’d like to add a tangential rant about companies still using shit like IP allowlists and VPNs. That’s just eggshell security: a hard perimeter around a soft, fully trusted interior.


I actually disagree. I only know a little about Crowdstrike’s internals, but they’re a company that is trying to do the whole DevOps/agile bullshit the right way. Unfortunately they’ve undermined the practice for the rest of us working for dinosaurs trying to catch up.
Crowdstrike’s problem wasn’t a quality escape; that’ll always happen eventually. Their problem was with their rollout processes.
There shouldn’t have been a circumstance where the same code got delivered worldwide in the course of a day. If you were sane you’d canary it at first and exponentially widen the rollout from there. Any initial error should have meant a halt to further deployments.
Canary isn’t the only way to solve it, by the way. Just an easy fix in this case.
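A rough shape of what I mean, as a sketch (deploy_to_cohort and error_rate_ok are stand-ins for whatever your tooling provides):

    # Exponential-ish rollout: widen the cohort each wave, halt on any error
    for pct in 1 2 5 10 25 50 100; do
        deploy_to_cohort "$pct" || exit 1
        sleep 3600                        # bake time per wave
        error_rate_ok || { echo "halting rollout at ${pct}%"; exit 1; }
    done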
Unfortunately what is likely to happen is that they’ll find the poor engineer who made the commit that led to this and fire them as a scapegoat, instead of inspecting the culture and processes that allowed it to happen and fixing those.
People fuck up and make mistakes. If you don’t expect that in your business you’re doing it wrong. This is not to say you shouldn’t trust people; if they work at your company you should assume they are competent and have good intent. The guard rails are there to prevent mistakes, not bad or incompetent actors. It just so happens they often catch the latter.


I agree with most of these, but there’s another benefit missing from the list. A lot of the time my colleagues will be iterating on a PR, so commits like “fuck, that didn’t work, maybe this” are common.
I like meaningful commit messages; IMO “fixed the thing” is never good enough. I want to know your intent when I’m doing a blame in 18 months’ time. However, I don’t expect anyone’s in-progress work to be polished before it hits main. You don’t want those commits in the final merge, but a squash or rebase is an easy way to rectify that.
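Either of these tidies it up (hypothetical branch name and commit message):

    # Collapse the last 5 WIP commits; mark all but the first as "squash"/"fixup"
    git rebase -i HEAD~5

    # Or squash everything at merge time instead
    git merge --squash feature-branch
    git commit -m "Add retry logic to the upload client"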


Honestly, these days I have no idea. When I said “wouldn’t recommend” that wasn’t an assertion to avoid them, just a lack of opinion. Most of my recent experience is with cloud vendors, where the problem domain is quite different.
I’ve had experience with most of the big vendors and they’ve all had quirks etc. that you just have to deal with. Fundamentally it’ll come down to a combination of price, support requirements, and internal competence with the kit. (Don’t underestimate the last item; it’s far better if you can fix problems yourself.)
Personally I’d argue that most corporates could get by with a GNU/Linux VM (or two) for most of their routing and firewalling and it would absolutely be good enough; functionally you can do the same and more. That’s not to say dedicated machines for the task aren’t valuable, but I’d say needing ASICs and the like is the exception rather than the rule.
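To make that concrete, a minimal nftables sketch of a Linux VM doing stateful firewalling and NAT (interface names are assumptions):

    table inet filter {
      chain forward {
        type filter hook forward priority 0; policy drop;
        ct state established,related accept
        iifname "lan0" oifname "wan0" accept    # LAN out to the world
      }
    }
    table ip nat {
      chain postrouting {
        type nat hook postrouting priority 100; policy accept;
        oifname "wan0" masquerade
      }
    }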


I agree. GeoIP was never a good idea, but here we are. Any prefix can be broken up, announced from a different ASN, and routed anywhere (and that can change at any time), yet GeoIP lookups are still far too prevalent.


I might be misunderstanding. It’s definitely possible to have as many IPv4 aliases on an interface as you want with whatever routing preferences you want. Can you clarify?
I agree with your stance on deployment.
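For reference, extra IPv4 addresses and per-destination source preferences are one-liners with iproute2 (addresses and device are examples):

    # Add a second address to eth0
    ip addr add 192.0.2.10/24 dev eth0

    # Prefer that address as the source when talking to a particular prefix
    ip route add 198.51.100.0/24 dev eth0 src 192.0.2.10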


Given how large the address space is, it’s super easy to segregate out your networks to the nth degree and apply proper firewall rules.
There’s no reason your clients can’t have public, world-routable IPs and still be secure.
Security via obfuscation isn’t security. It’s a crutch.
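i.e. give hosts globally routable addresses and make the policy explicit instead (nftables sketch; the prefix is a documentation example):

    # Default-deny inbound to the client subnet; conntrack handles return traffic
    table inet fw6 {
      chain forward {
        type filter hook forward priority 0; policy drop;
        ct state established,related accept
        ip6 daddr 2001:db8:1::/64 tcp dport 443 accept    # one deliberate exception
      }
    }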


Why can’t you just have a long-lived, internally signed cert on your archaic apps and LE at the edge on a modern proxy? It’s easy enough to have the proxy trust the internal cert and connect to your backend service, which shouldn’t be able to tell whether there’s a proxy in front of it or not.
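If the proxy is nginx, something like this is all it takes (paths, names, and port are hypothetical):

    # Edge vhost: LE cert facing the world, internal CA trusted upstream
    server {
        listen 443 ssl;
        server_name app.example.com;
        ssl_certificate     /etc/letsencrypt/live/app.example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;

        location / {
            proxy_pass https://legacy-backend.internal:8443;
            proxy_ssl_trusted_certificate /etc/ssl/internal-ca.pem;
            proxy_ssl_verify on;
            # Verification checks the backend cert name against the proxy_pass
            # host; set proxy_ssl_name if the internal cert uses a different SAN
        }
    }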
Or is your problem client side?