Feedback: music, bash, BSD, cattle/pets, and "the cloud"
More replies to more comments. Here we go!
What's your favorite compression algorithm? Are you obsessed with squeezing every last bit or does normal/medium level compression work just fine for you? On a similar note, what about encoding your media files? Did you consider moving all your HQ MP4 files to HEVC to enjoy the great size reduction benefits without the discernible loss in quality? How about audio? Do you trust only FLAC or is the lossiness of something like Ogg or AAC acceptable to you? Are you a fan of Winrar or 7zip or just plain old Winzip? Windows 10 now offers multiple compression algorithms for its NTFS filesystem, with LZX being the strongest and most CPU intensive. Does that sound interesting to you?
I guess my general disdain for "gearhead" stuff hasn't been shining through in whatever posts this person read. I am definitely not obsessed with compression algorithms or putting a ton of effort into something to only save a middling amount of disk space.
Once upon a time, there were mp3s, and mp3s were what I made (from my own CDs). Then I got the free software leftist thing going on, stopped buying from Amazon because "their one-click patent makes them evil", and then started doing Ogg Vorbis. The "Vorbis" part matters here - just saying Ogg isn't quite enough for people who care about these things.
That was... years and years ago. Then I ended up with a Macbook from the colorful search company and wound up feeding it all of my CDs so I could listen to music from it at work. This worked well enough, and I used that laptop for a bit more than I probably should have. Put it this way - when I bought my first iPhone, they *had* to be paired with a Mac. I paired it with *that* Mac. It had photos and music on it, I read reddit from it, and so on.
I would definitely NOT do this now, and I would advise anyone against doing the same. Nothing bad happened - it was just laziness on my part for a year or so, using their machine to sit there and derp around on the web while at home. I usually had to be on call and online during most of those evenings when the laptop was out and on anyway, so... eh.
I bought my own laptop a year or so later, and it was a Mac since I had bought into the iPhone + iTunes + iPhoto + whatever else ecosystem by then.
Also, Windows anything? No. I only have Windows on the absolute cheapest POS laptop that was still a laptop (and not a tablet pretending to be a real computer) so that I can run a handful of Windows-only tools. I'm talking about tools that have no business being Windows-only, since all they do is twiddle files on a SD card that gets mounted by the OS. Uniden, I'm still looking at you.
Another person asks about a much older post regarding scripting, and that's OK. Feedback on old posts is fine, too:
> Switching it to use bash and adding "set -e" would be a good start.
Are there any specific advantages of using bash in this case? 'man 1p set' on my computer has "-e" documented, but I'm not sure if it's e.g. a recent addition or there are specific quirks.
So, a couple of things. /bin/sh is not bash. If you are using "bash-isms" in your script, you should not be asking for #!/bin/sh as the interpreter up top. Lots of people cheated over the years with this (and got away with it, too) because many Linux distributions shipped /bin/sh as just a symlink to /bin/bash. CentOS (and so RHEL) still does this, and Slackware does too.
Macs are a little trickier: they will start up bash, dash or zsh as /bin/sh depending on how you have another symlink set. Debian just points /bin/sh at /bin/dash, and I believe Ubuntu does too by extension.
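To make the sh-vs-bash distinction concrete, here's a hypothetical throwaway script (file name made up for the demo) that leans on a bashism:

```shell
# Hypothetical demo: a script full of bashisms only works if you actually
# ask for bash. Write it out, then run it explicitly under bash.
cat > /tmp/bashism-demo.sh <<'EOF'
# [[ ]] and glob matching with == are bash features, not POSIX sh.
if [[ "abcdef" == abc* ]]; then
    echo "pattern matched"
fi
EOF

# Under bash, this is fine:
bash /tmp/bashism-demo.sh
```

Run that same file under dash (Debian's /bin/sh) and it dies with a "[[: not found" style error. That's the whole point: the shebang has to name the shell whose features the script actually uses, and a /bin/sh shebang only "works" on boxes where /bin/sh secretly is bash.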
Then the point about "set -e" is that it's one tiny step towards applying a higher standard to your scripting. If something fails, the script fails. It doesn't just carry on as if nothing broke earlier.
Of course, to do it right, you probably also need "-o pipefail", since otherwise a pipeline of stuff | like | this probably won't do what you wanted with your "-e".
That brings us to the ever-present fun where expanding a missing variable gives you an empty string by default instead of an error, and that usually creates a request for "-u". The first time you delete or overwrite the wrong thing because $FOO/bar expanded to just "/bar", you'll wish you had it.
Hence, "set -e -u -o pipefail" up at the top of a shell script (along with actually starting the right shell) is a sign that someone cared about trying to do it right.
At some point, though, the problem is just too big for a shell script and you probably should take several steps back and spend the time to do it right in some other language.
I hear your frustrations with Centos -- this is what moved me to switch over to OpenBSD all those years ago and never look back. It's what Unix used to feel like, stuff makes sense (most of the time), cruft gets removed instead of added, upgrades actually work, and you know exactly what everything in `ps -aux` is doing. I realize you probably have some very specific use cases and the linux is super comfortable, but if you've never taken a look, and you have some spare cycles -- could be worth your time. Could also be a waste of time, so take this with a grain of salt. I'm sure everyone is preaching their favourite non-linux flavour at you right now.
Ah yes, the BSDs. I actually used to be a BSD-flavored sysadmin (at work) back in the olden times. This was before people really "trusted" Linux, so they would shell out the bucks for BSD/OS. Or, if they couldn't afford that, they'd go for FreeBSD and get the same thing, only three years sooner and at a far better price point. I kid, but BSD/OS vs. FreeBSD back in the day was beyond silly for SMP support alone, never mind everything else.
But, I digress. So, yeah, I've run the BSDs, and I'm not counting my flock of sometimes-brain-damaged Macs. But I have some lingering issues with them, mostly about the userspace. There have historically been a bunch of things that rub me the wrong way and so I find myself having to drop a bunch of GNU stuff onto the box right after it gets installed.
I'm talking about really dumb things like "find" requiring a . before it'll start doing stuff. This bites me on my Mac a lot, because "find|grep foo" won't just spit out files in or under my working directory matching foo. I have to go "ah crap, on the Mac again", go back and jam a space and a . in there before it'll get busy. It's BSD find, so the actual BSD operating systems probably (still) behave the same way.
On the GNU find that comes with basically every Linux distribution you can imagine, "find" assumes "." as the starting point and just starts going. This is incredibly small and stupid, but it's an example of things being ... meh.
Another one: for the longest time, you'd get csh, and while a certain part of me fondly remembers my very first ISP shell account on a SunOS box that unironically dropped me into csh and said "figure it out", most of me wants no part of it. csh wound up being used just long enough to fire up ports and install bash -- sort of like how Internet Explorer on Windows 95 used to only run long enough to fetch and install Netscape. You get the idea.
I assume they're probably on zsh now. I haven't tried in a long time.
Back when I relied on Makefiles, the whole BSD make vs. GNU make thing mattered a lot. You had to write rules that were maximally compatible lest you look like a clown for shipping stupid stuff. That's another example of "not having nice things", albeit one that wouldn't bother me any more. (I wrote my own C++ build thing, so there.)
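For what it's worth, the lowest common denominator looks something like this hypothetical fragment: plain macro assignments and explicit rules run under both makes, while GNU-isms like ifeq, $(shell ...), and %-pattern rules do not.

```shell
# Write a deliberately boring Makefile: a plain macro and an explicit
# rule, which both BSD make and GNU make understand. (printf is used so
# the recipe's mandatory leading tab survives copy/paste.)
printf 'MSG = hello from a portable rule\n\ngreet:\n\t@echo $(MSG)\n' > /tmp/portable.mk

# Run it. GNU-only constructs (ifeq, $(shell ...), pattern rules) are
# exactly what you leave out when a Makefile has to run on both.
make -f /tmp/portable.mk greet
```

Staying inside that boring subset is the Makefile equivalent of not using bashisms under #!/bin/sh.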
Maybe things are different with them. Maybe things are different with me. Maybe it won't seem like a bunch of dumb little annoyances to me next time. We'll see.
At least it's trivial to mess around with this stuff these days.
Regarding my CentOS post, a reader said:
In your "On the whole CentOS thing" post, you mention that you set things up manually. Why not treat your machines as cattle instead of pets? I don't personally use fancy tools like Ansible or Puppet, but some bash scripts that set up the boring parts go a long way.
Indeed I do set things up manually. You need only look at the numbers to understand why.
I've been using dedicated hosting since I was a web hosting support tech myself, and that was over 16 years ago now. In that time, there have been four boxes: the first three were RHEL, and the current one is CentOS. No two of those boxes were the same version.
There have been substantial changes to the base libraries, init systems (sysvinit, then Upstart, and now systemd), add-on libraries, compilers, Apache versions, firewalling, and everything else you can imagine along the line. I mean, it's been sixteen years. You kind of expect them to shift stuff.
And yet, there have only been four machines. More importantly, any given version of the OS has been installed and run on a single instance at a time. That means that even if I had sat down and come up with some magic all-singing, all-dancing installer script for (say) RHEL 5, it wouldn't have been of any use, since I only had the one RHEL 5 box, ever.
Now recall that this has been the case for all four versions, and it really should not make any sense to plow time into something that will magically turn a freshly-installed version of *anything* into something that makes me happy.
If I had a fleet of these things, it would be a different story. I would have already built and/or abused something into doing my bidding to wrangle that army of penguins. But, I don't. It's just ONE BOX sitting somewhere in the greater Dallas area.
Story time: I actually *had* a decent-sized (for the time) fleet of machines once. I think the highest count was about 50, with just little old me running the whole thing. What happened? I wrote stuff to manage it. I had something that kept track of every package on every box, and then compared it to my management system, where I had a mapping of what was OK. If a system ended up with a package that was not approved, it would shoot me a mail every day.
When a new version of a package came out from upstream (for security reasons, typically), it would get shifted into the list, and the old one would shift out. This would generate a bunch of mails which I could then use to go and manually *boop* each machine to get them to upgrade.
Of course, that got old too, so I built something that would just show me all of the machines on a single web page, and I could either click click click click click through each one (one click = one machine being upgraded), or just hit the "ALL OF THEM" button and let them beat my web server senseless for a minute or two while they all fetched packages from it.
This kind of thing let me try a new package on a few machines first, see how it went, and then unleash it on the rest if it seemed okay.
You know when I first wrote this? July 1998. Yes! 22 years ago.
If I needed it again, it would make a return... in some form. Trust me.
Also regarding CentOS and migrations, someone asks:
Why not switch to one of the supported Linux distributions on Azure? I hear it's gotten really good and you will only pay for whatever computing resources you use. Do some calculations and see if it costs about the same or a little bit more than your current solution.
Azure, the Microsoft hosting thing? No thanks. I'm not quite the foaming anti-Microsoft type that I was back in the '90s, but I'm still not prepared to depend on them for basically anything I care about.
Also, I assume that would be some kind of VM, and that's another thing I'm not going to do. I'd buy a cabinet somewhere and start schlepping servers to a co-lo before I'd stop using dedicated equipment for this stuff.
I use VMs for experimentation, and to have a Linux workstation environment when otherwise constrained by goofy and/or limited IT departments. The latter situation isn't great, but it's better than putting up with something that never fails to get in my way when it's time to get work done.
What can I say? I am particular about my tools.