Reader feedback on CPUs: pay attention to those frequencies
A couple of days ago, I wrote about how CPU utilization has a bunch of weird and wacky measurements, and how the vocabulary around it is inconsistent to boot.
I got a note via feedback from a reader named Sam who pointed out that I missed something: frequencies! Yep, I totally forgot to mention that, so let's cover it here.
It's been a long time since CPU frequencies have been a simple thing. Ever since Intel dropped the DX2 line of 486s onto the world in 1992, we've had to pay careful attention to what we're actually buying. At the time, there were 25 and 33 MHz 486s, and then there was a "66 MHz" 486 (the DX2), but that was the internal frequency. Externally, it still looked like 33 MHz to the rest of the board. There was also a DX2-50 which ran at 25 MHz externally.
There actually was a "real" 50 MHz chip in that family, the 486DX-50, but it seemed to have a bunch of problems and so it kind of vanished.
This was the beginning of CPU speed shenanigans for various reasons. Some were marketing-driven, and others were just practicalities: signal stability, the speed of light, little bits of copper turning into antennas, and other nasty physics issues.
Then, to make things more interesting, these CPU speeds stopped being static. At some point, these processors gained the ability to shift their clock speeds around for various purposes: dropping it to save battery power or when it starts overheating, or boosting it temporarily to get some short computation done sooner - the so-called "turbo mode". (It's not using exhaust to spin a thing to ram more air into it, so calling it that is kind of goofy.)
If this sounds mildly familiar in terms of my stories, it should, because I ran headfirst into badness caused by dynamic CPU frequencies a couple of years ago. In that case, it was a software-defined radio stack that was obviously not handling the variable CPU frequency well, and it ended up affecting the resulting transmissions. It meant I got to hear Oasis and Guns N' Roses and a bunch of other popular music pitch-shifted, tempo-shifted, and generally mangled to hell and back since the drivers couldn't handle it. Meanwhile, other vendors manage to get this right!
What's funny is that I had learned about the relative friskiness of CPU frequencies from the cat picture place, and specifically a special kind of mayhem we called "PROCHOT". When you push your machines really hard, they eventually have to slow down, because it's either that, or they'll melt down. (And yes, that video put me off AMD-based machines for years at the time.)
So, to bring this back around, we don't name CPUs, processors, threads, and cores consistently. We have variable numbers of them. They come at different frequencies both inside and out, and those frequencies are subject to change. Also, now, we're starting to see more machines in which the processors aren't uniform. Some of them are tuned for "performance", and others are tuned for "efficiency". This is great for mobile devices and other power-limited situations, but it also means that you really have to pay attention to what you're running on.
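One way to spot such a mix on Linux is to compare each core's advertised maximum clock: on a hybrid part, the "performance" cores top out higher than the "efficiency" ones. A sketch, assuming the standard cpufreq sysfs layout (the grouping helper and the fake hybrid data are mine):

```python
from collections import defaultdict
from pathlib import Path

def group_cores_by_max_freq(max_khz_by_cpu: dict[int, int]) -> dict[int, list[int]]:
    """Group CPU numbers by their advertised maximum frequency (kHz).

    More than one group on a single package suggests a perf/efficiency mix.
    """
    groups = defaultdict(list)
    for cpu, khz in sorted(max_khz_by_cpu.items()):
        groups[khz].append(cpu)
    return dict(groups)

def read_max_freqs() -> dict[int, int]:
    """Read cpuinfo_max_freq for every core that exposes cpufreq."""
    out = {}
    for node in Path("/sys/devices/system/cpu").glob("cpu[0-9]*"):
        f = node / "cpufreq" / "cpuinfo_max_freq"
        if f.exists():
            out[int(node.name[3:])] = int(f.read_text())
    return out

if __name__ == "__main__":
    # Hypothetical hybrid chip: cores 0-3 boost higher than cores 4-7.
    fake = {c: 5200000 for c in range(4)} | {c: 3900000 for c in range(4, 8)}
    print(group_cores_by_max_freq(fake))
```

On a real machine you'd feed `read_max_freqs()` into the grouper instead of the fake data.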
Oh, and while I'm here talking about power limits, I might as well mention the scenario where you're in a data center and there are practical limits on how much juice you can pull into a given rack, power distribution unit, bus bar, or whatever shared situation it is you care about. Sometimes, you *really* need the machines to take a chill pill and stop pulling so much power for a little bit.
How do you do that? Easy: you make them run HLT instructions. Seriously, that is a real thing that exists in various places. Some servers at some companies run little daemons that stay in touch with a controller over the network, and when needed, they will force the CPUs to "halt" (in the instruction-level sense, not like shutting the machine down forever), and power consumption drops.
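The actual halting happens in the kernel, since HLT is a privileged instruction, but the idea is just a duty cycle: in every time slice, do less work and idle more. Here's a crude user-space approximation of that idea, using sleeps instead of real idle injection (the function and its parameters are my invention, not anyone's actual daemon):

```python
import time

def throttled_loop(work_fn, idle_fraction: float, period_s: float = 0.1, cycles: int = 10):
    """Crude stand-in for idle injection: each period, run work_fn for
    (1 - idle_fraction) of the period, then sleep the rest.

    Real power capping does this in the kernel with HLT/MWAIT, but the
    duty-cycle concept is the same: forced idleness -> lower power draw.
    """
    busy_s = period_s * (1.0 - idle_fraction)
    for _ in range(cycles):
        deadline = time.monotonic() + busy_s
        while time.monotonic() < deadline:
            work_fn()
        time.sleep(period_s - busy_s)

if __name__ == "__main__":
    hits = []
    start = time.monotonic()
    # Spend half of every 20 ms period forcibly idle.
    throttled_loop(lambda: hits.append(1), idle_fraction=0.5, period_s=0.02, cycles=5)
    print(f"{len(hits)} iterations in {time.monotonic() - start:.2f}s wall time")
```

Crank `idle_fraction` up and the work rate drops roughly in proportion, which is exactly the trade described below.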
Of course, if the CPU is HLTing, it's not going to be doing as much practical work, so everything else on the box will get slower, but if the alternative is tripping a breaker or *ahem* melting a bus bar again, I think most people would take a short bit of slowness.
That means that in addition to everything else, some distant controller machine might tell the system that's running your stuff that it needs to "slow its roll", lest it turn the electrical infrastructure feeding it into a puddle of molten slag.
If you're interested in reading more about this, the magic words are "power capping".
All of these things will affect how fast your stuff runs... and there are probably many more I've never heard of, forgot to mention, or otherwise didn't cover adequately.
Parting note: if you're on an Intel-based machine and are interested in this sort of thing, you owe it to yourself to install i7z and watch the cores on your machine while you do stuff. Try "cat /dev/zero" or something like that to see just how expensive random craziness can be. i7z is the tool which got me to realize it was certain C-states which were causing my SDR woes. Turning them off proved it. Thanks, i7z!