Microsoft produces crap, AMD eats it
June 16th 2009

It's old news, but I just read about it in the Wikipedia article for the Phenom II processor.

Apparently Phenom processors had the ability to scale the CPU frequency independently for each core in multicore systems. Now, Phenom II processors lack this feature: the CPU frequency can be scaled, but all cores must share the same frequency.

Did this happen for technical reasons? Did AMD think it was a better design? No. As Wikipedia says:

Another change from the original Phenom is that Cool 'n Quiet is now applied to the processor as a whole, rather than on a per-core basis. This was done in order to address the mishandling of threads by Windows Vista, which can cause single-threaded applications to run on a core that is idling at half-speed.

The situation is explained in an article, where the author mistakes an error on Vista's side for an error in the Phenom processor (bolding of text is mine):

In theory, the AMD design made sense. If you were running a single threaded application, the core that your thread was active on would run at full speed, while the remaining three cores would run at a much lower speed. AMD included this functionality under the Cool 'n' Quiet umbrella. In practice however, Phenom's Cool 'n' Quiet was quite flawed. Vista has a nasty habit of bouncing threads around from one core to the next, which could result in the following phenomenon (no pun intended): when running a single-threaded application, the thread would run on a single core which would tell Vista that it needed to run at full speed. Vista would then move the thread to the next core, which was running at half-speed; now the thread is running on a core that's half the speed as the original core it started out on.

Phenom II fixes this by not allowing individual cores to run at clock speeds independently of one another; if one core must run at 3.0GHz, then all four cores will run at 3.0GHz. In practice this is a much better option as you don't run into the situations where Phenom performance is about half what it should be thanks to your applications running on cores that are operating at half speed. In the past you couldn't leave CnQ enabled on a Phenom system and watch an HD movie, but this is no longer true with Phenom II.

Note how the brilliant author ascribes the "flaw" to CnQ instead of to Vista, and how it was AMD who had to "fix" the problem!

The plain truth is that AMD developed a technology (independent core scaling) that saved energy (which means money and less pollution) with zero effect on performance (since the cores actually running jobs run at full speed), and Vista, being a pile of crap, forced them to revert it.

Now, if you have a computer with 4 or 8 cores and watch an HD movie (which needs a full-speed core to decode it, but only one core), all of the cores will run at full speed, wasting power, producing CO2, and charging you for energy at several times the rate actually required!

The obvious right solution would be to fix Vista so that threads don't bounce from core to core unnecessarily, so that AMD's CnQ technology could be used to its full extent. AMD's move with Phenom II merely fixed the performance problem, at the cost of destroying the whole point of CnQ.

Now take a second to reflect on how the monstrous domination of MS over the OS market leads to problems like this one. In a really competitive market, if a stupid OS provider gets it wrong and their OS does not support something like CnQ properly, the customers will migrate to other OSs, and the rogue provider will be forced to fix their OS. The dominance of MS (plus their stupidity) just held back precious technological advances!



2 Responses to “Microsoft produces crap, AMD eats it”

  1. Marcos Sartori on 13 Aug 2009 at 18:09 #

    It is not Vista's processor scheduler!

    Threads are placed on a queue, like in any other system, and they are run by an idle processor based on their state and priority.

    If it actually happens that the processor triggers a fault (as you say; I have not read the CnQ whitepapers) every time it needs more power, that fault will be managed in a ring-0 (kernel mode) trap handler, which by nature will put the thread it was running back in the queue, like in any other system supporting SMP with a preemptive scheduler. And if the other processor is idle, or running a thread with less priority, or even running something else but for enough cycles that the system thinks it is better to switch to another task, it will take that paused task and run it.

    It is not actually Vista's fault, but AMD's fault for not considering how processor schedulers work. If they had made it an opt-in feature, which OSes would turn on if they manage processor priorities based on their clocks or handle clock scaling in a better way (in a non-interrupt call, or with a low-priority process on the second processor), it would have been a welcome feature for sure.

  2. isilanes on 24 Aug 2009 at 12:27 pm #

    Mmm, thanks Marcos for clearing that up! I am still not convinced, though. If the problem is with AMD, then any other OS (Linux, MacOS) should suffer from the same flaw. Do they? I don't have experience with the issue (I have never owned a Phenom).


  • The contents of this blog are under a Creative Commons License.