Hardware enthusiasts will remember the hubbub surrounding the development and impending release of the Cell processor back in 2005 and 2006. A joint effort by Sony, Toshiba, and IBM, the Cell processor was supposed to revolutionize the electronics industry, shake it to its very core. The idea behind the Cell was that it would deliver high performance not by running especially fast itself, but through its ability to network with other Cell processors. The Cell itself was supposed to be so cheap and tiny that it could go in everything – your TV, set-top box, radio receiver – even your refrigerator and microwave. And, when they all talked to each other, your home would become a veritable supercomputer. Of course, the fact that the first application of this mythical technology was to be a next-gen gaming console whose power would rival that of desktop computers at its launch should probably have been warning enough that this would not be the case, but nevertheless the Cell has yet to find its way into my alarm clock.
So the idea of desktop-class processors powering your everyday household appliances was lost to the ages – for a time. Apparently, and incredibly, Intel has been paying attention, and has given the concept a try itself – and this time, it looks like it may succeed. Its strategy is to put a cheap implementation of x86 on a chip that is easy to manufacture in droves and runs cool enough that it does not require auxiliary cooling. “But wait,” you say, “x86 is the architectural brain behind the Pentium and Athlon chips! That’s desktop class! Intel has already made its first folly and we haven’t yet begun.” And you would be right, of course, if Intel had approached this chip with the same mindset it had with the Pentium (especially the miserable Pentium 4, which gave us the horrific NetBurst architecture). But there’s more.
The power of the x86 ISA isn’t in any particular design strength. In fact, it is an architecture with a number of marked flaws and weaknesses, particularly a shortage of available registers. Rather, the power of x86 is in its popularity – everything these days is written for x86, and the resulting pre-existing libraries and optimization knowledge are a huge boon for any project that chooses to use it. And of course, Intel itself has substantial experience with the x86 instruction set. So, that is where they began.
I will cover the technical details of Intel’s entrant into this market, called the Atom (and codenamed Silverthorne), only in brief; for significantly greater detail, you could peruse the Anandtech article on the subject. In short, the Atom is tiny, efficient, and reasonably fast. Its most interesting architectural feature is that rather than allowing out-of-order execution, the Atom is strictly in-order. This means that if the Atom runs across an instruction that it knows will require a main memory access (an incredibly slow process relative to the sheer speed at which processors run), it cannot take the liberty nearly every processor since the original Pentium has taken and choose other instructions to run first. This decision stems from the aspect of the Atom that I find much more interesting – the emphasis on efficiency in the development process.
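To see what in-order issue can cost, consider a toy timing model – entirely hypothetical, with made-up latencies and a deliberately crude blocking rule, not real Atom behavior. An out-of-order core can start independent work while a load makes its long trip to main memory; the strictly in-order schedule sketched here simply waits:

```python
# Toy model: each instruction is (name, latency_in_cycles, dependencies).
# The load's 100-cycle latency stands in for a trip to main memory;
# all numbers are illustrative assumptions, not actual Atom timings.
PROGRAM = [
    ("load", 100, []),        # fetch a value from main memory (slow)
    ("use",    1, ["load"]),  # arithmetic on the loaded value
    ("a",      1, []),        # three independent instructions that an
    ("b",      1, []),        # out-of-order core could run during the
    ("c",      1, []),        # load's stall
]

def total_cycles(program, in_order):
    """Cycle count under a crude one-instruction-at-a-time model."""
    done = {}          # name -> cycle at which its result is ready
    prev_done = 0      # cycle at which the previous instruction finished
    for name, latency, deps in program:
        ready = max((done[d] for d in deps), default=0)
        # In-order: cannot issue until everything before it has finished.
        # Out-of-order: issue as soon as the operands are ready.
        issue = max(ready, prev_done) if in_order else ready
        done[name] = issue + latency
        if in_order:
            prev_done = done[name]
    return max(done.values())

print(total_cycles(PROGRAM, in_order=True))   # the stall blocks everything behind it
print(total_cycles(PROGRAM, in_order=False))  # independent work hides inside the stall
```

Real in-order pipelines are less strict than this sketch (they typically stall only when the loaded value is actually used), but the shape of the result is the same: the out-of-order schedule tucks the independent instructions into the memory stall, while the in-order one pays for them on top of it.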
The adoption of this particular mindset is a revolutionary step for Intel, and one that should be taken across all its projects, not just the Atom. On a solid path to recovering the CPU crown from AMD after the dark years, Intel had already begun to push efficiency with its various and successful Core architecture processors. Atom, however, takes it a step farther. The standard mantra at Intel right now is that a feature that increases performance by 1% may increase power consumption (and thus heat dissipation) by no more than 2%; with Atom, that ceiling has been lowered to 1%. The result is that the Atom runs at incredibly fast speeds – up to 1.8GHz – without the need for any sort of external cooling whatsoever. This accomplishment is also due in part to Intel’s success with the 45nm fabrication process, and the size of the Atom – smaller than a quarter of a penny – means that Intel can manufacture these things for an estimated $6 a pop.
Another interesting feature of the Atom’s development process was that the processor was designed in incredibly small chunks, or “modules”, with each developer in charge of several. This let developers work more freely, less bound by the constraints of the usual processor development process, with the various stages, especially layout, less tightly locked together. In addition, it made locking down the footprint of the die much easier – if a developer wanted more space for one of their modules, they were required to convince the owner of a neighboring module to give up some room.
It remains to be seen whether the Atom processor will revolutionize the consumer electronics industry. I hope it does, as the machines people buy these days are in increasing need of such technology – Blu-ray and (hah) HD-DVD players are becoming increasingly slow, and televisions are having to deal with increasingly ridiculous resolutions, already resulting in a slight lag that any serious gamer will have noticed by now. It will also make Intel spades of money, which isn’t good news for the yet-again underdog AMD (which I still hope will make a comeback after the terrible Phenom), but here the beauty of the decision to use x86 shines through again – AMD could easily manufacture competing chips. VIA, for its part, has been in the low-power x86 business for years, with its nano-ITX and pico-ITX classes of ridiculously tiny x86 computers.
What can be seen immediately, though, is the benefit the development process of the Atom can and hopefully will have on Intel’s general mindset. Efficiency and modular development are features that will benefit not only Intel, but consumers as well.