We know a little too much about Moore’s Law; in fact, in Digit itself, we’ve probably mentioned it about 1,562 times, enough to give people the general idea that everything hardware, not just the number of transistors, gets “better” and faster every year and a half.
In all the major hardware categories, we’ve tried to go beyond just noting that things will get “better,” which they will, of course. We’ve gone beyond the obvious. We’ve tried to tell you what hardware you’ll be running your software on in the time to come… in some cases a year, in some cases 10.
Processors
How would you like to customise your processor?
Processors! The very word brings to mind number-crunching monsters that eat a few thousand floating-point operations for breakfast. This is one dominion of computing that enjoys big increments in performance every year or so. Ever-shrinking fabrication processes bring the all-important advantages of reduced costs (hence prices) and power savings.
The two giants, Intel and AMD, are locked in a struggle to be top market dog. We witnessed the fallout, courtesy of AMD’s Athlon 64, a few years ago. More recently, Intel’s Core 2 Duo (code-named Merom for laptops) fuelled the retaliation; it came as a shock even to industry experts. The Gigahertz battle is a thing of the past, and parallelism (read: multiple cores) is the new order of things.
Intel’s Core 2 Duo desktop and notebook range will soon be replaced by a derivative of the same architecture code-named Penryn.
Penryn gets a die shrink from 65 nm to 45 nm. Code-named Wolfdale, the dual-core Penryn features 6 MB of L2 cache (as opposed to the Core 2 Duo’s 4 MB). The quad-core Penryn, code-named Yorkfield, adds 4 MB to the Core 2 Quad’s 8 MB of L2 cache, for 12 MB in all.
Besides minor architectural changes like reduction in processor instruction latency, improved IPC (Instructions Per Clock), and tweaks to the cache and memory management subsystems, Penryn’s improved thermal efficiency allows it to enjoy higher clocks than Core 2. Improvements have also been made in power management, but this is restricted to the notebook space, where thermal envelopes are more restrictive.
Intel’s benchmarks indicate nearly 25% performance increments over Core 2. While such a large boost is exciting, there’s even more… Nehalem, a radical new architecture as Intel pitches it, takes the term “multithreading” to a whole new level. With native support for eight cores on a single die (the Core 2 micro-architecture could handle four cores tops), Nehalem has the industry salivating. Although details of power management are sketchy, Intel quotes “dynamic management,” which could mean independent power management for each core.
Nehalem goes the Athlon 64 way with an integrated memory controller (IMC). For those not in the know, the memory controller in Intel systems sits on the Northbridge, while AMD integrated it on the CPU die with their first 64-bit Desktop processors. This kind of architecture cuts out wasted clock cycles, thereby improving memory-related performance. Another leaf out of AMD’s book: Nehalem uses a point-to-point serial interconnect instead of an FSB (AMD calls their implementation HyperTransport).
Something new is a hybrid of CPU and GPU (GPU on the CPU die itself)! This is Intel’s version of AMD’s “Fusion” technology. Fusion is an interesting concept that modularises processors. Both AMD and Intel have showcased Fusion as a computing evolution. It also brings flexibility to the manufacturing process.
Imagine manufacturers producing a processing solution based on three CPU cores and one GPU core. Suppose demand for gaming suddenly increases. Manufacturers have the option to simply plonk more 3D processing cores onto a single wafer, say three graphics cores and one general processing core, saving tremendously on material and manufacturing costs. Nehalem will require a new platform due to the large number of non-backward-compatible changes.
AMD’s been on a bit of a rollercoaster since tweaking Intel’s tail back in early 2006. Disaster control plans revolve around a successor architecture to replace K8 (Hammer) powering their Athlon 64 and AM2 processor families. Barcelona is the codename for their latest architecture (65nm), which will be tasked with swiping back the performance crown lost a year back. It is AMD’s first (native) quad-core.
It has a big advantage over Kentsfield, which is basically a dual dual-core solution (and therefore not native). Barcelona features an optional L3 cache and wide execution, 128-bit to be exact, so 128-bit SSE (Streaming SIMD Extensions) operations are possible with a single micro-operation. SSE allows processing of multiple data streams with a single instruction. This sort of execution works very well for audio and video encoding as well as image editing operations. AMD calls this SSE128. The K8 had a 64-bit execution engine, which meant any 128-bit operation had to be decoded into two 64-bit micro-operations, resulting in wasted clock cycles.
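To picture why SSE128 matters, here’s a rough sketch in Python (purely illustrative, not real SSE code): a 128-bit register holds four 32-bit floats, so Barcelona retires the whole packed addition in one micro-operation, where K8 had to split it into two 64-bit halves.

```python
# Conceptual model only: a 128-bit SSE register = four 32-bit float lanes.
def sse128_add(a, b):
    """Barcelona-style: one 128-bit micro-op adds all four lanes at once."""
    assert len(a) == len(b) == 4
    return [x + y for x, y in zip(a, b)]

def k8_add(a, b):
    """K8-style: the same 128-bit op decoded into two 64-bit micro-ops."""
    lo = [x + y for x, y in zip(a[:2], b[:2])]  # first 64-bit half
    hi = [x + y for x, y in zip(a[2:], b[2:])]  # second 64-bit half
    return lo + hi

a, b = [1.0, 2.0, 3.0, 4.0], [10.0, 20.0, 30.0, 40.0]
print(sse128_add(a, b))  # [11.0, 22.0, 33.0, 44.0] in one micro-op
print(k8_add(a, b))      # same result, but costing two micro-ops on K8
```

Identical results either way; the win is purely in how many micro-operations (and hence clock cycles) the decoder spends getting there.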
Barcelona also has improved data fetch bandwidth and branch prediction, and AMD borrows Out of Order Execution ideas from Intel (Core 2). All in all, in Barcelona AMD sees deliverance from the thorn that Core 2 has become. The Barcelona architecture will also make its presence felt in the notebook arena, eventually replacing the current Turion X2 processors. In its Desktop avatar, Barcelona has been code-named “Phenom.”
The Cell processor powering the PS3 should, we reckon, head for computers soon. Designed from the ground up for high-performance computing, the Cell processor consists of data and program cells (software) that are the fodder for the hardware cells. The Cell architecture isn’t restricted to computing on a single device: if you have two or more Cell-powered devices hooked up, they can easily cooperate while processing. This is called “Distributed Computing”. Scalability is the name of the game with the Cell architecture, which is designed for use in Desktops, servers, cell phones, PDAs and even consoles, so software cells can be distributed to each of these devices.
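To make the “software cells” idea concrete, here’s a hypothetical Python sketch (the function and device names are ours, not IBM’s): self-contained work units are farmed out, round-robin, to whichever Cell-powered devices happen to be on the network.

```python
from itertools import cycle

# Toy illustration of Cell's distributed-computing pitch: "software
# cells" (independent work units) get dispatched to any available
# Cell-powered device -- a PS3, a PC, even a phone.
def distribute(software_cells, devices):
    """Assign each work unit to a device, round-robin."""
    assignments = {}
    for cell, device in zip(software_cells, cycle(devices)):
        assignments.setdefault(device, []).append(cell)
    return assignments

work = [f"frame-{i}" for i in range(6)]
print(distribute(work, ["PS3", "PC", "phone"]))
# Each device ends up with two frames to crunch
```

The real architecture handles scheduling, data movement and trust between devices; this only shows the shape of the idea.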
Manufacturers will have the option to simply plonk different cores onto a single wafer, say three graphics cores and one general processing core, based on demand
The performance of a single Cell processor is 256 billion floating point operations per second (256 GFLOPS). In comparison, a Core 2 Duo E6600 at 2.4 GHz manages just around 4 GFLOPS!
We could well see your cell phone assist your console while gaming or your PC chipping in to share the load with your PDA. Convergence? Or parallelism unparalleled? Applications are pushing the bounds of interactivity and realism, and this bridge may only be crossed through faster processing. A small example is Windows Vista-a gorgeous bit of software that, while being sinfully delightful to use, exacts a heavy toll on hardware. And if better computing devices are available for the same (if not reduced) prices, what’s the problem? “May you live in interesting times”… was that the way the Chinese proverb went? Wish granted!
What’s up next in the blink-and-miss-it world of graphics cards?
The only category of computing hardware more exciting than processors at the moment, or for that matter in an even greater state of flux, is the GPU. And it’s infinitely more noticeable too. Seeing is believing, and GPUs are all about the visual experience.
Graphics cards have lost the stigma of being a “gamer’s tool.” We’ve seen greater acceptance of these processing marvels in many a home. The reason is the burgeoning drive for all things multimedia: movies, digital imaging, and, yes, games. And it’s the entertainment industry, by and large, that drives GPU manufacturers back to their drawing boards to come out with solutions capable of driving their products. And the entertainment industry isn’t self-driven either; it’s driven by us! It’s in our nature to notice and applaud beauty. A simple example: anyone who has used Windows XP for a few days will find moving back to Windows 98 painful, inconvenient, and messy, not to mention an eyesore! Take a wild guess as to what would happen if this new Windows XP fan were to be exposed to Windows Vista for a few days?
As with CPUs, the world of add-on desktop and notebook graphics solutions is a two-brand show: NVIDIA and ATI. The phenomenon of one-upmanship is at work here. NVIDIA had some success with their move to a 90 nm fabrication process with their G71 (7900 series) and G73 (7600 series) parts, which were adopted by gamers and multimedia aficionados alike. We’ve seen what ATI is capable of, courtesy of the 48 pixel units on their X1900XTX series. These cards were DirectX 9.0c ready, and played most (then) current-generation games with a blatant absence of stuttering frames!
Then the whole DirectX 10 boom happened, and both manufacturers were back at their drawing boards trying to outwit each other at designing a solution for an API that hasn’t seen the light of day… yet. This API would obviously need a lot more processing power, and current DX 9 games were pushing the latest cards anyway.
Out came unified shaders (no more separate pixel and vertex shaders), and NVIDIA beat ATI to the launch with 128 such shaders onboard their 8800GTX (code-named G80). This 681-million-transistor behemoth features, for the first time, a 384-bit memory interface, which in itself provides a huge theoretical advantage over the previously de facto 256 bits. Lower variants of the GTX, designated “GTS,” followed.
The whole DirectX 10 boom happened, and both manufacturers were back at their drawing boards trying to outwit each other
NVIDIA has since debuted the mid-range solutions of their 8-series, a.k.a. the 8600GTS and 8600GT, which also signify the move to an 80 nm manufacturing process. Featuring a much lower shader unit count and less, slower memory, these have been very favourably priced, and we’ve seen these cards sell on the street for below Rs 10,000 (GT), while the GTS costs around Rs 3,000-4,000 more. We expect these prices to fall further as other DX10 cards release and the dawn of the API gets nearer.
ATI, too, has announced its DX10 solution, code-named the R600. The card is christened X2900XTX/XT and features 512 MB of GDDR3 memory on a 512-bit bus. The core itself comprises an unbelievable 320 Stream Processors, as opposed to 128 on the 8800GTX, and is clocked at around 740 MHz!
The term “Stream Processor” is simply the new-fangled name for what has been known as the “Shader Unit.” The X2900XT was released very recently, and while initial performance reviews place it behind the mighty G80, we expect to see lots of performance hikes with newer driver revisions. It’s also possible that DX10 titles may see more efficient use of those extra stream processors, which they were built for anyway…
As always we’re not commenting on its capabilities till we test it ourselves. Details for mid-range and low-end DX10 from ATI are also out.
The last card released by either manufacturer before the X2900XT was the 8800 Ultra from NVIDIA, which is clocked higher than the 8800GTX and faster by a hairsbreadth. After the X2900 series release, we could very well see an 8900GTX/GTS, which will probably feature GDDR4. Then there’s a strong rumour that NVIDIA will succeed the dual-PCB, dual-GPU 7950GX2 with an 8950GX2, built around two 8800GTX cores. ATI hasn’t gone dual-GPU, but we’ve seen Sapphire (an ATI-only vendor) unveil one such card: the X1950 Pro Dual, based on two 80 nm Radeon X1950 Pro GPU cores.
NVIDIA’s 8-series cards have raised the bar in games with a lot more visual quality than earlier-generation cards, courtesy the immense shader horsepower and bandwidth at their disposal. Improved anti-aliasing and Anisotropic Filtering algorithms bring the latest games to life, and even breathe a touch of life into earlier-generation hits like Far Cry, Doom 3, and F.E.A.R.
The term “Stream Processor” is simply the new-fangled name for what has been known as the “Shader Unit.”
GPUs are set to do a lot more than just accelerate games: NVIDIA is showcasing what they call CUDA technology which enables their graphics cards to be used as stream processing engines and floating point operators. The GPU has just invaded the CPU space…
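CUDA itself is programmed in a C dialect, but the stream-processing model it exposes is easy to sketch: one small “kernel” applied independently to every element of a data stream. This Python toy (a thread pool standing in for GPU lanes) mimics only the programming model, not the performance.

```python
from multiprocessing.dummy import Pool  # thread pool standing in for GPU lanes

# Stream processing in a nutshell: the same small kernel runs
# independently on every element of the stream. On a GPU, CUDA
# launches thousands of these kernel instances in parallel.
def kernel(x):
    return x * x + 1.0  # some per-element floating-point work

stream = [0.5 * i for i in range(8)]
with Pool(4) as pool:          # 4 "lanes" chewing through the stream
    result = pool.map(kernel, stream)
print(result)
```

Because no kernel instance depends on any other, the work scales to however many processing lanes the hardware offers, which is exactly why GPUs make good floating-point engines.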
For multimedia buffs, we have HDMI (High Definition Multimedia Interface) support being incorporated into the latest graphics cards. These sport improved hardware acceleration for a better video viewing experience. HDCP (High-bandwidth Digital Content Protection) support is also something to look out for on the latest cards.
HD content (especially 1080p) on integrated solutions is an exercise in futility: a video card is mandatory. Yet another catalyst for the discrete graphics revolution is Vista, whose acclaimed Aero interface has onboard video solutions quivering in their boots. Then there are upcoming game titles like Crysis, BioShock, and Time Shift to name a few, that promise to take terms like realism and immersion to all-new levels.
You can’t talk about processors without talking about chipsets in the next breath. Here’s all that talk about sockets, pins and DDR3
While talking about the fastest processors, high-bandwidth, low-latency memory and the newest batch of DirectX 10 cards, we often forget the component that brings all these products together under a single roof. The motherboard is very little more than the sum of its chipsets (the Northbridge and Southbridge), with a printed circuit board (PCB) thrown in to hold everything together. It’s this motherboard that dictates compatibility by allowing or restricting the use of certain processors, memory, and other components. It may seem, therefore, that the motherboard dictates the choice of the other components. Wrong!
You have to ask yourself… are these new chipsets merely platforms to house faster processors?
Motherboard chipsets have been evolving, leading to improved products, supporting newer processors primarily, as well as other allied technologies. However, make no mistake-it’s been the processor industry that has largely dictated terms to chipset vendors, even memory vendors for that matter.
Flashback: Intel moved from the 478-pin processor socket to the LGA 775 socket, largely due to thermal issues. This brought about a major change in chipsets: Intel’s very own 875/865 series gave way to their 925/915 series. More recently, the move to the Core 2 Duo saw the 975/965 chipsets supersede boards based on the 955/945 chipsets, which were incompatible with the new processors.
AMD has seen its share of changes and upheavals too. The move to DDR2 initiated by Intel to complement its then latest Pentium D processors forced AMD to shift to a new platform as well-AM2, which brought DDR2 compatibility to AMD systems.
With their latest generations of processors, AMD and Intel will be leaping to yet another platform. Intel has lots of 45 nm plans based around Penryn, their Core 2 Duo successor. This involves a new chipset, code-named Bearlake, a.k.a. the P35, which will support these processors. This is the mid-range offering, replacing the P965. There’s support for DDR2 and DDR3 memory, a 1333 MHz FSB, a spanking new Southbridge, and more.
AMD also has a new socket for Barcelona-Socket F, which is a 1207-pin LGA-based affair; details are sketchy.
You have to ask yourself… are these new chipsets merely platforms to house faster processors? To a large extent, yes. For example, Intel’s Bearlake is supposed to debut with native E-SATA support, and possibly a new generation of PCI-Express to provide further bandwidth to GPUs.
The notebook space has also seen a lot of upheaval with both Intel and AMD introducing new products late last month.
Intel’s Santa Rosa (dubbed the Centrino Pro) is built around a 64-bit, dual-core Merom processor, which should aid multitasking no end. There’s inbuilt HSDPA (High Speed Downlink Packet Access), a new protocol for mobile data transmission (also called 3.5G). There are other noticeable improvements too, like Intel Active Management Technology, which allows improved remote system access, solving many problems at reduced cost. Imagine disconnecting an infected notebook from a network remotely! Wireless Draft-N support is included. Also in the package is integrated DX10 graphics.
Puma is AMD’s code-name for their new notebook architecture and represents the platform as a whole. HyperTransport 3 connectivity and a new power saving DDR2 memory controller are the standouts here. The graphics will be a DX10 supporting core with full support for Blu-Ray and HD-DVD playback.
The latest generation of integrated graphics chipsets will be DX10 parts featuring unified shaders. Eventually, DDR3 will become the Desktop standard, and we expect greater bandwidth from the next-generation PCI-Express standard to feed the latest and emerging GPUs, and even multi-GPU (CrossFire and SLI) configurations.
With faster processors, memory manufacturers have to step up and make sure that their components aren’t the weakest link in your PC
The memory market has seen its share of trials and tribulations over the past couple of years. Now DDR2 is as common as cows on a highway, while DDR memory’s availability is somewhat sporadic. 667 MHz (PC 5300) memory has replaced 533 MHz (PC 4200) memory as the new entry-level solution for Desktops. PC 6400 (800 MHz) DDR2 memory from brands like Corsair, Kingston, and Transcend is readily available. A 1 GB stick of DDR2 667 MHz memory can now be had for as little as Rs 2,000; 512 MB of DDR memory costs the same, incidentally!
The enthusiast memory market is still in its infancy in India, with low-latency memory costing upward of Rs 15,000. Don’t expect any miracles in this segment because demand for such memory is extremely low.
Intel promises DDR3 memory as a replacement for DDR2 over the next three years. The benefits of this would be lower power consumption and much higher bandwidth. The reason for the changeover is the speed barriers that DDR2 has hit (around 1.1 GHz), mainly due to thermal issues, and, of course, material limits. Intel visualises processor performance scaling higher, and they feel memory would bottleneck their quad-core offerings. Based on global sales, Intel is in a strong enough position to dictate terms to memory manufacturers. Other giants including Microsoft are also in favour of this move; we already know Vista is a memory bandwidth hog.
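The bandwidth argument is simple arithmetic: peak throughput is the effective transfer rate times the bus width. A quick sketch using standard JEDEC speed grades (theoretical per-channel peaks, not real-world figures):

```python
# Peak memory bandwidth = transfers/sec x bytes moved per transfer.
def peak_bandwidth_gbps(megatransfers_per_sec, bus_width_bits=64):
    """Theoretical peak for one 64-bit channel, in GB/s."""
    bytes_per_transfer = bus_width_bits // 8
    return megatransfers_per_sec * 1e6 * bytes_per_transfer / 1e9

print(peak_bandwidth_gbps(800))   # DDR2-800:  6.4 GB/s per channel
print(peak_bandwidth_gbps(1333))  # DDR3-1333: ~10.7 GB/s per channel
```

With DDR2 topping out around 1.1 GHz, DDR3’s higher transfer rates are the obvious way to keep four hungry cores fed.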
HD-grade movies and life-like games will deliver nirvana to their audiences, but they’re resource-hungry; memory-hungry in particular
Then there is NRAM, or Nano RAM, which promises much higher densities. NRAM is non-volatile, unlike conventional RAM, and is based on the positions of carbon nanotubes on a chip-like substrate. However, the development is largely at the conceptual stage. The benefits would be higher memory densities (and therefore lower prices) and low power consumption. Another advantage that NRAM has over Flash is the much longer life cycle of the insulators used.
Yet another technology that shows promise is MRAM (Magnetoresistive RAM), which works on the principle of magnet-induced storage rather than the electric charges or current flows used in today’s DRAM. It’s non-volatile, but doesn’t degrade the way Flash does. Speeds are right up there, with latencies of 2 ns already demonstrated. MRAM is already present in certain very niche markets, though the initial pricing has been a deterrent.
Graphics memory has got a further shot in the arm with better GDDR3 yields, allowing for memory speeds of up to 2 GHz. The major players in the GDDR3 market are Infineon, Samsung, Hynix, and Micron. Samsung is pushing their GDDR4 memory, which we’ve already seen in action on the ATI X1950XTX. The main benefits of GDDR4 are reduced power consumption, smaller packaging, and, of course, greater bandwidth and higher speeds (up to 2.6 GHz).
The future of multimedia is HD-grade movies and life-like games. These applications will deliver nirvana to their audiences, but they’re resource-hungry; memory-hungry in particular. Gamers in the States have already moved to high-speed, 4 GB, dual-channel solutions. In India we’ve resisted the urge to splurge-restrictive prices and insufficient knowledge about the benefits of more memory being the culprits. Let’s hope we can play a part in changing things starting from here.
Windows Vista will demand more from your hard disks-not to mention all those HD movies and music you’re downloading
History proves our species is one of incurable hoarders… from precious metals to food, from fuel to weapons. There is something else mankind has hoarded through the ages-information. The digital era has pandered to this whim, even encouraged it.
Steady advancements in drive technology, and the sharply declining prices they have brought, are another significant factor. A 250 GB Desktop hard drive retails at around Rs 3,500 today, while a 160 GB drive that cost in excess of Rs 4,500 a couple of years ago is available for as little as Rs 2,700 now. That works out to Rs 16.875 per GB, or roughly 1.6 paise per MB! Larger drives have also become cheaper: a 400 GB drive costs under Rs 5,000 now, while 500 GB drives that once cost over Rs 10,000 are near the Rs 7,000 mark.
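The per-GB figure comes straight from that 160 GB drive; the arithmetic is worth a quick check:

```python
# Price-per-capacity arithmetic for the drives quoted in the text.
def price_per_gb(price_rupees, capacity_gb):
    return price_rupees / capacity_gb

print(price_per_gb(2700, 160))  # 16.875 rupees per GB (the 160 GB drive)
print(price_per_gb(3500, 250))  # 14.0 rupees per GB (the 250 GB drive)

# Per-MB: 16.875 rupees over 1024 MB, expressed in paise
print(price_per_gb(2700, 160) * 100 / 1024)  # ~1.65 paise per MB
```

Note the bigger drive is already cheaper per GB, which is the usual pattern once a capacity point goes mainstream.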
Serial ATA (SATA) has matured as an interface, quickly becoming the de facto standard for Desktop hard drives. Perpendicular Recording technology has emerged, leading to even greater capacities due to increased areal density. The 750 GB barrier was hit last year (by Seagate), and this year Hitachi has hit 1 Terabyte. The 1 Terabyte Hitachi features a 32 MB cache and five platters, each holding 200 GB. Though the initial cost of cramming so much data onto each platter is high, acceptance in the market will allow economies of large-scale production to take over.
A new hard drive technology called the Hybrid Hard Drive, or HHD, has emerged, featuring a very high-capacity (1 GB), high-speed, Flash-based buffer. The colossal cache means less spinning up and down of the platters, which translates to power savings and higher performance. In the normal state, the platters are at rest and all data is written to the cache. The platters spin only when the cache has run out of space, or when data needs to be read from the platters. Coupled with falling Flash prices, we could well see hard drives with 8 and 16 GB caches.
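The HHD write path is easy to model. Below is a toy Python sketch (the class and numbers are ours, not any vendor’s): writes land in the Flash buffer, and the platters spin up only when the buffer fills or a read misses it.

```python
# Toy model of a Hybrid Hard Drive's caching behaviour.
class HybridDrive:
    def __init__(self, cache_capacity=4):
        self.cache = {}              # Flash buffer: block -> data
        self.platter = {}            # magnetic storage
        self.cache_capacity = cache_capacity
        self.spinups = 0             # times the platters had to spin

    def write(self, block, data):
        # Buffer full and this is a new block: spin up and drain first.
        if len(self.cache) >= self.cache_capacity and block not in self.cache:
            self._flush()
        self.cache[block] = data     # writes normally land in Flash

    def read(self, block):
        if block in self.cache:      # served from Flash, platters stay idle
            return self.cache[block]
        self.spinups += 1            # cache miss: platters must spin
        return self.platter.get(block)

    def _flush(self):
        self.spinups += 1            # spin up once, drain the whole buffer
        self.platter.update(self.cache)
        self.cache.clear()

hdd = HybridDrive()
for i in range(6):
    hdd.write(i, f"data-{i}")
print(hdd.spinups)  # only 1 spin-up for 6 writes
```

Six writes cost a single spin-up; a conventional drive would have kept the platters spinning throughout, which is where the power saving comes from.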
Yet another promising technology on the horizon is HAMR (Heat Assisted Magnetic Recording). Primarily pushed by Seagate, HAMR uses a laser to heat the metal-alloy-based platter while recording data. The benefit: single bits of data can be stored in much smaller areas than with current HDD technology, resulting in a much higher areal density. We could very well see 30 TB hard drives based on HAMR technology post-2010.
Flash drives are as portable as storage can get. They’re also ultra-convenient and dirt-cheap thanks to several recent price reductions. A year or so ago, a 512 MB USB drive cost Rs 2,000. Now a 4 GB drive (4096 MB) is available for just under the same amount! Just in time for Vista’s ReadyBoost. However, Flash prices aren’t viable for replacing hard drives any time soon. We see Flash and HDDs go hand in hand for a few years, with Flash taking over most immediate storage and retrieval situations and hard drives kicking in for situations where more space is required.
We see Flash and HDDs go hand in hand for a few years, with Flash taking over most immediate storage and retrieval situations
HD-DVD is also on the horizon, featuring up to three times the conventional storage per layer (15 GB as opposed to 4.7 GB). At CES 2007, Toshiba unveiled 17-GB-per-layer HD-DVDs, and demonstrated a triple-layer (51 GB) disc. HD standards are much more rigorous for both audio and video: stereo sound at 24-bit/192 kHz, or eight-channel sound at 24-bit/96 kHz. The other high-density optical format, Blu-ray, is the main competition to HD-DVD, as you probably know. Remember, the format war is still on.
Further on the horizon, we have HVD (Holographic Versatile Disc), a technology still in its infancy. Much research is being done, but when HVD emerges, it should make HD-DVD and Blu-ray seem like the floppies of old, with around 3.9 TB of data on a single disc!
This year should bring out the best in terms of pricing of current-generation storage technologies. We’ve also seen some interesting emergences like the use of 1.8-inch solid state drives for laptop storage (currently in 16 and 32 GB capacities). The next few years should see larger flash drives compete with existing HDD technology. May the hoarding continue!
Printing Better
Printers have been workhorses we all love to flog, as the stacks of A4 paper lying around any good office will testify. Advancements in inks and printers don’t just relate to faster prints: printers are now being developed to print on a variety of media. Epson, for example, has developed printers specifically designed to print on certain surfaces, and they’ve demonstrated this by printing on diverse materials like canvas, a variety of fabrics, and even papier mâché. Imagine famous works of art being recreated by printing on oil canvas! Equally intriguing is the ability of Epson’s printers to print on a variety of fabrics and other materials, which are then used to create unique decors, very suitable for banquet halls, board rooms, and such. The main advantage: the look and the design theme can be changed by simply swapping the prints rather than undertaking a costly redesign.
Epson has also developed printers that can capture live TV frames and print them out. Imagine printing out recipes, coupons, and other snippets as they appear on your TV channels! Perhaps the biggest innovation is the development of an ink containing a metal compound that can actually be used to print directly onto PCBs: no more wastage in the fabrication process!
Nearly all these B2B technology solutions are in the market’s fish-pond; what’s left to be seen is how many bite…
3D printing has been possible for a while now, and industries have had access to these $100,000 printers for over a decade. IdeaLab, one of the companies in the 3D printing game, has plans for a sub-$5,000 (Rs 2.25 lakh!) 3D printer for the home space very soon. 3D printers create 3D objects by melting plastic particles. IdeaLab has come up with a much cheaper and more compact solution that uses a much lower-intensity halogen bulb to melt nylon powder. Their vision: a 3D printer in every home… Imagine being able to print your very own toothbrush, comb, and razor!