There’s a lot of action in the processor space, thanks to a rapid cycle of innovation brought on by the arrival of ARM-based processors. Competition is great for everyone, and it’s not just consumer devices that are affected; even servers are seeing a sea change. The server processor market has long been dominated by the x86 architecture, largely from industry giants Intel and AMD. For decades, x86 processors powered the world’s data centres, driving innovation and supporting the rapid growth of the cloud. However, a shift is underway even in the enterprise space. Hyperscalers such as Amazon Web Services (AWS), Google, and Microsoft, which control vast portions of the cloud infrastructure market, have developed their own custom processors optimised for their specific needs. These chips – AWS Graviton, Google Axion, and Microsoft Cobalt – are all ARM-based, a divergence from the traditional x86 design. So, we’ve got ARM-based designs making inroads into the consumer space as well as the data centre space.
Which raises the question: what prompted these large cloud providers to design their own processors, and what does this mean for the big guys who’ve been raking it in all along? At the core of this shift lies the desire for greater control over performance and power consumption. In the cloud computing world, efficiency matters more than ever. Data centres, which host tens of thousands of servers, are heavily focused on power consumption. You’ve got data centre operators building massive solar farms and even firing up nuclear reactors for their power needs. The cost of running their facilities scales directly with the amount of power they consume, and this has significant financial and environmental implications. Hyperscalers are under constant pressure to reduce their energy footprints while simultaneously delivering more computational power.
ARM-based processors, with their simpler and more efficient architecture, offer significant advantages over traditional x86 processors in this area. ARM chips are known for their power efficiency, a crucial factor for hyperscalers managing thousands or even millions of servers. AWS, Google and Microsoft, with their massive cloud infrastructures, are looking to squeeze every possible watt of efficiency out of their hardware. This is where custom processors come into play.
By designing their own chips, these companies can optimise them for the specific workloads that they need to run. Instead of relying on general-purpose x86 processors, which are designed to perform a wide variety of tasks, AWS, Google and Microsoft can tailor their chips to excel at the kinds of workloads they handle the most. And it’s not just CPUs that these big players are making; they’ve even started building custom AI chips. Though these are nowhere close to what NVIDIA’s chips are capable of, the attempts will eventually lead them to building competitive products.
The x86 camp has found itself somewhat caught off guard by the rapid rise of these custom ARM-based chips, which threaten the dominance of Intel and AMD. Part of the challenge for x86 processors lies in their legacy. The x86 architecture was designed in a different era, and while it has been continually updated, it still carries a great deal of complexity, particularly in the instruction set. Legacy features, such as support for 16-bit and 32-bit instructions, contribute to a larger, more power-hungry design than what can be achieved with ARM designs.
Enter x86S: an initiative to come up with a slimmed-down x86 instruction set. By doing this, Intel hopes to make its processors more competitive in terms of power efficiency, and perhaps even close the gap with ARM-based designs. But will it be enough?
ARM’s inherent advantages are difficult to ignore. ARM processors are designed to be both power-efficient and highly customisable. This allows companies such as AWS, Google and Microsoft to optimise their chips for specific tasks in ways that off-the-shelf x86 processors simply cannot match. By focusing on specific workloads, these hyperscalers are able to build chips that are just as fast, if not faster, and more efficient than what’s available on the open market.
All of this sounds a little too good to be true, right? Well, there are downsides. One of the immediate concerns is fragmentation. As each cloud provider builds its own custom chips, they introduce a level of divergence in the market. Software that runs great on Graviton might not run as fast on Axion or Cobalt, and vice versa. Developers will have to come up with optimised builds and middleware for each cloud provider’s unique hardware. Thankfully, the abstraction between hardware and software means this level of customisation shouldn’t be a deal-breaker for anyone. But it will be a problem, albeit likely a short-lived one: everyone wants a unified standard, and historically, fragmented standards have tended to converge somewhere down the line.
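To make that fragmentation point concrete, here’s a minimal sketch of how architecture differences surface at build time. It’s plain C and relies only on the standard __aarch64__ and __x86_64__ compiler macros; the tuning flags mentioned in the comments (such as -mcpu=neoverse-v1 for recent Graviton parts or -march=x86-64-v3 for modern x86 machines) are illustrative assumptions about how a team might tune builds per provider, not anything the vendors mandate.

```c
/*
 * Minimal sketch: one C source file, built separately for each target.
 * __aarch64__ and __x86_64__ are standard GCC/Clang predefined macros.
 * The tuning flags in the comments are illustrative, not vendor-mandated.
 */
#include <stdio.h>

int main(void) {
#if defined(__aarch64__)
    /* ARM64 build: the kind that would run on Graviton, Axion or Cobalt.
       A per-provider pipeline might add core-specific tuning here,
       e.g. -mcpu=neoverse-v1. */
    puts("ARM64 (aarch64) build");
#elif defined(__x86_64__)
    /* Traditional Intel/AMD build, perhaps tuned with -march=x86-64-v3. */
    puts("x86-64 build");
#else
    puts("Some other architecture");
#endif
    return 0;
}
```

The branch the compiler takes is decided once, at build time, which is exactly why multi-architecture container images and per-target test runs become part of the routine once a fleet mixes Graviton, Axion, Cobalt and x86 machines.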
The rise of these custom ARM-based processors from hyperscalers marks the beginning of a new era in the enterprise market. While x86 processors will remain a key player for the foreseeable future, their position is no longer as secure as it once was. For the big guys to stay relevant, they’ll have to be open to building custom ARM-based designs themselves, since coming up with a new instruction set is out of the question, especially considering that building a new ISA is a multi-decade endeavour. Which way do you think the pendulum will swing?