Google announced custom silicon for data centers on Tuesday, joining fellow cloud service providers Amazon and Microsoft, which already design their own chips.
Google’s Axion series is the company’s first Arm-based processor designed specifically for data centers. In a blog post, Google’s Amin Vahdat stated that “Axion will deliver industry-leading energy efficiency and performance to Google Cloud customers in the later part of this year.”
Google claims that Axion processors combine its silicon expertise with Arm’s highest-performing CPU cores, yielding instances with performance up to 30% higher than the fastest general-purpose Arm-based instances currently available in the cloud. They also deliver up to 50% more performance and 60% more energy efficiency than comparable current-generation x86-based instances.
“All these companies want something unique, so they can run their software and do things more efficiently,” Bob O’Donnell, founder and chief analyst at Technalysis Research, a technology market-research firm in Foster City, Calif., told TechNewsWorld.
“Power consumption in data centers is one of their biggest costs, and Arm designs tend to be more power-efficient than Intel’s,” he added. “Google won’t get rid of Intel, but Axion will give them a different option, and for certain workloads it is going to be a much better alternative.”
There are also market considerations. “Everyone wants an alternative to Nvidia,” O’Donnell said. “Nobody wants one company to have 90% of the market, unless it’s you.”
Bad News for Intel
Benjamin Lee, a professor of engineering at the University of Pennsylvania, explained that designing its own CPUs allows Google to optimize its hardware for performance and efficiency.
He told TechNewsWorld that “a large part of this efficiency is due to the custom controllers we build, which handle important computations for security, networking and hardware management.” By handling the data center servers’ bookkeeping computation, these custom hardware controllers free up more CPU time for users and customers.
The use of Arm processors chips away at the dominance Intel’s x86 chips hold in the data center market.
Google’s Axion Processor (Image credit: Google)
“This announcement shows an accelerating shift away from x86 to Arm architectures, which is the ultimate reward for chip companies,” said Rodolfo Rosini, co-founder and CEO of Vaire, a reversible-computing company with offices in Seattle, London, and New York.
He told TechNewsWorld that he believes Arm will ultimately benefit more from this announcement than Google will.
Rise of Proprietary Silicon
Axion is another example of major players, such as Apple and Tesla, investing in their own chip designs, observed Gaurav Gupta, vice president for semiconductors and electronics at Gartner, a research and advisory company based in Stamford, Conn.
“We think this is a big trend,” he told TechNewsWorld. “We call this OEM Foundry Direct. OEMs bypass design firms, or get assistance from them, and go straight to the foundry for their silicon. This is done to control costs, improve roadmaps, create IP synergies and other things. We’ll continue to see this.”
With this announcement, Google is putting its substantial financial and technical weight behind a market trend for semiconductors, such as CPUs and accelerators, to be designed according to how they will be used, explained Shane Rau, a semiconductor analyst at IDC, a global market-research company.
“No single CPU or accelerator could handle all of the workloads and apps that Google’s customers have,” he told TechNewsWorld. Google will now offer those customers another option for CPU and artificial intelligence acceleration.
TPU v5p – General Availability
Alongside the Axion announcements, Google announced the general availability of Cloud TPU v5p, its most powerful and scalable Tensor Processing Unit.
In a blog, the company explained that the accelerator was designed to train the most complex and demanding generative AI models. A single TPU v5p pod contains 8,960 chips that run in unison — over 2x the chips in a TPU v4 pod — and can deliver over 2x higher FLOPS and 3x more high-bandwidth memory on a per-chip basis.
“Google’s development and advancement of Tensor Processing Units (TPUs) for data centers underscores its commitment to accelerating machine learning workloads efficiently,” said Dan deBeaubien of the SANS Institute, a global organization that provides cybersecurity education, certification, and training.
He told TechNewsWorld that “this distinction highlights Google’s approach to optimizing mobile and datacenter environments for AI applications.”
According to Abdullah Anwer Ahmed of Serene Data Ops, a data management firm in Dublin, Ohio, Google’s TPUs give Google Cloud customers another, lower-cost option for inferencing.
Inference costs are the fees users pay to run their machine-learning models in the cloud. These costs can account for up to 90% of the cost of maintaining ML infrastructure.
“If a company is already using Google Cloud but their inferencing cost starts to outweigh training costs, then it might be a good option to switch to Google TPUs to reduce costs. It all depends on the workload,” Ahmed told TechNewsWorld.
Promoting Sustainability
Google’s new Axion chip will also contribute to the company’s sustainability goals. “Customers want to meet their sustainability and efficiency goals beyond performance,” Vahdat wrote. With Axion, customers can optimize for energy efficiency.
Data centers consume a lot of power because they are always running. Ahmed said that reducing energy consumption is a good way to contribute to sustainability.
“The Arm CPU is more energy-efficient than the x86,” O’Donnell continued. That matters because data centers carry huge energy costs, which these companies must work to reduce. “That’s why they’re all using Arm.”
“As demand for computing increases, this can’t be done forever, as there are only so many resources in the universe. So you need to get smarter,” he said. “That is what everyone is working on.”