
Nvidia Ups Its AI Chip Game With New Blackwell Architecture

Nvidia has announced its Blackwell GPU architecture, a new design for its artificial intelligence chips. The announcement came at Nvidia’s GPU Technology Conference (GTC), the first the company has held in person in five years.

According to Nvidia, the chip, designed for use in the large data centers that power the likes of AWS, Azure, and Google, offers 20 petaflops of AI performance, making it 4x faster on AI-training workloads, 30x faster on AI-inferencing workloads, and up to 25x more power efficient than its predecessor.

Nvidia said that, compared to its current H100 “Hopper” chip, the B200 “Blackwell” delivers both more performance and better energy efficiency. For example, training an AI model the size of GPT-4 would take 8,000 H100 processors and 15 megawatts of power. The same job would require only 2,000 B200 chips and four megawatts.
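A quick back-of-the-envelope check, using only the chip counts and megawatt figures Nvidia cited, shows where the savings come from:

```python
# Back-of-the-envelope check of Nvidia's cited GPT-4-scale training figures.
h100_chips, h100_mw = 8_000, 15
b200_chips, b200_mw = 2_000, 4

print(f"Chips needed:   {h100_chips / b200_chips:.0f}x fewer with B200")
print(f"Power needed:   {h100_mw / b200_mw:.2f}x less with B200")
print(f"Power per chip: H100 {h100_mw * 1000 / h100_chips:.3f} kW, "
      f"B200 {b200_mw * 1000 / b200_chips:.3f} kW")
# Each B200 actually draws slightly more power than an H100 (2.0 kW vs.
# 1.875 kW in these figures), but doing the same job with a quarter as
# many chips cuts total power draw by nearly 4x.
```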

Bob O’Donnell, founder and chief analyst of Technalysis Research, weighed in on the announcement in his weekly LinkedIn newsletter.

Repackaging Exercise

Sebastien Jean, CTO of Phison Electronics, a Taiwanese electronics company, described the chip as “a repackaging exercise.”

“It’s good, but it isn’t groundbreaking,” he told TechNewsWorld. “It will run faster, consume less power, and put more compute into a smaller space, but it’s not revolutionary.”

“Their results can be reproduced by their rivals,” he said. “There is value in being first, but you have to move on before your competition can catch up.”

“When you force your competition into a permanent catch-up game, unless they have very strong leadership, they will fall into a ‘fast follower’ mentality without realizing it,” he said.

“By being first and aggressive,” he continued, “Nvidia can cement their image as the only real innovators. This will drive demand for more of their products.”

He added that Blackwell is not just a repackaging effort; it has a net benefit. In practical terms, Blackwell will let users compute more quickly for the same amount of power and space. “That will enable solutions based on Blackwell to outpace and exceed their competition.”

Plug-Compatible With the Past

O’Donnell called Blackwell’s second-generation Transformer Engine a significant improvement because it can run AI floating-point calculations at four-bit precision rather than eight. Halving the size of those calculations compared to the previous generation, he said, doubles the compute performance Blackwell can deliver.
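To make that precision trade-off concrete, here is a minimal, runnable sketch. It quantizes values onto simple integer grids as a stand-in for the hardware’s actual FP8/FP4 floating-point formats (which a toy example can’t faithfully emulate), so the helper and the numbers it prints are illustrative only:

```python
import numpy as np

def quantize(x, bits):
    """Symmetric uniform quantization onto a 2**bits-level grid.

    A toy stand-in for hardware low-precision formats; Blackwell's
    Transformer Engine uses floating-point FP8/FP4, not integer grids.
    """
    levels = 2 ** (bits - 1) - 1          # 127 for 8-bit, 7 for 4-bit
    scale = np.max(np.abs(x)) / levels    # per-tensor scale factor
    q = np.clip(np.round(x / scale), -levels, levels)
    return q * scale                      # dequantized approximation

rng = np.random.default_rng(0)
w = rng.normal(size=10_000).astype(np.float32)  # stand-in model weights

for bits in (8, 4):
    mse = float(np.mean((w - quantize(w, bits)) ** 2))
    print(f"{bits}-bit mean squared error: {mse:.2e}")

# 4-bit values are half the size of 8-bit ones, so the same silicon can
# move and multiply twice as many per cycle -- the source of the "double
# the compute" claim -- at the cost of the coarser grid the errors show.
```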

The new chips are also backward compatible. Jack E. Gold, founder and principal analyst with J.Gold Associates, an IT consulting company in Northborough, Mass., noted that Blackwell is plug-compatible with Nvidia systems built around the H100.

“In theory, you can just unplug your H100s and plug in the Blackwells,” he told TechNewsWorld, though what is technically possible may not be financially possible. Nvidia’s H100 chips cost $30,000 to $40,000 each, and while Nvidia has not revealed pricing for its new AI chips, it is likely to be in that range.

Gold added that Blackwell chips will help developers create better AI applications. “The more data points you can analyze, the better the AI becomes,” he explained. “Nvidia’s Blackwell is a way to be able to analyze trillions of data points instead of billions.”

Also announced at GTC were Nvidia Inference Microservices (NIM). The NIM tools, built on Nvidia’s CUDA platform, will let businesses bring pre-trained AI models and custom applications into production environments. “This should help these firms bring new AI products to market,” Brian Colello, an analyst with Morningstar Research Services in Chicago, wrote in an analyst’s note Tuesday.
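Nvidia has not published the full interface here, but conceptually a NIM container wraps a pre-trained model behind a standard HTTP inference endpoint. The sketch below shows what consuming such a service might look like; the URL, port, model name, and request schema are illustrative assumptions, not confirmed API details:

```python
import json
import urllib.request

# Hypothetical endpoint for a locally deployed inference microservice.
# The URL, model identifier, and request schema below are assumptions
# for illustration, not documented NIM specifics.
NIM_URL = "http://localhost:8000/v1/chat/completions"

payload = {
    "model": "example-llm",  # placeholder model name
    "messages": [{"role": "user", "content": "Summarize our Q3 sales notes."}],
    "max_tokens": 256,
}

request = urllib.request.Request(
    NIM_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    body = json.load(response)

print(body["choices"][0]["message"]["content"])
```

The appeal of packaging models this way is that an application developer talks to a stable local API rather than wiring up model weights, runtimes, and GPU drivers by hand.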

Helping to Deploy AI

“Big companies with data centers can adopt and deploy new technology faster than small businesses, but most people work at small and medium-sized enterprises, which don’t always have the money to implement and customize new technologies,” explained Shane Rau, a semiconductor analyst at IDC, a global market research company. “Anything like NIM that can help them adopt new technology more quickly and deploy it more easily will benefit them.”

He told TechNewsWorld that NIM lets users find models tailored to their needs. “Not everyone is interested in AI in general,” he said. “They want AI that is relevant to their business or enterprise.”

O’Donnell stated that while NIM may not be as exciting as the newest hardware designs, it is more important for the long term.

“First, it’s supposed to make it quicker and more efficient for businesses to move from GenAI tests and POCs into real-world production,” he wrote. There aren’t enough GenAI programmers and data scientists to go around, he noted, so many companies eager to deploy GenAI have been limited by the technical difficulties. Nvidia is helping to ease that process.

“Second,” he wrote, “these microservices can create an entirely new revenue stream for Nvidia. They are licensed per GPU, per hour (among other variations). This could be an important, diversified, long-term way for Nvidia to generate income.”

The Entrenched Leader

Rau believes Nvidia’s processors will remain the AI platform of choice for the foreseeable future. “But AMD and Intel can take modest shares of the GPU market,” he said. “And because there are different kinds of chips you can use for AI (microprocessors, FPGAs, and ASICs), those competing technologies will be vying for market share and growing.”

There are few challengers to Nvidia’s dominance of this market, said Abdullah Anwer Ahmed, founder of Serene Data Ops, a data management company in San Francisco.

“Along with its superior hardware, Nvidia’s CUDA software solution has been at the core of the AI segment for the past decade,” he told TechNewsWorld.

“The main threat is that Amazon, Google, and Microsoft/OpenAI are working on building their own chips optimized around these models,” he added. “Google already has its ‘TPU’ chip in production. Amazon and OpenAI have hinted at similar projects.”

“In any case, building your own GPUs is an option available only to the largest companies,” he added. “Most LLM companies will continue to buy Nvidia GPUs.”