AMD Takes Direct Shot at Nvidia's Blackwell with New AI Chip to Seize Data Center Market

AMD's Instinct MI325X aims to outpace Nvidia's Blackwell in the highly competitive AI chip market. Credit: EconoTimes

AMD launched its Instinct MI325X AI chip on Thursday, positioning it as a direct competitor to Nvidia's upcoming Blackwell GPUs. This bold move could disrupt Nvidia’s data center dominance and intensify the AI chip war.

AMD Unveils AI Chip to Rival Nvidia’s Dominance

On Thursday, AMD unveiled a new AI chip that competes head-on with Nvidia's GPUs—the graphics processing units used in data centers.

At a launch event on Thursday, AMD said production of the Instinct MI325X chip will begin before the end of 2024, as reported by CNBC. Nvidia, which has seen strong demand for its GPUs and gross margins above 75% over the past year, could face pricing pressure from AMD's artificial intelligence chips if developers and cloud giants view them as near replacements for Nvidia's products.

Rising Demand for AI Chips Fuels Competition

More companies are bringing AI chips to market to meet demand for advanced generative AI systems like OpenAI's ChatGPT, which require large data centers stocked with GPUs to process data.

AMD has long ranked second in graphics processing units (GPUs) for data centers, a market Nvidia has all but cornered in recent years. Now AMD is determined to take market share from its Silicon Valley rival, or at least secure a significant portion of a market it predicts will be worth $500 billion by 2028.

AMD’s Push for AI Market Share Intensifies

“AI demand has actually continued to take off and actually exceed expectations. It’s clear that the rate of investment is continuing to grow everywhere,” AMD CEO Lisa Su stated during the event.

At the presentation, AMD did not announce any major new cloud or internet customers for its Instinct GPUs, though it has previously said that OpenAI uses its AI GPUs and that Meta and Microsoft purchase them. Because the Instinct MI325X is typically sold bundled with other server components, the company did not disclose its price.

Accelerating Product Timelines to Challenge Nvidia

To better compete with Nvidia and capitalize on the surge in demand for AI chips, AMD is accelerating its product schedule with the debut of the MI325X and plans to deliver new chips on an annual timetable. The new AI chip succeeds the MI300X, which was released at the end of last year. AMD said its 2025 and 2026 chips will be named the MI350 and MI400, respectively.

The release of the MI325X will put it in competition with Nvidia's forthcoming Blackwell processors, which the graphics card manufacturer has said will begin shipping in large quantities early in 2025.

Investor Interest Shifts as Competition Heats Up

If AMD's next data center GPU has a strong launch, investors seeking more beneficiaries of the AI boom may take notice. Nvidia's stock has risen more than 175% in 2024, while AMD's has gained only about 20%. Most analysts agree that Nvidia controls over 90% of the market for data center AI chips.

AMD shares fell 4% on Thursday, while Nvidia's stock rose about 1%.

ROCm Software as AMD’s Answer to Nvidia’s CUDA

AMD faces a significant challenge in capturing market share because its competitor's chips are programmed with Nvidia's proprietary language, CUDA, which has become the de facto standard for AI developers. That effectively locks programmers into the tools and resources Nvidia provides.

As a countermeasure, AMD said this week that it has been improving its rival software, ROCm, to make it easier for AI developers to port more of their AI models to AMD's accelerators.

AI Performance Gains with AMD’s Superior Memory

AMD has positioned its artificial intelligence (AI) accelerators to compete best in workloads where AI models are generating predictions or content (inference) rather than being trained on gigabytes of data. The company attributes this to its chips' superior memory, which it says makes them faster than certain Nvidia chips at serving models such as Meta's Llama.

According to Su, the MI325X platform can deliver inference performance up to 40% better than Nvidia's H200 when running Meta's large language AI model, Llama 3.1.
