OpenAI Boosts AI Power With Early NVIDIA DGX B200 System
B200 cards, which use NVIDIA's Blackwell architecture, are selling like hotcakes.
In a significant leap for artificial intelligence, OpenAI has obtained one of the first NVIDIA Blackwell DGX B200 systems. The cutting-edge GPUs are poised to accelerate the training and performance of OpenAI's advanced AI models.
The B200 is NVIDIA's fastest data center GPU to date, and orders have begun to roll in from a number of multinational corporations. According to NVIDIA, OpenAI was set to be among the first to use the B200 GPUs, and the company appears to be aiming to boost its AI computing capabilities by taking advantage of the B200's groundbreaking performance.
OpenAI Showcases NVIDIA's Blackwell System for AI Innovation
Earlier today, OpenAI's official X handle shared a photo of its staff with an early DGX B200 engineering sample. Now that the platform has arrived at its offices, the company is ready to put the B200 to the test and begin training its formidable AI models.
The DGX B200 is an all-in-one AI platform that uses the forthcoming Blackwell B200 GPUs for training, fine-tuning, and inference. With eight B200 GPUs per system, each DGX B200 offers up to 1.4 TB of GPU memory and up to 64 TB/s of HBM3E memory bandwidth.
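As a rough illustration, those system-level figures can be split across the eight GPUs; the short Python sketch below is a back-of-envelope calculation that assumes the 1.4 TB and 64 TB/s numbers are simple aggregates of the eight B200 cards.

# Back-of-envelope split of the quoted DGX B200 totals across its eight GPUs.
# Assumption: the 1.4 TB and 64 TB/s figures are simple per-system aggregates.
NUM_GPUS = 8
TOTAL_MEMORY_TB = 1.4        # HBM3E capacity per DGX B200 (quoted above)
TOTAL_BANDWIDTH_TBS = 64.0   # HBM3E bandwidth per DGX B200 (quoted above)

per_gpu_memory_gb = TOTAL_MEMORY_TB * 1000 / NUM_GPUS     # ~175 GB per B200
per_gpu_bandwidth_tbs = TOTAL_BANDWIDTH_TBS / NUM_GPUS    # 8 TB/s per B200

print(f"~{per_gpu_memory_gb:.0f} GB HBM3E and ~{per_gpu_bandwidth_tbs:.0f} TB/s per GPU")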
Blackwell GPUs Power Major Industry Players' AI Ambitions
According to NVIDIA, the DGX B200 delivers remarkable performance for AI models, with up to 72 petaFLOPS of training performance and up to 144 petaFLOPS of inference performance.
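For a per-GPU ballpark, those headline throughput figures can be divided across the eight B200s in the same way; the sketch below is a simple split that ignores the different numeric precisions at which such training and inference figures are typically quoted.

# Per-GPU share of the quoted DGX B200 system throughput (simple division by 8).
NUM_GPUS = 8
TRAINING_PFLOPS = 72.0     # system-level training throughput (quoted above)
INFERENCE_PFLOPS = 144.0   # system-level inference throughput (quoted above)

print(f"~{TRAINING_PFLOPS / NUM_GPUS:.0f} petaFLOPS training per GPU")    # ~9
print(f"~{INFERENCE_PFLOPS / NUM_GPUS:.0f} petaFLOPS inference per GPU")  # ~18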
Blackwell GPUs have long piqued OpenAI's curiosity, and CEO Sam Altman has even hinted at the possibility of using them to train the company's AI models.
Global Tech Giants Jump on the Blackwell Bandwagon
With so many industry heavyweights already opting to use Blackwell GPUs to train their AI models, the firm certainly won't be left out. Amazon, Google, Meta, Microsoft, Tesla, xAI, and Dell Technologies are all part of this pack.
WCCFTECH has previously reported that xAI intends to deploy 50,000 B200 GPUs in addition to the 100,000 H100 GPUs it already has in use. Foxconn has also announced that it will build Taiwan's fastest supercomputer using the B200 GPUs.
NVIDIA B200 Outshines Previous Generations in Power Efficiency
Compared to NVIDIA's Hopper GPUs, Blackwell is both more powerful and more power efficient, making it an ideal choice for training OpenAI's AI models.
According to NVIDIA, the DGX B200 can handle LLMs, chatbots, and recommender systems, delivering three times the training performance and fifteen times the inference performance of earlier generations.

