Elon Musk’s xAI team installed 100,000 Nvidia H200 GPUs in only 19 days, a process that Nvidia CEO Jensen Huang says normally stretches to four years for a data center project.
Musk's xAI Defies GPU Setup Timelines
In an astounding 19 days, Elon Musk and the xAI team pulled off an engineering feat: standing up a supercluster of 100,000 Nvidia H200 GPUs. In a clip shared on X by Tesla Owners Silicon Valley, Nvidia CEO Jensen Huang recounted the story of Musk's remarkable installation effort.

Clearly awed, Huang characterized Musk's 19-day sprint as "superhuman." Reportedly, the xAI team went from concept to fully installed Nvidia hardware in about three weeks, a window that also included training xAI's first model on the freshly constructed supercluster.
Building the Supercluster: From Factory to Functionality
The entire procedure began with the construction of the enormous factory building that would house the GPUs. It continued with the installation of power and liquid cooling systems across the entire facility, ensuring that every GPU could function properly.
Per Tom's Hardware, getting the hardware and infrastructure delivered and installed in such a precise, coordinated manner required a great deal of cooperation between Nvidia's and xAI's engineering teams.
Jensen Praises Musk's Speed and Precision
According to Huang, a typical data center would need four years to do what Elon Musk and his team accomplished in just nineteen days. The first three years would be devoted to planning, while the last year would be occupied with shipping, installing, and getting everything up and running.
Additionally, Huang went into detail about the intricate nature of networking Nvidia's hardware, which, he says, is nothing like networking regular servers in a data center: "The number of wires that goes in one node...the back of a computer is all wires."
Four Years vs. 19 Days: How xAI Beat the Odds
No other firm is likely to replicate Musk's integration of 100,000 H200 GPUs for quite some time; as Huang put it, it has "never been done before."