DeepSeek’s artificial intelligence (AI) models are giving Chinese chipmakers like Huawei a competitive edge against U.S. processors, particularly in AI inference tasks. Whereas Nvidia’s most powerful chips are built for training AI models, DeepSeek’s models are optimized for computational efficiency, making them a better fit for Chinese-made AI chips.
Huawei, along with chipmakers Hygon, EnFlame, Tsingmicro, and Moore Threads, has announced support for DeepSeek models, though details remain scarce. Analysts suggest that DeepSeek’s open-source nature and affordability could drive AI adoption in China, helping firms navigate U.S. export restrictions on advanced chips.
Inference, the stage in which a trained model produces answers from new inputs, relies less on raw processing power than training does. Huawei’s Ascend 910B chip, already favored by companies such as ByteDance for inference tasks, reinforces this shift. Industry experts note that while Nvidia dominates AI training, inference workloads give Chinese chipmakers more room to compete.
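The compute gap between the two stages can be seen in a minimal PyTorch sketch (the tiny linear model below is purely illustrative, not any DeepSeek architecture): a training step needs a forward pass, a backward pass to compute gradients, and a weight update, while an inference step is a single forward pass with no gradients stored.

```python
import torch
import torch.nn as nn

# Toy model standing in for a far larger language model.
model = nn.Linear(1024, 1024)
batch = torch.randn(32, 1024)
target = torch.randn(32, 1024)

# Training step: forward pass, backward pass, and a weight update.
# Storing gradients and optimizer state adds memory and a full extra
# compute pass on top of the forward computation.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss = nn.functional.mse_loss(model(batch), target)
loss.backward()
optimizer.step()
optimizer.zero_grad()

# Inference step: forward pass only, no gradients kept.
# This is the lighter workload where less powerful chips can compete.
with torch.no_grad():
    prediction = model(batch)
```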
Despite DeepSeek’s advantages, Nvidia remains a key player. The company still supplies less powerful training chips to China, which can also be used for inference. Additionally, Nvidia’s CUDA platform, the software layer most AI developers build on, maintains a significant lead. While Huawei has introduced a CUDA alternative called Compute Architecture for Neural Networks (CANN), convincing developers to switch remains challenging.
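The lock-in is less about the code developers write than the ecosystem beneath it. A rough sketch of what a port might look like, assuming Huawei’s torch_npu adapter for PyTorch and its "npu" device type (the commented lines are an assumption about that setup, not verified against any specific CANN release):

```python
import torch
import torch.nn as nn

# Typical CUDA-centric code: the device string is the only visible tie to
# Nvidia hardware, but the kernels, profilers, and tuned libraries behind
# torch.cuda are the CUDA ecosystem developers would have to leave behind.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Hypothetical port to Huawei Ascend hardware (assumption: the torch_npu
# adapter is installed and exposes an "npu" device type):
#   import torch_npu
#   device = torch.device("npu")

model = nn.Linear(1024, 1024).to(device)
output = model(torch.randn(8, 1024, device=device))
```

Changing the device string is trivial; replacing years of CUDA-tuned kernels, debugging tools, and community knowledge is the harder part, which is why CANN faces an uphill climb with developers.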
As China integrates DeepSeek into various industries, the country’s AI chipmakers gain momentum in the domestic market. However, Nvidia’s influence persists due to its superior software ecosystem and hardware efficiency.