China’s Alternative Approach to AI Development
China must find an alternative approach to artificial intelligence (AI) development, as US sanctions continue to limit access to advanced semiconductors and chip-making equipment. According to industry experts, leveraging supercomputing technology developed over the past decade could help break the stranglehold of US-led restrictions on the mainland’s AI industry.
Importance of Supercomputing for LLMs
Supercomputing systems designed for training large language models (LLMs) could replace the power-hungry data-centre computing clusters currently used for such training, which typically employ 10,000 to 100,000 graphics processing units (GPUs), as noted by Zhang Yunquan, a researcher at the Institute of Computing Technology under the Chinese Academy of Sciences (CAS).
Urgency for Technological Self-Sufficiency
China’s quest to establish a viable computing platform for LLMs highlights the urgency of becoming technologically self-sufficient. AI progress in China is currently hindered by limited GPU choices, as US sanctions have prevented top GPU firm Nvidia from supplying its most advanced chips to the country.
Building LLMs Efficiently
Nvidia is reportedly working on a version of its new flagship AI chips for the China market that complies with current US export controls. CAS academician Chen Runsheng emphasized that building LLMs requires more than simply adding chips; such models must also lower energy consumption while improving efficiency.
The Role of Coal Power
China, the biggest data centre market in the Asia-Pacific, relies heavily on coal power, generating nearly two-thirds of its electricity from coal last year.
Fundamental Research for Innovation
Chen called for fundamental research in intelligent computing of LLMs, combined with high-performance computing (HPC) technology, to achieve breakthroughs. HPC refers to the ability to process data and perform complex calculations at high speeds.
Challenges in LLM Development
The LLMs developed in China are largely based on US models and algorithms, with insufficient attention paid to fundamental theory. Chen stated, “If we can make progress in fundamental theory, we will achieve groundbreaking innovation.”
Chinese Companies and GPU Clusters
Chinese companies are currently building computing clusters for training LLMs that include home-grown chips from Moore Threads Technology. Big Tech firms such as Tencent Holdings are also optimizing their infrastructure to enhance AI training efficiency, with Tencent’s Xingmai HPC network supporting clusters of over 100,000 GPUs.