Introduction
A wave of artificial intelligence (AI) chatbots available for public use in mainland China lets users create new content – including audio, code, images, simulations, videos, and grammatically correct text – for entertainment and everyday tasks.
Demand for such tools has driven the local development of more than 200 large language models (LLMs), the technology underpinning generative AI (GenAI) services such as ChatGPT. LLMs are deep-learning algorithms that can recognize, summarize, translate, predict, and generate content using very large data sets.
AI’s Math Struggles
Despite the resources behind these chatbots, AI models have repeatedly been shown to struggle with basic math. The shortcoming was highlighted on the Chinese reality show Singer 2024, a singing competition produced by Hunan Television.
Mainland artist Sun Nan received 13.8 percent of online votes to edge out US singer Chanté Moore, who received 13.11 percent. Some local netizens poked fun at the ranking, jokingly claiming that the latter number was the larger one. Ask AI, one commenter suggested. The results were mixed.
AI Responses
Both Moonshot AI’s chatbot Kimi and Baichuan’s Baixiaoying initially gave the wrong answer. They corrected themselves and apologized after the user who made the query adopted a so-called chain-of-thought approach – a reasoning method in which an AI application is guided through a problem step by step.
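To illustrate the approach, the Python sketch below contrasts a direct prompt with a chain-of-thought prompt; the wording is hypothetical, as the article does not reproduce the commenter’s actual query.

    # Illustrative only: the article does not quote the exact prompts,
    # so both strings below are hypothetical.
    direct_prompt = "Which is bigger, 9.9 or 9.11?"

    chain_of_thought_prompt = (
        "Which is bigger, 9.9 or 9.11? Think step by step: "
        "1) pad both numbers to the same number of decimal places, "
        "2) compare the integer parts, "
        "3) compare the fractional parts, "
        "4) only then state the answer."
    )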
Alibaba Group Holding’s Qwen LLM used a Python code interpreter to calculate the answer. Alibaba owns the South China Morning Post. Baidu’s Ernie Bot took six steps to reach the correct answer, while ByteDance’s Doubao LLM generated a direct response with an example: “If you have US$9.90 and US$9.11, clearly US$9.90 is more money.”
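The article does not show the code Qwen generated, but a minimal sketch of the kind of check a code interpreter can run might look like this; Python compares numeric values rather than digit strings, so it avoids the trap.

    # A minimal sketch of the kind of comparison a code interpreter can run.
    # Python compares numeric values, not digit strings.
    pairs = [(9.9, 9.11), (13.8, 13.11)]
    for a, b in pairs:
        bigger = a if a > b else b
        print(f"{a} vs {b}: {bigger} is bigger")
    # 9.9 vs 9.11: 9.9 is bigger
    # 13.8 vs 13.11: 13.8 is bigger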
Expert Opinions
“LLMs are bad at math – it’s very common,” said Wu Yiquan, a computer science researcher at Zhejiang University in Hangzhou.
GenAI does not inherently possess mathematical capabilities and can only predict answers based on training data, according to Wu. He said some LLMs perform well on math tests possibly because of “data contamination,” which means that the algorithm memorized the answers because similar questions were already in its training data.
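One rough way to probe for contamination is to check whether a test question already appears, verbatim or nearly so, in training text; the toy corpus and question below are hypothetical and only sketch the idea.

    # Toy contamination check: does the test question appear verbatim
    # in training text? The corpus and question here are hypothetical.
    training_corpus = [
        "Q: Which is bigger, 9.9 or 9.11? A: 9.9",
        "The capital of France is Paris.",
    ]
    test_question = "Which is bigger, 9.9 or 9.11?"

    contaminated = any(test_question.lower() in doc.lower() for doc in training_corpus)
    print("possible contamination:", contaminated)  # prints: possible contamination: True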
“The world of AI is tokenized – numbers, words, punctuation, and spaces are all treated the same,” Wu said. “Therefore, any change in the prompt can affect the result significantly.”
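As a rough illustration of what Wu describes, the sketch below prints the token pieces a tokenizer produces for the two numbers. It uses the open-source tiktoken package as an example tokenizer, which is an assumption of convenience rather than the tokenizer any of the chatbots above actually use.

    # Numbers are split into text tokens, not parsed as numeric values.
    # tiktoken is used here only as an example tokenizer (assumption);
    # it is not the tokenizer used by the chatbots named in this article.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    for text in ["9.9", "9.11"]:
        pieces = [enc.decode([token_id]) for token_id in enc.encode(text)]
        print(text, "->", pieces)
    # Whatever the split, nothing in these token sequences encodes the fact
    # that 9.11 is numerically smaller than 9.9.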
Conclusion
The math stumble is a reminder that AI technology is still evolving, not only on the mainland but around the world.
“The vast majority of experts believe the timing to craft unified national AI legislation may not yet be right since the technology is evolving so rapidly,” Zheng said.
The “number comparison” test for AI models went viral after Bill Yuchen Lin, a researcher at the Allen Institute, and Riley Goodside, a prompt engineer at tech firm Scale AI, highlighted the technology’s basic math shortcomings on the social media platform X.
When asked which number was bigger, 9.9 or 9.11, advanced LLMs such as OpenAI’s GPT-4o, Anthropic’s Claude 3.5 Sonnet, and models from Mistral AI answered 9.11.
In a post on X, Goodside said he did not intend to undermine LLMs but aimed to help understand and fix their failures.
“Previously well-known issues in LLMs (e.g., bad math) are now mitigated so well the remaining errors are newly shocking to users – any reduction in frequency is also a delayed increase in severity,” he wrote. “We should be ready for this to keep happening across many task domains.”