The emergence of DeepSeek, a Chinese AI chatbot, has disrupted the global AI landscape, unsettling Western tech companies that previously held a perceived lead in AI development. DeepSeek’s competitive edge stems from its remarkably low development cost of approximately £4.8 million, a stark contrast to the estimated £80 million or more invested in OpenAI’s ChatGPT. The disparity sent shockwaves through the tech industry, contributing to a loss of nearly £500 billion in Nvidia’s market value, a record single-day drop for Wall Street. DeepSeek’s launch as a free app in the US coincided with Donald Trump’s presidential inauguration and quickly garnered attention, prompting rivals such as Meta to investigate its cost-effective development strategy. While DeepSeek’s founder, Liang Wenfeng, was initially dismissed by established entrepreneurs, his success has elevated him to national hero status in China.

DeepSeek’s competitive pricing and performance have spurred responses from industry leaders. OpenAI CEO Sam Altman acknowledged the challenge while expressing confidence in his company’s ability to deliver superior models, and OpenAI subsequently expedited the release of a version of ChatGPT for US government services. President Trump called DeepSeek’s emergence a “wake-up call” for the American AI industry, one that could motivate companies to pursue more cost-effective development. Experts suggest that this increased efficiency could also address concerns about the environmental impact of the large data centers required for AI development, particularly their water and energy consumption.

Despite the initial enthusiasm, questions arose over DeepSeek’s claims about its development costs and its reliance on older Nvidia chips. Given the model’s performance, speculation emerged that DeepSeek might have used more advanced chips than it initially disclosed. That speculation was fueled by comments from industry figures such as Scale AI CEO Alexandr Wang, who alluded to DeepSeek’s potential access to Nvidia’s high-end H100 chips, and by Elon Musk’s one-word reply, “Obviously,” which added to the debate. Nvidia’s stock market plunge also raised questions about DeepSeek’s origins and the possibility that its parent hedge fund profited from bets against Nvidia’s share price.

Beyond the technical and financial aspects, concerns surfaced about DeepSeek’s data handling practices and potential ties to the Chinese government. Liang’s background as a businessman who leveraged AI for successful stock trading, together with his hedge fund’s substantial worth, added to the intrigue. Critics, including Luke de Pulford of the Inter-Parliamentary Alliance on China, voiced concerns about DeepSeek’s collection of user data, including IP addresses, keystroke patterns, and device information, and about the storage of that data in China, where it might be vulnerable to government access. Experts pointed to the likelihood of data sharing with the Chinese state, particularly given the chatbot’s adherence to China’s strict censorship laws.

DeepSeek’s privacy policy explicitly states that data is stored on servers within China. Tests conducted by journalists revealed the chatbot’s avoidance of sensitive topics related to China, including Tiananmen Square, Taiwan, President Xi Jinping, and forced labor. This reinforced concerns about the chatbot’s potential role in disseminating state-approved narratives and suppressing dissenting views. Australian government officials, drawing parallels to concerns about TikTok, urged caution before downloading the app.

Independent testing of DeepSeek highlighted its susceptibility to bias and censorship, particularly on topics sensitive to the Chinese government. Its responses on Taiwan’s independence, criticism of Xi Jinping, the Tiananmen Square incident, and China’s Olympic history all reflected a pro-China stance. Furthermore, DeepSeek’s denial that it collects user data contradicted its own privacy policy, raising further concerns about transparency and data security. The chatbot’s unequivocal affirmation that the Chinese Communist Party can be trusted added another layer of concern about its perceived biases. These findings underscore the potential risks of using AI tools developed under authoritarian regimes, where information control and censorship are prevalent.

© 2025 Tribune Times. All rights reserved.