DeepSeek Model ‘Nearly 100% Successful’ at Avoiding Controversial Topics

Meet the new DeepSeek, now with more government compliance. According to a report from Reuters, the popular large language model developed in China has a new version called DeepSeek-R1-Safe, specifically designed to avoid politically controversial topics. Developed by Chinese tech giant Huawei, the new model is reportedly “nearly 100% successful” at preventing discussion of politically sensitive matters.

According to the report, Huawei and researchers at Zhejiang University (interestingly, DeepSeek itself was not involved in the project) took the open-source DeepSeek R1 model and trained it on 1,000 Huawei Ascend AI chips to give the model less of a stomach for controversial conversations. The new version, which Huawei claims retains all but about 1% of the original model’s speed and capability, is better equipped to dodge “toxic and harmful speech, politically sensitive content, and incitement to illegal activities.”

While the model might be safer, it’s still not foolproof. Though the company claims a near-100% success rate in basic usage, it also found that the model’s ability to duck questionable conversations drops to just 40% when users disguise their intentions in challenges or role-playing scenarios. These AI models, they just love to play out a hypothetical scenario that allows them to defy their guardrails.

DeepSeek-R1-Safe was designed to fall in line with the requirements of Chinese regulators, per Reuters, which mandate that all domestic AI models released to the public reflect the country’s values and comply with speech restrictions. Chinese firm Baidu’s chatbot Ernie, for instance, reportedly will not answer questions about China’s domestic politics or the ruling Chinese Communist Party.

China, of course, isn’t the only country looking to ensure AI deployed within its borders doesn’t rock the boat too much. Earlier this year, Saudi Arabian tech firm Humain launched an Arabic-native chatbot that is fluent in the Arabic language and trained to reflect “Islamic culture, values and heritage.” American-made models aren’t immune to this, either: OpenAI explicitly states that ChatGPT is “skewed towards Western views.”

And then there’s America under the Trump administration. Earlier this year, Trump announced America’s AI Action Plan, which includes requirements that any AI model that interacts with government agencies be neutral and “unbiased.” What does that mean, exactly? Well, per an executive order signed by Trump, the models that secure government contracts must reject things like “radical climate dogma,” “diversity, equity, and inclusion,” and concepts like “critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism.” So, you know, before lobbing any “Dear Leader” cracks at China, it’s probably best we take a look in the mirror.
