Microsoft Publicly Announces Internal Ban on DeepSeek
According to TechCrunch, Microsoft Vice Chairman and President Brad Smith stated during a U.S. Senate hearing today that the company prohibits employees from using the DeepSeek application, citing data security concerns and the risk of propaganda from the Chinese government. Smith remarked: “At Microsoft, we do not allow employees to use the DeepSeek application,” apparently referring to DeepSeek’s app on desktop and mobile platforms. He added that, for the same reasons, Microsoft has not made DeepSeek available in its own app store. Although many organizations and even entire countries have restricted DeepSeek, this is the first time Microsoft has publicly announced such a ban.
Concerns Over Sensitive Data Leaks to China

Smith emphasized that the restriction stems primarily from concerns over data leaks and content manipulation. Under DeepSeek’s privacy policy, user data is stored on servers in China and is subject to Chinese law, which means Chinese intelligence agencies can legally access it. DeepSeek also censors content on sensitive topics in accordance with the Chinese government’s speech policies.
Microsoft Cloud Platform Still Offers DeepSeek R1 Model

Despite Smith’s critical stance on DeepSeek, Microsoft listed the DeepSeek R1 model on its Azure cloud platform earlier this year for developers to use. This is different from offering the full DeepSeek chatbot application: what Microsoft provides is only DeepSeek’s language model, which is open source, so anyone can download the weights and deploy them locally, avoiding any data transmission to servers in China. Critics argue, however, that this does not eliminate other risks, such as the model carrying propaganda content or generating unsafe code. In response, Smith revealed that Microsoft had modified the DeepSeek model internally before listing it in order to remove harmful behavior, though he did not provide details on the changes. According to Microsoft’s statement when it listed the model on Azure, DeepSeek R1 had passed rigorous red-team testing and security assessments.
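To illustrate the local-deployment point above: because the R1 weights are openly published, inference can run entirely on-premises, so prompts and outputs never reach DeepSeek’s servers. The following is a minimal sketch using the Hugging Face Transformers library; the checkpoint name is an illustrative distilled R1 variant, not necessarily the one Microsoft hosts on Azure, and should be replaced with whichever weights and hardware setup you actually use.

```python
# Minimal sketch: running a DeepSeek R1 variant locally with Hugging Face Transformers,
# so prompts and outputs stay on the local machine (no calls to DeepSeek's servers).
# The model ID below is an illustrative distilled R1 checkpoint; substitute your own.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # illustrative checkpoint name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Summarize why on-premises inference keeps user data local."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```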