Microsoft’s AI researchers accidentally leaked 38 terabytes of data, including private keys, passwords, and internal messages

sophia · 7116 · 20 Sep, 2023

Description:

In a major gaffe, Microsoft’s AI researchers inadvertently exposed 38 terabytes of company and personal data. The incident occurred when the AI team was uploading training data to enable other researchers to train AI models for image recognition. The exposure included sensitive information such as “secrets, private keys, passwords, and over 30,000 internal Microsoft Teams messages,” as first detected by the cloud security platform Wiz.

In its official report addressing the breach, Microsoft emphasized that no customer data was compromised and no other internal services were put in jeopardy, and reassured customers that no specific action was required on their part.

The link that led to the exposure was created using an Azure feature known as “SAS tokens,” which allows users to generate shareable links to storage data. Wiz first identified the unauthorized access on June 22 and promptly alerted Microsoft, which revoked the token the following day. Microsoft says it has resolved the issue and has adjusted its SAS tokens to be less permissive than before.

Microsoft clarified that the exposed information was specific to two former Microsoft employees and their workstations. It reiterated that customer data remained secure and no other Microsoft services were compromised, emphasized the importance of handling SAS tokens appropriately, and encouraged customers to follow its best practices to minimize the risk of unintended access or abuse.

Wiz warned that these types of errors may become more prevalent as AI is increasingly utilized and trained. This case illustrates the new challenges organizations face when harnessing the power of AI at scale: with engineers handling vast amounts of training data, additional security checks and safeguards are essential to protect sensitive information as data scientists and engineers work to deploy new AI solutions.
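A SAS token is essentially a signed query string appended to a URL, granting scoped, time-limited access to a storage resource; the leak happened because the token granted far broader access than intended. The sketch below illustrates the general idea in plain Python (stdlib only): a shareable link carrying a permission field and an expiry, authenticated with an HMAC signature. This is not the real Azure SAS format, and `make_shared_link` / `verify_shared_link` are hypothetical names used purely for illustration.

```python
import base64
import hashlib
import hmac
import time
from urllib.parse import parse_qs, urlencode, urlparse

SECRET_KEY = b"account-key-demo"  # hypothetical storage account key

def make_shared_link(base_url: str, path: str, permissions: str, ttl_seconds: int) -> str:
    """Build a shareable URL with scoped permissions and an expiry,
    signed with HMAC-SHA256 (illustrative only, not the Azure SAS scheme)."""
    expiry = int(time.time()) + ttl_seconds
    to_sign = f"{path}\n{permissions}\n{expiry}".encode()
    sig = base64.urlsafe_b64encode(
        hmac.new(SECRET_KEY, to_sign, hashlib.sha256).digest()
    ).decode()
    query = urlencode({"sp": permissions, "se": expiry, "sig": sig})
    return f"{base_url}{path}?{query}"

def verify_shared_link(url: str, required_permission: str) -> bool:
    """Check the signature, the expiry, and that the link grants the permission."""
    parsed = urlparse(url)
    q = {k: v[0] for k, v in parse_qs(parsed.query).items()}
    to_sign = f"{parsed.path}\n{q['sp']}\n{q['se']}".encode()
    expected = base64.urlsafe_b64encode(
        hmac.new(SECRET_KEY, to_sign, hashlib.sha256).digest()
    ).decode()
    if not hmac.compare_digest(expected, q["sig"]):
        return False                       # tampered with, or signed by another key
    if int(q["se"]) < time.time():
        return False                       # link has expired
    return required_permission in q["sp"]  # e.g. "r" = read, "w" = write

# A read-only link that expires in one hour:
link = make_shared_link("https://example.blob.core.windows.net",
                        "/models/data.bin", "r", 3600)
print(verify_shared_link(link, "r"))  # True: valid read link
print(verify_shared_link(link, "w"))  # False: write was never granted
```

Scoping permissions narrowly and setting a short expiry is exactly the kind of “less permissive” configuration Microsoft says it has since applied to its SAS tokens.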

Comments

  • abigailn abi

It’s too bad for Microsoft.

    Reply | 20 Sep, 2023
  • sophia sophia

What’s your opinion on Microsoft’s AI research?

    Reply | 20 Sep, 2023

