Microsoft AI Researchers Accidentally Expose 38 Terabytes of Confidential Data

Microsoft announced on Monday that it has taken steps to remediate a serious security lapse that exposed 38 terabytes of sensitive data.

The leak was found in a GitHub repository belonging to the company's AI research division, and the data is believed to have become public accidentally when a collection of open-source training data was published, according to cloud security firm Wiz. The exposed storage also included a disk backup of two employees' workstations, which contained secrets, keys, passwords, and more than 30,000 internal Teams messages.

The repository, known as "robust-models-transfer," is no longer accessible. It had provided source code and machine learning models.

