Microsoft moves to disrupt hacking-as-a-service scheme that’s bypassing AI safety measures
Microsoft is petitioning a Virginia court to seize software and shut down internet infrastructure that it alleges is being used by a group of foreign cybercriminals to bypass safety guidelines for generative AI systems.
In a filing with the U.S. District Court for the Eastern District of Virginia, Microsoft brought a lawsuit against ten individuals for using stolen credentials and custom software to break into computers running Microsoft’s Azure OpenAI Service to generate “harmful content.”
In a complaint filed Dec. 10, 2024, the company accuses the group of violating the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, the Lanham Act and the Racketeer Influenced and Corrupt Organizations Act, as well as trespass to chattels and tortious interference under Virginia state law.
Microsoft claims the defendants used stolen API keys to gain access to devices and accounts tied to Microsoft’s Azure OpenAI Service, which they then used to generate “thousands” of images that violated safety protocols in place to prevent misuse. The activity was first discovered in July 2024 and continued through August 2024, and Microsoft said some of the stolen API keys belonged to U.S. companies located in Pennsylvania and New Jersey.
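The complaint does not reproduce the defendants’ tooling, but a rough sketch illustrates why a bare API key is sufficient: Azure OpenAI’s REST endpoints authenticate image-generation requests with a single `api-key` header, so anyone holding a valid key can submit prompts against the victim’s deployment. The resource and deployment names below are hypothetical placeholders.

```python
# Minimal sketch of an Azure OpenAI image-generation request. The only
# credential involved is the api-key header, which is why a stolen key
# alone grants access to a victim's deployment.
import requests

RESOURCE = "example-resource"   # hypothetical Azure resource name
DEPLOYMENT = "dall-e-3"         # hypothetical deployment name
API_KEY = "..."                 # the credential at issue: a bare API key

url = (
    f"https://{RESOURCE}.openai.azure.com/openai/deployments/"
    f"{DEPLOYMENT}/images/generations?api-version=2024-02-01"
)
resp = requests.post(
    url,
    headers={"api-key": API_KEY, "Content-Type": "application/json"},
    json={"prompt": "a watercolor of a lighthouse", "n": 1, "size": "1024x1024"},
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["data"][0]["url"])  # URL of the generated image
```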
According to Microsoft, the defendants used a software tool that gave them insight into Microsoft’s and OpenAI’s content-filtering systems, letting them identify the specific phrases flagged as safety violations and reverse-engineer language that would circumvent those restrictions. The software also allowed users to strip from generated media the metadata used to digitally watermark and identify AI-generated content.
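To make the metadata-stripping claim concrete: provenance schemes such as C2PA store their watermark information in the image file’s container metadata, so simply re-encoding the raw pixels into a fresh file leaves that information behind. The sketch below, using the Pillow library, illustrates the general class of technique only; it is not the defendants’ actual tool.

```python
# Illustrative only: re-encoding an image's pixel data discards
# file-level metadata (EXIF blocks, C2PA provenance manifests, etc.).
# This shows the generic technique, not the defendants' software.
from PIL import Image

def strip_file_metadata(src_path: str, dst_path: str) -> None:
    with Image.open(src_path) as im:
        pixels = im.convert("RGBA")
        # Copy only the raw pixel values; any metadata chunks in the
        # original container are simply never written to the new file.
        clean = Image.new("RGBA", pixels.size)
        clean.putdata(list(pixels.getdata()))
        clean.save(dst_path)

strip_file_metadata("generated.png", "scrubbed.png")
```

Note that watermarks embedded in the pixel values themselves would survive this kind of re-encoding; the complaint describes stripping of metadata specifically.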
In a blog post describing the action, Steven Masada, assistant general counsel for Microsoft’s Digital Crimes Unit, wrote that the court order and seizure “will allow us to gather crucial evidence about the individuals behind these operations, decipher how these services are monetized, and disrupt additional technical infrastructure we find.”
A Microsoft spokesperson confirmed to CyberScoop that the company has obtained a temporary restraining order that allows for the seizure of the domain listed in the complaint.
“The seizure of this domain enables [Microsoft] to redirect communications occurring on the malicious domain to our [Digital Crimes Unit] sinkhole, making it available for the investigative team’s analysis,” the spokesperson said. “In addition to the initial seizure, we also secured expedited discovery to further our investigation as well as preserve evidence at the locations we identified as hosting part of the ‘infrastructure’ used by the defendants.”
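A sinkhole, in this context, is investigator-controlled infrastructure that the seized domain is repointed to: traffic meant for the criminal service instead arrives at a listener that logs it for analysis. The following is a purely conceptual sketch of that idea and reflects nothing about Microsoft’s actual DCU tooling.

```python
# Conceptual sketch of a sinkhole listener: once DNS for a seized domain
# points here, inbound requests are logged for analysis instead of
# reaching the original malicious service. Not Microsoft's tooling.
import logging
from http.server import BaseHTTPRequestHandler, HTTPServer

logging.basicConfig(filename="sinkhole.log", level=logging.INFO)

class SinkholeHandler(BaseHTTPRequestHandler):
    def _log_and_reply(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length) if length else b""
        logging.info("%s %s from %s headers=%s body=%r",
                     self.command, self.path, self.client_address[0],
                     dict(self.headers), body)
        self.send_response(404)  # give callers nothing useful back
        self.end_headers()

    do_GET = _log_and_reply
    do_POST = _log_and_reply

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 80), SinkholeHandler).serve_forever()
```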
Microsoft uses OpenAI’s DALL-E image generator, and both OpenAI and Microsoft’s Azure OpenAI Service implement protocols and guardrails to prevent the creation of violent or hateful images, or photorealistic depictions of real faces or public figures. Neither the blog post nor the complaint specifies what the generated images depicted.
Microsoft does not know the identities of the ten individuals, instead identifying them through specific websites, stolen Azure API keys and GitHub tools used in the scheme. The complaint claims that at least three are the providers of these services and live outside the United States, while the others appear to be end users.
The individuals are accused of setting up a hacking-as-a-service scheme, systematically stealing API keys from customers with access to Microsoft generative AI systems and selling that access to interested parties over the internet.
Generative AI systems like those offered by Microsoft and OpenAI are at the forefront of a battle between companies providing sophisticated text and image generation capabilities and malicious actors looking to exploit them.
Cybercriminals, scammers and foreign intelligence services have all sought to use generative AI tools to enhance or complement hacking campaigns and generate false media for disinformation campaigns.
AI companies have sought to allay concerns around potential abuse of their technologies by implementing numerous technical guardrails and joining international agreements. Still, outside researchers have identified numerous methods capable of bypassing many of these restrictions, including simple prompting techniques.
However, there are some signs that the protections put in place by commercial U.S. companies are having some success in keeping foreign actors from fully leveraging the technology.
Last year, the Office of the Director of National Intelligence said that foreign countries like Russia, China and Iran all used generative AI tools to create fake media designed to influence the 2024 U.S. elections. But the effectiveness of those campaigns was hindered in part due to struggles by those countries “to overcome restrictions built into many AI tools and remain undetected,” according to a senior ODNI official.