AI can help defenders stop nation-state threat actors at machine speed – CyberScoop

Last year, the escalating concerns about Chinese threat actors breaching U.S. organizations reached a crescendo as federal authorities issued increasingly urgent advisories about China’s “Typhoon” groups infiltrating U.S. networks, pressing organizations to take immediate action.
The Cybersecurity and Infrastructure Security Agency (CISA) and the Federal Bureau of Investigation (FBI) warned that these groups were engaged in a host of massive intrusions, ranging from infiltrating telecommunications networks and sensitive law enforcement communication platforms to prepositioning themselves on critical infrastructure networks in order to destroy or disrupt services.
Since late January, however, the U.S. government has issued few alerts about Chinese or other nation-state advanced persistent threat (APT) actors, including those from Russia, North Korea, and Iran. Experts say that despite the lack of warnings, it is more important than ever to stay alert against these groups, particularly given that rapidly developing artificial intelligence (AI) technologies have enabled defenders to spot these threat actors at machine speed and stop them in their tracks.
“Your ability to respond quickly is really important,” Alex Stamos, CISO at SentinelOne, told CyberScoop. “You can’t spend fifteen, twenty minutes for your security operations center analyst to go to the bathroom and then come back and look at an alert and to make a decision because the threat actors are already ten steps ahead of you.”
“Chinese threat actors are going for very large-scale operations,” Alon Schindel, VP of AI and threat research at Wiz, told CyberScoop. “AI can empower cybersecurity teams to work faster and reduce the number of issues. You can reduce the remediation time. That’s the thing.”
AI brings it all together
Experts emphasize that AI’s real value in identifying and halting sophisticated threat actors lies in its capacity to process vast amounts of information across an organization’s tech surface. It can then correlate that data to identify and potentially thwart suspicious behavior swiftly.
“AI is there to augment your efforts by tying in a lot of the disparate context or the context that’s lacking between different siloed systems,” Cristian Rodriguez, Americas Field CTO at CrowdStrike, told CyberScoop. “We are firm believers that AI helps bridge that gap across disparate data sources so that contextually there’s a better understanding of the steps that an adversary needs to take to be successful in their tradecraft.”
“To help and try to understand whether it is a real attack or whether it is just some other activity, whether it’s a false positive alert by a security product, you can use the context that you have from your actual production environment, from your code, and the threat detection products,” Schindel said. “You can feed an LLM with all this information, and within a few seconds, you can get a conclusion with a high level of confidence, whether it is a real attack or whether it is just a false positive or maybe some ordinary activity in your environment.”
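The triage workflow Schindel describes — pulling context from the production environment, the code, and detection products into one place, then asking an LLM for a verdict — can be sketched as a prompt-assembly step. This is a minimal illustration, not any vendor's actual pipeline; every field name and the `llm_client` call are hypothetical.

```python
import json

def build_triage_prompt(alert, workload_context, recent_logs):
    """Assemble context from siloed systems into one prompt an LLM can classify.

    All schemas here are illustrative; real alert and log formats vary
    by security product.
    """
    return (
        "You are a SOC triage assistant. Given the alert and context below, "
        "answer with one of: TRUE_POSITIVE, FALSE_POSITIVE, BENIGN_ACTIVITY.\n\n"
        f"Alert: {json.dumps(alert)}\n"
        f"Workload context: {json.dumps(workload_context)}\n"
        f"Recent related logs: {json.dumps(recent_logs)}\n"
    )

# Hypothetical example: an alert that workload context suggests is benign.
alert = {"rule": "suspicious_process", "host": "web-01", "process": "curl"}
context = {"host_role": "ci-runner", "expected_tools": ["curl", "git"]}
logs = [{"ts": "2025-05-01T12:00:00Z", "event": "pipeline_start"}]

prompt = build_triage_prompt(alert, context, logs)
# The assembled prompt would then go to whatever model the team uses, e.g.:
# verdict = llm_client.complete(prompt)  # hypothetical client
```

The value is in the assembly step: the model only reaches a high-confidence conclusion in seconds because the disparate context has been joined for it.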
Before AI, defenders had massive amounts of information compiled in different locations with little ability to tie events together occurring in different log sources across the tech stack. The logs did not traditionally go into a repository “that allows for hyper scaling and hyper analysis of what those data points mean when they’re put together,” Stamos said.
The cloud nexus is critical
Most experts agree that the increasing adoption of cloud-based technologies is central to the problem of disparate data sources. As information moves between cloud and on-premises systems, it creates more avenues for threat actors to move laterally within an organization.
“Very few companies have visibility across their cloud infrastructure and their on-premise tech in a way where they see all of it at the same time and detect and track a threat actor in real time across all of those different environments,” Stamos said. “And very few companies can respond fast enough.”
According to Stamos, this lack of visibility specifically benefits Chinese threat actors, notably in the Microsoft-based systems that dominate the enterprise sector’s cloud, security, and operating systems. “What [Chinese threat actors] have gotten very good at is chaining vulnerabilities across those three areas,” he said. “For example, you can have a cloud entry point where they can brute force a username and password.”
“That’s something that’s not getting logged, not getting alerted on,” Stamos said. “And so, they can just brute force for days until they find a user password pair that works for them and then use that against the VPN tied to Microsoft Active Directory, and then get onto the domain controller. Now, they can do a traditional domain controller attack. That’s not something you can do in the cloud; that’s only local.”
The combination of cloud-based technologies and stolen identities is at the crux of where AI can start shedding light on intrusions in a way that genuinely helps defenders. “AI can start to bring context around what are outliers within things like login attempts,” CrowdStrike’s Rodriguez said.
“Using legitimate credentials to get into your environment in lieu of having to use malware, for example, which is very noisy,” is how most unauthorized intrusions occur, Rodriguez added. “AI can act as that opportunity for analysts to scale themselves across these large data sets to contextually understand outliers for login attempts and outliers for authorization across applications. Think of identity, think of what’s happening on your endpoints, and what happens in your cloud workloads. Those are all major data sources a defender must use when responding or analyzing an attack.”
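Because intrusions with legitimate credentials produce no malware signal, the outliers Rodriguez mentions have to come from statistics over normal behavior. One simple, hypothetical version of "outliers within login attempts" is a z-score over per-identity login volume (real products use far richer features, such as geography, device, and time of day):

```python
from statistics import mean, stdev

def login_outliers(counts_by_user, z_cutoff=3.0):
    """Return identities whose login volume deviates strongly from the fleet.

    counts_by_user: mapping of identity -> login count over some window.
    A toy model; production anomaly detection uses many more signals.
    """
    values = list(counts_by_user.values())
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return set()
    return {u for u, c in counts_by_user.items() if (c - mu) / sigma > z_cutoff}

# Hypothetical fleet: twenty users logging in ~10 times, one service
# account suddenly logging in 500 times.
counts = {f"user{i}": 10 for i in range(20)}
counts["svc-backup"] = 500
print(login_outliers(counts))  # {'svc-backup'}
```

The point of AI here, as Rodriguez frames it, is scale: running this kind of comparison continuously across identity, endpoint, and cloud workload data rather than on one log an analyst happens to open.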
Warning: AI systems themselves need protection
As beneficial as AI technologies might be in identifying and thwarting threat actors, experts warn that new LLM models and other AI technologies that defenders use to protect assets are themselves prized targets of threat actors. Even worse, these AI technologies can leak organizational secrets.
Chinese threat actors are “targeting these AI companies directly for their intellectual property, whether it’s ChatGPT, Gemini, all these new models,” Wiz’s Schindel said. “They are trying to steal information and then build their own versions that are based on what they stole as part of their threat operations.”
For some of these threat actors, “especially coming out of China and even North Korea, not only are they looking for or using identities, but they’re also looking for these custom large language models or any type of generative AI that you may be hosting within your own cloud services,” CrowdStrike’s Rodriguez said.
“The adversary is looking for misconfigured large language models and any type of other genAI that you may be hosting in your cloud because that can also act as an exfiltration point if they were to access those systems,” he added. “And you’ve inadvertently put sensitive information or IP into those systems. They can ultimately use some prompt engineering or even access to misconfigurations within those models to exfiltrate sensitive data.”
What can defenders do?
According to Stamos, very few organizations are currently using AI in a way that prepares them to tackle threats from sophisticated adversaries with real-time intervention. “Out of the Fortune 500, there are maybe 150 to 200 companies playing at that level,” he said.
Stamos said organizations “need to gather as much security telemetry as possible and have it in one data lake that can be queried quickly in real time. You’ve got to do that plumbing, and that’s hard.”
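The payoff of the "plumbing" Stamos describes is that one query can follow a single principal across sources that would otherwise live in separate silos. As a stand-in sketch, an in-memory SQLite table plays the role of the data lake (the schema and event names are invented for illustration):

```python
import sqlite3

# In-memory SQLite stands in for a real security data lake; the point is
# that cloud, identity, and endpoint events share one queryable store.
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE telemetry (ts TEXT, source TEXT, principal TEXT, event TEXT)"
)
rows = [
    ("2025-05-01T09:00:00Z", "cloud", "svc-ci", "console_login"),
    ("2025-05-01T09:02:10Z", "identity", "svc-ci", "token_issued"),
    ("2025-05-01T09:03:45Z", "endpoint", "svc-ci", "new_process:psexec"),
]
db.executemany("INSERT INTO telemetry VALUES (?, ?, ?, ?)", rows)

# One query reconstructs a principal's trail across all three sources,
# in time order -- the "hyper analysis" a siloed setup cannot do.
trail = db.execute(
    "SELECT ts, source, event FROM telemetry "
    "WHERE principal = ? ORDER BY ts",
    ("svc-ci",),
).fetchall()
for ts, source, event in trail:
    print(ts, source, event)
```

In practice the store would be a petabyte-scale lake with streaming ingestion, which is why Stamos calls the plumbing hard; the query pattern, though, stays this simple.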
Rodriguez advises organizations to “secure your identities. That is number one. Ensure that you understand the identities that you have for these services, have things like multifactor authentication, and [see to it] that the privileges for these identities are regularly assessed to ensure that you’re not overextending access to any single or handful of identities within environments that are sitting in the cloud, for example.”
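The regular privilege assessment Rodriguez recommends can start as a mechanical audit pass. A minimal sketch, assuming a hypothetical inventory format where each identity records its MFA state and roles:

```python
def audit_identities(identities):
    """Flag identities missing MFA or holding broad admin roles.

    The inventory schema is hypothetical; real audits would pull from
    the cloud provider's IAM APIs.
    """
    findings = []
    for ident in identities:
        if not ident.get("mfa_enabled"):
            findings.append((ident["name"], "no MFA"))
        if "admin" in ident.get("roles", []):
            findings.append((ident["name"], "broad admin role"))
    return findings

# Example fleet: one well-configured user, one over-privileged service
# account with no MFA -- the kind of identity adversaries hunt for.
fleet = [
    {"name": "alice", "mfa_enabled": True, "roles": ["reader"]},
    {"name": "svc-deploy", "mfa_enabled": False, "roles": ["admin"]},
]
print(audit_identities(fleet))
```

Running a pass like this on a schedule is one concrete way to ensure, in Rodriguez's words, that access is not overextended to any single identity sitting in the cloud.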
Even though using AI to battle Chinese and other threat actors is a complex and high-level task that might need experienced AI engineers to implement, Schindel says that most organizations can easily start the process without this kind of scarce talent. “The only thing you need is someone enthusiastic about AI on your team,” he said. “They don’t necessarily have any significant background with AI, just someone who can use it. These models are easy to use.”