AI aids nation-state hackers but also helps US spies to find them, says NSA cyber director

Nation-state-backed hackers and criminals are using generative AI in their cyberattacks, but U.S. intelligence is also using artificial intelligence technologies to find malicious activity, according to a senior U.S. National Security Agency official.

“We already see criminal and nation state elements utilizing AI. They’re all subscribed to the big name companies that you would expect — all the generative AI models out there,” said NSA director of cybersecurity Rob Joyce, speaking at a conference at Fordham University in New York on Tuesday. “We’re seeing intelligence operators [and] criminals on those platforms,” said Joyce.

“On the flip side, though, AI, machine learning [and] deep learning is absolutely making us better at finding malicious activity,” he said.

Joyce, who oversees the NSA’s cybersecurity directorate tasked with preventing and eradicating threats targeting U.S. critical infrastructure and defense systems, did not speak to specific cyberattacks involving the use of AI or attribute particular activity to a state or government. But Joyce said that recent efforts by China-backed hackers to target U.S. critical infrastructure — thought to be in preparation for an anticipated Chinese invasion of Taiwan — were an example of how AI technologies are surfacing malicious activity, giving U.S. intelligence an upper hand.

“They’re in places like electric, transportation pipelines and courts, trying to hack in so that they can cause societal disruption and panic at the time [and] place of their choosing,” said Joyce.

Joyce said that China state-backed hackers are not using traditional malware that could be detected, but rather exploiting vulnerabilities and implementation flaws that allow the hackers to gain a foothold on a network and appear as though they are authorized to be there.

“Machine learning, AI and big data helps us surface those activities [and] brings them to the fore because those accounts don’t behave like the normal business operators on their critical infrastructure, so that gives us an advantage,” Joyce said.

Joyce’s comments come at a time when generative AI tools are capable of producing convincing computer-generated text and imagery and are increasingly used in cyberattacks and espionage campaigns.

The Biden administration in October introduced an executive order aimed at establishing new standards for AI safety and security while pushing for stronger guardrails against abuse and errors. The Federal Trade Commission recently warned that AI technologies, like ChatGPT, can be “used to turbocharge fraud and scams.”

Joyce said that AI “isn’t the super tool that can make someone who’s incompetent actually capable, but it’s going to make those that use AI more effective and more dangerous.”

“One of the first things they’re doing is they’re just generating better English language outreach to their victims, whether it’s phishing emails or something much more elaborative in the case of malign influence,” said Joyce, referring in the latter case to efforts by foreign governments to sow discord and interfere in elections.

“The second thing we’re starting to see is we’re seeing less capable people use artificial intelligence to guide their hacking operations to make them better at a technical aspect of a hack that they wouldn’t have been able to do themselves,” said Joyce.

Zack Whittaker reporting from Fordham University in New York.