In his keynote at RSAC 2022, Bruce Schneier imagined the future of artificial intelligence (AI) hacking.
“AI will hack humanity unlike anything that has come before, and humans will be the collateral damage,” began Schneier, admitting that his opening was hyperbole but countering that “imagining this does not require any science fiction, nor does it require any major breakthroughs in AI hacking.”
He defined a hack as “something that a system permits but is unanticipated and unwanted by its designers. Hacks follow the rules of a system but subvert its goals or intent.”
Schneier contended that AIs are becoming hackers. “They’re not that good yet, but they are getting better and eventually the AIs will surpass the humans.” There are two considerations, he suggested: “One issue is that an AI might be instructed to hack a system. The second issue is that an AI could naturally, and inadvertently, hack a system.
“AIs don’t solve problems the way humans do,” he explained. “Their limitations are different, they consider more possible solutions than humans, they go down paths we don’t even consider. AI doesn’t think in terms of values, norms, implications, or context.”
In human language and thought, goals and desires are always underspecified, meaning humans give incomplete instructions because people understand the context and are able to fill the gaps. “We can’t incompletely specify goals to an AI and expect it to understand context,” Schneier contended. “AI will think out of the box because it doesn’t have any conception of the box.” One solution to this, Schneier explained, “is to try and teach AI context.”
The first place to look for AI hacking, said Schneier, is financial systems “because those are designed to be algorithmically hackable.” He contended that while a world filled with AI hackers is science fiction, “it isn’t stupid science fiction. It’s worth talking about now.”
To date, he explained, “hacking has been a human activity requiring expertise, time, creativity, and luck.” With AI hacking, however, “everything will change again. AI will change hacking speed, scale, and scope. They will act like aliens. We’re already seeing shadows of this; AI text generating bots already exist, overwhelming human discourse.
“Increasing the scope of AI systems will also make hacks more dangerous. AIs are already making important decisions that affect our lives; decisions that we used to believe needed to be made by human decision-makers. AI makes decisions on parole and bank loans, AI screens job candidates,” explained Schneier. “As AI increases in capability, society will see more important decisions being made by AI. That means that attacks on those systems become even more damaging.” Hacking will become a problem, warned Schneier, “that we, as a society, can no longer manage with our current tools.
“The same technology can be used by defense as well,” countered Schneier. “You can imagine a software company deploying a vulnerability-finding AI on its own code that will discover and patch vulnerabilities.” We can therefore imagine a future where software vulnerabilities will be “a thing of the past,” he said. “The transition period is dangerous, though. The new code is secure, but the legacy code is vulnerable, so the attackers have an advantage on the old stuff, but defenders have an advantage on the new stuff.”
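To make the defensive idea concrete, here is a minimal, hypothetical sketch of the kind of automated code-scanning a company might run over its own source. It is a trivial deterministic stand-in for the “vulnerability-finding AI” Schneier describes (which would use far more sophisticated techniques such as fuzzing or ML-assisted static analysis); the function name and the list of risky calls are illustrative assumptions, not anything from the keynote.

```python
import ast

# Illustrative only: eval() and exec() are two well-known
# injection-prone Python constructs a scanner might flag.
RISKY_CALLS = {"eval", "exec"}

def find_risky_calls(source: str) -> list:
    """Return (line_number, call_name) pairs for each risky call found."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        # Match direct calls like eval(...), not attribute calls.
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RISKY_CALLS:
                findings.append((node.lineno, node.func.id))
    return findings

sample = "x = eval(input('expr: '))\nprint(x)\n"
print(find_risky_calls(sample))  # → [(1, 'eval')]
```

A real defensive pipeline would go further, triaging each finding and proposing a patch; the point of the sketch is only that the same scanning capability serves attacker and defender alike, depending on whose code it is pointed at.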
Crucially, Schneier noted that while “AI hacking can be deployed by the offense and the defense, in the end, it favors the defense.”
In conclusion, Schneier noted “the overarching solution here is people. We have to think about the risks inherent when computers start doing the part of humans. We need to decide, as people, what the role of technology in our future should be, and this is something we need to start figuring out now before the hacking starts taking over the world.” Read more: https://bit.ly/3zwzGFY