Side-Channel with Professor Ben Buchanan

You may not understand cybersecurity, but if you’re on social media, you’ve likely heard of the “Pentagon Pizza Index”. The phenomenon is simple: supposedly, takeout orders near the Pentagon surge during major geopolitical events, often visible before those events are public, and while we may not know exactly what’s happening, we can infer (through consumer behavior) that the government is unusually active.

Though the pizza index remains a formally untested hypothesis, it is easy to see why it would constitute a security vulnerability. Information about when and on what cleared personnel are working is meant to be private. Yet that information leaks out in latent form and can be inferred from data that was never restricted in the first place, because nobody considered that it signaled anything sensitive. This is called a “side-channel attack.”

In the field of information security, there are plenty of fascinating examples of such attacks. I have personally (legally, in the course of my work) forced a web application to list its registered users against its will. The application didn’t hand the list over willingly. Instead, I ran a program that attempted to log in under every possible username up to a certain length, using an arbitrary password. By analyzing the fractions of a second the server took to respond, I could tell when it was rejecting a login attempt out of hand (the user does not exist) and when it was rejecting the attempt only after checking the password. That difference is imperceptibly quick to humans, but it quietly confirms a user’s existence.
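The vulnerable pattern and the timing probe can be sketched in a few lines of Python. This is a minimal simulation, not the tool I used: the “server” here is a local function, the usernames and hashing parameters are invented for illustration, and a real network target would require many repeated measurements to average out jitter.

```python
import hashlib
import os
import time

# Toy stand-in for the vulnerable server. All names are hypothetical.
def _slow_hash(password: str, salt: bytes) -> bytes:
    # Deliberately expensive, like a real password hash.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

_REGISTERED = {}
for _name in ("alice", "bob"):
    _salt = os.urandom(16)
    _REGISTERED[_name] = (_salt, _slow_hash("correct horse", _salt))

def login(username: str, password: str) -> bool:
    # The vulnerable pattern: unknown users are rejected immediately;
    # known users are rejected only after the expensive hash comparison.
    if username not in _REGISTERED:
        return False
    salt, digest = _REGISTERED[username]
    return _slow_hash(password, salt) == digest

def probe(username: str) -> float:
    """Time a login attempt made with a deliberately wrong password."""
    start = time.perf_counter()
    login(username, "wrong-password")
    return time.perf_counter() - start

# The attacker's side: slow rejections betray which usernames exist.
candidates = ["alice", "bob", "carol", "dave"]
timings = {u: probe(u) for u in candidates}
threshold = sum(timings.values()) / len(timings)
inferred = {u for u, t in timings.items() if t > threshold}
print(sorted(inferred))  # the usernames the server quietly confirmed
```

The fix, incidentally, is to make both code paths take the same time, for example by hashing a dummy password even when the username is unknown.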

The wildest side-channel attacks now leverage the predictive power of Large Language Models (LLMs). In one remarkable case, researchers used predictive AI to reconstruct a test subject’s keystrokes from the sound of their typing over Zoom. We would never think to sanitize or obfuscate such audio because nobody would have imagined a practical method to infer the keystrokes that created it.

This leap in inferential power is more than a change of degree. Thanks to LLMs, we now often find ourselves not merely obtaining predictive information but lacking any explanation of how we got it. LLMs often produce stunningly accurate yet unauditable conclusions.

AI is extraordinarily well equipped to perform side-channel attacks on potentially any data we give it, and then to leave us wondering what the side channel even was. Your consumer profile, held by marketing companies and analyzed by LLMs, may know you’re pregnant before you do. And the engineers may not know why. Our problem is one of plugging an invisible leak.

This lack of auditability has produced policy and ethical problems in its own right, but agentic AI moves the question of liability from the abstract into the physical world. Agentic AI gives an AI-enabled program not only the ability to analyze information but also the executive power to act on its conclusions. Examples range from LLMs given the right to book reservations to systems given the power to aim and fire weapons without human intervention.

Despite these risks, AI is profoundly beneficial. In many situations, the cost of managing the obscurity of predictive reasoning is outweighed by its benefits. AI now detects cancers that even the sharpest doctors miss. It is also used for nuclear fault detection and early intervention. In these cases, the cost of a false positive is some routine labor, far outweighed by the cost of the disaster averted. But what about situations where the cost of a false positive is someone’s life, liberty or privacy? The company Smart Shooter has developed a machine-vision-powered gun that can fire based on a decision cycle with no humans in the loop.

U.S. federal agencies are already deploying behavioral analytics and AI-powered anomaly detection systems to monitor for insider threats. In 2023, a self-driving Cruise taxi dragged a pedestrian underneath itself for 20 feet. We need a proper policy structure to handle a paradigm-shifting technology that can serve as judge, jury and executioner.

“This area of law and thought is not that well developed,” said Dr. Ben Buchanan, the Dmitry Alperovitch assistant professor who joined the SAIS faculty earlier this year, whom I contacted for this piece. He weighed in on the issue of auditable reasoning: “One of the most striking questions in this moment will be how AI is applied to different data sets and different problems. If there is structure in that data — if there is something to be learned — the odds are good that an AI system will find it and learn it, even if humans haven’t found it yet,” he said.

Buchanan added, “In cybersecurity, this kind of ability to find novel, important patterns is alluring. A lot of cybersecurity companies are putting it to work on tasks like malicious code detection or finding insider threats. That said, it’s still early days and we should judge those companies and their technologies based on results, not marketing hype.”

To the question of policy towards agentic AI, Buchanan drew from his own experiences. “It’s vital that society think about the application of AI to national security and set clear guardrails. When I was the White House Special Advisor for AI in the Biden Administration, we worked hard to do so, culminating in President Biden’s landmark National Security Memorandum on AI. That document directed U.S. government agencies to use it but with clear limits and clear testing to make sure it was useful, ethical and appropriate,” he said.

Those limits, Buchanan implied, are the scaffolding of trust. Without them, AI systems risk becoming unaccountable agents of judgment. 

Buchanan offered actionable advice for us concerned students. “For students at SAIS who want to work on these issues, it’s essential to understand the technology itself. SAIS has great opportunities to go deeper in this area, and so much about AI is available for free in newsletters and research papers. It’s impossible to make technology policy without understanding both technology and policy,” he said. That may be the most urgent lesson of all.

Edited by: Rowan Liu
