
Drones: A Story of Branding

By JANAE MARTIN
BOLOGNA — The UAE’s Drones for Good contest has announced the winner of its $1 million prize for the best idea for beneficial applications of drone technology. The winning project, Flyability, is a crash-resistant search-and-rescue drone that promises to improve disaster relief operations. Another contestant, Waterfly, would develop small swarms of drones that work together like insects to monitor water quality from above and collect samples.
Last week’s contest is one piece of the changing identity of drone technology. An October report from Business Insider analyzed the new market for commercial drones and predicted a “drone-powered economy” that would feature remotely piloted commercial flights. As if in answer to this report, the U.S. Federal Aviation Administration (FAA) just approved the expansion of small drone research flights in North Dakota. However, introducing and developing new technology for widespread applications raises a series of social and policy concerns that nations must address to ensure public safety and to maintain moral integrity.
Professor Kenneth Keller, director of SAIS Europe from 2006 to 2014, sat down with The SAIS Observer to help us navigate these questions. Dr. Keller has taught chemical engineering and materials science at the University of Minnesota and has served on the NASA Astrophysics Performance Committee. He was also a member and chairman of the National Research Council’s Board on Assessment at the National Institute of Standards and Technology (NIST) and a member of the Science and Technology Advisory Panel to the Director of Central Intelligence. A Ph.D. alumnus of the Johns Hopkins University Chemical Engineering program, Dr. Keller currently teaches a course on Science, Technology, and International Affairs at SAIS Europe.
Keller began:
There are articles that I’ve seen that talk specifically about what you should do when you introduce a technology, and that is to tell a story. The problem is that it could be a story that has little to do with the technology and that creates your reaction to it. In the case of this new kind of technology, we’ve been saying that it interferes with privacy. Well, in fact, we never as a society had a long conversation about what privacy should be. In the United States, it means many different things depending on what area of privacy you’re talking about.
To give you an example, we have a lot of rules and regulations about genetic privacy so you can’t use that in insurance, but I can take your family history, which will tell me just as much. That is not considered to be a violation of privacy. I can have certain rules with respect to the privacy of my mail, but not the privacy of my library records. So we find that frequently what a new technology does is bring up a subject, which may be a deep social subject that really hasn’t had that discussion. If the police can tap my telephone with a court order, that’s something they could do, but, if they tap my internet connection, [people] say that’s an inappropriate use of technology.
A lot of the technologies are now requiring, appropriately, that we look at deeper social questions. When we question whether we should be afraid of drone technology, one thing you ask is, “Well, what about privacy?” Well, my answer is, “What about privacy?” What should privacy mean? What should privacy protect, and what is its equivalence here?
How much time should we invest in discussing these deeper questions without taking away from the race to be leaders in this new market?
Well, there’s the example of nuclear power. The problem with nuclear power is we didn’t have the discussion first. In my own view, one of the things we ought to do with new technology is have the discussion so that you build confidence in the society. Rushing ahead without that is a mistake. If you have the trust of the society, if you’ve built it over time, you can go ahead when the technology comes along with the assumption that people will be considering those issues. But to argue that we ought to go ahead before we understand those consequences is, I think, dangerous, because what it does is cost you trust. That’s where there are long-term problems.
For example, with genetically modified organisms, there were some bad experiences in England with mad cow disease. There were some bad experiences in Belgium with Coca-Cola. Trust in the government was lost so that, when the government said this was safe, nobody was really listening. What we need to do in a technologically driven society is figure out ways of building trust so that we can move forward with technologies, but avoiding that discussion, I think, is a mistake.
In the case of drones, I think the issue is that it’s so strange. We don’t know what to expect. There are some obvious things you worry about, like drones crashing or getting into airspace. Beyond that, what is the narrative which is leading us to either be afraid of it or not? What’s the narrative that says if you use it in war, it’s worse than shooting somebody? The issue is not the drone technology: it’s what story we tell around it.
We may have reasons. It’s possible to say the problem with drones is you don’t have the same sense of the destruction of life because it’s happened somewhere else and you don’t have to see it. That’s an argument we should have; it’s a discussion we should have. But I don’t find drone technology to be obviously worse than other things.
The issue of comparative advantage is that there’s a price we pay for the openness of our societies. If there are other societies where they don’t worry as much about that, I don’t think we want to emulate them. I think we pay a bigger price through that. My problem is people who just don’t like new technology.
You mentioned earlier that technologies often have a narrative before they are even fully developed or applied. Only a couple of years ago, the word “drone” was associated with controversial military operations. When Amazon announced its drone delivery program, it sort of changed the game. What went through your mind after you heard about the Amazon drones, and what are your thoughts on the public’s reaction?
There are two kinds of public reactions, and then my reaction. When a new technology comes along, I often stay up thinking about whether this is something that society needs, or something that the technology is pushing. A phrase I used earlier in the semester is, “When you’ve got a hammer, everything looks like a nail.” You assume your technology can really do something better. Frequently, that isn’t true. So my own reaction is to question what this technology is about. Does it really improve on what went before?
The other issue in the public is their relationship to technology generally. Are there people who are just put off by technology or are there people who are attracted to it? What kind of story do they tell? That sort of a reaction comes from a narrative.
In Germany, there’s more of a willingness to accept machines and technological advancements, but much less willingness to look at genetic modification because they’ve got a history that says that doesn’t always work out so well. In the United States, my view is that we’re less accepting of machines. We don’t like the idea of machines controlling us. There was a great fight in the United States over cameras that take pictures at intersections, with people saying that’s an invasion of privacy. If you had a cop standing at the corner, you wouldn’t say that. If you had a drone doing it, you get upset about it. That has a kind of narrative in advance of the technology, which makes you react, not to this particular application, but to the idea of it. There are narratives about what is natural and what is unnatural.
There are many people whose narrative is the other extreme: this is the future; it’s a new technology; we can do things with it, without asking whether it’s something that ought to be introduced or whether it will help in some way.
From my perspective, narrative is a problem on both sides. Rather than looking at the technology itself and examining its consequences, we a priori have a story about it, which affects how we react to it.
The same thing happened with nuclear power. In the early days, the narrative that persisted was that energy was going to be too cheap to meter. This is really great stuff; it’s clean; everything’s going to be fine. Then the narrative shifted: the narrative of radiation, terrorism, waste disposal, accidents. Most people argue that narrative is, one, unavoidable, and, two, valuable. I’m a little bit more skeptical about whether it allows us to look directly at a technology or look at the underlying values that the technology has inherited, rather than the story we want to tell around it.
We’ve talked in class about the globalization of research and development. If another country is more willing to advance with this technology, would that be something that would push the United States despite our own cultural hesitancies?
Well, I think it’s an argument that people make, and I think it’s a dangerous argument: to be driven, not by your own value set, but by somebody saying, “Well, they’re going to get ahead of us.” It’s true that they might get ahead of us, and that is the price you pay for holding to your own value system.
I’m less inclined to be convinced by the competition argument than I am about the deeper question that comes up: why is our regulatory scheme different from this other country’s? Are there explanations that make sense to us? Are we doing things to hamper development without any good reason? We’ve run into that in medical technologies where most medical companies argue that it’s easier to introduce new technologies in Europe than in the United States. Our FDA requirements are more rigid.
The question you would ask is: have there been technologies introduced in Europe with its more relaxed regulatory scheme that have caused problems that we’ve avoided? That’s a testable question. As it turns out, in [the medical] case, there haven’t been. Then you could make the argument that our more rigid system, aside from inertia, has no justification, and we are losing ground. There may be others where that’s a different story. I think Europe’s agricultural technology has lost ground because it has regulatory schemes that prevent its adoption of technologies that would be of help, without indicating that there’s any real purpose being served.
What are some institutions that we should watch for determining new regulations for drones?
Well, the FAA, or the FCC because drones use communication links. The first question you raise with a new technology is: do we cover the regulatory waterfront adequately with the existing institutions? Is there a need for the creation of a new institution because none of them are exactly applicable?
One might say with respect to drone technology that we don’t have the proper institution for dealing with the set of questions that people may want to raise. I don’t know the answer to that today, but I do know that when new technologies come along, we have to first deal with that question: do we have the mechanisms in the existing institutions? Sometimes we find we don’t. The technology gets ahead of the institutional structure.
We ran into that with Internet communication. Internet communication required a new regulatory structure different from telephone communication. So what we had, in fact, was a new technology that required a new institutional structure. Maybe drones, ultimately, will fit that.
You ask who’s going to get involved with that. With drone technology, obviously, the FAA because it’s up in the air, and the FCC because it’s using remote communication that’s in the airwaves. How long should the signal be? Will it interfere with other transmissions? Local police authorities would be involved if it’s being used to do things which are illegal, local emergency management systems if you’re worried about these things crashing in neighborhoods, and licensing, which would get you into local regulatory regimes. At some point, you say that doesn’t cover all the bases, and we really need to look at this separately and talk about how we want to regulate and license it, and who we want to license it to. Can kids do it? Do they have to be 18 or 21? Can you operate a drone while under the influence of alcohol?
That’s what I think of as institutional structure having to keep up with technology.
Prominent figures such as Bill Gates, Stephen Hawking, and Tesla Motors founder Elon Musk have warned that artificial superintelligence could pose a risk to humanity. Is this a serious concern, or have they been watching too much Terminator?
Well, the world of science is fundamentally built around skepticism. That’s the way the system works: you postulate something, then you attack it from all the directions you can, because that uncovers greater truths. When that plays out in the political system, it doesn’t work nearly as well. It creates a sense of foreboding which isn’t part of the scientific process. The scientific process is that people ask a question, test the question, respond to it, and move forward. Having raised the question doesn’t necessarily mean you’re in really bad shape.
When that comes out into the public, it comes out as uncertainty, in a much more negative way. We get nervous about lacking consensus in an area we don’t understand.
There are some things we do understand. What we find is that, for people who expect science to be deterministic and to have answers, the raising of appropriate questions is taken to be much more problematic than it need be. I’m not sure that these people believe in superintelligence. They’re saying, let’s think about it.
In the year 2000, we had the millennium bug. Everybody thought every computer in the world was going to crash. When CERN [European Organization for Nuclear Research] came online with its high-energy experiments, they said we were going to create a black hole and that all of Europe was going to disappear. These are things that scientists said. We were worried about nuclear energy creating a nuclear winter. All of those were legitimate questions, but the implication isn’t that we were facing extinction. It meant that everything ought to be examined.
I think that builds trust in the society because I want people to be thinking about all of these questions. The responsibility comes to scientists first because they are aware of the potential of these kinds of technologies, and they ought to be giving us early warning, not because they think it will happen but because they think this is something we ought to discuss. One of the things we find very difficult in modern society and democracy is how to build in the public the discussion of what you want to do when they’re not going to understand the top quark or the Higgs boson. How do you inform them of what’s involved in fetal tissue research or stem cells? How do you build in their role? Because they do have a role. But we need experts telling us a lot of things along the way.
There’s a very famous scientist at the Institute for Advanced Study in Princeton, Freeman Dyson, who has said that global warming is not really a problem. He’s a very smart guy. He’s raising the what-if question. If we coated the whole world with another half-inch of dirt, we would absorb all the carbon dioxide. He’s done a calculation which shows that. Now, that doesn’t mean that he believes it. It means that he’s being a contrarian, because that’s one role you have in science, but it doesn’t play out as well in the public.
I’d also say that there are applications [in drone technology] that we may not want to get into because they would change the nature of our society, and we don’t want to be at that place. That’s another one of those value discussions that I think technology pushes us to. Not because it’s going to destroy the world, but because we’re going to ask questions about whether this is the way we want our society to be organized. Is this our narrative?