Less discussed on this site, though not overlooked, has been the question of whether drone strikes are ethical in theory and practice. By contrast, the public conversation on drones concerns questions of morality in no small part, not only among those who criticize the use of drones for targeted killing but also from within the military itself. This piece examines a series of examples in an attempt to identify the ethical problems we confront with the recreational, civilian/commercial, and military uses of drones. I will first survey how drone aircraft create ethical problems and the important role humans play in how those issues emerge. For comparison's sake, there is a code of ethics for drone journalists, which is viewed as a layer of additional ethical considerations atop the traditional professional and ethical expectations of a journalist in the 21st century. Drone journalists most often use drones for aerial photography, although drones are by no means limited to that function.
As soon as the theater lights dim for the latest drone-centric film, Eye in the Sky, a quote fills the screen: 'In war, truth is the first casualty.' It's a sobering setup for the ethical dilemma at the heart of the film: Should British and American officials order a strike that would take out some of the highest-ranking members of Al-Shabab in East Africa if it means killing a nearby 9-year-old girl in the process?
The situation presented is a bit unrealistic, according to accounts by military members like former Air Force technician Cian Westmoreland, who participated in a Q&A hosted by international human rights group Reprieve following an early screening of the film. But if one also accepts that accuracy is often the first casualty in Hollywood, Eye in the Sky, which opens Friday, is an edge-of-your-seat thriller that offers moviegoers a framework for thinking critically about our increasingly automated wars.
'It's a movie about morally complex questions and very real ethical dilemmas set in the world of modern drone warfare,' says director Gavin Hood (Ender's Game, Tsotsi). 'It's a dramatic thriller aimed at a thinking audience of people who love films, and it leaves them with something to talk about.'
For years, U.K. Colonel Katherine Powell (Academy Award-winner Helen Mirren) has been tracking a radicalized British citizen, and American drones have finally located her target inside a safe house in Kenya. But when surveillance footage shows that the militant group is preparing suicide vests for an imminent attack, Powell turns her capture mission into an order to kill.
But just as American drone pilot Steve Watts (Emmy Award-winner Aaron Paul) is about to launch the deadly Hellfire missile from a bunker in Nevada, a little girl enters the kill zone. Her presence sparks a debate about the strike's morality and legality at all levels of the U.S. and U.K. governments, which ropes in U.K. Lieutenant General Frank Benson (the late Alan Rickman, in his final on-camera role).
Like his fellow drone operator Brandon Bryant, who criticized the 2015 film Good Kill for its lack of realism, Westmoreland sees Hollywood's depictions of drone warfare as an opportunity to accurately introduce civilians to a clandestine world. 'It's a fantasy representation of what happens,' says Westmoreland, who first came forward in November to publicly challenge the U.S. drone program's efficacy. Though he sees the tactical and strategic value of drones, he doesn't believe they should be used in a country the U.S. is not at war with, 'because we don't control the narrative on the ground.'
In Hood's high-definition rendering, surveillance shows even minute details like clothing patterns. But in reality, Westmoreland explains, the feeds are not that crisp—dust obscures the image, and signals are poor. 'The film gives you the impression that there is a great deal of accuracy,' he says, adding that the depiction may mislead viewers about the quality of footage that officials are actually working with.
Westmoreland adds that the idea of a single casualty sparking a heated governmental debate is also unrealistic. The U.S. sometimes targets weddings and funerals, he explains, and it accepts a degree of civilian collateral damage in strikes. The White House and Pentagon contend that drones are precise weapons that have caused few civilian deaths and have killed key leaders of groups that want to harm the United States and its allies. But leaked documents published by The Intercept in October show that during one five-month period in Afghanistan, nearly 9 out of 10 people killed in strikes were not the intended targets. 'One civilian would not constitute a reason to stop,' Westmoreland says.
Despite straying from reality, the film offers viewers a rare opportunity to jump into the shoes of drone warfare's major players—from decision-makers to drone operators—and gain new insights.
'I always thought that drones were being flown by men and women from the comfort of a bunker, but I learned that it isn't comfortable whatsoever,' says Paul. 'It is a very terrifying, scary situation they are put in. If you drop a payload on a building, you know that people are going to die, and you know that sometimes innocent civilians are going to die.'
Since filming, Mirren says, she has been reading news stories involving drones with a more critical eye. Discussing the recent U.S. manned and unmanned airstrike on an Al-Shabab training camp in Somalia that killed 150, she says, 'Without a doubt, there were innocent people killed in that strike.' And like the ethical conundrum faced in the fictional drone strike in Kenya, 'that very decision had to be taken' in Somalia, she adds.
'You can debate the political and technical factors that influence the use of drone strikes as much as you like,' says Hood, 'but I ultimately want this film to remind us of the need for compassion, no matter where we are in the world.'
'Many of the technologists involved in data aggregation see a benefit to civil society,' wrote Quentin Hardy, in a New York Times column on the disappearance of urban anonymity when all data is tracked. Ethicists, researchers, and corporate compliance officers, by way of contrast, may see risks to privacy and civil rights from 'big data.'
Over the years, I've encountered, channeled, or challenged different strains of technology-fueled optimism and pessimism regarding the impact of new technologies on society. I read a perspective that mixed some of both last week, published by MIT's Technology Review, when Stanford fellow Vivek Wadhwa argued that laws and ethics can't keep pace with technology.
He's on to something important: the nature of rapid technological progress and relatively slow legislative process and regulatory rulemaking means there will almost always be a gap between technology and the law, unless parliaments preemptively move to limit certain uses or developments. The development of some technologies may move underground, offshore, or overseas, beyond such restrictions.
Wadhwa shared three examples of technological change that pose challenging legal and ethical decisions for legislatures, courts, and society, from genomic testing to surveillance and the use of smartphones; he also looked ahead to how self-driving cars, drones, and robots will pose new legal and ethical issues. He's mining a rich vein of material: the introduction of new technologies into society will put individuals, organizations, and governments in situations where they will need to make choices that are novel to them.
Wadhwa made two assertions in his column that complicate his thesis a bit, with respect to the laws governing the use of genetic information or social media. An earlier version of the article stated that there was 'no law' governing the use of genetic information. That's not quite accurate, as the updated column reflects, though significant gaps remain: the Supreme Court has ruled (PDF) that law enforcement officers can take DNA evidence from people arrested for, but not convicted of, serious crimes. Similarly, doctors may take tissue from patients in hospitals and subsequently use it, or the genetic information in it, for research without the informed consent of the patient. The issues around patenting genetic information are even more complex.
In his column, Wadhwa also stated that employers 'can use social media to filter out job applicants based on their beliefs, looks, and habits,' even though there's clear guidance from the U.S. Equal Employment Opportunity Commission that social media may not 'be used to make employment decisions on prohibited bases.'
The overall point Wadhwa made, however, is sound: despite regulatory guidance to the contrary, a growing number of employers are using search engines and social media to look up applicants as part of the hiring process and may be using what they find to discriminate. After winnowing down an applicant pool to the people who meet baseline qualifications, how many recruiters or hiring managers in 2014 will not Google them or search for their Facebook or LinkedIn profiles? And, after doing so, how many will not be influenced by what they find? The answer seems uncomfortably clear.
Where I found myself disagreeing with Wadhwa most, however, is not on details but rather on the question of whether we see societal shifts in ethics due to exponential technological change. I define ethics generally as the moral principles that govern a person's or group's behavior in a given context. While it's true that what we see as moral exists within a given culture or time, does that baseline shift because of technology? Did societal progress with respect to the abolition of slavery, suffrage, or civil rights in the U.S. occur because of technology, or in the context of it?
Each generation born into the 20th century has had new technologies and tools to learn to use ethically, from telephones to smartphones to drones and genetic tests. With each new technology, society has had to grapple with unethical uses and find new norms. Each ethical decision is made in context of the society it is made within, with a given technology acting as a tool or enabler of an action.
Wadhwa is doing us all a public service by highlighting these challenges. Businesses are grappling with the ethical issues posed by massive data aggregation and data analysis, looking for the right balance between increased risk and innovation. Governments, schools, research labs, and anyone else using these new tools for data storage, processing, and analysis will grapple with them as well. Even where the state asserts legal authority for the extrajudicial killing of an American citizen without due process, legal ethicists may still reasonably question the morality of that action, or the government's opacity around its use.
Or, to take another uncomfortable example raised by Wadhwa, does the use of less expensive ultrasound devices to determine the sex of fetuses change the underlying ethical questions that parents face regarding whether to carry a baby to term?
I suspect that Wadhwa and I could agree that stealing, murder, assault, slander, or fraud are all unethical. Do new technologies really change our present social compacts or the ethics involved? Are there ethical choices that will continue to be clearly flawed, regardless of tech change?
These are truly difficult questions, with no easy answers. I think ethics speak to a higher order of standards for decision-making that is more resistant to change. What that means in practice is that we'll need to learn how to apply existing ethics and moral values to novel situations that technology presents.
For instance, is the use of an individual's genomic information for research or profit without her consent or that of her family ethical? A recent bioethics controversy regarding the use of Henrietta Lacks' DNA for decades of research resulted in a landmark decision by the National Institutes of Health to give her family some control over how her genome is used.
It's worth noting that the Lacks case involved the use of data without consent. A focus on usage will be critical if an acceleration in data collection and creation around the world continues. So, too, will asking hard questions about big data, as danah boyd and Kate Crawford did in 2011. Almost two years ago, my colleague Alistair Croll wrote that 'big data is our generation's civil rights issue, and we don't even know it.'
Today, there is finally an international conversation about the related issues of power and influence around the collection and use of data. New scholarship on big data ethics is emerging from academia. Civil rights groups are warning against abuse of big data, and the White House is conducting a review of big data and privacy.
On the latter count, I participated this spring in the second big data and privacy workshop pursuant to that review, delivering a short talk on data journalism, networked transparency, algorithmic transparency, and the public interest at the workshop on the social, cultural, and ethical dimensions of 'big data.' The forum was convened by the Data & Society Research Institute, hosted at New York University's Information Law Institute, in cooperation with the White House Office of Science and Technology Policy.
The selection of talks should offer considerable context for these issues, as will the forthcoming review from the White House this month, and another, separate assessment from the President's Council of Advisors on Science and Technology later this year.