
AT: Arms Race

Ban can’t solve the arms race: AI in LAWs encompasses military uses that serve far beyond “lethal” purposes, and the technological development tied to nuclear deterrence proves the technology will be there no matter what

Geist 16 [Geist, Edward Moore. MacArthur Nuclear Security Fellow at Stanford University's Center for International Security and Cooperation, “It’s already too late to stop the AI arms race—We must manage it instead,” 08/15/16, Bulletin of the Atomic Scientists, https://www.tandfonline.com/doi/full/10.1080/00963402.2016.1216672?scroll=top&needAccess=true] /Triumph Debate

Any successful strategy to manage AI weaponization needs to acknowledge, and grapple with, this historical baggage. Russia and China have good reason to interpret present-day US weapons programs as merely the latest iteration of a decades-old pattern of Washington seeking to maximize and exploit its technological edge in artificial intelligence for strategic advantage (Long and Green 2015). Artificial intelligence failed to live up to man’s ambitions to abuse it during the Cold War, but we are unlikely to be so lucky in the future. Increased computing power and advances in machine learning have made possible weapons with previously unfeasible autonomous capabilities. The difficulty of defining exactly what constitutes an “autonomous weapon” is likely to forestall a meaningful international agreement to ban them altogether. Because the United States has long deployed systems, such as the Phalanx gun, that choose and engage their own targets, the official Department of Defense policy on autonomous weapons employs a nuanced definition excluding those that Moscow and Beijing would certainly contest in any arms-control negotiation (Defense Department 2012). Attempts to include autonomous weapons in the United Nations Convention on Certain Conventional Weapons have faltered so far for this reason, and there is little reason to believe this will change anytime soon. Furthermore, autonomous weapons are only part of the problem of AI weaponization, and a relatively small one at that. It is telling that “war-fighting applications” were merely one of the five military uses of AI considered by the Defense Science Board 2015 Summer Study on Autonomy. While the Pentagon’s interest in employing AI for “decision aids, planning systems, logistics, [and] surveillance” may seem innocuous compared to autonomous weapons, some of the most problematic military uses of AI involve blind obedience to human instructions absent the use of lethal force (Kendall 2014). Despite the fact that DARPA’s submarine-locating drones are not “autonomous weapons” because they do not engage targets themselves, this technology could potentially jeopardize the global strategic balance and make war more likely. Most nuclear powers base the security of their deterrent on the assumption that missile-carrying submarines will remain difficult for enemies to locate, but relatively inexpensive AI-controlled undersea drones may make the seas “transparent” in the not-too-distant future. The geostrategic consequences of such a development are unpredictable and could be catastrophic. Simply banning “killer robots” won’t stop the AI arms race, or even do much to slow it down. All military applications of artificial intelligence need to be evaluated on the basis of their systemic effects, rather than whether they fall into a particular broad category. If particular autonomous weapons enhance stability and mutual security, they should be welcomed.

Commercialization of AI technology makes spillover—the arms race—inevitable

Horowitz 19 [Horowitz, Michael C. Professor of political science at the University of Pennsylvania. “When speed kills: Lethal autonomous systems, deterrence and stability,” Journal of Strategic Studies, August 2019, https://www.tandfonline.com/doi/abs/10.1080/01402390.2019.1621174] /Triumph Debate

This article assesses the growing integration of AI in military systems with an eye towards the impact on crisis stability, specifically how countries think about developing and deploying weapons, as well as when they are likely to go to war, and the potential for arms control.68 Contrary to some public concern and media hype, unless AI capabilities reach truly science fiction levels, their impact on national and subnational military behaviour, especially interstate war, is likely to be relatively modest. Fundamentally, countries go to war for political reasons, and accidental wars have traditionally been more myth than reality.69 The effects for subnational use of AI could be more significant, especially if military applications of AI make it easier for autocrats to use military force to repress their population with a reduced number of loyalists. The commercial spread of machine learning in the private sector means some form of spillovers to military applications will be inevitable. The desire for faster decision-making, concern about the hacking of remotely piloted systems, and fear of what others may be developing could all incentivise the development of some types of LAWS. However, awareness of the potential risk of accidents regarding these systems, as well as the desire for militaries to maintain control over their weapons to maximise their effectiveness, will likely lead to caution in the development and deployment of systems where machine learning is used to select and engage targets with lethal force. One of the greatest risks regarding applications of AI to military systems likely comes from opacity concerning those applications, especially as it interacts with the potential to fight at machine speed. Unlike missiles or bombers, it will be difficult for countries to verify what, if any, AI capabilities potential adversaries have. Even an international agreement restricting or prohibiting the development of LAWS would be unlikely to resolve this concern. Fear would still exist. Given that uncertainty makes disputes harder to resolve, this could have an impact. These factors make international regulation potentially attractive, in theory, but challenging in application, because the very thing about LAWS that might make international regulations on LAWS attractive – their ability to enable faster and more devastating attacks, as well as the risk of accidents – may also make those regulations harder to implement and increase the risks if cheating occurs. But discussions at the CCW are ongoing, and may yet yield progress, or at the least agreement on considering safety and reliability issues when evaluating the development and use of autonomous systems. Humanity’s worst fears about an intelligent machine turning against it aside, the integration of machine learning and military power will likely be [a] critical area of inquiry for strategic studies in the years ahead.

AT: AI Bias

Moral responsibility for actions lies with the programmer: biases (and bad moral actions in general) are not inherent to the machine but to the one who programs it

Robillard 17 [Michael Robillard, “No Such Thing as Killer Robots.” Journal of Applied Philosophy, 06/19/2017, https://onlinelibrary.wiley.com/doi/epdf/10.1111/japp.12274] /Triumph Debate

I do not disagree with any of the above claims about the nature of both moral reasons and moral reasoning. Indeed, I agree with Purves et al. that morality is not codifiable. I also agree with them insofar as I believe that fully just actions must follow from the right moral reasons. Where I take issue with them is in their thinking that the actions of the AWS should be the proper object for which these moral concerns actually apply. Put another way, while I do not think morality is subject to formal codification, I do not think that the apparent ‘decisions’ of the AWS stand as something metaphysically distinct from the set of prior decisions made by its human designers, programmers and implementers, decisions that ostensibly do satisfy the conditions for counting as genuine moral decisions. Even if the codified decision-procedures of the AWS amount to only a truncated or simplified version of the programmers’ moral decision-making for anticipated future contexts the AWS might some day find itself in, the act of codifying those decision-procedures into the machine’s software will itself still be a genuine moral decision. The same goes for the condition of being motivated by the right kinds of reasons.21 Accordingly, concerns about the morality of AWS would all amount to contingent worries at best (worries about apportioning moral responsibility among a collective of agents, determining epistemic responsibility, weighing risks in conditions of epistemic uncertainty, etc.). There would, however, be nothing wrong in principle about using AWS. That being said, were the moral stakes high enough, use of AWS over the use of soldiers, under some set of conditions, could be conceivably permissible as well as conceivably obligatory. What obfuscates the situation immensely is the highly collective nature of the machine’s programming, coupled with the extreme lag-time between the morally informed decisions of the programmers and implementers and the eventual real-world actions of the AWS. Indeed, the decision to deploy an AWS in a particular context, in principle, is not any more or any less insensitive to moral reasons than the decision to place a land mine in a particular context. These actions, at base, are both human decisions, responsive to moral reasons through and through. The only difference that separates these actions from more familiar actions on the battlefield is that there is a pronounced lag-time between the latent human decisions built into the causal architecture of the weapons system itself and the anticipated combat effect of that weapon system that later eventuates. This pronounced lag-time combined with the AWS’s collective nature and complex interface has led philosophers to mistake the set of human decisions instantiated in the form of the machine’s programming and implementation as summing up into an additional set of decisions (the AWS’s decisions) that is metaphysically distinct. However, after we have summed the total set of human decisions instantiated in the machine’s software and implementation, I am hard pressed to see what genuine decisions would be left over. 
Indeed, the AWS might and likely will behave in ways that we cannot predict, but these actions will not fail to be logical entailments of the initial set of programming decisions encoded in its software combined with the contingencies of its unique, unanticipated environment.22 As a final point worth noting, despite the arguments here given, one might still think that we can coherently conceive of the relationship between programmers and AWS as analogous to a parent’s relationship to a small child, insofar as both the AWS and the child can be seen as partial agents. This analogy, however, does not hold. For instance, imagine a radical terrorist parent who trains a small child to carry a bomb into a marketplace, to locate a person whose appearance looks least ethnically similar to their own, and to then press the detonator button. While the child’s individual decisions regarding where and when to detonate the bomb would still count as genuine decisions, we would not say that the parent was somehow absolved of moral responsibility for the resulting harm in virtue of the child’s authentic, though partial, agency. In other words no responsibility ‘gap’ would be present. Accordingly, this analogy breaks down if we think the AWS’s ‘parents’ (i.e. its programmers) should somehow be regarded any differently morally speaking.

Turn - AI can be used to solve back for biases

Fjeld 2020 [Fjeld, Jessica, Nele Achten, Hannah Hilligoss, Adam Nagy, and Madhulika Srikumar. "Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-based Approaches to Principles for AI." Berkman Klein Center for Internet & Society, 2020. https://dash.harvard.edu/bitstream/handle/1/42160420/HLS%20White%20Paper%20Final_v3.pdf?sequence=1&isAllowed=y] /Triumph Debate

Algorithmic bias – the systemic under- or overprediction of probabilities for a specific population – creeps into AI systems in a myriad of ways. A system might be trained on unrepresentative, flawed, or biased data. Alternatively, the predicted outcome may be an imperfect proxy for the true outcome of interest or the outcome of interest may be influenced by earlier decisions that are themselves biased. As AI systems increasingly inform or dictate decisions, particularly in sensitive contexts where bias long predates their introduction such as lending, healthcare, and criminal justice, ensuring fairness and nondiscrimination is imperative. Consequently, the Fairness and Non-discrimination theme is the most highly represented theme in our dataset, with every document referencing at least one of its six principles: “non-discrimination and the prevention of bias,” “representative and high-quality data,” “fairness,” “equality,” “inclusiveness in impact,” and “inclusiveness in design.” Within this theme, many documents point to biased data – and the biased algorithms it generates – as the source of discrimination and unfairness in AI, but a few also recognize the role of human systems and institutions in perpetuating or preventing discriminatory or otherwise harmful impacts. Examples of language that focuses on the technical side of bias include the Ground Rules for AI conference paper (“[c]ompanies be wary of AI systems making ethically biased decisions”). While this concern is warranted, it points toward a narrow solution, the use of unbiased datasets, which relies on the assumption that such datasets exist. Moreover, it reflects a potentially technochauvinistic orientation – the idea that technological solutions are appropriate and adequate fixes to the deeply human problem of bias and discrimination. The Toronto Declaration takes a wider view of the many places bias permeates the design and deployment of AI systems: All actors, public and private, must prevent and mitigate against discrimination risks in the design, development and application of machine learning technologies. They must also ensure that there are mechanisms allowing for access to effective remedy in place before deployment and throughout a system’s lifecycle. Within the Fairness and Non-discrimination theme, we see significant connections to the Promotion of Human Values theme, with principles such as “fairness” and “equality” sometimes appearing alongside other values in lists coded under the “Human Values and Human Flourishing” principle. There are also connections to the Human Control of Technology, and Accountability themes, principles under which can act as implementation mechanisms for some of the higher-level goals set by Fairness and Nondiscrimination principles. The “non-discrimination and the prevention of bias” principle articulates that bias in AI – in the training data, technical design choices, or the technology’s deployment – should be mitigated to prevent discriminatory impacts. This principle was one of the most commonly included ones in our dataset and, along with others like “fairness” and “equality” frequently operates as a high-level objective for which other principles under this theme (such as “representative and high-quality data” and “inclusiveness in design”) function as implementation mechanisms.
Deeper engagement with the principle of “nondiscrimination and the prevention of bias” included warnings that AI is not only replicating existing patterns of bias, but also has the potential to significantly scale discrimination and to discriminate in unforeseen ways. Other documents recognized that AI’s great capacity for classification and differentiation could and should be proactively used to identify and address discriminatory practices in current systems. The German Government commits to assessing how its current legal protections against discrimination cover – or fail to cover – AI bias, and to adapt accordingly.
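To make the card’s definition concrete, the short Python sketch below illustrates one way the “systemic under- or overprediction of probabilities for a specific population” could be measured. It is a minimal, hypothetical example: the function name, the sample data, and the 0.05 tolerance are illustrative assumptions, not taken from the Fjeld et al. paper or from any deployed auditing tool.

# Minimal illustrative sketch (hypothetical names and data): compares each
# group's mean predicted probability with its observed positive rate, i.e.
# the "systemic under- or overprediction of probabilities" described above.

from collections import defaultdict

def calibration_gap_by_group(records):
    """records: iterable of (group, predicted_probability, actual_outcome).
    Returns {group: mean predicted probability - observed positive rate}.
    A strongly positive gap suggests systemic over-prediction for that group;
    a strongly negative gap suggests systemic under-prediction."""
    totals = defaultdict(lambda: [0.0, 0, 0])  # [sum of preds, positives, count]
    for group, prob, outcome in records:
        t = totals[group]
        t[0] += prob
        t[1] += outcome
        t[2] += 1
    return {g: t[0] / t[2] - t[1] / t[2] for g, t in totals.items()}

if __name__ == "__main__":
    # Hypothetical scored decisions: (group, model probability, real outcome)
    data = [
        ("group_a", 0.70, 1), ("group_a", 0.60, 1), ("group_a", 0.65, 0),
        ("group_b", 0.40, 1), ("group_b", 0.35, 1), ("group_b", 0.30, 0),
    ]
    for group, gap in calibration_gap_by_group(data).items():
        flag = "possible bias, audit further" if abs(gap) > 0.05 else "within tolerance"
        print(f"{group}: calibration gap {gap:+.2f} ({flag})")

A per-group audit of this kind is one concrete version of what the documents mean when they say AI’s capacity for classification “could and should be proactively used to identify and address discriminatory practices.”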

AT: Non-State Actors

Ban doesn’t solve - will only incentivize asymmetric research done by non-state actors

Newton 15 [Michael A. Newton, Back to the Future: Reflections on the Advent of Autonomous Weapons Systems, 47 Case W. Res. J. Int'l L. (2015) https://scholarlycommons.law.case.edu/jil/vol47/iss1/5] /Triumph Debate

Machines cannot be prosecuted, and the line of actual responsibility to programmers or policy makers becomes too attenuated to support liability. Of course, such considerations overlook the distinct possibility that technological advances may well make adherence to established jus in bello duties far more regularized. Information could flow across programmed weapons systems in an instantaneous and comprehensive manner, and thereby facilitate compliance with the normative goals of the laws of war. A preemptive ban on autonomous weapons would prevent future policy makers that seek to maximize the values embedded in jus in bello from ever being in position to make informed choices on whether to use humans or autonomous systems for a particular operational task. Of course, any sentient observer knows that we do indeed live in a flawed and dangerous world. There is little precedent to indicate that a complete ban would garner complete adherence. Banning autonomous systems might do little more than incentivize asymmetric research by states or non-state armed groups that prioritize their own military advantage above compliance with the normative framework of the law. There has been far too little analysis of the precise ways that advancing technology might well serve the interests of law-abiding states as they work towards regularized compliance with the laws and customs of war. Proponents of a complete ban on autonomous weapons simply assume technological innovations away, and certainly undervalue the benefits of providing some affirmative vision of a desired end state to researchers and scientists.

LAWs necessary to combat changing battlefield with non-state actors—current cyber operations prove

Kiggins 18 [Ryan David Kiggins, Currently on faculty in the department of political science at the University of Central Oklahoma, Edmond, OK, USA. He has published on US Internet governance policy, US cyber security policy, and global security and rare earth. “Big Data, Artificial Intelligence, and Autonomous Policy Decision-Making: A Crisis in International Relations Theory?” International Political Economy Series (2018). https://link.springer.com/book/10.1007/978-3-319-51466-6] /Triumph Debate

The combination of big data and semi-autonomous and autonomous militarized machines presages a future for national and global security in which humans have an increasingly decreased role in decision-making, war fighting, and relevance as agents in international relations theory. The use of unmanned aerial vehicles (UAVs or more commonly drones) by the USA, other countries, and non-state actors, such as Hezbollah and ISIS, all portend the further automation of the battlefield where automated, semi-autonomous, or autonomous machines are put in harm’s way with humans being far from actual kinetic fire and, thus, potential death and/or injury. Singer (2009) demonstrates that the push to automate US national security decision-making has, in the twenty-first century, taken on [a] more urgent tone as technological advances in computing power, computer networked communications, and robotics are leading to increased distance of humans from the battlefield. The political risk of committing US military forces to advance US diplomatic and national security objectives is, accordingly, plummeting. If fewer US military personnel are at risk of injury or death due to combat operations, US political leaders may be more apt to employ US military force to advance US interests leading to a more unstable global security environment. Of course, the form of that military force matters: kinetic or digital? Consider US efforts to slow the advance in nuclear weapons technology acquisition and development by Iran. Options included a full-scale air campaign that would have presented considerable political risk should US pilots be killed, captured, and paraded for global media consumption. In the end, the US government opted for a strategy that presented less political risk: the use of a computer virus, the first digital warhead. The Stuxnet computer virus was allegedly designed by the NSA and provided to its Israeli counterpart which then modified the virus before successfully planting the computer virus on the computer network used at Iranian nuclear facilities (Zetter 2015). Significantly, and unlike kinetic munitions, computer viruses and other forms of malware are reusable by friend and foe. Making the use of such weapons perhaps more risky, for in utilizing digital weapons, one has disclosed and effectively given a capability to an adversary that can, in turn, be used against you. Furthermore, computer viruses and other malware become autonomous agents; as such, digital weapons perform the functions for which it was programed. Digital weapons simply make decisions based on preprogrammed instructions. In the case of the Stuxnet virus, it was designed to collect data, transmit that data home, and conduct disruptive operations to the production of highly enriched uranium for use in Iranian nuclear weapons. All tasks that it reportedly did well (Zetter 2015).

Non-State Actors using precision weapons is inevitable---they’ve already built up their arsenals, and current I-Law frameworks don’t solve

Lifshitz 20 [Itamar Lifshitz, Contributor and writer, "The Paradox of Precision: Nonstate Actors and Precision-Guided Weapons," 11/2020, War on the Rocks, https://warontherocks.com/2020/11/the-paradox-of-precision-nonstate-actors-and-precision-guided-weapons/] /Triumph Debate

The Characteristics of the Threat: Nonstate actors are increasingly using precision weapons systems. The destructive effects of this trend are not theoretical. This is evident almost anywhere one looks in the Middle East. In Lebanon, Hizballah is investing heavily in making its vast ballistic arsenal more sophisticated and precise, including efforts to acquire the necessary manufacturing capability and know-how. In Yemen, Houthi rebels have used armed drones to target Saudi oil infrastructure. The abundance of precise systems is not exclusively an outcome of Iranian-backed proliferation — the Islamic State has also used weaponized drones on multiple occasions in Iraq and Syria. This phenomenon is by no means confined to the Middle East. Surface-to-air missiles launched from pro-Russian separatist-controlled territory in the Donbass shot down Malaysia Airlines Flight 17. There have been reports of Boko Haram using attack drones in Nigeria. Simultaneously, across the globe, criminal organizations are beginning to harness aerial capabilities. The adoption of drones for violent uses has apparently already begun in the war between Mexican drug cartels. Using precision technology and standoff capabilities, nonstate actors can cause more damage now than in the past. These technologies are readily available, cost less, do more, and require less expertise. Standoff capabilities challenge the ability to retaliate and to eliminate threats in real time. Some of these systems are being transferred by increasingly persistent “rogue” proliferators, but others are a part of the so-called “support” given by global and regional powers to local actors. As the threshold for the acquisition and use of technologies such as unmanned aerial systems, drones, and quadcopters has been significantly lowered, nonstate actors are also innovatively weaponizing commercially available technologies. In the coming years, as technology for precision-guided systems continues to advance, the challenges are likely to increase. First, these systems are sure to become more lethal — nonstate actors will have more precise weapon systems, with bigger payloads and more of them. Secondly, the ability to deploy these systems will improve significantly and they may be increasingly autonomous. Operating these systems from longer ranges will become easier, and the use of predetermined GPS targets (or other Global Navigation Satellite Systems) or AI algorithms could reduce the role of humans in the decision-making process. The use of space-related commercial intelligence gathering platforms, such as Google Earth, helps different entities to easily acquire precise targets. These new technologies might also simplify the use of nonconventional weapons, as, for example, a drone or quadcopter could easily disperse chemical agents. An Inadequate Existing Legal and Multilateral Framework: Existing multilateral arms control frameworks don’t curtail the proliferation of precision standoff capabilities to nonstate actors and fail to address it as a unique problem. Current efforts regarding nonstate actors are focused on weapons of mass destruction and voluntary commitments regarding arms trades. Precise systems are generally perceived as preferred weapons since they help to minimize collateral damage and to distinguish between militants and civilians.
The use of standoff precision weapons can assist commanders in meeting the requirements of the principle of distinction between civilians and combatants in international humanitarian law, and the general imperative to avoid the use of indiscriminate weapons. The growing importance of nonstate actors in international conflicts in the last 20 years has led scholars to grapple with their legal obligations. Many claim that while the rules of armed conflict and international humanitarian law still apply to nonstate actors, their enforcement faces significant challenges and is seldom sufficient. Some nonstate actors, of course, ignore legal norms or abuse them cynically. While states are accountable for their actions, violent nonstate actors can evade responsibility. These groups usually thrive in war zones and states with low governance, which lack effective institutions that might hold them accountable for their deeds. Furthermore, nonstate actors often don’t share the basic values that underpin international law and norms, demonstrated by their intentional targeting of civilians. In the hands of nonstate actors, precision-guided weapons cease to be a tool that decreases collateral damage. Instead, they become weapons of strategic, and potentially mass, destruction. The lack of accountability for nonstate actors also drives some states to seek impunity for their actions by conducting them through proxy groups. Therefore, it seems blatantly clear that the international community should do everything in its power to prevent nonstate actors from acquiring these weapons. To date, there are no adequate international norms on the proliferation of standoff precision capabilities to nonstate actors. The existing “binding” arms control framework regarding nonstate actors focuses on weapons of mass destruction. The 2004 U.N. Security Council Resolution 1540 determined that all states must refrain from providing support to nonstate actors to attain nuclear, chemical, or biological weapons and their means of delivery. It also requires states to adopt laws and regulations to prevent proliferation of these systems.