In-Lab Mini-Debate – Sunday




Lesson plan explained


Basic Lesson plan

This drill covers different issues, but it follows a blueprint very similar to the one used for yesterday’s in-lab mini-debate.

You will prep for both sides (Aff and Neg). This mini-debate is about the terror disad vs. the Privacy advantage.

Background on the Aff:

In the 1AC, the Aff runs the long version of the privacy advantage (below). It reads the packet Aff with the following plan text:

Plan Option #5

In the absence of an individually-tailored warrant obtained via use of a specific selector term, federal intelligence agencies should cease collection of domestic phone, internet, email, and/or associated electronic records.


How to prep and flow


To start

Grab two sheets of flow paper. Pre-flow the 1AC Privacy page (below), and – on a separate sheet of paper – pre-flow the 1NC Terror disad.


What you should do to set up your 1NC

Please note that – this time – we are requiring the Neg to make the following two specific answers to the privacy advantage. See more on this below.

As you prep the Neg, you should assume you read the Terror disad in the 1NC (below). BUT, you get to design the 1NC versus the privacy advantage. Here are the new guidelines:

You can make up to 8 answers to the privacy advantage. 2 of the answers should be analytic arguments that you deliver with “connection” (emphasis). 6 of the answers should be cards. But:

o One of the six carded Neg answers should be a circumvention argument – where the Neg gives a specific analytic explanation AS TO WHY THE PLAN WOULD GET LAWYERED. You can choose the circumvention card that you read.

o One of the six carded answers should also be the corporate privacy argument – the Lewis ’14 card (which appears in the Neg’s 1NC frontline). It’s under the tag: “Alt cause – corporate privacy infringements are far worse and the public readily accepts it.”

I recommend looking at the 1NC frontlines versus the privacy advantages for ideas – but you should (of course) tweak the answers.


What you should do to set up your 2AC

Same as yesterday’s drill.

As you prep the Aff you get to design the 2AC versus the terror disadvantage. I recommend looking at the 2AC terror frontline for ideas – but you should (of course) tweak the answers.

Here are the guidelines:

You can make up to 8 answers to the Terror disadvantage. 6 of the answers should be cards. 2 of the answers should be analytic arguments.


Speech times for this mini-debate


Speech times

1AC – will not be read
1NC – not timed
CX of the 1N – 90 seconds
2AC – up to 3:15, or 8 answers on the disad (whichever comes first)
CX of the 2A – 90 seconds
2NC – up to 6 minutes (you should extend both the Terror disad AND answer the case)
CX of the 2N – 90 seconds
1AR – 2:30
2NR – 3 minutes (if we go this far)


Flow your 1AC and 1NC materials


1AC - Privacy Advantage – longer version

Contention # ____ is Privacy

Privacy outweighs:
- Utilitarian impact calc is skewed; and
- Reject surveillance as a structural matter of power – even when it’s “reformed”, innocents are powerless unless neutral oversight’s in place.

Solove ‘7

Daniel Solove is an Associate Professor at George Washington University Law School and holds a J.D. from Yale Law School. He is one of the world’s leading experts in information privacy law and is well known for his academic work on privacy and for popular books on how privacy relates to information technology. He has written 9 books and more than 50 law review articles. From the article “‘I’ve Got Nothing to Hide’ and Other Misunderstandings of Privacy” – San Diego Law Review, Vol. 44, p. 745 – GWU Law School Public Law Research Paper No. 289 – available for download at: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=998565

It is time to return to the nothing to hide argument . The reasoning of this argument is that when it comes to government surveillance or use of personal data, there is no privacy violation if a person has nothing sensitive,

embarrassing, or illegal to conceal. Criminals involved in illicit activities have something to fear, but for the vast majority of people, their activities are not illegal or embarrassing. Understanding privacy as I have set forth reveals the flaw of the nothing to hide argument at its roots. Many commentators who respond to the argument attempt a direct refutation by trying to point to things that people would want to hide. But the problem with the

nothing to hide argument is the underlying assumption that privacy is about hiding bad things. Agreeing with this assumption concedes far too much ground and leads to an unproductive discussion of information people would likely want or not want to hide. As Bruce

Schneier aptly notes, the nothing to hide argument stems from a faulty “premise that privacy is about hiding a wrong.”75 The deeper problem with the nothing to hide argument is that it myopically views privacy as a form of concealment or secrecy. But understanding privacy as a plurality of related problems demonstrates that concealment of bad things is just one among many problems caused by government programs such as

the NSA surveillance and data mining. In the categories in my taxonomy, several problems are implicated. The NSA programs involve problems of

information collection, specifically the category of surveillance in the taxonomy. Wiretapping involves audio surveillance of people’s conversations. Data mining often begins with the collection of personal information, usually from various third parties that possess people’s data. Under current Supreme Court Fourth Amendment jurisprudence, when the government gathers data from third parties, there is no Fourth Amendment protection because people lack a “reasonable expectation of privacy” in information exposed to others.76 In United States v. Miller, the Supreme Court concluded that there is no reasonable expectation of privacy in bank records because “[a]ll of the documents obtained, including financial statements and deposit slips, contain only information voluntarily conveyed to the banks and exposed to their employees in the ordinary course of business.”77 In Smith v. Maryland, the Supreme Court held that people lack a reasonable expectation of privacy in the phone numbers they dial because they “know that they must convey numerical information to the phone company,” and therefore they cannot “harbor any general expectation that the numbers they dial will remain secret.”78 As I have argued extensively elsewhere, the lack of Fourth Amendment protection of third party records results in the government’s ability to access an extensive amount of personal information with minimal limitation or oversight.79 Many scholars have referred to information collection as a form of surveillance. Dataveillance, a term coined by Roger Clarke, refers to the “systemic use of personal data systems in the investigation or monitoring of the actions or communications of one or more persons.”80 Christopher Slobogin has

referred to the gathering of personal information in business records as “transaction surveillance.”81 Surveillance can create chilling effects on free speech, free association, and other First Amendment rights essential for democracy.82 Even surveillance of legal activities can inhibit people from engaging in them. The value of protecting against chilling effects is not measured simply by focusing on the particular individuals who are deterred from exercising their rights. Chilling effects harm society because, among other things, they reduce the range of viewpoints expressed and the degree of freedom with which to engage in political activity. The nothing to hide argument focuses primarily on the information collection problems associated with the NSA programs. It contends that limited surveillance of lawful activity will not chill behavior sufficiently to outweigh the security benefits. One can certainly quarrel with this argument, but one of the difficulties with chilling effects is that it is


often very hard to demonstrate concrete evidence of deterred behavior.83 Whether the NSA’s surveillance and collection of telephone records has deterred people from communicating particular ideas would be a difficult question to answer. Far too often, discussions of the NSA surveillance and data mining define the problem

solely in terms of surveillance. To return to my discussion of metaphor, the problems are not just Orwellian, but Kafkaesque. The NSA programs are problematic even if no information people want to hide is uncovered. In The Trial, the problem is not inhibited

behavior, but rather a suffocating powerlessness and vulnerability created by the court system’s use of personal data and its exclusion of the protagonist from having any knowledge or participation in the process. The harms consist of those created by bureaucracies—indifference, errors, abuses, frustration, and lack of transparency and accountability. One such harm, for example, which I call aggregation, emerges from the combination of small bits of seemingly innocuous data.84 When combined, the information becomes much more telling about a person. For the person who truly has nothing to hide, aggregation is not much of a problem. But in the stronger, less absolutist form of the nothing to hide argument, people argue that certain pieces of information are not something they would hide. Aggregation, however, means that by combining pieces of information we might not care to

conceal, the government can glean information about us that we might really want to conceal. Part of the allure of data mining for the government is its ability to reveal a lot about our personalities and activities by sophisticated means of analyzing data. Therefore,

without greater transparency in data mining, it is hard to claim that programs like the NSA data mining program will not reveal information people might want to hide, as we do not know precisely what is revealed. Moreover, data mining aims to be predictive of behavior, striving to prognosticate about our future actions. People who match certain profiles are deemed likely to engage in a similar pattern of behavior. It is

quite difficult to refute actions that one has not yet done. Having nothing to hide will not always dispel predictions of future activity. Another problem in the taxonomy, which is implicated by the NSA program, is the problem I refer to as exclusion.85 Exclusion is the problem caused when people are prevented from having knowledge about how their information is being used, as well as barred from being able to access and correct errors in that data. The NSA program involves a massive database of information that individuals cannot access. Indeed, the very existence of the program was kept secret for years.86 This kind of information processing, which forbids people’s knowledge or involvement, resembles in some ways a kind of due process problem. It is a structural problem involving the way people are treated by government institutions. Moreover, it creates a power imbalance between individuals and the government. To what extent should the Executive Branch and an agency such as the NSA, which is relatively insulated from the political process and public accountability, have a significant

power over citizens? This issue is not about whether the info rmation gathered is something people want to hide, but rather about the power and the structure of government. A related problem involves “secondary use.” Secondary use is the use of data obtained for one purpose for a different unrelated purpose without the person’s consent. The Administration has said little about how long the data will be stored, how it will be used, and what it could be used for in the future. The potential future uses of any piece of personal information are vast, and without limits or accountability on how that information is used, it is hard for people to assess the dangers of the data being in the government’s control. Therefore, the problem with the nothing to hide argument is that it focuses on just one or two particular kinds of privacy problems—the disclosure of personal information or

surveillance—and not others. It assumes a particular view about what privacy entails, and it sets the terms for debate in a manner that is often unproductive. It is

important to distinguish here between two ways of justifying a program such as the NSA surveillance and data mining program. The first

way is to not recognize a problem. This is how the nothing to hide argument works—it denies even the existence of a problem. The second manner of justifying such

a program is to acknowledge the problems but contend that the benefits of the NSA program outweigh

the privacy harms . The first justification influences the second, because the low value given to privacy is based upon a narrow view of the problem.

The key misunderstanding is that the nothing to hide argument views privacy in a particular way—as a form of secrecy, as the right to hide things. But there are many other types of harm involved beyond exposing one’s secrets to the government. Privacy problems are often difficult to recognize and redress because they create a panoply of types of harm. Courts,

legislators, and others look for particular types of harm to the exclusion of others, and their narrow focus blinds them to seeing other kinds of harms. One of the

difficulties with the nothing to hide argument is that it looks for a visceral kind of injury as opposed to a

structural one . Ironically, this underlying conception of injury is shared by both those advocating for greater privacy protections and those arguing

in favor of the conflicting interests to privacy . For example, law professor Ann Bartow argues that I have failed to describe

privacy harm s in a compelling manner in my article, A Taxonomy of Privacy, where I provide a framework for understanding the manifold different privacy problems.87 Bartow’s primary complaint is that my taxonomy “frames privacy harms in dry, analytical terms that fail to sufficiently identify and animate the compelling ways that privacy violations can negatively impact the lives of living, breathing human beings beyond simply provoking feelings of unease.”88 Bartow

claims that the taxonomy does not have “enough dead bodies” and that privacy’s “lack of blood and death,

or at least of broken bones and buckets of money, distances privacy harms from other categories of tort law. Most privacy problems lack dead bodies. Of course, there are exceptional cases such as the murders of Rebecca Shaeffer and Amy Boyer. Rebecca Shaeffer was an actress killed when a stalker obtained her address from a Department of Motor Vehicles record.90 This incident prompted Congress to pass the Driver’s Privacy Protection Act of 1994.91 Amy Boyer was murdered by a stalker who obtained her personal information, including her work address and Social Security number, from a database company.92 These examples aside, there is not a lot of death and gore in privacy law. If this is the standard to recognize a problem, then few privacy problems will be recognized. Horrific cases are not typical, and the purpose of my taxonomy is to explain why most privacy problems are


still harmful despite this fact. Bartow’s objection is actually very similar to the nothing to hide argument. Those advancing the nothing to hide

argument have in mind a particular kind of visceral privacy harm, one where privacy is violated only when something deeply embarrassing or

discrediting is revealed. Bartow’s quest for horror stories represents a similar desire to find visceral privacy harms. The problem is that not all

privacy harms are like this. At the end of the day, privacy is not a horror movie, and demanding more palpable harms will be difficult in many cases.

Yet there is still a harm worth addressing, even if it is not sensationalistic. In many instances, privacy is

threatened not by singular egregious acts, but by a slow series of relatively minor acts which gradually begin to add up. In this way, privacy problems resemble certain environmental harms which occur over time through a series of small acts by different actors. Bartow wants to point to a major spill, but gradual pollution by a multitude of different actors often creates worse problems. The law frequently struggles with recognizing harms that do not result in embarrassment, humiliation, or physical or psychological injury.93 For example, after the September 11 attacks, several airlines gave their passenger records to federal agencies in direct violation of their privacy policies. The federal agencies used the data to study airline security.94 A group of passengers sued Northwest Airlines for disclosing their personal information. One of their claims was that Northwest Airlines breached its contract with the passengers. In Dyer v. Northwest Airlines Corp., the court rejected the contract claim because “broad statements of company policy do not generally give rise to contract claims,” the passengers never claimed they relied upon the policy or even read it, and they “failed to allege any contractual damages arising out of the alleged breach.”95 Another court reached a similar conclusion.96 Regardless of the merits of the decisions on contract law, the cases represent a difficulty with the legal system in addressing privacy problems. The disclosure of the passenger records represented a “breach of confidentiality.”97 The problems caused by breaches of confidentiality do not merely consist of individual emotional distress; they involve a violation of trust within a relationship. There is a strong social value in ensuring that promises are kept and that trust is maintained in relationships between businesses and their customers. The problem of secondary use is also implicated in this case.98 Secondary use involves data collected for one purpose being used for an unrelated purpose without people’s consent. The airlines gave passenger information to the government for an entirely different purpose beyond that for which it was originally gathered. Secondary use problems often do not cause financial, or even psychological, injuries. Instead, the harm is one of power imbalance. In Dyer, data was disseminated in a way that ignored airline passengers’ interests in the data despite promises made in the privacy policy. Even if the passengers were unaware of the policy, there is a social value in ensuring that companies adhere to established limits on the way they use personal information. Otherwise, any stated limits become meaningless, and companies have discretion to boundlessly use data. Such a state of affairs can leave nearly all consumers in a powerless position. The harm, then, is less one to particular individuals than it is a structural harm. A similar problem surfaces in another case, Smith v. Chase Manhattan Bank.99 A group of plaintiffs sued Chase Manhattan Bank for selling customer information to third parties in violation of its privacy policy, which stated that the information would remain confidential. The court held that even presuming these allegations were true, the plaintiffs could not prove any actual injury: [T]he “harm” at the heart of this purported class action, is that class members were merely offered products and services which they were free to decline. This does not qualify as actual harm. 
The complaint does not allege any single instance where a named plaintiff or any class member suffered any actual harm due to the receipt of an unwanted telephone solicitation or a piece of junk mail.100 The court’s view of harm, however, did not account for the breach of

confidentiality. When balancing privacy against security, the privacy harms are often characterized in terms of injuries to the individual, and the interest in security is often characterized in a more broad societal way. The security interest in the NSA programs has often been defined improperly. In a Congressional hearing, Attorney General Alberto Gonzales stated: Our enemy is listening, and I cannot help but wonder if they are not shaking their heads in amazement at the thought that anyone would imperil such a sensitive program by leaking its existence in the first place, and smiling at the prospect that we might now disclose even more or perhaps even unilaterally disarm ourselves of a key tool in the war on terror.101 The balance between privacy and security is often cast in terms of whether a particular

government information collection activity should or should not be barred. The issue, however, often is not whether the NSA or other government

agencies should be allowed to engage in particular forms of information gathering; rather, it is what kinds of oversight and accountability we want in place when the government engages in searches and seizures. The government can

employ nearly any kind of investigatory activity with a warrant supported by probable cause. This is a mechanism of oversight—it forces government officials to justify their suspicions to a neutral judge or magistrate before engaging in the tactic. For example, electronic surveillance law allows for wiretapping, but limits the practice with judicial supervision, procedures to minimize the breadth of the wiretapping, and requirements that the law enforcement officials report back to the court to prevent abuses.102 It is these procedures that the Bush Administration has ignored by engaging in the warrantless NSA surveillance. The question is not whether we want the government to monitor such conversations, but whether the Executive Branch should adhere to the appropriate oversight procedures that Congress has enacted into law, or should covertly ignore any oversight. Therefore, the security interest should not get weighed in its totality against the privacy interest. Rather, what should get weighed is the extent of marginal limitation on the effectiveness of a government information gathering or data mining program by imposing judicial oversight and minimization procedures. Only in cases where such procedures will completely impair the government program should the security interest be weighed in total, rather than in the marginal difference between an unencumbered

program versus a limited one. Far too often, the balancing of privacy interests against security interests takes place in a

manner that severely shortchanges the privacy interest while inflating the security interests . Such is the

logic of the nothing to hide argument. When the argument is unpacked, and its underlying assumptions examined and challenged, we can see how it shifts the debate to its terms , in which it draws power from its unfair advantage. It is time to pull the curtain on the nothing to hide argument. Whether explicit or not, conceptions of privacy underpin nearly every argument made about privacy, even the common quip “I’ve got nothing to hide.” As I have sought to demonstrate in this essay, understanding privacy as a pluralistic conception reveals that we are often talking past each other when discussing privacy issues. By focusing more specifically on the related problems

under the rubric of “privacy,” we can better address each problem rather than ignore or conflate them. The nothing to hide argument speaks to some

problems, but not to others. It represents a singular and narrow way of conceiving of privacy, and it wins by excluding


consideration of the other problems often raised in government surveillance and data mining

programs. When engaged with directly, the nothing to hide argument can ensnare, for it forces the debate to focus on its narrow understanding of privacy. But when confronted with the plurality of privacy problems implicated by government data collection and use beyond

surveillance and disclosure, the nothing to hide argument, in the end, has nothing to say.

Put privacy before security. The ballot should create a side constraint where the ends don’t justify the means. This especially applies to data collection in the absence of probable cause.

Albright ‘14

Logan Albright is the Research Analyst at FreedomWorks, and is responsible for producing a wide variety of written content for print and the web, as well as conducting research for staff media appearances and special projects. He received his Master’s degree in economics from Georgia State University. “The NSA's Collateral Spying” – Freedom Works - 07/08/2014 - http://www.freedomworks.org/content/nsas-collateral-spying

In short, the report, based on information obtained by Edward Snowden, reveals that during the course of its ordinary, otherwise legal surveillance operations,

the NSA also collected data on large numbers of people who were not specifically targeted. The agency calls this practice “incidental surveillance.” I call it “collateral spying.” The report found that, on average, 9 out of every 10 people spied on were not the intended target. The NSA has the legal authority to obtain a warrant based on probable cause in order to surveil an individual. No one is disputing that. But when this targeting results in collateral spying on vast numbers of innocents, in the absence of probable cause and the corresponding warrants, that is a major problem. The NSA has asserted that such incidental data collection is inevitable, and to a certain extent that’s likely true. It is

understandable that in some situations the NSA may learn information about people other than the direct target, but this should obviously be minimized as far as possible , and at the very least the information should be immediately purged from government databases , not stored for years on end. In any case, the whole situation is indicative of

the agency’s cavalier attitude towards individual rights. While national security is a concern we all share, the ends do not justify the means when those means involve violating the constitutional protections afforded to citizens by our nation’s founders. It is

not okay to violate the rights of an innocent in the process of achieving a broader goal, even if that goal is noble. The way the NSA has been behaving is Machiavellian in the most literal sense. In his 16th century

political treatise, The Prince, Niccolo Machiavelli recognized a harsh reality of politics that still plagues us half a millennium later, writing, “A prince wishing to keep

his state is very often forced to do evil.” Taking Machiavelli’s advice as a green light for immoral behavior has been the problem with governments throughout history, a problem the founding fathers sought to avoid by setting down precise guidelines for what the government could and could not

do in the form of a Constitution. The disregard of these rules, and the argument that there should be a national security exception to the Fourth Amendment, undermines the entire purpose of the American experiment, and restores the European-style

tyrannies the revolutionaries fought against.

Even within a utilitarian framework, privacy outweighs for two reasons:

First – Structural bias. Their link inflates the security risk and their impact’s epistemologically wrong.

Solove ‘8


Daniel Solove is an Associate Professor at George Washington University Law School and holds a J.D. from Yale Law School. He is one of the world’s leading experts in information privacy law and is well known for his academic work on privacy and for popular books on how privacy relates to information technology. He has written 9 books and more than 50 law review articles. From the article: “Data Mining and the Security-Liberty Debate” – University of Chicago Law Review, Vol. 74, p. 343, 2008 – http://papers.ssrn.com/sol3/papers.cfm?abstract_id=990030

Data mining is one issue in a larger debate about security and privacy. Proponents of data mining justify it as an essential tool to protect our security. For example, Judge Richard Posner argues that “[i]n an era of global terrorism and proliferation of weapons of mass destruction, the government has a

compelling need to gather, pool, sift, and search vast quantities of information, much of it personal.”9 Moreover, proponents of security measures argue that we must provide the executive branch with the discretion it needs to protect us. We cannot second guess every decision made by government officials, and excessive meddling into issues of national security by judges and oth-ers lacking expertise will prove detrimental. For example, William Stuntz contends that “effective, active government—government that innovates, that protects people who need protecting, that acts aggressively when action is needed—is dying. Privacy and transparency are the diseases. We need to

find a vaccine, and soon.”10 Stuntz concludes that “[i]n an age of terrorism, privacy rules are not simply unaffordable. They are perverse.”11 We live in an “age of balancing,” and the prevailing view is that most rights

and civil liberties are not absolute.12 Thus, liberty must be balanced against security. But there are systematic problems with

how the balancing occurs that inflate the importance of the security interests and diminish the value of the liberty interests . In this essay, I examine some common difficulties in the way that liberty is balanced against security in the context of data mining. Countless discussions about the tradeoffs

between security and liberty begin by taking a security proposal and then weighing it against what it would cost our civil liberties. Often, the liberty interests are cast as individual rights and balanced against the security interests, which are cast in terms of the safety of society as a whole. Courts and

commentators defer to the government’s assertions about the effectiveness of the security interest. In the context of data mining, the liberty interest is limited by narrow understandings of privacy that neglect to account for many

privacy problems. As a result, the balancing concludes with a victory in favor of the security interest. But as I will argue,

important dimensions of data mining’s security benefits require more scrutiny, and the privacy concerns are significantly greater than currently

acknowledged. These problems have undermined the balancing process and skewed the results toward the security side of the scale. Debates about data mining begin with the assumption that it is an essential tool in protecting our security. Terrorists lurk among us, and ferreting

them out can be quite difficult. Examining data for patterns will greatly assist in this endeavor, the argument goes, because certain identifiable characteristics and behaviors are likely to be associated with terrorist activity. Often, little more is said, and the debate pro-ceeds to examine whether privacy is important enough to refrain from using such an effective terrorism-fighting tool. Many discussions about security and liberty proceed in this fashion. They commence by assuming that a particular security measure is effective, and the only remaining question is whether the liberty interest is strong enough to curtail that measure. But given the gravity of the security concerns over terrorism, the liberty interest has all but lost before it is even placed on the scale. Judge Richard Posner argues that judges should give the executive branch considerable deference when it comes to assessing the security measures it proposes. In his recent book, Not a Suicide Pact: The Constitution in a Time of National Emergency,13 Posner contends that judicial restraint is wise because “when in doubt about the actual or likely consequences of a measure, the pragmatic, empiricist judge will be inclined to give the other branches of government their head.”14 According to Posner, “[j]udges aren’t supposed to know much about national security.”15 Likewise, Eric Posner and Adrian Vermeule declare in their new book, Terror in the Balance: Security, Liberty, and the Courts,16 that “the executive branch, not Congress or the judicial branch, should make the tradeoff between security and liberty.”17 Moreover, Posner and Vermeule declare that during emergencies, “[c]onstitutional rights should be relaxed so that the executive can move forcefully against the threat.”18 The problem with such deference is that, historically, the executive branch has not always made the wisest national security decisions. Nonetheless, Posner and Vermeule contend that notwithstanding its mistakes, the executive branch is better than the judicial and legislative branches on institutional competence grounds.19 “Judges are generalists,” they observe, “and the political insulation that protects them from current politics also deprives them of information, especially information about novel security threats and necessary responses to those threats.”20 Posner and Vermeule argue that during emergencies, the “novelty of the threats and of the necessary responses makes judicial routines and evolved legal rules seem inapposite, even obstructive.”21 “Judicial routines” and “legal rules,” however, are the cornerstone of due process and the rule of law—the central building blocks of a free and democratic society. At many times, Posner, Vermeule, and other strong proponents of security seem to focus almost exclusively on what would be best for security when the objective should be establishing an optimal balance between security and liberty. Although such a balance may not promote security with maximum efficiency, it is one of the costs of living in a constitutional democracy as opposed to an authoritarian political regime. The executive branch may be the appropriate branch for developing security measures, but this does not mean that it is the most adept branch at establishing a balance between security and liberty. In our constitutional democracy, all branches have a role to play in making policy. 
Courts protect constitutional rights not as absolute restrictions on executive and legislative policymaking but as important interests to be balanced against government interests. As T. Alexander Aleinikoff notes, “balancing now dominates major areas of constitutional law.”22 Balancing occurs through various forms of judicial scrutiny, requiring courts to analyze the weight of the government’s interest, a particular measure’s effectiveness in protecting that interest, and the extent to which the government interest can be achieved without unduly infringing upon constitutional rights.23 For balancing to be meaningful, courts must scrutinize both the security and liberty interests. With deference, however, courts fail to give adequate scrutiny to security interests. For example, after the subway bombings in London, the New York Police Department began a program of random searches of people’s baggage on the subway. The searches were conducted without a warrant, probable cause, or even reasonable suspicion. In MacWade v Kelly,24 the United States Court of Appeals for the Second Circuit upheld the program against a Fourth Amendment challenge. Under the special needs doctrine, when exceptional circumstances make the warrant and probable cause requirements unnecessary, the search is analyzed in terms of whether it is “reasonable.”25 Reasonableness is determined by balancing the government interest in security against the interests in privacy and civil liberties.26 The weight of the security interest should turn on the extent to which the program effectively improves subway safety. The goals of the program may be quite laudable, but nobody questions the importance of subway safety. The critical issue is whether the search program is a sufficiently effective way of achieving those goals that it is worth the tradeoff in civil liberties. On this question, unfortunately, the court deferred to the law enforcement officials, stating that the issue “is best left to those with a unique understanding of, and responsibility for, limited public resources, including a finite number of police officers.” 27 In determining whether the program was “a reasonably effective means of addressing the government interest in deterring and detecting a terrorist attack on the subway system,”28 the court refused to examine the data to assess the program’s effectiveness.29 The way the court analyzed the government’s side of the balance would justify nearly any search, no matter how ineffective. Although courts should not take a know-it-all attitude, they should not defer on such a critical question as a security measure’s effectiveness. The problem with many security measures is that they are not wise expenditures of resources. A small number of random searches in a subway system of over four million riders a day seems more symbolic than effective because the odds of the police finding the terrorist with a bomb are very low. The government also argued that the program would deter terrorists from bringing bombs on subway trains, but nearly any kind of security measure can arguably produce some degree of deterrence. The key issue, which the court did not analyze, is whether the program would lead to deterrence significant enough to outweigh the curtailment of civil liberties. If courts fail to question the efficacy of security measures, then the security interest will prevail nearly all the time. 
Preventing terrorism has an immensely heavy weight, and any given security measure will provide a marginal advancement toward that goal. In the defer-ence equation, the math then becomes easy. At this point, it is futile to even bother to look at the civil liberties side of the balance. The government side has already won. Proponents of deference argue that if courts did not defer, then they would be substituting their judgment for that of executive officials, who have greater expertise in understanding security issues. Special expertise in national security, however, is often not necessary for balancing security and liberty. Judges and legislators should require the experts to persuasively justify the security measures being developed or used. Of course, in very complex areas of knowledge, such as advanced physics, nonexperts may find it difficult to understand the concepts and comprehend the terminology. But it is not clear that security expertise involves such sophisticated knowledge that it would be incomprehensible to nonexperts. Moreover, the deference argument conflates evaluating a particular security measure with creating such a measure. The point of judicial review is to subject the judgment of government officials to critical scrutiny rather than blindly accept their authority. Critical inquiry into factual matters is not the imposition of the judge’s own judgment for that of the decisionmaker under review.30 Instead, it is forcing government officials to explain and justify their policies. Few will quarrel with the principle that courts should not “second guess” the decisions of policy experts. But there is a difference between not “second guessing” and failing to critically evaluate the factual and empirical evidence justifying the government programs. Nobody will contest the fact that security is a compelling interest. The key issue in the balancing is the extent to which the security measure furthers the interest in security. As I have argued elsewhere, whenever courts defer to the government on the effectiveness of a government security measure, they are actually deferring to the government on the ultimate question as to whether the measure passes constitutional muster.31 Deference by the courts or legislature is an abdication of their function. Our constitutional system of government was created with three branches, a design structured to establish checks and balances against abuses of power. Institutional competence arguments are often made as if they are ineluctable truths about the nature of each governmental branch. But the branches have all evolved considerably throughout history. To the extent a branch lacks resources to carry out its function, the answer should not be to diminish the power of that branch but to provide it with the necessary tools so it can more effectively carry out its function. Far too often, unfortunately, discussions of institutional competence devolve into broad generalizations about each branch and unsubstantiated assertions about the inherent superiority of certain branches for making particular determinations. It is true, as Posner and Vermeule observe, that historically courts have been deferential to the executive during emergencies.32 Proponents of security measures often advance what I will refer to as the “pendulum theory”—that in times of crisis, the balance shifts more toward security and in times of peace, the balance shifts back toward liberty. 
For example, Chief Justice Rehnquist argues that the “laws will thus not be silent in time of war, but they will speak with a somewhat different voice.”33 Judge Posner contends that the liberties curtailed during times of crisis are often restored during times of peace.34 Deference is inevitable, and we should accept it without being overly concerned, for the pendulum will surely swing back. As I argue elsewhere, however, there have been many instances throughout US history of needless curtailments of liberty in the name of security, such as the Palmer Raids, the Japanese Internment, and the McCarthy communist hearings.35 Too often, such curtailments did not stem from any real security need but because of the “personal agendas and prejudices” of government officials.36 We should not simply accept these mistakes as inevitable; we should seek to prevent them from occurring. Hoping that the pendulum will swing back offers little consolation to those whose liberties were infringed or chilled. The protection of liberty is most important in times of crisis, when it is under the greatest


threat. During times of peace, when our judgment is not clouded by fear, we are less likely to make unnecessary sacrifices of liberty. The threat to liberty is lower in peacetime, and the need to protect it is not as dire. The greatest need for safeguarding liberty is during times when we least want to protect it. In order to balance security and liberty, we must assess the security interest. This involves evaluating two components—the gravity of the security

threat and the effectiveness of the security measures to address it. It is often merely assumed without question that the security threat from terrorism is one of the gravest dangers we face in the modern world. But this assumption might be wrong. Assessing the risk of harm from terrorism is very difficult

because terrorism is such an irregular occurrence and is constantly evolving. If we examine the data from previous terrorist attacks, however, the threat of terrorism has been severely overstated. For example, many people fear being killed in a terrorist attack, but based on statistics from terrorism in the United States, the risk of dying from terrorism is miniscule. According to political scientist John Mueller, [e]ven with the September 11 attacks included in the count . . . the number of Americans killed by international terrorism since the late

1960s (which is when the State Department began its accounting) is about the same as the number killed over the same period by lightning, or by accident-causing deer, or by severe allergic reactions to peanuts.37 Add up the eight deadliest terrorist attacks in US history, and they amount to fewer than four thousand fatalities.38

In contrast, flu and pneumonia deaths are estimated to be around sixty thousand per year.39 Another forty thousand die in auto accidents each year.40 Based on our experience with terrorism thus far, the risk of

dying from terrorism is very low on the relative scale of fatal risks. Dramatic events and media attention can cloud a rational assessment of risk. The year 2001 was not just notable for the September 11 attacks. It was also the summer of the shark bite, when extensive media coverage about shark

bites led to the perception that such attacks were on the rise. But there were fewer shark attacks in 2001 than in 2000 and fewer deaths as well, with only four in 2001 as compared to thirteen in 2000.41 And regardless of which year had more deaths, the number is so low that an attack is a freak occurrence. It is certainly true that our past experience with terrorism might not be a good indicator of the future. More treacherous terrorism is possible, such as the use of nuclear or biological weapons. This complicates our ability to assess the risk of harm from terrorism. Moreover, the intentional human conduct involved in terrorism creates a sense of outrage and fear that ordinary deaths do not engender. Alleviating fear must be taken into account, even if such fear is irrationally high in relation to other riskier events such as dying in a car crash. But enlightened policy must not completely give in to the panic

and irrational fear of the moment. It should certainly attempt to quell the fear, but it must do so thoughtfully. Nevertheless, most policymakers find it quite difficult to assess the threat of terrorism modestly. In the face of widespread public panic, it is hard for government officials to make only moderate changes. Something dramatic must be done,

or political heads will roll. Given the difficulty in assessing the security threat in a more rational manner, it is imperative that the courts meaningfully analyze the effectiveness of security measures. Even if panic and fear might lead to the gravity of the threat being overstated, we should at least ensure that the measures taken to promote security are sufficiently effective to justify the cost. Unfortunately, as I will discuss in the next section, rarely do discussions about the sacrifice of civil liberties explain the corresponding

security benefit, why such a benefit cannot be achieved in other ways, and why such a security measure is the best and most rational one to take. Little scrutiny is given to security measures. They are often just accepted as a given, no matter how ill-conceived or ineffective they might be. Some ineffective security measures are largely symbolic,

such as the New York City subway search program. The searches are unlikely to catch or deter terrorists because they involve only a miniscule fraction of the millions of daily passengers. Terrorists can just turn to other targets or simply attempt the bombing on another day or at another train station where searches are not taking place. The vice of symbolic security programs is that they result in needless sacrifices of liberty and drain resources from other, more effective security measures. Nevertheless, these programs have a virtue—they can ameliorate fear because they are highly visible. Ironically, the subway search program’s primary benefit was alleviating people’s fear (which

was probably too high), albeit in a deceptive manner (as the program did not add much in the way of security). Data mining represents another kind of security measure, one that currently has little proven effectiveness and little symbolic value. Data mining programs are often not visible enough to the public to quell much fear. Instead, their benefits come primarily from their actual effectiveness in

reducing terrorist threats, which remains highly speculative. Thus far, data mining is not very accurate in the behavioral predictions it makes. For example, there are

approximately 1.8 million airline passengers each day.42 A data mining program to identify terrorists with a false positive rate of 1 percent (which would be exceedingly low for such a program) would flag eighteen thousand people as false positives. This is quite a large number of innocent people. Why is the government so interested in data mining if it remains unclear whether it will ever be very accurate or workable? Part of the government’s interest in data mining stems from the aggressive marketing efforts of database companies. After September 11, database companies met with government officials and made a persuasive pitch about the virtues of data mining.43 The

technology sounds quite dazzling when presented by skillful marketers, and it can work quite well in the commercial setting. The problem, however, is that just because data mining might be effective for businesses trying to predict customer behavior does not make it effective for the government trying to predict who will engage in terrorism. A high level of accuracy is not necessary when data mining is used by businesses to target

marketing to consumers, because the cost of error to individuals is minimal. Amazon.com, for example, engages in data mining to determine which books its customers are likely to find of interest by comparing bookbuying patterns among its customers. Although it is far from precise, it need not be because there are few bad consequences if it makes a wrong book recommendation. Conversely, the consequences are vastly greater for government

data mining. Ultimately, I do not believe that the case has been made that data mining is a wise expenditure of security resources. Those who advocate for security should be just as outraged as those on the liberty side of the debate. Although courts should not micromanage which security measures the government

chooses, they should examine the effectiveness of any given security measure to weigh it against the liberty costs. Courts should not tell the executive branch to modify a security measure just because they are not convinced it is

the best one, but they should tell the executive that a particular security measure is not effective enough to outweigh the liberty costs. The very point of protecting liberty is to demand that sacrifices to liberty are not in vain and that security interests, which compromise civil liberties, are sufficiently effective to warrant the cost.
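
Side note on the math in that last Solove card: the “eighteen thousand people as false positives” figure follows directly from the base rates he cites. The sketch below (Python) is just a quick check of that arithmetic, not part of the evidence. It assumes the card’s figures of roughly 1.8 million daily airline passengers and a 1 percent false-positive rate; the count of actual terrorists is a made-up input used only to illustrate the base-rate problem.

# Quick check of the false-positive arithmetic from the Solove '8 card.
# The passenger count and false-positive rate come from the card; the
# "hypothetical_terrorists" value is invented purely for illustration.

daily_passengers = 1_800_000      # ~1.8 million airline passengers per day (from the card)
false_positive_rate = 0.01        # 1% of innocent passengers wrongly flagged (from the card)
hypothetical_terrorists = 10      # made-up number, just to show the base-rate problem

innocents = daily_passengers - hypothetical_terrorists
false_positives = innocents * false_positive_rate   # innocent people flagged per day
true_positives = hypothetical_terrorists            # assume the program never misses anyone

total_flagged = false_positives + true_positives
share_innocent = false_positives / total_flagged

print(f"Flagged per day:   {total_flagged:,.0f}")
print(f"Innocent of those: {false_positives:,.0f} ({share_innocent:.2%})")
# Prints roughly 18,000 flags per day, about 99.9% of them innocent people.

The takeaway for the 2AC is the same point the card makes: when the thing being screened for is extremely rare, even a very accurate program mostly flags innocent people.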

Second – Relative certainty. The disad only might cause violence – surveillance definitely does. Privacy is paramount for dignity and for protecting our unique individuality.

Schneier ‘6


Bruce Schneier is a fellow at the Berkman Center for Internet & Society at Harvard Law School, a program fellow at the New America Foundation's Open Technology Institute and the CTO of Resilient Systems. He is the author of Beyond Fear: Thinking Sensibly About Security in an Uncertain World. Commentary, “The Eternal Value of Privacy”, WIRED, May 18, 2006, http://www.wired.com/news/columns/1,70886-0.html

The most common retort against privacy advocates -- by those in favor of ID checks, cameras, databases, data mining and other wholesale surveillance measures -- is this line: "If you aren't doing anything wrong, what do you have to hide?" Some clever answers: "If I'm not doing anything wrong, then you have no cause to watch me." "Because the government gets to define what's wrong, and they keep changing the definition." "Because you might do something wrong with my information." My problem with quips like these -- as right as they are -- is that they accept the premise that privacy is about hiding a wrong. It's not. Privacy is an inherent human right, and a requirement for maintaining the human condition with dignity and respect. Two proverbs say it best: Quis custodiet custodes ipsos? ("Who watches the watchers?") and "Absolute power corrupts absolutely." Cardinal Richelieu understood the value of surveillance when he famously said, "If one would give me six lines written by the hand of the most honest man, I would find something in them to have him hanged." Watch someone long enough, and you'll find something to arrest -- or just blackmail -- with. Privacy is important because without it, surveillance information will be abused: to peep, to sell to marketers and to spy on political enemies -- whoever they happen to be at the time. Privacy protects us from abuses by those in power, even if we're doing nothing wrong at the time of surveillance. We do nothing wrong when we make love or go to the bathroom. We are not deliberately hiding anything when we seek out private places for reflection or conversation. We keep private journals, sing in the privacy of the shower, and write letters to secret lovers and then burn them. Privacy is a basic human need. A future in which privacy would face constant assault was so alien to the framers of the Constitution that it never occurred to them to call out privacy as an explicit right. Privacy was inherent to the nobility of their being and their cause. Of course being watched in your own home was unreasonable. Watching at all was an act so unseemly as to be inconceivable among gentlemen in their day. You watched convicted criminals, not free citizens. You ruled your own home. It's intrinsic to the concept of liberty. For if we are observed in all matters, we are constantly under threat of correction, judgment, criticism, even plagiarism of our own uniqueness. We become children, fettered under watchful eyes, constantly fearful that -- either now or in the uncertain future -- patterns we leave behind will be brought back to implicate us, by whatever authority has now become focused upon our once-private and innocent acts. We lose our individuality, because everything we do is observable and recordable. How many of us have paused during conversation in the past four-and-a-half years, suddenly aware that we might be eavesdropped on? Probably it was a phone conversation, although maybe it was an e-mail or instant-message exchange or a conversation in a public place. Maybe the topic was terrorism, or politics, or Islam. We stop suddenly, momentarily afraid that our words might be taken out of context, then we laugh at our paranoia and go on. But our demeanor has changed, and our words are subtly altered. This is the loss of freedom we face when our privacy is taken from us. This is life in former East Germany, or life in Saddam Hussein's Iraq. And it's our future as we allow an ever-intrusive eye into our personal, private lives. Too many wrongly characterize the debate as "security versus privacy." The real choice is liberty versus control. Tyranny, whether it arises under threat of foreign physical attack or under constant domestic authoritative scrutiny, is still tyranny. Liberty requires security without intrusion, security plus privacy. Widespread police surveillance is the very definition of a police state. And that's why we should champion privacy even when we have nothing to hide.

The 4th Amendment outweighs. An ethical ballot can’t even consider their security impact. That would treat privacy as mere inconvenience – obliterating liberty.

Smith ‘14


Peter J. Smith IV – attorney with the law firm LUKINS & ANNIS and lead counsel on this brief, which was signed by the entire legal team, including four attorneys from the ELECTRONIC FRONTIER FOUNDATION and three additional attorneys from the AMERICAN CIVIL LIBERTIES UNION FOUNDATION - APPELLANT'S REPLY BRIEF in the matter of Smith v. Obama – before the United States Ninth Circuit Court of Appeals. October 16th, 2014 – available at: https://www.eff.org/document/smiths-reply-brief

The government argues that it would be more convenient for law enforcement if the courts established a bright-line rule that extinguished all privacy in information shared with others. See Gov't Br. 40. The government is surely right about this. The Bill of Rights exists, however, not to serve governmental efficiency but to safeguard individual liberty. Cf. Bailey v. United States, 133 S. Ct. 1031, 1041 (2013) ("'[T]he mere fact that law enforcement may be made more efficient can never by itself justify disregard of the Fourth Amendment.'" (quoting Mincey v. Arizona, 437 U.S. 385, 393 (1978))); Riley, 134 S. Ct. at 2493 ("Our cases have historically recognized that the warrant requirement is 'an important working part of our machinery of government,' not merely 'an inconvenience to be somehow "weighed" against the claims of police efficiency.'" (quoting Coolidge v. New Hampshire, 403 U.S. 443, 481 (1971))). Notably, the government made the same appeal for a bright-line rule in Jones and Maynard, see, e.g., Brief for the United States at 13, Jones, 132 S. Ct. 945, but the Supreme Court and D.C. Circuit rejected it.

Reject those privacy violations as an a priori imperative. That also proves the disad is all hype.

Wyden '14

(et al; This amicus brief was issued by three US Senators - Ron Wyden, Mark Udall and Martin Heinrich. Wyden and Udall sat on the Senate Select Committee on Intelligence and had access to the meta-data program. "BRIEF FOR AMICI CURIAE SENATOR RON WYDEN, SENATOR MARK UDALL, AND SENATOR MARTIN HEINRICH IN SUPPORT OF PLAINTIFF-APPELLANT, URGING REVERSAL OF THE DISTRICT COURT" – Amicus Brief for Smith v. Obama – before the United States Ninth Circuit Court of Appeals - Appeal from the United States District Court for the District of Idaho, The Honorable B. Lynn Winmill, Chief District Judge, Presiding, Case No. 2:13-cv-00257-BLW – Sept 9th, 2014 – This Amicus Brief was prepared by CHARLES S. SIMS from the law firm PROSKAUER ROSE LLP. This pdf can be obtained at: https://www.eff.org/document/wyden-udall-heinrich-smith-amicus)

Respect for Americans' privacy is not a matter of convenience, but a Constitutional imperative. Despite years of receiving classified briefings and asking repeated questions of intelligence officials in both private and public settings, amici have seen no evidence that bulk collection accomplishes anything that other less intrusive surveillance authorities could not. Bulk collection is not only a significant threat to the constitutional liberties of Americans, but a needless one.

Reject utilitarianism. It shatters all ethics and justifies the worst atrocities.

Holt '95

(Jim Holt is an American philosopher, author and essayist. He has contributed to The New York Times, The New York Times Magazine, The New York Review of Books, The New Yorker, The American Scholar, and Slate. He hosted a weekly radio spot on BBC for ten years and he writes frequently about politics and philosophy. New York Times, “Morality, Reduced To Arithmetic,” August 5, p. Lexis)


Can the deliberate massacre of innocent people ever be condoned? The atomic bombs dropped on Hiroshima and Nagasaki on Aug. 6 and 9, 1945, resulted in the deaths of 120,000 to 250,000 Japanese by incineration and radiation poisoning. Although a small fraction of the victims were soldiers, the great majority were noncombatants -- women, children, the aged. Among the justifications that have been put forward for President Harry Truman's decision to use the bomb, only one is worth taking seriously -- that it saved lives. The alternative, the reasoning goes, was to launch an invasion. Truman claimed in his memoirs that this would have cost another half a million American lives. Winston Churchill put the figure at a million. Revisionist historians have cast doubt on such numbers. Wartime documents suggest that military planners expected around 50,000 American combat deaths in an invasion. Still, when Japanese casualties, military and civilian, are taken into account, the overall invasion death toll on both sides would surely have ended up surpassing that from Hiroshima and Nagasaki. Scholars will continue to argue over whether there were other, less catastrophic ways to force Tokyo to surrender. But given the fierce obstinacy of the Japanese militarists, Truman and his advisers had some grounds for believing that nothing short of a full-scale invasion or the annihilation of a big city with an apocalyptic new weapon would have succeeded. Suppose they were right. Would this prospect have justified the intentional mass killing of the people of Hiroshima and Nagasaki? In the debate over the question, participants on both sides have been playing the numbers game. Estimate the hypothetical number of lives saved by the bombings, then add up the actual lives lost. If the first number exceeds the second, then Truman did the right thing; if the reverse, it was wrong to have dropped the bombs. That is one approach to the matter -- the utilitarian approach. According to utilitarianism, a form of moral reasoning that arose in the 19th century, the goodness or evil of an action is determined solely by its consequences. If somehow you can save 10 lives by boiling a baby, go ahead and boil that baby. There is, however, an older ethical tradition, one rooted in Judeo-Christian theology, that takes a quite different view. The gist of it is expressed by St. Paul's condemnation of those who say, "Let us do evil, that good may come." Some actions, this tradition holds, can never be justified by their consequences; they are absolutely forbidden. It is always wrong to boil a baby even if lives are saved thereby. Applying this absolutist morality to war can be tricky. When enemy soldiers are trying to enslave or kill us, the principle of self-defense permits us to kill them (though not to slaughter them once they are taken prisoner). But what of those who back them? During World War II, propagandists made much of the "indivisibility" of modern warfare: the idea was that since the enemy nation's entire economic and social strength was deployed behind its military forces, the whole population was a legitimate target for obliteration. "There are no civilians in Japan," declared an intelligence officer of the Fifth Air Force shortly before the Hiroshima bombing, a time when the Japanese were popularly depicted as vermin worthy of extermination. The boundary between combatant and noncombatant can be fuzzy, but the distinction is not meaningless, as the case of small children makes clear. Yet is wartime killing of those who are not trying to harm us always tantamount to murder? When naval dockyards, munitions factories and supply lines are bombed, civilian carnage is inevitable. The absolutist moral tradition acknowledges this by a principle known as double effect: although it is always wrong to kill innocents deliberately, it is sometimes permissible to attack a military target knowing some noncombatants will die as a side effect. The doctrine of double effect might even justify bombing a hospital where Hitler is lying ill. It does not, however, apply to Hiroshima and Nagasaki. Transformed into hostages by the technology of aerial bombardment, the people of those cities were intentionally executed en masse to send a message of terror to the rulers of Japan. The practice of ordering the massacre of civilians to bring the enemy to heel scarcely began with Truman. Nor did the bomb result in casualties of a new order of magnitude. The earlier bombing of Tokyo by incendiary weapons killed some 100,000 people. What Hiroshima and Nagasaki did mark, by the unprecedented need for rationalization they presented, was the triumph of utilitarian thinking in the conduct of war. The conventional code of noncombatant immunity -- a product of several centuries of ethical progress among nations, which had been formalized by an international commission in the 1920's in the Hague -- was swept away. A simpler axiom took its place: since war is hell, any means necessary may be used to end, in Churchill's words, "the vast indefinite butchery." It is a moral calculus that, for all its logical consistency, offends our deep-seated intuitions about the sanctity of life -- our conviction that a person is always to be treated as an end, never as a means. Left up to the warmakers, moreover, utilitarian calculations are susceptible to bad-faith reasoning: tinker with the numbers enough and virtually any atrocity can be excused in the national interest. In January, the world commemorated the 50th anniversary of the liberation of Auschwitz, where mass slaughter was committed as an end in itself -- the ultimate evil. The moral nature of Hiroshima is ambiguous by contrast. Yet in the postwar era, when governments do not hesitate to treat the massacre of civilians as just another strategic option, the bomb's sinister legacy is plain: it has inured us to the idea of reducing innocents to instruments and morality to arithmetic.


1NC – Terror Disad

Uniqueness – Domestic surveillance successfully checks terror incidents now. Prefer longitudinal studies.

Boot ‘13

Max Boot is a Senior Fellow in National Security Studies at the Council on Foreign Relations. In 2004, he was named by the World Affairs Councils of America as one of "the 500 most influential people in the United States in the field of foreign policy." In 2007, he won the Eric Breindel Award for Excellence in Opinion Journalism. From 1992 to 1994 he was an editor and writer at the Christian Science Monitor. Boot holds a bachelor's degree in history, with high honors, from the University of California, Berkeley and a master's degree in history from Yale University. Boot has served as an adviser to U.S. commanders in Iraq and Afghanistan. He is the published author of Invisible Armies: An Epic History of Guerrilla Warfare from Ancient Times to the Present. From the article: “Stay calm and let the NSA carry on” - LA Times – June 9th - http://articles.latimes.com/2013/jun/09/opinion/la-oe-boot-nsa-surveillance-20130609

After 9/11, there was a widespread expectation of many more terrorist attacks on the United States. So far that hasn't happened. We haven't escaped entirely unscathed (see Boston Marathon, bombing of), but on the whole we have been a lot safer than most security experts, including me, expected. In light of the current controversy over the National Security Agency's monitoring of telephone calls and emails, it is worthwhile to ask: Why is that? It is certainly not due to any change of heart among our enemies. Radical Islamists still want to kill American infidels. But the vast majority of the time, they fail. The Heritage Foundation estimated last year that 50 terrorist attacks on the American homeland had been foiled since 2001. Some, admittedly, failed through sheer incompetence on the part of the would-be terrorists. For instance, Faisal Shahzad, a Pakistani American jihadist, planted a car bomb in Times Square in 2010 that started smoking before exploding, thereby alerting two New Yorkers who in turn called police, who were able to defuse it. But it would be naive to adduce all of our security success to pure serendipity. Surely more attacks would have succeeded absent the ramped-up counter-terrorism efforts undertaken by the U.S. intelligence community, the military and law enforcement. And a large element of the intelligence community's success lies in its use of special intelligence — that is, communications intercepts. The CIA is notoriously deficient in human intelligence — infiltrating spies into terrorist organizations is hard to do, especially when we have so few spooks who speak Urdu, Arabic, Persian and other relevant languages. But the NSA is the best in the world at intercepting communications. That is the most important technical advantage we have in the battle against fanatical foes who will not hesitate to sacrifice their lives to take ours. Which brings us to the current kerfuffle over two NSA monitoring programs that have been exposed by the Guardian and the Washington Post. One program apparently collects metadata on all telephone calls made in the United States. Another program provides access to all the emails, videos and other data found on the servers of major Internet firms such as Google, Apple and Microsoft. At first blush these intelligence-gathering activities raise the specter of Big Brother snooping on ordinary American citizens who might be cheating on their spouses or bad-mouthing the president. In fact, there are considerable safeguards built into both programs to ensure that doesn't happen. The phone-monitoring program does not allow the NSA to listen in on conversations without a court order. All that it can do is to collect information on the time, date and destination of phone calls. It should go without saying that it would be pretty useful to know if someone in the U.S. is calling a number in Pakistan or Yemen that is used by a terrorist organizer. As for the Internet-monitoring program, reportedly known as PRISM, it is apparently limited to "non-U.S. persons" who are abroad and thereby enjoy no constitutional protections. These are hardly rogue operations. Both programs were initiated by President George W. Bush and continued by President Obama with the full knowledge and support of Congress and continuing oversight from the federal judiciary. That's why the leaders of both the House and Senate intelligence committees, Republicans and Democrats alike, have come to the defense of these activities. It's possible that, like all government programs, these could be abused — see, for example, the IRS making life tough on tea partiers. But there is no evidence of abuse so far and plenty of evidence — in the lack of successful terrorist attacks — that these programs have been effective in disrupting terrorist plots. Granted there is something inherently creepy about Uncle Sam scooping up so much information about us. But Google, Facebook, Amazon, Twitter, Citibank and other companies know at least as much about us, because they use very similar data-mining programs to track our online movements. They gather that information in order to sell us products, and no one seems to be overly alarmed. The NSA is gathering that information to keep us safe from terrorist attackers. Yet somehow its actions have become a "scandal," to use a term now loosely being tossed around. The real scandal here is that the Guardian and Washington Post are compromising our national security by telling our enemies about our intelligence-gathering capabilities. Their news stories reveal, for example, that only nine Internet companies share information with the NSA. This is a virtual invitation to terrorists to use other Internet outlets for searches, email, apps and all the rest. No intelligence effort can ever keep us 100% safe, but to stop or scale back the NSA's special intelligence efforts would amount to unilateral disarmament in a war against terrorism that is far from over.

(Note to students: a “longitudinal study” is research carried out over an extended period of time. In this case, several years.)

Link – curtailing surveillance boosts terror risks. That risk's serious and underestimated.

Lewis '14

James Andrew Lewis is a senior fellow and director of the Strategic Technologies Program at the Center for Strategic and International Studies in Washington, D.C., where he writes on technology, security, and the international economy. Before joining CSIS, he worked at the US Departments of State and Commerce as a Foreign Service officer and as a member of the Senior Executive Service. His diplomatic experience included negotiations on military basing in Asia, the Cambodia peace process, and the five-power talks on arms transfer restraint. Lewis received his Ph.D. from the University of Chicago. “Underestimating Risk in the Surveillance Debate” - CENTER FOR STRATEGIC & INTERNATIONAL STUDIES - STRATEGIC TECHNOLOGIES PROGRAM – December - http://csis.org/publication/underestimating-risk-surveillance-debate

Americans are reluctant to accept terrorism is part of their daily lives, but attacks have been planned or attempted against American targets (usually airliners or urban areas) almost every year since 9/11. Europe faces even greater risk, given the thousands of European Union citizens who will return hardened and radicalized from fighting in Syria and Iraq. The threat of attack is easy to exaggerate, but that does not mean it is nonexistent. Australia's then-attorney general said in August 2013 that communications surveillance had stopped four "mass casualty events" since 2008. The constant planning and preparation for attack by terrorist groups is not apparent to the public. The dilemma in assessing risk is that it is discontinuous. There can be long periods with no noticeable activity, only to have the apparent calm explode. The debate over how to reform communications surveillance has discounted this risk. Communications surveillance is an essential law enforcement and intelligence tool. There is no replacement for it. Some suggestions for alternative approaches to surveillance, such as the idea that the National Security Agency (NSA) only track known or suspected terrorists, reflect wishful thinking, as it is the unknown terrorist who will inflict the greatest harm.

Vigilance link – Strong intel gathering is key to discouraging the initiation of BW attacks.

Pittenger ‘14

US Rep. Robert Pittenger, chair of Congressional Task Force on Terrorism, “Bipartisan bill on NSA data collection protects both privacy and national security” - Washington Examiner, 6/9/14, http://washingtonexaminer.com/rep.-robert-pittenger-bipartisan-bill-on-nsa-data-collection-protects-both-privacy-and-national-security/article/2549456?custom_click=rss&utm_campaign=Weekly+Standard+Story+Box&utm_source=weeklystandard.com&utm_medium=referral


This February, I took that question to a meeting of European Ambassadors at the Organization for Security and Cooperation in Europe. During the conference, I asked three questions: 1. What is the current worldwide terrorist threat? 2. What is America's role in addressing and mitigating this threat? 3. What role does intelligence data collection play in this process, given the multiple platforms for attack including physical assets, cyber, chemical, biological, nuclear and the electric grid? Each ambassador acknowledged the threat was greater today than before 9/11, with al Qaeda and other extreme Islamist terrorists stronger, more sophisticated, and having a dozen or more training camps throughout the Middle East and Africa. As to the role of the United States, they felt our efforts were primary and essential for peace and security around the world. Regarding the intelligence-gathering, their consensus was, "We want privacy, but we must have your intelligence." As a European foreign minister stated to me, "Without U.S. intelligence, we are blind." We cannot yield to those loud but misguided voices who view the world as void of the deadly and destructive intentions of unrelenting terrorists. The number of terrorism-related deaths worldwide doubled between 2012 and 2013, jumping from 10,000 to 20,000 in just one year. Now is not the time to stand down. Those who embrace an altruistic worldview should remember that vigilance and strength have deterred our enemies in the past. That same commitment is required today to defeat those who seek to destroy us and our way of life. We must make careful, prudent use of all available technology to counter their sophisticated operations if we are to maintain our freedom and liberties.

Bioterror attacks cause extinction.

Myhrvold '13

Nathan Myhrvold. Began college at age 14; BS and master's from UCLA; master's and PhD from Princeton. "Strategic Terrorism: A Call to Action," Working Draft, The Lawfare Research Paper Series, Research Paper No. 2 – 2013

As horrible as this would be, such a pandemic is by no means the worst attack one can imagine, for several reasons. First, most of the classic bioweapons are based on 1960s and 1970s technology because the 1972 treaty halted bioweapons development efforts in the United States and most other Western countries. Second, the Russians, although solidly committed to biological weapons long after the treaty deadline, were never on the cutting edge of biological research. Third and most important, the science and technology of molecular biology have made enormous advances, utterly transforming the field in the last few decades. High school biology students routinely perform molecular-biology manipulations that would have been impossible even for the best superpower-funded program back in the heyday of biological-weapons research. The biowarfare methods of the 1960s and 1970s are now as antiquated as the lumbering mainframe computers of that era. Tomorrow's terrorists will have vastly more deadly bugs to choose from. Consider this sobering development: in 2001, Australian researchers working on mousepox, a nonlethal virus that infects mice (as chickenpox does in humans), accidentally discovered that a simple genetic modification transformed the virus.10, 11 Instead of producing mild symptoms, the new virus killed 60% of even those mice already immune to the naturally occurring strains of mousepox. The new virus, moreover, was unaffected by any existing vaccine or antiviral drug. A team of researchers at Saint Louis University led by Mark Buller picked up on that work and, by late 2003, found a way to improve on it: Buller's variation on mousepox was 100% lethal, although his team of investigators also devised combination vaccine and antiviral therapies that were partially effective in protecting animals from the engineered strain.12, 13 Another saving grace is that the genetically altered virus is no longer contagious. Of course, it is quite possible that future tinkering with the virus will change that property, too. Strong reasons exist to believe that the genetic modifications Buller made to mousepox would work for other poxviruses and possibly for other classes of viruses as well. Might the same techniques allow chickenpox or another poxvirus that infects humans to be turned into a 100% lethal bioweapon, perhaps one that is resistant to any known antiviral therapy? I've asked this question of experts many times, and no one has yet replied that such a manipulation couldn't be done. This case is just one example. Many more are pouring out of scientific journals and conferences every year. Just last year, the journal Nature published a controversial study done at the University of Wisconsin–Madison in which virologists enumerated the changes one would need to make to a highly lethal strain of bird flu to make it easily transmitted from one mammal to another.14 Biotechnology is advancing so rapidly that it is hard to keep track of all the new potential threats. Nor is it clear that anyone is even trying. In addition to lethality and drug resistance, many other parameters can be played with, given that the infectious power of an epidemic depends on many properties, including the length of the latency period during which a person is contagious but asymptomatic. Delaying the onset of serious symptoms allows each new case to spread to more people and thus makes the virus harder to stop. This dynamic is perhaps best illustrated by HIV, which is very difficult to transmit compared with smallpox and many other viruses. Intimate contact is needed, and even then, the infection rate is low. The balancing factor is that HIV can take years to progress to AIDS, which can then take many more years to kill the victim. What makes HIV so dangerous is that infected people have lots of opportunities to infect others. This property has allowed HIV to claim more than 30 million lives so far, and approximately 34 million people are now living with this virus and facing a highly uncertain future.15 A virus genetically engineered to infect its host quickly, to generate symptoms slowly—say, only after weeks or months—and to spread easily through the air or by casual contact would be vastly more devastating than HIV. It could silently penetrate the population to unleash its deadly effects suddenly. This type of epidemic would be almost impossible to combat because most of the infections would occur before the epidemic became obvious. A technologically sophisticated terrorist group could develop such a virus and kill a large part of humanity with it. Indeed, terrorists may not have to develop it themselves: some scientist may do so first and publish the details. Given the rate at which biologists are making discoveries about viruses and the immune system, at some point in the near future, someone may create artificial pathogens that could drive the human race to extinction. Indeed, a detailed species-elimination plan of this nature was openly proposed in a scientific journal. The ostensible purpose of that particular research was to suggest a way to extirpate the malaria mosquito, but similar techniques could be directed toward humans.16 When I've talked to molecular biologists about this method, they are quick to point out that it is slow and easily detectable and could be fought with biotech remedies. If you challenge them to come up with improvements to the suggested attack plan, however, they have plenty of ideas. Modern biotechnology will soon be capable, if it is not already, of bringing about the demise of the human race — or at least of killing a sufficient number of people to end high-tech civilization and set humanity back 1,000 years or more. That terrorist groups could achieve this level of technological sophistication may seem far-fetched, but keep in mind that it takes only a handful of individuals to accomplish these tasks. Never has lethal power of this potency been accessible to so few, so easily. Even more dramatically than nuclear proliferation, modern biological science has frighteningly undermined the correlation between the lethality of a weapon and its cost, a fundamentally stabilizing mechanism throughout history. Access to extremely lethal agents—lethal enough to exterminate Homo sapiens—will be available to anybody with a solid background in biology, terrorists included.

The Disad turns the case via rollback and new civil liberty violations. Status Quo detection is key.

Clarke ‘13

(et al; This is the Final Report and Recommendations of The President's Review Group on Intelligence and Communications Technologies. President Obama ordered a blue-ribbon task force to review domestic surveillance. This report releases the findings of that group. The report was headed by five experts – including Richard Alan Clarke, who is the former National Coordinator for Security, Infrastructure Protection, and Counter-terrorism for the United States. Other expert contributors include Michael Joseph Morell, who was the deputy director of the Central Intelligence Agency and served as acting director twice, in 2011 and from 2012 to 2013, and Cass Robert Sunstein, who was the Administrator of the White House Office of Information and Regulatory Affairs in the Obama administration and is currently a Professor of Law at Harvard Law School. "LIBERTY AND SECURITY IN A CHANGING WORLD" – December 12th, 2013 – Easily obtained via a Google search, or directly at: https://www.whitehouse.gov/sites/default/files/docs/2013-12-12_rg_final_report.pdf)

The September 11 attacks were a vivid demonstration of the need for detailed information about the activities of potential terrorists. This was so for several reasons. First, some information, which could have been useful, was not collected and other information, which could have helped to prevent the attacks, was not shared among departments. Second, the scale of damage that 21st-century terrorists can inflict is far greater than anything that their predecessors could have imagined. We are no longer dealing with threats from firearms and conventional explosives, but with the possibility of weapons of mass destruction, including nuclear devices and biological and chemical agents. The damage that such attacks could inflict on the nation, measured in terms of loss of life, economic and social disruption, and the consequent sacrifice of civil liberties, is extraordinary. The events of September 11 brought this home with crystal clarity. Third, 21st-century terrorists operate within a global communications network that enables them both to hide their existence from outsiders and to communicate with one another across continents at the speed of light. Effective safeguards against terrorist attacks require the technological capacity to ferret out such communications in an international communications grid. Fourth, many of the international terrorists that the United States and other nations confront today cannot realistically be deterred by the fear of punishment. The conventional means of preventing criminal conduct—the fear of capture and subsequent punishment—has relatively little role to play in combating some contemporary terrorists. Unlike the situation during the Cold War, in which the Soviet Union was deterred from launching a nuclear strike against the United States in part by its fear of a retaliatory counterattack, the terrorist enemy in the 21st-century is not a nation state against which the United States and its allies can retaliate with the same effectiveness. In such circumstances, detection in advance is essential in any effort to "provide for the common defence." Fifth, the threat of massive terrorist attacks involving nuclear, chemical, or biological weapons can generate a chilling and destructive environment of fear and anxiety among our nation's citizens. If Americans came to believe that we are infiltrated by enemies we cannot identify and who have the power to bring death, destruction, and chaos to our lives on a massive scale, and that preventing such attacks is beyond the capacity of our government, the quality of national life would be greatly imperiled. Indeed, if a similar or even more devastating attack were to occur in the future, there would almost surely be an impulse to increase the use of surveillance technology to prevent further strikes, despite the potentially corrosive effects on individual freedom and self-governance.