

Texas A&M University School of Law

Texas A&M Law Scholarship

Faculty Scholarship

7-2020

Artificial Financial Intelligence

William Magnuson Texas A&M University School of Law, [email protected]

Follow this and additional works at: https://scholarship.law.tamu.edu/facscholar

Part of the Civil Rights and Discrimination Commons, Computer Law Commons, Internet Law Commons, and the Law and Society Commons

Recommended Citation: William Magnuson, Artificial Financial Intelligence, 10 HARV. BUS. L. REV. 337 (2020). Available at: https://scholarship.law.tamu.edu/facscholar/1435

This Article is brought to you for free and open access by Texas A&M Law Scholarship. It has been accepted for inclusion in Faculty Scholarship by an authorized administrator of Texas A&M Law Scholarship. For more information, please contact [email protected].


ARTIFICIAL FINANCIAL INTELLIGENCE

WILLIAM MAGNUSON*

Recent advances in the field of artificial intelligence have revived long-standing debates about the interaction between humans and technology. These debates have tended to center around the ability of computers to exceed the capacities and understandings of human decisionmakers, and the resulting effects on the future of labor, inequality, and society more generally. These questions have found particular resonance in finance, where computers already play a dominant role. High-frequency traders, quantitative (or "quant") hedge funds, and robo-advisors all represent, to a greater or lesser degree, real-world instantiations of the impact that artificial intelligence is having on the field. This Article, however, takes a somewhat contrarian position. It argues that the primary danger of artificial intelligence in finance is not so much that it will surpass human intelligence, but rather that it will exacerbate human error. It will do so in three ways. First, because current artificial intelligence techniques rely heavily on identifying patterns in historical data, use of these techniques will tend to lead to results that perpetuate the status quo (a status quo that exhibits all the features and failings of the external market). Second, because some of the most "accurate" artificial intelligence strategies are the least transparent or explainable ones, decisionmakers may well give more weight to the results of these algorithms than they are due. Finally, because much of the financial industry depends not just on predicting what will happen in the world, but also on predicting what other people will predict will happen in the world, it is likely that small errors in applying artificial intelligence (either in data, programming, or execution) will have outsized effects on markets. This is not to say that artificial intelligence has no place in the financial industry, or even that it is bad for the industry. It clearly is here to stay, and, what is more, has much to offer in terms of efficiency, speed, and cost. But as governments and regulators begin to take stock of the technology, it is worthwhile to consider artificial intelligence's real-world limitations.

INTRODUCTION
I. ARTIFICIAL INTELLIGENCE IN THE FINANCIAL INDUSTRY
   A. The Technology of Artificial Intelligence
   B. Artificial Intelligence Strategies in Finance
   C. Existing Literature
II. ARTIFICIAL FINANCIAL INTELLIGENCE
   A. Data Dependency
   B. Overweighting
   C. Echo Effects
III. REGULATING ARTIFICIAL FINANCIAL INTELLIGENCE
   A. Artificial Fairness
   B. Artificial Efficiency
   C. Artificial Risk
   D. The Role of Self-Regulation
IV. OBJECTIONS
   A. The Mirror Critique
   B. The Precautionary Critique
   C. The Balance Critique
CONCLUSION

* Associate Professor, Texas A&M University School of Law; J.D., Harvard Law School; M.A., Università di Padova; A.B., Princeton University. The author wishes to thank Jack Goldsmith, Todd Henderson, Chris Brummer, Tom Vartanian, Bob Ledig, Sarah Jane Hughes, Angela Walch, Tom Lin, and Rory Van Loo for their suggestions and advice.

INTRODUCTION

In the novel Tell the Machine Goodnight, a super-powered computer called the Apricity (named after the now-defunct English word for "the feeling of sun on one's skin in the winter") uses advanced artificial intelligence to deliver lifestyle recommendations to users.1 A simple cotton swab applied to the mouth, and then swiped across the computer's reader, is all the computer needs to formulate a list of recommended changes for users to make in their lives. These changes, if implemented, promise to lead to happier, more fulfilled lives, or, as Apricity's corporate legal department advises them to say, to improvements in one's "overall life satisfaction." The story opens with one unfortunate user receiving his own personalized set of recommendations from the Apricity:

The machine said the man should eat tangerines. It listed two other recommendations as well, so three in total. A modest number, Pearl assured the man as she read out the list that had appeared on the screen before her: one, he should eat tangerines on a regular basis; two, he should work at a desk that received morning light; three, he should amputate the uppermost section of his right index finger.2

Pearl, the technician tasked with operating the device, watches the man raise his right hand before his face and wonders if he is going to cry. She assures him that his recommendation is "modest" compared to others she has seen and reminds him that the system boasts a 99.97% approval rating. "The proof is borne out in the numbers," she concludes.3

Writing about artificial intelligence requires a certain dose of imagination. This has always been the case. It is not just because artificial intelligence draws more than its fair share of attention in science fiction and fantasy circles, although this certainly plays a part. It is also, and perhaps more importantly, because the term artificial intelligence conjures up images of a world that does not currently exist. When most people think of artificial intelligence, they think of superpowered computers that act and think like human beings, with complicated motives, wide-ranging capacities, and often dangerous tendencies.

1. KATIE WILLIAMS, TELL THE MACHINE GOODNIGHT (1st ed. 2018).
2. Id. at 1.
3. Id. at 2.


This combination of qualities simply does not exist in the state of today's artificial intelligence research. As more than one artificial intelligence researcher has remarked, most of the people making the doomsday prognostications of artificial intelligence do not actually work in the field. Those that do have significantly more modest forecasts about the potential of artificial intelligence. This is not to say that they believe artificial intelligence is an ineffective technology. It is just that they understand its limitations.

At the same time, artificial intelligence algorithms have undeniably made significant advances in recent years. Improved algorithms, bigger data sets, and more powerful computers have combined to give machine learning strategies ever-growing capabilities. In the last decade alone, we have witnessed machine learning-based computers beat the world's best Jeopardy contestants,4 accurately identify breast cancer from pathology slides,5 and drive cars around the streets of Phoenix, Arizona.6 Governments around the world are researching autonomous weapons systems that could potentially mark a major shift in the nature of warfare and national security.7

Financial institutions, intrigued by the potential of machine learning, have begun to explore how to incorporate artificial intelligence into their own businesses. In recent years, large investment banks have hired away talented experts from academia and Silicon Valley to head up machine learning divisions.8 Fintech startups have created credit rating models and fraud detection algorithms based on artificial intelligence strategies.9 And high-frequency traders, quantitative hedge funds, and robo-advisors are implementing artificial intelligence into their businesses as well.10 "Artificial financial intelligence" will likely take on increasing importance in years to come.


4. STEPHEN BAKER, FINAL JEOPARDY: THE STORY OF WATSON, THE COMPUTER THAT WILL TRANSFORM OUR WORLD 251 (1st ed. 2012).
5. See Jessica Kent, Google Deep Learning Tool 99% Accurate at Breast Cancer Detection, HEALTH IT ANALYTICS (Oct. 22, 2018), https://healthitanalytics.com/news/google-deep-learning-tool-99-accurate-at-breast-cancer-detection.
6. See Tim Higgins & Jack Nicas, Waymo's Self-Driving Milepost: Humans Take a Backseat, WALL ST. J. (Nov. 7, 2017), https://www.wsj.com/articles/waymos-self-driving-milepost-humans-take-a-backseat-1510070401.
7. See generally Rebecca Crootof, War Torts: Accountability for Autonomous Weapons, 164 U. PA. L. REV. 1347 (2016); Rebecca Crootof, Autonomous Weapons Systems and the Limits of Analogy, 9 HARV. NAT'L SEC. J. 51 (2018); GREG ALLEN & TANIEL CHAN, BELFER CTR. FOR SCI. & INT'L AFFAIRS, ARTIFICIAL INTELLIGENCE AND NATIONAL SECURITY (2017).
8. See Sarah Butcher, The Top Machine Learning Teams in Investment Banks, EFINANCIALCAREERS (May 23, 2018), https://news.efinancialcareers.com/us-en/315969/top-machine-learning-teams-banks.
9. See Rory Van Loo, Making Innovation More Competitive: The Case of Fintech, 65 UCLA L. REV. 232, 240 (2018).
10. See Machine-Learning Promises to Shake Up Large Swathes of Finance, THE ECONOMIST (May 25, 2017), https://www.economist.com/finance-and-economics/2017/05/25/machine-learning-promises-to-shake-up-large-swathes-of-finance.


But how should our legal structures respond to the rise of artificial financial intelligence? Should we attempt to encourage it, in the hopes of nurturing a potentially transformative technology? Or should we try to restrict it, out of concerns about its broader risks for society? Is it possible to do both? In order to answer these questions, we must first identify the problems that artificial financial intelligence poses, and the sources from which these problems spring. Many scholars argue that artificial intelligence poses existential questions about the nature of discrimination, the future of work, and the repercussions of super-human intelligence.11 This Article, however, will take a somewhat contrarian position. It will argue that the primary danger of artificial financial intelligence is not so much that it will surpass human intelligence, but rather that it will exacerbate human error.12

Artificial financial intelligence can magnify the effects of human error in three ways. First, because artificial intelligence techniques, and in particular machine learning algorithms, rely heavily on identifying patterns in historical data, use of these techniques will tend to lead to results that perpetuate the status quo in the data. Thus, if the data used to train artificial intelligence algorithms is flawed, either through poor selection methods or unavoidable problems in the external market itself, the resulting outputs of artificial intelligence will reflect (and, indeed, strengthen) those flaws. Second, because many of the most powerful artificial intelligence strategies are the least transparent or explainable, decisionmakers within financial institutions may give more weight to the results of the algorithms than they are due.

" See Tom C. W. Lin, The New Market Manipulation, 66 EMORY L. J. 1253, 1254 (2017)(arguing that artificial intelligence risks widespread market manipulation); Rory Van Loo, Dig-ital Market Perfection, 117 MICH. L. REv. 815 (2019) (arguing that Al-based automated assist-ants "could make some large markets more volatile, raising unemployment costs or financialstability concerns as more firms fail"); Rory Van Loo, Technology Regulation by Default:Platforms, Privacy, and the CFPB, 2 GEO. L. TECH. REV. 531, 544-45 (2018) (arguing that theConsumer Financial Protection Bureau should exert additional authority to inspect financialalgorithms); Chris Brummer & Yesha Yadav, Fintech and the Innovation Trilemma, 107 GEo.L. J. 235, 275 (2019) (arguing that "[g]iven the potential for [Al and machine learning] toresult in widespread, cascading costs, mapping the likely performance of sophisticated algo-rithms becomes especially necessary"); Gregory Scopino, Preparing Financial Regulation forthe Second Machine Age: The Need for Oversight of Digital Intermediaries in the FuturesMarkets, 2015 COLUM. Bus. L. REv. 439, 440 (2015) ("[H]umans who are operating as fu-tures market intermediaries . . . are likely to be displaced by digital intermediaries, that is,artificial agents that perform critical roles related to enabling customers to access the futuresand derivatives markets."); Luca Enriques & Dirk A. Zetzsche, Corporate Technologies andthe Tech Nirvana Fallacy, Eur. Corp. Governance Inst. Working Paper No. 457, 2019) (analyz-ing the ways in which modern algorithms exacerbate agency problems in corporate law).

12. Other scholars have made related points in other contexts, such as discrimination. See Solon Barocas & Andrew D. Selbst, Big Data's Disparate Impact, 104 CALIF. L. REV. 671, 671 (2016) ("[A]n algorithm is only as good as the data it works with. Data is frequently imperfect in ways that allow these algorithms to inherit the prejudices of prior decision makers."). More recently, financial regulation scholars have turned to this problem as well. See Tom C.W. Lin, Artificial Intelligence, Finance, and the Law, 88 FORDHAM L. REV. 531, 531 (2019) (examining the "perils and pitfalls of artificial codes, data bias, virtual threats, and systemic risks relating to financial artificial intelligence.").


It is a well-known problem in machine learning, and in particular in the sub-fields of deep learning and neural networks, that the complexity of the methods used to reach a given result makes identifying a particular reason, or even several reasons, for the result difficult to do. Combined with cognitive decision biases related to availability and herding effects, we can expect that artificial financial intelligence will be hard to resist in financial decisionmaking processes. Finally, because much of the financial industry depends not just on predicting what will happen in the world, but also on predicting what other people will predict will happen in the world, it is likely that small errors in artificial intelligence (either in data, programming, or execution) will have outsized effects on markets. The artificial intelligence strategies of one firm may interact unpredictably with the artificial intelligence strategies of other firms.13 The resulting echo effects could harm both the efficiency of markets and the stability of financial systems.14

What does this mean for financial regulation? In a way, it is a cause for optimism. Financial regulators are well-placed to deal with artificial financial intelligence because they have a wide array of laws and regulations covering the relevant behaviors: ensuring fairness, promoting efficiency, and protecting stability. While accomplishing these goals will not be easy, the general categories of problems are ones to which financial regulators are accustomed. At the same time, regulating artificial financial intelligence may require slightly different regulatory mechanisms, or at least targets. Understanding and monitoring artificial financial intelligence may, and indeed likely will, require specialized expertise. It may also call for more intrusive and hands-on inspections of sensitive business data. We may not need better laws, but we do need better information.

I. ARTIFICIAL INTELLIGENCE IN THE FINANCIAL INDUSTRY

The computer science field of artificial intelligence has exploded in recent years. New advances in technical approaches, computing power, and the availability of data have led to notable successes in a variety of applications, from image recognition15 to natural language processing16 to medical diagnoses17 to game strategy.18

" See Rory Van Loo, The Rise of the Digital Regulator, 66 DuKE L.J. 1267, 1294 (2017)(discussing the unpredictable ways that different firms' algorithmic pricing tools may interact).

14. See Hilary J. Allen, Driverless Finance, 10 HARV. BUS. L. REV. 158, 158 (2019) (arguing that financial regulators should adopt a precautionary approach to regulating financial algorithms due to their potential to create systemic risk for the broader financial system).

15. See Parmy Olson, Image-Recognition Technology May Not Be as Secure as We Think, WALL ST. J. (June 4, 2019), https://www.wsj.com/articles/image-recognition-technology-may-not-be-as-secure-as-we-think-11559700300.

16. See Tom Young et al., Recent Trends in Deep Learning Based Natural Language Processing, ARXIV 1708.02709 (2018), https://arxiv.org/pdf/1708.02709.pdf.

17. See A. Michael Froomkin et al., When AIs Outperform Doctors: Confronting the Challenges of a Tort-Induced Over-Reliance on Machine Learning, 61 ARIZ. L. REV. 33 (2019).

" See Nick Statt, How Artificial Intelligence Will Revolutionize the Way Video Games AreDeveloped and Played, THE VERGE (Mar. 6, 2019), https://www.theverge.com/2019/3/6/18222203/video-game-ai-future-procedural-generation-deep-leaming; George Anadiotis, TheState of AI in 2019: Breakthroughs in Machine Learning, Natural Language Processing,Games, and Knowledge Graphs, ZDNEr (July 8, 2019), https://www.zdnet.com/article/the-


These advances have brought renewed attention to artificial intelligence in the business world and, in particular, in the financial industry. Financial firms, which have always relied heavily on statistics and quantitative analysis in their services, are a natural fit for artificial intelligence strategies. Artificial intelligence, after all, attempts to identify patterns and probabilities from historical data in order to improve results, such as predicting future statistics or identifying complicated associations. Increasingly, financial firms, such as quant hedge funds, high-frequency traders, and robo-advisors, are turning to artificial intelligence to improve their own results. This new artificial financial intelligence ecosystem has, in turn, generated its fair share of hand-wringing in policy and academic circles, where scholars have worried about cyber-systems that could potentially exceed human capabilities. This Part will describe the rise of artificial intelligence, its use in the financial industry, and the existing literature on how policymakers should respond to it.

A. The Technology of Artificial Intelligence

What is artificial intelligence? The term has many meanings, a problem derived from its long history in computer science and popular fiction.19 The term itself is generally believed to have originated in 1956, when a Dartmouth mathematics professor, John McCarthy, used it in connection with a conference he was organizing on the topic of thinking machines.20 But the general outlines of the idea go back much further. Edgar Allan Poe wrote an essay in 1836 about a mechanical chess player called The Turk that was purportedly better than any human.21 Alan Turing, at least as early as 1950, was directly tackling the question of whether computers could think.22 Turing famously concluded that the question itself was ill-formed and instead needed to be reconceptualized as whether machines could act in such a way that they would be indistinguishable from humans.23 In order to determine this, he developed a test, now called the Turing test, in which a human, corresponding through text with two players, one of whom was a human and one of whom was a computer, attempted to determine which one was human.24 If he could not, then, under the Turing test, the computer could be said to be intelligent.
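The mechanics of the test are simple enough to sketch in a few lines of code. The following is a toy Python simulation only; the players, prompts, and the judge's heuristic are all invented for illustration. Its one essential feature is faithful to Turing's protocol: the judge sees only anonymized transcripts, never identities.

```python
import random

# Stand-ins for the two players. The canned replies are purely illustrative.
def human_player(prompt):
    return {"What is 7 x 8?": "56, I think.",
            "Describe a sunset.": "Orange light fading over the hills."}.get(prompt, "Hard to say.")

def machine_player(prompt):
    return {"What is 7 x 8?": "56.",
            "Describe a sunset.": "The sun drops below the horizon at dusk."}.get(prompt, "Unknown query.")

def imitation_game(prompts, judge):
    """Run one round: the judge sees only labels 'A'/'B', never identities."""
    labels = {"A": human_player, "B": machine_player}
    if random.random() < 0.5:  # randomize which label hides the machine
        labels = {"A": machine_player, "B": human_player}
    transcript = {lab: [(p, fn(p)) for p in prompts] for lab, fn in labels.items()}
    guess = judge(transcript)              # judge names the label it believes is human
    return labels[guess] is human_player   # True = judge identified the human

# A naive judge: guesses the player whose answers contain more hedging words.
naive_judge = lambda t: max(t, key=lambda lab: sum("think" in a for _, a in t[lab]))

wins = sum(imitation_game(["What is 7 x 8?", "Describe a sunset."], naive_judge)
           for _ in range(1000))
print(f"judge identified the human in {wins / 10:.1f}% of rounds")
# Here the crude machine gives itself away (the judge wins nearly always); a
# judge stuck near 50% would mean the machine "passes" in Turing's sense.
```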

19. See NAT'L SCI. & TECH. COUNCIL, PREPARING FOR THE FUTURE OF ARTIFICIAL INTELLIGENCE 5 (2016).
20. See NILS J. NILSSON, THE QUEST FOR ARTIFICIAL INTELLIGENCE 52-56 (2009).
21. See EDGAR ALLAN POE, Maelzel's Chess-Player, in EDGAR ALLAN POE: COMPLETE TALES AND POEMS 373 (2003).
22. Alan M. Turing, Computing Machinery and Intelligence, 59 MIND 433, 433 (1950).
23. Id. at 433-34.
24. It turns out that the Turing test has generated its fair share of controversy, both with respect to whether individual computers have in fact passed it and with respect to whether the test itself is a good indicator of true intelligence on the part of machines.


More modern definitions of artificial intelligence tend to be somewhat more inclusive. Berkeley computer scientist Stuart Russell defines artificial intelligence as:

the study of methods for making computers behave intelligently. Roughly speaking, a computer is intelligent to the extent that it does the right thing rather than the wrong thing. The right thing is whatever action is most likely to achieve the goal, or, in more technical terms, the action that maximizes expected utility.25

As even a cursory perusal of this selection will reveal, this definition runs into its own set of problems. For one, we might say that a calculator does the right thing when it says that 1 + 1 is 2, but most observers would not say that it is behaving intelligently in any ordinary sense of the term. For another, once we define intelligence to include an element of morality (does a computer's action maximize expected utility?), we open the door to contentious, unanswerable questions that distract from the technology itself. To borrow from Socrates, if a user asks a computer for a sword, and the computer receives the question, finds a sword for him in the real world, purchases it, and then delivers it to him, most observers would conclude that the computer is behaving intelligently.26 But if that same user requesting the sword happens to be insane and plans to use it to harm himself or others, then it would (normally) be wrong to give him the sword. Does this mean that the computer is now not displaying artificial intelligence, because it is doing the wrong thing, ethically speaking? These types of questions could be raised for any conceivable use of a computer, and they are not closely connected to the problems unique to artificial intelligence itself.

On the other side of the spectrum, narrower definitions of artificial intelligence tend to circumscribe the range of the field unnecessarily. Many artificial intelligence researchers use the term artificial intelligence to refer to a particular set of techniques or algorithms that computers can adopt to achieve better results. So, in the 1980s, artificial intelligence became closely associated with an approach to problem-solving known as "expert system" design.27 Under expert systems, computers would be given a long set of if-then rules that they could then apply to problems in a given field.28

25. STUART RUSSELL, Q&A: The Future of Artificial Intelligence, UNIV. OF CAL. BERKELEY, http://people.eecs.berkeley.edu/~russell/temp/q-and-a.html.
26. The discussion of the madman and the sword takes place in The Republic, Book I. Appropriately for this article, it involves the question of whether justice requires you to pay your debts. See PLATO, REPUBLIC 6 (C. D. C. Reeve ed., G.M.A. Grube trans., Hackett Pub. Company, Inc. 2d ed. 1992) (c. 375 B.C.E.) ("Everyone would surely agree that if a sane man lends weapons to a friend and then asks for them back when he is out of his mind, the friend shouldn't return them, and wouldn't be acting justly if he did.").
27. See generally PAUL HARMON & DAVID KING, EXPERT SYSTEMS: ARTIFICIAL INTELLIGENCE IN BUSINESS (1985); Bruce G. Buchanan & Reid G. Smith, Fundamentals of Expert Systems, in 4 HANDBOOK OF ARTIFICIAL INTELLIGENCE 149 (Avron Barr et al. eds., 1981-89).
28. See ROBERT A. EDMUNDS, THE PRENTICE HALL GUIDE TO EXPERT SYSTEMS 28-31 (1988).


The logic of expert systems was that humans who are specialists in a field tend to behave according to a number of heuristics, and if those heuristics could simply be identified and then hard-coded into a set of simple rules for computers, then computers could achieve similar or even superior results to specialists.29 It turned out that expert systems did not live up to the high expectations of researchers, and their limitations eventually led to a decline in interest in artificial intelligence generally.30
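A minimal sketch can make the expert-system approach concrete. The rules and thresholds below are invented for illustration (real systems of the era encoded hundreds or thousands of such hand-written rules), but the structure is the defining one: the programmer, not the data, supplies every rule.

```python
# A minimal expert-system sketch: domain knowledge hand-coded as if-then rules.
# The rules and thresholds are invented for illustration, not drawn from any real system.

def loan_expert_system(applicant: dict) -> str:
    # Rule 1: any prior default is disqualifying.
    if applicant["prior_defaults"] > 0:
        return "DENY: prior default on record"
    # Rule 2: debt above 40% of income is deemed too risky.
    if applicant["debt"] > 0.4 * applicant["income"]:
        return "DENY: debt-to-income ratio exceeds 40%"
    # Rule 3: short credit histories go to a human specialist.
    if applicant["credit_history_years"] < 2:
        return "REFER: insufficient credit history"
    return "APPROVE"

print(loan_expert_system(
    {"prior_defaults": 0, "debt": 15_000, "income": 60_000, "credit_history_years": 5}))
# -> APPROVE
```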

More recently, however, artificial intelligence has had something of a renaissance, becoming associated with a new and promising technique known as deep learning.31 Deep learning is a sophisticated, data-driven model from a field of computer science known as machine learning. Machine learning, rather than relying on a set of rules established by a programmer, as in expert systems, instead starts with a data set and then attempts to derive rules on its own.32 For example, a computer might be given a set of images with labels attached (this is a person, this is a tree, this is a dog) and be asked to use the images (the "training set") to establish rules for identifying when an image contains those objects.33 The machine learning algorithm might then check the accuracy of these rules against another set of images (the "test set") to see if the rules work well outside of the original context.34 Many different variants of machine learning exist, and one particular variant, known as deep learning, has had notable success in recent years.35 In deep learning algorithms, neural networks or units identify patterns in data, transform those patterns into outputs, and then pass those outputs along to additional units.36 The first "layer" of units might identify patterns in the data, and then the next layer might identify patterns of patterns in the data, and so on and so forth.37 The algorithms associated with deep learning techniques have proven remarkably effective at improving accuracy and predictive power in machines, and they have become synonymous with artificial intelligence in many circles.
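The training-set/test-set workflow just described can be sketched compactly. The example below assumes the scikit-learn library and substitutes a synthetic data set for labeled images, but the structure is the same: fit rules on one slice of the data, then verify them on a held-out slice.

```python
# A minimal sketch of the training-set / test-set workflow described above,
# using scikit-learn and synthetic data in place of labeled images.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Synthetic "labeled examples": 1,000 samples, 20 features, 2 classes.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Hold out 25% of the data as the test set.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# "Learn" rules from the training set rather than having a programmer write them.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Check whether the learned rules generalize outside the original context.
print(f"accuracy on held-out test set: {model.score(X_test, y_test):.2%}")
```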

29. See Andrea Roth, Machine Testimony, 126 YALE L.J. 1972, 1998-99 (2017).
30. See DANIEL CREVIER, AI: THE TUMULTUOUS HISTORY OF THE SEARCH FOR ARTIFICIAL INTELLIGENCE 204-08 (1993).
31. See Ryan Calo, Artificial Intelligence Policy: A Primer and Roadmap, 51 U.C. DAVIS L. REV. 399, 402 (2017); Jack M. Balkin, The Three Laws of Robotics, 78 OHIO ST. L.J. 1217, 1219-20 (2016); Emily Berman, A Government of Laws and Not of Machines, 98 B.U. L. REV. 1277, 1284-90 (2018).
32. See KEVIN P. MURPHY, MACHINE LEARNING: A PROBABILISTIC PERSPECTIVE 1 (2012) ("[W]e define machine learning as a set of methods that can automatically detect patterns in data, and then use the uncovered patterns to predict future data, or to perform other kinds of decision making under uncertainty . . . .").
33. See David Lehr & Paul Ohm, Playing with the Data: What Legal Scholars Should Learn About Machine Learning, 51 U.C. DAVIS L. REV. 653, 684-88 (2017).
34. Id.
35. Id. at 669-70.
36. See Curtis E.A. Karnow, The Opinion of Machines, 19 COLUM. SCI. & TECH. L. REV. 136, 141-50 (2017).
37. Id.


But defining artificial intelligence in such a narrow way, to encompass a small set of algorithmic techniques and to exclude all others, has its problems as well. The explosion of interest in deep learning has only occurred in the last few years, and it may well be that other, newer strategies will emerge that tackle the same issues but use different methods. These newer strategies may well be more effective than older ones. There would not appear to be any categorical reason for excluding new algorithmic strategies from the definition of artificial intelligence simply because they are unorthodox or new.

It is hard to regulate what we cannot define.38 Hopefully this discussion clarifies some of the confusion in the field about what precisely we are talking about when we talk about artificial intelligence. Some people use it to refer generally to any instance when a computer performs an action correctly, while others use it to refer to a specific set of algorithmic strategies that are currently en vogue in the industry. When computer scientists talk about the AI revolution of recent years, they tend to be talking about the machine learning and deep learning strategies that researchers have deployed to such great success in certain fields. But policymakers, news media, and commentators often use artificial intelligence to refer more generally to thinking machines, regardless of the actual techniques that the machines are using to accomplish their feats. It may be most useful to conceive of artificial intelligence as a spectrum, with the capacity to perform simple tasks (such as arithmetic and other basic mathematical calculations) at the low end, the capacity to match human performance somewhere in the middle, and the capacity to outperform humans at the high end. This conception is itself simplistic (computers are already better than humans at many tasks, while simultaneously being worse than humans at many others), but it helps clarify the range of potential artificial intelligence applications in the world. And while this Article will focus in particular on machine learning, it is worthwhile to keep in mind that some other technique may come along that replaces it.

Now that we have settled on a workable definition of artificial intelligence, we can turn to its real-world uses. In particular, it is useful for our purposes (that is, understanding how artificial intelligence is affecting the financial industry, and how it might do so in the future) to know the types of things that current instantiations of artificial intelligence are good at doing. It is also important to know why they are good at doing these things.

The list of artificial intelligence's accomplishments in the last decade is breathtaking. Here is just a short selection of the most noteworthy achievements:

"s For an excellent analysis of the regulatory issues raised by the uncertain scope of "arti-ficial intelligence" as a concept, see Mark A. Lemley & Bryan Casey, You Might be a Robot,105 CORNELL L. REV. (forthcoming 2019). Lemly and Casey conclude that, in fact, there is nocorrect definition of artificial intelligence for regulators, and that regulators should insteadfocus on behaviors and actions.


* In 2011, IBM's computing system, Watson, won a game of Jeopardy against two of the game show's most successful contestants, Ken Jennings and Brad Rutter, with a final tally of $77,147 to Jennings' $24,000 and Rutter's $21,600;39

* Between 2010 and 2017, in a popular image recognition contest, deep learning algorithms lowered their error rate from 28% to 2.3% (surpassing humans, who, on average, have a 5% error rate);40

* In 2016, by reviewing and analyzing all published literature on ALS, or Lou Gehrig's disease, IBM's Watson system was able to identify five previously unknown genes related to the disease;41

* In 2017, a computer program known as AlphaGo defeated the world's best player of the board game Go with a score of three matches to zero;42

* In 2017, countries deployed forty-nine different automated weapons systems that could detect and attack targets without human intervention;43

* In 2018, Waymo launched the world's first commercial self-driving car service, a technology based heavily on machine learning algorithms.44

These remarkable advances, displayed across a wide and diverse array of fields, have been made possible by three related developments: better algorithms, deployed by more powerful computers, applied to ever greater amounts of data.45

39. See STEPHEN BAKER, FINAL JEOPARDY: THE STORY OF WATSON, THE COMPUTER THAT WILL TRANSFORM OUR WORLD 251 (2012).
40. See Imagenet, Large Scale Visual Recognition Challenge, http://image-net.org/challenges/LSVRC/2017/.
41. Emma Hinchliffe, IBM's Watson Supercomputer Discovers 5 New Genes Linked to ALS, MASHABLE (Dec. 14, 2016), http://mashable.com/2016/12/14/ibm-watson-als-research/#oKfRVPG3C8ql.
42. A later version of the AlphaGo algorithm, known as AlphaGo Zero, defeated the prior version of AlphaGo 100 games to zero. See Larry Greenemeier, AI versus AI: Self-Taught AlphaGo Zero Vanquishes Its Predecessor, SCIENTIFIC AMERICAN (Oct. 18, 2017).
43. See VINCENT BOULANIN & MAAIKE VERBRUGGEN, MAPPING THE DEVELOPMENT OF AUTONOMY IN WEAPON SYSTEMS 26 (2017).
44. See Shuyang Cheng & Gabriel Bender, Autonomous Driving, MEDIUM (Jan. 15, 2019), https://medium.com/waymo/automl-automating-the-design-of-machine-learning-models-for-autonomous-driving-141a5583ec2a ("At Waymo, machine learning plays a key role in nearly every part of our self-driving system. It helps our cars see their surroundings, make sense of the world, predict how others will behave, and decide their next best move.").
45. See NAT'L SCI. & TECH. COUNCIL, supra note 19, at 6; STANFORD UNIVERSITY ONE HUNDRED YEAR STUDY, ARTIFICIAL INTELLIGENCE AND LIFE IN 2030 51 (2016).


On the algorithm side, the major advances in recent years have been driven by machine learning, which, as described above, refers to a computer's ability to analyze data and develop models for recognizing patterns and predicting future data; in other words, its ability to learn and improve its performance over time.46 Advancements in the sub-field of deep learning have played particularly important roles in this success.47 These advancements have, in turn, been supported by enormous leaps in computing power.48 Moore's Law, the famous proposition that the number of components per integrated circuit, and thus processing power, doubles roughly every eighteen to twenty-four months, demonstrates the magnitude of performance improvement over time.49 If human populations followed the same trend, a village of 100 people in 1970 would have grown to 1.7 billion people by 2018. Even more relevant for artificial intelligence is the vast improvement in the speed of the graphical processing units (GPUs) used in the field.50 In 2000, Nvidia's top-of-the-line desktop GPU, the GeForce 256 DDR, had a processing power of 480 million operations per second. By 2018, Nvidia's top desktop GPU, the Titan RTX, had reached a processing power of 16,312 gigaflops, a speed more than 30,000 times faster.51 The vastly greater computational capacity of modern-day computers, combined with effective machine learning algorithms, has been unleashed on the massive amounts of data made available by the internet age.52 With the rise of the internet and mobile computing, people are generating more data, and more sensitive and personalized data, than they have ever done before. This information has proven a fertile source of training data for artificial intelligence.53 As the computer scientist Andy Hertzfeld has stated, the artificial intelligence revolution of recent years

couldn't have happened no matter how brilliant your algorithm was twenty years ago, because billions were just out of reach. But what they found was that by just increasing the scale, suddenly things started working incredibly well. It kind of shocked the computer science world, that they started working so good.54
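The growth figures quoted above can be checked with simple arithmetic. The sketch below assumes the slower, 24-month doubling period from Moore's formulation and uses the GPU numbers cited in the text:

```python
# Back-of-the-envelope checks of the two growth figures quoted above.
# Assumes a doubling every 24 months (the slow end of the 18-24 month range).

doublings = (2018 - 1970) / 2          # 24 doublings between 1970 and 2018
village = 100 * 2 ** doublings
print(f"village of 100 after {doublings:.0f} doublings: {village:,.0f}")
# -> roughly 1.68 billion, matching the "1.7 billion people" in the text

geforce_256 = 480e6                    # GeForce 256 DDR: ~480 million ops/sec (2000)
titan_rtx = 16_312e9                   # Titan RTX: 16,312 gigaflops (2018)
print(f"speedup: {titan_rtx / geforce_256:,.0f}x")
# -> roughly 34,000x, consistent with "more than 30,000 times faster"
```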


46. See Calo, supra note 31, at 402.
47. Id. at 401-02.
48. See Alan Stevens, GPU Computing: Accelerating the Deep Learning Curve, ZDNET (July 2, 2019), https://www.zdnet.com/article/gpu-computing-accelerating-the-deep-learning-curve/.
49. For a broader discussion of the effects of Moore's law on artificial intelligence, see John O. McGinnis, Accelerating AI, 104 NW. U. L. REV. 1253 (2010).
50. See Michael Byrne, Why Machine Learning Needs GPUs, MOTHERBOARD (Jan. 9, 2018), https://motherboard.vice.com/en_us/article/kznnnn/why-machine-learning-needs-gpus.
51. See CPUREVIEW, nVidia GeForce 256 DDR, http://www.gpureview.com/GeForce256-DDR-card-114.html; SKUEZTECH, Nvidia Titan RTX, http://www.skueztech.com/db/gpu/nvidia-titan-rtx.
52. See Solon Barocas & Andrew D. Selbst, Big Data's Disparate Impact, 104 CALIF. L. REV. 671 (2016); Neil M. Richards & Jonathan H. King, Big Data Ethics, 49 WAKE FOREST L. REV. 393 (2014).
53. See Calo, supra note 31, at 420-25.
54. ADAM FISHER, VALLEY OF GENIUS: THE UNCENSORED HISTORY OF SILICON VALLEY 423 (2018).


So artificial intelligence has made enormous leaps in ability in recent years, and these leaps have made it both practical and efficient for firms and governments to use artificial intelligence to improve their day-to-day operations. And at least for now, there are no signs of this progress slowing down.

B. Artificial Intelligence Strategies in Finance

The remarkable power of artificial intelligence to identify patterns in complicated data and use these patterns to predict future behavior has made the field a major area of interest for financial firms. This is true both for the large Wall Street banks that have traditionally dominated finance and for the wave of smaller, nimbler fintech firms that have sprung up to challenge them in recent years. On the one hand, large financial institutions like JP Morgan, Goldman Sachs, and Morgan Stanley are increasingly hiring machine learning specialists and creating entire divisions devoted to artificial intelligence.55 On the other, a handful of smaller, more focused financial firms have turned to artificial intelligence as a primary strategy for their businesses.56 These trends suggest that artificial intelligence will likely play a large role in determining the future of the financial sector.57

The potential uses of artificial intelligence in finance are nearly limitless. As Charles Elkan, the head of machine learning at Goldman Sachs, has said, "within every business area there are opportunities to apply machine learning."58 That said, most financial firms to date have focused on a few key areas where machine learning has obvious applications. First, they are using artificial intelligence to improve credit risk assessments for counterparties. Second, they are using it to protect themselves from fraud and wrongdoing, either external or internal to the firm. And third, they are using it to devise better trading strategies. It may be helpful to look at each of these areas in turn to get a better sense of how artificial intelligence can affect the financial world. One should keep in mind that most financial firms are relatively tight-lipped about how, precisely, they are using artificial intelligence in their businesses. It is, after all, competitively valuable information. At the same time, broad categories may be identified.59

" See Sarah Butcher, The Top Machine Learning Teams in Investment Banks, EFINANCIAL-

CAREERS, May 23, 2018.5 See Sameer Maskey, How Artificial Intelligence is Helping Financial Institutions,

FORBES, (Dec. 5, 2018), https://www.forbes.com/sites/forbestechcouncil/2018/12/05/how-arti-ficial-intelligence-is-helping-financial-institutions/#441cd632460a.

" See Thomas Vartanian, Nefarious Nations Will Take the Lead on AI If the US Doesn't,THE HILL, (Dec. 12, 2018) ("As part of the country's critical infrastructure, the highest priorityshould be given to the development of a financial services and capital markets strategy tofoster Al innovations while at the same time protecting against its unprecedented threats.").

" Charles Elkan, Will Machine Learning Transform Finance, YALE INSIGHTs, (Jan. 3,2019), https://insights.som.yale.edulinsights/will-machine-leaming-transform-finance.

" For a comprehensive industry survey of bankers' opinion on artificial intelligence infinance, see Laura Noonan, AI in Banking: The Reality Behind the Hype, FINANCIAL TIMES,

(Apr. 11, 2018), https://www.ft.com/content/b497al34-2d21-l1e8-a34a-7e7563b0b0f.


Perhaps no area of artificial financial intelligence has received as muchattention (or criticism) than its use in credit ratings.60 Banks are in the busi-ness of loaning money to people-bank loan volume is a carefully watchedsign, not just of bank profits, but also of economic health more generally.6 'One of the primary concerns that banks have when they extend a loan, -ofcourse, is that the borrower may not pay the money back. This means thatthe ability to accurately predict the credit risk of borrowers is tremendouslyimportant (and valuable) to financial institutions as a whole. They often out-source this process to credit scoring agencies, such as Equifax, Experian,and TransUnion. But as many observers have noted, the credit rating agen-cies' process itself is far from perfect.62 Joe Nocera of the New York Timeshas written that a "credit score is derived after an information-gathering pro-cess that is anything but rigorous" but, nevertheless, "essentially . . . hasbecome the only thing that matters anymore to the banks and other institu-tions that underwrite mortgages."63 Many financial firms believe that ma-chine learning algorithms could do better. The theoretical benefits are clear.By sifting through massive amounts of prior data about both defaulting andnon-defaulting borrowers, machine learning algorithms might be able toidentify hidden or unexpected variables that affect a borrower's likelihood ofrepaying a loan. Rather than relying on simple and obvious relations, such aswhether a borrower has defaulted on past loans or has large amounts ofcredit card debt, machine learning algorithms might find that other, less no-ticeable information, such as a borrower's purchase history, friend group, orTwitter posts, provides meaningful information about his ability to repay aloan. And indeed, financial firms are increasingly using artificial intelligenceto make these difficult assessments. For example, Zest Finance, a fintechstartup based in California, offers a machine learning-based credit modelingproduct for mortgages, auto loans, business loans, and consumer loans. 4 Itclaims that its algorithm, on average, leads to a 15% increase in loan ap-proval rates and a 30% decrease in charge-off rates, and that it only requires

' See Danielle Keats Citron & Frank Pasquale, The Scored Society: Due Process for Auto-mated Predictions, 89 WASH. L. REV. 1, 34 (2014); Christopher K. Odinet, Consumer Bit-Credit and Fintech Lending, 69 ALA. L. REV. 781, 858 (2018); Matthew A. Bruckner,Regulating Fintech Lending, 37 BANKING & FIN. SERVICES POL'Y REP. 1, 7 (2018); MatthewAdam Bruckner, The Promise and Perils of Algorithmic Lenders' Use of Big Data, 93 Cm.-KENT L. REV. 3, 60 (2018).

" See Peter Eavis, Is a Slowdown in Bank Lending a Bad Sign for the Economy, N.Y.TmIEs, (Oct. 12, 2018), https://www.nytimes.com/2018/10/12/business/dealbook/bank-lend-ing-slowdown-economy.html; Rachel Louise Ensign, Business-Loan Drought Ends for Banks,WALL S-r. J., (July 8, 2018), https://www.wsj.com/articles/business-loan-drought-ends-for-banks- 1531058400.

62 See Citron & Pasquale, supra note 60, at 10-16.63 Joe Nocera, Credit Score Is the Tyrant in Lending, N.Y. TInMES, (July 23, 2010), https://

www.nytimes.com/2010/07/24/business/24nocera.html.' See Steve Lohr, Banking Start-Ups Adopt New Tools for Lending, N.Y. TIMEs, (Jan. 18,

2015), https://www.nytimes.com/2015/01/19/technology/banking-start-ups-adopt-new-tools-for-lending.html.


For example, Zest Finance, a fintech startup based in California, offers a machine learning-based credit modeling product for mortgages, auto loans, business loans, and consumer loans.64 It claims that its algorithm, on average, leads to a 15% increase in loan approval rates and a 30% decrease in charge-off rates, and that it only requires three months to implement.65 The German startup Kreditech uses big data and machine learning to assess borrowers' creditworthiness and even extends loans, both domestically and internationally, in real time using its automated system.66 Their model is so attractive and effective that some of the credit scoring agencies are themselves getting into the game, with Equifax recently adopting machine learning in its models and Experian similarly offering a machine learning product to their clients.67 And, to be clear, the effects of AI-enhanced credit rating are not simply negative, providing a rationale for declining a loan to someone who might otherwise have received one. AI may also show that certain groups have been systematically underrated by current systems, and thus more deserving of loans than traditionally believed.
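In rough outline, the kind of model these firms describe looks like the sketch below. The features, data, and model choice are invented for illustration (no vendor discloses its actual pipeline); the point is simply that nontraditional signals enter the model alongside conventional ones, and the output is a predicted default probability that drives the lending decision.

```python
# A sketch of the kind of credit model described above: a classifier trained on
# historical borrowers, mixing conventional and nontraditional features.
# All features and data here are invented for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.integers(0, 3, n),     # prior defaults (conventional feature)
    rng.uniform(0, 1, n),      # credit utilization (conventional feature)
    rng.uniform(0, 1, n),      # purchase-history signal (nontraditional, hypothetical)
    rng.uniform(0, 1, n),      # social-graph signal (nontraditional, hypothetical)
])
# Synthetic "ground truth": default risk driven by a mix of all four features.
p_default = 1 / (1 + np.exp(-(X[:, 0] + 2 * X[:, 1] + X[:, 2] - X[:, 3] - 2)))
y = rng.random(n) < p_default  # True = borrower defaulted

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2%}")

# Scoring a new applicant: the predicted default probability drives the decision.
applicant = [[0, 0.35, 0.8, 0.6]]
print(f"predicted default probability: {model.predict_proba(applicant)[0, 1]:.1%}")
```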

Artificial intelligence also plays an important role in financial institutions' identification and prevention of fraud and wrongdoing. Finding effective ways to prevent fraudulent payments or claims has been a constant thorn in the side of the financial industry. And despite considerable efforts, fraud is rising: in 2016, 15.4 million U.S. consumers were victims of identity fraud, an all-time high, leading to $16 billion in losses.68 Most of these losses stem from credit card fraud.69 In order to better identify these fraudulent behaviors and, ideally, prevent them before they occur, financial institutions are increasingly turning to machine learning algorithms.70 Machine learning algorithms, after all, can analyze and assess the tremendous amounts of transaction data constantly flowing across the sector much more quickly than a human ever could. Bank of America is exploring how to incorporate artificial intelligence into its fraud detection systems.71 The startup Feedzai specializes in using machine learning to prevent fraudulent payments in the financial industry, and its success has led Citigroup to incorporate its product into its own fraud prevention systems.72

65. See ZESTAI, What is ZAML?, https://www.zest.ai/zaml.
66. See WALL ST. J., Germany's Kreditech Scores $40M for Automated Lending (June 25, 2014).
67. See Alex Hickey, Equifax Debuts Machine Learning-Based Credit Scoring System, CIO DIVE (Mar. 28, 2018), https://www.ciodive.com/news/equifax-debuts-machine-learning-based-credit-scoring-system/520095/.
68. See AnnaMaria Andriotis & Peter Rudegeair, Credit-Card Fraud Keeps Rising, Despite New Security Chips-Study, WALL ST. J. (Feb. 1, 2017), https://www.wsj.com/articles/credit-card-fraud-keeps-rising-despite-new-security-chipsstudy-1485954000.
69. Id.
70. See Sara Castellanos & Kim S. Nash, Bank of America Confronts AI's "Black Box" With Fraud Detection Effort, WALL ST. J. (May 11, 2018), https://blogs.wsj.com/cio/2018/05/11/bank-of-america-confronts-ais-black-box-with-fraud-detection-effort/.
71. Id.
72. See BUS. WIRE, Citi Partners with Feedzai to Provide Machine Learning Payment Solutions (Dec. 19, 2018), https://www.businesswire.com/news/home/20181219005388/en/Citi-Partners-Feedzai-Provide-Machine-Learning-Payment.


Monzo, the popular British credit card company, deployed a machine learning model for preventing fraud in its prepaid debit cards, allowing it to reduce fraud rates from 0.84% to less than 0.01% in just six months.73 Other financial institutions are experimenting with using machine learning more broadly to prevent bad behavior among their own employees. In 2018, ING Bank partnered with a machine learning startup specializing in speech recognition to spot and stop insider trading and collusion within the firm.74
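Fraud screening of this sort is often framed as anomaly detection over transaction features. The sketch below is illustrative only (none of the firms named above publish their methods) and uses an off-the-shelf isolation forest on invented data:

```python
# A sketch of transaction-level fraud screening via anomaly detection.
# Real systems (Feedzai, Monzo, etc.) do not disclose their methods; this
# simply illustrates the general approach on invented data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Features per transaction: amount, hour of day, distance from home (km).
normal = np.column_stack([rng.lognormal(3, 1, 10_000),
                          rng.normal(14, 4, 10_000),
                          rng.exponential(5, 10_000)])

# Train on historical transactions, assuming ~0.5% are anomalous.
detector = IsolationForest(contamination=0.005, random_state=0).fit(normal)

# Score incoming transactions: -1 flags an outlier for review, 1 passes.
incoming = [[25.0, 13.0, 2.0],      # modest daytime purchase near home
            [9500.0, 3.5, 4200.0]]  # large 3:30 a.m. purchase far away
print(detector.predict(incoming))   # expected: [ 1 -1 ]
```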

Finally, financial firms are also using artificial intelligence to improve investment returns. Again, the change here is more a matter of degree than of kind. Investment firms, both large and small, have long used computers to analyze stock information, identify price discrepancies, and make investments.75 The artificial intelligence revolution of recent years, however, has opened up new avenues for incorporating algorithms more deeply into investment strategies. One clear use case is for quant hedge funds, which use algorithms to determine investment decisions.76 Already familiar (and comfortable) with computerized, automated investment decisions, quant hedge funds might be expected to be early adopters of machine learning strategies. And indeed, it appears that they are. In 2017, one study found that 20% of hedge funds made use of machine learning or artificial intelligence in their investment strategies.77 Just a year later, that number had risen to 56%.78 Large investment firms are investing heavily in machine learning as well. BlackRock, the asset management company, uses artificial intelligence to spot trends in financial data, including finding relationships between securities or market indicators, scanning social media for insights into employee sentiment, and analyzing search engines for popular search terms.79 Regular investors can now take advantage of machine learning-based investment strategies as well. In 2017, EquBot launched an exchange traded fund, called AI Powered Equity, that uses machine learning algorithms to decide on its investment components.80

73. See MONZO, FIGHTING FRAUD WITH MACHINE LEARNING (Feb. 3, 2017), https://monzo.com/blog/2017/02/03/fighting-fraud-with-machine-learning/; see also THE ECONOMIST, Machine-Learning Promises to Shake Up Large Swathes of Finance (May 25, 2017), https://www.economist.com/finance-and-economics/2017/05/25/machine-learning-promises-to-shake-up-large-swathes-of-finance.
74. See Intelligent Voice, Compliance Monitoring Enters AI Age at ING with Intelligent Voice and Relativity Trace (Dec. 3, 2018), https://www.intelligentvoice.com/2018/12/03/compliance-monitoring-enters-ai-age-at-ing-with-intelligent-voice-and-relativity-trace.
75. See, e.g., Edward L. Pittman, Quantitative Investment Models, Errors, and the Federal Securities Laws, 13 N.Y.U. J. L. & BUS. 633, 643-63 (2017).
76. See generally Thomas J. Brennan & Andrew W. Lo, Dynamic Loss Probabilities and Implications for Financial Regulation, 31 YALE J. ON REG. 667, 692-95 (2014); SCOTT PATTERSON, THE QUANTS: HOW A NEW BREED OF MATH WHIZZES CONQUERED WALL STREET AND NEARLY DESTROYED IT (2011).
77. See Amy Whyte, More Hedge Funds Using AI, Machine Learning, INSTITUTIONAL INVESTOR (July 19, 2018), https://www.institutionalinvestor.com/article/b194hm1kjbvd37/More-Hedge-Funds-Using-AI-Machine-Learning.
78. Id.
79. See Conrad De Aenlle, A.I. Has Arrived in Investing. Humans Are Still Dominating., N.Y. TIMES (Jan. 12, 2018).
80. See Bailey McCann, The Artificial-Intelligent Investor: AI Funds Beckon, WALL ST. J. (Nov. 5, 2017).


EquBot states that its mission is to "give everyone access to investment opportunities that artificial intelligence can uncover."81 Another potential use for artificial financial intelligence is in investment banking, where firms might use artificial intelligence algorithms to identify opportunities for acquisitions or divestments that might not be readily apparent to insiders.82
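A bare-bones version of an ML-driven trading signal might look like the sketch below: features built from recent returns feed a classifier that predicts next-day direction. Everything here is invented for illustration, and on the synthetic random-walk data used, out-of-sample accuracy should hover near chance, which is precisely why real strategies live or die on their data.

```python
# A sketch of a machine-learning trading signal. Prices are synthetic and the
# strategy is illustrative, not any firm's actual model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
returns = rng.normal(0.0003, 0.01, 2500)   # ~10 years of synthetic daily returns

lookback = 5
# Each row holds the previous 5 daily returns; the label is whether the
# next day's return was positive.
X = np.array([returns[t - lookback:t] for t in range(lookback, len(returns))])
y = (returns[lookback:] > 0).astype(int)

split = int(0.8 * len(X))                  # train on the past, test on the "future"
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X[:split], y[:split])

print(f"out-of-sample directional accuracy: {model.score(X[split:], y[split:]):.1%}")
# On pure noise this should sit near 50%; any genuine edge must come from the data.
```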

All of these developments suggest that artificial financial intelligence is poised to become an integral part of the financial industry. Financial firms are actively researching, and in many cases adopting, artificial intelligence strategies. These strategies have been particularly useful in assessing risk, preventing fraud, and investing capital. And the trend is clear: artificial financial intelligence is on the rise. As further proof, one need only look as far as the chartered financial analyst (CFA) exam, the prized certification for future investment professionals. In 2019, the exam added computer science, with a focus on artificial intelligence and data mining, as a new subject with which all aspiring CFAs must be familiar by 2020.83 Future bankers will know much more about artificial intelligence than they do today.

C. Existing Literature

The stunning success of artificial intelligence in recent years has sparked renewed interest in the field from academia. Scholars have begun to ask pointed questions about the economic, social, and moral consequences of turning over more and more of our lives to the control of algorithms. Legal scholars, as they are wont to do, have proposed new laws to constrain and guide the use of artificial intelligence. And while the literature is broad and continually growing, much like artificial intelligence itself, a few common threads can be identified in the scholarship. First, a group of scholars has argued that the progress of artificial intelligence risks widespread negative effects on labor markets. Second, a number of scholars have argued that the results of artificial intelligence are so difficult to explain, and, thus, to scrutinize and monitor, that artificial intelligence will end up undermining the regulatory state. Finally, a group of scholars has argued that artificial intelligence methods might perpetuate racial or other discrimination in broader society.

On the question of employment, a number of scholars have raised concerns about artificial intelligence's capacity to supplant and replace human workers.84

" EquBot Website, https://equbot.com. The ETF has, however, underperformed-from itsinception in October 2017 to the end of 2018, it rose 3.1%, while the S&P 500 rose 5.1%. SeeDe Aenlle, supra note 79.

82 See Elkan, supra note 58.8 See Trevor Hunnicutt, CFA Exam to Add Artificial Intelligence, "Big Data" Questions,

REUTERS, (May 23, 2017)." See MARTIN FORD, THE RISE OF THE ROBOTS: ThCHNOLOGY AND THE THREAT OF A

JOBLESS FUTURE (2016); Erik Brynjolfsson & Andrew McAfee, Human Work in the Robotic


Indeed, President Barack Obama issued a report on the topic, concluding that we need new policies and laws to "educate and prepare new workers to enter the workforce, cushion workers who lose jobs, keep them attached to the labor force, and combat inequality."85 The idea of machines replacing humans in the labor force is not new; they have been doing so since the Industrial Revolution. But artificial intelligence's vastly greater power, combined with its speed of improvement and breadth of application, presents these concerns on a grander scale. Some scholars argue that artificial intelligence will lead to job losses as it simply eliminates the need for human workers in many fields.86 Others argue that even if artificial intelligence does not lead to job losses, it will still lead to increased wealth disparities (or "income polarization") as the techno-savvy reap the benefits of cheaper labor and the techno-ignorant are relegated to low wages and long hours.87 In either case, the artificial intelligence revolution could cause major labor force disruptions.

Another group of scholars has argued that the problem with artificial intelligence is explainability.88 One of the issues with machine learning-produced results is that they are often difficult to interpret in easily understood language. Imagine, for example, if you asked a bank why they turned you down for a loan, and, in reply, they handed you the source code of their machine learning algorithm and the values of millions of automatically tuned algorithm parameters. This might be an accurate description of what they had in fact done to reach their result, but it would not be very helpful for understanding the motivation for the decision, or what you might do to change it. This is less of a problem in some fields than in others. For example, if a machine learning algorithm is better at spotting cancer than human experts, we probably do not care much about how it is doing it. If it turns out that the machine learning algorithm spotted a complex array of patterns that have no intuitive counterpart in human reason, we probably would not care, as long as it works. But when we turn from the world of facts (do I have cancer?) to a world of judgment (should I loan money to this person?), then the process and rationale of the decisionmaker suddenly takes on greater importance. Regulators need to understand algorithms in order to ensure they comply with the law.89

85 EXEC. OFFICE OF THE PRESIDENT, ARTIFICIAL INTELLIGENCE, AUTOMATION, AND THE ECONOMY (2016), https://obamawhitehouse.archives.gov/sites/whitehouse.gov/files/documents/Artificial-Intelligence-Automation-Economy.pdf.
86 See Michael Guihot et al., Nudging Robots: Innovative Solutions to Regulate Artificial Intelligence, 20 VAND. J. ENT. & TECH. L. 385 (2017).
87 See Cynthia Estlund, What Should We Do After Work? Automation and Employment Law, 128 YALE L.J. 254 (2018).
88 See Cary Coglianese & David Lehr, Transparency and Algorithmic Governance, 71 ADMIN. L. REV. 1 (2019); Andrew D. Selbst & Solon Barocas, The Intuitive Appeal of Explainable Machines, 87 FORDHAM L. REV. 1085 (2018); David Lehr & Paul Ohm, Playing with the Data: What Legal Scholars Should Learn About Machine Learning, 51 U.C. DAVIS L. REV. 653 (2017); Joshua A. Kroll et al., Accountable Algorithms, 165 U. PA. L. REV. 633 (2017).
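The gap between disclosure and explanation is easy to see in code. The following is a minimal sketch in Python using scikit-learn; the loan data is randomly generated and the two models are stand-ins for whatever a lender might actually deploy, so every name and number here is illustrative only:

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))   # 20 anonymous applicant features (synthetic)
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=1000) > 0).astype(int)

# The "full disclosure" of a boosted-tree model is thousands of
# automatically tuned values, none of which answers "why was I denied?"
opaque = GradientBoostingClassifier().fit(X, y)
print(sum(t[0].tree_.node_count for t in opaque.estimators_))

# A linear model at least yields a ranked list of contributing factors.
simple = LogisticRegression(max_iter=1000).fit(X, y)
ranked = sorted(enumerate(simple.coef_[0]), key=lambda p: -abs(p[1]))
print("top factors:", ranked[:3])

Handing a consumer the first printout satisfies a literal demand for transparency while explaining nothing; the second at least gestures at reasons, which is the distinction this scholarship is drawing.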


Consumers need to understand the algorithms to trust that they have been dealt with fairly, and to be able to change their behaviors.90 Indeed, the European Union has gone so far as to require companies that employ automated decisionmaking to provide "meaningful information about the logic involved."91 As one group of scholars has put it, "[b]ecause automated decision systems can return potentially incorrect, unjustified, or unfair results, additional approaches are needed to make such systems accountable and governable."92

In a similar vein, some scholars worry that the deployment of artificial intelligence could lead to disguised discrimination.93 The logic is as follows. Machine learning algorithms attempt to identify patterns in large data sets, and then draw conclusions from those patterns. In order to do their work, then, machine learning algorithms must be fed data, and large amounts of it. But that data itself is not necessarily neutral. It may (in fact, it likely will) incorporate some form of bias or discrimination from the outside world. And if the data is biased or discriminatory, then the machine learning algorithm itself may become biased or discriminatory, too. This could happen in one of two ways: first, a nefarious actor might intentionally feed the machine learning algorithm bad data;94 and second, a perfectly innocent, but naive, actor might unintentionally use data that has unrecognized bias in it.95 As an example of the first problem, intentional discrimination, if a firm (or a rogue employee at a firm) wanted to refrain from doing business with individuals of certain races or religions, it might construct a machine learning algorithm that looked unbiased but in fact encoded certain nefarious factors. After all, software engineers are the ones who have to make the hard decisions about what data to use, how to structure the data, and how to interpret it. Bias could creep in at any of these levels, and it might be difficult for outside observers to detect. But even if there is no intentional discrimination in the machine learning algorithm or the data set, it may still reflect the effects of past discrimination in the world. For example, if minorities received poor credit scores in the past due to discrimination, then machine learning algorithms might learn that these minorities should be scored lower than their peers.

89 See Selbst & Barocas, supra note 88, at 1089-90.
90 Id. at 1119-22.
91 Commission Regulation 2016/679, 2016 O.J. (L 119) 1.
92 Kroll et al., supra note 88, at 633.
93 See id. at 678-82; Citron & Pasquale, supra note 60, at 10-16; Margaret Hu, Algorithmic Jim Crow, 86 FORDHAM L. REV. 633 (2017); Barocas & Selbst, supra note 88; Anupam Chander, The Racist Algorithm?, 115 MICH. L. REV. 1023 (2017); Sonia K. Katyal, Private Accountability in the Age of Artificial Intelligence, 66 UCLA L. REV. 54, 54 (2019) (arguing that "[t]he issue of algorithmic bias represents a crucial new world of civil rights concerns, one that is distinct in nature from the ones that preceded it"); Kate Crawford, Think Again: Big Data, FOR. POL'Y (May 10, 2013); Talia B. Gillis & Jann Spiess, Big Data and Discrimination, 86 U. CHI. L. REV. 458 (2019) (arguing that machine learning algorithms hinder the application of discrimination laws).
94 See Lehr & Ohm, supra note 33, at 703-05.
95 Id.


Even if the algorithm explicitly could not take race into account, it might find that other factors that correlate with race (such as names, geography, or other information) are just as effective.96 While scholars do not have easy solutions to these problems, they have pointed out their deeply harmful effects.
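A stylized sketch shows how a model that never sees a protected attribute can still track it through a correlated proxy. The data below is synthetic, and the variable names and correlation strengths are invented purely for illustration:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, n)                # protected attribute (synthetic)
zip_code = group + rng.normal(0, 0.3, n)     # proxy: strongly correlates with group
income = rng.normal(50, 10, n)
# Historical labels embed past bias against group 1.
repaid = (income / 10 - 2.0 * group + rng.normal(0, 1, n) > 2).astype(int)

X = np.column_stack([zip_code, income])      # the protected attribute is excluded
model = LogisticRegression().fit(X, repaid)
scores = model.predict_proba(X)[:, 1]
print("mean score, group 0:", scores[group == 0].mean())
print("mean score, group 1:", scores[group == 1].mean())  # systematically lower
# The model never saw "group", yet its scores reproduce the disparity.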

II. ARTIFICIAL FINANCIAL INTELLIGENCE

Artificial intelligence has grown exponentially in recent years as more powerful computers apply more advanced algorithms to analyze more expansive data sets. Financial institutions have increasingly deployed these artificial intelligence strategies to improve their own results. Scholars have, in turn, begun to grapple with the thorny legal and ethical issues raised by the incorporation of artificial intelligence into the financial world. They have raised questions about its effects on labor, its inscrutable outputs, and its capacity for harmful discrimination. All of these are important problems, and ones that apply to the financial world. But at the heart of these critiques is the assumption that artificial intelligence's dangers are rooted in its effectiveness. In other words, artificial intelligence may lead to widespread job losses, opaque financial decisions, or even discriminatory results, because it is so capable at what it does. It is simply better at reading data and finding patterns than humans are, and in doing so, it may create risks. But this Part will argue that artificial financial intelligence's primary risks are not the product of its potential to surpass human intelligence. To date, prognostications about artificial intelligence's "Cambrian explosion" are exaggerated. Instead, the real risk from artificial financial intelligence is its capacity to magnify human failings. If artificial intelligence ends up being incorporated widely in the financial world, its harms will likely stem from inadequate data input, improper usage, and feedback effects between algorithms. These problems are dangerous and must be contained, but they are also, in a sense, less radical than some of the concerns popular in the media today.

A. Data Dependency

A common critique of artificial intelligence is that it relies heavily on prior data sets.97 As mentioned earlier, the machine learning algorithms that have become the focus of current artificial intelligence research depend on receiving and analyzing large amounts of data. This leads to several related problems, some of which are specific to the financial sector, and others of which are more general. As The Economist has written, "the world's most valuable resource is no longer oil, but data."98

96 See Citron & Pasquale, supra note 60, at 14; Rory Van Loo, The Corporation as Courthouse, 33 YALE J. ON REG. 547, 579-80 (2016) (observing that businesses' algorithms may lead to racial discrimination by using socioeconomic factors as a proxy); Anya Prince & Daniel Schwarcz, Proxy Discrimination in the Age of Artificial Intelligence and Big Data, 105 IOWA L. REV. 1257 (2020).
97 See Brummer & Yadav, supra note 11, at 274-75.
98 See The World's Most Valuable Resource Is No Longer Oil, But Data, THE ECONOMIST (May 6, 2017), https://www.economist.com/leaders/2017/05/06/the-worlds-most-valuable-resource-is-no-longer-oil-but-data.


First, the use of artificial financial intelligence will cause important financial decisions to increasingly turn not on human judgments about the wisdom or soundness of the decision, but rather on the quality and nature of the data that are fed into machine learning algorithms. This information is, necessarily, determined by humans. Its effectiveness as a training tool will thus depend heavily on the knowledge and sophistication of the employees who decide which data to use and how to appropriately code it for machine learning algorithms.99 IBM's Watson supercomputer, which uses machine learning techniques and was widely billed as a potentially lifesaving cancer-detecting tool, was recently found to be issuing "unsafe and incorrect" treatment recommendations. While the causes are unclear, some observers believe it is due to engineers training the software on small numbers of hypothetical cancer cases, rather than real patient data.100 Others believe that the machine learning algorithm simply does not do a good job at handling rare or recurring cancers because it simply lacks data on them.101 And the financial world has seen similar mistakes. McKinsey reported that one Asia Pacific bank lost $4 billion when it applied interest rate models based on improperly entered data.102 Thus, the use of machine learning in financial decisions raises concerns about potentially harmful inaccuracies contained in data sets, as well as the possibility that these data sets will tend to miss out on rare but important financial occurrences (the black swan types of events that many investors focus on).
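The sensitivity to data entry can be reduced to a toy model. Suppose, hypothetically, that interest rates are mis-entered as percentages rather than decimals; the loss relationship below is invented for illustration:

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
rates = rng.uniform(0.01, 0.08, 500)             # true rates, as decimals
losses = 100 * rates + rng.normal(0, 0.5, 500)   # hypothetical loss relation

clean = LinearRegression().fit(rates.reshape(-1, 1), losses)
garbled = LinearRegression().fit((rates * 100).reshape(-1, 1), losses)

new_rate = np.array([[0.05]])                    # a new rate arriving in decimals
print("clean model:", clean.predict(new_rate)[0])     # a sensible estimate
print("garbled model:", garbled.predict(new_rate)[0])  # off by two orders of magnitude

The model itself is not "wrong" in either case; it faithfully learned whatever its keepers fed it, which is precisely the problem.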

Second, even if the data that is used in financial applications of machine learning is not flawed, its use may create a series of related problems in market efficiency. For one, the data may poorly reflect the nature of financial markets. Machine learning algorithms, by their very nature, aim to find statistical trends in data and then generalize them so they work for new data. This can be quite useful in a number of contexts, but it tends to run into problems when the nature of the field changes over time. This is known as the problem of "non-stationary" behavior.103

99 See Brummer & Yadav, supra note 11, at 274-75.
100 See Julie Spitzer, IBM's Watson Recommended "Unsafe and Incorrect" Cancer Treatments, STAT Report Finds, BECKER'S HEALTH IT & CIO REPORT (July 25, 2018), https://www.beckershospitalreview.com/artificial-intelligence/ibm-s-watson-recommended-unsafe-and-incorrect-cancer-treatments-stat-report-finds.html.
101 See Daniela Hernandez & Ted Greenwald, IBM Has a Watson Dilemma, WALL ST. J. (Aug. 11, 2018), https://www.wsj.com/articles/ibm-bet-billions-that-watson-could-improve-cancer-treatment-it-hasnt-worked-1533961147.
102 See Philipp Harle et al., The Future of Bank Risk Management, MCKINSEY & COMPANY (July 2016), https://www.mckinsey.com/business-functions/risk/our-insights/the-future-of-bank-risk-management.
103 See MASASHI SUGIYAMA & MOTOAKI KAWANABE, MACHINE LEARNING IN NON-STATIONARY ENVIRONMENTS: INTRODUCTION TO COVARIATE SHIFT ADAPTATION (2012).


As noted before, scholars have pointed out that if a data set reflects a biased world (such as low credit scores for racial minorities), then machine learning algorithms based on those data sets may well reinforce those biases. This is a major problem, and has slightly different valences in the financial sector, which has a major influence on the direction of economic development. Imagine, for example, a machine learning algorithm that was trained on a data set from the period of 1995 to 2000. Basing its recommendations on the relative success of various industries during the dotcom boom, it might have concluded that, as a general rule, technology startups vastly overperformed other sectors. It might well have recommended that internet-focused companies such as Pets.com (which raised $82 million in an IPO in 2000 before going bankrupt just nine months later)104 or GeoCities (which was bought by Yahoo for $3.6 billion in 1999 but quickly collapsed)105 were excellent investments. The machine learning algorithm, by basing its recommendations on prior data from a market that was fundamentally different from the market post-2000, was simply ineffective. The data was incurably biased. Similarly, the use of machine learning might lead to the prolongation or even perpetuation of short-term trends in markets, to the detriment of economic efficiency. Without data from the period after 2000, when the dotcom bubble burst, the machine learning algorithm might well have recommended investments in the tech sector and, by doing so, reinforced preexisting patterns of industry focus. One potential consequence of this is that bubbles could become more dramatic: machine learning algorithms could magnify momentum in particular sectors or trends, leading to eventual and catastrophic collapse. Another potential consequence, and one that is perhaps even more troubling, is that artificial financial intelligence might prevent bubbles from bursting. In other words, if a large section of financial institutions uses machine learning algorithms to decide on their investment strategies, and those machine learning algorithms conclude that, say, the technology sector outperforms other sectors, then it may be difficult for the market to self-correct. Other industries might simply wither away from lack of investment or access to capital.
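A few lines of Python capture the dynamic, using simulated rather than historical returns; the regime parameters are invented for illustration:

import numpy as np

rng = np.random.default_rng(3)
boom = rng.normal(0.15, 0.05, 60)    # monthly "tech" returns, boom regime (simulated)
bust = rng.normal(-0.10, 0.05, 24)   # the same sector after the regime changes

# A naive learner: estimate the expected return from historical data alone.
expected = boom.mean()
print(f"learned expectation: {expected:.2%}")   # strongly positive

# Applied out of sample, the learned rule stays long and loses steadily.
position = 1.0 if expected > 0 else 0.0
pnl = position * bust
print(f"realized mean return: {pnl.mean():.2%}")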

Third, the dependence of machine learning algorithms on data creates strong incentives for financial institutions not only to put the data they currently have to better use, but also to gather more of it.106 If the quality of artificial intelligence as a financial tool grows as the size of its data set increases, then financial firms may search for new ways to get increasingly intrusive and sensitive data on citizens.

104 See Pui-Wing Tam & Mylene Mangalindan, Pets.com Will Shut Down, Citing Insufficient Funding, WALL ST. J. (Nov. 8, 2000), https://www.wsj.com/articles/SB973617475136917228.
105 See Robert Cyran, Yahoo Is a Case Study in Poor Timing, N.Y. TIMES (July 26, 2016), https://www.nytimes.com/2016/07/27/business/dealbook/yahoo-is-a-case-study-in-poor-timing.html.
106 See Calo, supra note 31, at 420-23.


Their practices could quickly raise privacy concerns. Part of the appeal of artificial intelligence is that it aims to find patterns and relations that humans would never consider. But it cannot find these patterns if it does not have data to work on. Engineers or investment advisers that decide what information to include in data sets therefore have a potentially unlimited world of information to scour. Perhaps your Facebook friend list contains useful information about your credit score. Perhaps the prior personal relationships of a CEO have correlations with company performance. Perhaps the email patterns, mobile phone conversations, or locations of consumers contain relevant information for mortgage default rates. We have already seen a backlash against the tech giants for their gathering, storage, and use of user data.107 But as the stories of privacy violations proliferate, the value of data to firms is becoming increasingly clear. Artificial intelligence may accelerate that trend.

Finally, and on a related note, the data dependency of artificial intelligence gives significant advantages to firms that have access to large data sets.108 Large Wall Street firms are sprawling, multi-faceted companies that operate in hundreds of different markets and sectors. This means that they can see and track more information than anyone else. And at the risk of repetition, access to data in a machine learning environment is a competitive advantage. At the extreme, this might lead to large financial institutions taking an insurmountable dominant position that no smaller startups could challenge. This worry about artificial intelligence's benefits to the powerful has been raised in the national security world. A recent study by the Belfer Center at Harvard University concluded that, at least in the near term, "bringing AI technology applications into the cyber domain will benefit powerful nation-state actors."109 The most powerful nation states are the actors that have the scientific knowledge, research budgets, and access to information that are necessary to develop artificial intelligence systems. The same might be said of financial institutions: large financial institutions can hire away artificial intelligence experts from academia, industry, and their competitors, and they have the capital to fund long-term research into artificial intelligence's applications for finance. Small startups simply do not have this capacity. Over time, this trend might exacerbate the already sizeable amount of concentration in the financial sector. To be sure, the long-term consequences of artificial intelligence might be to level the playing field, not tilt it. Machine learning algorithms are, after all, algorithms, which can be copied and disseminated much more easily than hard assets. To the extent that prominent artificial intelligence strategies become widely accessible and usable by new financial firms, they may serve as a powerful equalizing factor in financial markets. But until then, the concern about concentration in the financial sector will persist, and may, at the extreme, create antitrust problems.110

The data dependency problem that artificial intelligence struggles with, and that raises issues for the financial sector, highlights a broader feature of our financial system and the potential limitations of artificial financial intelligence. Capital markets simply are not like the kinds of problems that artificial intelligence has, to date, excelled at. In cancer screening, image recognition, and language cognition, there is a fact that the artificial intelligence algorithm is attempting to discern: whether someone has cancer, what that image depicts, what that person said. But in capital markets, there is no similar factual counterpart. To be sure, there are factual elements of markets, from EBITDA to market capitalization to employee numbers and product sales (although even these can be fiddled with). But the fundamental feature of capital markets, the allocation of value to companies, is not so much a fact as a belief. In order to outperform the market, one needs to understand not only the facts about a company, but also how other people will view those facts. This is important because the very fact that a trend has been discovered often means that the trend disappears. Momentum, small capitalization, low volatility: all of these are market trends that have, at one time or another, delivered market-superior returns.111 But once others learn of the trend, they can quickly adopt strategies that take advantage of it, and the very fact of their adopting those strategies eliminates the market-superior returns. This is a fundamental feature of capital markets. Indeed, it is the underlying basis of efficient capital markets theory. Artificial intelligence is not well-equipped to deal with these sorts of problems.

B. Overweighting

Another issue with the use of artificial financial intelligence is the potential for human decisionmakers to rely too heavily on its recommendations. Given the difficulty of explaining and understanding machine learning algorithms and the outputs they generate, financial decisionmakers, including consumers, might simply default, without deliberation or debate, to accepting the conclusions or recommendations that the machine learning algorithms make. Even if decisionmakers are aware of the limitations of machine learning, in the absence of clear methods for refuting or disproving the artificial intelligence outcome, artificial intelligence may take on undue weight in the structure of the financial industry.112

107 See Anita L. Allen, Protecting One's Privacy in a Big Data Economy, 130 HARV. L. REV. F. 71, 71-72 (2016).
108 See Calo, supra note 31, at 424.
109 See ALLEN & CHAN, supra note 7, at 20.
110 See Van Loo, supra note 9, at 251.
111 See Is Efficient-Market Theory Becoming More Efficient?, THE ECONOMIST (May 27, 2017), https://www.economist.com/finance-and-economics/2017/05/27/is-efficient-market-theory-becoming-more-efficient.
112 Of course, this may be desirable if we believe that financial decisionmakers are already biased in problematic ways. Leaning on less-biased, or less harmfully biased, machines may then be preferable. See Holger Spamann & Lars Klöhn, Justice Is Less Blind, and Less Legalistic, Than We Thought: Evidence from an Experiment with Real Judges, 45 J. LEG. STUD. 255, 255 (2016) (finding that judges' decisions were affected by irrelevant characteristics of defendants but failed to mention these characteristics in their written reasoning).


We have already discussed one way in which machine learning algorithms fail: they often struggle with problems that possess non-stationary qualities.113 When the fundamental nature of the market studied (for example, the credit risk of borrowers, the stock prices of companies, or the methods of fraudsters) changes from one period to the next, the patterns and trends that a machine learning algorithm can identify from one period's data will not apply to the next. The non-stationary problem is particularly pronounced where the mere knowledge of the trend may lead to the trend itself disappearing, as might be the case if everyone knew that, say, the stock market always rose 10% every year. Arbitrageurs could simply immediately push the price of the stock up at the beginning of the year or even earlier. The trend would then disappear.

But machine learning algorithms also struggle with other known problems. One is "overfitting."114 The goal of machine learning algorithms is to find statistical trends in data and then generalize them so that they work for new data. But if the algorithm generalizes in a way that incorporates trends that are based on random factors that do not apply outside the particular data set, then it is said to overfit the data; that is, it finds trends that match the training data but do not match the outside world. To take a simplified version of this, imagine if a machine learning algorithm only had data on two companies, Apple and Kodak. It might conclude that all companies that start with "A" outperform the market, and all companies that start with "K" go bankrupt. This would obviously fit the training data perfectly, but it likely would not generalize to the wider world of non-Apple and non-Kodak companies. The process of finding trends that do not just fit the available data, but will also generalize well to future or outside data, is a difficult one, and one that machine learning algorithms have a hard time doing.
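The Apple/Kodak hypothetical translates directly into code. A minimal sketch, with the tickers, features, and outcomes invented for illustration:

from sklearn.tree import DecisionTreeClassifier

# Feature: does the company name start with "A"? Label: 1 = outperformed.
X_train = [[1], [0]]        # Apple -> starts with "A"; Kodak -> does not
y_train = [1, 0]            # Apple outperformed; Kodak went bankrupt

model = DecisionTreeClassifier().fit(X_train, y_train)
print(model.score(X_train, y_train))   # 1.0: a perfect fit to the training data

# "Amazon" starts with "A", "Kmart" does not; the rule is spurious,
# but the model applies it to them with full confidence.
print(model.predict([[1], [0]]))       # [1, 0]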

Another known problem for artificial intelligence algorithms is low-prevalence data.115 The idea here is that machine learning algorithms need a certain threshold amount of data on a given topic before being able to make reliable predictions about it. But where certain subpopulations within a data set are rare and different from the broader data set, machine learning algorithms may simply fail to draw accurate conclusions about them. This has been a serious problem in facial recognition algorithms: one study found that they incorrectly classified only 1% of light-skinned men, but more than one third of dark-skinned women.116
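A short sketch on synthetic data shows why headline accuracy can mask the problem: with the rare subpopulation at 1% of the sample, a model that never identifies a single member of it still scores 99%:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 10_000
X = rng.normal(size=(n, 5))              # features carry no real signal (synthetic)
y = (rng.random(n) < 0.01).astype(int)   # 1% rare "unicorn" subpopulation

model = LogisticRegression().fit(X, y)
pred = model.predict(X)
print("accuracy:", (pred == y).mean())                        # ~0.99
print("unicorns found:", pred[y == 1].sum(), "of", y.sum())   # ~0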

113 See SUGIYAMA & KAWANABE, supra note 103, at xi.
114 See STUART J. RUSSELL & PETER NORVIG, ARTIFICIAL INTELLIGENCE: A MODERN APPROACH 705 (3d ed. 2010); RICHARD BERK, STATISTICAL LEARNING FROM A REGRESSION PERSPECTIVE 142 (2008); Lehr & Ohm, supra note 33, at 684.
115 See Lehr & Ohm, supra note 33, at 678-79.


In the financial world, this might lead to similar problems, such as incorrectly risk-scoring minorities, but it might also lead to quite different ones, such as failing to identify uncommon but significant subpopulations of a market, such as unicorn startups that have provided enormous returns to venture capitalists.117

To be sure, machine learning experts are well aware of these problems. They have devised ways to deal with non-stationary markets, overfitting algorithms, and low-prevalence data. But these methods are imperfect and, at most, can limit the problems, not resolve them. One researcher I spoke with has a rule of thumb for assessing the claims of new artificial intelligence products, something he calls the "87% rule." If an engineer or entrepreneur claims that his artificial intelligence software can achieve accuracy rates of higher than 87% purely by fitting data, then the researcher assumes that the software is either overfitting or unrealistically constraining the prediction problem in a way that does not accurately reflect real-world conditions. If, instead, data-driven predictions provide accuracy around 87% and domain expertise or insight are layered in to boost accuracy above that number, then he deems it consistent with systems that have been deployed successfully in practice. In other words, even with cutting-edge machine learning algorithms deployed by experts in the field, the best-case result for most artificial intelligence strategies still fails a significant portion of the time.

However, there are a number of reasons to believe that, once financial firms have incorporated artificial intelligence into their systems, it will be hard to resist its persuasiveness.118 The ready availability of a sophisticated recommendation based on millions of data points, and the relative difficulty of scrutinizing the reasoning or rationale of the recommendation, may lead banks and firms to overweight artificial intelligence relative to other decisionmaking processes. As one doctor has said about the use of artificial intelligence in hospitals, "In my practice, I've often seen how any tool can quickly become a crutch - an excuse to outsource decision making to someone or something else."119 This anecdotal evidence is only bolstered by our growing knowledge of common cognitive biases in decisionmaking.

One well-known bias in human decisionmaking is the availability bias.120 Availability bias refers to the tendency of decisionmakers, in the face of uncertainty, to use readily available facts to inform their beliefs.121

116 Joy Buolamwini & Timnit Gebru, Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, 81 PROC. MACHINE LEARNING RES. 1 (2018), http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf.
117 See Jennifer S. Fan, Regulating Unicorns: Disclosure and the New Private Economy, 57 B.C. L. REV. 583 (2016).
118 See Nizan Geslevich Packin, Consumer Finance and AI: The Death of Second Opinions?, N.Y.U. J. LEGIS. & PUB. POL'Y (forthcoming 2020).
119 Dhruv Khullar, A.I. Could Worsen Health Disparities, N.Y. TIMES (Jan. 31, 2019), https://www.nytimes.com/2019/01/31/opinion/ai-bias-healthcare.html.
120 See Christine Jolls et al., A Behavioral Approach to Law and Economics, 50 STAN. L. REV. 1471, 1477 (1998).


This seems like an obvious, and even rational, response to uncertainty: when we do not know something, we use the facts we have at hand. But the result of this natural instinct is that, where the availability bias is present, decisionmakers tend to overweight the representativeness of easily recallable information, and underweight more complex or distant information.122 This bias is amplified when individuals lack sufficient information to form independent conclusions about an issue.123 In the financial world, where uncertainty is an unavoidable feature of the industry, the availability of an easily accessible recommendation, provided by an algorithm and not susceptible to ready refutation, might well cause people to rely excessively on the algorithm over independently reasoned deliberation. They might simply default to the algorithmically generated output.

Another well-studied cognitive bias is the anchoring effect.124 The anchoring effect refers to people's tendency to be swayed by initial reference points, or anchors.125 The classic example is in housing prices: when people are asked how much they believe a house is worth, their answer will depend heavily on how much they are told the seller is asking for it.126 The result of setting an anchor is that decisionmakers will tend to use it as a reference point, and then make adjustments from that point, typically departing from it in only incremental and minimal ways.127 The application to artificial financial intelligence is clear. If an algorithm provides a recommended value for an asset, or a risk score for a potential borrower, it may serve as a strong anchor, informing and biasing subsequent discussion. Even if the ultimate decisionmaker is aware of the flaws and limitations of machine learning, the very process of providing a number or calculation will affect future decisions.

Finally, and perhaps most powerfully, artificial financial intelligence may take on undue weight in financial decisions because of a phenomenon known as "herd behavior."128

121 See Amos Tversky & Daniel Kahneman, Extensional Versus Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment, 90 PSYCHOL. REV. 293, 295 (1983); Jolls et al., supra note 120, at 1477.
122 Jolls et al., supra note 120, at 1477.
123 See Timur Kuran & Cass R. Sunstein, Availability Cascades and Risk Regulation, 51 STAN. L. REV. 683, 685-87 (1999).
124 See Marcel Kahan & Michael Klausner, Path Dependence in Corporate Contracting: Increasing Returns, Herd Behavior and Cognitive Biases, 74 WASH. U. L.Q. 347, 362 (1996); Gregory B. Northcraft & Margaret A. Neale, Experts, Amateurs, and Real Estate: An Anchoring-and-Adjustment Perspective on Property Pricing Decisions, 39 ORGANIZATIONAL BEHAV. & HUM. DECISION PROCESSES 84 (1987); Eric A. Zacks, Contract Review: Cognitive Bias, Moral Hazard, and Situational Pressure, 9 OHIO ST. ENTREPRENEURIAL BUS. L.J. 379, 394 (2015).
125 See Kahan & Klausner, supra note 124, at 362.
126 See Northcraft & Neale, supra note 124, at 87-90.
127 Id.
128 See David S. Scharfstein & Jeremy C. Stein, Herd Behavior and Investment, 80 AM. ECON. REV. 465, 465 (1990); Kahan & Klausner, supra note 124, at 356.


Herd behavior is a relatively intuitive concept: when people see other people taking an action, or refraining from doing so, they often imitate it, particularly if they have incentives to avoid diverging too much from the status quo.129 It is not, strictly speaking, irrational to do so. Individuals in management positions are often more likely to be punished for taking an unusual risk or action than for taking a common one, even if they both turn out equally poorly.130 After all, if they are simply doing what everyone else is doing, it is hard to argue that they misbehaved. But if they are underperforming their peers in material ways, then their reputation will likely suffer. In the world of artificial financial intelligence, it is easy to imagine that following the recommendation of a machine learning algorithm would quickly become the low-risk, reputation-protecting action. If a decisionmaker follows an artificial intelligence-based recommendation, they can deflect blame to the algorithm itself. On the other hand, if they intentionally refuse to accept the artificial intelligence-based recommendation, then they will have some explaining to do if their divergent opinion turns out to be incorrect.

To be sure, all of these biases are avoidable with sufficient care, diligence, and monitoring. Much of cognitive psychology is concerned with finding ways to debias individuals and correct for these errors. But the biases highlight another important feature of artificial financial intelligence: however much we may want to eliminate human interaction or interference with artificial intelligence, it ultimately creeps back in, often in unexpected places. Financial decisionmakers need to review artificial intelligence recommendations based on their own knowledge about fundamental changes in markets, changes that might not be well reflected in prior data sets. Financial engineers must review artificial intelligence for overfitting problems. But in the absence of easily understandable results, decisionmakers may simply defer to the artificial intelligence algorithm, either out of ignorance, risk aversion, or simply inertia. The problem here is not so much the algorithms themselves, but our trust in them.

C. Echo Effects

Finally, artificial financial intelligence may lead to unexpected feedback effects between competing artificial intelligence systems. This is a well-known problem in the world of high-frequency trading.131 After the so-called "Flash Crash" of 2010, in which the Dow Jones plummeted 1000 points in a matter of minutes, many observers blamed high-frequency traders for increasing the speed and volatility of markets.132

129 See Scharfstein & Stein, supra note 128, at 465.
130 See Kahan & Klausner, supra note 124, at 356.
131 See Yesha Yadav, How Algorithmic Trading Undermines Efficiency in Capital Markets, 68 VAND. L. REV. 1607, 1617-31 (2015); Pankaj K. Jain et al., Does High-Frequency Trading Increase Systemic Risk?, 31 J. FIN. MKTS. 1, 1 (2016); Frank J. Fabozzi et al., High-Frequency Trading: Methodologies and Market Impact, 19 REV. FUTURES MKTS. 7, 9-10 (2011).


And while the likelihood of echo effects between artificial intelligence systems may be strongest in capital markets, where stock prices depend heavily on the decisions of other players within the market, they may also appear in other areas of finance, such as credit scoring, fraud prevention, and strategic advising.

One concern is that decisions made by artificial financial intelligence systems may not truly be independent of one another. In other words, if financial institutions are all deploying similar machine learning algorithms on similar data, they may reach similar results. When humans make decisions, they at least nominally are doing so on their own (even if their decisions may be affected by the decisions of their peers). Not so for artificial intelligence. It is much simpler to copy an algorithm than it is to copy a human brain. As one scholar has written in the national security context:

When China stole the blueprints and R&D data for America's F-35 fighter aircraft . . . it likely shaved years off the development timeline for a Chinese F-35 competitor - but China didn't actually acquire a modern jet fighter or the immediate capability to make one. . . . But when a country steals the code for a cyberweapons [sic], it has stolen not only the blueprints, but also the tool itself - and it can reproduce that tool at near zero-marginal cost.133

If two competitors base their artificial intelligence strategies on the same algorithms, they will likely reach the same conclusions about relevant problems. This may well amplify both the speed and size of market swings, as financial institutions increasingly adopt broadly consistent views of the market.

Another concern is that, even if financial institutions do not adopt the same artificial intelligence strategies, their separate strategies may still lead to worrying feedback effects. One way this might occur is by machine learning algorithms incorporating the results of other machine learning algorithms into their data sets. For example, if one financial institution's artificial intelligence system concludes that a borrower should not be given a loan, or should only be given a loan at a certain interest rate, then that result may filter down into the data sets that are used to train other financial institutions' artificial intelligence systems. A loan rejection might, then, lead to more loan rejections. This could happen even more quickly in liquid markets like stock exchanges, where artificial intelligence strategies depend on quickly identifying and acting on changes in the market.

132 See Yadav, supra note 131, at 1628-29; Andrei Kirilenko et al., The Flash Crash: The Impact of High Frequency Trading on an Electronic Market (May 5, 2014), https://www.cftc.gov/sites/default/files/idc/groups/public/@economicanalysis/documents/file/oce-flashcrash0314.pdf.
133 See Greg Allen, America's Plan for Stopping Cyberattacks Is Dangerously Weak, VOX (Mar. 27, 2017), https://www.vox.com/the-big-idea/2017/3/27/15052422/cyber-war-diplomacy-russia-us-wikileaks.


So even if investment companies are using different artificial intelligence systems, the results of each artificial intelligence system will affect the others. The overall outcome of these interactions is uncertain, but, as before, might lead to more sudden or volatile market movements.134
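A stylized simulation illustrates the mechanism. Here each firm's algorithm nudges its output toward the average of the latest outputs it observes from the others; the weights and starting values are invented for illustration:

n_firms = 5
scores = [0.70, 0.72, 0.71, 0.69, 0.73]   # initial credit scores, firm by firm
scores[0] = 0.40                          # one firm's model issues a downgrade

for round_ in range(6):
    # Each firm pulls its score halfway toward the market consensus.
    consensus = sum(scores) / n_firms
    scores = [0.5 * s + 0.5 * consensus for s in scores]
    print(round_, [round(s, 3) for s in scores])
# Within a few rounds every firm converges on the depressed score,
# even though no new facts about the borrower ever arrived.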

Finally, another concern, and one that is written about widely in the artificial intelligence industry, is the possibility of adversarial artificial intelligence.135 Adversarial artificial intelligence (or machine learning) refers to the intentional manipulation of input data in order to fool artificial intelligence systems or lead them to unintended results.136 If an adversary or competitor knows how a given artificial intelligence system works, it can often devise strategies that lead the system to malfunction. One way to do this is by inserting poisoned data into the system. A good example of this is image recognition in self-driving cars. Researchers have shown that by simply putting a few black and white stickers on a stop sign, they can trick machine learning algorithms into misclassifying it 100% of the time, despite the fact that the stop sign would still be readily identifiable to any human.137 As more of the financial world adopts artificial intelligence algorithms into their business strategies, the potential for adversarial strategies rises exponentially. For example, if a criminal learns that a bank's algorithm identifies transactions as fraudulent only when certain patterns of behavior occur, he can simply avoid the behaviors that trigger a fraud alert. Or if borrowers know the algorithm by which lenders rate their credit risk, they can simply manipulate the relevant variables to ensure they are viewed as creditworthy. Perhaps even more worrisome is the possibility that a competitor might manipulate stock markets based on knowledge of how other financial institutions' investment algorithms work. Cybersecurity is already a major worry for financial institutions. This worry only increases as algorithms perform more and more of the significant actions present in financial markets.
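The fraud example can be reduced to a toy sketch. Assume, purely hypothetically, that the bank's model has effectively learned a single dollar threshold, and that the adversary can observe which transactions get flagged:

THRESHOLD = 10_000            # the (hypothetical) pattern the model learned

def flags_fraud(amount: float) -> bool:
    return amount >= THRESHOLD

payment = 45_000
print(flags_fraud(payment))                # True: one large transfer is caught

# Adversarial structuring: same total, no single alert.
chunks = [9_000] * 5
print([flags_fraud(c) for c in chunks])    # all False
print(sum(chunks) == payment)              # True

Real fraud models are far more complex than a single threshold, but the logic of evasion is the same: any fixed decision boundary that an adversary can probe becomes a map of exactly where not to step.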

III. REGULATING ARTIFICIAL FINANCIAL INTELLIGENCE

Artificial financial intelligence raises a number of potential concerns, from its perpetuation of historical trends, to its effect on human decisionmaking, to its capacity to magnify and exacerbate volatility. But the mere possibility of harm, without more, does not suggest that we need new laws, or even new regulatory approaches. To justify new laws, we need to know that current laws are not working. To date, this threshold has not been met. The financial industry is one of the most heavily regulated industries in the United States, and the kaleidoscope of regulators and regulations covers nearly every activity that a financial institution could conceivably think of doing.

134 See Tom C. W. Lin, The New Financial Industry, 65 ALA. L. REV. 567 (2014).
135 See Calo, supra note 31, at 419-20; Tom C. W. Lin, Financial Weapons of War, 100 MINN. L. REV. 1377 (2016).
136 See J. D. Tygar, Adversarial Machine Learning, 15 IEEE INTERNET COMPUTING 4, 4 (2011).
137 See Kevin Eykholt et al., Robust Physical-World Attacks on Deep Learning Visual Classification, arXiv:1707.08945 (Apr. 2018), https://arxiv.org/abs/1707.08945.


The Consumer Financial Protection Bureau has authority to protect consumers from unfair or abusive practices in the financial markets, and it has been proactive in monitoring new financial technologies.138 The Securities and Exchange Commission (SEC) is tasked with protecting investors and promoting fair, efficient capital markets, and it has itself adopted machine learning methods to monitor markets.139 The Financial Stability Oversight Council monitors the stability of the nation's financial system and responds to emerging risks, and it regularly reports on the effects of automated trading systems and financial technologies.140 Artificial financial intelligence has yet to be shown to fall into gaps in this framework.

That said, this does not mean that current laws are perfectly attuned to the risks of artificial intelligence. It also does not mean that new uses of artificial intelligence will not render current financial regulation ineffective. If anything, it is hoped that this Article serves to highlight to bankers, policymakers, judges, and academics the importance of artificial financial intelligence as a field, and to highlight the need for thoughtful discussions both of how it is being used and how it should be used.

In particular, regulators should keep in mind three broad goals in adopting policies for addressing artificial financial intelligence: ensuring fairness, facilitating efficiency, and promoting stability. These goals are not particularly novel. They are the hallmarks of financial regulation, and they form the backbone of most statutory frameworks in the industry.141 They do, however, have different valences when applied to the world of artificial financial intelligence. This Part will discuss the ways in which financial regulators should use their existing authority to ensure that artificial financial intelligence does not undermine the free and fair functioning of financial markets.

A. Artificial Fairness

First, regulators should review artificial financial intelligence to ensure that it is being implemented in a way that treats consumers and investors fairly. Fairness here is meant to be an inclusive term. Fair markets are ones that are free of discrimination,142 abusive practices,143 and fraud.144 These are broad and open-ended concepts, but they are essential underpinnings of our financial system.

138 See Van Loo, supra note 11, at 532-33.
139 See Scott W. Bauguess, Acting Dir. and Acting Chief Economist, DERA, Champagne Keynote Address at OpRisk North America 2017: The Role of Big Data, Machine Learning, and AI in Assessing Risks: A Regulatory Perspective (June 21, 2017), https://www.sec.gov/news/speech/bauguess-big-data-ai.
140 See FINANCIAL STABILITY OVERSIGHT COUNCIL, 2018 ANNUAL REPORT (2018), https://home.treasury.gov/system/files/261/FSOC2018AnnualReport.pdf.
141 See William Magnuson, Financial Regulation in the Bitcoin Era, 23 STAN. J.L. BUS. & FIN. 159 (2018).
142 See Citron & Pasquale, supra note 60, at 16-18.


More importantly, they have been given tangible and well-defined meaning by regulators tasked with overseeing the industry.

In terms of discrimination, regulating artificial financial intelligence will require measures to prevent algorithms from discriminating against particular disfavored or minority groups. There is already ample legislative authority to act on this front. The Equal Credit Opportunity Act prohibits lenders from discriminating against potential borrowers on the basis of race, color, religion, national origin, sex, marital status, or age.145 The Fair Housing Act prohibits banks from considering similar characteristics when making mortgage decisions.146 Regulation B prohibits discrimination in credit scoring systems.147 The key here will be identifying when discrimination is occurring on prohibited grounds. Given the complex and, in many cases, inscrutable operations of artificial intelligence algorithms, regulators will need to develop methods for auditing financial algorithms and identifying root causes.148 After all, there are many other variables that are correlated with race, sex, religion, and age, and artificial intelligence algorithms might well identify those variables and use them in impermissible ways, even if they are prevented from directly using the prohibited characteristics in decisions.

Promoting fairness in artificial financial intelligence will also require greater efforts to ensure that financial institutions properly disclose the nature of artificial intelligence risks to investors. These disclosures must be both accurate and understandable. Again, there is ample legislative authority for regulators to act here. The Securities Exchange Act of 1934 prohibits manipulation, deception and fraud in connection with the sale of securities.149 Rule 10b-5 prohibits individuals from making untrue statements of material fact or omitting material facts necessary to make statements not misleading in connection with the sale of securities.150 The Investment Company Act requires financial institutions to disclose ample information about investment policies.151 The Investment Advisers Act requires financial advisors to give suitable investment advice to consumers, taking into account the client's financial situation, investment experience, and investment objectives.152 Regulators should use this broad legislative authority to set forth clear guidelines on what sorts of artificial financial intelligence products are appropriate for investors, how those products can be marketed, and what disclosures must be made.

143 See Stephen Choi, Regulating Investors Not Issuers: A Market-Based Proposal, 88 CAL. L. REV. 279 (2000); Jerry W. Markham, Protecting the Institutional Investor - Jungle Predator or Shorn Lamb?, 12 YALE J. REG. 345 (1995).
144 See John C. Coffee, Jr. & Hillary A. Sale, Redesigning the SEC: Does the Treasury Have a Better Idea?, 95 VA. L. REV. 707, 761-62 (2009).
145 15 U.S.C. § 1691 (2012).
146 42 U.S.C. § 3605 (2012).
147 Equal Credit Opportunity Act (Regulation B), 12 C.F.R. § 202.5 (2011).
148 See Citron & Pasquale, supra note 60, at 28.
149 Securities Exchange Act of 1934 § 10(b), 15 U.S.C. § 78j(b).
150 17 C.F.R. § 240.10b-5(b) (2008).
151 See Securities and Exchange Commission, Registration Form Used by Open-End Management Investment Companies, 63 FED. REG. 13,916, 13,917 (Mar. 23, 1998).
152 See Rules and Regulations, Investment Advisers Act of 1940, 17 C.F.R. § 275 (2012).

Regulators will also need to conduct regular reviews of artificial financial intelligence to monitor for potentially harmful effects on consumers, such as excessive interest rates or improper financial advice.153 Currently, too little information is available about how financial institutions are using artificial intelligence, and how artificial intelligence is affecting financial markets. In 2019, Senator Elizabeth Warren wrote an open letter to financial regulators asking them how they were responding to algorithmic finance, including what they were doing to combat discrimination, how they were overseeing fair lending laws, and whether they are conducting analyses of the impact of algorithms on borrowers.154 These are basic questions which should be easily answerable. The fact that they are not suggests that regulators either are not acting to the full extent of their statutory authority, or that they are not being forthcoming about their priorities and procedures.

Ensuring the fairness of artificial financial intelligence will thus require a searching analysis of the machine learning process, from algorithm design, to data gathering, to implementation procedures. Because artificial intelligence is by its very nature difficult to understand and explain, the burden on regulators will be high.155 It may require regulators to hire experts on machine learning, or deploy machine learning themselves.156 But the benefits are also high. If private and public sector actors can do it right, they could greatly improve the fairness of financial markets.

B. Artificial Efficiency

Second, financial regulators must be mindful of artificial intelligence's effects on the efficiency of financial markets. If artificial financial intelligence leads to the mispricing of stocks, or asset bubbles, or a reduction in competition, or higher fees, then regulators must be ready to respond. Again, there is ample authority for them to do so. Indeed, promoting efficient markets is at the core of financial regulators' mission. The SEC's stated mission is to "maintain fair, orderly and efficient markets."157 The Commodity Futures Trading Commission's ("CFTC") mission is to "foster open, transparent, competitive and financially sound markets."158

153 See Robert Bartlett et al., Consumer-Lending Discrimination in the Fintech Era (NBER, Working Paper No. 25942, 2019); Rory Van Loo, The Missing Regulatory State: Monitoring Businesses in an Age of Surveillance, 72 VAND. L. REV. 1563, 1622 (2019).
154 Letter from Senator Elizabeth Warren to Federal Reserve, Federal Deposit Insurance Corporation, Office of the Comptroller of the Currency, and Consumer Financial Protection Bureau (June 10, 2019), https://www.warren.senate.gov/imo/media/doc/2019.6.10%20Letter%20to%20Regulators%20on%20Fintech%20FINAL.pdf.
155 But not insurmountably so. See Kroll et al., supra note 88.
156 See Bauguess, supra note 139.
157 See 15 U.S.C. § 78c(f).


The Financial Industry Regulatory Authority's ("FINRA") mission is to "promote market integrity."159

In many ways, artificial intelligence might be expected to improve efficiency in financial markets. After all, machine learning algorithms are being deployed for the very purpose of improving the speed and accuracy of fraud detection, risk management, and investment decisions.160 The great advantage of machine learning strategies is that they can find unnoticed, but material, patterns within data and then make predictions based on those patterns.161 Financial institutions have strong incentives to make sure that their algorithms work accurately and reliably.

But the mere fact that artificial intelligence algorithms are intended to improve efficiency, or even that financial institutions have incentives to ensure that they do so, does not mean that these algorithms will achieve their intentions, or that in achieving them, they will not create unexpected consequences elsewhere. History is littered with examples of algorithms acting unexpectedly, or causing unexpected harm.162 Regulators must therefore be attuned to the mechanisms by which artificial intelligence algorithms might create inefficiency in financial markets.

Although a thorough assessment of the causes of inefficiency within artificial financial intelligence would require a close analysis of the source code at issue, a few plausible risks are readily apparent. The first has to do with macro trends in the economy. As discussed earlier, artificial financial intelligence relies for its efficacy on data sets that, necessarily, involve historical information.163 But if the historical trend that the information reflects is itself undesirable or unsustainable, then artificial intelligence algorithms based on it might reinforce or strengthen the trend.164 They might, for example, pile into an industry that had experienced rapid growth in recent months, or rapidly exit an industry that had experienced a slump. Indeed, so-called momentum trading is a popular strategy among quantitative funds.165 Again, this might be a good thing if it leads to assets more quickly reflecting their objective value. But it might equally be the result of a poorly designed algorithm.

158 See COMMODITY FUTURES TRADING COMMISSION, FY 2018 AGENCY FINANCIAL REPORT 5 (Nov. 2018), https://www.cftc.gov/system/files/2019/04/24/2018afr.pdf.
159 Our Mission, FINRA, https://www.finra.org/about/our-mission (last visited May 14, 2020).
160 See supra Part I.B.
161 See Lehr & Ohm, supra note 33, at 684-88.
162 See VIRGINIA EUBANKS, AUTOMATING INEQUALITY: HOW HIGH-TECH TOOLS PROFILE, POLICE, AND PUNISH THE POOR (2018).
163 See supra Part I.B.
164 See Robin Wigglesworth, Volatility: How "Algos" Changed the Rhythm of the Market, FINANCIAL TIMES (Jan. 8, 2019), https://www.ft.com/content/fdc1c064-1142-11e9-a581-4ff78404524e.
165 See Matt Prewitt, High-Frequency Trading: Should Regulators Do More?, 19 MICH. TELECOMM. & TECH. L. REV. 131, 135 (2012).
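A minimal sketch of that trend-chasing feedback, with the price-impact coefficient invented purely for illustration:

prices = [100.0, 101.0]                 # a small initial uptick (simulated)
IMPACT = 0.8                            # how much algorithmic buying moves price

for t in range(10):
    trend = prices[-1] - prices[-2]
    order = 1.0 if trend > 0 else -1.0  # momentum rule: chase the last move
    prices.append(prices[-1] + trend + IMPACT * order)

print([round(p, 1) for p in prices])    # the one-point uptick becomes a run-up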


Another form of inefficiency might arise not so much from unintended actions as intentional cooperation. Artificial intelligence algorithms increase the risk that firms will collude to raise prices, even in the absence of a formal agreement to do so.166 This presents a serious antitrust risk; indeed, the Federal Trade Commission is so worried about this problem that it has held hearings on the topic.167 Imagine, for example, that an artificial intelligence algorithm deployed by banks to set mortgage rates learns that every time it lowers its rate, its competitor does as well, or, potentially worse, every time it raises its rate, its competitor does as well. The interaction of these algorithms might well lead to higher prices for all. This is not merely hypothetical. As early as 1994, the Department of Justice (DOJ) found that airlines used a joint computerized booking system to set collusive ticket prices.168 More recently, the DOJ has charged Amazon sellers with using algorithm-based software to engage in unlawful price fixing.169
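The mortgage-rate hypothetical can be sketched in a few lines. Each seller below follows a simple unilateral rule, undercut only when the rival is cheaper and otherwise inch upward; with no communication at all, the two prices ratchet toward an assumed monopoly level (all numbers are invented for illustration):

MONOPOLY_PRICE = 10.0

def reprice(mine: float, rival: float) -> float:
    if rival < mine:
        return max(rival, mine - 0.5)           # match, dropping at most 0.5 at a time
    return min(mine + 0.25, MONOPOLY_PRICE)     # rival is not cheaper: raise

a, b = 6.0, 6.5                                 # competitive starting prices
for t in range(40):
    a = reprice(a, rival=b)
    b = reprice(b, rival=a)
print(a, b)                                     # both converge near 10.0

Neither rule references the other seller's code, yet the joint dynamic is hard to distinguish from coordination, which is precisely what makes the antitrust question difficult.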

Other forms of inefficiency might be even more pernicious. Opportunistic market actors could potentially exploit knowledge of the workings of other firms' artificial intelligence algorithms to trigger unexpected behavior.170 As mentioned before, researchers have worried for some time about the potential of adversarial AI to insert poisoned data into artificial intelligence systems in order to defeat their intended functioning.171 This type of strategy has obvious applications in the world of finance. Hedge funds with short positions on a stock might try to cause other firms' algorithms to trigger sell orders.172 Fraudsters might intentionally change their patterns of behavior in order to avoid fraud detection algorithms.173 Hackers might attempt to create financial panic by spreading false information on social media platforms or news sites.174 All of these sorts of behavior could introduce new and dangerous forms of inefficiency into markets, and regulators must be ready to identify and respond to them before they mature into more general market failures.

166 See Kuhn & Tadelis, Algorithmic Collusion (2017).
167 See FEDERAL TRADE COMMISSION, THE COMPETITION AND CONSUMER PROTECTION ISSUES OF ALGORITHMS, ARTIFICIAL INTELLIGENCE, AND PREDICTIVE ANALYTICS (Federal Trade Commission 2018), https://www.ftc.gov/news-events/audio-video/audio/algorithmic-collusion.
168 See Antonio Capobianco, Algorithms and Collusion, ORGANISATION FOR ECONOMIC CO-OPERATION AND DEVELOPMENT 5 (2017).
169 See Azriel Ezrachi & Maurice E. Stucke, Artificial Intelligence & Collusion: When Computers Inhibit Competition, 2017 U. ILL. L. REV. 1775, 1777 (2017).
170 See, e.g., Adam Janofsky, AI Could Make Cyberattacks More Dangerous, Harder to Detect, WALL ST. J. (Nov. 13, 2019), https://www.wsj.com/articles/ai-could-make-cyberattacks-more-dangerous-harder-to-detect-1542128667.
171 See supra Part I.C.
172 For empirical evidence of hedge fund manipulation of stock prices, see Itzhak Ben-David et al., Do Hedge Funds Manipulate Stock Prices?, 68 J. FIN. 2383 (2013).
173 See Mary Frances Zeager et al., Adversarial Learning in Credit Card Fraud Detection, 2017 SYS. AND INFO. ENGINEERING DESIGN SYMP. 112, https://ieeexplore.ieee.org/abstract/document/7937699.
174 See Michelle Cantos, Breaking the Bank: Weakness in Financial AI Applications, FIREEYE (Mar. 13, 2019), https://www.fireeye.com/blog/threat-research/2019/03/breaking-the-bank-weakness-in-financial-ai-applications.html.


C. Artificial Risk

Finally, regulators must monitor artificial intelligence's effects on systemic risk. Observers in a wide array of areas, from national security to labor to elections, have discussed the ways in which artificial intelligence might lead to major shifts in systemic stability. Financial regulators must be similarly wary of artificial financial intelligence's effects on the stability of markets. Doing so will turn primarily on identifying the pathways through which artificial financial intelligence could create contagion among financial actors and then acting to increase the resilience of participants and systems to market shocks.

Assessing the systemic risk of artificial financial intelligence requires a careful understanding of the uses of artificial intelligence and the ways in which it is changing the nature of industry risks.175 We have already seen that artificial financial intelligence might lead to more volatile markets by, for example, leading to faster and larger transaction volumes.176 It might also lead to more financial bubbles, or bigger ones, if it turns out that artificial intelligence algorithms tend to pile into "fashionable" market trends.177

Other risks turn on the artificial intelligence industry itself. For example, if financial institutions rely heavily on third-party providers for artificial intelligence algorithms or the data used to drive those algorithms, then those third-party providers might well become systemically important themselves, despite their non-financial focus.178 Additionally, if artificial financial intelligence leads financial institutions to focus on new variables or sectors (such as social media posts or browsing activity), then those new variables could suddenly become the source of interconnectedness, and thus risk.179

It just so happens that debt, one of the centerpieces for artificial financial intelligence today, is also a crucial factor in assessing the degree of systemic risk within the market.180 Debt, after all, was at the heart of the last financial crisis and is one of the primary foci of many systemic risk regulators today.181 If financial institutions turn increasingly to artificial intelligence to make decisions related to creditworthiness and loan activity, then the risk profile of this sector will turn heavily on the inner workings of the algorithms. As a result, systemic stability regulators will need to better understand the ways in which artificial financial intelligence affects debt markets and risk assessments.

175 See FINANCIAL STABILITY BOARD, ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING IN FINANCIAL SERVICES: MARKET DEVELOPMENTS AND FINANCIAL STABILITY IMPLICATIONS 29-31 (2017).
176 See supra Part II.C.
177 Id.
178 See FINANCIAL STABILITY BOARD, supra note 175, at 29.
179 Cf. id. at 31 (discussing how widespread adoption of algorithms may uncover interconnections among variables that were previously thought to be unrelated).
180 Hal S. Scott, The Reduction of Systemic Risk in the United States Financial System, 33 HARV. J. L. & PUB. POL'Y 671, 677 (2010). See generally Steven L. Schwarcz, Systemic Risk, 97 GEO. L. J. 193 (2008) (discussing the definition of systemic risk).
181 See Prasad Krishnamurthy, Regulating Capital, 4 HARV. BUS. L. REV. 1, 1 (2014) ("Most observers agree that the excessive debt or leverage of systemically important financial institutions (SIFIs) was a central reason why the housing crash of 2007-2009 led to a recession.").


Again, regulators have ample authority to act here. The Dodd-Frank Act created an entirely new body, the Financial Stability Oversight Council, to monitor systemic risk.182 Its mission is three-fold: first, to identify risks to the financial stability of the United States; second, to promote market discipline by eliminating expectations that the government will shield financial institutions from losses in the event of failure; and third, to respond to emerging threats to the stability of the financial system.183 Artificial financial intelligence seems to fit squarely within the third category-it is an emerging risk, not a current one, to systemic stability. But in the future, if it becomes more widely adopted, it might well shift to the first category.

Given the regulatory ambit of the Financial Stability Oversight Council, and its broad-ranging mission to police systemic stability, statutory authority is unlikely to be an issue in regulating artificial financial intelligence's systemic risk. Competence and capacity, however, will be. Even within the private sector, there is a scarcity of expertise in machine learning fields.184 The public sector suffers even more from this lack of experts and resources.185 And without experts capable of understanding the workings of artificial intelligence algorithms, government regulators will struggle to monitor systemic risk within the sector.186 One important step here would be to give regulators the ability to stress-test artificial financial intelligence systems, allowing them to provide synthetic inputs to a system and analyze the properties of the outputs. But doing so will first require regulators to gain a deep knowledge of machine learning and artificial intelligence systems, something they do not currently possess.187
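A minimal sketch suggests what such a stress test might involve. The credit model below is a hypothetical stand-in for a bank's actual (and far more complex) system, and the base scenario and shock sizes are invented; the tool simply perturbs synthetic inputs and examines the distribution of outputs:

```python
# A minimal sketch of stress-testing a black-box model with synthetic
# inputs. The model, base scenario, and shock sizes are all hypothetical.
import random

def credit_model(income, debt, rate):
    """Stand-in for an opaque machine learning lending model."""
    return 1.0 if income * 0.3 > debt * rate else 0.0  # 1.0 = approve

random.seed(7)
base = {"income": 80_000, "debt": 500_000, "rate": 0.048}

# Generate synthetic scenarios by shocking each input by up to 10%,
# then observe how often the model's decision flips.
decisions = [
    credit_model(**{k: v * random.uniform(0.9, 1.1) for k, v in base.items()})
    for _ in range(1_000)
]
approval_rate = sum(decisions) / len(decisions)
print(f"approval rate under +/-10% shocks: {approval_rate:.1%}")
# A rate near 0% or 100% means the decision is stable; a rate near 50%
# means small shocks flip the outcome -- a fragility worth flagging
# before the model is deployed at scale.
```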

182 See Hilary J. Allen, Putting the "Financial Stability" in Financial Stability Oversight Council, 76 OH. ST. L. J. 1087, 1113-15 (2015).
183 Dodd-Frank Wall Street Reform and Consumer Protection Act, 12 U.S.C. § 5322 (2012).
184 See Bernard Marr, The AI Skills Crisis and How to Close the Gap, FORBES, June 25, 2018, https://www.forbes.com/sites/bernardmarr/2018/06/25/the-ai-skills-crisis-and-how-to-close-the-gap/#57fa30be31f3.
185 Cf. Michael Kratsios, Why the US Needs a Strategy for AI, WIRED (Feb. 11, 2019), https://www.wired.com/story/a-national-strategy-for-ai (discussing the need for increased government focus on artificial intelligence to maintain a national competitive edge).
186 See FINANCIAL STABILITY BOARD, supra note 175, at 34.
187 Indeed, the need for greater machine learning expertise within government has been a mainstay of many recent proposals to improve governance of the AI industry. See Cary Coglianese & David Lehr, Regulating by Robot: Administrative Decision Making in the Machine-Learning Era, 105 GEO. L. J. 1147, 1216 (2017); Cary Coglianese & David Lehr, Transparency and Algorithmic Governance, 71 ADMIN. L. REV. 1, 30 (2019); Andrew Tutt, An FDA for Algorithms, 69 ADMIN. L. REV. 83, 114 (2017); Andrew D. Selbst & Solon Barocas, The Intuitive Appeal of Explainable Machines, 87 FORDHAM L. REV. 1085, 1093 (2018).


D. The Role of Self-Regulation

This Part has focused primarily on what regulators can do to minimize the risks and harms of artificial financial intelligence, but there is also much that financial institutions themselves can do. In fact, it is likely that self-regulation will be significantly more effective at cabining artificial intelligence's risks than regulatory enforcement actions could ever be.188 Regulators are, by their very nature, outsiders. They do not know the inner workings of financial institutions nearly as well as insiders do, and they do not have the levels of expertise in machine learning that are available to the private sector. This means that, for artificial financial intelligence to fulfill its goal of making intelligent decisions, we will need bankers, engineers, and computer scientists to be cognizant of the broader social goals of finance. We cannot simply rely on regulators to step in to prevent financial institutions from taking risky actions-financial institutions themselves will need to be the primary gatekeepers of artificial intelligence.

What might self-regulation look like in artificial financial intelligence? Some general principles are clear. Bankers will need to consider the fairness of the artificial financial intelligence systems they develop, whether they may harm the efficiency of markets, and whether they might jeopardize systemic stability. They will need to be mindful of the data they use, the outputs they receive, the actions they take, and the monitoring systems they have in place. They may well need to go further and cooperate with other actors in the industry to ensure that their systems do not interact in dangerous ways.

But self-regulation must be more than just platitudes about corporate social responsibility.189 It should also include efforts that look more like hard law. An illustrative example is FINRA.190 FINRA is a self-regulatory body for brokerage firms formed in 2007.191 But despite being an industry-run organization, it does much that has the look and feel of law. Broker-dealers are obligated to join the body.192 It writes standards of conduct that bind its members.193 It has the power to discipline members for violating its rules, including the ability to exclude brokers from practicing in the industry.194

188 Google itself has recognized the need for guiding principles for artificial intelligence in the private sector. In April 2018, Sergey Brin, Google's co-founder, wrote in a letter to shareholders that "[t]he new spring in artificial intelligence is the most significant development in computing in my lifetime" but that "such powerful tools also bring with them new questions and responsibilities," such as how they will affect employment, fairness, and safety. SERGEY BRIN, 2017 FOUNDERS' LETTER (2018), https://abc.xyz/investor/founders-letters/2017/index.html. In June 2018, Google CEO Sundar Pichai published an essay on Google's principles for AI, setting forth the objectives that guide Google's artificial intelligence strategies. See Sundar Pichai, AI at Google: Our Principles (June 7, 2018), https://blog.google/technology/ai/ai-principles/. In February 2019, the company went further, issuing a white paper arguing that governments need to take action to govern artificial intelligence. GOOGLE, PERSPECTIVES ON ISSUES IN AI GOVERNANCE (2018), https://ai.google/static/documents/perspectives-on-issues-in-ai-governance.pdf. But see Benjamin P. Edwards, The Dark Side of Self-Regulation, 85 U. CIN. L. REV. 573 (2018).
189 See generally Jonathan R. Macey, Corporate Social Responsibility: A Law & Economics Perspective, 17 CHAPMAN L. REV. 331 (2014); John M. Conley & Cynthia A. Williams, Engage, Embed, and Embellish: Theory Versus Practice in the Corporate Social Responsibility Movement, 31 J. CORP. L. 1 (2005).
190 See generally Andrew F. Tuch, The Self-Regulation of Investment Bankers, 83 GEO. WASH. L. REV. 101 (2014).
191 Id. at 104.


This sort of organization-a self-regulatory organization with teeth-would be helpful in establishing the rules of the road for artificial financial intelligence. Industry groups might, for example, develop a set of best practices on cybersecurity, machine learning algorithms, and data set development. In the end, these sorts of private sector endeavors are essential to ensure that artificial financial intelligence leads to fair, efficient, and stable outcomes.

Some of this work is already taking place within the technology industry.195 Computer scientists and engineers are increasingly incorporating social goals into their machine learning guides, and they are actively looking for ways to improve artificial intelligence algorithms' ability to take into account other external values.196 One prominent example is TensorFlow.197 TensorFlow is a machine learning system originally developed by Google for internal use but which was eventually released to the public on an open-source basis.198 Its training modules now include the following note:

As a modeler and developer, think about how this data is used and the potential benefits and harm a model's predictions can cause. A model like this could reinforce societal biases and disparities. Is a feature relevant to the problem you want to solve or will it introduce bias?199

It then includes a link to a longer discussion of "machine learning fairness," with a set of recommended readings. While the efficacy of these warnings in a technical training setting might be questionable, it provides a good example of the ways in which computer scientists are trying to incorporate broader goals into their algorithms. Financial institutions might similarly work on instilling broader situational awareness in the engineers and bankers who create artificial financial intelligence systems.
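A crude version of the kind of check the TensorFlow note encourages might look like the following. The loan records are invented, and the single metric shown (a gap in approval rates across groups) is only one of many possible fairness diagnostics, not a complete audit:

```python
# A sketch of a simple dataset fairness check over invented loan records;
# comparing approval rates across a sensitive attribute is one crude but
# common diagnostic, not a complete fairness audit.

records = [  # (group, approved) -- hypothetical historical lending data
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 0), ("B", 1), ("B", 0), ("B", 0),
]

def approval_rate(group):
    outcomes = [approved for g, approved in records if g == group]
    return sum(outcomes) / len(outcomes)

gap = approval_rate("A") - approval_rate("B")
print(f"group A: {approval_rate('A'):.0%}, group B: {approval_rate('B'):.0%}")
print(f"approval-rate gap: {gap:.0%}")
# A large gap in the training data does not itself prove discrimination,
# but a model trained on it will tend to reproduce the disparity -- the
# status-quo-entrenching risk this Article describes.
```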

'"9Id. at 104-05.19

1 Id. at 105.19 See Barbara Black, Punishing Bad Brokers: Self-Regulation and FINRA Sanctions, 8

BROOK. J. CORP. FIN. & CoM. L. 23 (2013)."' See Gabriel Goh et al., Satisfying Real-World Goals With Dataset Constraints, 29 ADV.

NEURAL INF. PROC. Sys. 1 (2016); Lucas Dixon et al., Measuring and Mitigating UnintendedBias in Text Classification, 2018 Al, ETHICS, AND Soc'Y CONF. 67 (2018).

196 See, e.g., Sanders Kleinfeld, A New Course to Teach People About Fairness in MachineLearning, GOOGLE Al (Oct. 18, 2018), https://www.blog.google/technology/ai/new-course-teach-people-about-faimess-machine-teaming/.

"' For a discussion of the history of TensorFlow's creation and its eventual release to thepublic, see Daniela Hernandez, Google's Artificial-Intelligence Tool is Offered for Free, WALLSTr. J., May 12, 2016, https://www.wsj.com/articles/googles-open-source-parsey-mcparseface-helps-machines-understand-english-1463088180.

19 Id.199 TensorFlow Data Validation Tutorial, TENSORFLOw, https://www.tensorflow.org/tfx/

tutorials/datavalidation/chicago_taxi. (last visited May 14, 2020).


IV. OBJECTIONS

This Article has argued that artificial financial intelligence poses a threat to free and fair financial markets, not because of its capacity to exceed human intelligence, but because of its capacity to exacerbate human error. Because of its reliance on potentially inaccurate datasets, its biasing effect on human decisionmakers, and its unpredictable interactions with other algorithms, artificial financial intelligence could lead to harmful changes in how the financial industry works. Regulators, thus, must be vigilant in ensuring that financial institutions maintain efficient, fair, and stable markets.

These arguments, however, are not uncontroversial. In fact, they run counter to three powerful theoretical strands in artificial intelligence scholarship. I will refer to these strands as the mirror critique, the balance critique, and the precautionary critique, but I might just as accurately have referred to them as the cynical view, the optimist's view, and the pessimist's view. This Part will briefly explain what these critiques are, identify how they differ from the views asserted in this Article, and offer a few thoughts on why I find them unpersuasive.

A. The Mirror Critique

One standard response to artificial intelligence today is to claim that there is nothing particularly new about it.200 Artificial intelligence may use different algorithms from other programs, or have greater processing capacities, but at heart, it is simply another form of computer code. And like all computer code, it has flaws. More specifically, the flaws of computer code mirror those of its creators.201 If the programmer who writes the code is risk-prone, unthoughtful, or irrational, then the algorithms will reflect that. We should not be surprised, then, that machine learning algorithms are biased, overweight certain data, or respond unpredictably to changes. So, too, do humans.202

Proponents of the mirror critique might well agree with the premise of this Article that artificial financial intelligence has flaws in relation to its problematic reliance on faulty data. But they would disagree that these flaws are unique or new, or that they require special, tailored responses from regulators. If the flaws are precisely the ones that we worry about with human decisionmakers, then there is nothing special about artificial intelligence, and regulators should simply maintain their current strategies of monitoring and sanctioning bad actions when they take place.

2" See Dan Patterson, Why AI is Nothing New, TECHREPUBLIC (Jan. 22, 2019), https://www.techrepublic.com/article/why-ai-is-nothing-new/; Richard Socher, AI Isn't Dangerous,But Human Bias Is, WoRLD EcoN. F. (Jan. 17, 2019), https://www.weforum.org/agenda/2019/01/ai-isn-t-dangerous-but-human-bias-is/.

201 See Jayne Williamson-Lee, How Machines Inherit Their Creators' Biases, MEDIUM

(July 9, 2018), https://medium.com/coinmonks/ai-doesnt-have-to-be-conscious-to-be-harmful-385dl43bd311; Jeff Cockrell, A.I. is Only Human, CHI. BooTH REv., Summer 2019, https://review.chicagobooth.eduleconomics/2019/article/ai-only-human; Kate Crawford, Artificial In-telligence's White Guy Problem, N.Y. TIMEs (Jun. 25, 2016).

20 For a recent summary of the many ways in which humans exhibit behavioral biases asa matter of psychology, see DANIEL KAHNEMAN, THINKING, FAST AND SLOW (2013).


Before responding to this critique, it is perhaps worthwhile to note some areas of agreement. This Article agrees that the "revolutionary" nature of artificial intelligence is often exaggerated.203 This Article agrees that artificial intelligence's flaws are related to the flaws in human psychology.204 This Article agrees that we should not expect artificial intelligence to eliminate human error.205 All of these are important observations about the nature of artificial intelligence today, and they form part of this Article's core argument. At the same time, the mirror critique is misguided in several ways.

First, while it may be true on a trivial level that the failures of artificial intelligence can be categorized in similar ways as the failures of human intelligence, there are also important ways in which the failures are different. Most legal problems today, if described at a sufficiently high level of generality, can be shown to be the product of similar problems. The concept of negligence in tort law bears striking similarities to the concept of fiduciary duties in corporate law, but this does not mean that they are the same thing or, more importantly, that we should treat tort law and corporate law the same. Similarly, while the general categories of problem are the same between human decisionmakers and artificial intelligence algorithms, the nature and effects of these problems differ. Some of the differences are quite simple: artificial intelligence algorithms can process information much more quickly than humans can, and thus we might expect that artificial financial intelligence might lead to errors propagating and spreading at higher rates. Other differences relate to the nature of decisionmaking in these systems: as described before, adversarial artificial intelligence demonstrates just how susceptible these algorithms are to intentional manipulation, and in ways that most humans would spot.206 Certainly, there are also ways to fool humans. But the methods are different. Thus, even if we do believe that artificial intelligence's flaws are also human flaws, we must also recognize that these flaws look very different when an algorithm is behind them.

A second way in which the mirror critique is unpersuasive is that there are some risks from artificial financial intelligence that seem to exist in an entirely new category. One of the risks that this Article has identified is the interaction between human decisionmakers and artificial financial intelligence. In other words, one of the problems with financial institutions implementing machine learning strategies is that human decisionmakers may rely too heavily on the recommendations these algorithms give.207 Studies have shown, for example, that people tend to believe that algorithms are superior to humans in making financial decisions.208 Thus, if artificial financial intelligence proliferates, it may create problems related to overweighting by human bankers. This suggests that artificial intelligence will introduce a new risk that simply does not exist in traditional finance.

203 See supra Part II.B.
204 Id.
205 Id.
206 See supra Part II.C.
207 See supra Part II.B.


Third, setting aside objections relating to the extent of the harm and the nature of the harm, even if we concede that the categories of problem within artificial financial intelligence and traditional finance are the same, this does not necessarily lead to the conclusion that they should both be regulated in similar ways. The mere fact that two phenomena share similar risks does not suggest that they will respond to similar regulatory interventions. Even within human communities, we know that groups can respond quite differently to government action.209 The imposition of fines for undesirable behavior might cause some groups to reduce the incidence of that behavior but cause other groups to increase the incidence of that behavior.210 Similarly, crafting tailored regulation to reduce the risks of artificial financial intelligence likely requires different approaches than those traditionally used. As an initial matter, simply understanding the risks of particular artificial intelligence algorithms requires different kinds of knowledge-regulators need expertise in computer science, neural networks, and data science, things that they have not traditionally focused on.211 Similarly, the interventions needed to reduce the risks of artificial intelligence are likely quite different than traditional interventions in finance. Simple disclosure obligations may not be sufficient for machine learning algorithms-source code is notoriously complex and inscrutable.212 Backtesting, dynamic testing, and zero knowledge proofs (some of the proposals for analyzing machine learning algorithms) are more promising, and they require different sorts of action from regulators.213 All of this is to suggest that even if artificial financial intelligence presents similar categories of risk, the regulatory actions necessary to reduce these risks are new and require specialized knowledge.
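To give a flavor of what dynamic testing might involve, consider the sketch below. The model and the behavioral property being checked are invented; the salient feature is that the tester never reads the source code and instead probes input-output behavior directly:

```python
# A sketch of dynamic testing: probe a black-box model with controlled
# input pairs and check a behavioral property. The model and the
# monotonicity property here are invented for illustration.
import random

def score(income, debt):
    """Stand-in for an opaque machine learning credit-scoring model."""
    return income * 0.002 - debt * 0.001

def monotonicity_violations(trials=1_000):
    """Raising income, all else equal, should never lower the score."""
    random.seed(0)
    failures = 0
    for _ in range(trials):
        income = random.uniform(20_000, 200_000)
        debt = random.uniform(0, 300_000)
        if score(income + 1_000, debt) < score(income, debt):
            failures += 1
    return failures

print(f"monotonicity violations: {monotonicity_violations()} / 1000")
# The tester observes only inputs and outputs, which is why this style
# of testing can scale to models whose source code is inscrutable.
```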

208 See Packin, supra note 118.
209 See, e.g., Richard H. McAdams, The Origin, Development, and Regulation of Norms, 96 MICH. L. REV. 338 (1997); Janice Nadler, Expressive Law, Social Norms, and Social Groups, 42 L. & SOC. INQUIRY 60 (2017).
210 See, e.g., Uri Gneezy & Aldo Rustichini, A Fine is a Price, 29 J. LEGAL STUD. 1 (2000).
211 See supra Part III.C.
212 See Kroll et al., supra note 88, at 638 ("Perhaps the most obvious approach is to disclose a system's source code, but this is at best a partial solution to the problem of accountability for automated decisions. The source code of computer systems is illegible to nonexperts. In fact, even experts often struggle to understand what software code will do, as inspecting source code is a very limited way of predicting how a computer program will behave.").
213 Id. at 668.


B. The Precautionary Critique

Proponents of the precautionary critique take a different approach.214 They assert that the risks of artificial intelligence are not just new, they are existential. They note the dramatic improvements in artificial intelligence's capacities in just the last few years and argue that we should assume that these improvements will continue into the future.215 As a result, artificial intelligence could potentially, in the very near future, replace many, if not most, human tasks and lead to dramatic increases in inequality.216 Thus, these AI pessimists assert, we need wide-ranging and intrusive new regulations to assure that artificial intelligence does not cause systemic harm to our economy and society.217 Another related strand of this critique claims that, even if artificial intelligence does not present a risk of surpassing the broad cognitive capacities of humans currently, it may do so in the future. And if that is the case, then regulators need to address these future harms now, not wait for them to materialize. In other words, they should adopt something akin to a precautionary principle: regulators should take anticipatory action to protect people from plausible harm even before firm empirical evidence of that harm materializes.218

214 Notable proponents of the precautionary critique include Elon Musk (who has claimed that artificial intelligence is "potentially more dangerous than nukes"), Stephen Hawking (who has said that "the development of full artificial intelligence could spell the end of the human race."), and, sometimes, Bill Gates (who has said that "a few decades after [machines are doing most jobs] the intelligence is strong enough to be a concern . . . and [I] don't understand why some people are not concerned."). See Cade Metz, Mark Zuckerberg, Elon Musk and the Feud Over Killer Robots, N.Y. TIMES, June 9, 2018, https://www.nytimes.com/2018/06/09/technology/elon-musk-mark-zuckerberg-artificial-intelligence.html; Rory Cellan-Jones, Stephen Hawking Warns Artificial Intelligence Could End Mankind, BBC, Dec. 2, 2014, https://www.bbc.com/news/technology-30290540; Pete Holley, Bill Gates on Dangers of Artificial Intelligence: "I Don't Understand Why Some People are Not Concerned", WASH. POST, Jan. 29, 2015, https://www.washingtonpost.com/news/the-switch/wp/2015/01/28/bill-gates-on-dangers-of-artificial-intelligence-dont-understand-why-some-people-are-not-concerned/.
215 See Estlund, supra note 87, at 258 ("[T]he prospects for job destruction are eye-opening. Robotic and digital production of goods and services, coupled with advances in AI and machine learning, is poised to take over both routine or repetitive tasks and some more advanced tasks.").
216 See ERIK BRYNJOLFSSON & ANDREW MCAFEE, RACE AGAINST THE MACHINE: HOW THE DIGITAL REVOLUTION IS ACCELERATING INNOVATION, DRIVING PRODUCTIVITY, AND IRREVERSIBLY TRANSFORMING EMPLOYMENT AND THE ECONOMY (2012); Laura Tyson & Michael Spence, Exploring the Effects of Technology on Income and Wealth Inequality, in AFTER PIKETTY: THE AGENDA FOR ECONOMICS AND INEQUALITY 170 (Heather Boushey, J. Bradford DeLong & Marshall Steinbaum eds., 2017); FORD, supra note 84.
217 See Estlund, supra note 87, at 254 ("This Article charts a path for reforming that body of law [that is, employment law] in the face of justified anxiety and uncertainty about the future impact of automation on jobs.").
218 See Stephen G. Wood et al., Whither the Precautionary Principle? An American Assessment from an Administrative Law Perspective, 54 AM. J. COMP. L. 581, 581 (2006); Lesley Wexler, Limiting the Precautionary Principle: Weapons Regulation in the Face of Scientific Uncertainty, 39 U.C. DAVIS L. REV. 459, 463 (2006).


The precautionary critique would take issue with the claim in this Article that artificial intelligence's limited capacity today suggests that its primary harms will stem from human error and its capacity to exacerbate this error. It would assert that this is a short-sighted view that fails to take into account foreseeable and near-term improvements in the technology. On this view, artificial financial intelligence presents a much greater risk to our financial system, and one that requires significant new regulatory initiatives and structures.

This Article concedes that if we believe that artificial intelligence will soon, or even eventually, lead to superhuman computers that will eliminate vast swathes of the work force or, as Elon Musk fears, swarms of killer robots, then we would need much more far-reaching reforms, both in law and society. Thus, to a certain extent, the arguments in this Article presuppose that the most extreme predictions about artificial intelligence are implausible. At the same time, if artificial intelligence does evolve to this super-state, our worries will be about much more than financial regulation.

But more importantly, the precautionary critique is itself susceptible to important criticism. If we adopted a strong precautionary approach, we might well end up stifling innovation by ratcheting up the costs of technological adoption.219 It might also be a waste of regulatory resources-if we end up regulating something that never comes about, or that evolves in such a way that the regulation is ill-adapted to it, then we will have displaced useful, near-term regulatory action with useless, long-term action.220 A wait-and-see approach can be a prudent regulatory strategy if it allows industry participants, government bodies, and the public to gather more information about the costs and benefits of the technology, or lets the technology mature into a more stable equilibrium.221 In the case of artificial financial intelligence, where financial institutions are still experimenting and learning, regulators would be ill-advised to launch a wholesale restructuring of their regulation around a still-inchoate technology.222

219 See, e.g., Michael A. Heller & Rebecca S. Eisenberg, Can Patents Deter Innovation? The Anticommons in Biomedical Research, 280 SCI. 698, 698 (1998); Mark A. Lemley & A. Douglas Melamed, Missing the Forest for the Trolls, 113 COLUM. L. REV. 2117, 2125 (2013); Anthony Falzone, Regulation and Technology, 36 HARV. J. L. & PUB. POL'Y 105 (2013).
220 See Sharon B. Jacobs, The Administrative State's Passive Virtues, 66 ADMIN. L. REV. 565 (2014); John P. Dwyer, Overregulation, 15 ECOLOGY L. Q. 719 (1988).
221 See Stephen M. Bainbridge, Dodd-Frank: Quack Federal Corporate Governance Round II, 95 MINN. L. REV. 1779 (2011); Paul G. Mahoney, The Pernicious Art of Securities Regulation, 66 U. CHI. L. REV. 1373 (1999); Larry Ribstein, Bubble Laws, 40 HOUSTON L. REV. 77 (2003); Roberta Romano, Regulating in the Dark, in REGULATORY BREAKDOWN: THE CRISIS OF CONFIDENCE IN U.S. REGULATION 86 (Cary Coglianese ed., 2012). But see Jonathan Masur & Eric A. Posner, Unquantified Benefits and the Problem of Regulation Under Uncertainty, 102 CORNELL L. REV. 87 (2016); Lisa Schultz Bressman, Judicial Review of Agency Inaction: An Arbitrariness Approach, 79 N.Y.U. L. REV. 1657 (2004); Michael D. Sant'Ambrogio, Agency Delays: How a Principal-Agent Approach Can Inform Judicial and Executive Branch Review of Agency Foot-Dragging, 79 GEO. WASH. L. REV. 1381 (2011); Glen Staszewski, The Federal Inaction Commission, 59 EMORY L. J. 359 (2009).


C. The Balance Critique

If AI cynics say that artificial intelligence is nothing new, and AI pessimists say that artificial intelligence is a dire threat, AI optimists have a different view.223 While acknowledging that artificial intelligence has risks, they assert that these risks are more than outweighed by the benefits.224 Moreover, there are also risks to not adopting artificial intelligence. On balance, the benefits of incorporating artificial intelligence into business greatly exceed the costs. Thus, to proponents of this balance critique, rather than simply defaulting to an anti-technology position (like the AI pessimists do), we need to carefully balance the pros and cons, and we will find that the pros of encouraging the growth of artificial intelligence exceed the cons of doing so.

Again, this Article largely agrees with the substance of the balance critique. When regulating a technology, we need to take into account both the costs of the technology and the benefits of it.225 As these critics rightfully point out, there are harms from not adopting artificial intelligence strategies if they can play a useful role in reducing fraud or increasing the efficiency of capital markets.226 Companies are likely better judges of this than government officials, given their direct involvement and their indirect financial incentives. Too often, the default view of regulators is that companies should bear the burden of proof to show that the benefits of their innovations exceed the costs.227 This may be good politics, but it is bad policy.

222 See Steven L. Schwarcz, Regulating Financial Change: A Functional Approach, 100 MINN. L. REV. 1441, 1444 (2016) ("In thinking about regulating a dynamically changing financial system, it may be more effective-or at least instructive-to focus on the system's underlying, and thus less time-dependent, economic functions than to tie regulation to any specific financial architecture.").
223 See, e.g., Coglianese & Lehr, supra note 187, at 1223 ("For administrative agencies, what will distinguish the machine-learning era is not a substitution of human judgment with some foreign and unfamiliar methodology, but rather an evolution of human judgment to incorporate fundamentally similar-albeit more accurate and often practically more useful-processes of decision making made possible by advances in statistical knowledge, data storage, and digital computing. The analysis offered here provides an antidote to visceral reactions against the use of artificial intelligence in the public sector, reactions we hope will give way to a measured optimism capable of guiding and improving the future of the administrative state."); Steve Lohr, Another Use for A.I.: Finding Millions of Unregistered Voters, N.Y. TIMES, Nov. 5, 2018, https://www.nytimes.com/2018/11/05/technology/unregistered-voter-rolls.html; Angus Loten, AI Tool Helps Companies Detect Expense Account Fraud, WALL ST. J., Feb. 26, 2019, https://www.wsj.com/articles/ai-tool-helps-companies-detect-expense-account-fraud-11551175200.
224 See Vikram Mahidhar & Thomas H. Davenport, Why Companies That Wait to Adopt AI May Never Catch Up, HARV. BUS. REV., Dec. 6, 2018, https://hbr.org/2018/12/why-companies-that-wait-to-adopt-ai-may-never-catch-up.
225 See COMM. ON CAPITAL MKTS. REGULATION, A BALANCED APPROACH TO COST-BENEFIT ANALYSIS REFORM (2013); Hester Peirce, Economic Analysis by Federal Financial Regulators, 9 J. L. ECON. & POL'Y 569 (2013). For a discussion of the difficulties of conducting such cost-benefit analysis in financial regulation, see John C. Coates IV, Cost-Benefit Analysis of Financial Regulation: Case Studies and Implications, 124 YALE L. J. 882 (2015).
226 See, e.g., Shani R. Else & Francis G.X. Pileggi, Corporate Directors Must Consider Impact of Artificial Intelligence for Effective Corporate Governance, BUS. L. TODAY, Feb. 12, 2019, https://businesslawtoday.org/2019/02/corporate-directors-must-consider-impact-artificial-intelligence-effective-corporate-governance/.


At the same time, it is the duty of financial regulators to uphold free, fair, and stable markets. This Article has identified several ways in which artificial financial intelligence might undermine these pillars of financial regulation, and thus regulators should be vigilant about the risks. More importantly, by identifying the risks and discussing ways to reduce them, we can shift the balance of benefits and harms, making artificial financial intelligence more socially useful and less socially damaging. Presumably this is a goal that all groups can support.

So we have rejected the mirror critique, the precautionary critique, and the balance critique, and, in the process, rejected the cynics, the pessimists, and the optimists. Where does that leave us? I would say with moderation. The most radical opinions often receive the most attention today, but this does not mean that they are the most accurate. This Article, at its core, asserts that artificial financial intelligence is important, it is new, and it is changing finance, but also that we are well-prepared to deal with it. We have a comprehensive body of laws and regulations attuned to dealing with new risks to the financial system, and our regulators are properly equipped to respond to the challenges of artificial intelligence. To be sure, they may need to adopt new regulatory tools or gain expertise in new areas of finance. But this is always the case with new financial technologies.228

CONCLUSION

In the final chapter of Tell the Machine Goodnight, in a chance encounter on a train, Pearl runs into the man she had met earlier in the book, whom the Apricity machine had told to amputate a finger. Pearl sees that the man is indeed missing his index finger. She approaches and asks him how he is doing. He replies, "Oh, good. You know. Getting along. As one does." The reader is left in a state of uncertainty. Was the computer right? Was he happier as a result of taking its recommendation? Would he have been better off keeping his finger intact? The reader simply does not know. But the story gets at a deeper truth about artificial intelligence: thinking about artificial intelligence is an exercise in imagination. It requires understanding how the technology works and how it is currently being used, but it also requires us to imagine how it might work in the future, and to what new uses it might be put. While artificial intelligence is coming for finance, and, to a certain extent, has already arrived, its applications, for now, are limited and experimental. Its risks derive primarily from the failure of human, not machine, intelligence. We would do best to keep that in mind.

227 See COMM. ON CAPITAL MKTS. REGULATION, supra note 225, at 9.
228 See William Magnuson, Regulating Fintech, 71 VAND. L. REV. 1167 (2018).
