Date : December 4, 2020
To : Young Lawyers

This post was originally published in Electronically in Touch, the official e-newsletter and blog of the Young Lawyers Section of the New York State Bar Association, on May 22, 2020, and was last updated on November 29, 2020.

From the American Arbitration Association to the International Chamber of Commerce, a group of leading arbitral institutions released a joint statement on the COVID-19 pandemic soon after the pandemic began.1 The statement emphasizes the importance of cooperation among the various institutions to ensure that all are making the best use of digital technology for remote working. The COVID-19 pandemic has thus brought the use of technology in legal practice to the forefront.

The arbitration community has proven more enthusiastic about integrating new technologies into its practice than lawyers working in general litigation. Nevertheless, the legal profession overall remains strongly committed to long-established practices and methodologies, including attending proceedings in person and presenting cases in paper format. The outbreak of COVID-19 has accelerated the arbitration community’s adoption of novel technology, such as increased use of electronic document management systems, e-filing, online signatures, and videoconferencing. We see what might have been years of technological transition for some taking place in a matter of days. As arbitration incorporates technology into its process, we need to be alert to possible risks and emerging problems.


AI has been widely recognized as an emerging innovation in arbitration that needs to be watched.2 Unlike the novel technologies mentioned above, which are mostly tools to streamline the arbitration process and increase the efficiency of the arbitral workflow, AI technologies are used to enhance or support the complex cognitive work of arbitrators or lawyers. For example, AI has a profound effect on legal research. Legal research providers such as LexisNexis and Westlaw are now integrating machine learning and natural language processing that can analyze search terms and then suggest results based on the user’s query and those of users who have made similar queries in the past. Other prominent AI technologies in the legal market include ROSS Intelligence3 and Lex Machina.4
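The "similar past queries" idea described above can be made concrete with a toy sketch. The queries, results, and word-overlap (Jaccard) measure below are invented for illustration; commercial platforms use far more sophisticated language models.

```python
# Hypothetical sketch: suggest results attached to the most similar past
# query, measuring similarity by word overlap (Jaccard similarity).
# All queries and results below are made-up illustrative data.

past_queries = {
    "vacatur of arbitral award evident partiality": ["Commonwealth Coatings"],
    "grounds to vacate arbitration award FAA": ["9 U.S.C. § 10"],
    "public policy exception arbitration enforcement": ["W.R. Grace"],
}

def jaccard(a, b):
    # Ratio of shared words to total distinct words across both queries.
    a, b = set(a.split()), set(b.split())
    return len(a & b) / len(a | b)

def suggest(query):
    # Return the results stored under the most similar past query.
    best = max(past_queries, key=lambda q: jaccard(q, query))
    return past_queries[best]

print(suggest("evident partiality of arbitrator vacatur"))
# → ['Commonwealth Coatings']
```

The point is not the particular similarity measure but the mechanism: the system learns from what earlier, similarly phrased searches led to, rather than matching keywords alone.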

Arbitration has not been immune to this trend. ArbiLex, a legal tech startup, leverages AI technologies to assist clients in making strategic decisions—say, tribunal selection—by evaluating their approach and testing their positions.5 It also offers risk management by providing feedback on whether the parties should arbitrate or settle. Data-driven AI technologies like ArbiLex can complement claim holders’ strategies or opinions in resolving arbitration cases.

AI algorithms can also predict the outcome of cases with striking accuracy. For example, computer scientists at University College London have developed an algorithm that can predict with 79% accuracy the outcome of cases involving torture, the right to a fair trial, and the right to privacy that come before the European Court of Human Rights.6 Similarly, in a 2017 study, researchers developed an algorithm that correctly predicted 70.2% of the U.S. Supreme Court’s 28,000 decisions and 71.9% of the justices’ 240,000 votes between 1816 and 2015.7
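At their core, the systems in these studies are text classifiers trained on the language of past decisions. The following is a minimal sketch of that idea, using invented training sentences and a simple bag-of-words score rather than the studies' actual models.

```python
from collections import Counter

# Hypothetical training data: short case summaries labeled with the outcome.
train = [
    ("applicant detained without judicial review for months", "violation"),
    ("state provided prompt hearing and counsel", "no_violation"),
    ("prolonged detention and no access to a lawyer", "violation"),
    ("complaint examined promptly by an impartial tribunal", "no_violation"),
]

# Count how often each word appears under each outcome label.
word_counts = {"violation": Counter(), "no_violation": Counter()}
for text, label in train:
    word_counts[label].update(text.split())

def predict(text):
    # Score each label by summed word frequencies; the higher score wins.
    scores = {
        label: sum(counts[w] for w in text.split())
        for label, counts in word_counts.items()
    }
    return max(scores, key=scores.get)

print(predict("detained for months with no lawyer"))
# → violation
```

Real systems use far richer features and models, but the principle is the same: the prediction is only as good as the patterns in the historical decisions the model was trained on—a point that matters for the bias discussion below.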

The pace of technological progress promises that AI arbitrators will be developed—likely sooner rather than later. As technology advances, the main question becomes whether (or, indeed, when) AI technologies will be able to decide or predict the outcome of an arbitral proceeding to the point that they can replace human arbitrators. Naturally, the possibility of an AI arbitrator substituting for a human one presents a multitude of concerns. The main questions are whether an AI arbitrator can validly rule on a dispute under the existing legal framework and whether an award rendered by an AI arbitrator would face vacatur or non-confirmation.


AI Arbitrator: The Federal Arbitration Act (“FAA”) does not explicitly stipulate that arbitrators must be human beings, though it does refer to arbitrators using pronouns such as “he” and “they.”8 The FAA, however, was drafted well before the digital era, when it was scarcely possible for the drafters even to contemplate an AI machine being nominated as an arbitrator. Thus, the semantic inquiry does not reveal much or provide us with a reliable answer.

When AI arbitrators arrive, this will arguably be among the first issues to be decided. The decision will likely be made in the courts—rather than in legislatures—given that common law has long filled the gaps in the FAA.9 Suffice it to note here that, for now, no provision in the FAA prohibits or restricts the use of an AI arbitrator in place of a human one. Considering the Supreme Court’s strong policy in favor of arbitration, the appointment of an AI arbitrator is, per se, unlikely to be an issue.

Bias: The adage that justice depends on what the judge—or, in this case, the arbitrator—ate for breakfast does not hold true for AI arbitrators. AI algorithms are procedurally neutral decision-makers, since the algorithm applies the same rules to every decision it makes.10 That said, when an AI arbitrator is trained on biased data, it naturally reinforces the pre-existing biases in that data.

Section 10(a)(2) of the FAA provides that a court may vacate an award if it finds evident partiality on the part of an arbitrator.11 That said, it is difficult to challenge an arbitral award on the ground of evident partiality because of the U.S. Supreme Court’s judicial policy in favor of arbitration. Moreover, courts are split on what constitutes “evident partiality” after the U.S. Supreme Court’s ruling in Commonwealth Coatings Corp. v. Continental Casualty Co., where the Court was unable to articulate a standard. Justice Black’s opinion in Commonwealth stated that arbitral tribunals “must avoid even the appearance of bias.”12 The real issue in the case of AI arbitrators is, arguably, quite the opposite—no such “appearance” is possible in the first place.

As stated above, a procedurally neutral AI algorithm can nonetheless yield a biased result.13 However, a human may not be able to identify the problem because AI algorithms are something of a “black box.” That is to say, when input is fed into the algorithm, machine-learning techniques produce an output—but without an explanation of why the output is what it is. This lack of transparency in the decision-making of AI arbitrators will make it more challenging to discover bias or discrimination, thereby increasing the difficulty of proving bias on the part of the arbitrator.14
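The paradox of a procedurally neutral rule producing a biased outcome can be shown with a toy sketch. The case records below are invented, as is the "seat" proxy; the point is that an algorithm applying the same rule to everyone still reproduces a skew baked into its historical data.

```python
# Hypothetical sketch: a "procedurally neutral" predictor that reproduces
# historical bias. In this made-up record, claimants at seat "A" were
# historically favored over claimants at seat "B".

history = [
    {"seat": "A", "won": True},  {"seat": "A", "won": True},
    {"seat": "A", "won": True},  {"seat": "A", "won": False},
    {"seat": "B", "won": False}, {"seat": "B", "won": False},
    {"seat": "B", "won": True},  {"seat": "B", "won": False},
]

def predict(seat):
    # The same rule for every case: predict the majority outcome
    # among historical cases with that seat.
    outcomes = [c["won"] for c in history if c["seat"] == seat]
    return sum(outcomes) > len(outcomes) / 2

print(predict("A"))  # → True:  seat-A claimants predicted to win
print(predict("B"))  # → False: seat-B claimants predicted to lose
```

Nothing in the rule itself distinguishes the two groups, which is precisely why the skew is hard to detect from the outside: a reviewing party sees only inputs and outputs, not the historical pattern driving them.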

Public Policy: An AI-rendered arbitral award could face certain objections—or, at least, queries—based on public policy violations. Although the FAA does not expressly stipulate violation of public policy as a ground for vacatur, the public policy exception as developed in common law allows vacatur or non-enforcement of an award if it runs contrary to an explicit, well-defined, and dominant public policy.15 Thus, we must ask whether an AI-rendered award could run against the values entrenched in U.S. law and society. The application of public policy grounds will depend on the receptiveness of U.S. courts to technology. Once AI becomes a pillar of the legal profession, the U.S. courts will likely welcome decisions rendered by AI arbitrators. The argument that public policy is a concept that develops and evolves in response to society’s needs over time also supports this conclusion. AI-rendered awards will be deemed more legitimate if there is an agreement between the parties appointing an AI arbitrator, in light of Volt Info. Scis. v. Bd. of Trs., where the Supreme Court held “arbitration is strictly a matter of contract” insofar as “parties should be at liberty to choose the terms under which they will arbitrate.”16


The contractual and rather informal nature of arbitration means that technology can be integrated into arbitration processes with ease. As AI makes its way into arbitration and as viable AI arbitrators appear to be just over the horizon, we must discuss whether the limited grounds for vacatur under the FAA as enacted in 1925 are broad enough to cover the issues that will arise with the integration of this technology, or whether we need to articulate new standards of review considering the intrinsic features and dangers of AI technology.


1 Arbitral institutions COVID-19 joint statement, ICC, (last visited Nov. 29, 2020)
2 Caroline Simson, 3 International Arbitration Trends To Watch In 2020, LAW360, (Jan. 1, 2020)
3 ROSS INTELLIGENCE, (last visited Nov. 29, 2020)
4 LEX MACHINA, (last visited Nov. 29, 2020)
5 ARBILEX, (last visited Nov. 29, 2020)
6 Nikolaos Aletras et al., Predicting judicial decisions of the European Court of Human Rights: A Natural Language Processing perspective, PEERJ COMPUTER SCI. (2016)
7 Daniel Katz et al., A general approach for predicting the behavior of the Supreme Court of the United States, 14 PLOS ONE (2017)
8 9 U.S.C.A. § 5
9 E.g., First Options of Chicago, Inc. v. Kaplan 514 U.S. 938 (1995) (recognizing kompetenz–kompetenz in US arbitration law).
10 Ric Simmons, Big Data, Machine Judges, and the Legitimacy of the Criminal Justice System, 52 U.C. DAVIS L. REV. 1067, 1081 (2018)
11 9 U.S.C.A. § 10(a)(2)
12 Commonwealth Coatings Corp. v. Cont'l Cas. Co., 393 U.S. 145, 150 (1968)
13 Anupam Chander, The Racist Algorithm? 115 MICH. L. REV. 1023, 1036 (2017).
15 E.g., E. Associated Coal Corp. v. United Mine Workers of Am., Dist. 17, 531 U.S. 57, 62 (2000); United Paperworkers Int'l Union v. Misco, Inc., 484 U.S. 29, 30 (1987); W.R. Grace & Co. v. Local Union 759, Int'l Union of United Rubber, 461 U.S. 757, 766 (1983).
16 Volt Info. Scis. v. Bd. of Trs., 489 U.S. 468, 472 (1989)
