
“To the Law Machine” revisited

A survey & analysis of methods and techniques for automation in the legal world

By Avinash Ambale

Artificial Intelligence (AI) in Law has a long history of research dating back to at least 1958. Despite decades of work, Artificial Intelligence has not scaled out of academia to real-life courtrooms and mediation chambers.

The reason, in our opinion, is that the theory of learning in computation has only recently caught up with adversarial inference or defeasible logic, a form of social learning widely used in both the theory and practice of common law. We posit that Artificial Intelligence that uses Causal Inference models (a quantum leap from defeasible logic) approximates social learning very well. These models provide a quantitative formulation for the assignation of legal liability. We opine that mathematical formulations of non-zero-sum Game Theory could provide alternative dispute resolution (ADR) mechanisms for Consumer Law.

The central theme of this paper is an analysis of the theory (logic & mathematics) of learning, i.e., epistemology, in computation and jurisprudence, individually and at their intersection. In our analysis, we find that mathematical models are finally approximating real-life dispute resolution. However, these models require legal documents to be in standardized, formal language; they cannot comprehend the wide variety of styles and formats of legal documents.

We prescribe standardized document interchange and markup formats. Without these standardized inputs, Artificial Intelligence cannot automate negotiations & the decision process. It will fail to meet expected outcomes – provision of voluminous, consistent & speedy ‘access to justice’ in Consumer Law ODR.

Introduction

Let us imagine a lawyer’s chambers. To further indulge our imagination, the lawyer is behind an invisible pane of glass, both unseen and unheard. The only mode of interaction with the lawyer is by text. Below the pane of glass, there is an envelope-sized slot, similar to the ones at box office counters. You slide a note with questions in text format into the slot and retrieve textual legal opinions from it. Futuristic? Hardly – this consultation machine was extensively researched in 1958, when its symbolic logic was derived and representational Boolean binary code for computing was created. What is significant about 1958?

Structure

The answer to that question forms the first section, on the History of AI & Law, of the literature review in this paper. The second section will describe computing epistemology for social learning similar to the adversarial system in the law. In the third section, we will look at Causal Inference models and Game Theory models for Online Dispute Resolution. In the fourth section, we will look at possible reasons why AI has not scaled into real-life mediation chambers and courtrooms. In the concluding section, we will look at possible ways forward.

Assumption, Definitions & Interpretation of Terms

  1. The sleight-of-hand of the invisible lawyer serves the author’s purpose of a Turing Test[1] for law.
  2. The systematic logic or mathematical models for the theory of learning, i.e., the epistemology of legal thought, can be executed on computing machines. Hence, we argue that all the AI we illustrate in this paper, except one example in an adjudicative setting, is applicable to Online Dispute Resolution (ODR). To the argument that Online represents the internet as we commonly know it, we present the semantic counter-argument that AI is computed on inter-connected networks of computers – public or private – and is symbolically represented in a connectionist model.

The only technical difference is the presentation layer: one, the pane of glass, which could be a computer screen; and two, the input/output slot for textual messages, which could be electronic or written text. Hence, the AI, i.e., the systematic logic we discuss in this paper, can be read as applicable to Online Dispute Resolution, with only the presentation layer as a perceptual difference between what we commonly think of as Online and Offline.

  3. We submit that we have liberally interpreted the terms law and legal to mean both the adjudicative law process and the Alternative Dispute Resolution (ADR) process throughout the text.
  4. In our definition, any set of human thought or decision-making processes that can be represented by mathematical constructs and symbols – with or without their embodiment in computer language – represents an Artificially Intelligent system. Our additional requirement for an AI System is one that continuously & automatically updates itself from available data, rather than being a static, rule-based system that does not update its mathematical model. Due to this requirement, we will exclude rule-based ODR systems like eBay Resolution Center & Modria ODR from this analysis.

Due to the common & popular conflation of the terms Artificial Intelligence and Deep Learning, throughout this text, when we refer to AI in general, we mean only the Deep Learning or Artificial Neural Network sub-field, not Machine Learning or other sub-fields.

While we might seek to pit zero-sum game Neural Networks against non-zero-sum game formulations, both hew to our definition of an AI, in that they are based on underlying mathematical constructs.

AI and (&) Law is to be read as AI applied to Law to distinguish it from AI in general.

  5. “To The Law Machine” was presented at the First Symposium on Mechanization of Thought Processes, which explains the title of our paper, “To the Law Machine” Revisited.
  6. In the context of this paper, it is important to draw a distinction in terminology. Automation is implied to mean a continuously running mechanized process without human intervention. Mechanization is implied to mean automation of an individual or discrete set of independent tasks or thoughts. Machine Intelligence (or machines) and Artificial Intelligence are used interchangeably to refer to computerized mechanization of human thought and decisions. Algorithms are implied to mean mathematical models that can be precisely expressed in formalized computer languages.
  7. In the context of this paper, we use the term ‘balance of probabilities’ in a statistical, Bayesian manner; not necessarily and always in the ‘civil dispute legal standard’ manner.

Section 1: History of Artificial Intelligence and Law

What is significant about 1958? It follows 1956 by a mere two years, and the significance of 1956 is that it is the official birth year of the field and term ‘Artificial Intelligence’[2]. Following the birth of this new field of study, its founding fathers John McCarthy and Marvin Minsky led the symposium on “the Mechanization of Thought Processes”[3] to collate, curate and present the work of a wide variety of scientists from various disciplines working towards a mechanistic view of thought processes.

In 1958, at the very first symposium on “Mechanization of Thought Processes”, Dr. Lucien Mehl, a Maître des Requêtes to the Council of State, France, presented his paper on “Automation in the Legal World”[4]. This was a logical framework with an associated symbolic language to create both an Information Machine and a Consultation Machine. To Dr. Mehl, the goal of the Information Machine was to achieve a speedy, accurate & reliable information retrieval mechanism to free up time for proper legal research and logical thought. His motivation to create an Information Machine was the ever-growing (at an alarming scale, in his own words) number of laws and regulations and scope of jurisprudence. The goal of the Consultation Machine was to bring to legal science the mathematical tools to create a systematic logical argument for legal problems whose solutions could unambiguously be drawn from available data.

In the introductory notes to his seminal work, Dr. Mehl describes a problem with the multiplicity of legal sources, a problem that persists to date. As an example, the governing laws and jurisdictions might be provincial, federal or global. The laws might be manifested as governing edicts laid by legislating bodies or as treatises and reviews by judicial authors; across a wide variety of documents such as contracts, treaties, laws and decrees. We will look at these issues in Section 4 of this paper on the challenges of AI & Law.

Dr. Mehl recommended a codification of texts from divergent sources of law – legislature, statutes or jurisprudence – into a common, harmonized standard prior to automation. Following Max Weber’s theory of rationalization as a precursor to mastery by calculation[5], specifically interpreting it as machine-driven calculation, we infer that rationalization through codification of sources of law is an essential step preceding mechanization of legal thought. Like Dr. Mehl, we will not describe how to codify the divergent sources of law in this section. Unlike Dr. Mehl, we will look at a few efforts at codifying legal knowledge in Section 4 of this paper.

Dr. Mehl’s basic premise and underlying epistemological inference is that the body of law can be reduced to a few basic or elementary concepts. Or, to construct his argument differently, a limited set of elementary concepts expands into the wide body of legal knowledge. Dr. Mehl’s insight was ground-breaking. He modeled elementary legal concepts as growing in an arithmetic progression, while the data, notions, situations and problems evolving from these basic concepts grow in a doubly exponential fashion. This unified model laid the systematic logical basis for expressing legal language in a Boolean binary framework. Using Boolean operators to construct the doubly exponential functions and deconstruct them back to the arithmetic progression made possible the translation of legal language into computerized codification, thereby laying the foundations for the mechanization and automation of Law.

Dr. Mehl showed that in cases of trade law, with just 6 basic concepts, there are 64 logical combinations and over 18 quintillion (2^64, roughly 1.8 × 10^19) logical functions. Calculating that many logical functions for Dr. Mehl’s illustrated case of tax computation of goods sold by a trader was impossible with the computing power available in the late 50s. It follows naturally that the technical implementation of this AI in trade dispute settlement was not feasible. Nonetheless, the ability to deconstruct and reconstruct legal language with Boolean operators is an extremely strong legacy on which to build AIs for Law.
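To make the combinatorics concrete, here is a minimal sketch in Python of the counts behind Dr. Mehl’s illustration, assuming n Boolean elementary concepts: there are 2^n truth-value combinations of the concepts, and 2^(2^n) distinct Boolean functions over those combinations.

```python
# Counts behind Dr. Mehl's trade-law illustration, for n Boolean
# "elementary concepts": 2**n truth-value combinations, and
# 2**(2**n) distinct Boolean functions over them.
n = 6
combinations = 2 ** n           # 64 logical combinations
functions = 2 ** (2 ** n)       # 2**64 = 18,446,744,073,709,551,616
print(combinations, functions)  # roughly 1.8e19 logical functions
```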

Shifting forward in time, let us look at the work of another leading figure in AI & Law, L Thorne McCarty and his TAXMAN AI[6].  L Thorne McCarty took his work forward from Dr. Mehl’s “elementary concept” logic foundation. McCarty created computer representations of legal concepts in a very narrow area of US Corporate Tax Law – the re-organization of corporations. McCarty used abstract symbolic representations to model legal concepts due to the ability of these abstractions to be linked to computational structures. McCarty used corporate tax law as the area of law for implementation of computer models as, in his view, it has many layers of commercial abstraction that are “artificial and formal systems themselves, drained of much of the content of the ordinary world”, and because, by legal standards, it is very technical.[7] McCarty’s TAXMAN[8] is one of the first computer embodiments of the systematic logical models for legal reasoning.

His choice, in 1972, of a narrow area of law that is an abstraction, and hence lends itself to being modelled easily in a computer language analogy, seems prescient even in 2019. The current state of the art of AI through Deep Learning is ANI (Artificial Narrow Intelligence), i.e., it has the ability to out-perform human intelligence in narrow tasks like image classification. From that perspective, selecting a narrow and deep area of focus in the law seems to serve the cause of AI & Law better than a broad, Grand Unified Theory for the codification of all law and justice. Seeking a Grand Unified Theory to codify and automate all areas of law is like seeking Artificial General Intelligence (AGI). McCarty’s observation that the “simplest legal problems of first-year law students are the hardest for AI because they require ordinary human experience, which is so alien to AI, but inherent to students”[9] seems prophetic. Artificial Intelligence (Deep Learning) has not progressed to the stage where it can replicate human learning and experience. Differences between Deep Learning and human learning include the inability of the former to learn causal models of the world from very little data by leveraging prior knowledge[10] (a theme we will progressively detail before coming to causal models in Section 3).

We are chronicling the history of thought underlying both AI & Law and AI in general to illustrate and differentiate the theory of knowledge underlying both.  So far, we have looked at the first two decades of AI & Law by way of two seminal works. These two decades are also the first two decades of AI in general.

We will now turn our attention to work on AI & Law in the 80s and 90s. The most significant feature of these decades, continuing till the 2010s, is the characterization of the period as an AI winter. An AI winter, like all hype cycles, starts with pessimism in the research community, amplified several times over by pessimism in general media, culminating in a funding freeze by investors – private & public.

In the backdrop of this severe funding freeze for research and development of AI in general, we see the establishment of the first International Conference on Artificial Intelligence and Law (ICAIL) in May 1987 in Boston, Massachusetts[11]. The first ICAIL is widely regarded as the birth of an AI & Law research community with a truly international forum to present research findings at the intersection of AI and law.

The first ICAIL marks not only the establishment of a robust AI & Law research community, but also a move towards a connectionist logical model, away from the underlying abstractionist models of Dr. Mehl and L Thorne McCarty. The argument for connectionist approaches was the failure of various symbolic systems to model abstract legal concepts. Connectionist approaches were proposed as a resilient architecture to wrangle the incomplete and inconsistent set of rules and descriptions that characterize Law. Connectionist models draw from computational neuroscience and are, in the author’s opinion, restricted to the study of individual human brains. Concepts like mirror neurons, which are attributed to associative[12], inter-personal or social learning, have not been incorporated into the theory of computing epistemology yet.

Connectionist models represent a divergence from defeasible reasoning or adversarial inference model in legal thought. Adversarial inference is a form of inter-personal or social learning. Its manifestation, in legal theory and practice is characterized by progressive learning of the Truth or Knowledge through an interaction of a minimum of three parties – the judge, the prosecution and the defense.

Artificial Neural Networks, alternately labeled Deep Learning, are computer embodiments of the connectionist approach, in which knowledge ‘emerges’[13] from the various connections of neurons, similar to the human brain. We will return to Deep Learning, and to how this divergence away from the legal theory of adversarial learning leads to an inadequate modeling of the legal system, in Section 2. For the rest of this section, we will take a quick look at a few notable AIs in Online Dispute Resolution, with a discourse on their underlying systematic logic.

Softlaw, by Peter Johnson and David Mead[14], is an online legal expert system released in the early 90s to serve legislation to public consumers. The objective was to simplify the internal logical complexity of legislative provisions for non-lawyer consumers. The motivation was that misinterpretation of legislative legal text – treating a disjunction as a conjunction, misinterpreting the order of evaluation of logical expressions or failing to recognize a double negative – can have dire consequences[15]. Softlaw aimed to address these dire consequences through a rigorous, 4-stage, systematic model. In Step 1, Softlaw created a verbatim model of legislation that includes all, and only, the subject legislation. In Step 2, Softlaw took the path of creating an overview of the effect of legislation, avoiding all shortcuts in modeling logic. In Step 3, the authors acknowledged that the bulk of the difficulty in interpreting legislation is due to the complexity of its structure; hence, they created a systematic logical model for the explicit modeling of structure to complement the verbatim modeling of the subject of legislation in Step 1. Step 4 allowed a separation of rule types, to separate the structure of legislation from the meaning of certain words and from the function of judicial pronouncements on the interpretation of those words.

Softlaw was acquired by Oracle Corporation and forms the basis for Oracle Policy Management. In the view of Adam Z Wyner, Associate Professor in Law and Computer Science at Swansea University, the AI & Law community has not followed suit with similar open-sourced tools for research and development despite the commercial success of Softlaw/Oracle Policy Management.

In 1997, R. P. Loui presented the Room 5 system at ICAIL. Room 5 was an online legal expert system to allow users to argue legal cases. Its goal was to facilitate discussion of pending US Supreme Court cases by the broader, non-legally-trained citizenry[16]. It is the opinion of the author that R. P. Loui’s work on community participation is either parallel to, or a precursor of, Cass Sunstein’s works on prediction markets[17] and the wisdom of the crowds[18]. Room 5 had an underlying systematic logic based on nested tables rather than the more common decision tree structures. Room 5 was used to demonstrate an online resolution of a simple stolen goods dispute in the case of a juvenile offender, with pro and con arguments for the approach. It is the opinion of Bart Verheij, President of the International Association of Artificial Intelligence and Law (IAAIL) and Chair of Artificial Intelligence and Argumentation at the University of Groningen, that Room 5’s nested arguments are a superior representation, as they do not readily allow for the graphical representation of what Pollock famously refers to as the undercutting argument.[19]

We now introduce the concept of defeasible logic, or adversarial inference, in the theory of knowledge. John Pollock, the father of defeasible logic or “Mr. Defeasible Logic”, did not have much interest in theories of legal reasoning, though his formal, systematized logic and corresponding mathematical representations have had a wide impact on the field of Artificial Intelligence and Law.

This marks a clear line in the sand with which to establish a timeline for AI & Law. In terms of both chronology and systematic logic, what we have looked at so far is historical – Yesterday’s AI. In the next section, we will look at Today’s AI & Law.

Section 2: Defeasible Logic or Adversarial Inference

In the previous section, we looked at the history of the systematic logic underlying both AI and law, by way of illustrative examples of the computer manifestations of those logical constructs. We introduced the concept of defeasible logic, or adversarial inference, at the end of that section, and marked a clear, epochal shift in the timeline of Artificial Intelligence and Law that we characterized as Yesterday’s AI.

Yesterday’s AI does not unlearn when presented with conflicting information, i.e., it does not use adversarial inference to progressively (socially) learn. Using Yesterday’s AI for Law, with its “individual-brain” connectionist model, is like a one-sided justice system without inter-connected or social, adversarial learning. Yesterday’s AI only computes forward probability: given a hypothesis, it will match evidentiary patterns across huge volumes of data.

Deep Learning, in most of its incarnations, constitutes Yesterday’s AI. In statistical terms, conventional Deep Learning networks demonstrate the prosecutor’s fallacy. Imagine this scenario in a courtroom. The prosecutor has previously introduced uncontested evidence to the court. The prosecutor questions an expert witness: “given the evidence, what is the probability that the defendant is innocent?” The expert witness says, “the odds of finding this evidence on an innocent man are so small that the court can safely disregard the possibility that the defendant is innocent”.[20]

We owe Bayes’ theorem – which calculates the probability of a cause, i.e., the verdict (guilty or innocent), from the evidence, i.e., the effect – to Thomas Bayes, a statistician and Presbyterian minister who answered theological questions with statistical rigor. It is fairly straightforward to compute forward probability: if we assume the cause (guilty), we can compute the probability of the effect, i.e., the evidence. Computing inverse probability, i.e., cause from effect (verdict from evidence), is not only unintuitive, but also tricky.

Using Bayes’ theorem, the defense counters: “if it might please the court, the prosecution obscures the fact that the probability of the defendant’s innocence is significantly different from that presented. His innocence depends not just on the probability of said evidence, but on the likely higher prior probability of his innocence, the explicitly lesser probability of the evidence in the case he was innocent, as well as the cumulative probability of the evidence being found on the defendant”.

A symbolic representation of the same in mathematical form is below (the | operator signifies ‘given’):

P(H | E) = P(H) × P(E | H) / P(E)

where P(H | E) is the probability of the Hypothesis (innocence or guilt) given the Evidence; P(H) is the prior probability of the Hypothesis; P(E | H) is the conditional probability of the Evidence given the Hypothesis; and P(E) is the probability of the Evidence.

Restating the defense’s assertion in mathematical terms:

P(Innocent | E) = P(Innocent) × P(E | Innocent) / P(E)
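A numeric sketch of the defense’s Bayesian counter, with invented probabilities used purely for illustration: even when P(E | Innocent) is tiny, the posterior P(Innocent | E) can remain substantial once the prior is taken into account – precisely the gap the prosecutor’s fallacy exploits.

```python
# Invented numbers for illustration: a defendant drawn from a large
# pool of mostly innocent people, and evidence that is rare on an
# innocent person but certain on a guilty one.
p_innocent = 0.999
p_e_given_innocent = 0.001
p_e_given_guilty = 1.0

# Total probability of the evidence across both hypotheses
p_e = (p_e_given_innocent * p_innocent
       + p_e_given_guilty * (1 - p_innocent))

# Bayes' theorem: P(Innocent | E) = P(Innocent) * P(E | Innocent) / P(E)
posterior_innocent = p_innocent * p_e_given_innocent / p_e
print(round(posterior_innocent, 3))  # ~0.5, far from "safely disregard"
```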

The legal fraternity might benefit from looking at Meadow’s Law and the egregious use of the prosecutor’s fallacy in securing wrongful convictions of mothers for infant deaths.[21]

It is the author’s opinion that this measure of uncertainty, or conditional probability, is missing from current neural network (Deep Learning) architectures. Bayesian networks provide a more robust and resilient architecture to represent Law because they incorporate inter-personal or social learning, and not just the “individual-brain” connectionist model of Deep Learning. The doctrine of adversarial inference in common law seems tailor-made for the application of Bayesian networks. There is sparse or no documentation on the influence Thomas Bayes and his work had on the origins of the adversarial system in England. The author stipulates his own prosecutor’s fallacy in finding a link, however tenuous, and notes that Thomas Bayes passed away in 1761, a year after Sir William Garrow, whose reforms helped usher in the adversarial legal system, was born in 1760.

We called Deep Learning, in most of its incarnations, Yesterday’s AI earlier in this section. Aided by celebrity scientists and super-successful entrepreneurs, advances in Deep Learning are breathlessly shilled by the media as the end-point of the evolution of homo sapiens, in stories with headlines about Robot Overlords and the Singularity. Yet the theory of learning of Yesterday’s AI cannot accomplish what courts in England could achieve two centuries ago: unlearning when presented with conflicting information and computing a balance of probabilities, i.e., inter-personal or social learning.

If we pair Yesterday’s AI, which matches evidentiary patterns to a hypothesis, with another AI that generates alternate hypotheses from evidence (data), we have Today’s AI[22]. This competing dyad of Neural Networks is aptly named Generative Adversarial Networks (GANs), in a seeming nod to the adversarial system in common law. The two AIs compete to optimize diametrically opposing functions in a zero-sum game; but they are agnostic to the outcome. The outcome is the discovery of the Truth or, in Sir William Garrow’s dictum, “presumed innocent until proven guilty”. We finally begin to see the incorporation of inter-personal or social learning into Artificial Intelligence in general; these are not Bayesian Deep Learning networks, yet. In reality, GANs have largely been used from 2018 onwards in only a very limited set of applications. One is accelerating drug discovery for diseases – seemingly the only application area with positive societal impact. GANs have been garnering a lot of media attention primarily for questionable societal impact through the creation of Deep Fakes and the forgery of fine art[23]. While the underlying logical model seems to have converged, implementations of computational law using these models have not. There is scant published research on the application of GANs to dispute resolution – adjudicative or ADR.
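To make the competing dyad concrete, here is a minimal GAN training loop, assuming PyTorch and toy data; it is a sketch of the zero-sum structure only, not of any system from the AI & Law literature. The generator plays hypothesis-maker, proposing samples; the discriminator plays the opposing role, testing proposals against the “real evidence” distribution.

```python
# Minimal GAN sketch (assumes PyTorch). G maps noise to candidate
# samples; D scores how well samples match the real distribution.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
D = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

real = torch.randn(64, 2) * 0.5 + 2.0   # toy stand-in for "real evidence"
for step in range(200):
    fake = G(torch.randn(64, 8))

    # Discriminator: maximize ability to tell real from fake
    opt_d.zero_grad()
    loss_d = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    loss_d.backward()
    opt_d.step()

    # Generator: the diametrically opposed objective - fool the discriminator
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()
```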

The conventional Neural Networks (Deep Learning) that get giddying media attention for surpassing human skills in image classification work by using single point-estimates. These single point-estimates are used as weights to classify images. Creating a Deep Learning mechanism that uses probability distributions to truly mimic Bayesian adversarial learning is computationally very expensive. We will not get into the trenches of the mathematics and relative costs & benefits of GANs and Bayesian Neural Networks. Instead, we will shift gears in Section 3 to look at causal inference models, which represent a quantum leap up from Bayesian networks. We will look at recently published research that models causal inference from a real-life case to firmly establish cause-in-fact. These mathematical models establish cause-from-effect and, interestingly, apportion among multiple causes of a single effect – a case of over-determination.

In Sections 1 and 2, we saw the systematic logic underpinning AI & Law and how the two diverged. Section 2, on Today’s AI, shows the convergence of centuries-old legal thought with the systematic logic of adversarial inference (social) learning.

In the next section, we will look at Tomorrow’s AI that goes a step beyond balance of probabilities to firmly establish causation. We will also look at game theory models for their application to Online Dispute Resolution.

Section 3: Causal Inference Models & Game Theory

Tomorrow’s AI goes beyond the balance of probabilities to establish causation. Theoretical work on causal inference was presented at ICAIL in June 2019[24]. A causal inference AI used the landmark Heneghan v Manchester Dry Docks & Others[25] case to identify and evaluate cause-in-fact. This AI focused on over-determination, i.e., determining which of multiple candidate causes leads to a single outcome. Asbestos exposure was modelled alongside 8 other causes to evaluate their effect on adenocarcinoma/lung cancer. The causal models determined ‘what’ caused the adenocarcinoma and ‘who’, among the multiple employers, caused it.

This research had two objectives and associated rationales. One, there is a lot of debate, at both semantic and metaphysical levels, about the definition of causation[26]. Hence, the first objective was to create a systematic logical model, represented in mathematical language, for causation and the associated legal liability. This would counter the inadequacy of the traditional “but-for” test in cases of over-determination. Two, existing case precedent[27] is a policy-based, ‘material contribution’ exception without a quantitative basis for defining ‘material contribution’. Hence, the second objective was to define, through effect-to-cause mathematical models, a quantitative basis for material contribution where multiple contributors have caused the effect.

The research looked at three separate sub-fields of Artificial Intelligence: one, causal inference and computation of a Causal Calculus for a NESS test[28] (Necessary Element of a Sufficient Set of causes); two, evidential reasoning that uses causal stories and evidential arguments to analyse competing positions as a hybrid theory[29]; and three, argument schemes that analyse common reasoning patterns in arguments, with critical questions to evaluate the strength of the arguments[30].

Facts of the Case

In 2011, Mr Heneghan started displaying symptoms of lung cancer. He died in 2013 due to adenocarcinoma – a malignant lung tumour. His estate claimed compensation for wrongful death caused by exposure to asbestos against 6 of the 10 employers he was employed at between 1961 and 1974. Mr Heneghan was a cigarette smoker, as well.

Commentary: This case is complicated by the multiplicity of causes – cigarette smoking, multiple contributors of asbestos exposure, the long latency period between asbestos exposure and morbidity, and various alternative causes and confounders. Does the court use the conventional, strict burden-of-proof requirement or does it use the Fairchild exception[27]? Does the court rely on the testimony of expert witnesses who use the Helsinki criteria[31] to compute estimates of each employer/defendant’s individual ‘material contribution’ as ranging from 2.5 to 10.5% – limits deemed to materially increase the risk of contracting the disease?

The research based on causal inference and argumentation schemes created a quantitative calculus to algorithmize the complicated decision-making.

The causal inference model went one step beyond the judge who ignored the smoking history of Mr Heneghan. It created two causal models:  one for asbestos exposure and another for smoking and added proportional values to the combination of these causal models.

Science is unable to verify, with exactitude, which of the asbestos fibres caused the cell mutation leading to lung cancer. Hence, evidentiary considerations were not argued under the strict ‘burden-of-proof’ requirements; instead, the claimants sought to use the relaxed Fairchild exception. Consequently, there was no evidentiary data with which to build up the evidential-model hybrid theory[29].

Using Causal Calculus from NESS theory along with mathematical set theory, the researchers established mathematical formulae for the eventual decision. These formulations algorithmized the NESS test’s evaluation of whether an element is a necessary part of some sufficient subset of the set of all causative elements.
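The flavour of such an algorithmized NESS test can be sketched in a few lines of Python. The factor names and the toy sufficiency predicate below are illustrative assumptions, not the researchers’ actual calculus: a factor counts as a NESS cause if it belongs to some sufficient subset of the factors actually present that becomes insufficient without it.

```python
# Illustrative NESS (Necessary Element of a Sufficient Set) check.
# The factors and the sufficiency predicate are invented for this sketch.
from itertools import combinations

factors = {"asbestos_A", "asbestos_B", "smoking"}

def sufficient(causes: frozenset) -> bool:
    """Toy predicate: any two co-present factors suffice for the outcome."""
    return len(causes) >= 2

def is_ness_cause(factor: str, present: set) -> bool:
    """True if some sufficient subset of the present factors contains
    the factor AND becomes insufficient once the factor is removed."""
    for r in range(1, len(present) + 1):
        for subset in map(frozenset, combinations(present, r)):
            if (factor in subset and sufficient(subset)
                    and not sufficient(subset - {factor})):
                return True
    return False

# Note the over-determination: no single factor passes "but-for"
# (removing any one still leaves a sufficient pair among the rest),
# yet each factor is a NESS cause.
for f in sorted(factors):
    print(f, is_ness_cause(f, factors))  # all True
```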

In summary, this research has conclusively demonstrated the ability to create a hyper-rational, mathematical model of decision-making, specifically in tortious injury cases with many contributing causes and confounders. Though tested in a lab setting, this AI demonstrates the ability to build causal models from limited data – an inability of Deep Learning that we looked at in Section 2. These models can be applied to achieve uniform outcomes with or without human heuristics. (We will look at heuristics in Section 4.) Causal Inference models, by incorporating social learning (in this case, social learning from more than 7 opposing parties, their respective counsel, a jury, the judge and appeals court judges), move away from the rigid, individual-brain connectionist models underlying Today’s AI, i.e., Deep Learning.

In the section on the History of AI & Law, we looked at AI applied outside the adjudicative process, in areas of law like tax law – trade and corporate – legislative law outreach, and citizen law. We took a brief detour into the adjudicative process and tort law earlier in this section to illustrate the latest advances in computational research applied to law. With computational research catching up to centuries-old legal thought, we now have adversarial inference and causal inference computation models. These models are necessarily zero-sum, even if the outcome serves the cause of justice. The application of these computational models to ADR, specifically consumer law, needs further research. We will return to the hyper-rationality basis of AI and how that conflicts with the conception of social justice through a game theory concept. But first, a primer on game theory, specifically non-zero-sum games, with an illustrative example of online dispute resolution in consumer law.

To introduce game theory, let us look at economics – specifically the economics of marketplaces where consumers and retailers electronically trade goods: by definition, e-commerce. The classical economics of Adam Smith, applied to a two-way e-commerce marketplace, is a zero-sum game – one party has to lose for the other to win. In this formulation, unbridled competition in the marketplace delivers the best results; which means either retailers see their margins progressively erode to zero, or consumers see a progressive inflation in the prices of goods sold[32]. In Adam Smith’s conception, competitive behavior drives market equilibrium. John Nash, through his famous Nash Equilibrium, showed that competitive behavior yields a non-optimal equilibrium. The impact of John Nash on game theory cannot be overstated. In his paper “The Bargaining Problem”, John Nash created a mathematical construct for maximizing utility for cooperative negotiators, aka non-zero-sum games. This mathematical construct is widely used in problems from economics to political science. ODR, being a subset of ADR with its focus on win-win settlements outside the adjudicative process, is well suited to applications of non-zero-sum game theories.

In micro-economics, utility maximization is defined as a problem consumers face: “how do we spend our money to maximize our utility?”. In a marketplace where consumers are in a cooperative bargaining situation with retailers, utility maximization is a two-person game of negotiation. In Nash’s mathematical formulation of the bargaining solution, both players get their status quo payoff (i.e., the non-cooperative payoff) in addition to a share of the benefits arising from the cooperation.[33]

Nash had a set of mathematical axioms (which we will not go into in this paper) as absolutes to be satisfied to maximize utility for both players. The optimal equilibria that satisfy those axioms are precisely the points that maximize the expression

(u(x) − u(d)) × (v(x) − v(d))

where u and v are the utility functions of Players 1 and 2 respectively, x is a candidate agreement, and d is the disagreement outcome.

In this formulation, u(d) and v(d) are the status quo utilities that either player falls back on if they decide not to bargain with the other player.

This is a very elegant mathematical representation of the disagreement outcome, or dispute, validating its application in ADR – specifically mediation as applied to consumer law in marketplaces. However, Nash’s bargaining solution seeks to maximize overall good without any regard to the equitable distribution of benefits. This is in direct contrast to John Rawls’ “maximizing the minimum utility” outlined in his Theory of Justice. We will see that conflict articulated in an eNegotiation system, Family Winner, further below in this section.
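The tension between the two solution concepts can be seen in a toy split of a 10-unit surplus, under assumed utility functions and a zero disagreement point (all invented for illustration): Nash’s product favours the player who converts shares into utility more efficiently, while a Rawls/Raiffa-style maximin choice equalizes the realized utilities.

```python
# Toy contrast of Nash's bargaining solution with a maximin choice,
# over a discretized split of 10 units. Utilities are assumptions.
import math

def u(x): return math.sqrt(x)   # Player 1: diminishing returns on share x
def v(y): return y              # Player 2: linear utility on share y
u_d = v_d = 0.0                 # disagreement (status quo) utilities

splits = [(i / 10, 10 - i / 10) for i in range(101)]

# Nash: maximize the product of gains over the disagreement point
nash = max(splits, key=lambda s: (u(s[0]) - u_d) * (v(s[1]) - v_d))

# Rawls/Raiffa maximin: maximize the worse-off player's utility
maximin = max(splits, key=lambda s: min(u(s[0]), v(s[1])))

print("Nash split:   ", nash)     # ~(3.3, 6.7): maximizes joint gains
print("Maximin split:", maximin)  # ~(7.3, 2.7): equalizes realized utility
```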

Following John Rawls’ and Howard Raiffa’s maximin principle[34], a rigorous mathematical model for the negotiation of multi-objective water resources conflicts was created[35]. This forms the basis for a commercial implementation of an automated eNegotiation tool for ODR in consumer law, specifically e-commerce: SmartSettle. SmartSettle enhances Nash’s bargaining problem by removing the need for each player in the two-person game to know the other’s preferences. This work is patented and implemented as the SmartSettle eNegotiation System[36]. SmartSettle, released in the early 90s, lacks live applications per its 2016 press release.[37]

In the early 90s, another eNegotiation system, Family Winner, was created at Victoria University, Melbourne, Australia. The objective was to avoid trial law, and the associated zero-sum games, in settling property claims in divorce settlements. The researchers[38] observe a fundamental conflict in building eNegotiation systems like Family Winner – is the system concerned with providing justice or with supporting mediation?

Family Winner uses both game theory concepts and heuristics. (We will take a brief look at heuristics in Section 4.) Family Winner automatically computes trade-off rules from input information of importance values, i.e., the degree to which each party desires each undivided marital asset. The basic assumptions in Family Winner are: one, the dispute can be modeled using Principled Negotiation; two, weights can be assigned to each of the issues in dispute; and three, sufficient issues are in contention for each party to be compensated for losing an issue. The detailed mathematical formulation behind this process is beyond the scope of this paper, as it would require dozens of pages of explanatory notes; a much-simplified sketch of the trade-off idea follows.
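As flagged above, here is a much-simplified sketch of the trade-off idea, with hypothetical issues and importance values; the actual Family Winner algorithm is considerably more elaborate. Each issue goes to the party that rates it higher, and a running gap in awarded importance is used to compensate the party that has lost out so far.

```python
# Hypothetical issues with (party A's rating, party B's rating) out of 100.
issues = {
    "house":   (60, 30),
    "pension": (20, 40),
    "car":     (10, 10),
    "boat":    (10, 20),
}

awarded = {"A": [], "B": []}
gap = 0  # positive => A is ahead in awarded importance

# Resolve the most contested differences first
for issue, (a, b) in sorted(issues.items(),
                            key=lambda kv: -abs(kv[1][0] - kv[1][1])):
    give_to_a = a > b or (a == b and gap <= 0)
    if give_to_a:
        awarded["A"].append(issue)
        gap += a
    else:
        awarded["B"].append(issue)
        gap -= b

print(awarded)  # {'A': ['house', 'car'], 'B': ['pension', 'boat']}
```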

In real-world trials of Family Winner at Victoria Legal Aid (VLA) family solicitors’ practices, the overriding concern was the bias towards mediation over justice. This follows from the logical conflict between John Nash’s formulation and John Rawls’ maximin conceptualization that we noted earlier. Further research is required to examine a possible combination of adversarial or causal inference, which delivers established norms of jurisprudence, with maximin non-zero-sum game theory, which delivers mediated negotiation settlements, as a possible solution for ODR in Consumer Law.

In all, despite the spectacular progress of systematic logic and its technology implementations, AI or algorithmic models for eNegotiation have not scaled out of academia into real-life mediation chambers – online or offline. Recalling our description of a Turing test for law in the introduction, we can see AIs successfully passing that test in this section; and yet, despite tremendous logical and computational progress, they have not scaled out of academia. In the next section, we will look at some possible reasons why. It will be time to focus on the input/output slot below the pane of glass in our version of the Turing Test, and on the wording of the questions that pass through it.

Section 4: Possible reasons why AI has not scaled into real-life courtrooms/mediation chambers

Celsus, in Justinian’s Digest, said: “To Know the Law is not merely to understand the words; but as well their force and effect”.[39]

AI stops at lexical analysis, i.e., analysis of word structure, frequency of occurrence, etc. It does not have a semantic understanding of concepts, even for everyday language. This sounds like the beginning of a joke – “two professors get on the internet” – but it was a real-life experiment documented in a peer-reviewed research article about the pitfalls of AI with layman language. Two leading researchers at the intersection of AI & Law[40] performed a Google search for “Artificial Intelligence” + “Online Dispute Resolution”. The top search results pointed to either their research articles or their conference presentations, leading the researchers to conclude they were THE experts in this field. They then tried the search with a different string: “Artificial Intelligence” + “On Line Dispute Resolution”. What a difference a single space between on and line makes. These search results led them to more scholars with published research.

We are talking now about a problem with both AI (Google) and the creators/publishers of legal work – scholars and practitioners. If the creator/publisher of the text had added metadata markup to indicate synonyms, Google’s web crawlers could have indexed both and served appropriate search results. And Almighty Google – a veritable Leviathan of the information age – does not have a thesaurus to indicate that on-line is the same as on line is the same as online. If this is the state of the art of Google’s much-vaunted AI prowess with layman language, let us compound the problems a hyperbolic zillion times over for the legal lexicon, semantics and ontology. Among various projects to create a standardized legal ontology, the Centre for Electronic Dispute Resolution in the Netherlands stands out for its work – the BEST project[41] – to map legal language to everyday language. The BEST project, along with other ODR projects, has since been wound up due to financial unviability.
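The missing thesaurus step the anecdote calls for is, mechanically, not hard; what is hard is agreeing on and maintaining the synonym sets. A minimal sketch, assuming a hand-built variant list, would collapse the spelling variants before indexing:

```python
# Normalize spelling variants before indexing, so that "online",
# "on-line" and "on line" all retrieve the same documents.
# The variant list is a hand-built assumption for this sketch.
import re

VARIANTS = [(re.compile(r"\bon[- ]line\b", re.IGNORECASE), "online")]

def normalize(text: str) -> str:
    for pattern, canonical in VARIANTS:
        text = pattern.sub(canonical, text)
    return text.lower()

print(normalize("Artificial Intelligence and On Line Dispute Resolution"))
# -> "artificial intelligence and online dispute resolution"
```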

But back to Google and the internet. Way before Google were the standards – HTML and HTTP. HTML, or Hyper-Text Markup Language, provides a consistent framework to display any kind of textual content on the internet. With ample support for adding plug-ins in a modular fashion, it grew to accommodate video, voice and other media on the internet. HTTP – Hyper-Text Transfer Protocol – seamlessly transfers data across multiple varieties of internet switches and routers to render it across a wide variety of computers, browsers and mobile apps in exactly the same form. Standardization of the language of the internet, i.e., machine-readable and machine-executable language, led to a proliferation of internet sites; so much so that finding relevant results became impossible. This necessitated search engines, and Google, with its superior indexing capabilities, has become the de-facto Leviathan of information organization and retrieval. The flywheel effect of harmonized, standardized data provides a solid bedrock for the mathematical models of Google’s AI to function.

Equivalent markup languages and interchange protocols are lacking in the legal world. Standardization of legal documentation has been attempted, and mandated by legislation, across different countries. Machine-readable and machine-executable legal code has been in place for a few decades; but competing standards, lack of enforcement and different stakeholders make its universal adoption a Holy Grail. For instance, in the EU there are MetaLex and LKIF. Other than to legal scholars and possibly computer engineers, the differences are hard to fathom for legal practitioners or laypeople. MetaLex aims to serve as the lowest common denominator for a common standard for the interchange of data. Confusingly, LKIF – the Legal Knowledge Interchange Format – calls attention to interchange by its very name but is meant for interpretation of the law, i.e., knowledge representation. In the US, President Obama mandated that all public government documentation be released in machine-readable form, though no specific format was mandated or followed. The Hammurabi project seeks to create a repository of computable law in the US to enable a law machine to take facts as inputs and return decisions. It has started codifying parts of tax and immigration law in the US, with a long way to go. We are back at the beginning of AI & Law in 1958, with Dr. Mehl’s law machine and codifying the law. Half a century later, progress on codifying the law has not advanced as significantly as progress in the mathematical models that represent legal thought. If we recall Max Weber’s maxim from Section 1 – “rationalize, then mechanize” – we are still at the stage of codifying the law so that it is machine readable and can be processed by the hyper-rational mathematical models of AI.
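In the spirit of the facts-in, decisions-out model the Hammurabi project describes, a fragment of “computable law” can be sketched as a plain function over structured facts. The rule and its thresholds below are invented for illustration and correspond to no actual statute:

```python
# A toy "computable law" fragment: structured facts in, decision out.
# The rule and all numbers are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Facts:
    annual_income: float
    dependents: int

def standard_deduction(facts: Facts) -> float:
    """Hypothetical rule: base deduction plus a per-dependent allowance,
    halved above an (invented) income threshold."""
    base = 12000.0 + 2000.0 * facts.dependents
    if facts.annual_income > 200000.0:
        base *= 0.5  # phase-out, again purely illustrative
    return base

print(standard_deduction(Facts(annual_income=85000, dependents=2)))  # 16000.0
```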

So, back to Celsus in Justinian. AI has not understood the words yet; so, it does not know the law. The Law has not presented its words harmoniously and consistently to AI. This mutually assured regression is possibly the single biggest reason why AI & Law has not seen the expected Cambrian explosion[42].

As to the words’ force and effect, we will merely mention them in this paper, as they are worthy of another paper in and of themselves. The effect the law seeks is justice and equity, by way of maximin, as John Rawls called it. AIs, being rational agents, are the opposite: they maximize utility and cannot consider the equitable distribution of benefits. As to the law’s force, which we literally interpret as enforcement: AI’s bias in recidivism cases directly correlates with the socio-economic demography of AI’s creators.[43]

Let us set aside AI’s lack of understanding of words and their force & effect, and look at a problem with human decision making – that of bounded rationality[44]. Legal reasoning assumes all participants in a conflict are “rational agents”. As Cass Sunstein demonstrates through his research[45] at the intersection of behavioral economics and the law, human decision making is not perfectly rational at all times. Human decision making resolves disputes via several heuristics; heuristics that cannot be represented in Today’s or Tomorrow’s AI (as we have labelled them). This is another major topic we will not cover in this paper.

Another major reason for the lack of commercial implementations of AI & Law is the inability of AI to explain how it arrives at its results. We will not detail the black-box nature of AI, nor the latest advances in explainable AI, in this paper.

Section 5: Way Forward

It is imperative that we create codification, standardization, and machine-readable and machine-executable standards and frameworks – like LKIF, MetaLex and Hammurabi – for consumer law in India. Not only do we have the problem described by Dr. Mehl of widely differing legal documentation requirements, we also have myriad natural languages. The codification working committee should ideally be constituted with a mix of experts from academia, legal practice, the judiciary, legislation, and data engineering/science.

For existing ODR initiatives, it might be advisable to start looking at NLP (Natural Language Processing), a form of AI, to automate the processing of the free-text entries claimants submit via email, social media or notes in structured web forms. AI can help build a three-tiered taxonomy of Category, Type & Item – this would help automate the workflow of routing the right problem to the right participant, and is an essential first step before automated or human-assisted eNegotiation. Automatically extracting subject lines from unstructured text could be another use of AI in ODR.
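As a first step, the Category tier of such a taxonomy could be bootstrapped with a standard text classifier. The sketch below assumes scikit-learn and uses invented complaint snippets and category labels purely for illustration:

```python
# Route free-text complaints into a (hypothetical) Category tier of the
# Category/Type/Item taxonomy with a bag-of-words classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "the phone I ordered arrived with a cracked screen",
    "seller refuses to refund my cancelled order",
    "I was charged twice for the same booking",
    "warranty claim rejected without explanation",
]
categories = ["defective_goods", "refunds", "billing", "warranty"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, categories)

print(model.predict(["the phone screen arrived cracked"]))
# likely ['defective_goods'], given the heavy token overlap
```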

The next step after AI for text inputs would be AI for voice inputs – the same uses as above, but from human voice conversations – extending ODR, or Online, to mean voice interfaces and not just text interfaces. That way, the lawyer in our Turing Test can hear and talk.

These are baby steps in the usage of AI for Consumer Law ODR; but the steps need to be supported by a solid bedrock of codified consumer law.

Bibliography:

[1] Wikipedia contributors, ‘Turing test’ (Wikipedia, The Free Encyclopedia, 2 September 2019, 18:57 UTC) <https://en.wikipedia.org/w/index.php?title=Turing_test&oldid=913708188> [accessed 12 September 2019]

[2] A. Kaplan, M. Haenlein, ‘Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence’ (2019) Business Horizons 62(1), 15-25

[3] Mechanisation of thought processes: proceedings of a symposium held at the National Physical Laboratory on 24th, 25th, 26th and 27th November 1958. (National Physical Laboratory)

[4] Lucien Mehl, ‘Automation in the Legal World: From the Machine Processing of Legal Information to the “Law Machine”’, in Mechanisation of Thought Processes: Proceedings of a Symposium held at the National Physical Laboratory, 24-27 November 1958 (1959, vol. II, Her Majesty’s Stationery Office, London) pp. 755-787

[5] Kim, Sung Ho, ‘Max Weber’, (The Stanford Encyclopedia of Philosophy (Winter 2017 Edition)) <https://plato.stanford.edu/archives/win2017/entries/weber/> [accessed 12 September 2019]

[6] L. Thorne McCarty, ‘Reflections on TAXMAN: An Experiment in Artificial Intelligence and Legal Reasoning’ (1977) Harvard Law Review, Volume 90, Number 5

[7] L. T. McCarty, ‘Some Requirements for a Computer-based Legal Consultant’, Technical Report LRP-TR-8, Laboratory for Computer Science Research, New Jersey: Rutgers University

[8] The TAXMAN program was written in 1972-73 and first discussed in a paper presented at the Workshop on Computer Applications to Legal Research and Analysis, Stanford Law School, April 28-29, 1972.

[9] McCarty, supra n. 32, p. 27.

[10] Brenden M. Lake, Ruslan Salakhutdinov, Joshua B. Tenenbaum, ‘Human-level concept learning through probabilistic program induction’ (2015) SCIENCE: 1332-1338

[11] Bart Verheij, Enrico Francesconi, Anne Gardner, ‘ICAIL 2013: The Fourteenth International Conference on Artificial Intelligence and Law’ (2014) <https://www.aaai.org/ojs/index.php/aimagazine/article/view/2523/2429> [accessed 12 September 2019]

[12] V. Kosonogov, ‘Why the Mirror Neurons Cannot Support Action Understanding’ (2012) Neurophysiology 44(6): 499-502

[13] The author has emphasized Emerges to draw attention to emergent behavior that is a rigorous mathematical model of the saying “The whole is greater than the sum of the parts”

[14] Peter Johnson, David Mead, ‘Legislative knowledge base systems for public administration: some practical issues’ (1991) ICAIL 91, 108-117

[15] Layman Allen, Charles Saxon, ‘Some Problems in Designing Expert Systems to Aid Legal Reasoning’ (1987) ICAIL, 94

[16] R. P. Loui, J. Norman, J. Altepeter, D. Pinkard, D. Craven, J. Lindsay, M. Foltz, ‘A Testbed for Public Interactive Semi-Formal Legal Argumentation’ (1997) ICAIL, 207-214

[17] Cass R. Sunstein, ‘Deliberating Groups Versus Prediction Markets (or Hayek’s Challenge to Habermas)’, Episteme, forthcoming; University of Chicago Law & Economics, Olin Working Paper No. 321; University of Chicago, Public Law Working Paper No. 146 <https://ssrn.com/abstract=956189>

[18] Disclosure: The author has a granted US patent, US9033781B2, Robert Craig Steir, Michael Scott Brewster, Avinash Viswanath Ambale, ‘Designing a real sports companion match-play crowdsourcing electronic game’

[19] Douglas Walton, ‘Argumentation Methods for Artificial Intelligence in Law’ (2010) Springer-Verlag

[20] Norman Fenton, Martin Neil, Daniel Berger, ‘Bayes and the Law’ (2016) Annual Review of Statistics and Its Application 3: 51-77

[21] Wikipedia contributors, ‘Meadow’s law’,  (Wikipedia, The Free Encyclopedia 30 July 2019, 17:02 UTC),  <https://en.wikipedia.org/w/index.php?title=Meadow%27s_law&oldid=908583734> [accessed 13 September 2019]

[22] The term Today’s AI is deliberately mis-labelled. It is not widely used yet (Sept 2019) and is possibly Tomorrow’s AI. However, we mis-label it to show the epochal shift between Today’s AI (really Tomorrow’s AI), which uses adversarial inference, and Tomorrow’s AI (really the day-after-tomorrow’s AI), which uses causal inference models.

[23] Karen Hao, ‘Inside the World of AI that forges beautiful art and terrifying deepfakes’ <https://www.technologyreview.com/s/612501/inside-the-world-of-ai-that-forges-beautiful-art-and-terrifying-deepfakes/> [accessed 13 September 2019]

[24] Ruta Liepina, Giovanni Sartor & Adam Wyner, ‘Evaluation of Causal Arguments in Law: The Case of Overdetermination’ (2019) ICAIL

[25] Heneghan v Manchester Dry Docks Ltd & Ors [2014] EWHC 4190 (QB)

[26] Michael S. Moore, ‘Causation and Responsibility: An Essay in Law, Morals, and Metaphysics’ (2009) OUP

[27] The rationale for a quantitative basis for  ‘material contribution’ is a policy-based exception in the case of overdetermination in the landmark ‘Fairchild v Glenhaven Funeral Services Ltd & Ors [2002] UKHL 22 (20 June 2002) [2002] 3 WLR 89’ case. Mr Fairchild had worked for a number of employers who had negligently exposed him to asbestos, eventually leading to his death from malignant mesothelioma. It was impossible to point out which of the employers exposed him to asbestos leading to his mesothelioma. The traditional test of causation, “on the balance of probabilities” was deemed inadequate to establish causation to a single employer. The judgement of the House of Lords was “the appropriate test of causation is whether the employers had materially increased the risk of harm to the claimants” – a ruling enshrined as the Fairchild Exception.

[28] Alexander Bochman, ‘Actual causality in a logical setting’ (2018) IJCAI, 1730-1736

[29] Floris J Bex, Peter J Van Koppen, Henry Prakken, and Bart Verheij. ‘A hybrid formal theory of arguments, stories and criminal evidence.’ (2010) Artificial Intelligence and Law, 18(2):123–152

[30] Douglas Walton, ‘Argumentation methods for artificial intelligence in law’ (2005) Springer Science & Business Media

[31] Panu Oksa, Henrik Wolff, Tapio Vehmas, Paula Pallasaho and Heikki Frilander (eds.), ‘Asbestos, Asbestosis, and Cancer: Helsinki Criteria for Diagnosis and Attribution 2014’ (2014) <www.julkari.fi/bitstream/handle/10024/116909/Asbestos_web.pdf> [accessed 13 September 2019]

[32] Let us set aside the fact that e-commerce marketplaces are not completely neutral as they skew the market either by providing their own captive supply or by artificial spikes in demand through incentives

[33] John Nash, ‘The Bargaining Problem’ (1950) Econometrica 18(2): 155-162

[34]  Howard Raiffa, ‘The Art and Science of Negotiation’, (1982) Belknap Press of HUP

[35] Thiessen, E.M., and D.P. Loucks, ‘Computer-Assisted Negotiation of Multi-objective Water Resources Conflicts,’ (1992) Water Resources Bulletin, American Water Resources Association 28(1), 163-177.

[36] Ernest M. Thiessen, ‘Computer-based method and apparatus for interactive computer-assisted negotiations’ (1996), US Patent 5,495,412 (ICANS)

[37] <https://smartsettle.com/2016/03/16/ecommerce/>

[38] John Zeleznikow, Emilia Bellucci, ‘Family Winner: Integrating Game Theory and Heuristics to Provide Negotiation Support’ (2003) Legal knowledge and information systems : JURIX 2003 : the sixteenth annual conference. Bourcier, Danièle, ed. Frontiers in artificial intelligence and applications . IOS Press, Amsterdam, 21-30.

[39] Justinian, Digest, Book 1, Title 3, 17; Quotations in the Langdell Reading Room <https://hls.harvard.edu/library/about-the-library/history-of-the-harvard-law-school-library/quotations-in-the-langdell-reading-room/> [accessed 13 September 2019]

[40] Arno R. Lodder, Computer/Law Institute, Centre for Electronic Dispute Resolution, Amsterdam, Netherlands, and Ernest M. Thiessen, SmartSettle ICAN Systems Inc., Vancouver, Canada

[41] Elisabeth M. Uijttenbroek, Arno R. Lodder, Michel C.A. Klein, Gwen R. Wildeboer, Wouter Van Steenbergen, Rory L.L. Sie, Paul E.M. Huygen & Frank Van Harmelen ‘Retrieval of Case Law to Provide Layman with Information about Liability: Preliminary Results of the BEST-project’ (2008) Computable Models of the Law, 291-311

[42] We do not explicitly reference the vast volume of thought and scholarship, in and for India, on access to justice and equity for consumers. As technology researchers and practitioners, we think automated eNegotiation in consumer law ODR can help deliver this access at scale. We assume this to be the goals of consumer law ODR community as well.

[43] Jeff Larson, Surya Mattu, Lauren Kirchner and Julia Angwin, ‘How We Analyzed the COMPAS Recidivism Algorithm’ (ProPublica, May 23, 2016) <https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm> [accessed 13 September 2019]

[44] Wikipedia contributors, ‘Bounded rationality’,  (Wikipedia, The Free Encyclopedia,  8 September 2019, 22:31 UTC) <https://en.wikipedia.org/w/index.php?title=Bounded_rationality&oldid=914703609> [accessed 13 September 2019]

[45] Sanjit Dhami, Ali al-Nowaihi, Cass R. Sunstein, ‘Heuristics and Public Policy: Decision Making Under Bounded Rationality’ (2018) Harvard Public Law Working Paper No. 19-04

—The author is CEO/Founder Pervazive Inc., a research lab focused on Artificial Intelligence & Computational Neuroscience
