{"id":334505,"date":"2024-03-21T16:52:45","date_gmt":"2024-03-21T11:22:45","guid":{"rendered":"https:\/\/www.indialegallive.com\/?p=334505"},"modified":"2024-03-21T16:52:45","modified_gmt":"2024-03-21T11:22:45","slug":"european-union-artificial-intelligence-law","status":"publish","type":"post","link":"https:\/\/www.indialegallive.com\/magazine\/european-union-artificial-intelligence-law\/","title":{"rendered":"Controlling the AI paradox"},"content":{"rendered":"\n
By Ashit Srivastava<\/strong><\/p>\n\n\n\n The discipline of law has been at the forefront of the regulation of Artificial Intelligence (AI), framing ethical rules for its use. No other field has been as active in attempting to curb, or at least restrict, the possible negative impact of AI. Ever since the onset of automation, especially self-driving cars, questions of liability for AI have preoccupied the modern generation of lawyers and lawmakers. Running parallel has been the struggle to keep pace with the prolific leaps technology has made over the last decade or so. Into this mix has come the European Union\u2019s AI Act, the first legislation of its kind.<\/p>\n\n\n\n Europe has a history of tackling technology and its possible repercussions. Whether during the 19th-century industrial revolution, which introduced the ideas of capitalism to the continent, or in the mid-1990s, when it adopted the Data Protection Directive, Europe has brought modern laws to bear on modern legal questions. The European Parliament has now passed the EU Artificial Intelligence Act by an overwhelming majority. Under the Act, AI developers, manufacturers or distributors may face penalties of up to \u20ac35 million. It is a tremendous step at a time when the rest of the world is still attempting to put a check on the growth of AI systems. Interestingly, instead of laying down one umbrella regulation for the European continent, the law regulates AI in a classified manner, distinguishing between \u201cUnacceptable AI\u201d, \u201cHigh-Risk AI\u201d and \u201cAI with limited or minimal risk\u201d. In the case of \u201cUnacceptable AI\u201d, there is a complete prohibition on AI that targets vulnerable groups or deploys manipulative techniques capable of violating the fundamental rights of citizens and European values. 
To elaborate: <\/p>\n\n\n\n Articles 6 and 7 of the AI Act provide for high-risk AI. They include:<\/p>\n\n\n\n \u201c(a) the AI system is intended to be used as a safety component of a product, or is itself a product, covered by the Union harmonisation legislation listed in Annex II;<\/p>\n\n\n\n (b) the product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment with a view to the placing on the market or putting into service of that product pursuant to the Union harmonisation legislation listed in Annex II.<\/p>\n\n\n\n 2. In addition to the high-risk AI systems referred to in paragraph 1, AI systems referred to in Annex III shall also be considered high-risk.\u201d<\/p>\n\n\n\n Annex II of the enactment sets out the list of Union harmonisation legislation with which the EU AI Act must be reconciled. Annex III, in turn, lists the systems to be regarded as high-risk AI, covering areas such as biometric identification, education, employment, law enforcement and the administration of justice.<\/p>\n\n\n\n Article 52 of the AI Act deals with limited-risk AI systems, imposing \u201cTransparency obligations for certain AI systems\u201d.<\/p>\n\n\n\n Apart from this risk-based approach, there is a minimal-risk category, covering applications such as AI-enabled video games, which pose little or no risk to fundamental rights. <\/p>\n\n\n\n A deeper analysis shows why the EU AI Act is an appropriate approach, attuned to the different layers at which AI impacts the life of an individual. Quite understandably, not all AI needs to be treated the same: the gravity of its influence, its manipulative capacity and its capacity to harm, physically or psychologically, differ widely. 
Additionally, the enactment appropriately addresses the unacceptable uses of AI: for a decade or so, several simulation-based AI systems have categorically targeted unconscious human fallibilities, such as ranking bias, heuristic failings or confirmation bias, and these are just the tip of the iceberg.<\/p>\n\n\n\n The second layer, appropriately placed in the high-risk category, covers AI systems increasingly used in the judiciary, the employment sector and law enforcement. AI tools for assessing recidivism have become common in most European countries, though America has taken the lead in this direction through COMPAS, a tool used to predict an individual\u2019s likelihood of reoffending. Similarly, the employment sector has been deploying AI at every stage, from shortlisting individuals for interviews to selecting the eligible candidate. <\/p>\n\n\n\n With AI being adopted at such speed, the possibility of discrimination cannot be ruled out, especially as the data on which algorithms are trained are often themselves biased, producing still more bias in the output. Special mention must be made of the chaos currently being created by deep fakes. Not only Europe, but the whole of South Asia has been under constant attack from deep fakes.<\/p>\n\n\n\n The EU enactment addresses this question by demanding greater transparency for individuals exposed to such content. With deep fakes becoming part of the techno-social fabric of civil life, what is required is a clear dividing line between what is real and what is artificial.<\/p>\n\n\n\n Though the EU Act is a welcome step towards a global AI regulation regime, there will be a few critiques as well, mostly on the theme that the Act may tend to overregulate. 
Which companies or entities will be in a position to bear the compliance burden is an open question; for many developers and manufacturers of AI systems, that burden will surely work as a disincentive. <\/p>\n\n\n\n There is no doubt that there must be an ethical limit to the development of AI tools, but not one that stifles technological development. The EU Act has not fully resolved this complex question. Technology jurisprudence rests on balancing the interests of users, governments and private players, and on putting up much-needed guardrails even as AI platforms and applications grow at an alarming speed. <\/p>\n\n\n\n \u2014The writer is Assistant Professor of Law at Dharmashastra National Law University, Jabalpur<\/strong><\/p>\n","protected":false},"excerpt":{"rendered":" The recently enacted European Union Artificial Intelligence Act is a forerunner even as nations struggle to frame laws to curb or restrict the misuse and negative impact of artificial intelligence. 
The Act is a welcome step towards a global AI regulation regime<\/p>\n","protected":false},"author":2,"featured_media":334509,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":"","_jetpack_memberships_contains_paid_content":false,"jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","enabled":false}}},"categories":[272,2],"tags":[122159,131165,131164],"jetpack_publicize_connections":[],"jetpack_sharing_enabled":true,"jetpack_featured_media_url":"https:\/\/d2r2ijn7njrktv.cloudfront.net\/IL\/uploads\/2024\/03\/21123620\/Artificial-Intelligence-Act-for-web-min.jpg","_links":{"self":[{"href":"https:\/\/www.indialegallive.com\/wp-json\/wp\/v2\/posts\/334505"}],"collection":[{"href":"https:\/\/www.indialegallive.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.indialegallive.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.indialegallive.com\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.indialegallive.com\/wp-json\/wp\/v2\/comments?post=334505"}],"version-history":[{"count":0,"href":"https:\/\/www.indialegallive.com\/wp-json\/wp\/v2\/posts\/334505\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.indialegallive.com\/wp-json\/wp\/v2\/media\/334509"}],"wp:attachment":[{"href":"https:\/\/www.indialegallive.com\/wp-json\/wp\/v2\/media?parent=334505"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.indialegallive.com\/wp-json\/wp\/v2\/categories?post=334505"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.indialegallive.com\/wp-json\/wp\/v2\/tags?post=334505"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}\n
\n
\n