Ranjan Chadha's Blog

How good is Artificial Intelligence? Should it be used widely?



In Steven Spielberg’s Minority Report (2002), based on a dystopian 1956 novella by Philip K. Dick, it’s 2054 and the world has advanced to voice-controlled home automation, robot insects, and gesture-controlled computers. There’s also predictive policing: psychics, or “precogs”, foretell the crime of murder and run the system. Even so, it turns out, the system is fatally flawed. The precogs can’t always agree on what they see, and their lack of consensus is concealed to preserve the image of the system as flawless.

Here in India, the government recently passed the Criminal Procedure (Identification) Bill, 2022, in the Lok Sabha to provide for modern “measurement” techniques of convicts and other persons, to make the investigation of crime more “efficient and expeditious”, and to increase conviction rates. The Bill seeks to repeal the Identification of Prisoners Act, 1920, which does need modernization. However, the Bill goes a step too far and marks an increase in the surveillance powers of the State. It expands the definition of “measurements” from the finger and footprint impressions of the 1920 law to include photographs, iris and retinal scans, “physical, biological samples, and their analysis”, and “behavioral attributes”, including signatures and handwriting.

As Ms. Vrinda Bhandari opined in the Hindustan Times of 1st April 2022, the Bill reeks of an increased risk of state surveillance and impinges on privacy. It places no restrictions on access to such data, nor does it make access contingent on prior judicial or administrative review. Further, despite its centralized structure, the Bill provides no safeguards against leakage of the data. Its stated purpose, the “measurement” of convicts and other persons to make the investigation of crime more “efficient and expeditious” and to increase conviction rates, is terminology vague enough to leave the collection of any individual’s data to the whim and fancy of any official. And collecting data is one thing; how that data is used is quite another. The Bill makes no mention of the use of AI algorithms, or of how the various issues arising from their use would be addressed and tackled.

I am not surprised. Surveillance and the collection of information occupy a major share of time and effort on every government’s agenda. And these days there is a lot of excitement and euphoria about the application of artificial intelligence to nearly every aspect of society, from commerce to government. The questions that beg answers are: Can technology become the panacea for all the ills that plague good governance? Can we get data-driven algorithms that are tamper-proof and free from all kinds of human biases and prejudices? And can explicit and implicit ethics be made a part of such algorithms?

AI as a Scientific Research Field

AI, as a scientific research field, has given rise to a wide array of computational techniques that enable computers to process large and complex datasets and quickly extract useful information. AI aims to provide logically sound, evidence-based insights into datasets. Insofar as these datasets ‘accurately’ represent phenomena in the world, such AI techniques can provide useful tools for analyzing that data and choosing intelligent actions in response to that analysis, all with far less human labor and effort. This is the traditional approach to AI: essentially, creating a customized piece of software to address a complex issue or solve a specific problem by automating what would otherwise require colossal human mental effort.

The adoption of data-driven organizational management, which includes big data, machine learning, and artificial intelligence (AI) techniques, is growing rapidly across all sectors of the information ecosystem. There is little doubt that the collection, dissemination, analysis, and use of data in government policy formation, strategic planning, decision execution, and the daily performance of duties can improve the functioning of government and the performance of public services. This is as true for law enforcement as for any other government service.

             Commercial and governmental institutions have long used statistics to develop representations of the world that can inform future actions and policies. In this sense, the AI revolution is really a continuation, and massive acceleration, of much longer and older trends of ‘datafication’ and computerization. What is new and unprecedented is the sheer volume of data, the speed at which it can now be effectively processed, the sophistication of the analysis of that data, the degree of automation, and the consequent lack of direct human oversight that is possible. 

The Problem 

However, as more bureaucratic processes are automated, there are growing concerns about data-centric decisions that have far-reaching implications for people’s life opportunities and rights. Significant and serious concerns have been raised around the use of data-driven algorithms in policing, law enforcement, and judicial proceedings. This includes ‘predictive policing’: the use of historic crime data to identify individuals or geographic areas at higher risk of future crime, so that they can be targeted for increased policing.

Predictive policing as a term can refer to a variety of technologies and practices. The technical usage of the term usually refers to algorithmic processes for predicting locations, or individuals, with high probabilities of being involved in future crime, based on historical data patterns. Recent approaches utilize “big data” techniques and arguably entail forms of mass surveillance of the public. Predictive policing is also an excellent example of how AI might be deployed more generally, and of the ethical challenges that may arise.
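To make that concrete, here is a minimal, hypothetical sketch in Python of what location-based prediction reduces to at its core: counting historical incidents per map cell and ranking the cells. The data, cell names, and function are invented for illustration; real systems such as PredPol use far more elaborate statistical models.

from collections import Counter

# A minimal, hypothetical sketch of frequency-based "hot spot" prediction.
# Real predictive-policing systems use more elaborate models; this only
# shows the core mechanism: historical counts drive future patrol targets.

# Invented toy data: each record is (grid_cell_id, offence_type).
historical_incidents = [
    ("cell_12", "burglary"), ("cell_12", "assault"),
    ("cell_07", "burglary"), ("cell_12", "theft"),
    ("cell_31", "theft"),    ("cell_07", "burglary"),
]

def predict_hot_spots(incidents, top_n=2):
    """Rank grid cells by historical incident count; return the top N."""
    counts = Counter(cell for cell, _ in incidents)
    return [cell for cell, _ in counts.most_common(top_n)]

print(predict_hot_spots(historical_incidents))  # ['cell_12', 'cell_07']

Note that the output is driven entirely by the historical record: whatever distortions exist in that record pass straight through to the “prediction”.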

There were echoes of some of this in 2011, when the US got its first predictive policing software driven by artificial intelligence. The Los Angeles Police Department’s flagship program, Operation Laser, was used to pinpoint locations connected to gun and gang violence. It crunched information about past offenders over two years, using technology developed by the data analysis firm Palantir, and sought to predict which individuals were most likely to commit a violent crime, based on their criminal histories. The LAPD also used software called PredPol to predict “hot spots” for various crimes. There were no dramatic tales of robberies halted before they could happen, or of old ladies saved from muggers. Instead, by 2019, Laser was shut down. Both programmes were widely criticised and discredited.

In 2020, the LAPD canceled its contract with PredPol too. Meanwhile, in 2019, the LAPD had begun working, on a trial basis, with a company called Voyager Analytics, which claims to have an AI-driven solution that can piece together a picture of human behaviour, affinity, and intent from people’s online behaviour. That trial ended in November 2019; it is not clear why.

Voyager’s technology and services are representative of an emerging ecosystem of tech companies responding to law enforcement’s demand for tools that expand policing capabilities. For overworked law enforcement, the motivation to use such tools is clear: they might help pinpoint crime hotspots, discover suspects, or detect behaviors that would otherwise go unnoticed.

Predictive policing has been controversial for multiple reasons, including questions of prejudice and ‘precrime’: it effectively treats people as guilty of (future) crimes, acts they have not yet committed and may never commit. This core controversy over prejudice and precrime is magnified by concerns over the hidden biases contained in historic data sets, with obvious implications for racial, gendered, ethnic, religious, class, age, disability, and other forms of discriminatory policing. As Facebook’s facial recognition and Twitter’s content warnings have shown, a machine can only form its opinions of right and wrong (or tell a human from a bot) based on the opinions of those who taught it how.
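The bias concern lends itself to a small thought experiment in code. In the hypothetical simulation below (every number is invented, drawn from no real dataset), two districts have identical underlying crime rates, but one starts out more heavily patrolled. Because recorded incidents partly reflect where police are looking, retraining on the record simply locks the initial bias in.

# A hypothetical toy simulation of the feedback loop critics describe.
# Both districts have the SAME true crime rate; only the starting
# patrol allocation differs. All numbers are invented.
true_rate    = {"district_A": 10, "district_B": 10}
patrol_share = {"district_A": 0.8, "district_B": 0.2}  # biased start

recorded = {d: 0.0 for d in true_rate}
for year in range(5):  # five "years" of policing
    # Recorded incidents scale with how much attention a district gets.
    for d in true_rate:
        recorded[d] += true_rate[d] * patrol_share[d]
    # "Retrain": reallocate patrols in proportion to the record so far.
    total = sum(recorded.values())
    patrol_share = {d: recorded[d] / total for d in recorded}

print(recorded)      # {'district_A': 40.0, 'district_B': 10.0}
print(patrol_share)  # {'district_A': 0.8, 'district_B': 0.2}

District A ends up with four times the recorded crime of district B, and a permanently larger patrol share, even though the two were identical by construction; nothing in the data itself would tell an auditor that the disparity is an artefact of where the looking was done.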

Freedom & Privacy Compromised

The final hurdle is more subtle but no less important: whether an individual’s or a population’s freedom and privacy ought to be compromised on the basis of an algorithm, rather than any real-world evidence of wrongdoing, the minority report of the 2002 film’s title. Some of us believe that there can be no ultimate answer to this one. Others, rightly or wrongly, believe that the answer is a plain and simple NO.

As David L. Weisburd, an Israeli-American criminologist well known for his research on crime and place-based criminology, policing, and white-collar crime, points out, any form of policing involves a decision to give away a certain amount of freedom in exchange for a certain promise of security. It is this exchange that allows a police force to make arrests, levy charges, question suspects, and search premises. The trick, Weisburd says, is not to give up too much freedom. When it comes to policing, it is simply not acceptable for the police to use systems that are not transparent about the data and how it is used.

Drug Research Analogy

            “In modern democracies, you can’t have a drug approved for wide use until you’ve done research on its impacts,” he says. “That research not only includes whether or not it works but also whether it harms. We should be using the same model in policing. We don’t. Technologies can be effective but cause harm at the same time. Governments need to pay attention to this issue and balance the benefits and potential harm.”

The Ideal Alignment

As data-driven organizational management, led by big data, machine learning, and AI techniques, continues to accelerate, and more processes are automated, there are growing concerns over the social and ethical implications of this transformation. Machine ethics is concerned with how autonomous systems can be imbued with ethical values. “AI ethics” considers both designing AI to explicitly (in a clear and detailed manner, leaving no room for confusion or doubt) recognize and solve ethical problems, and the implicit values and ethics of implementing various AI applications and making automated decisions with ethical consequences. Ideally, explicit ethics, implicit ethics, and the embedding and regulation of the system in society should all align.
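The distinction can be illustrated with a deliberately crude sketch (all field names and rules below are invented; real systems are far richer). Implicit ethics lives in design choices the system never reasons about, while explicit ethics requires the system to represent and apply ethical rules itself.

# Hypothetical illustration of implicit vs. explicit machine ethics.
# Field names and rules are invented for this sketch.

PROTECTED = {"race", "religion", "gender"}

def strip_protected(record):
    """Implicit ethics: a design-time choice. The model is simply never
    shown protected attributes, and never 'knows' why."""
    return {k: v for k, v in record.items() if k not in PROTECTED}

def decide_with_reasons(decision, reasons):
    """Explicit ethics: the system applies a stated rule; no decision is
    issued without articulable, reviewable grounds."""
    if not reasons:
        return "REFUSE: no reviewable grounds for " + decision
    return decision + " because " + "; ".join(reasons)

record = {"prior_convictions": 2, "race": "X", "gender": "Y"}
print(strip_protected(record))                     # {'prior_convictions': 2}
print(decide_with_reasons("flag_for_review", []))  # refuses without reasons

The alignment the paragraph above calls for would require both layers to agree with each other, and with the laws and social norms surrounding the system.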

Fundamental Limitations

This may be far-fetched as of now. AI can play chess very well; indeed, computers have become increasingly hard to beat at chess. Modern artificial intelligence is capable of wonders. It can produce breathtaking original content: poetry, prose, images, music, and human faces. It can diagnose some medical conditions more accurately than a human physician. Last year it produced a solution to the “protein folding problem”, a grand challenge in biology that had stumped researchers for half a century.

Yet today’s AI still has fundamental limitations. Relative to what we would expect from a truly intelligent agent, and relative to human cognition, the original inspiration and benchmark for artificial intelligence, AI has a long way to go.

There are still things that AI cannot do:

AI cannot answer puzzles, make moral decisions, invent something at will, learn through experience, write software, use common sense to make decisions in real time, or care for humans. It cannot empathize: AI cannot feel, or engage with feelings like empathy and compassion, and therefore cannot make another person feel understood and cared for.

AI cannot multitask or be creative: it cannot create, conceptualize, or plan strategically. While AI is great at optimizing for a narrow objective, it is unable to choose its own goals or to think creatively. It also lacks dexterity: AI and robotics cannot accomplish complex physical work that requires fine motor control or precise hand-eye coordination. And AI cannot deal with unknown and unstructured spaces, especially ones that it has not observed.

Far From Sentience

The list is long, but notice that all of these behaviors are attributes of sentience or consciousness. There is no one agreed-upon interpretation of sentience. Broadly, we might say that it is the subjective experience of self-awareness in a conscious individual, marked by the ability to experience feelings and sensations. Sentience is linked to intelligence but is not the same thing. We may consider an earthworm to be sentient without thinking of it as particularly intelligent (even if it is certainly intelligent enough to do what is required of it). As it stands today, AI is not even close to sentient.

As I mentioned earlier, concerns over the hidden biases contained in data sets, and the obvious implications for racial, gendered, ethnic, religious, class, age, disability, and other forms of discriminatory prejudice, have been raised time and again with regard to the use of AI for making data-centric decisions that have far-reaching implications for people’s life opportunities and rights. This has set off a huge debate on the use of AI and its ethics. The developers are aware of the issue, but it seems that monetary interests take priority and that the developers are simply waiting for AI to become sentient.

It leads me to ask whether, at this stage, it is ethical for us to use AI to make such decisions in the first place.

 

Author: Ranjan “Jim” Chadha – a peripatetic mind, forever wandering the digital universe, in search & appreciation of peace, freedom, and happiness. So tune in, and turn on, but don’t drop out just yet!


9 thoughts on “How good is Artificial Intelligence? Should it be used widely?”

  1. Nice one, Chadha Sahib… like it or not, AI is here to stay, with humans’ consent, and it’s no more ARTIFICIAL

    1. My point is that AI is not yet sufficiently developed to make and enforce decisions, especially with hidden biases contained in data sets. The apparent implications for racial, gendered, ethnic, religious, class, age, disability, and other forms of discriminatory prejudice creep in time and again with the use of AI for making data-centric decisions, and such decisions have far-reaching implications for people’s life opportunities and rights.

      BTW it is very much artificial and in no way organic or natural. Take the example of the earthworm. Now that is Natural!!

  2. Technology comes with a cost at all levels. Everyone is racing to get on the tech bandwagon without understanding the long-term consequences!
    Well written n researched!

    1. Jimmy, this may have been the best blog so far.
      AI is extremely complex and not easy to gain a decent understanding of.
      In this blog, you have highlighted the main issues and made the AI question understandable.
      As you state, AI is many things in many areas. Machine learning is probably one area where, so far, AI has been positive. The AI of everyday use, i.e. creating algorithms to mimic human intelligence, is also a positive for AI, though in future this could get out of hand.
      The two biggest takeaways are partly the huge amount of data that can be processed instantly to provide decision-making information. It’s too complex and there are many dangers ahead.
      Maybe the most important point is about all the innocent as well as malicious biases. This is the key.
      The conclusion being that as AI evolves, we need to keep an eye on in what areas, why, and how it is used. Since most citizens can’t be bothered, and the fact is that diving deeper is too complex and time-consuming, the risk that AI will continue to be used for nefarious purposes is very high.
      The only salvation is if humanity changes its mindset and realizes that we all are the same. Technology, especially AI, represents the amazingness of the human mind and thought. Unfortunately, it requires the highest moral and human conduct.
      BTW, it was the perfect length, not too long, not too short. The unique Swedish word is lagom, just right.
      Also, if you ask for comments you will get them. Don’t expect them to be short.

  3. Well written and interesting, Jim. If handled for the right means, it’s a boon for mankind. But there is a flip side: it can be destructive, too, if used by the wrong hands.
