In Search Of Explainable Artificial Intelligence – OpEd

By Alessandro Bruno

Today, if a new entrepreneur wants to understand why the bank rejected the loan application for his start-up, or if a young graduate wants to know why the large corporation she hoped to work for did not invite her for an interview, neither will be able to discover the reasons behind those decisions.

Both the bank and the corporation used artificial intelligence (AI) algorithms to determine the outcome of the loan or the job application. In practice, this means that if your loan application or your CV is rejected, no explanation can be provided. This produces an embarrassing scenario that tends to relegate AI technologies to suggesting solutions, which must then be validated by human beings.

Explaining how these technologies work remains one of the great challenges that researchers and adopters must resolve in order to make humans less suspicious, and more accepting, of AI. To that end, it is important to note what AI is not. The very term AI conjures up images of sentient robots able to understand human language and act accordingly. While it may one day reach that point, for the time being, in the vast majority of cases, AI refers to complex software programmed to make decisions based on the inputs it receives.

The big leap occurred when such software moved beyond playing and winning board games against humans (Google’s DeepMind) to approving or denying credit, as Lenddo’s software does. In other words, AI now largely consists of software built on decision-making algorithms. The self-driving systems that Apple, Google, and Tesla are developing rest on the same principles. Similarly, many humans come face to face with AI every day when they ask Siri, Alexa or Google Assistant to help them with a search or with finding a good restaurant.

AI, therefore, for most people, is a decision-making system that produces results based on the data fed to a given algorithm. And since we are discussing AI explainability, it helps to recall that algorithms (the word derives from the name of the ninth-century Baghdad mathematician al-Khwarizmi, who first described such procedures) are procedures that solve a specific problem through an established number of basic steps.
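As a minimal illustration of that definition (Euclid’s ancient procedure for finding the greatest common divisor of two numbers, not anything drawn from the commercial systems mentioned above), an algorithm can be as small as a few lines of Python:

def gcd(a, b):
    # Euclid's algorithm: repeat one simple step until the remainder is zero.
    while b != 0:
        a, b = b, a % b  # replace (a, b) with (b, remainder of a divided by b)
    return a

print(gcd(48, 18))  # prints 6

The entire decision path of such a procedure can be followed by hand; the difficulty discussed in this article arises when the steps number in the millions.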

In a sense, explainable AI offers a solution: it enables the relevant humans to understand how an AI algorithm made a particular decision. Why does anyone need to know this? Because algorithms make decisions with serious ethical and legal ramifications. Therefore, if AI is to advance and spread, it can only do so in a context of ‘explainability.’ Consider self-driving cars. In case of an accident, the obvious question is ‘who is responsible?’ The issue of repairing the damage to a vehicle, person, or property must also be addressed. Legal systems have been slow to adapt, even as automated systems make decisions in situations that not even the human programmers of the algorithms can predict.
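To make the idea concrete, here is a deliberately simplified sketch in Python. The model, the feature names and the weights are all invented for illustration; no real lender’s system is this simple. The point is that a transparent, linear scoring model can report exactly which inputs pushed a decision one way or the other:

# Hypothetical linear credit-scoring model: every weight below is made up for illustration.
WEIGHTS = {"income": 0.6, "years_in_business": 0.8, "existing_debt": -1.2}
BIAS = -0.5
THRESHOLD = 0.0

def score_and_explain(applicant):
    # Each input's contribution is its value times its weight, so it is directly readable.
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    score = BIAS + sum(contributions.values())
    decision = "approve" if score >= THRESHOLD else "reject"
    return decision, sorted(contributions.items(), key=lambda kv: kv[1])

decision, reasons = score_and_explain(
    {"income": 0.4, "years_in_business": 0.2, "existing_debt": 0.9}
)
print(decision)  # "reject"
print(reasons)   # existing_debt: -1.08, years_in_business: 0.16, income: 0.24

With the layered, non-linear models actually deployed, no such direct reading of the inputs exists, and recovering one is the core task of explainable AI research.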

The issue of self-driving vehicles may be the most obvious.

Who is responsible? Insurers and lawyers are already debating the question. Experimental vehicles have driven millions of miles in tests on actual roads, yet before drivers are allowed to exploit the full potential of self-driving technology, the legal knots will have to be untangled. And for that to happen, the AI algorithms will have to be able to explain how they reach their decisions.

Until then, humans will have to take responsibility for decision-making processes. And this limits the potential of AI.

Therefore, the future of ‘AI explainability’ is the future of AI itself.

AI will also need to explain itself, given its enormous philosophical implications.

In the Bible, the Creator asked Adam and Eve not to eat the fruit from the Tree of the Knowledge of Good and Evil. The Creator offered no explanation for the demand, and his two most valuable creations, Adam and Eve, defied the command. For that, God expelled the two from Eden and from eternal life. Mankind will not have this luxury: AI technology, like all other technology before it, cannot be expelled from the world, no matter how hard Luddites may try. Yet the ‘forbidden fruit’ story does pose two crucial AI problems:

Will AI systems also become independent enough to think for themselves and defy human orders (as many fear and as science fiction has suggested)? And why should humans abide by decisions taken by AI systems?

Of the two, the latter question must be addressed first.

In the opinion of many, including unlikely sources involved in selling AI-dependent technologies – such as Elon Musk of Tesla Motors and SpaceX – artificial intelligence suffers from excessive opacity. Artificial intelligence technologies have generated programs that peruse CVs, cover letters, medical diagnostics or loan applications. Nevertheless, these programs make decisions in ways that seem arbitrary. Or, rather, they make decisions that neither the humans in control nor the programs themselves can explain. Programmers and users simply cannot track the multitude of calculations that the AI program has used to reach its conclusion.

AI technology (if we can even use that pedestrian term to describe something as revolutionary as the discovery of fire) will be a turning point for humanity, not just for the economy, making our lives better. Humanity is on the brink of devising systems, whether machines or processes, that can think, feel and even worship their creators. Indeed, humanity has truly reached an Olympian moment. Having evolved to the point of reshaping the world physically, it has now reached a stage where it can literally play ‘God’. If humans are now in a position to create ever more sentient ‘machines’, there are no guarantees that the machines will be happy to serve and obey their creators all of the time.

This article was published at Geopolitical Monitor.com

Geopolitical Monitor

Geopoliticalmonitor.com is an open-source intelligence collection and forecasting service, providing research, analysis and up to date coverage on situations and events that have a substantive impact on political, military and economic affairs.
