Daniel Wagner On Artificial Intelligence – Interview

By Russell A. Whitehouse

The following is an interview with Daniel Wagner, author, along with Keith Furst, of the new book AI Supremacy: Winning in the Era of Machine Learning.

RW: Can you briefly explain the differences between artificial intelligence, machine learning, and deep learning?

DW: Artificial intelligence (AI) is the overarching science and engineering associated with intelligent algorithms, whether or not they learn from data. However, the definition of intelligence is subject to philosophical debate, and even the term “algorithm” can be interpreted in a wide context. This is one of the reasons there is some confusion about what AI is and what it is not: people use the word loosely and have their own definitions of what they believe AI to be. AI is best understood as a catch-all term that tends to imply the latest advances in intelligent algorithms, but the context in which the phrase is used determines its meaning, which can vary quite widely.

"AI Supremacy: Winning in the Era of Machine Learning", by Daniel Wagner and Keith Furst
“AI Supremacy: Winning in the Era of Machine Learning”, by Daniel Wagner and Keith Furst

Machine learning (ML) is a subfield of AI that focuses on intelligent algorithms that can learn automatically (without being explicitly programmed) from data. There are three general categories of ML: supervised machine learning, unsupervised machine learning, and reinforcement learning.
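
To make the distinction concrete, here is a brief, purely illustrative Python sketch of ours (not from the book), using the widely available scikit-learn library. The toy data and the expected outputs are assumptions for illustration only: the supervised model learns from labelled examples, while the unsupervised model finds structure in unlabelled data on its own.

# Illustrative sketch only: supervised vs. unsupervised learning with scikit-learn.
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Supervised: the algorithm learns from labelled examples (features X, labels y).
X = [[1, 1], [2, 1], [8, 9], [9, 8]]   # toy feature vectors
y = [0, 0, 1, 1]                       # known labels for each example
classifier = LogisticRegression().fit(X, y)
print(classifier.predict([[1.5, 1.0], [8.5, 8.5]]))  # expected: [0 1]

# Unsupervised: no labels are given; the algorithm finds structure (here, two clusters).
clusterer = KMeans(n_clusters=2, n_init=10).fit(X)
print(clusterer.labels_)  # cluster assignment for each example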

Deep learning (DL) is a subfield of ML that imitates the workings of the human brain, using artificial neural networks, to process data and create patterns for use in decision-making. It is true that the way the human brain processes information was one of the main inspirations behind DL, but DL only mimics the functioning of neurons. This does not mean that consciousness is being replicated, because we do not really understand all the underlying mechanics driving consciousness. Since DL is a rapidly evolving field, there are other, more general definitions of it, such as any neural network with more than two layers. The idea of layers is that information is processed by the DL algorithm at one level and then passed on to the next, so that higher levels of abstraction can be reached and conclusions drawn about the data.
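
As a rough illustration of the “layers” idea (again our own sketch, not drawn from the book), the toy network below uses NumPy with randomly initialised, untrained weights: each layer transforms the previous layer's output into a somewhat more abstract representation before a final value is produced.

# Illustrative sketch only: a forward pass through a tiny 4 -> 8 -> 8 -> 1 network.
import numpy as np

def relu(x):
    return np.maximum(0, x)  # simple non-linearity applied between layers

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))  # one input example with 4 features

# Randomly initialised (untrained) weights, for illustration only.
w1 = rng.normal(size=(4, 8))
w2 = rng.normal(size=(8, 8))
w3 = rng.normal(size=(8, 1))

h1 = relu(x @ w1)    # first hidden layer: a first level of abstraction
h2 = relu(h1 @ w2)   # second hidden layer: builds on the first layer's output
output = h2 @ w3     # final layer: produces the network's raw prediction
print(output)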

Is China’s Social Credit Score system about to usher in an irreversible Orwellian nightmare there? How likely is it to spread to other dictatorships?

The social credit system that the Chinese government is in the process of unleashing is creating an Orwellian nightmare for some of China’s citizens. We say “some” because many Chinese citizens do not necessarily realize that it is being rolled out. First, the government has been gradually implementing versions of what has become the social credit system over a period of years without calling it that. Second, most Chinese citizens have become numb to the intrusive nature of the Chinese state. They have been poked and prodded in various forms for so long that they have become accustomed to, and somewhat accepting of, it. That said, the social credit system has real consequences for those who fall afoul of it; they will soon learn about those consequences, if they have not already.

As we note in the book, the Chinese government has shared elements of its social credit system technology with a range of states across the world. There is every reason to believe that authoritarian governments will wish to adopt the technology and use it for their own purposes. Some have already done so.

How can we stop consumer drones from being used to aid in blackmail, burglary, assassination, and terrorist attacks?

As Daniel notes in his book Virtual Terror, governments are having a difficult time keeping track of the tens of millions of drones in operation in societies around the world. Registering them is largely voluntary, and there are too few regulations in place governing their use. Given this, there is little that can be done, at this juncture, to prevent them from being used for nefarious purposes. Moreover, drones’ use on the battlefield is transforming the way individual battles are fought and wars are waged. We have a chapter in the book devoted to this subject.

Google, YouTube, Twitter and Facebook have been caught throttling/ending traffic to many progressive (TeleSur, TJ Kirk) and conservative (InfoWars, PragerU) websites and channels. Should search engines and social media platforms be regulated as public utilities, to lend 1st Amendment protections to the users of these American companies?

The current battles being waged, in the courts, in legislatures, and on the battlefield of social media itself, are already indicative of how many of the unanswered questions associated with the rise of social media are being addressed out of necessity. It seems that no one, least of all the social media firms, wants to assume responsibility when things go wrong or uncomfortable questions must be answered. Courts and legislatures will ultimately have to find a middle-ground response to issues such as First Amendment protections, but this will likely remain a moving target for some time to come: there is no single black-or-white answer, and, as each new law comes into effect, its ramifications will become known, which means the laws will undoubtedly need to be modified in turn.

Do you think blockchain will eventually lead to a golden era of fiscal transparency?

This is hard to say. On one hand, the rise of cryptocurrencies brought with it the promise of money outside the control of governments and large corporations. However, cryptocurrencies have been subject to a number of high-profile heists, and there are still fundamental issues with them, such as Bitcoin’s throughput, which is limited to only a handful of transactions per second. This makes some cryptocurrencies less viable for real-world transactions and everyday commerce.

The financial services industry has jumped on the blockchain bandwagon, but they have taken the open concept of some cryptocurrencies and reinvented it as distributed ledger technology (DLT). To be part of DLTs created by financial institutions, a joining member must be a financial institution. For this reason, the notion of transparency is not relevant, since the DLT will be controlled by a limited number of members and only they will determine what information is public and what is not.

The other issue with the crypto space right now is that it is filled with fraud. At the end of the day, crypto is an asset class like gold or any other precious metal. It does not actually produce anything; the only real value it has is the willingness of another person to pay more for it in the future. It is possible that a few cryptocurrencies will survive long term and become somewhat viable, but the evolution of blockchain will likely continue to move towards DLT that more people will trust. Also, governments are likely to issue their own cryptocurrencies in the future, which will bring the technology into the mainstream.

Taiwan has recently started using online debate forums to help draft legislation, in a form of direct democracy. Kenya just announced that they will post presidential election results on a blockchain. How can AI and blockchain enhance democracy?

Online debate forums are obviously a good thing, because having the average person engage in political debate and being able to record and aggregate voting results will create an opportunity for more transparency. The challenge becomes how to verify the identities of the people submitting their feedback. Could an AI program be designed to submit feedback millions of times to give a false representation of the public’s concerns?

Estonia has long been revered as the world’s most advanced digital society, but researchers have pointed out serious security flaws in its electronic voting system, which could be manipulated to influence election outcomes. AI can help by putting controls in place to verify that the person providing feedback on legislation is a citizen. Online forums could, for example, require users to take a picture of their face next to their passport so that facial recognition algorithms can verify their identity.

Should an international statute be passed banning scientists from installing emotions, especially pain and fear, into AI?

Perhaps, for now at least, the question should be whether scientists should design robots or other forms of AI to imitate human emotions in the first place. The short answer is that it depends. On one hand, AI imitating human emotions could be a good thing, such as when caring for the elderly or teaching a complex concept to a student. However, a risk is that when AI can imitate human emotions very well, people may come to believe they have gained a true friend who understands them. It is somewhat paradoxical that the rise of social media has connected more of us than ever, yet some people still admit that they lack meaningful relationships with others.

You don’t talk much about India in your book. How far behind are they in the AI race, compared to China, the US & EU?

Surprisingly, many of the world’s countries have only adopted a formal AI strategy in the last year. India is one of them; it only formally adopted an AI strategy in 2018 and lags well behind China, the EU, the US, and a variety of other countries. India has tremendous potential to meaningfully enter the race for AI supremacy and become a viable contender, but it still lacks a military AI strategy. India already contributes to advanced AI-oriented technology through its thriving software, engineering, and consulting sectors. Once it ramps up a national strategy, it should quickly become a leader in the AI arena, to the extent that it devotes sufficient resources to that strategy and swiftly and effectively implements it. That is not a guaranteed outcome, based on the country’s history with some prior national initiatives. We must wait and see if India lives up to its potential in this arena.

On page 58 you write, “Higher-paying jobs requiring creativity and problem-solving skills, often assisted by computers, have proliferated… Demand has increased for lower skilled restaurant workers, janitors, home health aides, and others providing services that cannot be automated.” How will we be able to stop this kind of income inequality?

In all likelihood, the rise of AI will, at least temporarily, increase the schism between highly paid white-collar jobs and lower paid blue-collar jobs. At the same time, however, AI will, over decades, dramatically alter the jobs landscape. Entire industries will be transformed to become more efficient and cost effective. In some cases this will result in a loss of jobs, while in others it will result in job creation. What history has shown is that, even in the face of transformational change, the job market has a way of self-correcting; overall levels of employment tend to stay more or less the same. We have no doubt that this will prove to be the case in the AI-driven era. While income inequality will remain a persistent threat, our expectation is that, two decades from now, it will be no worse than it is right now.

AI systems like COMPAS and PredPol have been exposed for being racially biased. During YouTube’s “Adpocalypse”, many news and opinion videos got demonetized by algorithms indiscriminately targeting keywords like “war” and “racism”. How can scientists and executives prevent their biases from influencing their AI?

This will be an ongoing debate. Facebook removed a PragerU video in which a woman described the need for strong men in society and the problem with feminizing them. Ultimately, Facebook said it was a mistake and put the video back up. So the question becomes: who decides what constitutes “racist” or “hate speech” content? Legal issues seem to emerge if it can be argued that the content being communicated is calling on people to act in a violent way.

Could the political preferences of a social media company’s executives override the ability of ordinary people to make up their own minds? On the other hand, India has seen a string of mob killings stemming from disinformation campaigns on WhatsApp, largely involving first-time smartphone users. Companies could argue that some people are not able to distinguish between real and fake videos, so content must be censored in such cases.

Ultimately, executives and scientists will need to have an open and ongoing debate about content censorship. Companies must devise a set of principles and adhere to them to the best of their ability. As AI becomes more prevalent in monitoring and censoring online content, there will have to be more transparency about the process, and the algorithms will need to be adjusted following review by the companies. In other words, companies cannot prevent algorithmic biases, but they can monitor them and be transparent with the public about the steps being taken to improve them over time.

Amper is an AI music composer. Heliograf has written about 1000 news blurbs for WaPo. E-sports and e-bands are starting to sell out stadiums. Are there any human careers that you see as being automation-proof?

In theory, nearly any cognitive or physical task can be automated. We do not believe that people should be too worried, at least for the time being, about the implications of doing so, because the costs of automating even basic tasks to the level of human performance are extremely high, and we are a long way from being technically capable of automating most tasks. However, AI should spark conversations about how we want to structure our society in the future and what it means to be human, because AI will improve over time and become more dominant in the economy.

In Chapter 1 you briefly mention digital amnesia (outsourcing the responsibility of memorizing stuff to one’s devices). How else do you anticipate consumer devices will change us psychologically in the next few decades?

We could see a spike in schizophrenia because of the immersive nature of virtual, augmented, and mixed reality, which will increasingly blur the lines between reality and fantasy. In the 1960s there was a surge of interest in mind-expanding drugs such as psychedelics, but someone ingesting LSD knew there was a time limit associated with the effects of the drug. These technologies do not end. Slowly, the real world could become less appealing, and less real, for heavy users of extended reality technology. This could affect relationships with other humans and increase the prevalence of mental illness. Also, as discussed in the book, we are already seeing people who cannot deal with risk in the real world. There have been several cases of animal maulings, cliff falls, and car crashes among individuals in search of the perfect “selfie”. This tendency to want to perfect our digital personas should be a topic of debate in schools and at the dinner table.

Ready Player One is the most recent sci-fi film positing the gradual elimination of corporeal existence through Virtual Reality. What do you think of the transcension hypothesis on Fermi’s paradox?

The idea that our consciousness can exist independently of our bodies has recurred throughout human history, yet it appears that consciousness is a product of our living bodies. No one knows whether a person’s consciousness can exist after the body dies, although some have suggested that the brain still functions for a few minutes afterward. It seems we need to worry about the impact of virtual reality on our physical bodies before it will be possible for us to transcend those bodies and exist on a digital plane. This is a great thought experiment, but there is not enough evidence to suggest that this is even remotely possible in the future.

What role will AI play in climate change?

AI will become an indispensable tool for helping to predict the impacts of climate change in the future. The field of “Climate Informatics” is already blossoming, harnessing AI to fundamentally transform weather forecasting (including the prediction of extreme events) and to improve our understanding of the effects of climate change. Much more thought and research needs to be devoted to exploring the linkages between the technology revolution and other important global trends, including demographic changes such as ageing and migration, climate change, and sustainable development, but AI should make a real difference in enhancing our general understanding of the impacts of these, and other, phenomena going forward.

A version of this article was published at International Policy Digest.

Russell A. Whitehouse

Russell A. Whitehouse is a freelance social media consultant, photographer and global policy essayist for sites like Eurasia Review, International Policy Digest, and Modern Diplomacy.
