
A Brief Exploration of the Looming “Trough of Disillusionment” of Artificial Intelligence

By: Alexis Campestre – March 14, 2020

There have been four confirmed deaths attributed to Tesla's Autopilot system in its vehicles, yet TSLA stock continues to skyrocket. PricewaterhouseCoopers estimates that "artificial intelligence technologies could increase global GDP by $15.7 trillion, a full 14%, by 2030."[1] From online shopping to credit reports, it is almost impossible not to feel the breath of AI on our very necks. A BusinessInsider.com article, '59 impressive things artificial intelligence can do today', reports a litany of ways in which AI is impacting society, from health care to finance. The volume of both discussion and application of Artificial Intelligence, and more narrowly of Machine Learning and Deep Learning, has risen substantially in the last five years, as shown in Figure 1, Topics of Interest Over Time.

Figure 1: Topics of Interest Over Time (source: Google Trends)

Yet, despite all of the hoopla, AI may be nearing a "trough of disillusionment" on the technology hype cycle, as reported in Gartner's 2019 "Gartner Hype Cycle for Emerging Technologies" (Figure 2). Gartner defines the trough as a period in which interest wanes as experiments and implementations fail to deliver, and investment in development subsides. Following the trough is the Slope of Enlightenment, a period of renewed growth, development, and optimism.[2] In this blog I propose what I suspect may be the proverbial wrench behind a looming trough of disillusionment for the advancement and application of artificial intelligence in the coming decade(s), and then offer some ideas for future exploration.

Figure 2: 2019 Gartner Technology Hype Cycle

As computer scientists, mathematicians, and engineers work to recreate the human mind in computers, they have hit a wall. What is the big hurdle science must overcome to go from comparatively 'simple' rule-based systems and pattern recognition (the current technology) to a higher plane of 'thought': conceptualization, imagination, and understanding? How do humans create systems that will "learn on their own through data experiencing as opposed to human programming"[3]? This question is itself perhaps as complex as any potential solution conceived or offered in this blog. Of course, there is just as much debate regarding the morality of even attempting such a feat. Is it humanity's place to create 'life' in unnatural ways? Would the creation of such a being be recognizable to us? The morality question is an important one, but not the focus of this blog.

I draw upon the ground-breaking research explored in Daniel Kahneman's book Thinking, Fast and Slow, in which he has the reader consider the human mind as two distinct judgment and decision-making centers, 'System 1' and 'System 2'. Matching the implications of Kahneman's research on how humans think and process decisions with efforts taking place at the edge of Artificial Intelligence and Deep Learning, I conclude that General Artificial Intelligence (True AI) may be impossible to achieve without a "Sense of Agency." I refer the reader to Table 1: Useful Vocabulary before continuing, as the terminology will be useful in conceptualizing the ideas presented below (normally this sort of table would be found in an appendix, but I believe the nuance of our humanity requires explicit definitions when attempting to convey ideas).

Table 1: Useful Vocabulary

General Artificial Intelligence: Capable of approaching the world as a human does. Can understand, reason, grasp problems, and imagine. Has agency and recognizes other agents as subjective beings (persons) to be interacted with. This is referred to in psychology as the "Theory of Mind"[4] and, I suggest, is the essence of System 2 thinking.

Narrow AI (Weak AI): A computer or algorithm that takes input "A" and delivers output "B", limited by the data provided. Good at solving tasks previously encountered, but requires substantial and repetitive data. Has no ability to understand why, no Sense of Agency, and no understanding of self or self-perception.

Machine Learning: Falls under Weak AI. Uses algorithms to train a statistical model to perform a specific task without explicit ongoing instruction from an external user. Relies on pattern recognition and inference. Requires MASSIVE amounts of structured data; training a machine to recognize a dog in a photo is similar to educating a human toddler, via reprimand and reward (see Figure 3). A key difference is the amount of data required: a toddler can typically identify an image of a dog after 2-3 examples, whereas an algorithm requires tens of thousands of pictures to accomplish the same simple task.

Deep Learning: Falls under Weak AI. A subset of machine learning that involves layers of algorithms, often organized in a 'neural network', that provide their own interpretation of inputs to identify components of a whole (see the sketch following this table). It tries to mimic the neurons of the human mind and does not require structured data. Continuing with the dog photo example, a deep learning system would identify characteristic components, like curved eyes, a fuzzy nose, and a wagging tail, and compile them into a probabilistic whole: probably a dog.

System 1: The thinking and decision-making process in the human mind that relies on intuition and trained skill. Effortless and automatic; some refer to this as our survival system, which allows us to anticipate the future.

System 2: Where one's perception of "self" is created. Described as slow thinking that requires work and is mentally exhausting. Works in tandem with System 1, affirming or disaffirming the conclusions made by System 1; constructs a coherent story of the past.[5]

Sense of Agency (SoA): The feeling of controlling one's own actions and, through these actions, events in the external world. Sense of Agency can be measured in experimental settings by asking participants to explicitly judge whether an action caused an outcome event, or by using implicit measures, such as the compression of perceived time between action and outcome.[6]
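
To make the Machine Learning and Deep Learning entries concrete, below is a minimal, purely illustrative sketch (my own, not drawn from any cited source) of a tiny two-layer neural network that learns XOR from examples alone. No rule for XOR is ever programmed; the hidden layer composes intermediate features into a whole, in miniature, the way the dog example above describes.

```python
import numpy as np

# Illustrative sketch: a two-layer network learns XOR purely from data.
# The hidden layer builds intermediate features; the output layer
# combines them into a probabilistic answer -- pattern recognition,
# not explicit rules.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR labels

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)    # hidden layer weights
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)    # output layer weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    h = sigmoid(X @ W1 + b1)                 # hidden features
    p = sigmoid(h @ W2 + b2)                 # predicted probabilities
    grad_p = (p - y) * p * (1 - p)           # backprop: output layer
    grad_h = (grad_p @ W2.T) * h * (1 - h)   # backprop: hidden layer
    W2 -= 0.5 * h.T @ grad_p; b2 -= 0.5 * grad_p.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_h; b1 -= 0.5 * grad_h.sum(axis=0)

print(np.round(p, 2))  # should approach [[0], [1], [1], [0]]
```

Notice that even this toy network needs thousands of passes over four examples: a hint, in miniature, of the data appetite discussed below.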

Unlike an algorithm, a reader reviewing the definitions in Table 1: Useful Vocabulary will likely, almost effortlessly, 'jump to conclusions' and 'grasp at straws' about how the terminology might relate to the concepts discussed below. This is the reader's System 1 thinking at work: quickly, and seemingly without effort or thought, it develops a coherent story about what comes next; it anticipates. This is precisely where Narrow AI excels.

Immediately after reading the last paragraph, your System 2 likely kicked into high gear and set about affirming or disaffirming whatever conclusions or coherent story (or lack thereof) System 1 had instantly created. This back and forth between the two systems is the function of understanding and perception that defines us as human beings and separates us from Narrow AI (aside from the gooey stuff).

Figure 3, What Machine Learning Can Do, identifies a number of impressive tasks undertaken by current AI technology. Yet on the Lex Fridman podcast that aired January 14, 2020, Kahneman claims that current applications and most research in machine learning and deep learning are more akin to a "System 1 product than a System 2 product."[7] Stated another way, today's AI applications are limited: they have no understanding of causality, their interactions carry no meaning, and they employ no Sense of Agency or perception of self.


Figure 3: What Machine Learning Can Do[8]

The trough of disillusionment facing AI lies in what it cannot do; AI is still pretty dumb. To understand what I mean by "AI is dumb," contemplate the difficulty a machine has learning to pick up an object, a task trivial for a human (Moravec's Paradox), and consider the total amount of information it takes to recognize an object such as an apple: a human toddler needs between one and three examples, whereas an algorithm requires thousands of iterations. Although the algorithms and hardware behind tasks such as those in Figure 3 have undergone some mind-blowing developments in the last decade, only a small percentage of research has focused on overcoming the ultimate hurdle: forging the transition from Weak AI to General Artificial Intelligence.
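
The gap can be made visible with a quick, hedged experiment of my own (not from any cited source): train the same simple model on progressively larger slices of scikit-learn's bundled handwritten-digits dataset and watch accuracy climb with example count, a luxury a toddler does not seem to need.

```python
# Illustrative learning-curve sketch: accuracy as a function of how many
# training examples the model sees. Sample sizes are arbitrary choices.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

for n in (10, 50, 200, 800):
    model = LogisticRegression(max_iter=2000).fit(X_tr[:n], y_tr[:n])
    acc = accuracy_score(y_te, model.predict(X_te))
    print(f"{n:4d} training examples -> test accuracy {acc:.2f}")
```

Exact numbers will vary with the split, but the pattern holds: performance comes from volume, not understanding.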


The "Learnability-Gödel Paradox" suggests that some problems may be effectively unsolvable by AI.[9] To summarize the paradox, which is founded on Gödel's incompleteness theorems (1931), consider the following sentence: "This sentence is false." That sentence can be neither consistently true nor consistently false, and it cannot be represented mathematically (the Liar's Paradox). The problem, as it relates to the advancement of AI, is explained somewhat succinctly in the Nature article "Machine Learning Leads Mathematicians to Unsolvable Problem":

A 1940 result by Gödel (which was completed in the 1960s by US mathematician Paul Cohen) showed that the continuum hypothesis cannot be proved either true or false starting from the standard axioms — the statements taken to be true — of the theory of sets, which are commonly considered to be the foundation for all of mathematics. Gödel and Cohen’s work on the continuum hypothesis implies that there can exist parallel mathematical universes that are both compatible with standard mathematics — one in which the continuum hypothesis is added to the standard axioms and therefore declared to be true, and another in which it is declared false.

In other words, this type of question cannot be resolved by a Narrow AI built on existing number theory, set theory, and computational learning theory, and every Narrow AI is built on exactly those foundations.
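
The Liar's Paradox can even be 'executed.' A truth value L for the sentence would have to equal its own negation; a brute-force check (a sketch of mine, not from the Nature article) confirms that no Boolean value works:

```python
# "This sentence is false": any truth value L must satisfy L == (not L).
for L in (True, False):
    print(f"L = {L}: consistent? {L == (not L)}")
# Both lines print False: neither assignment is consistent, so the
# sentence escapes the true/false logic that formal systems -- and the
# Narrow AI built on them -- rely on.
```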

Conclusion:

Artificial Intelligence, or at least certain aspects of it, has been integrated into almost every element of our daily lives. Increasingly, humans entrust personal security, health, and basic everyday choices, such as how best to avoid traffic congestion, to an algorithm. Even important life decisions are increasingly turned over to machines that have "learned" and pre-determined the best decision(s) for us. Narrow AI has proven effective at what we might describe as System 1 thinking and has provided a multitude of benefits, and some negatives, to society. However, it continues to fail utterly at achieving true sentience: Artificial General Intelligence (AGI).

This failure to achieve AGI may be a primary contributing factor to the looming trough of disillusionment forecast in the Gartner Hype Cycle. I postulate that applying Kahneman's psychological model of mental processing (System 1 and System 2) may be the key to understanding and unlocking the secrets of AGI. Without determining how to replicate System 2 thinking in our Narrow AI, we may never bridge the gap from Narrow AI to Artificial General Intelligence.

One suggestion is to give a Narrow AI system a programmed date of death, or a fear of death, so as to instill a perception of self, or Sense of Agency. A researcher in quantum biology suggests that "evolution led to general intelligence in biological life forms, so that same process could also be used to develop computerized intelligent systems" (Skopec, 2017; 2018). I wonder whether Quantum Computing might make this possible: could the quirky nature of a qubit's superposition shed light on a new path toward AGI, allowing our algorithms to take in "This sentence is false" without short-circuiting? Finally, I feel we must ask ourselves what we are really hoping to achieve: a tool that will assist humanity in living a better quality of life, or, dare I say, a friend?
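
Whether superposition really offers an escape hatch for self-reference is pure speculation on my part, but the mathematical object itself is easy to write down. A minimal numpy sketch (mine, purely illustrative) of the superposition gestured at above:

```python
import numpy as np

# A Hadamard gate places a qubit in an equal superposition of |0> and |1>,
# a state that is, loosely speaking, 'both at once' until measured.
ket0 = np.array([1, 0], dtype=complex)                  # qubit starts in |0>
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
psi = H @ ket0                                          # (|0> + |1>) / sqrt(2)
print(np.abs(psi) ** 2)                                 # [0.5 0.5] measurement odds
```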

Works Cited

Castelvecchi, D. (2019). Machine learning leads mathematicians to unsolvable problem. Nature, 565, 277.

Kahneman, D. (2011). Thinking, fast and slow. Macmillan.

Haggard, P. (2017). Sense of agency in the human brain. Nature Reviews Neuroscience, 18, 196–207.

Ng, A. (2016). What artificial intelligence can and can't do right now. Harvard Business Review, 9.

Premack, D., & Woodruff, G. (1978). Does the chimpanzee have a theory of mind? Behavioral and Brain Sciences, 1(4), 515-526.

Skopec, R. (2018). Evolution continues with quantum biology and artificial intelligence. ARC Journal of Immunology and Vaccines, 3(2), 15-23.

West, D. M., & Allen, J. R. (2018). How artificial intelligence is transforming the world. Report, April 24, 2018.

Podcast:

Daniel Kahneman: Thinking Fast and Slow, Deep Learning, and AI. https://lexfridman.com/daniel-kahneman/


[1] West, D. M., & Allen, J. R. (2018). How artificial intelligence is transforming the world. Report, April 24, 2018.

[2] www.gartner.com/en/research/methodologies/gartner-hype-cycle

[3] https://emarsys.com/learn/blog/real-ai/

[4] Premack, D., & Woodruff, G. (1978). Does the chimpanzee have a theory of mind? Behavioral and Brain Sciences, 1(4), 515-526.

[5] Kahneman, D. (2011). Thinking, fast and slow. Macmillan.

[6] Haggard, P. (2017). Sense of agency in the human brain. Nat Rev Neuroscience 18, 196–207

[7] https://lexfridman.com/daniel-kahneman/

[8] https://hbr.org/2016/11/what-artificial-intelligence-can-and-cant-do-right-now

[9] Castelvecchi, D. (2019). Machine learning leads mathematicians to unsolvable problem. Nature, 565, 277.
