Let’s cut to the chase. In this article, we’ll try to demystify AI by going straight to the heart of the matter: artificial intelligence itself. Taking a note from Euclid’s Elements, we’ll start with a few definitions.
- Artificial Intelligence: A thinking machine, or any device capable of cognitive tasks like reasoning, problem-solving, or creativity. For now, let’s treat this as a working definition because, as we’ll explore, terms like intelligence and thought tend to sit on shifting sands.
- Machine Learning: Within AI, machine learning (ML) is a family of technologies that use algorithms and data to detect patterns. These inductive, statistical models are responsible for many of the recent breakthroughs in AI.
- Neural Network: A specific type of ML technology, neural networks are loosely modeled on the workings of the human brain and represent some of the most advanced AI systems currently at our disposal.
Like Russian nesting dolls, neural networks sit inside ML, which in turn sits inside AI.
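To make the “algorithms and data” idea concrete, here is a minimal sketch (our own illustration, not from any particular library) of the innermost nesting doll: a single artificial neuron that learns the logical OR pattern from examples by repeatedly nudging its weights to reduce its error.

```python
import math

# A single artificial "neuron": a weighted sum of inputs passed through
# a squashing (sigmoid) function, loosely analogous to a biological neuron.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def predict(weights, bias, inputs):
    total = sum(w * x for w, x in zip(weights, inputs))
    return sigmoid(total + bias)

# Toy dataset: examples of the logical OR pattern the neuron should learn.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

weights, bias, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(5000):                      # repeated exposure to the data
    for inputs, target in data:
        out = predict(weights, bias, inputs)
        error = out - target               # how wrong was the guess?
        for i in range(len(weights)):      # nudge each weight to shrink the error
            weights[i] -= lr * error * inputs[i]
        bias -= lr * error

for inputs, target in data:
    print(inputs, round(predict(weights, bias, inputs)))
```

No rule for OR was ever programmed in; the pattern is induced purely from data, which is the essence of the statistical, inductive models described above. Real neural networks stack thousands or millions of such neurons in layers, but the learning principle is the same.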
Now, the main problem that we face when we attempt to pin a definition onto AI is intelligence itself. How do we define intelligence? What does it mean to be intelligent? One path forward is to look at a definition from Merriam-Webster: “the ability to apply knowledge to manipulate one’s environment or to think abstractly as measured by objective criteria (such as tests).”
The idea of testing a machine’s capacity for thought has been around since Alan Turing, the father of computer science, proposed it in 1950. The Turing Test engages the machine in what he called the “Imitation Game”: a person interrogates another person and a machine without knowing which is which. If, after five minutes of questioning, the machine can fool the interrogator into believing it is the human at least 30% of the time, then that’s reason to believe it’s a thinking machine.
Of course, we can find fault with the Turing Test, and researchers have proposed other ways to gauge a system’s intelligence. These include the “coffee test,” in which a machine must enter an average American home and figure out how to make coffee (finding the coffee, turning on the machine, and so on); the “robot college student test,” in which a machine enrolls in a university, takes classes, and earns a degree; and lastly the “employment test,” in which a machine must be able to perform the various tasks involved in economically important jobs.
Despite their individual shortcomings, these tests do provide useful yardsticks for defining intelligence. One fact they make obvious is that our AI technology is still a far cry from actual intelligence. While AI is already transforming our lives and is expected to grow into a $190.6B industry by 2025, our most advanced systems remain narrow in application. Technologies like neural networks excel at specific tasks, but ask AlphaGo to butter toast instead of vanquishing world Go champions, and it won’t stand a chance.
AI’s Purposes: Right Here and Right Now
Instead of splitting hairs over what it means to be intelligent or pondering a future of artificial superintelligence, let’s turn our gaze to the present moment. Instead of trying to define AI or even intelligence itself, we should think about the ways that this tech is already affecting our world. The following are three ways that we’re seeing AI serve our society.
The First Purpose: Automation
Using machines to automate dull, dirty, and dangerous tasks is nothing new. From self-driving cars to chatbots to Industry 4.0, we’re just beginning to unlock AI’s automation potential. A McKinsey & Company insight report explains “that currently demonstrated technologies could automate 45 percent of the activities people are paid to perform and that about 60 percent of all occupations could see 30 percent or more of their constituent activities automated.” Specifically, they point to data collection and processing, which both “have a technical potential for automation exceeding 60 percent.”
As leading AI researcher Andrew Ng writes for the Harvard Business Review, “If a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future.” The following chart shows some automation tasks that today’s ML is tackling.
Source: Andrew Ng, Harvard Business Review
Whether this transition will eliminate jobs and lead to rampant unemployment, or instead create even more high-value jobs, remains an open question. We believe that rather than replacing humans, machines will work alongside us to augment our capabilities. ATMs didn’t get rid of bank tellers; instead, the job changed: tellers now focus more on customer service and developing relationships rather than simply exchanging cash.
Nobody denies that change is hard and that transitions are often painful, but we see a more prosperous world that’s enriched by automation.
The Second Purpose: Understanding the Brain
Terminology like “neural networks” makes it easy to see that neuroscience and artificial intelligence share a history. Their symbiotic relationship creates opportunities for scientists in both disciplines. At their core, neuroscientists and AI researchers are motivated by similar goals: probing the significance of thought, understanding reasoning, and figuring out just what it means to be intelligent.
Neil Savage writes for Nature:
“It’s only natural that the two disciplines would fit together, says Maneesh Sahani, a theoretical neuroscientist and machine learning researcher at the Gatsby Computational Neuroscience Unit at University College London. ‘We’re effectively studying the same thing. In the one case, we’re asking how to solve this learning problem mathematically so it can be implemented efficiently in a machine. In the other case, we’re looking at the sole existing proof that it can be solved—which is the brain.’”
Take, for example, language processing. Dr. Tom Mitchell at Carnegie Mellon University is working on neural representations of language, simultaneously studying how the human brain reads and teaching a computer to read. He explains how the two systems lead to “cross-fertilization,” with each side providing new insights for the other to explore, and how his team uses ML to analyze brain scans more quickly, efficiently, and accurately than ever before.
One promising sign is that both neuroscience and AI research still have a long way to go. We’re nowhere near a comprehensive understanding of the brain’s functioning just as we aren’t close to AI that’s capable of transfer learning by applying previous knowledge to novel situations. The good news is that these two fields inform each other to hasten progress.
The Third Purpose: Accelerating Creativity
The third purpose is using AI as a tool to speed up our own creative processes. While creativity remains a firmly human characteristic, artists like Gene Kogan are already using AI to create new works, mathematicians are using it to prove new theorems, and Go players are incorporating a move made by AlphaGo that no human could understand.
Examples abound of AI establishing a foothold in our creative endeavors. Kulitta, music-composition software created by Yale’s Dr. Donya Quick, fooled some “music sophisticates” into thinking its pieces were composed by Bach. IBM’s Watson produced the first AI-created movie trailer, for the film Morgan, in 2016. Here we see the line between human creativity and machine-generated “creativity” beginning to blur.
Nonetheless, using AI as inspiration for art remains the most prevalent artistic use case. Machine learning systems can surface suggestions drawn from data the machine deems relevant to us, which in turn augments our creativity. Essentially, smart machines can accelerate creativity by serving as a kind of “creative arm” that feeds and inspires creation.
The Future of AI: What it Can and Cannot Do
In his book The Master Algorithm, prominent AI researcher Pedro Domingos offers the following hypothesis about the future of artificial intelligence: “All knowledge—past, present, and future—can be derived from data by a single, universal learning algorithm.” While he remains optimistic about the role of AI in our collective future and the ability of Moore’s Law to get us there by fueling exponential technological growth, his hypothesis hinges on one key constraint: data.
Andrew Ng calls this hunger for data and our (in)ability to provide it AI’s “Achilles’ heel.” Even if we could develop a learning algorithm that’s theoretically capable of learning anything and everything, we’d be hard-pressed to feed it enough data to reach that state.
A great deal of the public discussion around AI centers on science-fiction topics like Artificial General Intelligence (AGI), self-conscious robots, or a machine uprising in which our creations turn against us. To muddle the mixture even further, the scientific community remains divided: Bill Gates has voiced his concern, and Elon Musk has gone so far as to call AI “our biggest existential threat,” while names like Zuckerberg, Ng, and Domingos stress the ways AI can create peace and prosperity.
At the end of the day, much of this discussion comes down to what we believe about fundamental questions like “what is the composition of the universe?” and “what is the self?” Though we cannot speak in absolutes, we want to share our beliefs about the future of AI.
First, AI can, given a specific task and data, get better and better at doing one specific thing. At least in the near-term, we cannot expect any machines to become so intelligent that they suddenly start taking on new roles or “thinking for themselves.” Second, AI cannot exercise free will, as it can only make choices based on its programming.
Finally, as for the question of whether an AI can become conscious, we believe that today’s research does not lead us down that path. The only evidence of conscious “machines” we know of are humans and possibly other living beings, and we came into existence through the long and complex process of evolution. Does that imply that to create self-conscious machines, we will need to simulate evolution? In The Fourth Age, Byron Reese concludes that “conscious computers may just be something that goes in that small drawer of things that may be truly impossible, like traveling back in time.” Even if conscious machines prove possible eventually, they are far from certain and nowhere near the reach of today’s technology.
While we’re never sure what tomorrow may bring, we can say this much for certain: narrow AI is already here, and it’s having a huge impact on the world. For our purposes, automation is the primary concern. By automating a task as mundane as wading through millions of documents [link to previous article on unstructured data] to find sensitive data like needles in a haystack, AI systems save enterprises significant time and money and, more importantly, reduce risk. The human mind gets tired; it simply cannot maintain high accuracy on voluminous, repetitive tasks. In the race to achieve and do more, humans have been drifting away from being human, toward being machines. By taking over such mundane, repetitive work, AI frees us to become more human.
Yes, Text IQ cares about the theoretical implications of natural language processing (NLP) and other AI solutions. But we’re here to get a job done. Contact us to learn how.