I was doing a little traveling recently and, after arriving at the airport about 3 hours too early, I decided I needed to find a book. In the bookstore I landed on a bright salmon-colored cover with the title Prediction Machines: The Simple Economics of Artificial Intelligence stamped across the front. So, I grabbed it and found a place to settle in.
Props to the authors for bringing clarity to the AI “revolution”.
This book is the combined effort of renowned economists Ajay Agrawal, Joshua Gans, and Avi Goldfarb. The authors use this book as a vehicle - filled with real-world examples and connections to economics - to drive a more grounded perspective on the AI “revolution”.
Disclaimer: You don’t need to be a tech genius to read this book.
Fortunately, the ideas presented throughout the book are well-developed and aren’t bogged down in too much technical detail. So, if hearing “AI” freaks you out, I would encourage you to set your discomfort aside.
What’s addressed here isn’t some mystical fortune swirling around in a crystal ball - AI is here, it’s been here, and its impact will continue to increase worldwide.
Google, for example, “has more than a thousand AI tool development projects underway across every category of its business, from search to ads to maps to translation. Other tech giants worldwide have joined Google.” Also, “companies like Uber are using AI to develop autonomy with the hope of taking even the driving decisions out of consumers’ hands.”
If you’re curious about other areas where AI is already part of our lives today, check out the Forbes article: 10 Powerful Examples Of Artificial Intelligence In Use Today.
Again, I would encourage you to look at this book as a way to empower yourself with information and expand your perspective on the future.
Time to eliminate a few misconceptions… (opinion included).
There are lots of opinions flying around that seem to have very little ‘grounding’ information. This is why I want to go over three “quick” thoughts that might help dispel some of these fallacies.
1. Exponential human-like superintelligence.
Let’s break this down a little bit… First of all, many AI systems still operate on predefined rules and are essentially programmed using if-then logic.
Second, sure, humans can “teach” these systems to do a lot of things, but that still doesn’t mean they’re anywhere near being able to function independently or “think” for themselves.
Third, current AI algorithms cannot reason, provide their own interpretations, or form judgments. This is why many people won’t be impressed until machines can formulate their own “thoughts”.
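To make that first point concrete, here’s a minimal sketch of what predefined if-then logic looks like in practice. The scenario, function name, and rules are my own invention for illustration, not anything from the book - the point is just that every behavior is an explicit, human-written rule, with no “thinking” involved:

```python
# A toy rule-based "AI": every behavior is an explicit, hand-coded rule.
# The categories and keywords below are illustrative assumptions only.

def route_support_ticket(message: str) -> str:
    """Classify a support ticket using predefined if-then rules."""
    text = message.lower()
    if "refund" in text or "charge" in text:
        return "billing"
    elif "password" in text or "login" in text:
        return "account"
    elif "crash" in text or "error" in text:
        return "technical"
    else:
        return "general"  # no rule matched; fall back to a default

print(route_support_ticket("I was double charged last month"))   # billing
print(route_support_ticket("The app shows an error on startup")) # technical
```

The system never learns or reasons - if a message doesn’t match a rule someone wrote in advance, it simply falls through to the default.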
According to Steve McMaster, “I’ll be more impressed when we feed ALL of the data into a program of some kind and it makes its own conclusions. People are really good at seeing patterns in seemingly random data, I want to see someone make computers do that. Try making a computer deal with chaos and see what happens.” Yet, according to an article shared with me by Bill Mathews, there are thoughts that something like this is being done now: “Intelligent Decision-Making: An AI-Based Approach”. (What do you think?).
As the book points out, the current (and more realistic) enthusiasm surrounds the fact that machines can help reduce uncertainty through prediction. This idea still leaves the more meaningful “judgment” up to humans… basically, machines won’t be going rogue in the superintelligence department anytime soon.
2. It’s all happening at lightning speed!
Have there been some neat advances with machines? Yes. Is the predictive nature of machines shifting business strategies? Yes. Is AI going to increasingly impact decision-making across the globe? Sure.
(Side note: Due to all the complexities and dimensions involved in artificial intelligence, I think it’s unrealistic to look at advancement as if it’s going to be a singular, linear, standard rate of progress).
According to Prediction Machines, right now we’re still only seeing rudimentary “point” tools with AI. These products are being fed very specific sets of data and generating specific predictions or performing specific tasks.
“Today, AI tools predict the intention of speech (Amazon’s Echo), predict command text (Apple’s Siri), predict what you want to buy (Amazon’s recommendations), predict which links will connect you to the information you want to find (Google search), predict when to apply the brakes to avoid danger (Tesla’s Autopilot), and predict the news you will want to read (Facebook’s newsfeed). None of these AI tools are performing an entire workflow.”
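The “point tool” idea above can be illustrated with a toy example: one narrow model, fed one specific kind of data, producing one specific prediction. The data and logic here are invented for illustration (this is a caricature of a recommender, not how Amazon actually works):

```python
from collections import Counter

# Toy "point" prediction tool: given one narrow input (a purchase history),
# produce one narrow output (the category the user is most likely to buy
# from next). Nothing close to an entire workflow.
# The example data is invented for illustration.

def predict_next_category(history: list[str]) -> str:
    """Predict the most frequent past category as the likely next purchase."""
    counts = Counter(history)
    return counts.most_common(1)[0][0]

history = ["books", "electronics", "books", "books", "groceries"]
print(predict_next_category(history))  # books
```

Even a real production recommender, however sophisticated, is doing the same kind of thing at heart: a specific prediction from specific data, with humans still deciding what to do with it.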
After having a few conversations with coworkers and friends, the consensus seems to be that machines are like babies that will have to be nurtured and taught. Although we’re seeing certain dimensions of AI that are moving forward at a seemingly fast pace, this type of “teaching” is not going to be a speedy one.
3. The end of humanity as we know it.
Elon Musk, Stephen Hawking, and Bill Gates have certainly voiced concerns about the future of AI and the potential destruction of humanity. There are also other individuals, like renowned psychologist Daniel Kahneman, who believe AI will eventually become wiser than humans.
In his explanation, Kahneman proposes three main differences between computers and humans and why he believes computers will eventually be wiser than humans:
“One is obvious: the robot will be much better at statistical reasoning and less enamored with stories and narratives than people are. The other is that the robot would have much higher emotional intelligence. The third is that the robot would be wiser. Wisdom is breadth. Wisdom is not having too narrow a view. That is the essence of wisdom; it’s broad framing. A robot will be endowed with broad framing. I say that when it has learned enough, it will be wiser than we people because we do not have broad framing. We are narrow thinkers, we are noisy thinkers, and it is very easy to improve upon us. I do not think that there is very much that we do that computers will not eventually [learn] to do.”
I would imagine, however, that even if machines start to “think” for themselves and begin to supersede humans (on more levels than prediction), they might find some sort of human-machine combination appealing, instead of wiping out humanity completely. Would they really want to lose out on the phenomenal creativity, emotions, and other advantageous capabilities natural to humans? (Unless they end up viewing these elements as frivolous or idiotic and then decide the elimination of humanity is the best course of action… can't say I would necessarily blame them for that).
If you think about it, there are plenty of topics covered in older science fiction books that were once considered impossible and are now very much a part of our world today. So, it seems silly (and a little pretentious) to rule out the possibilities at this point.
Now that I’ve gone on my little AI tangent, let me sum things up.
There’s a lot of hype and misconception swirling around AI and other related topics. Books like Prediction Machines provide a level of focus and easy-to-understand information that cuts through the nonsense. I believe this book has value for a wide audience: business leaders making strategic decisions, policymakers shaping societal change, students thinking about how careers will evolve, and anyone else interested in the bigger picture of AI’s potential should grab a copy.
Thoughts and arguments welcome - feel free to tweet me at @unfoldmybrain. Thank you for reading!