A Discussion of an Explosion of Intelligence Explosion Discussions

A few opinion pieces about worries over the #IntelligenceExplosion or the Singularity have been floating around in the social media circles of AI/ML researchers recently, so I'm going to try to write a series of posts responding to them as I find the time.

The first one I came across is "The Impossibility of an Intelligence Explosion" by François Chollet. This is a great article. He puts quite simply some ideas I have had before and some I had not. He points out that something is missing from the very logical and enticing idea of a singularity, an intelligence explosion: in essence, it underestimates the importance of the tools we use to think.

This is a fascinating idea to think about: that the reason we can be so smart is that we offload some mental tasks to external tools. We think of this in a futuristic way, as a chip in your brain or a phone in your pocket, but it has always been there. François reminds us that language itself is such an intelligence-boosting tool.

Rules of Behaviour

As I made my way through the rush hour subway in Toronto last week, it struck me that etiquette and social rules of behaviour are another such tool. We can all move around in this crowded world only because we have rules that tell us to give people their space, not to talk to strangers unnecessarily, not to push or shove, and to let people exit the subway before entering. We'd get by without these rules, but the more of them there are, the more efficient the entire system of our society becomes. We are pieces of that machine, with our own goals, aspirations, and tasks, so it benefits us for the system to be successful and efficient.

Whose Aspirations Though?

Of course, another way to look at it is that our goals and aspirations are largely, or almost entirely, set by that society machine we live in. Or at least, the options for what goals and aspirations we can imagine achieving are bounded by that society. This is well known, but its implications for AI are not regularly discussed. Most AI researchers do not spend much time defining intelligence, because that isn't what our job or our goals are about. We are not trying to build an 'intelligent system' to maximize some external concept of intelligence. We are trying to build a system to automate driving a car under difficult conditions, to understand human language, to detect anomalies in an industrial process, or to make decision making easier for humans faced with massive amounts of data. AI research is driven by solving problems that are known to be very hard but somewhat solvable, since humans or other intelligent creatures solve them every day. We don't judge these systems by how 'intelligent' they are; we judge them by how well they solve particular problems.

How long ago is long ago? How fast is fast?

So apparently, the first iPhone came out 10 years ago today. That kind of hit me.

The first iPod came out in October 2001, 16 years ago.

The World Wide Web was made open to the public in 1991, 26 years ago, and it didn't become widespread for a while after that, let alone a commonly understood concept.

Does all of that seem like a time long, long ago and far away?

Or does it perhaps not seem so long ago at all?

Ten years isn't a very long time to completely change the way people communicate and spend their free moments. Twenty-six years isn't very long to change the entire way we interact with knowledge, information, and each other. If you're over 40, these are things you remember happening, preceded by a whole prior world in which those technologies could not even be conceived of. The world is indeed changing quickly, but some people have internalized that change as normal, while others feel constantly pushed back into their seats, as if on a plane, waiting for the acceleration to end and the cruising phase to begin.

So here's the truth: the acceleration phase isn't going to end any time soon.

For example, it might seem the peak of cell phone technology must surely be approaching, as each iteration of those supercomputers in our pockets seems more and more similar to the last. But the power underneath them, and the capabilities they can connect to across the internet, are only going to keep growing in power and complexity. That continuing acceleration is being driven by applications of the latest research in Artificial Intelligence and Machine Learning.

Many people are justifiably worried about employment in a world running at this pace. They feel like the world is moving along and leaving them behind. In any country there are many reasons for unemployment, but technological change is a large part of it, and going forward it will be an increasingly important part. So maybe people need to ponder what 'long ago' and 'fast' mean to them these days. Even more importantly, we all need to talk about this more deeply and start finding solutions that allow society to continue to grow and flourish, even if the nature of technology in our lives and the nature of employment itself changes in the coming years.

Here are some recent articles on the topic for further thought:

On the Importance of Being AlphaGo

Victory for #teammachine! Plus a very informative consolation prize for #teamhuman.

This is a response to a couple of articles (Beautiful AI and Go, Go and Weakness of Decision Trees) from my friend over at "SE,AG" about the recent match between DeepMind's AlphaGo and World Champion Go player Lee Sedol:
"Is the Horizon effect something you can just throw more machine learning at? Isn't this what humans do?"
That's what I was going to say in response, but you beat me to it.

Obviously, you could build deeper trees. But the reason these games, even chess, are hard to handle is that you can't build larger trees indefinitely. And would it really be satisfying if AlphaGo built ridiculously deep trees, peering into the future further than any human could, in order to defeat a human's uncanny ability to build compact trees using heuristics?

That was why the Deep Blue victory felt a little hollow: as far as I remember, it was mostly tree search with lots of tricks. But AlphaGo uses tree search, a database of amateur games it has watched, and Reinforcement Learning. This means it can really learn from each game it plays, even from playing itself (enter player 0). This is the most exciting thing for me about this victory: it's a powerful use of RL with deep learning to solve something no one thought would be possible for another decade or two using compute power alone.
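To make the self-play idea concrete, here is a toy sketch, far simpler than anything AlphaGo actually does: a tabular Q-learner that trains entirely by playing against itself on a trivial counting game. The game, the hyperparameters, and the negamax-style update are all my own illustrative choices, not anything from the AlphaGo paper.

```python
import random
from collections import defaultdict

# Toy game: players alternate adding 1 or 2 to a running total;
# whoever reaches exactly 10 wins.

random.seed(0)
Q = defaultdict(float)   # Q[(total, move)] from the current mover's perspective
alpha, eps = 0.1, 0.2    # learning rate, exploration rate

def moves(total):
    return [m for m in (1, 2) if total + m <= 10]

for episode in range(5000):
    total = 0
    while total < 10:
        acts = moves(total)
        a = random.choice(acts) if random.random() < eps else \
            max(acts, key=lambda m: Q[(total, m)])
        nxt = total + a
        if nxt == 10:
            target = 1.0  # the mover just won
        else:
            # The opponent moves next; their best outcome is our worst
            # (a negamax-style bootstrap), so one table can learn both
            # sides of the game purely from self-play.
            target = -max(Q[(nxt, m)] for m in moves(nxt))
        Q[(total, a)] += alpha * (target - Q[(total, a)])
        total = nxt

# After training, greedy play from 0 has discovered the winning first move.
best_first = max(moves(0), key=lambda m: Q[(0, m)])
```

With no opponent model and no hand-coded strategy, the table converges on the classic solution: take 1 first, leaving the opponent on a losing total.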

Also, on a more pedantic point, AlphaGo doesn't use Decision Trees, which are good for learning how to classify or predict data. Rather, it uses Monte Carlo Tree Search, an area of decision-making research that simulates different scenarios forward from the current policy to evaluate how good it is. In many algorithms these trees can be thrown away once the policy is updated to take the actual action forward. I think this is what goes on in people's heads when they play a game like chess or Go. But humans are very good at intuitively picking just the branches with the most promise.
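As a rough sketch of what MCTS looks like (much simplified from AlphaGo's version), here is the standard select/expand/simulate/backpropagate loop. The `Game` interface is a hypothetical stand-in of my own: it must provide `legal_moves`, `next_state`, `is_terminal`, and `reward`.

```python
import math
import random

class Node:
    def __init__(self, state, parent=None):
        self.state = state
        self.parent = parent
        self.children = {}   # move -> Node
        self.visits = 0
        self.value = 0.0     # total reward accumulated through this node

def ucb1(child, parent_visits, c=1.4):
    # Balance exploitation (average value) against exploration.
    if child.visits == 0:
        return float("inf")
    return child.value / child.visits + c * math.sqrt(
        math.log(parent_visits) / child.visits)

def mcts(game, root_state, n_simulations=1000):
    root = Node(root_state)
    for _ in range(n_simulations):
        node = root
        # 1. Selection: descend using UCB1 until we reach a leaf.
        while node.children and not game.is_terminal(node.state):
            node = max(node.children.values(),
                       key=lambda ch: ucb1(ch, node.visits))
        # 2. Expansion: add children for the unexplored moves.
        if not game.is_terminal(node.state):
            for move in game.legal_moves(node.state):
                node.children[move] = Node(game.next_state(node.state, move), node)
            node = random.choice(list(node.children.values()))
        # 3. Simulation: random rollout to the end of the game.
        state = node.state
        while not game.is_terminal(state):
            state = game.next_state(state, random.choice(game.legal_moves(state)))
        reward = game.reward(state)
        # 4. Backpropagation: update statistics up to the root.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Recommend the most-visited move; the tree itself can be thrown away.
    return max(root.children, key=lambda m: root.children[m].visits)
```

Note the purely random rollouts and uniform expansion here; that uninformed treatment of every branch is exactly the gap a learned intuition fills.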

I haven't looked into it in detail yet, but I imagine that part of the Deep Learning element AlphaGo uses is for building this kind of intuition, so that it doesn't treat all possible choices equally.
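That intuition does show up in AlphaGo's search: the policy network supplies a prior probability for each move that scales the exploration bonus in a PUCT-style selection rule, so promising branches get visited first. Here is a sketch of such a scoring function; the constant and names are my own, not the paper's exact formulation.

```python
import math

def puct_score(q, visits, parent_visits, prior, c=1.0):
    # q: average value of the move so far (exploitation term).
    # The bonus starts high for moves the policy favours (large prior)
    # and decays as the move accumulates visits.
    return q + c * prior * math.sqrt(parent_visits) / (1 + visits)
```

With equal value estimates, the move the policy network prefers gets searched first, which is how the learned intuition prunes the tree without hard-coded heuristics.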