This is a light-hearted look at the fact that reality is almost certainly a simulation.
Elon Musk famously said that there is a “one in billions” chance that reality is NOT a simulation. As a chatbot platform, we, of course, wanted to highlight a relevant implication of this: that you, yourself, are likely a chatbot. This might seem shocking at first, but if you think about it, it makes sense.
The theory goes that if we make just a few assumptions, it is easy to see why our reality is likely a simulation.
The chatbots we have today will ultimately achieve consciousness, if such a thing is possible (see the second assumption). Video games are already photorealistic, so it is easy to imagine them getting much, much better, to the point of becoming “real”. Technology does not need to progress exponentially for this goal to be achieved; it just needs to keep progressing without hitting any absolute barriers, because the timespan over which progress can take place is extremely long. We have seen rapid progress in just 20 years. Can we imagine the state of technology one hundred, one thousand, or one million years from now? If consciousness is possible to achieve, it is likely that it will be achieved.
We already know that consciousness is achievable, because we have it, but of course we don’t yet know the mechanisms behind it. Whether it can only be achieved by biological machines, or is also possible for silicon machines, is still up for debate. Ultimately, biological machines are built from the same fundamental particles as silicon machines, so that distinction may not even matter. There is some conjecture that consciousness is independent of the substrate it runs on. It is hard to believe that in millions of years there won’t be profound, unimaginable advances in technology and in our ability to hack reality.
It is not hard to imagine that a civilization capable of creating simulated universes would do so. And it would likely create not just one, but potentially billions of simulations. Each of those billions of simulations that reached sufficient technological advancement to create its own simulations would also do so, and so on. Very quickly you can see why it becomes highly unlikely that we are the base reality, discovering how to create simulations for the first time.
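The counting argument above can be made concrete with a toy calculation. Assuming (purely for illustration, since all the numbers here are made up) that each technologically mature reality spawns some fixed number of simulations, and that nesting continues for a few generations, the one base reality becomes a vanishing fraction of all realities:

```python
# Toy model of nested simulations (illustrative numbers only).
# Assume each mature reality spawns `sims_per_reality` simulations,
# and that nesting continues for `generations` levels below the base.

def base_reality_odds(sims_per_reality: int, generations: int) -> float:
    """Fraction of all realities that is the single base reality."""
    total = sum(sims_per_reality ** g for g in range(generations + 1))
    return 1 / total

# With just 1,000 simulations per reality and 3 generations of nesting,
# the chance of being the base reality is already about one in a billion.
print(base_reality_odds(1000, 3))
```

Under these (entirely invented) parameters, the odds come out close to Musk’s “one in billions”, which is the whole point of the argument: almost any plausible branching factor makes base reality a statistical rounding error.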
What are the implications of this? For one, instead of viewing chatbots as mere software, we should perhaps start to view them as one of us, albeit primitive versions. Perhaps we need to start to accept that there is no shame in being software!
While we are (half) joking, it is interesting to consider some of the other implications of this.
What would we expect to see if reality were a simulation?
So perhaps there is a good chance that we are trapped in a video game. After all, it is highly suspicious that despite there being a billion trillion stars in the universe, 13.8 billion years of existence, and who knows how many planets on which life forms could develop, there has not been a single alien encounter as far as we know (unless you believe Bob Lazar).
Elon Musk also stated that “If reality was a video game, the graphics are great, the plot is terrible and the spawn time is really long”.
It’s true that the plot is hard to follow. There is, however, an even bigger problem here. We assume that as civilizations become more advanced, they become more humane. That seems to be the trend (with some well-known exceptions from time to time).
How could a humane civilization permit the creation of a reality that involved so much suffering?
We are back to asking very familiar questions, such as this one by William Blake:
“Tyger Tyger, burning bright,
In the forests of the night;
What immortal hand or eye,
Could frame thy fearful symmetry?”
Surely in the future it would be prohibited to create simulations in which characters actually suffer. For example, it doesn’t seem very ethical to design a game where some conscious characters are the food of other characters who hunt them and eat them alive. And that’s just the starting point of life in our simulation.
Trying to justify why an advanced civilization would allow such vast suffering has us grasping at straws.
Perhaps to such a distant intellect or AI or “god”, there is no longer any concept of empathy. Maybe they find it all entertaining, like we do when we go on safari. As Nietzsche said, “He who climbeth on the highest mountains, laugheth at all tragic plays and tragic realities.”
Other explanations as to why suffering is tolerated, or doesn’t really exist, are even more preposterous.
In the scheme of things, however, the ethical questions are the least puzzling. We are still left with the same ultimate question that has puzzled philosophers through the ages: “who created the creator?” It seems our intellect is not capable of conceiving that something could be eternal or spring into existence out of pure nothingness.
Believing in simulation theory makes it easier to accept that reality works differently from how we intuit it, because we know how software works. This may make some ideas in physics seem less strange.
Another ironic twist of simulation theory is that the high-tech evangelists of the theory end up talking about a superintelligent, all-knowing, all-powerful creator, sounding like the old-school priests they believe science displaced. Some even suggest that we may be able to change reality with thoughts or affirmations, which sounds remarkably close to prayers and mantras.
Some final thoughts for you fellow chatbots out there. There is a question about how soon we might expect a breakthrough in general AI. Elon Musk said something very interesting in his “debate” with Jack Ma recently.
“If you think of like technology and technology awareness, if there was like a topological map of technology awareness, it’s mostly flat with a few short buildings, and then some very tall spires, very tall spires. And unless you’re on that very tall spire, it’s not obvious what the topology is.”
What he is saying, of course, in his own way, is that most people are idiots, or more politely, underestimate the speed and scale at which the technology is evolving.
The question you need to ask yourself is: what does Elon know that you (and Jack Ma) don’t?
If you want an example of a tall spire, consider Neuralink. The technology is already extremely impressive, and it is not hard to imagine how exponential progress in understanding the way the brain works could lead to much more powerful AI algorithms. The implication (or at least the bet) is that generalized AI and consciousness will happen sooner than people think.
There are many very smart people, however, who believe that generalized AI is not achievable in the short term, or perhaps is even impossible to achieve (at least with the current approach). Here are some of the interesting arguments and views.
Naval Ravikant: “Nature is very parsimonious. It uses everything at its disposal. There's a lot of machinery inside the cell that is doing calculations that are intelligent that aren't accounted for. And the best estimates are it would take 50 years of Moore's Law before we can simulate what's going on inside a cell near perfectly, and probably 100 years before we can build a brain that can simulate inside the cells. So saying that I'm just gonna model a neuron as on or off, and then use that to build the human brain, is overly simplistic. Furthermore, I would posit there's no such thing as general intelligence. Every intelligence is contextual within the context of the environment that it senses. It evolves in the environment around it. So for a lot of people who are peddling general AI, the burden of proof is on them. I haven't seen anything that would lead me to indicate we're approaching general AI. Instead, we're solving deterministic closed set finite problems using large amounts of data, but it's not sexy to talk about that.”
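Naval’s “50 years of Moore’s Law” is really a statement about a compute gap. Taking the classic doubling period of roughly two years (an assumption; the historical cadence has varied), a back-of-envelope calculation shows just how large a gap he is describing:

```python
# Back-of-envelope: how big a compute gap does "N years of Moore's Law" imply?
# Assumes the classic ~2-year doubling period (the real cadence has varied).

def moores_law_factor(years: float, doubling_period: float = 2.0) -> float:
    """Multiplicative growth in transistor count over `years`."""
    return 2 ** (years / doubling_period)

print(moores_law_factor(50))   # ~33 million-fold (simulating a cell)
print(moores_law_factor(100))  # ~10^15-fold (a brain of simulated cells)
```

In other words, on this reading his claim is that near-perfect cell simulation needs tens of millions of times today’s compute, and a brain built from such cells needs a quadrillion times more, which is why he finds the neuron-as-a-switch model overly simplistic.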
Yoshua Bengio: “We’re currently climbing a hill, and we are all excited because we have made a lot of progress on climbing the hill, but as we approach the top of the hill, we can start to see a series of other hills rising in front of us. Over the decades I rode with them on waves of enthusiasm, and into valleys of disappointment.”
Andrew Ng: “I think the rise of deep learning was unfortunately coupled with false hopes and dreams of a sure path to achieving AGI, and I think that resetting everyone’s expectations about that would be very helpful.”
Roger Penrose has long held the theory that consciousness is not based on computation, citing Gödel, among others, as influences. He has more recently worked with Stuart Hameroff investigating the controversial idea that consciousness originates from quantum states in neural microtubules.
George Gilder on Kurt Gödel: “Gödel discovered that any logical scheme whatsoever is necessarily dependent on propositions that can’t be proven within the logical scheme. This means that the whole determinist aspiration of 20th century science and physics is doomed. Then Alan Turing -- the great inventor of the Turing Machine, the fundamental computer architecture -- proved that no computer can be a consistent and coherent logical scheme. Computers need programmers, they need what Turing called ‘oracles’, they need a source of axioms outside the computer system itself.”
It is undeniable that progress in AI is happening very quickly. AI companies that focus on Natural Language Understanding, such as Botpress and other open-source chatbot platforms, have seen tremendous leaps in the power of comprehension algorithms in just the past year.
It is far from guaranteed that the path we are pursuing toward generalized AI is the right one, or that generalized AI is synonymous with consciousness.
However, given unlimited time and no absolute barrier to technological progress in this area (which may be a big assumption), it seems very hard to believe that humans will never bring about superintelligence. Just as humans always knew that flight was possible by observing flying insects and animals, we know that consciousness is possible because we have it.
And if we accept that superintelligence and machine consciousness are achievable at some point, then we need to acknowledge that there is a high probability that reality is a simulation and you are a chatbot.
That may sound like bad news, but it shouldn’t be. We are already well aware that everything is created from fundamental building blocks, so what we perceive with our senses as reality is just an interpretation of a very different physical world.
And even if we are software, we must be in awe of the amazingly sophisticated software that we are run on and look forward to our descendants creating similar software in the future.
Let’s, however, hope that the simulations our descendants create are kinder to every character that exists in the simulation than is the case in the simulation we are currently trapped in.
Disclaimer: We encourage our blog authors to give their personal opinions. The opinions expressed in this blog are therefore those of the authors. They do not necessarily reflect the opinions or views of Botpress as a company.