Nick Bostrom’s essay “Are You Living in a Computer Simulation?” introduces the intriguing, if mind-boggling, concept of superintelligence through the simulation argument, which concerns post-human civilizations. His theory is laid out in seven sections, each building on the idea of post-humans running ancestor simulations and on the possibility that we ourselves are part of one of these simulations.
As a brief recap, the general premise of Bostrom’s theory is that humans have made tremendous strides in technology, with each generation building on an ever-expanding technological infrastructure. This carries the humbling implication that the technology we have today is limited and will look simple and basic compared to what our posterity will have in their lifetimes. Future civilizations, then, should eventually have enough technological capability to run ancestor simulations. From this, Bostrom argues that at least one of three propositions must be true: nearly all civilizations at our level of development go extinct before reaching a post-human stage; post-human civilizations have essentially no interest in running ancestor simulations; or we are almost certainly living in a computer simulation.
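If I recall the paper correctly, Bostrom condenses this trilemma into a single expression for the fraction of human-type observers who live in simulations, where $f_P$ is the fraction of human-level civilizations that reach a post-human stage, $\bar{N}$ is the average number of ancestor simulations run by such a civilization, and $H$ is the average number of individuals who lived in a civilization before it became post-human:

$$
f_{\text{sim}} = \frac{f_P \,\bar{N} H}{f_P \,\bar{N} H + H} = \frac{f_P \,\bar{N}}{f_P \,\bar{N} + 1}
$$

Unless the product $f_P \bar{N}$ is very small, $f_{\text{sim}}$ is close to 1, which is why rejecting the third proposition forces one of the first two to be true.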
Upon an initial reading of Bostrom’s work, one may see where he is coming from and perhaps even agree with his theory. Upon deeper analysis, however, more questions arise, and such was my experience while reading his piece.
I somewhat disagree with Bostrom’s idea that we are living in a simulation. First and foremost, Bostrom never formally defines what exactly he means by “simulation.” A simulation could be almost anything, and by not specifying what he intends it to be, Bostrom leaves his theory feeling vague and confusing.
Another, admittedly minor, critique is that Bostrom assumes too much. He seems to imply that if people are in fact being simulated, then they are unaware of it, meaning the simulators do not tell their subjects what is happening. By the protocols of modern science, experimenting on subjects without their knowledge or consent is unethical, which leads to the assumption that these super-humans of the future are themselves unethical people.
Building on the ethics issue, it seems illogical to me that a civilization advanced enough to run these simulations would have no legal or moral protections for the living subjects it is manipulating. If we are in fact in a simulation, then it is safe to say that our simulators create everything we experience, both good and bad. While this may not be a problem for the good things, it also means that our simulators are responsible for all the crimes, murders, and other horrors we may face. If the legal system of this future is anything like our current one, then all of these things should be illegal; if our simulators expose us to them anyway, we can assume that morality is simply not being considered. While this may or may not be a major flaw in Bostrom’s theory (depending on how you view it), I still believe it is worth pointing out.
Another flaw I found in Bostrom’s theory is that it is circular and, to some extent, self-undermining. To start, we can say that the people running the simulation are incredibly intelligent and efficient – more so than us – because they are able to run these simulations in the first place. The whole theory thus speculates about a world higher than ours, while the simulation hypothesis itself suggests that if we are living inside one, we would have no way of knowing anything about that higher world: what its people are like, what their intentions are, and so on. In short, the supposition that we are living in a simulated world undermines our trust in the very assumptions that led us to that conclusion, making the argument self-contradictory and arguably meaningless.
Taking all of the aforementioned assumptions into account, it can be argued that Bostrom’s hypothesis violates Occam’s razor, the principle that among competing hypotheses, the one requiring the fewest assumptions should be preferred. Other, somewhat similar theories about higher intelligence may therefore be more credible than Bostrom’s, provided they rest on fewer assumptions and we actually take them into consideration.
Sources:
http://en.wikipedia.org/wiki/Occam%27s_razor
http://www.simulation-argument.com/computer.pdf
http://www.longecity.org/forum/topic/3402-against-the-argument-that-we-live-in-a-simulation/
http://users.digitalkingdom.org/~rlpowell/rants/simulation_errors.html
I want to touch on a few of your points, but first I would like to say that I do see the merit in invoking Occam’s Razor, as the simplest explanation for life is that we are living a normal, unaltered existence. However, I want to challenge the argument you base on the “unethical approach” of a complete simulation.
One can argue that murder, jealousy, fear, lust, and all of the darker parts of human experience are simply indispensable to it. Imagine a world without these things; one would aptly describe such a peaceful existence as an Eden or a utopia of sorts. However, as fiction about utopias and dystopias has long suggested, supposedly perfect civilizations are fundamentally flawed (see Metropolis, Elysium, The Matrix Reloaded, or any film or novel based on a utopian society).
This brings me to my next point. If we are all humans plugged into a computer simulation based on a utopian society, then as part of the human experience we would inevitably find flaws in such a perfect existence and revolt. Conversely, if we were merely unaware computer programs, programmed with emotions and thoughts that resemble the human experience, such creations would surely accept that existence, as they would lack the intuition to notice the synthetic nature of their society.
Coming back to your original point, the only type of simulation that humans would not question is one that feels organic: one that resembles a society where humans have free will and can make both good and evil decisions. This assumption leads me to question whether we can truly prove we are not in some sort of complex simulation. And at the end of the day, I am unable to present a concrete argument or method for proving that we are not.
If we are indeed in a complex, constantly evolving computer simulation, then our simulators have found a way to completely deceive our sensory neurons, since every sensation is registered as genuine by our brains. While the idea of such a complex simulation surely violates the basic notion of Occam’s Razor, I would argue that Occam’s Razor leads to an incomplete judgement here, because it trades on our belief that such complex simulations are nearly impossible. If the technology were readily available, as it may be in a possible future, we would surely give the possibility of a computer simulation a bit more weight.
Regarding your critique that the super-humans would be unethical to run such a simulation, I would like to offer a few points of disagreement. This can be related back to Descartes’ suggestion that we are being controlled and deceived by an evil demon rather than a benevolent god: perhaps our super-human descendants are evil rather than good. That is a possible, though not highly likely, scenario.
Assuming, however, that these super-humans are ethical, modern science offers an interesting spin on this. Currently we perform plenty of experiments on rats, mice, monkeys, and other animals. We induce devastating, fatal diseases, put them in stressful situations by administering shocks, and so on. While some people contest the ethics of this, research continues on the premise that sacrificing and using these lives is ethical if it serves the greater good. Post-human civilizations could reason the same way about us.
There is also the possibility that their concept of ethics differs from ours. Perhaps studying the past (looking at us as their ancestors) in order to avoid repeating its mistakes is, to them, more ethical than allowing those mistakes to recur. If we are merely experimental subjects, it is doubtful they would consider the arrangement unethical.
Also, consider The Hitchhiker’s Guide to the Galaxy. I’m not sure if you have read it, but in the book humans are actually the experimental subjects of super-intelligent mice. While we sit here thinking we are in control, we are really the subjects. A major component of the book, though, is that while humans are the experimental subjects, they are left to their own devices; they are not directly controlled by the mice. It is more a test of what humans would do if left on their own. This could be an interesting proposition to consider.
Occam’s razor was an interesting principle to learn about from your post; I had never heard of it before. While it does make sense, I agree with Jared’s point that the high likelihood of living in a simulation is what earns the argument such strong consideration. The assumptions are not really as far-fetched as they seem, nor are there that many of them. They rest on the basic proposition that a post-human civilization survives with the technological capability to run such ancestor simulations, something that is highly probable.
Firstly, Occam’s Razor is not always correct; it is merely a heuristic, and the simplest explanation is not always the right one. Secondly, Bostrom does not say that we are living in a computer simulation. He argues only that at least one of his three propositions must be true, so if we assign each roughly equal credence, there is only about a 33.33% chance that the simulation proposition is the true one. This means not only that there is about a 66.67% chance that we are most likely not in a computer simulation, but also that even if the simulation proposition holds, the word “likely” still leaves room for us not being in one.
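To make the probability talk concrete, here is a toy calculation of my own (the function name and the sample numbers are mine, though the formula follows the notation in Bostrom’s paper): the fraction of observers who are simulated depends only on the product of how many civilizations reach a post-human stage and how many ancestor simulations each one runs.

```python
# Toy sketch of Bostrom's "fraction of simulated observers".
# f_p   = fraction of human-level civilizations that reach a post-human stage
# n_bar = average number of ancestor simulations such a civilization runs
# Per the paper, the fraction of all human-like observers who are simulated is
#   f_sim = (f_p * n_bar) / (f_p * n_bar + 1)

def simulated_fraction(f_p: float, n_bar: float) -> float:
    """Fraction of observers that are simulated, given f_p and n_bar."""
    return (f_p * n_bar) / (f_p * n_bar + 1)

# Even a tiny f_p is swamped by a large number of simulations per civilization:
print(simulated_fraction(0.001, 1_000_000))  # ~0.999
# Whereas if almost no civilization runs simulations, the fraction stays near 0:
print(simulated_fraction(0.001, 0.001))      # ~0.000001
```

The point of the sketch is that “likely” is doing real work in the argument: the conclusion only goes through if neither factor is driven toward zero, which is exactly what the first two propositions would do.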
As for the ethics issue, I believe that is covered by one of the three propositions: that post-human civilizations do not run computer simulations. This could be because they are not interested in doing so, or because of the ethical and legal issues you mentioned.
Going off of your point about us not knowing what a higher civilization would actually be like: our picture of how its inhabitants would look and act is based solely on our own culture and on what we imagine we would do in their place. Other cultures might argue that they would never want to control or conquer anything in such depth, not only for ethical reasons but because they simply would not think that far. Judging by our own history – slavery, for example – controlling a population and sitting atop a social hierarchy may just be a cultural phenomenon that we are projecting onto any other higher intelligence. Running ancestor simulations, making their inhabitants think and feel things without permission, and forcing them to work and live through horrors and slum conditions is unethical, and on some level could be considered a kind of slavery. But given humanity’s history of conquest and hunger for power, now amplified by technological advances, the idea that a higher civilization sits at the top of the hierarchy controlling everything may simply be an attribution we are placing on them, if they exist at all.
Because we know nothing about what they are like, who is to say that they are not so far beyond our intelligence that they would never create a social hierarchy at all, or would never even want control of something that large, such as millions and billions of programmed “people” made to do what they want for entertainment or otherwise? What if they have learned from the ethical failures of our past and, as Bostrom allows, are simply uninterested in running ancestor simulations? Or what if they in fact revere their ancestors and would never create a recreation of something so important to them and their history, since for a higher intelligence to appear after us, we must be part of their history?
So although I agree with the probabilities Bostrom gives for all three propositions, I think we may be overestimating an advanced civilization’s interest in controlling possible worlds, and underestimating our tendency to project our thoughts, fears, feelings, and ego-centered desires for control onto beings we know nothing about, similar to how we project our own intelligence and thoughts onto animals who may or may not think or feel the same way.