All posts by Bradley Wolters


Human Mind vs. Alternate Mind

In the section on functionalism, Churchland describes an alien life form with a psychological constitution based on the element silicon rather than carbon.  Silicon behaves chemically much like carbon because it sits in the same group of the periodic table, yet it is still a different element (different numbers of protons, neutrons, and so on).  Churchland states that this alien brain "can sustain a functional economy of internal states whose mutual relations perfectly parallel the mutual relations that define [humans'] mental states" (Churchland 36).  In other words, an alien brain made of different material than ours can function in the same way ours does.  If that is the case, and those mental states are causally connected to inputs that parallel our own connections, then "the alien could have pains, desires, hopes, and fears just as we do, despite the differences in the physical system that sustains…those functional states" (Churchland 36-37).  This means there could exist conscious life forms that are not made of the same material we are.

Churchland then extends this argument to artificial systems.  He states, "were we to create an electronic system - a computer of some kind - whose internal economy was functionally isomorphic with our [constitution] in all the relevant ways, then it too would be the subject of mental states" (Churchland 37).  If you think you've seen this before, you'd be right: it's very similar to the substrate-independence thesis we discussed while analyzing Bostrom's computer simulation argument, which states that "mental states can supervene on any of a broad class of physical substrates" (Bostrom 2).  This 'broad class of physical substrates' can extend to alien life (as I discussed in the first paragraph) or to artificial intelligence, as Bostrom discusses in his paper.  Bostrom simply assumes the thesis is true for the purposes of his argument, but what if it were actually physically possible to design a computer (or something of similar computational capacity) to be functionally isomorphic with our own constitution?  It may seem strange, I know, but we already have robots that can perceive human expressions and display distinct emotions based on the context of an interaction.
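To make "functionally isomorphic" a bit more concrete, here is a toy sketch of my own (it is only an analogy, not anything Churchland or Bostrom actually give): two tiny "systems" built on different internal representations - one using words, one using numbers - that nonetheless realize exactly the same pattern of state transitions and input/output relations.  Nobody is claiming these few lines are conscious; the point is only what "same functional economy, different substrate" could look like.

```python
# Toy analogy for functional isomorphism (my own illustration, not Churchland's).
# Two "systems" with different internal makeups realize the same functional economy:
# identical state-transition structure and identical input/output behavior.

def run(machine, inputs):
    """Feed a sequence of inputs to a simple state machine and collect its outputs."""
    state, outputs = machine["start"], []
    for stimulus in inputs:
        state, response = machine["step"](state, stimulus)
        outputs.append(response)
    return outputs

# "Substrate" A: internal states represented as words.
carbon_like = {
    "start": "calm",
    "step": lambda state, stimulus: (
        ("agitated", "ouch") if stimulus == "pain" else ("calm", "ok")
    ),
}

# "Substrate" B: internal states represented as numbers -- a different realization.
silicon_like = {
    "start": 0,
    "step": lambda state, stimulus: (
        (1, "ouch") if stimulus == "pain" else (0, "ok")
    ),
}

stimuli = ["pain", "rest", "pain"]
# Despite different internals, the two systems' input/output relations match exactly.
assert run(carbon_like, stimuli) == run(silicon_like, stimuli) == ["ouch", "ok", "ouch"]
print(run(silicon_like, stimuli))  # ['ouch', 'ok', 'ouch']
```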

I would also like to add that we as humans like to think of ourselves as superior beings in the world, yet if we look specifically at our brains and compare them to other animals' brains, the nervous center of our brain is only slightly more complex than that of other animals.  In addition, our brain's weight in proportion to our body weight is not the greatest among all the animal species.  Is it not logically possible, then, that there could exist other animals besides humans who think, feel, and perceive the world just as we do?  Could other animals not also have consciousness?  We generally don't think about this because we can't communicate with other animals.  There is no real reason why animals can't be conscious; consciousness is a private matter, and no one else can know whether another being has it (indeed, everyone besides you could potentially be a zombie, but that's a topic for another time).  We also assume animals can't have consciousness because we have domesticated many of them and feel we can control just about every animal species out there.  This, simply put, is human arrogance at its highest.

What this all means is that "there are almost certainly many more ways than one for nature…to put together a thinking, feeling, perceiving creature" (Churchland 37).  My question to you is this: do you think it is possible for computers, alien life forms, or something else not of our constitution to think, feel, perceive, or have consciousness in general?

 

Additional sites used:

http://www.gizmag.com/erwin-robot-mimic-emotions/30769/

http://realtruth.org/articles/090806-002-science.html

 

Are We Brains Floating in Vats??!?!?!?

Well, are we???  I couldn't possibly tell you, because perhaps I can't know it.  This is a classic scenario designed by philosophers to pose the question: can we know anything?  There are many who would argue that we can, offering examples like 'I'm sitting at my desk typing this blog post.'  Because I can verify it, and it is true, I have clear knowledge of it.  Right?  Well, not according to the skeptical hypothesis, which is "a scenario in which you are radically deceived about the world, yet your experience of the world is exactly as it would be if you were not radically deceived" (Pritchard 169).  This means that you feel as though you are living in reality when in fact you are not; you are merely suspended in a state of belief that your experience is reality.  It also means that if you were to count something as knowledge in this virtual reality, it wouldn't count as knowledge in actual reality, because it isn't true.  Hence my title.

Nozick considered this before and used a similar example in his argument against Feldman: "If someone is floating in a tank and oblivious to everything around them and is given electrical and chemical stimulation to the brain, the belief that he or she is floating in a tank with his or her brain being stimulated cannot be known by that person" (Nozick 2).  Although he uses this tank-floating idea to support a different argument, it shows that philosophers have considered the idea before.

This is similar to the movie The Matrix (Pritchard 169-170), in which an individual, Neo, has lived in a virtual reality, oblivious to the fact that he is being controlled by supercomputers.  He feels and thinks as though he is in reality, yet it turns out he isn't.  Neo has accumulated experiences and personality traits that all contribute to his knowledge of everything, yet everything in Neo's world is false.

A similar thought process appears in Inception, where the main character, Cobb, tries to redeem his past illegal failures by infiltrating an individual's subconscious and implanting an idea without the individual ever knowing.  Later, that individual goes through his life never knowing that this idea, which stands out so fresh in his mind, was never his to begin with and was secretly implanted by outsiders, unbeknownst to him.

My question, then, is this: if we were under the influence of some supercomputer, or were brains floating in vats, or had ideas implanted in our minds, would the knowledge we gain (or think we gain) in those situations truly count as knowledge?  If I were to type this blog post and have knowledge of having done so, but it turned out I had only done so inside a virtual reality brought on by some Oculus Rift-like technology, would I truly have knowledge of having typed this post?  Or would it not count as knowledge because I'm not in a definable, physical reality?