I haven’t written a philosophical post before, but it’s finally here. I want to think about how we infer that something thinks. This question will only grow in importance as computers become more and more human-like in function. I’ve written this post in free-form, so I come to no firm conclusion. I do find that, all things considered, it will be hard to figure out when computers think.
I’m sure everyone who has encountered computers has thought about the question “do computers think?” They might even have sat down for a few minutes, mulled it over, and stopped after having a Terminator scene flash before their eyes. Chilling. The most interesting aspect of this question is that we take for granted the fact that other humans think.
I mean, c’mon, think about it (yup, that’s a pun). You can’t get inside my brain and verify that I have mental processes. You make inferences that, I think, rightly point you to the fact that I am a thinking being. Some of us even believe these inferences apply to some animals. That makes this question even more complicated. Animals don’t exhibit all of our mental capabilities, but somehow we think the concept of thinking applies to some of them.
Of course, the question gets stickier when not only the form of thinking is different but so is the hardware. You can take me and another human to an fMRI machine and see that, when processing images, quite similar areas light up in both of us. You can open up our heads and you’d see we have quite similar corrugated masses in our skulls. You toss us a ball and we have quite similar responses, and so on. Now take me and some dog. You again see many similarities and decide that the dog, too, has mental processes occurring, albeit at a lower level. How do you make these sorts of judgments when the subjects differ not only in response but also in terms of processes?
We have no problem imagining a thinking being with a different makeup and different responses than ours. However, it’s hard to say how we would judge that such a being is thinking. At this point you can’t just take me and this being to an fMRI machine or compare our responses to stimuli. What is scary is that we could, for all intents and purposes, find a being built like us (carbon-based) but evolved in a different world that just doesn’t respond like we would. Its responses may be what we would call thought-less.
Returning to our question: how will we tell that computers think? I’m not sure, and it looks like it might be harder to test than we’d like to think. That is, there may not be a universal Turing test. Or maybe we are the model for higher intellectual and mental processes. Unlikely, but it sure would be an astonishing fact of the universe.