Let me say up front that I don't profess a deep understanding of this subject. The problems associated with intuition are numerous, and critical analysis of the role of intuition in philosophy is, as far as I can tell so far, a relatively recent and esoteric thing. It has never come up in any of the philosophy subjects that I took at university. Even so, it is clear that intuition plays an important role in thought and understanding generally, so let us consider it, and perhaps gain some appreciation of the scope of our ignorance, even if our initial investigations are a little naive.
The problems associated with intuition are somewhat related to the problem of consciousness (from my perspective at least). I think that a preliminary discussion of the "hard" problem of consciousness (as they call it) will put us in the right frame of mind, so let us consider consciousness first.
Consciousness is hard to define, but the most important thing about it from a philosophical perspective is that it pertains to subjective experience, not complex behaviour. To illustrate, consider a rock: it is neither physically complex, nor does it exhibit any active behaviour, and few would suggest that it experiences its environment in any way. Sunlight may fall on it, and warm it up, but it does not have a subjective experience of warmth.
Next, consider a computer. It is composed primarily of minerals, and shares a lot in common with rocks in that regard. The primary difference is that it has been crafted into a complex machine, and is therefore capable of quite complex behaviour. Like the rock, however, few would suggest that it experiences anything. It may have a camera and microphone attached, and thus be capable of obtaining "sense data" from the environment -- even committing the data to memory and playing it back -- but it "sees" and "hears" only analogously to our senses. The machine does not subjectively experience this data in the way we do: the data is merely received, encoded, stored, retrieved, decoded, and replayed.
Next, consider a science fiction android: a robot built in the form of a man, and with human-like artificial intelligence. In terms of its physical constitution, this machine is not qualitatively different from its computer and rock counterparts. The complexity of the machinery has increased again, and so has the behaviour, but there is no qualitative difference that ought to make us think that the machine actually experiences the data that it processes. If such a machine were programmed to behave like a human, then presumably it would answer "yes" if asked whether it experiences things, but the answer would be no more true than if a computer were to print "yes" on its screen in answer to the same question -- or if someone were to paint "yes" on a rock, as though that were the rock's answer.
But consider us, as humans. We cannot coherently deny that we experience. If one entertains the thought, "I do not experience anything," one will immediately experience having that thought, and so the thought defeats itself. In general, we suppose that other people also experience things in the same way, because there is no special difference between us and them that might suggest otherwise. We might be tempted to believe the android's lie about experiencing things precisely because it shares so much in common with us, but in this regard it is still more like a complex rock than a human being.
On the other hand, we might imagine another science fiction construct: the artificial human. Not merely an electromechanical android -- this construct is flesh and blood, like Frankenstein's monster, only more human than that -- more like the "replicants" of Blade Runner. They are manufactured, but they are biological, just as we are. Do they experience? If we say "yes", then we are ascribing consciousness either to a certain complexity in matter, or to certain kinds of matter, or to both, since that is the only difference between the android and the replicant. If we say "no", then we must answer the question: what is the difference between a human and a replicant that allows one to have conscious experience while the other does not? Both of them have the same kind of biological brain, after all.
This is where we might start to feel the full force of the "hard" problem, because, on reflection, there is nothing about our own physical make-up that would suggest the possibility of experience in the first place. The fact of experience is something undeniable, not an indirect conclusion of observation and inference, but as direct as a fact can be: every observation is an experience; every thought is an experience. When we look at the physical stuff of ourselves, however, we see nothing qualitatively different from the android, computer and rock. Different chemicals are involved, with different degrees of complexity, but complex and specific chemicals seem like the wrong sort of thing to be the key to consciousness. Indeed, the very laws of physics don't offer a suitable hook on which to hang the phenomenon of experience that we call "consciousness".
Small wonder that Descartes (of "I think, therefore I am" fame) was persuaded that minds must be a different kind of thing from bodies. While such "substance dualism" (as it is usually called, whether appropriately or not) has largely fallen out of favour among philosophers (though not among the unwashed masses), the problem remains. To quote the colourful philosopher Jerry Fodor: "Nobody has the slightest idea how anything material could be conscious. Nobody even knows what it would be like to have the slightest idea about how anything material could be conscious."
Of course, there are a lot of metaphysical materialists out there who require consciousness to be grounded in the physical, so this poses a problem for them. One popular explanation is that consciousness "emerges" from matter. By my analysis (and I'm not alone), this either hides magic in the word "emerge", or is the wrong kind of explanation. Emergent behaviour is easy enough to see: the Mandelbrot set emerges from iterating a simple squaring rule (z -> z^2 + c, starting from zero) over the complex numbers -- an infinitely complex object arising from a very simple equation. Strange attractors likewise show how simple dynamical systems can exhibit emergent complexity of this sort. Consciousness, however, is neither a type of complexity, nor a type of behaviour. To claim that consciousness can emerge in such a way is either to mistake experience for something that it isn't, or it is roughly on a par with claiming that a ghost can emerge from a chalk pentagram.
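To make that kind of emergence concrete, here is a minimal sketch (in Python, purely my choice of illustration; the grid bounds, resolution and iteration cap are likewise illustrative assumptions) that draws a rough ASCII picture of the Mandelbrot set. All of the boundary's intricacy falls out of a single line of arithmetic:

```python
# A minimal sketch of "emergent complexity": the Mandelbrot set.
# For each complex parameter c we iterate z -> z*z + c from z = 0.
# If |z| stays bounded after max_iter steps, c is treated as inside
# the set. Bounds and resolution below are illustrative choices.

def in_mandelbrot(c: complex, max_iter: int = 50) -> bool:
    z = 0 + 0j
    for _ in range(max_iter):
        z = z * z + c          # the one simple rule everything emerges from
        if abs(z) > 2:         # escaped: c is certainly outside the set
            return False
    return True

def render(width: int = 70, height: int = 28) -> None:
    for row in range(height):
        line = ""
        for col in range(width):
            # Map the character grid onto the region [-2, 1] x [-1.2, 1.2].
            c = complex(-2 + 3 * col / width, -1.2 + 2.4 * row / height)
            line += "#" if in_mandelbrot(c) else " "
        print(line)

if __name__ == "__main__":
    render()
```

Running it prints a recognisable silhouette of the set; nothing in the code so much as hints at the infinitely filigreed boundary it produces, which is what "emergence" properly means here -- emergent complexity and behaviour, not emergent experience.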
The other popular explanation is that consciousness is an illusion. Little needs to be said about this, other than to emphasise that some people really do advocate it in earnest. It seems entirely self-defeating to me (and, I expect, to most people). An illusion is a particular species of experience -- namely, a non-veridical one. So for experience to be an illusion, you'd need to be experiencing something that wasn't actually an experience. Anyone who can wrap their mind around that is a more accomplished mental contortionist than I.
I know of one other option which saves metaphysical materialism: panpsychism. This is the idea that we were mistaken about rocks not feeling warm when the sun shines on them, or something like that. It suggests that all matter has a property of consciousness. This seems pretty far out, but it would provide the right kind of basis (in principle, at least) for "higher consciousness", such as human consciousness, to be an emergent property of appropriately configured matter. "Emergence" without panpsychism posits that an entirely new category of thing emerges from matter, whereas with panpsychism it would be a more credible case of an existing thing coalescing into a system with unexpected properties. I have absolutely no idea what kind of "experience" a rock or an individual atom would have, though, and it raises many of the old "substance dualism" questions of interaction between the physical and the mental in a new form.
Consciousness poses an additional problem for Darwinists precisely because it isn't a behaviour. If we suppose that a brain is a bare minimum requirement for consciousness, and that some brains are too simple for it, then consciousness must have evolved somewhere along the "animals with brains" lineage -- possibly quite late. But why would consciousness evolve at all? Imagine two animals at the evolutionary threshold of consciousness. They are identical, except that one has consciousness, and the other doesn't. They both have identically complex behaviour: it is not possible to tell which is which experimentally -- only the conscious one knows that it is conscious. Natural selection thus has no means to favour one over the other: it is not possible to select for consciousness. Darwinist explanations must therefore suppose that it is just a fluke, or that it was a side-effect of something else that did convey an advantage.
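To see the point in miniature, here is a toy simulation (entirely my own construction, with made-up population parameters): two variants of exactly equal fitness, standing in for the conscious and unconscious animals. Because nothing about their behaviour differs, reproduction is a pure lottery:

```python
import random

# A toy illustration (my construction, not the author's) of why selection
# cannot "see" a trait with no behavioural effect. Variants A and B have
# identical fitness, so whether A (the "conscious" variant) spreads or
# vanishes is down to chance alone (neutral drift).

def neutral_drift(pop_size: int = 100, generations: int = 1000) -> float:
    count_a = pop_size // 2            # start with half A, half B
    for _ in range(generations):
        # Each offspring copies a uniformly random parent: with equal
        # fitness, no variant is ever favoured, only sampled.
        count_a = sum(random.random() < count_a / pop_size
                      for _ in range(pop_size))
        if count_a in (0, pop_size):   # one variant has drifted to fixation
            break
    return count_a / pop_size

if __name__ == "__main__":
    # Final frequencies of A across 20 runs: some 0.0, some 1.0, at random.
    print([round(neutral_drift(), 2) for _ in range(20)])
```

Any given run may end with either variant taking over; selection never gets a grip, which is the sense in which a behaviourally invisible trait cannot be selected for.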
So that's our warm-up problem -- the "hard" problem of consciousness -- in a nutshell. Next time I'll take us on to "intuition", and the problems it poses. Just to give you an idea where we're going with this, though, recall that all thought constitutes experience -- that is, thinking is not only something that we do, but something that we experience doing. But thought needs a subject: even the simplest thought in the youngest mind needs to be about something. Thoughts need a frame of reference: meaningful ways in which to relate to each other and to other experiences.
As I see it, that frame of reference is provided by our intuition.