The old house on Luz Avenue at Mylapore in Chennai has a what-you-see-is-what-you-get air about it. The photographs and memorabilia in the drawing room evoke the proud lineage of the legal luminary, Alladi Krishnaswamy Iyer.
The lady confined to her bed in the adjoining room is his daughter. I cannot see her, but I hear her calling for help every once in a while. Nursing and support staff flit in and out of the room in response. All activity in the house revolves around her. I have come to meet her son, the renowned explorer of the human brain, Dr. V.S. Ramachandran. Professor of psychology and neuroscience and Director of the Centre for Brain and Cognition at the University of California, San Diego, he comes home to Chennai - at least twice a year, he says - to be with his mother. Our conversation is interspersed with her constant summons, "Rama, Rama" - and each time, he abruptly excuses himself to tend to her. This will keep happening, he apologises. But I am as fascinated by the devoted son as by the passionate scientist.
The passion is palpable in his bearing, in the onrush of words and his forceful gestures. The accomplished physician morphs into the adventurous neuroscientist, and yet again into the curious psychiatrist. His brinkmanship with science is breathtaking. The fame of the author of the path-breaking work, Phantoms in the Brain, and its coda, The Emerging Mind (delivered initially as the Reith Lectures 2003), of the recipient of many academic honours and awards, sits lightly on him as he excitedly delves into the unknown. As he expounds his findings and pet theories, one can't help feeling that here is the messiah of the mind speaking:
Sashi Kumar: Thank you Professor Ramachandran for agreeing to this interview for Frontline. If we might begin at the intersection between philosophy and neuroscience... although a neuroscientist yourself, you seem to straddle both fields fairly comfortably. How do you see the intrusion philosophy has made into neuroscience? Are you just coping with it, or is it a familiar and friendly field?
V.S. Ramachandran: Well, you know my passion is mainly science, research, experimental work. Yes, of course, there are theoretical implications. Inevitably, when you do neuroscience - cognitive neuroscience or behavioural neurology - it throws up all kinds of philosophical questions, such as what is mind, what is the relationship between qualia, sensations, the activity of neurons, what is the nature of the self, the question of personal identity. Philosophers have thought about all of these issues for a long time. But, as often happens, as we advance in science, this has enormous implications for philosophy. Some people could regard these as antithetical, but they really are not, because, obviously, quantum mechanics has profound implications for understanding causality, the meaning of causality. Some of the greatest philosophers like Kant and Ernst Mach also inspired Einstein. So it's always a cross-fertilisation of ideas.
But since the late 1980s when Patricia Churchland spoke about this concept of `co-evolution', which - if I am summarising it correctly - introduced the philosophy of science to neuroscientists and vice versa, do you think the two are moving along abreast of each other? Or would you say that as neuroscience continues to discover more and more of what's happening in the human mind or the brain, philosophy will recede and be painted into a corner?
That's a good question. I think with some philosophical questions, I would even say with many, that will happen, they will be painted into a corner. But there will always be some fundamental issues of epistemology, such as: why do we exist? Why is there anything, rather than nothing? Questions of that nature. These questions are not going to go away because of science. Science doesn't attempt to deal with these questions. On the other hand, the strange thing about consciousness is that we are not even sure whether it is a philosophical question or a scientific question.
I mean, the way we approach it is that we say, look, obviously the liver is not conscious, the brain is conscious, as far as we know. Somebody could dispute that and say: how do you know for sure? Now, we don't know for sure. But science is not about knowing for sure. It's about knowing beyond reasonable doubt - so there's more in common with the law here than people realise. You can only be beyond reasonable doubt that something is true. So the brain is associated with consciousness, the liver is not.
But even within the brain, certain areas seem to be more involved in what we call consciousness. And what we call consciousness also seems to be several processes which we are lumping together in one word. And it's possible we can dissect these different processes and map them in different brain structures. That will enrich our understanding of consciousness. And then questions like, where is consciousness, or what is it, will recede into the background. It's a bit like when people ask: what is life? You know, living things are different, they have the vital spark. Now we know there is no vital spark, there's the DNA molecule, DNA replication, transcription, there's RNA, the Krebs cycle... once you understand all these processes nobody comes and says, yes, but you have to tell me what life is.
But even in the realm of consciousness it would seem it is subjective consciousness that is the problematic area. What is called 'qualia', right?
Correct.
Whereas there is a realm of objective consciousness which has been scientifically traced and methodically explored?
Yes, absolutely.
Is it the limitation of our ability to come to terms with subjective consciousness, or is it a mystery out there?
Well, we don't know that. You see, there's an analogous problem in physics: once Einstein came up with the idea of space-time, the notion of what the present is became problematic. The whole idea of the passage of time, it has been claimed, is an illusion; it's not something that physics acknowledges. Yet we know that time passes. I am not a physicist, so I may be talking through my hat. But from my conversations with some leading physicists, we are not even sure whether this is a philosophical problem or a scientific one.
Similarly, with consciousness and qualia, I have argued elsewhere that it is intimately linked to your sense of self. If you don't have a self, there's no question of qualia, right? The whole question of me experiencing red - if there is no me, there is no question of experiencing red. So these are dual aspects of the same phenomenon: the sense of personhood, the sense of me, and the ability to know that you are aware of green, which I call a metarepresentation. These are two sides of the coin. Now, that's proving to be the most elusive problem in all of neuroscience, all of biology.
But as scientists, what we are trying to do is to approach this problem indirectly, just as people understood heredity and DNA by looking at viruses. Are viruses half alive? We crystallise them, they behave like chemicals. But we also know that they replicate, and now we know that there are RNA molecules. So once we have understood DNA, nobody asks if the virus is really alive or not. Similarly, in consciousness, one day we may achieve a mature enough understanding of what it is. That might happen. Or it may forever remain a mystery, like time itself is a mystery in physics.
But you don't want to just assume it's insoluble. So what many of us scientists do is look at patients with disturbances in consciousness, look at what parts of the brain are involved and try and tackle the problem, in much the same way as the problem of heredity was - people homed in on the chromosome and then said it's not the histone, it's the DNA molecule; homed in on the molecule and found that it's the double helix, and the complementarity of the helix explains the complementarity of the parent and offspring.
Crick solved that.
Yes, so is there a similar grand climactic solution to the problem of qualia and consciousness, or is it going to be many little problems, which yield to experiments, and then the problem will gradually just recede into the background?
You mentioned Einstein. Einstein held that theory determines observation. In that light, is there a theory or a philosophy that you assume as the basis of your research?
It's not just me. I mean, if you ask anybody in my field, the underlying belief is that there are neurons and that their activity more or less explains, or corresponds to, what we mean by being conscious, being aware. Not all neural activity - there's a lot going on in your brain which doesn't emerge into consciousness - but a subset of this neural activity. And once you understand that, it's a perfect one-to-one correlation. We then start asking, but what is red? You see, this firing explains when you see red and when you don't, when you see green and when you don't; but what is the actual sensation?
The standard answer of neuroscientists would be: that's like asking you to explain all the properties of matter - the electrons, string theory and all that. And then suppose you ask what actually is an electron? The scientists will tell you that's not a meaningful question. You'll get a similar answer from any neuroscientist.
But that doesn't mean the problem disappears. If you are a thinking, questioning human being, you always wonder about why you are here. When you talk about consciousness, it is also linked to questions like the meaning of existence - why am I here? Is it purely accidental that we are born? Is it, as King Lear said, that when we are born we cry because we have come to this great stage of fools? Of course, all the great poets, all the great philosophers, have written about this.
The only thing I will say is there have been all these revolutions in science. If you think about it, each of them has had a dehumanising impact.
First, for example, there was the Copernican revolution saying there is nothing special about this little speck of dust, which is what the earth is. That's already humiliating - that you don't have a privileged place in the cosmos.
Then comes the Darwinian revolution saying you're just a hairless ape. That, again, is humiliating - that you are not the climax of creation, but are, in fact, the product of random processes of natural selection.
Then this Freudian revolution, saying you are not in charge, that your behaviour is largely governed by unconscious motives and drives.
And then the most recent thing, the DNA - that there is no vital spirit, it's a molecule. As Watson said, there are only molecules, everything else is sociology. He was, of course, saying this tongue in cheek. And now the neuroscientist's version - Crick's astonishing hypothesis - that you are just a pack of neurons; that's all a human being is. Now the question is: is that true? We don't know yet. We have to remain agnostic. But we have to take that as far as we can. That's the way science works.
Another interesting question is: why do we get such a thrill out of being humiliated, in being diminished in each instance? I think the reason is that actually we are not being diminished. And this harks back to Eastern mysticism and Vedantic philosophy.
Unless we are the masochist in your (Reith) lecture, who likes to have a cold water shower at 4 in the morning, and therefore doesn't.
That's correct. That's absolutely right. It's because you have this narcissistic illusion that you are somehow the centre of everything. So, in a sense, indirectly by saying you are not a separate soul, that you're really part of this great dance of Siva, far from being humiliating, it's actually ennobling. That would be my explanation of why people are always fascinated by questions about origins, and their place in the cosmos.
For someone with that sweep of perspective, you seem, in your [Reith] lecture, to assume that many brain functions are localised - that there is a specific part of the brain devoted to that function. For example, you talk of the fusiform gyrus for analysing colour information. Are many cognitive processes localised thus? Or is it likely that you can make sense of certain processes only in a larger-scale picture or framework? And if so, are there tools in neuroscience to capture such processes?
I think there are several reactions I have to that. Ultimately biology, unlike physics, which is all about overarching or general theories, deals with the details. Details matter. Some functions, you are going to find, are localised fairly sharply in specific brain areas. Nobody would deny that the cough reflex, which is also a function of the brain, is a specific reflex that goes to the brain stem; nobody is going to debate that. These debates arise only about functions that are poorly understood.
Now, whether these are localised or not is an empirical question. Some may turn out to have small sets of circuits involved, others may involve large chunks of neural tissue. But this should not be transformed into a philosophical question, like Fodor and others have tried to make it. Now, I welcome their contribution in that they try to clarify what you mean by localised, what you mean by not localised. But they don't take it any further. What I want to know is: which function is localised, where is it localised, to what extent is it localised?
It's a bit like the nature/nurture question. People always polarise this. When you ask, is it mainly genes or is it mainly environment, it turns out this is a meaningless question, because there's always an interaction between the two. So your question should be: to what extent is this particular behaviour influenced by genes? Is the variability in this behaviour explained by genetic variability? To what extent is it explained by environmental variability? How do the two interact?
You can take an ape and put it in a public school. It's not going to develop a proper public school accent. On the other hand, if a human is raised in a cave he's never going to be a scholar. This is obvious.
It's amazing how much acrimony goes into this without understanding that the only way to understand this is to do experiments. And some things, like the cough reflex, nobody would say you learnt it. It's completely genetic, or mostly genetic. But in something else, say language, learning is involved to a large extent, especially your lexicon, the words. Nobody would deny that.
How much credence would you give to the functionalist interpretation of neural states that some philosophers seem to promote?
I think that's a bad idea because, in biology, structure informs function. Understanding structure and understanding how things interact are often a short cut to understanding functions. We are not opposed to functionalism as a theory. But as a methodology, it's seriously flawed. For example, with heredity, you can only get so far with functionalism. It's only when they did X-ray crystallography and understood the double helical structure that they saw the genetic code.
If some philosopher had said, `No, No, heredity is a function and needs functional analysis,' we'd still be stuck with Mendelian genetics; we wouldn't have understood DNA. I think the same is true of the brain.
But, and this may be a naive comparison, if you are doing a programme for a computer, it's not really about the electrical connections of the computer, right?
Correct.
And the same programme could be done on a computer of a different makeup?
Absolutely. But the key difference is - that's why I said in theory that's correct - you should be able to understand the functions of the brain as input-output, without recourse to looking inside it. But the brain, being a biological system, doesn't work the way the engineer expects it to work. There are all sorts of evolutionary constraints - the manner in which it evolved, all sorts of quirks.
Sometimes it's like a bag of tricks. And these cannot be derived from first principles, based on functional considerations. You often have to just go in there, see what the wiring is. Even if it's possible theoretically, in practice it's much easier to do by actually going in there.
Suppose you want to find out how a car works, you need to open it up and find out its mechanism. The same is true of the brain. It's true of heredity and DNA. I am opposed to purely functionalist or theoretical approaches to brain function. And if you look at the last twenty or thirty years, it's the experimental approaches that have succeeded, by and large. But the other side of the coin is that just a purely anatomical approach - your looking only at the structure - is not going to work either.
Horace Barlow provides a good analogy - of a Martian coming to earth. Let's assume on Mars they don't have sex because they are like amoebae; they just divide. When he comes and looks at earthlings and dissects them, he finds the testicles. Now he tries to understand what the testicles are. He dissects and finds these wriggling sperms. He'll think they are some parasites. He won't have any idea what the testicle is without knowing about sex. So functionalism there is critical in interpreting the structure. So it's always a two-way street. It is neither functionalism alone nor structuralism alone, but both.
You know, that's one thing that annoys me about philosophers - they do it more than scientists actually. I may be wrong about this, but they tend to polarise; because that's their occupation, you know, I mean...
Hair splitting?
Hair splitting. Also, for instance, [Patricia] Churchland is a very good friend and she comes and peers over my shoulder to see what we are up to; we have students in common, people doing Ph.D.s in philosophy who come to my lab and graduate in neurophilosophy. She's a great pioneer in that respect.
But if you take many other philosophers - I can practically give you a whole list - they actually hurl personal insults at each other. I don't know if you ever read the exchanges in the New York Review of Books. It gets very personal. And this is an aspect of philosophers I have never understood, until one of my students trained in philosophy put his finger on it. He said there are no real issues they can discuss; they don't do experiments. So this is their job, you know, which they thrive on.
So you believe that it's really neuroscience that is best set to understand, let's say, the nature of consciousness?
Yes, absolutely. I have no doubt about it. That doesn't mean that we'll solve everything. I am not saying that we'll finally understand the transcendental nature of art, or why we came to be - things of that nature - but that's really metaphysics, rather than philosophy. And those metaphysical questions may always remain with us. They are not going to go away. But on specific questions, like why we experience red, I'm agnostic. I think neuroscience may well explain those things.
It's interesting you touch on the fact that you are agnostic. I have heard you say in an interview elsewhere, on television, that you are not atheistic.
Yes, I am not atheistic. That's why I said I am agnostic.
So you wouldn't, like Carl Sagan, say that god becomes littler and littler as science expands?
No, because I think this deals with a whole different issue. It deals with the realm of why I am here, what is existence all about... and I think the Indian mind is more tuned to these things than the western mind. The western mind is more pragmatic, and you don't invoke an entity beyond what Occam's Razor allows, unless you need to. The term `agnosticism' was introduced by Huxley, whom people often regard as an atheist. But he was very clear about it: I am opposed to specific claims, such as that the world was created in 6000 B.C. or that we did not evolve from ape-like ancestors - in that respect I am an atheist. You can oppose that. But if you say that there is `Brahman', I think, as humans, we have a need for something like that.
So is your approach to cognitive science based on empirical studies or intuition?
Intuition is what gets you started; and then you need empirical studies. We mentioned brain imaging earlier. In every lab now in the United States, in every corner, there's an fMRI being done, an EEG being done. There is this brain imaging mania. It's almost become like a voyeuristic phrenology, harking back to the 19th century, to see what lobe is doing what.
Now, there is so much of it being done that purely by accident some of it is going to be good. Technology often drives science - just think of the telescope, the microscope - and so obviously it's a good thing overall. But the brain behind the microscope, the brain behind the telescope, is just as important, if not more important. Hundreds of children were using telescopes as toys, but Galileo had to come along and look at the stars.
Likewise, brain-imaging technology often lulls you into a false sense of having understood what's going on. So sometimes not having technology is an advantage - that's my own approach and that of some of my colleagues: we use it only when it's absolutely essential, just as in medical diagnostics. We rely more on intuition and on doing simple experiments, because if you rely on fancy medical imaging, you become less creative.
Paradoxically, not having access to technology forces you to be more ingenious. That's the bonus. And that's the point that is often overlooked, while at the same time acknowledging that technology is going to radically transform our understanding of the brain.
Would you then care to speculate on the nature of consciousness?
I did, to some extent, in The Emerging Mind. You see, unfortunately, the word `consciousness', like `life', is used in many different ways. It's a collection of different mechanisms. One is qualia. And then there's the notion of self-awareness. I am aware that I am seeing red. It's debatable whether a cat is aware that it is seeing red - no doubt it sees red and it reacts appropriately. Now, there's a tantalising question: if you are not aware that you are seeing red, are you even aware of red? In other words, without knowing that you know, the word `know' doesn't mean anything.
I guess you start from the Cartesian premise that I think, therefore I am. You have to proceed from there, don't you?
Exactly, so that's where you get into a convoluted...
MIRROR NEURONS AND AUTISM
But the poet Rimbaud says you should no longer say `I think.' You should say, `it thinks me.'
Right. It's interesting you should mention Rimbaud, because he was also a synesthete. That's one of the phenomena we've been studying lately, where people get their senses muddled up. When they hear tones, they see colours. When they see numbers, they see colours. People used to think this was some kind of quirk. People even used to think it was bogus, that they were making it up. About seven or eight years ago, my student Ed Hubbard and I showed that this is a genuine sensory phenomenon, identified which parts of the brain are involved, and argued that it has deeper implications for understanding things like metaphor.
Now we have become very interested in this discovery of mirror neurons. These are neurons in the front of the brain. You record from them - this was done by Giacomo Rizzolatti in 1996 or '97, I believe - and these neurons in the monkey's frontal lobes fire every time the monkey reaches for a peanut. So one neuron fires when the monkey reaches for the nut, another neuron fires when it pulls a lever, another when it pushes something, another neuron when the monkey picks up a nut and puts it in its mouth.
People used to think of these as command neurons, which are orchestrating a sequence of muscle twitches to perform certain skilled actions, or semi-skilled actions, or volitional actions. Now, what Rizzolatti found, to his amazement, was that some of these neurons would fire when the monkey watches another person performing that action. This was truly extraordinary.
When they first discovered it, many people didn't see its significance. I saw this observation and I published an essay on `Edge', a website maintained by John Brockman, where I said that this was going to do for psychology what DNA did for biology - help explain a host of hitherto mysterious mental capacities, like the early emergence of language, words, proto-language, metaphor, your ability to transmit culture through imitation, through emulation, empathy, your ability to put yourself in another person's shoes, look at the world from his point of view. All of this, which had seemed quite inaccessible to science, now becomes instantly accessible.
Look at what these neurons do. The neurons are allowing you to put yourself in the other monkey's shoes, the other person's shoes. So I call them the empathy neurons, or the Dalai Lama neurons. In fact, at UCLA, they found these neurons in the anterior cingulate, which, in humans, respond to pain. You poke this chap and the neuron will fire. So people used to think of these as pain sensory neurons.
But guess what? The same neuron will fire if you poke somebody else. And this is where I joke about the possibility: what if you do this to a sadist or a masochist. That hasn't been done. But here is the neuron effectively dissolving the barrier between the self and the other - it fires in you when you hurt somebody else! So when you say I feel his pain, it's not just metaphoric. This neuron is telling you that, in some sense, you are literally feeling his pain.
You could ask, if that's true, why don't I actually feel the pain; why do I just feel empathy? I think what's going on is that another part of the brain is signalling that your own pain receptors are not actually being stimulated. That explains why you have empathy but don't actually experience the pain.
So, because these neurons are dissolving the barrier between you and others, I call them the Dalai Lama neurons. This is the basis of all eastern mysticism. This is an exciting discovery and it may help explain many aspects of human consciousness, human mind, which would have been inconceivable even twelve years ago, that is, before this was discovered.
One of the things we have discovered in our lab is the cause of the cruel disorder called autism, which you see in children. The word `autism' literally means `aloneness.' The child withdraws from the world, although many autistic children turn out to be quite intelligent once you draw them out. But autistic children, typically, are reluctant to interact socially. They have grossly impoverished language and communication skills, no empathy for other people, and an inability to adopt another person's point of view. And, as has been known for a long time, they have a huge problem with metaphor, the metaphoric use of sentences.
A bulb flashed in my mind and I said: `My God, what's gone wrong in autism are precisely the functions of mirror neurons. So maybe autism is caused by deficits of mirror neurons.' So Eric Altschuler, who was a postdoctoral fellow in the lab in 1999, and I said, let's test this idea and measure human EEG, brain waves, in autistic children. What we found was that in autistic children the command system was completely normal. If they reach out and grab something, the neurons fire. But when they watch somebody, there is no activity. So we had evidence - we hadn't proved it conclusively - that autism is caused by a deficit in the mirror neurons.
Previously, people had suggested other theories of autism. For example, there's a theory that autistic children have a deficient theory of other minds. All of us construct a theory about what is going on in someone else's mind, enabling us to predict their behaviour. Autistic children are not able to do that. But this is not really an explanation, because it is restating the observation. So I didn't find it very helpful.
Similarly, people have found enlarged brains in cases of autism. They have found changes in the cerebellum. But this doesn't explain the symptoms. When the cerebellum is damaged in a child for some other reason, like a stroke, you get intention tremor, ataxia and abnormal eye movements. You get other problems. But you don't get social isolation, withdrawal.
On the other hand, the impoverishment of the mirror neuron system explains the symptoms that are unique to autism and are not seen in any other disorder.
Now this is based only on one or two patients, and there is this challenge. Initially another group said it had used some other technique and did not find the same changes that we did. Now the same group has repeated this on dozens of subjects and essentially found that our theory is correct. My own student, Lindsay Oberman, has now recorded over a dozen autistic children and a dozen normal children and found this holds up. Now, whether the deficiency in the mirror neurons is, in turn, caused by some other deficiency, we don't know. But what you want to get at with any neurological disorder is whether you can explain the symptoms you see in terms of what's gone wrong. So, in short, we found the cause for autism in 2000.
In your Reith lecture, again, you seem to suggest that the conscious experience of willing something like, say, bending your finger, occurs after the brain actually begins performing the function - sending signals to bend the finger. It's as if our conscious willing is a result of bending the finger, rather than the other way around.
Even though we... ?
Even though we assume it's the opposite. Don't you think this has implications for... free will, for instance, or responsibility of the individual?
Well, this observation was made by Benjamin Libet and others. It turns out that if I tell somebody to wiggle his finger whenever he wants to in the next five minutes, the amazing thing is, when he wiggles his finger, you can ask him to note where the second hand of a clock was at the moment he sent the command to wiggle the finger - say, he sends the command just before the hand reaches the twelve. But you have picked up the brain signal a full second before that!
So, in principle, you can tell the guy ahead of time, `You are going to will it now' - we haven't actually succeeded in doing that yet because of technical problems. Now, if it's his will, how can you tell him ahead of time when he's going to will it?
This raises the whole conundrum about whether free will is an illusion. And so it obviously has all kinds of implications for philosophy. Now, whether it means free will is real, or it is completely illusory and we are all puppets on some sort of cosmic string, is something we can talk about for hours. But it's a clear example of how neuroscience has a direct relevance to philosophical questions.
I think you have anticipated that when, in your Reith lecture, you quote Freud saying: "Your conscious life is nothing but an elaborate post hoc rationalisation of things you really do for other reasons."
Yes, that would be the neuroscience view. But on the other hand, to balance this out, I was reading a book by Erwin Schrodinger, the famous physicist, who has written eloquently about mind and body and brain, but coming at it from completely the opposite direction - actually from the Vedantic perspective. He says, look, everything we know about neuroscience is a cascade of chemicals; you can't deny that, it's staring you in the face. And yet every time you say, I moved my finger - how do you reconcile this? Either you have to say neuroscience is wrong, or you have to say I am wrong. One of these has to be wrong.
But he says, not necessarily. He says, you have to say that will is actually acting through me - the cascade of chemicals - and in fact pervades the cosmos; and this gets close to the idea of Brahman.
And you would go along with that?
No, but I think it's a very provocative and important idea which is completely ignored by mainstream neuroscience, by mainstream physics. I think he's on to something very, very important, as I say in my recent essay on the `Edge' website, where I talk about the brain in the vat.
The speculative part of `The Emerging Mind' concerns aesthetic theory and the whole area of synesthesia - for example, the idea that language arose out of synesthesia. There has been a lot of scepticism of late about some such scientific subfields, particularly evolutionary psychology, due to what many see as excessive speculation. For someone who is very strong on the empirical, and has also got a lot of empirical results, you seem to have no hesitation in plunging into the realm of the speculative.
I think you've put your finger on it. On synesthesia, there's not much there that is speculative. It's empirical science, we've done the experiments, we've done the brain imaging, people have confirmed it umpteen times. Of course, it started as a speculative enterprise, but it's basically experimental science. What's speculative are its implications for metaphor and language. I would say, some of it is speculative, some of it is not.
Is it part of your pursuit of mirror science?
On the other hand, my work in aesthetics and art - that's pure speculation. Some philosophers attacked us - me and Semir Zeki, who has written about the neural basis of aesthetic response. I think they have a deep-seated misunderstanding about what science is all about.
We are like children. There's a more playful aspect to science. It's not a deadly serious enterprise like philosophy. Now, there are some philosophers who also have that sense of playfulness - like Wittgenstein did, like Bertrand Russell did. But for the Oxford school, for example, it's a deadly serious enterprise.
If I have done my homework, established my credentials in one area of neuroscience, it's perfectly OK to have fun, to make some whimsical forays into other areas; because we've paid our dues, there are areas in which we've made solid contributions.
It's very common among physicists. Penrose has done solid work in physics, and he also speculates; some of it is outlandish and most people don't agree with it. Chomsky has done solid work in linguistics, but people don't agree with his political views. Likewise I feel that...
By the way, since you mention Chomsky, you arrive at the conclusion that his theory of language being wired into the human being is close to divine intervention. He won't be happy with that.
No, no, I mean only one aspect of what he said. On the fact that there are hardwired rules in language, I accept Chomsky's claim and his genius. The extent to which it is hardwired - that is the debate. I think there's more nurture involved. But he would agree.
You know, any great scientist has to start with a sort of a caricature. This is why he was attacked so viciously by philosophers and sociologists and linguists. I am trying to do something similar with aesthetics, and so is Semir Zeki.
Speculation is all right in science so long as you make it clear that it is speculation. Even though speculation may be playful, there is a serious agenda. After all, philosophers have tried to understand art for three millennia. To put it bluntly, not much progress has been made. So, we're just having a crack at it. Philosophers have the view that we scientists are people who wear lab coats and go step by step, subjecting each hypothesis to deadly rigorous, rational deduction.
Q.E.D.
Yes, Q.E.D. You do that with Euclidean geometry. Science doesn't work like that. It's like a fishing expedition, especially in the early stages, when a science is in its infancy. You engage in speculation, there's a sense of joyous abandon, and you deliberately make provocative remarks, like Chomsky did.
For example, I say all art is caricature. Of course, I don't mean that literally. It's to get people thinking about these issues. But underlying this playfulness there is also a serious agenda. Now once a scholar understands that, he or she has a different attitude to what I am saying. The trouble is there are many pedantic philosophers who don't understand the spirit of the enquiry. That's one problem. So my goal is to see whether there are artistic universals.
The second problem is that sometimes I use the words `art' and `aesthetics' interchangeably. That's a mistake. Again, as I said, when you are beginning an enterprise, that's acceptable. You don't worry so much about semantic hygiene. For example, Francis Crick used to talk about consciousness. So this philosopher from Oxford in the audience gets up and says: Professor Crick, first define consciousness; then we can talk about it. You don't even have a definition of it. This is not how we do things.
Crick's reply was: My dear chap, we leave that to the philosophers. There was never a time in the study of biology where ten of us sat around and said, look, let's define life first. Get a clear definition. Then we can investigate it. We just went out there and found out what it was.
So science as an enterprise is totally different from philosophy. It's a highly pragmatic business.
So, I said, OK, here are ten laws of art. And I make it clear that these are not like laws of physics.
These are trial balloons?
Exactly! They are trial balloons. While a majority of scholars were intrigued and impressed, some of them got riled up and very territorial. Some art historians wrote: how would he feel if we came into Dr. Ramachandran's field and started speculating? I would welcome it.
There are no boundaries in scholarship. So two points: one, they have no understanding of how science works - only the best philosophers do. The second problem is the failure to distinguish between aesthetics and art.
So I shouldn't say, `universal laws of art.' I should say, `universal laws of aesthetics', because art is a loaded expression. [Takes book from the top of the table and places it below the table]: This [act] is a work of art. Disprove it. So I don't even want to go there. There's a whole arbitrariness that comes into it, especially when you get to contemporary art.
Earlier Indian art - Abhinavagupta - adhered to canons of aesthetics. Contemporary art, just like a lot of Joycean literature, became a rebellion, because they are saying: we define art by that which is not art; how dare you question us? If you start saying that, I don't want to go there. So what I am saying is, let's talk about laws of aesthetics. Some of it is also going to be relevant to artistic experience. Some of it won't be.
Many a philosopher has actually told me this is all stupid; there is no such thing as an aesthetic universal, because I can do this [takes a book and drops it on the table] and it's art. That's not an argument. But then, I could, equally, say: tell me one person who doesn't find the Taj Mahal attractive. Out of a hundred, you may find one. But ninety-nine find it a life-changing experience. Tell me one child who doesn't enjoy a kaleidoscope.
So what I suggest is that for every one of these aesthetic principles or laws, there are three corners: what, why, how. `What', meaning what is the law - symmetry, for instance; stating it is easy enough. Why symmetry? You mentioned evolutionary psychology. In other words, why did it evolve? It evolved to detect prey, predator or mate. That's all there is to it. Otherwise the Taj Mahal would not be appealing. That may sound reductionist, but that's the truth. But that doesn't mean that if you simply make something symmetrical, it's going to look beautiful. This is the mistake they make. So, like that, I have a whole list. Some of it may turn out to be important laws, some of it may fizzle out and I may be wrong.
There seems to be a parallel here with what you say elsewhere in `The Emerging Mind' about the correspondence of sound and the figurative illustration - the `kiki-booba' example, where `kiki' summons up sharp, angular shapes and `booba', rounded shapes. Are you suggesting there would be a similar kind of consonance in the realm of art?
That's correct. One of my principles in art is visual metaphor. You are calling it consonance. Nobody can deny that this principle exists. If I were to give a child a `booba' shape and call it `kiki'...
There will be dissonance.
It will be a crime! Another problem is a deep-seated fear of reductionism. They think if you explain something, you explain it away and it disappears. And territorial fear. It's ridiculous. In scholarship you can't be territorial. It is people who cross the boundaries who make discoveries.
But then when you talk about modern art and the distortion in it actually bringing it closer to the original, the problem may be that the intent or purpose of art is not to simulate or approximate to any original. That the original is farthest from the mind of the artist.
No. No. That's correct. It's not a simulation, but at the same time, it may actually heighten the original in an interesting way.
But should the `original' be part of the discourse?
That's where art and aesthetics part company. Because you can have what is called conceptual art, where it doesn't have to be anything. You can have a blank canvas and call it art because it's about the vast emptiness of the universe. Now, can I explain that without getting into why conceptual emptiness is attractive? That's why I said, using the term `aesthetics,' you're safe. And if you say art has partly got to respect aesthetics, it's fine. But once you start getting into Dada and say it's a rebellion against anything lawful, then, by definition, I cannot come up with some law of art, can I?
That's its own rule, isn't it?
That's its own rule. Exactly. But I strongly believe that some of the tricks they use to distort it... why is it that you see a Rodin and you get turned on, but when some streetside artist distorts it randomly you can see that it's rubbish? I don't think it's purely subjective or indoctrination. I think the great artist is cleverly tapping into these laws and amplifying them. It's not realistic, it's hyper-realistic. It's doing something, which you are not aware of, to titillate those neural circuits.
And therefore there is some virtual archetype to which it aspires?
Platonically yes. But I am giving it a neural substantiation and arguing that you have an alphabet in your brain and the clever artist is somehow tapping into that alphabet and amplifying it. Now, this cannot explain a toothbrush and a toilet bowl.
So Rodin may be explained by this, but not Duchamp?
Absolutely. That's a good way of stating it. Even Picasso may be OK.
Let's move to some methodological issues. When studying animal brains one can induce lesions, which may not be possible with human brains. Also, for many experiments of a neurobiological nature, animals can be used as models of the human body, but this is not possible for interesting areas like language use, which are unique to humans. So you are left with non-interventionist techniques for a good part, aren't you?
Yes, but remember that sometimes nature gives you an experiment, because you may have a patient with a very small lesion. If the stroke is sufficiently small, with the imaging techniques now available, you can pinpoint it. You can find another patient with a lesion in a different location. And then you do the behavioural experiment and correlate. Even better than that, you've now got magnets which can temporarily inactivate a part of the brain. And then you see how the behaviour changes.
So there's no sense of constraint?
One constraint is that with this temporary inactivation, we can still do only surface structures, not deeper structures.
BRAIN IN A VAT
Are there any problems with the ethical dimensions of neuroscience experimentation, like, say, where does legitimate enquiry stop and exploitation begin? Do you have a framework of ethical codes you work with?
Yes and no. Obviously there are tremendous ethical issues here. But I would say no more than the problems of cloning, the problems of abortion, the problems of nuclear weapons - they are ethical problems, but they are in a sense political as well. They have to be tackled politically.
Take, for example, an extreme case I have mentioned on the `Edge' website. Imagine five centuries from now I am a mad neuroscientist and I produce a brain in a vat. I take your brain and put it in a vat. I can give all the right patterns and make you think you're Bill Gates, Hugh Hefner, Mark Spitz, Crick... everybody you want to be - but retaining your identity. In other words, your personal cherished memories of upbringing in Kerala - I am not wiping off all that, but, as bonus, you also have the abilities of a Crick, of a Gates and so on. I give you the choice of that artificial brain or the real you continuing. Now, ninety per cent of people pick the real you.
But that's my limitation, isn't it?
Exactly. Because, logically, you're already a brain in a vat - there is the cranial cavity and all that. I ask you which vat do you want and you prefer the crummy vat! But to create the simulations in the vat, you have to understand the brain enough to understand culture, for instance. And culture, by its very nature, depends on the contingencies of what happened.
Every culture is unique because of its trajectory. Without understanding that, how can you programme it in the vat? So you could say that this Frankenstein scenario will never happen. You could argue both ways: that the time will come when all this will be understood; or that given the uniqueness of cultural trajectories, and without a peculiar concatenation of environmental and genetic circumstances, how can you create it?
But does neuroscience set the ground more for the essence than the existence? If you marginalise the existential, and inasmuch as culture - the formative aspects of it - has more to do with the existential...
Science, by its very nature, is about general principles. That is why science is different from history, which is about a particular thing that happened. Somebody could come and say, OK, make everybody like Bill Gates - because you can understand one brain; you don't need to understand all cultures and so on. This gets into ethics.
Five centuries from now, do you want a universe, an earth, with thousands of warehouses with thousands and thousands of brains and vats, all thinking they are Bill Gates, enjoying themselves? This dilemma is the ultimate dilemma! It's like, to be or not to be - that is the question.
Did you make an oblique reference earlier to the possibility of neuroscience drawing from some things that are unique to eastern philosophy or Hinduism?
Not so much in neuroscience as in the epistemology of `who am I?' And there I like Schrodinger's approach to it - and the Vedantic approach to it - saying there is this fundamental asymmetry in nature between my personal point of view, my puny self, my brief personal existence, my vantage point... which simply doesn't exist in physics.
Physics doesn't acknowledge your vantage point. There is only the universe with lots of people, lots of events. The personal point doesn't have privileged status. And yet from your point of view, it is everything.
So there is this fundamental asymmetry, which western science simply doesn't deal with. It just says it's trivial, it's a non question, whereas eastern philosophy is perpetually obsessed with it - this is the whole atman-brahman, dvaitam-advaitam conundrum. Not that I have the answer to this. But I think we have to deal with it.
Do you get valuable insights for your science in any of this?
I do, because western science simply denies the existence of your personal self. Vedanta, and Erwin Schrodinger, say you can't do that, because it is the only reality you know directly. The world that you have created, and the science that is created from your mind, is pushing you out - that's also illogical, isn't it? It says there is no place for you.
So the only way to reconcile these points of view is to say that you always existed and are all-pervasive, and the notion that it is private and is confined to this thing called your body is the illusion - all this maya business.
So if I am able to reconcile the Vedantic-Schrodinger approach with neuroscience or physics without this schizophrenic mentality, it's fine. But I must confess it's still an eternal riddle. It's not that I am claiming to have solved it. But ninety-nine per cent of scientists are not even aware of this.
The brain, with a trillion or so synapses, is such a fascinating field.
Yes, but before that - you may think that this brain-in-the-vat business is some science fiction stuff, that it's not going to happen any time. And I agree it's not going to happen soon. But we are approaching that. Your blog, the i-way, email, webmail - we are all already being assimilated into the brahman of cyberspace. What is the individual but a node in this huge Internet? We are approaching that stage where we are becoming like the brain in the vat.
The second point I make is that people think it outlandish that you can use the information in your brain this way. You see, one brain is associated with one mind. Suppose I replace your brain with an identical brain; obviously you will continue in that brain, because the atoms in your brain are already being replaced every few months anyway.
So the atoms are not critical. It's the information. But if the information is critical, what if I put two vats? Which one do you continue with? Who decides that? Is it god? There are all these conundrums that emerge.
While the brain is this tremendously diversified, digital kind of milieu, you also point out that the bottleneck is `attention.' There it looks like it becomes analogue. This whole digital realm moves into an analogue sequencing, because you can only pay attention to one thing at a time.
You're right, because people get this false sense of security that it's all at their fingertips. There's no point in having it at your fingertips; [pointing to his temple] it needs to be here. And you've got the bottleneck.
So ultimately it's all our understanding of it, isn't it?
Absolutely. I find this in a lot of my students. They don't read journal articles any more. We used to go to the library, xerox an article, take it home and read it. Now I know I can wake up in the middle of the night and my journal article is there on the Internet. So I have this false sense that it's already there and that I can get it into my brain any time I want. So the net result is I never read it. My colleague Sydney Brenner, who is a Nobel Prize winner, tells his students, don't xerox it, neurox it!
Where do you go from here? What is the most exciting thing you are looking forward to doing, or working on?
One is very rigorous empirical neuroscience, like our work with autism, trying to link it with neurons, trying to understand synesthesia, perception and so on. Now I don't mind, unlike many of my colleagues - actually many of my colleagues do it too - engaging in other pursuits on the side. Speculating on things which are very, very difficult problems, like how do you do sequential juggling of symbols in the head? Or what is aesthetics?
So long as you make it clear that these are, as you said, trial balloons. I don't think five years ago people were thinking about neurology and aesthetics. And now I find, after I published this article, and Zeki published his book, that there are two camps. There is a fan following and there are those who oppose us.
But there are scientists who would be worried about the kind of speculation that you hazard only because it might detract from the solid work that they have done.
No, it rarely detracts from one's solid work. Any scientist knows that science often begins as a speculative adventure; `What if' is the driving force behind most major discoveries - as when Einstein asked, `What if I move away from that clock at velocities approaching that of light?' We don't simply gather facts through observations, or, as Medawar said, `We are not cows grazing on the pasture of knowledge!'
Secondly, when you have `paid your dues' doing solid experimental work, people don't mind your speculating a little on the side, as when Crick speculates on consciousness.
The important thing is to make it clear which part of your argument rests on solid ground and which part is speculative. So long as you do that, your colleagues, at least the bright ones, always welcome speculative ideas.
Thank you very much.