Question
Can computers fall in love?
Answer
I disagree strongly with Gavan Woolery's answer [update: not so strongly after Gavan's 2 updates], which is the standard "strong AI" position, not because I think there is something ineffable about "love" in particular that AIs cannot mimic, but because
a) there are non-trivial problems that remain to be solved on the specific road to "artificial love" and
b) the specific problem inherits the general mysteries of the problem of consciousness, which, if ignored, make the question completely vacuous
This specific AI game won't be over till the fat lady sings, but I think that can happen.
The general problem, inherited from debates about the nature of consciousness, is unlikely to ever be resolved, since it is in the unfalsifiable domain of metaphysics, not an engineering problem in AI. At this level, it is only an AI problem in a thought-experiment sense: scaffolding on the road to philosophy, like John Searle's famous "Chinese Room" argument, which is also a philosophy problem masquerading as an AI problem.
To borrow David Chalmers' terms, there are "easy" and "hard" problems involved in creating artificial love. The easy problem has to do with "love" in particular. The "hard" problem is basically the problem of explaining consciousness.
The "easy" problem (a misnomer; it merely means "not mysterious") is to merely replicate, with sufficient accuracy and motivation, the functional phenomenology of love. This may end up being an extremely complicated project, but it is not philosophically mysterious in any fundamental way. The thought experiment is easy enough to describe (if you use this outline to write an NSF proposal, you'll owe me lunch).
First, you would study the sexual and mating behaviors of many species (using, for example, the research described in Matt Ridley's "The Red Queen"). You would determine the markers that distinguish pair-bond formation in humans, and set it apart from the behaviors of gorillas and bonobos and other species that offer relevant comparisons. You would study the neurochemistry of attachment. You'd do things with fMRI machines.
Second, having isolated the phenomenology thought to be relevant, you would construct several falsifiable theories of function that attempt to explain why the attachment behavior we call "falling in love" has evolved at all.
Third, once you've done that, you'd attempt to come up with plausible evolutionary-psychology trajectories that explain why the served biological function actually came to exist, and why "falling in love" is at least the locally-optimal solution to the problem it solves, against a suitable and plausible evolutionary fitness-function landscape.
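To make "locally optimal against a fitness landscape" concrete, here's a throwaway sketch in Python (every number in it is invented purely for illustration; this is not a model from any actual study): a one-dimensional landscape where parental "investment" trades off offspring count against offspring survival, plus a mechanical check that the best strategy is a local optimum.

```python
# Toy sketch of "locally optimal against a fitness landscape."
# Every parameter below is invented purely for illustration.

def fitness(investment: float) -> float:
    """Expected surviving offspring for a parental-investment level in [0, 1].

    0 = "stray" (many offspring, low survival per offspring),
    1 = "full pair-bond" (few offspring, high survival per offspring).
    """
    offspring = 10.0 * (1.0 - 0.8 * investment)    # more investment -> fewer offspring
    survival = 0.05 + 0.90 * investment ** 0.5     # more investment -> better survival
    return offspring * survival

def is_local_optimum(x: float, step: float = 0.01) -> bool:
    """Locally optimal: no small deviation improves fitness."""
    neighbors = [min(x + step, 1.0), max(x - step, 0.0)]
    return all(fitness(x) >= fitness(n) for n in neighbors)

# Crude grid search for the best investment level on this landscape.
best = max((i / 100.0 for i in range(101)), key=fitness)
print(f"best investment ~ {best:.2f}, fitness {fitness(best):.2f}")
print("locally optimal?", is_local_optimum(best))
```

The point is not the numbers; it's that "X is a locally optimal solution to problem Y" is the kind of claim you can state and test mechanically once you commit to a landscape.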
That's the analysis part. Now for the synthesis. To avoid creating vacuous AIs that mimic the phenomenology of love without fulfilling its functions or having its local-optimality properties, you would LOOK for problems for which the locally-optimal and functional solution that emerges looks phenomenologically similar to love.
If there are problems like this that are actually meaningful, you're done. But finding such a problem is not an easy task.
If you do, and you successfully create a "love-based" solution to the problem, you've solved the "easy" problem of artificial "falling in love." One can hypothesize, for instance, that some particular problem in AI, like fighting computer viruses, is best solved through antivirus AIs capable of "love." This is not an arbitrary example. There is an actual theory within the field of sexual selection that sexual dimorphism first evolved to combat parasitism. I wrote about this in a post on an idea I call "sexual computing": http://www.ribbonfarm.com/2010/0...
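Since the sexual-computing idea is easy to caricature in code, here's a minimal toy of the Red Queen dynamic it rests on (a sketch under invented assumptions, not the model from the post): hosts and "parasites" are bitstrings, parasites score by matching hosts, and a sexually recombining host population is run against a clonally mutating one.

```python
import random

# Toy Red Queen sketch: hosts vs. co-adapting "parasites." All modeling
# choices here are invented for illustration only.

GENES, POP, GENERATIONS = 16, 60, 40
rng = random.Random(0)

def genotype():
    return tuple(rng.randint(0, 1) for _ in range(GENES))

def infect(parasite, host):
    # A parasite infects to the degree its bits match the host's.
    return sum(p == h for p, h in zip(parasite, host)) / GENES

def mutate(g, rate=0.05):
    return tuple(1 - bit if rng.random() < rate else bit for bit in g)

def recombine(a, b):
    # "Sex": each gene drawn at random from one of the two parents.
    return tuple(rng.choice(pair) for pair in zip(a, b))

def step(hosts, parasites, sexual):
    # Parasites track hosts: the best matchers survive and mutate.
    parasites.sort(key=lambda p: max(infect(p, h) for h in hosts), reverse=True)
    parasites = [mutate(p) for p in parasites[:POP // 2] for _ in range(2)]
    # Hosts flee parasites: the least-infected survive and reproduce.
    hosts.sort(key=lambda h: max(infect(p, h) for p in parasites))
    survivors = hosts[:POP // 2]
    if sexual:
        hosts = [recombine(rng.choice(survivors), rng.choice(survivors))
                 for _ in range(POP)]
    else:
        hosts = [mutate(h) for h in survivors for _ in range(2)]
    return hosts, parasites

for sexual in (True, False):
    hosts = [genotype() for _ in range(POP)]
    parasites = [genotype() for _ in range(POP)]
    for _ in range(GENERATIONS):
        hosts, parasites = step(hosts, parasites, sexual)
    infection = sum(max(infect(p, h) for p in parasites) for h in hosts) / POP
    print("sexual" if sexual else "clonal", f"hosts: mean infection {infection:.2f}")
```

Whether recombination actually wins in this toy depends entirely on the invented parameters; the serious version of the argument lives in the Red Queen literature and the linked post.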
Anything short of this level of complexity that claims to be "artificial love" is an angels-on-the-head-of-a-pin exercise in AI silliness (or some grandstanding TED-baiting TV science). There are certain AI researchers, who shall remain unnamed, whose work is an exercise in precisely this kind of silliness. I have voodoo dolls of these individuals into which I stick pins every full moon (heh! and you thought it was arthritis!).
But that's only the EASY problem.
The hard problem is much easier to state. Assuming the easy problem is solved, is there an "I" inside our hypothetical male/female antivirus programs that subjectively feels something like the human experience of love?
In the philosophy of mind, this sort of question is called a "qualia" question. The hard problem is whether there is a subjective entity that experiences a "love" quale. All the complexity of the easy problem is one giant red herring.
My position is simple: whether or not we solve the "easy" problem is irrelevant. It will have exactly zero bearing on the hard problem.
To see this, you don't have to talk about things as messy as love. Just consider the perception of the color blue. You can already build AIs (heck, it doesn't even have to be an AI) that can detect "blue" and act meaningfully in response. My cellphone is "blue capable."
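To make "blue capable" literal, here is roughly everything such functional competence requires (a deliberately trivial Python sketch; the thresholds are arbitrary):

```python
def is_blue(r: int, g: int, b: int) -> bool:
    # Functional "blue detection": the blue channel dominates. Thresholds arbitrary.
    return b > 128 and b > 1.5 * r and b > 1.5 * g

def react(pixel):
    # "Act meaningfully in response" -- the entire behavioral story, end to end.
    r, g, b = pixel
    print("blue!" if is_blue(r, g, b) else "not blue")

react((30, 40, 200))   # prints: blue!
react((200, 40, 30))   # prints: not blue
```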
But is there something inside my cellphone that experiences "blueness" in the sense we do? Be very careful with your answer. It has NOTHING to do with complexity. It's not ABOUT a sufficiently complex algorithm. If it helps, ponder the fact that there are extremely primitive creatures that can experience blue. Think at the level of the SIMPLEST creature to which you are comfortable attributing an internal "I" that can experience "blueness." For me, the lower limit is probably some sort of simple frog (do frogs have color vision? oh well, you get the point).
Heck, we haven't even settled the simpler question of whether you and I experience blue the same way (known as the inverted spectrum problem in the philosophy of mind), let alone you and an AI, or whether you experience blue at all. Perhaps I am the only one in the whole universe who experiences "blue" and everybody else is just a very clever robot with I/O behavior indistinguishable from mine (this is the 'philosophical zombie' argument).
You'd be wise not to argue this point with me here. I am not trying to pre-empt debate, but there is a reason there are shelf-loads of books and papers about this stuff. This is not a subject for lightweight sparring on Quora.
I started a whole series of blog posts on the philosophy of mind which I abandoned, but I did get to some preliminary thoughts on this sort of stuff. See http://www.ribbonfarm.com/2007/1... ... I intend to finish the series after I retire, but if you can't wait, read David Chalmers' "The Conscious Mind."
People who understand these issues think "consciousness" is still a mystery.
Those who think these issues have been resolved are "strong AI" types.
Those who don't understand that there IS in fact a very tricky question here... well.