2001, Blade Runner, The Matrix, Alien and Star Trek (TNG) all have machines which at least appear to be conscious and emotional. The idea is so ingrained in us that we expect an Artificial Intelligence to be a carbon copy of a human being. But will it be? Will AI computers be conscious? Will they have emotions?
One of the biggest questions about artificial intelligence concerns the matter of consciousness, or "self-awareness". The entities I spoke of in part 6 may have many of the attributes of living beings, but they are still "machines": they are not conscious and they do not have emotions.
Will machines become conscious?
There is no answer to this question yet, since we do not at this point even understand our own consciousness or how it works. How can we add consciousness to a machine when we don't understand it in ourselves? How do you program self-awareness? Can it even be programmed?
On the other hand, perhaps programming won't be necessary...
It has been theorised that consciousness is an "emergent behaviour": a behaviour which appears out of the complexities of an existing system. Could consciousness be a side effect of intelligence, and if so, will its development in our entities be inevitable?
A biological brain is very different from a silicon one. Physicist Roger Penrose has written that quantum effects are present in the human brain and that these are important, as they could be responsible for non-algorithmic functions. Such effects will not show up in a digital system. An analogue system can be simulated, but simulating quantum effects is likely to be impossible, and if they really matter then merely faking them may not work. Are these quantum effects necessary for consciousness to be present? If so, purely digital entities may never become conscious.
If a machine does become conscious it becomes something else entirely: something you can have a conversation with, something which may even have a will. Something which may no longer be interested in doing its master's bidding. Something which could turn against us and attack.
A conscious system brings up all sorts of problems: will it be emotionally driven in the way humans are? Will it even have emotions? If so, how will it react to them?
This is the question behind Blade Runner, where the "Replicants", knowing they have a limited life-span, go back to their creator in an effort to stop their demise. They have not had an entire lifetime to get used to emotions, so they are highly immature when it comes to dealing with them and consequently tend to leave a trail of bodies in their wake.
An interesting aspect of the film is that the Replicants are considered non-human and have no rights. Some people have already demanded human-like rights for the more intelligent animals, so what will be the situation for a conscious entity? There is no problem if they are not conscious, but if they are (or at least appear to be), should we award them human-like rights? "A.I. Artificial Intelligence", which would have been master director Stanley Kubrick's last film, deals with the question of a conscious artificial entity who has very human emotions but no human rights.
"His love is real, but he is not"
One thing I think fiction gets wrong is the idea of emotions in artificial entities. It's by no means clear that they will have emotions, and even if they do, those emotions are likely to be very different from ours. We are biological entities and there are many factors which affect us and our emotions.
Being chemically based means chemicals affect us. Food, drink, temperature, humidity and all manner of other things can change our mood, and in extreme circumstances can radically change our behaviour. Most of us are completely unaware of the effects that many day-to-day chemicals can have on us. If you are a big coffee drinker, try going without for a day and notice how agitated you become; on the other hand, notice how chocolate can make you happy (chocolate was once considered a dangerous drug). Foods are full of chemicals and they can have unexpected results; indeed, some migraine sufferers have unexpectedly traced their suffering to specific foods.
We also have "programming" of sorts, although we don't usually refer to it in such a manner. Curiosity, reproduction and survival are all part of our programming. These affect our emotions in a big way and pretty much dictate large parts of our lives. If you are interested in a pretty girl, is this because you made a conscious decision to be interested, or because you are simply following your subconscious programming to reproduce? Maybe we're not so different from machines after all.
"I met someone who looks a lot like you, She does the things you do, but she is an IBM"
The human entity is a complex system. The artificial entity is a completely different form of complex system which will be affected by different things. I do not believe they will act like us, and I'm not convinced they will have emotions; if they do, I am convinced they will experience them in a very different way.
They may become conscious, but I don't know if anyone can really say one way or the other right now. I don't think we'll have too much to worry about from emotional computers though; I expect they'll be about as emotional as Star Trek's Mr Spock. If they appear to have emotions, it will be because they are faking them.