[0:12] I think probably the greatest challenge in simulating human intelligence is how far we want to get beyond the concept of the robot or the intelligent device, the Watson or whatever: does it just perform like a human, or does it understand what it’s doing, is it conscious of its actions, and does it take responsibility for its actions?
[0:30] So, how far can we have this robot think, “If I make a mistake, this is going to affect people’s lives; this is going to have an impact on these people”? Or are we just going to say, “I know that I want this prescribing system, so simply: this is what the person, the patient, says is wrong with them, and here’s the appropriate drug”? Then it’s doing it based purely on information and learning; it’s not doing it based on being conscious of its actions, or an understanding of its actions, or things like empathy.
[1:06] So, I’m interested in social robots. Do we need our robot to have human intelligence, or human-like intelligence? Do we need it to empathise with somebody, or do we just need it to do its job, almost? If I’m looking after someone in a hospital, or in a care home, or a retirement village, is it enough that, when they need help or need reminding of their medication, it just remembers and says, “Take your medication”? Or do we need it to say, “I know that if you don’t take your medication you’re going to become ill”, or whatever?
[1:43] So, we have to decide in the near future whether we want these to have human-like characteristics in terms of empathy and consciousness, or whether it’s enough for the robot to merely appear to have these capabilities.
The greatest challenge
In this video, Mark Elshaw discusses what he considers to be the greatest challenge in simulating human intelligence.
To what extent do you agree with what Mark Elshaw has to say? How does it fit in with what you have learned so far?
© Coventry University