
Ask Viktor

An opportunity to ask the course educator your questions
© University of Strathclyde
Throughout this course, you will have the opportunity to ask Dr Dörfler the questions that matter to you.
Please post your questions for this week in the comments section below.
It seems that the end of the course has arrived and summer has started: we have reached a historic low of two questions this week, both of them asked by Hamed Khademipour. Thank you very much, Hamed. However, as in the last two weeks, there were very interesting discussions taking place over the week. I would also encourage everyone to read, if you have not done so yet, the week 4 ‘Ask Viktor’, particularly as I was already talking there about topics related to what I am going to address here. Both last week and this week, it is all about the future – and quite a distant future at that. Last week I was talking about the near future and the fundamental questions of morality and trust, but I also briefly commented on how I don’t know what is going to happen in 100 years from today. Thus, Hamed decided to push me even further this week, and asked what is going to happen in 500 years’ time, particularly with reference to AI. You may be surprised, but this is somewhat easier to answer: I take this question to really mean whether the thinking machine is possible in principle.

What to expect in the distant future? – Is the thinking machine a possibility?

So here is the first question by Hamed:
“In regards to step 5.3 and your reference to the book “The Emperor’s New Mind”: you mentioned that, according to the book, the AI of a fictional future supercomputer is nowhere close to human intelligence, getting beaten by a 13-year-old. What about the distant future, say, 500+ years from now? I mean, who would have thought back in Renaissance times that a box connected to electricity would beat the grand master of chess (Kasparov) in 500 years’ time.”
I want to note that my answer is an entirely personal view – you are really asking what I believe. The short answer is: no, I don’t believe that real AI, a thinking machine in the human sense of the word, is even an in-principle possibility. But this is only a matter of belief. I can try to explain why I believe this, and those who do believe in the thinking machine can explain why they believe in it. We cannot prove each other wrong, or ourselves right, at the moment. I will never be able to prove them wrong, as they can always say that we are not quite there yet. However, if they are right, they will one day be able to prove me wrong by creating the thinking machine, and then they can say: see, Viktor, you were wrong – here is the thinking machine. Of course, I will then try to do what the 13-year-old did; I will ask ‘What’s up, partner?’ They have probably also read Roger Penrose’s book, so they will have pre-programmed an answer to this question, and I will need to play smarter – quite literally, try to outsmart the machine. A reasonable person would not place herself or himself in this position. But I am unreasonable. As George Bernard Shaw said:
“Reasonable people adapt themselves to the world. Unreasonable people attempt to adapt the world to themselves. All progress, therefore, depends on unreasonable people.”
At the core of this difference of belief is the question of what the mind is. They believe that they don’t even need to attempt to understand the mind: it is sufficient to replicate the brain, and that will simply reproduce the mind. However, I believe that this is not possible. The brain is simple compared to the mind, and I don’t believe that the brain produces the mind. Sure, the two are somehow related – but not in that oversimplified way. As I don’t believe that the brain produces the mind, I think that if we wanted to build a mind, we would actually have to understand the mind. However, this is not possible, as nothing can understand something that is of equal or higher complexity than itself. I am not sure who said something along these lines:
If the mind were simple enough for us to understand it, we would be too simple to understand it.
However, as I have emphasised previously, I am very afraid of those who want to reduce thinking to data processing – NOT because I think that they may succeed, but because they may convince many people that they did. Remember my fight with the guys at Google about the program that supposedly understands the concept of a dog?

The historic role of technology

The second question by Hamed was:
“Viktor, what do you think of this interpretation of our future by the famous radical historian Yuval Noah Harari?”
First of all, thank you for the link – it is an interesting interview, and I highly recommend it to everyone. I have now added it to the ‘further readings’ list. I don’t fully agree, as you may expect, but what he says is definitely worth listening to. His logic is impeccable; no disagreement there. However, we do disagree about the premises: I don’t think he adequately understands computers. Even so, while I think that those who believe it is possible to build a thinking machine are wrong, it is still worth exploring what would happen if they succeeded… and this is what he does.
There were also some sales-oriented tricks that I did not like. He says that there is too much talk about how thinking machines will make our lives great, and that he wants to provide a negative view as a counterpoint. In fact, if you search YouTube or the internet more generally, as I did in preparation for this course, there is much more about the negatives than about the positives. However, Yuval goes far beyond the ‘usual’, which is typically limited to how we will all lose our jobs to the robots. And, as I said, his logic is impeccable.
There is also one sort of semi-mistake: there were ‘useless’ classes, more precisely an ‘underclass’, before in history. Just think about what Plutarch tells us about the Spartans throwing weak or deformed babies off the cliff, or about the outcasts and untouchables of various cultures. What has never happened before is that the well educated were made ‘economically useless’. And yes, if we are to believe that such technology is possible, technology, and the thinking machine in particular, could play a huge role in this.
I think it is also very interesting to think about how such technology could enable the rich to be more talented than the poor – but I don’t believe that this is possible. I do believe that many improvements could be made to bodies, as far as we understand them; perhaps one day (back to your 500-year prediction now, Hamed) even a fully artificial body will be a possibility. However, he also talks about, for example, enhancing creativity – I don’t think this is possible in the sense of bio-engineering. We have at least 20,000 years of history of trying, typically using some psychotropic material (natural or artificial). What these occasionally achieve, for a shorter or longer time, is the removal of certain barriers, in the long term usually leading to very unhealthy and often disastrous consequences. But even in the short term they don’t make anyone creative.
Imagine for a moment that we discover the ‘deviance gene’. A government could opt to forbid deviance and fix this gene in all babies, in order to stop people becoming criminals. And it could work. However, it would also stop creativity. There would be no more criminals, but there would also be no great scientists, poets, painters, or film directors: being creative is just as deviant from the norm as being a criminal. Of course, it will not come to that, as we still understand very little of genetics. Some statistical analysis could find that most criminals share a particular gene pattern, but there are two problems with this. (1) We should not stop looking there: if we looked further, we might see that a large part of the rest of the population has the same genetic pattern – of course, there is no database with everyone’s genetic data, and even if there were, it would take forever to process. (2) Changing something about that genetic pattern could also cause all those affected to be blue-eyed, to have six toes, or to get high blood pressure at the age of 45. We have little idea what genes actually do and how many things can be inherited. For instance, I was over 40 when I noticed that my beard goes white in exactly the same pattern as my father’s did when he was around 40. And when my younger brother reached 40, his beard was also going white along exactly the same pattern…
Why is this important? Because Yuval is perfectly right that there are already algorithms deciding whether or not you get credit. They discover certain patterns in the data footprint of those who fail to repay their debt – and you fit the pattern. They don’t bother to check whether there are others who also fit the pattern but repay as required; if many who fit the pattern fail to pay, the bank may decide to refuse credit to everyone with such a pattern – after all, who cares if they also lose some potential customers who would have paid back. We are here again: I am not afraid of technology. But I am very afraid of people who want to replace thinking with data processing.
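The statistical trap behind both the gene example and the credit example is the base-rate problem: a pattern can be common among defaulters and still mostly match people who repay, simply because reliable payers vastly outnumber defaulters. The sketch below illustrates this with entirely hypothetical numbers (the population size, default rate, and pattern frequencies are invented for the example, not taken from any real lender):

```python
# Toy illustration of the base-rate problem behind pattern-based
# credit decisions. All numbers are hypothetical.

population = 100_000
defaulters = 2_000                     # suppose 2% of applicants fail to repay
reliable = population - defaulters     # 98,000 repay as required

# Suppose a "data footprint" pattern shows up in 90% of defaulters,
# but also in 10% of reliable payers.
defaulters_with_pattern = int(0.90 * defaulters)   # 1,800
reliable_with_pattern = int(0.10 * reliable)       # 9,800

flagged = defaulters_with_pattern + reliable_with_pattern  # 11,600 refused

# If the bank refuses credit to everyone who fits the pattern,
# what fraction of the refused applicants would actually have repaid?
share_wrongly_refused = reliable_with_pattern / flagged
print(f"{share_wrongly_refused:.0%} of refused applicants would have repaid")
```

Even though the pattern captures 90% of defaulters, the overwhelming majority of the people it flags are good customers – which is exactly why “fitting the pattern” is not the same as “will not repay”.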
This article is from the free online course Understanding Information and Technology Today, created by FutureLearn.
