[0:11] In terms of what the next big technological leap might be, and how it is going to impact our risk profiles for resilience, this is perhaps not something we should think of in terms of a single technology development. We may have these kinds of ideas, that it may be something like artificial intelligence or biotechnology, but what is probably more important to us in the future is how those technologies converge or combine. That convergence may create greater risks, because we are not used to dealing with combined technologies - at the moment we govern technologies quite discretely.

[0:46] Today we think we are in the Fourth Industrial Revolution, where a lot of the digital technologies really come into existence and are not working in isolation but strengthening each other. Technologies such as artificial intelligence, the Internet of Things and blockchain will have a fundamental influence on how we produce things, but also have the possibility to disrupt business models which, if not managed right, can have a huge impact on us as a society. I think technology will move forward dramatically and the world will change dramatically. Even making a serious guess at what it is going to be like in ten years' time is difficult - it will be dramatically different from what it is now.

[1:36] One problem we have with policy and governance is that standard regulatory processes are not quick enough to deal with the incredible pace at which emerging technologies unfold. So we need to find different ways of doing governance. We need to be more agile, we need to be more flexible, and we need to find mechanisms where we involve the business sector, citizens, and the people who are deploying and developing these new emerging technologies, to co-create and co-design new governance principles together. We may think of there being individual technologies which are driving existential risks, or indeed driving benefits to humanity as well.

[2:24] You might think of a technology like biotechnology, which can be applied for the promotion of human health - a better understanding of disease helps us develop better treatments - but there could also be risks from those same technologies being misused to cause human disease outbreaks. There are also more underlying factors, which we may often not really think about, that drive the risks or indeed the benefits of technologies. They have to do not just with the technological systems but also with our social, political, and economic systems.

[2:58] So we also need to think through how we can shape those systems, such as the regulation of technologies, so that they drive those technologies down particular paths which are more likely to be beneficial and less likely to pose risks. We know that the Fourth Industrial Revolution, just like previous industrial revolutions, can offer great opportunities for us as a society. But a lot of people also warn about the potential negatives of these technologies. There is a risk of increasing inequalities in our society, and the impact on jobs is currently very uncertain.

[3:35] The risk I see for the Fourth Industrial Revolution is that we don't take ownership of those new technologies and how they are deployed, because then we risk depending on the interpretation of what ethics and good governance are by a small number of actors in our society. We see that data and data ownership are very important, and that today they are controlled by the countries determining data policy, primarily the US and China, or by a few business giants.

[4:18] We have to be very careful, because ethical standpoints change and technological standpoints change, and a lot of the things we currently think "No, that's never going to happen!" - because practically or ethically it is not acceptable at the moment - may well happen in a few years' time. But when it comes to the military, every country is very much a law unto itself, and what is acceptable to them is whatever is commercially viable - if one country or another can gain an advantage, it can make a profit on it.

[4:53] Because we are not thinking of individual technologies or individual inventions but of different technologies that really strengthen each other, the pace of change is not linear but exponential, faster than in the past. Where improvements in productivity were usually confined to one industry or one country, now a lot of the technologies are based on data, and data travels across borders. Therefore we have to think about these technologies in an international context rather than a national one. A few people would point to the future of artificial intelligence as one of the biggest threats.

[5:35] And again I would emphasise that they are not necessarily seeing this as an artificial intelligence that, as in a movie scenario, is out to get humanity or is evil in some way; it may simply become a clash between what the artificial intelligence is trying to achieve and what is needed for humanity to survive. I think some of that comes from always feeling that threat - if there is something superior, a superior intelligence, how does it look down on us? Does it look down on us? Are we seen as inferior, and would it protect itself above protecting humans?

[6:04] Artificial intelligence runs on algorithms, and the way we shape and write these algorithms has a huge impact on the results that artificial intelligence solutions will provide us. Whenever we have an artificial intelligence solution, we have to make sure that there is no bias, and that really is ensured by the way we code our artificial intelligence algorithms. It is therefore very important to me, for example, that the people who code these algorithms abide by very strict ethical principles, and that they also represent us as a society.
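
To make the point about bias concrete, here is a minimal, hypothetical sketch (not part of the video) of one common way bias in an AI system's outputs can be measured: comparing the rate of positive predictions across demographic groups, often called the demographic parity difference. The function name and the toy loan-approval data are illustrative assumptions, not material from the course.

```python
from collections import defaultdict

def demographic_parity_difference(predictions, groups):
    """Return the gap between the highest and lowest positive-prediction
    rates across groups; 0.0 means every group receives positive
    predictions at the same rate."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Toy, made-up example: a hypothetical loan-approval model that approves
# group "A" three times out of four but group "B" only once out of four.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.5 -> large disparity
```

A metric like this only flags one kind of disparity; deciding which notion of fairness matters, and for whom, is exactly the governance question the speakers raise.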

[6:46] What we have at the moment, effectively, is that for a specific therapeutic treatment, part of a person's brain can be an artificial intelligence system, which effectively decides what signals are sent into the person's brain. Now, if we take that further and say we are going to use it not just for therapy but for enhancement, question number one is: how exactly are we going to link up with the brain? Elon Musk has suggested a neural lace, which is something squidged in underneath the skin on your head.

[7:20] So it has the possibility of enhancing our abilities in all sorts of ways. Even the experts in these areas aren't quite clear on when some of these technologies may be able to improve themselves, where that will lead, and whether there is the possibility that we get, for example, a level of artificial intelligence that is in advance of human intelligence. The worry I've got for 2050 is that some of the predictions of people like Ray Kurzweil, and even Stephen Hawking, have all come about, and we have a society that is run by intelligent machines. So that brings us back to how we would govern such a technology.

[8:02] Is there a way of making sure it is aligned with the value of humanity's future survival? This in turn becomes a major problem, and I think this is why there is such concern about where those risks might lead. It is because we find it so difficult to establish what human values are, what we hold, and to hold them in a non-conflicting manner - we often hold values that are in conflict with each other. So if we can't do that, and work out between ourselves what our values are, how can we train something else, this artificial intelligence, to protect those values correctly?

[8:38] So I think that is where this worry comes from: that this technology could move beyond human capacity to control - and then what happens?

The impact of technologies on the future

We’ve seen that technological advances have the potential to completely change the societal landscape for which resilience is being developed, but what are these advances and what risks and advantages do they bring?

Watch the video in which three specialists in the field of emerging technologies discuss what the next technological leaps are likely to be and what impact these will have on future societies.

Dr Catherine Rhodes is the Executive Director of the Centre for the Study of Existential Risk and Senior Research Associate for the Biosecurity Research Initiative at St Catharine’s College, Cambridge.

Her research revolves around the interaction of science and governance in addressing the major global challenges that societies are likely to face in the future.

Kris Broekart is the Government Engagement Lead at the Centre for the Fourth Industrial Revolution at the World Economic Forum in Geneva.

His interests include the development of new governance procedures and mechanisms that will be able to respond to the challenges of the Fourth Industrial Revolution.

Kevin Warwick is Professor of Cybernetics at Coventry University.

His research in the field of cybernetics has included self-experimentation that has led him to be referred to as the world's first cyborg. He successfully underwent a neuro-surgical implantation procedure into the median nerve of his own left arm to link his nervous system directly to a computer, and has used this to assess the latest technology for use with the disabled.

Further Resources

In May 2019 Catherine Rhodes co-authored an opinion piece for the UK's Metro newspaper's 'The Future of Everything' series:

Rhodes, C., Sundaram, L., and Holt, L. (2019) Artificial Diseases will Pose a Threat to Humans, but not in the way you're Thinking [online]. available from https://metro.co.uk/2019/05/16/artificial-diseases-will-pose-a-threat-to-humans-but-not-in-the-way-youre-thinking-9419362/

Kris Broekart was a co-author of the following report from the World Economic Forum on future governance.

WEF (2018) Agile Governance, Reimagining Policy-Making in the Fourth Industrial Revolution [online]. available from http://www3.weforum.org/docs/WEF_Agile_Governance_Reimagining_Policy-making_4IR_report.pdf

Professor Kevin Warwick has a substantial body of academic work, much of which is available open access from the Pure Portal at Coventry University. One article that may be of particular interest is an essay from 2016 in which he discusses transhumanism:

Warwick, K. (2016) ‘Transhumanism: Some Practical Possibilities’. FIfF-Kommunikation. Zeitschrift für Informatik und Gesellschaft [online] (2), 24-25. available from https://pureportal.coventry.ac.uk/files/4029936/transcomb.pdf

This video is from the free online course Foundations in Resilience, Security and Emerging Technology (Coventry University).