Ethics and dangers of artificial intelligence
Not all aspects of Artificial Intelligence (AI) are positive. Although ethical and legal considerations fall outside the scope of this open course and the AI Technologies for Business and Management program, it’s worth having a brief look at some of the main issues concerning the use of AI systems.
The dangers of AI
There are always concerns about allowing AI too much autonomy. The Dow Jones ‘Flash Crash’ of 2010 led to accusations that ‘algorithmic trading’ had contributed to the problem. While this was disproved, it did highlight concerns over AI systems running unchecked. More recently, incidents involving Tesla’s Autopilot driver-assistance system have also raised fears.
While the root cause is usually cited as human misuse of the AI system – in other words, the human error of failing to oversee the AI – these incidents do little to allay the fears of those opposed to the increased use of AI.
Beyond the ethical considerations of autonomous AI systems, as described above, we must also consider the ethics of the collection, storage and use of the data itself.
To help deal with this, the EU has introduced the General Data Protection Regulation (GDPR), and more than 80 countries have similar data protection laws in place. In general, these place requirements on data gathering: have we made it clear what data we are gathering, what it will be used for, how long we will keep it, and so on?
The seven key principles of data protection are:
- Lawfulness, fairness and transparency
- Purpose limitation
- Data minimisation
- Accuracy
- Storage limitation
- Integrity and confidentiality (security)
- Accountability
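To make two of these principles concrete, the sketch below shows how purpose limitation and storage limitation might be recorded in code. It is a minimal illustration, not a compliance implementation: the `PersonalDataRecord` class, its field names, and the retention check are all assumptions made for this example.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class PersonalDataRecord:
    """A hypothetical record of one item of collected personal data."""
    subject_id: str
    purpose: str          # purpose limitation: why the data was collected
    collected_on: date
    retention_days: int   # storage limitation: how long we may keep it

    def is_expired(self, today: date) -> bool:
        # Data held past its stated retention period should be erased.
        return today > self.collected_on + timedelta(days=self.retention_days)

# Example: data collected on 1 Jan 2023 with a one-year retention period
record = PersonalDataRecord("user-42", "order fulfilment", date(2023, 1, 1), 365)
print(record.is_expired(date(2024, 6, 1)))  # True: past the retention period
```

In a real system, expired records would be deleted or anonymised by a scheduled job, and the stated purpose would constrain how the data may be processed.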
Investigate the legal framework around data protection in your locality. What moral, ethical and legal implications might this have for a business? Try to think of both positive and negative impacts.
© Coventry University. CC BY-NC 4.0