
Resisting AI

In the previous step, you heard about one of the main reasons many people are hesitant to engage with AI technologies: its environmental impact. In this step, we see how everyday resistance to AI can manifest, and how a lack of regulation is exacerbating AI-related issues in the creative industries.
As should now be clear from this course, our goal is to build educator (and student) AI literacy. There are no simple answers and whilst we have shared many examples of possibility and potential, we have also illuminated critical issues and causes for pause and reflection.
By accessing this course, you are engaging, likely reflecting, and hopefully thinking with more clarity about the implications—narrow and broad—of AI for you and your students. But what of those who are actively resisting the ubiquity of AI? Is an AI-embedded future as inevitable as we are led to believe?
One King’s researcher, PhD student Dylan Orchard, is exploring everyday resistance to AI. Another student, an undergraduate on King’s BSc Neuroscience & Psychology programme, Alexandra Nicolae, is also an artist and feels passionately about the immediate and long-term impacts of AI and the lack of current regulation. Below, each of them offers an essay, capturing different aspects of the impacts of AI.

Dylan Orchard

“They all say ‘yes’, but they do ‘no’. That has to be it. There’s no other way to explain these figures. If they would do ‘yes’, the figures would have improved by now. [sigh] It would be easier if they would just say ‘no’. Then you could have a conversation about it [staring blankly].” [1]
A snippet from a manager noticing those who quietly resist AI in the workplace.
As AI extends its reach into both private and public settings, the realities of AI technologies become ever clearer to a growing number of people. Decision-making, cultural imposition, servicing the data economy that drives it: more and more of us find ourselves subject to a system that is, at best, only vaguely understood. More often it is pure mystery; a black box touted by salesmen, politicians, and grifters as the final answer to any question, remote from challenge by virtue of an authority that promises everything but explains little.
Billions are invested in hyped-up narratives presenting each AI deployment as new and revolutionary. In reality, though, certain tells reveal familiar patterns. The magical answers on offer recall old routines of efficiency, cost-cutting, and austerity. Supposedly benign decision-making processes, ordained by automated indifference, reach our daily lives as punitive demands to adhere to new systems. These generate ever more data and erode our human engagement for the sake of an eternal audit that knows progress only as the effective production of numbers.

Small acts of resistance

Faced with these challenges, old traditions gain new impetus. Go-slows, working to rule, foot-dragging, lying to authority, evasion, machine-breaking: everyday resistance takes a thousand different forms and, in the face of AI, they can all evolve on the frontline of tech-solutionist deployments.
On a day-to-day level, most of us will engage in tactics of resistance against AI and datafication at some point: 'adjusting' details on a form to deny data or get a result rather than just handing facts into the void, ignoring the chatbot and making a phone call, leaving a QR code unscanned, ignoring a platform altogether. These are all forms of disengagement from a system that endlessly consumes data points and hands out decisions about how to work, how to manage your home life, or even how to depict 'a dog with six legs' (e.g. through Midjourney's algorithm of mediocrity).
These accessible resistances can seem completely trivial. The giants of AI are unlikely to be all that fazed by me lying about my age on signup forms or not using ChatGPT. When a shiny new AI platform is rolled out at work, ignoring it and writing your own email, or making your own lesson plan might not be revolutionary (although it might have meaning for the person on the other end of it when you invest human time and creativity in the job), but these forms of resistance do serve two major functions.
For one, they cumulatively build a culture of rejection and negation; even where you have to say 'yes' but end up doing 'no', the grip of new tech-solutionism is loosened. When delivery drivers find ways around the algorithms that manage them and deal directly with customers, rejection, no matter how unseen and unnoticed, can frame a whole new culture of (dis)engagement.
Secondly, each individual act of resistance doubles as a statement, to ourselves if no one else. Resistance creates a new space to reject what’s imposed. For artists, that means reaffirming that art, as opposed to assemblage, is a human practice; for Uber and Deliveroo drivers, it’s a claim to autonomy, to a right to dictate their own rhythms and methods of labour; for educators, it may be to assert that human interaction, the process of learning, not the algorithmic enshrinement of statistical results, is the priority. In each case, to remind ourselves of these things is to protect a resource of agency and identity against distant automation.

“They all say ‘yes’, but they do ‘no’…”

When observing everyday tactics of resistance to AI, ‘no’ will always be the keyword. These accessible practices are, if you choose to look, alarm bells ringing. As AI rapidly embeds itself into our society, the red lines of what we don’t want can be quickly paved over by enthusiasm and fear of missing out. In the mundane reactions to AI by those subject to the new systems it generates, signals reveal what really matters and the points at which they want to retain their own agency and control. These aren’t challenges to be overcome and these tactics aren’t loopholes to be closed; they’re warnings to heed if any AI deployment is to be done in service of people rather than for control over them. And sometimes when the answer is ‘No!’, it needs to be accepted.

References

  1. Ybema S, Horvers M. Resistance Through Compliance: The Strategic and Subversive Potential of Frontstage and Backstage Resistance. Organ Stud [Internet]. 2017 [cited 2025 Mar 7];38(9):1233-1251. Available from: https://doi.org/10.1177/0170840617709305

Alexandra Nicolae

The technological boom of the 21st century has revolutionised how we see and use the internet. Working from home, an exponential rise in social media use, and the true birth of artificial intelligence (AI) have taken the internet by storm since the first COVID lockdown of March 2020.

Visual art in an AI world

As an artist of eight years, with thousands of hours of experience across both physical and digital media, I have found the rise of AI in the field of art to be the most soul-crushing, detrimental phenomenon to manifest in the online landscape.
The mass, malicious exploitation of AI by fraudulent accounts has generated widespread unrest in online communities. Pair this with the wider digitisation of artists' practice, from traditional media to their software counterparts, and you get a community full of uncertainty, mistrust, and a new-found sense of doom.
“Surely it’s AI, it’s not real. Why do the proportions look off? Why does the shadow not sit right? I mean—look at the hands! Show the speed paint, or else no one will believe you’ve drawn this.”

Accusations like this plague comment sections across platforms such as TikTok and Instagram, once popular choices for beginning artists. This hostility makes it hard for individuals to maintain a positive outlook when uploading their art online, as online hate and cyberbullying are often just as impactful as verbal abuse in real life. Suddenly, what made art so inherently human, the uniqueness and freedom of having no rules, has disappeared almost overnight. Artists are no longer free to adopt unique art styles, or simply to make mistakes.

It’s not just visual art

The visual art industry is not the only one at risk, however. Spotify laid off a total of 2,290 workers in 2023 alone in favour of using AI for its services [2]. It is worrying that companies have begun using AI to replace humans; this would have been unheard of in the online medium before AI became so popularised. A tool originally made to help humanity better itself, and to provide everyone with tools to boost human performance, has now started to corrupt and replace. But why? Because laws and regulations governing AI are almost non-existent, the public can access it without restriction, often on easy-to-use platforms. Online anonymity, the lack of authority-imposed regulation, and the absence of sanctions against AI content have allowed people to easily blur the boundaries between right and wrong, ethical and unethical. It is the unethical use of AI that produces deepfake videos of celebrities, pitch-perfect duplicates of famous voices, and fraudulent art accounts that post generated images labelled as original artwork.

AI can be extremely useful if accessed ethically. It can aid understanding in technical fields such as coding, and help summarise key concepts for neurodivergent individuals who, for example, might struggle to read through a 25-page scientific report, but still want to understand concepts without the clinical jargon. AI’s sterile operational pattern makes it good at exactly that: sterile, binary work. It should not meddle in the humanities, claiming spots in industries that have for so long thrived from the beautiful uniqueness of the human mind.

References

  2. Padilla S. Spotify slashes staff to move faster into AI – and Wall Street loves it [Internet]. United States: CNN; 2023 Dec 10 [cited 2025 Mar 17]. Available from: https://edition.cnn.com/2023/12/10/tech/spotify-betting-on-ai-podcasts/index.html

Join the conversation

What do you think about resistance to AI in education? Can resistance be a productive force in shaping meaningful and useful engagement with AI, rather than just an obstacle? To what extent would better regulation help protect creative industries? How much agency do educators have in directing this change?

This article is from the free online course AI in Education, created by FutureLearn.
