Worker responses to AI

The third in a three-part series on artificial intelligence in an age of insecurity

by John Bratton

• In the last issue, I explored how AI already poses tangible harms to our planet, society, and workers. While AI has significant benefits, for example in the treatment of cancers, how it is implemented in the workplace exposes predictable downsides.

Algorithmic management — the use of AI to optimise ‘efficiency’ or, in Marxian terminology, to maximise relative surplus value — is behind the intensification of work.

Algorithms tell the workers what to do, how to do it, and how quickly to do it. One Amazon warehouse worker described how algorithms work like this:

‘You just need to follow the scanner, which tells you go here, go there, pick this, and pick that. You don’t need to think.’ Behind the hype, the implementation of AI is driven first and foremost by capital’s objective to maximise profit.

To round off the discussion on AI and work, here I address the key question: how can workers protect themselves against the harms of AI?

In developing labour strategies to protect the interests of workers, three observations are worth emphasising:

First, AI is not politically neutral but rather embodies the priorities, values, and interests of AI owners.

Second, AI systems are trained using data from the past which reflect structural inequalities, the misogynistic representation of women, and racist stereotyping, often without transparency.

Third, labour strategies will reflect the balance of power in society. The more workers understand about AI, the better equipped they are to advocate for political action and to use their collective power more effectively.

Legislation, union and workers’ engagement, and direct action can all help to protect the interests of workers. Since the Industrial Revolution, legislation — think of the early Factory Acts — has shaped how technology has been applied by employers.

Today, while there is no explicit statutory regulation governing the use of AI in the workplace, several existing areas of law potentially offer some limited protection.

The Equality Act 2010 prohibits discrimination by employers on the grounds of protected characteristics; because AI technologies can exhibit human biases, recruitment decisions based on AI tools may be unlawful.

The Human Rights Act 1998 provides for the right to privacy, which places some restrictions on the use of algorithmic surveillance tools to monitor workers in the workplace and at home.

Britain, however, is an outlier compared to the EU in its failure to protect workers and deliver the benefits of AI to all. The EU’s Artificial Intelligence Act provides a risk-based approach to AI regulation.

Legislative ‘guardrails’ are helpful, but because AI is developing so fast, its impact on workers can change without proactive intervention, which means that worker representatives should be alert to the day-to-day experiences of the workers using AI.

As a labour strategy, algorithmic impact assessments (AIAs) are a new form of ‘voice’ that potentially afford workers leverage to influence the implementation of AI.

Mirroring the aims of the ‘new technology agreements’ of the 1980s, AIAs aim to ‘deep dive’ into the risks and effects of any proposed AI technologies, and then develop action plans to prevent or mitigate any impacts.

While touted as a strategy to tackle the risks of AI, the process is unlikely to unfold without difficulty. The unequal balance of power is likely to influence which ideas, risks, and plans are accepted or rejected. It helps to have high levels of unionisation in the workplace.

Direct action by workers can act as a counterbalance to the rise of AI. In Capital, Volume I, Marx writes about the ‘strife’ between workers and machinery. Writing about the Luddite movement, now synonymous with opposition to technological progress, Marx explains that workers directed their attacks not against machinery itself, but against the way in which technologies were used by factory owners.

‘It took both time and experience before the workpeople learnt to distinguish between machinery and its employment by capital,’ writes Marx. In AI’s short history, workers are likewise directing their ire against the mode in which AI tools are employed by their owners.

For example, in 2023, Hollywood writers, fearful of redundancy, took direct action demanding that AI tools be used only to assist, rather than replace, research or script ideas for films.

On Valentine’s Day, thousands of Uber and Bolt drivers in the Independent Workers’ Union of Great Britain (IWGB) took direct action to bring attention to the poor pay and economic insecurity that they face in their platform-mediated work.

As one Uber driver said, ‘We power Uber, we power this system. If we’re united, we can make a change.’

A combination of legislation, workplace engagement, and direct action is crucial to helping workers protect themselves against the harms of AI. Without human agency, workers risk ever greater levels of exploitation and becoming slaves to AI.

• John Bratton is co-editor (with Laura Steele) of ‘AI and Work: Transforming Work, Organizations & Society in an Age of Insecurity’ — published by Sage, January 2025
