
AI: Alienating Intelligence

They say that the machines are coming for our jobs, but when we look behind the hype we see the same forces of capitalism that have existed for centuries, writes the FUTURE OF WORK GROUP

IN many ways, the adoption of Artificial Intelligence (AI) systems by the capitalist class parallels the adoption of other technologies like machinery and robotics.

The capitalists recognise that their business depends on skills embodied in the workers, and so they seek to divide the labour, automating the parts they perceive as “skilled” in order to keep wages down.

Quality matters less than profit through exploitation: just as mass-produced industrial commodities were less refined than the artisan work of craft masters, so the knowledge work of a computer is of lower quality than that of a skilled expert.

In each case, this reduction in quality is a choice made to maximise profit, not an inevitable result of automation.

For example, the automated navigation directions followed by a gig-economy driver will be lower quality than the route chosen by a taxi driver with “the knowledge” of the roads and traffic patterns in their area.

We no longer hear about drivers following their sat navs onto farm tracks or into rivers, but not because the technology has become excellent: simply because it has reached a barely acceptable standard.

As a result of this automation, workers are “alienated,” to use Marx’s term: removed from their human nature and perceived by their employers as little more than embarrassingly organic parts in a machine, employees who give up their labour power to perform a specific, limited task in return for the wages of subsistence.

Rather than using computers as tools to improve their work, workers are used by the machines to do the bits of the job that the computers aren’t up to.

Seen through this lens, AI is the machinery of knowledge labour in the same way that the steam engine was the machinery of physical labour. It takes parts of the intellectual work previously performed by employees (in the early 20th century, the word “computer” referred to a person who prepared tables of calculations), divides that work and automates part of it. Human workers are left doing the parts that cannot, or have not yet, been automated.

There are also important differences between the automation of physical labour with machinery and the automation of intellectual labour with computers and AI.

The creation of factories brought workers together in the cities and on shop floors, making possible the great advances in collective organisation and unionisation of the era.

Meanwhile a gig-economy worker, or a salaried worker who has been sent home (probably to use equipment and furniture they paid for themselves) may only interact with their company through an app that allocates tasks and surveils their performance. No colleagues, no foreman, no camaraderie: the alienation is total.

Under these conditions, building the trust and relationships necessary to organise a trade union presence in the workplace is greatly hindered. The rise in homeworking during the Covid-19 lockdowns has illustrated how necessary the casual “water-cooler” conversations between colleagues are.

The right to this form of communication and organisation must be defended: we have the right to know the names of our co-workers, to fraternise, and to hold ad hoc, private discussions with them, whether we are physically in the same workplace or in front of a smartphone or computer screen. We must be able to name the person making decisions about our jobs and careers, and hold them to account when those decisions are bad.

Now the layers of middle management who spent previous decades demanding more productivity and greater efficiency from their reports find themselves alongside those same reports, being automated out of a job.

The paradox of capitalism wanting more and more people to consume more and more commodities, while paying less and less to produce them, remains wherever the “efficiencies” are found.

From a technology perspective, there are two distinct categories of AI systems. Machine learning software is “trained” to perform categorisation tasks by being shown examples of things from different categories. This type of software can reproduce, in the contexts where it is used, biases that stem from gaps or inequities in the training data: Google’s racist image-recognition algorithm tagged black people as “gorillas,” and Amazon’s sexist CV filter preferred male candidates.
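The mechanism can be sketched in a few lines of Python. This is a purely illustrative toy, not Amazon’s system: a “CV filter” trained on invented, historically skewed hiring data learns to reward the words that correlate with the favoured group rather than with skill.

```python
from collections import Counter

def train(cvs):
    """Learn word weights from the CVs of candidates who were hired."""
    counts = Counter()
    for words, hired in cvs:
        if hired:
            counts.update(words)
    return counts

def score(words, counts):
    """Score a new CV by summing its learned word weights."""
    return sum(counts[w] for w in words)

# Invented, historically skewed hiring data: identical skills,
# but only one group was ever hired.
history = [
    (["python", "rugby"], True),
    (["python", "rugby"], True),
    (["python", "netball"], False),  # equally skilled, never hired
]
weights = train(history)

# The filter learns the bias, not the skill: "rugby" carries as much
# weight as "python", while "netball" counts for nothing.
print(score(["python", "rugby"], weights))    # prints 4
print(score(["python", "netball"], weights))  # prints 2
```

The model never sees gender at all; it only needs a proxy word that correlates with the group the historical data favoured.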

The training work, which gives companies using AI their competitive advantage, may be done by the precariat (as with social media content filters, where people are paid a piecework rate to decide whether a given image shows child abuse, a terrorist beheading or benign content) or even by the customers: when Google asks you to click on all the squares that contain palm trees, you are giving it free labour.

There are untrained AIs too, which don’t take any pre-existing information as input but learn to maximise “utility” or to defeat another AI that is looking for flaws in their work. While these do not learn their biases from human trainers, this does not make them neutral: “the computer said so” is not a gold standard of fairness.

An untrained AI called Pulse, which automatically enhances photographs, will gladly show you the pictures of white people it generated from photos of Barack Obama or Alexandria Ocasio-Cortez. Not being trained by biased humans, then, doesn’t stop the computers from being biased; it may just make it harder for the companies to understand and address that bias.
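The same point can be made with a toy sketch (all numbers invented): a system that simply maximises a “utility” score, with no human-labelled training data anywhere, still inherits bias from the data its utility function is built on.

```python
# A reference set standing in for "images the system considers
# realistic": the majority cluster sits at value 1, a minority at 9.
reference = [1, 1, 1, 1, 9]

def utility(candidate):
    """Reward candidates that match more of the reference set."""
    return reference.count(candidate)

# No human labels anywhere, yet the search converges on the
# majority cluster, "correcting" minority inputs towards it --
# much as Pulse reconstructed non-white faces as white ones.
best = max(range(10), key=utility)
print(best)  # prints 1: the majority value wins
```

The bias here lives in the objective, not in any labelling decision, which is exactly why it is harder to spot and to fix.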

The alienation brought about by AI is not only alienation from the satisfying craft of making high-quality products at work, but alienation from society: through perpetuating historical biases as shown above and by undermining our democratic position in that society.

Algorithms decide which news we get to see and which politicians get to target us with hyper-specific election advertising. If we get to vote, we get to vote for the change a small number of billionaires in California would like to see in the world.

It’s not that the machines are coming for our jobs: the capitalists are coming for our jobs, and machines are the tools they use to make it happen. But they also come for our societies, our workplaces and our ways of living, and we must resist those who use technology like AI against us at every turn.

The Communist Party’s Future of Work conference last winter was a great success. The attendance was in the thousands and the feedback from within the party and the wider labour movement was very positive. The intention of the conference was to promote wider debate and a number of branches and individuals identified the need for more work on the application of Marxist economic theory to the phenomena discussed at the conference. A CPB working group was therefore established and has met regularly in recent weeks. Along with this piece, an article on “The Labour Theory of Value without Labour” will appear in Communist Review in the near future and a third piece will draw the political conclusions that will be discussed within the CPB prior to publication.

The Future of Work Group is: Paul Gurnett (Colchester CPB), Leo Impett (Durham CPB), Hugh Kirkbride (Bristol, Bath & Gloucester CPB), Graham Lee (Coventry & Warwick CPB), Andrew Maybury (Birmingham CPB) and Nathan Russell (Bristol, Bath & Gloucester CPB and West of England YCL).
