How Can We Stop the Machines From Devouring Our Humanness?
The cost of (seeming) convenience and speed can be steep

Welcome to all the new subscribers who found my work this week — I’m so happy you’re here!
No update on that thing just yet, unfortunately. In the meantime, I’m trying not to let it take up any more of my time and get back to the usual work. If you’re enjoying it, consider giving this essay a like, sharing it, or becoming a paid subscriber. Or, if you prefer, you can always buy me a coffee instead. Thank you!
These past few days, I’ve had this quote by writer and artist William S. Burroughs stuck in my head, again:
What does the money machine eat? It eats youth, spontaneity, life, beauty, and, above all, it eats creativity. It eats quality and shits quantity.
It certainly feels like almost every actual machine we’ve breathed life into qualifies as one — as a money machine. And yet here we are, some of us now gleefully or absent-mindedly becoming their obedient servants, placing them on pedestals once reserved for goddesses and gods, and praying to them for the things we most desire — attention and influence and fame and money and all that other stuff humans never seem to get enough of, no matter how it’s earned.
This has all become quite clear to me in the wake of the recent plagiarism scandal I was at the centre of. So much of what we see online, so much of that seeming success and deep thought, is just smoke and mirrors, propped up — to a frankly frightening extent — by machines that couldn’t care less about what they’re doing or why. They can justify any action, compose any apology, and lie, lie, and lie.
I believe (perhaps naively) that it’s still possible to spot this. To weed out the human-like from the human-made.
The bad news is, though, we’re all increasingly becoming fodder for that machine-powered system, too, even if we don’t necessarily realise it, and we’re all being devoured by it, in one way or another.
AI technology — particularly large language models (LLMs), a type of generative AI and arguably the most popular machine at the moment — is increasingly hard to avoid. Two tools I’ve used for years in my work, Notion and Grammarly, now have AI functionality baked in as well. I have to say, seeing that little ‘Ask AI’ icon gets me more and more worried. Will these tools one day scrape my data, work, and research, now served up to them on a digital silver platter, to train their models? Will I find even more echoes of my thoughts and ideas, haunting me from the strangest of places?
For now, both companies claim not to do that.
But, of course, that could easily change at some point in the future.
Another tool I’ve used, WeTransfer, the file-sharing service, recently sent shockwaves through the creative community after updating its terms of service. As of 14 July, users were told that they are granting WeTransfer ‘a perpetual, worldwide, non-exclusive, royalty-free, transferable, sub-licensable license to use [their] Content for the purposes of operating, developing, commercialising and improving the Service or new technologies or services, including to improve performance of machine learning models that enhance our content moderation process, in accordance with the Privacy & Cookie Policy.’ This is essentially lawyer-speak for ‘if you upload files through our service, we’ll use them to feed our hungry AI models.’
After a fierce backlash, WeTransfer appeared to have backtracked on this decision; technically, though, I don’t think there’s anything stopping them from slipping that clause back in again.
Still, the reality is that our data has been up for grabs for years. If you’ve used Meta’s platforms (Facebook, Instagram, Threads, WhatsApp) and didn’t explicitly opt out of your data being used, then your public content — posts, comments, photos, videos, and even interactions — has been used to train their AI. (As of earlier this year, this applies to users in the EU, too.) The same goes for using Google’s products — email, search engine, etc. In fact, the company recently admitted in court that it can use website content to train its models as well, even when publishers opt out.
If you’re a writer and/or researcher, then the chances that your work has already been fed to the machines are even higher. Programmer and Atlantic contributor Alex Reisner uncovered that nearly a quarter of a million books were used, without permission, to train generative-AI systems built by Meta, Bloomberg, and others. More recently, Reisner found that entire academic journals — including Nature, Science, and The Lancet — are being scraped and fed to the machines, too.
Big companies, including generative-AI companies, also often deploy web ‘crawlers’ and ‘scrapers’ — automated digital parasites that roam the web, extracting data from everywhere they can, including behind paywalls.
The utter lack of transparency on how this happens, and what exactly the machines are swallowing whole and from where, also heavily contributes to the issue of AI bias, a topic I’ve written extensively about in recent years — including in this recent piece. One experiment found, for instance, that when asked for salary advice, ChatGPT recommended a lower figure to a female applicant than to a male one, despite identical qualifications, experience, and job roles. Most worryingly, AI bias often doesn’t just reflect human biases — it amplifies them, too.
What comes out of the end of AI’s digestive tract after it’s gorged on our thoughts and ideas and jokes and art and research is too frequently biased, stolen, inaccurate, hallucinated, and just… strange. The best way I can describe it is ‘uncanny valley.’ It’s almost human. But it isn’t. It’s empty, like a matryoshka doll. Peel back the layers of fluff-heavy, seemingly coherent text, and you’ll find nothing of substance.
Yet we use these machines, and then, in turn, the machines use us. But even this grim exchange doesn’t seem to come with the benefits we were promised.
Washing machines, vacuum cleaners, assembly lines, and then, later on, personal computers and smartphones, along with a suite of apps and tools, all promised to increase productivity and free us from the drudgery of daily life. Well. For a species so obsessed with inventing productivity-boosting, time-saving technologies, we’re remarkably quick to forget the promises they came with.
Contrary to economist John Maynard Keynes’ famous prediction that, by the early 21st century, the workweek would shrink to just 15 hours thanks to all those shiny new technologies, we’re not really working any less. In most places around the world, the 40-hour work week, shaped a century ago by industrial standards, is still the norm. Actually, in some countries, working hours have increased since the 1980s — up 10% in the US, 5% in the UK and France, and 1% in Germany.
The machines didn’t exactly give us back our time. If anything, they’ve only sunk their greedy teeth deeper and deeper into it.
In economics, this is known as the Jevons Paradox: efficiency gains end up increasing demand for a resource instead of decreasing it. It happened with coal — more efficient steam engines led to more coal being burned, not less — and now it’s happening again, with our labour and our time. As anthropologist Vivek V Venkataraman points out in an essay for Aeon:
Indeed, our lives today are the Jevons Paradox in microcosm. Frictionless technology at our fingertips leads to the paradoxical situation of our smartphone screens becoming crowded with apps, our days increasingly divided into small things, and our attention shattered. Things that were meant to make our lives easier simply tempt us to put more things on our plates, increasing the amount that we work, and wreaking havoc on our well-being.
In her 1983 book More Work For Mother, scholar Ruth Schwartz Cowan argued that ‘labour-saving’ household appliances similarly didn’t lighten the domestic load — they merely raised expectations around it, expectations that are still overwhelmingly placed on women. You’re saving time on a chore? Lovely. Now there’s certainly no excuse not to do it perfectly. Or to take on the extra labour that comes with the device itself. In a brilliant recent essay on mental labour, one writer shared that in May of this year, she exchanged 229 communications — 52 emails, 130 WhatsApp messages, and 47 text messages — about her two kids’ school and sports activities. In just one month.

Much has also been said in recent years about how our devices, particularly phones, blur the line between work and life. Because we’re nearly always on. Always reachable, just one ping away from hopping on another call or answering an email. AI doesn’t seem to be taking anything off our plates either. In a recent Upwork survey of 2,500 workers, freelancers, and executives, nearly 80% said that using generative AI in their jobs is adding to their workload and hampering their productivity, not improving it.
One result is that leisure time — the space for play and proper rest and socialising and just doing nothing at all — is leaking away, eroded by rising demands of modern life and that ever-buzzing device in our pockets. And the little free time we do have, we then often spend… with machines, too. They recommend what we should watch and read and buy, in the process filling our heads with viral nonsense and our homes with plastic crap. They replace human connection with digital companions and real thought with highlight reels and bullet points. There’s barely any time or space left to be human. To simply exist, without being pulled in a dozen directions at once.
But why do we allow this to happen? Why do we allow the technology to progress, while we seem to be stuck in, more or less, the same place, if not worse?
Money machines, of all kinds, run on more than just fuel or steel or data — they run on myths. Myths that justify their existence, glorify their excesses, and downplay the sacrifices they demand.
We’re told that we must ‘move fast and break things’ to usher in a Brave New World of tech-facilitated human flourishing. But speed doesn’t matter nearly as much as direction does. It doesn’t matter how fast you’re going if you’re heading towards collapse.
We’re told ‘greed is good, actually,’ and that rising accumulation of capital and power in the hands of a few will eventually ‘trickle down’ like manna from heaven. But it’s those who own the machines who now profit the most from their advances, while everyone else is left increasingly squeezed.
We’re told ‘hard work pays off,’ and encouraged to keep our heads down in the hamster wheel. But it’s rarely the people working the hardest who are rewarded — not the garment workers, the electronics assemblers, the farm labourers, nor those toiling through endless unpaid domestic shifts.
These various myths essentially function as a sort of tranquiliser, numbing us to the reality that our labour and thought are being absorbed into highly profitable machines we then have to compete with, and that despite a century of technological leaps and a dramatic surge in productivity, we’re still working just as much, if not more, with so little time left to actually enjoy being alive. As Keynes observed, ‘for we have been trained too long to strive and not to enjoy.’
It’s time we snap out of it, though.
Technology isn’t inherently evil. It does what we program it to do, shaped by the values we hold — or fail to. And right now, I’m afraid it’s the latter that’s winning. In a capitalist-patriarchal world where profit and domination trump collective well-being and just about everything else, that’s hardly a surprise, of course. But it doesn’t have to be this way. We can put human dignity, rest, connection, creativity, equity, and joy back at the centre of what we build — machines, systems, and societies alike.
Sure, we can also be cautious and refuse to let the machines swallow us whole when we still have a choice. But what happens when we don’t? In some places, you technically can request to have your data removed from companies’ AI training sets, but whether that request is respected is another matter entirely. The same goes for trying to live ‘mindfully’ — whatever that even means anymore — slowing down, paying attention, filtering what we read and engage with, etc, etc.
Ultimately, the machines and those who own them won’t stop devouring what’s currently on the menu — our data, our creativity, our time, our very humanness — unless we take a collective stand and rewrite the societal scripts that allow it.
Donna Haraway’s words in her 1985 essay, A Cyborg Manifesto, still ring so true today:
The need for unity of people trying to resist worldwide intensification of domination has never been more acute.
If we just keep spinning on this not-so-merry-go-round, it’s only a matter of time before our own humanness becomes unrecognisable to us.
The online ecosystem, where authenticity, kindness, and creativity are so frequently replaced by performativity, superficiality, and algorithm-optimised engagement, certainly offers a preview of this. But that could seep into the offline world, too — if it hasn’t already.
And then what?