How To Win A Fight With A Thinking Machine (2026-) | Part 01/20
The Artist and The Computer
Marina Abramović and ULAY - Relation in Time (1977)
01.01 Can a computer make art?
Can a computer make art?
Yes, of course it can. You're a computer. You are a machine that takes input, processes it, and produces outputs. And you can make art. Ergo, a computer can make art.
Can a computer have original thought? Can a computer innovate? Can a computer produce emotion? Can it experience emotion? Can it create life? Can it show intelligence? Of course it can, you're a computer and you can do all of these things. This isn't really helping, is it? Are we, perhaps, asking the wrong questions?
Computers are not new. They've existed in their current electronic form for around 80 years now, a human lifetime, and were around in other forms for a long time before that. When Alan Turing was setting out the foundations of Computer Science in the 1930s, no computing machines, as we understand them today, had yet been built. In Turing's time "Computer" was a job title: someone who performed long mathematical calculations by hand. "Parallel computing" was two people performing the same calculation and then comparing their results. For Turing, computers were, like Soylent Green, people.
A computer is anything that can take input, process information, and output something afterward. You're computing right now, reading these words and processing them into your own thoughts. I'm running a similar process in reverse, encoding the argument I'm trying to make into words you might understand. Before we invented machines that could exceed our capabilities in this area it was a core function of the human animal. It still is.
The other core role, the flip side of the coin, is Artist. Artist and Computer might be seen as the two halves of conscious experience - the power of creativity and expression, and the ability to process and manipulate abstract information. The old right/left hemisphere split of primitive neuroscience. A century ago we, the human animal, were the planet's greatest practitioners of both these skills. But over a mere three or four generations we have surrendered one aspect, almost entirely, to these marvellous computing machines we've invented, while the other we now cling to like a life raft, fearing that it, too, is replaceable.
"Can a computer make art?" is a question I've spent much of my career considering. This series of posts (bear with me) will continue the thinking that framed my Generative Art book, fleshing it out into a full argument for how, in an algorithmic age, we might imagine a healthy human-machine symbiosis.
Externalising our computing skill has supercharged our capabilities, and radically changed the structure of our societies. We have mechanised it far beyond human limitations, to the point where it barely feels human any more. But we haven't yet lost control of it. The collaboration between the artist and the computer is still a vital one. All that has changed is the interface.
01.02 Technological unemployment / The driverless elevator
The economist John Maynard Keynes, a contemporary of Turing, wrote in 1930 that "we are being afflicted with a new disease [...] technological unemployment". He was not alone in this concern; the fear of automation has been, and continues to be, one of our cultural constants since the invention of the wheel. Yet I've been unable to find any sources, either contemporary or retrospective, mourning the loss of the job of Computer. There were no Luddites smashing the mechanical looms of the 1950s and 60s. And today, with hindsight, Computer is a role we'd consider almost cruel to ask a human to perform.
Similarly, we don't mourn the loss of jobs such as switchboard girls or elevator operators, other tedious, mechanical jobs that have been automated out of existence. Modern minds instead question why telephone connections needed manual switching or why elevators ever needed drivers in the first place. The reason was that these technologies, when new, were a little bit scary.
An elevator operator had little agency apart from pressing the buttons, but they did provide one important feature: a human face. These tiny boxes that shot up into the air were, frankly, a terrifying prospect. As are a lot of technological innovations. The arrival of the "driverless elevator" was actually quite a big deal, but we adapted to it quickly.
We might look back on these roles nostalgically, as symbolic of more innocent times, but we don't really view them as slivers of core human worth and dignity that have been stripped from us. We simply accept them as transitional roles that were always, really, better suited to machines.
And anyway, computing is something we can still do, if we choose. There's nothing stopping us doing it for fun, if not for money. We like solving problems and matching patterns. We find pleasure in Sudoku, Wordle, chess and number puzzles, even though we live in a world with technology capable of performing these tasks far better than we ever could. That doesn't bother us too much. And I'm sure if you wanted to while away a quiet afternoon pushing elevator buttons for people it wouldn't upset anyone either.
Perhaps this is why we cling to human creativity so tightly - because we previously surrendered other human utilities so casually. The artistic side of us - the creative, the emotional, the intuitive and the innovative - comprises qualities we currently have difficulty imagining could ever be replaced. They are our last USPs. Today any driverless elevator fears we may harbour are mostly reserved for technologies that might "out-think" us.
It's more likely, however, that the root of our fear of automation is really the fear of losing our human commercial value, in an economic system that requires the majority of us to work for our living. The ideal of a human future where the machines do all the work, so we don't have to, should be a utopian dream, not a nightmare. The reason it's so scary is that we know our economic systems are slower to evolve than our technology. If tomorrow we introduced machines that rendered human labour entirely redundant, ahead of us would lie a hard, and probably bloody, transition to a society that shared those economic gains fairly.
Of course, with a safe distance from the disruption of the time, we know that the decline in human computing and the growth of electronic computing in the latter half of the 20th Century created far more new jobs than it destroyed old ones. The scale of what became possible when this function was mechanically assisted, and the subsequent demand for people to do the assisting, far eclipsed the jobs that existed when computing was within human limits. This is the story with a lot of Keynes's "technological unemployment".
01.03 Coders vs Users
I make my living as a coder. I've learnt how to talk to machines and get paid for it. This job barely existed when I was at school, and I had certainly never met a professional who did it for a living before I began doing it myself. I've been around long enough to have witnessed many ups and downs in the industry (your dad fought in the browser wars, son), but still I have no fear of this job role ever being fully replaced by AI. For reasons I'll gradually unpack in coming posts.
I taught myself to code as a child in the 1980s, on one of Clive Sinclair's ZX Spectrums. The hardware layer of computers, built on the digital electronics principles of on/off states and logic gates, and organised according to an architecture designed by John von Neumann in the 1940s, was well established by then. And it hasn't changed very much since.
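That idea - that everything a computer does is built from on/off states combined by logic gates - is easy to state and easy to forget. Here's a minimal sketch in Python (an illustration only, nothing like how a real chip is wired, and certainly not how the Spectrum did it) showing how a single universal NAND gate composes into the other gates, and from there into the beginnings of arithmetic:

```python
# A minimal sketch of digital logic: everything below is built from
# one universal gate, NAND, operating on on/off (1/0) states.

def NAND(a: int, b: int) -> int:
    """The universal gate: outputs 1 unless both inputs are 1."""
    return 0 if (a and b) else 1

# Every other gate can be composed from NAND alone.
def NOT(a):    return NAND(a, a)
def AND(a, b): return NOT(NAND(a, b))
def OR(a, b):  return NAND(NOT(a), NOT(b))
def XOR(a, b): return AND(OR(a, b), NAND(a, b))

def half_adder(a, b):
    """Add two single bits, returning (sum_bit, carry_bit)."""
    return XOR(a, b), AND(a, b)

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            s, c = half_adder(a, b)
            print(f"{a} + {b} -> sum {s}, carry {c}")
```

Chain enough of these together and you have arithmetic; chain enough arithmetic together and you have everything else.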
The bedroom hackers of the 1980s were sold computers as tools to help with their maths homework, or to do the household accounts, but it quickly became established that the machines were crap at that. The best application for these primitive machines was gaming, and a software industry servicing this demand grew quickly. But, unlike in the gaming machines of today, the von Neumann architecture was not hidden behind a shiny GUI.
It was hard to avoid becoming a coder back then. Even to load a game, from audio cassette or floppy disk, one had to learn a few basic commands. Every home computer user of the 1980s knew the command "load", at the very least, while the more ambitious went on to master commands like "peek" and "poke", which could read and directly change values in areas of memory. This could be used to, say, manually edit a high score table or unlock an inaccessible level.
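For readers who never met them: PEEK reads a single byte from a numbered memory address, and POKE writes one. A toy simulation in Python captures the spirit (the addresses and the "high score" location below are invented for illustration; they are not real ZX Spectrum memory locations):

```python
# A toy simulation of PEEK and POKE: direct reads and writes to numbered
# memory addresses. The 64 KB array and the high-score address are
# hypothetical, chosen only to illustrate the idea.

memory = bytearray(65536)        # pretend 64 KB of RAM, all zeroed

def peek(address: int) -> int:
    """Read the byte stored at a memory address."""
    return memory[address]

def poke(address: int, value: int) -> None:
    """Write a byte (0-255) directly into a memory address."""
    memory[address] = value & 0xFF

HIGH_SCORE_ADDR = 30000          # hypothetical: where a game keeps its high score
poke(HIGH_SCORE_ADDR, 42)        # the score the game actually recorded
print(peek(HIGH_SCORE_ADDR))     # -> 42

# The 1980s cheat: poke a new value straight over the top of it.
poke(HIGH_SCORE_ADDR, 255)
print(peek(HIGH_SCORE_ADDR))     # -> 255, an unearned high score
```

No permission dialogs, no sandbox: the machine did exactly what it was told, for better or worse.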
Being greeted by a command line - a flashing cursor - when booting up a computer is a very different user experience to the pretty interfaces we have today. The box of electronics presents itself in a way that is truer to itself - a dumb chunk of matter, sitting there doing nothing until it is commanded. The growth of graphical user interfaces (GUIs) through the 90s put an end to this, greatly increasing the reach of computing into the home and workplace, but also establishing, for the first time, the "pay no attention to the man behind the curtain" barrier between meat and metal.
A GUI gives a user a limited set of options. It might be a large set of options, but they are still restricted to the actions designers want users to perform. This is sensible in most cases, as it is otherwise very easy to bork an expensive bit of hardware by tinkering around with it at a low level. But it radically changes the relationship between the human and the tool. The human is no longer fully in control; they have been given training wheels and bumpers to keep them from causing damage.
This is the essential difference between coders and mere users: users do as they're told; coders don't. The art of User Experience (UX), another profession that didn't really exist when I was at school, is how to tell users what to do in such a way that they don't feel like they're being told what to do. "Intuitiveness" is the name we give to this quality.
The intuitiveness of machine interactions is partly the reason we've grown to love them so. And to trust them. This "abstraction layer" removes a lot of the pain, the trial and error, the bugs and stubborn misunderstandings, that arise when talking to machines. And it means we don't have to keep reinventing the wheel; all the common functions have been preprogrammed for us. We've grown used to the idea of trusting the machine to translate our intentions into results, without really having to know how it does so. As long as we get the same result every time, we're happy to trust the predefined process.
But this is where the separation happened - when the art of computer programming became seen as something arcane and specialised. Not for the layperson. It was also when "the algorithm" became seen as a black box that we didn't need to worry about, something that had been crafted by brains better than ours. We were taught to feel comfortable trusting "the algorithm" without having to worry about understanding it. These ideas, which formed the bedrock of good usability at a personal level, became perverted into some horrible new forms when scaled up to larger systems. Systems that often now manage us, rather than us managing them.
Any modern fear we have of thinking machines as a technology that might supersede us, and/or threaten our very humanity, stems from this separation.
01.04 When technology achieves boring
Parallel to this separation has been the increased "generalness" of these general-purpose machines. Electronic computers were first built to calculate artillery trajectories. Today they do so much more. They are so ubiquitous, and so trusted, we barely notice them. We think nothing of paying for a pint of milk by waving the computer we have in our pocket within near-field range of one of the shop's computers, and then walking out, leaving them to sort it all out between themselves. Once this would have felt like theft. Today it just feels like efficiency, saving us precious seconds every day.
The final stage of a new technology's life cycle is when it achieves boringness. Air travel, once the wildest fantasy of human imagination, is now something so tedious we need constant distraction to endure it. Video telephony has become so dry we prefer to do work meetings in person. Instantaneous global communications? Boooorrrrriiiing. Extended life spans? Really, is there anything more boring than old people ...?
Computing, though, may never achieve this state because, while milk transactions are boring, there are still a load of other applications that are not. With fresh ones always around the corner. The same processors that deal with everyday commerce we also use to make animation, play music and games, tell stories, and connect with our friends. The computer is an instrument that can be played in so many different ways it's sometimes easy to forget that it's the same instrument.
When Ada Lovelace wrote to Charles Babbage proposing that his Analytical Engine might be used to do more than just crunch numbers, this would have been quite the imaginative leap. Even in Turing's day this might have felt counterintuitive. But another century later, it is a given. If anything, we've now gone too far in the other direction, and have difficulty conceiving of things computers can't do. We no longer see them as having any limitations. It is this idea, which is not at all accurate, that fills feeds with horror stories about the dangers of AI and has our inner Artist fearing redundancy.
I believe that holding onto a coder mindset, through a period in history when everything is being computerised, gives one a better understanding of how the modern world works. But then I would say that, wouldn't I? I also believe, though, that this understanding is not exclusive to techies. It's possible to better understand how thinking machines integrate with the world without having to go as far as mastering archaic syntaxes that can talk to a database. Which is why I'm writing this.
Coding no longer belongs solely to the Sciences, any more than computers do. I think we should treat it more as one of the Humanities, because it is such an integral part of modern philosophy, language and the human cultural sphere. The principles of coding are responsible, whether or not we are aware of it, for so much of how our world is organised.
Coding is the way I, personally, reconnect the Computer with the Artist. This is my favoured interface, which is probably closer to the metal than most of us are comfortable with. But at the other end of this spectrum is the tendency to imagine our thinking machines are magical black boxes, and then to tell fantastical stories about what they might do. This is dangerous thinking, far divorced from reality.
Computing touches every aspect of human experience, and communing with thinking machines, if done right, can be a magical, expressive and transformative experience. It has the power to effect real, tangible change. Coding is a form of language capable of altering reality. And this superpower is available to you, not just them.
Next time ...
How To Win A Fight With A Thinking Machine Part 02: Abstraction. Coming soon...
more words:
- the artist and the computer (2026)
- facestealer (2023)
- saving the metaverse (2022)
- elon's pocket singularity (2022)
- Novelty Waves (2014)
- social networking with the living dead (2013)
- no one ever cried at a website (2013)
- open source art (2013)
- Generative Art: A Practical Guide (2011)