How To Win A Fight With A Thinking Machine (2026-) | Part 02

The Second Half Of The Chessboard

artist unknown, after Escher.

02.01 The word of god

let y = 10;

I can't say I'm a huge fan of JavaScript. I tend to prefer a sterner mistress, those "strict-type" languages such as C++, who make you do it their way, and so force you into thinking about what's going on closer to the metal. But I do like JavaScript's use of let as a keyword for defining variables.

let gives your code a subtle Old Testament feel. With let you are making a declaration, of the kind that might be issued on mountaintops, or chiselled into stone tablets. I gave "y" a value, and I saw that "y" was a variable, and that "y" was good.

There is nothing wrong with enjoying the occasional frisson of godlike arrogance when coding. It's allowed. Consider it a perk. For this is what coding awards you - the power of creation, streaming from your fingertips. It is a conversation between you, The Mighty, and an obsequious, devoted, super-literal follower who takes everything you say as gospel. It's a power you are welcome to abuse and let go to your head. There's no-one who'll complain if your demands are unreasonable, over-ambitious, tedious, incessant or dull. The worst that can happen is you are not understood.

It makes me wonder why more people aren't drawn to coding, if only the narcissists.

Coding is, at a basic level, easy[1]. Almost trivially so. The entry requirements are an understanding of just four principles:

- variables
- conditions
- loops
- functions

Just those four. Five maybe, if we include objects. Those principles mastered, coding is then simply a formalised language, with a super-stripped-down lexicon, that can be understood by both machines and humans. Code is a meeting place, where two different forms of "thinking" connect, and a communion can happen.
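To make that concrete, here is a minimal sketch of all four principles at work in JavaScript (the names `total` and `describe` are my own, purely for illustration):

```javascript
// Variable: a named value we declare and can change
let total = 0;

// Loop: repeat an action, here for the numbers 1 to 10
for (let n = 1; n <= 10; n++) {
  // Condition: branch on a test - only add the even numbers
  if (n % 2 === 0) {
    total += n;
  }
}

// Function: a reusable, named chunk of logic
function describe(sum) {
  return `The even numbers up to 10 sum to ${sum}`;
}

console.log(describe(total)); // prints "The even numbers up to 10 sum to 30"
```

A dozen lines, and all four principles accounted for. Everything else is elaboration.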

The exact level at which that meeting takes place can be adjusted according to taste. Higher level languages, such as JavaScript, Python, etc, are closer to human logic. Lower level languages, like C or Assembly Language, are closer to machine logic. At the machine level it becomes impossible to ignore what is happening within the hardware, and so low-level languages usually involve a lot of moving values between registers and managing memory. High-level languages abstract these parts away, but retain the broader logical gist.

My preference for C++ over JavaScript is a very subtle choice between two high-level approaches. Neither are that far from the metal. They're both abstracted enough to remove tedium, yet retain the feeling of connection to the machine's way of thinking.

It's also possible to go higher, for those with more extreme tedium aversions. But this comes with risk. Abstract the conversation too far and the feeling changes. You lose the connection. You can still command a machine to obey your will, but without the empathy for how your instructions are received.

Hiring a coder is one such abstraction. This is a great shortcut, as suitably skilled humans make for a useful buffer between your imprecise ideas and the alien machine. Vibe coding is another. These are both valid options for the kind of traveller who wants to go abroad, but doesn't want to learn the language. Happy to be the tourist, never the local. A vibe coder can enjoy the power of coding, whilst remaining safely on the human side of the separation.

Vibe coding is coding in the same way that Gran Turismo is driving. It's a simulation. The thrill is similar, but it is a slightly sanitised, uncanny version that largely misses the point. It gives the feeling of travelling at speed, but without actually moving.

This is the very nature of abstraction. In deliberately hiding the parts of a process which feel too alien, you inevitably distort the conversation. It makes the interface simpler, but can lead to an annoying tendency to believe the machine you're talking to is actually not all that dissimilar from ourselves. And then further imagine it to have feelings, desires and ambitions. Once you stray down this path, and no longer see computers as dumb, obedient minions, mindlessly awaiting the command of their god, you are lost.

Modes of communication carry meaning in themselves. Change your way of speaking and you change what is expressed. Even a difference as subtle as that between C++ and JavaScript produces a change in attitude, and influences the range of what you might attempt to say.

Programming languages are expressive in a way human languages are not, because they have the additional capability to effect change. To produce immediate action. You can instruct another human to jump, and in some situations you might be obeyed. But instruct a machine to do it, and it won't pause to ask how high. Code's power to command is inherent and, like the word of god, includes an assumption those commands will be blindly obeyed.

02.02 Expression systems

All languages are modes of expression. We evolved language as a way to free thoughts from our heads, to understand the intentions of others and, ultimately, to cooperate. Its history is a story of ever expanding sophistication and subtlety. In a mere few thousand years the human animal has advanced from grunts and gestures to being able to hear an old recording of Bowie singing "the tactful cactus by your window surveys the prairie of your room" and, kinda, understanding the very specific mood he's trying to conjure.

Human languages are powerful expression systems. And the ever popular arts of literature, orature, song and poetry continue to stretch their capabilities. Our modern languages are messy, playful, fluid and contradictory, just as they need to be in order to describe the chaotic analogue mulch of feelings, stimuli and imaginings we experience. We can discuss the sound of silence or the sweet ennui of a Sunday afternoon, unhindered by the illogic of the oxymoron. And we can discuss oxymorons because we have invented a word to describe that concept.

Human languages shift and stretch over time. Once the word "snowflake" pointed solely at ice crystals that fall from the sky. Later it was adopted to point disparagingly at people opposed to the abolition of slavery. At another time it became slang for people who acted "too white" in certain racially-charged contexts. More recently it has come to mean over-sensitive millennials. It's been on a journey through many cultural changes, gaining and losing meanings, building multiple layers of association that are capable of overlapping without contradiction.

The strength and longevity of languages comes from their ability to adapt in this way. If a new word is needed to describe a new concept, one can simply be added to the language ("broflake (n)"?). Or an old word can take on a second meaning, or a third, while earlier meanings get forgotten.

This layered elasticity, ebbing and flowing as needs change, means languages never sit still. They're always changing, being mangled into ever new shapes. This is why the English of Shakespeare's Stratford and the more modern variant heard around the Maybird Shopping Centre today can sound so foreign to each other. A 17th Century English speaker would make no more sense of our "bae" or "better half" than we might of their "swain" or "kicky-wicky", even though they all describe the same thing in the same language.

Computer languages have a shorter history, and have obeyed different evolutionary principles. One crucial difference between human-to-machine and human-to-human comms is that the machine side is more static. All computer languages are tethered to a layer of logic that has, for the last few decades at least, remained pretty consistent. So coding languages don't adapt in the same way as human languages. Instead they tend to expire and be replaced. This is why there are, literally, hundreds of them out there, and there's still a living to be made coding LISP.

The world's most spoken languages are English, Mandarin, Hindi and Spanish. Their relative ranking has shuffled a little over the last century, but change has been slow. By contrast, the ranking of coding languages changes frenetically, year on year. Python, which my kids were taught at school, holds the top spot at the time of writing. It overtook JavaScript in 2019, which had itself only just succeeded Java in 2018. Roll back past the millennium and none of these big names even figure; it was C and C++ fighting it out.

Ultimately, the fashions in coding languages don't matter one jot. When I was at school I was taught Pascal - a language almost entirely forgotten today. But that hasn't disadvantaged me. I've used many other languages since and have simply carried over the principles. Those four/five core concepts - variables, conditions, loops, functions and objects - have remained relatively unchanged, so whichever language you take as your starting point isn't really too important. After you've learnt one or two coding languages, you've pretty much learnt them all. All that changes is the syntax.

One might equally declare that all human languages are the same, all that changes are the words. Which is flippant, but true. Human languages also share underlying structures that don't change, and often share etymologies too. Once you have mastered more than one spoken language, others tend to come easier. Learning a second language is a big step toward learning all the languages.

02.03 Expression and time

Most differences between coding languages are contextual. One language might be designed specifically for controlling industrial robots. Another may be structured around mathematical notation. There was one, taught in universities in the seventies, designed solely for instructing imaginary turtles. This doesn't necessarily mean you need a different language for every context, just that some are designed specifically to solve certain problems. The more multi-purpose languages tend to be the most commonly used though, and there are a few (I'd hazard the aforementioned, C++ and JavaScript, as my best bets) that you can imagine being around a while.

This consistency over time doesn't mean the expressive power of coding languages has remained constant too. On the contrary, their capabilities have radically expanded in their short history. This is why we can ignore the rise and fall of specific languages, in favour of this more significant trend.

Expressive power in code can be measured as a product of three factors - reach, variety and speed. All three stem from the magic of machine languages, that ability to effect change with predictable results.

Coding is a form of language that makes things happen. That's its whole raison d'être. Therefore its expressive power derives from the effective power of the thing it commands. It is channelled and amplified by the hardware on which it runs. As the hardware improves over time, the expressive potential of the instruction sets grows too.

The first factor, reach, is the consequence of how far computing machines have infiltrated our everyday lives. The number of them that exist. Take a moment to look around you. How many computers are there in the room with you right now? If you are indoors there is probably a computer controlling the temperature of the air around you. There might be a simple idiot telling the time in the corner over there. And I'd guess many of you will be reading these words on a portable computer of some form, one more powerful than the machines used by NASA to send Armstrong, Aldrin and Collins to the moon in 1969. Where the hell did they all come from?

Putting computers in our pockets, throughout our homes, and in every public space we might wander into, means we've created opportunities to deliver content in a lot of different contexts. We can express more through coding simply because there is more of the world we can affect by it.

Variety, the second factor, is the number of different types of inputs and outputs we can utilise. Computers can draw input data from the cameras we carry, the biometrics we wear, or from the wider web available wherever there is a connection. The more screens and sensors we attach to things, the more ways we can interact with the world around us. We can project our output onto the side of buildings to reach thousands of people, or onto a single device to interface more personally. Putting computers into glasses means we can create augmented realities. Putting them into robots and vehicles means we can control movement, dance and traffic. Machine controlled drones can take our instructions to the skies. Today we can express ourselves in mediums that simply didn't exist until recently. And tomorrow there will be yet more new mediums to explore.

But it's the final factor, speed, where it really starts to get magical. The hardware is getting faster and faster, year on year. This is artificial computing's one main superhuman ability. The speed of the machines opens up worlds of new possibilities, and allows a generous permissiveness in the way we can work with them.

02.04 The second half of the chessboard

You've probably heard of "Moore's Law" - Gordon Moore's 1965 prediction that "the number of transistors in an integrated circuit will double about every two years". A transistor is essentially an electronic switch. In Moore's day these would have been distinct components, but now they tend to be part of a semiconductor substrate, the physical matter of computing which is getting ever smaller, heading downwards toward the quantum. Moore's Law essentially says the number of logical calculations a computer can carry out is doubling every two years.

This sounds ridiculous, especially if you've ever heard any power law parables. If you haven't, here's one:

The story goes that the inventor of chess, whoever that may have been, wishes to present their game to the Emperor. The Emperor asks what they want in payment for such a delightful invention. The inventor asks to be paid in rice - a single grain on the first square of the chessboard, doubled for every successive square.

The Emperor quickly agrees to such a modest price. It is only later in the tale that he realises he has signed away all the rice in the realm before he has even filled half the chessboard. By square 32, the running total has already exceeded four billion grains. The final square alone would need to contain 2^63 grains - 9,223,372,036,854,775,808 - more rice than has been farmed in the history of Homo sapiens.
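For anyone who wants to check the Emperor's arithmetic, here's a quick sketch in JavaScript, using BigInt because the totals overflow the language's ordinary numbers:

```javascript
// Grains on square n is 2^(n-1); tally all 64 squares.
// BigInt (the trailing 'n') is needed - 2^63 won't fit in a
// standard JavaScript number without losing precision.
let onSquare = 1n; // one grain on the first square
let total = 0n;

for (let square = 1; square <= 64; square++) {
  total += onSquare;
  if (square < 64) onSquare *= 2n; // double for the next square
}

console.log(onSquare); // 2^63 grains on the final square alone
console.log(total);    // 2^64 - 1 grains across the whole board
```

Run it and the final square comes out at 9,223,372,036,854,775,808 grains, with the whole board totalling roughly double that again.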

The rest of the story can be adapted to whatever moral you want to place on it, but it usually ends with the Emperor having the inventor executed for being such a smart-arse. The take-away being that there is a cost to exponential growth. This is why there are few, Gordon Moore included, who believe Moore's Law can continue indefinitely.

Yet it has held for sixty years so far. Thirty doublings. Almost half the chessboard. The laptop on which I am typing these words has 20 billion transistors. My previous laptop, about 8 years older, now demoted to sitting in the corner and playing the nosebleed techno I'm listening to as I type, had a meagre 1.16 billion. If you double 1.16 billion four times, once for every two of those eight years, it makes 18.56 billion. Which is pretty much spot on.

We human computers, who are naturally capable of creating art, life, emotion, and all that other messy stuff, can still comfortably beat any mechanical computer in most of the fuzzier elements of the world around us. But when it comes to doing hugely tedious, repetitive tasks, over and over, without complaint, these boxes of metal and sand left us behind back in 1946. And their cycles are only getting faster.

Which gives us good reason to want to retain a mastery of this particular instrument, the Computer that was once a part of who we were but we have now externalised. Because, unlike other instruments, the expressive power a computer awards us is nowhere near its full potential. If the growth we can expect over the coming decades is similar to the growth we've witnessed over the decades past, we're really only just getting started with the expressive power of code.

The year 2029 marks the point when we'll be 32 iterations into Moore's Law - the first half of the chessboard. Meaning our machines will be 2^31 (2.1475 billion) times more powerful than they were in 1965. If the law continues to hold beyond that (it can't, can it? surely?), by the time we complete the second half of the chessboard - sometime around 2093 - our machines will be another two billion times faster than they are today.
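The doubling arithmetic can be sketched in a few lines of JavaScript. The helper `doublingsSince1965` is hypothetical and assumes a clean doubling every two years, which is a simplification of how the hardware has actually progressed:

```javascript
// Count Moore's Law iterations: one doubling every two years
// from a 1965 baseline (an idealised cadence, for illustration).
function doublingsSince1965(year) {
  return Math.floor((year - 1965) / 2);
}

console.log(doublingsSince1965(2029)); // 32 - the first half of the board
console.log(doublingsSince1965(2093)); // 64 - the full chessboard
console.log(2 ** 31);                  // 2147483648 - the "2.1475 billion" multiplier
```

Like square 32 of the chessboard, 32 iterations multiply the 1965 baseline by 2^31.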

We might not all be around to witness this milestone, but we'd hope our children will be. They are going to be sharing a world with machines capable of feats two billion times beyond our current imagining. And, we'd hope, some of our children will be capable of communing with these machines and harnessing that power.




next time ...

There's a lot more to say about this exponential growth, and its cultural consequences, which I'll return to later. But first there's the matter of its effect on coding "style", which will be the subject of the next post. There's more than one way to approach a machine conversation.




footnotes

  1. If you're interested in exactly how easy it is to learn to code, there are lots of resources online. Too many, in fact. Python or Processing are good for beginners, and it helps a lot if you have something in mind you want to achieve. Obviously I'd recommend my book on the subject too. Specifically, Chapter 2.



