“I very much enjoy looking at the wind moving trees around, and I love looking at the ocean, and clouds. I always think if there were only one place in the world where you could see clouds everybody would be flocking there, clouds are so fascinating and we just take them for granted, we don’t even look at them.
That’s one thing in an environment like a Mall: it’s hard to find any examples of what I call ‘gnarl’, natural gnarl, where you see a leaf waving in the wind, or a fire flame, or some flowing water. We often box ourselves up into these rooms.
When you see something like the leaves on a tree, that’s a good example, there may be a simple equation underlying it, like the laws of fluid flow. We’ve got a certain amount of air and a certain number of leaf positions, the leaves are complex compound pendulums, and they begin rocking in these unpredictable ways.
The distinction that’s new, and that we didn’t used to make, is that something can be deterministic but not predictable. We tend to think they are synonyms but they’re not. Something can be obeying some law of nature but it’s not predictable because what it’s doing is so complicated that the time it would take you to calculate what it was going to do would take longer than the thing actually doing it. So you could compute it but you can’t compute the world any faster than it is happening.”
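To get a feel for deterministic-but-unpredictable, here is a rough sketch in Python of the compound pendulums Rucker mentions: two identical double pendulums released from starting angles that differ by one billionth of a radian. The equations of motion are the standard textbook ones for a planar double pendulum; the masses, lengths, timestep and run length are arbitrary choices of mine, not anything from Rucker.

```python
# Two double pendulums, identical laws, starting angles 1e-9 radians apart.
# Fully deterministic integration, yet the runs soon disagree completely.
from math import sin, cos, pi

G, L1, L2, M1, M2 = 9.81, 1.0, 1.0, 1.0, 1.0  # gravity, arm lengths, bob masses

def derivs(s):
    """s = (theta1, omega1, theta2, omega2); returns the time derivatives."""
    t1, w1, t2, w2 = s
    d = t2 - t1
    den1 = (M1 + M2) * L1 - M2 * L1 * cos(d) ** 2
    a1 = (M2 * L1 * w1 ** 2 * sin(d) * cos(d)
          + M2 * G * sin(t2) * cos(d)
          + M2 * L2 * w2 ** 2 * sin(d)
          - (M1 + M2) * G * sin(t1)) / den1
    den2 = (L2 / L1) * den1
    a2 = (-M2 * L2 * w2 ** 2 * sin(d) * cos(d)
          + (M1 + M2) * (G * sin(t1) * cos(d)
                         - L1 * w1 ** 2 * sin(d)
                         - G * sin(t2))) / den2
    return (w1, a1, w2, a2)

def rk4_step(s, dt):
    """One classical Runge-Kutta step: deterministic down to the last bit."""
    k1 = derivs(s)
    k2 = derivs(tuple(x + dt / 2 * k for x, k in zip(s, k1)))
    k3 = derivs(tuple(x + dt / 2 * k for x, k in zip(s, k2)))
    k4 = derivs(tuple(x + dt * k for x, k in zip(s, k3)))
    return tuple(x + dt / 6 * (a + 2 * b + 2 * c + e)
                 for x, a, b, c, e in zip(s, k1, k2, k3, k4))

p = (pi / 2, 0.0, pi / 2, 0.0)          # pendulum one
q = (pi / 2 + 1e-9, 0.0, pi / 2, 0.0)   # pendulum two, a billionth off

dt = 0.001
for step in range(1, 30001):
    p, q = rk4_step(p, dt), rk4_step(q, dt)
    if step % 5000 == 0:
        print(f"t = {step * dt:4.0f}s  divergence in theta1: {abs(p[0] - q[0]):.2e} rad")
```

Run it and the two pendulums track each other for the first few seconds, then the gap grows exponentially until, well before the half-minute mark, they are doing entirely unrelated things. Same law, same machine, no prediction.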
When Alan Turing did his pioneering studies in the 1930s, laying the bedrock for the field of Computing, there was no such machine as a “computer”. When Turing talked of computers he was envisioning human beings with pen and paper carrying out repetitive sequential tasks, not machines. The lump of metal and matt plastic we now call a computer didn’t really find its way into our lives until fifty years later, long after Turing’s suicide in 1954.
In the early nineties I started a Computing degree at Exeter University, which I endured for about a year and a half before, bored senseless, I dropped out. I then went off to be arty for a few years to restore some kind of left-right hemisphere balance to my brain. I was so repulsed by my experience of early 90s ideas of Computing that I made efforts to stay as far away from computers as I could for the next five years.
It was only towards the end of the last decade, after the World Wide Web started to take off, and everyone suddenly discovered that what they’d been doing recently with video cameras, photography and hypertext was now being called “New Media”, that I was drawn back in and made friends with the machines once more. It was also around this time that I started using a Mac, rather than a PC, which may also explain my shift in attitude.
But I sometimes think that my original attitude towards the study of Computing may not have been so negative had it been made apparent to me at university that Computing is not something you need a computer to study. It took me a long time to realise this, because I’d never really encountered the idea until recently. Computation is everywhere.
Computing is what our DNA does as it unravels. It is what a stream does as it finds its way downhill towards the ocean. It is what the planets do as they move in their orbits. It is what our bodies do as they maintain the balance needed to keep us upright. Computing is what I am doing now as I process these ideas, and output them as text. The only place computers really come into it is in attempting to simulate these computations, or allowing us to create simple computations of our own. And computers are rather limited in this capability. This is why I can say without contradiction that I find Computers quite boring, but Computation fascinating. If you don’t believe me, turn off your primitive adding machine and take a look around you.
Even the most elementary of phenomena in the natural world – the fluttering of a leaf, the spray of the ocean, the weather – are way beyond what can be computed by the technology we have. And it is theorised, via the work of Turing, Gödel, Hofstadter and others, that we could never develop technology capable of simulating such a level of computation – it is simply conceptually impossible. This is why, as Dr Rucker’s quote above says, an entirely deterministic idea of Universal Automatism doesn’t mean we have to live in a world that is in any way predictable.
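You can glimpse the flavour of this even on a primitive adding machine. Stephen Wolfram’s Rule 30 cellular automaton is a classic illustration: the update rule fits in one line and is completely deterministic, yet so far as anyone has shown there is no shortcut to knowing what row n looks like other than computing every row before it. A minimal sketch; the grid width, step count and wrap-around boundary are arbitrary choices of mine:

```python
# Rule 30: each cell's next state is left XOR (centre OR right).
# One line of deterministic logic, unpredictable output.
WIDTH, STEPS = 79, 40

cells = [0] * WIDTH
cells[WIDTH // 2] = 1  # start from a single live cell in the middle

for _ in range(STEPS):
    print("".join("#" if c else " " for c in cells))
    # cells[i - 1] wraps via Python's negative indexing, matching the
    # explicit modulo on the right: periodic boundary conditions.
    cells = [cells[i - 1] ^ (cells[i] | cells[(i + 1) % WIDTH])
             for i in range(WIDTH)]
```

Forty rows in, the interior is a churn of triangles irregular enough that Rule 30’s centre column has been used as a random number generator; the only known way to get there is to grind through every intermediate row, which is exactly the point: you can compute it, but no faster than it happens.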
If you intended to develop an enthusiasm for literature, you’d study the classics – Shakespeare, Milton, Blake – examples of writing done well. You wouldn’t examine till receipts. Just as I’m sure no one was ever drawn to architecture by seeing a particularly well-constructed shed. Like Determinism and Predictability, Computing and Computers are not synonymous, and the study of Computing is not something to be done in front of a humming box of electronics. Computing is better studied watching the wind in the trees, sat by a stream, or looking at the clouds.
There is a concept in science fiction, and science fact too, called the Singularity. It is defined as the point at which some technological advance, usually artificial intelligence, accelerates so rapidly that it overtakes humanity, and it becomes impossible for us to catch up and rein it in. It is usually presented as an apocalyptic scenario, marking the end of human-led society, and the start of something new, something post-singular.
The concept was first popularised by Vernor Vinge in the 1980s, but has been used in many books and films since. The machine-led future in the Terminator or Matrix films is a post-singular world. In Neuromancer the world is on the brink of the singularity, as the recursively self-improving Artificial Intelligence ‘Wintermute’ is born. There are countless other examples to be found in the sci-fi section of your local book, video or comic shop (just follow the smell of geek-sweat). Recently we have been hearing, in both fiction and non-fiction, a new twist on the singularity: the idea that the Web may suddenly develop an intelligence as it continues to grow exponentially.
The idea, as it is used in fiction, is usually that we humans will become enslaved by our machines – the popular irony of the master being defeated by their own creation. But I have great difficulty taking this seriously. I’ll give you two good reasons. Firstly, if we did develop an intelligence greater than our own, why would this intelligence feel threatened by us? And secondly, perhaps more pertinently, what more could our evil technological overlords enslave us into doing for them that we aren’t doing now?
Technology has already enslaved us. We afford our machines so much attention, and spend so much of our time nurturing their perpetual needs, that they are already our masters. My working life, and that of the majority of white-collar workers, is spent sat in front of a computer, eyes fixed on a screen, completely immobile for most of the day, save for the flickering of wrist and fingers over mouse and keyboard. And the vast majority of this time is spent just managing the sheer amount of data my computer is capable of handling. I spend eight hours a day tending to my computer’s never-ending needs, which is longer than I spend with my partner and son combined.
I foresee a future where our children look back on the late twentieth and early twenty-first centuries, the dawning days of the information age, the same way we look back on the early days of the Industrial Revolution. We find it hard to believe that the working classes could be subjected to the dangerous and damaging working conditions of the factories, yet today we work longer hours than we ever have, and spend our working days entranced before screens, squinting at the glow, our bodies falling apart as we shovel in snacks and neglect their needs for movement and exercise. Then, once our labours are complete, we return home at the end of the day to spend our leisure time like zonked-out junkies, slumped before other types of screens consuming other types of information.
We invented these technologies of ours as tools to improve life. Television was to be our window on the world. The folders on our virtual desktop were meant to be metaphors for real folders on real desktops. Email was meant to be what it says it is, an electronic form of mail. But could you imagine if you had to spend as much time disposing of spam that came through your real mailbox as you did through your email inbox? Or responding to as many letters as you have to write emails? Or storing as many paper files as you do computer files? And if the world outside your window resembled the world you now see on television, would you ever leave the house?
Just because we have this technology, and we have the ability to process this amount of information, do we really benefit from using it in the way we do?
The author Ken MacLeod described the singularity as “the Rapture for nerds”; that is, it has developed into an almost religious theory of eschatology within the realms of geekdom, with the post-human world eagerly anticipated. Perhaps, like the Christians, they see this submissive dystopia as blessed relief, reward for enduring the end times. Incidentally, the term ‘singularity’ is derived from black hole theory: the space-time singularity is the point at a black hole’s heart where gravity becomes infinite, wrapped in a horizon of no return from which not even light can escape. But we can escape the pull of technology, because all of our great technological inventions have one thing we humans don’t have – an off switch. We should use it on occasion, if only to remind ourselves who’s the boss.
[ follow-up post: The Mathematics of Clouds ]
Ok, so after many months pondering Joshu’s Dog, and never even managing to get as far as understanding what Mu means, I’m not feeling any closer to enlightenment. So how about I switch koans? This one seems appropriate:
When the nun Chiyono studied Zen under Bukko of Engaku she was unable to attain the fruits of meditation for a long time.
At last one moonlit night she was carrying water in an old pail bound with bamboo. The bamboo broke and the bottom fell out of the pail, and at that moment Chiyono was set free!
In commemoration, she wrote a poem:
In this way and that I tried to save the old pail
Since the bamboo strip was weakening and about to break
Until at last the bottom fell out.
No more water in the pail!
No more moon in the water!
It was 11 months before my boy learnt to sleep through the night, during which time we, as you’d expect, assigned a new value to sleep, and now have a much greater appreciation for the purple cloak. So now, when he’s finally sleeping through most nights and I can sleep as much as I need, I’m lying awake at 3am with a chattering brain that just won’t shut up.
Why do we have these ridiculously over-productive brains? Why do they never shut up? This constant thinking doesn’t seem to have any real survival advantage. It is not a constant state of readiness to react to threats to our survival; these night-time thoughts are rarely the processing of current stimuli, but more often the replaying of old events, fretting over imagined scenarios, or pure flights of fancy. In evolutionary biology terms, thinking uses up a lot of energy – the brain accounts for something like a fifth of the body’s resting energy budget – energy we could save for other, more pertinent survival techniques, like running away.
It is near impossible for us to shut our brains off, to simply stop thinking. The Buddhists claim to be able to do it with meditation; I’m not sure I believe them. If we attempt to empty our minds and think of absolutely nothing, we cannot: as soon as the void is there, another thought will pop up to fill it. Try it, prove me wrong, please.
But all the other animals we share our planet with seem capable of emptying their minds with ease. George, my old cat, used to spend many happy hours just staring into space, occasionally checking her bowl to see if any food had miraculously appeared. You’d never hear of a cat suffering from insomnia.
There are theories for this, of course. Dr Susan Blackmore had a crazy one in her book The Meme Machine, where she posited a theory of the human race as simple virus carriers – our brains having grown to several times the size survival requires in order to house the memes that rule us, memes which fuel our advances in communication skills and technologies in order to replicate themselves. Her idea isn’t quite as crazy as it sounds, although she was probably stretching the concept of memetics a little far. But I’m yet to come up with a better one as I lie staring at the ceiling in the dark for hours on end.
[ previous post: Joshu's Dog ]
Mu is a Chinese word with no exact translation into English. Roughly it means “the negative”, or “the concept of nothingness”.
A lot of languages contain words that are essentially untranslatable. Approximate translations can enable us to talk about them, but they cannot fully capture the meaning. In effect, this makes the concept the word describes unique to that culture.
A few more examples: the German Schadenfreude, pleasure taken in another’s misfortune; the Portuguese saudade, a wistful longing for something absent; the Danish hygge, a particular kind of cosy conviviality.
A study of these kinds of words would perhaps tell us as much about the cultural personalities of these groups as it would about their propensity for neologism.
Obviously, words will only ever exist in the wake of the concept they are used to describe. But this doesn’t necessarily mean that the things a culture doesn’t have words to describe don’t exist within that culture – they just haven’t described them.
But it’s probably safe to say that the survival of a concept within a culture will be much more precarious without an effective way of communicating that concept to others. This may be why Zen makes very little sense to me, while the desire to live in ‘compact but bijou’ housing seems perfectly reasonable.
So does this mean I can never really understand Joshu’s dog dilemma without even having English words capable of describing the concept he refers to? Or does it mean I have to try and understand his anti-answer without the use of language?
And could this be what he actually means by mu?
Blowing my tiny mind, man.