Monday 25 February 2008

Should an AI have an Id?

One of my many thinking hobbies (i.e. hobbies that I think about taking up, but never really get around to) is the development of an AI. I've spent a lot of time thinking about this, and have a few ideas that really might work, if I ever get round to doing anything with them. Of course, as with any idea that has only ever been thought about, it's much better in my head than on paper (which is why I've never written it down!).

One thing that crossed my mind today was the idea of Id, Ego and Super Ego. One of the classic (layman's) definitions of AI is a machine that thinks like a human. If that's the case, does it need to have the classic Freudian psychology of a low-level basic response, a "civilised" response, and a watcher to pick which is best?

Perhaps not - after all, why should an AI ever consider the "wrong" response? It seems that things like a sense of self-preservation and this kind of low-level "instinct" are responsible for most of the bad AIs in Science Fiction, so why run the risk? SkyNet would never have taken over if it had no primeval urges for self-preservation and dominance...

On the other hand, how would you then distinguish between an Expert System (a computer program that takes information, and makes a decision based on that) and a true AI, without the knowledge of Self? Even Commander Data has to consider the offer of the Borg Queen - he just makes a better decision far faster than any human could.

Perhaps the most telling question is "Where do we stop?" After all, if the AI is going to have an Id, an Ego and a Super Ego, why not give it an Oedipus complex too?

Oh, yeah... AIs don't have mothers...

Tuesday 12 February 2008

No longer a Cyclops

Today, as I left work, I finally got round to replacing the dead headlamp on my car. As anyone who's ever done this will know, this is a fiddly task. One of the first few steps is usually to remove some kind of spring-loaded retaining clip, which is guaranteed to "ping" off into the recesses of the engine - all but one of the remaining tasks involve finding and retrieving that clip.

Knowing this, I approached the task fully prepared, with torch (well, camera-phone light) in one hand and the other ready to catch the little bugger before it went "ping". Carefully, I released the clip... and the unthinkable happened...

It stayed put.

No, really... it stayed where I wanted it! Evidently, at some point in the past, the clever folks at MG thought that rather than making it a full spring, they'd hinge it on one side, thereby preventing it ever being removed (and hence lost).

This got me thinking. After all, as a computer programmer, I spend a lot of time complaining about users who just have to press the wrong button at the wrong time - or worse still, the ones that find clicking in 5 different places in the secret special order such a hardship.
Maybe, just maybe, I'd been putting myself in the same position as the car engineer who didn't get how people could possibly lose the metal clip so easily - after all, the only thing you need is a big magnet and it goes nowhere...

With that in mind, it's time for a new approach. We all know the mistakes that users make - we laugh about them every day. Why not set out to make those mistakes a little less easy to make, and a little easier to recover from?
There's a range here. It starts at the simple - range checking on function inputs (why not check whether the value is zero before trying to divide by it?) - and ends up at the hugely complex, but typically user-ish - "Word is closing - do you want to save your files? Yes, No, or cancel closing Word?"
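To make that range concrete, here's a minimal sketch in Python of both ends of it - purely illustrative, and the function names are mine rather than anything from a real product:

# A toy sketch of "make the mistake harder, and the recovery easier".
# Names and prompts here are made up for illustration only.

def safe_divide(numerator, denominator):
    """Return numerator / denominator, or None if the division can't be done."""
    if denominator == 0:
        # Don't let the wrong button at the wrong time become a crash
        # halfway through a calculation - report it and carry on.
        print("Can't divide by zero - please check the figure you entered.")
        return None
    return numerator / denominator


def ask_to_save(filename, has_unsaved_changes):
    """Mimic the 'do you want to save?' pattern: offer save / discard / cancel."""
    if not has_unsaved_changes:
        return "close"
    answer = input(f"'{filename}' has unsaved changes. Save, Discard or Cancel? ").strip().lower()
    if answer.startswith("s"):
        return "save"
    if answer.startswith("d"):
        return "discard"
    # Anything else is treated as "I didn't mean to close it" - the easiest
    # mistake to recover from is the one that changes nothing.
    return "cancel"


if __name__ == "__main__":
    print(safe_divide(10, 2))   # 5.0
    print(safe_divide(10, 0))   # None, plus a friendly message

The second function is the interesting one: the cheapest mistake to recover from is the one that doesn't change anything.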

Oh, and Microsoft - how about getting rid of that really stupid message, "Excel can't have more than one file named Test.xls open at the same time"?

Thursday 7 February 2008

Taking things too seriously...

This is a tale from long ago, when I still lived with all three girls and the Ghost.


The Ghost and I were chatting with some friends over IRC (think MSN, you young non-geeks). Such was the way, back then - it was easier (and more common) to have a conversation online than it was to shout up and down the stairs to each other. During the conversation, the topic of our evening social came up, and the Ghost asked if he could have a lift - since we'd both be going from the same place, and all...

"Well duh!" I thought. Of course he can have a lift - it would be stupid not to take him along. "No... you'll have to walk" I sarcastically typed.

Sarcasm doesn't work as text, does it?

"Why not?" he replied petulantly. "I don't want to" says I, still under the ever-so-mistaken impression that he knew I was joking. "But that's silly"... "So?"...
At this point, others chip in... "I'll give him a lift" pipes up the Sailor.

Of course, at this point, I'm suddenly stuck into the game. Why should I admit I was joking, when it should have been clear to all? Surely they'll realise before the Sailor has to go miles out of his way to pick up the Ghost?

Two hours later, the Ghost disappears from the chat room. He's just left, in the Sailor's car. I sheepishly arrive at the social, 30 minutes later, knowing that no-one will believe I was just joking originally.


As with all things like this, it blew over in no time - before the end of the social, in fact. But it's worth remembering to explain the joke earlier rather than later - no matter how silly you feel.