15 01 2007

Just finished reading “Maelstrom” by Peter Watts. About this book, I just have to say that one has to admire the level of detail Peter Watts goes into when describing such hot topics as the evolution of artificial systems.

In this book we have various kinds of wildlife on the Interweb (called Maelstrom in those times) that find out that some specific kind of content gives them free and secure passage (secure as in not being deleted) to move around Maelstrom. So, they seek out everything there is to know about The Concept, and in order to keep using it they start generating information about The Concept themselves. They don’t know what it is; they just know that using it is beneficial for them… they don’t care. Wildlife in Maelstrom doesn’t cooperate, it just survives, but thanks to The Concept they somehow start cooperating among themselves and developing specialized kinds of wildlife.

So, the wildlife decide that in order to survive, they have to somehow help The Concept in the real world so it never goes out of fashion. There is the small, unimportant detail of The Concept spreading βehemoth across the world, βehemoth contaminating large expanses of it, and the Firestorm following in its wake (every spot with an outbreak gets burned to hell by the authorities in order to maintain the quarantine). But that’s not their problem.

Eventually these disorganized masses of wildlife evolve and become Anemone, a group of specialized systems that function together and that took a liking to the name. They are also still convinced (like the smart gels in Starfish) that simple is good and complex is bad.

The important thing about all this is… doesn’t it remind you of something like… the evolution of biological systems with a way-too-fast clock rate? I just can’t wait for the day we get computer systems that can evolve by themselves without us intervening in it.
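The dynamic in the book — replicators that survive by matching themselves ever more closely to The Concept — is basically selection pressure on a fast clock. Here is a toy sketch of that idea (my own illustration, nothing from the book: the `CONCEPT` pattern and all the names are made up): a population of bit strings where the ones that best match a target pattern survive and reproduce with mutation.

```python
import random

# Toy illustration of selection pressure: each "genome" is a bit string,
# and the more bits that match The Concept, the better it survives --
# roughly how Maelstrom's wildlife converges on exploitable content.

CONCEPT = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]  # arbitrary target pattern

def fitness(genome):
    # Count how many positions of the genome match The Concept.
    return sum(g == c for g, c in zip(genome, CONCEPT))

def mutate(genome, rate=0.1):
    # Flip each bit independently with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=50, generations=100):
    # Start from random genomes.
    population = [[random.randint(0, 1) for _ in CONCEPT]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the fitter half survives; the rest of the
        # population is refilled with mutated copies of survivors.
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

best = evolve()
print(fitness(best), "/", len(CONCEPT))
```

After a hundred generations the population has typically converged on the target — no genome ever “knows” what The Concept is, it just gets selected for matching it.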

My next book in the queue is Behemoth: B-Max; I need to see how this ends.




2 responses

15 01 2007

I agree, Peter Watts is an amazing writer with a very unique voice.

I would be very scared the day computer systems evolve by themselves; that implies a lot of advances, some of which might be cool but also quite dangerous. This brings me back to Asimov’s Robot series and the Three Laws of Robotics.

The problem is that once artificial systems are capable of evolving, they are a jump away from being sentient beings, which opens a can of worms I really don’t want to open.

You can say I’m a pessimist, and that’s fine, but I do think we should think a bit harder about the consequences of our actions and be responsible with our decisions.

It might be fun to play God, but the consequences can be catastrophic.

16 01 2007
Ricardo Restituyo

I have a feeling that if we ever become able to create such a system, we will somehow feel threatened by it and not create it, just out of fear of what it could become.

The Three Laws are a very good safeguard, but as can be seen in most of Asimov’s books, the systems they govern somehow manage to evolve or arrive at some kind of logic that allows them not to honor them. Some kind of residue always gets accumulated that somehow pushes the system to take the next step.

Maybe evolution is a capability of anything whose functioning is based on logic: evolution may be something logical and not purely biological.

Maybe somebody has written about that already in some scientific paper. Any hints?
