Thursday, January 29, 2009

Abiogenesis in a Bottle

Unfortunately, even with the fastest personal computers available, experiments involving evolution demand a large amount of computing resources. In the current implementation of the Balanced-Force Evolution Machine, 30 Avoider cells compete against 30 Attacker cells, the former attempting to minimize being shot by the Attackers' cannons and the latter attempting the opposite. The average duration of a generation is set to approximately 25 seconds, and at the time of this writing the simulation has reached 600 generations on my AMD quad-core. Notable advancements in group behavior are beginning to appear, as illustrated to the left. To minimize the damage inflicted upon them, the Avoider cells appear to be hiding in the corner. The cells on the outside of this cluster protect those within, and they periodically appear to swap places. As intuitive as this might sound, it is a remarkable adaptation considering that all of these cellular robots began completely mindless. After a few weeks of running this simulation, I expect far more impressive behaviors to emerge. At the moment, all we can do is wait.

Monday, January 26, 2009

Balanced-Force Evolution

This experiment involves the interaction between two evenly matched classes of artificial organisms and is derived from the algorithms of my first open-ended evolution machine. Each "cell" is fitted with a 1 MHz brainfuck-compatible processor with evolution capabilities, steering control, thrusters and decelerators, one-dimensional "laser" scanners for vision, and an optional gun, and competes to either maximize its kill points or minimize its death points. The two classes, Avoiders and Attackers, have essentially opposite goals. While their mechanics and strength are exactly equal and forever locked "as-is", each will inevitably evolve different strategies and behaviors to maximize its chances of survival. A simple user interface is provided so the experimenter can peer into their minds, see through their eyes, examine their genetic information and lineage, and monitor success statistics for the current and past generations with both graphs and history charts. Ultimately, these groups of opposing beings may form coalitions to better their odds. It is not as simple as survival of the fittest, as was once suggested; there are, in fact, many factors involved, and too many variables to have ever been intuitively known. The aim of this experiment is not only to reveal evolution's secrets, but to force the two competing sides into an ever-advancing race for dominance. As one force advances, the other will either be destroyed or succeed with a beneficial mutation. A control factor within the simulation disallows extinction by reverting a wiped-out side to a previous generation; in other words, the losing side gets to try again with a possibly more effective solution. The end result is uncertain, but it might well promote intelligent behavior and possibly the beginnings of a self-aware entity.
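
The control rule can be sketched in a few lines of Python. Everything below is a simplified stand-in, not the machine itself: the bout scoring, genome format, and mutation scheme are all hypothetical, whereas the real cells run their brainfuck programs against the full physics simulation.

    import random

    def mutate(genome):
        # Toy point mutation on a list of byte-sized genes (format assumed).
        i = random.randrange(len(genome))
        return genome[:i] + [random.randrange(256)] + genome[i + 1:]

    def bout(avoider, attacker):
        # Stand-in for one timed match; returns True if the Avoider survives.
        return random.random() < 0.5

    def select_and_breed(population, won):
        # Winners survive unchanged; losers are replaced by mutated winners.
        winners = [g for g, w in zip(population, won) if w]
        return [g if w else mutate(random.choice(winners))
                for g, w in zip(population, won)]

    def coevolve(avoiders, attackers, generations):
        # The balanced-force rule: a side that is wiped out reverts to its
        # previous generation and retries, so neither lineage goes extinct.
        prev_av, prev_at = list(avoiders), list(attackers)
        for _ in range(generations):
            results = [bout(av, at) for av, at in zip(avoiders, attackers)]
            if not any(results):                    # every Avoider was shot
                avoiders = [mutate(g) for g in prev_av]
            elif all(results):                      # every Attacker failed
                attackers = [mutate(g) for g in prev_at]
            else:
                prev_av, prev_at = list(avoiders), list(attackers)
                avoiders = select_and_breed(avoiders, results)
                attackers = select_and_breed(attackers, [not r for r in results])
        return avoiders, attackers

    avoiders = [[random.randrange(256) for _ in range(64)] for _ in range(30)]
    attackers = [[random.randrange(256) for _ in range(64)] for _ in range(30)]
    coevolve(avoiders, attackers, generations=600)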

Friday, January 16, 2009

Cellular Grid Rechargeable Batteries

One of the primary disadvantages of rechargeable batteries is the extended time required to recharge them and the careful process needed to ensure a long service life. Since extremely rapid recharging, by use of high voltage, has a tendency to generate irregularities in the electrodes of the battery's cells, current regulation is a critical part of reliably recharging today's batteries.
A possible remedy involves breaking the battery down into a matrix of hundreds or thousands of micro cells. These miniaturized chemical cells could be recharged more reliably at high speed, and each cell could be controlled individually through multiplexing circuitry. In the same way that a memory chip is divided into bytes and each byte can be accessed randomly rather than sequentially, this high-tech battery could be designed to automatically scan through the cells that need a recharge and skip those that don't. Even more importantly, each cell could be recharged according to its own internal status, eliminating the possibility of damage to the battery as a whole. With a microprocessor-controlled system, the battery could self-organize, on demand, to produce the desired current and voltage output, without the losses associated with voltage regulators. Battery life, percentage charged, and charge cycles remaining could be measured with accuracy far exceeding anything seen before. The possibilities are endless.
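
To make the scan-and-skip idea concrete, here is a rough sketch in Python. The cell interface, voltage threshold, and charge rates are assumptions for illustration, not a real charger design.

    from dataclasses import dataclass

    @dataclass
    class MicroCell:
        voltage: float
        full_voltage: float = 4.2   # hypothetical per-cell cutoff

        def is_full(self):
            return self.voltage >= self.full_voltage

        def safe_rate(self):
            # Assumed rule: the closer a cell is to full, the gentler the charge.
            return max(0.1, 1.0 - self.voltage / self.full_voltage)

    def recharge_pass(cells):
        # One controller scan: select each cell through the multiplexer, skip
        # the full ones (like untouched bytes in a memory chip), and charge
        # the rest at that cell's own safe rate.
        for cell in cells:
            if cell.is_full():
                continue
            cell.voltage += 0.05 * cell.safe_rate()   # stand-in for real charging

    bank = [MicroCell(voltage=3.0 + 0.001 * n) for n in range(1000)]
    while not all(cell.is_full() for cell in bank):
        recharge_pass(bank)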

One Instruction Set Computers


The logical extreme of the reduced instruction set computer, as previously mentioned in this article, is the use of a single parametrized opcode. To put this into perspective, modern x86 architectures, like the computer you are likely using now, and other CISC (complex instruction set computer) designs use hundreds, if not thousands, of individual instructions. Supporting so many different commands forces the processor design to be very complex, increases the chance of glitches, and ultimately reduces the maximum clock speed.

On the other hand, a reduced instruction set computer (RISC) architecture, and especially the one instruction set computer (OISC), involves minimal circuitry inside the actual processor, simplifying the design and allowing components to be placed in a smaller area, which enhances speed by reducing propagation delays and latencies.



The model under investigation here is known as "Subtract-and-Branch-If-Less-than-or-Equal-to-Zero", symbolized as "subleq", and coded with no opcodes: since there is only a single opcode, the computer requires no explicit statement of which operation is expected, so the opcode is implied and memory space is saved. Subleq takes three parameters, also known as operands, marked a, b, and c. The basic, and only, function of this instruction is to set the value at b equal to b minus a; in other words, b = b - a. For those not familiar with the assignment operators used in computer programming, this statement is evaluated by taking the current value of b - a, whatever it happens to be. That value is then stored in, "assigned to", b, overwriting its previous contents, while no change is made to a. If the new value at b is less than or equal to zero, program execution "jumps" to the location held by parameter c; more simply, the processor's program counter is set to the value of c. If a jump is not required, you can't simply write 0 for c, since that would cause the program to reset and start over; 0 is (typically) the starting point of the program counter. It is generally accepted to leave the third parameter c blank in these cases, which means that, regardless of the previous operation, execution continues at the next instruction. It's still a jump, but only by one instruction. Alternatively, the programmer can simply write in the address of the next instruction, which is exactly the same.
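
The whole machine is small enough to emulate in a few lines of Python. The sketch below is mine, not part of any standard: in particular, the halt convention (a negative jump target stops the machine) is an assumption for illustration, and a "blank" c is encoded as the address of the next instruction, exactly as described above.

    def subleq(mem, pc=0):
        # Each instruction occupies three consecutive words: a, b, c.
        while pc >= 0:
            a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
            mem[b] -= mem[a]                    # b = b - a; a is untouched
            pc = c if mem[b] <= 0 else pc + 3   # jump to c, or fall through
        return mem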

There are many ways to implement this design, and slight modifications are possible to create new instruction sets. Since programs can become extraordinarily long in such an instruction set, it can also be beneficial to add specialized hardware to the processor so that multiplication or other mathematical functions can be performed in a single cycle. However, as a proof of concept, the OISC described here is capable of universal computation, all with only one instruction!

If you've never run across such a claim before, I don't expect you to immediately understand how to program in this machine language. It's interesting that, at this point, you know everything about this one instruction set computer and yet, at the same time, probably wouldn't have any idea where to begin creating a program to drive it.

Well, here's a start. To perform addition of two numbers on this computer, all you have to do is...
    ADD a, b == subleq a, Z    ; Z = Z - a = -a  (Z is a cell holding zero)
                subleq Z, b    ; b = b - (-a) = b + a
                subleq Z, Z    ; Z = 0 again, ready for the next use
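
To check the macro, here is a memory image that runs it through the interpreter sketched earlier; the addresses and the halt cell are my own choices for illustration.

    program = [
        9, 11, 3,    # subleq a, Z:  Z = -a, continue at address 3
        11, 10, 6,   # subleq Z, b:  b = b - (-a) = b + a
        11, 11, -1,  # subleq Z, Z:  Z = 0, then the jump to -1 halts
        7, 5, 0,     # data: a = 7 (addr 9), b = 5 (addr 10), Z = 0 (addr 11)
    ]
    subleq(program)
    print(program[10])   # prints 12
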
Good luck, OISC programmers!

Saturday, January 10, 2009

Accelerometers - MEMS by Analog Devices


The picture to the left shows the ADXL322 dual-axis accelerometer with analog output lines, a close variant of the triple-axis version used within Nintendo's Wii video game console. The PCB on which it is mounted is only a breakout board and carries the capacitors needed to set the IC's sample rate. These easy-to-use and easy-to-solder modules typically cost around $30 USD. However, if your soldering skills are sufficient, you'd be much better off buying the accelerometer alone for as low as $3. With the rapid decline in the price of these micro-engineered machines, hobbyists and businesses everywhere are taking advantage of the benefits these tiny devices can provide.
These accelerometers, with sampling speeds of over 1000 samples per second, can directly measure dynamic acceleration and the static acceleration of gravity, which can be mathematically converted into quantities such as velocity, position, orientation, jerk, and G-force. With the variety of calculations that can be performed from this single chip, it is easy to imagine a vast range of machines and devices: digital hourglasses, computer pointing devices, degree-of-impact sensors, free-fall sensors (especially important if you drop your laptop), Star Wars style lightsaber toys, image stabilization for binoculars, telescopes, and cameras... the list is endless. A single chip such as this could provide all of the sensing, in a single package, for a fully capable self-driving car: extrapolated position can be somewhat accurate, but more importantly, speed and impacts can be measured without the need to count wheel rotations and hope the wheels didn't slip, or to wire impact sensors 360 degrees around the chassis. If three axes are desired, the ADXL330 is a well-known and widely available model that provides just that. Several other versions provide PWM outputs instead of analog, for easy integration with a digital microcontroller. So please go buy some of these devices, build something amazing, and let me know how it turns out.
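
As a quick illustration of the math, here is a hedged sketch in Python that dead-reckons velocity and position from raw samples along one axis. In practice, integration drift accumulates quickly, which is exactly why extrapolated position is only "somewhat accurate".

    def dead_reckon(samples, dt):
        # samples: accelerations in m/s^2; dt: seconds between samples.
        velocity = position = 0.0
        for a in samples:
            velocity += a * dt         # velocity is the integral of acceleration
            position += velocity * dt  # position is the integral of velocity
        return velocity, position

    # One second of a 1000-sample-per-second stream at a steady 1 m/s^2:
    print(dead_reckon([1.0] * 1000, dt=0.001))   # roughly (1.0 m/s, 0.5 m)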

Wednesday, January 7, 2009

Self Driving Automobile - Version One Point Zero



To simply press a button and expect your car to safely and flawlessly drive itself and its occupants to a destination still seems far off in the future. Navigating the innumerable possible obstructions, variations in law, and unpredictable human behaviors of surrounding traffic would seem to require an impossibly complicated set of instructions and algorithms, far too vast for today's processors. At least, this is the current consensus of most artificial intelligence researchers. However, new methods of machine learning, pattern recognition, object avoidance, path-finding algorithms, and prediction mechanisms just might make this dream a reality.

To explore this possibility, I have developed a simple experiment involving a reduced-risk environment. After all, placing one's own vehicle in jeopardy is both financially and morally dangerous. Instead, an inexpensive radio-controlled model car will suffice, deprived, of course, of its internal radio receiver circuit. In its place, an advanced learning processor will assume total control over the vehicle's movements. Aided by a network of proximity sensors, accelerometers, and a 360-degree VGA camera that can be rotated at the CPU's desire, the learning machine will explore its world, learning the causal relationships between events such as an approaching wall and an imminent collision. Under its own control, the car is able to exceed normal operating limits with high-speed 20-volt sprints, rear differential locking, and dynamic electronic braking, performing otherwise impossible maneuvers for quick escapes from the hard-to-predict variables of the real world.

Rather than using a neural network, the processor will emulate a virtual system capable of open-ended evolution. Much of my previous work on this approach is explained in prior postings on this blog. In essence, this virtual system will be coded by a genetic sequence. Since nanotechnology and self-assembly are currently beyond most hobbyists' budgets, this car will have no replication, repair, or homeostasis abilities, so with respect to most definitions it will not be considered alive. Possibly more importantly, however, its mind, the virtual system being emulated by the car's processor, will be capable of self-organization and full-blown open evolution, governed directly by the laws of natural selection. This means, basically, that its "brain" can be physically changed (within its virtual world): not only its software, but also its hardware. The genetic sequence encoding the construction of the automaton's mind is to be stored on a 4 GB flash card, which in theory is enough to hold the entire human genome: at two bits per base pair, its roughly three billion base pairs fit in well under 1 GB.

Since open-ended evolution can be aimless at times, I have decided to add directives to the processor that controls the virtual evolution experiment. It is important to understand the levels of abstraction involved in this device, so I will reiterate plainly. The car is physical, and it has a physical processor and physical sensors. The processor, however, generates a virtual computer through a process called emulation. It is that virtual computer that sees through the sensors and drives the car. The physical processor will supply parameters to the evolution experiment, such as the rule that a collision results in death, and therefore that whatever specific genetic code was running at the time is a failure. Performance will be ranked, and behaviors such as object avoidance, trajectory prediction, and camera pattern recognition should emerge. The system will be rewarded for exploring larger areas, requiring greater accuracy from the vision system to avoid impacts while traveling. Unlike GM's supposed claim of a self-driving car, this machine would be capable of learning, limited only by cheap memory, and would never require human intervention.
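
Those directives amount to a fitness function, sketched below in Python. The names and the grid-cell scoring are hypothetical placeholders, not the final reward scheme.

    from dataclasses import dataclass, field

    @dataclass
    class Run:
        collided: bool = False
        visited_cells: set = field(default_factory=set)  # floor grid cells covered

    def fitness(run):
        # A collision means death: that genome is scored as a failure.
        if run.collided:
            return 0.0
        # Otherwise, reward covering more ground, which in turn demands more
        # accurate vision to keep avoiding impacts while moving.
        return float(len(run.visited_cells))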

The evolutionary process, taking the car from a system that doesn't understand its visual input at all to an expert driver, could potentially take several years to complete. The car would be taught rather than programmed, and the algorithms generated by this selection effort could be applied to a wide variety of useful tasks, which one day may be combined in a more advanced design to safely and reliably transport the world's population to their destinations in fully automatic, self-aware vehicles.