
Thursday, January 29, 2009
Abiogenesis in a Bottle

Monday, January 26, 2009
Balanced Forced Evolution

Friday, January 16, 2009
Cellular Grid Rechargeable Batteries
A possible remedy involves breaking the battery down into a matrix of hundreds or thousands of micro cells. These miniaturized chemical cells could be recharged more reliably at high speed, and each cell could be controlled individually through multiplexing circuitry. In the same way that a memory chip is divided into bytes and each byte can be accessed randomly rather than sequentially, this high tech battery could be designed to automatically scan through the cells that needed a recharge and skip those that didn't. Even more importantly, each cell could be recharged according to its own internal status, eliminating the possibility of damage to the battery as a whole. With a microprocessor-controlled system, the battery could self-organize, on demand, to produce the desired current and voltage output, without the losses associated with voltage regulators. Battery life, percentage charged, and charge cycles remaining could be measured with accuracy far exceeding anything seen before. The possibilities are endless.
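The scan-and-skip idea can be sketched in a few lines of Python. Everything here is an illustrative assumption (cell count, charge thresholds, the per-pass charge step); it is a toy model of the multiplexer pass, not a real battery-management API.

```python
# Toy model of a multiplexed micro-cell charge scan.
# CELLS, FULL, and STEP are illustrative assumptions, not real hardware values.

CELLS = 16          # a real pack might have hundreds or thousands of cells
FULL = 100          # percent charge considered "full"
STEP = 25           # charge added to a needy cell per pass, in percent

def scan_and_charge(levels):
    """One multiplexer pass: top up only the cells that need it."""
    charged = 0
    for i, level in enumerate(levels):
        if level >= FULL:
            continue                      # skip full cells, like skipping bytes in RAM
        levels[i] = min(FULL, level + STEP)
        charged += 1
    return charged

levels = [100, 40, 75, 100] * (CELLS // 4)
while scan_and_charge(levels):            # repeat passes until no cell needs charge
    pass
print(all(v == FULL for v in levels))     # every cell individually topped up
```

Each pass touches only the cells below full, so a mostly-charged pack finishes quickly, and no individual cell is ever pushed past its limit.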
One Instruction Set Computers

The logical extreme of reduced instruction set computers, as briefly mentioned earlier in this article, is the use of a single parameterized op-code. To put this in perspective, modern x86 architectures, like the computer you are likely using now, and other CISC (complex instruction set computer) designs use hundreds, if not thousands, of individual instructions. Supporting so many different commands forces the processor design to be very complex, which increases the chance of glitches and ultimately limits the maximum clock speed.
On the other hand, a reduced instruction set computer (RISC) architecture, especially the one instruction set version (OISC), involves minimal circuitry inside the processor itself, simplifying the design and allowing components to be placed in a smaller area, which enhances speed by reducing propagation delays and latencies.
The model under investigation here is known as "Subtract and Branch if Less than or Equal to Zero", abbreviated "subleq", and coded with no opcodes: since only a single operation exists, the computer does not require explicit knowledge of what is expected, so the opcode is implied and memory space is saved. Subleq takes three parameters, also known as operands, marked a, b, and c. The basic, and only, function of this instruction is to set the value at b equal to b minus a; in other words, b = b - a. For those not familiar with the assignment operators used in computer programming, the statement is evaluated by computing the current value of b - a; that result is then stored in, or "assigned to", b, while no change is made to a. If the new value at b is less than or equal to zero, program execution "jumps" to the location held by parameter c; more simply, the processor's program counter is set to the value of c. If a jump is not required, you can't simply write 0 for c, since that would cause the program to reset and start over (0 is typically a program counter's starting point). It is generally accepted to leave the third parameter c blank in these cases, which tells the processor to proceed to the next instruction regardless of the previous result. It's still a jump, but only by one instruction. Alternatively, the programmer can write in the location of the next instruction explicitly; the effect is exactly the same.
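The semantics just described fit in a few lines of Python. This is a toy model, not any particular hardware: the memory layout and the choice to model a "blank" c as falling through to pc + 3 are assumptions for the sketch.

```python
def subleq_step(mem, pc):
    """One subleq instruction: mem[b] -= mem[a]; jump to c if the result <= 0,
    otherwise fall through to the next instruction at pc + 3."""
    a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
    mem[b] -= mem[a]
    return c if mem[b] <= 0 else pc + 3

# One instruction (a=3, b=4, c=0) followed by its data: mem[3] = 2, mem[4] = 7.
mem = [3, 4, 0, 2, 7]
pc = subleq_step(mem, 0)
print(mem[4], pc)   # 7 - 2 = 5; the result is positive, so pc falls through to 3
```

Note that a, b, and c are memory addresses, not values: the instruction subtracts the contents of cell a from the contents of cell b.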
There are many ways to implement this design, and slight modifications are possible to create new instruction sets. Since programs can become extraordinarily long in such an instruction set, it can also be beneficial to add specialized hardware to the processor so that multiplication or other mathematical functions can be performed in a single cycle. Nevertheless, as a proof of concept, the OISC described here is capable of universal computation, all with only one instruction!
If you've never run across such a claim before, I don't expect you to immediately understand how to program in this machine language. It's interesting that, at this point, you know everything about this one instruction set computer and yet, at the same time, probably wouldn't have any idea where to begin creating a program to drive it.
Well, here's a start: to perform addition of two numbers on this computer, all you have to do is...
ADD a, b == subleq a, Z
            subleq Z, b
            subleq Z, Z
Good luck OISC programmers!
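As a sanity check, the three-instruction ADD macro can be run on a toy subleq interpreter in Python. The halt convention (a negative c stops the machine) and the memory layout are assumptions made for this sketch; Z is a scratch cell that starts at zero, as the macro requires.

```python
def run(mem, pc=0):
    """Toy subleq machine: a negative program counter halts (an assumption
    of this sketch, not part of any standard)."""
    while pc >= 0:
        a, b, c = mem[pc:pc + 3]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3
    return mem

# Data cell addresses: A = 9, B = 10, Z = 11 (scratch, initially zero).
A, B, Z = 9, 10, 11
prog = [
    A, Z, 3,     # subleq A, Z  ->  Z = Z - A = -A
    Z, B, 6,     # subleq Z, B  ->  B = B - Z = B + A
    Z, Z, -1,    # subleq Z, Z  ->  Z = 0; result <= 0, so branch to -1 (halt)
    5, 7, 0,     # the data: A = 5, B = 7, Z = 0
]
mem = run(prog)
print(mem[B])    # 5 + 7 = 12
```

The trick is that subtracting a negative works as addition: the first instruction negates A into the scratch cell, the second subtracts that negative from B, and the third clears the scratch cell for the next use.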
Saturday, January 10, 2009
Accelerometers - MEMS by Analog Devices



Wednesday, January 7, 2009
Self Driving Automobile - Version One Point Zero
To simply press a button and expect your car to safely and flawlessly drive itself and its occupants to a destination still seems far off in the future. Navigating innumerable possible obstructions, variations in traffic law, and the unpredictable behavior of surrounding human drivers would require an almost impossibly complicated set of instructions and algorithms, far too vast for today's processors. At least, this is the current consensus of most artificial intelligence researchers. However, new methods of machine learning, pattern recognition, object avoidance, path finding, and prediction just might make this dream a reality.
Wednesday, December 31, 2008
One Bit Manipulation - A Reduced Instruction Set Computer

The esoteric computer programming language brainfuck, a reduced instruction set language and slight variation of P′′ ("p prime prime"), uses a set of only eight instructions to perform universal computation. In other words, it can perform operations equivalent in complexity to those of any other computer, despite the fact that most modern processors use instruction sets consisting of several thousand commands. Brainfuck uses basic operations to control both its program counter and its one and only memory pointer. By moving the memory and program pointers back and forth, and incrementing and decrementing the values they point to under conditional control, the limited language is capable of solving any algorithmically expressible problem. The 8 instructions are:
+ (increment value at pointer)
- (decrement value at pointer)
> (move pointer right)
< (move pointer left)
[ (jump forward past the matching "]" if the value at the pointer is zero; brackets may be nested)
] (jump back to the command after the matching "[" if the value at the pointer is non-zero)
. (output the value at the pointer)
, (accept an input and store it in the location pointed to by the pointer)
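The eight commands above can be executed by a very small interpreter. The sketch below is one minimal Python implementation, with the usual conventions assumed (8-bit wrapping cells, a 30,000-cell tape, input bytes taken from a string):

```python
def bf(code, inp=""):
    """Minimal brainfuck interpreter: 8-bit wrapping cells, matched-bracket jumps."""
    # Precompute matching bracket positions so jumps are O(1).
    match, stack = {}, []
    for i, ch in enumerate(code):
        if ch == "[":
            stack.append(i)
        elif ch == "]":
            j = stack.pop()
            match[i], match[j] = j, i
    tape, ptr, pc, out, it = [0] * 30000, 0, 0, [], iter(inp)
    while pc < len(code):
        ch = code[pc]
        if ch == "+":   tape[ptr] = (tape[ptr] + 1) % 256
        elif ch == "-": tape[ptr] = (tape[ptr] - 1) % 256
        elif ch == ">": ptr += 1
        elif ch == "<": ptr -= 1
        elif ch == ".": out.append(chr(tape[ptr]))
        elif ch == ",": tape[ptr] = ord(next(it, "\0"))
        elif ch == "[" and tape[ptr] == 0: pc = match[pc]
        elif ch == "]" and tape[ptr] != 0: pc = match[pc]
        pc += 1
    return "".join(out)

# 6 * 11 - 1 = 65 = ASCII 'A': multiply by looping, then adjust and output.
print(bf("++++++[>+++++++++++<-]>-."))   # prints "A"
```

The example program shows the idiom the language relies on: a `[ ... ]` loop that decrements one cell while repeatedly adding to another, turning the humble increment into multiplication.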
Although this set may already seem maximally reduced, it can be decreased further! Firstly, the input and output commands are completely unnecessary. They simply provide a convenient method of transferring data between the processor and the external world. Other methods exist, such as memory mapping, and because of this, they can be eliminated.
On top of that, the brainfuck language assumes that the registers comprising the memory locations are 8 bits wide; however, 8-bit registers are not necessary, and reducing them to the logical extreme of only one bit allows the + and - commands to be merged into a single instruction. Since incrementing or decrementing a 1-bit value simply reverses it (ones turn to zeros and zeros turn to ones), only one command is necessary, and for the purposes of this article we can express it as "x", meaning flip the current bit.
With this modified language, constructed from only 5 opcodes, one can create an even less efficient way of solving any problem. Is it possible to reduce the instruction set even further? Surprisingly, the answer is yes. By the inclusion of more complicated internal architecture and the addition of parameter-based operations, the instruction set can be reduced to only one instruction. Such an impractical system does little more than illustrate the methods by which a single instruction computer can work; however, it greatly reduces the number of transistors required to fabricate the processor, and assuming transistor switching times one day become short enough, it could very well become cost effective.
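The 5-opcode, 1-bit variant described above ("x", ">", "<", "[", "]") is small enough to interpret in a few lines. The tape length and the name "x" are assumptions carried over from this article; this is a toy sketch, not a standardized language.

```python
def bitbf(code):
    """Toy interpreter for the 5-opcode bit-flip variant: "x" flips the
    current bit; > < [ ] behave as in brainfuck, but on 1-bit cells."""
    match, stack = {}, []
    for i, ch in enumerate(code):
        if ch == "[":
            stack.append(i)
        elif ch == "]":
            j = stack.pop()
            match[i], match[j] = j, i
    tape, ptr, pc = [0] * 64, 0, 0
    while pc < len(code):
        ch = code[pc]
        if ch == "x":   tape[ptr] ^= 1    # one opcode replaces both + and -
        elif ch == ">": ptr += 1
        elif ch == "<": ptr -= 1
        elif ch == "[" and tape[ptr] == 0: pc = match[pc]
        elif ch == "]" and tape[ptr] == 1: pc = match[pc]
        pc += 1
    return tape

# Set bit 0, then use the loop to move it into bit 1.
tape = bitbf("x[>x<x]")
print(tape[:2])   # [0, 1]
```

With only a flip available, "moving" a bit means setting the destination and clearing the source inside a conditional loop, which is exactly what the tiny example program does.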