In prior posts I discussed some issues around self-replication. Today I'd like to briefly cover another related issue: software bugs.
A software bug is an error in programming code that causes a program to behave in unintended ways. The results range from merely annoying to bringing down multiple systems. Bugs can occur at many levels, from the program code itself to the compiler used to build the software to the hardware it runs on, but the end result is usually unpredictable.
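To make "unintended ways" concrete, here is a minimal, entirely hypothetical example: an off-by-one bug that never crashes anything and raises no error. The answer is just quietly wrong, which is exactly what makes such bugs hard to catch.

```python
# A classic off-by-one bug: the function intends to sum the integers
# 1 through n, but the loop stops one short. Nothing crashes; the
# result is simply incorrect.

def sum_first_n(n: int) -> int:
    total = 0
    for i in range(1, n):  # bug: should be range(1, n + 1)
        total += i
    return total

def sum_first_n_fixed(n: int) -> int:
    # Closed-form version: no loop bounds to get wrong.
    return n * (n + 1) // 2

# sum_first_n(10) returns 45; the intended answer is 55.
```

A program containing this bug would run to completion every time, passing its result downstream to whatever consumes it.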
Looking into the future, let's assume that one day we'll have self-replicating machines. Let's then assume those devices are driven by software. With most software on the market, however, there are bugs. Lots of them in some cases.
For instance, the operating system commonly known as Windows is actually many interconnected programs working together. Windows XP is thought to contain over 45 million lines of code, and subsequent versions contain many more. Despite the vast engineering staff employed on such products (development, testing, and so on), it's a given that such complexity brings the potential for bugs. Those bugs are often exploited (at the expense of consumers and businesses) until Microsoft patches them. In other cases, they cause programs to crash, throw off calculations, or are simply annoying.
But does this only occur with huge programs? Hardly.
For example, the Mars Climate Orbiter mission was undone by a mismatch in units: the ground software supplied thruster impulse data in pound-force seconds (an Imperial unit), while the spacecraft's navigation software expected newton-seconds (metric). On a programming level, that is not a complicated bug. On a mission level, it meant demise.
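A toy sketch shows how silently this kind of bug operates. The function names, thrust value, and mass below are all hypothetical stand-ins, not the actual flight software; the only real number is the conversion factor between pound-force seconds and newton-seconds.

```python
# Hypothetical illustration of a unit-mismatch bug: one component
# reports impulse in pound-force seconds, another consumes it as if
# it were newton-seconds. Python raises no error; the numbers are
# just off by a constant factor.

LBF_S_TO_N_S = 4.448222  # 1 pound-force second in newton-seconds

def thruster_impulse_lbf_s(burn_seconds: float) -> float:
    """Ground-side component: reports impulse in pound-force seconds."""
    THRUST_LBF = 0.2  # hypothetical small-thruster force
    return THRUST_LBF * burn_seconds

def delta_v(impulse_n_s: float, mass_kg: float) -> float:
    """Navigation-side component: expects impulse in newton-seconds."""
    return impulse_n_s / mass_kg  # change in velocity, m/s

mass = 640.0  # kg, hypothetical spacecraft mass
impulse = thruster_impulse_lbf_s(100.0)

wrong_dv = delta_v(impulse, mass)                   # units silently mismatched
right_dv = delta_v(impulse * LBF_S_TO_N_S, mass)    # converted correctly

# The two results differ by a factor of about 4.45. Accumulated over
# many burns, an error of that size reshapes a trajectory.
```

Nothing in the type system or the runtime distinguishes the two quantities; both are just floats. That is why the mismatch survived review until the spacecraft was lost.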
With future small-scale self-replicating devices, a software bug could alter the character and behavior of the device in unforeseen ways. Perhaps makers of such devices will incorporate a switch to disable them should things spiral out of control. On the other hand, once a device is released it may not have any kind of "patching" capability, unless it is networked wirelessly. Taken down to the nanoscale, it may be impossible to rein in an errant device.
Extrapolate that to a large scale, and who knows what will happen. What if those machines are networked with one another but fail to receive patching instructions? This would take us beyond the realm of Autofac or even the grey goo scenario I mentioned in a previous post. Those bugs could introduce just enough of a course alteration to produce a system that no one could predict or maintain.
Research into these fields will go on, of course, but these issues should be considered well in advance of the product development stage. After all, the world of commerce is littered with products that were intriguing in the lab or even on paper, but did not pan out in the real world. With a self-replicating machine, however, even simple software bugs could unleash a new paradigm: mechanized mutation.
I'm not convinced anybody is prepared for that.