Friday, November 30, 2012
Just a quick update today...the science fiction short story The View From Under the Bridge will be available for free on Amazon.com tomorrow morning (Saturday, December 1st) for one day only. This story is part of a larger collection, Corridors, which should be available in a few weeks.
If you like this particular story, feel free to check out the two other stories that are currently available, Firebugs and Image Management.
Monday, November 26, 2012
Self-Replicating Lemons
In prior posts I discussed some issues around self-replication. Today I'd like to briefly cover another related issue: software bugs.
A software bug is an error in programming code that causes a program to behave in unintended ways. The resulting behavior can range from a minor annoyance to a failure that brings down multiple systems. Bugs can creep in at many levels...from the program code itself, to the compiler used to build the software, to the hardware it runs on...but the end result is usually unpredictable.
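To make that concrete, here's a minimal sketch (a toy example of my own, not drawn from any real product) of how a one-character slip quietly produces the wrong answer:

```python
# A classic off-by-one bug: the loop is meant to sum all five values
# but silently skips the last one.
values = [1, 2, 3, 4, 5]
total = 0
for i in range(len(values) - 1):  # bug: should be range(len(values))
    total += values[i]
print(total)  # prints 10, not the intended 15
```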
Looking into the future, let's assume that one day we'll have self-replicating machines, and that those machines are driven by software. Like most software on the market, that software will have bugs...lots of them, in some cases.
For instance, the operating system commonly known as Windows is actually many interconnected programs working together. Windows XP is thought to contain over 45 million lines of code, and subsequent versions contain many more. Despite the vast engineering staff employed to work on such products (development, testing, etc.), it's a given that with such complexity comes the potential for bugs. Those bugs are often exploited (at the expense of consumers and businesses) until Microsoft patches them. In other cases, bugs can crash programs, throw off calculations, or simply be annoying.
But does this only occur with huge programs? Hardly.
For example, the Mars Climate Orbiter mission was undone by a mismatch in units: the ground software reported thruster impulse in Imperial pound-force seconds, while the onboard navigation software expected metric newton-seconds. On a programming level, that is not a complicated bug. On a mission level, it meant the spacecraft's demise.
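The mission's actual code isn't public, but the class of bug is easy to reproduce. A toy sketch, assuming nothing about the real flight software...two modules pass a bare number and silently disagree about its units:

```python
# A toy reconstruction of the *class* of bug that doomed the Mars Climate
# Orbiter -- not the actual mission code. Two modules exchange a bare
# number and silently disagree about what it means.

LBF_S_TO_N_S = 4.44822  # one pound-force second in newton-seconds

def ground_software() -> float:
    """Ground team reports an impulse of 100 pound-force seconds."""
    return 100.0

def navigation(impulse: float) -> float:
    """Navigation assumes the number is already in newton-seconds."""
    return impulse

used = navigation(ground_software())        # modeled as 100.0 N*s
actual = ground_software() * LBF_S_TO_N_S   # thrusters really delivered ~444.8 N*s
print(f"navigation modeled {used} N*s; thrusters delivered {actual:.1f} N*s")
```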
With future small-scale self-replicating devices, a software bug could alter the character and behavior of the device in unforeseen ways. Perhaps makers of such devices will incorporate a switch to disable them should things spiral out of control. On the other hand, once a device is released it may not have any kind of "patching" capability, unless it is networked wirelessly. Taken down to the nanoscale, it may be impossible to rein in an errant device.
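For what such a switch might look like, here's a hypothetical sketch...the endpoint, names, and protocol are all my own assumptions, not any real device's API:

```python
# A hypothetical remote "kill switch" check, sketched for illustration only.
import urllib.request

KILL_SWITCH_URL = "https://example.com/replicator/status"  # hypothetical endpoint

def may_continue_replicating() -> bool:
    """Ask home base for permission before building another copy."""
    try:
        with urllib.request.urlopen(KILL_SWITCH_URL, timeout=5) as resp:
            return resp.read().strip() != b"HALT"
    except OSError:
        # The worry above in one line: if the device isn't networked,
        # no halt order (or patch) can ever reach it. Failing open here
        # means an unreachable device just keeps replicating.
        return True
```

Note the uncomfortable default: whatever the code does when the network is unreachable, it does forever.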
Extrapolate that to a large scale, and who knows what will happen. What if those machines are networked together but fail to receive patching instructions? This would take us beyond the realm of Autofac or even the gray goo scenario I mentioned in a previous post. Those bugs, or errors, could introduce just enough of a course alteration to produce a system that no one could predict or maintain.
Research into these fields will go on, of course, but these issues should be considered well in advance of the product development stage. After all, the world of commerce is littered with products that were intriguing in the lab or even on paper, but did not pan out in the real world. With a self-replicating machine, however, even simple software bugs could unleash a new paradigm: mechanized mutation.
I'm not convinced anybody is prepared for that.
Friday, November 16, 2012
Endless Factories
Over the past few weeks, I've been covering a number of recent developments in the worlds of nanotech and 3D printing, along with a brief overview of the concept of fractals (as it pertains to self-replicating patterns). Today, however, I'd like to briefly address an intriguing issue in the world of replication...that of "self-replication".
Previously, I mentioned a little bit about the RepRap machine, which can potentially reproduce parts for yet another RepRap machine. The end goal, of course, is to create a machine that can fully replicate itself. So far, the videos I've seen only show the machines making parts, but not actually assembling them. Now, the concept of self-replication is just that...a machine reproducing copies of itself and even assembling that copy without any human intervention. The copying process would likely be driven by software code...much like DNA is used by a cell as a set of instructions.
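Software already has a miniature version of that idea: a quine, a program whose only output is an exact copy of its own source. A minimal Python sketch, offered purely as an analogy for instructions that reproduce themselves:

```python
# A minimal self-replicating program (a "quine"): run it and it prints
# an exact copy of its own source, comments included.
s = '# A minimal self-replicating program (a "quine"): run it and it prints\n# an exact copy of its own source, comments included.\ns = %r\nprint(s %% s)'
print(s % s)
```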
Self-replication is all fine in theory, but it's doubtful that early designs will include much error-checking in the copying process. A self-copy is only going to be as good as the original master machine itself...which means that if there are any hidden "bugs" in the original, they'll be in the copy, too.
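One conceivable safeguard is a checksum: hash the copy's instruction set and compare it against the master's before the copy is allowed to run. A minimal sketch, with hypothetical filenames...this is an illustration, not anyone's actual design:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical filenames, purely for illustration.
if sha256_of("master_instructions.bin") == sha256_of("copy_instructions.bin"):
    print("copy verified: instruction sets match")
else:
    print("copy corrupted: halt assembly")
```

The catch is exactly the one above: verification against the master only catches copying errors. A bug already present in the master sails through every check.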
In addition to large-scale attempts at self-replication, the race is also on to create self-replicating nanobots. Assembly is much more difficult at the nanoscale, and although significant strides have been made toward nanoscale machines, self-replication presents a new set of challenges. Despite the impressive engineering involved, it seems the tradeoff for size will be increased vulnerability and fragility, at least at the individual level.
How does one overcome such an obstacle? One way would be to increase the sheer number of individual machines. This strategy is easy to find in the world of biology; in nanotechnology terms, the best analogy might be bacteria. On the surface it sounds like a good idea, but there is the potential for a "gray goo" scenario unfolding (as described by Eric Drexler)...self-replication that gets completely out of hand, to the point that it could cause chaos on an enormous scale, especially if the devices consume local resources in order to reproduce themselves.
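A bit of toy arithmetic shows how quickly "out of hand" arrives. The one-copy-per-hour rate below is my assumption, purely to put numbers on the doubling:

```python
# Back-of-the-envelope arithmetic on unchecked replication. Assumption
# (mine, for illustration): each device assembles one copy per hour,
# so the population doubles hourly.
population = 1
hours = 0
MOLE = 6.022e23  # Avogadro's number, for a sense of scale
while population < MOLE:
    population *= 2
    hours += 1
print(f"{hours} hours to exceed {MOLE:.0e} devices")  # 79 hours -- a bit over 3 days
```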
Science fiction has a colorful history when it comes to describing possible scenarios with self-replicating machines...from Stargate SG-1 to Star Trek to books/short stories such as Philip K. Dick's Autofac. This is one topic I hope to explore in the next project I'm working on, Fractal Standard Time.
Unfortunately, I think research into this area will end up being more of an afterthought...well after the actual devices are created and released into the world. It would be one thing to deal with RepRap machines run amok, but nanomachines? Depending on the creation, the consequences of flawed design in both hardware and software would be unpredictable at best and maybe even unstoppable at worst. How can you prepare to fend off a device you can't even see?
Next time I'll discuss one of the potential hidden flaws in these theoretical machines: software bugs.
Tuesday, November 13, 2012
Corridors Update
Progress is being made on the cover for Corridors...and hopefully I'll have it available before Christmas. In the meantime, I'm releasing three short stories from the collection. Right now they are only available in Kindle format on Amazon...here, here and here. They will also be available on Barnes & Noble and iTunes within the next week or so.
The three stories I've made available are Firebugs (about dueling electronic insects), Image Management (a day in the life of an employee at a company that offloads your memories for you), and The View From Under the Bridge (a story about someone who discovers a box of cartridges at a garage sale...that are full of human personalities).
The full short story collection will be coming soon, to be followed by another short story collection early next year. As always...stay tuned!