This abbreviated post, about a topic I consider important (I have generally given up trying to make such matters audible via blog), was prompted by my running across a fairly recent tweet I posted (Twitter: more feathers blowing in the hurricane).
My tweet said: "Strong AI by 2023 is far greater a techno-threat to society than threshing machines. NeoLuddites, what now?"
AI is, of course, Artificial Intelligence, whose near-term incarnation should be, but is not, one of those technological consequences best dealt with BEFORE rather than after we see the dust cloud of its arrival. Sentient AI is just beyond the horizon. If you think computer technology has already intruded too far into humanity’s, society’s and your family’s lives, hold onto your hats.
And yet, we once again seem to be content to acquiesce to the notion that, if it can be done, it should be done. The mere ability implies the moral imperative to apply.
Indeed, there is considerable good to come from the inevitable collaboration of nanotechnology, robotics, synthetic biology and computers. Consider, for example, the near-certainty of elder-care robots.
If you think this is an absurd exaggeration of a non-issue, spend some time following the links from this Mashable treatise on the issues involved.
The author concludes that our response should be FAB: Fear, Awareness and Bias. The prospect of a silicon-based dominant force on Earth sounds like science fiction. If we remain unaware and apathetic, it will most certainly become science fact within the lifetimes of some of you reading this seemingly outrageous blog post.
The advent of the Power of the Peaceful Atom once held such promise. We just didn’t notice the moral-ethical dust cloud beyond the horizon of our myopic vision, and by our dispassion became the Destroyer of Worlds, as Oppenheimer sadly admitted.
Now back to your regularly scheduled, benign and parochial Fragments tree hugging.