Randy McDonald (rfmcdpei) wrote,

[BLOG-LIKE POSTING] On the existential risks of first-generation transhumanism

Discover-hosted blogs have recently come out in favour of human cognitive augmentation, perhaps in the transhumanist tradition. Science Not Fiction's Kyle Munkittrick argues that the recent controversy over an energy drink, Four Loko, reflects deeper concerns.

Every time I pass a Duane Reade (aka CVS/Walgreens) here in the city I think about wandering in and picking up some caffeine pills. I want to do it so that I can cut down on my java intake and, concomitantly, my monthly contribution to Starbucks’ profit margin. Yet I feel weird and crazy when the urge strikes, so I pass on the pills and buy a Mountain Dew at a bodega instead. Why? Why do I, Mr. Transhumanist, have trouble with the idea of taking caffeine pills? They are safer, more precise, work better, and don’t have high-fructose corn syrup or require a half-gallon of creamer for me to consume them. The fear of caffeine pills is, on its face, irrational.

The answer seems to be that we have a bias towards imperfection and inaccuracy in our enhancement. Coffee has random amounts of caffeine. Every Red Bull and vodka is a different mixture. Both taste, uh, “good.” On the flip side, Four Loko and Joose have the exact same amount of stimulants in each unit. Caffeine pills offer even more precision. Genuine cognitive enhancing drugs, like Ritalin, Adderall, and modafinil, are equally precise but significantly more potent. As such, those drugs are not just prescription only, but the prescriptions themselves are heavily regulated. As power and precision go up, our concern over the form of enhancement goes up as well.

Four Loko is just another victim of our bias towards imperfect enhancers. I need a third cup of coffee to think about this more. Or will it be my fourth?


Meanwhile, in another post he argues that the upgrading of the human form might be necessary for the survival of our species.

Every apocalyptic film seems to trade on the idea that there will be some lone super-genius to figure out the problem. In The Day The Earth Stood Still (both versions) Professor Barnhardt manages to convince Klaatu to give humanity a second look. Cleese’s version of the character had a particularly moving “this is our moment” speech. Though it’s eventually the love between a mother and child that triggers Klaatu’s mercy, Barnhardt is the one who opens Klaatu to the possibility. Over and over we see the lone super-genius helping to save the world.

Shouldn’t we want, oh, I don’t know, at least more than one super-genius per global catastrophe? I’d like to think so. And where might we get some more geniuses? you may ask. We make them.

In his essay, “The Singularity: A Philosophical Analysis”, philosopher David Chalmers notes that there is a very real chance that if machines become self-aware and start improving themselves, we’re going to have a problem (*cough* Skynet *cough* Liquid T-1000 *cough, cough*). One of his potential solutions is to enhance ourselves to keep up:

This might be done genetically, pharmacologically, surgically, or even educationally. It might be done through implantation of new computational mechanisms in the brain, either replacing or extending existing brain mechanisms. Or it might be done simply by embedding the brain in an ever more sophisticated environment, producing an “extended mind” whose capacities far exceed that of an unextended brain.


Does any of that sound familiar? Perhaps a little film called Gattaca may ring some bells? Chalmers is arguing that enhancement may be necessary to prevent extinction. Why not extrapolate that logic to other existential risks? Alien invasion? Superhumans would probably put up a better fight. Skynet goes live? An army of hackers with a collective IQ of 200+ and neuro-integrated interfaces would clean that up in a jiffy. But what about our current problems? Although heavy-handed, the message in both versions of The Day the Earth Stood Still is that humanity’s greatest existential threat is itself. War, suffering, poverty, and environmental destruction all seem like problems that would merit allowing our best and brightest to become even better and brighter for the sake of everyone.


That's all well and good. The major problem will come with the first generation of augmentation technologies. The unfortunate human beings who adopt them may suffer badly indeed, as Quiet Babylon's Tim Maly has observed.

On the ground, the realities of the only brain-mounted interface I know of – cochlear implants – are brutal. Here’s a taste: You can’t hear music. For a sense of what that’s like, try these demos. The terrifying truth is that once you’ve signed up for one kind of enhancement (say, the 16 electrode surgery) it’s very hard to upgrade, even if Moore’s law ends up applying to electrode counts and the fidelity of hearing tech.

If you are an early adopter for this kind of thing, the only thing we can say for sure about it is that it’ll be slow and out of date very soon. Unless they find a way to make easily-reversible surgery, your best strategy is to wait for the interface that’s whatever the brain-linkage equivalent is to 300dpi, full colour, high refresh screens.

You’ll have to wait. Medical technology advances in fits and starts. When Barney Clark was hooked up to his 400-pound monstrosity of an artificial heart in 1982, things did not go well for him.

During the 112 days that he survived, he underwent four additional operations, had several episodes of bleeding, and experienced prolonged periods of confusion. He even asked to die on several occasions.


Medical advancements demand sacrifices. Someone needs to wear the interim devices. Desperation is one avenue for adoption. Artificial hearts are still incomplete and dicey half-measures, keeping people alive while they wait for a transplant or their heart heals. This is where advances in transplants and prosthetics find their volunteers and their motivation for progress. It’s difficult to envision a therapeutic brain implant – they are almost by definition augmentations.

An avenue to irreversible early adoption is arenas where short-term enhancement is all that’s required. The military leaps to mind. With enlistment times measured in a few short years, rapid obsolescence of implants doesn’t matter as much; they can just pull virgin recruits and give them the newest, latest. If this seems unlikely, consider that with the right mix of rhetoric about duty and financial incentives, you can get people to do almost anything, including joining an organization where they will be professionally shot at.


I've joked with friends about getting a brainchip installed, something that would allow for virtual telepathy (no more phone numbers!) and intelligence augmentation besides. If Munkittrick's dream could be fulfilled safely, I might have been an early adopter. I'm not inclined to be one now.
Tags: futurology, human beings, intelligence, non blog, science fiction, transhumanism