16 July 2008

It could be worse

'tis but a scratch.

I'm going to a conference on global catastrophic risks which runs from 17 to 20 July in Oxford. This may seem an odd way to pass the time, but the case for looking seriously at a range of catastrophic scenarios, up to and including human extinction, has been well made a few times. In this lecture, for example, Martin Rees does a pretty good job of outlining some cosmic challenges for humanity.

By way of background, a new book edited by Nick Bostrom and Milan Cirkovic is helpful (its introduction is available via the conference reading list). Also useful (although not directly connected to the conference) are posts by Jamais Cascio on An Eschatological Taxonomy and The Big Picture: Collapse, Transcendence or Muddling Through.

I am less convinced by the transhumanist agenda [1], apparent in Bostrom and Cirkovic's inclusion of ageing in the category of catastrophic risks. I hope to keep an open mind (and better understand a range of arguments, including those advanced by Russell Blackford of the IEET), but my initial skepticism rests on the following points (which are unlikely to be original and have probably been better stated, and answered, elsewhere):
* acceptance of human fragility is a starting point for compassion and 'humanity' as we know it. Renewal, including the complete innocence of the newborn, is part of the glory. The great chain of love over generations is part of us.[2]

* what's good for the individual is not necessarily what's good for the species. The very old but indefinitely strong, fit and active would accumulate all the power, rather like Swift's Struldbrugs: "[these] immortals would in time become proprietors of the whole nation, and engross the civil power, which, for want of abilities to manage, must end in the ruin of the public." [3]

* even if they pan out, the efforts of Aubrey de Grey et al. may be aimed at the wrong target. The idea of a singularity should be treated with critical distance [4], but if you accept that, as Rees says, there is more time ahead in the cosmos for complexity and intelligence to develop than there is time behind, then he's probably right that future life will be as different from humans as we are from bacteria.

Of course, continuing to be able to have these debates depends in part on whether humanity gets through the existential risks [5] of the next few decades intact. For this we have our current abilities as humans and the institutions and networks we are capable of developing over those decades. But at heart our success or failure will probably depend, as Rees says, on whether we show at least as much moral courage as the likes of Joseph Rotblat and those who shared his perhaps sentimental view of humanity:
We appeal as human beings to human beings: Remember your humanity, and forget the rest. If you can do so, the way lies open to a new Paradise; if you cannot, there lies before you the risk of universal death.[6]

Footnotes

1. In John Gray's view (Black Mass, p. 56), there is a straight line from Leon Trotsky's revolutionary violence to transhumanism. I'm not convinced this is right either, particularly not in reference to those who think about transhumanism and the singularity in a sophisticated way.

2. I realise this starts to sound almost 'religious', but it's an interesting exercise to try to articulate one's own cherished beliefs. Of course I am in favour of as long, healthy and rich a life as possible for as many as possible. But remember the warning against megalomania whispered to Roman generals at a triumph: 'Remember you are mortal'.

3. Quite independently of this discussion, someone put it to me that the existence of people like Sheldon Adelson (profiled here) was the best argument against indefinite life extension. [On similar grounds, this person said, 'personhood' in law for corporations was extremely dangerous. It's a familiar argument, but interesting that it should come from someone who works at the most senior levels of American business.] One counterargument, I suppose, would be that in the right circumstances those who lived a very long time would become wise and/or know that they would live to see the long-term consequences of their actions.

4. A good, very short introduction to the idea is provided, again, by Jamais Cascio in Singularities enough and time.

5. Including climate change. See, for example, James Lovelock here.

6. The Russell-Einstein Manifesto.