Transhumanism with Guest Speaker David Pearce
As announced earlier, we have invited transhumanist philosopher David Pearce to be a guest speaker here, and he has kindly accepted. We are hoping to learn more about his work and the very interesting field in which he's involved.
David is a key figure in the transhumanist movement and a co-founder of Humanity+ (formerly known as The World Transhumanist Association). For those of you who are unsure of the basics of transhumanism, David provides a useful, concise introduction in this video:
For a more detailed take on David's ideas, the following is helpful:
Or check out David's book The Hedonistic Imperative and his website Hedweb.com
Other important thinkers in the broad transhumanist sphere include Ray Kurzweil and James Hughes.
Needless to say, transhumanism is a controversial subject and its status is open to debate. As such, some of us may come to the subject with strong preconceptions and opinions. This is all fine, but please bear in mind that David is contributing time from his busy schedule to help us learn more about his field, and while critique and questioning are welcome, we hope everyone will be measured and respectful in tone.
This thread is intended as an initial AMA (ask me anything) where you can put your questions/critiques to David. Please keep these to a maximum of 250 words. There's no guarantee he'll get around to everyone, but I'm sure he'll make the effort to answer the more interesting posts, some of which may, according to David's prerogative, be separated into threads of their own. So, have at it and I'll let David know this is up and we're ready for him.
(Obviously, he may need some time to digest the questions and answer them around his schedule, so please be patient.)
Do you see technological advances in the next two centuries delivering the conditions of transhumanism, or are you thinking in longer (or shorter) time periods?
What do you think the chances are of environmental collapse in the next 100 years derailing the necessary technical developments to allow transhumanism?
What kind of economic arrangements are most and least likely to advance transhumanist goals? Capitalism is not a good candidate to deliver super well-being to everyone.
What role, if any, do you see a re-evaluation of rationality and decision making logics playing in the transhumanist project? How ought we go about that? And how do we get around the problem of using broken tools to make only more powerful broken tools?
For example, Elon Musk's Neuralink aims at something equivalent (albeit not explicitly stated) to the goal of transhumanism: brain-machine interfaces to keep pace with ever more intelligent computers.
What are your thoughts on this endeavor and the hurdles it faces?
I guess if this is to be a question, it would be: has he gotten any pushback on that in the professional sphere, and what has that been like?
Oh and also: does he know a good term for anti-hedonist views in general? Because I feel like I’m sorely lacking any catch-all term that doesn’t name something more specific than just that.
Anti-natalism is regularly discussed on this forum, with members having different degrees of sympathy towards it.
You describe yourself as a "soft anti-natalist". What is your basis for this? And do you buy into Benatar's asymmetry theory? (which suggests that the pain of a pinprick would make it so that a life otherwise full of pleasure would have been better off not being started).
Regarding the three “supers” mentioned, I’m curious about how the three are interrelated. Particularly super-intelligence and super-wellbeing. Generally speaking, intelligence means learning/knowing what is true. However, truth is often unpleasant, and would therefore seem to detract from one’s wellbeing, at least occasionally.
Also, you seem to advocate for essentially the removal of all suffering. Much of our suffering derives from our biological needs (food, sleep, etc.). So would these needs have to be removed in order to eliminate suffering? If so, this too would seem to detract from the goal of super-wellbeing, because much of our happiness is rooted in pursuing, and hopefully meeting, these needs. Basically what I'm saying is that if you eliminate our biological needs, you also risk eliminating our very will to live. What would our motivation for life be without experiencing desire? It's like Buddhism without the concept of nirvana, enlightenment, rebirth, etc. A state of eternal contentment and complacency seems to be what the outcome would look like. Do you feel this would be more desirable than our current state of affairs? Thank you for your time.
darthbarracuda
Most transhumanists are secular scientific rationalists. Only technology (artificial intelligence, robotics, CRISPR, synthetic gene drives, preimplantation genetic screening and counselling) can allow intelligent moral agents to reprogram the biosphere and deliver good health for all sentient beings.
Global warming? There are geoengineering fixes.
Overpopulation? Fertility rates are plunging worldwide.
Famine? More people now suffer from obesity than undernutrition.
I share some of Jacques Ellul's reservations about the effects of technology. But only biotechnology can recalibrate the hedonic treadmill, eradicate the biology of involuntary pain and suffering and deliver a world based on gradients of intelligent bliss:
https://www.hedweb.com/hedethic/sentience-interview.html
Jacques Ellul himself was deeply religious. He felt he had been visited by God. Most spiritually-minded people probably feel that transhumanism has little to offer. Perhaps they are right – my own mind is a desolate spiritual wasteland. But science promises the most profound spiritual revolution of all time. Tomorrow’s molecular biology can identify the molecular signatures of spiritual experience, refine and amplify its biological substrates, and deliver life-long spiritual ecstasies beyond the imagination of even the most god-intoxicated temporal-lobe epileptic.
Will most transhumans choose to be rationalists or mystics?
I don’t know. But biotech can liberate us from the obscene horrors and everyday squalor of Darwinian life.
Welcome, David! We appreciate any time you can give us, so please do proceed at your own pace. :smile:
The future we imagine derives mostly from the sci-fi we remember. Life in the 24th century will not resemble Star Trek. Not merely does the “thermodynamic miracle” of life’s genesis mean that Earth-originating life is probably alone in our Hubble volume. The characters in Star Trek have the same core emotions, same pleasure-pain axis, same fundamental conceptual scheme and same default state of waking consciousness as archaic humans. Even Mr Spock is all too human. It’s hokum.
Realistic timescales for transhumanism? Let's here define transhumanism in terms of a "triple S" civilisation of superintelligence, superlongevity and superhappiness. Maybe the 24th century would be a credible date. Earlier timescales would be technically feasible. But accelerated progress depends on sociological and political developments that reduce predictions to mere prophecies and wishful thinking. In practice, the frailties of human psychology mean that successful prophets tend to locate salvation or doom within the plausible lifetime of their audience. I'm personally a lot more pessimistic about timescales for a mature "triple S" civilisation than most transhumanists. Sorry to be so vague. There are too many unknown unknowns.
Environmental collapse? The only way I envisage collapse might happen is via full-scale thermonuclear war and a strategic interchange between the superpowers. Sadly, this is not entirely far-fetched. Evolution “designed” human male primates to wage war against other coalitions of human male primates. I fear we may be sleepwalking towards Armageddon. Note that environmental collapse wouldn’t entail human extinction, though relocating to newly balmy Antarctica (cf. https://motls.blogspot.com/2019/10/60-c-of-global-warming-tens-of-millions.html) would be hugely disruptive. Let’s hope these fears are wildly overblown.
Capitalism? Can a system based on human greed really deliver the well-being of all sentience? I'm sceptical. Free-market fundamentalism doesn’t work. Universal basic income, free healthcare and guaranteed housing are preconditions for any civilised society. Above all, murdering sentient beings for profit must be outlawed. Factory-farms and slaughterhouses are pure evil (cf. https://www.hedweb.com/quora/2015.html#slaughterhouses). The cultured meat revolution will presumably end the horrors of animal agriculture. But otherwise, I think some version of the mixed economy will continue indefinitely. Anything that can be digitised soon becomes effectively free. This includes genetic information. The substrates of bliss won’t need to be rationed. In the meantime, preimplantation genetic screening and counselling for all prospective parents would be hugely cost-effective – especially in the poorest countries. I hope all babies can be designer babies rather than today’s reckless genetic experiments.
Quoting David Pearce
A thermonuclear war would indeed be a fine way to ring down the curtain, but perhaps a less efficient method would be sufficiently effective. I am not suggesting a human species-terminating event. Rather, extensive -- and occasionally severe -- environmental degradation could rob the species of the surpluses needed to support a large research and development establishment. In time we may be able to dig ourselves out of the environmental hole we are still busy excavating.
Do you think super-intelligence will be achieved and enjoyed incrementally, or will this happen in a single exceptional leap? Is the present brain capable of being uplifted to super-intelligence, or will it be necessary to design a better biological brain-build before uplift can occur? A bigger, better frontal cortex; a less volatile limbic system, more memory, better sensory processing? Brains much smaller than ours manage remarkably complex behavior (but just skip over philosophy). Can our brains be made a more efficient structure, before we add a practice effect?
I have experienced an unearned but nice level of contentment which has lasted now several years. I locate the source of this contentment in the limbic system. Is it age? I'm 75. Do you see super-happiness as the result of changing our emotion-generating system, or as a result of super-intelligence? Maybe one of the things that makes the God of Israel so angry is his alleged omniscience--The God Who Knew Too Much?
Thanks for joining us.
My concern with transhumanism is that its beneficiaries, like yourself, are either unaware of or downplaying what I (maybe erroneously) refer to as the "human condition": namely, that state we all find ourselves in, in which we are unable to achieve the moral ideals we aspire to. I want to not be selfish, and yet I am; I want to not give in to base pleasures, and yet I do; I want to devote my time and energy to higher ideals, and yet I spend hours watching vapid YouTube videos (or even doing something slightly more noble, like going down wiki rabbit holes or listening to weird music that I'm not sure whether I like). I worry about the gap between lofty goals such as those of transhumanism on one hand, and the cold hard reality of purely human existence on the other. Tie that in with what @darthbarracuda mentioned about Ellul, and I'm wondering if transhumanism isn't just a sort of pubescent lack of understanding of the human situations we all find ourselves in; technology, after all, is simply humans harnessing our (ever-changing understanding of) nature in a very imperfect, and often destructive, way. Cryptocurrency itself is detrimental to the environment.
What I'm really worried about is a transhumanistic approach to the human situation that is not based on an accurate understanding of that human situation; an approach that assumes too much and introspects about ourselves far too little.
Somewhat related; how does transhumanism address addiction?
This ties in with what Bitter Crank is asking and @Noble Dust points out with his "human condition".
Apologies if this is a dumb question that can be easily researched but, @David Pearce, what is considered "superintelligence" within transhumanism? Some theories pose several types of intelligence and quite a few don't necessarily fit in the scientific positivist vibe of transhumanism. A list to illustrate:
Aesthetic intelligence
Collective intelligence (a result of social processes and communication)
Creativity
Crystallized intelligence (abilities based on knowledge and experience)
Existential intelligence (philosophical reasoning, abstraction)
Fluid intelligence
Intentionality
Interpersonal intelligence
Intrapersonal intelligence (together often "emotional intelligence")
Kinesthetic intelligence
Linguistic intelligence
Musical intelligence
Organizational intelligence
Self-awareness
Situational intelligence
Spatial intelligence
Logical-mathematical intelligence
For instance, how would being hypersensitive and aware of your own and other people's feelings affect our logical-mathematical intelligence at any given time? Even if we resolved whatever "bandwidth" issues currently cause us to focus on only one thing at a time, what does it mean to simultaneously follow a law that requires a punishment and our compassion wanting to forgive the criminal?
In other words, given the various types of intelligence and the absence of a clear hierarchy among them, what do you think it would mean, in practice, to be superintelligent?
Irremovable human biases?
Yes. One example is status quo bias. A benevolent superintelligence would never have created a monstrous world such as ours. Nor (presumably) would benevolent superintelligence show status quo bias. But the nature of selection pressure means that philosopher David Benatar’s plea for voluntary human extinction via antinatalism (Better Never To Have Been (2008)) is doomed to fall on deaf ears. Apocalyptic fantasies are futile too (cf. https://www.hedweb.com/quora/2015.html#dptrans).
So the problem of suffering is soluble only by biological-genetic means.
The orthogonality thesis?
All biological minds have a pain-pleasure axis. The pain-pleasure axis discloses the world’s inbuilt metric of (dis)value. Thus there are no minds in other life-supporting Hubble volumes with an inverted pleasure-pain axis. Such universality of (dis)value doesn’t mean that humans are all closet utilitarians. The egocentric illusion has been hugely genetically adaptive for Darwinian malware evolved under pressure of natural selection; hence its persistence. Yet we shouldn’t confuse our epistemological limitations with a deep metaphysical truth about the world. Posthuman superintelligences will not have a false theory of personal identity. This is why I’m cautiously optimistic that intelligent agents will phase out the biology of suffering in their forward light-cone. Yes, we may envisage artificial intelligences with utility functions radically different from biological minds (“paperclippers”). But classical digital computers cannot solve the phenomenal binding/combination problem (cf. https://www.hedweb.com/hedethic/binding-interview.html). Digital zombies can never become full-spectrum intelligences, let alone full-spectrum superintelligences. AI will augment us, not supplant us.
Better tools of decision-theoretic rationality?
Compare the metaphysical individualism presupposed by the technically excellent LessWrong FAQ (cf. https://www.lesswrong.com/posts/2rWKkWuPrgTMpLRbp/lesswrong-faq) with the richer conception of decision-theoretic rationality employed by a God-like full-spectrum superintelligence that could impartially access all possible first-person perspectives and act accordingly (cf. https://www.hedweb.com/quora/2015.html#individualism).
So how can humans develop such tools of God-like rationality?
As you say, it’s a monumental challenge. Forgive me for ducking it here.
Challenges for transhumanism?
Where does one start?! Here I'll focus on just one. If prospective parents continue to have children "naturally", then pain and suffering will continue indefinitely. All children born today are cursed with a terrible genetic disorder (aging), a chronic endogenous opioid addiction and severe intellectual disabilities that education alone can never overcome. The only long-term solution to Darwinian malware is germline gene-editing. Unfortunately, the first CRISPR babies were conceived in less than ideal circumstances. He Jiankui and his colleagues were trying to create cognitively enhanced humans, with HIV-protection as a cover-story (cf. https://www.technologyreview.com/2019/02/21/137309/the-crispr-twins-had-their-brains-altered/). All babies should be CRISPR babies, or better, base-edited babies. No responsible prospective parent should play genetic roulette with a child's life. Unfortunately, the reproductive revolution will be needlessly delayed by religious and bioconservative prejudice. If a global consensus existed, we could get rid of suffering and disease in a century or less. In practice, hundreds if not thousands of years of needless pain and misery probably lie ahead.
Neuralink? It’s just a foretaste. If all goes well, everyone will be able to enjoy “narrow” superintelligence via embedded neurochips – the mature successors to today’s crude prototypes. Everything that programmable digital zombies can do, you’ll be able to do – and much more. Huge issues here will be control and accountability. I started to offer a few thoughts, but they turned into platitudes and superficial generalities. “Narrow” superintelligence paired with unenhanced male human nature will be extraordinarily hazardous.
Super well-being?
Let’s say, schematically, that our human hedonic range stretches from -10 to 0 to +10. Most people have an approximate hedonic set-point a little above or a little below hedonic zero. Tragically, a minority of people and the majority of factory-farmed nonhuman animals spend essentially their whole lives far below hedonic zero. Some people are mercurial, others are more equable, but we are all constrained by the negative-feedback mechanisms of the hedonic treadmill. In future, mastery of our reward circuitry promises e.g. a hedonic +70 to +100 civilisation – transhuman life based entirely on information-sensitive gradients of bliss (cf. https://www.gradients.com). Currently, we can only speculate on what guise such superhuman well-being will take, and how it will be encephalised. What will transhumans and posthumans be happy “about”? I don’t know – probably modes of experience that are physiologically inaccessible to today's humans (cf. https://www.hedweb.com/quora/2015.html#irreversible). But one of the beauties of hedonic recalibration is that (complications aside) it’s preference-neutral. Who wouldn’t want to wake up in the morning in an extremely good mood – and with their core values and preference architecture intact? Aristotle’s “eudaimonia” or sensual debauchery? Mill’s “higher pleasures” or earthy delights? You decide. Crudely, everyone’s potentially a winner with biological-genetic interventions. Compare the zero-sum status-games of Darwinian life. Unlike getting rid of suffering, I don’t think superhappiness is morally urgent; but post-Darwinian life will be unimaginably sublime.
What about hedonic uplift for existing human and nonhuman animals prior to somatic gene-editing? Well, one attractive option is ACKR3 receptor blockade (cf. https://www.nature.com/articles/s41467-020-16664-0), perhaps in conjunction with selective kappa opioid receptor antagonism. Enhancing “natural” endogenous opioid function and raising hedonic set-points is vastly preferable to taking well-known drugs of abuse that typically activate the negative feedback mechanisms of the CNS with a vengeance. An intensive research program is in order. Pitfalls abound.
In the long run, however, life on Earth needs a genetic rewrite. Pharmacological stopgaps aren't the answer.
“A miracle chip?”
Transhumanists don’t advocate intracranial self-stimulation or unvarying euphoria. For a start, uniform bliss wouldn’t be evolutionarily stable; wireheads don’t want to raise baby wireheads.
Transhumanists don’t advocate getting “blissed out”. Instead, we urge a biology of information-sensitive gradients of well-being. Information-sensitivity is essential to preserving critical insight, social responsibility and intellectual progress.
I’m sad to hear of the pushback you’ve received on the forum. Instead of saying one is a “negative utilitarian”, perhaps try “secular Buddhism” or “suffering-focused ethics” (cf.
https://magnusvinding.com/2020/05/31/suffering-focused-ethics-defense-and-implications/). I sometimes simply say that I would “walk away from Omelas”. No amount of pleasure morally outweighs the abuse of even a single child: https://www.cmstewartwrite.com/single-post/a-question-for-david-pearce. If a genie made you an offer, would you harm a child in exchange for the promise of millions of years of indescribable happiness? I'd decline – politely (I'm British).
Academic pushback? I guess the average academic response isn’t much different from the average layperson’s response. An architecture of mind based entirely on information-sensitive gradients of well-being simply isn’t genetically credible – whether for an individual or a civilisation, let alone a global ecosystem (cf. https://www.gene-drives.com). At times my imagination fails too. Of course there are exceptions – but the academics who’ve directly been in touch to offer support are almost by definition atypical.
A fairly common critical response would probably be Professor Brock Bastian's The Other Side of Happiness: Embracing a More Fearless Approach to Living (2018):
https://www.hedweb.com/social-media/pairagraph.html
Quoting David Pearce
You don't?
Quoting David Pearce
Quoting David Pearce
Longevity is not a stable evolutionary state either! I did not think that was a big deal for you:
Quoting David Pearce
Do you appeal to evolutionary stability - or seek to transcend it?
Quoting David Pearce
Alexander Graham Bell originally suggested 'ahoy' be adopted as the standard greeting when answering a telephone. Not many other people use it the way it was intended.
Quoting David Pearce
The problem I foresee is that, currently, people get 'blissed out' because they don't want to think; they want to be less sensitive to information - not more so.
But you seem to have missed my point. There are technologies we already have and need to apply to survive as a species, yet we still don't apply them. Where's the incentive to make immortals who are sublimely contented and wickedly smart?
p.s. I know transhumanists don't advocate actual immortality.
Hello David!
As has been said, thank you kindly for sharing some of your thoughts and time here. Just two quick questions that relate to the third Super: can you please define the following concepts that you used to describe your thesis?
1. "Involuntary Suffering"
2. "Pro-Social"
I am trying to parse both the practical and theoretical implications of those concepts, so as to understand transhumanism a bit more.
Thank you in advance.
Quoting David Pearce
I don't want to come across like some neo-Luddite who hates all technology, but:
Reprogramming the biosphere etc. could result in it becoming dependent on the technological infrastructure; and if this infrastructure fails, the biosphere will be unable to recover on its own. Think of a business that has nearly all of its operations digitized in the cloud: when those servers go down, the business is screwed. Though perhaps you could provide some examples of geoengineering fixes that don't carry the possibility of catastrophic failure.
With respect to famine, the fact that obesity is now more common than undernutrition is an example of technology solving one problem only to introduce another.
Quoting David Pearce
Could you elaborate on these reservations you share with Ellul?
Quoting David Pearce
I am not religious or spiritual myself, but I think Ellul's critique of technology can be evaluated independently of his religious beliefs.
Would a super-rational scientific soma really be spiritual? What do you mean by spiritual here?
The idea that pleasure and pain are largely if not wholly relative is seductive. It’s still probably the most common objection to the idea of a civilisation based entirely on gradients of bliss. However, consider the victims of life-long pain and depression. Some chronic depressives can’t imagine what it’s like to be happy. In some severe cases, chronic depressives don’t even understand what the word “happiness” means – they conceive of happiness only in terms of a reduction of pain. Now we wouldn’t (I hope) claim that chronic depressives can’t really suffer because they’ve never experienced joy. Analogous syndromes exist at the other end of the Darwinian pleasure-pain axis. Unipolar euphoric mania is dangerous and extraordinarily rare. Yet there is also what psychologists call extreme "hyperthymia". Hyperthymics can be very high functioning. My favourite case-study is fellow transhumanist Anders Sandberg (“I do have a ridiculously high hedonic set-point”). Anders certainly knows he is exceedingly happy – although unless pressed, he doesn’t ordinarily talk about it. He is also socially responsible, intellectually productive and exceptionally smart. In common with depression and mania, hyperthymia has a high genetic loading. Gene editing together with preimplantation genetic screening and counselling for all prospective parents offer the prospect of lifelong intelligent happiness for future (trans)humans. For sure, creating an entire civilisation of hyperthymics will be challenging. Not least, prudence dictates preserving the functional analogues of depressive realism – at least in our intelligent machines. But unlike ignorance, known biases can be corrected.
I’m a dyed-in-the-wool pessimist by temperament. But for technical reasons, I suspect the long-term future of sentience lies in gradients of sublime bliss beyond the bounds of human experience.
Hi David, I was wondering if your philosophy is more aggregate-centered or individual-centered. It seems to me to be more aggregate-centered. Often these ethical philosophies overlook the pain and suffering of individuals in order to effect the greatest change. One example here is that you admit this world can be pretty monstrous, and would not be something a benevolent superintelligence would want. However, your vision of a transhumanist utopia seems to lie in a far-off future. Presumably, from now until that future time, billions of people will have lived and suffered. That being said, wouldn't David Benatar's and antinatalism's argument in general be the best alternative in terms of suffering prevented? Basically, if you prevent the suffering in the first place, you have cut off the suffering right from the start. And as Benatar's asymmetry shows, no "person" suffers by not being born to experience the "goods" of life. It's a win/win, it seems.
Thank you for your in depth answer. I hope you don't mind me following up on (only) this one point:
(above quote from linked document for context)
The interpretive emphasis on the human body physically simulating that body's self awareness is well taken. I would like to take that simulation idea and push on its boundaries - the boundaries of the body, when the body is seen as a space-time process.
I was wondering if you had any comments regarding the scope of that process of simulation cf the extended mind thesis? And possibly an ethical challenge this raises to the primacy of biogenetic intervention in the reduction of long term suffering: if the human mind's simulation process is saturated with environmental processes, why is the body a privileged locus of intervention for suffering reduction and not its environment?
Also in that context of the philosophical puzzles of gene : environment interaction, what challenges and opportunities do you think the heritability of epigenetic effects raise for the elimination of suffering through biogenetic science?
Echoing this concern: if there is technological enhancement, won't it be vulnerable to hacking (smart cars can be hacked and controlled, brakes disabled, etc.) or to man-made or natural EMPs? Wouldn't such a device allow a transhuman to be murdered or "disabled" with no evidence?
Also, where does one draw the line between a human with significant technological/genetic enhancements, a true cyborg or laboratory experiment, and a mere robot/non-human abomination?
Best,
Quoting David Pearce
I'm unclear what you mean here by "genetically credible", but my guess would be that you mean we are genetically predisposed to disbelieve in a wholly hedonistic morality (thus explaining why acceptance of it is so atypical, as you say). Is that an accurate guess as to your meaning? If not, can you elaborate what you mean?
And on a related note, I'm wondering if your suggestions to self-label with things like "secular Buddhism” or “suffering-focused ethics” is intended to be a response to this bit of my initial question:
Quoting Pfhorrest
If so, then I think I was unclear. I was wondering if you know a good blanket term for the kind of position that's opposed to views like yours and mine, the kind of position that holds that reducing suffering is either unnecessary or insufficient for morality.
Thanks again!
That strikes me as the kind of wishful thinking typical of people who idolize and idealize science and technology. There is at present no such thing as a geoengineering fix for climate change. Technology and science are among the root causes of global warming: they gave us the means to screw up the climate. Now of course, they might also give us the means to survive in spite of climate change, but they can't fix it. It's just too big.
The future is not just some continuous, indefinite technological "progress". That's a sci-fi myth.
Yes, humans are diverse – by some criteria. On the other hand, we tend to share the same core emotions, same pleasure-pain axis, same sleep-wake cycles, same progression of youth and aging, same kind of egocentric world-simulation (etc.) as our primate ancestors. Above all, sentient beings are prone to suffer. Perhaps posthumans will find humans as diverse as we find members of an ant colony. Either way – and most relevant to your question – no one values the experience of unbearable agony or suicidal despair – or even plain boredom. Even ostensible counter-examples to the primacy of pleasure over pain, such as masochism, simply reinforce the sovereignty of the pleasure-pain axis. Masochists love the release of endogenous opioids as much as the rest of us. We’d all be better off if experience below hedonic zero were replaced by a civilised signalling system. We’d all be better off with a motivational architecture based entirely on information-sensitive gradients of well-being. Hedonic recalibration and uplift can radically enhance everyone’s quality of life.
Naively, Heaven might sound monotonous compared to the torments of Hell, or even compared to everyday Darwinian purgatory. In practice, genome editing promises a richer diversity of genes and allelic combinations than is possible under a regime of natural selection. The diversification of sentience has scarcely begun. For example, transhumans will be able to access billions of exotic state-spaces of consciousness as different from each other as dreaming is from waking consciousness. What they’ll have in common is that they’ll all be generically wonderful.
Are there any practical examples of the types of changes you're suggesting in effect, or is it all just theoretical? On the home page of your website you mention that many modern medical technologies would have been inconceivable, or thought impossible, in the past; but it would be a fallacy to infer from this that our current ideas about what is possible are false, and hence that the abolitionist project is possible.
Thank you. Yes, I accept a version of the asymmetry theory. The badness of suffering is self-intimating, whereas there is nothing inherently wrong with inexistence. And yes, I’m a “soft” antinatalist. Bringing pain-ridden life into the world without prior consent is morally indefensible. Nor would I choose to have children on the theory that the good things in life typically outweigh the bad. Enduring metaphysical egos are a fiction. That said, I don’t campaign for antinatalism. “Hard” antinatalism is not a viable solution to the problem of suffering. Staying childfree just imposes selection pressure against any predisposition to be an antinatalist. As far as I can tell, the selection pressure argument against “hard” antinatalism is fatal (cf. https://www.hedweb.com/quora/2015.html#agreeantinatal).
Contrast the impending reproductive revolution of designer babies. As prospective parents choose the genetic makeup of their offspring in anticipation of the behavioural and psychological effects of their choices, the nature of selection pressure will change. Post-CRISPR and its successors, there will be intensifying selection pressure against our nastier alleles and allelic combinations at least as severe as selection pressure against alleles for, say, cystic fibrosis. Imagine you could choose the approximate hedonic set-point and hedonic range of your future children. What genetic dial-settings would you choose?
The Pinprick Argument? Recall that negative utilitarians want to abolish all experience below hedonic zero. So if any apparently NU policy-proposal causes you even the slightest hint of disappointment – for example sadness that we won’t get to enjoy a glorious future of superhuman bliss – then other things being equal, that policy-proposal is not NU. So NUs can and should support upholding the sanctity of life in law, forbidding chronic pain-specialists from euthanizing patients without prior consent, and many other political policy-prescriptions that are naively un-utilitarian. Not least, NUs are not plotting Armageddon (well, most of us anyway: https://theconversation.com/solve-suffering-by-blowing-up-the-universe-the-dubious-philosophy-of-human-extinction-149331).
Evolutionarily speaking, we apparently needed all our emotions to survive. So in the case of a future survival event, you'd still want to have the reptilian brain response. Increasing overall blissfulness seems like a good idea nevertheless. I just don't want to be the guinea pig (sorry for the term).
Perhaps when psychology is treated as a philosophy, and neurology as the defining science, I'll have more faith in the practical applications of transhumanism. As a philosophy, it's really fascinating.
Thank you.
What is the relationship between superintelligence and super-wellbeing?
It’s tricky. The best I can manage is an analogy. Consider AlphaGo. Compared to human champions, AlphaGo is a superintelligence. Nonetheless, even club players grasp something about the game of Go that AlphaGo doesn’t comprehend. The “Go superintelligence” is an ignorant zombie. I don’t know how posthuman superintelligences will view the Darwinian era that spawned them. Maybe posthumans will allude, rarely, to Darwinian life in the way that contemporary humans allude, rarely, to the Dark Ages. Most humans know virtually nothing about the Dark Ages beyond the name – and have no desire to investigate further. What's the point? Maybe superintelligences occupying a hedonic range of, say, +80 to +100 will conceive hedonically sub-zero Darwinian states of consciousness by analogy with notional states below hedonic +80 – their functional equivalent of the dark night of the soul, albeit unimaginably richer than human “peak experiences”. The nature of Sidgwick's "natural watershed" itself, i.e. hedonic zero, may be impenetrable to them, let alone the nature of suffering. Or maybe posthuman superintelligences will never contemplate the Darwinian era at all. Maybe they’ll offload stewardship of the accessible cosmos to zombie AIs. On this scenario, programmable zombie AIs will ensure that sub-zero experience can never recur within our cosmological horizon, without any deep understanding of what they’re doing (cf. AlphaGo). In any event, I don’t think posthuman superintelligences will seek to understand suffering in any full-blooded empathetic sense. If any mind were to glimpse even a fraction of the suffering in the world, it would become psychotic.
Whatever the nature of mature superintelligence, I think it’s vital that humans and transhumans investigate the theoretical upper bounds to intelligent agency so we can learn our ultimate cosmological responsibilities. Premature defeatism might be ethically catastrophic.
Suffering and desire?
Buddhists equate the two. But the happiest people tend to have the most desires, whereas depressives are often unmotivated. Victims of chronic depression suffer from “learned helplessness” and behavioural despair. So the extinction of desire per se is not nirvana. Quite possibly transhumans and posthumans will be superhappy and hypermotivated. Intuitively, for sure, extreme motivation is the recipe for frustrated desire and hence suffering. Yet this needn’t be so if we phase out the biology of experience below hedonic zero. Dopaminergic “wanting” is doubly dissociable from mu-opioidergic “liking”; but motivated bliss is feasible too. If you’ll permit another chess analogy, I always desire to win against my computer chess program. I’m highly motivated. But I always lose. Such frustrated desire never causes me suffering unless my mood is dark already. The same is feasible on the wider canvas of Darwinian life as a whole if we reprogram the biosphere to eradicate suffering. Conserving information-sensitivity is the key, not absolute position on the pleasure-pain axis:
https://www.hedweb.com/quora/2015.html#hedonictreadmill
In today's world, although we've reaped a great deal of benefit from the ongoing computing revolution, it seems the general public has become more cynical recently of how technology is being used to push ads, surveil citizens, radicalize people on social media and increase the wealth gap. I'm using that as an example of a technology that has transformed society with a lot of early utopian ideals.
Thank you.
Yes, transhumanists aspire to end involuntary suffering. It’s tempting to be lazy and say just “end suffering”, but the “involuntary” is worth stressing. No one is credibly going to force you to be happy. Many of the objections one hears to the abolitionist project focus on the strange suspicion that someone, somewhere, intends to engineer coercive bliss and force the critic to be cheerful. As it happens, I cautiously predict that eventually all experience below hedonic zero will disappear into evolutionary history (cf.
https://www.express.co.uk/news/science/1239855/Transhumanist-writer-David-Pearce-technology-transhumanism-humanity-plus). But prediction is different from proscription. Perhaps the thorniest consent issue will be hedonic default settings. When the biology of suffering becomes technically entirely optional – and it will – should tradition-minded parents be legally allowed to have pain-ridden children “naturally” via the cruel genetic crapshoot of sexual reproduction? And must their children wait until they are eighteen (or whatever the legal age of majority) to be cured? Eventually, creating malaise-ridden babies like today’s Darwinian malware may seem to be outright child abuse. This prediction needs to be expressed with delicacy lest it be misunderstood.
Pro-social?
Some conceptions of a superintelligence resemble a SuperAsperger. Ill-named “IQ” tests measure only the “autistic” component of general intelligence. “Superintelligence” shouldn’t be conceived as some kind of asocial singleton. Recall how human evolution was driven in part by our prowess at mind-reading, cooperative problem-solving and social cognition. True, contemporary accounts of posthuman superintelligence always reveal more about the cognitive limitations and preoccupations of the authors than they do about posthuman superintelligence. But full-spectrum superintelligences won’t resemble autistic savants – or “paperclippers”:
https://www.hedweb.com/quora/2015.html#dpautistic
I share your bleak diagnosis of Darwinian life:
https://www.hedweb.com/quora/2015.html#antinatal
But David Benatar and other “hard” antinatalists simply don’t get to grips with the argument from selection pressure. Antinatalists can’t hope to win:
https://blogs.scientificamerican.com/bering-in-mind/gods-little-rabbits-religious-people-out-reproduce-secular-ones-by-a-landslide/
See too my response to Down The Rabbit Hole above.
By contrast, for the first time, a few mainstream publications are starting to realise that genome editing makes a world without suffering possible:
https://www.newyorker.com/magazine/2020/01/13/a-world-without-pain
Like you, I find it dispiriting to know that billions of sentient beings will suffer and die before the transhumanist vision can come to pass. I just can’t think of any sociologically credible alternative.
Not to derail the thread into an anti-natalist debate but I find that Benatar is just plain wrong because if suffering is intrinsic to life then life doesn't cause suffering, just like water doesn't cause itself to be wet. If suffering isn't intrinsic to life, then for ending all life to be the proper solution, life would have to be a sufficient cause for suffering. Yet I currently don't suffer, so life is merely a necessary cause for suffering and not sufficient. Since it's never a proximate cause, the statement "life causes suffering" means as little as "the big bang did it". Antinatalists are simply wrong because they don't understand causality and use words like "suffering" and "cause" in a way that's not commensurate with how they are understood in law, philosophy or ethics.
Also plugging my previous question about superintelligence: https://thephilosophyforum.com/discussion/comment/519298
You might wish to take some of the questions related to the possibility that the future may be not better but worse than the past. There is a sense that transhumanism is just too starry-eyed and optimistic, that it is not just sci-fi but in many ways yesterday's sci-fi – a line of thought typical of the 90's (e.g. The Elementary Particles by Houellebecq) but obsolete today.
The 90's were when the democratisation of IT was supposed to make us all friends, but we can now see that it has instead led to much irrationality, hatred and lies being spread in the culture. Inequalities are growing; the filthy rich are sucking up the incomes of the middle class. Climate change is not going away any time soon, meaning it will be a disruptive factor for several thousand years and will most probably result in a massive reduction of world population.
It seems to me that transhumanism is an outdated form or style of imagining the future, when people thought that technology was inherently good. We know better now.
Each of your points deserves a treatise. Forgive me for hotlinking.
Two classes of humans, enhanced and unenhanced?
Yes, it’s a possible risk. But the cost of genome sequencing and editing is collapsing. Likewise computer processing power. The biggest challenge won’t be cost, but ethics and ideology. Intelligence-amplification involves enriching our capacity for perspective-taking and empathy – and extending our circle of compassion to even the humblest minds. Transhumanists (cf. https://www.transhumanist.com) advocate full-spectrum (super)intelligence: https://www.biointelligence-explosion.com. The Transhumanist Declaration (1998, 2009) expresses our commitment to the well-being of all sentience – not a world of Nietzschean Übermenschen.
Coercion?
One human invention worth preserving is liberal democracy.
An end to evolution?
On the contrary, the entire biosphere is now programmable:
https://www.hedweb.com/quora/2015.html#killed
The beauties of Nature?
Nature will be more beautiful when sentient beings aren’t disembowelled, asphyxiated and eaten alive:
https://www.reprogramming-predators.com/
Persuading religious traditionalists?
Well, a world where all sentient beings can flourish isn’t the brainchild of starry-eyed transhumanists. It’s the “peaceable kingdom” of Isaiah. Transhumanists fill in some of the implementation details missing from the prophetic Biblical texts. For instance, the talk below was delivered to the Mormon Transhumanist Association:
https://www.hedweb.com/social-media/paradise.pdf
Hacking?
Yes, it’s a potential threat. But Darwinian life is a monstrous engine for the creation of suffering. Animal life on Earth has been “programmed” to suffer. It’s a design feature, not a bug or a hack. Darwinian malware should be patched or retired.
A “mere robot/non-human abomination?”
I’d need to know what you have in mind. But if we phase out the molecular signature of experience below hedonic zero, then the meaning of “things going wrong” will be revolutionised too.
The prospect of a “triple S” civilisation of superintelligence, superlongevity and superhappiness still strikes most people as science fiction. By way of a reply, I’m going to focus just on the strand of the transhumanist project that strikes me as most morally urgent, namely overcoming the biology of involuntary suffering. Technically but not sociologically, everything I discuss could be achieved this century with recognisable extensions of existing technologies. Nothing I explore involves invoking e.g. a Kurzweilian Technological Singularity or machine superintelligence as a deus ex machina to solve all our problems. Even helping obscure marine invertebrates and fast-breeding rodents is technically feasible now – although pilot studies in self-contained artificial biospheres would be wise. Technically (but not sociologically), a “low pain” (as distinct from a “no pain”) biosphere could be created within decades. And imagine if all prospective parents were offered preimplantation genetic screening and counselling / gene-editing services so their future children could enjoy benign versions of the SCN9A (cf. https://www.wired.com/2017/04/the-cure-for-pain/), FAAH and FAAH-OUT genes. Imagine if cultured meat and animal products lead to the closure of factory-farms and slaughterhouses worldwide. Imagine if we spread benign versions of pain- and hedonic-tone-modulating genes across the biosphere with synthetic gene drives: https://www.gene-drives.com. Imagine if all humans and nonhuman animals were offered pharmacotherapy (cf. https://today.rtl.lu/news/science-in-luxembourg/a/1542875.html) to boost their endogenous opioid function. Imagine if we took the World Health Organization definition of good health seriously and literally: “complete physical, mental and social well-being”. Biological-genetic interventions would be indispensable. The WHO commitment to health is impossible to fulfil with our legacy genome. 
To stress, I’m not urging a Five Year Plan (as distinct from a Hundred-Year Plan) and certainly not delegating stewardship of the global ecosystem to philosophers! Rather, we need exhaustive research into risk-reward ratios and bioethical debate. How much pain and suffering in the living world is ethically optimal? Intelligent moral agents will shortly be able to choose:
https://www.hedweb.com/quora/2015.html#eatanimal
Utopia, dystopia or muddling through?
It’s a question of timescales. I’m sceptical experience below hedonic zero will exist a thousand years from now – and maybe much sooner. I suspect quasi-immortal intelligent life will be animated by gradients of superhuman bliss. So I could be mistaken for an optimist. I don’t believe nascent machine superintelligence will turn us into the equivalent of paperclips (cf. https://www.hedweb.com/quora/2015.html#dpautistic); I discount grey goo scenarios; I reckon e.g. https://www.hedweb.com/quora/2015.html#engineering is more challenging than it sounds; and I think the poor will have access to biological-genetic reward pathway enhancements no less than the rich. Not least, I think we are living in the final century of animal agriculture, a monstrous crime against sentience on a par with the Holocaust.
However, I’m not at all optimistic that humanity will avoid nuclear war this century. “Local”, theatre or strategic nuclear war? I don’t know.
How can nuclear catastrophe be avoided?
I could offer some thoughts. But alas "Dave’s Plan For World Peace" will make limited impact.
So I fear unimaginable suffering still lies ahead.
Could the future be worse than the past?
It’s a horrific thought. I promise I take s-risks seriously – although not all s-risks:
https://www.hedweb.com/quora/2015.html#dpsrisk
However, it’s worth drawing a distinction between two kinds of technology. Traditional technological advances do not target the negative-feedback mechanisms of the hedonic treadmill. In consequence, there is little evidence that the subjective well-being – or ill-being – of the average twenty-first century human differs significantly from that of the average stone-age hunter-gatherer on the African savannah. Indeed, some objective measures of well-being / ill-being, such as suicide rates and the incidence of serious self-harm, suggest modern humans are worse off. By contrast, biological-genetic interventions to elevate hedonic range and hedonic set-points promise to revolutionise mental health. I’m as hooked on iPhones, air travel, social media and all the trappings of technological civilisation as anyone. I’m also passionate about social justice and political reform. But if we are morally serious about the problem of suffering, then we’ll have to tackle the root of the problem, namely our sinister genetic source code. Only transhumanism can civilise the world.
Pain has also its rewards. Ever tried Thai massage? It's very painful but very good. It straightens you up.
Quoting David Pearce
Ironically, this vision of yours scares me far more than the possible collapse of our civilization. Because it wouldn't be the first civilization to collapse; these things have happened before. But a life without downsides or limits, that has never happened before.
If I might quote Robert Lynd, “It is a glorious thing to be indifferent to suffering, but only to one's own suffering.” You say, “I tend to like life as it is, suffering included.” If you are alluding just to your own life here, cool! But please do bear in mind the obscene suffering that millions of human and nonhuman animals are undergoing right now. Recall that over 850,000 people take their own lives each year. Hundreds of millions of people suffer from depression and chronic pain. Billions of nonhuman animals suffer and die in factory-farms and slaughterhouses. I could go on, but I’m sure you get my point. Life doesn't have to be this way.
You say the transhumanist vision “scares” you? Why exactly? Quasi-immortal life based on gradients of intelligent bliss needn't be as scary as it sounds.
While I can't speak for Olivier, I can offer a plausible reason. Unlimited (or perhaps enhanced) pleasure could lead to unlimited (or enhanced) suffering, something that cannot be experienced as of yet. A person or animal can be tortured, yes, horrendously even. I recall an old "king" who used a method of coating his enemies in sugary substances, tying them to a boat, and sending them adrift in the ocean to be devoured, slowly, by insect larvae and vermin. Quite horrible, as were other forms of torture, but nevertheless the human body has a limit to what it can take and will either shut down or succumb to traumatic insanity, thus alleviating the suffering. Olivier's concern may be that your project – that of a decent person trying to help humanity by creating unlimited, constant pleasure without end – may be abused by those who wish to do the opposite and instead create unlimited and never-ending torment. As you say, Darwinian life is a nightmare, and so those who succumbed to it are probably more or less in charge. You wish to give them an indestructible sword, forged out of good belief and benevolence as well as the idea that it will always be used for such, but he and others would protest that this is foolish.
Also, there are humane ways of harvesting animals for meat (instant kill). Outlawing meat is unlikely to be agreed upon by any majority anytime soon.
People would take their lives in your ideal world too, if only because it'd be boring.
Industrial farming we must abolish, I agree, primarily for animal rights reasons. I'm not ready to be vegetarian quite yet but can feel the appeal.
But I can see that pain serves a purpose: it keeps animals alive, teaches them what to avoid. I can also see that this world's ecology depends on predation, and that when you kill all the predators of a species, you often condemn it to destruction too. The European squirrel population once crashed like that, because its main predator (the marten) had been hunted down for its fur. A deadly disease wiped out the squirrels soon after, as the diseased animals were no longer taken out by martens. I heard of a similar case in the US with deer and wolves.
Nature involves predation and parasites and diseases and whatnot. The whole animal kingdom can only exist by eating plants and/or other animals. Only plants are autotrophic. And plants have feelings too.
You can't stop Darwinian evolution; it's too late to put that djinn back in its bottle. If anything, new diseases will keep appearing forever – not only diseases for our species but for all species; they keep sprouting. Darwin always wins.
Because to me, pain is not the real problem but a symptom. Oppression is the real problem, and it is everywhere. My sense of good and evil is political and moral, not technological. Now you could argue of course that we could edit out oppression from the human genome, but it may be impossible to do so.
I share your dark view of humanity.
Yet should we discourage a scientific understanding of depression and its treatment for fear some people might use the knowledge to make their victims more depressed? Should we discourage a scientific understanding of pain, painkillers and pain-free surgery for fear some people might use the knowledge to inflict worse torments on their enemies? Most topically here, should we discourage research into the "volume knob for pain" for fear a few parents might choose malign rather than benign variants of SCN9A for their offspring? If taken to extremes, this worry would stymie all medical progress, not just transhumanism. Indeed, one reason for accelerating the major evolutionary transition in prospect is to put an end to demonic male "human nature".
Boredom? Its elimination will be trivial compared to defeating the biology of aging:
https://www.hedweb.com/quora/2015.html#eliminate
Predation?
https://io9.gizmodo.com/the-radical-plan-to-eliminate-earths-predatory-species-1613342963
(I didn’t write the headline.) No sentient being need be harmed by some light genetic tweaking.
Plant sentience?
It’s a hoax:
https://www.hedweb.com/quora/2015.html#plantzombies
In this gem of a philosophical novel called The Dimension of Miracles, by Robert Sheckley, an average New Yorker, Tom Carmody, wins at the galactic lottery due to some galactic mistake, and travels to the galactic capital to receive his (less than galactic) prize. His return back to earth is much delayed because he doesn't know his way back.
He meets with all sorts of folks, including a god who has decided to annihilate all his creation because they kept complaining about material life in this valley of tears he had made for them... Ingrates. When his creatures started to seek and pray for reunification with their deity, he granted their wish and killed them all.
One problem for Carmody is that, since he is removed from his home environment, he is left without his usual predators (car traffic, diseases, etc.). This contradicts the 'universal law of predation' which states that all organisms must have predators. So the universe creates ex nihilo a predator specific to Carmody, that perpetually pursues and aims to eat our hero, from one chapter to the next... :-)
Thank you very much for the response.
I brought up the pinprick argument, as despite being a NU myself, I believe it defeats Benatar's asymmetry theory. In his book he bites the bullet, concluding that the pinprick would make it so a life otherwise full of pleasure would have been better off not being started. Surely you can't agree with this?
I also take it from your posts that you believe there are principles one must follow (sanctity of life etc), even if the likely consequences are more suffering? What is your answer to Smart's benevolent world-explorer?
The conjecture that predation among species is inevitable is no more tenable than the conjecture that predation among races is inevitable. This isn’t because selection pressure is going to slacken; on the contrary, selection pressure will intensify. But intelligent moral agents can decommission natural selection. A combination of genetic tweaking and cross-species fertility regulation (immunocontraception, remotely tunable synthetic gene drives, etc) together with ubiquitous AI surveillance is going to transform life on Earth.
https://www.abolitionist.com/reprogramming/index.html
Many thanks. Should human intuitions of absurdity weigh more in ethics than in, say, quantum physics? That said, I defend what might be called "indirect negative utilitarianism", but is really just strict negative utilitarianism. Thus enshrining the sanctity of human and nonhuman animal life in law doesn’t lead to more net suffering. Naively, yes, the implications of negative utilitarianism are apocalyptic (cf. https://www.utilitarianism.com/rnsmart-negutil.html). Indeed, classical utilitarian philosopher Toby Ord calls negative utilitarianism a “devastatingly callous” doctrine. But whereas negative utilitarians can ardently support an advanced transhumanist “triple S” civilisation, classical utilitarians are obliged to obliterate it with a utilitronium shockwave:
https://www.hedweb.com/social-media/pre2014.html
Many thanks. Could environmental degradation derail transhuman civilisation?
I'm sceptical. But I suspect a climate catastrophe such as the inundation of a major Western metropolis will be needed to trigger serious action on mitigating global warming. One possible solution might be coordinated international legislation to enforce drastic reductions in CO2 emissions that become effective only in, say, 10 years’ time.
A runaway “intelligence explosion” of recursively self-improving software-based AI?
Again, I’m sceptical:
https://www.hedweb.com/quora/2015.html#intelexplos
Classical Turing machines can’t solve the binding problem:
https://www.hedweb.com/hedethic/binding-interview.html
Digital zombies have profound cognitive limitations that no increase in speed or complexity can overcome. In my view, zombie AI will augment sentience, not replace us.
Superhappiness?
The molecular signature of pure bliss is unknown, although its location has been narrowed to our ultimate “hedonic hotspot” in a cubic centimetre in the posterior ventral pallidum. Its discovery will be momentous. But the creation of superhappiness, let alone information-sensitive gradients of superhappiness, will depend on the solution to the binding problem. And the theoretical upper bounds to phenomenally-bound consciousness – whether blissful or otherwise – are unknown.
Thank you. Suffering and the extended mind thesis?
(cf. https://en.wikipedia.org/wiki/Extended_mind_thesis)
One of the authors of the extended mind thesis, Andy Clark, is explicitly a perceptual direct realist. Clark’s co-author, David Chalmers, sometimes writes in a similar vein. If some version of the extended mind thesis were true, then the abolitionist project would need to be re-evaluated, as you suggest.
However, as far as I can tell, the external world is inferred, not perceived. Strictly, it’s a metaphysical hypothesis (cf. https://www.hedweb.com/quora/2015.html#distort). Thanks to evolution, biological minds each run skull-bound phenomenal world-simulations that take the guise of their external surroundings. Within your world-simulation, your virtual iPhone is an extension of your bodily self. Within your world-simulation, you may perceive the distress of the virtual bodily avatars of other organisms. If you’re not dreaming, then their virtual behaviour causally co-varies with the behaviour of genuine sentient beings in the wider world.
But suffering is in the head.
Hello David. Do you have an argument for why, as far as you can tell, the external world is inferred and not perceived? Is it some version of the argument from illusion/hallucination? If so, how do you respond to the usual criticisms of these arguments by externalists, i.e. that these arguments tend to confuse metaphysical with epistemological issues.
Incorrect. How could we have evolved, swinging through the trees - looking to catch the next branch, running from lions, not falling over cliff edges, and so on and on - if our sensory equipment were not accurate to reality as it really exists? How could there be art or traffic lights, or colour coded electrical insulation if reality is subjectively constructed?
Subjectivism is wrong. An objective reality exists, independently of individuals, and we perceive it - as it really is, albeit with limited sensory apparatus.
Why do you not look more systematically to the potential benefits of science?
As far as I can tell, physical reality long predates the evolution of phenomenally-bound minds in the late Precambrian. As I said, I’m a metaphysical realist. But each of us runs an egocentric world-simulation. Phenomenal world-simulations differ primarily in the identity of their protagonist. My belief that I’m not a Boltzmann brain or a mini-brain in a neuroscientist’s vat (etc) is metaphysical. The belief rests on a chain of inferences – and speculation I find credible. The external environment partly selects the content of one’s waking world-simulation; it doesn’t create it:
https://www.hedweb.com/quora/2015.html#immanuel
Quoting David Pearce
to:
Quoting David Pearce
Egoistic delusion about the significance of my own existence (putting aside the "sonder" of being just a passer-by in the experience of others) does not imply that objective reality is subjectively constructed.
https://www.dictionaryofobscuresorrows.com/
Recall I argue against the view that reality is subjectively constructed. But each of us runs a phenomenal world-simulation that masquerades as the external world. Mind-independent reality may be theoretically inferred; it's not perceptually given.
If objective reality exists, and we perceive it, surely the natural emphasis falls upon the validity of our understanding - and scientific method as a means to establish objective knowledge.
This then implies a far more systematic approach to the potential benefits of technology - whereas, it seems to me, your phenomenological approach makes no epistemic demands, and so justifies fantasising about technologically derived hedonism - while objectively, barrelling toward extinction.
If you'll pardon my bluntness - I don't mean to be rude - why don't you begin by solving climate change and securing a prosperous, sustainable future, before proposing super-longevity, super-intelligence and utter well-being?
For what it's worth, I think you're right - those things do heave into the realm of possibility, but only if we survive our technological infancy.
The inferential realist account is more epistemically demanding. The perceptual naïve realist believes that (s)he directly communes with the external world – an approach that offers all the advantages of theft over honest toil. By contrast, the inferential realist tries to explain how our phenomenally-bound world-simulations are neurologically possible. It's a daunting challenge:
https://www.hedweb.com/quora/2015.html#categorize
Climate change? A prosperous sustainable future? There is no tension here. I promise transhumanists are as keen on a healthful environment and economic prosperity as you are!
There's this surreal scene when Carmody finally makes it back to Earth. He finds the vegetation rather different from what he remembers, until he meets a T-Rex cub, who kindly invites him for dinner. He then realizes that he is back on Earth alright, but not at the right time: he's in the Jurassic.
So he follows young T-Rex to his home and has dinner with them T-Rex folks. He is the first mammal they encounter who can speak, most mammals they know are quite dumb, so they are fascinated by Carmody, especially when he tells them that he comes from the future. Then the T-Rex father, a self-satisfied, rather conventional fellow, asks Carmody about the future of the relationship between dinosaurs and mammals... To which Carmody politely answers that in the future, the relationship between dinosaurs and mammals is better than it ever was.
Maybe our future relationship with ants will be even better!
Were the inferential realist account grounded in evolution - acknowledging that the organism evolves in relation to a causal reality, such that the essential accuracy of sensory perception is promoted by evolution's function-or-die algorithm - then we could consider how perception works, and I would accept that we experience "an internal representation, a miniature virtual-reality replica of the world."
I don't begin with trying to understand the mechanisms of perception, but rather with the fact of perception. Art, traffic lights, colour coded electrical wires - all refute the idea that:
"representative realism, also known as epistemological dualism, is the philosophical position that our conscious experience is not of the real world itself but of an internal representation, a miniature virtual-reality replica of the world."
https://en.wikipedia.org/wiki/Direct_and_indirect_realism
Without that grounding in evolutionary biology, the inferential realist account is just subjectivism with a fresh coat of paint, because the effect is essentially the same. Focusing on the mechanisms of perception, you soon lose sight of the art, traffic lights and colour coded electrical wires that prove enormous commonality of perception of an objectively existing reality, necessary to our survival as a species - both up to this point, and in future.
I wholly accept that:
Quoting David Pearce
...but the question, surely, is how we get there from here. It won't be by ignoring the gas leak in the cellar to hang beautiful pictures in the hallway. There aren't many people around who know the first damn thing about science and technology - and clearly you do, but you're putting the roof on before we've dug the foundations.
Complications aside, sentient beings exhibit a clearly expressed wish not to be harmed. So compassionate biology doesn't entail "engineer[ing] every single life form on earth to do exactly what you think is good". Rather, intelligent moral agents should ensure that all sentient beings can flourish without being physically molested. Genome-editing is a game-changer. Yes, there are some (human and nonhuman) predators who want to prey on the young, the innocent and the vulnerable. But there is no "right to harm". A civilised biosphere will be vegan.
Modern physics reveals that mind-independent reality is radically different from our egocentric virtual worlds of experience. I say a bit more, e.g.
https://www.hedweb.com/quora/2015.html#johnsearle
A tiger does not apologize.
Although I think of myself as more of an indirect realist than a naive realist, I wonder if this is really the correct way to phrase it. Consider that if I am drinking water then it follows that I am drinking H[sub]2[/sub]O, but also that if I know that I am drinking water then it doesn't follow that I know that I am drinking H[sub]2[/sub]O – because I don't necessarily know that water is H[sub]2[/sub]O. Or consider that if I have met Joe Biden then I have met the President of the United States, but also that I only infer that Joe Biden is the President of the United States from the things I have seen on TV or read on the Internet.
So I think there is both an epistemological and a metaphysical aspect to this, and that perhaps metaphysically, waking experience just is a mind-independent reality being "perceptually given", whereas epistemologically, that I am having a waking experience and that waking experience just is a mind-independent reality being "perceptually given" is "theoretically inferred".
True, a tiger does not apologise. Nor does a psychopathic child killer. Their victims are of comparable sentience. Unless rather naively we believe in free will, neither tigers nor psychopaths are to blame in any metaphysical sense for the suffering they cause. But their blamelessness is not an argument for conserving tigers or psychopaths in their existing guise.
The fact that we each run a phenomenal world-simulation rather than perceive extracranial reality doesn't entail that our world-simulations are unreal – any more than the mind-dependence of our world-simulations entails that extracranial reality is unreal. It's often socially convenient to ignore the distinction and pretend we share common access to a public macroscopic world. But shared access is still a fiction.
Thanks, a lot to unpack there. I worry that the expression "perceptually given" is doing a lot of work in your account. One's experience of a macroscopic world is an intrinsic property of neural patterns of matter and energy. This kind of experience may be shared by dreaming minds, brains-in-vats, Boltzmann brains – and awake humans who have evolved over millions of years of natural selection. In other words, the experience of a macroworld isn't intrinsically “perceptual” – the extracranial environment is neither sufficient nor necessary for the experience. Rather, one’s phenomenal macroworld has been harnessed by natural selection to play a particular functional role in awake animal nervous systems, namely the real-time simulation of fitness-relevant patterns in the extracranial environment. "Perception" is a misnomer.
If inferential realism / a world-simulation account is correct, then thorny semantic issues arise:
https://www.hedweb.com/quora/2015.html#hardparadox
Quoting David Pearce
You deny the truth of a shared external reality. If you also deny the reality of the individual's free will, as inferred from the individual's capacity to create the external simulation, then how can you ground any ethics? What is the cause of human activities, if neither the external nor the internal? Where do you position "activities" in general in this schema, if they are caused neither by external reality nor by internal free will? Is activity an illusion? If so, then why do anything?
What about the suffering tigers annihilate? When their prey is dead, the prey won't suffer anymore. That's chalked up as a positive, right? If life is an abomination, death ought to be a blessing.
I believe in the existence of mind-independent reality (cf. https://www.hedweb.com/quora/2015.html#idsolipsism). Its status, from my perspective, is theoretical not empirical. The fact that one can't directly access the world outside one's transcendental skull doesn't make it any less real.
Grounding ethics?
Agony and despair are inherently disvaluable for me. Science suggests I’m not special. Therefore I infer that agony and despair are disvaluable for all sentient beings anywhere:
https://www.hedweb.com/quora/2015.html#metaethics
The problem of suffering can't be solved by tigers killing their victims any more than it can be solved by psychopaths killing orphans. Well-fed tigers breed more offspring who go on to terrorise more herbivores. I wouldn't personally be sad if the cat family were peaceably allowed to go extinct; but most people are aghast at the prospect. So instead, genetic tweaking can allow the conservation of tigers and other members of the cat family minus their violent proclivities.
Very many thanks. Yes, I remarked that most critics don't find an architecture of mind based entirely on information-sensitive gradients of bliss to be a genetically credible prospect. If pressed, such critics will normally allow that a minority of chronic depressives are animated entirely by gradients of ill-being. The possibility of people with the opposite syndrome, i.e. life animated entirely by gradients of well-being, simply beggars their imagination. That’s why case studies of exceptional hyperthymics (e.g. Jo Cameron) who never get depressed, anxious or feel pain are so illuminating. The challenge is to create a hyperthymic civilisation.
Terminology? The opposite of our position is dolorism. It's historically rare. More common is the bioconservatism exemplified by Alexander Pope in his Essay on Man (1733): "One truth is clear, WHATEVER IS, IS RIGHT".
Voltaire satirised such an inane optimism in Candide (1759).
Tackling the entrenched status quo bias of bioconservatism can be hard. One way of overcoming status quo bias is to pose a thought-experiment. Imagine humanity encounters an advanced civilisation that has abolished the biology of suffering in favour of life based on gradients of intelligent bliss. What arguments would bioconservative critics use to persuade the extra-terrestrials to revert to their ancestral biology of pain and suffering?
I have also realized that "hard" antinatalism is impractical, so I just left it at that for a long time now, but I'm finding that your arguments for transhumanism present a more practicable alternative. So thanks for holding this AMA and introducing me to this position!
Thank you. You are very kind. The Transhumanist movement is diverse, indeed fragmented. For instance, Nick Bostrom and I both advocate a future of superintelligence, superlongevity and superhappiness, but “existential risk” means something different to an ardent life lover and a negative utilitarian (cf. https://www.hedweb.com/transhumanism/). A commitment to the well-being of all sentience is item 7 of 8 in The Transhumanist Declaration (1998, 2009). This prioritisation probably reflects the relative urgency most transhumanists feel. Superintelligence and superlongevity loom larger in the minds of most transhumanists than defeating suffering.
That said, I'd urge any secular Buddhist to embrace the transhumanist agenda. Recall how the historical Gautama Buddha was a pragmatist. If it works, do it! Indeed, the abolitionist project might crudely be called Buddhism and biotech. The only way I know to abolish the biology of suffering short of sterilising Earth is to rewrite our legacy source code. Other makeshift remedies are just stopgaps. Most technological advances don’t get to the heart of the problem of suffering. We need a genetically-driven biohappiness revolution.
So as a very rough analogy, as I understand it, you think of experience as something like a footprint in sand that may or may not have been caused by a boot, and the extent to which the features of the footprint "resemble" the features of the boot is an open question, as is how we are able to think and talk about the boot when presented with only a footprint?
To hopefully better explain my previous post I offer the different analogy of mixing hot water with coffee beans to make coffee. The relationship between the coffee beans and the coffee isn't merely causal; the coffee beans are directly present in the coffee. Perhaps "mixing" an external world object with one's sensory apparatus "makes" an experience in the same sort of way, with the external world object directly present in the experience. I think this view is somewhat similar to enactivism: "Organisms do not passively receive information from their environments, which they then translate into internal representations. Natural cognitive systems ... participate in the generation of meaning ... engaging in transformational and not merely informational interactions: they enact a world."
Of course, there is still the epistemological problems of knowing that what one has is coffee (a waking experience with a directly present external world object) and not some qualitatively identical coffee substitute (a dream) and knowing the extent to which the features of the coffee (the experience) are features of the coffee beans (the external world object), but I think it may help resolve some of the metaphysical or semantic problems.
If that's so, explain art - and not only creating art, but meaningful discussions about art. Explain what the artist thinks they are doing when painting a picture of a bridge in the fog. And how it can possibly be that I come along, a hundred years later, and say, "Hey - cool foggy bridge, dude!"
The way I see it our sensory apparatus evolved in relation to a causal reality over millions of years, and while limited, is necessarily accurate to reality - as it really exists, to allow for survival. We could not have survived if perception were subjectively constructed.
Since the 1633 trial of Galileo there's been a philosophical conspiracy to downplay science as a means to establish objective truth, starting with Descartes' subjectivism. Methodologically, Meditations on First Philosophy is a weak, sceptical argument compounding a misdirected search for certainty consistent with Church dogma - it asks, what if I'm being deceived by a powerful demon, and all the world is an illusion?
It's a retrograde step, epistemically, when you consider that William of Ockham had long since established the principle of sound reason known as Occam's Razor, "it is vain to do with more that which can be done with fewer." Had Descartes put his hand in the fire, rather than a ball of wax, he'd soon have discovered something exists, both objectively, and prior to "cogito."
Because the simplest adequate explanation is the best - it follows that we evolved in relation to a causal reality, and our sensory equipment is essentially accurate to reality, and similar person to person, to allow for survival. How can it be any other way? Spotting predators, and prey, and ripe fruits in the forest canopy - require we perceive reality as it really is - and allows in the fullness of time, for the creation and appreciation of art.
Thanks, your striking footprint / boot analogy hadn't occurred to me; but yes, in a sense. I'm still thinking about the coffee! Either way, the difference between dreaming and waking consciousness isn't that when awake one perceives the external world. Rather, during waking life, peripheral nervous inputs partially select the contents of one’s phenomenal world-simulation. When one is dreaming, one's world-simulation is effectively autonomous.
Now for the twist. A Kantian might say that all one can ever know is phenomena. The noumenal essence of the world is unknown and unknowable. But the recently-revived intrinsic nature argument "turns Kant on his head":
https://www.hedweb.com/quora/2015.html#galileoserror
I hope you'll forgive me for ducking questions of art here. However, when it comes to science, I'm a realist and a monistic physicalist, but not a materialist:
https://www.hedweb.com/quora/2015.html#dualidealmat
Taking modern physics seriously yields a conception of reality very different from the world-simulation of one's everyday experience.
If you highlight a passage within my post - as I will do with yours now, and then click the little curly arrow bottom left, next to where it says 2 hours ago, it will transfer the highlighted passage to the text box. Thus:
Quoting David Pearce
Also, I will get a notification of your reply.
What exactly do you mean by "theoretical" here? I see this statement as self-contradictory. To say that something is theoretical is to say that it is mind-dependent. To say that reality is theoretical, but mind-independent, is to contradict yourself. In other words, I don't see this as a valid way to account for the reality of the external world - to say that it is theoretical, yet also mind-independent. It can only be one or the other.
Quoting David Pearce
Are you not a unique individual? It is this very idea - that what is valuable to me is valuable to everyone else - which is the root of jealousy, coveting, greed, hoarding, and numerous other vices. This is probably why Plato, in The Republic, centered justice around having a respect for each other's differences, rather than the false assumption that we are all the same - which science doesn't really suggest.
It's possible your recent comment has been deleted. But the reason I don't advocate the extinction (as distinct from genetic tweaking) of the cat family is precisely the visceral responses of outraged cat lovers. Ethically speaking, should we conserve, for example,
https://www.youtube.com/watch?v=6GATu6KKu2g
[Viewer discretion advised: please don't watch if you already agree that intelligent moral agents should end predation. But in the abstract, "predation" sounds no more troubling than halitosis.]
Civilisation will be vegan.
Strange... I wonder what problem they had with it.
Anyway, I suspect felids are not particularly interested in your advice. You are welcome to change yourself into some computer if you want to, but leave cats alone. They can make their own life choices.
Why do you think death is problematic? If suffering is the problem, death is a perfect solution for it. There is a contradiction in hating life as it is and hating death at the same time. Either life is beautiful hence death is bad, or life is shit hence death is a bliss.
But "life is shit and death is bad" makes no logical sense to me.
One can take modern physics seriously in at least two distinct ways: instrumentally or realistically. Taking it realistically has been argued to lead to incoherence. What would your reply be to this kind of argument: https://metaphysicsnow.wordpress.com/2018/03/29/common-sense-versus-physics
Consider lucid dreaming. When having a lucid dream, one entertains the theory that one's entire empirical dreamworld is internal to the transcendental skull of a sleeping subject. Exceptionally, one may even indirectly communicate with other sentient beings in the theoretically-inferred wider world:
https://www.the-scientist.com/news-opinion/researchers-exchange-messages-with-dreamers-68477
What happens when one "wakes up"? To the naïve realist, it's obvious. One directly perceives the external world. But the inferential realist recognises that the external world can only be theoretically inferred. For a good development of the world-simulation metaphor, perhaps see Antti Revonsuo's Inner Presence (2006):
You remark, "To say that something is theoretical is to say that it is mind-dependent." But when a physicist talks of, say, the theoretical existence of other Hubble volumes beyond our cosmological horizon, s/he certainly doesn’t intend to make a claim of their mind-dependence. Of course, how our thoughts and language can refer is a deep question. Naturalising semantic content is hard: https://www.hedweb.com/quora/2015.html#aboutness
Unique individuals?
Yes, our egocentric world-simulations each have a different protagonist. Yet we are not "uniquely" unique. When I said that "science suggests I'm not special", I was alluding simply to how my perception of myself as the hub of reality is (probably!) a fitness-enhancing hallucination:
https://www.hedweb.com/quora/2015.html#moralvalues
What makes you suppose I want to change myself into a computer?!
(cf. http://www.hedweb.com/quora/2015.html#braincomp)
Either way, power breeds complicity, whether we like it or not. Humans would (I hope) rescue a small child from the jaws of a lion. It's perverse not to rescue beings of comparable sentience. No one deserves to be disembowelled, asphyxiated or eaten alive. Of course, ad hoc solutions to the problem of predation are unsatisfactory. Hence the case for a pan-species welfare state and veganising the biosphere.
I think the process of augmentation would result in harm and people would neglect their augments. In a more virtual reality, the process may be more perfect.
The technical nature in creating technology is not beyond us, but I think our surgical skills are lacking and inconsistent.
A petty version of Transhumanism may work, exo-augmentation.
Further, what's wrong with the original, biological human; isn't Transhumanism better suited for a virtual world?
In my view, instrumentalism threatens to collapse into an uninteresting solipsism. Instead, we'd do well to interpret the mathematical formalism of modern, unitary-only quantum mechanics realistically. Just as the special sciences (chemistry, molecular biology etc) can be derived from physics, likewise science should aim to derive the properties of our minds and the phenomenally-bound world-simulations they run from fundamental physics. This is a monumental challenge. But if there isn't a perfect structural match, then presumably some kind of dualism is true.
Let's stick to physicalism. I explore the quantum-theoretic version of the intrinsic nature argument. It’s counterintuitive:
https://www.hedweb.com/quora/2015.html#quantummind
Sometimes, wisdom consists in not acting even when you could act. We should leave other species alone, to the extent possible, e.g. by way of nature reserves.
Quoting David Pearce
I live in a very old city, Rome. Here, humans were at some point sending children into the jaws of lions for fun. And worse things. And people would go to see the show. But the interesting thing here is that those kids often went into the lions' jaws willingly. All they had to do to live was to perform some rites for the emperor's worship, but they would rather not.
They were called martyrs, which means "witness", for they bore witness that there was an entity greater than the emperor, and that only He should be worshipped.
I'm not a believer anymore, but those guys were onto something. In secular terms, they 'witnessed' that no man is a god, that no man deserves to be treated as a god, and that no man should act as a god.
This idea is behind my fears of your technological utopia. We are not gods.
I note that this idea is now a truism, and you must have encountered it. But it wasn't a truism 2000 years ago, and it is the Christian martyrs who hammered it into the social consciousness by willingly enduring the worst sufferings for the sake of it, for centuries. Because amongst the inured Romans who went merrily to the theater, week after week, to see some stupid Christians get fed to the lions, SOME felt their heart melt. SOME understood that these guys were serious, that there was something deeply subversive in their acceptance of pain and death. They were telling the antique world: "We don't give a shit about your power, about all your tortures, about all your refined ways to kill. We're not afraid. We're the captains of our own souls, and we will pray the way we want to pray. Thank you very much."
And quite a few of those Romans came to think, privately, that those Christians were admirable. That's how the martyrs won them over, ultimately. By the virtues of suffering. Life is complicated.
Quoting David Pearce
Death is a necessary aspect of life. Logically, death is simply the absence or end of life so death is logically necessary if life is to exist. Practically, entropy can't be beaten forever and thus all living creatures beyond a certain complexity threshold die, ultimately.
And when we die, our meat isn't lost on the living. Tigers or worms, someone will eat you. And that may sound bleak, but it is not a tragedy. It is simply the price to pay for the immense privilege of having lived.
Aging is a frightful disorder. Medical science should aim for a cure.
More generally, mental and physical suffering are vile. A predisposition to suffer is genetically hardwired. For example, hundreds of millions of people worldwide suffer from chronic depression and pain. Even nominally healthy humans sometimes suffer horribly as a function of our legacy code. In the absence of biological-genetic interventions, the negative-feedback mechanisms of the hedonic treadmill will play out in immersive virtual worlds no less than in basement reality.
What's more, we shouldn’t retreat into escapist VR fantasy worlds until we have solved the problem of suffering in Nature and have created post-Darwinian life.
In short, transhumanism is morally urgent.
"In my view, instrumentalism threatens to collapse into an uninteresting solipsism."
How does instrumentalism, if tied with commonsense realism, lead to solipsism? Perhaps instrumentalism tied to idealism of some kind might, but the argument on the link I provided seems to be trying to bolster commonsense realism, on the grounds that the alternative winds up in incoherence. Which leads me to the second follow up, which is just a repeat of my initial question: how would you respond to that argument?
Biotech (genome editing, synthetic gene drives, etc) turns the level of suffering in Nature into an adjustable parameter. Yes, traditional conservation biologists favour preserving the snuff movie of traditional Darwinian life: sentient beings hurting, harming and killing each other.
Intelligent moral agents can do better.
I have no problem with the idea that the external world is theoretically inferred, because I lean toward idealism, but I think "theoretically inferred" is a stretch. This is because inference is a conscious rational process, and I think the recognition that there are things external to or independent of oneself is a deeper capacity, not dependent on logical inference.
However, naive realism and naive idealism can be very similar in the sense that they both suffer the same problem, which is that without a medium between the perception and the thing sensed, we cannot account for the existence of mistakes. So from the idealist perspective, there must be something which separates the ideas of your mind from the ideas of my mind, otherwise we'd be thinking each others thoughts. What exists between us is that medium, and we call it the external world.
Quoting David Pearce
I don't think this analogy really suffices to resolve the issue. If we make the external world consist solely of possibilities (as in the use of "theoretical" in your example), then we need some principles whereby we discern what is real, actual, or true to the world. The "world" is what is supposed to be common to us. Aristotle used hylomorphism for example, the concept of "matter" gives us the realm of possibilities external to us, while "form" is applied to what is real, actual, allowing us to understand through common terms.
If we do not have any such principles to apply, then there is nothing to prevent your theoretical world from being completely different from my theoretical world. And there need not be any commonality, or even an inclination toward consistency between us, because there is no single reality, or the Truth, which we ought to conform our ideas to. So allowing that the world is theoretical in this way, will only increase strife between various people who have no desire to make their theories compatible with the theories of others. And strife produces conflict.
Quoting David Pearce
Now don't you see the problem here? If the external world is theoretical, as you said, then a human being's mind, as the holder of a theory, is by that fact, the hub of reality. We cannot escape this unless we have some way to get the theory outside of the mind. Therefore under your idealist principle that the external is theoretical, that I am the hub of reality is a true principle. And if this principle is "fitness-enhancing", it must be good. So it is clearly wrong to call this an hallucination unless we deny that the world is theoretical. To say that this is an hallucination is an act of self-deception, but for what end is that deception being applied? If it is not self-deception, is it aimed at others?
Intelligent moral agents can do far better than bioengineer cats for the moral satisfaction of seeing them eat lettuce.
What do you think it feels like to be eaten alive? The horrors of "Nature, red in tooth and claw" are too serious to be written off with jokes about eating lettuce. For the first time in history, it's technically possible to engineer a biosphere where all sentient beings can flourish. I know of no good moral reason for perpetuating the horror-show of Darwinian life.
Quoting Olivier5
I deleted it because I couldn't tell if you (Olivier5) meant it in good humour or not. Since David seems to have taken it in good humour I guess it's fine!
If you're talking about the existing biosphere, that would mean reengineering almost everything including microbes. I'd be really worried about that going badly wrong. Seems way more challenging than creating a new one on Mars or Venus (not that terraforming is easy, but your timescale seems to be over centuries or millennia).
But as for the morality of it, do you think we should do the same if we come across an alien biome? Would we want advanced aliens to come tame our world?
If pain-ridden Darwinian ecosystems exist within our cosmological horizon, then I would indeed hope future transhumans will send out cosmic rescue-missions. It's a tragedy that no such rescue-mission ever reached our planet; it could have prevented 540 million years of unimaginable suffering. Here on Earth, delegating stewardship of the global ecosystem to philosophers would be unwise. But pilot studies of self-contained happy ecosystems are feasible even now. CRISPR-based synthetic gene drives are an insanely powerful tool of compassionate stewardship. However, before we start actively helping sentient beings, we must first stop systematically harming them:
https://www.hedweb.com/quora/2015.html#slaughterhouses
I was dead serious (pun intended).
I see a contradiction in hating life as it is (Darwinian life, which is the one and only life we know) and hating death at the same time. Either life is bad and death is good, or vice-versa life is good and death is bad. Someone assuming that life is a tragedy and death is also a tragedy, is a bit too hard to please in my view.
It was J. L. Austin (of all people!) who acknowledged that "common sense is the metaphysics of the stone age". Folk physics is not my point of departure. What counts as "common sense" is time- and culture-bound. My common sense most likely differs from your common sense. By "commonsense realism", do you mean perceptual naïve realism? Either way, we want to explain the technological success story of modern science. Anti-realism leaves the technological success of modern science a miracle. Maybe we can take an agnostic instrumentalist approach. Yet a lot of us want to understand reality. Why does quantum mechanics work? What explains the Born rule?
Most scientists are materialist physicalists. Materialism leads to the insoluble Hard Problem of consciousness:
https://www.hedweb.com/quora/2015.html#whohardprobsol
Most scientists treat the mind-brain as a pack of classical neurons. Treating the mind-brain as an aggregate of decohered, membrane-bound neurons leads to the insoluble phenomenal binding/combination problem:
https://www.hedweb.com/quora/2015.html#categorize
I don’t know if non-materialist physicalism is true; but it’s my working hypothesis.
You'd be glad to know, then, that "Darwinian life" is fast disappearing from our planet. The Global Assessment Report on Biodiversity and Ecosystem Services released on 6 May 2019 states that, due to human impact on the environment in the past half-century, the Earth's biodiversity has suffered a [s]catastrophic[/s] welcome decline unprecedented in human history. An estimated 82 percent of wild mammal biomass has been lost, while 40 percent of amphibians, almost a third of reef-building corals, more than a third of marine mammals, and 10 percent of all insects are threatened with extinction. We are on our way to cure the biosphere of all Darwinian evil.
But then we wouldn't be here. Some alien reengineered biosphere would have been the result. No dinosaurs either :(
Maybe that's a resolution to the Fermi Paradox. Benevolent aliens go around reengineering life. They just haven't gotten around to us yet. Do you think sentient life should be allowed a choice in the matter?
I think compassionate stewardship of the living world is morally preferable to uncontrolled habitat destruction.
Darwinian malware had no choice about being born. Under a regime of natural selection, the lives of most sentient beings are “nasty, brutish and short”. Elsewhere I’ve urged upholding the sanctity of human and nonhuman life in law - not because I believe in such a fiction, but because failure to do so will most likely lead to worse consequences.
Would a notional benevolent superintelligence opt to conserve a recognisable approximation of Darwinian life-forms?
Or optimise our matter and energy for blissful sentience?
I don’t know.
But in the real world, a policy framework of compassionate conservation would seem to be a workable compromise.
Any solution to the problem of suffering must be technically and sociologically credible. It's a daunting challenge, but I know of no alternative:
https://www.hedweb.com/quora/2015.html#abolitionismbioethics
There is no OFF button.
If you're just gonna mock our guest, you'll be deleted.
The species is predictable; so much focus on a harmonious animal kingdom where animals continuously evolve is pointless.
Focus on humankind for a great, beneficent relapse.
Though David thinks the answer is transhumanism, and I don't, I still agree his premises are correct; and if people accept them, then it is entirely possible as a cure for suffering in the world. I would opt for virtual reality and less restrictive laws to enhance experience.
So in terms of priorities, let's fix ourselves first. Let's reduce our ecological footprint and not send the climate into thousands of years of desertification. Once that is done, IF that is done, maybe we can start to work on reforming the diet of tigers, if there are still any around.
But then, there is also the question of us Homo sapiens being only one species among many. We are not gods and should not behave as gods. Nature made us, she's our mother, and we should respect her I think.
I was wondering what your take is on the opioid crisis. Are we not all concerned hedonistic pleasure seekers?
Thank you. You are very kind. I'm curious though. You believe there is potentially a cure for suffering, but it's not transhumanism. First, we need to define what is meant by the term "transhumanism". If we don't edit our source code, recalibrate the hedonic treadmill, and genetically change "human nature", then I'm at a loss to know how the abolitionist project can ever be realised - short of apocalyptic scenarios that are unfruitful to contemplate.
I'm mystified why you value life per se rather than certain kinds of life. Do you really believe we should value and attempt to conserve, say, the parasitic worm Onchocerca volvulus that causes onchocerciasis a.k.a. "river blindness"? If so, why? Yes, humans are "playing god". Good. We should aim to be benevolent gods.
The underlying cause of the opioid crisis is that we are all born endogenous opioid addicts. The neurotransmitter system most directly involved in hedonic tone is the opioid system. Human and nonhuman animals are engineered by natural selection with no durable way to satisfy our cravings. Most exogenous opioid users are ineffectively self-medicating. Exogenous opioids just activate the negative-feedback mechanisms of the CNS. A solution to the opioid crisis is going to be complicated, long-drawn-out and messy. But ACKR3 receptor blockade potentially offers the prospect of hedonic uplift for all (cf. https://www.azolifesciences.com/news/20200622/New-LI383-molecule-can-help-treat-opioid-related-disorders.aspx). More research is urgently needed. Note I'm not (yet) urging everyone to get hold of ACKR3 receptor blockers. There are too many pitfalls and unknowns.
If we manage to survive the storm that's coming, that is.
While it is possible to eliminate viruses or parasites that play no ecological role, trying to change a whole ecosystem is very risky. Perhaps you can understand why with a metaphor. Darwinian life is akin to capitalism. It's ugly, but it works. It works without anyone telling the system how to work; it self-regulates. Your imagined life would be akin to a centrally planned economy: nicer in theory, but it doesn't work very well in practice because it relies on a few people at the top making the right decisions at all times, and they sometimes make mistakes, or simply abuse their position.
A decentralized, self-regulated system is more resilient than a centrally regulated system. And that's a dimension on which Darwinian life will always trump engineered life.
I hope I'm not too late to the party, but I'll start by asking a question relating to the debate between what I've heard called "agent-neutral" and "agent-relative" forms of utilitarianism. Do you think we have any extra non-instrumental reason to minimize our own suffering, or the suffering of our loved ones, relative to the reason we have to minimize the suffering of a complete stranger? Also, do you think we have less non-instrumental reason to minimize the suffering of our enemies and people we despise?
It's a question of timescales. Classical utilitarianism is often held to be agent-neutral. But in the long run, it's unclear whether classical utilitarianism is consistent with a world of agents. For the classical utilitarian should be working towards an apocalyptic, AI-assisted utilitronium shockwave – some kind of all-consuming cosmic orgasm that maximises the abundance of bliss within our Hubble volume. Compare the homely dilemmas of the trolley problem. Intuitively, this kind of technological enterprise is centuries or millennia distant, so irrelevant to our lives. Yet temporal discounting is not an option for the strict classical utilitarian. Note that negative utilitarianism doesn’t entail this apocalyptic outcome. For the negative utilitarian, a transhuman civilisation based entirely on information-sensitive gradients of bliss will be ideal.
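The no-discounting point is easy to see numerically. A minimal sketch, with made-up illustrative numbers (the function and figures below are mine, not anything David commits to):

```python
# Toy illustration: exponential temporal discounting.
# At a modest 3%/year rate, value 1,000 years out is multiplied by a factor
# below 1e-13, so near-term concerns dominate. A strict classical utilitarian
# who rejects discounting must instead weigh that remote value in full.

def discounted_value(value, annual_rate, years):
    """Present weight of a future quantity of well-being."""
    return value * (1.0 - annual_rate) ** years

near_term = discounted_value(1.0, 0.03, 10)      # ~0.74: barely dented
far_future = discounted_value(1.0, 0.03, 1000)   # vanishingly small
undiscounted = discounted_value(1.0, 0.0, 1000)  # full weight: 1.0
```

At any positive discount rate, remote-future outcomes shrink toward zero weight; refusing the discount is what makes the far-future shockwave scenario bear on the strict classical utilitarian's present choices.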
In the short-term, the practical implications of a classical utilitarian ethic are less dramatic. Given human nature, agent-neutrality is psychologically impossible. Any attempt by legislators to enforce agent-neutrality would probably lead to worse consequences, i.e. more unhappy people. Likewise, if one is an aspiring effective altruist who gives, say, 10% of one's income to charity, then beating oneself up about not charitably giving away 90% of one's income is counterproductive. Most likely, heroic self-sacrifice will lead to "burn out" - an un-utilitarian outcome. So in practice, being a (classical or negative) utilitarian involves all sorts of messy compromises. But then real life is messy – no news here.
Should one aspire to minimise the suffering of the people we despise as much as loved ones? Yes. In practice, such impartial benevolence is impossible. But getting our theory of personal identity right can help:
https://www.hedweb.com/quora/2015.html#individualism
Yes:
https://www.wireheading.com/hypermotivation.html
Wireheading is not a viable solution to the problem of suffering.
Quoting counterpunch
In my view, the right way to seek pleasure is through genetic recalibration of the negative-feedback mechanisms of the hedonic treadmill:
https://www.gradients.com
Good answer. Design happier, healthier babies! But where to stop?
We could engineer a world with hedonic range of 0 to +10 as distinct from our -10 to 0 to +10. But we could also engineer a civilisation of (schematically) +10 to +20 or (eventually) +90 to +100. Critics protest that a notional civilisation with a hedonic range of +90 to +100 would "lack contrast" compared to the rich tapestry of Darwinian life. But a hedonic range of, say, +70 to +100 will be feasible too.
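The range-shifting idea above can be put as simple arithmetic. A toy sketch (my own framing, not David's notation): recalibration is an affine map, so the informative ups and downs survive a shift of the whole range:

```python
# Toy model: hedonic recalibration as an affine map h -> gain * h + shift.
# A pure shift preserves the "information-sensitive gradients" exactly;
# a gain change merely rescales them.

def recalibrate(h, shift=0.0, gain=1.0):
    """Map a hedonic value onto a new range."""
    return gain * h + shift

darwinian_day = [-6, -2, 0, 3, -4, 5]   # schematic values on a -10..+10 scale
uplifted_day = [recalibrate(h, shift=90, gain=0.5) for h in darwinian_day]
# uplifted_day now lies in a +70..+100 style band, but keeps its ups and downs

grad_old = [b - a for a, b in zip(darwinian_day, darwinian_day[1:])]
grad_new = [b - a for a, b in zip(uplifted_day, uplifted_day[1:])]
# every gradient is rescaled by the gain (0.5); none is lost
```

The sketch is only arithmetic, of course; whether biology permits such a clean shift is the empirical question the thread debates.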
If a global consensus emerges for compassionate stewardship of the living world, then the problem of suffering is tractable. We're not going to run out of computer power. Every cubic metre of the planet will shortly be accessible to surveillance and micromanagement – although synthetic gene drives allow the ecological option of remote management too.
On the one hand, we can imagine dystopian Orwellian scenarios – some kind of totalitarian global panopticon. But biotech also gives us the tools to create a world based on genetically preprogrammed gradients of bliss. In a world animated by information-sensitive gradients of well-being, there are no "losers" in the Darwinian sense of the term. Members of today's predatory species can benefit no less than their victims:
https://www.hedweb.com/quora/2015.html#stopkilling
I suspect that, were we to live in such a civilisation, our mean hedonic expectation would simply adjust to somewhere around 95. Anything below 95 would be deemed a disappointment if not a "micro-aggression", and anything above 95 would get recorded as satisfying and truly a pleasure. In short, I suspect the gradient is relative, not absolute.
I think it is feasible (horrible, that everyone would go around grinning all day, but feasible) to genetically engineer humans to be healthier and happier. We could have wings! But if people weren't prone to chronic depression half the time, where would the philosophers come from? For me, happiness is transitory, and arises from positive contrasts. I think that's why people like shopping. They buy something new and it cheers them for a while, and then it fades into the background of 'stuff I've got.' I find it very difficult to conceive of happiness as a constant state and, perhaps self-referentially, believe there's something worthwhile about my misery. That said, I would like wings. That would be cool!
The only kind of idealism I take seriously just transposes the entire mathematical apparatus of modern physics onto an experientialist ontology: non-materialist physicalism (cf. https://www.physicalism.com). I used to assume the conjecture that the mathematical formalism of quantum field theory describes fields of sentience was untestable. How could we ever know what (if anything!) it's like to be, say, superfluid helium?
I now reckon such pessimism is premature:
https://www.hedweb.com/quora/2015.html#conpredicts
At the risk of stating the obvious, no one sympathetic to the abolitionist project need buy into my idiosyncratic speculations on quantum mind and the intrinsic nature of the physical.
As a temperamentally depressive negative utilitarian, I find lifelong happiness hard to conceive too. But a civilisation based on gradients of superhuman bliss is technically feasible. IMO, our impending mastery of the pleasure-pain axis makes such a future civilisation likely, too, though I vacillate on credible timescales.
"Information-sensitive gradients of well-being" is more of a mouthful than "constant happiness". Nonetheless the distinction is practically important. Despite my normal British prudery, I sometimes give the example of making love. Lovemaking has peaks and troughs. But done properly, lovemaking is generically enjoyable for both parties throughout. The challenge is to elevate our hedonic range and default hedonic tone so that life is generically enjoyable – despite the dips.
The misconception that pain and pleasure are relative is tenacious. But we need only consider the plight of severe chronic depressives to recognise that it's false. Chronic depressives never cease to suffer even though some of their days are less dreadful than others. Hyperthymics lie at the opposite extreme.
Phasing out the biology of suffering in favour of life based on information-sensitive gradients of well-being can be "perfect" in its implementation in the same sense that getting rid of Variola major and Variola minor was "perfect" in its implementation. Without Variola major and Variola minor, there is no more smallpox. Without the molecular signature of experience below hedonic zero, there can be no more suffering. It's hard to imagine, I know.
So a higher average of happiness. Okay, but how close is genetic science to identifying the specific genes and/or areas of the brain they want to alter? I'm not the 'playing God' hysterical type, but a procedure performed on me with my informed consent would be one thing; performing one on an individual as yet unborn is another. And not only that, but to wrest the entire genetic future of humanity from biology? To alter his children's and his children's children's genetics forever after? That's a lot to take on, and morally difficult to justify. That said, I would like wings! That would put me way out front in the intergenerational genetic arms race that would surely ensue!!
It's complicated: https://www.hedweb.com/quora/2015.html#devote
But which version of the ADA2b deletion variant, COMT gene, serotonin transporter gene, FAAH & FAAH OUT gene (cf. https://www.irishtimes.com/life-and-style/health-family/the-intriguing-link-between-sensitivity-to-physical-and-psychological-pain-1.3846825) and SCN9A gene (cf. https://www.wired.com/2017/04/the-cure-for-pain/) would you want for your future children?
As somatic gene therapy matures for existing humans, which allelic combinations would you choose for yourself?
Of course, bioconservatives would maintain that the genetic crapshoot of traditional sexual reproduction is best. If they prevail, then a Darwinian biology of misery and malaise will persist indefinitely.
Hi David, gene editing to the degree you are proposing has not been done to a real human (except perhaps one case in China). What if, given how phenotypes and the epigenetic effects of gene editing actually work, it creates less happy humans, who suffer more? We are betting that practical application will somehow prove out the theories. What if it doesn't, and gene editing too becomes a dead end?
Preimplantation genetic diagnosis and screening is worth distinguishing from gene-editing. Ratcheting up hedonic range, hedonic set-points and pain thresholds in human and nonhuman animal populations would certainly be feasible using preimplantation genetic screening alone; but germline gene-editing will be quicker.
The rogue He Jiankui case was indeed unfortunate. Well-controlled prospective trials will take time. So will preparing the ethical-ideological groundwork. Most people are accepting of (if uncomfortable with) the idea of genetic interventions to prevent, say, Huntington's, Tay-Sachs or sickle-cell disease, but not yet receptive to the prospect of selecting higher pain thresholds or hedonic set-points.
I'm sceptical gene-editing could prove to be a dead-end. Modulating even a handful of genes such as the half-dozen I mentioned in my reply to counterpunch above could create radical hedonic uplift across the biosphere. Rather, I'm glossing over all sorts of risks, complications and unknowns – permissible in a philosophy forum when we are discussing the fundamental principles of a biohappiness revolution, but not if-and-when real-world trials begin.
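The screening-versus-editing comparison above can be given a toy quantitative gloss. A hedged simulation under assumed, hypothetical numbers (mine, not from the thread): each embryo's polygenic score is modelled as the parental mean plus unit-Gaussian noise, and screening simply keeps the best of n:

```python
import random

# Toy model of preimplantation screening as "best of n".
# Each embryo's score = parental mean + Gaussian noise (sd = 1);
# screening keeps the highest-scoring embryo of each batch.

def next_generation_mean(parent_mean, n_embryos, sd=1.0, trials=2000, seed=0):
    """Average score of the chosen embryo over many simulated families."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        scores = [rng.gauss(parent_mean, sd) for _ in range(n_embryos)]
        total += max(scores)  # screening: pick the best of the batch
    return total / trials

no_screening = next_generation_mean(0.0, 1)   # mean stays near the parental 0
best_of_ten = next_generation_mean(0.0, 10)   # shifts up roughly 1.5 sd
```

In this cartoon, selection gains on the order of a standard deviation per generation and must compound over generations, whereas editing could in principle change alleles in one step, which is the sense in which germline editing would be "quicker".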
It's also very hard to do, I know.
Indeed. At times, my heart sinks at the challenges. But if we don't upgrade our legacy code, then pain and suffering will continue indefinitely.
What of the quote "Some of the worst things in my life never even happened", attributed to Mark Twain? The mind is more than malleable enough to deliver levels and depths of suffering on par with physical torture. How will gene editing make us feel toward tragedy, such as death? If we no longer even have the ability to be distraught at that which is tragic, is this really progress for humanity? If you're ill or injured, pain can sometimes be the only thing to inform you something's not quite right. If it becomes merely a vague "numbness" of no severity or actual discomfort, especially if it doesn't scale up like biological pain does, well, is that really safe?
I've heard testimonies of people who became addicted to strong opiate painkillers, some after major surgery, some through recreational use. The pleasure rewired their bodies so greatly that when they had to come off them cold turkey, it was described as "the worst pain imaginable", as if "[one's] bones were being crushed into dust" throughout the entire body. What if gene-editing doesn't remove suffering but simply re-calibrates it in an unfavorable way?
As you may have gathered, I'm a "better the devil you know" kind of guy when it comes to these matters.. a keeper of Pandora's Box, if you will.
Quoting Outlander
You can't "edit out" human nature without producing either a passive animal or a monotonous robot. Do you really think, if this results in the success you envision, that those rich and often less inclined toward human well-being will let it continue toward the masses? Why would they? You would simply usher in an age of Greek mythology and "gods" ("Any sufficiently advanced technology is indistinguishable from magic").
9 APRIL 2021
by The Francis Crick Institute
Researchers at the Francis Crick Institute have revealed that CRISPR-Cas9 genome editing can lead to unintended mutations at the targeted section of DNA in early human embryos. The work highlights the need for greater awareness of and further research into the effects of CRISPR-Cas9 genome editing, especially when used to edit human DNA in laboratory research.
https://www.crick.ac.uk/news/2021-04-09_researchers-call-for-greater-awareness-of-unintended-consequences-of-crispr-gene-editing
Dan Simmons has a pair of sci-fi books on just this topic. They imagine a transhumanist society on Earth, filled with plenty, as well as an even more evolved "posthuman" society of god-like humans recreating the Trojan War on Mars, with themselves as Greek gods. Kind of a techno-Iliad.
I don't know if I'd really recommend it. Hyperion, his sci-fi take on the Canterbury Tales, is a lot better. This pair has a ton of interesting ideas; it just doesn't come together, even with some 1,400 pages to do so.
---
Anyhow, thank you David for the thread. This is a very interesting topic. I don't have too much to add of my own. As a leader in local government I have to constantly balance murky utilitarian calculations with political feasibility. The transhumanist project has to work on a scale an order of magnitude greater.
I have my doubts. I'm just finishing up Will Durant's excellent history of classical Greece, and it's remarkable how similar their political problems are to those of modern Western democracies. It seems like there has been nothing new under the sun despite all the "progress."
It might just be that the idea offends me on a psychological level. The transhumanist project is like a reverse Tower of Babel, bringing heaven down to Earth. It reminds me of Dostoevsky's Grand Inquisitor or the Buddha's life before seeing death.
It seems to me that you are an agent-neutral utilitarian, because you seem to believe that there are only instrumental reasons to prioritize the welfare of others. An agent-relative utilitarian thinks that the extent to which a given episode of suffering is intrinsically bad depends on whom the suffering belongs to. I consider myself a pretty strong agent-relative utilitarian because I'm mostly an ethical egoist. I believe that you have much more reason to focus on minimizing your own suffering instead of minimizing the suffering of others (if we set instrumental considerations aside).
The argument that I use to support this position relates to how I feel that hedonism is strongly compatible with egoism. Hedonism seems to go hand in hand with egoism because suffering seems to be bad by virtue of how it feels. Because of this, it seems to matter a lot who exactly has to endure that episode of suffering. Suffering doesn't seem to be just some weird abstract concept that exists in some platonic realm, like, say, the concept of preference satisfaction. It seems to be an actual felt experience. Because of this, I don't think it makes sense to take some kind of weird third-person "perspective of the universe" when evaluating the extent to which an episode of suffering is bad. I think this sort of thing would actually take away the argumentative strength of the hedonistic viewpoint.

What makes hedonism so compelling to me is how real I feel the badness of my own suffering is, and how hard it is for me to be skeptical of the badness of my own suffering. By contrast, I don't even know if other people are capable of suffering, let alone that I have some kind of weird abstract reason to care about it. It seems to me that the reasons we might have to minimize the suffering of others are almost as speculative as the reasons given by objective-list accounts of welfare as to why something like knowledge can be intrinsically good. I'd be interested if you have a critique of this sort of reasoning, or a different argument that you think helps reject egoism. I can also clarify the argument more, but I'm trying to be brief.
I share your reservations about gung-ho enthusiasm for technology. But transhumanism is a recipe for deeper self-understanding. The only way to develop a scientific knowledge of consciousness is to adopt the experimental method. Alas, a post-Galilean science of mind faces immense obstacles: https://www.hedweb.com/quora/2015.html#psychedelics
Not least, it's hard to responsibly urge the use of psychedelics to explore different state-spaces of consciousness until our reward circuitry is upgraded to ensure invincible well-being.
Quoting Noble Dust
If I might quote Pascal,
“All men seek happiness. This is without exception. Whatever different means they employ, they all tend to this end. The cause of some going to war, and of others avoiding it, is the same desire in both, attended with different views. The will never takes the least step but to this object. This is the motive of every action of every man, even of those who hang themselves.”
Transhumanism can treat our endogenous opioid addiction by ensuring that gradients of lifelong bliss are genetically hardwired.
How do we validate "science" then? From what I understand, the principles of science are validated by empirical evidence; yet, according to the principles above, any such science must be made compatible with one's own theoretical independent world to be accepted as correct. The theoretical independent world within each of us provides the standard for judging what to accept or not accept within the realm of science.
So, when you say
Quoting David Pearce
how do you justify this, or even find principles to accept it as having a grain of truth? You have described principles which make each and every individual person completely separate, distinct and unique, "special". Now you claim that science tells you that you are not special, and this is the basis for your claim that sentient beings everywhere disvalue agony and despair.
Quoting David Pearce
I do not believe that science suggests you are not special. I think science suggests exactly what you argued already, that we're all distinct, "special", each one of us having one's own distinct theoretical independent world. And I think that this generalization, that we are all somehow "the same", is an unjustified philosophical claim. So I think you need something stronger than your own personal feelings, that agony and despair are disvaluable to you, to support your claim that they are disvaluable to everyone.
I think your appeal to "science" for such a principle is a little off track, because we really must consider what motivates human beings, namely intentions, and science doesn't yet seem to have a grasp on this. We can see, for example, that the agony and despair of others is valued in some cases, such as torture, and, in a more subtle but much more common sense, as a negotiating tactic. People apply pressure to others to get what they want. And if you believe that agony and despair ought not to be valued like this, we need to defer to some higher moral principles to establish as much. Where are we going to get these higher moral principles if we deny the reality of true knowledge concerning the common world, the external world which we must share with each other?
However, if we consider what people want, we can validate science and generalizations through "the necessities of life". Do you not think that we need some firm knowledge, some truths, concerning the external world which provides us with the necessities of life, in order to produce agreement on moral principles? Would you agree that it is the external world which provides us with the necessities of life? Isn't this what we all have in common, and shouldn't this be our starting point for moral philosophy: the necessities of life which we must take from the external world, rather than your own personal feelings about agony and despair?
Hey, I have another question: are there any aspects of the current Homo sapiens that you would identify as already transhuman?
Compare the introduction of pain-free surgery:
https://www.general-anaesthesia.com/
Surgical anaesthesia isn't risk-free. Surgeons and anaesthesiologists need to weigh risk-reward ratios. But we recognise that reverting to the pre-anaesthetic era would be unthinkable. Posthumans will presumably reckon the same about reverting to pain-ridden Darwinian life.
Consider again the SCN9A gene. Chronic pain disorders are a scourge. Hundreds of millions of people worldwide suffer terribly from chronic pain. Nonsense mutations of SCN9A abolish the ability to feel pain. Yet this short-cut to a pain-free world would be disastrous: we'd all need to lead a cotton-wool existence. However, dozens of other mutations of SCN9A confer either unusually high or unusually low pain thresholds. We can at least ensure our future children don’t suffer in the cruel manner of so many existing humans. Yes, there are lots of pitfalls. The relationship between pain-tolerance and risky behaviour needs in-depth exploration. So does the relationship between pain sensitivity and empathy; consider today's mirror-touch synaesthetes. However, I think the biggest potential risks of choosing alleles associated with elevated pain tolerance aren't so much the danger of directly causing suffering to the individual through botched gene therapy, but rather the unknown societal ramifications of creating a low-pain civilisation as a prelude to a no-pain civilisation. People behave and think differently if (like today's fortunate genetic outliers) they regard phenomenal pain as "just a useful signalling mechanism”. Exhaustive research will be vital.
Candidly, no. Until we edit our genomes, even self-avowed transhumanists will remain all too human. All existing humans run the same kind of egocentric world-simulation with essentially the same bodily form, sensory modes, hedonic range, core emotions, means of reproduction, personal relationships, life-cycle, maximum life-span, and default mode of ordinary waking consciousness as their ancestors on the African savannah. For sure, the differences between modern and archaic humans are more striking than the similarities; but I suspect we have more in common with earthworms than we will with our genetically rewritten posthuman successors.
A hundred-year moratorium on reckless genetic experimentation would be good; but antinatalism will always be a minority view. Instead, prospective parents should be encouraged to load the genetic dice in favour of healthy offspring by making responsible genetic choices. Base-editing is better than CRISPR-Cas9 for creating invincibly happy, healthy babies:
https://www.labiotech.eu/interview/base-editing-horizon-discovery/
I have a (maybe) straightforward question regarding the value assumptions in negative utilitarianism, or even utilitarianism in general, and the possible consequences thereof.
My intuition against utilitarianism always has been that pain, or even pain 'and' pleasure, is not the only thing that matters to us, and so reducing everything to that runs the risk of glossing over other things we value.
Would you say that is just factually incorrect, i.e. scientific research tells us that in fact everything is reducible to pain (or more expanded to pain/pleasure)?
And if it's not the case that everything is reducible to pain/pleasure, wouldn't genetic alteration solely for the purpose of abolishing pain run the risk of impoverishing us as human beings? Do we actually have an idea already of how pain and pleasure are interrelated (or not) with the rest of human emotions, in the sense that it would be possible in principle to remove pain and keep all the rest intact?
Thank you for your time, It's been an interesting thread already.
You say you're "mostly an ethical egoist". Do you accept the scientific world-picture? Modern civilisation is based on science. Science says that no here-and-nows are ontologically special. Yes, one can reject the scientific world-picture in favour of solipsism-of-the-here-and-now (cf. https://www.hedweb.com/quora/2015.html#idsolipsism). But if science is true, then solipsism is a false theory of the world. There’s no reason to base one’s theories of ethics and rationality on a false theory. Therefore, I believe that you suffer just like me. I favour the use of transhumanist technologies to end your suffering no less than mine. Granted, from my perspective your suffering is theoretical. Yet its inaccessibility doesn't make it any less real. Am I mistaken to act accordingly?
Thank you. Evolution via natural selection has encephalised our emotions, so we (dis)value many things beyond pain and pleasure under that description. If intentional objects were encephalised differently, then we would (dis)value them differently too. Indeed, our (dis)values could grotesquely be inverted – “grotesquely” by the lights of our common sense, at any rate.
What's resistant to inversion is the pain-pleasure axis itself. One simply can't value undergoing agony and despair, just as one can't disvalue experiencing sublime bliss. The pain-pleasure axis discloses the world's inbuilt metric of (dis)value.
However, it's not necessary to accept this analysis to recognise the value of phasing out the biology of involuntary suffering. Recalibrating your hedonic treadmill can in principle leave your values and preference architecture intact – unless one of your preferences is conserving your existing (lower) hedonic range and (lower) hedonic set-point. In practice, a biohappiness revolution is bound to revolutionise many human values – and other intentional objects of the Darwinian era. But bioconservatives who fear their values will necessarily be subverted by the abolition of suffering needn’t worry. Even radical elevation of your hedonic set-point doesn't have to subvert your core values any more than hedonic recalibration would change your football-team favourite. What would undermine your values is uniform bliss – unless you're a classical utilitarian for whom uniform orgasmic bliss is your ultimate cosmic goal. Life based on information-sensitive gradients of intelligent well-being will be different. What guises this well-being will take, I don't know.
Thank you for the response. I hope you don't mind a follow up question, because this last paragraph is something I don't quite fully get yet.
I understand that many of our emotions and values are a somewhat arbitrary result of evolution. And I don't really have a fundamental bioconservative objection to altering them, because indeed they could easily have been otherwise. What puzzles me is how you think we can go beyond our own biology and re-evaluate it for the purpose of genetic re-engineering. Since values are not ingrained in the fabric of the universe (or maybe I should say the part of the universe that is not biological), i.e. they are something we bring to the table, from what perspective are we re-evaluating them then? You seem to be saying there is something fundamental about pain and pleasure, because it is life's (or actually the world's?) inbuilt metric of value... It just isn't entirely clear to me why.
To make this question somewhat concrete: wouldn't it be expected, your and other philosophers' efforts notwithstanding, that in practice genetic re-engineering will be used as a tool for realising the values we have now? And by 'we' I more often than not mean political and economic leaders, who ultimately have the last say because they are the ones financing research. I don't want to sound alarmist, but can we really trust something with such far-reaching consequences as a toy in power and status games?
I agree that solipsism is likely to be false and I think it's more likely than not that you are capable of suffering. I was just bringing up the epistemic problem that we seem to have regarding knowing that other people are capable of suffering, and that this epistemic problem doesn't seem to exist for egoists. I don't think the possibility of many other people not being capable of experiencing suffering is too far-fetched though. It doesn't necessarily require you to accept solipsism. I think there are other ways to argue for that conclusion. For example, take the view held by some philosophers that we might be living in a simulation. There might be conscious minds outside of that simulation, but you couldn't do anything to reduce the suffering of those other conscious minds, so you might as well just reduce your own suffering. In addition, it's possible that most but not all people are p-zombies.
I also want to point out that the possibility of these metaphysical views being true isn't really my primary argument for egoism. Rather, they are secondary considerations which should still give someone a reason to believe we have additional reason to prioritize our own welfare in some minor way, just in case those views turn out to be true. For example, say I think there is roughly a 10% chance that those views are true. Wouldn't this mean that I have an additional reason to prioritize my own suffering 10% more?
Quoting David Pearce
I would say that it depends on what reasons you have for accepting hedonism. I accept hedonism primarily because of how robust that theory seems to be against various forms of value skepticism and value nihilism. Imagine that you were talking to a philosopher who says that he isn’t convinced that suffering is intrinsically bad. What would your response be in that situation? My response would probably be along the lines of asking that person to put his hand on a hot stove and see if he still thinks there’s nothing intrinsically bad about suffering. I think there is a catch with this sort of response though. This is because I’m appealing only to how he can’t help but think that his own suffering is bad. I can’t provide him with an example that involves someone else putting a hand on a hot stove because that would have no chance of eliminating his skepticism.
Of course, one might object that we shouldn't really be that skeptical about value claims and that we should lower the epistemic threshold for reasonably holding beliefs about ethics and value. But then I think an additional explanation would have to be given for why hedonism would still be the most plausible theory, as it is not completely clear to me why having the intrinsic aim of minimizing suffering in the whole world is more reasonable than another, more abstract and speculative intrinsic aim, like minimizing instances of preference frustration, for example. In addition, even if we lower the threshold for reasonable ethical belief, it still seems that we should care more about our own suffering just in case altruism turns out to be false. Altruism encapsulates egoism in the sense that pretty much every altruist agrees it is good to minimize your own suffering, whereas egoists usually think minimizing the suffering of others is just a complete waste of time.
Are you familiar with Rudolf Steiner? I have plenty of reservations about him, but would be curious if you have any thoughts. It's interesting to note that, despite his eccentricities, perhaps the main thing he put forth that still is working (arguably) well is biodynamics.
Quoting David Pearce
I'm admittedly a bit behind on transhumanist thought (so why am I posting here?) but the idea of being able to hardwire gradients of bliss smacks of hubris to me. I think of Owen Barfield's analogy of the scientific method in which he describes "engine knowledge" vs. "driver knowledge". A car mechanic understands how the engine works, and what needs to be fixed in order for the motor to work (scientist). A driver of the car understands why the engine needs to work properly: to take the driver from point A to point B. There are countless reasons why a driver might need to go from point A to point B. The driver is not a scientist, by the way; maybe an artist, or maybe a humble average Joe who just wants to provide for his family. Who is wiser? [can provide reference to this Barfield analogy, will just take a few minutes of digging].
In my mind, you have engine knowledge. I have driver knowledge.
It seems to me that you are indulging in a vision of paradise that probably serves the same purpose as the Christian paradise: to console, to bring solace. It's a form of escapism.
Isaiah 11:
6 The wolf will live with the lamb,
the leopard will lie down with the goat,
the calf and the lion and the yearling together;
and a little child will lead them.
7 The cow will feed with the bear,
their young will lie down together,
and the lion will eat straw like the ox.
8 The infant will play near the cobra’s den,
and the young child will put its hand into the viper’s nest.
9 They will neither harm nor destroy
on all my holy mountain,
for the earth will be filled with the knowledge of the Lord
as the waters cover the sea.
A plea to write good code so future sentience doesn't suffer isn't escapism; it's a policy proposal for implementing the World Health Organization's commitment to health. Future generations shouldn't have to undergo mental and physical pain.
The "peaceable kingdom" of Isaiah?
The reason for my quoting the Bible isn't religious leanings: I'm a secular scientific rationalist. Rather, bioconservatives are apt to find the idea of veganising the biosphere unsettling. If we want to make an effective case for compassionate stewardship, then it's good to stress that the vision of a cruelty-free world is venerable. Quotes from Gautama Buddha can be serviceable too. Only the implementation details (gene-editing, synthetic gene drives, cross-species fertility-regulation, etc.) are a modern innovation.
Exactly. Yahweh has given way to Science as the Supreme Being in many modern minds, and therefore science is now required to deliver the same stuff that Yahweh was previously in charge of, including paradise. The Kingdom of God has been replaced in our imagination by the Kingdom of Science.
Creating new life and suffering via the untested genetic experiments of sexual reproduction feels natural. Creating life engineered to be happy – and repairing the victims of previous genetic experiments – invites charges of “hubris”. Antinatalists might say that bringing any new sentient beings into this god-forsaken world is hubristic. But if we accept that the future belongs to life lovers, then who shows greater humility:
(1) prospective parents who trust that quasi-random genetic shuffling will produce a benign outcome?
(2) responsible (trans)humans who ensure their children are designed to be healthy and happy?
The preferences of predator and prey are irreconcilable. So are trillions of preferences of social primates. The challenge isn't technological, but logical. Moreover, even if vastly more preferences could be satisfied, hedonic adaptation would ensure most sentient beings aren't durably happier. Hence my scepticism about "preference utilitarianism", a curious oxymoron. Evolution designed Darwinian malware to be unsatisfied. By contrast, using biotech to eradicate the molecular signature of experience below hedonic zero also eradicates subjectively disvaluable states. In a world animated entirely by information-sensitive gradients of well-being, there will presumably still be unfulfilled preferences. There could still, optionally, be social, economic and political competition – even hyper-competition – though one may hope full-spectrum superintelligence will deliver superhuman cooperative problem-solving prowess rather than primate status-seeking. Either way, a transhuman world without the biology of subjective disvalue would empirically be a better world for all sentience. It’s unfortunate that the goal of ending suffering is even controversial.
IMO, asking why agony is disvaluable is like asking why phenomenal redness is colourful. Such properties are mind-dependent and thus (barring dualism) objective, spatio-temporally located features of the natural world:
https://www.hedweb.com/quora/2015.html#metaethics
Evolution doesn’t explain such properties, as distinct from when and where they are instantiated. Phenomenal redness and (God forbid) agony could be created from scratch in a mini-brain, i.e. they are intrinsic properties of some configurations of matter and energy. It would be nice to know why the solutions to the world's master equation have the textures they do; but in the absence of a notional cosmic Rosetta stone to "read off" their values, it's a mystery.
Quoting ChatteringMonkey
I've no short, easy answer here. But fast-forward to a time later this century when approximate hedonic range, hedonic set-points and pain-sensitivity can be genetically selected – both for prospective babies and increasingly (via autosomal gene therapy) for existing humans and nonhuman animals. Anti-aging interventions and intelligence-amplification will almost certainly be available too, but let's focus on hedonic tone and the pleasure-pain axis. What genetic dial-settings will prospective parents want for their children? What genetic dial-settings and gene-expression profiles will they want for themselves? Sure, state authorities are going to take an interest too. Yet I think the usual sci-fi worries of, e.g., some power-crazed despot breeding a caste of fearless super-warriors (etc.), are misplaced. Like you, I have limited faith in the benevolence of the super-rich. But we shouldn't neglect the role of displays of competitive male altruism. Also, one of the blessings of information-based technologies such as gene-editing is that once the knowledge is acquired, their use won't be cost-limited for long. Anyhow, I'm starting to sing a happy tune, whereas there are myriad ways things could go wrong. I worry I'm sounding like a propagandist rather than an advocate. But I think the basic point stands. Phasing out hedonically sub-zero experience is going to become technically feasible and eventually technically trivial. Humans may often be morally apathetic, but they aren't normally malicious. If you had God-like powers, how much involuntary suffering would you conserve in the world? Tomorrow's policy makers will have to grapple with this kind of question.
I certainly won't deny that pain is real for me and other people, and I wouldn't even deny that pain is inherently disvaluable. Where I would want to push back is on the claim that it is the only thing we should be concerned with. It seems more complicated to me, but maybe this is more a consequence of my lack of knowledge on the subject, I don't know.
Here's an example you've probably heard many times before: sports. We seem to deliberately seek out and endure pain to attain some other values, such as fitness, winning or looking good... I wouldn't say we value the pain we endure during sports, but it does seem to be the case that sometimes we value other things more than we disvalue pain. So how would you reconcile this kind of behavior with pain/pleasure being the inbuilt metric of (dis)value?
Do you make a distinction between (physical) pain and suffering? To me there seems to be something different going on with suffering, something beyond the mere experience of pain. There also seems to be a mental component, where we suffer because of anticipating bad things, because we project ourselves into the future... This would also be the reason why I would draw a distinction between humans and most other animals, because they seem to lack the ability to project further into the future. To be clear, by making this distinction I don't want to imply that we shouldn't treat animals vastly better than we do now, just that I think there is a difference between 'experience of pain in the moment' and 'suffering', which possibly could have some ethical ramifications.
Quoting David Pearce
I actually agree with most of this. Ideally I wouldn't want these kinds of powers because they seem way beyond the responsibilities a chattering monkey can handle. But if it can be done, it probably will be done... and given the state and prospects of science at this moment, it seems likely it will be done at some point in the future, whether we want it or not. And since we presumably will have that power, I suppose it's hard to deny the responsibility that comes with it. So philosophers might as well try and figure out how best to go about it when it does happen; I can certainly support that effort. What I would say is that I would want to understand a whole lot better how it all exactly operates before making definite claims about the direction of our species and the biosphere. But I'm not the one doing the research, so I can't really judge how well we understand it already.
Is one or two people being subjected to an unnatural amount of suffering (at any point in the future) worth it to provide bliss for the masses? Shouldn't someone who would walk away from Omelas walk away from this technology?
I don’t see how that’s a problem for PU though. I think they could easily respond to this concern by simply stating that we regrettably have to sacrifice the preferences of one group to fulfill the preferences of a more important group in these sorts of dilemmas. I also think this sort of thing applies to hedonism also. In this dilemma, it seems we would also have to choose between prioritizing reducing the suffering of the predator or prioritizing reducing the suffering of the prey.
Quoting David Pearce
Well, I'm curious to know what reasons you think we have to care specifically about the welfare of sentient creatures and not other kinds of entities. There are plenty of philosophers who have claimed that various non-sentient entities are legitimate intrinsic value bearers as well. For example, I've heard claims that there's intrinsic value in the survival of all forms of life, including non-sentient life like plants and fungi. I've also heard claims that AI programs could have certain achievements which are valuable for their own sake, like the achievements of a chess-playing neural network that taught itself how to play by playing against itself billions of times and can now beat every human player in the world. I'm just wondering what makes you think sentient life is more special than those other sorts of entities. The main reason I can think of for holding that sentient beings are the only type of value bearers seems to appeal to the truthfulness of egoism.
Yes, anyone who understands suffering should "walk away from Omelas". If the world had an OFF switch, then I'd initiate a vacuum phase-transition without a moment's hesitation. But there is no OFF switch. It's fantasy. Its discussion alienates potential allies. Other sorts of End-of-the-World scenarios are fantasy too, as far as I can tell. For instance, an AI paperclipper (cf. https://www.hedweb.com/quora/2015.html#dpautistic) would align with negative utilitarian (NU) values; but paperclippers are impracticable too. One of my reasons for floating the term "high-tech Jainism" was to debunk the idea that negative utilitarians are plotting to end life rather than improve it. For evolutionary reasons, even many depressives are scared of death and dying. As a transhumanist, I hope we can overcome the biology of aging. So I advocate opt-out cryonics and opt-in cryothanasia to defang death and bereavement for folk who fear they won't make the transition to indefinite youthful lifespans. This policy proposal doesn’t sound very NU – why conserve Darwinian malware? – but ending aging / cryonics actually dovetails with a practical NU ethic.
S(uffering)-risks? The s-risk I worry about is the possibility that a technical understanding of suffering and its prevention could theoretically be abused instead to create hyperpain. So should modern medicine not try to understand pain, depression and mental illness for fear the knowledge could be abused to create something worse? After all, human depravity has few limits. It's an unsettling thought. Mercifully, I can't think of any reason why anyone, anywhere, would use their knowledge of medical genetics to engineer a hyper-depressed, hyperpain-racked human. By contrast, the contemporary biomedical creation of "animal models" of depression is frightful.
Is it conceivable that (trans)humans could phase out the biology of suffering and then bring it back? Well, strictly, yes: some philosophers have seriously proposed that an advanced civilisation that has transcended the horrors of Darwinian life might computationally re-create that life in the guise of running an "ancestor simulation". Auschwitz 2.0? Here, at least, I'm more relaxed. I don't think ancestor simulations are technically feasible – classical computers can't solve the binding problem. I also discount the possibility that superintelligences living in posthuman heaven will create pockets of hell anywhere in our forward light-cone. Creating hell – or even another Darwinian purgatory – would be fundamentally irrational.
Should a proponent of suffering-focused ethics spend so much time exploring transhumanist futures such as quasi-immortal life animated by gradients of superhuman bliss? Dreaming up blueprints for paradise engineering can certainly feel morally frivolous. However, most people just switch off if one dwells entirely on the awfulness of Darwinian life. My forthcoming book is entitled "The Biohappiness Revolution" – essentially an update on "The Hedonistic Imperative". Negative utilitarianism, and maybe even the abolitionist project, is unsaleable under the latter name. Branding matters.
Surely if something like the human brain and its resultant consciousness can come into being "by chance" (via evolution by natural selection over millions of years), then shouldn't it be possible for intelligent life (with the help of advanced computers) to artificially create something similar? And given that things like dreams where there's little to no external stimuli involved are possible then shouldn't artificially stimulated experiences also be possible?
Artificial mini-brains, and maybe one day maxi-minds, are technically feasible. What's not possible, on pain of spooky "strong" emergence, is the creation of phenomenally-bound subjects of experience in classical digital computers:
https://www.binding-problem.com/
https://www.hedweb.com/hedethic/binding-interview.html
Even if we prioritise, preference utilitarianism doesn't work. Well-nourished tigers breed more tigers. An exploding tiger population then has more frustrated preferences. The swollen tiger population starves in consequence of the dwindling numbers of their prey. Protecting herbivores from predation doesn't work either – at least, not on its own. As well as frustrating the preferences of starving predators, a population explosion of herbivores would lead to mass starvation and hence even more frustrated preferences. Insofar as humans want ethically to conserve recognisable approximations of today's "charismatic mega-fauna", full-blown compassionate stewardship of Nature will be needed: reprogramming predators, cross-species fertility-regulation, gene drives, robotic "AI nannies" – the lot. From a utilitarian perspective (cf. https://www.utilitarianism.com), piecemeal interventions to solve the problem of wild animal suffering are hopeless.
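The boom-and-bust logic here can be illustrated with a toy discrete-time predator-prey simulation – a minimal sketch with invented parameters, not a model of real tiger ecology:

```python
# Toy discrete-time predator-prey (Lotka-Volterra-style) sketch.
# All parameters here are invented purely for illustration.

def simulate(prey, predators, steps,
             prey_growth=0.1,       # prey reproduction rate per step
             predation=0.002,       # prey lost per prey-predator encounter
             predator_gain=0.0001,  # predator births per prey-predator encounter
             predator_death=0.05):  # predator starvation/death rate per step
    """Return the (prey, predators) population history over `steps` steps."""
    history = [(prey, predators)]
    for _ in range(steps):
        eaten = predation * prey * predators
        prey = max(prey * (1 + prey_growth) - eaten, 0.0)
        predators = max(predators * (1 - predator_death)
                        + predator_gain * prey * predators, 0.0)
        history.append((prey, predators))
    return history

hist = simulate(prey=1000.0, predators=20.0, steps=300)
peak_predators = max(q for _, q in hist)   # well-fed predators overshoot...
trough_prey = min(p for p, _ in hist)      # ...and drive the prey population down
```

In runs like this, the well-fed predator population climbs past its starting level, the prey collapse, and the predators then starve – the dynamic behind the frustrated-preferences point above.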
Quoting TheHedoMinimalist
Mattering is a function of the pleasure-pain axis. Empirically, mattering is built into the very nature of the first-person experience of agony and ecstasy. By contrast, configurations of matter and energy that are not subjects of experience have no interests. Subjectively, nothing matters to them. A sentient being may treat them as significant, but their importance is only derivative.
Perhaps not, but I don't think that the use of classical digital computers is central to any ancestor simulation hypothesis. Don't such theories work with artificial mini-brains?
One may consider the possibility that one could be a mind-brain in a neurosurgeon’s vat in basement reality rather than a mind-brain in a skull as one naively supposes. However, has one any grounds for believing that this scenario is more likely? Either way, this isn’t the Simulation Hypothesis as envisaged in Nick Bostrom’s original Simulation Argument (cf. https://www.simulation-argument.com/).
What gives the Simulation Argument its bite is that, pre-reflectively at any rate, running a bunch of ancestor-simulations is the kind of cool thing an advanced civilisation might do. And if you buy the notion of digital sentience, and the idea that what you're now experiencing is empirically indistinguishable from what your namesake in multiple digital ancestor-simulations is experiencing, then statistically you are more likely to be in one of the simulations than in the original.
Elon Musk puts the likelihood that we're living in primordial basement reality at billions-to-one against. But last time I asked, Nick assigned the Simulation Hypothesis a credence of no more than 20%.
I think reality has only one level and we’re patterns in it.
I think the question to ask is why we nominally (dis)value many intentional objects that are seemingly unrelated to the pleasure-pain axis. "Winning" and demonstrating one is a dominant alpha male who can stoically endure great pain and triumph in competitive sports promises wider reproductive opportunities than being a milksop. And for evolutionary reasons, mating, for most males, is typically highly rewarding. We see the same in the rest of Nature too. Recall the extraordinary lengths some nonhuman animals will go to in order to breed. What's more, if (contrary to what I've argued) there were multiple axes of (dis)value rather than a sovereign pain-pleasure axis, then there would need to be some kind of meta-axis of (dis)value as a metric to regulate trade-offs.
You may or may not find this analysis persuasive; but critically, you don't need to be a psychological hedonist, nor indeed any kind of utilitarian, to appreciate it will be good to use biotech to end suffering.
Yes, Nietzsche for instance chose 'health/life-affirmation' as his preferred meta-axis to re-evaluate values, you seem to favor pain/pleasure... People seem to disagree on what is more important. But maybe there's a way, informed among other things by contemporary science, to get to more of an objective measure, I don't know... which is why I asked.
Quoting David Pearce
This is certainly a fair point, but I'd add that while it would be good to end (or at least reduce) suffering, it needn't be restricted to that. If we are going to use biotech to improve humanity, we might as well look to improve it on multiple axes... a multivalent approach.
Some of the soul-chilling things Nietzsche said make him sound as though he had an inverted pain-pleasure axis: https://www.nietzsche.com
In reality, Nietzsche was in thrall to the axis of (dis)value no less than the most ardent hedonist.
In any event, transhumanists aspire to a civilisation of superintelligence and superlongevity as well as superhappiness – and many more "supers" besides.
I am not without idiosyncrasies. But short of radical scepticism, the claim that agony and despair are disvaluable by their very nature is compelling. If you have any doubt, put your hand in a flame. Animals with a pleasure-pain axis have the same strongly evolutionarily conserved homologous genes, neurological pathways and neurotransmitter systems for pleasure- and pain-processing, and the same behavioural responses to noxious stimuli. Advanced technology in the form of reversible thalamic bridges promises to make the conjecture experimentally falsifiable too (cf. https://www.nytimes.com/2011/05/29/magazine/could-conjoined-twins-share-a-mind.html). Reversible thalamic bridges should also allow partial "mind-melding" between individuals of different species.
I remember you saying something to the effect that you dread the thought of the existence of extra-terrestrial life.
What probabilities would you put on the existence of alien life? What about the chances of meeting or communicating with them?
My best guess is that what Eric Drexler called the "thermodynamic miracle" of life's genesis means that the percentage of life-supporting Hubble volumes where primordial life emerged more than once is extremely low. If so, then the principle of mediocrity suggests that we're alone. There will be no scope for cosmic rescue missions – even if we optimistically believe that Earth-originating life would be more likely to mitigate suffering elsewhere than to spread it.
If this conjecture turns out to be correct, then our NU ethical duties will have been discharged when we have phased out the biology of suffering on Earth and taken steps to ensure it doesn't recur within our cosmological horizon.
Naturally, I could be mistaken.
For a radically different perspective, see:
https://www.overcomingbias.com/tag/aliens
Well I don't think the point of those quotes was to glorify pain in itself, but rather the function it plays in human biology, in attaining things he valued more. I think you're actually saying something similar when you talk about preserving information-sensitivity and nociception if we were to do away with pain. Of course at the time Nietzsche didn't have the option of separating the two with biotech.
I don't agree with this. As much as you apprehend it as compelling, it is not the truth. The point I made is that agony and despair are often seen as valuable when they are inflicted upon others. The extreme case is torture, but the common practice is the more subtle application of agony and despair in the pressure tactics of negotiating. So you cannot remove the value from these so easily. And if you look closely, you'll see that agony and despair play an integral role in almost all human relationships. Without these feelings we'd be emotionless robots.
Now I really think it is a bad idea to turn human beings into emotionless robots no matter how strong your own personal opinion on this issue is. Perhaps you might compromise, and look at some types of agony and despair as inherently bad, or some intense forms of these as inherently bad, but I hope you don't really think that you can throw a blanket over them all like this. That is what Chattering Monkey has been trying to tell you. Some forms of pain are necessary to build strength. And not only is exposure to certain types of pain necessary to build physical strength, exposure to different types of agony and despair are necessary to build strength of character.
You could look at Aristotle's ethics for an example. Virtue does not consist of negating the bad for the sake of its opposite, it is to be found in the mean between the two opposing extremes. That is what we call moderation. We really cannot approach ethics with the attitude that such and such feelings are bad, let's annihilate all these bad feelings and we'll be left only with good feelings. So you might instead propose a way to take the edge off our feelings, eliminate the extremes. And I think the medical profession already works toward this end with medications. But even this is not completely accepted in our society because it dulls the emotions.
That out of the way, I'd like to pick your brain regarding a certain point of view that's encapsulated in the following quote:
[quote=Unknown]Any sufficiently advanced technology is indistinguishable from nature[/quote]
As far as this particular post is concerned, who said it matters less than what exactly is being said.
You mention three supers: 1. Superintelligence, 2. Superlongevity and 3. Superhappiness
Those are, even to detractors of transhumanism, very noble thoughts, and if we're to fault them we can only do so by attacking not the main ideas themselves but the secondary support structures that hold up the transhumanist philosophy.
Coming to the issue I want to bring to your attention, one question: "where does evolution figure in transhumanist philosophy?"
I ask because, if evolution is true and it's been in play for at least a few billion years, shouldn't the status quo for intelligence, longevity, and happiness be optimum/maximum for the current "environment"? In other words, in terms of the trio of intelligence, longevity, and happiness, we have the best deal nature has to offer. We shouldn't, in that case, attempt to achieve superintelligence, superlongevity, and superhappiness. The plan to do so might backfire, for the current state of our world can't support such phenomena – or, more to the point, they're failed evolutionary experiments.
To give you an idea of what I mean:
1. Superintelligence: Atlantis. FAIL!
2. Superlongevity: Adaptation to changes in environment currently only possible through sexual reproduction which, as you know, requires space for new offspring which is only possible if the old die. FAIL!
3. Superhappiness: Drug addiction. FAIL!
In very cheeky words, if nature were asked to opine on transhumanism, she would say, "Been there, done that, didn't work out"
A penny for your thoughts.
Well, I don't think it makes too much sense to criticize a theory of normative ethics by claiming that it doesn't work. It seems to me that the point of normative ethical theories like PU is to figure out what ethical goals we should pursue rather than how we should accomplish those goals. If a particular PU theory doesn't have anything to say about wild animal suffering, then the proponent of that theory probably just doesn't think wild animal suffering is important to resolve. I think we would need to make an argument in favor of focusing our time and energy on wild animal suffering. Otherwise, how could we know that ending the suffering of wild animals is a worthy pursuit and a wise use of our time?
Come to think of it, you may have been talking about the possibility of an infinite multiverse, where suffering is, well, infinite? Is this something you are concerned about?
Thank you for the kind words. Evolution via natural selection is a monstrous engine for the creation of pain and suffering. But Darwinian life contains the seeds of its own destruction. Yes, our minds are well-adapted to the task of maximising the inclusive fitness of our genes in the environment of evolutionary adaptedness (EEA). So humans are throwaway vehicles for self-replicating DNA. But sentient beings are poised to seize control of their own destiny. Recall that traditional natural selection is "blind". It's underpinned by effectively random genetic mutations and the quasi-random process of meiotic shuffling. Sexual reproduction is a cruel genetic lottery. However, intelligent agents are shortly going to rewrite their own source code in anticipation of the likely behavioural and psychological effects of their choices. As the reproductive revolution unfolds, genes and allelic combinations that promote superintelligence, superlongevity and superhappiness will be strongly selected for. Gene and allelic combinations associated with low intelligence, reduced longevity and low mood will be selected against.
My work has focused on the problem of suffering and the challenge of coding genetically hardwired superhappiness. But consider the nature of human intelligence. Are human minds really “the best deal Nature has to offer”, as you put it? Yes, evolution via natural selection has thrown up an extraordinary adaptation in animals with the capacity for rapid self-propelled motion, namely (1) egocentric world-simulations that track fitness-relevant patterns in their local environment. Phenomenal binding in the guise of virtual world-making (“perception”) is exceedingly adaptive (cf. https://www.hedweb.com/quora/2015.html#lifeillus). Neuroscientists don't understand how a pack of supposedly classical neurons can do it. Evolution via natural selection has thrown up another extraordinary adaptation in one species of animal, namely (2) the virtual machine of serial logico-linguistic thought. Neuroscientists don't understand how serial thinking is physically possible either. However, what evolution via natural selection hasn't done – because it would entail crossing massive "fitness gaps” – is evolve animals supporting (3) programmable digital computers inside their skulls. Slow, serial, logico-linguistic human thinking can't compete with a digital computer executing billions of instructions per second. So digital computers increasingly outperform humans in countless cognitive domains. The list gets longer by the day. Yet this difference in architecture doesn't mean a divorce between (super)intelligence and consciousness is inevitable. Recursively self-improving transhumans won't just progressively rewrite their own source code. Transhumans will also be endowed with the mature successors of Neuralink; essentially Net-enabled "narrow" superintelligence on a neurochip (cf. https://theconversation.com/neuralinks-monkey-can-play-pong-with-its-mind-imagine-what-humans-could-do-with-the-same-technology-158787). 
I’m sceptical about a full-blown Kurzweilian fusion of humans with our machines, but some degree of "cyborgisation" is inevitable. Get ready for a biointelligence explosion.
Archaic humans are cognitively ill-equipped in other ways too. Not least, minds evolved under pressure of natural selection lack the conceptual schemes to explore billions of alien state-spaces of consciousness:
https://www.hedweb.com/quora/2015.html#psychedelically
I could go on; but in short: expect a major evolutionary transition in the development of life.
The possibility that we live in a multiverse scares and depresses me:
https://www.abolitionist.com/multiverse.html
I'm not sure the notion of physically realised infinity is intelligible. But even if Hilbert space is finite, it's still unimaginably big – albeit infinitesimally small compared to a notional infinite multiverse.
Biotech informed by negative or classical utilitarianism can get rid of disvaluable experience altogether over the next few centuries. Preference utilitarianism plus biotech might do so too if enough people were to favour a biological-genetic strategy for ending suffering: I don’t know. Either way, any theory of (dis)value or ethics that neglects the interests of nonhuman animals is arbitrarily anthropocentric. Nonhuman animals are akin to small children. They deserve to be cared for accordingly. The world needs an anti-speciesist revolution:
https://www.antispeciesism.com/
I read that as directed evolution, by which we can engineer our genes; the difference between it and natural evolution is that the former is planned with careful deliberation while the latter is, as you know, based on random mutation. The question that follows naturally, though, is this: what if a brainstorming session of the world's leading minds came to the conclusion that the best game plan/strategy is precisely what we thought we could improve upon, i.e. that random mutation is the way "...to seize control of their (our) destiny"? You may ignore this point if you wish, but I'd be grateful and delighted to hear your response.
Quoting David Pearce
I have to admit transhumanist ideals are legit goals worth pursuing, and I endorse the whole enterprise with great enthusiasm. Good ideas are so hard to come by these days, and I deeply admire people such as yourself who've taken the world's problems this seriously and come up with novel and bold solutions. I hope it works out, for all of our sakes. The clock is ticking, I believe.
Thank you. Have a good day.
One of the most valuable skills one can acquire in life is working out who are the experts in any field, then (critically) deferring to their expertise. But who are the world's leading minds in the nascent discipline of futurology? In bioethics? We hold the designers and programmers of inorganic robots (self-driving cars etc) accountable for any harm they cause. Sloppy code that causes injury to others can't be excused with a plea that the bugs might one day be useful. By contrast, humans feel entitled to conduct as many genetic experiments involving sentient organic robots as they like, regardless of the toll of suffering (cf. https://www.hedweb.com/quora/2015.html#agreeantinatal). Anyhow, to answer your question: if the world’s best minds declared that we should conserve the biological-genetic status quo, then I think they'd be mistaken. If we don’t reprogram the biosphere, then unimaginable pain and suffering still lie ahead. The horrors of Darwinian life would continue indefinitely. Transhumanists believe that intelligent moral agents can do better.
On point. :up:
Quoting David Pearce
If I may say so, in my humble opinion, you clearly have your feet on the ground, and though your ideas are somewhat science-fictiony (obvious, but it bears mentioning), they're meant to tackle problems that are real and urgent.
I've been thinking about suffering and its counterpart, happiness, for the past few months, though I never got around to giving it more than a couple of hours of brain-time. Anyway, what I feel is that happiness, or more accurately pleasure, appears to be highly addictive. We all know that among the most powerful habit-forming drugs in the world are opioids, and isn't it curious that endorphins – pleasure biochemicals – resemble opioids in composition and structure?
If you were to allow me to go out on a limb, take a slight detour from the main issue, and join me in discussing conspiracy theories, I'd say the existence of plants with chemicals that produce happiness in humans is rather suspicious, don't you agree? Smells fishy to me. Of course this may come across as infantile to you, but could plants be manipulating us in ways we're oblivious to? This would be a setback for transhumanism, wouldn't it? After all, hedonism wouldn't then be about us and our well-being but could very well be an elaborate deception "masterminded" by plants. There seems to be no evidence for this, though...but one can never be too sure.
Aside from the crazy idea I offered in the preceding paragraph for your amusement, all I wish to convey is how there's a vanishingly thin line between pleasure and an extremely addictive drug. This should, at the very least, prompt some caution about hedonism. I'm not saying that I'm not hedonistic myself. Who isn't? But I don't want to be a pleasure or happiness junkie any more than I want to be addicted to heroin or opium. Perhaps I'm talking out of my hat at this point, but I'm curious to hear your thoughts on this rather outlandish theory of mine – if you have the time, and if you find it deserves a response, of course.
Quoting David Pearce
I'm in full agreement with you. For instance, even if it turns out that random mutation is the best solution to bring about superintelligence, superlongevity, and superhappiness, "intelligent moral agents" can tweak the processes involved - speed up/slow down the mechanism, add/delete DNA segments, etc. - in order to ensure the best outcomes. If another better technique comes to light, even better.
I'm very pro-science, but what you're suggesting gives me the creeps – and plays right into accusations that science is arrogant and amoral. I can make a rational argument for the benefits of science, starting with that which is most fundamentally necessary to survival, but where is the imperative here? Eliminating an inherited disease from the germ line I can understand, but where's the proof that depression doesn't serve a useful purpose? And if we begin to design babies, how will we know when to stop? There are questions here that cannot be answered by making fun of the Amish – questions one might look to answer, given that this technology is right around the corner.
If we take a gene's-eye-view, then a predisposition to depression frequently did serve a useful purpose in the environment of evolutionary adaptedness. See the literature on the Rank Theory of depression:
https://www.sciencedirect.com/science/article/abs/pii/S0165032718310280?via%3Dihub
But this is no reason to conserve the biology of low mood. Depression is a vile disorder.
Quoting David Pearce
Maybe not an actually existing infinity, which would be subject to paradoxes, but a potential infinity is possible. Universes could exist or spawn forevermore?
I actually have Graham Oppy's book Philosophical Perspectives on Infinity but I haven't got around to starting it yet.
Cosmology is in flux. We understand enough, I think, to sketch out how experience below hedonic zero could be prevented in our forward light-cone. At times I despair of a political blueprint for the abolitionist project, but technically it's feasible. But even if we're alone in our Hubble volume, does suffering exist elsewhere which rational agents are impotent to do anything about? I worry about such things, but it's not fruitful. As soon as intelligent agents are absolutely certain that our ethical duties have been discharged, I think the very existence of suffering is best forgotten.
I think there are versions of PU that are compatible with the idea of transhumanism and changing the preferences of sentient beings. For example, some proponents of PU might argue that the best way to minimize preference frustrations or maximize preference satisfactions is by altering the preferences of entities that have preferences so that those preferences become more attainable and the satisfaction of those beings would thus be more sustainable. I don’t think that PU is inherently against the idea of altering preferences with technology. In fact, I have actually argued in the past that those who subscribe to a preference satisfaction theory of welfare should be willing to plug themselves into the experience machine and then use that machine to brainwash themselves into having purely hedonistic preferences that the EM could easily satisfy. This way they would be guaranteed to have a perfect preference satisfaction to preference frustration ratio.
Quoting David Pearce
I don’t think that’s true. I think that you can neglect the interests of nonhuman animals and avoid being anthropocentric if you also equally neglect the interests of human beings that are as intelligent and emotionally complex as typical non-human animals. I don’t think it’s even repugnant to most people that we should mostly neglect the interests of not only animals but all humans as well. After all, it seems that almost nobody is willing to donate a significant amount of money to help children in their own community much less wild animals. So, I’m not seeing how your comparison between animals and small children would even be that compelling when it comes to persuading people that we should care about wild animal suffering. It’s even less compelling to me because I happen to really dislike children. You probably have a better chance convincing me to care about wild animals lol.
With all due respect, you have no understanding of what it means to be human. You aren't using any logical arguments in defense of your position, so I won't use any either.
Maybe. But if so, would you argue that the World Health Organization should scrap its founding constitution ("health is a state of complete physical, mental and social wellbeing") because it too shows no understanding of what it means to be human? After all, nobody in history has yet enjoyed health as so defined.
If a small child were drowning, you would wade into a shallow pond to rescue the child – despite your professed dislike of small children, and your weaker preference not to get your clothes wet? I'm guessing you would do the same if the drowning creature were a dog or pig. Humanity will soon (by which I mean within a century or two) be able to help all nonhuman sentience with an equivalent level of personal inconvenience to the average citizen, maybe less. Technology massively amplifies the effects of even tiny amounts of benevolence (or malevolence).
An ethic of negative or classical utilitarianism mandates compassionate stewardship of the living world. This does not mean that convinced negative or classical utilitarians should urge raising taxes to pay for it. The intelligent utilitarian looks for effective policy options that are politically saleable. I never thought the problem of wild animal suffering would be seriously discussed in my lifetime, but I was wrong: https://en.wikipedia.org/wiki/Wild_animal_suffering. If asked, a great many people are relaxed about the prospect of less suffering in Nature so long as suffering-reduction doesn’t cause them any personal inconvenience. Hence the case for technical fixes.
But you are addicted to opioids. Everyone is hooked:
https://sites.lsa.umich.edu/berridge-lab/research-overview/neuroscience-of-linking-and-wanting/.
Would-be parents might do well to reflect on how breeding creates new endogenous opioid addicts. For evolutionary reasons, humans are mostly blind to the horror of what they are doing:
https://www.hedweb.com/quora/2015.html#agreeantinatal
Addiction corrupts our judgement. It's treatable, but incurable. Transhumanism offers a potential escape-route.
So, why base your philosophy, transhumanism, on what you admit is an addiction? Wouldn't that be a big mistake? Transhumanism, specifically its hedonic component, would in essence be a drug cartel catering to the entire global population, and that too from cradle to grave. I raise this issue because addiction is a positive feedback loop, and if you convince people that pleasure's the be-all and end-all of life, I feel things might snowball out of control. I'm a hedonist, by the way, but that's exactly what worries me – a cocaine addict simply can't see beyond cocaine.
To the best of my knowledge, there is no alternative. The pleasure-pain axis ensnares us all. Genetically phasing out experience below hedonic zero can make the addiction harmless. The future belongs to opioid-addicted life-lovers, not "hard" antinatalists. Amplifying endogenous opioid function will be vital. Whereas taking exogenous opioids typically subverts human values, raising hedonic range and hedonic set-points can potentially sustain and enrich civilisation.
I've just been looking at a list of genetic disorders, and "I'm not as happy as I might be" wasn't one of them.
https://en.wikipedia.org/wiki/List_of_genetic_disorders
There's a strong moral case for eliminating disorders like these. But I don't see a strong moral case for, and I do see great potential hazard in, genetically jacking up the brain's natural opiate production. People act based on how they feel – and while that isn't always super, we've survived thus far.
Yet depression is a devastating disorder. It has a high genetic loading. Depression is at least as damaging to quality of life as the other genetic disorders listed. According to the WHO, around 300 million people worldwide are clinically depressed. "Sub-clinical” depression afflicts hundreds of millions more. If humanity conserves its existing genome, then depressive disorders will persist indefinitely. The toll of suffering will be unimaginable. Mastery of our genetic source code promises an end to one of the greatest scourges in the whole world. By all means, urge exhaustive research and risk-benefit analysis. But I know of no good ethical reason to conserve such an evil.
I think there is a perfectly egoistical explanation for why I would rescue that child. If I rescue the child then it’s quite likely that I would be financially rewarded, or at least given recognition that would cause me to experience pleasure. By contrast, if I allow the child to die then someone might have that on camera and I could be socially shamed and stigmatized for it. In addition, I think we often feel shame when violating social expectations even if we disagree with those expectations. I would feel ashamed about letting the child die, and that would cause me to suffer. I don’t think that my feeling ashamed about it, and being willing to act to prevent that shame from occurring, necessarily means that I believe on an intellectual level that I ought to have rescued the child if there were no social expectation to do so.
It also seems to me that altruistic negative utilitarianism actually implies that it might be better to allow the child to die. After all, that would alleviate the entire lifetime of suffering that the child would have to experience if he goes on to live (assuming there isn’t an afterlife). I’m not sure if that would make up for the suffering caused to the parents of that child or the suffering caused by the drowning. Of course, you would also have to consider the possibility of the child surviving and becoming disabled if you don’t rescue the child and someone else does at a later time. But it’s also possible that rescuing the child condemns him to continue living as a disabled person rather than having all his suffering alleviated by death. Though it could also be argued that, because of hedonic adaptation, being disabled doesn’t actually contribute that much suffering to your life. I think it’s an open question whether or not a negative utilitarian should rescue that child, and that might be used as an argument against negative utilitarianism. Ironically enough, my egoistic hedonism seems much more immune to such objections, because I think it’s usually a pretty closed question whether or not rescuing a child would hedonistically benefit you.
Quoting David Pearce
Yes but who’s going to pay for those technical fixes? Wouldn’t it cost plenty of money to implement any sort of technical fix as a means to end the suffering of wild animals?
You're moving the goalposts, which, as I'm sure you're aware, is a rookie move. You get a free pass because you're a guest speaker; if you were not, you would be laughed off the stage.
The genetic-loading aspect aside, which is still potentially addressed by my point: what causes depression? Loss, corruption, or damage of something that once made one happy. What causes happiness? The ability to feel both good and bad, both pleasure and pain. You talk about this "hedonic zero", which admittedly I don't fully comprehend and do wish you to explain in more detail (if not once more). However, there are simple factors in play. People enjoy winning a bet because they had a real chance of losing their money. People enjoy an evenly matched game of chess for much the same reason. Would you enjoy playing a game of chess against a grandmaster, other than to say you did so? As an experienced player, would you enjoy playing a game against a complete novice? The answers to these questions are one and the same. David, please watch this, if not the full actual episode, and then tell me what you think.
Sorry, could you unpack the footballing metaphor for me? What “goalposts”? As far as I know, I’ve consistently been arguing for replacing the biology of pain and suffering with life based on gradients of genetically programmed well-being.
This happened to me once, in an airport lounge. I had hours to wait, and the old gentleman next to me in the lounge was reading a chess magazine. I asked him if he wanted to play. He did, though he did not look particularly excited about it, as if playing chess were a bit boring to him... Anyway, I looked around the airport shops for a set, found one, and came back to the lounge with it.
What followed was rather humbling. I could not make it past 20 moves in any of the six or seven games we played. When I congratulated him, he said he was a grandmaster.
Apologies, by “hedonic zero” I just mean emotionally neutral experience that is perceived as neither good nor bad. Hedonic zero is what utilitarian philosopher Henry Sidgwick called the “natural watershed” between good and bad experience – though it’s complicated by “mixed states” such as bitter-sweet nostalgia.
Chess? I enjoy playing against a super-grandmaster-strength chess engine. I lose every time. By contrast, I wouldn’t ever enjoy losing against a human opponent. This is because I’m a typical male human primate. Playing chess against other humans is bound up with the primate dominance behaviour of the African savannah. I trust future sentience can outgrow such primitive zero-sum games.
Thank you for the link to The Twilight Zone (cf. https://en.wikipedia.org/wiki/A_Nice_Place_to_Visit).
Perhaps see my response to “What if you don’t like it in Heaven?”:
https://www.hedweb.com/quora/2015.html#heaven
In short, if we upgrade our reward circuitry, then all experience will be heavenly by its very nature.
Negative utilitarianism (NU) is compassion systematised. NUs aren’t in the habit of letting small children drown any more than we’re plotting Armageddon. I’m as keen on upholding the sanctity of life in law as your average deontologist. Indeed, I think the principle should be extended to the rest of the animal kingdom, so-called “high-tech Jainism”: https://www.hedweb.com/transhumanism/neojainism.html
The reason for such advocacy is that NU is a consequentialist ethic. Valuing and even sanctifying life is vastly more likely to lead to the ideal outcome, i.e. the well-being of all sentience, than cheapening life. Quoting TheHedoMinimalist
A pan-species welfare state might cost a trillion dollars or more at today’s prices – maybe almost as much as annual global military expenditure. It’s unrealistic, even if humans weren’t systematically harming nonhumans in factory-farms and slaughterhouses. However, human society is on the brink of a cultured meat revolution. Our “circle of compassion” will expand in its wake. The most expensive free-living organisms to help won’t be the small fast-breeders, as one might naively suppose (cf. https://www.gene-drives.com), but large, slow-breeding vertebrates. I did a costed case-study for free-living elephants a few years ago: https://www.abolitionist.com/reprogramming/elephantcare.html
Any practically-minded person (they exist even on a philosophy forum) is likely to be exasperated. What’s the point of drawing up blueprints that will never be implemented? Yet the exponential growth of computer power means that the price of such interventions will collapse. So it’s good to have a debate now over the merits of traditional conservation biology versus compassionate conservation. Bioethicists need to inform ourselves of what is – and isn’t – technically feasible. On the latter score, at least, the prospects are stunningly good. Biotech can engineer a happy biosphere. Politically, such a project may take hundreds or even thousands of years. But I can’t think of a more worthwhile goal.
Wait, so are you like a rule utilitarian then? Also, it seems to me that you can argue that we should uphold the sanctity of life in law without thinking that we should prevent a child from drowning. For one, I think that the notion of preserving the sanctity of life that exists in law mostly has to do with the prohibition against ending lives rather than an obligation that one must prevent the ending of a life. I don’t think it’s usually illegal for someone to refuse to prevent the child from drowning. Given this, I’m not sure why you think that this scenario isn’t even an open question or a tricky judgement call for a NU. After all, aren’t NUs ultimately trying to reduce the amount of suffering in the world as their primary goal? Wouldn’t preventing the child from drowning have a significant potential to increase the amount of suffering in the world? If the answer to the latter question is yes, then wouldn’t letting the child drown be potentially compatible with the notion that NU is compassion systematized? Also, wouldn’t you just define compassion as the prevention or alleviation of suffering, as a NU?
I’m a strict NU. And it’s precisely because I’m a strict NU that I favour upholding the sanctity of human and nonhuman life in law. Humans can’t be trusted. The alternative to such legal protections would most likely be more suffering. Imagine if people thought that NU entailed letting toddlers drown! Being an effective NU involves striking alliances with members of other ethical traditions. It involves winning hearts and minds. Winning people over to the abolitionist project is a daunting enough challenge as it is. Anything that hampers this goal should be discouraged.
So, do you think that it should be illegal to let a child drown even under circumstances where you don’t have parental duties toward that child and you don’t have a particular job, like being a nanny or a lifeguard, that requires you to prevent that child from drowning?
Quoting David Pearce
But, what if a person is a secret NU and he decides to let the child drown? Most of the time, it seems to me that the public wouldn’t know if someone let the child drown because they were a NU since it seems that most NUs only talk about being NUs under an anonymous online identity. Given this, it seems to me that NUs do not actually need to believe that we should prevent people from dying in order to maintain alliances with other ethical theories. Rather, I think they would just need to be collectively dishonest about their willingness to let people die as long as it wouldn’t do anything to worsen the reputation of NU.
Yes. Quoting TheHedoMinimalist
Such calculated deceit is probably the recipe for more suffering. So it's not NU. Imagine if Gautama Buddha ("I teach one thing and one thing only: suffering and the end of suffering”) had urged his devotees to practice deception and put vulnerable people out of their misery if the opportunity arose...
I agree with you that it might be best for a fairly high-profile NU like yourself to teach your fans, and people who might be interested in NU, that they should prevent children from drowning. But I think you have created a kind of implicit double message when describing the reasons why they should do so. The main reason you have stated seems to be that it is a good PR move for NU. I don’t think this genuinely teaches your NU fans that they really shouldn’t allow children to die. Rather, you seem to be teaching them (in a somewhat indirect and implicit way) not to damage the reputation of NU. Your fans are not stupid, though. They know that you seem to have your reasons for teaching what you teach, and I think they would assume that you might actually want them to let a child die, even if you can’t express that sentiment without creating a negative outcome that would lead to more suffering.
In addition, people who believe in other ethical theories are often not stupid either. If you openly state that your main reason for thinking one should rescue a drowning child is a desire to appease them, do you really think this would persuade them that your views lack the anti-life implications they suspect? It seems to me that you would need to come up with another reason for not being anti-life as a NU, or you would paradoxically end up causing the proponents of other ethical theories to distrust NU even more. I think most people wouldn’t take kindly to someone accepting a particular viewpoint merely as a means of making that viewpoint less unpopular.
I second that motion, but a word of caution. Although anyone who claims not to be a hedonist is lying through faer teeth, I've always been doubtful about the status of happiness (pleasure and suffering) in a means-ends context. Hedonic ideas tend to treat happiness as an end in itself, as something of intrinsic worth, but happiness can also be viewed as a means to some other end. Consider, for example, the rather "mundane" case of sex – its pleasure rating is off the charts – and I contend that's because of how important sex is to survival and adaptation. Put simply, in the case of sex, pleasure is a means – a reward system – put in place by evolution to keep us hooked, as it were, to the two-backed beast; i.e. pleasure is simply a means to ensure an end, which is continued procreation. Ergo, hedonism could be a case of conflating means and ends, and it's my suspicion that the more important something is for evolutionary success, usually interpreted as continued existence/survival, the more pleasurable it is; conversely, the more detrimental something is to evolutionary success, the more painful it is. In very no-nonsense terms, life makes an offer we can't refuse – pleasure is just too damned irresistible for us to reject anything that has it as part of the deal. And thereby hangs a tale, a tale of diabolical deception (kindly excuse the hyperbole, but somehow it doesn't feel wrong to describe it as such). The story of hedonism and all things allied to it could be adapted to film. Picture a crime boss (life, evolution) whose evil plan is to make people addicted to a highly potent drug (happiness) and use the addicts to do his bidding in return for a "fix" (a dose of that drug). He could even manipulate the dose in such a way that the junkies who best do what he wants get a bigger dose. That this film adaptation of hedonism is going to be a crime thriller is telling, no?
I think this is right. I think pain/pleasure are indicators for what is good or bad, not what is good or bad itself.
Consider the following analogy: smoke detectors serve the function of alerting us when there is a fire. The bad thing is not the smoke detector going off; it's the fire it signals that is bad.
Analogously, pain signals that something bad is happening – for example, that your skin is getting burned when you have your hand on a hot stove.
Negative utilitarianism or hedonism is akin to saying that the solution to the problem is getting rid of smoke detection. It just doesn't make sense to me from the get-go.
Agony and despair are inherently bad, whether they serve a signaling purpose (e.g. a noxious stimulus) or otherwise (e.g. neuropathic pain or lifelong depression).
Almost no one disputes that subjectively nasty states can play a signalling role in biological animals. What's controversial is whether they are computationally indispensable or whether they can be functionally replaced by a more civilised signalling system. The development of ever more versatile inorganic robots that lack the ghastly "raw feels" of agony and despair shows an alternative signalling system is feasible. "Cyborgisation" (smart prostheses, etc) and hedonic recalibration aren't mutually exclusive options for tomorrow's (trans)humans. A good start will be ensuring via preimplantation genetic screening and soon gene-editing that all new humans are blessed with the hyperthymia and elevated pain-tolerance ("But pain is just a useful signalling system!") of the luckiest 1% (or 0.1%) of people alive today:
https://www.hedweb.com/quora/2015.html#physical
More futuristic transhuman options, i.e. an architecture of mind based entirely on gradients of bliss, can be explored later this century and beyond. But let's tackle the most morally urgent challenges first.
Yes, well put. In their different ways, pain and pleasure alike are coercive. Any parallel between heroin addicts and the drug-naïve is apt to sound strained, but endogenous opioid addiction is just as insidious at corrupting our judgement.
The good news is that thanks to biotech, the substrates of bliss won't need to be rationed. If mankind opts for a genetically-driven biohappiness revolution, then, in principle at least, everyone's a "winner". Contrast the winners and losers of conventional social reforms.
I don't think anything is really 'inherent'. If they serve a signaling purpose, then they themselves are not bad, but the circumstances that lead to agony and despair are. I'll grant you that, yes, in the case of chronic pain and depression, the agony and despair are bad in themselves without any signaling or other purpose... But if you permit me using the same analogy I made in my previous post, this seems to be a case of malfunctioning smoke detectors. If they go off all the time without cause, then yes, they need fixing. But some malfunctioning smoke detectors are not a reason to get rid of all smoke detection, nor does it make getting rid of smoke detection an end in itself. So by all means, yes, we should try to find a solution for chronic pain and depression... I just don't think those specific cases are necessarily representative or to be generalized to all pain and pleasure.
Quoting David Pearce
Maybe they can be replaced or maybe they cannot; that's a technical question. What's also controversial, I'd say, is whether we 'should' replace them with a more 'civilised' signaling system. What is deemed more civilised no doubt depends on the perspective you are evaluating it from.
I think, and we touched on this a few pages back, a lot of this discussion comes down to the basic assumption of negative utilitarianism, and whether you buy into it or not. If you don't, the rest of the story doesn't necessarily follow because it builds on that basic assumption.
Thanks, you raise some astute but uncomfortable points. Asphyxiation is a ghastly way to die, but even if death were instantaneous, there is something rather chilling about an ethic that seems to say pain-ridden Darwinian humans would be better off not existing. Classical utilitarianism says the same, albeit for different reasons; ideally our matter and energy should be converted into pure, undifferentiated bliss (hedonium).
However, if some version of the abolitionist project ever comes to pass – whether decades, centuries or millennia hence – its completion will presumably owe little to utilitarian ethicists. Maybe the end of suffering will owe as little to utilitarianism as pain-free surgery. I believe that transhuman life based on gradients of bliss will one day seem to be common sense – and in no more need of ideological rationalisation than breathing.
An ontic structural realist (cf.
https://plato.stanford.edu/entries/structural-realism/#OntStrReaOSR) might agree with you. But if someone has no place in their ontology for the inherent ghastliness of my pain, so much the worse for their theory of the world. And I fear I'm typical.
Quoting ChatteringMonkey
But their signalling "purpose" is to help our genes leave more copies of themselves. Agony and despair are still terrible even when they fulfil the functional role of maximizing the inclusive fitness of our DNA.
Quoting ChatteringMonkey
Recall transhumanists / radical abolitionists don't call for abolishing smoke-detection, so to speak. Nociception is vital; the "raw feels" of pain are optional. Or rather, they soon will be...
Quoting ChatteringMonkey
Perhaps the same might be said of medicine pre- and post-surgical anesthesia. However, a discussion of meta-ethics and the nature of value judgements would take us far afield.
Quoting ChatteringMonkey
I happen to be a negative utilitarian. NU is a relatively unusual ethic of limited influence. An immense range of ethical traditions besides NU can agree, in principle, that a world without suffering would be good. Alas, the devil is in the details...
I like the idea of transhumanism. Happiness (pleasure & suffering) is something that's close to my heart but I'm guessing I'm not alone in this regard.
I want to run something by you though. A while back, on another thread, I voiced the opinion that pleasure and pain were like those LED light indicators on the many contraptions you can buy at a store. So a green light (pleasure) turns on when the contraption (we) is working well and a red light (pain) flashes when the contraption is malfunctioning. It appears that for some reason this system for tracking the wellbeing of a person, unlike that for contraptions (machines), has acquired the qualities of being pleasant (pleasure) and unpleasant (pain). I suggested that what we could do, if feasible, is sever that link between pleasure and the pleasant feeling that comes with it and similarly between pain and unpleasantness. In other words, I envision a state, a future state, in which injury/harm to mind and body would simply cause a red light to flash and when something good happens to us, all that does is turn on a green light, the unpleasantness of pain or the pleasantness of pleasure will be taken out of the equation as it were. I suppose this, again, is just another version of my position that pleasure and pain can be construed as means to ensure our wellbeing and to treat them as ends might just indicate that we've missed the point. It seems I've run out of ideas. Will get back to you if anything catches my eye. Thank you. Have a good day.
Complete "cyborgisation", i.e. offloading all today's nasty stuff onto smart prostheses, is one option. A manual override is presumably desirable so no one feels they have permanently lost control of their body. But abandoning the signalling role of information-sensitive gradients of well-being too would be an even more revolutionary step: the prospect evokes a more sophisticated version of wireheading rather than full-spectrum superintelligence. At least in my own work, I've never explored what lies beyond a supercivilisation with a hedonic range of, say, +90 to +100. A hedonium / utilitronium shockwave in some guise? Should the abundance of empirical value in the cosmos be maximised as classical utilitarianism dictates? Maximisation is not mandatory by the lights of negative utilitarianism; but I don't rule out that posthumans will view negative utilitarianism as an ancient depressive psychosis, if they even contemplate that perspective at all.
I think this more or less brings us back to the original point of our exchange.
One of biological life's defining features is making more copies of the genes it is built out of.
Biological life is the origin and, so far as we know, the only thing that evaluates in this universe.
How then can one come to a conclusion that more life is bad?
Without life in the universe nothing matters either way right?
Quoting David Pearce
The devil is in the details indeed; I don't think many traditions would agree that sterilization of our forward light-cone is the most moral course of action, for instance...
Anyway, I enjoyed the discussion, thanks for that.
Indeed so. Programming a happy biosphere is technically harder than sterilizing the Earth. But I can't see how the problem of suffering is soluble in any other way.
Indeed! We love choices, don't we? Even if one of them is foolish/mad/both. Something worth exploring but outside the scope of this discussion (or not???), I suppose. I'm trying to consider the scenario in which people might opt out of transhumanism even though it promises so much and has the means to keep those promises. What if transhumanism becomes a reality but people use it only for recreational purposes, you know, like going to Disneyland? I'm fairly certain that transhumanism would be just too much "fun" to be thought of as a much-deserved break from reality - it would become, no sooner than all its major features became available on the market, a way of life with global appeal - an offer we don't have it in us to refuse. Yet, since we seem to be so concerned with choice, would we insist on having an off switch to transhumanist super states? I wonder.
Quoting David Pearce
I'm inclined to agree. Though we want to remove the thorn in our side (suffering), let's face it, what we really want is the rose (happiness). G'day.
I would speculate that asphyxiation is actually probably more peaceful than the ways most people die today from natural causes. I would much rather have someone drown me when I'm around 60 than have my body slowly ravaged by cancer or some other common life-ending illness. Though I'll grant you that NUs are not required to believe that you should allow children to drown. I just think it's not implausible for an NU to think that he would be benefiting the child by allowing her to die, since life offers plenty of future opportunities for suffering and it's not clear whether this suffering is outweighed by the suffering caused by the drowning.
Quoting David Pearce
Well, it is chilling to most people but there are some people like me that don’t seem to think that these implications are problematic. Also, it seems that the idea of genetically modifying humans to be incapable of suffering is chilling and disturbing to most people as well. You probably wouldn’t think that this is much of an argument against transhumanism though.
Consider the core transhumanist "supers", i.e. superintelligence, superlongevity and superhappiness.
If you became a full-spectrum superintelligence, would you want to regress to being a simpleton for the rest of the week?
If you enjoyed quasi-eternal youth, would you want to crumble away with the progeroid syndrome we call aging?
If you upgraded your reward circuitry and tasted life based on gradients of superhuman bliss, would you want to revert to the misery and malaise of Darwinian life?
Humans may be prone to nostalgia. Transhumans – if they contemplate Darwinian life at all – won't miss it for a moment.
Pitfalls?
I can think of a few...
https://www.hedweb.com/quora/2015.html#downsides
I wonder to what extent hesitancy stems from principled opposition, and how much from mere status quo bias?
People tend to be keener on the idea of Heaven than the tools to get there.
Some suspicion is well motivated. The history of utopian experiments is not encouraging.
"But this time it's different!" say transhumanists. But then it always is.
That said, life based on gradients of intelligent bliss is still my tentative prediction for the long-term future of sentience. Suffering isn't just vile; it's pointless.
I see no good reason to disagree with what you say. Darwinian life, as you put it, can't even hold a candle to transhumanist existence as and when it becomes a reality. Who in her right mind would turn down super-anything, let alone superintelligence, superlongevity, and superhappiness?
Correct me if I'm wrong but transhumanism envisions the triad of supers (superintelligence, superlongevity, and superhappiness) to work synergistically, complementing each other as it were to produce an ideal state for humans or even other animals. What if that assumption turns out to be false? What if, for instance, superintelligence, after carefully considering the matter, comes to the conclusion that neither (super)longevity nor (super)happiness deserves as much attention as they're getting in transhumanist circles and recommends these supers be scrapped. I'm not sure if similar arguments can be made based on the other supers but you get the idea, right?
Compared to the superintelligent state transhumanism will one day help us achieve, we, as of this moment, are downright simpletons, and thus there's a high likelihood that any claims we make now with regard to what superintelligent transhumans in the distant future might aspire towards are going to be way off the mark. In short, there seems to be significant risk to transhumanism's core ideals from superintelligence. Perhaps this is old news to you.
Yes, talk of a "triple S" civilisation is a useful mnemonic and a snappy slogan for introducing people to transhumanism. But are the "three supers" in tension? After all, a quasi-immortal human is scarcely a full-spectrum superintelligence. A constitutionally superhappy human is arguably a walking oxymoron too. For what it's worth, I'm sceptical this lack of enduring identity matters. Archaic humans don't have enduring metaphysical egos either. "Superlongevity" is best conceived as an allusion to how death, decrepitude and aging won't be a feature of post-Darwinian life. A more serious tension is between superintelligence and superhappiness. I suspect that at some stage, posthumans will opt for selective ignorance of the nature of Darwinian life – maybe even total ignorance. A limited amnesia is probably wise even now. There are some states so inexpressibly awful that no one should try to understand them in any deep sense, just prevent their existence.
1. Why is a constitutionally superhappy human "...arguably a walking oxymoron"? Do you mean to say that such a state has to be, in a sense, made complete with the other 2 supers?
2. The way it seems to me, an "...enduring identity..." is the cornerstone of any hedonic philosophy and for that reason applies to transhumanism too. It's my hunch that we care so much about suffering and happiness precisely because of an "...enduring identity..." that, true or not, we possess. "Suffer" and "Happy" are meaningful only when they become "I suffer" and "I'm happy" i.e. there must be a sense of "...enduring identity..." for hedonism to matter in any sense at all.
By way of bolstering my point that an "...enduring identity..." is key to hedonism I'd like to relate an argument made by William Lane Craig which boils down to the claim that human suffering is, as per him, orders of magnitude greater than animal suffering for the reason that people have an "...enduring identity..." I suppose he means to say that being self-aware (enduring identity) there's an added layer to suffering. Granted that William Lane Craig may not be the best authority to cite, I still feel that he makes the case for why hedonism is such a big deal for us humans and by extension to transhumanism.
3. I suppose you have good reasons for recommending (selective) amnesia in re Darwinian life, but wouldn't that be counterproductive? Once bitten, twice shy seems to be the adage transhumanism is about - suffering is too much to bear (and happiness is just too irresistible) - and transhumanists have calibrated their response to the problems of Darwinian life accordingly. To forget Darwinian life would be akin to forgetting an important albeit excruciatingly painful lesson, which might be detrimental to the transhumanist cause.
Sorry, I should clarify. Even extreme hyperthymics today are still recognisably human. But future beings whose reward circuitry is so enriched that their "darkest depths" are more exalted than our "peak experiences" are not human as ordinarily understood – even if they could produce viable offspring via sexual reproduction with archaic humans, i.e. if they fulfil the normal biological definition of species membership. A similar point could be made if hedonic uplift continues. There may be more than one biohappiness revolution. Members of a civilisation with a hedonic range of, say, +20 to +30 have no real insight into the nature of life in a supercivilisation with a range that extends from a hedonic low of, say, +90 to an ultra-sublime +100. With pleasure, as with pain, "more is different" – qualitatively different.
Quoting TheMadFool
As a point of human psychology, you may be right. However, I'd beg to differ with William Lane Craig. The suffering of some larger-brained nonhuman animals may exceed the upper bounds of human suffering (cf. https://www.hedweb.com/quora/2015.html#feelpain) – and not on account of their conception of enduring identity (cf. https://www.hedweb.com/quora/2015.html#parfit). This is another reason for compassionate stewardship of Nature rather than traditional conservation biology.
Quoting TheMadFool
I agree about potential risks. Presumably our successors will recognise too that premature amnesia about Darwinian life could be ethically catastrophic. If so, they will weigh the risks accordingly. But there is a tension between becoming superintelligent and superhappy, just as there is a tension today between being even modestly intelligent and modestly happy. What now passes for mental health depends on partially shutting out empathetic understanding of the suffering of others – even if one dedicates one's life to making the world a better place. Compare how mirror-touch synesthetes may feel your pain as their own. Imagine such understanding generalised. If one could understand even a fraction of the suffering in the world in anything but some abstract, formal sense, then one would go insane. Possibly, there is something that humans understand about reality that our otherwise immensely smarter successors won't grasp.
:ok: :up:
So posthumans, as the name suggests, wouldn't exactly be "humans." Posthumans would be so advanced - mentally and physically - that we humans wouldn't be able to relate to them and vice versa. It would be as if we were replaced by posthumans instead of having evolved into them.
Quoting David Pearce
Bravo! I sympathize with that sentiment. Sometimes it takes a whole lot of unflagging effort to see the light, and this, for me, is one such instance of deep significance.
Quoting David Pearce
I absolutely agree. My own thoughts on this are quite similar. I once made the assertion that the truly psychologically normal humans are those who are clinically depressed, for they see the world as it really is - overflowing with pain, suffering, and all manner of abject misery. Who, in faer "right mind", wouldn't be depressed, right? On this view, what's passed off as "normal" - contentment or, failing that, a happy disposition - is actually what real insanity is. In short, psychiatry has completely missed the point which, quite interestingly, some religions like Buddhism, whose central doctrine is that life is suffering, have clearly succeeded in sussing out.
Yes. Just as a pinprick has something tenuously in common with agony, posthuman well-being will have something even more tenuously in common with human peak experiences. But mastery of the pleasure-pain axis promises a hedonic revolution; some kind of phase change in hedonic tone beyond human comprehension.
Quoting TheMadFool
After decades of obscurity and fringe status, a policy agenda of compassionate conservation may even be ready to go mainstream. Here is the latest Vox:
https://www.vox.com/the-highlight/22325435/animal-welfare-wild-animals-movement
Quoting TheMadFool
Well said. In contrast to depressive realism, what passes for mental health is a form of affective psychosis. Yet perhaps we can use biotech and IT to build a world fit for euphoric realism – a world where reality itself seems to conspire to help us.
Does the transhumanist vision ultimately lead to something of a morgue where bodies are stored side-by-side and atop one another in a state of unconsciousness (or even cold storage), offering an experience identical to reality, say a new world to explore the size of ten Earths with geographic features only found on other planets, along with personal indestructibility? Would we even require bodies anymore? Perhaps it will become a requirement to give one's up.
Wouldn't shrinking people (an old myth that perhaps may be a crypto-technological metaphor) make more sense for human well being? An entire metropolitan city of 1 million people - each and every denizen - would now be able to live in their own private three-story mansion with 20 bedrooms, an Olympic sized swimming pool, and their own private petting zoo and race track (or something) in no more combined space and area than an average strip mall. Fancy that eh?
I like what you said earlier: Quoting David Pearce
Perhaps I'm way off the mark in the way I interpret it but for what it's worth I'll give you an idea of how I made sense of what you said. To begin with, you've been regularly employing numbers in your posts on transhumanism. My guess is you have some kind of a numerical hedonic scale which you're using to make rough or perhaps even precise measurements of hedonic parameters (pleasure & suffering). What struck me as deeply insightful is, I quote, "'more is different' - qualitatively different". The word "more" implies the hedonic scale I was talking about earlier and one can imagine a slider on it that marks off differences in what I suppose is the intensity of pleasure or pain that's numerically i.e. quantitatively expressed.
I don't know if it matters to transhumanism or not but my intuition informs me that if there's a large enough quantitative distance between two hedonic states, the difference might be perceived as a qualitative one. In other words, a hedonic value of +20 may still be comprehensible in terms of pleasure or pain but one that's +100 may be experienced not as either pleasure or pain but as something about which we can only speculate at the moment.
An analogy might help. I've been to gyms and done some weight training. I recall that what I did was slowly increase the weight I was lifting, which in effect quantitatively increases the stress on the muscles of my arms. In the initial stages, all I could feel was the strain on my muscles gradually rising, but at a certain point the strain transformed into pain. This, to me, suggests that quantitative differences, if big enough, might be perceived as qualitative differences. Sorry if my short anecdote fails to do its job of elucidating my point, but it's the best I could do.
The implication for transhumanism is this: A hedonic value of +90 or +100 may not be experienced as pleasure at all; at the very least, what we suppose is pleasure at hedonic levels +90 or +100 may be so radically different from what is pleasure at hedonic levels +20 or +30 that we would be forced to make a distinction between them - one is pleasure in its present recognizable form and the other is...???...anyone's guess.
I have a feeling that this isn't either a fatal flaw or even a minor irritation for transhumanism but I'd like your opinion nonetheless.
G'day.
A morgue doesn't quite evoke the grandeur of a "triple S" civilisation. But I guess it's conceivable. Even today, we each spend our life encased within the confines of a transcendental skull – not to be confused with the palpable empirical skull whose contours one can feel with one's "virtual" hands (cf. https://www.hedweb.com/quora/2015.html#lifeillus). Immersive VR or some version of the transcension hypothesis is one trajectory for the future of sentience. Rather than traditional spacefaring yarns – who wants to explore what are really lifeless gas giants or sterile lumps of rock!? – maybe intelligence will turn inwards to explore inner space. The experience of inner space – and especially alien state-spaces of consciousness – can be far bigger, richer and more diverse than interplanetary or hypothetical interstellar travel pursued in ordinary waking consciousness. For what it's worth, I've personally no more desire to spend time on Mars than to live in the Sahara desert.
Anyhow, to answer your question: I don't know. For technical reasons, I think the future lies in gradients of superhuman bliss. I've no credible conception of what guises that bliss will take. It will just be better than your wildest dreams.
I suppose that would require that you're already superhuman.
At what point do you think we might cross the threshold between human and superhuman? Could there be a distinguishing feature which would mark a difference of species? Or, would making such a distinction amount to racism?
Do you believe that there are limits to the extent in which a person is ethically obligated to get involved on someone else's behalf? Do you think there comes a point in which a person is justified in saying, "not my problem"?
It seems to me that the common-sense intuition is that, while suffering is indeed bad, there are limits to how much a person is obligated to try to reduce it, and furthermore, that it is more important not to increase the amount of suffering in the world than it is to reduce it. It at least initially seems like we have some degree of freedom to be a "spectator", so that some things are just not our fault, and we have no responsibility to make things better (although it is admirable if we so choose).
What are your thoughts on this?
The quote was:
[quote=Blaise Pascal]All of humanity's problems stem from man's inability to sit quietly in a room alone.[/quote]
I take it that in your perfect world, just sitting quietly in a room alone would be "enough", above hedonic zero. But of course that wouldn't consign us to perpetual inaction as we just sat in a room alone doing nothing forever (like the "wire-heads" in Larry Niven's Known Space universe, who are addicted to direct electronic stimulation of their pleasure centers), because we would still have the opportunity for even more enjoyable experiences if we went out and accomplished things, learned things, taught others, helped them in other ways, etc.
Does that sound about right?
Thank you. Lots of complications to unpack here! We now know that wireheading, i.e. intracranial self-stimulation of the mesolimbic dopamine system, stimulates the desire centres of the CNS rather than the opioidergic pleasure centres (cf. https://www.paradise-engineering.com/brain/). But let's here use "wireheading" in the popularly accepted sense of unvarying bliss induced by microelectrode stimulation: a perpetual hedonic +10. Short of genetic enhancement, there is no way for human wireheads to exceed the upper bounds of bliss allowed by their existing reward circuitry; but for negative utilitarians, at least, this constraint isn't a moral issue. As an NU, I reckon an entire civilisation of wireheads that had discharged all its responsibilities to eradicate suffering would be morally unimpeachable. However, I don't urge wireheading except in cases of refractory pain and depression. It's not ecologically viable because there will always be strong selection pressure against any predisposition to wirehead. The idea of wireheading appeals mostly to pain-ridden depressives.
Another scenario combines hedonic recalibration with the VR equivalent of Nozick's Experience Machines (cf. https://www.hedweb.com/quora/2015.html#experiencemachine). Immersive VR may transform life and civilisation. But once again, I don't advocate ubiquitous Experience Machines because of the nature of selection pressure in basement reality. Any predisposition not to plug into full-blown Experience Machines will be genetically adaptive.
However, there is a third option that is potentially saleable, ecologically viable and also my tentative prediction for the future of sentience. Genetically-based hedonic uplift and recalibration isn't, strictly speaking, pleasure-maximizing. Recall how today's high-functioning hyperthymics are blissful, but they aren't "blissed out". A civilisation based on information-sensitive gradients of intelligent bliss is not a perfect world by strict classical utilitarian criteria. Recalibrating hedonic range and hedonic set-points in basement reality may even be "conservative", in a sense. Your values, preference architecture and relationships can remain intact even as your default hedonic tone is uplifted. Critical insight and social responsibility can be conserved. Neuroscientific progress can continue unabated too – including perhaps the knowledge of how to create a hedonic +90 to +100 supercivilisation.
Heady stuff. Alas, Darwinian life still has vicious surprises in store.
A good question. IMO a plea of "not my problem" is irrational and immoral. In my view, closed individualism is a false theory of personal identity: https://www.hedweb.com/quora/2015.html#individualism
Insofar as you are rational, and insofar as it lies within your power, you should help others as much as you should help your future namesakes ("you"). For sure, there are massive complications. Evolution didn't design us to be rational. Reality seems centred on me. I'm most important, followed by family, friends and allies:
https://www.hedweb.com/quora/2015.html#interpret
Intellectually, I know this is delusional nonsense. But the egocentric illusion is so adaptive that it's effectively hardwired across the animal kingdom. If one aspires to exceed one's design specifications and instead display "moral heroism", then one risks burnout.
What does helping others mean in practice? Well, "technical solutions to ethical problems" is one pithy definition of transhumanism. The biotech and IT revolutions mean that shortly we'll be able to help even the humblest forms of sentience without risk of burnout, and indeed with minimal personal inconvenience. For instance, I'd strenuously urge everyone to go vegan; humanity's depraved treatment of nonhuman animals in factory-farms and slaughterhouses defies description. Yet the most effective way to "veganise" the world will be accelerating the development and commercialisation of cultured meat and animal products. Most people are weakly benevolent; if offered, they'll choose the cruelty-free cultured option. Animal agriculture will presumably be banned after butchery becomes redundant. A similar hard-headed ethical approach is needed to tackle the problem of wild animal suffering. Devoting half one's life to feeding famished herbivores in winter would be too psychologically demanding; piecemeal interventions would also be ineffective and maybe cause more long-term suffering. By contrast, hi-tech solutions to wild animal suffering are easier to implement and potentially much more effective.
Biologists define a species as a group of organisms that can reproduce with one another in their natural habitat to produce fertile offspring. We can envisage a future world where most babies are base-edited "designer babies". At some stage, the notional coupling of a gene-edited, AI-augmented transhuman and an archaic human on a reservation would presumably not produce a viable child.
Bioconservatives may be sceptical such a reproductive revolution will ever come to pass. Yet I suspect some kind of reproductive revolution may be inevitable. As humans progressively conquer the aging process later this century and beyond, procreative freedom as traditionally understood will eventually be impossible – whether the carrying capacity of Earth is 15 billion or 150 billion. Babymaking will become a rare and meticulously planned event:
https://www.reproductive-revolution.com
Possibly, you have a more figurative sense of "superhuman" in mind. My definition of the transition from human to transhuman is conventional but not arbitrary. In The Hedonistic Imperative (1995) I predicted, tentatively, that the world's last experience below hedonic zero in our forward light-cone would be a precisely datable event a few centuries from now. The Darwinian era will have ended. A world without psychological and physical pain isn't the same as a mature posthuman civilisation of superintelligence, superlongevity and superhappiness. But the end of suffering will still be a momentous watershed in the evolutionary development of life. I'd argue it's the most ethically important.
Quoting David Pearce
Wouldn't there be a vast portion of the human population which, for one reason or another, would not engage in this designer-baby process? I would think that they might even revolt against it. In any case, I can see a divide where each, the designer and the natural, would look at the other as different. And in these cases we don't usually look at the other as better.
Quoting David Pearce
Judging by the way that human beings have given themselves control over the Covid-19 virus, I don't have much faith in their capacity to conquer any natural processes which produce death. I tend to think that making a living thing resistant to one specific fatal process only leaves it more vulnerable to another. Life is a very delicate balance, and "the aging process" is an illusion, because there is no one simple process which constitutes aging. The fountain of youth is a myth.
Quoting David Pearce
I noticed you are also concerned about nuclear war. Aren't you a little worried that the movement toward designer babies could actually trigger a nuclear war as a revolution against this sort of human manipulation? There are also countless other things which could foil this process, like biological warfare. I would think that your desired human transformation would require the entire human population working together towards that one goal, and, as I indicated above, I believe that is extremely unlikely, because many will reject it as unnatural, or as in opposition to God. Do you see how many people in the world reject something as simple as a Covid-19 vaccination? Both atheist and theist moralists have reason to reject your proposal. Therefore I think your proposal would only create a division between those in favour and those against, and if those in favour proceeded with the project without unanimous consent, they might be exterminated as a threat.
@David Pearce, also on this sentiment: if such a (in my view outlandish) reality were ever to come to actual practice or fruition, I suspect people would want to see the results for themselves first. Not just the first transhumans gleefully or nonchalantly being happy or blissful, as in your proposal, right away or for a few months or even a few years, but how they truly fare in life and through multiple generations. Perhaps a controlled experiment would be in order: an entire civilization, somewhere far, far away, where they cannot reach the outside non-transhuman world and the outside world cannot reach them, but where we can all monitor them vigorously, both advocate and opponent, or at least a non-biased person who can offer the moral equivalent (for example, a group of volunteers and their kids). Otherwise, I tend to believe Metaphysician has a quite compelling, if not forceful, point to be confronted.
Why "outlandish"? For sure, untested genetic experiments conceived in the heat of sexual passion are "normal" today. But there may come a time when creating life via a blind genetic crapshoot will seem akin to child abuse. Recall that Darwinian life is "designed" to suffer. I reckon responsible future parents will want happy children blessed with good code. All sentient beings deserve the maximum genetic opportunity to flourish.
I made a topic about this before. Did I get it right in that topic?
See:
https://thephilosophyforum.com/discussion/8735/david-pearce-on-hedonism
Thank you. You are very kind. Some people may be a bit disconcerted that a negative utilitarian should talk so much about happiness, pleasure and even hedonism. But IMO, engineering a world with an architecture of mind based on information-sensitive gradients of well-being will prove to be the most realistic way to end suffering.
Suppose that a minority of parents do indeed decide they want "designer babies" rather than haphazardly-created babies. The explosive popularity of personal genomics services like 23andMe shows many people are proactive regarding their genetic make-up and genetic family history – and curious about their partner's genetic code too. Suppose that the genetic basis of pain thresholds, hedonic range, hedonic set-points, antiaging alleles and, yes, alleles and allelic combinations associated with high intelligence becomes better understood. Naturally, most prospective parents want the best for their kids. To be sure, designing life is a bioethical minefield. But what kind of "revolt" from bioconservatives do you anticipate, beyond simply continuing to have babies in the cruel, historical manner? No doubt the revolution will be messy. That said, I predict opposition will eventually wither.
Because the transhumans would be superhuman in some ways, they would be seen as a threat to the naturalists (or whatever you want to call them), and the God-fearers. And, as artificially produced, the naturalists would look at them as emotionless computers or robots, and feel the same threat that some people today feel about robots taking over the world and wiping out human existence. So they'd want to protect their children from this scourge of artificial beings, by doing whatever they possibly could to prevent them from being created.
Yes, I agree. Ferocious controversy lies ahead.
Recall the first CRISPR babies were produced not to enhance the innate well-being of the gene-edited twins, but to enhance their intelligence with HIV-protection as a cover story (cf. https://www.technologyreview.com/2019/02/21/137309/the-crispr-twins-had-their-brains-altered/). I don't know if the parents of Lulu and Nana were aware of the potential cognitive enhancement that CCR5 deletion confers in "animal models". Yet if they knew, they'd most likely approve. Chinese parents tend to be particularly ambitious for their kids. Here's one of my worries. Other things being equal, intelligence-amplification is admirable; but intelligence is a contested concept (cf. https://www.hedweb.com/quora/2015.html#iqintelligence). And the kind of intelligence that prospective parents are most likely to want amplified is the mind-blind, "autistic" component of general intelligence captured by "IQ" tests / SAT scores (etc), not social cognition, higher-order intentionality and collaborative problem-solving prowess (cf. https://psychology-tools.com/test/autism-spectrum-quotient). Just consider: would you be more excited by the prospect of becoming the biological parent of a John von Neumann or a Nelson Mandela? Yes, a hypothetical "high IQ" civilisation could potentially be awesome. Science, technology, engineering, and mathematics (STEM) disciplines would flourish. Yet there are poorly understood neurological trade-offs between a hyper-systematizing, hyper-masculine, "high-IQ"/AQ cognitive style and an empathetic cognitive style (cf. https://psycnet.apa.org/record/2018-63293-005). If designer babies were left entirely to the discretion of prospective parents, then a "high-IQ" civilisation would also be a high-AQ civilisation. The chequered history of so-called "high IQ" societies (cf. https://en.wikipedia.org/wiki/High-IQ_society) doesn't bode well. 
Note that I'm not making a value-judgement about what AQ is desirable for the individual or for civilisation as a whole, nor saying that high-AQ folk are incapable of empathy. But our current restrictive conception of intelligence is a recipe for the tribalism that intelligent moral agents should aim to transcend. Full-spectrum superintelligences will have a superhuman capacity for perspective-taking – including the perspectives of the unremediated / unenhanced.
More generally, genetically enhancing general intelligence is technically harder than coding for mood enrichment. The only way I know to create an accelerated biointelligence explosion would be unlikely to pass an ethics committee:
https://www.biointelligence-explosion.com
Ethically, I think our most urgent biological-genetic focus should be ending suffering:
https://www.abolitionist.com
It looks like we're pretty much on the same page here, except where you see a difficult task, I see an impossibility which ought not even be attempted. It's a waste of time and resources, and of course there is the risk factor, of creating extreme division within the human community which I've already mentioned. This is because living beings tend to have very strong feelings concerning the well-being of their offspring, feelings which are not necessarily rational. So to put it bluntly, if you think this process "would be unlikely to pass an ethics committee", why are you discussing it as if it is a viable option? Isn't conspiracy toward something unethical itself unethical?
Variety is a very important aspect of life; I'd argue it's the essence of life. And it is the foundation of evolution. The close relationship between variety and life is probably why we find beauty in variety. Beauty is closely related to good, and the pleasure we derive from beauty has much capacity to quell suffering. This is why there is a custom of giving flowers to people who are suffering.
But on the other side of that spectrum, suffering is just as varied as life is. So the goal of ending suffering through bioengineering is not feasible. This is because such bioengineering endeavours always create uniformity, while the goal of creating difference would produce random monsters. To end all the different sources of suffering would require that all people be the same. I don't think that a thing which has been designed not to suffer could even be called alive.
Sorry, I should have clarified. By an "accelerated biointelligence explosion [that] would be unlikely to pass an ethics committee", I had in mind a deliberate project: cloning with variations super-geniuses like von Neumann (https://www.cantorsparadise.com/the-unparalleled-genius-of-john-von-neumann-791bb9f42a2d); hothousing the genetically modified clones; and repeating the cycle of cloning with variations in an accelerating process of recursive self-improvement. This scenario is different from "ordinary" parents-to-be using preimplantation genetic screening and counselling and soon a little light genetic tweaking. I don't predict an accelerated biointelligence explosion as distinct from a long-term societal reproductive revolution. The reproductive revolution will be more slow-burning. It will most likely start with remedial gene-editing to cure well-acknowledged genetic diseases that almost no one wants to conserve. But humanity will become more ambitious. Germline interventions to modulate pain-tolerance, depression-resistance, hedonic range, prolonged youthful vitality and different kinds of cognitive ability will follow.
Quoting Metaphysician Undercover
Genome editing can create richer variety than is possible under a regime of natural selection and the meiotic shuffling of traditional sexual reproduction. But diversity isn't inherently good. Darwinian life offers an unimaginable diversity of ways to suffer.
Quoting Metaphysician Undercover
I promise Jo Cameron and Anders Sandberg ("I do have a ridiculously high hedonic set-point!") are very much alive. The challenge is to ensure all sentient creatures are so blessed. Well-being should be a design specification of sentience, not a fleeting interlude.
Well, I think the principal issue is that "suffering" is a very broad, general term, encompassing many types. So the questions of what types of suffering ought to be eliminated, and would eliminating some types increase others, or even create new unforeseen and possibly extremely severe types, is very pertinent.
All experience below hedonic zero has something in common. This property deserves to be retired – made physiologically impossible because its molecular substrates are absent. Shortly, its elimination will be technically feasible. Later, its elimination will be sociologically feasible too. A world without suffering may sound "samey". Heaven has intuitively less variety than Hell. However, trillions of magical state-spaces of consciousness await discovery and exploration. Biotech is a godsend; let's use it wisely:
https://www.hedweb.com/quora/2015.html#irreversible
Not my business, but you surely have money. So, get a massage for 24 hours straight, or longer even. Force yourself to be there in a state of pleasure, platonic if possible, but whatever. You'll get bored of it after some time. You've never had a "you day"? Where you just eat, do what pleasures you, are pleasured, etc.? It gets dull after some time. Beyond the natural need for sleep, you just want to feel "normal" or "left alone" after a while. This is human nature. Increasing the hedonic "resting level" does not, at least on my theory, eliminate the hedonic treadmill: if one is aware of greater possibility, one is inclined to seek it with body and mind. If one no longer wishes to strive, would you still call this humanity? What differentiates a group of your imagined transhumans from a group of cell phones plugged in and charged to full capacity?
Not humanity, but transhumanity.
No one ever gets bored of mu-opioidergic activation of their hedonic hotspot in the posterior ventral pallidum. But the genetic or pharmacological equivalent of "wireheading" (a misnomer) is not what I or most other transhumanists advocate. In future, steep or shallow information-sensitive gradients of well-being can be navigated with as much or as little motivation as desired. Indeed, it's worth stressing how hedonic uplift can be combined with dopaminergic hypermotivation to counter the objection that perpetual bliss will necessarily turn us into lotus-eaters.
How is the gap overcome legally? I've been in the nootropics community for about 10 years, and there's a strong desire to advance human cognition, whether with drugs (as with the MAPS association) or with devices like Neuralink. Yet legally it's hard to overcome some of the problems associated with transhumanity and neoclassical legalism in the West...
Yes, the legal obstacles to transhumanism are significant.
For instance, if one is an older person who doesn't want to miss out on transhuman life, then at present it's not lawfully possible to opt for cryothanasia at, say, 75 so one can be cryonically suspended in optimal conditions. If instead you wait until you die "naturally" aged 95 or whatever, then you'll be a shadow of your former self. Your prospects of mind-intact reanimation will be negligible. Effectively irreversible information loss is inevitable.
If you are a responsible prospective parent who wants to choose benign genes for your future children, then currently you can't pre-select benign alleles for your offspring unless you are at risk of passing on an "officially" medically-recognised genetic disorder.
If you are the victim of refractory depression or neuropathic pain, you can't lawfully sign up for a surgical implant and practise "wireheading".
Drugs are another legal minefield. Intellectual progress, let alone the development of full-spectrum superintelligence, depends on developing the study of mind as an experimental discipline: a post-Galilean science of consciousness. After all, one's own consciousness is all one ever directly knows; there's no alternative to the experimental method if one wants to explore both its content and the medium. Classical Turing machines can't help. Digital computers are zombies; despite the AI hype, they can't deliver superintelligence. Likewise, there are tenured professors of mind who are drug-naive. Drug-naivety is the recipe for scholasticism. Most of the best work investigating consciousness is done in the scientific counterculture, not in academia.
That said, dilemmas must be confronted. It's no myth: exploring psychedelics is hazardous: https://www.hedweb.com/quora/2015.html#psychedelics
Informed consent to becoming a psychonaut is impossible. And I'm a hypocrite. Woe betide anyone who tries to stop me taking whatever I choose; yet if I were a parent, then I wouldn't want my children's professors to be permitted to introduce them to psychedelics.
The long-term solution to the hazards of experimentation is genetically programmed invincible well-being. All trips will then be good trips – sometimes illuminating, sometimes magical, but always enjoyable in the extreme. But this scenario is still a pipedream. A moratorium on psychedelic research until we re-engineer our reward circuitry is a moratorium on knowledge.
I don't have a satisfactory answer.
Quoting David Pearce
What makes you think we understand enough to prevent suffering in the whole forward light cone?
To follow up on a question I asked on page 1, after reviewing the material, do you agree with Benatar's Axiological Asymmetry?
Just as, tragically, a few genetic tweaks can make someone chronically depressed and pain-ridden, conversely a few genetic tweaks can make someone chronically happy and pain-free. CRISPR-based synthetic gene drives (cf. https://en.wikipedia.org/wiki/Gene_drive), which defy the naively immutable laws of Mendelian inheritance, allow the deliberate spread of such benign alleles to the rest of Nature even if they carry a modest fitness cost to the individual – a prospect that sounds counterintuitive, even ecologically illiterate. For sure, I'm omitting many complications. But an architecture of mind based entirely on gradients of well-being is technically feasible, with or without smart prostheses.
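To illustrate the inheritance arithmetic at stake: under Mendelian inheritance a heterozygote transmits an allele to roughly half its offspring, whereas a gene drive converts the wild-type copy in heterozygotes, pushing transmission towards 100%. Here is a minimal deterministic sketch of why a mildly costly allele can still sweep through a population; the conversion rate, fitness cost and dominance values are illustrative assumptions only, not drawn from any cited model:

```python
def next_freq(p, c=0.9, s=0.1, h=0.5):
    """One generation of deterministic allele-frequency change.

    p: current frequency of the drive allele
    c: conversion rate in heterozygotes (0 = ordinary Mendelian inheritance)
    s: fitness cost of the drive homozygote; heterozygote cost = h*s
    """
    q = 1.0 - p
    w_dd, w_dq, w_qq = 1.0 - s, 1.0 - h * s, 1.0
    mean_w = p * p * w_dd + 2 * p * q * w_dq + q * q * w_qq
    # Heterozygotes transmit the drive allele with probability (1 + c) / 2
    return (p * p * w_dd + 2 * p * q * w_dq * (1.0 + c) / 2.0) / mean_w

def trajectory(p0, gens, c, s=0.1, h=0.5):
    """Allele frequency over `gens` generations from starting frequency p0."""
    freqs = [p0]
    for _ in range(gens):
        freqs.append(next_freq(freqs[-1], c, s, h))
    return freqs

# Mendelian inheritance (c=0): a costly allele introduced at 1% declines.
mendel = trajectory(0.01, 50, c=0.0)
# Gene drive (c=0.9): the same costly allele heads towards fixation.
drive = trajectory(0.01, 50, c=0.9)
print(f"after 50 generations: Mendelian {mendel[-1]:.4f}, drive {drive[-1]:.4f}")
```

With conversion switched off, classical selection steadily removes the costly allele; with a strong drive, biased transmission overwhelms the fitness cost, which is the mechanism being invoked above.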
Quoting Down The Rabbit Hole
As a negative utilitarian, I agree with (a version of) David Benatar's Axiological Asymmetry. A perfect vacuum would be axiologically as well as physically perfect. I just don't think the asymmetry has the "strong" anti-natalist policy implications that Benatar supposes. The nature of selection pressure means the future belongs to life-lovers. Therefore, NUs should work with the broadest possible coalition of life-affirmers to create a world where existing sentience can flourish and new life is constitutionally happy, i.e. a world based on information-sensitive gradients of bliss. If intelligent beings modify their own source code, then coming into existence doesn't have to be a harm.
I guess what people will want to say/ask/see is: okay, David. Go ahead. Do it with your own kids; do whatever it is, as you say. And let us all openly observe them before any talk of legislation, or anything that involves anybody else, gets started. Reasonable enough, yes?
If humans were to abolish all forms of suffering with technology, as you say, I think the only way of fully accomplishing this would be by constraining the limits of consciousness itself. Just as in the early dystopian novel We, people would have their imaginations removed, or carefully adjusted so that they could only ever imagine pleasurable things.
I know that you could reply that a concept and the affect it invokes are distinct; that there is no logical relationship between them. That goodness/badness are like different colors of paint being applied to a neutral gray.
In which case, humans would be incapable of feeling negative feelings regarding things we usually find important to feel negative feelings towards. For instance, the death of a loved one invokes sadness. Would technologically-enhanced humans feel sadness?
What about other feelings, like that of accomplishment, that require some degree of struggle beforehand? Would there be an "accomplishment pill" that people would take when they want to feel accomplished, or a "love pill" when people want to feel loved (even if they have accomplished nothing, and have nobody to love)?
Would the removal of all forms of negative feelings include feelings that are important for morality? I can imagine a situation in which blissful slaves work constantly, die frequently, all with a happy smile and no sense that what is being done to them is wrong. How would we be compassionate? How would we feel guilt?
Recall I'm a "soft" anti-natalist. I don't feel ethically entitled to bring more suffering into the world, genetically mitigated or otherwise:
https://www.antinatalism.com
Quoting Outlander
It's precisely because creating new sentience does involve someone else that we should try to mitigate the harm:
https://www.hedweb.com/quora/2015.html#antinatal
Quoting David Pearce
Which version don't you agree with?
With a hedonic range of, say, +20 to +50, our imaginations could be stunningly enriched.
Quoting darthbarracuda
I feel entitled to want my death or misfortune to diminish the well-being of loved ones. I don't think I'm entitled to want them to suffer on my account. There can be diminished well-being even in posthuman paradise – although death and aging will eventually disappear, and posthuman hedonic dips can be higher than human hedonic peaks.
Quoting darthbarracuda
Engineering enhanced motivation and a consequent sense of accomplishment is feasible in conjunction with a richer default hedonic tone. The dopamine and opioid neurotransmitter systems are interconnected, but distinct.
Quoting darthbarracuda
Information-sensitive gradients of well-being are consistent with a strong personal code of morality and social responsibility. Depression, undermotivation and anhedonia tend to subvert one's values; conversely, depression-resistance makes one stronger, in every sense.
Quoting darthbarracuda
Low mood is the recipe for subordinate behaviour; elevated mood promotes active citizens:
https://www.huxley.net
Quoting darthbarracuda
Compare a "hug drug" and empathetic euphoriant like MDMA ("Ecstasy"):
https://www.mdma.net
Quoting darthbarracuda
The functional analogues of guilt may be retained; but let's get rid of its ghastly "raw feels":
https://www.gradients.com
David Benatar's version of the asymmetry argument purports to show that existence is always comparatively worse than non-existence. After intelligent moral agents phase out the biology of suffering, this claim will no longer hold.
Sure, and after dark becomes light, it's no longer dark. Anything can be explained away when this "oh, after such and such" move is applied. We need solutions. Concrete results. At least suggestions. You have a vision; that's great, but so does everyone. What will you do in the here and now to see it through?
I'm having a hard time imagining what a hedonic dip would be like that did not involve some form of suffering. How do I remember that times were better without being disappointed with the present moment?
I don't want the people I care about to suffer either, but if nobody cared if I were gone, that would be a very lonely existence. Loneliness that would have to be eliminated with technology. Companionship would not be genuine. If you feel sad when a loved one is gone, that is good, it is good that you feel bad, because it means your relationship was genuine.
It seems like authentic, genuine experiences may not be possible in a world without suffering. Things would no longer have any weight or meaning. Which of course would be a negative feeling that would need to be eliminated. The importance of meaning would be lost, and nobody would even care.
I'm a researcher, not the leader of a millenarian cult with messianic delusions. Yes, just setting out a blueprint of what needs to be done feels inadequate. I'd like to do more. But reprogramming the biosphere will take centuries.
Compare the peaks and dips of lovemaking, which (if one isn't celibate) are generically enjoyable throughout.
Quoting darthbarracuda
Relationships in which one wants a loved one ever to suffer are inherently abusive, another cruel legacy of Darwinian life. Thinking about the biological basis of human relationships can be unsettling, but they are rooted in our endogenous opioid dependence.
Quoting darthbarracuda
The happiest people today typically have the strongest relationships. By contrast, depression undermines relationships not least by robbing the victims of an ability to derive pleasure from the company of friends and family. Critics sometimes say we face a choice between happiness and meaning. It's a false dichotomy. Superhappiness will create a superhuman sense of significance by its very nature:
https://www.hedweb.com/quora/2015.html#david
Was Jesus not a teacher? A leader or dealer in hope? You tell us here that, against all currently possible odds, a heaven of infinite, currently unimaginable bliss awaits if we only listen to you; and if not, a Hell also awaits: a lifetime of Darwinian hell. You two could be brothers at this rate!
Quoting David Pearce
The similarities continue. What did Jesus say to the first follower when asked what they were to do? The response was "Change the world"!
I spoke in jest. But there's a serious point here. Evolution via natural selection is a fiendish engine for spreading unimaginable suffering. But selection pressure has thrown up a cognitively unique species. Humans are poised to gain mastery over their reward circuitry. Technically, we could phase out the biology of suffering and reprogram the biosphere to create life based on gradients of super-bliss – yes, "Heaven" if you like, only much better. What's more, hedonic uplift doesn't involve the proverbial "winners" and "losers". Biological-genetic elevation of my hedonic set-point doesn't adversely affect you any more than elevation of your hedonic set-point adversely affects me. Contrast the zero-sum status games of Darwinian life ("Hell"). Anyhow, a genetically-driven biohappiness revolution deserves serious scientific and philosophical critique. A big thank you to the organizers of The Philosophy Forum. But to answer your point, this transhumanist vision of post-Darwinian life ("Heaven") also deserves a larger-than-life billionaire or charismatic influencer to take these ideas mainstream.
Yet it created you, did it not? So, it is therefore now, in addition to this, an incredible engine for spreading unimaginable bliss, if your ideas are to be believed. Some bite the hands that feed them, but are you not attempting to amputate it altogether?
This should be called hyperhumanism!
Hell has an escape-hatch. Reaching it is a daunting challenge. But biotech offers tools of emancipation. Maybe posthumans will indeed enjoy eons of indescribable happiness. I certainly hope so. But Darwinian life is monstrous. No amount of bliss can somehow morally outweigh such obscene suffering. I wish sentient malware like us had never existed. It's not a fruitful thought, I know. May posthumans be spared such knowledge.
"Hyperhumanism" might be a more reassuring brand than transhumanism. But the pain-pleasure axis discloses the world's inbuilt metric of (dis)value. It's not some species-specific idiosyncrasy. By contrast, fear of death may be peculiar to a handful of intelligent animal species. Fear and death alike will eventually be preventable.
So they say. But compare:
https://www.huxley.net/
What reason is there to want to prevent death, if not for a fear of it?
Bereavement and the loss of loved ones cause immense heartache.
I could have said much more, e.g. about personal identity (or rather its absence) over time:
https://www.hedweb.com/quora/2015.html#parfit
However, I may be misunderstanding your worry; if so, apologies. Please let me know!
Will God-like superintelligences be akin to Nietzschean Übermenschen – contemptuous of the weak, the vulnerable and the cognitively humble? Or does superhuman intelligence entail a superhuman capacity for perspective-taking and empathic understanding? For sure, talk of an expanding circle of compassion can make proponents sound naive. Most students of history or evolutionary psychology will be sceptical of moral progress too. But we can't just assume that God-like superintelligences will be prey to the egocentric illusion. Ultimately, egocentrism is no more rational than geocentrism – and presumably destined to go the same way:
https://www.hedweb.com/quora/2015.html#purpose
I think this is very well said and summarizes my own basic question of what the foundational ethical theory is used or implied to justify the transhumanism project.
If there's a framework taken for granted -- such as one of the "big tents" like utilitarianism or kantianism or even Nietzscheism or a variation on post-modernism (none of which, insofar as the label is concerned, might give a clear idea, but it would point in a general direction and terminology) -- it would give some context to the underlying moral or ethical purpose transhumanism is addressing.
Of course, I'd also have no issue with the idea of researching the subject as a technological tool, without any moral judgements about its proper use, if any (i.e. just doing the objective scientific investigation) – just as a nuclear physicist may say their research does not imply a judgement that nuclear bombs, or even nuclear reactors, should be built, and if so in what context (the moral and political questions are complex, and "science" doesn't take a position on them). But in reading parts of the conversation it definitely seems there is an underlying moral and political project, and one in clear opposition to specific philosophies (such as the evangelical Christians) "standing in the way of progress". In other words, it's clear what the "others" are that this project is against, but it does not seem clear what this project is really for.
Not that I am defending the evangelical or otherwise conservative Christians (I criticize them harshly all the time), but if one takes specific and clear issue with one world view, it is my disposition that one should hold a clear and specific position of one's own from which to make clear and specific criticisms (if one's criticisms are used to justify one's own project, and are not just critical thinking for its own sake without pointing to an alternative, "better" position).
As with other posters, the answers to "why is it good?" seem vague.
The claim: aging, death, disease, cognitive infirmity and indeed involuntary suffering of any kind are wrong. A transhumanist civilization of superlongevity, superintelligence and superhappiness can overcome these ancient evils. If we act wisely, then future life will be sublime.
Apologies, you're right. The moral underpinnings of the transhumanist project as outlined are indeed poorly defined. Yet there's another problem. In my answers, I've glossed over differences in personal values among extremely diverse transhumanists:
https://www.transhumanist.com/
The transhumanist movement embraces moral realists and antirealists; the secular and the religious; negative, classical and preference utilitarians; ethical pluralists and agnostics; and folk whose views defy easy categorisation. Only a minority of transhumanists share my NU conviction that our overriding moral obligation is to mitigate, minimise and prevent suffering – everything else being icing on the cake:
https://www.hedweb.com/hedethic/sentience-interview.html
What transhumanism isn’t is gung-ho technology-worship. Everyone acknowledges that the potential risks of AI and biotechnology are far-reaching:
https://www.hedweb.com/quora/2015.html#downsides
But we’ll shortly have the technical tools to create posthuman paradise: quasi-eternal life animated by gradients of intelligent bliss. On some fairly modest assumptions, life in a "triple S" civilisation will be orders of magnitude better than life today.
Maybe all that is possible, but there's a lot of science and technology that needs developing before super-longevity could be sustainable. Whether or not the future is sublime will not depend primarily on CRISPR. It will depend primarily on energy technology, carbon capture and storage, desalination, irrigation and recycling technology being applied first. Or do you suggest that people can be blissfully happy with the sky on fire?
Maybe that is your suggestion - and herein lies the question: how far would you go with the genetic toolkit to survive in a world where you've developed genetics to a fine art, but let the environment run to ruin? Alligator skin?
David, thou art a gentleman and a scholar of the highest degree. Here and now, that is. Your presence and engagement here are appreciated beyond words. I would much enjoy sharing a few drinks with you at your preferred pub in your native England, what's left of it, that is. That said, if you don't mind, what are your responses to the following criticisms of the words, or rather the common understandings and generally assumed beliefs behind them:
Aging: Is this not proof of progress? Proof of the journey at the destination? A fine wine is aged, and is worth a considerable amount. Perhaps you refer to the physical and biological effects of aging being essentially decreased ability and fortitude. Well, in a race with no timer, why bother to move?
Death: I view death as a necessity in this world, not as you might think however but more of an escape hatch. For example, being trapped in a cave. In such a scenario one would rather starve to death than sit there for eternity. It's complicated, though without death what appreciation would there be for life really, would it not simply be just another prison?
Disease: I'm a firm believer in overpopulation, that things happen for a reason. If I were to be infirmed myself in a clinical setting perhaps I would be encouraged to rethink this, if not revealed as somewhat of a hypocrite. What doesn't kill you makes you stronger, and what one brings upon oneself (this is to exclude random cases of illness) one should learn from, or at least desire to pass on a lesson.
Cognitive infirmity: While I don't have an effectual or "actual" philosophical counterargument or "justification" for it, as it's a simple biological reality, I'll offer an anecdote. Well, a sentence from which one can be derived. There are two water guns: the first kind, which gradually lose pressure as the tank depletes, and the newer ones (granted, from a score ago) that maintain constant pressure (CPS, "constant pressure system") until the tank is actually depleted. Which would you prefer?
Involuntary suffering: Welcome to Earth or life. Is it a blessing or curse? A reward or a sentence? Either way we're all going to be here for possibly a while, best do your time. There's no true escape, David. Other than to avoid true eternal bliss.
--
Onto your three supers:
Longevity: Again, I challenge you to be alone or perhaps even with friends (or more realistically mixed company who hold opposing views to yours, as this is reality) for an extended period. Everything is fine, no one has to worry about anything, all needs are met, and it's just you and other people. How long could you take it? Until you do this, some may say, accurately I might add, you're just talking or blowing smoke. A non-religious person (i.e. when you die, you're dead and cease to exist in any possible, real or imagined form or state of existence) of course would cleave to this like a frightened lamb in the presence of a pack of wolves... and this is the only reality that can be proven, so is thus understandable, but what if we don't really know all there is to know? Can you not be wrong?
Intelligence: Simply put, they say ignorance is bliss. Perhaps they could be wrong. Perhaps you may be. Let me ask you, what were the best times of your life? I'll wager money that it was discovering new things, emotions, or moments. You know I'm right. What fun is a game you've already beaten or a book you've already read compared to one you haven't?
Happiness: What is happiness? Why would one seek or even value peace if one could never pursue or make war? Love if one never knew hate or fun if one never knew boredom? Duality I believe some speak of. They may have it wrong or corrupted but perhaps they don't. What makes you happy David, if you don't mind sharing? What makes you sad? Why is that? If either were removed (incidents, experiences, or knowledge behind said emotions), would you feel the same toward the other?
Again, these are just criticisms meant to prod your response and beliefs on the matter, a response which I am eager to receive. "Just curious", as they say.
There's no tension between radical life-extension, genetic mood-enrichment and responsible stewardship of Earth. For instance, one reason that many people are unwilling to accept even modest personal inconvenience to tackle global warming is the assumption they won't be around personally to suffer the consequences. Let's face it, a 3mm rise in mean sea level each year doesn't sound too alarming unless you happen to live on a low-lying island or a coastal floodplain. Therefore, willingness to accept tax-hikes for the benefit of posterity is limited. By contrast, indefinite youthful life-spans would radically lengthen our normal time-horizons. Moreover, troubled people aren't necessarily more environmentally-conscious than unusually happy people. Indeed, other things being equal, the happiest people probably tend to care most about conserving what they conceive as our beautiful planet. After all, paradise (cf. "A Perfect Planet": https://en.wikipedia.org/wiki/A_Perfect_Planet) is usually reckoned more worth preserving than purgatory.
In short, I know of no tradeoff.
Thanks for the kind words. They are very much appreciated. A lot to chew on. Let me start with your question:
Quoting Outlander
Knowledge. The suffering in the world (more strictly, the universal wavefunction) appals me. I long for blissful ignorance. Alas, it would be irresponsible to urge invincible ignorance until all the ethical duties of intelligence in the cosmos have been discharged.
Until then, knowledge is a necessary evil.
I have wondered about the psychological readiness of human beings to accept death as an inevitable consequence of living, where evil is a disregard for life.
It seems strange that people seem to accept these facts as tautological, and instead continue saving money in a bank account instead of investing it in something so fundamental as to live longer or potentially for as long as possible.
Why is this all so?
A high capacity for self-deception is probably critical to what now passes for mental health. Until recently, helping people rationalise aging, death and suffering was wholly admirable: nothing could be done about the "natural" order of things. Despite my dark view of Darwinian life (cf. "Pessimism Counts in Favor of Biomedical Enhancement: A Lesson from the Anti-Natalist Philosophy of P. W. Zapffe" by Ole Martin Moen: https://link.springer.com/article/10.1007/s12152-021-09458-8), I urge opt-out cryonics, opt-in cryothanasia, and a massive global project to defeat the scourge of aging.
One tool of life-extension is mood-enrichment. Happy people typically live longer.
So, it's essentially pessimism, right? What about being not optimistic about it all, but rather pragmatic, in stating that aging should be lessened or decreased?
In the grand picture of things, 70-80 years is minuscule for a species to collectively survive or undertake grand projects like space exploration or multiplanetary colonies, yes?
Yes. Human lifespans are inadequate for interstellar travel, let alone galactic exploration. Human lifespans are inadequate for investigating the billions of alien state-spaces of consciousness accessible to exploration by future psychonauts. Only the drug-naïve (cf. John Horgan's The End of Science (1996)) could believe that the world's greatest intellectual discoveries lie behind rather than ahead of us. I won't pretend the pursuit of knowledge is my motivation for wanting humanity to defeat aging. But then most people – and certainly most transhumanists – aren't negative utilitarians.
Wow, can you imagine the boredom of being in a spaceship flying to another galaxy? To see what? There must be better reasons for wanting an extended life than this. But what are they exactly? If we remove all suffering, doesn't the extended life just turn into one long boring flight to nowhere? Might as well be an eternal brain in a vat.
I'm inclined to agree. If we accept the contention of Rare Earthers that the rest of our galaxy is lifeless, then the allure of interstellar travel may pall. Granted, the biology of boredom is easier to retire than the biology of aging. Extrasolar space travel doesn't have to consist of decades or centuries of tedium. Even so, what's the point of it all? If lifeless rocks appeal to your sensibilities, then why not live in a barren desert closer to home?
Perhaps it could be argued that vacant ecological niches tend to get filled. It's plausible that an advanced posthuman civilisation will decide to use AI to optimise matter and energy within its cosmological horizon. But here we are well into the realm of science-fiction.
Quoting Metaphysician Undercover
Here it's possible we may differ. The suggestion one sometimes hears that we should conserve suffering because "heaven" would be tedious is ill-conceived:
https://www.hedweb.com/quora/2015.html#heaven
Perhaps consider the most intensely rewarding experiences of human life. They are experienced as intensely significant by their very nature. Thus no one says, "I feel sublimely happy but my life feels empty and meaningless." Replacing the biology of misery and malaise with life based on information-sensitive gradients of bliss will also solve the problem of meaninglessness. Indeed, despite being a pessimistic, button-pressing negative utilitarian, my tentative prediction is that the least meaningful moments of posthuman life will be experienced as vastly more significant than any human "peak experience" in virtue of their richer hedonic tone.
"Transcendent" meaning is a different question; it's not clear that the idea is even intelligible. But just as I anticipate the world's last unpleasant experience will be a precisely dateable event a few centuries (?) from now, likewise the last time anyone reflects that life is meaningless is likely to be a precisely dateable event too. The end of the Darwinian era will be a moral and intellectual watershed in every sense.
Let's take an example then, competition. Winning a competition is one of the most intensely rewarding experiences for some people. Even just as a spectator of a sport, having your team win provides a very rewarding experience. But we can't always win, and losing is very disappointing. How do you think it's possible to maintain that intensely rewarding experience, which comes from success, without the possibility of disappointment from failure? It seems like a large part of the rewarding feeling is dependent on the possibility of failure. We can't have everyone winning all the time because there must be losers. And there would be no rewarding experience from success, without the possibility of failure. How could there be if success was already guaranteed?
"It's not enough to succeed. Others must fail", said Gore Vidal. “Every time a friend succeeds, I die a little.” Yes, evolution has engineered humans with a predisposition to be competitive, jealous, envious, resentful and other unlovely traits. Their conditional activation has been fitness-enhancing. In the long run, futurists can envisage genetically-rewritten superintelligences without such vices. After all, self-aggrandisement and tribalism reflect primitive cognitive biases, not least the egocentric illusion. Yet what can be done in the meantime?
Well, one of the reasons I've focused on hedonic uplift and set-point recalibration is that the dilemmas of social competition can be side-stepped. Depressives and hyperthymics alike prefer winning to losing. But if you're an extreme hyperthymic, losing doesn’t cause your hedonic tone to dip below zero, just as if you’re a chronic depressive, winning doesn't raise your hedonic tone above zero. So at the risk of sounding like a crude genetic determinist, I say: let's aim to create a hyperthymic society via some biological-genetic tweaking.
I've outlined possible routes to explore such as ACKR3 receptor blockade and kappa opioid receptor antagonism; routine preimplantation genetic screening and counselling for prospective parents; germline gene-tweaking of FAAH and FAAH-OUT genes (etc); and a whole bunch of stuff to help nonhuman animals. If society puts as much effort and financial resources into revolutionising hedonic adaptation as it's doing to defeat COVID, then the hedonic treadmill can become a hedonistic treadmill. Globally boosting hedonic range and hedonic set-points by biological-genetic interventions would certainly be a radical departure from the status quo; but a biohappiness revolution is not nearly as genetically ambitious as a complete transformation of human nature. And complications aside, hedonic uplift doesn't involve creating "losers", the bane of traditional utopianism.
So this "intensely rewarding experience" which we get from succeeding in competition, you designate as seated in a vice, or vices. This would mean that it is a bad rewarding experience which ought to be eliminated. But on what principles do you designate some rewarding experiences as associated with vices, and some as associated with virtues? I would think that if you want to eliminate some such intensely rewarding experiences, and emphasize others, you would require some objective principles for distinguishing the one category, vice, from the other, virtue.
Quoting David Pearce
I think I've already mentioned the problem with this perspective. That is the divisiveness that such a proposal (which you admitted might be unethical) would induce. Global cooperation is not facilitated without consistent belief. Look at the issue of climate change for example, and even an immediate threat to the lives of many, like COVID, does not obtain unanimous consent to the designated required response. You might find a good example of global cooperation with the issue of CFCs and the ozone layer. That was a serious issue which seemed to obtain global cooperation.
However, it appears to me like such cooperation is more likely to be obtained in the face of serious evil, rather than the effort to obtain some designated good. So I feel like the challenge to you would be to demonstrate that failing to follow your proposed program would be a great threat to humanity. I perceive three levels of attitude toward action, or inaction, in relation to such a proposal. There are those who say "do it", and may start such an action, those who say "do nothing" (status quo), and those openly opposed to doing it. It seems like those who say "do it" have a huge task to persuade the others, and bring them onboard, which must be carried out prior to starting any such action. This would require a huge effort of education and some very strong principles. That is because starting any action without first persuading the others, logically would shift those in the "do nothing" group over to the "openly opposed" group.
Competing against earlier iterations of oneself or an insentient AI doesn't raise ethical problems. More controversial would be competing in zero-sum games against other (trans)humans where losing causes a drop in the well-being of one's opponent without their ever falling below hedonic zero. Such competition is problematic for the classical utilitarian, but not for the negative utilitarian. However, what I'd argue is morally indefensible is demanding that the loser involuntarily suffers when experience below hedonic zero becomes technically optional. Contemplating the pain of a defeated opponent sharpens the relish of some winners today. Let's hope such ill will has no long-term future.
An antirealist about value might contest even this fairly modest principle. My response to the antirealist:
https://www.hedweb.com/quora/2015.html#metaethics.
But as I said, emphasizing hedonic uplift and set-point recalibration over traditional environmental reforms can circumvent most – but not all – of the dilemmas posed by human value-systems and preferences that are logically irreconcilable.
Quoting Metaphysician Undercover
If depression isn’t a serious evil, then I don’t know what is – human “mood genes” are sinister beyond belief. Anyhow, governance by philosophers isn't imminent. Nor is rule by transhumanists, though transhumanist memes appear to be spreading. Sadly, I don't foresee what I'd like to materialise – a Hundred-Year Genetic Plan of worldwide hedonic uplift and recalibration under the auspices of the WHO to fulfil the goal of its founding constitution. What’s more credible is genome-editing to tackle well-recognised monogenetic diseases followed by interventions to tackle a genetic predisposition to abnormal pain-sensitivity, low mood and other forms of mental ill-health. Yes, I find this a disappointingly slow prospect. All of what today pass as enhancement technologies will be recognised by posthumans as remediation.
Wild cards? Well, part of me yearns for the hedonic equivalent of https://en.wikipedia.org/wiki/Great_man_theory
– a visionary politician who takes a biohappiness revolution from the margins to the mainstream. Alas, I’m not holding my breath.
Mood genes? Sure - that's what it is! Are you saying I don't have good reason to be depressed? Are you saying that, in the same circumstances, this isn't necessarily how I should feel about things? Now I don't know how I feel about anything, and yet I seem evolved to emotionally navigate a complex environment quite well. I'm not happy about the state of that environment, but imagine that we looked first to the most fundamental implications of a scientific worldview, applied technology to harness limitless clean energy from magma, sequestered carbon, desalinated water to irrigate land for farming and habitation - away from forests and river valleys, recycled, farmed fish. Are you saying I would still be depressed because of my mood genes? I could be happy! I'm not, but I think I could be!
Everything seems to come to the final conclusion that the human species/humanity is not perfect, and has in fact made a lot of failures & slow progress in everything. That's why I'm thinking that the next evolution should, or even must, be Transhumanism, to break away from all the disappointing limitations of reality.
My question is simple but very urgent/important one:
How can I, as just an ordinary person, contribute to quicken the progress of Transhumanism?
Also, do you think Transhumanism will have any possibility to finally become mainstream in public?
How can we really make sure that Transhumanism will really work, instead of failing or eventually diminishing & slowly disappearing as if it never existed, considering the short attention span of our human species/humanity/mankind?
Thank you, and looking forward to your responses.
Niki, awesome, would you consider getting your own website / YouTube channel with a version in bahasa Indonesia? People tend to be more receptive to a new idea if the message is conveyed in their native language.
Quoting niki wonoto
It's been well said that humans tend to overestimate the effects of change in the short-run and underestimate its effects in the long run. Yes, all this grandiose talk about a glorious future civilisation of superintelligence, superlongevity and superhappiness may ring a little hollow when one is forced to confront the problems of everyday life – bills to pay, chores to do, and the messiness of interpersonal relationships. But Darwinian life as we understand it has no long-term future.
Quoting niki wonoto
Cognitive frailty, aging, death and all manner of physical and psychological suffering is "part of what it means to be human". But biotech and IT will shortly make such horrors optional. I don't want to sound like a naïve technological determinist, but just consider: if offered the chance to become immensely smarter, happier and indefinitely youthful, how many people will prefer to be intellectually handicapped, malaise-ridden and decrepit?
At the risk of tempting fate, I'll say it: history is (probably) on our side.
Many completely paralysed people with "locked in" syndrome suffer terribly. But the high genetic loading of default hedonic tone together with the negative-feedback mechanisms of the hedonic treadmill mean that a large minority if not a majority of locked-in patients report being happy:
https://www.newscientist.com/article/dn20162-most-locked-in-people-are-happy-survey-finds/
Conversely, some able-bodied people "have it all", yet they are chronically miserable. I'm not trying to downplay the importance of social, economic and political reform in making the world a better place – or protecting the environment. Yet if we're ethically serious about solving the problem of suffering, we'll need to tackle its genetic source.
I've endured months of attacks by subjectivist fundamentalists for suggesting there's truth value to scientific knowledge; and a good part of that, I suspect, is inspired by a religious or spiritual disdain for science. Indeed, I believe Descartes was inspired by Galileo's arrest and trial to drop physics and develop subjectivist philosophy in line with a religious emphasis on the spiritual and de-emphasis of the mundane. Subjectivism has been promoted philosophically, to the cost of the objective - and this is the root cause of the climate and ecological crisis. With me so far?
So, I'm trying to convince people who've got 400 years worth of religious and philosophical reasons to suspect science of heresy, that on the contrary, science can be trusted as a rationale to tackle the climate and ecological crisis.
Do you not see how you confirm the worst fears of the regressives - by what seems to me, a castle in the air - with enormous, terrifying implications you seem almost deliberately unaware of - even when asked about them repeatedly. You say:
Quoting David Pearce
But you are undermining science as a rationale with your Frankenstein-esque suggestions, that we genetically engineer ourselves into a race of supermen, while ignoring the moral, social, political, economic and environmental implications of using science in such a way. You propose genetically enhanced longevity for example, and do not seem to realise that longevity would be problematic in all sorts of ways, not least, environmentally.
If you accept science is true, you have to approach the problem of the future systematically, and that begins with energy, not with:
Quoting David Pearce
I am ethically serious about affording our species the chance of a future; and suggest that is the first ethical priority implied by a scientific worldview. I don't know where deliriously happy designer babies that live forever comes on such a list of scientifically rational ethical priorities, but I'm pretty sure limitless clean energy from magma is logically prior in the order condescendi.
In what sense is aiming to phase out the biology of suffering "Frankenstein-esque"? Either way, the biggest obstacle to tackling man-made climate change and environmental degradation is short-termism. Yes, you're right, creating a world where people don't crumble away and perish has implications for the environment. But the impact won't necessarily be as pessimists suppose. Crudely, if you think you're going to be around for hundreds or thousands of years (or more), then you are more likely to care about the long-term fate of the planet than if you reckon it will be someone else's problem.
Quoting counterpunch
But transhumanists do not advocate "deliriously" happy designer babies. Delirium is inimical to cognition. Rather, they urge information-sensitive gradients of well-being. Intelligence-amplification is one of the core tenets of transhumanism.
It's not the pain of the opponent which I am talking about here, it's the aspect of the pleasure derived by the winner, which is produced by knowing that the pain of losing has been avoided. So the winner does not wish ill-will on the loser, only attempting to avoid the potential of the pain for oneself. That's "sportsmanship": you do not intend ill-will on the opponent, only the best for yourself. But the game is designed such that there is a loser. In the competition, all competitors know that someone (or some team) will suffer the pain of loss. In good sportsmanship, it is the goal of the competitors to win and avoid such pain. It is not their intent to inflict pain on others. The joy in winning is intensified not by the thought that others are in pain, but by knowing that the pain of loss, for oneself, has been avoided.
So, competing against earlier iterations of oneself or an insentient AI does not address the issue, because we still must allow for the possibility that one loses, and therefore suffers from the loss. Replacing the opponent with an AI does not remove the necessity for the possibility of loss, and the consequent pain and suffering. The issue here is that much joy and happiness, and the drive, motivation, or ambition for success, comes from the desire to avoid the pain and suffering caused by failure. If we remove that pain and suffering, extinguish the possibility of failure, make the AI always lose no matter what, or whatever is required to negate the possibility of suffering, then there is no drive or ambition to better oneself.
Quoting David Pearce
So what would be the point to continually inducing the joy and pleasure of winning in a person, without requiring the person to actually compete and win, or even do anything, to receive that pleasure? If it is not required to do the good act, to receive the pleasure of doing a good act, then when is anyone ever going to be doing anything good?
But to return to the earlier example of playing chess, one can fanatically aspire to improve one's game and play to win even though one will invariably lose. I know of no reason why the "raw feels" of experience below hedonic zero need conserving. After all, our intelligent machines don't need to suffer in order to become smarter or competitively more successful. In future, suffering will be redundant for (trans)humans too.
Quoting Metaphysician Undercover
Two invincibly happy (trans)humans can play competitive chess against each other and both improve their game. Honestly, I don't see the problem!
Quoting David Pearce
Have you ever suffered, David? Ever noticed or experienced hardship enough to motivate you to do something... oh, apparently you have, as this is the purported moral and intellectual basis of this movement of yours. Where would you be without these occurrences? How impactful do you think they were for spurring positive change? Apparently very, if your mission is so dire. So tell me. What motivation, drive, and desire will others be expected to have in your envisioned world? Any drive to even get out of bed? Any at all? Some may argue this idea damns those confined by it to an even worse fate than the current mitigated Darwinian hell we have (civilized society, manners, rules, occasional decency, etc.). Your response?
What's in question isn't whether suffering in all its guises can sometimes be functionally useful; it sure can. Rather, what needs questioning is the widespread assumption that the "raw feels" of suffering are computationally indispensable. If the indispensability hypothesis were ever demonstrated, then this result would be a revolutionary discovery in computer science: https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis
For what it's worth, I hold the distinctly unorthodox view that phenomenal binding is non-algorithmic. What's more, the phenomenal unity of experience is often massively adaptive for biological minds. But we need only study the happiest hyperthymics to realise that suffering isn't indispensable either to cognition or to motivation. Neither is happiness, though happier people tend to be better motivated. See too the work of Kent Berridge on the double dissociability of "liking" and "wanting" (cf. https://sites.lsa.umich.edu/berridge-lab/research-overview/neuroscience-of-linking-and-wanting/), mu-opioidergic hedonic tone and dopaminergic motivation.
In short, information-sensitive hedonic gradients are the key to intelligent, motivated behaviour – irrespective of one's absolute location on the pleasure-pain axis. And ethically, information-sensitive gradients of bliss are surely preferable to gradients of misery and malaise, the plight of so many sentient beings alive today.
I don't agree. One cannot play to win if the person knows that winning is impossible. In a similar way one cannot get the same enjoyment from winning if the person knows that losing is impossible. So there must be the real possibility of losing (suffering) if winning is to be enjoyable.
Quoting David Pearce
The point is that one must lose, and suffer from the loss, if the other is to win and obtain the enjoyment of winning. If we remove the winning and losing from the game, we can't call it a competitive game.
Quoting David Pearce
The issue, in my mind, is not whether suffering is indispensable, but the question of whether we can have gain without the possibility of suffering. If it is the case, as I believe it is, that all actions which could result in a gain, also run some risk of loss, and loss implies suffering, then to avoid suffering requires that we avoid taking any actions which might produce a gain. But if gain is necessary for happiness, and this is inevitable due to biological needs, then the goal of happiness cannot include the elimination of suffering. Therefore the goal of eliminating suffering must have something other than happiness as its final end. What could that final end be? If eliminating suffering is itself the final end, but it can only be brought about at the cost of eliminating happiness, then it's not such a noble goal.
Perhaps consider e.g.
https://www.theguardian.com/technology/2017/dec/07/alphazero-google-deepmind-ai-beats-champion-program-teaching-itself-to-play-four-hours
Competition doesn't inherently involve suffering. I'd love to win against the program I play chess against, but losing never causes me to suffer. Granted, the idea of zero-sum games, and the pain caused by losing to a human opponent, is so endemic to Darwinian life that it's easy to imagine that it's inherent to competition itself. But life doesn't have to be this way. Hedonic uplift can be combined with recursive intellectual self-improvement. Either way, we should all be free to choose. As biotech matures, no one should be forced to endure the substrates of involuntary suffering.
Quoting David Pearce
Do you think you'd enjoy it more if you put it on a slightly lower difficulty?
I'm sceptical! Either way, I think the point stands. The end of suffering isn't tantamount to the end of competition, let alone the end of intellectual progress. Sentience deserves a more civilised signalling system.
The way I see it is that as human beings, we are first and foremost animals. That's what defines us, although we like to separate ourselves from the other animals, to say that we're somehow a special type of animal. Let's call that specialness "civilised". If we're already civilised, then what could it even mean to suggest making us more civilised? If civilised is a general category, then all sorts of particular instances qualify as civilised, and what would make one "more" civilised than another? If we are not yet civilised, then what really does "civilised" mean? Distinguishing us from the other animals is not even justified now. Unless we answer these types of questions, assuming that such and such is "more civilised" is simply an unjustified assumption.
It’s uncivilised for sentient beings to hurt, harm and kill each other. It’s uncivilised for sentient beings to undergo involuntary pain and suffering – or any experience below hedonic zero. The nature of mature posthuman civilisation is speculative. But I reckon the entire Darwinian era will be best forgotten like a bad dream.
Involuntary pain and suffering most often arise in accidental ways: mistakes, and not knowing the potential source and how to prevent it. I do not think it is possible to eliminate the possibility of such pain and still remain living beings. Nor do I think we ought to attempt to eliminate the possibility of such involuntary pain, as this possibility is what inclines us to think, and to develop new epistemological strategies for certainty.
What I think is a far more significant and important issue is the matter of voluntarily inflicting pain and suffering on others. If your goal is to manipulate the human being towards a more civilized existence, then the propensity for human beings to mistreat others is what you ought to focus on, rather than the capacity for pain. See, you appear to be focused on relieving the symptoms, rather than curing the illness itself.
Does suffering define what it means to be human? (cf. "A World Without Pain: Does hurting make us human?" https://www.newyorker.com/magazine/2020/01/13/a-world-without-pain)
If so, then very well: let us become transhuman. I think you'd be on much firmer ground if you focused on the potential pitfalls of life without psychological and physical pain. They are legion. Done ineptly, an architecture of mind based entirely on gradients of bliss isn't just potentially risky for an individual, but a leap into the unknown for civilisation as a whole. Hence the need for exhaustive research.
Quoting Metaphysician Undercover
Genetically eradicating the predisposition to suffer is more than symptomatic relief; it's a cure. I'm as keen as anyone on improving human behaviour to humans and nonhuman animals alike. We are quasi-hardwired to cause suffering – and to suffer ourselves in turn. And it's not just enemies who cause grief to each other. See e.g. The Scientific Reason Why We Hurt The Ones We Love Most – https://www.huffingtonpost.co.uk/entry/aggression-research_n_5532142.
In the long run, maybe universal saintliness can be engineered. But the goal of eradicating suffering is modest in comparison.
"When you observe nature you will learn it is the pleasure of the bee to gather the honey from the flower"
What are we not doing here and now? Earth is a.. vessel of a sort. Perhaps we may be going somewhere, perhaps not. Is the Sun the greatest gravitational body in the Universe? Not by far.
To see what? Is the reveal not the greatest part of the trick? As you said, ignorance is bliss, mystery is the spice of life, and knowledge is Hell. Perhaps?
You sound passionate, David. I've asked this before, and perhaps you may even feel annoyed at responding again, but let this, if nothing else, be a rhetorical question.
What interests you? Why is that? Perhaps because there is a problem to be solved. Imagine sitting in a room full of "solved" or completed Rubik's Cubes. Would you not wish for someone, if not yourself, to twist one toward unexpected parameters? I once again challenge you to try this setting for yourself. And perhaps you may see, there is fire and water for a reason.
Alien state-spaces of consciousness interest me:
https://www.hedweb.com/funpages/sasha-dave.html
A drug-naïve conceptual scheme recognises only two state-spaces, dreaming and waking consciousness. There exist trillions of unexplored state-spaces of consciousness, mutually unintelligible and incommensurable, in part at least. Whereas lucid dreamers have at least glimmerings of an understanding of waking consciousness, the drug-naïve have no insight into what they lack, nor even the privative terms with which to name their ignorance. A post-Galilean science of mind is still a pipedream.
So why stick to wordy scholasticism rather than become a full-time psychonaut? Essentially, dark Darwinian minds can't safely explore psychedelia, nor responsibly urge others to try. For now, most knowledge of reality must remain off-limits. Scientists play around with the mathematical formalism (all science formally reduces to the Standard Model and General Relativity), but science has no real understanding of most of the solutions to the equations. Abolishing the molecular signature of experience below hedonic zero can allow serious investigation of the properties of matter and energy rather than idle word-spinning. The idea that abolishing suffering will abolish intellectual challenge or the growth of knowledge is a myth. Billions of years of investigation lie ahead. But let's investigate the properties of Heaven rather than Hell. And let's first discharge our moral obligation to eradicate suffering:
https://www.abolitionist.com
The question is: What, in your opinion, lies beyond happiness?
Transhumanists are working under a hedonic assumption. The journey along those lines began at the dawn of human civilization, and transhumanists, benefiting from roughly 7 to 8 thousand years of effort, haphazard though it was, have now provided the clearest picture of what, to use a religious term, heaven would look like. What lies beyond heaven?
Posthuman heaven is probably just a foretaste of the wonders in store for sentience. Humans don’t have the conceptual scheme to describe life in a low-grade heavenly civilization with a hedonic range of +10 to +20, let alone a mature heaven with hedonic architecture of mind that spans, say, +90 to +100. The puritanical NU in me sometimes feels it’s morally frivolous to speculate on Heaven+ or Paradise 2.0. Yet if theoretical physicists are allowed to speculate on exotic states of matter and energy, then bioethicists may do so too – and bioethicists may have a keener insight into the long-term future of matter and energy in the cosmos.
But first the unknowns. Neuroscience hasn’t yet deciphered the molecular signature of pure bliss, merely narrowed its location to a single cubic millimetre in the posterior ventral pallidum in rats, scaled up to a cubic centimetre in humans. Next, neuroscience hasn’t cracked the binding problem: https://www.hedweb.com/hedethic/binding-interview.html
Unless you’re a strict NU, there’s not much point in creating patterns of blissful mind-dust or mere microexperiential zombies of hedonistic neurons. Rather, we’re trying to create cognitively intact unified subjects of bliss endowed with physically bigger and qualitatively richer here-and-nows. And next, how will tomorrow’s bliss be encephalised? The Darwinian bric-a-brac that helped our genes leave more copies of themselves in the ancestral environment of adaptation has no long-term future. The egocentric virtual worlds run by Darwinian minds may disappear too. But the intentional objects and default state-spaces of consciousness of posthumans are inconceivable to Darwinian primitives. Next, how steep or shallow will be the hedonic gradients of minds in different ranks of posthuman heaven? I sometimes invoke a +70 to +100 supercivilisation, but this example isn’t a prediction. Rather, a wide hedonic-range scenario can be chosen to spike the guns of critics who claim that posthuman heaven would necessarily be less diverse than Darwinian life with our schematic hedonic -10 to 0 to +10. Lastly, will the superpleasure axis continue to preserve a signalling function? Or will mature post-posthumans opt to offload the infrastructure of superheaven to zombie AI, and occupy hedonically “perfect” +100 states of mind indefinitely?
Talk of “perfection” is again likely to raise the hackles of critics worried about homogeneity. But a hedonically “perfect” +100 here-and-now can have humanly unimaginable richness. Monotony is a concept that belongs to the Darwinian era.
The above discussion assumes that advanced posthumans won’t be strict classical utilitarians who opt to engineer a hedonium/utilitronium shockwave. One political compromise is to preserve a bubble of complex civilization underpinned by information-signaling gradients of well-being that is surrounded by an expanding shockwave of pure bliss – not an ideal world by the lights of pure CU, but something close enough.
(apologies if I'm treading already trodden-ground, I'm coming to the thread late)
There's a story by Henry James I've always liked called The Beast In the Jungle. That story is about a man who, for as long as he can remember, was always convinced that somewhere in his future lay a terrible event. He didn't know what it was, only that it would be horrible. He has trouble expressing it to anyone, but eventually meets a woman who is very sympathetic to him. The story chronicles their meetings over the years, with her attentively, consolingly listening to his fears of the beast. Eventually she dies, and while standing at her grave he's struck with an intense flashing memory of their whole friendship - he realizes that the whole time she was trying to express her love for him through sympathizing with him, and that he never actually saw it, focused as he was on the beast. And, of course, that's when 'the beast' hits him - that is the beast.
That's a story about focus on a bad/sad thing. He lives his life in preparation/constant thinking about the bad thing, and in doing so misses everything else. But I often worry that a version of it might hold for transhumanist visions of the future.
Might it be the case that in focusing attention on an 'outside' ( a future, elsewhere, later) where the anti-beast (the very good) will happen -where it will be given to us wrapped and perpetually +100 - we're drawing attention away from learning how to see the modest +n's in life, and learning how to cultivate them into n+1s? Maybe experiencing +100s is inseparable from learning to recognize and cultivate +1s, here and now, struggling with them?
I have trouble not thinking in narrative terms, and when I go into periods where I think a lot about future happiness, these periods usually terminate with me daydreamily 'seeing' the story of my life as someone focusing on a distant happiness while all the possibilities of that happiness passed me by. In the daydream story, I realize that the +100 has to begin with a +1 here I never began to cultivate, because I was focused on the +100 there.
When I'm in those phases I'm always haunted by this Rilke line
"A wave swelled toward you
out of the past, or as you walked by the open window
a violin inside surrendered itself
to pure passion. All that was your charge.
But were you strong enough? Weren’t you always distracted
by expectation"
Isn't there a danger that in entertaining transhumanism, we're always 'distracted by expectation' in just this way? (Though you can also imagine that in act II of such a story, someone would come in and moralize in just this way.)
You have a point. I've never read The Beast in the Jungle, but I get the gist: https://en.wikipedia.org/wiki/The_Beast_in_the_Jungle
However, here is a counterargument. Transhumanism at its best aims to promote the well-being of all sentience, or at least all sentience in our forward light-cone. I take a range of pills and potions, and intend to be symbolically cryothanased aged 75 or so with a view to reanimation, but it's not as though I anticipate seeing the Promised Land with any great confidence – quite aside from my scepticism about the metaphysics of enduring personal identity. Rather, our responsibility as intelligent moral agents is to try to ensure that future beings don't suffer in the way that human and nonhuman animals do today. We’re stepping-stones. No one should have to undergo the ravages of aging, witness the death of a loved one, experience psychological illness, or undergo the mundane frustrations and disappointments of Darwinian life. No sentient being should be factory-farmed, perish in the death factories, or starve or fall victim to a predator in Nature. The fact that many / most / all of us will never personally live to see the glorious future of sentience doesn't diminish our obligation to work to that end.
Sounds like one helluva party! Who in his right mind can say "no" to that!
A couple of points...
1. This may sound crazy but I'll say it anyway. The suffering-happiness duo, if one approaches this from a Darwinian angle, serves a function distinct in value from any value that can be attributed to either of the two. In my humble opinion, the things that make us happy are pro-life, i.e. in most or all cases, that which makes us happy is that which makes us live longer; conversely, that which makes us suffer is anti-life, shortening our life-span. Thus, happiness and suffering are ultimately about living as long as possible, i.e. happiness and suffering, whatever value one may choose to ascribe to them as transhumanists are currently doing, boil down to keeping the flame of life burning to the maximum extent possible. In other words, the objective, the end, here seems to be immortality, and happiness-suffering are merely the means. The question is, should we spend so much time working on, making a big deal of, the means instead of focusing on, putting our backs into, achieving the end (immortality)? It would be something like obsessively hoarding money (the means) instead of using it intelligently (the end).
2. I'm sure you're familiar with Buddhism – it too is a hedonically-charged philosophy, and its first line reads, "life is suffering." I'm sure you can relate to that, but there's a crucial difference between Buddha's philosophy and transhumanist ideology, if I may describe it as such.
Both Buddhism and transhumanism acknowledge suffering as undesirable and happiness as a desideratum. However, to borrow computing terms, Buddhism is about updating, as it were, our software – leave the world as it is but change/adapt our minds to it in such a way that suffering is minimized and happiness is maximized (I'll leave nirvana out of the discussion for the moment) – whereas transhumanism is about upgrading our hardware – change the world and also change our brains towards the same ends.
The issue is, if happiness-suffering can be modulated just by changing the way we think of the world, isn't transhumanism in that sense a misguided venture? The converse question – what if happiness-suffering can only be dealt with by changing our world and our brains physically? – suggests the opposite, that Buddhism is a gross misconception of reality.
Alas, naysayers exist, even on this forum. But yes, transhuman life based on gradients of superhuman bliss will exceed our wildest expectations.
Quoting TheMadFool
My view of Darwinian life is so bleak that I'm more likely to quote Heinrich Heine, "Sleep is good, death is better; but of course, the best thing would be never to have been born at all." Sorry. But aging and bereavement are sources of such misery that I share the transhumanist goal of their abolition via science. Mastery of our reward circuitry promises to make the nihilistic sentiments of NU antinatalists like me unthinkable.
Quoting TheMadFool
Yes. Pain and suffering have always been inevitable. Symptomatic relief is sometimes possible, but not far-reaching. But for the first time in history, we can glimpse the prospect of new reward architecture – not just the alleviation of specific external causes of suffering, but any suffering, and even the conceivability of suffering – and not in some mythical afterlife, but here on Earth. If we opt to edit our genomes, the world's last unpleasant experience may be a few centuries away. Pursuing the Noble Eightfold path can't recalibrate the hedonic treadmill or break the food chain, so a pragmatist like Gautama Buddha born today would surely approve. The hardware/software metaphor for the mind-brain shouldn't be taken too literally, but yes, transhumanism promises a revolution in both.
Another distinction between traditional Buddhism and transhumanism is the role of desire. Buddhists equate desire with suffering. Frustrated desires sometimes cause misery. Biotech can do permanently what mu-opioid agonist drugs do fleetingly, namely create serene bliss – an absence of desire that should be distinguished from the anhedonia and amotivational syndrome experienced by many depressives. However, serene bliss is the recipe for a world of lotus-eaters (cf. Tranquilism (2017) by Lucas Gloor: https://longtermrisk.org/tranquilism/). The prospect of quietly savouring beautiful states of consciousness for all eternity doesn't appal me; but expressing delight in idleness is a culturally unacceptable sentiment, for the most part at any rate. Witness the modern cult of "productivity". So it's worth stressing a viable alternative scenario, a world of gradients of superhappiness and hypermotivation. This combination ought not to be possible if the Buddhist diagnosis of our predicament were correct. Pathological manifestations of this combination of traits can be seen today in euphoric mania. Yet it's an empirical fact that the temperamentally happiest "normal" people today typically experience the most desires, not the fewest. Many depressives experience the opposite syndrome. Maybe in the very long-term future, advanced superbeings will opt to live in perpetual hedonic +100 super-nirvana. But perhaps a more sociologically realistic scenario for the next few eons is hypermotivated bliss – crudely, a world of doing rather than contemplating, and a civilisation where all sentient beings feel profoundly blissful but not "blissed out": https://www.superhappiness.com/
:rofl: However, jokes aside, this particular brand of thinking has been offered for public consumption on a global scale by Buddhism, especially the part that goes, "...the best thing would be never to have been born at all" :point: nirvana, which boils down to the statement: head to the nearest exit from the cycle of suffering referred to as samsara, and for heaven's sake don't come back! :smile:
Quoting David Pearce
And yet you build an entire philosophy out of just one aspect of it, viz. happiness/pleasure and suffering/pain. I'm not trying to say anything to the effect that transhumanism is defective/deficient but, in a sense, transhumanism hasn't really left Darwinism behind, has it? It's still quite clearly very much in its grip, deeply troubled by the same things – pleasure/pain – that troubled, presumably, even the dinosaurs. A modern solution (transhumanism) for an ancient, ancient problem (suffering/happiness).
Quoting David Pearce
:up: I second that. I suppose his philosophy suffers from technological ignorance, i.e. he didn't have the benefit of modern scientific knowledge, and the possibility that there was another way – transhumanism – out of the quagmire of suffering never occurred to him. Had he had even an inkling of what is now bandied about by technologists and futurists as possible, I'm sure he would have seen the light, so to speak.
Quoting David Pearce
I'd like two, no three, no four, no five,.. of that please. :smile:
I suppose transhumanists are calling it as they see it – we can, sit venia verbo, cut all the crap we tell each other and finally, seriously, and like adults, discuss what we really want – superhappiness (supernirvana) – and come up with a good plan for how we're going to get there!
Absolutely.
At the risk of sounding like a crude technological determinist, some strands of the transhumanist agenda are bound to happen anyway. For instance, radical antiaging interventions won’t need selling. Bioconservatives will seize on the chance to stay youthful as eagerly as radical futurists. Today’s lame rationalisations of death and aging will be swept aside. Likewise with AI and the growth of “narrow” superintelligence, immersive VR and so forth. I’m most cautious about timescales for the end of suffering and the third “super” of transhumanism, superhappiness. When scientific understanding of the pleasure-pain axis matures, the end of suffering and a civilisation of superhuman bliss are probably just as inevitable as pain-free surgery. But the sociological and political revolution needed for all prospective parents world-wide to participate in a reproductive revolution of hyperthymic designer babies is fathomlessly complicated. I fantasise about a WHO-sponsored Hundred-Year Plan to defeat the biology of suffering. Almost certainly, this is just utopian dreaming. Political genius is needed to accelerate the project.
Quoting David Pearce
If I may say so, some Buddhists (mostly Tibetans, I suppose) would, at some point, connect the dots and come to the realization that transhumanists are reincarnations of Siddhartha Gautama :smile: They seem so far to have failed to make that connection. I hope they do, and soon; I'm sure a little help from the 535 million Buddhists around the world will do the transhumanist cause some good. Expect yourselves to be worshipped at some point is what I have to say.
Hah. You’re very kind. Individual transhumanists are all too human. But your essential point stands. The abolitionist strand in modern transhumanism is really secular Buddhism, minus the metaphysical accretions. Suffering is vile, stupid and computationally redundant.
I very much like the emphasis on a general happiness that goes beyond one's own, the idea of the present as a stepping stone to the future. I can definitely see how my post – and references – seemed to emphasize personal happiness, but I'm coming at it from another angle. I've experienced rather harsh psychological illness, and the passing of my mother when I was 25. All of this was – and sometimes continues to be – devastating. But at the same time... I was kind of an asshole before then. And it was going through those experiences, real suffering, that allowed me to shift tracks from a kind of self-centered hedonism and cultivate something like empathy (partially – I'm still often an asshole, but a little less so).
Now, maybe this is a Stockholm-Syndrome approach: suffering scooped me up and, having no other psychological choice, I was gaslighted into loving it. I can see that take; I don't think it's true....though, to be fair, that's just what the stockholm syndrome'd person would say.
(Plus I have an unfair cheat: reincarnation. You mention above metaphysical scepticism about personal identity. I have that scepticism too and have plunged into some of the literature. Doing so has led me to believe in a very qualified kind of reincarnation – not that I'll get my mom back, or that she'll ever be her again; I've read too many horror stories about rogue alchemists not to know that the desire to resurrect the dead only breeds monsters. Her personal identity ended when she died. But she was blocked by certain harsh emotions and hardened habits, and was released from that... contraction of being, so to speak, into the opener space.)
But in general I really do think that there may be something to the old idea that undergoing suffering is a condition for a more finely-tuned happiness. I know that that's a cliche on the face of it (maybe even a particularly pernicious culturally-implanted one!) but I think there is also a robustness to certain ways of approaching it that go beyond the cliche.
To flesh that out would go beyond the scope of posting on here, but the tldr is:
What I meant in my first post isn't that we should prioritize our own happiness at the expense of the future inhabitants of our lightcone, but something like this: happiness could be the gradual unfurling that comes when a community agrees to focus on the here and now – a focus that means confronting and working out deferred, repressed suffering... kneading it out, undergoing it – the future happiness coming organically through the kneading, like the relief from tension in a muscle. I'm thinking of something like the serenity after a good, full cry. At the same moment you reach catharsis, you realize: I need to be a kinder person, and I now have a sense of how to. It's the same thing.
I know this might seem wishy-washy! But there's a lot to recommend it – time and space constraints allow only a quick and simple, suggestive pointing.
Allow me to pass over where we agree and focus on where we may differ. Each of us must come to terms with the pain and grief in our own lives. Often the anguish is very personal. Uniquely, humans have the ability to rationalise their own suffering and mortality. Rationalisation is normally only partially successful, but it’s a vital psychological crutch. Around 850,000 people each year fail to "rationalise" the unrationalisable and take their own lives. Millions more try to commit suicide and fail. Factory-farmed nonhuman animals lack the cognitive capacity and means to do so.
However, rationalisation can have an insidious effect. If (some) suffering has allegedly been good for us, won’t suffering sometimes have redeeming features for our children and grandchildren – and indeed for all future life? So let’s preserve the biological-genetic status quo. I don't buy this argument; it’s ethically catastrophic. For the first time in history, it's possible to map out the technical blueprint for a living world without suffering. Political genius is now needed to accelerate a post-Darwinian transition. Recall that young children can't rationalise. Nor can nonhuman animals. We should safeguard their interests too. If we are prepared to rewrite our genomes, then happiness can be as "finely-tuned" and information-sensitive as we wish, but on an exalted plane of well-being. Transhuman life will be underpinned by a default hedonic tone beyond today’s “peak experiences”.
One strand of thought that opposes the rationalising impulse is represented by David Benatar's Better Never To Have Been, efilism and “strong” antinatalism:
https://www.hedweb.com/quora/2015.html#main
Alas, the astute depressive realism of their diagnosis isn't matched by any clear-headedness of their prescriptions. I hesitate to say this for fear of sounding messianic, but only transhumanism can solve the problem of suffering. Darwinian malware contains the seeds of its own destruction. A world based on gradients of bliss won’t need today's spurious rationalisations of evil. Let’s genetically eliminate hedonic sub-zero experience altogether.
I have a sense that suffering helps us navigate, and altering the genetic hedonic predisposition would reciprocally alter the genetic basis of suffering, and leave us incapable of overcoming even the slightest obstacle. Like the fat people in the floaty chairs! Or the 30 million dead colonists on Miranda in the film Serenity, who had unending bliss forced upon them, and just lay down and died.
This is quite aside from the fact that it's an unsystematic application of science, which by rights should start with limitless clean energy, carbon capture, desalination and irrigation, hydrogen fuel, total recycling, fish farming etc., so as to secure a prosperous, sustainable future. I know that would make me, genuinely, much happier.
@David Pearce The following is more of an enthusiastic engagement that recalls, or calls upon, several truths you assert in your premise. The very thing you call savage and cruel or even stupid – Darwinian existence – is the thing that formed you and your beliefs; this is nothing other than a fact, it would seem. So, seeing as Hell creates Heaven, it would appear, again from your own statements, consider the following theory.
Ok, I remember. I was watching a government ad about the dangers of smoking; some woman lost half her face or something. Alright. So. What if we endure these horrors, and those who do things that are hazardous to our current biology (smoking, drinking in excess, etc.) continue to do so and, yes, suffer – resulting in a later generation that is no longer subject to the diseases of its predecessors? Meaning, the more we smoke and drink, yes, some will die and suffer horribly, but those who live will produce a generation that can enjoy the pleasures of smoking and drinking – and, for the sake of argument, other dangerous drugs – without these horrid effects. Isn't the natural system creating transhumans already, per se?
I understand your argument wishes to skip this altogether: we would simply have no need to indulge in harmful substances, for we would already be at bliss. But nevertheless there will always be more bliss to be had; if not, is this not a prison your movement attempts to create? People will always seek more pleasure. Will they not?
The pleasure-pain axis plays an indispensable signalling role in organic (but not inorganic) robots. When information-signalling wholly or partly breaks down, as in severe chronic depression or mania, the results are tragic. But consider high-functioning depressives and high-functioning hyperthymics. High-functioning hyperthymics tend to enjoy a vastly richer quality of life. Let's for now set aside futuristic speculation on an advanced civilization based on gradients of superhuman bliss. What are the pros and cons of using gene-editing to create just a hyperthymic society – where everyone enjoys a hedonic set-point and hedonic range comparable to today's genetically privileged hedonic elite?
You mention the fictional colonists of Miranda:
https://en.wikipedia.org/wiki/Serenity_(2005_film)
Their fate is not a sociologically credible model for a world based on gradients of genetically programmed well-being. Hyperthymics tend to be ardent life-lovers. What's more, the extent to which any of us persevere or "give up" will itself soon be amenable to fine-grained control (cf. "Researchers discover the science behind giving up": https://newsroom.uw.edu/news/researchers-discover-science-behind-giving). If desired, medical science can endow us with fanatical willpower sufficient to appease even the most avid Nietzschean. Depressives are prone to weakness of will and self-neglect.
Quoting counterpunch
Fish are sentient beings. Intelligent moral agents should enable fish to flourish, not exploit and kill them
(cf. https://www.theguardian.com/environment/2018/oct/18/horrific-footage-reveals-fish-suffocating-to-death-on-industrial-farms-in-italy).
I promise that transhumanists are as keen as anyone on a prosperous, sustainable future. Upgrading our reward circuitry will ensure that sentient beings are better able to enjoy it.
Hyperthymics engage in denial to rationalise their overly positive mood. They lose the ability to navigate rationally, and the consequences can be just as tragic. Suffering doesn't go away just because the person isn't weeping. We are, after all, social beings, and hyperthymics go around inflicting their risk-taking, attention-seeking, libidinous psychology on others.
Quoting David Pearce
There would be no downsides if people were incapable of experiencing them. The world could be falling apart around them and they wouldn't care. That's my point. People need to be pissed off about things in order to prevent them. Thou shalt not kill. Who cares? Thou shalt not steal. Who cares? Hyperthymics don't care.
Quoting David Pearce
It's fiction, sure – but it illustrates a point that relates to my previous comments about a systematic, scientific approach to the application of technology. Start with limitless clean energy from magma, carbon capture, desalination, irrigation – and secure sustainable prosperity for the world – and maybe people wouldn't be depressed. Starting with genetic engineering is exactly why we are headed for extinction: we use science, but don't observe a scientific understanding of reality, and so apply the wrong technologies for the wrong reasons.
Quoting David Pearce
Are they? I thought they were:
Quoting David Pearce
Or do you reserve such dehumanising ideas solely for humans? Humans are sentient beings at the top of the food chain. Fish are meat. Humans eat meat, and need to produce it sustainably rather than dredge the oceans to death. I do not condone animals suffering any more than is necessary, but they're mortal, and humane farming is far kinder than nature – which really is red in tooth and claw. Most humans born will reach maturity. That's not so in nature.
Quoting David Pearce
I don't doubt that, but you'll not achieve it with an unsystematic approach to science, and also, I very much doubt that:
Quoting David Pearce
Interfering in the human genome, so altering every subsequent human being who will ever live, is a risk that's not justified by depression; for all the reasons stated. I think we need to suffer the consequences of things that are bad for us; and if we don't feel the suffering, we will still suffer, but just won't know that we are suffering.
Indeed. Even an “ideal” pleasure drug could be abused. The classic example from fiction is soma (cf. https://www.huxley.net/soma/somaquote.html) in Aldous Huxley’s Brave New World. I hope that establishment pharmacologists and the scientific counterculture alike can develop more effective mood-brighteners to benefit both the psychologically ill and the nominally well. Yet one reason I’ve focused on genetic recalibration and genetically-driven hedonic uplift is precisely to avoid the pitfalls of drug abuse. Whether you're a hyperthymic with an innate hedonic range of, say, 0 to +10 or a posthuman ranging from +90 to +100, you can continue to seek more – where the guise of “more” depends on how your emotions are encephalised. But an elevated hedonic set-point doesn’t pose the personal, interpersonal and societal challenges of endemic drug-taking. Indeed, we don’t know whether posthumans will take psychoactive drugs at all. I often assume that posthumans will take innovative designer drugs in order to explore alien state-spaces of consciousness. However, maybe our successors will opt to be mostly if not entirely drug-free. After all, if there weren’t something fundamentally wrong with our human default state of consciousness, then would we try so hard to change it? It’s tragic that mankind's attempts to do so are often so inept.
Is it possible you're conflating hyperthymia with mania? Yes, unusually temperamentally happy people have proverbially rose-tinted spectacles. Their affective biases need to be exhaustively researched before there's any bid to create a hyperthymic society. But the kind of temperament I had in mind is exemplified by the author of The Precipice (2020). https://en.wikipedia.org/wiki/The_Precipice:_Existential_Risk_and_the_Future_of_Humanity
Extreme life-lovers may take more risks in some ways, but are on guard to avert them in others
(see my disagreement with Toby e.g.
https://www.hedweb.com/social-media/pre2014.html).
Quoting counterpunch
My describing human and nonhuman animals as sentient organic robots isn't intended to "dehumanise" them. Rather, it's to highlight how our behaviour is mechanistically explicable – and how we can create an architecture of mind that doesn't depend on pain. Inorganic robots don't need a signalling system of sub-zero states to function; re-engineered organic robots can do likewise.
Quoting counterpunch
Around 20% of humans never eat meat. Humans don't need to eat meat in order to flourish. Instead of harming our fellow creatures, we should be helping them by civilising the biosphere (cf. https://www.gene-drives.com). In the meantime, the cruelties of Nature don't serve as a moral license for humans to add to them via animal agriculture.
Quoting counterpunch
Recall that all humans are untested genetic experiments. The germline can be edited – and unedited. But if we don't fix our legacy code, then atrocious suffering will proliferate indefinitely.
Quoting counterpunch
I'm struggling to parse this. Yes, feelings of malaise or discomfort may sometimes be subtle and elusive. But the "raw feels" of outright suffering – whether psychological or physical – are unmistakably nasty by their very nature. "The having is the knowing", as Galen Strawson puts it. Either way, if we replace the biology of hedonically sub-zero states with information-sensitive gradients of well-being, then unpleasant experience will become physically impossible. It won't be missed.
I searched hyperthymia - and those are, apparently, associated behaviours. I have no personal knowledge of the condition.
I also read your link on The Precipice, and I think Ord is missing a piece of the puzzle - something difficult to understand that I'm trying to tell you about.
Essentially, it's the consequence of Galileo's trial for heresy. I believe declaring Galileo grievously suspect of heresy divorced science as an understanding of reality from science as a tool - such that we used the tools without regard to a scientific understanding of reality. We developed and applied technology in pursuit of ideological ends. As a consequence, we've applied the wrong technologies for the wrong reasons - and IMO, that's precisely what you are proposing to do.
Maybe there will come a time when we understand genetics well enough that the risk of altering the human genome - for every subsequent generation - is a tiny risk worth taking, but that isn't just yet. Right now, we are faced with an existential crisis that is the consequence of the misapplication of technology for ideological ends. A scientific understanding of reality is the right basis for applying technology, or else you get 'monkeys with machine guns' - or as Ord has it: "We gained the power to destroy ourselves, without the wisdom to ensure that we avoid doing so." That wisdom was denied us by accusations of heresy.
I don't wish to debate vegetarianism with you, because I think it's a perfectly valid choice, but it's absolutely not a moral imperative. Nor is it necessary to a sustainable future. Maybe, one day - we'll be able to grow meat in vats without any conscious agent suffering, but that isn't just yet. Further, I believe agriculture has a vital role to play in resisting desertification. See this Ted Talk by Allan Savory:
https://www.ted.com/talks/allan_savory_how_to_fight_desertification_and_reverse_climate_change
We certainly need to do farming better, and in order to do so we need limitless clean energy from magma, which is the most scientifically fundamental, most environmentally beneficial, and least disruptive thing we could possibly do to secure the future. I could maybe imagine your genetic prescription for gradients of superhuman bliss working out in a prosperous sustainable future, but while the world is a basket case barrelling toward extinction, being deliriously happy nonetheless seems to me a sticking plaster on a still-gaping wound. I know you keep saying it wouldn't be like that; that we wouldn't lose our ability to navigate a still-hostile environment, but how can you possibly know?
The grass is always greener on the other side. We want what we can't or at least don't have. Perhaps this is what you refer to? This is what it means to be human. The curse of want and desire. Without this, what differentiates a transhuman from a robot clothed in flesh?
Hyperthymia is the opposite of dysthymia (cf. https://en.wikipedia.org/wiki/Dysthymia). When advocating a hyperthymic civilisation, I'm urging a society where everyone has, at minimum, the high hedonic set-point of today's temperamentally happiest people who aren't manic (cf. https://en.wikipedia.org/wiki/Mania). That said, apologies; I'd do well to use less jargon.
Quoting counterpunch
If prospective parents agreed on a moratorium on genetic experiments, then an indefinite delay of genome-editing too would be wise. But at present most people intend to keep reproducing willy-nilly. Parenthood inevitably creates more involuntary suffering. So the question arises whether it's ethical not to load the genetic dice in favour of the subjects of the experiments. The least controversial option will be universal access to preimplantation genetic screening and counselling. But gene-editing is now feasible too. We shouldn’t just assume that the upshot of responsible editing will be worse than random genetic mutations and the genetic shuffling of traditional sexual reproduction.
Quoting counterpunch
We'd both agree that stopping child abuse is morally imperative. The abuse of sentient beings of comparable sentience deserves similar priority. Perhaps try to empathise, if only for 30 seconds, with what it feels like to be, say, a factory-farmed pig.
Quoting counterpunch
To stress, I do not advocate becoming "deliriously happy". Hedonic set-point elevation doesn’t work like that.
Quoting counterpunch
If the touted biohappiness revolution proposed that we should leapfrog ahead and try to create hedonic supermen, then you'd have a point. I hope I've clearly flagged that discussion of a future world animated by gradients of superhuman bliss is speculation. What isn't speculative is the existence of today's extremely high-functioning hedonic outliers – and the strong genetic loading of hyperthymia. The Anders Sandbergs (cf. https://quotefancy.com/quote/1695040/Anders-Sandberg-I-do-have-a-ridiculously-high-hedonic-set-point) of this world have more than adequate navigational skills. It's chronic depressives who often suffer from a broken compass. Responsible recalibration of the hedonic treadmill promises wider engagement with the problems of the world. Not least, passionate life-lovers care more about the future of sentience than depressive nihilists.
Some people cannot imagine life could be different. Suffering shapes their conception of the human predicament and life itself. Other people have tasted paradise and want the world to share their vision. Alas, visions of the ideal society often conflict. Environmentally-based utopian experiments fail. In one sense, the biological-genetic strategy of hedonic uplift is tamer. Potentially, elevated pain thresholds, hedonic range and hedonic set-points can underpin a richer personal quality of life for all, but the manifold social, economic and political problems of society are left unaddressed. I'll bang the drum for a biohappiness revolution for as long as I'm able, but unless (like me) you're a negative utilitarian, it's not a panacea. The end of suffering will still be the most momentous revolution in the history of sentience.
Why? Do you suppose I advocate factory farming? I said specifically that we need to do agriculture better, and that I don't condone unnecessary cruelty to animals. But okay. I get plenty of food, and I like food. Otherwise, I've got no idea what's going on - and know nothing to compare it with. I'm loaded onto a truck, and driven to an abattoir. Someone puts something near my head and the world disappears in an instant. Now you imagine your life as a pig in the wild being ripped apart and eaten alive by a pack of wild dogs. Any thoughts on the Allan Savory video? He explains why we need animal agriculture.
Consider e.g.
https://www.youtube.com/watch?v=icOD7hxUGI8
Worse happens off-camera. Animal agriculture is a crime against sentience.
Quoting counterpunch
Hence the case for:
https://www.reprogramming-predators.com/
Quoting counterpunch
For a rebuttal, perhaps see e.g.
https://slate.com/human-interest/2013/04/allan-savorys-ted-talk-is-wrong-and-the-benefits-of-holistic-grazing-have-been-debunked.html
Civilisation will be vegan.
And if you'd like to see a successful trial of Savory's methods, see here:
Agriculture, Ecosystems & Environment
Grazing management impacts on vegetation, soil biota and soil chemical, physical and hydrological properties in tall grass prairie
W.R. Teague, S.L. Dowhower, S.A. Baker, N. Haile, P.B. DeLaune, D.M. Conover
Highlights:
* We evaluated the impacts of multi-paddock grazing and continuous grazing.
* We measured impacts on soils and vegetation on neighboring ranches in three counties.
* Multi-paddock grazing had superior vegetation composition and biomass.
* Multi-paddock grazing had higher soil carbon, water- and nutrient-holding capacities.
* Success was due to managing grazing adaptively for desired results.
https://www.sciencedirect.com/science/article/abs/pii/S0167880911000934?via%3Dihub
Quoting David Pearce
No, it really will not, and anyone who's a foodie can assure you that's never going to happen. I always think that vegans don't really like food, don't like to cook, and take no pleasure in eating.
You massively underestimate what a cultural shift it would be - because individually, you can just stop eating meat, and you think that's it. But it's very different universally. And if Allan Savory is right, there wouldn't be civilisation for very long after.
Have you ever wondered why vegan foods mimic meat? Ever wondered why vegans need supplements for Vitamin B12, Vitamin D, Long-chain omega-3s, Iron, Calcium, Zinc, and Iodine? It's because we are carnivores. As a personal choice - choosing to be a herbivore is fine. That is, until you can take a pill and not have to eat at all! But I love food, I love cooking and eating, and you put yourself between me and a pork chop at your peril!
This is my point essentially. Let's not pretend you've never suffered. Without it, you and these "some people" are one and the same. We're just circling back, essentially. What you claim to wish to eliminate not only nurtured you but in fact created you! Sure, as a creation progresses it wishes or desires to eliminate that which hampers its own progression but at what cost?
Quoting David Pearce
Again, this "other people" and yourself appear to be one and the same.
Quoting David Pearce
Of course, any otherwise successful experiment can be interrupted or perhaps halted by a skeptical, perhaps even religious (though they'll never admit it) interlocutor. What of it?
Quoting David Pearce
Yes, and potentially a currently unproven theory, like yours, and perhaps God, can allow me to fly after jumping off a cliff. These are all in the same bucket of unproven theories. My main question is as follows: what is a "rich" personal quality?
It's not because you are happy and successful for no merit or purpose where others are the same; it's because there is something to stand on, so to speak.
Quoting David Pearce
I don't disagree diametrically; however, I do disagree tangentially. Let us hearken back to the basics. There is no love without something to hate. No joy without something to annoy. No fun without something to bore. Is this true or false, young David?
Quoting Outlander
This gets more and more interesting, but it's likely that I speak from ignorance rather than any real knowledge. The best way to express my concern here is to bring up the matter of duality. I hope we're all on the same page here. The only serious studies of duality undertaken in good faith were/are by Eastern philosophers (Indian & Chinese). Of the Indians, I know very little, but of the Chinese, Lao Tzu (Taoism) is quite well-known and by and large well-received by the West.
The gist of Taoism (Yin-Yang) is duality, in that if there's something, say x, then there's always an anti-something (anti-x) - but that's not all. This particular point of view puts the Western notion of symmetry on a pedestal, but given how things are, this seems fully justified. After all, every thing stands out as the thing against a backdrop of other things that are not that thing. How Yin-Yang is relevant to the issue at hand is that happiness wouldn't make sense without suffering and vice versa, because each provides the contrast for the other, in effect making them both discernible to our mind.
Ergo, setting aside posthumans who have memories of suffering against which they could compare superhappiness, those posthumans born after the abolition of suffering ( :clap: ) who know only superhappiness wouldn't really be able to appreciate what they have. Perhaps such posthumans would create education camps for themselves where they're given small doses of pain to give them an idea of what suffering is, if only so that they can see the true value of superhappiness, which to their ancestors was the very definition of a perfect life. However, if we've modified the pain systems in our brains to achieve superhappiness, this doesn't seem possible, and this takes us back to the statement Outlander seems to be interested in: superhappiness is meaningless without some suffering to serve as a foil, in a manner of speaking.
I assume you're trolling. But if not, I promise vegans love food as much as meat eaters. Visit a vegan foodie community if you've any doubt.
Quoting counterpunch
If so, then it's mysterious why scientific studies suggest vegetarians tend to be slimmer, longer-lived and more intelligent than meat-eaters:
https://www.hedweb.com/quora/2015.html#scientificvege
Strict veganism is relatively new. Perhaps see:
https://www.independent.co.uk/life-style/health-and-families/health-news/vegan-meat-life-expectancy-eggs-dairy-research-a7168036.html
("Vegans Live Longer Than Those Who Eat Meat or Eggs")
Many vegans (and indeed non-vegans) do indeed prudently take a B12 supplement. Alternatives are fortified nutritional yeast (cf. https://en.wikipedia.org/wiki/Nutritional_yeast) or the natural plant source Wolffia globosa (cf. "Wolffia globosa–Mankai Plant-Based Protein Contains Bioactive Vitamin B12 and Is Well Absorbed in Humans": https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7600829/ ).
Quoting counterpunch
Is the level of pleasure someone derives from harming his victims – human or nonhuman – a morally relevant consideration?
Yes, it's a powerful intuition. But if the existence of pain and pleasure were inseparable, then there would be no victims of chronic pain or depression. Chronic pleasure and happiness aren't harder to engineer genetically; but perpetual euphoria wasn't fitness-enhancing in the environment of evolutionary adaptedness. Biotech is a game-changer. Humanity now has the tools to create life based entirely on information-sensitive gradients of well-being – and eventually superhuman bliss.
Once again, the intuition is deeply rooted. IMO it's just not supported by the empirical evidence. The temperamentally happiest people are simply born that way – and they don't experience deficits of perceived meaning.
Or consider the rarest of the affective psychoses, unipolar euphoric mania. People in the grip of (hypo)mania find life indiscriminately meaningful. Everything is supercharged with significance.
Transhumanists don't advocate or predict a world of (hypo)mania. But if the hyperthymic civilisation I anticipate comes to pass, then its inhabitants won’t need a contrast with suffering to give meaning to their existence, or fully to savour what they enjoy. A pervasive sense of meaning will be built into the fabric of life itself:
https://www.hedweb.com/quora/2015.html#david
In a civilised world, the "foil" should come from information-sensitive dips and peaks of bliss, not from seesawing between pain and pleasure.
Funny, because I assume the same of vegans - that they troll normal people with their presumed moral superiority.
Quoting David Pearce
It's you broadcasting moral claims, based on your personal choices, that seek to cast me and much of the rest of the world in a bad light. So, who's the troll here?
Quoting David Pearce
Scientifically, that would be such an incredibly difficult finding to prove that I know any such study is seriously methodologically flawed. Two identical babies - one raised vegan, the other normal - would have to be followed all their lives to draw such conclusions.
I suspect that vegetarianism is a cultural practice that occurs among a particular type of person, who are already more intelligent than the average. They're slimmer because they don't really like food, don't like to cook, and don't enjoy eating - and I suspect that's because they are hung up on the Freudian anal stage of development, and have some childhood trauma around defecation that subconsciously influences adult behaviour.
Tell me which describes you best:
* hate mess, obsessively tidy, punctual, and respectful to authority, or -
* messy, disorganized, rebellious, inconsiderate of others' feelings.
Quoting David Pearce
In terms of what we owe each other, my only moral obligation to the animals I eat is to minimise the suffering of a mortal creature that is a food animal. It exists for that reason, in nature - and in farming, because that's where that kind of animal is on the food chain. I'm glad I'm human, because I'm at the top of the food chain, and I'm not the 'pull down the ceiling to make everything equal' type. There are inherent inequalities in life. Animals are not all equal. There are predators and prey. I'm a predator.
On some fairly modest assumptions, a world where all sentient beings can flourish is ethically preferable to a world where sentient beings hurt, harm and kill each other. Biotech makes the well-being of all sentience technically feasible. So let's civilise Darwinian life, not glory in its depravities:
https://www.hedweb.com/quora/2015.html#worldvegan
So you "modestly assume" you have the wisdom and technological ability to genetically alter all life on earth that doesn't meet your ethical standards? Evolution has produced the lifeforms that exist by testing them mercilessly over one and a half billion years - and rendering extinct those that are unfit, so that fitter lifeforms can take their place. This is the basis for the apparent design in nature - how everything works together to a productive end - and you would presume to take this process on yourself? You should consider Orgel's Second Rule: "Evolution is cleverer than you are."
The prospect of ending the cruelties of Nature isn't a madcap scheme some philosopher just dreamed up in the bathtub. It's a venerable vision: the "peaceable kingdom" of Isaiah. Biotech lets us flesh out the practical details.
Quoting counterpunch
What exactly is this "productive end"? If I may quote Dawkins:
"The total amount of suffering per year in the natural world is beyond all decent contemplation. During the minute that it takes me to compose this sentence, thousands of animals are being eaten alive, many others are running for their lives, whimpering with fear, others are slowly being devoured from within by rasping parasites, thousands of all kinds are dying of starvation, thirst, and disease. It must be so. If there ever is a time of plenty, this very fact will automatically lead to an increase in the population until the natural state of starvation and misery is restored. In a universe of electrons and selfish genes, blind physical forces and genetic replication, some people are going to get hurt, other people are going to get lucky, and you won't find any rhyme or reason in it, nor any justice. The universe that we observe has precisely the properties we should expect if there is, at bottom, no design, no purpose, no evil, no good, nothing but pitiless indifference.”
(Richard Dawkins, River Out of Eden: A Darwinian View of Life, (1995))
Quoting counterpunch
Evolution has been clever enough to create creatures smart enough to edit their own source code. Now that the level of suffering on Earth is an adjustable parameter, what genetic dial-settings should we choose?
You know my answer.
Let's agree to differ!
If we were discussing some academic question of art or literature, fair enough. But the problem of suffering is morally urgent – and calls for radical solutions.
If you were talking about sustainability, I'd agree - it's urgent, and calls for radical solutions, I've sought to explain to you, but you won't acknowledge my argument. I suspect that's because your real purpose is to horrify people with science; to trash science on behalf of religion.
Quoting David Pearce
Since the 1633 trial of Galileo made science a heresy, we have not integrated science as valid knowledge of reality, but merely used it as a tool - as you propose to do. A systematic approach to science puts sustainability as a priority, far in advance of risky genetic experimentation.
You say you value a sustainable future, but you're anti-natalist before you're transhumanist, if I recall correctly, and continue to propound this Cronenberg-esque madcap scheme - bound to horrify religious conservatives, and so maintain 400 years of science denial that's driving the life of this planet toward extinction.
If you really cared about suffering, you'd accept my argument that we need to recognise a scientific understanding of reality, and look first to the most fundamental implications - energy and entropy, on page one of your physics textbooks - and therefore harness magma energy for limitless clean electricity, carbon sequestration, desalination and irrigation, recycling, and so on; because if we don't, the suffering is going to be unimaginably worse than that of a factory-farmed pig.
I've tried to explain this several times - and got nothing back. This is a philosophy discussion forum - not a lecture hall. My arguments are based in the epistemology of knowledge, not in some dubious, anti-natalist, vegan moral self-righteousness - that to my mind is floating in the air with no visible means of philosophical support.
You're not serious?! Since the age of ten or eleven, I've been a secular scientific rationalist. My reason for alluding to the "peaceable kingdom" of Isaiah was to disclaim originality for the vision of a vegan biosphere. Molecular biology provides the tools to turn utopian dreaming into practical policy.
Quoting counterpunch
All genetic experimentation is risky; the very nature of sexual reproduction involves gambling with the life of a sentient being.
Quoting counterpunch
My core value is suffering-minimisation. "Hard" antinatalism is hopeless (cf. https://www.hedweb.com/quora/2015.html#arguments); transhumanism gives us a fighting chance of defeating suffering for ever.
Quoting counterpunch
I support such initiatives. One comparatively minor argument for ending animal agriculture is that feeding grain and soya products directly to humans is more energy-efficient than feeding them to factory-farmed nonhuman animals whom humans then butcher.
Quoting counterpunch
Missionaries believed they were morally superior to cannibals. Their moral self-righteousness is not an argument for eating babies. Likewise, the foibles of individual vegans are not a moral argument for harming nonhuman animals.
Above I argued:
Quoting counterpunch
Your answer was an appeal to religious authority.
Quoting David Pearce
Your belief that the problem of suffering is morally urgent is your opinion - and that's all. It's not a fact. It's a subjectively conceived priority. So your allusion to Isaiah comes across as an attempt to justify your opinion with an appeal to religious authority - which is rather odd for a supposedly secular, scientific rationalist.
If you were a secular scientific rationalist, my argument that science describes an understanding of reality that implies a systematic application of technology - to secure a sustainable future, and thereby relieve suffering - should have more impact on you. Maybe you think you're a scientific rationalist, but like West Side Story is really Romeo and Juliet, you are hanging your scientific baubles on the same philosophically religious Christmas tree. And I'm trying to explain to you that applying science for unscientific reasons is why we're headed for extinction.
Admittedly, sustainability is a value - but it's the most objective value conceivable; not least because, one has to exist to have values. The problem of suffering is subjective. You think it important. I don't care about it. I'll concede, unnecessary cruelty to animals is to be avoided, but beyond that I don't care that food animals die. All mortal creatures die, and in the wild suffer far worse than they do on a well run farm - as your Dawkins quote illustrates.
Quoting David Pearce
In your anti-natalist opinion! I disagree; and so we cancel each other out. But you cannot cancel out scientific knowledge. And science as an understanding of reality (it's not just a tool box of neat gadgets to use as you see fit) implies systematic application of technology. On any such list of scientific facts, prioritized in terms of sustainability, risky genetic experimentation is a long way down the list of things we need to do.
Quoting David Pearce
I'm guessing you've never done a physical day's work in your life. A vegetarian diet - with all the necessary supplements - is probably fine for an office worker, or an academic philosopher; i.e. the middle class to whom vegetarianism appeals. But it's simply not adequate to the needs of a manual labourer. Meat is concentrated calories, protein and nutrients - with more energy per kilo than lentil casserole. That's how you can claim vegetarians are more intelligent - and you think that's good science. You're not a scientific rationalist. You're a scientific cherry-picker - appealing to your moral opinions and religious authority as justification for something you refuse to acknowledge is arrogant in the extreme - and precisely mirrors, and justifies, the anti-science prejudices of religious conservatives.
Quoting David Pearce
Scientifically, cannibalism is a bad idea. It's the cause of prion diseases like CJD (the human variant of mad cow disease). Cannibalism by the natives of Papua New Guinea led to the spread of a fatal brain disease called kuru that caused a devastating epidemic in the group. There's no need for moral superiority. Simply knowing what's scientifically true and doing what's sustainable is sufficient, and a far more reliable means to reduce suffering.
Tracing the historical antecedents of one's ideas is very different from appealing to religious authority. In this case, I was simply noting how the vegan transhumanist idea of civilising Nature is prefigured in the Book of Isaiah. The science is new, not the ethic.
Quoting counterpunch
The disvalue of suffering is built into the experience itself. Sadly, it's not some idiosyncratic opinion on my part. If you are not in agony or despair, then you may believe that the problem of suffering isn't morally urgent. One's epistemological limitations shouldn't be confused with a deep metaphysical truth about sentience.
Quoting counterpunch
Nutritionists would differ. So would e.g. vegan body-builders:
https://www.greatveganathletes.com/category/vegan-bodybuilders/
Quoting counterpunch
An ethic of not harming others doesn't involve trumpeting one's moral superiority; rather, it's just basic decency. Also, technology massively amplifies the impact of even minimal benevolence. This amplifying effect is illustrated by the imminent cultured-meat revolution. Commercialised cultured meat and animal products promise an end to the cruelties of animal agriculture:
https://www.hedweb.com/quora/2015.html#slaughterhouses
Looking further ahead, we may envisage a pan-species welfare state – once again, not the result of saintly human self-sacrifice, but the transformative potential of biotech.
What a disgusting admission.
Quoting counterpunch
There are actual and potential workarounds to animal suffering and you are dismissing them.
Quoting counterpunch
No, it's literally not an opinion. It is a scientific fact. You might be comfortable with the odds, but genetic blending has risks whether via natural or bioengineered means.
— counterpunch
Quoting ProbablyTrue
Admittedly, it doesn't look good when taken out of context. But I don't think there's a realistic solution to the "problem". Animals eat each other, and suffer far worse in nature than on a well run farm. I've explained this repeatedly, but everything I say falls on stony ground. I don't want to be rude to our guest by using ever greater rhetorical force to break through this stonewalling. Baden wouldn't like it. I assume he wants to invite other guest philosophers in future, and I'm already on thin ice with him for remarks deemed off topic elsewhere. So I've said all I can on the subject without risk of getting banned. Thanks Dave, for explaining your views. I don't agree with them, but they are interesting. And ProbablyTrue, thank you for affording me the opportunity to explain my difficult position.
Stonewalling?
Philosophers need to acquaint themselves with what's technically feasible so we can have a serious ethical debate on what should be done. From “A Welfare State for Elephants” (cf. https://www.abolitionist.com/reprogramming/elephantcare.html) to “Reprogramming Predators” (cf. https://www.reprogramming-predators.com) to "Compassionate Biology: How CRISPR-based gene drives could cheaply, rapidly and sustainably reduce suffering throughout the living world” (cf. https://www.gene-drives.com), I've tried to explore what creating a cruelty-free living world will involve. Such blueprints sound fantastical today, but they are grounded in science. No practically-minded person need wade through such material, but the biotech revolution means that unsupported claims that no alternative exists to "Nature, red in tooth and claw” are simply mistaken. Intelligent moral agents can now choose how much suffering we want to exist.
However, suppose that humanity decides to retain the ecological and biological-genetic status quo – or at least some approximation of the status quo after today's uncontrolled habitat-destruction runs its course. The fact that a great deal of suffering exists in Nature doesn't somehow morally entitle humans to add to it. Factory-farming and slaughterhouses are an abomination. Let’s get the death-factories shut and outlawed.
I have commented that your proposals are Frankenstein-esque - and not just to religious conservatives. Genetic engineering carries a huge risk of unintended consequences, particularly with regard to disease. I've argued that suffering allows us to navigate a hostile environment. I explained at length why it's an unsystematic misuse of science that rightfully should begin with energy. I said that longevity could not be environmentally supported. And I've questioned the morality of imposing your values on subsequent generations at the biological/medical level without their consent. You've dismissed these remarks, and simply repeated iterations of the same lecture over and over again. If you don't wish to discuss these criticisms, that's your prerogative - but that being so, thanks again for explaining your ideas, again!
Both a genetic crapshoot and targeted germline interventions carry risks. Antinatalists refuse to gamble; but they won't inherit the Earth. So instead we must weigh risk-reward ratios when creating new life. Is the deliberate choice of "low pain" genes for our future children likely to create more or less suffering in the long-term than the traditional genetic casino?
Quoting David Pearce
On the so-called "genetic crapshoot" - we evolved in relation to a rich and complex biosphere by the function-or-die algorithm of evolution. This is a process of attrition, in which organisms die until the species chances on a genetic mutation suited to survival, and then all subsequent reproduction follows that "design" - suited to survive within the biosphere, in relation to other organisms. Evolution is not a crapshoot. It's ballet, and you blunder onto the stage in your hobnail boots. For example:
European rabbits were introduced to Australia initially as a food source, but became feral, bred and multiplied into a plague - because they are not designed (by evolutionary attrition) to be in balance with the Australian biosphere. (Australia is now extremely cautious about biosecurity; remember Johnny Depp's dogs.)
Another similar example is Japanese knotweed, brought back from the far east by European landscape gardeners for its aesthetic qualities. It got into the wild and is now an invasive biohazard, almost impossible to eradicate. These organisms get out of control because they are not evolved in relation to the complex living environment to which they were introduced.
The biosphere is not just rabbits and knotweed. It's bacteria and viruses too. In order to try to control the rabbit population, they took a virus from Central and South America and deliberately infected the rabbit population of Australia. Unlike the South American brush rabbit, European rabbits have no natural immunity to the myxoma virus, so the disease spread and the rabbit population was reduced for a while, until they developed immunity.
So, it's not just plants and animals, but bacteria and viruses; and what you are claiming is that, despite these examples - and many more - you have the wisdom and technical ability to alter organisms at the genetic level; implying that you can foresee all the possible interactions of all the possible organisms in nature. That's the risk you take upon yourself, not for you personally - but for every subsequent generation of human being. (To say nothing of reprogramming predators in pursuit of your religious vision of the lion lying down with the lamb.)
Evolution via natural selection is a cruel engine of suffering, not a performance art.
Quoting counterpunch
I tiptoe far more gingerly than, say, Freeman Dyson ("In the future, a new generation of artists will be writing genomes the way that Blake and Byron wrote verses"). Exhaustive research, risk-benefit analysis and pilot studies will be essential. But whether eradicating smallpox or defeating vector-borne disease, human interventions will have far-reaching ecological implications. Should we have conserved Variola major and Variola minor because the disappearance of smallpox leads to much larger human population sizes? Should we allow Anopheles mosquitoes to breed unchecked because the long-term ecological ramifications of getting rid of malaria are unknown? What about the biology of involuntary pain and suffering? By all means urge extreme vigilance and exploration of worst-case scenarios. But an abundance of caution shouldn't involve placing faith in a mythical wisdom of Nature.
Quoting counterpunch
As a "soft" antinatalist, I'm not planning any personal genetic experiments – with the possible exception of some late-life somatic gene-editing when the technology matures. But for evolutionary reasons, most people don't believe in exercising such restraint. Humanity should plan accordingly. I'm urging a world in which natalists conduct genetic experiments more responsibly. One day natalism can even be harmless.
Weeping buckets over the fact animals eat each other shouldn't blind you to the complexities of the system you propose fucking with.
Perhaps his belief is that what all three of us are referring to is not a system at all but rather a lack of one. What you call complexities Mr. David refers to as undesirable consequences of a then-necessary system that can and should be remedied, much like child labor or egregious workplace accidents resulting from unsafe labor conditions. They had to occur because there was simply no other option; however, when remedies were made available, people supported them. I believe his theory and mission is far more of a Pandora's Box than the sort of panacea he wishes to promote. Still, terms and motives should fall where they may.
An argument used by hunters (like Joe Rogan) is that they are doing the animals a favour, as otherwise their lives and deaths can be horrific. I take it you agree that the animals might suffer more if left to live on, but you disagree with the consequentialist approach?
What do you think of methods to reduce wild animal fertility?
What's in contention isn't whether humans should or shouldn't intervene in Nature. Humans already do so on a massive scale:
https://en.wikipedia.org/wiki/Anthropocene
Rather, we're discussing what principles should govern our interventions. Not least, what level of suffering in Nature is optimal?
One of the risks of ethical advocacy is losing one's critical detachment and turning into a propagandist. There are forms of propaganda more obnoxious than a plea for paradise engineering and a pan-species welfare state. When these ideas go mainstream, policy initiatives will need to be rigorously critiqued.
Humans should actively be helping non-human animals, not terrorising them and then rationalising their bloodlust. On consequentialist grounds, we should uphold in law the sanctity of sentient life.
Quoting Down The Rabbit Hole
For large slow-breeders, cross-species fertility-regulation via immunocontraception is feasible. For small fast-breeders, we can use remotely tunable synthetic gene drives:
https://www.hedweb.com/quora/2015.html#killed
Yes, sure, but still - the human organism lives within a complex biosphere, and your proposed design changes have not been rigorously tested by evolution, in relation to various other organisms, including viruses and bacteria. Our 'design' is the result of millions of years of evolutionary R+D - in relation to everything else, and what I'm suggesting is that any change to a complex system is almost certain to be detrimental.
Eradicating a virus like smallpox is justified. Eradicating malarial mosquitoes is justified. So too, certain genetic diseases. The risks are still there, but there's already such clear and preventable suffering - it's worth the risk. Golden rice - extra vitamin A; probably fine. Messing with human psychology via genetics? That strikes me as several steps too far. You have to understand, there's a very real chance that you would create far greater suffering than you intend to remedy.
If you are serious about your philosophy, you need to address these things. Simply saying:
Quoting David Pearce
Quoting David Pearce
Quoting David Pearce
...is dismissive, and doesn't address the risks of what you are proposing.
Quoting David Pearce
We are both operating from the foundation that suffering is the moral priority. So do you disagree with the premise that the animals would suffer more if left to live? Does the consequentialism come in because the normalisation of hunting (the killing of our fellow sentient beings) leads to more suffering? Or are there principles that take precedence over the consequences?
Quoting David Pearce
This is something we can and should do immediately? Or more research is required?
Do you believe that existing people with high hedonic set-points indirectly cause more suffering? Why exactly do you believe that a whole world of temperamentally happy people would lead to more suffering rather than less?
Do you believe you can reliably make germline alterations to human DNA without possibly making subsequent generations susceptible to disease, or some other malady that you haven't foreseen?
The first step towards a hyperthymic civilisation is ensuring universal access for all prospective parents to preimplantation genetic screening and counselling. (Cue for "Have you seen Gattaca?!" Yes.) The second step is conservative editing, i.e. no creation of novel genes or allelic combinations that don't occur naturally within existing human populations. The third step, true genetic innovation and transhuman genomes, will be most radical – but the idea that germline editing is irreversible is a canard.
You're a canard! (Couldn't resist it.)
I have seen Gattaca, yes - and the underlying premise of creating a genetic elite is a genuine problem - particularly with regard to health insurance, bearing on employment prospects, and many other quality-of-life issues. However, what I'd be more concerned about is a genetic arms race. Once you start down this path, how do you know when to stop?
Quoting David Pearce
How can you stop there when you've established an overt genetic competition - by the difference between augmented, and naturally conceived humans; you think the genetic elite will be satisfied with modest enhancements?
Quoting David Pearce
You don't intend to stop there. You're okay with genetic arms race. And you haven't answered my question:
Quoting counterpunch
Again, the human organism is 'designed' by the function-or-die algorithm of evolution to live within a complex living environment - such that you would not only have to account for how the 20,000 or so genes of the human genome interact, but how they relate to other organisms, including bacteria and viruses.
Don't make me out to be some anti-science religious nutjob. I value science, but argue it needs to be accepted as an understanding of reality, and applied systematically, starting with energy technology - and that this is the way to reduce suffering (i.e. avoid extinction).
I believe you are making the same mistake humankind has made in regard to science: using it as a tool box, in pursuit of your own subjectively, or ideologically conceived priorities - with little or no regard to a scientific understanding of reality. You despise evolution, but it works - in far more subtle ways than are dreamt of in your philosophy.
A nonhuman animal who suffers a grisly death at the hands of a shooter might well have experienced more suffering in the course of a lifetime if allowed to live unmolested. But killing (human or nonhuman) sentient beings for fun should be prohibited. A world of "high-tech Jainism" where life is sacred will be a happier world. I often use the language of deontology, but my reasoning is entirely consequentialist.
Quoting Down The Rabbit Hole
With funding, creation of artificial self-contained "happy biospheres" could begin today using small fast-breeders. The more ambitious pan-species project I discuss on https://www.gene-drives.com sounds science fiction. But the biggest obstacles are ethical-ideological, not technical.
A "genetic arms race" sounds sinister. It's not. Even inequalities in hedonic enhancement aren't sinister. Consider a toy example. Grossly over-simplifying, imagine if pushy / privileged parents arrange to have kids with a 50% higher hedonic set-point compared to the 30% boost of less privileged newborns. Everyone is still better off. Contrast getting a 30% pay increase if your colleagues get a 50% increase, which will probably diminish your well-being. OK, this example isn’t sociologically realistic. Any geneticists reading will be wincing too. But you get my point. Alleles and allelic combinations that predispose to low mood and high pain-sensitivity will increasingly be at a selective disadvantage as the reproductive revolution unfolds, but there won't be "losers" beyond some nasty lines of genetic code. To quote J.B.S. Haldane,
"The chemical or physical inventor is always a Prometheus. There is no great invention, from fire to flying, which has not been hailed as an insult to some god. But if every physical and chemical invention is a blasphemy, every biological invention is a perversion. There is hardly one which, on first being brought to the notice of an observer from any nation which had not previously heard of their existence, would not appear to him as indecent and unnatural."
(Daedalus; or, Science and the Future, 1924)
When to stop?
You know my negative utilitarian perspective. The world's last experience below hedonic zero will mark the end of the Darwinian era and the end of disvalue. But I don't for a moment predict that intelligent life will settle for mediocrity. Instead, I cautiously predict a hedonic +90 to +100 civilisation. Life lived within such a hedonic range will be unimaginably sublime.
While I do have some eccentric spiritual/metaphysical beliefs, I'm going to do my best to bracket them here. They help me in my own life, but when talking ideas I don't want to use them as dei ex machina. If those eccentric beliefs are legit, they should be able to deal with whatever happens in rational argument.
So:
I think you make a good point: if I look at my life and my suffering, and spin a narrative where I had to suffer what I did in order to get where I am now (i.e. a state I find better than where I was before) then doesn't that lead to me suggesting that others go through similar suffering?
I agree, right away, that such a line of thought is abhorrent.
I shift it this way:
Instead of saying, e.g. 'you too have to suffer corporal punishment, as I did,' I would, if I had kids, understand they're entering into a dicey space, and convey to them (through all the means parents have) 'you are loved, but shit's going to be hard. You're going to need to learn to feel and make sense of suffering/hurt/pain/heartbreak'
The meaning isn't in the particular suffering. Meaning is produced through a stance, or mode-of-being - what will come will come; what I have to do, over time, is learn how to make sense of it. I have to cultivate a meta-capacity for undergoing and recovering from suffering.
I think this scales from -100 to n-1 (where n is a state where there is no suffering at all.)
In raising my kids I simultaneously
(1) aim to reduce their suffering
&
(2) cultivate their capacity to work through suffering.
But, as you correctly point out, some shit is so fucked up, this doesn't work:
There are overwhelming sufferings - traumas - that so flood the victims, knocking over all their sense-making categories, that there's no nice, neat way to wrap it up. I don't see suicide as a weakness of will, or failure to make sense as one should - I see, usually, justified desperation in the face of irremediable suffering, psychological or social double binds, etc. I had a nervous breakdown in my twenties, spent some time in a psych ward - and the chaotic pain you see there blasts away moral and religious categories of suffering in an instant. You don't forget it.
So I'm not fetishizing suffering either. I was once an antinatalist - and while I only am familiar with Benatar through osmosis, I once went deep into Schopenhauer, Beckett & (to a lesser extent) Cioran. I'm coming at this from the lens of : Ok, we're in it; and, being in it, how to proceed?
My concern is in some ways about (dialectical) tempo. I have reservations about too readily positing a suffering-free state from a suffering-saturated state. One way to look at this is from a Bayesian lens. As a student of history, one of my (strong) priors is that utopian projects tend to be inverted mirrors of present-suffering, and so create new forms of suffering in doing away with the old (they can see how to reverse present suffering, but, understandably, can't anticipate the conditions and flavor of future suffering.)
To me, this seems like a historical hard limit: you only know what you know now. Now, my position isn't that we should never knock down Chesterton's Fence (besides, I think it's inevitable - beyond good and evil, as a matter of history - that all fences eventually come down), but I also doubt strongly that we can know now what we're knocking down, and the ramifications of that breaking-down, when the state of scientific knowledge and historical reality is accelerating at breakneck pace. It's not that I doubt not-suffering would be better - it's that thinking we know now what that would mean seems unlikely. Again, back to Bayes - our understanding shifts dramatically, again and again. The historical evidence is overwhelming: what we think we understand now is likely only a scaffolding for another paradigm shift. The people in the (recent!) past couldn't know then the frame-shattering things we know about neuroscience now. But we can't know now what the people in the (not-too-distant!) future will know. And that future knowing will likely not be simply a filling-in of the gaps in our current paradigm, but an overturning of our paradigms altogether. (It's possibly not - but we would need really strong philosophical reasons to overturn the priors we get from studying how these things tend to unfold historically.)
I agree that reducing suffering (and increasing flourishing) is the best orienting, regulative idea, for our ethics, but implementation will have to unfold gradually (or at least in tandem with our understanding) - and because of that, I think our best bet is to cultivate a focus - really cultivate a focus -on the here-and-now, and only then tentatively venture out toward widening time horizons (and how far out can we really see?)
The worst source of severe and readily avoidable suffering in the world is simply remedied. Without slaughterhouses, the entire industrialized apparatus for exploiting and murdering sentient beings would collapse. What's needed isn't Zen-like calm, but a fierce moral urgency and vigorous political lobbying to end the animal holocaust. By contrast, reprogramming the biosphere to eradicate suffering is much more ambitious in every sense. Yes, the "regulative idea" of ending involuntary suffering should inform policy-making and ethics alike. And society as a whole needs to debate what responsible parenthood entails. People who choose to create babies "naturally" create babies who are genetically predisposed to be sick by the criteria of the World Health Organization's own definition of health. By these same criteria, most people alive today are sick - often severely so. Shortly, genetic medicine will allow the creation of babies who are predisposed to be (at worst) occasionally mildly unwell. A reproductive revolution is happening this century; and the time to debate it is now.
Zen-like calm doesn't preclude fervent participation.
(I'm laying myself bare to criticism - wouldn't be surprised if there's a revisionist account of Thích Quảng Đức I don't know.)
But fervency bred from calm here-and-now feels strong to me. I vibe with that, it feels right. No arguments really now: I'd have to pivot to literature. But that's why I mention that I've also gone deep into antinatalism - that suggests we've both touched the harsh nerve of pain, really touched it, and realized how frame-shatteringly painful real pain is.
But couldn't someone ingenuously frame transhumanism as a reflexive reaction (perfect inversion) to holding your hand to the coals and pulling it back (a world - *hands pulling back* - where hands will never be burnt)?
Yes. Words fail to communicate the awfulness of severe pain. Extreme suffering corrodes one's values, personality and life itself. Philosophers are widely supposed to love knowledge. But there are some things best left unknown. That said, cultivated ignorance is no excuse for inaction. I hope that oft-derided philosophers will be at the forefront of the abolitionist project. The biotech and AI revolution means that blueprints are now feasible for eradicating suffering altogether in favour of genetically programmed gradients of well-being. A dietary, genetic, reproductive and ecological revolution will eliminate the root of all evil. Whether your hero is Bentham- or Buddha-inspired, will bioethicists rise to the challenge?
I absolutely respect your devotion to mitigating clearly-defined evils (to say you're doing much more than me to help others would be a wild understatement) - but how do you sustain your pinpointing of (biologically based, and so capable-of-being-engineered-out) evil against, say, the knocking-down-Chesterton's-fence argument? What I'm trying to get at is, it feels to me you have full faith in the current scientific framing (an end of scientific framing, à la the much-discussed political 'end of history'). Is that fair?
From the 26 letters on my keyboard I can write more than a million English words, about 170,000 in common usage, and string these words into a virtually infinite number of meaningful sentences consistent with correct syntax and grammar. There are 118 known elements. Four or five (possibly more) fundamental forces, when you consider dark matter/energy. I think the end of scientific history may be some way off yet.
Thinking in those terms, and bearing in mind Chesterton's fence - what about epigenetic engineering? Epigenetics is the study of changes in organisms caused by modification of gene expression rather than alteration of the genetic code itself. For example, it's been shown that starvation suffered by ancestors can alter the epigenetic expression of genes associated with metabolism in subsequent generations.
This is very strange, and quite contrary to the supposed unidirectional nature of Darwinian (or Mendelian) genetics. It's like the blacksmith's son inheriting big arms. It's not supposed to work like that. It's the Lamarckian heresy come back to bite Darwin in the ass! But it does seem to be real. The mechanisms are not currently well understood - which speaks to Chesterton, or does it - because arguably, epigenetic engineering could bypass many of the moral dilemmas associated with germline genetic alteration foisted on subsequent generations, in that gene expression could be altered ontogenically, as opposed to phylogenically, meaning you could have informed consent.
Science doesn't understand existence. Our conceptual scheme is desperately inadequate. Cosmology is in flux, the foundations of quantum mechanics are rotten (cf. https://www.hedweb.com/quora/2015.html#measurementproblem) and scientific materialism is inconsistent with the entirety of the empirical evidence:
https://www.hedweb.com/quora/2015.html#physexplan
However, the case for reprogramming the biosphere to end suffering is still compelling.
Compare surgical anaesthesia: https://www.general-anaesthesia.com. Why phenomenal pain exists isn't understood. (Why aren't we zombies?) Nor is the existence of phenomenal binding. (Why aren't we micro-experiential zombies?) Consequently, a Chesterton's-fence argument might suggest we should shun pain-free surgery until these mysteries are elucidated. Moreover, the mechanism of anaesthetics is still elusive. Surgical anaesthesia itself carries gross and subtle risks. But in practice, I'd insist on anaesthesia (as distinct from just a muscle-paralysing agent like curare) before surgery. I bet you would too. Furthermore, I'd argue that every other patient is entitled to surgical anaesthesia as well (I'm not special). Reckless? No, we are weighing risk-reward ratios. Likewise with a biohappiness revolution. Mental and physical pain are frightful. They will shortly become genetically optional. Preserving our information-sensitivity keeps our collective options open. Humanity stands on the brink of post-Darwinian life: a "civilisation" in every sense of the term. We should facilitate the transformation.
In our discussion, I've glossed over the role of transgenerational epigenetic inheritance (cf.
https://www.sciencemag.org/news/2019/07/parents-emotional-trauma-may-change-their-children-s-biology-studies-mice-show-how) and indeed epigenetic editing to enhance mood, motivation and analgesia in existing human and nonhuman animals (cf. "'Dead' Cas9-CRISPR Epigenetic Repression Provides Opioid-Free Pain Relief with No Side Effects":
https://www.genengnews.com/news/dead-cas9-crispr-epigenetic-repression-provides-opioid-free-pain-relief-with-no-side-effects/ ).
The catch-all term "biological-genetic strategy" to defeat suffering is intended to include these interventions – and more besides.
Quoting counterpunch
The question to ask is: What should be the "default settings" of new life? Should we continue to create people genetically predisposed to a ghastly range of unpleasant experiences they will only later be able to palliate? Or should we design healthy people:
https://www.who.int/about/who-we-are/constitution
?
Quoting David Pearce
Why? You said:
Quoting David Pearce
I asked you repeatedly about germline intervention. I wrote:
Interfering in the human genome, so altering every subsequent human being who will ever live, is a risk that's not justified by depression
— counterpunch
You replied:
Quoting David Pearce
You haven't just glossed over epigenetic engineering - which could be performed on the adult individual, with their consent. You omitted to mention entirely an approach that would directly address concerns I have raised, after telling us that we need to acquaint ourselves with what's technically feasible so that we can have a serious ethical debate. Is not consent a serious ethical consideration?
Germline interventions are not irreversible. They merely change the genetic default. Might a future hyperthymic civilisation revert to creating babies with high-pain, low-mood (etc) alleles and allelic combinations? It's technically feasible. Likewise breeding babies with alleles for cystic fibrosis and other nasty genetic disorders. But such scenarios lack sociological credibility.
Nor, apparently, necessary to address the problem.
Children don't consent to be born. If lack of prior consent is the key issue, one should stay child-free.
Quoting counterpunch
If one believes that antinatalists are wrong to condemn baby-making as inherently unethical, then one must show that genetic experimentation can and will be conducted responsibly. I'm not convinced that responsible experimentation is yet feasible. But we now at least know enough to mitigate the harm of coming into existence in a Darwinian world.
Children can't consent to be born - because they don't exist, and for a long time after they are born they are not deemed responsible enough to give consent. Consent is the purview of responsible adults, and it's responsible adults who have the ability to make germline genetic interventions on behalf of their offspring - and all subsequent generations. The question is: should they make germline interventions on behalf of their unborn offspring, and grandchildren etc., when epigenetic engineering suggests that detrimental conditions can be treated epigenetically, when they are adults - able to give consent?
Quoting David Pearce
I do not regard causing suffering as inherently unethical. Suffering allows us to navigate the world by teaching us to avoid that which is harmful. The fact that a child, once born, is destined to suffer is just part of the learning process. Depriving the child of the ability to suffer is harmful - a harm you would inflict without consent - when, again, epigenetics allows for treatment of unnecessary suffering of the individual, with consent, without forever after altering the human genome. Your omission of individual epigenetic therapy on adults, in favour of germline genetic engineering on unborn offspring, remains unexplained.
"Responsible" adults are engineered by evolution to maximize the inclusive fitness of their genes, not impartially to weigh whether it's ethical to generate more pain-ridden Darwinian malware. I coo over babies as much as anyone. But on an intellectual level, I recognise they are the victims of our evolutionary psychology.
Quoting counterpunch
If I (or transhumanists in general) advocated getting "blissed out", thereby robbing people of their ability to learn, then you might have a point. However, you may recall we urge intelligence-amplification. Sentient beings with a hedonic range of +70 to +100 can learn as well as savages with a hedonic range of -10 to 0 to +10.
Apologies, I've done my fallible best to respond, and cited your comments verbatim in my replies. Possibly the conceptual gulf that separates us is too large. Either way, I promise I'm as keen on fostering education as you are – just not by means of suffering:
https://www.gradients.com/
The question I asked is quite simple: why do you advocate genetic interventions - which are passed on through germ cells to subsequent generations - when epigenetic therapies are not passed on to subsequent generations, but merely affect the expression of genes in the individual?
It's using a sledgehammer to crack a nut. Why - when there are nutcrackers right there? Were you unaware of epigenetics? Or did you want to scare people back to the Church - and/or postmodern subjectivism - with a sci-fi nightmare?
Remediation is harder than prevention. Preimplantation genetic screening and counselling are available now. By contrast, the gene-repressing strategy I cited above for pain reduction has been investigated only in "mouse models". I hope human trials can begin soon.
Magnifique! Superb! Excelente! Carry on. Just ignore me!
Please forgive me if I've missed a post / point you'd like to see addressed. If you let me know, I'll do my best!
(High-tech Jainism: https://www.hedweb.com/transhumanism/neojainism.html)
Sorry for the double post - just for the "@" reminder notification thingy.
Recall I'm an antinatalist:
https://www.hedweb.com/quora/2015.html#agreeantinatal
My view of life is bleaker than David Benatar's. So no, I won't be conducting genetic experiments on anyone but myself. But selection pressure means that the future belongs to natalists. Natalists conduct around 130 million genetic experiments each year. Should prospective parents be encouraged to mitigate the suffering their experimentation creates? Or should pain-ridden Darwinian malware be encouraged to proliferate indefinitely in its existing guise?
The transhumanist option strikes me as more humane:
https://www.hedweb.com/transhumanism/
Within reason, sure. I'm sure you can understand, and perhaps even respect, your opposition calling this "transhumanism" - not your motives, but your attempt to actualize them - an abominable "Pandora's Box". There is already gene editing, and there's been a fierce backlash against it. You yourself said one should carefully, thoroughly and exhaustively account for and acknowledge the potential risks. For example, say one is able to make a person no longer need food or water, able to live for an entire decade without them. This would cut carbon emissions, animal slaughter, and squalid living conditions worldwide by an unimaginable amount. This is good. Now imagine that person becomes trapped in a cave. A normal person would eventually starve to death after a month or so, thus relieving their suffering. A transhuman in this case would live and suffer in their cavernous prison for 10 years. This is bad. The examples only get more extreme - torture by political powers or criminals, for example. The body shutting down after enough deprivation or trauma is a blessing, not a curse, I dare say.
Quoting David Pearce
David... I've always encouraged that one should at least consider the idea of separating the art from the artist, as far as philosophies and other creative works go. But in this example it's simply not the case. Everything you perceive, have perceived, and ever will perceive is the byproduct and result of "Darwinian malware" - all your ideas, beliefs, motivations, and suggestions are of the same as well. Life sucks. You wish to make it better. That's admirable. However, "the road to hell is paved with good intentions" - and no, that doesn't mean what others think it does. More of a Pandora's Box, better the devil you know, such-and-such is a double-edged sword, one step forward two steps back, etc.
Where in this chronology would you call a halt:
https://en.wikipedia.org/wiki/Timeline_of_medicine_and_medical_technology ?
Or do you advise only against future innovation?
I don't trust humans either. But if we choose to conserve our genetic source code, then millions of years of obscene suffering still lie ahead of us. By contrast, even a handful of genetic tweaks could dramatically reduce the burden of suffering in the world:
https://www.hedweb.com/social-media/paradise.pdf
You are an atheist, friend. Whether closeted or flaunted, it is one and the same. You will never see the big picture in your current state of beliefs. Tell me: do you really think you knew the world as it was when you were 6? What about when you were 12? Or 20? Or now? The answer has always been the same. A resounding yes. So why do you limit yourself as to further knowledge and potential? The answer is the same as why you did when you were 6: ignorance. Pray some. In sincerity. I dare you.
Current scientific evidence does not support the therapeutic benefit of prayer:
https://pubmed.ncbi.nlm.nih.gov/16569567/
("Study of the Therapeutic Effects of Intercessory Prayer (STEP) in cardiac bypass patients: a multicenter randomized trial of uncertainty and certainty of receiving intercessory prayer")
I am sceptical of the adequacy of our existing conceptual scheme:
https://www.hedweb.com/quora/2015.html#philmat
But phasing out the biology of suffering is not just morally imperative. Life animated by gradients of bliss promises an intellectual revolution:
https://www.hedweb.com/quora/2015.html#psychedelics
Kudos for actually engaging. I appreciate you keeping the implicit promise that many others did not.
Cheers!
Creativesoul, you are very kind. It's much appreciated.
Second this. :clap:
Baden, very many thanks.
Apologies if I've left any loose ends.
You've been talking with us for two months! Time flies. Very many thanks for all you've done.
fdrake, thank you. And good heavens – you're right about time!
I hope it's nothing to do with enjoying having the last word...
https://www.psychologicalscience.org/news/reason-seen-more-as-weapon-than-path-to-truthfor-centuries-thinkers-have-assumed-that-the-uniquely-human-capacity-for-reasoning-has-existed-to-let-people-reach-beyond-mere-perception-and-reflex-in-the.html