An empirical investigation of hedonistic accounts of animal welfare

“Many scientists studying animal welfare appear to hold a hedonistic concept of welfare – whereby welfare is ultimately reducible to an animal’s subjective experience. […] analysis showed welfare judgments depended on the objective features of the animal’s life more than they did on how the animal was feeling: a chimpanzee living a natural life with negative emotions was rated as having better welfare than a chimpanzee living an unnatural life with positive emotions. We also found that the supposedly more purely psychological concept of happiness was also influenced by normative judgments about the animal’s life. For chimpanzees with positive emotions, those living a more natural life were rated as happier than those living an unnatural life. Insofar as analyses of animal welfare are assumed to be reflective of folk intuitions, these findings raise questions about a strict hedonistic account of animal welfare. More generally, this research demonstrates the potential utility of using empirical methods to address conceptual problems in animal welfare and ethics.”

https://journals.plos.org/plosone/article?id=10.1371%2Fjournal.pone.0193864

The Challenge of Determining Whether an A.I. Is Sentient, by Carissa Véliz

“…sentience may go unnoticed for years, as was the case with Martin Pistorius [1] … Because brain death can be misdiagnosed [2], and because we have little understanding of the necessary and sufficient causes for consciousness and therefore cannot be certain of when someone might be in pain, some experts have called for the use of anesthesia [3] for organ donation procedures.”

Read more:

https://slate.com/technology/2016/04/the-challenge-of-determining-whether-an-a-i-is-sentient.html

[1] https://www.ted.com/talks/martin_pistorius_how_my_mind_came_back_to_life_and_no_one_knew?language=en

[2] https://www.nytimes.com/2012/04/01/books/review/the-undead-by-dick-teresi.html?_r=0

[3] https://onlinelibrary.wiley.com/doi/full/10.1046/j.1365-2044.2000.055002105.x

 

Conversations about the badness of involuntary suffering

I have the intuition that voluntary suffering might not be bad. This comes primarily from personal experience: I often feel sad (sympathy) when I encounter sad stories or situations, but I don’t have the intuition that this is bad for me. I don’t feel that I ought to look away or stop feeling sad in response, and I often want to think, learn, or read more about these situations even when doing so makes me sadder (as it usually does). This happens with both real and fictional situations (I was a fan of tragedies for a while). Furthermore, in the past, when I’ve been depressed about my own life, I sometimes didn’t want to be happy and even preferred to be miserable.

It’s suffering that’s bad, intrinsically (though suffering can be instrumentally good)

I’m a hedonistic utilitarian, and I think that even voluntary suffering is intrinsically bad, as long as it’s still suffering at that point.

Buddhism would say that if you experience sadness without craving that the sadness go away, you continue to feel sadness but you don’t suffer from it.

My intuition is that suffering is bad, but sometimes (all things considered) I prefer to suffer in a particular instance (e.g. in service of some other value). In such cases it would be better for my welfare if I did not suffer, but I still prefer to.

I think we don’t quite have the words to distinguish between all these things in English, but in my mind there’s something like

  • pain – the experience of negative valence
  • suffering – the experience of pain (i.e. the experience of the experience of negative valence)
  • expected suffering – the experience of pain that was anticipated, so one suffers only the pain itself
  • unexpected suffering – the experience of pain that was not anticipated, so one suffers both the pain itself and an additional meta-pain, since the unexpectedness itself carries negative valence

Of them all, unexpected suffering is the worst because it involves both pain and meta-pain.
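The taxonomy above can be sketched as a toy model. The numeric “badness” units and the `meta_pain` term are my own illustrative assumptions, not part of the original taxonomy:

```python
from dataclasses import dataclass

@dataclass
class Experience:
    pain: float     # first-order negative valence
    expected: bool  # was the pain anticipated?

def total_badness(e: Experience, meta_pain: float = 1.0) -> float:
    """Pain plus, if unexpected, the extra 'meta-pain' of suffering from
    the unexpectedness itself (an assumed additive term)."""
    return e.pain + (0.0 if e.expected else meta_pain)

# Unexpected suffering comes out worst, matching the claim above:
assert total_badness(Experience(pain=2.0, expected=False)) > \
       total_badness(Experience(pain=2.0, expected=True))
```

The additive form is just one way to encode the claim that unexpected suffering stacks meta-pain on top of pain; nothing in the taxonomy fixes the actual arithmetic.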

I noticed that reading only “positive” and “joyous” stories eventually feels empty. It seems that the sad elements of a story bring more depth than the fun or joyous ones. In that sense, sadness in stories acts as a signal of depth, but also as a way to access some deeper part of our emotions and inner life.

Source

 

On Transhumanism and Philosophy by Phil Torres

We have a pretty good sense of how digestion works. And our grasp of thermodynamics is excellent. We know that there are three bones – the smallest in our bodies – in the middle ear, and that stars produce light because of thermonuclear fusion. While I’m skeptical of “progressionist” claims that the human condition has inexorably improved since the Neolithic revolution (the proliferation of technology-related existential risks being one reason for skepticism), it seems that science has made genuine progress.

The knowledge we now have about what the universe is like and how it works [1] far exceeds that of our ancestors – even just a few generations ago.

One finds the exact opposite situation in philosophy. There has been little to no significant progress on many of the most fundamental issues, such as the nature of causation, the self, knowledge, the a priori, meaning, and even consciousness. (Note that a causal explanation is not the same as a constitutive one; brain-thought correlations do not tell us what consciousness is!) Why would this be?

Does the stagnation of philosophy suggest that its problems are intrinsically hard – perhaps even more difficult to apprehend than, say, black holes and quantum tunneling? Some philosophers answer “No – or at least not necessarily.” It might be that the question of what exactly causes are is incredibly easy to answer, except that the answer includes one or more concepts that our three-pound Jell-O brains simply can’t grasp. Not in the sense that an ancient relative of ours would find it hard to grasp what “radioactive decay” refers to, but in the sense that a dog could never, in principle, make sense of the concept of a stock market – or a quark, or a palindrome. The mental machinery pumping out thoughts in the dog’s tiny skull simply doesn’t have the conceptual resources to get these ideas.

Philosophers call this ineluctable situation “cognitive closure,” and we may distinguish two versions of it. The first involves having the mental capacity to ask a question but not to answer it. It appears that this is the case with a range of philosophical topics, from causation to consciousness, knowledge to meaning. The answers seem to dangle in front of our minds’ eyes, yet no matter how hard we struggle to clutch them they continually evade our reach.

The second kind of closure is defined by the inability to even ask the question, much less answer it. This is the cognitive prison our canine friends find themselves in with respect to quarks and palindromes. Their predicament is marked by a second-order ignorance – ignorance of their ignorance of concepts X, Y, and Z. In Rumsfeldian terms, the concepts aren’t merely “unknown unknowns” but “unknowable unknowns.” They lie forever beyond the horizon of intelligibility.

As just alluded to, the boundary between “mysteries” (perennial unknowns) and “problems” (in principle knowable even if currently unknown) is entirely relative to types of minds. It is, in other words, a species-specific distinction: the boundary line is drawn differently for Canis lupus than for Homo sapiens. It follows from this relativism that a superintelligence – whether taking the form of a cognitively enhanced cyborg or a wholly artificial machine – could potentially have access to a vastly expanded library of concepts that are permanently unavailable to us.

As such, it could grasp a range of ideas, beliefs, theories, hypotheses, explanations, and so on, that we can’t even begin to fathom. Thus if “we” – meaning us and our posthuman progeny – want to actually make some progress in philosophy, it may be that the only way forward is through the creation of minds that are superintelligent. This is essentially what the transhumanist philosopher Mark Walker has proposed: rather than dumb down the questions, smarten up the thinker. [2] In his words, “The idea … is that it is not we who ought to abandon philosophy, but that philosophy ought to abandon us.” Call this inflationism.

The transhumanist literature distinguishes between “strong” and “weak” supersmarts, where the former is qualitative and the latter quantitative. [3] The situation above involves superintelligence of the strong variety (although it doesn’t preclude the other kind). But weak superintelligence could also be incredibly useful for philosophy. Why? Because philosophers are institutionally permitted to examine the “big picture” – to work towards an understanding of “how things in the broadest possible sense of the term hang together in the broadest possible sense of the term,” as Wilfrid Sellars put it.

The difficulty in achieving this stems from the word “broadest.” While collective human knowledge has undergone an exponential climb in the past several centuries, our individual capacities have remained more or less fixed. The result is that the relative ignorance of individuals is at an all-time high. [4] You and I and even the most polymathic scholar are pathetically unaware of truly oceanic realms of “known knowables.” [5] This prevents us from seeing the big picture. The two primary constraints here are memory – use it or lose it! – and time – even with eidetic abilities the day just ain’t long enough.

But a weak superintelligence could rectify this problem of “size.” Although it would not be able to think any new (in kind) thoughts about the peculiar nature and workings of reality, it could potentially “remember” every fact, theory, and notion that humanity has so far registered as knowledge. Furthermore, since an AI would by definition be running on hardware in which components communicate at roughly the speed of light, it could easily overcome the time constraint as well.

Putting this all together, a mind that’s superintelligent in both the weak and strong senses has the potential to satisfy Sellars’ aim of “seeing the whole picture,” as well as to solve the still-unanswered, age-old philosophical questions about life, the universe, and everything. Perhaps the transhumanist agenda offers the only path to philosophical enlightenment.

[1] I.e., the properties of objects that exist in the cosmos and their various causal relations.

[2] A superintelligence might also make progress in scientific areas like fundamental physics, where a “theory of everything” still eludes us, and the best proposed idea so far posits spatial dimensions beyond the three of length, width, and height – dimensions that are, at best, only vaguely intelligible to even the brightest minds.

[3] I’m expanding the definition of a weak superintelligence to include not only information-processing speed but also information organization and retention.

[4] That is, precisely because our collective knowledge is at an all-time high.

[5] We are thus forced to rely on a complex hierarchy of divided cognitive labor to navigate the intellectual landscape, since we can’t do it on our own.

Source: https://ieet.org/index.php/IEET2/more/torres20141003

Mutations in sodium-channel gene SCN9A cause a spectrum of human genetic pain disorders

Individuals with congenital indifference to pain have painless injuries beginning in infancy but otherwise normal sensory responses upon examination. Perception of passive movement, joint position, and vibration is normal, as are tactile thresholds and light touch perception. There is intact ability to distinguish between sharp and dull stimuli and to detect differences in temperature. The insensitivity to pain does not appear to be due to axonal degeneration, as the nerves appear to be normal upon gross examination (8). The complications of the disease follow the inability to feel pain, and most individuals will have injuries to lip or tongue caused by biting themselves in the first 4 years of life. Patients have frequent bruises and cuts, usually have a history of fractures that go unnoticed, and are often only diagnosed because of limping or lack of use of a limb. The literature contains very colorful descriptions of patients with congenital inability to perceive any form of pain.

Read more

Synesthesia as unusual sense, by Craig Weinberg

“The fact of synesthesia (the experience of multiple and unusual sense modalities associated with events that are commonly experienced with one sense modality) shows that there need not be any connection between physical conditions and consciousness. Someone might play a piano and see musical notes at the same time, and that would be a form of synesthesia, but they are still seeing something visible and hearing something audible. I think it’s useful to distinguish visible (Aesthetic Qualia) from optical (Anesthetic Physical Mechanism) and audible (AQ) from sonic (APM). All sense qualia can be separated from physics or information this way.”

Post by Craig Weinberg

Researchers discover technique to alter a patient’s DNA that could cut chronic agony for sufferers

Scientists have discovered how to switch off a key ‘pain gene’, dramatically raising hopes of a long-term treatment to relieve the agony of serious illness for millions.

The revolutionary technique alters a patient’s DNA, silencing a gene that transmits pain signals up the spine.

Preliminary studies on mice have already proven successful and US researchers plan to start human trials next year, potentially offering terminally-ill patients and those with chronic conditions the prospect of pain-free care.

Suppressing this ‘pain gene’ – called SCN9A – could be used as an alternative to morphine, helping cancer patients stay on chemotherapy longer and enabling them to live their final months more fully. Navega’s method involves placing the CRISPR-editing tool inside particles of a harmless virus, which acts like a Trojan horse.

These virus particles are injected into the spine, much like an epidural, after which they ‘infect’ neuron cells. Once inside a cell, the CRISPR tool is released and gets to work silencing the pain gene.

Read more

The emotional need for “scenario completion” and the difference between a cook and a chef

The need for “scenario completion”

“Fascinating concept that I came across in military/police psychology dealing with the unique challenges people face in situations of extreme stress/danger: scenario completion. Take the normal pattern completion that people do and put fear blinders on them so they only perceive one possible outcome and they mechanically go through the motions *even when the outcome is terrible* and there were obvious alternatives. This leads to things like officers shooting *after* a suspect has already surrendered, having overly focused on the possibility of needing to shoot them. It seems similar to target fixation where people under duress will steer a vehicle directly into an obstacle that they are clearly perceiving (looking directly at) and can’t seem to tear their gaze away from. Or like a self-fulfilling prophecy where the details of the imagined bad scenario are so overwhelming, with so little mental space for anything else, that the person behaves in accordance with that mental picture even though it is clearly the mental picture of the *un*desired outcome.

I often try to share the related concept of stress induced myopia. I think that even people not in life or death situations can get shades of this sort of blindness to alternatives. It is unsurprising when people make sleep a priority and take internet/screen fasts that they suddenly see that the things they were regarding as obviously necessary are optional. In discussion of trauma with people this often seems to be an element of relationships sadly enough. They perceive no alternative and so they resign themselves to slogging it out for a lifetime with a person they are very unexcited about. This is horrific for both people involved.”

Romeo Stevens

 

…and the opposite: how does Elon’s software work?

The difference between the way Elon thinks and the way most people think is kind of like the difference between a cook and a chef. […]

Musk calls this “reasoning from first principles.” I’ll let him explain:

I think generally people’s thinking process is too bound by convention or analogy to prior experiences. It’s rare that people try to think of something on a first principles basis. They’ll say, “We’ll do that because it’s always been done that way.” Or they’ll not do it because “Well, nobody’s ever done that, so it must not be good.” But that’s just a ridiculous way to think. You have to build up the reasoning from the ground up—“from the first principles” is the phrase that’s used in physics. You look at the fundamentals and construct your reasoning from that, and then you see if you have a conclusion that works or doesn’t work, and it may or may not be different from what people have done in the past.

My favorite all-time quote might be Steve Jobs saying this:

When you grow up, you tend to get told the world is the way it is and your life is just to live your life inside the world. Try not to bash into the walls too much. Try to have a nice family life, have fun, save a little money. That’s a very limited life. Life can be much broader once you discover one simple fact. And that is: Everything around you that you call life was made up by people that were no smarter than you. And you can change it, you can influence it, you can build your own things that other people can use. Once you learn that, you’ll never be the same again.

[…]

Most people would have stuck with the Stanford program—because they had already told everyone about it and it would be weird to quit, because it was Stanford, because it was a more normal path, because it was safer, because the internet might be a fad, because what if he were 35 one day and was a failure with no money because he couldn’t get a good job without the right degree.

Musk quit the program after two days. The big macro arrow of his software came down on the right, saw that what he was embarking on wasn’t in the Goal Pool anymore, and he trusted his software—so he made a macro change.

He started Zip2 with his brother, an early cross between the concepts of the Yellow Pages and Google Maps. Four years later, they sold the company and Elon walked away with $22 million.

As a dotcom millionaire, the conventional wisdom was to settle down as a lifelong rich guy and either invest in other companies or start something new with other people’s money. But Musk’s goal formation center had other ideas. His Want box was bursting with ambitious startup ideas that he thought could have major impact on the world, and his Reality box, which now included $22 million, told him that he had a high chance of succeeding. Being leisurely on the sidelines was nowhere in his Want box and totally unnecessary according to his Reality box.

So he used his newfound wealth to start X.com in 1999, with the vision to build a full-service online financial institution. The internet was still young and the concept of storing your money in an online bank was totally inconceivable to most people, and Musk was advised by many that it was a crazy plan. But again, Musk trusted his software. What he knew about the internet told him that this was inside the Reality box—because his reasoning told him that when it came to the internet, the Reality box had grown much bigger than people appreciated—and that was all he needed to know to move forward. In the top part of his software, as his strategy-action-results-adjustments loop spun, X.com’s service changed, the team changed, the mission changed, even the name changed. By the time eBay bought it in 2002, the company was called PayPal and it was a money transfer service. Musk made $180 million.

source: https://waitbutwhy.com/2015/11/the-cook-and-the-chef-musks-secret-sauce.html

 

Bonus tip: about Tim Urban’s and Elon Musk’s idea of consciousness:

“One topic I disagreed with him on is the nature of consciousness. I think of consciousness as a smooth spectrum. To me, what we experience as consciousness is just what it feels like to be human-level intelligent. We’re smarter, and “more conscious” than an ape, who is more conscious than a chicken, etc. And an alien much smarter than us would be to us as we are to an ape (or an ant) in every way. We talked about this, and Musk seemed convinced that human-level consciousness is a black-and-white thing—that it’s like a switch that flips on at some point in the evolutionary process and that no other animals share. He doesn’t buy the “ants : humans :: humans : [a much smarter extra-terrestrial]” thing, believing that humans are weak computers and that something smarter than humans would just be a stronger computer, not something so beyond us we couldn’t even fathom its existence.”

Source: https://waitbutwhy.com/2015/05/elon-musk-the-worlds-raddest-man.html

List of Animals That Have Passed the Mirror Test

When conducting the mirror test, scientists place a visual marking on an animal’s body, usually with scentless paints, dyes, or stickers. They then observe what happens when the marked animal is placed in front of a mirror. The researchers compare the animal’s reaction to other times when the animal saw itself in the mirror without any markings on its body.

Animals that pass the mirror test will typically adjust their positions so that they can get a better look at the new mark on their body, and may even touch it or try to remove it. They usually pay much more attention to the part of their body that bears a new marking.
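The comparison described above – mark-directed behaviour with the mark present versus in unmarked baseline sessions – can be sketched as a toy scoring function. The counts and the threshold are invented for the sketch; this is not a published scoring protocol:

```python
def passes_mirror_test(marked_touches: int, baseline_touches: int,
                       min_increase: int = 3) -> bool:
    """Return True if mark-directed behaviour (touching or inspecting the
    marked body part in front of the mirror) rises clearly above the
    unmarked-baseline rate. The min_increase threshold is an assumption."""
    return marked_touches - baseline_touches >= min_increase

# An animal that inspects the mark far more once it is applied "passes":
print(passes_mirror_test(marked_touches=8, baseline_touches=1))  # True
```

Real studies use richer criteria (latency, duration, controls for tactile cues from the marking), but the core logic is this within-subject comparison against the animal’s own baseline.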

Currently, nine non-human animal species pass the mirror test. Not all individuals of each species pass, but many do.

Read more