We have a pretty good sense of how digestion works. And our grasp of thermodynamics is excellent. We know that there are three bones – the smallest in our bodies – in the middle ear, and that stars produce light because of thermonuclear fusion. While I’m skeptical of “progressionist” claims that the human condition has inexorably improved since the Neolithic revolution (the proliferation of technology-related existential risks being one reason for skepticism), it seems that science has made genuine progress.
The knowledge we now have about what the universe is like and how it works[1] far exceeds that of our ancestors – even just a few generations ago.
One finds the exact opposite situation in philosophy. There has been little to no significant progress on many of the most fundamental issues, such as the nature of causation, the self, knowledge, the a priori, meaning, and even consciousness. (Note that a causal explanation is not the same as a constitutive one; brain-thought correlations do not tell us what consciousness is!) Why would this be?
Does the stagnation of philosophy suggest that its problems are intrinsically hard – perhaps even more difficult to apprehend than, say, black holes and quantum tunneling? Some philosophers answer “No – or at least not necessarily.” It might be that the question of what exactly a cause is has an incredibly easy answer, except that the answer involves one or more concepts that our three-pound Jell-O brains simply can’t grasp. Not in the sense that an ancient relative of ours would find it hard to grasp what “radioactive decay” refers to, but in the sense that a dog could never, in principle, make sense of the concept of a stock market – or a quark, or a palindrome. The mental machinery pumping out thoughts in the dog’s tiny skull simply doesn’t have the conceptual resources to get these ideas.
Philosophers call this ineluctable situation “cognitive closure,” and we may distinguish two versions of it. The first involves having the mental capacity to ask a question but not to answer it. This appears to be the case with a range of philosophical topics, from causation to consciousness, knowledge to meaning. The answers seem to dangle in front of our minds’ eyes, yet no matter how hard we struggle to clutch them, they continually evade our reach.
The second kind of closure is defined by the inability to even ask the question, much less answer it. This is the cognitive prison our canine friends find themselves in with respect to quarks and palindromes. Their predicament is marked by a second-order ignorance – ignorance of their ignorance of concepts X, Y, and Z. In Rumsfeldian terms, the concepts aren’t merely “unknown unknowns” but “unknowable unknowns.” They lie forever beyond the horizon of intelligibility.
As just alluded to, the boundary between “mysteries” (perennial unknowns) and “problems” (in principle knowable even if currently unknown) is entirely relative to types of minds. It is, in other words, a species-specific distinction: the boundary line is drawn differently for Canis lupus than for Homo sapiens. It follows from this relativism that a superintelligence – whether taking the form of a cognitively enhanced cyborg or a wholly artificial machine – could potentially have access to a vastly expanded library of concepts that are permanently unavailable to us.
As such, it could grasp a range of ideas, beliefs, theories, hypotheses, explanations, and so on, that we can’t even begin to fathom. Thus if “we” – meaning us and our posthuman progeny – want to actually make some progress in philosophy, it may be that the only way forward is through the creation of minds that are superintelligent. This is essentially what the transhumanist philosopher Mark Walker has proposed: rather than dumb down the questions, smarten up the thinker.[2] In his words, “The idea … is that it is not we who ought to abandon philosophy, but that philosophy ought to abandon us.” Call this inflationism.
The transhumanist literature distinguishes between “strong” and “weak” supersmarts, where the former is qualitative and the latter quantitative.[3] The situation above involves superintelligence of the strong variety (although it doesn’t preclude the other kind). But weak superintelligence could also be incredibly useful for philosophy. Why? Because philosophers are institutionally permitted to examine the “big picture” – to work towards an understanding of “how things in the broadest possible sense of the term hang together in the broadest possible sense of the term,” as Wilfrid Sellars put it.
The difficulty in achieving this stems from the word “broadest.” While collective human knowledge has undergone an exponential climb in the past several centuries, our individual capacities have remained more or less fixed. The result is that the relative ignorance of individuals is at an all-time high.[4] You and I and even the most polymathic scholar are pathetically unaware of truly oceanic realms of “known knowables.”[5] This prevents us from seeing the big picture. The two primary constraints here are memory – use it or lose it! – and time – even with eidetic abilities the day just ain’t long enough.
But a weak superintelligence could rectify this problem of “size.” Although it would not be able to think any thoughts that are new in kind about the peculiar nature and workings of reality, it could potentially “remember” every fact, theory, and notion that humanity has so far registered as knowledge. Furthermore, since an AI would by definition be running on hardware in which components communicate at roughly the speed of light, it could easily overcome the time constraint as well.
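To get a feel for why the time constraint loosens, here is a minimal back-of-the-envelope sketch. The figures are commonly cited ballpark values assumed for illustration (not claims from the argument above): fast myelinated axons conduct signals at roughly 100 m/s, while electromagnetic signals in hardware travel near the speed of light.

```python
# Back-of-the-envelope comparison of signal speeds.
# Assumed ballpark figures, for illustration only:
# fast myelinated axons conduct at roughly 100 m/s, while signals in
# electronic hardware travel at (or near) the speed of light.

NEURON_SPEED_M_PER_S = 1.0e2   # ~100 m/s, fast myelinated axon
LIGHT_SPEED_M_PER_S = 3.0e8    # ~3 x 10^8 m/s, speed of light

ratio = LIGHT_SPEED_M_PER_S / NEURON_SPEED_M_PER_S
print(f"Hardware signals outpace neural ones by a factor of ~{ratio:,.0f}")
# Prints: Hardware signals outpace neural ones by a factor of ~3,000,000
```

On these assumptions the gap is a factor of a few million, which is why a mind running at hardware speeds could, in effect, pack millions of subjective “days” of reading and reflection into one of ours.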
Putting this all together, a mind that’s superintelligent in both the weak and strong senses has the potential to satisfy Sellars’ aim of “seeing the whole picture,” as well as to solve the still-unanswered, age-old philosophical questions about life, the universe, and everything. Perhaps the transhumanist agenda offers the only path to philosophical enlightenment.
[1] I.e., the properties of objects that exist in the cosmos and their various causal relations.
[2] A superintelligence might also make progress in scientific areas like fundamental physics, where a “theory of everything” still eludes us, and the best proposed idea so far posits spatial dimensions beyond the three of length, width, and height – dimensions that are, at best, only vaguely intelligible to even the brightest minds.
[3] I’m expanding the definition of a weak superintelligence to include not only information processing speed but information organization and retention as well.
[4] That is, precisely because our collective knowledge is at an all-time high.
[5] We are thus forced to rely on a complex hierarchy of divided cognitive labor to navigate the intellectual landscape, since we can’t do it on our own.
Source: https://ieet.org/index.php/IEET2/more/torres20141003