The present page is part of the author’s series of articles on cognition.
|Well-known authors such as Richard Dawkins, Steven Pinker, and Jared Diamond have claimed that cognition seems important to humans only because of our understandably human-centered view of the world: if we were swifts, elephants, or woodpeckers, we might regard other features of our constitution, such as free flight, trunk bearing, or hole boring, respectively, as centrally important. Thus, human-like cognition, being an “evolutionary fluke” on their view, is unlikely to have evolved elsewhere in the universe. We are most probably alone, and intelligent aliens are highly unlikely! Can this be true? This article disagrees, not directly with the idea that aliens are highly unlikely, but with the idea that human cognition is a mere evolutionary fluke. Far from it: here it will be argued that human cognition constitutes a new stage in material complexity, which emerged after the physical, chemical, and biological stages (in that order) had already appeared. It follows that cognition, to be understood properly, must be treated not as just another biological feature but as a stage in its own right, and that cognitive science, far from being a parochial human concern, is a discipline on a par with the other (“hard”) physical sciences. As a consequence, intelligent aliens should in principle be more common in the universe than the above-mentioned professors would expect.|
|The view that we might not be as
special as we think we are is not new. It was suggested
roughly 2,300 years ago, when Aristarchus of Samos
proposed that the Earth is not at the center of the
world, but that it orbits the Sun. This idea, which was
impossible to verify experimentally in the ancient world,
conflicted with the religious view that the Earth — and
hence the humans inhabiting it — rests (or floats) at
the universal center. Further blows to the notion of a
special position for human beings came with the
independent rediscovery of this idea by Copernicus in the
16th century and its subsequent theoretical and
experimental justification, by Kepler and Tycho Brahe,
respectively; with the theory of evolution by Darwin in
1859, which entailed that humans are just another twig in
the evolutionary tree of living beings; and with Einstein’s
theories of relativity in 1905 and 1915, since, according
to them, 3-D space has no absolute center, nor is there
an absolute time ticking instantly everywhere in the
universe, and so any extraterrestrial culture could claim
a status equal to ours regarding its
location in universal space–time.
The “ousting” of humans from the center of creation — either literally or figuratively — has recently been carried one step further by biologists and cognitive scientists. Steven Pinker, in his influential “How the Mind Works” (Pinker, 1997), argues against astronomer and SETI (Search for Extra-Terrestrial Intelligence) enthusiast Frank Drake and his supposition that intelligence must have evolved somewhere else in the universe, by pointing out that there is no “trend” towards producing more intelligent species in evolution, and that extra-smart human brains, just like extra-long elephantine trunks, are the outcome of random evolutionary events:
Pinker then goes on to paraphrase an excerpt in which Drake, who argues in support of SETI, claims that the first species to develop intelligent civilizations will discover that it is the only such species, and that it should not be surprised because someone must be first. Pinker substitutes “trunk” for “intelligent civilization”, “powder itself with dust” for “develop electronic technology”, and “trunk-using” for “technology-using”, resulting in the following parody of Drake’s argument:
Pinker’s thesis was supported in a more recent publication by Richard Dawkins (Dawkins, 2004), in which Dawkins compares intelligence to the fine-tuned flight abilities of birds such as swifts: just as swifts are among the most capable flying machines in the world of flight-enabled species, so humans are the most capable thinking beings in the world of neuron-possessing species:
Dawkins then recalls and expands on Pinker’s idea:
In an earlier work, Jared Diamond put forth essentially the same idea, using woodpeckers and their ability to bore holes in wood as an example of a biological property that was honed to an extreme by natural selection (Diamond, 1992, Ch. 12). Few species bore holes in wood, according to Diamond, and none can compete with the amazing abilities of a woodpecker. Diamond concludes that we should not be surprised if we do not detect our likewise unique cognitive abilities elsewhere in the universe.
The late biologist Ernst Mayr, in a well-publicized debate with the late astronomer Carl Sagan, also argued that, out of the billions of species that have lived on our planet, only a single one evolved to be smart enough to develop radio technology for use in efforts to contact extraterrestrial civilizations. He concluded that the probability of such a species evolving must be around one in 50 billion (Mayr, 1995). Mayr fell into a black-or-white trap: a species either has or does not have human-like intelligence. This view overlooks the fact that cognition did not start with humans but evolved gradually from very lowly origins. Even slugs and oysters exhibit rudimentary cognition: as long as such creatures are able to perceive and negotiate their existence in their environment, they have cognition. By the same token, it would be incorrect to claim that the probability of a mountain rising above 8,800 m on Earth is one in several billion just because, out of the billions of bumps, dunes, hills, and mountain peaks on the surface of our planet, only one has risen above that altitude. Mayr’s biased view will concern us very little in the rest of this text.
It appears there is a consensus, at least among the above eminent scientists, that cognition of the human genre is an evolutionary fluke, unlikely to be replicated in extraterrestrial life. But a corollary is that cognitive science is motivated by our anthropocentric concerns. Science should provide an objective description of the natural world. If human intelligence is a fluke, then most of the research in cognitive science is human-centered, because it attributes to cognition an unjustifiably important role in universal affairs. If we were elephant scientists, we would want to develop not cognitive science but elephantine truncology; if we were swift scientists, we would want to develop the science of swift-like aviation; and so on.
Does the above view hold any water? An immediate (and rather troubling) observation is that the Diamond–Pinker–Dawkins (DPD)(*) analogies are asymmetrical: whereas it is human cognition that enables us to argue whether cognition is objectively as important as hole-boring, trunk-bearing, or swift-flying, none of those features enable purported woodpecker, elephant, or swift scientists to argue in an analogous way. Such animal scientists, as both Pinker and Dawkins fancy them, would still need cognition to argue about anything at all. But this asymmetry is only a minor glitch in the DPD analogies. To claim that cognition is a much more fundamental property of the universe than trunks, flight, or any other biological feature — regardless of our human biases — and that it is therefore worthy of a rigorous discipline (cognitive science) devoted to its scientific investigation, we must do more than point out glitches: we must show that cognitive science is indeed rigorous, independent of biology, and based on elements and principles similar to those of the other so-called hard sciences.
It will be instructive to start this discussion with an analogy of much grander magnitude than those suggested by DPD. Observe that each of the hard sciences appears to describe the properties of some elementary units, and of structures built from them: in quantum physics the units are particles such as quarks and leptons, and the structures are composite particles such as the baryons; in chemistry we have the atoms and molecules; in biology there are cells, tissues, and organisms; in astronomy there are stars and planets, star clusters, galaxies, etc. Are there similarly elementary units and structures in cognitive science?
|The following four-faceted analogy suggests that cognition, far from being a mere biological feature as the DPD view would have it, is chronologically the latest stage of material evolution in our universe, after the quantum, chemical, and biological stages. (Terms that differ among the four stages, below, are indicated in italics.)|
|There is more to this analogy than
suggested by the above four columns of corresponding
properties. For example, the following regularities in
material evolution can be observed: the bulk of matter
exists in a prior stage, whereas only a minute fraction
of it constitutes the next stage. For instance, almost
all matter in the universe exists at the quantum stage,
whereas a minute fraction of it forms the more complex
units of the chemical stage, i.e., molecules such as
those that we observe on terrestrial-like planets; within
the chemical stage, the bulk of matter exists in
relatively simple molecules, whereas only a minute
fraction of it forms the more complex organic
macromolecules, replicating molecules such as DNA and
RNA, and eventually everything that constitutes biological
matter; finally, among the matter that exists in the
biological stage (which we are aware of on Earth), only a
minute fraction of it — appearing as neuronal cells of
our own species — is capable of supporting the
structures (i.e., the concepts) that belong to the
fourth stage of material evolution. Another regularity is
that each successive stage presupposes the existence of
prior stages plus some suitable conditions for it to
emerge, but this does not hold the other way around. For
example, the chemical stage presupposes the formation of
galaxies of stars, and a sufficient number of supernova
explosions for enough complex atoms and molecules to form
in interstellar dust clouds; the biological stage presupposes a stable star
and a terrestrial-like planet with water, orbiting the
star within the “habitable zone” (Ward and Brownlee, 2000); and the cognitive stage (as we know it from
our single example) presupposes a species with a
sufficiently large brain, excellent vision, dexterous
tactile sense, social organization, and an efficient way
of communicating information among its members.
It might be thought that the cognitive stage differs from all the others in that its units (the concepts) have no physical existence: they are immaterial. But this is only an illusion. Concepts cannot exist immaterially, as abstractions. Concepts form human memory, which exists because it is supported by material structures: the neurons of human brains and their synapses. If all human brains miraculously disappeared (or if we caused them to disappear by obliterating ourselves through global warfare or other human-induced disasters), no concepts would exist anymore in the universe (unless the cognitive stage has emerged, or will emerge again, in extraterrestrial environments). Concepts with no material implementation are imaginary, nonexistent entities.
This does not thoroughly repudiate the DPD view of cognition, however, because it could still be claimed that elephantine trunks, dexterous aviation, or any of a large number of biological features that have evolved to an extreme can be thought to comprise the “next stage” of material evolution, from the point of view of the species that possess them. To show that cognition is unlike all other biological features, it must be established that the science that describes it (cognitive science) is based on fundamental principles that are completely independent of the biological substratum.(*) (If there are correspondingly abstract sciences for woodpecker-like hole-boring, elephantine trunk-bearing, swift-like flying, etc., it is up to the supporters of the DPD view to demonstrate them.) The next section briefly presents some of the fundamental principles of cognitive science.
|The notion of a “fundamental
principle” in the context of physics brings to mind
Newton’s laws of motion and Einstein’s relativity
(among others); in chemistry, Dalton’s atomic theory;
in biology, Darwin’s law of evolution by natural
selection; in astronomy, Kepler’s laws of planetary
motion; and so on. Is cognitive science based upon a set
of equally fundamental laws, or principles?
In a page titled “Fundamental Principles of Cognition”, the author has presented seven fundamental principles. They will not be repeated here, only summarized very briefly, because their full inclusion would dwarf the present article; the interested reader can find their complete discussion in the above-referenced page (and also by clicking on the corresponding link at the end of each principle, below). Conceivably, they could also be called “fundamental laws of cognition”, in the sense that every sufficiently complex cognitive agent necessarily follows them: it is beyond human will or consciousness to avoid them. But the term “principles” emphasizes that claims of programmed cognition should show evidence of adhering to them: the fewer of these principles an agent employs, the less cognitively interesting it is.
|This principle refers to our ability
to categorize things, and thus form concepts
out of the categories. We don’t see just a number of
unrelated percepts out there, but we group them
together, forming categories, and thus objects. A
pristine example of this principle is given in the figure below.
What we see, above, is not just a number of dots, but two groups of dots. We group the dots together, automatically and subconsciously. We do the same with the individual “pixels” that are sent as signals from the retinas of our eyes to the visual cortex, at the back of our brains: instead of a number of “pixels”, we group them together and see objects. Without this ability we would be unable to perceive objects in the world, and hence, most likely, we would fail to perceive anything of importance at all.
Our “grouping”, or “lumping together”, or “categorizing”, or “object identifying”, or “concept forming” ability is not restricted to concrete objects, but is extended to categories of abstract ideas that we form in our minds. Many animals perceive objects, but only humans can do so in the realm of abstract ideas.
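As an aside for readers who think in computational terms, the grouping of dots can be sketched in code. The Python toy below is not offered as the mechanism of perceptual grouping, only as evidence that proximity-based grouping is mechanizable; the distance threshold is an illustrative assumption.

```python
from math import dist

def group_dots(points, threshold):
    """Group 2-D points: two dots share a group if a chain of
    neighbors, each pair closer than `threshold`, connects them
    (single-linkage clustering, a stand-in for perceptual grouping)."""
    groups = []
    unassigned = list(points)
    while unassigned:
        seed = unassigned.pop()
        group = [seed]
        frontier = [seed]
        while frontier:
            p = frontier.pop()
            near = [q for q in unassigned if dist(p, q) < threshold]
            for q in near:
                unassigned.remove(q)
            group.extend(near)
            frontier.extend(near)
        groups.append(group)
    return groups

# Two visually separate clouds of dots.
left_cloud  = [(0, 0), (1, 0), (0, 1), (1, 1)]
right_cloud = [(10, 0), (11, 0), (10, 1)]
groups = group_dots(left_cloud + right_cloud, threshold=2.0)
print(len(groups))  # → 2
```

With the threshold set between the typical within-group and between-group spacings, the program “sees” the same two groups of dots that we do.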
For more information, see the full presentation of the principle here.
|We don’t just identify objects in
the world, but we perceive their structure, and we
perceive it in a most economical way. This is
what we do when we perceive the following shape (an X) as
consisting of two slanted lines:
There are a number of other ways in which the shape of an X can be parsed (e.g., as a > and a <, and so on), but the way shown in the above figure is the shortest one, as shown in the full discussion of this principle. We perceive minimal descriptions not only in concrete objects, but in abstract ideas too. These include entire theories; hence the well-known “Occam’s razor” (or “Ockham’s”, from William of Ockham) in philosophy, which is a direct consequence of this fundamental cognitive principle. Another consequence is what mathematicians feel as an urge for elegance, simplicity, shortness of proofs, and so on. Many animals perceive the structure of objects, but only humans can do so in the realm of abstract ideas.
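For the computationally minded, here is a toy comparison of the two parsings of the X, under the crude assumption that a description’s length is simply the number of coordinates it needs (a stand-in for a proper minimum-description-length measure):

```python
# Each parse of the X is a set of strokes; a stroke is a polyline
# given by its vertices. Description length here is just the total
# number of coordinates needed -- a toy stand-in for a real MDL measure.

def description_length(parse):
    return sum(2 * len(stroke) for stroke in parse)  # 2 numbers per vertex

# Parse A: two straight slanted lines, each a 2-vertex polyline.
two_lines = [[(-1, -1), (1, 1)], [(-1, 1), (1, -1)]]

# Parse B: a ">" and a "<", each a 3-vertex polyline meeting at the center.
two_angles = [[(-1, -1), (0, 0), (-1, 1)], [(1, -1), (0, 0), (1, 1)]]

for name, parse in [("two lines", two_lines), ("two angles", two_angles)]:
    print(name, description_length(parse))
# "two lines" needs 8 coordinates, "two angles" needs 12,
# so the two-line parse is the more economical description.
```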
For more information, see the full presentation of the principle here.
|Even a partial presentation of an
object is often enough to allow us to recall the entire
object from memory.
Although the above figure shows only an eye, a nose, half of a forehead, and some hair, most people immediately recognize not just a jumbled collection of those percepts, but a face (indeed, Einstein’s face in particular). Completion of patterns happens not only visually, but also abstractly, as when we are able to roughly predict future instances of events based on past ones by using inductive reasoning, which is the basis of scientific predictions. Many animals are able to complete patterns out of partial information, but only humans can do so in the realm of abstract ideas.
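In computational modeling, pattern completion is routinely captured by associative memories; the classic example is the Hopfield network, sketched below purely as an illustration (no claim is made that human brains implement the principle this way):

```python
import numpy as np

def train(patterns):
    """Hebbian weight matrix storing ±1 patterns (Hopfield network)."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0)
    return W

def recall(W, probe, steps=10):
    """Synchronously update the probe until it settles on a stored pattern."""
    s = probe.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
    return s

pattern = np.array([[1, -1, 1, -1, 1, -1, 1, -1]])
W = train(pattern)
partial = pattern[0].copy()
partial[4:] = 1                                  # corrupt part of the pattern
print((recall(W, partial) == pattern[0]).all())  # → True
```

Storing the pattern Hebbian-style and then updating the corrupted probe lets the network settle back onto the complete stored pattern, which is the computational analogue of recognizing a whole face from an eye, a nose, and some hair.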
For more information, see the full presentation of the principle here.
|Our abilities to identify objects,
perceive their structure minimally, and auto-complete
them from partial information do not suffice for
full human-like cognition if a fourth ability
is missing: perception of the essence of things.
The above figure shows how a program, after performing some elementary operations on pixels, “thins” the black figure on the left, resulting in the “skeleton” (or stick figure) on the right. This is an example of “essence distillation” applied to concrete objects. It is not known whether any animal can derive the “median pixels”, as they are called (shown in red in the middle and on the right), but children regularly draw stick figures, imagining them as standing for the real things. This ability acquires its full power in abstract human cognition, where it becomes the essence of analogy making, as explained in the full discussion of this principle.
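For illustration only, here is a drastically simplified “thinning” sketch in Python. Real thinning algorithms (e.g., Zhang-Suen) examine two-dimensional pixel neighborhoods; this toy merely keeps the median figure pixel of each row, which is enough to turn a thick vertical bar into a stick.

```python
def row_medians(image):
    """Drastically simplified 'thinning': for each row of a binary
    image (1 = figure pixel), keep only the median figure pixel.
    Real thinning algorithms (e.g., Zhang-Suen) work on 2-D
    neighborhoods; this toy keeps just a per-row midline."""
    skeleton = []
    for row in image:
        cols = [i for i, v in enumerate(row) if v == 1]
        line = [0] * len(row)
        if cols:
            line[cols[len(cols) // 2]] = 1
        skeleton.append(line)
    return skeleton

# A thick vertical bar, three pixels wide ...
bar = [[0, 1, 1, 1, 0]] * 4
# ... thins to its one-pixel-wide midline.
print(row_medians(bar))  # → [[0, 0, 1, 0, 0]] repeated four times
```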
For more information, see the full presentation of the principle here.
|Another cognitive ability is used when
we perceive the quantity or size of things:
In the above figure, we are able to tell approximately the number of dots in the square without counting them, even if the dots are flashed in front of our eyes for a split second. Even though we are unaware of their exact number, we can be quite sure that they must be more than, say, 10, and fewer than 30; and we can make even better approximations, with a correspondingly smaller degree of confidence. This is the basis of our perception of numerosity, and also of our understanding of the size of things. We extend our perception of “magnitude” to things like the duration of time, the impression that the reading of a novel made on us, or our skill at persuasion through rational argumentation. Many animals are able to perceive small values of numerosity and the size of concrete objects, but only humans can do so in the realm of abstract ideas.
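This approximate character of numerosity perception is commonly modeled by assuming that the perceived count equals the true count corrupted by noise whose spread grows in proportion to the count (Weber’s law). The sketch below follows that assumption; the Weber fraction of 0.15 is illustrative, not a measured constant.

```python
import random

def estimate_numerosity(true_count, weber_fraction=0.15, rng=None):
    """Approximate-number-system sketch: the percept is the true count
    plus Gaussian noise whose spread grows with the count (Weber's law).
    The weber_fraction value is illustrative, not a measured constant."""
    rng = rng or random.Random()
    return max(0, round(rng.gauss(true_count, weber_fraction * true_count)))

rng = random.Random(0)
guesses = [estimate_numerosity(20, rng=rng) for _ in range(1000)]
print(min(guesses), max(guesses))   # roughly 10..30: "more than 10, fewer than 30"
print(sum(guesses) / len(guesses))  # close to the true count, 20
```

The simulated percepts almost never stray far from the true count, yet they are rarely exact, matching the “more than 10, fewer than 30” character of the perception described above.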
For more information, see the full presentation of the principle here.
|All the previous principles pertain to
static abilities, i.e., ones that do not involve
time. But cognition has a temporal dimension, too, which
is most evident in its ability to learn new
information. Although the first principle is somewhat
relevant to learning, in the sense that it describes the
learning of new concepts, there is another very important
feature of learning, which is the establishment of associations
between concepts. In its simplest form, this appears in animals as
something called association-building by co-occurrence,
or “Hebbian learning”, after Donald Hebb, who first
described it (Hebb, 1949).
The above figure is very well known to mid-20th-century audiences of motion pictures. This person, Mr. Hardy, immediately brings to the minds of such audiences another one, Mr. Laurel, whose figure is linked conceptually to Mr. Hardy’s due to a large number of co-occurrences of the two of them in the same movies or TV episodes. This principle can be generalized to any situation in which a number of percepts on one side co-occur with a number of percepts on the other side, and our cognition manages to find which percept on one side is associated with which percept on the other. The full discussion of this principle describes an algorithm that achieves this.
Many animals, even very lowly ones, are able to exhibit this form of learning, which is known as “habituation” in biology, but only humans employ this principle in abstract ideas, and are thus able to learn the meanings of words, as shown in an example in the full discussion.
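The algorithm in the full discussion is more sophisticated, but the gist of co-occurrence-based association can be conveyed by a toy: count how often each percept on one side co-occurs with each percept on the other, and associate it with its most frequent partner (ties broken arbitrarily). The word-object episodes below are invented for illustration.

```python
from collections import Counter
from itertools import product

def associate(episodes):
    """Toy co-occurrence learner: each episode pairs a set of percepts
    on one side with a set on the other; a percept is associated with
    whichever partner it co-occurred with most often."""
    counts = Counter()
    for left, right in episodes:
        counts.update(product(left, right))
    lefts = {l for left, _ in episodes for l in left}
    return {l: max((c, r) for (a, r), c in counts.items() if a == l)[1]
            for l in lefts}

# Spoken words co-occurring with visible objects across situations.
episodes = [({"dog", "ball"}, {"DOG", "BALL"}),
            ({"dog"},         {"DOG"}),
            ({"ball", "cat"}, {"BALL", "CAT"})]
print(associate(episodes))  # → {'dog': 'DOG', 'ball': 'BALL', 'cat': 'CAT'}
```

Because “dog” appears both with and without “ball”, the co-occurrence counts disambiguate the pairings across episodes, which is the essence of learning word meanings from repeated exposure.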
For more information, see the full presentation of the principle here.
|This principle, although it holds
independently of all the rest, is numbered 6½ because the mechanism that is responsible for
its implementation is already present in the context of
principle 6. This principle says that forming wrong
generalizations is not disastrous, nor do we need to
observe an explicit rejection of the wrong generalization
in order to get rid of the error; time takes
care of the matter, through forgetfulness. Thus,
forgetting is not a malfunction of our memory system, but
a necessary mechanism that allows us to keep our
knowledge always current, since information that is not
reinforced is not kept around forever.
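A minimal computational sketch of this idea, under the assumption that every association strength decays by a constant factor at each time step unless it is reinforced (the decay rate of 0.5 is illustrative): a wrong generalization then fades away without ever being explicitly refuted.

```python
def rehearse(events, decay=0.5):
    """Association strengths decay at each time step unless reinforced,
    so wrong generalizations fade without explicit refutation.
    The decay rate is an illustrative assumption."""
    strength = {}
    for observed in events:                            # one time step per item
        for k in strength:
            strength[k] *= decay                       # everything fades ...
        for key in observed:
            strength[key] = strength.get(key, 0) + 1.0 # ... unless seen again
    return strength

# "thunder->rain" keeps being reinforced; the one-off wrong association fades.
timeline = [{"thunder-rain", "thunder-earthquake"},  # a wrong generalization
            {"thunder-rain"}, {"thunder-rain"}, {"thunder-rain"}]
s = rehearse(timeline)
print(s["thunder-rain"] > 4 * s["thunder-earthquake"])  # → True
```

No explicit “thunder does not mean earthquake” observation is ever needed: mere lack of reinforcement lets time take care of the matter.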
For more information, see the full presentation of the principle here.
|Although these principles of cognition initially emerged as properties of biological organisms, they have come to be independent of their biological underpinnings. Consequently, cognitive science can stand alone as an academic discipline, independent of the other life sciences. All of the above principles are presented in an abstract way in the Fundamental Cognitive Principles page where they are fully discussed, often by means of formulas or algorithms, implying that they can be implemented in computational systems, thus resulting in cognitive architectures that do not bear any relation to biology. (A specific example of just such a cognitive architecture is mentioned in the Fundamental Cognitive Principles page.)|
Scientists naturally tend to see the world through the spectacles of
their own disciplines, and biologists(*) are no exception to this rule. So it is
unsurprising that biologists view human cognition
as just another biological property. But the DPD
analogies are flawed even from a biological perspective.
The following paragraphs explain why.
Cognition has a property not shared by any of the other cases in the DPD analogy set: it has been erratically increasing in complexity since the dawn of the animal kingdom. This requires some explanation, because the notion of general biological complexity — and especially of increasing magnitude — is controversial. Richard Leakey, for example, writes: “There are few issues more calculated to provoke strong disagreement among biologists than that of complexity. Or, more specifically, the idea that the process of evolution has resulted in greater biological complexity.” (Leakey and Lewin, 1996, p. 91) Other biologists insist that even a bacterium is an extremely complex organism, and thus that there is no basis for the intuitive idea that mammals or birds are “more complex” living beings. This attitude is a reaction against teleological and religious notions that regard humans as “the pinnacle of creation”, as if the procession of life-forms on Earth (whether by evolution, or by divine intervention) had as its ultimate purpose to culminate in Homo sapiens. Nonetheless, the biologists’ rejection of the idea of increase in biological complexity is also incorrect: there is support for the notion that evolution, throughout the eons, has resulted in some more complex organisms, however rare they might be among the billions of species that have existed. For example, we can confidently claim that bacteria are less complex than cows because cows include bacteria in their guts, which help them digest plant cellulose, whereas bacteria do not have cows in their protoplasm.(*) More rigorously, a cow is more complex than a bacterium because if we try to create a virtual cow and a virtual bacterium by simulating in computer programs all their exchanges of energy, nutrition, and information with their environment (assuming we also simulate their environment to the extent that is necessary for such programs to work), then the cow’s program will be longer than that of the bacterium. 
This must be so because parts of the cow program will be the sub-routines for its symbiotic bacteria, and such sub-routines would necessarily be as complex as the programs simulating any other free-living bacteria.(*) By the same reasoning, any multicellular creature is also more complex than any unicellular one; and even unicellular eukaryotic organisms are more complex than prokaryotic ones (bacteria), because the mitochondria and chloroplasts of eukarya are assumed to be bacteria that were trapped inside the protoplasm of ancient bacteria, at some time in evolutionary history. Nonetheless, despite such examples, the biologists’ point is well taken that it is a nontrivial exercise to compare the complexity of two organisms that appeared relatively recently in geological time, such as a mammal and a fish. Fortunately, the project here is to compare the cognitive complexity of species, not their general biological complexity.
The notion of comparing program lengths, suggested in the previous paragraph, can serve to compare the cognitive complexity of species. Every species can be associated with a list of all behaviors that depend on reacting to external or internal stimuli: all the “cognitive achievements” of the species. Observe that reactions to stimuli do not necessarily depend on neuronal processing. For example, even a single-celled Euglena can sense the direction of light, as can many plants, none of which have any neurons at all. Dionaea muscipula (the Venus Flytrap) can react to the touch of a small insect on its leaves and trap it inside. Examples are so abundant that it is appropriate to talk about the “cognitive achievements” of all living beings, not just animals. Although there is currently no way to implement this idea in programming terms, few cognitive scientists and animal psychologists would disagree with the conjecture that the list of cognitive achievements of a lizard must be shorter than the corresponding one of a chimpanzee. After all, when zoologists claim that dolphins and chimps are “smart animals”, they implicitly refer to something like a list of cognitive achievements of those creatures. The animal psychology literature is full of reports of clever experiments showing that, for example, many mammals are cleverer than most reptiles; many primates are cleverer than most artiodactyls; and one hardly needs to resort to experiments to prove that the list of cognitive achievements of a mentally average human being is longer than the list of any other species.
All of this supports an application of the Minimum Description Length (MDL) principle. The logic of this approach is not intended to be mathematically precise, since cognitive scientists might disagree on whether some detailed behavior constitutes a cognitive achievement, or whether an achievement follows from some more basic ones. But given the large cognitive differences between some animals (e.g., mammals and insects), disputes over details should not affect the overall result, which would be that one list ends up longer than the other, no matter which way disputes are settled.
By applying the MDL principle on the length of the cognitive achievements list we can compare the complexity of the cognition of any two species, and even assign an absolute value to the cognitive complexity of a living being. For example, if we assign the value 1 to the maximum known list, that of Homo sapiens, and the value 0 to non-biological matter that does not react to stimuli (e.g., a stone), then every living species could be assigned a value between 0 and 1, according to the length of its list.
The crucial observation — which is true of cognition but not of the other biological properties proposed in the DPD analogies, such as hole-boring, nose-utilizing, and flying — is that if we make a graph with evolutionary time on its x-axis and cognitive complexity on its y-axis (i.e., a real number between 0 and 1), and plot the maximum value of cognitive complexity at any given time (i.e., the value of the “smartest” species extant at that time), then the resulting curve will be erratically increasing: “erratically”, because whenever the smartest species went extinct, the curve plunged somewhat (to the value of the second-smartest species); and “increasing” because, e.g., insects (which appeared around 350 million years ago, or Ma) are smarter than bacteria and protists, but less smart than reptiles (which appeared around 280 Ma), which are in turn less smart than mammals (~150 Ma), and so on (Arms and Camp, 1988; Cowen, 1995; Freeman and Herron, 2001). The curve should exhibit a steep increase at its recent end, around 3 Ma, when the brain of the first hominids started expanding, perhaps with the appearance of the australopithecines (Lewin, 1999).
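The construction of such a curve can be sketched in a few lines of code. The species, dates, and complexity values below are invented purely for illustration; the point is only that extinction of the “smartest” species makes the curve dip, while the appearance of smarter clades makes it rise.

```python
def max_complexity_curve(species, times):
    """For each time (Ma before present, decreasing toward today), take
    the highest complexity value among the species extant at that time.
    The complexity values in [0, 1] are invented for illustration."""
    curve = []
    for t in times:
        extant = [v for _, appear, extinct, v in species
                  if appear >= t > extinct]
        curve.append(max(extant, default=0.0))
    return curve

# (name, appeared Ma, went extinct Ma (0 = still extant), complexity)
species = [("bacteria",  3500,  0, 0.01),
           ("insects",    350,  0, 0.10),
           ("dinosaurs",  230, 66, 0.45),
           ("mammals",    150,  0, 0.40),
           ("hominids",     3,  0, 1.00)]
print(max_complexity_curve(species, times=[300, 200, 60, 1]))
# → [0.1, 0.45, 0.4, 1.0]: a dip after the dinosaurs' extinction,
#   then a steep rise with the hominids.
```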
It is in this sense that we can talk about a “trend” of increasing cognitive complexity, whereas there is no such trend in the properties of the DPD analogies. Aviation, which comes closest to claiming such a trend, was discovered early on by insects, and was rediscovered around 200 Ma later by reptiles (e.g., the pterodactyl), birds, and some mammals (bats, gliding squirrels). But, however we define “flying ability”, the aviation curve will present us with a plateau from the time of the first flying insects until the appearance of birds, and with a second plateau from shortly after the appearance of birds until the present. The latter plateau results from the unlikelihood that swifts, to use Dawkins’s example, are the best flying machines ever. There is a simple explanation for this difference in graphs: flight is a discrete property. Either you soar through the air or you don’t, and there are only limited improvements that can be made to body design to raise the quality of flight. Evidence for this is that flying birds look much more like the “prototypical bird” (sparrow, robin, etc.) than mammals look like the prototypical mammal (Murphy, 2002). In addition, there has been plenty of evolutionary time for such improvements to have reached perfection several times in the past. Similarly, such properties as using a nose or an ability to bore holes are unlikely to display an increasing trend of complexity. In contrast, cognition is a continuous property: it correlates with the number of neurons of an organism — at least in animal species. Although there is no perfect relation between brain size and smartness, roughly speaking, more cognitive achievements require more brain power, hence a larger brain. Larger bodies afford larger braincases, and this is what has allowed cognition to increase (erratically) in geological time.
What distracts many biologists is the extremely small ratio of “smart” species to the millions of species that exist at any time on Earth. (An extreme example of this reasoning, mentioned in the introduction, is that of Mayr, who saw a single intelligent species, Homo sapiens, against the 50 billion or so unintelligent species that have ever existed.) But a trend can be a trend without needing to apply to all, or even most, members of a population. For example, in ancient Greek culture there was a trend toward sculpting increasingly realistic statues of human figures, which culminated in the exquisite statues of the classical period, famous examples of which decorate museums around the world today. This does not mean that every ancient Greek was a sculptor, or that there was no trend because, out of the hundreds of thousands of ancient Greeks, only a few dozen were active in statue-making. Trends can be defined by the few, even if the masses do not follow.
But an evolutionary trend of increasing cognitive complexity has more important implications than its role as a source of argument against DPD-like analogies. If real, it has the far deeper implication that if life evolved (or will evolve) elsewhere in the universe, then, even if such life were rooted in vastly different chemical machinery, it would evolve toward creatures with human-like cognitive complexity. But then we have to confront Enrico Fermi’s famous question: “Where are they?” Why is there only “deafening silence” in our universe? Why, to the best of our current observational abilities, do we not observe a universe teeming with radio signals or other telltale marks of human-like intelligence? The following, final section of this article examines this conundrum briefly.
Leaving aside popular but unsupported sightings of extraterrestrial visitations,(*) the truth of the matter is that there is not a single shred of scientific evidence that extraterrestrial intelligence exists. This is puzzling, because in the physical world generally we observe the “law of plenitude”: even the rarest phenomena are not unique (Davies, 1993). For example, highly transparent crystal structures are extremely unusual, but not unique. Similarly, animal species with noses longer than one meter are rare, but several species fit the bill (mastodons, mammoths, the two living species of elephants, etc.). Why should we be the only examples of beings with human-like intelligence in the universe, especially if the increase in intelligence is an evolutionary trend, as argued in this article?
Several answers have been proposed. One makes the interesting suggestion that perhaps life up to the complexity of bacteria is common in the universe, but that for evolution to reach anything beyond that (e.g., eukarya and multicellular organisms), too many conspiring astronomical and biological coincidences must occur, so the probability that more-interesting-than-bacteria creatures will evolve is nearly zero. This is the well-known “rare Earth” hypothesis (Ward and Brownlee, 2000). It rests on the observation that bacteria appeared on Earth practically as soon as it became possible (probably within 500 million years after the Earth’s surface cooled down and acquired a crust), but that it took another 3.5 billion years for the first eukaryotic cell to emerge. Since then, life has been perpetuated only by the confluence of many external circumstances and coincidences, including: a non-violent Sun; the existence of Jupiter, which diverts comets and asteroids; a large Moon causing tidal forces and hence the motion of tectonic plates, without which the ever-changing habitats on Earth (and hence the corresponding chances for evolutionary “innovation”) would not exist; and many more.
It is not the purpose of this article to criticize the rare Earth hypothesis: for all we know, it could be that biological evolution generally stops at the bacterial level of complexity. Instead, the present argument assumes cognitive evolution exists, and discusses the course that it would be most likely to take.
If we take off the biologist’s spectacles for a moment and see cognition for what it is, i.e., a new and distinct quality, or stage, of material evolution, then it is interesting to consider the durations of the stages: how long does each last? The quantum stage seems to be “eternal”: being the foundation, it lasts for as long as there is a material universe. The chemical stage, once formed, is also durable, but it is not “eternal”. For example, a planetary system might last for billions of years, but it might also be destroyed by an explosion of its host star (either a premature one, or during the star’s death throes). The biological stage appears to be more “fragile” still. Although ours seems to have lasted nearly as long as the chemical stage in our solar system, we know that there have been numerous occasions when the Earth’s biosphere was threatened with total extinction (through asteroid collisions or other geological events: overcooling, overheating, etc.). That it survived is unsurprising: had it not, there would be no one here to notice its longevity.(*) Such relative fragility raises the possibility of biological stages in other planetary systems that did not last long enough for us to notice them. In any case, each successive material stage appears to be more “fragile” than its progenitor. If this idea is correct, it suggests that the fourth material stage is the most fragile of all. And the reason for its increased fragility might be staring us in the face: it could be that the cognitive stage has the inherent property of self-destructing soon after reaching the technological sophistication needed to spread in space, thus forfeiting that opportunity. After all, it is hard to fathom a space-traveling culture that has not learned enough physics to build nuclear weapons, and has not discovered the use of chemicals that can upset the delicate balance of its biosphere.
It cannot be a coincidence that in our own case we mastered all three (space travel, nuclear weapons, and the abusive use of industrial chemicals) virtually simultaneously. Advanced technology seems to imply them all as a package. If so, then it is little wonder that we do not observe extraterrestrials: the cognitive stage does not last long enough for civilizations to coexist. Whenever an extremely rare, technologically advanced civilization evolves, it self-destructs before it has time to spread to the nearby stars.
A possible argument against this idea is that human-like cognition, by its very nature, is capable of monitoring its own properties, and is thus able to foresee the destruction that it causes to its environment, or the possibility of self-destruction through war. Hence an opposing force would develop among human-like cognitive beings who wish to avoid their own destruction. The more imminent the threat of self-destruction, the stronger this opposing force would be. One might hope that this opposition would win in at least some civilizations, and that peace and/or control of the environment would prevail.
Although it is logically dubious to generalize to all possible civilizations from our own single example, we can at least draw a useful conclusion regarding ourselves: human beings did not evolve with the idea “We must ensure the survival of our species” dominant in their minds. Indeed, no cognitive species could evolve whose average individual thinks like this, since such a thought is irrelevant to the survival of the individual and the propagation of its genes to its descendants, which is the level at which biological evolution works. The everyday actions of average human beings aim at increasing their own well-being, and to a lesser extent the well-being of their relatives and friends (the closer the kinship, the stronger the concern). To illustrate, people might vote for a government that promises lower taxes (or for another politically irrelevant reason, such as the congeniality of the main candidate), even though the same government makes it clear that it will refuse to take measures to improve environmental conditions, and will adopt policies likely to lead to war, famine, and destruction. People act selfishly at the individual level, ignoring the level of their species and of their planet’s biosphere. Assuming that civilizations are always the outcome of biological evolution (as opposed to engineering, for example), it is difficult to see how a civilization could emerge that lacks this selfishness at the individual level and instead ensures its existence at the level of the civilization and the species.
Still, if even one civilization had managed to spread in space before self-destructing, we would now be observing a universe teeming with intelligent life, which apparently is not the case. Thus, the observed absence of the fourth stage of material evolution might be the combined result of the rarity of complex forms of life (the “rare Earth” hypothesis) and of the short duration of the cognitive stage, due to self-destruction.
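The combined effect of these two factors can be sketched as a Drake-style product. Every parameter value below is a hypothetical placeholder chosen only for illustration; none is an estimate taken from this article:

```python
# Toy Drake-style estimate of civilizations currently detectable in a galaxy.
# All values are hypothetical placeholders, chosen only to show how a tiny
# "rare Earth" factor combines with a short civilization lifetime.

star_formation_rate = 1.0      # new stars per year (order of magnitude)
fraction_with_planets = 0.5    # stars with planetary systems
habitable_per_system = 1.0     # habitable planets per such system
p_life = 0.5                   # bacteria-level life arises (assumed common)
p_complex = 1e-6               # "rare Earth": complex life evolves
p_civilization = 0.1           # complex life yields a technological culture
lifetime_years = 200.0         # years before self-destruction

n_detectable = (star_formation_rate * fraction_with_planets *
                habitable_per_system * p_life * p_complex *
                p_civilization * lifetime_years)

print(f"Expected coexisting detectable civilizations: {n_detectable:.2e}")
```

With these placeholder values the product comes to about 5 × 10⁻⁶, i.e., effectively zero civilizations detectable at any given moment: the rarity factor and the short lifetime multiply, so either alone need not be extreme for the “deafening silence” to follow.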
|It was argued in this article that the DPD analogies are fundamentally flawed because they erroneously view cognition as just another biological property, rather than as a new and separate stage of material evolution. This idea was supported by discussing some fundamental cognitive principles, which suggested that cognitive science is a discipline that stands on its own, possessing emergent principles that can be examined independently of the biological substratum out of which they evolved. The DPD analogies were also shown to be flawed strictly within the domain of biology, by considering the notion of increasing cognitive complexity over time, a property lacking in the analogous parts of the DPD analogies. The idea of increasing cognitive complexity over geological time raised the question of why we do not observe a universe full of intelligent life, and an answer was suggested based on an observation regarding the durations of existence (“fragility”) of successive stages of material complexity.|
|Footnotes: (Clicking on the caret (^) brings back to the text, where the footnote is first made)
(^) The names of these scientists are listed in the chronological order of their publications. Mayr does not make an analogy of cognition with some biological feature, so his initial is omitted from this acronym.
(^) Similarly, the biological principles are independent of the chemical substratum. For example, biological evolution, a fundamental pillar of biology, can be simulated in sufficiently large and fast computers without reference to chemistry or other concepts that belong to disciplines on which biology is founded.
(^) However, note that Pinker, who has argued for the DPD view, is a cognitive scientist.
(^) Cows cannot be considered living organisms in isolation from such bacteria, because without them they would die, unable to digest their food. Thus, the cow + bacteria organism is one symbiotic entity.
(^) Notice that all such programs should also be as short as possible (they should not include redundant instructions) for the comparison to be effective and meaningful.
(^) Interestingly, the number of reports of such sightings at various places in the world reached a maximum during the 1970s, immediately after the moon landing, implying that the purported aliens increased their visitations exactly when people became most interested in them.
(^) This is an appeal to a version of the well-known “anthropic principle”: it is meaningless to be surprised by coincidences without which we would not be in existence. This principle makes sense only with the additional assumption that there is a large number of other systems in which some (or even just one) of those coincidences did not occur, implying that there are no cognitive beings there to notice the mundane and prohibitive nature of their systems.
|© Copyright notice: The above was first written in October, 2007. The author is not interested in making this material appear in print. However, any attempt to use the present ideas in articles that appear either in print, or on the web, or other media of information without an explicit reference to their source, i.e., this web page, will be considered an unethical act of plagiarism (at best) and a theft of intellectual property (at worst). Please note that intellectual property is automatically (without the explicit request of the owner) protected by law in almost all countries of the world.|