Why I stopped working on the Bongard Problems
A web page that I wrote some ten years
ago (this one) explains the wonderful domain of my research
in cognitive science, the Bongard problems.
Today, a decade later, for all the keen interest that I
still retain in Bongard problems, I have stopped working
in this domain for ethical reasons. I would
like to explain what these ethical reasons are, because I
think they concern every one of us: not just cognitive
scientists like me, not even just scientists in general,
but literally everybody. Here is a brief account.
First, what are the Bongard problems? You can best understand the answer by clicking on the link above, but to save you from going on a tangent, I’ll simply say here that Bongard problems are visual puzzles: you look at something like the drawing with the 6+6 boxes below, and you try to figure out why the six boxes on the left have been separated from the six boxes on the right. What is it that the six boxes on the left have in common?
Bongard problem #38: What do all the boxes on the left have in common? What about those on the right?
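To make the structure of such a puzzle concrete, here is a minimal sketch in Python. All feature names and the example problem are hypothetical illustrations, not Phaeaco's actual representation: each box is described by visual features, and "solving" the problem means finding a rule that holds for every box on the left and for no box on the right.

```python
# Toy representation of a Bongard problem (hypothetical feature names,
# not Phaeaco's internal representation): each box is a dict of visual
# features, and a candidate rule is a predicate over a single box.

def solves(predicate, left, right):
    """A rule solves the problem if it is true of every left box
    and false of every right box."""
    return all(predicate(b) for b in left) and not any(predicate(b) for b in right)

# A made-up problem: the left boxes contain triangles, the right boxes circles.
left  = [{"shape": "triangle", "filled": True},
         {"shape": "triangle", "filled": False}]
right = [{"shape": "circle", "filled": True},
         {"shape": "circle", "filled": False}]

print(solves(lambda b: b["shape"] == "triangle", left, right))  # True
print(solves(lambda b: b["filled"], left, right))               # False
```

The sketch shows only the *structure* of the task; the interesting scientific question, as the text goes on to explain, is how the candidate rule is discovered in a human-like way rather than by brute-force enumeration of predicates.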
My research focused on writing a
computer program, which I called Phaeaco, that
could solve such problems automatically. Actually,
writing just any program that can do that is not
remarkable at all. How it is done is of the utmost
importance, because on one hand there are trivial,
mechanical, and uninteresting programs, and on the other
hand there are more human-like programs for solving such
problems. My dissertation describes a computational
architecture for cognition (that’s what Phaeaco is)
that, among other things, can solve Bongard
problems, displaying a more-or-less human-like performance.
In other words, the goal of my research was not simply to write a program that solves Bongard problems, but to write a program that implements some fundamental principles of cognition, which help it exhibit, in a rudimentary way, some aspects of human behavior, or of human-like thinking.
Okay. So where are the ethical issues in all that?
They lie in the remote possibility of building intelligent machines that act, and even look, like humans. If this is achieved, intelligent weapons of mass destruction will eventually be built, without a doubt. That’s what I would like to explain below.
An actroid (credit: GNU license)
Take a look at the
picture of the woman on the left. Does she look real? She’s
not. She’s a doll, called an “actroid”. You can
learn more about actroids by clicking on the image, but
in summary, an actroid is “a humanoid robot with strong
graphic human-likeness” that “can mimic such lifelike
functions as blinking, speaking, and breathing”. The
specific one shown in the picture is an interactive robot
with the ability to recognize and process speech, and
respond in kind. Such robots include enough “artificial
intelligence” to fend off intrusive motions, such as a
slap or a poke, but react differently to more gentle
kinds of touch, such as a pat on the arm. “The skin is
composed of silicone and appears highly realistic.”
(Without useless details under the clothes, I suppose.)
Now, picture this kind of robot in the not-so-remote future. Imagine that she (or he, or it) has enough computational power in the computer situated in her skull (or in her thorax, doesn’t matter where) to allow her to behave naturally like a human being. And also imagine that in her belly she harbors not guts (which would be useless to her), but a small nuclear bomb.
Impossible? Why not? I’m not talking about current technology, with which nuclear weapons are monstrous in size. I’m talking about future technology, when downscaling everything will appear reasonable.
Nor am I talking about a nuclear bomb
capable of annihilating a whole city, but about one that
can reduce perhaps a few city blocks to smoke and
render a great many more uninhabitable.
Let’s see... I suspect you don’t really object to this as a plausible scenario. What you really believe (or maybe just hope) is that it will be us, our side, our army, that will acquire such marvelous weapons. The enemy won’t have them, and so we, with our superior technology, will emerge victorious and live happily ever after, having crushed the barbarians. Yay!
It is typically Americans who display this attitude toward hi-tech weapons. (If you are an American reading this, what I wrote doesn’t imply that you necessarily display this attitude; note the word “typically”, please.) The American culture has an eerily childish approach to weapons, and also an outlandish (but also child-like) disregard for human life. (Once again, you might be an intelligent, mature American who respects life deeply; it is your average compatriot I am talking about.) Here is what an American journalist wrote in the Washington Post on May 6, 2007:
Yes, just as you read it: a number of human beings were turned to smoke and smithereens, and this pathetic journalist, whoever he is, speaking with the mentality of a 10-year-old who blows up his toy soldiers, reports in cold blood how people were turned to ashes by his favorite (“impressive”, yeah) military toys. Of course, for overgrown pre-teens like him, the SUV was not full of human beings, but of “al-Qaeda leaders” (as if he knew their ranks), of terrorists, sub-humans who aren’t worthy of living, who don’t have mothers to be devastated by their loss. Thinking of the enemy as subhuman scum to be obliterated without second thoughts was a typical attitude displayed by Nazis against Jews (and others) in World War II. (The full article is here, and explains how soldiers become sentimentally attached to their robots, extensions of their teenage-time toys, obviously ascribing to them a higher value than human life; the above quoted passage appears on the 3rd page.)
If this attitude were marginal among Americans, if the above story were a fluke, I wouldn’t worry at all. Any moron can say anything they like in a free society, and even have their imbecilic thoughts appear in print. The problem, from my point of view, is that I saw the above attitude again and again in the years that I lived in the U.S.A. Once, the janitor of the building where I used to do my research, having just learned some sad news about American soldiers killed in Iraq, wondered in a discussion with me: “Why don’t we just nuke ’em all? Just turn the damn desert into glass and be done with those ___” (I don’t remember what adjective he used). You might think the janitor wasn’t very sophisticated in his approach to war or human lives. But a few days later I was reading another article on a web site (whose address, unfortunately, I didn’t save) that reported on an issue similar to the one above: how to use robots to enter caves (it was known that the al-Qaeda leader, Osama bin Laden, was hiding in caves at the Afghanistan–Pakistan border back then), search for terrorists, and blast the place, terrorists, robot, and all; “to smoke them out of their holes”, as that pinnacle of wisdom, the American president G. W. Bush, said immediately after the 9/11 attacks.
So, back to our subject: how nice it would be to have “actroids” pregnant with nuclear or biological bombs, right? Perhaps “nuctroids”, how about that? Of course, only we would know they are actually nuctroids. To the terrorists they would pass for normal people.
How immature must a person be to believe something like that! Think of nuclear weapons. When they first appeared on the scene, in the second half of the 20th century, only five nations possessed them: the U.S.A., the U.K., France, the Soviet Union, and China (the victors of WWII). Gradually, more countries entered the nuclear club, some of them openly (India, Pakistan), others secretly (Israel). Now every pariah state can have its nuclear toys, or dream of acquiring them. At the time the present article was written, there was a strong fear in the international community that Pakistan’s American-supported dictator would be overthrown by extremist Islamists, and that the nuclear bombs Pakistan possesses would fall into the hands of terrorists. It is no secret that Iran, an avowed enemy of the U.S.A., is planning to build its own nuclear weapons. Turkey, now an ally of the U.S., is planning to build its own “energy-only” nuclear plants; but in a decade or two Turkey might turn into a hub for radical Islamism due to its gradually changing demographics, and so in the future we might have another Iran-like nuclear-power wannabe in the same region. So by what stretch of the imagination, by what crooked logic, will it become impossible for pariah states, or even individuals, to possess and command “nuctroids” in the foreseeable future?
Technology spreads. It’s not something that can be confined within national borders. Especially now, when we talk about globalization, we must understand that knowledge “goes global” too, and this includes the specialized knowledge needed to build an extra-small weapon of mass destruction, or a human-like deadly robot.
So how does working on innocent projects such as the automation of Bongard problems tie into all this?
As I explained earlier, it’s not just the automation of Bongard problems that’s involved; it’s the automation of cognition. Anyone who works toward making machines intelligent, and especially toward making machines “come alive”, must understand the grave ethical issues involved in such an endeavor. Consider the following email message, sent by a student at Indiana University (IU, the academic institution where I did my Ph.D.) in 2008 (my emphasis):
Does anyone at IU realize the ethical issues that these kids are toying with? Is it really more important to be concerned with cloning and stem-cell research? Does it not matter at all that these kids, or maybe their children, might be turned into a loose collection of quantum particles some time in the not-so-remote future by the fruits of their own toy-making? Or is the indifference caused by the remoteness of the future, whereas other ethical issues in science are present here and now? But don’t the seriousness of the nuctroid threat and its logical inevitability make any impression on anyone?
Recently, my former research advisor offered the following
counter-argument (or rather, a hope of his):
I am fully aware of this. A single
person’s abstinence can have absolutely no consequence
in the overall scheme of things. My purpose is not to
hide my head in the sand like an ostrich, but to
raise people’s awareness of this problem. Nor
do I expect that a single person’s voicing of his
concern is enough. (I don’t know if I am alone; I
suspect I am not, but I haven’t heard anyone else’s
voice on this issue; contact me if you are aware of others speaking about
it.) If others want to undertake the development of nuctroids, let
them feel free to do it (and face the consequences), but count me out. I
choose to “cast my vote” in favor of voicing my concern. My hope is that
in this way a larger percentage of people will see the seriousness of this
matter and join their voices, putting pressure on society and
administrations to take some measures.
In the late 19th and early 20th century, with the anticipated spread of the use of electricity, some people were afraid of electrocution, and electricity was perceived as a public threat. Such worries, which appear even funny today, were not baseless or useless. It was because of such worries that measures were taken and technology was developed that made it possible to build essentially harmless electrical devices.
I agree that skepticism is a healthy
attitude, and I myself am skeptical about many issues. I
am skeptical even about the threat that I foresee and
explain in the present article. But, weighing rationally
what I know about human nature and how far technology can
reach on one hand, and any objections that I might have
due to skepticism on the other hand, I find that the “nuctroid
threat” weighs more. It is up to the reader to
perform their own weighing before reaching any
conclusions. Just a word of caution: the fact that some
imaginative doomsday scenarios did not materialize in the
past does not imply that there is really no danger ever
— one might be caught in the trap of “the boy crying
‘Wolf!’”, in other words. It is a fundamental
feature of the human mind to try and categorize,
pigeonhole situations; so if one has seen a number of
failed doomsday scenarios, one is strongly tempted to
categorize everything that appears similar as “Oh, it’s
just one of those”. I believe this feature of our
cognition does not help us in this case; one must
rationally list the reasons why this is “just one of
those”, and ponder the accuracy of such reasoning.
A related issue, specific to the people in the U.S.A., is that after the 9/11 attacks Americans went through a period of intense fear, an apprehension about anything that could upset their easy and cozy way of living. Now, after several years without another attack on U.S. soil, they have timidly started exiting from this period of apprehension, and the first articles that look at their “age of fear” with a critical eye (and even a sense of humor) have started appearing (e.g., read this one). The danger here is that they will experience what I call the “swing of the pendulum to the other extreme position”; i.e., when you release a pendulum from one extreme position it doesn’t go straight to the equilibrium point but swings all the way to the other extreme first. Similarly, Americans might feel disdain for the kind of danger that I describe here, and treat it as just another one of those hateful scenarios that used to send chilling sparks of fear up and down their spines in the past. It’s a natural psychological reaction to try and turn one’s face away from what causes discomfort. But Americans can sense that this is not a case like those they’re familiar with, if they realize that the “reign of terror” was a cheap trick employed for years by their post-9/11 administrations in order to reduce civil liberties and pass antidemocratic policies with no resistance. I am not a member of their administration, not even an American. I am speaking as a person concerned about fellow people and the future of humanity as a whole.
This is the argument that some
correspondents have put forth, and, honestly, the one I find hardest to
counter. The argument says that the individual who invented the knife
(suppose there was such a prehistoric individual) cannot be held
responsible for all the stabbings that have taken place since then. Sir
Isaac Newton cannot be held accountable for others using calculus to
find with precision the parabolic orbit of a pelted object, such as a
cannon ball. James Clerk Maxwell cannot be accused of the electrocution
of criminals (or suspected criminals) in various States of the U.S.A.
The more general a discovery is, the more likely it is that a way will
be found to apply it so that it will result in the loss of life. One
should draw a line between creating a scientific theory and consciously
manufacturing weapons using that theory, with the express intent to take lives.
OK, so where does research in theoretical cognition fall? On which side of the line?
Looking at it coolly, it seems that it falls on the same side as Newton’s calculus, Maxwell’s electromagnetism, and even the unknown ancestor’s blade-making. Designing cognitive architectures, and implementing ideas in computer programs in order to see whether the ideas work, without having in mind how those ideas can be used against humanity, does not seem to be a culpable activity. It’s just that, although I don’t have any misapplications in mind, I can’t help thinking that others will find misapplications in their own minds, without a doubt. So, though I have now started working in cognition again (but in isolation), I can’t avoid seeing the problem coming.
Seeing the problem coming is one
thing, but figuring out what to do is quite another. I do
not want to give the impression that I know how we can deal
with the nuctroid threat. All I can do is propose the following.
Research that aims toward making machines appear human must be marked as highly dangerous, or ethically suspicious at least. Such research should not be funded. Note, I am not advocating an enmity toward all research in artificial intelligence and cognitive science; only a discouragement of the research that explicitly leads to the development of machines that can deceive humans, and pass as humans. To have computers that can compose high-quality music, for instance, or translate between languages, is not directly dangerous. Nor is it dangerous to have self-aware, conscious machines. If anything, a self-aware machine that places a high value on its preservation, and on the preservation of humanity, is probably more difficult to persuade to go and explode itself among people — Islamist suicide bombers notwithstanding. This last thought implies that sophistication is probably a desired attribute of machines: the more self-conscious, the less of a nuctroid threat; but self-conscious they must be by law, not by the goodwill of the free market; and self-conscious means human-like in mind, not in form and external appearance. We do need robots that work for us, but not robots that trick us into misidentifying them.
High-school children, college undergraduates (including those at military-friendly colleges), graduate students, and in general all people involved in the educational systems of the U.S.A., Europe, Japan, Australia, and elsewhere, must abandon their naïve attitude of “Let’s make stuff come alive”, and become aware of the seriousness of such an endeavor. Children cannot discover the seriousness of this matter by themselves, so it is up to the academic institutions to educate their students and take appropriate measures. If universities, such as my own IU, can be so serious about the ethics of procedures that involve psychological experiments on human subjects and even on animals (as I know from first-hand experience), then it is high time they become even more serious about what machines their students in artificial intelligence and cognitive science are experimenting with.
Americans should grow up and abandon their juvenile treatment of weapons, high technology, and the value of “non-American human life” (which, sadly, to many of them is synonymous with “lowlife”). This is the hardest part of my proposal. One can’t just tell an entire culture to do this and not do that. In this case matters are complicated by the existence of a rich elite class in the U.S.A., in whose interest it is to keep the public uninformed, holding my janitor’s “Let’s nuke ’em all” attitude, because the lack of public awareness increases the elite’s short-term benefits (they support wars, which help them manufacture, advertise, and sell more weapons abroad, for instance). This is compounded by the American myth that it doesn’t matter if you are poor, because if you’re capable enough you can raise your social status all the way to the top. Having believed this myth for decades, the poor among Americans don’t much mind being kept down (i.e., being poor and thus staying at my janitor’s level of political and educational sophistication), because the notion that anyone can rise to the top sweetens the pill and makes it more palatable. I must note here that I believe the so-called American dream of “rags to riches” is a myth for at least two reasons: first, it does matter if you are born Black, Hispanic, etc. — after all, what is the percentage of non-white billionaires in the U.S.A.? And second, it is analogous to winning the jackpot in a lottery: yes, you can be the sole lucky winner, and you can even nurture your ego by thinking that winning in life is not a matter of mere luck, but the existence of the jackpot itself presupposes a vast number of losers; what are the chances that you’ll have the guts and wits to be the sole winner and rise high in social status?
And is it really so attractive to live in a society with a handful of winners and millions of losers, resulting in the hordes of homeless people who search in the garbage, while you — the “good Christian” — look at them with disdain for they didn’t make it and ended up being among the losers?
For the above reasons, I realize the tremendous difficulty of talking to an entire culture. So, the only course of action that seems conceivable to me is to raise their awareness by having as many voices as possible talk about this issue in unison, and do this particularly in the most anarchic, relatively uncensored-from-above medium of communication that humanity has ever known: the Internet.