If the dreams of high-tech defense planners come true, future U.S. weapons will be able to see, talk and reason. Perfecting computerized artificial intelligences capable of guiding unmanned vehicles, understanding spoken English and planning battle strategy is the goal of the five-year, $600 million Strategic Computing Initiative (S.C.I.) announced by the Defense Advanced Research Projects Agency (DARPA) in October 1983. The initiative is by far the largest and most ambitious coordinated artificial intelligence project in U.S. history. In all the world, only the Japanese Ministry of International Trade and Industry’s ten-year, $850 million effort to develop “fifth-generation” computers exceeds it. And if the first phase is a success, DARPA may pour an additional $400 million into university and industry research on strategic computing for five more years, from 1988 to 1993.
DARPA is no stranger to the field of computer science. Although explosive growth in the computer industry has been widely touted as a shining example of the virtues of private enterprise, DARPA money has backed a major proportion of advanced research since the agency was created, in 1958. Until recently, however, DARPA funded mainly basic research. Not until the Strategic Computing Initiative was unveiled by the weapons-minded Reagan Administration did the agency begin seeking computer technologies with specifically military applications.
The S.C.I. research is supposed to develop a high-tech weapon for each branch of the armed services. DARPA has promised the Army a robotic land vehicle able to navigate unfamiliar terrain and identify objects while traveling at sixty kilometers an hour. Such a machine might conduct reconnaissance missions or transport supplies without any human involvement. Suitably armed, it could function as a tank.
For the Air Force, DARPA plans to develop an intelligent “associate,” a sort of electronic co-pilot. The system, which could be “trained” to serve an individual pilot’s needs, would perform routine flight chores, monitor the plane’s systems and store a body of tactical information. The machine would be able to speak to the pilot in conversational English, even amid the noise of the cockpit.
For the Navy, DARPA proposes a combat management system for aircraft carrier battle groups. It would analyze the data compiled by the carrier’s huge array of sensors (radar, satellites, other electronic detection systems), assess friendly and hostile force configurations and plan strategy. The system would offer a reasoned explanation for its choices. It would also estimate the potential damages and casualties, and the likelihood of success, for each strategic option.
A number of formidable technical obstacles hinder the realization of these plans. The computers themselves must perform, at a minimum, hundreds of times faster than today’s fastest machines yet be small enough, light enough and tough enough to be crammed into a jet or armored vehicle alongside the rest of its electronic guts. The “expert systems” software–programs that emulate human intelligence–required for vision, speech recognition and battle planning likewise demands vast increases in technological sophistication. In order to speed up computing time sufficiently, parallel-processing techniques, which enable computers to handle many instructions simultaneously rather than in a linear sequence, are needed; those techniques are still at an unrefined stage of development, however. The success of the entire project hinges on a series of technological breakthroughs that are currently in the realm of science fiction. Despite widespread skepticism among computer scientists about whether the S.C.I.’s goals can be attained, let alone in its ten-year timetable, DARPA is forging ahead with verve, cheerily pointing out that even partial success might mean major advances in computer technology.
Difficulties aside, however, the project raises serious political issues. In a position paper published last June, Computer Professionals for Social Responsibility, a nationwide organization with more than 500 members, argued that DARPA has sidestepped the question of computer reliability. One motivation for the S.C.I. is to speed up the decision-making process; the computers it envisions would be “adaptable” and “flexible” in highly fluid and unpredictable military situations. In arguing the need for the initiative, DARPA cites President Reagan’s ballistic missile defense plan, “where systems must react so rapidly that almost complete reliance will have to be placed on automated systems.”
In other words, the agency has such confidence in artificial intelligence that it is pursuing technology that could place a key element of the nuclear trigger in the ghostly hands of a machine. The computer scientists’ report, reviewing the many well-documented failures of existing nuclear warning systems–full-scale nuclear alerts have been set off by flocks of geese and even by a rising moon–contends:
Any computer system, however complex, and whether or not it incorporates artificial intelligence, is limited in the scope of its actions and in the range of situations to which it can respond appropriately. This limitation is fundamental and leads to a very important kind of failure in reliability . . . having to do with limitations of design. . . . The primary insurance against accidents resulting from this kind of failure has been the involvement of people with judgement and common sense.
Nevertheless, should a Star Wars system ever be instituted, it would almost inevitably be controlled by computer. The time available for responding to a Soviet missile launch would be considerably less than the missiles’ projected hundred-second boost phase, the only point in their trajectory at which they could be tracked accurately and shot down by U.S. defensive lasers. A computer malfunction would cause international confusion at the very least, and it could initiate all-out nuclear war, even if U.S. nuclear weapons were not fired immediately in a mistaken counterattack. Furthermore, such computer systems could not be battle-tested. All predictions about their reliability would be based on computer simulations and subject to all the limitations of the technology.
A less cosmic reason for concern is that the Strategic Computing Initiative represents a push by the Pentagon for greater control over the priorities of computer science research. Having no laboratories of its own, DARPA depends on the facilities and services of computer science departments at many major universities, of many large computer manufacturers and of military contractors like FMC Corporation, TRW and Martin Marietta. Its budget dwarfs those of the two other major American collaborations in fifth-generation computer research–the privately sponsored Microelectronics and Computer Technology Corporation and the state-funded Microelectronics Center of North Carolina–which are geared to commercial development and receive no Federal funds. The effect is to give the Pentagon dominance over the largest chunk of U.S. research and development work in this important field.
As a result, DARPA will be able to channel research into projects that are mainly of military value. One example is its commitment to developing gallium arsenide (GaAs) semiconductors. The agency is completing a pilot fabrication plant for GaAs microprocessors and plans to establish design rules for gallium arsenide very-large-scale integrated circuits. Because of its faster data-processing capability and lower power requirements, GaAs has the potential to be a better base for microchips than the silicon now in use, but it is costly and underdeveloped. Competing technologies such as Josephson junctions and advanced silicon techniques may negate the GaAs advantages. The military has an overriding interest in gallium arsenide, however, because unlike silicon, it resists the electromagnetic pulse effects of nuclear explosions. DARPA wants robots that can keep on fighting in the smoking debris of a nuclear holocaust.
Why, to take another example, does the Pentagon need the automated battlefield that strategic computing envisions? Lenny Siegel of the Pacific Study Center and John Markoff, an editor of Byte, speculate that though proponents of the automated battlefield concept argue that technological warfare is more effective than human combat, the Army’s desire to automate is derived from politics: “As the Vietnam War . . . demonstrated, Americans–be they soldiers or civilians–are hesitant to support wars of intervention in which the lives of American troops are threatened.”
Before Congress, military planners plead the case for artificially intelligent weapons in terms of “productivity”–they are “force multipliers.” But making warfare less costly in terms of soldiers’ lives may make military intervention more acceptable.
DARPA has attempted to obscure the political dimensions of its plan with high-tech razzle-dazzle and posturing about the benefits artificial intelligence technology will bring to civilians. In a special issue of the Institute of Electrical and Electronics Engineers’ magazine, Spectrum, in November 1983, S.C.I. project director Robert Kahn argued that new technology developed under the program will have many civilian uses, resulting in improvements in productivity, health care and education. He also pointed to the need to make the United States more competitive–a powerful prod, given that the Japanese fifth-generation project is two years ahead of our own. That is sheer obfuscation. A government-supported research program like Japan’s, which aims to generate commercially valuable and socially useful products, might be desirable. But DARPA’s project, whatever its civilian uses, is militaristic by nature. Weapons, not medicine and schools, are DARPA’s present concern.
“The push to develop so-called ‘intelligent’ weapons,” as the Computer Professionals for Social Responsibility notes, is only another “futile attempt to find a technological solution for what is, and will remain, a profoundly human political problem.” The idea of an artificial intelligence more logical and reliable than our own is a seductive one, especially if we believe it could protect us from a nuclear Armageddon. Sadly, it cannot. Computer systems, delicate and programmed by humans who can never anticipate every conceivable situation, will always be untrustworthy nuclear guardians. The solution lies, as it always has, in reducing the danger of war by putting weapons aside and expanding the possibilities for peaceful interchanges.
An ancient garden of learning
Politics and ethics are topics that the Chinese often talk about. Mindful of the anarchy and destruction brought by the Cultural Revolution, the Chinese are now more open in discussing freedom and economic well-being.
THEY WARNED ME never to initiate talk about politics or religion in China.
During my first evening at the “English Corner,” where young and old, workers and professionals came to practice their English with the foreign teachers, half the folks asked me about politics and openly discussed Chinese Communism. Over the evenings that followed at the English Corner and in homes, and the days in the classroom, most of the discussions during my 11 months dealt with either politics or philosophy and ethics.
For me the stereotypes regarding what to talk about and what China was about crumbled quickly in the “special economic zone” where I worked.
The People’s Republic of China had hired me to teach conversational English in a small university. After six years of English study, the students could read and write, but they were too shy to speak. Almost daily I would have each student ask a question, and then we would discuss it. They spoke in English and they listened to my English. That was the sole aim and requirement.
At first they questioned me about the United States. They all wanted to know about it, to visit, perhaps to come to study. But they would also want to return to China, their true home. They would bring ideas and technology from the United States to develop China. They clearly loved China. I sensed no longing to emigrate permanently to the “promised land” of America. I had been told otherwise before I went to China.
After the first few classes talking about the States, the questioners began to ask what I thought of China, especially its politics. As on that first evening at the English Corner, I did not answer the question. I was a recent visitor who knew little about China. I wanted to know what they thought. I wanted to learn from them, and I did learn.
I learned much that surprised me. I learned of their pride and commitment to China. I learned quickly that their very affection for Mao and the Revolution and for their country pushed them to criticize what they saw as damaging: Mao’s last years, the domination by his wife, the Cultural Revolution, static and doctrinaire Marxist propaganda.
Though workers and students are still required to attend indoctrination classes, most of the folks I met laughed at the classes and the boring dogma as a small price to pay for the freedom and economic well-being many in Southeast China are now experiencing.
I discovered that the Cultural Revolution and its effects entered into almost every serious conversation. I did not hear one good word about those 10 years of anarchy and destruction. No one wanted such chaos to return, those years when the youth went berserk. Whenever the subject of the Tiananmen incident arose, most would quietly admit that likely 90 percent of the Chinese people were glad that the students had been suppressed in order to avoid total chaos. No one, especially the vast rural populations whose lives had so improved since 1949, wanted to see a student repetition of the horrors of 1967-77.
Chinese politics came first in every serious discussion. Ethics came second. The students seemed lost philosophically, morally–perhaps not so lost as the students I teach in California, but lost nonetheless. (The advantage of the Chinese students is that they know they are lost.)
I remember my student, Guo Mao Sheng, with whom I often chatted and fished. He wrote:
Where is my home to return to? and
why is the young heart often
troubled with solitude?
Wandering and wandering late at night.
Just like a lost child.
The journey is so long and so hard,
but the traveler was very tired.
Who can give him the courage
to continue the journey?
The young Chinese I met were pondering the increasing corruption in business, in politics. These students were crossing from a puritanical socialism into the “special economic zone’s” mixed economy, with its deals and compromises that easily slide into payoffs and thievery. They wanted to know about ethics and even about the transcendent wisdom that grounds justice. Yes, they asked about the life of the spirit, about God.
SO WHAT DID THIS foreign teacher say to these serious and inquiring youth? What should such a teacher say, who himself lives in an American glass house vulnerable to thrown stones? What should he say who knows so little about China and who has not lived through the terrors of the Cultural Revolution, who has grown up amid a religious atmosphere and free discussion?
I know little about “should,” but what I tried to do was what I have seen my finest teachers do. I tried to listen to the questions and to the anxiety under them. I did not answer them, nor could I. I have trouble enough answering for myself the daily dilemmas of politics, the questions of ethics and the mysteries of religion.
I asked the students in class and the questioners at the English Corner to tell me what they thought, what personal answers they had come to. They did answer, though hesitantly at first. Their reflections about ethics and its nephew, politics, came from a depth and with a clear honesty that would put to shame many academics, East or West. Their discussions of spirituality, the existence of a transcendent, the very personality of a God would console the mind of Plato and the soul of Aquinas.
I realized once again, after 34 years of teaching, that the teachers of the teacher are the students. I went to China to teach and I returned healthier, wiser and, God help me, more faithful. I hope to go again to that ancient garden of learning.
Far up the forested coast of Hokkaido
where Steller’s sea eagles fish the air,
a few brown bears still make their home
in the eastern mountains, mostly alone,
cousins of the fell Kamchatkan bears,
Siberian to the bone, relicts of a time
before the Japanese were even on this land,
when only the Ainu dwelt close at hand.
Hokkaido’s current pioneers see danger far
or near as bear–hi-guma–fierce as fire:
“Out walking, if once you see this bear,
it is then too late for aught but prayer!”
But the Ainu too are nearly gone, who kept,
and killed, and worshiped bears as gods.
Beardless, godless, they carve them now
from wood instead, speak Japanese, and bow
to the tourist trade. With songs and dance
and crafts to sell, they greet the neck-tied
hordes who descend on them time and again.
They parody themselves for the insatiable yen
that devours them, and only now and then
will an old man turn aside from the rest:
We were here first! he says to the trees.
Tell that to the bears, say the trees.