
Are ‘intelligent weapons’ feasible?


If the dreams of high-tech defense planners come true, future U.S. weapons will be able to see, talk and reason. Perfecting computerized artificial intelligences capable of guiding unmanned vehicles, understanding spoken English and planning battle strategy is the goal of the five-year, $600 million Strategic Computing Initiative (S.C.I.) announced by the Defense Advanced Research Projects Agency (DARPA) in October 1983. The initiative is by far the largest and most ambitious coordinated artificial intelligence project in U.S. history; in all the world, only the Japanese Ministry of International Trade and Industry’s ten-year, $850 million effort to develop “fifth-generation” computers exceeds it. And if the first phase is a success, DARPA may pour an additional $400 million into university and industry research on strategic computing for five more years, between 1988 and 1993.

DARPA is no stranger to the field of computer science. Although explosive growth in the computer industry has been widely touted as a shining example of the virtues of private enterprise, DARPA money has backed a major proportion of advanced research since the agency was created, in 1958. Until recently, however, DARPA funded mainly basic research. Not until the Strategic Computing Initiative was unveiled by the weapons-minded Reagan Administration did the agency begin seeking computer technologies with specifically military applications.

The S.C.I. research is supposed to develop a high-tech weapon for each branch of the armed services. DARPA has promised the Army a robotic land vehicle able to navigate unfamiliar terrain and identify objects while traveling at sixty kilometers an hour. Such a machine might carry out reconnaissance missions or transport supplies without any human involvement. Suitably armed, it could function as a tank.


For the Air Force, DARPA plans to develop an intelligent “associate,” a sort of electronic co-pilot. The system, which could be “trained” to serve an individual pilot’s needs, would perform routine flight chores, monitor the plane’s systems and store a body of tactical information. The machine would be able to speak to the pilot in conversational English, even amid the noise of the cockpit.

For the Navy, DARPA proposes a combat management system for aircraft carrier battle groups. It would analyze the data compiled by the carrier’s huge array of sensors (radar, satellites, other electronic detection systems), assess friendly and hostile force configurations and plan strategy. The system would offer a reasoned explanation for its choices. It would also estimate the potential damage and casualties, and the likelihood of success, for each strategic option.

A number of formidable technical obstacles hinder the realization of these plans. The computers themselves must perform, at a minimum, hundreds of times faster than today’s fastest machines yet be small enough, light enough and tough enough to be crammed into a jet or armored vehicle alongside the rest of its electronic guts. The “expert systems” software–programs that emulate human intelligence–required for vision, speech recognition and battle planning likewise demand vast increases in technological sophistication. In order to speed up computing time sufficiently, parallel-processing techniques, which enable computers to handle many instructions simultaneously rather than in a linear sequence, are needed. Those are still in an unrefined stage of development, however. The success of the entire project hinges on a series of technological breakthroughs that are currently in the realm of science fiction. Despite widespread skepticism among computer scientists about whether the S.C.I.’s goals can be attained, let alone in its ten-year timetable, DARPA is forging ahead with verve, cheerily pointing out that even partial success might mean major advances in computer technology.
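The distinction between linear and parallel execution is worth sketching. The toy program below, written in present-day Python purely for illustration (nothing like it appears in DARPA’s plans, and the prime-counting task and chunk sizes are arbitrary assumptions of this sketch), computes the same result twice: first one piece at a time in a linear sequence, then with the pieces handed to separate worker processes that run simultaneously.

    # Illustrative sketch only: the same workload done serially, then in parallel.
    from concurrent.futures import ProcessPoolExecutor

    def count_primes(bounds):
        # Count primes in [lo, hi) by trial division -- deliberately slow work.
        lo, hi = bounds
        total = 0
        for n in range(max(lo, 2), hi):
            if all(n % d for d in range(2, int(n ** 0.5) + 1)):
                total += 1
        return total

    if __name__ == "__main__":
        chunks = [(i, i + 25_000) for i in range(0, 100_000, 25_000)]

        # Linear sequence: each chunk is processed one after another.
        sequential_total = sum(count_primes(chunk) for chunk in chunks)

        # Parallel processing: the chunks are farmed out to worker processes
        # running at the same time, and the partial counts are combined.
        with ProcessPoolExecutor() as pool:
            parallel_total = sum(pool.map(count_primes, chunks))

        assert sequential_total == parallel_total
        print(sequential_total)

The answer is identical either way; only the time it takes differs, which is both the promise and the difficulty of parallel hardware and software.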

Difficulties aside, however, the project raises serious political issues. In a position paper published last June, Computer Professionals for Social Responsibility, a nationwide organization with more than 500 members, argued that DARPA has sidestepped the question of computer reliability. One motivation for the S.C.I. is to speed up the decision-making process; the computers it envisions would be “adaptable” and “flexible” in highly fluid and unpredictable military situations. In arguing the need for the initiative, DARPA cites President Reagan’s ballistic missile defense plan, “where systems must react so rapidly that almost complete reliance will have to be placed on automated systems.”

In other words, the agency has such confidence in artificial intelligence that it is pursuing technology that could place a key element of the nuclear trigger in the ghostly hands of a machine. The computer scientists’ report, reviewing the many well-documented failures of existing nuclear warning systems–full-scale nuclear alerts have been set off by flocks of geese and even by a rising moon–contends:

Any computer system, however complex, and whether or not it incorporates artificial intelligence, is limited in the scope of its actions and in the range of situations to which it can respond appropriately. This limitation is fundamental and leads to a very important kind of failure in reliability . . . having to do with limitations of design. . . . The primary insurance against accidents resulting from this kind of failure has been the involvement of people with judgement and common sense.

Nevertheless, should a Star Wars system ever be instituted, it would almost inevitably be controlled by computer. The time available for responding to a Soviet missile launch would be considerably less than the missiles’ projected hundred-second boost phase, the only point in their trajectory at which they could be tracked accurately and shot down by U.S. defensive lasers. A computer malfunction would cause international confusion at the very least, and it could initiate all-out nuclear war, even if U.S. nuclear weapons were not fired immediately in a mistaken counterattack. Furthermore, such computer systems could not be battle-tested. All predictions about their reliability would be based on computer simulations and subject to all the limitations of the technology.

A less cosmic reason for concern is that the Strategic Computing Initiative represents a push by the Pentagon for greater control over the priorities of computer science research. Having no laboratories of its own, DARPA depends on the facilities and services of computer science departments at many major universities, of many large computer manufacturers and of military contractors like FMC Corporation, TRW and Martin Marietta. Its budget dwarfs those of the two other major American collaborations in fifth-generation computer research–the privately sponsored Microelectronics and Computer Technology Corporation and the state-funded Microelectronics Center of North Carolina–which are geared to commercial development and receive no Federal funds. The effect is to give the Pentagon dominance over the largest chunk of U.S. research and development work in this important field.

As a result, DARPA will be able to channel research into projects that are mainly of military value. One example is its commitment to developing gallium arsenide (GaAs) semiconductors. The agency is completing a pilot fabrication plant for GaAs microprocessors and plans to establish design rules for gallium arsenide very-large-scale integrated circuits. Because of its faster data-processing capability and lower power requirements, GaAs has the potential to be a better base for microchips than the silicon now in use, but it is costly and underdeveloped. Competing technologies such as Josephson junctions and advanced silicon techniques may negate the GaAs advantages. The military has an overriding interest in gallium arsenide, however, because unlike silicon, it resists the electromagnetic pulse effects of nuclear explosions. DARPA wants robots that can keep on fighting in the smoking debris of a nuclear holocaust.


Why, to take another example, does the Pentagon need the automated battlefield that strategic computing envisions? Lenny Siegel of the Pacific Studies Center and John Markoff, an editor of Byte, speculate that though proponents of the automated battlefield concept argue that technological warfare is more effective than human combat, the Army’s desire to automate is derived from politics: “As the Vietnam War . . . demonstrated, Americans–be they soldiers or civilians–are hesitant to support wars of intervention in which the lives of American troops are threatened.”

Before Congress, military planners plead the case for artificially intelligent weapons in terms of “productivity”–they are “force multipliers.” But making warfare less costly in terms of soldiers’ lives may make military intervention more acceptable.

DARPA has attempted to obscure the political dimensions of its plan with high-tech razzle-dazzle and posturing about the benefits artificial intelligence technology will bring to civilians. In a special issue of the Institute of Electrical and Electronics Engineers’ magazine, Spectrum, in November 1983, S.C.I. project director Robert Kahn argued that new technology developed under the program will have many civilian uses, resulting in improvements in productivity, health care and education. He also pointed to the need to make the United States more competitive–a powerful prod, given that the Japanese fifth-generation project is two years ahead of our own. That is sheer obfuscation. A government-supported research program like Japan’s, which aims to generate commercially valuable and socially useful products, might be desirable. But DARPA’s project, whatever its civilian uses, is militaristic by nature. Weapons, not medicine and schools, are DARPA’s present concern.

“The push to develop so-called ‘intelligent’ weapons,” as the Computer Professionals for Social Responsibility notes, is only another “futile attempt to find a technological solution for what is, and will remain, a profoundly human political problem.” The idea of an artificial intelligence more logical and reliable than our own is a seductive one, especially if we believe it could protect us from a nuclear Armageddon. Sadly, it cannot. Computer systems, delicate and programmed by humans who can never anticipate every conceivable situation, will always be untrustworthy nuclear guardians. The solution lies, as it always has, in reducing the danger of war by putting weapons aside and expanding the possibilities for peaceful interchanges.
