21st Century Science & Technology


PROFILES IN PORRIDGE
The Artificial Reputation of John von Neumann’s Intelligence

(Reprinted in full from the Fall 2000 issue of 21st Century)

by Ralf Schauerhammer

The Financial Times of London celebrated John von Neumann as “The Man of the Century” on Dec. 24, 1999. The headline hailed him as the “architect of the computer age,” not only the “most striking” person of the 20th century, but its “pattern-card”—the pattern from which modern man, like the newest fashion collection, is cut.

The Financial Times and others characterize von Neumann’s importance for the development of modern thinking by what are termed his three great accomplishments, namely:

(1) Von Neumann is the inventor of the computer. All computers in use today have the “architecture” von Neumann developed, which makes it possible to store the program, together with data, in working memory.

(2) By comparing human intelligence to computers, von Neumann laid the foundation for “Artificial Intelligence,” which is taken to be one of the most important areas of research today.

(3) With his “game theory,” von Neumann developed what became a dominant tool of economic analysis, which gained recognition in 1994 when the Nobel Prize for economic sciences was awarded to John C. Harsanyi, John F. Nash, and Reinhard Selten.

I shall examine these three millennial accomplishments in turn, the better to judge whether John von Neumann really is such a “Man of the Century,” as the Financial Times, and others, claim.

Accomplishment No. 1: Invention of the Computer
“Computer” is one of the most common words in use today. But what is a “computer,” really? The word obviously comes from the Anglo-American language group and denotes, or denoted, until the end of World War II, a person who carries out calculations according to a given scheme (in bookkeeping, for example, or in a technical office). This person usually used a “calculator.” Today, when we say “computer,” we do not mean a person, but a calculating machine, which is not only able to perform calculations according to an arbitrarily given calculating scheme, but can manipulate the most diverse kinds of information and data in some desired way.

So we see that the objects we designate with certain words change with the development of technology. That is why it is not enough to consider only the function of a technology if we want to judge how certain inventions have unfolded; it is also necessary to investigate how the economic realization of this function was possible, in specific cycles of work, at a specific time. We have to consider an “evolutionary series” of this technology and pay attention to the emergence of its different “organs,” which make it what it is today, and which make it possible for it to develop in the direction it will take in the future.

In this context, I present a short overview of the history of the development of the calculating machine.

The idea for a calculating machine first arose in the year 1617, at a meeting of the founder of astrophysics, Johannes Kepler, and the theologian and machinist, Wilhelm Schickard. The machine was able to accomplish all four basic kinds of calculation (addition, subtraction, multiplication, and division). It had a mechanical adding function and a system of movable bars and windows, which allowed for the display of the preliminary results of multiplying two numbers, which then only had to be added together the proper number of times. The machine was lost in the turmoil of the Thirty Years’ War, and the description of it was only rediscovered in the papers of Kepler and Schickard in the 19th century.

In 1642, the mathematician-philosopher Blaise Pascal exhibited in Paris a calculating machine, which was similar to that of Kepler-Schickard in certain ways. It was able only to add and subtract, and even then not with the decimal system, but in different units which corresponded to the system of monetary values at the time. The machine was developed to make it easier to count and calculate volumes of money.

Gottfried Wilhelm Leibniz took the next decisive step. He was familiar with Pascal’s work, but he already had a finished design of his own calculating machine when he went to Paris in 1672. Leibniz had invented a very crucial new “organ”—he used a stepped cylinder for entering the numbers. This was a broad gear, movable along its axis of rotation, whose teeth are staggered slightly in length. At first, only one tooth engages upon rotation; if the cylinder is moved a step forward, two teeth engage, then three, and so on, until all of them engage when the cylinder is shifted the full length of the axis. With a set of several such cylinders, it is possible to set decimal numbers of several places (by shifting each cylinder until the number of teeth corresponding to that place’s digit engages), and these can be repeatedly fed into the adding mechanism.

If the entire cylinder system is shifted one or two positions forward, each turn adds 10 times, or 100 times, the entered value, and so multiplication can be accomplished by means of repeated addition. Division, through a similar process, is equivalent to repeated subtraction. From that time forward, all calculating machines used this basic principle of Leibniz’s machine.
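To make this principle concrete, here is a small, purely illustrative sketch in the Python programming language (the function name and layout are an assumption of this sketch, not a description of Leibniz’s mechanism): multiplication carried out by repeated addition alone, with a shift of one decimal place standing in for the shift of the cylinder carriage.

    def leibniz_multiply(a, b):
        # Multiply a by b as Leibniz's machine does: repeated addition,
        # with the "carriage" shifted one decimal place per digit of b.
        result = 0
        shift = 0
        while b > 0:
            digit = b % 10               # lowest decimal digit of the multiplier
            for _ in range(digit):       # that many turns of the crank
                result += a * 10**shift
            b //= 10                     # move on to the next digit ...
            shift += 1                   # ... and shift the carriage one place
        return result

    assert leibniz_multiply(1234, 567) == 1234 * 567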

We should also mention, with respect to Leibniz, that he investigated various systems of number-notation, and recognized that the binary system, where only the numbers 0 and 1 are needed, is in fact the simplest means of representing numbers—but that advantage has a price: The representation of numbers requires a very long series of numerals, so that the number of calculating steps becomes very large. For a manual calculating machine, where a person is required to turn the crank, a higher system of numeration is more practical; for example, the decimal system.
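A brief illustration of this trade-off (an example of my own, not Leibniz’s): binary notation needs only the numerals 0 and 1, but more than three times as many digit positions as decimal notation, and thus correspondingly more elementary steps on a hand-cranked machine.

    for n in (42, 1903, 10**12):
        decimal_digits = len(str(n))
        binary_digits = len(bin(n)) - 2      # bin() prefixes the string with '0b'
        print(n, ":", decimal_digits, "decimal digits,", binary_digits, "binary digits")
    # 10**12, for example, needs 13 decimal digits but 40 binary digits.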

This brings us up almost to today’s “computer,” and the development by British mathematician and inventor Charles Babbage of the first mechanical, program-controlled calculating machine. In 1822, Babbage began to construct what he called his “difference machine,” and in 1833, he began construction of the “analytical calculating automaton.” Neither machine was fully functional, because Babbage came up against the limits of what could be done at the time in precision machining. However, Babbage foresaw all of the “organs” which are characteristic of a “computer”:

(1) an automatic calculating mechanism for all the calculating functions;
(2) a large memory storage;
(3) a control-system which used punch-cards;
(4) a unit to enter data; and
(5) a printer unit to display results.

Some 100 years after Babbage, in 1932, Konrad Zuse, an engineer, had the idea of producing a calculating machine which was conceptually similar to that of Babbage, in that it was programmable. He presented his idea in two patented designs on April 11, 1936, and July 3, 1937, and between 1938 and 1945, he built a number of machines which he named Z1, Z2, Z3, and Z4.

Zuse’s decisive step was to recognize the enormous advantage which lay in the use of binary switching elements, together with the binary (dual) system of numeration. This went hand in hand with the use of the basic logical functions AND, OR, and Negation, both to carry out the steps of calculation and to represent numbers with a floating (rather than fixed) decimal point. The program, for which Zuse used the apt expression “calculating plan,” was stored on a strip with holes punched in it (or, more economically, on old rolls of film). In his patent applications, Zuse mentions the possibility of storing the calculating plan in a working memory, but this was impossible to achieve practically at that time.
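As an illustration of this principle (a sketch of my own, not Zuse’s actual relay circuits), the three basic logical functions suffice to build up binary addition, one bit position at a time:

    def NOT(a): return 1 - a
    def AND(a, b): return a & b
    def OR(a, b): return a | b

    def XOR(a, b):                  # built from AND, OR, and Negation alone
        return AND(OR(a, b), NOT(AND(a, b)))

    def full_adder(a, b, carry_in):
        # One binary position of an adder, as it might be wired from relays.
        total = XOR(XOR(a, b), carry_in)
        carry_out = OR(AND(a, b), AND(carry_in, XOR(a, b)))
        return total, carry_out

    def add_binary(x_bits, y_bits):
        # Add two equal-length bit lists, least-significant bit first.
        carry, out = 0, []
        for a, b in zip(x_bits, y_bits):
            s, carry = full_adder(a, b, carry)
            out.append(s)
        out.append(carry)
        return out

    # 5 (101) + 3 (011) = 8 (1000); bits are listed least-significant first.
    assert add_binary([1, 0, 1], [1, 1, 0]) == [0, 0, 0, 1]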

We should keep in mind that Zuse built his Z1 and Z2 during the war, without any government support, and he did it in his “free time” from work. In 1940, he obtained a contract from the Aviation Testing Facility (Versuchsanstalt fuer Luftfahrt) to build the Z3. It was completed in 1941, and had a dual calculating mechanism consisting of 600 relays, and a memory of 1,400 relays for 62 numbers at 22 dual positions each. A multiplication or division, or the calculation of a square root, took three seconds. The Z3 was about the size of a large walk-in closet.

Three years later, in August 1944, in the United States, the MARK I of Howard H. Aiken was put into operation. This program-controlled calculator weighed 35 tons. It still worked with the decimal system and fixed points for decimals. A multiplication of two 10-position numbers took about 6 seconds, their division 11 seconds.

A little more than a year later, the ENIAC of J.P. Eckert and J.W. Mauchly was built in Pennsylvania. It took another two years for the machine to work properly. The ENIAC was the first calculator that used electrical tubes for switching elements, which added considerably to its speed of operation. Even this calculator—which took up a surface area of 140 square meters, consumed 150 kilowatts, and was equipped with more than 18,000 electronic vacuum tubes—did not realize the Zuse concept of a modern binary computer; the flip-flops, consisting of two tubes each, were used only to represent the 10 positions of a mechanical counting gear (as described in Leibniz’s calculating machine). Moreover, the ENIAC was not freely programmable, and the control was accomplished by arrays of hundreds of turn-switches and cable connections.

When he set to work preparing the successor machine, EDVAC, J.W. Mauchly, one of the developers of ENIAC, mentioned the possibility, for the first time, of storing a program in a working memory, in a way comparable to that reported by Zuse in his patent writings of 1936-1937. Mauchly also mentioned the possibility of changing these calculation commands in the working memory while the program was running. If someone wanted to sketch this development in greater detail, he might look at Mauchly’s report, written in June 1945, on the design of the EDVAC, in which he also described the five basic component units of a computer, as had been earlier developed by Charles Babbage.
Now, in this short overview, we have become acquainted with all of the components—“organs” and principles—which make up a modern computer. Yet, such a modern computer, it is said, has a “von Neumann architecture.” Amazing! Over the course of sketching this entire process of development, von Neumann has not appeared at all.1

Von Neumann has been lauded as “the Man of the Century,” and the “inventor of the computer,” but his so-called accomplishments are computed out of thin air.
(a) Kepler-Schickard model: The idea for a calculating machine first arose in the year 1617, at a meeting of Johannes Kepler and the theologian and machinist Wilhelm Schickard. The machine could perform addition, subtraction, multiplication, and division. Although the description of the machine was lost in the turmoil of the Thirty Years’ War, it was rediscovered in the papers of Kepler and Schickard in the 19th century. Here, a model of the machine.

(b) Pascal model: A model of Blaise Pascal’s calculating machine, which was similar to that of Kepler and Schickard, but was able only to add and subtract.

(c) Leibniz model: The decisive next step in computers was taken by Gottfried Wilhelm Leibniz, who developed a crucial new “organ”: Leibniz used a stepped cylinder for entering the numbers, a broad gear that could be moved along the axis of rotation, with slightly staggered teeth. Division was accomplished by repeated subtraction; multiplication by repeated addition. Later calculating machines all used this basic principle. Here, a model of Leibniz’s 1672 machine.

Source: Why the ‘New Economy’ Is Doomed, EIR Special Report, June 2000

Accomplishment No. 2: John von Neumann and The Foundations of ‘Artificial Intelligence’
That John von Neumann is the inventor of the modern computer, is a myth. The conditions of World War II might explain why an erroneous picture has arisen about the development of the computer. But they do not explain how this mistake endured through the 1950s and 1960s, and blossomed into the myth which prevails today. Perhaps the history of the invention of the computer just fits all too nicely with the idea that von Neumann created the foundations for “Artificial Intelligence,” and that he did that with his book The Computer and the Brain, published in 1958 by Yale University Press.

Von Neumann claims in the Introduction to the book, “What is at stake is an attempt to find a way of understanding the nervous system from the standpoint of the mathematician.” This sounds quite impressive, but the assertion is then modified in the very next line. First of all, von Neumann asserts, what is at stake is not really “a way to understand,” but only a “systematic speculation about how such a way,” in his opinion, “should be travelled.” Second, he says, “the standpoint of the mathematician” is quite limited, because merely “logical and statistical aspects are in the foreground.”

So, von Neumann says, in fact: “I am interested in a speculation about a way which one might take, to contribute to understanding the nervous system with notions of logic and statistics.” Fine: But why didn’t he say that from the start? This sort of introduction leads the reader to suspect that this book is about something quite different from what is claimed. And that is indeed the case.

The essential content of the book can be summarized briefly, as follows: Von Neumann describes the knowledge available at that time about computers. He explains the difference between analog computers and the digital computers that today are used almost exclusively. In analog computers, numbers are described and linked by measurable physical states (for example, electrical charge), while in digital computers, a system of numeration (today, exclusively the binary system of 0,1) is described by ordered markings.

Then von Neumann explains that the task set for a computer has to be resolved into a series of successive steps of “basic operations” (of which a computer can carry out only a very limited number). These steps may be passed through in parallel, but only in part. From this procedure of resolving general problems into a few basic operations results the “arithmetic depth” of the process of calculation: the calculation consists of a great multiplicity of steps of basic operations, which differ only minimally from one another. That, in turn, makes it necessary to have a very precise representation of numbers, because the errors increase greatly with the number of steps in the calculation, a principle which any child knows from the game of “telephone”: The greater the number of players who whisper a given message down the line, the less the final message resembles the original message.
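The effect is easy to demonstrate with a small sketch (an illustration of my own, with an artificially coarse precision): if every basic operation keeps only four significant figures, the accumulated error grows with the arithmetic depth of the calculation.

    def round_sig(x, sig=4):
        # Keep only `sig` significant figures, as a machine with very
        # limited precision would after every basic operation.
        return float(f"{x:.{sig - 1}e}")

    step = 0.1
    for n in (10, 1000, 100000):
        exact = step * n
        rough = 0.0
        for _ in range(n):
            rough = round_sig(rough + step)   # one "basic operation" per step
        print(n, "steps: accumulated error =", abs(rough - exact))
    # The longer the chain of basic operations, the less the final result
    # resembles the exact value -- the arithmetic version of "telephone."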

Next, von Neumann describes the human nervous system. He says that he wants to “discuss the points in which the two ‘machines’ are similar, as well as the points in which they differ.” He first says that data operated on in the nervous system, just as in a computer, are transported by electrical current. And he also finds a memory capacity in the brain (which should not be too surprising, given the ability of living creatures to remember things). Von Neumann compares the size, number, and packing-density of the just-developed electronic elements in computers, with human nerve cells, and he finds in the nervous system a numerical representation which is a digital-analog mixture, in which the magnitudes are represented “analog” by the frequency of particular “digital” impulses. Finally, he observes that the precision of this numerical representation, compared with the simplest computers, is very small. However, because the nervous system demonstrably functions quite precisely, von Neumann concludes: “Accordingly, the nervous system appears to use a system of representation which differs completely from the systems known from the usual arithmetic and mathematics.”
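The kind of “digital-analog mixture” von Neumann has in mind can be sketched as follows (an illustrative toy model of my own, not a description of real nerve cells): a magnitude is carried by the frequency of all-or-nothing impulses within a fixed time window, and with only on the order of a hundred impulses per window, the representation is good to a few percent at best.

    import random

    def encode_as_impulses(value, window=100):
        # Represent a magnitude between 0 and 1 by the number of impulses
        # fired in a fixed window: an "analog" quantity carried by
        # "digital", all-or-nothing events.
        return sum(1 for _ in range(window) if random.random() < value)

    def decode(impulse_count, window=100):
        return impulse_count / window

    random.seed(0)
    value = 0.73
    impulses = encode_as_impulses(value)
    print("encoded as", impulses, "impulses; decoded as", decode(impulses))
    # The decoded value differs from 0.73 by a few percent -- far below
    # the precision of even the simplest digital computer.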

Is that really the difference between the computer and the brain, the two “machines”? Hardly. The results of von Neumann’s musings seem rather meager. But von Neumann was not interested in concrete results. Let us return to the Introduction of his book, where he states, “This is an attempt to find a way to understand the nervous system.” Yes—and precisely the path which von Neumann took cannot lead to meaningful results. It is simply the wrong path, one upon which people have repeatedly gone astray throughout history whenever they attempt to explain human beings by means of the processes of some new technology. Such people reduce human beings to processes which human beings, as creative beings, themselves created, and they forget that, as their creators, human beings are infinitely superior to these newly created processes.

Today, we smile compassionately at the attempt which the radical “enlightened” atheist, physician Julien Offray de La Mettrie, made to reduce human beings to a clockwork mechanism in his “L’homme machine” of 1747. But when von Neumann attempts in 1958 to compare the functions of the human brain with the concepts and processes of the “electronic brain” of modern computers, many are paralyzed in wondrous adulation. Worse, many adopt this mechanical way of thinking about the human brain, and extend this approach, all unnoticed, to the way in which they try to explain human thinking itself—and that means, ultimately, creative thinking. It does not occur to them, when they babble about “Artificial Intelligence” and “thinking computers,” that they reduce the human being, in a simplistic and decorticating way, to a machine—a machine which could not exist at all, but for the inventive mind of the human being.

Accomplishment No. 3: John von Neumann’s Game-Theory Foundation of Economic Theory
The document that forms the basis for this vaunted accomplishment is the book Theory of Games and Economic Behavior, which von Neumann authored with Oskar Morgenstern, and which was published by Princeton University Press in 1944. The foundations of game-theory discussed in the book had been put forward by von Neumann in a 1928 essay, “Zur Theorie der Gesellschaftsspiele” (On the Theory of Social Games), published in the Berlin mathematics journal Mathematische Annalen. His argument elicited little interest at the time; but by 1944, enriched by Oskar Morgenstern with extreme neo-liberal economic dogmas, the old arguments fitted perfectly into the onset of the Cold War.

In this connection, it is worth noting that even the Financial Times in its laudatio to “The Man of the Century” could not avoid reporting von Neumann’s famous saying in 1950, “If you ask why should we not bomb the Russians tomorrow, I say, why don’t we bomb them today?” The Financial Times explains von Neumann’s anti-communism with reference to his experience as a youth at the end of World War I, when von Neumann’s family left Hungary, temporarily, when the Soviet republic came to power.

Von Neumann’s co-author, Oskar Morgenstern, belonged to the same liberal school of economics from which Friedrich von Hayek came, and he had spent the days after World War I in Vienna, where a socialist government had come to power. Morgenstern knew the government’s economic expert, Otto Bauer, from their joint attendance at the Boehm-Bawerk seminar on political economy, and he succeeded, in nightlong meetings, in turning Bauer away from Marxism.

In the 1928 essay, in contrast to the objective-theoretical tone of the 1944 book, von Neumann explained more directly, and with less euphemism, why game-theory is the ideal tool to serve as the foundation of liberal economic dogma. In the earlier work, he says, “And ultimately any event whatsoever, under given external circumstances and given acting persons, . . . can be seen as a social game.” Along the same lines, he also says, “The main problem of classical national economics is: What will the absolutely egoistic ‘homo oeconomicus’ do under given external circumstances?”

Obviously, von Neumann reduces the notion of “classical national economics” to the liberal dogma of a Thomas Hobbes, an Adam Smith, or a Bernard de Mandeville. Mandeville, for example, represented human egoism as the decisive motive force for moral action in his 1723 Fable of the Bees, satirically elaborating how it is that private vices, and not public virtue, promote general well-being.

According to this economic dogma, an effective higher principle (such as the “pursuit of happiness” set forth in the American Declaration of Independence as a bedrock human right which government must protect, or Christian brotherly love), which seeks to maximize the general welfare, is not permitted. Mathematically this means, as von Neumann correctly observes, that the mathematical methods developed for physical problems are of no use in determining an optimum in economic theory. On the other hand, even the available methods of mathematics for calculating probabilities are insufficient for solving this “main problem of classical national economy.” Chance events do happen, but the crucial point is that the persons acting develop strategies, so they do not in general act according to principles of statistical probability; they decide “freely” and “rationally,” as “absolute egoists,” and consider only their personal advantage. The most suitable tool for investigating this situation theoretically is game-theory, von Neumann claims.

An oft-cited example for the application of game-theory is the “prisoners’ dilemma,” which shows, in fact, quite well how the method of game-theory functions and where it fails. The following situation is assumed: Sitting in a prison cell are two people (let’s call them Max and Melvin), against whom the prosecuting attorney cannot prove his accusation of crime. He speaks to each of them individually, and says “So, listen up. If you both plead guilty, you won’t be sentenced to five years as usual, but only four years because of your plea. But don’t believe that you’ll get out of here if you say nothing: I have enough circumstantial and other evidence to put you both behind bars for two years without a guilty plea. But if you cooperate and testify against your buddy, he gets five years and I’ll apply the state’s evidence clause in your case—you go free.”

Once Max and Melvin are both back in the cell, they scratch their heads, and both of them think “rationally” as game-theory defines it, so they both think the same thing.

Let us consider the situation from Melvin’s standpoint:
If I don’t testify, and Max doesn’t testify, I get two years.
If I testify and Max does not testify, I get zero years.
If I don’t testify and Max testifies, I get five years.
If I testify and Max testifies, I get four years.

Regardless of whether Max and Melvin committed the crime or not,2 and regardless of whether Max testifies or not, it pays off for Melvin to testify in any case. If Max does not testify, then Melvin gets zero years instead of the two years he would get if he did not testify himself, and in case Max also testifies, Melvin gets only four years, instead of the five he would get if he remained silent.

The same calculation works for Max too, so both of them will behave “rationally” in the sense of game-theory, “absolutely egoistically,” and both of them will confess to the deed of which the prosecuting attorney accuses them. So each of them will get a sentence of four years. Max is happy, Melvin is happy, and John von Neumann and the prosecuting attorney are happy, too.
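The whole argument can be written out in a few lines (a sketch of my own; the prison terms are those given above, and “testify” means confessing and incriminating the other):

    # Years in prison as a function of (my choice, the other's choice).
    YEARS = {
        ("silent",  "silent"):  2,
        ("testify", "silent"):  0,
        ("silent",  "testify"): 5,
        ("testify", "testify"): 4,
    }

    def best_reply(other_choice):
        # The "rational," absolutely egoistic answer to a fixed choice
        # by the other prisoner: whatever minimizes my own sentence.
        return min(("silent", "testify"),
                   key=lambda mine: YEARS[(mine, other_choice)])

    for other in ("silent", "testify"):
        print("If the other stays", other + ": best reply is", best_reply(other))
    # "testify" is the best reply in both cases, so both prisoners testify
    # and each serves 4 years -- although mutual silence would cost only 2.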

This simple standard example for the application of game-theory ought to convince anyone that the state’s evidence rules are nonsense, and likewise the common practice of plea-bargaining. Second, it shows quite convincingly how drastically the game-theory approach collapses as soon as the participants diverge ever so slightly from the behavioral norm of the “absolute egoist.” If Melvin and Max, for example, did indeed commit the crime, but do not testify against each other out of “honor among thieves,” and instead remain silent, they get only two years. (For those who want to save the belief in game-theory, let it be remarked that this “honor among thieves”—self-determined but not egoistical—represents a real paradox and does not correspond to the behavior described in game-theory with the notion of a “coalition.”) And in more complicated cases—for example, that of the magnificently filmed Agatha Christie classic crime story, Witness for the Prosecution, with Marlene Dietrich and Charles Laughton—or in real life, or in real economics, the game-theory method fails as well.

The point is not the claim that reality is too complicated for game-theory to encompass it. That problem might be solved by working out the mathematics and developing the theory. Game-theory fails systematically because the economic dogma to which it is chained, for better or worse, has a principled fault. It cannot explain how real wealth is created. The market cannot generate real wealth, but only, at best, distribute it. The Mandevillean basic assumption that “private vices” promote the “public good,” in a way which remains a mystery to the participants in the great “social game,” is wrong. Without a horizon of the future which provides a perspective for economic activity, no national economy can last for long. General egoism cannot replace creative innovations, which always also contain changes in the economic and social “rules of the game.”

Wherever this overriding principle makes itself felt in concrete activity, game-theory must systematically fail, as surely as do the prisoners Max and Melvin. Or, inversely expressed, to the degree that game-theory yields correct results, the economy has lost the horizon opening onto the future, without which it cannot exist.

For irresponsible and immoral speculators and frauds, on the other hand, the game-theoretical approach is ideal. The game-theoretical approach to economics functions so well that it even wins Nobel Prizes. And then it is high time to ring the alarm bells!

‘A Thought Problem’
The following quote, from a recent presentation of “applications of game-theory,” characterizes how the “thought problem” of considering “Artificial Intelligence” is connected to game-theory.3 It is said that:

    Wherever the competition of ‘individuals’ for resources is to be investigated, game-theory investigations can be applied. . . . In Artificial Intelligence and in research on “artificial life,” artificial agents are to be so programmed that they “survive,” or are successful, in the (real or simulated) environment. Here, too, competition situations often arise, in which the agents compete with other agents, or with real objects, for resources. After all, in modern computer games the artificial adversary is equipped with strategies which give the impression of a real adversary. The adversary should not be invincible, should operate in real time, and should manifest a pattern of action which is not easily seen through. Good examples are the real-time strategy games which have become popular in recent years (“Dune II,” “Command & Conquer,” “WarCraft”), where two competing “tribes” or “races” establish settlements and bases of operation with limited resources, and pursue the goal of conquering the territory of the adversary.

The educational effect of these “popular” games is foreseeable: a “rational” behavior, in the sense of game-theory, is practiced—“absolute egoism.” Individuals shaped in this way in their development by Artificial Intelligence are then most suited for game-theory investigations of their economic behavior, because they will in all probability behave “rationally.”

So, one thing fits the other. But isn’t something missing? Yes, human freedom. That is what was missing all along, from the von Neumann “pattern card.”

Ralf Schauerhammer is an editor of the German-language science magazine Fusion, a computer specialist, and an organizer with the LaRouche political movement in Germany. He is the co-author of The Holes in the Ozone Scare: The Scientific Evidence That the Sky Isn’t Falling, published by 21st Century. This article was translated from German by George Gregory, and has been adapted here from an Executive Intelligence Review Special Report, Why the ‘New Economy’ Is Doomed.


Notes
1. I want to comment here on Alan Turing’s work, “On Computable Numbers,” written in 1936, which has also been cited in the context of the development of the computer as the design of a “universal machine.” This was a theoretical writing. If I have proven that the three operations, “go one step westwards,” “go one step northwards,” and “go one step southwards,” allow one to reach any point on the surface of the Earth from any other given point, that does not mean that I am the inventor of a universal system of transportation. Similarly, Alan Turing was not the inventor of the computer. Typically, the machines used for deciphering Germany’s Enigma code at the wartime cryptography laboratory at Bletchley Park were not universal calculators, and not even freely programmable.

2. The fact that the truth makes no difference for the game-theory result of this juridical example corresponds to the (wrong) assumption, in economic questions, that economic activity has nothing to do with physical reality. Today’s markets, dominated by financial wheeling and dealing and speculation, operate as if this were so.

3. Tobias Thelen, 1998. Game Theory (Universität Osnabrueck).


The First Programmer Was a Woman


by Ralf Schauerhammer

The first computer program ever written was for Charles Babbage’s “Analytical Engine” in the 1830s. It was developed by the poet Lord Byron’s daughter, Augusta Ada Byron, Countess Lovelace, who collaborated with Babbage for several years. In her published description of Babbage’s computer, she wrote: “It is quite fitting to say that the Analytical Engine weaves algebraic patterns just as Jacquard’s loom weaves leaves and blossoms.” That is accurate, because Babbage recognized that the way Joseph Marie Jacquard used punched cards to control the operations of the loom could be used generally for a “programmed” control of any machine, especially a computer.

Babbage’s computer used punch-cards for three different purposes: First, the “operation cards” stipulated which operations the “mill” (the central processing unit, CPU) was supposed to carry out. These operation cards gave commands, such as whether numbers were to be added, divided, and so on.

Second, there were the “variable cards,” which stipulated from which positions in the “store” (the “random access memory,” or RAM) the values for the operations were to be retrieved, and where the results were to be deposited. These variable (address) cards stipulated, for example, that the operation given on the operation card should be carried out with the values at storage-position 1 and storage-position 2, and that the result should be deposited in storage-position 3.

Let us assume we have the value 1903 at storage-position 1, and the value 1834 at storage-position 2. If the operation card says subtraction is to be carried out, Babbage’s analytical machine can retrieve the value 69 from the storage-position 3 and print it out. Babbage’s computer can thus calculate that the invention of the modern computer occurred precisely 69 years before John von Neumann was born.

A third kind of card, the “number card,” was designed by Babbage as an external storage for the Analytical Engine’s calculated values—for example, for logarithms or for approximations of the number π. These values were punched into the number cards, in order to read them into a computation later. This external storage made it possible to generate tables and calculations in almost unlimited ways.
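To make the interplay of the cards concrete, here is a toy sketch (the card layout and the names are invented for illustration; they are not Babbage’s notation) which reproduces the subtraction example above:

    # The "store": numbered storage positions holding values.
    store = {1: 1903, 2: 1834, 3: 0}

    # One operation card and one variable card, as described above.
    operation_cards = ["SUBTRACT"]
    variable_cards = [(1, 2, 3)]     # operands from positions 1 and 2, result to 3

    OPERATIONS = {
        "ADD":      lambda a, b: a + b,
        "SUBTRACT": lambda a, b: a - b,
        "MULTIPLY": lambda a, b: a * b,
        "DIVIDE":   lambda a, b: a // b,
    }

    # The "mill" works through the cards in order.
    for op_card, (src1, src2, dest) in zip(operation_cards, variable_cards):
        store[dest] = OPERATIONS[op_card](store[src1], store[src2])

    print(store[3])   # 69 -- the value the printer unit would put on paper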

Ada’s Achievements
Countess Lovelace developed concrete examples of how the machine could be used for calculations, in the context of her description of the Analytical Engine—for example, for the calculation of Bernoulli numbers. These example calculations for the Analytical Engine were the first “computer programs.” A century before our own computers, Lovelace completely understood the principles of the programmable computer. Her programs included subroutines, program loops, and conditional jumps. She even recognized that “the mechanism [of the Analytical Engine] could operate with things other than numbers, if their natural relations can be expressed by the abstract science of operations.”
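Lovelace’s Bernoulli-number program was of course written in Babbage’s card notation, not in a modern language; the following is merely a sketch of the same calculation, using the standard recurrence for the Bernoulli numbers:

    from fractions import Fraction
    from math import comb

    def bernoulli(n):
        # First n+1 Bernoulli numbers B_0 .. B_n from the recurrence
        # sum over k of C(m+1, k) * B_k = 0, for m >= 1.
        B = [Fraction(1)]
        for m in range(1, n + 1):
            s = sum(comb(m + 1, k) * B[k] for k in range(m))
            B.append(-s / (m + 1))
        return B

    print([str(b) for b in bernoulli(8)])
    # ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42', '0', '-1/30']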

The concept of Boolean algebra—that is, the basis for logical calculations—was published in 1854, two years after Lovelace’s death.1 It was almost 100 years before Konrad Zuse took the last step toward building the modern computer, by using the CPU of his computer for the processing of numbers and logical variables together.

Augusta Ada Byron also reflected on the possibilities, in principle, of future computers, and, even at that time, she rejected the idea of “artificial intelligence.” She emphasized that machines can never have free will: “The Analytical Engine has no desire to produce anything. It can do everything we know how to order it to do.”

This remarkable woman died in 1852, at the early age of 36, and her achievements were then forgotten. The programming language Ada is named in her memory.


Note
1. So called after George Boole, the English mathematician and logician whose book, An Investigation of the Laws of Thought, was published in 1854.
