

History of Computers (Generation of Computers)

Generation of Computers

1. First Generation (1939-1954) - vacuum tube

Modern vacuum tubes, mostly miniature style
In electronics, a vacuum tube, electron tube (in North America), or thermionic valve (in British English) is a device that controls or regulates the flow of electric current through a vacuum. Vacuum tubes may be used for rectification, amplification, switching, or similar processing or creation of electrical signals. Vacuum tubes rely on thermionic emission of electrons from a hot filament or hot cathode. Electrons travel to the anode (or plate) when it is at a positive voltage with respect to the cathode. Additional electrodes interposed between the cathode and anode can regulate the current, giving the tube the ability to amplify or switch.

Vacuum tubes were critical to the development of electronic technology, which drove the expansion and commercialization of radio communication and broadcasting, television, radar, sound reinforcement, sound recording and reproduction, large telephone networks, analog and digital computers, and industrial process control. Although some of these applications had counterparts using earlier technologies, such as the spark gap transmitter or mechanical computers, it was the invention of the triode vacuum tube and its capability of electronic amplification that made these technologies widespread and practical.

In most applications, vacuum tubes have been replaced by solid-state devices such as transistors and other semiconductor devices. Solid-state devices last much longer, and are smaller, more efficient, more reliable, and cheaper than equivalent vacuum tube devices. However, tubes still find particular uses where solid-state devices have not been developed or are not practical, or where the tube device is regarded as having superior performance over the solid-state equivalent, as can be the case with some devices used in professional audio. Tubes are still produced for such applications and to replace those used in existing equipment such as high-power radio transmitters.


Vacuum tube triode: voltage applied to the grid controls plate (anode) current.
Vacuum tube diode: electrons from the hot cathode flow towards positive anode, but not vice versa.

Classifications

One classification of vacuum tubes is by the number of active electrodes, neglecting the filament or heater in devices with indirectly heated cathodes (where the heater is electrically separate from the cathode). A device with two active elements is a diode, usually used for rectification. Devices with three elements are triodes, used for amplification and switching. Additional electrodes create tetrodes, pentodes, and so forth, which have additional functions made possible by the extra controllable electrodes.

Other classifications are:
by frequency range (audio, radio, VHF, UHF, microwave),
by power rating (small-signal, audio power, high-power radio transmitting),
by design (e.g., sharp- versus remote-cutoff in some pentodes),
by application (receiving tubes, transmitting tubes, amplifying or switching, rectification, mixing),
by special qualities (long life, very low microphonics, low-noise audio amplification, and so on).

Multiple classifications may apply to a device; for example, similar dual triodes can be used for audio preamplification and as flip-flops in computers, although linearity is important in the former case and long life in the latter.

Other vacuum tubes have different construction and different functions, such as cathode ray tubes which create a beam of electrons for display purposes (such as the television picture tube) in addition to more specialized functions such as electron microscopy and electron beam lithography. X-ray tubes are also vacuum tubes. Phototubes and photomultipliers also rely on electron flow through a vacuum, though in this case the emission of electrons from the cathode depends on energy from photons rather than thermionic emission. Since these sorts of "vacuum tubes" have functions other than electronic amplification and rectification they are described in their own articles.

Gas-filled tubes

There are also varieties of current-conducting tubes filled with one or another gas at a higher or lower pressure; the common fluorescent bulb is a familiar example. Such discharge tubes and cold cathode tubes are not hard vacuum tubes, though they are always filled with gas at less than sea-level atmospheric pressure. However, certain types such as the voltage-regulator tube and thyratron physically resemble commercial vacuum tubes and fit in sockets designed for vacuum tubes. Their distinctive orange, red, or purple glow during operation indicates the presence of gas; electrons flowing in a vacuum do not produce light within that region. These types may still be referred to as "electron tubes" as they do perform electronic functions, and are briefly discussed below under "Special-purpose tubes."


Description

A vacuum tube consists of two or more electrodes in a vacuum inside an airtight enclosure. Most tubes have glass envelopes, though ceramic and metal envelopes (atop insulating bases) have also been used. The electrodes are attached to leads which pass through the envelope via an airtight seal. On most tubes, the leads, in the form of pins, plug into a tube socket for easy replacement of the tube (tubes were by far the most common cause of failure in electronic equipment, and consumers were expected to be able to replace tubes themselves). Some tubes had an electrode terminating at a top cap which reduced interelectrode capacitance to improve high-frequency performance, kept a possibly very high plate voltage away from lower voltages, and could accommodate one more electrode than allowed by the base.


The earliest vacuum tubes evolved from incandescent light bulbs, containing a filament sealed in an evacuated glass envelope. When hot, the filament releases electrons into the vacuum, a process called thermionic emission. A second electrode, the anode or plate, will attract those electrons if it is at a more positive voltage. The result is a net flow of electrons from the filament to the plate. However, current cannot flow in the reverse direction because the plate is not heated and does not emit electrons. The filament (cathode) has a dual function: it emits electrons when heated; and, together with the plate, it creates an electric field due to the potential difference between them. Such a tube with only two electrodes is termed a diode, and is used for rectification. Since current can only pass in one direction, such a diode (or rectifier) will convert AC to pulsating DC. This can therefore be used in a DC power supply, and is also used as a demodulator of amplitude modulated (AM) radio signals and similar functions.
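
To make the rectification idea concrete, here is a minimal sketch (not from the text above, with made-up sample values) of an ideal one-way valve applied to a sine-wave input, leaving only the positive half-cycles — exactly the "pulsating DC" described.

    import math

    def half_wave_rectify(samples):
        # Ideal diode model: conducts only while the input voltage is positive.
        return [v if v > 0 else 0.0 for v in samples]

    # Illustrative 50 Hz sine-wave input, sampled once per millisecond over one cycle.
    ac_input = [math.sin(2 * math.pi * 50 * (t / 1000)) for t in range(20)]
    pulsating_dc = half_wave_rectify(ac_input)
    print(pulsating_dc)  # negative half-cycles are clipped to zero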

While early tubes used the directly heated filament as the cathode, most (but not all) more modern tubes employed indirect heating. A separate element was used for the cathode. Inside the cathode, and electrically insulated from it, was the filament or heater. Thus the heater did not function as an electrode, but simply served to heat the cathode sufficiently for it to emit electrons by thermionic emission. This allowed all the tubes to be heated through a common circuit (which can as well be AC) while allowing each cathode to arrive at a voltage independently of the others, removing an unwelcome constraint on circuit design.

During operation, vacuum tubes require constant heating of the filament thus requiring considerable power even when amplifying signals at the microwatt level. In most amplifiers further power is consumed due to the quiescent current between the cathode and the anode (plate), resulting in heating of the plate. In a power amplifier, heating of the plate can be quite considerable; the tube can be destroyed if driven beyond its safe limits. Since the tube requires a vacuum to operate, convection cooling of the plate is not generally possible (except in special applications where the anode forms a part of the vacuum envelope; this is generally avoided due to the shock hazard from the anode voltage). Thus anode cooling occurs mainly through black-body radiation.


Except for diodes, additional electrodes are positioned between the cathode and the plate (anode). These electrodes are referred to as grids as they are not solid electrodes but sparse elements through which electrons can pass on their way to the plate. The vacuum tube is then known as a triode, tetrode, pentode, etc., depending on the number of grids: a triode has three electrodes (the anode, cathode, and one grid), a tetrode has four, and so on. The first grid, known as the control grid (and sometimes other grids), transforms the diode into a voltage-controlled device: the voltage applied to the control grid affects the current flow between the cathode and the plate. When held negative with respect to the cathode, the control grid creates an electric field which repels electrons emitted by the cathode, thus reducing or even stopping the current flow between cathode and anode. As long as the control grid is negative relative to the cathode, essentially no current flows into it, yet a change of several volts on the control grid is sufficient to make a large difference in the plate current, possibly changing the output by hundreds of volts (depending on the circuit). The solid-state device which operates most like the pentode tube is the junction field-effect transistor (JFET), although vacuum tubes typically operate at over a hundred volts, unlike most semiconductors in most applications.
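
A common textbook approximation of this grid-controlled behavior is the three-halves-power law, in which plate current depends on the grid voltage plus the plate voltage divided by the amplification factor mu. The sketch below uses that approximation with invented constants purely to show how a few volts of grid swing move the plate current; it is not data for any real tube.

    def plate_current(v_grid, v_plate, mu=20.0, k=2e-3):
        # Approximate plate current (amps) from the 3/2-power law; mu and k are
        # illustrative constants, not ratings of an actual device.
        drive = v_grid + v_plate / mu
        return k * drive ** 1.5 if drive > 0 else 0.0  # tube is cut off when drive <= 0

    # A swing of a few volts on the grid changes the plate current substantially.
    for v_g in (-8.0, -4.0, -2.0, 0.0):
        print(f"grid {v_g:+.0f} V -> plate current {plate_current(v_g, 250.0) * 1000:.1f} mA")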

------------------------------------------------------
2. Second Generation Computers (1954-1959) - transistor 
A transistor is a semiconductor device used to amplify and switch electronic signals and power. It is composed of a semiconductor material with at least three terminals for connection to an external circuit. A voltage or current applied to one pair of the transistor's terminals changes the current flowing through another pair of terminals. Because the controlled (output) power can be much greater than the controlling (input) power, a transistor can amplify a signal. Today, some transistors are packaged individually, but many more are found embedded in integrated circuits.
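
As a back-of-the-envelope illustration of that input/output relationship (values and the gain figure are assumptions, not from the text), a simple bipolar-transistor model multiplies a small base current by the current gain beta to get a much larger collector current:

    def collector_current(base_current, beta=100):
        # Idealized bipolar transistor: output (collector) current equals the input
        # (base) current multiplied by the current gain beta; beta = 100 is assumed.
        return beta * base_current

    i_b = 20e-6                   # 20 microamps of controlling (input) current
    i_c = collector_current(i_b)  # 2 milliamps of controlled (output) current
    print(f"current gain = {i_c / i_b:.0f}x")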

The transistor is the fundamental building block of modern electronic devices, and is ubiquitous in modern electronic systems. Following its development in the early 1950s, the transistor revolutionized the field of electronics and paved the way for smaller and cheaper radios, calculators, and computers, among other things.

History


 The thermionic triode, a vacuum tube invented in 1907, propelled the electronics age forward, enabling amplified radio technology and long-distance telephony. The triode, however, was a fragile device that consumed a lot of power. Physicist Julius Edgar Lilienfeld filed a patent for a field-effect transistor (FET) in Canada in 1925, which was intended to be a solid-state replacement for the triode.[1][2] Lilienfeld also filed identical patents in the United States in 1926[3] and 1928.[4][5] However, Lilienfeld did not publish any research articles about his devices nor did his patents cite any specific examples of a working prototype. Since the production of high-quality semiconductor materials was still decades away, Lilienfeld's solid-state amplifier ideas would not have found practical use in the 1920s and 1930s, even if such a device were built.[6] In 1934, German inventor Oskar Heil patented a similar device.

From November 17, 1947 to December 23, 1947, John Bardeen and Walter Brattain at AT&T's Bell Labs in the United States performed experiments and observed that when two gold point contacts were applied to a crystal of germanium, a signal was produced with the output power greater than the input.[8] Solid State Physics Group leader William Shockley saw the potential in this, and over the next few months worked to greatly expand the knowledge of semiconductors. The term transistor was coined by John R. Pierce as a portmanteau of the term "transfer resistor". According to Lillian Hoddeson and Vicki Daitch, authors of a recent biography of John Bardeen, Shockley had proposed that Bell Labs' first patent for a transistor should be based on the field-effect and that he be named as the inventor. Having unearthed Lilienfeld's patents, which had gone into obscurity years earlier, lawyers at Bell Labs advised against Shockley's proposal, since the idea of a field-effect transistor which used an electric field as a "grid" was not new. Instead, what Bardeen, Brattain, and Shockley invented in 1947 was the first bipolar point-contact transistor. In acknowledgement of this accomplishment, Shockley, Bardeen, and Brattain were jointly awarded the 1956 Nobel Prize in Physics "for their researches on semiconductors and their discovery of the transistor effect."

In 1948, the point-contact transistor was independently invented by German physicists Herbert Mataré and Heinrich Welker while working at the Compagnie des Freins et Signaux, a Westinghouse subsidiary located in Paris. Mataré had previous experience in developing crystal rectifiers from silicon and germanium in the German radar effort during World War II. Using this knowledge, he began researching the phenomenon of "interference" in 1947. Observing currents flowing through point contacts, similar to what Bardeen and Brattain had accomplished earlier in December 1947, Mataré was able, by June 1948, to produce consistent results using samples of germanium produced by Welker. Realizing that Bell Labs' scientists had already invented the transistor before them, the company rushed to get its "transistron" into production for amplified use in France's telephone network.[12]

The first silicon transistor was produced by Texas Instruments in 1954. This was the work of Gordon Teal, an expert in growing crystals of high purity, who had previously worked at Bell Labs.[14] The first MOS transistor actually built was by Kahng and Atalla at Bell Labs in 1960.

Importance

The transistor is the key active component in practically all modern electronics. Many consider it to be one of the greatest inventions of the 20th century. Its importance in today's society rests on its ability to be mass produced using a highly automated process (semiconductor device fabrication) that achieves astonishingly low per-transistor costs. The invention of the first transistor at Bell Labs was named an IEEE Milestone in 2009.

Although several companies each produce over a billion individually packaged (known as discrete) transistors every year, the vast majority of transistors now are produced in integrated circuits (often shortened to IC, microchips or simply chips), along with diodes, resistors, capacitors and other electronic components, to produce complete electronic circuits. A logic gate consists of up to about twenty transistors whereas an advanced microprocessor, as of 2011, can use as many as 3 billion transistors (MOSFETs). "About 60 million transistors were built in 2002 ... for [each] man, woman, and child on Earth."

The transistor's low cost, flexibility, and reliability have made it a ubiquitous device. Transistorized mechatronic circuits have replaced electromechanical devices in controlling appliances and machinery. It is often easier and cheaper to use a standard microcontroller and write a computer program to carry out a control function than to design an equivalent mechanical control function.
------------------------------------------------------
3. Third Generation Computers (1959-1971) - IC 

An integrated circuit or monolithic integrated circuit (also referred to as IC, chip, or microchip) is an electronic circuit manufactured by the patterned diffusion of trace elements into the surface of a thin substrate of semiconductor material. Additional materials are deposited and patterned to form interconnections between semiconductor devices.

Integrated circuits are used in virtually all electronic equipment today and have revolutionized the world of electronics. Computers, mobile phones, and other digital appliances are now inextricable parts of the structure of modern societies, made possible by the low cost of production of integrated circuits.

Introduction

Synthetic detail of an integrated circuit through four layers of planarized copper interconnect, down to the polysilicon (pink), wells (greyish), and substrate (green)

ICs were made possible by experimental discoveries showing that semiconductor devices could perform the functions of vacuum tubes and by mid-20th-century technology advancements in semiconductor device fabrication. The integration of large numbers of tiny transistors into a small chip was an enormous improvement over the manual assembly of circuits using discrete electronic components. The integrated circuit's mass production capability, reliability, and building-block approach to circuit design ensured the rapid adoption of standardized ICs in place of designs using discrete transistors.

There are two main advantages of ICs over discrete circuits: cost and performance. Cost is low because the chips, with all their components, are printed as a unit by photolithography rather than being constructed one transistor at a time. Furthermore, much less material is used to construct a packaged IC die than to construct a discrete circuit. Performance is high because the components switch quickly and consume little power (compared to their discrete counterparts) as a result of the small size and close proximity of the components. As of 2006, typical chip areas range from a few square millimeters to around 350 mm², with up to 1 million transistors per mm².

Terminology

Integrated circuit originally referred to a miniaturized electronic circuit consisting of semiconductor devices, as well as passive components bonded to a substrate or circuit board. This configuration is now commonly referred to as a hybrid integrated circuit. Integrated circuit has since come to refer to the single-piece circuit construction originally known as a monolithic integrated circuit.
------------------------------------------------------
4. Fourth Generation (1971-1990) - microprocessor 

Intel 4004, the first general-purpose commercial microprocessor
A microprocessor incorporates the functions of a computer's central processing unit (CPU) on a single integrated circuit (IC), or at most a few integrated circuits. It is a multipurpose, programmable device that accepts digital data as input, processes it according to instructions stored in its memory, and provides results as output. It is an example of sequential digital logic, as it has internal memory. Microprocessors operate on numbers and symbols represented in the binary numeral system.
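
To make the stored-program idea concrete, here is a deliberately tiny fetch-decode-execute loop. The instruction names and the sample program are invented for illustration; a real microprocessor works on binary opcodes, but the control flow is the same in spirit.

    def run(program):
        # Toy stored-program processor: fetch an instruction, decode it, execute it,
        # and repeat until HALT. acc is the accumulator, pc the program counter.
        acc, pc = 0, 0
        while True:
            op, arg = program[pc]   # fetch
            pc += 1
            if op == "LOAD":        # decode + execute
                acc = arg
            elif op == "ADD":
                acc += arg
            elif op == "HALT":
                return acc

    # Illustrative program: compute 2 + 3.
    print(run([("LOAD", 2), ("ADD", 3), ("HALT", None)]))  # prints 5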

The advent of low-cost computers on integrated circuits has transformed modern society. General-purpose microprocessors in personal computers are used for computation, text editing, multimedia display, and communication over the Internet. Many more microprocessors are part of embedded systems, providing digital control of a myriad of objects from appliances to automobiles to cellular phones and industrial process control.


Origins

During the 1960s, computer processors were constructed out of small and medium-scale ICs each containing from tens to a few hundred transistors. For each computer built, all of these had to be placed and soldered onto printed circuit boards, and often multiple boards would have to be interconnected in a chassis. The large number of discrete logic gates used more electrical power—and therefore, produced more heat—than a more integrated design with fewer ICs. The distance that signals had to travel between ICs on the boards limited the speed at which a computer could operate.

In the NASA Apollo space missions to the Moon in the 1960s and 1970s, all onboard computations for primary guidance, navigation, and control were provided by a small custom processor called the Apollo Guidance Computer. Its logic was built entirely from a single type of element, the three-input NOR gate.
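
A computer could be assembled from that single element because NOR is functionally complete: NOT, OR, and AND can all be built from it. The sketch below (two-input gates for brevity) verifies this over all input combinations.

    def nor(a, b):
        # The AGC used three-input NORs; two inputs are enough to show the idea.
        return not (a or b)

    def not_(a):
        return nor(a, a)              # NOT built from NOR

    def or_(a, b):
        return not_(nor(a, b))        # OR built from NOR

    def and_(a, b):
        return nor(not_(a), not_(b))  # AND built from NOR

    for a in (False, True):
        for b in (False, True):
            assert or_(a, b) == (a or b)
            assert and_(a, b) == (a and b)
    print("NOT, OR, and AND all reconstructed from NOR alone")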

The integration of a whole CPU onto a single chip or on a few chips greatly reduced the cost of processing power. The integrated circuit processor was produced in large numbers by highly automated processes, so unit cost was low. Single-chip processors also increased reliability, as there were many fewer electrical connections to fail. As microprocessor designs get faster, the cost of manufacturing a chip (with smaller components built on a semiconductor chip the same size) generally stays the same.

Microprocessors integrated into one or a few large-scale ICs the architectures that had previously been implemented using many medium- and small-scale integrated circuits. Continued increases in microprocessor capacity have rendered other forms of computers almost completely obsolete (see history of computing hardware), with one or more microprocessors used in everything from the smallest embedded systems and handheld devices to the largest mainframes and supercomputers.

The first microprocessors emerged in the early 1970s and were used for electronic calculators, using binary-coded decimal (BCD) arithmetic on 4-bit words. Other embedded uses of 4-bit and 8-bit microprocessors, such as terminals, printers, various kinds of automation etc., followed soon after. Affordable 8-bit microprocessors with 16-bit addressing also led to the first general-purpose microcomputers from the mid-1970s on.

Since the early 1970s, the increase in capacity of microprocessors has followed Moore's law; this originally suggested that the number of transistors that can be fitted onto a chip doubles every year, though Moore later refined the period to two years.
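-
Under the two-year-doubling reading of Moore's law, the transistor count grows as count = start × 2^(years / 2). The short calculation below uses a starting point of roughly 2,300 transistors (approximately the Intel 4004 of 1971) purely to show the shape of the growth; the projected figures are illustrative, not historical data.

    def projected_transistors(start_count, years, doubling_period=2):
        # Transistor count after `years`, doubling every `doubling_period` years.
        return start_count * 2 ** (years / doubling_period)

    # Starting from roughly 2,300 transistors (about the Intel 4004, 1971):
    for years in (0, 10, 20, 30):
        print(1971 + years, int(projected_transistors(2300, years)))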

------------------------------------------------------
5. Fifth Generation (from 1991)

The Fifth Generation Computer Systems project (FGCS) was an initiative by Japan's Ministry of International Trade and Industry, begun in 1982, to create a "fifth generation computer" (see History of computing hardware) which was to perform much of its computation using massively parallel processing. It was to be the end result of a massive government/industry research project in Japan during the 1980s. It aimed to create an "epoch-making computer" with supercomputer-like performance and to provide a platform for future developments in artificial intelligence.

The term fifth generation was intended to convey the system as being a leap beyond existing machines. Computers using vacuum tubes were called the first generation; transistors and diodes, the second; integrated circuits, the third; and those using microprocessors, the fourth. Whereas previous computer generations had focused on increasing the number of logic elements in a single CPU, the fifth generation, it was widely believed at the time, would instead turn to massive numbers of CPUs for added performance. The project was to create the computer over a ten-year period, after which it was considered ended, and investment in a new Sixth Generation project began. Opinions about its outcome are divided: either it was a failure, or it was ahead of its time.

History

In the late 1960s and early '70s, there was much talk about "generations" of computer hardware — usually "three generations".
First generation: Vacuum tubes. Mid-1940s. IBM pioneered the arrangement of vacuum tubes in pluggable modules. The IBM 650 was a first-generation computer.
Second generation: Transistors. 1956. The era of miniaturization begins. Transistors are much smaller than vacuum tubes, draw less power, and generate less heat. Discrete transistors are soldered to circuit boards, with interconnections accomplished by stencil-screened conductive patterns on the reverse side. The IBM 7090 was a second-generation computer.
Third generation: Integrated circuits (silicon chips containing multiple transistors). 1964. A pioneering example is the ACPX module used in the IBM 360/91, which, by stacking layers of silicon over a ceramic substrate, accommodated over 20 transistors per chip; the chips could be packed together onto a circuit board to achieve unheard-of logic densities. The IBM 360/91 was a hybrid second- and third-generation computer.

Omitted from this taxonomy is the "zeroth-generation" computer based on metal gears (such as the IBM 407) or mechanical relays (such as the Mark I), and the post-third-generation computers based on Very Large Scale Integrated (VLSI) circuits.

There was also a parallel set of generations for software:
First generation: Machine language.
Second generation: Assembly language.
Third generation: Structured programming languages such as C, COBOL and FORTRAN.
Fourth generation: Domain-specific languages such as SQL (for database access) and TeX (for text formatting); the sketch after this list contrasts the third- and fourth-generation styles.
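
The contrast can be seen in a single task: below, the same "total per customer" question is answered once with explicit loops (third-generation style, shown here in Python) and once declaratively in SQL (fourth-generation style). The table and data are invented for illustration.

    import sqlite3

    orders = [("alice", 30), ("bob", 20), ("alice", 25)]

    # Third-generation style: spell out, step by step, HOW to compute the totals.
    totals = {}
    for customer, amount in orders:
        totals[customer] = totals.get(customer, 0) + amount
    print(totals)

    # Fourth-generation style (SQL): declare WHAT result is wanted and let the
    # database engine decide how to compute it.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE orders (customer TEXT, amount INTEGER)")
    db.executemany("INSERT INTO orders VALUES (?, ?)", orders)
    print(db.execute("SELECT customer, SUM(amount) FROM orders "
                     "GROUP BY customer").fetchall())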

Background and design philosophy

Throughout these multiple generations up to the 1990s, Japan had largely been a follower in the computing arena, building computers following U.S. and British leads. The Ministry of International Trade and Industry (MITI) decided to attempt to break out of this follow-the-leader pattern, and in the mid-1970s started looking, on a small scale, into the future of computing. They asked the Japan Information Processing Development Center (JIPDEC) to indicate a number of future directions, and in 1979 offered a three-year contract to carry out more in-depth studies along with industry and academia. It was during this period that the term "fifth-generation computer" started to be used.

Prior to the 1970s, MITI guidance had successes such as an improved steel industry, the creation of the oil supertanker, the automotive industry, consumer electronics, and computer memory. MITI decided that the future was going to be information technology. However, the Japanese language, in both written and spoken form, presented and still presents major obstacles for computers. These hurdles could not be taken lightly, so MITI held a conference and invited people from around the world to help address them.

The primary fields for investigation from this initial project were:
Inference computer technologies for knowledge processing
Computer technologies to process large-scale databases and knowledge bases
High-performance workstations
Distributed functional computer technologies
Supercomputers for scientific calculation

The project imagined a parallel processing computer running on top of massive databases (as opposed to a traditional filesystem) using a logic programming language to define and access the data. They envisioned building a prototype machine with performance between 100M and 1G LIPS, where a LIPS is a Logical Inference Per Second. At the time, typical workstation machines were capable of about 100k LIPS. They proposed to build this machine over a ten-year period: 3 years for initial R&D, 4 years for building various subsystems, and a final 3 years to complete a working prototype system. In 1982, the government decided to go ahead with the project, and established the Institute for New Generation Computer Technology (ICOT) through joint investment with various Japanese computer companies.
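
A LIPS is simply one logical inference carried out per second, so the stated targets amount to a 1,000x to 10,000x jump over the roughly 100k LIPS workstations of the day. The toy forward-chaining loop below, with invented facts and a single invented rule, shows what "counting inferences" means; it makes no claim about ICOT's actual machines or languages.

    import time

    facts = {("parent", "ada", "bea"), ("parent", "bea", "cai")}
    # Single illustrative rule: parent(X, Y) and parent(Y, Z) => grandparent(X, Z)

    def forward_chain(facts):
        # Naive forward chaining; returns the derived facts plus how many
        # rule applications (logical inferences) were attempted.
        derived, inferences = set(facts), 0
        changed = True
        while changed:
            changed = False
            parents = [f for f in derived if f[0] == "parent"]
            for (_, x, y) in parents:
                for (_, y2, z) in parents:
                    inferences += 1
                    if y == y2 and ("grandparent", x, z) not in derived:
                        derived.add(("grandparent", x, z))
                        changed = True
        return derived, inferences

    start = time.perf_counter()
    result, n = forward_chain(facts)
    rate = n / (time.perf_counter() - start)
    print(result)
    print(f"about {rate:,.0f} inferences per second on this toy example")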

Implementation

So ingrained was the belief that parallel computing was the future of all performance gains that the Fifth-Generation project generated a great deal of apprehension in the computer field. After having seen the Japanese take over the consumer electronics field during the 1970s and apparently do the same in the automotive world during the 1980s, the Japanese had, by the 1980s, a reputation for invincibility. Soon parallel projects were set up in the US as the Strategic Computing Initiative and the Microelectronics and Computer Technology Corporation (MCC), in the UK as Alvey, and in Europe as the European Strategic Program on Research in Information Technology (ESPRIT), as well as ECRC (European Computer Research Centre) in Munich, a collaboration between ICL in Britain, Bull in France, and Siemens in Germany.

Five running Parallel Inference Machines (PIM) were eventually produced: PIM/m, PIM/p, PIM/i, PIM/k, PIM/c. The project also produced applications to run on these systems, such as the parallel database management system Kappa, the legal reasoning system HELIC-II, and the automated theorem prover MGTP, as well as applications to bioinformatics.

Failure

The FGCS Project did not meet with commercial success for reasons similar to the Lisp machine companies and Thinking Machines. The highly parallel computer architecture was eventually surpassed in speed by less specialized hardware (for example, Sun workstations and Intel x86 machines). The project did produce a new generation of promising Japanese researchers. But after the FGCS Project, MITI stopped funding large-scale computer research projects, and the research momentum developed by the FGCS Project dissipated. However MITI/ICOT embarked on a Sixth Generation Project in the 1990s.

A primary problem was the choice of concurrent logic programming as the bridge between the parallel computer architecture and the use of logic as a knowledge representation and problem solving language for AI applications. This never happened cleanly; a number of languages were developed, all with their own limitations. In particular, the committed choice feature of concurrent constraint logic programming interfered with the logical semantics of the languages.

Another problem was that existing CPU performance quickly pushed through the "obvious" barriers that experts perceived in the 1980s, and the value of parallel computing quickly dropped to the point where it was for some time used only in niche situations. Although a number of workstations of increasing capacity were designed and built over the project's lifespan, they generally found themselves soon outperformed by "off the shelf" units available commercially.

The project also suffered from being on the wrong side of the technology curve. During its lifespan, GUIs became mainstream in computers; the internet enabled locally stored databases to become distributed; and even simple research projects provided better real-world results in data mining. Moreover, the project found that the promises of logic programming were largely negated by the use of committed choice.

At the end of the ten year period the project had spent over ¥50 billion (about US$400 million at 1992 exchange rates) and was terminated without having met its goals. The workstations had no appeal in a market where general purpose systems could now take over their job and even outrun them. This is parallel to the Lisp machine market, where rule-based systems such as CLIPS could run on general-purpose computers, making expensive Lisp machines unnecessary.

Although the project can be considered a failure, many of the approaches envisioned in the Fifth-Generation project, such as logic programming distributed over massive knowledge bases, are now being re-interpreted in current technologies. The Web Ontology Language (OWL) employs several layers of logic-based knowledge representation systems, while many flavors of parallel computing proliferate, including multi-core architectures at the low end and massively parallel processing at the high end.
------------------------------------------------------
1939 – Dr. John Vincent Atanasoff produced the first prototype electronic computer.

 --------------------------------------------------------------------------------------
1944 – Aiken built the Mark I – the first automatic, sequence-controlled calculator, used by the military to compute ballistic data.

1947 – Mauchly and Eckert built ENIAC – the 2nd electronic digital computer.

-------------------------------------------------------------------------------------
1949 – Mauchly, Eckert, and von Neumann built EDVAC – the 1st stored-program computer.

-------------------------------------------------------------------------------------
1950 – The Pilot ACE, based on Turing's design, was completed – an early programmable stored-program computer.


-------------------------------------------------------------------------------------
1951 – Mauchly and Eckert built UNIVAC I – the 1st commercially sold computer.


The UNIVAC I (UNIVersal Automatic Computer I) was the first commercial computer produced in the United States.[1] It was designed principally by J. Presper Eckert and John Mauchly, the inventors of the ENIAC. Design work was begun by their company, Eckert–Mauchly Computer Corporation, and was completed after the company had been acquired by Remington Rand (which later became part of Sperry, now Unisys). In the years before successor models of the UNIVAC I appeared, the machine was simply known as "the UNIVAC".

The first UNIVAC was delivered to the United States Census Bureau on March 31, 1951, and was dedicated on June 14 that year.[2] The fifth machine (built for the U.S. Atomic Energy Commission) was used by CBS to predict the result of the 1952 presidential election. With a sample of just 1% of the voting population it correctly predicted that Dwight D. Eisenhower would win.

History

Market positioning

As well as being the first American commercial computer, the UNIVAC I was the first American computer designed at the outset for business and administrative use (i.e., for the fast execution of large numbers of relatively simple arithmetic and data transport operations, as opposed to the complex numerical calculations required by scientific computers). As such, the UNIVAC competed directly against punch-card machines (mainly made by IBM), but oddly enough the UNIVAC originally had no means of either reading or punching cards (which initially hindered sales to some companies with large quantities of data on cards, due to potential manual conversion costs). This was corrected by adding offline card processing equipment, the UNIVAC Card to Tape converter and the UNIVAC Tape to Card converter, to transfer data between cards and UNIVAC magnetic tapes. However, the early market share of the UNIVAC I was lower than Remington Rand wished. In an effort to increase market share, the company joined with CBS to have UNIVAC I predict the result of the 1952 Presidential election. UNIVAC I predicted Eisenhower would have a landslide victory over Adlai Stevenson, whom the pollsters favored. The result for UNIVAC I was greater public awareness of computing technology.[3]

Installations

Remington Rand employees, Harold E. Sweeney (left) and J. Presper Eckert (center) demonstrate the U.S. Census Bureau's UNIVAC for CBS reporter Walter Cronkite (right).

The first contracts were with government agencies such as the Census Bureau, the U.S. Air Force, and the U.S. Army Map Service. Contracts were also signed by the ACNielsen Company, and the Prudential Insurance Company. Following the sale of Eckert–Mauchly Computer Corporation to Remington Rand, due to the cost overruns on the project, Remington Rand convinced Nielsen and Prudential to cancel their contracts.

The first sale, to the Census Bureau, was marked with a formal ceremony on March 31, 1951, at the Eckert–Mauchly Division's factory at 3747 Ridge Avenue, Philadelphia. The machine was not actually shipped until the following December, because, as the sole fully set-up model, it was needed for demonstration purposes, and the company was apprehensive about the difficulties of dismantling, transporting, and reassembling the delicate machine.[4] As a result, the first installation was with the second computer, delivered to the Pentagon in June 1952.
-------------------------------------------------------------------------------------