Late 1950s - Early 1960s: Second Generation Computers

An important change in the development of computers came with the invention of the transistor in 1947. The transistor replaced the large, unwieldy vacuum tube and thereby allowed electronic machinery to shrink considerably in size. The transistor was first applied to a computer in 1956. Combined with advances in magnetic-core memory, the use of transistors resulted in computers that were smaller, faster, more reliable and more energy-efficient than their predecessors.

Stretch by IBM and LARC by Sperry-Rand (1959) were the first large-scale machines to take advantage of transistor technology; they also used assembly language instead of the more difficult machine language. Both machines, developed for atomic energy laboratories, could handle enormous amounts of data, but they were costly and too powerful for the business sector's needs. As a result, only two LARCs were ever installed.

Throughout the early 1960s a number of commercially successful computers (for example the IBM 1401) were used in business, universities, and government, and by 1965 most large firms routinely processed financial information with computers. Decisive for the success of computers in business were the stored-program concept and the development of sophisticated high-level programming languages such as FORTRAN (Formula Translator, 1956) and COBOL (Common Business-Oriented Language, 1960), which gave computers the flexibility to be cost-effective and productive. The second generation of computers also marked the beginning of an entire industry, the software industry, and the emergence of a wide range of new careers.

Artificial intelligence approaches

Looking for ways to create intelligent machines, the field of artificial intelligence (AI) has split into several different approaches based on differing opinions about which methods and theories are most promising. The two basic AI approaches are bottom-up and top-down. The bottom-up approach suggests that the best way to achieve artificial intelligence is to build electronic replicas of the human brain's complex network of neurons (through neural networks and parallel computing), while the top-down approach attempts to mimic the brain's behavior with computer programs (for example, expert systems).
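
The contrast between the two approaches can be illustrated with a small sketch. The Python code below is a hypothetical, minimal example and is not drawn from the text above: a single artificial neuron learns a decision from examples (bottom-up), while a hand-written rule encodes the same decision explicitly, in the spirit of an expert system's if-then knowledge base (top-down). All function names and data in it are illustrative assumptions.

    # Hypothetical sketch contrasting the two AI approaches described above.
    # All names and data are illustrative, not taken from the original text.

    # Bottom-up: a single artificial neuron (perceptron) that learns a
    # decision from examples rather than from explicit rules.
    def train_perceptron(samples, epochs=20, lr=0.1):
        """samples: list of ((x1, x2), label) pairs with label 0 or 1."""
        w1 = w2 = bias = 0.0
        for _ in range(epochs):
            for (x1, x2), label in samples:
                prediction = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
                error = label - prediction
                # Adjust the weights in the direction that reduces the error.
                w1 += lr * error * x1
                w2 += lr * error * x2
                bias += lr * error
        return w1, w2, bias

    # Top-down: the same decision written as an explicit, human-authored rule.
    def rule_based_and(x1, x2):
        if x1 == 1 and x2 == 1:
            return 1
        return 0

    if __name__ == "__main__":
        # Logical AND as a toy problem: the neuron learns it, the rule states it.
        data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
        w1, w2, b = train_perceptron(data)
        for (x1, x2), label in data:
            learned = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
            print((x1, x2), "learned:", learned, "rule:", rule_based_and(x1, x2), "target:", label)

In the bottom-up sketch the behavior emerges from repeated weight adjustments over examples, whereas in the top-down sketch a human has stated the knowledge directly as a rule; this is the same division the paragraph above draws between neural networks and expert systems.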
