  Report: Slave and Expert Systems

  1960s - 1970s: Increased Research in Artificial Intelligence (AI)


During the Cold War the U.S. tried to ensure that it would stay ahead of the Soviet Union in technological development. Therefore, in 1963 the Advanced Research Projects Agency (ARPA, today's DARPA) granted the Massachusetts Institute of Technology (MIT) US$ 2.2 million for research in machine-aided cognition (artificial intelligence). The major effect of the grant was an increase in the pace of AI research and a continuation of funding.

In the 1960s and 1970s a multitude of AI programs was developed, most notably SHRDLU. Headed by Marvin Minsky, MIT's research team showed that, when confined to a small subject domain, computer programs could solve spatial and logical problems. Other advances in the field at the time included David Marr's new theories of machine vision, Marvin Minsky's frame theory, the PROLOG programming language (1972), and the development of expert systems.




INDEX CARD
World Wide Web (WWW)
Probably the most significant Internet service, the World Wide Web is not the essence of the Internet but a subset of it. It is made up of documents linked together in such a way that you can switch from one document to another simply by clicking on the link connecting them. This is made possible by the Hypertext Markup Language (HTML), the authoring language used to create World Wide Web documents. These so-called hypertexts can combine text, graphics, video, sound, and Java applets, thus making multimedia content possible.
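
As an illustration of this linking principle, the following short Python sketch (a purely hypothetical example; the document text and addresses are invented) uses the standard library's html.parser to collect the targets of the <a href="..."> anchors that tie one hypertext document to another.

from html.parser import HTMLParser

# A toy hypertext document: the <a href="..."> anchors are what link
# one World Wide Web document to another.
PAGE = """
<html><body>
<p>See the <a href="http://example.org/history.html">history page</a>
and a <a href="report.html">related report</a>.</p>
</body></html>
"""

class LinkExtractor(HTMLParser):
    """Collects the target of every <a href="..."> anchor it encounters."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

parser = LinkExtractor()
parser.feed(PAGE)
print(parser.links)  # ['http://example.org/history.html', 'report.html']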

Especially on the World Wide Web, documents are often retrieved by entering keywords into so-called search engines: sets of programs that fetch documents from as many servers as possible and index the stored information. (Regularly updated lists of the 100 most popular keywords entered into search engines are published on the Web.) No search engine can retrieve all the information on the World Wide Web; every search engine covers just a small part of it.
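
How such an index works can be suggested by a small, purely illustrative Python sketch (the document addresses and texts are invented): it builds a keyword index over the few documents it has "fetched" and can answer queries only from that index, which is also why no single search engine covers more than a part of the Web.

from collections import defaultdict

# Hypothetical miniature "search engine": only documents that were
# actually fetched can be indexed and therefore found.
fetched_documents = {
    "http://example.org/a": "the world wide web is a subset of the internet",
    "http://example.org/b": "hypertext documents are linked by html anchors",
}

index = defaultdict(set)  # keyword -> set of document addresses
for url, text in fetched_documents.items():
    for word in text.split():
        index[word].add(url)

def search(keyword):
    """Return the addresses of indexed documents containing the keyword."""
    return sorted(index.get(keyword.lower(), ()))

print(search("hypertext"))  # ['http://example.org/b']
print(search("robots"))     # [] - never fetched, therefore invisible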

Among other things, that is a reason why the World Wide Web is not simply one huge database, as is sometimes claimed: it lacks consistency. It is true that the Internet offers virtually unlimited storage capacity, and this capacity may prove almost everlasting as well, a prospect that is consoling but also threatening.

According to the Internet Domain Survey of the Internet Software Consortium, the number of Internet host computers is growing rapidly. In October 1969 the first two computers were connected; their number grew to 376,000 in January 1991 and to 72,398,092 in January 2000.
