1950s: The Beginnings of Artificial Intelligence (AI) Research

With the development of the electronic computer in 1941 and the stored-program computer in 1949, the conditions for research in artificial intelligence were given. A discovery that influenced much of the early development of AI was made by Norbert Wiener, who showed that intelligent behavior could be the result of feedback mechanisms. The person who finally coined the term "artificial intelligence" and is regarded as the father of AI is John McCarthy. In 1956 he organized a conference, "The Dartmouth Summer Research Project on Artificial Intelligence", to draw on the talent and expertise of others interested in machine intelligence for a month of brainstorming. In the following years AI research centers began forming at Carnegie Mellon University as well as at the Massachusetts Institute of Technology (MIT). One of the results of the intensified research in AI was a novel program called the General Problem Solver, developed by Newell and Simon in 1957 (the same people who had created the Logic Theorist). It was an extension of Wiener's feedback principle and was capable of solving a greater range of common-sense problems. While more programs were developed, a major breakthrough in AI history was the creation of the LISP (LISt Processing) language by John McCarthy in 1958. It was soon adopted by many AI researchers and is still in use today.
In Search of Reliable Internet Measurement Data

Newspapers and magazines frequently report growth rates of Internet usage, numbers of users, hosts, and domains that seem to be beyond all expectations. Growth rates are expected to accelerate exponentially. However, Internet measurement data are anything but reliable and often quite fantastic constructs, which are nevertheless jumped upon by many media and decision makers, because the technical difficulties of measuring Internet growth or usage make reliable measurement techniques nearly impossible. Equally, predictions that the Internet is about to collapse lack any foundation whatsoever.

Size and Growth

In fact, "today's Internet industry lacks any ability to evaluate trends, identify performance problems beyond the boundary of a single ISP (Internet service provider, M. S.), or prepare systematically for the growing expectations of its users. Historic or current data about traffic on the Internet infrastructure, maps depicting ... there is plenty of measurement occurring, albeit of questionable quality", says K. C. Claffy in her paper Internet measurement and data analysis: topology, workload, performance and routing statistics (http://www.caida.org/Papers/Nae/, Dec 6, 1999). Claffy is not an average researcher; she founded the well-known Cooperative Association for Internet Data Analysis (CAIDA). So her statement is a slap in the face of all market researchers stating otherwise. In a certain sense this is ridiculous, because the Internet and its precursors have been measured since their inception. So what are the reasons for this inability to evaluate trends or to identify performance problems beyond the boundary of a single ISP? First, in early 1995, almost simultaneously with the worldwide breakthrough of the World Wide Web, the NSFNET backbone was retired and the Internet's infrastructure passed to competing commercial providers, so that no single vantage point for tracking and monitoring traffic remained. "There are many estimates of the size and growth rate of the Internet that are either implausible, or inconsistent, or even clearly wrong", write K. G. Coffman and Andrew Odlyzko, both members of different departments of AT&T Labs - Research.

What is measured and what methods are used? Many studies are devoted to the number of users; others look at the number of computers connected to the Internet or count registered domains. You get a clue to their focus when you bear in mind that the Internet is just one of many networks of networks; it is only a part of the universe of computer networks. Additionally, the Internet has public (unrestricted) and private (restricted) areas. Most studies consider only the public Internet; Coffman and Odlyzko consider the long-distance private line networks too: the corporate networks and intranets attached to the public network.

Hosts

The best-known count of computers connected to the Internet is the Internet Domain Survey of the Internet Software Consortium, which walks the domain name system and probes a sample of the hosts it finds. Despite the small sample, this method has at least one flaw: not every connected computer is visible to such probes; machines behind firewalls, for example, go uncounted.

Internet Weather

Like daily weather, traffic on the Internet, i.e. the conditions for data flows, is monitored too, hence called Internet weather. One of the most famous Internet weather reports is published by Matrix Information and Directory Services (MIDS).

Hits, Page Views, Visits, and Users

Let us take a look at how these hot lists of the most visited Web sites may be compiled. I say "may be", because the methods used for data retrieval are mostly not fully disclosed. For some years it was seemingly common sense to report the number of files requested from a Web site, so-called "hits". This method is not very useful, because a document can consist of several files: graphics, text, etc. Just compile a document from some text and some twenty flashy graphics files, put it on the Web, and you get twenty-one hits per visit; the more graphics you add, the more hits and traffic (though not automatically more visitors to your Web site) you generate. Meanwhile page views, also called page impressions, are preferred; they are said to avoid these flaws. But even page views are not reliable.
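To make the difference concrete, here is a minimal Python sketch of why "hits" overstate usage while page views count only the document itself. The log format and file names are invented for illustration:

    # Every requested file counts as one "hit"; a page view counts
    # the HTML document only. Two visits to a page with three
    # embedded graphics therefore yield eight hits but two page views.
    from collections import Counter

    access_log = [
        "/article.html", "/logo.gif", "/photo1.gif", "/photo2.gif",
        "/article.html", "/logo.gif", "/photo1.gif", "/photo2.gif",
    ]

    hits = len(access_log)  # every requested file is a "hit"
    page_views = sum(1 for path in access_log if path.endswith(".html"))

    print(f"hits: {hits}, page views: {page_views}")
    # hits: 8, page views: 2 -- adding more graphics inflates hits only

Adding graphics files to the page inflates the hit count without a single additional reader, which is exactly why hits fell out of favor as a measure.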
Users might share computers and the corresponding IP addresses, so page views cannot simply be equated with readers. Especially the editors of some electronic journals (e-journals) rely on page views as a kind of ratings or circulation measure, Rick Marin reports in the New York Times. More advanced, but just slightly better at best, is counting visits: the access of several pages of a Web site during one session (a sketch of such session counting follows at the end of this section). The problems already mentioned apply here too. To avoid them, newspapers, for example, establish registration services, which require password authentication and therefore prove to be a kind of access obstacle. But there is a different reason for these services: without registration, users remain virtual users for content providers, not unique persons, because, as already mentioned, computers and IP addresses can be shared. If you would rather play around with Internet statistics yourself, you can use interactive tools such as Robert Orenstein's.

Measuring the Density of IP Addresses

Dodge and Shiode used data on the ownership of IP addresses from RIPE, the regional Internet registry for Europe, to map the density of Internet address space.
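Returning to the counting of visits announced above: below is a minimal Python sketch of how requests are commonly grouped into sessions by IP address and an idle timeout. The 30-minute timeout is a common but arbitrary convention, and the timestamps and addresses are invented for illustration:

    # Requests from the same IP address belong to one visit unless
    # more than SESSION_TIMEOUT seconds pass between them.
    SESSION_TIMEOUT = 30 * 60  # seconds

    requests = [  # (unix_timestamp, client_ip)
        (0,    "10.0.0.1"),
        (120,  "10.0.0.1"),   # same session
        (4000, "10.0.0.1"),   # > 30 min later: counted as a new visit
        (50,   "10.0.0.2"),   # two people behind one proxy IP ...
        (60,   "10.0.0.2"),   # ... still count as a single visit
    ]

    last_seen, visits = {}, 0
    for ts, ip in sorted(requests):
        if ip not in last_seen or ts - last_seen[ip] > SESSION_TIMEOUT:
            visits += 1
        last_seen[ip] = ts

    print(f"visits: {visits}, distinct IPs: {len(last_seen)}")
    # visits: 3, distinct IPs: 2 -- neither figure equals unique persons

The sketch shows both failure modes at once: one person returning later is counted twice, while two people sharing a proxy address are counted once.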
1940s - Early 1950s: First Generation Computers

Probably the most important contributor to the theoretical basis for the digital computers that were developed in the 1940s was the British mathematician Alan Turing. The onset of the Second World War led to increased funding for computer projects, which hastened technical progress, as governments sought to develop computers to exploit their potential strategic importance. By 1941 the German engineer Konrad Zuse had developed a computer, the Z3, to design airplanes and missiles. Two years later the British completed a secret code-breaking computer called Colossus to decode German messages. Also spurred by the war, the Electronic Numerical Integrator and Computer (ENIAC), a general-purpose computer, was produced by a partnership between the U.S. government and the University of Pennsylvania (1943). Consisting of some 18,000 vacuum tubes, it was an enormous machine that consumed vast amounts of electrical power. Concepts in computer design that remained central to computer engineering for the next 40 years were developed by the Hungarian-American mathematician John von Neumann. Characteristic of first-generation computers was the fact that instructions were made to order for the specific task for which the computer was to be used. Each computer had a different binary-coded program, called a machine language, that told it how to operate.
Feedback

In biology, feedback refers to a response within a system (molecule, cell, organism, or population) that influences the continued activity or productivity of that system. In essence, it is the control of a biological reaction by the end products of that reaction. Similar usage prevails in mathematics, particularly in several areas of communication theory. In every instance, part of the output is fed back as new input to modify and improve the subsequent output of the system. The importance of feedback mechanisms was also pointed out by Norbert Wiener, whose feedback principle influenced early AI research (see above).
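As a concrete illustration of "part of the output is fed back as new input", here is a minimal Python sketch of a negative feedback loop in the style of a thermostat. The set point, gain, and temperatures are invented for illustration:

    # Negative feedback: the measured output (temperature) is compared
    # to a goal, and the error drives the next correction, pulling the
    # system toward the set point step by step.
    set_point = 20.0   # desired temperature
    temperature = 5.0  # current output of the system
    gain = 0.5         # how strongly the feedback corrects the error

    for step in range(8):
        error = set_point - temperature   # feedback: compare output to goal
        temperature += gain * error       # correction becomes new input
        print(f"step {step}: {temperature:.2f}")
    # the error shrinks each round; the output converges on the set point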
World Wide Web (WWW)

Probably the most significant Internet service, the World Wide Web is not the essence of the Internet but a subset of it. It is constituted by documents that are linked together in such a way that you can switch from one document to another by simply clicking on the link connecting them. This is made possible by the Hypertext Markup Language (HTML), the authoring language used to create World Wide Web documents. These so-called hypertexts can combine text, graphics, video, and sound. Especially on the World Wide Web, documents are often retrieved by entering keywords into so-called search engines: sets of programs that fetch documents from as many servers as possible and index them. Among other things, that is the reason why the World Wide Web is not simply one huge database, as is sometimes claimed: it lacks consistency. There is virtually almost infinite storage capacity on the Internet, it is true, a capacity that might turn the Web into an almost everlasting archive as well, a prospect that is sometimes welcomed and sometimes feared. According to the Internet Domain Survey of the Internet Software Consortium, the number of hosts connected to the Internet continues to grow rapidly.
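As a minimal sketch of what the fetching component of a search engine does, the following Python example extracts the hyperlinks (HTML a elements with an href attribute) that tie Web documents together, so that further documents could be retrieved and indexed. The sample document and URL are invented:

    # Extract hyperlink targets from an HTML document using only the
    # Python standard library.
    from html.parser import HTMLParser

    class LinkExtractor(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []
        def handle_starttag(self, tag, attrs):
            if tag == "a":                      # an HTML hyperlink
                for name, value in attrs:
                    if name == "href":
                        self.links.append(value)

    sample = '<p>See <a href="http://example.com/next.html">the next page</a>.</p>'
    parser = LinkExtractor()
    parser.feed(sample)
    print(parser.links)   # ['http://example.com/next.html']

A real search engine repeats this step recursively, following each extracted link to fetch and index the next document.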
Expert system

Expert systems are advanced computer programs that mimic the knowledge and reasoning capabilities of an expert in a particular discipline. Their creators strive to clone the expertise of one or several human specialists to develop a tool that can be used by the layman to solve difficult or ambiguous problems. Expert systems differ from conventional computer programs in that they combine facts with rules that state relations between the facts, achieving a crude form of reasoning analogous to that of a human expert.
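To illustrate the facts-plus-rules idea, here is a minimal Python sketch of forward chaining, the simplest form of this kind of reasoning. The medical-style facts and rules are invented for illustration and stand in for the knowledge base a real expert system would contain:

    # Forward chaining: repeatedly apply rules whose premises are all
    # known facts, adding their conclusions as new facts, until no rule
    # produces anything new.
    facts = {"fever", "cough"}
    rules = [  # (set of premises, conclusion)
        ({"fever", "cough"}, "flu_suspected"),
        ({"flu_suspected"}, "recommend_rest"),
    ]

    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)   # derive a new fact
                changed = True

    print(facts)  # now includes 'flu_suspected' and 'recommend_rest'

Note how the second rule only fires because the first one derived "flu_suspected": conclusions feed back into the fact base, which is what separates this crude reasoning from a fixed lookup table.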