In Search of Reliable Internet Measurement Data

Newspapers and magazines frequently report growth rates of Internet usage, numbers of users, hosts, and domains that seem to be beyond all expectations; growth is expected to keep accelerating exponentially. However, Internet measurement data are anything but reliable and often quite fantastic constructs, which are nevertheless seized upon by many media and decision makers, because the technical difficulties of measuring Internet growth or usage make reliable measurement techniques all but impossible.

Equally, predictions that the Internet is about to collapse lack any foundation whatsoever. The researchers at the Internet Performance Measurement and Analysis Project (IPMA) compiled a list of news items about Internet performance and statistics and a few responses to them by engineers.

Size and Growth

In fact, "today's Internet industry lacks any ability to evaluate trends, identity performance problems beyond the boundary of a single ISP (Internet service provider, M. S.), or prepare systematically for the growing expectations of its users. Historic or current data about traffic on the Internet infrastructure, maps depicting ... there is plenty of measurement occurring, albeit of questionable quality", says K. C. Claffy in his paper Internet measurement and data analysis: topology, workload, performance and routing statistics (http://www.caida.org/Papers/Nae/, Dec 6, 1999). Claffy is not an average researcher; he founded the well-known Cooperative Association for Internet Data Analysis (CAIDA).

So her statement is a slap in the face of all market researchers claiming otherwise.
In a certain sense this is ridiculous, because network measurement has been an important task ever since the inception of the ARPANet, the forerunner of the Internet. The very first ARPANet site was established at the University of California, Los Angeles, and was intended to be the measurement site. There, Leonard Kleinrock worked on the development of measurement techniques used to monitor the performance of the ARPANet (cf. Michael and Ronda Hauben, Netizens: On the History and Impact of the Net). And in October 1991, Vinton Cerf, in the name of the Internet Activities Board, proposed guidelines for researchers considering measurement experiments on the Internet. He did so for two reasons: first, measurement would be critical for future development, evolution and deployment planning; second, Internet-wide measurement activities have the potential to interfere with normal operation and must be planned with care and made widely known beforehand.
So what are the reasons for this inability to evaluate trends or to identify performance problems beyond the boundary of a single ISP? First, in early 1995, almost simultaneously with the worldwide introduction of the World Wide Web, the National Science Foundation handed its stewardship role over the Internet to a competitive industry (bluntly put: the Internet was privatized), leaving no framework for adequate tracking and monitoring of the Internet. The early ISPs were not very interested in gathering and analyzing network performance data; they were struggling to meet the demands of their rapidly growing customer base. Second, reliable tools for the measurement and analysis of bandwidth and performance are only just beginning to be developed. CAIDA aims at developing such tools.
"There are many estimates of the size and growth rate of the Internet that are either implausible, or inconsistent, or even clearly wrong", K. G. Coffman and Andrew, both members of different departments of AT & T Labs-Research, state something similar in their paper The Size and Growth Rate of the Internet, published in First Monday. There are some sources containing seemingly contradictory information on the size and growth rate of the Internet, but "there is no comprehensive source for information". They take a well-informed and refreshing look at efforts undertaken for measuring the Internet and dismantle several misunderstandings leading to incorrect measurements and estimations. Some measurements have such large error margins that you might better call them estimations, to say the least. This is partly due to the fact that data are not disclosed by every carrier and only fragmentarily available.
What is measured and what methods are used? Many studies are devoted to the number of users; others look at the number of computers connected to the Internet or count IP addresses. Coffman and Odlyzko focus on the sizes of networks and the traffic they carry to answer questions about the size and the growth of the Internet.
The rationale for their focus becomes clear when you bear in mind that the Internet is just one of many networks of networks; it is only a part of the universe of computer networks. Additionally, the Internet has public (unrestricted) and private (restricted) areas. Most studies consider only the public Internet; Coffman and Odlyzko consider the long-distance private line networks too, the corporate networks and Intranets, because they are convinced (that is, their assertion is put forward, but not accompanied by empirical data) that "the evolution of the Internet in the next few years is likely to be determined by those private networks, especially by the rate at which they are replaced by VPNs (Virtual Private Networks) running over the public Internet. Thus it is important to understand how large they are and how they behave."

Coffman and Odlyzko check other estimates by considering the traffic generated by residential users accessing the Internet with a modem, the traffic through public peering points (statistics for them are available through CAIDA and the National Laboratory for Applied Network Research), and by calculating the bandwidth capacity for each of the major US providers of backbone services. They compare the public Internet to private line networks and offer interesting findings. The public Internet (with an effective bandwidth of about 75 Gbps as of December 1997) is currently far smaller, in both capacity and traffic, than the switched voice network, while the private line networks are considerably larger in aggregate capacity than the Internet: about as large as the voice network in the U.S. (with an effective bandwidth of about 330 Gbps as of December 1997), although they carry less traffic. On the other hand, the growth rate of traffic on the public Internet, while lower than is often cited, is still about 100% per year, much higher than for traffic on other networks. Hence, if present growth trends continue, data traffic in the U.S. will overtake voice traffic around the year 2002 and will be dominated by the Internet. In the future, growth in Internet traffic will derive predominantly from people staying online longer and from multimedia applications, since both consume more bandwidth and thus generate unanticipated amounts of data traffic.
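To make the arithmetic behind such a projection concrete, here is a small sketch in Python. The starting volumes are purely hypothetical and are not taken from Coffman and Odlyzko's data; only the 100% annual growth rate comes from the text above.

# Illustrative only: hypothetical starting volumes, not measured data.
voice = 100.0   # voice traffic in arbitrary units at the end of 1997
data = 6.0      # data traffic starts much smaller
year = 1997

while data < voice:
    year += 1
    voice *= 1.08   # assumed modest growth of voice traffic (about 8% per year)
    data *= 2.0     # 100% growth per year means doubling annually

print(f"Under these assumptions, data traffic overtakes voice traffic in {year}.")

The point is not the particular numbers but the effect of compounding: a quantity that doubles every year overtakes any slowly growing quantity within a few years, which is also why the projected crossover date is so sensitive to the assumed growth rates.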

Hosts

The Internet Software Consortium's Internet Domain Survey is one of the best-known efforts to count the number of hosts on the Internet. Happily, the ISC informs us extensively about the methods used for its measurements, a policy quite rare on the Web. For the most recent survey, the number of IP addresses that have been assigned a name was counted. At first sight it looks simple to arrive at an accurate number of hosts, but in practice an assigned IP address does not automatically correspond to an existing host. In order to find out, you have to send a kind of message to the host in question and wait for a reply. You do this with the PING utility. (For further explanations look here: Art. PING, in: Connected: An Internet Encyclopaedia.) But to do this for every registered IP address is an arduous task, so the ISC just pings a 1% sample of all hosts found and projects the result onto all pingable hosts. That is ISC's new method; its old method, still used by RIPE, was to count the number of domain names that had IP addresses assigned to them, a method that proved to be not very useful because a significant number of hosts restrict download access to their domain data.
Beside the small sample size, this method has at least one flaw: ISC's researchers only take network numbers into account that have been entered into the tables of the IN-ADDR.ARPA domain, and it is possible that not all providers know of these tables. A similar method is used by Telcordia's Netsizer.
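To illustrate the sampling logic described above, here is a minimal Python sketch. The address list, the use of the system ping command and the Linux-style flags are assumptions made for the sake of the example; the real ISC survey works on a far larger scale and with its own tooling.

import random
import subprocess

def responds(ip):
    """Send a single ping (Linux-style flags) and report whether the host answered."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", ip],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

def estimate_reachable_hosts(assigned_ips, sample_share=0.01):
    """Ping a random sample (1% by default) and project onto all assigned addresses."""
    sample_size = max(1, int(len(assigned_ips) * sample_share))
    sample = random.sample(assigned_ips, sample_size)
    reachable = sum(1 for ip in sample if responds(ip))
    return int(reachable / len(sample) * len(assigned_ips))

# Hypothetical usage:
# print(estimate_reachable_hosts(list_of_assigned_ip_addresses))

The projection can only be as good as the candidate list it starts from, which is exactly the flaw concerning the IN-ADDR.ARPA tables mentioned above.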

Internet Weather

Like the daily weather, the traffic on the Internet, that is, the conditions for data flows, is monitored too, hence the term Internet weather. One of the most famous Internet weather reports comes from The Matrix, Inc. Another one is the Internet Traffic Report, which displays traffic as values between 0 and 100 (high values indicate fast and reliable connections). For weather monitoring, response ratings from servers all over the world are used. The method is to "ping" servers (as for host counts, e.g.) and to compare response times both to past response times and to those of servers in the same area.
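The underlying calculation can be sketched in a few lines of Python. The 0-100 formula below is an assumption made for illustration; the actual weather services do not disclose their exact algorithms.

def weather_index(current_rtt_ms, baseline_rtt_ms):
    """Map a ping round-trip time onto a 0-100 scale (100 = fast and reliable)."""
    if current_rtt_ms is None:          # no reply at all
        return 0
    ratio = baseline_rtt_ms / current_rtt_ms
    return max(0, min(100, int(ratio * 100)))

print(weather_index(80, 80))    # answers as fast as its historical average: 100
print(weather_index(160, 80))   # answers twice as slowly: 50
print(weather_index(None, 80))  # does not answer at all: 0

A server is thus rated not in absolute terms but against its own past behaviour, which is what makes the analogy to a weather report plausible.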

Hits, Page Views, Visits, and Users

Let us take a look at how these hot lists of the most visited Web sites may be compiled. I say "may be" because the methods used for data retrieval are mostly not fully disclosed.
For some years it was seemingly common sense to report the number of requested files from a Web site, so-called "hits". This method is not very useful, because a single document can consist of several files: graphics, text, etc. Just compile a document from some text and some twenty flashy graphics files, put it on the Web, and you get twenty-one hits per visit; the more graphics you add, the more hits and traffic (which does not automatically mean more visitors to your Web site) you generate.
In the meantime page views, also called page impressions, are preferred, since they are said to avoid these flaws. But even page views are not reliable. Users might share computers, and thus IP addresses and host names, with others, or they might access not the site itself but a cached copy from the Web browser or from the ISP's proxy server. So the server might receive just one page request although several users viewed a document.
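A tiny log-parsing sketch in Python shows how far the two measures can diverge. The simplified log lines are invented for illustration; real servers write richer formats and real analytics tools are far more elaborate.

# Count "hits" (every requested file) versus "page views" (only HTML documents).
log = [
    "10.0.0.1 GET /article.html",
    "10.0.0.1 GET /logo.gif",
    "10.0.0.1 GET /photo1.jpg",
    "10.0.0.1 GET /photo2.jpg",
]

hits = len(log)
page_views = sum(1 for line in log if line.endswith(".html"))

print(hits)        # 4 hits ...
print(page_views)  # ... but only 1 page view

And, as argued above, even that single page view may hide several readers behind a shared cache.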

Especially the editors of some electronic journals (e-journals) rely on page views as a kind of ratings or circulation measure, Rick Marin reports in the New York Times. Click-through rates, a quantitative measure, are used as a substitute for something of an intrinsically qualitative nature: the importance of a column to its readers, for example. Readers may open a journal just for one particular column and not care about the journal's other contents. Deleting this column because it does not receive enough visits may cause these readers to turn their backs on the journal.
More advanced, but at best only slightly better, is counting visits: the access of several pages of a Web site during one session. The problems already mentioned apply here too. To avoid them, newspapers, for example, establish registration services, which require password authentication and therefore prove to be a kind of access obstacle.
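Counting visits usually means grouping successive requests from the same address into sessions separated by an idle period. The following Python sketch uses an assumed 30-minute timeout and invented request data; it is not how any particular analytics product works.

from collections import defaultdict

SESSION_TIMEOUT = 30 * 60  # seconds of inactivity that end a visit (assumed value)

def count_visits(requests):
    """requests: iterable of (client_address, unix_timestamp), sorted by time."""
    last_seen = {}
    visits = defaultdict(int)
    for client, timestamp in requests:
        previous = last_seen.get(client)
        if previous is None or timestamp - previous > SESSION_TIMEOUT:
            visits[client] += 1          # a new visit starts
        last_seen[client] = timestamp
    return sum(visits.values())

# Two requests five minutes apart count as one visit; one hours later as a second.
print(count_visits([("10.0.0.1", 0), ("10.0.0.1", 300), ("10.0.0.1", 20000)]))  # -> 2

Note that the sketch still treats everyone behind a shared address as a single client, which is precisely the weakness registration services are meant to overcome.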
But there is another reason for these registration services. For content providers, users are virtual users, not unique persons, because, as already mentioned, computers and IP addresses can be shared, and the Internet is a client-server system; in a certain sense it is in fact computers that communicate with each other. Therefore many content providers are eager to learn more about the users accessing their sites. Online registration forms or WWW user surveys are obvious methods of collecting additional data, to be sure. But you cannot be certain that the information given by users is reliable; you can only rely on the fact that somebody visited your Web site. Despite these obstacles, companies increasingly use data capturing. As with registration services, cookies come into play here.

For

If you like to play around with Internet statistics instead, you can use Robert Orenstein's Web Statistics Generator to make irresponsible predictions or visit the Internet Index, an occasional collection of seemingly statistical facts about the Internet.

Measuring the Density of IP Addresses

Measuring the density of IP addresses or domain names makes the geography of the Internet visible. So where on earth is the density of IP addresses or domain names highest? There is no global study of the Internet's geographical patterns available yet, but some regional studies can be found. The Urban Research Initiative as well as Martin Dodge and Narushige Shiode from the Centre for Advanced Spatial Analysis at the University College London have mapped the Internet address space of New York, Los Angeles and the United Kingdom (http://www.geog.ucl.ac.uk/casa/martin/internetspace/paper/telecom.html and http://www.geog.ucl.ac.uk/casa/martin/internetspace/paper/gisruk98.html).
Dodge and Shiode used data on the ownership of IP addresses from RIPE, Europe's most important registry for Internet numbers.
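Reduced to its core, such a density map amounts to joining address ownership records with a location and counting per area, roughly as in the Python sketch below. The record format and place names are invented; real registry data such as RIPE's ownership records are far messier and need considerable cleaning before they can be mapped.

from collections import Counter

# Invented example records, not actual RIPE data.
records = [
    {"netblock": "198.51.100.0/24", "location": "London"},
    {"netblock": "203.0.113.0/24", "location": "London"},
    {"netblock": "192.0.2.0/24", "location": "Manchester"},
]

density = Counter(record["location"] for record in records)

for place, count in density.most_common():
    print(place, count)   # London 2, Manchester 1

Maps like Dodge and Shiode's essentially plot such counts onto a geographic grid, which is why the quality of the location information in the ownership records matters so much.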





TEXTBLOCK 1/4 // URL: http://world-information.org/wio/infostructure/100437611791/100438658352
 
Virtual body and data body



The result of this informatisation is the creation of a virtual body which is the exterior of a man or woman's social existence. It plays the same role as the physical body, except that it is located in virtual space (it has no real location). The virtual body holds a certain emancipatory potential. It allows us to go to places and to do things which would be impossible in the physical world. It does not have the weight of the physical body and is less conditioned by physical laws. It therefore allows one to create an identity of one's own, with far fewer restrictions than would apply in the physical world.

But this new freedom has a price. In the shadow of virtualisation, the data body has emerged. The data body is a virtual body which is composed of the files connected to an individual. As the Critical Art Ensemble observe in their book Flesh Machine, the data body is the "fascist sibling" of the virtual body; it is "a much more highly developed virtual form, and one that exists in complete service to the corporate and police state."

The virtual character of the data body means that the social regulation which applies to the real body is absent. While there are limits to the manipulation and exploitation of the real body (even if these limits are not respected everywhere), there is little regulation concerning the manipulation and exploitation of the data body, although the data body is much easier to manipulate than the real body. The seizure of the data body from outside often goes undetected by the individual concerned, as it has become part of the basic structure of an informatised society. Yet data bodies serve as raw material for the "New Economy". Both businesses and governments claim access to data bodies. By seizing data bodies, power can be exercised and democratic decision-making procedures bypassed. This totalitarian potential makes the data body a deeply problematic phenomenon, one that calls for an understanding of data as a social construction rather than as a representation of an objective reality. How data bodies are generated, what happens to them and who has control over them is therefore a highly relevant political question.

TEXTBLOCK 2/4 // URL: http://world-information.org/wio/infostructure/100437611761/100438659695
 
Timeline 1900-1970 AD

1913 the wheel cipher gets re-invented as a strip

1917 William Frederick Friedman starts working as a cryptanalyst at Riverbank Laboratories, which also works for the U.S. Government. Later he creates a school for military cryptanalysis

- an AT&T employee, Gilbert S. Vernam, invents a polyalphabetic cipher machine that works with random keys

1918 the Germans start using the ADFGVX system, which is later broken by the French cryptanalyst Georges Painvin

- Arthur Scherbius patents a ciphering machine and tries to sell it to the German Military, but is rejected

1919 Hugo Alexander Koch invents a rotor cipher machine

1921 the Hebern Electric Code, a company producing electro-mechanical cipher machines, is founded

1923 Arthur Scherbius founds an enterprise to construct and finally sell his Enigma machine for the German Military

late 1920's/30's it is more and more criminals who use cryptology for their purposes (e.g. for smuggling). Elizebeth Smith Friedman regularly deciphers the codes of rum-smugglers during Prohibition

1929 Lester S. Hill publishes his paper Cryptography in an Algebraic Alphabet, which contains enciphered parts

1933-1945 the Germans make the Enigma machine their main cryptographic tool; it is broken first by the Polish mathematician Marian Rejewski and, from 1939 on, by Gordon Welchman, Alan Turing and their team at Bletchley Park in England

1937 the Japanese invent their so-called Purple machine with the help of Herbert O. Yardley. The machine works with telephone stepping relays. It is broken by a team led by William Frederick Friedman. As the Japanese were unable to break the US codes, they imagined their own codes to be unbreakable as well - and were not careful enough.

1930's the Sigaba machine is invented in the USA, either by W.F. Friedman or his colleague Frank Rowlett

- at the same time the British develop the Typex machine, similar to the German Enigma machine

1943 Colossus, a code-breaking computer, is put into action at Bletchley Park

1943-1980 the cryptographic Venona Project, run by the NSA, takes place over a longer period than any other program of that type

1948 Shannon, one of the first modern cryptographers to bring mathematics into cryptography, publishes his paper Communication Theory of Secrecy Systems

1960's the Communications-Electronics Security Group (= CESG) is founded as a section of Government Communications Headquarters (= GCHQ)

late 1960's the IBM Watson Research Lab develops the Lucifer cipher

1969 James Ellis develops a system of separate public-keys and private-keys

TEXTBLOCK 3/4 // URL: http://world-information.org/wio/infostructure/100437611776/100438658921
 
Challenges for Copyright by ICT: Copyright Owners

The main concern of copyright owners, as those who (in terms of income generation) profit most from intellectual property protection, is the facilitation of pirate activities in digital environments.

Reproduction and Distribution

Unlike copies of works made using analog copiers (photocopy machines, video recorders etc.) digital information can be reproduced extremely fast, at low cost and without any loss in quality. Since each copy is a perfect copy, no quality-related limits inhibit pirates from making as many copies as they please, and recipients of these copies have no incentive to return to authorized sources to get another qualitatively equal product. Additionally the costs of making one extra copy of intellectual property online are insignificant, as are the distribution costs if the copy is moved to the end user over the Internet.

Control and Manipulation

In cross-border, global data networks it is almost impossible to control the exploitation of protected works. In particular, the use of anonymous remailers and other existing technologies complicates the prosecution of pirates. Digital files are also especially vulnerable to manipulation, both of the work itself and of the copyright management information that is (in some cases) embedded in it.

TEXTBLOCK 4/4 // URL: http://world-information.org/wio/infostructure/100437611725/100438659526
 
Gerard J. Holzmann and Bjoern Pehrson, The Early History of Data Networks

This book gives a fascinating glimpse of the many documented attempts throughout history to develop effective means for long distance communications. Large-scale communication networks are not a twentieth-century phenomenon. The oldest attempts date back to millennia before Christ and include ingenious uses of homing pigeons, mirrors, flags, torches, and beacons. The first true nationwide data networks, however, were being built almost two hundred years ago. At the turn of the 18th century, well before the electromagnetic telegraph was invented, many countries in Europe already had fully operational data communications systems with altogether close to one thousand network stations. The book shows how the so-called information revolution started in 1794, with the design and construction of the first true telegraph network in France, Chappe's fixed optical network.

http://www.it.kth.se/docs/early_net/

INDEXCARD, 1/4
 
MIT

The MIT (Massachusetts Institute of Technology) is a privately controlled coeducational institution of higher learning famous for its scientific and technological training and research. It was chartered by the state of Massachusetts in 1861 and became a land-grant college in 1863. During the 1930s and 1940s the institute evolved from a well-regarded technical school into an internationally known center for scientific and technical research. In the days of the Great Depression, its faculty established prominent research centers in a number of fields, most notably analog computing (led by Vannevar Bush) and aeronautics (led by Charles Stark Draper). During World War II, MIT administered the Radiation Laboratory, which became the nation's leading center for radar research and development, as well as other military laboratories. After the war, MIT continued to maintain strong ties with military and corporate patrons, who supported basic and applied research in the physical sciences, computing, aerospace, and engineering. MIT has numerous research centers and laboratories. Among its facilities are a nuclear reactor, a computation center, geophysical and astrophysical observatories, a linear accelerator, a space research center, supersonic wind tunnels, an artificial intelligence laboratory, a center for cognitive science, and an international studies center. MIT's library system is extensive and includes a number of specialized libraries; there are also several museums.

INDEXCARD, 2/4
 
Moral rights

Besides economic rights, authors of copyrighted works enjoy moral rights, on the basis of which they have the right to claim their authorship and to require that their names be indicated on copies of the work and in connection with other uses thereof. Moral rights are generally inalienable and remain with the creator even after he has transferred his economic rights, although the author may waive their exercise.

INDEXCARD, 3/4
 
Computer programming language

A computer programming language is any of various languages for expressing a set of detailed instructions for a digital computer. Such a language consists of characters and rules for combining them into symbols and words.

INDEXCARD, 4/4