In Search of Reliable Internet Measurement Data

Newspapers and magazines frequently report growth rates of Internet usage, numbers of users, hosts, and domains that seem to be beyond all expectations, with growth rates expected to accelerate exponentially. However, Internet measurement data are anything but reliable and often quite fantastic constructs, which are nevertheless jumped upon by many media and decision makers, because the technical difficulties of measuring Internet growth or usage make reliable measurement techniques nearly impossible. Equally, predictions that the Internet is about to collapse lack any foundation whatsoever.

Size and Growth

In fact, "today's Internet industry lacks any ability to evaluate trends, identify performance problems beyond the boundary of a single ISP (Internet service provider, M. S.), or prepare systematically for the growing expectations of its users. Historic or current data about traffic on the Internet infrastructure, maps depicting ... there is plenty of measurement occurring, albeit of questionable quality", says K. C. Claffy in the paper Internet measurement and data analysis: topology, workload, performance and routing statistics (http://www.caida.org/Papers/Nae/, Dec 6, 1999). Claffy is not an average researcher; she founded the well-known Cooperative Association for Internet Data Analysis (CAIDA). So her statement is a slap in the face of all market researchers stating otherwise. In a certain sense this is ridiculous, because since the inception of the ... So what are the reasons for this inability to evaluate trends or to identify performance problems beyond the boundary of a single ISP? First, in early 1995, almost simultaneously with the worldwide introduction of the ... "There are many estimates of the size and growth rate of the Internet that are either implausible, or inconsistent, or even clearly wrong", state K. G. Coffman and Andrew Odlyzko, members of different departments of AT&T Labs.

What is measured and what methods are used?
Many studies are devoted to the number of users; others look at the number of computers connected to the Internet, or count ... You get a clue to their focus when you bear in mind that the Internet is just one of many networks of networks; it is only a part of the universe of computer networks. Additionally, the Internet has public (unrestricted) and private (restricted) areas. While most studies consider only the public Internet, Coffman and Odlyzko also consider the long-distance private-line networks: the corporate networks, the ...

Hosts

The ... Despite the small sample, this method has at least one flaw: ...

Internet Weather

Like daily weather, traffic on the Internet and the conditions for data flows are monitored too, hence called Internet weather. One of the most famous Internet weather services is ...

Hits, Page Views, Visits, and Users

Let us take a look at how these hot lists of most visited Web sites may be compiled. I say "may be", because the methods used for data retrieval are mostly not fully disclosed. For some years it was seemingly common sense to report requested files from a Web site, so-called "hits". This method is not very useful, because a document can consist of several files: graphics, text, etc. Just compile a document from some text and some twenty flashy graphics files, put it on the Web, and you get twenty-one hits per visit; the more graphics you add, the more hits and traffic (though not automatically more visitors to your Web site) you generate. In the meantime page views, also called page impressions, are preferred, which are said to avoid these flaws. But even page views are not reliable. Users might share computers, and the corresponding ... Especially the editors of some electronic journals (e-journals) rely on page views as a kind of ratings or circulation measure, Rick Marin reports in the ... More advanced, but at best only slightly better, is counting visits: the access of several pages of a Web site during one session. The problems already mentioned apply here too.
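The difference between hits and page views described above can be sketched in a few lines of Python. The mini access log below is invented for illustration, not data from the text:

```python
# Illustrative sketch: "hits" vs. "page views" in a simplified
# web-server access log. Every requested file (HTML page or embedded
# graphic) counts as a hit, but only the HTML documents count as page
# views -- which is why hit counts overstate readership.

log = [
    "/article.html", "/logo.gif", "/photo1.jpg", "/photo2.jpg",
    "/index.html", "/logo.gif",
]

hits = len(log)  # every requested file counts as a hit
page_views = sum(1 for path in log if path.endswith(".html"))

print(hits)        # -> 6 hits, although only 2 documents were viewed
print(page_views)  # -> 2
```

Adding more graphics files to a page inflates `hits` without a single extra reader, which is exactly the flaw the text describes.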
To avoid them, newspapers, e.g., establish registration services, which require password authentication and therefore prove to be a kind of access obstacle. But there is a different reason for these services: for content providers, users are virtual users, not unique persons, because, as already mentioned, computers and ... may be shared. For ... If you would like to play around with Internet statistics instead, you can use Robert Orenstein's ...

Measuring the Density of IP Addresses

Dodge and Shiode used data on the ownership of IP addresses from ... |
|
Virtual body and data body

The result of this informatisation is the creation of a virtual body, which is the exterior of a man or woman's social existence. It plays the same role as the physical body, except that it is located in virtual space (it has no real location). The virtual body holds a certain emancipatory potential. It allows us to go to places and to do things which would be impossible in the physical world. It does not have the weight of the physical body and is less conditioned by physical laws. It therefore allows one to create an identity of one's own, with far fewer restrictions than would apply in the physical world.

But this new freedom has a price. In the shadow of virtualisation, the data body has emerged. The data body is a virtual body which is composed of the files connected to an individual. As the ... The virtual character of the data body means that the social regulation that applies to the real body is absent. While there are limits to the manipulation and exploitation of the real body (even if these limits are not respected everywhere), there is little regulation concerning the manipulation and exploitation of the data body, although the manipulation of the data body is much easier to perform than that of the real body. The seizure of the data body from outside the concerned individual often goes undetected, as it has become part of the basic structure of an informatised society.

But data bodies serve as raw material for the "New Economy". Both business and governments claim access to data bodies. Power can be exercised, and democratic decision-taking procedures bypassed, by seizing data bodies. This totalitarian potential of the data body makes it a deeply problematic phenomenon that calls for an understanding of data as a social construction rather than as something representative of an objective reality. How data bodies are generated, what happens to them and who has control over them is therefore a highly relevant political question. |
|
Timeline 1900-1970 AD

1913 - the wheel cipher gets re-invented as a strip cipher
1917 - Gilbert S. Vernam, an AT&T employee, invents a polyalphabetic cipher machine that works with random keys
1918 - the Germans start using the ADFGVX system, which later gets broken by the Frenchman Georges Painvin - Arthur Scherbius patents a ciphering machine and tries to sell it to the German military, but is rejected
1919 - Hugo Alexander Koch invents a rotor cipher machine
1921 - the Hebern Electric Code, a company producing electro-mechanical cipher machines, is founded
1923 - Arthur Scherbius founds an enterprise to construct and finally sell his Enigma machine
late 1920's/30's - more and more it is criminals who use cryptology for their purposes (e.g. for smuggling); Elizebeth Smith Friedman regularly deciphers the codes of rum-smugglers during Prohibition
1929 - Lester S. Hill publishes Cryptography in an Algebraic Alphabet, which contains enciphered parts
1933-1945 - the Germans make the Enigma machine their main cryptographic tool; it is broken by the Pole Marian Rejewski and by Gordon Welchman and Alan Turing's team at Bletchley Park in England in 1939
1937 - the Japanese invent their so-called Purple machine with the help of Herbert O. Yardley; the machine works with telephone stepping relays and is broken by a team of ...
1930's - the Sigaba machine is invented in the USA, either by W. F. Friedman or his colleague Frank Rowlett; at the same time the British develop the Typex machine, similar to the German Enigma machine
1943 - Colossus, a code-breaking computer, is put into action at Bletchley Park
1943-1980 - the cryptographic Venona Project, run by the NSA, takes place over a longer period than any other program of that type
1948 - Claude Shannon, one of the first modern cryptographers to bring mathematics into cryptography, publishes his paper Communication Theory of Secrecy Systems
1960's - the Communications-Electronics Security Group (CESG) is founded as a section of the Government Communications Headquarters (GCHQ)
late 1960's - the IBM Watson Research Lab develops the Lucifer cipher
1969 - James Ellis develops a system of separate public keys and private keys |
|
Challenges for Copyright by ICT: Copyright Owners

The main concern of copyright owners as the ... (in terms of ...) falls into two areas:

Reproduction and Distribution

Unlike copies of works made using analog copiers (photocopy machines, video recorders, etc.), digital information can be reproduced extremely fast, at low cost and without any loss of quality. Since each copy is a perfect copy, no quality-related limits inhibit pirates from making as many copies as they please, and recipients of these copies have no incentive to return to authorized sources to get another qualitatively equal product. Additionally, the cost of making one extra copy of intellectual property online is insignificant, as are the distribution costs if the copy is moved to the end user over the Internet.

Control and Manipulation

In cross-border, global data networks it is almost impossible to control the exploitation of protected works. In particular, the use of anonymous remailers and other existing technologies complicates the prosecution of pirates. Digital files are also especially vulnerable to manipulation, of the work itself and of the (in some cases) therein-embedded ... |
|
Definition

"Digital divide" describes the fact that the world can be divided into people who do and people who do not have access to (or the education to handle) modern information technologies, e.g. cellular telephones, television, the Internet. This digital divide concerns people all over the world, but, as usual, people in the formerly so-called Third World countries and in rural areas suffer most of all; the poor and less-educated suffer from this divide. More than 80% of all computers with access to the Internet are situated in larger cities.

"The cost of the information today consists not so much of the creation of content, which should be the real value, but of the storage and efficient delivery of information, that is, in essence, the cost of paper, printing, transporting, warehousing and other physical distribution means, plus the cost of the personnel manpower needed to run these `extra' services .... Realizing an autonomous distributed networked society, which is the real essence of the Internet, will be the most critical issue for the success of the information and communication revolution of the coming century or millennium." (Izumi Aizu)

for more information see: ... |
|
Gerard J. Holzmann and Bjoern Pehrson, The Early History of Data Networks This book gives a fascinating glimpse of the many documented attempts throughout history to develop effective means for long distance communications. Large-scale communication networks are not a twentieth-century phenomenon. The oldest attempts date back to millennia before Christ and include ingenious uses of homing pigeons, mirrors, flags, torches, and beacons. The first true nationwide data networks, however, were being built almost two hundred years ago. At the turn of the 18th century, well before the electromagnetic telegraph was invented, many countries in Europe already had fully operational data communications systems with altogether close to one thousand network stations. The book shows how the so-called information revolution started in 1794, with the design and construction of the first true telegraph network in France, Chappe's fixed optical network. http://www.it.kth.se/docs/early_net/ |
|
MIT

The MIT (Massachusetts Institute of Technology) is a privately controlled coeducational institution of higher learning famous for its scientific and technological training and research. It was chartered by the state of Massachusetts in 1861 and became a land-grant college in 1863. During the 1930s and 1940s the institute evolved from a well-regarded technical school into an internationally known center for scientific and technical research. In the days of the Great Depression, its faculty established prominent research centers in a number of fields, most notably analog computing (led by Vannevar Bush). |
|
Moral rights

Authors of copyrighted works hold, besides their economic exploitation rights, moral rights: above all the right to be identified as the author of the work, and the right to object to distortions or mutilations of the work that would be prejudicial to their honor or reputation. |
|
Computer programming language A computer programming language is any of various languages for expressing a set of detailed instructions for a digital computer. Such a language consists of characters and rules for combining them into symbols and words. |
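As a minimal illustration of this definition, the invented snippet below expresses a set of detailed instructions in Python, one of many such languages; its characters are combined into symbols ("=", "+") and words ("total", "for", "print") according to the language's rules:

```python
# A few detailed instructions for a digital computer, expressed in Python:
# sum the elements of a list.

numbers = [1, 2, 3, 4]
total = 0
for n in numbers:      # repeat the instruction below for each element
    total = total + n  # add the current element to the running total
print(total)           # -> 10
```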
|
Caching Caching generally refers to the process of making an extra copy of a file or a set of files for more convenient retrieval. On the Internet caching of third party files can occur either locally on the user's client computer (in the RAM or on the hard drive) or at the server level ("proxy caching"). A requested file that has been cached will then be delivered from the cache rather than a fresh copy being retrieved over the Internet. |
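The mechanism described above can be sketched in a few lines of Python; the function names and the URL are invented for illustration:

```python
# Hedged sketch of local caching: an extra copy of a retrieved file is
# kept, so a repeated request is served from the cache rather than a
# fresh copy being retrieved over the network.

cache = {}       # maps URL -> file contents (the "extra copy")
fetch_count = 0  # how often a fresh copy was actually retrieved

def slow_fetch(url):
    """Stand-in for retrieving a fresh copy over the Internet."""
    global fetch_count
    fetch_count += 1
    return "contents of " + url

def get(url):
    """Return the file, preferring the cached copy when one exists."""
    if url not in cache:
        cache[url] = slow_fetch(url)  # first request: retrieve and cache
    return cache[url]                 # later requests: served from cache

get("http://example.com/page.html")
get("http://example.com/page.html")   # second request hits the cache
print(fetch_count)                    # -> 1 (only one real retrieval)
```

A proxy cache works the same way, except that the `cache` dictionary lives on a shared server rather than on the user's own computer, so one retrieval can serve many users.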
|
WIPO

The World Intellectual Property Organization is one of the specialized agencies of the United Nations (UN), designed to promote the worldwide protection of both industrial property (inventions, trademarks, and designs) and copyrighted materials (literary, musical, photographic, and other artistic works). It was established by a convention signed in Stockholm in 1967, which came into force in 1970. The aims of WIPO are threefold. First, through international cooperation, WIPO promotes the protection of intellectual property. Secondly, the organization supervises administrative cooperation between the Paris, Berne, and other intellectual property unions regarding agreements on trademarks, patents, and the protection of artistic and literary works. Thirdly, through its registration activities, WIPO provides direct services to applicants for, or owners of, industrial property rights. |
|