In Search of Reliable Internet Measurement Data

Newspapers and magazines frequently report growth rates of Internet usage, numbers of users, hosts, and domains that seem to be beyond all expectations. Growth rates are expected to accelerate exponentially. However, Internet measurement data are anything but reliable and often quite fantastic constructs, which are nevertheless seized upon by many media and decision makers, because the technical difficulties of measuring Internet growth or usage make reliable measurement techniques nearly impossible. Equally, predictions that the Internet is about to collapse lack any foundation whatsoever. The researchers at the Cooperative Association for Internet Data Analysis (CAIDA) confirm this.

Size and Growth

In fact, "today's Internet industry lacks any ability to evaluate trends, identify performance problems beyond the boundary of a single ISP (Internet service provider, M. S.), or prepare systematically for the growing expectations of its users. Historic or current data about traffic on the Internet infrastructure, maps depicting ... there is plenty of measurement occurring, albeit of questionable quality", says K. C. Claffy in her paper Internet measurement and data analysis: topology, workload, performance and routing statistics (http://www.caida.org/Papers/Nae/, Dec 6, 1999). Claffy is not an average researcher; she founded the well-known CAIDA. So her statement is a slap in the face of all market researchers stating otherwise. In a certain sense this is ridiculous, because measurement has accompanied the network since the inception of the ARPANET.

So what are the reasons for this inability to evaluate trends or to identify performance problems beyond the boundary of a single ISP? First, in early 1995, almost simultaneously with the worldwide breakthrough of the World Wide Web, the publicly funded NSFNET backbone was retired and the infrastructure passed into commercial hands; with it went the central collection of traffic statistics. "There are many estimates of the size and growth rate of the Internet that are either implausible, or inconsistent, or even clearly wrong", state K. G. Coffman and Andrew Odlyzko, both members of different departments of AT&T Labs - Research, in their paper The Size and Growth Rate of the Internet.

What is measured and what methods are used? Many studies are devoted to the number of users; others look at the number of computers connected to the Internet or count domain names; Coffman and Odlyzko estimate the traffic the networks carry. You get a clue to their focus when you bear in mind that the Internet is just one of many networks of networks; it is only a part of the universe of computer networks. Additionally, the Internet has public (unrestricted) and private (restricted) areas. Most studies consider only the public Internet; Coffman and Odlyzko consider the long-distance private line networks too: the corporate networks, the intranets.

Hosts

The Internet Domain Survey conducted by the Internet Software Consortium is the best-known count of hosts, i.e., of computers connected to the Internet; it counts the hosts registered in the Domain Name System and checks a small sample of them for reachability. Despite the small sample, this method has at least one flaw: hosts protected by firewalls often cannot be reached, so many connected computers go uncounted.

Internet Weather

Like the daily weather, traffic on the Internet, the conditions for data flows, is monitored too, hence called Internet weather. One of the most famous Internet weather reports is compiled by Matrix Information and Directory Services (MIDS), which regularly pings thousands of hosts worldwide and charts the round-trip times.

Hits, Page Views, Visits, and Users

Let us take a look at how the hot lists of most visited Web sites may be compiled. I say may be, because the methods used for data retrieval are mostly not fully disclosed. For some years it was seemingly common sense to report the number of files requested from a Web site, so-called "hits". A method not very useful, because a document can consist of several files: graphics, text, etc. Just compile a document from some text and some twenty flashy graphical files, put it on the Web, and you get twenty-one hits per visit; the more graphics you add, the more hits and traffic (not automatically to your Web site) you generate. In the meantime page views, also called page impressions, are preferred, which are said to avoid these flaws. But even page views are not reliable: users might share computers and the corresponding IP addresses, and cached copies are delivered without the originating server ever noticing, so a page view cannot be mapped to one unique person.
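To make the difference between these measures concrete, here is a minimal sketch in Python; the log format and the 30-minute session rule are assumptions for illustration, not the method of any real log analyzer:

    # Toy request log: (minutes since midnight, client, requested file).
    # Every requested file is a "hit"; only the HTML documents count as
    # page views; a "visit" groups a client's requests separated by less
    # than 30 minutes into one session.
    LOG = [
        (0, "alice", "index.html"), (0, "alice", "logo.gif"),
        (1, "alice", "photo.jpg"), (2, "bob", "index.html"),
        (45, "alice", "news.html"),
    ]

    hits = len(LOG)
    page_views = sum(1 for _, _, f in LOG if f.endswith(".html"))

    visits, last_seen = 0, {}
    for t, client, _ in LOG:          # the log is ordered by time
        if client not in last_seen or t - last_seen[client] >= 30:
            visits += 1               # a new session starts
        last_seen[client] = t

    print(hits, page_views, visits)   # -> 5 3 3

A single page with one embedded image already produces two hits; here the same log yields 5 hits but only 3 page views and 3 visits.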
Especially the editors of some electronic journals (e-journals) rely on page views as a kind of ratings or circulation measure, Rick Marin reports in the New York Times. More advanced, but just slightly better at best, is counting visits, the access of several pages of a Web site during one session. The problems already mentioned apply here too. To avoid them, newspapers, for example, establish registration services, which require password authentication and therefore prove to be a kind of access obstacle. But there is a different reason for these services: for content providers users are virtual users, not unique persons, because, as already mentioned, computers and IP addresses can be shared; registration promises to turn those virtual users into identifiable individuals. If you like to play around with Internet statistics instead, you can use Robert Orenstein's Internet statistics generator, which produces impressive-looking numbers on demand.

Measuring the Density of IP Addresses

Martin Dodge and Narushige Shiode used data on the ownership of IP addresses from RIPE, the regional Internet registry for Europe, to map the geography of Internet address space in the United Kingdom.
Virtual body and data body

The result of this informatisation is the creation of a virtual body which is the exterior of a man or woman's social existence. It plays the same role as the physical body, except that it is located in virtual space (it has no real location). The virtual body holds a certain emancipatory potential. It allows us to go to places and to do things which would be impossible in the physical world. It does not have the weight of the physical body and is less conditioned by physical laws. It therefore allows one to create an identity of one's own, with far fewer restrictions than would apply in the physical world.

But this new freedom has a price. In the shadow of virtualisation, the data body has emerged. The data body is a virtual body which is composed of the files connected to an individual. The virtual character of the data body means that the social regulation that applies to the real body is absent. While there are limits to the manipulation and exploitation of the real body (even if these limits are not respected everywhere), there is little regulation concerning the manipulation and exploitation of the data body, although the manipulation of the data body is much easier to perform than that of the real body. The seizure of the data body from outside the concerned individual often goes undetected, as it has become part of the basic structure of an informatised society.

But data bodies serve as raw material for the "New Economy". Both business and governments claim access to data bodies. Power can be exercised, and democratic decision-making procedures bypassed, by seizing data bodies. This totalitarian potential of the data body makes it a deeply problematic phenomenon that calls for an understanding of data as a social construction rather than as something representative of an objective reality. How data bodies are generated, what happens to them, and who has control over them is therefore a highly relevant political question.
Timeline 1900-1970 AD

1913 the wheel cipher gets re-invented as a strip
1917 an AT&T employee, Gilbert S. Vernam, invents a polyalphabetic cipher machine that works with random keys
1918 the Germans start using the ADFGVX system, which is later broken by the French cryptanalyst Georges Painvin - Arthur Scherbius patents a ciphering machine and tries to sell it to the German military, but is rejected
1919 Hugo Alexander Koch invents a rotor cipher machine
1921 the Hebern Electric Code, a company producing electro-mechanical cipher machines, is founded
1923 Arthur Scherbius founds an enterprise to construct and finally sell his Enigma machine
late 1920's/30's more and more it is criminals who use cryptology for their purposes (e.g. for smuggling). Elizebeth Smith Friedman regularly deciphers the codes of rum-smugglers during Prohibition
1929 Lester S. Hill publishes his paper Cryptography in an Algebraic Alphabet, which contains enciphered parts
1933-1945 the Germans make the Enigma machine their main cryptographic tool; it is broken by the Polish mathematician Marian Rejewski and, in England from 1939 on, by Gordon Welchman, Alan Turing and their team at Bletchley Park
1937 the Japanese build their so-called Purple machine, prompted by the revelations of Herbert O. Yardley. The machine works with telephone stepping relays. It is broken by a team of cryptanalysts led by William F. Friedman in 1940
1930's the Sigaba machine is invented in the USA, either by W. F. Friedman or his colleague Frank Rowlett - at the same time the British develop the Typex machine, similar to the German Enigma machine
1943 Colossus, a code-breaking computer, is put into action at Bletchley Park
1943-1980 the cryptographic Venona project, run by the NSA, takes place over a longer period than any other program of that type
1949 Claude Shannon, one of the first modern cryptographers to bring mathematics into cryptography, publishes his paper Communication Theory of Secrecy Systems
1960's the Communications-Electronics Security Group (CESG) is founded as a section of the Government Communications Headquarters (GCHQ)
late 1960's the IBM Watson Research Lab develops the Lucifer cipher
1969 James Ellis develops a system of separate public keys and private keys
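The principle behind Vernam's 1917 invention, combining every character of the message with a character of a random key, survives today as the one-time pad. A minimal sketch of the idea (modern Python using XOR on bytes, not Vernam's teletype hardware):

    import os

    def vernam(data: bytes, key: bytes) -> bytes:
        # XOR each message byte with the corresponding key byte;
        # applying the same operation again restores the plaintext.
        assert len(key) == len(data)
        return bytes(b ^ k for b, k in zip(data, key))

    message = b"ATTACK AT DAWN"
    key = os.urandom(len(message))          # random key, used only once
    ciphertext = vernam(message, key)
    assert vernam(ciphertext, key) == message

If the key is truly random, as long as the message, and never reused, the cipher is unbreakable, which is exactly the property Shannon's 1949 paper formalized.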
Challenges for Copyright by ICT: Copyright Owners

The main concern of copyright owners, as the holders of the economic rights in protected works, relates to two areas: reproduction and distribution, and control and manipulation.

Reproduction and Distribution

Unlike copies of works made using analog copiers (photocopy machines, video recorders, etc.), digital information can be reproduced extremely fast, at low cost, and without any loss in quality. Since each copy is a perfect copy, no quality-related limits inhibit pirates from making as many copies as they please, and recipients of these copies have no incentive to return to authorized sources to get another qualitatively equal product. Additionally, the cost of making one extra copy of intellectual property online is insignificant, as are the distribution costs if the copy is moved to the end user over the Internet.

Control and Manipulation

In cross-border, global data networks it is almost impossible to control the exploitation of protected works. In particular, the use of anonymous remailers and other existing technologies complicates the prosecution of pirates. Digital files are also especially vulnerable to manipulation, both of the work itself and of the copyright management information embedded in it (in some cases).
Definition

"Digital divide" describes the fact that the world can be divided into people who do and people who do not have access to (or the education to handle) modern information technologies such as the cellular telephone, television, or the Internet. This digital divide concerns people all over the world, but, as usual, it is above all people in the formerly so-called third-world countries and in rural areas who suffer from it: the poor and the less educated. More than 80% of all computers with access to the Internet are situated in larger cities.

"The cost of the information today consists not so much of the creation of content, which should be the real value, but of the storage and efficient delivery of information, that is in essence the cost of paper, printing, transporting, warehousing and other physical distribution means, plus the cost of the personnel manpower needed to run these `extra' services ... Realizing an autonomous distributed networked society, which is the real essence of the Internet, will be the most critical issue for the success of the information and communication revolution of the coming century or millennium." (Izumi Aizu)
Challenges for Copyright by ICT: Internet Service Providers

ISPs (Internet Service Providers) (and to a certain extent also telecom operators) are involved in the copyright debate primarily because of their role in the transmission and storage of digital information. Problems arise particularly concerning caching, information residing on systems at the direction of users, and transitory communication.

Caching

Caching, it is argued, could cause damage because the copies in the cache are not necessarily the most current ones, and the delivery of outdated information to users could deprive website operators of accurate "hit" information (information about the number of requests for a particular material on a website), from which advertising revenue is frequently calculated. Similarly, harms such as defamation or infringement that existed on the original page may propagate for years until flushed from each cache where they have been replicated. Although different concepts, issues similar to caching arise with mirroring (establishing an identical copy of a website on a different server), archiving (providing a historical repository for information, such as with newsgroups and mailing lists), and full-text indexing (the copying of a document for loading into a full-text or nearly full-text database which is searchable for keywords or concepts). Under a literal reading of some copyright laws, caching constitutes an infringement of copyright. Yet recent legislation like the DMCA (Digital Millennium Copyright Act) limits the liability of ISPs for system caching if certain conditions are met.

Information Residing on Systems or Networks at the Direction of Users

ISPs may be confronted with problems if infringing material on the websites of users is hosted on their systems. Although some copyright laws like the DMCA provide for limitations on the liability of ISPs if certain conditions are met, it is as yet unclear whether ISPs should generally be accountable for the storage of infringing material (even if they do not have actual knowledge of it) or whether exceptions should be established under specific circumstances.

Transitory Communication

In the course of transmitting digital information from one point on a network to another, ISPs act as a data conduit. If a user requests information, the ISP engages in its transmission, the provision of a connection, or its routing. In the case of a person sending infringing material over a network with the ISP merely providing the facilities for the transmission, it is widely held that the ISP should not be liable for infringement. Yet some copyright laws like the DMCA provide for a limitation of liability (which also covers the intermediate and transient copies that are made automatically in the operation of a network) only if the ISP's activities meet certain conditions.

For more information on the copyright infringement liability of ISPs see: Harrington, Mark E.: On-line Copyright Infringement Liability for Internet Service Providers: Context, Cases & Recently Enacted Legislation.
History: "Indigenous Tradition"

In preliterate societies the association of rhythmic or repetitively patterned utterances with supernatural knowledge endures well into historic times. As in the Southern tradition, knowledge is passed from one generation to another.
Timeline 1600 - 1900 AD

17th century Cardinal Richelieu invents an encryption tool called the grille, a card with holes: the message is written on paper through the holes of the card. Afterwards he removes the card and fills in the blanks, so the message looks like an ordinary letter. The recipient needs to own the same card - Bishop John Wilkins invents a cryptologic system looking like music notes. In a book he describes several forms of steganographic systems like secret inks, but also the string cipher. He mentions the so-called Pig Latin, a spoken way of encryption that was already used by the ancient Indians - the English scientist, magician and astrologer John Dee also occupies himself with secret writing
1605/1623 Sir Francis Bacon (= Francis Tudor = William Shakespeare?) writes several works containing ideas about cryptography. One of his most important pieces of advice is to use ciphers in such a way that no one gets suspicious that the text could be enciphered. For this the steganogram was the best method, very often used in poems. The attempt to decipher Shakespeare's sonnets (in the 20th century) led to the idea that his works had originally been written by Francis Bacon.
1671 Leibniz invents a calculating machine that uses the binary scale which, in more advanced form of course, we still use today, e.g. in the ASCII code
18th century this is the time of the Black Chambers of espionage in Europe, Vienna having one of the most effective ones, called the "Geheime Kabinettskanzlei", headed by Baron Ignaz von Koch. Its task is to read through international diplomatic mail, copy letters and return them to the post office the same morning. Supposedly about 100 letters are dealt with each day.
1790's Thomas Jefferson and Robert Patterson invent a wheel cipher
1799 the Rosetta Stone is found and makes it possible to decipher the Egyptian hieroglyphs
1832 or 1838 Sam Morse develops the Morse code, which actually is not a code but an enciphered alphabet of short and long signals. The first Morse-code message is sent by telegraph in 1844.
1844 the invention of the telegraph changes cryptography very much, as codes become absolutely necessary
1854 the Playfair cipher is invented by Sir Charles Wheatstone
1859 for the first time a tomographic cipher is described
1861 Friedrich W. Kasiski does a cryptanalysis of the Vigenère ciphers, which had been supposed to be uncrackable for ages
1891 Major Etienne Bazeries creates a new version of the wheel cipher, which is rejected by the French Army
1895 the invention of the radio changes cryptography tasks again and makes them even more important
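Kasiski's attack on the Vigenère cipher rests on one observation: identical plaintext fragments enciphered with the same stretch of the repeating key produce identical ciphertext fragments, so the distances between such repetitions tend to be multiples of the key length. A toy Python sketch of that counting step (the ciphertext here is fabricated so that "LXFOPV" repeats):

    from functools import reduce
    from math import gcd

    def repeat_distances(ciphertext: str, n: int = 3):
        # Record the distance between recurrences of every n-letter sequence.
        first, distances = {}, []
        for i in range(len(ciphertext) - n + 1):
            seq = ciphertext[i:i + n]
            if seq in first:
                distances.append(i - first[seq])
            else:
                first[seq] = i
        return distances

    dists = repeat_distances("LXFOPVEFRNHRLXFOPV")
    print(dists, reduce(gcd, dists))   # -> [12, 12, 12, 12] 12

The greatest common divisor of the distances, 12 here, suggests the key length (or a multiple of it), after which each key position can be attacked as a simple substitution.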
Gerard J. Holzmann and Bjoern Pehrson, The Early History of Data Networks

This book gives a fascinating glimpse of the many documented attempts throughout history to develop effective means for long-distance communications. Large-scale communication networks are not a twentieth-century phenomenon. The oldest attempts date back to millennia before Christ and include ingenious uses of homing pigeons, mirrors, flags, torches, and beacons. The first true nationwide data networks, however, were being built almost two hundred years ago. At the turn of the 18th century, well before the electromagnetic telegraph was invented, many countries in Europe already had fully operational data communications systems with altogether close to one thousand network stations. The book shows how the so-called information revolution started in 1794, with the design and construction of the first true telegraph network in France, Chappe's fixed optical network.

http://www.it.kth.se/docs/early_net/
MIT

The MIT (Massachusetts Institute of Technology) is a privately controlled coeducational institution of higher learning famous for its scientific and technological training and research. It was chartered by the state of Massachusetts in 1861 and became a land-grant college in 1863. During the 1930s and 1940s the institute evolved from a well-regarded technical school into an internationally known center for scientific and technical research. In the days of the Great Depression, its faculty established prominent research centers in a number of fields, most notably analog computing (led by Vannevar Bush).
Moral rights

Authors of copyrighted works enjoy (besides their economic rights) moral rights: in particular the right to claim authorship of the work and the right to object to any distortion, mutilation, or other modification of it that would be prejudicial to their honor or reputation.
Computer programming language

A computer programming language is any of various languages for expressing a set of detailed instructions for a digital computer. Such a language consists of characters and rules for combining them into symbols and words.
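Even a one-line program illustrates the definition: characters are combined, according to the language's rules, into symbols that the computer can act on. Python serves as the example here; any programming language would do:

    # The characters p-r-i-n-t form one symbol; the parentheses and the
    # quoted text follow the combination rules of the language; together
    # they instruct the computer to output a line of text.
    print("Hello, world")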
Caching

Caching generally refers to the process of making an extra copy of a file or a set of files for more convenient retrieval. On the Internet caching of third-party files can occur either locally on the user's client computer (in the RAM or on the hard drive) or at the server level ("proxy caching"). A requested file that has been cached will then be delivered from the cache rather than a fresh copy being retrieved over the Internet.
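A minimal sketch of that retrieval logic in Python (fetch_from_internet is an invented stand-in for a real download, and a real cache would also expire stale copies):

    cache = {}   # maps a URL to previously retrieved content

    def fetch_from_internet(url: str) -> str:
        # Placeholder for an actual network retrieval.
        return "<content of %s>" % url

    def get(url: str) -> str:
        if url in cache:               # cache hit: serve the extra copy,
            return cache[url]          # no retrieval over the Internet
        content = fetch_from_internet(url)
        cache[url] = content           # keep an extra copy for next time
        return content

    get("http://example.com/page.html")   # fetched and cached
    get("http://example.com/page.html")   # delivered from the cache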
WIPO

The World Intellectual Property Organization (WIPO) is one of the specialized agencies of the United Nations (UN). It was designed to promote the worldwide protection of both industrial property (inventions, trademarks, and designs) and copyrighted materials (literary, musical, photographic, and other artistic works); it was established by a convention signed in Stockholm in 1967, which came into force in 1970. The aims of WIPO are threefold: first, through international cooperation it promotes the protection of intellectual property; secondly, it supervises administrative cooperation between the Paris, Berne, and other intellectual property unions regarding agreements on trademarks, patents, and the protection of artistic and literary works; and thirdly, through its registration activities it provides direct services to applicants for, or owners of, industrial property rights.
Proxy Servers

A proxy server is a server that acts as an intermediary between a workstation user and the Internet so that security, administrative control, and caching service can be ensured. A proxy server receives a request for an Internet service (such as a Web page request) from a user. If it passes filtering requirements, the proxy server, assuming it is also a cache server, looks in its local cache of previously downloaded Web pages. If it finds the page, it returns it to the user without needing to forward the request to the Internet. If the page is not in the cache, the proxy server, acting as a client on behalf of the user, uses one of its own IP addresses to request the page from the server out on the Internet and then forwards it to the user.

Source: Whatis.com
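The decision sequence described above (filter the request, check the local cache, otherwise fetch on the user's behalf) can be sketched in Python as follows; is_allowed and fetch are invented placeholders for a real proxy's filtering rules and networking code:

    local_cache = {}

    def is_allowed(url: str) -> bool:
        # Administrative control: refuse requests to blocked hosts.
        return "blocked.example" not in url

    def fetch(url: str) -> str:
        # The proxy acting as a client on behalf of the user.
        return "<page at %s>" % url

    def handle_request(url: str) -> str:
        if not is_allowed(url):        # 1. filtering requirements
            return "403 Forbidden"
        if url in local_cache:         # 2. look in the local cache;
            return local_cache[url]    #    a hit needs no forwarding
        page = fetch(url)              # 3. miss: request it from the
        local_cache[url] = page        #    origin server and remember it
        return page

    print(handle_request("http://example.com/"))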
Cooperative Association for Internet Data Analysis (CAIDA)

Based at the University of California's San Diego Supercomputer Center, CAIDA supports cooperative efforts among the commercial, government, and research communities aimed at promoting a scalable, robust Internet infrastructure. It is sponsored by, among others, the National Science Foundation and DARPA.
Adi Shamir

Adi Shamir was one of the three persons in the team that invented the RSA public-key cryptosystem, together with Ronald L. Rivest and Leonard M. Adleman (the "S" in RSA stands for Shamir).
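The asymmetry at the heart of RSA can be shown with deliberately tiny numbers. This is a toy sketch only; real keys use primes hundreds of digits long, and pow(e, -1, phi) requires Python 3.8 or later:

    # Toy RSA key generation with small primes (insecure, illustration only).
    p, q = 61, 53
    n = p * q                     # 3233, the public modulus
    phi = (p - 1) * (q - 1)       # 3120
    e = 17                        # public exponent, coprime to phi
    d = pow(e, -1, phi)           # 2753, the private exponent

    message = 65
    ciphertext = pow(message, e, n)          # encrypt with the public key (e, n)
    assert pow(ciphertext, d, n) == message  # decrypt with the private key (d, n)

Anyone may encrypt with the public pair (e, n), but only the holder of d can decrypt; recovering d from (e, n) requires factoring n, which is infeasible for large primes.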