In Search of Reliable Internet Measurement Data

Newspapers and magazines frequently report growth rates of Internet usage, numbers of users, hosts, and domains that seem to be beyond all expectations, with growth expected to accelerate exponentially. However, Internet measurement data are anything but reliable, and often quite fantastic constructs that are nevertheless jumped upon by many media and decision makers, because the technical difficulties of measuring Internet growth or usage make reliable measurement techniques all but impossible. Equally, predictions that the Internet is about to collapse lack any foundation whatsoever.

Size and Growth

In fact, "today's Internet industry lacks any ability to evaluate trends, identify performance problems beyond the boundary of a single ISP (Internet service provider, M. S.), or prepare systematically for the growing expectations of its users. Historic or current data about traffic on the Internet infrastructure, maps depicting ... there is plenty of measurement occurring, albeit of questionable quality", says K. C. Claffy in the paper Internet measurement and data analysis: topology, workload, performance and routing statistics (http://www.caida.org/Papers/Nae/, Dec 6, 1999). Claffy is not an average researcher, but a founder of the well-known Cooperative Association for Internet Data Analysis (CAIDA), so this statement is a slap in the face of all market researchers stating otherwise.

So what are the reasons for this inability to evaluate trends or to identify performance problems beyond the boundary of a single ISP? First, in early 1995, almost simultaneously with the worldwide breakthrough of the World Wide Web, the NSFNET backbone was retired and the Internet's infrastructure was privatized, so that the comprehensive traffic statistics once collected centrally for the NSFNET ceased to be gathered. "There are many estimates of the size and growth rate of the Internet that are either implausible, or inconsistent, or even clearly wrong", state K. G. Coffman and Andrew Odlyzko, both of AT&T Labs - Research.

What is measured and what methods are used?
Many studies are devoted to the number of users; others look at the number of computers connected to the Internet or count hosts and registered domains. You get a clue to their focus when you bear in mind that the Internet is just one of many networks of networks; it is only a part of the universe of computer networks. Additionally, the Internet has public (unrestricted) and private (restricted) areas. Most studies consider only the public Internet; Coffman and Odlyzko consider the long-distance private line networks too, such as corporate networks.

Hosts

One method is to count hosts, the computers connected to the Internet. Despite the small sample such surveys rest on, this method has at least one flaw: the number of machines says little about the number of people actually using them.

Internet Weather

Like daily weather, traffic on the Internet - the conditions for data flows - is monitored too, hence called Internet weather. One of the most famous of these reports is the Internet Weather Report compiled by Matrix Information and Directory Services (MIDS).

Hits, Page Views, Visits, and Users

Let us take a look at how these hot lists of most visited Web sites may be compiled. I say "may be" because the methods used for data retrieval are mostly not fully disclosed. For some years it was seemingly common sense to report requested files from a Web site, so-called "hits". This method is not very useful, because a document can consist of several files: graphics, text, etc. Just compile a document from some text and some twenty flashy graphics files, put it on the Web, and you get twenty-one hits per visit; the more graphics you add, the more hits and traffic (though not automatically more visitors to your Web site) you generate. In the meantime page views, also called page impressions, are preferred and are said to avoid these flaws. But even page views are not reliable: users might share computers and the corresponding addresses, so page views cannot be equated with persons. Especially the editors of some electronic journals (e-journals) rely on page views as a kind of ratings or circulation measure, Rick Marin reports in the New York Times. More advanced, but at best only slightly better, is counting visits: the access of several pages of a Web site during one session. The problems already mentioned apply here too.
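The difference between hits, page views, and visits can be made concrete with a small script. This is a minimal sketch over an invented, simplified log format (timestamp, client address, requested file) - real server logs such as Apache's carry more fields, and the 30-minute session timeout is just a common convention, not a standard.

```python
from datetime import datetime, timedelta

# Hypothetical, simplified access-log entries: (timestamp, client address, file).
LOG = [
    ("2000-01-01 10:00:00", "10.0.0.1", "/index.html"),
    ("2000-01-01 10:00:01", "10.0.0.1", "/logo.gif"),
    ("2000-01-01 10:00:01", "10.0.0.1", "/photo.jpg"),
    ("2000-01-01 10:05:00", "10.0.0.1", "/story.html"),
    ("2000-01-01 12:00:00", "10.0.0.2", "/index.html"),
]

def hits(log):
    """Every requested file counts - text, graphics, everything."""
    return len(log)

def page_views(log):
    """Only documents count, not the embedded graphics files."""
    return sum(1 for _, _, path in log if path.endswith(".html"))

def visits(log, timeout=timedelta(minutes=30)):
    """Group requests per client address; a gap longer than `timeout`
    starts a new visit. The flaws described above still apply: one
    address may hide many users behind a shared computer or proxy."""
    last_seen, count = {}, 0
    for ts, addr, _ in log:
        t = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")
        if addr not in last_seen or t - last_seen[addr] > timeout:
            count += 1
        last_seen[addr] = t
    return count

print(hits(LOG), page_views(LOG), visits(LOG))  # 5 3 2
```

Note how the single document-plus-graphics visit at 10:00 already produces three hits but only one page view - exactly the inflation described above.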
To avoid them, newspapers, for example, establish registration services, which require password authentication and therefore prove to be a kind of access obstacle. But there is a different reason for these services: for content providers, users are otherwise virtual users, not unique persons, because, as already mentioned, computers and addresses are shared; registration allows visitors to be identified individually. If you like to play around with Internet statistics instead, you can use Robert Orenstein's service.

Measuring the Density of Internet Addresses

Dodge and Shiode used data on the ownership of IP addresses to measure and map the density of Internet addresses.
Intellectual Property: A Definition

Intellectual property, very generally, relates to products resulting from intellectual activity in the industrial, scientific, literary, and artistic fields. Traditionally intellectual property is divided into two branches:

1) Industrial property
   a) Patents
   b) Trademarks
   c) Industrial designs
   d) Unfair competition (trade secrets)
   e) Geographical indications (indications of source and appellations of origin)

2) Copyright

The protection of intellectual property is guaranteed through a variety of laws, which grant the creators of intellectual goods and services certain time-limited rights to control the use made of their products. Those rights apply to the intellectual creation as such, and not to the physical object in which the work may be embodied.
Late 1950s - Early 1960s: Second Generation Computers

An important change in the development of computers occurred in 1948 with the invention of the transistor, and among the first large-scale machines to take advantage of transistor technology were early supercomputers such as Stretch by IBM. Throughout the early 1960s there were a number of commercially successful computers (for example the IBM 1401) used in business, universities, and government, and by 1965 most large firms routinely processed financial information by using computers. Decisive for the success of computers in business were the stored program concept and the development of sophisticated high-level programming languages such as FORTRAN and COBOL.
Databody convergence

When the citizen becomes a consumer - "the rise of the citizen as a consumer", as the phrase goes - the state must become a business. In the data body business, the key word behind this new identity of government is "outsourcing": functions that are not considered core functions of government activity are put into the hands of private contractors.

There have long been instances where privately owned data companies, e.g. credit card companies, are allowed access to public records, e.g. public registries or electoral rolls. In a normal credit card transaction, for example, credit card companies have had access to public records in order to verify the identity of a customer. In the UK, citizens' personal data stored on the Electoral Roll have been used for commercial purposes for a long time. The new British Data Protection Act now allows people to "opt out" of this kind of commercialisation - legislation that has prompted protests on the part of the data industry. While this may serve as an example of increased public awareness of privacy issues, the trend towards outsourcing seems to lead to a complete breakdown of the barriers between commercial and public use of personal data.

Governments increasingly outsource work that is not considered a core function of government, e.g. cooking meals in hospitals or mowing lawns in public parks. Such peripheral activities marked a first step of outsourcing. In a further step, governmental functions were divided between executive and judgemental functions, and the executive functions were increasingly entrusted to private agencies. For these agencies to be able to carry out the work assigned to them, they need data - data that once were stored in public places, and whose handling was therefore subject to democratic accountability. Outsourcing has produced gains in efficiency, and a decrease of accountability.
Outsourced data are less secure, and what use they are put to is difficult to control. The world's largest data corporation, EDS, is a case in point. Technically, the linking of different systems is already possible; it would also create more efficiency, which means generate more income. The question, then, is whether concerns about democracy will prevent it from happening. But what the EDS example shows is something that applies everywhere: the data industry, whether by intention or by default, is a project with profound political implications. The current that drives the global economy deeper and deeper into becoming a global data body economy may be too strong to be stopped by conventional means.

However, the convergence of political and economic data bodies also has technological roots. Politically motivated surveillance and economically motivated data collection are located in the same area of information and communication technologies. For example, monitoring Internet use requires more or less the same technical equipment whether done for political or economic purposes, and data mining and data warehousing techniques are almost the same. Creating transparency of citizens and customers is therefore a common objective of intelligence services and the data body industry. Given that data are exchanged in electronic networks, compatibility among the various systems is essential - another factor that encourages "leaks" between state-run intelligence networks and the private data body business. And finally, given the secretive nature of state intelligence and of commercial data capturing, there is little transparency: both structures occupy an opaque zone.
Who owns the Internet and who is in charge?

The Internet/Matrix still depends heavily on public infrastructure, and there is no dedicated owner of the whole Internet/Matrix, but the networks it consists of are run and owned by corporations and institutions. Access to the Internet is usually provided by Internet Service Providers (ISPs) for a monthly fee. Each network is owned by someone and has a network operation center from where it is centrally controlled, but the Internet/Matrix is not owned by any single authority and has no network operation center of its own. No legal authority determines how and where networks can be connected together; this is something the managers of networks have to agree on. So there is no way to ever gain ultimate control of the Matrix/Internet.

The partly decentralized architecture and administration of the Matrix/Internet do not imply that there are no authorities for oversight and for the common standards that sustain basic operations: there are, for example, authorities for IP number and domain name registrations. Ever since, the organizational structures for Internet administration have changed according to the needs to be addressed. Up to now, administration of the Internet has been a collaborative undertaking of several loosely cooperating bodies with no strict hierarchy of authority. These bodies make decisions on common guidelines and standards. Amazingly, there seems to be an unspoken and uncodified, yet widely accepted, consensus about what is allowed and what is forbidden on the Internet; codifications, such as the so-called Netiquette, remain exceptions. Sometimes violations not already subject to law become part of governmental regulations, as was the case with spamming, the unsolicited sending of advertising mail messages. But engineers proved to be quicker and developed software against spamming. So, in some respects, the Internet is indeed self-regulating.
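The registries mentioned above administer who holds which domain name and which block of IP numbers; the translation of a name into a number is done by the resolver on every connected machine. A minimal sketch using Python's standard library - "localhost" is used here because it resolves without any network access:

```python
import socket

# Domain names are translated into IP numbers by the resolver.
# "localhost" is the conventional name for the machine's own
# loopback interface and resolves even when offline.
ip = socket.gethostbyname("localhost")
print(ip)  # typically 127.0.0.1
```

Replacing "localhost" with any registered domain name shows the same mechanism consulting the distributed Domain Name System instead of the local hosts file.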
Atrocity Stories

Atrocity stories are nothing but lies; the term "atrocity stories" simply pretends to be more diplomatic. Their purpose is to destroy an image of the enemy and to create a new one, mostly a bad one. The story creating the image is not necessarily made up completely; it can also be a true story bent in a certain direction. The most important thing about atrocity stories is to stay within the limits of plausibility: even if the whole story is made up, it must be probable or at least possible, following existing rumors. It may be most successful if a rumor is spread on purpose some time before the atrocity story is launched, because as soon as something seems familiar, it is easier to believe.
Machine language

Initially computer programmers had to write instructions in machine language. This meant composing programs directly as the numeric codes - ultimately strings of binary digits - that a particular processor can execute, a tedious and error-prone task that tied every program to one machine. Assembly languages and high-level programming languages were later developed to spare programmers this work.
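What "programs as bare numbers" means can be illustrated with a toy machine invented for this purpose (its opcodes are not those of any real processor): the program is nothing but a list of numbers, and only the interpreter gives them meaning.

```python
# Toy accumulator machine, invented for illustration only.
# Opcodes: 1 = LOAD value, 2 = ADD value, 3 = STORE result, 0 = HALT.
PROGRAM = [1, 6, 2, 7, 3, 0]   # means: LOAD 6; ADD 7; STORE; HALT

def run(program):
    acc, out, pc = 0, None, 0   # accumulator, output, program counter
    while True:
        op = program[pc]
        if op == 0:                          # HALT: stop and return output
            return out
        elif op == 1:                        # LOAD the following value
            acc = program[pc + 1]; pc += 2
        elif op == 2:                        # ADD the following value
            acc += program[pc + 1]; pc += 2
        elif op == 3:                        # STORE the accumulator
            out = acc; pc += 1

print(run(PROGRAM))  # 13
```

A programmer writing for a real first-generation machine faced exactly this: sequences like `1, 6, 2, 7, 3, 0`, with nothing but a reference card to say which number meant which operation.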
Internet Society

Founded in 1992, the Internet Society is an umbrella organization of several mostly self-organized organizations dedicated to addressing the social, political, and technical issues which arise as a result of the evolution and the growth of the Net. Its most important subsidiary organizations are the Internet Architecture Board (IAB) and the Internet Engineering Task Force (IETF). Its members comprise companies, government agencies, foundations, and individuals. The Internet Society is governed by elected trustees.
Scientology

Officially the Church of Scientology, a religio-scientific movement developed in the United States in the 1950s by the author L. Ron Hubbard (1911-86). The Church of Scientology was formally established in the United States in 1954 and was later incorporated in Great Britain and other countries. The scientific basis claimed by the church for its diagnostic and therapeutic practice is disputed, and the church has been criticized for the financial demands it makes on its followers. From the 1960s the church and various of its officials or former officials faced government prosecutions as well as private lawsuits on charges of fraud, tax evasion, financial mismanagement, and conspiring to steal government documents, while the church in turn claimed it was being persecuted by government agencies and by established medical organizations. Some former Scientology officials have charged that Hubbard used the tax-exempt status of the church to build a profitable business empire.
Adi Shamir

Adi Shamir was one of the three cryptographers who invented the RSA public-key algorithm in 1977, together with Ronald Rivest and Leonard Adleman; the "S" in RSA stands for Shamir.
Virtual Private Networks

Virtual Private Networks provide secured connections to a corporate site over a public network such as the Internet. Data transmitted through these secure connections are encrypted and therefore have to be decrypted before they can be read. These networks are called virtual because connections are set up only when you connect to a corporate site; they do not rely on dedicated lines, and they support mobile use.
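The encrypt-in-transit idea can be shown in miniature. The XOR scheme below is a deliberately insecure toy, chosen only because it fits in a few lines; real VPNs use vetted ciphers (for example AES inside IPsec or TLS tunnels), not anything like this.

```python
from itertools import cycle

# Toy illustration of "encrypted in transit, decrypted to read":
# bytes are scrambled with a shared key before crossing the public
# network, and the same key recovers them at the corporate site.
# XOR with a repeating key is NOT secure - illustration only.
def xor_stream(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"shared-secret"
message = b"quarterly figures"
ciphertext = xor_stream(message, key)    # what the public network sees
plaintext = xor_stream(ciphertext, key)  # readable again after decryption

print(ciphertext != message, plaintext == message)  # True True
```

The property the sketch demonstrates is the one stated above: an eavesdropper on the public network sees only the scrambled bytes, while the holder of the shared key reads the original data.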
PGP

PGP (Pretty Good Privacy) is a widely used program for encrypting and digitally signing messages, written by Phil Zimmermann and first released in 1991. It combines conventional (symmetric) and public-key cryptography: a message is encrypted with a one-time symmetric key, which is in turn encrypted with the recipient's public key.
Caching

Caching is a mechanism that attempts to decrease the time it takes to retrieve data by storing a copy at a closer location.
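The mechanism fits in a few lines of Python. A minimal sketch: the dictionary stands in for the "closer location", and the slow function for the distant origin; the names and the simulated delay are invented for illustration.

```python
import time

# Stands in for the distant origin server holding the original data.
REMOTE = {"/index.html": "<html>hello</html>"}

def fetch_remote(path):
    """Simulate a slow retrieval from the original, distant location."""
    time.sleep(0.01)
    return REMOTE[path]

cache = {}                                # the copy kept at a closer location

def fetch(path):
    if path not in cache:                 # cache miss: go the long way once...
        cache[path] = fetch_remote(path)  # ...and store a copy nearby
    return cache[path]                    # cache hit: served from the copy

fetch("/index.html")         # the first request pays the full retrieval cost
print(fetch("/index.html"))  # repeat requests are answered from the cache
```

Web browsers and proxy servers apply exactly this pattern to recently retrieved pages; the hard part in practice is deciding when a stored copy has gone stale.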
Roman smoke telegraph network, 150 A.D.

The Roman smoke-signal network consisted of towers within visible range of each other and had a total length of about 4500 kilometers. It was used for military signaling. Similar optical telegraph networks, using fire and torch signals, existed in ancient Greece.