Legal Protection: European Union

Intellectual property rights are also significant within the EU's goal of establishing a European single market. The European Commission therefore aims at harmonizing the respective national laws of the EU member states and at a generally more effective protection of intellectual property at the international level. Over the years it has adopted a variety of Conventions and Directives concerned with different aspects of the protection of industrial property as well as copyright and neighboring rights.

An overview of EU activities relating to intellectual property protection is available on the website of the European Commission (DG Internal Market): http://www.europa.eu.int/comm/internal_market/en/intprop/intprop/index.htm

TEXTBLOCK 1/7 // URL: http://world-information.org/wio/infostructure/100437611725/100438659574
 
History: "Indigenous Tradition"

In preliterate societies the association of rhythmic or repetitively patterned utterances with supernatural knowledge endured well into historic times. Knowledge is passed from one generation to the next. As in the Southern tradition, intellectual property rights are rooted in a concept of 'collective' or 'communal' intellectual property that exists in perpetuity and is not limited to the life of an individual creator plus some number of years after his or her death. Often the rights are exercised by only one individual in each generation, frequently through matrilineal descent.


TEXTBLOCK 2/7 // URL: http://world-information.org/wio/infostructure/100437611725/100438659557
 
Databody convergence

In the phrase "the rise of the citizen as a consumer", to be found on the EDS website, the cardinal political problem posed by the databody industry is summarised: the convergence of commercial and political interests in the data body business, the convergence of bureaucratic and commercial data bodies, the erosion of privacy, and the consequent undermining of democratic politics by private business interests.

When the citizen becomes a consumer, the state must become a business. In the data body business, the key word behind this new identity of government is "outsourcing". Functions that are not considered core functions of government activity are put into the hands of private contractors.

There have long been instances where privately owned data companies, e.g. credit card companies, are allowed access to public records such as public registries or electoral rolls. In a normal credit card transaction, for example, credit card companies have had access to public records in order to verify the identity of a customer. In the UK, citizens' personal data stored on the Electoral Roll have been used for commercial purposes for a long time. The new British Data Protection Act now allows people to "opt out" of this kind of commercialisation - legislation that has prompted protests on the part of the data industry: Experian has claimed to lose LST 500 mn as a consequence of this restriction - a figure that, even if exaggerated, may help to understand what the value of personal data actually is.

While this may serve as an example of increased public awareness of privacy issues, the trend towards the outsourcing of government functions seems to lead to a complete breakdown of the barriers between commercial and public use of personal data.

Governments increasingly outsource work that is not considered a core function of government, e.g. cooking meals in hospitals or mowing lawns in public parks. Such peripheral activities marked a first step of outsourcing. In a further step, governmental functions were divided into judgemental and executive functions, and the executive functions were increasingly entrusted to private agencies. For these agencies to be able to carry out the work assigned to them, they need data - data that were once stored in public places, and whose handling was therefore subject to democratic accountability. Outsourcing has produced gains in efficiency, but also a decrease in accountability. Outsourced data are less secure, and what use they are put to is difficult to control.

The world's largest data corporation, EDS, is also among the foremost outsourcing companies. In an article about EDS' involvement in government outsourcing in Britain, Simon Davies shows how the general trend towards outsourcing, combined with advances in computer technology, allows companies like EDS, outside of any public accountability, to create something like blueprints for the societies of the 21st century. But the problem of accountability is not the only one to be considered in this context. As Davies argues, the data business is taking on a momentum of its own: "a ruthless company could easily hold a government to ransom". As the links between government agencies and citizens thin out, however, the links among the various agencies might increase. Linking the various government information systems would amount to a further increase in efficiency, and a further undermining of democracy. The latter, after all, relies upon the separation of powers - matching government information systems would therefore pave the way to a kind of electronic totalitarianism that has little to do with the ideological bent of George Orwell's 1984 vision, but operates on purely technocratic principles.

Technically, the linking of different systems is already possible. It would also create more efficiency, which means generating more income. Whether concerns about democracy will prevent it from happening remains an open question.

But what the EDS example shows is something that applies everywhere: the data industry is, whether by intention or by default, a project with profound political implications. The current that drives the global economy deeper and deeper into becoming a global data body economy may be too strong to be stopped by conventional means.

However, the convergence of political and economic data bodies also has technological roots. The problem is that politically motivated surveillance and economically motivated data collection are located in the same area of information and communication technologies. For example, monitoring Internet use requires more or less the same technical equipment whether it is done for political or economic purposes. Data mining and data warehousing techniques are almost the same. Creating transparency of citizens and customers is therefore a common objective of intelligence services and the data body industry. Given that data are exchanged in electronic networks, compatibility among the various systems is essential. This is another factor that encourages "leaks" between state-run intelligence networks and the private data body business. And finally, given the secretive nature of state intelligence and of commercial data capturing, there is little transparency. Both structures occupy an opaque zone.

TEXTBLOCK 3/7 // URL: http://world-information.org/wio/infostructure/100437611761/100438659769
 
Timeline Cryptography - Introduction

Besides oral conversation and written language, many other means of transporting information are known: the bush telegraph, drums, smoke signals, etc. Those methods are not cryptography, but they still require encoding and decoding, which means that the history of language, the history of communication and the history of cryptography are closely connected to each other.
The timeline gives an insight into the endless fight between enciphering and deciphering. The reasons for both can be found in public as well as private concerns, though they are mostly connected to military maneuvers and/or political tasks.

One of the most important researchers on Cryptography through the centuries is David Kahn; many parts of the following timeline originate from his work.

TEXTBLOCK 4/7 // URL: http://world-information.org/wio/infostructure/100437611776/100438658824
 
Biometrics applications: physical access

This is the largest area of application of biometric technologies, and the one with the most direct lineage to the feudal gate keeping system. Initially used mainly in military and other "high security" territories, physical access control by biometric technology is spreading into a much wider field of application. Biometric access control technologies are already being used in schools, supermarkets, hospitals and commercial centres, where they are used to manage the flow of personnel.

Biometric technologies are also used to control access to political territory, as in immigration (airports, Mexico-USA border crossing). In this case, they can be coupled with camera surveillance systems and artificial intelligence in order to identify potential suspects at unmanned border crossings. Examples of such uses in remote video inspection systems can be found at http://www.eds-ms.com/acsd/RVIS.htm

A gate keeping system for airports relying on digital fingerprint and hand geometry is described at http://www.eds-ms.com/acsd/INSPASS.htm. This is another technology which allows separating "low risk" travellers from "other" travellers.

An electronic reconstruction of feudal gate keeping capable of singling out high-risk travellers from the rest is already applied at various border crossing points in the USA. "All enrolees are compared against national lookout databases on a daily basis to ensure that individuals remain low risk". As a side benefit, the economy of time generated by the inspection system has meant that "drug seizures ... have increased since Inspectors are able to spend more time evaluating higher risk vehicles".

However, biometric access control can not only prevent people from gaining access to a territory or building, it can also prevent them from getting out of buildings, as in the case of prisons.

TEXTBLOCK 5/7 // URL: http://world-information.org/wio/infostructure/100437611729/100438658838
 
In Search of Reliable Internet Measurement Data

Newspapers and magazines frequently report growth rates of Internet usage, numbers of users, hosts, and domains that seem to be beyond all expectations. Growth rates are expected to accelerate exponentially. However, Internet measurement data are anything but reliable and often quite fantastic constructs, which are nevertheless jumped upon by many media and decision makers, because the technical difficulties of measuring Internet growth or usage make reliable measurement techniques nearly impossible.

Equally, predictions that the Internet is about to collapse lack any foundation whatsoever. The researchers at the Internet Performance Measurement and Analysis Project (IPMA) compiled a list of news items about Internet performance and statistics and a few responses to them by engineers.

Size and Growth

In fact, "today's Internet industry lacks any ability to evaluate trends, identify performance problems beyond the boundary of a single ISP (Internet service provider, M. S.), or prepare systematically for the growing expectations of its users. Historic or current data about traffic on the Internet infrastructure, maps depicting ... there is plenty of measurement occurring, albeit of questionable quality", says K. C. Claffy in her paper Internet measurement and data analysis: topology, workload, performance and routing statistics (http://www.caida.org/Papers/Nae/, Dec 6, 1999). Claffy is not an average researcher; she founded the well-known Cooperative Association for Internet Data Analysis (CAIDA).

So her statement is a slap in the face of all market researchers stating otherwise.
In a certain sense this is ridiculous, because network measurement has been an important task ever since the inception of the ARPANet, the forerunner of the Internet. The very first ARPANet site was established at the University of California, Los Angeles, and was intended to be the measurement site. There, Leonard Kleinrock worked on the development of measurement techniques used to monitor the performance of the ARPANet (cf. Michael and Ronda Hauben, Netizens: On the History and Impact of the Net). And in October 1991, on behalf of the Internet Activities Board, Vinton Cerf proposed guidelines for researchers considering measurement experiments on the Internet. This was done for two reasons. First, measurement would be critical for future development, evolution and deployment planning. Second, Internet-wide activities have the potential to interfere with normal operation and must be planned with care and made widely known beforehand.
So what are the reasons for this inability to evaluate trends and identify performance problems beyond the boundary of a single ISP? First, in early 1995, almost simultaneously with the worldwide introduction of the World Wide Web, the transition of the National Science Foundation's stewardship role over the Internet into a competitive industry (bluntly put: its privatization) left no framework for adequate tracking and monitoring of the Internet. The early ISPs were not very interested in gathering and analyzing network performance data; they were struggling to meet the demands of their rapidly increasing customer base. Secondly, we are just beginning to develop reliable tools for quality measurement and analysis of bandwidth or performance. CAIDA aims at developing such tools.
"There are many estimates of the size and growth rate of the Internet that are either implausible, or inconsistent, or even clearly wrong", state K. G. Coffman and Andrew Odlyzko, members of different departments of AT & T Labs-Research, in their paper The Size and Growth Rate of the Internet, published in First Monday. There are some sources containing seemingly contradictory information on the size and growth rate of the Internet, but "there is no comprehensive source for information". They take a well-informed and refreshing look at efforts undertaken to measure the Internet and dismantle several misunderstandings leading to incorrect measurements and estimations. Some measurements have such large error margins that you might better call them estimations, to say the least. This is partly due to the fact that data are not disclosed by every carrier and are only fragmentarily available.
What is measured and what methods are used? Many studies are devoted to the number of users; others look at the number of computers connected to the Internet or count IP addresses. Coffman and Odlyzko focus on the sizes of networks and the traffic they carry to answer questions about the size and the growth of the Internet.
You get the clue of their focus when you bear in mind that the Internet is just one of many networks of networks; it is only a part of the universe of computer networks. Additionally, the Internet has public (unrestricted) and private (restricted) areas. Most studies consider only the public Internet; Coffman and Odlyzko consider the long-distance private line networks too - the corporate networks, the Intranets - because they are convinced (that is, their assertion is put forward, but not accompanied by empirical data) that "the evolution of the Internet in the next few years is likely to be determined by those private networks, especially by the rate at which they are replaced by VPNs (Virtual Private Networks) running over the public Internet. Thus it is important to understand how large they are and how they behave."

Coffman and Odlyzko check other estimates by considering the traffic generated by residential users accessing the Internet with a modem, the traffic through public peering points (statistics for them are available through CAIDA and the National Laboratory for Applied Network Research), and by calculating the bandwidth capacity of each of the major US providers of backbone services. They compare the public Internet to private line networks and offer interesting findings. The public Internet, with an effective bandwidth of 75 Gbps as of December 1997, is currently far smaller, in both capacity and traffic, than the switched voice network. The private line networks are considerably larger in aggregate capacity than the Internet - about as large as the voice network in the U.S. (with an effective bandwidth of about 330 Gbps as of December 1997) - but they carry less traffic. On the other hand, the growth rate of traffic on the public Internet, while lower than is often cited, is still about 100% per year, much higher than for traffic on other networks. Hence, if present growth trends continue, data traffic in the U.S. will overtake voice traffic around the year 2002 and will be dominated by the Internet. In the future, growth in Internet traffic will derive predominantly from people staying online longer and from multimedia applications, which consume more bandwidth; both are reasons for unanticipated amounts of data traffic.
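
The crossover claim can be made concrete with a small arithmetic sketch. This is an illustration only: it uses the December 1997 capacity figures quoted above as stand-ins for traffic volumes and assumes a modest growth rate for the voice network, neither of which is taken from Coffman and Odlyzko's own calculation.

import math

# Illustrative stand-ins: 75 / 330 Gbps are the effective capacities quoted above
# (December 1997); the voice growth rate is an assumed value.
internet_1997 = 75.0     # Gbps, public Internet
voice_1997 = 330.0       # Gbps, switched voice network
internet_growth = 1.0    # about 100% per year, as cited above
voice_growth = 0.1       # assumed ~10% per year
# Solve internet_1997 * (1 + internet_growth)**t >= voice_1997 * (1 + voice_growth)**t for t
t = math.log(voice_1997 / internet_1997) / math.log((1 + internet_growth) / (1 + voice_growth))
print(f"crossover after roughly {t:.1f} years, i.e. around {1997 + t:.0f}")
# Because actual Internet traffic in 1997 was well below the capacity figure used here,
# the realistic crossover comes a few years later - around 2002, as cited above.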

Hosts

The Internet Software Consortium's Internet Domain Survey is one of the best-known efforts to count the number of hosts on the Internet. Happily, the ISC informs us extensively about the methods used for its measurements, a policy quite rare on the Web. For the most recent survey the number of IP addresses that have been assigned a name was counted. At first sight it looks simple to get the accurate number of hosts, but in practice an assigned IP address does not automatically correspond to an existing host. In order to find out, you have to send a kind of message to the host in question and wait for a reply. You do this with the PING utility. (For further explanations look here: Art. PING, in: Connected: An Internet Encyclopaedia) But to do this for every registered IP address is an arduous task, so ISC just pings a 1% sample of all hosts found and makes a projection to all pingable hosts. That is ISC's new method; its old method, still used by RIPE, has been to count the number of domain names that had IP addresses assigned to them, a method that proved to be not very useful because a significant number of hosts restrict download access to their domain data.
Beyond the small sample size, this method has at least one flaw: ISC's researchers only take into account network numbers that have been entered into the tables of the IN-ADDR.ARPA domain, and it is possible that not all providers know of these tables. A similar method is used for Telcordia's Netsizer.
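
As a rough illustration of this sample-and-project approach, the sketch below pings a random 1% sample of a list of candidate addresses and extrapolates the number of responding hosts to the whole list. The helper functions, the use of the system ping command and the address list are assumptions made for illustration; ISC's actual survey procedure is more elaborate.

import random
import subprocess

def is_pingable(address, timeout_s=1):
    # Send a single ICMP echo request via the system ping command (Linux-style flags).
    result = subprocess.run(["ping", "-c", "1", "-W", str(timeout_s), address],
                            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return result.returncode == 0

def estimate_hosts(addresses, sample_ratio=0.01):
    # Ping a random sample and project the count of reachable hosts to the full list.
    sample_size = max(1, int(len(addresses) * sample_ratio))
    sample = random.sample(addresses, sample_size)
    reachable = sum(1 for a in sample if is_pingable(a))
    return int(reachable / sample_size * len(addresses))

# Hypothetical usage with a list of addresses that have been assigned a name:
# named_addresses = ["192.0.2.1", "192.0.2.2", "192.0.2.3"]
# print(estimate_hosts(named_addresses))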

Internet Weather

Like the daily weather, traffic on the Internet - the conditions for data flows - is monitored too, hence the term Internet weather. One of the most famous Internet weather reports comes from The Matrix, Inc. Another one is the Internet Traffic Report, which displays traffic as values between 0 and 100 (high values indicate fast and reliable connections). For weather monitoring, response ratings from servers all over the world are used. The method is to "ping" servers (as for host counts, e. g.) and to compare response times to past ones and to the response times of servers in the same region.
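
How such a 0-100 value might be derived can be sketched as follows; this is only one plausible reading of "comparing response times to past ones", not the Internet Traffic Report's actual, undisclosed formula.

def traffic_index(current_ms, past_average_ms):
    # Map a ping response time to a 0-100 score: 100 means the server answers
    # as fast as (or faster than) its historical average, lower values mean
    # it is currently slower than usual.
    if current_ms <= 0:
        return 0
    return round(100 * min(1.0, past_average_ms / current_ms))

# Example: a server that usually answers in 80 ms but currently needs 200 ms
print(traffic_index(200, 80))   # -> 40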

Hits, Page Views, Visits, and Users

Let us take a look at how these hot lists of the most visited Web sites may be compiled. I say "may be" because the methods used for data retrieval are mostly not fully disclosed.
For some years it was seemingly common sense to report requested files from a Web site, so called "hits". A method not very useful, because a document can consist of several files: graphics, text, etc. Just compile a document from some text and some twenty flashy graphical files, put it on the Web and you get twenty-one hits per visit; the more graphics you add, the more hits and traffic (not automatically to your Web site) you generate.
In the meantime page views, also called page impressions, are preferred, which are said to avoid these flaws. But even page views are not reliable. Users might share computers and the corresponding IP addresses and host names with others, or they might access not the site itself but a cached copy from the Web browser or from the ISP's proxy server. So the server might receive just one page request although several users viewed a document.
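
The difference between hits and page views is easy to see in a Web server log. The minimal sketch below counts every requested file as a hit, but only requests for HTML documents as page views; the log format (common log format, with the requested path as the seventh field) and the file extensions treated as pages are assumptions for illustration.

# Example log line: host - - [date] "GET /page.html HTTP/1.0" 200 2326
PAGE_EXTENSIONS = (".html", ".htm")

def count_hits_and_page_views(log_lines):
    hits, page_views = 0, 0
    for line in log_lines:
        parts = line.split()
        if len(parts) < 7:
            continue
        path = parts[6]
        hits += 1                                        # every requested file is a hit
        if path.endswith(PAGE_EXTENSIONS) or path.endswith("/"):
            page_views += 1                              # only documents count as page views
    return hits, page_views

# A document made of one HTML file and twenty graphics thus yields
# twenty-one hits, but only one page view per visit.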

Especially the editors of some electronic journals (e-journals) rely on page views as a kind of ratings or circulation measure, Rick Marin reports in the New York Times. Click-through rates - a quantitative measure - are used as a substitute for something of an intrinsically qualitative nature: the importance of a column to its readers, for example. Readers may consult a journal just for a particular column and not care about the journal's other contents. Deleting this column because it does not receive enough visits may cause these readers to turn their backs on the journal.
More advanced, but just slightly better at best, is counting visits, the access of several pages of a Web site during one session. The problems already mentioned apply here too. To avoid them, newspapers, e.g., establish registration services, which require password authentication and therefore prove to be a kind of access obstacle.
But there is a different reason for these services. For content providers, users are virtual users, not unique persons, because, as already mentioned, computers and IP addresses can be shared, and the Internet is a client-server system; in a certain sense it is computers that communicate with each other. Therefore many content providers are eager to learn more about the users accessing their sites. On-line registration forms or WWW user surveys are obvious methods of collecting additional data, sure. But you cannot be sure that information given by users is reliable; you can only rely on the fact that somebody visited your Web site. Despite these obstacles, companies increasingly use data capturing. As with registration services, cookies come into play here.


If you like to play around with Internet statistics instead, you can use Robert Orenstein's Web Statistics Generator to make irresponsible predictions or visit the Internet Index, an occasional collection of seemingly statistical facts about the Internet.

Measuring the Density of IP Addresses

Measuring the Density of IP Addresses or domain names makes the geography of the Internet visible. So where on earth is the density of IP addresses or domain names highest? There is no global study of the Internet's geographical patterns available yet, but some regional studies can be found. The Urban Research Initiative and Martin Dodge and Narushige Shiode from the Centre for Advanced Spatial Analysis at University College London have mapped the Internet address space of New York, Los Angeles and the United Kingdom (http://www.geog.ucl.ac.uk/casa/martin/internetspace/paper/telecom.html and http://www.geog.ucl.ac.uk/casa/martin/internetspace/paper/gisruk98.html).
Dodge and Shiode used data on the ownership of IP addresses from RIPE, Europe's most important registry for Internet numbers.





TEXTBLOCK 6/7 // URL: http://world-information.org/wio/infostructure/100437611791/100438658352
 
Timeline BC

~ 1900 BC: Egyptian writers use non-standard Hieroglyphs in inscriptions of a royal tomb; this is supposedly not the first, but the first documented, example of written cryptography

1500 BC: an enciphered formula for the production of pottery is recorded in Mesopotamia

parts of the Hebrew writing of Jeremiah's words are written down in "atbash", which is nothing other than a reversed alphabet and one of the first famous methods of enciphering

4th century BC: Aeneas Tacticus invents a form of beacon signalling by introducing a sort of water-clock

487 BC: the Spartans introduce the so-called "skytale" for sending short secret messages to and from the battlefield

170 BC: Polybius develops a system to convert letters into numerical characters, an invention called the Polybius Chequerboard.

50-60 BC: Julius Caesar develops an enciphering method, later called the Caesar Cipher, which shifts each letter of the alphabet by a fixed amount. Like atbash, this is a monoalphabetic substitution.
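
Both atbash and the Caesar cipher are monoalphabetic substitutions: each plaintext letter is always replaced by the same ciphertext letter. A minimal sketch of the two (the shift of 3 is the value traditionally attributed to Caesar):

import string

ALPHABET = string.ascii_uppercase

def caesar(text, shift=3):
    # Shift each letter of the alphabet by a fixed amount.
    return "".join(ALPHABET[(ALPHABET.index(c) + shift) % 26] if c in ALPHABET else c
                   for c in text.upper())

def atbash(text):
    # Replace each letter with its counterpart in the reversed alphabet (A<->Z, B<->Y, ...).
    return "".join(ALPHABET[25 - ALPHABET.index(c)] if c in ALPHABET else c
                   for c in text.upper())

print(caesar("ATTACK AT DAWN"))   # -> DWWDFN DW GDZQ
print(atbash("ATTACK AT DAWN"))   # -> ZGGZXP ZG WZDM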

TEXTBLOCK 7/7 // URL: http://world-information.org/wio/infostructure/100437611776/100438659084
 
Hieroglyphs

Hieroglyphs are pictures used for writing in ancient Egypt. At first those pictures were used for the names of kings; later more and more signs were added, until they numbered some 750 pictures.

INDEXCARD, 1/7
 
World Wide Web (WWW)

Probably the most significant Internet service, the World Wide Web is not the essence of the Internet, but a subset of it. It is constituted by documents that are linked together in such a way that you can switch from one document to another simply by clicking on the link connecting them. This is made possible by the Hypertext Mark-up Language (HTML), the authoring language used for creating World Wide Web-based documents. These so-called hypertexts can combine text documents, graphics, videos, sounds, and Java applets, thus making multimedia content possible.

Especially on the World Wide Web, documents are often retrieved by entering keywords into so-called search engines, sets of programs that fetch documents from as many servers as possible and index the stored information. (For regularly updated lists of the 100 most popular words that people are entering into search engines, click here). No search engine can retrieve all information on the whole World Wide Web; every search engine covers just a small part of it.
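
The indexing step can be pictured with a minimal sketch: map every keyword to the set of documents that contain it, then answer a query by looking the keyword up in that map. The sample documents and URLs below are invented for illustration; real search engines are of course far more elaborate.

from collections import defaultdict

# Hypothetical, tiny document collection
documents = {
    "http://example.org/a": "the world wide web is a subset of the internet",
    "http://example.org/b": "search engines index documents on the web",
}

index = defaultdict(set)
for url, text in documents.items():
    for word in text.split():
        index[word].add(url)          # keyword -> set of documents containing it

print(sorted(index["web"]))           # both sample documents mention "web"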

Among other things, that is the reason why the World Wide Web is not simply a huge database, as is sometimes said: it lacks consistency. It is true that there is a virtually infinite storage capacity on the Internet, a capacity which might prove almost everlasting too, a prospect that is sometimes consoling, but threatening as well.

According to the Internet domain survey of the Internet Software Consortium the number of Internet host computers is growing rapidly. In October 1969 the first two computers were connected; this number grew to 376,000 in January 1991 and to 72,398,092 in January 2000.

World Wide Web History Project, http://www.webhistory.org/home.html

http://www.searchwords.com/
http://www.islandnet.com/deathnet/
http://www.salonmagazine.com/21st/feature/199...
INDEXCARD, 2/7
 
Polybius

Polybius was one of the greatest historians of ancient Greece. He lived from about 200 to 118 BC. See: Polybius Checkerboard.

INDEXCARD, 3/7
 
Internet Research Task Force

Being itself under the umbrella of the Internet Society, the Internet Research Task Force is an umbrella organization of small research groups working on topics related to Internet protocols, applications, architecture and technology. It is governed by the Internet Research Steering Group.

http://www.irtf.org

http://www.irtf.org/
INDEXCARD, 4/7
 
Immanuel Wallerstein

Immanuel Wallerstein (* 1930) is director of the Fernand Braudel Center for the Study of Economies, Historical Systems, and Civilizations. He is one of the most famous sociologists in the Western world. With his book The Modern World-System: Capitalist Agriculture and the Origins of the European World-Economy in the Sixteenth Century (1976), which led to the expression World-System Theory about centers, peripheries and semi-peripheries in the capitalist world system, he not only influenced a whole generation of scientists; the theory also seems to be becoming popular again, due to globalization.

INDEXCARD, 5/7
 
Polybius Checkerboard


 

        1   2   3   4   5
    1   A   B   C   D   E
    2   F   G   H   I   K
    3   L   M   N   O   P
    4   Q   R   S   T   U
    5   V   W   X   Y   Z



It is a system in which letters are converted into numeric characters.
The numbers were not written down and sent, but signalled with torches.

for example:
A=1-1
B=1-2
C=1-3
W=5-2
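
A minimal sketch of this conversion, using the 5 x 5 square above (with I and J customarily sharing a cell):

SQUARE = ["ABCDE", "FGHIK", "LMNOP", "QRSTU", "VWXYZ"]

def polybius_encode(text):
    # Convert each letter into its row-column pair from the checkerboard.
    pairs = []
    for c in text.upper().replace("J", "I"):          # I and J share one cell
        for row, letters in enumerate(SQUARE, start=1):
            if c in letters:
                pairs.append(f"{row}-{letters.index(c) + 1}")
                break
    return " ".join(pairs)

print(polybius_encode("ACW"))   # -> 1-1 1-3 5-2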

for more information see:
http://www.ftech.net/~monark/crypto/crypt/polybius.htm

http://www.ftech.net/~monark/crypto/crypt/pol...
INDEXCARD, 6/7
 
Bruce Schneier

Bruce Schneier is president of Counterpane Systems in Minneapolis, a consulting enterprise specializing in cryptography and computer security. He is the author of the book Applied Cryptography, the inventor of the Blowfish encryption algorithm and a co-designer of the Twofish encryption algorithm.

INDEXCARD, 7/7