Biometrics applications: gate keeping

Identity has to do with "place". In less mobile societies, the place where a person finds him/herself tells us something about his/her identity. In pre-industrial times, gatekeepers had the function of controlling people's access to particular places: a gatekeeper's job was to identify people and then decide whether somebody's identity would allow that person to physically occupy another place - a town, a building, a vehicle, etc.

In modern societies, the unambiguous nature of place has been weakened. There is a great amount of physical mobility, and ever since the emergence and spread of electronic communication technologies there has been a "virtualisation" of places in what today we call "virtual space" (unlike place, space has been a virtual reality from the beginning, a mathematical formula). The question of who one is is no longer coupled to a physical abode. Highly mobile and virtualised social contexts require a new generation of gatekeepers, which biometric technology aims to provide.

TEXTBLOCK 1/10 // URL: http://world-information.org/wio/infostructure/100437611729/100438658757
 
Definition

During the last 20 years the old Immanuel Wallerstein paradigm of center, semi-periphery and periphery has found a new costume: ICTs. After colonialism, neo-colonialism and neoliberalism, a new method of marginalization is emerging: the digital divide.

"Digital divide" describes the fact that the world can be divided into people who
do and people who do not have access to (or the education to handle with) modern information technologies, e.g. cellular telephone, television, Internet. This digital divide is concerning people all over the world, but as usually most of all people in the formerly so called third world countries and in rural areas suffer; the poor and less-educated suffer from that divide.
More than 80% of all computers with access to the Internet are situated in larger cities.

"The cost of the information today consists not so much of the creation of content, which should be the real value, but of the storage and efficient delivery of information, that is in essence the cost of paper, printing, transporting, warehousing and other physical distribution means, plus the cost of the personnel manpower needed to run these `extra' services ....Realizing an autonomous distributed networked society, which is the real essence of the Internet, will be the most critical issue for the success of the information and communication revolution of the coming century of millennium."
(Izumi Aizi)

For more information see:
http://www.whatis.com/digital_divide.htm

TEXTBLOCK 2/10 // URL: http://world-information.org/wio/infostructure/100437611730/100438659300
 
Intellectual Property and the "Information Society" Metaphor

Today talk about the so-called "information society" is ubiquitous. Many consider it the successor of the industrial society and claim that it represents a new form of societal and economic organization. This claim is based on the argument that the information society uses a new kind of resource, one which fundamentally differs from that of its industrial counterpart: whereas industrial societies focus on physical objects, the information society's raw material is said to be knowledge and information. Yet the conception of the capitalist system, which underlies industrial societies, continues to exist in an information-based environment. Although there have been changes in the forms of manufacture, the relations of production remain organized on the same basis: the principle of property.

In the context of a capitalist system based on industrial production, the term property predominantly relates to material goods. Yet even as the raw materials, resources and products change in an information society, the concept of property persists. It is merely extended: it no longer considers only physical objects as property, but also attempts to place information within a set of property relations. This new kind of knowledge-based property is widely referred to as "intellectual property". Although intellectual property in some ways represents a novel form of property, it has quickly been integrated into the traditional property framework. Whether products are material or immaterial, within the capitalist system they are both treated the same - as property.

TEXTBLOCK 3/10 // URL: http://world-information.org/wio/infostructure/100437611725/100438659429
 
Biometrics applications: privacy issues

All biometric technologies capture biometric data from individuals. Once these data have been captured by a system, they can, in principle, be forwarded to other locations and put to many different uses which are capable of compromising an individual's privacy.

Technically it is easy to match biometric data with other personal data stored in government or corporate files, and to come a step closer to the counter-utopia of the transparent citizen and customer whose data body is under outside control.

While biometric technologies are often portrayed as protectors of personal data and safeguards against identity theft, they can thus contribute to an advance in "Big Brother" technology.

The combination of personalised data files with biometric data would amount to an enormous control potential. While nobody in government and industry would admit to such intentions, leading data systems companies such as EDS (Electronic Data Systems; http://www.eds.com) are also suppliers of biometric systems to the intelligence agencies of government and industry.

Biometric technologies have the function of identification. Historically, identification has been a prerequisite for the exercise of power, and it serves as a protection only to those who are in no conflict with this power. If the digitalisation of the body by biometric technologies becomes as widespread as its proponents hope, a new electronic feudal system could emerge, in which people are reduced to subjects dispossessed of their bodies, even if these, unlike in the previous feudal system, are data bodies. And unlike the gatekeepers of medieval towns, the new gatekeepers wear no uniforms by which they might be identified; biometric technologies are pure masks.

TEXTBLOCK 4/10 // URL: http://world-information.org/wio/infostructure/100437611729/100438658826
 
Basics: Protected Persons

Generally copyright vests in the author of the work. Certain national laws provide for exceptions and, for example, regard the employer as the original owner of a copyright if the author was, when the work was created, an employee employed for the purpose of creating that work. In the case of some types of creations, particularly audiovisual works, several national laws provide for different solutions to the question of who should be the first holder of copyright in such works.

Many countries allow copyright to be assigned, which means that the owner of the copyright transfers it to another person or entity, which then becomes its holder. Where national law does not permit assignment, it usually provides for the possibility of licensing the work to someone else. The owner of the copyright then remains the holder, but authorizes another person or entity to exercise all or some of his rights, subject to possible limitations. Yet in any case the "moral rights" always belong to the author of the work, whoever may be the owner of the copyright (and therefore of the "economic rights").


TEXTBLOCK 5/10 // URL: http://world-information.org/wio/infostructure/100437611725/100438659527
 
In Search of Reliable Internet Measurement Data

Newspapers and magazines frequently report growth rates of Internet usage, numbers of users, hosts and domains that seem to be beyond all expectations, with growth expected to accelerate exponentially. However, Internet measurement data are anything but reliable and often quite fantastic constructs, which are nevertheless jumped upon by many media and decision makers, because the technical difficulties of measuring Internet growth or usage make reliable measurement techniques close to impossible.

Equally, predictions that the Internet is about to collapse lack any foundation whatsoever. The researchers at the Internet Performance Measurement and Analysis Project (IPMA) compiled a list of news items about Internet performance and statistics and a few responses to them by engineers.

Size and Growth

In fact, "today's Internet industry lacks any ability to evaluate trends, identity performance problems beyond the boundary of a single ISP (Internet service provider, M. S.), or prepare systematically for the growing expectations of its users. Historic or current data about traffic on the Internet infrastructure, maps depicting ... there is plenty of measurement occurring, albeit of questionable quality", says K. C. Claffy in his paper Internet measurement and data analysis: topology, workload, performance and routing statistics (http://www.caida.org/Papers/Nae/, Dec 6, 1999). Claffy is not an average researcher; he founded the well-known Cooperative Association for Internet Data Analysis (CAIDA).

So her statement is a slap in the face of all market researchers stating otherwise.
In a certain sense this is ridiculous, because ever since the inception of the ARPANet, the predecessor of the Internet, network measurement has been an important task. The very first ARPANet site was established at the University of California, Los Angeles, and was intended to be the measurement site. There, Leonard Kleinrock worked on the development of measurement techniques used to monitor the performance of the ARPANet (cf. Michael and Ronda Hauben, Netizens: On the History and Impact of the Net). And in October 1991, on behalf of the Internet Activities Board, Vinton Cerf proposed guidelines for researchers considering measurement experiments on the Internet, stressing that measurement of the Internet mattered for two reasons: first, measurement would be critical for future development, evolution and deployment planning; second, Internet-wide activities have the potential to interfere with normal operation and must be planned with care and made widely known beforehand.
So what are the reasons for this inability to evaluate trends and to identify performance problems beyond the boundary of a single ISP? First, in early 1995, almost simultaneously with the worldwide introduction of the World Wide Web, the transition of the National Science Foundation's stewardship role over the Internet into a competitive industry (bluntly put: its privatization) left no framework for adequate tracking and monitoring of the Internet. The early ISPs were not very interested in gathering and analyzing network performance data; they were struggling to meet the demands of their rapidly growing numbers of customers. Second, we are only beginning to develop reliable tools for quality measurement and analysis of bandwidth and performance. CAIDA aims at developing such tools.
"There are many estimates of the size and growth rate of the Internet that are either implausible, or inconsistent, or even clearly wrong", K. G. Coffman and Andrew, both members of different departments of AT & T Labs-Research, state something similar in their paper The Size and Growth Rate of the Internet, published in First Monday. There are some sources containing seemingly contradictory information on the size and growth rate of the Internet, but "there is no comprehensive source for information". They take a well-informed and refreshing look at efforts undertaken for measuring the Internet and dismantle several misunderstandings leading to incorrect measurements and estimations. Some measurements have such large error margins that you might better call them estimations, to say the least. This is partly due to the fact that data are not disclosed by every carrier and only fragmentarily available.
What is measured and what methods are used? Many studies are devoted to the number of users; others look at the number of computers connected to the Internet or count IP addresses. Coffman and Odlyzko focus on the sizes of networks and the traffic they carry to answer questions about the size and the growth of the Internet.
You get a clue to their focus when you bear in mind that the Internet is just one of many networks of networks; it is only a part of the universe of computer networks. Additionally, the Internet has public (unrestricted) and private (restricted) areas. Most studies consider only the public Internet; Coffman and Odlyzko consider the long-distance private line networks too: the corporate networks, the intranets, because they are convinced (that is, their assertion is put forward, but not accompanied by empirical data) that "the evolution of the Internet in the next few years is likely to be determined by those private networks, especially by the rate at which they are replaced by VPNs (Virtual Private Networks) running over the public Internet. Thus it is important to understand how large they are and how they behave."

Coffman and Odlyzko check other estimates by considering the traffic generated by residential users accessing the Internet with a modem, traffic through public peering points (statistics for them are available through CAIDA and the National Laboratory for Applied Network Research), and by calculating the bandwidth capacity for each of the major US providers of backbone services. They compare the public Internet to private line networks and offer interesting findings. The public Internet (with an effective bandwidth of 75 Gbps in December 1997) is currently far smaller, in both capacity and traffic, than the switched voice network; the private line networks are considerably larger in aggregate capacity than the Internet - about as large as the U. S. voice network (with an effective bandwidth of about 330 Gbps in December 1997) - but they carry less traffic. On the other hand, the growth rate of traffic on the public Internet, while lower than is often cited, is still about 100% per year, much higher than for traffic on other networks. Hence, if present growth trends continue, data traffic in the U. S. will overtake voice traffic around the year 2002 and will be dominated by the Internet.

In the future, growth in Internet traffic will predominantly derive from people staying online longer and from multimedia applications, both of which consume more bandwidth and are responsible for unanticipated amounts of data traffic.
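The arithmetic behind the overtaking projection is plain compound growth. A minimal sketch in Python (the starting traffic volumes and the growth rate for voice are illustrative assumptions, not figures from Coffman and Odlyzko):

    # When does data traffic overtake voice traffic, given constant
    # annual growth rates? All starting values are assumptions made
    # for illustration only.
    data = 40.0           # assumed data traffic in 1997 (arbitrary units)
    voice = 330.0         # assumed voice traffic in 1997 (arbitrary units)
    data_growth = 2.0     # about 100% per year, as cited above
    voice_growth = 1.1    # assumed 10% per year

    year = 1997
    while data < voice:
        data *= data_growth
        voice *= voice_growth
        year += 1

    print(year)  # 2001 with these assumed values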

Hosts

The Internet Software Consortium's Internet Domain Survey is one of the best-known efforts to count the number of hosts on the Internet. Happily, the ISC informs us extensively about the methods used for its measurements, a policy quite rare on the Web. For the most recent survey, the number of IP addresses that have been assigned a name was counted. At first sight it looks simple to get an accurate number of hosts this way, but in practice an assigned IP address does not automatically correspond to an existing host. In order to find out, you have to send a kind of message to the host in question and wait for a reply. You do this with the PING utility. (For further explanations look here: Art. PING, in: Connected: An Internet Encyclopaedia.) But to do this for every registered IP address is an arduous task, so the ISC just pings a 1% sample of all hosts found and makes a projection to all pingable hosts. That is the ISC's new method; its old method, still used by RIPE, was to count the number of domain names that had IP addresses assigned to them, a method that proved not very useful because a significant number of hosts restrict download access to their domain data.
Beside the small sample, this method has at least one flaw: the ISC's researchers take into account only network numbers that have been entered into the tables of the IN-ADDR.ARPA domain, and it is possible that not all providers know of these tables. A similar method is used by Telcordia's Netsizer.
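A toy version of the sample-and-project method might look as follows (a minimal sketch; it assumes a Unix-style ping command, and the address list is a placeholder, whereas the real survey works from the DNS and the IN-ADDR.ARPA tables):

    import random
    import subprocess

    def is_pingable(ip: str) -> bool:
        # One ICMP echo request; -W 1 (Linux ping) waits at most a second.
        result = subprocess.run(
            ["ping", "-c", "1", "-W", "1", ip],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        return result.returncode == 0

    def estimate_hosts(named_ips: list[str], sample_rate: float = 0.01) -> int:
        # Ping a random 1% sample of all addresses that have been
        # assigned a name, then project the response rate onto the
        # whole list, as the ISC survey does.
        size = max(1, int(len(named_ips) * sample_rate))
        sample = random.sample(named_ips, size)
        responding = sum(1 for ip in sample if is_pingable(ip))
        return int(len(named_ips) * responding / len(sample))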

Internet Weather

Like the daily weather, traffic on the Internet - the conditions for data flows - is monitored too, hence the term Internet weather. One of the most famous Internet weather reports comes from The Matrix, Inc. Another one is the Internet Traffic Report, displaying traffic in values between 0 and 100 (high values indicate fast and reliable connections). For weather monitoring, response ratings from servers all over the world are used. The method is to "ping" servers (as for host counts, e. g.) and to compare response times to past ones and to the response times of servers in the same reach.
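The underlying measurement can be approximated in a few lines (a sketch; it times a TCP connection instead of an ICMP ping, which requires no special privileges, and the baseline value would come from stored past measurements):

    import socket
    import time

    def response_time(host: str, port: int = 80, timeout: float = 2.0):
        # Time a TCP connection to the server; None if unreachable.
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return time.monotonic() - start
        except OSError:
            return None

    def weather_rating(host: str, past_average: float) -> int:
        # Map the current response time, relative to the historical
        # average, onto a 0-100 scale (100 = fast and reliable),
        # in the spirit of the Internet Traffic Report.
        rtt = response_time(host)
        if rtt is None:
            return 0
        return max(0, min(100, int(100 * past_average / rtt)))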

Hits, Page Views, Visits, and Users

Let us take a look at how these hot lists of most-visited Web sites may be compiled. I say may be, because the methods used for data retrieval are mostly not fully disclosed.
For some years it was seemingly common sense to report the number of files requested from a Web site, so-called "hits". This method is not very useful, because a document can consist of several files: graphics, text, etc. Just compile a document from some text and some twenty flashy graphics files, put it on the Web, and you get twenty-one hits per visit; the more graphics you add, the more hits and traffic (though not automatically more visitors to your Web site) you generate.
In the meantime, page views, also called page impressions, are preferred, and are said to avoid these flaws. But even page views are not reliable. A user might share a computer, and the corresponding IP address and host name, with others, or might access not the site itself but a cached copy from the Web browser or from the ISP's proxy server. So the server might receive just one page request although several users viewed a document.
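The difference between hits and page views is easy to see in a Web server's access log. A minimal sketch (assuming the common log format; the sample lines are invented):

    import re

    LOG_LINES = [
        '1.2.3.4 - - [01/Jan/2000:12:00:00 +0000] "GET /article.html HTTP/1.0" 200 5120',
        '1.2.3.4 - - [01/Jan/2000:12:00:01 +0000] "GET /logo.gif HTTP/1.0" 200 340',
        '1.2.3.4 - - [01/Jan/2000:12:00:01 +0000] "GET /photo.jpg HTTP/1.0" 200 9100',
    ]

    request_re = re.compile(r'"GET (\S+) HTTP')

    hits = page_views = 0
    for line in LOG_LINES:
        match = request_re.search(line)
        if not match:
            continue
        hits += 1                                   # every requested file is a "hit"
        if match.group(1).endswith((".html", "/")):
            page_views += 1                         # only documents count as page views

    print(hits, page_views)  # 3 hits, but only 1 page view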

Especially the editors of some electronic journals (e-journals) rely on page views as a kind of ratings or circulation measure, Rick Marin reports in the New York Times. Click-through rates - a quantitative measure - are used as a substitute for something of an intrinsically qualitative nature: the importance of a column to its readers, for example. Readers may open a journal just for one special column and not care about the journal's other contents. Deleting this column because it does not receive enough visits may cause these readers to turn their backs on the journal.
More advanced, but at best just slightly better, is counting visits: the access of several pages of a Web site during one session. The problems already mentioned apply here too. To avoid them, newspapers, for example, establish registration services, which require password authentication and therefore prove to be a kind of access obstacle.
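Counting visits means grouping page requests into sessions, conventionally by client address and an inactivity timeout (a sketch; the 30-minute cutoff is the customary but arbitrary choice, and shared IP addresses and proxies distort the count exactly as described above):

    SESSION_GAP = 30 * 60  # seconds of inactivity that end a visit

    def count_visits(requests: list[tuple[str, float]]) -> int:
        # requests: (client_ip, unix_timestamp) pairs from the log.
        # A request from an IP more than SESSION_GAP after its previous
        # one starts a new visit.
        last_seen: dict[str, float] = {}
        visits = 0
        for ip, ts in sorted(requests, key=lambda r: r[1]):
            if ip not in last_seen or ts - last_seen[ip] > SESSION_GAP:
                visits += 1
            last_seen[ip] = ts
        return visits

    print(count_visits([("1.2.3.4", 0), ("1.2.3.4", 60), ("1.2.3.4", 7200)]))  # 2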
But there is a different reason for these registration services. For content providers, users are virtual users, not unique persons, because, as already mentioned, computers and IP addresses can be shared, and the Internet is a client-server system; in a certain sense it is in fact computers that communicate with each other. Therefore many content providers are eager to learn more about the users accessing their sites. Online registration forms or WWW user surveys are obvious methods of collecting additional data, sure. But you cannot be sure that the information given by users is reliable; you can only rely on the fact that somebody visited your Web site. Despite these obstacles, companies increasingly use data capturing. As with registration services, this is where cookies come into play.
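The mechanism behind cookies is simple: on first contact the server assigns the browser a random identifier, which the browser then returns with every later request (a sketch of the HTTP headers involved; the cookie name is a placeholder):

    import uuid

    def response_headers(request_headers: dict) -> dict:
        # Recognise a returning browser by its cookie, or assign a
        # new pseudonymous visitor ID on the first visit.
        if "visitor_id=" in request_headers.get("Cookie", ""):
            return {}  # known (pseudonymous) visitor; nothing to set
        visitor_id = uuid.uuid4().hex
        # The browser will send this value back with every future
        # request, letting the provider link page views together.
        return {"Set-Cookie": f"visitor_id={visitor_id}; Path=/"}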

For Fun

If you like to play around with Internet statistics instead, you can use Robert Orenstein's Web Statistics Generator to make irresponsible predictions or visit the Internet Index, an occasional collection of seemingly statistical facts about the Internet.

Measuring the Density of IP Addresses

Measuring the density of IP addresses or domain names makes the geography of the Internet visible. So where on earth is the density of IP addresses or domain names highest? There is no global study of the Internet's geographical patterns available yet, but some regional studies can be found. The Urban Research Initiative, and Martin Dodge and Narushige Shiode from the Centre for Advanced Spatial Analysis at University College London, have mapped the Internet address space of New York, Los Angeles and the United Kingdom (http://www.geog.ucl.ac.uk/casa/martin/internetspace/paper/telecom.html and http://www.geog.ucl.ac.uk/casa/martin/internetspace/paper/gisruk98.html).
Dodge and Shiode used data on the ownership of IP addresses from RIPE, Europe's most important registry for Internet numbers.





TEXTBLOCK 6/10 // URL: http://world-information.org/wio/infostructure/100437611791/100438658352
 
Individualized Audience Targeting

New opportunities for online advertisers arise with the possibility of one-to-one Web applications. Software agents, for example, promise to "register, recognize and manage end-user profiles; create personalized communities on-line; deliver personalized content to end-users and serve highly targeted advertisements" - probably the ultimate tool for advertisers. Although the technique is not yet widely used, companies like Amazon.Com have already started to exploit individualized audience targeting for their purposes.
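In outline, such targeting is a simple match between a stored end-user profile and an inventory of advertisements (a purely illustrative sketch; the profile fields and the scoring are invented, not any vendor's actual interface):

    ADS = [
        {"id": "ad-books", "keywords": {"books", "reading"}},
        {"id": "ad-travel", "keywords": {"travel", "flights"}},
    ]

    def pick_ad(profile: dict) -> str:
        # Serve the ad whose keywords overlap most with the interests
        # recorded in the end-user's profile.
        interests = set(profile.get("interests", []))
        best = max(ADS, key=lambda ad: len(ad["keywords"] & interests))
        return best["id"]

    print(pick_ad({"interests": ["books", "music"]}))  # ad-books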

TEXTBLOCK 7/10 // URL: http://world-information.org/wio/infostructure/100437611652/100438658450
 
Copyright Management and Control Systems: Post-Infringement

Post-infringement technologies allow the owners of copyrighted works to identify infringements and thus enhance the enforcement of intellectual property rights. They encompass systems such as:

Steganography

Applied to electronic files, steganography refers to the process of hiding information in files in such a way that it cannot easily be detected by users. Steganography can be used by intellectual property owners in a variety of ways. One is to insert into the file a "digital watermark" which can be used to prove that an infringing file was the creation of the copyright holder and not the pirate. Other possibilities are to encode a unique serial number into each authorized copy or file, enabling the owner to trace infringing copies to a particular source, or to store copyright management information.
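For instance, a serial number can be hidden in the least significant bits of an image's raw sample bytes, where the change is imperceptible (a bare-bones sketch; real watermarking schemes are designed to survive re-encoding, scaling and cropping, which this one would not):

    def embed(samples: bytearray, serial: bytes) -> None:
        # Hide `serial` in the least significant bit of each sample byte.
        bits = [(byte >> i) & 1 for byte in serial for i in range(8)]
        for pos, bit in enumerate(bits):
            samples[pos] = (samples[pos] & 0xFE) | bit

    def extract(samples: bytes, length: int) -> bytes:
        # Read `length` hidden bytes back out of the low bits.
        out = bytearray()
        for i in range(length):
            byte = 0
            for j in range(8):
                byte |= (samples[i * 8 + j] & 1) << j
            out.append(byte)
        return bytes(out)

    image = bytearray(range(256)) * 10   # stand-in for decoded pixel data
    embed(image, b"COPY-0042")
    assert extract(image, 9) == b"COPY-0042"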

Agents

Agents are programs that can carry out specified commands automatically. Copyright owners can use agents to search the public spaces of the Internet for infringing copies. Although the technology is not yet very well developed, full-text search engines already allow similar uses.
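A crude agent of this kind can be sketched in a few lines: fetch candidate pages and compare a fingerprint of their content against the fingerprints of the protected works (the URL and the sample text are placeholders; real agents crawl, normalise file formats, and match fuzzily rather than exactly):

    import hashlib
    import urllib.request

    PROTECTED = {
        # fingerprint of each protected work -> work identifier
        hashlib.sha256(b"full text of the protected work").hexdigest(): "Work #1",
    }

    def check_url(url: str):
        # Fetch a public document and report which protected work it
        # duplicates, if any (None otherwise).
        with urllib.request.urlopen(url) as response:
            fingerprint = hashlib.sha256(response.read()).hexdigest()
        return PROTECTED.get(fingerprint)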

Copyright Litigation

While not every infringement will be the subject of litigation, the threat of litigation helps keep large pirate operations in check. It helps copyright owners obtain relief for specific acts of infringement and publicly warns others of the dangers of infringement.

TEXTBLOCK 8/10 // URL: http://world-information.org/wio/infostructure/100437611725/100438659699
 
Basics: Acquisition of Copyright

The laws of almost all countries provide that protection is independent of any formalities. Copyright protection then starts as soon as the work is created.

TEXTBLOCK 9/10 // URL: http://world-information.org/wio/infostructure/100437611725/100438659576
 
Data bunkers

Personal data are collected, appropriated, processed and used for commercial purposes on a global scale. In order for such a global system to operate smoothly, there are server nodes at which the data streams converge. Among the foremost of these are the databases of credit card companies, whose operation has long depended on global networking.

At the top are credit card companies such as Visa, American Express, MasterCard, and others. It would be erroneous to believe that the primary business of these companies is the provision of credit and the facilitation of credit information for sales transactions. In fact, information means much more than just credit information. In a 1982 advertisement, American Express described itself in these terms: "Our product is information ... Information that charges airline tickets, hotel rooms, dining out, the newest fashions ... information that grows money funds, buys and sells equities ... information that pays life insurance annuities ... information that schedules entertainment on cable television and electronically guards houses ... information that changes kroners into guilders and figures tax rates in Bermuda ..."

Information has become something like the gospel of the New Economy, a doctrine of salvation - the life blood of society, as Bill Gates expresses it. But behind information there are always data that need to be generated and collected. Because of the critical importance of data to the economy, their possession amounts to power and their loss can cause tremendous damage. The data industry therefore locates its data warehouses behind fortifications that bar physical or electronic access. Such structures are somewhat like a digital reconstruction of the medieval fortress.

Large amounts of data are concentrated in fortress-like structures, in data bunkers. As the Critical Art Ensemble argue in Electronic Civil Disobedience: "The bunker is the foundation of homogeneity, and allows only a singular action within a given situation." All activities within a data bunker revolve around the same principle of calculation. Calculation is the predominant mode of thinking in data-driven societies, and it reaches its greatest density inside data bunkers. However, calculation is not a politically neutral activity, as it provides the rational basis - and therefore the formal legitimisation - for almost every decision taken. Data bunkers therefore have an essentially conservative political function, and serve to maintain and strengthen the given social structures.

TEXTBLOCK 10/10 // URL: http://world-information.org/wio/infostructure/100437611761/100438659754
 
Enigma Machine

The Enigma encryption machine was as famous for its insecurities as for the security that it gave to German ciphers. It was broken, first by the Poles in the 1930s, then by the British in World War II.

INDEXCARD, 1/10
 
Center for Democracy and Technology

The Center for Democracy and Technology works to promote democratic values and constitutional liberties in the digital age. With expertise in law, technology, and policy, the Center seeks practical solutions to enhance free expression and privacy in global communications technologies. The Center is dedicated to building consensus among all parties interested in the future of the Internet and other new communications media.

http://www.cdt.org

INDEXCARD, 2/10
 
John Dee

b. July 13, 1527, London, England
d. December 1608, Mortlake, Surrey

English alchemist, astrologer, and mathematician who contributed greatly to the revival of interest in mathematics in England. After lecturing and studying on the European continent between 1547 and 1550, Dee returned to England in 1551 and was granted a pension by the government. He became astrologer to the queen, Mary Tudor, and shortly thereafter was imprisoned for being a magician but was released in 1555. Dee later toured Poland and Bohemia (1583-89), giving exhibitions of magic at the courts of various princes. He became warden of Manchester College in 1595.

INDEXCARD, 3/10
 
Cutting

The cutting of pictures in movies or photographs is highly manipulative: it is easy to produce a new video out of an already existing one. The result is a form of manipulation that is difficult to contradict. A reputation destroyed in this way is nearly impossible to repair.

INDEXCARD, 4/10
 
Blue Box

The blue box system works with a special blue-colored background. The person in front of it can be made to appear as if he or she were filmed anywhere - even in the middle of a war.
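The underlying technique, today usually called chroma keying, replaces every sufficiently blue pixel of the foreground with the corresponding pixel of another image (a minimal per-pixel sketch; the threshold is arbitrary, and real systems compute soft mattes and suppress blue spill instead of making a hard yes/no decision):

    def blue_box(foreground, background):
        # For each (r, g, b) pixel: keep the foreground pixel unless it
        # is dominantly blue, in which case the background shows through.
        composite = []
        for fg, bg in zip(foreground, background):
            r, g, b = fg
            is_blue = b > 100 and b > 1.5 * r and b > 1.5 * g
            composite.append(bg if is_blue else fg)
        return composite

    # Toy 2-pixel "images": one blue-screen pixel, one skin-tone pixel.
    fg = [(10, 20, 200), (210, 160, 140)]
    bg = [(0, 255, 0), (0, 255, 0)]
    print(blue_box(fg, bg))  # [(0, 255, 0), (210, 160, 140)]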

INDEXCARD, 5/10
 
International Organization for Standardization

ISO (International Organization for Standardization), founded in 1946, is a worldwide federation of national standards bodies from some 100 countries, one from each country. Among the standards it fosters is Open Systems Interconnection (OSI), a universal reference model for communication protocols. Many countries have national standards organizations that participate in and contribute to ISO standards making.

http://www.iso.ch

Source: Whatis.com

INDEXCARD, 6/10
 
Telnet

Telnet allows you to log in remotely on a computer connected to the Internet.
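A minimal scripted session using Python's legacy standard-library telnetlib module (a sketch; the host and credentials are placeholders, and telnetlib was removed from the standard library in Python 3.13):

    import telnetlib  # legacy stdlib module

    with telnetlib.Telnet("host.example.org", 23, timeout=10) as session:
        session.read_until(b"login: ")
        session.write(b"username\n")
        session.read_until(b"Password: ")
        session.write(b"password\n")
        session.write(b"uname -a\n")
        print(session.read_until(b"$").decode())  # output up to the shell prompt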

INDEXCARD, 7/10
 
Harold D. Lasswell

Harold D. Lasswell (born 1902) studied at the London School of Economics. He then became a professor of social sciences at various universities, among them the University of Chicago, Columbia University, and Yale University. He was also a consultant for several governments. One of Lasswell's many famous works was Propaganda Technique in the World War, in which he defines propaganda. He also discussed the major objectives of propaganda: to mobilize hatred against the enemy, to preserve the friendship of allies, to procure the co-operation of neutrals, and to demoralize the enemy.

INDEXCARD, 8/10
 
Martin Hellman

Martin Hellman was Whitfield Diffie's colleague in creating public key cryptography in the 1970s.

INDEXCARD, 9/10
 
Apple

Founded by Steve Jobs and Steve Wozniak and headquartered in Cupertino, USA, Apple Computer was the first commercially successful personal computer company.

In 1977 Wozniak created the Apple II, one of the first commercially successful personal computers. IBM countered its successful introduction to the market with a personal computer running MS-DOS, the operating system supplied by Microsoft Corporation, and regained market leadership. Although Apple introduced the first graphical user interface affordable to consumers and thereby started the desktop publishing revolution, it could not regain leadership.

http://www.apple.com

For more detailed information see the Encyclopaedia Britannica: http://www.britannica.com/bcom/eb/article/6/0,5716,115726+1+108787,00.html

INDEXCARD, 10/10