Conclusion

As we have seen in recent wars and in art, propaganda and disinformation take place on all sides. No contemporary political system is immune to them; all of them make use of them whenever it seems useful and appropriate. Democracy, which always presents itself as the most liberal and most humane system, is no exception - and certainly not a laudable one.
Democracy may give us more chances to escape censorship - but only as long as the national will is not disturbed. Then disinformation and propaganda come in ...
NATO members gave a very sad example of this during the Kosovo crisis.

It is our hunger for sensation and glory, for rumors and spectacle, that makes disinformation so powerful. Many books and web pages give information about how to overcome disinformation and propaganda - but in vain. We somehow seem to like it - or at least we need it to push through our interests.

There is a lot we could try to do, but very little that will succeed, as people prefer to believe that disinformation is an issue of the past.

At the moment the only appropriate measure against disinformation's influence seems to be placing different aspects and ideas side by side, especially opinions that state the contrary or are at least not the same. Any other model will probably commit the very crime it is fighting against. For how would we be able to know?

TEXTBLOCK 1/4 // URL: http://world-information.org/wio/infostructure/100437611661/100438658764
 
Copyright Management and Control Systems: Post-Infringement

Post-infringement technologies allow the owners of copyrighted works to identify infringements and thus enhance the enforcement of intellectual property rights. They encompass systems such as:

Steganography

Applied to electronic files, steganography refers to the process of hiding information in files in a way that cannot easily be detected by users. Steganography can be used by intellectual property owners in a variety of ways. One is to insert into the file a "digital watermark" which can be used to prove that an infringing file was the creation of the copyright holder and not the pirate. Other possibilities are to encode a unique serial number into each authorized copy or file, enabling the owner to trace infringing copies to a particular source, or to store copyright management information.
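
The basic idea behind such a watermark can be illustrated with a small sketch - hypothetical Python code, not an actual watermarking product: a serial number is written into the least significant bits of a file's samples, where it is imperceptible to the user but can later be read back to trace a copy to its source.

    # Toy watermark: hide a serial number in the least significant bit of
    # each sample of a media file (represented here by a plain bytearray).
    # Real watermarking schemes are far more robust against manipulation.

    def embed_serial(samples: bytearray, serial: int, bits: int = 32) -> bytearray:
        marked = bytearray(samples)
        for i in range(bits):
            bit = (serial >> i) & 1
            marked[i] = (marked[i] & 0xFE) | bit   # overwrite the lowest bit
        return marked

    def extract_serial(samples: bytes, bits: int = 32) -> int:
        serial = 0
        for i in range(bits):
            serial |= (samples[i] & 1) << i
        return serial

    if __name__ == "__main__":
        original = bytearray(range(64))            # stands in for image or audio data
        marked = embed_serial(original, serial=123456789)
        print("recovered serial:", extract_serial(marked))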

Agents

Agents are programs that can carry out specified commands automatically. Copyright owners can use agents to search the public spaces of the Internet for infringing copies. Although the technology is not yet very well developed, full-text search engines allow similar uses.
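
A minimal sketch of such an agent might look as follows; the URL, hash and work below are invented placeholders, and real systems would need far more sophisticated matching than a simple file hash:

    # Sketch of an "agent" that checks publicly reachable files against
    # the fingerprints of a copyright owner's works (placeholder data).

    import hashlib
    import urllib.request

    KNOWN_WORKS = {
        "d41d8cd98f00b204e9800998ecf8427e": "Example Work (1999)",   # placeholder hash
    }

    def fingerprint(data: bytes) -> str:
        return hashlib.md5(data).hexdigest()

    def scan(urls):
        for url in urls:
            try:
                data = urllib.request.urlopen(url, timeout=10).read()
            except OSError:
                continue                       # unreachable hosts are simply skipped
            work = KNOWN_WORKS.get(fingerprint(data))
            if work:
                print(f"possible infringing copy of {work} at {url}")

    if __name__ == "__main__":
        scan(["http://www.example.com/somefile.mp3"])   # placeholder URL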

Copyright Litigation

While not every infringement will be the subject of litigation, the threat of litigation helps keep large pirate operations in check. It helps copyright owners obtain relief for specific acts of infringement and publicly warns others of the dangers of infringement.

TEXTBLOCK 2/4 // URL: http://world-information.org/wio/infostructure/100437611725/100438659699
 
How the Internet works

On the Internet, when you want to retrieve a document from another computer, you request a service from that computer. Your computer is the client; the computer on which the information you want to access is stored is called the server. This is why the Internet's architecture is called a client-server architecture.
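
A minimal illustration of this request-response principle, written in Python against a placeholder host, might look like this: the client opens a connection to the server, sends a request for a document, and reads the server's reply.

    # Minimal client: ask a web server for a document over HTTP.
    # The host name is only an example.

    import socket

    HOST = "example.com"
    request = f"GET / HTTP/1.0\r\nHost: {HOST}\r\n\r\n"

    with socket.create_connection((HOST, 80)) as client:
        client.sendall(request.encode("ascii"))
        reply = b""
        while chunk := client.recv(4096):      # read until the server closes
            reply += chunk

    print(reply.decode("latin-1").splitlines()[0])   # e.g. "HTTP/1.0 200 OK"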

A common set of standards allows the exchange of data and commands through the Internet independently of location, time, and operating system. These standards are called communication protocols, or the Internet Protocol Suite, and are implemented in Internet software. Sometimes the Internet Protocol Suite is erroneously identified with TCP/IP (Transmission Control Protocol / Internet Protocol).

Any information to be transferred is broken down into pieces, so-called packets, and the Internet Protocol figures out how the data is supposed to get from A to B by passing through routers.

Each packet is "pushed" from router to router via gateways and might take a different route. It is not possible to determine in advance which ways these packets will take. At the receiving end the packets are checked and reassembled.

The technique of breaking down all messages and requests into packets has the advantage that a large data bundle (e.g. videos) sent by a single user cannot block a whole network, because the bandwidth needed is spread over many packets sent along different routes. Detailed information about routing in the Internet can be obtained at http://www.scit.wlv.ac.uk/~jphb/comms/iproute.html.
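
The principle can also be sketched in a few lines of toy Python code: a message is cut into numbered packets, the packets arrive in arbitrary order (as if routed along different paths), and the receiver reassembles them by sequence number. Real IP packets of course carry much more header information than a sequence number.

    # Toy illustration of packet switching: split, "route", reassemble.

    import random

    def to_packets(message: bytes, size: int = 8):
        return [(seq, message[i:i + size])
                for seq, i in enumerate(range(0, len(message), size))]

    def reassemble(packets):
        return b"".join(data for _, data in sorted(packets))

    if __name__ == "__main__":
        message = b"Any information to be transferred is broken down into packets."
        packets = to_packets(message)
        random.shuffle(packets)                # packets take different routes
        print(reassemble(packets).decode())    # the original message again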

One of the Internet's (and of the Matrix's) beginnings was the ARPANet, whose design was intended to withstand any disruption, such as a military attack. The ARPANet was able to route data around damaged areas, so that the disruption would not impede communication. This design, with its origin in strategic and military considerations, remained unchanged for the Internet. Yet the ARPANet's design cannot be completely applied to the Internet.

Routing around damage depends on the location of the interruption and on the availability of intersecting points between networks. If, for example, an e-mail message is sent from Brussels to Athens and a channel in Germany is down, access will hardly be affected: the message will be routed around the damage, as long as a major Internet exchange is not affected. However, if access depends on a single backbone connection to the Internet and this connection is cut off, there is no way to route around it.

In most parts of the world the Internet is therefore vulnerable to disruption. "The idea of the Internet as a highly distributed, redundant global communications system is a myth. Virtually all communications between countries take place through a very small number of bottlenecks, and the available bandwidth isn't that great," says Douglas Barnes. These bottlenecks are the network connections to neighboring countries. Many countries rely on a single connection to the Net, and in some places, such as the Suez Canal, there is a concentration of fiber-optic cables of critical importance.

TEXTBLOCK 3/4 // URL: http://world-information.org/wio/infostructure/100437611791/100438659870
 
Problems of Copyright Management and Control Technologies

Profiling and Data Mining

At their most basic, copyright management and control technologies might simply be used to provide pricing information, negotiate the purchase transaction, and release a copy of a work for downloading to the customer's computer. Still, from a technological point of view, such systems also have the capacity to be employed for digital monitoring. Copyright owners could, for example, use the transaction records generated by their copyright management systems to learn more about their customers. Profiles of the purchasers of copyrighted material might be created, in their crudest form consisting of basic demographic information. Moreover, copyright owners could use search agents or complex data mining techniques to gather further information about their customers that could either be used to market other works or be sold to third parties.
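
How easily such transaction records could be turned into crude profiles is suggested by the following sketch; the records, field names and figures are invented for illustration only.

    # Aggregate (invented) purchase records into simple customer profiles.

    from collections import Counter, defaultdict

    transactions = [
        {"customer": "4711", "work": "Symphony No. 5", "genre": "classical", "price": 9.90},
        {"customer": "4711", "work": "Piano Concerto", "genre": "classical", "price": 12.50},
        {"customer": "0815", "work": "Action Movie", "genre": "film", "price": 19.90},
    ]

    profiles = defaultdict(lambda: {"purchases": 0, "spent": 0.0, "genres": Counter()})

    for record in transactions:
        profile = profiles[record["customer"]]
        profile["purchases"] += 1
        profile["spent"] += record["price"]
        profile["genres"][record["genre"]] += 1

    for customer, profile in profiles.items():
        favourite = profile["genres"].most_common(1)[0][0]
        print(customer, profile["purchases"], "purchases, favourite genre:", favourite)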

Fair Use

Through the widespread use of copyright management and control systems, the balance of control could be shifted excessively in favor of the owners of intellectual property. The practice of fair use, currently supported by copyright law, might be restricted or even eliminated. While information in analogue form can easily be reproduced, the protection of digital works through copyright management systems might complicate or prevent the copying of material for purposes that are explicitly exempt under the doctrine of fair use.

Provisions concerning technological protection measures and fair use are stated in the DMCA, which provides that "Since copying of a work may be a fair use under appropriate circumstances, section 1201 does not prohibit the act of circumventing a technological measure that prevents copying. By contrast, since the fair use doctrine is not a defense to the act of gaining unauthorized access to a work, the act of circumventing a technological measure in order to gain access is prohibited." The proposed EU Directive on copyright and related rights in the information society also contains similar clauses. It distinguishes between the circumvention of technical protection systems for lawful purposes (fair use) and circumvention to infringe copyright. Yet besides a continuing lack of legal clarity, very practical problems arise. Even if the circumvention of technological protection measures under fair use is allowed, how will an average user without specialized technological know-how be able to gain access to or make a copy of a work? Will the producers of copyright management and control systems provide fair use versions that permit the reproduction of copyrighted material? Or will users only be able to access and copy works if they hold a digital "fair use license" ("fair use licenses" have been proposed by Mark Stefik, whereby holders of such licenses could exercise some limited "permissions" to use a digital work without a fee)?

TEXTBLOCK 4/4 // URL: http://world-information.org/wio/infostructure/100437611725/100438659629
 
World Wide Web (WWW)

Probably the most significant Internet service, the World Wide Web is not the essence of the Internet, but a subset of it. It is constituted by documents that are linked together in such a way that you can switch from one document to another simply by clicking on the link connecting them. This is made possible by the Hypertext Markup Language (HTML), the authoring language used to create World Wide Web documents. These so-called hypertexts can combine text documents, graphics, videos, sounds, and Java applets, thus making multimedia content possible.
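
The linking mechanism itself is quite simple. The following sketch shows a minimal hypertext document (as a Python string) and extracts its link targets with the standard library's HTML parser; the document is made up for the example.

    # Extract the targets of HTML anchors from a tiny hypertext document.

    from html.parser import HTMLParser

    page = ('<html><body>See the '
            '<a href="http://www.webhistory.org/home.html">WWW History Project</a>'
            ' for more.</body></html>')

    class LinkCollector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self.links.extend(value for name, value in attrs if name == "href")

    collector = LinkCollector()
    collector.feed(page)
    print(collector.links)          # ['http://www.webhistory.org/home.html']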

Especially on the World Wide Web, documents are often retrieved by entering keywords into so-called search engines, sets of programs that fetch documents from as many servers as possible and index the stored information. (Regularly updated lists of the 100 most popular words that people enter into search engines are published at http://www.searchwords.com/.) No search engine can retrieve all information on the whole World Wide Web; every search engine covers just a small part of it.
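
What a search engine does can be caricatured in a few lines of Python: documents (hard-coded here instead of being fetched from servers) are indexed word by word, and a query is answered by looking the keyword up in that inverted index. The URLs and texts are invented.

    # A toy inverted index: map each word to the documents that contain it.

    from collections import defaultdict

    documents = {
        "http://example.com/a": "the world wide web is a subset of the internet",
        "http://example.com/b": "search engines index only a small part of the web",
    }

    index = defaultdict(set)
    for url, text in documents.items():
        for word in text.split():
            index[word].add(url)

    print(sorted(index["web"]))     # URLs of documents containing the keyword "web"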

Among other things, that is the reason why the World Wide Web is not simply a very huge database, as is sometimes said: it lacks consistency. It is true that there is virtually infinite storage capacity on the Internet, a capacity which might also prove to be almost everlasting - a prospect that is consoling, but threatening too.

According to the Internet domain survey of the Internet Software Consortium, the number of Internet host computers is growing rapidly. In October 1969 the first two computers were connected; this number grew to 376,000 in January 1991 and 72,398,092 in January 2000.

World Wide Web History Project, http://www.webhistory.org/home.html

http://www.searchwords.com/
http://www.islandnet.com/deathnet/
http://www.salonmagazine.com/21st/feature/199...
INDEXCARD, 1/3
 
The Spot

http://www.thespot.com/

INDEXCARD, 2/3
 
Sony Corporation

Sony Corporation, Japanese SONY KK, is a major Japanese manufacturer of consumer electronics products, with headquarters in Tokyo. The company was incorporated in 1946 and spearheaded Japan's drive to become the world's dominant consumer electronics manufacturer in the late 20th century. It was one of the first to recognize the potential of the consumer videotape market. In 1972 it formed an affiliate to market its Betamax colour videocassette system. In 1987-88 Sony purchased the CBS Records Group from CBS Inc., thus acquiring the world's largest record company. It followed that acquisition with the purchase of Columbia Pictures Entertainment Inc. in 1989.

INDEXCARD, 3/3