1913: Henry Ford and the Assembly Line

Realizing that he would need to lower costs, Henry Ford (Ford Motor Company) set out to find a more efficient way to produce his cars. Looking at other industries, he and his team identified four principles that furthered this goal: interchangeable parts, continuous flow, division of labor, and reducing wasted effort.

The use of interchangeable parts meant making the individual pieces of the car identical every time. This required improved machines, but once they were adjusted, they could be operated by low-skilled laborers. To reduce the time workers spent moving around, Ford refined the flow of work so that as soon as one task was finished another began, with minimal time spent on set-up. He further divided the labor by breaking the assembly of the legendary Model T into 84 distinct steps. Frederick Taylor, the creator of "scientific management", was consulted to conduct time and motion studies to determine the exact speed at which the work should proceed and the exact motions workers should use to accomplish their tasks.

Putting all those findings together, in 1913 Ford installed the first moving assembly line ever used for large-scale manufacturing. His cars could then be produced at a record-breaking rate, which meant that he could lower the price but still make a good profit by selling more cars. For the first time, work processes were largely automated by machinery.

TEXTBLOCK 1/5 // URL: http://world-information.org/wio/infostructure/100437611663/100438659335
 
Body and mind as defects

In an increasingly technicised world, where technology has also become a determinant of value-free values, mind and body are increasingly considered "imperfect" compared to the brilliant designs of technology. While for centuries the "weakness" of the human flesh has been the object of lamentation, the 21st century seems set to transform the genre of tragedy into a sober technological project of improvement. Within this project, men and women receive the status of a "risk factor" which potentially destabilises technological systems, a circumstance which calls for correction and control measures.

There are two main ways of checking the risk of "human error", as well as inefficiency, irrationality, selfishness, emotional turbulence, and other weaknesses of human beings: minimizing human participation in technological processes, and, to an increasing extent, technically eliminating such risk factors in human beings themselves.

Human beings, who once considered themselves the "crown of creation" or the "masters of the world", are reducing themselves to the "human factor" in globally networked technical systems: the factor which still escapes reliable calculation and which, when interacting with fast and potent technical environments, is a source of imperfection. For the human mind and body to perfect themselves - to adapt to the horizon of perfection of science and technology - takes long periods of discipline, learning, and even biological evolution.

In the calculating thinking required in highly technicised contexts, mind and body inevitably appear deficient compared to a technology which, unlike the human organism, has the potential for fast and controlled "improvement". To be sure, the human organism has always been prey to defects, to "illness" and "disablement". Disease has therefore been one of the main motivations behind the development of Bio-ITs: they are being developed to help the blind see again, the deaf hear, the lame walk, and the depressed be happy. Such medical applications of Bio-ITs are nothing essentially new: Captain Silver's crutch, the wheelchair, and the tooth filling serve the same basic purpose of correcting a physical deficiency.

But there is a much wider scope to this new development, in which "normal" biological conditions of human beings, such as proneness to death, forgetfulness, aging, inefficiency, solitude, or boredom, are understood as defects which can and should be corrected. The use of ITs to overcome such "biological" constraints is often seen as the "ultimate" technological advance, even if the history of utopian visions connected to technological innovation is as old as it is rife with surprise, disappointment, and disaster.

TEXTBLOCK 2/5 // URL: http://world-information.org/wio/infostructure/100437611777/100438658726
 
"Attention Brokerage"

"Attention Brokerage" is one of the latest developments in the field of online advertising. The first Web site to apply the concept of selling and buying attention is Cybergold. Users who want to earn money have to register and then look at ads, which they must prove they have done by, for example, downloading software. Attention, according to this idea, is a good worth being paid for.

TEXTBLOCK 3/5 // URL: http://world-information.org/wio/infostructure/100437611652/100438658064
 
What is the Internet?

Any definition of the Internet is a simplified statement and runs the risk of being outdated within a short time. What is usually referred to as the Internet is a network of thousands of computer networks (so-called autonomous systems) run by governmental authorities, companies, universities, etc. Generally speaking, every time a user connects to a computer network, a new Internet is created. Technically speaking, the Internet is a wide area network (WAN) to which local area networks (LANs) may be connected.

What constitutes the Internet is constantly changing. Certainly the state of the future Net will be different from the present one. Some years ago the Internet could still be described as a network of computer networks using a common communication protocol, the so-called IP protocol. Today, however, networks using other communication protocols are also connected via gateways.

Also, the Internet is not solely constituted by computers connected to other computers: point-of-sale terminals, cameras, robots, telescopes, cellular phones, TV sets, and an assortment of other hardware components are connected to it as well.

At the core of the Internet are so-called Internet exchanges, national backbone networks, regional networks, and local networks.

Since these networks are often privately owned, any description of the Internet as a public network is not accurate. It is easier to say what the Internet is not than what it is. On October 24, 1995 the U.S. Federal Networking Council passed the following resolution concerning the definition of the term "Internet": "Internet" refers to the global information system that (i) is logically linked together by a globally unique address space based on the Internet Protocol (IP) or its subsequent extensions/follow-ons; (ii) is able to support communications using the Transmission Control Protocol/Internet Protocol (TCP/IP) suite or its subsequent extensions/follow-ons, and/or other IP-compatible protocols; and (iii) provides, uses or makes accessible, either publicly or privately, high level services layered on the communications and related infrastructure described herein." (http://www.fnc.gov/Internet_res.html)
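The FNC definition rests on a "globally unique address space based on the Internet Protocol". As a small illustrative sketch (in Python, using the standard ipaddress module; the two example addresses are arbitrary choices, not drawn from the text above), one can check whether a given IP address belongs to that globally unique space or to a range reserved for private, local networks:

```python
import ipaddress

# Every IPv4 address is one of 2**32 possible values, but not all of them
# belong to the global address space: some ranges are reserved for private
# networks, loopback, and other special purposes.
public = ipaddress.ip_address("93.184.216.34")   # an ordinary public address
private = ipaddress.ip_address("192.168.0.1")    # a private-range address

print(public.is_global)    # True: part of the globally unique address space
print(private.is_global)   # False: reserved for local networks, not routable globally
```

This distinction is one concrete sense in which the Internet's address space is "globally unique": a public address identifies one host worldwide, whereas a private address may be reused in countless local networks behind gateways.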

What is generally, and in a simplifying manner, called the Internet may be better referred to as the Matrix, a term introduced by science fiction writer William Gibson, as John S. Quarterman and Smoot Carl-Mitchell have proposed. The Matrix consists of all computer systems worldwide capable of exchanging E-mail: the USENET, corporate networks, and proprietary networks owned by telecommunication and cable TV companies.

Strictly speaking, the Matrix is not a medium; it is a platform for resources: for media and services. The Matrix is mainly a very powerful means for making information easily accessible worldwide, for sending and receiving messages, videos, texts and audio files, for transferring funds and trading securities, for sharing resources, for collecting weather data, for tracking the movements of elephants, for playing games online, for video conferencing, for distance learning, for holding virtual exhibitions, for jamming with other musicians, for long-distance ordering, for auctions, for tracking packaged goods, for doing business, for chatting, and for remotely accessing computers and devices such as telescopes and robots. The Internet is a wonderful tool for exchanging, retrieving, and storing data and for sharing equipment over long distances, even in real time if the telecommunication infrastructure is reliable and of high quality.

For a comprehensive view of the uses of the Matrix, especially the World Wide Web, see "24 Hours in Cyberspace".

TEXTBLOCK 4/5 // URL: http://world-information.org/wio/infostructure/100437611791/100438659889
 
Virtual cartels, oligopolistic structures

Global networks require global technical standards ensuring the compatibility of systems. Being able to define such standards makes a corporation extremely powerful. And it requires the suspension of competitive practices. Competition is relegated to the symbolic realm. Diversity and pluralism become the victims of the globalisation of baroque sameness.

The ICT market is dominated by imperfect competition aimed at short-term market domination. In a very short time, new ideas can turn into best-selling technologies. Innovation cycles are extremely short. But today's state-of-the-art products are embryonic trash.

    According to the Computer and Communications Industry Association, Microsoft is aggressively trying to take over the network market. This would mean that AT&T would control 70% of all long distance phone calls and 60% of cable connections.

    AOL and Yahoo are the lone leaders in the provider market. AOL has 21 million subscribers in 100 countries. In a single month, AOL registers 94 million visits. Two thirds of all US internet users visited Yahoo in December 1999.

    The world's 13 biggest internet providers are all American.

    AOL and Microsoft have concluded a strategic cross-promotion deal. In the US, the AOL icon is installed on every Windows desktop. AOL has also concluded a strategic alliance with Coca Cola.

TEXTBLOCK 5/5 // URL: http://world-information.org/wio/infostructure/100437611709/100438658963
 
Neighboring rights

Copyright laws generally provide for three kinds of neighboring rights: 1) the rights of performing artists in their performances, 2) the rights of producers of phonograms in their phonograms, and 3) the rights of broadcasting organizations in their radio and television programs. Neighboring rights attempt to protect those who assist intellectual creators to communicate their message and to disseminate their works to the public at large.

INDEXCARD, 1/4
 
Core copyright industries

These encompass the industries that create copyrighted works as their primary product. They include the motion picture industry (television, theatrical, and home video), the recording industry (records, tapes and CDs), the music publishing industry, the book, journal and newspaper publishing industry, the computer software industry (including data processing, business applications and interactive entertainment software on all platforms), legitimate theater, advertising, and the radio, television and cable broadcasting industries.

INDEXCARD, 2/4
 
Artificial Intelligence

Artificial Intelligence is concerned with the simulation of human thinking and emotions in information technology. AI develops "intelligent systems" capable, for example, of learning and logical deduction. AI systems are used for creatively handling large amounts of data (as in data mining), as well as in natural speech processing and image recognition. AI is also used to support decision-making in highly complex environments.
Yahoo AI sites: http://dir.yahoo.com/Science/Computer_Science/Artificial_Intelligence/
MIT AI lab: http://www.ai.mit.edu/

INDEXCARD, 3/4
 
ARPAnet

ARPAnet was the small network of individual computers connected by leased lines that marked the beginning of today's global data networks. An experimental network mainly serving to test the feasibility of wide area networking and remote computing, it was created for resource sharing between research institutions, not for messaging services like E-mail. Although the research was sponsored by the US military, ARPAnet was not designed for directly military use but to support military-related research.

In 1969 ARPAnet went online, linking the first two computers, one located at the University of California, Los Angeles, the other at the Stanford Research Institute.

But ARPAnet did not become widely accepted until it was demonstrated in action to an audience of computer experts at the First International Conference on Computers and Communication in Washington, D.C., in 1972.

Before ARPAnet was decommissioned in 1990, NSFnet, a network of scientific and academic computers funded by the National Science Foundation, and a separate new military network had gone online in 1986. In 1988 the first private Internet service providers offered the general public access to NSFnet. In 1995, after having become the backbone of the Internet in the USA, NSFnet was turned over to a consortium of commercial backbone providers. This, together with the launch of the World Wide Web, added to the success of the global data network we call the Net.

In the USA commercial users already outnumbered military and academic users in 1994.

Despite the rapid growth of the Net, most computers linked to it are still located in the United States.

INDEXCARD, 4/4