
Suppose I open a file: over which buses does its data travel to reach RAM?
If I append to the file, how is its content in RAM updated, and after I save the changes, how are the RAM contents copied back to the hard disk?

2007-11-22 04:45:20 · 6 answers · asked by CoolFire 1 in Computers & Internet Hardware Other - Hardware

I know what RAM is and what buses are.
I want to know how the process works.
I don't want device names.

For example: characters from the keyboard are converted by the keyboard controller, and the data is sent over the data bus.
etc...

2007-11-22 05:15:14 · update #1

6 answers

A computer is a machine which manipulates data according to a list of instructions.

Computers take numerous physical forms. The first devices that resemble modern computers date to the mid-20th century (around 1940 - 1941), although the computer concept and various machines similar to computers existed earlier. Early electronic computers were the size of a large room, consuming as much power as several hundred modern personal computers.[1] Modern computers are based on comparatively tiny integrated circuits and are millions to billions of times more capable while occupying a fraction of the space. [2] Today, simple computers may be made small enough to fit into a wrist watch and be powered from a watch battery. Personal computers in various forms are icons of the information age and are what most people think of as "a computer". However, the most common form of computer in use today is by far the embedded computer. Embedded computers are small, simple devices that are often used to control other devices—for example, they may be found in machines ranging from fighter aircraft to industrial robots, digital cameras, and even children's toys.

The ability to store and execute lists of instructions called programs makes computers extremely versatile and distinguishes them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: Any computer with a certain minimum capability is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, computers with capability and complexity ranging from that of a personal digital assistant to a supercomputer are all able to perform the same computational tasks given enough time and storage capacity.
This is only a small part.

2007-11-22 04:53:25 · answer #1 · answered by Bob G 3 · 0 0

When you first open the file, your computer loads its data into RAM over the internal data bus. Whatever you edit, change, or append stays in RAM until you save the file; it is then written back to your hard drive. Writing to the hard drive is a slower process, and RAM holds what you see on the screen.
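
A rough sketch of that sequence in C (the file name is just an example; the exact buses and cache layers involved depend on the machine and the operating system):

#include <stdio.h>

int main(void)
{
    /* Open: the operating system reads blocks from the hard disk,
       through the disk controller and system buses, into RAM. */
    FILE *f = fopen("notes.txt", "a");   /* hypothetical file, opened for appending */
    if (f == NULL)
        return 1;

    /* Append: the new text sits only in RAM (the C library's buffer
       and the OS's cache) until it is flushed. */
    fputs("one more line\n", f);

    /* Save: flushing and closing ask the OS to copy the buffered
       bytes from RAM back out to the hard disk over those same buses. */
    fflush(f);
    fclose(f);
    return 0;
}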

2007-11-22 05:03:25 · answer #2 · answered by Scott P 7 · 0 0

Check out this site. It also has a video you can watch which will help you understand the flow of information. This is an awesome site for anything you need to know; I have been there. Also, look at the other site.

http://computer.howstuffworks.com/pc.htm
http://www.pcworld.com/

2007-11-22 04:57:58 · answer #3 · answered by michelle A 5 · 0 0

very simple...

The computer receives an input, works on the input, and displays an output to you; it does this by using devices called processors.

2007-11-22 04:59:01 · answer #4 · answered by King 3 · 0 0

No way anyone is going to answer that.

2007-11-22 04:51:39 · answer #5 · answered by Jake 7 · 0 1

I INTRODUCTION

Computer, electronic device that can receive a set of instructions, or program, and then carry out this program by performing calculations on numerical data or by manipulating other forms of information.

Cray Supercomputer The Cray-1 Supercomputer (designed by Seymour Cray of Cray Research, Eagan, Minnesota) was the first computer capable of performing over 100 million floating-point calculations per second. Of the many technological problems that had to be solved, one of the most important was how to remove the heat generated by the high-speed logic. This was accomplished by mounting the circuits on vertical plates that were cooled by a freon refrigeration system. Although faster machines have now been built, the Cray-1 continues to be used for mathematical studies of very complex problems, such as speech analysis, weather forecasting, and fundamental questions in physics and chemistry. The Cray-1 also leaves its mark as the informal unit of measure for newer supercomputers, some of which are now projected to equal 1,000 ‘Crays’.

Computer-Aided Design and Manufacturing The amount of wind pressure on a car is simulated by computers with Computer-Aided Design (CAD) and Computer-Aided Manufacturing (CAM) programs. Red indicates high wind pressures and blue indicates low pressures.
The modern world of high technology could not have come about except for the development of the computer. Different types and sizes of computers find uses throughout society in the storage and handling of data, from secret governmental files to banking transactions to private household accounts. Computers have opened up a new era in manufacturing through the techniques of automation, and they have enhanced modern communication systems. They are essential tools in almost every field of research and applied technology, from constructing models of the universe to producing tomorrow’s weather reports, and their use has in itself opened up new areas of conjecture. Database services and computer networks make available a great variety of information sources. The same advanced techniques also make possible invasions of personal and business privacy. Computer crime has become one of the many risks that are part of the price of modern technology.

II TYPES OF COMPUTERS

Two main types of computers are in use today, analogue and digital. Analogue computers exploit the mathematical similarity between physical interrelationships in certain problems, and employ electronic or hydraulic circuits (see Fluidics) to simulate the physical problem. Digital computers solve problems by performing calculations and by dealing with each number digit by digit.

Installations that contain elements of both digital and analogue computers are called hybrid computers. They are usually used for problems in which large numbers of complex equations, known as time integrals, are to be computed. Data in analogue form can also be fed into a digital computer by means of an analogue-to-digital converter, and the same is true of the reverse situation (see Digital-to-Analogue Converter).

A Analogue Computers

The simplest analogue calculating device is the slide rule, which employs specially calibrated scales to facilitate multiplication, division, and other functions. The analogue computer is a more sophisticated electronic or hydraulic device that is designed to handle input in terms of, for example, voltage levels or hydraulic pressures, rather than numerical data. In a typical electronic analogue computer, the inputs are converted into voltages that may be added or multiplied using specially designed circuit elements. The answers are continuously generated for display or for conversion to another desired form.

B Digital Computers

Everything that a digital computer does is based on one operation: the ability to determine whether a switch, or “gate”, is open or closed. That is, the computer can recognize only two states in any of its microscopic circuits: on or off, high voltage or low voltage, or—in the case of numbers—0 or 1. The speed at which the computer performs this simple act, however, is what makes it a marvel of modern technology. Computer speeds are measured in megahertz, or millions of cycles per second. A computer with a “clock speed” of 100 MHz—a fairly representative speed for a microcomputer—is capable of executing 100 million discrete operations each second. Supercomputers used in research and defence applications attain speeds of billions of cycles per second.

Digital computer speed and calculating power are further enhanced by the amount of data handled during each cycle. If a computer checks only one switch at a time, that switch can represent only two commands or numbers; thus ON would symbolize one operation or number, and OFF would symbolize another. By checking groups of switches linked as a unit, however, the computer increases the number of operations it can recognize at each cycle. For example, a computer that checks two switches at one time can represent four numbers (0-3) or can execute one of four instructions at each cycle, one for each of the following switch patterns: OFF-OFF (0); OFF-ON (1); ON-OFF (2); or ON-ON (3).
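
A small C sketch of that idea, printing every pattern a given number of switches (bits) can take; with two switches it prints the four combinations listed above:

#include <stdio.h>

int main(void)
{
    int bits = 2;                 /* how many switches are checked together */
    int patterns = 1 << bits;     /* 2 switches -> 4 patterns; 8 would give 256 */

    for (int value = 0; value < patterns; value++) {
        /* show the pattern as ON/OFF, highest switch first */
        for (int b = bits - 1; b >= 0; b--)
            printf("%s", ((value >> b) & 1) ? "ON " : "OFF ");
        printf("(%d)\n", value);
    }
    return 0;
}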

III HISTORY

The first adding machine, a precursor of the digital computer, was devised in 1642 by the French scientist, mathematician, and philosopher Blaise Pascal. This device employed a series of ten-toothed wheels, each tooth representing a digit from 0 to 9. The wheels were connected so that numbers could be added to each other by advancing the wheels by a correct number of teeth. In the 1670s the German philosopher and mathematician Gottfried Wilhelm Leibniz improved on this machine by devising one that could also multiply.

The French inventor Joseph-Marie Jacquard, in designing an automatic loom, used thin, perforated wooden boards to control the weaving of complicated designs. During the 1880s the American statistician Herman Hollerith conceived the idea of using perforated cards, similar to Jacquard’s boards, for processing data. Employing a system that passed punched cards over electrical contacts, he was able to compile statistical information for the 1890 United States census.

A The Analytical Engine

Babbage's Difference Engine Considered by many to be a direct forerunner of modern calculating devices, the Difference Engine was able to compute mathematical tables. This woodcut shows a small portion of the ingenious machine, which was designed by Charles Babbage in the 1820s. Babbage’s later idea for the Analytical Engine would have been a true, programmable computer if the project had been pursued with adequate funding. As it was, neither machine was completed in his lifetime, although it was not beyond the technological capabilities of the time. In 1991 a team at the London Science Museum finished work on a fully functional Difference Engine No. 2.

Also in the 19th century, the British mathematician and inventor Charles Babbage worked out the principles of the modern digital computer. He conceived a number of machines, such as the Difference Engine, that were designed to handle complicated mathematical problems. Many historians consider Babbage and his associate, the mathematician Augusta Ada Byron, Countess of Lovelace, the true pioneers of the modern digital computer. One of Babbage’s designs, the Analytical Engine, had many features of a modern computer. It had an input stream in the form of a deck of punched cards, a “store” for saving data, a “mill” for arithmetic operations, and a printer that made a permanent record. Babbage failed to put this idea into practice, though it may well have been technically possible at that date.

B Early Computers

Analogue computers began to be built in the late 19th century. Early models calculated by means of rotating shafts and gears. Numerical approximations of equations too difficult to solve in any other way were evaluated with such machines. Lord Kelvin built a mechanical tide predictor that was a specialized analogue computer. During World Wars I and II, mechanical and, later, electrical analogue computing systems were used as torpedo course predictors in submarines and as bombsight controllers in aircraft. Another system was designed to predict spring floods in the Mississippi River basin.

C Electronic Computers

Circuit Board and Transistors A close-up of a smoke detector’s circuit board reveals its components, which include transistors, resistors, capacitors, diodes, and inductors. Rounded containers house the transistors that make the circuit work. Transistors are capable of serving many functions, such as amplifying and switching. Each transistor consists of a small piece of semiconducting material, such as silicon, that has been “doped”, or treated with impurity atoms, to create n-type and p-type regions. Invented in 1948, transistors are a fundamental component in nearly all modern electronic devices.

During World War II a team of scientists and mathematicians, working at Bletchley Park, north of London, created one of the first all-electronic digital computers: Colossus. By December 1943, Colossus, which incorporated 1,500 vacuum tubes, was operational. It was used in the largely successful attempt to crack high-level German teleprinter messages enciphered with the Lorenz cipher, complementing the better-known work of Alan Turing's team against the Enigma code.

Independently of this, in the United States, a prototype electronic machine had been built as early as 1939 by John Atanasoff and Clifford Berry at Iowa State College. This prototype and later research were completed quietly and later overshadowed by the development of the Electronic Numerical Integrator And Computer (ENIAC) in 1945. ENIAC was granted a patent, which was overturned decades later, in 1973, when the machine was revealed to have incorporated principles first used in the Atanasoff-Berry Computer (ABC).

UNIVAC Computer System The first commercially available electronic computer, UNIVAC I, was also the first computer to handle both numeric and textual information. The machine, designed by J. Presper Eckert and John Mauchly, whose corporation subsequently passed to Remington Rand, marked the beginning of the computer era. Here, a UNIVAC computer is shown in action. The central computer is in the background, and in the foreground is the supervisory control panel. Remington Rand delivered the first UNIVAC machine to the US Bureau of the Census in 1951.

ENIAC contained 18,000 vacuum tubes and had a speed of several hundred multiplications per second, but originally its program was wired into the processor and had to be manually altered. Later machines were built with program storage, based on the ideas of the Hungarian-American mathematician John von Neumann. The instructions, like the data, were stored within a “memory”, freeing the computer from the speed limitations of the paper-tape reader during execution and permitting problems to be solved without rewiring the computer. See Von Neumann Architecture.

The use of the transistor in computers in the late 1950s marked the advent of smaller, faster, and more versatile logical elements than were possible with vacuum-tube machines. Because transistors use much less power and have a much longer life, this development alone was responsible for the improved machines called second-generation computers. Components became smaller, as did inter-component spacings, and the system became much less expensive to build.

D Integrated Circuits

Late in the 1960s the integrated circuit, or IC, was introduced, making it possible for many transistors to be fabricated on one silicon substrate, with interconnecting wires plated in place. The IC resulted in a further reduction in price, size, and failure rate. The microprocessor became a reality in the mid-1970s with the introduction of the large-scale integrated (LSI) circuit and, later, the very large-scale integrated (VLSI) circuit (microchip), with many thousands of interconnected transistors etched into a single silicon substrate.

To return, then, to the switching capabilities of a modern computer: computers in the 1970s were generally able to handle eight switches at a time. That is, they could deal with eight binary digits, or bits, of data, at every cycle. A group of eight bits is called a byte, each byte containing 256 possible patterns of ONs and OFFs (or 1s and 0s). Each pattern is the equivalent of an instruction, a part of an instruction, or a particular type of datum, such as a number or a character or a graphics symbol. The pattern 11010010, for example, might be binary data—in this case, the decimal number 210 (see Number Systems)—or it might be an instruction telling the computer to compare data stored in its switches to data stored in a certain memory-chip location.
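
That conversion is easy to verify; here is a short C sketch that turns the bit pattern quoted above into its decimal value:

#include <stdio.h>

int main(void)
{
    const char *pattern = "11010010";   /* the example byte from the text */
    int value = 0;

    /* each bit doubles the running total and adds 0 or 1 */
    for (int i = 0; pattern[i] != '\0'; i++)
        value = value * 2 + (pattern[i] - '0');

    printf("%s = %d\n", pattern, value);   /* prints: 11010010 = 210 */
    return 0;
}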

The development of processors that can handle 16, 32, and 64 bits of data at a time has increased the speed of computers. The complete collection of recognizable patterns—the total list of operations—of which a computer is capable is called its instruction set. Both factors—the number of bits that can be handled at one time, and the size of instruction sets—continue to increase with the ongoing development of modern digital computers.

IV HARDWARE

Computer System A typical computer system consists of a central processing unit (CPU), input devices, storage devices, and output devices. The CPU consists of an arithmetic/logic unit, registers, control section, and internal bus. The arithmetic/logic unit carries out arithmetical and logical operations. The registers store data and keep track of operations. The control unit regulates and controls various operations. The internal bus connects the units of the CPU with each other and with external components of the system. For most computers, the principal input device is a keyboard. Storage devices include external floppy disk drives and internal memory boards. Output devices that display data include monitors and printers
Modern digital computers are all conceptually similar, regardless of size. Nevertheless, they can be divided into several categories on the basis of cost and performance: the personal computer or microcomputer, a relatively low-cost machine, usually of desk-top size (though “laptops” are small enough to fit in a briefcase, and “palmtops” can fit into a pocket); the workstation, a microcomputer with enhanced graphics and communications capabilities that make it especially useful for office work; the minicomputer, generally too expensive for personal use, with capabilities suited to a business, school, or laboratory; and the mainframe computer, a large, expensive machine with the capability of serving the needs of major business enterprises, government departments, scientific research establishments, or the like (the largest and fastest of these are called supercomputers).

A digital computer is not a single machine: rather, it is a system composed of five distinct elements: (1) a central processing unit; (2) input devices; (3) memory storage devices; (4) output devices; and (5) a communications network, called a bus, which links all the elements of the system and connects the system to the external world.

A Central Processing Unit (CPU)

The CPU may be a single chip or a series of chips that perform arithmetic and logical calculations and that time and control the operations of the other elements of the system. Miniaturization and integration techniques made possible the development of the microprocessor, a CPU chip that incorporates additional circuitry and memory. The result is smaller computers and reduced support circuitry. Microprocessors are used in personal computers.

Most CPU chips and microprocessors are composed of four functional sections: (1) an arithmetic/logic unit; (2) registers; (3) a control section; and (4) an internal bus. The arithmetic/logic unit gives the chip its calculating ability and permits arithmetical and logical operations. The registers are temporary storage areas that hold data, keep track of instructions, and hold the location and results of these operations. The control section has three principal duties. It times and regulates the operations of the entire computer system; its instruction decoder reads the patterns of data in a designated register and translates the pattern into an activity, such as adding or comparing; and its interrupt unit indicates the order in which individual operations use the CPU, and regulates the amount of CPU time that each operation may consume.

The last segment of a CPU chip or microprocessor is its internal bus, a network of communication lines that connects the internal elements of the processor and also leads to external connectors that link the processor to the other elements of the computer system. The three types of CPU buses are: (1) a control bus consisting of a line that senses input signals and another line that generates control signals from within the CPU; (2) the address bus, a one-way line from the processor that handles the location of data in memory addresses; and (3) the data bus, a two-way transfer line that both reads data from memory and writes new data into memory.
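
A toy model of those three roles in C, using a made-up 256-byte memory; real buses are sets of electrical lines, but the one-way address versus two-way data distinction can be mimicked with function parameters:

#include <stdio.h>

static unsigned char memory[256];              /* pretend RAM */

/* The address travels one way, from the CPU towards memory; the data
   value travels both ways, read out of or written into the addressed cell. */
unsigned char bus_read(unsigned char address)
{
    return memory[address];
}

void bus_write(unsigned char address, unsigned char data)
{
    memory[address] = data;
}

int main(void)
{
    bus_write(0x10, 42);                       /* CPU puts address and data on the buses */
    printf("%d\n", bus_read(0x10));            /* the value comes back over the data bus */
    return 0;
}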

B Input Devices

Light Pen Light pens are electronic pointers that allow users to modify designs on-screen. The hand-held pointer contains sensors that send signals to the computer whenever light is recorded. The computer’s screen is not lit up all at once, but traced row-by-row by an electron beam 50 or 60 times every second. Because of this, the computer is able to determine the pen’s position by noting exactly when the pen detects the electron beam passing its tip. Light pens are often used in computer-aided design and computer-aided manufacture (CAD and CAM) technology because of the flexibility they provide. Here, an engineer uses a light pen to modify a technical drawing on a computer display screen.

These devices enable a computer user to enter data, commands, and programs into the CPU. The most common input device is the keyboard. Information typed at the typewriter-like keyboard is translated by the computer into recognizable patterns. Other input devices include the mouse, which translates physical motion into motion on a computer video display screen; the joystick, which performs the same function, and is favoured for computer games; the trackball, which replaces the mouse on laptops; scanners, which “read” words or symbols on a printed page and translate them into electronic patterns that the computer can manipulate and store; light pens, which can be used to “write” directly on the monitor screen; and voice recognition systems, which take spoken words and translate them into digital signals for the computer. Storage devices can also be used to input data into the processing unit.

C Storage Devices

Computer systems can store data internally (in memory) and externally (on storage devices). Internally, instructions or data can be temporarily stored in silicon RAM (Random Access Memory) chips that are mounted directly on the computer’s main circuit board, or in chips mounted on peripheral cards that plug into the computer’s main circuit board. These RAM chips consist of millions of switches that are sensitive to changes in electric current. So-called static RAM chips hold their data as long as current flows through the circuit, whereas dynamic RAM (DRAM) chips need high or low voltages applied at regular intervals—every two milliseconds or so—if they are not to lose their information.

Another type of internal memory consists of silicon chips on which all switches are already set. The patterns on these ROM (Read-Only Memory) chips form commands, data, or programs that the computer needs to function correctly. RAM chips are like pieces of paper that can be written on, erased, and used again; ROM chips are like a book, with its words already set on each page. Both RAM and ROM chips are linked by circuitry to the CPU.

External storage devices, which may actually be located within the computer housing, are external to the main circuit board. These devices store data as charges on a magnetically sensitive medium such as a magnetic tape or, more commonly, on a disk coated with a fine layer of metallic particles. The most common external storage devices are so-called floppy disks and hard disks, although most large computer systems use banks of magnetic tape storage units. The floppy disks in normal use store about 800 kilobytes (a kilobyte is 1,024 bytes) or about 1.4 megabytes (1 megabyte is slightly more than a million bytes). Hard, or “fixed”, disks cannot be removed from their disk-drive cabinets, which contain the electronics to read and write data on to the magnetic disk surfaces. Hard disks currently used with personal computers can store from several hundred megabytes to several gigabytes (1 gigabyte is a billion bytes). CD-ROM technology, which uses the same laser techniques that are used to create audio compact discs (CDs), normally produces storage capacities up to about 800 megabytes.
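
The units mentioned above work out as follows (a quick C sketch; a gigabyte here is strictly 2 to the power 30 bytes, which is roughly the "billion bytes" quoted above):

#include <stdio.h>

int main(void)
{
    long long kilobyte = 1024;              /* 1,024 bytes */
    long long megabyte = 1024 * kilobyte;   /* 1,048,576 bytes: slightly more than a million */
    long long gigabyte = 1024 * megabyte;   /* 1,073,741,824 bytes: roughly a billion */

    printf("KB = %lld, MB = %lld, GB = %lld\n", kilobyte, megabyte, gigabyte);
    return 0;
}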

D Output Devices

These devices enable the user to see the results of the computer’s calculations or data manipulations. The most common output device is the video display unit (VDU), a monitor that displays characters and graphics on a television-like screen. A VDU usually has a cathode ray tube like an ordinary television set, but small, portable computers use liquid crystal displays (LCDs) or electroluminescent screens. Other standard output devices include printers and modems. A modem links two or more computers by translating digital signals into analogue signals so that data can be transmitted via analogue telephone lines.

E Operating Systems

Different types of peripheral devices—disk drives, printers, communications networks, and so on—handle and store data differently from the way the computer handles and stores it. Internal operating systems, usually stored in ROM memory, were developed primarily to coordinate and translate data flows from dissimilar sources, such as disk drives or coprocessors (processing chips that operate simultaneously with the central unit). An operating system is a master control program, permanently stored in memory, that interprets user commands requesting various kinds of services—commands such as display, print, or copy a data file; list all files in a directory; or execute a particular program.

V PROGRAMMING

A program is a sequence of instructions that tells the hardware of a computer what operations to perform on data. Programs can be built into the hardware itself, or they may exist independently in a form known as software. In some specialized, or “dedicated”, computers the operating instructions are embedded in their circuitry; common examples are the microcomputers found in calculators, wristwatches, car engines, and microwave ovens. A general-purpose computer, on the other hand, although it contains some built-in programs (in ROM) or instructions (in the processor chip), depends on external programs to perform useful tasks. Once a computer has been programmed, it can do only as much or as little as the software controlling it at any given moment enables it to do. Software in widespread use includes a wide range of applications programs—instructions to the computer on how to perform various tasks.

A Languages

Application of Programming Languages Programming languages allow people to communicate with computers. Once a job has been identified, the programmer must translate, or code, it into a list of instructions that the computer will understand. A computer program for a given task may be written in several different languages. Depending on the task, a programmer generally chooses the language that will involve the least complicated program. It may also be important to the programmer to pick a language that is flexible and widely compatible if the program will have a range of applications. The examples above are programs written to average a list of numbers. Both C and BASIC are commonly used programming languages. The machine interpretation shows how a computer would process and execute the commands from the programs
A computer must be given instructions in a programming language that it understands—that is, a particular pattern of binary digital information. On the earliest computers, programming was a difficult, laborious task, because vacuum-tube ON-OFF switches had to be set by hand. Teams of programmers often took days to program simple tasks such as sorting a list of names. Since that time numbers of computer languages have been devised, some with particular kinds of functioning in mind and others aimed more at ease of use—the “user-friendly” approach.

B Machine Language

The computer’s own binary-based language, or machine language, is difficult for human beings to use. The programmer must input every command and all data in binary form, and a basic operation such as comparing the contents of a register to the data in a memory-chip location might look like this: 11001010 00010111 11110101 00101011. Machine-language programming is such a tedious, time-consuming task that the time saved in running the program rarely justifies the days or weeks needed to write the program.

C Assembly Language

One method programmers devised to shorten and simplify the process is called assembly-language programming. By assigning a short (usually three-letter) mnemonic code to each machine-language command, assembly-language programs could be written and “debugged”—cleaned of logic and data errors—in a fraction of the time needed by machine-language programmers. In assembly language, each mnemonic command and its symbolic operands equals one machine instruction. An assembler program translates the source code, a list of mnemonic operation codes and symbolic operands, into object code, that is, into machine language, which the computer can then execute.
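
A heavily simplified sketch of that translation step in C, using a made-up three-mnemonic instruction set (the mnemonics and opcodes here are invented for illustration and do not belong to any real CPU):

#include <stdio.h>
#include <string.h>

/* made-up instruction set: mnemonic -> one-byte opcode */
struct op { const char *mnemonic; unsigned char opcode; };

static const struct op table[] = {
    { "LDA", 0x01 },   /* load accumulator   */
    { "ADD", 0x02 },   /* add to accumulator */
    { "STA", 0x03 },   /* store accumulator  */
};

int main(void)
{
    const char *source[] = { "LDA", "ADD", "STA" };   /* toy "source code" */

    /* the assembler's job: replace each mnemonic with its machine code */
    for (size_t i = 0; i < sizeof source / sizeof source[0]; i++)
        for (size_t j = 0; j < sizeof table / sizeof table[0]; j++)
            if (strcmp(source[i], table[j].mnemonic) == 0)
                printf("%s -> %02X\n", source[i], table[j].opcode);

    return 0;
}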

Each assembly language, however, can be used with only one type of CPU chip or microprocessor. Programmers who expended much time and effort to learn how to program one computer had to learn a new programming style each time they worked on another machine. What was needed was a shorthand method by which one symbolic statement could represent a sequence of many machine-language instructions, and a way that would allow the same program to run on several types of machines. These needs led to the development of high-level languages.

D High-Level Languages

High-level languages often use English words—for example, LIST, PRINT, OPEN, and so on—as commands that might stand for a sequence of tens or hundreds of machine-language instructions. The commands are entered from the keyboard or from a program in memory or in a storage device, and they are intercepted by a program that translates them into machine-language instructions.

Translator programs are of two kinds: interpreters and compilers. With an interpreter, programs that loop back to re-execute part of their instructions reinterpret the same instruction each time it appears, so interpreted programs run much more slowly than machine-language programs. Compilers, by contrast, translate an entire program into machine language prior to execution, so such programs run as rapidly as though they were written directly in machine language.

The American computer scientist Grace Hopper is credited with implementing the first commercially oriented computer language. After programming an experimental computer at Harvard University, she worked on the UNIVAC I and II computers and developed a commercially usable high-level programming language called FLOW-MATIC. To facilitate computer use in scientific applications, IBM then developed a language that would simplify work involving complicated mathematical formulas. Begun in 1954 and completed in 1957, FORTRAN (FORmula TRANslator) was the first comprehensive high-level programming language that was widely used.

In 1957 the Association for Computing Machinery in the United States set out to develop a universal language that would correct some of FORTRAN’s shortcomings. A year later they released ALGOL (ALGOrithmic Language), another scientifically oriented language; widely used in Europe in the 1960s and 1970s, it has since been superseded by newer languages, while FORTRAN continues to be used because of the huge investment in existing programs. COBOL (Common Business-Oriented Language), a commercial and business programming language, concentrated on data organization and file-handling and is widely used today in business.

BASIC (Beginner’s All-purpose Symbolic Instruction Code) was developed at Dartmouth College in the early 1960s for use by non-professional computer users. The language came into almost universal use with the microcomputer explosion of the 1970s and 1980s. Condemned as slow, inefficient, and inelegant by its detractors, BASIC is nevertheless simple to learn and easy to use. Because many early microcomputers were sold with BASIC built into the hardware (in ROM memory) the language rapidly came into widespread use. The following very simple example of a BASIC program adds the numbers 1 and 2, and displays the result (the numerals 10 to 40 are line numbers):
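
10 LET A = 1
20 LET B = 2
30 LET C = A + B
40 PRINT C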

Although hundreds of different computer languages and variants exist, several others deserve mention. PASCAL, originally designed as a teaching tool, is now one of the most popular microcomputer languages. LOGO was developed to introduce children to computers. C, a language Bell Laboratories designed in the 1970s, is widely used in developing systems programs, as is its successor, C++. LISP and PROLOG are widely used in artificial intelligence. Still further languages have been developed to permit programming in hypermedia, as in CD-ROM and Internet applications.

VI FUTURE DEVELOPMENTS

One continuing trend in computer development is microminiaturization, the effort to compress more circuit elements into smaller and smaller chip space. For example, in 1999, scientists developed a circuit the size of a single layer of molecules, and in 2000 IBM announced that it had developed new technology to produce computer chips that operate five times faster than the most advanced models to date. Also in 2000, scientists discovered a way to transfer information on an atomic level without relying on traditional wires or circuits. This effect, dubbed the "quantum mirage", describes how an atom of matter placed in an elliptical-shaped structure on a solid surface reflects itself at other points within the ellipse, thereby relaying information. Researchers are also trying to speed up circuitry functions through the use of superconductivity, the phenomenon of decreased electrical resistance observed in certain materials at very low temperatures. As the physical limits of silicon-chip computer processors are being approached, scientists are exploring the potential of the next generation of computer technology, using, for instance, devices based on deoxyribonucleic acid (DNA).

The “fifth-generation” computer effort to develop computers that can solve complex problems in ways that might eventually merit the description “creative” is another trend in computer development, the ideal goal being true artificial intelligence. One path actively being explored is parallel processing computing, which uses many chips to perform several different tasks at the same time. Parallel processing may eventually be able to duplicate to some degree the complex feedback, approximating, and assessing functions of human thought. One important parallel processing approach is the neural network, which mimics the architecture of the nervous system. Another ongoing trend is the increase in computer networking, which now employs the worldwide data communications system of satellite and cable links to connect computers globally. There is also a great deal of research into the possibility of “optical” computers—hardware that processes not pulses of electricity but much faster pulses of light.

2007-11-22 07:41:01 · answer #6 · answered by Anonymous · 0 0
