Normally these are variations on the same thing.
As computers got big enough and fast enough, it was found that having a single user in control of the whole machine was too costly, especially when jobs were read in only as they were to be run, from card readers or punched tape. There was no real-time access, and there were no minicomputers yet to collect input and feed it to the big machines.
So more memory was added, along with a method for jumping between multiple jobs that were swapped in and out of memory or kept resident in parts of it. Some of the jobs were doing very simple work, such as waiting for a device to send the next character, so they were very small (later, modems were made that collected a whole command line and passed it along only when Enter was hit).
A program was set up to manage the swapping; it stayed in memory all the time, interrupting one job so the processor could handle another. Each time this happened, all the registers the processor had been using for the old job had to be saved and swapped for the ones the new job was using. If only one user was on the machine, this was multi-tasking; if several were, it was time-sharing. Administrators decided whether a user or job got high or low priority when competing for a share of time.
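To make the register swap concrete, here is a minimal simulated sketch in C of the round-robin switching such a resident program performed. The RegisterSet and Job structures and the run_slice function are invented purely for this illustration; a real supervisor saved the machine's actual CPU registers, usually in assembly, rather than a struct like this.

```c
#include <stdio.h>

/* Hypothetical, simplified "register set" for one job.
 * Real machines saved the actual CPU registers; here we just
 * model a program counter and one accumulator. */
typedef struct {
    int pc;          /* where the job left off   */
    int accumulator; /* the job's working value  */
} RegisterSet;

typedef struct {
    const char *name;
    RegisterSet saved;   /* registers as of the last switch */
    int finished;
} Job;

/* Run one time slice of a job: restore its registers, do a bit
 * of work, then save the registers so the next job can run. */
static void run_slice(Job *job) {
    RegisterSet cpu = job->saved;   /* restore registers      */

    cpu.accumulator += cpu.pc;      /* pretend to compute     */
    cpu.pc += 1;
    if (cpu.pc >= 5)
        job->finished = 1;

    job->saved = cpu;               /* save registers back    */
    printf("%s: pc=%d acc=%d\n", job->name, cpu.pc, cpu.accumulator);
}

int main(void) {
    Job jobs[] = {
        { "payroll",       {0, 0}, 0 },
        { "terminal-user", {0, 0}, 0 },
    };
    int n = 2, remaining = 2;

    /* Round-robin: give each unfinished job a slice in turn --
     * the same "interrupt one job, swap registers, run the next"
     * cycle described above. */
    while (remaining > 0) {
        for (int i = 0; i < n; i++) {
            if (!jobs[i].finished) {
                run_slice(&jobs[i]);
                if (jobs[i].finished)
                    remaining--;
            }
        }
    }
    return 0;
}
```

Running it interleaves the two jobs' output, which is the whole point: each job picks up exactly where its saved registers say it left off.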
Once this software was in place, a group of people could sit at terminals, type commands directly, and see the output on their own screens, or send it to shared printers where operators tore the jobs apart and handed them back. While each user was typing a command, being a slow human, the computer handled all the other users. Most of the time, every user could act as if they were the only one on the machine; when someone ran a job that demanded a lot of time, everyone slowed down.
The early minicomputers were multi-user in part because businesses wanted two or three people working at the same time and did not want to buy expensive card-reading and card-punching equipment.
answer #1 · answered by Mike1942f · 2007-03-13 06:08:07