
I've been in the computer biz for many years. I started with Teletype terminals (CDC mainframe) and the Apple ][ / TRS-80 (micro). It seemed like minicomputers, at the time, were just better microcomputers. There was always a chasm between mainframe and micro, and it seemed to grow over the years. But in the last 5 years, the micro has moved into the server domain, with up to hundreds of CPUs, lots of memory, fault tolerance, etc.

So, currently, is there a simple definition that separates a mainframe from a well-designed "high-end" server (or supercomputer)?

PS: I'm asking from a usability perspective, not really a hardware one.

2007-01-06 04:51:31 · 3 answers · asked by flyddw 2 in Computers & Internet Other - Computers

3 answers

We have similar roots. If you remember HOW the old IBM mainframes used to communicate, they were all 'channel attached', and that is the main difference. Mainframes are built for DIRECT attachment of 'consoles'. Servers use various protocols, such as TCP/IP, to communicate remotely. While many high-end mainframes have remote-access capabilities, the grey area is very wide today.
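To make the channel-attached vs. network-attached contrast concrete, here is a minimal Python sketch of the server-side model the answer describes: a service listens on a TCP port, and any client that can route packets to it may connect, unlike a console wired directly to the machine's I/O channel. This is an illustrative analogy using the standard socket library, not actual mainframe software.

```python
import socket

# "Server" model: listen on a TCP port; any remote client on the
# network can connect, in contrast to a directly channel-attached console.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # bind to any free port on localhost
server.listen(1)
port = server.getsockname()[1]

# A "remote" client connects over the protocol, not over a dedicated wire.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
conn, _addr = server.accept()

client.sendall(b"LOGON")             # client sends a request...
data = conn.recv(16)                 # ...the service receives it
print(data)                          # b'LOGON'

client.close()
conn.close()
server.close()
```

Both endpoints run in one process here purely for demonstration; in practice the client would be on another machine, which is exactly the point of the TCP/IP model.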

Usability in a true mainframe environment is much more cryptic and not very 'user-friendly'; little consideration is given to presentation.

2007-01-06 05:00:37 · answer #1 · answered by orlandobillybob 6 · 0 0

Well, simply put, a mainframe is a bigger, faster, more fault-tolerant version of what you'd expect a "high-end" server to be.

I know you don't want to know about hardware, but it's like comparing an IBM z890 to Dell's PowerEdge 6950.

With the z890 you're going to be running many operating systems and serving loads of users. On the "high-end server" you'll probably run only one OS.

The distinction between supercomputers and mainframes is not a hard and fast one, but supercomputers generally focus on problems which are limited by calculation speed while mainframes focus on problems which are limited by input/output and reliability. The differences and similarities include:

* Both types of systems offer parallel processing. Supercomputers typically expose it to the programmer in complex ways, while mainframes typically use it to run multiple tasks concurrently. One result of this difference is that adding processors to a mainframe often speeds up the entire workload transparently.

* Supercomputers are optimized for complicated computations that take place largely in memory, while mainframes are optimized for comparatively simple computations involving huge amounts of external data.

* Supercomputers are often purpose-built for one or a very few specific institutional tasks. Mainframes typically handle a wider variety of tasks. Consequently, most supercomputers are one-off designs, whereas mainframes typically form part of a manufacturer's standard model lineup.

* Mainframes tend to have numerous ancillary service processors assisting their main central processors (for cryptographic support, I/O handling, monitoring, memory handling, etc.) so that the actual "processor count" is much higher than would otherwise be obvious. Supercomputer design tends not to include as many service processors since they don't appreciably add to raw number-crunching power.
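The parallel-processing contrast in the first bullet can be sketched in a few lines of Python (an illustrative analogy, not actual supercomputer or mainframe code): in the "supercomputer" style the programmer explicitly partitions one large computation across workers, while in the "mainframe" style many independent jobs simply run side by side and a scheduler spreads them over processors transparently.

```python
from multiprocessing import Pool

def chunk_sum(chunk):
    # One worker's share of a single large computation
    return sum(chunk)

def independent_job(n):
    # A self-contained task, like one user's batch job
    return n * n

if __name__ == "__main__":
    data = list(range(1_000_000))

    with Pool(4) as pool:
        # "Supercomputer" style: the programmer explicitly splits ONE
        # problem into chunks and combines the partial results.
        chunks = [data[i::4] for i in range(4)]
        total = sum(pool.map(chunk_sum, chunks))

        # "Mainframe" style: many unrelated jobs; the pool (like an OS
        # scheduler) distributes them across processors transparently.
        results = pool.map(independent_job, range(8))

    print(total)      # 499999500000
    print(results)    # [0, 1, 4, 9, 16, 25, 36, 49]
```

In the second pattern, adding workers to the pool speeds up the whole mix of jobs without any change to the jobs themselves, which mirrors the bullet's point about mainframe workloads.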

There has been some blurring of the term "mainframe," with some PC and server vendors referring to their systems as "mainframes" or "mainframe-like." This is not widely accepted, and the market generally recognizes that mainframes are genuinely and demonstrably different.

2007-01-06 04:57:41 · answer #2 · answered by Anonymous · 0 0

That's not correct

2016-09-19 04:29:07 · answer #3 · answered by ? 2 · 0 0
