Hello,
I have recently started to get into the actual hardware side of computer science (what makes computers tick). One of the most basic and well-known concepts is binary. I do have a basic understanding: I know that binary is just base-2 numbering (didn't word that too well, but you get it) and that a computer is limited in the size of the numbers it can handle by the bit width in use (most commonly 32 bits, even though 64-bit processing does exist). Why not use ternary or an even higher number base to allow easier calculation of larger numbers and, in turn, more data? Would it clog the processor? Is this far from possible and I just need to learn my stuff?
This is probably a real beginner's question and most likely not possible, but I'd like to know why. Thanks.

2007-08-28 15:07:31 · 2 answers · asked by Anonymous in Computers & Internet > Hardware > Other - Hardware
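
For a sense of the bit-width limit the question describes, here is a minimal Python sketch (illustrative only; the specific bit widths are just examples): the largest unsigned value an n-bit register can hold is 2**n - 1.

# Largest unsigned value representable in n bits is 2**n - 1.
for n in (8, 16, 32, 64):
    print(f"{n}-bit max: {2**n - 1}")
# 32-bit max: 4294967295
# 64-bit max: 18446744073709551615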

2 answers

That has actually been done. The IBM 14xx series of computers, back in the late '50s and early '60s, ran in decimal.

2007-08-29 11:52:46 · answer #1 · answered by The Phlebob 7 · 1 0

Electronic circuitry is a natural fit for 0/1, on/off, binary logic. Computers could use any number base, but the circuitry would be much more complicated. Besides, binary can represent a number of any size, if you're willing to use enough bits.

2007-08-28 15:21:34 · answer #2 · answered by Computer Guy 7 · 0 0
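
To illustrate the second answer's point that binary (or any base) can represent a number of any size given enough digits, here is a minimal Python sketch; the to_base helper is hypothetical, written only for this illustration:

def to_base(n, base):
    # Convert a non-negative integer to its digit string in the given base (2-10).
    digits = "0123456789"
    out = ""
    while True:
        n, r = divmod(n, base)
        out = digits[r] + out
        if n == 0:
            return out

value = 200
print(to_base(value, 2))   # 11001000 -> 8 binary digits
print(to_base(value, 3))   # 21102    -> 5 ternary digits
print(to_base(value, 10))  # 200      -> 3 decimal digits

A higher base needs fewer digits per number, but each digit (and the circuit that stores it) must reliably distinguish more states, which is the extra complication the answer points to.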
