I know that languages are compiled into machine code, and I assume there's some kind of decoder to do the opposite process, but what is the actual principle behind conveying computer instructions using lots of binary numbers?

2006-11-07 12:06:19 · 8 answers · asked by richy 2 in Computers & Internet Programming & Design

Think I've got it from those links now:
the code in a computer language (C, C++, etc.) is simplified into a series of instructions. Each of these instructions has a certain number allotted to it by the computer. The code is thus transformed into a list of numbers representing instructions.

Is that right?

2006-11-07 13:06:12 · update #1

8 answers

Computers are based on transistors, which are electronic devices that can hold only two states: letting electric current pass, or stopping it. Most importantly, we can control these states at will using small electrical currents.

To one of these states we have assigned a 0 and to the other a 1.

Now, with 0s and 1s you can represent numbers, any number, if you stick to a set of rules.

For example, if you agree that digits have different values depending on their position, then you can represent any integer number:

0: 0
1: 1
2: 10 (the digit second from the right is worth 2)
3: 11
4: 100 (the digit third from the right is worth 4)
5: 101
6: 110
7: 111

and so on.
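
To make that positional rule concrete, here is a small sketch in C (the language choice is just for illustration) that turns a string of binary digits into an ordinary integer by applying exactly that rule:

#include <stdio.h>

/* Decode a string of '0' and '1' characters into an integer:
 * each step doubles the running value (shifting every earlier
 * digit one position left) and adds the new digit. */
int from_binary(const char *bits)
{
    int value = 0;
    for (const char *p = bits; *p != '\0'; ++p)
        value = value * 2 + (*p - '0');
    return value;
}

int main(void)
{
    printf("%d\n", from_binary("101"));  /* prints 5 */
    printf("%d\n", from_binary("111"));  /* prints 7 */
    return 0;
}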

Once you have a high enough count of these binary numbers (let's say 256), then you could decide that each number represents a mathematical operation of some kind, for example:

0 could be an addition,
1 could be a subtraction,

and so on.

So, in essence, you can describe operations with 1s and 0s, and you can also represent numbers with 1s and 0s. That gives you the rudiments of a language. Using the examples above, you could say:

0001
0000
0010

which could mean 1 + 2 (using a convention where the second group of digits represents an operation, in this case addition, as suggested previously),

all using only 0s and 1s.
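
A minimal sketch of how software could act on that made-up convention (the three-group layout and the opcode values are just the ones invented above, not any real machine's):

#include <stdio.h>

/* Toy interpreter for the convention above: an instruction is
 * three groups -- operand, opcode, operand -- where opcode 0
 * means add and opcode 1 means subtract. */
int execute(int left, int opcode, int right)
{
    switch (opcode) {
    case 0:  return left + right;   /* 0000 = addition */
    case 1:  return left - right;   /* 0001 = subtraction */
    default: return 0;              /* unknown opcode */
    }
}

int main(void)
{
    /* 0001 0000 0010  ->  1 + 2 */
    printf("%d\n", execute(1, 0, 2));   /* prints 3 */
    return 0;
}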

You can extend these principles as much as you want; by doing so, you would be creating a computer language.

Now, how does the computer interpret this?

With transistors you can also create electronic circuits that behave like small logical machines.

These logical machines (called logic gates) can be put together to perform arithmetic or logical operations.

So if you feed the series of 0s and 1s of your computer program to circuits behaving like logical/arithmetic machines (and this is done pretty much by wiring them together as needed), you have the principle of a computer system reading a program with complex meanings.
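
For a concrete example of gates doing arithmetic, the classic half adder combines an XOR gate and an AND gate to add two one-bit numbers. A sketch in C, using bitwise operators to stand in for the physical gates:

#include <stdio.h>

/* Half adder: XOR produces the sum bit, AND produces the carry bit. */
void half_adder(int a, int b, int *sum, int *carry)
{
    *sum   = a ^ b;   /* XOR gate */
    *carry = a & b;   /* AND gate */
}

int main(void)
{
    for (int a = 0; a <= 1; ++a)
        for (int b = 0; b <= 1; ++b) {
            int sum, carry;
            half_adder(a, b, &sum, &carry);
            printf("%d + %d = carry %d, sum %d\n", a, b, carry, sum);
        }
    return 0;
}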

2006-11-07 15:47:06 · answer #1 · answered by Tzctlpc 2 · 0 0

When you compile source code, it is first translated into assembly language, an intermediate form between the 1s and 0s and the human-readable code that you actually write. The assembly language is then assembled into machine language, which consists of byte codes that tell the processor which individual operations to perform; there are codes to add the contents of two memory addresses together, for example. The important thing to remember is that lots of 1s and 0s put together can make larger numbers, just as a two-digit number is larger than a one-digit number.
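
As an illustration of those stages, here is a one-line C function with, in the comments, roughly the kind of x86-64 assembly and byte codes a compiler might emit for it (the exact instructions and bytes vary by compiler, target, and settings; this is illustrative only):

/* Add two integers. */
int add(int a, int b)
{
    return a + b;
    /* A typical optimizing compiler might emit something like:
     *   lea eax, [rdi + rsi]   ; add the two arguments   (bytes 8D 04 37)
     *   ret                    ; return to the caller    (byte  C3)
     * Each mnemonic corresponds to a numeric opcode -- the byte
     * codes the processor actually reads. */
}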

2006-11-07 12:12:26 · answer #2 · answered by incorrigible_misanthrope 3 · 1 0

Modern computers are digital devices that contain nothing more than a very fancy set of switches. Switches are either ON (1) or OFF (0).

Based on the switch settings it takes actions that are designed into the system.

This is the "machine" or "binary" code.

The principle is to get the switches set the way you need them in order to get the system to take the appropriate action. This is what your program does. It sets the switches (SEVERAL MILLION of them!).

The FIRST digital device was a calculator called the ABACUS, and it is several thousand years old!

2006-11-07 12:52:05 · answer #3 · answered by f100_supersabre 7 · 0 0

Google - history of programming

There is no "principle". Machines/computers understand only one language: all data is represented by a series of 1s and 0s. No matter if it's an image file or a text file, the computer only understands that file as 1s and 0s.
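
A tiny C example of that idea: the same byte can be printed as a character or as a number, and the interpretation lives in the program, not in the byte itself.

#include <stdio.h>

int main(void)
{
    char c = 'A';                        /* one byte: 01000001 */
    printf("as text:   %c\n", c);        /* prints A  */
    printf("as number: %d\n", (int)c);   /* prints 65 */
    return 0;
}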

2006-11-07 12:10:04 · answer #4 · answered by arus.geo 7 · 3 0

A computer can only read 0s and 1s. These numbers are electrical states at the computer's inputs.

2006-11-08 04:21:55 · answer #5 · answered by Siu02rk 3 · 0 0

Everybody will give you technical answers, but all you have to do is imagine the 1s and 0s as switches that turn on and off. All they do is switch circuits on and off to divert information to where it is needed.

2006-11-07 12:22:15 · answer #6 · answered by JAKE 2 · 1 0

This will explain bits and bytes to you
http://computer.howstuffworks.com/bytes.htm



This will help you understand compilers and how they work with processors: http://computer.howstuffworks.com/microprocessor.htm

2006-11-07 12:11:02 · answer #7 · answered by jack 6 · 1 0

try this
http://en.wikipedia.org/wiki/Binary_numeral_system

2006-11-07 12:12:00 · answer #8 · answered by m0rph0s1s 2 · 0 1
