A raster scan, or raster scanning, is the pattern of image detection and reconstruction in television, and is the pattern of image storage and transmission used in most computer image systems. The word raster comes from the Latin word for a rake, as the pattern left by a rake resembles the parallel lines of a scanning raster.
In a raster scan, an image is cut up into successive samples called pixels, or picture elements, along scan lines. Each scan line can be transmitted as it is read from the detector, as in television systems, or stored as a row of pixel values in an array in a computer system. On a television receiver or computer monitor, each scan line is turned back into a line across the image, in the same order in which it was read. After each scan line, the scanning position is advanced, typically downward across the image in a process known as vertical scanning, and the next scan line is detected, transmitted, stored, retrieved, or displayed. This ordering of pixels by rows is known as raster order, or raster scan order.
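In a computer image system, raster order corresponds directly to how a two-dimensional image is laid out in memory: rows are stored one after another, so position (x, y) maps to a single flat index. A minimal sketch in Python (the tiny 4×3 image is invented for illustration):

    # Minimal sketch: traversing a tiny image in raster order. A 2-D image
    # stored as a flat array maps position (x, y) to index y * width + x.
    width, height = 4, 3
    pixels = list(range(width * height))   # one stored value per pixel

    for y in range(height):                # vertical scan: one line after another
        for x in range(width):             # horizontal scan: left to right
            index = y * width + x          # raster order: rows stored consecutively
            print(f"({x}, {y}) -> pixels[{index}] = {pixels[index]}")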
A pixel (short for picture element, using the common abbreviation "pix" for "picture") is a single point in a graphic image. Each such information element is not really a dot, nor a square, but an abstract sample. With care, pixels in an image can be reproduced at any size without the appearance of visible dots or squares; but in many contexts, they are reproduced as dots or squares and can be visibly distinct when not fine enough. The intensity of each pixel is variable; in color systems, each pixel typically has three or four dimensions of variability, such as red, green, and blue, or cyan, magenta, yellow, and black.
A pixel is generally thought of as the smallest complete sample of an image. The definition is highly context sensitive; for example, we can speak of printed pixels on a page, pixels carried by electronic signals or represented by digital values, pixels on a display device, or pixels in a digital camera (photosensor elements). This list is not exhaustive; depending on context, synonyms such as pel, sample, byte, bit, dot, or spot may be accurate. We can also speak of pixels in the abstract, or as a unit of measure, in particular when using pixels as a measure of resolution, e.g. 2400 pixels per inch, 640 pixels per line, or spaced 10 pixels apart.
The measures dots per inch (dpi) and pixels per inch (ppi) are sometimes used interchangeably, but have distinct meanings, especially in the printer field, where dpi is a measure of the printer's dot-printing resolution (e.g. ink-droplet density). For example, a high-quality inkjet image may be printed at 200 ppi on a 720 dpi printer.
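The ppi figure, not the printer's dpi, determines the physical size of the printed image. A small sketch of that arithmetic in Python (the image dimensions are invented for illustration):

    # Sketch: the printed size of an image follows from its pixel dimensions
    # and the chosen pixels per inch; dpi describes ink-droplet placement,
    # not image samples.
    width_px, height_px = 1600, 1200
    ppi = 200
    print(f"{width_px / ppi:.0f} x {height_px / ppi:.0f} inches")   # 8 x 6 inches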
The more pixels used to represent an image, the closer the result can resemble the original. The number of pixels in an image is sometimes called the resolution, though resolution has a more specific definition. Pixel counts can be expressed as a single number, as in a "three-megapixel" digital camera, which has a nominal three million pixels, or as a pair of numbers, as in a "640 by 480 display", which has 640 pixels from side to side and 480 from top to bottom (as in a VGA display), and therefore a total of 640 × 480 = 307,200 pixels, or about 0.3 megapixels.
The pixels, or color samples, that form a digitized image (such as a JPEG file used on a web page) may or may not be in one-to-one correspondence with screen pixels, depending on how a computer displays an image.
In computing, an image composed of pixels is known as a bitmapped image or a raster image. The word raster originates from halftone printing technology, and has been widely used to describe television scanning patterns.
The number of distinct colours that can be represented by a pixel depends on the number of bits per pixel (bpp). The maximum number of colors a pixel can take can be found by taking two to the power of the color depth. For example, common values are
8 bpp, 2⁸ = 256 colours
16 bpp, 2¹⁶ = 65,536 colours, known as Highcolour or Thousands
24 bpp, 2²⁴ = 16,777,216 colours, known as Truecolor or Millions
48 bpp, 2⁴⁸ ≈ 2.8 × 10¹⁴ colours; for all practical purposes a continuous colorspace; used in many flatbed scanners and for professional work
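The same relationship, two raised to the bit depth, as a short Python check:

    # Distinct colours representable at each common bit depth: 2 ** bpp.
    for bpp in (8, 16, 24, 48):
        print(f"{bpp} bpp -> {2 ** bpp:,} colours")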
Images composed of 256 colours or fewer are usually stored in the computer's video memory in chunky or planar format, where a pixel in memory is an index into a list of colours called a palette. These modes are therefore sometimes called indexed modes. While only 256 colours are displayed at once, those 256 colours are picked from a much larger palette, typically of 16 million colours. Changing the values in the palette permits a kind of animation effect. The animated startup logos of Windows 95 and Windows 98 are probably the best-known example of this kind of animation. On older systems, 4 bpp (16 colors) was common.
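A minimal sketch of indexed colour in Python (the palette and image data are invented for illustration); note how changing one palette entry recolours the image without rewriting any pixel data, which is the mechanism behind palette animation:

    # Minimal sketch of indexed colour: pixel values are palette indices.
    palette = [
        (0, 0, 0),       # index 0: black
        (255, 0, 0),     # index 1: red
        (0, 0, 255),     # index 2: blue
    ]
    image = [1, 0, 2, 2, 1, 0]               # six pixels, stored as indices

    print([palette[i] for i in image])       # resolve indices to colours

    # Palette animation: changing one entry recolours every pixel that
    # references it, without touching the image data.
    palette[1] = (0, 255, 0)                 # index 1 now means green
    print([palette[i] for i in image])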
For depths larger than 8 bits, the number is the sum of the bits devoted to each of the three RGB (red, green and blue) components. A 16-bit depth is usually divided into five bits for each of red and blue and six bits for green, as the human eye is more sensitive to green than to the other two primary colors. For applications involving transparency, the 16 bits may instead be divided into five bits each of red, green, and blue, with one bit left for transparency. A 24-bit depth allows 8 bits per component. On some systems, 32-bit depth is available: this means that each 24-bit pixel has an extra 8 bits to describe its opacity (for purposes of combining with another image).
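A sketch of the common 5-6-5 packing in Python (the helper names are illustrative, not a standard API):

    # Sketch of 5-6-5 packing: five bits of red, six of green, five of blue
    # in one 16-bit value.
    def pack_rgb565(r, g, b):
        """Pack 8-bit-per-channel colour into a 16-bit 5-6-5 value."""
        return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

    def unpack_rgb565(p):
        """Expand a 5-6-5 value back to approximate 8-bit channels."""
        r, g, b = (p >> 11) & 0x1F, (p >> 5) & 0x3F, p & 0x1F
        return (r << 3, g << 2, b << 3)   # low bits lost in packing stay zero

    p = pack_rgb565(200, 120, 40)
    print(hex(p), unpack_rgb565(p))       # 0xcbc5 (200, 120, 40)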
When an image file is displayed on a screen, the number of bits per pixel is expressed separately for the raster file and for the display. Some raster file formats have a greater bit-depth capability than others. The GIF format, for example, has a maximum depth of 8 bits, while TIFF files can handle 48-bit pixels. There are no consumer display adapters that can output 48 bits of colour, so this depth is typically used for specialized professional applications with film scanners, printers and very expensive workstation computers. On most computers, such files are rendered on screen at only 24-bit depth.
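One simple way such a reduction can happen is to keep the top 8 bits of each 16-bit channel; a hedged sketch in Python (real renderers may also dither or apply colour management):

    # Sketch: reducing a 48-bit pixel (16 bits per channel) to 24-bit depth
    # by keeping the high byte of each channel.
    def to_24bit(r16, g16, b16):
        return (r16 >> 8, g16 >> 8, b16 >> 8)

    print(to_24bit(65535, 32768, 257))    # -> (255, 128, 1)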
Subpixels
Many display and image-acquisition systems are, for various reasons, not capable of displaying or sensing the different colour channels at the same site. This limitation is generally addressed by using multiple subpixels, each of which handles a single colour channel. For example, LCDs typically divide each pixel horizontally into three subpixels. Most LED displays divide each pixel into four subpixels: one red, one green, and two blue. Most digital camera sensors also use subpixels, by using coloured filters. (CRT displays also use red-green-blue phosphor dots, but these are not aligned with image pixels and therefore cannot be said to be subpixels.)
For systems with subpixels, two different approaches can be taken:
The subpixels can be ignored, with pixels being treated as the smallest addressable imaging element; or
The subpixels can be included in rendering calculations, which requires more analysis and processing time, but can produce apparently superior images in some cases.
The latter approach has been used to increase the apparent resolution of colour displays. The technique, referred to as subpixel rendering, uses knowledge of pixel geometry to manipulate the three coloured subpixels separately, and is most effective with flat-panel displays set to their native resolutions (because the pixel geometry of such displays is fixed and predictable). This works best with black-on-white images and is therefore often used to make text sharper and easier to read. On CRTs, whose phosphor dots are not aligned with pixels, the technique cannot gain resolution, but it still produces an anti-aliasing effect and so still improves image quality to some extent.
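A minimal sketch of the core idea for an RGB-striped panel, assuming text coverage has been rasterized at three times the horizontal resolution (production renderers such as ClearType also filter across neighbouring subpixels to limit colour fringing, which is omitted here):

    # Sketch of horizontal subpixel mapping on an RGB-striped panel: ink
    # coverage is sampled at three times the pixel width, and each triple
    # of samples drives the red, green, and blue subpixels of one pixel.
    def subpixel_map(coverage_row):
        """coverage_row: ink coverage values 0.0-1.0 at 3x pixel width."""
        pixels = []
        for i in range(0, len(coverage_row) - 2, 3):
            r, g, b = coverage_row[i:i + 3]
            # Black ink on white: full coverage darkens that subpixel.
            pixels.append(tuple(round(255 * (1 - c)) for c in (r, g, b)))
        return pixels

    # A vertical stroke two subpixels wide, ending mid-pixel.
    print(subpixel_map([0.0, 1.0, 1.0, 0.0, 0.0, 0.0]))
    # -> [(255, 0, 0), (255, 255, 255)]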
Megapixel
A megapixel is 1 million pixels, and is a term used not only for the number of pixels in an image, but also to express the number of sensor elements of digital cameras or the number of display elements of digital displays. For example, a camera with an array of 2048×1536 sensor elements is commonly said to have "3.1 megapixels" (2048 × 1536 = 3,145,728).
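The figure is simply the element count divided by one million:

    # A camera's megapixel figure is the sensor-element count over one million.
    width, height = 2048, 1536
    print(f"{width * height:,} elements = {width * height / 1e6:.1f} megapixels")
    # -> 3,145,728 elements = 3.1 megapixels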
Digital cameras use photosensitive electronics, either charge-coupled device (CCD) or complementary metal–oxide–semiconductor (CMOS) image sensors, consisting of a large number of single sensor elements, each of which records a measured intensity level. In most digital cameras, the sensor array is covered with a patterned color filter mosaic having red, green, and blue regions in the Bayer filter arrangement, so that each sensor element can record the intensity of a single primary color of light. The camera interpolates the color information of neighboring sensor elements, through a process called demosaicing, to create the final image. These sensor elements are often called "pixels", even though they record only one channel (red, green, or blue) of the final color image. Thus, a so-called N-megapixel camera that produces an N-megapixel image provides only one-third of the information that an image of the same size could get from a scanner. As a result, certain color contrasts may look fuzzier than others, depending on the allocation of the primary colors (green has twice as many elements as red or blue in the Bayer arrangement).
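A sketch of the channel recorded at each site of a Bayer mosaic, in Python (pattern orientation varies between sensors; the common RGGB tiling is assumed here):

    # Which channel each sensor element records in an RGGB Bayer mosaic.
    def bayer_channel(x, y):
        if y % 2 == 0:
            return "R" if x % 2 == 0 else "G"
        return "G" if x % 2 == 0 else "B"

    for y in range(4):
        print(" ".join(bayer_channel(x, y) for x in range(4)))
    # R G R G   <- half the sites are green, a quarter red, a quarter blue;
    # G B G B      demosaicing interpolates the two missing channels per site.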
In contrast to conventional image sensors, the Foveon X3 sensor uses three layers of sensor elements, so that it detects red, green, and blue intensity at each array location. This structure eliminates the need for demosaicing and the associated image artifacts, such as color blurring around sharp edges. Citing the precedent established by mosaic sensors, Foveon counts each single-color sensor element as a pixel, even though the native output file has only one pixel for every three camera pixels[1]. With this method of counting, an N-megapixel Foveon X3 sensor therefore captures the same amount of information as an N-megapixel Bayer-mosaic sensor, though it packs the information into fewer image pixels, without any interpolation.
Standard display resolutions
Standard display resolutions include:
VGA: 640×480 = 0.3 megapixels
SVGA: 800×600 = 0.5 megapixels
XGA: 1024×768 = 0.8 megapixels
SXGA: 1280×1024 = 1.3 megapixels
UXGA: 1600×1200 = 1.9 megapixels
QXGA: 2048×1536 = 3.1 megapixels
QSXGA: 2560×2048 = 5.2 megapixels
Etymology
The word pixel was first published in 1965 by Frederic C. Billingsley of JPL, to describe the picture elements of video images from space probes to the Moon and Mars. Billingsley did not coin the term himself; he got it from Keith E. McFarland at the Link Division of General Precision in Palo Alto, who does not know where he got it, but says it was "in use at the time" (circa 1963).
The word is a combination of picture and element, via pix. Pix was coined in 1932 in a headline in Variety magazine, as an abbreviation for the word pictures, in reference to movies; by 1938 pix was being used in reference to still pictures by photojournalists.
The concept of a picture element dates to the earliest days of television, for example as Bildpunkt (the German word for pixel, literally picture point) in the 1888 German patent of Paul Nipkow. The earliest publication of the term picture element itself was in Wireless World magazine in 1927.