What Controls The Picture Quality of a Digital Camera

The picture quality of a digital camera depends on several factors, including the optical quality of the lens, the image-capture chip, the compression algorithms and other components. However, the most important determinant of image quality is the resolution of the CCD: the more elements it has, the higher the resolution, and thus the greater the detail that can be captured.

In 1997 the typical native resolution of consumer digital cameras was 640×480 pixels. A year later, as manufacturing techniques improved and the technology progressed, the emergence of megapixel cameras meant that the same money could buy a 1024×768 or even a 1280×960 model. By early 1999, resolutions were as high as 1536×1024, and before the middle of that year the two-megapixel barrier had been breached with the arrival of 2.3-megapixel CCDs supporting resolutions of 1800×1200. A year later the unrelenting march of the megapixels saw the three-megapixel barrier breached, with the advent of 3.34-megapixel CCDs capable of delivering a maximum image size of 2048×1536 pixels. The first consumer-model 4-megapixel camera appeared in mid-2001, boasting a maximum image size of 2240×1680 pixels.

At this level, raw resolution is arguably little more than a numbers game and secondary to a digital camera's other quality factors. One of these – and almost as important to the quality of the final image as the amount of information the CCD is capable of capturing in the first place – is how cleanly that information is passed to the ADC. The quality of a CCD's colour management process is another important factor and one of the prime reasons for differences in the output...
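A resolution figure is simply the CCD's photosite count, so the megapixel ratings above can be checked with a few lines of arithmetic. The sketch below (the `megapixels` helper is illustrative, not part of any camera API) tallies the image sizes quoted; note that a 3.34-megapixel CCD delivers a 2048×1536 image of only about 3.15 million pixels, as some photosites are typically reserved for purposes other than image data.

```python
def megapixels(width, height):
    """Total pixel count of an image, expressed in millions."""
    return width * height / 1_000_000

# The consumer-camera resolutions mentioned above, in chronological order.
for w, h in [(640, 480), (1024, 768), (1280, 960),
             (1800, 1200), (2048, 1536), (2240, 1680)]:
    print(f"{w}x{h}: {megapixels(w, h):.2f} MP")
# 2048x1536 works out to 3.15 MP, slightly below the CCD's 3.34 MP rating.
```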

What is CCD in the Context of Digital Cameras?

The CCD is the technology at the heart of most digital cameras, replacing both the shutter and the film found in conventional cameras. Its origins lie in the 1960s, when the hunt was on for inexpensive, mass-producible memory solutions; its eventual application as an image-capture device hadn't even occurred to the scientists initially working with the technology. At Bell Labs in 1969, Willard Boyle and George Smith came up with the CCD as a way to store data. The first imaging CCD, with a format of 100×100 pixels, was created by Fairchild Electronics in 1974. By the following year the device was being used in TV cameras for commercial broadcasts, and it soon became commonplace in telescopes and medical imaging systems. It was some time later, however, before the CCD became part of the high-street technology that is now the digital camera.

A CCD works like an electronic version of the human eye. Each one consists of millions of cells known as photosites or photodiodes: essentially light-collecting wells that convert optical information into an electric charge. When light particles known as photons enter the silicon body of a photosite, they provide enough energy for negatively charged electrons to be released; the more light that enters the photosite, the more free electrons become available. Each photosite has an electrical contact attached to it, and when a voltage is applied to this contact, the silicon below the photosite becomes receptive to the freed electrons and acts as a container for them. Thus each photosite has a particular charge associated with it – the greater the charge, the brighter the intensity of the associated pixel. The...
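The charge-collection behaviour described above can be sketched in a few lines of Python. The full-well capacity and quantum-efficiency figures below are illustrative assumptions for the sketch, not values from any particular sensor.

```python
FULL_WELL_CAPACITY = 40_000  # assumed maximum electrons one well can hold

def expose_photosite(photons, quantum_efficiency=0.4):
    """Convert incident photons into electrons stored in the well.

    Each photon has a chance (the quantum efficiency, an assumed
    figure here) of freeing one electron. The well saturates at its
    full capacity, which is why very bright highlights clip to white.
    """
    freed_electrons = int(photons * quantum_efficiency)
    return min(freed_electrons, FULL_WELL_CAPACITY)

# More light -> more free electrons -> a brighter pixel, up to saturation.
print(expose_photosite(10_000))   # 4000 electrons
print(expose_photosite(500_000))  # 40000: the well is saturated
```

The clipping behaviour in the last line mirrors what happens in a real CCD when a photosite overflows: past a certain brightness, extra light adds no further charge.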
