ilghila

Tuesday, January 30, 2007

Complementary and Harmonious Colors

Mastering color is an essential skill for any good photographer. To achieve proficiency, a basic knowledge of primary and complementary colors is required. We will define these fundamental concepts and show how to exploit them to take better photographs.

In 1859 the great physicist James Clerk Maxwell demonstrated that every color can be obtained starting from just three colored light beams: one red, one green and one blue. By superimposing these three beams on a white screen and carefully adjusting their intensities, any color can be attained. This is the basis of the aptly named "additive synthesis". These three colors (red, green and blue, or RGB) are called "primary colors". Adding all the primary colors at their maximum intensities gives rise to white; black is the absence of any light.

Each primary color also has a "complementary color". The complementary color of a given primary color is defined as the color that, added to the primary color, gives white light. It can be shown (but we will omit the demonstration) that the complementary colors of red, green and blue are cyan, magenta and yellow, respectively. What a photographer must always keep in mind are simply these (primary, complementary) color pairs, illustrated in the short sketch after the list:
(red, cyan)
(green, magenta)
(blue, yellow)
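
If you like to think in numbers, here is a minimal Python sketch of the same relationship in 8-bit RGB, where white is (255, 255, 255): a color and its complement add up to white, channel by channel. (The function name is ours, just for illustration.)

    # Complementary colors in 8-bit RGB: a color plus its complement gives white.
    def complement(rgb):
        """Return the complementary color of an 8-bit RGB triple."""
        r, g, b = rgb
        return (255 - r, 255 - g, 255 - b)

    red = (255, 0, 0)
    print(complement(red))  # (0, 255, 255), i.e. cyan
    print(tuple(p + c for p, c in zip(red, complement(red))))  # (255, 255, 255), white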

Masterful control of complementary colors is essential in composing an image. The juxtaposition of complementary colors always creates striking chromatic contrast. Consequently, if you wish to make an object stand out from its background, pick a background with the complementary color of the object. Good examples are yellow autumn leaves against a blue sky or magenta flowers against green foliage. The main subject will immediately catch the viewer's attention. No matter how small the main subject is, against a complementary backdrop it will always be an important compositional element and will usually draw all the attention.

Similar colors, on the other hand, are harmonious rather than complementary. When only harmonious colors are present in a picture, color is typically not the main attraction; other fundamental elements take over, such as form and texture. An example might be green grass against a blue sky. Let me give you a tip: if you are going to shoot green grass, back lighting will produce a fantastic effect, making the green very vivid and glowing. So keep in mind that the direction of sunlight is important, too. Be careful when composing with similar colors in black & white photography: most of the time, what appears clear and well defined in the colored world will seem confused and lackluster in black and white.

Now you know how to compose a highly contrasting image or, on the contrary, a harmonious picture, at least from a chromatic point of view. It is time for the best thing you can do: experiment with what you have just learnt.

By Andrea Ghilardelli

More articles about photography at ilghila.com.

Tuesday, January 23, 2007

Frame Transfer and Interline CCD - Electronic Shutter in CCDs

Imaging devices with a CCD sensor, such as digital cameras and camcorders, can readily implement an electronic shutter. Such an electronic shutter requires no extra circuitry, because the way a standard CCD is operated already provides an inherent shutter. However, if a fast shutter speed is required, the CCD integrated circuit must be enhanced in some way.

Full-frame CCD

The full-frame architecture is a CCD in its simplest form. It is made of an array of photosensitive elements (pixels) where electrons are created by incoming photons from the moment we press the shutter release button until the end of the exposure time. Then a shifting phase occurs, moving these electrons one row at a time towards a sensing circuit that produces a voltage proportional to the number of electrons. The shifting takes place as follows. The electrons of the first row are shifted into an array of so-called serial registers placed at the edge of the CCD array; the electrons of the second row are shifted into the first row, and so on. At this point, the above-mentioned serial register shifts its content into a sensing output amplifier one pixel at a time, converting the electrons' charge into a voltage. Once all the pixels of the first row have been read by the output amplifier, the shifting phase takes place again and the whole sensing process is repeated, until all the pixels in the matrix have been read out. This mechanism has an inherent electronic shutter in it, in that the exposure is over once the whole matrix has been shifted out and read. The exposure starts, in turn, with an electronic reset of the CCD, during which all electrons in the array are swept away. Therefore, a mechanical shutter is not strictly necessary.
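
To make the row-by-row mechanism concrete, here is a toy Python simulation of full-frame readout on a tiny 3x3 array; the electron counts and the charge-to-voltage gain are made up for illustration.

    # Toy full-frame CCD readout: rows shift towards the serial register,
    # which is then emptied one pixel at a time by the output amplifier.
    array = [
        [10, 20, 30],  # row closest to the serial register
        [40, 50, 60],
        [70, 80, 90],
    ]

    readout = []
    while array:
        # Shift: the nearest row enters the serial register and all the
        # remaining rows move one place towards the register's edge.
        serial_register = array.pop(0)
        # Read: the output amplifier converts each pixel's charge to a voltage.
        for charge in serial_register:
            voltage = charge * 1e-6  # arbitrary gain for illustration
            readout.append(voltage)

    print(readout)  # pixel voltages in the order the amplifier produced them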

What makes all this useless for the shutter speeds commonly used in photography is its slowness. Shifting a row, reading it one pixel at a time, and repeating this process for all the thousands of rows in a CCD is very time consuming. As an order of magnitude, the time required to read a row is about 130us. Reading 2500 rows, as for an 8-megapixel camera, would require 130us*2500 = 325ms. This is the time necessary to read the whole array and corresponds to 1/3 of a second. Not only is this very slow for photography, it is not even the fastest shutter speed actually achievable. Indeed, while the first row is being read through the charge-detection output amplifier, all the other rows are still in the CCD array collecting incoming photons. Their exposure is therefore longer than that of the first row: the last row is read 325ms after the first one, so it is exposed to light 325ms longer. Besides, each row sits in a different position within the matrix at each shifting step, so smearing occurs. Hence, a practical shutter speed must be much slower than those 325ms. This mechanism is simply too slow to serve as a useful shutter.
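
The arithmetic above takes a few lines of Python, using the same order-of-magnitude figures:

    row_read_time = 130e-6       # ~130 us to shift and read one row
    rows = 2500                  # roughly an 8-megapixel sensor
    print(row_read_time * rows)  # 0.325 s, about 1/3 of a second
    # The last row is also exposed 0.325 s longer than the first one.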

Frame-transfer CCD

A faster solution is attained with frame-transfer CCDs. They break the process of shifting and reading the array into two parts. The array is duplicated: one part (the photosensitive or image array) acts exactly like the standard CCD array, collecting incoming photons, while the second part acts only as a temporary storage area (the storage array). The storage array is shielded from light, so that no electrons are generated there by incoming photons. The timing is the following. At the end of the exposure, all the electrons in the image array are shifted (transferred) into the storage array. Only when all this shifting is over does the reading phase begin. So, instead of shifting one row at a time and reading it pixel by pixel as in full-frame CCDs, in frame-transfer CCDs all the rows are first shifted together into the temporary area and only then read. The reading, that is the conversion of the number of electrons into a voltage, proceeds exactly as it would in a standard full-frame CCD; it takes place in the storage array, however, and not in the photosensitive one. The rows of the storage array are thus shifted one by one and read pixel by pixel. The advantage of this solution is that the transfer of the electrons from the photosensitive to the storage array is quite fast, while the longer reading phase is postponed. Once all the electrons are transferred into the storage array, which is optically shielded, the exposure is over.
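
The two phases can be sketched in Python as follows; as before, the values are toy numbers and the gain is arbitrary.

    # Frame-transfer CCD: a fast frame shift into the shielded storage array
    # ends the exposure; the slow pixel-by-pixel read happens afterwards.
    image_array = [[10, 20], [30, 40]]  # photosensitive array (toy values)
    storage_array = []

    # Phase 1: fast row-by-row transfer. Exposure ends when this loop completes.
    while image_array:
        storage_array.append(image_array.pop(0))

    # Phase 2: slow readout, exactly as in a full-frame CCD, but from storage.
    voltages = []
    for row in storage_array:
        for charge in row:
            voltages.append(charge * 1e-6)  # charge-to-voltage conversion

    print(voltages)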

As an order of magnitude, a row can be shifted by one place in roughly 100ns, about a thousand times faster than in the full-frame architecture! This is fast enough to provide a useful shutter for camcorder applications, where small CCDs are used. For instance, a typical 754x484 CCD would require 100ns*484 = 48.4us, which corresponds to about 1/20000s. Again, this is not the fastest attainable shutter speed, for the same reasons given in the previous paragraph. However, it is fast enough to offer a 1/1000 - 1/30 electronic shutter with a sufficient acquisition rate.

The drawback of frame-transfer compared with full-frame CCDs is evident: as the array is duplicated, the silicon area is doubled, which implies a higher cost.

Interline CCD

What about an even faster shutter speed? Interline CCDs still have a photosensitive array and a masked storage array, but the two are interlaced, so that each storage row is adjacent to its photosensitive counterpart: photosensitive and storage rows alternate. This means that just one shift is required to move the electrons safely from the photosensitive rows into the light-shielded storage rows, instead of a number of shifts equal to the number of rows. For instance, for an array with 2500 rows as in the previous example, the interline CCD closes its shutter 2500 times faster than a frame-transfer one! Any practical shutter speed is attainable with such a structure, independently of the number of rows in the sensor. The slow reading phase is then carried out from the storage rows, just as in frame-transfer devices. Interline CCDs share the drawback of the frame-transfer architecture: they take up a large area, thus raising costs.
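
Putting the architectures side by side, a few lines of Python show how long each one needs to "close" its electronic shutter, using the order-of-magnitude figures quoted above:

    row_shift = 100e-9       # ~100 ns per row shift

    print(row_shift * 484)   # 4.84e-05 s: frame transfer, 754x484 camcorder CCD (~1/20000 s)
    print(row_shift * 2500)  # 2.5e-04 s: frame transfer, 8-megapixel sensor
    print(row_shift * 1)     # 1e-07 s: interline, a single shift whatever the row count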

By Andrea Ghilardelli

More articles about photography at ilghila.com.

Thursday, January 18, 2007

Brightness and RGB Histograms – No Clipping Allowed

Each time a photographer is about to shoot, he has to set the correct exposure for the photograph, i.e. the aperture and shutter pair, and sometimes the ISO setting, too. In particular, photographers strive to avoid under- or over-exposure. There are many ways to determine the correct exposure. This time we are going to see a rarely used method that offers great power and control and exploits the histograms that all digital cameras and editing programs offer nowadays.
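
As a taste of what the full article covers, here is a minimal Python sketch (assuming numpy and an 8-bit image already loaded as an array) that measures how many pixels are clipped to pure black or pure white:

    import numpy as np

    def clipping_fractions(channel):
        """Fraction of pixels clipped to pure black (0) or pure white (255)."""
        channel = np.asarray(channel)
        return np.mean(channel == 0), np.mean(channel == 255)

    # Hypothetical 8-bit image; in practice, load a real photograph instead.
    image = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)
    black, white = clipping_fractions(image)
    print(f"clipped shadows: {black:.1%}, clipped highlights: {white:.1%}")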


Full article

Low Light Photography - One Long vs. Many Short

When shooting in subdued light, the classic method is to select a very slow shutter speed (tens of seconds or even more) to reveal the faintest objects; we take just one long exposure. An alternative way to proceed, however, is to take several short exposures of the same scene and then add them up with editing software like Photoshop. This technique offers a wealth of advantages, leading to better images and greater creativity.
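
The summing itself is simple; a minimal Python sketch (assuming numpy and a set of already-aligned frames, here faked with random data) looks like this:

    import numpy as np

    # Hypothetical stack of 30 short exposures of the same scene, already aligned.
    frames = [np.random.poisson(5.0, size=(480, 640)).astype(np.float64)
              for _ in range(30)]

    stacked = np.sum(frames, axis=0)  # adding frames emulates one long exposure
    stacked /= stacked.max()          # normalize to 0..1 for display
    # Signal grows with the number of frames, while uncorrelated noise grows
    # only with its square root, so the stack is cleaner than any single frame.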


Full article

Wednesday, January 03, 2007

Photogeneration – Physics Underlying Image Sensors

Microelectronic image sensors used in digital still cameras, such as CCDs and CMOS sensors, rely on the generation of electrons by incoming photons to detect light. We want to give a deeper insight into the physics underlying this phenomenon.

Photons Collide with the Image Sensor

Incident photons can break the covalent bonds holding electrons at atomic sites in the crystal lattice, provided that the photon energy is sufficient. This is what happens when we press the shutter release button of our camera: light from the scene we are shooting strikes the image sensor, which is made of silicon, as are all other integrated circuits. Once a covalent bond has been broken, the freed electron is able to move through the semiconductor crystal. This process is called "photogeneration". In terms of the energy-band structure, it is equivalent to exciting electrons from the valence band into the conduction band.

Sensors Are Sensitive to Infrared Radiation

For an incident photon to be able to do this, it must possess an energy equal to or greater than the bandgap energy, that is, the energy gap between the valence and the conduction bands. The band gap of silicon at ambient temperature is 1.124eV, which corresponds to a wavelength of 1.10 microns, in the near-infrared portion of the electromagnetic spectrum. So now we know that the sensors used in digital still cameras are sensitive to infrared radiation. As a photographer does not usually want to capture this part of the spectrum, a filter is necessary to remove infrared radiation before the light reaches the sensor. All cameras are equipped with such a filter. Those digital cameras that permit infrared photography simply offer the option of internally moving the filter out of the way.
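
The wavelength figure follows directly from the photon energy relation E = hc/lambda; here is a quick Python check:

    # Cutoff wavelength corresponding to the silicon band gap.
    h = 6.626e-34   # Planck constant, J*s
    c = 2.998e8     # speed of light, m/s
    eV = 1.602e-19  # one electronvolt in joules

    wavelength = h * c / (1.124 * eV)
    print(wavelength)  # ~1.10e-6 m, i.e. 1.10 microns: near infrared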

Absorption Coefficient

The radiation incident on the semiconductor surface is absorbed as it penetrates into the crystal lattice. The equation describing this process is
I(x) = Io exp(-ax)
where "Io" is the energy reaching the surface of the semiconductor (the sensor), "x" is the depth in the semiconductor and "a" is a coefficient called "absorption coefficient". As the exponential expression always implies, the absorption is very strong, so that photons are readily absorbed as they enter into the sensor. The absorption coefficient is a strongly decreasing function of photon wavelength. As an order of magnitude, high-energy ultraviolet radiation penetrates about 10nm into silicon before decaying appreciably, while infrared light penetrates about 100 microns, i.e. 10000 times deeper. Absorption of photons with energies higher than the band gap is almost entirely due to the generation of electrons.

CMOS Image Sensors vs. CCDs

CMOS image sensors are widely used in digital still cameras for capturing images. Their competitors are CCDs, which accomplish the same task. We analyze the pros and cons of these two options.

CMOS image sensors with an active-pixel architecture differ from CCD devices in that each pixel contains not only the photosensitive element transforming incoming photons (light) into electrons, but also circuitry that senses the generated charge through a charge-to-voltage conversion. This is the major difference between CCD and CMOS image sensors, and from there the rest of the architecture differs accordingly.

From this simple fact we can see the pros and cons of one solution versus the other. For starters, the opaque sensing circuitry does not allow photons to reach the photoactive substrate, where electrons are generated. Therefore, for a given pixel size, a CMOS sensor is inherently less sensitive to light, thus requiring brighter scenes.

Another difference is that there is just one charge-to-voltage amplifier in a CCD, but one per pixel in a CMOS sensor. A CCD therefore offers greater uniformity, because it is not afflicted by process spread (millions of transistors cannot statistically have identical electrical characteristics).

Contrary to CCDs, CMOS sensors perform the analog-to-digital conversion on-chip. The circuitry present in each pixel delivers an analog signal, which is fed to an analog-to-digital converter (ADC). This is yet another macroscopic difference between CCD and CMOS sensors: CMOS sensors have additional circuitry that increases their design effort and size, but they permit simpler off-chip circuitry.

Another difference is that CMOS sensors require much less energy and lower voltages to operate, thus leading to extended battery life for handheld applications.

Finally, an advantage of CMOS over CCD sensors is that they require a less specialized fabrication process, thus reducing cost.

Although all the above-mentioned differences are real, CCD and CMOS image sensors have come closer and closer in their characteristics and performance. That is why both technologies still co-exist, with their respective strengths and weaknesses exploited in different applications.