
Technology - Where is Osama? A Hyperspectral Signal Processing Task!

Vinay K. Ingle
04/21/2003

Since the 9/11 terrorist attacks on the United States, the Pentagon has relentlessly pursued the task of locating and capturing the leadership of Al-Qaida and, in particular, Osama bin Laden. The rugged mountain ranges of eastern Afghanistan, with their intricate networks of caves in which Osama is believed to be hiding, make this a daunting task. Recently, the Pentagon's intelligence community has been considering the use of classified and highly sensitive hyperspectral imaging sensors to detect and identify cave activity from airborne platforms. The hunt for Osama has thus become the high-tech equivalent of finding him in the accompanying cartoon from a popular children's game (Fig 1; can you locate him?), and space-age remote-sensing hyperspectral technology may have an answer to this question.

Spectral Imaging Sensors
There are two general classes of remote imaging sensors: active and passive. Most remote sensing is performed with passive sensors, those that do not provide their own illumination. These spectral imaging sensors capture a portion of the electromagnetic spectrum from the ultraviolet through the infrared, which also covers the visual and near-visual range (Fig 2). Evolution has made the human eye perceive light only in the visible spectrum, through the combination of red, green, and blue (RGB) sensations. Imaging technology, on the other hand, has come a long way, from panchromatic (black-and-white) film to color (RGB) film and now to ultraspectral cubes, as shown in the following figure. A panchromatic sensor records images in a single (averaged) band. Color film records images in three color (RGB) bands. A multispectral sensor has between 5 and 20 bands. A hyperspectral sensor may have hundreds of bands, while the newer ultraspectral sensors may have thousands of very narrow spectral bands (these sensors are still in research and development). Packing more bands into a given spectral bandwidth yields much higher spectral resolution, which is the basis for detecting and identifying objects with hyperspectral sensors onboard high-altitude platforms. A rough sense of these resolution differences is sketched below.
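As a rough illustration only, the short Python sketch below computes the nominal width of a single band for each sensor class, assuming hypothetical band counts and an assumed 0.4 to 2.5 micrometer spectral range; none of these numbers come from a specific instrument.

# A minimal sketch with hypothetical band counts and an assumed spectral range.
SPECTRAL_RANGE_UM = (0.4, 2.5)          # visible through short-wave infrared (assumed)

sensor_bands = {
    "panchromatic": 1,
    "color (RGB)": 3,
    "multispectral": 20,
    "hyperspectral": 200,
    "ultraspectral": 2000,
}

width_um = SPECTRAL_RANGE_UM[1] - SPECTRAL_RANGE_UM[0]
for sensor, bands in sensor_bands.items():
    # Nominal band width in nanometers: total range divided evenly among bands.
    resolution_nm = 1000.0 * width_um / bands
    print(f"{sensor:>14}: {bands:5d} bands, ~{resolution_nm:8.1f} nm per band")

As the band count grows, each band covers a narrower slice of the spectrum, which is precisely the gain in spectral resolution described above.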

What is Hyperspectral Imaging?
This is an emerging military intelligence discipline that has evolved from aerial photography. As Fig 3 shows, it provides more information than just pictures. Hundreds of "snapshots" are taken over an area, containing both spatial (area) and spectral (wavelength) information. Objects (natural as well as manmade) have unique infrared signatures or characteristics, much as a person has his own fingerprints.

A pixel (derived from "picture element") is the fundamental unit of the snapshot. A group of pixels forms a two-dimensional image, similar to a mosaic. A hyperspectral image contains many, many pixels (typically 1000 × 1000); collectively they form the hyperspectral image. Each pixel contains some information about the scene in two dimensions, and the third dimension contains the spectral information (typically 128 bands). This information is collected in a hypercube; a typical hypercube contains nearly 50 million spectral values, the equivalent of 16 thousand pages of text. A rough sketch of this data structure follows.
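The Python/NumPy sketch below illustrates this data layout with a scaled-down, randomly filled cube; the dimensions are placeholders loosely based on the typical figures quoted above, not data from a real sensor.

import numpy as np

# A minimal sketch of a hypercube, scaled down so it runs quickly;
# real cubes are typically about 1000 x 1000 pixels by 128 bands.
rows, cols, bands = 100, 100, 128          # illustrative, smaller than real cubes
hypercube = np.random.rand(rows, cols, bands).astype(np.float32)

# Each spatial pixel carries a full spectrum: one value per spectral band.
pixel_spectrum = hypercube[50, 50, :]      # shape: (128,)

# A single band viewed alone is an ordinary 2-D image of the scene.
band_image = hypercube[:, :, 64]           # shape: (100, 100)

print(pixel_spectrum.shape, band_image.shape, hypercube.size)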

Some military and commercial applications of hyperspectral imaging include the detection of camouflaged targets (e.g., tanks), land mines, counterfeit money, pollution, etc.

Hyperspectral Signal Processing
By exploiting the finer detail available in the spectral signatures of targets and background materials, real-time detection and identification of military and civilian targets from airborne platforms using hyperspectral sensors is now possible. A key element of hyperspectral imaging exploitation is imaging spectroscopy: the identification of materials based upon their absorption and reflective properties. A fundamental assumption implicit in remote sensing is that there is a one-to-one correspondence between the observed data and the ground truth (the spectral signatures collected on the ground).

An initial processing task is to provide a spectrum at each sensor location that may be used to recognize the detected materials or, more specifically, objects. However, the "at sensor" radiance measurements are affected by both the atmospheric and the geometric conditions under which the spectra are obtained. To extract intrinsic surface properties, these conditions must be properly accounted for. Methods exist for correcting these atmospheric and geometric effects, but they rely upon specific knowledge of the spectral reflectance of ground materials. Such approaches require specialized knowledge about the location, time, and prevailing atmospheric conditions when the data were gathered, which renders them unsuitable for automated processing. The simplified model below illustrates the relationship being corrected.
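As a rough illustration only (this is not the research team's irradiance model), the sketch below uses a simple per-band linear relation between surface reflectance and at-sensor radiance to show why the atmospheric terms must be known, or modeled, before intrinsic surface properties can be recovered. All coefficients here are made-up placeholders.

import numpy as np

# A highly simplified, assumed relation between reflectance and radiance:
#   L(k) = a(k) * rho(k) + b(k)
# where, per band k, a(k) lumps solar illumination and atmospheric
# transmission and b(k) is the path radiance scattered into the sensor.
bands = 128
rho = np.random.rand(bands)               # surface reflectance (ground truth)
a = 0.8 + 0.1 * np.random.rand(bands)     # hypothetical illumination/transmission term
b = 0.05 * np.random.rand(bands)          # hypothetical path-radiance term

L = a * rho + b                           # radiance the airborne sensor would record

# Recovering rho from L requires knowing (or modeling) a and b; that is
# exactly the atmospheric/geometric correction problem described above.
rho_recovered = (L - b) / a
assert np.allclose(rho_recovered, rho)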

In an approach developed by our research team, we incorporate generalized atmospheric and geometric effects directly into the hyperspectral signal processing. We first derive and construct a model for image spectral irradiance. This model, together with many iterations of synthetically generated atmospheres produced with the MODTRAN simulation software, is applied to calibrated object ground-truth measurements to generate likely radiance spectra that span the range of all possible atmospheric and geometric conditions. These spectra are represented by a low-dimensional linear model, which is then used to determine the maximum likelihood of an object of interest being present within a scene. This forms a generalized likelihood ratio test for automated object identification that is invariant to atmospheric and geometric conditions. Our approach, called invariant detection, has been proven on several hyperspectral cubes for different objects and under different operational conditions. The example in Fig 4 shows the detection and localization of an "enemy" object in the right (processed) picture that is not visible in the left (radiance) picture. The outline of this detection scheme is sketched below.
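The sketch below gives a generic, greatly simplified version of this idea: a library of synthesized target spectra (random placeholders standing in for MODTRAN-generated radiances) is compressed into a low-dimensional subspace, and each pixel is scored by how well that subspace explains it. This is a generic matched-subspace, GLRT-style statistic; it does not reproduce the exact likelihood ratio test derived by the team.

import numpy as np

rng = np.random.default_rng(0)
bands, n_atmospheres, subspace_dim = 128, 500, 8

# Synthesized target radiance spectra under many atmospheric/geometric
# conditions (placeholders here; in practice these come from MODTRAN runs
# applied to calibrated ground-truth measurements).
target_spectra = rng.random((bands, n_atmospheres))

# Low-dimensional linear model: leading left singular vectors of the spectra.
U, _, _ = np.linalg.svd(target_spectra, full_matrices=False)
S = U[:, :subspace_dim]                    # bands x subspace_dim basis

def detection_score(pixel):
    """Fraction of the pixel's energy captured by the target subspace."""
    projected = S @ (S.T @ pixel)
    return float(projected @ projected) / float(pixel @ pixel)

# Score one hypothetical pixel spectrum; a score near 1.0 suggests the
# pixel is well explained by the target model, i.e., a likely detection.
pixel = rng.random(bands)
print(detection_score(pixel))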

Our algorithm exploits a material's emissivity to identify objects of interest within a scene; this information is used as a spectral discriminant to distinguish one object from another. The emissivity of an object is obtained from the truth data, so a very accurate spectral representation of the object of interest is obtained. Typically, multiple truth spectra are taken at different times, under different illumination conditions, at different positions, and at different aspect angles to produce a set of object representations. We then use this information within our algorithm to determine a spectral fit to the object of interest against the many synthesized atmospheric representations, as in the sketch below.
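Continuing the simplified model above, the sketch below shows one plausible way to combine several truth spectra with many synthesized atmospheres to form the set of candidate radiance spectra that the low-dimensional model is fitted to; all shapes and numbers are illustrative assumptions, not the team's actual procedure.

import numpy as np

rng = np.random.default_rng(1)
bands, n_truth, n_atmospheres = 128, 6, 500

truth_spectra = rng.random((n_truth, bands))        # measured object truth spectra (placeholders)
a = 0.7 + 0.3 * rng.random((n_atmospheres, bands))  # synthetic transmission/illumination terms
b = 0.05 * rng.random((n_atmospheres, bands))       # synthetic path-radiance terms

# Every (truth spectrum, atmosphere) pair yields one candidate target radiance;
# the stack of all pairs is what the low-dimensional subspace is fitted to.
synthesized = np.array([a_i * t + b_i
                        for t in truth_spectra
                        for a_i, b_i in zip(a, b)])  # shape: (n_truth * n_atmospheres, bands)
print(synthesized.shape)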

This algorithm and its underlying technology have recently been applied to detect gaseous effluents and to estimate their concentration, species type, diffusion rate, and extent. This brings us back to our original question: if Osama and his cohorts are hiding in a cave and breathing, can we locate him by detecting the presence of certain gases at the mouth of the cave? Hyperspectral signal processing may soon provide an answer.

(Vinay Ingle is a Professor of Electrical and Computer Engineering at Northeastern University. He received his PhD in Computer and Systems Engineering from Rensselaer Polytechnic Institute in 1981, his MSEE from the Illinois Institute of Technology, Chicago, and his BS in Electrical Engineering from the Indian Institute of Technology, Bombay, India. His research interests are in the areas of multidimensional signal processing and image processing.)







[Figures: Fig 1; Fig 2; Fig 3; Fig 4 (left: Radiance Image, right: Processed Image)]
