SAR Data Fusion With Other Sensors Like Optical & Hyperspectral

Remote Earth observation platforms capture data in different segments of the electromagnetic (EM) spectrum. An image is essentially a measure of the interaction between the target material and the imaging wavelength. Such interactions are not discrete but continuous in nature, and their magnitude varies as a function of wavelength. Information captured by a sensor tuned to a particular wavelength therefore depicts only a discrete slice of this response and does not comprehensively characterize the target material. The fusion of multi-sensor datasets is thus an attempt to partially reconstruct and approach the continuous response curve of the target, which defines its complete electromagnetic character.

The concept of data fusion appeared in the domain of remote sensing with the intention of enhancing the quality of the end products. For instance, panchromatic images (IKONOS PAN and SPOT HRV-PAN), which are rich in spatial information such as edges, are employed to sharpen multi-spectral images, which are rich in spectral information but lack well-defined boundaries between spectrally distinct classes. Remote sensing applications utilize a bouquet of airborne and spaceborne sensor datasets; among the popular ones are the optical multi-spectral and hyperspectral sensors and the synthetic aperture radars (SARs). While multi-spectral and hyperspectral sensors capture data over largely the same optical bandwidth, the fundamental difference lies in the number of bands and the width of each band. Multi-spectral sensors are limited to about ten bands, with a radiometer dedicated to each band and tuned to its central wavelength. A hyperspectral sensor, on the contrary, typically has several hundred bands, each only a few tens of nanometres (nm) wide. For instance, Landsat-8 is a multi-spectral system with 11 bands spanning 0.43 micrometre to 12.51 micrometre in discrete steps, with spatial resolution varying from 15 m to 100 m depending on the application, whereas Hyperion has 220 spectral bands between 0.4 and 2.5 micrometre. SAR sensors, by contrast, typically operate at centimetre-scale wavelengths, from the W band (0.53-0.30 cm) to the P band (133-76.9 cm).

Since different materials reflect and absorb differently at different wavelengths, the reflectance spectrum of a material, which plots the fraction of incoming solar radiation reflected as a function of incident wavelength, serves as a unique spectral signature for that material. A hyperspectral sensor measures such reflectance spectra and is widely utilized in estimating mineral concentration for prospectivity mapping. This is possible because each mineral is composed of different molecules that absorb energy in different regions of the spectrum, and with radiative transfer analysis the relative concentrations of the minerals can be determined. The same feat cannot be achieved with multi-spectral sensors because of their coarse spectral sampling. The SAR signal, on the other hand, is sensitive to both the dielectric and the geometric configuration of the target. Since the SAR wavelength is of the order of the physical dimensions of real-life targets, interaction with the incoming signal is prominent. Targets whose dimensions are small relative to the imaging wavelength are perceived as smooth, and the scattering from a resolution cell containing such targets is therefore relatively homogeneous.
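
As an illustration of how a reflectance spectrum acts as a signature, the sketch below compares each pixel spectrum of a hyperspectral cube against a reference spectrum using the spectral angle, a common similarity measure in hyperspectral analysis. The cube dimensions, the band count, and the reference spectrum are hypothetical placeholders rather than values from any particular sensor or mineral library.

```python
import numpy as np

def spectral_angle(pixel_spectrum: np.ndarray, reference: np.ndarray) -> float:
    """Angle (radians) between a pixel's reflectance spectrum and a reference
    spectrum; smaller angles mean the two spectra are more similar in shape."""
    cos_theta = np.dot(pixel_spectrum, reference) / (
        np.linalg.norm(pixel_spectrum) * np.linalg.norm(reference)
    )
    return float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Hypothetical hyperspectral cube: rows x cols x bands (220 bands, Hyperion-like).
cube = np.random.rand(100, 100, 220)

# Hypothetical reference reflectance spectrum for a target mineral (220 values).
reference = np.random.rand(220)

# Map of spectral angles; thresholding this map flags pixels whose spectra
# resemble the reference material.
angles = np.apply_along_axis(spectral_angle, 2, cube, reference)
candidate_mask = angles < 0.1  # threshold in radians, chosen per application
```

The same shape-based comparison underlies spectral-angle mapping used in mineral prospectivity studies; in practice the reference spectra come from laboratory or field spectral libraries rather than random arrays.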

Data fusion is essentially the process of merging multiple datasets that provide complementary information. The fundamental challenges are matching the pixel resolutions and geometrically aligning (coregistering) the datasets prior to fusion. Applications range from object detection, recognition, identification, and classification to object tracking, change detection, and decision-making. Remote sensing data fusion is broadly classified into pixel/data-level, feature-level, and decision-level fusion. Pixel-level fusion combines raw data from multiple sources into a single-resolution dataset with the intention of enhancing the information or detecting changes arising from the temporal variability of the datasets; a minimal sketch is given below. Feature-level fusion, on the contrary, extracts features such as edges, lines, and texture to generate a feature map that can be used in place of the original data for further processing. This is essential when the data possess an impractically large number of bands, beyond the scope of direct analysis. In decision-level fusion, the outcomes of several algorithms are combined to trigger a decision. Thus, the fusion of data from multi-sensor platforms is a utilitarian approach to the use of Earth observation datasets.
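
As a minimal sketch of pixel-level fusion, the snippet below resamples a hypothetical SAR backscatter image onto the grid of an optical image and stacks it as an additional band. It assumes the two datasets already share a common map projection; the array sizes, band counts, and resolutions are illustrative assumptions, not properties of any specific mission.

```python
import numpy as np
from scipy.ndimage import zoom

# Hypothetical inputs: a 4-band optical image at 10 m and a single-band
# SAR backscatter image at 20 m covering the same, already coregistered area.
optical = np.random.rand(4, 1000, 1000)    # bands x rows x cols
sar_db = np.random.rand(500, 500) * -20.0  # backscatter in dB

# Match pixel resolution: upsample the SAR layer to the optical grid (bilinear).
scale = (optical.shape[1] / sar_db.shape[0], optical.shape[2] / sar_db.shape[1])
sar_resampled = zoom(sar_db, scale, order=1)

# Pixel-level fusion here is simply stacking the aligned layers into one
# multi-source cube that a downstream classifier or change detector can consume.
fused = np.concatenate([optical, sar_resampled[np.newaxis, ...]], axis=0)
print(fused.shape)  # (5, 1000, 1000)
```

Real workflows would replace the random arrays with calibrated, orthorectified products and handle resampling through a geospatial library, but the essential pixel-level step of bringing the layers to a common grid before combining them is the same.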

In the ideal sense, a multi-sensor Earth observation platform should exploit the full breadth of the electromagnetic (EM) spectrum by simultaneously imaging terrestrial targets at multiple wavelengths. The inception of such systems would mark the emergence of an era of remote spectroscopy of the land cover spread over the Earth's surface, and analysis of their data would redefine hyperspectral remote sensing in its true sense. As opposed to merging images by coregistering data from multiple platforms, which are separated both spatially and temporally and therefore introduce errors into the interpretation, multi-sensor platforms would be a substantially superior choice. This tandem approach to imaging the Earth reduces ambiguity and increases confidence in image interpretation, providing myriad perspectives from which to reason about a phenomenon observed from the satellite by virtue of the diversity of the EM spectrum.