Section 1: What is Remote Sensing?

Definitions

Remote sensing refers to the process of gathering information about an object from a distance, without touching the object itself. The remote sensing method most people think of first is the photographic image of an object taken with a camera. Remote sensing, however, has evolved into much more than looking at objects with our eyes. It now includes instruments that can measure attributes of objects which unaided human eyes cannot see or sense.

Some other definitions of Remote Sensing are:

"Photogrammetry and Remote Sensing are the art, science and technology of obtaining reliable information about physical objects and the environment, through a process of recording, measuring and interpreting imagery and digital representations of energy patterns derived from noncontact sensor systems" (Colwell, 1997).

"Remote sensing may be broadly defined as the collection of information about an object without being in physical contact with the object. Aircraft and satellites are the common platforms from which remote sensing observations are made. The term remote sensing is restricted to methods that employ electromagnetic energy as the means of detecting and measuring target characteristics" (Sabins, 1978).

"Remote sensing is the art and science of obtaining information from a distance, i.e. obtaining information about objects or phenomena without being in physical contact with them. The science of remote sensing provides the instruments and theory to understand how objects and phenomena can be detected. The art of remote sensing is in the development and use analysis techniques to generate useful information"(Aronoff, 1995).

 

History

In 1858 a French photographer, Gaspard-Félix Tournachon, was the first to take aerial photographs from a tethered balloon. A few years later, in 1861, aerial photographs became a tool for military intelligence during the American Civil War. Aerial photographs were also taken from cameras mounted on kites (1858) and on carrier pigeons (1903). In 1909 Wilbur Wright piloted the first airplane flight from which photographs were taken. The first use of aerial photographs in map making was presented in a paper by Captain Tardivo in 1913 at a meeting of the International Society for Photogrammetry.

Military aerial photos were used on a large scale during World War I. The military trained hundreds of people to process and interpret aerial reconnaissance photos. The French aerial units developed 56,000 photos in four days during the Meuse-Argonne offensive in 1918 (Colwell, 1997). After World War I and through the 1930's, commercial aerial survey companies employed many former military personnel to process aerial photos to produce maps such as topographic maps, forest management maps, and soil maps.

World War II saw the development of color-infrared film for the US Army in 1942. These images were used to detect enemy forces and equipment that were camouflaged. A majority of Allied intelligence gathered about the enemy during this war was the direct result of aerial photoreconnaissance.

The U.S. military and other government agencies such as the National Aeronautics and Space Administration (NASA) continued to develop the use of remote sensing during the cold war years. The 1960's also saw the expansion and development of earth remote sensing from space. The first military space photoreconnaissance satellite, Corona, was launched in 1960. Corona took pictures of the Soviet Union and its allies using photographic film. The exposed film was transferred to unmanned recovery vehicles in space, which then de-orbited and returned to earth by parachute carrying the film; the film was then processed and analyzed in the lab. The first series of weather satellites, the Television Infrared Observation Satellites (TIROS), began launching in 1960. NASA continued collecting earth observation images from space with the Apollo and Gemini spacecraft.


Figure 1.1  Cuban missile site, 1962
Figure 1.2  SR-71 Blackbird

Aerial photographs taken from high-altitude U-2 and low-altitude RF-101 aircraft uncovered missile installations in Cuba such as the one shown in figure 1.1. These images were televised to the world during the Cuban Missile Crisis in 1962. In 1964 the U.S. Air Force began flying the SR-71 Blackbird reconnaissance aircraft shown in figure 1.2. The SR-71 flies at speeds in excess of Mach 3 (more than 2,000 miles per hour) and at altitudes greater than 85,000 feet.

Scores of U.S. meteorological and earth observation satellites were launched during the 1970's. Manned spacecraft such as the Skylab space station also collected images of earth from space during this decade. In 1972 Landsat-1, shown in figure 1.3, became the first satellite launched for nonmilitary earth resource observation; its original resolution was only 80 meters. Landsat carried sensors capable of taking multispectral digital images.


Figure 1.3 Landsat Satellite

U.S. military photoreconnaissance satellites have been kept secret and unavailable to the general public. Beginning in 1976, the U.S. military deployed more sophisticated high-resolution satellites capable of relaying digital images to earth. Eight Keyhole-11 satellites were launched between 1976 and 1988, and three improved Keyhole-11B satellites were launched between 1992 and 1996. These satellites are able to produce images with estimated resolutions of nearly ten centimeters (four inches) (Vick et al., 1997).

Nonmilitary satellite images have been used to monitor the degradation and pollution of the environment. These images can also be used to assess damage from floods and other natural disasters, assist in forecasting the weather, locate mineral and oil reserves, locate fish stocks, monitor ocean currents, assist in land use mapping and planning, produce geologic maps, and monitor rangeland, forestry, and agricultural resources.

 

Fundamental Properties and Concepts

The Electromagnetic Spectrum

All objects, including plants and soil, emit and/or reflect energy in the form of electromagnetic radiation. Electromagnetic radiation travels in waves propagating through space, similar to those shown in figure 1.4. Three major components of these waves are frequency, amplitude, and wavelength. Frequency is the number of wave crests passing a point during a given period of time; one cycle per second is referred to as one hertz. Amplitude is the energy level of the wave, measured as the height of each wave peak. Wavelength is the distance from the top of one wave peak to the top of the following wave peak.


Figure 1.4 Electromagnetic Radiation
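
The relationship between wavelength and frequency can be made concrete with a short calculation. The following is a minimal Python sketch, not part of the original text; it assumes the wave travels at the speed of light in a vacuum, and the helper name frequency_hz is hypothetical.

    # Minimal sketch: relating wavelength and frequency for an electromagnetic wave.
    # Assumes propagation at the speed of light in a vacuum (about 3.0 x 10^8 m/s).
    SPEED_OF_LIGHT = 3.0e8  # meters per second

    def frequency_hz(wavelength_m: float) -> float:
        """Return the frequency (hertz, cycles per second) for a given wavelength."""
        return SPEED_OF_LIGHT / wavelength_m

    # Green visible light near 0.55 micrometers:
    print(frequency_hz(0.55e-6))   # roughly 5.5 x 10^14 Hz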

The most familiar source of electromagnetic radiation is the sun. The sun radiates energy covering the entire electromagnetic spectrum, as shown in figure 1.5.

Remote sensors act much like the human eye: they are sensitive to images and patterns of reflected light. A major difference between the human eye and remote sensors is the range of the electromagnetic spectrum to which they are sensitive.

The electromagnetic spectrum ranges from very short wavelengths of less than ten trillionths of a meter, known as gamma rays, to radio waves with very long wavelengths of several hundred meters. The electromagnetic spectrum can be sliced into discrete segments of wavelength ranges called bands, sometimes also referred to as channels.


Figure 1.5 Electromagnetic spectrum

It is the sun that most often provides the energy to illuminate objects (figure 1.6). The sun's radiant energy strikes an object on the ground and some of this energy that is not scattered or absorbed is then reflected back to the remote sensor.  A portion of the sun's energy is absorbed by objects on the earth's surface and is then emitted back into the atmosphere as thermal energy.  

Figure 1.6  The sun's energy reflected and emitted from objects on the earth's surface

Visible Region

The visible light portion of the electromagnetic spectrum ranges from 0.4 micrometers (µm) (shorter wavelength, higher frequency) to 0.7 µm (longer wavelength, lower frequency). This is the range of light to which the human eye is sensitive. Every object reflects, absorbs, and transmits electromagnetic energy in the visible portion of the electromagnetic spectrum as well as at other, non-visible frequencies. The portion of electromagnetic energy that passes completely through an object is referred to as transmittance. Our eyes receive the visible light reflected from an object.

The three primary colors reflected from an object (figure 1.7), known as the additive primaries, are the blue, green, and red wavelengths. Primary colors cannot be formed by combining any other primary colors. Intermediate colors are formed when a combination of primary colors is reflected from an object. Magenta is a combination of reflected red and blue, cyan a combination of reflected blue and green, and yellow a combination of reflected red and green.

Color film produces colors by using layers of dyes which filter out various colors. The three colors which absorb the primary colors, known as the subtractive primaries, are magenta, cyan, and yellow. Magenta absorbs green and reflects red and blue, cyan absorbs red and reflects blue and green, and yellow absorbs blue and reflects red and green. The absorption of all colors produces black. If no color is absorbed, the film produces white.

 

Figure 1.7  Blue, green, and red (the additive primaries); magenta, cyan, and yellow; black and white
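
The additive mixing described above can be illustrated with a small sketch. The snippet below is illustrative Python, not part of the original text; it represents each color as a (red, green, blue) triple where 255 means a primary is fully reflected and 0 means it is absorbed, and the helper name mix is hypothetical.

    # Additive color mixing with (red, green, blue) triples.
    BLUE  = (0, 0, 255)
    GREEN = (0, 255, 0)
    RED   = (255, 0, 0)

    def mix(*colors):
        """Combine reflected primaries channel by channel (capped at 255)."""
        return tuple(min(255, sum(c[i] for c in colors)) for i in range(3))

    print(mix(RED, BLUE))           # (255, 0, 255) -> magenta
    print(mix(BLUE, GREEN))         # (0, 255, 255) -> cyan
    print(mix(RED, GREEN))          # (255, 255, 0) -> yellow
    print(mix(RED, GREEN, BLUE))    # (255, 255, 255) -> white; reflecting nothing gives black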

Infrared Region

The non-visible infrared spectral region lies between the visible light and microwave portions of the electromagnetic spectrum. The infrared region covers a wavelength range from 0.7 µm to 14 µm. This broad range of infrared wavelengths is further subdivided into two smaller infrared regions, each of which exhibits very different characteristics.

The infrared region closest to visible light contains two smaller bands, labeled near infrared and short-wave infrared, with wavelengths ranging from 0.7 µm to 1.1 µm and from 1.1 µm to 3.0 µm respectively. These bands exhibit many of the same optical characteristics as visible light, and the sun is the primary source of the infrared radiation reflected from objects in them. Cameras used to capture images in the visible light spectrum can also capture images in the near infrared region by using special infrared film.

The other infrared region, with longer wavelengths ranging from 3.0 µm to 14.0 µm, is composed of two smaller bands labeled mid-wave infrared and long-wave infrared, with wavelengths ranging from 3.0 µm to 5.0 µm and from 5.0 µm to 14.0 µm respectively. Objects generate and emit thermal infrared radiation themselves, so they can be detected at night; detection does not depend on reflected infrared radiation from the sun. Remote sensors operating in this wavelength range measure an object's temperature.
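
The wavelength ranges given above can be summarized in a short lookup. This Python sketch is illustrative only; the band boundaries simply restate the ranges in the text, and the function name classify is hypothetical.

    # Classify a wavelength (in micrometers) into the regions described above.
    BANDS = [
        ("visible",             0.4,  0.7),
        ("near infrared",       0.7,  1.1),
        ("short-wave infrared", 1.1,  3.0),
        ("mid-wave infrared",   3.0,  5.0),
        ("long-wave infrared",  5.0, 14.0),
    ]

    def classify(wavelength_um: float) -> str:
        for name, low, high in BANDS:
            if low <= wavelength_um < high:
                return name
        return "outside the visible/infrared range"

    print(classify(0.55))   # visible
    print(classify(10.0))   # long-wave infrared (thermal)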

Interaction Between Plants and Electromagnetic Radiation

Leaf Structure

The structure of a leaf is shown in figure 1.8. The cuticle is a thin waxy layer covering the epidermis cells on the surface of the leaf. Tiny pores in the epidermis layer of cells are called stomata. The stomata are surrounded by guard cells, which cause the stomata to open or close. The guard cells regulate water evaporation from the leaf and also control the gas exchange between the leaf and the atmosphere.

The interior layer of the leaf is composed of two regions of mesophyll tissue. This is where most photosynthesis takes place. The palisade mesophyll lies just below the upper epidermis. These cells are elongated, lined up in rows, and contain most of the leaf's chloroplasts. Chloroplasts of most plants contain pigments and two different kinds of chlorophyll. Chlorophyll a is the most abundant and is bluish green in color. Chlorophyll b is yellowish green in color and absorbs light and then transfers that energy to chlorophyll a. Pigment molecules within the chloroplasts also absorb light energy and transfer the energy to the chlorophyll. The spongy mesophyll forms the lower interior of the leaf and is composed of loosely arranged, irregularly shaped cells. These cells contain chloroplasts and are surrounded by air spaces.


Figure 1.8  Cross section of a typical plant leaf

Spectral Response

Chlorophyll primarily absorbs light in the violet to blue and red wavelengths. Green light is not readily absorbed and is reflected, giving the leaf its green appearance. The internal cell wall structure of the mesophyll causes high reflectance of near infrared radiation; chlorophyll is transparent to near infrared radiation. The sharp increase of reflected energy just beyond the red region of visible light into the near infrared region is referred to as the red edge. Figure 1.9 shows this sharp reflection increase located around the 0.7 µm wavelength. The location of the red edge is not static throughout the life of a leaf. As the leaf matures, chlorophyll absorbs slightly longer wavelengths in the visible red region. This change moves the red edge shown in figure 1.9 to the right and is referred to as the red shift (Campbell, 1996).

Environmental stress factors such as drought, disease, weed pressure, insect damage, and others stress or injure plants. This stress causes physiological changes in the plant, so stressed plants have a spectral reflectance that differs from that of normal plants at the same growth stage. One example of a physiological change is the change in the color of plant leaves due to chlorosis. The yellow color from chlorosis is caused by the breakdown of chlorophyll: reflected green decreases and reflected red increases. Correlating the different spectral responses observed with remote sensing equipment to the actual condition of the plants is critical for accurate interpretation and identification of crop injury and stress.

Figure 1.9  Spectral reflectance of a green leaf, showing the sharp increase (red edge) near 0.7 µm
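
One way to see the red edge in practice is to look for the steepest rise in a measured reflectance curve. The Python sketch below is illustrative only: the wavelength and reflectance values are made-up sample numbers, not data from the figure.

    # Locate the red edge as the interval of steepest increase in reflectance.
    wavelengths_um = [0.45, 0.55, 0.65, 0.70, 0.75, 0.85, 1.00]   # hypothetical samples
    reflectance    = [0.05, 0.12, 0.05, 0.30, 0.45, 0.48, 0.50]   # fraction reflected

    slopes = [
        (reflectance[i + 1] - reflectance[i]) / (wavelengths_um[i + 1] - wavelengths_um[i])
        for i in range(len(wavelengths_um) - 1)
    ]
    steepest = max(range(len(slopes)), key=lambda i: slopes[i])
    print("red edge between", wavelengths_um[steepest], "and", wavelengths_um[steepest + 1], "um")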

Sensor Types

Most remote sensors measure and record the magnitude and frequency of radiation reflected from an object. The recorded spectrum of the object is then compared and matched to the spectral signatures of known objects, allowing the object on the ground to be identified and classified.
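
The matching step can be sketched as a nearest-signature search. The Python example below is a simplified illustration, not an operational method; the signature values are invented, and the helper name closest_match is hypothetical.

    # Compare a measured spectrum to a small library of known spectral signatures.
    import math

    signatures = {
        "water":      [0.05, 0.03, 0.02, 0.01],
        "vegetation": [0.04, 0.10, 0.05, 0.45],
        "bare soil":  [0.10, 0.15, 0.20, 0.30],
    }

    def closest_match(measured, library):
        """Return the library entry with the smallest Euclidean distance to the measurement."""
        def distance(reference):
            return math.sqrt(sum((m - r) ** 2 for m, r in zip(measured, reference)))
        return min(library, key=lambda name: distance(library[name]))

    print(closest_match([0.05, 0.11, 0.06, 0.40], signatures))   # vegetation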

Remote sensing from aircraft and satellites uses imaging sensors, which measure reflected energy from the objects under surveillance. These imaging sensors fall into two general categories: active sensors and passive sensors. Passive sensors monitor only the naturally reflected sunlight or electromagnetic energy coming from an object, and they make up the majority of sensors in use today. Active sensors provide their own light or electromagnetic energy, which is transmitted to the object and then reflected back to the sensor. A common example of this type of sensor is radar. Cloud cover can often block passive sensors from receiving reflected energy from the ground, but radar systems can penetrate cloud cover.

Early remote sensing consisted of photographic images recorded on film by cameras. Reflected light received by the camera exposes the film by reacting with the chemical emulsion to create an image in analog format. The images produced are fixed and cannot be manipulated very much unless they are converted into an electronic digital format. Digital images have advantages over analog film images because computers can store, process, enhance, analyze, and render them on a computer screen.

Digital images are images reduced to numbers. The image is made up of numbers that represent attributes such as brightness, color or radiated energy wavelength, and the position of each point, or picture element, in the image. The smallest picture elements on a computer screen are called pixels. A digital image is made up of pixels arranged in rows and columns, as depicted in figures 1.10, 1.11, and 1.12.

Figure 1.10  A single pixel
Figure 1.11  A row of pixels represents a scan line

             


Figure 1.12 Rows and columns of pixels represent an image
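
A digital image of the kind shown in figures 1.10 through 1.12 can be represented directly as rows and columns of numbers. The Python sketch below is illustrative only; the brightness values are made up.

    # A tiny digital image: each number is the brightness of one pixel (0 = black, 255 = white).
    image = [
        [  0,  60, 120],   # one row of pixels is a scan line
        [ 60, 120, 180],
        [120, 180, 255],
    ]

    rows, columns = len(image), len(image[0])
    print(rows, "rows x", columns, "columns =", rows * columns, "pixels")
    print("brightness at row 2, column 3:", image[1][2])   # 180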

Resolution

Remote sensors measure differences and variations of objects. Four main types of resolution affect the accuracy and usefulness of remote sensors: spatial, spectral, radiometric, and temporal.

Spatial resolution describes the ability of a sensor to identify the smallest detail of a pattern in an image. The minimum distance at which patterns or objects in an image can be distinguished from each other is often expressed in meters.

Spectral resolution is the sensitivity of a sensor to a specific frequency range. The frequency ranges covered often include not only visible light but also non-visible electromagnetic radiation. The discrete range of wavelengths that a sensor is able to detect and measure is called a band. Features on the ground such as water and vegetation can be identified by the different wavelengths they reflect, and the sensor used must be able to detect these wavelengths in order to see these and other features.

Radiometric resolution is often called contrast. It describes the ability of the sensor to measure the signal strength or brightness of objects. The more sensitive a sensor is to the difference in brightness between an object and its surroundings, the smaller the object that can be detected and identified.

Temporal resolution is the period of time that elapses between images taken of the same object at the same location. The more frequently a sensor is able to return to the same location, the greater the temporal resolution. Several observations over time reveal changes and variations in the object being observed. For satellite systems, temporal resolution is described as the revisit period, the time it takes for a satellite to return to the same area on subsequent orbits.
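
The four resolutions can be gathered into a simple description of a sensor. The Python sketch below is purely illustrative: the class name SensorResolution and the example numbers are hypothetical and do not describe any real instrument.

    # Describing a sensor by the four resolutions defined above.
    from dataclasses import dataclass

    @dataclass
    class SensorResolution:
        spatial_m: float            # smallest ground detail distinguishable, in meters
        spectral_bands: list        # wavelength ranges (micrometers) the sensor detects
        radiometric_levels: int     # number of distinct brightness levels recorded
        temporal_days: float        # revisit period between images of the same location

    example = SensorResolution(
        spatial_m=30.0,
        spectral_bands=[(0.4, 0.7), (0.7, 1.1)],
        radiometric_levels=256,
        temporal_days=16.0,
    )
    print(example)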

Image processing

Once the raw remote sensing digital data has been acquired, it is processed into usable information. Analog film photographs are chemically processed in a darkroom, whereas digital images are processed within a computer. Processing digital data involves changing the data to correct for certain types of distortions; whenever data is changed to correct for one type of distortion, the possibility of creating another type of distortion exists. The changes made to remote sensing data involve two major operations: preprocessing and postprocessing.

Preprocessing

The preprocessing steps for a remotely sensed image are generally performed before the postprocessing enhancement, extraction, and analysis of information from the image. Typically the data provider preprocesses the image data before delivering it to the customer or user. Preprocessing of image data often includes radiometric correction and geometric correction.

Radiometric corrections are made to the raw digital image data to correct brightness values of objects on the ground that have been distorted by sensor calibration or sensor malfunction problems. The scattering of reflected electromagnetic energy by a constantly changing atmosphere is one such source of distortion.
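
One simple form of radiometric correction is a linear rescaling of the raw digital numbers. The Python sketch below is a hedged illustration only: real corrections depend on the sensor and the atmosphere, and the gain and offset values here are placeholders, not calibration data.

    # Apply a per-band linear correction (gain and offset) and keep values in the 0-255 range.
    def radiometric_correction(raw_values, gain=1.05, offset=-3.0):
        corrected = []
        for dn in raw_values:                      # dn = digital number of one pixel
            value = gain * dn + offset
            corrected.append(max(0, min(255, round(value))))
        return corrected

    print(radiometric_correction([10, 100, 250]))  # [8, 102, 255]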

Geometric corrections are made to correct the inaccuracy between the location coordinates of the picture elements in the image data, and the actual location coordinates on the ground. Several types of geometric corrections include system, precision, and terrain corrections.

System correction uses a geographic reference point for a pixel, such as one provided by the global positioning system. Correction accuracy often varies depending upon the accuracy of the position given by the global positioning system. The effect of aircraft platform instability is shown in figure 1.13; preprocessing removes this motion distortion, as shown in figure 1.14.


Figure 1.13  Raw uncorrected aerial sensor data.


Figure 1.14 Preprocessed data corrected for aircraft motion.

Precision correction uses ground control points. Ground control points, which have accurately predetermined longitude and latitude locations, are used to measure the location error of the picture elements. Several mathematical models are available to estimate the actual position of each picture element based on its distance from the ground control points.
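
One common mathematical model for this step is a first-order (affine) transformation fitted to the ground control points by least squares. The Python sketch below assumes the numpy library; the image and ground coordinates are invented for illustration, and the helper name to_ground is hypothetical.

    # Fit an affine mapping from image (column, row) positions to ground coordinates.
    import numpy as np

    image_pts  = np.array([[10, 12], [200, 15], [30, 180], [220, 210]], dtype=float)
    ground_pts = np.array([[500010, 4200120], [500210, 4200110],
                           [500025, 4199950], [500215, 4199930]], dtype=float)

    # Solve ground = coeffs^T * [col, row, 1] by least squares.
    design = np.hstack([image_pts, np.ones((len(image_pts), 1))])
    coeffs, *_ = np.linalg.lstsq(design, ground_pts, rcond=None)

    def to_ground(col, row):
        return np.array([col, row, 1.0]) @ coeffs

    print(to_ground(100, 100))   # estimated ground position of an arbitrary pixel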

Terrain correction is similar to precision correction except that, in addition to longitude and latitude, a third dimension of elevation is referenced with the ground control point to correct for terrain-induced distortion. This procedure is also referred to as ortho-correction or orthorectification. For example, the tall buildings appear to lean away from the center point of figure 1.15, while the buildings directly below the camera lens (the nadir) have only their roofs visible. The relief distortion is larger for objects farther from the center of the photo.


Figure 1.15 Example of terrain or relief displacement.

Postprocessing

Digital image postprocessing routines include image enhancement, image classification, and change detection. These computerized process routines improve the image scene quality and aid in the data interpretation.

Image enhancement techniques include contrast stretching, spatial filtering, and ratioing.

Contrast stretching changes the distribution and range of the digital numbers assigned to each pixel in an image. This is often done to accent image details that may be difficult for the human viewer to observe unaided.
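
A basic linear contrast stretch can be written in a few lines. The Python sketch below assumes the numpy library and is illustrative only.

    # Remap digital numbers so the darkest pixel becomes 0 and the brightest becomes 255.
    import numpy as np

    def contrast_stretch(band):
        low, high = band.min(), band.max()
        stretched = (band.astype(float) - low) / (high - low) * 255.0
        return stretched.astype(np.uint8)

    band = np.array([[60, 70], [80, 90]])
    print(contrast_stretch(band))   # [[  0  85] [170 255]]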

Spatial filtering involves the use of algorithms called filters to emphasize or de-emphasize brightness over a certain range of digital numbers in an image. High pass filters improve image edge detail; low pass filters smooth an image and reduce image noise.
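
A hedged sketch of spatial filtering with 3 x 3 kernels follows; it assumes the numpy library, uses a simple averaging kernel for the low pass case and a common edge-enhancing kernel for the high pass case, and leaves the image border unfiltered for brevity.

    # Convolve a 3x3 kernel over the interior pixels of an image.
    import numpy as np

    LOW_PASS  = np.full((3, 3), 1 / 9)                    # smoothing, noise reduction
    HIGH_PASS = np.array([[-1, -1, -1],
                          [-1,  8, -1],
                          [-1, -1, -1]], dtype=float)     # edge enhancement

    def apply_filter(image, kernel):
        out = image.astype(float).copy()
        for r in range(1, image.shape[0] - 1):
            for c in range(1, image.shape[1] - 1):
                out[r, c] = np.sum(image[r - 1:r + 2, c - 1:c + 2] * kernel)
        return out

    image = np.array([[10, 10, 10, 10],
                      [10, 50, 50, 10],
                      [10, 50, 50, 10],
                      [10, 10, 10, 10]], dtype=float)
    print(apply_filter(image, LOW_PASS))    # smoothed version of the block pattern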

Ratios are computed by taking the digital numbers for a frequency band and dividing them by the values of another band. The ratio range can be redistributed to highlight certain image features.
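
A band ratio reduces to an element-by-element division of two bands. The Python sketch below assumes the numpy library; the near-infrared and red digital numbers are invented for illustration.

    # Divide one band by another, pixel by pixel; a small constant avoids division by zero.
    import numpy as np

    def band_ratio(band_a, band_b):
        return band_a.astype(float) / (band_b.astype(float) + 1e-6)

    near_infrared = np.array([[80, 200], [90, 210]])
    red           = np.array([[40,  50], [45,  40]])
    print(band_ratio(near_infrared, red))   # larger ratios where near-infrared reflectance is high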

Image classification groups pixels into classes or categories. This image classification process may be unsupervised or supervised.

Unsupervised image classification is a computer-based system that assigns pixels to statistically separable clusters based on the pixel digital number values from several spectral bands. The resulting cluster patterns can be assigned different colors or symbols for viewing to produce a cluster map. The resulting map may not necessarily correspond to ground features that the user is interested in.
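
The clustering idea behind unsupervised classification can be sketched with a simple k-means style procedure, one common choice among many. The Python example below assumes the numpy library; the pixel values (one red and one near-infrared digital number per pixel) are invented, and the function name kmeans is hypothetical.

    # Group pixels into k clusters based on their band values.
    import numpy as np

    def kmeans(pixels, k=2, iterations=10):
        rng = np.random.default_rng(0)
        centers = pixels[rng.choice(len(pixels), size=k, replace=False)]
        for _ in range(iterations):
            distances = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
            labels = distances.argmin(axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = pixels[labels == j].mean(axis=0)
        return labels

    # Each row is one pixel: [red digital number, near-infrared digital number]
    pixels = np.array([[30, 20], [32, 22], [28, 25],
                       [60, 180], [65, 170], [58, 175]], dtype=float)
    print(kmeans(pixels, k=2))   # two clusters; the cluster numbering is arbitrary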

Supervised classification is a more comprehensive procedure in which an experienced human image analyst recognizes and groups pixels into classes and categories of interest to the user. The analyst picks several samples of homogeneous pixel patterns in the image, called training sites. Analysts identify these sites by actually visiting the ground location and making field observations (ground truthing) or by using past experience and skill. The remaining pixels outside the training sites are then matched to the training sites using statistical processing techniques.
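
One simple statistical matching rule is minimum distance to the class means derived from the training sites. The Python sketch below assumes the numpy library; the training samples and class names are invented for illustration, and classify_pixel is a hypothetical helper.

    # Assign a pixel to the training class whose mean band values are closest.
    import numpy as np

    training_sites = {
        "water":      np.array([[12, 8],   [14, 9],   [11, 7]],   dtype=float),
        "vegetation": np.array([[35, 160], [40, 170], [38, 150]], dtype=float),
        "bare soil":  np.array([[90, 95],  [95, 100], [88, 92]],  dtype=float),
    }
    class_means = {name: samples.mean(axis=0) for name, samples in training_sites.items()}

    def classify_pixel(pixel):
        return min(class_means, key=lambda name: np.linalg.norm(pixel - class_means[name]))

    print(classify_pixel(np.array([37.0, 158.0])))   # vegetation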

Change detection is a process in which two images of the same location taken on different dates are compared with each other to measure any changes in physical shape, location, or spectral properties. A third image is then produced which shows only the changes between the first and second images. Change detection lends itself to automated computer analysis: pixel digital number values are compared pixel by pixel within each frequency band. Computer analysis is most useful when combined with the human analyst's experience and knowledge to interpret the image changes.
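
The pixel-by-pixel comparison can be sketched as a simple image difference. The Python example below assumes the numpy library; the two small images and the change threshold are invented for illustration.

    # Subtract two co-registered images of the same area and flag large changes.
    import numpy as np

    date_1 = np.array([[100, 102], [98, 150]], dtype=float)
    date_2 = np.array([[101, 103], [97,  60]], dtype=float)

    difference = date_2 - date_1
    changed = np.abs(difference) > 20   # threshold chosen for illustration
    print(difference)
    print(changed)                      # True only where the digital numbers changed substantially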

 

