Earth Observation in the frame of EO-MINERS - Remote Sensing Technologies
Overview of remote sensing techniques
Aerial Photography - Aerial photographs have been taken since the early days of photography and aviation. The first picture from a (tethered) balloon appears to have been taken by 'Nadar' of Paris in 1858; aerial photography was subsequently used for military reconnaissance, e.g. during the American Civil War (1861-65) and the Franco-Prussian War (1870-71). The sensors are usually panchromatic grey-scale or colour films, but for special purposes films sensitised to other spectral bands, e.g. UV or IR, can be used. The resolution depends on the focal length of the camera, the format and type of photographic film used, and the distance to the observed scene. Typically, the camera lens points down vertically, but images taken at oblique angles can also be useful. The images are either inspected visually or digitised and then subjected to computer-supported image or pattern recognition. Today, images taken directly by digital cameras are used. In the EO-MINERS context, time-series of mine development and impact can be established using historical aerial photographs. These indicate changes in the size and location of open-cast mines, spoil heaps and tailings ponds, as well as changes to vegetation, e.g. due to acid drainage or soil gases.
The classical platforms are specially equipped aeroplanes that can accommodate the long lenses associated with large film formats or photosensitive diode arrays. At the other end of the scale, the availability of miniaturised digital cameras and remotely controlled, electrically powered aircraft or helicopters allows deployment in the form of palm-sized drones for law-enforcement or military purposes. In addition to airborne photography, satellite imagery is also used (e.g. via services such as Google Earth™).
Image of central Paris taken from a balloon in 1858 by Gaspard-Félix Tournachon ('Nadar').
Since single-band photographs do not provide spectral information, they are sometimes used in conjunction with shadows in order to maximise the effects of geometry. For example, photographs taken under glancing light (early morning or late afternoon) or at an oblique angle are used in archaeology to survey open land for changes in otherwise uniform vegetation that may indicate anthropogenic soil compaction or exchange, revealing foundations, ancient roads, etc.
Airborne photogrammetry - When the exact flight altitude and/or the location and elevation of several identifiable points are known, photographic images can be used to construct scaled maps. Two adjacent, overlapping images can be placed side by side and viewed through a stereo viewer. Stereo-photogrammetry was an important tool in the construction of topographic maps, but today it may be superseded by other techniques, e.g. radar imagery, for constructing (digital) terrain models.
Geologists inspect aerial (stereo) photographs visually for structures or patterns that are not visible from the ground. Preferential directions of valleys and other structures ('lineaments') may indicate geological features, such as faults. This technique is particularly suited to areas with little vegetation and soil cover.
Overlapping aerial photographs and photogrammetric stereo viewer.
Spaceborne remote sensing – Remote sensing instruments can be placed on a variety of platforms, including artificial satellites. Although ground-based and aircraft platforms can be used, a large share of remote sensing imagery is produced by spaceborne sensors. Satellites have several unique characteristics which make them particularly useful for Earth observation. One of these is the ability to adjust the satellite orbit to the required objectives. Under specific conditions, a satellite can revolve at a speed which matches the rotation of the Earth, so that it appears stationary relative to the Earth's surface. This allows the satellite to observe and collect information continuously over a specific area. Weather and communications satellites commonly have these geostationary orbits. Due to their high altitude, some geostationary weather satellites can monitor weather and cloud patterns covering an entire hemisphere of the Earth. Geostationary orbits are, however, not suitable for observing high-latitude phenomena, and only a limited number of geostationary slots are available.
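The altitude of a geostationary orbit follows directly from Kepler's third law: the orbital period must equal one sidereal day. A minimal sketch of this calculation (constants are standard reference values, not taken from this document):

```python
# Altitude of a geostationary orbit from Kepler's third law:
#   T^2 = 4 * pi^2 * r^3 / GM   =>   r = (GM * T^2 / (4 * pi^2))^(1/3)
import math

GM = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
T_SIDEREAL = 86164.1  # one sidereal day, s
R_EARTH = 6378.137e3  # Earth's equatorial radius, m

r = (GM * T_SIDEREAL**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = (r - R_EARTH) / 1000.0

print(f"Geostationary altitude: {altitude_km:.0f} km")  # ~35786 km
```

This altitude (about 36 000 km) is what gives geostationary weather satellites their hemisphere-wide view, but it also limits their spatial resolution and their view of the polar regions.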
A geostationary satellite
Many remote sensing platforms are designed to follow an orbit (basically north-south) which, in conjunction with the Earth's rotation (west-east), allows them to cover most of the Earth's surface over a certain period of time. These are near-polar orbits, so named for the inclination of the orbit relative to a line running between the North and South poles. Many of these orbits are also sun-synchronous such that they cover each area of the world at a constant local time of day called local sun time. At any given latitude, the position of the sun in the sky as the satellite passes overhead will be the same within the same season. This ensures consistent illumination conditions when acquiring images in a specific season over successive years, or over a particular area over a series of days. This is an important factor for monitoring changes between images or for mosaicking adjacent images together, as they do not have to be corrected for different illumination conditions. Most of the remote sensing satellite platforms today are in near-polar orbits.
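Sun-synchronism is achieved by choosing the inclination so that the Earth's oblateness (the J2 term) precesses the orbit plane eastward by exactly 360° per year, keeping pace with the Sun. A sketch under the standard first-order approximation, assuming a circular orbit and standard Earth constants:

```python
# Inclination required for a sun-synchronous circular orbit at a given
# altitude, from the first-order J2 nodal-precession rate:
#   dOmega/dt = -(3/2) * J2 * n * (Re/a)^2 * cos(i)
# Sun-synchronism requires dOmega/dt = 2*pi per year (eastward).
import math

GM = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
RE = 6378.137e3          # Earth's equatorial radius, m
J2 = 1.08263e-3          # Earth's oblateness coefficient
YEAR = 365.2422 * 86400  # tropical year, s

def sun_sync_inclination(altitude_m):
    a = RE + altitude_m                  # semi-major axis, m
    n = math.sqrt(GM / a**3)             # mean motion, rad/s
    required = 2 * math.pi / YEAR        # required precession rate, rad/s
    cos_i = -required / (1.5 * J2 * n * (RE / a)**2)
    return math.degrees(math.acos(cos_i))

print(f"{sun_sync_inclination(700e3):.1f} deg")  # ~98.2 deg at 700 km altitude
```

The negative cosine means the inclination is slightly more than 90°, i.e. a retrograde near-polar orbit, which is why sun-synchronous satellites also provide the near-global coverage described above.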
A near-polar satellite
As a satellite revolves around the Earth, the sensor "sees" a certain portion of the Earth's surface. For raster imagery, the width of the area imaged on the surface is referred to as the swath. Imaging swaths for spaceborne sensors generally vary between tens and hundreds of kilometres. As the satellite orbits the Earth from pole to pole, its east-west position would not change if the Earth did not rotate. However, because the Earth rotates from west to east beneath it, the satellite's ground track appears to shift westward. This apparent movement allows the satellite swath to cover a new area with each consecutive pass. The satellite's orbit and the rotation of the Earth thus work together to allow complete coverage of the Earth's surface once a full cycle of orbits has been completed.
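The size of this westward shift can be estimated from how far the equator rotates eastward during one orbital period. A rough sketch, assuming a 98.8-minute orbital period (a typical value for a near-polar remote sensing satellite) and neglecting orbital precession:

```python
# Westward shift of the equatorial ground track between consecutive
# orbits, caused by the Earth rotating eastward beneath the satellite.
EQUATOR_KM = 40075.0        # Earth's equatorial circumference, km
SIDEREAL_DAY_MIN = 1436.07  # one sidereal day, minutes

def equatorial_shift_km(orbital_period_min):
    # Fraction of a full Earth rotation completed during one orbit,
    # converted to a distance along the equator.
    return EQUATOR_KM * orbital_period_min / SIDEREAL_DAY_MIN

print(f"{equatorial_shift_km(98.8):.0f} km")  # ~2757 km per orbit
```

Since the shift per orbit (thousands of kilometres) is far wider than a typical swath (tens to hundreds of kilometres), consecutive passes image widely separated strips, and many days of interleaved orbits are needed before complete coverage is achieved.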
If we start with any randomly selected pass in a satellite's orbit, an orbit cycle is completed when the satellite retraces its path, passing over the same point on the Earth's surface directly below it (the nadir point) for a second time. The exact length of the orbit cycle varies from satellite to satellite. The interval of time required for the satellite to complete its orbit cycle (sometimes called repetitivity) is not the same as the "revisit period" (also known as temporal resolution). Using steerable sensors, a satellite-borne instrument can view an area off-nadir before and after the orbit passes directly over a target, making the revisit time shorter than the orbit cycle time. The revisit period is an important consideration for a number of monitoring applications, especially when frequent imaging is required (for example, to monitor the spread of an oil spill or the extent of flooding). In near-polar orbits, areas at high latitudes are imaged more frequently than the equatorial zone due to the increasing overlap of adjacent swaths as the orbit paths converge near the poles.
One should note that the choice of orbit is a complex task and usually one of the main factors limiting a satellite's lifetime. Besides the many technical parameters to be considered, there is a growing body of international legislation on the subject. The interested reader should not consider this simplified overview an exhaustive description.