Satellite images are used for a wide range of applications, including (1) as a base map, (2) for urban planning, and (3) for agricultural monitoring. Such applications require that the location of features in the image is accurately determined both planimetrically (X, Y) and elevationally (Z). In other words, the position of each feature in the image should represent its true location on the earth’s surface and be provided in an appropriate geographical coordinate system (Hruska et al., 2012; Wang et al., 2015).
Satellite images are not always delivered with accurate geographic information (coordinates) for their features. Images delivered at lower processing levels may carry no geographic information at all, while those delivered at higher levels can still show substantial spatial displacement. For example, the geographic information contained in a level 2A product is derived from an approximation of the satellite’s location at the time of image acquisition. As with any practical method, this approximation introduces location errors: the horizontal error is on the order of 5.0 m at the 90th percentile circular error (CE90) (see here). This value applies to an image acquired at an off-nadir angle of 0, i.e. with the satellite looking straight down.
Apart from the positional error described above, further positional displacement/distortion may occur because the actual terrain elevation differs from that of the (simple) elevation model used, and because of non-zero off-nadir angles (Wang et al., 2015). Depending on the nature of the imaged terrain (i.e. flat or rugged) and the viewing angle of the satellite sensor, the apparent geographic position of an object can be displaced further from its true position; positional errors may occasionally reach several hundred metres. At a finer scale, tree crowns typically appear displaced under off-nadir viewing angles, and the taller the tree, the larger the displacement. This becomes evident when comparing multiple images acquired over the same area side by side, since sensor viewing angles typically differ between acquisitions.
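The magnitude of this relief displacement follows from simple geometry: an object of height h viewed at an off-nadir angle θ appears shifted by roughly h·tan(θ) on flat terrain. A minimal sketch (the tree height and viewing angle below are illustrative values, not figures from the text):

```python
import math

def relief_displacement(height_m: float, off_nadir_deg: float) -> float:
    """Approximate horizontal displacement (m) of a raised object
    viewed at a given off-nadir angle, assuming flat terrain."""
    return height_m * math.tan(math.radians(off_nadir_deg))

# A 20 m tall tree imaged at a 30-degree off-nadir angle appears
# displaced by roughly 11.5 m from its true ground position.
print(round(relief_displacement(20.0, 30.0), 1))  # → 11.5

# At nadir (0 degrees) there is no displacement at all.
print(relief_displacement(20.0, 0.0))  # → 0.0
```

This also illustrates why images of the same area acquired at different viewing angles place the same tree crown in different apparent positions.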
Geometric correction removes these displacements and ensures that pixels/features in the image are in their proper position on the earth’s surface. Image levels 3A and 3B typically do not require this step, as they often represent the highest available level of geometric correction. Geometric correction involves two fundamental steps:
- Geo-rectification: at this stage, an appropriate map coordinate system must be defined for the image. This is normally done with the use of (a) ground control points (GCPs) or (b) another, already geometrically rectified image that has the desired coordinate system. In the first case, GCPs must be identified in the unrectified image and their correct coordinates specified. In the second case, feature correspondences between the two images must be identified. A polynomial function is then used to fit the coordinates of the unrectified image to those of the GCPs or the rectified image.
- Orthorectification: this is the process of correcting the image for distortions caused by variations in terrain topography in combination with non-optimal satellite sensor viewing angles. A digital elevation model (DEM) is required for this step. The increased availability of Digital Surface Models (DSMs) (e.g., from SRTM, ASTER GDEM) has made these products a common choice for orthorectification. An animation of the result of an orthorectification procedure is nicely presented here.
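The polynomial fit in the geo-rectification step can be illustrated with a first-order (affine) transformation estimated from GCPs by least squares. This is a sketch with synthetic pixel and map coordinates, not the workflow’s actual implementation:

```python
import numpy as np

# Synthetic GCPs: image (column, row) pixel coordinates and their
# known map (X, Y) coordinates, e.g. from survey or a reference image.
pixel = np.array([[10, 10], [900, 20], [30, 850], [880, 870]], dtype=float)
map_xy = np.array([[500010.0, 4200990.0], [500910.0, 4200975.0],
                   [500025.0, 4200150.0], [500885.0, 4200135.0]])

# First-order polynomial: X = a0 + a1*col + a2*row (same form for Y).
A = np.column_stack([np.ones(len(pixel)), pixel])    # design matrix
coeffs, *_ = np.linalg.lstsq(A, map_xy, rcond=None)  # 3x2 coefficient matrix

# Apply the fitted transform back to the GCP pixels to gauge fit quality.
predicted = A @ coeffs
rmse = np.sqrt(np.mean(np.sum((predicted - map_xy) ** 2, axis=1)))
print(f"RMSE of fit: {rmse:.2f} m")
```

A first-order polynomial needs at least three GCPs; higher-order polynomials need more and can distort the image far from the GCPs, which is why well-distributed control points matter.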
As with the other processing steps, geometric correction can be performed with a number of proprietary (e.g. Erdas Imagine, ENVI) and open-source applications (e.g. QGIS, ILWIS, MultiSpec). Within the automatic workflow, geometric correction is performed with an R script that makes use of GDAL. The script performs orthorectification based on the .RPC files delivered with the images and a DEM of the user’s choice, such as the 30 m SRTM DEM product.
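For reference, an RPC-based orthorectification of this kind can be sketched directly on the command line with GDAL’s gdalwarp utility (the filenames and target projection below are placeholders; the RPC metadata must sit alongside the input image):

```shell
# Orthorectify using the image's RPC model and an SRTM DEM.
#   -rpc          : use the Rational Polynomial Coefficients transformer
#   -to RPC_DEM=… : DEM sampled for terrain heights during the transform
#   -t_srs        : target coordinate system (a UTM zone shown as an example)
gdalwarp -rpc -to RPC_DEM=srtm_30m_dem.tif \
         -t_srs EPSG:32635 -r bilinear \
         input_image.tif ortho_image.tif
```

The R script in the workflow wraps the same GDAL machinery; this command is the equivalent manual invocation.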