Ideally, a camera on board a UAV acquires consecutive images while pointing at the point on the terrain directly below it, along the plumb line (the nadir point). At this point the top of an object is visible and its shape is undistorted. This is not the case when the image is taken at an angle away from nadir, when the terrain in the scene is not flat, or when the object is captured away from the center of the frame. In such cases the sides of an object are seen instead of its top, and the distortion grows the farther the object is from the image center. This is useful for generating a 3D model, because all sides of the objects of interest are seen, but when we want a 2D map, one more processing step is required.
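The magnitude of this effect can be quantified with the standard relief-displacement relation d = r·h/H, where r is the radial distance of the object's image from the nadir point, h is the object's height, and H is the flying height above the object's base. The snippet below is a minimal illustration; the function name and the example numbers are ours, not taken from any particular software package.

```python
def relief_displacement(r, h, H):
    """Radial displacement of an object's top on the image: d = r * h / H.

    r -- radial distance from the nadir point on the image (any unit)
    h -- object height (same unit as H)
    H -- flying height above the object's base
    The result is in the same unit as r.
    """
    return r * h / H

# A 10 m tall building imaged 50 mm from the nadir point at 100 m
# flying height is displaced 5 mm radially outward on the image.
d = relief_displacement(50.0, 10.0, 100.0)
```

The same relation shows why a vertical error in the surface model translates into a horizontal error in the corrected image that grows towards the image edges.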
The objective of orthophoto generation is to eliminate the distortions caused by camera tilt and ground relief, so that the image preserves the true dimensions of the scene. Orthophoto generation requires georeferenced photos and a Digital Surface Model (DSM). The accuracy of the orthophoto depends, to a very large extent, on the vertical accuracy of the DSM; this is particularly true in rugged terrain, where relief distortions are strong.
To generate a geometrically corrected image, the processing software uses the position and orientation of the camera with respect to the terrain for each captured image (the georeference), projects (stretches) each image onto the DSM to correct the perspective distortions, and then, for every area, selects the images that best show the tops of objects (those imaged near the center of the frame) to fuse all corrected images into a single mosaic.
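The projection step can be sketched with the collinearity equations of a pinhole camera: each DSM cell (X, Y, Z) is mapped into an image, and the colour found there is copied into the orthophoto grid. The sketch below uses hypothetical names, a perfectly nadir camera (identity rotation), and no lens distortion; production software additionally handles lens distortion, occlusions, and blending between overlapping images.

```python
import numpy as np

def ground_to_pixel(ground_xyz, cam_xyz, R, f, pixel_size, cx, cy):
    """Project a ground point into image (pixel) coordinates.

    ground_xyz -- (X, Y, Z) of the ground point, metres
    cam_xyz    -- camera position, metres
    R          -- 3x3 rotation from world to camera frame
    f          -- focal length, metres
    pixel_size -- physical pixel size, metres
    cx, cy     -- principal point, pixels
    """
    xc, yc, zc = R @ (np.asarray(ground_xyz, dtype=float) - cam_xyz)
    # Collinearity equations: image-plane coordinates in metres.
    x = -f * xc / zc
    y = -f * yc / zc
    return cx + x / pixel_size, cy + y / pixel_size

# Nadir camera 100 m above the origin; a ground point 10 m east of
# the nadir point lands 500 pixels right of the principal point.
u, v = ground_to_pixel((10.0, 0.0, 0.0), np.array([0.0, 0.0, 100.0]),
                       np.eye(3), f=0.05, pixel_size=1e-5, cx=2000, cy=1500)

# Orthophoto sketch (not run here): for each cell (i, j) of the output
# grid, look up Z in the DSM, project (X, Y, Z) into the chosen image,
# and copy the colour, e.g. ortho[i, j] = image[round(v), round(u)].
```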
As part of the processing, reports on geolocation and pixel reprojection accuracy are generated; these help to assess the quality of the results. If the DSM is inaccurate at certain points, artefacts will appear in the orthophoto. Where such artefacts are detected, a solution is to go back to the point cloud and clean it, or to add more manual tie points to improve the model.
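Reprojection accuracy is typically summarised as the root-mean-square distance, in pixels, between where a tie point was measured in an image and where the reconstructed model reprojects it. A minimal sketch, with an illustrative function name and sample numbers:

```python
import numpy as np

def rms_reprojection_error(observed_px, reprojected_px):
    """RMS distance (pixels) between measured tie-point positions and
    their positions reprojected through the estimated camera model."""
    diffs = np.asarray(observed_px, dtype=float) - np.asarray(reprojected_px, dtype=float)
    dists = np.linalg.norm(diffs, axis=1)
    return float(np.sqrt(np.mean(dists ** 2)))

# Two tie points, off by 1 px and 2 px respectively.
err = rms_reprojection_error([[100, 200], [310, 420]],
                             [[101, 200], [310, 422]])
```

Values well below one pixel usually indicate a well-constrained model; large values flag images or regions worth re-checking.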
Generated orthophotos can be used as base maps on which accurate measurements can be made. They can serve as input for further processing, e.g., 3D modelling, crop spectral signature extraction, crop classification, or field boundary delineation.