This stage entails loading the UAV images into the processing software and stitching them into a seamless image. Normally, information from the GPS device fitted to the UAV (known as geotags) is used to georeference the images (Ai et al., 2015; Turner et al., 2014, 2012). Geotags are contained in the associated image metadata and specify the UAV’s position (geographic coordinates) at the moment the image was taken. Upon loading the images, the processing software can normally detect the camera used, read its characteristics, and load the geotags as well. If GCPs were not measured, it is still possible to process the images into a seamless image using only the geotags. But, as explained before, this will likely lead to less precise coordinates because of the lower accuracy of consumer-grade GPS devices. Relying on geotags alone can cause the resulting image to be positionally shifted relative to an accurately georeferenced image.
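Geotags in image metadata (EXIF) store latitude and longitude as degrees, minutes and seconds plus a hemisphere reference, while most processing software works with signed decimal degrees. A minimal sketch of the conversion is shown below; the function name and the sample coordinates are illustrative only:

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert a degrees/minutes/seconds geotag reading to decimal degrees.

    EXIF GPS tags store latitude and longitude as three values
    (degrees, minutes, seconds) plus a reference letter: 'N'/'S' for
    latitude, 'E'/'W' for longitude. South and west become negative.
    """
    value = float(degrees) + float(minutes) / 60.0 + float(seconds) / 3600.0
    return -value if ref in ('S', 'W') else value

# Illustrative geotag: 52 deg 31' 12.3" N, 13 deg 24' 36.0" E
lat = dms_to_decimal(52, 31, 12.3, 'N')
lon = dms_to_decimal(13, 24, 36.0, 'E')
```

In practice the three DMS components are read from the GPSLatitude/GPSLongitude EXIF tags of each image; the processing software usually performs this conversion automatically when it loads the geotags.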
If GCPs were measured, they can be incorporated at this stage to improve the positional accuracy. The minimum number of GCPs required is three, but these must not lie on a single line and should preferably span a polygon that covers most of the mission area. A higher number of GCPs is usually preferred, but at some point adding another GCP only marginally improves overall accuracy. Determining the optimal number of GCPs is not trivial. Three factors can be considered: (1) the size of the mission area, (2) the magnitude of the image overlap and (3) the level of visual content. In general, (1) the bigger the mission area, the more GCPs are required (evenly spread), (2) the larger the overlaps, the fewer GCPs are required, and (3) the lower the visual content, the more GCPs are required. Visual content is affected by the availability of light; for example, images taken in the dark or in the evening have low visual content. The reader is advised to consult the manual of the processing software in use.
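The three rules of thumb above can be encoded as a toy heuristic for a starting GCP count. The thresholds below are purely illustrative assumptions, not values from any software manual; they only make the direction of each rule explicit:

```python
def suggested_gcp_count(area_ha, overlap_pct, good_lighting=True):
    """Toy heuristic for an initial number of GCPs.

    Encodes the three rules of thumb: larger area -> more GCPs,
    higher overlap -> fewer GCPs, lower visual content -> more GCPs.
    All numeric thresholds are illustrative assumptions only.
    """
    count = 3                        # absolute minimum: three non-collinear points
    count += int(area_ha // 10)      # rule 1: roughly one extra GCP per 10 ha
    if overlap_pct < 70:
        count += 2                   # rule 2: compensate for low overlap
    if not good_lighting:
        count += 2                   # rule 3: compensate for low visual content
    return count
```

For a small, well-lit mission with high overlap the heuristic returns the minimum of three GCPs; a large area flown with low overlap in poor light yields considerably more. The actual recommendation of the processing software manual always takes precedence.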
When incorporating GCPs at the georeferencing stage, care must be taken to specify the coordinate system in which the coordinates are expressed, and to assign the values to latitude and longitude in the correct order.
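A common slip when entering GCPs is swapping the latitude and longitude values. Since latitude is bounded by ±90° and longitude by ±180°, a simple range check can flag many such mistakes. A minimal sketch (function name and messages are illustrative):

```python
def check_gcp(lat, lon):
    """Sanity-check one GCP entered in geographic coordinates (decimal degrees).

    Latitude must lie in [-90, 90] and longitude in [-180, 180].
    A 'latitude' outside [-90, 90] that would still be a valid
    longitude is a strong hint that the two values were swapped.
    """
    if abs(lat) > 90 and abs(lat) <= 180 and abs(lon) <= 90:
        return 'latitude/longitude probably swapped'
    if abs(lat) > 90 or abs(lon) > 180:
        return 'out of range'
    return 'ok'
```

Note that this check cannot catch a swap when both values happen to lie within ±90°, so visually verifying the GCP positions against the stitched image remains good practice.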
Apart from proprietary software such as AgiSoft and Pix4D, free and open-source applications can be used for stitching UAV images. Examples are Microsoft Image Composite Editor (ICE), VisualSFM and OpenDroneMap. Microsoft ICE, for example, was found to be a useful and relatively easy-to-use application for image stitching, although it cannot produce digital surface models (DSMs) and, consequently, orthophotos. One may therefore choose to use it for stitching only and later export the results to other applications for subsequent processing.
The International Potato Center (CIP) is developing an open source application for image stitching (ISAM-CIP V2). This software is focused on agricultural applications. Readers can obtain a trial copy here. Present limitations of this application are:
- limited range of supported output formats (currently .bmp) and
- no support for geographical information integration.
The Pix4D software, on the other hand, presents the following challenges when stitching images from the multiSPEC4C camera. With a low overlap between images (less than 60%, caused by lower flight altitudes than recommended), one of the four bands may not stitch properly or may present artefacts due to image mismatches. Several approaches can help to minimize this problem:
- add tie points manually between the images that show inconsistencies,
- add GCPs that include latitude, longitude and height,
- plan the flights with 80% overlap or more. When high overlap cannot be achieved in one flight, a perpendicular flight can be flown over the same area of interest to generate additional images for better stitching.

Figures 5.8 and 5.9 are snapshots of aligned UAV images in the AgiSoft and Pix4D Mapper software, respectively.
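The overlap recommendation in the last point can be turned into simple flight-planning arithmetic: for a nadir-pointing camera over flat terrain, the ground footprint of one image dimension follows from the pinhole model, and the spacing between exposures (or between flight lines) is the footprint times one minus the desired fractional overlap. A sketch under these assumptions; the numeric values are illustrative and not the multiSPEC4C specifications:

```python
def ground_footprint(altitude_m, sensor_mm, focal_mm):
    """Ground coverage (m) of one image dimension for a nadir camera
    (simple pinhole model, flat terrain)."""
    return altitude_m * sensor_mm / focal_mm

def photo_spacing(footprint_m, overlap):
    """Distance (m) between consecutive exposures, or between adjacent
    flight lines, that yields the given fractional overlap."""
    return footprint_m * (1.0 - overlap)

# Illustrative numbers: 100 m altitude, 6.2 mm sensor width, 4 mm focal length.
fp = ground_footprint(100, 6.2, 4.0)   # 155 m footprint width
d80 = photo_spacing(fp, 0.80)          # 31 m between exposures for 80% overlap
```

Halving the spacing roughly doubles the number of images per line, which is the cost of moving, say, from 60% to 80% overlap; the perpendicular second flight mentioned above is an alternative when battery or storage limits rule this out.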