# Feature-based Registration

One of the main approaches to registration is the feature-based approach. We build correspondences between the images by finding similar features in both images and then compute a transformation that best satisfies these correspondences. Such algorithms can be split into three stages, which can for the most part be freely interchanged:

## Detection and Description

First, we extract points in the images that correspond to features. These can be corners, blobs, or other structures that have a well-defined location (unlike edges). We then build descriptors for these points from the surrounding pixels. Ideally, these descriptors are invariant to rotation, scaling, and lighting changes, so that features can still be matched between images that differ in these respects. Common algorithms for this step include SIFT, ORB, and BRIEF. The result of this step is a list of feature descriptors for each image, where each descriptor is in most cases a vector.
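As a toy illustration of the description step, the following sketch computes a simplified BRIEF-style binary descriptor: random intensity comparisons between pixel pairs around a keypoint. The patch radius, descriptor length, and function name are illustrative choices, not the actual BRIEF parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sampling pattern: 128 random pixel-pair offsets
# within an 8-pixel radius around each keypoint.
PAIRS = rng.integers(-8, 9, size=(128, 4))

def brief_descriptor(image, keypoint):
    """Simplified BRIEF-style sketch: one bit per pixel-pair
    intensity comparison around the keypoint."""
    y, x = keypoint
    bits = []
    for dy1, dx1, dy2, dx2 in PAIRS:
        bits.append(image[y + dy1, x + dx1] < image[y + dy2, x + dx2])
    return np.array(bits, dtype=np.uint8)

# Usage on a synthetic image; the keypoint must lie at least
# 8 pixels away from the image border.
img = rng.integers(0, 256, size=(64, 64))
desc = brief_descriptor(img, (32, 32))
print(desc.shape)  # (128,)
```

Binary descriptors like this are compared with the Hamming distance (number of differing bits), which is much cheaper than the Euclidean distance used for real-valued descriptors such as SIFT's.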

## Matching

We now need to find matching features across the two images. This is usually done by defining a distance between two feature descriptors and, for each feature, finding its nearest neighbor in the other image. The nearest-neighbor search is often done by brute force; alternatives like FLANN yield better runtimes, but only approximate the optimal neighbors.
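The brute-force matching described above can be sketched in plain NumPy. A common addition, used here, is Lowe's ratio test: a match is kept only if the nearest neighbor is clearly closer than the second nearest, which filters out ambiguous correspondences.

```python
import numpy as np

def match_brute_force(desc_a, desc_b, ratio=0.75):
    """For each descriptor in desc_a, find its nearest neighbor in
    desc_b by Euclidean distance; keep the match only if it passes
    the ratio test against the second-nearest neighbor."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

# Toy descriptors: B contains a slightly noisy copy of each row of A,
# so every feature should match its counterpart.
rng = np.random.default_rng(1)
A = rng.normal(size=(5, 32))
B = A + rng.normal(scale=0.01, size=A.shape)
matches = match_brute_force(A, B)
print(matches)  # [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)]
```

This is O(n·m) in the number of descriptors; approximate methods like FLANN trade exactness for speed by indexing desc_b (e.g., with k-d trees).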

## Mapping

We want to find the transformation that satisfies as many matches as possible. Depending on the class of transformation we allow, this means between 2 and 8 parameters in the 2D case, ranging from pure translations (2 parameters) to projective transforms (8 parameters). The problem can then be phrased as a least-squares problem, where each match contributes two equations (in 2D). Because some matches are wrong, robust methods like RANSAC are often applied to reject these outliers.
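A minimal sketch of the mapping step, assuming an affine model (6 parameters): each match contributes two linear equations, solved by least squares, with a simple RANSAC loop on top to suppress wrong matches. Function names, iteration count, and the inlier threshold are illustrative choices.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine fit. Each match (x, y) -> (x', y')
    contributes two rows: one for x' = a*x + b*y + tx and one
    for y' = c*x + d*y + ty (6 unknowns total)."""
    n = len(src)
    A = np.zeros((2 * n, 6))
    b = dst.reshape(-1)
    A[0::2, 0:2] = src
    A[0::2, 2] = 1
    A[1::2, 3:5] = src
    A[1::2, 5] = 1
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params.reshape(2, 3)  # [[a, b, tx], [c, d, ty]]

def ransac_affine(src, dst, iters=200, tol=2.0, seed=0):
    """RANSAC sketch: repeatedly fit an affine transform to 3 random
    matches (the minimal set), count how many matches it explains
    within tol, and finally refit on the best inlier set."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), size=3, replace=False)
        M = fit_affine(src[idx], dst[idx])
        pred = src @ M[:, :2].T + M[:, 2]
        inliers = np.linalg.norm(pred - dst, axis=1) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_affine(src[best_inliers], dst[best_inliers]), best_inliers

# Synthetic example: 30 matches under a known affine transform,
# with 5 of them corrupted to simulate wrong correspondences.
rng = np.random.default_rng(2)
src = rng.uniform(0, 100, size=(30, 2))
M_true = np.array([[1.0, 0.1, 5.0], [-0.1, 1.0, -3.0]])
dst = src @ M_true[:, :2].T + M_true[:, 2]
dst[:5] += 50  # outliers
M_est, inliers = ransac_affine(src, dst)
```

A plain least-squares fit over all 30 matches would be pulled badly off by the 5 corrupted ones; RANSAC instead recovers a transform from the consistent majority and discards the rest.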