Multi-Template-Matching is an accessible method to perform object-detection in images using one or several template images for the search.
The strength of the method, compared to single-template matching, is that combining the detections from multiple templates extends the range of detectable patterns. This helps when you expect variability of the object's appearance in your images, such as rotation or flipping.
The detections from the different templates are not simply pooled: they are filtered using Non-Maxima Suppression (NMS) to prevent overlapping detections.
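To illustrate the idea, here is a minimal pure-Python sketch of IoU-based Non-Maxima Suppression. The function names and the (x, y, width, height, score) detection format are hypothetical, chosen for the example; this is not the library's actual code.

```python
# Illustrative sketch of IoU-based Non-Maxima Suppression (NMS).
# Detections are (x, y, w, h, score) tuples; names are hypothetical,
# not the actual MTM API.

def iou(a, b):
    """Intersection-over-Union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def nms(detections, max_overlap=0.25):
    """Keep the highest-scoring detections, discarding any box that
    overlaps an already-kept box by more than max_overlap."""
    kept = []
    for det in sorted(detections, key=lambda d: d[4], reverse=True):
        if all(iou(det[:4], k[:4]) <= max_overlap for k in kept):
            kept.append(det)
    return kept

# Two overlapping hits (e.g. from two different templates) plus one distant hit:
hits = [(10, 10, 20, 20, 0.9), (12, 11, 20, 20, 0.7), (60, 60, 20, 20, 0.8)]
print(nms(hits))  # the overlapping 0.7 hit is suppressed
```

Sorting by score first guarantees that, among overlapping detections, only the most confident one survives.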
We currently have implemented Multi-Template-Matching (MTM) in:
- Fiji: activate the IJ-OpenCV and Multi-Template Matching update sites.
- Python: the original implementation, relying on OpenCV's matchTemplate. Install with pip install Multi-Template-Matching (case sensitive, and mind the -).
- python-oop: a more object-oriented version with a cleaner syntax. It relies on scikit-image and shapely (for the BoundingBox). Possibly a bit slower, but more interoperable and easier to extend.
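As a rough illustration of what a matchTemplate-style search does (a pure-Python sketch under simplifying assumptions, not OpenCV's implementation), the template is slid over the image and each position receives a similarity score; the best-scoring position is the detection:

```python
# Minimal sketch of single-template matching by sliding-window
# sum-of-squared-differences (SSD); lower score = better match.
# This is illustrative only, not OpenCV's matchTemplate code.

def match_template(image, template):
    """Return (row, col, score) of the best match.
    image and template are 2D lists of grayscale values."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best = None
    for r in range(ih - th + 1):          # slide the template vertically
        for c in range(iw - tw + 1):      # ... and horizontally
            ssd = sum(
                (image[r + i][c + j] - template[i][j]) ** 2
                for i in range(th)
                for j in range(tw)
            )
            if best is None or ssd < best[2]:
                best = (r, c, ssd)
    return best

image = [
    [0, 0, 0, 0],
    [0, 9, 8, 0],
    [0, 7, 9, 0],
    [0, 0, 0, 0],
]
template = [[9, 8], [7, 9]]
print(match_template(image, template))  # → (1, 1, 0): exact match at row 1, col 1
```

Multi-template matching runs such a search once per template and then merges the resulting detections with NMS, as described above.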
Refer to the wiki sections of the respective GitHub repositories for the implementation-specific documentation.
In particular, the Fiji and KNIME implementations have dedicated YouTube tutorials, while the Python implementation comes with example notebooks that can be executed in a browser.
Below are some general documentation pages:
- The open-access publication
- YouTube playlist
- Recorded talk about single- vs multi-template matching, and other outcomes of my PhD
- Poster of the project, available on Zenodo
- My PhD thesis on object-detection and image-annotation solutions for microscopy
- Research outcomes/publications citing MTM
If you use these implementations, please cite:
Thomas, L.S.V., Gehrig, J.
Multi-template matching: a versatile tool for object-localization in microscopy images
BMC Bioinformatics 21, 44 (2020). https://doi.org/10.1186/s12859-020-3363-7
Download the citation as a RIS file.
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 721537 “ImageInLife”.