QR codes are a near-ubiquitous type of barcode designed to be scanned by ordinary cameras. In my various fieldwork campaigns, we have used QR codes to link images with metadata and sample identity, enabling field-to-result tracking of individuals. However, like most parts of fieldwork, taking these images rarely goes to plan: we often find that images taken in the field have comparatively low QR code decoding rates with common open-source libraries.
In this post, I document my recent work trying to improve the rate at which QR codes are recognised in these images from the field. TL;DR: with a few very simple and automated image manipulations (scaling, sharpening, and blurring), we can dramatically improve the rate at which barcodes can be recognised. If you are interested in using these methods, please see https://qrmagic.kdmurray.id.au/, and/or the NVTK.
First, let’s see a collection of images from the field. These examples were taken during several different field campaigns I’ve been involved in over the last several years. We use these images to take visual note of the appearance of plants we collect. As much of our work (e.g. DNA extraction) destroys the sample we have taken, we need some way to see what the plant (or leaf, flower, seed capsule etc) we collected actually looked like in the field. We also often take notes alongside the image, tying the sample’s ID to, e.g., date, location, etc. In case of a metadata mixup, we can later check these images to confirm the correct particulars as they were recorded in the field.
As you can see, these images vary in their quality. It is often the case that in an otherwise useful image, the QR code is either overexposed, out of focus, distorted, or very small. Obviously, that will affect our ability to decode QR codes in these images. So, our goal here is to find a set of simple and automated manipulations we can do in order to improve the rate at which our QR codes are detected.
We’ll be doing the image processing here in Python, given its excellent ecosystem for working with image (meta)data. Specifically, I will be using pyzbar to decode QR codes within our images. I use mostly PIL/Pillow for image IO and manipulation, but also the Python OpenCV bindings.
The first experiment will be to see how ZBar treats our images before any manipulation. In this post, I will use only snippets of the overall test code, which is available in full at the end of this post, and on GitHub.
This code takes about one second per image, but decodes only about 48% of our test images (a full comparison table is at the end). By eye, we would expect well over half of the above images to be decoded: there are visible barcodes in every single image, and while some are significantly distorted or defocused, 48% is a woeful decoding rate.
So, what can we do about it? There are several very fancy approaches that try to improve the actual decoding algorithm, either using deep-learning image recognition, or hand-coded detection of QR code sub-features. But I’m too stupid to understand much of that! So we are left with just simple manipulations. Luckily, they turn out to be quite effective.
A side note here: for clarity and brevity, I elide the actual code used to produce the numbers here, and show just how we would use the manipulations in a real decoder. In a real decoder we take the first result that succeeds, but in the true testing code (on GitHub), we ensure that every combination of preprocessors actually gets run. So, while the code below might seem to bias against later iterations (it does, for efficiency), the benchmarking code runs every iteration.
Here, we successively try different downscalings. This is the preprocessing step with the largest effect: somewhat magically, scaling images down improves the QR decoding rate! I suspect (but haven’t been able to confirm) that ZBar’s internal algorithm expects QR codes to be of an approximate size, and when they are larger, downscaling helps both by reducing the size of the QR code and by slightly sharpening the whole image.
Here, we combine a sharpen/blur pass with the previous downscaling step. NB: sharpness factors below one indicate blurring, while above one indicate sharpening.
We get reliably better results when we apply either sharpening or blurring after downscaling than with downscaling alone. On the whole, it seems that blurring does better, but this varies with the downscaling factor, and of course depends on the individual image.
Here, we apply the autocontrast routine from Pillow to automatically normalise the image histogram (i.e. ensure the image’s contrast is high enough), which helps decoding for low-contrast images.
It seems that automated contrast adjustment does very little to aid QR code decoding here, and this pattern held for a larger set of images. This also makes sense: any image containing both a plant or some other natural feature (e.g. soil) and a black QR code on white paper is already a relatively high-contrast image, especially if the exposure is correct (i.e. exposed for the plant).
Combining all pre-processors
So far, we have seen various treatments we can apply to our images to increase the rate of decoding success. But in reality, no one treatment will universally increase performance, as the optimal treatment depends on the individual image. What if we do everything, and either take the union of all results, or the first successful result?
Wow! Now we’re talking. By iterating over the possible treatments, we find the one(s) that work for each image. This allows us to decode 93% of our test images, a massive improvement over the 48% we can decode without any preprocessing. It does obviously take a lot longer to compute, but that is work the computer is doing, not me or you. And we can reduce the cost a bit by ordering the preprocessing steps by their likelihood of working and taking the first one that produces results.
While on the topic, I thought I’d outline a small list of things that didn’t work.
- Automatic brightness/contrast adjustment with CLAHE
- (also, the automatic contrast adjustment from Pillow, as shown above)
- Successively rotating images in ~15-degree steps between 0 and 90 degrees
- Using ZXing or other open-source QR decoding libraries (these performed no better than ZBar).
This code is available as part of the Natural Variation Toolkit, a set of tools to help manage the sorts of large natural variation collections I work with. I eventually intend to package this QR-decoding subset as a separate Python library on PyPI, but for now it’s embedded in the qrmagic CLI/web service (`pip install qrmagic`). The easiest way to access it is via the free but very slow web portal. If you would like to sort many (~100 or more) images, please use the CLI to do the hard work on your laptop¹. You can then use the website to do the curation and generate a script to sort the images. Any questions, email my first name at
And here is the full code, in case link-rot eats the links to GitHub:
¹ It should “easily” install on any Linux/Mac machine with `pip install qrmagic`. See help at `qrmagic-detect --help`; in summary, something like `qrmagic-detect --threads 8 --output my_images.json my-images/*.jpg` should work. This should be reasonably quick: about 30 minutes for 1000 images on an average modern laptop. ↩︎