About 15 or 20 years ago, when I was still in uni, we had a computer vision lab. The main guy there had been working on the subject for years and dealt with businesses where his stuff was used for quality control.
Without fail, step one of computer vision was to bring the image down to grayscale and/or filter for specific colours, so you ended up with a 1-bit representation.
My "algorithm" for a robot that was to follow a line drawn on the floor boiled down to: "filter out the colour green, then look at the bottom rows of the image and find the black pixels. If they're to the left, adjust to the left; if to the right, adjust to the right." Roughly. I'm sure it could be done a lot more cleverly, but I was pretty proud of it, and the whole tool suite was custom made, from the editing environment to the programming language. Expensive cameras and robot, too.
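In modern terms, that steering rule might look something like the sketch below; this assumes OpenCV and numpy (the original ran on a custom tool suite, so everything here, including the threshold value, is illustrative):

```python
import cv2
import numpy as np

def steer_from_frame(frame_bgr, dark_threshold=60):
    """Follow a dark line: ignore the green channel, threshold for
    dark pixels, then compare the dark-pixel centroid in the bottom
    rows against the image centre."""
    b, g, r = cv2.split(frame_bgr)
    # "Filter out the colour green": keep only blue and red information
    gray = cv2.addWeighted(b, 0.5, r, 0.5, 0)
    line_mask = gray < dark_threshold          # the line is dark
    bottom = line_mask[-20:, :]                # rows closest to the robot
    xs = np.nonzero(bottom)[1]
    if xs.size == 0:
        return 0.0                             # line lost: hold course
    centre = frame_bgr.shape[1] / 2
    # Negative -> steer left, positive -> steer right
    return (xs.mean() - centre) / centre
```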
shoo:
If you enjoyed this post you may also like the 2024 book Foundations of Computer Vision: https://visionbook.mit.edu/
I don't have any background in computer vision, but I enjoyed how the introductory chapter gets right into it, illustrating how to build a limited but working simple vision system.
yunnpp:
Thanks for the reference. Looks very from-the-ground-up and comprehensive.
jsmailes:
The blob-finding algorithm makes me think of the Advent of Code problems - I wouldn't have thought to do a two-pass approach, but now that I see it set out in front of me it's obviously a great idea. Seems like this technique could quite easily be generalised to work with a range of problems.
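For reference, a minimal sketch of the classic two-pass labelling idea, with a union-find for the label equivalences; this is the textbook technique rather than the article's exact code:

```python
import numpy as np

def two_pass_label(binary):
    """Two-pass connected-component labelling, 4-connectivity.
    Pass 1 assigns provisional labels and records equivalences;
    pass 2 rewrites every pixel to its root label."""
    labels = np.zeros(binary.shape, dtype=int)
    parent = [0]  # union-find forest; index 0 is background

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    next_label = 1
    h, w = binary.shape
    for y in range(h):
        for x in range(w):
            if not binary[y, x]:
                continue
            up = labels[y - 1, x] if y > 0 else 0
            left = labels[y, x - 1] if x > 0 else 0
            if up == 0 and left == 0:
                parent.append(next_label)          # fresh label
                labels[y, x] = next_label
                next_label += 1
            elif up and left:
                ru, rl = find(up), find(left)
                labels[y, x] = min(ru, rl)
                parent[max(ru, rl)] = min(ru, rl)  # merge the two blobs
            else:
                labels[y, x] = up or left
    for y in range(h):                             # second pass
        for x in range(w):
            if labels[y, x]:
                labels[y, x] = find(labels[y, x])
    return labels
```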
nusl:
This title is excellent.
ryukoposting:
It may come as a surprise to some that a lot of industrial computer vision is done in grayscale. In a lot of industrial CV tasks, the only things that matter are cost, speed, and dynamic range. Every approach we have to making color images compromises on one of those three characteristics.
I think this kind of thing might have real, practical use cases in industry if it's fast enough.
vincenthwt:
Ah, I think you work in the same industry as me, machine vision. I completely agree with you; most applications use grayscale images unless it's a color-based application.
Which vision library are you using? I’m using Halcon by MVTec.
ryukoposting:
I used to work in industrial automation; I was mostly making the process-control equipment that your stuff would plug into, PLCs and whatnot. We had a close relationship with Cognex, but I don't remember the exact details of their software stack.
gridspy:
Also resolution and uniformity.
Color also makes major physical compromises: the demosaiced output presents the red, green, and blue channels as if they were sampled at the same physical location, but the actual sensor photosites are offset from each other.
swiftcoder:
This is a really solid intro to computer vision, bravo!
teiferer:
Appreciate the old school non-AI approach.
Sharlin:
Classical machine vision and pattern recognition is absolutely AI. Or at least it was AI before it became too mature to be called that. As they say, any AI problem that gets solved stops being AI and becomes just normal algorithmics.
amelius:
But have a look at the "Thresholding" section. It appears to me that AI would be much better at this operation.
vincenthwt:
It really depends on the application. If the illumination is consistent, such as in many machine vision tasks, traditional thresholding is often the better choice. It’s straightforward, debuggable, and produces consistent, predictable results. On the other hand, in more complex and unpredictable scenes with variable lighting, textures, or object sizes, AI-based thresholding can perform better.
That said, I still prefer traditional thresholding in controlled environments because the algorithm is understandable and transparent.
Debugging issues in AI systems can be challenging due to their "black box" nature. If the AI fails, you might need to analyze the model, adjust training data, or retrain, a process that is neither simple nor guaranteed to succeed. Traditional methods, however, allow for more direct tuning and certainty in their behavior. For consistent, explainable results in controlled settings, they are often the better option.
shash:
Not to mention performance. So often, the traditional method is the only thing that can keep up with performance requirements without needing massive hardware upgrades.
Counterintuitively, I've often found that CNNs are worse at thresholding in many circumstances than a simple Otsu or adaptive threshold. My usual technique is to use the least complex algorithm and work my way up the ladder only when needed.
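For concreteness, here is roughly what those two baselines look like in OpenCV (these are real cv2 calls; the file name and parameter values are placeholders):

```python
import cv2

img = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)

# Otsu: one global threshold, chosen to minimise intra-class variance.
# Works well under the consistent illumination of a machine-vision cell.
_, binary_otsu = cv2.threshold(img, 0, 255,
                               cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Adaptive: a per-pixel threshold from the local neighbourhood mean,
# more forgiving when lighting drifts across the frame.
binary_adaptive = cv2.adaptiveThreshold(img, 255,
                                        cv2.ADAPTIVE_THRESH_MEAN_C,
                                        cv2.THRESH_BINARY, 31, 5)
```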
hansvm:
Something I've had a lot of success with (in cases where you're automating the same task with the same lighting) is having a human operator manually choose a variety of in-sample and out-of-sample regions, ideally with some of those being near real boundaries. Then train a (very simple -- details matter, but not a ton) local model to operate on small image patches and output probabilities for each pixel.
One fun thing is that with a simple model it's not much slower than techniques like Otsu (you're still doing a roughly constant amount of vectorized, fast math for each pixel), but you can grab an alpha channel for free even when working in color spaces, allowing you to near-perfectly segment the background out from an image.
The UX is also dead-simple. If a human operator doesn't like the results, they just click around the image to refine the segmentation. They can then apply directly to a batch of images, or if each image might need some refinement then there are straightforward solutions for allowing most of the learned information to transfer from one image to the next, requiring much less operator input for the rest of the batch.
As an added plus, it also works well even for gridlines and other stranger backgrounds, still without needing any fancy algorithms.
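A minimal sketch of that idea, assuming scikit-learn; the patch size, the choice of logistic regression, and all names here are illustrative, not the actual setup described above:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def extract_patches(img, coords, size=5):
    """Flattened size x size patches around each (y, x) coordinate.
    img is H x W x C (e.g. an RGB image)."""
    pad = size // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    return np.array([padded[y:y + size, x:x + size].ravel()
                     for y, x in coords])

def train_segmenter(img, fg_clicks, bg_clicks, size=5):
    """Fit a tiny local model from operator-clicked foreground and
    background pixel coordinates."""
    X = np.vstack([extract_patches(img, fg_clicks, size),
                   extract_patches(img, bg_clicks, size)])
    y = np.array([1] * len(fg_clicks) + [0] * len(bg_clicks))
    return LogisticRegression(max_iter=1000).fit(X, y)

def probability_map(model, img, size=5):
    """Per-pixel foreground probability, usable as a soft alpha channel.
    (Written for clarity, not speed; vectorize this in practice.)"""
    h, w = img.shape[:2]
    coords = [(y, x) for y in range(h) for x in range(w)]
    X = extract_patches(img, coords, size)
    return model.predict_proba(X)[:, 1].reshape(h, w)
```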
MassPikeMike:
I am usually working with historical documents, where both Otsu and adaptive thresholding are frustratingly almost but not quite good enough. My go-to approach lately is "DeepOtsu" [1]. I like that it combines the best of both the traditional and deep learning worlds: a deep neural net enhances the image such that Otsu thresholding is likely to work well.
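[1] https://arxiv.org/abs/1901.06081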
OK, those are impressive results. Nice addition to the toolbox.
Greamy:
It can benefit from more complex algorithms, but I would stay away from "AI" as much as possible unless there is a genuine need for it.
You can analyse your data and derive some dynamic thresholds, build some small ML models, or even some tiny DL models; I would try the options in that order.
Some cases do need more complex techniques, but more often than not, you can solve most of your problems by preprocessing your data.
I've seen too many solutions where a tiny algorithm could do exactly what a junior implemented using a giant model that takes forever to run.
spookie:
There are also many other classical thresholding algos. Don't worry about it :)
Legend2440:
It indeed would be much better.
There’s a reason the old CV methods aren’t used much anymore.
If you want to do anything even moderately complex, deep learning is the only game in town.
shash:
I’ve found exactly the opposite. In domain after domain the performance of a pure deep learning method is orders of magnitude less than that of either a traditional algorithm or a combination.
And often the CNNs are so finicky about noise or distortion that you need something as an input stage to clean up the data.
do_not_redeem:
Sure, if you don't mind it hallucinating different numbers into your image.
Legend2440:
Right, but the non-deep learning OCR methods also do that. And they have a much much lower overall accuracy.
There’s a reason deep learning took over computer vision.
vincenthwt:
You're absolutely right, deep learning OCR often delivers better results for complex tasks like handwriting or noisy text. It uses advanced models like CNNs or CRNNs to learn patterns from large datasets, making it highly versatile in challenging scenarios.
However, if I can’t understand the system, how can I debug it if there are any issues? Part of an engineer's job is to understand the system they’re working with, and deep learning models often act as a "black box," which makes this difficult.
Debugging issues in these systems can be a major challenge. It often requires specialized tools like saliency maps or attention visualizations, analyzing training data for problems, and sometimes retraining the entire model. This process is not only time-consuming but also may not guarantee clear answers.
Legend2440:
No matter how much you tinker and debug, classical methods can’t match the accuracy of deep learning. They are brittle and require extensive hand-tuning.
What good is being able to understand a system if this understanding doesn’t improve performance anyway?
vincenthwt:
I agree; deep learning OCR often outperforms traditional methods.
But as engineers, it’s essential to understand and maintain the systems we build. If everything is a black box, how can we control it? Without understanding, we risk becoming dependent on systems we can’t troubleshoot or improve. Don’t you think it’s important for engineers to maintain control and not rely entirely on something they don’t fully understand?
That said, there are scenarios where using a black-box system is justifiable, such as in non-critical applications where performance outweighs the need for complete control. However, for critical applications, black-box systems may not be suitable due to the risks involved. Ultimately, what is "responsible" depends on the potential consequences of a system failure.
shash:
OCR is one of those places where you can just skip algorithm discovery and go straight to deep learning. But there are precious few of those kinds of places actually.
do_not_redeem:
GP is talking about thresholding and thresholding is used in more than just OCR. Thresholding algorithms do not hallucinate numbers.
Right now the neatest feature it has is the ability to run custom filters with varying window sizes over images, and to use custom formulas to blend several images.
I don't have a tutorial at hand on how to use it, but I have a YouTube video where I show some of its features:
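https://youtube.com/playlist?list=PL3pnEx5_eGm9rVr1_u1Hm_LK6...
Here's the source code if anyone's interested https://github.com/victorqribeiro/customFilter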
I recently learned about using image pyramids[1] in conjunction with template-matching algorithms like SAD (sum of absolute differences) to do simple and efficient object recognition; it was quite fun.
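1: https://en.wikipedia.org/wiki/Pyramid_%28image_processing%29

A rough sketch of the coarse-to-fine idea, assuming OpenCV for the pyramid and a naive SAD search (cv2.pyrDown is a real call; the search-window size and level count are arbitrary choices):

```python
import cv2
import numpy as np

def sad_match(image, template, ys=None, xs=None):
    """Exhaustive SAD search; returns the (y, x) of the best window."""
    ih, iw = image.shape
    th, tw = template.shape
    ys = ys if ys is not None else range(ih - th + 1)
    xs = xs if xs is not None else range(iw - tw + 1)
    tpl = template.astype(int)
    best, best_pos = None, (0, 0)
    for y in ys:
        for x in xs:
            sad = int(np.abs(image[y:y + th, x:x + tw].astype(int) - tpl).sum())
            if best is None or sad < best:
                best, best_pos = sad, (y, x)
    return best_pos

def pyramid_match(image, template, levels=3):
    """Search exhaustively only at the coarsest pyramid level, then
    refine around the upscaled estimate at each finer level. The
    template must stay larger than a few pixels at the top level."""
    imgs, tpls = [image], [template]
    for _ in range(levels - 1):
        imgs.append(cv2.pyrDown(imgs[-1]))
        tpls.append(cv2.pyrDown(tpls[-1]))
    y, x = sad_match(imgs[-1], tpls[-1])
    for lvl in range(levels - 2, -1, -1):
        y, x = 2 * y, 2 * x                      # carry estimate down
        ih, iw = imgs[lvl].shape
        th, tw = tpls[lvl].shape
        ys = range(max(y - 2, 0), min(y + 3, ih - th + 1))
        xs = range(max(x - 2, 0), min(x + 3, iw - tw + 1))
        y, x = sad_match(imgs[lvl], tpls[lvl], ys, xs)
    return y, x
```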
Prior HN thread: https://news.ycombinator.com/item?id=44281506
A truly clever image processing method.
https://github.com/zserge?tab=repositories&q=&type=&language...