Humans are the best at determining if a bit is good or not.
Algorithms can be good. Even 99.999% good, but that means that for an 8k die there is still a very good chance that one bit is bad. Some algorithms can assign a detection confidence to each bit, but there is a chance this confidence score is wrong as well. We have found that running a very good algorithm yields very good results, but to my knowledge, it has never been 100% correct.
That means that, so far, a human still needs to go in and check the bits that have low confidence. That saves a lot of time, but in my experience a bit with high confidence is sometimes marked incorrectly too. So now, to be sure the dump is good, you need to go in and check every bit, and if that's the case, you're back where you started: typing every bit in by hand.
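Just to make the per-bit confidence idea concrete, here is a minimal sketch in Python. It is not the algorithm we actually ran; all names and the cutoff value are made up. The "confidence" is simply how far a cell's mean intensity sits from the 0/1 threshold, and anything below a cutoff gets queued for a human to look at.

import numpy as np

def classify_with_confidence(cell_means, threshold):
    # cell_means: 1-D array of mean intensities, one per bit cell.
    bits = (cell_means > threshold).astype(np.uint8)
    # Confidence is the distance from the decision threshold,
    # normalised so the most clear-cut cell gets 1.0.
    margin = np.abs(cell_means - threshold)
    confidence = margin / (margin.max() + 1e-9)
    return bits, confidence

# Bits whose confidence falls below some cutoff get flagged for review:
# bits, conf = classify_with_confidence(cell_means, threshold=128)
# needs_review = np.where(conf < 0.2)[0]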
Now, about the posted code: taking average pixel values can get you 90% of the way on very evenly imaged dies, but what about something like this? (I just searched for an example of a die image, any example; there is a good chance this one has never been typed.)
https://1.bp.blogspot.com/-ezB5BcdYFfs/W...CEw/s1600/3.jpg
See how the average intensity varies across the surface of the die? That means that, for this die, you need to compute the average pixel intensity in a spatially varying fashion, which will introduce even more errors.
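By "spatially varying" I mean comparing each pixel against the mean of its local neighbourhood instead of one global average. Here is a rough sketch of that idea; the window size, offset, and the SciPy-based approach are my own assumptions, not the posted code.

import numpy as np
from scipy.ndimage import uniform_filter

def local_threshold(img, window=64, offset=0.0):
    # Threshold each pixel against the mean of its local neighbourhood
    # so uneven illumination across the die does not flip bits in the
    # darker or brighter regions.
    local_mean = uniform_filter(img.astype(np.float64), size=window)
    return (img > local_mean + offset).astype(np.uint8)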
So yes, there are automated methods, and judging by recent research, deep learning is getting *really* good at recognizing small images for what they are, but none of them are perfect, and they're all imperfect in different ways.
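For what it's worth, the deep-learning route usually means classifying a small crop around each bit cell as a 0 or a 1. A minimal PyTorch sketch of that kind of model is below; the architecture and sizes are assumptions for illustration, not a model anyone in this thread has trained.

import torch
import torch.nn as nn

class BitCellNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(16, 2)  # logits for bit = 0 or 1

    def forward(self, x):  # x: (batch, 1, H, W) grayscale crops
        x = self.features(x)
        return self.classifier(x.flatten(1))

# logits = BitCellNet()(torch.randn(4, 1, 16, 16))  # 4 example crops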
The only way to get good results for many different types of die image with the highest level of confidence is to use a human. Thus the typing monkey project.
/Andrew