My $0.02 as someone who tried a few CV experiments to help validate the project.
I tried simple thresholding on the die images as well as some more advanced statistical techniques. 100% automatic recovery is ideal, but it would also be acceptable (and possibly required) to simply flag bits that CV can't confidently resolve.
Unfortunately, although accuracy was pretty good, I wasn't able to flag all bit errors. Ex: white gunk on blank areas was triggering as 1's when the bits are actually 0's. IMHO it will take something more intelligent, like a neural network, to recognize whether the correct shape/pattern is present or not. While I think that would be a great long-term project, Andrew offered a short-term solution that seems to be working well.
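For anyone curious what the "flag what CV can't confidently resolve" idea looks like, here's a minimal sketch: a double-threshold classifier where cells whose mean brightness falls between the two thresholds get flagged for human review instead of guessed. The threshold values and function names are illustrative, not from my actual code (which is in the repo linked below).

```python
def classify_bit(mean_brightness, lo=80, hi=170):
    """Classify one bit cell by mean brightness (0-255 grayscale).

    Returns 0 or 1 when confident, or None to flag the cell for
    manual review. Thresholds here are made up for illustration;
    in practice you'd calibrate them per image.
    """
    if mean_brightness <= lo:
        return 0   # confidently dark -> 0
    if mean_brightness >= hi:
        return 1   # confidently bright -> 1
    return None    # ambiguous (e.g. gunk on a blank area) -> flag

# Example: five cell brightness readings
cells = [12, 200, 130, 95, 240]
bits = [classify_bit(c) for c in cells]
# -> [0, 1, None, None, 1]
```

The gunk problem is exactly the failure mode this doesn't catch: a contaminated blank cell can land well above `hi` and be confidently (and wrongly) read as a 1, which is why shape/pattern recognition would be the real fix.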
The current approach is a bit brute-force, but it is known to work. In fact, some people find it entertaining. Comments include:
-"MOAR! This is highly addictive. will you publish more die images to work on?"
-"I did a bit more than a hundred images while waiting for a backup to be copied."
-"Please send ... photos. They will amuse ..."
We discussed presenting the CV results for quick human validation, but I was afraid this would bias the results and thus defeat the purpose of the project.
If you are curious, my CV tests can be found here: https://github.com/JohnDMcMaster/pr0ntools/tree/master/capture/tem
Other: thanks for all the feedback! We are looking it over and are prioritizing feature requests.
Other: known bug "OperationalError: Expression tree is too large (maximum depth 1000)". Apologies if you see this; it's at the top of our list to take care of once we have some cycles.