The point I made with my attached files is that the captured images Vuforia uses are often blurry. I can't imagine that sits well with computer analysis looking for "fixed points in space based on hard greyscale edges", or whatever exactly it tries to extract from the image. I'm suggesting we solve that in order to improve scans on the app side (fewer false points, etc.). But maybe it doesn't matter when you take 100 images or so.
1. In the case of our bigger scan we don't really have a way to cover the entire room in 18% grey just for the sake of a scan. I will propose this for the next scanning session, but I don't think turning the showroom into a photoshoot room is a very appealing prospect. I get that this causes inaccurate scanning points, though.
2. We have a bigger version for our bigger scans; we got that idea from the guys scanning classic cars. It works well enough.
3. We know it isn't aligned, because the object being scanned couldn't be aligned well. On our actual scans we only aligned it on one axis. This example scan I did just to post here, and to figure out how to handle rotation/positioning relative to the center point in the grey square. We figured that as long as we had the measurements/offset, we could deal with that in Unity anyway.
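To show what I mean by dealing with the offset in Unity: the correction is just re-anchoring the pose from the grey square's center to the object's real pivot. Here's a rough sketch of the math (Python for illustration, not our actual project code; all names are made up — in Unity you'd apply the same thing to a Transform):

```python
import math

def correct_pose(marker_pos, pivot_offset, yaw_deg):
    """Re-anchor a scanned object's position to its true pivot.

    marker_pos:   (x, y, z) of the marker's center (the grey square)
    pivot_offset: (x, y, z) measured offset from that center to the
                  object's real pivot, at zero rotation
    yaw_deg:      the object's known rotation around the vertical axis
    """
    # Rotate the measured offset by the object's yaw so the pivot
    # stays attached to the object as it turns.
    rad = math.radians(yaw_deg)
    ox, oy, oz = pivot_offset
    rx = ox * math.cos(rad) + oz * math.sin(rad)
    rz = -ox * math.sin(rad) + oz * math.cos(rad)
    # Corrected pivot = marker center + rotated offset.
    return (marker_pos[0] + rx, marker_pos[1] + oy, marker_pos[2] + rz)

# Example: pivot is 0.5 units to the side of the marker center.
print(correct_pose((1.0, 0.0, 2.0), (0.5, 0.0, 0.0), 0.0))
```

With no rotation this just adds the measured offset; with a yaw it swings the offset around first, which is why measuring the offset once was enough for us.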
Another complaint came in about the current scans being hard to lock onto. You can get something recognized now, but recognizing something like the cartridge on this SNES, for example, isn't possible. It simply doesn't grasp that it is zoomed in on an existing target. You can do extended tracking, but Metaio is somehow able to recognize the object even when 8/9ths of it is offscreen from the start. It somehow manages a lot more detailed points, or whatever it does. We really want that kind of precision in our scans as well!
Thanks for helping out so far!