
Positional issue with Object Recognition

October 18, 2018 - 12:17am #1


I'm building an app to track an Arduino and breadboard using object recognition. However, the model I want to augment on top of it doesn't hold its position. See the video for a better understanding of my issue: https://youtu.be/YvzhRo4tZRI

As you can see from the video, as I move the camera from side to side, the Arduino model doesn't quite sit over the real-world object unless the camera is front-on or directly overhead. Pay close attention to the USB port, which doesn't line up correctly, to see where the model should sit. Any tips on how I can fix this issue? I've enabled device tracking and set the camera's world center mode to a specific target, but the issue remains.

Kind regards, 


Positional issue with Object Recognition

October 18, 2018 - 9:03am #2


When you performed the authoring step with the scanning tool, how many keypoints were generated? If you still have the app on the device you used for scanning, you should be able to open the .od file and get the value (sorry, the exact workflow is escaping me right now).

Here are some general tips that I normally provide to those who are using the feature:

There are many factors that can affect the tracking performance of an Object Target, both during the scanning process and when running an app. Most issues with Object Targets can be traced to the creation of the .od file using the Vuforia Object Scanner. Be sure to follow the scanner app instructions: https://library.vuforia.com/articles/Training/Vuforia-Object-Scanner-Users-Guide, paying close attention to step #8 in the article.

  • When creating the .od file, was the model scanned in an environment that was free of background details which may have introduced features that were not part of the model? Scanning in 'cluttered' environments can introduce false detection/tracking points.
  • When creating the .od file, were there any specular reflections on the model introduced by environmental lighting? Scanning objects that have reflective surfaces under direct lighting can introduce areas with no detection/tracking points.
  • Are you using the recommended devices referenced on the tool download page?: https://developer.vuforia.com/downloads/tool

In our labs, we utilize four primary strategies for creating an optimal Object Target scanning environment:

  1. All background surfaces are colored 18% gray. An easy, off-the-shelf solution is to buy bed sheets close to this color and drape everything in the environment that could be seen by the scanning device's camera.
  2. No direct lighting. We use light boxes and/or diffusers to eliminate direct lighting on the object and minimize any specular reflections.
  3. Utilize a 360° turntable to re-orient the object. This is especially helpful when you've set your environment to near-ideal conditions within a limited area. You can spin the model and scan in 360 degrees without having to move around it.
  4. Be sure that the environment in which you're testing (via the Object Scanner app 'test mode') is the same one in which you're verifying tracking (via the sample code). Environmental factors such as lighting, shadows, and specular reflections can negatively affect tracking performance, so awareness of how the environment interacts with your model is important for qualifying performance.
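If it helps to pin down the 18%-gray point above: 18% gray corresponds to a linear reflectance of 0.18, and the standard sRGB transfer function converts that to an approximate 8-bit swatch value you can match when shopping for backdrop material. This is just a quick sketch; the helper name is my own, not anything from the Vuforia tooling:

```python
def srgb_encode(linear):
    """Convert a linear-light value in [0, 1] to its sRGB-encoded value."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

# 18% gray: linear reflectance 0.18 -> 8-bit sRGB component
value = round(srgb_encode(0.18) * 255)
print(value)  # 118, i.e. roughly the hex color #767676
```

In other words, a fairly dark neutral gray; anything in that neighborhood works, since the goal is simply a featureless, non-reflective mid-tone backdrop.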

Lastly, be sure to use meters and the default scale, as this can also affect the feature's accuracy and performance.
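To make the meters point concrete: an Arduino Uno's nominal footprint is about 68.6 × 53.4 mm, so at the default scale the augmentation should be authored at the matching metric size. The board dimensions below are the Uno's published nominal values, not numbers taken from your scan:

```python
# Nominal Arduino Uno board footprint in millimeters
BOARD_MM = (68.6, 53.4)

def mm_to_m(mm):
    """Convert millimeters to meters, the unit expected at default scale."""
    return mm / 1000.0

size_m = tuple(mm_to_m(d) for d in BOARD_MM)
print(size_m)  # roughly (0.0686, 0.0534) meters
```

If your model was authored in millimeters or centimeters, the augmentation will be off by a factor of 10-1000, which shows up as exactly this kind of registration drift.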


Vuforia Engine Support

