Show real objects ahead of augmented

May 15, 2014 - 12:32am #1

Hey, I have been searching for a solution to this for quite a long time: how can we show real objects in front of the augmented content? I mean, when a marker is being tracked and a 3D/2D object is displayed on top of it, and I then place my hand between the marker and the camera, my hand does not show up on screen; the augmented layer is drawn on top of the real layer.

How can I show the real objects in front of the augmented data?

Thanks :)!

Show real objects ahead of augmented

May 18, 2014 - 11:35am #4

What I meant in my previous message is that you cannot do this (unless you have a depth sensor or something else that lets you detect the depth of your hand with respect to the target).

The video you are referring to ("smart tracking", as you call it) shows that even if you partially occlude the target with your hand, Vuforia will still be able to track the marker position. But that is just about tracking robustness; it doesn't have much to do with what you want to achieve (although it may look similar). In your case, you want your hand to appear "on top" of the augmentation, which is a slightly different thing, and currently not supported.


Show real objects ahead of augmented

May 18, 2014 - 11:18am #3

Hey Alessandro,

Thanks for the reply, but can you please suggest how we can do that? I mean, could you elaborate on what you just mentioned? I remember that in earlier versions of Vuforia, smart tracking had not yet been introduced, and at that time our hand used to be visible (not overlapped by the augmented data) when placed between the camera and the marker. How, and what, would need to be edited in the current SDK to disable the smart tracking?

BTW, this is the smart tracking I was talking about. If you watch the video, the palm is overlapped by the augmented data, but here I don't want the palm to be overlapped.



Thanks :)

Show real objects ahead of augmented

May 18, 2014 - 10:53am #2

That's because the "real" world is part of the video background, and as such it has no depth information. To achieve what you describe, you would need some depth information about the real world, for example data delivered by a depth sensor or similar.
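To illustrate why depth information is what's missing: if you did have a per-pixel depth map of the real scene (e.g. from a depth sensor), occlusion becomes a per-pixel comparison between real depth and virtual depth. The sketch below is a minimal NumPy illustration of that principle; all arrays and values are hypothetical and none of this is Vuforia API.

```python
import numpy as np

# Hypothetical inputs (illustrative only, not Vuforia API):
#   camera_rgb    - the video-background frame (real scene, shown as black here)
#   augmented_rgb - the rendered augmentation (shown as white here)
#   real_depth    - per-pixel depth of the real scene, e.g. from a depth sensor
#   virtual_depth - per-pixel depth of the rendered 3D content
h, w = 4, 4
camera_rgb = np.zeros((h, w, 3), dtype=np.uint8)         # real scene: black
augmented_rgb = np.full((h, w, 3), 255, dtype=np.uint8)  # augmentation: white
real_depth = np.full((h, w), 2.0)                        # marker plane 2 m away
real_depth[1:3, 1:3] = 0.5                               # a "hand" 0.5 m away
virtual_depth = np.full((h, w), 2.0)                     # content sits on the marker

# Per pixel: if something real is closer than the virtual content,
# show the camera pixel (the hand occludes the augmentation);
# otherwise show the augmentation.
occluded = real_depth < virtual_depth
composite = np.where(occluded[..., None], camera_rgb, augmented_rgb)
```

The center pixels (where the "hand" is closer than the virtual content) keep the camera image, while the rest show the augmentation. A real renderer would do the same comparison in the depth buffer rather than on CPU arrays, but the principle is identical: without `real_depth`, there is nothing to compare against, which is exactly the limitation described above.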
