I would like to implement an app with both a 3D model and video. I can't work out how to combine the ImageTargets sample with the VideoPlayback sample. Any suggestion is welcome.
In native iOS there is no shortcut: you would have to study the ImageTargets sample and apply its approach to the VideoPlayback sample.
The code you need to look at is in RenderFrameQCAR(), as this is where the augmentation happens. The other complication you may run into is displaying a complex 3D model, unless you are already familiar with OpenGL ES. You can search the forums for advice on how to do this, and for tools and utilities that may help.
It is far easier to do this in Unity: a 3D model, or any other augmentation, is simply a child of an ImageTarget that gets enabled or disabled at runtime depending on whether the target is being tracked. Unity also handles all the 3D model formats for you, which can save a lot of time.