I am trying to create an AR experience where the user can dynamically create a target (as in the UserDefinedTargets sample). I then want to replace the teapot model with a regular two-dimensional Android View (animated). I found a thread saying that the CloudRecognition sample projects a bitmap (presumably the book info) onto the target, with the bitmap drawn into via a Canvas. That is essentially what I would like to do, except I don't need the cloud capability: I want to combine the user-defined-target feature with projecting a bitmap that can be updated each frame to simulate animation.
Can someone point me to the parts of the code I need to look at, and/or to other relevant threads? Thanks!