Using Text Recognition, I am able to take input text from an InputField and place the text on the plane of the recognized word. I am also able to show a 3D object along with the text. It is a bit shaky, but it works in Unity when play testing. However, when I build for iOS on an iPhone 6, the augmented text and 3D object show up in a different location than where I placed them in Unity, and most of the time I am unable to see them at all.
My Hierarchy looks like this:
- ARCamera
- Directional Light
- TextRecognition
- EventSystem
- TextRecoCanvas
  - InputField
  - Word
    - ImageTarget_teapot
    - Text
I removed the "Turn Off Word Behavior" script on the Word prefab so that the text would be displayed as a TextMesh. I then created a script which assigns the text from the InputField to that TextMesh. Other than that, it's pretty much the same as the TextReco sample scene, minus the masking canvas and the TextHandler script. Any idea what would cause the teapot and TextMesh to show up in one location in Unity and another on the iPhone?
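For reference, the script that copies the InputField text onto the Word's TextMesh looks roughly like this (a minimal sketch; the class name, field names, and how the references are wired up are my assumptions, not the exact script from this project):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sketch: mirrors the UI InputField's text onto the TextMesh of the Word prefab.
// Assign both references in the Inspector.
public class InputFieldToTextMesh : MonoBehaviour
{
    public InputField inputField;  // the InputField under TextRecoCanvas
    public TextMesh wordTextMesh;  // the TextMesh on the Word prefab

    void Start()
    {
        // Keep the TextMesh in sync whenever the user edits the InputField.
        // (On older Unity versions this event was named onValueChange.)
        inputField.onValueChanged.AddListener(newText =>
        {
            wordTextMesh.text = newText;
        });
    }
}
```

A listener-based approach like this avoids polling the InputField every frame in Update().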
Thanks!
Oh wait, I think I figured it out. I think the problem was having the Word prefab inside the TextRecoCanvas in the hierarchy. Moving it out helped a lot. I still need to make some adjustments, but now I can at least see the augmentations.