I have done a lot of research, but I think it's time to call in the experts to help me find the best workflow to complete this. I'll give a general idea of what I want to achieve and where I'm at.
My idea is an augmented reality app for iOS. The concept: when the app sees a real-life object, for example a generic printer, a 3D human model appears on screen performing some animation with a 3D model of a printer (or another object; the specific object is unimportant).
So let's just say I already have the 3D models and animations. I could easily achieve this by sticking a single image target on each real object and having the animation play when the image is detected, but I don't want to do that. What I would like is for the 3D animation to play anchored to the point where the app recognizes the printer itself.
The issue is that I can't scan the printer, because it doesn't fit on the A4 Vuforia Object Scanner target sheet.
With object scanning ruled out, I thought of using a pre-existing 3D model of, say, a printer and doing model recognition. Since recognition works on geometry, I figured this could work, but the object wasn't recognized in the app, presumably because the 3D model is slightly different from the real thing.
The third method was to use a video background and play a video, but I'm not interested in that either.
So I've considered the three methods above, but I'm stuck and not sure what to do next. The Object Target was my best option, but the paper target used to scan objects is too small.
Keep in mind that when I say "printer" I mean doing this with other medium-sized models too, all of which are bigger than the paper.
Do you have any ideas on how to do this type of thing with medium-sized objects? I would love to know! Any links or informational tips would be greatly appreciated, thanks.