Got some questions I could not find answers to yet.
The image target (and also the frame marker) has to be quite big in the user's sight, practically filling the entire display field, to get detected. You need to be quite close to it, which is a bit impractical. I understand that the camera's field of view is larger than the projection's field of view, but it is still quite an issue. Is there anything I can do to improve this, i.e. get a smaller target detected, say a target filling about half of the projection screen? Once detected, I can move that far away with no problem and the target is still tracked reliably; it's just the initial detection that does not work at this distance. Also, is there a way to check the focus of the camera, to verify the vision is sharp?
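On the focus question: if I can get hold of a raw grayscale frame from the camera, one thing I could imagine trying myself is a variance-of-Laplacian sharpness score (a common focus measure; higher variance means sharper edges). This is just a minimal pure-Python sketch of the idea; the function name and threshold handling are mine, not from any SDK:

```python
# Hypothetical focus check: variance of the Laplacian over a grayscale frame.
# A sharp frame has strong edge responses (high variance); a blurry or flat
# frame has weak responses (low variance). The threshold would need tuning
# per camera.

def laplacian_variance(gray):
    """gray: 2D list of pixel intensities (0-255). Returns the variance
    of the 4-neighbour Laplacian over all interior pixels."""
    h, w = len(gray), len(gray[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (gray[y - 1][x] + gray[y + 1][x]
                   + gray[y][x - 1] + gray[y][x + 1]
                   - 4 * gray[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

# A frame with strong edges scores higher than a uniform (defocused) one:
sharp = [[255 if (x + y) % 2 else 0 for x in range(8)] for y in range(8)]
flat = [[128] * 8 for _ in range(8)]
print(laplacian_variance(sharp) > laplacian_variance(flat))  # True
```

In practice I would compute this on the live camera frame and warn when the score drops below a calibrated baseline, but whether the camera frame is even accessible is part of my question.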
Frame marker scale. I can calibrate and create a profile. I can create an image target and set its correct size in the dataset manager, and everything is aligned and projected fine. But I can't get it right for the frame marker: if I set its scale in the scene to the real size in mm, it is not properly aligned, and the eye cameras are too far apart from each other. Is there a way to switch to dual-eye rendering but monoscopic? If I switch the AR camera to mono, only one eye is rendered; in stereo, the cameras are always placed apart from each other, so the vision is stereoscopic. I do not really need to show depth, as I'm planning to overlay only 2D content fixed to the target. So if just one camera were rendered and projected to both eyes, I guess it would be fine? There does not seem to be such a setting, though.
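To illustrate why I think a single mono render shown to both eyes would be enough for my case, here is a toy pinhole-projection sketch (all names and numbers are made up for illustration): the horizontal disparity between the two eye images comes entirely from the eye separation, so with zero separation both eyes would see the flat overlay at identical screen coordinates, which is exactly what I want:

```python
# Toy pinhole projection: how far apart the same point lands in the left
# and right eye images, as a function of the inter-camera separation (IPD).

def project_x(point_x, point_z, eye_x, focal=1.0):
    """Horizontal screen coordinate of a point at (point_x, point_z),
    seen from an eye at horizontal offset eye_x (simple pinhole model)."""
    return focal * (point_x - eye_x) / point_z

def disparity(ipd, point_z):
    """Screen-space disparity of a point straight ahead at depth point_z,
    for two eyes separated horizontally by ipd."""
    left = project_x(0.0, point_z, -ipd / 2)
    right = project_x(0.0, point_z, +ipd / 2)
    return left - right

print(disparity(0.064, 2.0))  # nonzero with a 64 mm separation at 2 m
print(disparity(0.0, 2.0))    # 0.0 when both eye cameras coincide
```

So if there were a setting to force the stereo eye separation to zero (or to feed one camera's output to both eyes), the overlay should line up in both eyes without any depth cues, which is fine for my 2D-only use case.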