
some technical q's

May 5, 2011 - 8:17am #1

Hi!
Impressive piece of technology you're giving away here ;)

Now for some questions:
What are the main differences between using frame markers and image markers?
Am I correct in assuming that the frame markers' main advantage is the freedom to create the "inside" artwork any way you want? Would they be more sensitive and error-prone than a well-designed image marker?

What are the plans for supporting tablets (Gingerbread) in the future? I am thinking about a couple of architectural visualization applications, and the larger screen of a tablet would be extremely nice in these cases.

How do we get access to more granular data from the Trackable Event? In the examples I can only see OnTrackingLost and OnTrackingFound. If I want to get at the world position of the frame marker, for example, how would I do that?

Do you have any tips for debugging and fine-tuning the app on the device? I guess spraying the code with Debug.Log is the way to go, or is it possible to use the debugger?

Thanks again for the hard work you have put into this.

niklas wörmann

Re: some technical q's

October 21, 2011 - 8:23pm #3

Is there a way to set the zoom level for the camera view (the input image)?

Re: some technical q's

May 5, 2011 - 12:06pm #2

Frame markers are easy to grab and use (we provide 512 of them) and are lightweight to process, so you can include all 512 in a single project. They also give you flexibility on the inner image, as you suggested. The downside is that you have to have the entire marker in view to track it.

Good image targets (those with a nice set of features) provide more robust tracking. You can focus on a subset of the image, zoom in and out, and track at a wide range of angles.

Support for Gingerbread tablets will be added in a future release.

As far as world position goes, that depends on how you set up the World Center parameter on the AR Camera. If you set it to NONE, the camera stays stationary while all the targets move in front of it. If you set it to AUTO, one target (the first one detected) stays stationary while the camera and all other targets move in relation to it. With USER you can pick a particular target to be the world center, and it will then never move in the scene.

In any case, you can query the transform of the trackable like any other Unity object. Please note that if you have several trackables in your scene, their relative sizes in Unity should match the relative sizes of your printed targets for occlusion to work correctly.
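Since the trackable is just a GameObject, a script attached to it can read its transform directly. Here is a minimal sketch (the class name MarkerPositionLogger is a placeholder I made up, not something from the extension):

using UnityEngine;

// Attach this to the FrameMarker (or ImageTarget) GameObject to print its
// world position each frame.
public class MarkerPositionLogger : MonoBehaviour
{
    void Update()
    {
        // transform.position is the trackable's position in Unity world space,
        // expressed relative to whatever you chose as the World Center.
        Vector3 worldPos = transform.position;
        Debug.Log("Marker world position: " + worldPos);
    }
}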

I'm guilty of using print statements for debugging; I don't have much experience with the Unity debugger. Perhaps someone else can chime in here :)
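For what it's worth, Debug.Log output from a device build shows up in adb logcat under the "Unity" tag, so a tiny wrapper you can switch off before a release build goes a long way. A rough sketch (DebugLogger is just a name I made up):

using UnityEngine;

// Hypothetical helper: routes messages through one place so device logging
// can be disabled before shipping a release build.
public static class DebugLogger
{
    public static bool Enabled = true;

    public static void Log(string message)
    {
        if (Enabled)
        {
            Debug.Log(message);
        }
    }
}

You can then watch the output on the device with: adb logcat -s Unity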

- Kim
