As you suggested, I am now able to get the x,y coordinates with respect to the trackable plane, with its centre as the origin.
Wherever I tap on the screen, I capture those screen coordinates and calculate the corresponding x,y on the trackable plane.
Now please let me know whether my further approach and understanding for recognising floors is correct. As you suggested, once I get the x,y coordinates I should divide my whole trackable (i.e. the building/tower) into a set of 2D rectangles, each rectangle corresponding to a specific floor. I should be able to derive the bounding coordinates of these rectangles from the width and height of the building, taking the trackable origin at its centre.
Then, once I have the bounding coordinates of the rectangles, I should check which one of them the tapped point lies in and process accordingly.
Am I right with this approach, or have I misunderstood your explanation?
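To make my approach concrete, here is a rough sketch of how I imagine the floor lookup would work (plain Python, no AR SDK calls; the building size and floor count are made-up numbers just for illustration):

```python
# Toy sketch: divide a building trackable into per-floor rectangles and
# find which floor a tapped point (in trackable coordinates) falls in.
# Origin is the centre of the trackable, as in my setup above.

BUILDING_WIDTH = 10.0   # trackable width in target units (assumed)
BUILDING_HEIGHT = 30.0  # trackable height in target units (assumed)
NUM_FLOORS = 6          # assumed floor count

def floor_rectangles(width, height, num_floors):
    """Bounding boxes (x_min, y_min, x_max, y_max), origin at centre."""
    floor_h = height / num_floors
    rects = []
    for i in range(num_floors):
        y_min = -height / 2 + i * floor_h
        rects.append((-width / 2, y_min, width / 2, y_min + floor_h))
    return rects

def floor_of_point(x, y, rects):
    """Return the index of the rectangle containing (x, y), or None."""
    for i, (x0, y0, x1, y1) in enumerate(rects):
        if x0 <= x <= x1 and y0 <= y <= y1:
            return i
    return None

rects = floor_rectangles(BUILDING_WIDTH, BUILDING_HEIGHT, NUM_FLOORS)
print(floor_of_point(0.0, -14.0, rects))  # near the bottom -> 0
print(floor_of_point(0.0, 14.0, rects))   # near the top -> 5
```

If this is roughly what you had in mind, the only inputs I need per tap are the x,y in trackable coordinates plus the fixed building dimensions.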
Secondly, say I am standing in front of a building, pointing my camera at it, and I touch the screen point corresponding to a particular window on that building (an assumption). With this I will get the coordinates of that window on the trackable plane, with the origin at the centre of the trackable. Now suppose I move my camera a little to the left, such that the building is still visible, and touch the same window on the camera screen. In this case, will the corresponding window coordinates change because the camera orientation has changed slightly?
If yes, then how can I check the point I touched on the screen against the bounding rectangles? I am planning to use fixed bounding coordinates for my rectangles based on the width and height of the building, but I guess those would also change with the movement of the camera.
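To pin down my confusion, I tried a tiny geometric sanity check (plain Python, toy numbers, no SDK calls): I model a tap as a ray from the camera centre towards the window, intersect it with the trackable plane taken as z = 0, and repeat after moving the camera. Note that by construction both rays aim at the same physical window, so this only checks that the resulting plane coordinates come from the intersection point, not from the camera pose:

```python
# Toy geometry check: intersect a camera ray with the trackable plane
# (taken as z = 0) and read off (x, y) in the trackable's own frame.
# Camera positions and the window location are made-up numbers.

def intersect_plane_z0(cam_pos, target):
    """Ray from cam_pos towards target, intersected with plane z = 0."""
    dx = target[0] - cam_pos[0]
    dy = target[1] - cam_pos[1]
    dz = target[2] - cam_pos[2]
    t = -cam_pos[2] / dz          # solve cam_z + t * dz = 0
    return (cam_pos[0] + t * dx, cam_pos[1] + t * dy)

window = (1.0, 2.0, 0.0)          # a window on the building facade

hit_a = intersect_plane_z0((0.0, 0.0, 10.0), window)   # camera in front
hit_b = intersect_plane_z0((-3.0, 1.0, 10.0), window)  # camera moved left

print(hit_a)  # (1.0, 2.0)
print(hit_b)  # (1.0, 2.0) -- same plane coordinates despite the move
```

So in this toy model the trackable-plane coordinates stay fixed when the camera moves; is that also what the SDK gives me, or am I missing something?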
Please help, as I'm totally confused by this. :(
I hope I'm being clear...