For ImageTargets, it seems you have to use the actual print-out or on-screen copy of the target image; for stone or woodchips, you can't just point the user at the real-world location. For example, if you aim the camera at a pile of wood that looks very similar to the ImageTarget but isn't the exact same patch, it won't be detected. (A quicker way to confirm this: flip the original ImageTarget horizontally in Photoshop and try it again; no detection.)
For ImageTargets: something like scanning that sign on a building in Rokus Reward would work for sure, but if you graffiti-ize it a bit, it likely wouldn't.
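To make the flip test concrete, here's a rough stand-in in Python/OpenCV. This isn't Vuforia's actual detector (that's proprietary natural-feature tracking); ORB feature matching is just a similar-in-spirit local-feature approach, and the image filenames are hypothetical. The point it illustrates: matching keys off the local detail of one specific image, so a mirrored copy or a different-but-similar pile of wood yields almost no good matches.

```python
# Rough stand-in for the "flip test": local-feature matching against one
# specific image fails on a mirrored copy or a merely similar scene.
import cv2

def good_match_count(img_a, img_b, ratio=0.75):
    """Count ORB matches between two grayscale images that pass Lowe's ratio test."""
    orb = cv2.ORB_create(nfeatures=1000)
    _, des_a = orb.detectAndCompute(img_a, None)
    _, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(des_a, des_b, k=2)
    return sum(1 for pair in matches
               if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance)

# Hypothetical files standing in for the registered target and a look-alike scene.
target = cv2.imread("woodchips_target.jpg", cv2.IMREAD_GRAYSCALE)
flipped = cv2.flip(target, 1)  # horizontal mirror, the "photoshop flip" check
other_pile = cv2.imread("other_wood_pile.jpg", cv2.IMREAD_GRAYSCALE)

print("target vs itself:     ", good_match_count(target, target))      # high
print("target vs flipped:    ", good_match_count(target, flipped))     # near zero
print("target vs other pile: ", good_match_count(target, other_pile))  # near zero
```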
So, if you could track, say, a random pile of stone or woodchips (rather than that one particular shot), you could automatically spawn unique "scenic design" each time you play. I suppose something like that is more the territory of the Amazon-Turk-based image recognition app (oMoby on the iPhone).
If ImageTargets could recognize by similarity, then... A fun iPhone-esque app would be to point your device at a field of grass (any field of grass) and instantly see virtual flowers and bees pop up, and maybe play some sort of flowers+bees game o.O -- similarly, point your device at the sky and instantly see virtual balloons to pop, or UFOs to shoot!
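Real "any field of grass" recognition is a much bigger problem than ImageTargets solve, but even a crude heuristic hints at the idea. Below is a toy Python/OpenCV sketch that just checks whether most of the live frame falls in a green hue band before "spawning" flowers; the hue/saturation thresholds and the 50% cutoff are hand-picked guesses, not anything from a real product.

```python
# Toy approximation of "recognize any field of grass": classify the live frame
# by how much of it is grass-green, rather than matching one specific image.
import cv2
import numpy as np

def looks_like_grass(frame_bgr, min_green_fraction=0.5):
    """Return True if at least min_green_fraction of the pixels are grass-green."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Hue ~35-85 covers most greens in OpenCV's 0-179 hue range; thresholds are guesses.
    mask = cv2.inRange(hsv, np.array([35, 40, 40]), np.array([85, 255, 255]))
    return mask.mean() / 255.0 >= min_green_fraction

cap = cv2.VideoCapture(0)  # device camera
ok, frame = cap.read()
if ok and looks_like_grass(frame):
    print("Grass detected: spawn virtual flowers and bees here")
cap.release()
```

The same trick (a blue band instead of green) would do for the "point at the sky" case, though anything more robust than color ratios would need an actual classifier.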