This would be a huge enhancement to overall stability. We just need to be sure the developer stays in charge of supplying gyroscope and accelerometer data directly; otherwise the library would interfere with the developer's own access to that data.
There are two cases:
1) The target object is being moved, so we need to supply remote tracking data, possibly sourced from the target object itself being an iPod Touch, iPhone, or iPad.
2) Self is being moved, so the developer needs to supply motion data from the local accelerometer and gyros. (A rough sketch of what supplying both feeds could look like follows this list.)
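Something like this is what I'm picturing for the developer-supplied feeds. It's just a rough Swift sketch: MotionSample, MotionAwareTracker, ingestLocalMotion, and ingestRemoteTargetMotion are hypothetical names, not anything the library actually exposes. The local feed comes straight from CoreMotion; the remote feed would be whatever the target device sends over the network.

```swift
import CoreMotion
import Foundation

// Hypothetical sample type: one timestamped gyro + accelerometer reading.
struct MotionSample {
    let timestamp: TimeInterval           // device uptime when the sample was taken
    let rotationRate: CMRotationRate      // rad/s about x, y, z
    let userAcceleration: CMAcceleration  // in g, gravity removed
}

// Hypothetical tracker interface: the developer pushes motion data in;
// the library never owns the motion managers itself.
protocol MotionAwareTracker {
    func ingestLocalMotion(_ sample: MotionSample)        // case 2: self is moving
    func ingestRemoteTargetMotion(_ sample: MotionSample) // case 1: target object is moving
}

final class MotionFeeder {
    private let motionManager = CMMotionManager()
    private let tracker: MotionAwareTracker

    init(tracker: MotionAwareTracker) {
        self.tracker = tracker
    }

    // Case 2: feed the local device's own motion into the tracker.
    func startLocalFeed() {
        guard motionManager.isDeviceMotionAvailable else { return }
        motionManager.deviceMotionUpdateInterval = 1.0 / 100.0
        let tracker = self.tracker
        motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
            guard let motion = motion else { return }
            tracker.ingestLocalMotion(MotionSample(
                timestamp: motion.timestamp,
                rotationRate: motion.rotationRate,
                userAcceleration: motion.userAcceleration))
        }
    }

    // Case 1: feed motion received from the remote target device
    // (e.g. decoded from a packet sent by the iPhone being tracked).
    func receiveRemoteSample(_ sample: MotionSample) {
        tracker.ingestRemoteTargetMotion(sample)
    }
}
```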
Combining these two could enable some incredible dual-device AR interactions.
Latency should be taken into account so the motion data can be forward-projected in time and integrated with the camera vision stream. Lazy evaluation of the sample data could be a good setup for handling drop-outs and variability in the motion data. Forward-projecting the motion data also helps feature-tracking localization (more efficient, smaller bounding areas). If you wanted to get really fancy, you could cross-correlate the motion data against the vision feature tracking to estimate the latency offset between the two.
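To make the forward projection concrete, here's a minimal sketch of predicting the attitude at a camera frame's timestamp, assuming a constant rotation rate over the projection interval. The names (AttitudeSample, projectAttitude, latencyOffset) are hypothetical, and it assumes the motion and frame timestamps are on the same clock once the latency offset is applied.

```swift
import Foundation
import simd

// Hypothetical: forward-project the most recent attitude estimate to the
// camera frame's capture time, assuming the rotation rate stays constant
// over the (short) projection interval.
struct AttitudeSample {
    let timestamp: TimeInterval     // when the gyro sample was taken
    let attitude: simd_quatd        // orientation at that time
    let rotationRate: simd_double3  // body rates in rad/s (x, y, z)
}

func projectAttitude(_ sample: AttitudeSample,
                     toFrameTime frameTimestamp: TimeInterval,
                     latencyOffset: TimeInterval) -> simd_quatd {
    // How far ahead of the motion sample the camera frame actually is,
    // once the offset between the motion and vision clocks is applied.
    let dt = (frameTimestamp - sample.timestamp) + latencyOffset
    guard dt > 0 else { return sample.attitude }

    // Integrate the constant rotation rate over dt as an axis-angle rotation.
    let angle = simd_length(sample.rotationRate) * dt
    guard angle > 1e-9 else { return sample.attitude }
    let axis = simd_normalize(sample.rotationRate)
    let delta = simd_quatd(angle: angle, axis: axis)
    return simd_normalize(sample.attitude * delta)
}
```

The projected attitude is what you'd hand to the feature tracker to center and shrink its search region for the next frame.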
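And for the really fancy part, a rough sketch of estimating the lag between the two streams by cross-correlating a signal they share, e.g. angular speed from the gyros vs. angular speed implied by the vision tracker. It assumes both signals have already been mean-removed and resampled to a common uniform rate; all names are hypothetical, not existing library API.

```swift
// Hypothetical: estimate the lag (in samples) between the gyro-derived
// angular speed and the vision-derived angular speed by picking the shift
// that maximizes their cross-correlation (mean of products over the overlap).
func estimateLagSamples(motion: [Double], vision: [Double], maxLag: Int) -> Int {
    func correlation(at lag: Int) -> Double {
        var sum = 0.0
        var count = 0
        for i in 0..<motion.count {
            let j = i + lag
            if j >= 0 && j < vision.count {
                sum += motion[i] * vision[j]
                count += 1
            }
        }
        return count > 0 ? sum / Double(count) : 0
    }

    var bestLag = 0
    var bestScore = -Double.infinity
    for lag in -maxLag...maxLag {
        let score = correlation(at: lag)
        if score > bestScore {
            bestScore = score
            bestLag = lag
        }
    }
    return bestLag  // multiply by the resample interval to get seconds
}
```

The resulting lag (converted to seconds) is what you'd feed back in as the latency offset for the forward projection above.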