I am by no means an expert, but I thought I'd share some of the things I've learned when coming up to speed on Unity 5.3.4f1, Vuforia 5.5.9 and Google Cardboard 0.6.0. Hopefully some of these things will help others or at least turn up in a search if someone applies their Google-fu to potential issues.
1) The Canvas class is not handled by the DefaultTrackableEventHandler. This means that if you attach a UI to an ImageTarget, it will more than likely not be shown and hidden correctly as tracking is found and lost. To resolve this, update OnTrackingFound() and OnTrackingLost() to iterate through the attached Canvas components, using the existing Renderer and Collider handling in those methods as a guide.
Prior to making this change, the UIs I attached to targets were very flaky. I'd see multiple UIs attach themselves to a single image target when I expected to see only one, and the UIs would remain in view after the image target had left it.
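For reference, the Canvas handling can mirror the Renderer/Collider loops already in DefaultTrackableEventHandler. This is a sketch based on the stock handler shipped with Vuforia 5.x, not a drop-in file; OnTrackingLost() is the same loop with enabled = false:

```csharp
// Sketch: extend DefaultTrackableEventHandler.OnTrackingFound() to also
// toggle Canvas components, mirroring the existing Renderer/Collider code.
private void OnTrackingFound()
{
    Renderer[] rendererComponents = GetComponentsInChildren<Renderer>(true);
    Collider[] colliderComponents = GetComponentsInChildren<Collider>(true);
    Canvas[] canvasComponents = GetComponentsInChildren<Canvas>(true);

    foreach (Renderer component in rendererComponents)
        component.enabled = true;

    foreach (Collider component in colliderComponents)
        component.enabled = true;

    // New: enable any UI canvases attached under the image target.
    foreach (Canvas component in canvasComponents)
        component.enabled = true;
}
```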
2) You can use the Google Cardboard 0.6.0 Reticle under Vuforia 5.5.9 with some modifications. To do this, add a CardboardReticle GameObject to the ARCamera, and add an EventSystem GameObject to the scene. On the EventSystem, add a script component referencing GazeInputModule.cs. References to Cardboard.SDK.* will need to be removed from both scripts; those references deal primarily with detecting user input triggers.
The approach I took to detect user input was to update the CardboardReticle.cs with trigger detection in the Update() method, and then act on and clear that detection from within GazeInputModule in the Process() method.
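A minimal sketch of that hand-off follows. The flag name (TriggerPending) and the screen-tap check are my own illustration, not SDK names; a Cardboard trigger registers as a touch/click on the screen, which Unity surfaces through Input:

```csharp
// In CardboardReticle.cs -- record the trigger in Update().
// "TriggerPending" is a hypothetical flag name used for illustration.
public static bool TriggerPending { get; private set; }

void Update()
{
    // The Cardboard 0.6.0 trigger arrives as a screen touch/click.
    if (Input.GetMouseButtonDown(0))
        TriggerPending = true;
}

public static void ClearTrigger()
{
    TriggerPending = false;
}

// Then, in GazeInputModule.Process(), act on and clear the flag:
//
//   if (CardboardReticle.TriggerPending)
//   {
//       // ...run the module's existing click/submit handling here...
//       CardboardReticle.ClearTrigger();
//   }
```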
I also noticed that the raycasting performed by GazeInputModule.cs was incorrect. I set the headPose in CastRayFromGaze to use only Vector3.forward, and set pointerData.position to a vector built from Camera.main.pixelWidth/2 and Camera.main.pixelHeight/2. The hotspot field used in the original code yields an incorrect raycast location (the center of the screen instead of the center of one of the stereo cameras).
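Roughly, the two changes inside CastRayFromGaze() look like this. This is an illustrative fragment, not the full method; the surrounding GazeInputModule code is elided and names are from the 0.6.0 source as I remember them:

```csharp
// Inside GazeInputModule.CastRayFromGaze() -- illustrative fragment only.

// 1) Derive the gaze direction from the camera's forward vector alone:
Vector2 headPose = NormalizedCartesianToSpherical(
    Camera.main.transform.rotation * Vector3.forward);

// 2) Point the pointer event at the center of the camera's viewport
//    instead of the screen-space hotspot, so the raycast lines up with
//    each stereo eye rather than the middle of the whole screen:
pointerData.position = new Vector2(Camera.main.pixelWidth / 2f,
                                   Camera.main.pixelHeight / 2f);
```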
3) Performing the changes above resulted in a reticle that works just like it does in the Google Cardboard demos, without requiring integration of further Cardboard objects. One thing I noticed, however, is that meshes attached to image targets caused a depth-perception problem with the reticle: the reticle would project itself "into" the mesh, as if it were farther away than the mesh. The net result was that focusing on the mesh made the reticle appear as "double vision" because of the depth mismatch. A quick-and-dirty fix is to add a Canvas and an (invisible) Panel above the mesh. This provides a raycast target that invokes the reticle distance updating done within GazeInputModule.
Ideally, things should be updated so the reticle distance adjusts based on the intersection with the mesh. I haven't gotten to the point of implementing that yet.
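If you wanted to try that, the core of it is a physics raycast along the gaze. This is an untested sketch; the mesh would need a Collider, and SetReticleDistance() and defaultReticleDistance are hypothetical names standing in for whatever mechanism your reticle uses to set its render depth:

```csharp
// Untested sketch: raycast from the camera along the gaze and, on a hit,
// use the hit distance as the reticle depth so the reticle sits on the mesh.
RaycastHit hit;
Ray gaze = new Ray(Camera.main.transform.position,
                   Camera.main.transform.forward);

float reticleDistance = defaultReticleDistance;  // hypothetical fallback depth
if (Physics.Raycast(gaze, out hit))
    reticleDistance = hit.distance;

// SetReticleDistance() is a hypothetical helper that updates the reticle
// the same way a Canvas raycast hit does today in GazeInputModule.
SetReticleDistance(reticleDistance);
```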
4) When I originally set up my meshes and image targets, I scaled them orders of magnitude larger than they should have been. The net effect was that while I'd have "3D" meshes, they rendered flat in Google Cardboard -- much like a 3D image viewed on a monitor. It wasn't until I scaled them back down to (1, 1, 1) that I started seeing them with actual depth.
All in all, I'm pretty impressed with the ease of use of Vuforia and how quickly I've been able to get things going using the SDK. Hats off to the developers!