I noticed that when using Texture2D.ReadPixels(), what the ARCamera is rendering is not captured.
So I created a RenderTexture and was able to successfully render the ARCamera's view to a Texture2D.
The problem with that approach is that it doesn't capture the background (i.e., what the iPhone's camera is picking up).
So, the bottom line is this:
1. ReadPixels() captures what the iPhone's camera is seeing.
2. Using a RenderTexture lets you capture what the ARCamera is rendering (into a Texture2D).
But I need both :)
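For reference, here's roughly what I'm doing for each path (a sketch, not my exact code; `arCamera` is a field I assign in the Inspector, pointing at the ARCamera's Camera component):

```csharp
using System.Collections;
using UnityEngine;

public class ARCaptureSketch : MonoBehaviour
{
    public Camera arCamera; // assumed: assigned to the ARCamera in the Inspector

    // Path 2: render the ARCamera into a RenderTexture, then copy into a
    // Texture2D. This gets the AR content but not the camera background.
    public Texture2D CaptureARCamera(int width, int height)
    {
        var rt = new RenderTexture(width, height, 24);
        var prevTarget = arCamera.targetTexture;
        arCamera.targetTexture = rt;
        arCamera.Render();

        var prevActive = RenderTexture.active;
        RenderTexture.active = rt;
        var tex = new Texture2D(width, height, TextureFormat.RGB24, false);
        tex.ReadPixels(new Rect(0, 0, width, height), 0, 0);
        tex.Apply();

        // Restore state and clean up.
        arCamera.targetTexture = prevTarget;
        RenderTexture.active = prevActive;
        Destroy(rt);
        return tex;
    }

    // Path 1: ReadPixels from the screen after end of frame. In my case this
    // only picks up the camera feed, not the ARCamera's rendering.
    public IEnumerator CaptureScreen(System.Action<Texture2D> onDone)
    {
        yield return new WaitForEndOfFrame();
        var tex = new Texture2D(Screen.width, Screen.height, TextureFormat.RGB24, false);
        tex.ReadPixels(new Rect(0, 0, Screen.width, Screen.height), 0, 0);
        tex.Apply();
        onDone(tex);
    }
}
```

Is there a way to get a single Texture2D with both the camera feed and the AR content composited together?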