
getcameraimage and more

May 20, 2011 - 1:27am #1

Hi!
I notice that the CameraDeviceBehaviour seems to have a GetCameraImage method. Do you have any specific info on how to call this method and what to expect back? More specifically, would the image include any Unity-rendered content, or is it just the video signal from the camera?

On a completely different note, I have been browsing around the code and noticed a couple of things. At times you seem to rely on the class constructor to initialize things. The Unity docs discourage this; the recommended practice is to initialize in Start or Awake.

The other thing that could be improved is the repeated calls to things like FindObjectOfType; caching the return values would be faster. This is probably not a huge performance factor, but it would certainly help. Since the poor old device is fighting for its life, performance-wise, we might as well give it as much room as we can.
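For example, something along these lines (just a sketch, using the behaviour classes from the plugin):

using UnityEngine;

public class CachedLookupExample : MonoBehaviour
{
    // Look the behaviour up once and keep the reference,
    // instead of calling FindObjectOfType every time it is needed.
    private CameraDeviceBehaviour mCameraDevice;

    void Start()
    {
        // Initialize here (or in Awake) rather than in the constructor,
        // as the Unity docs recommend for MonoBehaviours.
        mCameraDevice = (CameraDeviceBehaviour) FindObjectOfType(typeof(CameraDeviceBehaviour));
    }

    void Update()
    {
        if (mCameraDevice != null)
        {
            // Use the cached reference here.
        }
    }
}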

One thing I noticed in adb logcat is that the frame times seem to vary in an unusual way. I guess this has to do with the AR plugin's workload varying over time. What can be done to lessen the load on the plugin? My guess is to have as few concurrent fiducials as possible, make sure the lighting conditions and fiducial quality are very good, and use autofocus.

TIA
Niklas Wörmann

Re: getcameraimage and more

November 10, 2011 - 8:35pm #14
Quote:

The Unity script is where I registered for the image format I want; would that carry over to the update callback? I can't see why not, but I'm just wondering how much I'd need to do in the AppController.mm file vs. what's done in the Unity script file.

Yep, you should be able to register for the format in Unity; the result will be the same.

- Kim

Re: getcameraimage and more

November 10, 2011 - 7:26am #13

Hi Kim,

I actually had all this working outside of Unity, so I know about the callback; I just didn't think to consider still using it while also using the Unity plugin.

Interesting thought, which I'll have to try, as I would think it would be faster than sending the info via a plugin call.

Phil

Edit: The Unity script is where I registered for the image format I want; would that carry over to the update callback? I can't see why not, but I'm just wondering how much I'd need to do in the AppController.mm file vs. what's done in the Unity script file.

Re: getcameraimage and more

November 9, 2011 - 8:24pm #12
Quote:

I figured I can send the image info via a native DLL call and use my old code

This is the officially supported method.

There's something you can try, though, that I've just thought of. You might be able to register for the native UpdateCallback in the AppController.mm file, and get access to the QCAR State object from there. You'll have to point your Xcode project to the include folder from the native QCAR install. Make sure the versions match (e.g. iOS 1.0.0 for both Unity and native).

This callback will get called each time the tracker has finished, so you'll want to cache the results and query them from Unity. It might be a bit tricky to ensure the frame matches the current frame in Unity; you may be one frame off. Again, this isn't an officially supported method, but it might be worth investigating.
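If you go that route, the Unity side could be as simple as a couple of DllImport declarations for whatever accessors you expose from AppController.mm (the names below are just placeholders, not part of the SDK):

using System.Runtime.InteropServices;
using UnityEngine;

public class NativeTrackerQuery : MonoBehaviour
{
#if UNITY_IPHONE
    // Placeholder accessors you would implement yourself in AppController.mm,
    // returning whatever you cached in the native UpdateCallback.
    [DllImport("__Internal")]
    private static extern int getCachedFrameIndex();

    [DllImport("__Internal")]
    private static extern bool getCachedTrackablePose(float[] poseMatrix);
#endif

    void Update()
    {
#if UNITY_IPHONE
        float[] pose = new float[12];
        if (getCachedTrackablePose(pose))
        {
            // Compare getCachedFrameIndex() against your own counter if you
            // need to detect the one-frame offset mentioned above.
        }
#endif
    }
}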

You can find sample code for using the callback in the native VirtualButtons sample.

- Kim

Re: getcameraimage and more

November 9, 2011 - 8:52am #11

Is there a way to get at the camera image in native code?

My knowledge of Unity cameras and render textures isn't that great.

I had some color tracking code working in an iOS project using QCAR, and now our project is in Unity. I've figured out how to get access to the camera image in the C# script, but as mentioned, I can't wrap my head around how to now use that image: render to an offscreen buffer with my custom shader, retrieve that buffer, and then scan it to find the color I'm tracking.

I figured I can send the image info via a native DLL call and use my old code, but if I could get to the camera image in the AppController.mm file instead, it would probably be quicker.

Re: getcameraimage and more

October 15, 2011 - 8:06pm #10

I would like to add a vote for QCAR adding a fast, built-in solution to grab the video texture from the camera in Unity3D.

Re: getcameraimage and more

October 10, 2011 - 4:25pm #9

I'm moving a non-Unity QCAR project to Unity that involves using the camera image and rendering to an off-screen buffer with a pixel shader... that should be about the fastest way to do this, no? I've got the whole thing working outside of Unity; I just need to hook it up to QCAR/Unity.

Re: getcameraimage and more

August 23, 2011 - 1:36am #8

Any update on this topic? It would be very useful to use the camera image in textures for improving augmentation.

Re: getcameraimage and more

August 1, 2011 - 10:11am #7

Hmmm... we REALLY need a fast technique to grab the video stream for effects and sampling. Anyone up for tackling an approach?

Re: getcameraimage and more

July 31, 2011 - 8:24am #6

Unfortunately the Texture2D API doesn't provide a convenient way to create a texture from a raw pixel buffer (as far as I know). The LoadImage method expects a JPG- or PNG-encoded byte array, which is different from a simple pixel buffer. You can loop through the pixels and call Texture2D::SetPixel, but this is extremely slow.

I would actually suggest writing a C plugin to write the image to texture memory using OpenGL calls. You can create the texture in Unity and get the texture ID using Texture2D::GetNativeTextureID(). Pass this texture ID to the native side along with the camera image pixel buffer, and use glTexImage2D (or glTexSubImage2D) to write the pixels into the existing buffer.
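For illustration, the Unity side might look roughly like this; updateVideoTexture is just a placeholder for whatever function you export from your own C plugin (it would make the glTexSubImage2D call):

using System.Runtime.InteropServices;
using UnityEngine;

public class VideoTextureWriter : MonoBehaviour
{
#if UNITY_IPHONE
    // Placeholder for a function exported by your own C plugin, which would
    // upload the pixels into the given GL texture with glTexSubImage2D.
    [DllImport("__Internal")]
    private static extern void updateVideoTexture(int textureID, byte[] pixels,
                                                  int width, int height);
#endif

    private Texture2D mVideoTexture;

    public void UpdateFromCameraImage(Image cameraImage)
    {
        if (mVideoTexture == null)
        {
            // Create the texture once at the camera image size; the plugin
            // writes into it every frame after that. Assumes this script sits
            // on the object that should display the video.
            mVideoTexture = new Texture2D(cameraImage.Width, cameraImage.Height,
                                          TextureFormat.RGB565, false);
            renderer.material.mainTexture = mVideoTexture;
        }
#if UNITY_IPHONE
        // GetNativeTextureID returns the id of the GL texture Unity created.
        updateVideoTexture(mVideoTexture.GetNativeTextureID(),
                           cameraImage.Pixels, cameraImage.Width, cameraImage.Height);
#endif
    }
}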

- Kim

Re: getcameraimage and more

July 30, 2011 - 1:11am #5

After some experiments I managed to create a new Texture2D and use the SetPixels function on that texture. Note that the image returned from Qualcomm is in RGB565 format, while Texture2D expects RGB888.
So the format of QCAR's image is rrrr rggg gggb bbbb
and Unity's texture is rrrr rrrr gggg gggg bbbb bbbb

In practice you have to figure out the endianness of Unity's pixel data and that of QCAR's, and convert between the formats. At that point I stopped (I had wrong colors).
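For reference, the unpacking loop I was attempting looks roughly like this; I suspect my remaining problem is the byte order of the two-byte 565 values:

using UnityEngine;

public class Rgb565Converter
{
    // Expands a QCAR RGB565 pixel buffer into Color values for SetPixels.
    // Assumes two bytes per pixel, little-endian, and no row padding.
    public static void CopyToTexture(Image cameraImage, Texture2D texture)
    {
        byte[] pixels = cameraImage.Pixels;
        Color[] colors = new Color[cameraImage.Width * cameraImage.Height];

        for (int i = 0; i < colors.Length; i++)
        {
            int value = pixels[i * 2] | (pixels[i * 2 + 1] << 8);

            int r = (value >> 11) & 0x1F;   // 5 bits of red
            int g = (value >> 5) & 0x3F;    // 6 bits of green
            int b = value & 0x1F;           // 5 bits of blue

            colors[i] = new Color(r / 31.0f, g / 63.0f, b / 31.0f);
        }

        texture.SetPixels(colors);
        texture.Apply();
    }
}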

Re: getcameraimage and more

July 29, 2011 - 9:57pm #4

I'm running into the same issue with being unable to render the texture. IsValid() even returns true, so maybe it has something to do with Unity's LoadImage()?

Re: getcameraimage and more

June 27, 2011 - 3:31am #3

I tried to implement your code suggestion to apply an environmental reflection on a model with a reflection shader, but converting the pixels to a Texture2D gives me some trouble (I'm getting a white texture with a question mark instead).

Here's my code:

using System;
using UnityEngine;

public class CameraReflection : MonoBehaviour, ITrackerEventHandler
{
    private IntPtr mImageHeaderData = IntPtr.Zero;
    private int mNumImageHeaders = 0;
    private CameraDeviceBehaviour mCameraDevice = null;
    private Image.PIXEL_FORMAT mPixelFormat = Image.PIXEL_FORMAT.RGB565;
    private Texture2D tex;
    public GameObject reflective;
    private bool ImageRecieved;
    public GUITexture preview = null;

    void Start()
    {
        // Register for the tracker callback and request RGB565 camera frames.
        TrackerBehaviour trackerBehaviour = (TrackerBehaviour) FindObjectOfType(typeof(TrackerBehaviour));
        if (trackerBehaviour)
        {
            trackerBehaviour.RegisterTrackerEventHandler(this);
        }

        mCameraDevice = (CameraDeviceBehaviour)
                    GameObject.FindObjectOfType(typeof(CameraDeviceBehaviour));
        mCameraDevice.SetFrameFormat(mPixelFormat, true);
    }

    public void OnTrackablesUpdated()
    {
        Image cameraImage = mCameraDevice.GetCameraImage(mPixelFormat);
        ImageRecieved = cameraImage != null;
        if (ImageRecieved)
        {
            // Try to turn the raw pixel buffer into a texture for the preview.
            var camTex = new Texture2D(cameraImage.Width, cameraImage.Height, TextureFormat.RGB565, false);
            camTex.LoadImage(cameraImage.Pixels);
            //camTex.Resize(256, 256); (resizing the texture leaves me with a black texture instead)
            tex = camTex;
            if (preview != null)
            {
                preview.texture = tex;
            }
        }
    }
}

I then attached this script to the AR camera and connected a GUI texture to the preview slot for debugging purposes.

Any suggestions?

Re: getcameraimage and more

May 20, 2011 - 9:51am #2

The camera image returned by that method is the raw pixel buffer from the camera (does not include any augmentation). Here is a bit of sample code for using it. Note that this is an implementation of the ITrackerEventHandler. The OnTrackablesUpdated method is called at the end of each frame, which is a good point to ask for the camera image that goes with that frame.

using UnityEngine;

public class MyImageHandler : MonoBehaviour, ITrackerEventHandler
{
    private CameraDeviceBehaviour mCameraDevice = null;
    private Image.PIXEL_FORMAT mPixelFormat = Image.PIXEL_FORMAT.RGB565;

    void Start()
    {
        TrackerBehaviour trackerBehaviour = (TrackerBehaviour) FindObjectOfType(typeof(TrackerBehaviour));
        if (trackerBehaviour)
        {
            trackerBehaviour.RegisterTrackerEventHandler(this);
        }

        mCameraDevice = (CameraDeviceBehaviour)
                    GameObject.FindObjectOfType(typeof(CameraDeviceBehaviour));
        mCameraDevice.SetFrameFormat(mPixelFormat, true);
    }

    public void OnTrackablesUpdated()
    {
        Image cameraImage = mCameraDevice.GetCameraImage(mPixelFormat);

        if (cameraImage != null)
        {
            byte[] pixels = cameraImage.Pixels;
            // Do something with the pixels!
        }
    }
}

Thanks for the rest of the feedback, I've passed it on!

- Kim
