Tune up access of camera's color

June 16, 2013 - 12:02am #1

Hello guys,

I am trying to integrate my augmentation with the real world a little more. For that I am trying to get the color of the camera-image so that I can easily tint the lights in my scene.

I know that the String AR framework has some way of doing this by accessing the MarkerInfo.color values.

Per StringAR documentation:
"color:

Represents the average color of the detected marker in the frame, relative to the loaded marker image. In other words, if your camera captured a frame in which colors corresponded exactly to the original marker image, youʼd get {1, 1, 1}.

If the ambient light is warm, you might get something like {1.2, 1, 0.9} for example. How you use this is up to you. For some types of content, it can be used to imitate real-world ambient lighting conditions to great effect."
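As I understand it, values like that could drive a Unity light tint roughly like this (just a sketch to illustrate the idea; the class and method names are mine, not part of StringAR's API):

using UnityEngine;

public class AmbientTint : MonoBehaviour
{
    // Neutral scene ambient before any tinting is applied.
    public Color baseAmbient = Color.white;

    // colorFactor would hold the {r, g, b} multipliers reported by the tracker.
    public void ApplyColorFactor(Vector3 colorFactor)
    {
        // {1, 1, 1} leaves the ambient untouched; warm light such as
        // {1.2, 1, 0.9} shifts it toward red/yellow.
        RenderSettings.ambientLight = new Color(
            baseAmbient.r * colorFactor.x,
            baseAmbient.g * colorFactor.y,
            baseAmbient.b * colorFactor.z);
    }
}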

Of course, after seeing this I tried my own version, so I did the following: 1) get the camera pixels in Image.PIXEL_FORMAT.RGB888, 2) iterate through all of them, and 3) calculate the average value for each color. I soon realized that this incurs a performance hit that is noticeable on the camera feed. Code below:
 

using UnityEngine;
using System.Collections;

public class ARCameraColor : MonoBehaviour, ITrackerEventHandler
{
    public float refreshRate = 5f;

    private float lastUpdated = 0f;

    private CameraDevice cam;
    private Image.PIXEL_FORMAT m_PixelFormat = Image.PIXEL_FORMAT.RGB888;
    private bool m_RegisteredFormat = false;
    private Image image;

    private byte[] pixels;
    private int[] rgbPixels = new int[3];
    private int row, col, width, height, numPixels;
    private int stride;

    void Start()
    {
        // Register for tracker events once, instead of polling a flag in Update().
        QCARBehaviour qcarBehaviour = (QCARBehaviour) FindObjectOfType(typeof(QCARBehaviour));
        if (qcarBehaviour)
        {
            qcarBehaviour.RegisterTrackerEventHandler(this);
        }
    }

    void Update()
    {
        if (lastUpdated == 0f || Time.realtimeSinceStartup - lastUpdated >= refreshRate)
        {
            lastUpdated = Time.realtimeSinceStartup;

            cam = CameraDevice.Instance;
            image = cam.GetCameraImage(m_PixelFormat);

            if (image == null)
            {
                Debug.Log(m_PixelFormat + " image is not available yet");
                return;
            }

            string s = m_PixelFormat + " image: \n";
            s += "  size: " + image.Width + "x" + image.Height + "\n";
            s += "  bufferSize: " + image.BufferWidth + "x" + image.BufferHeight + "\n";
            s += "  stride: " + image.Stride;
            Debug.Log(s);

            pixels = image.Pixels;
            stride = image.Stride;
            width = image.Width;
            height = image.Height;

            // Reset the accumulators each pass; without this the sums keep
            // growing across refreshes and the averages drift.
            rgbPixels[0] = 0;
            rgbPixels[1] = 0;
            rgbPixels[2] = 0;
            numPixels = 0;

            for (row = 0; row < height; row++)
            {
                for (col = 0; col < width; col++)
                {
                    numPixels++;

                    // RGB888: 3 bytes per pixel, stride bytes per row.
                    rgbPixels[0] += (int)pixels[(row * stride) + (col * 3)];
                    rgbPixels[1] += (int)pixels[(row * stride) + (col * 3) + 1];
                    rgbPixels[2] += (int)pixels[(row * stride) + (col * 3) + 2];
                }
            }

            rgbPixels[0] /= numPixels;
            rgbPixels[1] /= numPixels;
            rgbPixels[2] /= numPixels;

            Debug.Log("[Update] - RGB( " + rgbPixels[0] + ", " + rgbPixels[1] + ", " + rgbPixels[2] + " )");
        }
    }

    public void OnInitialized() {}

    public void OnTrackablesUpdated()
    {
        if (!m_RegisteredFormat)
        {
            // Request RGB888 frames once the tracker is running; use the
            // singleton directly in case Update() has not assigned cam yet.
            CameraDevice.Instance.SetFrameFormat(m_PixelFormat, true);
            m_RegisteredFormat = true;
        }
    }
}
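To make the goal concrete: once the averages are computed, I would feed them to a light roughly like this (a sketch only; LightTinter and sceneLight are illustrative names, with the light assigned in the Inspector):

using UnityEngine;

public class LightTinter : MonoBehaviour
{
    public Light sceneLight;  // assigned in the Inspector

    // rgb holds the 0-255 per-channel averages computed by ARCameraColor.
    public void TintLight(int[] rgb)
    {
        sceneLight.color = new Color(rgb[0] / 255f, rgb[1] / 255f, rgb[2] / 255f);
    }
}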

Is there any way to make this perform acceptably? Or am I doing something wrong in the code above? As you can see, I only run the calculation from time to time; I tried 5 and 10 second intervals, but the hit was still noticeable.

I found some more efficient approaches, like http://www.bobbygeorgescu.com/2011/08/finding-average-color-of-uiimage/ , but that one accesses iOS directly. Would it be possible for you, guys @ Vuforia, to support something like this? Maybe mimic in some way what can be achieved with the StringAR color values?

Versions:
. Vuforia AR SDK v2.0.31
. Unity 3.5.7f6

Thank you very much for your time. Hope to hear back from you,
Joel Oliveira

Tune up access of camera's color

June 17, 2013 - 1:17am #4

Hi,

concerning your first question:

1) If the sampled subset happens to include pixels caused by something atypical, like a finger over the camera or a strongly colored object such as a banana or an apple, the scene lighting may be biased, turning everything yellow or red when only part of the camera feed has that color.

I think one way of mitigating this could be to use a "uniformly sparse" approach: scan the entire image, but instead of reading every single pixel, read only one pixel every 50 (for instance). That is, you still go row by row and column by column, but after reading a pixel you skip the next 50, then read another pixel, and so on...

This way you will make sure that you sample all the viewing area, but with a total pixel count which is significantly reduced.
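In rough (untested) code, reusing the pixels, stride, width and height variables from your script above, the sampling could look like this:

// Uniformly sparse sampling sketch: read one pixel, then skip "step"
// pixels, in both directions. RGB888 layout: 3 bytes per pixel,
// stride bytes per row.
int step = 50;
int[] sum = new int[3];
int count = 0;

for (int row = 0; row < height; row += step)
{
    for (int col = 0; col < width; col += step)
    {
        int idx = (row * stride) + (col * 3);
        sum[0] += pixels[idx];
        sum[1] += pixels[idx + 1];
        sum[2] += pixels[idx + 2];
        count++;
    }
}

// Averages over the sampled subset only (roughly 1/2500th of the pixels).
int avgR = sum[0] / count;
int avgG = sum[1] / count;
int avgB = sum[2] / count;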

 

Tune up access of camera's color

June 16, 2013 - 10:16pm #3

Hello Alessandro,

 

Thank you very much for your reply. I'll post a request on the wish-list as soon as possible.

Your suggestion seems doable but has at least two setbacks:

1) If the sampled subset happens to include pixels caused by something atypical, like a finger over the camera or a strongly colored object such as a banana or an apple, the scene lighting may be biased, turning everything yellow or red when only part of the camera feed has that color.

2) While trying to tune the algorithm, I noticed that the pixel-access call itself (image.Pixels in the code above) also has a performance hit in the app.

 

Once again thanks for your answer. Best Regards,

Joel Oliveira

Tune up access of camera's color

June 16, 2013 - 4:19am #2

Hi, 

the current SDK does not offer a function to return the average color or brightness of the camera image;

what you do in Unity with your algorithm is one possible way, but as you also observed, it is very slow (as you iterate over all the pixels);

perhaps one workaround could be to read only a subset of the pixels (rather than every pixel in the image); this could give you a reasonable approximation...

 

otherwise, I can invite you to post the request for this additional feature in our Forum wish-list:

https://developer.vuforia.com/forum/general-discussion/wish-list

 
