No news I know of. It has to do with investment and licensing issues, I am sure. Fact: Qualcomm bought the source code and exclusive mobile-platform rights to StudierStube ES. This makes sense, as AR requires a lot of processing power and Qualcomm is in the business of making CPUs/GPUs for mobile devices. Investing in another expensive source license to support desktop AR doesn't make sense for them.
Just my guess.
But not to worry: I am working on an AR engine wrapper that will solve this issue.
Hi, it would be great to have an API for predefined target databases, the same as for the cloud one. Cloud is great, but there is also a need for offline target databases, downloaded over Wi-Fi with the same packed data and ready to be used offline on devices without 3G or free Wi-Fi hotspots. Manual dataset modification on the Vuforia portal is an unpleasant manual overhead when managing things like regular weekly AR updates.
Thanks
Wish List
Text recognition support for the forward-facing camera, for Unity to iOS etc.
We want our users to be able to see themselves while text detection is occurring, so that we can render virtual objects over the camera feed containing the user, as is the case with current AR techniques.
October 26, 2012
Wish List
A 3D rendering engine for the iOS SDK and Android SDK...
April 2, 2013
Wish List
A full native activity on Android, without any JNI interface.
If there is any problem getting the camera working with the NDK, the OpenCV guys have successfully built capture libraries.
Thanks
November 30, 2012
Wish List
Either an API to upload images to the Target Manager, process them, and download the datasets OR have a localized sdk that can process targets and create datasets on device.
June 3, 2013
Wish List
Nalin,
Thanks for pointing this out. I tried it, but it leads to exactly the same memory leak.
I just imported the "Background Texture Access" demo into a fresh Unity project and made no modifications other than changing the Target Resolution in Player Settings to 768p (iPad).
The same behaviour described in the other two posts can be observed!
Best,
Stefan
December 1, 2010
Wish List
Desktop version, so I can build a standalone app and create Augmented Reality experiences controlled by Leap Motion.
YouTube Demo:
http://www.youtube.com/watch?v=GiiPcsoOFfQ (only runs inside of Unity Pro using the webcam preview feature)
January 21, 2012
Wish List
Hi Stefann
Can you see if this workaround works for you?
https://developer.vuforia.com/forum/ios/issue-when-setting-screen-resolution#comment-2029545
N
May 21, 2012
Wish List
Thanks very much, but I meant a standalone app on Mac OS X, not an app on iOS. Sorry for my mistake.
June 5, 2012
Wish List
Full support for non-native rendering resolutions in Unity (e.g. 1024x768 on an iPad with Retina display).
Currently this causes a memory leak, as described here
https://developer.vuforia.com/forum/unity-3-extension-technical-discussion/memory-leak
and here
https://developer.vuforia.com/forum/unity-3-extension-technical-discussion/camera-viewport-gone-crazy#comment-2028588
December 1, 2010
Wish List
Support for the webcam feature in stand-alone apps
June 5, 2012
Wish List
Regarding this post, I would like to recommend an option to add more font types to the library for text tracking.
July 1, 2013
Wish List
Regarding this post, I would like to suggest a new feature: the QCAR library could expose the detected image feature locations from the camera, so that third-party developers could draw their own "scan points" as desired for non-cloud-based target databases.
January 22, 2013
Wish List
SLAM
That is all...
April 23, 2013
Wish List - Add support for Windows mobile devices
Although Windows mobile devices don't have a huge market, the recent announcement of the Microsoft partnership with Unity, and the upcoming integration of the Microsoft product lines, suggest it will be a growing area. As such, it would be really nice to get Windows mobile support alongside Android and iOS.
January 5, 2013
Wish List
It would be very useful to retrieve an image and its feature points via the VWS API.
March 30, 2012
Wish List
Hello, I want to request that future releases of Vuforia support user-defined targets that allow the user to specify some points, which Vuforia would then use to construct the camera pose estimation for rendering.
Thanks
October 13, 2012
Wish List
Since Vuforia is based on StudierStube, I would like to get the raw marker pose, not the filtered one.
In StudierStube Target.h, there is an enum:
To get the raw data, you need to call this:
instead of
Also, you need to call this:
target->setFilterStrength(0.0, 0.0, 0);
Please expose this feature to the user.
This would be a great fix for the problem (marker de-recognition delay) I am experiencing here:
http://www.youtube.com/watch?v=W6Tuvlm9Oqs
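To illustrate what a filter strength of zero would buy us, here is a minimal C++ sketch of the kind of exponential smoothing a pose filter typically applies. This is an illustration of the general technique only, not StudierStube's actual implementation:

```cpp
#include <cassert>

// Illustrative only: exponential low-pass filtering of a position, the
// general technique a 0..1 pose-filter "strength" implies.
// strength = 0 passes the raw pose straight through (what this post asks for);
// strength near 1 makes the reported pose lag heavily behind the raw one.
struct Vec3 { double x, y, z; };

Vec3 filterPose(const Vec3& previous, const Vec3& raw, double strength) {
    return { strength * previous.x + (1.0 - strength) * raw.x,
             strength * previous.y + (1.0 - strength) * raw.y,
             strength * previous.z + (1.0 - strength) * raw.z };
}
```

With strength 0, the filtered pose equals the raw pose immediately, which is why zeroing the filter should remove the de-recognition delay.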
May 3, 2011
Wish List
The addition of the text tracker is a nice feature. A nice enhancement would be to recognize regexes in addition to the current whitelist/blacklist. For instance, enabling the recognition of phone numbers, URLs, email addresses etc that are partially freeform but parseable.
In addition, while you're in the business of adding trackers, it would be nice to have built-in trackers for QR code and bar codes, turning them into first-class targets instead of us having to rely on the current "go integrate zxing" hacks. The required image conversion step really kills performance.
I also think some better access to the camera would be nice. We often find ourselves wanting to capture a higher quality image than the video preview.
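As a sketch of how regex-based recognition could sit on top of the current word list, a hypothetical post-processing pass over the recognized words (not an existing Vuforia API) might look like this in C++:

```cpp
#include <regex>
#include <string>
#include <vector>

// Hypothetical filter pass: keep only those recognized words that match
// one of a set of application-defined patterns (phone numbers, emails, ...).
std::vector<std::string> filterByPatterns(const std::vector<std::string>& words,
                                          const std::vector<std::regex>& patterns) {
    std::vector<std::string> kept;
    for (const auto& w : words) {
        for (const auto& p : patterns) {
            if (std::regex_match(w, p)) {
                kept.push_back(w);
                break;
            }
        }
    }
    return kept;
}
```

The patterns themselves would be supplied by the app, e.g. `\d{3}-\d{4}` for a local phone number or `\w+@\w+\.\w+` for a rough email match, replacing the fixed whitelist/blacklist with freeform but parseable classes of text.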
August 20, 2012
Wish List
Hello. My wish comes from a Sesame Street demo we saw on youtube: https://www.youtube.com/watch?v=pUhQDXDfxhc
We would LOVE to be able to work with 3D figurine markers.
Thanks
-Mo
May 31, 2013
Wish List
Why not provide vwl files to support languages like French, German, Spanish...?
September 23, 2011
Wish List
In the text recognition sample, the ability to recognize Japanese characters.
March 19, 2013
Wish List
Enable number recognition, not just letters.
June 19, 2013
Wish List
We want to be able to remove the scan lines and feature-point display of Cloud Recognition.
April 25, 2011
Wish List
Make it possible to get the color of the camera image, so that I can easily tint the lights in my scene and my augmentation gets more integrated with the real world.
I know that the String AR framework has some way of doing this, by accessing the MarkerInfo.color values.
Per StringAR documentation:
"color:
Represents the average color of the detected marker in the frame, relative to the loaded marker image. In other words, if your camera captured a frame in which colors corresponded exactly to the original marker image, youʼd get {1, 1, 1}.
If the ambient light is warm, you might get something like {1.2, 1, 0.9} for example. How you use this is up to you. For some types of content, it can be used to imitate real-world ambient lighting conditions to great effect."
I tried my own implementation (as seen on https://developer.vuforia.com/forum/unity-3-extension-technical-discussion/tune-access-cameras-color ) but you take a performance hit.
I found some more performant ways of doing this, like http://www.bobbygeorgescu.com/2011/08/finding-average-color-of-uiimage/ , but that accesses iOS directly. Is there any way for you guys at Vuforia to make something like this possible? Maybe mimic in some way what can be achieved with the StringAR color values?
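For reference, the averaging itself is cheap; the expensive part is getting at the pixels. A minimal C++ sketch of computing a normalized average color from an 8-bit RGB buffer (an illustration only, not Vuforia or StringAR code) could be:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

struct Color { float r, g, b; };

// Average the channels of a tightly packed 8-bit RGB frame (3 bytes per
// pixel), normalized to 0..1.
Color averageColor(const std::vector<std::uint8_t>& rgb) {
    double r = 0, g = 0, b = 0;
    const std::size_t n = rgb.size() / 3;
    for (std::size_t i = 0; i < n; ++i) {
        r += rgb[3 * i];
        g += rgb[3 * i + 1];
        b += rgb[3 * i + 2];
    }
    return { float(r / (n * 255.0)),
             float(g / (n * 255.0)),
             float(b / (n * 255.0)) };
}
```

StringAR's values are additionally divided by the reference marker's own average color, which is how warm ambient light can yield components above 1.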
August 23, 2012
Wish List
This. It would help our work greatly.
May 22, 2013
Wish List
Support permanent storage of user defined targets.
November 17, 2011
Wish List
+1 for Windows Desktop stand alone application...
June 2, 2013
Wish List
Currently Vuforia doesn't support multiple occurrences of the same frame marker; all visible markers need to have a unique ID.
Example use case: a board game like chess, where there are several instances of the same piece (like 8 white pawns) that are ideally represented by the same marker.
December 19, 2012
Wish List
- More robust raw marker data acquisition in Unity. I would like an array that holds only the marker transform and its ID, only for the markers currently in view, relative to a camera with an identity transform. Currently, if I create markers from code, Vuforia creates a game object for each marker. This creates a lot of overhead if many markers are used, and it clogs up the Hierarchy view in the Editor. Additionally, not treating the raw markers as game objects would solve the issue that the marker pose changes when the camera transform changes, which requires a complex setup procedure to make UCS work. Another script can turn the raw marker pose into a game object so that generic ease of use is maintained, but it shouldn't be part of the Vuforia kernel, as it were.
- Ability to set the world center mode from a script (before Vuforia needs it locked down), not only via the Editor.
May 3, 2011
Mac and PC desktop build support
Hi,
Has there been any movement on providing Mac and PC desktop builds?
I don't understand why this hasn't been supported, especially as it works in the Editor. It's a major sticking point for using Vuforia and is forcing us to evaluate alternatives, which is a real shame, as in most cases Vuforia is perfect for our needs.
January 5, 2013
Wish List
Track more than 10 image targets simultaneously.
And detect image targets more rapidly.
January 5, 2013
Wish List
It would be great if the Vuforia team would consider adding support for Android boards or media players with USB webcams. On all the devices I've tested it works to some extent, but it doesn't seem like there are any profiles to support various cameras like in the editor with a webcam. As a result the video feed is always low quality and tracking isn't as robust as with devices that do have official support.
September 10, 2012
Wish List
My wish list :
I'm so looking forward to using Vuforia in standalone PC/Mac/Linux/web builds via Unity 3D, and to the new word detection seen in Big Bird's Words!!
April 4, 2013
Wish List
Create and delete of databases via the VWS API. It's very important for building a widespread AR service with many targets...
December 24, 2011
Wish List
PC version using webcams. This would be particularly useful for making head-mounted AR displays (attach a couple of webcams to the Oculus Rift and have head-mounted stereo AR!)
January 18, 2013
Wish List
Following the many SLAM requests, I'll add a slightly easier feature to the wish list: "visual odometry" (otherwise called "markerless tracking" in other posts). In practice, it would be nice if one could estimate the 3D pose of a camera with respect to a given frame of reference, not necessarily building a map of the environment, even if the estimate suffers from cumulative error.
Nicola
April 19, 2013
Wish List
Multiple trackables that are exactly the same, to display the same 3D augmented content on each of them.
Thanks.
May 3, 2012
Wish List
Hi,
I have been working on AR for 3 years now, and your product is wonderful, but I wish Windows/Mac standalone builds were supported.
September 21, 2011
Wish List
Hello,
My wish list :
- using Vuforia for standalone PC applications with Unity 3D
- improve long-distance detection (with smaller targets)
- add 3D markerless detection
- add SLAM detection
I also have a question: when will the next release of Vuforia be?
December 19, 2012
Wish List
Persisting user-defined targets as XML and DAT files on the device running the application.
This would be very nice!
October 21, 2012
Wish List
It would be great if we could control the camera image that is fed to Vuforia. For instance, we could pull the camera feed, filter it in some fashion, and push it to the augmentation engine.
Or we could have a saved stream that could be "replayed" to Vuforia.
Something else that would be interesting is better control over the CameraDevice, namely its reaction to light and its aperture.
Something that comes to mind right away is the possibility of applying computational filters to reduce glare: it is very hard to recognize trackers printed on a glossy surface, but there are algorithms that help reduce glare or increase contrast.
Also, tweaking light exposure, or programmatically giving the user the option to adjust the camera's exposure, could result in better tracking depending on where the user is. Or this could be controlled by the device's light sensor, or computed from CameraImage processing.
For instance, sometimes Vuforia won't track a target that is clearly visible. Blocking the camera with your hand for half a second (causing it to react to darkness, then brightness) will make Vuforia begin to track *while the camera is adjusting to the light*... can't we trigger this deliberately in software?
I find it hard to track in low-light environments such as subways or cafés, and sometimes also in very bright environments such as outdoors in full sunshine; the camera doesn't necessarily react properly and some trackers are washed out.
This is even more the case when we are using trackers rated below 4 stars. I know that in a perfect world we'd have controlled lighting and 5-star images, but reality doesn't always permit that, and having control over the way data is fed to the recognition engine could improve results, for those who want to use it.
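As one concrete example of the kind of pre-processing this would enable, here is a generic linear contrast stretch in C++, assuming a plain grayscale buffer. Nothing here is Vuforia-specific; it is the sort of filter one would run on a washed-out frame before handing it to the tracker:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Linear contrast stretch: remap the frame's [min, max] intensity range
// to the full [0, 255] range, recovering contrast in glary or dim frames.
std::vector<std::uint8_t> stretchContrast(const std::vector<std::uint8_t>& gray) {
    if (gray.empty()) return gray;
    auto [lo, hi] = std::minmax_element(gray.begin(), gray.end());
    if (*lo == *hi) return gray;  // flat image, nothing to stretch
    std::vector<std::uint8_t> out(gray.size());
    for (std::size_t i = 0; i < gray.size(); ++i)
        out[i] = std::uint8_t((gray[i] - *lo) * 255 / (*hi - *lo));
    return out;
}
```

A real pipeline would likely prefer histogram equalization or a glare-specific method, but even this simple remap shows why app-controlled access to the frame before recognition could help low-contrast targets.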
March 28, 2013
Wish List
Hi,
It would be great if Vuforia supports these platforms with Unity:
Standalone- PC & Mac
Windows 8 Mobile
Blackberry Z 10 mobile
Flash for Web
These all seem possible, especially the Standalone support.
December 19, 2011
Wish List
Summary: Pass an arbitrary Texture2D, at runtime, to the ImageTarget Builder.
In the UserDefined demo, it seems to take the whole frame from the video and build an ImageTarget from it. I need to be able to pass in a portion of the frame, after some other operations have been done to it. I've seen comments from others about taking a portion of a frame to use as an ImageTarget. So I think the feature that covers this and other similar cases would be passing an arbitrary Texture2D to the ImageTarget Builder.
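The cropping half of this is straightforward once you have the raw pixels. A minimal C++ sketch of copying a sub-rectangle out of a tightly packed grayscale frame (illustrative only; the ImageTarget Builder exposes no such entry point today):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Copy a w-by-h rectangle starting at (x, y) out of a packed grayscale
// frame that is frameW pixels wide. The resulting buffer is what one
// would want to hand to the target builder instead of the whole frame.
std::vector<std::uint8_t> cropRegion(const std::vector<std::uint8_t>& frame,
                                     int frameW, int x, int y, int w, int h) {
    std::vector<std::uint8_t> out;
    out.reserve(std::size_t(w) * h);
    for (int row = y; row < y + h; ++row)
        for (int col = x; col < x + w; ++col)
            out.push_back(frame[std::size_t(row) * frameW + col]);
    return out;
}
```

Any other pre-processing (sharpening, thresholding, compositing) would slot in between the crop and the hand-off, which is exactly the flexibility an arbitrary-Texture2D entry point would give.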
March 25, 2013
Wish List
+1 to having an API for the Device Database. Surely not every situation suits cloud recognition.
Alternatively, a standalone database creation tool would be great.
-katch
December 1, 2012
Wish List
- Ability to remove the scan line and feature points.
It would be really nice if you could remove them with a function (or two).
March 12, 2013
Wish List
Publish to native Mac OS X.
Please please please!
Save me!
January 31, 2013
Wish List
Publish to Windows and Mac desktop, through Unity
August 13, 2012
Wish List
Detailed guide to UDT (User Defined Targets)
Save a runtime-generated UDT for future use
Publish to Mac and PC standalone builds and the web, with webcams
November 28, 2011
Wish List
Face recognition, like D'Fusion Studio from t-immersion
:)
February 22, 2012