I updated the Vuforia Engine to 9.8.11, but VuMarkManager could no longer be found, so I downgraded Vuforia and now it works again.
Is there another solution?
When using Vuforia's cloud image tracking in Unity, my object appears normally, but when I move my camera around, the object shakes and/or jumps around even though the tracked images had 4 stars. I disabled "Track Device Pose" in the Vuforia configuration, and the image tracking is much smoother. However, I would like to use AR Foundation's light/reflection estimation to make the object's lighting look more accurate, but disabling Track Device Pose causes my program to close immediately after it opens.
I'm having issues converting the iOS sample code to work in portrait mode.
There are two problems: the video is flipped upside down, and it's stretched, as if it were drawn with a different aspect ratio.
I've tried various things but can't solve it. Could someone help?
Using Vuforia's 3D object recognition, the recognition target is a white model: a white plastic model with a steel frame from a steel plant. I used the scanner software to collect feature points, about 340 in total. But when testing, no green recognition blocks appear on the scanned model. Without changing the model, is there any other way to solve this, or is something else the problem?
I have a strange issue with my project. Vuforia Cloud detection works fine when I have my iPhone connected to Xcode and run the app from Xcode for debugging. But if I close the app and launch it again from its icon, for some reason nothing is detected. I have no way to test what's going on, because it works whenever I'm debugging. I tried changing the run mode to Release, but it didn't help. I am using a production license and database. What could be wrong?
Hi. I'm making an iOS app that uses Vuforia Cloud and SceneKit together, based on the Vuforia sample code.
The MetalRenderer class is used to display the video, while SceneKit is used to draw the AR content.
It works fine, but with one flaw: the video and the SceneKit rendering seem to be slightly off. I can see a small offset between the camera feed and the SceneKit rendering.
Is there some way to sync the two so that they render frames at the same time?
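One generic approach to this kind of offset (independent of any Vuforia API, so treat the pipeline details as assumptions) is to buffer recent camera frames with their timestamps and, on each render pass, draw the buffered frame whose timestamp is closest to the pose being rendered, rather than always drawing the most recent frame. An illustrative sketch:

```python
from collections import deque

class FrameBuffer:
    """Keeps the last few (timestamp, frame) pairs from the camera."""

    def __init__(self, capacity=5):
        self.frames = deque(maxlen=capacity)

    def push(self, timestamp, frame):
        """Called from the camera callback with each new video frame."""
        self.frames.append((timestamp, frame))

    def closest(self, render_time):
        """Return the buffered frame whose timestamp is nearest render_time,
        or None if nothing has been buffered yet."""
        if not self.frames:
            return None
        return min(self.frames, key=lambda tf: abs(tf[0] - render_time))[1]

# Sketch of use in the two callbacks (names are illustrative):
#   camera callback:        buf.push(frame_timestamp, frame)
#   render-loop callback:   draw(buf.closest(current_render_time))
```

The design point is that both renderers then key off the same timestamp instead of "whatever arrived last", which is what typically causes the video and the 3D content to drift apart by a frame or so.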
I would like to ask a question about my project. I am implementing ROS communication between one of the ABB robots and Unity using the ROS# package (https://github.com/siemens/ros-sharp). Instead of using a camera device connected to my computer (Windows), I would like to use the camera connected to ROS. I found a way to display the camera stream running over ROS in the Unity environment, but I do not know how to run image recognition on it.
Hi. I would like to use Vuforia Cloud Recognition on iOS to perform cloud recognition on a given pixel buffer that comes from elsewhere in the app. I don't want Vuforia to display anything, just to return the list of recognized images from the cloud. Is this possible? Could you point me to any references on how to achieve this?
Hello, we are currently working on a project using Vuforia on the HoloLens, and we want to overlay a model of an object onto the object once it is scanned. The scanning process is working, and the model is placed into the world, but its alignment is inconsistent. Sometimes the model is in the same place, but other times it is angled or in a completely different location. We are using Unity and Model Targets. We have been trying to work around this issue for quite a while now, but nothing makes it consistently go to the same place. Your help is very much appreciated!