Is there any possibility for a parallel solution for tackling the Camera Patch in Unity 3D? https://developer.vuforia.com/resources/dev-guide/camera-target-patch
Hi elecman,
Well doing a small search redirected me to one of your posts
http://pilotpage.monosock.org/fileadmin/files/UCS12bugfix.rar
I updated the two scripts. The initial animation had the correct texture, but after about two seconds the texture showed some strange interference from the background.
Please find the link for the video below:
https://www.dropbox.com/s/5rz6vn9bftbnlty/MOV_0035_1_1.mp4
Hi Elecman,
I followed your instructions in UCS and ran the demo on my Android phone.
When I tried the VidTexPatch scene, I saw that the UV is not mapped properly. I'm sure I didn't make any changes to the code.
What do you think is causing this distortion?
Please find the pictures attached.
In UCS, have a look at Vuforia.cs and SpecialEffects.cs
The code posted in this thread is just a reference.
Edit:
On second thought, I am not sure what you mean by:
but I just can't see where in your code it is extracting the trackable target from the live camera texture.
-The trackable target's pose estimation (transform) is extracted from the live camera texture by Vuforia itself, via their API. It is accessed in Vuforia.cs in the function UpdateMarkerStatusAndTransform().
-The texture region where the marker appears in the live video feed is not extracted as such. Instead, the whole screen texture is saved and then UV mapped so that only the texture region of the marker (or other object) is visible. The UV mapping is done in VidTexPatch.shader. Another important function for getting the texture mapped correctly is GetMVPMatrix(), located in SpecialEffects.cs
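As a numeric sketch of that UV mapping (plain Python, just the math the shader performs, not Unity code; the matrix values here are illustrative): a vertex is multiplied by the frozen MVP matrix, perspective-divided into the -1..1 screen-space range, then remapped to the 0..1 UV range.

```python
# Sketch of the UV math in VidTexPatch.shader: project a vertex with the
# frozen MVP matrix, perspective-divide to screen space (-1..1), then
# remap to UV space (0..1).

def mat_vec_mul(m, v):
    """Multiply a 4x4 matrix (list of rows) by a 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def clip_to_uv(clip):
    """Perspective divide to screen space, then map -1..1 to 0..1."""
    screen_x = clip[0] / clip[3]
    screen_y = clip[1] / clip[3]
    return (0.5 * screen_x + 0.5, 0.5 * screen_y + 0.5)

# With an identity MVP (illustrative only), a vertex at the origin lands
# at the screen centre, i.e. UV (0.5, 0.5).
identity = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
clip = mat_vec_mul(identity, [0.0, 0.0, 0.0, 1.0])
print(clip_to_uv(clip))  # (0.5, 0.5)
```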
Thank you elecman for posting this. I'm trying to achieve one thing - capturing only the trackable target (Vuforia) as viewed through the camera so I can compare pixels from the original source target and the live target in order to tone map virtual assets.
I'm reviewing your code carefully, but I just can't see where in your code it is extracting the trackable target from the live camera texture.
Thanks,
Nat
I found a way to get this to work in Unity. You can use a RenderTexture (fast, needs Unity Pro) or ReadPixels (visible frame stutter, no Pro required).
It is easy to get the UV mapping wrong so have a look at UCS 1.0 for the full code. These are just some code snippets so you get the idea.
Get the video texture from Vuforia, put it on a plane, and let a dedicated camera view it, even if you are not using the render texture. Use layer masks and camera depth values so the plane isn't seen by the real game camera. Then, when the texture for the special-effect object needs to be frozen, call this:
```csharp
if(!isFrozen){
    //Save the current camera frame
    renderTexCam.camera.Render();

    //Get the matrix which is equivalent to UNITY_MATRIX_MVP in a shader
    Matrix4x4 MVP = GetMVPMatrix(Camera.main.gameObject, vidTexPatch);

    //Copy the matrix to the shader
    vidTexPatch.renderer.material.SetMatrix("_MATRIX_MVP", MVP);

    #if !USE_RENDERTEXTURE
    RenderTexture.active = renderTexCam.camera.targetTexture;
    renderTextureReadPix.ReadPixels(new Rect(0, 0, Screen.width, Screen.height), 0, 0);
    renderTextureReadPix.Apply();
    RenderTexture.active = null;
    #endif

    vidTexObject.SetActive(true);
    t = 0.0f;
    isFrozen = true;
}
else{
    vidTexObject.SetActive(false);
    isFrozen = false;
}
```
Do this once at startup:
```csharp
private RenderTexture renderTexture;
private Texture2D renderTextureReadPix;

renderTexture = new RenderTexture(Screen.width, Screen.height, 0, RenderTextureFormat.ARGB32);
renderTextureReadPix = new Texture2D(Screen.width, Screen.height);

#if USE_RENDERTEXTURE
vidTexPatch.renderer.material.mainTexture = renderTexture;
#else
vidTexPatch.renderer.material.mainTexture = renderTextureReadPix;
#endif
```
This is the function to get the MVP matrix:
```csharp
private Matrix4x4 GetMVPMatrix(GameObject cameraObject, GameObject shaderObject){
    Matrix4x4 P = GL.GetGPUProjectionMatrix(cameraObject.camera.projectionMatrix, false);
    Matrix4x4 V = cameraObject.camera.worldToCameraMatrix;
    Matrix4x4 M = shaderObject.renderer.localToWorldMatrix;
    Matrix4x4 MVP = P * V * M;
    return MVP;
}
```
This is the shader:
```
//NOTE: the game object which this shader is attached to must have a uniform scale
//such as 1, 1, 1, or 2, 2, 2, but not for example 1, 0.5, 1
Shader "Custom/VidTexPatch" {

    Properties {
        _MainTex("Texture", 2D) = "white" { }
    }

    SubShader {
        Pass {
            Cull Back

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _MainTex;
            float4x4 _MATRIX_MVP;

            struct v2f {
                float4 pos : SV_POSITION;
                float2 uv : TEXCOORD0;
            };

            v2f vert(appdata_base v){
                v2f o;
                float2 screenSpacePos;
                float4 clipPos;

                //Use the frozen matrix to do the UV mapping.
                clipPos = mul(_MATRIX_MVP, v.vertex);

                //Convert position from clip space to screen space.
                //Screen space has range x=-1 to x=1
                screenSpacePos.x = clipPos.x / clipPos.w;
                screenSpacePos.y = clipPos.y / clipPos.w;

                //The screen space range (-1 to 1) has to be converted to
                //the UV range 0 to 1 using this formula. Note that the render
                //texture for this shader is already correctly mapped and flipped
                //so no additional math is needed like with the VidTexDeform shader.
                o.uv.x = (0.5f * screenSpacePos.x) + 0.5f;
                o.uv.y = (0.5f * screenSpacePos.y) + 0.5f;

                //The position of the vertex should not be frozen, so use
                //the standard UNITY_MATRIX_MVP matrix for that.
                o.pos = mul(UNITY_MATRIX_MVP, v.vertex);

                return o;
            }

            half4 frag(v2f i) : COLOR {
                half4 texcol = tex2D(_MainTex, i.uv);
                return texcol;
            }
            ENDCG
        }
    }
}
```
I will include this in the next version of UCS. Let me know if you want a working beta now.
Hi pixelplacement,
You can log this as a feature request in the wishlist for further consideration.
In the meantime, if you want to make some progress here yourself then getting the camera image within Unity would be the starting point:
https://developer.vuforia.com/resources/dev-guide/unity-camera-image-access
then it's just a case of following the algorithm, which is explained quite well there.
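Once you have the camera image pixels within Unity, the comparison step described earlier in the thread (comparing the reference target against the live target to drive tone mapping) might look something like this. This is a plain-Python sketch, not Unity code; `mean_luma` and `exposure_gain` are hypothetical helpers, and it assumes you already have raw RGB888 byte buffers for the two regions:

```python
# Hypothetical sketch: estimate a brightness gain between a reference
# target image and the live camera crop of the same target. The gain
# could then be applied when tone mapping virtual assets.

def mean_luma(rgb_bytes):
    """Average Rec. 601 luma of an RGB888 byte buffer."""
    total = 0.0
    n = len(rgb_bytes) // 3
    for i in range(n):
        r, g, b = rgb_bytes[3*i], rgb_bytes[3*i + 1], rgb_bytes[3*i + 2]
        total += 0.299 * r + 0.587 * g + 0.114 * b
    return total / n

def exposure_gain(reference, live):
    """Ratio of live to reference brightness (1.0 = identical exposure)."""
    return mean_luma(live) / mean_luma(reference)

# A live crop that is uniformly half as bright as the reference
# gives a gain of about 0.5.
ref = bytes([200, 200, 200] * 4)
live = bytes([100, 100, 100] * 4)
print(exposure_gain(ref, live))  # ~0.5
```

In practice you would sample only the UV-mapped target region (as in the patch technique above), not the whole frame, and a per-channel or histogram comparison may work better than a single mean.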
Also, if you are going to prototype this, it's probably best done in the Webcam mode as it will help you iterate faster.
HTH
N
Here is a similar solution:
https://developer.vuforia.com/forum/unity-3-extension-technical-discussion/region-capture-0