- add a definition for an extra target next to CHIPS and STONES, i.e. update this code:
public static final int NUM_TARGETS = 3; // was 2, set it to 3 or more
mMovieName[STONES] = "VuforiaSizzleReel_1.m4v";
mMovieName[CHIPS] = "VuforiaSizzleReel_2.m4v";
mMovieName[MY_TARGET] = "My_Movie_Filename.m4v"; //added line of code
- Open VideoPlayback.cpp (under the JNI directory of the project):
Update the definitions of NUM_TARGETS and the target names as follows:
static const int NUM_TARGETS = 3; //was 2 in original sample code, set it to 3 or more
static const int STONES = 0;
static const int CHIPS = 1;
static const int MY_TARGET = 2; //new line of code for my target
Update this code (in _initRendering function):
keyframeQuadAspectRatio[STONES] = (float)textures[0]->mHeight / (float)textures[0]->mWidth;
keyframeQuadAspectRatio[CHIPS] = (float)textures[1]->mHeight / (float)textures[1]->mWidth;
keyframeQuadAspectRatio[MY_TARGET] = (float)textures[2]->mHeight / (float)textures[2]->mWidth; //added line of code for my target
In _renderFrame function, update the code to determine the currentTarget as follows:
if (strcmp(imageTarget.getName(), "stones") == 0)
currentTarget=STONES;
else if (strcmp(imageTarget.getName(), "chips") == 0)
currentTarget=CHIPS;
else
currentTarget=MY_TARGET;
Similarly, update the glVertexAttribPointer code:
if (strcmp(imageTarget.getName(), "stones") == 0)
glVertexAttribPointer(videoPlaybackTexCoordHandle, 2, GL_FLOAT, GL_FALSE, 0,
(const GLvoid*) &videoQuadTextureCoordsTransformedStones[0]);
else if (strcmp(imageTarget.getName(), "chips") == 0)
glVertexAttribPointer(videoPlaybackTexCoordHandle, 2, GL_FLOAT, GL_FALSE, 0,
(const GLvoid*) &videoQuadTextureCoordsTransformedChips[0]);
else
glVertexAttribPointer(videoPlaybackTexCoordHandle, 2, GL_FLOAT, GL_FALSE, 0,
(const GLvoid*) &videoQuadTextureCoordsTransformed_My_Target[0]);
- Add this definition at the beginning of VideoPlayback.cpp (next to the other two for chips and stones):
GLfloat videoQuadTextureCoordsTransformed_My_Target[] = {
0.0f, 0.0f,
1.0f, 0.0f,
1.0f, 1.0f,
0.0f, 1.0f,
};
- Update the _setVideoDimensions function to account for the MY_TARGET case (next to the CHIPS and STONES cases).
Finally, create a new DataSet using the Target Management System (TMS), uploading three images (Chips, Stones, and a third image of your choice), and use it in place of the DataSet currently used in the sample.
I hope this helps.
Investigating a bit further: it seems that MP4 files sometimes have problems being played in streaming mode; this appears to depend on the encoding settings used when the video was created.
This means that if the MP4 was encoded with "streaming-unfriendly" settings, it will be possible to play it from a file (after downloading it to disk) but not in streaming mode (i.e. directly over HTTP). One common cause is the MP4 index (the "moov" atom) being written at the end of the file; remuxing the video so that the moov atom comes first (e.g. with ffmpeg's `-movflags faststart` option or the qt-faststart tool) usually makes it stream-friendly.
This issue does not seem to be Android specific, but a more general issue of MP4 videos.
This is discussed for example in this forum:
http://www.longtailvideo.com/support/forums/jw-player/setup-issues-and-embedding/11471/mp4-video-not-streaming-encoding-issue