Take advantage of the Vuforia Web API and integrate it into your workflows and automation processes to generate Model Targets and Advanced Model Targets.
The process of generating Model Targets with the Web API is similar to that of the Model Target Generator Desktop Tool, and includes the same options to customize the target for optimal detection and tracking.
- Model Target Web API Workflow
- Create a Model Target Dataset
- Guide View Position
- Create an Advanced Model Target Dataset
- Advanced Model Target with Multiple Views
- Uploading CAD Models
- Additional Options
- Data Retention
- Troubleshooting
Authentication
The Model Target Web API uses JWT tokens for authentication, which can be obtained by providing an OAuth2 Client ID and Client Secret. See Vuforia Web API Authentication for how to create the client credentials.
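As a sketch of what the login step can look like with curl (the token endpoint path /oauth2/token and the access_token response field are assumptions here; consult Vuforia Web API Authentication for the authoritative request format):

# Request a JWT access token using the OAuth2 client-credentials grant.
# The endpoint path and response field name are assumptions; see the
# Vuforia Web API Authentication documentation for the exact values.
token=$(curl -s -XPOST --user "$CLIENT_ID:$CLIENT_SECRET" \
    --data "grant_type=client_credentials" \
    "https://vws.vuforia.com/oauth2/token" | jq -r '.access_token')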
Model Target Web API Workflow
The API performs all generation and training tasks in the cloud and consists of the following high-level asynchronous flow:
- Initiate the dataset creation through an HTTP request.
- Monitor the generation process through the generated UUID.
- Retrieve the dataset files as a .zip.
The full OpenAPI specification is available here.
Domains
The API is accessible at https://vws.vuforia.com
Create a Model Target Dataset
The API allows you to generate Model Targets and Advanced Model Targets with recognition ranges up to 360 degrees. You can additionally train multiple Advanced Model Targets simultaneously.
Model Target Dataset Creation
In this section we walk through a Standard Model Target dataset creation session using a bash shell script and curl to interact with the API.
- First, execute the login request to obtain a JWT access token by supplying the OAuth2 credentials. See Obtain a JWT Token for more information.
- Following a successful login, you can create a Standard Model Target by supplying one CAD model and defining a Guide View for it. Multiple Guide Views can be added in the views array. For setting the values of the guideViewPosition, please see Guide View Position below.

body=$(cat <<EOF
{
    "name": "dataset-name",
    "targetSdk": "10.18",
    "models": [
        {
            "name": "CAD-model-name",
            "cadDataUrl": "https://YourURL.com/CAD-model.glb",
            "views": [
                {
                    "name": "viewpoint-name",
                    "layout": "landscape",
                    "guideViewPosition": {
                        "translation": [ 0, 0, 5 ],
                        "rotation": [ 0, 0, 0, 1 ]
                    }
                }
            ]
        }
    ]
}
EOF
)

curl -XPOST --data "$body" --header 'Content-Type: application/json' --header "Authorization: Bearer $token" "https://vws.vuforia.com/modeltargets/datasets"
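A successful creation request responds with the UUID of the new dataset, which the status and download routes below require. As a minimal sketch (assuming the identifier is returned in a uuid field and that jq is available), the UUID can be captured directly:

# Capture the dataset UUID from the creation response.
# The "uuid" response field name is an assumption; verify it against the OpenAPI spec.
response=$(curl -s -XPOST --data "$body" --header 'Content-Type: application/json' \
    --header "Authorization: Bearer $token" "https://vws.vuforia.com/modeltargets/datasets")
uuid=$(echo "$response" | jq -r '.uuid')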
Monitor the Progress and Download
The time it takes to create a Model Target dataset depends on the configuration and the complexity of the uploaded CAD model.
- To monitor the creation progress, you can call the status API route and retrieve information on:
  - The creation status, resulting in either done, processing, or failed.
  - A timestamp createdAt of when the creation request was received.
  - An estimated remaining processing time eta, when available.
  - A timestamp completedAt of when the creation request completed, only returned if the status resulted in done, failed, or cancelled.
  - Details about the failure if the creation process returns failed. See Failure Cases if the generation returns an error or warning.
Example request to retrieve the creation status:
curl -XGET --header "Authorization: Bearer $token" "https://vws.vuforia.com/modeltargets/datasets/$uuid/status"
Example response when status is processing:
{"status": "processing", "createdAt": "2023-11-22T13:30:35.578Z", "eta": "2023-11-22T14:21:00.231Z"}
Example response when status is done:
{"status": "done", "createdAt": "2023-11-22T13:30:35.578Z", "completedAt": "2023-11-22T14:21:00.231Z"}
Example response when status is failed:
{"status": "failed", "error": {"code": "ERROR", "message": "..."}, "createdAt": "2023-11-22T13:30:35.578Z", "completedAt": "2023-11-22T14:21:00.231Z"}
When the status is “done”, the zipped dataset can be downloaded:
curl -XGET --header "Authorization: Bearer $token" --output dataset.zip "https://vws.vuforia.com/modeltargets/datasets/$uuid/dataset"
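Putting these calls together, a minimal polling loop might look like the following sketch (it reuses the $uuid variable from the creation step and assumes jq for JSON parsing):

# Poll the status route until generation finishes, then download the dataset.
while true; do
    status=$(curl -s -XGET --header "Authorization: Bearer $token" \
        "https://vws.vuforia.com/modeltargets/datasets/$uuid/status" | jq -r '.status')
    [ "$status" != "processing" ] && break
    sleep 60    # generation can take a while; poll sparingly
done

if [ "$status" = "done" ]; then
    curl -XGET --header "Authorization: Bearer $token" --output dataset.zip \
        "https://vws.vuforia.com/modeltargets/datasets/$uuid/dataset"
fi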
Guide View Position
Model Target Datasets can have one or multiple Guide Views which you can let users switch between manually. In the Model Target Web API, Guide Views can be specified via a translation and rotation that represent the position of the virtual camera with respect to the object. This virtual camera follows the glTF Y-up convention with the lens looking towards the negative Z-axis. Depending on your use case, you can change the layout from landscape to portrait. The image below illustrates a landscape layout.
The transformation matrix 'T * R' represents the "camera matrix" (equal to the "inverse view matrix") in OpenGL terms.
- The rotation field defines a 3D rotation quaternion [qx, qy, qz, qw].
- The translation field defines a 3D translational offset in scene units [tx, ty, tz].

For example, a translation of [0, 0, 5] with the identity rotation [0, 0, 0, 1] places the camera five scene units in front of the object, looking at it along the negative Z-axis.
NOTE: The camera view must point towards the object to be detected, from the perspective that the app's user shall approach the model from. Consult the Model Target Guide Views documentation for more information on Guide Views.
For prototyping and debugging, Guide Views can be set up in the Model Target Generator and copied into the JSON request. Open the JSON project file in the MTG’s project folder, identify the relevant Guide View, and copy the translation and rotation objects into your own JSON description.
Create an Advanced Model Target Dataset
The creation of Advanced Model Targets follows much of the same process as above, but with a few key distinctions. For an Advanced Model Target, you will need to specify its views using one of the following five methods:

- recognitionRanges and targetExtent or targetExtentPreset
- recognitionRangesPreset and targetExtent or targetExtentPreset
- userVolume and targetExtent or targetExtentPreset
- guideViewPosition
- partReference
For an introduction to Target Extent, presets, recognition ranges, and User Volumes, please refer to the MTG documentation on Advanced Views. Note that the MTG name for recognition ranges is “Constrained Angle Range”, while the name for User Volume is “Constrained User Positions”. Create, get the status of, and retrieve the Advanced Model Target dataset by using the advancedDatasets routes:
- POST /modeltargets/advancedDatasets
- GET /modeltargets/advancedDatasets/:uuid:/status
- GET /modeltargets/advancedDatasets/:uuid:/dataset
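The bash pattern from the standard workflow carries over unchanged; only the routes differ. For example, reusing the $body and $token variables from above:

# Create an Advanced Model Target dataset. The request body has the same
# structure as before, with views specified by one of the five methods above.
curl -XPOST --data "$body" --header 'Content-Type: application/json' \
    --header "Authorization: Bearer $token" \
    "https://vws.vuforia.com/modeltargets/advancedDatasets"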
Target Extent and presets
The Target Extent is used to define a bounding box that identifies which parts of the model will be used in recognizing the object or discerning between Guide Views.
When the area of interest is the whole object, the FULL_MODEL preset can be used in the view definition:

"targetExtentPreset": "FULL_MODEL"
If only parts of the object are relevant, then the Target Extent can be specified as an arbitrary transformation as follows:
"targetExtent": {
    "translation": [ 0, 1, 0 ],
    "rotation": [ 0, 0, 0, 1 ],
    "scale": [ 1, 1, 1 ]
}
The translation, rotation, and scale transformations are applied to a cube centered at the origin with sides of length 2, that is, a cube with corners at coordinates (-1,-1,-1) and (1,1,1).
For example, a targetExtent with scale set to [1, 1, 1] will produce a 2x2x2 meter cube, centered at the origin if the translation is [0, 0, 0].
The translation, rotation, and scale fields follow the glTF convention.
See also the Target Extent section in the MTG documentation on Guide Views.
Recognition ranges and presets
The recognition ranges are used to define from which angles and orientations a model can be recognized.
The recognition ranges can be specified as presets DOME, FULL_360, or OBSERVATION:

"recognitionRangesPreset": "DOME | FULL_360 | OBSERVATION"

When using the DOME or OBSERVATION preset, an upVector must be defined at the model level in the request body.
The recognition ranges can also be specified in full as follows:
"recognitionRanges": {
    "rotation": [ 0, 0, 0, 1 ],
    "azimRange": [ -3.14, 3.14 ],
    "elevRange": [ -1, 0 ],
    "rollRange": [ -0.2, 0.2 ]
}
The rotation field can be used to change the frame of reference for the azimuth, elevation, and roll angles. A unit rotation quaternion means that the azimuth, elevation, and roll identify rotations around the Y, X, and Z axes respectively. The range values are expressed in radians; for instance, the azimRange of [-3.14, 3.14] above spans the full 360 degrees.
See also the Viewing Angles and Orientation section in Configuring Advanced Guide Views.
OBSERVATION preset
Use this preset to recognize from any angle around an object. For smaller, tabletop objects, this preset results in a similar recognition range as the DOME preset. For larger objects, the OBSERVATION preset will also allow recognition from closer distances. For example, it is difficult to initiate tracking on cars and machinery if they are not fully in view; the OBSERVATION preset solves this by adding a User Volume around the Model Target that enables recognition of the object even when it is not fully in view.
When using the OBSERVATION preset, the targetExtentPreset must be set to FULL_MODEL, and an upVector must be defined at the model level.
- The volume size of the OBSERVATION preset depends on the size of the object. The volume coverage allows the object to be recognized up to a maximum distance at which it still roughly covers half of the screen of a typical handheld device.
- The minimum distance from which an object can be detected also depends on the object size. For a car-sized object, the minimum recognition distance will be around 50cm. The larger the object, the larger the minimum distance. If close-up views are needed, additional views can be specified.
- The height of the volume, and whether you can recognize from above, depends on the object size. Smaller objects’ volumes allow the object to be recognized from above, but larger objects whose volume would exceed 2m (6.5 ft) are capped at 2m. Objects larger than 2m will have the volume adjusted to the object’s height and will require additional views to be specified if they need to be recognizable from above or from up close.
Example request body with the OBSERVATION preset
Define a view with an observation detection range capable of detecting far and close-up views and a target extent of the whole model.
{
    "name": "dataset-name",
    "targetSdk": "10.18",
    "models": [
        {
            "name": "model-name",
            "cadDataBlob": "<Base64-encoded contents of cad-model-file.glb>",
            "upVector": [0, 1, 0],
            "views": [{
                "name": "viewpoint_0000",
                "recognitionRangesPreset": "OBSERVATION",
                "targetExtentPreset": "FULL_MODEL"
            }]
        }
    ]
}
Example request body with the DOME preset
Define a view with a dome detection range and a full model bounding box:
{
    "name": "advanced-dataset-with-presets",
    "targetSdk": "10.18",
    "models": [
        {
            "name": "cad-model-name",
            "cadDataUrl": "https://file.glb",
            "upVector": [ 0, 1, 0 ],
            "views": [
                {
                    "name": "viewpoint-name",
                    "targetExtentPreset": "FULL_MODEL",
                    "recognitionRangesPreset": "DOME"
                }
            ]
        }
    ]
}
Example request body without recognition ranges presets
Define a view with a full 360-degree detection range and a target volume centered at the origin:
{
    "name": "advanced-dataset-without-presets",
    "targetSdk": "10.18",
    "models": [
        {
            "name": "cad-model-name",
            "cadDataUrl": "https://file.glb",
            "views": [
                {
                    "name": "viewpoint-name",
                    "targetExtent": {
                        "translation": [ 0, 0, 0 ],
                        "rotation": [ 0, 0, 0, 1 ],
                        "scale": [ 1, 1, 1 ]
                    },
                    "recognitionRanges": {
                        "rotation": [ 0, 0, 0, 1 ],
                        "azimRange": [ -3.14, 3.14 ],
                        "elevRange": [ -1.57, 1.57 ],
                        "rollRange": [ -0.2, 0.2 ]
                    }
                }
            ]
        }
    ]
}
User Volumes
The User Volume allows defining the possible positions of the user during the AR experience. This way of specifying the Advanced Views is particularly useful and convenient when the user has limited freedom to move during the experience, and is expected to be in a clearly defined location relative to the objects, such as:
- Sitting in the driver’s seat in a car and looking at the dashboard and other instruments in the car.
- Performing a service procedure that requires them to stand in a specific position.
See User Volumes for example configurations in the MTG.
In order to define a User Volume in an Advanced View, a userVolume object can be provided with the following fields:

- volume: Represents the volume where the user (respectively the AR camera) can be during the AR experience. It is defined in the same format as the targetExtent, with translation, rotation, and scale transformations in glTF format.
- minDistanceFromTarget: Defines the relationship between the User Volume and the Target Extent.
  - Set the distance value in meters; it is the minimum distance between the camera device and the model surface from which you want to be able to detect and initiate tracking of the object.
  - If no value is given, it defaults to 20cm.
When defining userVolume, it is mandatory to specify the global upVector of the model.
Example request body with User Volume
This example API request defines a view using userVolume instead of recognitionRanges. In this example the targetExtent is the full model bounding box and the userVolume is a cube of size 2m centered at the origin. The user is expected to always be at least 20cm away from the target.
{
    "name": "advanced-dataset-with-user-volume",
    "targetSdk": "10.18",
    "models": [
        {
            "name": "cad-model-name",
            "cadDataUrl": "https://model.glb",
            "upVector": [0, 1, 0],
            "views": [{
                "name": "viewpoint-name",
                "targetExtentPreset": "FULL_MODEL",
                "userVolume": {
                    "volume": {
                        "translation": [0, 0, 0],
                        "rotation": [0, 0, 0, 1],
                        "scale": [1, 1, 1]
                    },
                    "minDistanceFromTarget": 0.2
                }
            }]
        }
    ]
}
Create advanced views from a Guide View position
The guideViewPosition parameter is used in standard Model Targets to define the view from which the app's user shall approach the model. The same parameter can be used for the same purpose when creating Advanced Model Targets, typically allowing a much broader set of views from which the object can be recognized.
This can be useful, e.g., to create close-up Advanced Views by specifying the camera position without defining a full set of recognition ranges.
See Guide View Position for details on how to specify the guideViewPosition parameter.
NOTE: The guideViewPosition is only used when none of recognitionRanges, recognitionRangesPreset, userVolume, and partReference are specified.
Example request body with Guide View position
{
    "name": "advanced-dataset-from-guide-view",
    "targetSdk": "10.18",
    "models": [
        {
            "name": "cad-model-name",
            "cadDataUrl": "https://model.glb",
            "upVector": [0, 1, 0],
            "views": [{
                "name": "viewpoint-name",
                "guideViewPosition": {
                    "translation": [0, 0, 0],
                    "rotation": [0, 0, 0, 1]
                }
            }]
        }
    ]
}
Create views from a part of the CAD model
It is possible to specify the views for an Advanced Model Target by referring to one of its parts. All the views from which this part is visible will then be used for detection, which can range from close-up views to views of the whole object.
NOTE: Specifying a view from a CAD model part requires the simplify parameter to be set to never. Otherwise, the part might not be preserved during simplification.
A targetExtentPreset must not be specified. Use one of the following part selectors:

- partName: Matches the name of a part in the model.
- partIdPath: Matches the PVZ part id path.
- occurrenceId: Matches the PTC occurrence ID stored in extensions.PTC_NODE_ASSEMBLY_NAME.occurence. This is only available in models extracted from the Vuforia Procedure Editor.
Examples of partReference in the view
Use only one part selector method to specify parts.
{
    "name": "viewpoint_0000",
    "partReference": {
        "partName": "0000020319, FUEL SYSTEM, A.2 (Design)",
        "partIdPath": "/0/1/2/3",
        "occurrenceId": "3XVs96VDtUK9fBPxA"
    }
}
Example request body with Part Reference
{
    "name": "advanced-dataset-with-part-reference",
    "targetSdk": "10.18",
    "models": [
        {
            "name": "cad-model-name",
            "cadDataUrl": "https://model.glb",
            "upVector": [0, 1, 0],
            "views": [{
                "name": "viewpoint-name",
                "partReference": {
                    "partName": "0000020319, FUEL SYSTEM, A.2 (Design)"
                }
            }]
        }
    ]
}
Prototyping and debugging
For prototyping and debugging, Advanced Views can be set up in the Model Target Generator and copied into the JSON request. Open the JSON project file in the MTG’s project folder, identify the relevant view, and copy the targetExtent, recognitionRanges, or userVolume (depending on your setup) into your own JSON description.
Advanced Model Target with Multiple Views
There may be use cases in which multiple detection ranges or multiple CAD models are desired. To accomplish this, add additional views and/or models to your dataset creation request body. Just remember that the recognition ranges should not overlap on the same CAD model.
Example request body for an Advanced Model Target with multiple CAD models:
{
    "name": "advanced-dataset-multi-model-name",
    "targetSdk": "10.18",
    "models": [
        {
            "name": "cad-model-1-name",
            "upVector": [ 0, 1, 0 ],
            "cadDataUrl": "https://file-1.glb",
            "views": [
                {
                    "name": "viewpoint-name",
                    "targetExtentPreset": "FULL_MODEL",
                    "recognitionRangesPreset": "DOME"
                }
            ]
        },
        {
            "name": "cad-model-2-name",
            "upVector": [ 0, 1, 0 ],
            "cadDataUrl": "https://file-2.glb",
            "views": [
                {
                    "name": "viewpoint-name",
                    "targetExtentPreset": "FULL_MODEL",
                    "recognitionRangesPreset": "DOME"
                }
            ]
        }
    ]
}
Uploading CAD Models
CAD models can be provided in two different ways: either by embedding the model data as a Base64-encoded string in the cadDataBlob field, or by specifying a URL in the cadDataUrl field.

The cadDataBlob field is appropriate for small CAD models (less than 20MB), while the URL can be used for larger models.
The URL must be accessible from the Model Target Web API, so it is recommended to use a signed URL with an expiration date. Signed URLs can be created easily for AWS S3 objects or Azure Storage objects.
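For example, with the AWS CLI a time-limited signed URL can be generated and passed in cadDataUrl (bucket and object names here are placeholders), and small models can be Base64-encoded for cadDataBlob:

# Generate a presigned URL for a CAD model stored in S3, valid for one hour.
modelUrl=$(aws s3 presign s3://my-bucket/CAD-model.glb --expires-in 3600)

# Alternatively, embed a small model (< 20MB) directly as a Base64 blob.
# (GNU coreutils shown; on macOS use "base64 -i CAD-model.glb".)
cadBlob=$(base64 -w0 CAD-model.glb)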
Supported CAD Model Formats
The API preferentially supports reading glTF 2.0, provided as a single .glb file or as a zipped glTF. In addition, the following file formats are also supported if the cadDataFormat is specified:

| File Format | cadDataFormat |
| --- | --- |
| glTF 2.0 as .glb file or as zipped glTF | No specification needed. |
| Creo View (.pvz), Collada (.dae), FBX (.fbx), IGES (.igs, .iges), Wavefront (.obj), STL (.stl, .sla), VRML (.wrl, .vrml) | The corresponding cadDataFormat must be specified. |

NOTE: Upload .obj and .fbx files with their texture files by zipping them together. The input for cadDataFormat is still OBJ or FBX.
Additional Options
Target SDK
The targetSdk field must be used to specify the expected (minimum) SDK version used to load the generated dataset.

Based on the target SDK, the API will enable specific features and optimizations such as Draco compression and runtime Guide View rendering.
Tracking Optimization
The optimizeTrackingFor field can be used to improve tracking for specific object types and use cases. Set it to ar_controller, low_feature_objects, or default. For information on the different modes’ benefits, please refer to Optimizing Model Target Tracking.

TIP: The tracking optimization mode can be specified independently for each model.

NOTE: optimizeTrackingFor replaces trackingMode and motionHint from targetSdk 10.9 and later.
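As a brief sketch of where the field goes, it is set per model object in the dataset creation request (the surrounding values follow the earlier examples):

"models": [
    {
        "name": "cad-model-name",
        "cadDataUrl": "https://model.glb",
        "optimizeTrackingFor": "low_feature_objects",
        "views": [ ... ]
    }
]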
Tracking Mode (deprecated in targetSdk 10.9 and later)
The trackingMode field can be used to improve tracking on specific object types. Set it to car, scan, or leave it as default.
TIP: The tracking mode can be specified independently for each model.
Motion Hint (deprecated in targetSdk 10.9 and later)
The motionHint field can be used to improve tracking for static vs. moving (adaptive) objects.

The motionHint field can be set for each model in the dataset creation request to either static (default) or adaptive.
Up Vector
If the model has a natural up vector, meaning that the object is always expected to be upright (a factory machine, a table-top object, a car, etc.), it is strongly recommended to specify an upVector field; doing so will greatly improve tracking performance.

The upVector can be set independently for each model in the request body. The upVector is mandatory when using the DOME or OBSERVATION recognition range preset.
Automatic Coloring
3D models without photorealistic textures can have improved detection results if automatic coloring is enabled.

The automaticColoring field can be set for each model in the request and can have the values never, always, or auto. By default, automatic coloring is disabled (never).

See also Automatic Coloring.

NOTE: The realisticAppearance flag cannot be enabled together with automatic coloring.
Realistic Appearance
3D models with photorealistic textures can have improved detection results if realisticAppearance is set to true.

Conversely, if the 3D model is single-colored or contains colors that don’t match the real-life object, set realisticAppearance to false in order to improve detection results. From targetSdk 10.9 and later, the realisticAppearance field can be omitted or set to auto in order to let Vuforia Engine automatically analyze the model and use an appropriate mode.

- For targetSdk 10.9 and later, the realisticAppearance field can be specified independently for each model and is of type string (values true, false, or auto). When not specified, it defaults to auto.
- For targetSdk 10.8 and earlier, the realisticAppearance field must be specified at the root level of the request and is of type boolean (values true or false).

NOTE: The realisticAppearance field cannot be enabled together with automatic coloring.
Model Scale
The Vuforia Engine SDK requires models at real-life scale, and the input model units are assumed to be meters.

If the 3D model needs to be adjusted to its real-life scale, each model object in the request body can specify a uniformScale parameter, a scale factor used to preprocess the 3D model.
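For instance, for a hypothetical model authored in millimeters, a uniformScale of 0.001 converts it to the expected meters:

"models": [
    {
        "name": "cad-model-name",
        "cadDataUrl": "https://model.glb",
        "uniformScale": 0.001,
        "views": [ ... ]
    }
]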
Simplify
The API allows simplification of CAD models as part of the dataset creation. Simplification produces optimized models for high-performance and robust AR object tracking experiences.

The simplify field can be set for each model in the request and can have the values never, always, or auto. By default, simplification is set to auto.

At the moment, only simplification of a single model per request is supported.
Data Retention
The API servers and the processing servers are hosted in the AWS Cloud eu-west-1 region.
The uploaded CAD models and data are transferred to the cloud using TLS 1.2 encryption and then stored using AES-256 encryption at rest.
All data associated with a Model Target dataset can be removed by issuing a DELETE request for a specific dataset UUID.
For example, a standard Model Target can be deleted as follows using curl:
curl -XDELETE --header "Authorization: Bearer $token" "https://vws.vuforia.com/modeltargets/datasets/$uuid"
See Vuforia Engine Cloud Security for more information.
Troubleshooting
For all 4xx and 5xx HTTP status responses, the API returns additional information in the response body that could help with troubleshooting the issue.
For example, if the request body of the dataset creation request contained an invalid recognitionRanges field, the response will be:
{
    "code": "BAD_REQUEST",             <- level 1
    "message": "Validation error for request MyModelTarget",
    "target": "MyModelTarget",
    "details": [
        {
            "code": "VALIDATION_ERROR",    <- level 2
            "message": "recognitionRangesPreset must be one of DOME, FULL_360"
        }
    ]
}
Unknown errors caused by issues on the MTG API will return a generic error response:
{
    "error": {
        "code": "ERROR",
        "message": "Internal Server Error"
    }
}
If such an error persists, contact us via the Support Center.
Other error codes are listed in the table below.
| level 1 | level 2 | HTTP_CODE | Details |
| --- | --- | --- | --- |
| BAD_REQUEST | VALIDATION_ERROR | 400 | There were problems with the general format of the request. |
| AUTHENTICATION_FAILED | n/a | 401 | The authentication failed. |
| FORBIDDEN | n/a | 403 | User is not allowed to perform the operation. |
| UNPROCESSABLE_ENTITY | TRAINING_ALLOWANCE_EXCEEDED | 422 | User has reached the total number of allowed trainings. |
| UNPROCESSABLE_ENTITY | CONCURRENT_TRAININGS_EXCEEDED | 422 | User has reached the number of allowed concurrent trainings. Wait until some trainings have completed before starting new ones. |
| ERROR | n/a | 5xx | Internal error. Please contact support. |
Additionally, when retrieving the status of a dataset, the response can contain detailed errors or warnings that could help with identifying problematic CAD models (such as symmetric ones), objects that cannot be well discriminated, etc. See Monitor the Progress and Download.
Failure Cases
Sometimes, the Model Target generation process returns an error or a warning for various reasons. An error means the generation failed. A warning is returned when the generation succeeded but there might be some issues worth taking into consideration when using the Model Target.
An example HTTP response for a warning:
{
    "uuid": "MyModelTarget",
    "status": "done",
    "warning": {
        "code": "WARNING",                         // <------ level 1
        "message": "Warning after creating dataset",
        "target": "MyModelTarget",
        "details": [
            {
                "code": "LOW_RECOGNITION_QUALITY", // <------ level 2
                "message": "The processed model appears to have substandard recognition quality. Following targets are affected: MyModelTarget.",
                "innerError": {
                    "targets": [{"model": "<m>"}, {"model": "<m>", "view": "<v>", "state": "<s>"}], // optional, depending on the type of warning
                    "code": "SYMMETRIES_OR_AMBIGUITIES" // <------ level 3: optional
                }
            }
        ]
    }
}
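Since warnings nest several levels deep, it can be convenient to extract just the codes when scripting. A small sketch, assuming jq and the status response shape shown above:

# List level-2 warning codes from a dataset status response.
curl -s -XGET --header "Authorization: Bearer $token" \
    "https://vws.vuforia.com/modeltargets/datasets/$uuid/status" \
    | jq -r '.warning.details[]?.code'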
The following table summarizes the possible errors that can occur during generation.
| level 1 | level 2 | level 3 | fields | description |
| --- | --- | --- | --- | --- |
| ERROR | INTERNAL_ERROR | | | An internal error occurred. |
| | UNKNOWN_ERROR | | | An unknown error occurred. |
| | PROCESSING_FAILED | | targets | Processing failed. |
| | | INVALID_MODELVIEWS | targets | Some views are broken. |
| | | TOO_MANY_EMPTY_VIEWS | targets | The view definitions contain too many camera positions from which the object is not visible. |
| | | TOO_MANY_FAR_VIEWS | targets | The view definitions contain too many camera positions from which the object appears small. |
| | | TOO_MANY_DISJOINT_VIEWS | targets | Defining too many disjoint views on models requires excessive computational resources. |
| | | MODELS_TOO_LARGE | targets | The models are too large and need too many resources. |
| | | TOO_MUCH_OVERLAP | targets | The User Volume does not contain any possible camera locations which are at the required distance from the CAD geometry. Please check parameter settings for viewing volumes. |
| | | NO_GEOMETRY_IN_TV | targets | The Target Extent contains no mesh point. Please reposition the Target Extent volume box. |
| | INPUT_DATA_ERROR | | targets | Some input data is invalid. |
| | | INVALID_RANGES | targets | The definitions of the ranges are invalid. Please check for angle or distance ranges outside the allowed bounds or reversed min-max ranges. |
| | | NEGATIVE_OR_ZERO_VOLUMES | targets | The definitions of the target extents or user volumes are invalid. Please check for negative volumes or volumes which are completely flat. |
| | | BAD_MODELS | targets | The glTF of the models cannot be loaded. |
| | | INVALID_SEED_POINT | targets | The seed point (representative pose) is outside the boundaries of the User Volumes. |
| | | INVALID_PART_IDENTIFIER | targets | The part identifier is invalid (different types given or empty string). |
| | | PART_NOT_FOUND | targets | No node matched the part identifier. |
| | | PART_IDENTIFIER_NOT_UNIQUE | targets | The part identifier does not refer to a unique node. |
| WARNING | PROBLEMATIC_MODELVIEWS | | targets | A problem was detected with some models or view definitions, which may lead to failed processing or lower recognition performance. |
| | | SMALL_MODEL_DIMENSIONS | targets | Some objects or target extents are too small and will probably not work for tracking or will fail during training. |
| | | MANY_EMTPY_VIEWS | targets | Some view definitions contain too many camera positions from which the object is not visible, but the training still succeeded with the remaining views. |
| | | MANY_FAR_VIEWS | targets | Some view definitions contain too many camera positions from which the object appears small, but the training still succeeded with the remaining views. |
| | | HIGH_MEMORY_CONSUMPTION | targets | Defining many disjoint views on models increases memory consumption at runtime. |
| | | MUCH_OVERLAP | targets | The User Volume contains only few possible camera locations which are at the required distance from the CAD geometry. Tracking will be effective from few views only. |
| | LOW_RECOGNITION_QUALITY | | targets | The processed model appears to have substandard recognition quality. |
| | | CONFUSIONS | targets | Some models or views could be confused with each other. |
| | | SYMMETRIES_OR_AMBIGUITIES | targets | Some models or views might have rotational or translational symmetries. |
| | | MANY_DISJOINT_VIEWS | targets | Defining many disjoint views on models can hurt recognition quality. |