Change log

With the release of Analytics Data Format V1, several changes have been made compared to the previous beta version. The new release tightens type definitions, renames fields for clarity, and introduces a clearer separation between frame-level, track-level and object snapshot data. This document outlines the key changes and provides guidance for migrating from the beta formats to version 1.0.0.

The two beta formats, frame and consolidated track, previously defined in the monolithic adf.json, have been replaced with three new formats:

Schema                        Name             Description
frame_v1.0.0.json             Frame            Represents a scene for a specific point in time
object_track_v1.0.0.json      Object Track     Represents a single object's consolidated data over some time span
object_snapshot_v1.0.0.json   Object Snapshot  Represents a single object snapshot taken at a specific point in time

Each of the new formats is defined in its own schema file with its own $id and version.

The following sections summarize the changes made in each format.

Frame

removed

  • the outer frame key is dropped in favor of a flatter format.

changed

added

  • v1 has a new required field channel_id, which specifies which source the metadata comes from (e.g. which video sensor)

Detections

removed

  • timestamp (redundant as it is supplied on frame level)
  • velocity (not as accurate as world_velocity)
  • classes (only sending best alternative)

changed

  • image to image_id (no longer the image itself but a reference to the object_snapshot_v1 message that contains the image)
  • track_id to object_track_id
  • world_position's attribute r to distance
  • world_velocity's type to GeoDirection to better describe the values.
  • class (for class changes, see the Classification section below)

TrackEvents

changed

  • DeleteOperation to TrackEnded (field id changes to object_track_id).
  • RenameOperation to Rename (fields from/to change to from_id/to_id).

Object snapshot

In v1 there is a new message containing the images from the object_snapshot feature, which reduces the data size of the frame-by-frame format. See the object_snapshot_v1 format.
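Because v1 detections reference images by id instead of embedding them, a consumer has to join frames with the separately delivered object snapshot messages. A minimal sketch in Python; the function names and the in-memory message shapes are illustrative assumptions, not part of the schemas:

```python
def index_snapshots(snapshots):
    """Index parsed object_snapshot_v1 messages by their id field."""
    return {snap["id"]: snap for snap in snapshots}

def resolve_image(detection, snapshot_index):
    """Return the base64 image data a detection's image_id points to,
    or None if the detection has no image or the snapshot is unknown."""
    snap = snapshot_index.get(detection.get("image_id"))
    return None if snap is None else snap["data"]

# Hypothetical parsed messages, shaped like the examples below.
snapshots = [{"id": "ABC123", "object_track_id": "1", "data": "base64..."}]
detection = {"object_track_id": "1", "image_id": "ABC123"}

index = index_snapshots(snapshots)
print(resolve_image(detection, index))  # prints base64...
```

Note that snapshots may arrive on a different cadence than frames, so in practice the index would be filled incrementally as snapshot messages come in.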

Object track

removed

  • the outer consolidated key is dropped in favor of a flatter format.
  • end_reason

changed

added

  • v1 has a new required field channel_id, which specifies which source the metadata comes from (e.g. which video sensor)
  • the new field parts is an array of references to the object_track(s) involved in this consolidation

Time And Position

For v1 the observations are reduced in size and contain only position-related parameters and the corresponding timestamps.

removed

  • class
  • classes
  • image
  • track_id
  • velocity (only world_velocity is kept)

changed

  • world_velocity and world_position (see the changes under Detections)

Classification

changed

  • Base classes have the following two changes
    • the Face class has changed to Head, and the head is tracked even if the face is not visible
      • the Head class has the field face_visible; a value of 0.3 or greater is equivalent to the Face classification in beta
    • Vehicle is split into Vehicle and the new VehicleOther
      • VehicleOther indicates that the detection is a vehicle but not one of the other vehicle types (Car, Bus, Truck)
      • Vehicle is kept for sensor types that do not distinguish between the subtypes of vehicles

added

  • the Human class has gotten a new field carries_bag, which indicates that a bag is being carried
  • the vehicle classes Car, Bus, Truck and Vehicle (but not VehicleOther) have gotten the following fields:
    • license_plate_id, which is a reference to a LicensePlate detection
    • for convenience, a license_plate field with the fields plate_number and country_code, which can also be fetched from the referenced LicensePlate object
  • the LicensePlate class has gotten a reference back to the related Vehicle class in the field vehicle_id
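Code that depended on the beta Face classification can recover the same behavior from v1 Head detections via the documented face_visible equivalence. A hedged sketch; the helper and constant names are ours, only the 0.3 cutoff comes from this change log:

```python
# Per the change log: a Head with face_visible >= 0.3 is equivalent
# to the beta Face classification.
FACE_VISIBLE_THRESHOLD = 0.3

def is_beta_face(cls):
    """True if a v1 class dict corresponds to the beta Face class."""
    return (
        cls.get("type") == "Head"
        and cls.get("face_visible", 0.0) >= FACE_VISIBLE_THRESHOLD
    )

print(is_beta_face({"type": "Head", "face_visible": 0.5}))  # True
print(is_beta_face({"type": "Head", "face_visible": 0.1}))  # False
```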

Examples

Concrete examples of the changes are shown below for each of the three new formats.

Frame example

Beta
{
  "frame": {
    "timestamp": "2025-01-01T00:00:00.000000Z",
    "observations": [
      {
        "track_id": "1",
        "bounding_box": {
          "bottom": 0.89332,
          "left": 0.13332,
          "right": 0.80004,
          "top": 0.59942
        },
        "class": {
          "type": "Human",
          "score": 0.6
        },
        "image": {
          "data": "<base-64 encoded image>",
          "bounding_box": {
            "bottom": 0.937405,
            "left": 0.033312,
            "right": 0.900048,
            "top": 0.555335
          }
        }
      }
    ],
    "operations": [
      {
        "type": "DeleteOperation",
        "id": "7"
      },
      {
        "type": "RenameOperation",
        "from": "2",
        "to": "1"
      }
    ]
  }
}
v1
{
  "channel_id": 1,
  "timestamp": "2025-01-01T00:00:00.000000Z",
  "detections": [
    {
      "bounding_box": {
        "bottom": 0.89332,
        "left": 0.13332,
        "right": 0.80004,
        "top": 0.59942
      },
      "class": {
        "type": "Human",
        "score": 0.6
      },
      "image_id": "ABC123",
      "object_track_id": "1"
    }
  ],
  "track_events": [
    {
      "type": "TrackEnded",
      "object_track_id": "7"
    },
    {
      "type": "Rename",
      "from_id": "2",
      "to_id": "1"
    }
  ]
}
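The renames shown in the example can be applied mechanically when migrating stored beta frames. A minimal migration sketch, assuming the caller supplies the channel_id and a mapping from beta track_id to the snapshot id that replaces each embedded image; this is an illustration of the field mapping, not an official migration tool:

```python
def beta_frame_to_v1(beta, channel_id, image_ids):
    """Convert a beta frame dict to the flatter v1 frame shape.

    image_ids maps a beta track_id to an object snapshot id, since v1
    replaces the embedded image with an image_id reference.
    """
    inner = beta["frame"]  # v1 drops this outer key

    detections = []
    for obs in inner.get("observations", []):
        det = {
            "bounding_box": obs["bounding_box"],
            "class": obs["class"],
            "object_track_id": obs["track_id"],  # renamed from track_id
        }
        if obs["track_id"] in image_ids:
            det["image_id"] = image_ids[obs["track_id"]]
        detections.append(det)

    track_events = []
    for op in inner.get("operations", []):
        if op["type"] == "DeleteOperation":
            track_events.append(
                {"type": "TrackEnded", "object_track_id": op["id"]}
            )
        elif op["type"] == "RenameOperation":
            track_events.append(
                {"type": "Rename", "from_id": op["from"], "to_id": op["to"]}
            )

    return {
        "channel_id": channel_id,
        "timestamp": inner["timestamp"],
        "detections": detections,
        "track_events": track_events,
    }
```

Fields dropped in v1 (timestamp, velocity and classes on the detection level) are simply not copied over.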

Object Snapshot

In the new format, instead of embedding the image data inside the frame or object track, a separate object snapshot format is used.

v1
{
  "object_track_id": "1",
  "channel_id": 1,
  "id": "ABC123",
  "timestamp": "2025-01-01T00:00:00.000000Z",
  "data": "base64...",
  "class": {
    "score": 0.6,
    "type": "Human"
  },
  "crop_box": {
    "bottom": 0.99332,
    "left": 0.03332,
    "right": 0.90004,
    "top": 0.49942
  }
}

Object Track

Beta
{
  "consolidated": {
    "id": "f4211d1b-7118-4e02-a3b2-4fabf69915cc",
    "start_time": "2025-03-08T09:00:19.320111Z",
    "end_time": "2025-03-08T09:00:30.871211Z",
    "duration": 11.5511,
    "classes": [
      {
        "type": "Human",
        "score": 0.6,
        "upper_clothing_colors": [
          {
            "name": "White",
            "score": 0.78
          }
        ],
        "lower_clothing_colors": [
          {
            "name": "Blue",
            "score": 0.59
          }
        ]
      }
    ],
    "image": {
      "timestamp": "2025-03-08T09:00:19.320111Z",
      "bounding_box": {
        "left": 0.033312,
        "top": 0.555335,
        "right": 0.900048,
        "bottom": 0.937405
      },
      "data": "/9j/4AAQSkZJRgABAQAAAQABAAD/2wBD"
    },
    "observations": [
      {
        "timestamp": "2025-03-08T09:00:19.320111Z",
        "bounding_box": {
          "left": 0.13332,
          "top": 0.59942,
          "right": 0.80004,
          "bottom": 0.89332
        }
      },
      {
        "timestamp": "2025-03-08T09:00:30.871211Z",
        "bounding_box": {
          "left": 0.15332,
          "top": 0.61942,
          "right": 0.78122,
          "bottom": 0.88621
        }
      }
    ]
  }
}
v1
{
  "channel_id": 1,
  "classes": [
    {
      "type": "Human",
      "score": 0.6,
      "carries_bag": true,
      "upper_clothing_colors": [
        {
          "name": "White",
          "score": 0.78
        }
      ],
      "lower_clothing_colors": [
        {
          "name": "Blue",
          "score": 0.59
        }
      ]
    }
  ],
  "duration": 11.5511,
  "end_time": "2025-03-08T09:00:30.871211Z",
  "id": "f4211d1b-7118-4e02-a3b2-4fabf69915cc",
  "image": {
    "timestamp": "2025-03-08T09:00:19.320111Z",
    "crop_box": {
      "left": 0.033312,
      "top": 0.555335,
      "right": 0.900048,
      "bottom": 0.937405
    },
    "data": "/9j/4AAQSkZJRgABAQAAAQABAAD/2wBD",
    "id": "ABC123"
  },
  "path": [
    {
      "timestamp": "2025-03-08T09:00:19.320111Z",
      "bounding_box": {
        "left": 0.13332,
        "top": 0.59942,
        "right": 0.80004,
        "bottom": 0.89332
      }
    },
    {
      "timestamp": "2025-03-08T09:00:30.871211Z",
      "bounding_box": {
        "left": 0.15332,
        "top": 0.61942,
        "right": 0.78122,
        "bottom": 0.88621
      }
    }
  ],
  "start_time": "2025-03-08T09:00:19.320111Z"
}
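The object track migration follows the same pattern as the frame migration: drop the outer key, rename fields, and supply the new identifiers. A hedged sketch, assuming the caller provides the channel_id and the snapshot id for the embedded image; field names follow this change log, and the code is illustrative rather than an official tool:

```python
def beta_track_to_v1(beta, channel_id, image_snapshot_id):
    """Convert a beta consolidated track dict to the v1 object track shape."""
    inner = beta["consolidated"]  # v1 drops this outer key

    image = dict(inner["image"])
    image["crop_box"] = image.pop("bounding_box")  # renamed in v1
    image["id"] = image_snapshot_id  # new reference to the object snapshot

    return {
        "channel_id": channel_id,
        "id": inner["id"],
        "start_time": inner["start_time"],
        "end_time": inner["end_time"],
        "duration": inner["duration"],
        "classes": inner["classes"],
        "image": image,
        "path": inner["observations"],  # renamed from observations
        # end_reason is removed in v1 and is intentionally not copied
    }
```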