Machine Learning Assisted Video Object Tracking

Our video annotation solution combines machine learning with human-generated labels to track objects moving through space and time up to 100 times faster than human-only solutions.

Here’s how it works

On the first frame of a video, a human labeler annotates the objects in question. Functionally, this step is like a typical image annotation workflow. What makes this solution truly powerful is what comes next:

Using a deep learning ensemble model, our solution predicts where every annotated object moves in subsequent frames. Each individual label persists, even when there are dozens of instances of the same class. Instead of relabeling the entire image from scratch, a human labeler simply corrects the annotation where necessary, dragging or resizing the persisted label so it fits squarely around the object.
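
To make the propagate-and-correct workflow concrete, here is a minimal sketch of how persistent labels and human corrections could be modeled in code. The names (PersistedLabel, propagate_labels, apply_corrections) and the stand-in prediction function are illustrative assumptions, not our platform's API.

from dataclasses import dataclass, replace
from typing import Callable, Dict, List, Tuple

Box = Tuple[float, float, float, float]  # (x, y, width, height) in pixels

@dataclass(frozen=True)
class PersistedLabel:
    instance_id: int   # stays constant across frames, even with many instances of one class
    class_name: str
    box: Box

def propagate_labels(labels: List[PersistedLabel],
                     predict_next_box: Callable[[Box], Box]) -> List[PersistedLabel]:
    # Carry every label forward to the next frame using the model's predicted box.
    return [replace(label, box=predict_next_box(label.box)) for label in labels]

def apply_corrections(labels: List[PersistedLabel],
                      corrections: Dict[int, Box]) -> List[PersistedLabel]:
    # Overwrite only the boxes a human labeler dragged or resized; leave the rest untouched.
    return [replace(label, box=corrections.get(label.instance_id, label.box)) for label in labels]

# Frame 1: a human annotates two cars; later frames reuse the same instance IDs.
frame_1 = [PersistedLabel(1, "car", (40.0, 60.0, 120.0, 80.0)),
           PersistedLabel(2, "car", (300.0, 50.0, 110.0, 75.0))]
frame_2 = propagate_labels(frame_1, lambda b: (b[0] + 5, b[1], b[2], b[3]))  # stand-in for the model
frame_2 = apply_corrections(frame_2, {2: (310.0, 48.0, 112.0, 76.0)})        # labeler nudges one box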

Customizable ontology

Our solution allows you to create an ontology of up to 255 classes, specific to your use case. We also support multiple instances of each class, so you can label everything you need.
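
The snippet below is a rough illustration of what a use-case-specific ontology might look like; the field names are hypothetical and do not reflect our actual schema, only the 255-class limit and per-class instance support described above.

# A hypothetical ontology for an autonomous-driving use case.
ontology = {
    "name": "autonomous-driving",
    "classes": [
        {"id": 0, "name": "car",           "allow_multiple_instances": True},
        {"id": 1, "name": "pedestrian",    "allow_multiple_instances": True},
        {"id": 2, "name": "traffic_light", "allow_multiple_instances": True},
    ],
}

# The platform supports up to 255 classes per ontology.
MAX_CLASSES = 255
assert len(ontology["classes"]) <= MAX_CLASSES, "ontology exceeds the 255-class limit"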

Persistent object tracking

Once an annotator has labeled each object on the first frame, the deep learning ensemble model persists those labels onto subsequent frames. This frees up annotators to make small corrections instead of labeling every object again, making our solution up to 100 times faster than human-only approaches.
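
One simple way an ensemble could combine its members' per-frame predictions is to average them, as in the sketch below. The tracker callables here are placeholders standing in for the ensemble members; this is an illustration of the general idea, not our model.

from statistics import mean
from typing import Callable, Dict, List, Sequence, Tuple

Box = Tuple[float, float, float, float]  # (x, y, width, height)
Tracker = Callable[[object, Box], Box]   # frame + previous box -> predicted box

def ensemble_predict(frame, box: Box, trackers: Sequence[Tracker]) -> Box:
    # Average the next-frame box predicted by each member of the ensemble.
    predictions = [track(frame, box) for track in trackers]
    return tuple(mean(coords) for coords in zip(*predictions))

def track_through_video(frames, initial_boxes: Dict[int, Box],
                        trackers: Sequence[Tracker]) -> List[Dict[int, Box]]:
    # Each annotated instance keeps its identity; only its box moves from frame to frame.
    boxes = dict(initial_boxes)
    per_frame = []
    for frame in frames[1:]:
        boxes = {iid: ensemble_predict(frame, box, trackers) for iid, box in boxes.items()}
        per_frame.append(dict(boxes))
    return per_frame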

Drag and drop video annotation

We automatically parse your video into frames and reassemble it when the annotation is complete. You provide the video URL and customize your ontology, and our platform takes care of the rest.
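
Behind the scenes, splitting a video into frames and re-encoding it can be done with standard tooling. The sketch below uses ffmpeg from Python purely to illustrate the general approach; the file names and frame rate are placeholder assumptions, and this is not our production pipeline.

import subprocess
from pathlib import Path

def split_video(video_path: str, frames_dir: str) -> None:
    # Decode every frame of the source video into numbered PNG files.
    Path(frames_dir).mkdir(parents=True, exist_ok=True)
    subprocess.run(["ffmpeg", "-i", video_path, f"{frames_dir}/%06d.png"], check=True)

def reassemble_video(frames_dir: str, output_path: str, fps: int = 30) -> None:
    # Re-encode the (now annotated) frames back into a single video.
    subprocess.run(["ffmpeg", "-framerate", str(fps), "-i", f"{frames_dir}/%06d.png",
                    "-c:v", "libx264", "-pix_fmt", "yuv420p", output_path], check=True)

split_video("source.mp4", "frames")
# ... annotation happens on the extracted frames ...
reassemble_video("frames", "annotated.mp4", fps=30)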

Trusted by today’s leading brands

Interested in trying out our Machine Learning assisted Video Object Tracking solution?
Talk with one of our experts and we’ll help you get set up.

Related Resources

Featured Blog: Introducing Machine Learning Assisted Video Object Tracking

Blog: Introducing Instance-Based Pixel Labeling

eBook: What We Learned Labeling 1 Million Images