Spectacular AI

What we do

Spectacular AI is a spin-off from the research groups of Profs. Arno Solin (machine learning), Juho Kannala (computer vision), and Esa Rahtu (computer vision), and is backed by funding from Business Finland.

We offer next-generation visual-inertial tracking solutions that enable real-time relative position estimation by fusing camera and inertial measurement unit (IMU) data. Our solutions are independent of platform vendor services and can run on custom hardware such as drones and robots, as well as on smartphones and tablets.

AR/VR Applications

Precise real-time position information is a prerequisite for AR/VR applications. Our methods are not confined to small spaces and do not require mapping or external hardware.

Smart Devices

Our visual-inertial odometry and SLAM (simultaneous localisation and mapping) methods provide full control of the estimation stack and map, which is not available in vendor-locked solutions.

Custom Hardware

Our solutions can run on custom hardware—be it a robot, drone, vehicle, or wearable. We offer both CPU-only and GPU-accelerated versions of the methods.

Technology

Visual-inertial navigation done right

Whether your challenge is to track the movement of vehicles, people, or autonomous things, you will need to understand the device's relative location in the surrounding 3D environment.

While computer vision can help solve these challenges, it alone provides neither absolute scale nor reliable performance in real-life environments: varying lighting conditions, large numbers of moving objects, and feature-poor textures are all hard to handle. This is why information fusion of visual and inertial sensors—two orthogonal information sources—is crucial.
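
As a rough illustration of the principle, the deliberately simplified 1-D C++ sketch below shows how the two sources complement each other: the IMU dead-reckons at a high rate (and drifts), while slower visual position fixes pull the estimate back. All numbers and types here are made up for the example; this is not our production algorithm, which estimates full 6-DoF pose and sensor biases.

    // Simplified 1-D sketch of loosely coupled visual-inertial fusion.
    // Illustrative only: high-rate inertial prediction, low-rate visual
    // correction with a Kalman-style gain. Velocity covariance and
    // cross terms are deliberately omitted to keep the example short.
    #include <cstdio>

    struct FusedState {
        double pos = 0.0, vel = 0.0; // position [m], velocity [m/s]
        double posVar = 0.0;         // scalar position variance

        // IMU step: integrate measured acceleration; uncertainty grows.
        void predict(double accel, double dt, double accelNoiseVar) {
            pos += vel * dt + 0.5 * accel * dt * dt;
            vel += accel * dt;
            posVar += accelNoiseVar * dt * dt;
        }

        // Visual step: fuse an absolute position fix, weighting the two
        // sources by their uncertainties.
        void correct(double visualPos, double visualVar) {
            double gain = posVar / (posVar + visualVar);
            pos += gain * (visualPos - pos);
            posVar *= 1.0 - gain;
        }
    };

    int main() {
        FusedState fused;
        const double trueAccel = 0.1; // simulated ground-truth acceleration
        const double imuBias = 0.02;  // uncorrected bias -> inertial drift
        double t = 0.0;
        for (int i = 0; i < 500; ++i) {   // 100 Hz IMU for 5 seconds
            t += 0.01;
            fused.predict(trueAccel + imuBias, 0.01, 0.5);
            if (i % 10 == 9) {            // 10 Hz visual position fix
                double truePos = 0.5 * trueAccel * t * t;
                fused.correct(truePos, 1e-4);
            }
        }
        std::printf("fused position %.3f m vs. true %.3f m\n",
                    fused.pos, 0.5 * trueAccel * t * t);
    }

Run on its own, the inertial path would drift by roughly 0.25 m over the 5 seconds; with the visual corrections, the fused estimate stays within a few centimetres of the truth.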

Our methods are learning-based and adapt to uncertainties in the observed data. We perform principled probabilistic inference in which visual and inertial data contribute as equal parties. Our approach is built from first principles and implemented using state-of-the-art computer vision and machine learning methods.
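
In generic terms, this kind of fusion can be phrased as recursive Bayesian filtering (standard state-space notation for the general framework, not a specification of our exact model): the state x_k collects pose, velocity, and sensor biases; IMU readings (a_k, omega_k) drive the dynamics; and visual observations y_k enter through the likelihood:

    x_k = f(x_{k-1}, a_k, \omega_k) + \varepsilon_k    (inertial dynamics)
    y_k = h(x_k) + \nu_k                               (visual measurements)
    p(x_k \mid y_{1:k}) \propto p(y_k \mid x_k) \int p(x_k \mid x_{k-1}) \, p(x_{k-1} \mid y_{1:k-1}) \, dx_{k-1}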

Robust

Fusion gives robustness against camera occlusions, varying lighting conditions, and feature-poor visual environments.

Accurate

Position error below 1% of the distance traveled with odometry alone (e.g., under 1 m of drift after 100 m of movement). With SLAM, we reach near-zero drift when revisiting previous locations.

Cross-platform

Our C++ codebase can be built for various platforms ranging from smartphones to drones. The methods are configurable to run on different hardware setups.

Demonstration Videos

Example videos, comparisons, and case studies using our technology.

Comparisons to ARCore, ARKit, and AR Engine

An indoor/outdoor visual-inertial tracking case on foot. We compare the performance of Google ARCore, Huawei AR Engine, Apple ARKit, and our VIO solution.

SLAM vehicle tracking

A larger-scale visual-inertial SLAM tracking case using a car and comparisons against ARCore, AR Engine, ARKit, and Intel RealSense.

Vehicle tracking in parking garage (revisited)

Comparison between Spectacular AI visual-inertial tracking and Apple ARKit, both simultaneously running on the same iPhone 8 smartphone attached to the car dashboard.

Phone throwing

Extreme tracking of a smartphone spinning in mid-air. Motion blur makes this case challenging for vision-heavy methods.

Model train tracking

Visual-inertial tracking of a model train using an iPhone XS smartphone running our visual-inertial odometry method (without SLAM).

Pedestrian tracking in subway station

Tracking pedestrian movement in a multi-floor subway station. Note how the tracking works on the escalators. The device used is a Huawei Mate 20 smartphone.

All demo videos

Research

An authentic dataset for visual-inertial odometry

The lack of realistic and open benchmarking datasets for pedestrian visual-inertial odometry has made it hard to pinpoint differences between published methods. Existing datasets either lack full six-degree-of-freedom ground truth or are limited to small spaces covered by optical tracking systems.

Inertial Odometry on Handheld Smartphones

Building a complete inertial navigation system using the limited-quality data provided by current smartphones has been regarded as challenging, if not impossible.

Probabilistic Inertial-Visual Odometry for Occlusion-Robust Navigation

This paper presents a novel method for visual-inertial odometry. The method is based on an information fusion framework employing low-cost IMU sensors and the monocular camera in a standard smartphone.

Team

Our team consists of experienced developers and researchers in the fields of computer vision, machine learning, and sensor fusion.

Prof. Arno Solin

Dr. Otto Seiskari

Prof. Juho Kannala

Prof. Esa Rahtu

Pekka Rantalankila

Jerry Ylilammi

Contact us