Spectacular AI

State-of-the-art inside-out tracking for AR/VR/XR, drones, and ground vehicles.

Spectacular AI SDK

Low power

Runs real-time on embedded hardware

Standalone

No dependencies on ARKit, ARCore, or specific hardware

Accurate

State-of-the-art performance, see below

For off-the-shelf devices

  • Plug'n'play experience: low-level details are handled by the SDK
  • Easy-to-use C++ and Python SDKs with extensive examples (see the Python sketch after this list)
  • Free for non-commercial use, contact us for commercial pricing
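
As a rough illustration of the Python SDK, the sketch below is adapted from the public sdk-examples repository for OAK-D (DepthAI) devices. The class and method names follow those examples but may differ between SDK versions, and the device choice is an assumption.

    import depthai
    import spectacularAI

    # Standard DepthAI pipeline, extended with Spectacular AI VIO
    pipeline = depthai.Pipeline()
    vio_pipeline = spectacularAI.depthai.Pipeline(pipeline)

    # Open the device and stream 6-DoF pose estimates
    with depthai.Device(pipeline) as device, \
            vio_pipeline.startSession(device) as vio_session:
        while True:
            out = vio_session.waitForOutput()  # blocks until the next estimate
            print(out.asJson())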

For custom hardware

  • Our core C++ SDK can support any device with suitable camera, IMU and compute hardware
  • Minimum CPU requirement: 2x ARM Cortex-A53. Can leverage embedded DSPs/NPUs for acceleration
  • Additional supported sensors: Lidar and RTK-GPS
  • Integration and hardware design support available on request

Spectacular AI SDK in action

Inside-out tracking

Accurate and low-latency 6-DoF pose tracking for AR & VR headsets, without external lighthouses. Lightweight enough to run on an embedded processor in the headset.

Drone navigation

Runs real-time on embedded devices. Ideal as an input for autonomous navigation. Power consumption well below 1 W with suitable hardware acceleration.

GNSS-VIO

Optional fusion with GNSS allows uninterrupted positioning during GPS outages and provides accurate orientation, even with a single GPS antenna.

Large-scale mapping

A floor area of approximately 1000 m² was mapped in 21 minutes using an Azure Kinect device.

NeRFs

Spectacular AI SDK mapping API outputs can be fed into various NeRF frameworks; this 3D reconstruction was generated with NVIDIA Instant NeRF in a matter of seconds.
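
To sketch how the mapping API outputs can be consumed, the example below replays a recorded session and reads key-frame camera poses, which, together with the key-frame images, are the inputs NeRF tools need. It follows the public sdk-examples repository; the attribute names and the recording path are assumptions and may vary between SDK versions.

    import spectacularAI

    # Called whenever the SLAM map is updated during the replay
    def on_mapping_output(output):
        for frame_id in output.updatedKeyFrames:
            key_frame = output.map.keyFrames.get(frame_id)
            if not key_frame:
                continue  # this key frame was removed from the map
            # Camera-to-world pose of the key frame's primary camera; poses
            # like this, plus the images, are what NeRF training consumes
            cam_pose = key_frame.frameSet.primaryFrame.cameraPose
            print(frame_id, cam_pose.getCameraToWorldMatrix())

    # Replay a recorded dataset and feed mapping outputs to the callback
    replay = spectacularAI.Replay("path/to/recording", on_mapping_output)
    replay.runReplay()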

Fast 3D reconstruction

The SDK can be used for quick, real-time mapping and 3D reconstruction on commodity hardware such as mobile phones with a ToF sensor.

Research

Visual-inertial navigation done right

Whether your challenge is to track the movement of vehicles, people, or autonomous things, you will need to understand the device's relative location in the surrounding 3D environment.

Our methods are learning-based and adapt to uncertainties in the observed data. We perform principled probabilistic inference in which visual and inertial data contribute on an equal footing. Our unique approach is built from scratch using first principles and implemented with state-of-the-art computer vision and machine learning methods.
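
To make the fusion idea concrete, a generic visual-inertial posterior from this line of work can be written as below. This is an illustrative textbook-style formulation, not necessarily the exact model inside the SDK.

    p(x_{1:T} \mid z^{\mathrm{vis}}_{1:T}, z^{\mathrm{imu}}_{1:T})
      \;\propto\; p(x_0) \prod_{t=1}^{T}
        \underbrace{p(x_t \mid x_{t-1}, z^{\mathrm{imu}}_t)}_{\text{inertial motion model}} \,
        \underbrace{p(z^{\mathrm{vis}}_t \mid x_t)}_{\text{visual likelihood}}

Here x_t denotes the device pose (plus velocity and sensor biases) at time t; the inertial and visual terms enter the same inference problem symmetrically.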

Related research

O. Seiskari, P. Rantalankila, J. Kannala, J. Ylilammi, E. Rahtu, A. Solin
HybVIO: Pushing the Limits of Real-time Visual-inertial Odometry
2022 IEEE Winter Conference on Applications of Computer Vision (WACV)

A. Solin, S. Cortés, E. Rahtu, J. Kannala
PIVO: Probabilistic Inertial-visual Odometry for Occlusion-robust Navigation
2018 IEEE Winter Conference on Applications of Computer Vision (WACV)

A. Solin, S. Cortés, E. Rahtu, J. Kannala
Inertial Odometry on Handheld Smartphones
2018 21st International Conference on Information Fusion (FUSION)

S. Cortés, A. Solin, E. Rahtu, J. Kannala
ADVIO: An Authentic Dataset for Visual-Inertial Odometry
2018 European Conference on Computer Vision (ECCV)

Team

Spectacular AI is a university spin-off company from Helsinki, founded in 2021.

Our team is a unique mixture of seasoned software professionals and established computer vision and machine learning researchers. We solve challenging problems in computer vision, spatial AI, sensor fusion, and SLAM.

Otto Seiskari (CEO)
Pekka Rantalankila
Jerry Ylilammi
Valtteri Kaatrasalo
Prof. Arno Solin
Prof. Juho Kannala
Prof. Esa Rahtu

Contact us