3D VISION-BASED SENSING SOLUTIONS

FOR ADVANCED ROBOTICS & AUTOMATION

DEMOCRATISING VISUAL SENSING

We enable businesses to develop their own robotic applications by making standalone perception systems modular, accessible and reliable.

Modular Sensor

Autonomy is hard to scale today because the perception module typically requires a large perception and embedded-engineering team.

Our modular sensor hardware and expandable software support both robot manufacturers and budding enthusiasts by speeding up their development process.

This lowers the barriers to adopting vision technologies, so that even the smallest robotics companies can enjoy cutting-edge perception technology from day one.

Cloud Monitoring & Calibration

Visual sensor calibration is often unnecessarily complicated and time-consuming, and technical errors can occur unpredictably. Engineers often have to visit the site to conduct routine maintenance or repairs.

This mundane work is now handled automatically by Vilota's cloud monitoring and calibration packages. Your engineers can focus on what they do best, and your company saves operating time and cost.

360° Perception

Sensors are commonly built to work in isolation, which creates data fragmentation and perception blind spots on your robots.

Our sensors are designed to operate in a network, communicating with one another to share the temporal and spatial data they collect. This allows for real-time, robust 360° perception of any environment.
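As a conceptual sketch only (the names and maths below are illustrative assumptions, not Vilota's implementation), fusing spatial data from networked sensors typically means transforming each sensor's local detections into a shared world frame before merging them:

```python
import math
from dataclasses import dataclass

@dataclass
class Detection:
    timestamp: float  # seconds
    x: float          # position in the sensor's local frame (metres)
    y: float

@dataclass
class SensorPose:
    x: float          # sensor position in the shared world frame (metres)
    y: float
    heading: float    # sensor orientation (radians)

def to_world(det: Detection, pose: SensorPose) -> tuple:
    """Rotate and translate a local detection into the world frame."""
    c, s = math.cos(pose.heading), math.sin(pose.heading)
    return (pose.x + c * det.x - s * det.y,
            pose.y + s * det.x + c * det.y)

def merge(detections_by_sensor: dict, poses: dict) -> list:
    """Pool every sensor's detections into one world frame, ordered by
    timestamp (a real system would also align the sensors' clocks)."""
    pooled = [(d.timestamp, to_world(d, poses[sid]))
              for sid, dets in detections_by_sensor.items()
              for d in dets]
    return sorted(pooled)
```

Two sensors with partially overlapping fields of view then contribute to one coherent picture, which is how a sensor network removes the blind spots of isolated sensors.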

CORE TECHNOLOGIES

3D Computer Vision

High-level understanding of digital images and video

Edge Compute

Computation and processing right at the source of data acquisition

Sensor Fusion

In-house know-how and leading technologies for combining data from multiple sensors

OUR OFFERINGS

To realise our vision of democratising 3D perception, we offer both software and hardware solutions that bring state-of-the-art 3D vision to your robots and business.

Perceptive Kernel

A hardware-neutral, API-level product that performs onboard computation to deliver immediately actionable sensory data

APIs offered:

  • NavFuse (self-positioning)
  • MovTrack (moving object tracking)
  • Volume (depth perception)
  • Object3D (3D object positioning)
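To illustrate how these APIs could fit into an application, here is a minimal sketch with stubbed return values; every class and method name below is an assumption for illustration, not Vilota's published interface:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    x: float  # metres, world frame
    y: float
    z: float

@dataclass
class Track:
    object_id: int
    pose: Pose
    velocity: tuple  # (vx, vy, vz) in m/s

class PerceptiveKernelStub:
    """Stand-in client: each method mirrors one advertised module."""

    def nav_fuse(self) -> Pose:
        # NavFuse: the robot's own fused position estimate
        return Pose(0.0, 0.0, 0.0)

    def mov_track(self) -> list:
        # MovTrack: currently tracked moving objects
        return [Track(1, Pose(2.0, 0.5, 0.0), (0.1, 0.0, 0.0))]

    def volume(self) -> float:
        # Volume: distance to the nearest obstacle, from depth (metres)
        return 1.8

    def object3d(self) -> list:
        # Object3D: 3D positions of recognised objects
        return [Pose(3.0, -1.0, 0.2)]

kernel = PerceptiveKernelStub()
if kernel.volume() < 2.0:
    nearest = kernel.mov_track()[0]  # react to the closest tracked object
```

The point of a hardware-neutral layer like this is that application code reacts to actionable quantities (poses, tracks, distances) rather than raw camera images.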

OmniSense Hardware

Our cameras are collectively known as OmniSense. Our goal for the end-state OmniSense is to include a communication protocol on top of onboard compute and AI chips. This allows for visual navigation in challenging environments, where multi-sensor redundancy is achieved with vision sensors mounted on both mobile platforms and static infrastructure.

Vision Kit Lite

Vision Kit Lite is credit-card sized, suitable for mounting on drones and small robots. It provides 360-degree coverage for navigation, tracking and monitoring. It comprises our first-generation OmniSense and a compute system with the Perceptive Kernel embedded.

Streaming Dev Kit

Our Streaming Dev Kit contains edge compute and vision sensors. It is suitable for robotics researchers and developers who want hands-on experience with our Perceptive Kernel MovTrack.

The Dev Kit streams over a low-latency network, with data processing done onboard.

FEATURED IN

CONTACT US

You may also drop us a request for our brochure & factsheet
