Deploy AI across mobile, web, and embedded applications

  • On device

    Reduce latency. Work offline. Keep your data local & private.

  • Cross-platform

    Run the same model across Android, iOS, web, and embedded.

  • Multi-framework

    Compatible with JAX, Keras, PyTorch, and TensorFlow models.

  • Full AI edge stack

    Flexible frameworks, turnkey solutions, and hardware accelerators.

Explore the full AI edge stack, with products at every level — from low-code APIs down to hardware-specific acceleration libraries.

MediaPipe Framework

A low-level framework for building high-performance, accelerated ML pipelines, often combining multiple ML models with pre- and post-processing.

Model Explorer

Visually explore, debug, and compare your models. Overlay performance benchmarks and numerics to pinpoint hotspots.

Recent videos and blog posts