Deploy AI across mobile, web, and embedded applications
On device
Reduce latency. Work offline. Keep your data local & private.
Cross-platform
Run the same model across Android, iOS, web, and embedded.
Multi-framework
Compatible with JAX, Keras, PyTorch, and TensorFlow models.
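As a minimal sketch of the multi-framework, on-device workflow described above: a small Keras model is converted to the LiteRT (TensorFlow Lite) flatbuffer format and run with the on-device interpreter. The one-layer model, its shapes, and the zero-valued input are illustrative placeholders, not a real workload.

```python
import numpy as np
import tensorflow as tf

# Hypothetical placeholder model: a single Dense layer, shapes are illustrative.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8,)),
    tf.keras.layers.Dense(4),
])

# Convert the Keras model to the LiteRT / TFLite flatbuffer format.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Run the converted model with the on-device interpreter.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp['index'], np.zeros((1, 8), dtype=np.float32))
interpreter.invoke()
result = interpreter.get_tensor(out['index'])  # shape (1, 4)
```

The same converted `.tflite` artifact can then be shipped to Android, iOS, web, or embedded runtimes unchanged, which is what makes the cross-platform claim work in practice.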
Full AI edge stack
Flexible frameworks, turnkey solutions, hardware accelerators
Explore the full AI edge stack, with products at every level: from low-code APIs down to hardware-specific acceleration libraries.

Get started
MediaPipe Framework
A low-level framework for building high-performance, accelerated ML pipelines, often combining multiple ML models with pre- and post-processing.
Get started
Model Explorer
Visually explore, debug, and compare your models. Overlay performance benchmarks and numerics to pinpoint troublesome hotspots.
Gemini Nano in Android & Chrome
Build generative AI experiences using Google's most powerful on-device model.
