GestureRecognizer class

Performs hand gesture recognition on images.

Signature:

export declare class GestureRecognizer extends VisionTaskRunner 

Extends: VisionTaskRunner

Properties

Property | Modifiers | Type | Description
HAND_CONNECTIONS | static | Connection[] | An array containing the pairs of hand landmark indices to be rendered with connections.

Methods

Method | Modifiers | Description
createFromModelBuffer(wasmFileset, modelAssetBuffer) | static | Initializes the Wasm runtime and creates a new gesture recognizer based on the provided model asset buffer.
createFromModelPath(wasmFileset, modelAssetPath) | static | Initializes the Wasm runtime and creates a new gesture recognizer based on the path to the model asset.
createFromOptions(wasmFileset, gestureRecognizerOptions) | static | Initializes the Wasm runtime and creates a new gesture recognizer from the provided options.
recognize(image, imageProcessingOptions) | | Performs gesture recognition on the provided single image and waits synchronously for the response. Only use this method when the GestureRecognizer is created with running mode image.
recognizeForVideo(videoFrame, timestamp, imageProcessingOptions) | | Performs gesture recognition on the provided video frame and waits synchronously for the response. Only use this method when the GestureRecognizer is created with running mode video.
setOptions(options) | | Sets new options for the gesture recognizer. Calling setOptions() with a subset of options only affects those options. You can reset an option back to its default value by explicitly setting it to undefined.

GestureRecognizer.HAND_CONNECTIONS

An array containing the pairs of hand landmark indices to be rendered with connections.

Signature:

static HAND_CONNECTIONS: Connection[];
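A minimal sketch of how this connection list might be consumed for rendering, assuming the DrawingUtils helper and GestureRecognizerResult type exported by the @mediapipe/tasks-vision package (neither is described in this section) and a 2D canvas on the page:

import { DrawingUtils, GestureRecognizer } from "@mediapipe/tasks-vision";
import type { GestureRecognizerResult } from "@mediapipe/tasks-vision";

const canvas = document.querySelector("canvas")!;
const drawingUtils = new DrawingUtils(canvas.getContext("2d")!);

// Draw the hand skeleton for each detected hand in a recognition result
// (the result comes from recognize() or recognizeForVideo(), documented below).
function drawHands(result: GestureRecognizerResult): void {
  for (const landmarks of result.landmarks) {
    drawingUtils.drawConnectors(landmarks, GestureRecognizer.HAND_CONNECTIONS);
  }
}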

GestureRecognizer.createFromModelBuffer()

Initializes the Wasm runtime and creates a new gesture recognizer based on the provided model asset buffer.

Signature:

static createFromModelBuffer(wasmFileset: WasmFileset, modelAssetBuffer: Uint8Array): Promise<GestureRecognizer>;

Parameters

Parameter | Type | Description
wasmFileset | WasmFileset | A configuration object that provides the location of the Wasm binary and its loader.
modelAssetBuffer | Uint8Array | A binary representation of the model.

Returns:

Promise<GestureRecognizer>
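A minimal sketch of creating a recognizer from a model buffer, assuming the @mediapipe/tasks-vision package, its FilesetResolver helper for building the WasmFileset, an ES-module context where top-level await is allowed, and a hypothetical model file location:

import { FilesetResolver, GestureRecognizer } from "@mediapipe/tasks-vision";

// Resolve the Wasm binary and loader (asset URL is an assumption).
const wasmFileset = await FilesetResolver.forVisionTasks(
  "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision/wasm"
);

// Download the model and pass it in as a raw buffer.
const response = await fetch("gesture_recognizer.task"); // hypothetical path
const modelAssetBuffer = new Uint8Array(await response.arrayBuffer());

const recognizer = await GestureRecognizer.createFromModelBuffer(
  wasmFileset,
  modelAssetBuffer
);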

GestureRecognizer.createFromModelPath()

Initializes the Wasm runtime and creates a new gesture recognizer based on the path to the model asset.

Signature:

static createFromModelPath(wasmFileset: WasmFileset, modelAssetPath: string): Promise<GestureRecognizer>;

Parameters

Parameter | Type | Description
wasmFileset | WasmFileset | A configuration object that provides the location of the Wasm binary and its loader.
modelAssetPath | string | The path to the model asset.

Returns:

Promise<GestureRecognizer>
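A sketch of the path-based variant, reusing the wasmFileset resolved in the previous example; the model path shown is hypothetical:

const recognizer = await GestureRecognizer.createFromModelPath(
  wasmFileset,
  "gesture_recognizer.task" // hypothetical model path
);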

GestureRecognizer.createFromOptions()

Initializes the Wasm runtime and creates a new gesture recognizer from the provided options.

Signature:

static createFromOptions(wasmFileset: WasmFileset, gestureRecognizerOptions: GestureRecognizerOptions): Promise<GestureRecognizer>;

Parameters

Parameter | Type | Description
wasmFileset | WasmFileset | A configuration object that provides the location of the Wasm binary and its loader.
gestureRecognizerOptions | GestureRecognizerOptions | The options for the gesture recognizer. Note that either a path to the model asset or a model buffer needs to be provided (via baseOptions).

Returns:

Promise<GestureRecognizer>
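A sketch of the options-based constructor, again assuming the @mediapipe/tasks-vision package and FilesetResolver; the option values (model path, running mode, hand count) are illustrative assumptions, not requirements:

import { FilesetResolver, GestureRecognizer } from "@mediapipe/tasks-vision";

const wasmFileset = await FilesetResolver.forVisionTasks(
  "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision/wasm" // assumed Wasm asset location
);

const recognizer = await GestureRecognizer.createFromOptions(wasmFileset, {
  baseOptions: {
    modelAssetPath: "gesture_recognizer.task" // hypothetical model path
  },
  runningMode: "IMAGE",
  numHands: 2
});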

GestureRecognizer.recognize()

Performs gesture recognition on the provided single image and waits synchronously for the response. Only use this method when the GestureRecognizer is created with running mode image.

Signature:

recognize(image: ImageSource, imageProcessingOptions?: ImageProcessingOptions): GestureRecognizerResult;

Parameters

Parameter | Type | Description
image | ImageSource | A single image to process.
imageProcessingOptions | ImageProcessingOptions | Optional. The ImageProcessingOptions specifying how to process the input image before running inference.

Returns:

GestureRecognizerResult

The detected gestures.
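A brief usage sketch, reusing a recognizer created with running mode IMAGE (as in the createFromOptions example above) and an image element already on the page; the result shape (gestures as one Category list per hand, with categoryName and score) is assumed from the GestureRecognizerResult type:

const image = document.querySelector("img")!;
const result = recognizer.recognize(image);

// Log the top gesture category for each detected hand.
for (const handGestures of result.gestures) {
  if (handGestures.length > 0) {
    console.log(handGestures[0].categoryName, handGestures[0].score);
  }
}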

GestureRecognizer.recognizeForVideo()

Performs gesture recognition on the provided video frame and waits synchronously for the response. Only use this method when the GestureRecognizer is created with running mode video.

Signature:

recognizeForVideo(videoFrame: ImageSource, timestamp: number, imageProcessingOptions?: ImageProcessingOptions): GestureRecognizerResult;

Parameters

Parameter | Type | Description
videoFrame | ImageSource | A video frame to process.
timestamp | number | The timestamp of the current frame, in ms.
imageProcessingOptions | ImageProcessingOptions | Optional. The ImageProcessingOptions specifying how to process the input image before running inference.

Returns:

GestureRecognizerResult

The detected gestures.
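A sketch of a typical render loop, assuming a recognizer created with running mode VIDEO and a playing video element; using performance.now() as the monotonically increasing timestamp is an assumption of this example:

const video = document.querySelector("video")!;
let lastVideoTime = -1;

function renderLoop(): void {
  // Only run inference when a new frame is available.
  if (video.currentTime !== lastVideoTime) {
    lastVideoTime = video.currentTime;
    const result = recognizer.recognizeForVideo(video, performance.now());
    // ... consume result.gestures / result.landmarks here ...
  }
  requestAnimationFrame(renderLoop);
}

renderLoop();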

GestureRecognizer.setOptions()

Sets new options for the gesture recognizer.

Calling setOptions() with a subset of options only affects those options. You can reset an option back to its default value by explicitly setting it to undefined.

Signature:

setOptions(options: GestureRecognizerOptions): Promise<void>;

Parameters

Parameter | Type | Description
options | GestureRecognizerOptions | The options for the gesture recognizer.

Returns:

Promise<void>
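A short sketch of partial updates, reusing the recognizer from the earlier examples; runningMode and numHands are shown as illustrative GestureRecognizerOptions fields:

// Switch the recognizer into video mode; options not listed keep their current values.
await recognizer.setOptions({ runningMode: "VIDEO" });

// Reset an option back to its default by explicitly setting it to undefined.
await recognizer.setOptions({ numHands: undefined });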