@@ -7,8 +7,7 @@ This cookbook demonstrates how to use OpenAI's [Realtime API](https://platform.
A real-world use case for this demo is multilingual, conversational translation, where a speaker talks into the speaker app and listeners hear translations in their selected native language via the listener app. Imagine a conference room where a speaker is talking in English and a participant wearing headphones chooses to listen to a Tagalog translation. Due to the current turn-based nature of audio models, the speaker must pause briefly to allow the model to process and translate the speech. However, as models become faster and more efficient, this latency will decrease significantly and the translation will become more seamless.


Let's explore the main functionalities and code snippets that illustrate how the app works. You can find the code in the [accompanying repo](https://github.com/openai/openai-cookbook/tree/main/examples/voice_solutions/one_way_translation_using_realtime_api) if you want to run the app locally.
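
Before walking through the app itself, the sketch below shows the basic shape of a Realtime API session configured for one-way translation. It is illustrative rather than code from the repo: it opens a raw WebSocket connection and sends a `session.update` event so the model behaves as a translator; the model name, instructions text, and target language are assumptions made for this example.

```typescript
// Illustrative sketch (not the app's actual code): connect to the Realtime API
// over WebSocket and configure the session for one-way translation.
import WebSocket from 'ws';

const ws = new WebSocket(
  'wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview', // assumed model name
  {
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      'OpenAI-Beta': 'realtime=v1',
    },
  }
);

ws.on('open', () => {
  // Instruct the model to translate every speaker turn instead of answering it.
  ws.send(
    JSON.stringify({
      type: 'session.update',
      session: {
        instructions:
          'You are a translator. Repeat everything the speaker says in Tagalog. Do not add commentary.',
        voice: 'alloy',
        turn_detection: { type: 'server_vad' },
      },
    })
  );
});

ws.on('message', (raw) => {
  const event = JSON.parse(raw.toString());
  if (event.type === 'response.audio.delta') {
    // Base64-encoded PCM16 chunks of the translated speech arrive here;
    // a listener app would decode and play them back.
  }
});
```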

## High Level Architecture Overview

@@ -0,0 +1,9 @@
/**
* All note frequencies from 1st to 8th octave
* in format "A#8" (A#, 8th octave)
*/
export const noteFrequencies: any[];
export const noteFrequencyLabels: any[];
export const voiceFrequencies: any[];
export const voiceFrequencyLabels: any[];
//# sourceMappingURL=constants.d.ts.map
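
The value shapes above are declared as `any[]`, so the following is a hypothetical usage sketch: it assumes `noteFrequencies` holds pitches in Hz and `noteFrequencyLabels` is a parallel array of names such as "A#8", as the comment suggests, and maps a measured frequency to the nearest labeled note.

```typescript
// Hypothetical usage sketch: assumes noteFrequencies is number[] (Hz) and
// noteFrequencyLabels is a parallel string[] of note names like "A#8".
import { noteFrequencies, noteFrequencyLabels } from './constants';

// Find the closest labeled note for a measured frequency (e.g. an FFT peak).
function closestNoteLabel(frequencyHz: number): string {
  let bestIndex = 0;
  let bestDistance = Infinity;
  for (let i = 0; i < noteFrequencies.length; i++) {
    const distance = Math.abs(noteFrequencies[i] - frequencyHz);
    if (distance < bestDistance) {
      bestDistance = distance;
      bestIndex = i;
    }
  }
  return noteFrequencyLabels[bestIndex];
}

console.log(closestNoteLabel(440)); // expected to be "A4" if standard tuning is used
```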
@@ -0,0 +1,58 @@
/**
* Raw wav audio file contents
* @typedef {Object} WavPackerAudioType
* @property {Blob} blob
* @property {string} url
* @property {number} channelCount
* @property {number} sampleRate
* @property {number} duration
*/
/**
* Utility class for assembling PCM16 "audio/wav" data
* @class
*/
export class WavPacker {
  /**
   * Converts Float32Array of amplitude data to ArrayBuffer in Int16Array format
   * @param {Float32Array} float32Array
   * @returns {ArrayBuffer}
   */
  static floatTo16BitPCM(float32Array: Float32Array): ArrayBuffer;
  /**
   * Concatenates two ArrayBuffers
   * @param {ArrayBuffer} leftBuffer
   * @param {ArrayBuffer} rightBuffer
   * @returns {ArrayBuffer}
   */
  static mergeBuffers(leftBuffer: ArrayBuffer, rightBuffer: ArrayBuffer): ArrayBuffer;
  /**
   * Packs data into an Int16 format
   * @private
   * @param {number} size 0 = 1x Int16, 1 = 2x Int16
   * @param {number} arg value to pack
   * @returns
   */
  private _packData;
  /**
   * Packs audio into "audio/wav" Blob
   * @param {number} sampleRate
   * @param {{bitsPerSample: number, channels: Array<Float32Array>, data: Int16Array}} audio
   * @returns {WavPackerAudioType}
   */
  pack(sampleRate: number, audio: {
    bitsPerSample: number;
    channels: Array<Float32Array>;
    data: Int16Array;
  }): WavPackerAudioType;
}
/**
* Raw wav audio file contents
*/
export type WavPackerAudioType = {
blob: Blob;
url: string;
channelCount: number;
sampleRate: number;
duration: number;
};
//# sourceMappingURL=wav_packer.d.ts.map
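
To make the declarations above concrete, here is a hypothetical usage sketch (not code from the repo): it converts a `Float32Array` of microphone samples to 16-bit PCM and packs the result into a playable "audio/wav" blob. The sample rate, placeholder data, and variable names are illustrative.

```typescript
// Hypothetical usage of WavPacker based on the declarations above.
import { WavPacker } from './wav_packer';

const sampleRate = 24000;
const samples = new Float32Array(sampleRate); // one second of silence as placeholder data

// Convert floating-point amplitudes (-1..1) to 16-bit PCM.
const pcmBuffer = WavPacker.floatTo16BitPCM(samples);

// Assemble a single-channel "audio/wav" blob.
const wav = new WavPacker().pack(sampleRate, {
  bitsPerSample: 16,
  channels: [samples],
  data: new Int16Array(pcmBuffer),
});

// In a browser, wav.url can be used as the src of an <audio> element.
console.log(wav.duration, wav.sampleRate, wav.channelCount);
```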