JavaScript SDK Documentation
Official JavaScript/TypeScript SDK for the OrbitalsAI API. Process audio, manage billing, and integrate AI-powered features into your applications with ease.
Installation
Install the official orbitalsai package using your preferred package manager. The SDK works across Node.js and browser-based applications and is fully typed for TypeScript projects.
```bash
npm install orbitalsai
```
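If you prefer yarn or pnpm, the equivalent installs are:

```bash
yarn add orbitalsai
# or
pnpm add orbitalsai
```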
Quick Start
Start with upload-and-transcribe for pre-recorded files, then use real-time streaming when your product needs live transcription in a session. Most teams begin with these two entry points.
Upload & Transcribe
Use this flow for a single pre-recorded file: upload audio, then poll until the transcription task completes.
```javascript
import { OrbitalsClient } from "orbitalsai";

// Initialize the client with your API key
const client = new OrbitalsClient({
  apiKey: "your-api-key-here",
});

// Upload and transcribe audio
const file = /* your audio file */;
const upload = await client.audio.upload(file);

// Wait for transcription to complete
const result = await client.audio.waitForCompletion(upload.task_id);
console.log("Transcription:", result.result_text);
```

Real-Time Streaming Transcription
Use streaming for voice assistants, captions, call interfaces, and other low-latency use cases where users need live partial and final text updates.
```javascript
import { StreamingClient } from "orbitalsai";

const streaming = new StreamingClient("your-api-key-here");

// Handle real-time transcription events
streaming.on("partial", (msg) => console.log("Partial:", msg.text));
streaming.on("final", (msg) => console.log("Final:", msg.text));

// Connect and start streaming
await streaming.connect({ language: "english" });

// Send audio data (Int16 PCM, 16kHz, mono)
streaming.sendAudio(audioBuffer);

// Disconnect when done
streaming.disconnect();
```

Initialization
Initialize one client per process or request context and reuse it across calls. Configure timeout and retries based on your network environment and UX expectations.
```javascript
import { OrbitalsClient } from "orbitalsai";

const client = new OrbitalsClient({
  apiKey: "your-api-key-here",
  timeout: 30000,  // optional - request timeout in ms
  maxRetries: 3,   // optional - number of retry attempts
  debug: false,    // optional - enable debug logging
});
```
Authentication & Security
OrbitalsAI uses API key authentication. Keep keys server-side when possible, rotate them regularly, and never hardcode production secrets in committed source files.
API Key Authentication
Pass your key through the SDK configuration. The client sends the bearer token on your behalf, so you do not need to manually set headers for every request.
```javascript
const client = new OrbitalsClient({
  apiKey: process.env.ORBITALS_API_KEY,
});
```

Environment Variables
Store your API key in environment variables and read it at runtime. This keeps secrets out of your codebase and makes environment-specific deployments safer and easier to manage.
```bash
# .env
ORBITALS_API_KEY=your-api-key-here
```
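Node.js does not read .env files automatically. One common approach, assuming you add the dotenv package to your project (it is not part of this SDK), is to load the file before constructing the client:

```javascript
// Load variables from .env into process.env (assumes the dotenv package is installed)
import "dotenv/config";
import { OrbitalsClient } from "orbitalsai";

const client = new OrbitalsClient({
  apiKey: process.env.ORBITALS_API_KEY,
});
```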
Audio Processing
Use these endpoints for pre-recorded files and asynchronous workflows. You can control language behavior, model selection, subtitle output, and polling strategy based on your app requirements.
Selecting Language and model_name
Choose a language when you already know the source speech language, and set model_name when you want explicit control over the transcription model in use.
```javascript
const upload = await client.audio.upload(audioBuffer, "audio.mp3", {
  language: "english",
  model_name: "Perigee-1",
  generate_srt: true,
});
```

Upload Audio for Transcription
Upload from either browser file inputs or Node.js buffers. The SDK accepts both and normalizes the request shape for you.
```javascript
// Browser environment
const fileInput = document.querySelector('input[type="file"]');
const file = fileInput.files[0];

// You can optionally pass upload options (3rd arg) including model_name
const upload = await client.audio.upload(file, undefined, { model_name: "Perigee-1" });
console.log("Task ID:", upload.task_id);
```

Check Transcription Status
Use this endpoint to fetch the current state of a specific task by task ID. This is useful when you already have task IDs stored and want to retrieve status or final outputs on demand.
```javascript
const status = await client.audio.getStatus(taskId);
console.log("Status:", status.status);

if (status.status === "completed") {
  console.log("Transcription:", status.result_text);
  if (status.srt_content) {
    console.log("SRT subtitles:", status.srt_content);
  }
}
```

Wait for Completion
For simpler flows, use waitForCompletion to handle polling internally with a configurable interval and timeout.
```javascript
// Automatically polls until completion or timeout
const result = await client.audio.waitForCompletion(
  taskId,
  2000,   // poll interval in ms (optional, default: 2000)
  300000  // max wait time in ms (optional, default: 300000 - 5 minutes)
);
console.log("Transcription:", result.result_text);
```

Batch Processing (Multiple Files)
For multi-file workflows, upload all files first, then resolve each task independently. This pattern helps you process partial successes without failing the entire batch when one file errors out.
```javascript
import fs from "fs";
import path from "path";
import { OrbitalsClient } from "orbitalsai";

const client = new OrbitalsClient({
  apiKey: process.env.ORBITALS_API_KEY,
});

const files = [
  "audio/meeting-1.mp3",
  "audio/meeting-2.mp3",
  "audio/meeting-3.mp3",
];

// 1) Upload all files and keep task IDs
const uploadResults = await Promise.allSettled(
  files.map(async (filePath) => {
    const audioBuffer = fs.readFileSync(filePath);
    const upload = await client.audio.upload(audioBuffer, path.basename(filePath), {
      language: "english",
      model_name: "Perigee-1",
    });
    return { filePath, taskId: upload.task_id };
  })
);

const successfulUploads = uploadResults
  .filter((result) => result.status === "fulfilled")
  .map((result) => result.value);

// 2) Wait for each uploaded task to complete
const transcriptionResults = await Promise.allSettled(
  successfulUploads.map(async ({ filePath, taskId }) => {
    const result = await client.audio.waitForCompletion(taskId);
    return {
      filePath,
      taskId,
      text: result.result_text,
    };
  })
);

// 3) Handle successes and failures independently
const completed = transcriptionResults.filter((r) => r.status === "fulfilled");
const failed = transcriptionResults.filter((r) => r.status === "rejected");

console.log("Completed:", completed.length);
console.log("Failed:", failed.length);
```

You can also run batch uploads in a deferred mode: upload files now, persist the returned task IDs, and resolve results later. In that flow, use client.audio.getTasks() to list tasks and statuses, then call client.audio.getStatus(taskId) for any specific task when you need its latest status or result.
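As a rough sketch, resolving those persisted tasks later might look like this (loadSavedTaskIds is a hypothetical helper standing in for however you store task IDs):

```javascript
// Resolve previously uploaded tasks at a later time.
// loadSavedTaskIds() is a hypothetical helper standing in for your own storage.
const savedTaskIds = await loadSavedTaskIds();

// List all tasks for an overview of their statuses
const tasks = await client.audio.getTasks();
console.log("Known tasks:", tasks.length);

// Fetch the latest state of each saved task individually
for (const taskId of savedTaskIds) {
  const status = await client.audio.getStatus(taskId);
  if (status.status === "completed") {
    console.log(taskId, "->", status.result_text);
  } else {
    console.log(taskId, "is still", status.status);
  }
}
```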
Tasks and Models
List tasks to build history views and dashboards, and list models to dynamically expose model selection in your product UI.
```javascript
const tasks = await client.audio.getTasks();

tasks.forEach((task) => {
  console.log(`${task.original_filename}: ${task.status}`);
});
```

Get Available Models
```javascript
// Fetch available transcription models
const models = await client.audio.getModels();

models.forEach((model) => {
  console.log(model.model_name, model.is_active ? "(active)" : "(inactive)");
});
```

Streaming Transcription (Real-Time ASR)
The SDK supports real-time audio transcription over a WebSocket connection. This is ideal for live audio streams, voice interfaces, and real-time captioning. Supported languages: English, Hausa, Igbo, Yoruba.
Basic Streaming Setup
Use returnTimestamps: true in connect() to receive word-level timestamps in final events (e.g. for subtitles or alignment).
```javascript
import { StreamingClient } from "orbitalsai";

const streaming = new StreamingClient("your-api-key-here");

streaming.on("ready", (msg) => {
  console.log("Connected! Session:", msg.session_id);
});

streaming.on("partial", (msg) => {
  process.stdout.write(`\r${msg.text}`);
});

streaming.on("final", (msg) => {
  console.log(`\nFinal: ${msg.text}`);
  if (msg.timestamps?.length) {
    msg.timestamps.forEach((w) =>
      console.log(`  ${w.start.toFixed(2)}s–${w.end.toFixed(2)}s: ${w.text}`)
    );
  }
  console.log(
    `Cost: $${msg.cost.toFixed(6)} | Duration: ${msg.audio_seconds}s`
  );
});

streaming.on("error", (msg) => {
  console.error("Error:", msg.message);
});

await streaming.connect({
  language: "english",     // "english" | "hausa" | "igbo" | "yoruba"
  sampleRate: 16000,       // Audio sample rate (default: 16000)
  returnTimestamps: true,  // Optional: word-level timestamps in final events
  debug: false,
});

console.log("Connected:", streaming.isConnected);
```

Enabling Word-Level Timestamps
Set returnTimestamps: true while connecting to receive per-word timing in final messages. Each timestamp entry includes start, end, and text, which is useful for subtitles and transcript alignment.
```javascript
await streaming.connect({
  language: "english",
  returnTimestamps: true,
});

streaming.on("final", (msg) => {
  if (!msg.timestamps?.length) return;
  msg.timestamps.forEach((word) => {
    console.log(word.start, word.end, word.text);
  });
});
```
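If you are building subtitles from these events, a minimal sketch of turning word timestamps into SRT cues could look like the following; the one-cue-per-final-message grouping is an illustrative choice, not an SDK feature:

```javascript
// Format seconds as an SRT timestamp: HH:MM:SS,mmm
function toSrtTime(seconds) {
  const ms = Math.round(seconds * 1000);
  const h = String(Math.floor(ms / 3600000)).padStart(2, "0");
  const m = String(Math.floor((ms % 3600000) / 60000)).padStart(2, "0");
  const s = String(Math.floor((ms % 60000) / 1000)).padStart(2, "0");
  const rem = String(ms % 1000).padStart(3, "0");
  return `${h}:${m}:${s},${rem}`;
}

// Build one SRT cue per final message, spanning its first and last word
let cueIndex = 0;
const srtCues = [];

streaming.on("final", (msg) => {
  if (!msg.timestamps?.length) return;
  const words = msg.timestamps;
  cueIndex += 1;
  srtCues.push(
    `${cueIndex}\n` +
      `${toSrtTime(words[0].start)} --> ${toSrtTime(words[words.length - 1].end)}\n` +
      `${msg.text}\n`
  );
});
```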
Sending Audio Data
Audio must be sent as raw PCM data: Int16, mono, 16kHz.
```javascript
// From a Buffer (Node.js)
streaming.sendAudio(audioBuffer);

// From an ArrayBuffer (Browser)
streaming.sendAudio(audioArrayBuffer);

// From Uint8Array
streaming.sendAudio(uint8Array);
```
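In the browser you usually need to convert microphone audio (Float32 samples) to Int16 PCM yourself before calling sendAudio. A minimal sketch with the Web Audio API is below; ScriptProcessorNode is deprecated but still widely supported, and honoring a fixed 16kHz context sample rate depends on the browser (none of this capture code is provided by the SDK):

```javascript
// Capture microphone audio and forward it as Int16 PCM, mono, 16kHz
const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
const audioContext = new AudioContext({ sampleRate: 16000 }); // 16kHz if the browser honors it
const source = audioContext.createMediaStreamSource(stream);
const processor = audioContext.createScriptProcessor(4096, 1, 1);

processor.onaudioprocess = (event) => {
  const float32 = event.inputBuffer.getChannelData(0);
  const int16 = new Int16Array(float32.length);
  for (let i = 0; i < float32.length; i++) {
    // Clamp to [-1, 1] and scale to the 16-bit signed integer range
    const sample = Math.max(-1, Math.min(1, float32[i]));
    int16[i] = sample < 0 ? sample * 0x8000 : sample * 0x7fff;
  }
  streaming.sendAudio(int16.buffer);
};

source.connect(processor);
processor.connect(audioContext.destination);
```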
Billing Management
Track account balance and usage history so you can monitor spend, display cost analytics, and enforce usage policies in your app.
Get Account Balance
```javascript
const balance = await client.billing.getBalance();
console.log(`Balance: $${balance.balance}`);
console.log(`Last updated: ${balance.last_updated}`);
```

Get Daily Usage History
```javascript
// Get usage with date range
const usage = await client.billing.getUsageHistory({
  start_date: "2024-01-01",
  end_date: "2024-01-31",
  page: 1,
  page_size: 30,
});

console.log(`Total records: ${usage.total_records}`);
console.log("Period summary:", usage.period_summary);

usage.records.forEach((record) => {
  console.log(`${record.date}:`);
  console.log(
    `  Transcription: ${record.transcription_usage} ($${record.transcription_cost})`
  );
  console.log(`  Total: $${record.total_cost}`);
});
```
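If you want to enforce a simple spend policy on top of this data, a rough sketch might look like the following; the budget value and the reaction to exceeding it are placeholders for your own policy:

```javascript
// Sum the period's spend from the usage records and compare it to a budget
const MONTHLY_BUDGET_USD = 50; // hypothetical limit for your application

const periodSpend = usage.records.reduce(
  (total, record) => total + Number(record.total_cost),
  0
);

if (periodSpend >= MONTHLY_BUDGET_USD) {
  console.warn(`Budget exceeded: $${periodSpend.toFixed(2)} of $${MONTHLY_BUDGET_USD}`);
  // e.g. pause further uploads or notify an administrator here
}
```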
Error Handling
The SDK provides detailed error classes for different scenarios:
```javascript
import {
  OrbitalsClient,
  AuthenticationError,
  ValidationError,
  RateLimitError,
  NetworkError,
} from "orbitalsai";

const client = new OrbitalsClient({ apiKey: "your-api-key" });

try {
  const result = await client.audio.upload(file);
} catch (error) {
  if (error instanceof AuthenticationError) {
    console.error("Invalid API key:", error.message);
  } else if (error instanceof ValidationError) {
    console.error("Invalid request:", error.message, error.details);
  } else if (error instanceof RateLimitError) {
    console.error("Rate limited. Retry after:", error.retryAfter);
  } else if (error instanceof NetworkError) {
    console.error("Network error:", error.message);
  } else {
    console.error("Unknown error:", error);
  }
}
```

Error Types
AuthenticationError: Invalid or missing API key (401)
ValidationError: Invalid request parameters (422)
RateLimitError: Too many requests (429)
NetworkError: Connection or timeout issues
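For transient failures such as rate limits and network errors, one pattern is to retry with a delay. The sketch below builds on the error classes imported above; the attempt count, the fallback delay, and the assumption that retryAfter is expressed in seconds are illustrative rather than SDK guarantees:

```javascript
// Retry an operation on rate-limit or network errors with a simple backoff
async function withRetries(operation, maxAttempts = 3) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await operation();
    } catch (error) {
      const retryable =
        error instanceof RateLimitError || error instanceof NetworkError;
      if (!retryable || attempt === maxAttempts) throw error;

      // Prefer the server-provided retryAfter (assumed to be seconds) when available
      const delayMs =
        error instanceof RateLimitError && error.retryAfter
          ? error.retryAfter * 1000
          : attempt * 1000;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}

// Usage
const result = await withRetries(() => client.audio.upload(file));
```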
TypeScript Support
The SDK is written in TypeScript and provides comprehensive type definitions:
```typescript
import type {
  OrbitalsConfig,
  AudioUploadOptions,
  ModelInfo,
  TaskStatus,
  BillingBalance,
  UsageHistory,
} from "orbitalsai";

// All types are exported for your convenience
```
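As a small illustration of how the exported types can annotate your own helpers (this sketch assumes TaskStatus exposes the status and result_text fields used in the examples above):

```typescript
import { OrbitalsClient } from "orbitalsai";
import type { OrbitalsConfig, TaskStatus } from "orbitalsai";

// Build a client from a strongly typed configuration object
function createClient(config: OrbitalsConfig): OrbitalsClient {
  return new OrbitalsClient(config);
}

// Return the transcript of a finished task, or null if it is not done yet
function getTranscript(task: TaskStatus): string | null {
  return task.status === "completed" ? task.result_text ?? null : null;
}
```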
Complete Example
This end-to-end example combines secure initialization, upload options, completion polling, and billing checks in one flow that you can adapt for production.
```javascript
import { OrbitalsClient } from "orbitalsai";
import fs from "fs";

async function processAudio(filePath) {
  const client = new OrbitalsClient({
    apiKey: process.env.ORBITALS_API_KEY,
  });

  try {
    // 1. Upload file
    console.log("Uploading audio...");
    const audioBuffer = fs.readFileSync(filePath);
    const upload = await client.audio.upload(audioBuffer, "audio.mp3", {
      generate_srt: true,
      language: "english",
      model_name: "Perigee-1",
    });
    console.log("Task created:", upload.task_id);

    // 2. Wait for completion
    console.log("Processing...");
    const result = await client.audio.waitForCompletion(upload.task_id);
    console.log("Processing complete!");

    // 3. Check balance
    const balance = await client.billing.getBalance();
    console.log(`Remaining balance: $${balance.balance}`);

    return result;
  } catch (error) {
    console.error("Error processing audio:", error);
    throw error;
  }
}

processAudio("./my-audio.mp3");
```