JavaScript SDK

Official JavaScript/TypeScript SDK for the OrbitalsAI API. Process audio, manage billing, and integrate AI-powered features into your applications with ease.

Node.js 16+ · Stable · npm Package

Features

Simple & Intuitive API
Real-Time Streaming
Word-level timestamps
Full TypeScript Support
Automatic Retries
Modern async/await
Universal (Node.js & Browser)
Comprehensive Error Handling

Installation

Install the official orbitalsai package using your preferred package manager. The SDK works across Node.js and browser-based applications and is fully typed for TypeScript projects.

bash
npm install orbitalsai

Quick Start

Start with upload-and-transcribe for pre-recorded files, then use real-time streaming when your product needs live transcription in a session. Most teams begin with these two entry points.

Upload & Transcribe

Use this flow for a single pre-recorded file: upload audio, then poll until the transcription task completes.

quick-start.js
javascript
import { OrbitalsClient } from "orbitalsai";
// Initialize the client with your API key
const client = new OrbitalsClient({
  apiKey: "your-api-key-here",
});
// Upload and transcribe audio
const file = yourAudioFile; // a browser File or Node.js Buffer
const upload = await client.audio.upload(file);
// Wait for transcription to complete
const result = await client.audio.waitForCompletion(upload.task_id);
console.log("Transcription:", result.result_text);

Real-Time Streaming Transcription

Use streaming for voice assistants, captions, call interfaces, and other low-latency use cases where users need live partial and final text updates.

streaming-quick-start.js
javascript
import { StreamingClient } from "orbitalsai";
const streaming = new StreamingClient("your-api-key-here");
// Handle real-time transcription events
streaming.on("partial", (msg) => console.log("Partial:", msg.text));
streaming.on("final", (msg) => console.log("Final:", msg.text));
// Connect and start streaming
await streaming.connect({ language: "english" });
// Send audio data (Int16 PCM, 16kHz, mono)
streaming.sendAudio(audioBuffer);
// Disconnect when done
streaming.disconnect();

Initialization

Initialize one client per process or request context and reuse it across calls. Configure timeout and retries based on your network environment and UX expectations.

init.js
javascript
import { OrbitalsClient } from "orbitalsai";
const client = new OrbitalsClient({
  apiKey: "your-api-key-here",
  timeout: 30000, // optional - request timeout in ms
  maxRetries: 3, // optional - number of retry attempts
  debug: false, // optional - enable debug logging
});
Get your API key from the OrbitalsAI Dashboard.

Authentication & Security

OrbitalsAI uses API key authentication. Keep keys server-side when possible, rotate them regularly, and never hardcode production secrets in committed source files.

API Key Authentication

Pass your key through the SDK configuration. The client sends the bearer token on your behalf, so you do not need to manually set headers for every request.

auth.js
javascript
const client = new OrbitalsClient({
  apiKey: process.env.ORBITALS_API_KEY,
});

Environment Variables

Store your API key in environment variables and read it at runtime. This keeps secrets out of your codebase and makes environment-specific deployments safer and easier to manage.

.env
bash
# .env
ORBITALS_API_KEY=your-api-key-here
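When reading the key at runtime, it helps to fail fast with a clear message instead of passing undefined into the client. A minimal sketch; the requireEnv helper is ours, not an SDK export:

```javascript
// Read a required environment variable, throwing early if it is missing
// so misconfiguration is caught at startup rather than on the first request.
function requireEnv(name) {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// const client = new OrbitalsClient({ apiKey: requireEnv("ORBITALS_API_KEY") });
```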

Audio Processing

Use these endpoints for pre-recorded files and asynchronous workflows. You can control language behavior, model selection, subtitle output, and polling strategy based on your app requirements.

Selecting Language and model_name

Choose a language when you already know the source speech language, and set model_name when you want explicit control over the transcription model in use.

language-model.js
javascript
const upload = await client.audio.upload(audioBuffer, "audio.mp3", {
  language: "english",
  model_name: "Perigee-1",
  generate_srt: true,
});

Upload Audio for Transcription

Upload from either browser file inputs or Node.js buffers. The SDK accepts both and normalizes the request shape for you.

javascript
// Browser environment
const fileInput = document.querySelector('input[type="file"]');
const file = fileInput.files[0];
// You can optionally pass upload options (3rd arg) including model_name
const upload = await client.audio.upload(file, undefined, { model_name: "Perigee-1" });
console.log("Task ID:", upload.task_id);

Check Transcription Status

Use this endpoint to fetch the current state of a specific task by task ID. This is useful when you already have task IDs stored and want to retrieve status or final outputs on demand.

check-status.js
javascript
const status = await client.audio.getStatus(taskId);
console.log("Status:", status.status);
if (status.status === "completed") {
  console.log("Transcription:", status.result_text);
  if (status.srt_content) {
    console.log("SRT subtitles:", status.srt_content);
  }
}

Wait for Completion

For simpler flows, use waitForCompletion to handle polling internally with a configurable interval and timeout.

wait-completion.js
javascript
// Automatically polls until completion or timeout
const result = await client.audio.waitForCompletion(
  taskId,
  2000, // poll interval in ms (optional, default: 2000)
  300000 // max wait time in ms (optional, default: 300000 - 5 minutes)
);
console.log("Transcription:", result.result_text);
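Conceptually, waitForCompletion behaves like a polling loop over getStatus. If you need custom behavior (progress callbacks, cancellation), you can roll your own; this sketch uses a generic pollUntil helper of our own, not an SDK export:

```javascript
// Generic polling helper: repeatedly calls `check` until it returns a
// non-null value or the timeout elapses.
async function pollUntil(check, intervalMs = 2000, timeoutMs = 300000) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const value = await check();
    if (value !== null) return value;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("Timed out waiting for completion");
}

// Usage against the SDK would look roughly like:
// const result = await pollUntil(async () => {
//   const status = await client.audio.getStatus(taskId);
//   return status.status === "completed" ? status : null;
// });
```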

Batch Processing (Multiple Files)

For multi-file workflows, upload all files first, then resolve each task independently. This pattern helps you process partial successes without failing the entire batch when one file errors out.

batch-processing.js
javascript
import fs from "fs";
import path from "path";
import { OrbitalsClient } from "orbitalsai";
const client = new OrbitalsClient({
  apiKey: process.env.ORBITALS_API_KEY,
});
const files = [
  "audio/meeting-1.mp3",
  "audio/meeting-2.mp3",
  "audio/meeting-3.mp3",
];
// 1) Upload all files and keep task IDs
const uploadResults = await Promise.allSettled(
  files.map(async (filePath) => {
    const audioBuffer = fs.readFileSync(filePath);
    const upload = await client.audio.upload(audioBuffer, path.basename(filePath), {
      language: "english",
      model_name: "Perigee-1",
    });
    return { filePath, taskId: upload.task_id };
  })
);
const successfulUploads = uploadResults
  .filter((result) => result.status === "fulfilled")
  .map((result) => result.value);
// 2) Wait for each uploaded task to complete
const transcriptionResults = await Promise.allSettled(
  successfulUploads.map(async ({ filePath, taskId }) => {
    const result = await client.audio.waitForCompletion(taskId);
    return {
      filePath,
      taskId,
      text: result.result_text,
    };
  })
);
// 3) Handle successes and failures independently
const completed = transcriptionResults.filter((r) => r.status === "fulfilled");
const failed = transcriptionResults.filter((r) => r.status === "rejected");
console.log("Completed:", completed.length);
console.log("Failed:", failed.length);

You can also run batch uploads in a deferred mode: upload files now, persist the returned task IDs, and resolve results later. In that flow, use client.audio.getTasks() to list tasks and statuses, then call client.audio.getStatus(taskId) for any specific task when you need its latest status or result.
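Once you have fetched statuses (via getTasks() or per-task getStatus calls), grouping them by state makes dashboards and retry queues straightforward to build. A small sketch; the groupTasksByStatus helper is ours, and assumes each task carries the status field shown elsewhere in this document:

```javascript
// Group task objects by their `status` field ("completed", "processing",
// "failed", ...) so each bucket can be rendered or retried separately.
function groupTasksByStatus(tasks) {
  const groups = {};
  for (const task of tasks) {
    const key = task.status ?? "unknown";
    (groups[key] ??= []).push(task);
  }
  return groups;
}

// const groups = groupTasksByStatus(await client.audio.getTasks());
```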

Tasks and Models

List tasks to build history views and dashboards, and list models to dynamically expose model selection in your product UI.

get-tasks.js
javascript
const tasks = await client.audio.getTasks();
tasks.forEach((task) => {
  console.log(`${task.original_filename}: ${task.status}`);
});

Get Available Models

get-models.js
javascript
// Fetch available transcription models
const models = await client.audio.getModels();
models.forEach((model) => {
  console.log(model.model_name, model.is_active ? "(active)" : "(inactive)");
});

Streaming Transcription (Real-Time ASR)

The SDK supports real-time audio transcription over a WebSocket connection. This is ideal for live audio streams, voice interfaces, and real-time captioning. Supported languages: English, Hausa, Igbo, Yoruba.

Basic Streaming Setup

Use returnTimestamps: true in connect() to receive word-level timestamps in final events (e.g. for subtitles or alignment).

streaming-setup.js
javascript
import { StreamingClient } from "orbitalsai";
const streaming = new StreamingClient("your-api-key-here");
streaming.on("ready", (msg) => {
  console.log("Connected! Session:", msg.session_id);
});
streaming.on("partial", (msg) => {
  process.stdout.write(`\r${msg.text}`);
});
streaming.on("final", (msg) => {
  console.log(`\nFinal: ${msg.text}`);
  if (msg.timestamps?.length) {
    msg.timestamps.forEach((w) =>
      console.log(`  ${w.start.toFixed(2)}s–${w.end.toFixed(2)}s: ${w.text}`)
    );
  }
  console.log(
    `Cost: $${msg.cost.toFixed(6)} | Duration: ${msg.audio_seconds}s`
  );
});
streaming.on("error", (msg) => {
  console.error("Error:", msg.message);
});
await streaming.connect({
  language: "english", // "english" | "hausa" | "igbo" | "yoruba"
  sampleRate: 16000, // Audio sample rate (default: 16000)
  returnTimestamps: true, // Optional: word-level timestamps in final events
  debug: false,
});
console.log("Connected:", streaming.isConnected);
console.log("Connected:", streaming.isConnected);

Enabling Word-Level Timestamps

Set returnTimestamps: true while connecting to receive per-word timing in final messages. Each timestamp entry includes start, end, and text, which is useful for subtitles and transcript alignment.

word-timestamps.js
javascript
await streaming.connect({
  language: "english",
  returnTimestamps: true,
});
streaming.on("final", (msg) => {
  if (!msg.timestamps?.length) return;
  msg.timestamps.forEach((word) => {
    console.log(word.start, word.end, word.text);
  });
});
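If you want SRT output from streaming sessions (the server-side generate_srt option applies to uploads), you can format these word timestamps yourself. A sketch assuming the {start, end, text} shape described above; both helpers are ours:

```javascript
// Format a time in seconds as an SRT timestamp: HH:MM:SS,mmm
function toSrtTime(seconds) {
  const ms = Math.round(seconds * 1000);
  const h = String(Math.floor(ms / 3600000)).padStart(2, "0");
  const m = String(Math.floor((ms % 3600000) / 60000)).padStart(2, "0");
  const s = String(Math.floor((ms % 60000) / 1000)).padStart(2, "0");
  const frac = String(ms % 1000).padStart(3, "0");
  return `${h}:${m}:${s},${frac}`;
}

// Turn a list of {start, end, text} words into one SRT cue per word.
function wordsToSrt(words) {
  return words
    .map((w, i) => `${i + 1}\n${toSrtTime(w.start)} --> ${toSrtTime(w.end)}\n${w.text}`)
    .join("\n\n");
}
```

In practice you would likely merge consecutive words into phrase-level cues before emitting SRT, but the timestamp formatting is the same.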

Sending Audio Data

Audio must be sent as raw PCM data: Int16, mono, 16kHz.

send-audio.js
javascript
// From a Buffer (Node.js)
streaming.sendAudio(audioBuffer);
// From an ArrayBuffer (Browser)
streaming.sendAudio(audioArrayBuffer);
// From Uint8Array
streaming.sendAudio(uint8Array);
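Browser audio APIs typically hand you Float32 samples in the range [-1, 1] (for example from an AudioWorklet), so you usually need to convert to Int16 before calling sendAudio. A minimal conversion sketch; the helper name is ours:

```javascript
// Convert Float32 PCM samples in [-1, 1] to Int16 PCM, clamping
// out-of-range values to avoid integer overflow artifacts.
function float32ToInt16(float32Samples) {
  const int16 = new Int16Array(float32Samples.length);
  for (let i = 0; i < float32Samples.length; i++) {
    const s = Math.max(-1, Math.min(1, float32Samples[i]));
    int16[i] = s < 0 ? s * 0x8000 : s * 0x7fff;
  }
  return int16;
}

// streaming.sendAudio(float32ToInt16(samples).buffer);
```

Note this only changes sample format; if your capture rate is not 16kHz you also need to resample before sending.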

Billing Management

Track account balance and usage history so you can monitor spend, display cost analytics, and enforce usage policies in your app.

Get Account Balance

get-balance.js
javascript
const balance = await client.billing.getBalance();
console.log(`Balance: $${balance.balance}`);
console.log(`Last updated: ${balance.last_updated}`);

Get Daily Usage History

usage-history.js
javascript
// Get usage with date range
const usage = await client.billing.getUsageHistory({
  start_date: "2024-01-01",
  end_date: "2024-01-31",
  page: 1,
  page_size: 30,
});
console.log(`Total records: ${usage.total_records}`);
console.log("Period summary:", usage.period_summary);
usage.records.forEach((record) => {
  console.log(`${record.date}:`);
  console.log(
    `  Transcription: ${record.transcription_usage} ($${record.transcription_cost})`
  );
  console.log(`  Total: $${record.total_cost}`);
});
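To display a period total in your own UI, you can reduce the records client-side. A sketch assuming the record fields shown above; the summarizeUsage helper is ours, not an SDK export:

```javascript
// Sum daily usage records into period totals. Assumes each record exposes
// numeric `transcription_usage` and `total_cost` fields as shown above.
function summarizeUsage(records) {
  return records.reduce(
    (acc, record) => ({
      usage: acc.usage + Number(record.transcription_usage),
      cost: acc.cost + Number(record.total_cost),
    }),
    { usage: 0, cost: 0 }
  );
}
```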

Error Handling

The SDK provides detailed error classes for different scenarios:

error-handling.js
javascript
import {
  OrbitalsClient,
  AuthenticationError,
  ValidationError,
  RateLimitError,
  NetworkError,
} from "orbitalsai";
const client = new OrbitalsClient({ apiKey: "your-api-key" });
try {
  const result = await client.audio.upload(file);
} catch (error) {
  if (error instanceof AuthenticationError) {
    console.error("Invalid API key:", error.message);
  } else if (error instanceof ValidationError) {
    console.error("Invalid request:", error.message, error.details);
  } else if (error instanceof RateLimitError) {
    console.error("Rate limited. Retry after:", error.retryAfter);
  } else if (error instanceof NetworkError) {
    console.error("Network error:", error.message);
  } else {
    console.error("Unknown error:", error);
  }
}

Error Types

AuthenticationError

Invalid or missing API key (401)

ValidationError

Invalid request parameters (422)

RateLimitError

Too many requests (429)

NetworkError

Connection or timeout issues
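These error classes compose well with a retry wrapper for transient failures. A sketch of a generic helper of our own (not an SDK export) that retries on a supplied predicate with exponential backoff:

```javascript
// Retry `fn` up to `maxAttempts` times, backing off exponentially, but only
// when `shouldRetry(error)` says the failure is transient.
async function withRetries(fn, shouldRetry, maxAttempts = 3, baseDelayMs = 500) {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (error) {
      if (attempt >= maxAttempts || !shouldRetry(error)) throw error;
      const delay = baseDelayMs * 2 ** (attempt - 1);
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}

// Usage with the SDK might look like:
// const result = await withRetries(
//   () => client.audio.upload(file),
//   (err) => err instanceof RateLimitError || err instanceof NetworkError
// );
```

Remember the client already retries internally (maxRetries), so an outer wrapper like this is mainly useful for coarser units of work, such as a whole upload-and-wait flow.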

TypeScript Support

The SDK is written in TypeScript and provides comprehensive type definitions:

types.ts
typescript
import type {
  OrbitalsConfig,
  AudioUploadOptions,
  ModelInfo,
  TaskStatus,
  BillingBalance,
  UsageHistory,
} from "orbitalsai";
// All types are exported for your convenience

Complete Example

This end-to-end example combines secure initialization, upload options, completion polling, and billing checks in one flow that you can adapt for production.

complete-example.js
javascript
import { OrbitalsClient } from "orbitalsai";
import fs from "fs";
async function processAudio(filePath) {
  const client = new OrbitalsClient({
    apiKey: process.env.ORBITALS_API_KEY,
  });
  try {
    // 1. Upload file
    console.log("Uploading audio...");
    const audioBuffer = fs.readFileSync(filePath);
    const upload = await client.audio.upload(audioBuffer, "audio.mp3", {
      generate_srt: true,
      language: "english",
      model_name: "Perigee-1",
    });
    console.log("Task created:", upload.task_id);
    // 2. Wait for completion
    console.log("Processing...");
    const result = await client.audio.waitForCompletion(upload.task_id);
    console.log("Processing complete!");
    // 3. Check balance
    const balance = await client.billing.getBalance();
    console.log(`Remaining balance: $${balance.balance}`);
    return result;
  } catch (error) {
    console.error("Error processing audio:", error);
    throw error;
  }
}
processAudio("./my-audio.mp3");
processAudio("./my-audio.mp3");