
API Reference

All backend functionality is delivered through Next.js API Routes under /api/*. Each route runs in a serverless environment (Edge Runtime where possible) and can therefore be called from any HTTP client.

Authentication is handled via bearer tokens where required; public routes (e.g. ISS position) need none.
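For protected routes, pass the token in an Authorization header. The snippet below is a minimal sketch run from a server-side script: the API_TOKEN environment variable and the request body shape for /api/moderate are assumptions for illustration, not part of a documented contract.

// Sketch of an authenticated call. API_TOKEN and the body shape are assumptions.
const res = await fetch('http://localhost:3000/api/moderate', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${process.env.API_TOKEN}`,
  },
  body: JSON.stringify({ comment: 'Some user comment to check' }),
});

console.log(await res.json()); // moderation verdict for the comment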


Index

Method       Route                          Description
GET          /api/iss                       Real-time International Space Station position.
POST         /api/chat                      Streaming AI chat completions with tool calling enabled.
POST         /api/image                     Generates an image using the OpenAI Images API.
GET          /api/articles/ingest           Embed & store the sample articles in the vector DB.
GET          /api/articles/search?query=    Semantic search across the vector DB.
POST         /api/moderate                  Content moderation – flags rude / offensive comments.
GET / POST   /api/podcast                   Text-to-Speech (TTS) – returns an MP3 / WAV audio stream.
GET          /api/session                   Creates an ephemeral OpenAI real-time session token.
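For example, the ISS endpoint from the table above needs no authentication and can simply be polled. This is a sketch; the response is logged as-is because its exact shape is not documented here.

// Poll /api/iss every five seconds for a fresh position.
async function pollIss(): Promise<void> {
  const res = await fetch('http://localhost:3000/api/iss');
  if (!res.ok) throw new Error(`ISS request failed: ${res.status}`);
  console.log(await res.json()); // latest position payload
}

setInterval(() => {
  pollIss().catch(console.error);
}, 5_000);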

Examples

1. Chat

curl -X POST \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Give me a TL;DR of the latest news"}
    ]
  }' \
  http://localhost:3000/api/chat

The endpoint returns a Server-Sent Event (SSE) stream. Read it chunk-by-chunk for real-time updates.

data: {"id":"chatcmpl…","choices":[{"delta":{"content":"Sure!"}}]}
const res = await fetch('/api/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ messages }),
});

// Create a reader for the ReadableStream and process chunks
const reader = res.body?.getReader();
// …
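Continuing from the reader above, a minimal consumer loop might look like the sketch below. It assumes each chunk arrives as UTF-8 text in the data: … format shown earlier.

const decoder = new TextDecoder();

while (reader) {
  const { done, value } = await reader.read();
  if (done) break;

  // A chunk can contain one or more "data: {...}" lines.
  for (const line of decoder.decode(value, { stream: true }).split('\n')) {
    if (line.startsWith('data: ')) {
      console.log(line.slice('data: '.length)); // raw JSON delta
    }
  }
}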

2. Article search

curl "http://localhost:3000/api/articles/search?query=typescript%203.5"

Response (truncated):

[ { "id": "getting-started-with-ts", "score": 0.92, "title": "Getting started with TypeScript 3.5" } ]

Versioning

The API is currently experimental – expect breaking changes while the project is 0.x.
Pin your client to a specific commit hash if stability is critical.


⚠️ Never expose your OpenAI API key in client-side code – always call the serverless routes instead.
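The pattern looks roughly like the sketch below: the key is read from the server-side environment inside a route handler and never reaches the browser. This is illustrative only (it assumes the official openai npm client and a Next.js App Router handler), not this project's actual route code.

import OpenAI from 'openai';

// The key stays on the server; browsers only ever talk to /api/chat.
const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

export async function POST(req: Request) {
  const { messages } = await req.json();
  const completion = await client.chat.completions.create({
    model: 'gpt-4o-mini', // placeholder model name
    messages,
  });
  return Response.json(completion.choices[0].message);
}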
