Get Your Agent to Help
Install the OpenAI API skill so your coding agent knows the current API patterns:
```sh
npx skills add jezweb/claude-skills@openai-api
```
Then ask your agent: "help me build the daily-digest run script using the OpenAI API"
Set Your Key
Get a key from platform.openai.com/api-keys:
```sh
export OPENAI_API_KEY="sk-..."
bun run sync
```
The Pattern
The chat completions endpoint hasn't changed — POST https://api.openai.com/v1/chat/completions with an Authorization: Bearer header:
```typescript
async function ask(prompt: string): Promise<string> {
  const resp = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4.1-mini",
      messages: [{ role: "user", content: prompt }],
      max_tokens: 1024,
    }),
  });
  if (!resp.ok) throw new Error(`OpenAI ${resp.status}: ${await resp.text()}`);
  const data = (await resp.json()) as any;
  return data.choices[0].message.content;
}
```
Current Models (March 2026)
| Model | Speed | Cost | Best For |
|---|---|---|---|
| gpt-4.1-mini | Fast | Cheapest | Most scheduled jobs — summaries, classification |
| gpt-4.1 | Mid | Mid | Complex reasoning |
| gpt-5-mini | Fast | Mid | Latest capabilities, still affordable |
| gpt-5.3-codex | Mid | Higher | Code-heavy tasks |
GPT-4o was retired from ChatGPT in Feb 2026 but remains available in the API. Start with gpt-4.1-mini for scheduled jobs — it's fast and cheap.
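Model names churn, and the table above will date. One option is to read the model from the environment so a scheduled job can be bumped to a newer model without a code change. This is a sketch; `OPENAI_MODEL` and `resolveModel` are conventions invented here, not part of any official API:

```typescript
// Resolve the model from the environment with a safe default, so a job can
// be switched via an env var or schedule-file entry instead of a redeploy.
function resolveModel(env: Record<string, string | undefined> = process.env): string {
  return env.OPENAI_MODEL?.trim() || "gpt-4.1-mini";
}
```

Then pass `resolveModel()` as the `model` field in the request body.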
Error Handling
LLM APIs fail. Rate limits hit. Always retry with backoff:
```typescript
async function askWithRetry(prompt: string, retries = 3): Promise<string> {
  for (let i = 0; i < retries; i++) {
    try {
      return await ask(prompt);
    } catch (err) {
      if (i === retries - 1) throw err;
      await Bun.sleep(1000 * Math.pow(2, i)); // 1s, 2s, 4s
    }
  }
  throw new Error("unreachable");
}
```
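One caveat: the loop retries every failure, including ones that can never succeed. A 401 from a bad key will burn all three attempts before surfacing. A sketch of a retryability check, assuming `ask` is adapted to throw a custom `OpenAIError` (a type invented here, not part of the API) that carries the HTTP status:

```typescript
// Hypothetical error type: ask() would throw this instead of a bare Error,
// so callers can inspect the HTTP status without parsing the message string.
class OpenAIError extends Error {
  constructor(public status: number, body: string) {
    super(`OpenAI ${status}: ${body}`);
  }
}

// 429 (rate limit) and 5xx (transient server trouble) are worth retrying;
// other 4xx errors (bad key, malformed request) fail identically every time.
function isRetryable(err: unknown): boolean {
  if (err instanceof OpenAIError) return err.status === 429 || err.status >= 500;
  return true; // network failures from fetch surface as TypeError; retry those
}
```

In `askWithRetry`, rethrow immediately when `!isRetryable(err)` instead of sleeping.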
API Key at Runtime
The manager injects PATH into the plist but not arbitrary env vars. For the API key to be available when launchd runs your job:
```typescript
// Read from a dotfile — most durable for scheduled jobs
const key = (await Bun.file(`${Bun.env.HOME}/.config/openai-key`).text()).trim();
```
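A runtime sketch that combines both sources: the env var when present (manual runs), the dotfile otherwise (launchd's bare environment). The `~/.config/openai-key` path is this guide's convention, and `loadApiKey` is a name introduced here; it uses `node:fs`, which Bun also supports:

```typescript
import { readFileSync } from "node:fs";
import { homedir } from "node:os";

// Prefer the env var (set when you run the job by hand), fall back to the
// dotfile (survives launchd's minimal environment), and fail loudly otherwise.
function loadApiKey(keyFile = `${homedir()}/.config/openai-key`): string {
  const fromEnv = process.env.OPENAI_API_KEY?.trim();
  if (fromEnv) return fromEnv;
  try {
    return readFileSync(keyFile, "utf8").trim();
  } catch {
    throw new Error(`OPENAI_API_KEY not set and ${keyFile} not readable`);
  }
}
```

Failing loudly matters for scheduled jobs: a missing key shows up in the logs instead of producing an empty digest.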
Test It
```sh
bun run src/cli.ts kick daily-digest
bun run src/cli.ts logs daily-digest
```
Companion Notes
Branch: OpenAI API
Direct API calls with fetch. You manage the key, you pick the model.
Setup
```sh
# Set your API key: exported at sync time, it's captured into the plist EnvironmentVariables
export OPENAI_API_KEY="sk-..."
```
The key needs to be available when you run bun run sync. The manager captures process.env.PATH into the plist — but custom env vars need to be in the shell environment at sync time OR set in the schedule file.
Option A: Export before sync
```sh
export OPENAI_API_KEY="sk-..."
bun run sync
```
Option B: Add to schedule file
```json
{"type": "scheduled", "calendar": {"Hour": 8}, "env": {"OPENAI_API_KEY": "sk-..."}}
```
Note: Option B puts the key in a file. Fine for local-only, but don't commit it.
The Pattern
```typescript
#!/usr/bin/env bun
const OPENAI_API_KEY = process.env.OPENAI_API_KEY;
if (!OPENAI_API_KEY) {
  console.error("[job] OPENAI_API_KEY not set");
  process.exit(1);
}

async function ask(prompt: string): Promise<string> {
  const resp = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: prompt }],
      max_tokens: 1024,
    }),
  });
  if (!resp.ok) {
    const err = await resp.text();
    throw new Error(`OpenAI ${resp.status}: ${err}`);
  }
  const data = (await resp.json()) as any;
  return data.choices[0].message.content;
}

// Use it
const result = await ask("Summarize these notes...");
console.log(result);
```
Model Selection
| Model | Cost | Speed | Best For |
|---|---|---|---|
| gpt-4o-mini | Cheapest | Fast | Most jobs — classification, summaries, short tasks |
| gpt-4o | Mid | Mid | Complex reasoning, longer outputs |
| o3-mini | Higher | Slower | Multi-step reasoning, code generation |
Start with gpt-4o-mini. It handles 90% of scheduled job tasks.
Error Handling
```typescript
async function askWithRetry(prompt: string, retries = 3): Promise<string> {
  for (let i = 0; i < retries; i++) {
    try {
      return await ask(prompt);
    } catch (err) {
      if (i === retries - 1) throw err;
      const waitMs = 1000 * Math.pow(2, i); // exponential backoff
      console.warn(`[job] Retry ${i + 1}/${retries} in ${waitMs}ms: ${err}`);
      await Bun.sleep(waitMs);
    }
  }
  throw new Error("unreachable");
}
```
Rate limits, transient errors, timeouts — scheduled jobs WILL hit these. Retry with backoff.
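One refinement if several scheduled jobs share a key: plain `2^i` backoff makes jobs that failed together retry together, hammering the same rate limit in lockstep. Full jitter (randomize the wait up to the exponential ceiling) spreads them out. A sketch; `backoffMs` is a name introduced here:

```typescript
// Full-jitter backoff: wait a random amount in [0, min(cap, base * 2^attempt)]
// so concurrent jobs that failed at the same moment don't all retry at once.
function backoffMs(attempt: number, baseMs = 1000, capMs = 30_000): number {
  const ceiling = Math.min(capMs, baseMs * 2 ** attempt);
  return Math.floor(Math.random() * ceiling);
}
```

Drop it in place of `1000 * Math.pow(2, i)` in `askWithRetry`.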
Verification
```sh
# Test the API key
curl -s https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY" | head -1

# Test through the job
bun run src/cli.ts kick daily-digest
bun run src/cli.ts logs daily-digest
```