v3 feature matrix
As v3 is in Developer Preview, some features are not yet available. Take a look at the Feature matrix to check what is currently available.
Changes from v2 to v3
The main difference is that things in v3 are far simpler. That’s because in v3 your code is deployed to our servers (unless you self-host) which are long-running.
- No timeouts.
- No `io.runTask()` (and no `cacheKeys`).
- Just use official SDKs, not integrations.
- `task`s are the new primitive, not `job`s.
OpenAI example comparison
This is a (very contrived) example that does a long OpenAI API call (>10s), stores the result in a database, waits for 5 mins, and then returns the result.
First, the old v2 code, which uses the OpenAI integration. Comments inline:
```typescript
import { client } from "~/trigger";
import { eventTrigger } from "@trigger.dev/sdk";
import { OpenAI } from "@trigger.dev/openai";
import { z } from "zod";

const openai = new OpenAI({
  id: "openai",
  apiKey: process.env["OPENAI_API_KEY"]!,
});

client.defineJob({
  id: "openai-tasks",
  name: "OpenAI Tasks",
  version: "0.0.1",
  trigger: eventTrigger({
    name: "openai.tasks",
    schema: z.object({
      prompt: z.string(),
    }),
  }),
  // The integration has to be registered on the job
  integrations: {
    openai,
  },
  run: async (payload, io, ctx) => {
    // backgroundCreate is needed so the long OpenAI call doesn't hit a
    // serverless timeout
    const chatCompletion = await io.openai.chat.completions.backgroundCreate(
      "background-chat-completion",
      {
        messages: [{ role: "user", content: payload.prompt }],
        model: "gpt-3.5-turbo",
      }
    );

    const result = chatCompletion.choices[0]?.message.content;
    if (!result) {
      throw new Error("No result from OpenAI");
    }

    // The DB call has to be wrapped in runTask (with a cache key) so the
    // run can resume without repeating it
    const dbRow = await io.runTask("store-in-db", async (task) => {
      return saveToDb(result);
    });

    // Waits also need a cache key
    await io.wait("wait some time", 60 * 5);

    return result;
  },
});
```
In v3 we eliminate a lot of code, mainly because we don't need tricks to avoid timeouts. Here's the equivalent v3 code:
```typescript
import { logger, task, wait } from "@trigger.dev/sdk/v3";
import OpenAI from "openai";

// Use the official OpenAI SDK directly — no integration needed
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

export const openaiTask = task({
  id: "openai-task",
  retry: {
    maxAttempts: 3,
  },
  run: async (payload: { prompt: string }) => {
    // No backgroundCreate needed: there are no timeouts to work around
    const chatCompletion = await openai.chat.completions.create({
      messages: [{ role: "user", content: payload.prompt }],
      model: "gpt-3.5-turbo",
    });

    const result = chatCompletion.choices[0]?.message.content;
    if (!result) {
      throw new Error("No result from OpenAI");
    }

    // No io.runTask wrapper or cache key needed
    const dbRow = await saveToDb(result);

    // Waits don't need cache keys either
    await wait.for({ minutes: 5 });

    return result;
  },
});
```
Triggering tasks comparison
In v2 there were different trigger types and triggering each type was slightly different.
```typescript
async function yourBackendFunction() {
  // Event-triggered jobs are triggered by sending an event
  const event = await client.sendEvent({
    name: "openai.tasks",
    payload: { prompt: "Create a good programming joke about background jobs" },
  });

  // Invoke-triggered jobs are invoked directly
  const { id } = await invocableJob.invoke({
    prompt: "What is the meaning of life?",
  });
}
```
We've unified triggering in v3. You use `trigger()` or `batchTrigger()`, which work on any type of task, including scheduled tasks, webhooks, and more.
```typescript
async function yourBackendFunction() {
  const handle = await openaiTask.trigger({
    payload: {
      prompt: "Tell me a programming joke",
    },
  });
}
```
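`batchTrigger()` can be sketched along the same lines. This is an assumption-laden sketch, not confirmed API from this page: it assumes the same `openaiTask` from above, a hypothetical import path, and that the Developer Preview `batchTrigger()` accepts an array of items shaped like single `trigger()` calls.

```typescript
// Hypothetical path to the task file defined earlier
import { openaiTask } from "./trigger/openai";

async function triggerJokeBatch(prompts: string[]) {
  // Assumed shape: one { payload } item per run, mirroring trigger()
  const batchHandle = await openaiTask.batchTrigger(
    prompts.map((prompt) => ({ payload: { prompt } }))
  );
  return batchHandle;
}
```

Running this requires a configured Trigger.dev project and API key; check the v3 reference for the exact batch item shape in your SDK version.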
Upgrading your project
Upgrade the v2 Trigger.dev packages
You can run this command to upgrade all the packages to the beta:
```bash
npx @trigger.dev/cli@beta update --to beta
```
Follow the v3 quick start
Using v2 together with v3
You can use v2 and v3 in the same codebase. This can be useful if you already have v2 jobs, or if we don't yet support features you need.
This documentation is coming soon.