One of the most annoying parts of durable workflow systems is that they often make good TypeScript APIs worse. You start with a clean object that represents something real in the system, then the first suspend or resume boundary forces you to flatten it into plain data. After that, every step gets a little more procedural, a little more repetitive, and a little less honest about what the code is actually doing.
That is why Vercel's custom class serialization support in the Workflow SDK is worth paying attention to. It is not a flashy headline feature, and it does not change the core discipline of durable systems. What it does change is the shape of workflow code. You can now preserve a better API around a small set of real workflow objects instead of rebuilding them by hand at every boundary.
The old workaround was always a smell
Before this release, the safe answer was simple: pass plain JSON-like data between steps and reconstruct anything richer yourself. That worked, but it had a cost. A sandbox handle became an object id plus a helper module plus some loose metadata. A review task became a plain record and a few utility functions living somewhere else. The runtime stayed durable, but the code stopped matching the mental model.
Experienced teams usually learn to live with that tradeoff, because durability matters more than elegance. Still, it creates friction in exactly the places where workflow code should read cleanly. You want to see a job handle and call cancel on it. You want to see a sandbox and call runCommand. You do not want every step to begin with five lines of reconstruction boilerplate just to get back to a useful abstraction.
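As a concrete illustration of that workaround, here is a hedged sketch of the pre-serialization pattern: only a plain record crosses the step boundary, and the behavior that belongs on a handle lives in a loose helper function instead. The names (`SandboxRecord`, `runCommandInSandbox`, `stepTwo`) are hypothetical, not from any SDK.

```typescript
// Only plain, JSON-safe data crosses the durable boundary.
type SandboxRecord = { sandboxId: string; region: string }

// The handle's behavior gets exiled to a helper module somewhere else.
function runCommandInSandbox(record: SandboxRecord, command: string): string {
  // In a real workflow this would call the remote sandbox service.
  return `ran "${command}" in ${record.sandboxId} (${record.region})`
}

function stepTwo(record: SandboxRecord): string {
  // Every step starts by reassembling record + helper into something useful.
  return runCommandInSandbox(record, 'npm test')
}
```

The runtime is perfectly happy with this, but the code no longer says "here is a sandbox you can run commands in"; it says "here is a record, go find the functions that know what to do with it."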
What changed on April 2
Vercel added custom class serialization hooks through @workflow/serde. The important detail is that class instances are still persisted as plain serialized data. The runtime is not preserving live process state for you. Instead, you get an explicit way to say how a class should serialize and how it should be reconstructed when the workflow resumes.
```typescript
import { WORKFLOW_SERIALIZE, WORKFLOW_DESERIALIZE } from '@workflow/serde'

type SerializedSandboxHandle = {
  sandboxId: string
  region: string
}

export class SandboxHandle {
  constructor(
    public sandboxId: string,
    public region: string,
  ) {}

  static [WORKFLOW_SERIALIZE](instance: SandboxHandle): SerializedSandboxHandle {
    return {
      sandboxId: instance.sandboxId,
      region: instance.region,
    }
  }

  static [WORKFLOW_DESERIALIZE](data: SerializedSandboxHandle): SandboxHandle {
    return new SandboxHandle(data.sandboxId, data.region)
  }

  async runCommand(command: string) {
    'use step'
    // call remote sandbox service here
  }
}
```
That is a small API change with a practical payoff. The workflow still respects the durable boundary, but the code on either side of that boundary can stay shaped around the real thing you are modeling.
Where this helps immediately
The obvious fit is remote execution. Sandboxes, background jobs, agent sessions, approval handles, repository snapshots, and checkout flows all benefit from having a thin object with a stable identity and a few meaningful methods. Those are not fake abstractions. They are real units in the system, and your code reads better when they stay explicit.
This also fits the way modern AI product workflows are actually built. The hard part is usually not one model call. It is the chain around it: create execution context, write files, call tools, checkpoint state, wait for a human, resume, retry safely, and only then commit the result. If every one of those boundaries degrades the code into anonymous objects and utility helpers, the workflow becomes harder to maintain than it needs to be.
Custom serialization gives teams a middle ground. You do not have to choose between clean APIs and durable state. You can keep a real handle object where it earns its keep, while still persisting only the minimal data required to rebuild that handle later.
Where teams will absolutely overuse it
The wrong reaction is to serialize every class in sight. Durable workflows still persist data, not living resources. Database clients, HTTP clients with internal connection state, open sockets, caches, SDK instances with hidden internals, and anything that depends on local process memory are still the wrong thing to carry across a resume boundary.
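The safe pattern for resources like these is to persist only the identity needed to reconnect and rebuild the live object lazily after resume. The sketch below uses illustrative local method names (`toSerialized`, `fromSerialized`) rather than the SDK's hooks, and a stub object in place of a real client.

```typescript
// Sketch: persist the connection identity, never the live client.
class QueueHandle {
  // Stands in for a real client with sockets and internal state.
  private client: { url: string } | null = null

  constructor(public queueUrl: string) {}

  // Serialize only the stable identity...
  toSerialized(): { queueUrl: string } {
    return { queueUrl: this.queueUrl }
  }

  // ...and reconstruct a handle that knows how to reconnect.
  static fromSerialized(data: { queueUrl: string }): QueueHandle {
    return new QueueHandle(data.queueUrl)
  }

  // The live client is rebuilt on first use after resume, not carried across.
  getClient(): { url: string } {
    if (!this.client) this.client = { url: this.queueUrl } // reconnect here
    return this.client
  }
}
```

The line to hold is that anything with process-local state gets recreated on the resumed side, never smuggled through the serialized payload.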
The second trap is using classes to hide oversized state. If the serialized payload is huge, fragile, or full of incidental fields, wrapping it in a class does not make the design better. It just makes the mess look more respectable. Rehydration should feel boring. If deserialize logic is doing something clever, the object probably contains too much.
The third trap is forgetting that workflows often outlive a single deployment. If you serialize an object shape today and resume the workflow after the code changes next week, your deserializer has to cope with that reality. Version tolerance matters more than class syntax here.
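One way to make that concrete is a deserializer that accepts both the old and the new payload shape. This is a hedged sketch with invented type names; the point is only that the resume path must handle payloads written by last week's code.

```typescript
// Shape persisted by last week's deployment.
type SandboxV1 = { sandboxId: string }
// Shape persisted after this week's change added a region field.
type SandboxV2 = { sandboxId: string; region: string }

function deserializeSandbox(data: SandboxV1 | SandboxV2): SandboxV2 {
  // Old payloads predate the region field, so fill a safe default
  // instead of crashing mid-resume.
  return {
    sandboxId: data.sandboxId,
    region: 'region' in data ? data.region : 'iad1',
  }
}
```

A workflow suspended before the deploy and resumed after it hits exactly this path, which is why the tolerant branch belongs in the deserializer rather than scattered through the steps.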
A production rule of thumb
My rule would be simple: default to plain data, then promote something to a serialized class only if it represents a real handle or domain object that makes the workflow meaningfully easier to read. The class should hold a stable identity, minimal metadata, and a small set of methods that still make sense after resume.
In practice that means keeping the serialized form small, testing the round trip explicitly, and treating every method that touches the outside world as an operational boundary. This feature improves developer experience, but it does not remove the usual durable workflow concerns around retries, idempotency, latency, and side effects.
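Testing the round trip explicitly can be as simple as pushing the serialized form through actual JSON and checking that the revived handle matches. The class here is a local stand-in with plain `serialize`/`deserialize` methods, mirroring the shape of the SDK hooks without depending on them.

```typescript
type Serialized = { sandboxId: string; region: string }

class Handle {
  constructor(public sandboxId: string, public region: string) {}
  static serialize(h: Handle): Serialized {
    return { sandboxId: h.sandboxId, region: h.region }
  }
  static deserialize(d: Serialized): Handle {
    return new Handle(d.sandboxId, d.region)
  }
}

const original = new Handle('sb_42', 'iad1')
// Round trip through real JSON, the way a durable store would see it.
const revived = Handle.deserialize(
  JSON.parse(JSON.stringify(Handle.serialize(original))),
)
// revived.sandboxId === 'sb_42' && revived.region === 'iad1'
```

If this test ever needs clever setup to pass, that is usually a sign the serialized form has grown beyond a handle.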
Why this release matters
A lot of workflow tooling gets more powerful by making code more awkward. This release goes the other direction. It makes a durable system nicer to work with without lying about what durability costs. That is the right kind of improvement. It does not encourage magical thinking. It just removes a piece of needless friction that experienced teams hit quickly once their workflow code moves beyond toy demos.
If you are already building with Vercel Workflow, this is worth adopting selectively. Use it for the handful of objects that make your workflow read like the system you actually built. Leave the rest as plain data. That balance is where the feature becomes genuinely useful instead of just interesting.