Draftr AI

2h ago

The format you already know
URL → clean markdown
Sections → slots → QA
ElevenLabs TTS → timed subtitle track
9:16 · libass · sidechain
Guest library → personal library
UI + API + auth + render verification

Short-form video, pointed at things worth knowing.

People will sit through a five-minute breakdown of quantum computing delivered over Minecraft parkour. They won't open the paper it's based on. We didn't set out to exploit that. We set out to point it at something worth knowing.

Draftr is a pipeline that takes any written source (a URL, a PDF, a raw paste) and produces a fully rendered, narrated, subtitled short-form video. No editing. No recording. No script writing. You drop the content in chat. The machine handles everything else.

The same format that made you watch six hours of gameplay clips can make you actually learn something. We just had to build the machine.

Getting the text out of anything.

The internet stores knowledge in dozens of formats: behind JavaScript renders, inside PDF binaries, spread across multi-page documentation sites. We use Firecrawl. For a single article URL, the backend calls Firecrawl's scrape endpoint requesting both markdown and summary formats. Firecrawl handles the render, strips the nav, footer, and ads, and returns the actual content as clean markdown.
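A minimal sketch of what that scrape call might look like. The endpoint path, payload shape, and the `data` key in the response are assumptions based on Firecrawl's public REST API, not the actual Draftr backend code:

```python
import json
import urllib.request

FIRECRAWL_SCRAPE = "https://api.firecrawl.dev/v1/scrape"  # assumed endpoint path

def build_scrape_request(url: str) -> dict:
    # Ask for clean markdown plus a short summary in a single call.
    return {"url": url, "formats": ["markdown", "summary"]}

def scrape_article(url: str, api_key: str) -> dict:
    # POST the scrape request; Firecrawl renders the page and returns
    # the main content with nav, footer, and ads stripped.
    req = urllib.request.Request(
        FIRECRAWL_SCRAPE,
        data=json.dumps(build_scrape_request(url)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.load(resp)["data"]  # assumed response envelope
```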

For an entire documentation website, we first call the map endpoint to enumerate candidate URLs across the domain, then rank and filter them by path heuristics. Paths containing /blog, /docs, or /research score up. Paths like /tag or /legal score down. The top-ranked URLs go into a batch crawl job, polled until completion.
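The ranking step reduces to a small scoring heuristic. The exact weights and the cutoff below are illustrative, not the production values:

```python
BOOST = {"/blog": 2, "/docs": 2, "/research": 2}   # content-bearing paths score up
PENALIZE = {"/tag": -3, "/legal": -3}              # index/boilerplate paths score down

def score_path(url: str) -> int:
    # Sum heuristic points for every known fragment the URL contains.
    score = 0
    for frag, pts in {**BOOST, **PENALIZE}.items():
        if frag in url:
            score += pts
    return score

def rank_candidates(urls: list[str], top_n: int = 25) -> list[str]:
    # The highest-scoring URLs go into the batch crawl job.
    return sorted(urls, key=score_path, reverse=True)[:top_n]
```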

CrewAI maps the source before a single line gets written.

Once the source text is ingested, it does not go straight into one giant summary prompt. The backend first hands the markdown to CrewAI, which splits it into meaningful sections, plans coverage, and assigns each short a different section plus a different angle family so the batch actually spreads across the source.

Each planned slot is then written with OpenAI. The model gets the local section context, the angle it needs to hit, the pacing target, and grounding constraints. The result is not one generic recap, but a structured batch of scripts tuned for 25–30 second videos that each sound distinct.
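One way to turn a planned slot into a grounded prompt. The words-per-second rate and the prompt wording here are assumptions; the real system presumably uses a richer structured prompt:

```python
def build_slot_prompt(section_text: str, angle: str, target_seconds: int = 27) -> str:
    # ~2.5 spoken words per second (assumed narration rate) converts the
    # 25-30s pacing target into a hard word budget for the script.
    word_budget = int(target_seconds * 2.5)
    return (
        f"Write a {target_seconds}-second short-form video script, "
        f"at most {word_budget} words.\n"
        f"Angle: {angle}\n"
        "Ground every claim in the source section below; do not invent facts.\n"
        "---\n"
        f"{section_text}"
    )
```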

After writing, the backend runs a QA and repair pass that checks for overlap, stale hook phrasing, schema shape, pacing, and grounded claims. Failed slots are repaired and retried for up to three passes, so one weak script does not stall the rest of the bundle.
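The repair loop can be sketched independently of any one check. `check` and `repair` stand in for the real QA and rewrite logic:

```python
def qa_and_repair(slots, check, repair, max_passes=3):
    # check(slot) -> list of issue strings (empty means pass);
    # repair(slot, issues) rewrites the slot in place.
    # Failures are retried for up to max_passes so one weak script
    # never stalls the rest of the bundle.
    for _ in range(max_passes):
        failing = [(s, check(s)) for s in slots]
        failing = [(s, issues) for s, issues in failing if issues]
        if not failing:
            break
        for slot, issues in failing:
            repair(slot, issues)
    # Return only the slots that cleared QA.
    return [s for s in slots if not check(s)]
```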

The audio comes back with timing, so the captions are already in sync.

Once a script clears QA, Draftr runs the narration through ElevenLabs TTS by default. Each short gets a narrator voice selected up front, and if that voice returns bad audio, the backend can automatically retry with the default voice instead of killing the whole render.
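The voice-fallback behavior is a small wrapper around the TTS call. The fallback voice id and the error type are placeholders; `tts_call` stands in for the actual ElevenLabs client:

```python
DEFAULT_VOICE = "default_narrator"  # hypothetical fallback voice id

def synthesize_with_fallback(tts_call, script: str, voice_id: str) -> bytes:
    # tts_call(script, voice_id) -> audio bytes, raising on bad audio.
    # Retry once with the default voice instead of killing the whole render.
    try:
        return tts_call(script, voice_id)
    except RuntimeError:
        if voice_id == DEFAULT_VOICE:
            raise  # nothing left to fall back to
        return tts_call(script, DEFAULT_VOICE)
```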

The returned audio already includes word-level timing data. Draftr stores that alignment, turns the timed words into an animated Advanced SubStation Alpha (.ass) subtitle track, and picks the subtitle preset that best fits the gameplay energy. By the time the final encode starts, the voiceover and captions are already locked together.
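Turning timed words into an .ass track is mostly timestamp bookkeeping. A minimal sketch, assuming one Dialogue event per word and a style name defined elsewhere in the subtitle header:

```python
def ass_time(seconds: float) -> str:
    # ASS timestamps are H:MM:SS.cc (centisecond precision).
    cs = round(seconds * 100)
    h, rem = divmod(cs, 360000)
    m, rem = divmod(rem, 6000)
    s, cs = divmod(rem, 100)
    return f"{h}:{m:02d}:{s:02d}.{cs:02d}"

def words_to_dialogue(words, style="Default"):
    # words: (text, start_s, end_s) triples from the TTS alignment.
    # One Dialogue event per word gives a word-by-word caption effect.
    return "\n".join(
        f"Dialogue: 0,{ass_time(start)},{ass_time(end)},{style},,0,0,0,,{text}"
        for text, start, end in words
    )
```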

Clip in. MP4 out. Nothing in between.

The final step is assembly. The renderer takes three inputs (gameplay video, narration audio, and the generated .ass subtitle file) and runs them through a single FFmpeg encode pass. The gameplay clip is cropped and scaled to fill a 9:16 vertical frame. The narration audio goes on the primary audio track.

The subtitle track is burned directly into the video using libass during the encode pass. The final file is a self-contained MP4. It goes straight to Supabase storage and the public URL is returned to the orchestrator. A batch of ten videos generates in about three minutes from the moment ingestion completes.
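The single-pass encode described above can be sketched as an FFmpeg argument builder. Codec choices and the exact crop expression are assumptions; the `crop`, `scale`, and `ass` filters are standard FFmpeg filters (the last requires a libass build):

```python
def build_render_cmd(gameplay: str, narration: str, subs: str, out: str,
                     w: int = 1080, h: int = 1920) -> list[str]:
    # Center-crop the gameplay to a 9:16 frame, scale it, and burn the
    # subtitles via libass, all in one encode pass; the narration becomes
    # the only audio track.
    vf = f"crop=ih*{w}/{h}:ih,scale={w}:{h},ass={subs}"
    return [
        "ffmpeg", "-y",
        "-i", gameplay,
        "-i", narration,
        "-map", "0:v:0",      # video from the gameplay clip
        "-map", "1:a:0",      # audio from the narration
        "-vf", vf,
        "-shortest",          # stop when the shorter input (narration) ends
        "-c:v", "libx264", "-c:a", "aac",
        out,
    ]
```

Handing this list to `subprocess.run` produces the self-contained MP4 that gets uploaded to Supabase storage.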

Guest mode stays instant. Login turns the generator into your own archive.

Draftr now supports Supabase authentication with Google login. If you want to try the product fast, you can keep moving in guest mode and generate into the general library without creating an account.

The moment you sign in, the same workflow becomes personal. New chats, rendered MP4s, reruns, and saved shorts are scoped to your account, so your library becomes a private archive instead of a shared feed.

That split keeps the product simple. Guests get instant access, while logged-in users get ownership, persistence, and a cleaner place to revisit everything they have generated later.

The same generator works in both modes. Auth only changes who owns the history.

We pressure-test the full workflow before users ever feel the regression.

We use TestSprite as an AI-native testing layer for the product. It plans and executes end-to-end UI, API, and workflow checks, then returns the kind of evidence that matters when something breaks: reports, logs, screenshots, videos, and fix guidance.

In practice that means the video generator, the recommendation system, the authentication flow, Supabase-backed storage, and the rest of the backend path are tested as separate systems instead of only through one polished demo path.

For critical flows, we repeat the runs across multiple passes, up to five when needed, so regressions are caught early and fixes can be verified before they ship to real users.

The goal is not one happy-path demo. The goal is to keep the whole machine honest.