In January 1953, nearly 72% of American homes with a television tuned in to watch I Love Lucy. Everyone was in the same room at the same time, so advertisers only had to make one ad. That world is gone. We now live in a world where everyone inhabits their own Truman Show, their own filter bubble, and the only way to reach a customer is to meet them inside the specific reality they’re already living in.
I started Omneky in 2018 because I saw this shift coming and I understood, from my grandfather who worked at IBM and drilled Moore’s Law into me as a kid, that the technology to solve it was about to arrive. At the time most investors thought generative AI was perpetually over-promised and under-delivered. I disagreed. I knew that if this technology improved by a couple of orders of magnitude, the applications in advertising would be massive. So we started building — years before ChatGPT, years before anyone at a dinner party had heard the phrase “generative AI.”
Seven years in, the technology caught up to the thesis. And I still use our own platform every single day — not because I have to, but because when you’re the one building the tool, nothing teaches you faster than sitting down and trying to ship a real ad for a real brand. This post is a tour of what I actually do in Omneky when I create ads for Meta, Google, and TikTok. Eight video formats that work today, a couple that don’t, and the workflow principles that shaped how we built the product.
If you take one thing away, let it be this: we didn’t set out to build another generator. We set out to bring the entire best-practice workflow — generation, reporting, optimization, scaling — into one centralized platform. Everything below is a consequence of that choice.

Why One Platform, Not Five
Every time I watch a tutorial from another AI ads tool, I see the same five-app dance. Go to one tool to find ads to clone. Come back to generate a script. Jump to a third to find prompt guides from some influencer. Hop to a fourth to generate the video in Kling or Veo. Bounce to a fifth to launch. Then come back tomorrow and try to figure out what worked.
That workflow is a tax on creativity. Every context switch is friction, and friction is where good ideas die before they can be tested. So the principle behind Omneky is simple: keep the marketer in one place. One page to find inspiration, one place to generate, one place to edit, one place to launch, one place to read the results and decide what to do next.
There are only three ways to start an ad on our platform:
- Start from scratch with a script and an avatar or product.
- Clone a successful ad — yours or a competitor’s — and adapt the style, script, and overlay copy to your own offering.
- Edit an existing winner to test variants: new character, new setting, different SKU, tweaked script, different caption treatment.
Those three flows combined with eight creative formats give you enormous surface area. Let me walk through each format.
Format 01 — Talking Avatar, No Product

The simplest and most misunderstood format. A lot of marketers assume a talking avatar is just a digital stand-in for a real actor, but that framing misses what makes it powerful. A custom avatar is a way to make your creative match your audience exactly — right age, right setting, right energy — and then iterate a hundred times without a casting call.
There are three ways to create one in Omneky:
- Upload an existing image. A persona from your site, a model you’ve worked with before, even a photo of yourself. Our system lightly transforms the pose to square the subject with the camera — otherwise the underlying models tend to generate people facing sideways, and that small thing kills the illusion.
- Upload a model already wearing your product (we’ll dig into this in the fashion section). Prompt the system not to change the clothing, and optionally specify a background.
- Prompt from text alone. This is the flow we use in onboarding — when a new customer signs up, we take their product description and target audience and auto-generate a starting avatar. “Canadian women 35 to 45 who own pugs and like pug couture” is a real example from a recent test I ran.

One workflow note: tag your avatars as you create them. On our higher plans you get creative performance reporting sliced by those tags, which is how you actually learn which personas are driving your clicks and your sales. That feedback loop is the whole point. Generation without measurement is just expensive guessing.
Pro Tip — Scripts: Add emotion cues in parentheses before sentences or specific words, like (excited) or (skeptical). Without them, the voice models go flat. With them, the tone and intonation suddenly sound human. This is the single biggest quality lever in avatar ads, and it’s free.
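To make that concrete, here’s a hypothetical skincare script with cues baked in: “(excited) I finally found something that works. (skeptical) I know, everyone says that. (warm) Give me fifteen seconds and I’ll show you.” The cues never get spoken; they just steer the delivery of whatever follows them.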
Format 02 — Talking Avatar Holding a Product
If your product is something an actor should hold — supplements, gadgets, consumer electronics — use Avatar with Product. You pick a scene, you pick the product from your library, and the system composes them. The final actor won’t be the exact person in the preview thumbnail, but the setting and vibe carry over.
Voice matters as much as face
Once you have your avatar, the next lever is the script — and this is where most people leave money on the table. The voice library has named characters with distinct personalities. Pick one that matches your brand voice and your audience demographic.


A word on captions
Captions matter more than most marketers realize. A huge percentage of social video is watched on mute, which means your captions are your ad for the first few seconds. We built our caption tooling around that reality.
You can upload your own brand font, set a stroke color so captions stay legible on busy backgrounds — that alone is a meaningful lift — and turn on word focus color to highlight each word as it’s spoken. Opinions vary on one-word-at-a-time versus short snippets. I’ve seen both work depending on the audience. Test them.
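To make the knobs concrete, here’s what a caption preset boils down to, written as a minimal sketch. The field names are my own shorthand for illustration, not Omneky’s actual settings schema:

```python
# Hypothetical caption preset -- the field names are illustrative,
# not Omneky's real configuration schema.
caption_style = {
    "font_file": "BrandSans-Bold.ttf",  # your uploaded brand font
    "stroke_color": "#000000",          # outline keeps text legible on busy footage
    "stroke_width": 3,
    "word_focus_color": "#FFD400",      # highlights each word as it's spoken
    "display_mode": "snippet",          # or "word_by_word" -- test both
}
```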

Format 03 — Avatar Wearing the Product
Fashion, jewelry, and accessories are where the biggest creative gap exists in most brands’ workflows. It’s also where I’ve personally spent the most time figuring out what works. There are three distinct approaches, and they trade control against speed.
B-roll clip, no talking
Generate an image of someone wearing the item in a specific setting, then animate it to around seven seconds. Pair it with voiceover or music. Here’s one I did recently for a North Face test: different personas on a snowy mountaintop at sunrise, demonstrating cold-weather use.

Talking avatar wearing the product
Two approaches, both solid. Option A: upload a model already wearing the item. When you create a product from a URL, we often scrape the hero image automatically — so you just name the avatar, prompt the system not to change clothing or background, and you have a custom talking actor in the right context.

Option B: use Avatar with Product to dress an existing avatar in the item, then layer in motion.

The magic happens when you prompt motion. Static talking heads have a hard ceiling. The moment you add real motion, the ad starts to feel like something a human production team made.

And here’s what comes out the other side — “Chris,” a custom male avatar in the North Face jacket, holding a cup of coffee, walking toward the camera on a snowy mountaintop:


The 8-to-15-second short commercial
Sometimes I don’t want to build the ad piece by piece. I want to give the AI the product, the brand, and a short prompt, and let it generate a full storyboard with script and multi-camera cuts. That’s what our Short Commercial format does. My personal test discipline: generate the same prompt three times, in three different visual styles — UGC, Luxury Cinematic, and Clean Minimal — and compare. Each takes about three minutes. More on why three versions matter later.
Format 04 — Avatar in a Real Location
This is one of the formats I’m proudest of, because it unlocks entire verticals that nobody else is serving well: real estate, apartment complexes, furniture brands, restaurants, medical offices, spas. Any business whose product is the space.
Go to Avatar with Product, create a seller or customer avatar, associate a voice and script, and prompt “this person is inside this room.” The result is legitimately crazy the first time you see it.

The result speaks for itself — a believable customer in a believable interior, talking about the product they’re actually sitting on:

One recommendation from hard-won experience: if you want multiple interiors, make individual clips for each specific room rather than trying to get them all in one generation. You get far more control, and you can stitch them together in our storyboard editor. For rooms where you don’t need a person talking, use product image animations to move the camera through the space, then run a voiceover across the whole sequence. This is how we handle real estate customers who need a full three-bedroom tour.
Format 05 — Product Animation B-Roll
Cheap, reliable, underused. Take a product image, animate it with a simple camera-move prompt, and use the result as a standalone clip, background b-roll, or one scene in a longer video. I lean on these when I need filler that doesn’t eat credits. They’re also the best starting point for brands that are nervous about avatars — no faces, no lip sync, no uncanny valley. Just your product, moving cinematically.
Format 06 — Storyboard: Short & Long Videos
For multi-scene work — 8-to-15-second shorts or 24-second-plus longer pieces — the Storyboard UI is where everything comes together. You can pull in clips you’ve already generated, generate new scenes from prompts, extend scenes that cut off too early, and re-order camera cuts.
If a generated scene cuts off mid-sentence, hit extend and describe what you want next. I’ve taken 12-second clips out to 31 seconds this way when I needed more room to explain features. The Storyboard is also where you’d start if you’re cloning a competitor and need the final cut to mirror their pacing.

Format 07 — Editing & A/B Variants
Click any existing video to open the editing toolkit. You can swap characters, change clothing, change interior settings, tweak lighting, or replace a specific product in the scene.

Honest Caveat: Editing to swap an avatar in a talking clip often leaves lip sync slightly off. For A/B tests where you’re isolating one variable, editing is the right tool because everything else in the scene stays constant (~3 credits per second — a 21-second video runs about 63 credits). But for a clean result, I usually just generate a new video with the same script. That runs around 60 credits — essentially the same cost — and the lip sync is noticeably better.
Here’s Replace Product in action: same underlying video, different mug. I typed “VW bus mug” into the Replace field, selected the product image, and generated. The rest of the scene — the avatar, the jacket, the mountain, the lighting — stayed identical.

A camera-distance lesson I’ve learned the hard way: when you want to show a whole piece of furniture and keep the viewer engaged, don’t shoot everything wide. Stitch together a wide establishing shot, a close-up on fabric, a medium shot of the person talking, and a final wide-angle of the full room. When you prompt Avatar with Product, always specify camera distance: “in front of the couch,” “sitting on the couch,” “showing off the whole room.”
Format 08 — Cloning a Competitor’s Winning Ad
This is the feature I use most often. Coming up with the idea is the hardest part of advertising — once you know the hook, the rest is execution. And the fastest way to know a hook works is to find one that’s already working for someone else.
- Find a winning ad. Search within Omneky for competitors or keywords, or upload a video you found elsewhere. Ads that have run long and scaled hard are usually still running for one reason: they work.
- Preview it. Hover to watch and listen.
- Click clone. Pick your product or service, give a short prompt about how you want to pitch it, and generate.
- Let Omneky pick the model. Depending on the use case it routes to Kling, Veo, or Grok — you don’t have to track which model is best this week.
In most other tools this is a five-app workflow: find ads in one place, write the script in a second, pull prompt guides from an influencer in a third, generate in a fourth, launch from a fifth. Keeping it in one platform is the point. It’s also why building data integrations with ad networks and platforms for the last seven years has been one of the quiet moats of the company — not everyone can get access to these networks, and without those integrations, none of the rest of the workflow matters.
Formats I’d Currently Skip
I want to be honest about what doesn’t work yet. A few presets look tempting in the product but burn credits without delivering:
- Unboxing POV. The hands are the tell. Watch how the paper crumples, how fingers grip. It’s uncanny-valley bad right now. Skip it until the underlying models catch up. They will.
- Motion control for complex dance or movement transfer. Fun demo, unreliable output. Wait.
I could have hidden these behind a paywall or pretended they work. But I’d rather have customers who trust what we ship. The models will get there — Moore’s Law is real in AI too — and when they do, we’ll turn these on and tell you.
Generation Is Half the Loop
This is the section most “AI ads” tutorials skip entirely, because it’s not as flashy as generation. But without it, everything above is just expensive guessing.
Every ad you launch feeds data back into our reporting layer. You can see CTR, ROAS, spend efficiency, thumbstop rate — all sliced by the creative tags you added earlier. That’s where you find out which avatar, which voice, which visual style, which hook is actually resonating.
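Mechanically, that tag-sliced view boils down to a group-by over your performance rows. Here’s a minimal sketch in pandas, assuming a hypothetical CSV export with one row per ad and a comma-separated tags column (the file and column names are mine, not an Omneky format):

```python
import pandas as pd

# Minimal tag-sliced creative report over a hypothetical export:
# one row per ad with columns ad_id, tags, impressions, clicks, spend, revenue.
ads = pd.read_csv("creative_performance.csv")
ads["tags"] = ads["tags"].str.split(",")

by_tag = ads.explode("tags").groupby("tags").agg(
    impressions=("impressions", "sum"),
    clicks=("clicks", "sum"),
    spend=("spend", "sum"),
    revenue=("revenue", "sum"),
)
by_tag["ctr"] = by_tag["clicks"] / by_tag["impressions"]
by_tag["roas"] = by_tag["revenue"] / by_tag["spend"]

# Which personas and styles are actually earning their spend?
print(by_tag.sort_values("roas", ascending=False))
```

You could run that by hand against an export every morning, or you could just ask.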
That’s what Chat With Data is for. You ask the AI analyst a question in plain English — “which ads had the best ROAS last month” or “show me CTR by product category” — and it returns a structured report with charts and recommendations.

You can keep iterating the report in the conversation — “add a chart tracking top ad spend over the last 10 days” — and then export it to PDF.

That feedback loop is the whole point of keeping everything in one platform. If you generate in one tool, analyze in another, and optimize in a third, you lose the thread. The signal lives in the gap between what you created and what the audience chose. Close that gap and you have a compounding system. Leave it open and you have an expensive slot machine.
The Best Practice Nobody Follows
Google’s Senior Director of Global Strategic Analytics has said that 55 to 70% of a marketing campaign’s success is influenced by the creative. That number only goes up as targeting options erode under privacy changes. Which means one winning idea, run as a single variant, will almost always underperform what that same idea could do across multiple executions. The story might be great, but you don’t know which visual style will click with your audience — or with Meta’s algorithm.
So run the same storyline and script in at least three different visual styles, and let the delivery data tell you which one lands.
If you have five ideas, that’s fifteen pieces of content. Ten ideas in three styles is thirty. That’s a proper test batch.
What happens next is counterintuitive: one variant will cap out at a couple hundred impressions while another version of the same idea takes off and hits hundreds of thousands or millions of views. The difference in CTR or thumbstop rate is often marginal. But ad networks detect that marginal lift and pour distribution into the winner. That one variant ends up driving 80%+ of your clicks and sales for the campaign.
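A toy model makes the dynamic visible. This is not how Meta’s auction works; it’s a sketch of winner-take-most delivery, where a small real CTR edge plus an algorithm that keeps feeding the current leader concentrates impressions on one variant. All the numbers are made up:

```python
import random

# Toy winner-take-most delivery model. TRUE_CTR values are invented;
# the point is the concentration dynamic, not the specific numbers.
random.seed(7)
TRUE_CTR = {"ugc": 0.012, "cinematic": 0.010, "minimal": 0.009}
imps = {k: 0 for k in TRUE_CTR}
clicks = {k: 0 for k in TRUE_CTR}

def observed_ctr(variant):
    # Smoothed estimate so early rounds don't divide by zero.
    return (clicks[variant] + 1) / (imps[variant] + 100)

for day in range(30):
    leader = max(TRUE_CTR, key=observed_ctr)
    for variant in TRUE_CTR:
        batch = 9_000 if variant == leader else 500  # leader soaks up the budget
        imps[variant] += batch
        clicks[variant] += sum(random.random() < TRUE_CTR[variant] for _ in range(batch))

total = sum(imps.values())
for variant, n in sorted(imps.items(), key=lambda kv: -kv[1]):
    print(f"{variant:9s} {n:7,d} impressions ({n / total:.0%} of delivery)")
```

Run it and the variant with the marginal edge ends up with the overwhelming majority of delivery, the same 80%-plus concentration you see in live campaigns.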
You can’t predict which one. You have to ship all three. This is what I mean by brute-force performance: stop trying to out-guess the algorithm, and start feeding it variants at a scale that would have been impossible a few years ago. That’s the opportunity generative AI gives marketers, and it’s the reason we built Omneky the way we did.
Closing Thought
My dad is an artist. Growing up, we spent a lot of time in museums, and he would walk me through why da Vinci or Michelangelo made the specific color and symbolic choices they made. Every decision carried meaning, every brushstroke earned its place. I’ve carried that lens into everything I’ve built since.
The temptation with generative AI is to treat it as a firehose — just generate more, push more, hope something sticks. But the companies that win with AI advertising won’t be the ones that generate the most. They’ll be the ones that pair unlimited generation with meticulous measurement and genuine creative taste. The AI handles the scale. The human handles the meaning. We built Omneky to be the place where those two things meet.
In a world where attention is becoming scarcer as the cost of creating content approaches zero, the only sustainable advantage is a system that lets you try a hundred ideas, measure which one breaks through, and double down on it before anyone else notices. That’s what I’m working on. That’s what I hope you’ll try.
Happy testing. I’d love to hear what you ship.
