Dumebi Okolo

Ozigi v2 Changelog: Building a Modular Agentic Content Engine with Next.js, Supabase, and Playwright

When I first built Ozigi (initially WriterHelper), the goal was simple: give content professionals in my team a way to break down their articles into high-signal social media campaigns.

Ozigi has now evolved into an open-source SaaS product, open to the public to use and improve.

Here is the complete technical changelog of how I turned Ozigi from a monolithic v1 MVP into a production-ready v2 SaaS.

1. Modular Refactoring of app/page.tsx (Separation of Concerns)

In v1, my entire application (auth, API calls, and UI) lived inside one long app/page.tsx file. The more changes I made, the harder it became to manage.

  • Modular Component Library: I stripped down the monolith and broke the UI into pure, single-responsibility React components (Header, Hero, Distillery, etc.).

modular architecture

  • Centralized Type Safety: I created a global lib/types.ts file with a strict CampaignDay interface (complete with index signatures) to finally eliminate the TypeScript "shadow type" build errors I was fighting.
  • State Persistence: Implemented localStorage syncing so the app "remembers" if a user is in the dashboard or the landing page, preventing frustrating resets on browser refresh.
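As a sketch of what that centralization can look like, here is an illustrative version of the shared types and the view-persistence helper. The field names, the `ozigi:view` key, and the helper functions are assumptions for illustration, not Ozigi's actual schema:

```typescript
// lib/types.ts sketch; field names are illustrative, Ozigi's real shape may differ.
export interface CampaignDay {
  day: number;
  platform: string;
  post: string;
  hashtags: string[];
  // Index signature so code that reads fields by dynamic key still type-checks
  // under strict builds (the "shadow type" fix mentioned above).
  [key: string]: string | number | string[];
}

// Minimal storage contract; window.localStorage satisfies it in the browser.
export interface KeyValueStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

export type AppView = "landing" | "dashboard";

const VIEW_KEY = "ozigi:view"; // hypothetical key name

export function saveView(store: KeyValueStore, view: AppView): void {
  store.setItem(VIEW_KEY, view);
}

export function loadView(store: KeyValueStore): AppView {
  // Default to the landing page when nothing (or junk) is stored.
  return store.getItem(VIEW_KEY) === "dashboard" ? "dashboard" : "landing";
}
```

Writing the helper against a narrow `KeyValueStore` interface also makes it trivial to unit-test without a browser.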

2. Using Supabase as the Database and Tightening the Backend

A major UX flaw in v1 was that refreshing the page wiped the user's progress.

  • Relational Database & OAuth: I replaced anonymous access with secure GitHub OAuth via Supabase.
  • Automated Context History: I engineered a system that auto-saves every generated campaign to a PostgreSQL database. Users can now restore past URLs, notes, and outputs with a single click.

strategy history

  • Identity Storage: Built a settings flow to permanently save a user's custom "Persona Voice" and Discord Webhook URLs directly to their profile.

discord webhook upload and added context
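The auto-save flow can be sketched roughly like this. The table shape (`user_id`, `payload`, and so on) is an assumption for illustration, and the narrow `CampaignStore` interface stands in for a supabase-js call like `supabase.from("campaigns").insert(row)`:

```typescript
// Illustrative auto-save sketch; column names are assumptions, not Ozigi's schema.
export interface CampaignRow {
  user_id: string;
  source_url: string | null;
  notes: string | null;
  payload: unknown; // the generated campaign JSON
}

// Narrow interface matching the shape of an insert call, so the flow
// can be exercised with a fake client in tests.
export interface CampaignStore {
  insert(row: CampaignRow): Promise<{ error: { message: string } | null }>;
}

export async function saveCampaign(
  store: CampaignStore,
  row: CampaignRow
): Promise<void> {
  const { error } = await store.insert(row);
  if (error) throw new Error(`Failed to save campaign: ${error.message}`);
}
```

Restoring a past campaign is then just reading the stored row back and rehydrating the dashboard state from `payload`.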

3. Core Feature Additions

  • Multi-Modal Ingestion: Upgraded the input engine to accept both a live URL and raw custom text simultaneously.

context engine dashboard

  • Native Discord Deployment: Built a dedicated API route and UI webhook integration to push generated content directly to Discord servers with one click.
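One concrete detail worth sketching here: Discord caps a webhook message's `content` field at 2000 characters, so a deployment route has to chunk long posts before sending. A minimal, hypothetical version of that step:

```typescript
// Chunking step for a Discord webhook route.
// Discord limits webhook message `content` to 2000 characters.
const DISCORD_CONTENT_LIMIT = 2000;

export function toDiscordPayloads(content: string): { content: string }[] {
  const payloads: { content: string }[] = [];
  for (let i = 0; i < content.length; i += DISCORD_CONTENT_LIMIT) {
    payloads.push({ content: content.slice(i, i + DISCORD_CONTENT_LIMIT) });
  }
  return payloads; // empty input yields no messages to send
}
```

In the route handler, each payload would then be POSTed to the user's saved webhook URL with `fetch(webhookUrl, { method: "POST", headers: { "Content-Type": "application/json" }, body: JSON.stringify(payload) })`.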

4. UI/UX Updates & Professional Branding

  • The Rebrand: Pivoted the app's messaging to focus entirely on content professionals, positioning it as an engine to generate social media content with ease and in your own voice.

  • Open-First Onboarding: Designed a "Try Before You Buy" workflow. Unauthenticated users can test the AI generation seamlessly, but are gated from premium features (History, Personas, Discord) via an Upgrade Banner.

guest mode

  • Pixel-Perfect Layouts & SEO: Eliminated rogue whitespace and z-index issues using precise CSS Flexbox rules. Upgraded app/layout.tsx with professional OpenGraph and Twitter Card metadata.

ozigi homepage
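For reference, the layout metadata export looks something like this sketch. In the real file the object is typed as Next.js's `Metadata`; the strings below are placeholders, not Ozigi's actual copy:

```typescript
// Sketch of the social metadata exported from app/layout.tsx.
// Titles and descriptions here are placeholders.
export const metadata = {
  title: "Ozigi",
  description: "Turn articles into high-signal social media campaigns.",
  openGraph: {
    title: "Ozigi",
    description: "Turn articles into high-signal social media campaigns.",
    type: "website",
  },
  twitter: {
    card: "summary_large_image",
    title: "Ozigi",
  },
};
```

Next.js picks this export up automatically and renders the corresponding `<meta>` tags, which is what makes shared links unfurl properly on X and LinkedIn.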

5. Quality Assurance & DevOps (Automated Playwright E2E Tests)

  • Automated E2E Testing: Completely rewrote the Playwright test suite (engine.spec.ts) to verify the new landing page copy, test the navigation flow, and confirm security rules apply correctly.

  • Linux Dependency Fixes: Patched my CI/CD pipeline by ensuring underlying Linux browser dependencies (--with-deps) are installed so headless Chromium tests pass flawlessly.
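A hedged sketch of what one of those engine.spec.ts checks might look like; the selectors and copy are placeholders, and this runs under the Playwright test runner rather than standalone:

```typescript
// engine.spec.ts sketch; selector text and copy are placeholders.
import { test, expect } from "@playwright/test";

test("landing page shows the new positioning copy", async ({ page }) => {
  await page.goto("/");
  await expect(page.getByRole("heading", { level: 1 })).toBeVisible();
});

test("guests are gated from premium features", async ({ page }) => {
  await page.goto("/");
  // A signed-out visitor should see the upgrade prompt, not the history panel.
  await expect(page.getByText(/upgrade/i)).toBeVisible();
});
```

On the CI side, the Linux fix boils down to running `npx playwright install --with-deps chromium`, which installs Chromium along with the system libraries headless browsers need.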

What's Next? (v3 Roadmap)

With the Context Engine now stable, the foundation is set.
My plan for v3 is to build out the publishing pipeline:

  • integrating the native X (Twitter) and LinkedIn APIs so users can publish directly from the Ozigi dashboard.

What has been your biggest challenge scaling a Next.js MVP? Let me know in the comments!
Try out Ozigi
Have any feature suggestions? Let me know!
Want to see my poorly written code? Find Ozigi on GitHub.

Connect with me on LinkedIn!

What came next:
After shipping v2, the next hard question was model selection. A reader suggested switching to Claude for better content quality. I ran the benchmarks instead of just taking the advice. The results across JSON stability, latency, multimodal ingestion, and tone were clearer than I expected: Gemini 2.5 Flash vs Claude 3.7 Sonnet: 4 Production Constraints That Made the Decision for Me

Top comments (21)

Anmol Baranwal • Edited

lol I just came from the previous post 🤣

All the feedback I gave - you have already incorporated it. It works really well. this is exactly what I wanted: giving it the necessary context so it can create posts in my preferred style.

edit: add demo video to the repo if you can, it will be really useful.

Dumebi Okolo

😁 This is great feedback for me honestly. I'm glad your suggestions were incorporated, even before you suggested them. 😅

I'm thinking of adding the video, and then a sort of documentation on what the app does exactly. It's actually a general use app and you don't have to be a dev/content professional or DevRel person to use it.

Anyone who posts insights on sm is the perfect user.

Anmol Baranwal

yeah, just make the ux super smooth because that's what a lot of people care about. I would say use #discuss tag in your posts if you want the feedback from the community here -- it targets entire devto community.

Dumebi Okolo

Thanksssss. I already made a lot of improvements. You can check it out.

Rohan Sharma

oh, there's a v3 roadmap as well! Waiting for it!

Dumebi Okolo

Thank youuuuuuuuu.
Have you tried the V2?

Rohan Sharma

not yet, I have a high fever. I just went through the blog. Will try soon.

I did try the 1st version.

Dumebi Okolo

I'm so sorryyyyyyy.
Get well soon! <3

Rohan Sharma

no need to sorry. virus should apologize.

Chad Dyar

Love the modular approach. I'm using Supabase on the other side of this: not for the content engine itself, but for the lead capture pipeline. My website chatbot (ChadBot) collects name, email, and interest through conversational intake, then writes directly to a Supabase pipeline_leads table via the REST API.

The Playwright integration is interesting: are you using it for content verification (checking how posts render on each platform) or for actual posting automation? That's the piece I haven't cracked yet. My agents generate platform-native content but the actual distribution is still manual because most platforms actively block automated posting.

Stack comparison: I'm on React + Express + Claude API with per-channel agent prompts that all reference a shared product dictionary. The dictionary approach has been key for keeping voice consistent across 8+ output formats.

Dumebi Okolo

Using Playwright for E2E tests.

Dumebi Okolo

Posting a quick update here for anyone who followed this: the next thing I tackled after v2 was the LLM selection question properly. A reader suggested Claude would give better content output than Gemini. Instead of just switching, I ran a structured benchmark across four constraints: JSON output stability, latency on the public sandbox, native multimodal ingestion, and tone quality. Wrote up the full ADR if you want the numbers.

Chad Dyar

Really like the modular angle here. I've been building something similar for my own apps where each "module" owns a part of the content lifecycle: research, drafting, repurposing, distribution. The piece I've found most fragile is state: making sure each stage actually has the context it needs instead of starting from scratch. Curious how you're persisting and passing state between modules.

Dumebi Okolo

In Ozigi, we solve this using what we call a 'Unified Context Schema.' Because we use Gemini 2.5 Flash with strict responseSchema enforcement, every module (Research, Drafting, etc.) is essentially a pure function. The 'Research' module is contractually obligated to output a specific JSON structure that the 'Repurposing' module expects. This eliminates the 'brittle parsing' issue you mentioned.

For persistence, we lean heavily on Supabase JSONB columns. Each campaign has a 'State' object in the DB. As a module completes its task, it performs an atomic update to that JSONB blob. This means if a user leaves and comes back, or if the distribution module needs to 'look back' at a raw research note from step one, the entire lifecycle is already hydrated in a single source of truth.

Essentially: Schema enforcement at the AI level + JSONB persistence at the DB level = Zero context loss.

Chris Ebube Roland

This is so neat.

Looking forward to being able to publish directly to sm platforms 🚀

Dumebi Okolo

Thank you so much, Chris! I appreciate it.

CrisisCore-Systems

This is the kind of changelog I trust because it is not just features, it is pain you removed. Pulling auth, calls, and UI out of one growing file into single responsibility components is one of those moves that feels boring until you have done it, then you cannot unsee the difference.

Centralizing types is also underrated. A strict core shape that the rest of the system has to obey is how you stop the quiet rot.

Curious about the localStorage dashboard state. How are you handling versioning and reset when the schema changes, so people do not get stuck in a weird state after an update?
