[Image: AI workflow diagram]
Property Portal · Zimbabwe · 2024

AI SEO Content Engine
450 Articles. 48 Hours.

450
Articles in 48 hours
+2
Avg Google ranking gain
7K+
Property DB records integrated

The Challenge

Propertybook.co.zw is Zimbabwe's largest property portal — consistently ranking 2nd–6th on Google for 4,000+ real estate keywords. They had domain authority and a private database of 7,000+ active listings. What they lacked: any scalable system for turning those assets into quality content.

Their blog was unfocused. Their existing content production process — one freelancer, one article, two weeks, ~$200 each — couldn't scale to match that opportunity. And they were clear about what they didn't want: generic AI text that would actively hurt their rankings under Google's EEAT standards.

The Solution

I designed and built a 6-step AI content pipeline on Coda.io, integrating eight external APIs and the client's proprietary property database. The system runs fully autonomously — select a keyword, click start, get a QA-checked article.

  • 01

    Keyword Selection

    The GSC and Ahrefs APIs identify the highest-ROI keyword from 4,000+ tracked terms, weighing search volume, competition, current ranking, and estimated CTR.

  • 02

    Competitor Research

    SerpAPI pulls the top 5 ranking articles. Scrapestack fetches and parses their full content. Claude identifies content gaps, structure patterns, and missing angles.

  • 03

    Private DB Integration

    Claude generates a SQL-like query against the 7,000+ listing database via a custom Coda API, pulling suburb-level metrics, price trends, and property characteristics no competitor can access.

  • 04

    EEAT Assembly

    Google Maps API provides location-specific context. Dynamic writing guidelines select the appropriate article format (neighborhood guide, investment analysis, buyer's guide) based on search intent.

  • 05

    Generation + QA

    Claude generates the article in structured JSON. A second Claude pass performs automated QA: keyword density, EEAT signals, brand voice, factual accuracy against the database.

  • 06

    CMS + Performance Tracking

    Status tracked from idea through publication. GSC dashboard monitors each article's clicks, rank, and CTR — feeding back into keyword prioritization.
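Under the hood, the six steps above are a linear chain: each stage's output becomes the next stage's context. A minimal Python sketch of that orchestration, with every function stubbed and every name hypothetical (the real system runs as Coda.io automations against live APIs, not as a script):

```python
from dataclasses import dataclass

@dataclass
class Article:
    keyword: str
    body: str
    qa_passed: bool

# All functions below are illustrative stubs standing in for the real
# API calls (GSC, Ahrefs, SerpAPI, Scrapestack, Claude, Coda DB).

def pick_keyword(tracked):
    # Step 01: crude ROI proxy -- search volume weighted by how far
    # the current rank sits below position 1 (more headroom, more upside).
    return max(tracked, key=lambda k: k["volume"] * (k["rank"] - 1))

def research_competitors(keyword):
    # Step 02: would call SerpAPI + Scrapestack, then Claude for gap analysis.
    return {"gaps": ["no suburb-level pricing data"]}

def query_listing_db(keyword):
    # Step 03: would run a Claude-generated query against the listings DB.
    return {"median_price": 85_000, "active_listings": 120}

def generate_article(ctx):
    # Steps 04-05: context assembly + Claude generation (stubbed as a string).
    return f"Guide to {ctx['keyword']} (median price ${ctx['median_price']:,})"

def qa_check(body, ctx):
    # Second-pass QA, reduced here to one check: the keyword made it in.
    return ctx["keyword"] in body

def run_pipeline(tracked_keywords) -> Article:
    kw = pick_keyword(tracked_keywords)["keyword"]
    ctx = {"keyword": kw,
           **research_competitors(kw),
           **query_listing_db(kw)}
    body = generate_article(ctx)
    return Article(keyword=kw, body=body, qa_passed=qa_check(body, ctx))

article = run_pipeline([
    {"keyword": "houses for sale in Borrowdale", "volume": 900, "rank": 4},
    {"keyword": "Harare rentals", "volume": 600, "rank": 2},
])
```

The value of the linear structure is operational: any stage can fail, log, and halt the chain before a bad article reaches the CMS.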

[Image: hundreds of printed articles laid out on a table]

Why Low-Code First

I built the entire pipeline as a Coda.io prototype before committing to production infrastructure. That meant the client could test the system with real data in days — not months. Every edge case, every API quirk, every prompt failure surfaced cheaply in the prototype stage. By the time we went live, the system was battle-tested.

In this case, the Coda prototype became the production system. It now runs continuously, processing keywords and producing articles on demand.

The Results

The system ran over a weekend. Monday morning: 450 fully written, EEAT-optimized, QA-checked articles in the CMS. Each article was unique, each incorporated proprietary database data, and each was structured for the specific search intent of its target keyword.

Six months post-launch: an average gain of +2 Google ranking positions across all published articles. Articles drawing on private database data consistently outperformed generic AI content from competitors.

"Genuinely amazing. Jheesh, well done. And thank you very much. Super chuffed with this."

— Client, Propertybook.co.zw

Key Learnings

  1. EEAT integration is critical. Generic AI content is penalized. The only way to pass EEAT at scale is to build systems that deliberately incorporate real signals — proprietary data, location-specific research, verified facts.
  2. Proprietary data creates competitive moats. The system's most valuable feature wasn't the AI generation — it was the private database integration. Competitors can copy the prompts. They can't copy the database.
  3. Low-code prototyping dramatically accelerates delivery. A full production build would have taken 3–4 months; the Coda prototype took 2 weeks and became the production system.
  4. Structured output formats improve reliability. Having Claude return JSON significantly improved parsing reliability. Prose output introduces unpredictable formatting; JSON enforces structure.
  5. AI systems need human oversight at the right checkpoints. Automated QA caught ~8% of articles that needed revision. Human review of QA failures (not all articles) is the right balance of automation and quality control.
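Learnings 4 and 5 combine naturally in code: parse the model's JSON, run mechanical checks, and route only the failures to a human. A sketch under assumptions — the field names, thresholds, and density heuristic here are mine, not the production QA rules:

```python
import json

def qa_article(raw_json: str, keyword: str,
               min_density: float = 0.005, max_density: float = 0.03):
    """Parse structured model output and run automated checks.
    Returns (passed, issues); anything failing goes to human review."""
    try:
        article = json.loads(raw_json)
    except json.JSONDecodeError:
        # Structured output makes this the only parsing failure mode.
        return False, ["output is not valid JSON"]

    issues = []
    for field in ("title", "body", "meta_description"):
        if not article.get(field):
            issues.append(f"missing field: {field}")

    body = article.get("body", "")
    words = body.lower().split()
    hits = body.lower().count(keyword.lower())
    density = hits / max(len(words), 1)
    if not min_density <= density <= max_density:
        issues.append(f"keyword density out of range: {density:.3f}")

    return not issues, issues

raw = json.dumps({
    "title": "Living in Avondale: A Buyer's Guide",
    "body": ("Prices in the suburb held steady this year. " * 20
             + "Avondale remains popular with buyers, and Avondale stock is limited."),
    "meta_description": "What buyers should know about Avondale.",
})
passed, issues = qa_article(raw, "Avondale")
```

Returning issues as data rather than raising makes the ~8% revision queue trivial to build: filter for rows where `passed` is false and surface only those to a reviewer.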
Coda.io · Anthropic Claude API · Google Search Console · Ahrefs API · SerpAPI · Scrapestack · Google Maps API · Custom DB API · EEAT · RAG · Prompt Chaining

Want the full narrative story of how this was built — in 10 panels?

View the scrollystory