
AI Disclosure — Where AI is and isn't used at MELBZ

Melbourne Zones Editorial Board April 25, 2026

This page exists because Google’s stance on AI-assisted publishing is the same as ours: it’s about whether the content is helpful and original, not whether AI was involved at any step.

But “AI was used somewhere in the pipeline” can mean wildly different things — from “an AI wrote the whole article and a human pressed publish” (the failure mode that has flooded the local-guide space since 2023, and the failure mode Google has explicitly classified as scaled content abuse) all the way down to “we ran spellcheck.”

So here is the specific, honest, paragraph-by-paragraph disclosure of where AI assists MELBZ and where it doesn’t.

Where AI is used

Image candidate retrieval and matching

We use AI tools (CLIP-style embeddings and large-language-model classification) to help find image candidates for articles from our licensed sources (Unsplash, Pexels, our own contributor library). The AI suggests images that may match the topic of an article. A human editor selects the final image for every published piece, verifies the licence and credit, and writes or edits the alt text.
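For illustration, the core of embedding-based candidate matching can be sketched in a few lines. This is a toy model, not our production pipeline: the vectors are made-up three-dimensional values (real CLIP embeddings have hundreds of dimensions) and the filenames are hypothetical. The point it demonstrates is the division of labour — the code only ranks candidates; it never selects.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_candidates(query_embedding, image_embeddings):
    # Return (image_id, score) pairs sorted by similarity to the article's
    # topic embedding, highest first. These are candidates only; a human
    # editor picks the final image and verifies licence and credit.
    scored = [
        (image_id, cosine_similarity(query_embedding, emb))
        for image_id, emb in image_embeddings.items()
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Toy embeddings standing in for real model output.
article_topic = np.array([1.0, 0.2, 0.0])
candidates = {
    "tram-flinders-st.jpg": np.array([0.9, 0.3, 0.1]),
    "latte-art.jpg": np.array([0.1, 0.9, 0.4]),
}
ranking = rank_candidates(article_topic, candidates)
```

The output is an ordered shortlist; everything downstream of it — selection, licensing, alt text — is human.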

Draft research and source compilation

For data-heavy articles (suburb hubs, “cost of living” pieces), we use AI to assemble candidate facts and source links from public datasets — for example, gathering every relevant ABS QuickStats figure for an SA2 region into one place a writer can work from. The writer verifies every figure against the primary source before it appears in a published article. AI-assembled raw research is not a published article. It is a working document.

Alt-text suggestions

When a hero image is selected, an AI tool may suggest a first-pass alt text describing what the image shows. The editor reviews and rewrites the alt text to ensure it accurately describes the actual image content and serves accessibility. Final alt text is human.

Structured data assembly

JSON-LD schema for articles (author, publisher, dates, image references, breadcrumbs) is generated programmatically from frontmatter. This is not “AI” in the generative sense — it’s templated assembly — but for full disclosure we mention it here because schema is part of what Google sees.
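A minimal sketch of what "templated assembly" means in practice: every value in the emitted JSON-LD is copied directly from the article's frontmatter, with no generative step anywhere. The frontmatter field names below are hypothetical, chosen for the example.

```python
import json

def article_schema(frontmatter):
    # Templated assembly: each field maps one-to-one from frontmatter.
    # Nothing here is produced by a language model.
    return {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": frontmatter["title"],
        "author": {"@type": "Person", "name": frontmatter["author"]},
        "publisher": {"@type": "Organization", "name": "MELBZ"},
        "datePublished": frontmatter["date_published"],
        "dateModified": frontmatter["date_modified"],
        "image": frontmatter["hero_image"],
    }

# Example frontmatter (illustrative values only).
fm = {
    "title": "Example suburb guide",
    "author": "Jane Writer",
    "date_published": "2026-04-25",
    "date_modified": "2026-04-25",
    "hero_image": "https://example.com/hero.jpg",
}
schema_json = json.dumps(article_schema(fm), indent=2)
```

Because the mapping is deterministic, the schema can never say anything the frontmatter doesn't.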

Internal pipeline tooling

We use AI tools internally for code review, log analysis, image deduplication, link checking, and other engineering hygiene. None of this affects the published voice or content of an article.

Where AI is not used

The published voice of the article

Every published MELBZ article is written or substantially rewritten by a named human writer, in their own voice. We do not publish articles where the published prose is AI-generated. If a writer drafts with AI assistance and the result reads like AI-generated text — generic, hedged, full of “vibrant,” “nestled,” “boasts,” “offers a unique experience” — the editor sends it back for a full human rewrite or rejects it.

Opinion, ranking, and recommendation

Every score on every “best of” list is assigned by a named human writer who has visited the venue. (See How We Rate.) AI does not rank venues. AI does not pick what’s best.

Fact verification

No claim makes it into a published article on the strength of an AI saying it. Every factual claim is traced to a primary source by the writer. AI-generated facts that cannot be independently sourced are removed.

First-hand observation

The first-person voice in a MELBZ article — “we visited at 11:40pm on a Tuesday”, “the eastern end of the strip is quiet by 9pm” — is always a real human’s actual observation. AI does not invent first-hand experience.

The editorial judgement to publish

Every article is read end-to-end by a named human editor before publication. The editor checks it against the editorial standards, the methodology, the fact-check standard, and the voice standard. Articles that fail any check are sent back or rejected.

Why we disclose this

Google’s published guidance (the Search Quality Rater Guidelines, the “Helpful, reliable, people-first content” doc, and the March 2024 spam policy update on Scaled Content Abuse) is consistent on one point: AI-assisted content is judged the same as any other content — by whether it’s original, useful, accurate, transparent, and produced with sufficient human effort and added value.

The thing Google explicitly penalises is scaled content abuse: low-effort generation of many pages without genuine added value. The defence against being misclassified as that is twofold: (1) make sure the content actually is original and valuable, and (2) be transparent about how it's made.

We try to do both. This page is the second.

What an honest AI-disclosure should and shouldn’t be

It should be specific. “AI is used somewhere” is meaningless. “AI suggests image candidates; a human selects the final image” is meaningful.

It should distinguish steps. “AI in the pipeline” can mean ten different things at ten different points; readers deserve to know which.

It should not pretend humans were involved where they weren’t. We are aware that some publishers have published “AI disclosure” notices that overstate human review. If we ever materially change the human-review depth on MELBZ articles, this page will be updated and the change will be dated below.

If you suspect a specific article is AI-slop

Write to [email protected] with the URL and what triggered the suspicion. We’ll respond with the named writer, the date of their work on it, and any supporting evidence we can share. If the article fails our own standards on second look, we rewrite or unpublish it.


Last reviewed: 2026-04-25. This page is reviewed quarterly. Next review: 2026-07-25.
