Service · AI engagements
AI, whatever you need.
No packaged offer. No fixed menu. You describe the problem, we scope it at the Workflow Audit, quote it fixed, and build what your practice actually needs. Every engagement starts with a blank brief and ends with documentation you own. What gets built in between is yours to define.
Every practice wants something different. Shape, tools, risk, data: all of it varies. We do not publish a list of AI products because we do not sell AI products. We scope your brief at the Audit, build to the signed scope, and hand over documentation you own. The shape is fixed. The work is yours.
How this works
Four steps. Yours in the middle.
The engagement shape is fixed. The work inside it is yours to describe. The Audit is where your words become scope.
Brief
You tell us the problem and what a good outcome looks like. A 30-minute call, no obligation, no paperwork, no pitch deck.
Audit
Two weeks on-site and in your data. Out the other end: scope, model choice, review surface, data path, risk notes, and a fixed Build price.
Build
To the scope signed at the Audit. We do not issue change orders for work we should have scoped, and we do not quietly expand scope to cover something you assumed was included.
Handover
Runbook, prompts, model configurations, credentials, change log. Post-handover stabilisation window for monitoring, tuning, and questions from your team.
Data handling
Your data, by default.
Every AI build touches client data. These are the defaults we ship with, before any Audit-specific adjustments are made. OAIC guidance on commercially available AI products (21 October 2024) sets the baseline. Our defaults sit above it.
Nothing stored by the model provider.
Frontier model accounts (OpenAI, Anthropic, Google) are configured with zero-retention settings where the vendor supports them. Your inputs and outputs are not stored for training or future reference. Where a specific model cannot be run with zero retention, we choose a different model or raise it in the Audit.
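To make that concrete, here is a minimal sketch of the request-level half of that posture, assuming the OpenAI Python SDK. The `store=False` flag below is a real per-request control; the account-level zero-retention agreement is contractual and never appears in code.

```python
# Minimal sketch, not a production client. The per-request flag is one layer;
# the account-level zero-retention agreement is contractual, not code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarise this file note."}],
    store=False,  # do not store this completion for evals or distillation
)
print(response.choices[0].message.content)
```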
Your data is not training data.
Client data is not used to train any third-party model. No exceptions without explicit written agreement, which in practice we have never sought. If a vendor changes terms in a way that would affect this, monitoring catches the change and the runbook has a rollback path.
Inference in an Australian region where residency is a requirement.
Hosted in an Australian region where the data profile warrants it and the vendor supports it. Residency is an Audit question, not a platform default. When a hard residency requirement cannot be met with a frontier model, we scope smaller open models hosted locally.
Every data path on a single page.
For every build, the data path is written down: what is sent, to which endpoint, what is retained, for how long, under which contract, and what the rollback is if the vendor changes terms. One page, delivered at handover, not buried in a sub-clause.
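As an illustration, the shape of that page, sketched with hypothetical values; the field names and entries here are ours, not a template from any engagement.

```python
# Hypothetical example of the one-page data path delivered at handover.
# Every field name and value below is illustrative, not from a real build.
DATA_PATH = {
    "what_is_sent": "De-identified file notes; no TFNs, no bank details",
    "endpoint": "api.anthropic.com over HTTPS, API-key auth",
    "what_is_retained": "Nothing beyond request processing",
    "retention_period": "Zero retention under the signed vendor agreement",
    "governing_contract": "Vendor commercial terms as at the handover date",
    "rollback": "Switch runbook config to a locally hosted open model",
}
```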
Pricing
Fixed at the Audit.
There is no packaged offer and no subscription. There is an engagement model. Your Build price is fixed at the end of the Workflow Audit once scope, model choice, data-handling regime, and review surface are agreed.
Figures are indicative starting prices, not committed quotes. Only the $1,500 Workflow Audit is fixed in advance. Your Build price is fixed at the end of the Audit once scope is signed, and we do not issue change orders for work we should have scoped.
Common questions
Before you book the Audit.
Can you actually build anything?
If you can describe the problem and what a good outcome looks like, we can scope it. The Audit is where we agree the shape, the model, the review surface, the data path, and the price. We have scoped document work, drafting workflows, classification pipelines, retrieval systems, reconciliation tools, and agents with bounded authority. Your brief may be none of those, or a combination we have never seen before. The engagement works the same either way.
Do you build AI agents with action authority?
Yes. Where the brief asks for an agent, we build an agent. What we design into the workflow is a human sign-off on anything irreversible. Not because we refuse to automate those steps, but because a regulator, an aggregator, or a PI insurer is going to look at that output sooner or later and ask who authorised it. Every agent we have scoped has had narrow, documented permissions and a named operator on any action that cannot be undone.
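A minimal sketch of what that sign-off gate can look like; the action names and operator field are hypothetical, not lifted from any build.

```python
# Hypothetical sketch of a bounded-authority gate: irreversible actions
# queue for a named operator; everything else runs under agent authority.
IRREVERSIBLE = {"send_client_email", "lodge_return", "delete_record"}

def execute(action: str, approved_by: str | None = None) -> str:
    if action in IRREVERSIBLE and approved_by is None:
        return f"QUEUED for sign-off: {action}"  # parked, not executed
    operator = approved_by or "agent"
    return f"EXECUTED {action} (authorised by: {operator})"  # audit-logged

print(execute("draft_reply"))                             # agent authority
print(execute("lodge_return"))                            # queued for sign-off
print(execute("lodge_return", approved_by="J. Citizen"))  # named operator
```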
What models do you use, and where does the data live?
Chosen per engagement, documented in your runbook. Most document and drafting work runs on frontier models (OpenAI, Anthropic, Google) via API with zero-retention settings. Extraction and classification tasks often run on smaller open models hosted in an Australian region. The data path is specified in writing for every build: what is sent, to which endpoint, what is retained, for how long, and what the rollback looks like if the provider changes terms.
What happens when the model gets something wrong?
Review catches it. Outputs with client, regulator, or financial consequence pass through a human checkpoint before they leave the system. Low-confidence outputs flag themselves, escalate, or fall back to manual handling. When a provider updates a model and behaviour drifts, monitoring triggers a rollback to the last known-good configuration while we investigate. The runbook documents every step of that process.
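A minimal sketch of that routing logic, with a hypothetical threshold and labels; real builds document theirs in the runbook.

```python
# Hypothetical sketch of the review checkpoint: consequential outputs always
# see a human; low-confidence outputs fall back to manual handling.
CONFIDENCE_THRESHOLD = 0.85  # illustrative value, set per build at the Audit

def route(confidence: float, consequential: bool) -> str:
    if confidence < CONFIDENCE_THRESHOLD:
        return "MANUAL"        # flag and fall back to manual handling
    if consequential:
        return "HUMAN_REVIEW"  # checkpoint before anything leaves the system
    return "RELEASE"

print(route(0.92, consequential=True))   # HUMAN_REVIEW
print(route(0.60, consequential=False))  # MANUAL
```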
How is this different from buying an off-the-shelf AI tool?
SaaS AI products are built for the median user. A compliance-bound Australian practice is not the median user. A bespoke build costs more up front and less over time: no monthly subscription for features you do not use, no working around features you cannot change, no exposure to a vendor pivot that breaks your workflow. You own the prompts, the model configuration, the credentials, and the documentation from day one of handover.
Can we start without the Workflow Audit?
Occasionally, if the scope is genuinely small and sits inside a platform we know well. In most cases the two-week Audit is worth it. AI work fails most often because the scope was wrong, not because the model was wrong. The Audit is where we decide what to build, which model suits the job, what the review surface looks like, and what the rollback path is. That work gets done once. Doing it up front is cheaper than doing it mid-build.
Related services
Keep exploring the practice.
See all five services →
Workflow Audit
$1,500 fixed. Two weeks. Credited against your first Build within 30 days.
Continue →
Automation Build
Native features configured first. Integration and exception handling layered around them.
Continue →
Orchestration & Exception Handling
The runbook, fallback flows, operator surfaces, and alerting for after the happy path breaks.
Continue →
Support & Maintenance
Monthly plans for work we have built. Monitoring, API-change patching, adjustment hours, named contact.
Continue →
Every engagement starts with the Audit.
Two weeks on-site and in your data. $1,500 fixed. Credited against the Build if you proceed. You leave with a scoped build, a fixed price, and the decision still in your hands.
Book the Audit →