## Documentation Index

Fetch the complete documentation index at: https://docs.auditynow.com/llms.txt
Use this file to discover all available pages before exploring further.
## The shape of an Audity audit

Every audit follows the same arc:

- Create the project with client info
- Add documents and intake context (skipped if you only have a client name)
- Trigger the audit analysis; this is the expensive AI synthesis pipeline
- Fetch the deliverables: opportunities, risks, stakeholder memos, executive summary
## The minimum viable audit

For a fast read on a client when you only have basic info:

`POST /api/projects/{id}/audit-analysis`
Runs the synthesis pipeline synchronously. The HTTP request blocks for 60-300 seconds.
Set client timeouts to at least 360s. Response: `{ analysis, requestData, message }`.

`GET /api/projects/{id}/opportunities`
Returns `{ opportunities: [...] }` (wrapped). Each opportunity has `impactScore` (1-10), `effortScore` (1-10), `category`, `roiPotential`, and `implementationTimeline`. The agent ranks and summarizes.
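The rank-and-summarize step could be sketched as follows. The field names match the response documented above; the ordering heuristic (highest impact first, lowest effort as tiebreak) is just one plausible choice, not something the API prescribes.

```python
# Hypothetical ranking helper for the wrapped response of
# GET /api/projects/{id}/opportunities. Field names match the docs;
# the sort order (impact desc, then effort asc) is an assumption.
def rank_opportunities(payload: dict, top_n: int = 5) -> list[dict]:
    opportunities = payload["opportunities"]  # response is wrapped
    ranked = sorted(opportunities, key=lambda o: (-o["impactScore"], o["effortScore"]))
    return ranked[:top_n]

# Fabricated example data, shaped like the documented response:
sample = {
    "opportunities": [
        {"id": "opp_a", "impactScore": 7, "effortScore": 4},
        {"id": "opp_b", "impactScore": 9, "effortScore": 6},
        {"id": "opp_c", "impactScore": 9, "effortScore": 2},
    ]
}
print([o["id"] for o in rank_opportunities(sample)])  # ['opp_c', 'opp_b', 'opp_a']
```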
## With supporting documents

If the consultant has uploaded documents through the web app first, the agent can run a deeper analysis:

- Fetch project detail to confirm documents are present
- Trigger `POST /api/projects/{id}/audit-analysis`
- Compare against the prior analysis (`GET .../audit-analysis` returns the most recent; multiple versions are stored)
- Synthesize the diff
## Generating client-ready deliverables

After analysis, the agent can pull the deliverable dashboard and reformat any of its sections:

- `GET /api/projects/{id}/deliverables` returns the full deliverable dashboard (executive summary, opportunities, risks, stakeholder memos)
- The agent extracts and reformats from the response

Deliverable document regeneration (e.g. creating a new stakeholder memo PDF on demand) is a web-app-only operation in v1. The synthesis pipeline already produces these once `audit-analysis` runs, so agents should pull from `/deliverables` rather than triggering regeneration.
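The extract-and-reformat step might look like the sketch below. The docs name the sections (executive summary, opportunities, risks, stakeholder memos), but the exact JSON keys used here are assumptions to verify against a real `/deliverables` response.

```python
# Hypothetical reformatter for the GET /api/projects/{id}/deliverables
# response. Section names come from the docs; the keys "executiveSummary",
# "risks", and "title" are assumptions.
def deliverables_to_markdown(dashboard: dict) -> str:
    lines = ["# Audit deliverables", ""]
    if summary := dashboard.get("executiveSummary"):
        lines += ["## Executive summary", summary, ""]
    if risks := dashboard.get("risks"):
        lines.append("## Risks")
        lines += [f"- {r['title']}" for r in risks]
    return "\n".join(lines)

print(deliverables_to_markdown({
    "executiveSummary": "Client is ready for automation.",
    "risks": [{"title": "Legacy ERP integration"}],
}))
```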
## Lead-to-audit flow

If your project starts as an Audity ReadyLink lead:

- `GET /api/lead-generation/leads?status=active&sortBy=ai_readiness_score&sortOrder=desc&limit=50` (response is wrapped: `{ data, pagination, filters }`)
- The agent filters client-side by `createdAt >= 7 days ago` and ranks by `aiReadinessScore`
- For each pick: `POST /api/lead-generation/leads/{id}/convert` (creates a project; returns 400 if already converted, 402 on insufficient credits; costs 1,000 credits each)
- For each: `POST /api/projects/{id}/audit-analysis` (synchronous, 60-300s)
## Common patterns and pitfalls

### The agent times out waiting for analysis
`POST /api/projects/{id}/audit-analysis` is synchronous and blocks for 60-300 seconds. Set the client HTTP timeout to at least 360s. If your agent platform caps requests at 60s (some webhook runtimes do), trigger the analysis from a background job and poll `GET .../audit-analysis` for the result instead.
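The polling half of that workaround could be wrapped like this. `fetch_analysis` stands in for whatever performs `GET .../audit-analysis` and returns the parsed body once the analysis exists (or `None` before then); this hypothetical helper only owns the retry loop.

```python
import time

# Sketch of the poll-instead-of-block pattern for platforms that cap
# request time. fetch_analysis: () -> dict | None is caller-supplied.
def poll_for_analysis(fetch_analysis, interval_s: float = 15.0,
                      max_wait_s: float = 360.0):
    deadline = time.monotonic() + max_wait_s
    while time.monotonic() < deadline:
        result = fetch_analysis()
        if result is not None:
            return result
        time.sleep(interval_s)
    raise TimeoutError("audit-analysis not ready within max_wait_s")
```

Because the fetcher is injected, the loop can be tested with a stub and pointed at the real endpoint in production.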
### The agent runs out of credits mid-batch
Each `POST /api/projects` deducts 1,000 credits. For batch operations, have the agent check `GET /api/user/credits` first and abort if there isn't enough headroom.
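A minimal pre-flight check, assuming the `GET /api/user/credits` response carries a `credits` field (the key name is an assumption; the 1,000-credit cost comes from the docs):

```python
# Hypothetical headroom check before a batch of project creations.
# COST_PER_PROJECT matches the documented 1,000-credit deduction;
# the {"credits": ...} response shape is an assumption to verify.
COST_PER_PROJECT = 1_000

def has_headroom(credits_payload: dict, planned_projects: int) -> bool:
    return credits_payload["credits"] >= planned_projects * COST_PER_PROJECT

print(has_headroom({"credits": 2500}, 2))  # True
print(has_headroom({"credits": 2500}, 3))  # False
```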
### The agent hallucinates an opportunity that isn't in the data
Always have the agent cite the opportunity by ID when summarizing: "Top opportunity is `opp_xyz`: implement RPA for invoice processing." The IDs come back in the response.
### Multi-tenant safety: can the agent see other users' projects?
No. Every PAT resolves to your Clerk user ID, and Supabase RLS enforces that you can only see your own rows. Even if the agent tries to GET a project ID belonging to another user, the response is 404.