Auditing Your Salesforce Org for Agentic Readiness

Your org is about to be operated by software that does not read error messages, does not check its work, and does not care what your page layout looks like. Here is how to find out if your data can survive that.

Salesforce Headless 360 exposed every platform capability as an API, MCP tool, or CLI command. That means AI agents can now create records, trigger Flows, modify data, and execute business logic at machine speed without a browser.

The org you built for humans clicking through screens is the same org those agents will operate on. Every shortcut, every unstandardized field, every "we'll clean it up later" decision is about to be tested at scale.

I have run data quality audits for nonprofits, healthcare organizations, and government agencies for 14 years. Here is the five-step audit I run before any AI feature gets activated.

Step 1: Data Quality Assessment

Agents amplify whatever is already in your data. If 40% of your Contact records have no salutation, three different state abbreviations, and inconsistent phone formatting, an agent operating through the API will propagate that mess, not fix it.

Run these checks:

  • Field completeness: What percentage of records have values in key fields (Email, Phone, Mailing State, Salutation)? Target: 95%+ for fields agents will read or write.
  • Format consistency: Are State fields standardized (all "NY" or all "New York," not both)? Are phone numbers in a consistent format? Are salutations from a controlled picklist?
  • Duplicate rate: What percentage of Accounts or Contacts are duplicates? Agents create records. If your duplicate rules are not configured, agent-created records will multiply the problem.
  • Picklist hygiene: Are picklist values current? Are there inactive values still selectable? Are Global Value Sets used where appropriate?

If your data quality score is below 7/10, stop here. Clean the data first. No AI feature should be activated on dirty data.
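The Step 1 checks are easy to script once records are exported from the org (via a SOQL query or a report export). Here is a minimal sketch, assuming a small illustrative sample; the field names and the valid-state list are stand-ins for whatever your governance standards define.

```python
KEY_FIELDS = ["Email", "Phone", "MailingState", "Salutation"]
VALID_STATES = {"NY", "NJ", "CT"}  # assumed standard: 2-letter codes only

def completeness(records, field):
    """Percent of records with a non-empty value in `field`."""
    filled = sum(1 for r in records if r.get(field))
    return 100.0 * filled / len(records)

def duplicate_rate(records, key="Email"):
    """Percent of records sharing a key value with an earlier record."""
    seen, dupes = set(), 0
    for r in records:
        k = (r.get(key) or "").strip().lower()
        if k and k in seen:
            dupes += 1
        seen.add(k)
    return 100.0 * dupes / len(records)

def audit(records):
    """Step 1 report: completeness per key field, duplicates, state format."""
    report = {f: completeness(records, f) for f in KEY_FIELDS}
    report["duplicate_rate"] = duplicate_rate(records)
    report["state_format_ok"] = all(
        (r.get("MailingState") or "") in VALID_STATES | {""} for r in records
    )
    return report

contacts = [
    {"Email": "a@x.org", "Phone": "212-555-0100", "MailingState": "NY", "Salutation": "Ms."},
    {"Email": "a@x.org", "Phone": "", "MailingState": "New York", "Salutation": ""},
    {"Email": "b@x.org", "Phone": "212-555-0101", "MailingState": "NJ", "Salutation": "Mr."},
]
print(audit(contacts))
```

Run against real exports, the same report gives you the 95% completeness and sub-1% duplicate numbers to compare against the targets above.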

Step 2: Validation Rule Audit

Validation rules fire on API-initiated saves. That is the good news. The bad news: most validation rules were written for humans, not machines.

Check every active validation rule for:

  • Error message clarity for API consumers. "Please enter a valid phone number" is helpful for a human. An agent needs a structured error response. If your validation rules return soft messages, agents may retry with the same bad data or bypass the rule entirely through a different save path.
  • Bypass exposure. If your rules use a Custom Permission or checkbox field for bypass, who has access? An integration user with bypass access can skip every validation rule in your org.
  • Coverage gaps. Validation rules do not fire on workflow field updates or some lead-conversion tasks. If agents trigger those paths, you need Flow-based validation as a supplement.

Step 3: Permission Model Review

The integration user your agent runs as determines what the agent can see, create, edit, and delete. Most orgs I audit have integration users with far more access than they need.

Check:

  • Is the integration user on a dedicated Permission Set, not System Administrator? Least-privilege access is the standard. Every object permission, every field permission, should be explicitly assigned.
  • Is "Run Flows" assigned via Permission Set? This is enforced as of Spring '26. Non-admin users (including integration users) need explicit Flow access.
  • Are Apex class permissions assigned? If your Flows call Apex actions, the integration user needs Apex class access. This was enforced in Winter '26.
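The least-privilege check reduces to a diff: what the integration user's Permission Set actually grants versus what the agent is documented to need. A sketch, with illustrative object names and permission sets:

```python
# What the agent's documentation says it needs (assumed example).
NEEDED = {
    "Contact": {"read", "create", "edit"},
    "Account": {"read"},
}

# What the integration user's Permission Set actually grants (assumed example).
granted = {
    "Contact": {"read", "create", "edit", "delete"},
    "Account": {"read", "edit"},
    "Opportunity": {"read"},
}

def excess_access(needed, granted):
    """Return permissions granted beyond the documented need, per object."""
    excess = {}
    for obj, perms in granted.items():
        extra = perms - needed.get(obj, set())
        if extra:
            excess[obj] = sorted(extra)
    return excess

print(excess_access(NEEDED, granted))
# {'Contact': ['delete'], 'Account': ['edit'], 'Opportunity': ['read']}
```

Every entry in the output is something to remove from the Permission Set or justify in writing.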

Step 4: Flow and Automation Review

Agents trigger the same Flows humans do. But agents do it at batch speed, which means execution order and entry criteria matter more than they ever did.

Check:

  • Entry criteria specificity. Broad entry criteria (like "when a record is created or updated") will fire on every agent-initiated save. If the agent creates 200 records in a batch, that Flow fires 200 times. Tighten entry criteria to the specific conditions that require automation.
  • Flow Trigger Explorer. If you have multiple Flows on the same object, confirm the execution order. Agents do not wait between saves. Race conditions between Flows become real problems at batch speed.
  • Fault handling. Every DML and Action element in your Flows should have a fault path. When an agent triggers a Flow that fails silently, you lose the error information. Surface $Flow.FaultMessage and log it.

Step 5: Documentation Audit

Your field descriptions, your validation rule documentation, and your sharing model are no longer just for the next admin. They are instructions for the next agent.

Check:

  • Field descriptions populated? Agents (and the developers building agents) reference field descriptions to understand what a field is for. If your Description field on the Field Definition page is blank, the agent is guessing.
  • Validation rule documentation? Each rule should have a description that states: what it enforces, why it exists, and what the bypass mechanism is. This is not optional. It is the spec sheet for anyone building agent interactions with your object.
  • Data governance standards documented? State format, phone format, address conventions, salutation values. If these are not written down, they are not enforceable at agent scale.
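The blank-description check is also scriptable. In practice the field metadata would come from the Tooling API or a metadata retrieve; the field list below is illustrative:

```python
# Assumed shape of retrieved field metadata: API name plus description.
fields = [
    {"name": "Preferred_Channel__c", "description": "How the contact wants to be reached."},
    {"name": "Legacy_ID__c", "description": ""},
    {"name": "Grant_Cycle__c", "description": None},
]

def undocumented(field_metadata):
    """Return the API names of fields with no usable description."""
    return [
        f["name"]
        for f in field_metadata
        if not (f.get("description") or "").strip()
    ]

print(undocumented(fields))  # ['Legacy_ID__c', 'Grant_Cycle__c']
```

The output is your documentation backlog, in priority order of whichever objects agents will touch first.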

Key Takeaways

  • Audit before activation. No AI feature should be turned on until data quality, validation rules, permissions, automations, and documentation pass review. In a recent nonprofit audit, 40% of Contact records had incomplete salutations, inconsistent state formats, and no phone standardization.
  • Data quality is the foundation. Agents amplify whatever is already in your org. Target 95%+ field completeness and under 1% duplicate rate on key objects before activating any agent feature.
  • Validation rules fire on API saves but not on all save paths (workflow field updates, some lead conversion tasks). Identify the gaps and supplement with Flow-based validation using the Custom Error element.
  • Integration users should follow least-privilege Permission Set assignments. System Administrator access for agents is a security risk at scale. Spring '26 enforces "Run Flows" via Permission Set. Winter '26 enforces Apex class permissions.
  • Documentation is no longer just for humans. Field descriptions and validation rule specs are instructions for agent builders. If your org has 50+ custom fields with blank descriptions, that is 50+ guesses an agent developer will make.
  • Start with the AI Readiness Scorecard to establish your baseline across 15 weighted questions and five categories.

Jeremy Carmona

13x certified Salesforce Architect and founder of Clear Concise Consulting. 14 years of platform experience specializing in data governance, data quality, and AI governance for nonprofit, government, healthcare, and enterprise organizations. Instructor of NYU Tandon's Salesforce Administration course with 160+ students trained and an ~80% job placement rate. Published in Salesforce Ben on AI governance and data quality. Based in New York.

https://www.clearconciseconsulting.com
