5 Questions Salesforce Admins Must Ask Before Turning on AI
Governance isn't about slowing down AI adoption. It's about not having to explain failures to your executive team.
The VP of Sales was excited. "I saw the Agentforce demo. Let's turn it on."
Three weeks later, the same VP was in my office asking why the AI agent had sent personalized emails to 200 contacts using outdated job titles and old company names, and, in four cases, to people who had explicitly asked to be removed from all communications.
The AI worked exactly as configured. The data it worked with was garbage.
This is the pattern I've seen repeatedly since organizations started adopting Salesforce AI features: the technology works, the governance doesn't exist, and admins get blamed for failures that were entirely predictable.
Governance sounds like bureaucracy. It's not. It's the difference between AI that makes your organization look competent and AI that sends your CEO an apology email at 2 AM.
Here are five questions every admin should answer before enabling any Salesforce AI feature.
Question 1: What Data Will the AI Access?
AI features in Salesforce don't operate in a vacuum. They read your data. Agentforce agents query records. Einstein features analyze fields. Prompt Builder pulls context from objects.
Before enabling anything, map exactly what data the AI will see:
Object Access:
• Which objects can the AI read?
• Which objects can the AI create or update?
• Are there objects the AI should never touch?
Field Access:
• Which fields contain sensitive information (SSN, financial data, health information)?
• Which fields are garbage (legacy data, never cleaned, known to be inaccurate)?
• Which fields are confidential (executive comp, HR notes, legal matters)?
Record Access:
• Does the AI respect your sharing model?
• Can it see records the running user shouldn't see?
• What happens if it surfaces confidential records to the wrong audience?
Most organizations discover, during this mapping exercise, that their data isn't ready for AI. That's a feature, not a bug. Better to discover it now than after the AI has acted on bad data.
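One practical way to start the mapping is to summarize what the AI integration user's permission set actually grants at the object level. Salesforce exposes this through the `ObjectPermissions` object (queryable via SOQL, e.g. `SELECT SobjectType, PermissionsRead, PermissionsCreate, PermissionsEdit, PermissionsDelete FROM ObjectPermissions WHERE ParentId = :permSetId`). The sketch below assumes you've exported those rows to a list of dicts; the summarizing helper itself is illustrative, not a Salesforce API:

```python
# Sketch: bucket objects by the riskiest access level an AI user's
# permission set grants. Field names mirror Salesforce's ObjectPermissions
# object; the helper function is a hypothetical convenience, not platform API.

def summarize_object_access(perm_rows):
    """Group objects by the riskiest access level they grant."""
    summary = {"read_only": [], "write": [], "delete": []}
    for row in perm_rows:
        obj = row["SobjectType"]
        if row.get("PermissionsDelete"):
            summary["delete"].append(obj)
        elif row.get("PermissionsCreate") or row.get("PermissionsEdit"):
            summary["write"].append(obj)
        elif row.get("PermissionsRead"):
            summary["read_only"].append(obj)
    return summary

# Example rows, as exported from a SOQL query against ObjectPermissions
perms = [
    {"SobjectType": "Contact", "PermissionsRead": True, "PermissionsEdit": True},
    {"SobjectType": "Case", "PermissionsRead": True},
    {"SobjectType": "Opportunity", "PermissionsRead": True, "PermissionsDelete": True},
]
print(summarize_object_access(perms))
# → {'read_only': ['Case'], 'write': ['Contact'], 'delete': ['Opportunity']}
```

Anything that lands in the "write" or "delete" buckets deserves a second look before the AI goes live.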
Question 2: What Can the AI Actually Do?
There's a difference between AI that suggests and AI that acts.
Einstein Copilot suggesting a response for review is low risk. An Agentforce agent autonomously sending emails, creating records, or triggering workflows is high risk.
For every AI feature, document:
Read-Only vs. Write Access:
• Can the AI only surface information?
• Can it create records?
• Can it update records?
• Can it delete anything?
• Can it send external communications?
Human-in-the-Loop Requirements:
• Which actions require human approval?
• Which actions are fully autonomous?
• What's the escalation path when the AI is uncertain?
Blast Radius Analysis:
• If this AI makes a mistake, how many records are affected?
• How many customers could be impacted?
• What's the worst-case scenario?
The organizations that succeed with AI aren't the ones that move fastest. They're the ones that match autonomy levels to risk tolerance.
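A blast radius review doesn't need to be elaborate. Scoring each proposed AI action against a few risk factors is enough to decide how much autonomy it should get. The scoring rubric and thresholds below are illustrative assumptions, not a Salesforce feature:

```python
# Sketch: classify a proposed AI action's blast radius so its autonomy
# level can be matched to risk tolerance. The factors, weights, and
# thresholds here are illustrative assumptions.

def blast_radius(action):
    score = 0
    if action["writes"]:
        score += 2
    if action["external_comms"]:        # emails, SMS, anything customer-facing
        score += 3
    if action["records_touched"] > 100:
        score += 2
    if not action["reversible"]:
        score += 3
    if score >= 6:
        return "high: require human approval per action"
    if score >= 3:
        return "medium: human review of batches"
    return "low: autonomous with monitoring"

send_campaign = {"writes": True, "external_comms": True,
                 "records_touched": 200, "reversible": False}
print(blast_radius(send_campaign))
# → high: require human approval per action
```

The email scenario from the opening story would have scored high on every factor, which is exactly the kind of action that should never have been fully autonomous.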
Question 3: How Will You Know When It Breaks?
Every system fails eventually. The question is whether you find out from your monitoring or from angry customers.
Before enabling AI, build observability:
Logging:
• Are AI actions logged?
• Can you trace which agent took which action on which record?
• Is there an audit trail for compliance purposes?
Alerting:
• What triggers an alert to the admin team?
• How do you detect anomalies (unusual volume, unexpected errors)?
• Who gets notified and how quickly?
Metrics:
• What does "working correctly" look like?
• How do you measure AI accuracy?
• What's your baseline for comparison?
Salesforce's Agentforce Testing Center helps here, but it's not automatic. You need to define what success looks like before you can measure whether you're achieving it.
Set up monitoring before you enable the feature, not after you get the first complaint.
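Anomaly detection can start simple: compare current AI action volume against a baseline and alert on large deviations. A real implementation would read from your org's audit logs; the counts and threshold below are illustrative:

```python
# Sketch: flag anomalous AI action volume against a recorded baseline.
# In practice the counts would come from your audit logs; the sample
# numbers and z-score threshold here are illustrative assumptions.
from statistics import mean, stdev

def volume_anomaly(hourly_counts, current, z_threshold=3.0):
    """Return True if current volume deviates from baseline by more than z_threshold sigmas."""
    mu, sigma = mean(hourly_counts), stdev(hourly_counts)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > z_threshold

baseline = [42, 38, 45, 40, 44, 39, 41, 43]  # emails sent per hour, past shifts
print(volume_anomaly(baseline, 47))   # → False (a normal hour)
print(volume_anomaly(baseline, 200))  # → True (a runaway agent)
```

Note that this only works if you captured a baseline before enabling the feature, which is why monitoring has to come first.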
Question 4: Who Owns This?
AI governance isn't a one-person job. Different stakeholders own different pieces:
Data Owners:
• Who's responsible for data quality in each object?
• Who approves AI access to sensitive data?
• Who handles data quality issues the AI surfaces?
Business Owners:
• Who defines what the AI should do?
• Who reviews AI outputs for accuracy?
• Who decides when AI behavior needs adjustment?
Technical Owners:
• Who configures and maintains the AI?
• Who troubleshoots when it breaks?
• Who handles security and compliance requirements?
Executive Sponsor:
• Who has authority to pause or disable the AI?
• Who communicates AI incidents to leadership?
• Who decides acceptable risk levels?
Document these roles before launch. When something goes wrong at 6 PM on Friday, you need to know who to call.
Question 5: What's Your Rollback Plan?
The best AI governance includes a plan for when things go wrong.
Pause Capability:
• Can you disable the AI instantly?
• Does disabling break other functionality?
• How do you communicate the pause to users?
Rollback Procedure:
• Can you undo AI-created records?
• Can you reverse AI-triggered actions?
• How do you identify which records were affected?
Communication Plan:
• How do you notify impacted customers?
• Who drafts the communication?
• What's the approval process for external messaging?
Post-Incident Review:
• How do you analyze what went wrong?
• Who's responsible for implementing fixes?
• How do you prevent recurrence?
Test your rollback plan before you need it. A plan that exists only on paper isn't a plan.
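Scoping a rollback usually starts with one question: which records did the AI touch during the incident window? Salesforce's standard audit fields (`CreatedById`, `CreatedDate`) make this answerable. The filter helper below is a hypothetical sketch over exported records, and the user Id is a placeholder:

```python
# Sketch: scope a rollback by finding records an AI integration user
# created inside an incident window. CreatedById and CreatedDate are
# standard Salesforce audit fields; the helper and Ids are illustrative.
from datetime import datetime

AI_USER_ID = "005XXXXXXXXXXXXXXX"  # placeholder integration-user Id

def records_to_roll_back(records, start, end):
    return [
        r["Id"] for r in records
        if r["CreatedById"] == AI_USER_ID
        and start <= datetime.fromisoformat(r["CreatedDate"]) <= end
    ]

exported = [
    {"Id": "003A1", "CreatedById": AI_USER_ID, "CreatedDate": "2025-06-01T14:05:00"},
    {"Id": "003A2", "CreatedById": "005HUMANXXXXXXXXXX", "CreatedDate": "2025-06-01T14:06:00"},
    {"Id": "003A3", "CreatedById": AI_USER_ID, "CreatedDate": "2025-06-01T17:30:00"},
]
window = (datetime(2025, 6, 1, 14, 0), datetime(2025, 6, 1, 15, 0))
print(records_to_roll_back(exported, *window))  # → ['003A1']
```

This only works cleanly if the AI runs as a dedicated integration user rather than a shared account, which is itself a governance decision worth making before launch.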
The Governance Checklist
Before enabling any Salesforce AI feature, confirm:
Data Readiness:
• [ ] Data quality is sufficient for AI to act accurately
• [ ] Sensitive fields are identified and protected
• [ ] Sharing model is correctly configured
• [ ] Data owners have approved AI access
Scope Definition:
• [ ] AI capabilities are documented (read vs. write)
• [ ] Human-in-the-loop requirements are defined
• [ ] Blast radius is understood and acceptable
• [ ] Edge cases and exceptions are handled
Monitoring:
• [ ] Logging is configured
• [ ] Alerting is active
• [ ] Success metrics are defined
• [ ] Baseline measurements exist
Ownership:
• [ ] Data, business, technical, and executive owners are assigned
• [ ] Escalation paths are documented
• [ ] On-call responsibilities are clear
Rollback:
• [ ] Disable procedure is tested
• [ ] Rollback steps are documented
• [ ] Communication templates are prepared
• [ ] Post-incident process exists
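The checklist is most useful when it's enforced, not just read. One option is to encode it as a launch gate that reports exactly which items are still open. The item names and gate function below are an illustrative sketch:

```python
# Sketch: treat the governance checklist as a launch gate. The item
# names echo the checklist above; the gate function is illustrative.

CHECKLIST = {
    "data_owners_approved": True,
    "write_scope_documented": True,
    "alerting_active": False,       # still missing
    "rollback_tested": True,
}

def ready_to_enable(checklist):
    """Return (go/no-go, list of unfinished items)."""
    missing = [item for item, done in checklist.items() if not done]
    return (len(missing) == 0, missing)

ok, gaps = ready_to_enable(CHECKLIST)
print(ok, gaps)  # → False ['alerting_active']
```

A "no" from the gate isn't a failure; it's the list of work that has to happen before the feature ships.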
Governance Is Ongoing
Answering these questions once isn't enough. AI systems evolve. Your data changes. Business requirements shift.
Schedule quarterly governance reviews:
• Is the AI still accessing appropriate data?
• Have new sensitive fields been added?
• Are the defined owners still correct?
• Has the risk profile changed?
• What incidents occurred and what did we learn?
The organizations that treat governance as a one-time checkbox fail. The ones that treat it as an ongoing practice succeed.
The Real Cost of Skipping Governance
I've seen organizations skip governance to "move fast." Here's what happens:
Scenario 1: AI emails customers using outdated contact information. Customers complain. Marketing loses trust in the system. The feature gets disabled indefinitely.
Scenario 2: AI creates duplicate records because data quality was poor. Sales reps waste hours cleaning up. Pipeline reports are wrong for a quarter. CRO loses confidence in Salesforce data.
Scenario 3: AI accesses records it shouldn't (sharing model misconfiguration). Confidential information surfaces to the wrong users. Legal gets involved. The incident becomes an audit finding.
Every one of these was preventable with basic governance. Every one of them cost more to fix than governance would have cost to implement.
Getting Started
If you're already behind on AI governance, start here:
1. Audit current AI features: What's already enabled? What data does it access?
2. Identify the highest risk: Which feature, if it fails, causes the most damage?
3. Build governance for that feature first: Answer all five questions for your highest-risk AI
4. Expand from there: Repeat for each AI capability in priority order
You don't need perfect governance before enabling any AI. You need appropriate governance before enabling each AI feature.
Next Steps
1. List every AI feature currently enabled in your org
2. Map data access for each feature
3. Identify gaps in monitoring and ownership
4. Prioritize governance work by risk level
5. Build rollback procedures before you need them
If you're implementing Agentforce or other Salesforce AI features and want to ensure your governance foundation is solid, Clear Concise Consulting offers AI readiness assessments. I've helped organizations avoid the governance failures that derail AI adoption.
Jeremy Carmona is a 13x certified Salesforce Architect who has written about AI governance for Salesforce Ben. His approach to AI governance is informed by 14 years of building Salesforce systems that organizations can trust.

