Salesforce Trust Layer Explained: What Admins Need to Know

The security architecture that makes Agentforce enterprise-ready

"How do we know the AI won't leak customer data?"

The CISO asked this question 15 minutes into our Agentforce planning session. It's the right question. Every organization considering AI should ask it.

Salesforce's answer is the Trust Layer: a set of security controls built into the platform that govern how AI features handle data. Understanding the Trust Layer is essential for any admin implementing Agentforce or other Einstein AI capabilities.

Here's what the Trust Layer actually does and how to configure it for your organization.

What the Trust Layer Protects Against

AI systems create new security concerns that traditional CRM security wasn't designed to handle:

Data Exposure:

AI models might inadvertently surface sensitive information in responses. A customer service agent asking for account history shouldn't see executive compensation data.

Data Leakage:

Information sent to AI models for processing could potentially be retained or used for training, exposing proprietary data.

Prompt Injection:

Malicious users might craft inputs designed to make the AI behave unexpectedly or reveal information it shouldn't.

Hallucination:

AI generating confident-sounding but factually wrong information, potentially leading to compliance issues.

Unauthorized Actions:

AI agents taking actions users shouldn't be able to perform, bypassing normal permission controls.

The Trust Layer addresses each of these risks.

Trust Layer Components

Component 1: Data Masking

Before sending data to AI models, the Trust Layer can mask sensitive information:

PII Masking:

• Credit card numbers replaced with tokens

• Social Security numbers obscured

• Personal health information filtered

Custom Masking:

• Designate additional fields as sensitive

• Define masking rules for proprietary data

How it works:

Data gets masked before being sent to the LLM. The AI processes masked data and returns results. The Trust Layer then unmasks the response where appropriate.

Admin Configuration:

Navigate to Setup → Trust Layer → Data Masking

• Enable masking for standard sensitive fields

• Add custom fields to masking rules

• Configure masking behavior (full vs. partial)

Component 2: Secure Data Retrieval

The Trust Layer enforces Salesforce's existing security model when AI retrieves data:

Sharing Model Enforcement:

AI queries respect Organization-Wide Defaults, Role Hierarchy, and Sharing Rules. An AI acting on behalf of a sales rep can only access records that rep can access.

Field-Level Security:

AI can only read fields the running user can read. Hidden fields remain hidden even from AI.

Object Permissions:

AI respects CRUD permissions. Users without Read access to an object can't get AI-powered insights about that object.

Admin Verification:

Test AI features with users at different permission levels. Verify that restricted data doesn't surface inappropriately.

Component 3: Zero Data Retention

A core Trust Layer principle: Salesforce's AI doesn't retain customer data for model training.

What this means:

• Prompts and responses aren't stored by AI providers

• Customer data isn't used to train models

• Each interaction is stateless

Audit Support:

Salesforce provides Data Processing Addendums (DPAs) documenting zero retention commitments. Legal and compliance teams can review these for regulatory requirements.

Component 4: Prompt Defense

The Trust Layer includes guardrails against prompt manipulation:

Input Sanitization:

Filters potentially malicious prompt patterns before they reach the AI.

Output Validation:

Checks AI responses for attempts to execute unauthorized actions or reveal restricted information.

Guardrail Configuration:

Admins can configure additional guardrails:

• Topics the AI should never discuss

• Actions the AI should never take

• Response formats the AI must follow

Navigate to Setup → Agentforce → Agent Configuration → Guardrails

Component 5: Audit Logging

Every AI interaction can be logged for compliance and troubleshooting:

What's Logged:

• Who initiated the AI interaction

• What data was accessed

• What actions were taken

• Timestamps for all events

Accessing Logs:

Setup → Trust Layer → AI Audit Logs

Retention:

Configure log retention based on compliance requirements.

Configuring Trust Layer for Your Org

Step 1: Review Default Settings

Trust Layer ships with defaults. Review them before enabling AI features:

Navigate to Setup → Trust Layer

Check:

• Data masking configuration

• Audit logging status

• Default guardrails

Step 2: Identify Sensitive Data

Map fields that need protection:

Object | Field | Sensitivity | Masking Required

Contact | SSN__c | High | Yes

Account | Revenue | Medium | No

Opportunity | Margin__c | High | Yes

Add custom sensitive fields to masking rules.

Step 3: Configure Guardrails

Define what your AI should never do:

Topic Restrictions:

• Competitor disparagement

• Legal advice

• Medical diagnosis

• Financial guarantees

Action Restrictions:

• Sending emails without approval

• Deleting records

• Accessing specific objects

Response Requirements:

• Include disclaimers where appropriate

• Cite sources when providing information

• Escalate to humans for sensitive topics

Step 4: Test Security Boundaries

Before production deployment:

1. Test with users at different permission levels

2. Attempt to surface restricted data through AI

3. Test prompt injection scenarios

4. Verify audit logs capture expected events

Document test results for compliance records.

Step 5: Monitor Ongoing

Build monitoring into AI operations:

Weekly Review:

• Check audit logs for anomalies

• Review any security alerts

• Verify masking is working correctly

Monthly Review:

• Audit permission changes that might affect AI access

• Review new guardrail requirements

• Check for Salesforce Trust Layer updates

Trust Layer and Compliance

Different regulations have different AI requirements:

GDPR:

• Right to explanation: Users can request information about AI decisions

• Data minimization: AI should only access necessary data

• Trust Layer logging supports auditability requirements

HIPAA:

• PHI must be masked or excluded from AI processing

• Access logging is required

• Document AI handling in BAAs

SOC 2:

• Demonstrate AI access controls

• Show audit trails for AI actions

• Document AI security configuration

Industry-Specific:

• Financial services may have restrictions on AI-generated advice

• Healthcare has limitations on AI diagnostic assistance

• Government contracts may require specific AI certifications

Review regulatory requirements before enabling AI features. The Trust Layer provides controls, but you must configure them appropriately.

Common Trust Layer Questions

Q: Can AI access data from connected systems (via MuleSoft, etc.)?

A: Trust Layer governs Salesforce data. External data accessed through integrations may have different security models. Review integration security separately.

Q: What happens if Trust Layer detects a violation?

A: Configurable. Options include blocking the action, logging without blocking, or alerting administrators.

Q: Does Trust Layer work with all AI features?

A: Trust Layer applies to Einstein features and Agentforce. Third-party AppExchange AI tools may have their own security models.

Q: How do we handle AI features in sandbox vs. production?

A: Trust Layer configuration can differ by org. Configure appropriate security for each environment. Be careful not to have more permissive settings in production than intended.

Q: What's the performance impact?

A: Trust Layer processing adds minimal latency. Data masking and security checks are designed for real-time operation.

Trust Layer Checklist

Before enabling AI features, confirm:

Data Protection:

• [ ] Sensitive fields identified

• [ ] Masking rules configured

• [ ] PII handling documented

Access Control:

• [ ] Sharing model verified

• [ ] Field-level security tested

• [ ] Object permissions confirmed

Guardrails:

• [ ] Topic restrictions defined

• [ ] Action restrictions configured

• [ ] Response requirements set

Audit:

• [ ] Logging enabled

• [ ] Retention configured

• [ ] Review process established

Compliance:

• [ ] Regulatory requirements reviewed

• [ ] Documentation updated

• [ ] Legal/compliance sign-off obtained

The Security Conversation

When stakeholders ask about AI security, you can explain:

"Salesforce's Trust Layer provides enterprise-grade security for AI features. It masks sensitive data before AI processing, enforces our existing permission model, doesn't retain our data for model training, and logs all AI interactions for audit purposes. We've configured additional guardrails specific to our requirements and tested security boundaries before deployment."

That's the answer that satisfies security teams and enables AI adoption.

Next Steps

1. Review your organization's sensitive data inventory

2. Configure Trust Layer masking for sensitive fields

3. Define guardrails appropriate for your industry and use cases

4. Test AI features with different user permission levels

5. Document your Trust Layer configuration for compliance

If you're preparing for Agentforce deployment and need to ensure your Trust Layer configuration is complete, Clear Concise Consulting offers AI security reviews. We help organizations configure Trust Layer appropriately for their compliance requirements.


Jeremy Carmona is a 13x certified Salesforce Architect who has written about AI governance for Salesforce Ben. He helps organizations implement AI features with appropriate security controls.

Previous
Previous

Pardot Engagement Studio: How to Build Drip Campaigns That Actually Convert

Next
Next

The Org Autopsy: What I Check First When Companies Hire Me to Fix Their Salesforce