Introducing the CCC AI Center of Excellence


A nonprofit director called me last fall. She had just been told by her board that her organization needed an "AI strategy" by the end of the quarter. She had three months, no budget for new hires, and a Salesforce org with 70,000 contact records that hadn't been audited in two years.

Her question was simple. "Where do I even start?"

I told her the truth. The AI strategy was the easy part. The hard part was that her data wasn't ready, her team didn't have a governance framework, and nobody on the board was prepared to be accountable when the AI got something wrong. Adding AI to that org would have made every existing problem worse, faster.

That conversation is the reason I built the CCC AI Center of Excellence.

What the AI CoE Is

The CCC AI Center of Excellence is a structured advisory practice for organizations running Salesforce who need to deploy AI responsibly. It packages the governance frameworks, assessment tools, and documentation standards I've been developing for two years into one operating model.

It is not a tech stack. It is not a software product. It is a way of approaching AI deployment that puts data quality, human oversight, and accountability ahead of feature activation.

The CoE is aligned to three authoritative frameworks:

  • NIST AI Risk Management Framework 1.0 (Govern, Map, Measure, Manage)

  • ISO 42001 controls for AI management systems

  • Salesforce Trusted AI principles

It addresses a documented gap in the market. As of early 2026, 82% of nonprofits are using AI tools in some form. Fewer than 10% have written governance policies. Federal agencies are now subject to OMB M-25-21 compliance requirements for high-impact AI. Healthcare organizations face HIPAA obligations for any AI system handling protected health information. The volume of AI activity has outpaced the volume of governance.

Most organizations I talk to don't need more AI. They need better guardrails for the AI they already have.

Why Now

Two events in April 2026 made the timing of this announcement obvious.

First, Salesforce announced Headless 360 at TDX. Every platform capability is now exposed as an API, MCP tool, or CLI command. AI agents can create records, trigger Flows, modify data, and orchestrate cross-system workflows without a browser. This is an architectural shift, not a feature release. The platform is now designed for machine consumption, not just human navigation.

For organizations that haven't audited their data, their permission model, or their integration users in years, this is a crisis waiting to happen. An agent operating at machine speed against bad data can corrupt thousands of records before a human notices.

Second, I attended Agentforce World Tour NYC at Javits Center on April 29. The keynote and breakouts were about agents, deployment velocity, and customer outcomes. The governance conversation barely registered. There were 130 sessions. I counted three that touched on data readiness or human oversight as deployment prerequisites.

The Salesforce ecosystem is moving fast on AI. The governance conversation is not keeping pace. That gap is where things go wrong, and that gap is where the CoE operates.

The Six Guiding Principles

The CoE runs on six principles drawn from NIST AI RMF trustworthiness characteristics. Each one becomes a working article in this series over the next four weeks.

Data First. No AI feature is activated until data quality is assessed. Clean data is a prerequisite, not a parallel workstream.

Human Oversight. Every AI-assisted decision has a documented human review gate. Automated actions are bounded by explicit scope limits.

Transparency. AI governance documentation is written in plain language for four audiences: executives, project managers, technical teams, and end users.

Accountability. Every AI deployment has a named human owner responsible for outcomes. AI does not make consequential decisions without human authorization.

Proportionality. Governance controls are proportional to risk. Low-risk automations receive lighter review. High-impact AI decisions receive mandatory multi-step validation.

Reversibility. Every AI deployment includes a documented rollback procedure. If something goes wrong, the system returns to its pre-AI state within a defined timeframe.
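Several of these principles are operational rules, not just values. The sketch below shows one way Proportionality could be encoded: controls scale with a use case's risk tier, unknown risk defaults to the strictest tier, and the high tier folds in the Accountability and Reversibility requirements. The tier names and control lists are illustrative assumptions, not a published CCC rubric.

```python
# Illustrative sketch of the Proportionality principle: governance
# controls scale with a use case's risk tier. Tier names and required
# controls are assumptions for illustration only.

REVIEW_REQUIREMENTS = {
    "low":    ["peer_review"],
    "medium": ["peer_review", "named_owner_signoff"],
    "high":   ["peer_review", "named_owner_signoff",
               "multi_step_validation", "documented_rollback"],
}

def required_controls(risk_tier: str) -> list:
    """Return the review controls for a risk tier."""
    try:
        return REVIEW_REQUIREMENTS[risk_tier]
    except KeyError:
        # An unclassified use case is treated as high risk until reviewed.
        return REVIEW_REQUIREMENTS["high"]

print(required_controls("low"))      # low-risk automation gets lighter review
print(required_controls("unknown"))  # unknown risk defaults to the strictest tier
```

The design choice worth noticing is the default: when risk hasn't been classified, the system fails toward more review, not less.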

These are not new ideas. They are the same governance principles that show up in NIST publications, ISO standards, and academic papers on responsible AI. What's new is applying them inside a Salesforce org, with concrete checkpoints and documentation that a nonprofit director with no engineering background can actually use.

Six Governance Checkpoints

Every CoE engagement runs through six mandatory checkpoints. Each is pass/fail. Engagements do not proceed until gate criteria are met.

The checkpoints map to NIST functions:

  1. Data Readiness (MAP). Data quality score 7/10 or higher. Sharing model audited.

  2. Permission and Access (GOVERN). Field-level security verified. Trust Layer configured.

  3. Design Review (MAP + GOVERN). Use case risk classified. Human oversight defined.

  4. Agentic Governance Review (GOVERN + MAP). Agent boundaries set. Session Tracing on.

  5. Pre-Production Validation (MEASURE). Bias checks done. Rollback tested.

  6. Post-Launch Governance (MANAGE). 30-day review. Drift indicators baselined.
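Because each checkpoint is pass/fail and engagements stop at the first unmet gate, the sequence above behaves like a simple data structure. Here is a minimal sketch of that gating logic; the checkpoint names and NIST mappings come from the list above, while the field names and criteria keys are hypothetical, not a CCC deliverable.

```python
from dataclasses import dataclass

# Sketch: checkpoints as sequential pass/fail gates. Field names and
# criteria keys are illustrative assumptions.

@dataclass
class Checkpoint:
    name: str
    nist_functions: list   # e.g. ["MAP"] or ["GOVERN", "MAP"]
    criteria: dict         # criterion -> bool (met / not met)

    def passed(self) -> bool:
        # A gate passes only when every criterion is met.
        return all(self.criteria.values())

def next_gate(checkpoints):
    """Return the first checkpoint not yet passed, or None if all pass."""
    for cp in checkpoints:
        if not cp.passed():
            return cp
    return None

engagement = [
    Checkpoint("Data Readiness", ["MAP"],
               {"data_quality_score_ge_7": True, "sharing_model_audited": True}),
    Checkpoint("Permission and Access", ["GOVERN"],
               {"fls_verified": True, "trust_layer_configured": False}),
    Checkpoint("Design Review", ["MAP", "GOVERN"],
               {"risk_classified": False, "oversight_defined": False}),
]

blocked_at = next_gate(engagement)
print(blocked_at.name)  # prints "Permission and Access" — work stops here
```

The point of the sketch is the control flow: an engagement with a failed second gate never reaches design review, no matter how ready the later stages look.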

The eighth article in this series walks through all six checkpoints with a real engagement example.

What This Means for Your Org

If you're a nonprofit, government agency, healthcare org, or enterprise running Salesforce, the practical question is whether your org is ready to deploy AI responsibly. The CCC AI Readiness Scorecard answers that question with 15 weighted questions across five categories. It produces a numerical baseline (0-100) that identifies gaps before AI features are activated. It is free and takes about ten minutes.
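To make the 0-100 baseline concrete, here is a hedged sketch of how a weighted category score could be computed. The category names, weights, and example answers are all illustrative assumptions; the real CCC AI Readiness Scorecard defines its own questions and weighting.

```python
# Hypothetical weighting — NOT the actual CCC Scorecard rubric.
CATEGORY_WEIGHTS = {
    "data_quality":       0.30,
    "permissions":        0.20,
    "governance_docs":    0.20,
    "human_oversight":    0.15,
    "rollback_readiness": 0.15,
}

def readiness_score(answers: dict) -> float:
    """answers: category -> list of question scores, each 0.0-1.0.
    Returns a weighted 0-100 baseline."""
    total = 0.0
    for category, weight in CATEGORY_WEIGHTS.items():
        scores = answers[category]
        total += weight * (sum(scores) / len(scores))
    return round(total * 100, 1)

# Example answers for an org with strong permissions but weak
# documentation and no rollback plan (illustrative values).
answers = {
    "data_quality":       [1.0, 0.5, 0.0],
    "permissions":        [1.0, 1.0, 0.5],
    "governance_docs":    [0.0, 0.0, 0.5],
    "human_oversight":    [0.5, 0.5, 1.0],
    "rollback_readiness": [0.0, 0.5, 0.0],
}

print(readiness_score(answers))  # prints 47.5 — below the 70 threshold
```

Weighting matters here: an org can score well on oversight questions and still land below 70 if its data quality, the heaviest category, is weak.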

If your score lands below 70, you're in the same position as that nonprofit director from last fall. You have time, but not unlimited time, and you need to fix the foundations before you build on top of them.

If your score lands above 70, you're in better shape than most. The CoE engagements then focus on the specific checkpoint where you're weakest, not on rebuilding everything.

Either way, the path is the same. Data first. Then permissions. Then human oversight. Then deployment.

What's Next in This Series

Over the next four weeks, I will publish seven follow-up articles: one per Guiding Principle, plus a deep dive on the Six Governance Checkpoints. Each article includes a concrete example, a one-page framework you can use, and the mistakes I see most often in client engagements.

The goal is to make responsible AI adoption practical, measurable, and sustainable. Not theoretical. Not aspirational. Practical.

If you have a specific AI deployment you're considering, or a board mandate you need to satisfy, the AI Readiness Scorecard is the right starting point. If you want to talk through the results, my Zoom scheduler is open.

Responsible AI is not a tech problem. It is a governance problem. The good news: governance is solvable.

Jeremy Carmona

13x certified Salesforce Architect and founder of Clear Concise Consulting. 14 years of platform experience specializing in data governance, data quality, and AI governance for nonprofit, government, healthcare, and enterprise organizations. Instructor of NYU Tandon's Salesforce Administration course with 160+ students trained and an ~80% job placement rate. Published in Salesforce Ben on AI governance and data quality. Based in New York.

https://www.clearconciseconsulting.com
