
Securing Salesforce AI Features in Your Org

Reco Security Experts
Updated
July 15, 2025

Salesforce embeds AI tools across its platform, such as Einstein GPT, predictive analytics in Sales Cloud, and service recommendations in Service Cloud. While beneficial, these features also introduce new security and compliance issues. Using AI in Salesforce expands your org’s capabilities, but it also raises concerns about data flow, inference behavior, prompt structure, and storage locations. To secure AI in Salesforce, you need to configure access controls, review data-sharing practices, audit system usage, and manage user expectations.

This article outlines how to secure Salesforce AI features in your org, focusing on the components that rely on models, prompt systems, and automation.

Control Access to AI Features and Components

It’s important to ensure that only the right people can access Salesforce AI features. AI tools typically work with metadata, prompts, and predictions that can include sensitive or business-critical information.

Salesforce provides permission sets and licenses for features like Einstein GPT. For example:

// Check whether the running user has the 'EinsteinGPTUser' custom permission
Boolean hasAccess = FeatureManagement.checkPermission('EinsteinGPTUser');
if (hasAccess) {
    // Allow access to AI functionality
}

You should restrict access using custom profiles or permission sets. For example:

  1. Create a permission set called AI_Feature_Users.
  2. Enable only required features (e.g., Prompt Builder, Model Builder).
  3. Assign this permission set only to authorized roles (e.g., Data Analysts, AI Admins).

Use Field-Level Security to restrict what data AI models can access. This prevents sensitive fields from being used in training or inference.
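A minimal sketch of checking Field-Level Security before a field reaches an AI payload; `SSN__c` on Contact is a hypothetical sensitive custom field used only for illustration (describe-map keys are lowercase API names):

```apex
// Check whether the running user can read a sensitive field before
// it is included in a prompt or model input.
Schema.SObjectField f =
    Schema.SObjectType.Contact.fields.getMap().get('ssn__c');  // hypothetical field
Boolean readable = (f != null) && f.getDescribe().isAccessible();

if (!readable) {
    // Omit the field from the AI payload entirely
}
```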

Review Prompt Templates and Data Usage

Salesforce's Prompt Builder allows admins to create templates that send user and record data to large language models. If not secured properly, this may result in exposing PII or sensitive business data.

Use Data Masking for prompt testing and QA environments. Salesforce offers tools like Data Mask to anonymize or redact sensitive information when testing prompts in sandbox environments.

Also, ensure prompt template usage is logged and audited. This helps you trace any data exposure or prompt misuse later.

Apply Data Access Policies for AI Models

Some AI features, like Einstein Discovery or Einstein Next Best Action, use historical data to train models. This data may come from various objects and fields across your org.

Define clear data access policies:

  • Limit which objects and fields are used for model training.
  • Use record-level access controls (Sharing Rules) to ensure models only use data the user has access to.
  • Enable scoped data sets when training models to isolate data per business unit or region.
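The record-level control in the list above can be enforced in code with user-mode queries plus `Security.stripInaccessible`, a standard Apex API. A sketch:

```apex
// Query in user mode so sharing rules and FLS apply, then strip any
// remaining fields the running user cannot read (defense in depth)
// before the records are handed to a model or prompt.
List<Account> raw = [SELECT Id, Name, AnnualRevenue
                     FROM Account WITH USER_MODE LIMIT 100];

SObjectAccessDecision decision =
    Security.stripInaccessible(AccessType.READABLE, raw);
List<Account> safeRecords = decision.getRecords();
```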

For organizations with multi-region deployments, ensure that training data complies with data residency laws. Avoid using globally scoped models unless the data-sharing agreement supports it.

Audit and Monitor AI Activity

Once AI features are enabled, it's critical to audit their usage and behavior. Salesforce provides logs and monitoring tools for this.

Enable the following:

  1. Event Monitoring: Tracks prompt usage, prediction calls, and model activity.
  2. Field Audit Trail: Useful for tracking changes to fields used by models.
  3. Debug Logs: Enable for users or integrations interacting with AI APIs.
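Once Event Monitoring is enabled, its logs can be pulled programmatically via the `EventLogFile` object. The event types available depend on your Event Monitoring license, so treat the query below as a generic sketch:

```apex
// List Event Monitoring log files generated in the last week for audit.
List<EventLogFile> logs = [
    SELECT Id, EventType, LogDate, LogFileLength
    FROM EventLogFile
    WHERE LogDate = LAST_N_DAYS:7
    ORDER BY LogDate DESC
];
for (EventLogFile elf : logs) {
    System.debug(elf.EventType + ' / ' + elf.LogDate);
}
```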

Salesforce Event Monitoring interface displaying logs of AI usage, user actions, and data access events for compliance tracking.

You can also create custom objects to store metadata about AI usage. Use the data in dashboards to track trends, spikes, or anomalies.
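For example, a trigger or service class could write one row per AI interaction to such a custom object. Everything here (`AI_Usage_Log__c` and its fields) is a hypothetical schema you would define yourself:

```apex
// Record one row of AI usage metadata in a hypothetical custom object.
AI_Usage_Log__c log = new AI_Usage_Log__c(
    User__c       = UserInfo.getUserId(),   // who invoked the feature
    Feature__c    = 'Prompt Builder',       // which AI feature was used
    Invoked_At__c = System.now()            // when it was invoked
);
insert log;
```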

Additionally, review Salesforce’s AI Usage Dashboard from Setup > Reports > Einstein Usage to get an overview of feature usage across users and departments.

Manage External Connections and Data Flow

Salesforce’s AI features may send data outside your org for processing. For example, Einstein GPT may call external models or use OpenAI APIs depending on the configuration.

To secure this:

  • Use Named Credentials for all external API calls.
  • Restrict outbound traffic using Network Access and Remote Site Settings.
  • Disable external model access unless your org has signed proper data-sharing agreements.
  • Review IP allow lists and deny access to regions or services not in use.
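The first bullet can be sketched as follows; `OpenAI_API` is a hypothetical Named Credential an admin would configure in Setup, so the endpoint host and authentication secrets never appear in code:

```apex
// Outbound call routed through a Named Credential instead of a
// hard-coded URL; credentials live in Setup, not in the codebase.
HttpRequest req = new HttpRequest();
req.setEndpoint('callout:OpenAI_API/v1/chat/completions');
req.setMethod('POST');
req.setHeader('Content-Type', 'application/json');
req.setBody('{"model":"gpt-4o","messages":[]}');

HttpResponse res = new Http().send(req);
System.debug(res.getStatusCode());
```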

You can configure your org to use Salesforce-hosted models instead of third-party APIs if compliance is a concern. This reduces the risk of data leakage through third-party services.

Insight by
Dr. Tal Shapira
Cofounder & CTO at Reco

Tal is the Cofounder & CTO of Reco. Tal has a Ph.D. from Tel Aviv University with a focus on deep learning, computer networks, and cybersecurity and he is the former head of the cybersecurity R&D group within the Israeli Prime Minister's Office. Tal is a member of the AI Controls Security Working Group with CSA.

Expert Insight: Security Tips for Salesforce AI


To strengthen your AI security strategy in Salesforce, consider the following insights drawn from practitioner experience.

  • AI permissions can change rapidly in Salesforce, so audit the relevant permission sets quarterly to confirm that only users who need tools like Prompt Builder still have access.
  • Do not feed PII or financial information into prompt input fields. Remember that formula and calculated fields can also surface personal details.
  • Partition model-training datasets in Einstein Discovery by department and geography to keep data segregated.
  • Track interactions that cross cloud boundaries (Sales, Service, and Marketing). Centralize prompt event logs or use Event Monitoring to trace prompt use across those boundaries.
  • Restrict access to external models in Einstein GPT unless there is a clear, approved reason to allow it. Where possible, prefer Salesforce-hosted LLMs.
  • Add visual indicators such as “AI Generated” to any interface showing AI content, and explain them when training users.
  • Include AI Metadata in Backup Pipelines. Treat prompt templates, model configurations, and prompt logs as first-class metadata. Ensure they are version-controlled and included in CI/CD backups.

Conclusion

Salesforce AI features provide a lot of automation and insights, but they also increase the attack surface of your org. Securing these features requires more than just setting permissions—it involves reviewing prompts, limiting model access to appropriate data, monitoring usage, and being transparent with users.

Treat every AI model, prompt, and prediction as a potential endpoint for sensitive data. Apply the same controls you would to any external API or integration. With proper planning and secure configuration, Salesforce AI can be powerful and compliant at the same time.
