Salesforce ships AI tools across its platform, such as Einstein GPT, predictive analytics in Sales Cloud, and service recommendations in Service Cloud. These features are beneficial, but they also raise new security and compliance questions: enabling AI expands your org's capabilities while changing how data flows, how inference behaves, how prompts are structured, and where information is stored. Securing AI in Salesforce means setting up appropriate access, checking data-sharing practices, auditing system usage, and managing user expectations.
This article outlines how to secure Salesforce AI features in your organization, focusing on the components that rely on models, prompt templates, and automation.
It's important to ensure that the right people have access to Salesforce AI features. AI tools typically rely on metadata, prompts, and predictions that can include sensitive personal or business-critical information.
Salesforce provides permission sets and licenses for features like Einstein GPT. For example:
// Check whether the running user has the 'EinsteinGPTUser' custom permission
Boolean hasAccess = FeatureManagement.checkPermission('EinsteinGPTUser');
if (hasAccess) {
    // Allow access to AI functionality, e.g. show a button or run a prompt
}
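Note that FeatureManagement.checkPermission evaluates a custom permission, so define one (the EinsteinGPTUser name above is only an example) and assign it through the same permission set that grants the AI feature; for users without the assignment, the check simply returns false.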
Restrict access using custom profiles or permission sets rather than assigning the permission broadly. In addition, use Field-Level Security to limit which fields AI features can read; this keeps sensitive fields out of training and inference.
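As a minimal sketch, assuming the queried records will later feed an AI prompt or prediction (the Contact query and the downstream use are illustrative), Security.stripInaccessible can enforce FLS in Apex before any AI-driven logic sees the data:

// Remove fields the running user cannot read before the records reach
// any AI-driven logic; the Contact query below is illustrative.
List<Contact> contacts = [SELECT Id, Email, Phone, Description FROM Contact LIMIT 100];
SObjectAccessDecision decision = Security.stripInaccessible(
    AccessType.READABLE, contacts
);
// getRecords() returns copies with the inaccessible fields removed.
List<Contact> safeContacts = (List<Contact>) decision.getRecords();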
Salesforce's Prompt Builder lets admins create templates that send user and record data to large language models. If these templates are not secured properly, they can expose PII or sensitive business data.
Use data masking in prompt testing and QA environments: Salesforce Data Mask can anonymize or redact sensitive information in sandboxes before you test prompts there.
Also, ensure prompt template usage is logged and audited; this lets you trace any data exposure or prompt misuse later.
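One way to make that audit trail concrete is to log each template invocation yourself. The sketch below assumes a hypothetical custom object named AI_Prompt_Log__c with three text fields; adapt the names to whatever schema your org uses:

// AI_Prompt_Log__c is a hypothetical custom object with Template_Name__c,
// Record_Id__c, and Invoked_By__c text fields (assumed, not standard).
public with sharing class PromptAuditLogger {
    public static void logInvocation(String templateName, Id recordId) {
        insert new AI_Prompt_Log__c(
            Template_Name__c = templateName,
            Record_Id__c     = String.valueOf(recordId),
            Invoked_By__c    = UserInfo.getUserName()
        );
    }
}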
Some AI features, like Einstein Discovery or Einstein Next Best Action, use historical data to train models. This data may come from various objects and fields across your org.
Define clear data access policies that specify which objects and fields may be used for model training and who can expand that scope.
For organizations with multi-region deployments, ensure that training data complies with data residency laws. Avoid using globally scoped models unless the data-sharing agreement supports it.
Once AI features are enabled, it's critical to audit their usage and behavior. Salesforce provides logs and monitoring tools for this; in particular, enable Event Monitoring so that AI-related user actions and data access events are captured in event log files.
[Screenshot: Salesforce Event Monitoring interface displaying logs of AI usage, user actions, and data access events for compliance tracking.]
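If your org has an Event Monitoring license, the underlying log files are queryable like any other object. A sketch, using the API event type as one example of many:

// Fetch yesterday's API event logs for review; other EventType values
// cover Apex execution, logins, report exports, and more.
List<EventLogFile> logs = [
    SELECT Id, EventType, LogDate, LogFileLength
    FROM EventLogFile
    WHERE EventType = 'API' AND LogDate = YESTERDAY
];
for (EventLogFile logFile : logs) {
    System.debug(logFile.EventType + ' log generated on ' + logFile.LogDate);
}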
You can also create custom objects to store metadata about AI usage. Use the data in dashboards to track trends, spikes, or anomalies.
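Reusing the hypothetical AI_Prompt_Log__c object from the earlier sketch, an aggregate query can surface spikes before they ever reach a dashboard:

// Count the last week's prompt invocations per template; a sudden spike
// for a single template is worth investigating.
List<AggregateResult> usage = [
    SELECT Template_Name__c name, COUNT(Id) total
    FROM AI_Prompt_Log__c
    WHERE CreatedDate = LAST_N_DAYS:7
    GROUP BY Template_Name__c
];
for (AggregateResult row : usage) {
    System.debug(String.valueOf(row.get('name')) + ': ' + row.get('total'));
}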
Additionally, review Salesforce’s AI Usage Dashboard from Setup > Reports > Einstein Usage to get an overview of feature usage across users and departments.
Salesforce’s AI features may send data outside your org for processing. For example, Einstein GPT may call external models or use OpenAI APIs depending on the configuration.
To secure this, review each feature's configuration so you know exactly where data is sent, and restrict outbound calls to approved endpoints. If compliance is a concern, you can configure your org to use Salesforce-hosted models instead of third-party APIs, which reduces the risk of data leakage through external services.
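When an external callout is unavoidable, routing it through a Named Credential keeps the endpoint and authentication under admin control rather than hard-coded in Apex. A sketch, where My_AI_Endpoint is a placeholder credential you would define in Setup and the request body is illustrative:

// 'My_AI_Endpoint' is a placeholder Named Credential; the path and JSON
// body are illustrative, not a real model API contract.
HttpRequest req = new HttpRequest();
req.setEndpoint('callout:My_AI_Endpoint/v1/complete');
req.setMethod('POST');
req.setHeader('Content-Type', 'application/json');
req.setBody('{"prompt": "Summarize this case"}');
HttpResponse res = new Http().send(req);
System.debug(res.getStatusCode());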
Salesforce AI features provide a lot of automation and insight, but they also increase the attack surface of your org. Securing these features requires more than just setting permissions: it involves reviewing prompts, limiting model access to appropriate data, monitoring usage, and being transparent with users.
Treat every AI model, prompt, and prediction as a potential endpoint for sensitive data. Apply the same controls you would to any external API or integration. With proper planning and secure configuration, Salesforce AI can be powerful and compliant at the same time.