Navigating the New Frontier of AI Governance: Insights from Digital World Conference Summit

Tal Shapira
Published April 18, 2024 · Updated November 29, 2024 · 5 min read

Introduction

As we stand at the convergence of unprecedented technological advancement and global challenges, AI safety governance has never been more important. On April 14, 2024, I had the distinct honor of participating as a panelist in the Generative AI Governance discussion at the Digital World Conference (DWC) Summit, "Building Trust in the Digital World," in Geneva, Switzerland. The event was organized by the World Digital Technology Academy (WDTA).

Participating in the "Building Trust in the Digital World" event

Insights from the DWC Summit

The DWC Summit was a melting pot of ideas and initiatives supported by WDTA’s commitment to advancing digital technology for all. The panel discussion I participated in brought together experts from the United Nations International Computing Centre, the Norwegian Academy of Engineering, Georgetown’s CSET, OpenAI, and of course, Reco AI.

During the session, we discussed the challenges of governing GenAI. The technology's rapid evolution has opened governance gaps that demand robust frameworks to ensure it contributes positively to society. At Reco, we've observed a sharp increase in the deployment of GenAI applications, which means organizations need a new approach to SaaS security.

Our discussion extended to the growing use of GenAI apps and the corresponding rise in third-party app connections, authentication permissions, and tokens. This trend underscores the urgent need for comprehensive, AI-driven security measures that not only protect against known threats but also anticipate new ones.

During the panel, the moderator, Ken Huang, Chair of the WDTA AI STR Working Group, posed the following question to me:

"Given Reco's recent research on the potential risks of Microsoft Copilot, can you discuss the risks of generative AI usage and how to defend against them?"

On March 14, Microsoft introduced Copilot to its Microsoft 365 environment. Copilot is designed to enhance productivity through a conversational AI interface that helps users conduct research and create content. However, our research at Reco reveals significant risks associated with the broad permissions often granted to such tools. Copilot, like other generative AI tools, has access to every file a user can access, including files shared widely across the organization. This expansive access poses a notable risk of data leakage that can be triggered by a single prompt.

In our deep-dive analysis, "Are you ready for Microsoft Copilot?", we discovered that generative AI tools can access and leak sensitive organizational data. For example, if an account is compromised, a malicious actor can use Copilot to quickly locate and exploit sensitive information, bypassing the time-consuming process of manually searching through files. Shockingly, these interactions often leave minimal audit trails, unlike regular user access, which is logged and can be audited. Our team managed to retrieve the names of restricted files simply by querying Copilot about their authors, highlighting a severe gap in data access logging and control.


To defend against these risks, we recommend educating employees about the risks involved in using GenAI tools, meticulously configuring AI applications, monitoring for configuration drift, reducing data exposure, and implementing strict access controls. Additionally, deploying advanced AI-driven monitoring and detection systems is critical for identifying and mitigating potential breaches or misuse of AI applications.
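To make the configuration-drift recommendation concrete, here is a minimal Python sketch: it compares a previously approved configuration snapshot against the current settings and flags anything that changed. The setting names and the idea of a stored baseline are illustrative assumptions, not a specific vendor's admin API.

```python
# Minimal, illustrative configuration-drift check (not a vendor API).
# "baseline" is a previously approved snapshot of an AI app's settings;
# "current" would in practice be pulled from the SaaS platform's admin API.
from typing import Any


def diff_settings(baseline: dict[str, Any], current: dict[str, Any]) -> list[str]:
    """Return human-readable descriptions of settings that drifted from the baseline."""
    findings = []
    for key, expected in baseline.items():
        actual = current.get(key)
        if actual != expected:
            findings.append(f"{key}: expected {expected!r}, found {actual!r}")
    # Settings introduced after the baseline was approved also deserve review.
    for key in current.keys() - baseline.keys():
        findings.append(f"{key}: new setting not in baseline ({current[key]!r})")
    return findings


if __name__ == "__main__":
    baseline = {"web_search_enabled": False, "allow_external_sharing": False}
    current = {
        "web_search_enabled": False,
        "allow_external_sharing": True,       # drifted from the approved value
        "plugin_marketplace_enabled": True,   # added after the baseline was set
    }
    for finding in diff_settings(baseline, current):
        print("DRIFT:", finding)
```

In practice, a comparison like this can run on a schedule and feed alerts into an existing SIEM or ticketing workflow rather than printing to a console.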

Conclusion: How GenAI Tools Can Be Used for AI Governance

As the cartoon show Jackie Chan Adventures puts it, "You can only defeat magic with magic." This is the approach needed in AI governance: essentially, the only way to fight AI is with GenAI. Depending on the use case, generative AI tools can be adapted to govern themselves and other digital tools effectively. For instance, when training or fine-tuning foundation models, we can use large language models (LLMs) to scan training data for personally identifiable information (PII) or payment card information (PCI), preventing data privacy breaches before they occur. In production environments, AI guardrails can scan user inputs and outputs. These models can block attacks like prompt injection, which manipulates systems into granting unauthorized access, and can also filter out harmful content, maintaining the integrity of interactions.
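As a rough illustration of the pre-training scanning idea, the sketch below flags records that contain likely PII (email addresses) or PCI (card numbers passing a Luhn check) before they enter a fine-tuning dataset. The regexes stand in for what would, in a real pipeline, likely be an LLM-based or dedicated DLP classifier; the sample record is invented.

```python
# Illustrative pre-training data filter: flag records with likely PII/PCI
# before they are added to a fine-tuning dataset.
import re

EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,19}\b")


def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum used by card numbers."""
    digits = [int(d) for d in re.sub(r"\D", "", number)][::-1]
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return len(digits) >= 13 and total % 10 == 0


def scan_record(text: str) -> list[str]:
    """Return the reasons (if any) this record should be redacted or excluded."""
    reasons = []
    if EMAIL_RE.search(text):
        reasons.append("contains email address (PII)")
    for match in CARD_RE.findall(text):
        if luhn_valid(match):
            reasons.append("contains likely payment card number (PCI)")
            break
    return reasons


if __name__ == "__main__":
    sample = "Invoice for jane.doe@example.com, card 4111 1111 1111 1111."
    print(scan_record(sample))  # flags both PII and PCI
```

Flagged records can then be redacted or dropped before any fine-tuning run, which is far cheaper than remediating a model that has already memorized sensitive values.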

At Reco, we use AI to detect shadow GenAI apps, meaning any unsanctioned applications utilizing generative AI, and to manage the resulting unauthorized data access risks. Our AI tools can identify abnormal user behaviors, such as unauthorized data access or unusual app-to-app permissions, which could indicate sophisticated attacks like SaaS session hijacking. We also leverage generative AI to automate the creation of security policies and compliance rules, helping maintain agile, relevant governance processes aligned with the latest cybersecurity best practices. This significantly reduces the administrative burden on security teams. Reco also uses generative AI to map compliance requirements to specific controls, policies, and posture checks within SaaS applications. By embracing generative AI, we ensure that our governance tools are not only effective but also continually evolving to meet the challenges posed by new technologies, thereby securing our digital future.
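To illustrate the shadow GenAI detection concept (and not Reco's actual implementation), here is a small sketch that reviews OAuth grants, represented as plain records, and flags apps that are unsanctioned or hold broad data scopes. The grant fields, scope names, and allowlist are hypothetical.

```python
# Illustrative shadow GenAI app detection over a hypothetical OAuth grant inventory.
from dataclasses import dataclass


@dataclass
class OAuthGrant:
    app_name: str
    user: str
    scopes: list[str]


# Hypothetical allowlist and "broad data access" scope names.
SANCTIONED_APPS = {"Microsoft 365 Copilot", "Slack"}
BROAD_SCOPES = {"files.read.all", "mail.read", "directory.read.all"}


def flag_shadow_genai(grants: list[OAuthGrant]) -> list[str]:
    """Flag grants to unsanctioned apps or grants carrying broad data scopes."""
    findings = []
    for grant in grants:
        reasons = []
        if grant.app_name not in SANCTIONED_APPS:
            reasons.append("unsanctioned app")
        risky = BROAD_SCOPES.intersection(grant.scopes)
        if risky:
            reasons.append("broad scopes: " + ", ".join(sorted(risky)))
        if reasons:
            findings.append(f"{grant.user} -> {grant.app_name}: " + "; ".join(reasons))
    return findings


if __name__ == "__main__":
    grants = [
        OAuthGrant("AI Summarizer Extension", "alice@example.com",
                   ["files.read.all", "mail.read"]),
        OAuthGrant("Slack", "bob@example.com", ["chat.write"]),
    ]
    for finding in flag_shadow_genai(grants):
        print("SHADOW AI:", finding)
```

In a real deployment, the grant inventory would come from each SaaS platform's audit or admin APIs, and findings would feed a review workflow rather than a print statement.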


Dr. Tal Shapira

ABOUT THE AUTHOR

Tal is the Cofounder & CTO of Reco. Tal has a Ph.D. from the School of Electrical Engineering at Tel Aviv University, where his research focused on deep learning, computer networks, and cybersecurity. Tal is a graduate of the Talpiot Excellence Program and a former head of a cybersecurity R&D group within the Israeli Prime Minister's Office. In addition to serving as the CTO, Tal is a member of the AI Controls Security Working Group with the Cloud Security Alliance.

Technical Review by:
Gal Nakash
