
SECtember AI Think Tank Reflections: Shaping the Future of AI Security & Governance

Tal Shapira
November 30, 2023
3 min

On September 22, 2023, I had the honor of being the opening speaker at the "SECtember AI Think Tank Day" in Bellevue, WA, hosted by the Cloud Security Alliance (CSA). As the Co-Founder & CTO at Reco AI, a company at the forefront of AI-powered SaaS security, I was thrilled to share insights on the transformative power of Generative AI and its implications for cybersecurity. This event was a pivotal platform for AI innovators and experts to discuss industry priorities for AI research and to soft launch CSA's AI Safety Initiative.

The Rise of Generative AI:
We are witnessing the most significant technology trend of our time: the rise of Artificial Intelligence. Generative AI, a technology capable of producing diverse content types, is revolutionizing industries, governments, and even hackers' strategies. Large language models (LLMs), which are deep learning models with billions or even trillions of parameters, have opened a new era in which Generative AI models like ChatGPT, Claude, and DALL·E are transforming the world by writing engaging text and creating photorealistic images on the fly.

The Main Flows for Using LLMs in Enterprises:

Ken Huang, Co-Author of the OWASP Top 10 for LLM, presented the main flows for using LLMs in enterprises:

  1. Training or fine-tuning an LLM on proprietary organizational data for a specific use case.
  2. RAG (Retrieval-Augmented Generation): an AI framework that retrieves facts from an external knowledge base to ground LLMs on the most accurate, up-to-date information and to give users insight into the generative process. This flow requires a vector database.
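The RAG flow can be sketched in a few lines. This is a toy illustration only: the "embedding" is a bag-of-words counter and the "vector DB" is a Python list, standing in for a real embedding model and a dedicated vector database; all names and documents here are illustrative.

```python
# Toy RAG retrieval step: rank documents by cosine similarity of
# bag-of-words vectors, then fold the top hits into a grounded prompt.
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    """Stand-in for a real embedding model: bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Stand-in for a vector DB lookup: top-k most similar documents."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Our SaaS security policy requires MFA for all admin accounts.",
    "The cafeteria menu changes every Monday.",
    "Vector databases store embeddings for similarity search.",
]
print(build_prompt("What does the security policy require?", docs))
```

The grounding is the key idea: the model answers from retrieved context rather than from its parameters alone, which is what makes the responses auditable and up to date.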

Additionally, I would like to emphasize that there is an easier, third flow: using an agent or chain that retrieves data through multiple REST APIs or SQL queries. This option is widely adopted, with many new applications built on libraries such as LangChain and LlamaIndex.
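A minimal sketch of this agent/chain flow, without any framework: a router picks a retrieval tool, the tool fetches data (here, via an in-memory SQL database or a canned API response), and the result is folded into the prompt. In a real agent the LLM itself chooses the tool; the keyword router, table, and tool names below are all illustrative stand-ins.

```python
# Toy agent/chain flow: route a question to a retrieval tool (SQL or
# a stubbed REST API), then build a context-grounded prompt.
import sqlite3

def sql_tool(question: str) -> str:
    """Retrieval tool: answer from a local SQL database."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE apps (name TEXT, category TEXT)")
    conn.executemany("INSERT INTO apps VALUES (?, ?)",
                     [("ChatGPT", "GenAI"), ("Claude", "GenAI"), ("Okta", "IAM")])
    rows = conn.execute(
        "SELECT name FROM apps WHERE category = 'GenAI'").fetchall()
    return ", ".join(name for (name,) in rows)

def api_tool(question: str) -> str:
    """Retrieval tool: stand-in for a REST API call."""
    return "status: all systems operational"  # canned response for the sketch

TOOLS = {"apps": sql_tool, "status": api_tool}

def agent(question: str) -> str:
    """Keyword router standing in for LLM tool selection."""
    tool = next((fn for kw, fn in TOOLS.items() if kw in question.lower()),
                api_tool)
    return f"Context: {tool(question)}\nQuestion: {question}"

print(agent("Which GenAI apps are in use?"))
```

The appeal of this flow is that it needs no fine-tuning and no vector database: existing APIs and databases become the knowledge base, which is exactly why libraries like LangChain and LlamaIndex have seen such rapid adoption.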

The Double-Edged Sword of Generative AI:

Generative AI offers numerous benefits, including improving business productivity and enhancing cybersecurity programs. As cybersecurity defenders, we can harness Generative AI for real-time threat intelligence, shadow app discovery, phishing detection, policy auto-generation, and more.

Caption: “A risky dance?”, Lake Diablo, North Cascades National Park, WA, USA (photo taken by the author)

However, the same technology poses significant security risks. We see malicious actors exploiting Generative AI for enumeration, dynamic malware creation, and social engineering, as we all witnessed in the past week with the MGM Resorts cyber attack. Jason Clinton, CISO at Anthropic, one of the leading companies in the field, presented valuable insights on Frontier Model Security.

Furthermore, the risk is not restricted to malicious actors. Interest in Generative AI has exploded since October 2022, to the point that Generative AI itself is becoming a genuine shadow app problem. At Reco, for example, we discovered more than 20 new Generative AI apps used by our employees in the last month alone. Caleb Sima, Chair of the AI Safety Initiative, presented Open Interpreter, which lets LLMs run code via a terminal on a user's computer to complete tasks. Such a tool can exploit every permission and access the user has, and in the worst case can even expose a user's private data over the web or a social media app, highlighting the importance of enforcing permissions.

Therefore, the risks extend to legitimate usage by employees and third-party apps, leading to data exposure, compliance issues, and misinformation. The challenge is that most users of AI apps are neither technical nor security-aware, making it crucial for security practitioners to establish robust AI/ML security best practices, particularly an Access Control Policy and App Governance Procedures.


The AI Think Tank Day was a groundbreaking workshop that provided key insights into the responsible usage of Generative AI and its benefits and risks in cybersecurity. As we continue to leverage AI in various domains, it is imperative to build better AI/ML security best practices as a community and stay vigilant against the security implications of this transformative technology.

Author Bio:

Tal Shapira, Ph.D., is the Co-Founder & CTO at Reco AI, specializing in AI-powered SaaS security, with a decade of experience researching Generative AI in the context of cybersecurity, both in academia and during his time as a Cybersecurity Group Leader at the Israeli Prime Minister's Office.

Additional References:
- CSA research paper "Security Implications of ChatGPT"
- Jim Reavis, CEO at CSA, “Hi ChatGPT, please help Cybersecurity”

*This blog post was crafted with the assistance of ChatGPT.
