
OpenAI Leaks Cloud Storage

Dvir Sasson
May 23, 2024
7 mins

Introduction

OpenAI, the creators of ChatGPT, have been quite busy lately, which also means they’ve been busy making headlines. From ChatGPT Voice and the ensuing drama over how they used an actor’s voice likeness without their permission to the release of GPT-4o, they’re showing they’re still fully capable of turning heads in the saturated world of AI. 

In this flurry of releases, one addition to the latest version of ChatGPT caught our eye for the wrong reasons. Unfortunately, that very feature could pose a serious threat to both your personal and organizational data.

Connections to Shared Drives Could Spell Data Disaster

The latest release of ChatGPT unveiled a feature common to SaaS applications: connectors to shared drives, namely Google Drive and Microsoft OneDrive, so it can answer complex questions based on your own data. Connectors like these can boost productivity immensely. However, they also widen the attack surface, because the application requires access to personal and organizational environments where highly sensitive data is held, from personal data such as metadata on stored photographs to organizational data such as proprietary files and folders, plans, account names, and roles.

ChatGPT connector requires access to both personal and organizational environments.
Once access is granted, ChatGPT can view, edit, create and delete data in your shared drive.
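One practical way to see what a connector is really asking for is to inspect the `scope` parameter in the OAuth authorization URL it redirects you to. The sketch below is illustrative only: the client ID is a placeholder and the scope names are standard Microsoft Graph delegated permissions, not a captured ChatGPT request.

```python
from urllib.parse import urlparse, parse_qs

# Delegated Microsoft Graph scopes that allow modifying or deleting
# drive contents, not merely reading them.
RISKY_SCOPES = {"Files.ReadWrite", "Files.ReadWrite.All", "Sites.ReadWrite.All"}

def requested_scopes(authorize_url: str) -> list[str]:
    """Extract the space-separated `scope` parameter from an OAuth consent URL."""
    query = parse_qs(urlparse(authorize_url).query)
    return query.get("scope", [""])[0].split()

def flag_risky(authorize_url: str) -> set[str]:
    """Return the subset of requested scopes that grant write/delete access."""
    return set(requested_scopes(authorize_url)) & RISKY_SCOPES

# Hypothetical consent URL, for illustration only.
url = ("https://login.microsoftonline.com/common/oauth2/v2.0/authorize"
       "?client_id=00000000-0000-0000-0000-000000000000"
       "&scope=openid%20Files.ReadWrite.All%20offline_access")
print(flag_risky(url))  # {'Files.ReadWrite.All'}
```

If the scope list includes anything beyond read-only access, that is the moment to pause before clicking accept.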

By now, it’s not a matter of if you store data in the cloud, but how much. And when most people connect these applications to each other, they aren’t thinking about the threat vectors that could open up; they’re thinking how great it is that they won’t have to download a file from one service just to upload it to another. 

But in the case of connecting ChatGPT to our shared folders, we have to ask ourselves: do we want to hand such important personal and organizational data to a market-leading tech giant for use as part of its training data?

This question becomes even more worrying when you consider that even what we think is inaccessible or deleted might not be, such as the recent discovery of how deleted photos on iPhones weren’t actually deleted. What else could be lingering out there? And, even more chilling, what if it fell into the wrong hands?

Walkthrough: Connecting Your Accounts to OpenAI 

Let’s take a quick walkthrough of what connecting these accounts actually looks like, whether done intentionally or not.

In the case of a personal OneDrive account, it’s easy: a quick acceptance, since, ostensibly, only you are impacted by your own data-sharing actions.

When connecting your account, ChatGPT will ask for access to your data in the shared drives, not that of others.

However, when it comes to an organizational connection, it appears you can consent on behalf of your entire organization in the (admittedly rare) case that you’re a tenant admin! 

This means anyone with tenant admin rights could connect an entire organization’s OneDrive without any other authorization required, regardless of internal governance. 

Anyone with tenant admin rights could connect an entire organization’s OneDrive without any other authorization.
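For context, Microsoft Entra ID exposes a dedicated admin-consent endpoint: when a tenant admin approves through it, the grant applies tenant-wide rather than to a single user. A minimal sketch of how that URL is formed, with a placeholder client ID and redirect URI:

```python
from urllib.parse import urlencode

def admin_consent_url(tenant: str, client_id: str,
                      redirect_uri: str, scopes: list[str]) -> str:
    """Build the Entra ID v2.0 admin-consent URL. Approving this grants
    the requested scopes for EVERY user in the tenant, not just the admin."""
    params = urlencode({
        "client_id": client_id,
        "scope": " ".join(scopes),
        "redirect_uri": redirect_uri,
    })
    return f"https://login.microsoftonline.com/{tenant}/v2.0/adminconsent?{params}"

# Placeholder tenant, app ID, and callback for illustration only.
url = admin_consent_url(
    "contoso.onmicrosoft.com",
    "11111111-1111-1111-1111-111111111111",
    "https://example.com/callback",
    ["Files.ReadWrite.All", "offline_access"],
)
```

The single click behind that URL is what makes the tenant-admin path so much more consequential than a personal connection.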

As you can see, the prompt hints at just a few of the permissions this application will need, but as they say, the devil is in the details. So let’s look at them.

Devil in the details: ChatGPT requests an extensive list of data it will need access to.

My guess is that even people only vaguely familiar with security will grimace at this list. 

Once connected, the user is able to select specific files to be analyzed, which is certainly a good thing, in theory. However, given the vast permissions requested earlier (including accessing, editing, and deleting all files), this selection appears to be little more than a UI dialog. 

Once connected, the user is able to select specific files to be analyzed.

Because of these wide, vague permissions, it’s natural to wonder what would happen if the connected app itself were attacked and compromised. With roughly two-thirds of breaches originating from stolen credentials, this is a scenario every organization, of every size, needs to consider before adopting technology with permissions like these. Our educated guess (which is all we have this early in the feature’s life) is that every connected account within that organization could be impacted as well, riding on the permissions each user granted. 
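To illustrate why the breadth of the grant matters, here is a hedged sketch of the two Microsoft Graph calls a stolen delegated token would be sufficient to make. The requests are only constructed, never sent, and the token and item ID are placeholders:

```python
import urllib.request

GRAPH = "https://graph.microsoft.com/v1.0"

def list_drive_request(token: str) -> urllib.request.Request:
    """GET the items at the root of the signed-in user's OneDrive."""
    return urllib.request.Request(
        f"{GRAPH}/me/drive/root/children",
        headers={"Authorization": f"Bearer {token}"},
    )

def delete_item_request(token: str, item_id: str) -> urllib.request.Request:
    """DELETE a single drive item -- within the reach of a
    Files.ReadWrite.All delegated grant."""
    return urllib.request.Request(
        f"{GRAPH}/me/drive/items/{item_id}",
        headers={"Authorization": f"Bearer {token}"},
        method="DELETE",
    )

# With a valid bearer token, urllib.request.urlopen(...) on either request
# would enumerate or destroy files with no further authorization prompt.
```

The point is not that these two endpoints are unusual; it is that a token scoped this broadly turns a single credential theft into full read-write-delete reach over the drive.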

It’s worth noting this scenario of stolen credentials accessing your environment through a third party app is true of any SaaS application, but it’s especially worth considering when that same SaaS provider also adds the risk of using your personal and/or proprietary data for its own enhancements and training. 

In response to this new feature, Reco now supports alerting on cases where users connected, or attempted to connect, their corporate account to ChatGPT. 

Reco has released policies that alert when users connect corporate shared drives to ChatGPT.

Summary and Suggestions to Protect Your Environment from ChatGPT-4o

Not quite sure what to do or where to go from here? Let’s take a minute to recap: 

  • ChatGPT 4o’s release included a feature to connect your Google Drive or Microsoft OneDrive accounts for easier access and improved answering based on custom (aka, your) data.
  • This feature is incredible for workflows and ease of use, yet terrible for privacy and security because:
    • It can potentially use private and/or proprietary data stored on shared drives as data to train OpenAI products.
    • It increases the threat surface of a potential data leak due to vast permissions granted to the application.
    • Once connected, you are unable to offboard your data and/or be forgotten, i.e., OpenAI maintains access to these files even after you no longer want it to and try to opt out.

Right now, based on the cost/benefit analysis of this feature, our recommendation to security admins is to:

  • Immediately block any OpenAI apps.
  • Start educating all users on what the Grant button means for both them and the entire organization.
  • Set alerts within Reco to notify when users connect (or attempt to connect) their corporate accounts to OpenAI applications. 

This new feature has shown just how quickly threat surfaces can widen for organizations, and how quickly widely used tools can be turned against us. When dealing with SaaS security, understanding what you have, what it’s connected to, and what permissions it needs is more important than ever. If you need more visibility into your SaaS environment, Reco can help.

ABOUT THE AUTHOR

Dvir Sasson

Dvir is the Director of Security Research, bringing a vast array of cybersecurity expertise gained over a decade in both offensive and defensive capacities. His areas of specialization include red team operations, incident response, security operations, governance, security research, threat intelligence, and safeguarding cloud environments. With CISSP and OSCP certifications, Dvir is passionate about problem-solving, developing automation scripts in PowerShell and Python, and delving into the mechanics of breaking things.

Technical Review by:
Gal Nakash
