Context Enables (Some) Automation in Security

Tal Shapira
May 10, 2023 · Updated September 1, 2024 · 4 min read

A recent Forrester article argues that automation is of limited to no value in security, and that automation can't deliver an autonomous security operations center (SOC). At Reco, we (respectfully) disagree. We believe that security tools that understand context rather than just content can support automation of, if not all tasks, then a significant proportion of them, enabling automated detection and resolution of security incidents.

The Argument: Humans Can Always Outsmart Machines

The article argues that one of the main reasons automation does not work in security is that humans (read: attackers) who want to will always outsmart machines with predefined rules. The argument states that “technology will always be limited by the purpose it is designed for, and will always lack the creativity and scope to address every single potential threat”.

More relevant for Reco, the article holds that AI is limited because it can only work within the rules defined in its framework and is not capable of human ingenuity.

However, AI and machine learning have progressed significantly since 1997, the year the article cites as the machines' ultimate achievement, when IBM’s Deep Blue beat chess champion Garry Kasparov. Over the past 25 years, AI has been applied to any number of uses, from translating speech to diagnosing cancer, and later algorithms can now beat both earlier algorithms and humans at chess and poker, to mention just a couple of games.

By now it is commonly accepted that AI algorithms can outperform humans at identifying patterns, analyzing data, and learning new things. As a result, an AI algorithm can now be trained to learn about new additions to a network, or new attack techniques, almost as quickly as attackers adopt them.

Security Needs Automation, but Automation Needs Help

Further, this argument fails to take into account that humans are simply not capable of keeping up with the changing security demands posed by new ways of working and the often unseen threats resulting from collaboration tools and a proliferation of SaaS-based platforms. In today’s world of collaboration tools, data doesn’t fall into neat groups. Users share data through any number of platforms, usually instantly, and a data incident can come from anywhere at any time.

As collaboration tool usage increases, two trends are emerging: increasing quantities of data are being leaked, and under-resourced security teams are drowning under the weight of alerts created by conventional security tools. Human-based security teams are not keeping up, and they are not achieving security goals.

A further argument against the claim that humans are better than machines is the simple fact that no single person or security team (if an organization has the luxury of a fully staffed one) can keep up with the speed of business, understand all the different connections across the organization, and determine whether an action is good, bad, or malicious. As a result, they must either take the risk of letting something pass through or block the business while they investigate. Neither is ideal.

With Context, Automation Tools Can Remediate Incidents and Mitigate Risk

Perhaps what has been missing until now is context. By context we mean the ability to understand the wider ecosystem when assessing an action that has been performed, in order to determine whether that action is acceptable. Armed with this context, a security tool can automatically understand who works with whom, know who rightfully has access to specific tools, systems, or data, and make an informed decision about whether a specific action is justified.

In Reco’s collaboration security platform, understanding context and justifying actions are the end goal. The AI business-context justification engine is given a simplified security rule: is this action justified? If yes, approve it; if not, block it and alert the security team. Once this justification has been made, all that remains is a simple remediation, which can be carried out by an automation workflow or playbook.
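
To make that flow concrete, here is a minimal sketch of the decision logic in Python. The names (ActionContext, is_justified, block_and_alert) and the thresholds are hypothetical illustrations of the approach described above, not Reco’s actual engine, which relies on a learned model rather than hand-written rules.

```python
# Hypothetical sketch only: ActionContext, is_justified, and block_and_alert
# are illustrative names, not Reco's actual API.

from dataclasses import dataclass


@dataclass
class ActionContext:
    actor: str                  # who performed the action
    recipient: str              # who received the shared data
    data_sensitivity: str       # e.g. "public", "internal", "sensitive"
    prior_interactions: int     # how often these two parties have worked together
    recipient_has_access: bool  # does the recipient legitimately handle this data?


def is_justified(ctx: ActionContext) -> bool:
    """Answer the single simplified rule: is this action justified in context?"""
    if ctx.data_sensitivity != "sensitive":
        return True
    # Sensitive data: allow only established, authorized relationships.
    return ctx.recipient_has_access and ctx.prior_interactions > 0


def block_and_alert(ctx: ActionContext) -> str:
    # Stand-in for an automation workflow or playbook: revoke the share,
    # notify the security team, open a ticket, and so on.
    print(f"Blocking share from {ctx.actor} to {ctx.recipient}; alerting the SOC.")
    return "blocked"


def handle_action(ctx: ActionContext) -> str:
    # If the action is justified, approve it; otherwise block and alert.
    return "approved" if is_justified(ctx) else block_and_alert(ctx)


# Example: an established payroll contact receiving data they are authorized to handle.
ctx = ActionContext(actor="alice@corp.example", recipient="vendor@payroll.example",
                    data_sensitivity="sensitive", prior_interactions=12,
                    recipient_has_access=True)
print(handle_action(ctx))  # -> "approved"
```

In practice the justification signal would come from a model of the organization’s collaboration patterns; the point is that once the binary “justified or not” answer exists, the remediation itself is simple enough to hand to a playbook.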

This applies to data of all types, but especially for sensitive data. Let’s take the example of a new employee’s social security number. As especially sensitive data, it should be protected at all times. But if payroll is outsourced, then it also likely needs to be shared externally.

A blanket content-based rule would immediately recognize the social security number and block the action (thereby potentially delaying the employee’s salary). In contrast, a context-based engine would understand that the recipient of the email containing the social security number is someone with whom the organization has a long-standing relationship and who has received this kind of data before. The action is therefore most likely justified and can be permitted.
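
Continuing the payroll example, the sketch below contrasts a content-only rule with a context-aware check. The regex, trusted-domain list, and thresholds are hypothetical; they simply illustrate why the same email is blocked by one approach and permitted by the other.

```python
import re

# Hypothetical illustration only: pattern, trusted domains, and thresholds are made up.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")


def content_based_rule(message: str) -> str:
    # Blocks anything that looks like an SSN, even a legitimate transfer to payroll.
    return "blocked" if SSN_PATTERN.search(message) else "allowed"


def context_based_rule(message: str, recipient_domain: str,
                       trusted_payroll_domains: set[str],
                       prior_shares_with_recipient: int) -> str:
    if not SSN_PATTERN.search(message):
        return "allowed"
    # Sensitive content found: weigh the relationship, not just the content.
    if recipient_domain in trusted_payroll_domains and prior_shares_with_recipient > 0:
        return "allowed"  # long-standing, authorized recipient
    return "blocked"      # unknown recipient: block and alert


# The same email is blocked by the content rule but permitted in context.
msg = "New hire SSN: 123-45-6789"
print(content_based_rule(msg))                                              # blocked
print(context_based_rule(msg, "payroll.example", {"payroll.example"}, 12))  # allowed
```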

A Fully Automated SOC May Not be Here Yet…

… But we have definitely taken significant steps towards it. Advances in technology are making it possible to automate specific tasks while remaining confident that they are being carried out correctly.

Further, we would argue that automation is a requirement to stay ahead of adversaries of all types and intents; to not automate key elements of security is no longer an option. Security analysts’ time and resources are too scarce and precious to spend all day wading through long lists of alerts, manually resolving each and every incident. What we at Reco are doing is adding the context needed to make remediation decisions, which can then be used within existing workflows that leverage automated tools to resolve alerts and minimize risk.

Dr. Tal Shapira

ABOUT THE AUTHOR

Tal is the Cofounder & CTO of Reco. Tal has a Ph.D. from the school of Electrical Engineering at Tel Aviv University, where his research focused on deep learning, computer networks, and cybersecurity. Tal is a graduate of the Talpiot Excellence Program, and a former head of a cybersecurity R&D group within the Israeli Prime Minister's Office. In addition to serving as the CTO, Tal is a member of the AI Controls Security Working Group with the Cloud Security Alliance.

Technical Review by:
Gal Nakash