AI-Washing at RSA

Cutting through the AI noise to find real security innovation

This week’s RSA conference in San Francisco was filled with the usual crowds, vendor hoopla, tchotchkes, and sore feet. But what surprised me was seeing relatively few vendors talking about protecting the exploding use of AI applications and LLMs.

There were plenty of AI mentions – in fact, it’s hard to find a vendor that doesn’t have some kind of AI story, using terms like “AI-powered,” “AI-driven,” or “AI-enabled,” to name a few. Given the booming interest in AI, you can imagine the urgent discussions of “we need an AI story” when vendors were designing their RSA signs several months ago. (All of the photos in this blog were taken from vendor booth signage at the show.)

It was quite different at VC events earlier in the week. Investors are quickly placing bets on any new technology that is seen to protect AI systems, establish guardrails for LLMs, or simply create visibility within organizations about AI skunk-works projects. Yet, I saw relatively little of this on the RSA show floor. So, what’s going on?

It's no longer critical to “have a presence” at RSA

In the pre-COVID era, it was viewed as essential that all but the tiniest security companies have a booth, parties, and more at RSA or Black Hat. Over the last 20 years, I have managed hundreds of thousands of dollars of vendor spend at these shows. It’s very easy to get swept up in the arms race of outspending competitors to gain a supposed edge at large conferences.

But the ROI on these large shows has always been tenuous. Sure, you might happen to meet the biggest client ever, who will spend millions with your company. Much more likely is that you’ll come away with hundreds of “leads,” but even the “hot” ones seem to evaporate quickly post-event. Face-to-face customer contact is great if you can orchestrate it, but if an enterprise is looking for your type of solution, they’re much more likely to find you online.

The decision not to spend hugely at RSA has become increasingly easy for small to mid-size security companies. Sure, you should attend the event, and maybe even sponsor some parties around the show. But having a booth? That’s a huge expense that many companies are now comfortable skipping.

Small, agile, AI innovators are under-represented

The AI security market is hot, but it is also very nascent. While larger security vendors are AI-washing their signs, scrappy startups are racing ahead, developing new AI security solutions and getting funded quickly. Of course, we know it takes longer to build a sustainable, enterprise-class market, but early movers can be massively rewarded in this VC-driven industry. The bottom line is that these fast-moving innovators are too busy and don’t need to spend the money at RSA to get noticed.

Meanwhile, the big guys with the huge booths and lavish budgets were planning their RSA booths more than six months ago – an eternity in the rapidly moving AI market. Back then, they realized they needed an “AI story” and dropped a few AI references in their signage, but that’s not the same as real innovation. Just saying you’re “AI-powered” is vague, unspecific, and unverifiable by customers. It is also no longer enough to provide real differentiation.

Types of AI security use cases

There are three main areas where AI is relevant in security:

1. Using AI/ML for detection

This is not new and covers most of the “AI-powered” claims. There are certainly valid use cases for using machine learning to find attack patterns in huge amounts of data. In fact, it’s arguably impossible to have a competitive detection capability without automation.

The problem is that almost all of these AI capabilities are black boxes that are difficult or impossible for customers to objectively evaluate. At the same time, most of these vendors are secretive about exactly how they are using AI. Customers should expect AI in these solutions but keep a healthy skepticism about vague claims with little detail behind them. This is where most of the AI-washing is happening, and unfortunately, it dilutes the impact of real innovation.
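To make the evaluation point concrete, here is a deliberately simple statistical detector of my own (a toy sketch, not any vendor’s actual method) – the kind of baseline pattern-finding idea that “AI-powered” detection claims often build on, and that customers can at least reason about:

```python
from statistics import mean, stdev

def flag_anomalies(counts, threshold=2.0):
    """Flag event counts whose z-score exceeds the threshold.

    A deliberately simple statistical baseline -- real detection
    products layer far more sophisticated (and opaque) models on
    top of ideas like this.
    """
    mu = mean(counts)
    sigma = stdev(counts)
    if sigma == 0:
        return []  # perfectly flat traffic: nothing stands out
    return [i for i, c in enumerate(counts) if abs(c - mu) / sigma > threshold]

# Flat hourly login counts with one obvious spike at index 5
counts = [100, 102, 98, 101, 99, 500, 100, 97]
print(flag_anomalies(counts))  # -> [5]
```

The virtue of a transparent baseline like this is that a customer can verify exactly what it flags and why – precisely what black-box “AI-powered” claims make impossible.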

2. Using Generative AI to create security workflows

This is a great use case that clearly adds value and is demonstrable. Creating security policies, workflows, complex remediation playbooks, or even compliance documentation has always been notoriously difficult, and it involves a huge amount of knowledge and repetitive work. Assuming these AI models are trained with good datasets, GenAI can save customers a lot of time, and give valuable guidance to non-security experts.
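As an illustration of why this use case is demonstrable, here is a minimal sketch (the function name, fields, and wording are my own hypothetical choices) of how a structured prompt for a remediation playbook might be assembled before being sent to any chat-completion API:

```python
def build_playbook_prompt(finding, environment, frameworks):
    """Assemble a structured prompt asking an LLM to draft a remediation
    playbook. The actual model call is omitted; any chat-completion API
    could consume the resulting prompt string."""
    lines = [
        "You are a senior security engineer.",
        f"Draft a step-by-step remediation playbook for: {finding}.",
        f"Target environment: {environment}.",
        "Map each step to these compliance frameworks: " + ", ".join(frameworks) + ".",
        "Flag any step that requires human approval before execution.",
    ]
    return "\n".join(lines)

prompt = build_playbook_prompt(
    finding="publicly exposed S3 bucket containing PII",
    environment="AWS, Terraform-managed",
    frameworks=["SOC 2", "GDPR"],
)
print(prompt)
```

Unlike black-box detection, the output here – a playbook a human can read, check, and edit – is directly inspectable, which is why this category of AI claim is so much easier for customers to validate.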

3. Protecting AI applications and LLMs

This is a hot new area that is still very nascent. Businesses across sectors are seeing the potential of GenAI, experimenting with use cases, and in many instances already rolling out initial public-facing applications. But in the rush to deployment, it’s easy to forget that GenAI and LLM applications introduce significant new risks, along with new players such as data scientists who are not well versed in security. At the same time, many security professionals are only vaguely aware of fast-moving AI projects.

AI applications introduce many new and amplified security issues that will need to be carefully managed, including:

  • Lack of security visibility and oversight over AI projects
  • New players like data scientists who are not well versed in security
  • Misconfigurations and vulnerabilities in AI applications
  • Software and data supply chain integrity – especially with huge numbers of LLMs of questionable lineage and security
  • Governance and compliance for AI applications
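The supply-chain item above lends itself to a simple concrete control. A minimal sketch (the allowlist and approval workflow are hypothetical – real model-registry tooling would be far richer): verify a model artifact’s SHA-256 digest against a team-vetted allowlist before loading it.

```python
import hashlib

# Hypothetical allowlist: SHA-256 digests of model artifacts that the
# security team has vetted. The entry below is sha256(b"test"), used
# purely as an illustrative placeholder.
APPROVED_MODEL_DIGESTS = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def model_is_approved(model_bytes: bytes) -> bool:
    """Return True only if the artifact's digest is on the allowlist."""
    return hashlib.sha256(model_bytes).hexdigest() in APPROVED_MODEL_DIGESTS

print(model_is_approved(b"test"))      # vetted artifact -> True
print(model_is_approved(b"tampered"))  # unknown artifact -> False
```

Hash pinning alone doesn’t establish lineage, of course, but it is the kind of basic integrity gate that many AI skunk-works projects skip entirely.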

There were a few vendors at RSA talking about these issues, and the market segment is expected to grow very rapidly. But if you want to find the latest solutions in a fast-moving market, you probably can’t afford to wait for the next big security conference, where the AI hype will probably, once again, drown out the signals of real innovation.