As I come up to speed in a new industry, on a new product, I love the process of trying to figure out how to explain what it is. There’s a tension: the people you speak to want to understand it in the context of what they know, but I want to understand it in the context of what it does differently from other things. In particular, I want to understand how it delivers a profound change in outcomes rather than an incremental improvement.

I’ve often talked about how this comes back to the product’s philosophy. What is the core belief the development team holds about their approach to the solution? If a customer’s core belief aligns with that of the development team, the solution is a better fit than a competitive one.

Two weeks into my new role here, I’ve just made a cognitive jump about the philosophy behind our approach to using AI to deliver pre-endpoint user-threat analysis.

We have a cool architecture. On one side we talk about signals, a signal being something we can learn about the user, their devices, their network, and so on. On the other, we have threat analytics and mitigation policies, along with an API into the data we’ve collected. In between, we have a secure, real-time data bus that takes all the inbound signals and processes them through an AI engine to understand user threat. We talk about real-time capabilities, the ability to scale to the size of the internet-consumer population (as opposed to the number of employees whose machines IT controls and can put an agent on), and the ability to derive meaning across all the signals (rather than getting stuck in a silo).
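To make the shape of that pipeline concrete, here’s a minimal sketch in Python. Everything in it — the `Signal` class, the weighting, the scoring — is invented for illustration; it’s a toy stand-in for the real engine, not how our AI actually works:

```python
from dataclasses import dataclass

@dataclass
class Signal:
    """One hypothetical observation about a user, their device, or network."""
    source: str   # e.g. "device", "network", "behavior"
    name: str
    value: float  # normalized 0..1, higher = more anomalous

def assess_threat(signals: list[Signal]) -> float:
    """Toy stand-in for the AI engine: combine all inbound signals into one
    user-level threat score, rather than evaluating each silo alone."""
    if not signals:
        return 0.0
    # Average anomaly across signals, then weight by cross-silo corroboration:
    # one anomalous signal matters less than anomalies that agree across sources.
    base = sum(s.value for s in signals) / len(signals)
    corroboration = min(len({s.source for s in signals}) / 3.0, 1.0)
    return round(base * corroboration, 3)

signals = [
    Signal("device", "new_hardware_fingerprint", 0.9),
    Signal("network", "unfamiliar_asn", 0.7),
    Signal("behavior", "typing_cadence_drift", 0.8),
]
print(assess_threat(signals))  # anomalies agreeing across all three silos score high
```

The point of the sketch is the shape, not the math: signals flow in from many sources, and the judgment is made across all of them at once.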

The jump I’ve just made, though, is that it’s easier to understand what we’re doing if you think about an AI trying to determine whether there’s a dog in a picture.

It’s not about processing “streams of data” from that photo. What an AI does is look at the picture and determine if it’s a dog.

Our AI looks at the user and determines whether it’s the right human. It then judges the threat that human presents based on what it sees.

Framing this as processing streams of signals isn’t as useful, even though it’s easy to fall into that trap. That framing makes it look like we have a real-time complex event processing engine that can act on a few simple rules.

In fact, we are building an AI that lets us recognize the user and the threat they present.
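The difference between those two framings can be sketched in a few lines. Both functions below are hypothetical — the thresholds, feature names, and weights are invented for illustration — but they show why a single-silo rule and a whole-picture judgment can disagree on the same user:

```python
def rule_based_alert(failed_logins: int) -> bool:
    # Simple-rule CEP style: one silo, one threshold.
    return failed_logins > 3

def whole_picture_alert(features: dict[str, int]) -> bool:
    # Toy "look at the whole picture" style: no single feature decides;
    # the combination does (weights invented for illustration).
    weights = {"failed_logins": 0.1, "new_device": 0.5, "odd_hours": 0.4}
    score = sum(weights[k] * features.get(k, 0) for k in weights)
    return score > 0.6

user = {"failed_logins": 1, "new_device": 1, "odd_hours": 1}
print(rule_based_alert(user["failed_logins"]))  # False: the rule sees nothing unusual
print(whole_picture_alert(user))                # True: the combination is telling
```

A single failed login trips no rule, but a failed login from a new device at an odd hour is exactly the kind of combined picture a recognition model can act on.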

We’re building a robust 3D picture of the user to determine their threat, whereas current technologies ask you to look at a hand-drawn stick figure and decide if that’s David.

Hi, I’m David… or am I?