EmilyAI live in customer SOC

The first customer-pilot deployment of EmilyAI commercial v1 went live yesterday afternoon: one of Hedgehog's existing SOC customers from 2017, which operates its own SOC and has been on the v0 internal tooling since November. The deployment posture is multi-tenant on Hedgehog's infrastructure with a customer-dedicated data-isolation tenant. The integration runs through the customer's existing Splunk indexers, which feed alert-classification candidates and analyst-decision data into Emily's pipeline and receive the model's classifications and confidence scores back via a custom Splunk lookup. The customer's own SOC analysts continue to make the final triage decisions; Emily's role is advisory and routing.
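The return path can be pictured concretely. A Splunk CSV lookup is just a headered CSV file that searches can join against, so pushing classifications back to the customer's analysts can be as simple as periodically rewriting a lookup file. The following is a minimal sketch only; the column names, the lookup filename, and the tuple shape are illustrative assumptions, not EmilyAI's actual schema.

```python
import csv

def write_classification_lookup(classifications, path="emily_classifications.csv"):
    """Write model output as a Splunk CSV lookup.

    classifications: iterable of (alert_id, label, confidence) tuples.
    Field names here are hypothetical, chosen for the sketch.
    """
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        # Header row: Splunk treats the first row as the lookup's field names.
        writer.writerow(["alert_id", "classification", "confidence"])
        for alert_id, label, confidence in classifications:
            writer.writerow([alert_id, label, f"{confidence:.3f}"])

# Example batch of model output for two alerts.
write_classification_lookup([
    ("a-1001", "benign", 0.97),
    ("a-1002", "escalate", 0.62),
])
```

On the Splunk side, an analyst search would then enrich alerts with something like `| lookup emily_classifications.csv alert_id OUTPUT classification confidence` (again, illustrative field names).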

The team's mood yesterday afternoon was the appropriate combination of relief and watchful concern. The lead engineer (the postgraduate intern, now twenty-six months in and the engineering lead in everything but title) has been on call through the cutover and the first 24 hours. The customer's SOC manager has been engaged through the past several weeks of pre-deployment integration testing and was available yesterday for the cutover itself. The first day of operational use produced no incidents, no false positives in the model's output that the customer's analysts found problematic, and an early data point on customer-side workload reduction: around a 35% reduction in median triage time on the first day's high-confidence model classifications, consistent with what we saw in the internal deployment.

The product is real. The team has spent the past four months turning the internal tooling into a deployable product with the operational properties customers will require — multi-tenancy, customer-side authentication and authorisation, customer-data isolation guarantees, a documented integration path for Splunk and (in the next release) Elasticsearch and QRadar, an analyst-facing user interface that we have iterated on with the pilot customer through March, a feedback mechanism for the customer's analyst team to flag and correct model classifications that do not match their judgement, and a model-update cadence that respects the customer's own analyst-decision data without leaking that data across tenants. The engineering work to build this has been, by my measure, harder than the model-quality work was. Taking research-grade code and making it production-grade for customer deployment is a discipline the team has now worked through for the first time, and the experience has been instructive in ways I should have anticipated and did not fully.
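The cross-tenant isolation property mentioned above has a simple core: a tenant's model update may only ever see that tenant's own analyst decisions. A minimal sketch of that invariant, with entirely hypothetical field names (this is not the actual EmilyAI pipeline, just the shape of the guarantee):

```python
from collections import defaultdict

def partition_by_tenant(decision_records):
    """Group analyst-decision records by tenant before any model update.

    Each per-tenant batch is the only data that tenant's update job
    receives, so decisions can never leak across customers.
    Record fields ("tenant_id", etc.) are illustrative assumptions.
    """
    per_tenant = defaultdict(list)
    for record in decision_records:
        per_tenant[record["tenant_id"]].append(record)
    return dict(per_tenant)

# Example: two customers' decisions arriving interleaved.
records = [
    {"tenant_id": "cust-a", "alert_id": "a-1", "decision": "benign"},
    {"tenant_id": "cust-b", "alert_id": "b-1", "decision": "escalate"},
    {"tenant_id": "cust-a", "alert_id": "a-2", "decision": "escalate"},
]
batches = partition_by_tenant(records)
```

In a real deployment the partition boundary would sit at the storage layer rather than in application code, but the contract is the same: the update job for `cust-a` is handed only `batches["cust-a"]`.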

The pricing model we landed on is per-analyst-per-month rather than per-alert-volume. The decision was to align the pricing with the value the customer experiences (analyst productivity gains) rather than with the underlying compute cost (which is substantially smaller than per-alert pricing would imply at any reasonable margin). The pilot customer is at four analysts and is paying a contracted price that I will not write down here but which is in line with comparable SaaS benchmarks for SOC tooling. The pilot pricing is below the eventual list price; the agreement reflects the customer's role as a reference. The next two pilot customers are in onboarding for May and June.

The Emily team has grown around the work. The lead engineer is now formally Engineering Lead. The senior ML engineer added in October 2017 is the principal designer for the model-update pipeline. A second ML engineer started in February 2018. The product manager (added in February) has been running the pilot customer relationship and the documentation programme. The customer-success function will start hiring in May or June. The total Emily-side team is six full-time, plus the senior SOC analyst who works half-time on the integration and the playbook revision work. The team is the most focused and productive engineering function I have ever managed; the productivity is also, I am aware, fragile in ways I want to address rather than rely on.

The strategic conversation about how this changes Hedgehog's shape is ongoing. The services business — vCISO, pen testing, SOC-as-a-service — continues to grow and is, in headcount terms, still the larger part of the company. The product business is, in margin terms, going to scale differently from the services business once the deployment overhead per customer drops with the maturity of the platform. The investor conversations I have been having quietly through Q1 (without representation, exploring rather than committing) reflect the changed shape. There is an institutional-capital path open to the company that was not previously available; whether to take it is a decision for the second half of 2018 rather than for Q1.

For the wider security community, EmilyAI is the first commercial offering of what I think will become a substantial category — SOC-augmentation tooling that uses machine learning trained on customer-specific decision data to reduce analyst workload and improve consistency. The category will have other entrants over the next two years; some of the larger SIEM vendors (Splunk in particular) will build their own native versions; some of the threat-intelligence vendors will move adjacent. Hedgehog's first-mover advantage is a function of having spent two years in production-shadow mode learning what the actual product needs to be, and the question for the company is whether to defend that advantage by moving fast on the product roadmap or by partnering with the larger players. The conversations with potential partners have been, in early form, useful; conversations with potential acquirers I have not been pursuing actively, though I am aware they are available. The question is one to think through carefully rather than answer this quarter.

The personal note. This is the largest single product launch I have led in my career. The Evolution of DDoS book in 2007 was a substantial piece of work, but a book is a different thing from a multi-tenant production SaaS deployed in a customer's environment with operational SLAs. The team's work has been consistently of a quality that exceeds what I would have expected from a six-person engineering function; the lead engineer's growth from intern to engineering lead in two years is among the most rewarding things I have observed in management. The pilot customer's SOC manager messaged me last night to say "this is what we hoped it would be". I will keep that message.

I will write more as the pilot customer experience produces operational data. The next several months are going to teach us about the product in customer hands rather than in our own.
