The EmilyAI v3.2 release shipped to the customer fleet today. The release incorporates the language-model-based capabilities that we started planning in December — natural-language interaction with the SOC data, accessible-language explainability of model classifications, and analyst-assistant functions integrated with the existing alert-classification surface. The deployment posture is conservative — the language-model capabilities are advisory-and-assistive layers on top of the existing structured-classification model, the existing model's confidence-and-explainability properties remain the principal decision-support surface, and the analyst workflows continue to reflect the human-in-the-loop posture that has been the product's design principle since the 2017 commercial launch.
The team's pace through Q1 and into Q2 has been, on every measure I have, above what the December planning envisaged. Six months from the strategic conversation to production deployment is fast by the team's historical engineering cadence, and the customer-side validation that has run continuously through April and May has produced sufficient operational confidence to ship. The lead engineer's contribution to the architectural-integration decisions has been particularly substantive. The choices about which language-model-derived outputs to surface to analysts, how to communicate the confidence-and-uncertainty properties of language-model output (which differ from the structured-classification model's confidence properties in important ways), and how to handle the various failure modes of language-model-driven assistance have all been thoughtful, and the production result reflects the careful design work.
The customer-side reception of the v3.2 capabilities through the early-access programme of the past six weeks has been positive. The analyst-team feedback has been particularly engaged on the explainability layer — the ability to query "why did the model classify alert X as Y" in natural language and receive an accessible explanation that draws on both the structured-classification model's feature attribution and the broader contextual data has, per the customer-side qualitative feedback, produced substantive analyst-workflow improvements. The natural-language SOC-data interaction (querying the data in conversational form rather than constructing structured queries) has been less universally adopted but is producing engaged usage at the customer organisations that have invested in it.
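For the engineering-minded reader, the shape of the explainability layer is worth sketching. The key design point is that the language model is asked to explain material assembled from the structured classifier's own feature attribution plus retrieved context, rather than generating an explanation from nothing. The sketch below is illustrative only — every name (`Classification`, `build_explanation_prompt`, the field names) is hypothetical, not the product's actual API:

```python
from dataclasses import dataclass


@dataclass
class Classification:
    """Hypothetical stand-in for the structured classifier's output."""
    alert_id: str
    label: str
    confidence: float
    feature_attribution: dict[str, float]  # feature name -> contribution weight


def build_explanation_prompt(c: Classification, context: dict[str, str]) -> str:
    """Assemble the grounding material a language model is asked to explain.

    Only the structured model's attribution and retrieved context go into
    the prompt, so the resulting explanation stays anchored to the
    authoritative classification rather than free generation.
    """
    # Surface the three features with the largest absolute contribution.
    top = sorted(c.feature_attribution.items(), key=lambda kv: -abs(kv[1]))[:3]
    lines = [f"Alert {c.alert_id} was classified as '{c.label}' "
             f"(confidence {c.confidence:.2f})."]
    lines.append("Top contributing features:")
    lines += [f"  - {name}: {weight:+.2f}" for name, weight in top]
    lines.append("Relevant context:")
    lines += [f"  - {k}: {v}" for k, v in context.items()]
    lines.append("Explain this classification for a SOC analyst in plain language.")
    return "\n".join(lines)
```

The prompt-assembly step, not the model, decides what evidence the explanation may draw on; that is what makes "why did the model classify alert X as Y" answerable in a grounded way.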
The structural concerns about language-model integration have been continuous through the past six months. First, hallucination and confidence-calibration. Language-model output can be confidently incorrect in ways that are operationally hazardous in security-decision contexts. The product design addresses this by clearly separating the structured-classification model's authoritative output from the language-model-derived assistive output, by communicating the uncertainty properties of language-model output explicitly, and by training analysts on the appropriate use-pattern. The customer-side documentation and the analyst-team training material both emphasise the appropriate use-pattern explicitly.
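The authoritative/advisory separation is simple to state but easy to lose in a UI, so it helps to see it as a type-level distinction. A minimal sketch, under the assumption (mine, not the product's documented design) that provenance is carried on every item the analyst surface renders; all names here are hypothetical:

```python
from dataclasses import dataclass
from enum import Enum


class Provenance(Enum):
    """Where a surfaced item came from (hypothetical labels)."""
    AUTHORITATIVE = "structured-classifier"  # calibrated, decision-support grade
    ADVISORY = "language-model"              # assistive, must be verified


@dataclass(frozen=True)
class AnalystSurfaceItem:
    text: str
    provenance: Provenance


def render(item: AnalystSurfaceItem) -> str:
    """Render an item for the analyst surface.

    Advisory (language-model-derived) items are always visibly marked,
    so a confidently-worded hallucination cannot masquerade as the
    structured model's authoritative output.
    """
    if item.provenance is Provenance.ADVISORY:
        return f"[ADVISORY - verify before acting] {item.text}"
    return item.text
```

Making provenance a required field rather than a UI convention means the separation cannot be silently dropped by a new rendering path.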
Second, data-leakage and privacy. The customer-side data that the language-model component processes is sensitive — alert content, network-flow data, customer-organisation-internal context. The architectural decisions on this — using customer-side-deployed models where the customer organisation requires data-residency, using vendor-managed models with carefully-restricted data flows where customer-side deployment is not operationally tractable, and never aggregating customer-side data across tenants for model-training purposes — have been the subject of substantial design attention and customer-side conversation. The compliance posture is documented and the customer-organisation legal-and-compliance teams have validated it.
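The deployment-routing decision described above reduces to a small policy function, and writing it down makes the failure mode explicit: a tenant with a residency requirement but no customer-side deployment must hard-fail rather than silently fall back to a vendor-managed model. This is a sketch of that policy under my own naming assumptions, not the shipped code:

```python
def select_deployment(requires_data_residency: bool,
                      has_customer_side_deployment: bool) -> str:
    """Choose where language-model inference runs for a tenant.

    Hypothetical policy names:
      - "customer-side-deployed": model runs inside the customer estate
      - "vendor-managed-restricted": vendor-hosted model behind
        carefully-restricted data flows
    Cross-tenant aggregation for training is never an option, so it
    does not appear as a return value at all.
    """
    if requires_data_residency:
        if not has_customer_side_deployment:
            # Residency must never be silently downgraded; fail loudly.
            raise RuntimeError(
                "tenant requires data-residency but has no customer-side deployment")
        return "customer-side-deployed"
    return "vendor-managed-restricted"
```

Encoding "never aggregate across tenants" as the absence of a code path, rather than a flag that could be toggled, is the conservative reading of the posture described above.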
Third, the regulatory environment. The regulatory landscape taking shape as the EU AI Act progresses will apply to the language-model-integrated capabilities. The product team has been working through the regulatory implications throughout the engineering cycle and the v3.2 release is, on our analysis, compliant with both current and reasonably-anticipated regulatory requirements. The customer-side documentation supports the customer-organisation compliance posture against the customer's own regulatory obligations.
The wider strategic point. The language-model integration is the most substantive single product capability addition since the original commercial launch. The strategic implication for the company is that the EmilyAI product trajectory continues to be positive and the broader platform-product positioning is increasingly differentiated. The institutional-capital conversation that has been periodic-but-not-active through the past several years may, post-v3.2, develop in new directions; my disposition continues to be against raising but the optionality is meaningful.
I will write more on the v3.2 reception through the rest of the year. The team's work has been excellent.