AI Agent Monitoring, LLM Experiments and AI Agents Console help organisations measure and justify agentic AI investments
SYDNEY – JUNE 11, 2025 – Datadog, Inc. (NASDAQ: DDOG), the monitoring and security platform for cloud applications, today announced new agentic AI monitoring and experimentation capabilities to give organisations end-to-end visibility, rigorous testing and centralised governance of both in-house and third-party AI agents. Presented at DASH, Datadog’s annual observability conference, the new capabilities include AI Agent Monitoring, LLM Experiments and AI Agents Console.
The rise of generative AI and autonomous agents is transforming how companies build and deliver software. But with this innovation comes complexity. As companies race to integrate AI into their products and workflows, they face a critical gap. Most organisations lack visibility into how their AI systems behave, what agents are doing and whether they are delivering real business value.
Datadog is addressing this gap by bringing observability best practices to the AI stack. Part of Datadog’s LLM Observability product, these new capabilities allow companies to monitor agentic systems, run structured LLM experiments, and evaluate usage patterns and the impact of both custom and third-party agents. This enables teams to deploy quickly and safely, accelerate iteration and improvements to their LLM applications, and prove impact.
“A recent study found only 25 percent of AI initiatives are currently delivering on their promised ROI—a troubling stat given the sheer volume of AI projects companies are pursuing globally,” said Yrieix Garnier, VP of Product at Datadog. “Today’s launches aim to help improve that number by providing accountability for companies pushing huge budgets toward AI projects. The addition of AI Agent Monitoring, LLM Experiments and AI Agents Console to our LLM Observability suite gives our customers the tools to understand, optimise and scale their AI investments.”
Now generally available, Datadog’s AI Agent Monitoring instantly maps each agent’s decision path (inputs, tool invocations, calls to other agents and outputs) in an interactive graph. Engineers can drill down into latency spikes, incorrect tool calls or unexpected behaviours such as infinite agent loops, and correlate them with quality, security and cost metrics. This simplifies the debugging of complex, distributed and non-deterministic agent systems, helping teams optimise performance.
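For context, here is a minimal sketch of how an in-house agent might be instrumented so its steps appear in that decision-path graph, using the LLM Observability SDK in Datadog’s ddtrace Python library. The lookup_order tool and handle_request agent are hypothetical, and enable() parameters may vary by SDK version; this is an illustrative sketch, not the product’s canonical setup.

```python
# Minimal sketch: instrumenting a custom agent with Datadog's LLM
# Observability SDK (ddtrace) so each step is traced in the agent graph.
# The agent/tool logic is hypothetical; enable() parameters may differ
# by SDK version.
from ddtrace.llmobs import LLMObs
from ddtrace.llmobs.decorators import agent, tool

# Reads DD_API_KEY from the environment; ml_app names the application
# whose traces appear in LLM Observability.
LLMObs.enable(ml_app="support-agent", agentless_enabled=True)

@tool
def lookup_order(order_id: str) -> dict:
    # Hypothetical tool; traced as a "tool" span in the decision path.
    return {"order_id": order_id, "status": "shipped"}

@agent
def handle_request(question: str) -> str:
    # Traced as an "agent" span; the tool call below becomes a child
    # span, so latency spikes or bad tool calls are visible per step.
    order = lookup_order("A-1042")
    answer = f"Your order {order['order_id']} is {order['status']}."
    LLMObs.annotate(input_data=question, output_data=answer)
    return answer

if __name__ == "__main__":
    print(handle_request("Where is my order?"))
```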
“Agents represent the evolution beyond chat assistants, unlocking the potential of generative AI. As we equip these agents with more tools, comprehensive observability is essential to confidently transition use cases into production. Our partnership with Datadog ensures teams have the visibility and insights needed to deploy agentic solutions at scale,” said Timothée Lacroix, Co-founder & CTO at Mistral AI.
In preview, Datadog launched LLM Experiments to test and validate the impact of prompt changes, model swaps or other application updates on the performance of LLM applications. The tool works by running and comparing experiments against datasets created from real production traces (input/output pairs) or uploaded by customers. This allows users to quantify improvements in response accuracy, throughput and cost, and to guard against regressions.
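To make the workflow concrete, the sketch below shows the experiment pattern in plain Python rather than Datadog’s Experiments API: two prompt variants are replayed over a small dataset of input/output pairs and scored on exact-match accuracy. call_llm is a hypothetical stand-in for a real model call, stubbed so the example runs end to end.

```python
# Illustrative sketch of the experiment pattern LLM Experiments supports:
# replay a dataset of input/output pairs against two prompt variants and
# compare accuracy. Generic Python, not Datadog's Experiments API.
from typing import Callable

dataset = [  # input/output pairs, e.g. exported from production traces
    {"input": "2 + 2", "expected": "4"},
    {"input": "capital of France", "expected": "Paris"},
]

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a model call; replace with your
    # provider's client. Stubbed here so the sketch runs end to end.
    return "4" if "2 + 2" in prompt else "Paris"

def run_experiment(make_prompt: Callable[[str], str]) -> float:
    """Return exact-match accuracy of one prompt variant over the dataset."""
    hits = 0
    for row in dataset:
        output = call_llm(make_prompt(row["input"]))
        hits += output.strip() == row["expected"]
    return hits / len(dataset)

baseline = run_experiment(lambda q: f"Answer concisely: {q}")
candidate = run_experiment(lambda q: f"Answer with only the final value: {q}")
# Comparing the two scores is how a regression in a prompt change is caught.
print(f"baseline={baseline:.2%} candidate={candidate:.2%}")
```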
“AI agents are quickly graduating from concept to production. Applications powered by Claude 4 are already helping teams handle real-world tasks in many domains, from customer support to software development and R&D,” said Michael Gerstenhaber, VP of Product at Anthropic. “As these agents take on more responsibility, observability becomes key to ensuring they behave safely, deliver value, and stay aligned with user and business goals. We’re very excited about Datadog’s new LLM Observability capabilities that provide the visibility needed to scale these systems with confidence.”
Moreover, as organisations embed external AI agents (such as OpenAI’s Operator, Salesforce’s Agentforce, Anthropic’s Claude-powered assistants or IDE copilots) into critical workflows, they need to understand how those agents behave, how they are being used and what permissions they hold across multiple systems so they can optimise their agent deployments. To address this need, Datadog unveiled AI Agents Console in preview, which allows organisations to establish and maintain visibility into in-house and third-party agent behaviour, measure agent usage, impact and ROI, and proactively check for security and compliance risks.
To learn more about Datadog’s latest AI Observability capabilities, please visit: https://www.datadoghq.com/product/llm-observability/.
AI Agent Monitoring, LLM Experiments and AI Agents Console were announced during the keynote at DASH, Datadog’s annual conference. The replay of the keynote is available here. During DASH, Datadog also announced launches in Applied AI, AI Security, Log Management and released its Internal Developer Portal.
About Datadog
Datadog is the observability and security platform for cloud applications. Our SaaS platform integrates and automates infrastructure monitoring, application performance monitoring, log management, user experience monitoring, cloud security and many other capabilities to provide unified, real-time observability and security for our customers’ entire technology stack. Datadog is used by organizations of all sizes and across a wide range of industries to enable digital transformation and cloud migration, drive collaboration among development, operations, security and business teams, accelerate time to market for applications, reduce time to problem resolution, secure applications and infrastructure, understand user behavior and track key business metrics.
Forward-Looking Statements
This press release may include certain “forward-looking statements” within the meaning of Section 27A of the Securities Act of 1933, as amended, or the Securities Act, and Section 21E of the Securities Exchange Act of 1934, as amended, including statements on the benefits of new products and features. These forward-looking statements reflect our current views about our plans, intentions, expectations, strategies and prospects, which are based on the information currently available to us and on assumptions we have made. Actual results may differ materially from those described in the forward-looking statements and are subject to a variety of assumptions, uncertainties, risks and factors that are beyond our control, including those risks detailed under the caption “Risk Factors” and elsewhere in our Securities and Exchange Commission filings and reports, including the Annual Report on Form 10-K filed with the Securities and Exchange Commission on May 6, 2025, as well as future filings and reports by us. Except as required by law, we undertake no duty or obligation to update any forward-looking statements contained in this release as a result of new information, future events, changes in expectations or otherwise.