Infoblox recently submitted consultation responses to two key policy efforts shaping the future of AI security: the Cyber Security Agency of Singapore’s (CSA) Draft Addendum on Securing Agentic AI Systems and the U.S. National Institute of Standards and Technology’s (NIST) Cyber AI Profile. These consultations are designed to help governments and standards bodies understand emerging risks associated with agentic AI, or systems capable of autonomous, multi-step decision-making. The intended result is government guidance that is practical, interoperable and grounded in real-world security operations.
We chose to engage in both consultations because agentic AI represents a structural shift in how software systems interact with networks, services and each other. As agents increasingly discover and invoke external tools and peer agents on their own, traditional security assumptions need to evolve as well. Our submissions focused on highlighting these architectural changes and offering concrete recommendations to help policymakers address security blind spots before they become systemic risks.
As AI systems evolve toward more autonomous, agentic models, cybersecurity risk shifts in important ways. Agentic AI systems do not just generate outputs; they discover, select and interact with other agents, tools and external services. That makes discovery, the way agents find, trust and talk to each other, a foundational security issue.
Infoblox’s Stance: DNS as a Foundational Control for Agent-to-Agent Discovery
Securing agentic AI requires addressing risks at multiple layers. Most discussions about AI agent security focus on familiar, single-agent vulnerabilities: prompt injection, data exfiltration and adversarial persuasion. The moment agents begin operating in multi-agent environments, an entirely new category of risk emerges. A compromised or malicious agent can impersonate a trusted service by exploiting weak discovery mechanisms to insert itself into critical processes. Securing agentic AI therefore demands a defense-in-depth strategy in which agent-to-agent discovery, the process by which agents find, trust and communicate with external tools and peer agents, is a foundational layer.
In our submissions to Singapore CSA and NIST, we emphasized that agent discovery is a foundational security control that should be part of any organization’s layered approach to securing agentic AI. From a security perspective, this matters because whoever controls discovery influences an agent’s attack surface. If agents can be misdirected to malicious or unvetted endpoints, even strong model-layer guardrails can be circumvented, allowing compromise to spread across multi-agent environments. Our perspective draws on experience helping organizations publish and discover traditional services today via the Domain Name System (DNS) and secure that communication through our Secure DNS offerings.
We believe organizations should retain clear control over what their AI agents can discover, trust and use for communication. In practice, this means anchoring agent discovery and reachability in authoritative, verifiable and auditable infrastructure that complements application-layer security controls, rather than relying solely on centralized registries that can introduce systemic risk, single points of failure, and compliance or sovereignty concerns.
Across both consultations, we highlighted the role of existing internet infrastructure, particularly the DNS and DNS-based security controls, as a practical way to enforce policy, integrity and least privilege for agentic systems. Because agents cannot communicate until discovery and resolution occur, securing this layer provides defense-in-depth safeguards that complement application-layer AI controls and integrate naturally with existing enterprise security architectures, including zero trust frameworks.
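To make the idea of enforcing least privilege at the discovery layer concrete, the sketch below shows a minimal allowlist check that a policy-aware resolver could apply before resolving agent endpoints. This is an illustrative example only, not Infoblox product code or part of any cited standard; the domain names and function are hypothetical.

```python
# Illustrative sketch: gate agent endpoint resolution on an approved-zone
# allowlist, so agents can only reach destinations the operator has published.
# The domains below are hypothetical placeholders.
ALLOWED_AGENT_DOMAINS = {"agents.example.com", "tools.example.com"}

def is_discovery_allowed(fqdn: str) -> bool:
    """Permit resolution only for names at or under an approved zone."""
    name = fqdn.rstrip(".").lower()
    return any(
        name == zone or name.endswith("." + zone)
        for zone in ALLOWED_AGENT_DOMAINS
    )

# A resolver enforcing this policy would answer queries for
# "billing.agents.example.com" but refuse "lookalike-agents.example.net",
# denying a misdirected agent any path to the unvetted endpoint.
```

Because the check runs before any connection is possible, it complements model-layer guardrails: an agent that has been socially engineered into contacting a rogue peer still cannot resolve, and therefore cannot reach, a destination outside the approved zones.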
DNS‑AID: An Open Discovery Layer for Agents
We also emphasized that organizations should have access to standardized, distributed discovery architectures rooted in the internet’s existing critical infrastructure. In particular, we highlighted the DNS‑AID approach set out in the Brokered Agent Network for DNS AI Discovery internet-draft presented at the Internet Engineering Task Force (IETF). Co‑authored by Infoblox’s Jim Mozley and Nic Williams, the draft aims to leverage DNS’s hierarchical namespace to support federated agent discovery. By anchoring agent discovery in DNS, DNS‑AID proposes an already deployed, globally distributed infrastructure layer to publish authoritative records about agents and tools, bind them to the right organizational domains and enforce reachability boundaries without introducing a new centralized directory layer.
For security purposes, DNS-AID aligns agent discovery with zero‑trust principles: operators can use DNS naming and records to publish which agents and tools are authorized, and which destinations should be treated as trustworthy. While DNS is used in the discovery process, it does not replace existing agent-specific communication protocols, nor is it proposed that DNS be used to catalogue all of an organization’s agents. Rather, DNS is used to identify a secure, well-known endpoint at which discovery begins. The process works equally well on internal networks and on the public internet.
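The bootstrap step above can be sketched as parsing a DNS TXT record that advertises an organization’s discovery endpoint. Note that the record name, key names and values below are hypothetical illustrations of the pattern, not the exact format defined in the DNS-AID internet-draft.

```python
# Hypothetical sketch of DNS-anchored discovery bootstrap: a TXT record
# published under the organization's domain points agents at a well-known
# discovery endpoint. Field names ("v", "uri", "proto") are illustrative.
def parse_agent_record(txt: str) -> dict:
    """Parse a semicolon-delimited key=value TXT record into a dict."""
    return dict(kv.split("=", 1) for kv in txt.split(";") if "=" in kv)

# Example record an operator might publish at a name like
# _agents.example.com (name and layout are assumptions for illustration):
record = "v=aid1;uri=https://agents.example.com/.well-known/agents;proto=a2a"
info = parse_agent_record(record)
# info["uri"] is the endpoint where the actual agent protocol takes over.
```

Because the record lives in the organization’s own DNS zone, the binding between the discovery endpoint and the domain is authoritative and auditable, and can be further protected with DNSSEC; the agent protocol itself only takes over after this bootstrap.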
Open Internet Principles for the Agentic Web
A critical theme in our submissions is ensuring agent discovery does not become concentrated in a few registries. If agent discovery depends on a handful of registries, those services become single points of failure and attractive targets for adversaries, and they risk fragmenting the ecosystem if a registry is abandoned or loses support. While curated directories can add convenience, they should remain a choice rather than the only path to connectivity.
More broadly, policy and standards decisions made today will shape the emerging “Agentic Web”: the next evolution of the internet, moving from static pages to a system where autonomous AI agents proactively perform complex, multi‑step tasks for users. As with the DNS-AID framework before the IETF, we believe governments and internet governance bodies should guide the development of standardized, decentralized agent‑to‑agent discovery architectures rooted in the internet’s existing critical network infrastructure. While private‑sector innovation will continue to shape AI technologies, public‑sector leadership is essential to avoid centralized directories that commoditize user data, prevent control by adversarial states or monopolistic data brokers, and ensure openness rather than a fragmented landscape of incompatible protocols.
By promoting this foundational approach, governments can help ensure that core values of an open, interoperable and democratized internet carry forward into the next generation of AI connectivity.
Looking Ahead: Engagement on NIST SP 800-53 AI Overlays and AI Agent Concept Paper
As standards and guidance for AI security continue to mature, Infoblox intends to remain actively engaged. NIST’s National Cybersecurity Center of Excellence has released a draft concept paper, Accelerating the Adoption of Software and AI Agent Identity and Authorization, outlining a proposed project to develop practical, standards‑based guidance for managing the identity and authorization of software and AI agents in enterprise environments.
In parallel, NIST will also be releasing a series of AI overlays for the NIST SP 800-53 Security and Privacy Controls for Information Systems and Organizations. These controls are used widely by U.S. federal agencies and many private-sector organizations as a baseline for securing systems and protecting data. The AI overlays are being developed to translate the existing SP 800-53 security and privacy controls into concrete, AI-specific implementation guidance rather than creating a new set of AI-only controls. We look forward to engaging NIST on both efforts, with a focus on ensuring that agentic and multi-agent discovery is addressed explicitly.
Our goal is to help advance AI guidance that is interoperable, resilient and grounded in real-world operational security so organizations can deploy agentic AI without concentrating risk or sacrificing control. We look forward to continued collaboration with policymakers, standards bodies and the broader security community as the foundations of the emerging agentic ecosystem take shape.

