
The U.S. Food and Drug Administration (FDA) has taken a substantial step in its digital modernization strategy with the deployment of agentic artificial intelligence capabilities across all agency employee groups. The move represents an expansion of the agency’s internal AI tools, intended to streamline complex, multi-step processes that support regulatory science, product review, and compliance activities. The deployment strengthens the FDA’s ongoing effort to embed structured, secure, and transparent AI systems into daily workflows, building on the rapid adoption of the LLM-based tool Elsa earlier this year.

According to the FDA announcement, agentic AI refers to a class of systems distinguished by their ability to plan, reason, and execute multi-step tasks, drawing on multiple AI models within governed workflows. These systems operate under built-in guidelines and require human oversight, traits positioned to support reliability and traceability in environments where decision quality is paramount. The agency emphasized that the tool is voluntary for staff, mirroring the deployment model used for Elsa.
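The announcement does not describe how these workflows are implemented internally, but the pattern it outlines (a planned sequence of model or tool calls, an audit trail, and a human gate before consequential steps) can be illustrated with a minimal sketch. The code below is hypothetical; every name in it is an assumption made for illustration and does not reflect the FDA's actual system.

```python
# Hypothetical sketch of an agentic workflow with a human-approval gate.
# Names and structure are illustrative only, not the FDA's implementation.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Step:
    name: str
    run: Callable[[str], str]      # the model or tool call for this step
    needs_approval: bool = False   # pause for a human before proceeding


@dataclass
class AuditEntry:
    step: str
    output: str
    approved: bool


def run_workflow(task: str, steps: list[Step],
                 approve: Callable[[str, str], bool]) -> list[AuditEntry]:
    """Execute each planned step in order, logging outputs for traceability."""
    log: list[AuditEntry] = []
    context = task
    for step in steps:
        output = step.run(context)
        ok = approve(step.name, output) if step.needs_approval else True
        log.append(AuditEntry(step.name, output, ok))
        if not ok:
            break                  # a human reviewer can halt the workflow
        context = output           # the next step builds on the prior result
    return log


if __name__ == "__main__":
    # Toy stand-ins for model or tool calls.
    steps = [
        Step("extract_sections", lambda ctx: f"sections found in: {ctx}"),
        Step("summarize", lambda ctx: f"summary of ({ctx})"),
        Step("draft_memo", lambda ctx: f"draft based on ({ctx})", needs_approval=True),
    ]
    audit = run_workflow("example submission text", steps,
                         approve=lambda name, out: True)
    for entry in audit:
        print(entry)
```

The key property the sketch tries to capture is that every step's output is logged and at least one step cannot complete without explicit human sign-off, which is the traceability and oversight behavior the agency describes.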

A Technical Expansion of AI Within Regulatory Workflows

While Elsa provided a single-query, conversational interface, the newly deployed agentic AI capabilities shift FDA personnel toward more sophisticated task automation. The press release highlights several categories of work expected to benefit from the upgrade:

  • Meeting management.

  • Premarket review support.

  • Review validation.

  • Postmarket surveillance.

  • Inspections and compliance.

  • Administrative processes.

Each of these categories requires structured document handling, data normalization, complex sequencing, and often cross-team coordination — areas where agentic AI can alleviate repetitive burdens and allow reviewers to prioritize high-judgment tasks.

For engineers and regulatory operations teams in industry, this signals a shift in how FDA staff may interact with technical submissions. Processes that once depended on manual navigation of lengthy design dossiers or postmarket data streams could increasingly leverage AI-assisted triage and cross-referencing. While the agency stressed that requirements are not changing, the speed and consistency with which information is analyzed may improve as structured AI tools become more tightly integrated in regulatory practice.

Human Oversight Designed into System Architecture

A critical detail in the announcement is the agency’s emphasis on data protection and model governance. The agentic AI operates within a high-security GovCloud environment, and the FDA specified that models do not train on input data, including proprietary or confidential material submitted by industry. This constraint is crucial for regulated sectors, as it aligns with confidentiality expectations and mitigates risks of inadvertent data leakage or cross-contamination.

From a systems engineering perspective, this architecture mirrors best practices used in other high-security industrial AI deployments, where inference engines operate on sensitive data without incorporating that data into training sets. For medical device manufacturers, this ensures that AI-enabled efficiency on the agency side does not introduce new uncertainties in data handling or intellectual property exposure.
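The press release does not describe the architecture beyond the GovCloud boundary and the commitment that models do not train on inputs, but the general pattern is familiar from other high-security deployments: an inference gateway processes a request, returns the result, and retains only non-sensitive metadata for audit purposes. The sketch below is a hypothetical illustration of that separation; the function names and logging fields are assumptions, not details of the FDA's deployment.

```python
# Hypothetical sketch of an inference gateway that keeps submitted data out of
# any training path. Illustrative only; not the FDA's published architecture.

import hashlib
import time


def call_model(prompt: str) -> str:
    """Stand-in for a call to a hosted inference endpoint."""
    return f"model output for {len(prompt)} characters of input"


def handle_request(payload: str, audit_log: list[dict]) -> str:
    result = call_model(payload)
    # Log only non-identifying metadata for traceability; the payload itself is
    # neither stored nor routed to any fine-tuning or training pipeline.
    audit_log.append({
        "timestamp": time.time(),
        "payload_sha256": hashlib.sha256(payload.encode()).hexdigest(),
        "payload_chars": len(payload),
    })
    return result


if __name__ == "__main__":
    log: list[dict] = []
    print(handle_request("proprietary device dossier text", log))
    print(log[0])
```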

Internal Innovation: The Agentic AI Challenge

To accelerate adoption and refine use cases, FDA is launching a two-month Agentic AI Challenge, culminating at Scientific Computing Day in January 2026. The competition encourages staff to develop custom agentic workflows aligned with their specialized review domains, from inspection planning to documentation auditing.

This initiative reflects the agency’s engineering-focused approach to modernization. Rather than imposing fully developed tools from the top down, the agency is encouraging bottom-up innovation where reviewers, investigators, and scientists identify friction points in their work and design solutions that map directly to operational needs. For medtech teams, this likely means future interactions with the FDA will be shaped not only by AI tools themselves but also by the staff who tailor them to specific regulatory challenges.

Signals for the Medtech Engineering Community

The rapid adoption of FDA’s earlier AI deployment is worth noting. More than 70 percent of staff voluntarily used Elsa after its introduction in May, according to internal agency data. That level of uptake suggests a workforce receptive to digital tools that reduce administrative load. With agentic AI, FDA is moving from point solutions to workflow-level augmentation, a change with several implications for engineering teams preparing submissions.

First, the agency’s increased use of structured automation may improve internal consistency in how similar documents are evaluated. For example, validation routines could help ensure that required elements of a submission are surfaced correctly every time, reducing the likelihood of avoidable delays.
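As a hypothetical illustration of such a routine, the sketch below checks a submission against a fixed list of required sections and reports anything missing. The section names and function are invented for this example and do not represent an actual FDA checklist or tool.

```python
# Hypothetical completeness check of the kind a validation routine might run.
# The required-section names are illustrative, not an actual FDA checklist.

REQUIRED_SECTIONS = [
    "device description",
    "indications for use",
    "performance testing",
    "labeling",
]


def check_submission(sections_present: set[str]) -> list[str]:
    """Return the required sections that are missing from a submission."""
    normalized = {s.strip().lower() for s in sections_present}
    return [s for s in REQUIRED_SECTIONS if s not in normalized]


if __name__ == "__main__":
    submission = {"Device Description", "Labeling", "Performance Testing"}
    missing = check_submission(submission)
    print("Missing sections:", missing or "none")
```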

Second, agentic tools could strengthen the FDA’s ability to manage growing data volumes — including postmarket cybersecurity reports, human factors evidence, and real-world performance data — without introducing additional manual burden on reviewers.

Third, the technical shift suggests that medtech organizations may benefit from adopting similar workflow-oriented AI systems to mirror the agency’s expectations for structured, traceable, and well-organized documentation.

A Step Toward Scalable Regulatory Science

Commissioner Marty Makary, MD, MPH, emphasized that AI capabilities will help the agency “accelerate more cures and meaningful treatments,” underscoring the connection between internal FDA efficiency and downstream patient impact. Meanwhile, Chief AI Officer Jeremy Walsh described the tools as a way to streamline work while ensuring the safety and efficacy of regulated products.

Neither statement suggests regulatory leniency. Instead, the message aligns with a familiar engineering principle: efficiency enables quality. When routine tasks become automated and standardized, human experts can direct more time toward high-complexity assessments that require domain judgment.

What Comes Next

The new deployment should be viewed as one component of a larger modernization arc. Over the past decade, FDA has increased its use of digital tools, expanded computational modeling, refined its digital health frameworks, and initiated efforts to harmonize AI-based product reviews. Agentic AI adds another layer to this progression — but one that functions internally rather than externally.

For industry, understanding how these systems influence reviewer workflows will become an important part of anticipating regulatory timelines, evidence expectations, and communication patterns. As the agency tests and iterates through its internal AI challenge, more refined use cases will likely emerge, potentially informing future guidance or best practices for interacting with AI-augmented regulatory processes.

For more information, visit here.




This article first appeared in the January 2026 issue of Medical Design Briefs Magazine (Vol. 16 No. 1).
