AI-driven tools are transforming the speed and scale of biological research, prompting growing concern over how these systems could be misused before regulation catches up.
AI’s growing biosecurity risks
Artificial intelligence is beginning to alter how biological expertise is accessed and applied, driving a shift that is visible in both speed and scale. A 2025 study in AI and Ethics found that AI-assisted workflows reduced the time needed to complete complex biological tasks by up to 50 percent in controlled settings, particularly in hypothesis development and research analysis. That acceleration has carried into 2026 as newer model generations expand what can be done outside traditional laboratory environments.
Advances in protein modeling have also scaled sharply, with leading systems now able to evaluate millions of possible protein structures in hours rather than weeks. The implications extend beyond efficiency: they point to a narrowing gap between specialist expertise and broader access.
Lower barriers, expanding capability
The ability to interpret and apply biological knowledge has historically depended on years of training and access to institutional infrastructure. AI is beginning to loosen that constraint by making technical information more navigable and actionable.
According to a 2025 assessment by the Center for Strategic and International Studies, advances in AI are reducing the informational barriers associated with biological design, particularly through systems that can synthesize large volumes of scientific literature and translate them into practical guidance.
This dynamic has intensified as models released in 2026 demonstrate stronger performance on scientific reasoning tasks. Evaluations by independent researchers, including analyses cited by the Center for Strategic and International Studies (CSIS), show that users can move from basic conceptual questions about how biological systems function to identifying relevant experimental techniques and approaches in fewer steps than before. In some tests, processes that would typically require hours of literature review were narrowed down in minutes with AI support, reducing the preparatory work needed to engage with complex problems.
Acceleration in biological design
AI is also changing the speed at which biological research progresses. The iterative process through which scientists design experiments, run tests, and refine their results is increasingly supported by computational tools that automate key steps.
The National Academies of Sciences, Engineering, and Medicine reported that AI-driven systems are already shortening timelines in computational biology, particularly in areas such as protein structure prediction and sequence optimization. By mapping how proteins fold and behave based on their underlying sequences, these systems provide insight into how biological processes function at a molecular level. By 2026, AI-assisted databases have surpassed 200 million predicted protein structures, reflecting a scale of analysis that would have been unattainable through traditional laboratory methods.
This acceleration has direct implications for risk. Tasks that once required extended periods of trial and error can now be iterated rapidly. In experimental settings, this increases efficiency and reduces cost. In a security context, it compresses the window in which potentially harmful applications can be identified and addressed. As one National Academies report notes, AI tools are “poised to transform the speed and scope of biological discovery,” a shift that applies across both beneficial and high-risk domains.
Evidence from system testing
Recent controlled evaluations of advanced AI systems have added empirical weight to these concerns. In tests conducted by academic and policy researchers in 2026, models were prompted with questions related to biological threats to assess how safeguards performed under pressure.
The results were mixed. While many systems blocked direct requests for harmful instructions, a subset of interactions still produced detailed responses that could assist with planning or refining dangerous scenarios. In several cases, models suggested approaches to modifying pathogens, identifying points of vulnerability, or outlining potential delivery methods. Researchers described some outputs as “plausible” and “operationally relevant,” even when explicit step-by-step instructions were restricted.
These findings align with broader assessments from biosecurity experts. Analysis from CSIS notes that safeguards remain uneven across platforms and are often sensitive to how prompts are framed. The report highlights that as models become more capable, “distinguishing between legitimate research and misuse becomes increasingly difficult,” particularly in edge cases that fall between clearly benign and clearly malicious intent.
Governance efforts and constraints
Policy responses to these developments remain uneven. Traditional biosecurity frameworks have focused on controlling access to materials, laboratory facilities, and physical infrastructure. AI introduces a different category of risk centered on information generation and dissemination, which is more difficult to regulate.
Recent reports from CSIS and the National Academies of Sciences, Engineering, and Medicine point to several emerging pressure points. These include the use of sensitive biological datasets in model training, the absence of standardized testing protocols for evaluating misuse risk, and limited coordination across national regulatory systems. Researchers have also emphasized that oversight mechanisms are largely reactive, often addressing risks after systems have already been deployed.
Proposed responses focus on a combination of technical and policy measures. These include restricting access to high-risk data, requiring developers to conduct risk assessments, and establishing clearer standards for model evaluation. At the same time, policymakers face trade-offs. AI-driven tools are contributing to measurable advances in drug discovery and disease modeling, and limiting access too aggressively could slow those gains.
Taken together, these developments point to a measurable shift in capability. AI is reducing the time required to perform complex biological tasks, expanding access to technical knowledge, and exposing limitations in existing safeguards. While access to biological materials and laboratory infrastructure still constrains what can be carried out in practice, expertise is becoming more widely accessible and faster to apply.