The Continued Overreaction to AI and Biotech
This community loves to listen to its own doomsaying voice
Here we go again. There seems to be no end of articles warning of the impending use of artificial intelligence by disgruntled loners or violent groups to create the next pandemic, or a novel bioengineered weapon built in a garage with equipment bought on Amazon. I remain a skeptic at this point, having reviewed many accounts of the inadequacy of AI-generated products. Sure, these AI tools can produce some pretty wild deepfakes and videos of things that never happened. But will they be the means by which someone unleashes a new biological weapon? And if so, what do we do about it?
Two interesting articles caught my eye lately, both of which tackle this issue a little differently than the usual mantra of “oh, we’re all going to die from a lethal, highly contagious, home-grown biological virus that came about because AI is ungoverned.” No, these two articles suggest instead that we fixate too much on AI-induced pandemics and not enough on the more obvious threats. The first article is from former DTRA director Rebecca Hersman and Cassidy Nelson, who works at a British think tank on biotech and AI.
While it is good that [AI] companies are focusing on pandemics, the ecosystem is overly focused on a single “lone wolf virus terrorist” model as the most serious threat. Significantly less attention is being paid to all other risk scenarios. The focus on individual actors leaves state-based and terrorist group threats dangerously under-examined.
Furthermore, pandemic virus risks are not an adequate proxy for other threats. The technical steps for developing improvised explosives, non-transmissible biological agents, and chemical weapons are distinct from transmissible biological ones. Building and deploying a chemical weapon like Sarin, for example, as opposed to a virus, involves an entirely different set of technical steps. For this reason, we need distinct AI safety tests.
Yet, on their own, these “lesser” threats could still kill many, devastate communities, and sow chaos across countries. We are in danger of creating a safety system that fixates on guarding against the next pandemic while leaving the door wide open to other forms of terror.
They argue for the creation of public-private partnerships, so that AI companies can get a better appreciation of state and non-state threats while government agencies gain a better understanding of the state of AI-generated data and outputs. I don’t think this will ever happen, but okay. Before I comment on this perspective, let me jump to this War on the Rocks article by Dr. Junaid Nabi, a scientist at RAND’s School of Public Policy. He likewise would like to see more attention focused on biological weapons than on pandemic threats.
American biodefense policy systematically conflates two distinct challenges: preparing for naturally occurring pandemics and defending against deliberately engineered biological agents. This confusion appears in budget allocations, organizational mandates, and strategic doctrine.
The conflation starts at the conceptual level. For instance, the 2022 National Biodefense Strategy treats “biological threats — whether naturally occurring, accidental, or deliberate in origin” as a unified category requiring a coordinated response. The framework sets audacious targets, including developing vaccines within 100 days, deploying pathogen-specific tests within 30 days, and repurposing therapeutics within 90 days. These capabilities assume natural pathogen evolution, not adversaries using AI tools to design agents that specifically evade detection systems or defeat therapeutic interventions.
Also, integration of these concepts leads to misallocation. A biodefense model that combines these hazards, while logical for dual-use capabilities, fails against the AI-synthetic biology convergence at specific, measurable points. Take surveillance: the Centers for Disease Control and Prevention’s $500M+ pathogen genomic surveillance is optimized for detecting natural variants through clinical sampling. However, it cannot detect AI-designed sequences with no homology to known organisms. Similarly, pandemic-focused investments, such as hospital-based surveillance programs or an almost $10B expansion of the Strategic National Stockpile (stocking predictable broad-spectrum antivirals), crowd out funding for critical biodefense capabilities such as unusual equipment purchase tracking, synthesis-order monitoring, or forensic genomics for attribution. This integrated approach to biodefense prioritizes epidemiological investigation (contact tracing) but ignores supply-chain monitoring needed to catch deliberate attacks. This is not an overlap — it’s a systemic gap where pandemic priorities actively undermine necessary biodefense investment.
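To make Nabi’s surveillance point concrete, here is a minimal sketch of how a homology-based screen works. This is my own toy illustration, not any real CDC or synthesis-screening system, and the reference “database” below is a made-up placeholder. The point it demonstrates: a screen that matches against known sequences can, by construction, only flag what it already knows.

```python
# Toy illustration of homology-based screening (NOT a real system).
# The reference set below is an invented placeholder, not real data.

KNOWN_PATHOGEN_KMERS = {
    "ATGGCGT", "GCTTACG", "TTGACCA",  # placeholder 7-mers standing in for known agents
}

def flags_as_threat(sequence: str, k: int = 7) -> bool:
    """Return True only if some k-mer in the sequence matches the reference set."""
    kmers = {sequence[i:i + k] for i in range(len(sequence) - k + 1)}
    return bool(kmers & KNOWN_PATHOGEN_KMERS)

# A sequence overlapping the database gets flagged; a sequence engineered to
# share no k-mers with it sails through -- no homology, no alarm.
print(flags_as_threat("ATGGCGTAAACCC"))  # True  (contains a known k-mer)
print(flags_as_threat("CCCCCCCCCCCCC"))  # False (novel, hence invisible)
```

Real genomic surveillance is vastly more sophisticated than this sketch, but the structural limitation Nabi describes is the same: detection presupposes resemblance to something already catalogued.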
Interestingly, he sees American biodefense strategy as flawed in part because of its propensity to treat all forms of biological hazards as “related challenges, served by common capabilities.” That strategy fails to address the specific ways that people might generate pathogens using AI and synthetic biology to bypass existing U.S. health policy programs. I disagree with his recommendations, however, starting with the first: to strengthen authorities for NBIC, DHS’s (now defunct) CWMD Office, and DARPA, and to consolidate biodefense functions across executive agencies. Consolidation is exactly the wrong approach; specialization and a focus on discrete policy objectives are the only workable solution to this issue. As I mentioned last week, using the term “biodefense” to cover all policy programs for biological threats is confusing enough. And Congress won’t change its regulatory approach of working biological issues through different committees.
These articles are of interest and good reading, if perhaps impractical in terms of any near-term implementation by this administration or the AI industry in general (at least at this point in time). Here’s my biggest challenge with both articles, however. Let’s start with a basic hypothesis: that there could be an artificial general intelligence model lacking safety protocols against providing technical data, and sources for equipment, to develop chemical or biological weapons. Let’s also agree that, at least today, this AI engine is going to scrape research journals and laboratory-related businesses for existing data on how to produce the more dangerous chemical and biological agents that exist. What then? Is it a foregone conclusion that a novel bioengineered bug will be released upon society?
Let’s also assume that this loner isn’t Bruce Ivins doing the research on an anthrax spore that can be easily dispersed through the mail. Can a disturbed, angry loner with a high school degree take this data and create a weapon that can be effectively disseminated to cause mass casualties? What are the odds that this particular person will safely handle super-toxic materials, package them in a secure device that can be transported to a public event, and release them without attribution? How many people will actually be affected by this agent, assuming he correctly followed scientific procedures and didn’t end up with a pile of goo that isn’t at all virulent? And more importantly, what are the odds that this individual will work alone and tell no one about his plans, thus limiting his exposure to law enforcement?
Why wouldn’t this disturbed person just go to a gun show, buy a modified semi-automatic rifle, and shoot up a mall? Nation-states with BW programs, for their part, could do just fine with anthrax, tularemia, pneumonic plague, and SEB toxin. In fact, an agent that isn’t engineered may be better at avoiding attribution. So I ask you, what are the risk probabilities? How much funding should the U.S. government spend on this remote chance of technical extremism, as opposed to the much more probable event of a future pandemic that could affect millions, if not tens of millions, of people across the globe?
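For the sake of argument, here is the back-of-the-envelope arithmetic behind that question. Every number below is an invented placeholder chosen only to show how expected-loss reasoning works, not an estimate drawn from any study:

```python
# Illustrative expected-loss comparison. All probabilities and casualty
# figures are invented placeholders to demonstrate the arithmetic.
scenarios = {
    # name: (assumed annual probability, assumed casualties if it occurs)
    "AI-assisted bioterror attack": (0.001, 10_000),
    "global natural pandemic":      (0.03, 10_000_000),
}

for name, (p, casualties) in scenarios.items():
    print(f"{name}: ~{p * casualties:,.0f} expected casualties per year")
```

Quarrel with any individual number and the gap barely narrows: a low-probability, bounded-consequence event is hard to justify funding ahead of a higher-probability, mass-consequence one. That is the point of the question.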
Milton Leitenberg wrote what I consider one of the most practical treatises on assessing the threat of bioterrorism back in 2005. It’s still pertinent today. He found that studies on bioterrorism generally relied on spurious statistics, unknowable predictions, greatly exaggerated consequences, and gross overstatement of the feasibility of successfully producing biological agents. He noted that, to be successful in creating a biological weapon, one had to:
1. Obtain the appropriate strain of the disease pathogen;
2. Know how to handle the organism correctly;
3. Know how to grow it in a way that will produce the appropriate characteristics;
4. Know how to store the culture, and to scale up production properly; and
5. Know how to disperse the product properly.
I will grant you that an ungoverned AI model could help with steps 2-4, but the initial step of obtaining the right strain of the pathogen and the final step of developing a device to successfully disperse the product are big obstacles.1 The articles warning about the imminent threat of AI-assisted bioterrorism always seem to leave out these two steps. It’s a long way from an idea to successfully producing a virulent biological agent that could cause mass casualties. And beyond the technical feasibility of such a project, there’s still the question of exactly who is trying to make these agents.
Why would Russia, China, North Korea, or Iran seek out an AI model to develop a novel biological weapon? They might avoid attribution for a short while, but the use of a mass-casualty weapon would leave other points of evidence. And exactly which existing terrorist groups have these lofty goals? Name one. Are we still fantasizing about apocalyptic death cults? No one wants to talk about specific nation-states or specific terrorist groups that have both the capability and the intent to develop chemical or biological weapons with the assistance of AI models. It is a much easier task to work the intelligence-collection angle and act against the sources of these threats than it will be to define and regulate guidelines for the many AI data centers being built in the near future.
Besides, we all understand that the real threat isn’t an AI-generated biological organism. U.S. Strategic Command is going to create SkyNet, which will become sentient and take over the U.S. nuclear stockpile for the purposes of eliminating humankind. So why worry about a future bioterrorist threat?
We could expand on this point and emphasize that it would be much smarter to invest in broad and standardized laboratory biosecurity measures instead of fretting over AI models. But that would be too practical and impinge on Big Pharma’s research and development.