Monday, May 16, 2022

When scientific information is dangerous


Shreya Christina

One big hope for AI as machine learning improves is that we can use it for drug discovery — leveraging the pattern-matching power of algorithms to identify promising drug candidates much faster and cheaper than human scientists could alone.

But we might want to be careful: Any system powerful and accurate enough to identify drugs that are safe for humans is inherently a system that will also be good at identifying drugs that are incredibly dangerous to humans.

That’s the takeaway from a new paper in Nature Machine Intelligence by Fabio Urbina, Filippa Lentzos, Cédric Invernizzi and Sean Ekins. They took a machine learning model they’d trained to find nontoxic drugs and inverted its objective so that it would seek out toxic compounds instead. In less than six hours, the system identified tens of thousands of dangerous compounds, including some that closely resemble the VX nerve agent.
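The inversion the authors describe amounts to flipping the sign of the toxicity term in a model’s scoring objective. Here is a minimal sketch of that idea; the function names, stand-in numbers and weighting are hypothetical illustrations, not the authors’ actual code or models:

```python
# Hypothetical sketch: the same scoring machinery that screens *out* toxic
# compounds can screen *for* them by flipping one sign in the objective.

def score_candidate(activity: float, toxicity: float, invert: bool = False) -> float:
    """Score a compound: reward predicted activity, penalize predicted toxicity.
    With invert=True, the toxicity penalty becomes a reward -- the dual-use problem."""
    toxicity_weight = 1.0 if invert else -1.0
    return activity + toxicity_weight * toxicity

def best_candidate(candidates, invert: bool = False):
    """Pick the highest-scoring (activity, toxicity) pair."""
    return max(candidates, key=lambda c: score_candidate(*c, invert=invert))

# Toy predictions: (predicted activity, predicted toxicity)
candidates = [
    (0.9, 0.1),  # active and fairly safe
    (0.8, 0.9),  # active but highly toxic
]

safe = best_candidate(candidates)                     # favors the low-toxicity compound
dangerous = best_candidate(candidates, invert=True)   # now favors the toxic compound
```

The unsettling point the paper makes is that nothing else in the pipeline has to change: the generative search, the predictors and the optimization loop are identical in both modes.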

“Dual use” is here, and it’s not going away

Their paper touches on three interests of mine, all of which are essential to keep in mind when reading alarming news like this.

The first is the growing priority of dual-use concerns in scientific research. Biology is where some of the most exciting innovations of the 21st century are happening. And continuous innovation, especially in broad-spectrum vaccines and treatments, is essential to save lives and prevent future disasters.

But the tools that make DNA sequences faster and easier to print, or make drug discovery cheaper, or help us easily identify chemical compounds that do exactly what we want, are also tools that make it much cheaper and easier to do terrible damage to a target. That’s the dual-use problem.

Here’s a biology example: Adenovirus vector vaccines, like the Johnson & Johnson Covid-19 vaccine, work by taking a common, mild virus (adenoviruses often cause infections like the common cold), editing it so the virus can’t make you sick, and changing a bit of the virus’s genetic code to encode the Covid-19 spike protein so your immune system learns to recognize it.

That’s incredibly valuable work, and vaccines developed using these techniques have saved lives. But work like this has also been flagged by experts as carrying particularly high dual-use risk: that is, this research is also useful for bioweapons programs. “The development of virally vectorized vaccines may provide insights that are particularly dual-use, such as techniques for evading pre-existing anti-vector immunity,” biosecurity researchers Jonas Sandbrink and Gregory Koblentz argued last year.

For most of the 20th century, chemical and biological weapons were difficult and expensive to manufacture. For most of the 21st, that won’t be the case. If we don’t invest in managing that transition and making sure that lethal weapons are not easy to obtain or produce, we run the risk that individuals, small terrorist groups or rogue states could do terrible damage.

AI risk is becoming more concrete, not less scary

AI research increasingly has its own concerns about dual-use. Over the past decade, as AI systems have become more powerful, more researchers (but certainly not all) have come to believe that humanity will suffer catastrophe if we build extremely powerful AI systems without taking adequate steps to ensure that they do what we want them to do.

Any AI system powerful enough to do the things we want – invent new drugs, plan manufacturing processes, design new machines – is also powerful enough to invent deadly toxins, plan manufacturing processes with catastrophic side effects, or design machines with internal flaws that we don’t even understand.

When working with systems this powerful, someone will make a mistake somewhere, aiming a system at a goal that is incompatible with the safety and freedom of everyone on Earth. Turning more and more of our society over to increasingly powerful AI systems, even though we know we don’t really understand how they work or how to make them do what we want, would be a catastrophic mistake.

But it is very difficult to align AI systems with what we want, and misaligned systems often perform well enough, at least in the short term. So it’s a mistake we are actively making.

I think more of our best and brightest machine learning researchers should spend time thinking about this challenge, and consider working at one of the ever-expanding set of organizations trying to solve it.

When information is a risk

Let’s say you discovered a way to teach an AI system to develop terrifying chemical weapons. Should you post a paper online describing how you did it? Or should you keep that information to yourself, knowing it could be misused?

In the world of computer security, there are established procedures for what to do if you discover a vulnerability. Usually you report it to the responsible organization (find a vulnerability in Apple’s software, tell Apple) and give them time to fix it before you tell the public. This norm maintains transparency while ensuring that the “good guys” who work in computer security don’t simply hand the “bad guys” a to-do list.

But there is nothing comparable in biology or AI. Virus-hunting programs don’t usually hold back the more dangerous pathogens they find until countermeasures are in place; they tend to publish them immediately. When OpenAI delayed the rollout of the text-generating model GPT-2 because of concerns about misuse, it was harshly criticized and urged to follow the more usual practice of publishing all the details.

The team behind the recent Nature Machine Intelligence paper has thought a lot about this “information danger.” The researchers said they were advised by security experts to withhold some details about how exactly they got their result, to make it a little more difficult for any bad actor who wants to follow in their footsteps.

By publishing their paper, they made the risks of emerging technologies much more concrete and gave researchers, policymakers and the public a specific reason to pay attention. It was ultimately a way of describing risky technologies that probably reduced risk overall.

Yet it is unfair to expect the average biology or AI researcher, who does not specialize in information security, to make these calls on an ad hoc basis. Experts in national security, AI safety and biosecurity should work together to create a transparent framework for managing information risks, so that individual researchers can consult experts as part of the publication process rather than figuring it out on their own.
