twists and turns of mRNA research

This Science article (which seems to be discussing a Nature article) has an interesting discussion of how scientific and technological research is full of twists, turns, and dead ends.

That Nature piece will also give non-scientists a realistic picture of what development of a new technology is like in this field. Everyone builds on everyone else’s work, and when a big discovery is finally clear to everyone, you’ll find that you can leaf back through the history, turning over page after page until you get to experiments from years (decades) before that in hindsight were the earliest signs of the Big Thing. You might wonder how come no one noticed these at the time, or put more resources behind them, but the truth is that at any given time there are a lot of experiments and ideas floating around that have the potential to turn into something big, some day. Looking back from the ones that finally worked out brings massive amounts of survivorship bias into your thinking. Most big things don’t work out – every experienced scientist can look back and wonder at all the time they spent on various things that (in retrospect) bore no fruit and were (in retrospect!) never going to. But you don’t see that at the time.

Science

So how could technological progress be accelerated? I suspect we will always need human brains to formulate experiments and make the final call on interpreting results, but it seems as though computers/robots should be able to perform experiments. If they can run many more iterations/permutations of experiments in a fraction of the time humans could, the cost of dead ends should be much lower. The humans won't have to worry as much about which experiments they think are most promising; they can just tell the computer to perform them all. And if we have really good computer models of how the physical world works, the need for physical experiments should be reduced. That seems like the model to me: first a round of automated numerical/computational experiments on a huge number of permutations, then a round of automated physical experiments on a subset of promising alternatives, then rounds of human-guided and/or human-performed experiments on further subsets until you home in on a new solution.
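To make that funnel a little more concrete, here is a minimal Python sketch of the staged screening idea. Everything in it is a placeholder of my own invention, not any real lab-automation API: the simulate and assay functions, the keep-N cutoffs, and the toy candidate names all stand in for whatever computational model, robotic assay, and review queue a real pipeline would use.

```python
# A minimal sketch of the staged screening funnel described above.
# simulate() and assay() are hypothetical stand-ins for a physics/chemistry
# model and a robotic lab assay; the cutoffs are arbitrary illustration values.

import random
from typing import Callable

def staged_screen(candidates: list[str],
                  simulate: Callable[[str], float],
                  assay: Callable[[str], float],
                  sim_keep: int = 100,
                  assay_keep: int = 10) -> list[str]:
    """Filter a large candidate pool through a cheap computational screen,
    then costlier automated experiments, then hand the survivors to humans."""
    # Stage 1: cheap computational screen over every permutation.
    sim_scores = {c: simulate(c) for c in candidates}
    shortlist = sorted(sim_scores, key=sim_scores.get, reverse=True)[:sim_keep]

    # Stage 2: automated physical experiments on the shortlist only.
    assay_scores = {c: assay(c) for c in shortlist}
    finalists = sorted(assay_scores, key=assay_scores.get, reverse=True)[:assay_keep]

    # Stage 3: the finalists go on to human-guided follow-up experiments.
    # Wherever the cutoffs fall, some false positives and false negatives
    # will slip through -- that tradeoff doesn't go away.
    return finalists

if __name__ == "__main__":
    pool = [f"variant-{i}" for i in range(10_000)]
    # Toy random scorers standing in for the model and the robotic assay.
    noisy = lambda c: random.random()
    print(staged_screen(pool, simulate=noisy, assay=noisy)[:3])
```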

Of course, for this to work, you have to do the basic research to build the accurate conceptual models followed by the computer models, and you have to design the experiments. And you have to be able to measure and accurately distinguish the more promising results from the less promising. There will still be false positives leading to dead ends after much effort, and false negatives where a game-changing breakthrough is left in the dustbin because it was not identified.

That suggests another idea, though: commit resources and brains to periodically making additional passes through the dustbin of rejected results, especially as computers continue to improve and conceptual breakthroughs continue to be made.
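In the same hedged spirit, here is a tiny sketch of what that periodic dustbin pass could look like: archived negative results get re-scored by a newer model, and anything that now clears a threshold is resurfaced for a human to look at. The archive layout, the rescore function, and the threshold are all invented for illustration.

```python
# A sketch of the periodic "dustbin pass" idea: re-score archived rejected
# results with an improved model and resurface anything that now clears the
# bar. The record layout and rescore() model are hypothetical placeholders.

from typing import Callable

def dustbin_pass(archive: list[dict],
                 rescore: Callable[[dict], float],
                 threshold: float = 0.8) -> list[dict]:
    """Re-evaluate previously rejected results with a newer model and
    return the ones that deserve a second look."""
    resurfaced = []
    for record in archive:
        new_score = rescore(record)          # newer model, same old data
        if new_score >= threshold:
            resurfaced.append({**record, "new_score": new_score})
    return resurfaced
```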

I doubt I am the first to think of anything above, and I bet much of it is being applied. To things like nuclear weapons, depressingly. But it seems like a framework for bumping up the pace of progress. The other half of the equation, of course, is throwing more brains and money into the mix. Then there is the long game of educating the next generation of brains now so they are online 20 years from now when you need them to take over.
