Tag Archives: technological progress

“dark factories”

The first time I heard “dark factories”, I pictured orcs toiling underground at Isengard (that’s Lord of the Rings, for any readers who are not the right type of nerd to know that). I also think of Philip K. Dick’s story Autofac, one of my all-time favorites. But no, the idea is that factories are emerging in China that are so automated the lights don’t need to be on most of the time, because no humans are present. Naked Capitalism has a ton of links describing this phenomenon in China. Apparently U.S. industrialists are touring these Chinese factories and are shocked at how advanced they are, and at how far behind western industry is.

The fact that western industrialists are invited to tour these factories would suggest that the technology is not secret. So maybe we should not feel threatened but rather look for opportunities to partner and learn. No doubt, there are similar factories churning out military and security hardware that are secret.

January 2026 in Review

Well, I seemed to be in a political mood in January. I try to stay on the policy side of the line, but that is hard when bad politics makes good policy impossible. Inspired by a Nate Silver post, I took a look back at what I see as key moments in the last 25 years of U.S. history, and there were just so many that were on a knife edge and ended up going the wrong way, in my view. Maybe there are other universes where things went better, but remember my scientific theory that once they make a Spiderman movie about a scientific theory, it is almost certainly wrong. I find it depressing how we got here, but there is no sense crying over it. We need to learn from the past, yes, but then face up to the present moment and start picking up the pieces from where we are.

Most frightening and/or depressing story: Evidence is crystal clear that sabotaging R&D spending is a very effective way to sabotage economic growth and progress. Attaboy to the fools, assholes and traitors currently in nominal charge of the U.S. government. Meanwhile, if a more rational administration ever takes hold, research on learning curves might provide some clues on where to concentrate our efforts for the greatest gains.

Most hopeful story: New York City congestion pricing was a hard-won U.S. transportation policy win in 2025. This is just good, economically sound urban policy that would be apolitical in a more rational world.

Most interesting story, that was not particularly frightening or hopeful, or perhaps was a mixture of both: I reviewed book reviews from 2025, one of which was Ezra Klein’s Abundance (not the 2012 book Abundance by Peter Diamandis; while I am not a huge fan of that one, I continue to be puzzled that Ezra Klein either was not aware of it or intentionally chose the same title for his book). I still find it hard to summarize that book in a sound bite, which would need to be done if it were ever going to serve as the basis for a political campaign. But here is an attempt: (1) Continuously review and streamline federal regulations, (2) increase public and private investments in critical technology and infrastructure, including recommitting to clean energy, and (3) address market failures in housing, health care, and education. #3 is a doozy of course, but the un-sexy answer just has to be to understand and implement the latest evidence-backed policies. I would think ramp up housing supply, Medicare for All, and free (tax-funded) college or trade school for all. And um, if we want a chance for any domestic agenda to succeed, we also need serious plans to manage international risks including war, ecosystem collapse, famine, and the massive refugee flows that may be coming. Now, I just want to acknowledge that there is a rosy future scenario where AI magically solves all these problems. The way that could work is that technological progress and economic growth suddenly pick up so drastically that we are awash in cash and resources, to the point that even the wildly suboptimal operations of our dysfunctional political system are adequate to solve the problems. I don’t think it is safe to put all our eggs in that basket! We had better assume that we will need to continue doing the hard work of allocating scarce resources to manage difficult problems for the foreseeable future.

AI predictions for 2026

It’s easy to find predictions for where AI technology, the “AI race”, and the knock-on effects for the US and world economies might go in 2026. I find myself slightly fatigued from hearing about it, but nonetheless it is important.

Here’s one knowledgeable sounding blogger’s predictions:

  • Artificial general intelligence will not be achieved in 2026.
  • Robots will not be able to clean my bathroom in 2026.
  • ‘No country will take a decisive lead in the GenAI “race”.’
  • “Work on new approaches such as world models and neurosymbolic will escalate.”
  • The AI-driven stock market bubble may pop, or it may not. The exact words here are “the beginning of the end”. Well, I can predict with 100% confidence that the stock market will either go up, down, or stay the same.
  • AI will be discussed in the US midterm elections.

Okay, nothing too earth shattering here. On the subject of “countries in the AI race”, one perspective is that the US is focusing nearly all its investment on private sector AI, while China is spreading its investments across a basket of technology and infrastructure investments including AI, “electric vehicles, batteries, robotics, solar panels, wind turbines and other forms of advanced manufacturing” (“the Antimonopolist” blog). The US was also at least trying to do this during and after the Covid-19 pandemic era, but that sensible long-term strategy has been monkey-wrenched by a certain fool in 2025.

Then again, we could ask whether the basic Econ 101 lessons have been completely disproven. Is it possible we should invest more in what we are good at and sell it to others, while buying things from them that they can make better, faster, or cheaper? There’s a tension of course between being highly efficient and focused on comparative advantage, and also being diversified so you are resilient if something happens to upset your trade flows. But we are certainly not seeing rational debate about any of this in the US political context.

Chartbook makes an argument that if you compare the US and European economies, it is really just the performance (measured by profits and stock market values) of the “superstar” US tech firms that makes the US look better. And while life at the top of the heap may skew the US numbers, quality of life for the average working European aided by their bumbling, stumbling social welfare systems is actually not that bad.

what’s new with learning curves?

At least since reading some early Singularity-adjacent publications by Vernor Vinge, Ray Kurzweil, and Bill Joy, I’ve been interested in learning curves. (And for the record, the topic and these authors were not considered politically “right wing” or even political at all at the time.) Progress, at least in certain technologies, tends to be exponential over time. This clearly applies to computer technology, where there are short product cycles, the needed infrastructure is more or less in place and/or can adapt as the technology is scaled/commercialized, and legal and institutional barriers to change are relatively low. Technologies with “recipes”, like chemicals, drugs, seeds and other agricultural technologies, might be other examples. For these we have the patent system to actually try to slow down scaling and commercialization to the pace of innovation. With energy technology, learning curves seem to play out over much longer periods of time because while available technology changes rapidly, our system tends to be locked into long-lived infrastructure that can only change slowly. So new energy technology rolls out slowly as it is scaled up and commercialized over decades. There are also entities with enormous political and propaganda power that fight tooth and nail to keep us locked into obsolete technologies and infrastructure that fit their historical (and profitable) business models. When you get to other technologies, like transportation and housing, public policy, legal, and institutional barriers are dominant and tend to retard or even prevent progress. Rollout is so hard that while there are pockets of innovation, many never see the light of day or don’t spread beyond the local/pilot scale, even when they are successful at that scale. These barriers also vary by location and jurisdiction, so progress is very uneven geographically. These are my thoughts anyway. As for what’s new, here’s a journal article from Advances in Applied Energy.

Variability of technology learning rates

Climate and energy policy analysts and researchers often forecast the cost of low-carbon energy technologies using Wright’s model of technological innovation. The learning rate, i.e., the percentage cost reduction per doubling of cumulative production, is assumed constant in this model. Here, we analyze the relationship between cost and scale of production for 87 technologies in the Performance Curve Database spanning multiple sectors. We find that stepwise changes in learning rates provide a better fit for 58 of these technologies and produce forecasts with equal or significantly lower errors compared to constant learning rates for 36 and 30 technologies, respectively. While costs generally decrease with increasing production, past learning rates are not good predictors of future learning rates. We show that these results affect technological change projections in the short and long term, focusing on three key mitigation technologies: solar photovoltaics, wind power, and lithium-ion batteries. We suggest that investment in early-stage technologies nearing cost-competitiveness, combined with techno-economic analysis and decision-making under uncertainty methods, can help mitigate the impact of uncertainty in projections of future technology cost.
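Wright’s model, mentioned in the abstract, is simple enough to sketch in a few lines. This is my own toy illustration, not code from the paper; the numbers are made up, and a 20% learning rate per doubling is just a commonly cited ballpark figure for solar-like technologies:

```python
# Wright's model: unit cost falls by a constant fraction (the "learning rate")
# with each doubling of cumulative production.
# C(x) = C0 * (x / x0) ** (-b), where the learning rate LR = 1 - 2 ** (-b).

import math

def wright_cost(c0: float, x0: float, x: float, learning_rate: float) -> float:
    """Projected unit cost at cumulative production x under Wright's model."""
    b = -math.log2(1.0 - learning_rate)  # experience exponent from the learning rate
    return c0 * (x / x0) ** (-b)

# Illustrative (invented) numbers: a technology costing $100/unit at a
# cumulative production of 1,000 units, with a 20% learning rate.
c0, x0, lr = 100.0, 1_000.0, 0.20
for doublings in range(5):
    x = x0 * 2 ** doublings
    print(f"{x:>10,.0f} units -> ${wright_cost(c0, x0, x, lr):6.2f}/unit")
```

Each doubling multiplies cost by 0.8 here ($100, $80, $64, …), which is exactly the constant-learning-rate assumption the paper argues fits poorly for most of the 87 technologies it examines.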

A post at Construction Physics takes a deeper dive across more industries, and discusses at least one large data set that is available for analysis. If you could accurately predict learning rates (and successful scaling/commercialization rates) for specific technologies based on known factors, then theoretically you could fine-tune policies and incentives to increase the rate of progress in the technologies you want. So this is an area of research that could boost all other areas of research and progress.

2025 Science (with a capital S!) breakthrough of the year

Does Science with a capital S speak for science? I don’t know, science, or nature or Nature might have something to say about that. Small-s science, after all, is just a way of asking questions and trying to strengthen our confidence in what we think we know about nature. Despite all that, the magazine/publishing conglomerate known as Science nominates candidates for scientific breakthrough of the year and then chooses one. This year’s winner is renewable energy.

This year, renewables surpassed coal as a source of electricity worldwide, and solar and wind energy grew fast enough to cover the entire increase in global electricity use from January to June, according to energy think tank Ember. In September, Chinese President Xi Jinping declared at the United Nations that his country will cut its carbon emissions by as much as 10% in a decade, not by using less energy, but by doubling down on wind and solar. And solar panel imports in Africa and South Asia have soared, as people in those regions realized rooftop solar can cheaply power lights, cellphones, and fans. To many, the continued growth of renewables now seems unstoppable—a prospect that has led Science to name the renewable energy surge its 2025 Breakthrough of the Year…

China’s mighty industrial engine is the driver. After years of patiently nurturing the sector through subsidies, China now dominates global production of renewable energy technologies. It makes 80% of the world’s solar cells, 70% of its wind turbines, and 70% of its lithium batteries, at prices no competitor can match.

The article makes the point that this progress is not really a technological breakthrough, but rather a successful scaling up of technology invented during the space race half a century ago. Materials science does offer some possibilities for breakthroughs on the near horizon:

Solar cells today are made of crystalline silicon, but another kind of crystal, perovskites, can be layered in tandem with silicon to make cells that gain efficiency by capturing more colors of light. Material advances are enabling wind turbine blades to get longer and harvest more energy, while designs for floating turbines could vastly expand the offshore areas in which they could be deployed. And the giant lithium-ion batteries now used to store energy when sunshine and wind falter could one day give way to other chemistries. Vanadium flow batteries and sodium batteries could be cheaper; zinc-air batteries could hold far more energy.

And there you go – an agenda for research and development that the U.S. could get behind, or better yet, cooperate internationally on a win-win basis.

Meanwhile the nominees that were not chosen were:

  • Gene-editing to cure rare diseases in human babies and adults
  • New antibiotics effective against antibiotic-resistant gonorrhea, which continues to evolve
  • A breakthrough in understanding how cancer can spread through the nervous system
  • Advances in telescopes
  • DNA reconstruction of early humans
  • Large language models conducting math and scientific experiments on their own – In 2025 this was applied to thorny math problems and to chemical and drug development. The article notes that AI agents did not really live up to their hype overall in 2025.
  • Stuff involving subatomic particles. Honestly, this stuff is interesting but it’s hard for us normals to draw straight lines to how it might eventually affect our daily lives. Of course this doesn’t mean it won’t, it just means a lot of twists and turns as it works its way through the worlds of science and technology over time.
  • Genetically engineered organs grown in pigs and transplanted to people (successfully, at least for a period of months, which seems to be much longer than these particular people were expected to live without the experimental transplants). Are these pig organs or human organs grown in pigs? At some point it doesn’t matter.
  • Advances in heat-resistant rice

The article makes a parting shot at the U.S. government under Trump, for just intentionally shooting our entire scientific development pipeline in the foot. These were not the actions of a patriot, if I need to remind anyone.

what’s new with fusion?

One thing that’s new, according to the New York Times, is massive investment in R&D by the government of China. Meanwhile in the U.S., it’s more about startups and public-private partnerships. In at least one anecdotal case mentioned in the article, top scientists from China who have been based at top U.S. universities for decades are choosing to go home. The article suggests that American firms are often the first to achieve breakthroughs, but the Chinese system is better at scaling and commercializing them.

It seems like there is an opportunity to cooperate for the greater good here, no? That is not the way the political winds are blowing at the moment, of course. At least in this case, if our countries aren’t actually cooperating, they are not competing to weaponize the technology first. This technology was weaponized more than half a century ago, of course, and the quest ever since has been to learn how to control it for peaceful purposes. Of course, the joke is always that commercial fusion is two decades away, no matter what decade we are currently in. This article declines to give a clear timetable for widespread commercialization, but it talks about a “pilot plant in the 2030s and 2040s”, so yep, the rolling two-decade projection seems to be holding.

BBC: 25 most important scientific ideas of the 21st century

BBC has a list called The 25 most powerful ideas of the 21st century (so far), picked by the world’s top thinkers. They don’t spell out science or technology in the title, but I don’t see any grand philosophical or literary analysis here. It’s not exactly clear if the list is in any order, other than maybe grouped loosely by topic. I’m just going to list a few I found interesting below, in categories:

  • Medicine: stem cells that don’t come from babies, mRNA vaccines, genome sequencing, a cure for HIV*, the HPV vaccine, contraception apps [what we used to call “the rhythm method” and were cautioned not to use, but the apps now make it accurate], tissue engineering [this is growing body parts from a sample of human DNA for implant back into that same person – the article says ears, trachea, and bone have been used in patients, while kidneys and hearts are still at the research stage], psychedelic therapy
  • Environment: global warming and continuing carbon emissions, attribution analysis
  • (Information) Technology: large language models, robots that can do chemistry experiments
  • (Other) Technology: self-repairing materials
  • Physics/Cosmology: dark matter, the Higgs boson, the James Webb telescope, exoplanets, gravitational waves

* The HIV cure deserves some extra discussion. HIV can be cured, at least in some people sometimes, by transplanting bone marrow from a naturally HIV-resistant person. A bone marrow transplant is such a big deal that it would not be ethical to do it for people whose only problem is HIV(!) because other effective treatments are available. It is done for people with terminal leukemia when no other treatments are available. A few of these people have HIV, and it has been shown that their HIV can be cured. So we need to keep working on applying some of the other technologies to an HIV vaccine and/or cure.

I want to just briefly talk about the contraceptive apps. I might have heard about that but didn’t realize it had been so rigorously studied and FDA-approved. It seems so simple and yet a breakthrough, which I find heartening. I find this heartening because I would like to see our society eventually move on from the abortion debate, and the way to move on in my view is to improve technology, access and knowledge about birth control while reducing stigma. This seems to me to accomplish all those objectives without a major scientific breakthrough being required. (I am under no illusions about the politics – if technology solves an issue, people who need an issue to suit their political purposes will find or manufacture another issue.)

what’s next for (incremental improvement of commercial) AI

We normals are hearing in the media that the large language model approach to AI has run its course, that further scaling it up is prohibitive in terms of energy, and that there is an AI-hype-driven financial bubble ready to pop any moment. According to at least one blogger though, the big breakthrough happening right now is having these models “reason” internally before they give an answer.

Two of those leading engineers are Julian Schrittwieser, who helped teach AlphaGo to play Go at a level never witnessed in human history and is now a lead researcher at Anthropic, and Łukasz Kaiser, who, while at Google Brain, co-authored “Attention Is All You Need”, the paper that launched the architecture now driving every major released model.

Kaiser, for his part, corrects time horizons. The category of work that still belongs unquestioned to humans is shrinking. He states, with a deep belief, that these AI systems will be able to do any labor task currently performed on a computer within a timeframe of five years!

The question is not whether machines will pass some imagined threshold in the future, but what it means that they have already crossed thresholds we still debate as hypothetical. A society reacts to what it believes is true, not to what is true. When the prevailing public understanding is delayed by years, institutions are, by definition, operating in a prior decade.

We can model technological progress as a series of sequential, overlaid S-curves that have to overlap in just such a way as to produce continuous exponential growth. At least some insiders are still thinking in terms of keeping this S-curve going, in a competition between companies and countries. And when we see a new technology break through into widespread public, commercial use, it has already been going in the lab for a while. That lag used to be measured in decades; now it is months, if these optimist insider voices are to be believed.
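The overlaid-S-curves idea can be sketched numerically. This is my own toy illustration, not a model from the post: each “generation” is a logistic curve that starts later but plateaus an order of magnitude higher, and the sum of the stack traces out roughly exponential growth:

```python
# Toy model: summing staggered logistic S-curves approximates continuous
# exponential growth, the way successive technology generations hand off
# to one another. All parameters here are invented for illustration.

import math

def logistic(t: float, midpoint: float, ceiling: float, steepness: float = 1.0) -> float:
    """One technology generation's S-curve of cumulative capability."""
    return ceiling / (1.0 + math.exp(-steepness * (t - midpoint)))

def stacked_scurves(t: float, generations: int = 6) -> float:
    """Overlaid S-curves: each generation starts 5 time units later
    but plateaus 10x higher than the last."""
    return sum(logistic(t, midpoint=5 * g, ceiling=10 ** g) for g in range(generations))

# The envelope of the stack grows roughly 10x every 5 time units,
# i.e. approximately exponentially, even though no single curve does.
for t in range(0, 30, 5):
    print(t, round(stacked_scurves(t), 1))
```

The handoff is the fragile part: if one generation’s midpoint slips (a breakthrough is late), the envelope visibly flattens until the next curve takes over, which is the gap the “keep the S-curve going” crowd is racing to avoid.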

https://onepercentrule.substack.com/p/is-ai-on-a-new-trajectory

the “military-digital complex”

The first time I heard this term was in this post from Naked Capitalism, but it sounds right. The article focuses on Palantir and an “alliance” between the US government and tech companies (particularly Palantir) and the Israeli government and tech companies. Palantir does indeed seem sinister. The events in Xinjiang were the first time I had heard of the idea of “social credit scores” to track and control large masses of people, and the events in Gaza take this concept to a new level of (I’m just going to say it) abhorrent violence and immorality.

I read a book once, and I can’t remember or find the title, making the point that these systems for ranking and controlling people go back to at least the Roman and Spanish Inquisitions. If you think about it, the religious authorities of the time would have been the only ones (in their society I mean) with access to the technology and skills needed, such as paper, quills, ink, and literacy. Move on to Tsar, Gestapo, Stasi, and J. Edgar Hoover, and similar ends were accomplished with typewriters and file folders. So it was probably inevitable that modern computerized database technology, and now machine learning technology, would take this to a new level.

And these technologies have many peaceful democratic and economic uses, so we would not want to put this genie back in the bottle even if we could. I also think that as cyber- and bio-weapons of mass destruction become increasingly accessible and dispersed into many more hands, this kind of surveillance will become necessary to manage these risks, which are existential. So the only real options here are to have political controls on the misuse of these technologies in democratic societies, and to have updated and strengthened international institutions akin to the nuclear, chemical, and biological weapons control regimes of the past. At the moment, of course, it seems we are going down a dark path of increasingly sinister domestic surveillance with weakening democratic controls, along with weakening international controls. And I don’t know that governments focused on misusing these technologies to oppress their own citizens are going to be the ones most effective at also using them to manage the existential risks.

The International Atomic Energy Agency does in fact have a new(ish) AI-powered surveillance system called MOSAIC designed by…Palantir.

The palantír, of course, is a crystal ball that figures in the Lord of the Rings. It was created by the wise elves, who, sure, were somewhat elitist and mildly racist, but had the best interests of us common humans at heart overall. But the palantír fell into the wrong hands and was misused by the forces of darkness. Only wizards and hobbits can save us now.

Is the AI bubble bursting?

Apparently trying to answer this question is consuming a lot of bandwidth in the financial, tech, and even geopolitical arenas right now. Here is one answer from Larry Johnson, whose politics and past statements I do not necessarily endorse. Just to very briefly summarize his article: YES.

A few insights of my own:

  • The AI “hype bubble” has almost certainly reached a commanding height, and will pop at some point. This will probably be felt in stock market index valuations, which are dominated by a handful of large tech companies at the moment. In my lifetime, now covering half a century, we have seen this cycle first with the personal computer itself and then with the internet. In both cases, the expectation that these technologies would super-charge economic growth within a few years did not pan out, and that led to financial market declines. Both technologies have in fact transformed the economy drastically; it just took a few decades rather than years. Things do seem to be happening faster this time around, I admit.
  • When it comes to stock market crashes, there is usually some precipitating event like the Asian financial crisis in 1997 or U.S. derivative bubble in 2007. The combination of technology bubble bursting and external financial shock seems to be particularly powerful. In fact, when I look back, I think I can argue the forward progress of the U.S. halted around that 1997 (financial crisis) to 2000 (Bush v. Gore) to 2001 to 2003 (9/11 attacks and Iraq invasion) period, and went into outright decline between the 2007 financial crisis and 2020 Covid crisis.
  • Apparently some in Silicon Valley thought the artificial general intelligence singularity was so near when the LLMs first came out, and that US tech companies were so far ahead of international peers, that it justified huge short-term investments in order to gain a first mover advantage that would then be insurmountable. This particular bubble seems to be popping at the moment, with AGI clearly not here right now, and perhaps a loose, emerging consensus that LLMs are a useful technology but not a likely path to AGI. So companies may have over-invested in infrastructure that will hurt some of them badly in the short term, while possibly benefitting us all in the longer term (think about 19th century railroads for a fairly obvious analogy).

So there is somewhat of a race here – will we start to see significant economic benefits of these new technologies before some external shock hits us? This is the luck of the draw. It seems luck has not been on our side for the last 25 years or so. Perhaps we’re due.