Category Archives: Book Review – Nonfiction

Thermoeconomics in a Time of Monsters

I am always trying to puzzle out how the financial and real economies are related, and how the loss of functioning ecosystems may eventually put the brakes on both. In theory, the financial markets could act as some sort of cushion, with rising prices allowing enough adjustment to soften the onset of actual physical shortages and severe hardship. Anyway, I heard an interview with Warwick Powell recently that seemed to cover some of this. Below is the Goodreads description of his most recent book (which I haven’t read).

Thermoeconomics in a Time of Monsters: Rethinking Theory, China and International Geopolitical Economy

In an era when the old world is dying and the new one struggles to be born, Thermoeconomics in a Time of Monsters delivers a groundbreaking reframing of political economy through the unforgiving laws of thermodynamics.

Warwick Powell argues that all social-economic systems are fundamentally energetic and metabolic. Their reproduction, stability and resilience are perpetually threatened by entropy—the natural tendency toward disorder and energetic dispersion. Successful societies survive and thrive through deliberate negentropic energetic renewal, efficiency gains, and institutional arrangements that convert energy surpluses into sustained order.

Building on classical concerns with value, surplus, production and circulation, Powell develops Systemic Exchange Value (SEV) theory. Money, information, supply chains and fictitious capital are reinterpreted as energetic phenomena. The result is a coherent ontology that integrates thermodynamics, endogenous money and information theory.

I often hear the claim that classical economics “ignores” or “doesn’t know about” land, energy, natural resources, or ecosystem services more generally as inputs to production. I don’t think this is exactly true; economists just choose to assume these factors are negligible compared to labor and capital, and they have some sound evidence that this assumption has been reasonable over the past couple of centuries. What could change their minds is the assumption becoming unreasonable: high prices or outright shortages of food, water, energy, and/or the physical environment’s ability to absorb our wastes becoming a large factor relative to labor and capital. Then maybe that will force a merger between the abstract models of economics and the iron laws of our actual physical universe. But is it possible that incorporating these factors in advance could help us head off the crisis? You might want to try to invent a parachute before you jump off the cliff, not while you’re in freefall.

https://www.goodreads.com/book/show/249361440-thermoeconomics-in-a-time-of-monsters

The Singularity is Really, Seriously Very Near You Guys

Okay, so I got around to reading The Singularity is Nearer by Ray Kurzweil. It is worth a read. It is not really a rewrite or update of The Singularity is Near, and I would still recommend reading that 2005 book first to anyone. I suppose reading it now would require a bit more of a historical lens, since some events have already come to pass and others may not be on the track he envisioned (although surprisingly close considering how much time has passed!) This new book feels a bit more thrown together, and yet it is very interesting. My bullets below are not a summary or review of the book – I’m sure you can find that elsewhere on the internet if you prefer that to reading (or don’t have time to read) the book yourself. These are just a few things I found interesting or surprising enough to bookmark as I was reading the book.

  • Okay, well actually this is a pretty good four-sentence summary:

Eventually nanotechnology will enable these trends to culminate in directly expanding our brains with layers of virtual neurons in the cloud. In this way we will merge with AI and augment ourselves with millions of times the computational power that our biology gave us. This will expand our intelligence and consciousness so profoundly that it’s difficult to comprehend. This is what I mean by the Singularity.

In the past I’ve heard him say 1,000x. Which is it, Ray??? Well, it doesn’t matter, does it, because we can’t comprehend the difference between 1,000 and 1 million. I can’t comprehend being twice as smart as I am, in fact. Einstein supposedly had an IQ of about 160, so he was only 60% smarter than me, if IQ were an adequate measure of anything.

  • A random factoid, quoting Steven Pinker I believe: violent death in hunter-gatherer societies has been estimated at around 500 per 100,000 people per year. War death rates in Germany, Japan, and Russia during the 20th century were more on the order of 100. It’s not clear to me, though, whether this was the annual rate during World War II or an average over the entire 20th century, which would seem disingenuous. But his point is that even though violence may seem high to us, civilization has achieved massive reductions compared to historical times.
  • While the cost of solar panels has declined exponentially, the cost of permitting and installation has not fallen as fast, if at all. This seems to support some of my recent musings that human institutions and sociopolitics act as a brake on the implementation of new technology.
  • He’s very bullish on democracy as promoting the spread of peace. This is a nice idea, but I can’t help noting that two of the world’s nominal democracies (the US and Israel) seem to be some of the most violent actors in the world at the moment, while many autocratic countries (thinking of Middle Eastern hereditary dictatorships in particular) seem to be the more restrained and logical parties. But of course, it may just be that a violent sociopath in charge of one of those autocracies would be much worse than the violent sociopaths being only partially restrained by the struggling democratic institutions in the US and Israel. This moment will eventually pass (barring a civilization-ending nuclear exchange), and the dictatorship in North Korea, to cite one example, seems likely to endure.
  • He argues that GDP growth statistics do not account for improved quality of life due to technology. The internet is a simple case – much of it is free to people (this has to do with the marginal cost of providing it) and therefore doesn’t add much to GDP, and yet we clearly value it highly, and in the past we could not even have conceived that there would be such a thing to value.
  • He has high hopes for two agricultural technologies, cultured meat and vertical farming, to eventually solve both our food supply problems and most of our environmental problems currently caused by agriculture. He sees AI accelerating advances in material science and clean energy that will “turn all technology into information technology” and make it very inexpensive. It’s a nice vision, and my instinct is that we are taking baby steps in that direction, but it will be a long time coming, if it happens at all. But this is Kurzweil: he sees massive acceleration in progress in the next decade, while my instinct is based on my past experience of the linear part of the curve.
  • 3D printing is another technology he is bullish on. Basically, he sees the trend as being toward decentralized production of almost everything from energy, water, and food to manufactured goods.
  • He sees simulation as the key to massive acceleration in medicine. Basically, the idea is that if AI can develop very good digital models of human bodies and brains, then AI can do massive simulated drug trials in minutes or days that would take years or decades in human subjects. Here, you can certainly imagine the human regulatory framework slowing things down, and that is probably for the best, but over time it may be shown that the digital trials are as accurate as the in-real-life trials, and resistance will eventually break down.
  • Now for the weird existential stuff. First, everyone should reread Altered Carbon (my suggestion, not mentioned by Kurzweil). We have probably all had the thought that when we wake up from sleep, we feel a continuity in our consciousness from the day before, but how can we really be sure that we are the same person? A perfect copy of my mind downloaded into a perfect copy of my body, or downloaded into a perfect simulation of my body and its surrounding environment, would have the identical experience. So in a sense, I can live forever in this situation. Kurzweil says it is a philosophical question, not a scientific one, whether I would still be myself in this situation. This makes a lot of people uncomfortable, including me. But there is another case where I connect my biological brain to the cloud and gradually extend my consciousness. I do not lose the biological part of my mind in this case, but gradually the biological part of my mind becomes a smaller and smaller part, until I might decide at some point to leave it behind. This is still an uncomfortable thought, but less uncomfortable than the previous one. There are many, many philosophical, moral, and practical socioeconomic thoughts to unpack here of course. More than I can even dip a toe into at the moment (inheritance? will anyone want to have or raise babies? what if someone kills the biological me but not the digital me, is that murder? most people would say yes to that last one – etc…)
  • He sees solutions to the idea of out-of-control biological (e.g., vaccines) and nanotechnological (e.g., controlling nanobots by “broadcasting” from a central location with a kill switch) threats. AI can play a role in all of this. But he sees AI as the biggest threat, with no sure-fire way to control it although there are promising ideas to mitigate the risks.
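
The solar-panel bullet above can be made concrete with a toy learning-curve calculation. A cost drop of roughly 20% per doubling of cumulative production (sometimes called Swanson’s law) is an empirical rule of thumb for solar PV; the specific dollar figures below are purely illustrative, not numbers from the book.

```python
import math

def learning_curve_cost(initial_cost, initial_volume, volume, learning_rate=0.20):
    """Cost after cumulative production grows from initial_volume to volume,
    assuming each doubling of cumulative volume cuts cost by learning_rate
    (~20% per doubling is the usual rule of thumb for solar PV)."""
    doublings = math.log2(volume / initial_volume)
    return initial_cost * (1 - learning_rate) ** doublings

# One doubling at a 20% learning rate takes a $100 module cost to $80:
print(learning_curve_cost(100.0, 1.0, 2.0))  # 80.0

# Twenty doublings cut the cost by almost 99%:
print(round(learning_curve_cost(100.0, 1.0, 2.0**20), 2))  # 1.15
```

This is exactly the asymmetry the bullet points at: module costs ride this curve because they scale with manufacturing volume, while permitting and installation labor do not double in cumulative "production" the same way, so their costs barely move.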

The 2030s and 2040s certainly sound like interesting times, barring any sort of tragic disaster between now and then. At the moment, we need to focus on not derailing our civilization and species to the point that we don’t get to find out.

Goodreads

2025 gardening books

Here is a roundup of recent gardening books from the Joe Gardener Podcast. I like to do a gardening book around January each year, so this will give me some new ones to think about. Yes, you can accuse me of being mostly an armchair gardener if you want. I have a garden but I take a mostly laissez-faire approach, especially this past year when work, school, family, and life have conspired to take up 150% of my available time (outside of sleep and eating, two things I never skimp on). Here are a few that caught my eye:

  • How Can I Help – a new one on ecological gardening from Doug Tallamy
  • Nature’s Action Guide by Sarah Jayne – sounds kind of similar actually
  • Several books on seed saving, a topic I have always been interested in.
  • Fruit Tree Pruning: The Science and Art of Cultivating Healthy Fruit Trees by Susan Poizner – I have two fruit trees. They produce a significant crop of Asian pears and persimmons each year. This makes the neighborhood squirrels very happy.
  • The New Organic Grower: A Master’s Manual of Tools and Techniques for the Home and Market Gardener by Eliot Coleman – a classic. I used to have a copy, which I remember asking a bookstore to order for me before Amazon or even the internet existed, back when I was young and thought I might grow up to live on a piece of land and have some time on my hands.
  • Plant Grow Harvest Repeat by Meg McAndrews Cowden – “the book on succession planting”
  • The Vegetable Gardening Book: Your Complete Guide to Growing an Edible Organic Garden from Seed to Harvest by Joe Lamp’l – the podcast guy
  • Attracting Beneficial Bugs to Your Garden by Jessica Walliser – pretty self-explanatory right?

Noam Chomsky is old!!!

In my last post I posited that 61 is not that old, because it is not that much older than I am right now. Well, Noam Chomsky is 96, and that sounds old to me! How will I feel about that when I am, say, 89, if I am fortunate enough to make it that far? Congratulations to Noam for being alive and kicking and, not only that, WRITING BOOKS!

Anyway, he has a new (ish, to me) book from 2024, and here is a brief excerpt posted in a blog called neuburger.substack.com.

Elites gonna elite, aka manufacture consent. We have enough knowledge, technology, and wealth on this planet to all live in relative peace and comfort right now if we could only get out of our own way. But perhaps it is “utopian” to think that our species of nearly hairless poop-slinging monkeys will ever be able to do that on any scale for any length of time.

Btw, the book is The Myth of American Idealism: How U.S. Foreign Policy Endangers the World.

Wikipedia

7 philosophy books for beginners

Openculture.com has a list of where to start on philosophy. Perhaps I’ll add these to my retirement reading list.

They are as follows: Bertrand Russell’s The Problems of Philosophy, Simon Blackburn’s Think, the complete works of Plato, Marcus Aurelius’ Meditations, St. Augustine’s Confessions, René Descartes’ Meditations on First Philosophy, and John Stuart Mill’s On Liberty.

entropy economics

James K. Galbraith, an economist at the University of Texas, has a new book (co-authored with Jing Chen) called Entropy Economics: The Living Basis of Value and Production. The ideas are not really new, as he admits:

As we and others have said before, from a physics perspective, resources are low-entropy materials (Georgescu-Roegen 1971). The entropy law holds that systems tend towards higher entropy states spontaneously. Living systems, as non-equilibrium systems, need to extract low-entropy materials from the environment to compensate for their continuous dissipation.

We are taking concentrated resources from the Earth’s biophysical system, using them to perform useful work, and producing wastes (less concentrated substances and heat) that are too diffuse to perform further work and in many cases cause harm to the system. Entropy must increase at the scale of the universe, but organized systems like life and human civilization can get away with decreasing it locally, on scales that matter to us short-lived primates, if not to a dark, cold universe that most likely doesn’t care about us. (Revealing my atheist stripes here; sure, if you are religious, that helps to solve this existential dread problem, and good for you!) There is a scale where the impact of our human economy becomes large relative to the physical system it is embedded in, and the economic theories we have based critical decisions on have so far chosen to neglect that. Economists might say: our equations can account for that, we have just chosen to neglect it, and we have clearly stated our assumptions. Well, those assumptions no longer hold as we approach or pass the point of no return.
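
The bookkeeping in this paragraph can be stated compactly using Prigogine’s standard decomposition from non-equilibrium thermodynamics (a textbook sketch, not something quoted from Galbraith’s book):

```latex
dS_{\text{system}} = d_i S + d_e S, \qquad d_i S \ge 0
```

Here \(d_i S\) is entropy produced internally (always nonnegative, per the second law) and \(d_e S\) is entropy exchanged with the surroundings. An organism or economy keeps its own entropy from rising only by making \(d_e S\) sufficiently negative, that is, by importing low-entropy resources and exporting high-entropy wastes, while the total entropy of system plus environment still increases.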

Many others have made these points. In addition to Georgescu-Roegen, a few that come to mind are Herman Daly, Howard Odum, Brian Czech, Jay Forrester, and the authors of World3. But these voices have been ignored by mainstream economists because they were from other disciplines, did not have the right credentials, or did not make their arguments at a time when the prevailing body of thought was receptive. So it probably helps to have one more credentialed academic economist make them for the audience of academic and professional economists at this particular point in history. Today’s students will be tomorrow’s professionals. Economists are very, very important. For better or worse, their opinions and choices and advice to policy makers shape our world. Maybe at a time when the public has become less receptive to these ideas even though the crisis has rapidly worsened, the economics profession could be ready to listen. I don’t know, but it’s worth another try.

University of Chicago Press

Kurzweil’s Singularity is Nearer

Well, what do you know, Ray Kurzweil’s The Singularity is Nearer came out in 2024 and I somehow didn’t notice. I still tell young people who ask that the original book is a drop-everything-must-read. The question is whether I would tell them to drop the original and read this one first. (As much as I love interacting with Gen Z, the idea of many of them reading two complete books seems like a stretch…) I’ll definitely read this one and compare when I get a chance. Of course they also need to read How Much Is Enough?: Money and the Good Life. And probably Manufacturing Consent, which from 1988 might still be the best explanation we have of the propaganda techniques we all seem to be falling for here in 2025.

Project Syndicate 2024 book picks

Usually Project Syndicate tells me my free articles are used up, but they are letting me look at their “best books” roundup, I suppose because they are trying to sell me something, and I should thank them for the privilege. Anyway, there are a few interesting ones here in the realm of socioeconomic and/or geopolitical nonfiction. I don’t read too many books in this genre because I am a busy working parent and many of these are TL;DR books that would have worked fine as longish magazine articles. In fact, sometimes they are magazine articles that got popular and the authors/publishers are trying to cash in. Other times I suspect they are written by humanities professors who are paid by the pound. Nonetheless, here are some that caught my eye. As usual, I am more or less just riffing on the titles and haven’t actually read the books, so don’t take my thoughts as book reviews per se.

  • Amir Lebdioui, Survival of the Greenest: Economic Transformation in a Climate-conscious World. Some ideas on how developing countries could maybe lead the way on various green new deals? Sure, I want to believe in this…
  • Atossa Araxia Abrahamian, The Hidden Globe: How Wealth Hacks the World. “a fascinating tour of ‘extralegal zones’ of suspended sovereignty – an interconnected network of autonomous, business-friendly enclaves where conventional tax, labor, and immigration laws do not apply.”
  • Yanis Varoufakis, Technofeudalism: What Killed Capitalism. “a classic case of feudal rent defeating capitalist profit, of wealth extraction by those who already have it triumphing over the creation of new wealth by entrepreneurs.” Well, I want to believe in the tech companies because when it comes to U.S. comparative advantage, it’s kind of all we have left? (well, maybe biotech, but a lot of that is tied up with the predatory health insurance/finance industry which has captured our elected officials and is financially raping its own citizens and customers all day every day rather than creating new value.) I want to believe in Schumpeter’s basic formula: capitalism=competition=innovation=”the greatest wealth creating engine the world has ever known”. But if the tech industry and other modern big businesses are not capitalism at all but rather disguised feudalism, that sort of solves my problem of needing to believe in them. The problem being, what is left to believe in?
  • Shannon Vallor, The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking. AI and (lack of?) ethics. In my own interactions with AI, I have noticed that it can sometimes show more empathy and patience than any human being could consistently be expected to show. You can shout or curse at it and it responds with “I understand your frustration…” and tries to help you. Does it matter whether there are any emotions there as we understand the term? What seems to matter is whether the AI’s interests are aligned with mine. So that is probably what we need to think about.
  • William Ury, Possible: How We Survive (and Thrive) in an Age of Conflict. From a “world-renowned negotiation expert”. Well, negotiations are about figuring out what the interests of the parties are, where they are aligned, and finding something that makes everybody a little better off even if nobody is fully satisfied?
  • Malcolm Gladwell, Talking to Strangers: What We Should Know about the People We Don’t Know. I don’t know if this is a good book, or just time for Malcolm Gladwell to write a book… but there seems to be a negotiation, competition, empathy, and cooperation theme developing here. Per Schumpeter, pure capitalist competition is supposed to be sort of an inadvertent cooperation that lifts all boats, right? Dear capitalists – don’t bite the invisible hand that feeds you.
  • Robert D. Blackwill and Richard Fontaine, Lost Decade: The US Pivot to Asia and the Rise of Chinese Power. I just don’t want to believe that China is a military threat to the United States. Maybe I am naive, but I just don’t see how it can be in their interests to threaten us. On the other hand, I am 100% certain they feel threatened by us. So how about a little strategic empathy? Can we be less threatening and still deter conflict?
  • Jonathan Haidt, The Anxious Generation: How the Great Rewiring of Childhood Is Causing an Epidemic of Mental Illness. When I was a kid, it was dumb TV and high-sugar cereal that was supposedly rotting our brains. But I do see the screen-addiction in my own kids, and I don’t deny the rise in mental illness (diagnoses, at least) among children. Still, the screens give my children access to the world’s information that I could only dream of at their age, and they will be interacting with screens some day in some capacity as part of the work force. So I don’t have the answers here certainly, but I don’t think turning the screens off entirely can be the answer. Talking about what is on the screens sounds like a better path.
  • Kevin A. Young, Abolishing Fossil Fuels: Lessons from Movements That Won. I am thinking about the sudden spike in energy use when the AI search engines were turned on. I am thinking about the Kardashev scale, where a civilization’s level of advancement is measured by its energy use (more=more advanced). I am thinking about the Fermi paradox – is it possible that civilizations throughout the universe invent AI but then can’t come up with a viable way to power it without fouling their own nest? This doesn’t really make sense though, when half a century of investment and research in safe nuclear power could have gotten us to a place where we could be fueling the AI awakening more sustainably. The sun’s energy is virtually limitless on our human space and time scales, and solar panels in space are viable with current technology – we would just have had to invest in this and make it happen. Fusion is more speculative but there are some promising developments. I’m just saying, our human performance here on Earth may be pathetic and it seems like we may not make it long term, but if there are a billion civilizations out there similar to ours there must be some that got it right.
  • Michael Lewis, The Fifth Risk: Undoing Democracy. “the glaring absence of leadership and preparation during the transition to Donald Trump’s first administration, revealing how the US president-elect appointed incompetent and uninformed individuals to oversee America’s vast bureaucracy.” But this time around, it seems like we are getting even less competent, less informed clowns and fools, and only clowns and fools. Maybe the answer to the Fermi Paradox is that in all the billions of advanced civilizations that arise in the galaxy, a Donald Trump always arises at some point and shits the bed.
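
The Kardashev-scale aside in the fossil-fuels bullet above has a concrete formula behind it: Carl Sagan’s 1973 interpolation assigns a continuous rating from a civilization’s total power use. The ~2e13 W figure for humanity below is an order-of-magnitude assumption for illustration.

```python
import math

def kardashev(power_watts):
    """Sagan's (1973) continuous interpolation of the Kardashev scale:
    K = (log10(P) - 6) / 10, with P in watts.
    Type I ~ 1e16 W, Type II ~ 1e26 W, Type III ~ 1e36 W."""
    return (math.log10(power_watts) - 6) / 10

# Humanity's total power use is on the order of 2e13 W (an assumed
# round figure), which puts us at roughly Type 0.73:
print(round(kardashev(2e13), 2))  # 0.73
```

Each tenfold increase in power use adds 0.1 to the rating, which is why an AI-driven spike in energy demand moves the needle far less than it moves electric bills.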

the other book recommendations from Bill Gates

I already mentioned The Coming Wave, a book about AI. Here are the others – honestly, none really catches my eye. But for the sake of completeness:

The Coming Wave

Bill Gates is starting to pump out some end-of-year book recommendations, and he identifies The Coming Wave by Mustafa Suleyman as his “favorite book about AI”. Here are a few quotes (from the Gates article):

…what sets his book apart from others is Mustafa’s insight that AI is only one part of an unprecedented convergence of scientific breakthroughs. Gene editing, DNA synthesis, and other advances in biotechnology are racing forward in parallel. As the title suggests, these changes are building like a wave far out at sea—invisible to many but gathering force. Each would be game-changing on its own; together, they’re poised to reshape every aspect of society…

In my conversations about AI, I often highlight three main risks we need to consider. First is the rapid pace of economic disruption. AI could fundamentally transform the nature of work itself and affect jobs across most industries, including white-collar roles that have traditionally been safe from automation. Second is the control problem, or the difficulty of ensuring that AI systems remain aligned with human values and interests as they become more advanced. The third risk is that when a bad actor has access to AI, they become more powerful—and more capable of conducting cyber-attacks, creating biological weapons, even compromising national security…

So how do we achieve containment in this new reality? …he lays out an agenda that’s appropriately ambitious for the scale of the challenge—ranging from technical solutions (like building an emergency off switch for AI systems) to sweeping institutional changes, including new global treaties, modernized regulatory frameworks, and historic cooperation among governments, companies, and scientists.

When it comes to AI, economic productivity, and job loss, it seems obvious that the answer is to take a portion of the economic value added by AI and reinvest it in services and benefits for the people adversely affected. Easy peasy, right? And politically very difficult, at least in the U.S. “Value added tax” and “universal basic services and/or income” are terms you could use to describe such programs, but we need to come up with better words and strategies if we are going to successfully describe these concepts to voters and neutralize the powerful interests who so far have been successful obstacles to these practical, somewhat obvious policies. The advantage of a VAT is that the broadest possible tax base pays it in small increments over time rather than all at once, and therefore it is resented much less than filing an income tax return. If AI can truly increase economic productivity, then phasing in a VAT over time as productivity increases could be a way to increase quality of life for the greatest number of people possible. Throw in some automated counter-cyclical infrastructure spending along with the usual monetary policy adjustments, and you might have something. AI itself might be able to manage a system like this effectively in a way that is truly win-win for everyone.
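
The phase-in idea above could work mechanically like the toy rule below. To be clear, the rule, the rate, and the cap are all my own made-up numbers for illustration, not anything proposed by Gates or Suleyman.

```python
def phased_vat_rate(productivity_index, base_index=100.0, cap=0.15):
    """Toy phase-in rule (hypothetical numbers): raise the VAT rate by
    half of the fractional productivity gain above a baseline index,
    capped at 15%. No gain above baseline means no VAT."""
    gain = max(0.0, (productivity_index - base_index) / base_index)
    return min(cap, 0.5 * gain)

# If AI lifts a productivity index from 100 to 110, this rule phases in
# a 5% VAT; by 130 it hits the 15% cap.
print(round(phased_vat_rate(110), 3))  # 0.05
print(round(phased_vat_rate(130), 3))  # 0.15
```

The point of tying the rate to measured productivity is that the tax only bites when there are AI-driven gains to share, which is the political bargain the paragraph is gesturing at.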

It’s hard to be optimistic at this point in history about “historic cooperation among governments, companies, and scientists”. Still, maybe we have hit rock bottom on this and the coming trend will be up at some point.

The discussion of biological weapons and bad actors is chilling. Think of the ideologies that lead people to rationalize mass suicide and mass murder of civilians in events like 9/11 and the Oklahoma City bombing. The people who perpetrated those acts would certainly have used nuclear weapons if they had them handy. They will use biological weapons in the future if they can get their hands on them, and as the article points out it will be easier to get their hands on them and much harder to detect who has their hands on what. I don’t have an answer on this other than surveillance. Surveillance of AI, by AI perhaps? It sounds dystopian, but maybe that is what is needed – AI designed to be pro-human and pro-social looking for that needle in a haystack which is bad humans using bad AI to try to do something really terrible.