Tag Archives: artificial intelligence

December 2024 in Review

In December I reviewed a number of “best of” posts by others, so this is really a roundup of roundups.

Most frightening and/or depressing story: The annual “horizon scan” from the journal Trends in Ecology and Evolution lists three key issues having to do with tipping points: “melting sea ice, melting glaciers, and release of seabed carbon stores”.

Most hopeful story: I’m really drawing a blank on this one, folks. Since I reviewed a number of book lists posted by others, I’ll just pick one book title that sounds somewhat hopeful: Abolishing Fossil Fuels: Lessons from Movements That Won.

Most interesting story, that was not particularly frightening or hopeful, or perhaps was a mixture of both: Bill Gates recommended The Coming Wave as the best recent book to understand the unfolding and intertwined AI and biotechnology revolution. I also listed the 2024 Nobel prizes, which largely had to do with AI and biotechnology.

Project Syndicate 2024 book picks

Usually Project Syndicate tells me my free articles are used up, but they are letting me look at their “best books” roundup, I suppose because they are trying to sell me something and I should thank them for the privilege. Anyway, there are a few interesting ones here in the realm of socioeconomic and/or geopolitical non-fiction books. I don’t read too many books in this genre because I am a busy working parent and many of these are TLDR that would have worked fine as longish magazine articles. In fact, sometimes they are magazine articles that got popular and the authors/publishers are trying to cash in. Other times I suspect they are written by humanities professors who are paid by the pound. Nonetheless, here are some that caught my eye. As usual, I am more or less just riffing on the titles and haven’t actually read the books, so don’t take my thoughts as book reviews per se.

  • Amir Lebdioui, Survival of the Greenest: Economic Transformation in a Climate-conscious World. Some ideas on how developing countries could maybe lead the way on various green new deals? Sure, I want to believe in this…
  • Atossa Araxia Abrahamian, The Hidden Globe: How Wealth Hacks the World. “a fascinating tour of ‘extralegal zones’ of suspended sovereignty – an interconnected network of autonomous, business-friendly enclaves where conventional tax, labor, and immigration laws do not apply.”
  • Yanis Varoufakis, Technofeudalism: What Killed Capitalism. “a classic case of feudal rent defeating capitalist profit, of wealth extraction by those who already have it triumphing over the creation of new wealth by entrepreneurs.” Well, I want to believe in the tech companies because when it comes to U.S. comparative advantage, it’s kind of all we have left? (well, maybe biotech, but a lot of that is tied up with the predatory health insurance/finance industry which has captured our elected officials and is financially raping its own citizens and customers all day every day rather than creating new value.) I want to believe in Schumpeter’s basic formula: capitalism=competition=innovation=”the greatest wealth creating engine the world has ever known”. But if the tech industry and other modern big businesses are not capitalism at all but rather disguised feudalism, that sort of solves my problem of needing to believe in them. The problem being, what is left to believe in?
  • Shannon Vallor, The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking. AI and (lack of?) ethics. In my own interactions with AI, I have noticed that it can sometimes show more empathy and patience than any human being could consistently be expected to show. You can shout or curse at it and it responds with “I understand your frustration…” and tries to help you. Does it matter whether there are any emotions there as we understand the term? What seems to matter is whether the AI’s interests are aligned with mine. So that is probably what we need to think about.
  • William Ury, Possible: How We Survive (and Thrive) in an Age of Conflict. From a “world-renowned negotiation expert”. Well, negotiations are about figuring out what the interests of the parties are, where they are aligned, and finding something that makes everybody a little better off even if nobody is fully satisfied?
  • Malcolm Gladwell, Talking to Strangers: What We Should Know about the People We Don’t Know. I don’t know if this is a good book, or just time for Malcolm Gladwell to write a book… but there seems to be a negotiation, competition, empathy, and cooperation theme developing here. Per Schumpeter, pure capitalist competition is supposed to be sort of an inadvertent cooperation that lifts all boats, right? Dear capitalists – don’t bite the invisible hand that feeds you.
  • Robert D. Blackwill and Richard Fontaine, Lost Decade: The US Pivot to Asia and the Rise of Chinese Power. I just don’t want to believe that China is a military threat to the United States. Maybe I am naive, but I just don’t see how it can be in their interests to threaten us. On the other hand, I am 100% certain they feel threatened by us. So how about a little strategic empathy? Can we be less threatening and still deter conflict?
  • Jonathan Haidt, The Anxious Generation: How the Great Rewiring of Childhood Is Causing an Epidemic of Mental Illness. When I was a kid, it was dumb TV and high-sugar cereal that were supposedly rotting our brains. But I do see the screen addiction in my own kids, and I don’t deny the rise in mental illness (diagnoses, at least) among children. Still, the screens give my children access to the world’s information in a way I could only dream of at their age, and they will be interacting with screens some day in some capacity as part of the workforce. So I certainly don’t have the answers here, but I don’t think turning the screens off entirely can be the answer. Talking about what is on the screens sounds like a better path.
  • Kevin A. Young, Abolishing Fossil Fuels: Lessons from Movements That Won. I am thinking about the sudden spike in energy use when the AI search engines were turned on. I am thinking about the Kardashev scale, where a civilization’s level of advancement is measured by its energy use (more=more advanced). I am thinking about the Fermi paradox – is it possible that civilizations throughout the universe invent AI but then can’t come up with a viable way to power it without fouling their own nest? This doesn’t really make sense though, when half a century of investment and research in safe nuclear power could have gotten us to a place where we could be fueling the AI awakening more sustainably. The sun’s energy is virtually limitless on our human space and time scales, and solar panels in space are viable with current technology – we would just have had to invest in this and make it happen. Fusion is more speculative but there are some promising developments. I’m just saying, our human performance here on Earth may be pathetic and it seems like we may not make it long term, but if there are a billion civilizations out there similar to ours there must be some that got it right.
  • Michael Lewis, The Fifth Risk: Undoing Democracy. “the glaring absence of leadership and preparation during the transition to Donald Trump’s first administration, revealing how the US president-elect appointed incompetent and uninformed individuals to oversee America’s vast bureaucracy.” But this time around, it seems like we are getting even less competent, less informed clowns and fools, and only clowns and fools. Maybe the answer to the Fermi Paradox is that in all the billions of advanced civilizations that arise in the galaxy, a Donald Trump always arises at some point and shits the bed.
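On the Kardashev scale mentioned in the Abolishing Fossil Fuels item above, Carl Sagan’s interpolation formula ties a civilization’s “type” directly to its power use: K = (log10(P) − 6) / 10 for power P in watts. A quick back-of-the-envelope sketch (the ~2e13 W figure for humanity is only a rough approximation):

```python
import math

def kardashev_level(power_watts: float) -> float:
    """Sagan's interpolation: K = (log10(P) - 6) / 10, with P in watts."""
    return (math.log10(power_watts) - 6.0) / 10.0

# Humanity's total power consumption is very roughly 2e13 W.
humanity = kardashev_level(2e13)      # ~0.73 -- a Type 0 civilization
# Total solar power intercepted by Earth is roughly 1.7e17 W,
# which is why solar looks so attractive on these scales.
earth_sunlight = kardashev_level(1.7e17)
```

By this formula, a Type I civilization (K = 1.0) commands about 10^16 W, so we are a long way from “fouling our own nest” being a hard physical necessity rather than a choice.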

the other 2024 Nobel prizes

I already talked about the Nobel prize in economics. You can read about the others here.

  • physics: “foundational discoveries and inventions that enable machine learning with artificial neural networks”
  • chemistry: “computational protein design” and “protein structure prediction”
  • physiology or medicine: “the discovery of microRNA and its role in post-transcriptional gene regulation”
  • literature: poetry – yay humanities
  • peace: “efforts to achieve a world free of nuclear weapons and for demonstrating through witness testimony that nuclear weapons must never be used again”

So AI and AI-assisted biotechnology basically. And I hope the taboo against nuclear weapons ever being used is as strong as this suggests it is.

The Coming Wave

Bill Gates is starting to pump out some end-of-year book recommendations, and he identifies The Coming Wave by Mustafa Suleyman as his “favorite book about AI”. Here are a few quotes (from the Gates article):

…what sets his book apart from others is Mustafa’s insight that AI is only one part of an unprecedented convergence of scientific breakthroughs. Gene editing, DNA synthesis, and other advances in biotechnology are racing forward in parallel. As the title suggests, these changes are building like a wave far out at sea—invisible to many but gathering force. Each would be game-changing on its own; together, they’re poised to reshape every aspect of society…

In my conversations about AI, I often highlight three main risks we need to consider. First is the rapid pace of economic disruption. AI could fundamentally transform the nature of work itself and affect jobs across most industries, including white-collar roles that have traditionally been safe from automation. Second is the control problem, or the difficulty of ensuring that AI systems remain aligned with human values and interests as they become more advanced. The third risk is that when a bad actor has access to AI, they become more powerful—and more capable of conducting cyber-attacks, creating biological weapons, even compromising national security…

So how do we achieve containment in this new reality? …he lays out an agenda that’s appropriately ambitious for the scale of the challenge—ranging from technical solutions (like building an emergency off switch for AI systems) to sweeping institutional changes, including new global treaties, modernized regulatory frameworks, and historic cooperation among governments, companies, and scientists.

When it comes to AI, economic productivity, and job loss, it seems obvious that the answer is to take a portion of the economic value added by AI and reinvest it in services and benefits for the people adversely affected. Easy peasy right? And politically very difficult, at least in the U.S. “Value added tax” and “universal basic services and/or income” are words you could use to describe such programs, but we need to come up with better words and strategies if we are going to successfully describe these concepts to voters and neutralize the powerful interests who so far have been successful obstacles to these practical, somewhat obvious policies. The advantage of a VAT is the broadest possible tax base pays it in small increments over time rather than all at once, and therefore it is resented much less than filing an income tax return. If AI can truly increase economic productivity, then phasing in a VAT over time as productivity increases could be a way to increase quality of life for the greatest number of people possible. Throw in some automated counter-cyclical infrastructure spending along with the usual monetary policy adjustments, and you might have something. AI itself might be able to manage a system like this effectively in a way that is truly win-win for everyone.
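The phase-in-with-productivity idea above can be sketched as a simple rule. All of the rates, baselines, and caps here are hypothetical placeholders, not policy recommendations:

```python
def phased_vat_rate(productivity_index: float,
                    baseline_index: float = 100.0,
                    share_of_gain: float = 0.5,
                    cap: float = 0.20) -> float:
    """Phase in a VAT rate in step with measured productivity gains.

    Hypothetical rule: tax back `share_of_gain` of the fractional
    productivity gain over the baseline, capped at `cap`. If AI never
    delivers the productivity gains, the tax never phases in.
    """
    gain = max(0.0, productivity_index / baseline_index - 1.0)
    return min(cap, share_of_gain * gain)

# With a 10% productivity gain over baseline, the rate would be 5%.
rate = phased_vat_rate(110.0)
```

The appeal of a rule like this is that it is self-limiting: the burden only grows as the pie grows, which is the same logic as the counter-cyclical spending idea in the paragraph above.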

It’s hard to be optimistic at this point in history about “historic cooperation among governments, companies, and scientists”. Still, maybe we have hit rock bottom on this and the coming trend will be up at some point.

The discussion of biological weapons and bad actors is chilling. Think of the ideologies that lead people to rationalize mass suicide and mass murder of civilians in events like 9/11 and the Oklahoma City bombing. The people who perpetrated those acts would certainly have used nuclear weapons if they had them handy. They will use biological weapons in the future if they can get their hands on them, and as the article points out it will be easier to get their hands on them and much harder to detect who has their hands on what. I don’t have an answer on this other than surveillance. Surveillance of AI, by AI perhaps? It sounds dystopian, but maybe that is what is needed – AI designed to be pro-human and pro-social looking for that needle in a haystack which is bad humans using bad AI to try to do something really terrible.

October 2024 in Review

Only halfway through November – here is an “October in Review” post.

Most frightening and/or depressing story: When it comes to the #1 climate change impact on ordinary people, it’s the food stupid. (Dear reader, I’m not calling you stupid, and I don’t consider myself stupid, but somehow we individually intelligent humans are all managing to be stupid together.) This is the shit that is probably going to hit the fan first while we are shouting stupid slogans like “drill baby drill” (okay, if you are cheering when you hear a politician shout that you might not be stupid, but you are at least uninformed.)

Most hopeful story: AI, at least in theory, should be able to help us manage physical assets like buildings and infrastructure more efficiently. Humans still need to have some up-front vision of what we would like our infrastructure systems to look like in the long term, but then AI should be able to help us make optimal repair-replace-upgrade-abandon decisions that nudge the system toward the vision over time as individual components wear out.

Most interesting story, that was not particularly frightening or hopeful, or perhaps was a mixture of both: Some explanations proposed for the very high cost of building infrastructure in the U.S. are (1) lack of competition in the construction industry and (2) political fragmentation leading to many relatively small agencies doing many relatively small projects. Some logical solutions then are to encourage the formation of more firms in the U.S., allow foreign firms and foreign workers to compete (hardly consistent with the current political climate!), and consolidate projects into a smaller number of much larger ones where economies of scale can be realized. There is some tension though between scale and competition, because the larger and more complex a project gets, the fewer bidders it will tend to attract who are willing to take the risk.

September 2024 in Review

I was sitting down to do my “October in Review” post and realized I never got around to September. So better late than never. I’m writing this on November 9, 2024 after the U.S. election but I’ll try to give U.S. politics a rest in this post (update: I almost succeeded although I couldn’t resist an interesting point about the U.S. Constitution).

Most frightening and/or depressing story: There is nothing on Earth more frightening than nuclear weapons. China has scrapped its “minimal deterrent” nuclear doctrine in favor of massively scaling up its arsenal to compete with the U.S. and Russian arsenals, which are also ramping up. They do still have an official “no first strike” policy. The U.S. by contrast has an arrogant foreign policy.

Most hopeful story: AI should be able to improve traffic management in cities, although early ideas on this front are not very creative.

Most interesting story, that was not particularly frightening or hopeful, or perhaps was a mixture of both: Countries around the world update their constitutions about every 20 years on average. They have organized, legal processes for doing this spelled out in the constitutions themselves. The U.S. constitution is considered the world’s most difficult constitution to update and modernize.

AI and asset management

This article is about AI and predictive building maintenance. It also reads like an IBM corporate press release, but nonetheless it sparks some interesting thoughts. Recently I was at a conference where a friend of mine was on stage and was asked what technologies would be most important for the future of public infrastructure (water infrastructure, in the case of this particular conference). AI and asset management came to my mind, and I willed my friend to also think of this. Alas, he did not. Now, if I had been up there would I have been able to articulate my thoughts clearly on the spot? Probably not, but with the benefit of a few minutes to think here is what I fantasize I might have said.

Basically, AI should be pretty good at asset management. Given good data on assets and their ages, it should be able to identify assets (we’re talking physical assets here, like pipes or electrical equipment, or even green infrastructure like street trees) that are nearing the end of their service life and likely to fail in the engineering sense of no longer serving their intended purpose efficiently. Or, somewhat obviously, when things really have failed AI can help get that information to the attention of whoever can actually do something about it. Well, I still think humans have to do the up-front planning and have some vision for what they would like the infrastructure system to look like 20, 30, 50 years down the line. But then, AI should really be able to help with those repair-replace-upgrade-abandon decisions, so that as things wear out the system is slowly nudged in the direction of that long-term vision, all while minimizing life cycle cost and balancing whatever other objectives the owners or stakeholders might have. This all looks good on paper and is messy to do with a mish-mash of real-world governments and institutions and companies, but having the vision is a start.
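At its simplest, the repair-replace-upgrade-abandon decision is an expected-cost comparison. Here is a toy sketch of that logic; the asset fields, the failure-probability curve, and every cost number are hypothetical illustrations, not a real asset-management model:

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    age: int                 # years in service
    service_life: int        # expected service life in years
    repair_cost: float
    replace_cost: float
    failure_cost: float      # cost of an unplanned in-service failure

def failure_probability(asset: Asset) -> float:
    """Crude stand-in for a condition model: risk rises with relative age."""
    return min(1.0, (asset.age / asset.service_life) ** 2)

def recommend(asset: Asset) -> str:
    """Pick the action with the lowest expected near-term cost."""
    p = failure_probability(asset)
    expected = {
        "do-nothing": p * asset.failure_cost,
        # Assume repair roughly halves the remaining failure risk.
        "repair": asset.repair_cost + 0.5 * p * asset.failure_cost,
        "replace": asset.replace_cost,
    }
    return min(expected, key=expected.get)

main = Asset("water main segment 12", age=45, service_life=50,
             repair_cost=20_000, replace_cost=80_000, failure_cost=500_000)
```

A real system would layer on the long-term vision: instead of replacing like-for-like, the “replace” option becomes “replace with whatever the 50-year plan says should be there,” which is exactly the nudging described above.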

AI-controlled stop lights

Boston and other cities have pilot tested AI-controlled stop lights and found that they can reduce “stop and go traffic”. This seems encouraging to me, but not very imaginative, and I hope this is not the end of the story. Stop lights are such an old technology, and it seems to me that with modern LED lights and screens we should be able to do much better. Each traffic lane, including lanes dedicated to light and unmotorized vehicles, needs its own signals. Let’s get rid of the colored circles and make every single traffic light a series of arrows, so that we can control who is allowed to go forward entirely separately from who is allowed to make a turn from each lane. Pedestrians also need their own signals, and the heavy/highway vehicles, light/unmotorized vehicles, and pedestrians must never, ever have signals that put them in the same space at the same time. I won’t buy the idea that this would be “too expensive” – I happen to be traveling in a middle income country at the moment and I see a lot more arrows and countdown timers on traffic lights compared to what we have in the U.S. (although the jurisdiction I am in is no traffic safety utopia for sure.) If this sounds like it would be too inefficient with today’s system, this is where AI should come in and make it efficient and safe, at each individual intersection and for the system as a whole.
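The never-in-the-same-space-at-the-same-time rule above is essentially a conflict matrix over lane-level movements: safety is a hard constraint, and the AI only optimizes timing within it. A toy sketch (the movement names and conflict pairs are hypothetical):

```python
# Each movement is a lane-level signal the controller can turn green.
MOVEMENTS = ["north-through", "north-left", "south-through",
             "east-through", "bike-north", "ped-east-crossing"]

# Pairs of movements that occupy the same space and must never be
# green at the same time (a hypothetical conflict set; a real one
# comes from the intersection's geometry).
CONFLICTS = {
    frozenset({"north-left", "south-through"}),
    frozenset({"east-through", "ped-east-crossing"}),
    frozenset({"north-left", "bike-north"}),
}

def is_safe_phase(green: set[str]) -> bool:
    """A phase is safe iff no conflicting pair is green together."""
    return all(not pair <= green for pair in CONFLICTS)
```

An AI controller would then search only over safe phases for the timing that minimizes delay – the point being that safety is encoded as a constraint the optimizer cannot trade away, not as one more weight in an objective function.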

Another level of science fiction would dispense with physical lights and screens altogether, delivering the signals through our vehicle windshields, augmented reality glasses, headphones, etc. Vehicles that are entirely computer controlled, of course, can just get their signals from cellular or wireless networks. We are not there yet, at least when it comes to widespread access/adoption of these technologies, but the technologies themselves either exist or are on the horizon.

July 2024 in Review

Most frightening and/or depressing story: Joe Biden’s depressing decline in the international spotlight, and our failed political system that could let such a thing happen. Not much more I can say about it that has not been said. The “election trifecta” – non-partisan, single ballot primaries; ranked-choice general elections; and non-partisan redistricting – is one promising proposal for improving this system.

Most hopeful story: A universal flu vaccine may be close, and the same technology might work for other diseases like Covid, HIV, and tuberculosis.

Most interesting story, that was not particularly frightening or hopeful, or perhaps was a mixture of both: Maybe we could replace Congress with AI agents working tirelessly on behalf of us voters. Or maybe we could just have AI agents tirelessly paying attention to what the humans we have elected are doing, and communicating in both directions.

AI Biden

I can’t tell if this post is serious or not, and this is the mark of bad satire.

Despite an ambitious and widely praised first term in office, he is currently trailing in polls to a man who incited an insurrection and was recently convicted on 34 felony counts. Something needs to change, and much to the chagrin of West Wing fanatics in the beltway, it won’t be the Democrats’ 2024 nominee. Modern technology offers a clear solution. AI can be used to polish how the president comes across, allowing voters to focus on his substance. How many times have we heard voters and pundits alike gripe that “Biden would be the perfect candidate if he were just 10 years younger?” With modern technology, this exact deliverable is possible.

huffpost.com

I sincerely hope this is intended as irony. I have thought, however, about an AI-based experiment in direct democracy. In this concept, an AI agent would represent me, the individual citizen. It could spend time patiently interviewing me about my views and opinions, and then it could go to Congress and negotiate and vote on legislation with the AI agents of the 328 million other Americans. If I have the time, I could take over control of the agent whenever I want, then hand the keys back over to it whenever I want.

A watered down version of this could be AI constantly talking to an elected representative’s constituents about their views on various issues and the content of proposed legislation, patiently explaining to the elected representative how his or her actual constituents would like him or her to vote (the pronoun thing is exhausting, and yes I know I have only scratched the surface of potential pronouns), and patiently explaining to the constituents how it all turned out. Imagine if a politician made it a campaign pledge to always vote according to the wishes of a majority of their constituents (as interpreted through the AI agents) no matter what.

I thought of this a long time ago, with the idea that it could be done through polls or through an app, but the natural language AIs could make this much more practical and achievable by just chatting with humans for a few minutes each day and providing constant feedback.

Now, of course the problem with direct democracy is always that 51% of the people might want to exterminate or enslave the other 49%, which is one way to achieve consensus but not what we are looking for. So you obviously have to couple this with protections for the rights of minorities and human rights in general.
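The vote-the-majority pledge, with the minority-rights safeguard above acting as a hard filter rather than one more opinion to tally, could be sketched like this (all names and the yes/no framing are hypothetical simplifications):

```python
from collections import Counter

def agent_vote(constituent_views: list[str],
               violates_rights: bool = False) -> str:
    """Vote as the majority of constituents wishes, unless the measure
    fails a rights check, in which case abstain and flag it.

    `constituent_views` holds one "yes"/"no" per constituent, as
    gathered by each person's AI agent through those daily chats.
    """
    if violates_rights:
        # Minority protections trump the majority -- the 51%/49%
        # problem from the paragraph above is screened out here.
        return "flag-for-review"
    tally = Counter(constituent_views)
    return "yes" if tally["yes"] > tally["no"] else "no"

district = ["yes"] * 51 + ["no"] * 49
```

The interesting design question is who writes the rights check – a constitution interpreted by courts, presumably, rather than the agents themselves – but the structure at least makes the safeguard explicit instead of hoping the majority behaves.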