Tag Archives: artificial intelligence

weather forecasting

This is interesting. It is not 100% clear to me what the measure of accuracy is below, but the plot shows how much weather forecasting has improved over the last 50 years or so. A 3-5 day forecast is highly accurate now, and the 3- and 5-day forecasts are not that different. It’s interesting to me that there is such a large drop-off in accuracy between a 7- and a 10-day forecast – that is not necessarily intuitive, but it is useful to know even in everyday life. A 10-day forecast is basically a coin flip, while if you check back 3 days later you are closer to 80/20 odds. This is based on pressure measured at a certain height, I think, so it doesn’t necessarily mean forecasts of precipitation depth and intensity, rain vs. snow vs. ice, thunder and lightning, tornadoes, etc. are going to be as accurate as this implies.

Our World in Data
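If this chart is the one I think it is, the measure behind it is the “anomaly correlation coefficient”: how well the forecast’s departures from climatology correlate with the observed departures, for the height of the 500 hPa pressure surface. Here is a tiny sketch of the calculation with made-up placeholder data (the real thing uses gridded fields from models like ECMWF’s):

```python
import numpy as np

# Anomaly correlation coefficient (ACC): correlate forecast anomalies
# (forecast minus climatology) with observed anomalies. 100% is a perfect
# forecast; below roughly 60% a forecast stops being useful.
def acc(forecast, observed, climatology):
    f_anom = forecast - climatology
    o_anom = observed - climatology
    return 100 * np.sum(f_anom * o_anom) / np.sqrt(
        np.sum(f_anom ** 2) * np.sum(o_anom ** 2))

# Made-up placeholder data standing in for 500 hPa height fields.
rng = np.random.default_rng(0)
climatology = np.full(100, 5500.0)               # long-term average, meters
observed = climatology + rng.normal(0, 50, 100)  # what actually happened
short_range = observed + rng.normal(0, 25, 100)  # small forecast errors
long_range = observed + rng.normal(0, 80, 100)   # big forecast errors

print(f"3-day-like forecast ACC:  {acc(short_range, observed, climatology):.0f}%")
print(f"10-day-like forecast ACC: {acc(long_range, observed, climatology):.0f}%")
```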

There is some suggestion that AI (meaning purely statistical approaches, or AI choosing any blend of statistics and physics it wants?) might make forecasting much faster, cheaper, and easier yet again.

2023 in Review

Warning: This post is not 100% family friendly and profanity free. 2023 was just not that kind of year!

I’ll start with a personal note. After 24+ years of engineering consulting practice, I have decided to leave the world of full-time professional employment and go back to school for a bit. This is some combination of mid-life crisis and post-Covid working parent burnout. I spent a lot of time thinking about it, ran all the financial numbers, and decided I can swing it for a year or so without major implications for my eventual retirement 15-ish years down the line. So, gentle reader, you too can do this sort of thing if you want to. Just be patient, plan, prepare, do the math, and be rational about it.

The Year’s Posts

Stories I picked as “most frightening or depressing”:

  • JANUARY: How about a roundup of awful things, like the corrupt illegitimate U.S. Supreme Court, ongoing grisly wars, the CIA killed JFK after all (?), nuclear proliferation, ethnic cleansing, mass incarceration, Guantanamo Bay, and all talk no walk on climate change? And let’s hope there is a special circle of hell waiting for propaganda artists who worked for Exxon.
  • FEBRUARY: Pfizer says they are not doing gain of function research on potential extinction viruses. But they totally could if they wanted to. And this at a time when the “lab leak hypothesis” is peeking out from the headlines again. I also became concerned about bird flu, then managed to convince myself that maybe it is not a huge risk at the moment, but definitely a significant risk over time.
  • MARCH: The Covid-19 “lab leak hypothesis” is still out there. Is this even news? I’m not sure. But what is frightening to me is that deadly natural and engineered pathogens are being worked with in labs, and they almost inevitably will escape or be released intentionally to threaten us all at some point. It’s like nuclear proliferation, accidents, and terrorism – we have had a lot of near misses and a lot of luck over the last 70 years or so. Can we afford to count on the same luck with biological threats (not to mention nuclear threats)? I think not. Are we doing enough as a civilization to mitigate this civilization-ending threat? Almost certainly, obviously not. What are we doing? What are we thinking?
  • APRIL: Chemicals, they’re everywhere! And there were 20,000 accidents involving them in 2022 that caused injuries or deaths. Some are useful, some are risky, and some are both. We could do a better job handling and transporting them, we could get rid of the truly useless and dangerous ones, and we could work harder on finding substitutes for the useful but dangerous ones. And we could get rid of a corrupt political system where chemical companies pay the cost of running for office and then reward candidates who say and do what they are told.
  • MAY: There are more “nuclear capable states” than I thought.
  • JUNE: Before 2007, Americans bought around 7 million guns per year. By 2016, it was around 17 million. In 2020, it was 23 million. Those are the facts and figures. Now for my opinion: no matter how responsible the vast majority of gun owners are, you are going to have a lot more suicides, homicides, and fatal accidents with so many guns around. And sure enough, firearms are now the leading cause of death in children, according to the CDC. That makes me sick to think about.
  • JULY: Citizens United. Seriously, this might be the moment the United States of America jumped the shark. I’ve argued in the past that it was Bush v. Gore. But what blindingly obvious characteristic do these two things have in common? THE CORRUPT ILLEGITIMATE UNITED STATES SUPREME COURT!!!
  • AUGUST: Immigration pressure and anti-immigration politics are already a problem in the U.S. and Europe, and climate change is going to make it worse. The 2023 WEF Global Risks Report agrees that “large scale involuntary migration” is going to be up there as an issue. We should not be angry at immigrants; we should be angry at Exxon and the rest of the energy industry, which made an intentional choice not only to directly cause all this but to prevent governments from even understanding the problem, let alone doing anything to solve it. We should be very, very angry! Are there any talented politicians out there who know how to stoke anger and channel it for positive change, or do you only know how to stoke evil, genocidal impulses?
  • SEPTEMBER: “the accumulation of physical and knowledge capital to substitute natural resources cannot guarantee green growth”. Green growth, in my own words, is the state where technological innovation allows increased human activity without a corresponding increase in environmental impact. In other words, this article concludes that technological innovation may not be able to save us. This would be bad, because green growth is the happy story where our civilization has a “soft landing” rather than a major course correction or a major disaster. There are some signs that human population growth may turn the corner (i.e., go from slowing down to actually decreasing in absolute numbers) relatively soon. Based on this, I speculated that “by focusing on per-capita wealth and income as a metric, rather than total national wealth and income, we can try to come up with ways to improve the quality of human lives rather than just increasing total money spent, activity, and environmental impact ceaselessly. What would this mean for “markets”? I’m not sure, but if we can accelerate productivity growth, and spread the gains fairly among the shrinking pool of humans, I don’t see why it has to be so bad.”
  • OCTOBER: Israel-Palestine. From the long-term grind of the failure to make peace and respect human rights, to the acute horror causing so much human suffering and death at this moment, to the specter of an Israeli and/or U.S. attack on Iran. It’s frightening and depressing – but of course it is not my feelings that matter here, but all the people who are suffering and going to suffer horribly because of this. The most positive thing I can think of to say is that when the dust settles, possibly years from now, maybe cooler heads will prevail on all sides. Honorable mention for most frightening story is the 2024 U.S. Presidential election starting to get more real – I am sure I and everyone else will have more to say about this in the coming (exactly one year as I write this on November 5, 2023) year!
  • NOVEMBER: An economic model that underlies a lot of climate policy may be too conservative. I don’t think this matters much because the world is doing too little, too late even according to the conservative model. Meanwhile, the ice shelves holding back Greenland are in worse shape than previously thought.
  • DECEMBER: Migration pressure and right wing politics create a toxic feedback loop practically everywhere in the world.

Stories I picked as “most hopeful”:

  • JANUARY: Bill Gates says a gene therapy-based cure for HIV could be 10-15 years away.
  • FEBRUARY:  Jimmy Carter is still alive as I write this. The vision for peace he laid out in his 2002 Nobel Peace Prize acceptance speech is well worth a read today. “To suggest that war can prevent war is a base play on words and a despicable form of warmongering. The objective of any who sincerely believe in peace clearly must be to exhaust every honorable recourse in the effort to save the peace. The world has had ample evidence that war begets only conditions that beget further war.”
  • MARCH: Just stop your motor vehicle and let elephants cross the road when and where they want to. Seriously, don’t mess with elephants.
  • APRIL: There has been some progress on phages, viruses intentionally designed to kill antibiotic-resistant bacteria. Also, anti-aging pills may be around the corner.
  • MAY: The U.S. Congress is ponying up $31 billion to give Houston a chance at a future. Many more coastal cities will need to be protected from sea level rise and intensifying storms. Now we will see if the U.S. can do coastal protection right (just ask the Dutch or Danish, no need to reinvent anything), and how many of the coastal cities it will get to before it is too late.
  • JUNE: It makes a lot of sense to tax land based on its potential developed value, whether it has been developed to that level or not. This discourages land speculation and vacant or abandoned property in cities, while raising revenue that can offset other taxes.
  • JULY: There is a tiny glimmer of hope that Americans might actually value more walkable communities. And this is also a tiny glimmer of hope for the stability of our global climate, driver/bicyclist/pedestrian injuries and deaths, and the gruesome toll of obesity and diabetes. But it is only a glimmer.
  • AUGUST: Peak natural gas demand could happen by 2030, with the shift being to nuclear and renewables.
  • SEPTEMBER: Autonomous vehicles kill and maim far, far fewer human beings than vehicles driven by humans. I consider this a happy story no matter how much the media hypes each accident autonomous vehicles are involved in, while ignoring the tens of thousands of Americans and millions of human beings snuffed out each year by human drivers. I think at some point, insurance companies will start to agree with me and hike premiums on human drivers through the roof. Autonomous parking also has a huge potential to free up space in our urban areas.
  • OCTOBER: Flesh-eating bacteria are becoming slightly more common, but seriously, you are not that likely to get them. And this really was the most positive statement I could come up with this month!
  • NOVEMBER: Small modular nuclear reactors have been permitted for the first time in the United States, although it looks like the specific project that was permitted will not go through. Meanwhile construction of new nuclear weapons is accelerating (sorry, not hopeful, but I couldn’t help pointing out the contrast…)
  • DECEMBER: I mused about ways to create an early warning system that things in the world or a given country are about to go seriously wrong: “an analysis of government budgets, financial markets, and some demographic/migration data to see where various governments’ priorities lie relative to what their priorities probably should be to successfully address long-term challenges, and their likely ability to bounce back from various types and magnitudes of shock. You could probably develop some kind of risk index at the national and global levels based on this.” Not all that hopeful, you say? Well, I say it fits the mood as we end a sour year.

Stories I picked as “most interesting, not particularly frightening or hopeful, or perhaps a mixture of both”:

  • JANUARY: Genetically engineered beating pig hearts have been sewn into dead human bodies. More than once.
  • FEBRUARY: It was slim pickings this month, but Jupiter affects the Sun’s orbit, just a little bit.
  • MARCH: Chickie Nobs have arrived!
  • APRIL: I had heard the story of the Google engineer who was fired for publicly releasing a conversation with LaMDA, a Google AI. But I hadn’t read the conversation. Well, here it is.
  • MAY: Peter Turchin’s new book proposes several indicators presaging political instability: “stagnating or declining real wages, a growing gap between rich and poor, overproduction of young graduates with advanced degrees, declining public trust, and exploding public debt”. I found myself puzzled by the “overproduction of young graduates” part, and actually had a brief email exchange with Peter Turchin himself, which I very much appreciated! Anyway, he said the problem is not education per se but “credentialism”. I have to think some more about this, but I suppose the idea is that education, like health, wealth, and almost everything else, is not equally distributed but is being hoarded by a particular class which is not contributing its fair share. These are my words, not Peter’s, and he might or might not agree with my characterization here.
  • JUNE: The U.S. may have alien spacecraft at Area 51 after all. Or, and this is purely my speculation, they might have discovered anti-gravity and want to throw everybody else off the scent.
  • JULY: We are all susceptible to the “end of history effect” in that we tend to assume our personalities will not change in the future, when in fact they almost certainly will. So one way to make decisions is to imagine how a few different possible future yous might look back on them.
  • AUGUST: There are a number of theories on why “western elites” have not been (perceived to be) effective in responding to crises in recent years and decades. Many have to do with institutional power dynamics, where the incentives of the individual to gain power within the institution do not align with the stated goals of the institution. Like, for example, not killing everyone. The possible silver lining would be that better institutions could be designed where incentives align. I have an alternate, or possibly complementary, theory that there has been a decline in systems thinking and moral thinking. Our leaders aren’t educated to see the systems, or to think enough about whether their decisions are on the side of right or wrong.
  • SEPTEMBER: Venice has completed a major storm surge barrier project.
  • OCTOBER: The generally accepted story of the “green revolution“, that humanity saved itself from widespread famine in the face of population growth by learning to dump massive quantities of fossil fuel-derived fertilizer on farm fields, may not be fully true.
  • NOVEMBER: India somehow manages to maintain diplomatic relations with Palestine (which they recognize as a state along with 138 other UN members), Israel, and Iran at the same time.
  • DECEMBER: Did an AI named “Q Star” wake up and become super-intelligent this month?

And Now, My Brilliant Analytical Synthesis!

Climate Change. Well really, I’m likely to just say things now I have said many times before. The climate change shit is really starting to hit the fan. Our largely coastal civilization, and the food supply that sustains it, are at risk. The shit we can see hitting the fan right now is the result of emissions years if not decades ago, and since then we have continued not only to emit too much, but to emit at an increasing rate – and we are still doing so today. This means that even if we stopped over-emitting right now and going forward, the crisis would continue to get worse for some time before it eventually got better. And we are not stopping. We are already seeing the beginnings of massive population movements fueling a downward spiral of nationalist and outright racist geopolitics, which makes it even harder to come together and address our critical planetary carrying capacity issue in a rational manner. We are not only seeing “the return of great power competition”, we are insanely patting ourselves on the back for aiding and abetting it, and piling nuclear proliferation on top of it.

Is a soft landing possible in this situation? I am not going to tell you I think it is, or even that our species and the cowards who pass for our leadership have any hope of making it happen. About the best we can hope for is some kind of serious but manageable collapse or crisis that brings us to our senses and allows some real leaders to emerge. To throw out one idea, maybe we could come to a new era of arms reduction for the major nuclear powers, and halts to proliferation for all the emerging nuclear powers, in exchange for civilian nuclear power for everyone who wants it, all under a strict international control and inspection regime. This would begin to address two existential risks (nuclear war and climate change) at once. Or maybe, just maybe, we are on the verge of a massive acceleration of technological progress that could make our problems easier to solve. Maybe, but new technology also comes with new risks, and we shouldn’t put all our eggs in this basket. Besides, the singularity may be nearing, but it still feels a decade or so away to me.

UFOs. Aside from all of that, maybe the weirdest single thing going on in the world right now is the UFOs. There seems to be no real controversy about whether they exist – they are out there. They are flying around, and if they are not defying the laws of physics as we know them, they are defying any technology we know of that is consistent with those laws. What this logically leads to is that somebody (or some intelligent entity) knows something about the laws of physics that the rest of us do not. Einstein explained how gravity behaves, but he wasn’t able to fully explain what gravity is, let alone how or why it came to be the way it is. Einstein’s predictions have since been confirmed by incontrovertible evidence, and the predictions of quantum theory have also been incontrovertibly confirmed, but the two theories are still at odds and in need of unification despite the efforts of the most brilliant minds today. But… are the most brilliant minds today operating in the open, or are they behind closed doors at private defense contractors, subject to censorship on national security grounds? If there has been a major discovery, would it see the light of day or would it be suppressed? I have no information here; I am just saying this is a narrative that would fit the evidence, and I don’t see other plausible narratives that fit it. Why would aliens be playing with relatively easily discoverable toys in our atmosphere while we have discovered no radio-signal evidence and no evidence of their existence in our telescopes? Those things would be very hard if not impossible to cover up, so I think we would know. The Fermi Paradox persists.

Artificial Intelligence. I tend to think the AI hype is ahead of the reality. Nonetheless, the reality is coming. It will probably seize control without our noticing, after the hype has passed. Is it possible we could look back in a decade and identify 2023 as the year it woke up? There were a couple of strange stories in 2023. One was a Google engineer getting fired after publicly declaring his belief that a Google AI had become conscious. The other was the “ethics board” of a major corporation firing its CEO in relation to a rumored artificial general intelligence breakthrough. Only time will tell what really happened in these cases (if it is ever made public), but one thing we can say is that technological progress does not usually go backwards.

Synthetic Biology. It’s pretty clear we are now in the age of synthetic biology breakthroughs that was hyped over the last few decades, and the media and publics of the world are predictably yawning and ignoring it. But we are hearing about vaccines and cures on the horizon for diseases that have long plagued us, genetically engineered organs, synthetic meat, engineered viruses to fight antibiotic-resistant bacteria, and anti-aging pills, among other things. And then there is the specter of lab accidents and biological weapons, which might be the single scariest thing in the world today out of all the terrifying things I have mentioned in this post.

2024 U.S. Presidential Election. Ugh, I’m still not ready to think about it, but it is going to happen whether I am ready to think about it or not. I’ll get around to thinking and writing about it soon, I’m sure.

Happy 2024!

December 2023 in Review

Most frightening and/or depressing story: Migration pressure and right wing politics create a toxic feedback loop practically everywhere in the world.

Most hopeful story: I mused about ways to create an early warning system that things in the world or a given country are about to go seriously wrong: “an analysis of government budgets, financial markets, and some demographic/migration data to see where various governments’ priorities lie relative to what their priorities probably should be to successfully address long-term challenges, and their likely ability to bounce back from various types and magnitudes of shock. You could probably develop some kind of risk index at the national and global levels based on this.” Not all that hopeful, you say? Well, I say it fits the mood as we end a sour year.
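As a very rough sketch of what such a risk index might look like mechanically: normalize each indicator, weight it, and sum. Every indicator name, weight, and value below is made up purely to illustrate the shape of the calculation, not a claim about what the right inputs are:

```python
# Toy national risk index: normalize each indicator to 0-1 (1 = riskier),
# then take a weighted sum. All names, weights, and values are invented
# for illustration only.
INDICATORS = {
    # name: (current value, worst plausible value, weight)
    "debt_service_share_of_budget": (0.25, 0.50, 0.3),
    "long_term_investment_share":   (0.05, 0.20, 0.3),  # low = risky
    "bond_spread_vs_benchmark":     (0.02, 0.10, 0.2),
    "net_emigration_rate":          (0.01, 0.05, 0.2),
}

def risk_index(indicators):
    score = 0.0
    for name, (value, worst, weight) in indicators.items():
        normalized = min(value / worst, 1.0)
        if name == "long_term_investment_share":
            normalized = 1.0 - normalized  # more long-term investment = less risk
        score += weight * normalized
    return score

print(f"risk index: {risk_index(INDICATORS):.2f} (0 = low risk, 1 = high risk)")
```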

Most interesting story, that was not particularly frightening or hopeful, or perhaps was a mixture of both: Did an AI named “Q Star” wake up and become super-intelligent this month?

AI “coscientist”

The idea of computers and robots greatly accelerating the rate of progress in chemical and drug research is not science fiction.

Autonomous chemical research with large language models

Transformer-based large language models are making significant strides in various fields, such as natural language processing, biology, chemistry and computer programming. Here, we show the development and capabilities of Coscientist, an artificial intelligence system driven by GPT-4 that autonomously designs, plans and performs complex experiments by incorporating large language models empowered by tools such as internet and documentation search, code execution and experimental automation. Coscientist showcases its potential for accelerating research across six diverse tasks, including the successful reaction optimization of palladium-catalysed cross-couplings, while exhibiting advanced capabilities for (semi-)autonomous experimental design and execution. Our findings demonstrate the versatility, efficacy and explainability of artificial intelligence systems like Coscientist in advancing research.

Nature

It seems to me that the speed limit here is not anything imposed by the computers and robots, but your ability to measure progress and give the computers and robots feedback. With chemicals, you could tell the robots to find a combination of compounds that will do XYZ, where XYZ is something you can measure, like an amount of energy or a color. With drugs, your issue could be how to test the results to see if they are working. If you test them on a computer model, your ability to measure depends on how good the computer model is. Let’s say you wanted to breed a super-intelligent mouse. There should be ways to measure the intelligence of a mouse. So you could take 100 mice, test them all, find the two smartest, and create a new batch of 100 embryos from the smartest male and female (or maybe at some point gender is no longer a limitation?). Now you have to wait for those 100 embryos to grow up to the point where you can repeat the process. The limiting step here would be how long it takes the mice to develop to the point where they can be tested. If they could somehow be tested at the embryo stage, maybe you could run a thousand generations of directed mouse evolution in a matter of hours or days. Well, then, you can let the super-intelligent mice design the next round of robots.
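Just to make the logic of that loop concrete, here is a toy sketch. Everything in it is an assumption for illustration: “intelligence” is a single heritable number, breeding averages the parents plus random variation, and testing is free. The point is that the loop itself is trivial; the expensive step is the measurement, which is exactly where generation time bites:

```python
import random

POP_SIZE = 100
GENERATIONS = 1000

def offspring(parent_a: float, parent_b: float) -> float:
    """Child trait = midparent value plus random variation."""
    return (parent_a + parent_b) / 2 + random.gauss(0, 1.0)

# Start with a population scored like an IQ test: mean 100, sd 15.
population = [random.gauss(100, 15) for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    # The limiting step in the real world: testing every individual.
    # Here it is instantaneous; with live mice it takes months per generation.
    smartest_two = sorted(population, reverse=True)[:2]
    population = [offspring(*smartest_two) for _ in range(POP_SIZE)]

mean = sum(population) / POP_SIZE
print(f"mean 'intelligence' after {GENERATIONS} generations: {mean:.1f}")
```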

Bill Gates on 2023

Bill Gates’s year-end retrospective is kind of rambling but here are a few points I pulled out:

  1. Lots more vaccines were administered to children in developing countries using new technologies and new delivery methods. This has made a big difference in child mortality, and that is always a happy thing. He doesn’t really go into details on the new technologies, but I am imagining things like nasal sprays rather than needles, and vaccines that don’t require refrigeration (or as much of it). And sometimes we just figure out how to make familiar things much cheaper, and this can make a huge difference. This illustrates that important technologies don’t have to seem extremely complicated and high-tech to have a big impact.
  2. On the AI front, he says it will accelerate drug development, including solutions for antibiotic resistance. I don’t doubt this, although I suspect the hype has gotten a bit ahead of the rollout. So I would look for this over the next half-decade or so rather than expecting it to burst on the scene in 2024. Bill actually predicts “18–24 months away from significant levels of AI use by the general population”.
  3. He talks about AI tutors for students. I don’t want to be a Luddite, but I am concerned this will just mean fewer teachers per student, which would be bad.
  4. Maybe AI can just get our medical records under control. This would be nice. Transparent, common protocols for how medical records should be formatted, stored, and shared could also do this, though. I can hope that someday robots will constantly clean up and organize my messy house as I throw my things everywhere, or I could organize my house (which would take a big effort once) and keep it that way (which would take small, disciplined daily efforts).
  5. Gut microbiome-based medicine. Sounds good, I guess. Then again, whenever we try to replace nutritious whole foods with highly manufactured alternatives (vitamin pills, baby formula) we tend to decide later that we should have stuck with the whole foods.
  6. “a major shift toward overall acceptance of nuclear” power. Well, it’s been pretty obvious to me for a long time that this had to happen, but maybe the world is catching up. Nuclear could certainly have been the bridge fuel to renewables if we had fully adopted it decades ago. The question now is whether it still makes sense, given nuclear’s incredibly long lead times, the fact that any technology is nearly obsolete by the time it is up and running, and the current pace of renewables. I definitely think we should put some eggs in this basket though.
  7. He mentions the fusion breakthrough at Lawrence Livermore about a year ago. It’s been a year and we haven’t heard much more – is the time to refine and roll out that technology going to be measured in years, decades, or never?
  8. He talks about the need for more investment in electric grids and transmission lines. Yes, this is unsexy but really needs to happen. Will it?

Q (the AI)

“Q star” is very badly named, in my view, given the “Q anon” craze it has absolutely nothing to do with. Then again, the idea of an AI building an online cult with human followers does not seem all that far fetched.

Anyway, Gizmodo has an interesting article. Gizmodo does not restrict itself to traditional journalistic practices, such as articles free of profanity.

Some have speculated that the program might (because of its name) have something to do with Q-learning, a form of machine learning. So, yeah, what is Q-learning, and how might it apply to OpenAI’s secretive program? …

Finally, there’s reinforced learning, or RL, which is a category of ML that incentivizes an AI program to achieve a goal within a specific environment. Q-learning is a subcategory of reinforced learning. In RL, researchers treat AI agents sort of like a dog that they’re trying to train. Programs are “rewarded” if they take certain actions to affect certain outcomes and are penalized if they take others. In this way, the program is effectively “trained” to seek the most optimized outcome in a given situation. In Q-learning, the agent apparently works through trial and error to find the best way to go about achieving a goal it’s been programmed to pursue.

What does this all have to do with OpenAI’s supposed “math” breakthrough? One could speculate that the program that managed (allegedly) to do simple math operations may have arrived at that ability via some form of Q-related RL. All of this said, many experts are somewhat skeptical as to whether AI programs can actually do math problems yet. Others seem to think that, even if an AI could accomplish such goals, it wouldn’t necessarily translate to broader AGI breakthroughs.

Gizmodo
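Since the quoted article stays high-level, here is a minimal tabular Q-learning sketch, just to show what “trial and error plus rewards” looks like in code. The toy environment (an agent walking left or right along a corridor toward a goal) and all parameter values are my own illustrative assumptions and have nothing to do with whatever OpenAI built:

```python
import random

# Tabular Q-learning on a corridor of states 0..5. The agent starts at 0,
# gets a reward of 1 for reaching state 5, and 0 otherwise.
N_STATES, GOAL = 6, 5
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]; 0=left, 1=right

for episode in range(500):
    state = 0
    while state != GOAL:
        # Epsilon-greedy: usually exploit the best-known action, sometimes explore.
        if random.random() < EPSILON:
            action = random.randint(0, 1)
        else:
            action = 0 if Q[state][0] > Q[state][1] else 1
        next_state = max(0, state - 1) if action == 0 else state + 1
        reward = 1.0 if next_state == GOAL else 0.0
        # The Q-learning update: nudge the estimate toward the reward plus the
        # discounted value of the best action available from the next state.
        Q[state][action] += ALPHA * (
            reward + GAMMA * max(Q[next_state]) - Q[state][action])
        state = next_state

print("learned policy:", ["left" if q[0] > q[1] else "right" for q in Q[:GOAL]])
```

After a few hundred episodes the learned policy is “right” in every state, which is the optimal behavior for this toy corridor.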

My sense is that AI breakthroughs are certainly happening. At the same time, I suspect the commercial hype has gotten ahead of the technology, just like it did for every previous technology from self-driving cars to virtual reality to augmented reality. Every one of these technologies reached a fever pitch where companies were racing to roll out products to consumers ahead of competitors. Because they rush, the consumer applications don’t quite live up to the hype, the hype bubble bursts, and then the technology seems to disappear for a few years. Of course, it doesn’t disappear at all, but rather disappears from headlines and advertisements for a while. Behind the scenes, it continues to progress and then slowly seeps back into our lives. As the real commercial applications arrive and take over our daily lives, we tend to shrug.

So I would keep an eye out on the street for the technologies whose hype bubbles burst a handful of years ago, and I would expect the current AI hype to follow a similar trend. Look for the true AI takeover in the late 2020s (if I remember correctly, close to when Ray Kurzweil predicted it 30-odd years ago???)

“useful principles”

Here is an interesting blog post called “30 useful principles”. I would agree that the majority of them are useful. Anyway, here are a few ideas and phrases that caught my interest. I’ll try to be clear when I am quoting versus paraphrasing or adding my own interpretation.

  • “When a measure becomes a goal, it ceases to be a good measure.” Makes sense to me – measuring is necessary, but I have found that people who are actually doing things on the ground need an understanding of the fundamental goals, or else things will tend to drift over time and no longer be aimed at the fundamental goals.
  • “A man with a watch knows what time it is. A man with 2 watches is never sure.” A good way to talk about the communication of uncertainty. Measuring and understanding uncertainty is critical in science and decision making, but how we communicate it requires a lot of careful thought to avoid unintended consequences. Decisions are often about playing the odds, and sometimes giving decision makers too much information on uncertainty leads to no decisions or delayed decisions, which are themselves a type of decision, and not the type that is likely to produce desirable results. Am I saying we should oversimplify and project an inflated sense of certainty when talking to the public and decision makers? And is this a form of manipulation? Well, sort of, and sometimes, yes to both questions.
  • “Reading is the basis of thought.” Yes, this is certainly true for me, and it is even true that the writing process is an important part of thoroughly thinking something through. This is why we may be able to outsource the production of words to AI, but this will not be a substitute for humans thinking. And if we don’t exercise our thinking muscles, we will lose them over time and we will forget how to train the next generation to develop them. So if we are going to outsource thinking and problem solving to computers, let’s hope they will be better at it than we ever were. A better model would be computer-aided decision making, where the computers are giving humans accurate and timely information about the likely consequences of our decisions, but in the end we are still applying our judgment and values in making those decisions.
  • “punishing speech—whether by taking offence or by threatening censorship—is ultimately a request to be deceived.” It’s a good idea to create incentives for people to tell the truth and provide accurate information, even if it is information people in leadership positions don’t want to hear. Leaders get very out of touch if they don’t do this.
  • “Cynicism is not a sign of intelligence but a substitute for it, a way to shield oneself from betrayal & disappointment without having to do or think.” I don’t know that cynical or “realistic” people lack raw intelligence on average, but they certainly lack imagination and creativity. The more people have trouble imagining that things can change, the more it becomes a self-fulfilling prophecy that things will not change.
  • “One death is a tragedy, a million is a statistic.” I’m as horrified by pictures of dying babies in a hospital in a war zone as anyone else, but it also raises my propaganda flag. Who is trying to manipulate me with these images and why? What else is going on at the same time that I might also want to pay attention to?

generative AI in the workplace?

Microsoft is unleashing generative AI on the workplace imminently, according to Slate.

At Microsoft’s New York release event on Thursday, I watched as it revealed products that simplify and automate some of the worst parts of office life. The company demoed a text generator that can read long Word documents and write blog posts highlighting the most relevant points. It showed another feature that allows you to prompt Copilot to summarize a slew of unread messages from an email-happy co-worker. The technology can also read transcripts of meetings you miss and note the most relevant parts, or allow you to query the full discussions. Even simple updates like prompting Copilot to create a header image for a slide deck seem quite useful.

Slate

So maybe this can partially automate some useless tasks that are taking up our time. But if they are useless, do they need to be done at all? Are they adding value at all to begin with?

Here is some advice I would give young people new to the workplace:

  1. When people give you assignments, repeat them back to confirm you understand them. If they are still not clear, put them in writing and ask the person assigning the task to confirm. In most cases, they will like this.
  2. Keep a running list of things you have been asked to do, when they are due, what their status is, and any problems/obstacles/questions you are encountering to getting them done. Look at and update this list every day.
  3. Give updates on your tasks without being asked. When you have a question, encounter an issue, or realize you may not be able to meet a deadline, talk to the person assigning the work early and often about it. They will like this. Often deadlines can be moved or you can get help, but this gets harder as a deadline approaches.
  4. Keep a calendar. Look at it and update it every day.
  5. Make it a habit to take notes in all meetings and phone conversations. You don’t have to be a court reporter. Try to capture assignments and decisions. At the end of the day and again at the end of the week, look through all your notes, list new assignments, and move them to your assignment list.
  6. Basically, you want to be a rock solid and reliable “set it and forget it” employee. This doesn’t mean you do everything perfectly all the time with no help. It means that when someone assigns you a task, they know you will either do it perfectly and on time, or much more likely, you will come to them with updates and issues that need to be resolved to get the work done. Once they assign it to you, they don’t have to think about it again until you walk through the door.
  7. #1-6 are kind of it for maybe your first year. Once you are a master note taker, list and calendar keeper, at some point you will find yourself helping others to get organized. One day, you will find yourself tracking and communicating the work of a small team of people. Which brings me to communication…
  8. Reading, writing, and speaking are all important, of course. But what is really valuable as you start moving up the business ladder is starting to get a sense of how to communicate a message to an audience. I try to ask myself three questions before preparing a document or presentation: (1) Who specifically is my audience? (2) What is the take home message I would like my audience to hear and understand? and (3) What decisions or actions would I like my audience to take after hearing and understanding my message? Get this down, and at some point you will not just be the back office “getting things done” person (although you can make a perfectly good career of that if you want to), but you will find yourself in front of customers and senior management explaining things and adding value for your organization.
  9. Maybe it doesn’t need to be said, but take some time for humanity. A little small talk and banter is how humans connect, and as long as it doesn’t get out of hand it is positive for productivity. When you work in an office, get in the habit of saying hello when you get there and good-bye before you leave. It is annoying when someone just evaporates at 5 pm when you had an important question for them. If you need to vanish at exactly 5 pm, stick your head in at 4 pm and ask if there is anything critical people need from you during the last hour of the day. This is really helpful. If you don’t need to vanish at 5 pm, stick around for a little while and review the happenings of the day with co-workers. Every once in a while, move the banter to a local eating or drinking establishment. This is how productive, creative, innovative teams are built, and I see this culture vanishing.
  10. Notice I didn’t talk much about working from home. I just don’t think it works well. Try to be there in person as much as possible.

Now, do any of the things “generative AI” can do in the short term address anything above? I’m skeptical, but willing to give it a chance. A big reason for all that note taking and list and calendar keeping/reviewing/updating I do is to form a big picture in my brain of what is going on in my organization and how I can add value to it. Even if a computer can form that big picture, that is not going to put it in my brain. Maybe a computer can go through a transcript of a meeting or phone call and pull out decisions and action items (a toy sketch of this idea follows below). It certainly should be able to keep a calendar and do scheduling. It might be valuable if, first thing in the morning, the computer said to me “consider doing this thing next” or “consider doing one of these two (or three) things next”, always fitting into some bigger-picture goal of getting everything done on time, on budget, and to a high standard. Maybe virtual reality will solve some of the problems with working from home eventually. I doubt we will be there any time soon, but I also don’t doubt the computers will get better at this over time.
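As a toy version of the meeting-transcript idea, here is a sketch that flags likely action items with nothing but keyword matching. A real assistant would presumably use a language model, but the input and output would look about the same; the transcript and trigger phrases here are made up:

```python
import re

# Toy action-item extractor: flag transcript lines containing phrases that
# often signal a commitment or decision. Purely heuristic, for illustration.
TRIGGERS = [
    r"\bwill\b",
    r"\bby (monday|tuesday|wednesday|thursday|friday)\b",
    r"\bdecided\b",
    r"\bdeadline\b",
    r"\baction item\b",
]
PATTERN = re.compile("|".join(TRIGGERS), re.IGNORECASE)

transcript = [
    "Alice: Thanks everyone for joining.",
    "Bob: I will send the revised budget by Friday.",
    "Carol: We decided to move the launch to March.",
    "Alice: Great, let's wrap up.",
]

for line in transcript:
    if PATTERN.search(line):
        print("ACTION:", line)
```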

now is the singularity near?

The New Yorker has a long article on the possibility of an AI-driven singularity. It surveys many of the other news stories and letters and debates on the subject. The answer is really that nobody knows, but since it is an existential threat of unknown probability it certainly belongs somewhere on the risk matrix.

I can see nearer-term problems too. Thinking back to the “flash crash” of 2010, relatively stupid algorithms reacting to each other’s actions and making decisions at lightning speed nearly crashed the financial system. We recovered from that one, but what if new algorithms lead to a crash of financial or real infrastructure systems (electricity, internet, transportation, water, food?) that we can’t recover from? It doesn’t take a total physical collapse to cause a depression, just a massive loss of confidence leading to panic. That scenario is not too hard for me to imagine.
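To make that feedback-loop worry concrete, here is a toy simulation, entirely my own construction and not a model of any real market: a crowd of agents that each sell everything when the price falls below their personal stop-loss threshold, with every wave of selling pushing the price down and triggering the next wave:

```python
import random

# Toy stop-loss cascade: a 3% shock triggers selling, which lowers the price,
# which triggers more selling. Illustrative only; not a real market model.
random.seed(1)
N_AGENTS = 1000
PRICE_IMPACT = 0.0001  # fractional price drop per agent that sells

price = 100.0
thresholds = [random.uniform(90, 99) for _ in range(N_AGENTS)]
holding = [True] * N_AGENTS

price *= 0.97  # a modest initial shock
for step in range(50):
    sellers = [i for i in range(N_AGENTS) if holding[i] and price < thresholds[i]]
    if not sellers:
        break
    for i in sellers:
        holding[i] = False
    price *= (1 - PRICE_IMPACT) ** len(sellers)
    print(f"step {step}: {len(sellers):4d} sellers, price {price:6.2f}")
```

A modest initial drop cascades into a much larger one, even though each individual agent is behaving “rationally” by its own rule.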

I suspect that we are approaching the peak of the hype cycle when it comes to AI. It will build to a fever pitch, the bubble will burst (in the sense of our attention), it will seem to the public like nothing much is happening for a few years or even a decade, but in the background quiet progress will be made and it will eventually, stealthily just take over much of our daily lives and we will shrug like we always do.

my (first? last?) AI post

I haven’t talked much about AI. Generally, I don’t feel like I have a lot to add on topics that literally everybody else is talking about (even Al Gore), and at least some people have a lot more specialized knowledge than I do. But here goes:

  • In the near- to medium-term, it seems to me the most typical use will be to streamline our interaction with computers. Writing computer code might be the most talked about application. AI can pretty easily write the first draft of a computer program based on a verbal description of what the programmer wants. This might save time, as long as debugging the draft code and getting it to run and produce reasonable results doesn’t take longer than it would have taken the programmer to draft, debug, and check the code. Automated debugging almost seems like an oxymoron to me, but maybe it will get better over time.
  • The other way we interact with computers, though, is all those endless drop-down menus and pop-up windows and settings of settings of settings, not to mention infuriating “customer service” computers. Surely AI can help to untangle some of this and just make it easier for a normal person to communicate to a computer what they are looking for.
  • So some streamlining and efficiency gains seem like a possibility. Like pretty much any new technology, these will cause some short-term employment loss and longer-term productivity gains. At least a portion of the productivity gains do seem to trickle down to greater value (lower prices for what we get in return) for consumers and the middle class. How much trickles down depends on how seriously the society works on problems like market failure, regulatory capture, benefits, childcare, health care, education, training, research and development, etc.
  • Increasingly personalized medicine seems like a medium- to longer-term possibility. We have heard a lot about evidence-based medicine, and there is not a lot of evidence it has delivered on its promises so far. Maybe it eventually will, and maybe AI making sense of relatively unstructured health information and medical records will eventually be part of the solution.
  • Longer term, AI might be able to synthesize existing information and research across fields and enable better problem solving and decisions. The thing is, computer-aided decision making for policy makers and other leaders is not a new idea. It has been around for a long time and has not necessarily improved decision making. It’s not that objective information always makes the best decision obvious, or that there is always a single best decision. Decisions should be informed by a combination of objective information and values in most cases. But human beings rarely make use of even the objective information readily available to them, and often make decisions based on opinions and hunches.
  • Overcoming this decision problem will be more of a social science problem than a technology problem – and social science has lagged behind the hard sciences and even (god-forbid) the semi-flaccid science of economics. Maybe AI can help these sciences catch up. Where is that psycho-history we were promised so long ago?
  • Construction and urban planning are some more challenging areas that never seem to get anywhere, but maybe that is just a cynical middle-aged veteran of the urban planning and sustainable development wars talking. I tried to help bring a systems theory- and decision science-based approach to the engineering and planning sector earlier in my career, and that attempt foundered badly on the rocks of human indifference at best and ill intention at worst. Maybe that was an idea before its time and this time will be different, but I am not sure the state of information technology was the limiting factor at the time.
  • Education is a tough sector. We all want to make it easier. But I have recently returned to a classroom setting after decades of virtual “training” and “industry” conferences, and the difference in what I am learning for a given investment of effort is night and day. Maybe AI could help human teachers identify the right level of content and the right format that will benefit a given student most, and then deliver it.
  • And those are the ignorant but well-intentioned humans. There are many ill-intentioned humans out there. Speaking of ill-intentioned humans, if AI can be used to accelerate technological progress, it will inevitably be used to accelerate progress on weapons, propaganda, authoritarian control of populations, and just generally to concentrate wealth and power in as few hands as possible.
  • Now for a fun and potentially lucrative idea: As Marc Andreessen puts it, “Don’t get me wrong, cults are fun to hear about, their written material is often creative and fascinating, and their members are engaging at dinner parties and on TV.” No longer do cults have to be built around pretend gods, we can create actual gods and then build cults around them!