Tag Archives: artificial intelligence

how I’m using AI

AI has definitely improved my personal productivity when it comes to computer programming. I haven’t been successful asking it to write whole programs for me, but it has been fantastic for solving syntax problems in minutes that might otherwise take me hours to figure out. For example: I have data in xyz format and I need it in zyx format, please give me some example code that works. Or: I need to pass an argument to a function, and does it need to be in quotes, parentheses, enclosed in ancient hieroglyphics, or some random combination of these? In the past, I always started with a Google search on these questions, looking first for a blog post with examples, and failing that for a Stack Overflow post. At some point, I started using ChatGPT when those two options failed. Then I figured out I have access to a version of Copilot through my employer, and any data or code I supply is not going to be automatically broadcast to the world, so I have gradually been shifting to that. I just learned that Copilot is really a version of ChatGPT. (The article I linked to mentions some other AIs I had not heard of yet, such as “Claude”.)
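To give a concrete flavor of the kind of question I mean (the data and field names here are entirely made up, just a toy illustration), a typical “I have it in one format, I need it in another” request might get back a snippet like this, converting a list of records into CSV text:

```python
import csv
import io

# Made-up example data: a list of records (the "xyz format")...
records = [
    {"site": "A", "flow": 1.2},
    {"site": "B", "flow": 3.4},
]

# ...that I need as CSV text (the "zyx format").
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["site", "flow"])
writer.writeheader()
writer.writerows(records)
print(buffer.getvalue())
```

Nothing fancy, but this is exactly the category of question where a 30-second AI answer beats an hour of documentation-diving.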

Then at some point, I started going to AI after blog posts but before Stack Overflow. This is about where I am now. For one thing, AI tends to listen to my question, understand what I am looking for and give me a relevant answer much more often than Stack Overflow. For another, it is much more polite than the dick heads and whining weenies on Stack Overflow. You know who you are. Thank you for your free service in the past, and if you want me to continue coming to you, you may want to at least learn some manners. You could start by asking an AI to analyze your posts and suggest ways to not be such a dick head.

I am not using AI for writing, because for me writing and thinking are two halves of the same coin, and I can’t farm out the thinking. The one exception to this is thank you notes and other social niceties – I have no interest in burning my limited intellectual capacity on learning how to write these, so I am very happy to have AI do it. I tried asking Copilot to find me promo codes for a few stores, but none of them worked. I suspect the companies are paying for the same AI I am using for free, so it is probably snitching to them so I can’t get a deal.

GPT-4 and the Turing test

I don’t know if the original Turing test was based on just one human participant or, if more than one was used, how those humans were chosen.

Scientists decided to replicate this test by asking 500 people to speak with four respondents, including a human and the 1960s-era AI program ELIZA as well as both GPT-3.5 and GPT-4, the AI that powers ChatGPT. The conversations lasted five minutes — after which participants had to say whether they believed they were talking to a human or an AI. In the study, published May 9 to the pre-print arXiv server, the scientists found that participants judged GPT-4 to be human 54% of the time… GPT-3.5 scored 50% while the human participant scored 67%.

livescience.com

Should the criteria to pass be that 51% of a large random sample of humans could not correctly identify computer vs. human? How bad would the results have to be for the control (identifying the human as human) before we would conclude that the Turing test no longer makes sense?
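For what it’s worth, a quick back-of-envelope check (my own arithmetic, treating the study’s 500 judgments of GPT-4 as independent coin flips, which the actual study design may not justify) suggests 54% isn’t even clearly distinguishable from chance at that sample size:

```python
import math

# Null hypothesis: judges are just guessing 50/50.
n, p_hat, p0 = 500, 0.54, 0.50
se = math.sqrt(p0 * (1 - p0) / n)  # standard error of the sample proportion under the null
z = (p_hat - p0) / se              # z-statistic for the observed 54%
print(f"z = {z:.2f}")              # about 1.79, just shy of the usual 1.96 cutoff
```

So a fixed 51% pass threshold would be meaningless without also specifying the sample size; what you would really want is a result statistically distinguishable both from the coin flip and from the human baseline.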

It’s interesting that the Turing test is presented as a test of intelligence, but many of the things that apparently make computer conversationalists convincingly human are in fact cognitive biases, logical errors, and the appearance of emotionally-influenced decision making. These might be things you would look for if you wanted a computer to be a friend, but they are not things I would look for if I wanted a computer to counter my less rational human impulses and help me make more rational decisions.

AI and protein research

Here is a story in MIT News about AI doing experiments on proteins, with drug development and gene therapy implications. This seems like the clearest application of AI at the moment – anything where there is a formula to be figured out and a large number of combinations to be tried. I can definitely see this accelerating scientific and technological progress, although the efficiency to me seems to be more in the “automation” part than the “intelligence” part.

trade “fragmentation” vs. AI?

One interesting thing in the IMF report I mentioned recently forecasting a significant productivity slowdown: the positive effects of AI on productivity and the negative effects of inefficient trade policy were shown offsetting each other. Meanwhile, Eric Posner is concerned that humans will have psychological difficulties leading lives of leisure after the AI-driven productivity revolution, and after our political system correctly decides to redistribute the resulting wealth to everyone. I know, this could be a medium-term pain, long-term gain sort of thing. But how do we know the long term will come? And this kind of thinking clearly ignores the existential threats like climate change and biological weapons, unless you assume the AI productivity revolution will dispatch those threats without creating new ones.

weather forecasting

This is interesting. It is not 100% clear to me what the measure of accuracy is below, but the plot shows how much weather forecasting has improved over the last 50 years or so. A 3-5 day forecast is highly accurate now, and 3-5 are not that different from each other. It’s interesting to me that there is such a large drop-off in accuracy between a 7 and 10 day forecast – that is not necessarily intuitive, but it is useful to know even in everyday life. A 10-day forecast is basically a coin flip, while if you check back 3 days later you are closer to 80/20 odds. This is based on pressure measured at a certain height I think, so it doesn’t necessarily mean forecasts of precipitation depth and intensity, rain vs. snow vs. ice, thunder and lightning, tornadoes, etc. are going to be as accurate as this implies.

Our World in Data

There is some suggestion that AI (meaning purely statistical approaches, or AI choosing any blend of statistics and physics it wants?) might make forecasting much faster, cheaper, and easier yet again.

2023 in Review

Warning: This post is not 100% family friendly and profanity free. 2023 was just not that kind of year!

I’ll start with a personal note. After 24+ years of engineering consulting practice, I have decided to leave the world of full-time professional employment and go back to school for a bit. This is some combination of mid-life crisis and post-Covid working parent burnout. I spent a lot of time thinking about it, ran all the financial numbers, and decided I can swing it for a year or so without major implications for my eventual retirement 15-ish years down the line. So, gentle reader, you too can do this sort of thing if you want to. Just be patient, plan, prepare, do the math, and be rational about it.

The Year’s Posts

Stories I picked as “most frightening or depressing”:

  • JANUARY: How about a roundup of awful things, like the corrupt illegitimate U.S. Supreme Court, ongoing grisly wars, the CIA killed JFK after all (?), nuclear proliferation, ethnic cleansing, mass incarceration, Guantanamo Bay, and all talk no walk on climate change? And let’s hope there is a special circle of hell waiting for propaganda artists who worked for Exxon.
  • FEBRUARY: Pfizer says they are not doing gain of function research on potential extinction viruses. But they totally could if they wanted to. And this at a time when the “lab leak hypothesis” is peeking out from the headlines again. I also became concerned about bird flu, then managed to convince myself that maybe it is not a huge risk at the moment, but definitely a significant risk over time.
  • MARCH: The Covid-19 “lab leak hypothesis” is still out there. Is this even news? I’m not sure. But what is frightening to me is that deadly natural and engineered pathogens are being worked with in labs, and they almost inevitably will escape or be released intentionally to threaten us all at some point. It’s like nuclear proliferation, accidents, and terrorism – we have had a lot of near misses and a lot of luck over the last 70 years or so. Can we afford the same with biological threats (not to mention nuclear threats) – I think no. Are we doing enough as a civilization to mitigate this civilization-ending threat? I think almost certainly, obviously not. What are we doing? What are we thinking?
  • APRIL: Chemicals, they’re everywhere! And there were 20,000 accidents with them in 2022 that caused injuries or deaths. Some are useful, some are risky, and some are both. We could do a better job handling and transporting them, we could get rid of the truly useless and dangerous ones, and we could work harder on finding substitutes for the useful but dangerous ones. And we could get rid of a corrupt political system where chemical companies pay the cost of running for office and then reward candidates who say and do what they are told.
  • MAY: There are more “nuclear capable states” than I thought.
  • JUNE: Most frightening and/or depressing story: Before 2007, Americans bought around 7 million guns per year. By 2016, it was around 17 million. In 2020, it was 23 million. Those are the facts and figures. Now for my opinion: no matter how responsible the vast majority of gun owners are, you are going to have a lot more suicides, homicides, and fatal accidents with so many guns around. And sure enough, firearms are now the leading cause of death in children according to CDC. That makes me sick to think about.
  • JULY: Citizens United. Seriously, this might be the moment the United States of America jumped the shark. I’ve argued in the past for Bush v. Gore. But what blindingly obvious characteristic do these two things have in common? THE CORRUPT ILLEGITIMATE UNITED STATES SUPREME COURT!!!
  • AUGUST: Immigration pressure and anti-immigration politics are already a problem in the U.S. and Europe, and climate change is going to make it worse. The 2023 WEF Global Risks Report agrees that “large scale involuntary migration” is going to be up there as an issue. We should not be angry at immigrants, we should be angry at Exxon and the rest of the energy industry, which made an intentional choice not only to directly cause all this but to prevent governments from even understanding the problem let alone doing anything to solve it. We should be very, very angry! Are there any talented politicians out there who know how to stoke anger and channel it for positive change, or is it just the evil genocidal impulses you know how to stoke?
  • SEPTEMBER: “the accumulation of physical and knowledge capital to substitute natural resources cannot guarantee green growth”. Green growth, in my own words, is the state where technological innovation allows increased human activity without a corresponding increase in environmental impact. In other words, this article concludes that technological innovation may not be able to save us. This would be bad, because green growth is the happy story where our civilization has a “soft landing” rather than a major course correction or a major disaster. There are some signs that human population growth may turn the corner (i.e., go from slowing down to actually decreasing in absolute numbers) relatively soon. Based on this, I speculated that “by focusing on per-capita wealth and income as a metric, rather than total national wealth and income, we can try to come up with ways to improve the quality of human lives rather than just increasing total money spent, activity, and environmental impact ceaselessly. What would this mean for “markets”? I’m not sure, but if we can accelerate productivity growth, and spread the gains fairly among the shrinking pool of humans, I don’t see why it has to be so bad.”
  • OCTOBER: Israel-Palestine. From the long-term grind of the failure to make peace and respect human rights, to the acute horror causing so much human suffering and death at this moment, to the specter of an Israeli and/or U.S. attack on Iran. It’s frightening and depressing – but of course it is not my feelings that matter here, but all the people who are suffering and going to suffer horribly because of this. The most positive thing I can think of to say is that when the dust settles, possibly years from now, maybe cooler heads will prevail on all sides. Honorable mention for most frightening story is the 2024 U.S. Presidential election starting to get more real – I am sure I and everyone else will have more to say about this in the coming (exactly one year as I write this on November 5, 2023) year!
  • NOVEMBER: An economic model that underlies a lot of climate policy may be too conservative. I don’t think this matters much because the world is doing too little, too late even according to the conservative model. Meanwhile, the ice shelves holding back Greenland are in worse shape than previously thought.
  • DECEMBER: Migration pressure and right wing politics create a toxic feedback loop practically everywhere in the world.

Stories I picked as “most hopeful”:

  • JANUARY: Bill Gates says a gene therapy-based cure for HIV could be 10-15 years away.
  • FEBRUARY:  Jimmy Carter is still alive as I write this. The vision for peace he laid out in his 2002 Nobel Peace Prize acceptance speech is well worth a read today. “To suggest that war can prevent war is a base play on words and a despicable form of warmongering. The objective of any who sincerely believe in peace clearly must be to exhaust every honorable recourse in the effort to save the peace. The world has had ample evidence that war begets only conditions that beget further war.”
  • MARCH: Just stop your motor vehicle and let elephants cross the road when and where they want to. Seriously, don’t mess with elephants.
  • APRIL: There has been some progress on phages, viruses intentionally designed to kill antibiotic-resistant bacteria. Also, anti-aging pills may be around the corner.
  • MAY: The U.S. Congress is ponying up $31 billion to give Houston a chance at a future. Many more coastal cities will need to be protected from sea level rise and intensifying storms. Now we will see if the U.S. can do coastal protection right (just ask the Dutch or Danish, no need to reinvent anything), and how many of the coastal cities it will get to before it is too late.
  • JUNE: It makes a lot of sense to tax land based on its potential developed value, whether it has been developed to that level or not. This discourages land speculation and vacant and abandoned property in cities, while raising revenue that can offset other taxes.
  • JULY: There is a tiny glimmer of hope that Americans might actually value more walkable communities. And this is also a tiny glimmer of hope for the stability of our global climate, driver/bicyclist/pedestrian injuries and deaths, and the gruesome toll of obesity and diabetes. But it is only a glimmer.
  • AUGUST: Peak natural gas demand could happen by 2030, with the shift being to nuclear and renewables.
  • SEPTEMBER: Autonomous vehicles kill and maim far, far fewer human beings than vehicles driven by humans. I consider this a happy story no matter how much the media hypes each accident autonomous vehicles are involved in while ignoring the tens of thousands of Americans and millions of human beings snuffed out each year by human drivers. I think at some point, insurance companies will start to agree with me and hike premiums on human drivers through the roof. Autonomous parking also has a huge potential to free up space in our urban areas.
  • OCTOBER: Flesh eating bacteria is becoming slightly more common, but seriously you are not that likely to get it. And this really was the most positive statement I could come up with this month!
  • NOVEMBER: Small modular nuclear reactors have been permitted for the first time in the United States, although it looks like the specific project that was permitted will not go through. Meanwhile construction of new nuclear weapons is accelerating (sorry, not hopeful, but I couldn’t help pointing out the contrast…)
  • DECEMBER: I mused about ways to create an early warning system that things in the world or a given country are about to go seriously wrong: “an analysis of government budgets, financial markets, and some demographic/migration data to see where various governments’ priorities lie relative to what their priorities probably should be to successfully address long-term challenges, and their likely ability to bounce back from various types and magnitudes of shock. You could probably develop some kind of risk index at the national and global levels based on this.” Not all that hopeful, you say? Well, I say it fits the mood as we end a sour year.

Stories I picked as “most interesting, not particularly frightening or hopeful, or perhaps a mixture of both”:

  • JANUARY: Genetically engineered beating pig hearts have been sewn into dead human bodies. More than once.
  • FEBRUARY: It was slim pickings this month, but Jupiter affects the Sun’s orbit, just a little bit.
  • MARCH: Chickie Nobs have arrived!
  • APRIL: I had heard the story of the Google engineer who was fired for publicly releasing a conversation with LaMDA, a Google AI. But I hadn’t read the conversation. Well, here it is.
  • MAY: Peter Turchin’s new book proposes a set of indicators presaging political instability: “stagnating or declining real wages, a growing gap between rich and poor, overproduction of young graduates with advanced degrees, declining public trust, and exploding public debt”. I found myself puzzled by the “overproduction of young graduates” part, and actually had a brief email exchange with Peter Turchin himself, which I very much appreciated! Anyway, he said the problem is not education per se but “credentialism”. I have to think some more about this, but I suppose the idea is that education, like health, wealth, and almost everything else, is not equally distributed but is being hoarded by a particular class which is not contributing its fair share. These are my words, not Peter’s, and he might or might not agree with my characterization here.
  • JUNE: The U.S. may have alien spacecraft at Area 51 after all. Or, and this is purely my speculation, they might have discovered anti-gravity and want to throw everybody else off the scent.
  • JULY: We are all susceptible to the “end of history effect” in that we tend to assume our personalities will not change in the future, when in fact they almost certainly will. So one way to make decisions is to imagine how a few different possible future yous might look back on them.
  • AUGUST: There are a number of theories on why “western elites” have not been (perceived to be) effective in responding to crises in recent years and decades. Many have to do with institutional power dynamics, where the incentives of the individual to gain power within the institution do not align with the stated goals of the institution. Like, for example, not killing everyone. The possible silver lining would be that better institutions could be designed where incentives aligned. I have an alternate, or possibly complementary, theory that there has been a decline in systems thinking and moral thinking. Our leaders aren’t educated to see the systems, or to think enough about whether their decisions are on the side of right or wrong.
  • SEPTEMBER: Venice has completed a major storm surge barrier project.
  • OCTOBER: The generally accepted story of the “green revolution“, that humanity saved itself from widespread famine in the face of population growth by learning to dump massive quantities of fossil fuel-derived fertilizer on farm fields, may not be fully true.
  • NOVEMBER: India somehow manages to maintain diplomatic relations with Palestine (which they recognize as a state along with 138 other UN members), Israel, and Iran at the same time.
  • DECEMBER: Did an AI named “Q Star” wake up and become super-intelligent this month?

And Now, My Brilliant Analytical Synthesis!

Climate Change. Well really, I’m likely to just say things now I have said many times before. The climate change shit is really starting to hit the fan. Our largely coastal civilization and the food supply that sustains it are at risk. The shit we can obviously see hitting the fan right now is the result of emissions years if not decades ago, and since then we have continued to emit too much, at an increasing rate. This means that even if we stopped emitting too much right now and going forward, the crisis would continue to get worse for some time before it eventually got better. And we are not stopping; we are still emitting too much, still at an increasing rate. We are already seeing the beginnings of massive population movements fueling a downward spiral of nationalist and outright racist geopolitics, which makes it even harder to come together and address our critical planetary carrying capacity issue in a rational manner. We are not only seeing “the return of great power competition”, we are insanely patting ourselves on the back for aiding and abetting it, and piling nuclear proliferation on top. Is a soft landing possible in this situation? I am not going to tell you I think it is, or even that our species and the cowards who pass for our leadership have any hope of making it happen. I think about the best we can hope for is some kind of serious but manageable collapse or crisis that brings us to our senses and allows some real leaders to emerge. To throw out one idea, maybe we could come to a new era of arms reduction for the major nuclear powers, and halts to proliferation for all the emerging nuclear powers, in exchange for civilian nuclear power for everyone who wants it, all under a strict international control and inspection regime.
This would begin to address two existential risks (nuclear war and climate change) at once. Or maybe, just maybe, we are on the verge of a massive acceleration of technological progress that could make problems easier to solve. Maybe, but new technology also comes with new risks, and we shouldn’t put all our eggs in this basket. Besides, the singularity is nearing but it still feels a decade or so away to me.

UFOs. Aside from all of that, maybe the weirdest single thing going on in the world right now is the UFOs. There seems to be no real controversy about them – they are out there. They are flying around and if not defying the laws of physics as we know them, defying any technology that is able to accommodate the laws of physics as we know them. And what this logically leads to is that somebody (or some intelligent entity) knows something about the laws of physics that the rest of us do not know. Einstein explained how gravity behaves, but he wasn’t able to fully explain what gravity is or certainly how or why it came to be the way it is. Einstein’s predictions have since been proven through incontrovertible evidence, and the predictions of quantum theory have also been incontrovertibly proven, but the two theories are still at odds and in need of unification despite the efforts of the most brilliant minds today. But…are the most brilliant minds today operating in the open, or are they behind closed doors at private defense contractors and subject to censorship on national security grounds? If there has been a major discovery, would it see the light of day or would it be suppressed? I have no information here, I am just saying this is a narrative that would fit the evidence, and I don’t see other plausible narratives that fit the evidence. Why would aliens be playing with relatively easily discoverable toys in our atmosphere, while in the meantime we have discovered no radio signal evidence, no evidence of their existence in our telescopes? Those things would be very hard if not impossible to cover up, so I think we would know. The Fermi Paradox persists.

Artificial Intelligence. I tend to think the AI hype is ahead of the reality. Nonetheless, the reality is coming. It will probably seize control without our noticing after the hype has passed. Is it possible we could look back in a decade and identify 2023 as the year it woke up? There were a couple queer (in the original dictionary sense – I just couldn’t think of a better word) stories in 2023. One was a Google engineer getting fired after publicly declaring his belief that a Google AI had become conscious. The other was the “ethics board” of a major corporation firing its CEO in relation to a rumored artificial general intelligence breakthrough. Only time will tell what really happened in these cases (if it is ever made public), but one thing we can say is that technological progress does not usually go backwards.

Synthetic Biology. It’s pretty clear we are now in the age of synthetic biology breakthroughs that was hyped over the last few decades, and that the media and publics of the world are predictably yawning at and ignoring. But we are hearing about vaccines and cures on the horizon for diseases that have long plagued us, genetically engineered organs, synthetic meat, engineered viruses to fight antibiotic-resistant bacteria, and anti-aging pills among other things. And then there is the specter of lab accidents and biological weapons, which might be the single scariest thing in the world today out of all the terrifying things I have mentioned in this post.

2024 U.S. Presidential Election. Ugh, I’m still not ready to think about it, but it is going to happen whether I am ready to think about it or not. I’ll get around to thinking and writing about it soon, I’m sure.

Happy 2024!

December 2023 in Review

Most frightening and/or depressing story: Migration pressure and right wing politics create a toxic feedback loop practically everywhere in the world.

Most hopeful story: I mused about ways to create an early warning system that things in the world or a given country are about to go seriously wrong: “an analysis of government budgets, financial markets, and some demographic/migration data to see where various governments’ priorities lie relative to what their priorities probably should be to successfully address long-term challenges, and their likely ability to bounce back from various types and magnitudes of shock. You could probably develop some kind of risk index at the national and global levels based on this.” Not all that hopeful, you say? Well, I say it fits the mood as we end a sour year.

Most interesting story, that was not particularly frightening or hopeful, or perhaps was a mixture of both: Did an AI named “Q Star” wake up and become super-intelligent this month?

AI “coscientist”

The idea of computers and robots greatly accelerating the rate of progress in chemical and drug research is not science fiction.

Autonomous chemical research with large language models

Transformer-based large language models are making significant strides in various fields, such as natural language processing, biology, chemistry and computer programming. Here, we show the development and capabilities of Coscientist, an artificial intelligence system driven by GPT-4 that autonomously designs, plans and performs complex experiments by incorporating large language models empowered by tools such as internet and documentation search, code execution and experimental automation. Coscientist showcases its potential for accelerating research across six diverse tasks, including the successful reaction optimization of palladium-catalysed cross-couplings, while exhibiting advanced capabilities for (semi-)autonomous experimental design and execution. Our findings demonstrate the versatility, efficacy and explainability of artificial intelligence systems like Coscientist in advancing research.

Nature

It seems to me that the speed limit here is not anything imposed by the computers and robots, but your ability to measure progress and give the computers and robots feedback. With chemicals, you could tell the robots to find a combination of compounds that will do XYZ, where XYZ is something you can measure like an amount of energy or a color. With drugs, your issue could be how to test the results to see if they are working. If you test them on a computer model, your ability to measure depends on how good the computer model is. Let’s say you wanted to breed a super-intelligent mouse. There should be ways to measure the intelligence of a mouse. So you could take 100 mice, test them all, find the two smartest, and create a new batch of 100 embryos from the smartest male and female (or maybe at some point gender is no longer a limitation?). Now you have to wait for those 100 embryos to grow up to the point where you can repeat the process. The limiting step here would be how long it takes the mice to develop to the point where they can be tested. If they could somehow be tested at the embryo stage, maybe you could create a thousand generations of directed mouse evolution in a matter of hours or days? Well, then, you can let the super-intelligent mice design the next round of robots.
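The loop I just described is basically a genetic algorithm, which is easy to sketch in code (everything here is hypothetical; the `fitness` function stands in for whatever mouse intelligence test you trust):

```python
import random

random.seed(0)

def fitness(genome):
    # Stand-in for "measure the mouse's intelligence":
    # here, just the sum of its numeric traits.
    return sum(genome)

# Start with 100 random "mice", each a list of 10 numeric traits.
population = [[random.random() for _ in range(10)] for _ in range(100)]

for generation in range(50):
    # Test them all and keep the two fittest...
    parents = sorted(population, key=fitness, reverse=True)[:2]
    # ...then breed 100 offspring by per-trait crossover plus a little mutation.
    population = [
        [random.choice(pair) + random.gauss(0, 0.01)
         for pair in zip(*parents)]
        for _ in range(100)
    ]

best = max(fitness(g) for g in population)
print(f"best fitness after 50 generations: {best:.2f}")
```

The point of the sketch is that the measurement step (the `fitness` function) is the whole game; the breeding loop itself is trivial, which is exactly why the speed limit is how fast you can test, not how fast you can breed.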

Bill Gates on 2023

Bill Gates’s year-end retrospective is kind of rambling but here are a few points I pulled out:

  1. Lots more vaccines were administered to children in developing countries using new technologies and new delivery methods. This has made a big difference in child mortality, and that is always a happy thing. He doesn’t really go into details on the new technologies, but I am imagining things like nasal sprays rather than needles, and vaccines that don’t require refrigeration, or not as much. And sometimes we just figure out how to make familiar things much cheaper, and this can make a huge difference. Which would illustrate that important technologies don’t have to seem extremely complicated and high-tech to have a big impact.
  2. On the AI front, he says it will accelerate drug development, including solutions for antibiotic resistance. I don’t doubt this, although I suspect the hype has gotten a bit ahead of the rollout. So I would look for this over the next half-decade or so rather than expecting it to burst on the scene in 2024. Bill actually predicts “18–24 months away from significant levels of AI use by the general population”.
  3. He talks about AI tutors for students. I don’t want to be a Luddite, but I am concerned this will just mean fewer teachers per student, which will be bad.
  4. Maybe AI can just get our medical records under control. This would be nice. Transparent, common protocols for how medical records should be formatted, stored, and shared could also do this, though. I can hope that someday robots will constantly clean up and organize my messy house as I just throw my things everywhere, or I could organize my house (which would take a big effort once) and keep it that way (which would take small, disciplined daily efforts).
  5. Gut microbiome-based medicine. Sounds good, I guess. Then again, whenever we try to replace nutritious whole foods with highly manufactured alternatives (vitamin pills, baby formula) we tend to decide later that we should have stuck with the whole foods.
  6. “a major shift toward overall acceptance of nuclear” power. Well, it’s been pretty obvious to me for a long time that this had to happen, but maybe the world is catching up. Nuclear could certainly have been the bridge fuel to renewables if we had fully adopted it decades ago. The question now is whether it still makes sense, given its incredibly long time frames to get up and running, the fact that any technology is obsolete by the time it is up and running, and the current pace of renewables. I definitely think we should put some eggs in this basket though.
  7. He mentions the fusion breakthrough at Lawrence Livermore about a year ago. It’s been a year and we haven’t heard much more – is the time to refine and rollout that technology going to be measured in years, decades, or never?
  8. He talks about the need for more investment in electric grids and transmission lines. Yes, this is unsexy but really needs to happen. Will it?

Q (the AI)

“Q star” is very badly named, in my view, given the “Q anon” craze it has absolutely nothing to do with. Then again, the idea of an AI building an online cult with human followers does not seem all that far fetched.

Anyway, Gizmodo has an interesting article. Gizmodo does not restrict itself to traditional journalistic practices, such as articles free of profanity.

Some have speculated that the program might (because of its name) have something to do with Q-learning, a form of machine learning. So, yeah, what is Q-learning, and how might it apply to OpenAI’s secretive program? …

Finally, there’s reinforced learning, or RL, which is a category of ML that incentivizes an AI program to achieve a goal within a specific environment. Q-learning is a subcategory of reinforced learning. In RL, researchers treat AI agents sort of like a dog that they’re trying to train. Programs are “rewarded” if they take certain actions to affect certain outcomes and are penalized if they take others. In this way, the program is effectively “trained” to seek the most optimized outcome in a given situation. In Q-learning, the agent apparently works through trial and error to find the best way to go about achieving a goal it’s been programmed to pursue.

What does this all have to do with OpenAI’s supposed “math” breakthrough? One could speculate that the program that managed (allegedly) to do simple math operations may have arrived at that ability via some form of Q-related RL. All of this said, many experts are somewhat skeptical as to whether AI programs can actually do math problems yet. Others seem to think that, even if an AI could accomplish such goals, it wouldn’t necessarily translate to broader AGI breakthroughs.

Gizmodo
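To make the quoted description a little more concrete, here is a minimal tabular Q-learning loop of my own (a toy corridor world, nothing to do with whatever OpenAI may or may not have built):

```python
import random

random.seed(1)

# Toy environment: a 1-D corridor of 5 cells; start at cell 0, reward at cell 4.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # step left or step right

alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration rate
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(200):
    state = 0
    while state != GOAL:
        # Epsilon-greedy: mostly exploit the current estimates, occasionally explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else 0.0
        # The Q-learning update: nudge toward reward plus discounted best future value.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# After training, the greedy policy should step right (+1) from every cell.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)}
print(policy)
```

The single update line is essentially the whole algorithm; the open question in all the speculation above is how far that trial-and-error recipe can be pushed beyond toy corridors toward things like math reasoning.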

My sense is that AI breakthroughs are certainly happening. At the same time, I suspect the commercial hype has gotten ahead of the technology, just like it did for every previous technology from self-driving cars to virtual reality to augmented reality. Every one of these technologies reached a fever pitch where companies were racing to roll out products to consumers ahead of competitors. Because they rush, the consumer applications don’t quite live up to the hype, the hype bubble bursts, and then the technology seems to disappear for a few years. Of course, it doesn’t disappear at all, but rather disappears from headlines and advertisements for a while. Behind the scenes, it continues to progress and then slowly seeps back into our lives. As the real commercial applications arrive and take over our daily lives, we tend to shrug.

So I would keep an eye out on the street for the technologies whose hype bubbles burst a handful of years ago, and I would expect the current AI hype to follow a similar trend. Look for the true AI takeover in the late 2020s (if I remember correctly, close to when Ray Kurzweil predicted it 30-odd years ago???)