Okay, so I got around to reading The Singularity is Nearer by Ray Kurzweil. It is worth a read. It is not really a rewrite or update of The Singularity is Near, and I would still recommend reading that 2005 book first to anyone. I suppose reading it now would require a bit more of a historical lens, since some events have already come to pass and others may not be on the track he envisioned (although surprisingly close considering how much time has passed!). This new book feels a bit more thrown together, and yet it is very interesting. My bullets below are not a summary or review of the book – I’m sure you can find that elsewhere on the internet if you prefer that to reading (or don’t have time to read) the book yourself. These are just a few things I found interesting or surprising enough to bookmark as I was reading.
- Okay, well actually this is a pretty good four-sentence summary:
Eventually nanotechnology will enable these trends to culminate in directly expanding our brains with layers of virtual neurons in the cloud. In this way we will merge with AI and augment ourselves with millions of times the computational power that our biology gave us. This will expand our intelligence and consciousness so profoundly that it’s difficult to comprehend. This is what I mean by the Singularity.
In the past I’ve heard him say 1,000x. Which is it, Ray??? Well, it doesn’t matter, does it, because we can’t comprehend the difference between 1,000 and 1 million. I can’t comprehend being twice as smart as I am, in fact. Einstein supposedly had an IQ of about 160, so he was only 60% smarter than me – if IQ were an adequate measure of anything.
- A random factoid, quoting Steven Pinker I believe: violent death in hunter-gatherer societies has been estimated at around 500 per 100,000 people per year. War death rates in Germany, Japan, and Russia during the 20th century were more on the order of 100. It’s not clear to me, though, whether this was the annual rate during World War II or an average over the entire 20th century, which would seem disingenuous. But his point is that even though violence may seem high to us, civilization has achieved massive reductions compared to historical times.
- While the cost of solar panels has declined exponentially, the cost of permitting and installation has not fallen as fast, if at all. This seems to support some of my recent musings that human institutions and sociopolitics act as a brake on the implementation of new technology.
- He’s very bullish on democracy as promoting the spread of peace. This is a nice idea, but I can’t help noting that two of the world’s nominal democracies (the US and Israel) seem to be some of the most violent actors in the world at the moment, while many autocratic countries (thinking of Middle Eastern hereditary dictatorships in particular) seem to be the more restrained and logical parties. But of course, it may just be that a violent sociopath in charge of one of those autocracies would be much worse than the violent sociopaths being only partially restrained by the struggling democratic institutions in the US and Israel. This moment will eventually pass (barring a civilization-ending nuclear exchange), whereas the dictatorship in North Korea, to cite one example, seems likely to endure.
- He argues that GDP growth statistics do not account for improved quality of life due to technology. The internet is a simple case – much of it is free to users (a consequence of the near-zero marginal cost of providing it) and therefore adds little to GDP, and yet we clearly value it highly, and in the past we could not even have conceived that there would be such a thing to value.
- He has high hopes for two agricultural technologies, cultured meat and vertical farming, to eventually solve both our food supply problems and most of the environmental problems currently caused by agriculture. He sees AI accelerating advances in materials science and clean energy that will “turn all technology into information technology” and make it very inexpensive. It’s a nice vision, and my instinct is that we are taking baby steps in that direction, but it will be a long time before it happens, if it happens at all. But this is Kurzweil: he sees massive acceleration in progress over the next decade, while my instinct is based on my past experience of the linear part of the curve.
- 3D printing is another technology he is bullish on. Basically, he sees the trend as being toward decentralized production of almost everything from energy, water, and food to manufactured goods.
- He sees simulation as the key to massive acceleration in medicine. Basically, the idea is that if AI can develop very good digital models of human bodies and brains, then AI can run massive simulated drug trials in minutes or days that would take years or decades in human subjects. Here you can certainly imagine the human regulatory framework slowing things down, and that is probably for the best, but over time it may be shown that digital trials are as accurate as real-life trials, and resistance will eventually break down.
- Now for the weird existential stuff. First, everyone should reread Altered Carbon (my suggestion, not mentioned by Kurzweil). We have probably all had the thought that when we wake up from sleep, we feel a continuity in our consciousness from the day before, but how can we really be sure that we are the same person? A perfect copy of my mind downloaded into a perfect copy of my body, or into a perfect simulation of my body and its surrounding environment, would have the identical experience. So in a sense, I can live forever in this situation. Kurzweil says it is a philosophical question, not a scientific one, whether I would still be myself. This makes a lot of people uncomfortable, including me. But there is another case, where I connect my biological brain to the cloud and gradually extend my consciousness. I do not lose the biological part of my mind in this case, but gradually it becomes a smaller and smaller part of the whole, until I might decide at some point to leave it behind. This is still an uncomfortable thought, but less uncomfortable than the previous one. There are many, many philosophical, moral, and practical socioeconomic questions to unpack here, of course – more than I can even dip a toe into at the moment. (Inheritance? Will anyone want to have or raise babies? If someone kills the biological me but not the digital me, is that murder? Most people would say yes to that last one. Etc.)
- He sees solutions to out-of-control biological threats (e.g., vaccines) and out-of-control nanotechnology (e.g., controlling nanobots by “broadcasting” from a central location, with a kill switch). AI can play a role in all of this. But he sees AI itself as the biggest threat, with no sure-fire way to control it, although there are promising ideas for mitigating the risks.
The 2030s and 2040s certainly sound like interesting times, barring any sort of tragic disaster between now and then. At the moment, we need to focus on not derailing our civilization and species to the point that we don’t get to find out.
