effective altruism and existential risk

In my recent “effective altruism” post, I promised to follow up on the idea of existential risk. Basically, there is a debate, bogus in my opinion, about whether an altruist should try to reduce suffering today or make sure humanity has a long-term future. The argument goes that the 8 billion or so human souls alive now are only a tiny fraction of the potential trillions that could ultimately exist, so making sure we don’t wipe ourselves out should be the top priority. I call this bogus because we have the intelligence, ingenuity, and energy to work on all of these problems. Just divide and conquer, as they say.

Anyway, here is another existential risk article. Nuclear weapons have been, and probably still are, the largest existential threat. But biological weapons and mishaps may now present, or soon will present, an equal or greater threat (which doesn’t make nuclear weapons any better, it doubles the threat). Artificial intelligence is an unknown unknown, a potentially catastrophic medium-term threat, with opinions on the timeline ranging from years to decades to never. Nanotechnology is a theoretical long-term threat.

My take is that we need to get on top of the biological threat fast, with some kind of treaty and inspection regime we can all live with. In the US, we need a damn health care system. And we should at least get the nuclear threat back to where it was. Courageous political leadership could make these a priority.
