
Stephen Hawking: escape the planet in 100 years

From The Independent:

In “Expedition New Earth” – a documentary that debuts this summer as part of the BBC’s “Tomorrow’s World” science season – Hawking claims that Mother Earth would greatly appreciate it if we could gather our belongings and get out – not in 1,000 years, but in the next century or so…

“Professor Stephen Hawking thinks the human species will have to populate a new planet within 100 years if it is to survive,” the BBC said with a notable absence of punctuation marks in a statement posted online. “With climate change, overdue asteroid strikes, epidemics and population growth, our own planet is increasingly precarious…”

The BBC program gives Hawking a chance to wade into the evolving science and technology that may become crucial if humans hatch a plan to escape Earth and find a way to survive on another planet – from questions about biology and astronomy to rocket technology and human hibernation, the BBC notes.

Getting a colony started on Mars, Earth’s moon, or another moon in the nearby solar system within a hundred years doesn’t seem all that daunting to me. Whether that colony could become truly self-sufficient – independent of Earth – in that time frame is the real question. That seems like a tall order, considering how much our current civilization depends on this planet’s natural gifts to get by. Our technology would have to improve a lot.

Dyson, Feynman, Hawking… Carson?

This has me thinking back to my recent post about Freeman Dyson – a brilliant physicist who has suggested solutions to problems in biology that biologists refuse to take seriously.

Here is what Richard Feynman has to say about scientists trying to solve puzzles outside their fields:

I believe that a scientist looking at nonscientific problems is just as dumb as the next guy — and when he talks about a nonscientific matter, he will sound as naive as anyone untrained in the matter…

In this age of specialization men who thoroughly know one field are often incompetent to discuss another. The great problems of the relations between one and another aspect of human activity have for this reason been discussed less and less in public. When we look at the past great debates on these subjects we feel jealous of those times, for we should have liked the excitement of such argument. The old problems, such as the relation of science and religion, are still with us, and I believe present as difficult dilemmas as ever, but they are not often publicly discussed because of the limitations of specialization.

Maybe, but is the solution then for everyone to specialize, accept the blinders that specialization causes, and never look beyond them? That can’t be right. The solution has to be for everyone to be trained in a comprehensive, general theory of system science. Then some people remain generalists, while others go on to specialize in a particular type or locality within that larger system theory. Then we would all have a common language and framework for talking to each other.

Take the case of Ben Carson, the “neuroscientist who can’t think”:

When Trump, an alumnus of the University of Pennsylvania’s Wharton School, says that climate change is a hoax, I can believe it’s a cynical lie pandering to the Republican base, rather than an index of his ignorance.  But when Carson, a retired Johns Hopkins neurosurgeon, denies that climate change is man-made, or calls the Big Bang a fairy tale, or blames gun control for the extent of the Holocaust, I think he truly believes it.

It’s conceivable that the exceptional hand-eye coordination and 3D vision that enabled Carson to separate conjoined twins is a compartmentalized gift, wholly independent of his intellectual acuity. But he could not have risen to the top of his profession without learning the Second Law of Thermodynamics (pre-meds have to take physics), without knowing that life on earth began more than 6,000 years ago (pre-meds have to take biology), without understanding the scientific method (an author of more than 120 articles in peer-reviewed journals can’t make up his own rules of evidence).  Yet what does it mean to learn such things, if they don’t stop you from spouting scientific nonsense? …

What I don’t get is how his rigorous scientific education and professional training gave Carson’s blind spots a pass.

The Feynman quote appears in a Forbes article trying to refute Stephen Hawking’s warnings about technological unemployment. From the Forbes article:

…the rise of the robots cannot possibly make us any poorer than we are now. And that’s in the very worst case: the worst that can possibly happen is that some other people become richer and we get to jog along much as we do now. That’s also the result that is vanishingly unlikely to actually happen. What is far more likely to happen is that we all, jointly, become vastly wealthier…

We have some mixture of human labour and machinery, automation, which produces the things that we consume today. Further, the only useful definition of income is what we’re able to consume. We’re not really interested in whether people have jobs or not, we also don’t care very much about income as income. The root point that we do care about is that people are able to consume things. Shelter, clothing, food, health care, the real point is that people get to eat, sleep under a roof, not be naked (except, of course, when that’s more fun), get treated for what ails them (possibly the result of that fun) and so on. Or, as Adam Smith said, the sole purpose of any production is consumption. It is only consumption, the ability to consume, which is the issue of any importance.

Well, I have a big philosophical problem with the idea that the purpose of life is consumption. What about love, art, achievement, leisure? But let’s stick to science and economics. I don’t have to be Stephen Hawking or even Richard Feynman to offer some easy counter-examples.

First, if we “produce” more, as measured in dollars changing hands, we can easily be degrading things that aren’t easily measured in dollars – the atmosphere, forests, and oceans, for example. Eventually, the loss of these ecosystems could bring our civilization to its knees, making us very poor indeed in material terms, no matter how many dollars we thought we had.

That’s a little theoretical, but for recent and obvious cases of technological unemployment, look at the displacement of agricultural workers in the southern U.S. in the early to mid-20th century, and the continuing poverty, ill health, and social problems of their descendants today. Or, if you think racism was a larger factor there than economics (I think the two are overlapping and intertwined in many ways), look at the factory workers in Appalachia, both black and white, who were displaced by lower-cost labor overseas. Again, their descendants are beset by widespread poverty, health problems, and social problems that show no sign of getting better any time soon.

So clearly, technological unemployment causes real poverty and suffering for some people, in some places, at some times. The difference between these past examples and the AI future may be that it affects most people, most places, all the time – unless we find political solutions to spread the wealth.

And here is Stephen Hawking on exactly that subject in his recent “ask me anything” session:

The outcome will depend on how things are distributed. Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution. So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality.

Now, I’m not a famous physicist or even a brain surgeon, but that sounds about right to me.