Category Archives: Peer Reviewed Article Review

not time to worry about Trump

I’ve been asking whether Trump is using a fascist playbook and whether we should be worried, but here are a couple of articles that say no, it is not time to worry. Both point out that the share of media coverage a candidate gets has little to do with the proportion of voters who actually support them. Alternet says that more likely voters support Bernie Sanders than Donald Trump, even though the latter gets 23 times more media attention:

The Tyndall Report, which tracks coverage on nightly network newscasts, found that Trump has hogged more than a quarter of all presidential race coverage — and more than the entire Democratic field combined.

Hillary Clinton — who enjoys the most voter support, by far, of any candidate in either party — had received the second-most network news coverage.

Sanders, who is supported by more voters than Trump, has received just 10 minutes of network airtime throughout the entire campaign — which translates to 1/23 of Trump’s campaign coverage.

Nate Silver tries to break it down some more. One interesting data analysis he has done shows that at this point in the last two election cycles, only 8–16% of all the primary-related Google searches that would eventually be made had actually been made. So let’s hope that means only the crazies on the fringe are paying attention at this point.

more on Robert Paxton

Recently I was musing about the U.S., Donald Trump, and fascism. I suggested that the U.S. has a fairly rigid social order favoring and enforced by traditional political, bureaucratic, business and professional elites. We also have a grassroots movement based on rhetoric of national, religious, and to some extent racial unity, and fearful of outsiders. Here is Robert Paxton’s definition of fascism in his 1998 paper The Five Stages of Fascism:

Fascism is a system of political authority and social order intended to reinforce the unity, energy, and purity of communities in which liberal democracy stands accused of producing division and decline.

So fascism is not just a social order cynically maintained by and for the interests of traditional elites. It is somewhat the opposite – a grassroots movement based on a myth of national, religious or racial purity and unity. The grassroots believe the rhetoric while the elites probably do not, but both are interested in maintaining the social order. The danger arises when the traditional elites cynically choose to join forces with the grassroots fascists, because they do not feel strong enough to maintain the existing social order on their own. Together, the two groups are strong enough to come to power where neither could on its own, but once in power the traditional elites may lose control, particularly under war or crisis conditions.

So ironically, it is at a moment when liberal elements in society are making some progress against the entrenched elites that we may be most vulnerable to a right-wing grassroots movement arising. The Tea Party’s anti-immigrant and anti-Muslim rhetoric, and Donald Trump’s attempts to use that rhetoric to rise to power, would seem to fit the bill. It is ironic that a modern American fascism would use rhetoric of freedom and democracy to undermine freedom and democracy, but that is our national unifying myth so it makes some sense.

cyberattacks and superflares

Need some new things to worry about? Look no further!

1. a catastrophic cyberattack on the U.S. electric infrastructure

In this New York Times bestselling investigation, Ted Koppel reveals that a major cyberattack on America’s power grid is not only possible but likely, that it would be devastating, and that the United States is shockingly unprepared.
 
Imagine a blackout lasting not days, but weeks or months. Tens of millions of people over several states are affected. For those without access to a generator, there is no running water, no sewage, no refrigeration or light. Food and medical supplies are dwindling. Devices we rely on have gone dark. Banks no longer function, looting is widespread, and law and order are being tested as never before.

It isn’t just a scenario. A well-designed attack on just one of the nation’s three electric power grids could cripple much of our infrastructure—and in the age of cyberwarfare, a laptop has become the only necessary weapon. Several nations hostile to the United States could launch such an assault at any time. In fact, as a former chief scientist of the NSA reveals, China and Russia have already penetrated the grid. And a cybersecurity advisor to President Obama believes that independent actors—from “hacktivists” to terrorists—have the capability as well. “It’s not a question of if,” says Centcom Commander General Lloyd Austin, “it’s a question of when.”

2. in case people are not enough to worry about, the Sun could turn on us.

Astrophysicists have discovered a stellar “superflare” on a star observed by NASA’s Kepler space telescope with wave patterns similar to those that have been observed in the Sun’s solar flares. (Superflares are flares that are thousands of times more powerful than those ever recorded on the Sun, and are frequently observed on some stars.)

The scientists found the evidence in the star KIC9655129 in the Milky Way. They suggest there are similarities between the superflare on KIC9655129 and the Sun’s solar flares, so the underlying physics of the flares might be the same…

Typical solar flares can have energies equivalent to 100 million megaton bombs, but a superflare on our Sun could release energy equivalent to a billion megaton bombs.

Low-cost solution to the grid reliability problem

I have heard from know-it-alls that the problem with renewable energy is that it is intermittent and hard to store. I have always thought there are many ways to deal with that – charge a battery, pump water uphill, heat something, wind a spring, compress air, electrolyze water into hydrogen and run it back through a fuel cell. Those are my thoughts with absolutely no expertise at all, but luckily the experts are thinking about this too:

Mark Z. Jacobson, Mark A. Delucchi, Mary A. Cameron, and Bethany A. Frew. A low-cost solution to the grid reliability problem with 100% penetration of intermittent wind, water, and solar for all purposes. PNAS 2015; DOI: 10.1073/pnas.1510028112, 2015

This study addresses the greatest concern facing the large-scale integration of wind, water, and solar (WWS) into a power grid: the high cost of avoiding load loss caused by WWS variability and uncertainty. It uses a new grid integration model and finds low-cost, no-load-loss, nonunique solutions to this problem on electrification of all US energy sectors (electricity, transportation, heating/cooling, and industry) while accounting for wind and solar time series data from a 3D global weather model that simulates extreme events and competition among wind turbines for available kinetic energy. Solutions are obtained by prioritizing storage for heat (in soil and water); cold (in ice and water); and electricity (in phase-change materials, pumped hydro, hydropower, and hydrogen), and using demand response. No natural gas, biofuels, nuclear power, or stationary batteries are needed. The resulting 2050–2055 US electricity social cost for a full system is much less than for fossil fuels. These results hold for many conditions, suggesting that low-cost, reliable 100% WWS systems should work many places worldwide.
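To make the storage idea concrete, here is a toy dispatch loop of my own – all numbers invented, and nothing like the weather-driven time series the Jacobson paper actually uses. Whether the medium is a battery, a reservoir, or compressed air, the accounting is the same: bank surplus energy, draw it down later, lose some in the round trip.

```python
# Toy dispatch model for one generic storage unit. All numbers are made up
# for illustration only.

def dispatch(generation, demand, capacity=10.0, efficiency=0.8):
    """Greedy charge/discharge loop; returns total unserved demand."""
    stored = 0.0
    unserved = 0.0
    for gen, load in zip(generation, demand):
        surplus = gen - load
        if surplus >= 0:
            # Charge, applying round-trip losses on the way in.
            stored = min(capacity, stored + surplus * efficiency)
        else:
            draw = min(stored, -surplus)   # discharge what we have
            stored -= draw
            unserved += -surplus - draw    # demand nothing could cover
    return unserved

# A windy night followed by a calm day: storage carries energy across the gap.
print(dispatch(generation=[5, 5, 5, 0, 0, 0], demand=[2, 2, 2, 3, 3, 3]))
```

Even in this crude version you can see the real design questions emerge: how big the store needs to be, and how much the round-trip losses cost you.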

Peter Checkland

Peter Checkland is another systems thinker whom I have just discovered. Apparently he is well known, but I find that systems thinkers are buried in a variety of disciplines – in this case, management – and I wasn’t looking there.

This is from a 2000 journal article, Soft Systems Methodology: A Thirty Year Retrospective:

Although the history of thought reveals a number of holistic thinkers — Aristotle, Marx, Husserl among them — it was only in the 1950s that any version of holistic thinking became institutionalized. The kind of holistic thinking which then came to the fore, and was the concern of a newly created organization, was that which makes explicit use of the concept of ‘system’, and today it is ‘systems thinking’ in its various forms which would be taken to be the very paradigm of thinking holistically. In 1954, as recounted in Chapter 3 of Systems Thinking, Systems Practice, only one kind of systems thinking was on the table: the development of a mathematically expressed general theory of systems. It was supposed that this would provide a meta-level language and theory in which the problems of many different disciplines could be expressed and solved; and it was hoped that doing this would help to promote the unity of science.

These were the aspirations of the pioneers, but looking back from 1999 we can see that the project has not succeeded. The literature contains very little of the kind of outcomes anticipated by the founders of the Society for General Systems Research; and scholars in the many subject areas to which a holistic approach is relevant have been understandably reluctant to see their pet subject as simply one more example of some broader ‘general system’!

But the fact that general systems theory (GST) has failed in its application does not mean that systems thinking itself has failed. It has in fact flourished in several different ways which were not anticipated in 1954. There has been development of systems ideas as such, development of the use of systems ideas in particular subject areas, and combinations of the two. The development in the 1970s by Maturana and Varela (1980) of the concept of a system whose elements generate the system itself provided a way of capturing the essence of an autonomous living system without resorting to use of an observer’s notions of ‘purpose’, ‘goal’, ‘information processing’ or ‘function’. (This contrasts with the theory in Miller’s Living Systems (1978), which provides a general model of a living entity expressed in the language of an observer, so that what makes the entity autonomous is not central to the theory.) This provides a good example of the further development of systems ideas as such. The rethinking, by Chorley and Kennedy (1971), of physical geography as the study of the dynamics of systems of four kinds, is an example of the use of systems thinking to illuminate a particular subject area.

It’s sad to me to see his contention that general systems theory has failed. It should be a central, foundational body of knowledge that people are trained in before they apply their focus to narrower fields. I have said many times that this would give a wider variety of intelligent people a shared body of knowledge, a shared vocabulary, and respect for each other’s pursuits, and might accelerate the pace of innovation.

Watson vs. Shalmaneser

A class at Georgia Tech did an experiment where artificial intelligence (“Watson”) was used to “enhance human creativity”. It sounds like a cool class:

Following research on computational creativity in our Design & Intelligence Laboratory (http://dilab.gatech.edu), most readings and discussions in the class focused on six themes: (1) Design Thinking is thinking about ill-structured, open-ended problems with ill-defined goals and evaluation criteria; (2) Analogical Thinking is thinking about novel situations in terms of similar, familiar situations; (3) Meta-Thinking is thinking about one’s own knowledge and thinking; (4) Abductive Thinking is thinking about potential explanations for a set of data; (5) Visual Thinking is thinking about images and in images; and (6) Systems Thinking is thinking about complex phenomena consisting of multiple interacting components and causal processes. Further, following the research in the Design & Intelligence Laboratory, the two major creative domains of discussion in the class were (i) Engineering design and invention, and (ii) Scientific modeling and discovery. The class website provides details about the course (http://www.cc.gatech.edu/classes/AY2015/cs8803_spring).

Here’s how they actually went about using the computer:

The general design process followed by the 6 design teams for using Watson to support biologically inspired design may be decomposed into two phases: an initial learning phase and a latter open-ended research phase. The initial learning phase proceeded roughly as follows. (1) The 6 teams selected a case study of biologically inspired design of their choice from a digital library called DSL (Goel et al. 2015). For each team, the selected case study became the use case. (2) The teams started seeding Watson with articles selected from a collection of around 200 biology articles derived from Biologue. Biologue is an interactive system for retrieving biology articles relevant to a design query (Vattam & Goel 2013). (3) The teams generated about 600 questions relevant to their use cases. (4) The teams identified the best answers in their 200 biology articles for the 600 questions. (5) The teams trained Watson on the 600 question-answer pairs. (6) The 6 teams evaluated Watson for answering design questions related to their respective use cases.
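Watson’s internals are proprietary, so this is only a caricature of the six-step pipeline above: a naive word-overlap retriever, with invented passages and question-answer pairs standing in for the Biologue-derived articles and the teams’ 600 pairs. The point is just the shape of the workflow – seed with articles, write question-answer pairs, then evaluate.

```python
# A toy stand-in for the seed/train/evaluate loop described above. The
# "articles" and question-answer pairs below are invented for illustration.

def tokenize(text):
    return set(text.lower().split())

def best_passage(question, passages):
    """Return the passage sharing the most words with the question."""
    q = tokenize(question)
    return max(passages, key=lambda p: len(q & tokenize(p)))

# Toy passages standing in for the ~200 biology articles.
passages = [
    "gecko feet adhere to surfaces using van der waals forces",
    "lotus leaves shed water because of microscopic surface bumps",
    "termite mounds maintain temperature through passive ventilation",
]

# Toy question-answer pairs standing in for the teams' 600 pairs.
qa_pairs = [
    ("how do gecko feet stick to walls", passages[0]),
    ("why does water roll off lotus leaves", passages[1]),
    ("how do termite mounds stay cool", passages[2]),
]

# Evaluation: fraction of questions whose top passage is the labeled answer.
accuracy = sum(best_passage(q, passages) == a for q, a in qa_pairs) / len(qa_pairs)
print(accuracy)
```

A real system does vastly more than word overlap, of course, but even this sketch shows why the question-answer pairs matter: they are the only ground truth the evaluation step has.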

The value of the computer seems to be in helping the humans sort through and screen an enormous amount of literature in a short time that could otherwise take years to go through. This could theoretically accelerate progress by allowing us to make connections that otherwise could not be made. There are going to be some brilliant ideas out there stuck in a dead end, where they never got to the people who can use them. And there are going to be many more brilliant ideas that emerge only when older ideas are connected.

These students seem to have restricted themselves to a research database in one field (biology). But I think it could be very valuable to cross disciplinary boundaries and look for analogous ideas – let’s say, in thermodynamics, ecology, and economics. Or sociology and animal behavior. These are boundaries that have been crossed by just a few visionary people, but are often ignored by everyone else. If making connections were more of a standard practice, many more brilliant ideas would escape the information cul-de-sacs.

This reminded me of the novel Stand on Zanzibar, where “synthesist” is a job. The world is not doing so well, and governments are seeking out unconventional thinkers to synthesize knowledge across multiple fields and come up with new solutions. There is also an artificial intelligence in the book as I recall, but I don’t remember it being involved in the synthesis. I don’t have a copy of the book, and this particular piece of human knowledge and creativity is walled off from me by “intellectual property” law, so I can’t benefit from it or connect it to anything else right now.

on leadership…

It seems to be out of fashion, but I always find it interesting when people try to draw social parallels between people and animals. This reminds me of E.O. Wilson’s Sociobiology, which spends hundreds of pages on ants and termites; after I worked my way through it, I actually felt more of an affinity for these creatures and the complex mini-civilizations they have built.

Leadership in Mammalian Societies: Emergence, Distribution, Power, and Payoff

Leadership is an active area of research in both the biological and social sciences. This review provides a transdisciplinary synthesis of biological and social-science views of leadership from an evolutionary perspective, and examines patterns of leadership in a set of small-scale human and non-human mammalian societies. We review empirical and theoretical work on leadership in four domains: movement, food acquisition, within-group conflict mediation, and between-group interactions. We categorize patterns of variation in leadership in five dimensions: distribution (across individuals), emergence (achieved versus inherited), power, relative payoff to leadership, and generality (across domains). We find that human leadership exhibits commonalities with and differences from the broader mammalian pattern, raising interesting theoretical and empirical issues.

 

Costanza!

Last year Robert Costanza published an update to his seminal 1997 paper, The Value of the World’s Ecosystem Services and Natural Capital. Here’s what I had to say about that in 2014:

The paradox is that because nature has so far provided many services in abundance, they are not “scarce” in an economic sense and our human markets place little or no monetary value on them. This would change in the event our human civilization caused the services to be reduced or interrupted in any way. While it may seem strange to value ecosystem services in monetary terms, it can be instructive to ask what we would be willing to pay if we had no choice but to pay for these services. There are many conceptual and practical challenges with this sort of monetary valuation, but there have been some brave attempts to do it, such as those led by Robert Costanza at the Australian National University.[9] By comparing the magnitude of what we would be willing to pay for these services to the magnitude of the human economy, we can get a sense of the importance of ecosystem services in underpinning our human economy. Costanza’s estimate of the annual value of global ecosystem services ($33 trillion in 1997 U.S. dollars) is the same order of magnitude as the world output of goods and services in that year (approximately $29 trillion[10])! While the estimated value of ecosystem services is certainly less precisely measured than the monetary value of goods and services produced, the order of magnitude suggests that humanity could not afford to substitute its own technology and efforts in place of the services provided by ecosystems, at least not with the wealth and knowledge available to us now.

The new paper is called Changes in the Global Value of Ecosystem Services. Here’s the abstract:

In 1997, the global value of ecosystem services was estimated to average $33 trillion/yr in 1995 $US ($46 trillion/yr in 2007 $US). In this paper, we provide an updated estimate based on updated unit ecosystem service values and land use change estimates between 1997 and 2011. We also address some of the critiques of the 1997 paper. Using the same methods as in the 1997 paper but with updated data, the estimate for the total global ecosystem services in 2011 is $125 trillion/yr (assuming updated unit values and changes to biome areas) and $145 trillion/yr (assuming only unit values changed), both in 2007 $US. From this we estimated the loss of eco-services from 1997 to 2011 due to land use change at $4.3–20.2 trillion/yr, depending on which unit values are used. Global estimates expressed in monetary accounting units, such as this, are useful to highlight the magnitude of eco-services, but have no specific decision-making context. However, the underlying data and models can be applied at multiple scales to assess changes resulting from various scenarios and policies. We emphasize that valuation of ecoservices (in whatever units) is not the same as commodification or privatization. Many eco-services are best considered public goods or common pool resources, so conventional markets are often not the best institutional frameworks to manage them. However, these services must be (and are being) valued, and we need new, common asset institutions to better take these values into account.

So: $125 trillion per year in value, while the IMF says last year’s world economy was about $77 trillion. This is important for a few reasons. First, it strengthens even further the argument that ecosystem services are not just something happening on the fringe of our economy that gives us a helping hand. They are absolutely essential, and we could not afford to do without them. Second, if I understand correctly, the annual value we can derive per unit area of land is lower because of degradation of the land since 1997. Every year we are using up $4.3–20.2 trillion that the Earth is not able to replenish. That value is hard to put in context, because we don’t know what the total stock is, or how low that stock could fall before it would start to constrain our economy.
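The comparison is simple enough to check on the back of an envelope. The figures below come from the quoted abstract and the IMF number above; dollar-years are mixed (2007 vs. current dollars), so treat the ratios as rough orders of magnitude, not precise results.

```python
# Back-of-envelope versions of the comparisons above, using the figures
# quoted in the post. Dollar-years are mixed, so ratios are approximate.

eco_services = 125e12   # Costanza et al. global eco-services estimate, $/yr
world_gdp    = 77e12    # approximate IMF world output, $/yr

print(eco_services / world_gdp)   # ecosystem services ~1.6x world output

loss_low, loss_high = 4.3e12, 20.2e12   # estimated annual loss, land use change
print(loss_low / world_gdp, loss_high / world_gdp)   # roughly 6% to 26% of GDP
```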

There’s another implication – if we could develop a precise accounting of the natural capital being used up each year, we could orient our economy to shift more of those costs to the people, governments, and business entities choosing to impose those costs on the rest of us. Carbon taxes are a fairly obvious first step.

 

credit, interest, and a steady state economy

This article in Ecological Economics argues that a positive interest rate and a no-growth economy can coexist.

Does credit create a ‘growth imperative’? A quasi-stationary economy with interest-bearing debt

This paper addresses the question of whether a capitalist economy can ever sustain a ‘stationary’ (or non-growing) state, or whether, as often claimed, capitalism has an inherent ‘growth imperative’ arising from the charging of interest on debt. We outline the development of a dedicated system dynamics macro-economic model for describing Financial Assets and Liabilities in a Stock-Flow consistent Framework (FALSTAFF) and use this model to explore the potential for stationary state outcomes in an economy with balanced trade, credit creation by banks, and private equity. Contrary to claims in the literature, we find that neither credit creation nor the charging of interest on debt creates a ‘growth imperative’ in and of themselves. This finding remains true even when capital adequacy and liquidity requirements are imposed on banks. We test the robustness of our results in the face of random variations and one-off shocks. We show further that it is possible to move from a growth path towards a stationary state without either crashing the economy or dismantling the system. Nonetheless, there remain several good reasons to support the reform of the monetary system. Our model also supports critiques of austerity and underlines the value of countercyclical spending by government.
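The core claim is easy to illustrate with a toy stock-flow loop of my own – this is not the FALSTAFF model, just the simplest possible caricature of its no-growth-imperative result, with all numbers invented. The key condition is that the bank’s interest income flows back into the economy rather than being hoarded.

```python
# A minimal stock-flow sketch: charging interest on debt does not by itself
# force growth, provided the bank spends its interest income back into the
# economy. This is my own toy, not the FALSTAFF model.

def simulate(periods=50, debt=100.0, deposits=100.0, rate=0.05, spend_share=1.0):
    """Households owe `debt` and hold `deposits`; the bank recycles its income."""
    for _ in range(periods):
        interest = rate * debt               # interest due each period
        deposits -= interest                 # households pay the bank
        deposits += spend_share * interest   # bank spends wages/dividends back
        # Principal is neither expanded nor repaid: a stationary state.
    return deposits

# If the bank recycles all its income, the money stock is constant forever:
print(simulate())                 # 100.0 -- no growth imperative here

# Trouble appears only if the bank hoards part of its income:
print(simulate(spend_share=0.5))  # deposits drain away period after period
```

In other words, the “growth imperative” in this toy comes from leakage (hoarding), not from the existence of interest – which is, as far as I can tell, the flavor of the paper’s finding.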

work sharing

Work sharing – it’s an idea worth looking into before the robots take over most of the work.

Work-sharing for a sustainable economy

Achieving low unemployment in an environment of weak growth is a major policy challenge; a more egalitarian distribution of hours worked could be the key to solving it. Whether work-sharing actually increases employment, however, has been debated controversially. In this article we present stylized facts on the distribution of hours worked and discuss the role of work-sharing for a sustainable economy. Building on recent developments in labor market theory we review the determinants of working long hours and its effect on well-being. Finally, we survey work-sharing reforms in the past. While there seems to be a consensus that work-sharing in the Great Depression in the U.S. and in the Great Recession in Europe was successful in reducing employment losses, perceptions of the work-sharing reforms implemented between the 1980s and early 2000s are more ambivalent. However, even the most critical evaluations of these reforms provide no credible evidence of negative employment effects; instead, the overall success of the policy seems to depend on the economic and institutional setting, as well as the specific details of its implementation.