Tag Archives: complex systems

fish

Fishery policy might seem like a fringe issue, but it is another disturbing example of ignorant politicians disrupting science- and evidence-based policies. This is from Pew:

When a fish population falls below a certain level, it is classified as overfished, and the MSA requires regional managers to create a plan to rebuild the species with a date for meeting the recovery goal. That timeline is based on science and accounts for environmental conditions and biological factors that can influence rebuilding, such as how long it takes the fish to reach reproductive age.

Critics of the MSA claim that the law is rigid in requiring a short timeline, but the facts say otherwise: The average timeline for rebuilding plans is close to 20 years, and most plans have recovery timelines longer than 10 years. H.R. 200 would alter the law to allow exceptions to setting science-based rebuilding timelines. It would open the door for political and other considerations to influence a major element of recovery plans, allowing managers to set arbitrary timelines that could postpone the benefits of fully rebuilt stocks indefinitely. That’s why extending rebuilding timelines would be shortsighted and counterproductive…

To prevent overfishing, the MSA requires managers to set science-based annual catch limits. H.R. 200 would exempt more fish populations from the requirement to establish science-based catch limits—which would increase the risk of overfishing.

Fisheries are a poster child for introducing complex systems. They are a straightforward physical system that is only a little complex. And yet it is easy for a normally intelligent person to have misconceptions about how they will change over time, because there are lags and non-linearities in the system. Fish take a while to grow to maturity and reproduce. You can fish a seemingly abundant population at a high rate for a while, but then it will seem to crash without warning and take a long time to recover. This seems unpredictable to people uneducated in systems (who may be perfectly intelligent, literate, and numerate otherwise), but it is fairly easy to grasp and predict once you understand the relatively simple theory and dynamics behind it. So the fact that politicians and the population at large aren’t able to grasp this is simply a failure of our education system.
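To make the lag point concrete, here is a toy simulation – a minimal sketch with made-up parameters, not a real stock assessment – of a logistic fishery where recruits take five years to mature. A constant catch quota set just below the textbook maximum sustainable yield is perfectly stable when there is no lag, but crashes the stock a few decades in once the lag is in play:

```python
# Toy harvested-fishery model: logistic growth plus a maturation lag
# (recruitment this year comes from the spawning stock `lag` years ago).
# All numbers are illustrative, not calibrated to any real fishery.

def simulate(quota, lag, years=60, r=0.5, K=1000.0, s0=900.0):
    stock = [s0] * (lag + 1)           # biomass history, newest last
    for _ in range(years):
        spawners = stock[-1 - lag]     # today's recruits were spawned years ago
        growth = r * spawners * (1 - spawners / K)
        stock.append(max(stock[-1] + growth - quota, 0.0))
    return stock[lag:]

# Maximum sustainable yield for this model is r*K/4 = 125 per year.
# A quota of 120 looks "safe," and with no lag the stock settles at a
# stable equilibrium. With a 5-year maturation lag, the same quota
# overshoots the safe range and the stock collapses around year 40.
for lag in (0, 5):
    traj = simulate(quota=120, lag=lag)
    decades = ", ".join(f"{traj[y]:.0f}" for y in range(0, 61, 10))
    print(f"lag={lag}: biomass at years 0, 10, ..., 60 -> {decades}")
```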

Real fisheries are just a bit more complex than what I described above. This podcast from Fresh Air describes how removal of the small fish species that form the base of the food chain, rather than overfishing of the larger commercial species alone, may have led to the cod collapse off New England in the 1980s.

the unraveling of complex systems

This account of 96 terrible hours at JFK airport, caused by terrible but not unusually terrible winter weather, reminds us how fragile modern infrastructure systems can be, and how fragile our under-maintained and investment-starved U.S. infrastructure is in particular.

By Sunday, after half a foot of snow, gale force winds, and three days of single-digit temperatures, JFK had set a new standard for air travel horrors: Travelers waited for hours or days for their flights, sitting on scavenged sheets of cardboard in their socks, fuming to nearby reporters and far-flung Twitter followers. Outside, the scene was more apocalyptic. Aircraft crowded the tarmac, too many of them for the gates. Passengers were left in planes for as long as seven hours after landing. And that was a full day and a half after the bomb cyclone had dissipated.

synergy, uniqueness, and redundancy in interacting environmental variables

This is a bit over my head, but one thing I am interested in is analyzing and making sense of a large number of simultaneous time series, whether measured in the environment or the economy, or output by a computer model. This can easily be overwhelming, so one place people often start is figuring out which time series are telling essentially the same story, or directly opposite stories. Understanding this allows you to reduce the number of variables you need to analyze to a more manageable number. Time series make this more complicated, though, because two variables could be telling the same or opposite stories with the signals offset in time, in which case simple ways of looking at correlation may not lead to the right conclusions. Simulations add yet another complicating factor: implicit links between your variables, intended or not, which may or may not exist in the real world.
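Here is a minimal illustration of the offset-in-time problem, using purely synthetic data (one series is just the other delayed 8 steps plus noise – nothing measured): the zero-lag correlation looks like almost nothing, while a simple lag scan finds the relationship immediately.

```python
# Two synthetic series that tell the same story 8 steps apart: a naive
# zero-lag correlation sees almost nothing, but scanning over lags
# recovers both the link and the delay.
import numpy as np

rng = np.random.default_rng(0)
n, true_lag = 1000, 8
signal = rng.normal(size=n + true_lag)
x = signal[true_lag:]                              # "source" series
y = signal[:n] + 0.5 * rng.normal(size=n)          # delayed, noisy copy

def lagged_corr(a, b, k):
    """Correlation of b[t] with a[t - k], i.e. a leading b by k steps."""
    return np.corrcoef(a[:n - k], b[k:])[0, 1]

lags = range(20)
corrs = [lagged_corr(x, y, k) for k in lags]
best = max(lags, key=lambda k: abs(corrs[k]))
print(f"correlation at lag 0: {corrs[0]:+.2f}")              # ~0: looks unrelated
print(f"best lag: {best}, correlation {corrs[best]:+.2f}")   # lag 8, ~+0.9
```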

Temporal information partitioning: Characterizing synergy, uniqueness, and redundancy in interacting environmental variables

Information theoretic measures can be used to identify non-linear interactions between source and target variables through reductions in uncertainty. In information partitioning, multivariate mutual information is decomposed into synergistic, unique, and redundant components. Synergy is information shared only when sources influence a target together, uniqueness is information only provided by one source, and redundancy is overlapping shared information from multiple sources. While this partitioning has been applied to provide insights into complex dependencies, several proposed partitioning methods overestimate redundant information and omit a component of unique information because they do not account for source dependencies. Additionally, information partitioning has only been applied to time-series data in a limited context, using basic pdf estimation techniques or a Gaussian assumption. We develop a Rescaled Redundancy measure (Rs) to solve the source dependency issue, and present Gaussian, autoregressive, and chaotic test cases to demonstrate its advantages over existing techniques in the presence of noise, various source correlations, and different types of interactions. This study constitutes the first rigorous application of information partitioning to environmental time-series data, and addresses how noise, pdf estimation technique, or source dependencies can influence detected measures. We illustrate how our techniques can unravel the complex nature of forcing and feedback within an ecohydrologic system with an application to 1-minute environmental signals of air temperature, relative humidity, and windspeed. The methods presented here are applicable to the study of a broad range of complex systems composed of interacting variables.
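The paper’s Rescaled Redundancy measure is beyond a blog post, but the classic quantity this literature builds on – interaction information, I(X1,X2;Y) − I(X1;Y) − I(X2;Y) – fits in a few lines. The sketch below is my own illustration with toy binary variables and plug-in probability estimates, not the authors’ method: a positive value indicates net synergy (like XOR, where neither input alone says anything about the output) and a negative value indicates net redundancy (like two copies of the same source).

```python
# Interaction information for discrete variables, estimated from samples.
import math
import random
from collections import Counter

def mutual_info(pairs):
    """I(A;B) in bits, from a list of (a, b) samples via plug-in counts."""
    n = len(pairs)
    ab = Counter(pairs)
    a_cnt = Counter(a for a, _ in pairs)
    b_cnt = Counter(b for _, b in pairs)
    return sum(c / n * math.log2(c * n / (a_cnt[a] * b_cnt[b]))
               for (a, b), c in ab.items())

def interaction_info(x1, x2, y):
    """I(X1,X2;Y) - I(X1;Y) - I(X2;Y): >0 net synergy, <0 net redundancy."""
    joint = mutual_info(list(zip(zip(x1, x2), y)))
    return joint - mutual_info(list(zip(x1, y))) - mutual_info(list(zip(x2, y)))

random.seed(1)
bits = lambda: [random.randint(0, 1) for _ in range(20000)]

a, b = bits(), bits()
xor = [i ^ j for i, j in zip(a, b)]
print(f"XOR target (pure synergy):       {interaction_info(a, b, xor):+.2f} bits")  # ~ +1
print(f"copied source (pure redundancy): {interaction_info(a, a, a):+.2f} bits")    # ~ -1
```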

causal emergence

Causal emergence is either a brilliant new marriage of science and philosophy, or a bunch of useless nonsense. You can be the judge, but I am leaning slightly toward the latter.

Some physical entities, which we often refer to as agents, can be described as having intentions and engaging in goal-oriented behavior. Yet agents can also be described in terms of low-level dynamics that are mindless, intention-less, and without goals or purpose. How can we reconcile these seemingly disparate levels of description? This is especially problematic because the lower scales at first appear more fundamental in three ways: in terms of their causal work, in terms of the amount of information they contain, and their theoretical superiority in terms of model choice. However, recent research bringing information theory to bear on modeling systems at different scales significantly reframes the issue. I argue that agents, with their associated intentions and goal-oriented behavior, can actually causally emerge from their underlying microscopic physics. This is particularly true of agents because they are autopoietic and possess (apparent) teleological causal relationships.

In other words, how can your atoms and cells, which have no intentions or will, sum up to create you, a person who I presume has intentions and will? And then all of us persons with intentions and will add up to a civilization, which has intentions and will, which is part of a planet, a solar system, a galaxy, and a universe, which arguably do not. If I were smoking something, I might find this profound, but I don’t see an application for it. It does remind me, though, of Howard T. Odum’s concept of a “mesoscope,” as opposed to the microscope and macroscope, referring to understanding systems at the middle scale where complex, messy interactions between the physical and human worlds take place. Most of our scientists and engineers are studying the world through a microscope, and that is what we as a society and economy are rewarding, while the most important problems – the ones that could be solved at the middle scale – are not being tackled by many people, and the people who are tackling them are not being sufficiently rewarded.
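That said, the math underneath the causal emergence claim is simple enough to sketch. “Effective information” measures how much a uniform intervention on a system’s states tells you about the next state, and emergence is the claim that a coarse-grained description can score higher than the microscopic one it summarizes. The toy system below is my own construction in the spirit of this literature, not an example taken from the paper:

```python
# Effective information (EI) of a transition probability matrix: the
# average KL divergence (in bits) of each row from the mean row, i.e.
# the mutual information between a uniform intervention and the effect.
import numpy as np

def effective_info(tpm):
    tpm = np.asarray(tpm, dtype=float)
    avg = tpm.mean(axis=0)     # effect distribution under uniform interventions
    kl_rows = [sum(p * np.log2(p / avg[j]) for j, p in enumerate(row) if p > 0)
               for row in tpm]
    return float(np.mean(kl_rows))

# Micro scale: states {0,1,2} hop uniformly among themselves; state 3 is stuck.
micro = [[1/3, 1/3, 1/3, 0],
         [1/3, 1/3, 1/3, 0],
         [1/3, 1/3, 1/3, 0],
         [0,   0,   0,   1]]
# Macro scale: group {0,1,2} into A and {3} into B; the dynamics become deterministic.
macro = [[1, 0],
         [0, 1]]

print(f"EI(micro) = {effective_info(micro):.2f} bits")  # ~0.81
print(f"EI(macro) = {effective_info(macro):.2f} bits")  # 1.00: the macro scale "causes more"
```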

leading implementation of complex programs

This is just something I have wanted to write down a few thoughts on for a while. My field is engineering and planning, and I have been involved in a number of programs that are complex technically, financially, and on the people side. I’ve seen some things done well, I’ve seen some things done badly, and I’ve done a few things well and learned a few lessons the hard way myself. So here are my thoughts:

  1. Organize the entire program around achieving a vision and set of goals that everyone understands. Create a crystal-clear vision and set of goal statements for the program. Make sure these are thoroughly understood by all senior and mid-level decision makers – communicate, market, train, drill, test – whatever it takes to make sure they get it. Then set specific objectives for individual functional units within the organization, and for all individual staff members, that advance these goals – all these goals and only these goals. Make each objective SMART – specific, measurable, achievable, realistic, and time-bound. Then track every individual’s and every unit’s progress toward meeting the objectives, and hold individuals and managers accountable for meeting them.
  2. Make sure the knowledge level of the entire staff is up to date with industry standards and best practices, then encourage systems thinking, creativity, and innovation to advance the leading edge. Create a formal training and continuing education program for staff. Create a psychological “safe space” for discussion of ideas that are outside the typical daily functions of the organization. Organize talks, discussion groups, and other events. Bring ideas and speakers in from outside the organization. Encourage and reward staff who spend time reading and attending events outside the organization and then bring ideas back and communicate them to colleagues. Be on guard for the development of groupthink, and actively encourage and reward the sharing of ideas that are new to the organization.
  3. Focus on communication of system behavior, risk, and other complex information. Continuously improve staff knowledge of communication approaches, strategies, and tools by weaving these into the training and innovation program. Bring in specialized staff with communication and visualization skills. Set up a specific role, group, or committee whose job is to oversee communication approaches in all aspects of the organization.

cascading computer system failure at Delta

A cascading computer system failure knocked Delta Air Lines out of commission on August 8.

At least half of all Delta Air Lines flights Monday were delayed or canceled after a power outage knocked out the airline’s computer systems worldwide…

Delta representatives said the airline was investigating the cause of the meltdown. They declined to describe whether the airline’s information-technology system had enough built-in redundancies to recover quickly from a hiccup like a power outage…

Airlines depend on huge, overlapping and complicated systems to operate flights, schedule crews and run ticketing, boarding, airport kiosks, websites and mobile phone apps. Even brief outages can snarl traffic and cause long delays.

As the world becomes more automated, things might get smoother when everything is working well, but when something goes wrong it might get harder and harder to recover. Hopefully, major government, military and financial computer systems will have “enough built-in redundancies”.

They do, according to an article in The Week. Delta actually had backup systems in place, and the problem was that they didn’t kick in correctly. Major financial companies have even more layers of backups and pay more attention to them because they have even more at stake.

Delta, like most major airlines, likely had one or more back-up systems in place to take over in an emergency like this. Often a company has an extra system housed in its main data center identical to the main system, plus another one in a separate data center in case both local systems are taken out in a major event, like a fire. Some companies even have a third redundant system that is cloud-based or housed in a separate location.

“Some of these disruptions should not have occurred,” Hecht says. “Delta IT did something wrong that caused its redundancy structure to not function as needed. The problem was not the power failure itself; 99.9999 percent of power failures never cause service disruptions.” …

…most airlines use manual testing to verify their data protection, meaning a human being actually has to take time out of their day to test the system on a regular basis. Other industries, like banking and finance, rely on automatic systems to lower the risk of a full blackout. Automated systems can be pricey, and while Delta’s outage is probably costing the company a hefty sum (Southwest’s outage last month was expected to cost the airline up to $10 million), an hour-long outage in the banking sector would create far more mayhem and profit-loss, so finance companies are more likely to pay up for automated systems.
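The manual-versus-automated distinction is easy to picture in code. Here is a bare-bones sketch of an automated failover drill – the endpoint names are hypothetical, and real systems use dedicated tooling rather than a script like this – that probes every layer of redundancy on a schedule instead of waiting for a human to get around to it:

```python
# Probe the primary system AND its backups on a schedule, so a backup
# that will not actually take over is caught before the emergency.
# Hypothetical endpoints; not Delta's or anyone's actual architecture.
import time
import urllib.request

# Primary, local standby, and remote-data-center replica, in priority order.
ENDPOINTS = [
    "https://primary.example.com/health",
    "https://standby.example.com/health",
    "https://dr-site.example.com/health",
]

def healthy(url, timeout=2.0):
    """One probe: does the endpoint answer HTTP 200 within the timeout?"""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def drill():
    """Check every layer, not just the primary: an untested backup is not a backup."""
    for url in ENDPOINTS:
        status = "ok" if healthy(url) else "FAILING"
        print(f"{time.strftime('%H:%M:%S')}  {url}: {status}")

if __name__ == "__main__":
    drill()  # in production this would run on a schedule and page someone
```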