Tag Archives: logic

“useful principles”

Here is an interesting blog post called “30 useful principles”. I would agree that the majority of them are useful. Anyway, here are a few ideas and phrases that caught my interest. I’ll try to be clear when I am quoting versus paraphrasing or adding my own interpretation.

  • “When a measure becomes a goal, it ceases to be a good measure.” Makes sense to me – measuring is necessary, but I have found that people who are actually doing things on the ground need an understanding of the fundamental goals, or else their work will tend to drift over time and no longer be aimed at those goals. (A toy simulation of this idea follows the list.)
  • “A man with a watch knows what time it is. A man with 2 watches is never sure.” A good way to talk about communicating uncertainty. Measuring and understanding uncertainty is critical in science and decision making, but communicating it requires careful thought to avoid unintended consequences. Decisions are often about playing the odds, and giving decision makers too much information about uncertainty sometimes leads to no decision or a delayed decision, which is itself a type of decision, and not the type likely to produce desirable results. Am I saying we should oversimplify and project an inflated sense of certainty when talking to the public and decision makers? And is this a form of manipulation? Well, sort of, and sometimes yes, to both questions.
  • “Reading is the basis of thought.” Yes, this is certainly true for me, and the writing process itself is an important part of thoroughly thinking something through. This is why we may be able to outsource the production of words to AI, but that will not be a substitute for human thinking. And if we don’t exercise our thinking muscles, we will lose them over time and forget how to train the next generation to develop them. So if we are going to outsource thinking and problem solving to computers, let’s hope they will be better at it than we ever were. A better model would be computer-aided decision making, where computers give humans accurate and timely information about the likely consequences of our decisions, but in the end we still apply our judgment and values in making those decisions.
  • “punishing speech—whether by taking offence or by threatening censorship—is ultimately a request to be deceived.” It’s a good idea to create incentives for people to tell the truth and provide accurate information, even if it is information people in leadership positions don’t want to hear. Leaders get very out of touch if they don’t do this.
  • “Cynicism is not a sign of intelligence but a substitute for it, a way to shield oneself from betrayal & disappointment without having to do or think.” I don’t know that cynical or “realistic” people lack raw intelligence on average, but they certainly lack imagination and creativity. The more people have trouble imagining that things can change, the more it becomes a self-fulfilling prophecy that things will not change.
  • “One death is a tragedy, a million is a statistic.” I’m as horrified by pictures of dying babies in a hospital in a war zone as anyone else, but it also raises my propaganda flag. Who is trying to manipulate me with these images and why? What else is going on at the same time that I might also want to pay attention to?
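To make the first principle above concrete, here is a minimal sketch in Python (the functions and numbers are hypothetical, chosen only for illustration) of what happens when an agent optimizes a measure instead of the goal the measure was meant to track: every unit of effort flows to whatever inflates the measure most cheaply, and the true goal is abandoned.

```python
# A toy simulation of "when a measure becomes a goal" (hypothetical numbers
# throughout): an agent splits a fixed budget of effort between real work,
# which improves the underlying goal, and "gaming", which only inflates the
# measure. Optimizing the measure alone drives all effort into gaming.

def goal(real_effort: float) -> float:
    """True value produced: only real work counts."""
    return real_effort

def measure(real_effort: float, gaming_effort: float) -> float:
    """What gets reported: real work plus cheaper-to-produce gaming."""
    return real_effort + 2.0 * gaming_effort  # gaming pays double on paper

BUDGET = 10.0

best_split, best_score = None, float("-inf")
for real in [BUDGET * i / 100 for i in range(101)]:
    gaming = BUDGET - real
    score = measure(real, gaming)
    if score > best_score:
        best_split, best_score = (real, gaming), score

real, gaming = best_split
print(f"measure-maximizing split: real={real:.1f}, gaming={gaming:.1f}")
print(f"reported measure: {best_score:.1f}, actual goal value: {goal(real):.1f}")
# All 10 units of effort go to gaming; the measure reads 20.0 while the
# true goal value is 0.0.
```

The point of the sketch is the divergence in the last line: the reported measure looks best precisely when the actual goal value hits zero, which is why people on the ground need to understand the fundamental goals and not just the metric.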

cognitive bias

This open access article has a nice summary of cognitive bias research.

Black swans, cognition, and the power of learning from failure

Failure carries undeniable stigma and is difficult to confront for individuals, teams, and organizations. Disciplines such as commercial and military aviation, medicine, and business have long histories of grappling with it, beginning with the recognition that failure is inevitable in every human endeavor. Although conservation may arguably be more complex, conservation professionals can draw on the research and experience of these other disciplines to institutionalize activities and attitudes that foster learning from failures, whether they are minor setbacks or major disasters. Understanding the roles of individual cognitive biases, team psychological safety, and organizational willingness to support critical self-examination contributes to creating a cultural shift in conservation toward one that is open to the learning opportunity that failure provides. This new approach to managing failure is a necessary next step in the evolution of conservation effectiveness.

Carl Sagan’s Baloney Detection Kit

In this essay, Carl Sagan suggests a set of guidelines for using the scientific method to decide whether something is true.

  • Wherever possible there must be independent confirmation of the “facts.”
  • Encourage substantive debate on the evidence by knowledgeable proponents of all points of view.
  • Arguments from authority carry little weight—“authorities” have made mistakes in the past. They will do so again in the future. Perhaps a better way to say it is that in science there are no authorities; at most, there are experts.
  • Spin more than one hypothesis. If there’s something to be explained, think of all the different ways in which it could be explained. Then think of tests by which you might systematically disprove each of the alternatives. What survives, the hypothesis that resists disproof in this Darwinian selection among “multiple working hypotheses,” has a much better chance of being the right answer than if you had simply run with the first idea that caught your fancy.
  • Try not to get overly attached to a hypothesis just because it’s yours. It’s only a way station in the pursuit of knowledge. Ask yourself why you like the idea. Compare it fairly with the alternatives. See if you can find reasons for rejecting it. If you don’t, others will.
  • Quantify. If whatever it is you’re explaining has some measure, some numerical quantity attached to it, you’ll be much better able to discriminate among competing hypotheses. What is vague and qualitative is open to many explanations. Of course there are truths to be sought in the many qualitative issues we are obliged to confront, but finding them is more challenging.
  • If there’s a chain of argument, every link in the chain must work (including the premise), not just most of them.
  • Occam’s Razor. This convenient rule of thumb urges us, when faced with two hypotheses that explain the data equally well, to choose the simpler.
  • Always ask whether the hypothesis can be, at least in principle, falsified. Propositions that are untestable, unfalsifiable, are not worth much. Consider the grand idea that our Universe and everything in it is just an elementary particle—an electron, say—in a much bigger Cosmos. But if we can never acquire information from outside our Universe, is not the idea incapable of disproof? You must be able to check assertions out. Inveterate skeptics must be given the chance to follow your reasoning, to duplicate your experiments and see if they get the same result.

He goes on to “round out the toolkit” with a list of logical fallacies and rhetorical tricks to be aware of.
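As a toy illustration of the “spin more than one hypothesis” and “quantify” items above, here is a minimal sketch in Python (the hypotheses and data are made up for illustration) of weighing multiple working hypotheses against the same evidence with Bayes’ rule. Each observation reweights every hypothesis by how well it predicted that observation, so the hypothesis that resists disproof pulls ahead numerically rather than by fancy.

```python
# A minimal sketch of "multiple working hypotheses" as Bayesian updating.
# All names and numbers here are hypothetical: three candidate coin biases
# stand in for competing hypotheses, and each observation shifts the
# posterior toward whichever hypothesis best survives the data.

hypotheses = {"fair coin": 0.5, "biased 70% heads": 0.7, "biased 90% heads": 0.9}

# Start with no preference among the hypotheses (uniform prior).
posterior = {name: 1.0 / len(hypotheses) for name in hypotheses}

observations = ["H", "H", "T", "H", "H", "H", "T", "H"]  # made-up data

for obs in observations:
    # Likelihood of this observation under each hypothesis.
    likelihood = {
        name: p if obs == "H" else (1.0 - p) for name, p in hypotheses.items()
    }
    # Bayes' rule: reweight each hypothesis by how well it predicted the data.
    total = sum(posterior[name] * likelihood[name] for name in hypotheses)
    posterior = {
        name: posterior[name] * likelihood[name] / total for name in hypotheses
    }

for name, prob in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {prob:.3f}")
```

Running it on the made-up observations leaves the “biased 70% heads” hypothesis with most of the posterior weight: a quantified version of letting the surviving hypothesis win.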

the chicken and the egg

This video purports to answer the chicken-and-egg question once and for all. But really, it’s silly. Of course there were eggs of some sort long before chickens existed. The real question is which came first, the chicken or the chicken egg. And even that might seem obvious – at some point, something that was not quite a chicken laid an egg, and the thing that came out was a chicken. But was that egg a chicken egg? You could say that if a chicken came out, it was a chicken egg. But imagine this – if you took an egg laid by a duck, I think we could all agree that it would be a duck egg. But now imagine you use some genetic technology to change the embryo inside the egg from a duck to a chicken. Now is it a chicken egg or a duck egg? See, it is still ambiguous.

Time Chicken from Nick Black on Vimeo.