Tag Archives: data science

alternatives to word clouds

I like this post on R bloggers proposing several alternatives to word clouds. I’ll list them below but really, you should look at the pictures because hey, this is about pictures.

  1. circle packing (basically this replaces the words with circles, dealing with the problem of bigger/longer words appearing to be more important in standard word clouds); there is a variation on this called the “horn of plenty” where the circles are arranged in order rather than randomly
  2. cartogram (in my ignorance, I have been calling this a “bubble map”. I have used these frequently to show engineering model results and find they work well for many people)
  3. choropleth (these shade in geographic areas to convey data. I find these work well if the size of the geographic area is important information. If it is not, they tend to draw the viewer’s eye to larger areas, and in that case the bubbles are better. Compare, for example, per-person income in Luxembourg vs. China: a choropleth makes tiny, wealthy Luxembourg nearly invisible next to China.)
  4. treemap (I’ve been calling these “packed rectangles” and I generally find them good for anything where conveying relative magnitudes of things to people is important)
  5. donuts (surprisingly, the author concludes a donut is the best option for the data he is trying to show, and I kind of agree: it gets the point across and leaves lots of room for labels)

The article has links to the specific packages and code used to create the graphics.
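
If you want to experiment without digging through the original post’s code, here is a hedged sketch of option 4 using the treemap package. To be clear, this is my own toy example with made-up word counts, not the article’s data or code:

  # A toy example of the "packed rectangles" (treemap) option.
  # The words and counts here are invented for illustration.
  library(treemap)

  words <- data.frame(
    term  = c("data", "model", "river", "energy", "city"),
    count = c(120, 80, 60, 45, 30)
  )

  # Rectangle area is proportional to count, so relative magnitudes
  # are easy to compare at a glance.
  treemap(words, index = "term", vSize = "count",
          title = "word frequencies as packed rectangles")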

August 2017 in Review

Most frightening stories:

  • Around 200,000 people may be dying prematurely in the U.S. each year due to air pollution. Meanwhile, the Trump administration may be trying to censor the National Climate Assessment, which presents the consensus among serious scientists in the United States government that climate change is very real and a very real threat to our country.
  • The U.S. may already be in the middle of a soft military coup. We have a batshit-crazy President playing nuclear chicken with all our lives. And with the legislative branch not even trying to do anything about this, we are actually hoping the generals who are running our country will be the coolest heads in the room when it comes to preventing nuclear war. North Korea may be closer to submarine-launched nuclear weapons than previously thought. Meanwhile, there are three ways for terrorists or other non-state actors to get their hands on nuclear weapons: “transfer—the sale or handoff of a weapon from a nuclear-weapon state; leakage—the theft of a nuclear weapon or weapons-grade fissile material; and indigenous production—the construction of a nuclear device from illicitly obtained weapons-grade fissile material.” And the U.S. and Russia are no longer cooperating on non-proliferation.
  • The U.S. construction industry has had negligible productivity gains in the past 40 years.

Most hopeful stories:

  • The United Nations General Assembly adopted a resolution (in July) that could eventually, maybe lead to the total elimination of nuclear weapons on Earth.
  • The Aichi Biodiversity Targets are some very specific numerical targets that have been incorporated into the 2015 Sustainable Development Goals.
  • Great Transitions are ideas for how the world could transition to a sustainable state without going through a major setback along the way.

Most interesting stories, that were not particularly frightening or hopeful, or perhaps were a mixture of both:

  • Elon Musk has thrown his energy into deep tunneling technology.
  • When you sow seeds, it makes sense to sow the ones that have the most trouble establishing at the highest density.
  • You can use R to recreate the famous plot of Napoleon’s invasion of Russia.

As I am writing these words on Labor Day, the news is about a North Korean nuclear test. In a strange coincidence, I happened to see both the USS New Jersey, which was involved in the Korean War, and the Korean War Memorial here in Philadelphia yesterday. That war caused a lot of pain and suffering on all sides. It would be a tragedy to let it flare up again, and an even bigger tragedy if nuclear weapons were to be involved.

eyes on the street

A group at the University of Pennsylvania looked for statistical evidence that “eyes on the street” are a deterrent to crime. The results are a bit puzzling, as real world data often can be.

ANALYSIS OF URBAN VIBRANCY AND SAFETY IN PHILADELPHIA

Statistical analyses of urban environments have been recently improved through publicly available high resolution data and mapping technologies that have been adopted across industries. These technologies allow us to create metrics to empirically investigate urban design principles of the past half-century. Philadelphia is an interesting case study for this work, with its rapid urban development and population increase in the last decade. We focus on features of what urban planners call vibrancy: measures of positive, healthy activity or energy in an area. Historically, vibrancy has been very challenging to measure empirically. We explore the association between safety (violent and non-violent crime) and features of local neighborhood vibrancy such as population, economic measures and land use zoning. Despite rhetoric about the negative effects of population density in the 1960s and 70s, we find very little association between crime and population density. Measures based on land use zoning are not an adequate description of local vibrancy and so we construct a database and set of measures of business activity in each neighborhood. We employ several matching analyses within census block groups to explore the relationship between neighborhood vibrancy and safety at a higher resolution. We find that neighborhoods with more vacancy have higher crime but within neighborhoods, crimes tend not to be located near vacant properties. We also find that more crimes occur near business locations but businesses that are active (open) for longer periods are associated with fewer crimes.

This is particularly fascinating to me because I live my life in the middle of this particular data set and am part of it. So it is very interesting to compare what the data seem to be saying with my own experiences and impressions.

The lack of correlation between population density and crime is not surprising. Two neighborhoods with identical density can be drastically different. The correlation between poverty and crime is not surprising either – people who are not succeeding in the formal economy and who are not mobile turn to the informal economy, in other words drug dealing, loan sharking, and other illegal ways of trying to earn an income. If they are successful at earning an income, they tend to have a lot of cash around, and other people who know about the cash will take advantage of them, knowing they will not go to the police. Other than going to the police, the remaining options are to be taken advantage of repeatedly, or to retaliate. This is how violence escalates, I believe, and it goes hand in hand with the development of a culture that tolerates and even celebrates violence, in a never-ending feedback loop.

The puzzling part comes when they try to drill down and look at explanatory factors at a very fine spatial scale. They found a correlation between crime and mixed-use zoning, which appears to contradict the idea that eyes on the street around the clock will help to deter crime. And they found more crime around businesses like cafes, restaurants, bars, and retail shops. They found that longer open hours seemed to have some deterrent effect on crime relative to shorter open hours.

I think they have made an excellent effort to do this, and I am not sure it can be done a lot better, but I will point out one idea I have. They talk about some limitations and nuances of their data, but one they do not mention is that they are looking at reported crimes, most likely police reports or 911 calls. It could be that business owners, staff, and patrons are much more likely to call 911 and report a crime than are residential neighbors. The business staff and patrons may see this as being in their economic interest, increasing the safety of their families, and the (alleged) criminals they are reporting are generally strangers. In quieter all-residential neighborhoods, people may not observe as many of the crimes that do occur (fewer “eyes on the street”), and they may prefer not to report the crimes they do observe, whether out of loyalty to their neighbors, a habit of minding their own business, quid pro quo, or in some cases a fear of retaliation. There is also the factor of some demographic groups trusting the police more than others, although the authors’ statistical attempts to control for demographics may tend to factor this out.
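
Out of curiosity, I sketched what a matching analysis along the lines the authors describe might look like in R. To be clear, this is my own guess using the MatchIt package and simulated placeholder data; the variable names are hypothetical and none of this is the study’s actual code:

  # Simulated census block groups with a hypothetical "high vacancy"
  # indicator; all values below are placeholders, not the study's data.
  library(MatchIt)
  set.seed(7)
  blocks <- data.frame(
    high_vacancy   = rbinom(500, 1, 0.3),
    population     = rpois(500, 1200),
    median_income  = rlnorm(500, meanlog = 10.5, sdlog = 0.4),
    pct_commercial = runif(500),
    crime_count    = rpois(500, 20)
  )

  # Match high-vacancy block groups to otherwise-similar low-vacancy
  # ones, then compare crime counts across the matched sample.
  m <- matchit(high_vacancy ~ population + median_income + pct_commercial,
               data = blocks, method = "nearest")
  matched <- match.data(m)
  t.test(crime_count ~ high_vacancy, data = matched)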


data-ink ratio

Here’s a wiki post about Edward Tufte’s data-ink ratio:

Tufte refers to data-ink as the non-erasable ink used for the presentation of data. If data-ink were removed from the image, the graphic would lose its content. Non-data-ink is accordingly the ink that does not transport the information but is used for scales, labels and edges. The data-ink ratio is the proportion of ink used to present actual data compared to the total amount of ink (or pixels) used in the entire display, i.e. the ratio of data-ink to total ink.

Good graphics should include only data-ink. Non-data-ink is to be deleted wherever possible. The reason for this is to avoid drawing the viewer’s attention to irrelevant elements.

The goal is to design a display with the highest possible data-ink ratio (that is, as close to 1.0 as possible) without eliminating something that is necessary for effective communication.

Before I offer an opinion, I should state the disclaimer that you should definitely listen to Edward Tufte, not me! So here’s my opinion: this idea is clearly absurd when taken to extremes, because taken literally it would mean a bunch of dots on a page that you have no way of interpreting. I can’t think of a way of making graphs work without axes, scales, and a legend. Labels, arrows, and text boxes are one alternative, and I find myself using them often when giving projected slide presentations in fairly large rooms.

A reasonable interpretation of Tufte, I think, is to ask yourself whether each new thing you are adding to a graph provides useful information to the reader/viewer, increases the chances that the reader/viewer will draw the right conclusions, and makes the reader/viewer’s job easier rather than harder. The holy grail is to help your audience imbibe the point of the graph with very little effort. Unnecessary 3D effects and clip art aren’t going to do that. A splash of color and some nice big labels that middle-aged people can read from the back of the room just might help.
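
To make that concrete, here is a minimal ggplot2 sketch (my own example, not Tufte’s) of stripping non-data ink from a chart while keeping the parts a viewer actually needs:

  library(ggplot2)

  p <- ggplot(mtcars, aes(wt, mpg)) + geom_point()
  p  # default theme: gray panel, heavy gridlines, lots of non-data ink

  # Higher data-ink ratio: drop the panel background and minor gridlines,
  # but keep axes, labels, and text big enough to read from the back row.
  p + theme_minimal(base_size = 14) +
    theme(panel.grid.minor = element_blank())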

R and differential equations

Here’s a new R package for solving differential equations. Sounds like something that might be of interest to only a few ivory tower mathematicians, right? But solving differential equations numerically is the critical core of almost any dynamic simulation model, whether it is simulating water, energy, money, ecology, social systems, or the intertwinings of all of these. So if we are going to understand our systems well enough to solve their problems, we have to have some people around who understand these things on a practical level.
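
I won’t guess which package the post links to, but to give a flavor of what numerical ODE solving looks like in R, here is a minimal sketch using the long-established deSolve package (my example, not the post’s): logistic population growth, dN/dt = r*N*(1 - N/K).

  library(deSolve)

  # Logistic growth: dN/dt = r * N * (1 - N / K)
  logistic <- function(t, state, parms) {
    with(as.list(c(state, parms)), {
      dN <- r * N * (1 - N / K)
      list(dN)
    })
  }

  out <- ode(y = c(N = 10), times = seq(0, 50, by = 0.5),
             func = logistic, parms = c(r = 0.3, K = 1000))
  plot(out)  # N climbs an S-curve and levels off at the carrying capacity K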

iris scans

Border counties in Texas are using mandatory iris scans to build a database of illegal immigrants. I imagine it will spread to big city police departments, and then to everywhere else. I imagine at some point it will become a form of identification people can use as an alternative to carrying a wallet and passport. I don’t know that the technology is concerning in and of itself – it’s essentially just a modern and accurate form of identification. What’s concerning is what some immoral governments, amoral corporations, and criminal elements might be able to do with large databases of this type of information.

synergy, uniqueness, and redundancy in interacting environmental variables

This is a bit over my head, but one thing I am interested in is analyzing and making sense of a large number of simultaneous time series, whether measured in the environment, the economy, or the output of a computer model. This can easily be overwhelming, so one place people often start is trying to figure out which time series are telling essentially the same story, or directly opposite stories. Understanding this allows you to reduce the number of variables you need to analyze to a more manageable number. Time series make this more complicated, though, because two variables could be telling the same or opposite stories, but if the signals are offset in time, simple ways of looking at correlation may not lead to the right conclusions. Simulations add yet another complicating factor: implicit links between your variables, intended or not, which may or may not exist in the real world.
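
As a toy illustration of the time-offset problem (my own made-up example, nothing from the paper): two series that tell the same story ten steps apart look only weakly related at lag zero, but the cross-correlation function recovers the offset.

  set.seed(42)
  x <- as.numeric(arima.sim(list(ar = 0.8), n = 500))
  y <- c(rep(0, 10), head(x, -10)) + rnorm(500, sd = 0.5)  # x delayed 10 steps

  cor(x, y)  # modest correlation at lag zero; easy to dismiss

  cc <- ccf(x, y, lag.max = 20, plot = FALSE)
  cc$lag[which.max(abs(cc$acf))]  # strongest correlation shows up near lag -10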

Temporal information partitioning: Characterizing synergy, uniqueness, and redundancy in interacting environmental variables

Information theoretic measures can be used to identify non-linear interactions between source and target variables through reductions in uncertainty. In information partitioning, multivariate mutual information is decomposed into synergistic, unique, and redundant components. Synergy is information shared only when sources influence a target together, uniqueness is information only provided by one source, and redundancy is overlapping shared information from multiple sources. While this partitioning has been applied to provide insights into complex dependencies, several proposed partitioning methods overestimate redundant information and omit a component of unique information because they do not account for source dependencies. Additionally, information partitioning has only been applied to time-series data in a limited context, using basic pdf estimation techniques or a Gaussian assumption. We develop a Rescaled Redundancy measure (Rs) to solve the source dependency issue, and present Gaussian, autoregressive, and chaotic test cases to demonstrate its advantages over existing techniques in the presence of noise, various source correlations, and different types of interactions. This study constitutes the first rigorous application of information partitioning to environmental time-series data, and addresses how noise, pdf estimation technique, or source dependencies can influence detected measures. We illustrate how our techniques can unravel the complex nature of forcing and feedback within an ecohydrologic system with an application to 1-minute environmental signals of air temperature, relative humidity, and windspeed. The methods presented here are applicable to the study of a broad range of complex systems composed of interacting variables.
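
For anyone who wants a feel for the building blocks: in this partitioning framework, the mutual information two sources share with a target decomposes as I(X1,X2;Y) = U1 + U2 + R + S (unique, redundant, and synergistic components). Below is a hedged sketch of the most basic piece, estimating mutual information between two signals by binning, which is one of the “basic pdf estimation techniques” the abstract mentions. The function and variable names are mine, the data are simulated stand-ins, and the paper’s Rescaled Redundancy measure goes well beyond this:

  # Mutual information I(X;Y) from a binned joint pdf estimate:
  # I = sum over bins of p(x,y) * log2( p(x,y) / (p(x) * p(y)) )
  mutual_info <- function(x, y, bins = 10) {
    pxy <- table(cut(x, bins), cut(y, bins)) / length(x)  # joint pdf estimate
    px  <- rowSums(pxy)                                   # marginal of x
    py  <- colSums(pxy)                                   # marginal of y
    nz  <- pxy > 0                                        # skip empty bins
    sum(pxy[nz] * log2(pxy[nz] / outer(px, py)[nz]))
  }

  set.seed(1)
  temp <- cumsum(rnorm(1000))               # stand-in for air temperature
  rh   <- 0.7 * temp + rnorm(1000, sd = 2)  # stand-in for relative humidity
  mutual_info(temp, rh)                     # bits of shared information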