Here’s a new intro to R for Excel users.
This R-bloggers post lists 8 types of online dashboards along with some alternatives for creating them.
I like this post on R bloggers proposing several alternatives to word clouds. I’ll list them below but really, you should look at the pictures because hey, this is about pictures.
The article has links to the specific packages and code used to create the graphics.
Most frightening stories:
Most hopeful stories:
Most interesting stories, that were not particularly frightening or hopeful, or perhaps were a mixture of both:
As I am writing these words on Labor Day, the news is about a North Korean nuclear test. In a strange coincidence, I happened to see both the USS New Jersey, which was involved in the Korean War, and the Korean War Memorial here in Philadelphia yesterday. That war caused a lot of pain and suffering on all sides. It would be a tragedy to let it flare up again, and an even bigger tragedy if nuclear weapons were to be involved.
This R-bloggers post shows you how to recreate the famous Sankey diagram of Napoleon’s invasion of Russia, and even how to improve it by overlaying it on a modern satellite image.
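The post’s own code is worth a look; as a rough illustration of the idea, here is a minimal Sankey diagram in R using the networkD3 package (my assumption for illustration, not necessarily the package the post uses), with a few illustrative place names and troop counts:

```r
# Minimal Sankey sketch with networkD3 (an assumption; the post may use a
# different package). Node names and flow values are illustrative only.
library(networkD3)

nodes <- data.frame(name = c("Kovno", "Vilna", "Smolensk", "Moscow"))
links <- data.frame(source = c(0, 1, 2),            # zero-indexed into nodes
                    target = c(1, 2, 3),
                    value  = c(422000, 400000, 145000))  # illustrative flows

sankeyNetwork(Links = links, Nodes = nodes,
              Source = "source", Target = "target",
              Value = "value", NodeID = "name")
```

The widths of the ribbons are proportional to the values, which is what makes Minard’s original so effective at showing the army melting away.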
A group at the University of Pennsylvania looked for statistical evidence that “eyes on the street” are a deterrent to crime. The results are a bit puzzling, as real world data often can be.
Statistical analyses of urban environments have been recently improved through publicly available high resolution data and mapping technologies that have been adopted across industries. These technologies allow us to create metrics to empirically investigate urban design principles of the past half-century. Philadelphia is an interesting case study for this work, with its rapid urban development and population increase in the last decade. We focus on features of what urban planners call vibrancy: measures of positive, healthy activity or energy in an area. Historically, vibrancy has been very challenging to measure empirically. We explore the association between safety (violent and non-violent crime) and features of local neighborhood vibrancy such as population, economic measures and land use zoning. Despite rhetoric about the negative effects of population density in the 1960s and 70s, we find very little association between crime and population density. Measures based on land use zoning are not an adequate description of local vibrancy and so we construct a database and set of measures of business activity in each neighborhood. We employ several matching analyses within census block groups to explore the relationship between neighborhood vibrancy and safety at a higher resolution. We find that neighborhoods with more vacancy have higher crime but within neighborhoods, crimes tend not to be located near vacant properties. We also find that more crimes occur near business locations but businesses that are active (open) for longer periods are associated with fewer crimes.
This is particularly fascinating to me because I live my life in the middle of this particular data set and am part of it. So it is very interesting to compare what the data seem to be saying with my own experiences and impressions.
The lack of correlation between population density and crime is not surprising. Two neighborhoods with identical density can be drastically different. The correlation between poverty and crime is not surprising either: people who are not succeeding in the formal economy and who are not mobile turn to the informal economy, in other words drug dealing, loan sharking, and other illegal ways of earning an income. Those who succeed at earning an income tend to have a lot of cash around, and other people who know about the cash will take advantage of them, knowing they will not go to the police. Other than going to the police, the remaining options are to be taken advantage of repeatedly, or to retaliate. This is how violence escalates, I believe, and it goes hand in hand with the development of a culture that tolerates and even celebrates violence, in a never-ending feedback loop.
The puzzling part comes when they try to drill down and look at explanatory factors at a very fine spatial scale. They found a correlation between crime and mixed use zoning, which appears to contradict the idea that eyes on the street around the clock will help to deter crime. And they found more crime around businesses like cafes, restaurants, bars and retail shops. They found that longer open hours seemed to have some deterrent effect on crime relative to shorter open hours.
I think they have made an excellent effort to do this, and I am not sure it can be done much better, but I will point out one idea I have. They discuss some limitations and nuances of their data, but one they do not mention is that they are looking at reported crimes, most likely police reports or 911 calls. It could be that business owners, staff, and patrons are much more likely to call 911 and report a crime than are residential neighbors. The business staff and patrons may see this as being in their economic interest and as increasing the safety of their families, and the (alleged) criminals they are reporting are generally strangers. In quieter all-residential neighborhoods, people may not observe as many of the crimes that do occur (fewer “eyes on the street”), and they may prefer not to report crimes, whether through a sense of loyalty to one’s neighbors, minding one’s own business, quid pro quo, or in some cases a fear of retaliation. There is also the factor of some demographic groups trusting the police more than others, although the authors’ statistical attempts to control for demographics may tend to factor this out.
Here’s a wiki post about Edward Tufte’s data-ink ratio:
Tufte refers to data-ink as the non-erasable ink used for the presentation of data. If the data-ink were removed from the image, the graphic would lose its content. Non-data-ink is, accordingly, the ink that does not convey the information but is used for scales, labels, and edges. The data-ink ratio is the proportion of ink used to present actual data compared to the total amount of ink (or pixels) used in the entire display (the ratio of data-ink to total ink).
Good graphics should include only data-ink. Non-data-ink is to be deleted wherever possible. The reason for this is to avoid drawing the viewer’s attention to irrelevant elements of the data presentation.
The goal is to design a display with the highest possible data-ink ratio (that is, as close to 1.0 as possible) without eliminating something that is necessary for effective communication.
Before I offer an opinion, I should state the disclaimer that you should definitely listen to Edward Tufte, not me! So here’s my opinion: this idea is clearly absurd when taken to extremes, because it would just mean a bunch of dots on a page that you have no way of interpreting. I can’t think of a way of making graphs work without axes, scales, and a legend. Labels, arrows, and text boxes are an alternative that I find myself using often when giving projected slide presentations in fairly large rooms.
A reasonable interpretation of Tufte, I think, is to ask yourself whether each new thing you add to a graph provides useful information to the reader/viewer, increases the chances that the reader/viewer will draw the right conclusions, and makes the reader/viewer’s job easier or harder. The holy grail is to help your audience imbibe the point of the graph with very little effort. Unnecessary 3D effects and clip art aren’t going to do that. A splash of color and some nice big labels that middle-aged people can read from the back of the room just might.
Here’s a new R package for solving differential equations. Sounds like something that might be of interest to only a few ivory tower mathematicians, right? But solving differential equations numerically is the critical core of almost any dynamic simulation model, whether it is simulating water, energy, money, ecology, social systems, or the intertwinings of all of these. So if we are going to understand our systems well enough to solve their problems, we have to have some people around who understand these things on a practical level.
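To make the idea concrete, here is a minimal sketch of solving a differential equation numerically in R, using the long-established deSolve package (my choice for illustration; not necessarily the new package from the post):

```r
# Logistic growth dN/dt = r * N * (1 - N/K), solved with deSolve::ode().
# deSolve is a standard CRAN package; the model and parameters are illustrative.
library(deSolve)

logistic <- function(t, state, parms) {
  with(as.list(c(state, parms)), {
    dN <- r * N * (1 - N / K)
    list(c(dN))
  })
}

out <- ode(y = c(N = 1),                   # start with a small population
           times = seq(0, 20, by = 0.1),
           func = logistic,
           parms = c(r = 0.5, K = 100))

head(out)   # a matrix with columns: time, N
plot(out)   # deSolve's built-in plot method; N approaches K = 100
```

This state-variables-plus-rate-function structure is the core of the dynamic simulation models mentioned above, whatever the variables happen to represent.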
Border counties in Texas are using mandatory iris scans to build a database of illegal immigrants. I imagine it will spread to big city police departments, and then to everywhere else. I imagine at some point it will become a form of identification people can use as an alternative to carrying a wallet and passport. I don’t know that the technology is concerning in and of itself – it’s essentially just a modern and accurate form of identification. What’s concerning is what some immoral governments, amoral corporations, and criminal elements might be able to do with large databases of this type of information.
This is a bit over my head, but one thing I am interested in is analyzing and making sense of a large number of simultaneous time series, whether measured in the environment, the economy, or output of a computer model. This can easily be overwhelming, so one place people often start is trying to figure out which time series are telling essentially the same story, or directly opposite stories. Understanding this allows you to reduce the number of variables you need to analyze to a more manageable number. Time series make this more complicated though, because two variables could be telling the same or opposite stories, but if the signals are offset in time, simple ways of looking at correlation may not lead to the right conclusions. With simulations you have yet another set of complicating factors, which is the implicit links between your variables, intended or not, and whether they exist in the real world or not.
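As a small illustration of the lag problem, here is a toy sketch in base R (the data are simulated, so nothing here reflects any real analysis): two series that tell the same story offset in time show only modest same-time correlation, but the cross-correlation function recovers the offset.

```r
# Toy example: y is x delayed by 5 time steps, plus noise.
set.seed(1)
x <- as.numeric(arima.sim(model = list(ar = 0.8), n = 500))
y <- c(rep(0, 5), head(x, -5)) + rnorm(500, sd = 0.2)

cor(x, y)                          # modest correlation at lag zero

cc <- ccf(x, y, lag.max = 20, plot = FALSE)
cc$lag[which.max(abs(cc$acf))]     # peaks near lag -5, the true offset
```

Simple lag-zero correlation would understate how strongly these two series are linked; scanning over lags, as ccf() does, is the minimal version of the kind of analysis the paper below takes much further.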
Information theoretic measures can be used to identify non-linear interactions between source and target variables through reductions in uncertainty. In information partitioning, multivariate mutual information is decomposed into synergistic, unique, and redundant components. Synergy is information shared only when sources influence a target together, uniqueness is information only provided by one source, and redundancy is overlapping shared information from multiple sources. While this partitioning has been applied to provide insights into complex dependencies, several proposed partitioning methods overestimate redundant information and omit a component of unique information because they do not account for source dependencies. Additionally, information partitioning has only been applied to time-series data in a limited context, using basic pdf estimation techniques or a Gaussian assumption. We develop a Rescaled Redundancy measure (Rs) to solve the source dependency issue, and present Gaussian, autoregressive, and chaotic test cases to demonstrate its advantages over existing techniques in the presence of noise, various source correlations, and different types of interactions. This study constitutes the first rigorous application of information partitioning to environmental time-series data, and addresses how noise, pdf estimation technique, or source dependencies can influence detected measures. We illustrate how our techniques can unravel the complex nature of forcing and feedback within an ecohydrologic system with an application to 1-minute environmental signals of air temperature, relative humidity, and windspeed. The methods presented here are applicable to the study of a broad range of complex systems composed of interacting variables.