more on lottery winners

I followed up on a link in yesterday’s story about lottery winners. In 2017 a group publishing in the Columbia Journalism Review submitted Freedom of Information Act requests to nearly all of the U.S. state lotteries and analyzed the data they were able to get. The results are really surprising, verging on impossible.

  • Clarance Jones of Lynn, Massachusetts, the nation’s most frequent winner, claimed more than 7,300 tickets worth $600 or more in only six years.
  • Jones would have had to spend at least $300 million to have a 1-in-10 million chance of winning so often, according to a statistician we consulted at the University of California, Berkeley. (Jones did not respond to requests for comment.)
  • The odds are extraordinary even for winners with far smaller win tallies. According to the analysis, Nadine Vukovich, Pennsylvania’s most frequent winner, would have had to spend $7.8 million to have a 1-in-10 million chance of winning her 209 tickets worth $600 or more.
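
The spend-versus-odds claims in those bullets are tail-probability calculations. Here’s a rough sketch of the arithmetic; the per-ticket win probability below is my own made-up number, not the model CJR’s statistician actually used:

```python
# How unlikely is a given win count? Model $600+ wins as a Poisson process.
# The per-ticket win probability is an assumption for illustration only.
import math

def log_poisson_pmf(lam, k):
    return k * math.log(lam) - lam - math.lgamma(k + 1)

def poisson_tail(lam, k, terms=5000):
    """P(X >= k) for X ~ Poisson(lam), summed in log space to avoid underflow."""
    return sum(math.exp(log_poisson_pmf(lam, i)) for i in range(k, k + terms))

p_win = 1e-4                  # assumed chance a $1 ticket pays $600 or more
tickets = 10_000              # $10,000 spent
lam = tickets * p_win         # expected number of big wins: 1

# Chance of hitting the CJR threshold of 50+ wins on that spend:
# on the order of 1e-65 -- which is why thousands of 50+ winners is so odd.
print(poisson_tail(lam, 50))
```

The point of the log-space sum is that individual Poisson terms underflow to zero long before the tail probability itself does.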

What could explain any of this? I don’t know, of course. But here are a few explanations that would fit the evidence.

  1. Psychic powers, or just straight up magic. Let’s rule this out.
  2. The data is flawed and/or the analysis of the data is flawed. An intern filled down the same name next to all the winning numbers in a spreadsheet. Something like this seems likely.
  3. Corruption. Certainly plausible.
  4. Computer bugs or computer hacking. This does not seem impossible to me. A pseudo-random number generator could be programmed wrong, using a seed that is somehow predictable. Or someone could have stolen the code and figured out the seed. This has happened with slot machines. I don’t know how similar lottery machines are to slot machines, but I’d guess they are.
  5. People are figuring out ways to exploit certain obscure, flawed games. We know this has happened. But the people who run the lottery know it too, and it is hard to imagine them making these mistakes often, or failing to correct them quickly when they occasionally do.
  6. Shadowy crime syndicates, corporations, middle eastern princes, Russian oligarchs, Professor Moriarty (etc.) are funding corruption and/or exploiting flaws on a large scale and/or hacking into lottery computers. The world is not what it seems, and if you are not one of the chosen few you are just another victim plugged into the blood-sucking matrix.
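
On the PRNG point in item 4, here is a toy illustration of the failure mode. Python’s default Mersenne Twister stands in for whatever a drawing machine might actually use, and seeding with the draw date is the hypothetical flaw, not a documented one:

```python
# Toy sketch: if a drawing machine seeded its PRNG with something guessable
# (say, the draw date), anyone who guessed the seed could reproduce the
# "random" draw. Real lottery hardware is hopefully not built this way.
import random

def draw_numbers(seed, pool=49, picks=6):
    """Draw a 6-of-49 style combination from a PRNG with a known seed."""
    rng = random.Random(seed)
    return sorted(rng.sample(range(1, pool + 1), picks))

official = draw_numbers(seed=20181101)   # machine seeds with the date
attacker = draw_numbers(seed=20181101)   # attacker guesses the same seed
print(official == attacker)              # True: identical draws
```

This is essentially what happened in the slot-machine case: once the algorithm and seed were known, "random" outputs became predictable.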

I’d place most of my bets on #2 and #3, and a small side bet on #4 or #5.

beating the lottery

Here’s a long, interesting article in Huffington Post about a couple who developed a system to beat flawed lottery games in Michigan and Massachusetts. Eventually they were found out, but not before making over $7 million. They reported all their earnings and paid all their taxes. Nobody really got in trouble, except some store owners who lost their licenses to sell lottery tickets for breaking minor rules. Other groups managed to exploit the same game too.
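
The edge in the kind of game the article describes comes down to expected value: in certain weeks, the expected payout per ticket exceeded its price. A sketch, with completely invented odds and prizes (the real game’s parameters differ):

```python
# Expected payout per ticket, given {probability: prize} pairs.
# All odds and prize amounts are made up for illustration.
def expected_value(prize_table):
    return sum(p * prize for p, prize in prize_table.items())

normal_week = {
    1 / 9.37e6: 0,       # jackpot tier: assume it isn't won, rolls over
    1 / 39000: 2500,     # match 5 of 6
    1 / 800: 100,        # match 4 of 6
    1 / 47: 5,           # match 3 of 6
}

# In a "roll-down" week, the unclaimed jackpot gets pushed into the lower
# tiers, inflating their prizes (again, invented numbers):
rolldown_week = {
    1 / 9.37e6: 0,
    1 / 39000: 50000,
    1 / 800: 1000,
    1 / 47: 25,
}

print(expected_value(normal_week))    # ~0.30 -- well under a $2 ticket
print(expected_value(rolldown_week))  # ~3.06 -- well over it: buy in bulk
```

Once the per-ticket expectation is above the ticket price, the only remaining problem is buying enough tickets for the law of large numbers to do its work.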

As interesting as the whole story is, there are a few paragraphs buried in the middle that really caught my eye. There really are people out there who win the lottery far more often than random chance should allow.

A 2017 investigation by the Columbia Journalism Review found widespread anomalies in lottery results, difficult to explain by luck alone. According to CJR’s analysis, nearly 1,700 Americans have claimed winning tickets of $600 or more at least 50 times in the last seven years, including the country’s most frequent winner, a 79-year-old man from Massachusetts named Clarance W. Jones, who has redeemed more than 10,000 tickets for prizes exceeding $18 million.

It’s possible, as some lottery officials have speculated, that a few of these improbably lucky individuals are simply cashing tickets on behalf of others who don’t want to report the income. There are also cases in which players have colluded with lottery employees to cheat the game from the inside; last August, a director of a multistate lottery association was sentenced to 25 years in prison after using his computer programming skills to rig jackpots in Colorado, Iowa, Kansas, Oklahoma and Wisconsin, funneling $2.2 million to himself and his brother.

But it’s also possible that math whizzes like Jerry Selbee are finding and exploiting flaws that lottery officials haven’t noticed yet. In 2011, Harper’s wrote about “The Luckiest Woman on Earth,” Joan Ginther, who has won multimillion-dollar jackpots in the Texas lottery four times. Her professional background as a PhD statistician raised suspicions that Ginther had discovered an anomaly in Texas’ system. In a similar vein, a Stanford- and MIT-trained statistician named Mohan Srivastava proved in 2003 that he could predict patterns in certain kinds of scratch-off tickets in Canada, guessing the correct numbers around 90 percent of the time. Srivastava alerted authorities as soon as he found the flaw. If he could have exploited it, he later explained to a reporter at Wired, he would have, but he had calculated that it wasn’t worth his time. It would take too many hours to buy the tickets in bulk, count the winners, redeem them for prizes, file the tax forms. He already had a full-time job.

agent-based wildlife modeling in cities

This is an agent-based model of wild boars coming from wild lands into a city. We don’t have wild boar issues where I live, but raccoons and deer occasionally show up. I’ve lived places where black bears show up unexpectedly in urban areas, and that can cause a stir.

Pigs in space: An agent-based model of wild boar (Sus scrofa) movement into cities

Last decades saw a dramatic increase in wildlife populations within urban areas. Policymakers seek to minimize human-wildlife conflicts resulting from overabundance of species, such as wild boars (Sus scrofa). To this end, there is a need to understand the drivers governing infiltration of wildlife into cities. In this paper we study the availability and distribution of food resources in urban areas as driver of wild boar movement patterns. Based on the optimal foraging theory, we utilize an agent-based simulation model to investigate the ever-growing infiltration of wild boars into some cities. We apply the model to an artificial city that mimics the landscape of the city of Haifa. Manipulating food availability and relative resistance costs of different land-covers we demonstrate that infiltration of boars depends on population size of wild boars and on the amount and spatial distribution of attractors (e.g., food). Model outputs for likely sets of parameters demonstrate good correspondence to the reports of boar observations within the city of Haifa, Israel, where the porosity of the urban fabric and the connectivity of open space patches provide a trail network that makes food throughout the city accessible at a relatively low search-cost. Our results indicate that land cover and food patterns determine critically boars’ foraging movement and infiltration into the city. The proposed modeling framework provides a tool to investigate wildlife management policies that aim at reducing people-wildlife conflicts in cities.
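
The abstract boils down to a handful of ingredients: agents, attractors (food), and land-cover resistance. Below is a toy sketch of that setup; the grid, energy budget, and resistance costs are all invented and bear no relation to the paper’s calibrated model:

```python
# Toy agent-based sketch: foragers on a grid move toward food while paying
# a movement cost that depends on land cover. Not the paper's model.
import random

SIZE = 20                                  # grid: left half wild, right half urban

def cell_type(x):
    return "wild" if x < SIZE // 2 else "urban"

RESISTANCE = {"wild": 1.0, "urban": 3.0}   # urban fabric costs more energy to cross

def run(n_boars=50, n_food=30, steps=200, seed=0):
    rng = random.Random(seed)
    # attractors (food) are scattered only inside the city
    food = {(rng.randrange(SIZE // 2, SIZE), rng.randrange(SIZE))
            for _ in range(n_food)}
    boars = [(rng.randrange(0, SIZE // 2), rng.randrange(SIZE))
             for _ in range(n_boars)]
    energy = [20.0] * n_boars
    for _ in range(steps):
        for i, (x, y) in enumerate(boars):
            if energy[i] <= 0 or not food:
                continue                   # exhausted boars, or a foodless city
            # step one cell toward the nearest remaining food patch
            tx, ty = min(food, key=lambda f: abs(f[0] - x) + abs(f[1] - y))
            nx = x + (tx > x) - (tx < x)
            ny = y + (ty > y) - (ty < y)
            energy[i] -= RESISTANCE[cell_type(nx)]
            boars[i] = (nx, ny)
            food.discard((nx, ny))         # food is consumed on arrival
    # "infiltration" = share of boars that ended up inside the city
    return sum(1 for x, _ in boars if x >= SIZE // 2) / n_boars

# compare infiltration when urban food is abundant vs. scarce
print(run(n_food=30), run(n_food=2))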

one large or many smaller cities for maximum productivity

This paper looks at data from 306 Chinese cities to identify how urban spatial structure affects economic productivity. The interesting finding is that low-density cities are more productive when they are monocentric, while high-density cities benefit from a more polycentric structure.

How did urban polycentricity and dispersion affect economic productivity? A case study of 306 Chinese cities

This article aims to assess the impacts of urban spatial structure on economic productivity. Drawing upon detailed gridded population data of 306 Chinese cities at the prefecture level and above, we identify their urban (sub)centers through exploratory spatial data analysis, construct indicators to measure their degrees of polycentricity and dispersion, and model the impacts of spatial structure on urban productivity. A regression analysis reveals that economic productivity is significantly associated with urban spatial structure. Conditioning on other factors, higher degrees of dispersion are associated with lower level of urban productivity whereas the effects of polycentricity depend on urban population density. Less densely populated cities are likely to have higher productivity levels when they are more monocentric, while urban productivity of cities with high population density tend to benefit from a more polycentric structure. The paper concludes with spatial planning implications.
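
The headline result is an interaction effect: polycentricity’s payoff depends on density. A minimal way to express that is a regression with a polycentricity-by-density interaction term. The data below is synthetic, generated just to mimic the reported signs:

```python
# Regression with an interaction term: productivity ~ dispersion +
# polycentricity + polycentricity*density. Synthetic data, invented
# coefficients -- only the signs mirror the paper's findings.
import numpy as np

rng = np.random.default_rng(0)
n = 306
density = rng.uniform(0, 1, n)       # standardized population density
polycent = rng.uniform(0, 1, n)      # degree of polycentricity
dispersion = rng.uniform(0, 1, n)

# Generate productivity with the reported signs: dispersion hurts;
# polycentricity hurts at low density and helps at high density.
productivity = (-0.5 * dispersion
                - 0.4 * polycent
                + 0.8 * polycent * density
                + rng.normal(0, 0.05, n))

X = np.column_stack([np.ones(n), dispersion, polycent, polycent * density])
beta, *_ = np.linalg.lstsq(X, productivity, rcond=None)
print(beta)   # recovers roughly [0, -0.5, -0.4, 0.8]

# Marginal effect of polycentricity flips sign as density rises:
print(beta[2] + beta[3] * 0.1)   # sparse city: negative
print(beta[2] + beta[3] * 0.9)   # dense city: positive
```

The sign flip in the marginal effect is exactly the "depends on urban population density" clause in the abstract.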

mapping urban vegetation on a fine scale

This is an interesting paper about mapping urban vegetation at a fine scale from high-resolution satellite imagery.

Mapping vegetation functional types in urban areas with WorldView-2 imagery: Integrating object-based classification with phenology

Mapping urban vegetation is a prerequisite to accurately understanding landscape patterns and ecological services provided by urban vegetation. However, the uncertainties in fine-scale vegetation biodiversity mapping still exist in capturing vegetation functional types efficiently at fine scale. To facilitate the application of fine-scale vegetation spatial configuration used for urban landscape planning and ecosystem service valuation, we present an approach integrating object-based classification with vegetation phenology for fine-scale vegetation functional type mapping in compact city of Beijing, China. The phenological information derived from two WorldView-2 imagery scenes, acquired on 14 September 2012 and 26 November 2012, was used to aid in the classification of tree functional types and grass. Then we further compared the approach to that of using only one WorldView imagery. We found WorldView-2 imagery can be successfully applied to map functional types of urban vegetation with its high spatial resolution and relatively high spectral resolution. The application of the vegetation phenology into classification greatly improved the overall accuracy of classification from 82.3% to 91.1%. In particular, the accuracies of vegetation types was improved by from 10% to 13.26%. The approach integrating vegetation phenology with high-resolution remote sensed images provides an efficient tool to incorporate multi-temporal data into fine-scale urban classification.
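
The accuracy figures in the abstract (82.3% vs. 91.1%) are standard confusion-matrix arithmetic. A minimal sketch with a made-up three-class matrix (rows are reference samples, columns are predictions):

```python
# Confusion-matrix accuracy metrics. The counts below are hypothetical,
# not the paper's actual validation data.
def overall_accuracy(matrix):
    correct = sum(matrix[i][i] for i in range(len(matrix)))
    total = sum(sum(row) for row in matrix)
    return correct / total

def producers_accuracy(matrix, i):
    """Share of reference samples of class i that were classified as i."""
    return matrix[i][i] / sum(matrix[i])

# classes: deciduous tree, evergreen tree, grass (invented counts)
single_date = [
    [70, 20, 10],
    [15, 80,  5],
    [ 5, 10, 85],
]
print(overall_accuracy(single_date))       # 0.7833...
print(producers_accuracy(single_date, 0))  # 0.7
```

Adding a second image date improves the matrix diagonals for classes that look alike in one season but differ phenologically, which is where the paper’s 82.3% to 91.1% jump comes from.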

habitat complexity doesn’t affect biodiversity?

There’s theory, and then there is collecting actual evidence to support a theory, which tends to be messy. In this case, the theory is that more complex habitats should support more diversity. They didn’t, at least in this study of insects and spiders in Sydney.

Habitat complexity does not affect arthropod community composition in roadside greenspaces

Urban greenspaces including remnant patches of vegetation, backyard gardens and public parks provide important habitat for wildlife conservation. Maintaining and enhancing the conservation value of these spaces requires both an understanding of the biodiversity they support, and the factors, including habitat traits, influencing species occurrence. Roadside greenspaces, including road verges and median strips are often overlooked in current greenspace biodiversity studies. We quantified arthropod community assemblages in roadside and public park greenspaces, and determined if habitat complexity was an important trait influencing species composition in these areas. Using pitfall traps, we sampled ground dwelling arthropods along five major roads in the greater Sydney Region and in public parks. Whilst roadside greenspaces (road verges and median strips) and public parks supported significantly different arthropod assemblages, habitat complexity had no impact on community assemblage and neither factor affected the assemblage of key arthropods taxa including ants, beetles and spiders. Additionally, in public parks but not road side greenspaces we found an effect of habitat complexity on arthropod abundance; arthropods were more abundant in high complexity park sites. Our results highlight the unique arthropod community assemblage supported by roadside greenspaces, and suggest management practices like increasing habitat complexity may be important in some but not all urban greenspace types.

utilities, power lines, and wildfires

Apparently the devastating wildfires in California recently may have been sparked by downed electric lines, and a California law may hold the utilities responsible for those lines liable for massive damages. Their stocks are now plunging as a result. Somewhat ironically, the utilities are arguing that the severity of the wildfires is a result of climate change, even if they were sparked by power lines. Climate change is a “societal issue” requiring “holistic solutions”, they say. I’m thinking that the mix of fossil and renewable fuels used to generate electricity could be part of the problem.

11 cities most likely to run out of drinking water

BBC has a list of the 11 cities most likely to run out of drinking water. Cape Town, South Africa isn’t on the list because it is already running out of drinking water. Here’s the list:

  1. Sao Paulo
  2. Bangalore
  3. Beijing
  4. Cairo
  5. Jakarta
  6. Moscow
  7. Istanbul
  8. Mexico City
  9. London
  10. Tokyo
  11. Miami

London and Tokyo surprised me, while some of the high-growth developing capitals didn’t surprise me but are nonetheless extremely concerning. There are plenty of cities that would probably be on the list but aren’t because they have invested massively in desalination. Many of the coastal cities on this list may ultimately have to follow suit, or else convince their national governments to invest in major pipeline projects. And this is just drinking water, of course. Food has to be grown elsewhere and brought in to all the world’s cities, and industry has water needs of its own. Ecosystems need water too, but does anyone expect them to be anywhere other than last on this list?

Alzheimer’s reversed in mice

An experimental genetic treatment has been able to completely remove the brain plaques associated with Alzheimer’s disease in mice.

BACE1 deletion in the adult mouse reverses preformed amyloid deposition and improves cognitive functions

BACE1 initiates the generation of the β-amyloid peptide, which likely causes Alzheimer’s disease (AD) when accumulated abnormally. BACE1 inhibitory drugs are currently being developed to treat AD patients. To mimic BACE1 inhibition in adults, we generated BACE1 conditional knockout (BACE1fl/fl) mice and bred BACE1fl/fl mice with ubiquitin-CreER mice to induce deletion of BACE1 after passing early developmental stages. Strikingly, sequential and increased deletion of BACE1 in an adult AD mouse model (5xFAD) was capable of completely reversing amyloid deposition. This reversal in amyloid deposition also resulted in significant improvement in gliosis and neuritic dystrophy. Moreover, synaptic functions, as determined by long-term potentiation and contextual fear conditioning experiments, were significantly improved, correlating with the reversal of amyloid plaques. Our results demonstrate that sustained and increasing BACE1 inhibition in adults can reverse amyloid deposition in an AD mouse model, and this observation will help to provide guidance for the proper use of BACE1 inhibitors in human patients.