John Lott and the War on Truth: A Response to Lott’s Continued Lies


(Photo credit: Flickr user Fling93; image has been modified from the original)

Nearly seven months after we published our article detailing John Lott’s storied history of academic dishonesty, Lott decided to issue his own rebuttal to our work. He begins his screed by dismissing our skirmish with Gary Kleck over defensive gun use as a copy-and-paste rehashing of other people’s work. Lott’s reference is appropriate, though for precisely the opposite reason he intended: like Kleck before him, Lott ignored half our article and repeated errors and falsehoods that have been debunked from the moment his work was published.

Amusingly, Lott explicitly mentions that he saved a screenshot of our original article, possibly concerned that we might surreptitiously alter its contents after his critique. Given Lott’s own extensive history of using this very tactic (which he has continued to employ even after our article detailed the behavior), we have taken the same precaution. You can find an archived version of his critique here: https://archive.is/j9otM.

What follows is an extensive counter-rebuttal that addresses all of Lott’s claims in his response. To demonstrate we haven’t skipped anything (other than his quotes of our article and a list of praise for his books), we extensively quote from Lott’s article and have added headers to make it easier to find each topic and specific rebuttal.  

We must add that Lott is extremely slippery: his responses are littered with half-truths that are just technical enough to confuse anybody without a statistics background, yet so transparently incorrect to experts in the field that many of the specialists we have corresponded with maintain an explicit policy of not engaging Lott at all, lest they lend him legitimacy.

Instead of actually responding to his opponents, Lott simply links back to incomplete and thoroughly repudiated articles he wrote a decade ago. His debate strategy is the academic equivalent of security through obscurity; he counts on his readership being too lazy to follow the rabbit hole to the end. Indeed, as sociologist Ted Goertzel wrote in 2000, “Faced with critics who want some proof that they can predict trends, regression modelers often fall back on statistical one-upmanship. They make arguments so complex that only other highly trained regression analysts can understand, let alone refute, them. Often this technique works. Potential critics simply give up in frustration.”

We hope that if you’ve made it this far, you’ll see it through to the end. The future of our country quite literally depends upon ordinary citizens refusing to give in to frustration.

Tim Lambert Ad Hominem

“Tim Lambert as a source.  Professor Jim Purtilo at the University of Maryland put up a post in 2004 that he has updated over the years that shows that Lambert has been caught falsifying evidence on multiple occasions and has otherwise been dishonest.  See:

John Lott begins his argument with a blatant ad hominem attack on Tim Lambert, a curious tactic given that Lott’s own history would be far more disqualifying. To support his attack (rather than address Lambert’s 47-page thesis), Lott relies on Jim Purtilo, a University of Maryland professor. What Lott doesn’t mention is that Purtilo is a longtime friend of his, and was an active participant in a Wikipedia edit war that pitted half a dozen or so fervent Lott supporters (some of them were even real people) against the Wikipedia editor community (of which Lambert was a part). Edit wars are a relatively common phenomenon and can get quite heated. As Lambert details, Lott’s supporters (including Purtilo) kept inserting false material into Lott’s Wikipedia entry. Purtilo is nowhere close to an objective bystander, especially given his long history of personally funding a pro-gun newsletter in Maryland. Ironically, by their own logic, people like Lott and Purtilo are precisely the reason nobody can trust Wikipedia entries.

Of course, none of this actually addresses the validity of Lambert’s critique, and is the first of many attempts by Lott to obfuscate the points we raise.

Using “all the crime data available” and Survey Problems

“My paper with Mustard as well as my book looked at all the crime data available when those pieces were written and I updated that data with each successive updated edition of my book.  

— Paper with David Mustard: crime data for all the counties and states in the US from 1977 to 1992.  

— First edition of MGLC: crime data for all the counties and states in the US from 1977 to 1992 as well as up to 1994 for a comparison.  Literally hundreds of different factors that could impact crime rates were accounted for.

— Second edition of MGLC: crime data for all the counties, cities, and states in the US from 1977 to 1996. 

— Third edition of MGLC: crime data for all the counties and states in the US from 1977 to 2005. 

The regressions in those publications account for all the data available (all counties, all cities, all states for all the years the data is available), no cherry picking, and, following earlier work by William Alan Bartley and Mark Cohen, report all possible combination of these hundreds of control variables to show that the results are not sensitive to a particular specification.”

For starters, Lott’s claim that he used “all the crime data available” is blatantly false, which is easy to demonstrate. As we will detail below, Lott uses two surveys to demonstrate that gun ownership increased while more than 80 others show no effect or a decrease. Lott uses data from Dade county when it supports his position and conveniently overlooks inconvenient data from the same county. Lott also consistently fails to include proper criminal justice controls in his analysis, fails to use robust standard errors, and commits many other basic econometric oversights that undermine his results.

More importantly, it is bizarre that Lott would even want to claim this, as using “all the crime data available” is not a desirable feature of economic analysis anyway. It is clear he is trying to convince a lay readership when he reiterates this claim over and over again; real academics don’t gather all the data they can scrounge, throw it into a regression, and pray that doing so will accurately capture all the effects they are trying to measure. Choosing which data are relevant and not flawed is the most important part of econometrics, and requires a high degree of trust in the researcher’s integrity. Attempting to use all the available data just to claim comprehensiveness is the antithesis of careful data analysis.

Lott’s sensitivity checks have proven woefully inadequate. Indeed, if there is any consensus that has emerged in the debate over Right-to-Carry laws, it is that the models are extremely sensitive to even the smallest changes in specifications and control variables, as demonstrated by the National Research Council (NRC) and numerous other statistical examinations.
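To see why specification sensitivity is so damning, consider a minimal synthetic sketch (our own toy example with invented variables, not a re-analysis of Lott’s data): when a relevant control is correlated with the variable of interest, merely dropping it can flip the estimated sign of the effect.

```python
import numpy as np

# Toy demonstration of specification sensitivity via omitted-variable bias.
# None of this is Lott's data; the point is only that a single control can
# flip the sign of the coefficient of interest.
rng = np.random.default_rng(42)
n = 5000
z = rng.normal(size=n)                         # a confounder
x = 0.9 * z + rng.normal(scale=0.5, size=n)    # "policy" variable, correlated with z
y = -1.0 * x + 3.0 * z + rng.normal(size=n)    # true effect of x is NEGATIVE

X_omit = np.column_stack([np.ones(n), x])      # specification 1: confounder omitted
X_full = np.column_stack([np.ones(n), x, z])   # specification 2: confounder included

b_omit = np.linalg.lstsq(X_omit, y, rcond=None)[0]
b_full = np.linalg.lstsq(X_full, y, rcond=None)[0]
print(f"coefficient on x, confounder omitted:  {b_omit[1]:+.2f}")  # comes out positive
print(f"coefficient on x, confounder included: {b_full[1]:+.2f}")  # close to -1.0
```

With hundreds of candidate controls, as in the RTC literature, there are many such choices, and a model whose headline result survives only one particular combination of them proves very little.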

“The only survey discussion that I made in my first two editions of MGLC was for the 1988 and 1996 voter exit poll surveys.  Those two exit polls included a question on gun ownership.  The third edition of MGLC updates the data to include the 2004 exit poll survey.  The reason for using those large exit polls is that they can contain up to 32,000 people surveyed (though in other years it might only be about 3,600) and that allows one to breakdown the data on a state by state basis to see how gun ownership is changing across different states.  The GSS survey only has data for 600 to 800 observations at a time every two years.  Some other surveys may occasionally have up to 1,200 people, but those samples are just too small to make cross state comparisons.  So I wasn’t looking at these exit poll surveys to check general gun ownership rates for the whole US, but to look at the data for specific states.”

Lott’s use of these exit poll surveys is statistical malpractice. As Hemenway explains:

“Sometimes it is not the model that Lott uses but the data that are just plain wrong. For example, in the one analysis not involving carrying laws, Lott takes data on gun ownership from 1988 and 1996 voter exit polls and purports to show that higher levels of gun ownership mean less crime. According to the polling source, Voter News Service, these data cannot be used as Lott has used them — either to determine state-level gun ownership or changes in gun ownership. For example, the data from the exit polls indicate that gun ownership rates in the United States increased an incredible 50 percent during those eight years, yet all other surveys show either no change or a decrease in the percentage of Americans who personally own firearms.”

When Lott tried to defend his use of the surveys, Hemenway issued another rebuke:

“As the source of the voter exit poll has emphasized, these data are not appropriate for determining levels of or changes in gun ownership. (For example, the percentage of adults who personally owned a gun in 1996 was not 39 percent, as the poll suggests, but 25 percent.) Lott’s reweighting of the data does not make them appropriate.”

Further, as Lambert extensively details, while Lott was quite content to use these exit poll surveys when he needed gun ownership rates to rise to support his hypothesis, in a different study Lott used a more respected survey that showed gun ownership rates declining. Lott was then able to tie this declining gun ownership rate to an increase in crime in states with safe storage laws, again fitting his hypothesis. This is the very definition of cherry-picking data sources.

“The surveys that DeFilippis and Hughes are referring to involve people carrying guns for any reason, including going hunting or simply moving guns between places (See the discussion in MGLC).”

On page 200, Chapter 6 of Gary Kleck’s book Targeting Guns, Kleck says the following:

“Therefore, the focus here is on national surveys of probability samples. None of the national surveys of adults (Table 6.1 Panel A) are satisfactory for estimating the prevalence of gun carrying. Only one of the surveys specified carrying for protection against crime, so the rest probably include a good deal of carrying linked with recreational uses of guns or other innocuous activities. Further, the Gallup survey, which did specifically ask about carrying for protection, asked the question only of Rs reporting that they personally owned guns. This procedure excludes people who carry guns belonging to other members of their household, a practice likely to be more common among women. The surveys also did not define a specific time period in the past to which Rs’ answers were to pertain. The estimates indicate that between 5 and 11% of U.S. adults at least occasionally carry guns in public places. None of the surveys of adults asked how many times carriers carried their guns, so they do not allow one to estimate the incidence of carrying or what fraction of the population is carrying on any given day.” (emphasis added)

The Gallup poll referenced above, which controls for Lott’s concern, indicates that 5% of people carry guns for protection. As Kleck indicates, this estimate could be an underestimate due to how the question is phrased. Some of the surveys referenced include all forms of carrying (for protection or otherwise), but at least one specifically asks about carrying for defense, contrary to what Lott suggests.

Kleck’s data on carrying firearms is also supported by a National Institute of Justice survey, which found that in 1994, of 44 million gun owners, 14 million (approximately 7% of the adult population) had carried a firearm at least once in the past year. This survey did not include people who carried for transportation or hunting.

It is troubling that Lott has consistently lied about these surveys for the past decade when there is no good reason to do so. It would be more than acceptable for Lott to question the validity of the surveys, or admit that he misread the results and move on. Instead, Lott engages in a losing battle that only serves to cast doubt on all his other claims.

Defensive Gun Use

“Again, I refer to the same discussion from MGLC as it shows that this 1% number is misleading and it also shows a simple numerical example regarding what would be required to get the expected reduction in crime.  This is part of a consistent pattern where DeFilippis and Hughes make no attempt to discuss the responses that I have already made on these issues….

I have a long discussion about why purely cross-sectional analysis is unreliable.  Regarding: “zip codes with the highest violent crime before Texas passed its concealed carry law had the smallest number of new permits issued per capita.”  Well, given that it cost $140 and 10 hours of training to get a permit, it isn’t very surprising to me that poor areas have both high crime rates and low permit rates.  As to cherry-picking, even if cross-sectional analysis was useful, somehow the authors have to explain why they picked one city in the entire US to look at.  In any case, I note this paper and respond to it in MGLC.”

Unintentionally, Lott refutes his own argument in More Guns, Less Crime. During Lott’s sample period, the general population faced an aggravated assault rate of 0.18% annually. However, according to Lott’s own work (pages 133-134 of MGLC), and assuming that a generous 2% of the population obtained permits, 0.65% of concealed carry holders would have to stop an aggravated assault each year. In other words, for permit holders to reduce aggravated assaults by the amount Lott’s models show, they would have to stop 3.6 times more aggravated assaults than they would likely even encounter. Whoops.
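The arithmetic is simple enough to check directly. A quick sketch using the figures quoted above (the 2% permit rate is the charitable assumption, not a measured value):

```python
# Back-of-the-envelope check of the permit-holder arithmetic above.
agg_assault_rate = 0.0018   # annual aggravated assault rate for the general population
required_stop_rate = 0.0065 # share of permit holders who must stop an assault each year
                            # for Lott's estimated reduction to hold (MGLC pp. 133-134)

# Ratio of assaults permit holders must stop to assaults they would even
# encounter, charitably assuming they face the same risk as everyone else.
ratio = required_stop_rate / agg_assault_rate
print(f"Permit holders must stop {ratio:.1f}x more assaults than they encounter")
# -> 3.6x
```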

Of course, we are not the first to point out this mathematically inconvenient fact. Lambert detailed this problem more than a decade ago, and we cited him quite prominently in our article. Following his typical modus operandi, Lott regurgitates decade-old defenses, pretends that our sources don’t thoroughly repudiate those defenses, and then proceeds to accuse us of “ignoring” what he has written.

Even further, all of this assumes that permit holders face the same level of aggravated assault that non-permit holders do. That assumption is completely implausible. As Lott all but admits, permits are far more prevalent in areas that already had low levels of crime even before RTC laws, and this has been confirmed by more recent Illinois data. This indicates that the rate of crime permit holders face is almost certainly lower than that for the general population, meaning that permit holders would likely have to stop far more than 3.6 times the number of aggravated assaults they would encounter. While Lott flippantly dismisses this distribution of permit holders, the reality has devastating implications for the validity of his hypothesis.

“Note on the Dade county data….

Anyone who has been following the debate on justifiable police homicides knows that the data is not very reliable.  The justifiable homicide data for civilians is even worse.”

For starters, it is curious that Lott will dismiss a data set for being “not very reliable,” yet does not apply the same standard to the county-level crime data he himself uses. It is well established that county-level data, the basis upon which Lott’s original thesis was founded, is so riddled with errors and gaps (even after imputation strategies) as to be useless for criminological analysis. Lott had a skirmish with criminologist Michael Maltz over this subject and, instead of soberly responding to Maltz’s latest rebuttal, Lott, under the pseudonym of Mary Rosh, accused Maltz of fraud. Clearly, this is the behavior of a serious academic we should trust…

To be clear: Maltz’s rebuttal to Lott is devastating — Maltz is the expert in the very type of reports that Lott uses in his analysis.  As Maltz writes, “I’ve been involved in studying the characteristics of the UCR since 1995, which I started during my tenure as a Visiting Fellow at the Justice Department’s Bureau of Justice Statistics (BJS).”

Maltz finds that:

  1. There are gaps in the FBI’s Crime Data that are not correctly handled by Lott.  This includes a 10% decrease between 1977 and 1992 in the population covered by Uniform Crime Reports. Furthermore, within states there are many counties in which “not a single agency provided crime data to the FBI”, making it impossible to produce meaningful estimates for these areas.  
  2. Lott miscalculates crime rates. In the UCR, there are many zero-population counties that cover “transit police, park police, or campus police, or any agency with overlapping jurisdiction with the primary policing agency” which nevertheless still report crime.  Because the calculated crime rate of these areas would be infinite (a divide-by-zero error), Lott assumes that the crime rate for these counties is zero. Agencies with fewer than 5 months of reported crime data are also excluded from the National Archive of Criminal Justice Data (NACJD) files used by Lott, and are likewise assumed to have zero reported crime.  As Maltz explains, “Lott claims that these many data gaps amount to random error, easily handled by standard statistical techniques. This is absolutely false: the error isn’t random, and standard techniques don’t apply.”
  3. Lott incorrectly handles changes in imputation strategy. Starting with the 1994 data, the NACJD changed its imputation strategy to be consistent with the practice used by the FBI, and it issued the following warning about comparisons with pre-1994 data:

“These changes will result in a break in series from previous UCR county-level files. Consequently data from earlier year files should not be compared to data from 1994 and subsequent years because changes in procedures used to adjust for incomplete reporting at the [agency] level may be expected to have an impact on aggregates for counties in which some [agencies] have not reported for all 12 months.” [Emphasis added.]

Nevertheless, Lott uses data from 1993-1996 in the second edition of More Guns, Less Crime.  In later editions of More Guns, Less Crime, Lott finds that some of his results are sensitive to whether or not state or county-level data is used, and dismisses state data altogether:

“Yet, at least in the case of property crimes, the concealed-handgun law coefficients are sensitive to whether the regressions are run at the state or county level. This suggests that aggregating observations into units as large as states is a bad idea.” (pg. 64)

Ironically, this is the exact opposite conclusion that Lott should have drawn, as all the evidence points to the fact that the county level data, not state level, is essentially useless. Lott likely chose to dismiss the state data because it was inconsistent with More Guns, Less Crime.
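Maltz’s point that this error is non-random (and thus not handled by standard techniques) is easy to illustrate with a toy example. The sketch below uses entirely invented numbers: every county has the same true crime rate, but small counties have more non-reporting agencies whose missing counts get coded as zero, so their measured rates are systematically deflated rather than merely noisy.

```python
TRUE_RATE = 0.05  # identical true crime rate everywhere (crimes per capita)

def measured_rate(population, share_not_reporting):
    """Crime rate computed after non-reporting agencies are coded as zero crime.

    This mimics the imputation Maltz criticizes: missing reports are treated
    as zero crimes, so measured rates are biased downward, not noisy.
    """
    true_crimes = TRUE_RATE * population
    reported_crimes = true_crimes * (1 - share_not_reporting)
    return reported_crimes / population

# Small counties have far more missing agency-months than large ones,
# so the error is systematically related to county size; it is not random.
large = measured_rate(population=500_000, share_not_reporting=0.02)
small = measured_rate(population=5_000, share_not_reporting=0.40)

print(f"true rate everywhere:        {TRUE_RATE:.3f}")
print(f"large county, measured rate: {large:.3f}")  # 0.049, near the truth
print(f"small county, measured rate: {small:.3f}")  # 0.030, badly deflated
```

Because the bias is correlated with county size, any regressor that also varies with county size can soak up this measurement artifact, exactly the situation in Lott’s county-level panels.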

Putting that aside, the Dade county data included all forms of defensive gun use reported to the police, so Lott’s point about justifiable homicides is off base.

Second, in the link Lott provides, the data on Kalamazoo county heavily undermines Lott’s case. As Lott mentions, the data indicates that there were three justified homicides by civilians between 2000 and 2010, despite FBI data only showing one. Over this period, the county’s population ranged from 240,000 to 250,000. As Kleck’s surveys indicate (numbers which Lott supports), roughly one percent of the population has a DGU each year, which would mean around 2,400 DGUs in the county each year. Let’s be charitable and cut this number to 1,000 each year.

Lott goes on to claim that “by any measure defensive gun uses, only a tiny fraction of one percent of defensive gun uses result in the criminal attacker being killed or wounded.” This is a blatant lie, as Kleck’s own surveys estimate that the attacker is killed or wounded in more than 2% of DGUs. But, for the sake of argument, let’s assume that Lott was being fully honest, and say that only 0.2% of DGUs result in the death of the attacker. This would correspond to two justifiable homicides in the county each year. Over 10 years this would mean 20 justifiable homicides from DGUs. Yet the investigation could only turn up three justifiable homicides over that period. So even after making several assumptions that are absurdly charitable to Lott, his DGU claims still fail. Oops.
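Here is that back-of-the-envelope calculation spelled out, using the deliberately charitable inputs from the paragraph above:

```python
# Check of the Kalamazoo County numbers, using the charitable figures above.
dgus_per_year = 1_000   # cut down from the ~2,400 implied by Kleck's 1% rate
fatal_share = 0.002     # 0.2%, taking Lott's "tiny fraction of 1%" at face value
years = 10

expected = dgus_per_year * fatal_share * years
print(f"Expected justifiable homicides over {years} years: {expected:.0f}")  # -> 20
print("Actually found by the investigation: 3")
```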

Since Lott’s initial work was published, we now have extensive empirical data demonstrating the rarity of defensive gun uses (DGUs) on a national level. The Gun Violence Archive collects DGU data on a national scale, and could only find 1,600 defensive gun uses in 2014. This includes all types of DGUs, even ones where the gun isn’t fired. The best available empirical data decisively refutes Lott’s claims. This is probably why, instead of using empirical data to support his claims, Lott relies on survey data, the least reliable data source possible.

Problematic Demographic Claims

“My responses to these claims can be found in MGLC (here and here), though DeFilippis and Hughes ignore my responses.”

Let’s unpack these claims as simply as possible. Dr. Alschuler argues that Lott’s work is filled with results that are inconsistent with established facts in criminology. The coefficient on one of Lott’s variables, for example, implies that middle-aged black females are responsible for a large share of crime, meaning Lott’s model is likely misspecified. Lott argues that this result is similar to his finding that auto theft is higher in areas with high personal incomes. Lott contends that this result doesn’t mean that people with higher incomes tend to steal cars more, but rather that criminals likely target those areas due to the abundance of expensive cars. Hence, the reason there is an association between wealthier individuals and car theft is that those individuals are victims at higher rates.

Lott uses this as an analogy to explain away the bizarre result with respect to black females, arguing that “the positive relationship may exist because these people are relatively easy or attractive victims.” However, we know this isn’t the case (and so does Lott). Black females are neither victims nor perpetrators of crime at significantly higher rates than the general population. If black females are neither victims nor perpetrators at unusual rates, but the statistical model Lott is using nevertheless shows that they have a massive effect on the crime rate, then there is something terribly wrong with the model. There is no other explanation.

Further, Lott boasts that he “overcontrols” for demographic variables. However, overcontrolling is one of the best-known red flags in econometrics. Lott’s approach can induce multicollinearity and amplify “garbage in, garbage out” results should any of the underlying data be flawed. As lecture notes for a management class at Northwestern University amusingly put it:

“You will undoubtedly come across “kitchen sink” regressions that include dozens of variables. This is often an indication that the researcher was brain dead, throwing in every available predictor variable, rather than thinking about what actually belongs. You can imagine that if completely different predictors had been available, the researcher would have used those instead. And who knows what the researcher would have done if there were thousands of predictors in the data set? (Not to mention the possibilities for exponents and interactions!)”
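What goes wrong in a kitchen-sink regression is easy to show with synthetic data. In the sketch below (invented data, not Lott’s), a near-duplicate predictor is thrown in alongside the real one; the fitted coefficients become huge, offsetting, and unstable from sample to sample, even though the lean model recovers the truth every time.

```python
import numpy as np

def ols(X, y):
    """Ordinary least squares via numpy's least-squares solver."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

rng = np.random.default_rng(0)
n = 200
for trial in range(3):
    x1 = rng.normal(size=n)
    x2 = x1 + rng.normal(scale=0.01, size=n)      # nearly a duplicate of x1
    y = 1.0 * x1 + rng.normal(size=n)             # truth: only x1 matters, beta = 1

    lean = ols(np.column_stack([x1]), y)          # parsimonious model
    sink = ols(np.column_stack([x1, x2]), y)      # "kitchen sink" model
    print(f"trial {trial}: lean b1={lean[0]:+.2f} | "
          f"sink b1={sink[0]:+.2f}, b2={sink[1]:+.2f}")
# The lean coefficient stays near +1; the kitchen-sink pair swings wildly
# (often into the dozens, with opposite signs) from one trial to the next.
```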

Highway Robbery

“My response to this type of point is available here in MGLC.”

From Lott’s link:

“First, as anyone who has carefully reads my work will know, it is simply not true that the results show “little or no effect on robbery rates.” Whether the effect was greater for robbery or other violent crimes depends on whether one simply compares the mean crime rates before and after the laws (in which case the effect is relatively small for robbery) or compares the slopes before and after the law (in which case the effect for robbery is the largest).”

Let’s be clear about why failing to find an effect on robbery rates unequivocally shatters the entire edifice of Lott’s research.  Lott is quick to dismiss this point with distracting magician’s patter about why we should only look at RTC’s effect on rape, aggravated assault, and murder. This is deliberately misleading and flies in the face of empirical evidence and theoretical predictions.

Gordon Hawkins and Franklin Zimring were the first to delineate the two mechanisms by which RTC laws could credibly impact crime:

  • Announcement – the announcement of RTC laws increases criminal anxiety, deterring bad behavior
  • Crime Hazard – changes to RTC laws produce changes in the behavior of potential criminal victims, which in turn raises the cost of criminal interaction with ordinary citizens, which eventually produces changes in the crime rate.

These mechanisms are depicted in the figure below:

[Figure: the two deterrence mechanisms]

The second mechanism, the “increased crime hazard” model, is the version canonized in economic theory, and it is indeed the version advocated by Lott. The problem, noted by Hawkins and Zimring, is that Lott makes no attempt to rigorously evaluate the crucial intermediate step between the passage of RTC laws and reductions in crime: namely, changes in the behavior of law-abiding citizens. They explain: “Lott and Mustard make no attempt to measure carrying of handguns by citizens, use by citizens in self-defense from crime, or offender behavior in relation to street crime. There is merely the legislation and the crime data, linked only by the argument that no plausible rival hypothesis exists other than “shall issue” laws to explain the lower than predicted levels of selected crimes.”

Of course, when intermediate measures like defensive gun use, citizen awareness of RTC laws, or the crime risk to the average gun owner have actually been measured, they fail to support Lott’s hypothesis.

Why is the “crime hazard” model problematic for Lott? As Ayres and Donohue explain, it means that criminal recidivists are the most likely group to be deterred by RTC laws, given that they come into contact with citizens on a more regular basis than non-repeat criminals and are thus more likely to confront a gun-owning citizen. Ayres and Donohue write, “If 2 percent of the population carries concealed weapons, then a criminal who robs 100 people a year faces an 86.7 percent chance of encountering a concealed weapon over the course of the year. A 2 percent chance of encountering an armed victim may not be sufficient to deter a one-time criminal, but it may be sufficient to deter someone from making a profession out of robbery.”
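Their 86.7 percent figure is just the complement of never once meeting an armed victim across 100 independent robberies; a one-line check:

```python
# The Ayres & Donohue encounter probability, computed directly.
carry_rate = 0.02       # share of the population assumed to carry concealed
robberies_per_year = 100  # robberies committed by a hypothetical career criminal

p_encounter = 1 - (1 - carry_rate) ** robberies_per_year
print(f"Chance of meeting at least one armed victim: {p_encounter:.1%}")
# -> 86.7%
```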

Furthermore, we would expect to see stronger decreases in robbery than in other crimes such as murder, because there is at least some non-trivial risk that RTC laws might induce ordinary citizens to become murderers (one can imagine a hostile argument turning lethal in the presence of a gun), but no comparable effect offsets reductions in robbery. As Ayres and Donohue explain, “But while law abiding citizens with concealed weapons may get angry and commit murder, it is less likely that they will get angry and commit robbery, which is primarily an economically motivated crime.”

What Lott fails to appreciate is that all of this means that, if RTC laws have any effect at all on crime, we should see the effect most prominently in crimes committed by recidivists, of which robbery is the most likely candidate. Instead, Lott finds effects for non-repeat crimes like murder and rape, but not for robbery.

Now, to respond to his comments directly. Changes in the slope on robberies do not matter if those changes are not statistically significant, as there is no way to differentiate them from random chance. This is especially true given that, in Lott’s own graph (see below), the “number of robberies per 100,000” six years before the adoption of concealed carry is only marginally higher than six years after (104 compared to 100). Because of the poor econometric practices used by Lott (including using arrest rates to predict crime rates despite the well-established problems with doing so), there is no way to know whether these changes are part of a natural crime cycle.

[Figure: Lott’s graph of robberies per 100,000 before and after adoption of concealed carry laws]
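One of those “well-established problems” deserves a concrete illustration. Reported crime counts appear both in the dependent variable (the crime rate) and in the denominator of the arrest-rate regressor (arrests divided by reported crimes), so measurement error in reported crime alone manufactures a negative correlation between the two, with no deterrence anywhere in the data. A toy simulation (all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
pop = 100_000

true_crimes = rng.lognormal(mean=6, sigma=0.3, size=n)
meas_error = rng.lognormal(mean=0, sigma=0.3, size=n)   # crime reporting noise
measured_crimes = true_crimes * meas_error

arrests = 0.3 * true_crimes                  # arrests track true crime: no deterrence at all
crime_rate = measured_crimes / pop
arrest_rate = arrests / measured_crimes      # noisy crime count in the denominator

corr = np.corrcoef(crime_rate, arrest_rate)[0, 1]
print(f"corr(crime rate, arrest rate) = {corr:+.2f}")
# Comes out clearly negative, purely because the same noisy crime count sits
# on both sides of the relationship; no behavioral deterrence was simulated.
```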

“Second, it is not clear that robbery should exhibit the largest impacts primarily because “robbery” encompasses many crimes that are not street robberies. For instance, we do not expect bank or residential robberies to decrease, and, in fact, they could even rise.  Allowing law-abiding citizens to carry concealed handguns makes street robberies more difficult, and thus may make other crimes like residential robbery relatively more attractive.”

Every single sentence in Lott’s response is deliberately misleading. From 1973 to 2011, by far the most common type of robbery was street/highway robbery, which accounted for between 42% and 56% of all robberies. Over the same period, residential robberies accounted for only 10-17% of robberies, while the percentage of bank robberies never exceeded 2%. Street/highway robberies account for such a large percentage of overall robberies that Lott should have been able to find an effect of RTC laws on robbery if there were one.

For a far better analysis of robberies, take a look at Figure 1 from the study done by Ayres and Donohue:

[Figure 1 from Ayres and Donohue: robbery rates over time, with states grouped by concealed carry adoption]

The authors point out the following from this figure. First, the 22 states that never adopted concealed carry laws (the pink line) had substantially higher rates of robbery than the states that eventually adopted shall-issue laws. These differences preceded the widespread adoption of concealed carry laws, and so were caused by external factors unrelated to gun carrying in public. Ayres and Donohue point out that, from graphical inspection alone, the only group that experienced significant downturns in robbery was the 22 states that never adopted shall-issue laws. If the “change in slope” matters at all, as Lott seems to believe, then it is apparent you can get significant decreases in the robbery rate by not adopting shall-issue laws. Furthermore, by the same logic, concealed carry laws INCREASE robbery, as the teal line changes slope after 1985 and steadily increases until the 90s, while the blue line sharply increases after 1989.

Lott’s own claim that “concealed handguns makes street robberies more difficult” does not bear out in the data. Between 1977 and 1992 (the time period analyzed by Lott), the share of robberies occurring on streets and highways INCREASED from 46% to 56%, while the share of residential robberies remained constant. Clearly, criminals were not being deterred by the prospect of meeting armed resistance.

Curiously, Lott, who routinely announces that he uses “all the available data,” somehow missed the fact that the FBI breaks robberies down into categories, and that he could have simply run regressions on street/highway robberies. The fact that he chose not to, and instead claims the robbery category is too broad, is instructive. [From Lott, in the Mustard article: “given that the FBI includes many categories of robberies besides robberies that “take place between strangers on the street,” it is not obvious why this should exhibit the greatest sensitivity to concealed handgun laws.”]

Obfuscating Black and Nagin

“Frank Zimring and Gordon Hawkins as well as Dan Black and Daniel Nagin are intertwined here….

Again, DeFilippis and Hughes ignore that I have extensive discussions on this in both MGLC and a 1998 paper published in the Journal of Legal Studies.   

1) Note that even throwing out all counties with populations below 100,000 and Florida, still produced statistically significant drops in some violent crime categories.  They thus removed about 89 percent of the data in the study.  There are so many combinations of county sizes and states that could have been dropped from the sample — for example, why not Georgia or Pennsylvania or Virginia or West Virginia or any of the other six states?  Why not drop counties with populations under 50,000?  Black and Nagin never really explain the combination that they pick.”

For starters, Black and Nagin became suspicious of Lott’s model when they noticed absolutely bizarre results that were inconsistent across states and time. As Black and Nagin explain: “Murders decline in Florida but increase in West Virginia. Assaults fall in Maine but increase in Pennsylvania. Nor are the estimates consistent within states. Murders increase, but rapes decrease in West Virginia. Moreover, the magnitudes of the estimates are often implausibly large. The parameter estimates imply that RTC laws increased murders 105 percent in West Virginia but reduced aggravated assaults by 67 percent in Maine.”

This points to the fact that there may be a single state with large swings in crime, and the model is attempting to overfit to that data point.  It turns out that state is Florida:

“With the Mariel boat lift of 1980 and South Florida’s thriving drug trade, Florida’s crime rates are quite volatile. Further, 4 years after its 1987 passage of the RTC law, Florida passed several other gun-related measures, including background checks of handgun buyers and a waiting period for handgun purchases”

Furthermore, as pointed out in a paper by Lambert that Lott routinely ignores, excluding counties smaller than 100,000 is not a necessary condition for Black and Nagin’s critique.   “Black and Nagin report that removing Florida makes the effects on murder and rape not statistically significant whether or not the analysis is restricted to large counties.” [Emphasis added]

Second, the reason clearly stated in the paper for only looking at counties larger than 100,000 is that data for smaller counties is extremely weak in quality and riddled with missing values. Restricting the data set in this way decreases the missing-data rate to “3.82 percent for homicide, 1.08 percent for rape, 1.18 percent for assault, and 1.09 percent for robberies.” It would be absurd to argue that the vast majority of the effect of concealed carry on crime is concentrated in small counties where the data happens to be conveniently abysmal and incomplete. Indeed, Lott and Mustard make this very point for us: the impacts of RTC laws are greater in more populous areas, and “larger counties have a much greater response . . . to changes in the [RTC] laws.”

In Lott’s own link, he undermines his argument against dropping counties with fewer than 100,000 people: “Both figures also looked at the sensitivity of the overall violent-crime rate for counties over 100,000. The range of estimates was again very similar, though they implied a slightly larger benefit than for the more populous counties.”

“2) More importantly, even when they drop out counties with fewer than 100,000 people as well as Florida, Black and Nagin still find statistically significant drops in aggravated assaults (significant at the 5% level) and robberies (significant at the 8% level) and no evidence that any type of violent crime increases.  Note that they also didn’t report over all violent crime, and the reason that they don’t report that is because even with their choices the drop in over all violent crime would have been statistically significant.”

Lott is missing the point. His own argument in More Guns, Less Crime was that decreases in murder and rape are responsible for 80% of the social benefit of concealed carry laws. Black and Nagin demonstrate that achieving statistical significance for either of those crime categories is completely dependent on model specification: small changes in the data under consideration affect the result. As Black and Nagin argue, “the seemingly salutary impacts of RTC laws on murder and rape depend entirely on the data for Florida.” If Lott’s model is being driven by data from Florida and small counties (with their attendant sloppy data), he needs to provide a criminological theory to explain that. Yet, even by Lott’s own admission, he cannot do so.

Also, 8% significance is not at all standard in the criminological literature.

Moreover, it is curious that Lott is arguing that dropping counties with fewer than 100,000 people is an egregious practice, when Lott’s own imputation strategy for dealing with missing data resulted in the deletion of entire states.  As Maltz writes,

“Lott and Whitley attempt to account for this by eliminating observations, known as listwise deletion in the imputation literature; however, we are concerned with the procedure they used. They did not delete only those counties that had many observations with high levels of missing data. Rather, even though it is a county-level analysis, they deleted entire states that had many counties with high levels of missing data.” [Emphasis added]

“3) As to the increase in West Virginia, there was only one county in WV (Kanawha County) with more than 100,000 people in it.  What they showed is not that crime increased in WV (it fell over all), but that there was an increase in one type of violent crime in one county in WV.”

In 1990, West Virginia’s population was 1,793,477, and Kanawha County’s was 207,619. So, while it is true that Kanawha County is “one county in WV,” it contained roughly 11.6% of the state’s population. It would be bizarre for the largest county in West Virginia to observe no effect of RTC laws on assaults and robberies and a POSITIVE effect of RTC laws on homicides (that is, RTC laws are associated with increases in homicide), while all the smaller counties mysteriously have different experiences that fly in the face of criminological theory.

“4) DeFilippis and Hughes continually write about “Florida” being removed from the sample, but it is Florida as well as counties with fewer than 100,000 people.”

See above.

5) If one is interested in my other responses, I suggest that people read both MGLC and the paper published in the Journal of Legal Studies.”

First, Black and Nagin are not the only authors to find that the inclusion of Florida is responsible for the ostensibly beneficial effect of RTC laws. Lott’s colleagues and allies in the gun debate, Carlisle Moody and Thomas Marvell, confidently pronounced in a 2008 paper that the entire statistically significant benefit of RTC laws is due to the inclusion of Florida:

“The cumulative results through 2000 are dominated by Florida, which benefited to the tune of $30.8 billion from passing the shall-issue law in 1987. Since the net effect across all states is $28 billion, the other states have experienced a net increase in crime amounting to a cost of $2.8 billion. However, this sum is not significantly different from zero.”

Or, as Ayres and Donohue write, “Based on the state-specific estimates they generate from this new model, Moody and Marvell conclude that RTC laws are beneficial because one state – Florida – outweighs the overall harmful effects estimated for the other 23 jurisdictions.”

Further, a 2006 paper Marvell coauthored examined only Florida and concluded that there were no perceptible benefits from Right-To-Carry. Indeed, in an early draft of the paper, the authors contended that there was slight evidence of an increase in crime stemming from the law, but backed away from that claim in the final iteration, content to state merely that there was no significant effect. The advantage of just studying Florida is that any regressions don’t try to overfit the data, a massive problem if one state in a regression has especially volatile crime rates relative to the other states in the analysis. This confirms Black and Nagin’s suspicion that the inclusion of Florida in the aggregate results had a profound impact when it shouldn’t have.

In Lott’s link, he argues that the Mariel boatlift couldn’t possibly have influenced the effect of RTC laws.  To support his claim, he provides the following figure:

[Figure from Lott: Florida murder rates before and after the 1987 concealed carry law]

Referring to this figure, Lott says the following: “The Mariel boat lift did dramatically raise violent crime rates like murder, but these rates had returned to their pre-Mariel levels by the early 1980’s. For murder, the rate was extremely stable until the concealed handgun law passed there in 1987, when it began to drop dramatically.”

However, let’s take a look at a graph depicting the exact same results from Ayres and Donohue (1999).

[Figure from Ayres and Donohue (1999): Florida murder rates]

While Lott appears to be correct that the effect of the Mariel boatlift on crime disappeared by the time Florida passed its concealed carry law in 1987, what he fails to mention is that this spike nevertheless biases Florida’s pre-passage fixed effects upwards. To be clear: it appears as if Florida was more violent, on average, in the decade prior to the passage of its RTC law than in the decade after, largely because the earlier decade included an anomalous spike in crime from the Mariel boatlift.

To test this claim, Ayres and Donohue (1999) re-ran Lott’s regressions with dummy variables to track the effect of Florida and Maine (two states with anomalous crime waves). They found the following: “The first effect in regression 8 is essentially the pure Maine and Florida effects, which are extremely large. In fact, they are too large to be believed. A drop in murders of over 20 percent from the passage of a shall issue law strains credulity. This result provides a strong indication that the model is over-attributing deviations from national trends in crime to the shall issue laws for these two states.” [Emphasis added.]

In other words, Lott is being disingenuous when he argues that the effect of the Mariel boatlift had dissipated by 1987. Even if this were true, the boatlift, among other possibly unobserved factors, produced a spike in Florida’s crime rates, and the subsequent regression to the mean is what drives the apparent effect of RTC laws on crime.
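This regression-to-the-mean trap is easy to reproduce with a toy series. In the sketch below (entirely invented numbers), crime is flat except for a temporary spike years before a law that does nothing; a simple before/after comparison nevertheless credits the law with a sizable drop.

```python
import numpy as np

# A purely temporary pre-law crime spike makes a before/after comparison
# attribute regression to the mean to the law itself. All numbers invented;
# the "law" takes effect in year 10 and changes nothing.
years = np.arange(20)
crime = np.full(20, 100.0)
crime[3:7] = 140.0            # anomalous spike (think Mariel boatlift), well before the law

post = years >= 10
effect = crime[post].mean() - crime[~post].mean()
print(f"Estimated 'effect' of the law: {effect:+.1f} crimes per 100k")
# -> -16.0, even though the law did nothing at all.
```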

Maintaining historical accuracy  

“Regarding  Ted Goertzel’s comments, DeFilippis and Hughes plagiarize/copied his comments in their discussion of Dan Black and Nagin.  In general their approach is to copy, slightly rewrite other critiques, and then ignore what I have written in response.”

Almost the entirety of Lott’s critique focuses on the historical portion of our investigation. When compiling the nearly comprehensive history of Lott’s numerous ethical and academic transgressions, we relied on existing sources to determine what happened. We then organized those scattered sources into a cohesive narrative. Of course our work in this portion is going to look similar to those existing sources, which is why we clearly cite them. There are only so many ways to say the same thing. Unlike Lott, we aren’t trying to rewrite history, but rather to present it in an easy-to-read fashion. And in line with his usual pattern, Lott doesn’t bother mentioning the large section of our work dedicated to his newer transgressions, for which he has yet to present a response.

Sensitivity Issues

“This is one time where DeFilippis and Hughes pretend that they are actually linking to what I wrote in response to Goertzel, but instead they misstate what I wrote and link back again to Goertzel.  My responses to Goertzel were similar to what I just note above in response to Black and Nagin.  

DeFilippis and Hughes claim “Lott’s response to Goetzl was to shrug him off, insisting that he had enough controls to account for the problem.”  But that is not accurate.  I point out that I was also concerned that the sensitivity of specifications.  That is why I pointed to papers such as the one by Bartley and Cohen that provided tests of whether the results were indeed sensitive.”

It is not entirely clear that Lott understands the criticism here. Goertzel’s point that we were referencing was not a broad critique of the sensitivity of specifications, but rather a specific critique about major cities not having RTC laws at the time. In Goertzel’s own words, when he asked Lott about this flaw, Lott “shrugged it off, saying that he had ‘controlled’ for population size in his analysis.”

Further, even if Lott is the one portraying his exchange with Goertzel accurately (which, given Lott’s history, is highly unlikely), his explanation still falls short, as we thoroughly demonstrated with our commentary on Lott’s battle with Black and Nagin. While Lott may have been concerned about sensitivity, his concerns were clearly insufficient: further studies have revealed that Lott’s results are extremely sensitive, which is why numerous studies reach different conclusions from only a few minor changes in the statistical models.

Mischaracterizing Ayres & Donohue

“As to Ayres and Donohue’s 2003 law review paper, DeFilippis and Hughes are just simply wrong about the facts….

The 2nd edition of MGLC came out in 2000 and, as noted above, it had data through 1996.  I provided Ayres and Donohue with my data set and they added one year to the study, 1997.  That single year did not change the results.  While Ayres and Donohue also claimed that the my research had ended with 1992, anyone who checks the 2nd edition of the book or reads chapter 9 in the third edition will see that I had looked at data from 1977 to 1996.”

This is the one time John Lott actually points out a legitimate error in our work. In our original article we state: “Fortunately, Lott’s data set ended in 1992, permitting researchers to test Lott’s own model with new data.” Instead, we should have made it clear that Lott’s initial data set ended in 1992 (as Ayres and Donohue were very careful to indicate, and they even mentioned Lott’s 2nd book in a footnote, unlike what Lott suggests), and that over time he has added to his data. The omission of this one word does change the implication of our sentence, and so we apologize for the error.

That being said, it is important to note, as Ayres and Donohue point out, that the only results Lott reported with this data in the 2nd edition were from tests of trend specification. No dummy or hybrid models that could have provided a fuller picture of the new data were used. Ayres and Donohue analyzed the data in much greater depth, added an additional year, and found that it contained little support for Lott’s hypothesis.  

“The reply to Ayres and Donohue in the law review was by Florenz Plassmann and John Whitley.  I had helped them out and Whitley notes “We thank John Lott for his support, comments and discussion.”  There were minor data errors in the additional years that they added from 1997 to 2000, but those errors didn’t alter their main results that dealt with count data.  They had accidentally left 180 cell blank out of some 7 million cells.  Donohue has himself made much more serious data errors in his own work on this issue.  For example, he repeats the data for one county in Alaska 73 times, says that Kansas’ right to carry law was passed in 1996 and not 2006, and made other errors.  I did co-author a corrected version of the Plassmann and Whitley paper that fixed the data errors and is available here.  But DeFilippis and Hughes can’t even get it straight what paper I co-authored.”

It is bizarre that Lott is now obfuscating his role in the initial paper. That Lott was the lead author on Plassmann and Whitley’s paper but then pulled his name off during the editing process is a well-established fact that we detail in our original article. We did not confuse anything.

The test of whether errors are serious or not is whether they significantly change the results of the study. The errors in Donohue’s paper did not significantly change his results. The errors in Plassmann and Whitley’s paper (that Lott initially coauthored) did cause the results of their main regression to significantly change. Donohue freely admitted the errors in his paper and corrected them in a transparent manner. Lott initially refused to admit the errors in Plassmann and Whitley’s paper, finally was forced to do so but desperately tried to downplay their importance, and then proceeded to try to obfuscate their effect in a bungled series of “corrections.”

Further, the “corrected version” Lott currently links to isn’t actually corrected at all. As we detailed in our “The Curious Case of the Changing Table” section, Lott’s corrected version (after multiple iterations) still contains the data errors and additionally uses the wrong type of standard errors. In layman’s terms, Lott’s “corrections” ultimately didn’t correct anything and only made the paper less rigorous. More than a decade later, this is still the case. This pattern of behavior is unfortunately very common for Lott and his supporters. As Ayres and Donohue wrote in concluding their devastating takedown:

“In the wake of some of the criticisms that we have leveled against the Lott and Mustard thesis, John Lott appeared before a National Academy of Sciences panel examining the plausibility of the more guns, less crime thesis and presented them with a series of figures showing year-by-year estimates that appeared to show sharp and immediate declines in crime with adoption of concealed-carry laws. David Mustard even included these graphs in his initial comment on the Donohue paper in the Brookings book that PW refer to repeatedly in their current response. But Donohue privately showed Mustard as well as the Brookings editors that the graphs were the product of coding errors in creating the year-by-year dummies, and in the end Mustard conceded and withdrew them from his comment on Donohue. Now PW respond to our paper with an array of regressions that purport to support their thesis, but again are utterly flawed by similar coding errors. We previously made no mention of the initial National Academy of Sciences/Brookings comment error, since we know how easy it is to make mistakes in doing this work. But for the second time Lott and coauthors have put into the public domain flawed regression results that happen to support their thesis, even though their results disappear when corrected. Claiming we misread our results in the face of such obvious evidence to the contrary and repeatedly bringing erroneous data into the public debate starts suggesting a pattern of behavior that is unlikely to engender support for the Lott and Mustard hypothesis. We feel confident concluding that we have indeed shot down the more guns, less crime hypothesis. Perhaps PW can now assist in laying it to rest.”  

Cherry-Picking Kleck & Hemenway

“Again, talk about DeFilippis and Hughes cherry-picking, there are several ways of responding to the quotes by Kleck and Hemenway.

1) Note that Kleck has also said many positive things about my research. For example, see this quote: “John Lott has done the most extensive, thorough, and sophisticated study we have on the effects of loosening gun control laws. Regardless of whether one agrees with his conclusions, his work is mandatory reading for anyone who is open-minded and serious about the gun control issue. Especially fascinating is his account of the often unscrupulous reactions to his research by gun control advocates, academic critics, and the news media.”

2) I have discussed Kleck’s quote in MGLC (see attached file).

3) The vast majority of peer reviewed research that looks at national data on crime rates supports my research (see table 2 here and also here).  

4)  There are a lot of prominent academics and people involved in law enforcement who have said positive things about my research.  I can list a few here, but I don’t really see the point.”

  1. A recent Mother Jones article (published after both our article and Lott’s response) quotes Kleck extensively on his views about Lott’s research. As Mother Jones states: “Even Kleck, who conducted a controversial, yet often-cited survey on defensive gun use, observes, ‘Do I know anybody who specifically believes with more guns there are less crimes and they’re a credible criminologist? No.’” This should make it abundantly clear that it is Lott, not us, who is cherry-picking and mischaracterizing Kleck’s views on this.
  2. Lott’s response overlooks the main thrust of Kleck’s quote, which is that Kleck’s own research indicated that a lot of people were carrying firearms in public (illegally) before RTC laws. Many of those who got concealed carry licenses were already carrying firearms, except now it was legal for them to do so. Hence, according to Kleck, there was likely no real change in the number of “good guys with guns” carrying in public, undermining the main mechanism through which RTC laws could theoretically reduce crime. Lambert also pointed out this obvious flaw in Lott’s analysis more than a decade ago.
  3. Blatantly false, as we describe in the original article (final 3 paragraphs in the “The National Research Council Verdict” section) and elaborate even further in our second article on Lott (1st section).
  4. We agree on not seeing the point.

Also, none of Lott’s points address Hemenway’s criticism. From our article:

“David Hemenway from the Harvard School of Public Health, writes a similarly devastating review of “More Guns, Less Crime” in his book, Private Guns, Public Health. He argues that there are five main ways to determine the appropriateness of a statistical model: “(1) Does it pass the statistical tests designed to determine its accuracy? (2) Are the results robust (or do small changes in the modeling lead to very different results)? (3) Do the disaggregate results make sense? (4) Do results for the control variables make sense? and (5) Does the model make accurate predictions about the future?” John Lott’s model appears to fail every one of these tests.”

The Vanishing Survey & Donohue’s Latest Study

“Up to this point in their list, I have tried to go through each of DeFilippis and Hughes’ claims.  What should be clear is that I haven’t skipped points and I have already answered these claims elsewhere and the same is true for their other assertions.”

Lott conveniently stops his analysis of our article about halfway through, before we talk about his extensive history of ethical missteps and devote an entire section (“Lott’s Return to Prominence”) to Lott’s newer fabrications. In that section we took great care to choose cases to which Lott had either not responded or for which his responses were provably false. For example, we were the first (to our knowledge) to notice and challenge Lott’s claim about accidental child shootings; it isn’t possible for him to have responded to that critique in previous writings, since we were the first to make it. Unsurprisingly, he still has not provided any evidence to support his accidental shooting assertion. It should be clear at this point that all but one of Lott’s attempted rebuttals have either been woefully inadequate or simply repeat the same falsehoods our original article called him out on, without any attempt to improve on his position.

“Regarding their attack on “The Vanishing Survey,” they again completely ignore what I have already written on the issue.”

Lott is simply lying. Ctrl + F “a veritable zoo” in our original article and you will find this link back to John Lott’s own website: http://www.johnlott.org/files/GenDisc97_02surveys.html with a number of his responses. This link also contains a few other links at the bottom to Lott’s other defenses. Along with this direct link to Lott’s responses, we provided another link that contains a number of Lott’s emails on the subject:  http://www.cse.unsw.edu.au/~lambert/guns/lindgren.html (Ctrl + F “all of the following”). We take great care in our article to address Lott’s most salient counter-claims on this, and point out that all of these are either flat-out lies, deceptively written statements that aren’t real evidence, or extremely improbable coincidences. The entire point of this section was to demonstrate how utterly inadequate Lott’s defenses have been. That Lott was and still is unable to provide a shred of solid evidence against these claims (when it should be trivially easy to do so) is very telling.

Throughout Lott’s attempted rebuttal, he repeatedly states that we “ignore” what he has written. To quote The Princess Bride and echo what Tim Lambert pointed out a decade ago: “You keep using that word. I do not think it means what you think it means.”

“For example, regarding Donohue’s latest piece that DeFilippis and Hughes you can see discussions here, here, and here.”

Lott simply ignores (proper usage) that we dedicated 4 paragraphs in our original article to refuting his claims contained in the three links above against Donohue’s work and the Washington Post. This is clearly seen in the last half of our “The National Research Council Verdict” section, beginning with the sentence: “After an article citing Donohue’s work appeared in the Washington Post on November 14th, Lott, in response, sought to discredit the study to little avail.” Further, we have another article on Lott that provides a deeper discussion on why his critique of Donohue is utterly inadequate and almost entirely false.

Still Lying about the NRC Panel   

“I will make one final point.  DeFilippis and Hughes incorrectly describe the National Research Council report.  Their report examined seemingly ever possible gun law that has been studied by academics, but the panel could not identify one single law that made a statistically significant difference.  They made the same response regarding right-to-carry, but unlike all the other laws studied the discussion on right-to-carry laws was the only one that drew a dissent by James Q. Wilson, who pointed out that all of the panel’s own regressions found that right-to-carry regressions reduced murder rates.  In 15 years prior to that there had only been one other dissent.  Academics who don’t sign on to a NRC report are not invited back to be on future panels.  That creates pressure for people not to dissent, but it also means that virtually all the reports indicate that they can’t say anything matters.”

Lott has been repeating the same falsehoods about the NRC panel’s findings for more than a decade now. Indeed, as we referenced in our original article, Lott’s errors were so egregious that the Executive Officer of the NRC felt obliged to pen a refutation. Here is the most pertinent portion:

“Lott’s column gave the clear impression that the study was about gun control. It was not. The study was about the quality of the data and research on firearms injury and violence. These data and studies are frequently used by both sides in the debate on gun control. It was the committee’s task to make judgments about the quality of this scientific knowledge. The committee was not asked and does not offer any conclusions or comments on gun control policy.”

And you don’t have to take the Executive Officer’s word for it. Perusing the NRC report itself leads to the same conclusion: Lott is badly mischaracterizing it. The reason the panel couldn’t find any laws that “made a statistically significant difference” is that the only time the committee ran its own regressions was in the RTC chapter. The rest of the report was a literature review, meaning it was commenting on the validity and quality of the data and studies, not conducting its own statistical analysis to determine significance, as Lott suggests.

Nor did the review examine every gun law, as Lott indicates, or find all the ones it did examine lacking. For example, Permit-to-Purchase laws were not examined. Further, the panel looked at a couple of studies demonstrating that broadening background check denial criteria had a beneficial impact (page 94), though due to the paucity of studies and potential confounding factors it characterized this evidence as “suggestive rather than conclusive.” The panel concluded its discussion of various firearms laws with the following note:

“Current research evidence suggests that illegal diversions from legitimate commerce are important sources of guns and therefore tightening regulations of such markets may be promising. There also may be promising avenues to control gun theft and informal transfers (through problem-oriented policing, requiring guns to be locked up, etc.). We do not yet know whether it is possible to actually shut down illegal pipelines of guns to criminals or what the costs of such a shutdown would be to legitimate buyers. Answering these questions is essential.” (page 99)

There is a world of difference between calling studies “suggestive” and various policies “promising” while asking for more research, and stating at the end of its analysis of Lott’s work that: “Thus, the committee concludes that with the current evidence it is not possible to determine that there is a causal link between the passage of right-to-carry laws and crime rates.” Further, Lott is lying when he claims that all the panel’s regressions showed a reduction in murder. As the panel itself states in response to Wilson’s dissent (page 272):

“While it is true that most of the reported estimates are negative, several are positive and many are statistically insignificant. In addition, when we use Lott’s trend model but restrict the out years to five years or less (Table 6-7), the trends for murder become positive and those for other crimes remain negative. Therefore, the key question is how to reconcile the contrary findings or, conversely, how to explain why these particular positive, or negative, findings should be dismissed.”

It is also important to note, as we discussed in previous articles, that the only reason the NRC panel didn’t find that RTC laws had a detrimental impact on crime is that the panel followed Lott’s model too closely and repeated some of his errors.

A Lott More Lies

Lott calls our article “a big waste of people’s time.” We won’t return the insult. In fact, we highly encourage our readers not only to read our original article, but also to read Lott’s critique for themselves. Lott’s numerous errors and falsehoods are very instructive as to how both sides of this debate conduct themselves. We’ll let the reader be the judge.
