Shooting Down the Gun Lobby’s Favorite “Academic”: A Lott of Lies


By Evan DeFilippis and Devin Hughes

  • More guns, less crime.
  • Guns are used more in self-defense than in criminal acts.
  • The vast majority of mass shootings occur in gun-free zones.
  • Good guys with guns stop mass shootings.
  • Guns make women safer.

All of these claims are staples of the gun lobby, and all of them rely on the research and advocacy of Dr. John R. Lott.

John Lott is, without question, the most prolific and influential writer on the topic of gun violence and gun control. He has credentials that would make most academics envious, with various stints at Stanford, Rice, UCLA, Wharton, Cornell, the University of Chicago, and Yale.

According to LexisNexis queries, his op-ed pieces have appeared in newspapers at least 161 times. He has been referred to in more than 1,100 newspaper columns. Lott’s most famous book, More Guns, Less Crime, has been referenced by major news publications at least 727 times. The lobbying arm of the National Rifle Association, the Institute for Legislative Action (NRA-ILA), has spotlighted Lott’s scholarship at least 140 times on their website. After every mass shooting or national gun violence tragedy, Lott is the de facto talking head for the pro-gun community on news programs such as Fox News. He has also testified numerous times in front of Congress and state legislatures, having been a critical voice in the expansion of Right-to-Carry (RTC) laws.

Yet his daunting resume fails to tell the entire story. While his initial research was groundbreaking, further examination revealed numerous flaws. Today the “more guns, less crime” hypothesis has been thoroughly repudiated. On closer inspection, his impressive credentials reveal an academic nomad, never able to secure a permanent position. His ethical transgressions range from accusations of fabricating an entire survey, to presenting faulty regressions, to creating elaborate online personas to defend his work and bash critics, to trying to revise his online history to deflect arguments. And this doesn’t even begin to cover the whole host of false claims and statistics he has peddled repeatedly in articles and TV appearances.

His descent into dishonesty began, ironically, with a groundbreaking study.

The Birth of More Guns, Less Crime

On August 2nd, 1996, USA Today published the results of a new comprehensive study on the effect of “shall-issue” concealed carry laws (also known as RTC laws) on the incidence of homicide, rape, and aggravated assault. The headline read, “Fewer Rapes, Killings Found Where Concealed Guns Legal.”

The study, conducted by John Lott and David Mustard of the University of Chicago, analyzed crime statistics in 3,054 counties from 1977 to 1992. The study made the remarkable claim that, if all states had adopted “shall-issue” laws by 1992, 1,500 murders, 4,000 rapes, 11,000 robberies, and 60,000 aggravated assaults would be avoided annually. Lott subsequently published and expanded upon this research in a book entitled More Guns, Less Crime.

Lott’s research relied heavily on advanced econometrics, a highly specialized sub-field of economics. Using an econometric model with a blistering array of controls, including 36 independent demographic variables, Lott analyzed a total of 54,000 observations across 3,054 counties over an 18-year period. Such a massive dataset would ostensibly permit him, for the first time, to identify the specific effect of concealed carry laws on crime with great precision.

The book was extremely well-received by the pro-gun community, being described as the bible of the gun lobby, and has since sold 100,000 copies. His book inaugurated a new era in the pro-gun movement: for the first time ever, organizations like the National Rifle Association could articulate their advocacy in the language of public health, rather than in constitutional terms. To this day, legislators continue to cite John Lott’s work as the basis for votes against gun control.

The only problem? Nearly nothing in the book is correct.

Lott’s Swift Fall from Grace

Tim Lambert, a computer scientist at the University of New South Wales, wrote a massive, 47-page critique of Lott’s book. His paper revealed that the title More Guns, Less Crime already gets everything about the gun debate wrong: there weren’t significantly more guns, there wasn’t less crime, and guns wouldn’t have caused the decrease in crime anyway.

The paper deserves a complete reading, but we’ll summarize just a few of Lott’s many missteps here:

In an audacious display of cherry-picking, Lott argues that there were “more guns” between 1977 and 1992 by choosing to examine two seemingly arbitrary surveys on gun ownership, and then sloppily applying a formula he devised to correct for survey limitations. Since 1959, however, there have been at least 86 surveys examining gun ownership, and none of them show any clear trend establishing a rise in gun ownership. Differences between surveys appear to depend almost entirely on sampling errors, question wordings, and people’s willingness to answer questions honestly.

Lott replied to this accusation by arguing that, even if there weren’t more households owning guns, there were still more people carrying guns in public after the passage of shall-issue laws. However, we know this assertion is factually untenable, based on surveys showing that 5-11% of US adults already carried guns for self-protection before the implementation of concealed carry laws. It’s extremely unlikely, therefore, that the 1% of the population identified by Lott who obtained concealed carry permits after the passage of “shall-issue” laws could be responsible for the entire decrease in crime. Lambert also argues that many of the people who obtained concealed carry permits after the passage of shall-issue laws were already illegally carrying firearms in the first place. This means, of course, that “shall-issue” laws would produce almost no material changes in the reality of gun ownership.

Lott replied with an ever-weakening series of explanations, suggesting that the 1% of people who obtained permits likely had a higher risk of being involved in crime, and thus disproportionately accounted for the crime decrease. Except, yet again, this statement does not comport with reality. One study by Hood and Neeley analyzed permit data in Dallas and showed the opposite of Lott’s predictions: zip codes with the highest violent crime before Texas passed its concealed carry law had the smallest number of new permits issued per capita.

Empirical data from Dade County police records, which cataloged arrest and non-arrest incidents involving permit holders over a five-year period, also disproves Lott’s point. This data showed unequivocally that defensive gun use by permit holders is extremely rare. In Dade County, for example, there were only 12 incidents of a concealed carry permit holder encountering a criminal, compared with 100,000 violent crimes occurring in that period. That means, at most, getting a permit increases the risk of an armed civilian meeting a criminal by .012 percentage points. This is essentially a round-off error. What’s particularly revealing about this episode is that Lott had to have known about the Dade County police records, because he cited the exact same study in his book when the records supported a separate position of his. In other words, Lott simply cherry-picked the evidence that supported his conclusion and disregarded the rest. Even academics on Lott’s side of the argument strongly doubt that concealed carry laws could have such profound effects on crime. Gary Kleck, a criminologist at Florida State University, for example, writes:

Lott and Mustard argued that their results indicated that the laws caused substantial reductions in violence rates by deterring prospective criminals afraid of encountering an armed victim. This conclusion could be challenged, in light of how modest the intervention was. The 1.3% of the population in places like Florida who obtained permits would represent at best only a slight increase in the share of potential crime victims who carry guns in public places. And if those who got permits were merely legitimating what they were already doing before the new laws, it would mean there was no increase at all in carrying or in actual risks to criminals. One can always speculate that criminals’ perceptions of risk outran reality, but that is all this is–a speculation. More likely, the declines in crime coinciding with relaxation of carry laws were largely attributable to other factors not controlled in the Lott and Mustard analysis.
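
The back-of-the-envelope arithmetic behind the Dade County figure above is worth making explicit. A minimal sketch, using only the incident and crime counts cited from the records:

```python
# Figures from the Dade County police records cited above: 12 incidents of a
# permit holder encountering a criminal, versus roughly 100,000 violent
# crimes over the same five-year period.
encounters = 12
violent_crimes = 100_000

# The share of violent crimes involving an armed permit holder, expressed
# in percentage points -- effectively a round-off error.
risk_increase_pct_points = encounters / violent_crimes * 100
print(f"{risk_increase_pct_points:.3f} percentage points")
```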

David Hemenway from the Harvard School of Public Health, writes a similarly devastating review of “More Guns, Less Crime” in his book, Private Guns, Public Health. He argues that there are five main ways to determine the appropriateness of a statistical model: “(1) Does it pass the statistical tests designed to determine its accuracy? (2) Are the results robust (or do small changes in the modeling lead to very different results)? (3) Do the disaggregate results make sense? (4) Do results for the control variables make sense? and (5) Does the model make accurate predictions about the future?” John Lott’s model appears to fail every one of these tests.

A Lott of Suspicious Methodology

As Albert Alschuler explains in “Two Guns, Four Guns, Six Guns, More Guns: Does Arming the Public Reduce Crime,” Lott’s work is filled with bizarre results that are inconsistent with established facts in criminology.

According to Lott’s data, for example, rural areas are more dangerous than cities. FBI data clearly shows this is not the case. Lott’s model finds that both increasing unemployment and decreasing the number of middle-aged and elderly black women would produce substantial decreases in the homicide rate, conclusions that are so bizarre that they should cast doubt on the entire study. Indeed, as Hemenway explains, while middle-aged black women are rarely either victims or perpetrators of homicide, “according to the results, a decrease of 1 percentage point in the percentage of the population that is black, female, and aged forty to forty-nine is associated with a 59% decrease in homicide (and a 74% increase in rape).”

Lott also finds that there is only a weak deterrent effect on robberies, the most common street crime. As gun violence researcher Dennis Henigan writes, “the absence of an effect on robbery does much to destroy the theory that more law-abiding citizens carrying concealed guns in public deter crime.”

Strangely, Lott’s data also shows that, while concealed-carry laws decrease the incidence of murder and rape, they increase the rate of property crimes. Lott explained this result by claiming that criminals were encouraged to switch from predatory crimes to property crimes in order to avoid contact with potentially armed civilians. But as one researcher skeptically commented, “Does anyone really believe that auto theft is a substitute for rape and murder?”

Another problem with Lott’s model is that it does not account for the fact that crime moves in waves. To even begin to capture the cyclical nature of crime, one would need variables on gangs, drug consumption, community policing, illegal gun carrying, and so on, which track the peaks and troughs of crime waves. Instead, when Lott tries to correct for the absence of these variables, he uses a linear time trend, which is inappropriate because it assumes that a rise in crime at one point in time will continue indefinitely.
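
To see why a linear time trend fails on cyclical data, consider a toy illustration (the numbers are entirely synthetic, not Lott’s): fit a straight line to the rising half of a crime wave, and it will forecast crime climbing forever, badly missing the downturn.

```python
import numpy as np

# A synthetic crime wave: one full cycle over 20 "years" (illustrative only).
years = np.arange(20)
crime = 100 + 30 * np.sin(2 * np.pi * years / 20)

# Fit a linear time trend to the first five years, the rising half of the wave.
slope, intercept = np.polyfit(years[:5], crime[:5], 1)

# Extrapolate to year 15, by which point the real wave has crested and fallen.
predicted = slope * 15 + intercept
actual = crime[15]
print(f"linear trend predicts {predicted:.0f}; the actual value is {actual:.0f}")
```

The fitted line keeps rising long after the wave has turned (predicting roughly triple the true value in this toy example), which is exactly the failure mode a linear trend introduces into cyclical crime data.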

Along with the host of weird results permeating Lott’s work, researchers found further damaging flaws: minute changes in Lott’s model and specifications produced drastically different results, and several key crime controls were excluded.

Debunking More Guns, Less Crime

Despite these glaring methodological concerns, bizarre results, and fundamental empirical problems, Lott’s hypothesis remained relatively popular. While studies did come out in the following years supporting the claim that RTC laws reduce crime, more studies began to shoot holes in the more guns, less crime hypothesis.

Ted Goertzel, a retired professor of sociology at Rutgers University, published a paper in The Skeptical Inquirer in 2002 cataloging the most egregious abuses of econometrics in criminology. Unsurprisingly, John R. Lott’s most significant work, More Guns, Less Crime, was at the top of the list.

Goertzel argues that Lott’s studies consistently rely on extremely complicated econometric models, often requiring data-crunching power exceeding that of an ordinary desktop computer. Lott then assumes the rather convenient position of insisting that his critics use the same data and the same methods he used to rebut his claims, even after both his data and methods have been repudiated.

John Lott’s strategy, then, is the academic equivalent of “security through obscurity.” In response to criticism, Lott simply papers over his mistakes with even more perilously complex models and tendentious data analysis. This technique allows Lott to preordain a certain conclusion as truth (e.g. “More Guns, Less Crime”), and simply pick the model that produces this result.

Two respected criminal justice researchers, Frank Zimring and Gordon Hawkins, wrote an article in 1997 in response to this strategy, explaining: “just as Messrs. Lott and Mustard can, with one model of the determinants of homicide, produce statistical residuals suggesting that ‘shall issue’ laws reduce homicide, we expect that a determined econometrician can produce a treatment of the same historical periods with different models and opposite effects. Econometric modeling is a double-edged sword in its capacity to facilitate statistical findings to warm the hearts of true believers of any stripe.”

Within a year, two econometricians, Dan Black and Daniel Nagin, validated this concern. By altering Lott’s statistical models with a couple of superficial modeling changes, or by re-running Lott’s own methods on a different grouping of the data, they were able to produce entirely different results.

Black and Nagin noticed that there were large variations in state-specific estimates for the effect of “shall-issue” laws on crime. For example, Lott’s findings indicated that right-to-carry laws caused “murders to decline in Florida, but increase in West Virginia. Assaults fall in Maine but increase in Pennsylvania.” In addition, “the magnitudes of the estimates are often implausibly large. The parameter estimates that RTC laws increased murders by 105 percent in West Virginia but reduced aggravated assaults by 67 percent in Maine. While one could ascribe the effects to the RTC laws themselves, we doubt that any model of criminal behavior could account for the variation we observe in the signs and magnitudes of these parameters.”

From this, Black and Nagin deduced that a single state, for which the data was poorly fitted, must be driving the bizarre variations in the signs and magnitudes of the state-specific estimates. They determined Florida was the culprit, due to its volatile crime rates during the period under analysis and its regularly changing gun laws.

Indeed, Black and Nagin discovered that when Florida was removed from the results there was “no detectable impact of the right-to-carry laws on the rate of murder and rape.” They concluded that “inference based on the Lott and Mustard model is inappropriate, and their results cannot be used responsibly to formulate public policy.”

John Lott’s response, in the most recent edition of More Guns, Less Crime, has been to argue that Black and Nagin’s technique is deceptive, a form of “data mining” in which the researchers intentionally selected parameters that would break Lott’s model. With no apparent irony, Lott moralized that “traditional statistical tests of significance are based on the assumption that the researcher is not deliberately choosing which results to present.” It should be abundantly clear by now that Lott does not deserve the benefit of this assumption. To be clear: excluding Florida is a legitimate and obvious test of a model’s robustness. The fact that Lott’s results completely depend on the inclusion of a single state means that the model reflects nothing about the actual effect of concealed carry laws on crime.
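
Dropping one state and re-estimating, as Black and Nagin did with Florida, is a standard “leave-one-out” robustness check. A minimal sketch of the idea on invented numbers (a real test would rerun the full regression rather than averaging effect estimates):

```python
import statistics

# Hypothetical per-state effect estimates (invented for illustration):
# a single outlier state can drive the pooled result on its own.
effects = {
    "StateA": -0.5, "StateB": 0.3, "StateC": -0.2,
    "StateD": 0.4, "StateE": -9.0,  # the outlier
}

pooled = statistics.mean(effects.values())

# Leave-one-out: drop each state in turn and re-estimate.
loo = {
    dropped: statistics.mean(v for s, v in effects.items() if s != dropped)
    for dropped in effects
}

for state, estimate in loo.items():
    print(f"without {state}: {estimate:+.2f}")

# If the estimate collapses only when one particular state is excluded, the
# "result" is a property of that state, not of the law being studied.
```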

After the release of the Black and Nagin paper, Goertzel had a conversation with Lott concerning a fundamental problem in his study, namely: “America’s counties vary tremendously in size and social characteristics. A few large ones, containing major cities, account for a very large percentage of the murders in the United States,” and, as it would turn out, “none of these very large counties have ‘shall-issue’ gun control laws.”

This means that Lott’s dataset is powerless to answer the very questions for which it was designed. Statistical tests require substantial variation in the causal variable of interest; in this case, there would need to be “shall-issue” laws in the places where the most murders occurred. That variation is simply absent.

Lott’s response to Goertzel was to shrug him off, insisting that he had enough controls to account for the problem. Zimring and Hawkins identified the same problem, noting that “shall-issue” laws materialized predominantly in the South, the West, and rural regions, areas in which the National Rifle Association was dominant. These states already had lax restrictions on guns; it’s not as if the implementation of “shall-issue” laws radically changed the social landscape. This pre-existing legislative history means that comparing across legislative categories merely confuses regional and demographic differences with the social impact of a legislative intervention. Zimring and Hawkins conclude their damning criticism by explaining that “[Lott’s] models are the ultimate in statistical home cooking in that they are created for this data set by these authors and only tested on the data that will be used in the evaluation of the right-to-carry impacts.”

What criminologists actually believe happened is that the crack epidemic spiked the homicide rate of major eastern cities in the 1980s and early 1990s. Lott’s argument, then, amounts to the claim that “shall-issue” laws somehow spared rural and western states from the crack epidemic, while eastern states were not as fortunate. As Goertzel writes, “this would never have been taken seriously if it had not been obscured by a maze of equations.”

University of Arkansas law professor Andrew J. McClurg, in a critical review entitled “‘Lotts’ More Guns and Other Fallacies Infecting the Gun Control Debate,” concludes that Lott’s entire project is one large example of fallacious post hoc reasoning. Although Lott concedes that there is danger in inferring causality from specious correlations, he defends himself by insisting that “this study uses the most comprehensive set of control variables yet used in a study of crime.” Lott further explains that the correct method of argumentation is to “state what variables were not included in the analysis.” As McClurg notes, however, Lott manages to control for a dizzying array of irrelevant or redundant demographic variables while ignoring a nearly endless list of important factors that could influence crime.

Perhaps the most glaring weakness of Lott’s paper is its lack of predictive power. It does not matter how complex a model is: if it lacks predictive power, it is useless. Any model capable of expressing the true impact of concealed carry laws on crime, all things being equal, will also have predictive power beyond the scope of a single study. Fortunately, Lott’s initial data set ended in 1992, permitting researchers to test Lott’s own model against new data. Ian Ayres, of Yale Law School, and John Donohue, of Stanford Law School, did just this, examining 14 additional jurisdictions that adopted concealed carry laws between 1992 and 1996. Using Lott’s own model, they found that these jurisdictions were associated with more crime in all crime categories. In other words, “More Guns, More Crime.” Ayres and Donohue conclude with the rather damning paragraph: “Those who were swayed by the statistical evidence previously offered by Lott and Mustard to believe the more guns, less crime hypothesis should now be more strongly inclined to accept the even stronger statistical evidence suggesting the crime inducing effect of shall issue laws.”
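
The out-of-sample test Ayres and Donohue performed can be sketched in miniature (with synthetic numbers, not their data): fit a trend on the pre-1992 “sample,” then check its predictions against later years in which conditions changed.

```python
import numpy as np

# Synthetic crime series: rises through 1992, then declines for reasons the
# fitted model knows nothing about (illustrative numbers only).
years = np.arange(1977, 1997)
crime = np.where(years <= 1992, 50.0 + (years - 1977), 66.0 - 3 * (years - 1992))

# Fit only on the in-sample period, as the original studies did.
in_sample = years <= 1992
slope, intercept = np.polyfit(years[in_sample], crime[in_sample], 1)

# Evaluate on 1993-1996, which the model never saw.
pred = slope * years[~in_sample] + intercept
mean_error = np.abs(pred - crime[~in_sample]).mean()
print(f"mean out-of-sample error: {mean_error:.1f}")
```

A model that fits its own sample perfectly can still be worthless out of sample; that is precisely the test Lott’s model failed when new jurisdictions were added.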

John Lott, along with two other young researchers, Florenz Plassmann and John Whitley, wrote a reply to Ayres and Donohue attempting to confirm the “more guns, less crime” hypothesis. However, when Ayres and Donohue examined the reply, they discovered numerous coding errors and empty cells that, when corrected, showed that RTC laws did not reduce crime and in some categories even increased it. In a later email exchange with Tim Lambert, Plassmann admitted that correcting the coding errors caused his paper’s conclusions to evaporate. Eventually, Lott removed his name from the final paper, citing disagreements over edits made to Ayres and Donohue’s paper (a dispute that turned out to hinge on a single word).

The Curious Case of the Changing Table

In the wake of the 2003 Stanford Law Review series addressing the more guns, less crime hypothesis, there was a large debate over whether coding errors found by Ayres and Donohue in Plassmann and Whitley’s response changed their conclusion that RTC laws are beneficial. At the heart of the discussion was a critical data table provided by Ayres and Donohue clearly showing that the results do indeed change significantly. Lott disagreed.

To prove that the coding errors didn’t alter the conclusion, Lott posted a “corrected” table on his site showing mistake-free values that were still statistically significant. This clearly contradicted Ayres and Donohue’s stark claim. How was this possible? As Chris Mooney eloquently describes in his article “Double Barreled Double Standards,” Lott changed the rules his side had created in the middle of the game. In Ayres and Donohue’s data table, all the values assumed clustered standard errors, the same assumption Plassmann and Whitley made. To sway the results in his favor, Lott removed the clustering to make the results look statistically significant once again.

When Mooney confronted Lott with this fact, Lott fired back with a bewildering barrage of often contradictory claims. First, he asserted that the table must simply be incorrectly labeled, but then emailed Mooney the exact same table the next day, asserting it was properly labeled. A week or so later, Lott’s story metamorphosed yet again: the table was now back to how it should have been all along, yet it contained the original faulty data that Lott had previously admitted was incorrect. Throughout all of these changes, which occurred in August and early September, the table still stated it had been last corrected on “April 18, 2003” (except for one telling moment when Lott botched his attempted backdating and posted a last-modified date several months in the future).

When the story broke, Lott denied any attempt at backdating and accused his critics of engaging in a “conspiracy theory.” When this claim fell flat, Lott changed tactics. Rather than admit any error, Lott had his webmaster fall on his sword and claim that the entire blame should fall on him, not Lott. The webmaster’s explanation of events, though, didn’t pass close scrutiny. In a later email exchange with Tim Lambert, the webmaster even admitted his timeline of events wasn’t accurate, though he still tried desperately to find a scenario in which he, not Lott, was the one to blame: supposedly, an unfortunate series of coincidental mistakes must have occurred. Yet this chain-of-errors theory is contradicted by an earlier strong statement Lott made about the contents of the table. Even months after the incident, neither Mooney nor Lambert received an adequate explanation of what actually transpired.

The National Research Council Verdict

In response to the growing controversy over gun violence and particularly Right-to-Carry (RTC) laws, the National Research Council (NRC) convened a panel of 16 experts to examine the existing literature. In 2004, they released their findings. Most of their report was typical academic fare and caused little stir. Not so for their findings on RTC laws.

The NRC panel closely followed Lott’s previous work, using his data, specifications, and method of computing standard errors. Even using this approach, the panel found inconclusive results. Further, as the panel stressed, the results were extremely sensitive to minute changes in the models and control variables. These findings mirrored the existing literature on the subject, which was heavily divided. One member of the panel went so far as to suggest that finding the true effect of RTC laws simply wasn’t possible with econometric analysis. In the end, though, 15 of the 16 panel members concluded that the existing evidence could not support claims that RTC laws had a beneficial (or detrimental) impact on crime rates.

As usual, Lott wrote a scathing critique of the panel’s findings, accusing the panel of being biased and stacked against him. However, his critique was so flawed that the executive officer of the NRC felt it necessary to pen his own reply to Lott titled “A Lott of misinformation.”

In 2011 Dr. John Donohue and two of his colleagues examined and improved on the NRC panel’s findings in “The Impact of Right-to-Carry Laws and the NRC Report: Lessons for the Empirical Evaluation of Law and Policy.” This paper has undergone two updates (the newest published this September) and is considered “the best study on the topic” by Daniel Webster, the director of the Johns Hopkins Center for Gun Policy and Research.

Donohue’s study improves on the NRC report in several ways. First, the Lott dataset used by the panel had several errors, which Donohue corrects. Also, by paralleling Lott’s model, the panel failed to incorporate proper criminal justice control variables. Perhaps the most important omission corrected was the lack of clustered standard errors. As Donohue explains, not clustering standard errors (clustering is now standard practice among econometricians) drastically increases the odds that a spurious result will incorrectly be deemed statistically significant.
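
The clustering issue is concrete enough to demonstrate. Below is a minimal sketch of cluster-robust (“sandwich”) standard errors on synthetic panel data; in practice one would use a library such as statsmodels, but the calculation itself is short:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic panel: 20 "states" with 30 observations each, where both the
# regressor and most of the error vary only at the state level.
n_states, per_state = 20, 30
groups = np.repeat(np.arange(n_states), per_state)
x = np.repeat(rng.normal(size=n_states), per_state)
state_shock = np.repeat(rng.normal(size=n_states), per_state)
y = 0.5 * x + state_shock + 0.1 * rng.normal(size=n_states * per_state)

X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
bread = np.linalg.inv(X.T @ X)

# Classical (i.i.d.) standard error for the slope.
sigma2 = resid @ resid / (len(y) - X.shape[1])
se_classical = np.sqrt(sigma2 * bread[1, 1])

# Cluster-robust: sum score contributions within each state before
# forming the "meat" of the sandwich estimator.
meat = np.zeros((2, 2))
for g in range(n_states):
    score = X[groups == g].T @ resid[groups == g]
    meat += np.outer(score, score)
se_clustered = np.sqrt((bread @ meat @ bread)[1, 1])

print(f"classical SE: {se_classical:.3f}  clustered SE: {se_clustered:.3f}")
```

Because the 600 rows carry only 20 states’ worth of independent information, the clustered standard error comes out several times larger than the classical one; treating the rows as independent makes noise look like statistical significance, which is exactly the trap Donohue describes.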

While these changes may appear arcane and minor to those of us not well-versed in the intricacies of econometrics, they have a dramatic impact on the results. Whereas the NRC panel found contradictory yet statistically significant results across most of the crime categories, Donohue and his coauthors found very few statistically significant effects of RTC laws on crime rates; almost all of the estimates, significant or not, show crime increases.

After conclusively demonstrating that these corrections eliminate any shred of evidence that RTC laws are beneficial, Donohue and his colleagues sought to further refine their model with additional data, superior specifications, and a far more reasonable set of controls. After testing several models, time periods, and sets of controls, their results still show increases in many crime categories (especially aggravated assault), though these results are not consistently significant. There is no trace of crime decreases.

The conclusion of the best and most sophisticated RTC study to date: these laws have no beneficial impact and may actually increase crime. Lott is wrong. It is no longer a question of whether RTC laws are beneficial, but whether they are merely impotent or actively harmful.

After an article citing Donohue’s work appeared in the Washington Post on November 14th, Lott sought to discredit the study, to little avail. Lott begins by trying to paint Donohue as a hypocrite for using a dummy model, a frequently used model in statistical analysis. This claim ignores not only that Donohue never actually said that using the dummy model was misleading, but also that Donohue uses multiple models and displays all of them. Lott further attempts to portray Donohue as a liar for saying Lott’s models don’t control for the crack epidemic. This completely overlooks Donohue’s detailed discussion in the study of why Lott’s control for the epidemic, a non-randomized sample of cocaine prices from a collection of 21 cities, fails at almost every level. Pricing data is highly unlikely to map the actual demand for cocaine or its influence on crime, and the data from 1989 and 1990, two of the most critical years of the epidemic, is missing. Lott’s “control” thus doesn’t work, and its inclusion is likely detrimental to finding the true relationship between RTC laws and crime. Lott finally attempts to paint Donohue’s study as riddled with errors, despite the fact that most of the errors he cites have been corrected in the most recent version, while other purported errors clearly aren’t errors at all.

Lott also cites a 2014 study by Moody et al. as supporting his claims. However, the authors misreport their own results, which actually find significant evidence that RTC laws increase aggravated assault (for statistics geeks: the t-stats in Table 3 for Assault, post-law trend are statistically significant, despite not being bolded, oops), significantly altering the study’s conclusion.

Lott and the NRA seek solace in the fact that more studies support the conclusion that RTC laws reduce crime than oppose it. And on paper they appear to be correct: Lott tallies 21 academic works in favor of his hypothesis and only 14 against. Yet even in this simple counting exercise, Lott displays a significant amount of deceit.

To begin with, he excludes at least two studies, and, depending on the criteria, potentially more than seven, that disagree with him. He then proceeds to pad his side of the ledger with studies that have been thoroughly repudiated. Lott lists a study by Olson and Maltz as supporting his hypothesis, yet in a later study Maltz rejected the findings of his earlier work as flawed and urged other academics not to use it. Lott also cites Plassmann and Whitley’s paper as confirming evidence, yet, as we described earlier, their data set had numerous flaws which, once corrected, eliminated their findings. Plassmann has even admitted this fact. This flawed data set was also used in Lott’s book, The Bias Against Guns, which he lists as a supporting study.

Further, Lott lists the lone dissent from the National Research Council panel, which agreed with him on only one crime category, and gives it equal weight to the 15-member majority that found the evidence could not support Lott’s claim. Lott cites Moody and Marvell’s 2008 paper as support, yet this paper’s conclusion that RTC laws are generally beneficial rests solely on data from Florida (if Florida is excluded, they find RTC laws may actually increase crime). Meanwhile, a different study by none other than Marvell analyzed Florida and found no statistical evidence of crime decreases resulting from the RTC law, which completely undermines the 2008 paper. These dubious citations are further compounded by the inclusion of many studies with data sets ending in 1992, which is especially problematic as they fail to capture the dramatic crime reversal at the end of the crack epidemic.

Lott’s daunting list of studies thus relies heavily on old studies and on ones conclusively shown to contain fatal errors, while undercounting the studies that disagree with him. It is telling that Lott must resort to such tactics to preserve the tatters of his once proud theory.

Lott’s Return to Prominence

After the National Research Council released its findings in 2004, Lott’s fall from grace seemed complete. His research was discredited, and an ethics maelstrom caused even some pro-gun advocates to distance themselves. Despite this fall, though, Lott has remained a prolific author and has largely restored his once shattered reputation. Since 2004, Lott has written Freedomnomics, Straight Shooting, At the Brink, and Dumbing Down the Courts, and has continued to update More Guns, Less Crime.

Along with these books, Lott has continued to maintain his blog and has written numerous columns for Fox News, the National Review, and other publications. He is a frequent guest on several national news programs (especially Fox) and is once again touted as a firearms policy expert. He has even created the Crime Prevention Research Center, which, despite fundraising difficulties, regularly churns out pro-gun research.

In his more recent work Lott has become a perpetual misinformation machine. In one of Lott’s books, The Bias Against Guns, for example, Hemenway details a litany of errors. In one instance, Lott claims “that the few existing studies that test for the impact of gun control laws on total suicide… find no significant relationship.” As Hemenway points out, though, this statement completely ignores at least seven academic studies and two review articles, all of which find a significant effect of gun control laws on suicide rates.

However, the most egregious falsehoods Lott promulgates are strewn throughout his numerous op-ed columns. Here are but a few:

  • Lott writes in multiple articles that: “With just two exceptions, every public mass shooting in the United States since at least 1950 has taken place where citizens were banned from carrying guns.”

This is blatantly false. In fact, the best evidence indicates that since January 2009 there have been 16 mass shootings (where 4 or more people are killed by a firearm) that took place either in part or wholly in areas where guns were not banned, and 2 others where armed guards or police were present at the time of the shooting. In contrast, 15 mass shootings occurred in “gun-free zones.” Lott’s repeated falsehoods are not borne of ignorance, though. Lott penned a sloppy critique of an earlier report on mass shootings, in which he tacitly admitted that of the 8 incidents he took issue with, 4 did not occur in areas that banned guns. He has since turned his critique into an even more error-ridden report.

  • Lott states: “In addition, two thirds of these accidental gun deaths involving young children are not shots fired by other little kids but rather by adult males with criminal backgrounds.”

Lott provides no citation for this remark, and it appears to be a complete fabrication. There is no academic study that comes to this conclusion, and raw data from the National Violent Death Reporting System (compiled for us by the Harvard Injury Control Research Center) directly refutes Lott’s claim. Examining fatal accidental shootings from 2003 to 2006, in two thirds of cases children aged 0 to 14 were shot by another child aged 0 to 14. Including self-inflicted accidental deaths, this figure rises to 74%. Lott’s claim is clearly wrong. Further, Lott cannot take refuge in the fact that accidental shootings involving children are sometimes misclassified as homicides, because the National Violent Death Reporting System largely avoids that error. And as a New York Times report found, the vast majority of such shootings are either self-inflicted or involved another child. Children’s access to firearms is the problem, not criminals.

  • In a critique of Daniel Webster’s recent study examining the repeal of Missouri’s handgun purchaser licensing law, Lott claims (along with accusations of cherry picking) that: “Most likely, getting rid of the law slowed the growth rate in murders.”

Lott’s dual accusation that Webster’s study shows that Missouri’s gun law increased crime and that Webster cherry-picked his data is wrong on both counts. When we contacted Daniel Webster about Lott’s remarks, he replied that “I have no idea why Lott says that getting rid of the law slowed the growth rate in murders.” We saw little basis for Lott’s claim as well, as the only way Lott can actually make his claim that Missouri’s homicide rate was increasing more before the law’s repeal than after is to cherry-pick 2002 as the beginning of his analysis. While Webster chose to start the study period at 1999 to avoid the significant fluctuations in nationwide homicide rates between 1985 and 1998, Lott clearly picks 2002 in order to fabricate an upward pre-repeal homicide trend.

Further, Lott conflates Missouri’s permit-to-purchase (PTP) handgun law with other potentially less effective forms of universal background check laws to insinuate that there was a sea of states for Webster to choose from. In fact, the only other state that had recently changed its PTP law was Connecticut in 1995, a state Webster is currently studying. Finally, Webster’s focus on homicide is driven by superior data and the greater importance of lethal crimes, and is standard in gun policy research, despite what Lott implies.

Lott’s dishonesty doesn’t stop at promulgating false claims in various op-ed columns. Indeed, he has continued his unfortunate habit of revising history in an attempt to cast doubt on his opponents. In 2011 Media Matters published a typical critique of one of Lott’s numerous articles. This time, though, Lott accused them of misquoting him, and looking at the article he appeared to be correct. However, it turns out that Lott had edited his article after Media Matters’ critique, and then slammed them for “misquoting” the edited section. Screenshots prove the change. Rather than admit to being caught red-handed, Lott instead lashed out, wildly accusing Media Matters of doctoring the screenshots that so clearly showed Lott had changed his quote after the fact.

His evidence of such nefarious activity on the part of Media Matters? A particularly unflattering edited photo of Lott that they used. That’s it. However, to believe his suggestion that Media Matters had in fact misquoted him and subsequently tried to cover it up, the conspiracy has to extend far beyond doctored screenshots. For Lott to be right, not only would Media Matters have had to doctor multiple screenshots, but they would have also had to hack several websites and surreptitiously change Lott’s original quote in each of them. Or maybe these sites were in on the doctoring scheme.

The Media Matters episode is not an isolated incident. Even in low-stakes confrontations, Lott tries to modify history in an attempt to embarrass his opponent. In one case, Chad R. MacDonald, a relatively obscure writer who has written on Lott’s dubious history, challenged Lott in the comment section of a National Review column. While Lott could have easily ignored the criticism, he instead pounced, following his usual pattern of never letting any criticism escape unscathed. Then, after a further reply, Lott went back and revised his comment, making MacDonald’s response look inadequate and out of context. Apparently Lott forgot screenshots are a thing. While in isolation this case could be chalked up as merely a lapse of internet etiquette, when seen in context with the rest of Lott’s sordid history it further highlights a continuing pattern of dishonesty.

A Lott of Ethical Missteps

Concurrently with the collapse of the “More Guns, Less Crime” hypothesis, troubling ethical allegations concerning Lott began to surface.

The Vanishing Survey

John R. Lott has claimed, over and over again, that 98% of defensive gun uses involve merely brandishing the gun, with no shots fired. The first recorded instance of this claim appeared on February 6, 1997, when Lott attributed the statistic to more than 15 “national survey organizations,” including the L.A. Times, Roper, and Peter Hart. Since then, Lott has referred to this statistic more than four dozen times in publicly available sources.

In the 1998 edition of Lott’s book, More Guns, Less Crime, this 98 percent figure was attributed broadly to “national surveys.” In 2000, Lott changed this attribution from “national surveys” to “a national survey that I conducted.”

Tim Lambert aggregated every publicly available reference to the 98% figure and tabulated it. He discovered that, “before May 1999, Lott consistently implied that the 98% came from Kleck, after May 1999 he consistently implied that it came from his own survey.” For two entire years, then, John Lott said the 98 percent figure came from other people’s surveys, and then, out of nowhere, suddenly remembered that the statistic came from his own survey. One wonders how Lott could forget about his own enormous undertaking and accidentally attribute his hard work to someone else.

In a 2000 article written for The Criminologist, Lott elaborates on his new position, explaining that the 98 percent figure derives from a study he conducted in the first 3 months of 1997, surveying a representative sample of 2,424 people. To support this claim, in an email written to Northwestern Law Professor Lindgren, Lott explains: “I am willing to bet that I don’t start mentioning this [98%] figure until the spring of 1997. If I use it before I said that I did the survey, I will say that they nailed me. But if I only started using it about the time that I said that I did the survey, I think that it would be strong evidence the other way.”

Embarrassingly, Lott doubled down and it backfired. Otis Duncan discovered a reference to the statistic on February 6th, two months before Lott could have even completed the survey:

“There are surveys that have been done by the Los Angeles Times, Gallup, Roper, Peter Hart, about 15 national survey organizations in total that range from anything from 760,000 times a year to 3.6 million times a year people use guns defensively. About 98 percent of those simply involve people brandishing a gun and not using them.”

Page 41, State of Nebraska, Committee on Judiciary LB465, February 6, 1997, statement of John Lott, Transcript prepared by the Clerk of the Legislature, Transcriber’s Office

This means one of two things: either Lott successfully surveyed 2,424 people in one month with unpaid full-time undergraduates (which would have required calling at least 10,000 people with a minimum of five full-time interviewers, performing comprehensive statistical analysis without the assistance of colleagues, and reporting on that analysis, all within the month), or Lott is simply lying.
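The survey-logistics claim above can be checked with some back-of-the-envelope arithmetic. The response rate, call length, and working-month figures in the sketch below are our own illustrative assumptions, not numbers from Lott’s account; they simply show that the article’s “10,000 calls, five interviewers” estimate is in the right ballpark:

```python
# Back-of-the-envelope check of the workload for a 2,424-person phone survey.
# All figures marked "assumed" are illustrative assumptions, not Lott's numbers.

completed_interviews = 2_424      # sample size Lott reported
response_rate = 0.25              # assumed: typical phone-survey completion rate
calls_needed = completed_interviews / response_rate

minutes_per_call = 5              # assumed: dialing, screening, and interview time
total_phone_hours = calls_needed * minutes_per_call / 60

workdays_in_month = 20            # assumed: one working month
hours_per_day = 8
interviewers_needed = total_phone_hours / (workdays_in_month * hours_per_day)

print(f"Calls needed:          {calls_needed:,.0f}")
print(f"Total phone-hours:     {total_phone_hours:,.0f}")
print(f"Full-time interviewers: {interviewers_needed:.1f}")
```

Under these assumptions the arithmetic yields roughly 9,700 calls and about five full-time interviewers for a month, consistent with the scale of effort described above.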

The evidence, however, overwhelmingly points to the second possibility. When pressed for evidence of the survey, Lott has claimed all of the following:

  1. His hard drive crashed in June of 1997, erasing all evidence of the survey. There is therefore no hard evidence of the survey’s existence.
  2. He paid for the entire project out of his own pocket, and no expense information is available to substantiate the fact that any survey was ever administered.
  3. The survey was done by unpaid full-time undergraduates at the University of Chicago in their junior and senior years. Therefore, there are no employee records.
  4. He instructed each of the students to use their own telephones, and would subsequently reimburse them out of his own pocket. Thus, there are no telephone records.
  5. He does not remember the names, contact details, or faces of a single student volunteer. Thus, there is no way to corroborate the fact that students were involved in survey administration.
  6. He does not remember the questions asked on the survey.
  7. He had no discussions with anybody about sampling design.
  8. He did not retain any of the tally sheets because they were lost in an office move in 1997.

After investigating the controversy, Lindgren wrote that “all evidence of a study with 2,400 respondents does not just disappear when a computer crashes.” He goes on to explain that to lose every conceivable form of hard evidence—phone records, funding, tally sheets, potential communication with consultants, records of employees, ad infinitum—is essentially impossible:

Having done one large survey (about half the size of John Lott’s) and several smaller surveys, I can attest that it is an enormous undertaking. Typically, there is funding, employees who did the survey, financial records on the employees, financial records on the mailing or telephoning, the survey instrument, completed surveys or tally sheets, a list of everyone in the sample, records on who responded and who declined to participate, and so on. While all of these things might not be preserved in every study, some of them would almost always be retained or recoverable.

Lindgren pressed further and asked Lott how he drew the sample for the survey. Lott explained that he used a CD-ROM, but that he can’t remember where the CD is and doesn’t remember how he obtained it. Bending over backwards to give Lott a way out, Lindgren suggested he email all the students at the University of Chicago from 1997 to 1998, to see if anybody replied. Lott resisted, however, saying that he had “serious questions about how complete the University’s alumni records are.” For such serious accusations, Lott seemed rather uninterested in exonerating himself.

Of course, Lott’s response to this has been to assemble a veritable zoo of anecdotal claims from various tangentially related colleagues who vaguely remember him sort-of talking about some-kind-of-survey that probably happened sometime. Duncan and Lambert investigated Lott’s nightmarish hodgepodge of “evidence” and discovered that it consists of deceptively worded statements that don’t constitute real evidence. For example, one of Lott’s colleagues claimed that, “John told me that he had conducted a survey in 1997.” Notice, however, that this is very different from “John told me in 1997 that he was conducting a survey.”

Lott did, however, find a person who claims to have been interviewed in 1997 as part of the survey. His name is David M. Gross, a former board member of the National Rifle Association and founding director of the Minnesota Gun Owners Civil Rights Association. As Jon Wiener puts it in Historians in Trouble: Plagiarism, Fraud, and Politics in the Ivory Tower: “It seems unlikely, to put it charitably, that [David Gross] would turn up in a random sample of a few thousand people out of the 300 million Americans.”

Lott has also gone to heroic lengths to remind readers that the statistic is such a small portion of the book: “The reference to the original survey involves one number in one sentence in my book,” and “there have been many claims that I didn’t conduct a survey in 1997 that was reported in one sentence on page 3 of my book, More Guns, Less Crime.”

One wonders why someone who ostensibly went through such a herculean effort to conduct a huge survey with his own money in three months’ time would then turn around and diminish his own work by insisting that it’s only “one number in one sentence” in one book. That’s a lot of work and a lot of excuses for one sentence. Or, as Donald Kennedy, current editor-in-chief of Science Magazine, put it, Lott’s cavalcade of excuses “does not restore life to the data—which, ‘far from being one number in one sentence’, were at the center of controversy between Lott and his critics.”

In The Bias Against Guns, Lott returns to this point and reports on a new survey he conducted in November 2002, presumably intended to validate the original 98 percent figure. Lott writes: “…the survey I conducted during the fall of 2002 indicates that simply brandishing a gun stops crime 95% of the time, and other surveys have also found high rates.” He claims that 1,015 people were interviewed, but the book ends before the results are reported. Online reports describe the results further, showing that, contrary to Lott’s claims, one of the 7 respondents out of 1,015 who reported a self-defense gun use (13 total uses) said he fired the gun, a rate that falls well short of the claim that 98% or 95% of users merely brandish the gun. Indeed, none of the many studies of defensive gun use has found anything close to a 98% brandishing rate.

If this doesn’t seem like a big deal to the reader, consider that John Lott is considered the de facto gun violence expert for conservative outlets, consistently returning to Fox News after every mass shooting and gun violence episode, and is regularly cited by The National Rifle Association and gun acolytes. As Dr. David Hemenway explains, “Lott virtually always uses complicated econometrics. For readers to accept the results requires complete faith in Lott’s integrity, that he will always conduct careful and competent research. Lott does not merit such faith.”

This whole debacle was so controversial and plainly dishonest that it earned Lott a spot in the book, A Human Enterprise: Controversies in the Social Sciences, under the header: “Outright Lies.” In his concluding analysis, Jon Wiener writes, “The conclusion seemed obvious: Lott had never done the national survey. He was lying.”

Mary Had a Little Secret

Of course, John Lott isn’t the only person who defends John Lott. Incessant criticism from internet sleuths left Lott without many supporters until one passionate blogger, Mary Rosh, assumed the unenviable position of defending Lott’s character.

Mary identified herself as a former student of Lott at the University of Pennsylvania’s Wharton School. She gushed profusely for her former professor, describing Lott as “the best professor I ever had.” She admiringly recalled that, “Lott taught me more about analysis than any other professor that I had and I was not alone. There were a group of us students who would try to take any class that he taught. Lott finally had to tell us that it was best for us to try and take classes from other professors more to be exposed to other ways of teaching graduate material.”

A rave Amazon review authored by Mary contained the subject line, “SAVE YOUR LIFE, READ THIS BOOK — GREAT BUY!!!!” She explained that “Unlike other studies, Lott used all the data that was available. He did not pick certain cities to include and others to exclude. No previous study had accounted for even a small fraction of the variables that he accounted for.”

Mary had clearly invested a great deal of time in the gun debate, defending Lott at every turn with extremely precise citations, often mentioning exact page numbers and table references in Lott’s work. Over time, Mary appeared almost pathologically obsessed with bolstering the reputation of Lott, insisting, for example, that various online communities artificially inflate the popularity of Lott’s work by downloading his publication: “If you want to read the research paper upon which this research is based, go to: The papers that get downloaded the most get noticed the most by other academics. It is very important that people download this paper has frequently as possible.”

Mary also seemed to have a deep emotional attachment to Lott, often taking attacks personally that were directed towards him. She would defend Lott with fiery passion, “This posting is filled with lies. Lott is not a ‘shill’ for anyone. Prove your claim…” A few members online thought this behavior was a bit bizarre, consoling Mary: “I’m sorry if you’re taking this personally, but you are not John Lott.”

Except, funny thing, she totally was. Mary Rosh was exposed as an internet sockpuppet of John Lott in 2003 by Julian Sanchez, one of Lott’s colleagues at the Cato Institute. Sanchez compared the IP addresses from Mary Rosh’s forum posts with the IP address in an email that John Lott had sent him. On January 22, 2003, Mary Rosh uttered her final words, confessing that John Lott and Mary Rosh were one and the same. Lott wrote in a confession email, “I shouldn’t have used it, but I didn’t want to get directly involved with my real name because I could not commit large blocks of time to discussions.” “I shouldn’t have used it” will go down as perhaps the largest understatement in Lott’s storied career of academic dishonesty.

In a letter to Science Magazine, Lott explained that he “used a pseudonym” because “earlier postings under my name elicited threatening and obnoxious telephone calls.” In response to Lott’s letter, Donald Kennedy, the editor of Science, wrote, “Lott cannot dismiss his use of a fictitious ally as a ‘pseudonym.’ What he did was to construct a false identity for a scholar, whom he then deployed in repeated support of his positions and in repeated attacks on his opponents. In most circles, this goes down as fraud.” Let’s be clear: Lott wasn’t just defending the content of his work, or fielding unresolved questions, he was deliberately buttressing a delusion, one in which John Lott was the hero of some young woman, a champion of just causes, and a balanced and deliberate scholar. As journalist Michelle Malkin put it, “Lott’s invention of Mary Rosh to praise his own research and blast other scholars is beyond creepy. And it shows his extensive willingness to deceive to protect and promote his work.” Indeed, if Lott is willing to go to such extraordinary lengths to deceive, just what else is he lying about? And what does it say about Lott that his defenders in academia are so few and far between that he has to invent his own imaginary friends to high-five?

In any case, the whole incident earned Lott a spot in the recent book: The Encyclopedia of Liars and Deceivers.

Lott’s Lasting Legacy

Why does all this matter? After all, if Lott were just another pundit, his egregious dishonesty would be of little consequence. But Lott isn’t just another pundit. He is the intellectual backbone of the gun lobby and is still considered an expert on gun policy.

Lott’s most significant legacy can be seen in the explosion of RTC laws from a few gun-friendly states to the entire country. Beginning soon after he published his original paper on RTC laws, Lott testified for their expansion in numerous states, including Nebraska, Minnesota, Ohio, and Wisconsin. In the intense 2003 Missouri vote over its license-to-carry law, Lott’s research played an especially decisive role. A pro-gun organization lobbying for the bill sent a copy of Lott’s The Bias Against Guns to every State Senator. This effort was enough to sway key votes to override the governor’s veto, securing the bill’s passage. Without Lott’s research, it is unlikely that Missouri and the states that followed would have adopted RTC laws.

His research has even had a significant influence on Congress. For example, the “Personal Safety and Community Protection Act of 1997” cited Lott’s research in glowing terms, as did the much more recent “National Right-to-Carry Reciprocity Act of 2011.” Further, Lott has also testified as an expert in front of Congress on multiple occasions, most recently this past year on Stand Your Ground laws. That Lott still has the prestige necessary to potentially sway national policy is troubling given his ethically dubious past.

Given Lott’s national influence, it is unsurprising that the gun lobby relies heavily on his research. The NRA and other pro-gun organizations also frequently cite his work on their “fact sheets.” Recently, the NRA even leapt to Lott’s defense after a column questioning the “More Guns, Less Crime” hypothesis appeared in the Washington Post.

The gun lobby’s ties with Lott extend well beyond merely citing his research and occasionally defending his work. The NRA frequently promotes Lott’s events, and he is often a distinguished guest at the NRA’s annual conventions. Ted Nugent, an NRA board member who has called Lott “…my academic hero,” has helped Lott with fundraising efforts for his research center.

Removing Lott from the Debate

One last thing: At one point, John Lott, writing as Mary Rosh, insisted that Michael Bellesiles, a former professor of history at Emory University whose work has been thoroughly discredited, be completely dismissed as a fraud, explaining that “…among the amazing things is that Belleslies has lost his data so that no one can replicate his results, ” and “Bellesiles changes his stories multiple times on where he got the California data and it turns out after each new story that the data was not at the place that he claimed.” Lott also berated him for his inconsistent story and dubious results: “as people have said, he may have accidentally misphrased something a couple of times, but not more than 50 times” and, “The numbers that others have found looking at this data is several time[s] what Belleslies claims to have found.”

Yet if you were to replace the name Bellesiles with “John Lott” the stories would be almost identical. The difference, however, is that Bellesiles was barred from academia, retroactively stripped of his most prestigious award, and was last spotted in 2012, teaching and working as a bartender. John Lott, meanwhile, is regularly invited to appear on national television as a gun policy expert and carries enormous influence over gun control legislation to this day.

Time and time again Lott has abused his academic credentials to peddle falsehoods. Instead of soberly presenting evidence, and letting the research speak for itself, Lott instead authored his own fan-base, fabricated evidence, manipulated models, mischaracterized data, and then attempted to bulldoze anybody that dared question the authenticity of his research. This is not the behavior of someone who is interested in truth-seeking; it is the behavior of an ideologue who is concerned only with making his opinions as loud and virulent as possible.


Update 12/1/15: Earlier this year, John Lott posted a rebuttal to our article (an archived copy is available). On social media, Lott has been touting his rebuttal as a decisive debunking of our points. Nothing could be further from the truth. His lengthy article managed to find only one small error in our work, which we have since corrected by adding the word “initial” to a single sentence, with a star next to it to highlight the change. We also discovered that we repeatedly misspelled Ted Goertzel’s name, which we have also corrected. Those are the only corrections we have made. Meanwhile, Lott’s “rebuttal” is filled with half-truths, errors, and outright lies, which we extensively detail in our own comprehensive counter-rebuttal.
