Dan Ariely is a behavioral science superstar. His research on honesty, cheating, and irrationality is “extremely clever and extremely intuitive,” says behavioral scientist Eugen Dimant of the University of Pennsylvania—and it has had a huge impact on both the field and government policies. Ariely, who founded the Center for Advanced Hindsight at Duke University, has also written three New York Times bestsellers and is a TED Talks regular.

But some researchers are calling Ariely’s large body of work into question after a 17 August blog post revealed that fabricated data underlie part of a high-profile 2012 paper about dishonesty that he co-wrote. None of the five study authors disputes that fabrication occurred, but Ariely’s colleagues have washed their hands of responsibility for it. Ariely acknowledges that only he had handled the earliest known version of the data file, which contained the fabrications.

Ariely emphatically denies making up the data, however, and says he quickly brought the matter to the attention of Duke’s Office of Scientific Integrity. (The university declined to say whether it is investigating Ariely.) The data were collected by an insurance company, Ariely says, but he no longer has records of interactions with it that could reveal where things went awry. “I wish I had a good story,” Ariely told Science. “And I just don’t.”

Finding possible fraud in the work of such an influential scientist is jarring, Dimant says, especially for “the new generation of researchers who follow in his footsteps.” Behavioral scientists Leif Nelson and Joseph Simmons, who exposed the apparent fraud via their blog Data Colada together with their colleague Uri Simonsohn, say a thorough, transparent investigation is needed. But given other universities’ past reluctance to investigate their own researchers, they are skeptical that Duke will conduct one. That may leave Ariely’s supporters insisting he is innocent and detractors assuming he is guilty, Nelson says. “No one knows. And that’s terrible.”

The 2012 paper, published in the Proceedings of the National Academy of Sciences (PNAS), reported a field study for which an unnamed insurance company purportedly randomized 13,488 customers to sign an honesty declaration at either the top or bottom of a form asking for an update to their odometer reading. Those who signed at the top were more honest, according to the study: They reported driving 2428 miles (3907 kilometers) more on average than those who signed at the bottom, which would result in a higher insurance premium. The paper also contained data from two lab experiments showing similar results from upfront honesty declarations.

The Obama administration’s Social and Behavioral Sciences Team recommended the intervention as a “nonfinancial incentive” to improve honesty, for instance on tax declarations, in its 2016 annual report. Lemonade, an insurance company, hired Ariely as its “chief behavioral officer.” But several other studies found that an upfront honesty declaration did not lead people to be more truthful; one even concluded it led to more false claims.

After discovering the result didn’t replicate in what he thought would be a “straightforward” extension study, one of the authors of the PNAS paper, Harvard Business School behavioral scientist Max Bazerman, asked the other authors to collaborate on a replication of one of their two lab experiments. This time, the team found no effect on honesty, as it reported in 2020, again in PNAS.

While conducting the new lab study, Harvard Business School Ph.D. student Ariella Kristal found an odd detail in the original field study: Customers asked to sign at the top had significantly different baseline mileages—about 15,000 miles lower on average—than customers who signed at the bottom. The researchers reported this as a possible randomization failure in the 2020 paper, and also published the full data set.

Some time later, a group of anonymous researchers downloaded those data, according to last week’s post on Data Colada. A simple look at the participants’ mileage distribution revealed something very suspicious. Other data sets of people’s driving distances show a bell curve, with some people driving a lot, a few very little, and most somewhere in the middle. In the 2012 study, there was an unusually equal spread: Roughly the same number of people drove every distance between 0 and 50,000 miles. “I was flabbergasted,” says the researcher who made the discovery. (They spoke to Science on condition of anonymity because of fears for their career.)
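The distribution check described here can be sketched in a few lines of Python. This is purely an illustration with simulated numbers, not the study's actual data or the anonymous researchers' code: a plausible mileage sample clusters around a typical value, while a fabricated uniform spread fills every bin about equally.

```python
import random

random.seed(0)

# Illustrative data only; NOT the study's actual numbers.
# Real-world annual mileages tend to cluster around a typical value,
# while the suspicious data were spread almost evenly from 0 to 50,000.
plausible = [max(0, random.gauss(12000, 6000)) for _ in range(10000)]
suspicious = [random.uniform(0, 50000) for _ in range(10000)]

def bin_counts(data, width=5000, top=50000):
    """Count readings in 5,000-mile bins between 0 and 50,000 miles."""
    counts = [0] * (top // width)
    for x in data:
        if x < top:
            counts[int(x // width)] += 1
    return counts

print(bin_counts(plausible))   # counts peak in the middle bins
print(bin_counts(suspicious))  # every bin holds roughly the same count
```

In the simulated bell-shaped sample the middle bins dominate and the tail bins are nearly empty; in the uniform sample all ten bins hold roughly 1,000 readings each, which is the pattern that struck the researcher as suspicious.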

Worrying that PNAS would not investigate the issue thoroughly, the whistleblower contacted the Data Colada bloggers instead, who conducted a follow-up review that convinced them the field study results were statistically impossible.

For example, a set of odometer readings provided by customers when they first signed up for insurance, apparently real, was duplicated to suggest the study had twice as many participants, with random numbers between one and 1000 added to the original mileages to disguise the deceit. In the spreadsheet, the original figures appeared in the font Calibri, but each had a close twin in another font, Cambria, with the same number of cars listed on the policy, and odometer readings within 1000 miles of the original. In 1 million simulated versions of the experiment, the same kind of similarity appeared not a single time, Simmons, Nelson, and Simonsohn found. “These data are not just excessively similar,” they write. “They are impossibly similar.”
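The twin-finding logic can be sketched as follows. This is a simplified stand-in with made-up numbers and a crude matching rule, not Data Colada's actual analysis: duplicating each reading and adding a small random offset guarantees every original has a close "twin," whereas an independent sample of the same size essentially never pairs off completely.

```python
import random

random.seed(1)

def has_full_twin_match(readings, pool):
    """True if every reading can be paired with a distinct entry in
    `pool` within 1,000 miles (a crude stand-in for the blog's check)."""
    used = set()
    for r in readings:
        match = next((i for i, p in enumerate(pool)
                      if i not in used and abs(p - r) <= 1000), None)
        if match is None:
            return False
        used.add(match)
    return True

# Fabricated pattern: duplicate each reading and add 1-1,000 miles.
originals = [random.randint(1000, 150000) for _ in range(100)]
fakes = [r + random.randint(1, 1000) for r in originals]
print(has_full_twin_match(originals, fakes))  # True by construction

# An independent sample of the same size: a complete set of such
# close twins is vanishingly unlikely.
independent = [random.randint(1000, 150000) for _ in range(100)]
print(has_full_twin_match(originals, independent))  # almost certainly False
```

Running many independent samples through a check like this is the spirit of the bloggers' simulation: if a complete set of close twins never appears by chance, its presence in the real file points to duplication.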

Ariely calls the analysis “damning” and “clear beyond doubt.” He says he has requested a retraction, as have his co-authors, separately. “We are aware of the situation and are in communication with the authors,” PNAS Editorial Ethics Manager Yael Fitzpatrick said in a statement to Science.

Three of the authors say they were only involved in the two lab studies reported in the paper; a fourth, Boston University behavioral economist Nina Mazar, forwarded the Data Colada investigators a 16 February 2011 email from Ariely with an attached Excel file that contains the problems identified in the blog post. Its metadata suggest Ariely had created the file 3 days earlier.

Ariely tells Science he made a mistake in not checking the data he received from the insurance company, and that he no longer has the company’s original file. He says Duke’s integrity office told him the university’s IT department does not have email records from that long ago. His contacts at the insurance company no longer work there, Ariely adds, but he is seeking someone at the company who could find archived emails or files that could clear his name. His publication of the full data set last year showed he was unaware of any problems with it, he says: “I’m not an idiot. This is a very easy fraud to catch.”

Marc Ruef, an independent data forensics specialist, says Ariely could appear as the “creator” of the Excel file even if the data did originate elsewhere, for instance because he created the spreadsheet and sent it to an insurance company to populate. But some behavioral scientists have asked on social media why a company would make up data about its clients’ behavior in a way that supported one of Ariely’s theories. (Ariely, citing Duke’s legal advice, declined to name the company or comment about its involvement in possible fraud.)

The timeline is also hazy: Ariely mentioned the study in a 2008 lecture and in a 2009 Harvard Business Review piece, years before the metadata indicate the Excel file was created. Ariely says he does not remember when the study was conducted.

The odometer study has revived other worries about Ariely’s work. In July, an expression of concern was attached to a paper he published in 2004 in Psychological Science; in that case, statistical errors could not be resolved because Ariely was unable to produce the original data. In a 2010 NPR interview, Ariely referred to dental insurance data that the company involved later said did not exist, WBUR reported.

The Data Colada bloggers say they consider Ariely a friend. Finding his name as the creator of the field data file was “a very unpleasant moment,” Simmons says. “This whole thing has been incredibly stressful.”
