In 1747, the naval surgeon James Lind set about determining the most effective treatment for scurvy, a disease that was killing thousands of sailors around the world.
Lind selected 12 sailors suffering from scurvy and divided them into 6 pairs.
Each pair received a different treatment: cider; sulphuric acid; vinegar; seawater; a concoction of nutmeg, garlic and mustard; or 2 oranges and a lemon.
In less than a week, the pair who had received oranges and lemons each day were back on active duty, while the others languished.
Given that sulphuric acid was then the British Navy’s main treatment for scurvy, this was a crucial finding.
The trial provided robust evidence for the powers of citrus because it created a credible counterfactual.
The sailors didn’t choose their treatments, nor were they assigned based on the severity of the ailment.
Instead, they were randomly allocated, making it likely that differences in the sailors’ recoveries were due to the treatment rather than other characteristics.
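To make that logic concrete, here is a minimal simulation sketch in Python. The sailors, their severity scores and the group labels are all invented for illustration; none of this is Lind’s data. The point is simply that random allocation keeps any one treatment from being systematically given to the sickest patients.

```python
import random

random.seed(1)

# Twelve hypothetical sailors, each with an invented baseline severity score
# (made-up numbers for illustration, not Lind's data).
sailors = [{"id": i, "severity": random.uniform(1, 10)} for i in range(12)]

# Randomly allocate the sailors into 6 pairs, one pair per treatment, as in Lind's design.
random.shuffle(sailors)
treatments = ["cider", "sulphuric acid", "vinegar", "seawater", "nutmeg mix", "citrus"]
pairs = {t: sailors[2 * k: 2 * k + 2] for k, t in enumerate(treatments)}

# Because allocation ignores severity, no treatment is systematically handed
# the sickest sailors; with only 2 per group the balance is rough, but it holds
# on average and tightens as trials get larger.
for treatment, pair in pairs.items():
    avg_severity = sum(s["severity"] for s in pair) / len(pair)
    print(f"{treatment:15s} mean baseline severity: {avg_severity:.1f}")
```

With only 2 sailors per group the balance is imperfect, but because nothing about the sailors influences which treatment they receive, the comparison is fair on average.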
Lind’s randomised trial, one of the first in history, has attained legendary status.
Today, new medical treatments are routinely subjected to randomised trials.
When a pharmaceutical company is testing a new wonder drug, patients will typically be randomly assigned to 2 groups – one receiving the drug, the other a placebo – so that the groups are as similar as possible.
Only if the outcomes are significantly different between the groups will the drug be judged a success.
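As a rough sketch of what that comparison involves (the sample size, the outcome scores and the use of a simple two-sample t-test are assumptions for illustration, not any company’s actual protocol), a minimal version in Python might look like this:

```python
import random
from statistics import mean
from scipy.stats import ttest_ind

random.seed(42)

# Hypothetical trial: 200 patients randomly split into a drug group and a placebo group.
patients = list(range(200))
random.shuffle(patients)
drug_group, placebo_group = patients[:100], patients[100:]

# Simulated outcomes (say, symptom-improvement scores); the one-point drug benefit is invented.
scores = {p: random.gauss(5.0, 2.0) for p in placebo_group}
scores.update({p: random.gauss(6.0, 2.0) for p in drug_group})

drug_scores = [scores[p] for p in drug_group]
placebo_scores = [scores[p] for p in placebo_group]

# Two-sample t-test: is the gap between group means larger than chance alone would explain?
t_stat, p_value = ttest_ind(drug_scores, placebo_scores)
print(f"drug mean {mean(drug_scores):.2f}, placebo mean {mean(placebo_scores):.2f}, p-value {p_value:.4f}")
```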
Companies such as Amazon and Netflix routinely conduct randomised trials to improve their products.
Yet some are now arguing that ‘big data’ makes randomised trials unnecessary.
Between large‑scale surveys and extensive administrative datasets, the world is awash in data as never before.
Each day, hundreds of exabytes of data are produced.
Big data has improved the accuracy of weather forecasts, permitted researchers to study social interactions across racial and ethnic lines, enabled the analysis of income mobility at a fine geographic scale and much more.
The problem is that big data can produce the wrong answer.
Take the example of hormone replacement therapy (HRT) for post‑menopausal women.
In 1976, the Nurses’ Health Study began tracking some 100,000 registered nurses.
The study found that women who chose to use HRT halved their risk of heart disease.
Largely as a result of this non‑randomised study, HRT usage rates soared.
By the late 1990s, about 2 fifths of post‑menopausal women in the United States were using HRT – mostly to reduce the risk of heart disease.
However, no randomised trial had evaluated its impact on heart health.
Then the US National Institutes of Health funded 2 randomised trials, comparing HRT against a placebo.
The trials, which began in 1993, did not support the use of menopausal HRT to prevent coronary heart disease.
Indeed, one of the randomised trials was stopped early because the data and safety monitoring board concluded there was some evidence of harm.
With the health of millions of women at stake, the early observational data had presented an inaccurate picture of HRT’s impact on post‑menopausal women.
The fact that the observational studies had a larger sample size than the randomised trials did not help.
Lacking evidence from randomised trials, many women took a treatment that may have had an adverse impact on their health.
Subsequent research has shown a far more nuanced picture around the potential uses and benefits of HRT, which has enhanced medical understanding of ways to alleviate the symptoms of menopause.
Researcher Rory Collins and his co‑authors refer to this capacity of random assignment to guard against such errors as the ‘magic of randomisation’.
Large datasets are a valuable complement to randomised trials. But big data is not a substitute for randomisation.
In a 2023 joint statement, the European Society of Cardiology, American Heart Association, American College of Cardiology and the World Heart Federation concluded:
‘The widespread availability of large scale, population‑wide, real‑world data is increasingly being promoted as a way of bypassing the challenges of conducting randomised trials. Yet, despite the small random errors around the estimates of the effects of an intervention that can be yielded by analyses of such large datasets, non‑randomized observational analyses of the effects of an intervention should not be relied on as a substitute, due to their potential for systematic error.’
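A toy simulation can illustrate the contrast the statement draws between random error and systematic error. In this sketch (entirely invented numbers; it is not a model of the HRT studies), healthier people are more likely to choose a treatment that in truth does nothing. The observational estimate stays biased no matter how large the dataset, while a far smaller randomised trial lands near the true effect of zero.

```python
import random
from statistics import mean

random.seed(0)
TRUE_EFFECT = 0.0  # in this toy world the treatment genuinely does nothing

def person():
    health = random.gauss(0, 1)      # unobserved baseline health
    chooses_treatment = health > 0   # healthier people opt in (confounding)
    return health, chooses_treatment

# Observational 'big data': one million people choosing for themselves.
treated, untreated = [], []
for _ in range(1_000_000):
    health, chose = person()
    outcome = health + TRUE_EFFECT * chose + random.gauss(0, 1)
    (treated if chose else untreated).append(outcome)
print(f"observational estimate: {mean(treated) - mean(untreated):.2f}  (biased, despite the huge sample)")

# Randomised trial: only 2,000 people, but treatment assigned by coin flip.
treated, untreated = [], []
for _ in range(2_000):
    health, _ = person()
    assigned = random.random() < 0.5
    outcome = health + TRUE_EFFECT * assigned + random.gauss(0, 1)
    (treated if assigned else untreated).append(outcome)
print(f"randomised estimate:    {mean(treated) - mean(untreated):.2f}  (close to the true effect of zero)")
```

More data shrinks the random error around the observational estimate, but it does nothing to remove the systematic error built into who chooses the treatment.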
The same is true in policymaking, where low‑quality evaluations can produce misleading answers.
Randomised trials provide a more rigorous way to evaluate what works.
For example, Scared Straight, a program that aimed to deter young people from crime by exposing them to prison life, was found in randomised trials to increase offending rates instead.
Similarly, randomised evaluations of abstinence‑only programs aimed at reducing rates of HIV among young people found no evidence that the programs curbed behaviours associated with higher risk.
In other cases, randomised trials produce powerful evidence of effectiveness. Job training programs Year Up and Per Scholas, targeted at low‑income adults, focus on fast‑growing industries with well‑paying jobs, and provide paid internships with local employers. Randomised trials find that these programs boost long‑term earnings by 20 to 40 per cent.
Saga Tutoring, an intensive school‑wide program for 9th and 10th graders, was shown in randomised trials to raise maths achievement by more than two‑thirds of a grade level in the high schools serving the most socioeconomically disadvantaged students.
Randomised trials have also helped temper the claims of advocates.
Despite assertions that police body cameras would transform officers’ interactions with the public, randomised trials suggest the cameras have only small and statistically insignificant effects on police use of force and civilian complaints, though they may provide better evidence when cases go to court.
Contrary to the belief that microcredit would transform lives, randomised trials have found little evidence that providing small loans to entrepreneurs in emerging economies boosts spending on education or health, or empowers women.
And despite bold claims about its impact on recipients, a recent randomised trial of universal basic income in 2 US states found that a payment of US$1,000 a month over 3 years had fairly modest impacts on job quality and health.
Some of these findings surprised me, and I hope you had the same reaction.
People are complex, and it would be strange if theory alone could predict which programs will work.
Surprises are also common in medicine, where a recent study concluded that the overall success rate of clinical trials for new drugs is 8 per cent.
Nonetheless, global spending on clinical trials exceeds US$30 billion a year, because such trials identify which treatments work, and which do not.
Clinical trials have extended life spans by proving which treatments work. Clinical trials have saved lives by weeding out treatments that do not.
James Lind’s scurvy trial was groundbreaking.
Alas, his write‑up left something to be desired.
Six years after his experiment, Lind published the 456‑page tome A Treatise of the Scurvy.
His experimental results were spot‑on, but his theoretical explanations for why citrus worked were hocus‑pocus.
The treatise was largely ignored.
Finally, in the 1790s, a follower of Lind, the physician Gilbert Blane, persuaded senior naval officials that oranges and lemons could prevent scurvy.
In 1795 – almost half a century after Lind’s findings – lemon juice was issued on demand; by 1799 it became part of the standard naval provisions.
In the early 1800s British sailors were consuming 200,000 litres of lemon juice annually.
The British may have been slow to adopt Lind’s findings, but they were faster at curing scurvy than their main naval opponents.
An end to scurvy was a key reason why the British, under the command of Admiral Lord Nelson, were able to maintain a sea blockade of France and ultimately win the 1805 Battle of Trafalgar against a larger force of scurvy‑ridden French and Spanish ships.
Nelson had clever tactics, but it also helped that scurvy wasn’t ravaging his crew.
His victory is just a part of the legacy of James Lind.
His example reminds us even today, in our world awash with data, that a curious mind can change the course of history.
This is an edited extract of a speech delivered at Oxford University.