Talking to voters about LGBT issues changes bias

Ran one of the many headlines this week. “Hang on”, I thought, “this seems awfully familiar…”

If it feels familiar, it’s because we have heard this story before – specifically, in late 2014, when Michael LaCour and Donald Green published a flashy article in Science claiming that a short conversation with a door-to-door gay canvasser was enough to change people’s opinions on gay marriage, and that the change persisted nine months later. The study received wide coverage, both for its large implications for political campaigning and reducing discriminatory bias, and for the feel-good factor of the power of human interaction. “At last”, exclaimed the campaigners, “here is science proving human connection can break down prejudice”.

Alas, it was not meant to be. In May the following year, Broockman and Kalla, two graduate students at the University of California, Berkeley, published a comprehensive critique of LaCour’s paper. A thorough statistical analysis of the original data, as well as follow-up with the survey companies used for the study, revealed several irregularities which were hard to explain without concluding that the data had been falsified. Shortly afterwards, the original paper was retracted.

So far the story is unusual, but not surprising. Cheating in science happens on occasion, particularly in glamorous, high-impact journals like Science. The story was a disappointment for LGBT campaigners, but a small victory for open science and the public availability of data. And that should have been the end of it, were it not for those two graduate students back in Berkeley. Broockman and Kalla were deeply interested in the LaCour paper in the first place (they did write a 27-page analysis of it, after all) not because they set out to debunk it, but because they were planning their own follow-up study on canvassing techniques and opinion shifting.

That follow-up study is now published, and much to everyone’s surprise it seems to agree broadly with the trend in the (allegedly fraudulent) original paper. The new study shows that a 10-minute conversation with a canvasser can change people’s opinions about transgender issues in a similarly long-lasting way. But importantly, and here is where the study departs from LaCour’s interpretation, the effect is seen even if the canvassers are not transgender themselves, which changes our understanding of why the effect occurs. The original study was all about humanising an issue – by exposing voters to real, flesh-and-blood gay activists, their opinions could be shifted by linking the issue to the person at their door, or so the logic ran. With these latest findings, Broockman suggests instead that the success of canvassing relies on a particular perspective-taking technique, not on exposing prejudiced individuals to a gay or transgender canvasser who humanises the issue.

Many of the new study’s positive design features came out of the LaCour debacle: recruitment techniques, canvassing methods and the perspective-taking approach were all informed by the ensuing debate and fine-tuned over the course of the fallout. It could be argued that the new study would not have been as successful without it.

This story is therefore a tale of good science. Because the data for LaCour’s original study were public, the irregularities could be exposed, and a better study came out of it. We found out the original interpretation was likely inaccurate, and we now have a better idea of what works when changing people’s opinions (whether the interpretation of the original data has any validity in light of the alleged fraud remains an open question).

Most tellingly, Broockman and his colleagues remain committed to open science, having seen the benefit of inspecting other scientists’ data. They have published all of the data and code for their study, in case any amateur sleuth wishes to take a magnifying glass to their findings – and, as we have learned, that can be a very good thing indeed.

—————————————————————————————————

Broockman & Kalla (2016) was published in Science, along with an accompanying commentary. Some great coverage of the story has featured in NPR, FiveThirtyEight, and Wired.

On Wednesday the news broke that an artificial intelligence had finally cracked one of the most complex board games out there: Go. The announcement and subsequent paper from Google’s DeepMind showcase their brainchild, nicknamed AlphaGo. This computer program is not only capable of playing Go, but defeated the European Go champion, Fan Hui, back in October 2015. It is currently scheduled for a March match against Lee Sedol, arguably the reigning world champion in the game.

If this story feels familiar, it’s because it is – IBM’s Deep Blue famously beat chess grandmaster Garry Kasparov back in 1997, launching artificial intelligence (AI) into the public consciousness and starting a great pursuit of computer programs that could beat humans at every conceivable game of skill. The team at DeepMind announced early last year that they had developed an AI which could beat humans at a whole swathe of classic arcade video games, from Pong to Space Invaders.

The game of Go itself has a deep appeal to the mathematically minded, from baseball-like in-depth player statistics to convoluted mathematical models for player rankings, so it is no surprise that it would appeal as a challenge to designers of artificial intelligence. But what caught my attention amongst the media flurry was this statement by Demis Hassabis, CEO of Google DeepMind, in an interview with Nature:

“Go is a very ancient game, it is probably the most complex game humans play, it has more configurations on the board than there are atoms in the universe […]”.

These kinds of statements are often hard to wrap your head around. Intuitively, it does not make any sense: I can sit at my desk with a standard 19×19 Go board and the 361 stones it takes to fill it, and very laboriously make every possible combination on the board. This way I haven’t used up all the atoms in the universe – only those in the single Go set I have in front of me.

Dealing with very large numbers such as the number of atoms in the universe is notoriously difficult to do intuitively. Let us begin by looking at the actual figures. What is the number of possible configurations on a Go board? For the standard 19×19 board, there are 361 positions where the two players can place their stones. In Go, any given position can be either empty, a black stone or a white stone, meaning any of the 361 positions can be in one of 3 states. Thus, the number of possible board configurations is 3^361, or to put it in more standard notation, about 1×10^172. Of those, only about 1×10^170 are legal positions, in other words positions that do not violate the rules of Go. Surprisingly, the exact number of legal positions has been calculated as:

208168199381979984699478633344862770286522453884530548425639456820927419612738015378525648451698519643907259916015628128546089888314427129715319317557736620397247064840935
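Numbers on this scale are easy to verify for yourself. Here is a minimal sketch in Python, whose integers are arbitrary-precision, so 3^361 can be computed exactly (a sanity check, not a Go engine):

```python
# Each of the 361 points on a 19x19 board is empty, black or white.
total = 3 ** 361
print(len(str(total)))   # 173 digits, i.e. on the order of 10^172
print(f"{total:.3e}")    # ~1.741e+172

# For comparison, the exact legal-position count quoted above is a
# 171-digit number, roughly 2.08e170.
```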

Which, I am sure you will agree, is a rather large number indeed. Now, onto the universe. The usual figure bandied about is 1×10^82 atoms in the universe. In truth, any such estimate rests on some very strong assumptions. The typical approach an astrophysicist might take when thinking about this problem is to start with the number of stars in the universe. Computer simulations put that number at around 1×10^23. Next we need to know how much stuff is in a star – based on the universe observable from Earth, each star weighs an average of 1×10^35 grams. Next, and perhaps the most precise estimate in this equation: each gram of matter contains about 1×10^24 protons. Finally, we can put all those numbers together:

10^23 × 10^35 × 10^24 = 10^82

Planets and other planetary bodies don’t make it into the calculation, since stars are substantially more massive. So ultimately it is a very rough, back-of-an-envelope estimate, which is probably in the correct region, give or take a few orders of magnitude.
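In code, the whole estimate is just three multiplications of the rough figures above:

```python
# Back-of-an-envelope atom count, using the estimates quoted in the text.
stars = 1e23               # stars in the observable universe (from simulations)
grams_per_star = 1e35      # average stellar mass, in grams
protons_per_gram = 1e24    # nucleons in a gram of matter

atoms = stars * grams_per_star * protons_per_gram
print(f"{atoms:.0e}")      # 1e+82
```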

Going back to the number of Go board positions, we can see that there is a massive difference between the two estimates. How can this be? The first way to conceptualise this is to think not of using a single Go board to make every combination, but of having many Go boards sitting side by side, each with a different combination of stones laid on top. As we established before, we would need 1×10^172 individual boards to make every possible combination. How much room would that take? Laying them side by side in single file, the line of Go boards would measure 4×10^168 km. For comparison, the diameter of the observable universe is about 1×10^33 km.
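A quick sketch of that estimate – the ~42 cm board width is my own assumption for a standard 19×19 goban:

```python
# One board per configuration, laid side by side in a single line.
boards = 10 ** 172             # possible configurations, from above
board_width_km = 0.42 / 1000   # ~42 cm board, in kilometres

line_km = boards * board_width_km
print(f"{line_km:.0e}")        # ~4e+168 km

universe_km = 1e33             # diameter of the observable universe
print(f"{line_km / universe_km:.0e}")   # the line spans ~4e+135 universes
```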

Another helpful way to think about the problem is to consider how long it would take a person to set a single board to display every possible combination of stones. Let us assume a reasonably competent person can shift the stones to any new configuration in 20 seconds. Even better, let us assume we have a robot that can do the same, with the added advantage that it does not eat, sleep, or make mistakes. Starting with a blank board, how long would it take to complete every possible board combination? Using the values above, it would take on the order of 1×10^163 years, or expressed fully:

1000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000

That is not only a very long time; it is many times longer than the age of the universe, which is 13.8 billion (10^9) years old.
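The same sum, using the rounded figure of 1×10^170 legal positions and our tireless 20-second robot:

```python
# How long a robot would need to lay out every legal Go position.
legal_positions = 10 ** 170
seconds_each = 20
seconds_per_year = 365.25 * 24 * 3600       # ~3.16e7

years = legal_positions * seconds_each / seconds_per_year
print(f"{years:.0e}")                       # ~6e+163 years, the ballpark above

universe_age = 13.8e9                       # years
print(f"{years / universe_age:.0e}")        # ~5e+153 times the universe's age
```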

This astounding number of board combinations is often cited as one of the reasons Go is a particularly fiendish game; while it is true that Go is very complex, we humans constantly engage in games with fantastical numbers of possibilities, from poker (10^6 possible hands) to chess (10^10^50 possible games). Even humble Connect Four has about 10^13 legal playing positions, and most 10-year-olds seem to manage just fine. We are certainly capable of dealing with exponential complexity – just not very good at thinking about it.
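As a parting sanity check, the poker figure is a single line of standard-library Python:

```python
# Distinct 5-card hands from a 52-card deck (math.comb requires Python 3.8+).
import math

print(math.comb(52, 5))   # 2598960, i.e. ~2.6 x 10^6
```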

—————————————————————————————————

The AlphaGo story was widely reported across the general press, and more extensively in science reporting.

Women ‘don’t understand’ fracking due to lack of education, industry chief claims

Or so ran the headline in The Independent newspaper today.

If you live in the US or the UK, it’s likely you have come across hydraulic fracturing, or ‘fracking’, in the media. This relatively new approach to extracting fossil fuels has generated plenty of controversy, both for its potential to restore fuel security and drive down prices, and for rising concerns about its environmental impact.

Now Professor Averil MacDonald, chair of science engagement at the University of Reading, has given a statement to The Times saying that women oppose fracking because they “don’t understand” the science and instead rely on their gut reaction.

Ah, where to begin… I have three main complaints about this statement: that it contravenes a responsibility of scientific engagement, that it harms dialogue, and that it focuses on the wrong point. I will deal with each in turn.

1. Duty to scientific engagement

Firstly, the idea of a professor of science engagement accusing women of effectively making the ‘wrong choice’ because they are uneducated is both short-sighted and a potential dereliction of the scientific duty to put evidence first.

Professor MacDonald’s opinions are clearly partisan – she has recently been appointed chair of UK Onshore Oil and Gas, an industry group – and she makes no pretence otherwise. That is not a bad thing; scientists, like other citizens, can take sides on public issues, and that is part of what makes a democracy vibrant. However, her responsibility as a chair of science engagement is to provide a balanced account that puts evidence first, as is the role of all scientific engagement with the public. Whether she states these opinions in her capacity as UKOOG chair is irrelevant – her academic responsibility extends to all public engagement activities, a point sorely missing in discussions of the recent Tim Hunt affair.

For someone who emphasises the role of ‘facts’ in determining the role of fracking in the UK, it is rather curious that any evidence on scientific education or decision making in women is conspicuously absent from her argument.

2. Harming dialogue

At a fundamental level, what could possibly be accomplished by making these public statements? If you are a woman who opposes fracking, you will likely feel patronised and be less likely to listen to arguments from the pro-fracking camp, given that instead of communicating evidence they are now resorting to unnecessarily gendered attacks. If you are a woman who supports fracking, you will likely also feel patronised, because the argument implies you have somehow subverted your automatic emotional response (you are, after all, “naturally protective of your children”, unlike men, according to MacDonald) and risen to accept the facts. And if you are a man, you are perhaps, like me, wondering what any of this has to do with fracking in the first place.

Fracking is a complex, multi-faceted issue that draws in considerations of economic development, energy security, fuel efficiency, environmental impact and disruption to rural communities, among others. It is certainly not an issue that can be reduced to “women don’t know better because they are poorly educated”. First, suggesting that someone cannot make an informed decision because of a lack of formal education is belittling and elitist, and dismisses the important democratic contribution of opinions from all sectors of society. Second, while women are under-represented in science and engineering, things are changing fast – many young women today have an excellent scientific education and are perfectly able to digest the complex evidence for and against fracking.

In short, these kinds of reductionist attacks will do more to alienate the very people both sides need to appeal to: people who do want to be convinced by evidence, but who feel pushed away by sweeping generalisations and unfair portrayals in the media.

3. Focusing on the wrong point

This may strike our readers as obvious, but talking about why one group or another is for or against an issue is not addressing the issue. If we are to have a constructive dialogue between those wishing to improve the financial and energy development of this country and those wishing to minimise the environmental impact and risks to human health, we need common ground, and that common ground is provided by evidence. Focusing on evidence allows both sides to present their cases in a rational framework and, most importantly, allows compromise to be reached over facts agreed to by both sides. Attacking the education or decision-making skills of your opponents most certainly does not fall into this category.

——————————————————————————————

The Times (paywall) published the original interview with Professor MacDonald, which has also been covered by The Independent, Telegraph, Guardian and others.

I had a recent conversation that went something like this:

“Hey, you know about brains. What about Boltzmann brains? Can they exist?”

If, like me, you are not versed in statistical physics, you might be as surprised as I was. Boltzmann brains? Are we talking about 19th century physicist Ludwig Boltzmann’s actual brain, or some crazy AI project?

As it turns out, neither. Boltzmann brains are a logical argument that speaks deeply to cosmology, stochastic systems and our intuitions about the universe – and very little to actual brains.

The argument goes like this: you have a universe around you governed by the laws of thermodynamics. Specifically, the second law stipulates that in a closed system, any physical process leads to an increase in entropy – in other words, disorder amongst the component atoms inside the system. The universe is a closed system (i.e. it is finite), and therefore its entropy tends to increase. The expected state of our universe is thus thermodynamic equilibrium – in other words, high entropy. However, the world that surrounds us is not like that: we observe order everywhere, from stars and galaxies to the presence of life, which indicates low entropy in the system. So we are left with a sticky situation: thermodynamics predicts high entropy, and we observe low entropy.

One solution would be to theorise that the universe is mostly in a state of high entropy, which occasionally fluctuates into a low-entropy state and gives rise to our reality, before being quickly ushered back into chaos. This is where Boltzmann comes in – while he saw the universe as a fundamentally statistical system, fluctuating between states of varying entropy, the idea of our perceptible reality being a statistical fluke appeared to him as nonsense.

The problem is, every additional step of complexity requires an increasingly rare statistical fluke of low entropy to give rise to it. A bit of dust clumping together is more likely than a planet forming; a star or a galaxy are increasingly improbable scenarios in our thermodynamically balanced universe. Taking this argument to the extreme, if you wish to create a scenario where a series of unlikely stochastic processes from a maximally entropic starting point leads to a sentient being, it would be vastly more probable that a) such sentience would be made up of the minimum number of components necessary, for example a lonesome brain without a body, and b) such sentience would exist for the smallest amount of time possible. It therefore follows that in such a universe, any sentient being is far more likely to be a single, floating brain immersed in a chaotic universe, flickering in and out of existence by sheer randomness, than the complex pastiche of order we see in our low-entropy universe.

Boltzmann brains are therefore a reductio ad absurdum of the argument that our observable universe is a statistical fluke from thermal equilibrium. More fundamentally, however, they also pose a paradox – the second law of thermodynamics holds in virtually every conceivable scenario covered by classical physics, and yet our universe is ordered and low-entropic.

In essence, Boltzmann brains can’t exist – they are in fact a tool in cosmological theory: if a theory predicts their presence, that is grounds for arguing the theory is flawed. And, as I promised, they have little to do with actual brains. But they do speak to some fundamental concepts about the way we think about the physical universe. Firstly, and probably most appealing to Ludwig Boltzmann, that entropy is fundamentally statistical, and our universe follows its rules. Secondly, that our current understanding of the universe is not reconcilable with fundamental thermodynamics, at least not without considering some more exotic theories about how the universe works – which is what physics is all about. And finally, they allow us to consider how unlikely sentience is, in the ‘high-entropy stochastic process’ sense; it is perhaps the most unlikely of all the observable things we see in the universe.

So cheers to you, you vanishingly unlikely statistical fluke atop a high-entropy universe.

——————————————————————————————

An awful lot has been written about Boltzmann brains, including a great series of articles for Discover Magazine, and in the New York Times. There’s even a video, if that’s more your thing. Sadly, I could not find any literature on Boltzmann’s actual brain.

I recently joined the chattering forces of Twitter. It was not a decision taken lightly, as I am well aware of the obsessive-compulsive nature of social media, but I was attracted by the excellent community of practising scientists sharing thoughts and research ideas, and just being a bit silly.

Having watched from the sidelines for a few months, I have noticed something quite interesting: a fruitful and lively dialogue between established scientists – you know, busy people – and interested non-specialists. This kind of interaction was unheard of scarcely a few decades ago.

It may sound like a small thing, and in part it is, but it’s also a sign of how the face of science communication is changing. Social media use is pervasive, democratic and increasingly prevalent in developing countries with little or no science journalism in the press.

And while a pessimist might have predicted that science communication would suffer with the advent of the likes of Twitter and Facebook – that we would no longer know what is a trustworthy source and what is not – we have instead seen the rise of direct communication. Scientists, junior and senior, talking to people. Listening to people. This is a fundamentally valuable shift in the avenues for information and, I may humbly argue, it improves our focus and resolve for making science relevant to the lives of people around us.

A nice example of this is the British Library competition #ShareMyThesis, in which graduate students convey the relevance of their labours in little 140-character nuggets – ranging from the amusing to the bluntly modest, the downright terrifying and, yes, the odd TV reference.

This is a great exercise in forcing scientists to think about the big picture while providing a colourful landscape of today’s research. And no small feat either, considering each entry condenses a 50,000-word behemoth into a pithy, succinct phrase – and still manages to amuse in the process.

——————————————————————————————

#ShareMyThesis is sponsored by the British Library and is full of amusing tales of research. Yours truly can now be followed on @scienceisnews for musings on neuroscience, journalism and science communication.

You’ve probably heard it: there is an ongoing Ebola virus epidemic in West Africa, which has so far claimed over 4,000 lives. News sites have gone into overdrive with the press coverage, both with extensive (and sometimes pernicious) speculation on the potential threat to the Western world, and with reporting on the gargantuan task faced by healthcare workers in West Africa.

Much discussion has been devoted to how dangerous the Ebola virus really is. Both the WHO and the CDC have made it abundantly clear that the Ebola virus does not pose a major health risk to developed nations with modern healthcare systems. But the threat to West Africa is very real, with the humanitarian NGO Médecins Sans Frontières declaring that unless an international effort is secured, the outbreak will worsen and expand for months to come.

Despite this, many people have tried to put the outbreak in context. In the last few days, this image has been making the rounds on social media:

[Image: “Africa’s biggest killers” – yearly deaths from various diseases across the whole of Africa]

As you can see, it purportedly shows that the burden of Ebola is much smaller than that of other diseases. The message carries an important implication: many of the ‘big killers’ are curable or manageable, unlike Ebola, which at present has no cure and for which only experimental therapies exist. It is an appealing notion for utilitarians who may wish to maximise their relief effort by donating to combat the more manageable, and deadlier, diseases.

Unfortunately the truth, as always, is in the data. The image above shows yearly deaths due to disease across the whole of Africa. Considering the Ebola virus outbreak is so far only seriously affecting three countries – Liberia, Sierra Leone and Guinea – this is hardly a fair comparison. So I have gone back and compared the disease burden of Ebola against the other ‘big killers in Africa’ in those three countries:

[Image: yearly deaths from Ebola, malaria, TB, HIV and nutritional deficiency in Liberia, Sierra Leone and Guinea]

Malaria, TB and HIV are still significant problems, but Ebola is not the paltry, minor killer shown in the previous graph. Instead, it hovers at around half the number of deaths of those diseases, and is equivalent to hunger. Now, ‘hunger’ is a difficult concept to categorise, and the WHO prefers deaths attributed to nutritional deficiency, so we have gone with that. Note also that the numbers for nutritional deficiency are somewhat outdated, from 2008, and big progress has been made against infant malnutrition since then, particularly in Sierra Leone.

The numbers for Ebola are also conservative, ignoring that the outbreak is still ongoing and likely to lead to more deaths before the year is out. More strikingly, we can also include the most common killer in all three countries, as categorised by the WHO:

[Image: the same comparison, adding diarrhoeal disease – the most common killer in all three countries]

Diarrhoea, particularly in children under the age of 1, is by far the greatest killer in these three West African nations. The original image is thus an excellent example of cherry-picking your data: it excludes major killers such as diarrhoea, and it collapses the data across the whole African continent instead of the countries relevant to this outbreak.

It is difficult to compare the real, day-to-day impact of such diseases on the ground. Unlike malaria, TB or even HIV, the Ebola epidemic has had a devastating human, social and economic impact in West Africa. While other diseases may cause a greater number of deaths, and should be tackled as vigorously as Ebola, we must not obfuscate the point with bad graphics. The best kind of healthcare response is an informed one, and for that we need responsible, clear and objective data.

——————————————————————————————

Rates of Ebola virus infection and death in the current outbreak are published by the WHO and frequently updated on the relevant Wikipedia page. The WHO also keeps excellent, readable public statistics on health worldwide, including their 2012 report on the burden of disease, with country-by-country statistics.

For those wanting a more in-depth look at the nuts and bolts of Ebola, its transmission and mechanisms of infection, I highly recommend this episode of the podcast This Week In Virology.

Science and journalism have an uneasy relationship in the United States. For a country with by far the largest scientific output in the world, it seems rather shy about discussing it in its popular media.

Now, I do not wish to suggest the US lacks science communication. Much to the contrary: it has produced some outstanding science communicators, including Carl Sagan, Steven Pinker and Michio Kaku, and some great citizen science initiatives such as Science At Home and Zooniverse. Instead, the issue seems specific to the mass market of mainstream media.

As a simple experiment, I looked at the 10 most popular, non-aggregator news websites based in the United States to see which ones have a science news section. Here is the breakdown:

[Table: the ten most popular US news websites and whether each has a dedicated science section]

Six out of ten top websites choose to have a ‘tech’ section rather than science, and while the content is often similar, it is interesting to note how the same story can be labelled as either science or technology. Indeed, the word ‘science’ has attracted a certain negative connotation in US popular media – for example, the UK animated film The Pirates! In an Adventure with Scientists! was subsequently released in the US as The Pirates! Band of Misfits. We may speculate that the original title was deemed unmarketable for the US consumer, but it highlights the general point that science, or at least things that appear superficially related to science, are not seen as suitable for the mass market.

That is not to say there are no science programmes in the mainstream – the popularity of Cosmos: A Spacetime Odyssey, a science documentary narrated by Neil deGrasse Tyson, is testament to the contrary. But an interesting example is the hit Discovery Channel show Mythbusters, in which popular myths are subjected to rigorous tests to determine their validity. The format of Mythbusters is a fantastic showcase of the scientific method – confirmation through experimentation – but at no point in the show is it labelled ‘science’.

There is clearly an appetite for popular science in the US, and with increasing levels of penetration in media and schools, we are likely to see positive growth. Despite this, a reticence about popularising science in mainstream media, at least in name, remains. Whether this trend is deep-rooted or simply an orthodox response to fears of a conservative backlash against the label of ‘science’ remains to be seen, but continuous improvements in education and engagement suggest we are likely to see a lot more science, under that name or another, in US popular media.

—————————————————————————————————

Top 10 most popular non-aggregator news sites based on Alexa ratings on unique page views originating in the United States over the past 12 months.

Is progress inevitable or are we doomed as a species by our own folly? It seems like a fairly fundamental question about our own condition, and it has certainly entertained many great minds and spawned thousands of works in literature and the arts – but the extinction or survival of civilisation enters the realm of science in the latest episode of The Infinite Monkey Cage on BBC Radio 4, a chat show mixing comedy and popular science.

Host Brian Cox asks a very simple question: can science save us? That is perhaps a tricky formulation, since we must first understand what we mean by science. Cox equates science with knowledge, a conception I would disagree with vehemently. While this may be accurate etymologically, the modern conception of science is the application of the scientific method to understand nature – with the resulting understanding forming part of the much wider umbrella of knowledge. The key distinction is that science is a way of doing things rather than the end result, which is what permits us to falsify previously held ideas (i.e. knowledge) about how the world works.

If we step back, then, and assume we are speaking about knowledge derived from science, we can follow the line of the radio programme and ask: is knowledge always a good thing?

It is not in the nature of knowledge to be a necessarily positive thing – it is a truism to say ‘ignorance is bliss’, after all. For example, if you were very far away from your family and you learnt that one of your loved ones had passed away, you might feel distressed. But if such news never arrived, you would carry on in your merry way, despite the reality being that that person is well and truly dead.

But the impact of knowledge is completely dependent on interpretation. For example, if you learn that our Earth, this planet, is going to be consumed by the sun and disappear in a few billion years, you might lose sleep over that. But if you also learnt that those few billion years are an astronomically long time, that humans will either be gone or have long since evolved into some inconceivable creature, that we will have little to no emotional attachment to whatever species plods the Earth at that time, and that the Earth itself would be unrecognisable to our eyes, then you would perhaps feel less upset about the whole affair.

While knowledge may shape our thinking and acting, in the strict sense neither science (a method) nor knowledge (ideas) is of any practical use to us by itself; what matters is their distillation into technological advances, or behavioural and societal changes, that positively impact our ability to survive and live comfortable lives on this planet.

While it may seem pedantic, understanding this triad – science as the approach, knowledge as the vehicle and change as the product – is important to how we conceive of the world, how we tackle the issues of today and, as Dr Lucie Green unremittingly points out, how we fund the science of tomorrow.

—————————————————————————————————

Episode 4, Series 10 of The Infinite Monkey Cage aired on BBC Radio 4 and is available as a podcast with extra material. The hosts were Brian Cox and Robin Ince, and the guests were Stephen Fry, Prof Tony Ryan and Dr Lucie Green.

A friend recently asked me this question:

“Why don’t they sell scientific journals in shops? That way, people could find out what science is going on.”

The idea seemed quite bizarre to my close-minded scientific brain, and about a dozen potential replies sprang to mind: “most people wouldn’t understand it”, “academic papers are not written for mass consumption”, “we don’t need to, we have science journalism in mainstream media” and “there would just be too much stuff”.

In the end, I settled for “it’s all digital now”, which is true for the vast majority of scientific publishing – papers are increasingly published, purchased and consumed in digital format, and actual physical print is a rare sight in most academic departments.

But this got me thinking: why not? People have the right to buy scientific journals, particularly if they live in a country where research programmes are heavily funded by public money – in fact, some people argue that they shouldn’t have to pay at all. But could we do this with old-fashioned print versions of journals? Could we put them in a shop? Just how much space would it take?

To find this out, we need to be a bit ingenious with numbers. According to the STM 2012 report on scholarly publishing, there were 1.8 million peer-reviewed papers published in 2012. Using growth rates estimated from a couple of sources, that number should hit around 2 million in 2014. If we presume our humble local magazine shop stocks these journals in the normal fashion, with a new issue every week, we are dealing with 38,462 articles a week.

How much paper is that? A peer-reviewed article typically runs between 3,000 and 10,000 words, and following a completely unscientific survey of a few representative journals I had lying around (Nature, Science, PNAS), the going rate seems to be 800-1,000 words per page, pictures and all. This works out at an average of 7.22 pages per article, as a very rough guess. Multiplying that by the number of weekly articles our shop would need to stock, we get 277,781 pages; adding covers, editorials and advertisements, we can reasonably round that up to 300,000 weekly pages being churned out.

Now the big question is whether our imaginary shop could actually hold that much content. With a typical magazine format of 120 pages at 12 × 28 cm, we are dealing with 2,500 unique weekly publications. Of course, we want to stock more than one copy of each, so 50,000 issues is not unreasonable. How much space would that take? At 8.50 m³, it would just about fit into a large transit van. And if you actually want to be able to rifle through the contents, our shop would need considerable floor space: displaying the journals face-on, as they typically are in book stores, would require about 500 m of shelf space.
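For anyone who wants to fiddle with the assumptions, here is the whole back-of-the-envelope calculation as a short Python script. The copies per title, issue thickness and shelf width per title are my own assumptions, chosen to reproduce the figures above:

```python
# Shop-sized scholarly publishing, step by step.
papers_per_year = 2_000_000                  # projected output for 2014
articles_per_week = papers_per_year / 52
print(round(articles_per_week))              # 38462

pages_per_article = 7.22                     # rough average from the text
print(round(articles_per_week * pages_per_article))  # ~277,700 pages

weekly_pages = 300_000                       # rounded up for covers and adverts
pages_per_issue = 120                        # typical magazine format
titles = weekly_pages / pages_per_issue
print(round(titles))                         # 2500 unique publications

copies_per_title = 20                        # assumption: 20 copies of each
issues = titles * copies_per_title           # 50,000 issues on the shelves

issue_volume = 0.12 * 0.28 * 0.005           # 12 x 28 cm, ~5 mm thick (assumed)
print(f"{issues * issue_volume:.1f} m^3")    # ~8.4 m^3 - about one transit van

shelf_per_title = 0.2                        # face-on display, with spacing (assumed)
print(f"{titles * shelf_per_title:.0f} m of shelving")   # 500 m
```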

Assuming absolutely no backlog, and definitely no stocking of the previous week’s edition, a very large, multi-storey shop could just about manage that. And it probably wouldn’t have room for National Geographic.

—————————————————————————————————

For numbers of scholarly articles published, this article by Larsen & von Ins (2010), as well as reports by STM and the Royal Society, were particularly helpful.

Oh boy, here we go again.

Today’s story comes from PBS, with the eye-grabbing title:

Heavy marijuana use may cause poor memory and abnormal brain structure, study says

Cue college students and worried mothers clicking away. A study from Northwestern University looked at the long-term effects of cannabis in teenagers, comparing the size and shape of subcortical brain structures in users versus non-users. Interestingly, schizophrenic subjects who used cannabis showed more exaggerated differences, suggesting a possible biological link between cannabis use and the condition.

But tellingly, the study was not reported quite like that. Indeed, the PBS report focuses on the effects of cannabis on healthy teenagers, and its title was originally more strongly worded:

Correction: The title of this post was corrected to indicate that researchers have not concluded a direct link between heavy marijuana use and abnormal brain structure or poor memory, but to reflect that the study shows a possible association between the two.

And this is exactly the issue we see with these kinds of reports, and what I would like to discuss. You could argue there is a fundamental incompatibility between piecemeal research and the journalistic demand for headlines. To make good news, a science story must be gripping – it must tell us something new that is interesting or relevant to our lives. It must have a beginning, a middle and an end. Good storytelling is what a good journalist does.

The problem is that this leads to de-contextualisation, with studies looked at in isolation, generating the impression that each is a single morsel of the ‘truth’. This is why we feel flabbergasted when yesterday’s Guardian tells us coffee causes cancer and today’s CNN report says it cures cancer. In reality, research studies form continuous lines of evidence that must be interpreted together, approaching something closer to the truth.

Let us examine the specific case of cannabis use reported here. While this report may lead you to think that THC will destroy your brain, there is evidence that cannabis use is not nearly as harmful as heavy consumption of other psychoactive substances such as alcohol. Of course, cannabis use remains an unsafe recreational activity, solidly linked to cardiovascular, respiratory and mental disorders (here, here and here). It’s all a matter of relative risk.

Reading a scientific paper and reading a newspaper article are two different things. We cannot expect our audience to know the full background to a story, or the current consensus on a given topic. But journalism can guide us towards learning more – a simple statement of caveats leaves many readers thinking “well, hang on, why?”.

Science journalism can be a powerful window into the vastness of science, and encourage people to explore, learn and wonder. So let us encourage this, show how exciting a finding or theory is, and leave the door open. If enough people are left in suspense, wondering why, they will go and find out – and that is a good thing.

—————————————————————————————————

Press release from Northwestern University and original paper published in the journal Schizophrenia Bulletin.