Archives for category: Uncategorized

Ah, it’s spring; the flowers are out, the sunshine pokes through the clouds and the smell of democracy is in the air. No, I am not talking about the Ghanaian or Philippine general elections, which are indeed taking place this year, or the whopping 8 elections and 1 referendum taking place in the United Kingdom. I am of course talking of star-spangled democracy with a capital D – the 58th United States presidential election.

As discussion of the presidential candidates heats up in the media, the casual reader might be forgiven for thinking this campaign has been going on for years. In fact, much has been written about the perceived length and strain of the presidential campaign, with many commentators labelling it as ‘absurd’.

This piqued my interest and made me wonder if the US election race is truly an interminable soporific, or whether it could be attributable to perception alone. After all, the US is a world leader in most international matters, and the election of the president is the high point of the political calendar, so extensive news coverage is to be expected.

To find out, we will need to compare the length of the US election to equivalent elections in other countries. I have chosen to do this with the other 33 OECD nations, the rich-world club of industrialised economies that are most similar to the US in terms of economic prosperity and demographic makeup.

Now, defining an election campaign is a tricky business. Some countries, like Canada and Australia, limit the duration of a political campaign by law, but most don’t. Others, like Slovakia, limit the funds available to each party for campaigning, while others have free-for-alls. Ultimately, we are interested in the number of days viable candidates actually spend campaigning in a major race in a given OECD country, be it a general, presidential or parliamentary election.

We will therefore have to make some rules. When no legal limit is imposed, we will take the number of days between the first announcement of a major candidacy (so no Monster Raving Loony Party) and voting day. This works well for most countries except for the Scandinavian nations, where candidates typically don’t launch their campaign individually, but rather their party does. In such cases, we will take the last date for valid submission of candidacy as the starting point, and voting day as the end. In all cases, when legal limits are not present, we will use the last major election as indicative.

The United States is, unsurprisingly, an oddball case here. We can divide the US presidential campaign into two halves. The time between a major candidate announcing their intention to run for office and the party’s national convention can be considered the first leg of the race. At the convention, each party selects its nominee, and the second leg runs from nomination to election day. A few other countries conduct this kind of out-of-season political campaigning, most notably Italy, so we have included those as well.

For the 2016 presidential race, Hillary Clinton was the first major candidate to declare, announcing her candidacy on 12th April 2015 ahead of the Democratic convention in late July 2016. The race then ran on until Election Day on the 8th November. How does that compare to the other OECD nations?
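The headline figure below can be checked directly from those two dates; a quick sketch in Python (dates as given above) confirms the arithmetic:

```python
from datetime import date

# Campaign span used in the text: Clinton's announcement to Election Day.
announcement = date(2015, 4, 12)   # first major candidacy announced
election_day = date(2016, 11, 8)   # US Election Day 2016

campaign_days = (election_day - announcement).days
print(campaign_days)  # 576
```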

By a wide margin, the United States remains the undisputed leader of absurdly long election races. South Korea and Italy are notable outliers, both countries that engage in similar pre-campaign campaigning; but at a mere 241 and 190 campaign days in their last electoral cycles, they are dwarfed by the gigantic 576 days of US presidential campaigning. At the other end of the scale, Japan imposes a restrictive maximum of 12 days of political campaigning, with heavy fines for any infractions inching over that line.

Regulating campaign time is a comparatively crude method by which countries limit political opinion making. Regulating party financing is arguably a much more effective way of doing this, and much has been written about the successes and failures of financial approaches. Alas, the US election remains, as of now, a wide-open field for the financial backing of political campaigns. And while that continues to be the case, we are likely to keep enjoying the marathon races that make up the US presidential election.


While researching this article, I discovered the curious practice of electoral silence, where countries impose a complete ban on political advertising or campaigning on the day of an election, or the day immediately before it. These range from a reasonable 24-hour ban in Bulgaria, to the ludicrous 15-day ban on polling in Italy. In Spain, the day preceding an election is delightfully called “reflection day”. Perhaps we should all reflect on that.

Talking to voters about LGBT issues changes bias

Ran one of the many headlines this week. “Hang on”, I thought, “this seems awfully familiar…”

If it feels familiar, it’s because we have heard this story before. Specifically, in late 2014, when Michael LaCour and Donald Green published a flashy article in Science claiming a short conversation with a door-to-door gay canvasser was enough to change people’s opinion towards gay marriage, and to maintain that opinion nine months later. The study received wide coverage, both for its large implications for political campaigning and the reduction of discriminatory bias, and for the feel-good factor of the power of human interaction. “At last”, exclaimed the campaigners, “here is science proving human connection can break down prejudice”.

Alas, it was not meant to be. In May the following year, Broockman and Kalla, two graduate students at the University of California, Berkeley, published a comprehensive critique of LaCour’s paper. A thorough statistical analysis of the original data, as well as follow-up with the survey companies used for the study, revealed several irregularities that were hard to explain without accepting that the data had been falsified. Shortly afterwards, the original paper was retracted.

So far the story is unusual, but not surprising. Cheating in science happens on occasion, particularly in glamorous, high-impact journals like Science. The story was a disappointment for LGBT campaigners, but a small victory for open science and the public availability of data. And that should have been the end of the story, except for those two graduate students back in Berkeley. Broockman and Kalla were deeply interested in the LaCour paper in the first place (they did write a 27-page analysis of it, after all) not because they set out to debunk it, but because they were planning their own follow-up study on canvassing techniques and opinion shifting.

That follow-up study is now published, and much to everyone’s surprise it seems to broadly agree with the trend in the (allegedly fraudulent) original paper. This new study shows that a 10-minute conversation with a canvasser can change people’s opinion about transgender issues in a similarly long-lasting way. But importantly, and here is where the study differs from LaCour’s interpretation, this effect is seen even if the canvassers are not transgender themselves, changing the focus of why the effect occurs. The original study was all about humanising an issue: by exposing voters to real, flesh-and-bone gay activists, their opinion could be shifted by linking the issue to the person at their door, or so the logic ran. With their latest findings, Broockman suggests instead that the success of canvassing relies on a particular perspective-taking technique, not on exposing prejudiced individuals to a gay or transgender canvasser in order to humanise them.

Many of the positive design features in the new study came about from the LaCour debacle. Recruitment techniques, canvassing methods and the perspective-taking approach were all informed by the ensuing debate, fine-tuned over the course of the fallout, and it could be argued that this new study wouldn’t have been as successful without it.

This story is therefore a tale of good science. By making the data for LaCour’s original study public, the irregularities were exposed and a better study came out of it. We found out the original interpretation was likely inaccurate, and now have a better idea of what works when changing people’s opinions (whether the interpretation of the original data has any validity in light of the alleged fraud remains an open question).

Most tellingly, Broockman and his colleagues remain committed to open science, having seen the benefit of inspecting other scientists’ data. They have published all of the data and code for their study, in case any amateur sleuth wishes to take a magnifying glass and scrutinise their findings – and, as we have learned, that can be a very good thing indeed.


Broockman & Kalla (2016) was published in Science, along with accompanying commentary. Some great coverage of the story has featured in NPR, FiveThirtyEight, and Wired.

On Wednesday the news broke that an artificial intelligence has finally cracked one of the most complex board games out there, Go. The announcement and subsequent paper from Google’s DeepMind showcase their brainchild, nicknamed AlphaGo. This computer program is not only capable of playing Go, but also defeated the European Go champion, Fan Hui, back in October 2015. It is currently scheduled for a March match against Lee Sedol, arguably the reigning world champion of the game.

If this story feels familiar, it’s because it is – IBM’s Deep Blue famously beat chess grandmaster Garry Kasparov back in 1997, launching artificial intelligence (AI) into the public consciousness and starting a great pursuit of computer programs that could beat humans at every conceivable game of skill. The team at DeepMind announced early last year that they had developed another AI which could beat humans at a whole swathe of classic arcade video games, from Pong to Space Invaders.

The game of Go itself has a deep appeal to the mathematically minded, from baseball-like in-depth player statistics to convoluted mathematical models for player rankings, so it is no surprise that it would appeal as a challenge for designers of artificial intelligence. But what caught my attention amongst the media flurry was this statement by Demis Hassabis, CEO of Google DeepMind, in an interview with Nature:

“Go is a very ancient game, it is probably the most complex game humans play, it has more configurations on the board than there are atoms in the universe […]”.

These kinds of statements are often hard to wrap your head around. From intuition, it does not make any sense: I can sit at my desk with a standard 19×19 Go board and the 361 stones it takes to fill it, and very laboriously make every possible combination on the board. This way I haven’t used up all the atoms in the universe, only those for a single set of Go that I have in front of me.

Dealing with very large numbers such as the number of atoms in the universe is notoriously difficult to do intuitively. Let us begin by looking at the actual numbers. What is the number of possible configurations on a Go board? For the standard 19×19 board, there are 361 positions where the two players can place their stones. In Go, any given position can be either empty, a black stone or a white stone, meaning any of the 361 positions can be in one of 3 states. Thus, the number of possible board configurations is 3^361, or to put it in more standard notation, roughly 1×10^172. Of those, only about 1×10^170 are legal positions, in other words positions that do not violate the rules of Go. Surprisingly, the exact number of legal positions has been calculated, and it is a 171-digit number: a rather large number indeed, I am sure you will agree.

Now, onto the universe. The usual number bandied about is 1×10^82 atoms in the universe. In truth, any such estimate is made with some very strong assumptions. The typical approach an astrophysicist might take when thinking about this problem is to start with the number of stars in the universe. Computer simulations put that number at around 1×10^23. Next we need to know how much stuff is in a star: based on the universe observable from Earth, each star weighs an average of 1×10^35 grams. Next, and perhaps the most precise estimate in this equation, each gram of matter contains about 1×10^24 protons. Finally, we can put all those numbers together:

10^23 × 10^35 × 10^24 = 10^82

Planets and other planetary bodies don’t make it into the calculation, since stars are substantially more massive. So ultimately it is a very rough, back-of-the-envelope estimate, which is probably in the correct region, give or take a few orders of magnitude.
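Because each factor in the estimate is a power of ten, the whole calculation reduces to adding exponents; a tiny Python check, using the values above:

```python
stars = 10 ** 23             # simulated number of stars in the universe
grams_per_star = 10 ** 35    # average stellar mass in grams
protons_per_gram = 10 ** 24  # protons (roughly, atoms) per gram of matter

atoms = stars * grams_per_star * protons_per_gram
print(atoms == 10 ** 82)  # True: the exponents simply add, 23 + 35 + 24 = 82
```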

Going back to the number of Go board positions, we can see that there is a massive difference between the two estimates. How can this be? The first way to conceptualise this is to think not of using a single Go board to make every combination, but of having many Go boards sitting side by side, each with a different combination of stones laid on top. As we established before, we would need 1×10^172 individual boards to make every possible combination. How much room would that take? Laying them side by side in single file, the line of Go boards would measure 4×10^168 km. For comparison, the diameter of the observable universe is about 9×10^23 km.
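Both figures are easy to reproduce with Python's arbitrary-precision integers. The board width of 42 cm is an assumption chosen here to match the 4×10^168 km figure, starting from the rounded count of 10^172 boards:

```python
boards = 10 ** 172       # rounded count of board configurations
board_width_mm = 420     # assumed width of a 19x19 board: 42 cm

line_mm = boards * board_width_mm
line_km = line_mm // 10 ** 6             # mm -> km
print(line_km == 42 * 10 ** 167)         # True: the line is 4.2 x 10^168 km

# And the exact configuration count really is of order 10^172:
print(len(str(3 ** 361)))                # 173 digits, i.e. ~1.7 x 10^172
```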

Another helpful way to think about the problem is to consider how long it would take a person to set a single board to display every possible combination of stones. Let us assume a reasonably competent person can shift the stones to any new configuration in 20 seconds. Even better, let us assume we have a robot that can do the same, with the added advantage that it does not eat, sleep, or make mistakes. Starting with a blank board, how long would it take to work through every possible board combination? Using the values above, it would take roughly 1×10^163 years.

That is not only a very long time; it is many times longer than the age of the universe, which is a mere 13.8 billion (1.38×10^10) years.

This astounding number of board combinations is often cited as one of the reasons Go is a particularly fiendish game; while it is true that Go is very complex, we as humans constantly engage in games that have fantastical numbers of possibilities, from poker (10^6 possible hands) to chess (10^(10^50) possible games). Even humble Connect Four has about 10^13 legal playing positions, and most 10-year-olds seem to manage just fine. We are certainly capable of dealing with exponential complexity, just not very good at thinking about it.
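Two of those figures are easy to sanity-check; the Connect Four value below is the widely cited enumeration of its legal positions, quoted here as an assumption:

```python
from math import comb

# Five-card poker hands: "52 choose 5"
hands = comb(52, 5)
print(hands)  # 2598960, i.e. ~2.6 x 10^6

# Widely cited count of legal Connect Four positions (~4.5 x 10^12,
# which rounds up to the 10^13 quoted above)
connect_four_positions = 4_531_985_219_092
print(len(str(connect_four_positions)))  # 13 digits
```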


The AlphaGo story was widely reported across the general press, and more extensively in science reporting.

Women ‘don’t understand’ fracking due to lack of education, industry chief claims

Or so ran the headline in The Independent newspaper today.

If you live in the US or the UK, it’s likely you have come across hydraulic fracturing, or ‘fracking’, in the media. This relatively new approach to extracting fossil fuels has generated plenty of controversy, both for its potential to restore fuel security and drive down prices, and for rising concerns regarding its environmental impact.

Now Professor Averil MacDonald, chair of science engagement at the University of Reading, has given a statement to The Times saying that women disagree with fracking because they “don’t understand” the science and instead rely on their gut reaction.

Ah, where to begin… I have three main complaints about this statement: that it contravenes a responsibility of scientific engagement, that it harms dialogue, and that it focuses on the wrong point. I will deal with these in turn.

1. Duty to scientific engagement

Firstly, the idea of a professor of science engagement accusing women of effectively making the ‘wrong choice’ because they are uneducated is both short-sighted and potentially derelict of a scientific duty to put evidence first.

Professor MacDonald’s opinions are clearly partisan – she has recently been appointed chair of UK Onshore Oil and Gas, an industry group – and she makes no pretence otherwise. That is not a bad thing; scientists, like other citizens, can take sides on public issues, and it’s part of what makes a democracy vibrant. However, her responsibility as chair of scientific engagement is to provide a balanced account that puts evidence first, as is the role of all scientific engagement with the public. Whether she states these opinions in her capacity as UKOOG chair is irrelevant – her academic responsibility extends to all public engagement activities, a point sorely missing in discussions of the recent Tim Hunt affair.

For someone who emphasises the role of ‘facts’ in determining the role of fracking in the UK, it is rather curious that any evidence on scientific education or decision making in women is conspicuously absent from her argument.

2. Harming dialogue

At a fundamental level, what could possibly be accomplished by making these public statements? If you are a woman who disagrees with fracking, you will likely feel patronised and potentially be less likely to listen to arguments from the pro-fracking camp, given that instead of communicating evidence they are now resorting to unnecessarily gendered attacks. If you are a woman who agrees with fracking, you will likely also feel patronised, because the argument made is that you have somehow subverted your automatic emotional response (you are, after all, “naturally protective of your children” unlike men, according to MacDonald) and have risen to accept facts. And if you are a man, you are perhaps, like me, wondering what all of this has to do with fracking in the first place.

Fracking is a complex, multi-faceted issue that draws in considerations of economic development, energy security, fuel efficiency, environmental impact and disruption of rural communities, among others. It is certainly not an issue that can be reduced to “women don’t know better because they are poorly educated”. First, suggesting that someone cannot make an informed decision because of a lack of formal education is belittling and elitist, and dismisses the important democratic contribution of opinions from all sectors of society. Second, while women are under-represented in science and engineering, things are changing fast – many young women today will have an excellent scientific education and will be able to digest the complex evidence for and against fracking.

In short, these kinds of reductionist attacks will do more to alienate the kind of people that both sides need to appeal to: people who do want to be convinced by evidence, but who feel pushed away by sweeping generalisations and unfair portrayals in the media.

3. Focusing on the wrong point

This may strike our readers as obvious, but talking about why one group or another is for or against an issue is not addressing the issue. If we are to have a constructive dialogue between those wishing to improve the financial and energy development of this country, and those wishing to minimise the environmental impact and risks to human health, we need a common ground, and that common ground is provided by evidence. Focusing on evidence allows both sides to present their cases in a rational framework and, most importantly, allows compromise to be reached over facts that are agreed by both sides. Attacking the education or decision-making skills of your opponent most certainly does not fall under this category.


The Times (paywall) published the original interview with Professor MacDonald, which has also been covered by The Independent, Telegraph, Guardian and others.

I had a recent conversation that went something like this:

“Hey, you know about brains. What about Boltzmann brains? Can they exist?”

If, like me, you are not versed in statistical physics, you might be as surprised as I was. Boltzmann brains? Are we talking about 19th century physicist Ludwig Boltzmann’s actual brain, or some crazy AI project?

As it turns out, neither. Boltzmann brains are a logical argument that speaks deeply to cosmology, stochastic systems and our intuitions about the universe – and very little to actual brains.

The argument goes like this: you have a universe around you governed by the laws of thermodynamics. Specifically, the second law stipulates that in a closed system, any physical process leads to an increase in entropy; in other words, to disorder amongst the component atoms inside the system. The universe is a closed system (i.e. it is finite) and therefore tends to increase in entropy. The expected state for our universe is therefore thermodynamic equilibrium, in other words, high entropy. However, the world that surrounds us is not like that: we observe order everywhere, from stars and galaxies to the presence of life, which indicates low entropy in the system. So we are left with a sticky situation: thermodynamics predicts high entropy, and we observe low entropy.

One solution would be to theorise the universe as mostly being in a state of high entropy, which occasionally fluctuates into a low entropy state and gives rise to our reality, before being quickly ushered back into chaos. This is where Boltzmann comes in: while he saw the universe as a fundamentally statistical system, fluctuating between states of varying entropy, the idea of our perceptible reality being a statistical fluke appeared to him as nonsense.

The problem is, every increasing step of complexity requires an increasingly rare statistical fluke of low entropy to give rise to it. So a bit of dust clumping together is more likely than a planet forming. Similarly, a star or a galaxy are increasingly improbable scenarios in our thermodynamically balanced universe. Taking this argument to the extreme, if you wish to create a scenario where a series of unlikely stochastic processes from a maximally entropic starting point leads to a sentient being, it would be vastly more probable that a) such a sentience would be made up of the minimum number of components necessary, for example a lonesome brain without a body, and b) such a sentience would exist for the smallest amount of time possible. It therefore follows that in such a universe any sentient being is far more likely to be a single, floating brain immersed in a chaotic universe, flickering in and out of existence by sheer randomness, than the complex pastiche of order we see in our low entropy universe.
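The "increasingly rare fluke" intuition can be illustrated with a toy model: if an ordered structure requires n independent components to line up by chance, its probability falls off exponentially in n, so the smallest structure that does the job is overwhelmingly favoured over a whole ordered universe. A sketch of that scaling, with made-up component counts purely for illustration:

```python
# Toy model: the probability that n independent two-state components all
# happen to land in the "ordered" state at once is 2^-n.
def fluke_probability(n_components: int) -> float:
    return 0.5 ** n_components

# A "minimal brain" with 100 components vs a "whole ordered universe"
# with 1000: the minimal structure is astronomically more probable.
ratio = fluke_probability(100) / fluke_probability(1000)
print(ratio > 10 ** 200)  # True: the ratio is 2^900, roughly 8 x 10^270
```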

Boltzmann brains are, therefore, a reductio ad absurdum of the argument that our observable universe is a statistical fluke from thermal equilibrium. More fundamentally, however, they are also a paradox: the second law of thermodynamics holds in virtually every conceivable scenario covered by classical physics, and yet our universe is ordered and low in entropy.

In essence, Boltzmann brains can’t exist; they are in fact a tool in cosmological theory for arguing that a theory is flawed if it predicts their presence. And as I promised, they have little to do with actual brains. But they do speak to some fundamental concepts about the way we think about the physical universe. Firstly, and probably most appealing to Ludwig Boltzmann, that entropy is fundamentally statistical and our universe follows its rules. Secondly, that our current understanding of the universe is not reconcilable with fundamental thermodynamics, at least not without considering some more exotic theories about how the universe works, which is what physics is all about. And finally, it allows us to consider how unlikely sentience is, in the ‘high-entropy stochastic process’ sense; it is perhaps the most unlikely of all the observable things we see in the universe.

So cheers to you, you vanishingly unlikely statistical fluke atop a high-entropy universe.


An awful lot has been written about Boltzmann brains, including a great series of articles for Discover Magazine, and in the New York Times. There’s even a video, if that’s more your thing. Sadly, I could not find any literature on Boltzmann’s actual brain.

I recently joined the chattering forces of Twitter. Not a decision taken lightly, as I am well aware of the obsessive-compulsive nature of social media. But I was attracted by the excellent community of practicing scientists sharing thoughts, research ideas and just being a bit silly.

Having watched from the sidelines for a few months now, I have noticed something quite interesting. There is a fruitful and lively dialogue between established scientists – you know, busy people – and interested non-specialists. This kind of interaction was unheard of scarcely a few decades ago.

It may sound like a small thing, and in part it is, but it’s also a sign of how the face of science communication is changing. Social media use is pervasive, democratic and increasingly prevalent in developing countries with little or no science journalism in the press.

And while a pessimist might have predicted that science communication would suffer with the advent of the likes of Twitter and Facebook, that we would no longer know what is a trustworthy source and what is not, we’ve instead seen the rise of direct communication. Scientists, junior and senior, talking to people. Listening to people. This is a fundamentally valuable shift in the avenues for information, and, I may humbly argue, improves our focus and resolve for making science relevant to the lives of people around us.

A nice example of this is the British Library competition #ShareMyThesis. Graduate students convey the relevance of their labours in little 140-character nuggets that range from the amusing:

To the bluntly modest:

To the downright terrifying:

Oh yeah, and TV references too:

This is a great example of forcing scientists to think about the big picture while providing a colourful landscape of the research of today. And no small feat either, considering it condenses a 50,000-word behemoth into a pithy and succinct phrase – and still manages to amuse in the process.


#ShareMyThesis is sponsored by the British Library and is full of amusing tales of research. Yours truly can now be followed on @scienceisnews for musings on neuroscience, journalism and science communication.

You’ve probably heard it. There’s an ongoing Ebola virus epidemic in West Africa, which so far has claimed over 4,000 lives. Various news sites have been going into overdrive with the press coverage, both with extensive, pernicious coverage and speculation on the potential threat to the Western world, and on the gargantuan task faced by healthcare workers in West Africa.

Much discussion has been devoted to how dangerous Ebola virus really is. Both the WHO and CDC have made it abundantly clear that the Ebola virus does not pose a major health risk to developed nations with modern healthcare systems. But the threat to West Africa is very real, with the humanitarian NGO Médecins Sans Frontières declaring that unless an international effort is secured, the outbreak will worsen and expand for months to come.

Despite this, many people have tried to put the outbreak in context. In the last few days, this image has been making the rounds on social media:


As you can see, it purportedly shows that the burden of Ebola is much smaller when compared to other diseases. This message has an important implication: many of the ‘big killers’ are curable or can be managed, unlike Ebola virus, which at present has no cure and for which only experimental therapies exist. It is an appealing notion to utilitarians who may wish to maximise their relief effort through donations to combat the more manageable, and deadlier, diseases.

Unfortunately, the truth, as always, is in the data. The image above shows yearly deaths due to disease across the whole of Africa. Considering the Ebola virus outbreak is so far only seriously affecting three countries, Liberia, Sierra Leone and Guinea, this is hardly a fair comparison. So I’ve gone back to compare the disease burden of Ebola against the other ‘big killers in Africa’ in those three countries:


Malaria, TB and HIV are still significant problems, but Ebola is not the paltry, minor killer shown in the previous graph. Instead, it hovers at around half the number of deaths of those diseases, and is equivalent to hunger. Now, ‘hunger’ is a difficult concept to categorise, and the WHO prefers deaths attributed to nutritional deficiency, so we have gone with that. Also note the numbers for nutritional deficiency are somewhat outdated, from 2008, and big progress has been made against infant malnutrition since then, particularly in Sierra Leone.

The numbers for Ebola are also conservative and ignore that the outbreak is currently ongoing and likely to lead to more deaths before the year is out. More impressively, we can also include the most common killer in all three countries, as categorised by the WHO:


Diarrhoea, particularly in children under the age of 1, is by far the greatest killer in these three West African nations. So the original image is an excellent example of cherry-picking your data, by excluding perinatal complications such as diarrhoea and collapsing data across the whole African continent, instead of the relevant countries for this outbreak.

It is difficult to compare the real, day-to-day impact of such diseases on the ground. Unlike malaria, TB or even HIV, the Ebola epidemic has had a devastating human, social and economic impact in West Africa. While other diseases may cause a greater number of deaths, and should be tackled as vigorously as Ebola, we must not obfuscate the point with bad graphics. The best kind of healthcare response is an informed one, and for that we need responsible, clear and objective data.


Rates of Ebola virus infection and death in the current outbreak are being produced by the WHO and frequently updated on the relevant Wikipedia page. The WHO also keeps excellent public and readable statistics on health worldwide, including their 2012 report on the burden of disease showing country-by-country statistics.

For those wanting a more in-depth look at the nuts and bolts of Ebola, its transmission and mechanisms of infection, I highly recommend this episode of the podcast This Week In Virology.

Science and journalism have an uneasy relationship in the United States. For a country that has by far the largest scientific output in the world, it seems rather shy of discussing it in its popular media.

Now, I do not wish to suggest the US lacks in science communication. Much to the contrary, it has produced some outstanding science communicators including Carl Sagan, Steven Pinker and Michio Kaku and some great citizen science initiatives such as Science At Home and Zooniverse. Instead, it seems to be something more specific to the mass market of mainstream media.

As a simple experiment, I looked at the 10 most popular, non-aggregator news websites based in the United States to see which ones have a science news section. Here is the breakdown:


Six out of ten top websites choose to have a ‘tech’ section rather than science, and while the content is often similar, it is interesting to note how the same story can be labelled as either science or technology. Indeed, the word ‘science’ has attracted a certain negative connotation in US popular media – for example, the UK animated film The Pirates! In an Adventure with Scientists! was subsequently released in the US as The Pirates! Band of Misfits. We may speculate that the original title was deemed unmarketable for the US consumer, but it highlights the general point that science, or at least things that appear superficially related to science, are not seen as suitable for the mass market.

That is not to say that there are no science programs in the mainstream – the popularity of Cosmos: A Spacetime Odyssey, a science documentary narrated by Neil deGrasse Tyson, is a testament to the contrary. But an interesting example is the hit Discovery Channel show Mythbusters, where popular myths are subjected to rigorous tests to determine their validity. The model of Mythbusters is a fantastic example of the scientific method – confirmation through experimentation – but at no point in the show is it labelled ‘science’.

There is clearly an appetite for popular science in the US, and with increasing penetration in media and schools, we are likely to see it grow. Despite this, a reticence about the popularisation of science in mainstream media, at least in name, remains. Whether this trend is deep-rooted or simply an orthodox response to fears of a conservative backlash against the label of ‘science’ remains to be seen, but continuing improvements in education and engagement suggest we are likely to see a lot more science, under that name or another, in US popular media.


Top 10 most popular non-aggregator news sites based on Alexa ratings on unique page views originating in the United States over the past 12 months.

Is progress inevitable, or are we doomed as a species by our own folly? It is a fairly fundamental question about our own condition, one that has certainly entertained many great minds and spawned thousands of works in literature and the arts. But the extinction or survival of civilisation enters the realm of science in the latest episode of The Infinite Monkey Cage on BBC Radio 4, a chat show mixing comedy and popular science.

Host Brian Cox asks a very simple question: can science save us? And that is perhaps a tricky formulation, since we must first understand what we mean by science. Cox equates science with knowledge, a conception with which I would vehemently disagree. While this may be accurate etymologically, the modern conception of science is the application of the scientific method to understand nature – with the resulting understanding forming part of the much wider umbrella of knowledge. The key distinction is that science is a way of doing things rather than the end result, which is what permits us to falsify previously held ideas (i.e. knowledge) about how the world works.

If we step back then, and assume we are speaking about knowledge derived from science, we can follow the line of the radio programme and ask: is knowledge always a good thing?

It is not in the nature of knowledge to be necessarily a positive thing – it is a truism to say ‘ignorance is bliss’, after all. For example, if you were very far away from your family and learnt that one of your loved ones had passed away, you might feel distressed. But if such news never arrived, you would carry on in your merry way, even though the reality of the world is that that person is well and truly dead.

But knowledge is completely dependent on interpretation – for example, if you learn that our Earth, this planet, is going to be consumed by the sun and disappear in a few billion years, you might lose sleep over it. But if you also learnt that those few billion years are an astronomically large amount of time, that humans will either be gone or long since evolved into some inconceivable creature, that we will have little to no emotional attachment to whatever species plods the Earth at that time, and that the Earth itself would by then be unrecognisable to our eyes, then you would perhaps feel less upset about the whole affair.

While knowledge may shape our thinking and acting, in the strict sense neither science (a method) nor knowledge (ideas) is of any practical use to us; it is rather their distillation into technological advances, or behavioural and societal changes, that positively impacts our ability to survive and live comfortable lives on this planet.

While it may seem pedantic, it is worth arguing that understanding this triad – science as the approach, knowledge as the vehicle and change as the product – is important to how we conceive of the world, how we tackle the issues of today and, as Dr Lucie Green unremittingly points out, how we fund the science of tomorrow.


Episode 4, Series 10 of The Infinite Monkey Cage aired on BBC Radio 4 and is available as a podcast with extra material. The hosts were Brian Cox and Robin Ince and the guests were Stephen Fry, Prof Tony Ryan and Dr Lucie Green.

A friend recently asked me this question:

“Why don’t they sell scientific journals in shops? That way, people could find out what science is going on.”

The idea was quite bizarre to my closed-minded scientific head, and about a dozen potential replies circled it: “Most people wouldn’t understand it”, “academic papers are not written for mass consumption”, “we don’t need to, we have science journalism in mainstream media” and “there would just be too much stuff”.

In the end, I settled for “it’s all digital now”, which is true for the vast majority of scientific publishing – papers are increasingly published, purchased and consumed in digital format, with the physical print copy a rare sight in most academic departments.

But this got me thinking. Why not? People have the right to buy scientific journals, particularly if they live in a country where research programmes are heavily funded by public entities – in fact, some people argue that they shouldn’t have to pay at all. But could we do this with old-fashioned print versions of journals? Could we put them in a shop? Just how much space would it take?

To find this out, we need to be a bit ingenious with numbers. According to the STM 2012 report on scholarly publishing, there were 1.8 million peer-reviewed papers published in 2012. Using growth rates estimated from a couple of sources, that number should hit around 2 million in 2014. If we presume our humble local magazine shop stocks these journals in the normal fashion, a new one every week, we are dealing with 38,462 articles a week.

How much paper is that? A peer-reviewed article is typically between 3,000 and 10,000 words, and following a completely unscientific survey of a few representative journals I had lying around (Nature, Science, PNAS), the going rate seems to be 800–1,000 words per page, pictures and all. This works out at an average of 7.22 pages per article, as a very rough guess. Multiplying that by the number of weekly articles our shop would need to stock, we get about 277,778 pages, and if we add covers, editorials and advertisements we can reasonably round that up to 300,000 weekly pages being churned out.

Now the big question is whether our imaginary shop can actually hold that much content. With a typical magazine format of 120 pages in 12 x 28cm size, we are dealing with 2,500 unique weekly publications. Of course, we want to stock more than one copy of each, so 50,000 issues is not unreasonable. How much space would that take? At 8.50 m3 it would just about fit into a large transit van. Of course, if you actually want to be able to rifle through the contents, our shop would need a considerable floor space. If we display the journals face-on, as they typically are in book stores, it would need about 500 m of shelf space.
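The back-of-envelope arithmetic above can be sketched in a few lines of Python. The inputs are the estimates from the text; the 0.5 cm issue thickness and the 20 cm of shelf space per displayed title are my own assumptions, chosen to land near the rough totals quoted above:

```python
# Rough sketch of the journal-shop arithmetic; inputs are the
# post's own estimates, plus two assumptions flagged in comments.

papers_per_year = 2_000_000                 # estimated output for 2014
articles_per_week = papers_per_year / 52    # ~38,462 articles a week

avg_article_words = (3_000 + 10_000) / 2    # midpoint of the 3k-10k range
words_per_page = 900                        # midpoint of 800-1,000
pages_per_article = avg_article_words / words_per_page   # ~7.22 pages

weekly_pages = articles_per_week * pages_per_article     # ~277,800 pages
weekly_pages_rounded = 300_000              # plus covers, editorials, ads

pages_per_issue = 120
titles = weekly_pages_rounded / pages_per_issue          # 2,500 titles
copies_per_title = 20
issues_in_stock = titles * copies_per_title              # 50,000 issues

# Physical footprint: 12 x 28 cm format, assumed ~0.5 cm thick
issue_volume_m3 = 0.12 * 0.28 * 0.005
stock_volume_m3 = issue_volume_m3 * issues_in_stock      # ~8.4 m3

# Face-on display, assuming ~20 cm of shelf per title for browsing
shelf_metres = 0.20 * titles                             # 500 m

print(f"{articles_per_week:,.0f} articles/week, "
      f"{weekly_pages:,.0f} pages/week, "
      f"{stock_volume_m3:.1f} m3 of stock, "
      f"{shelf_metres:.0f} m of shelf")
```

Tweaking the assumed thickness or shelf allowance moves the totals a little, but the order of magnitude – a transit van of paper and hundreds of metres of shelving every week – is robust.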

Assuming absolutely no backlog, and definitely not stocking the previous week’s edition, a very large, multi-storey shop could just about manage that. And it probably wouldn’t have room for National Geographic.


For numbers of scholarly articles published, this article by Larsen & von Ins (2010), as well as reports by STM and the Royal Society were particularly helpful.