Aside on climate politics and tobacco

This blog is my opinion.

I am intentionally separating the science of climate change from a discussion of the politics and what we should do about it. Too often, people have conflated the two. I think Al Gore talking about climate change was one of the most damaging decisions ever (and he should never have got a Nobel Prize), because, particularly in the USA, people who disagreed with his suggested solutions to the problem chose to argue with the science rather than the politics. I think they didn’t understand the difference between different types of “truth”. (I wrote a lot about different types of truth in 2016, and the 2nd-5th posts on this blog are about that.) I believe politicians – and all of us – should be grappling with (and that includes arguing about) what we are going to do about climate change. We should not be arguing about whether anthropogenic climate change is real or not.

I am trying to give a faithful and honest account of what I understand about climate change in my lessons. The science is not perfectly known and there are some very big unknowns – for example how positive cloud feedback is – but just because we don’t know everything doesn’t mean we know nothing. The science of climate change will advance and with that advance it will become ever more possible to understand the detail of what’s happening, but we already know the main point: anthropogenic climate change is putting human civilisation as we know it at risk. We either have to stop it (mitigation) or we have to adapt to it. Or perhaps a bit of both.

But we’ve only fully understood this for about 20 years. We’ve had hints before that, and the hints have got stronger and clearer over time, but the clear picture we have now is very recent. I think there are parallels with how we learnt about – and then reacted to – the dangers of tobacco that are useful to draw.

The first scientific study on the dangers of tobacco came in 1761, when John Hill published a clinical study showing that snuff users were more likely to get nose cancer. A debate about tobacco in the Lancet started in 1856. In 1889 Langley and Dickenson did the scientific studies that started to explain why nicotine is dangerous, modelling the processes by which nicotine affects the cells in our bodies. In 1912 the connection between smoking and lung cancer was first published. The first large-scale scientific analysis of that connection came in 1951. In 1954 the Reader’s Digest published an article about it, and that article contributed to the largest drop in cigarette sales since the Depression. In 1962 the British Royal College of Physicians published a report saying that the link was real, and in 1964 the US Surgeon General did the same. Cigarette adverts were banned on TV in 1965. Cigarette smoking was banned on the London Underground in 1984 – but not for health reasons: a dropped cigarette may have contributed to a fire at Oxford Circus. A comprehensive review of the dangers of passive smoking came out in 1992. Over time more and more things were banned – no-smoking zones were introduced in pubs, advertising carried bigger warnings… and eventually, in 2003, tobacco advertising was banned in the UK, and in 2007 smoking in workplaces was banned in England. Now, 12 years on, I think most of us consider this normal. [I got these dates from an interesting document online: http://ash.org.uk/information-and-resources/briefings/key-dates-in-the-history-of-anti-tobacco-campaigning/]

In 1964 the evidence was clear. We didn’t understand everything – we didn’t understand all the effects of passive smoking, we weren’t quite sure how a mother’s smoking affected the fetus in her womb, we didn’t know about the links between smoking and cervical cancer or heart disease… but we knew smoking was dangerous and we took our first steps towards changing it. We had to change people’s attitudes, we had to get people to change how they did things, we had to make smokers uncomfortable on long-haul flights. And people sued the tobacco firms, and the firms fought back – and often won – those court cases. It was a long journey that often didn’t go the way we now, in hindsight, see as the right one.

I think in climate change we reached that 1964 moment with the publication of the first IPCC report in 1990. There was a lot that report’s authors didn’t know – just as the authors of the 1964 tobacco and health reports didn’t know everything either. But equally, it was the first clear report that the problem was real.

If it follows a similar timescale – and I think human nature is such that that’s a good first approximation – that would put climate change in 2020 in the same place as tobacco smoking in 1994. That’s the year some individual organisations made voluntary changes – like Wetherspoons introducing smoke-free areas in their pubs, and Cathay Pacific introducing smoke-free long-haul flights. It’s also the year that the tobacco companies lost their court battle to stop warnings being printed in big font on their cigarette packets. There were signs that the number of smokers was dropping, and British Rail had banned smoking a year earlier – to 85% approval. But there were still 8 years to go before smoking was banned in workplaces – and that probably would have felt like too much back then. (I remember being pleased to have a smoke-free area in the pub; I didn’t question that the rest of the pub still allowed smoking, I just held my breath walking from the bar to the place I was sitting.)

I think that if we’re doing the voluntary stuff now, and the legal stuff catches up with us in 5-10 years – we’ll probably end up ok. But we all need to be talking about this and saying that we want to live in a world where burning fossil fuels seems as old fashioned, unhealthy and odd as smoking in British pubs does today.


Lesson 10: Anthropogenic Climate Change

[Image: models-observed-human-natural]
Figure from the report “Climate Change Impacts in the United States: the Third National Climate Assessment” (2014). https://nca2014.globalchange.gov/

In the last few lessons I’ve been talking about climate models and how they can model incredible complexity, including energy balance, convection (circulation) in the atmosphere and oceans, and biogeochemical processes. Once we have such models we can do many things. First, the models help us ask questions and test our assumptions: they allow us to explore “what if” scenarios and understand how important certain components of the system are. Second, they help us to predict the future. And third, they allow us to understand what we can, and cannot, influence.

The figure above comes from a US government report published in 2014. It compares two runs of a climate model with observations of “global average temperature”.

The two model runs have a broad shaded area. That represents the uncertainty of the model – it indicates the range that the temperature could be in, based on multiple runs of the model (the so-called “ensemble run”) in which initial starting points (and the sizes of certain effects) are varied from run-to-run in a way that is consistent with our understanding of our lack of knowledge.
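
If you like seeing an idea as code, here is a minimal sketch in Python of how such a band can be made – a toy stand-in, not the actual model behind the figure, and every number in it is made up:

```python
import numpy as np

rng = np.random.default_rng(42)

def toy_anomaly(years, sensitivity):
    """A deliberately crude stand-in for a climate model: the anomaly
    grows with an assumed forcing ramp, scaled by a sensitivity."""
    forcing = np.linspace(0.0, 2.5, len(years))  # W/m^2, illustrative
    return sensitivity * forcing

years = np.arange(1900, 2011)
# Vary the uncertain parameter from run to run, consistent with an
# assumed spread of knowledge (the spread here is invented).
ensemble = np.array([
    toy_anomaly(years, s)
    for s in rng.normal(loc=0.4, scale=0.1, size=100)  # K per W/m^2
])

# The shaded band is then a range across the ensemble members.
lower = np.percentile(ensemble, 5, axis=0)
upper = np.percentile(ensemble, 95, axis=0)
print(f"Band width in 2010: {upper[-1] - lower[-1]:.2f} K")
```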

Global average temperature is not an easy thing to measure (we’ll come on to that in later lessons), but the black line is the result of our best attempt at combining the data we have. Really it should also have an “uncertainty” ascribed to it – I’d prefer to see this graph with a band around the black line too. I don’t know enough about how this value is determined (I’ll try to find out and get back to you!), but my guess is that it has an uncertainty (width) of somewhere between half that of the models and the same size as the models.

The green model band describes “natural factors only”. This runs the model considering all the biogeophysical processes, and also considering the distance between the Earth and the Sun, variations in the solar cycle, volcanoes erupting and releasing gases into the atmosphere, trees growing and dying, lightning-caused fires and so on. The blue model band describes “natural and human factors”. It includes all the quantities above, but also includes anthropogenic (human-caused) fossil fuel burning (coal, oil, gas), cement making, the release of particles in cities (smog, air pollution), refrigerant gases (CFCs and their more modern replacements), methane release (from industrial-style farming and landfill waste tips), and land use changes (cities, deforestation). Note that 80% of the observed difference between the blue and green lines is due to fossil fuel burning; the other things make up the remaining 20%.

Until 1980 you can’t tell the difference between the lines. It becomes clear (now, in hindsight) around 1990. But it’s worth remembering that in 1990 our computers were a lot smaller, our climate models a lot less detailed (remember the 1987 storm that the MetOffice failed to predict – that was because the weather forecasts were a lot less reliable then – and the climate models are based on the same programs as the weather models). So while in hindsight it was around 1990 that humans became a driving force in the climate, we’ve only had the science to understand that since about 2010. We are in the very early days of our full understanding of the problem.

I’d like to keep the science and the politics separate, so I’ll write a separate note on my thoughts about this.


Lesson 9b: Seeing the wood for the trees

In Lesson 9 I made a common mistake of describing scientific progress in terms of steadily increasing complexity. I explained that “early” climate models were energy balance models, that “later” climate models included the circulation/convection of the atmosphere and ocean, and that “modern” climate models include all these things and also chemistry and biology.

Since I wrote that I’ve realised that this, while a nice “story”, is not really true. Because I am writing these blog posts and then scheduling them for publication a few days later, I realised I could either edit the previous lesson before it was published, or write this follow-on post. I went for the latter option, because I think the “nice story” is easier to follow. I guess in that way it’s like the models themselves – the nice story of a progression of complexity is a simple model of the history of climate modelling, and one that is very helpful for explaining why models have got better over time. The nice story models some “big picture” stuff, but gets a lot of details wrong. A fuller story would describe the detail more accurately, but would be messier and we’d lose information. We’d be “unable to see the wood for the trees” – metaphorically, in the case of how I tell the history.

Being literally “unable to see the wood for the trees” is one of the reasons why we still use simple climate models today. A thorough modelling of all the details can sometimes lose something. Earlier in my career I came across the concept of the “missing sunlight” – what this was telling us was that the detailed modelling of where incoming sunlight went (some reflected from clouds, some from the surface, UV parts absorbed by the ozone layer, some lines absorbed by atmospheric gases, some absorbed by the surface to heat up the Earth …) didn’t add up to what the big picture model of “energy in = energy out” was saying. In our forest, the treatment of individual trees misses some of the interactions between trees. There’s a similar “missing water” problem in the Amazon rainforest where the total rainfall seemed twice as big as the outflow of water from the Amazon river system. Later it was realised that water wasn’t just evaporating from the rivers and oceans, it was also evaporating from leaves and being released by trees – and that water was raining down again: a large proportion of the rain was recycled.

For all these reasons, simpler climate models have a very important part to play in modern climate research. They help us understand the processes and test the complex models, they allow for faster “experimental” tests of different processes. They make sure we continue to see the wood as well as the trees.

In fact, the first attempt at a fully integrated climate model that considered many different complex interactions and treated the calculations in a three-dimensional way was made in the 1950s. Computer power was considerably poorer then, and the models were less sophisticated in some ways, but there was an attempt to model all the interactions together.

Lesson 9: Inside the climate models

[Image: Globe_as_a_grid]
Representation of a global climate model (note: this image appears in many places on the internet, but none seems to be the authentic original version).

In the last lesson we learnt about Lewis Fry Richardson developing the concept of numerical weather forecasting. In the 1910s and 1920s his idea could not be realised because we did not have sufficient computing power. Today, that computing power exists – indeed, of the UK’s top 7 supercomputers, four are at the MetOffice and two at ECMWF (the European Centre for Medium-Range Weather Forecasts). The only one that isn’t used for weather and climate forecasting is at the Atomic Weapons Establishment (and I dread to think what they use it for).

The weather and climate models of today work as Lewis Fry Richardson predicted: they break the Earth and its atmosphere up into little boxes and in each box they predict the change in conditions over a certain defined time step. They then pass that information to neighbouring boxes.

Over time, weather and climate models have become more complex in:

  • The range of phenomena that they include in their models (discussed below)
  • The size of the boxes and time steps (smaller boxes, smaller time steps)
  • The variety of observational data that they bring into the models
  • Their handling of uncertainty in the modelling processes and in the observations
  • Their ability to predict both overall trends and detail (so moving from making predictions for averages to predictions for specific areas)
  • The human and geological behaviour that they can include in the models (fossil fuel burning, deforestation, volcanoes etc.).

The simplest climate models are “energy balance models” (EBMs). These do what we considered in our thought experiment in lesson 4, extending it as we did in 4b. They generally split the world into rings of latitude. In each ring they consider the energy in (from the sun based on the average amount of sunlight to hit that ring over a day and a year) and the energy out (the reflected sunlight, which depends on the average albedo – that is reflectance, and the thermal infrared Earth emission and thermal infrared emissivity – that is how well it emits that wavelength). The greenhouse effect is included as a temperature increment – the amount that those greenhouse gases cause a temperature rise. Such models can give basic information about the Earth system – and explain the basic temperature changes that we see.
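
For the curious, here is what the simplest possible version looks like in code – a zero-dimensional Python sketch with one global box rather than latitude rings, using illustrative round numbers:

```python
S0 = 1361.0      # solar constant, W/m^2
albedo = 0.3     # fraction of sunlight reflected back to space
sigma = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

# Energy balance: absorbed sunlight = emitted thermal infrared,
# S0/4 * (1 - albedo) = sigma * T^4. (The factor of 4 spreads the
# intercepted disc of sunlight over the whole sphere.)
T_bare = (S0 * (1 - albedo) / (4 * sigma)) ** 0.25
print(f"Without greenhouse effect: {T_bare - 273.15:.1f} degC")  # about -18 degC

# In the simplest EBMs the greenhouse effect enters as a temperature
# increment on top of this radiative equilibrium value.
greenhouse_increment = 33.0  # K, roughly the observed value
print(f"With greenhouse increment: {T_bare + greenhouse_increment - 273.15:.1f} degC")
```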

The simple models can also consider some feedback processes. Since 1969 climate models have considered the “sea-ice albedo” feedback, which affects these energy balance equations near the poles. When the temperature of the Earth is cooler, there is more sea ice, and that reflects sunlight back to space, reducing the amount of sunlight that heats up the Earth and therefore cooling the Earth further (this was an important feedback mechanism during the ice age). When the temperature of the Earth is warmer, the sea ice melts and the dark sea that replaces it absorbs a much larger fraction of the sun’s light, warming up the Earth further.
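
That feedback can be bolted onto the toy model sketched above. The albedo function below is entirely hypothetical – it is chosen only so that ice cover (and hence reflectance) grows as the planet cools:

```python
import numpy as np

S0, sigma = 1361.0, 5.67e-8

def albedo(T):
    """Hypothetical smooth ramp: more ice, so more reflective, when cold."""
    return np.clip(0.45 - 0.005 * (T - 255.0), 0.25, 0.45)

# Fixed-point iteration: temperature sets ice cover, ice cover sets
# albedo, albedo sets temperature... repeat until the feedback settles.
T = 288.0  # initial guess, K
for _ in range(50):
    T = (S0 * (1 - albedo(T)) / (4 * sigma)) ** 0.25 + 33.0  # +33 K greenhouse increment
print(f"Equilibrium with ice-albedo feedback: {T - 273.15:.1f} degC")
```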

Energy Balance Models can also study the impact of changes in the output of the sun. The sun has an 11-year sunspot cycle and is about 0.1 % brighter when there are more sunspots than when there aren’t any. During 1650–1700 there was a period with almost no sunspots (the Observatoire de Paris was taking records daily), and that corresponds to the “Little Ice Age” (though at the same time there was increased volcanic activity and probably a significant regrowth of rainforest in central America after European diseases, introduced by the explorers, wiped out a very large population – both of those factors may also have altered the climate).

However, energy balance models are necessarily superficial when used alone. Instead they form one component of more complex models. The next, and essential, level of sophistication is to add in convection. I mentioned in an earlier lesson that a garden greenhouse does not heat up because of “the greenhouse effect” but because the glass stops the air circulating. We also know that “radiators” in our houses don’t really work by radiating heat, but by setting up circulation patterns in the air in the room (hot air rises). Similar processes happen in the oceans: London (51 degrees latitude North) is much warmer than Ottawa (45 degrees latitude North) because of the Gulf Stream, which transports warm water from central America towards Europe.

Circulation models need to consider the Earth not in latitude bands, but in the small boxes (including boxes on top of each other into the atmosphere and down into the sea) and consider the currents in the ocean and the winds in the atmosphere and how that means water or air is passed from one box to the next. Circulation models also include physical processes in the ocean and atmosphere – how water vapour condenses into clouds and how clouds precipitate into rain and snow. It is circulation models that model “cloud feedback” which we discussed before.

The Gulf Stream circulation is partly driven by salt in the sea water. As water travels from the Equator towards the poles, some evaporates, and the remaining water therefore becomes saltier. Salty water has a higher density (is heavier) and sinks, and this sinking drives the “conveyor belt”. There’s a nice video from the MetOffice on YouTube that explains this.

One topic that has been discussed in the media (and was the basis of a film) is a worrying possible future feedback: as the Greenland ice sheet melts, the fresh (not salty) water introduced just at the point where the Gulf Stream water sinks could stop the whole circulation – changing weather patterns across the world and, potentially, making Europe colder! The latest IPCC report, however, says that this is “very unlikely”, though there may be changes in how the circulation occurs.

Modern “coupled climate system models” include more processes still, including chemical processes (chemistry in the ocean, in the atmosphere and at the boundary between the ocean and the atmosphere) and biological processes (growth of trees and algae and the chemical and biological changes that this creates: e.g. photosynthesis, carbon storage in trees and in the soil, the effects of fire). They also model human effects – from the “heat island” effects of cities to the impact of paving our roads and gardens on the water cycle.

Modern climate models are some of the most complex computer programs in the world, written by huge teams of experts, each concentrating on one small detail, and running on some of the world’s most powerful computers. They are the achievement of huge multidisciplinary teams of physicists, chemists, biologists (and most importantly those working at the cross-over between disciplines: biochemists, biophysicists), computer scientists, engineers and mathematicians. There are approximately 30 teams of scientists who have developed climate models that run on different computers running different codes. Those teams go to conferences together and learn from each other, but each team makes its own decisions about which details to include and how to model them. They also make different decisions about which observational data (the subject of a later lesson) to include.

The Earth System is extremely complicated. Our models are our best attempt to simulate the real Earth. As our science has become more sophisticated, and as our computers have become more powerful, we have been able to include more and more detail into those models. But we must never forget that they are models and not reality in and of themselves.


Lesson 8: The first numerical weather forecast

[Image: factory]
Painting of an imaginary prediction factory, based on Ch. 11 of Richardson’s ‘Weather Prediction by Numerical Process’, ink and watercolour, commissioned and owned by Prof. J. G. Byrne, painted by and Copyright of Stephen Conlin, 1986.

So, we’ve discussed blackbody radiation and how the hot sun emits electromagnetic radiation at short wavelengths (UV, Visible, near IR) and the much cooler Earth radiates in the thermal IR. We’ve discussed how the Earth needs to reach an equilibrium where the incoming energy matches the outgoing energy and how without greenhouse gases that would be achieved at around -18 ºC, but, because greenhouse gases absorb thermal IR to excite various vibrational modes (make the molecules wobble), a lot of the thermal IR gets absorbed in the atmosphere and the Earth warms up.

I hope I’ve expressed two core concepts: these processes are all basic physics and chemistry in and of themselves, but there is complexity in the Earth system because of interactions and feedback loops. It’s not quite as simple as “more CO2 means more vibrating molecules and hence more warming”: increasing CO2 does cause warming, but to understand how much, you need to understand exactly how the light interacts with all the molecules, how the atmosphere itself radiates, and how a warmer atmosphere holds more water vapour, which also acts as a greenhouse gas. It’s both very simple – and very complicated!

Now, a slight aside to get to how that complexity is handled. Back in World War 1 a young Quaker, Lewis Fry Richardson (this is a subject that brings together both my faith and my science!), was working in the Friends’ Ambulance Unit in the trenches. By day he dealt with the wounded and the dying. And at night he solved differential equations. I get that: after the horrors of the day, maths provided the rational logic that helped him control his emotions.

What he was trying to do was to make the first weather forecast. He had weather measurement data for an area in Central Europe and he decided he’d try to predict the conditions in one place by using what had happened six hours earlier in other places. He had the concept of the first numerical weather forecast. The idea was simple: he would split his map up into lots of different cells, and in each cell he would know the current temperature, pressure, wind speed and direction and, crucially, how each was changing with time (what in maths is known as “the differential”). He’d solve the differential equations in each cell, and that would pass information to the next cell. That way he could calculate numerically what the weather would be six hours later in one of his cells. He spent six weeks on his calculations – and ended up with the wrong answer (I know that feeling too!). We now know that his wrong answer was because of problems with the input data: the measurements of temperature and pressure that he had were not reliable enough. (We’ll certainly come back to that message, since my job is to make sure the measurements that go into models are reliable!)

However, his principle was right – you can predict the weather in one place by cutting the Earth up into lots of cells, using measurements and estimates of the current conditions in each place and the rate of change of those conditions, and then solving numerically the differential equations in each cell to show the change until the next time period. He knew that it had taken him six weeks to calculate the one cell he was working on, but he imagined that if there were 64,000 (human) calculators working together, they could do real-time weather forecasting and predict the future. His concept of a “weather forecast factory” (illustrated above) is exactly what is done in the supercomputers that run today’s weather forecasts.
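
To give a flavour of the idea in code, here is a minimal sketch – nothing like his real equations – of a ring of cells carrying a temperature field, pushed along by a constant wind, where at each time step every cell is updated from its upwind neighbour:

```python
import numpy as np

# Illustrative numbers: 64 cells of 100 km, 10-minute steps, 10 m/s wind.
n_cells, dx, dt, wind = 64, 100e3, 600.0, 10.0
x = np.arange(n_cells) * dx
# Start with a warm "bump" in the middle of an otherwise 15 degC field.
temperature = 15.0 + 5.0 * np.exp(-((x - x.mean()) / (5 * dx)) ** 2)

for _ in range(360):  # 360 steps of 10 minutes = 60 hours
    # Upwind finite difference for dT/dt = -wind * dT/dx:
    # each cell looks at the neighbour the wind is blowing from.
    dTdx = (temperature - np.roll(temperature, 1)) / dx
    temperature = temperature - wind * dTdx * dt

print(f"The warm bump has moved {wind * 360 * dt / 1000:.0f} km downwind")
```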

We’ll go into them in more detail in a later lesson, but basically numerical weather forecast models split the Earth and its atmosphere and oceans into lots of “cells” – boxes that cover a certain longitude and latitude at a particular atmospheric height (or ocean depth). In each box they model the basic physics of radiation (heat, light, temperature) and convection (air/water pressure and winds/water currents) and solve differential equations to show how that is changing over a defined time step. Modern models also model the chemistry (how gases in the atmosphere interact with each other, changing the salinity and pH of the oceans) and biology (growth of plants and algae, respiration) as well as the large-scale geoscience (solar irradiance changes, volcanoes, …).

Numerical weather forecasts are some of the most complex computer programs in the world, being run on some of the biggest and most powerful computers in the world.

The “short term weather forecast” models (which can accurately predict ~3-5 days ahead), the “medium term weather forecasts” and the “climate forecasts” all run exactly the same model at the UK MetOffice – they just use smaller cells and a much finer time step for weather forecasting, and bigger cells and monthly averages for climate forecasting. Each meteorological office has its own model developed by its own scientists and programmers – and even within one meteorological office they may have multiple variations of their model. That’s how they can say “there’s a 70% chance of rain” – what they mean is that when they ran their model many times, with minor changes to account for what they didn’t know, 70% of the runs produced rain and 30% didn’t.
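
Here is a sketch of where a number like 70% comes from – the spread and the rain threshold below are made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Run "the same model" many times with perturbed initial conditions,
# then count the fraction of ensemble members that produce rain.
n_members = 50
humidity = rng.normal(loc=0.82, scale=0.08, size=n_members)  # perturbed starts
rained = humidity > 0.78  # hypothetical rain threshold in the toy model

print(f"Chance of rain: {100 * rained.mean():.0f}%")
```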

Now I know what you’re thinking! If you’re British and older than 40 you’re remembering Michael Fish on the BBC saying there wouldn’t be a hurricane the day before the 1987 storm. I remember that day vividly as I tried to cycle to school around the fallen trees and got there to find school was closed – which is sort of the point – I couldn’t check in advance if school was closed because there was no (well no established) internet: computers were significantly less powerful back then. The weather forecasts of today are much more sophisticated and much more accurate. But, granted, they are only accurate for around 3-5 days (and we all know there is a limit – the famous “butterfly effect” that means minor changes make big differences to a chaotic system – so we can’t predict more than about 10 days ahead, no matter how sophisticated our models and how powerful our supercomputers).

So how can we predict climate with the same models? The reason is that with climate we’re asking a somewhat different question – instead of asking “what will the temperature be at Heathrow at 10 am on the 3 June 2080?” we’re asking “what will the average temperature be for all Junes in the 2080s in outer London?” That’s a different question – and one the models, with bigger cells and more time averaging, can answer.

Aside on Lewis Fry Richardson

When I started this blog as “Scientific Quaker” back in 2016, I intended to write about science and faith (and some of the earliest posts are about the difference between religious and scientific truth and include my thoughts on faith). When people started asking me to move the Climate Lessons to a blog I did think about whether to start a new one or continue with the 2016 blog and eventually I chose to continue here.

But now my Climate Lessons have got onto Lewis Fry Richardson, I feel I can indulge myself in discussing this other “Scientific Quaker” and the difficulty of choosing between faith and career. (For what he did scientifically see here: Lesson 8: The first numerical weather forecast)

I don’t know a lot about him, beyond what is easily available on online searches, but I’d like to share what I do know.

Both “Richardson” and “Fry” are big Quaker names and I’m guessing he was brought up in families that had been Quaker for generations. In the 19th century, Quakers couldn’t go to university, and intelligent Quakers set up businesses instead. His father was a successful leather manufacturer. That had changed by the turn of the 20th century, though, and Lewis Fry Richardson went to Durham and then Cambridge Universities.

He did several different jobs – he was a generalist who found it hard to find the subject that really spoke to him – but eventually he settled in the Meteorological Office, where he started to think about weather forecasting. In those days forecasts were based on pattern recognition – in other words, forecasters would find the past day whose weather map most closely matched today’s map and then assume that tomorrow’s map would be whatever had followed that earlier one. Richardson realised that it might be possible to predict the weather based on an understanding of the physics behind the processes instead.

[Image: image-2-richardson_lewis_1_f_01]
Lewis Fry Richardson’s Friends’ Ambulance Unit personnel card. From: http://engineersatwar.imeche.org/features/friends-ambulance-unit

In January 1916 Britain introduced conscription, when it realised that the war was going to go on far longer than earlier optimism had suggested. Quakers generally do not believe that any violent conflict can be compatible with a Christian faith, and the young men of Britain’s Quaker meetings struggled with the decision of how to respond to the Peace Testimony of their faith and the conscription law. Some Quakers became absolute conscientious objectors – refusing any form of military activity and going to prison for these beliefs. A small number felt that their conscience required them to fight with their fellow countrymen. The majority of Quakers, though, joined the Friends’ Ambulance Unit, which was created in 1914 as a response to the horrors of casualties in the war. The Friends’ Ambulance Unit served in both world wars, treating the injured of all sides – and after the second world war it helped displaced people and people released from concentration camps. (My grandmother’s cousin also served in it in the second world war.)

At the end of World War One, Lewis Fry Richardson returned to the Meteorological Office, but in 1920 it became part of the Air Ministry, and he felt he had to resign. (It remained linked to the Ministry of Defence until 2011, when it became linked to the governmental department BEIS. When I graduated in the late 1990s I intentionally ignored adverts for the MetOffice for the same reason that Lewis Fry Richardson resigned.)

He did, however, write up his thoughts on weather forecasting in his book Weather Prediction by Numerical Process, which he published in 1922, and he worked on new ideas – but he destroyed all his work when he realised the military could use it to predict where poisonous gas bombs (chemical weapons) would disperse.

Instead, he applied the same concepts to trying to predict the likelihood of war – using equations to understand how much countries were preparing to fight each other. For those calculations he had to work out the length of the border between two countries – and that’s when he realised that borders were fractal (if you make your straight ruler shorter, it goes round more wiggles, and the border is measured as longer). But other than a few details like that (his fractal borders were described in the famous 1967 paper by Benoît Mandelbrot who developed the concept of fractals more generally), Richardson’s work in predicting the chance of war was ignored by everyone (I understand a few modern researchers of peace studies are looking at it now – Lancaster University now has a “Richardson Institute” for Peace Studies named after him). He ended up being a physics teacher in a tech college – the only job he could get with his record as a conscientious objector and his unwillingness to work for the MetOffice.

This story challenges any Quaker working in science today. In hindsight it seems a huge waste of his talents for him not to have worked for the MetOffice. And yet, the MetOffice was heavily involved in decisions in World War 2 that chose the best dates for bombing civilians in Germany, and it is still used by the military. However, his resignation did not stop any of that happening.

I have always refused to work on military projects and, fortunately, that has had no real impact on my career (and has even got me out of a few projects that went wrong). But there are always grey areas. I’m setting up a large consortium at the moment that involves lots of scientists working on the observations that form the basis of climate models – and there are scientists from the navies of two countries who have asked to join the consortium.


Lesson 7: Carbon dioxide as a greenhouse gas

[Image: CO2_H2O_absorption]
Image found on the web, attributed to Robert Rohde’s “Global Warming Art”, which I can’t find a live link to.

I showed the picture above in the previous lesson and discussed how water vapour absorbs a very broad set of wavelengths in the thermal infrared (and a few in the near infrared). This absorption is due to how the light of those wavelengths causes the water molecules to change their vibrational modes in lots and lots of different ways.

The carbon dioxide molecule has three atoms arranged in a straight line: a carbon atom in the middle and an oxygen atom on either side. It doesn’t have quite as many ways of vibrating as water, but it has quite a few – and, crucially, different ones (pink in the diagram above), so it absorbs thermal infrared at wavelengths that water vapour cannot respond to. Thus carbon dioxide removes even more of the wavelengths that the Earth could otherwise use to cool down through outgoing radiation.

In the last lesson, I also described the water feedback loops. Simplistically: if there’s too much water vapour in the atmosphere, it rains. More completely, a higher temperature means both more water vapour in the atmosphere (as hot air holds more water), creating more heating, and more clouds, which may either accelerate warming (trapping heat in at night) or slow down warming (reflecting more sunlight in the daytime) – but we’re not quite sure which.

We are increasing the amount of carbon dioxide in the atmosphere (we’ll come back to the evidence for that later – but basically, for most of the last ten thousand years there were 250-280 carbon dioxide molecules in a million air molecules and now there are 400). And there isn’t a feedback loop as simple and immediate as “rain” to get rid of it. There are ways it can naturally come out of the atmosphere: the main ones are increased plant growth (e.g. in rainforests) and increased ocean algae. The oceans can also absorb some carbon dioxide, but that makes them more acidic, which impacts marine life – particularly corals. Of course, if we’ve cut down the rainforests (which we really have) they can’t absorb as much carbon dioxide either.

Because I’m still talking about the basic physics, I want first to consider what the increased carbon dioxide (wherever it comes from) does.

Now you might think that’s easy – CO2 is a greenhouse gas, so more CO2 means more warming; but that isn’t directly true. The atmosphere is very thick – so the thermal infrared meets lots of carbon dioxide molecules on the way up: that means that the atmosphere already absorbs all the light at some wavelengths (the ones where the graph above touches the top of the image). Increasing the concentration of carbon dioxide might mean the light is fully absorbed slightly lower in the atmosphere, but you can’t be more absorbed than fully absorbed (and at some wavelengths it only takes 25 metres of air to block the light completely).

Instead there are two important effects. The easier effect to understand is that not all infrared wavelengths are completely blocked by the atmosphere. In the last lesson I showed a graph of atmospheric absorption zoomed in, and there you see lots and lots of thin lines. As the concentration of carbon dioxide in the atmosphere increases, some of those lines get broader, and some of them get deeper. For example, some wavelengths correspond to transitions from one unusual vibrational mode to another that are rarely “set up” – it’s rare for the light to meet a molecule in the right starting state. But when there are more carbon dioxide molecules, the light is more likely to find one of these rare molecular vibrational states, so those wavelengths are more frequently absorbed and the absorption line deepens.
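
Both effects show up in a tiny Beer-Lambert calculation – the optical depths below are illustrative, not real CO2 line strengths:

```python
import numpy as np

# Beer-Lambert law: transmission = exp(-optical_depth), where the
# optical depth is proportional to the amount of absorbing gas.
for tau, label in [(20.0, "saturated line"), (0.1, "weak line")]:
    absorbed_now = 1 - np.exp(-tau)       # current amount of CO2
    absorbed_2x = 1 - np.exp(-2 * tau)    # doubled amount of CO2
    print(f"{label}: {100 * absorbed_now:.1f}% -> {100 * absorbed_2x:.1f}% absorbed")

# saturated line: 100.0% -> 100.0% (can't be more absorbed than fully absorbed)
# weak line: 9.5% -> 18.1% (doubling the gas nearly doubles the absorption)
```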

The more subtle effect is that the atmosphere itself is also lots of little blackbodies, radiating thermal infrared blackbody spectra that depend on the temperature of the gases. (As the thermal infrared radiation is absorbed by the carbon dioxide and water vapour in the atmosphere, it heats the atmosphere up.)

At low altitudes, any infrared emitted by the atmosphere is absorbed by the carbon dioxide molecules above and can’t make it through. But there is a height where the atmosphere can radiate to space because there aren’t enough carbon dioxide molecules above it. Increasing the concentration of carbon dioxide means more molecules throughout the atmosphere, and therefore this level has to go up towards space (at the lower height where light once could escape, it is now more likely to hit other molecules and therefore not escape). Since the higher parts of the atmosphere are colder, less energy escapes to space than would from lower levels (a smaller blackbody curve at lower temperatures) – so the planet loses less heat.
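
Rough numbers make the point. Suppose the effective emission level radiates at about 255 K, and temperature falls with height at a typical 6.5 K per kilometre; then raising that level by a hypothetical 150 m gives:

```python
sigma = 5.67e-8        # Stefan-Boltzmann constant, W/m^2/K^4
lapse_rate = 6.5e-3    # temperature drop with height, K per m
T_emit = 255.0         # effective emission temperature, K

# Raise the emission level by an illustrative 150 m: the radiating
# layer is then colder, so less energy escapes to space.
T_higher = T_emit - lapse_rate * 150.0
flux_drop = sigma * T_emit ** 4 - sigma * T_higher ** 4
print(f"Outgoing flux drops by about {flux_drop:.1f} W/m^2")
```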

[Image: co2SaturationMyth_Atmosphere_med]
Image from: https://skepticalscience.com/graphics.php?g=104

Eek. Sorry. I could have over-simplified this: more CO2 means more greenhouse warming. But I want to try to explain the whole story as I understand it (I am not an expert on climate modelling, so there are still huge simplifications in here I don’t know about!)

One last point: water vapour and carbon dioxide are not the only greenhouse gases. Methane is another important one – with four hydrogen atoms round a carbon atom, it has a lot of vibrational modes – but there’s not as much of it in the atmosphere as there is carbon dioxide. It is also increasing. The refrigerants (HFCs, HCFCs, CFCs) don’t only damage the ozone layer (ozone blocks UV on the way in); they are also very potent greenhouse gases – partly because they don’t occur naturally, so they absorb wavelengths nothing is absorbing already. There are currently very low levels of these gases, but if we don’t dispose of our old refrigerators and air conditioning units carefully we’ll release them into the atmosphere – and because there is no absorption at their wavelengths already, a small increase makes a big difference. Just think about how many air conditioning units there are – and the human feedback loop: more warming, more air conditioning, more refrigerant gases, more warming… (That’s why Project Drawdown puts the careful disposal of refrigerant gases as its number 1 activity for solving climate change problems.)

Lesson 6: Water vapour absorption

[Image: CO2_H2O_absorption]
Image found on the web, attributed to Robert Rohde’s “Global Warming Art”, which I can’t find a live link to.

The image above shows the “absorption spectra” of H2O (water – in blue) and CO2 (carbon dioxide – in pink). The absorption happens because light (electromagnetic radiation) at each wavelength causes the water or carbon dioxide molecules to change their vibration from one way of vibrating to another. Because water has so many different ways of vibrating, a very large number of wavelengths are absorbed. You can see that the edges are “jagged” – in fact, if you zoom in on any one part of the spectrum, you can see that it’s made up of lots and lots of lines.

[Image: trans10]
(Image from: http://www.gemini.edu/sciops/telescopes-and-sites/observing-condition-constraints/ir-transmission-spectra, showing the absorption spectrum of the atmosphere above a mountain for two levels of water vapour in the atmosphere – almost none in black, and a bit in blue-green.) Note the picture at the top is about absorption whereas this graph is for transmission – so where this is high, there is low absorption and vice versa.

The top image has a wavelength scale in microns (micrometres – 1000 times bigger than the nanometres I’ve used so far). The sun’s spectrum is in the wavelength range from 0.3 microns to around 3 microns. The Earth’s thermal infrared emission spectrum is from 4 microns to 40 microns.

The dominant greenhouse gas is water vapour. Water vapour absorbs more infrared radiation than any other gas because of the many, many different ways the molecule can vibrate. And because that radiation (light) is absorbed, it doesn’t get released into space and the Earth has to heat up to maintain a thermal balance between the incoming solar radiation and the outgoing thermal infrared radiation.

So, what would happen if all 7.5 billion of us boiled a kettle and released water vapour into the atmosphere simultaneously? Well, the simple answer is – it would rain. The atmosphere can only hold so much water (the exact amount depends on temperature and pressure) and when that limit is exceeded, the water condenses into clouds and, eventually, rain. The water cycle is a very complex, but also very rapid, feedback loop. The exception would be if we boiled those kettles in the upper atmosphere. There it is harder to make clouds, and the extra water vapour creates significant extra warming. This is one of the problems with aeroplanes – they are not only creating carbon dioxide, but also releasing water vapour into the high levels of the atmosphere.

The aeroplane effect is more complex still – aeroplanes burn fuel and the by-products are carbon dioxide, water vapour and nitrogen oxides – all of which contribute to warming. Emitting water at high altitude creates increased warming – but this, too, eventually falls as rain. The carbon dioxide has a much longer lifetime: if we all stopped flying, the water vapour would disperse quickly, but the carbon dioxide would stay around for decades. Note that if we powered our planes with hydrogen, they would still emit water. (See also: this Guardian article from 2010.) Of course, planes also make contrails – which means more clouds (see comments below about clouds).

However, hotter air can hold more water than colder air. So if the air temperature increases (for whatever reason), there is more water vapour in the atmosphere, which in turn leads to more “greenhouse effect” heating. This is known as a “positive feedback” – positive in the sense that it makes the effect bigger, rather than that it’s a good thing!
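
The “hotter air holds more water” relationship is quantified by the Clausius-Clapeyron equation. A standard approximation to it (the Magnus formula) shows a roughly 6-7% increase in the air’s water-holding capacity per kelvin:

```python
import numpy as np

def saturation_vapour_pressure(T_celsius):
    """Magnus approximation to saturation vapour pressure, in hPa."""
    return 6.112 * np.exp(17.62 * T_celsius / (243.12 + T_celsius))

# How much more water can the air hold if it warms by 1 K?
for T in [10, 15, 20]:
    gain = saturation_vapour_pressure(T + 1) / saturation_vapour_pressure(T) - 1
    print(f"At {T} degC: +{100 * gain:.1f}% per kelvin")
```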

We don’t have a lot of global records of water vapour levels, so it’s hard to put precise numbers on the amount of water vapour in the atmosphere and how that’s changing with time. But there are indications from spot measurements (where it’s been measured in one place for a long time) that water vapour is increasing as the atmospheric temperature increases.

Of course, increased water vapour also leads to more clouds, and we’re still not completely sure what more clouds means for the climate. On the one hand, more clouds means more sunlight is reflected – reducing the incoming energy and therefore cooling the Earth. On the other hand, more clouds means that more heat is held in at night (we all know clear nights are the coldest), therefore heating the Earth. For clouds at high altitudes we know that keeping heat in is the bigger effect (positive feedback); for clouds at low altitudes, reflecting sunlight is the bigger effect (negative feedback). Overall, there’s a lot of uncertainty in what we understand about both the positive and the negative feedback mechanisms. The latest IPCC report concluded that the positive feedback was likely to be more significant than the negative feedback – but it’s still not clear by how much. “Cloud feedback” is the biggest uncertainty in climate models (and better satellite data is needed to improve our understanding of it). The climate modellers have been predicting that doubling the carbon dioxide will change the Earth’s temperature by “something between 1.5 ºC and 4 ºC”. The reason they give a range is almost entirely our lack of understanding of cloud feedback. As we improve our understanding of clouds, we’ll reduce that range (and recent studies suggest the lower end of that range was too optimistic).


Lesson 5b: More on atmospheric absorption

[Image: 800px-Atmospheric_Transmission]

This image comes from the Wikipedia article on the greenhouse effect.

The red bit is the sunlight coming down. The drawn line is roughly what’s at the top of the atmosphere (there is some loss because of Fraunhofer lines, but basically it’s a perfect blackbody) and the coloured-in red bit is what reaches the Earth’s surface. The missing wavelengths are absorbed – and you can see below why:

  • UV and blue are absorbed by ozone in the upper atmosphere and scattered by “Rayleigh scattering” (the thing that makes the sky look blue in the daytime and red at sunset).
  • The near infrared (sunlight with wavelengths too long for us to see) is absorbed mostly by water vapour (and a few wavelengths by carbon dioxide) (more to come)

The blue and purple lines are what the Earth would emit at different Earth temperatures (the one most to the left is for a blackbody at 310 K – or about 35 ºC – the one furthest right for temperatures around 210 K – or about -63 ºC). The solid blue bit is the only bit that gets through the atmosphere. All other wavelengths are absorbed by water molecules (“water vapor” plot) or by carbon dioxide or other greenhouse gases.

The diagram is drawn for a normalised blackbody curve – so you can see the thermal infrared one and the solar one on the same picture. In reality the solar one would be much “higher” as well – and the blue ones would not only shift left with higher temperatures, but also get taller.

Because so much of the output spectrum is absorbed, the Earth will heat up until its output is equal to its input: it needs to be at a hotter temperature for the energy in the coloured-in blue bit to be equal to the area under the whole curve at a lower temperature.
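
Here’s that arithmetic as a sketch, with a made-up “escape fraction” standing in for all the absorbed wavelengths:

```python
sigma = 5.67e-8          # Stefan-Boltzmann constant, W/m^2/K^4
absorbed_solar = 239.0   # global-average absorbed sunlight, W/m^2

# Without an atmosphere, all of the Earth's emission escapes:
T_bare = (absorbed_solar / sigma) ** 0.25  # about 255 K

# If (hypothetically) only 61% of the thermal emission escapes, the
# surface must warm until that escaping part alone balances the input:
escape_fraction = 0.61
T_greenhouse = (absorbed_solar / (escape_fraction * sigma)) ** 0.25  # about 288 K

print(f"{T_bare:.0f} K bare rock vs {T_greenhouse:.0f} K with absorbing atmosphere")
```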

This is known as the ‘greenhouse effect’ – but that’s actually a poor name. Yes, there is some real “greenhouse effect” in a greenhouse: the sunlight gets through the glass, but the thermal radiative energy of the surfaces in the greenhouse, emitting thermal infrared radiation, can’t get back out again … but actually the main reason real greenhouses warm up is that hot air can’t escape… ah well!

Lesson 5: Atmospheric absorption

So in Lesson 4, we learnt that if the Earth had no atmosphere, but still reflected about the same amount of sunlight as it does now, it would be at about -15 °C to -20 °C on average to be in “thermal equilibrium” where the energy coming in from the sun matched the energy coming out through the Earth’s own, thermal infrared, blackbody radiation.

Of course, we all know from our personal experience that the average temperature of the Earth (averaged over the whole Earth, whole day, whole year) is a lot hotter than that. So what is it that the atmosphere does?

To think about that, let’s start with a revision of Lesson 3, about light being absorbed and emitted by atoms. First, the “electromagnetic spectrum” is what I drew in lesson 1: it is the “rainbow” in the visible, extended to other wavelengths of electromagnetic radiation. If you look at the visible spectrum (the rainbow) of the sun, you see black lines in the spectrum. These are known as Fraunhofer Lines after the scientist who first described them (see lesson 3b).

Light coming from inside the sun “excites” an atom in the outer parts of the sun, which means that an electron goes to a higher orbital. Then, when the atom returns to its lower state, it releases light with the same wavelength: but it does so in a random direction. So the amount of light heading towards us decreases at that wavelength and we see a black line in the solar spectrum.


In the Earth’s atmosphere the same thing happens – both on the way down and on the way up. Every atom has its own set of lines where it absorbs. But molecules can absorb at their own sets of wavelengths too. In the atom case, the absorbed energy from the light is used to move a very lightweight electron up to another orbital inside the atom. With molecules, the absorbed energy from the light makes the molecules vibrate in new ways. Since in molecules the things moving are much heavier atoms (rather than very light electrons), all this happens at a lower frequency – and molecular absorptions are in the thermal infrared.

Incoming light from the Sun reaching the top of the atmosphere is in the UV, visible and near infrared spectral regions. The UV is absorbed by atoms (and some molecules like ozone); this light gets re-emitted, but in all directions, including out of the atmosphere, and is lost. That’s how our ozone layer protects us from harmful UV. Other visible wavelengths are absorbed by the atmosphere too – some Fraunhofer lines are due to atoms in the Sun, others are due to atoms in our atmosphere. This means that some wavelengths do not make it down to Earth. But this absorption is only a few lines, and it doesn’t affect the overall amount of energy reaching the surface very much.

The Earth’s emitted radiation is in the thermal infrared. This longer wavelength (lower energy) light gets absorbed by molecules to make them vibrate in lots of ways.

Wikipedia has some great images of water molecules vibrating:

The yellow ball is the oxygen. The blue balls (which should really be much smaller than the yellow ball) are the hydrogen atoms (H2O!). Imagine you were holding a model of this with springs for the bonds and balls for the atoms. You can imagine that there are lots of ways for the molecule to vibrate and rotate. Each transition from one way of vibrating to any other way of vibrating requires just the right amount of energy supplied through light at “just the right wavelength”. So you can imagine there are lots and lots of thermal infrared wavelengths that get absorbed by all the water molecules in the atmosphere. And, while that light can also be re-emitted, that will be in any direction – including straight back down to Earth and into the path of another molecule.
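
For scale: each transition corresponds to a photon of a definite wavelength through E = hc/λ. Here’s that calculation for water’s well-known bending vibration, at about 1595 cm⁻¹:

```python
h = 6.626e-34   # Planck constant, J s
c = 2.998e8     # speed of light, m/s

wavenumber = 1595e2          # water's bending mode, per metre (1595 cm^-1)
wavelength = 1 / wavenumber  # about 6.3 microns: thermal infrared
energy = h * c * wavenumber  # photon energy needed for this transition

print(f"Wavelength: {wavelength * 1e6:.1f} microns")
print(f"Photon energy: {energy:.2e} J")
```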

[Actually, because the water molecules aren’t cold themselves, they are already vibrating – and this leads to even more wavelengths being “just right” to create transitions between different vibrational modes.]


There are some difficult concepts in here, so I’ll stop and give space for questions.