HDR is an increasingly familiar term in photography. We’ve all seen the punchy #HDR photos on Instagram (over eight million of them and counting). But what does HDR actually mean, and how do you produce HDR photos? Read on for my HDR photography tutorial.
This tutorial is split into five sections:

1. The theory
2. Kit, time and location
3. Take the shots
4. Merge the photos
5. Alternative methods

I’ll take you through the steps one by one…
1. The theory
This is a bit technical, but understanding it will help you with your HDR photography. So read on and, if you have any questions, leave a comment below.
What is dynamic range?
The term ‘dynamic range’ refers to the ratio between the largest and smallest values a quantity can take (see Wikipedia). In photography, it is used to describe the light in a given scene: the ratio between the brightest and darkest parts of a picture.
Imagine three scenes:
- The first is a dark room at night. Light levels are low throughout, and there aren’t any bright areas. The dynamic range is therefore low, because the difference between the brightest and darkest parts of the scene is small – everything is dark.
- The second is a bright, sunlit, empty beach. Light levels are high everywhere; there are no shadows and no dark areas. The dynamic range is therefore also low, because the difference between the brightest and darkest parts of the scene is again small – everything is bright.
- The third is a castle dominating the landscape, with the sun behind it and the front of the castle in shadow, like in the image above. The sun is obviously very bright. But the side of the castle facing you is dark because it’s in shadow. Here, the dynamic range is high because the difference between the brightest part (the sun) and the darkest part (in shadow) is large.
As is normal in photography, this light range is measured in Exposure Values (EVs) or ‘stops’ of light. One additional EV or stop means double the amount of light. So if we have a photograph with a dynamic range of 10 EVs, the brightest pixel is 2^10 times (that is, 2 multiplied by itself 10 times, or 1,024 times) brighter than the darkest.
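To make the arithmetic concrete, here’s a two-line Python sketch of the stops-to-ratio conversion (just the maths above, nothing camera-specific):

```python
def stops_to_ratio(stops):
    """Each stop doubles the light, so the brightness ratio is 2 raised to the number of stops."""
    return 2 ** stops

# A scene with a dynamic range of 10 EVs:
print(stops_to_ratio(10))  # 1024: the brightest pixel is 1,024x brighter than the darkest
```

The same function works for fractional values too, such as a sensor’s quoted dynamic range of 11.7 EVs.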
Why is dynamic range a challenge for photographers?
The dynamic range that can be handled by a camera’s sensor is limited. The Canon 7D I used for these photos has a reported dynamic range of 11.7 EVs, which means it can only process a difference of 11.7 stops of light between the darkest and brightest parts of an image. (You can get an idea of the dynamic range of your camera on DxOMark.com – but it’s an inexact science, so take the numbers with a pinch of salt.)
The human eye is similar. It can see in bright light (on a sunlit day) and in very low light (stars on a moonless night). But if you combine dark and light – trying to see stars when you’re standing next to a streetlamp, for example – you’ll struggle. Like a camera, your eyes can only cope with a limited dynamic range: they will adjust to either a dark scene or a bright scene, but they can’t process both at once.
If you take a standard photograph of a scene with a high dynamic range, you’ll end up with some parts which are all black (the shadow) and some parts which are all white (the sunlight) – as shown in the photo below. This means you’re losing detail in those parts of the image because the camera can’t handle both extremes at once.
(I should point out here that, when shooting in raw format, it is possible to regain a lot of detail from the shadows without resorting to HDR. But HDR does allow you to go further, with less noise penalty, than a single raw photograph does. I’ll explore this more in a future post.)
Lightroom’s histogram is a handy tool at this point. In a nutshell, the histogram shows how much of the photograph is dark (towards the left) and how much is bright (towards the right). Pure black appears at the far left, and pure white at the far right. If we look at the histogram for this image, we can see both pure black and pure white, meaning detail is being lost at both ends of the spectrum because the dynamic range is too great for the camera to handle.
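As an illustration of what the histogram is telling us, here’s a hypothetical Python sketch (using NumPy, with 8-bit pixel values from 0 to 255) that counts the clipped pixels at each end. The function name and the toy ‘image’ are made up for the example:

```python
import numpy as np

def clipping_report(pixels):
    """Count pure-black (0) and pure-white (255) pixels in an 8-bit image."""
    pixels = np.asarray(pixels)
    return {
        "clipped_shadows": int(np.sum(pixels == 0)),
        "clipped_highlights": int(np.sum(pixels == 255)),
    }

# A toy 'image' with both extremes present, like the castle shot:
image = [[0, 40, 128], [200, 255, 255]]
print(clipping_report(image))  # both counts are non-zero, so detail is lost at both ends
```

Non-zero counts at both ends correspond to the spikes at the far left and far right of the histogram.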
One solution with this scene would be to retake the photograph when the castle is not in shadow. But if that’s not possible, there is an alternative.
The solution – HDR photography
HDR photography – or High Dynamic Range photography – is a technique to get around the physical limitations of your camera sensor. It involves taking multiple shots at different exposures to cater for the different parts of the scene. Then you combine the photos in post-processing into a single image. Essentially you’re taking two (or more) photographs at the ‘standard dynamic range’ of your camera and combining them into one ‘high dynamic range’ image.
If you’re struggling to get your head around this, read on for a practical example of HDR photography.
2. Kit, time and location
The beauty of HDR photography is that (other than software on your computer) you don’t need any special kit and most cameras are capable of it. The basic requirements are:
- A camera that lets you manually adjust exposure (and ideally lets you shoot in raw format)
- Software (such as Lightroom) to allow you to combine the photos you take into a single HDR image
- A steady hand or a tripod, to make sure all the photos you take are aligned
- A static scene. If you have things running around, when you merge the photos you’ll most likely end up with a messy result!
Shooting in raw format is always a good idea: it stores much more information than shooting JPEG, and it doesn’t lose detail through compression. As a result, it’s possible to ‘recover’ more from the shadows of a raw image than from a JPEG (and there are other benefits too).
The time and location of your shots are less critical: the whole point of HDR photography is that it lets you deal with scenes that would normally be difficult to shoot due to challenging light conditions. These photos are all of Dolbadarn Castle, taken on a recent trip to Snowdonia with a Canon 7D and 10-22mm lens.
3. Take the shots
The photo above shows what happens when your camera can’t deal with the dynamic range of a scene. So I delete it – I won’t need it again. Instead, I’m going to take two new photographs with different settings: one with a higher exposure to capture the detail in the shadows, and one with a lower exposure to capture the sky.
For the first photo, I increase the exposure by changing the shutter speed from 1/80 sec to 1/40 sec. This means the shutter is open for twice as long, so it lets in twice as much light, which means we can see more detail in the shadow (but the sky and brighter parts of the background are even more washed out).
If we look at the histogram for this photo, you can see that the graph is bunched up at the far right-hand side, but there is no longer any pure black at the far left. This means we’re not losing detail in the shadows.
For the second photo, I reduce the exposure by setting the shutter speed to 1/320 sec. This means the shutter is open for a quarter of the time (in comparison to the original at 1/80 sec) and so lets in a quarter of the light. This time the sky is a much richer, more satisfying blue (but you can’t see a thing in the shadows – the castle is just a silhouette).
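Both exposure changes are simple powers of two, which a couple of lines of Python can confirm (shutter speeds expressed in seconds; this is just the arithmetic, not camera code):

```python
import math

def ev_difference(base_shutter, new_shutter):
    """Stops gained (+) or lost (-) when moving from base_shutter to new_shutter."""
    return math.log2(new_shutter / base_shutter)

print(ev_difference(1/80, 1/40))   # +1.0 stop: twice the light, opening up the shadows
print(ev_difference(1/80, 1/320))  # -2.0 stops: a quarter of the light, darkening the sky
```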
If we look at the histogram this time, we can see the opposite of the previous one: the graph is bunched up at the far left-hand side, but there is no longer any pure white. This means we’re not losing detail in the brighter parts of the image.
By combining the two photographs, we now have enough information to produce a single image showing detail across the whole scene.
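Lightroom’s merge algorithm is proprietary, but the core idea – weight each pixel by how well-exposed it is, so each frame contributes where it holds detail – can be sketched in a few lines of Python with NumPy. This is a naive illustration on made-up pixel values, not what Lightroom actually does:

```python
import numpy as np

def naive_hdr_merge(under, over):
    """Blend two exposures of the same scene (pixel values on a 0-255 scale).

    Each pixel is weighted by its distance from the clipping points,
    so well-exposed pixels dominate the blend.
    """
    under = np.asarray(under, dtype=float)
    over = np.asarray(over, dtype=float)
    # Weight peaks at mid-grey (127.5) and falls to zero at 0 and 255;
    # clip to a small floor so fully clipped pixels still get a value.
    w_under = np.clip(1.0 - np.abs(under - 127.5) / 127.5, 0.05, None)
    w_over = np.clip(1.0 - np.abs(over - 127.5) / 127.5, 0.05, None)
    return (under * w_under + over * w_over) / (w_under + w_over)

# The darker frame holds the sky, the brighter frame the shadows:
dark = np.array([[0.0, 100.0], [10.0, 255.0]])
bright = np.array([[90.0, 200.0], [140.0, 255.0]])
merged = naive_hdr_merge(dark, bright)
print(merged.round(1))
```

Real merge tools also align the frames and work in a higher bit depth, but the weighting principle is the same.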
4. Merge the photos

I use Lightroom Classic CC on my Mac to process all my photos, but there are a number of free alternatives out there – just search the web. Some of them work in different ways, but the principle is the same.
In Lightroom, I import the two photographs and select them both. Then I go to the Photo menu, select Photo Merge and HDR.
After a few seconds, the HDR Photo Merge window appears showing a preview of my new image and a few options.
The options are:
- Auto Align – I always choose this, as it means that if the alignment of my images doesn’t match exactly (for example if I took them by hand instead of using a tripod) Lightroom will match them up itself
- Auto Settings – this just applies the ‘auto’ develop module function once the photos are merged, so if you prefer to do manual adjustments (as you should!) leave it unticked
- Deghosting – this deals with moving objects in the photo. If there was a tree swaying in the breeze, you might want to try out the different options, but for this photo ‘None’ is fine
When you’re ready, click Merge. After a few seconds of processing, the final image appears:
I’ve applied a few additional settings as shown here to further balance the image:
If we look at the histogram for this combined image, we can see that the graph is much more balanced and there is no lost detail on the far left or right.
Overall, it’s significantly better than any of the single shot images, combining a rich blue sky with detail in the castle. By taking two separate photos each aiming at a different portion of the scene, we’ve produced a single high dynamic range image that would have been difficult to achieve in a single photo.
Why don’t you have a go yourself and let me know below how you get on? You can even try merging three or more photos – Lightroom will handle it!
5. Alternative methods
In this tutorial I’ve described the simplest method for producing HDR photographs. But there are a few alternative approaches.
Auto exposure bracketing

Many cameras (including mine) now have a ‘bracketing’ option, where you can tell the camera to automatically take three (or five, or even seven) sequential photos at different exposures when you press the shutter button. The advantage is that it’s quick and simple to use, and gives you a set of shots across a wide exposure range. The main disadvantage is that every scene is different, and you may get better results if you tailor your settings using the built-in histogram – I’ll cover this in a future post.
It’s definitely worth trying, especially if you’re new to HDR, as it’s a much faster process (and so you’ll probably use it more often!).
In-camera HDR

Many cameras can now do all the hard work for you with an ‘HDR’ setting. (Even the iPhone has this!) This tells the camera to automatically take several shots at different exposures, combine them in-camera into one file, and apply automatic adjustments to produce a final HDR image – all you have to do is press the button!
The downside is that it produces a JPEG image rather than a raw image, which significantly restricts the changes you can make in post-processing. As a result, I don’t use this function very often.
Graduated neutral density filters
This isn’t strictly HDR photography, but it’s the traditional low-tech way of dealing with high dynamic range scenes. Graduated neutral density filters (often known as grad filters) rely on the fact that in landscape photography, it’s normally the sky that’s very bright and the ground that’s dark – and they are divided by the horizon, which is straight and horizontal.
A grad filter is a single ND filter with two filter strengths. The top half is a high strength filter to cover the sky. The bottom half is a lower strength (or no filter at all) to cover the ground. By reducing the amount of light that gets through from the sky, it transforms a high dynamic range scene into a normal dynamic range scene that your camera can handle. There’s a great explanation of grad filters on Wikipedia.
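Grad filter strengths are usually quoted either in stops or in optical density (on the standard log10 scale, a density of 0.3 is roughly one stop). The conversion is a quick Python one-liner – a worked sketch of the standard formula, not anything manufacturer-specific:

```python
import math

def density_to_stops(density):
    """Optical density D blocks a factor of 10**D of the light; one stop is a factor of 2."""
    return density * math.log2(10)  # log2(10**D) = D * log2(10)

print(round(density_to_stops(0.3), 2))  # roughly 1 stop
print(round(density_to_stops(0.9), 2))  # roughly 3 stops
```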
I hope you found this tutorial useful. I’d love to hear what you think. So if you have any questions, comments or tips, please share them in the comments below.