Process your astro images like a James Webb Space Telescope scientist

Joe DePasquale, Lead Image Processor for the James Webb Space Telescope, explains how you can apply the processes he uses to create JWST’s stunning images to enhance your own astrophotos



The James Webb Space Telescope, with its huge sun shield deployed like the sail of a cosmic ship, is the latest of our great explorers.

Orbiting a point in space a million miles away (the second Lagrange point), its giant golden mirror array captures light that has been streaming across the Universe for billions of years.

How lucky are we to live in a time when we can work with this ancient light?

And not only Webb – even a decent backyard telescope is capable of producing incredible views of the cosmos.

An image of Saturn and some of its moons, captured by the James Webb Space Telescope’s NIRCam instrument on 25 June 2023. Credit: NASA, ESA, CSA, STScI, M. Tiscareno (SETI Institute), M. Hedman (University of Idaho), M. El Moutamid (Cornell University), M. Showalter (SETI Institute), L. Fletcher (University of Leicester), H. Hammel (AURA); image processing by J. DePasquale (STScI).

I'm a science visuals developer at the Space Telescope Science Institute in Baltimore, Maryland, and I work on processing the James Webb Space Telescope images.

Many of the techniques I use in processing Webb data are easily transferable to data obtained from backyard telescopes.

In this guide I'll show you some of the methods we adopt when processing Webb's images, and how you can use these for your own astrophotos.

Let’s journey through the process of working with calibrated data from any telescope, transforming it into colourful views of the cosmos.

For more info, read our guide to astrophotography and our guides to image processing and deep-sky image processing.

For inspiration, see the latest James Webb Space Telescope images.

James Webb Space Telescope's image of the Pillars of Creation. Credit: NASA, ESA, CSA, STScI; J. DePasquale, A. Koekemoer, A. Pagan (STScI).

Data stretching

What exactly do we mean by ‘stretching’, and why do we need to do it?

Stretching here refers to the values of the individual pixels that make up the image.

Modern charge-coupled devices (CCDs) are so sensitive to light that the images they produce have an enormous dynamic range, far beyond what can be seen with the eye.

In a 16-bit monochromatic image, each pixel can be one of 65,536 different shades of grey. This specific number comes directly from the binary nature of digital data.

A bit can be one of two values, 0 or 1. In a 16-bit image, each pixel holds a value defined by a sequence of 16 bits where 0000000000000000 is pure black and 1111111111111111 is pure white.

The total amount of possible values is found by taking 2 to the 16th power, or 65,536. 
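If you want to check that arithmetic yourself, a couple of lines of Python will do it:

```python
# The number of distinct pixel values for a given bit depth is 2 ** bits
for bits in (2, 8, 16):
    print(f"{bits}-bit image: {2 ** bits:,} possible values")
```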

For simplicity, let’s consider this 2-bit image of a parrot.

2-bit greyscale image of a parrot. Credit: Joe DePasquale

The total number of possible pixel values from black to white is 2 raised to the 2nd power, or 4. With only four possible values, we can actually name them!

Each pixel can be either black, dark grey, light grey, or white. This simple image also helps us define the histogram.

Imagine that all of the pixels fell to the floor and you wanted to organise them by their brightness.

Each pixel can go into one of the four bins that we just defined. When you’ve accounted for all of the pixels, you’ve created a histogram!
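As a minimal sketch of the idea, here is that four-bin histogram computed with Python and numpy (the pixel values are made up for illustration):

```python
import numpy as np

# A tiny 2-bit image: 0 = black, 1 = dark grey, 2 = light grey, 3 = white
image = np.array([[0, 1, 3],
                  [2, 2, 1],
                  [3, 0, 2]])

# Drop every pixel into one of the four bins: this is the histogram
counts, edges = np.histogram(image, bins=[0, 1, 2, 3, 4])
print(counts)  # [2 2 3 2]: two black, two dark grey, three light grey, two white
```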

If we want to make this image brighter, we need to change the pixel values using an image-processing application like Photoshop or the freely available GIMP.

A curves adjustment tool from one of these programs allows you to alter the values of the pixels across the entire histogram.

Three versions of a simple 2-bit greyscale image in which the pixel values have been changed using a curves adjustment

We can alter the pixels so that dark grey becomes light grey, and light grey becomes white.

This is a linear change, altering the white point of the image; the result is brighter, but we’ve lost information, because the original light-grey and white pixels are now indistinguishable.

Instead, we can apply a non-linear change to the pixel values, preserving the detail in the white pixels while raising the light-grey and dark-grey pixels to higher values.

In astronomical images we use non-linear transformations to allow us to see the faint details hiding in the darkness and preserve the bright details.
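To make that concrete, here is a minimal sketch of one such non-linear transformation, an arcsinh-style stretch, written with numpy (the function and its default value are illustrative, not a standard API):

```python
import numpy as np

def asinh_stretch(data, stretch=100.0):
    """Non-linear stretch: lifts faint pixels strongly while compressing,
    but not clipping, the brightest ones."""
    # Scale the data to the 0-1 range using its own black and white points
    norm = (data - data.min()) / (data.max() - data.min())
    return np.arcsinh(stretch * norm) / np.arcsinh(stretch)
```

Larger values of `stretch` lift the shadows more aggressively, brightening faint structure while the highlights stay short of clipping.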

Take a spiral galaxy. You see a bright core surrounded by much fainter spiral arms of light, dotted with bright regions where stars may be forming.

Hubble Space Telescope image of spiral galaxy UGC 2885. Credit: NASA, ESA, and B. Holwerda (University of Louisville); CC BY 4.0
A non-linear stretch of a spiral galaxy should strike a balance between the bright core, any foreground stars, the nuggets of gleaming star formation and the faint arms. Credit: NASA, ESA, and B. Holwerda (University of Louisville); CC BY 4.0

Composing an image like that requires systematically brightening the data to reveal those faint spiral arms while still showing the bright inner core.

This is what image stretching is all about, and why it is one of the most important parts of producing a quality astronomical image.  

There are many tools available for stretching astronomical data.

A great free option is FITS Liberator, a standalone tool that works on PC or Mac.

Its intuitive interface allows you to explore the pixel values in your image and stretch them as you see fit.

The ‘Asinh’ option under ‘Scaling (Dynamic Range)’ provides a great starting point for stretching.

Try loading your data, setting the Scaling to Asinh, then setting the ‘Scaled peak level’ to 100, ensuring that the black and white point sliders encompass the bulk of the data in the bottom histogram.

FITS Liberator working with JWST data for HH 46/47, the image at the top of this page.
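FITS Liberator is a point-and-click tool, but if you prefer to script the same kind of workflow, the sketch below shows roughly equivalent steps in Python with astropy; the filename, HDU index and percentile values are placeholders for your own data:

```python
import numpy as np
from astropy.io import fits

# 'my_data.fits' is a placeholder: substitute your own calibrated frame
with fits.open("my_data.fits") as hdul:
    data = hdul[0].data.astype(float)  # science data may live in another HDU

# Choose black and white points that bracket the bulk of the histogram,
# much like the sliders in FITS Liberator (percentiles are one approach)
black, white = np.percentile(data, [0.5, 99.9])
scaled = np.clip((data - black) / (white - black), 0, 1)

# Asinh scaling, roughly analogous to the 'Asinh' option described above
stretched = np.arcsinh(100 * scaled) / np.arcsinh(100)
```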

Data calibration

One of the benefits of working with data from professional observatories like Webb is that it is already fully calibrated, allowing you to jump right in at the point of composing a colour image.

If you’re working with your own telescope data, you’ll want to have those calibration steps taken care of before starting here.

However, even fully calibrated data will still contain artefacts of the imaging system like cosmic rays, reflections, uneven background levels and saturated data.

After you’ve stretched the data, you may notice these artefacts.

This is a good time to start addressing them; this cleanup work will likely continue through the following sections as well.   

An example of such an artefact in Webb data is the black core we see at the centre of bright stars, which is actually a result of the calibration process flagging saturated pixels.

I deal with this issue using an adapted script in PixInsight (available here: bit.ly/PixelClipJDP) which replaces those black pixels with surrounding white pixels.

A similar result can be achieved with the ‘Color Range’ selection tool in Photoshop, carefully set to select pixels below a certain threshold, which are then set to white.
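Purely to illustrate the idea behind both approaches, a rough Python/scipy sketch might look like the following; the function name, window size and thresholds are placeholders to tune for your own data:

```python
import numpy as np
from scipy import ndimage

def fill_black_cores(data, halo_frac=0.9, size=5):
    """Fill the black cores of saturated stars with nearby bright values.
    A pixel counts as a clipped core if it sits at the image minimum
    while its immediate neighbourhood is very bright."""
    local_max = ndimage.maximum_filter(data, size=size)
    core = (data <= data.min()) & (local_max >= halo_frac * data.max())
    fixed = data.copy()
    fixed[core] = local_max[core]  # replace with the surrounding bright value
    return fixed
```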

Working with colour

Demonstrating the breakdown of a digital image into separate red, green and blue colour channels

Colour in astronomical images comes from the use of filters when obtaining the data.

Filters separate the light we’re observing into very specific wavelength ranges.

Even the cameras in our smartphones use filters to produce colour images, but those filters are usually built into the pixels themselves, so we never see them.

Webb’s NIRCam instrument has 29 bandpass filters, providing astronomers with an array of options for capturing their data.

Similarly, cameras developed for ground-based astrophotography use filters to provide the necessary data for a colour image. 

A basic colour image would be composed from three wideband filters, combining light from a red filter with light from green and blue filters.

The data itself is greyscale, so colour is applied in an image-processing application, and we use a chromatic ordering to apply colours systematically.

The full suite of filters available for Webb's NIRCam, showing each filter’s ability to allow specific wavelength ranges of light through to the detector

In the case of R, G and B filters, this is very straightforward.

Chromatically ordering the data means that we apply colour according to wavelength, with the longest wavelength being assigned red, the medium wavelength green and the shortest wavelength blue.

We use this same approach with Webb data, even though the light captured by Webb is infrared.
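As a simple sketch, chromatically combining three stretched greyscale frames in Python might look like this; the arrays and filter names are stand-ins for your own aligned data:

```python
import numpy as np
import matplotlib.pyplot as plt

# Stand-ins for three stretched, aligned greyscale frames scaled 0-1;
# substitute your own arrays (R, G and B frames, or for Webb data
# something like F444W, F277W and F090W)
long_wave = mid_wave = short_wave = np.random.rand(512, 512)

# Chromatic ordering: longest wavelength -> red, middle -> green,
# shortest -> blue
rgb = np.dstack([long_wave, mid_wave, short_wave])
plt.imsave("colour_composite.png", rgb)
```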

I apply colour either in PixInsight with the use of the PixelMath tool or in Photoshop using adjustment layers.

You could also try using GIMP to apply colour to astronomical data.

As you’re combining data in colour, watch out for any calibration issues like cosmic rays.

They can be dealt with by finding the offending pixels and replacing them with the average value of nearby pixels, being careful not to add or remove anything that wasn’t already there in the data.   
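One illustrative way to do this in Python, using a simple outlier heuristic rather than any particular tool’s method:

```python
import numpy as np
from scipy import ndimage

def repair_hot_pixels(data, size=5, nsigma=5.0):
    """Replace isolated bright outliers (such as cosmic ray hits) with
    the median of their neighbours."""
    local_median = ndimage.median_filter(data, size=size)
    hits = (data - local_median) > nsigma * data.std()  # crude detection
    fixed = data.copy()
    fixed[hits] = local_median[hits]  # fill with the local median value
    return fixed
```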

How Webb's ‘Cosmic Cliffs’ splits into its component colour channels according to wavelength, as determined by the filter used.

Colour balancing and image composition

Congratulations on making it this far into the process!

By now, you’ve found the best stretch for your data and applied colour chromatically.

You could stop here, but you would be missing out on some very important steps in this process.

Colour balancing and tonal adjustments can turn your image into something truly spectacular.

Not all filters allow the same amount of light to pass through, so we need to balance the filter contributions in the final composite image to avoid colour casts.

At the very least, you’ll want to make sure that the blank sky background of your image is roughly a neutral dark grey.

If your image contains face-on spiral galaxies, they make an excellent white reference point.

This can be done in PixInsight using the ‘ColorCalibration’ tool or in Photoshop using curves adjustment layers.
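As an illustrative sketch of the background-neutralising step in Python, assuming you can point it at a star-free patch of sky (the patch coordinates below are placeholders):

```python
import numpy as np

def neutralise_background(rgb, patch=(slice(0, 50), slice(0, 50))):
    """Scale each colour channel so a blank-sky patch comes out as
    neutral grey; `patch` should cover a star-free region."""
    bg = np.array([np.median(rgb[patch + (c,)]) for c in range(3)])
    balanced = rgb * (bg.mean() / bg)  # equalise the channel backgrounds
    return np.clip(balanced, 0.0, 1.0)
```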

Balancing the colours in the JWST image of 30 Doradus, from initial colour composite to finished image

Once content with your colour balance, you can make some final tonal adjustments, again using curves, with an eye towards obtaining a good overall contrast across the image.

I prefer my blank sky background to be slightly above absolute black, and I make sure that the brightest parts of my image are pure white without being overly saturated. 
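A minimal sketch of such a final tonal pass, with illustrative values rather than definitive settings:

```python
import numpy as np

def final_tone(rgb, floor=0.02, gamma=0.9):
    """Gentle gamma for overall contrast, then lift the black point so
    the blank sky sits slightly above absolute black while the brightest
    pixels still reach pure white."""
    out = np.clip(rgb, 0.0, 1.0) ** gamma  # simple contrast curve
    return floor + (1.0 - floor) * out     # raise the black point
```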

Finally, consider the composition of your image.

Does it roughly follow the rule of thirds?

Are there leading lines that may guide the eye towards different features?

Consider what orientation works best for your target. Traditional orientation would have the ‘Cosmic Cliffs’ image tilted on its side (right)

If not, can you reorient through cropping and/or rotation to make a stronger image?

There is no up or down in space, and you’re not obliged to adhere to the scientific standard of north = up, east = left.

Of course, for well-known objects you may want to stick with the standard orientation for ease of recognition, but you be the judge of what will make your image visually strongest.

I sometimes draw inspiration from Old Master paintings or landscape photography.

There’s a reason these paintings and photos have withstood the test of time – they appeal to our visual instincts, and we would do well to learn their aesthetic lessons.

Learn from the Masters: compare and contrast, say, the tonality and composition of 30 Doradus with Caravaggio’s Judith Beheading Holofernes.

Conclusions

Our journey together ends here, but yours is just beginning!

I hope this has given you the guidance you need to take your own data to the next level.

Despite the drastic differences between JWST’s technology and that of a backyard telescope, the same principles of image processing apply.

As 21st-century explorers of the cosmos, our oceans are the seemingly infinite pixels beamed back to us by our instruments.

Navigating those pixels and finding the details buried in the darkness is our charge.

Go forth and discover, and as always, clear skies!

Are you an astrophotographer? Don't forget to send us your images!

This guide appeared in the November 2023 issue of BBC Sky at Night Magazine.
