
Imagine for a second a manufacturer selling a telescope with terrible coma. Or terrible astigmatism. Your stars are completely distorted, but you are told “don’t worry, it’s a feature of the telescope, you will get used to it and even come to love it”. Of course, nobody would put up with this.
But when it comes to the diffraction pattern caused by the telescope’s spider, all of a sudden it becomes acceptable, and we are happy with star fields where the diffraction is the most prominent feature in the image. This is the way it has always been, this is the way the Hubble images look, so this is how it has to be.
But does it?
The problem can be solved by using a design that does not have a central obstruction (like a refractor), by supporting the secondary mirror on a piece of glass (the meniscus of a Mak, the corrector plate of an SCT, etc…), or by curving/masking the spider’s vanes to smear the diffraction over the image.
This certainly works, but those solutions have drawbacks: refractors are limited in aperture, as are correctors; any glass at the front of the scope is a dew collector; curved spiders are not mechanically sturdy; and masks reduce aperture. Further, none of those solutions helps if you are trying to reprocess a Hubble image.
So there has to be an image processing solution. Here is a little brainstorming to solve this problem:
- Deconvolution using the impulse response of the scope as the PSF.
- Transform to the Radon domain to separate the signal from the diffraction, mask the diffraction there, then transform back to the space domain.
- Model the diffraction, then adaptively subtract the model from the image.
- Perform a full inversion: iterative forward modelling of the diffraction, with subtraction from the data until an objective function has been maximized.
Deconvolution would work only for very small stars with a very small diffraction pattern. Maybe the subject of another post.
Masking in the Radon domain works. This is the subject of this post. The other two methods would probably work even better, but are much more involved, so I started with the low-hanging fruit.
The code, fully functional and operating on FITS files, is available on GitHub.
The Radon transform is a well-established integral transform which maps the space (x, y) domain into a new domain, the Radon domain (tau, p).
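For readers unfamiliar with the (tau, p) notation, one common convention (the slant-stack form used in seismic processing; my notation, the post does not pin down its exact parametrization) integrates the data d(x, t) along lines of slope p and intercept tau:

```latex
u(\tau, p) = \int d\!\left(x,\; \tau + p\,x\right)\, dx
```

Each output sample u(tau, p) is thus the sum of the input along one straight line, which is why straight features concentrate in this domain.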
The Radon transform is so popular that it has many different names depending on the branch of science using it: Radon transform, Hough transform, sinogram, tau-p transform.
The Radon transform collapses (focuses) linear events into points.
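Here is a tiny demonstration of that focusing property (my own toy example, not the attached code), using skimage's `radon`:

```python
import numpy as np
from skimage.transform import radon

# A synthetic image containing a single horizontal line (a "spike").
img = np.zeros((64, 64))
img[32, :] = 1.0

theta = np.arange(180)                  # projection angles in degrees
sino = radon(img, theta=theta, circle=False)

# In the Radon domain the line's energy focuses into one bright point,
# sitting in the column for the line's own angle (90 degrees here).
peak_tau, peak_angle = np.unravel_index(sino.argmax(), sino.shape)
```

At every other angle the line's energy is smeared thinly across many bins; only at its own angle does it pile up into a single large value, far above the sinogram's mean level.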
After the Radon transform:
- The linear events (diffraction patterns) and the signal are disjoint (they map to different areas of the Radon plane).
- The linear events (diffraction patterns) are collapsed (concentrated in a small area of the Radon plane), so the diffraction can now be muted (masked) without damaging the data.
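The whole pipeline can be sketched in a few lines (again a synthetic toy of mine, not the attached code; the mute window of 85–95 degrees is hand-picked for this example):

```python
import numpy as np
from skimage.transform import radon, iradon

n = 64
y, x = np.mgrid[:n, :n]
star = np.exp(-((x - 40) ** 2 + (y - 40) ** 2) / 8.0)  # a Gaussian "star"
img = star.copy()
img[20, :] += 1.0                                      # a horizontal "spike"

theta = np.arange(180)
sino = radon(img, theta=theta, circle=False)

# Mute: the spike is concentrated near the 90-degree columns; zero them out.
sino[:, 85:96] = 0.0

# Inverse transform: the star survives, the spike is strongly attenuated.
rec = iradon(sino, theta=theta, output_size=n, circle=False)
```

Because the spike's energy lives almost entirely in the muted wedge while the star's energy is spread over all 180 angles, zeroing eleven columns removes the spike at the cost of only a few percent of the star's amplitude.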
The attached code is written in Python 2. I will futurize it to Python 3 at some point.
The astropy.io.fits module handles the FITS format (both mono and RGB) for input and output. scikit-image (skimage) provides the Radon and inverse Radon transforms. It is very slow!
There is quite a bit happening under the hood. The data is split into overlapping windows, each processed independently. This is a classical signal processing technique.
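The idea behind the overlapping windows looks like this in one dimension (my own illustration of the classical overlap-add scheme; the attached code does the equivalent on 2D tiles):

```python
import numpy as np

def process_in_windows(signal, func, win_len=64):
    """Taper overlapping windows, process each, and overlap-add the results."""
    hop = win_len // 2                                   # 50% overlap
    # Periodic Hann taper: copies shifted by win_len/2 sum to exactly 1,
    # so with an identity `func` the interior of the signal is untouched.
    taper = 0.5 - 0.5 * np.cos(2 * np.pi * np.arange(win_len) / win_len)
    out = np.zeros(len(signal))
    for start in range(0, len(signal) - win_len + 1, hop):
        chunk = signal[start:start + win_len] * taper
        out[start:start + win_len] += func(chunk)        # process, then add back
    return out
```

The taper makes the window seams invisible: each window can be Radon-filtered on its own, and the pieces blend smoothly when summed back together.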
Bottom line: it works, as shown in the example attached. The diffraction is attenuated or even completely removed. The underlying data is revealed.