Astrograph: The Genesis

I have been imaging with a 280mm (11″) SCT for over a decade now. I use it with a 0.75x focal reducer, which brings the focal ratio down to f/7.5 and the focal length down to 2,100mm. My camera has a KAF-8300 image sensor with 5.4 micrometre pixels. This gives me a pixel scale of 0.53 arcsec per pixel. Given that the theoretical diffraction limit for the scope’s aperture is 0.49 arcsec, you would think this would make me happy as a clam.
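These two numbers can be checked with a quick calculation — a minimal sketch, assuming 550 nm (green) light for the diffraction limit; the constant 206,265 is just the number of arcseconds in a radian:

```python
ARCSEC_PER_RAD = 206265.0  # arcseconds in one radian

aperture_mm = 280.0
focal_mm = 2800.0 * 0.75      # native f/10 SCT with a 0.75x reducer -> 2100 mm
pixel_um = 5.4                # KAF-8300 pixel pitch

# Pixel scale: angle subtended by one pixel at the focal plane
pixel_scale = (pixel_um / 1000.0) / focal_mm * ARCSEC_PER_RAD   # ~0.53 arcsec/px

# Rayleigh diffraction limit, assuming 550 nm light
rayleigh = 1.22 * 550e-9 / (aperture_mm / 1000.0) * ARCSEC_PER_RAD  # ~0.49 arcsec
```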

I make a very poor clam, even though I am a marine biologist.

Despite the apparent match between the pixel scale and the diffraction limit, all is not well. Reality, in the form of astronomical seeing and mount tracking, completely messes up this perfect theoretical approach. Now, I’ve tuned my mount and guiding setup so that I consistently get sub-arcsecond tracking, but I can’t do anything about my seeing except move to the Atacama Desert. My seeing here varies between 2 and 4 arcsec. (I blame it on being only 46 metres above the cold North Pacific.) Doing a little calculation with a mean seeing of 3 arcsec, I can estimate the size of a star on the sensor.
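That little calculation looks roughly like this — a sketch using my numbers, where the 3 arcsec mean seeing is the assumption:

```python
ARCSEC_PER_RAD = 206265.0
seeing_arcsec = 3.0     # assumed mean seeing at my site
focal_mm = 2100.0       # reduced focal length
pixel_um = 5.4          # KAF-8300 pixel pitch

# Linear size of the seeing-blurred star image at the focal plane
blob_um = seeing_arcsec / ARCSEC_PER_RAD * focal_mm * 1000.0   # ~31 microns
blob_px = blob_um / pixel_um                                   # ~5.7 pixels wide
```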

Quite a bit bigger than the theoretical pixel scale.

People talk a lot about sampling in image processing. The idea is very simple: take your star image blob size, or Full Width at Half Maximum (FWHM), and divide it by your pixel scale. Here’s what the SCT gives:
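In code the sampling figure is a one-liner, using the 3 arcsec mean seeing and the 0.53 arcsec/pixel scale worked out above:

```python
fwhm_arcsec = 3.0          # star FWHM, set by the seeing
pixel_scale = 0.53         # arcsec per pixel
sampling = fwhm_arcsec / pixel_scale   # ~5.7 pixels across the FWHM
```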

By now you have probably guessed this is not good. Here’s why: enter the Nyquist–Shannon sampling theorem, or Nyquist theorem for short. What Harry and Claude realized back in the 1920s was that if you know the shape of a signal function, about two samples per cycle of its highest frequency are enough to reconstruct it. (All digital signal processing today is based on this idea.) Well, I do know the shape of the blob, or Point Spread Function (PSF), as that is how the FWHM is derived in the first place. For a star image that works out to roughly two pixels across the FWHM, so I have 3 times as many pixels across the PSF as I need. And since the light in the blob is spread over an area proportional to the square of the sampling, the incoming photons per pixel are diluted by a factor of 9 over the ideal. This is serious (!) oversampling and is wasting my camera’s capabilities. It also significantly reduces the signal-to-noise ratio (SNR).
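The oversampling arithmetic, sketched out — the factor of 2 here is the Nyquist criterion of roughly two pixels across the FWHM, and rounding the ratio up to 3 gives the factor-of-9 dilution quoted above:

```python
fwhm_arcsec = 3.0
pixel_scale = 0.53                         # arcsec per pixel
nyquist_samples = 2.0                      # ~2 pixels across the FWHM is enough

sampling = fwhm_arcsec / pixel_scale       # ~5.7 actual pixels across the FWHM
oversample = sampling / nyquist_samples    # ~2.8, call it 3
dilution = oversample ** 2                 # ~8, call it 9: photons per pixel drop ~9x
```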

In addition to the oversampling problem, my field of view with this setup is only 29 arcminutes. Both pixel scale and field of view are functions of focal length, and 2,100mm is not exactly short. In essence, I have too fine a pixel scale and too small a field of view.
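The field of view follows from the sensor’s physical width. As a sketch — assuming the KAF-8300’s 3326 × 2504 array of 5.4 micrometre pixels, which is my assumption about the chip geometry rather than something stated above:

```python
ARCSEC_PER_RAD = 206265.0
focal_mm = 2100.0
sensor_px = 3326                        # KAF-8300 long-axis pixel count (assumed)
pixel_um = 5.4

sensor_mm = sensor_px * pixel_um / 1000.0                   # ~18 mm wide chip
fov_arcmin = sensor_mm / focal_mm * ARCSEC_PER_RAD / 60.0   # ~29 arcmin
```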

How did I get into this situation? I am a normal amateur astronomer! I bought a big, shiny SCT because it looked nice and everyone else was doing it. And the old wisdom said that about 250mm (10″) was an ideal aperture because it matched the average scintillation cell size in the atmosphere. Maybe on Mauna Kea, but not where I live! I didn’t do my homework because, to be fair, I really had no idea what I was doing. I just bought the scope and started to attach things to it and experiment. Most things I tried didn’t work very well, but I persisted until it was as good as it gets. I rebuilt that scope and mount within an inch of their lives. I learned a lot. And I did get some nice images, although anything bigger than my field required mosaics and a significant amount of work.

Like most of us I was always dreaming about the next scope. Then it occurred to me that I should turn my thinking on its head. Instead of trying to make the camera fit the telescope, why didn’t I make the telescope fit the environment and camera? (D’oh! Just call me Homer!)

So the “Ideal” Astrograph Project was born!
