Michael A. Covington, Ph.D.

Daily Notebook

Popular topics on this page:
Derotating Jupiter with WinJUPOS
OOP: Inheritance vs. Interface vs. Wrapper
OOP: Constructor vs. Factory Method
Periodic-error-correction training
Moon (Mare Crisium)
Mars showing crater Newton
NGC 7331 and Stephan's Quintet
Many more...

This web site is protected by copyright law. Reusing pictures or text requires permission from the author.
For more topics, scroll down, press Ctrl-F to search the page, or check previous months.
For the latest edition of this page at any time, create a link to "www.covingtoninnovations.com/michael/blog"


Mars, showing a crater (or maybe quite a few)



A veritable army of amateur astronomers continues taking some of the best earth-based pictures of Mars ever made. And the maps available to us have not kept up. There are detailed topographic maps of mountains, canyons, and craters, but not very good albedo (brightness) maps, and albedo is what we see from earth. The light and dark areas actually shift a little from year to year, since they are formed by Martian sand blowing around in the wind, influenced but not tightly controlled by the topography. Most maps show a light area between Mare Sirenum and Mare Cimmerium; this year's photographs do not.

The dark spots in the best pictures correspond roughly but not exactly to craters. In particular, the huge crater Newton, at the end of Mare Sirenum, is a reliable dark spot — it stays full of darker-than-usual material. You can see it in my labeled picture. Of course, earth-based observers cannot tell that these spots are craters; they are just spots. See also what I wrote two years ago.

This was taken from my driveway with an 8-inch Celestron EdgeHD telescope, 3× focal extender, and ASI120MC-S camera. I recorded 16,697 frames of video (which took 5 minutes), then used software to select the best 20% and stack and sharpen them.

The doubling effect around the edge is not, as I used to think, due to overshoot in a sharpening algorithm. It is diffraction in the telescope — just like the rings around a star seen at high power — and I have done only a small amount of processing to weaken it, since it is optically genuine and I don't want to discard detail.


NGC 7331 and Stephan's Quintet through murk


On the evening of October 15, our skies were not particularly clear, but I needed to test how well my AVX was tracking the stars (see below), so I took this picture of the galaxy NGC 7331 and, below it in the picture, the distant galaxy group Stephan's Quintet.

Not only were the atmospheric conditions poor, I was using an unusually small instrument. You are looking at the enlarged central portion of a picture taken with a Sharpstar Askar 200-mm f/4 telephoto lens (2 inches aperture) and a Nikon D5500 camera.

Both the lens and my Celestron AVX mount performed well. I took 23 two-minute exposures and only had to discard three of them because of poor tracking.

Some notes on periodic-error training

To get a telescope to track the stars perfectly even though it has imperfect gears, we use (ideally) a guidescope and autoguider, with a camera that constantly watches a star and tells the mount how to move to re-center it; or (for convenience) periodic-error correction. That is, the microprocessor in the telescope mount memorizes the irregularities in the gears and plays back appropriate corrections every time the slowest gear goes around.
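The playback idea can be sketched in a few lines. This is an illustrative Python sketch, not the mount's firmware; the 88 index positions match the scale of the graph below, and the period value is approximate.

```python
# Sketch of periodic-error-correction playback (illustrative, not firmware).
# The mount memorizes one correction per index position of the worm gear;
# on every worm rotation it plays the same corrections back at the same
# gear positions.

WORM_PERIOD_S = 598.4   # one worm rotation, about 10 sidereal minutes
NUM_BINS = 88           # index positions per rotation (as on the graph)

def pec_correction(t, table):
    """Return the memorized correction (arc-seconds) for time t in seconds."""
    phase = (t % WORM_PERIOD_S) / WORM_PERIOD_S        # 0..1 around the gear
    return table[int(phase * len(table)) % len(table)]  # same bin every cycle
```

Because the phase wraps around, the same table entry is replayed at the same gear position on every rotation, which is exactly why only *periodic* error can be corrected this way.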

The memorized corrections have to come from somewhere, of course. That's why I use an autoguider to train the periodic-error correction (PEC) even though I usually don't use it when taking pictures.

The other night, I re-trained the PEC of my Celestron AVX. Using Celestron's PECtool software, I was able to train it three times, download the results from each, average them, and upload the result to the mount. PECtool produces CSV files that I can analyze and plot in R. Here's a handy graph:


The vertical axis reads in arc-seconds. The horizontal axis uses an arbitrary scale where 0 to 88 spans one rotation of the main gear, taking 10 minutes of sidereal time (about 2 seconds short of 10 ordinary minutes).
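That "2 seconds short" figure is easy to check: a sidereal day is about 86164.1 ordinary seconds, so ten sidereal minutes come up a little short of ten clock minutes.

```python
# Ten minutes of sidereal time expressed in ordinary (solar) seconds.
SIDEREAL_DAY_S = 86164.1   # length of a sidereal day in SI seconds
SOLAR_DAY_S = 86400.0

ten_sidereal_minutes = 600.0 * (SIDEREAL_DAY_S / SOLAR_DAY_S)
shortfall = 600.0 - ten_sidereal_minutes
print(round(ten_sidereal_minutes, 2), round(shortfall, 2))  # 598.36, 1.64
```

So the gear period is about 1.6 seconds short of ten clock minutes, consistent with the rounder figure above.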

If the black curve doesn't look like the mean of the other three, it's because the drift was removed from all of the curves before averaging them. That is, any overall tilt was removed so that each curve would start and end at height 0.
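The drift removal described above amounts to subtracting the straight line through each curve's endpoints before averaging. A sketch (illustrative; not PECTool's actual code):

```python
# Remove overall drift from each training curve so it starts and ends at
# height 0, then average the curves point by point.

def remove_drift(curve):
    """Subtract the straight line joining the first and last samples."""
    n = len(curve)
    start, end = curve[0], curve[-1]
    return [y - (start + (end - start) * i / (n - 1))
            for i, y in enumerate(curve)]

def average_curves(curves):
    """Detrend each curve, then average them sample by sample."""
    detrended = [remove_drift(c) for c in curves]
    return [sum(ys) / len(ys) for ys in zip(*detrended)]
```

Detrending first is what keeps a single run's non-reproducible drift from tilting the averaged correction curve.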

Here are some things to notice:

  • The total amount of correction needed is small, on the order of 10 arc-seconds from highest to lowest point. This is a Celestron AVX mount with a roller-bearing upgrade to the RA axis, and it performs well.
  • Many small jerks in tracking (up and down) are reproducible. This is why PE correction curves should be averaged but not smoothed. If a small, sharp irregularity is predictable, it should be corrected.
  • Non-reproducible drift (as in the green curve) is common and is on the order of 1 arc-second per minute of time. This is due to flexure in the mount or the apparatus mounted on it.

PECtool works only with Celestron mounts. But for mounts of all types there's something even better, PEMpro. I was using PECtool because I wanted to compare with earlier PECtool data, and with limited time available, I didn't want to risk misunderstanding something. But I highly recommend PEMpro for analysis and correction of periodic error.

There's another way. Some of the best mounts have high-resolution encoders that constantly measure the rate of rotation very accurately and correct irregularities on the fly. The less expensive ones are prone to SDE ("sub-divisional error"), which manifests as a slow oscillation at a low amplitude; better ones have overcome that problem. But even if periodic error is completely eliminated, in many astrophotographic situations autoguiding will still be necessary because of atmospheric refraction, polar alignment inaccuracy, and unavoidable flexure in the instruments.


Mars again


Last night I took this image of Mars using a technique that I think will be reproducible — that is, I've settled on a technique and no longer need to experiment so much. Using the C8 EdgeHD telescope (8-inch f/10) with a 3× focal extender and an ASI120MC-S camera, I recorded 5 minutes of video, comprising about 12,000 frames, and aligned and stacked the best 50% of the frames. Theoretically it would be better to stack the best 20%, but I find that by using 50%, I get a smooth, grainless image, and no sharpness is lost.

I'm now in a position to get good images of Mars but am not actually studying Mars — not tracking the clouds (such as there are) or the effect of wind on the gradually changing surface features.

One thing I did do is view Mars at 400 power, and it was surprising how much I could see, although I didn't see as much detail as is visible in the picture. In my earlier years I developed an aversion to using high magnification, and in fact my usual power with this telescope has been 110× for all types of objects, occasionally going to 200× for double stars and planets. I think 200× may have actually been the worst of both worlds, too high to be sharp but too low to be really big. I viewed double stars at 400× and adjusted the collimation before looking at Mars.

One factor might be better optics. Thirty years ago, neither telescopes nor eyepieces were as good; I formed my opinion using conventional Schmidt-Cassegrains and eyepieces where the high-power ones had very short eye relief. My new eyepiece is an Astro-Tech Paradigm, 5 mm, with about 15 mm of eye relief, very comfortable to look through.

Other upgrades are in progress. The AVX mount has recently acquired an iOptron iPolar finder, which makes polar alignment very quick if I bring along a computer. (If I don't, I can still align it the old way.) The C8 EdgeHD is going to get a bigger finderscope as soon as the bracket to hold it arrives.


More Mars

A small army of amateur astronomers now have the ability to take better pictures of Mars than anyone on earth could do fifty years ago. In the following pictures, many of the spots correspond to craters. There are of course no "canals" — those were an optical illusion to which not all astronomers were susceptible.

The first of these is a derotated stack of 3 pictures each of which was a stack of several thousand video frames. The second one is from a large number of video frames stacked directly. (No derotation is needed for as much as a 5-minute exposure of Mars because the planet doesn't rotate that fast.) And the third one is of course a map generated with WinJUPOS.




Particularly in the first picture, you can see the "rind" at the left edge, a bright band with a dark band next to it. This is a diffraction effect, and I have chosen not to try very hard to get rid of it. To be precise, the bright band is real, though thinner than it looks; it is the sharp sunlit edge of the planet. The dark band is caused by diffraction; it corresponds to the gap between the center of a star image and the first diffraction ring. After the dark band comes another bright band, which is less noticeable and mostly makes the dark band look darker by contrast. An article by Martin Lewis in this month's Journal of the B.A.A. shows convincingly that this effect is indeed diffraction, not digital processing, and although digital sharpening brings it out, it is already present before sharpening is done. More about that here.

Two object-oriented-programming topics

I've spent much of the past two weeks reorganizing a large C# program that was originally written as a series of experiments, which means its design was not consistent; I changed horses in the middle of several streams, and there was a fair bit of unused code, not to mention code that was badly organized.

That, plus quickly reading The Pragmatic Programmer, got me thinking about a couple of points about object-oriented design.

Inheritance vs. Interface vs. Wrapper

If you are thinking of defining a class that inherits another class, be cautious. Inheritance is no longer quite the glamorous new thing that it used to be. Already, the designers of C# backed off from it by eschewing multiple inheritance. The Pragmatic Programmer warns against the "inheritance tax."

The disadvantage of inheritance is that if A inherits B, then A contains everything that is in B, including things added to B by other programmers later on, and any change to B is a change to A.

If A and B are merely supposed to be similar or analogous, and their internal workings are different, consider defining an interface instead, and having both A and B implement it. Consider this especially if you find that in A, you are writing replacements (overrides) for things inherited from B.

Another alternative is to have A contain an object of class B; that is, A is a wrapper around B. That is a good way to create a B with a little more information added.

The situation in which I most often use inheritance is to create a class that slightly extends a built-in class. For example, I might have a type that inherits List and adds a field saying where the list came from, or whether it has been validated. This could also be a wrapper, but direct inheritance can make the derived class easier to use.
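The three options can be put side by side in outline. The program under discussion is C#, but the distinctions are the same in Python, which I'll use for a sketch; all the class names here are made up for illustration.

```python
from abc import ABC, abstractmethod

# Option 1: interface. A and B are merely analogous, so they share a
# contract, not an implementation.
class Shape(ABC):
    @abstractmethod
    def area(self): ...

class Square(Shape):
    def __init__(self, side): self.side = side
    def area(self): return self.side ** 2

class Circle(Shape):
    def __init__(self, r): self.r = r
    def area(self): return 3.14159 * self.r ** 2

# Option 2: wrapper. A TaggedList is "a list with a little more
# information added" -- it contains a list rather than inheriting it.
class TaggedList:
    def __init__(self, items, source):
        self.items = list(items)   # contained object
        self.source = source       # where the list came from

# Option 3: inheritance. Slightly extend a built-in class, the situation
# where inheritance earns its keep.
class SourcedList(list):
    def __init__(self, items, source):
        super().__init__(items)
        self.source = source       # extra field added to a real list
```

With Option 3, a `SourcedList` can be passed anywhere a list is expected, which is the convenience the wrapper version gives up.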

Constructor vs. Factory Method

On the question of constructors versus factory methods (static methods that create an object), here are the considerations:

Parameterless constructor

  • Is always there, even if you don't define it
  • Can be marked private to prevent calling it from elsewhere
  • Creates the object in its empty or initialized form

Constructor with parameters

  • Recommended when the parameters are data that goes into the object
  • Must create the object — cannot fail or bail out
  • Cannot perform any computation before calling the base (parameterless) constructor

Factory method

  • Recommended when parameters are extraneous to the object, such as a file name to read the object from
  • Can return null if the object is not successfully created
  • Can return a subclass of the class (e.g., create an object of a different subclass depending on what is discovered while creating it)
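A minimal sketch of the contrast (in Python rather than C#; `Catalog` and `from_file` are invented names for illustration):

```python
class Catalog:
    def __init__(self, entries):
        # Constructor: the parameters are data that go into the object.
        # It must produce an object; it cannot decline.
        self.entries = list(entries)

    @classmethod
    def from_file(cls, filename):
        # Factory method: the parameter is extraneous to the object
        # (a file name to read it from), and creation can fail gracefully.
        try:
            with open(filename) as f:
                return cls(line.strip() for line in f)
        except OSError:
            return None   # unlike a constructor, a factory can bail out
```

A caller can test the factory's result for `None`, whereas a failing constructor would have to throw an exception.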

I had had the dim idea (from 1990s object-oriented programming) that factory methods were considered less desirable. No; in many situations they are just what is needed.

Bear in mind that although I do a titanic amount of computer programming, the main thrust of my work is not finding the most elegant way to express simple algorithms. It is finding ways to do unusual things. Elegance often suffers.



Jupiter, derotated


As you know, I photograph planets by recording video through my telescope and then aligning and stacking the video frames. Stacking many frames removes the random noise from the digital sensor. It also turns random atmospheric blurring into a Gaussian blur, which can be undone by computation. (The sum of many independent random variables is approximately Gaussian; that central-limit effect is where bell curves come from.) In the process I can discard frames that aren't very sharp; in fact, I usually keep only the sharpest 50%, or if there are plenty of frames, the sharpest 25%.
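The frame-selection step amounts to ranking frames by a sharpness score and keeping the top fraction. A sketch (illustrative; stacking software such as AutoStakkert uses its own quality metric):

```python
def keep_sharpest(frames, sharpness, fraction=0.5):
    """Keep the given fraction of frames, ranked by a sharpness score.

    frames:    any sequence of frames
    sharpness: function scoring one frame (higher = sharper)
    """
    ranked = sorted(frames, key=sharpness, reverse=True)
    n = max(1, int(len(ranked) * fraction))
    return ranked[:n]
```

Only the selected frames are then aligned, stacked, and sharpened.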

One thing that limits me is that Jupiter rotates so fast that I can't record more than about two minutes of video without smearing the fine detail.

At least, the theoretical limit is 1 to 2 minutes depending on how much blur is tolerable. In practice, we do better than that, because the process of stacking the frames makes some compensation for rotation. It's not perfect, but insofar as features shift about the same distance in the same direction, stacking can bring them together again. I've never heard anyone mention this, but it's why we often get surprisingly good results with videos that are theoretically too long. It only works in the middle of the face of the planet, of course.
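The one-to-two-minute limit is easy to estimate. Taking Jupiter's rotation period as roughly 9.9 hours and its apparent disk as roughly 45 arc-seconds near opposition (both approximate figures), a feature at the center of the disk moves:

```python
import math

# How far does a feature at the center of Jupiter's disk drift during a video?
ROTATION_PERIOD_H = 9.925      # Jupiter's rotation period, approximate
DISK_DIAMETER_ARCSEC = 45.0    # apparent diameter near opposition, approximate

def smear_arcsec(minutes):
    """Apparent drift (arc-seconds) of a disk-center feature in `minutes`."""
    angle_deg = 360.0 * (minutes / 60.0) / ROTATION_PERIOD_H
    # A central feature moves across the disk by roughly R * sin(angle).
    return (DISK_DIAMETER_ARCSEC / 2) * math.sin(math.radians(angle_deg))

print(round(smear_arcsec(2), 2))   # about 0.47 arc-seconds in 2 minutes
```

Half an arc-second in two minutes is already comparable to the resolving power of an 8-inch telescope, which is why longer videos start to smear fine detail.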

But there's a better way. The software package WinJUPOS actually calculates the rotation of the planet, separates the video frames, de-rotates each one by transforming the image, and puts them together again.

The following are my brief notes on how to do this. There are better tutorials elsewhere, but here I hope to put the whole process in context.

(1) You can de-rotate either a set of finished, stacked, sharpened still pictures, or a video file. In the first case, you simply take short (60-second) videos, one right after another ("Autorun" in FireCapture is good for this). In the second case, you simply make a longish video recording (5 to 10 minutes).

(2) If you are using FireCapture, it helps if you tell the software to use WinJUPOS file naming. Then WinJUPOS will know the exact time of the exposure from the file name.

(3) If you're de-rotating a series of still pictures, they must already be stacked and sharpened (AutoStakkert and RegiStax or equivalent). If you're going to de-rotate a video file, you must first make a preliminary stacked and sharpened still picture from it for measurement purposes, even though it won't be as sharp as the picture you're going to make.

(4) Open WinJUPOS.

(5) Under Program, choose Jupiter. (Important!)

(6) Under Recording, choose Image Measurement. You will be using two tabs, "Imag." and "Adj."

(7) On the "Imag." tab, open your stacked and sharpened still picture. Make sure the date and time come in correctly, or type them in.

(8) Also enter your (the observer's) latitude and longitude to the nearest degree. WinJUPOS wants to know where you're looking at the planet from. Mars, in particular, looks a tiny bit different from widely separated places on earth at the same time.

(9) On the "Adj." tab you are going to tell WinJUPOS exactly where the planet is positioned, and how it's oriented, in your picture.

- Make sure "LD compensation" is checked, and enter an LD value of 0.65.

- Make sure "Draw outline frame" is also checked.

(10) On the outline that you see, "N" and "P" denote the north pole of the planet and the preceding side (westward in the sky).

- Press F11 to automatically position the outline. (Available only with Jupiter.)
- If you don't think the computer got it right, you can move the outline with the arrow keys, scale it with PgUp and PgDn, rotate it by pressing N or P, or make other changes described in the Help.

(11) Go back to the "Imag." tab and save the IMS file in the same folder as the image file.

(12) If you are derotating still pictures, make an IMS file for each of them. For a video file, you need, of course, only one IMS file, made from the preliminary stacked, sharpened picture that you created from the video.

(13) Close the Image Measurement window.

(14) If you're de-rotating a set of still pictures, open Tools, De-rotation of Images, choose Edit, and load the IMS files. Each IMS file, in turn, refers to the picture file (which is in the same folder). Make other settings and click Compile Image.

(15) If you're de-rotating a video file (.SER file), open Tools, De-rotation of video streams, and follow the instructions. Choose the original video file and the IMS file that pertains to it. Your output can be a stacked still image, a corrected video file, or both. I prefer the corrected video file. (Don't check Bayer mosaic; you want a normal color video file.) Click Compile Image and wait... and make sure you have a lot of disk space, because the output file may be 50 gigabytes! Then stack and sharpen it (AutoStakkert and RegiStax) and there's your finished product!

How much did it help? It enabled me to take the best 25% from over 18,000 video frames and get the smooth, sharp image that you see at the top. Without this kind of high-tech help, I was already getting quite good images. But I don't mind getting results that are even better! Taking a long video and derotating was equivalent to doubling the size of my telescope.


Busy times and a plethora of astrophotography

I'm back... All of a sudden, instead of incessant cloudy weather, we've had a succession of clear nights, and I did astrophotography on four consecutive nights (all of it lunar and planetary work, from home). Scroll down to see the fruit of my labors.

And tomorrow (Oct. 5), Melody will get a birthday present — cataract surgery on her other eye. She will then have better vision than she's had for years if not decades. We joke that each of us got a new lens for our birthday — mine for my camera — hers for her eye.

For future readers who want to know what life was like in the plague year, I note that the coronavirus epidemic is continuing, and much of the country is having a third wave of it, but in Georgia the infection rate has fallen off nicely. We still limit trips out of the house and wear masks when out in public. Now that most people know someone who has been severely ill or died of coronavirus, we're not encountering so many rabid anti-maskers.

For no good reason, anti-mask sentiment and disregard of the hazard has become associated with conservative politics, and Republicans proudly doff their masks at party gatherings. And now President Trump is in the hospital with coronavirus. Proof that choosing not to believe in the hazard does not protect you from it.

More generally, you can't make problems go away by choosing not to believe in them.

Beyond Mare Crisium

As you know, I've made a minor hobby of photographing the region of Mare Orientale on the moon, which is only visible when the non-circularity of the moon's orbit tips the region slightly toward us; it's on the edge of what we can see.

Right now the moon is tipped the other way, and here's what's on the other edge:


The large round dark area is of course Mare Crisium, the "eye" of the face in the moon, easy to see with the unaided eye; it's conspicuous any time the moon is in the evening sky. Beyond it is the appropriately named Mare Marginis ("sea of the edge").

8-inch telescope, ASI120MC-S camera. Like all the lunar and planetary images here, this is a stack of thousands of video frames, sharpened by wavelet processing.


Mars is now unusually close to the earth and looks like a bright reddish star high in the sky late at night. (It doesn't look like the moon; it never looks like the moon; a press release in 2003 said Mars would look as big in a 75-power telescope as the moon without a telescope, and someone copied it leaving out the telescope!)

Because the rotation period of Mars is about 24 hours, we see almost exactly the same side of it if we observe it at the same time of night on consecutive days. So here are a couple of pictures and a map. The map is generated with WinJUPOS software and a file I made containing a labeled map instead of the usual unlabeled one (more about that here).
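"About 24 hours" can be made precise. Mars rotates in roughly 24.6 hours (approximate figure), so after 24 clock hours it has fallen just short of a full turn:

```python
# Observing Mars at the same clock time on consecutive nights:
# how much has the visible face shifted?
MARS_ROTATION_H = 24.623   # Mars's rotation period in hours, approximate

turns = 24.0 / MARS_ROTATION_H        # fraction of a rotation in 24 hours
shift_deg = 360.0 * (1.0 - turns)     # shortfall relative to a full turn
print(round(shift_deg, 1))            # about 9.1 degrees per night
```

So the visible face drifts only about 9 degrees per night, which is why consecutive-night pictures show almost the same features.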




8-inch telescope (Celestron C8 EdgeHD), 3× focal extender, ASI120MC-S camera. Stack of thousands of video frames, wavelet-sharpened.

The exciting thing about Mars is that it's the only planet on which we can easily see surface detail. Venus, Jupiter, Saturn, Uranus, and Neptune are shrouded in thick atmospheres. Mercury is small and hard to see from earth; the dwarf planets are even harder to see.

The surface features of Mars do change from year to year as dust and sand are blown around by the wind. Mismatch between the map and the photographs does not mean the map was wrong when it was made.

In the past, writing about the history of amateur astronomy, I noted that from about 1900 to the beginning of the space program, lunar and planetary work was in amateur hands; professionals were focused on spectroscopy and astrophysics. In Planets and Perception, William Sheehan points out one reason why this was so. For decades, in the popular and even professional mind, planetary astronomy was associated with Percival Lowell's speculations about canals and Martians. It was almost like a UFO cult.

Suddenly the space program turned planetary exploration back into serious science. In later years we've seen a merging of planetary science with geology — which makes sense.


Jupiter is now high in the evening sky right after sunset. You can tell that the sunlight isn't hitting it straight on; the right edge in the picture is the sunlit side. Jupiter's satellites often cross in front of the planet or cast their shadows on it. The Great Red Spot is still there, and still red, but much smaller than when I first saw it half a century ago.





These pictures were also taken with my 8-inch telescope, using stacked video frames. Here you see some variation in the software settings that I used when processing them. Just like a film photographer making a print of a negative, a digital astrophotographer has to make decisions about the contrast and color balance of the finished picture. The steadiness of the earth's atmosphere, through which I am taking pictures, also varies from session to session.


Saturn is up there right next to Jupiter; the two planets happen to be almost the same direction from earth right now. (That means that in a few months, we won't be able to see either of them, which is a pity; they are similar, and the same astronomers tend to be interested in both of them.)

The hard part about photographing Saturn is that it's about twice as far from the sun as Jupiter is, and so only a quarter as brightly lit. I have to take longer exposures, capturing fewer video frames. But it rotates at the same speed as Jupiter, so with either planet, I can't expose more than about two minutes without blurring the details on the surface (of the upper atmosphere) — when there are any details; Saturn is striped, not spotted.
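The "quarter as brightly lit" follows from the inverse-square law. Using approximate mean distances of 5.2 AU for Jupiter and 9.5 AU for Saturn, the exact ratio comes out a bit over 3, which the "twice as far" round numbers turn into 4:

```python
# Sunlight received per unit area falls off as 1/r^2.
JUPITER_AU = 5.2   # approximate mean distance from the sun
SATURN_AU = 9.5

ratio = (SATURN_AU / JUPITER_AU) ** 2
print(round(ratio, 1))   # Saturn gets about 1/3.3 of Jupiter's illumination
```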



Same technique: 8-inch telescope, focal extender, and video camera.

If what you are looking for is not here, please look at previous months.