Michael A. Covington, Ph.D.

Daily Notebook

Popular topics on this page:
The computer Dad and I built in 1966
The history of amateur astronomy
The invention of the transistor
The college admissions scandal
Significance level .05 leading us astray
Bayes' Theorem in plain English
Moon (gibbous)
Moon (Vallis Rheita)
Many more...

This web site is protected by copyright law. Reusing pictures or text requires permission from the author.
For more topics, scroll down, press Ctrl-F to search the page, or check previous months.
For the latest edition of this page at any time, create a link to "www.covingtoninnovations.com/michael/blog"

If your browser labels this site "Not Secure," don't worry. You are not typing personal information here, so security is not needed. Click here for more explanation.


Vallis Rheita

I didn't mean for the Daily Notebook to become a Weekly Notebook, but I've been busy. (Below are some things I've been busy with.) But I did take an astronomical picture on March 21. It's an area of the moon I don't often photograph, the region of Vallis Rheita and Reimarus, shot shortly after full moon.


Stack of a large number of video frames taken in infrared light with a Celestron 8 EdgeHD and ASI120MM-S camera.

More about significance level .05

A large number of research results in statistical sciences (including medicine and psychology) are wrong because researchers traditionally use a significance level of .05 as a cutoff. For more about this, see below.

The significance level measures the chance of getting a "positive" result purely by chance.

You might think that with .05 as the cutoff, 5% of research results would be wrong. Actually, according to this paper, the false discovery rate is much higher. Over 30% of statistical research (including medical research) may be wrong! What gets you is the familiar Bayesian fact that if the thing you're looking for is rare, a positive is more likely to be a false positive.
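The Bayesian arithmetic behind that claim is easy to work through. Here is a minimal sketch with assumed round numbers (the 10% prior, 80% power, and .05 cutoff are my illustrative choices, not figures from the paper), showing how a .05 cutoff can yield a false discovery rate well over 30%:

```python
# Hypothetical numbers for illustration: assume only 10% of tested
# hypotheses are actually true, tests have 80% power, and .05 is the cutoff.
prior_true = 0.10   # fraction of tested hypotheses that are actually true
power = 0.80        # chance a real effect yields p < .05
alpha = 0.05        # chance a null effect yields p < .05 anyway

true_positives = prior_true * power           # 0.08 of all studies
false_positives = (1 - prior_true) * alpha    # 0.045 of all studies

# Among all "significant" results, what fraction are false?
false_discovery_rate = false_positives / (false_positives + true_positives)
print(f"False discovery rate: {false_discovery_rate:.0%}")  # 36%
```

Note that the false discovery rate (36%) is seven times the 5% you might naively expect, purely because true effects are assumed to be rare among the hypotheses people test.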

And that brings me to another point. This is not a license to believe quacks or crackpots on the ground that "traditional" medical research has been discredited. Only the individual studies are discredited — not the overall understanding that arises from looking at the big picture and distrusting anomalous data. What you should do is be skeptical of any study that claims to make a sudden breakthrough going against what we understood from other evidence.

I think the use of .05 as a strict cutoff actually comes from the time before computers were widely available. If .05 is your cutoff, you don't have to compute p. You just look at a table of whatever statistic you are using (chi-squared, etc.) and check whether you're above or below the level that corresponds to .05. No thinking required. Maybe even no thinking allowed — that's the problem.

Bayes' Theorem in simple English

I've written about Bayes' Theorem, an important principle in probability theory that should have been common sense, but actually was either ignored or distrusted until recently.

Here's a really simple way to express Bayes' Theorem in plain English.

Suppose you see a red vehicle far away, can't tell what it is, and wonder if it is a fire truck.

It is more likely to be a fire truck if:

  • A larger fraction of the fire trucks are red;
  • Fire trucks are more common;
  • Red vehicles as a whole are less common.

Reversing any of these (making more fire trucks non-red, making fire trucks less common, or making red vehicles of all types more common) would lower the odds that what you're seeing is a fire truck.

(The third one may not be obvious. But remember that if you make red vehicles more common, without making fire trucks more common, you reduce the fraction of red vehicles that are fire trucks.)

Clear enough? People often make mistakes about this in common-sense reasoning. An example is having a medical test for a rare disease. If the disease is rare, a positive is more likely to be false, just as if fire trucks are rare, a red vehicle is more likely not to be a fire truck.
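The fire-truck reasoning can be put into numbers. This is a sketch with made-up probabilities (all three figures are my assumptions, chosen only to make the effect visible):

```python
# Hypothetical numbers for illustration.
p_firetruck = 0.001            # fire trucks are rare among all vehicles
p_red_given_firetruck = 0.9    # most fire trucks are red
p_red_overall = 0.05           # fraction of all vehicles that are red

# Bayes' Theorem:
# P(fire truck | red) = P(red | fire truck) * P(fire truck) / P(red)
p_firetruck_given_red = p_red_given_firetruck * p_firetruck / p_red_overall
print(f"P(fire truck | red) = {p_firetruck_given_red:.3f}")  # 0.018
```

Even though 90% of fire trucks are red, a red vehicle is almost certainly not a fire truck, because fire trucks are rare. Raising either of the first two numbers raises the answer; raising the third lowers it, exactly as in the list above.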


Down with the cult of significance level .05

See also more comments above.

Nature reports that a group of scientists are calling for abandonment of the significance level .05 as a test of scientific results. Click here for the paper. Key quote:

We should never conclude there is ‘no difference’ or ‘no association’ just because a P value is larger than a threshold such as 0.05 or, equivalently, because a confidence interval includes zero. Neither should we conclude that two studies conflict because one had a statistically significant result and the other did not. These errors waste research efforts and misinform policy decisions.

Their point? Many research results are wrong, in fields like medicine, psychology, the social sciences, and others that rely heavily on statistics, because of a pervasive mistake in statistical practice, which I call the cult of p<.05.

The p value of a statistical test is the chance that an equal or greater apparent effect could have been caused by chance. For example, if you measure the heights of ten boys and ten girls, and the boys average 2 inches shorter, does this mean boys really are shorter overall, or just the ones you picked? The p value, calculated from the sample size, mean, and standard deviation, tells you how likely it is that it's just the ones you picked.
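One way to see what a p value means is to estimate it by brute force. The following sketch runs a permutation test on the height example: if sex had no effect, the group labels would be arbitrary, so we shuffle them repeatedly and count how often chance alone produces a difference as large as the observed one. (The height data are made up for illustration; a real analysis would use a t-test.)

```python
import random
from statistics import mean

# Made-up sample data: heights (inches) of ten boys and ten girls.
boys  = [52, 54, 53, 55, 51, 56, 53, 52, 54, 53]
girls = [55, 56, 54, 57, 55, 58, 56, 55, 57, 56]
observed_diff = abs(mean(boys) - mean(girls))

# Permutation test: shuffle the group labels many times and see how
# often a difference at least as large arises by chance alone.
random.seed(1)
pooled = boys + girls
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = abs(mean(pooled[:10]) - mean(pooled[10:]))
    if diff >= observed_diff:
        extreme += 1

p = extreme / trials   # the p value, estimated by simulation
print(f"p = {p:.4f}")
```

The p value is simply the fraction of shuffles that beat the observed difference; nothing about the number .05 is built into it.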

The problem is, by the late 20th Century, people were using the p value in a mindless way, and some of them were quite dogmatic about it. Here is what happened.

Years ago, R. A. Fisher, founder of significance testing as we know it, advocated p=.05 as the threshold for considering a result to be noteworthy. He remarked:

A scientific fact should be regarded as experimentally established only if a properly designed experiment rarely fails to give this level of significance.

Note the words "rarely fails." He has in mind results that repeatedly and reproducibly pass the significance test in repeated experiments.

By about 1975, though, the dogma had become the following:

  • You must choose a significance level before doing the experiment. And, by the way, it must be .05.
  • The only purpose of the significance level is to divide results into "significant" and "not significant." That is, the experiment says "yes" if p<.05 and "no" otherwise.
  • You cannot even assert that a result with p=.001 is stronger than a result with p=.049. Both of them are "significant." A result of p=.0500001 is "not significant."

I have heard people hold forth dogmatically about the third point, and in my opinion it's just as foolish as it sounds. (In Fisher's opinion, too, if you actually read his writings.)

The paper in Nature cites two medical studies that tried to measure the risk ratio between two treatments. Their results were:

                 Estimated value    Confidence interval (based on p<.05)
  Small study    1.20               0.97–1.48
  Big study      1.20               1.09–1.33

Did these studies get opposite results or did they get the same result, with different levels of precision?

Common sense says they got the same result (1.2), but one of them showed it more confidently.

But the dogmatic .05 cultist would say the first study found no increased risk at all because "its confidence interval includes 1.0." That is, you don't quite get p<.05 comparing your results to 1.0. You might get p=.051, but that's above the threshold you had to choose in advance.

And that, sadly, is often standard methodology in the biosciences! Instead of saying "we're not quite sure we proved it" you have to say "we failed to prove it" and even "we disproved it."

That particular example would lead researchers to say that a treatment is known to be safe when it isn't. That's not good for us.

The problem here is the use of .05 as a sharp dividing line between "yes" and "no." All Fisher meant by it — all anyone can mean by it — is actually a dividing line between more confidence and less confidence.

What this leads to in practice is that you can get your paper published if you find p<.05, and you can't if you don't. This leads to "data dredging" and meaningless variations on experiments (or ex post facto decisions about the data).

The trouble is, you're going to get p<.05 erroneously, by mere chance, about 5% of the time! And you're going to fail to get p<.05 in plenty of perfectly good studies that didn't happen to have quite a large enough sample size to detect a small effect. You may have other ("sub-significant") evidence that the effect is likely to be real; you should be allowed to say so!

The purpose of statistical analysis is not to judge significance levels. It is to find out how things work.

Gibbous (super worm?) moon

The media are calling tonight's (March 20) full moon a "super worm moon" or "worm supermoon" or something like that. "Supermoon" is a term made up by a fortuneteller (really! not an astronomer) to denote a full moon that coincides with the moon's closest approach to earth. Now it's been broadened so that about a quarter of all full moons are close enough to be considered "super." Meanwhile, "worm moon" (like "wolf moon") is part of a traditional series of names for months that the media (Fox, I'm looking at you!) have decided to dust off and use. The goal is to say that every full moon is "rare" in some way.

Well, this is what it looks like one day before a full worm supermoon. I didn't see any worms.


What I did get out of this was some refinement of my technique. I took three pictures of the moon through my AT65EDQ (6.5-cm f/6.5) refractor using a Canon 60Da camera body. I deBayered and cropped them with PIPP, stacked with AutoStakkert, sharpened with PixInsight, and did the final processing with Photoshop. The color saturation has been enhanced to bring out the difference between different minerals on the lunar surface.

Why stack three? To reduce grain, which might otherwise be bothersome after digital sharpening. I might stack twenty next time, now that I have this technique working smoothly.

M67 and an informative session


Before taking the moon picture above, I did some testing of the newly modified AVX mount. This was the first clear night in three weeks, and in spite of the nearly full moon, I photographed a star cluster in order to test tracking.

This is a stack of fifteen 2-minute exposures made without guiding corrections using the AT65EDQ (6.5-cm f/6.5) refractor on my Celestron AVX mount using PEC. Fifteen of the twenty that I made were good enough to use. This tells me the mount does indeed track better than it used to.

However, it also tells me that when PEC works that well, something else pops up as the limiting factor — namely small errors in polar alignment. To be sure that there will be less than one arc-second of drift per minute of time — which is more or less the accuracy I need for this — you have to polar-align to within 4 arc-minutes. That is better than a polar scope or even Celestron's ASPA can do; you need to hook up a computer and do drift alignment or use a Polemaster. (I didn't.) The effect of polar alignment error depends on the direction (as well as amount) of the error and the part of the sky you're in, so some of the time you're lucky and things work much better than at other times.
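That 4-arc-minute figure can be checked with a back-of-envelope calculation. Under the standard worst-case approximation, the declination drift rate is the sidereal rate times the sine of the alignment error (actual drift depends on the direction of the error and where you point, so this is an upper bound, not a prediction):

```python
import math

# Worst-case declination drift from polar misalignment:
# drift rate ≈ sidereal rate × sin(alignment error).
sidereal_rate = 360 * 3600 / 86164.1   # arcsec per second of time ≈ 15.04

error_arcmin = 4
error_rad = math.radians(error_arcmin / 60)

drift_per_min = sidereal_rate * math.sin(error_rad) * 60  # arcsec per minute
print(f"{drift_per_min:.2f} arcsec/min")  # about 1.05
```

So a 4-arc-minute polar alignment error produces, at worst, just over one arc-second of drift per minute, which is why that is the tolerance quoted above.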

But I've learned my lesson, and as long as I have to bring along the computer and the guidescope, I'll simply autoguide.

The other thing I learned (not having tried it before, believe it or not) is that a Bahtinov mask works very well with live view focusing on a DSLR. It enabled me to detect and compensate for a slight shift caused by tightening the lock knob on the focuser.


A sketch of the history of amateur astronomy

After a long lapse, just like London buses, here come three or four Notebook entries at once. I'm getting caught up!


Last night I gave a talk to the Atlanta Astronomy Club at the Fernbank Science Center in a room that used to be their library, but has been converted into a multi-purpose room with just a few books. (Sad, but if they weren't going to keep up the library, it would have become almost useless in 50 years anyway. I first visited it in October 1968.)

The topic was the history of amateur astronomy, a subject I've never seen anyone survey. I've seen lots of details about particular astronomers and organizations, but not the big picture. In what follows I'm going to give you a very short summary of my talk.

I divide history into five periods. But there are no walls on the borders between periods, and some of the events typical of each period are just outside its borders.

(1) Victorian England (1837-1901). Amateur astronomy worldwide has deep roots in the England of Queen Victoria's era. Most present-day serious amateurs have read at least one Victorian astronomy book.

This is the period when both professional and amateur astronomy became established as distinct pursuits. Academic astronomy became professionalized, with the establishment of organizations, journals, and professorships; it was no longer a part-time specialty for mathematicians or the like. And two kinds of amateur astronomy emerged.

The first kind comprised "grand amateurs" as Chapman calls them, well-to-do people who built observatories at their own expense (sometimes even hiring observers) and became serious unsalaried scientists. The first prominent one was probably Admiral W. H. Smyth, whose deep-sky observing guide is still in print; in its time, it was serious original science. Another was Isaac Roberts, pioneer astrophotographer. And the last and greatest was Percival Lowell, American, founder of Lowell Observatory.

The second kind comprised people who used "common telescopes" (affordable small refractors) as encouraged by Rev. T. W. Webb's Celestial Objects for Common Telescopes, another book that is still in print and widely used.

Crucially, in his introduction, Webb encouraged people to appreciate the sky as a form of self-improvement, and to better appreciate God's creation, even if there was no possibility of contributing to science.

That was the manifesto for amateur astronomy: it is good to learn about the sky and enjoy looking at it, just for its own sake.

(2) The early 20th Century (1901-1957).

This is the period when amateur astronomy grew, formed organizations, spun off amateur telescope making as a sub-hobby, and (to a considerable extent) moved the center of innovative activity to the American Northeast.

The British Astronomical Association, for amateurs as distinct from professionals, dates from 1890 (the British are often ahead of their time). Other organizational developments include the AAVSO, the Springfield Telescope Makers and Stellafane, the Scientific American amateur science column and telescope-making books, Sky and Telescope, and the ALPO.

Meanwhile, professional astronomy became radically different from amateur astronomy. Professionals were discovering the nature of galaxies and the expansion of the universe, using the largest possible telescopes (Mt. Wilson and Palomar). Amateurs continued the Victorians' interest in lunar and planetary observing.

(3) The Space Race (1957-1969).

The time between Sputnik and Apollo 11 was a time of great enthusiasm for astronomy and for science in general; increased funding for astronomy education; proliferation of college observatories and planetariums; but uncertainty about the future. Here is the Fernbank Science Center, where I gave my talk, very much a product of that era:


Notice that Sky and Telescope was, in those days, a magazine for professional astronomers and educators as well as amateurs. It was rarely or never sold on newsstands. It mostly kept people together who were already connected. Outsiders found out about amateur astronomy through Scientific American and through the advertisements of Edmund Scientific Company, which appeared in general-interest magazines such as Popular Mechanics. Amateur astronomy broke upon me through Edmund's Catalog 645.

One good thing is that planetary science suddenly came back to life — professionals started taking an interest in the moon and planets (and soliciting data from amateurs, who had never given them up).

Amateur astronomy was thriving, but there was uncertainty. Was "space" the same thing as astronomy? (It's a long way from rockets to quasars, and it's always going to be a long way, and many people failed to appreciate the fact that we can never explore much of the universe by physically going there.) Were ground-based astronomers about to be made unnecessary by space probes? No one knew.

(4) The late 20th Century (1969-2000).

As the dust settled after Apollo 11 — in exactly the era when I was becoming an amateur astronomer — amateur astronomy was changing directions.

The most obvious fact was that professionals were divided from amateurs not so much by interests as by technology. They had big telescopes, autoguiders, special photographic plates, and even digital image sensors (since 1976). Amateurs were making do with equipment little changed from Victorian times, at least at first.

Amateur telescopes were more affordable. Amateur telescope making was no longer on the rise because you no longer had to make a telescope in order to have one. Criterion's RV-6 reflector (6-inch f/8) was a key product; I got one in 1970; and its $195 price held constant through more than a decade of high inflation, so it became more affordable every year. Though not cheap in 1970 dollars, $195 was not more than a teen-ager could earn in a summer, or (as in my case) prevail on his parents to provide.

Those who did want to make telescopes took a cue from John Dobson of the San Francisco Sidewalk Astronomers and got a lot more for their money. The idea was no longer to imitate commercial products, but rather to build a "Dobsonian" telescope that delivers more performance for less cost. Medium-sized telescopes became very affordable, and large telescopes (up to 30 inches) became within amateur reach. Commercially made Dobsonians appeared on the market and were also bargains. This enabled serious amateur viewing of faint galaxies.

The other big change in telescopes was the Celestron Schmidt-Cassegrain, made as observatory instruments in the 1960s and mass-marketed starting around 1973. What Celestron gave us was portability. It was easy to put your telescope in the car, head for a remote dark-sky site, and set it up there in a matter of minutes. Celestrons were also ready for astrophotography to a greater extent than any earlier design had been.

Amateur astronomy shifted direction. Many amateurs, including me, advocated "taking the esthetic path" and viewing the sky as a sightseer, without trying to contribute to science. This is just what Webb had advocated, and a new magazine sprang up to cater to the new kind of amateurs. Astronomy began publication in 1973 and was unconnected to professional astronomy, except for some coverage of research results; most of its content was aimed at amateur observers. Sky and Telescope was eventually remodeled to become much more like it.

Fortunately, we spun off another sub-hobby, astrophotography, and it ended up being our salvation. If we amateurs couldn't make discoveries about galaxies, maybe we could at least contribute to photographic science. And we did. This was the beginning of the process by which amateur and professional observing technology came together again.

(5) The early 21st Century (2001-present).

After 2000, amateur and professional capabilities came back together to a remarkable extent. Our digital image sensors are just like professional ones, just smaller — it's the same technology. We amateurs have access to a professional library (ADSABS), a professional deep-sky database (SIMBAD), and the same image processing software that professionals use. Our optics have gotten somewhat better (with inventions such as Celestron's EdgeHD and RASA), and there's a veritable arms race going on to improve portable equatorial mounts; we complain about their irregularities, but they are better than anything money could buy 20 years ago, even professional money.

And amateurs with 14-inch telescopes routinely photograph Jupiter as well as any ground-based observatory can. With small portable telescopes, we've started photographing nebulae that are not well known to science, including the integrated flux nebulosity (IFN, galactic cirrus). Meanwhile, professionals are using mass-produced portable instruments for research projects such as Dragonfly.

Are space probes and orbiting observatories about to make us obsolete? I don't think so, because although the supply of data from them is massive, the demand still exceeds the supply. The more people discover, the more they can discover. And anyhow, someone needs to analyze the data that the observatories gather. A new but important form of pro-am collaboration is for amateurs to examine or even process image data from professional observatories. The Internet makes this possible.

I think the time has come for amateurs to turn back to science, not just sightseeing. There is nothing at all wrong with just enjoying the majesty of the sky and capturing its beauty photographically, but that is no longer the only thing we can do.

In my opinion, we need to do some organizing. The only all-purpose organization we have is the BAA. In the United States, for planets we have the ALPO, and for variable stars, the AAVSO, but much needs to be done to further utilize the new capabilities that amateurs can offer.

A mystery about the invention of the transistor


I've been reading about the invention of the transistor by Bardeen, Brattain, and Shockley in 1947 (A History of Engineering and Science in the Bell System: Electronics Technology (1925-1975), ed. F. M. Smits, AT&T, 1985). And there turns out to be a puzzle involved.

As is well-known, the researchers were trying to make a field-effect transistor (a device in which the conductivity of a semiconductor material is influenced by a nearby electric field). This had been conceived by others earlier, but not successfully implemented. (It was in fact invented a while later, and works well, and your PC is made of them.) But at the time, their field-effect transistor didn't work, and while trying to figure out why, they stumbled on something useful but mysterious, now known as the ordinary (bipolar) transistor.

The first transistor was a crystal of N-type germanium with two tiny gold "cat's whiskers" sticking down onto it, close together, supported by a triangular insulator. This is like the way diodes were made, but with two cat's whiskers instead of one. Because the support was triangular, it really did look like the transistor symbol in the circuit diagram below.


This is the circuit in Brattain's lab notebook, showing the first transistor amplifier. It's an audio amplifier, so the input and output are small variations in the voltages at the indicated points; the idea was to turn small variations into bigger ones, and it worked. You could build this with modern transistors if you change that 90-volt battery to about 12 volts, because 90 volts will burn up most of the transistors we use today.

The block of N-type germanium is called the "base" and the cat's whiskers are the "emitter" and "collector." The idea is that small changes in emitter current affect the collector current.

Just to confuse you, because this is a PNP transistor, the emitter emits, and the collector collects, not electrons but "holes," places where electrons are missing from the crystal structure. A key point of transistor theory is that holes and electrons act just alike, but flow in opposite directions.

Modern transistors don't have cat's whiskers. They have areas of P-type germanium (or rather silicon nowadays) in a crystal of N-type.

Now then. With a modern transistor, when you inject a current into the emitter, a fixed percentage of it (about 2%) flows into the base, and the rest goes on to the collector. Power amplification occurs because the collector is supplied from a much higher voltage than the emitter; the same number of milliamps makes more watts at a higher voltage. You have a small current controlling a current that is 98% as big, but powered by a higher voltage. That's an amplifier. You also have voltage amplification because the output is taken across a much larger resistor than the input; the (nearly) same number of milliamps makes more volts.
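The amplification arithmetic above can be checked in a few lines. This sketch uses the 98%/2% split from the text; the voltages are hypothetical round numbers of my own, chosen only to illustrate the principle:

```python
# Check of the amplification arithmetic, using the 98%/2% split
# from the text (round numbers, not measured values).
alpha = 0.98                  # fraction of emitter current reaching the collector
beta = alpha / (1 - alpha)    # base-to-collector current gain
print(f"beta = {beta:.0f}")   # 49: each mA of base current controls ~49 mA

# Power amplification: (nearly) the same current at a higher voltage
# means more watts.  Voltages here are hypothetical.
i = 0.002                     # 2 mA flowing in both circuits (nearly)
v_in, v_out = 0.5, 45         # assumed emitter and collector supply voltages
power_gain = (i * v_out) / (i * v_in)
print(f"power gain = {power_gain:.0f}x")  # 90x
```

With α = 0.98 the collector current is 98% of the emitter current, but because it flows at 90 times the voltage, the output power is 90 times the input power; that is the amplification.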

So far so good, but the inventors' original transistor did something more. The current variation at the collector, in response to variation at the emitter, was not 98% or so; it was appreciably more than 100% (about 180% in Bell Labs' first mass-produced transistor). In technical terms, α > 1.0. The rest of the current came from the base, of course.

Why this is so seems to have remained somewhat mysterious. The metal cat's whiskers seem to do something unusual to the crystal where they touch it, creating a further amplifying effect at the collector. Shockley's 1950 book (section 4.5) proposes that the metal-to-semiconductor contact creates not only the P-type collector of the main transistor, but also a further N-type region, so the whole thing is not PNP but PNPN. He calls this a p-n hook. This is not unlike two transistors in cascade (and sharing two electrodes), the second one amplifying the output of the first. I don't know if this theory is still held. As far as I can tell, uncertainty persisted right up to the end of the point-contact-transistor era (which only lasted a couple of years), partly because no one quite knew all the microscopic effects of the manufacturing process. Then people lost interest in the question.

The college admissions scandal

It has been discovered that parents were bribing staff members at Yale, USC, and several other universities to get their children admitted. The tactic was to tell admissions officials that the children were being recruited as varsity athletes when they weren't. There are also reports of unauthorized help on entrance exams, exams taken by impostors, and the like. I have several thoughts (and thank several people who discussed this with me on Facebook, especially Avery Andrews and Al Cave).

(1) If people paid $500,000 bribes and still didn't succeed, this speaks highly of the usual integrity of the admissions system. Contrary to some chit-chat, this doesn't prove the system is corrupt and everybody is buying their way in. Quite the contrary! The high price indicates that successful bribery is uncommon and difficult. If people were regularly taking payments under the table, they'd make them affordable in order to collect more money.

(2) What people are realizing is that educational fraud can rise to the level of being a crime. We may need to revise some laws to clarify how educational fraud is handled. Make it an actual crime, not a civil liability, to falsify educational records, take tests, or even do homework for someone else.

This is analogous to the way laws against forgery and computer hacking have been developed for clearer handling of acts that would already have been illegal.

(3) You don't want to go to a college that you can't properly get into. You'll be the weakest student there. Why do people think college admission is just a status symbol? It's a judgment of whether you can pass the courses.

(I know this doesn't apply universally. Some colleges want to have a "happy bottom quarter," a subpopulation of students who aren't extremely competitive; it reduces the pressure on the others. But I still think tricking a college is a bad idea. I've seen it turn out badly for people. And of course if you trick them into giving you a scholarship, you're stealing money.)

(4) As a society, we need to think about whether we really believe athletes deserve better educational opportunities than the rest of society.

Of course, it is praiseworthy when students work hard at sports (or anything else) to open up opportunities. The students deserve credit for recognizing, and working hard to utilize, whatever opportunities are made available to them.

And it is fortunate that athletics is often a path from the ghetto to higher education and success. But is athletic prowess the only criterion by which people should be selected for this? What about disadvantaged young people who aren't athletic?

Finally, there is the big-money aspect of college football. The NFL has no farm teams. College football serves as a farm system for the NFL and brings in huge amounts of money for the colleges. But the players are required to be amateur (student) athletes and get none of the money (except indirectly as educational benefits). I wonder if this is the right relationship between laborers (that's what the players are) and the profits from their work.

I should add that I'm in favor of sports in college. Students enjoy it, and the college is an organized setting that makes it easy to organize teams and games. But its intention should be more recreational than it is now.

(5) What to do with the students involved in the scam is unclear. Here are my thoughts.

(a) Students who were consciously involved in the scam should be dealt with severely — expel them, even revoke degrees. It is hard for a student to be unaware that someone is taking tests for him or giving him unauthorized help on tests, for example. Such a person is likely to cheat at other things his whole life long unless stopped.

(b) If a student benefited from the scam but was unaware of it — parents did it behind his back — the situation is more difficult. I think that if the student is less than halfway through the degree program (and hence can transfer), his admission should be revoked and he should finish college somewhere else. This is not punishment, but correction of an erroneous administrative action; the university never intended to admit that student, and they're just putting him back on the path he should have been on.

(c) If the student, unaware of the scam, is more than halfway through and has decent grades, or has graduated, then I don't think he should lose anything. At this point he's earning or has earned the degree. But his parents, or whoever arranged the cheating, should have to pay back any financial aid that was secured as a result of falsifications. (Many private universities award some kind of scholarship to almost every student, not just the needy.)

(6) What about the other good students who were turned down because scam-assisted students were admitted ahead of them? There is really no way to make such people whole; it may not even be possible to identify them with certainty. (Exactly who would have been admitted if there had been one more slot? You may or may not have records that say. For privacy, applications and rankings from past years may well have been destroyed. You may not even have made the decision — you would have looked at more applicants and made more decisions if you had had more slots.) Fortunately, such people are good at making the most of other opportunities. Maybe they went to other universities that were, for them, actually better.

If they can be identified, then: (a) if only part way through a degree program elsewhere, they should be invited to transfer in; (b) they should be given some kind of formal recognition by the university that wanted to admit them and was scammed out of it, something they can list on their résumés.

Short notes

I am aghast at the mass shooting at mosques in New Zealand, a country that normally has little violent crime. As I write this, I don't have enough facts to do more than express sadness and dismay. But I do have a concern. Over the last few years, the scale of acceptable behavior and rhetoric in the United States has shifted; white supremacists have an easier time considering themselves patriots, much less distant from the political mainstream than they used to be. To what extent did North American white supremacism, and North Americans' toleration or encouragement of it, lead to this New Zealand crime?

On a much less serious matter, I am glad to hear lots of sentiment in favor of year-round Daylight Saving Time. Let's get rid of the twice-yearly changes. And if the schoolchildren have to wait for the bus in the dark in Michigan, change the time of school there. Don't command all of us to change our clocks twice a year.


The computer Dad and I built in 1966

[Revised and updated.]

Before his untimely death, my father (Charles Gordon Covington, 1923-1966) and I did several electronic projects together. The most elaborate was an analog computer. This is what it looked like when I unpacked it a few days ago:


It was built from plans published in Electronics Illustrated, January 1966.

Click on any page for higher resolution:
[Four scanned magazine pages]

Recall that in 1966, most people had never seen a computer of any kind. Large corporations had digital computers that filled rooms, and analog computers (more complex than this one) were still in use in industry. Nobody used them for addition and subtraction, but more elaborate ones could do derivatives and integrals and were very useful.

We built this computer in mid-1966, I’m not sure exactly when. I remember that we bought the magazine at the gift shop of the hospital in Thomasville, Ga., where my maternal grandfather, J. C. Roberts, Sr., was recuperating from a car wreck. To the best of my recollection, we got the magazine before Christmas 1965 – magazines often came out before the cover date. But I think we built the computer during the summer.

Because my father died soon afterward, this was also my last electronic project for a long time that was what you might call fully funded, not hobbled by an attempt to keep the cost impossibly low. Rather than improvise or substitute, we bought the parts as specified in the magazine article. The potentiometers and meter were ordered from Lafayette; the knobs, battery holder, and switches came from the local Specialty Distributing Company store. (That was before Radio Shacks were widespread.) I think a sales slip from ordering parts may still be in my files somewhere, and if it surfaces, I’ll update this entry.

Click on any page for higher resolution:
[Three scanned magazine pages]

At the time, we lived at 1103 Lake Drive, Valdosta, and the computer is housed in paneling left over from building the house, or maybe the previous house (1721 2nd St. S.E., Moultrie). The potentiometers (variable resistors) arrived with long shafts, and, either not realizing we were supposed to cut them or not able to do so, we double-decked the front panel, which at least made the computer look more massive. I made the dials by hand, all by myself – I still think that was an accomplishment for an 8-year-old.



After my father died, my mother and my teachers wanted me to get some recognition for the computer. Valdosta had no science fair at the time, but it did have a social-science fair, which I was duly bundled into, in the spring of 1967, with lots of support not only from my current science teacher (Mrs. Atkinson) but also my teacher from the previous year (Mrs. Bowers). To fit into a social-science fair, the computer became an illustration (not a terribly germane one) to a project on automation and employment. I interviewed a local Sears manager and had a lot of help with the library work (one book was The Challenge of Full Employment, by Lineberry, 1962). I came to the same conclusion that I still hold – that computers don’t create unemployment, but they certainly do eliminate particular kinds of jobs.

For the social-science fair, I was told that the computer needed “blinking lights,” but the best I could manage was four #47 lamps as power-on indicators, powered by a separate transformer (I used what I had), with the DPST power-on switch temporarily replaced by a 3PST switch in another enclosure made of balsa wood. At least this made the computer look complicated.

At the time, I felt that the computer wasn’t really my own brainchild; I was having trouble getting it to work right, and I felt I might be getting more credit than I deserved, or at least credit for the wrong things. So after 1967 I restored the power switch to the original configuration, we packed the computer up, and it was not used again until now.

More about the social science fair project

Click on picture to view full story

I went and dug up the newspaper coverage of the science fair project, and the best part of it was reading the fine print listing all the other children who won prizes. In fifth grade I didn't know many people, but several of the names in that list are people I made friends with soon afterward and am in touch with even today, including Kent Buescher (of U.S. Press) and Sue Wilkinson (musician).

The newspaper article gives the impression that I built the computer by myself, which I didn't, and that I was somehow top in the whole fair. That is not my recollection. As I remember, I got one of several divisional first prizes at the city level and went on to get a second prize (I think, maybe less than that) at the regional fair.

I had forgotten the catch-phrase of the project, "From a toaster to UNIVAC." My point was that even a toaster is an automated machine that performs a simple computing function (sensing when the toast is hot enough). Computers are not a new intelligence loosed upon the earth to threaten humans; they're just machines with more elaborate control functions. That is still how I see them, even though I've spent much of my career trying to get them closer to emulating human intelligence.

How does it work?

This is an analog computer, not digital, not programmable. It works like a slide rule, but implemented with electricity. The key idea is to use a Wheatstone bridge to compare two voltages. If the meter reads zero, the voltages match. One voltage is proportional to the setting of a variable resistor, so it can be read as a number, the answer. The other voltage is the product or the sum of two variable resistors, depending on whether they are arranged as a two-stage voltage divider or a pair of single voltage dividers in series.

The magazine calls it a “pot computer” because the variable resistors are also called potentiometers or (at the time) pots.

To get subtraction and division, you just use the second rather than the third variable resistor as the unknown. For squares and square roots, you replace the second variable resistor with a copy of the first one – two potentiometers on the same shaft – so that you are multiplying a number by itself, and the second dial is ignored.
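The bridge-balancing idea can be sketched in a few lines of Python. This is my own illustration, not anything from the magazine article; the function name `bridge_multiply` and the sweep-the-dial loop are hypothetical, a stand-in for a human turning the answer pot until the meter reads zero.

```python
# Sketch (my illustration, not from the 1966 article) of the "pot computer":
# each potentiometer acts as a voltage divider with a setting from 0 to 1.
# For multiplication, two dividers are cascaded, so the comparison voltage
# is V * a * b; the operator turns the answer pot until the bridge meter
# reads zero, meaning the answer divider's voltage matches it.

def bridge_multiply(a, b, steps=10000):
    """Find the answer-pot setting x where the bridge balances (V*x = V*a*b),
    by sweeping x and keeping the setting with the smallest meter deflection."""
    v = 1.0                        # supply voltage; it cancels out of the result
    target = v * a * b             # output of the two-stage (cascaded) divider
    best_x, best_err = 0.0, float("inf")
    for i in range(steps + 1):
        x = i / steps              # candidate answer-pot setting, 0..1
        err = abs(v * x - target)  # magnitude of the meter deflection
        if err < best_err:
            best_x, best_err = x, err
    return best_x

# Pot settings 0.3 and 0.4 correspond to 3 x 4 on 0-10 dials;
# the balance point 0.12 reads as 12 on a 0-100 answer dial.
print(bridge_multiply(0.3, 0.4))   # 0.12
```

Division works the same way in reverse: leave the answer pot fixed and balance by turning one of the factor pots instead.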


The indicator is a zero-center meter intended for balancing stereo speaker outputs. After the parts arrived but before we built the computer, I did an experiment with it that fortunately did not turn out badly. I momentarily applied 10 volts AC across it to watch the needle swing violently back and forth, which it did. Fortunately this did not burn out the meter. In retrospect, 10 V AC is probably not more than it was designed to withstand in speaker systems; for example, 50 watts into a 4-ohm speaker is 14 volts. But until I did this calculation just now, I thought I had had a narrow escape.
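The calculation mentioned above comes from the power formula P = V²/R, so V = √(P·R). As a quick check (my arithmetic, not part of the original meter's documentation):

```python
import math

# RMS voltage across a speaker delivering power P into resistance R:
# P = V^2 / R, so V = sqrt(P * R)
P = 50.0   # watts
R = 4.0    # ohms
V = math.sqrt(P * R)
print(round(V, 1))   # 14.1 volts RMS
```

So a meter built to monitor a 50-watt amplifier into 4-ohm speakers would routinely see about 14 V RMS, which is why a momentary 10 V AC probably did it no harm.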

Checking it out

Prior to turning the computer on, I checked over the soldering, re-soldered several joints, and soldered two that had somehow escaped being soldered in 1966 (!). I also anointed the potentiometers and switches generously (internally) with Caig MCL contact cleaner and replaced the two missing knobs (authentic replacements will come later). And I found the meter case coming apart, so I pulled it out, rasped out the hole it fits in, fixed it, and reattached the wires.

When I finally put batteries in the computer and turned it on, it was obviously alive but wouldn’t give correct answers. My recollection was that some of the functions of the computer had been inaccurate in 1966, so I was on the lookout for a wiring error. And I found one – the positive terminals of the two batteries were interchanged, apparently a mistake I introduced during the blinking-light caper in 1967, which is when I noticed some malfunctions.

With the wiring corrected, and all the other wiring checked, multiplication and division worked fine, allowing a tolerance of 0.5 (dials 1 and 2) and 5 (dial 3). Addition and subtraction also worked fine once I figured out that for addition and subtraction, the third dial reads 0-10, not 0-100 – an important fact not mentioned in the magazine article!

How I fixed the potentiometer

What still wasn’t working was the square and square root function because the rear potentiometer on dial 1 (the one that is used as a second copy of dial 1 in place of dial 2) was apparently inoperative.

Back in 1966, the two-section potentiometer had arrived in two parts; my father pried the shell off the front part (which was a complete potentiometer) and added the rear part.

I decided to pry them apart again and see what I could see. Conclusion? A mechanical part was missing, and apparently had been missing since 1966. The shaft simply did not turn the rear part at all. Two prongs that were supposed to couple the shaft to the rear section did not touch anything.



I have no idea what the missing part originally looked like, but after several tries, I made a block of aluminum that does the job and gives the prongs something to hold. Now the square and square root function works perfectly.


Final buttoning-up

Here's what the face of my computer looks like now:


The missing knobs are now exact replacements (still made from the same mold by the same manufacturer, Daka-Ware). Apart from the other repairs already mentioned, there are lockwashers under all the switches (to finally keep them from turning) and two additions to the labeling, an ON label for the SQUARE switch (needed for correct operation) and a label that says ANALOG COMPUTER, replacing an earlier label that was lost. I made these with my 1969 Astro tape labelmaker, which is still functioning and had been used earlier to make the ON and OFF labels for the power switch. The other tape labels were made with a Tapeprinter labelmaker in 1966.

On the back panel, I marked the battery polarity (only visible with the batteries out) and added an instruction card. It took me long enough to figure out how to use this computer that I didn’t want anyone else to be puzzled!



A second copy of the instructions and copies of the magazine article are stored, rolled up, between the decks of the chassis. And that’s the story of my first computer.


Star cluster M67

Don't panic — nothing has happened to me — I've just been busy. There's no calamity going on at all; in fact, things are going smoothly, but posting in the Daily Notebook is not what I've been spending my time on. Nonetheless, here's a picture that has been in the "to do" box since a week and a half ago:


That's the star cluster M67, taken on the evening of February 27, a stack of five 2-minute exposures with AT65EDQ telescope, Canon 60Da camera, and AVX mount. We've been having awful weather, and even when it's clear, the air is unsteady. Maybe spring will come before the interesting winter constellations have rotated out of view.

If what you are looking for is not here, please look at previous months.