Michael A. Covington, Ph.D.
Daily Notebook

Links to selected items on this page:
Training AI on one person's expertise
Refilling a snow globe

This web site is protected by copyright law. Reusing pictures or text requires permission from the author.
I am a human author. Nothing on this web site is AI-generated unless specifically marked as such.

2025
December
4

Refilling a snow globe

The photography here isn't up to my usual standard, but I wanted to show you something useful.

Before I repaired it, this snow globe had lost about 1/3 of its water. Now it has only a small bubble at the top, which is probably desirable — if it had no bubble, there would not be much to absorb pressure from thermal expansion.

Picture

To work on it, I put it upside down on top of a bowl and a piece of non-skid shelf liner:

Picture

The black part, containing the music box, is held in with peelable glue and pries off easily. (Arrows show where to pry.)

That reveals how the globe is sealed: with a big rubber stopper that is held in with glue. My next move was to drill a 1-mm hole near the edge of the stopper, tilt the globe to put that hole as high up as possible, and use one of Melody's insulin syringes to remove air and inject distilled water:

Picture

I recommend making the very last step a withdrawal of a small amount of air or water, to leave slight negative pressure inside.

Then I got it good and dry and sealed it with Loctite Shoe Glue (a useful substance that dries flexible), let the glue dry overnight, and finally set the globe down on a paper towel to check for leaks. In so doing, I found the original slow leak that had caused it to lose its water (one drop per day or so, not noticed from the outside) and sealed that. Another 24-hour test, and I glued the music box back in place and declared it fixed.

2025
December
3

Training AI on one person's expertise

My Cambridge friend Mike Knee asks: LLMs are trained on all available text, the whole Internet. Could an AI system be trained on the expertise of one person, or a few people, so that it is reliable on one subject?

Answer: Yes, but then it wouldn't be an LLM. What you are describing is a knowledge-engineered expert system, which was one of the dominant kinds of AI when I first got into the field. Expert systems are very useful for specific purposes but don't act very humanlike (they don't carry on conversations); they require formatted input and output. Small ones are commonly built into the control systems of machines nowadays. Training large ones by hand tends to be a formidable task, hence the move to machine learning (automatic training).

Knowledge-engineered rule-based systems live on. I built one over the past several years for the purpose of credit scoring (RIKI). It's vital to control what the score is based on, so machine learning is not appropriate — it would learn biases and prejudices that we can't allow.
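To make the contrast concrete, here is a toy rule-based scorer (the rules, field names, and point values are all hypothetical, not those of the actual RIKI system). The point is that every rule is written and auditable by a human, so you control exactly what the score is based on:

```python
def score_applicant(applicant: dict) -> int:
    """Apply hand-written rules; each rule's contribution is explicit."""
    score = 0
    # Rule 1: payment history (hypothetical threshold and weight)
    if applicant.get("on_time_payments", 0) >= 12:
        score += 40
    # Rule 2: debt load
    if applicant.get("debt_to_income", 1.0) < 0.35:
        score += 30
    # Rule 3: residential stability
    if applicant.get("years_at_address", 0) >= 2:
        score += 10
    return score

print(score_applicant({"on_time_payments": 24,
                       "debt_to_income": 0.2,
                       "years_at_address": 5}))  # 80
```

Because the rules are explicit, a forbidden factor simply cannot influence the score, which is the property machine learning cannot guarantee.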

What you may have in mind is training an LLM on a small set of reliable texts rather than the whole Internet. In that case it wouldn't learn enough English. An LLM is a model, not of knowledge, but of how words are used in context, and it needs billions of words to learn English vocabulary, syntax, and discourse structure, because it learns inefficiently, with no preconceptions about how human language works.
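A toy bigram model makes the point in miniature: it counts which word tends to follow which, and predicts accordingly. A transformer LLM is an enormously larger and subtler version of the same idea, a model of how words are used in context, not a model of facts:

```python
from collections import defaultdict, Counter

corpus = "the cat sat on the mat and the cat slept".split()

# Count, for each word, what follows it
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

# Most likely continuation of "the" in this tiny corpus
print(following["the"].most_common(1)[0][0])  # cat
```

With ten words it learns almost nothing; that is why a real LLM needs billions of words before its predictions resemble fluent English.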

The reason LLMs give false (hallucinatory) output is not just inaccuracies in their training material. More importantly, it's that they paraphrase texts in ways that are not truth-preserving. Fundamentally, all they are doing is using words in common ways; they are not checking their utterances against reality.

Recent improvements in commercial LLMs have come from (1) fine-tuning (post-training) to make accurate responses more likely (still not guaranteed), and (2) connecting LLMs to other kinds of software and knowledge bases to answer specific kinds of questions (RAG, MCP, etc.).
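The retrieval idea can be sketched in a few lines (everything here is hypothetical, a stand-in for a real retrieval pipeline): before answering, look the question up in a curated knowledge base and hand the retrieved fact to the language model as context, so the answer is grounded rather than paraphrased from memory.

```python
# Hypothetical curated knowledge base (would be a document store in practice)
KNOWLEDGE_BASE = {
    "boiling point of water": "100 degrees Celsius at standard pressure",
}

def retrieve(question: str) -> str:
    """Naive retrieval: find a stored fact whose key appears in the question."""
    for key, fact in KNOWLEDGE_BASE.items():
        if key in question.lower():
            return fact
    return "no relevant fact found"

def answer_with_context(question: str) -> str:
    # In a real system, this grounded prompt would be sent to an LLM;
    # here we just show what would be sent.
    context = retrieve(question)
    return f"Context: {context}\nQuestion: {question}"

print(answer_with_context("What is the boiling point of water?"))
```

The LLM's job shrinks to phrasing; the facts come from a source a human curated, which is exactly the division of labor described below.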

I think there is a bright future for using LLMs as the user interface to more rigorous knowledge-based software, and also using LLMs to collect material for training and testing knowledge-based systems. I do not think "consciousness will emerge" in LLMs or that they will replace all other software.
