Daily Notebook
This web site is protected by copyright law.
Reusing pictures or text requires permission from the author.
2025 December 20
Fractional derivatives (no, I don't mean partial derivatives)
At this advanced age I just came across a curious piece of mathematics -- fractional derivatives. Not partial derivatives (where you differentiate with respect to only one argument of a function of many variables). Fractional derivatives are like the first derivative, the second derivative, and so on, but with orders that are not whole numbers -- for example a one-and-a-half derivative, a half derivative, or a 32.7th derivative. What could that mean? If you approach it via the theory of limits, like a normal person, you don't get anything. It's nonsense. But consider that the derivative of a sine curve is the same curve shifted left 90 degrees. The second derivative shifts it another 90 degrees. And so forth. So if you shift a sine curve left 45 degrees, you've only halfway differentiated it. Do that again, and you get the first derivative. Accordingly, the 45-degree shift is a half derivative. Other fractional derivatives work the same way. And since any waveform can be expressed as a sum of sine curves (Fourier transform), and the derivative of a sum is the sum of the derivatives, you can take fractional derivatives of any waveform. This comes up in electrical engineering -- if the 90-degree phase shift of a capacitor amounts to differentiation, then half as much phase shift through an RC network must be half-differentiation. I had known about phase shifts for a long time but never thought of them as fractional derivatives.
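The Fourier route described above can be checked numerically: take the FFT of a sampled signal, multiply each frequency component by (iω)^α, and transform back. (For frequencies other than ω = 1 this also scales the amplitude by ω^α, just as ordinary differentiation turns sin(ωx) into ω cos(ωx); for plain sin(x) it is a pure 45-degree shift.) Here is a minimal sketch using NumPy; the function name and sample grid are my own illustration:

```python
import numpy as np

def fractional_derivative(y, dx, alpha):
    """Order-alpha derivative of a periodic sampled signal, computed by
    multiplying each Fourier component by (i*omega)**alpha."""
    n = len(y)
    omega = 2 * np.pi * np.fft.fftfreq(n, d=dx)   # angular frequencies
    factor = np.zeros(n, dtype=complex)
    nonzero = omega != 0
    # (i*omega)^alpha on the principal branch; the DC term stays zero
    factor[nonzero] = (1j * omega[nonzero]) ** alpha
    return np.real(np.fft.ifft(np.fft.fft(y) * factor))

n = 1024
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
dx = x[1] - x[0]

# Half-differentiating sin(x) shifts it left 45 degrees...
half = fractional_derivative(np.sin(x), dx, 0.5)
print(np.allclose(half, np.sin(x + np.pi / 4)))   # True

# ...and doing it again gives the ordinary first derivative, cos(x).
print(np.allclose(fractional_derivative(half, dx, 0.5), np.cos(x)))   # True
```

Applying the half-derivative twice reproduces the first derivative, which is exactly the "do that again" step in the paragraph above.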
2025 December 18
Vibe coding (AI generation of computer programs)
It is now common for computer programs to be generated from queries to large language models. Here are some thoughts about AI code generation ("vibe coding") that I shared on LinkedIn and Facebook this morning:
(1) If a lot of coders are about to be unemployed, maybe we were paying too many people to code things very similar to what others have already done. Higher-level programming tools were already needed.
(2) If the entry-level programmers are eliminated, where will the experienced programmers come from?
(3) When compilers first appeared c. 1960, people said coders would be out of work. Then they realized skill was still needed. Mathematicians couldn't just write in FORTRAN the formulas that were on their chalkboards -- the variables wouldn't get initialized in the right order, there would be overflows and underflows, and so on.
(4) The big difference between compilers and LLMs is that LLMs are nondeterministic. AI-assisted coding turns into a wacky trial-and-error process that needs to mature.
(5) We might need new, deterministic software tools to assist with AI-assisted coding, to check the output in various ways.
(6) AI-assisted coding discourages or blocks the creation of new programming languages, because an LLM can only generate code in languages it has seen in its training data.
2025 December 16
A time of rapid political and cultural change
I have been tied up a few days, partly with a lot of scheduled activities and partly with a gastrointestinal infection. Today I'm back...
This month, America is undergoing rapid political and cultural change, revolving around two things: Trump is falling from favor, and the infatuation with generative AI is coming to a sudden end. Paradoxically, the second has caused interest in my consulting services to go up, not down, although I'm still looking for projects (innovative software of any kind, whether or not you call it AI).
Taking the second point first: I've written an article in AskWoody about the current paradigm shift in artificial intelligence. (Click through and read it.) Over two or three years we've seen the popular impression go through several stages:
Along the way, all the talk about AGI ("artificial general intelligence") and "the singularity" went quiet. People who answer a question in a forum with "I asked ChatGPT" get laughed at; ChatGPT doesn't know things; it only guesses or summarizes what it has been given; it can help you find knowledge, but it is not itself an authority. Knowledge takes time to trickle down, so I'm still hearing from people who are at stage 1 while I'm actually working at stage 5.
I'm not saying generative AI is useless or washed up. No; it's very powerful for the things it's good at, which include paraphrasing and summarizing texts and generating familiar types of pictures and computer programs. In fact, it is revolutionizing routine computer programming because it's so easy to get a rough draft of your program generated immediately. Then you have to check it, a step often neglected.
But the continued move toward enormous, commercial LLMs may be financially impossible anyway. The current ones are losing money at a huge rate. They won't earn back their investment in data centers before it's time to build newer, better data centers. (That's the big difference between this and the 19th-century railroad boom; once built, railroads could be operated for decades with little additional construction.) Meanwhile, free LLMs, including a big one from the Swiss government, are available for people to run on their own PCs without paying any money to the big companies. Something's going to come crashing down.
Now what about politics? Crucially, Trump is losing support, and the hard-right wing of the Republican Party is pulling away from him (led by the flamboyant Marjorie Taylor Greene). This is partly for political reasons and partly because Trump's state of health is causing alarm. He has fallen asleep in most of the past week's press conferences. He has given strange, rambling, confabulatory speeches.
He twice responded to the tragic murder of filmmaker Rob Reiner with something that sounded like endorsement, taking it to be a political assassination in support of himself (Trump). Almost all the numbers he ever gives are false -- and we can't tell whether they're intended to deceive or he's just confused. My big question for his supporters is: if he's incapacitated, why do you still want him in office? Shouldn't you want a competent right-wing president? Or do you want a disabled president so someone else can pull all the strings? Never mind, of course, the fast-emerging Epstein sex-trafficking scandal, which is what drove Ms. Greene away (and there is an organized group of Epstein victims that she is now supporting; they are out for Trump's hide). Bear in mind that we don't have any proof that Trump committed crimes, but there are accusations in sworn testimony, and his attempts to prevent the release of the evidence have been vigorous.
The end of the Trump era is now bringing to a head another issue: the public image of Christians, especially doctrinally conservative Christians such as myself. Much of the public equates conservative Christianity with Trumpism, and that's a problem. The sincere Christians that I know never supported Trump very strongly. Some opposed him all along (like Russell Moore and myself); some tolerated him very reluctantly as the lesser of two evils. The latter should be ready to drop him like a hot potato when he is no longer the lesser evil. A weaker type of Christian, or semi-Christian churchgoer, however, jumped right on Trump's bandwagon, equating it with God's. It is much easier to support a politician than to get close to God. And they have a problem. Matthew 24:24 may apply. Political enthusiasm will not get you to Heaven. It may get you farther away.
In retrospect, some of these people are going to look just as silly as the people (they exist) who seem to think that rooting for the right football team is part of the via salvationis (the way of salvation). More importantly, Christendom as a whole has a problem. There is actually an upsurge of interest in Christ among younger adults, but:
All of these facts seriously undermine our ability to get out the Christian message. How quickly can we overcome them?
2025 December 4
Refilling a snow globe
The photography here isn't up to my usual standard, but I wanted to show you something useful. Before I repaired it, this snow globe had lost about 1/3 of its water. Now it has only a small bubble at the top, which is probably desirable -- if it had no bubble, there would not be much to absorb pressure from thermal expansion.
To work on it, I put it upside down on top of a bowl and a piece of non-skid shelf liner:
The black part, containing the music box, is held in with peelable glue and pries off easily. (Arrows show where to pry.) That reveals how the globe is sealed: with a big rubber stopper held in place with glue. My next move was to drill a 1-mm hole near the edge of the stopper, tilt the globe to put that hole as high up as possible, and use one of Melody's insulin syringes to remove air and inject distilled water:
I recommend that the very last step be withdrawing a bit of air or water, to leave negative pressure. Then I got the stopper good and dry, sealed the hole with Loctite Shoe Glue (a useful substance that dries flexible), let the glue dry overnight, and finally set the globe down on a paper towel to check for leaks. In so doing, I found the original slow leak that had caused the globe to lose its water (about one drop per day, not noticed from the outside) and sealed that too. After another 24-hour test, I glued the music box back in place and declared it fixed.
2025 December 3
Training AI on one person's expertise
My Cambridge friend Mike Knee asks: LLMs are trained on all available text, the whole Internet. Could an AI system be trained on one person's or a few people's expertise so that it is reliable on one subject?
Answer: Yes, but then it wouldn't be an LLM. What you are describing is a knowledge-engineered expert system, which was one of the dominant kinds of AI when I first got into the field. Expert systems are very useful for specific purposes but don't act very humanlike (don't carry on conversations) -- they require formatted input and output. Small ones are commonly built into the control systems of machines nowadays. Training large ones tends to be a formidable task, hence the move to machine learning (automatic training).
Knowledge-engineered rule-based systems live on. I built one over the past several years for credit scoring (RIKI). There it is vital to control what the score is based on, so machine learning is not appropriate -- it would learn biases and prejudices that we can't allow.
What you may have in mind is training an LLM on a small set of reliable texts rather than the whole Internet. In that case it wouldn't learn enough English. An LLM is a model, not of knowledge, but of how words are used in context, and it needs billions of words to learn English vocabulary, syntax, and discourse structure, because it learns inefficiently, with no preconceptions about how human language works.
The reason LLMs give false (hallucinatory) output is not just inaccuracies in their training material. More importantly, it's because they paraphrase texts in ways that are not truth-preserving. Fundamentally, all they are doing is using words in common ways; they are not checking their utterances against reality.
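To make the contrast with machine learning concrete, here is a toy knowledge-engineered rule-based scorer, in which every rule -- and therefore every factor in the score -- is explicit and auditable rather than learned from data. This is a hypothetical illustration only: the rules, thresholds, and point values are invented and have nothing to do with the author's RIKI system.

```python
# A toy rule-based credit scorer. Each rule is an explicit, human-written
# (description, predicate, points) triple, so we control exactly what the
# score is based on -- nothing is learned from data.
RULES = [
    ("on-time payment history of 2+ years", lambda a: a["years_on_time"] >= 2, 30),
    ("no accounts currently past due",      lambda a: a["past_due"] == 0,      25),
    ("credit utilization below 30%",        lambda a: a["utilization"] < 0.30, 20),
]

def score(applicant):
    """Sum the points of every rule that fires, keeping a trace of which
    rules contributed -- the trace makes the score explainable."""
    total, trace = 0, []
    for description, predicate, points in RULES:
        if predicate(applicant):
            total += points
            trace.append((description, points))
    return total, trace

total, trace = score({"years_on_time": 3, "past_due": 0, "utilization": 0.42})
print(total)   # 55: the first two rules fire; the utilization rule does not
```

Because each rule is written by a knowledge engineer, a disallowed factor simply never appears in the rule base, and the trace shows exactly why any applicant got the score they did -- the property the entry above says machine learning cannot guarantee.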
Recent improvements in commercial LLMs have come from (1) fine-tuning (post-training) to make accurate responses more likely (though still not guaranteed), and (2) connecting LLMs to other kinds of software and knowledge bases to answer specific kinds of questions (RAG, MCP, etc.). I think there is a bright future for using LLMs as the user interface to more rigorous knowledge-based software, and also for using LLMs to collect material for training and testing knowledge-based systems. I do not think "consciousness will emerge" in LLMs or that they will replace all other software.
This is a private web page,
not hosted or sponsored by the University of Georgia. Portrait at top of page by Sharon Covington. This web site has never collected personal information and is not affected by GDPR. Google Ads may use cookies to manage the rotation of ads, but those cookies are not made available to Covington Innovations. This web site is based and served entirely in the United States.
In compliance with U.S. FTC guidelines,
I am glad to point out that unless explicitly
indicated, I do not receive substantial payments, free merchandise, or other remuneration
for reviewing or mentioning products on this web site.
Any remuneration valued at more than about $10 will always be mentioned here,
and in any case my writing about products and dealers is always truthful.