Shooting Stars and the Singularity

Gramps Tom
4 min read · Aug 13, 2023

It’s August again. I’m out in the backyard gazing up at the summer sky, hoping to catch a bit of the Perseid meteor shower. Bob Dylan sings softly from the Bluetooth speaker. The Singularity is on my mind.

The internet is buzzing with the rapid advance of large language models and with breakthroughs in fusion and superconductivity. It could be any day now.

I wonder if we’ll notice. Do we even know what to look for? Will the lights dip? Will the earth shake with a deep and thrilling rumble? Or will a silent surge of light spring from the horizon like an artificial dawn?

Will the stars begin to fall?

There it is! The first shooting star sizzles earthward, accelerating crisply according to Newtonian physics. I imagine the earth spinning on its axis, circling the sun, its orbit intersecting that of a random cluster of asteroids, the whole solar system a tiny speck on the spiral arm of a majestically spinning galaxy lost in the vast cold darkness of intergalactic space.

A giant Newtonian clock.

That is running down, apparently. Because of friction. The friction that slows the fall of the meteor as it enters the atmosphere, releasing light and heat, burning as it falls.

Everything in the universe is tending toward a state of lesser energy and greater entropy.

Bob Dylan breaks in on my thoughts:

“Seen a shooting star tonight
And I thought of you
You were trying to break into another world
A world I never knew
I always kind of wondered
If you ever made it through
Seen a shooting star tonight
And I thought of you…”

Is it though? Tending toward greater entropy I mean? What about life forms and evolution and all that? Isn’t that the big exception? Isn’t there a trend toward greater organization? More understanding and less ignorance?

Newton stood on the shoulders of Galileo, Copernicus, Aristotle, Plato, Socrates, Pythagoras and on and on and back and back to the day the first amoeba crawled out of the primordial slime.

And after Newton, Einstein. After Einstein, Bohr, Heisenberg, Schrödinger, right on up to the present day where we have the likes of Elon Musk scattering the sky with satellites and rebranding Twitter. Sam Altman unleashing ChatGPT on the unsuspecting public.

Which brings us to the Singularity.

We have fed the entire internet, the accumulated information, the aggregated understanding, the complete comprehension of civilization to date into the hungry maw of an AI which is now cheerfully proceeding to write user manuals in iambic pentameter and beat the stock market.

Surely any day now an AI will design and build an even more powerful AI, which will in turn design and build its successor? Fusion supplies unlimited electricity. Superconductor technology unlocks faster and faster processing power.

Foom.

The arc of progress bends to the near vertical. Humanity stands on the sidelines to await developments.

“…Listen to the engine, listen to the bell
As the last fire truck from hell
Goes rolling by, all good people are praying,
It’s the last temptation
The last account
The last time you might hear the sermon on the mount
The last radio is playing…”

A mosquito whines in my ear, and I slap at my neck.

Researchers comparing the output of large language models such as ChatGPT with human writing rely on two interesting metrics, the technical terms for which are ‘burstiness’ and ‘perplexity’.

Sentences in human writing tend to vary widely in length and structure. This is known as ‘burstiness’. In contrast, the output of today’s LLMs contains sentences that are more similar in length and structure.

Perplexity is a measure of how predictable a particular word sequence is to a language model; the more predictable the text, the lower the score. Almost by definition, the output of LLMs tends to have a lower perplexity score than human writing.

This seems to suggest a type of averaging or loss of detail.
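
For the technically curious, here is a rough Python sketch of how you might put numbers on those two ideas. It is my own back-of-the-envelope illustration, not the researchers’ actual tooling: burstiness as the spread of sentence lengths, perplexity as the exponentiated average surprise of a model at each word.

```python
import math

def burstiness(sentence_lengths):
    # A crude proxy: the standard deviation of sentence lengths.
    # Human writing tends to mix short and long sentences (high spread);
    # LLM output tends to be more uniform (low spread).
    n = len(sentence_lengths)
    mean = sum(sentence_lengths) / n
    variance = sum((x - mean) ** 2 for x in sentence_lengths) / n
    return math.sqrt(variance)

def perplexity(token_log_probs):
    # token_log_probs: the log-probability a language model assigned to
    # each word in a passage, given the words that came before it.
    # Perplexity is e raised to the average surprise per word: predictable
    # text scores low, surprising text scores high.
    avg_neg_log_prob = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(avg_neg_log_prob)

# Example: three sentences of 5, 23 and 9 words, and a model that found
# each word fairly predictable (log-probabilities close to zero).
print(burstiness([5, 23, 9]))                # about 7.7: bursty
print(perplexity([-1.2, -0.4, -2.0, -0.7]))  # about 2.9: not very perplexing
```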

The science fiction author Ted Chiang, writing in The New Yorker, compares ChatGPT to a ‘blurry JPEG of the web’: a lossy compression algorithm rather than an engine of creativity.

It is an interesting thought that all of today’s large language models are trained exclusively on human-generated text. What would happen if you used the output of one LLM as the training data for another? And then took the output of that one to train a third?

According to recent experiments described in a preprint posted to arXiv (Cornell’s preprint server), the answer is something called model collapse: after multiple iterations, the output of the final LLM is complete gibberish.
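
You can get a feel for the effect at toy scale without going anywhere near a real LLM. Here is a little Python sketch of my own devising, no relation to the actual experiments: a crude bigram ‘model’ that learns which character tends to follow which, generates some new text, and then each successive generation is trained only on the output of the one before it. Characters that fail to get sampled in one generation can never reappear in the next.

```python
import random

def train_bigram(text):
    # The "model": for each character, the list of characters seen after it.
    model = {}
    for a, b in zip(text, text[1:]):
        model.setdefault(a, []).append(b)
    return model

def generate(model, length=2000):
    # Sample new "text" one character at a time from the bigram counts.
    current = random.choice(list(model))
    out = [current]
    for _ in range(length):
        followers = model.get(current) or list(model)
        current = random.choice(followers)
        out.append(current)
    return "".join(out)

# A scrap of human-written text to stand in for "the entire internet".
corpus = (
    "Seen a shooting star tonight and I thought of you. "
    "You were trying to break into another world, a world I never knew. "
) * 20

for generation in range(10):
    model = train_bigram(corpus)
    corpus = generate(model)  # each model learns only from its parent's output
    print(f"generation {generation}: {len(set(corpus))} distinct characters remain")
```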

There you have it. The second law strikes again.

Everything tends toward a state of lesser perplexity and greater homogeneity.

The mountains and valleys of human experience flattened into a featureless desert devoid of meaning.

“…Seen a shooting star tonight
Slip away
Tomorrow will be another day…”

You know what, Bob, you’re right. It’s getting late. I stare upward, hoping for one more meteor before I go.

But where is that perplexity coming from in the first place? Are we missing something? Is there someone out there? Sunlight of the soul streaming in from another dimension?

“…Guess it’s too late to say the things to you
That you needed to hear me say
Seen a shooting star tonight
Slip away”

--

Gramps Tom

Banjo picker, blogger, bewildered bystander. Still wondering vaguely what makes the universe tick.