

Stephen Wolfram on ChatGPT

A month ago, Stephen Wolfram put out a little booklet (140 pages), What Is ChatGPT Doing … and Why Does It Work?



It gives a gentle introduction to large language models and the architecture and training of neural networks.

The entire book is freely available online.

The advantage of these online texts is that you can click on any of the images, copy their content into a Mathematica notebook, and play with the code.

This really gives a good idea of how an extremely simplified version of ChatGPT (based on GPT-2) works.

Downloading the model (within Mathematica) takes about 500 MB, but afterwards you can complete any prompt quickly and see how the results change if you turn up the ‘temperature’.

You shouldn’t expect too much from this model. Here’s what it came up with from the prompt “The major results obtained by non-commutative geometry include …” after 20 steps, at temperature 0.8:


(* 'model' is GPT-2, loaded e.g. via NetModel[{"GPT-2 Transformer Trained on
   WebText Data", "Task" -> "LanguageModeling"}] from the Neural Net Repository *)
NestList[StringJoin[#, model[#, {"RandomSample", "Temperature" -> 0.8}]] &,
 "The major results obtained by non-commutative geometry include ", 20]

The major results obtained by non-commutative geometry include vernacular accuracy of math and arithmetic, a stable balance between simplicity and complexity and a relatively low level of violence.

Lol.
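To see the effect of the temperature parameter, you can sample the same prompt at a few different values. Here’s a minimal sketch, again assuming the GPT-2 model is loaded as above (the particular temperature values are just examples):

(* sketch: one completion step at a few temperatures, same prompt as above *)
prompt = "The major results obtained by non-commutative geometry include ";
Table[{t, model[prompt, {"RandomSample", "Temperature" -> t}]},
 {t, {0.2, 0.8, 1.5}}] // Grid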

In the more philosophical sections of the book, Wolfram speculates about the secret rules of language that ChatGPT must have found if we want to explain its apparent success. One of these rules, he argues, must be the ‘logic’ of languages:

But is there a general way to tell if a sentence is meaningful? There’s no traditional overall theory for that. But it’s something that one can think of ChatGPT as having implicitly “developed a theory for” after being trained with billions of (presumably meaningful) sentences from the web, etc.

What might this theory be like? Well, there’s one tiny corner that’s basically been known for two millennia, and that’s logic. And certainly in the syllogistic form in which Aristotle discovered it, logic is basically a way of saying that sentences that follow certain patterns are reasonable, while others are not.

Something else ChatGPT may have discovered are language’s ‘semantic laws of motion’, being able to complete sentences by following ‘geodesics’:

And, yes, this seems like a mess—and doesn’t do anything to particularly encourage the idea that one can expect to identify “mathematical-physics-like” “semantic laws of motion” by empirically studying “what ChatGPT is doing inside”. But perhaps we’re just looking at the “wrong variables” (or wrong coordinate system) and if only we looked at the right one, we’d immediately see that ChatGPT is doing something “mathematical-physics-simple” like following geodesics. But as of now, we’re not ready to “empirically decode” from its “internal behavior” what ChatGPT has “discovered” about how human language is “put together”.

So, the ‘hidden secret’ of successful large language models may very well be a combination of logic and geometry. Does this sound familiar?

If you prefer watching YouTube over reading a book, or if you want to see the examples in action, here’s a video by Stephen Wolfram. The stream starts about 10 minutes into the clip, and the whole lecture is pretty long, well over 3 hours (about as long as it takes to read What Is ChatGPT Doing … and Why Does It Work?).


Stella Maris (Cormac McCarthy)

This week, I was hit hard by synchronicity.

Lately, I’ve been reading up a bit on psychoanalysis, have tried to get through Grothendieck’s La clef des songes (the key to dreams), and am in the process of writing a series of blog posts on how to construct a topos of the unconscious.

And then I read Cormac McCarthy‘s novels The passenger and Stella Maris, and got hit.



Stella Maris is set in 1972, when the math-prodigy Alicia Western, suffering from hallucinations, admits herself to a psychiatric hospital, carrying a plastic bag containing forty thousand dollars. The book consists entirely of dialogues, the transcripts of seven sessions with her psychiatrist Dr. Cohen (nomen est omen).

Alicia is a doctoral candidate at the University of Chicago who got a scholarship to visit the IHES to work with Grothendieck on toposes.

During the psychiatric sessions, they talk on a wide variety of topics, including the nature of mathematics, quantum mechanics, music theory, dreams, and the unconscious (and its role in doing mathematics).

The core question is not how you do math but how does the unconscious do it. How it is that it’s demonstrably better at it than you are? You work on a problem and then you put it away for a while. But it doesnt go away. It reappears at lunch. Or while you’re taking a shower. It says: Take a look at this. What do you think? Then you wonder why the shower is cold. Or the soup. Is this doing math? I’m afraid it is. How is it doing it? We dont know. How does the unconscious do math? (page 99)

Before going to the IHES she had to send Grothendieck a paper (‘It was an explication of topos theory that I thought he probably hadn’t considered.’ page 136, and ‘while it proved three problems in topos theory it then set about dismantling the mechanism of the proofs.’ page 151). At the IHES ‘I met three men that I could talk to: Grothendieck, Deligne, and Oscar Zariski.’ (page 136).

I don’t know whether Zariski visited the IHES in the early 1970s, and while most historical allusions (to Grothendieck’s life, his role in Bourbaki, etc.) are correct, Alicia mentions the ‘Langlands project’ (page 66), which may very well have been the talk of the town at the IHES in 1972; but the mention of Witten (‘Grothendieck writes everything down. Witten nothing.’ page 100) raised an eyebrow.

The book also contains these two nice attempts to capture some of the essence of topos theory:

When you get to topos theory you are at the edge of another universe.
You have found a place to stand where you can look back at the world from nowhere. It’s not just some gestalt. It’s fundamental. (page 13)

You asked me about Grothendieck. The topos theory he came up with is a witches’ brew of topology and algebra and mathematical logic.
It doesnt even have a clear identity. The power of the theory is still speculative. But it’s there.
You have a sense that it is waiting quietly with answers to questions that nobody has asked yet. (page 68)

I read ‘The passenger’ first, which is probably the better order, as you then already know some of the ghosts haunting Alicia, but it’s not a must if you are only interested in their discussions about the nature of mathematics. Be warned that it is a pretty dark book, better not read when you are already feeling low, and it should come with a link to a suicide prevention line.

Here’s a more considered take on Stella Maris:


Loading a second brain

Before ChatGPT, the hype among productivity boosters was the PKM, or Personal Knowledge Management system.

It gained popularity through Tiago Forte’s book ‘Building a second brain’, and (for academics perhaps a more useful read) ‘How to take smart notes’ by Sönke Ahrens.



These books promote new techniques for note-taking (and for storing these notes) such as the PARA-method, the CODE-system, and Zettelkasten.

Unmistakable Creative has some posts on the principles behind the ‘second brain’ approach.

Your brain isn’t like a hard drive or a dropbox, where information is stored in folders and subfolders. None of our thoughts or ideas exist in isolation. Information is organized in a series of non-linear associative networks in the brain.

Networked thinking is not just a more efficient way to organize information. It frees your brain to do what it does best: Imagine, invent, innovate, and create. The less you have to remember where information is, the more you can use it to summarize that information and turn knowledge into action.

and

A network has no “correct” orientation and thus no bottom and no top. Each individual, or “node,” in a network functions autonomously, negotiating its own relationships and coalescing into groups. Examples of networks include a flock of birds, the World Wide Web, and the social ties in a neighborhood. Networks are inherently “bottom-up” in that the structure emerges organically from small interactions without direction from a central authority.

-Tiago Forte, Tagging for Personal Knowledge Management

There are several apps you can use to start building your second brain; the most popular seem to be Roam Research, LogSeq, and Obsidian.

These systems allow you to store, link and manipulate a large collection of notes, query them as a database, modify them in various ways via plugins or scripts, and navigate the network created via graph-views.

Exactly the kind of things we need to modify the simple system from the shape of languages-post into a proper topos of the unconscious.

I’ve been playing around with Obsidian which I like because it has good LaTeX plugins, powerful database tools via the Dataview plugin, and one can execute codeblocks within notes in almost any programming language (python, haskell, lean, Mathematica, ruby, javascript, R, …).

Most of all it has a vibrant community of users, an excellent forum, and a well-documented Obsidian hub.

There’s just one problem: I’m a terrible note-taker, so how can I begin to load my ‘second brain’?

Obsidian has several plugins to import data, such as your Kindle highlights, your Twitter feed, your Readwise-data, and many others, but having been too lazy in the past, I cannot use any of them.

In fact, the only useful collection of notes I have are my blog-posts. So, I’ve uploaded NeverEndingBooks into Obsidian, one note per post (admittedly, not very Zettelkasten-like), half a million words in total.

Fortunately, I did tag most of these posts at the time. Together with other meta-data this results in the Graph view below (under ‘Files’ toggled tags, under ‘Groups’ three tag-colours, and under ‘Display’ toggled arrows). One can add colour-groups based on tags or other information (here, red dots are posts tagged ‘Grothendieck’, the blue ones are tagged ‘Conway’, the purple ones tagged ‘Connes’, just for the sake of illustration). In Obsidian you can zoom into this graph, place a pointer on a node to highlight the connecting dots, and much more.
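You can play a similar game outside Obsidian, in Mathematica. Here’s a minimal sketch that builds the post-tag graph from the exported markdown notes and colours a few tags; it assumes each note carries a front-matter line of the form tags: [Grothendieck, Conway] (adapt it to whatever your exporter writes), and the vault path is a placeholder:

(* sketch: a post-tag graph from the exported markdown notes; assumes each
   note has a front-matter line like "tags: [Grothendieck, Conway]" *)
vault = "/path/to/your/vault";   (* placeholder path *)
posts = FileNames["*.md", vault, Infinity];
tagsOf[f_] := Module[{line},
   line = SelectFirst[StringSplit[Import[f, "Text"], "\n"],
      StringStartsQ[#, "tags:"] &, ""];
   StringTrim /@ DeleteCases[
      StringSplit[StringDelete[line, "tags:" | "[" | "]"], ","], ""]];
edges = Flatten[Table[UndirectedEdge[FileBaseName[f], t], {f, posts}, {t, tagsOf[f]}]];
Graph[edges, VertexLabels -> Placed["Name", Tooltip],
 VertexStyle -> {"Grothendieck" -> Red, "Conway" -> Blue, "Connes" -> Purple}]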



Because I tend to forget such things, and as it may be useful to other people running a WordPress-blog making heavy use of MathJax, here’s the procedure I followed:

1. Follow the instructions from Convert wordpress articles to markdown.

In the wizard I’ve opted to go only for yearly folders, to prefix posts with the date, and to save all images.

2. This gives you a directory with one folder per year containing markdown versions of your posts, and in each year-folder a subfolder ‘img’ containing all images.

Turn this directory into an Obsidian-vault by opening Obsidian, click on the ‘open another vault’ icon (third from bottom-left), select ‘Open folder as vault’ and navigate to your directory.

3. You will notice that most of your LaTeX cannot be parsed because during the markdown conversion backslashes are treated as special characters, resulting in two backslashes for every LaTeX command…

A remark before trying to solve this: another option might be to use the wordpress-to-hugo-exporter, which produces clean LaTeX, but it doesn’t let you opt for yearly folders (it dumps all posts into one folder) and it makes a mess of the image files.

4. So, we will need to do a lot of search-and-replaces in all files, and need a convenient tool for this.

The first option was the Sublime Text app, which is free and does the search-and-replaces quickly. The problem is that you have to save each of the files, one at a time! This may take hours.

I’ve done it using the Search and Replace app ($3), which allows you to make several searches/replaces at the same time (I had messed up LaTeX code in previous exports, so needed many more changes). It warns you that it is dangerous to replace strings in all files (which is the reason Sublime Text makes it difficult); you can ignore the warning, but only after you put the ‘img’ folders away in a safe place. Otherwise it will also try to make the changes to these files, recognise that they are not text files, and drop them altogether… A minimal Wolfram Language sketch of such a batch replace, for those who prefer to stay inside Mathematica, is given below.
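For completeness, here’s that batch replace as a minimal Wolfram Language sketch, assuming the only fix needed is undoing the doubled backslashes. The vault path is a placeholder, files whose path contains ‘img’ are skipped, and you should back up the vault before running it:

(* sketch: replace every doubled backslash by a single one in all markdown
   files of the vault, skipping anything whose path contains "img"; back up first! *)
vault = "/path/to/your/vault";   (* placeholder path *)
files = Select[FileNames["*.md", vault, Infinity],
   ! StringContainsQ[#, "img"] &];
Do[Export[f, StringReplace[Import[f, "Text"], "\\\\" -> "\\"], "Text"],
  {f, files}]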

That’s it.

I now have a backup network-version of this blog.



As we mentioned in the previous post, a first attempt to construct the ‘topos of the unconscious’ might be to start with a collection of notes (the ‘conscious’) and work on the semantics of text-snippets to unravel (a part of) the unconscious underpinning of these notes. We also mentioned that the poset structure in that post should be replaced by a more involved network structure.

What interests me most is whether such an approach might be doable ‘in practice’, and Obsidian looks like the perfect tool to try this out.

What we need is a sufficiently large set of notes, of independent interest, to inject into Obsidian. The more meta it is, the better…

(tbc)

Previously in this series:

Next:

The enriched vault
