Note-taking, note-thinking, AI, and peripheral vision
I take notes for a couple of reasons. The most important is to help me think: to bring structure to what I read, listen to, and watch, to what colleagues say in meetings, and to my own random thoughts while driving to work. It's not so much "note-taking" as it is "note-thinking".
If all you do is store quotes, you are only getting halfway, or probably even less. I want to put my notes to work, to use them. Having highlights and annotations is only the starting point for your own thinking process. Or as Bob Doto phrased it in an essay last summer: "Your zettelkasten should be made up of your ideas about the author's ideas".
In practice, an important part is finding connections between thought fragments in your note collection. But finding the connections isn't enough; making meaning from them is what matters. It's in the idea that links note A to note B that the value is created.
Casey Newton touched on this in his newsletter Platformer recently, when he argued that note-taking apps don't make us smarter:
> Thinking takes place in your brain. And thinking is an active pursuit.
The sub-header for Newton's piece is a question: "They're designed for storage, not sparking insights. Can AI change that?"
My answer is yes, AI will – and already has. My experience is that note-taking apps do make me smarter. But at the same time, Newton nails why: it's not the note-taking part of a note-taking app that provides its true value, it's how well the tool assists in thinking.
Newton links to a text by Andy Matuschak that I had already taken notes on. In it, Matuschak argues that peripheral vision is an important feature of physical notes that is hard to replicate in a digital context. When sitting at a desk working with ideas on Post-It notes and index cards, it's easy to group them together, form clusters, and build hierarchies. As you do, some clusters become more important than others and take center stage. But the others are still there, in your peripheral vision.
Reading Matuschak, I realized that I've been striving for peripheral vision in my note-taking process for years – and slowly getting there. In Obsidian, there are at least four approaches I use:
- Embedded queries to list notes containing certain keywords related to the text I'm working on.
- The local graph to visualize links I've already created.
- The canvas to build a more spatial overview of how things relate to each other (the approach that most closely resembles Matuschak's peripheral vision in physical space).
- Smart Connections to surface possible connections between my notes.
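To illustrate the first approach: Obsidian's built-in Search plugin lets you embed a live query in a note with a fenced `query` block, so the note always shows an up-to-date list of matches. The search terms below are just an example:

```query
"peripheral vision" OR zettelkasten
```

When the note is in reading view, this block renders as the current list of notes matching the query, which is what keeps related material within sight while I write.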
Smart Connections is the latest addition to my toolset. It uses OpenAI's API to create embeddings of my notes and, based on them, suggests notes that are similar to the active one. OpenAI's language models can't think on my behalf, but helping me find possible connections between my notes is just as important.
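The core of this kind of suggestion step is simple to sketch. The snippet below is my own toy illustration, not Smart Connections' actual code: it ranks notes by cosine similarity between embedding vectors. The note titles and three-dimensional vectors are made up for the example; a real tool would fetch high-dimensional vectors from an embeddings API instead.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def suggest_similar(active_vector, note_vectors, top_n=3):
    """Rank notes by similarity to the active note's vector."""
    scored = [
        (title, cosine_similarity(active_vector, vec))
        for title, vec in note_vectors.items()
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_n]

# Toy embeddings (real ones have hundreds of dimensions):
notes = {
    "On peripheral vision": [0.9, 0.1, 0.2],
    "Grocery list":         [0.0, 1.0, 0.1],
    "Zettelkasten ideas":   [0.8, 0.2, 0.3],
}
active = [0.85, 0.15, 0.25]
print(suggest_similar(active, notes, top_n=2))
```

The point of the sketch is that the machine only surfaces candidates; deciding whether a suggested pairing means anything is still up to me.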
The text files with highlights, annotations, and random thoughts I have in Obsidian are counted in the thousands. I can't remember them all. If I could, there would be no use in writing them down in the first place.
So what Smart Connections, and the other methods listed above, do is give me suggestions: "Based on the note you are working on right now, these are other notes that have some similarities with it." And when I have a look at those notes – doing the active pursuit Newton is talking about – I decide whether the relationship between them is relevant in the context I'm currently working in.
Smart Connections, or really any of the note-taking tools I've tried or read about, can help me see that note A relates to note B. But it's me who adds the important part: why they relate. It's in the "because X" where the value lies.
The tools act like a metal detector on a beach treasure hunt: they can signal where an object of possible interest lies, but only I can decide whether it has any value.