A lot has been said about potential risks with generative AI. Lately, much of what I read is grounded in research rather than gut instinct. The conclusion is often (most of the time, even) that generative AI won’t change the playing field for disinformation that much. The reason is that distribution, rather than creation, is the bottleneck. This is the latest text I’ve read, and it also outlines a couple of beneficial use cases for LLMs in this space.
On a related topic, The Verge had Lawrence Lessig on its podcast. Many interesting ideas on democracy and the need to innovate our democratic systems.
I’ve just started to build my understanding of embeddings and how I can make use of them. In that process, this primer by Simon Willison is one of the best I’ve read. Really looking forward to trying to implement this in my Obsidian vault.
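For my own notes: the core idea I took away is that an embedding model turns a piece of text into a vector, and similar texts end up with vectors pointing in similar directions, which you can measure with cosine similarity. A minimal sketch with toy, hand-made vectors (a real setup would get the vectors from an actual embedding model, and the note names are hypothetical):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot(a, b) / (|a| * |b|).
    # 1.0 means same direction, 0 means unrelated, -1 opposite.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings"; real models emit hundreds of dimensions.
note_on_solar = [0.9, 0.1, 0.3]
note_on_energy = [0.8, 0.2, 0.4]   # similar topic, similar direction
note_on_cooking = [-0.7, 0.9, -0.2]  # different topic, points elsewhere

# The semantically closer pair should score higher.
print(cosine_similarity(note_on_solar, note_on_energy))
print(cosine_similarity(note_on_solar, note_on_cooking))
```

For a vault of notes, you’d embed every note once, store the vectors, and rank notes by this score against a query vector.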
I usually don’t post much about what's going on over at Twitter. But this piece by Ben Evans is worth sharing.
How do you decide when AI is good enough and when it isn't? Ethan Mollick suggests a thought pattern grounded in how the Best Available Human would perform. I think this is useful. Rather than demanding AI that's always 100 percent spot-on, this is a more realistic and pragmatic way of thinking.
If the first solar entrepreneur hadn't been kidnapped, would fossil fuels have dominated the 20th century the way they did?
While it might feel painful to ponder this great “what if” as the climate breaks down in front of our eyes, it can arm us with something useful: the knowledge that drawing energy from the sun is nothing radical or even new. It’s an idea as old as fossil fuel companies themselves.
A fascinating story about a kidnapped inventor in the early 1900s.