Weekend links: Govern AI – and govern with AI

The first link this week is a long read about the future of democracy and the part AI will play in it: not only as a subject that needs to be governed, in one way or another, but also as a tool for building democracy. I think of this as a duality that's important to keep in mind: we not only need to find effective ways to regulate artificial intelligence that limit its possible drawbacks while allowing its beneficial uses, we also need to think about how AI can help us get there.

More links next weekend!

Reimagining Democracy for AI

I don’t know nearly enough about innovation in democratic systems and what’s possible in that domain. But this piece, about governing AI (and other issues) and about using AI as a tool in the democratic process, I found really interesting.

Unbundling AI

Lately, I’ve read a lot of papers, articles, and blog posts on where LLMs will go next. I’ve even written one myself, though it has yet to be translated into English.

This one is by Benedict Evans, and it offers a couple of good metaphors for why chat probably isn’t the best way to interact with an LLM. Too much of a blank canvas (which is really what the chat input is) becomes a constraint of its own.

Within a couple of months, I expect we’ll see LLMs become available in many more of the tools we already use, rather than us leaving our context to enter a chat with an LLM.

AI safety guardrails easily thwarted, security study finds

Fine-tuning LLMs turns out to be a way to break their safeguards. Obviously more research, much more research, is needed in this field, and one way to get there is through openly available models that give more researchers the study objects they need. To me, the closed alternative seems a lot like the old “security by obscurity” idea, which didn’t turn out that well.

Large language models, explained with a minimum of math and jargon

With everything going on around LLMs, from usage to implications, I think we’ll all be better off with a bit of understanding of how the models work. This explainer is one of the better ones I’ve read.

Can human moderators ever really rein in harmful online content? New research says yes

Having human moderators keep up with harmful content on social media seems like an impossible task. This study indicates that “Trusted Flaggers” (part of the EU’s Digital Services Act) can significantly reduce the spread. One reason is that moderation teams don’t always understand the cultural context of the markets they moderate, while the Trusted Flaggers do.

People are obsessed with Obsidian, the darling of notetaking apps

Since 2021, Obsidian has been the tool I use the most on my devices: for project management, for note-taking, for articles, pretty much anything related to text. This is the story of Obsidian, told by both the company’s CEO and some of the community’s better-known members.

Top Stories Daily

If you find yourself at the intersection of the Venn diagram of “RSS”, “News”, and “Fediverse”, I think you should have a look at this RSS feed of the top stories shared on the Fediverse, generated by @murmel_social@mastodon.social.

Flipped coins found not to be as fair as thought

And to wrap up, some important research for all the coin-flippers out there: heads vs. tails is NOT a 50/50 chance. A flipped coin is slightly more likely to land on the same side it started on.
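
If you want a feel for how small that bias is, here is a minimal simulation sketch in Python, assuming the roughly 0.508 same-side probability the researchers estimated (the exact figure is theirs, not derived here):

```python
import random

# Minimal sketch: simulate the same-side bias described in the article.
# SAME_SIDE_PROB = 0.508 is an assumption taken from the study: the
# estimated probability that a coin lands on the side it started on.
SAME_SIDE_PROB = 0.508
N_FLIPS = 1_000_000

same_side = sum(random.random() < SAME_SIDE_PROB for _ in range(N_FLIPS))
print(f"Share landing on starting side: {same_side / N_FLIPS:.4f}")
print(f"Deviation from a fair 50/50:    {same_side / N_FLIPS - 0.5:+.4f}")
```

Run it a few times: the same-side share hovers around 0.508 rather than 0.500, the kind of edge that only becomes visible over very many flips.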
