Weekend links: Open vs closed AI in the wake of Biden's EO

Dall-E prompt: Visualize the concept of regulating artificial intelligence, focusing on the debate between closed versus open foundational models. The scene shows two distinct halves. On the left, a metallic, sealed vault represents closed foundational models, with intricate locks and digital code streaming around it, symbolizing restricted access. On the right, an open, flourishing digital tree with circuit-like branches and leaves represents open foundational models, surrounded by light and data points flowing freely to symbolize openness and accessibility. The background is a digital landscape, with abstract binary code subtly visible, hinting at the underlying theme of AI regulation. No humans are present in the illustration.

With Joe Biden signing an Executive Order on AI, and the UK government hosting a high-level meeting on AI safety, it should come as no surprise that this week's linkblog is mainly about artificial intelligence.

With these two events as the backdrop, much of the debate now focuses on the risks and benefits of open foundational models. The Mozilla Foundation has published an open letter that reads completely differently from the open letters published in the spring. Its signatories aren't expressing fear of human extinction so much as concern about the risks of closed models:

The idea that tight and proprietary control of foundational AI models is the only path to protecting us from society-scale harm is naive at best, dangerous at worst.

Over at X, Percy Liang, Associate Professor of Computer Science at Stanford, posted a great thread outlining the benefits of open models. And on their excellent blog/newsletter AI Snake Oil, Arvind Narayanan and Sayash Kapoor of Princeton discuss openness as it's framed in the Executive Order.

Much of what I've read during the week is pretty pessimistic about the Executive Order. Some see it as successful regulatory capture by some of the current giants in the field. Steven Sinofsky's teardown of the Executive Order on AI is the most detailed I've seen yet. He highlights a lot of issues, pointing in many different directions, and the piece is well worth reading.

And here is Ben Thompson's take on Biden's EO. Thompson is a tech analyst who has published his Stratechery newsletter for years, and he is among the best I regularly read when it comes to putting technology into context. In this analysis, he isn't particularly positive about what the White House is trying to achieve. He draws a comparison between Gates's and Jobs's approaches to mobile, and offers some interesting reasoning about the regulatory capture that many worry about at the moment.

Casey Newton is a bit more optimistic, even though he also concludes that the EO plays into the hands of the biggest players. In a follow-up, Newton expands on why regulation is needed, arguing that any regulation benefits the incumbents, but so does little or no regulation: consider Apple vs Spotify, or Meta's dominant role in social media.

I think we can conclude that regulation is needed.

The hard part is figuring out how to do it right.

Finally, not related to regulation but still relevant in the context of the links above: two weeks ago I shared a link to the Foundation Model Transparency Index (FMTI), a transparency index for foundational models developed at Stanford, MIT, and Princeton. Since the FMTI was published a few weeks ago, I've seen some criticism of the index, but none as elaborate and detailed as this blog post from EleutherAI.
