Weekend links: AI transparency, and AI and misinformation
While much of the public debate about AI risks is frustratingly vague, there is also a lot going on around AI safety that is much more specific: among other things, research aimed at building a better understanding of how models are built and how they function, but also of what effects AI has on society.
Researchers from Stanford, MIT, and Princeton have created a transparency index for foundation models, using 100 different indicators to measure how transparent a model is. This matters at a time when a lot of new products and services are built on models trained elsewhere.
And another group of researchers has looked at the extent to which generative AI models can be expected to affect misinformation. Their answer: not as much as one would guess.
Other interesting things from the week:
What people ask me most. Also, some answers.
Ethan Mollick, one of the writers on LLMs and their use that I read the most at the moment, has just published a FAQ on where the technology stands today. Worth reading.
How I think about LLM prompt engineering
Can LLMs be framed as databases holding a vast number of programs ready to be run? Thinking about large language models that way also explains why prompt engineering is a thing we are talking about: different prompts launch different programs.
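To make that framing concrete, here is a minimal sketch in Python. The `run_llm` function is a hypothetical stand-in for whichever LLM API you actually call; the point is only that the same model with a different prompt behaves like a different program run on the same input.

```python
# A minimal sketch of the "LLM as a database of programs" framing.
# `run_llm` is a hypothetical placeholder for a real LLM API call;
# here it just echoes the prompt that would be sent.

def run_llm(prompt: str) -> str:
    """Placeholder for a call to an actual LLM API."""
    return f"[model output for prompt: {prompt!r}]"

# Each prompt template acts like a program key: it selects which of the
# model's many latent behaviours gets run on the input text.
PROGRAMS = {
    "translate_to_french": "Translate the following text to French:\n\n{text}",
    "summarize": "Summarize the following text in one sentence:\n\n{text}",
    "extract_names": "List every person named in the following text:\n\n{text}",
}

def launch(program: str, text: str) -> str:
    """Same model, same input; the prompt decides which 'program' runs."""
    prompt = PROGRAMS[program].format(text=text)
    return run_llm(prompt)

if __name__ == "__main__":
    article = "Ada Lovelace wrote the first published algorithm for Babbage's engine."
    for name in PROGRAMS:
        print(name, "->", launch(name, article))
```

Prompt engineering, in this view, is the craft of finding the key that launches the program you actually want.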
Apple GPT: What We Know About Apple's Work on Generative AI - MacRumors
Of the biggest players out there, the least is known about Apple's work on AI. MacRumors has some hints on what might come.
Rock, paper, scissors
The game implemented as a series of YouTube videos. Excellent work by CGP Grey!