We need to refine our discourse on artificial intelligence

Instead of mocking tools that aren’t suitable for every task, consider what they can actually be used for and why.

OpenAI's ChatGPT, Microsoft's Bing, Google's Bard, Anthropic's Claude – AI is part of the public conversation like never before, thanks to large language models and the chatbots built on top of them. As such, it's worth reminding ourselves that discussions about AI benefit from specificity – as most conversations do. The clearer we are about which opportunities and challenges we're really discussing, the easier it becomes to grapple with them.

Several years ago, I wrote an article about the term "digitalization". One of the people I interviewed reminded me that there is no single entity known as "the digitalization". Digitalization encompasses many different things depending on the industry or societal sector and on how technology is used, or could be used, within each context. Digitalization in manufacturing differs from digitalization in healthcare, which in turn differs from digitalization in schools.

And we can break this down even further. In public debates, "digitalization in schools" is often synonymous with screens in classrooms. But digitalization in schools also includes administrative tools, new ways of understanding pedagogical approaches, and of course, teaching about the technology itself.

The development of artificial intelligence has advanced so far that it has moved beyond tech companies' R&D departments. It is now being packaged into various tools that can be used in numerous ways depending on needs. Yet discussions still often revolve around the umbrella term AI, although the chatbots have somewhat narrowed the conversation recently.

One major benefit of moving from sweeping abstraction ("AI in manufacturing") to more concrete boundaries ("image analysis for quality assurance of paint coatings on metal parts") is that the conversation can shift from the technology itself to its usage and consequences.

This shift also allows more people to feel comfortable joining the discussion, enabling them to share valuable experience and thoughts about how an operation works and what could be improved. And this is incredibly important.

Sometimes AI provides the solution; other times, different approaches or technologies might be more suitable. But neither the technical expert nor the employee with domain knowledge about a specific need is best suited to make that call alone. Here, one plus one often equals three. The person with domain knowledge can outline needs, while the AI expert can suggest possible solutions. Conversely, an AI expert's description of technology can help domain experts envision entirely new ways of doing things.

As the development of artificial intelligence charges into the future, we need conversations about AI that include as many different skills as possible and that discuss specific usage areas for the technology. This holds true not only within individual organizations, both private and public, but also at a societal level. Those who understand the technology are undoubtedly crucial participants – but equally important are those who understand operations and needs. And let's not forget customers and citizens themselves.

When we don't just talk about technology in a vacuum but in relation to individual applications, we can start weighing costs against value. Many examples from chats with LLMs reveal laughable factual errors. But if chat results aren't placed within their intended context, it's hard to gauge how problematic these mistakes actually are.

💡 Curious about how LLMs work? Don't miss the Financial Times' excellent primer Generative AI exists because of the transformer.

If I, as a journalist, use an AI chatbot to create fact boxes, it would be incredibly naive of me to take the chatbot's responses at face value without fact-checking them myself. However, when I use it as a creative sounding board – for instance, by pasting an article draft into the chat and asking for ten headline suggestions – the risk/value calculation changes dramatically.

For language models specifically, this judgement is further complicated because "text" is itself a rather vague category. Text comes in many forms and sometimes serves as a tool in its own right. It can be used to tell a fictional story, convey new factual information, or assist in a thought process.

I have on several occasions used ChatGPT to explore various angles of a topic in the same way I would with colleagues over coffee – only with ChatGPT, the conversation is always available. And it feels like conversing with a collective given all the training material that has laid the groundwork for the model.

Right now, I believe this is one of the most important lessons we can learn from generative chat models. Let them act as a kind of test environment for how we'll need to approach different types of AI models in the future: how much effort is required to get an acceptable end result for a specific application?

Instead of mocking tools that aren’t suitable for every task, consider what they can actually be used for and why. You'll arrive at this understanding more readily by moving from general to specific discussions about potential benefits and associated challenges within your organization – or for yourself.

As we continue to develop and implement AI technologies, it's crucial that we focus our discussions not just on the technology itself, but on its specific applications and implications. By doing so, we can better understand its value and potential pitfalls in various contexts.
