Using language sciences for social good

12.07.24

Language technologies such as ChatGPT are developing rapidly. But we don't really know how they work, and they are not always designed to serve society.

When ChatGPT provides an answer, we can't be sure it's accurate. OpenAI, its creator, does not guarantee truthfulness. Similarly, language technologies used by employers to screen CVs may be biased.

These technologies rely on self-learning algorithms: a type of AI trained on example inputs and outputs. Often it is unclear why such an algorithm chooses one answer over another.
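To make the "trained on example inputs and outputs" idea concrete, here is a minimal Python sketch using scikit-learn. The CV snippets and screening labels are invented for illustration; this shows supervised learning in general, not any specific system mentioned in the article.

```python
# Minimal sketch of a self-learning algorithm: a classifier fits itself to
# example inputs (texts) and outputs (labels). Data below is hypothetical.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Example inputs paired with example outputs (invented screening decisions).
texts = [
    "ten years of management experience",
    "recent graduate, eager to learn",
    "led a team of twenty engineers",
    "internship in customer support",
]
labels = ["invite", "reject", "invite", "reject"]

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)  # the algorithm infers a mapping from the examples

print(model.predict(["managed a small team"]))
# The answer comes from learned numeric weights; nothing in the model
# explains *why* it chose one label over another, and any bias in the
# example data is absorbed into those weights.
```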

Floris Roelofsen, professor at the Institute for Logic, Language and Computation at LAB42, leads the new large-scale research project 'Language Sciences for Social Good' (LSG). The project aims to ensure that language technology serves the social good, not just cost-efficiency and results.

'Language technology is booming; it's really transforming society. With ChatGPT it's very clear, but this development has been going on for the past 25 years. We have been using language technology when we search on Google, when we translate pieces of text, or when we use voice assistants on our phones.'
Floris Roelofsen, professor

Language technology already has a major impact. Yet in the areas where it could benefit society most, it still falls short, according to Roelofsen. The LSG project therefore has a threefold goal:

  1. To use more responsible methods in developing language technology;
  2. To make sure that language technology can contribute to a safe society;
  3. To make sure that language technology can contribute to an inclusive society.

This is the introduction to a blog item on the Institute for Logic, Language and Computation site. Read the full article.