Using language sciences for social good
Language technologies such as ChatGPT are developing rapidly. Yet we don't really know how they work, and they are not always designed to benefit society.
When ChatGPT provides an answer, we can't be sure it's accurate. OpenAI, its creator, does not guarantee truthfulness. Similarly, language technologies used by employers to screen CVs may be biased.
These technologies are built on self-learning algorithms: a form of AI trained on large sets of example inputs and outputs. Once trained, it is often unclear why such an algorithm chooses one answer over another, as the sketch below illustrates.
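To make the idea concrete, here is a minimal sketch in Python (assuming the scikit-learn library; the CV snippets and hiring decisions are hypothetical, invented for illustration and not taken from the project). It trains a tiny model purely on example inputs and outputs; the resulting decision rule is a set of learned numerical weights rather than rules anyone wrote down.

```python
# A toy "self-learning" text classifier: it infers its own decision rule
# from example inputs and outputs instead of following human-written rules.
# Assumes scikit-learn; the data below is hypothetical.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Example inputs (CV snippets) paired with example outputs (past decisions).
cvs = [
    "ten years of software engineering experience",
    "career break, recently retrained as a developer",
    "led a large engineering team at a bank",
    "self-taught programmer with open-source contributions",
]
decisions = ["invite", "reject", "invite", "reject"]  # possibly biased history

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(cvs, decisions)  # the model derives its own decision rule

# The model now decides new cases, but its "reasoning" is just learned word
# weights. In this toy they can still be inspected; in real systems with
# billions of parameters, no one can read off why one answer was chosen.
print(model.predict(["five years of engineering experience, career break"]))
```

If the example decisions encode past bias (say, against the phrase "career break"), the model quietly learns and reproduces it: exactly the CV-screening risk mentioned above.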
Floris Roelofsen, professor at the Institute for Logic, Language and Computation at LAB42, leads the new large-scale research project ‘Language Sciences for Social Good’ (LSG). The project aims to ensure that language technology serves the social good, not just cost-efficiency and performance.
Language technology already has a great deal of impact, yet it is still lacking in the areas where it could genuinely benefit society, according to Roelofsen. The LSG project therefore has a threefold goal:
- To use more responsible methods in developing language technology;
- To ensure that language technology can contribute to a safe society;
- To ensure that language technology can contribute to an inclusive society.