Raquel Fernández: talking to your computer as if it were human
You may have seen one while shopping online: you scroll a bit and a pop-up suddenly appears in the corner of your screen, asking, ‘Is there anything I can help you with?’ These ‘conversational agents’ or chatbots are like digital assistants meant to make our lives easier. The question is, do they?
Raquel Fernández, full professor at the Institute for Logic, Language and Computation, is studying how to make computers communicate more like real people do.
‘We humans are constantly using language. This way of communicating is a fascinating thing,’ Raquel says with enthusiasm. ‘You produce sounds that another person hears and this allows you to transmit ideas and cooperate. I want to understand how humans do this and then capture it in computer models. That way, it will be possible to talk to a computer as if it were a person.’
Raquel praises the interdisciplinary nature of the field. ‘We’re working with Communication Sciences, for instance, to see whether conversational agents can adapt their language use to different age groups.
First, we looked at whether subjects of varying ages used different styles or slang that could be detected automatically. That was indeed the case. The conversational agent was then able both to detect this variation in language use and to adjust its own language accordingly, depending on whether it was speaking with an older or a younger user.
We’re now at the point where it’s time to let this adaptive agent communicate with real people and see whether they find it a more pleasant experience. Does the digital agent seem more human to them? Does it feel like a conversation with a real person? That study is scheduled to take place this year. I’m extremely curious to see the results.’
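The adaptive loop Raquel describes, first detecting the user’s style and then adjusting the agent’s own language, can be sketched in miniature. The toy Python example below is purely illustrative: the marker words, response templates, and function names are all invented here, and the actual study would presumably rely on learned models rather than word lists.

```python
# Toy sketch of a style-adaptive agent (illustrative only; not the
# models used in the study described above). It guesses a user's
# style group from simple lexical cues and mirrors that style back.

# Hypothetical marker words for each style group.
STYLE_MARKERS = {
    "younger": {"lol", "tbh", "btw", "cool"},
    "older": {"kindly", "regards", "whereas", "splendid"},
}

# Hypothetical register-matched response templates.
RESPONSES = {
    "younger": "np! here's what i found:",
    "older": "Certainly. Here is the information you requested:",
    "neutral": "Here is what I found:",
}

def detect_style(utterance: str) -> str:
    """Count style-marker hits and return the best-matching group."""
    tokens = set(utterance.lower().split())
    scores = {group: len(tokens & markers)
              for group, markers in STYLE_MARKERS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "neutral"

def reply(utterance: str) -> str:
    """Adapt the agent's register to the detected user style."""
    return RESPONSES[detect_style(utterance)]

print(reply("tbh i just need the opening hours lol"))
print(reply("Could you kindly tell me the opening hours?"))
```

In a real system the detection step would be a trained classifier over many linguistic features, and the adaptation step would condition a generation model on the detected group, but the two-stage shape — detect, then adapt — is the same.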
Computer vision also plays a very real part in Raquel’s work. ‘In conversations, people constantly refer to things we perceive around us. This is not a problem when you’re talking to another person, but it could be if you were to ask your smart assistant: “What kind of flower is that?” The assistant would have no idea what you were talking about.
We still have a long way to go in this regard. Visual information could be really useful in giving conversations with your smart assistant that human touch.’ Raquel mentions an example: preparing a recipe together. ‘With visual cues, the smart assistant could not only tell you the ingredients, it could also warn you if you’re going to need a bigger pan. Then it becomes a genuine cooperative effort.’