Communicating with AI Systems
Homework assistance, health app, or daily planner – with Alexa, Siri, and ChatGPT, new technologies are taking over our daily lives. How does artificial intelligence (AI) change our communication? And how does it influence our relationships? These questions were addressed by a study within the context of the research project IMPACT, headed by Prof. Dr Nicole Krämer.
Over a period of four years, experts from computer science, psychology, ethics, and law investigated what users know about modern technology, how they can use it in a self-determined manner, and whether laws need to be changed. They pursued an integrated approach involving three groups: children, adults, and seniors all took part in the studies, so interaction with AI was considered across the entire life span.
‘We now understand better how we respond to artificially intelligent machines that speak,’ says the social psychologist Krämer. The study was accompanied by a citizen science project: ‘It was important to us that this socially relevant topic was also studied by an active citizenry.’ Two workshops shed light on further perspectives.
From a scientific perspective, providers will increasingly humanise human-machine relationships in the future. Users will have to respond to this in a competent and informed manner. Here are a few findings and suggestions for action, based on the three research foci of the study:
Lack of transparency
People are often unaware of how much data is being collected. ‘We give our okay in good faith,’ says Krämer. ‘But many people lack an adequate understanding of the technology.’
One solution is to disclose less data in interactions and to find legal regulations that require companies to store less data – and also to explain what goes on inside the machine. Who would let their child play with a robot without knowing what it is capable of? Not everyone reads the manual, but the scientists found ‘that explanations can also be complex if this makes comprehensible how the digital assistants work. Surprisingly, the comprehension was best for the most complicated explanation.’
When grandma no longer reads to you: Sometimes the virtual assistant is seen as a family member. Things that would otherwise be done by relatives are gratefully accepted from the machine as well. Children use the devices even before they can read and write. ‘They anthropomorphise the assistant, carelessly tell it secrets, or believe that Alexa has her knowledge from the Alexa school.’ This is also problematic from an ethical perspective: ‘Everyone should be aware that they are talking to a machine.’ Parents have a great responsibility in this regard.
‘Please’ and ‘thank you’ as polite anchors: People who regularly speak with a machine change their communication behaviour. For example, many adjust their way of speaking to the assistant. Here it helps if the systems work only after a ‘please’ or with complete sentences.

The closing event for IMPACT this spring also addressed legal and ethical concerns and the consequences of new developments like ChatGPT. It became apparent that more research is necessary – and that the more complex the technology is, the more complex the legal provisions and rules must also be.