Symbolic image of AI-supported software development
© Gorodenkoff – stock.adobe.com

How Language Influences AI-Assisted Software Development

Clear Communication

  • by Birgit Kremer
  • 21.01.2026

Artificial intelligence (AI) is taking on more and more tasks in software development. But this does not always go smoothly. Prof. Dr. Andreas Vogelsang from paluno, The Ruhr Institute for Software Technology at the University of Duisburg-Essen, is investigating how the precision of language affects the outcomes of AI-assisted software development. His project, ReSPro, will be funded for three years by the German Research Foundation (DFG).

Large language models (LLMs) such as ChatGPT are increasingly being used in development to generate source code, derive test cases, create software models, or link requirements to source code. In practice, textual requirements often serve as input for these tasks — but they frequently contain vagueness, ambiguity, or contradictions. This can lead, for example, to the AI writing unsuitable code. Such linguistic weaknesses are referred to as “requirements smells.” They have long been known in classical software engineering; their impact on AI-based tools, however, has so far received little systematic study.
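To illustrate the problem with a hypothetical example (not taken from the project): a requirement that uses the vague quantifier “several” can be implemented in different, equally plausible ways, and an AI code generator has to guess which one is meant. A minimal Python sketch:

```python
# Hypothetical illustration of a "requirements smell": the requirement below
# does not say how many attempts count as "several", so a code generator
# must guess a threshold.

REQUIREMENT = "Lock the account after several failed login attempts."

def should_lock_reading_a(failed_attempts: int) -> bool:
    return failed_attempts >= 3   # one plausible guess for "several"

def should_lock_reading_b(failed_attempts: int) -> bool:
    return failed_attempts >= 5   # another plausible guess for "several"

if __name__ == "__main__":
    # The two readings disagree for 3 and 4 failed attempts.
    for attempts in range(1, 7):
        print(attempts, should_lock_reading_a(attempts), should_lock_reading_b(attempts))
```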

This is where the project “Requirements Smells in Prompts (ReSPro)” led by Prof. Dr. Andreas Vogelsang and his team comes in. It provides a fundamental analysis of how strongly language quality shapes the results of AI-assisted software development. “Many current AI systems work with requirements that even humans cannot interpret unambiguously, and these imprecise descriptions are then passed on to the AI systems,” explains project lead Vogelsang. “For the first time, we will systematically analyze which types of vagueness are particularly problematic for AI — and how we can support developers in formulating better prompts.”

To this end, the project examines various use cases, including automatic code and test-case generation, model generation, and the tracing of requirements in source code. Building on the findings, the team also plans to develop tools that automatically detect problematic wording in prompts and either suggest specific improvements or correct it directly.
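One possible flavor of such tooling — shown here purely as an assumed, simplified sketch, not as the project's planned approach — is a rule-based check that flags common vagueness indicators in a prompt and asks the author for concrete values:

```python
import re

# Minimal sketch (assumption, not ReSPro tooling): flag common vague terms
# in a prompt and suggest how to make them precise.
VAGUE_TERMS = {
    "fast": "state a concrete response-time limit, e.g. 'within 200 ms'",
    "several": "give an exact number or range",
    "user-friendly": "name measurable usability criteria",
    "appropriate": "specify which cases are acceptable",
    "etc.": "list all intended items explicitly",
}

def find_smells(prompt: str) -> list[tuple[str, str]]:
    """Return (term, suggestion) pairs for vague terms found in the prompt."""
    findings = []
    for term, suggestion in VAGUE_TERMS.items():
        if re.search(r"\b" + re.escape(term), prompt, flags=re.IGNORECASE):
            findings.append((term, suggestion))
    return findings

if __name__ == "__main__":
    prompt = "Generate code that responds fast and handles several file types, etc."
    for term, suggestion in find_smells(prompt):
        print(f"Vague term '{term}': {suggestion}")
```

Keyword heuristics like this only scratch the surface; the project's systematic analysis is meant to establish which kinds of vagueness actually matter for AI-based tools.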

The long-term goal is to make the use of AI systems in software development more reliable, more robust, and easier to understand, thereby strengthening quality assurance in AI-assisted development processes.
 

Further information:
Prof. Dr. Andreas Vogelsang, andreas.vogelsang@uni-due.de

Editor: Birgit Kremer, paluno, birgit.kremer@paluno.uni-due.de
