Artificial intelligence (AI) is rapidly becoming a fundamental pillar of the modern research toolkit, and its integration has sparked significant discussion within the scientific community. While many researchers are enthusiastic, the lack of consensus on ethical and appropriate use has led to widespread uncertainty, inconsistent guidelines, and the stigmatization of AI use in research. This free webinar addresses one of the questions that divides the scientific community: Should scientists build a trusted partnership with AI in their research? And if so, how?
With our invited speakers, we aim to tackle the risks of cognitive offloading, the high cost of verifying AI-generated content, the human tendency to trust conversational agents, and the definition of trust.
Quick facts:
- Format: Webinar with three short talks and a discussion round
- Date: April 10, 2026, 14:00 CEST (Berlin time)
- Registration: Sign up here
Speakers:
- Dr. Joss von Hadeln
Dr. Joss von Hadeln is the Managing Director of the Center for Applied Informatics and Data Science at Justus Liebig University Giessen. With a background in neuroscience, he explores how perception, autonomy, and human creativity relate to artificial intelligence. His work focuses on AI literacy, critical reflection, and developing frameworks that balance technological innovation with human agency.
This talk examines how AI tools extend our cognitive abilities through distributed cognition, and what this means for education in an uncertain future. Since we cannot predict which specific skills will matter tomorrow, the focus shifts to strengthening core human capacities, such as autonomy, adaptability, and critical thinking, that allow us to navigate change effectively. The talk encourages us to use AI tools intentionally as an extension of our thinking rather than a replacement for it.
- Dr. Olya Vvedenskaya
Dr. Olya Vvedenskaya is a science communication expert and long-term supporter of scienceOS, with a strong interest in ethics and bioethics. Drawing on her background in medicine and science, she focuses part of her work on trust in artificial intelligence, risk perception, and how AI can be responsibly integrated into scientific work. She explores how researchers evaluate and use AI systems, and how trust in these systems can be intentionally built rather than simply expected.
In this talk, Olya presents trust in AI as something that changes depending on the task and the level of perceived risk. Based on survey data collected by scienceOS and research on trust, she shows that scientists may confidently use AI for some tasks while remaining cautious about others. The talk highlights how transparency, clear system limits, and human oversight help create well-founded trust, and why appropriate trust, rather than blind trust, is essential for responsible AI use in scientific research.
- Dr. Ulrich Degenhardt
Dr. Ulrich Degenhardt holds a PhD in theoretical physics and has worked as a consultant and project manager in both industry and research-oriented environments. He served for more than ten years as Head of IT at a Max Planck Institute. Currently, he divides his time between scientific research and responsibilities in IT security. His scientific work focuses on applying modern mathematical methods to time series analysis. He is particularly interested in how to derive reliable and trustworthy answers from the inherently unreliable outputs of generative AI.
Large language models (LLMs) are transforming scientific practice by drastically reducing the cost of generating text and code. This shift makes verification far more costly than generation and strains limited cognitive resources. Abundant plausible but unverified content threatens to overwhelm traditional assessment mechanisms. LLMs do not generate knowledge, but rather candidates for knowledge. Consequently, scientific practice must prioritize the careful vetting of generated candidates, requiring both the revision of existing assessment methods and the introduction of new ones.
Join the webinar: Sign up here
We would like to underline that we do not aim to answer every question that may arise; most likely there will be more questions than answers at the end of the webinar. However, we believe it is essential to start this discussion transparently and bring it to the public, even if we cannot yet resolve every concern it raises. This aligns with the core academic norms of accountability and transparency, which remain unchanged in the age of AI.

