04/21/2026 | Conference | Workshop

Lea Stöter and Konstantin Lackner at the 1st STS NL Conference

Lea Stöter and Konstantin Lackner gave a presentation and ran a workshop at the STS NL Conference from April 15-17, 2026.

About the Conference 

The first STS NL Conference took place at the University of Twente (Enschede) from April 15-17, 2026, organized jointly with the Netherlands Research School for Science, Technology and Modern Culture (WTMC) under the theme "Knowledge & Technology in Times of Global Shifts."

To Prompt or not to Prompt: A Mini-Workshop on Academia's Relationship with Generative Artificial Intelligence

Konstantin Lackner and Lea Stöter

Generative artificial intelligence (genAI) could be considered a hot topic in scholarly practice: some researchers are turning to editorials, opinion pieces, and articles to either denounce or encourage the use of genAI in knowledge production (e.g. Jowsey et al. 2025, Nguyen and Welch 2025, Anis and French 2023). Meanwhile, the promise of increased efficiency may create a sense of necessity around adopting these tools for everyone else (Bin-Nashwan et al. 2023).

As general-purpose tools, genAI applications are already shaping research processes, the peer review system, and higher education (Barros, Prasad, and Sliwa 2023). Beyond these touch points within academia, the impact of genAI is also visible in published output: Andrews et al. (2024), for example, discuss the emergence of pseudoscientific language and assumptions in machine learning research, while Shardlow and Przybyła (2024) identify tendencies towards anthropomorphising language in NLP research reporting.

In the proposed mini-workshop we draw on speculative design (Auger 2013) and fabulation methods (Hartman 2008, Rosner 2018), inviting participants to imagine diverse futures for academia and its touch points with genAI. Structured in three parts, the workshop invites participants to examine different areas of academia, for example research, education, self-service, or peer review, based on their respective interests. Following a short introduction, participants engage in world-building supported by brainstorming exercises, before entering a paper prototyping phase to speculate and imagine what a curriculum, abstract, or call for papers might look like in their imagined world.

Overall, the workshop's aim is to bring together interested scholars to discuss questions such as: At which points might academia resist or embrace generative artificial intelligence? How might processes of scholarly production be shaped by the introduction of this technology? What are the roles and responsibilities of researchers and educators in these scenarios?

The AI-Question in Academia: Exploring the Practices and Perceptions of Researchers using AI-driven Software

Lea Stöter

The use of artificial intelligence in knowledge production processes and in academia has been illuminated from different disciplinary perspectives, focusing, for example, on authorship or academic integrity (e.g. Barros et al. 2023; Watermeyer et al. 2023). With the introduction of OpenAI's ChatGPT and the associated API access, the integration of LLMs into software tools for literature reviews or paper writing was simplified, impacting the ways knowledge is produced. Drawing on new materialism (Barad 2008) and situated knowledges (Haraway 1988) to investigate the knowledge produced by human researchers and their AI-driven tools, in this early-stage paper I aim to present insights from an interview study with researchers from different fields and career stages. Using card-sorting techniques, I explore use practices, justifications for and against AI usage, and the underlying conceptions about the integration of artificial intelligence tools into knowledge production processes. Conceptualising the intra-actions between researchers and their tools as part of a decision space within knowledge production, I aim to make sense of the differences and similarities between employing other computational technologies in the research process and using genAI. I also explore the perceived implications for research methods, ethical and transparent research, and the understanding of what constitutes scientific knowledge. Following Vu's (2018) approach of 'thinking with', which analyses knowledge and practice as co-produced by multiplicities of the human and the material, I aim to attune to the multiple knowledges within these assemblages of researchers, digital technologies, and knowledge sources in order to understand what might be necessary for accountable research practices with AI-driven software.

(Note: As of the submission of this abstract, interviews for this study are still being planned and conducted.)