Artificial intelligence in research


Artificial intelligence (AI) is developing rapidly and is being used for an ever-wider range of tasks - including at the University of Kassel. Although the benefits, consequences and risks of AI are only partially foreseeable, it can be assumed that the pace of development in this field will continue to increase. Many graduates of our university will work in professional fields in which dealing with AI is part of everyday working life. Likewise, AI will increasingly change research practice across the disciplines.

The University of Kassel sees an immense need for research into the potential and challenges of this development. On the one hand, AI can increase the well-being of individuals and society, serve the common good, and contribute to progress and innovation. For example, AI systems can help achieve the United Nations' Sustainable Development Goals, for instance in gender equality, combating climate change, the rational use of natural resources, health promotion, mobility and production processes. On the other hand, the risks must be kept in view, for example the extent to which the rapid growth of the required infrastructure drives up the consumption of resources, especially energy. Our university wants to help shape the change brought about by AI through sound research. At the same time, it must be a place where critical, reflective, transparent and responsible ways of dealing with AI are developed, negotiated, communicated and learned.

In order to act responsibly in this area of tension, the university management is currently developing guidelines that are intended to provide a framework for action. These are to be discussed by the university committees and adopted in the course of the winter semester 2025/26.

With regard to research, the guidelines will build on existing requirements - such as ethical and legal conditions in the area of information security - and take into account external requirements from the DFG and other third-party funding bodies. They will also focus on publication practices, since the methods and procedures used in research must be disclosed, justified and/or evaluated.

The FAQs below are intended to illustrate possible applications and safety aspects of the use of AI in research and to provide initial guidance. They will be revised regularly so that they keep pace with developments in the technology, in our social environment and in our research in this area. This also means continually and jointly reviewing the extent to which they can be applied to fields that have not yet been considered - such as artistic and creative practice. This regular review and adaptation will be just as important as a continuous university-wide discussion of new developments in this dynamic field of science, supported by suitable event formats.

These questions and answers are based on the FAQ by Frisch (2024) [1]. Questions and answers 1, 2, 3, 4, 5, 7 and 8 essentially follow this source.

Questions and answers

The guidelines set out in the DFG Code of Conduct "Guidelines for Safeguarding Good Scientific Practice" [2] represent the consensus of the DFG's members on the fundamental principles and standards of good scientific practice. They provide all researchers with a reliable point of reference for establishing good scientific practice as a fixed and binding component of their research. Methodological transparency and traceability are essential basic principles of scientific integrity. Full responsibility for compliance with scientific integrity lies with the researchers themselves.

The Code itself does not (yet) contain any statements on the use of generative AI. However, the generally established standards of good scientific practice (GWP) in the sciences also apply to its use. On the basis of the GWP Code and other guidelines and recommendations on AI, a certain consensus has emerged on two points since the release of ChatGPT and other AI systems:

  • AI cannot act as an author, in particular because it cannot take responsibility for the content of a manuscript or other research products, as the GWP generally requires of human authors.
  • The use of AI must be disclosed transparently and appropriately in manuscripts. However, exactly how this transparent disclosure should be implemented is interpreted differently and often not specified in more detail.

Disclosing the use of AI follows from the applicable transparency standards of the GWP (see Guidelines 12 and 13 in the DFG Code of Conduct "Guidelines for Safeguarding Good Scientific Practice" [2]). Information on the use of AI allows readers and reviewers to better understand the results, methods and work steps and makes it possible to evaluate the research results more reliably in retrospect. However, due to the large number of different AI applications and functions, recommendations on the exact form of this information vary. Some aspects, such as the documentation of text inputs (prompts), are still the subject of ongoing discussion in the research community.

There is no uniform answer to the question of the appropriate form of disclosure. Most editorial policies specify the required information only minimally, if at all. However, there are also initiatives by smaller publishers, e.g. Berlin Universities Publishing (BUP) [3], which have developed precise proposals. Despite all the differences in the level of detail, there is a certain minimum consensus: the AI application used should be named, including version, date of use and URL, along with information on what the AI application was used for and how. Against the background of increasing requirements for the documentation of AI use, it is worth considering whether the existing technical options for full documentation (including saving prompts and chat histories) should already be used today.
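A hypothetical example may illustrate what such a minimal disclosure could look like in practice; the tool name, version and date are placeholders, not a prescribed wording: "ChatGPT (GPT-4o, OpenAI, https://chat.openai.com, used on 15 March 2025) was used to summarize the state of research in Section 2 and to suggest alternative phrasings in the introduction; all generated text was checked and revised by the authors." Depending on the policy of the journal or publisher, additional details, such as saved prompts, may also be required.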

Some editorial policies distinguish between different types of AI use or between different AI tools (usually generative AI on the one hand and applications for checking and adapting language, grammar and style, such as Grammarly, on the other) and specify that the disclosure requirements do not apply to the latter (see, for example, the AI policies of Elsevier or Wiley).

The DFG [4] also takes a clear position: "'Disclosure' is understood to mean the indication of which generative models were used, for what purpose and to what extent, for example in the preparation of the state of research, in the development of a scientific method, in the evaluation of data or in the generation of hypotheses. AI use that does not affect the scientific content of the application (e.g. grammar, style and spell-checking or translation programs) does not have to be specified. The content can be described in a few explanatory sentences. Specific identification of the text passages concerned in the application is expressly not required." This question may nevertheless be assessed differently across disciplines, depending on the significance of texts and linguistic formulation. In the humanities, cultural studies and social sciences in particular, linguistic style, grammar and spelling as well as patterns of argumentation are fundamental for generating, securing and presenting knowledge and are therefore key components in the evaluation of academic texts. In such cases, it would be conceivable to also disclose the use of applications that improve linguistic style.

Authors should be aware that important information may be lost or distorted when texts are machine-translated. Here, too, the translated texts must be checked carefully, and responsibility for the text as a whole, and in particular for any subject-specific errors, lies with the authors.

When writing proposals, the information provided by the respective research funder must be observed. The DFG permits the use of AI in proposals provided it is clearly indicated. The statement also explains that the use of AI in proposals is assessed neutrally, i.e. reviewers should count it neither against nor in favour of an application.

For assessments, the use of AI is either subject to severe restrictions or not permitted at all in many guidelines and recommendations. In general, it is not permitted to feed the manuscript (or project proposal) to be reviewed into an AI. The reasons cited for this are confidentiality and data protection, which may be compromised as a result. The DFG statement [5] clearly states: "When preparing reviews, the use of generative models is not permitted with regard to the confidentiality of the review process. Documents provided for review are confidential and in particular may not be used as input for generative models."

It should also be borne in mind that peer review serves the production of knowledge and the further development of one's own discipline. These central tasks should not be outsourced to an AI. If AI is used at all, it should be used only for the linguistic post-processing of one's own review. Reviewers should always check which requirements apply to them.

Hallucination in particular is a known weakness of AI. However, a whole range of other risks and weaknesses is associated with common applications. These include missing or incorrect references, errors in literal quotations, completely fabricated quotations or references, the reproduction of outdated information, and the reproduction of bias or prejudiced statements. The large proportion of English-language (often US-American) sources in the training material also calls for caution when using AI in other languages [6]. Authors bear responsibility for misstatements and rule violations produced by AI. Careful prompting and checking can reduce the risks of use.

The use of generative AI in research also raises copyright issues, particularly with regard to the use of training data and the authorship of AI-generated content:

For example, the adoption of the output of generative AI can, under certain circumstances, infringe third-party copyrights. If the AI has been trained with copyrighted material, the texts or images generated may be very similar to protected works or contain parts of them. The use of such AI outputs (e.g. in a publication) can therefore cause legal problems. It is therefore advisable to check the legality of the use of AI results [7]. Beyond the copyright issues, however, the general terms and conditions of the providers of generative AI systems usually grant the right to use the generated outputs as desired, i.e. also for commercial purposes, regardless of whether the outputs were generated via a free or paid plan.

Under German copyright law, only works created through a human's own creative effort are protected. Content generated entirely by an AI is generally not protected by copyright under the current legal situation, as no human creative activity is involved. Users of generative AI therefore do not automatically acquire their own copyright in the AI results. In individual cases, however, AI-generated results that are shaped by significant creative selection or post-processing by a human can constitute copyright-protected works. A creative prompt that may itself be protected as a literary work does not, however, confer copyright protection on the AI-generated output.

If your own texts or data are entered into a generative AI tool, any existing copyright is not lost. However, it should be borne in mind that many publicly accessible AI services store entered content and use it for their own purposes, which they usually seek to secure legally through their terms and conditions. This means that confidential information or information worthy of protection can in effect be passed on to third parties, undermining existing property rights or thwarting future claims (e.g. patents). Sensitive or copyrighted content should therefore not be entered into externally operated AI systems. In many services, however, it is possible to "opt out" via the settings of your own account so that your own prompts are not used to train future language models.

Researchers should continuously familiarize themselves with the relevant AI guidelines based on the best practices of their specialist community (including publishers) and realistically assess their own expertise in relation to AI. This is especially true at this stage, when working with AI is not yet fully established among many researchers and recommendations and guidelines are still emerging. If researchers work in a team or in a long-term research project with changing personnel (e.g. scholarly editing projects), there should always be open communication about the use of AI and transparent internal documentation of it. Other researchers in the team or project should also be able to see whether and how AI was used. This also applies to products of scientific work other than "classic" publications, e.g. software, lecture manuscripts and slides (which may be shared), templates for poster presentations and more.

This depends, among other things, on what AI applications are used for and in what context. In addition to institutional guidelines, the publishers' AI policies - if available - should be observed when creating publications. If guidelines exist within your own specialist community, these should be applied, especially if they contain stricter criteria. Doctoral and examination regulations apply to doctorates. In addition, the topic can and should be discussed with the supervisors. If contradictions arise - e.g. between institutional guidelines and publishing guidelines - these should be communicated at an early stage.

With the exception of the Volkswagen Foundation, the other research-funding foundations in Germany and the BMFTR do not (yet) have an AI guideline. If you submit a proposal there, describe transparently (a) the role of AI in the project or in the preparation of the proposal, (b) measures concerning bias, data protection and ethics, and (c) governance and responsibilities; many foundations view these aspects positively.

[1] Frisch, Katrin (2024). FAQ Artificial Intelligence and Good Scientific Practice (Version 1). Ombuds Committee for Research Integrity in Germany (OWID). https://doi.org/10.5281/zenodo.14045172

[2] German Research Foundation: Guidelines for Safeguarding Good Scientific Practice, 2019: https://wissenschaftliche-integritaet.de/

[3] Berlin Universities Publishing: "Guideline on the use of artificial intelligence" and "Handout on the citation of AI tools", 2024: https://www.berlin-universities-publishing.de/ueber-uns/policies/ki-leitlinie/index.html

[4] German Research Foundation: Use of generative models for text and image creation in the DFG's funding activities. https://wissenschaftliche-integritaet.de/verwendung-generativer-modelle/ (accessed 10 April 2025)

[5] German Research Foundation: Statement of the Executive Committee of the German Research Foundation (DFG) on the influence of generative models for text and image generation on the sciences and the DFG's funding activities, 2023: https://www.dfg.de/resource/blob/289674/ff57cf46c5ca109cb18533b21fba49bd/230921-stellungnahme-praesidium-ki-ai-data.pdf

[6] Oertner, Monika (2024). ChatGPT as a research tool?: Error typology, technical cause analysis and didactic implications for higher education. Bibliotheksdienst 58, No. 5: 259-297. https://doi.org/10.1515/bd-2024-0042

[7] Verch, Ulrike (2024). By prompt to plagiarism? Legally compliant publishing of AI-generated content. API, Vol. 5, No. 1. https://doi.org/10.15460/apimagazin.2024.5.1

 

Further reading:
German Ethics Council (2023). Humans and Machines - Challenges of Artificial Intelligence. Berlin.