Artificial intelligence in teaching
Artificial intelligence has been under development for decades, in parallel with the emergence of increasingly powerful computer systems. In recent years, the computing power now available has brought it to a level at which very powerful applications are possible. Since the end of 2022, a chatbot based on the GPT-3.5 language model has drawn a great deal of public attention to artificial intelligence (AI), because an intuitive natural-language interface has opened up access to AI for everyone. Although AI was already embedded in many applications before then (search engines, speech recognition, etc.), those applications were far less visible.
The performance of the AI available today, and its rapid development, mean that this technology is already finding its way into everyday life and professional work and will continue to do so in the near future. This includes expert systems that are trained on extensive, carefully curated specialist data and can therefore support people effectively and reliably, or even act entirely independently. In some professions this will bring drastic changes; in others, such systems will be valuable additional tools.
AI methods are already being used in science today and will be used all the more in the future, e.g. for evaluating large amounts of data, for pattern recognition and for other methods of data analysis. In principle, AI can also be used to research and summarize the state of the art in a subject area, but this carries the risk of gaps and of relevance judgments about individual contributions that cannot be traced.
Artificial intelligence raises a number of ethical questions, especially when important decisions are delegated to an AI whose decision-making process cannot be traced by means of an algorithm. An AI's assessments are based on the very large data sets with which it is trained and are therefore sometimes burdened by problematic judgments contained in that training data. Against this background, decisions should always rest on human judgment, a position also confirmed by the German Ethics Council's recent assessment of the opportunities and limits of the use of AI and its ethical evaluation.1
Universities must rise to the challenge of preparing young people for careers in which these tools will be a natural part of professional work. At the same time, students must be enabled to develop the judgment in their subject that allows them to critically question the proposals and evaluations of AI-supported tools and to arrive at their own final assessment, which then serves as the basis for the decisions to be made.
The ways in which AI can be used productively in professional work and in science will differ significantly from subject to subject. There will therefore be no answer to these new questions that suits all subjects equally. At the same time, the rules of good scientific practice define general guidelines that apply to all subjects alike; scientific integrity forms the common basis for dealing with AI in every discipline.
A key question with regard to good scientific practice, and the guiding principles relevant to the use of AI in science, is what the AI is used for. Is it a source of ideas, and therefore comparable to an interlocutor, or is it used as a data source or as a concrete instrument? The answer determines how it should be handled in terms of good scientific practice and the respective guiding principles for citing sources, etc.
According to these guiding principles,4 all data and results collected during the research process must be fully documented and presented in a comprehensible manner, and all results must be critically scrutinized. The provenance of data, organisms, materials and software used in the research process must be identified, and their subsequent use must be documented. The methods used must be described comprehensibly, also with regard to their replicability. Data must not be falsified or invented; scientifically sound and comprehensible methods must be used and described; and when new methods are developed and applied, attention must be paid to quality assurance and the establishment of standards. These guiding principles also apply to the use of AI. What they mean in a specific case will have to be decided individually. At the same time, each discipline faces the task of clarifying for its own specialist culture the possibilities, limits and risks of AI and how to deal with it within the framework of good scientific practice.
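To make the documentation requirement concrete: the provenance of an AI-assisted step could be recorded in a structured, machine-readable way, for example as sketched below. This is a minimal illustration; the field names and values are assumptions, not a prescribed or official schema.

```python
import json

# One conceivable record of AI use in a research step; all field names
# and values here are illustrative, not a prescribed schema.
ai_use_record = {
    "tool": "ChatGPT",                 # AI application that was used
    "model_version": "GPT-4",          # version actually used
    "date": "2025-11-18",
    "purpose": "first draft of a related-work summary",
    "prompt_summary": "Summarize the attached abstracts in 200 words.",
    "post_processing": "claims checked against the cited papers; text rewritten",
}

print(json.dumps(ai_use_record, indent=2))  # store alongside the research data
```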
The scientific community must discuss how good scientific practice can and must be developed further, beyond the general guiding principles, with regard to the integration of AI-supported methods. Which uses genuinely help to gain knowledge effectively? What risks do they entail, and how should these be classified? How are research results attributed to individual researchers or groups of researchers if AI contributes significantly to the findings? Which tools may be used when creating publications, and which of them, together with the extent of their use, must be made transparent?
The learned societies will certainly play an important role in this discussion, and principles that apply to all disciplines and go beyond the current rules of good scientific practice will also find their way into the DFG's Kodex ('Guidelines for Safeguarding Good Scientific Practice'). The outcome of this discussion will have an impact on university teaching, since students are expected to learn to work independently and to apply good scientific practice consistently during their studies.
It is the task of all researchers and lecturers at universities to engage continuously with the developments and new possibilities of artificial intelligence in their field and to assess their impact, and they should always be familiar with the current state of the discussion on good scientific practice. For teaching, it is important that lecturers are well-informed experts in this area and can give their students guidance in a very dynamically developing field. This includes being familiar with relevant tools and being able to assess their possibilities and reliability. The University of Kassel will support its teaching staff with further training courses; for example, the AI theme days will take place at the University of Kassel from November 17 to 19, 2025.
To use AI responsibly, it is necessary to be able to evaluate the output generated by the tools and to process it further until a final result or judgment is reached. This requires being able to carry out, or at least understand, the individual steps even without the technology. For this reason, whenever new technologies were introduced, the requirement was to remain able to perform the corresponding tasks without the new technology in question (pocket calculators, spell checking in word processors, translation of texts, etc.). Over time, however, such elementary skills lose importance and are sometimes reduced to the essentials (compare navigation systems with finding a street address in a foreign city using only a city map).
Within the framework of the freedom of teaching, and against the background of the standards of their subject, teachers can therefore decide which skills must be mastered with and without technology, and to what degree. Whether a skill is to be learned without the use of technology, or whether the aim is to create a high-quality product using a range of technologies, can be differentiated in the descriptions of the learning objectives of the respective modules.
Teachers must communicate to students which uses of AI are necessary, permissible or expressly impermissible in light of these learning objectives and targeted skills. They should take care that the information they provide when explaining the examination modalities reflects these objectives and limits. Once a basic technical skill has been learned, it is important to learn and practise the most effective way of reaching the goal with the available technologies, since this is what will be expected later in professional life. One example could be the translation of a text with AI followed by thorough post-editing of the text; see the sketch below. In this example, the level of competence would be measured by the extent to which students are able to improve or refine the automatically translated text with regard to various quality criteria. Students who cannot improve the text at all would add no value over the AI application and would therefore not be entrusted with this task professionally.
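How much a student actually changed in post-editing could be made visible with a simple text comparison. The following toy sketch assumes the raw machine output and the edited version are available as strings; it is an illustration of the idea, not a proposed grading tool.

```python
import difflib

def edit_overview(machine_output: str, post_edited: str) -> None:
    """Show where and how much a post-edited text differs from the raw output."""
    a, b = machine_output.split(), post_edited.split()
    sm = difflib.SequenceMatcher(a=a, b=b)
    print(f"unchanged share of words: {sm.ratio():.0%}")
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag != "equal":  # report only the passages the student touched
            print(f"  {tag}: {' '.join(a[i1:i2])!r} -> {' '.join(b[j1:j2])!r}")

# Hypothetical example: raw machine translation vs. the student's revision.
edit_overview(
    "The results show a significant rise of the temperature values.",
    "The results show a significant rise in temperature.",
)
```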
With every new technology, the demands placed on professional work rise, since only the skills that go beyond what the technology takes off people's hands retain value. AI, too, will raise the level of demands in many professions. Many academic professions will be affected as well, because AI can automate even demanding academic tasks to a certain degree, and its ability to do so will continue to increase significantly over time. It will be a challenge to bring students to a level of competence that meets these demands in the limited time available for their studies. Students and teachers must be aware of this goal.
The range of skills required for professional work inside and outside academia is also changing with the new technologies. With this in mind, curricula, including teaching content and methods, must be critically scrutinized and systematically developed, taking the dynamics of the situation into account for the range of skills to be aimed at in the future. In addition to digital literacy in general, this includes AI literacy, which encompasses algorithmic thinking and application-related skills, such as interacting with AI technologies (prompt engineering for language models), training AI models or applying them to specific use cases, as well as the legal and ethical principles governing the use of AI models.
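To make 'prompt engineering' tangible: in its simplest form it means deliberately structuring the input to a language model, e.g. as a role instruction, worked examples and the actual task. The sketch below merely assembles such a prompt as text; its wording and structure are illustrative assumptions, not a prescribed template.

```python
def build_prompt(role: str, examples: list[tuple[str, str]], task: str) -> str:
    """Assemble a structured few-shot prompt for a language model."""
    parts = [f"Instruction: {role}"]
    for question, answer in examples:  # worked examples steer the model's output
        parts.append(f"Example input: {question}\nExample output: {answer}")
    parts.append(f"Now respond to: {task}")
    return "\n\n".join(parts)

prompt = build_prompt(
    role="You are a tutor. Answer briefly; do not cite sources you cannot verify.",
    examples=[("Define 'overfitting'.",
               "A model fits its training data so closely that it generalizes poorly.")],
    task="Define 'hallucination' in the context of language models.",
)
print(prompt)
```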
In addition to face-to-face teaching, digital courses already offer additional learning opportunities that help to individualize learning processes and make them more flexible. Artificial intelligence will open up a range of further opportunities to provide individualized feedback on learning processes. These range from task tools that adapt to the learner's level and formulate individualized tasks (a toy sketch of such an adaptation rule follows below), to learning guides that can make suggestions for organizing the learning process or give feedback on texts. Such tools can help to support students with very different starting points individually and thus contribute to academic success.
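The adaptation such task tools perform can be as simple as a rule that raises or lowers difficulty based on recent answers. The following toy rule is a made-up illustration of the principle, not how any particular product works.

```python
def next_difficulty(current: int, recent_correct: list[bool]) -> int:
    """Toy adaptation rule: adjust task difficulty from recent answers."""
    if len(recent_correct) >= 3 and all(recent_correct[-3:]):
        return min(current + 1, 10)   # three successes in a row: harder tasks
    if len(recent_correct) >= 2 and not any(recent_correct[-2:]):
        return max(current - 1, 1)    # two failures in a row: easier tasks
    return current                    # otherwise keep the current level

print(next_difficulty(4, [True, True, True]))   # -> 5
print(next_difficulty(4, [False, False]))       # -> 3
```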
If students use AI-based tools as a source of content for learning, the requirements for technical accuracy are very high. Courses created by teachers are backed by the teachers' professional expertise. If students instead learn from chatbots based on language models, there is a risk that technically imprecise or incorrect content will be learned. It is almost impossible for students to check the technical correctness of such content during the learning process, since the necessary specialist knowledge is only just being acquired. Students should be aware of this risk and use only reliable sources when learning. Teachers should give students guidance on how to assess the quality of additional sources used in the learning process. Furthermore, aspects of data protection and accessibility are important when using digital tools in teaching and need to be clarified.
Examinations can be understood as instruments for measuring the competences acquired in a module. In the spirit of building up different competences one on top of the other, as described above, the use of certain technologies must be excluded in some examinations in order to test basic technical skills, but expressly permitted in others in order to measure the ability to produce high-quality results, solutions, etc. using appropriate technologies. In some examinations, the competent and reflective use of AI-based tools will itself be the subject of the examination. The answer here can therefore only be a procedural one: teachers must clarify which use of which tools or aids is necessary, permissible or impermissible for their course and for the coursework and examinations to be completed. Rules on permitted aids cannot be issued for all examinations at once; the subject-specific differences are far too great to formulate uniform rules across the university.
The design of an examination, including the definition of permitted aids, is the responsibility of the lecturer. The examination must be suitable for measuring and assessing the level of competence the students have achieved. It is important that students are informed transparently and reliably, in good time before the examination and ideally at the beginning of a course, which aids are required, which are permitted and which are excluded. This is also an opportunity to discuss the targeted competences with the students and to explain why the use of certain tools is desirable or undesirable during the learning process.
For written and oral examinations held under supervision, the aids used can be checked. For other forms of examination (homework, projects, preparation of a presentation, remote examinations, etc.), however, the use of aids cannot be monitored. The usual means of dealing with this situation at universities are declarations of independent work, in which students affirm that they have not used unauthorized aids. These declarations offer the opportunity to actively disclose which tools were used, and how and to what extent. In a very dynamically developing field, it is more effective to create transparency, case by case, about how the assessed 'product' came into being than to maintain exhaustive lists of prohibitions. For students it is important that teachers make clear, before the work begins, what effect the use of tools will have on its assessment. Besides excluding individual tools or classes of tools, it must be made clear whether using other tools, correctly declared in the declaration of independent work, will lower the assessment of the student's own contribution. Students should not have to worry that using a non-excluded tool will by itself lead to a lower grade. Clarifying such questions with students will become more important in the future, and teachers may have to re-examine their assessment standards.
Verification of compliance with the information in a declaration of independent work was and is limited. As things stand today, texts generated by generative AI applications such as ChatGPT generally cannot be clearly identified as such. Because they are generated individually each time, they cannot be detected even with plagiarism software. The use of language models to formulate texts is therefore comparable to help from third parties, which has never been verifiable either and was therefore ruled out via the declaration of independent work. Likewise, the use of writing aids such as DeepL Write or Grammarly, the adoption of an AI-based translation into German of a first version written in one's native language, the use of computer algebra to derive formulas, the creation of program code with AI-based tools, etc. usually cannot be proven.

If basic technical skills are to be tested at the beginning of a degree course without the use of technology, supervised examination formats (written examinations, oral examinations) are one way of ensuring this. In subject cultures in which writing academic essays (term papers) is one of the basic skills of the subject, such work can in principle also be safeguarded by supplementary colloquia. However, changes to examination forms require changes to the examination regulations, which may become necessary in the medium term. Later in a degree course, the responsible use of the tools permitted within the framework of good scientific practice should be practised, and the examinations should assess the quality of the work produced with them. High quality is certainly not limited to linguistic expression; it also includes the stringency of the argumentation and innovative questions and findings. For final theses, supplementing the written work with a mandatory colloquium anchored in the examination regulations, in which students present and contextualize their own results, is one way of checking the depth of the scientific engagement with the topic.
Of GPT-3.5 in particular, it is known that the quality of the texts it produces varies greatly. There is the phenomenon of 'hallucination', in which the AI invents false statements that it nevertheless presents in a linguistically convincing way. Because the language model works by generating statements and formulations that are highly probable in human communication given the context, such invented facts sound plausible and cannot be identified by plausibility checks alone. GPT-3.5 also partly invents citations: author lists and publication titles can appear entirely plausible even though no such publication exists. A careful examination of content and citations can therefore provide indications of an unreflected use of language models. Further developments such as GPT-4 already reduce these effects significantly and offer better references to the sources used, but this kind of uncertainty still remains.
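Why hallucinated statements sound plausible can be illustrated with a toy model of text generation: the next word is drawn according to how probable it is in context, with no notion of truth. The continuations and probabilities below are made up purely for illustration.

```python
import random

# Toy next-word distribution after the context "The study was published in ...".
# A language model scores continuations by plausibility in context, not by
# factual truth, so an invented venue is produced just as fluently as a real one.
continuations = {
    "Nature": 0.40,                            # real journal, plausible
    "Science": 0.35,                           # real journal, plausible
    "the Journal of Empirical Results": 0.25,  # invented, but sounds plausible
}

words = list(continuations)
weights = list(continuations.values())
for _ in range(5):
    print("The study was published in", random.choices(words, weights=weights)[0])
```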
Ultimately, each person is responsible for the work they produce, regardless of the tools used to create it: for all of its content, its presentation, and the citation of all sources used.
Further reading
- Focus on legal issues: https://learning-aid.blogs.ruhr-uni-bochum.de/rechtsgutachten-zu-ki-tools-veroeffentlicht/
- Focus on recommendations for teachers and students: https://www.uni-hohenheim.de/pressemitteilung?tx_ttnews%5Btt_news%5D=58293&cHash=bdac6c37cb985c16b9c2a050d7c0dd13
Footnotes
1. German Ethics Council: Opinion "Man and Machine" (website)
2. EU AI Regulation: draft and procedure (eur-lex.europa.eu / europarl.europa.eu)
3. "Excellence and trust in AI" (commission.europa.eu)
4. Principles for safeguarding good scientific and artistic/creative practice at the University of Kassel, Gazette No. 8/2022 of 27.07.2022, pp. 263, 264 f. (PDF)