The responsible design and use of information systems and platforms
Summary
The responsible design and use of information systems and platforms is becoming increasingly important, and not only because of recent developments in artificial intelligence. In this context, the term "digital responsibility" refers in particular to efforts by individuals, companies, and public institutions to create a sustainable, inclusive, fair, and value-oriented digital world (cf. Trier et al., 2023). This research field examines how organizations and their employees design and use (novel) information systems and platforms responsibly, guided by the principles of digital responsibility: IT security, privacy, accountability, transparency, fairness, and sustainability. These principles are a cornerstone of the digitalization of the economy and society and are indispensable for a successful and sustainable digital transformation.
Source: Trier, M.; Kundisch, D.; Beverungen, D. et al. (2023). Digital Responsibility. Business & Information Systems Engineering, 65, 463-474. https://doi.org/10.1007/s12599-023-00822-x
Research staff
Philipp Danylak, Guangyu Du, Yannick Heß, Long Nguyen, Eva Späthe, Dr. Kathrin Brecker.
Alumni: Dr. Heiner Teigeler, Dr. Malte Greulich
Main research areas
- Investigating how companies actually embed formal security standards (e.g., ISO/IEC 27001) into internal processes, culture, and behavior.
- Investigating AI systems that are traceable and accountable: laying foundations for AI accountability, promoting proactive behavior that increases AI accountability, and studying the effects of AI accountability on users.
- Researching mechanisms and frameworks that actively promote data protection in line with legal requirements.
- Developing test procedures and certification guidelines to validate trustworthy AI solutions, taking the variability of AI systems into account.
- Examining how employees implement security policies within the company and (proactively) protect its IT resources.
- Researching dynamic, continuous auditing and certification methods as an alternative to one-off audits.
- Investigating how digital platforms and ecosystems can be designed to strengthen national and organizational autonomy.
Exemplary publications
- Greulich, M.; Lins, S.; Pienta, D.; Thatcher, J. B.; Sunyaev, A. (2024). Exploring Contrasting Effects of Trust in Organizational Security Practices and Protective Structures on Employees' Security-Related Precaution Taking. Information Systems Research, 35(4), 1507-2085. https://doi.org/10.1287/isre.2021.0528
- Du, G.; Lins, S.; Blohm, I.; Sunyaev, A. (2024). My Fault, Not AI's Fault. Self-serving Bias Impacts Employees' Attribution of AI Accountability. Proceedings of the 45th International Conference on Information Systems (ICIS).
- Nguyen, L. H.; Lins, S.; Sunyaev, A. (2024). Unraveling the Nuances of AI Accountability: A Synthesis of Dimensions Across Disciplines. Proceedings of the European Conference on Information Systems (ECIS).
- Brecker, K.; Lins, S.; Sunyaev, A. (2023). Why it Remains Challenging to Assess Artificial Intelligence. Proceedings of the 56th Hawaii International Conference on System Sciences (HICSS).
- Danylak, P.; Lins, S.; Greulich, M.; Sunyaev, A. (2022). Toward a Unified Framework for Information Systems Certification Internalization. Proceedings of the 24th IEEE Conference on Business Informatics (CBI).
- Lins, S.; Schneider, S.; Szefer, J.; Ibraheem, S.; Sunyaev, A. (2019). Designing Monitoring Systems for Continuous Certification of Cloud Services: Deriving Meta-Requirements and Design Guidelines. Communications of the AIS, 44. https://doi.org/10.17705/1CAIS.04425