Theses

If we have sparked your interest, we would be glad to offer you the opportunity to write your thesis at our chair. You can choose between the following topic areas:

  • AI-Based Systems
  • Digital Assistants
  • Digital Detox
  • Digital Work and Remote Organizations
  • Ethics & AI
  • Crisis Communication and Crisis Management
  • Social Media

If the topic areas on offer appeal to you and you would like to write your thesis with us, please feel free to contact us by email.

You can find the topics we currently offer for Bachelor's and Master's theses here:

Our Topic Suggestions

AI-powered Social Bots in Crisis Communication

Due to climate change, severe weather events such as bushfires, floods, and heat waves have increased in recent decades and now occur on an unprecedented scale. In these extreme situations, the public needs a reliable source of information and recommendations on how to act in order to stay safe and avoid the spread of fake news. Such information is disseminated not only via traditional channels but also via social media, whose necessity and effectiveness have been confirmed by various studies (Willems et al., 2021; Bec and Becken, 2021; Yigitcanlar et al., 2022). This thesis focuses on AI-powered social bots (i.e., automated actors in social networks) that disseminate relevant information and automatically debunk disinformation. This raises the question of the extent to which safety can be guaranteed and how AI-powered social bots can help us prepare for natural disasters by making a reliable source of information accessible to the public. The aim of this thesis is to conduct a literature review capturing the current state of research on social media crisis communication using social bots during natural hazards, and to develop a prototype of an AI-powered social bot.
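To make the prototype idea concrete, the following is a minimal, purely illustrative sketch of the rule-based core such a bot could start from: incoming posts are matched against hazard keywords and answered with vetted safety guidance. All names, messages, and the keyword approach are assumptions for illustration, not part of the thesis brief.

```python
# Hypothetical sketch: a rule-based crisis-information bot core.
# Vetted messages per hazard type (illustrative placeholders).
SAFETY_ADVICE = {
    "flood": "Move to higher ground and follow official evacuation routes.",
    "bushfire": "Check official fire maps and prepare to leave early.",
    "heatwave": "Stay hydrated and avoid outdoor activity at midday.",
}

def draft_reply(post):
    """Return vetted advice if the post mentions a known hazard, else None."""
    text = post.lower()
    for hazard, advice in SAFETY_ADVICE.items():
        if hazard in text:
            return f"[Official info] {advice}"
    return None  # no hazard matched; a GenAI model could handle such cases

print(draft_reply("Severe flood warning for the river district!"))
```

A GenAI-powered variant would replace the static lookup with a language model call, which is exactly where the accuracy and oversight questions raised in the topic description come in.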

Literature:

  • Rieskamp, J., Mirbabaie, M., & Zander, K. (2023). GenAI-powered Social Bots for Crisis Communication: A Systematic Literature Review. Proceedings of the 2023 Australasian Conference on Information Systems. Australasian Conference on Information Systems, Wellington. https://aisel.aisnet.org/acis2023/65

  • Stieglitz, S., Hofeditz, L., Brünker, F., Ehnis, C., Mirbabaie, M., & Ross, B. (2022). Design principles for conversational agents to support Emergency Management Agencies. International Journal of Information Management, 63, 102469. https://doi.org/10.1016/J.IJINFOMGT.2021.102469

  • Yigitcanlar, T., Regona, M., Kankanamge, N., Mehmood, R., D’Costa, J., Lindsay, S., Nelson, S., & Brhane, A. (2022). Detecting natural hazard-related disaster impacts with social media analytics: The case of Australian states and territories. Sustainability, 14(2), 810.
  • Stieglitz, S., Mirbabaie, M., Ross, B., & Neuberger, C. (2018). Social media analytics – Challenges in topic discovery, data collection, and data preparation. International Journal of Information Management, 39, 156–168.
  • Hofeditz, L., Ehnis, C., Bunker, D., Brachten, F., & Stieglitz, S. (2019). Meaningful Use of Social Bots? Possible Applications in Crisis Communication during Disasters. In Proceedings of the 27th European Conference on Information Systems (ECIS), Stockholm & Uppsala, Sweden.
  • Lahby, M., Pathan, A.-S. K., Maleh, Y., & Yafooz, W. M. S. (Eds.). (2022). Combating Fake News with Computational Intelligence Techniques. Studies in Computational Intelligence. Springer International Publishing.
  • Messias, J., Schmidt, L., Oliveira, R., & Benevenuto, F. (2013). You followed my bot! Transforming robots into influential users in Twitter. First Monday.
  • Peffers, K., Tuunanen, T., Rothenberger, M. A., & Chatterjee, S. (2007). A Design Science Research Methodology for Information Systems Research. Journal of Management Information Systems, 24(3), 45–77.

Level: Bachelor

Contact: jonas.rieskamp(at)uni-bamberg.de

 

Recognizability of AI-generated false Information

The aim of this thesis is to investigate how well AI-generated false information can be recognized. For this purpose, participants are shown, in alternating or random order, false information generated by humans and by AI, and are then asked to indicate whether each message shown is false or real. In this context, it can also be investigated whether prior and general knowledge of a certain topic influences how well, and how many, false reports are recognized. The investigation can be carried out as a quantitative online study or as a qualitative interview series, using self-created or sourced stimulus material.
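The randomized presentation described above could be sketched as follows; the function name, placeholder stimuli, and seeded shuffling are illustrative assumptions about the study setup, not prescribed materials.

```python
# Sketch of randomized stimulus assignment for the recognition study:
# each participant sees a shuffled mix of human- and AI-generated false
# reports and labels each one. Item texts are placeholders.
import random

def build_trial_sequence(human_items, ai_items, seed=None):
    """Interleave both stimulus pools in random order, keeping the source
    label so responses can later be scored per condition."""
    trials = [("human", s) for s in human_items] + [("ai", s) for s in ai_items]
    random.Random(seed).shuffle(trials)
    return trials

sequence = build_trial_sequence(["Human-written claim A"], ["AI-written claim B"], seed=42)
for source, stimulus in sequence:
    print(source, "->", stimulus)
```

Keeping the source label hidden from participants but stored with each trial is what later allows recognition accuracy to be compared between the human and AI conditions.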

Literature:

  • Kreps, S. E., McCain, R. M., & Brundage, M. (2020). All the news that’s fit to fabricate: AI-generated text as a tool of media misinformation. Journal of Experimental Political Science, 9(1), 104–117. https://doi.org/10.1017/xps.2020.37
  • Epstein, Z., Foppiani, N., Hilgard, S., Sharma, S., Glassman, E. L. & Rand, D. G. (2022). Do explanations increase the effectiveness of AI-Crowd generated fake news warnings? Proceedings of the International AAAI Conference on Web and Social Media, 16, 183–193. https://doi.org/10.1609/icwsm.v16i1.19283

Level: Master

Contact: Lukas.Erle(at)hs-ruhrwest.de

 

Credibility of Misinformation generated by AI and Humans

The aim of this thesis is to investigate misinformation. Specifically, the question is whether people are more likely to believe AI-generated misinformation on social networks than misinformation generated by humans. The nature of this misinformation (political, social, economic, ...) can be freely chosen. A quantitative online study should be conducted using stimulus material to investigate whether there is a difference in credibility.

Literature: 

  • Kreps, S. E., McCain, R. M., & Brundage, M. (2020). All the news that’s fit to fabricate: AI-generated text as a tool of media misinformation. Journal of Experimental Political Science, 9(1), 104–117. https://doi.org/10.1017/xps.2020.37
  • Ferrario, A., Loi, M. & Viganò, E. (2019). In AI we trust incrementally: a multi-layer model of trust to analyze Human-Artificial intelligence interactions. Philosophy & Technology, 33(3), 523–539. https://doi.org/10.1007/s13347-019-00378-3
  • Schmidt, P., Bießmann, F. & Teubner, T. (2020). Transparency and trust in artificial intelligence systems. Journal of Decision Systems, 29(4), 260–278. https://doi.org/10.1080/12460125.2020.1819094

Level: Bachelor or Master

Contact: Lukas.Erle(at)hs-ruhrwest.de

Influence of Human Recommendations vs. Artificial Intelligence Recommendations on Decision-making Processes

The aim of this thesis is to investigate the human decision-making process: when people have to make a decision - especially in a situation where they cannot rely on experience or their own knowledge - are they more inclined to trust the recommendation of a human or an artificial intelligence? This will be investigated with the help of a quantitative online study.

Literature:

  • Mesbah, N., Tauchert, C., & Buxmann, P. (2021). Whose advice counts more – man or machine? An experimental investigation of AI-based advice utilization. Proceedings of the Annual Hawaii International Conference on System Sciences. https://doi.org/10.24251/hicss.2021.496
  • Li, Z., Rau, P. P. & Huang, D. (2020). Who should provide clothing recommendation services. Journal of Information Technology Research, 13(3), 113–125. https://doi.org/10.4018/jitr.2020070107
  • Wien, A. H. & Peluso, A. M. (2021). Influence of Human versus AI recommenders: the roles of product type and cognitive processes. Journal of Business Research, 137, 13–27. https://doi.org/10.1016/j.jbusres.2021.08.016

Level: Bachelor or Master

Contact: Lukas.Erle(at)hs-ruhrwest.de

Human vs. AI-generated Content in E-commerce

The aim of this thesis is to investigate whether there is a difference between AI-generated and human-generated e-commerce content (e.g. product images, product descriptions, ...) with regard to the willingness to buy an item. In this context, it can also be investigated whether AI can generate persuasive content and whether this content a) is convincing and b) has a different persuasive power from human-generated persuasive content. This will be investigated with the help of a quantitative online study.

Literature:

  • Yue-Jiao, F. & Liu, X. (2022). Exploring the role of AI Algorithmic Agents: The impact of algorithmic decision autonomy on consumer purchase decisions. Frontiers in Psychology, 13. https://doi.org/10.3389/fpsyg.2022.1009173
  • Beyari, H. & Garamoun, H. (2022). The Effect of Artificial Intelligence on End-User Online Purchasing Decisions: Toward an Integrated Conceptual framework. Sustainability, 14(15), 9637. https://doi.org/10.3390/su14159637
  • Huang, J. & Chen, Y. (2006). Herding in online product choice. Psychology & Marketing, 23(5), 413–428. https://doi.org/10.1002/mar.20119

Level: Bachelor

Contact: Lukas.Erle(at)hs-ruhrwest.de

AI as a Learning Assistant at Schools and/or Universities

This thesis deals with the question of whether artificial intelligence can be used to support teaching staff. The aim is not to examine replacing human teachers, but to explore AI as a "learning assistant" that offers the possibility of repeating content in addition to regular teaching. Aspects such as cognitive load and acceptance play a role here, as do users' learning efficiency and willingness to learn. This thesis can be carried out with the help of qualitative interviews to generate best practices and design principles, or as a quantitative study to investigate the described effects of such a learning assistant.

Literature:

  • Chen, Y., Jensen, S. A., Albert, L. J., Gupta, S. & Lee, T. (2022). Artificial intelligence (AI) Student assistants in the classroom: Designing chatbots to support student success. Information Systems Frontiers, 25(1), 161–182. https://doi.org/10.1007/s10796-022-10291-4
  • Edwards, C., Edwards, A., Spence, P. R. & Lin, X. (2018). I, Teacher: Using artificial intelligence (AI) and social robots in communication and instruction. Communication Education, 67(4), 473–480. https://doi.org/10.1080/03634523.2018.1502459
  • Du Boulay, B. (2016). Artificial intelligence as an effective classroom assistant. IEEE Intelligent Systems, 31(6), 76–81. https://doi.org/10.1109/mis.2016.93

Level: Master

Contact: Lukas.Erle(at)hs-ruhrwest.de

Exploring the Integration of Artificial Intelligence in Disease Diagnostics

Skin cancer is the most common type of cancer in humans. It is divided into two main types: Melanoma and non-melanoma. Non-melanoma is considered less worrying as it is usually not fatal and can be cured surgically. Melanoma, on the other hand, poses a greater threat: It is the deadliest form of skin cancer with a high mortality rate, although it accounts for less than 5% of all skin cancer cases. It is often not easy to detect, making technical assistance essential. The inclusion of artificial intelligence (AI) in skin cancer diagnostics offers the potential for better outcomes through the support of medical experts. However, previous research in the field of AI for disease diagnosis has mainly focused on the technical implementation and neglected the crucial aspect of integrating AI into existing diagnostic processes. Therefore, this thesis will analyze the prerequisites for a successful collaboration between medical professionals and AI in skin cancer diagnostics.

Literature:

  • Mirbabaie, M., Stieglitz, S., & Frick, N. R. J. (2021). Hybrid intelligence in hospitals: Towards a research agenda for collaboration. Electronic Markets, 31, 365–387. https://doi.org/10.1007/s12525-021-00457-4
  • Mirbabaie, M., Stieglitz, S., Brünker, F., et al. (2021). Understanding Collaboration with Virtual Assistants – The Role of Social Identity and the Extended Self. Business & Information Systems Engineering, 63, 21–37. https://doi.org/10.1007/s12599-020-00672-x
  • Mirbabaie, M., Stieglitz, S., & Frick, N. R. J. (2021). Artificial intelligence in disease diagnostics: A critical review and classification on the current state of research guiding future direction. Health and Technology, 11, 693–731. https://doi.org/10.1007/s12553-021-00555-5
  • Devi, D., Biswas, S. K., & Purkayastha, B. (2019). Learning in presence of class imbalance and class overlapping by using one-class SVM and undersampling technique. Connection Science, 31(2), 105–142. https://doi.org/10.1080/09540091.2018.1560394
  • Tenório, J. M., Hummel, A. D., Cohrs, F. M., Sdepanian, V. L., Pisa, I. T., & De Fátima, M. H. (2011). Artificial intelligence techniques applied to the development of a decision-support system for diagnosing celiac disease. International Journal of Medical Informatics, 80, 793–802. https://doi.org/10.1016/j.ijmedinf.2011.08.001
  • Takiddin, A., Schneider, J., Yang, Y., Abd-Alrazaq, A., & Househ, M. (2021). Artificial Intelligence for Skin Cancer Detection: Scoping Review. Journal of Medical Internet Research, 23(11), e22934. https://doi.org/10.2196/22934
  • Ray, A., Gupta, A., & Al, A. (2020). Skin lesion classification with deep convolutional neural network: Process development and validation. JMIR Dermatology, 3(1), e18438.

Level: 

  • Bachelor: Systematic Literature Review
  • Master: Systematic Literature Review + Qualitative Research 

Contact: jana.lekscha(at)uni-bamberg.de

The Imperative of Human Oversight: Evaluating the Necessity in GenAI-powered Social Bots for Crisis Communication Tasks

Social media platforms have become important channels for disseminating information in times of crisis. Users look for specific guidance and real-time information to alleviate feelings of vulnerability. However, the landscape continues to evolve with the increasing presence of social bots, particularly those powered by generative artificial intelligence (GenAI), adding a new facet to crisis communication. While social media is invaluable for urgent interactions, GenAI's inherent tendency to produce inaccurate results poses a challenge for its use in tasks that require precision. Where accuracy is critical, human oversight is crucial, suggesting that augmentation may be a more appropriate strategy than full automation. This research aims to identify the specific tasks of GenAI-driven social bots in crisis communication that require human supervision, in order to strike the delicate balance between automation and augmentation.

Literature:

  • Austin, L., Fisher Liu, B., and Jin, Y. 2012. “How Audiences Seek Out Crisis Information: Exploring the Social-Mediated Crisis Communication Model,” Journal of Applied Communication Research (40:2), pp. 188–207. (https://doi.org/10.1080/00909882.2012.654498).
  • Bender, E. M., Gebru, T., McMillan-Major, A., and Shmitchell, S. 2021. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?,” in Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pp. 610–623. (https://doi.org/10.1145/3442188.3445922).
  • Brachten, F., Mirbabaie, M., Stieglitz, S., Berger, O., Bludau, S., and Schrickel, K. 2018. “Threat or Opportunity? - Examining Social Bots in Social Media Crisis Communication,” in Proceedings of the Australasian Conference on Information Systems.
  • Maniou, T. A., and Veglis, A. 2020. “Employing a Chatbot for News Dissemination during Crisis: Design, Implementation and Evaluation,” Future Internet (12:12). (https://doi.org/10.3390/FI12070109).
  • Ross, B., Pilz, L., Cabrera, B., Brachten, F., Neubaum, G., and Stieglitz, S. 2019. “Are Social Bots a Real Threat? An Agent-Based Model of the Spiral of Silence to Analyse the Impact of Manipulative Actors in Social Networks,” European Journal of Information Systems (28:4), pp. 394–412.
  • Ross, B., Potthoff, T., Majchrzak, T. A., Chakraborty, N. R., Ben Lazreg, M., and Stieglitz, S. 2018. The Diffusion of Crisis-Related Communication on Social Media: An Empirical Analysis of Facebook Reactions. (https://doi.org/10.24251/HICSS.2018.319).
  • Stieglitz, S., Hofeditz, L., Brünker, F., Ehnis, C., Mirbabaie, M., and Ross, B. 2022. “Design Principles for Conversational Agents to Support Emergency Management Agencies,” International Journal of Information Management (63), p. 102469. (https://doi.org/10.1016/J.IJINFOMGT.2021.102469).

Level: 

  • Master: Mixed-Methods-Design - Qualitative analyses (e.g. interviews) and content analysis

Contact: jana.lekscha(at)uni-bamberg.de

Decoding Cyberbullying Dynamics in the Age of Social Media Growth

As the significance of social media continues to grow, phenomena such as cyberbullying and hate messages gain increasing prominence. According to a comparative study conducted by Bündnis gegen Cybermobbing e.V., approximately 11.5% of the German population experienced cyberbullying in 2021. This issue transcends the boundaries of the private sphere, extending its impact to the working environment. To comprehend the ensuing consequences, including depression, a thorough understanding of the dynamics within social media becomes paramount.
This thesis aims to investigate the dynamics of cyber abuse on social media by analyzing the relations between the actors involved. New datasets from platforms like Twitter will be collected, focusing on key terms, relevant time periods, and pivotal actors. Employing social media analytics methods, as outlined by Stieglitz et al. (2018), the study will analyze and interpret this social data, shedding light on the roles of actors, entities, and their relationships in the propagation of cyber abuse.
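The data-preparation step of such a social media analytics process (Stieglitz et al., 2018) can be sketched as follows; the record layout, field names, and sample posts are assumptions about the dataset, not a specification of it.

```python
# Sketch: filter a raw post dataset down to the key terms and time
# period of interest, as in the data-preparation stage of a social
# media analytics study. Sample records are illustrative placeholders.
from datetime import datetime

posts = [
    {"author": "user1", "text": "Cyberbullying must stop", "created": datetime(2021, 3, 1)},
    {"author": "user2", "text": "Nice weather today", "created": datetime(2021, 3, 2)},
]

def select_posts(posts, key_terms, start, end):
    """Keep posts that mention any key term within the study period."""
    return [
        p for p in posts
        if start <= p["created"] <= end
        and any(term in p["text"].lower() for term in key_terms)
    ]

hits = select_posts(posts, ["cyberbullying", "hate"], datetime(2021, 1, 1), datetime(2021, 12, 31))
print(len(hits))  # 1
```

The filtered subset would then feed the actual analysis of actors, entities, and their relations.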

Literature:

  • Stieglitz, S., Mirbabaie, M., Ross, B., & Neuberger, C. (2018). Social media analytics – Challenges in topic discovery, data collection, and data preparation. International Journal of Information Management, 39, 156–168.
  • Beitzinger, F., & Leest, U. (2021). Mobbing und Cybermobbing bei Erwachsenen: Eine empirische Bestandsaufnahme in Deutschland, Österreich und der deutschsprachigen Schweiz. www.buendnis-gegen-cybermobbing.de/mobbingstudie2021.html
  • Citron, D. K., & Norton, H. (2011). Intermediaries and hate speech: Fostering digital citizenship for our information age. Boston University Law Review, 91, 1435.
  • Kaufhold, M.-A., Bayer, M., & Reuter, C. (2020). Rapid relevance classification of social media posts in disasters and emergencies: A system and evaluation featuring active, incremental and online learning. Information Processing & Management, 57(1), 1–32. www.peasec.de/paper/2020/2020_KaufholdKalleReuter_RapidRelevanceClassification_IPM.pdf
  • Hartwig, K., & Reuter, C. (2019). TrustyTweet: An indicator-based browser-plugin to assist users in dealing with fake news on Twitter.

Level: Bachelor - Social Media Content Analysis

Contact: jana.lekscha(at)uni-bamberg.de

Empowering Digital Safety: An Innovative Dashboard Approach for Proactive Cyberbullying and Hate Message Intervention

Cyberbullying and hate messages are increasingly present in digital environments, and protecting individuals from their harmful effects has become a pressing concern. Against this background, this research project aims to develop practical solutions to actively combat cyberbullying.

In this context, this research project aims not only to understand the dynamics of cyberbullying, but also to develop practical measures for containment and prevention. The research will focus on designing an innovative dashboard that integrates and visually represents AI-detected entities related to cyberbullying and hate messages. This dashboard will not only serve as a tool for responding to incoming reports, but will also enable preventive action by identifying patterns and trends. Through a practice-oriented approach, this work intends to make a concrete contribution to the fight against cyberbullying and hate messages in digital spaces.

Literature:

  • Peffers, K., Tuunanen, T., Rothenberger, M. A., & Chatterjee, S. (2007). A Design Science Research Methodology for Information Systems Research. Journal of Management Information Systems, 24(3), 45–77.
  • Kaufhold, M.-A., Bayer, M., Hartung, D., & Reuter, C. (2021). Design and evaluation of deep learning models for real-time credibility assessment in Twitter. In 30th International Conference on Artificial Neural Networks (ICANN 2021) (pp. 1–13). https://doi.org/10.1007/978-3-030-86383-8_32
  • Hartwig, K., & Reuter, C. (2019). TrustyTweet: An indicator-based browser-plugin to assist users in dealing with fake news on Twitter.
  • Kaufhold, M.-A. (2021). Information Refinement Technologies for Crisis Informatics: User Expectations and Design Principles for Social Media and Mobile Apps. Wiesbaden: Springer Vieweg.

Level: Master - Design Science Research following Peffers et al. (2007)

Contact: jana.lekscha(at)uni-bamberg.de

Toxic Positivity: Analyzing the AI Hype on LinkedIn

The widespread use of AI has generated a lot of excitement in the world of technology and business. Especially on platforms like LinkedIn, we face content that is strongly positive towards AI and its use. Although AI and language models such as ChatGPT are praised for their ability to bring about significant changes, they still face important challenges such as biases, high costs, and discrimination, which are largely neglected in public discourse. This thesis aims to explore the interconnected relationship between the hype surrounding AI and the phenomenon of toxic positivity on LinkedIn. We will examine how positive narratives surrounding AI tend to overshadow the challenges it presents. By employing frame analysis, this research aims to decipher how individuals and groups perceive and interpret AI-related information on LinkedIn, shedding light on the nuances of the AI discourse in the context of toxic positivity.
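Frame analysis is an interpretive method, but a computational step can support it. The following heavily simplified sketch codes posts against keyword dictionaries for two hypothetical frames ("hype" vs. "critique"); the frame names and keyword lists are invented for illustration only.

```python
# Illustrative sketch: dictionary-based frame coding of post texts.
# Real frame analysis requires interpretive, human coding; this only
# shows how candidate frames could be flagged at scale.
FRAMES = {
    "hype": {"revolutionary", "game-changer", "transform"},
    "critique": {"bias", "discrimination", "cost"},
}

def code_frames(post):
    """Return the set of frames whose keywords appear in the post."""
    text = post.lower()
    return {frame for frame, keywords in FRAMES.items()
            if any(kw in text for kw in keywords)}

print(code_frames("ChatGPT is a game-changer"))
```

Flagged posts would then be read and coded manually, with the dictionaries refined iteratively against that qualitative coding.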

Literature:

  • Lecompte-Van Poucke, M. (2022). ‘You got this!’: A critical discourse analysis of toxic positivity as a discursive construct on Facebook. Applied Corpus Linguistics, 2(1), 100015.
  • Kwon, S., & Park, A. (2023). Examining thematic and emotional differences across Twitter, Reddit, and YouTube: The case of COVID-19 vaccine side effects. Computers in Human Behavior, 144, 107734.
  • LaGrandeur, K. (2023). The consequences of AI hype. AI and Ethics. doi.org/10.1007/s43681-023-00352-y
  • Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative research in psychology, 3(2), 77-101.

Level: Bachelor or Master

Contact: jonas.rieskamp(at)uni-bamberg.de

Understanding the Mechanisms of Social Media-Induced Polarization

In the realm of crisis communication, there is growing concern about the impact of social media-induced polarization (SMIP). Polarization is characterized by widening divisions in opinions and attitudes and poses a significant threat. With the spread of information and misinformation on social media, the risk of significant harm and widespread suffering increases. The goal of this thesis is to explore and understand the complexities of SMIP and to examine what role social media platforms play in increasing polarization. The problem is exacerbated by information overload: the constant exchange of information on these platforms reinforces existing beliefs, creating echo chambers and fostering an 'us versus them' mentality. The research is guided by frame theory, utilizing an algorithmic approach to delve into the causes of polarization and its impact on crisis communication.

Literature:

  • Qureshi, I., Bhatt, B., Gupta, S., & Tiwari, A. A. (2020). Causes, symptoms and consequences of social media induced polarization (SMIP). Information Systems Journal, 11.
  • Cinelli, M., De Francisci Morales, G., Galeazzi, A., Quattrociocchi, W., & Starnini, M. (2021). The echo chamber effect on social media. Proceedings of the National Academy of Sciences, 118(9).
  • Snow, D. A., Rochford Jr, E. B., Worden, S. K., & Benford, R. D. (1986). Frame alignment processes, micromobilization, and movement participation. American sociological review, 464-481.
  • Johannessen, M. R. (2015). Please like and share! A frame analysis of opinion articles in online news. In Lecture Notes in Computer Science (pp. 15–26).

Level: Master

Contact: jonas.rieskamp(at)uni-bamberg.de

Sustainability of the Growth Paradigm in AI Research

Advances in AI research have made AI-based systems readily available and beneficial for a variety of use cases. The broad applicability of AI is further driven by the increased performance of these systems, which is made possible by increasingly large AI models. For instance, while GPT-2 had 1.5 billion parameters, GPT-3 has 175 billion, and GPT-4 is expected to have even more. This thesis aims to investigate the consequences of this trend from a sustainability perspective, focusing on the ecological, social, and economic dimensions. To do so, a Critical Theory perspective is taken, which allows reflecting on the sociotechnical reality. The insights gained from this study are intended to synthesize current knowledge and guide the direction toward sustainable AI.
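Worked out, the scale jump mentioned above is striking: from GPT-2's 1.5 billion to GPT-3's 175 billion parameters is roughly a 117-fold increase within a single model generation.

```python
# Parameter growth from GPT-2 to GPT-3, using the figures cited above.
gpt2_params = 1.5e9   # 1.5 billion
gpt3_params = 175e9   # 175 billion
growth = gpt3_params / gpt2_params
print(f"{growth:.0f}x")  # prints "117x"
```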


Level:

  • Bachelor: Systematic Literature Review
  • Master: Systematic Literature Review + Interviews

Contact: jonas.rieskamp(at)uni-bamberg.de

Enabling a Better Management of AI – An AI Taxonomy

The pervasive use of the term Artificial Intelligence (AI) has inflated it into a catch-all for a multitude of concepts. In navigating the expansive "frontiers of computing," as discussed by Berente et al. (2021), the challenge lies in discerning meaningful boundaries to facilitate the management of AI. Effectively managing AI necessitates a nuanced understanding, distinguishing between probabilistic and deterministic systems, particularly to mitigate negative consequences. Notably, rule-based AI systems entail different implications than probabilistic counterparts, emphasizing the need to categorize and conceptualize a more nuanced view of AI types.

The goal of this thesis is to explore the various facets of AI types comprehensively. Understanding the capabilities and consequences of each type is crucial for informed decision-making and management. The ultimate goal is twofold: to derive a more nuanced definition of AI and to develop a systematic taxonomy categorizing AI types based on their unique capabilities and characteristics.
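As an illustration of where such a thesis could end up, a taxonomy in the style of Kundisch et al. (2022) consists of dimensions with mutually exclusive characteristics used to classify concrete objects. The dimensions, characteristics, and example below are invented placeholders, not proposed results.

```python
# Hypothetical sketch: a taxonomy as a data structure, with dimensions
# and their allowed characteristics, plus a validated classification.
TAXONOMY = {
    "learning": {"rule-based", "probabilistic"},
    "autonomy": {"assistive", "autonomous"},
    "output": {"prediction", "generation", "decision"},
}

def classify(system_name, **characteristics):
    """Validate a classification against the taxonomy's dimensions."""
    for dimension, value in characteristics.items():
        allowed = TAXONOMY.get(dimension)
        if allowed is None or value not in allowed:
            raise ValueError(f"{dimension}={value} is not in the taxonomy")
    return {"system": system_name, **characteristics}

print(classify("Hypothetical chatbot", learning="probabilistic", output="generation"))
```

Encoding the taxonomy this way makes its core property explicit: every classified system must take exactly one characteristic per dimension it is coded on.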

Literature:

  • Ågerfalk, P. J., Conboy, K., Crowston, K., Eriksson Lundström, J. S., Jarvenpaa, S., Ram, S., & Mikalef, P. (2022). Artificial Intelligence in Information Systems: State of the Art and Research Roadmap. Communications of the Association for Information Systems, 50, 420–438. https://doi.org/10.17705/1CAIS.05017
  • Berente, N., Gu, B., Recker, J., & Santhanam, R. (2021). Special issue editor’s comments: Managing artificial intelligence. Management Information Systems Quarterly, 45(3), 1433–1450. https://doi.org/10.25300/MISQ/2021/16274
  • Kundisch, D., Muntermann, J., Oberländer, A. M., Rau, D., Röglinger, M., Schoormann, T., & Szopinski, D. (2022). An Update for Taxonomy Designers. Business & Information Systems Engineering, 64(4), 421–439. https://doi.org/10.1007/s12599-021-00723-x
  • Mikalef, P., Conboy, K., Eriksson Lundström, J., & Popovič, A. (2022). Thinking responsibly about responsible AI and ‘the dark side’ of AI. European Journal of Information Systems. https://doi.org/10.1080/0960085X.2022.2026621
  • Raisch, S., & Krakowski, S. (2021). Artificial Intelligence and Management: The Automation–Augmentation Paradox. Academy of Management Review, 46(1), 192–210. https://doi.org/10.5465/amr.2018.0072

Level:

  • Bachelor: Taxonomy development

Contact: jonas.rieskamp(at)uni-bamberg.de

All-Remote Organising: ‘Handbooks,’ ‘Guidelines,’ and ‘Manifestos’

Remote work practices have become increasingly prevalent in organisations. Yet, it remains puzzling why remote work at scale, that is, remote organising, creates substantive challenges for transforming organisations, while organisations that have been all-remote from the start seem to thrive on it. Many all-remote organisations openly share and promote their work processes through remote work ‘handbooks,’ ‘guidelines,’ and ‘manifestos.’ The goal of this thesis is to qualitatively analyse these ‘handbooks,’ ‘guidelines,’ and ‘manifestos’ to improve our understanding of remote organising.

Literature:

  • Brünker, F., Marx, J., Mirbabaie, M., & Stieglitz, S. (2023). Proactive digital workplace transformation: Unpacking identity change mechanisms in remote-first organisations. Journal of Information Technology, 0(0), 1-19. https://doi.org/10.1177/02683962231219516 
  • Choudhury, P. (Raj)., Foroughi, C., & Larson, B. (2021). Work-from-anywhere: The productivity effects of geographic flexibility. Strategic Management Journal, 42(4), 655–683. https://doi.org/10.1002/smj.3251

  • Rhymer, J. (2022). Location-Independent Organizations: Designing Collaboration Across Space and Time. Administrative Science Quarterly, 68(1), 1–43. https://doi.org/10.1177/00018392221129175

Level: Master

Contact: j.marx(at)unimelb.edu.au