INQ LAW

Change Management and Artificial Intelligence: A Literature Review



Change management refers to “all approaches to prepare, support, and help individuals, teams, and organizations in making organizational change.” In the era of the Fourth Industrial Revolution, organizations have been making changes to integrate emerging technologies into their work – and this has become a matter of necessity, not a choice, for maintaining competitiveness in today’s environment.[1] Among these technologies, artificial intelligence (AI) presents great potential not just in the technology sector but in most imaginable sectors, given its capacity to enhance existing processes, services, and products. A review of the recent literature on AI adoption and change management reveals three recurring themes: acquiring and integrating AI talent, building trust, and treating AI adoption as an organizational rather than a purely technical process.


Acquiring and Integrating AI Talent


First, there is an emphasis on the need to acquire, or at least gain access to, AI experts and integrate them into the organization’s work processes. AI adoption requires engineers, data scientists, and designers who can build and maintain AI systems, so talent is at the forefront of the discussion.

Much of the literature addresses re-skilling and upskilling, as well as talent recruitment and retention (Agrawal, Gans, and Goldfarb 2017; MIT Sloan Management Review 2020c). Davenport and Ronanki emphasize that whether a company chooses to keep data science and analytics capabilities in-house or to outsource them, “having the right capabilities is essential to progress” (Davenport and Ronanki 2018). Fountaine et al. likewise discuss where AI and analytics capabilities should sit within the organization, offering the ‘hub,’ ‘spokes,’ and hybrid models as examples (2019). A critical question here is how to facilitate an interdisciplinary workflow between AI talent and the rest of the organization.


Building Trust


Another critical theme in the literature is building trust in the use of AI, both inside and outside the organization. Implicitly or explicitly acknowledging the highly publicized concerns over the risks of using AI, as well as pushback against comprehensive organizational transformation, experts have emphasized the importance of obtaining buy-in from leaders, employees, and clients. This aspect of the literature links well with existing initiatives for trustworthy, responsible, or ethical AI.


Fountaine et al. argue that “a compelling story” about the need for organizational change and AI adoption helps workers support the adoption of AI in their work, and they make a case for replacing ‘experience-based’ or ‘leader-driven’ decision making with ‘data-driven’ decision making, built on greater trust in AI (2019). Similarly, the World Economic Forum’s AI Oversight Toolkit highlights the importance of employees being “confident that the algorithms will lead to better decisions and actions,” and of executives being “assured that the use of algorithms won’t lead to legal troubles, inaccurate financial reports or other problems” (World Economic Forum n.d.).


Here, trust-building measures include articulating compelling reasons for using AI (Fountaine, McCarthy, and Saleh 2019); a gradualist approach that moves from piloting to scaling up while demonstrating results (Accenture 2019; MIT Sloan Management Review 2020a; Davenport and Ronanki 2018); sound governance mechanisms (World Economic Forum n.d.); robust IT and cybersecurity infrastructure; diversity and inclusion in the organization (MIT Sloan Management Review 2020e); and re-skilling and upskilling options made available to employees (MIT Sloan Management Review 2020b).


AI Adoption as an Organizational Process


Finally, there is the point that AI adoption is not just a technical process but an organizational one, which should address institutional and cultural issues in a tailored manner. The literature stresses that organizations should not adopt AI for the sake of adopting AI, but should start by identifying the specific problems or opportunities where AI would make the greatest impact, based on an assessment of the organization’s current operations and culture.


Hence, there is considerable emphasis on choosing ‘use cases’ or ‘pilot projects’ that allow organizations to adopt AI gradually to address their own specific issues (Accenture 2019; Davenport and Ronanki 2018). Project-based approaches bring the added benefit of creating interdisciplinary teams (Fountaine, McCarthy, and Saleh 2019). This is coupled with the idea that it is necessary to take a comprehensive, long-term perspective on AI adoption, recognizing that it is a process that requires broader transformation and can take multiple years.


This view of AI adoption as an organizational process is aptly summed up by Tarafdar et al.: “[AI] applications do not deliver value by simply processing data and delivering outputs. They deliver value when the organization changes its behaviour – that is, when it changes processes, policies, and practices – to gain and apply the insights from those output” (MIT Sloan Management Review 2020d).


Taken together, these three themes highlight that change management and AI adoption are ultimately about people – and that it is important to avoid the Promethean thinking that views technology as a silver bullet for all problems.


Special thanks to Dongwoo Kim for his contributions to this article.

[1] For instance, three out of four C-suite executives believe that if they don’t scale AI within the next five years, they risk going out of business: https://www.accenture.com/us-en/insights/artificial-intelligence/ai-investments.


