This page is designed to help people who are considering adopting generative AI tools across their organisation, whatever its size.
Starting at the end
Professor Sana Khareghani best articulated what we think is the key task for managers:
AI is not magic dust. It requires careful consideration from the outset. Imagine we “get it all right”. What will we do with the productivity gains – lay off or redirect staff?
Prof Sana Khareghani
Professor of Practice in AI, King's College London
Former Head of the UK Government Office for Artificial Intelligence
Afternoon keynote, Jisc Digifest, 12th March 2025
Too often, people seem in a rush to implement generative AI without being sure why, or what the consequences might be. “What happens if we get it right?” is a question all managers should ask before beginning this journey.
Data
Bringing generative AI tools into your organisation and granting them access to sources of information such as corporate intranets is risky. These models may surface content that is old, out of date, or that should have had tighter access restrictions, in ways that many of the built-in search tools do not (SharePoint search, I am looking at you!).
If you are carrying out research, or simply holding commercially sensitive or personal data, you need to be careful that this information is not exposed to third parties, and you need a plan in place for when it is exposed by accident.
Parity
There are many generative AI tools and integrations, but most come at a cost. Worse, some of these costs may be difficult to predict because they are linked to the volume of usage.
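As a rough illustration of why usage-linked pricing is hard to budget for, here is a minimal back-of-the-envelope sketch. Every figure in it, from the assumed prices per 1,000 tokens to the number of requests per member of staff, is a made-up assumption for illustration, not any vendor's actual rate card; you would substitute your supplier's pricing and your own pilot data.

```python
# A minimal sketch for estimating usage-based generative AI spend.
# All figures below (prices, usage) are illustrative assumptions,
# not any vendor's actual pricing.

def monthly_cost(users: int,
                 requests_per_user_per_day: float,
                 avg_input_tokens: int,
                 avg_output_tokens: int,
                 price_per_1k_input: float,
                 price_per_1k_output: float,
                 working_days: int = 22) -> float:
    """Rough monthly cost for pay-per-token API access."""
    requests = users * requests_per_user_per_day * working_days
    input_cost = requests * avg_input_tokens / 1000 * price_per_1k_input
    output_cost = requests * avg_output_tokens / 1000 * price_per_1k_output
    return input_cost + output_cost

# Example: 500 staff, 10 requests a day, and assumed prices of
# £0.003 / £0.01 per 1,000 input/output tokens.
estimate = monthly_cost(users=500,
                        requests_per_user_per_day=10,
                        avg_input_tokens=800,
                        avg_output_tokens=400,
                        price_per_1k_input=0.003,
                        price_per_1k_output=0.01)
print(f"Estimated monthly spend: £{estimate:,.2f}")
```

Even in this toy example the bill scales directly with the number of requests per person, which is why usage monitoring during a pilot is worth having before making an organisation-wide commitment.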
When considering the use of generative AI in your organisation, it helps to have clear policies explaining who gets access to which tools and why. Will you just choose a few standard tools and make them available to everyone, or do some roles require access to more specialist tools? In the research sphere for example, staff will want access to the latest models. In the admin space, staff may benefit most from the premium versions of tools that provide better, enterprise-grade automation.
Generative AI tools may offer particular benefits to some staff and students – for example, tools that summarise documents and help break tasks down into smaller components may reduce cognitive load to a manageable level. How will staff become aware of these tools and apply for access? How will you evaluate the cost?
If you work in HE or FE, you might want to take a look at the Principles for the use of artificial intelligence developed by a working group of the University and College Union (UCU).
If you work in healthcare, the BMJ recently published a list of guidelines for trustworthy and deployable AI in healthcare.
This link provides some handy questions to consider when the reps come calling.
Policy alignment
Before signing up to a particular package, it would be helpful to think about how the use of generative AI fits with the ethos of your institution. Consider how you approach the environmental, ethical and ownership concerns. In an ideal world, you’d have solid, well-developed policies in this area that you could apply when rolling out generative AI. Before rushing to write a new generative AI policy to add to your current list, read this cautionary note from Leon Furze, who argues that you might well be able to address the issues by updating existing policies.
Also think about the fit between these tools and your institutional approach to equality, diversity and inclusion. If you decide not to use (or even to ban) generative AI tools such as Otter.ai or Fireflies.ai that summarise meetings, how will that affect users who depend on them?
Command and Control
Whether you decide to limit generative AI use to a small list of tools, or permit staff and students to try others, you need to make it clear what is and isn’t acceptable use. This will require an understanding of privacy and data security processes and legislation.
Looking forward
It is very easy to feel overwhelmed by generative AI. The post Something big is happening by developer Matt Shumer is worth reading, whatever your job.
Learn more
Here are some articles we have found useful:
- AI at work: A practical guide to implementing and scaling new tools. World Economic Forum. 25 Nov 2024.
- AI skills for the UK workforce. Skills England. 29 October 2025.
- The great acceleration: CIO perspectives on generative AI. MIT Technology Review Insights. (Registration required) July 2023.
- Reconfiguring work: Change management in the age of gen AI. McKinsey & Co. 21 Aug 2025.
- The People Factor: A human-centred approach to scaling AI tools. UK Cabinet Office. 4 Jun 2025.
- AI maturity toolkit for tertiary education. Jisc. undated.
- Understanding AI at Work. Institute for the Future of Work. undated. This toolkit is for employers and workers seeking to understand the challenges and opportunities of using algorithmic systems that make or inform decisions about workers.
- AI Incident Database. The AI Incident Database is a free and open-source project dedicated to indexing the collective history of harms or near-harms realised in the real world by the deployment of artificial intelligence systems.
- AI Board game. European Trade Union Institute. 2022. A horizon-scanning tool and board game developed using foresight methods. It integrates horizon-scanning, long-term strategic thinking, role-playing, and immersive scenarios, enabling learners and players to explore AI’s impact on the workplace. By adopting different roles, participants engage in discussions and collaboratively agree on plausible solutions to AI-related challenges.
- Data Carbon Scorecard. Digital Decarb. 2023. The tool is an enhanced iteration of the Data Carbon Ladder, designed to facilitate insightful discussions surrounding the conceptualisation of new data projects and their potential environmental ramifications.
- The AIAAIC repository. An independent, open, public interest resource, the AI, Algorithmic and Automation Incidents and Controversies Repository details incidents and controversies driven by and relating to artificial intelligence, algorithms, and automation.
- AI Procurement in a Box. World Economic Forum. 2020. The Centre for the Fourth Industrial Revolution Global Network brought together representatives from the public and private sectors, academia and civil society to co-design the “AI Procurement in a Box” toolkit to help governments rethink their public procurement processes.
- AIRO (AI Risk Ontology). 2025. An ontology for expressing the risks of AI systems, based on the requirements of the AI Act, ISO/IEC 23894 on AI risk management, and the ISO 31000 series of standards. AIRO assists stakeholders in determining “high-risk” AI systems, maintaining and documenting risk information, performing impact assessments, and achieving conformity with AI regulations.
- Organisational AI Trustworthy Index. AI Transparency Institute. 2024. The tests provide information to improve the sustainability, ethical and legal aspects of AI systems.
- Responsible AI Index. AI Transparency Institute. 2024. Designed to provide information to improve the sustainability, ethical and legal aspects of AI systems.
- ForHumanity. Tools and processes to examine and analyse the downside risks associated with the ubiquitous advance of AI & Automation, to engage in risk mitigation and ensure the optimal outcome… for Humanity.

