More and more generative AI tools are being developed that promise to change how we approach teaching. In the primary and secondary sector, there are government-backed tools via the Oak National Academy and commercial options such as the worryingly-named Teachermatic that create lesson plans and resources. In the higher education sector, the major VLE providers are introducing new AI-based products on a regular basis, and lecturers are finding new ways to apply generative AI to their working practices. Many schools, colleges, universities and companies also provide access to tools such as Microsoft Copilot or Google Gemini as part of their core IT subscription. Within this section, we hope to provide some guidance on how you can use generative AI tools in your own teaching and how to evaluate their effectiveness in a critical way.

Before using generative AI in your teaching, please consider the ethical, environmental and ownership concerns.
We have a responsibility to introduce our learners to generative AI and to develop their critical skills in using it and dealing with its outputs. We suggest you encourage them to use it carefully and only when appropriate – we don’t need a class full of people creating cat GIFs!
Never used generative AI?
That’s not a problem. To get a feel for what they can do, we suggest asking one of these models a question relating to what you teach or something you know a lot about. If you have an old essay question lying around then try that. Simply copy and paste it, or type the words into the box on one of these web pages and hit the send button.
Think where your data goes
When using generative AI tools, the first thing that anyone needs to be conscious of is the type of data that you are uploading. If the generative AI tool you are using is not provided as part of an institutional licence, then you should NEVER upload any confidential information (such as the names or email addresses of students) to these products. Where the institution does have a licence to use a generative AI product, you still need to be careful about the type of information that is being uploaded, and whether you have permission from the author to do this.
You should exercise a degree of caution. If you are in any doubt about whether you should upload something, then don’t.
Generative AI Tools
At Durham University there are three main generative AI products available to our staff and students. They represent three different ways that generative AI can be made accessible.
1. Microsoft Copilot
Microsoft Copilot is a general-purpose generative AI tool and is increasingly integrated into products such as the web browser Edge, and office packages such as Word, PowerPoint and Excel. It provides easy access to generative AI via a chatbot. Other institutions may access similar functionality via Google Gemini.
2. Blackboard’s AI Design Assistant Tools
Blackboard is an online learning environment used to support teaching and includes AI features such as adding chatbot-based activities to a course, or suggesting potential discussion board topics based on selected course content.
3. LM Studio
LM Studio is a program that allows the user to download and run generative AI models on their own computer. This allows you to analyse data securely, offline.
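As an illustration of what running a model locally looks like in practice, LM Studio can expose an OpenAI-compatible server on your own machine (by default at http://localhost:1234/v1) that you can query from a script. The following Python sketch assumes you have loaded a model in LM Studio and installed the openai package; the model identifier is a placeholder for whichever model you have downloaded.

from openai import OpenAI

# Point the client at LM Studio's local server instead of a cloud service.
# No real API key is needed for a local server; any placeholder string will do.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="local-model",  # placeholder: use the identifier shown in LM Studio
    messages=[
        {"role": "user", "content": "Explain spaced retrieval practice in two sentences."}
    ],
)
print(response.choices[0].message.content)

Because the model runs on your own computer, the conversation never leaves your machine – which is why this approach suits confidential data.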
Ideas for using generative AI in your teaching
Reformatting content
If you have complicated instructions for a practical or an assignment, you could use a generative AI tool to suggest other ways of phrasing the instructions, to make them easier to read, or to break them down into smaller, more manageable steps. You could also generate an infographic to give a visual summary of the process.
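As a starting point, a prompt along these lines tends to work well (the bracketed text is a placeholder for your own material):

Rewrite the following practical instructions as a numbered list of short, plain-English steps suitable for first-year undergraduates. Keep every safety warning, and place each one immediately before the step it applies to.
[Paste your instructions here]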
Generative AI tools are also getting much better at converting speech to text, e.g. to create a first draft of video captions, or to generate notes from a meeting. Some solutions require you to upload the content to a website for processing, but others (such as implementations of the Whisper model for Windows and Mac computers) can run directly on your desktop computer with no internet connection.
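One widely used implementation is the open-source whisper Python package. The sketch below is a minimal example, assuming you have installed the package (pip install openai-whisper) together with ffmpeg; the file name is a placeholder for your own recording.

import whisper

# Smaller models ("tiny", "base", "small") run comfortably on a typical laptop;
# larger models are more accurate but slower to load and run.
model = whisper.load_model("base")

# Transcription happens entirely on your own machine: nothing is uploaded.
result = model.transcribe("lecture_recording.mp4")
print(result["text"])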
Custom chatbots
We are seeing an increasing number of AI chatbots being deployed within Higher Education to deal with a myriad of tasks. For example, there are many commercial chatbot solutions that can provide first-line support by giving students instant, AI-generated answers based on course content. We are also seeing chatbots built by individual university departments to help with specific tasks, such as the chatbot developed by the Physics department here in Durham to support student learning in a laboratory module.
Blackboard also offers assessable chatbot-based activities for students. Known as the AI conversation tool, this allows a student to engage in an AI-generated conversation built around a scenario created by the teaching staff and added to a suitable location in the course. Students can explore the scenario in a safe environment, without having to sign up to use a specific generative AI tool. We have seen some very creative uses – simulated negotiations, mock job interviews and social work practice.
Improving feedback
While we would not suggest putting student work into a generative AI, research papers suggest there is a benefit to putting the feedback written by academics into a generative AI and asking it to make the feedback more constructive. One study from Queen’s University Belfast (Schultze-Gerlach et al., 2024) found that when students were given both types of feedback, they preferred the AI-enhanced version. Interestingly, this directly contradicts the views expressed by student representatives at a recent meeting of the University’s Educational Development Committee in November 2025, where students wanted to be sure that the feedback was expressed in their lecturer’s own words.
If you are considering using generative AI to help make your feedback easier to read, we suggest discussing this with students first, to ensure they would welcome this.
Staff using AI to generate feedback on an assignment that prohibited students from using generative AI does not seem a very equitable approach.
Helping to generate content
Blackboard is our online learning platform, and it has a number of AI tools built into it. We have extensive guidance on these tools on our main Digital Help guides site. We have already discussed the AI conversation tool (the chatbot). Another common use at Durham is suggesting test questions using the AI Design Assistant. Both of these uses could be replicated in a general-purpose generative AI tool with some prompt engineering (see the example prompt below).
The AI Design Assistant can generate a range of different types of questions (MCQ, true/false, matching, short answer, etc.). These can be relatively generic – using just the course name as input – or they can be tied to particular materials in the course. Our recommended use for this particular tool is to create a series of low-stakes, formative quizzes, which can be used as a spaced retrieval practice exercise across the duration of the module.
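If you want to try the general-purpose route, a prompt along these lines is a reasonable starting point (the bracketed topic and level are placeholders):

Write five multiple-choice questions on [topic], pitched at [first-year undergraduate] level. Each question should have four options and exactly one correct answer. Indicate the correct answer and give a one-sentence explanation of why it is correct.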
AI-generated questions are not suitable for summative work or any kind of remote/open-book exam, as AI is just as effective at answering questions as it is at generating them! Don’t be fooled by colleagues who say that if you convert the questions to images they can’t be used with generative AI tools – they most certainly can.
If you find that all the imagery relating to your topic seems very stereotypical – e.g. lots of bearded, bespectacled white men in lab coats – then you can use generative AI to create images that challenge this and use them in your course. You may need to specify the image quite strictly to overcome the bias inherent in the model’s training data.

All AI-generated content needs to be carefully checked before it is released to students.
Creating sample data
Generative AI tools are good at producing variations on an example, so they can be used to create large, unique datasets from a small sample, or to write personas or case study topics that are helpful for learning activities. These should be carefully checked before giving them to students, as the caveats around accuracy, hallucination and bias still apply.
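For example, a prompt along these lines can expand a handful of rows into a classroom-sized dataset (the column names are placeholders):

Here are five example rows from a CSV file with the columns Participant ID, Age, Resting heart rate and Post-exercise heart rate:
[Paste your example rows here]
Generate 40 further fictional rows in the same format, keeping the values plausible and varied. Output the result as CSV only, with no commentary.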
We have also seen staff use answers that are openly AI-generated as a starting point for discussion in seminars, or in practice grading activities. This needs careful scaffolding.
Co-creating rubrics
If you are teaching a topic where students have more agency and you want to develop the grading rubric together, then AI tools can help you overcome the “blank page/screen problem”. You can use the rubric generator in Blackboard, or any generative AI tool live in class, to create and refine potential rubrics.
Here is an example of the sort of prompt you could use:
Create a detailed evaluation rubric for the following task:
[Insert your description of the assignment here]
The rubric should include:
4–5 performance levels (e.g., Excellent, Good, Satisfactory, Needs Improvement)
Clear, measurable criteria (e.g., accuracy, structure, creativity, technical skill, teamwork, etc.)
Specific descriptors for each performance level that make it easy to distinguish between the levels
A scoring system [select points or percentages]
Any relevant notes for staff and students
Format the rubric as a simple table.
Supporting a diverse student body
There is a developing body of literature looking at how generative AI tools might help all our students. A recent OECD study (Linsenmeyer, 2025) evaluated a range of tools and their ability to support children with special educational needs.
Add Neurodiverse study – TO ADD SOURCE
What does generative AI mean for assessment?
Generative AI provides both challenges and new opportunities for assessment. Assessment has always been an evolving and contentious area in education, and generative AI has shed light on some outdated practices.
Exams
The widespread availability of generative AI tools means that unsupervised online exams are no longer robust measures of students’ understanding of a topic. If you feel that an online exam is the best method of assessment for your discipline, then there are two common options used in the sector:
- Students complete the exam in a location under controlled conditions, e.g. a dedicated examination venue, where there are restrictions on the software they can access. This is costly, requires bringing your students together at one time and will favour some types of learners over others.
- Students complete the exams online but with an additional level of security – typically some kind of online proctoring solution. This may require identity verification, recording of the student during the exam and possibly also filming around the student (to check for secondary screens or devices). Early solutions were criticised for bias, triggering more warning flags for students with darker skin because of poor face-tracking (see Burgess et al., 2022). There are also significant privacy concerns about sharing personal details and live footage of our students with a third party. This solution also disadvantages students who can’t afford fast computers, strong broadband or a room of their own where they can work uninterrupted. For these reasons many institutions have decided not to routinely use remote proctoring solutions.
Multiple choice questions
Multiple-choice questions (MCQs) are particularly vulnerable to generative AI use because the format presents the correct answer among the options, and large language models have been trained to provide the most plausible output. As such, MCQs should only be used for formative assessment, or ideally self-assessment – e.g. a quick knowledge check for students at the end of a topic or as a primer before they start revising.
Permitting generative AI
In many cases (particularly when designing formative assessments, or those not assessing programme learning outcomes) you might want to allow the use of generative AI. As well as being a tool to help with drafting and organising work, it may be appropriate for students to use generative AI to help with analysis (e.g. if they are not coders or experts in statistical analysis), as long as you are confident that they have consciously selected an approach, understand what the output means and can communicate this to you.
Detecting generative AI use
We take the position that current generative AI detection tools are so unreliable and biased that we do not advocate their use (see Liang et al. (2023), Weber-Wulff et al. (2023) and Perkins et al. (2024)). AI tools are developing far more rapidly than detectors, and with far larger budgets, so we feel this is an arms race that the detectors will never win. A legion of online tools claim to detect generative AI. You should never upload student work into these tools (unless they are provided under subscription by your institution, which clearly takes a different view to us), as this work belongs to the student and you would need their explicit permission before exchanging it for a meaningless number.
Advice for students
As an expert in your discipline, you are well-placed to help students understand when it is, and when it is not, acceptable to use generative AI in your subject. This is a good time to raise the ethical, environmental and social concerns. Below are some suggestions – do they fit with your approach?
Acceptable
- Using it as a source of ideas (e.g. to help select potential dissertation topics)
- Developing study tools – e.g. making flash cards or test questions
- Using it to discuss topics to check your understanding or explain terms in simpler language
- Helping to organise and summarise material
- Helping to reword text or advising on grammar
- Checking that your work meets the requirements
Unacceptable
- Any task where the thinking and cognitive struggle has been offloaded to the generative AI tool (e.g. using an AI summary of an article without actually reading it).
- Passing off completely AI-generated content as your own work
- Using AI-generated content (images, flowcharts, graphs etc.) in your own work without acknowledgement
- Using AI to generate a range of references that you haven’t read (and may not actually exist)
- Using it instead of asking a human, when only a person can provide a suitable answer.
The need for clarity
There is a risk that differences between:
- the institutional message(s)
- the software provided by the institution
- the way staff use generative AI, and
- the way students feel able to use generative AI
result in the erosion of trust between staff and students.
In an attempt to combat this, several potential frameworks and assessment scales have been created, seeking to define positions and a common vocabulary. One example is the AI Assessment Scale developed by Perkins et al. in 2023 and revised in 2024.
