Generative AI

Advice in a rapidly changing world

Some may consider that many of the environmental concerns have an ethical dimension. We agree, but for simplicity they have their own page.

Here are some questions that you might want to ask when thinking about your use of generative AI tools. Sadly, when making a choice, many of the answers to these questions are not easy to find. That shouldn’t stop you asking them, though!

Is it free to use?

Article 26 of the UN Declaration of Human Rights states:

Everyone has the right to education. Education shall be free, at least in the elementary and fundamental stages. Elementary education shall be compulsory. Technical and professional education shall be made generally available and higher education shall be equally accessible to all on the basis of merit.

If our education processes become dependent upon generative AI will they remain free?

  • If a tool is free, how do the workers’ wages get paid and who pays for the infrastructure? Is your data being sold on? Or are you subjected to advertising you can’t control?
  • If a tool isn’t free, who are you paying and what for?

Is there equity of access?

If you are suggesting that students might want to use a particular tool for an assignment, then one of the first questions you should ask is whether they can all access it. Some tools require registration, possibly using a third-party account such as a Gmail or Microsoft email address – do all your students have these? Are there free and premium versions (almost certainly)? If some students pay for the premium version, what advantage could that give them (and how will you know)? Some services might be restricted by geography – e.g. blocked by organisational or national firewalls.

What happens to your data?

By data we mean everything you type into the chat, any files you upload and anything else that can be gleaned about you (IP address, computer type, operating system, location, etc.). Be sure you understand whether this information is collated over time as part of a history linked to your account, whether it can be used to train this or future models, and indeed whether other people can access it.

If you are uploading the work of others (and that applies to students’ work you are marking), do you have their explicit permission to do this?

If you are considering using personal data or material that needs to be confidential, then most generative AI tools are not suitable for this purpose.

Does it matter what model I use?

Can it be used anonymously?

There may be times when you are researching sensitive topics (e.g. racism or genocide) or perhaps looking for an alternative opinion that you don’t want to be linked to your search history. Can the tools be used in such a way that your activity is not traceable? Some institutional integrations keep the user’s identity within the tenancy, passing only a transient identifier to the external AI model. This approach, when verified, can help protect the anonymity of staff and students.
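To make that pattern concrete, here is a minimal sketch in Python of how such a proxy might work. Everything in it is illustrative: the forward_to_model function stands in for whatever external API an institution actually calls, and a real integration would also need to consider what metadata (IP addresses, headers and so on) the transport layer reveals.

```python
import uuid

# In-memory map from transient IDs to real users. In a genuine integration
# this mapping would live inside the institutional tenancy and never be
# shared with the external provider.
_session_map: dict[str, str] = {}


def forward_to_model(transient_id: str, prompt: str) -> str:
    """Placeholder for the call to the external AI provider.

    Only the transient identifier and the prompt leave the tenancy here.
    """
    return f"[model reply for session {transient_id}]"


def ask_anonymously(username: str, prompt: str) -> str:
    """Replace the user's identity with a one-off token before forwarding."""
    transient_id = uuid.uuid4().hex        # fresh token, not derived from the username
    _session_map[transient_id] = username  # mapping stays inside the institution
    return forward_to_model(transient_id, prompt)


if __name__ == "__main__":
    print(ask_anonymously("staff.member@example.ac.uk",
                          "Summarise recent research on genocide prevention."))
```

The key design point is that the external provider only ever sees the transient identifier; whether a given institutional integration actually works this way is exactly the sort of question worth asking.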

Where has the data come from?

Is the model representative or biased?

Was the model’s creation process ethical?

Does the tool respect and reward creators?

Many of the generative AI tools available require the end user to pay for premium services. Yet most models were trained on, or at least build upon, earlier models that took a very cavalier attitude to respecting copyright and rewarding the work of the original content creators and publishers. There is a raft of lawsuits from illustrators, authors, musicians and image archives. Does this redirection of funds matter to you? Are we only paying the thieves?

Who are the gatekeepers?

When the generative AI models are owned by a small number of companies there is a risk that they can exert undue influence upon learning.

Who decides how the model is trained?

This encompasses questions such as: which data sources should be used? Where are they from – which countries, cultures and historical periods? Will the data need to be processed, and if so, who does this and who decides what to keep and what to throw away? Model training is an iterative process – so who decides when the results are “good enough”?

What are and aren’t permitted questions?

Most users don’t get direct, unfiltered access to the underlying models. Your input may be modified or augmented by a meta prompt, and the model output may have to pass certain checks before it is returned to you. Whilst as a society we may not want these to include step-by-step instructions for making powerful explosives from household materials, or hateful speech, what impact does that have on the freedom to research topics or explore alternative viewpoints? Are these guardrails visible to you, and do you have a chance to appeal against them?
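As a purely illustrative sketch of where such a meta prompt and output checks can sit between you and the model (the prompt wording, the blocklist and the function names here are all invented for the example, not taken from any real product):

```python
# Toy pipeline: user input is wrapped in a meta prompt the user never sees,
# and the result is checked before it is returned.

META_PROMPT = (
    "You are a helpful assistant. Refuse requests for instructions that "
    "could cause physical harm and avoid hateful content.\n\nUser: {user_input}"
)

BLOCKED_TOPICS = ["explosive synthesis"]  # invented placeholder list


def call_model(prompt: str) -> str:
    """Placeholder for the real model call made by the tool's operator."""
    return f"[model output for: {prompt[:40]}...]"


def guarded_query(user_input: str) -> str:
    # 1. Embed the user's text in a meta prompt they never see.
    prompt = META_PROMPT.format(user_input=user_input)

    # 2. Apply checks; the refusal reason is rarely explained to the user.
    if any(topic in user_input.lower() for topic in BLOCKED_TOPICS):
        return "Sorry, I can't help with that."

    return call_model(prompt)


if __name__ == "__main__":
    print(guarded_query("Summarise arguments for and against nuclear power."))
```

In real tools both the meta prompt and the checks are usually invisible, which is why the questions above about transparency and appeal matter.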

Do I have to use generative AI?

This is a question we are hearing people begin to ask – whether students or staff. In many situations you are likely to have the freedom to choose, and here you should certainly consider the environmental implications of your choice. There may be cases, though, where it is required (for example, a student studying artificial intelligence is probably going to be assessed at some point on their knowledge and understanding of generative AI tools). Universities that are members of the Russell Group have committed to supporting their staff and students to become AI-literate. There may be corporate systems that use generative AI that you are required to use. If you have strong ethical objections, talk to your lecturers and line managers. We have seen some excellent examples where staff have asked students to critique the output of certain generative AI tools, without requiring the students to create accounts or to repeat the activity multiple times to get a similar answer. This allows students to get a better understanding of the strengths and limitations of generative AI without having to use it.

Is this going to lead to the enslavement of humanity and the world being taken over by machines?

This is the science fiction apocalypse predicted by some, if Artificial General Intelligence is achieved without the necessary safeguards. Are there checks in the development of the tools you are using? Are you asked to confirm potentially harmful actions, or have you already surrendered control to the algorithm?