
Customisation of large language models

In this step, you will look at some of the ways in which non-specialists are able to customise generative AI tools to help with accuracy, focus and other issues associated with the outputs.

In the previous step, you focused on common, publicly available generative AI tools, including the large language models ChatGPT and Microsoft Copilot. These tools started by enabling text inputs to create novel text outputs but, over the last 2 years or so, we have seen this extend to multimedia inputs and outputs. In this step, you will look at some of the ways in which non-specialists are able to customise generative AI tools to help with accuracy, focus and other issues associated with the outputs.


In this section, we’ll look at how some of the major tech companies are addressing issues of accuracy and precision in their large language models by offering non-technical users mechanisms to modify, enhance and otherwise customise their chatbots.

One of the most significant recent developments in the field of generative AI has been the introduction of techniques to ‘ground’ language models in specific, verified information. This approach aims to tackle some of the key challenges we will explore in more detail later, such as hallucinations (where AI generates plausible but incorrect information) and biases inherent in training data.

We’ll explore how companies are implementing features that allow users to specify particular data sources or knowledge bases for their AI queries. This not only helps to improve accuracy but also gives users more control over the information their AI assistants draw upon. We’ll look at specific examples of these developments and consider their implications for the future of AI in education and beyond.

Custom GPTs: tailored AI assistants

Custom GPTs are specialised versions of OpenAI’s ChatGPT, designed to perform specific tasks or cater to particular domains. These AI models are fine-tuned from the base ChatGPT to enhance performance in targeted areas, making them particularly useful for businesses, organisations, or individual users with specific needs.

Key features of Custom GPTs include:

  1. Specialisation: adapted for specific industries or tasks, providing more relevant and accurate responses.
  2. Enhanced performance: improved capabilities in certain contexts compared to general-purpose models.
  3. Integration: can be incorporated into existing software systems or digital platforms.
  4. Grounding: can be supplied with specific data to make the model’s answers more relevant.

Custom GPTs can be applied in various fields, such as customer service, content generation, education, and healthcare. They offer benefits like increased accuracy, improved efficiency, and potential cost savings over time.

To create a Custom GPT, users currently need a ChatGPT Plus subscription. The process involves defining the GPT’s purpose, uploading relevant knowledge and providing specific instructions to shape its behaviour. It’s worth noting that while creating Custom GPTs requires a paid subscription, the resulting GPTs can be made available to any user.

NotebookLM (currently limited by region) is an AI-powered note-taking and source-synthesis tool developed by Google. It is designed to make research and learning more efficient and interactive by providing AI assistance directly alongside the user’s content.

NotebookLM can do the following:

  1. Generate realistic podcasts from documents.
  2. Summarise complex documents quickly.
  3. Answer specific questions about uploaded content.
  4. Create briefings and study guides.
  5. Organise information from multiple sources.
  6. Highlight key ideas in papers.
  7. Help users understand relationships between different documents on the same topic.

This automatically generated ‘podcast’ represents a new level of authenticity in the way the ‘hosts’ exchange comments. The full audio is 11 minutes, but you only need to listen to 2 or 3 minutes to get the idea. How does this make you feel? Astounded? Appalled? Something else? Let us know!

Listen to the ‘podcast’

The key aspect of this tool (at the time of writing, still experimental) is its focus on the documents you select and upload. It enables users to create their own retrieval-augmented generation (RAG) or ‘grounded’ model. In my experience, this means I’m able to move away from receiving generalised, sometimes hallucinated, content towards more reliable, focused content that reflects, in its design, the ways we have traditionally interrogated texts. The real blessing for us is that even where NotebookLM synthesises or summarises ideas and content, it always provides a link to, and highlights, the section of each document it draws on.
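Grounding of this kind can be pictured as a simple retrieval-augmented generation loop: first retrieve the passages most relevant to the question, then hand only those passages to the model as its source material. The sketch below is illustrative only; it uses crude word-overlap scoring in place of a real embedding model, and the document snippets are invented examples, not NotebookLM’s actual mechanism.

```python
# Toy RAG sketch: retrieve the best-matching passages, then build a
# grounded prompt. Word-overlap scoring stands in for real embeddings;
# the documents below are invented examples.

def score(question, passage):
    """Count words the question and passage share (a crude relevance measure)."""
    return len(set(question.lower().split()) & set(passage.lower().split()))

def retrieve(question, passages, top_k=2):
    """Return the top_k passages ranked by word overlap with the question."""
    return sorted(passages, key=lambda p: score(question, p), reverse=True)[:top_k]

def build_grounded_prompt(question, passages):
    """Assemble a prompt that tells the model to answer ONLY from the sources."""
    sources = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer using ONLY the sources below, citing them by number.\n"
        f"Sources:\n{sources}\n\nQuestion: {question}"
    )

documents = [
    "NotebookLM links every summary back to the uploaded source document.",
    "Custom GPTs are tailored versions of ChatGPT for specific tasks.",
    "Hallucinations are plausible but incorrect outputs from a language model.",
]

best = retrieve("What are hallucinations in a language model?", documents)
print(build_grounded_prompt("What are hallucinations in a language model?", best))
```

Real tools do the same job with far better retrieval, but the principle is identical: the model answers from your documents rather than from its general training data, which is why the outputs can be traced back to a highlighted source.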

The major concern educators have with this and similar tools? They could lead to a bypassing of the cognitive processing necessary for learning.

Claude Projects is a feature of Anthropic’s ‘Claude’ (itself described as a ‘next generation AI assistant’), available to certain paid users, that expands the capabilities of the AI system. It increases the context window to 200,000 tokens (roughly 500 pages of text), allowing for longer-term memory across conversations. Users can upload documents to provide additional context and can customise the AI’s behaviour with specific instructions. Again, this grounds the synthesis, summarisation and generation work in specified documents.
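As a rough sanity check on that ‘500 pages’ figure: a common rule of thumb is that one token corresponds to about three-quarters of an English word, and a typical printed page holds around 300 words. Both conversion factors are approximations, not figures from Anthropic:

```python
# Rough check that 200,000 tokens is in the region of 500 pages.
# Both conversion factors are rules of thumb, not exact values.
tokens = 200_000
words = tokens * 0.75   # ~0.75 words per token
pages = words / 300     # ~300 words per printed page
print(f"{words:,.0f} words, about {pages:.0f} pages")
```

So the commonly quoted equivalence holds up under back-of-the-envelope arithmetic, though real documents vary widely in words per page and tokens per word.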

This development represents a shift in how AI systems can be used, potentially allowing for more contextual and domain-specific interactions. However, as with any new technology, its practical impact and effectiveness will need to be evaluated through actual use. In our experience, we have found it a remarkable assistance when conducting thematic analyses. We upload each question from a survey, for example, having previously grounded the Claude project in the research proposal, information sheet and so on. Then, we task the project to code and sort the material, enabling us to spend more time on analysis. We would argue that this is one powerful example where laborious aspects of research can be augmented and sped up without compromising cognitive engagement, though we are aware that many might argue otherwise!

Local AI

Although stronger AI systems require the computing power of cloud services, it is also possible to run smaller AI models locally using just the power of a graphics card. Several systems have sprung up to allow this, such as the popular Ollama. These can be paired with a variety of open-source front ends that typically allow both local and cloud models to be used. As with most software of this type, these front ends are more complex to set up and operate than cloud services. With patience, their use can be rewarding, and some SLMs (small language models) can be very effective even on devices with lower-powered graphics cards.
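To give a flavour of what a local set-up involves, the commands below show Ollama’s basic command-line workflow once it is installed. This is a sketch rather than an installation guide: `phi3` is one example of a small language model, and the catalogue of available models changes over time.

```shell
# Download a small language model (a few GB; phi3 is one example)
ollama pull phi3

# Chat with it entirely on your own machine, with no cloud service involved
ollama run phi3 "Explain retrieval-augmented generation in one sentence."
```

Everything here runs on the local device, which is why this route appeals to those with privacy concerns about sending data to cloud services.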

We’re starting to see a new type of computer chip called NPUs (neural processing units) emerge. These work alongside the CPU (which handles general tasks) and the GPU (which handles graphics) to specifically support AI tasks. NPUs are built into some devices, such as newer Mac computers with M-series chips and the PCs behind Microsoft’s new AI features. The idea is to let some of the AI work that usually happens in the cloud be done right on your device, making it faster and more efficient.

Important notes

Please keep in mind that the field of AI is rapidly evolving. The examples above represent just a snapshot of the ongoing efforts to make AI more reliable, transparent, and useful.

While any form of customisation offers significant potential, it’s important to consider challenges such as data privacy, security, and ethical implications, particularly when deploying AI in decision-making processes or uploading data. We definitely do NOT recommend uploading or using personal data, especially in free versions or into tools that give no guarantees or options for enhanced privacy.

Now that you have completed this step, you have seen how it is possible to customise bots to improve the outputs, particularly by grounding them in data you know, own or approve of. In the next step, you will look at how AI for education is not just about generative AI.

Try it out

If available in your region, try this Custom GPT called AI and Assessment. It was built in ChatGPT (with a paid subscription) but is available to those who have free access. It has a specific prompt to help draw out context and needs, and draws primarily on documents we have uploaded so that its responses are less generic and much more focused. It took approximately 20 minutes to create.

Join the conversation

What are your thoughts about the customisation potentials of these tools? Have you produced or used customised versions of generative AI models? Tell us about them!

If not, what do you think the primary gains of using these tools will be? What might we lose if we use them too much or unthinkingly?

This article is from the free online course AI in Education, created by FutureLearn - Learning For Life.
