
AI Blog: What is AI?

15 November 2024

Welcome to our new AI blog.  Today, we consider what AI is.

AI stands for Artificial Intelligence and has historically been used to describe any sort of computer program designed to simulate human decision making.  In its early days, this could consist of a simple set of rules governing how to behave, e.g. basic video game enemies and early chess engines.  In modern parlance, AI is now typically used to refer to machine learning algorithms, specifically large language models (LLMs), which are what this blog series will primarily focus on.  But first, let’s consider what these terms actually mean.


Machine Learning

Let’s start with machine learning.  There are a few approaches to machine learning, but they all essentially boil down to feeding data into a computer program and providing some guidance as to the desired outputs in certain scenarios, until the program develops an algorithm that can make those decisions with less human input.  More specifically, there are three general categories (with a short illustrative sketch after the list):

  • Supervised learning: example inputs and their desired outputs are provided until the computer develops general rules that allow it to map inputs to outputs
  • Unsupervised learning: uncategorised data is provided, allowing the computer to independently identify patterns that may have been missed or need verification
  • Reinforcement learning: the computer is allowed to interact with a dynamic environment and is scored based on how effectively its interactions have helped it approach a certain goal, allowing it to identify which courses of action will help or hinder it in reaching that goal.
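To make the supervised case a little more concrete, here is a minimal sketch in Python, assuming the scikit-learn library is available.  The “fruit” measurements and labels below are invented purely for illustration; the point is simply that the computer is given example inputs with their desired outputs and works out rules for itself.

```python
# A minimal sketch of supervised learning (illustrative data only).
# Each example is [weight in grams, texture score]; labels are
# 0 for "apple" and 1 for "orange".
from sklearn.tree import DecisionTreeClassifier

# Example inputs (features) and their desired outputs (labels)
X = [[150, 0.2], [170, 0.3], [140, 0.8], [130, 0.9]]
y = [0, 0, 1, 1]

# "Training": the model develops rules that map inputs to outputs
model = DecisionTreeClassifier()
model.fit(X, y)

# The trained model can now make a decision about an input it has never seen
print(model.predict([[160, 0.25]]))  # expected output: [0], i.e. "apple"
```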

Large language models rely on machine learning algorithms to function, but how do they work exactly?


Large Language Models

LLMs are built to understand, analyse and generate language.  In this blog series we’ll be focusing on AI chatbots such as ChatGPT, Gemini, and Copilot. 

These LLMs are trained on large datasets consisting of pieces of writing created by actual humans, which allows them to generate text that reads like something written by a human.  LLMs achieve this by analysing what has been written so far and predicting what is likely to come next, based on the data they have been trained on.  It’s important to understand that AI can, and does, make mistakes, as the most likely words to come next are not always correct.
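As a rough illustration of “predicting what comes next”, here is a toy sketch in plain Python that simply counts which word most often follows each word in a tiny invented corpus.  Real LLMs use neural networks trained on vastly more text, but the underlying idea of choosing a likely continuation is similar, and the sketch also shows why “most likely” is not the same as “correct”.

```python
# A toy "what word comes next?" model, purely for illustration.
# Real LLMs use neural networks trained on enormous datasets; this sketch
# just counts which word most often follows each word in a tiny corpus.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the sofa".split()

# Count how often each word follows each other word
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word that most often followed the given word in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" - the most common continuation, but not the only one
print(predict_next("cat"))  # "sat" or "slept" are equally likely; the pick isn't "correct", just plausible
```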

Take care when using AI: it’s a powerful tool that can aid you in many ways (which we’ll explore throughout this blog series), but it’s important to verify any information that it provides.


Come back next time for more on artificial intelligence!
