
ChatGPT Glossary: 46 AI Terms Everyone Should Know

The introduction of ChatGPT in late 2022 completely changed how people understood technology. Suddenly, humans could have meaningful conversations with machines: you could ask an AI chatbot questions in natural language and it would respond with original answers, much like a human would. It was so groundbreaking that Google, Meta, Microsoft and Apple quickly began integrating AI into their product suites.

But AI chatbots are just one part of the AI landscape. Sure, it's cool when ChatGPT helps you do your homework or when Midjourney creates fascinating images of mechs based on their country of origin, but the potential of generative AI could completely reshape the economy. According to the McKinsey Global Institute, it could be worth $4.4 trillion a year to the global economy, so expect to hear more and more about AI.


It appears in a dizzying array of products—a very short list includes Google's Gemini, Microsoft's Copilot, Anthropic's Claude, the Perplexity AI search tool, and gadgets from Humane and Rabbit. You can read our reviews and hands-on evaluations of these and other products, along with news, explainers and how-to posts, in our AI Atlas hub.

As people get used to a world interwoven with AI, new terms are popping up everywhere. So whether you’re trying to sound smart over a drink or impress at a job interview, here are some important AI terms you should know.

This dictionary will be updated regularly.

Artificial General Intelligence, or AGI: A concept suggesting a more advanced version of artificial intelligence than we know today, one that can perform tasks much better than humans while also learning and advancing its own capabilities.

agentive: Systems or models that exhibit agency, with the ability to autonomously pursue actions to achieve a goal. In the context of AI, an agentive model can operate without constant supervision, such as a high-level autonomous car. Unlike an "agentic" framework, which works in the background, agentive frameworks are front and center, focusing on the user experience.

AI ethics: Principles aimed at preventing AI from harming humans, achieved through means like determining how AI systems should collect data or deal with bias.

AI safety: An interdisciplinary field concerned with the long-term impacts of AI and how it could progress suddenly into a superintelligence that could be hostile to humans.

algorithm: A series of instructions that allows a computer program to learn and analyze data in a particular way, such as recognizing patterns, and then learn from those patterns and perform tasks on its own.

alignment: Tweaking an AI to better produce a desired outcome. This can refer to anything from moderating content to maintaining positive interactions with people.

anthropomorphism: The tendency of humans to give nonhuman objects humanlike characteristics. In the case of AI, this can include believing a chatbot is more humanlike and aware than it actually is, such as believing it's happy, sad or even sentient.

artificial intelligence, or AI: The use of technology to simulate human intelligence, either in computer programs or robotics. A field of computer science that aims to build systems that can perform human tasks.

autonomous agents: An AI model that has the capabilities, programming and other tools to accomplish a specific task. A self-driving car, for example, is an autonomous agent because it has sensory inputs, GPS and driving algorithms to navigate the road on its own. Stanford researchers have shown that autonomous agents can develop their own cultures, traditions and shared language.

bias: In regard to large language models, errors resulting from the training data. This can result in falsely attributing certain characteristics to certain races or groups based on stereotypes.

chatbot: A program that communicates with humans through text that simulates human language.

ChatGPT: An artificial intelligence (AI) chatbot developed by OpenAI that uses large language model technology.

cognitive computing: Another term for artificial intelligence.

data augmentation: Remixing existing data or adding a more diverse dataset to train an AI.

deep learning: An AI method and subfield of machine learning that uses multiple parameters to recognize complex patterns in images, sounds, and text. The process is inspired by the human brain and uses artificial neural networks to create patterns.

diffusion: A method of machine learning that takes an existing piece of data, like a photo, and adds random noise. Diffusion models train their networks to re-engineer or recover that photo.
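The "adding random noise" half of the process is simple enough to sketch. Below is a minimal, hypothetical illustration in Python, using a flat list of pixel values; real diffusion models work on image tensors and schedule the noise over many steps, then train a network to reverse it.

```python
import random

def add_gaussian_noise(pixels, noise_level, rng=None):
    # Forward step of a diffusion process: corrupt the data
    # (here, a flat list of pixel values) with random noise.
    # A diffusion model is then trained to undo this corruption.
    rng = rng or random.Random(0)
    return [p + rng.gauss(0.0, noise_level) for p in pixels]
```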

emergent behavior: When an AI model exhibits unintended abilities.

end-to-end learning, or E2E: A deep learning process in which a model is instructed to perform a task from start to finish. It's not trained to accomplish the task sequentially; instead, it learns from the inputs and solves it all at once.

ethical considerations: An awareness of the ethical implications of AI and issues related to privacy, data usage, fairness, misuse and other safety concerns.

foom: Also known as fast takeoff or hard takeoff. The concept that if someone builds an AGI, it might already be too late to save humanity.

generative adversarial networks, or GANs: A generative AI model composed of two neural networks that generate new data: a generator and a discriminator. The generator creates new content, and the discriminator checks whether it's authentic.

generative artificial intelligence: A content-generation technology that uses AI to create text, video, computer code or images. The AI is fed large amounts of training data and finds patterns in order to generate its own novel responses, which can sometimes be similar to the source material.

Google Gemini: Google's AI-powered chatbot that works similarly to ChatGPT but pulls information from the current web, whereas ChatGPT is limited to data through 2021 and isn't connected to the internet.

guardrails: Policies and restrictions placed on AI models to ensure data is handled responsibly and that the model doesn't create disturbing content.

hallucination: An incorrect response from the AI. This could involve generative AI producing responses that are incorrect, but confidently delivered as if they were correct. The reasons for this are not entirely known. For example, if you ask an AI chatbot “When did Leonardo da Vinci paint the Mona Lisa?” it might respond with an incorrect statement, saying, “Leonardo da Vinci painted the Mona Lisa in 1815,” which is 300 years after it was actually painted.

large language model, or LLM: An AI model trained on large amounts of text data to understand language and generate novel content in human-like language.

machine learning, or ML: A component of AI that allows computers to learn and produce better predictive outcomes without being explicitly programmed. It can be coupled with training sets to generate new content.

Microsoft Bing: Microsoft's search engine, which can now use ChatGPT technology to deliver AI-powered search results. It's similar to Google Gemini in that it's connected to the internet.

multimodal artificial intelligence: A type of AI that can process multiple types of input, including text, images, videos and speech.

natural language processing: A branch of AI that uses machine learning and deep learning to give computers the ability to understand human language, often using learning algorithms, statistical models and linguistic rules.

neural network: A computational model that resembles the human brain's structure and is designed to recognize patterns in data. It consists of interconnected nodes, or neurons, that can recognize patterns and learn over time.

overfitting: An error in machine learning in which a model hews too closely to its training data and may only be able to identify specific examples in that data, but not new data.

paperclips: The Paperclip Maximizer theory, coined by philosopher Nick Bostrom of the University of Oxford, is a hypothetical scenario in which an AI system creates as many literal paperclips as possible. In pursuit of producing the maximum number of paperclips, the AI system would hypothetically consume or convert all materials toward that goal. This could include dismantling other machinery to produce more paperclips, machinery that could be beneficial to humans. The unintended consequence of such an AI system is that it could destroy humanity in its drive to make paperclips.

parameters: Numerical values that give an LLM structure and behavior, enabling it to make predictions.

prompt: The suggestion or question you enter into an AI chatbot to get a response.

prompt chaining: The ability of AI to use information from previous interactions to shape future responses.
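At its simplest, prompt chaining just means feeding part of an earlier answer back in alongside the next prompt. The helper below is a hypothetical sketch, not any product's actual mechanism; the `ask` callable stands in for whatever chatbot API you happen to be using.

```python
def chain_prompts(ask, prompts):
    # 'ask' is any callable that takes a prompt string and returns
    # a reply string, e.g. a thin wrapper around a chatbot API.
    history = []
    reply = ""
    for prompt in prompts:
        # Feed the previous answer back in so the model can use
        # information from earlier turns to shape its response.
        full_prompt = f"Earlier answer: {reply}\n{prompt}" if reply else prompt
        reply = ask(full_prompt)
        history.append((prompt, reply))
    return history
```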

stochastic parrot: An analogy for LLMs illustrating that the software has no larger understanding of the meaning behind language or the world around it, regardless of how convincing its output sounds. The phrase refers to how a parrot can mimic human words without understanding the meaning behind them.

style transfer: The ability to adapt the style of one image to the content of another, allowing AI to interpret the visual attributes of one image and use them in another. For example, taking a Rembrandt self-portrait and reproducing it in the style of Picasso.

temperature: A parameter set to control how random a language model's output is. A higher temperature means the model takes more risks.
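A minimal sketch of what temperature does under the hood, assuming the common softmax-sampling setup (chatbot products typically expose this only as an API setting): the model's raw scores, or logits, are divided by the temperature before being converted to probabilities, so low temperatures concentrate probability on the top choice while high temperatures spread it out.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=random):
    # Divide each raw score by the temperature: values below 1
    # sharpen the distribution, values above 1 flatten it.
    scaled = [l / temperature for l in logits]
    # Softmax (subtracting the max for numerical stability).
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index according to those probabilities.
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i, probs
    return len(probs) - 1, probs
```

With logits [2.0, 1.0, 0.1], a temperature of 0.5 gives the top choice most of the probability mass, while a temperature of 2.0 makes the three options much more evenly likely, which is the "takes more risks" behavior.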

text-to-image generation: Creating images based on text descriptions.

tokens: Small bits of written text that AI language models process to formulate their responses to your prompts. A token is equivalent to about four characters in English, or roughly three-quarters of a word.
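That rule of thumb (one token is roughly four English characters) can be turned into a quick, admittedly crude estimator; real tokenizers are subword-based and will give somewhat different counts.

```python
def estimate_tokens(text):
    # Rough rule of thumb: one token is about four English
    # characters, i.e. roughly three-quarters of a word.
    # Real tokenizers split on learned subword units instead.
    return max(1, round(len(text) / 4))
```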

training data: Datasets used to help AI models learn, including text, images, code or data.

transformer model: A neural network architecture and deep learning model that learns context by tracking relationships in data, such as sentences or parts of images. Instead of analyzing a sentence one word at a time, it can look at the entire sentence and understand the context.

Turing Test: Named after the famous mathematician and computer scientist Alan Turing, it tests the ability of a machine to behave like a human. The machine passes if the human cannot distinguish the machine’s response from that of another human.

weak AI, also called narrow AI: AI that's focused on a particular task and can't learn beyond its skill set. Most of today's AI is weak AI.

zero-shot learning: A test in which a model must complete a task without being given the requisite training data. An example would be recognizing a lion while only being trained on tigers.