AGI? GPUs? Learn the definitions of the most common AI terms to enter our vocabulary
- Do you know what an LLM even is? How about a GPU? A new vocabulary has emerged with the rise of AI.
- From AGI to prompt engineering, new terms and concepts are being created seemingly every day.
- Use this glossary of AI-related terms to speak about this technology with authority.
It's becoming nearly impossible to ignore AI in our everyday lives.
That doesn't mean it's always easy to understand. From agentic AI to UBI, tech CEOs, Wall Street, and politicians increasingly sound like they are speaking another language.
Even if you don't use AI in your day-to-day life, chances are your bank, your doctor, the streaming service you're using, and maybe even your car do.
Here's a list of the people, companies, and terms you need to know to talk about AI, in alphabetical order.
The AI terms you need to know
Agentic: A type of artificial intelligence that can make proactive, autonomous decisions with limited human input. Unlike generative AI chatbots such as ChatGPT, agentic AI does not need a human prompt for every action; it can carry out complex, multi-step tasks and adapt when objectives change.
AGI: "Artificial general intelligence," or the ability of artificial intelligence to perform complex cognitive tasks such as displaying self-awareness and critical thinking the way humans do.
Alignment: A field of AI safety research that aims to ensure that the goals, decisions, and behaviors of AI systems are consistent with human values and intentions.
Bias: Because AI models are trained on data created by humans, they can also adopt the same fallible human biases present in that data. There are a number of different types of bias that AI models can succumb to, including prejudice bias, measurement bias, cognitive bias, and exclusion bias — all of which can distort the results.
Capability overhang: The term, credited to Microsoft Chief Technology Officer Kevin Scott, for the gap between what AI models are capable of doing and how much of that potential today's applications actually tap.
ChatGPT: OpenAI's signature chatbot that launched to significant fanfare in 2022 and is often credited with kickstarting the AI race. As of 2026, OpenAI is facing concerns that rival AI tools may be surpassing ChatGPT's capabilities. GPT stands for "generative pre-trained transformer."
Claude: Anthropic's flagship AI model, first launched in March 2023. While Anthropic's core focus is its enterprise business, Claude has been lauded for its ability to write code. In early 2026, Anthropic added healthcare and other general-purpose tools to Claude.
Compute: The AI computing resources needed to train models and carry out tasks, including processing data. This can include GPUs, servers, and cloud services.
Data centers: Large warehouses filled with tens, if not hundreds, of thousands of advanced computer chips and graphics processing units, used to handle the large amounts of data, storage, and complex processing required to power AI models. Unlike older data centers, AI-focused facilities require significantly more space and energy because of the widely held assumption that AI models learn best at massive scale.
Deepfake: An AI-generated image, video, or voice meant to appear real that tends to be used to deceive viewers or listeners. Deepfakes have been used to create non-consensual pornography and extort people for money.
Distillation: The process of transferring the reasoning ability and learned knowledge of a larger, existing AI model to a new, smaller one; essentially, copying an AI model to jump-start your own.
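The idea can be sketched in a few lines of Python. The probabilities below are invented for illustration: the smaller "student" model is trained to match the full probability distribution the larger "teacher" assigns to possible answers, not just a single right answer.

```python
import math

# Hypothetical probabilities a large "teacher" model assigns to three
# possible next words, and a smaller "student" model's current guesses.
teacher = [0.7, 0.2, 0.1]
student = [0.5, 0.3, 0.2]

# Distillation loss: how far the student's distribution is from the
# teacher's (a cross-entropy between the two). Training nudges the
# student's weights until this number shrinks.
loss = -sum(t * math.log(s) for t, s in zip(teacher, student))
print(round(loss, 3))  # 0.887 for these made-up numbers
```

The closer the student's guesses track the teacher's, the smaller the loss, which is why a distilled model can behave remarkably like the model it was copied from.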
Doomer: A derisive term for AI skeptics, whether they voice concern about the risks of AI development (building technology that could turn against humanity) or simply doubt that AI will achieve lofty ambitions like models capable of human-like reasoning.
Effective altruists: Broadly speaking, a social movement built on the idea that all lives are equally valuable and that those with resources should use them to help as many people as possible. In the context of AI, effective altruists, or EAs, are interested in how AI can be safely deployed to reduce the suffering caused by social ills like climate change and poverty. Figures including Elon Musk, Sam Bankman-Fried, and Peter Thiel have been associated with the movement. (See also: e/accs and decels.)
Federal preemption: The debate over whether states should set their own AI-related policies or whether the federal government should limit what states can do. The White House and some tech companies have pushed for a moratorium on state-level AI laws, but Republicans are split enough that Congress has been unable to pass one. In December 2025, President Donald Trump signed an executive order discouraging states from passing their own laws.
Frontier models: Refers to the most advanced examples of AI technology. The Frontier Model Forum — an industry nonprofit launched by Microsoft, Google, OpenAI, and Anthropic in 2023 — defines frontier models as "large-scale machine-learning models that exceed the capabilities currently present in the most advanced existing models, and can perform a wide variety of tasks."
Gemini: Google's flagship AI model, first launched in 2023 under its former name, "Bard." Despite funding the groundbreaking research that fueled AI's development, Google faced criticism for falling behind OpenAI, a startup. As of late 2025, leading voices in the industry saw Gemini 3 as meeting, if not surpassing, ChatGPT's capabilities. Google has said Gemini's name was inspired by the zodiac constellation and NASA's famed Project Gemini, whose crewed missions from 1965 to 1966 helped lay the groundwork for putting humans on the Moon.
Gigawatts: A large unit of power; a single gigawatt can supply roughly 750,000 homes. Leading tech and AI CEOs have increasingly used the metric to convey the sheer size of the data centers they plan to build. In computing terms, 10 gigawatts corresponds to roughly 4 to 5 million graphics processing units.
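The figures above lend themselves to quick back-of-the-envelope math. A short Python sketch, using a hypothetical 5-gigawatt campus and the midpoint of the 4-to-5-million GPU range:

```python
# Rough figures from the entry above: one gigawatt powers about 750,000
# homes, and 10 gigawatts corresponds to roughly 4 to 5 million GPUs.
homes_per_gigawatt = 750_000
gpus_per_ten_gigawatts = 4_500_000  # midpoint of the 4-5 million range

campus_gigawatts = 5  # a hypothetical data center campus, for scale
homes_powered = campus_gigawatts * homes_per_gigawatt
rough_gpu_count = campus_gigawatts * gpus_per_ten_gigawatts // 10

print(homes_powered)    # power for 3,750,000 homes
print(rough_gpu_count)  # roughly 2,250,000 GPUs
```

By this arithmetic, one large campus can draw as much power as every home in a midsize US state.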
GPU: A computer chip, short for graphics processing unit, that companies use to train and deploy their AI models. Nvidia's GPUs are used by Microsoft and Meta to run their AI models.
Hallucinations: A phenomenon where a large language model (see below) generates inaccurate information that it presents as a fact. For example, during an early demo, Google's AI chatbot Bard hallucinated by generating a factual error about the James Webb Space Telescope.
Large language model: A complex computer program designed to understand and generate human-like text. The model is trained on vast amounts of text and produces answers by predicting, word by word, the most likely continuation based on patterns learned during training. Examples of LLMs include OpenAI's GPT-5, Meta's Llama 4, and Google's Gemini.
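At its core, an LLM works by repeatedly predicting the most likely next word. A toy illustration in Python, using invented word counts in place of a trained neural network:

```python
# Invented counts of which word follows "the" in a tiny training corpus.
# A real LLM encodes billions of such patterns as neural-network weights
# rather than an explicit lookup table like this one.
next_word_counts = {"cat": 50, "dog": 30, "moon": 5}

total = sum(next_word_counts.values())  # 85 observations in all
# Turn counts into probabilities, then pick the likeliest continuation.
probs = {word: count / total for word, count in next_word_counts.items()}
prediction = max(probs, key=probs.get)
print(prediction)  # "cat", the most common continuation in this toy corpus
```

A chatbot repeats this predict-one-word step over and over, feeding each new word back in, until it has produced a full answer.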
Machine learning: A branch of AI in which systems learn patterns from data and improve with experience, rather than following explicit, hand-written instructions. Deep learning, which relies on large neural networks, is a prominent subset of machine learning.
Multimodal: The ability for AI models to process text, images, and audio to generate an output. Users of ChatGPT, for instance, can now write, speak, and upload images to the AI chatbot.
Natural language processing: The umbrella term encompasses a variety of methods for interpreting and understanding human language. LLMs are one tool for interpreting language within the field of NLP.
Neural network: A machine learning program loosely modeled on the human brain. Facial recognition systems, for instance, use neural networks to identify a person by analyzing their facial features.
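A stripped-down illustration in Python of what a single artificial "neuron," and a tiny network of them, actually computes. All of the weights here are made up; training a real network means automatically adjusting millions or billions of such numbers.

```python
import math

def neuron(inputs, weights, bias):
    # Each artificial "neuron" multiplies its inputs by learned weights,
    # adds a bias, and squashes the result to a value between 0 and 1.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid activation

# A tiny two-layer network with invented weights: two inputs feed two
# hidden neurons, whose outputs feed one final neuron.
inputs = [0.5, 0.8]
hidden = [neuron(inputs, [0.4, -0.6], 0.1),
          neuron(inputs, [0.7, 0.2], -0.3)]
output = neuron(hidden, [1.2, -0.9], 0.0)
print(round(output, 3))
```

Stack enough of these layers and the network can learn to map raw pixels to "this is Jane's face," which is all a facial recognition model is doing at scale.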
Open-source: A trait used to describe a computer program that anyone can freely access, use, and modify without asking for permission. Some AI experts have called for the models behind products like ChatGPT to be open-source so the public knows exactly how they are trained.
Optical character recognition: OCR is technology that can recognize text within images — like scanned documents, text in photos, and read-only PDFs — and extract it into text-only format that machines can read.
Prompt engineering: The craft of phrasing questions and instructions to AI chatbots so they produce the desired responses. As a profession, prompt engineers specialize in writing and refining these inputs to improve a model's outputs.
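A simple illustration in Python of the kind of template a prompt engineer might build. The role, constraints, and function name are all hypothetical, not drawn from any real product:

```python
def build_prompt(question: str) -> str:
    # A deliberately structured prompt: a role, then constraints, then
    # the actual task. Small wording changes here can noticeably change
    # what a model produces.
    return (
        "You are a careful financial analyst.\n"
        "Answer in exactly three bullet points, and say 'I don't know' "
        "rather than guessing.\n\n"
        f"Question: {question}"
    )

prompt = build_prompt("Why do AI data centers need so much power?")
print(prompt)
```

The same question wrapped in a different role ("You are a stand-up comedian") would pull a very different answer from the same model, which is the leverage prompt engineers exploit.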
Rationalists: People who believe that the most effective way to understand the world is through logic, reason, and scientific evidence. They draw conclusions by gathering evidence and critical thinking rather than following their personal feelings.
When it comes to AI, rationalists seek to answer questions like how AI can be smarter, how it can solve complex problems, and how it can better process information about risk. That stands in opposition to empiricists, who, in the context of AI, may favor advancements backed by observational data.
Responsible scaling policies: Guidelines for AI developers to follow that are designed to mitigate safety risks and ensure the responsible development of AI systems, their impact on society, and the resources they will consume, such as energy and data. Such policies help ensure that AI is ethical, beneficial, and sustainable as systems become more powerful.
Singularity: A hypothetical moment where artificial intelligence becomes so advanced that the technology surpasses human intelligence. Think of a science fiction scenario where an AI robot develops agency and takes over the world.
Transformer: A type of neural network at the core of large language models like OpenAI's GPT, which in turn powers chatbots like ChatGPT. In fact, the last T in "GPT" stands for transformer. Critically for AI, transformers process all parts of an input in parallel, as opposed to earlier neural networks that handled data sequentially, dramatically reducing training time and enabling much larger models.
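The transformer's key trick is "attention": every word in a sentence scores its relevance against every other word at once, rather than reading strictly left to right. A bare-bones sketch of that scoring step in Python, using invented two-number word representations:

```python
import math

def attention_weights(query, keys):
    # Score the query word against every word at once (dot products),
    # scaled by the vector size, then convert the scores into
    # percentages with a softmax so they sum to 1.
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(len(query))
              for key in keys]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Toy 2-number "embeddings" for the words in "the cat sat". Real models
# learn vectors with thousands of dimensions.
words = {"the": [0.1, 0.2], "cat": [0.9, 0.4], "sat": [0.5, 0.8]}
# How much attention should "sat" pay to each word, itself included?
weights = attention_weights(words["sat"], list(words.values()))
print([round(w, 2) for w in weights])  # three weights that sum to 1.0
```

Because every word's scores can be computed independently, the whole sentence is processed in one parallel pass, which is what made training on massive datasets practical.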
Universal basic income: A policy where the local, state, or federal government would guarantee a minimum income for citizens. Popularized by then-Democratic presidential hopeful Andrew Yang in 2020, the idea has taken on renewed relevance amid fears that AI may replace a significant number of jobs, potentially causing widespread unemployment. Alternatively, some, such as Musk, believe AI could create an abundance for humanity, enabling everyone to become wealthy and achieve "universal high income."
The top AI leaders and companies
Sam Altman: The cofounder and CEO of OpenAI, the company behind ChatGPT. In 2023, Altman was ousted by OpenAI's board before returning to the company as CEO days later.
Dario Amodei: The CEO and cofounder of Anthropic, a major rival to OpenAI, where he previously worked. The AI startup is behind the chatbot Claude. Google and Amazon are investors in Anthropic.
Demis Hassabis: The cofounder of DeepMind and now CEO of Google DeepMind, Hassabis leads AI efforts at Alphabet.
Jensen Huang: The CEO and cofounder of Nvidia, the tech giant behind the specialized chips companies use to power their AI technology.
Alex Karp: The CEO and cofounder of Palantir, a defense and data company that has skyrocketed in value. Known as an iconoclastic leader, Karp has called Palantir the "first to be anti-woke" and takes pride in the company's national security business, especially its work with the US government.
Yann LeCun: Formerly Meta's chief AI scientist, LeCun is a renowned researcher who is considered among the "Godfathers of AI" due to his work on deep learning with Nobel laureate Geoffrey Hinton and others. LeCun has been critical of some of Meta's AI direction and is a leading skeptic of the extent to which large language models will unlock the biggest breakthroughs in AI.
Mira Murati: The CEO and cofounder of Thinking Machines, Murati has made waves in Silicon Valley since leaving OpenAI, where she was CTO and briefly interim CEO.
Elon Musk: The Tesla and SpaceX CEO founded artificial intelligence startup xAI in 2023. The valuation of this new venture had risen dramatically as of late last year, pegged at an estimated $50 billion, according to reports at the time. Musk also cofounded OpenAI, and after leaving the company in 2018, he has maintained a feud with Altman.
Satya Nadella: The CEO of Microsoft, the software giant behind the AI-powered Bing search engine and Copilot, a suite of generative AI tools. Microsoft is also an investor in OpenAI.
Sundar Pichai: The CEO of Google, Pichai faced criticism over Google's AI leadership after the release of OpenAI's ChatGPT in 2022. By the end of 2025, some in the industry proclaimed Google had caught up with, if not surpassed, the startup with the release of Gemini 3.
Mustafa Suleyman: The cofounder of DeepMind, Google's AI division, who left the company in 2022. He cofounded Inflection AI before he joined Microsoft as its chief of AI in March 2024.
Ilya Sutskever: The cofounder and chief scientist at Safe Superintelligence, Sutskever helped start OpenAI before eventually pushing for Altman's ouster, a move he regrets. Like LeCun, Sutskever has expressed skepticism that scaling compute is enough to advance AI.
Alexandr Wang: Meta's chief AI officer has experienced a rapid rise since cofounding Scale AI in 2016, out of famed Silicon Valley startup incubator Y Combinator. In June 2025, Meta acquired a 49% stake in Scale AI and poached Wang as part of its campaign to lure top AI talent. Meta has since reorganized its AI teams to focus on training, research, product, and infrastructure in a race to build "personal superintelligence."
Liang Wenfeng: The hedge fund manager who founded the Chinese AI startup DeepSeek in 2023. At the beginning of 2025, the startup made waves across the AI industry with its flagship model, R1, which reportedly rivals its top competitors in capability at a fraction of the cost.
Mark Zuckerberg: The Facebook founder and Meta CEO who has been spending big to advance Meta's AI capabilities, including training its own models and integrating the technology into its platforms.