SpeedUpHire

AI Glossary

Common AI terms explained

Evals

Evals are tests used to measure how well an AI model performs on specific tasks, assessing qualities such as accuracy, reasoning, tone, safety, or consistency.
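A minimal sketch of an eval harness: run a model over a set of cases and score how many answers match. The `toy_model` here is a canned stand-in, not a real model, and the cases are illustrative.

```python
# Hypothetical stand-in for a real model: returns canned answers.
def toy_model(question: str) -> str:
    canned = {"2+2": "4", "capital of France": "Paris"}
    return canned.get(question, "unknown")

def run_eval(model, cases):
    """Return the fraction of cases where the model's answer matches."""
    correct = sum(1 for question, expected in cases if model(question) == expected)
    return correct / len(cases)

cases = [
    ("2+2", "4"),
    ("capital of France", "Paris"),
    ("capital of Spain", "Madrid"),   # toy_model will miss this one
]
score = run_eval(toy_model, cases)   # 2 of 3 cases pass
```

Real eval suites score many more dimensions (tone, safety, reasoning) than exact-match accuracy, but the loop is the same shape: cases in, scores out.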

Fine-tuning

Fine-tuning is a post-training method where a pre-trained model is further trained on a smaller, task-specific dataset to improve performance in a particular domain.

Hallucination

A hallucination occurs when an AI model generates information that sounds correct but is actually false or made up, because the model predicts plausible-looking patterns rather than verifying facts.

Large Language Model (LLM)

A large language model is a type of AI trained on massive amounts of text to understand, generate, and reason over natural language.

Prompt

A prompt is the input given to an AI model that guides how it responds, including instructions, questions, or examples.

Prompt Engineering

Prompt engineering is the practice of designing and refining prompts to get more accurate, consistent, or useful outputs from AI models.
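To make the idea concrete, here is the same request written vaguely and then refined with a role, explicit constraints, and a style example. The wording of both prompts is illustrative, not a prescribed template.

```python
vague_prompt = "Summarize this."

# Refined: a role, explicit length and audience constraints, and an example.
refined_prompt = (
    "You are a concise technical editor.\n"
    "Summarize the text below in exactly two sentences, "
    "in plain language, for a non-expert audience.\n\n"
    "Example summary style: 'The report finds X. It recommends Y.'\n\n"
    "Text: {text}"
)

prompt = refined_prompt.format(text="Quarterly revenue rose 12% year over year.")
```

The refined version gives the model far less room to guess at length, tone, or audience, which is the core of prompt engineering.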

Token

A token is a unit of text (such as a word, part of a word, or symbol) that an AI model processes when reading or generating language.
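A crude illustration of tokenization: split text into words and punctuation. Real models use learned subword schemes such as byte-pair encoding, which can split a single word into several tokens; this regex split is only a sketch.

```python
import re

def tokenize(text: str):
    # \w+ matches runs of word characters; [^\w\s] matches single punctuation marks.
    return re.findall(r"\w+|[^\w\s]", text)

tokens = tokenize("AI models read text as tokens!")
# ['AI', 'models', 'read', 'text', 'as', 'tokens', '!']
```

Models are billed and limited by token counts, not word counts, which is why the distinction matters in practice.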

Context Window

The context window is the maximum amount of text, measured in tokens, that an AI model can consider at once when generating a response.
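A sketch of how an application keeps a conversation inside the window: drop the oldest messages until what remains fits the token budget. Word counts stand in for real token counts here.

```python
def fit_to_window(messages, max_tokens):
    """Keep only the most recent messages that fit within max_tokens."""
    kept, used = [], 0
    for msg in reversed(messages):        # walk from newest to oldest
        cost = len(msg.split())           # rough stand-in for a tokenizer
        if used + cost > max_tokens:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))           # restore chronological order

history = [
    "hello there",
    "tell me about tokens",
    "sure here is a long answer about tokens",
]
window = fit_to_window(history, max_tokens=10)   # only the newest message fits
```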

Embeddings

Embeddings are numerical representations of text, images, or other data that capture meaning and relationships in a format AI models can work with.
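The point of embeddings is that similar meanings land close together. The toy 3-dimensional vectors below are hand-picked for illustration; real embeddings have hundreds or thousands of dimensions and come from a trained model. Cosine similarity is a standard way to compare them.

```python
import math

# Hand-picked toy embeddings: "dog" and "puppy" point in similar directions.
embeddings = {
    "dog":   [0.90, 0.80, 0.10],
    "puppy": [0.85, 0.75, 0.20],
    "car":   [0.10, 0.20, 0.90],
}

def cosine_similarity(a, b):
    """Similarity of direction between two vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

dog_puppy = cosine_similarity(embeddings["dog"], embeddings["puppy"])
dog_car = cosine_similarity(embeddings["dog"], embeddings["car"])
# dog_puppy is much larger than dog_car: related words sit closer together
```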

Vector Database

A vector database stores embeddings and enables fast similarity search, often used for semantic search and retrieval-augmented generation.
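A brute-force sketch of what a vector database does: store embeddings and return the IDs most similar to a query vector. Production systems use approximate-nearest-neighbor indexes to make this fast over millions of vectors; the class name and 2-dimensional vectors here are illustrative.

```python
import math

class TinyVectorStore:
    def __init__(self):
        self.items = []                      # list of (doc_id, vector) pairs

    def add(self, doc_id, vector):
        self.items.append((doc_id, vector))

    def search(self, query, k=1):
        """Return the k document IDs whose vectors are most similar to the query."""
        def cos(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            return dot / (na * nb)
        ranked = sorted(self.items, key=lambda item: cos(query, item[1]), reverse=True)
        return [doc_id for doc_id, _ in ranked[:k]]

store = TinyVectorStore()
store.add("doc_cats", [0.9, 0.1])
store.add("doc_cars", [0.1, 0.9])
result = store.search([0.8, 0.2], k=1)       # query vector is closest to doc_cats
```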

Retrieval-Augmented Generation (RAG)

RAG is a technique where an AI model retrieves relevant external data before generating a response, helping reduce hallucinations and improve accuracy.
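The RAG flow can be sketched in two steps: retrieve the most relevant snippet, then include it in the prompt. Retrieval here is naive keyword overlap for illustration; real systems embed the question and search a vector database.

```python
documents = [
    "The Eiffel Tower is 330 metres tall.",
    "Python was created by Guido van Rossum.",
]

def retrieve(question, docs):
    """Pick the document sharing the most words with the question (naive)."""
    q_words = set(question.lower().split())
    return max(docs, key=lambda doc: len(q_words & set(doc.lower().split())))

question = "Who created Python?"
context = retrieve(question, documents)      # picks the Guido van Rossum document

# Step 2: ground the model's answer in the retrieved context.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

Because the model is told to answer from the retrieved context rather than from memory alone, it is less likely to hallucinate.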

Inference

Inference is the process of using a trained AI model to generate predictions or responses based on new input data.

Training

Training is the process of teaching an AI model patterns by adjusting its parameters based on large datasets.
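A minimal illustration of that parameter-adjusting loop: fit a single weight `w` so that `w * x` matches the targets, using gradient descent on squared error. Real training adjusts billions of parameters the same basic way.

```python
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # (input, target); true relationship is y = 2x
w = 0.0                                        # the model's single parameter
lr = 0.05                                      # learning rate

for _ in range(100):                           # training loop
    # Gradient of mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad                             # nudge w against the gradient

# w has converged very close to 2.0
```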

Foundation Model

A foundation model is a large, general-purpose model trained on broad data that can be adapted to many tasks through fine-tuning or prompting.

Multimodal Model

A multimodal model can process and understand multiple types of input, such as text, images, audio, or video.

Temperature

Temperature controls how random or deterministic an AI model's responses are, with higher values producing more varied outputs.
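Under the hood, temperature rescales the model's raw scores (logits) before they are turned into probabilities: dividing by a low temperature sharpens the distribution, dividing by a high one flattens it. A sketch with made-up logits:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert logits to probabilities, sharpened or flattened by temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                              # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]                         # illustrative raw scores
cold = softmax_with_temperature(logits, 0.5)     # sharper: top token dominates
hot = softmax_with_temperature(logits, 2.0)      # flatter: sampling is more varied
```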

Model Drift

Model drift happens when a model's performance degrades over time because real-world data changes from the data it was trained on.

MLOps

MLOps refers to the practices and tools used to deploy, monitor, and maintain machine learning models in production.

Feature Engineering

Feature engineering is the process of transforming raw data into meaningful inputs that improve machine learning model performance.
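A sketch of that transformation: turn a raw user record into numeric features a model can consume. The record fields and derived features are invented for illustration.

```python
from datetime import date

raw = {
    "signup_date": date(2024, 1, 15),
    "last_login": date(2024, 3, 1),
    "plan": "pro",
    "logins": 42,
}

def make_features(record):
    """Derive numeric features from a raw record (illustrative)."""
    days_active = (record["last_login"] - record["signup_date"]).days
    return {
        "days_active": days_active,
        "is_pro": 1 if record["plan"] == "pro" else 0,        # categorical -> flag
        "logins_per_day": record["logins"] / max(days_active, 1),
    }

features = make_features(raw)   # e.g. dates become a single "days active" number
```

Good features (a ratio, a flag, an elapsed time) often matter more to model quality than the choice of algorithm.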

Human-in-the-Loop

Human-in-the-loop systems involve people reviewing, correcting, or guiding AI outputs to improve quality, safety, or learning.