Glossary: Prompt Engineering & AI
Learn the language of AI and prompt engineering.
Key Terms
This glossary is your quick reference for understanding the language of prompt engineering and AI. Each term is explained in plain English, with links to deeper resources and practical examples. For a hands-on introduction, visit the Resources or Templates pages.
- Prompt
- An instruction or input given to an AI model to generate a response. Example: "Summarize this article in 3 bullet points."
- Prompt Engineering
- The process of designing, testing, and refining prompts to achieve optimal AI outputs. See best practices.
- LLM (Large Language Model)
- A type of AI model trained on vast text data to understand and generate human-like language. Learn more.
- Context
- Background information or details provided to the AI to improve response relevance and accuracy. Why context matters. Example: "The audience is non-technical managers."
- Output Format
- The structure or style in which the AI is instructed to deliver its response (e.g., JSON, bullet points). See template examples.
- Zero-shot / Few-shot
- Prompting techniques where the AI is given no examples (zero-shot) or a few examples (few-shot) to guide its output. Example: "Translate this sentence to French." (zero-shot)
- Chain-of-Thought
- A prompting method that encourages the AI to explain its reasoning step by step. Example: "Explain your answer step by step."
- Temperature
- A parameter that controls the randomness of AI-generated responses. Lower values = more focused, higher = more creative.
- Token
- A unit of text (word or part of a word) processed by the AI model.
- Prompt Tuning
- Adjusting the wording, structure, or parameters of a prompt to improve the quality and relevance of AI responses. Example: Changing "Summarize this" to "Summarize the following article for a business executive in 3 bullet points."
- Persona Prompting
- Instructing the AI to respond as a specific role or character (e.g., "Act as a marketing expert..."). See interactive demo.
- Hallucination
- When an AI generates information that is plausible-sounding but factually incorrect or made up. See FAQ.
- Fine-tuning
- Training an AI model further on a specific dataset to improve its performance for a particular use case.
- Prompt Injection
- A security risk where a user manipulates the prompt to make the AI behave in unintended ways. Read more.
- System Prompt
- A special instruction given to the AI to set its behavior or constraints for an entire session.
- Token Limit
- The maximum number of tokens (words or parts of words) an AI model can process in a single prompt or response.
Still have questions? Visit the FAQ or Contact page for more help.