Competency levels: Novice, Intermediate, Advanced
Novice: Awareness of key terms associated with generative AI, including machine learning, deep learning, natural language processing, generative AI, language models/LLMs, prompts, and tokens.
Intermediate: Awareness of key terms associated with generative AI, including learning paradigms (supervised, unsupervised, and reinforcement) and task families (generative vs. discriminative).
Advanced: Understanding of key terms associated with generative AI, including machine learning, deep learning, natural language processing, generative AI, language models/LLMs, prompts, and tokens.
Novice: Awareness of major model types, including foundation and domain-specific models, SLMs and LLMs, open-source and closed models, and simple and reasoning models.
Intermediate: Ability to explain interrelations among ML, DL, NLP, and generative AI, and to classify types of intelligence, such as narrow AI, general AI, and superintelligent AI.
Advanced: Ability to describe rule-of-thumb differences using real-world examples:
- Foundation vs. domain-specific models;
- SLM vs. LLM;
- Open vs. closed models;
- Simple vs. reasoning models.
Novice: Awareness of the main LLM attribute categories, including model parameters, context window size, knowledge cutoff, and cost and performance.
Intermediate: Understanding of model attributes and their implications, including parameter size, context windows, and knowledge cutoff.
Advanced: Ability to select the most suitable model type for a given use case based on capability, latency, privacy, and cost considerations. Ability to compare LLMs and SLMs for efficiency and suitability.
Novice: Awareness that generative AI tools can accept and produce different types of content (text, images, audio, video).
Intermediate: Understanding of real-life applications of multimodal AI tools.
Advanced: Ability to explain the architectural difference between text-only LLMs (text→text) and multimodal LLMs that can process multiple content types (images, audio, video).
Novice: Awareness of major AI vendors/labs (e.g., Anthropic, OpenAI, Mistral, Meta, Google DeepMind, and Cohere).
Intermediate: Awareness of the basic roles of vendors and cloud service providers (CSPs).
Advanced: Understanding of the wider developer and research ecosystem, including Hugging Face, GitHub, arXiv, benchmark suites (MMLU, HumanEval), and evaluation frameworks. Ability to compare various LLMs, including the most recent versions, and select the most suitable for the task at hand.
Novice: Awareness of key GenAI/LLM limitations, including reliability and factuality issues, hallucinations, cognitive/reasoning limits, the need for human oversight, context/knowledge cutoff, stochastic behavior, cost, and speed of inference.
Intermediate: Understanding of the root causes of key GenAI limitations, such as probabilistic decoding, context truncation, and safety-bias trade-offs.
Advanced: Ability to explain and mitigate LLM limitations using grounded generation (RAG), prompt engineering, and human-in-the-loop review.
Novice: Awareness of data protection and security principles, including what must not be shared with LLM tools. Awareness of the steps needed to obtain permission to use an LLM.
Intermediate: Awareness of common OWASP Top 10 for LLM Applications vulnerabilities in prompts/outputs, such as prompt injection, jailbreak attacks, sensitive information disclosure, overreliance, insecure output handling, logging/sharing exposure, insecure plugin/tool use and excessive permissions, data poisoning via RAG/context, and multimodal payloads.
Advanced: In-depth understanding of common OWASP Top 10 for LLM Applications vulnerabilities in prompts/outputs.
Novice: Awareness that LLMs are based on the transformer architecture and employ attention mechanisms to process text.
Intermediate: Understanding of the transformer process at a conceptual level, including tokenization, embeddings, attention over the context window, and next-token decoding.
Advanced: Ability to explain common failure modes, such as lost context, distraction by noisy text, and exposure/verbosity biases, and how users can mitigate them via prompt structure and context management.
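The attention mechanism named above can be illustrated with a toy example. The sketch below is a simplified, single-query version of scaled dot-product attention (no learned weight matrices, no multiple heads); the vectors are made up for illustration.

```python
import math

def softmax(xs):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for a single query vector.

    The output is a weighted average of the value vectors, where the
    weights come from query-key similarity. This is the core idea;
    real transformers apply it per head with learned projections.
    """
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    out = [sum(w * v[i] for w, v in zip(weights, values))
           for i in range(len(values[0]))]
    return out, weights

# Three "tokens" with 2-d keys and values; the query matches the
# first key most closely, so the first value dominates the output.
out, weights = attention(
    query=[1.0, 0.0],
    keys=[[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]],
    values=[[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]],
)
```

Because softmax never assigns exactly zero weight, every token in the window contributes a little to the output, which is one way to see why noisy context can distract the model.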
Novice: Awareness that LLMs split text into tokens, and that their count affects context limits and usage cost.
Intermediate: Understanding of the tokenization process and token types.
Advanced: Ability to explain how to estimate and monitor token counts for inputs and outputs.
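Token estimation can be sketched with the common rule of thumb of roughly four characters per token for English text. This is a budgeting heuristic only; exact counts depend on the specific model's tokenizer, and the function names here are illustrative.

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token-count estimate using the ~4 chars/token rule of
    thumb for English. Real counts come from the model's tokenizer,
    so treat this as an approximation for budgeting, not billing.
    """
    return max(1, round(len(text) / chars_per_token))

def fits_in_context(prompt: str, max_reply_tokens: int,
                    context_window: int) -> bool:
    # The input tokens plus the reply budget must fit in the window.
    return estimate_tokens(prompt) + max_reply_tokens <= context_window

prompt = "Summarize the quarterly report in three bullet points."
```

Checking the estimate before a call helps avoid silent context truncation, one of the failure modes listed in the limitations row above.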
Novice: Awareness of the difference between end-user tools (e.g., ChatGPT, GitHub Copilot) and the underlying LLMs.
Intermediate: Awareness that many LLMs offer APIs for programmatic access.
Advanced: Understanding of when to prefer end-user tools vs. APIs for a task.
Novice: Awareness of when the LLM training process occurs.
Intermediate: Understanding of conversational context and how messages are passed as user/assistant sequences to a stateless system.
Advanced: Understanding of which end-user tools can expand their context based on past conversations.
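The stateless user/assistant message passing described above can be sketched as follows. The list-of-dicts shape mirrors common chat-completion APIs, but the function name and structure here are illustrative, not a specific vendor's API.

```python
def build_request(history, new_user_message):
    """The model itself is stateless: every request must carry the
    whole conversation so far as an ordered list of role-tagged
    messages. 'Memory' is the client resending this list each turn.
    """
    return history + [{"role": "user", "content": new_user_message}]

# Turn 1 already happened; the client kept the transcript.
history = [
    {"role": "user", "content": "What is a token?"},
    {"role": "assistant",
     "content": "A small unit of text the model processes."},
]

# Turn 2: the prior turns are resent verbatim with the new question,
# which is also why long conversations grow in token cost.
request_messages = build_request(history, "How does that affect cost?")
```

End-user tools that "remember" past conversations do this bookkeeping (or retrieval over past chats) on the client side before each request.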
Novice: Basic awareness of what an AI agent is. Basic awareness of what RAG and MCP are.
Intermediate: Understanding of RAG, MCP, and multi-agent collaboration, including how data retrieval and tool orchestration enhance LLM capabilities.
Advanced: In-depth understanding of the system architecture of RAG and MCP.
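The retrieve-then-generate flow of RAG can be sketched in a few lines. This is a deliberately naive version: real systems use embedding similarity or hybrid search rather than the word-overlap scoring used here, and the corpus and function names are made up for illustration.

```python
def score(query: str, doc: str) -> int:
    # Naive relevance: count of query words that appear in the document.
    # Production RAG would use embedding similarity instead.
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    # Rank documents by relevance score and keep the top k.
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

def build_grounded_prompt(query: str, corpus: list[str]) -> str:
    # Prepend the retrieved passages so the model answers from them
    # rather than from its (possibly stale) training data.
    context = "\n".join(retrieve(query, corpus))
    return (f"Answer using only this context:\n{context}"
            f"\n\nQuestion: {query}")

corpus = [
    "The context window limits how many tokens a model can attend to.",
    "Reinforcement learning trains agents through rewards.",
]
prompt = build_grounded_prompt("How large is the context window?", corpus)
```

The same pattern generalizes: MCP standardizes how an LLM client discovers and calls external tools and data sources, so the "retrieve" step can be any tool invocation rather than a local search.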