Skip to main content

Understanding Generative AI and Prompt Injection Fundamentals

This resource provides foundational information on GenAI, firm-size considerations related to prompt injection, and definitions of key terms.

Foundational Concepts

What is Generative AI (GenAI)?

GenAI refers to artificial intelligence systems that create new content—such as text, computer code, images or audio—based on user-provided instructions. Unlike traditional software that follows rigid programming rules, GenAI systems learn patterns from vast amounts of data and generate original responses.

Common examples of GenAI include:

  • chatbots that answer questions with natural language;
  • AI writing assistants that draft emails or documents; and
  • tools that summarize lengthy documents or conduct research.

Common examples of how member firms use GenAI include:

  • using AI chat and voice tools to respond to questions from employees;
  • code generation tools to help developers write software; and
  • creating reports and summaries for compliance or management.

What is a large language model?

Large language models are a type of GenAI that use deep learning techniques and large datasets of language to identify, summarize, predict and generate new text-based content. They process instructions (called "prompts") and generate responses based on patterns learned during training.

What is deep learning?

Deep learning is a type of artificial intelligence that teaches computers to recognize patterns and make decisions by processing information through layers of simulated "neural networks." These layers allow the system to learn increasingly complex patterns, similar to how humans learn from experience.

What is a neural network?

A neural network is a computer system designed to process information in a way loosely inspired by how the human brain works by using interconnected nodes (called "neurons") that work together to recognize patterns and make decisions.

What is “prompt"?

A prompt is the instruction, question or input that a user provides to a GenAI system. The GenAI system interprets the prompt and generates a response.

Examples of prompts:

  • "Summarize this customer complaint.”
  • "What were our top compliance issues last quarter?"
  • "Change the tone of this draft to be more neutral.”

What is an AI agent?

AI agents (sometimes referred to as "agentic AI") are systems or programs capable of autonomously performing and completing tasks on behalf of a user. An AI agent can interact within an environment, plan, make decisions and take action to achieve specific goals without predefined rules or logic programming.

Unlike simple chatbots, AI agents can:

  • access multiple systems and databases;
  • execute actions without human supervision; and
  • use tools and external resources to complete objectives.

An AI agent might read an email, extract relevant data, query internal databases and generate a comprehensive response without human intervention.

Understanding the Prompt Injection Threat

What is a prompt injection?

A prompt injection is a cyberattack technique where malicious instructions are provided to and processed by a GenAI system. The GenAI system cannot reliably distinguish between legitimate operational instructions and these hidden malicious commands.

Why this matters to a firm’s cybersecurity program

Traditional cybersecurity focuses on preventing unauthorized access to systems. Prompt injection attacks do not require a computer intrusion or some other unauthorized access. Rather, a prompt injection manipulates a GenAI system that already has legitimate access to your firm’s data and systems, exploiting the GenAI system to misuse this authorized access. Prompt injection threats and the technical complexity of GenAI systems can make thorough security assessment challenging.

Why this threat matters for all firm sizes

Prompt injection threats pose serious risks regardless of your firm's size, though the specific vulnerabilities and impacts may differ.

  • Small firms typically deploy vendor-provided GenAI solutions where the third-party provider manages security controls. With reliance upon third parties to identify the risks across all of their deployed systems, the third-party security protocols may differ from those of the firm’s internal systems. There is also an inherent risk if the external GenAI system can access the firm’s other internal systems.
  • Midsize firms typically implement GenAI across multiple business units, integrating these systems with sensitive data and operational systems. A prompt injection attack could expose customer data, compromise compliance monitoring or reveal proprietary strategies.
  • Large firms typically deploy GenAI at enterprise scale with integrations across customer databases, trading systems and proprietary models. With the breadth of these implementations prompt injection vulnerabilities could enable unauthorized access to substantial volumes of sensitive data.

Key Terms

This quick reference defines commonly used technical terms.

  • API (application programming interface): A set of rules and protocols that allow different software applications to communicate and share data with each other.
  • Data exfiltration: The unauthorized transfer or copying of data from a computer system or network, typically to a location controlled by an attacker.
  • Embedding/embedded instructions: Hiding malicious commands within legitimate content (e.g., documents, images, emails, websites) in ways that are invisible or inconspicuous to human users but may be read and executed by GenAI systems.
  • Indirect prompt injection: An attack where malicious instructions are hidden in external content (e.g., websites, documents, emails) that a GenAI system processes, rather than directly entered by the bad actor.
  • Obfuscation: Techniques used to hide or disguise malicious content to avoid detection, such as unusual character substitutions, different languages, encoding or invisible text.
  • System prompt: The underlying instructions that define a GenAI system's role, capabilities, limitations and behavioral guidelines—to govern all interactions and are typically not viewable by users.
  • Vector (attack vector): A path, method or technique that a bad actor uses to gain unauthorized access to a system or to compromise the system’s security.