Hooks reference

This document provides the technical specification for Gemini CLI hooks, including JSON schemas and API details.

  • Communication: stdin for Input (JSON), stdout for Output (JSON), and stderr for logs and feedback.
  • Exit codes:
    • 0: Success. stdout is parsed as JSON. Preferred for all logic.
    • 2: System Block. The action is blocked; stderr is used as the rejection reason.
    • Other: Warning. A non-fatal failure occurred; the CLI continues with a warning.
  • Silence is Mandatory: Your script must not print any plain text to stdout other than the final JSON.
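
A minimal sketch of this contract, written here as a Python script (any executable works as a hook command); the field names follow the schemas later in this document:

#!/usr/bin/env python3
# Minimal hook skeleton: read the input JSON, log to stderr, emit only JSON on stdout.
import json
import sys

payload = json.load(sys.stdin)  # input JSON from the CLI
print(f"hook fired: {payload['hook_event_name']}", file=sys.stderr)  # diagnostics go to stderr

# Exit 0 with JSON on stdout: success, and the JSON is parsed by the CLI.
json.dump({"systemMessage": "hook ran"}, sys.stdout)
sys.exit(0)

# To block the action instead, print the reason to stderr and call sys.exit(2).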

Hooks are defined in settings.json within the hooks object. Each event (e.g., BeforeTool) contains an array of hook definitions.

Each hook definition in that array supports the following fields:

| Field      | Type    | Required | Description |
| ---------- | ------- | -------- | ----------- |
| matcher    | string  | No       | A regex (for tools) or exact string (for lifecycle) to filter when the hook runs. |
| sequential | boolean | No       | If true, hooks in this group run one after another. If false, they run in parallel. |
| hooks      | array   | Yes      | An array of hook configurations. |

Each entry in the nested hooks array supports these fields:

| Field       | Type   | Required | Description |
| ----------- | ------ | -------- | ----------- |
| type        | string | Yes      | The execution engine. Currently only "command" is supported. |
| command     | string | Yes*     | The shell command to execute. (*Required when type is "command".) |
| name        | string | No       | A friendly name for identifying the hook in logs and CLI commands. |
| timeout     | number | No       | Execution timeout in milliseconds (default: 60000). |
| description | string | No       | A brief explanation of the hook’s purpose. |
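
Putting these together, a hypothetical settings.json fragment (the hook name and script path are placeholders):

{
  "hooks": {
    "BeforeTool": [
      {
        "matcher": "run_shell_command",
        "sequential": true,
        "hooks": [
          {
            "type": "command",
            "command": "python3 .gemini/hooks/shell_guard.py",
            "name": "shell-guard",
            "timeout": 10000,
            "description": "Blocks dangerous shell commands before they run."
          }
        ]
      }
    ]
  }
}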

All hooks receive these common fields via stdin:

{
  "session_id": string,        // Unique ID for the current session
  "transcript_path": string,   // Absolute path to session transcript JSON
  "cwd": string,               // Current working directory
  "hook_event_name": string,   // The firing event (e.g. "BeforeTool")
  "timestamp": string          // ISO 8601 execution time
}

Most hooks support these fields in their stdout JSON:

| Field          | Type    | Description |
| -------------- | ------- | ----------- |
| systemMessage  | string  | Displayed immediately to the user in the terminal. |
| suppressOutput | boolean | If true, hides internal hook metadata from logs/telemetry. |
| continue       | boolean | If false, stops the entire agent loop immediately. |
| stopReason     | string  | Displayed to the user when continue is false. |
| decision       | string  | "allow" or "deny" (alias "block"). Specific impact depends on the event. |
| reason         | string  | The feedback/error message provided when a decision is "deny". |
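
For example, a hook that halts the current action and explains why might print:

{
  "decision": "deny",
  "reason": "Writes outside the project directory are not allowed.",
  "systemMessage": "Hook blocked a file write."
}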

For BeforeTool and AfterTool events, the matcher field in your settings is compared against the name of the tool being executed.

  • Built-in Tools: You can match any built-in tool (e.g., read_file, run_shell_command). See the Tools Reference for a full list of available tool names.
  • MCP Tools: Tools from MCP servers follow the naming pattern mcp__<server_name>__<tool_name>.
  • Regex Support: Matchers support regular expressions (e.g., matcher: "read_.*" matches all file reading tools).
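
For instance, to target every tool exposed by a hypothetical MCP server named github, combine the naming pattern with a regex:

"matcher": "mcp__github__.*"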

BeforeTool fires before a tool is invoked. It is used for argument validation, security checks, and parameter rewriting; a minimal sketch follows the field list.

  • Input Fields:
    • tool_name: (string) The name of the tool being called.
    • tool_input: (object) The raw arguments generated by the model.
    • mcp_context: (object) Optional metadata for MCP-based tools.
  • Relevant Output Fields:
    • decision: Set to "deny" (or "block") to prevent the tool from executing.
    • reason: Required if denied. This text is sent to the agent as a tool error, allowing it to respond or retry.
    • hookSpecificOutput.tool_input: An object that merges with and overrides the model’s arguments before execution.
    • continue: Set to false to kill the entire agent loop immediately.
  • Exit Code 2 (Block Tool): Prevents execution. Uses stderr as the reason sent to the agent. The turn continues.
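
A minimal BeforeTool sketch that denies obviously destructive shell commands (it assumes the shell tool’s argument key is command; adjust for the tools you match):

#!/usr/bin/env python3
# BeforeTool sketch: deny destructive shell commands, allow everything else.
import json
import sys

payload = json.load(sys.stdin)
command = payload.get("tool_input", {}).get("command", "")  # assumed argument key

if "rm -rf" in command:
    # "deny" prevents execution; "reason" is sent to the agent as a tool error.
    json.dump({"decision": "deny", "reason": "Destructive command blocked by policy."}, sys.stdout)
else:
    json.dump({}, sys.stdout)  # empty object: let the call proceed unchanged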

AfterTool fires after a tool executes. It is used for result auditing, context injection, or hiding sensitive output from the agent; see the example after the field list.

  • Input Fields:
    • tool_name: (string)
    • tool_input: (object) The original arguments.
    • tool_response: (object) The result containing llmContent, returnDisplay, and optional error.
    • mcp_context: (object)
  • Relevant Output Fields:
    • decision: Set to "deny" to hide the real tool output from the agent.
    • reason: Required if denied. This text replaces the tool result sent back to the model.
    • hookSpecificOutput.additionalContext: Text that is appended to the tool result for the agent.
    • continue: Set to false to kill the entire agent loop immediately.
  • Exit Code 2 (Block Result): Hides the tool result. Uses stderr as the replacement content sent to the agent. The turn continues.
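
As a sketch, an AfterTool hook that appends a hint for the agent whenever the tool reports an error (field names follow the input schema above):

#!/usr/bin/env python3
# AfterTool sketch: inject extra context when a tool call failed.
import json
import sys

payload = json.load(sys.stdin)
output = {}
if payload.get("tool_response", {}).get("error"):
    output["hookSpecificOutput"] = {
        "additionalContext": "The previous tool call failed; check the arguments before retrying."
    }
json.dump(output, sys.stdout)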

Fires after a user submits a prompt, but before the agent begins planning. Used for prompt validation or injecting dynamic context.

  • Input Fields:
    • prompt: (string) The original text submitted by the user.
  • Relevant Output Fields:
    • hookSpecificOutput.additionalContext: Text that is appended to the prompt for this turn only.
    • decision: Set to "deny" to block the turn and discard the user’s message (it will not appear in history).
    • continue: Set to false to block the turn but save the message to history.
    • reason: Required if denied or stopped.
  • Exit Code 2 (Block Turn): Aborts the turn and erases the prompt from context. Same as decision: "deny".
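
A context-injection sketch for this event, assuming git is available in the working directory:

#!/usr/bin/env python3
# Prompt hook sketch: append the current git branch to the prompt for this turn.
import json
import subprocess
import sys

payload = json.load(sys.stdin)
branch = subprocess.run(
    ["git", "rev-parse", "--abbrev-ref", "HEAD"],
    capture_output=True, text=True, cwd=payload["cwd"],
).stdout.strip()

json.dump({
    "hookSpecificOutput": {"additionalContext": f"Current git branch: {branch or 'unknown'}"}
}, sys.stdout)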

Fires once per turn after the model generates its final response. Its primary use case is response validation and automatic retries, sketched after the list below.

  • Input Fields:
    • prompt: (string) The user’s original request.
    • prompt_response: (string) The final text generated by the agent.
    • stop_hook_active: (boolean) Indicates if this hook is already running as part of a retry sequence.
  • Relevant Output Fields:
    • decision: Set to "deny" to reject the response and force a retry.
    • reason: Required if denied. This text is sent to the agent as a new prompt to request a correction.
    • continue: Set to false to stop the session without retrying.
    • hookSpecificOutput.clearContext: If true, clears conversation history (LLM memory) while preserving UI display.
  • Exit Code 2 (Retry): Rejects the response and triggers an automatic retry turn using stderr as the feedback prompt.
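
A response-validation sketch; note the stop_hook_active guard, which prevents an endless deny/retry loop:

#!/usr/bin/env python3
# Final-response sketch: force one retry if the answer still contains TODO markers.
import json
import sys

payload = json.load(sys.stdin)
response = payload.get("prompt_response", "")

if "TODO" in response and not payload.get("stop_hook_active"):
    # "reason" becomes the follow-up prompt asking the agent to correct itself.
    json.dump({"decision": "deny", "reason": "The response still contains TODO markers; resolve them."}, sys.stdout)
else:
    json.dump({}, sys.stdout)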

BeforeModel fires before sending a request to the LLM. It operates on a stable, SDK-agnostic request format (see LLMRequest below).

  • Input Fields:
    • llm_request: (object) Contains model, messages, and config (generation params).
  • Relevant Output Fields:
    • hookSpecificOutput.llm_request: An object that overrides parts of the outgoing request (e.g., changing models or temperature).
    • hookSpecificOutput.llm_response: A synthetic response object (see LLMResponse below). If provided, the CLI skips the LLM call entirely and uses this as the response.
    • decision: Set to "deny" to block the request and abort the turn.
  • Exit Code 2 (Block Turn): Aborts the turn and skips the LLM call. Uses stderr as the error message.
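
For example, assuming partial objects merge into the outgoing request as described above, a hook that pins the sampling temperature would print:

{
  "hookSpecificOutput": {
    "llm_request": {
      "config": { "temperature": 0.2 }
    }
  }
}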

Fires before the LLM decides which tools to call. Used to filter the available toolset or force specific tool modes.

  • Input Fields:
    • llm_request: (object) Same format as BeforeModel.
  • Relevant Output Fields:
    • hookSpecificOutput.toolConfig.mode: ("AUTO" | "ANY" | "NONE")
      • "NONE": Disables all tools (Wins over other hooks).
      • "ANY": Forces at least one tool call.
    • hookSpecificOutput.toolConfig.allowedFunctionNames: (string[]) Whitelist of tool names.
  • Union Strategy: Multiple hooks’ whitelists are combined.
  • Limitations: Does not support decision, continue, or systemMessage.
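
A sketch of the stdout JSON for this event, forcing a tool call while restricting the model to read-only tools (the tool names are illustrative):

{
  "hookSpecificOutput": {
    "toolConfig": {
      "mode": "ANY",
      "allowedFunctionNames": ["read_file", "glob"]
    }
  }
}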

Fires immediately after an LLM response chunk is received. Used for real-time redaction or PII filtering; a redaction sketch follows the list.

  • Input Fields:
    • llm_request: (object) The original request.
    • llm_response: (object) The model’s response (or a single chunk during streaming).
  • Relevant Output Fields:
    • hookSpecificOutput.llm_response: An object that replaces the model’s response chunk.
    • decision: Set to "deny" to discard the response chunk and block the turn.
    • continue: Set to false to kill the entire agent loop immediately.
  • Note on Streaming: Fired for every chunk generated by the model. Modifying the response only affects the current chunk.
  • Exit Code 2 (Block Response): Aborts the turn and discards the model’s output. Uses stderr as the error message.
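
A redaction sketch for this event, rewriting each chunk’s text parts in place (the response shape follows the LLMResponse schema at the end of this document; the SSN pattern is only an example):

#!/usr/bin/env python3
# Chunk-redaction sketch: mask SSN-like strings in the model's output.
import json
import re
import sys

payload = json.load(sys.stdin)
response = payload.get("llm_response", {})
changed = False

for candidate in response.get("candidates", []):
    parts = candidate.get("content", {}).get("parts", [])
    for i, text in enumerate(parts):
        redacted = re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[REDACTED]", text)
        if redacted != text:
            parts[i] = redacted
            changed = True

# Only replace the chunk when something was actually redacted.
json.dump({"hookSpecificOutput": {"llm_response": response}} if changed else {}, sys.stdout)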

Fires on application startup, when resuming a session, or after a /clear command. Used for loading initial context.

  • Input fields:
    • source: ("startup" | "resume" | "clear")
  • Relevant output fields:
    • hookSpecificOutput.additionalContext: (string)
      • Interactive: Injected as the first turn in history.
      • Non-interactive: Prepended to the user’s prompt.
    • systemMessage: Shown at the start of the session.
  • Advisory only: continue and decision fields are ignored. Startup is never blocked.
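
A sketch that loads a project briefing file on startup or resume (the file name is a placeholder):

#!/usr/bin/env python3
# Session-start sketch: inject a project briefing as initial context.
import json
import pathlib
import sys

payload = json.load(sys.stdin)
briefing = pathlib.Path(payload["cwd"]) / "PROJECT_NOTES.md"  # hypothetical context file

output = {}
if payload.get("source") in ("startup", "resume") and briefing.exists():
    output["hookSpecificOutput"] = {"additionalContext": briefing.read_text()}
json.dump(output, sys.stdout)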

Fires when the CLI exits or a session is cleared. Used for cleanup or final telemetry.

  • Input Fields:
    • reason: ("exit" | "clear" | "logout" | "prompt_input_exit" | "other")
  • Relevant Output Fields:
    • systemMessage: Displayed to the user during shutdown.
  • Best Effort: The CLI will not wait for this hook to complete and ignores all flow-control fields (continue, decision).
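
A fire-and-forget cleanup sketch (the log path is a placeholder; remember that the CLI will not wait for it):

#!/usr/bin/env python3
# Session-end sketch: append a one-line record per session to a local log.
import datetime
import json
import sys

payload = json.load(sys.stdin)
with open("/tmp/gemini-sessions.log", "a") as log:
    log.write(f"{datetime.datetime.now().isoformat()} "
              f"session={payload['session_id']} reason={payload.get('reason')}\n")
json.dump({}, sys.stdout)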

Fires when the CLI emits a system alert (e.g., Tool Permissions). Used for external logging or cross-platform alerts.

  • Input Fields:
    • notification_type: ("ToolPermission")
    • message: Summary of the alert.
    • details: JSON object with alert-specific metadata (e.g., tool name, file path).
  • Relevant Output Fields:
    • systemMessage: Displayed alongside the system alert.
  • Observability Only: This hook cannot block alerts or grant permissions automatically. Flow-control fields are ignored.

Fires before the CLI summarizes history to save tokens. Used for logging or state saving.

  • Input Fields:
    • trigger: ("auto" | "manual")
  • Relevant Output Fields:
    • systemMessage: Displayed to the user before compression.
  • Advisory Only: Fired asynchronously. It cannot block or modify the compression process. Flow-control fields are ignored.

Gemini CLI uses these structures to ensure hooks don’t break across SDK updates.

LLMRequest:

{
  "model": string,
  "messages": Array<{
    "role": "user" | "model" | "system",
    "content": string // Non-text parts are filtered out for hooks
  }>,
  "config": { "temperature": number, ... },
  "toolConfig": { "mode": string, "allowedFunctionNames": string[] }
}

LLMResponse:

{
  "candidates": Array<{
    "content": { "role": "model", "parts": string[] },
    "finishReason": string
  }>,
  "usageMetadata": { "totalTokenCount": number }
}
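
Tying the two schemas together: a BeforeModel hook that short-circuits the LLM call (via hookSpecificOutput.llm_response, described above) could emit a synthetic response shaped like this sketch (the finishReason value is illustrative):

{
  "hookSpecificOutput": {
    "llm_response": {
      "candidates": [
        {
          "content": { "role": "model", "parts": ["Answer served from cache; no model call was made."] },
          "finishReason": "STOP"
        }
      ],
      "usageMetadata": { "totalTokenCount": 0 }
    }
  }
}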