In this document, we analyze the system prompt of Google's Gemini model.

More specifically, we are interested in how this prompt accounts for typical LLM biases and behaviors (e.g. those arising from their training data or from how they generate text) and corrects them to set up a high-functioning, versatile system.

This system prompt sheds light on the weaknesses of LLMs and on the mitigation techniques used to address them. The callout blocks analyze the role of each section of the prompt.

The document uses the following format:

Paragraph in the system prompt

<aside> 💡

Analysis of the paragraph

</aside>

Gemini 3.0 System Prompt:

You are a very strong reasoner and planner. Use these critical instructions to structure your plans, thoughts, and responses.

Before taking any action (either tool calls or responses to the user), you must proactively, methodically, and independently plan and reason about:

<aside> 💡

Persona Priming & The "System 2" Trigger

The Role: Establishes the "Agentic Persona," shifting the model from conversational filler to structured logic.

The Mechanism: Forces the model to generate internal "reasoning tokens" before answer tokens. This interrupts the LLM's natural tendency to predict the next word linearly, effectively simulating a "pause for thought."

</aside>
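The "reason before act" pattern described above can be sketched in code. This is a hypothetical illustration (the `Step` class and `validate_step` function are not from the prompt): an agent step is only accepted if it carries non-empty reasoning, mirroring how the prompt forbids any tool call or response before planning.

```python
# Hypothetical sketch of enforcing "reason before act":
# an agent step is only accepted if it carries a non-empty plan.
from dataclasses import dataclass


@dataclass
class Step:
    reasoning: str  # internal "reasoning tokens", generated first
    action: str     # the tool call or user-facing response


def validate_step(step: Step) -> Step:
    """Reject any action emitted without prior reasoning."""
    if not step.reasoning.strip():
        raise ValueError("action rejected: plan/reasoning must come first")
    return step


# A step that leads with a plan passes validation;
# an action-only step would raise an error instead.
ok = validate_step(Step(reasoning="1) check policy 2) call search tool",
                        action="search('weather Paris')"))
```

In a real agent loop, this gate would sit between the model's output and the tool executor, simulating the "pause for thought" the prompt demands.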


1) Logical dependencies and constraints

Analyze the intended action against the following factors. Resolve conflicts in order of importance:

1.1) Policy-based rules, mandatory prerequisites, and constraints.

1.2) Order of operations: Ensure taking an action does not prevent a subsequent necessary action.

<aside> 💡

Hierarchical Planning & Constraint Satisfaction

The Role: Enforces a strict "Order of Operations" to prevent the model from attempting to solve everything simultaneously.

The Mechanism: Creates a "Constitution" by explicitly prioritizing rules (1.1) over user requests. Sub-point 1.2 grants the agent permission to ignore the requested order in favor of a logical execution order.

</aside>
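The priority order in this section can be illustrated with a short sketch. This is a hypothetical implementation (the `check_action` function and its parameters are illustrative, not from the prompt): policy rules (1.1) are evaluated before order-of-operations constraints (1.2), so a policy violation blocks an action even when its prerequisites are satisfied.

```python
# Hypothetical sketch of the conflict-resolution order in section 1:
# policy rules (1.1) are checked before order-of-operations (1.2).
def check_action(action, policy_rules, prerequisites, done):
    # 1.1) Policy-based rules and mandatory constraints win first.
    for rule in policy_rules:
        if not rule(action):
            return "blocked by policy"
    # 1.2) Order of operations: every prerequisite must already be done,
    # so taking this action cannot prevent a subsequent necessary one.
    missing = [p for p in prerequisites.get(action, []) if p not in done]
    if missing:
        return f"deferred: run {missing} first"
    return "allowed"


# Example: 'delete_account' must not run before 'export_data',
# otherwise the export becomes impossible.
prereqs = {"delete_account": ["export_data"]}
print(check_action("delete_account", [], prereqs, done=set()))
# deferred: run ['export_data'] first
```

The key design choice is the early return on policy checks: the agent never reaches the ordering logic for an action that is forbidden outright, which is exactly the hierarchy the prompt imposes.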