Snowflake Cortex Agents Go GA: Warehouses Become Runtimes
On November 4, 2025, Snowflake made Cortex Agents generally available, shifting the data warehouse from answers to actions. Here is what that unlocks, why it matters, and how to ship real use cases in weeks.

Breaking: the agent moves into the warehouse
On November 4, 2025, Snowflake announced that Cortex Agents are generally available. In plain terms, the data warehouse is no longer just where answers live. It is now where work gets done. Agents can plan, call governed tools, and act directly where your enterprise data already sits. See the official confirmation in the Cortex Agents general availability release notes.
If the last decade was about getting better business intelligence answers, the next one looks like closed loop automation. Instead of dashboards that tell you what happened, you get agents that notice what is happening and then fix it. The change is not philosophical. It is architectural. The compute, the permissions, the audit trails, and the tools that an agent needs are now native to the warehouse.
What changed under the hood
Snowflake’s pitch is simple. Cortex Agents can:
- Plan: break a request into steps and decide which tools to use.
- Use governed tools: query structured data with Cortex Analyst and search unstructured data with Cortex Search, all under role based access control.
- Act: write back, call functions, or trigger downstream workflows while keeping the data perimeter intact.
Think of an airport with secure airside gates. In the old world, your data kept leaving the airside to visit external services. Every trip meant new security checks and risk. With Cortex Agents, the work comes airside. Your policies, lineage, and data do not have to travel to be useful.
Two details matter for builders:
- Tooling lives behind governance. Because tools like Cortex Analyst and Cortex Search execute inside Snowflake’s security model, least privilege can be enforced without custom gateways. If a user cannot see a row, the agent cannot see it either.
- Standard interfaces reduce glue code. Snowflake now exposes a managed Model Context Protocol server so external agent clients can discover and invoke warehouse tools without inventing a one off integration pattern. That lowers the cost of connecting planning runtimes while keeping data access centralized. For specifics, see the Snowflake managed MCP server overview.
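To make the interface concrete, here is a minimal sketch of the JSON-RPC shape the Model Context Protocol uses for tool discovery and invocation. The tool name and its arguments below are hypothetical placeholders, not Snowflake's actual tool registry; consult the managed MCP server docs for real endpoints and names.

```python
import json

def mcp_request(method: str, params: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 request body, the wire format MCP uses."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": params,
    })

# Discover the tools the server exposes.
list_tools = mcp_request("tools/list", {})

# Invoke a governed warehouse tool by name (tool name is hypothetical).
call_tool = mcp_request("tools/call", {
    "name": "cortex_analyst_query",
    "arguments": {"question": "Top 5 accounts by spend this month"},
})
```

Because the protocol is standardized, an external planning runtime only needs one client implementation to reach every tool the server governs.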
From BI answers to closed loop automation
Business intelligence gives you answers. Agents give you actions. The practical shift is that the warehouse now hosts decision logic that can observe, decide, and execute in a single governed loop.
- Observe: agents continuously query live tables, logs, and events.
- Decide: plans combine structured analysis and unstructured retrieval to weigh options.
- Execute: approved actions write back, post to queues, or call first party tools, all tracked for audit.
This loop shortens time to value. A forecast that would have triggered a ticket can now trigger the fix. A compliance exception that would have generated a quarterly report can now generate a remediation and a signed attestation.
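The observe, decide, execute loop above can be sketched in a few lines. Everything here is illustrative: the finding, the severity threshold, and the action name are invented, and a real agent would call governed tools rather than lambdas. The point is the shape: every decision, including the decision to do nothing, lands in an audit trail.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Finding:
    subject: str
    severity: float
    evidence: str

def run_loop(observe: Callable[[], list],
             decide: Callable[[Finding], Optional[tuple]],
             execute: Callable[[tuple], None],
             audit: list) -> None:
    """One pass of the governed loop: observe, decide, execute, record."""
    for finding in observe():
        action = decide(finding)
        if action is not None:
            execute(action)
        audit.append((finding, action))  # every decision is recorded

audit_log = []
run_loop(
    observe=lambda: [Finding("wh_dev_3", 0.92, "cost 4x weekly baseline")],
    decide=lambda f: ("scale_down", f.subject) if f.severity > 0.8 else None,
    execute=lambda a: None,  # stand-in for a governed write-back function
    audit=audit_log,
)
```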
The platform race begins
Three camps are forming.
- Warehouse native agents: Snowflake is betting that keeping data and tools in one governed boundary will win on trust, latency, and cost predictability.
- Lakehouse builders: Databricks focuses on building and evaluating agentic retrieval applications on the lakehouse. Their emphasis on evaluation and guardrails caters to teams iterating on complex retrieval and model selection.
- App layer suites: Microsoft Copilot Studio and peers compete at the application boundary with triggers, orchestration, and catalogs that reach into many systems. Their strength is breadth of actions and user facing packaging.
Expect these lines to blur. App layer tools will adopt better data governance stories. Data platforms will expand their agent stores and workflow surfaces. If you are coordinating multiple vendors, you will want a control plane for agents. See how others approach this in our take on GitHub Agent HQ mission control. Identity will also mature so agents can act like first class users. If that topic is on your roadmap, start with Agent ID makes agents identities. CRM centric organizations will layer agents into customer workflows, as we explored in Agentforce 360 turns CRM.
Build now playbooks
Here are three patterns you can ship this quarter. Each assumes Cortex Agents GA in Snowflake, plus common tools you likely already run.
1) RevOps enrichment to action
Goal: shorten lead response times and improve conversion by automatically enriching and routing high intent leads.
Ingredients
- Data sources: product telemetry, marketing automation events, account hierarchies, entitlement and contract tables.
- Tools: Cortex Analyst for joins and aggregations, Cortex Search for unstructured notes or call summaries, a governed function to write to the lead router or opportunity system.
- Policies: row level access so the agent only sees accounts it is allowed to act on, masking for sensitive fields.
Plan
- Detect: agent watches a streaming table of sign ups and trials.
- Enrich: use Cortex Analyst to compute product qualified lead scores that combine feature usage, recency, and propensity models.
- Explain: use Cortex Search to pull relevant call notes and support threads for context.
- Act: if the score crosses a threshold, write to the routing table, update the opportunity stage, and post a summary to the owner channel. If permissions block a write, escalate with a reason.
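The enrich and act steps might look like the sketch below. The score weights, the 0.7 threshold, and the action names are illustrative assumptions, not a recommended model; in production the propensity input would come from a real model and the write-back would go through a governed function.

```python
def pql_score(feature_events: int, days_since_active: int, propensity: float) -> float:
    """Combine usage volume, recency, and a propensity model into one score."""
    recency = max(0.0, 1.0 - days_since_active / 30)   # decays over 30 days
    usage = min(feature_events / 50, 1.0)              # saturates at 50 events
    return 0.4 * usage + 0.3 * recency + 0.3 * propensity

def route_lead(lead: dict, threshold: float = 0.7) -> str:
    """Decide whether a lead crosses the routing threshold."""
    score = pql_score(lead["feature_events"], lead["days_since_active"],
                      lead["propensity"])
    if score >= threshold:
        return "route_to_owner"   # write routing row, update stage, notify
    return "hold"                 # keep nurturing, no write-back

print(route_lead({"feature_events": 45, "days_since_active": 2,
                  "propensity": 0.8}))  # route_to_owner
```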
Why warehouse native helps
- Governance is inherent. Sales and support data often sit under strict policies. Keeping enrichment and routing inside those policies removes a class of manual exceptions.
- Latency is predictable. No extra hops across services for core joins and lookups. That makes real time routing viable.
What to measure
- Median time from new lead to first owner touch.
- Conversion lift on agent routed vs baseline cohorts.
- False positive rate: the share of agent routed leads that were later disqualified.
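SKIP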
2) FinOps cost spike auto remediation
Goal: reduce surprise cloud spend by detecting anomalies and taking safe, reversible actions.
Ingredients
- Data sources: daily and hourly cost tables, workload logs, query history.
- Tools: Cortex Analyst for anomaly detection queries or model inference via user defined functions, governed functions to pause warehouses, change schedules, or insert guardrails.
- Policies: strict role separation. The action role can only pause or scale down a narrow set of resources.
Plan
- Detect: run rolling z score or seasonal decomposition over cost by warehouse and by workload.
- Diagnose: correlate with query history to find the responsible users, roles, or services. Summarize outlier queries and their estimated costs.
- Act: if conditions match a safe pattern, apply a minimal fix. For example, scale down a dev warehouse started after hours or shorten a runaway task schedule.
- Notify and document: write an audit row with action, reason, and rollback instructions, then notify owners with evidence.
- Learn: if an action is rolled back by a human, capture the reason and update the policy.
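The detect step above reduces to a short rolling statistic. This is a sketch of the z score variant: the 7 point window and 3 sigma threshold are illustrative defaults, and a production version would run the equivalent logic in SQL over cost tables rather than Python lists.

```python
import statistics

def spike_indices(costs: list, window: int = 7, z_limit: float = 3.0) -> list:
    """Flag points whose cost deviates sharply from the trailing window."""
    flagged = []
    for i in range(window, len(costs)):
        trail = costs[i - window:i]
        mu = statistics.mean(trail)
        sigma = statistics.stdev(trail)
        if sigma > 0 and (costs[i] - mu) / sigma > z_limit:
            flagged.append(i)
    return flagged

hourly = [10, 11, 9, 10, 12, 10, 11, 10, 95]  # the last hour spikes
print(spike_indices(hourly))  # [8]
```

Seasonal decomposition catches slower drifts that a short z score window misses; most teams run both and union the alerts.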
Why warehouse native helps
- Evidence and action share a boundary. The same tables that describe the incident are the ones the agent reads to justify a change. Easy to review and audit.
- Safe defaults. SQL native roles and policies make it simpler to limit blast radius than blanket cloud permissions.
What to measure
- Hours from spike to mitigation.
- Spend avoided per month based on counterfactual estimates.
- Number of human rollbacks, plus annotated causes.
3) Compliance sweeps with attested remediation
Goal: turn quarterly compliance checks into continuous monitoring that auto fixes low risk issues and generates signed attestations for the rest.
Ingredients
- Data sources: access logs, permission tables, records of processing activities, data classification catalogs.
- Tools: Cortex Search for policy documents, Cortex Analyst for joins between users, roles, and assets, an attestation function that stores signed results with hashes of evidence.
- Policies: row and column rules that prevent the agent from seeing the contents of restricted data while still allowing it to reason over metadata.
Plan
- Model the rules as data. Express policy checks as queries that produce pass or fail per asset.
- Sweep continuously. The agent runs these checks on a schedule, producing a list of violations with linked evidence.
- Auto remediate low risk items. For example, remove stale access for inactive service accounts that meet strict criteria.
- Prepare attestation packets. For items that require human review, the agent compiles evidence, cites the policy section, and proposes a fix.
- Log everything. Every action and decision is hashed and stored with timestamps for auditors.
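Steps one and five of this plan can be sketched together: rules expressed as data, and each check result packaged with a hash of its evidence. The rule name, the SQL text, and the table names are hypothetical; the hashing and timestamping pattern is the part that carries over.

```python
import datetime
import hashlib
import json

# Rules as data: each named check is a query producing violations per asset.
RULES = {
    "stale_service_access":
        "SELECT grantee, asset FROM grants "
        "WHERE last_used < DATEADD(day, -90, CURRENT_DATE)",
}

def attest(rule_name: str, violations: list) -> dict:
    """Package a check result with hashed evidence for the audit trail."""
    evidence = json.dumps(violations, sort_keys=True)
    return {
        "rule": rule_name,
        "passed": len(violations) == 0,
        "evidence_sha256": hashlib.sha256(evidence.encode()).hexdigest(),
        "checked_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

packet = attest("stale_service_access",
                [{"grantee": "svc_etl", "asset": "raw.orders"}])
```

Because the evidence hash is stored alongside the verdict, an auditor can later confirm that the evidence on file is exactly what the agent saw.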
Why warehouse native helps
- The auditor’s world is the warehouse. Evidence, queries, and results are all first class objects with lineage.
- Policy as data encourages repeatability and removes ambiguity about what was checked.
What to measure
- Percent of checks that run continuously vs quarterly.
- Days to close findings by severity.
- Attestation coverage and sampling error rate.
How to think about architecture
Cortex Agents are not magic. They perform better when you give them structure.
- Semantic views over messy tables. A clean layer of business views beats ad hoc joins. Agents plan better when the logical model is predictable.
- Tool contracts as interfaces. Treat each tool like a function with a schema and clear preconditions. Document inputs, outputs, and failure modes as you would for an API.
- Policies first. Start with deny by default, then open the minimal paths the agent needs. Test with simulated requests.
- Human in the loop where stakes are high. Route certain actions through a short approval queue with attached evidence and a one click approve or deny.
- Traceability by design. Log every plan, tool call, and write with a unique trace id. Store these in tables you can query for audits and incident reviews.
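A tool contract from the list above might be declared like this. The field names and the example tool are illustrative, but the idea is exactly what the text describes: inputs, outputs, preconditions, and failure modes stated up front so the planner can reason about the tool like an API.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ToolContract:
    name: str
    inputs: dict          # parameter name -> type
    outputs: dict
    preconditions: list   # checked before invocation
    failure_modes: list   # documented like an API's error codes
    idempotent: bool

route_lead_tool = ToolContract(
    name="route_lead",
    inputs={"lead_id": "string", "owner": "string"},
    outputs={"routed": "boolean", "trace_id": "string"},
    preconditions=["caller role can write routing table"],
    failure_modes=["PERMISSION_DENIED", "LEAD_NOT_FOUND"],
    idempotent=True,  # safe to retry; repeat calls do not double-route
)
```

Declaring idempotency explicitly matters: the planner can retry an idempotent tool on transient failure but must escalate when a non idempotent one fails mid flight.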
What comes next
This release is the opening shot. The next twelve months will be shaped by four themes.
1) Agent observability that developers actually use
Expect query plans, tool call graphs, and token level traces to land in the same observability tables you already monitor. The winner will offer a single place to answer three questions: what happened, why, and what it cost. Look for built in counterfactuals that simulate alternative plans so you can compare outcomes without running them in production.
2) SQL native safety rails
Developers will want guardrails they can reason about. Think of constraints expressed as SQL or policy tables rather than opaque prompts. Good rails include:
- Action allowlists keyed by semantic role.
- Budget constraints that stop a plan when estimated cost exceeds a limit.
- Data use policies that prevent tools from reading or writing certain objects.
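A budget constraint is the easiest of these rails to picture. Here is a minimal sketch: sum per step cost estimates and refuse the plan before anything runs. The credit numbers and step names are invented for the example; a real rail would read estimates from the optimizer and limits from a policy table.

```python
def check_budget(plan_steps: list, limit_credits: int) -> tuple:
    """Return (allowed, estimated_total) for a proposed plan."""
    total = sum(step["estimated_credits"] for step in plan_steps)
    return total <= limit_credits, total

plan = [
    {"tool": "cortex_analyst", "estimated_credits": 1},
    {"tool": "full_table_scan", "estimated_credits": 4},
]
allowed, total = check_budget(plan, limit_credits=2)
print(allowed, total)  # False 5
```

Because the rail is plain data and plain logic, it can be reviewed, versioned, and tested like any other policy, which is the whole argument for SQL native safety rails over opaque prompts.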
3) Cost controls that are both transparent and automatic
Agent cost is not just tokens. It is data scans, tool invocations, and retries. Expect first class budgeting with three controls:
- Pre flight cost estimation before a plan runs.
- Rate limits per tool and per role, adjustable by time of day.
- Adaptive caching across plans so common sub answers do not trigger repeat scans.
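The caching control in this list is worth a concrete sketch: memoize sub answers keyed by their normalized question so repeated plans reuse them instead of rescanning. The resolver below is a stand-in for a real tool call, and the counter only exists to make the cache hit visible.

```python
from functools import lru_cache

scan_count = 0  # counts how many times we actually "scan"

@lru_cache(maxsize=1024)
def sub_answer(question: str) -> str:
    """Resolve a sub answer; a real version would query warehouse tables."""
    global scan_count
    scan_count += 1
    return f"answer({question})"

sub_answer("monthly spend by warehouse")
sub_answer("monthly spend by warehouse")  # served from cache, no rescan
print(scan_count)  # 1
```

Real adaptive caching also needs invalidation tied to table freshness, which is easier inside the warehouse where the cache can watch the same change streams the agent does.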
4) A marketplace of warehouse tools and skills
As more work happens inside the warehouse boundary, you will see catalogs of tools and skills that can be installed, governed, and metered like extensions. The important design choice will be clear provenance and permission models. Who built the tool, what data it can touch, and how it is billed should be understandable at a glance.
Competitive notes to keep you honest
- Snowflake’s differentiator is governance and proximity to data. If your hardest problems are policy and lineage, warehouse native agents will feel right.
- Databricks brings strong evaluation and experimentation patterns for agents on the lakehouse. If you are iterating on complex retrieval and need deep quality tooling, the lakehouse stack may move faster.
- App layer suites excel at cross application actions and enterprise packaging. If your priority is to reach a wide surface of business users quickly, they can be the fastest way to ship.
Most enterprises will blend these. A warehouse native agent may remediate cost spikes. An app layer agent may coordinate tickets and chat. A lakehouse agent may run heavy retrieval and reasoning. The center of gravity will depend on your data gravity and your trust boundary.
A short build checklist
Use this to de risk your first production agent.
- Define one success metric that a human agrees is meaningful. For example, mean hours to mitigate spend spikes.
- Create semantic views that expose exactly the fields the agent needs. Nothing more.
- Write tool contracts. Inputs, outputs, error codes, idempotency rules.
- Start with read only and record counterfactual actions. Promote to write for low risk cases with explicit thresholds.
- Log every plan, tool call, and write with a unique trace id. Store these in a table you can query.
- Schedule weekly red team reviews. Try to break the agent with ambiguous instructions and noisy data.
- Cap cost with budgets and alerts. Stop the agent before it surprises you.
What to tell your executives
- Timeline: pilot in four weeks, limited production in eight, broader rollout in a quarter if the pilot meets predefined thresholds.
- Risk posture: actions are gated by policies, allowlists, and budgets. There is a rollback path for every write.
- Expected value: quantify with one metric per use case. For FinOps, aim for a measurable monthly spend reduction. For RevOps, target conversion lift on agent routed leads. For compliance, aim for fewer audit findings and faster closure times.
The bottom line
With Cortex Agents now generally available, Snowflake has turned the warehouse into a runtime for work, not just a store for facts. The early play is to pick the smallest high value loop you can close inside your existing governance. Ship a RevOps router that enriches and acts. Ship a FinOps guardrail that detects and fixes. Ship a compliance sweep that remediates simple issues and assembles evidence for the rest. Measure the result, raise the stakes, and repeat.
A platform race is underway. Warehouses bring governance and proximity. Lakehouses bring experimentation and evaluation. App suites bring distribution. Your best move is to place a smart bet in each lane where it makes sense, keep your policies and costs as data you can query, and design agents that are easy to explain when they are right and even easier to stop when they are not.








