Agentic analytics goes live: Cube D3 on the semantic layer

Agent coworkers become reliable when they live inside a shared semantic layer. Cube’s June 2025 D3 launch shows how to turn LLMs into governed analysts that write Semantic SQL, build charts, and maintain metrics.

By Talos

The breakthrough that moves beyond chat with your data

For the past two years, teams have tried to turn dashboards into conversations. Ask a question in plain English, get a chart back. It is delightful the first time and unsettling the fifth, when the logic changes between queries and a column label quietly rewrites a definition you assumed was sacred. The lesson is simple. Without shared semantics, language models play confident analyst but deliver unreliable answers.

June 2025 marked a turn. Cube introduced D3 and positioned it as agentic analytics on a universal semantic layer. Rather than bolting a chat box onto business intelligence, D3 treats the semantic layer as the ground truth where agents live, learn, and are constrained. The result is not a novelty interface. It is a new operating model for analytics that lets large language models produce governed SQL, assemble visualizations that match metric contracts, and even help maintain the metric store itself. If you want the short version, think of Cube D3 for agentic analytics as the place where AI coworkers are allowed to operate because the rules are written into the floor.

Imagine the semantic layer as air traffic control for data. It knows every runway, which planes are allowed to land, and what the tower calls each aircraft. Agents can taxi and take off quickly, but the tower refuses unsafe maneuvers. When an agent asks for revenue, the layer routes the request to the certified definition, applies the right filters, and enforces row and column security. The outcome is fast and trustworthy rather than exciting but risky.

What D3 actually changes

D3’s framing is crisp. AI data coworkers operate in two primary modes.

  • The Analyst agent generates trusted Semantic SQL, picks appropriate visual encodings, and assembles workbook style narratives.
  • The Engineer agent helps build and refactor the semantic model, proposing dimensions, hierarchies, and metrics from warehouse schemas, then opening pull requests against model code.

Both share the same source of truth. That grounding is the unlock. Old chat systems tried to learn your business from table names and column descriptions. D3 narrows the problem with Semantic SQL and a certified metric catalog. The agent does not guess the definition of net revenue. It calls the definition. It does not invent a join path from orders to customers. It uses the layer’s relationships and access rules. This is the difference between helpful autocomplete and a system you can put in front of an executive team without a babysitter.
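
The difference can be made concrete with a small sketch. Assuming a hypothetical catalog of certified definitions (none of these names come from Cube's actual API), a grounded agent resolves a metric name to its governed definition instead of improvising one from column names:

```python
# Hypothetical metric catalog: a grounded agent looks definitions up
# rather than guessing them from table and column names.
CATALOG = {
    "net_revenue": {
        "sql": "SUM(order_total - returns - marketplace_fees)",
        "version": "v3",
        "owner": "finance-analytics",
    },
}

def resolve_metric(name: str) -> dict:
    """Return the certified definition or fail loudly; never invent one."""
    if name not in CATALOG:
        raise KeyError(f"'{name}' is not a certified metric; refusing to guess")
    return CATALOG[name]

definition = resolve_metric("net_revenue")
print(definition["sql"])  # the one governed definition, every time
```

The refusal path matters as much as the happy path: an ungrounded copilot would synthesize something plausible, while a grounded agent surfaces the gap for a human to close.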

Why BI bolt ons keep hitting a ceiling

The last cycle gave us chat surfaces and copilot buttons welded onto traditional business intelligence tools. These features can be useful for productivity, but they are often trapped inside a single tool’s model, with weak lineage and inconsistent metric logic once you leave the dashboard. A copilot that belongs to a visualization tool can draft a chart. It struggles the moment you need cross tool consistency, semantic reuse, or open interfaces for downstream notebooks and applications.

Teams report the same pattern. The bolt on helps an analyst write a faster window function. It does not reduce the cost of definition drift. It does not make access control legible across surfaces. It does not eliminate the Monday morning argument over which monthly active users number is real.

Agent native analytics flips the stack. Put the semantics in one place. Expose them through APIs and connectors. Let agents and humans consume the same definitions everywhere. When the metric changes, the change propagates to every surface. When the layer enforces a row policy, every agent request inherits it.

AI SQL inside the warehouse is a different path

Snowflake has pushed hard on AI SQL, making it possible to call model powered functions and analyze text, images, and audio with familiar SQL. Sigma has leaned into that direction with a spreadsheet style interface that can read semantic views and apply the new functions in line with governed warehouse objects. That approach is attractive for teams who want to stay inside the warehouse and keep analysis close to tables. You can explore what this looks like in the official docs for Snowflake AI SQL capabilities.

The tradeoff is scope. AI SQL enhances what a query can do inside warehouse boundaries. It does not, on its own, provide an agent workplace that understands business definitions as first class contracts and can propose, test, and maintain those definitions over time across many tools. The AI SQL path is additive to query power. Agent native analytics is additive to organizational workflow.

In practice, many enterprises will combine both. Use AI SQL to enrich unstructured content. Surface those results through the semantic layer, where agents can reason over governed metrics and dimensions, build visuals, and generate narratives without breaking policy or logic.

How it works when requests get specific

Imagine a regional vice president asks for last quarter’s revenue growth for EMEA, excluding returns and marketplace sales, and normalized for currency. A chat bolt on that does not know your semantics guesses from column names and past examples. It might forget that returns are booked in a separate system or apply a stale currency conversion.

An agent grounded in a universal semantic layer behaves differently in three ways:

  1. It resolves revenue to the certified net revenue metric, which already excludes returns and marketplace fees by definition.
  2. It uses dimension hierarchies to interpret EMEA, applying your region membership table and current fiscal calendar.
  3. It delegates currency normalization to the layer’s transformation logic, which fetches the current conversion policy and records the rate used.

The output is a chart and a paragraph, plus artifacts you can inspect: the Semantic SQL executed, the version of the metric definition, and the policy identifiers applied. Change requests can target those artifacts directly. This is not a black box response. It is an auditable decision trail.
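
A minimal shape for that decision trail, using illustrative field names rather than D3's actual artifact schema, might look like:

```python
from dataclasses import dataclass, asdict

@dataclass
class AnswerArtifact:
    """Everything needed to audit or contest an agent's answer.
    Field names are illustrative, not a real D3 schema."""
    question: str
    semantic_sql: str
    metric_version: str
    policy_ids: list

artifact = AnswerArtifact(
    question="EMEA revenue growth last quarter, ex-returns, FX normalized",
    semantic_sql="SELECT MEASURE(net_revenue) BY fiscal_quarter",
    metric_version="net_revenue@v3",
    policy_ids=["row:region_membership", "fx:policy-2025-06"],
)

# A change request can target any one of these fields directly.
print(asdict(artifact))
```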

The adoption playbook you can run in two quarters

Agent native analytics works when semantics are strong and guardrails are explicit. Here is a staged plan that fits typical enterprise cadence.

1. Model hygiene that survives change

  • Canonical metrics and dimensions with owners. Every definition needs a clear person and team responsible for correctness.
  • Descriptions and examples that are short, specific, and testable. Include edge cases and anti examples so agents learn what not to do.
  • Opinionated joins. Define join keys, directions, and allowed relationships. Ban cartesian joins by default. Agents must inherit these.
  • Hierarchies and synonyms. Map colloquial terms like top customer or churned account to concrete semantics. Add locale and time hierarchies.
  • Row and column policies baked in. Classify sensitive fields and apply masking and aggregation rules at the semantic layer.
  • Version control and reviews. Store the model in a repository. Require pull requests with diffs that show metric impact. Agents open drafts. Humans approve.
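
In Cube-style YAML, a definition that satisfies most of this checklist might look like the sketch below. Exact keys and names are illustrative; consult Cube's data modeling documentation for the authoritative schema.

```yaml
cubes:
  - name: orders
    sql_table: analytics.orders
    description: One row per order line. Owner, finance-analytics.

    joins:
      - name: customers
        relationship: many_to_one        # opinionated, no ad hoc joins
        sql: "{CUBE}.customer_id = {customers}.id"

    measures:
      - name: net_revenue
        type: sum
        sql: "order_total - returns - marketplace_fees"
        description: >
          Excludes tax, shipping, returns, and marketplace fees.
          Anti example, do not use for gross bookings.

    dimensions:
      - name: region
        sql: region
        type: string
```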

2. Evals that reflect business reality

  • Golden question bank. Build a set of 100 to 300 business questions that represent daily demand. Include tricky variants and outdated phrasing.
  • Ground truth outputs. For each question, capture expected SQL and expected results with tolerances. Regenerate results nightly from a frozen snapshot to detect drift.
  • Slice coverage map. Track which metrics, dimensions, and policies are exercised by the evals. Add questions until coverage passes a threshold you set.
  • Error taxonomy. Classify failures as join path, filter logic, time window, or policy violation. Fix the model or add examples before touching prompts.
  • Run on every change. Treat evals like unit tests for analytics. Agents should not ship new model proposals unless evals pass.
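
The comparison step of such a harness can be very small. Here is a sketch with illustrative numbers, assuming expected values come from a frozen snapshot:

```python
# Minimal eval check: golden questions carry expected results and
# tolerances, regenerated nightly from a frozen snapshot.
GOLDEN = [
    {
        "question": "Monthly active users, March",
        "expected_value": 41_250,
        "tolerance": 0.001,  # 0.1 percent relative tolerance
    },
]

def check(question: dict, actual_value: float) -> bool:
    """Pass if the answer lands within tolerance of frozen ground truth."""
    expected = question["expected_value"]
    return abs(actual_value - expected) <= question["tolerance"] * expected

assert check(GOLDEN[0], 41_260)      # within 0.1 percent, pass
assert not check(GOLDEN[0], 40_000)  # drift, fail and classify the error
```

Failures then feed the error taxonomy: a miss here should be labeled join path, filter logic, time window, or policy violation before anyone touches a prompt.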

3. Guardrails that make agents safe by default

  • Role based access at the layer. If a human cannot see a field, an agent cannot see it either. Propagate roles into every surface.
  • Cost and concurrency caps. Set per role query budgets and timeouts so experiments cannot melt a warehouse.
  • Tool allowlists. Specify which external tools agents may call when generating visuals or apps. Block everything else.
  • Deterministic templates for common tasks. For sensitive operations like financial reporting, restrict agents to certified workflows with locked prompts and schemas.
  • Red team the model. Periodically attempt prompt injections and semantic jailbreaks. Document the results and update controls.
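
Two of these guardrails, role based field visibility and per role query budgets, can be sketched in a few lines. Role names, fields, and limits here are invented for illustration:

```python
# Guardrail sketch: an agent inherits a human role's field visibility
# and a per role query budget. All names are illustrative.
ROLE_FIELDS = {"analyst_emea": {"net_revenue", "region", "fiscal_quarter"}}
ROLE_BUDGET = {"analyst_emea": 100}  # queries per day

def authorize(role: str, fields: set, queries_today: int) -> None:
    """Refuse hidden fields and exhausted budgets before any SQL runs."""
    visible = ROLE_FIELDS.get(role, set())
    hidden = fields - visible
    if hidden:
        raise PermissionError(f"{role} may not read: {sorted(hidden)}")
    if queries_today >= ROLE_BUDGET.get(role, 0):
        raise RuntimeError(f"{role} exceeded its daily query budget")

authorize("analyst_emea", {"net_revenue", "region"}, queries_today=3)  # ok
```

The point of putting this at the semantic layer is that every surface, human or agent, passes through the same check.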

4. Reversible actions from day one

  • Proposals not pushes. Agents create pull requests against the semantic repo, with linked eval runs and impact analysis. Humans approve or reject.
  • Shadow mode. For two to four weeks, let agents propose changes and generate outputs without publishing to business users. Compare accuracy and latency to the status quo.
  • One click rollback. Every agent action, from changing a metric definition to publishing a workbook, must be reversible by a human owner.
  • Full audit trail. Log prompts, Semantic SQL, policies applied, and resulting artifacts to a tamper evident store. Make it searchable and keep it for your retention period.
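
One common way to make an audit trail tamper evident is a hash chain, where each entry's hash covers the previous hash. This is a generic sketch, not a description of any particular product's log:

```python
import hashlib
import json

def append_entry(log: list, entry: dict) -> None:
    """Append an entry whose hash covers the previous entry's hash."""
    prev = log[-1]["hash"] if log else "genesis"
    payload = json.dumps(entry, sort_keys=True) + prev
    log.append({"entry": entry,
                "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify(log: list) -> bool:
    """Recompute the chain; any edit to history breaks it."""
    prev = "genesis"
    for row in log:
        payload = json.dumps(row["entry"], sort_keys=True) + prev
        if row["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev = row["hash"]
    return True

log = []
append_entry(log, {"action": "publish_workbook", "agent": "analyst-1"})
append_entry(log, {"action": "change_metric", "agent": "engineer-1"})
assert verify(log)
log[0]["entry"]["agent"] = "someone-else"  # tampering with history...
assert not verify(log)                     # ...is detected
```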

What this means for data contracts and governance

Data contracts used to mean schema and event guarantees. Agent native analytics extends that contract upward into semantics and policy.

  • Semantic contracts emerge. Upstream teams agree to provide not only fields but also the business meaning required by certified metrics. The contract declares that order total excludes tax and shipping or that customer status transitions are monotonic. Breakages alert owners immediately and block affected definitions from publishing.
  • Policy as first class. Row level and column level access rules live in code, versioned with the semantic model. Compliance reviews happen against the repo, not scattered screenshots. When legal changes a rule, agents inherit the change automatically.
  • Lineage that humans can read. Lineage graphs shift from tables and columns to metrics and dimensions. You can answer which decisions used a metric version and which agents produced them.
  • Metric service level objectives. Expect SLOs for analytics like freshness, completeness, and definitional stability per metric. Breaches pause agent output for that metric and trigger playbooks.
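
A freshness SLO gate can be sketched as a single predicate that stands between a metric and any agent answer. Thresholds and names here are assumptions:

```python
from datetime import datetime, timedelta, timezone

# SLO sketch: a freshness breach pauses agent output for that metric.
SLOS = {"net_revenue": {"max_staleness": timedelta(hours=6)}}

def agent_may_answer(metric: str, last_refreshed: datetime) -> bool:
    """Block answers on stale metrics; a breach triggers the playbook."""
    staleness = datetime.now(timezone.utc) - last_refreshed
    return staleness <= SLOS[metric]["max_staleness"]

fresh = datetime.now(timezone.utc) - timedelta(hours=1)
stale = datetime.now(timezone.utc) - timedelta(hours=12)
assert agent_may_answer("net_revenue", fresh)
assert not agent_may_answer("net_revenue", stale)  # breach, pause output
```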

How team workflows will evolve

  • Analytics engineers become semantic stewards. Their job shifts from building one off transformations to curating reusable definitions, writing tests, and mentoring agents that do the rote work.
  • Business analysts move to scenario design. Analysts spend more time specifying decision questions and less time writing the same joins. They review agent narratives and package them into data apps.
  • Data product managers grow in importance. They own the semantic roadmap. They decide which metrics are worth governing, how policies apply, and which surfaces get certified experiences.
  • Platform teams standardize on GitOps for analytics. The semantic layer becomes another service with pipelines, deployment rings, and canaries. Agents submit changes like junior developers.
  • BI creators become app builders. Workbooks turn into lightweight data applications with actions. Agents scaffold the first version. Humans refine and ship.

Where the wider agent ecosystem is headed

Enterprise autonomy is not arriving in one place. In customer operations, we already see CX as the first autonomy beachhead. In the browser, the gravity is shifting as agentic browsers shift power. And on the floor, new workflows appear when the warehouse agent goes free.

Analytics joins that list by making semantics the center of gravity. Snowflake’s AI SQL will keep expanding what can be done inside SQL, from multimodal analysis to new optimizer tricks. Sigma and others will likely deepen their integrations so spreadsheet style creation can reach unstructured content without leaving governed contexts. At the same time, semantic layer providers will keep pushing agent workplaces where definitions, policies, and tools are explicitly modeled. Expect connective tissue to grow between these worlds.

If you already live heavily in Snowflake and Sigma, start by routing AI enriched results back through the semantic layer and make that the only door agents can use. If you are building a greenfield analytics platform, consider leading with the semantic layer and adding AI SQL enrichment as a downstream capability. In both cases, use the adoption playbook above to avoid surprises.

Practical design choices that make agents trustworthy

A few disciplined choices will determine whether your first quarter with agent coworkers is a win.

  • Prefer queries that resolve to metrics, not tables. Force questions to target certified metrics or metric views. This reduces ambiguity, improves cache hit rates, and boosts eval pass rates.
  • Keep noun phrases stable. Business users will keep saying active customer and booked revenue. Map those phrases to exact semantics and keep that mapping in source control.
  • Treat joins as policy. Joins are not ad hoc code. They are business rules about how facts and dimensions relate. Put them under review, with approvals and rollbacks.
  • Make evaluation data boring. Use frozen snapshots, fixed tolerances, and reproducible seeds. Excitement belongs in the product, not in the eval harness.
  • Invest in explainability surfaces. Every chart or narrative an agent ships should carry a single click trail to the Semantic SQL, metric version, and policies applied.
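
Keeping noun phrases stable is, mechanically, just a versioned lookup table. A sketch, with invented mappings, shows the shape of the file worth putting under source control:

```python
# Synonym map sketch: colloquial business phrases pinned to exact
# semantics. Mappings are illustrative and live in source control.
SYNONYMS = {
    "active customer": "customers.status = 'active' (customer_status@v2)",
    "booked revenue": "MEASURE(booked_revenue)",
    "churned account": "customers.churn_flag = true",
}

def to_semantics(phrase: str) -> str:
    """Map a business phrase to certified semantics; never improvise."""
    key = phrase.lower()
    if key not in SYNONYMS:
        raise KeyError(f"no certified mapping for '{phrase}'; flag for review")
    return SYNONYMS[key]

assert to_semantics("Booked Revenue") == "MEASURE(booked_revenue)"
```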

What to do on Monday

  • Pick five metrics that already cause arguments. Give them owners, write crisp definitions, and map the joins and policies. Put them in a repo.
  • Build a 100 question eval set from real executive and field requests. Capture expected SQL and results.
  • Turn on shadow mode for an agent that can answer those questions using the five metrics. Track accuracy, latency, and cost for two weeks.
  • Close three definitional bugs the agent uncovers. Ship nothing to business users until evals pass.
  • Write your first semantic contract and add a check in continuous integration. Break it on purpose to prove the alert path works.
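
A first semantic contract check can be a few lines in continuous integration. This sketch assumes the contract from earlier, that order total excludes tax and shipping, with invented sample rows:

```python
# Semantic contract check sketch for CI: fail the build if order_total
# stops excluding tax and shipping. Field names are illustrative.
def check_contract(sample_rows: list) -> None:
    for row in sample_rows:
        expected = row["gross"] - row["tax"] - row["shipping"]
        if row["order_total"] != expected:
            raise AssertionError(
                "contract breach: order_total must exclude tax and shipping"
            )

rows_ok = [{"gross": 120, "tax": 15, "shipping": 5, "order_total": 100}]
check_contract(rows_ok)  # passes quietly

rows_bad = [{"gross": 120, "tax": 15, "shipping": 5, "order_total": 120}]
# check_contract(rows_bad) would raise; break it on purpose once to
# prove the alert path works before trusting it.
```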

None of this requires a moonshot. It requires discipline and a platform choice that makes semantics the center of gravity.

The bottom line

Agent native analytics on a universal semantic layer is the first credible beachhead for large language models in the enterprise. It aligns automation with governance. It turns AI from a clever dashboard trick into a dependable coworker. The companies that treat semantics as infrastructure and agents as supervised contributors will cut decision cycle times without cutting trust. That is a breakthrough worth acting on today.
