Our Approach

How we deliver a semantic layer program.

A repeatable methodology grounded in 25+ years of analytics transformation — applied specifically to the work of building the metric truth plane mid-market operators now need.

Our Methodology

Stabilize → Improve → Leverage, applied to the semantic layer

The three-phase pattern we’ve used for twenty years, mapped specifically onto the work of building, deploying, and governing a semantic layer. In that order. No skipping.

1. Stabilize
Weeks 1–3
Stop the bleeding on metric drift before building anything new.
  • Catalog every metric definition in the wild — Power BI tabular models, Tableau data sources, AI assistant prompts, Excel pulls.
  • Score the top 30 metrics for definitional drift and conflict.
  • Five to seven structured stakeholder interviews.
  • Platform recommendation based on what you already run.
Deliverable: Numbered conflict-and-drift audit, plus a target-architecture recommendation.
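The drift scoring above can be sketched in a few lines. This is a minimal illustration, not the audit tooling itself: it assumes each tool's definition has been extracted to a plain expression string, and the tool names and expressions are made up. A score of 0.0 means every tool agrees on one definition; higher means more conflict.

```python
from collections import defaultdict

def drift_score(definitions):
    """Score a metric's definitional drift: 0.0 means every tool
    agrees on one definition; higher means more conflict.

    `definitions` maps a consuming tool (e.g. "power_bi") to the
    expression that tool uses for the metric.
    """
    variants = defaultdict(list)
    for tool, expr in definitions.items():
        # Normalize case and whitespace so cosmetic differences don't count.
        variants[" ".join(expr.lower().split())].append(tool)
    n_tools = len(definitions)
    if n_tools <= 1:
        return 0.0
    # One variant -> no drift; N variants across M tools -> (N-1)/(M-1).
    return (len(variants) - 1) / (n_tools - 1)

# Illustrative audit input: "net_revenue" defined in four places.
net_revenue = {
    "power_bi":  "SUM(sales.amount) - SUM(sales.refunds)",
    "tableau":   "SUM(sales.amount) - SUM(sales.refunds)",
    "excel":     "SUM(sales.amount)",
    "ai_prompt": "sum of sales amount minus refunds and discounts",
}
print(f"net_revenue drift score: {drift_score(net_revenue):.2f}")
```

Ranking the top 30 metrics by this kind of score is what turns the catalog into a numbered conflict-and-drift audit.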
2. Improve
Weeks 4–10
Build the governed layer. One definition per metric, consumed by every BI tool.
  • Implementation in the chosen platform — Snowflake Semantic Views, Databricks Metric Views, or Power BI Tabular on Direct Lake.
  • Top 25–50 metrics migrated to single canonical definitions, each with a named owner in finance or operations.
  • Tableau Published Data Sources and Power BI Semantic Models repointed to read from the layer.
  • Metric stewardship forum stood up, chaired by the CFO or a deputy.
Deliverable: Working semantic layer, governed and operational; conflicting-number meetings end.
3. Leverage
Weeks 11–16+
Convert the layer into competitive advantage. AI agents, governed self-service, predictive use cases.
  • AI assistant pointed at the semantic layer — accuracy jumps from frustrating to trustworthy.
  • Governed self-service for analysts and franchise/operator audiences.
  • Predictive use cases on top of definitions everyone already trusts.
  • OSI export for portability; ongoing stewardship cadence handed off.
Deliverable: Leverage playbook — what to build next on top of the foundation.
First 90 Days Framework

The first 90 days of a semantic layer program.

The opinionated arc we use on every Foundational Build. Stabilize the conversation first; build the layer second; turn on consumers third; hand off stewardship last.

1. Stabilize the conversation, not the data
Days 1–14 · Five to seven structured interviews
Five to seven structured interviews with the CFO, COO, head of analytics, and one or two senior business analysts. Tool inventory across every place a metric is defined today. Single deliverable: an 8–12 page document covering what you have, the ten messiest metrics, and the recommended platform. No code yet.
2. Pick the platform, define the first 10 metrics
Days 15–45 · Build the foundation, narrowly
Snowflake Semantic Views, Databricks Metric Views, or Power BI Tabular on Direct Lake — dictated by the warehouse you’re on. Not fifty metrics. Ten. Each with a single canonical definition, a named owner in finance or operations, documented business rules, tested against the historical numbers people remember.
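The "tested against the historical numbers people remember" step can be sketched as a regression check. Everything below is illustrative: the `run_metric` stub stands in for a real semantic-layer query, and the metric names, periods, and values are made up.

```python
# Regression check: the new canonical definition must reproduce the
# historical numbers people already remember.

HISTORICAL = {
    # (metric, period): the number the CFO already signed off on
    ("net_revenue", "2024-Q4"): 4_812_330.17,
    ("net_revenue", "2025-Q1"): 5_104_992.60,
}

def run_metric(metric, period):
    """Stand-in for a semantic-layer query; a real check would hit
    the platform (SQL or API) instead of returning canned values."""
    canned = {
        ("net_revenue", "2024-Q4"): 4_812_330.17,
        ("net_revenue", "2025-Q1"): 5_104_992.60,
    }
    return canned[(metric, period)]

def check_against_history(tolerance=0.005):
    """Flag any period where the canonical definition drifts more
    than `tolerance` (0.5% here) from the remembered number."""
    failures = []
    for (metric, period), expected in HISTORICAL.items():
        actual = run_metric(metric, period)
        if abs(actual - expected) / expected > tolerance:
            failures.append((metric, period, expected, actual))
    return failures

print("failures:", check_against_history())
```

Running a check like this per metric, per remembered period, is what lets a named owner sign off on the canonical definition.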
3. First wave of consumers
Days 46–75 · Cut over three consumers
The highest-pain BI dashboard, the AI assistant (Cortex Analyst, Databricks Genie, or your in-house Copilot), and one embedded analytics surface. AI accuracy typically moves from 50–65% to 88–95% on a domain benchmark over this window. The dashboards stop disagreeing on the ten metrics.
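A domain benchmark behind accuracy figures like those can be sketched as follows. The question set, tolerance, and numbers below are illustrative assumptions, not a production harness: gold answers come from the governed layer, and the assistant's answers are captured before and after the cutover.

```python
def benchmark_accuracy(answers, gold):
    """Fraction of benchmark questions the assistant answered with
    the governed number. `answers` and `gold` map a question id to
    a numeric answer; a tight relative tolerance counts as a match."""
    graded = [
        abs(answers[q] - expected) / max(abs(expected), 1e-9) < 0.001
        for q, expected in gold.items()
    ]
    return sum(graded) / len(graded)

# Illustrative four-question benchmark.
gold   = {"q1": 100.0, "q2": 250.0, "q3": 0.42, "q4": 7.0}
before = {"q1": 100.0, "q2": 230.0, "q3": 0.42, "q4": 9.0}  # 2/4 right
after  = {"q1": 100.0, "q2": 250.0, "q3": 0.42, "q4": 7.0}  # 4/4 right

print(f"before cutover: {benchmark_accuracy(before, gold):.0%}")
print(f"after cutover:  {benchmark_accuracy(after, gold):.0%}")
```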
4. Stewardship live; expand the backlog
Days 76–90 · Make it sustainable
Metric stewardship forum stood up on a monthly cadence, chaired by the CFO or a deputy. We sit in for the first three forums and then hand off. Next 15–25 metrics scoped and prioritized. The platform is humming; the team is building, not firefighting.
5. The pivot point
Somewhere between day 50 and 75
The first executive meeting in which the “whose number is right” conversation does not happen. Same number in every dashboard, same number from the AI assistant, same number in the email the CFO already saw. That moment is the ROI. Everything you build after sits on top of the trust the team built that day.
Architecture Philosophy

Day-2 architecture for the semantic layer

Most semantic layers we inherit were designed for the company as it was eighteen months ago. We design for the company you are becoming.

Day-2 thinking is the cheapest possible insurance against the most expensive possible re-do. Hard-coded org dimensions, single-channel sales facts, tool-specific calculated fields — every one of these mistakes locks the layer to a single Day-1 assumption it can’t flex around. We ask three questions before naming a metric:
  • What is the next reorg this needs to survive?
  • What is the next channel this needs to absorb?
  • What is the next consumer — BI, embedded, AI agent — that needs the same answer?
If any answer is “we’d have to rewrite it,” the metric isn’t done.
Standards-Based and Proprietary Modeling

OSI, Tableau Published Data Sources, and Power BI Semantic Models

Three semantic-modeling approaches; one engagement that often uses two of them together. How we choose depends on your stack, your AI agents, and how much platform portability you need to preserve.

Standards-based — OSI

Open Semantic Interchange specifies a vendor-neutral format for metric and dimension definitions, finalized January 2026 with sixteen-plus signatories. We use OSI-compatible implementations (Snowflake Semantic Views, Databricks Metric Views, dbt MetricFlow) whenever the metric definitions need to survive a future platform change — which is most of the time in mid-market.
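The published OSI schema isn't reproduced here, but the shape of a vendor-neutral definition can be sketched. The Python record below is a hypothetical illustration: every field name is ours, not OSI's. The point is that one portable record carries the expression, owner, grain, and dimensions, with nothing tool-specific baked in.

```python
import json

# Hypothetical vendor-neutral metric definition. Field names are
# illustrative, NOT the published OSI schema; the idea is that the
# definition carries everything a platform needs to compile it.
net_revenue = {
    "name": "net_revenue",
    "owner": "finance.controller",  # named owner in finance or operations
    "description": "Gross sales minus refunds, recognized at invoice date.",
    "expression": "SUM(amount) - SUM(refunds)",  # portable aggregate logic
    "source_table": "sales.fct_invoices",
    "dimensions": ["invoice_date", "region", "channel"],
    "grain": "invoice_line",  # atomic grain, so the layer can flex
}

# Serializable means portable: the same record can be compiled into a
# Snowflake Semantic View, a Databricks Metric View, or a Tabular measure.
portable = json.dumps(net_revenue, indent=2)
print(portable)
```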

Proprietary — Tableau Published Data Sources

Tableau’s native semantic model. Tableau Pulse digests, Tableau Cloud workbooks, and Agentforce agents all read from the same Published Data Source. Strongest fit for Tableau-first stacks. We pair Published Data Sources with a Snowflake or Databricks semantic layer underneath when a single platform-level definition needs to feed Tableau alongside other consumers.

Proprietary — Power BI Semantic Models

The Power BI Tabular semantic model, increasingly running on Direct Lake over OneLake in Microsoft Fabric. The reference enterprise implementation for Microsoft shops. Strongest fit when Power BI is the primary BI tool and Copilot is the primary AI surface. We can build the Tabular model standalone, or front a Snowflake/Databricks semantic layer with it.

Our default for mid-market clients running multiple BI tools: canonical definitions in the OSI-compatible platform layer, consumed by the proprietary Tableau and Power BI semantic models above it. One source of truth, multiple BI surfaces, no metric drift.

Three Roles of the Analytics Leader

Evangelist · Translator · Facilitator

Evangelist

We champion data as a strategic asset. We build organizational belief in analytics as a growth driver, not just a cost center — and in the semantic layer as the infrastructure that makes the strategic story real.

Translator

We convert ‘data speak’ into business outcomes. The metric layer is fundamentally a translation problem: what does the business actually want this number to mean? We bridge the language gap between technical teams and business leaders.

Facilitator

We work effectively between groups — IT and lines of business, central data teams and distributed analytics users, finance and engineering, today’s metric definition and tomorrow’s AI agent. Stewardship forums are facilitation work.

Information Governance Philosophy

The principles that govern the work

  • Metric definitions are control activities. Ownership belongs to finance and operations, not engineering.
  • Capture and manage data at atomic (lowest-grain) levels so the semantic layer can flex without rebuilds.
  • Clear data provenance is essential to a single source of truth — especially when an AI agent is reading the definitions.
  • Manage risk by balancing data timeliness, completeness, and accuracy — especially for high-exposure use cases like CFO reporting and Copilot answers.
  • Treat reliable, efficient data pipelines as foundational infrastructure — not an afterthought.
  • End-to-end QA: automation, consistency checks at every stage, active exception monitoring, AI-agent accuracy benchmarking.

See the methodology in action.

Read the case studies that built this playbook, or book a 60-minute discovery call to walk through your situation.