Our Snowflake bill hit $8,200 last January. Not because we were running petabyte-scale workloads or serving thousands of concurrent analysts. We had 12 data team members running ad-hoc queries, 40-something dbt models, and a handful of dashboards. The warehouse was doing what every Snowflake warehouse does: spinning up XS instances to scan a few gigabytes of Parquet, then billing us $2/credit for the privilege. When I ran the same queries locally with DuckDB and got results in milliseconds instead of seconds, the math stopped making sense.
I wrote about DuckDB as an embedded analytics engine last year. That article was about using DuckDB locally for individual analysis. This is the sequel: what happens when you take DuckDB to the cloud with MotherDuck and run it as your team's primary analytics engine for six months. The short version: it worked far better than I expected, with some sharp edges I wish someone had warned me about.
What MotherDuck Actually Is
MotherDuck is a cloud analytics service built on DuckDB. But calling it "cloud DuckDB" undersells the most interesting part: the hybrid execution model. When you connect to MotherDuck, your queries can run locally on your machine, remotely in MotherDuck's cloud, or split across both — and the system decides transparently based on where the data lives.
The core concepts:
- Shared databases — Cloud-hosted DuckDB databases that your entire team can query simultaneously. No data copies, no export/import cycles.
- Local databases — Standard DuckDB files on your machine, accessible alongside cloud databases in the same session.
- Hybrid execution — A query can read from a local CSV, join it with a cloud table, and return results to your laptop. MotherDuck figures out the execution plan.
- Web UI — A notebook-style SQL editor in the browser with visualization, sharing, and collaboration features.
- Wasm client — DuckDB compiled to WebAssembly powers the web UI, so even the browser-based queries use the hybrid model.
Think of it as the experience of running a local database with the collaboration features of a cloud warehouse — without the cloud warehouse pricing model.
Why We Moved Off Snowflake
The decision was not ideological. We did not migrate because DuckDB is trendy or because I wanted to write this article. Three factors converged:
1. Our data volume was embarrassingly small for Snowflake. Our entire analytics warehouse was 180 GB compressed. Snowflake bills warehouses per second with a 60-second minimum, and even an XS warehouse burning one credit per hour was overkill for 90% of our queries. We were paying supercomputer rates for laptop-scale work.
2. The developer experience was painfully slow. Every ad-hoc query required a warehouse spin-up (5-15 seconds cold start), network round trip, and result transfer. Our analysts were waiting 20-30 seconds for queries that DuckDB answers in 200ms. That friction compounds across hundreds of daily queries.
3. We already had DuckDB in our stack. Three engineers were using DuckDB locally for data exploration. They kept exporting Parquet files from Snowflake, querying them locally, then going back to Snowflake for production. MotherDuck eliminated that round trip.
Architecture: Before and After
Here is what changed in our data platform:
BEFORE (Snowflake-centric):

┌──────────┐    ┌──────────────┐    ┌─────────────┐    ┌──────────┐
│  S3 Raw  │───→│ Snowflake XS │───→│ dbt models  │───→│ Metabase │
│ (Parquet │    │  $8K/month   │    │ (snowflake) │    │  Looker  │
│  + CSV)  │    │ cold starts  │    │             │    │          │
└──────────┘    └──────────────┘    └─────────────┘    └──────────┘

AFTER (MotherDuck hybrid):

┌──────────┐    ┌──────────────┐    ┌─────────────┐    ┌──────────┐
│  S3 Raw  │───→│  MotherDuck  │───→│ dbt models  │───→│ Evidence │
│ (Parquet │    │  $400/month  │    │ (dbt-duckdb)│    │ Metabase │
│  + CSV)  │    │ instant start│    │             │    │  Web UI  │
└──────────┘    └──────────────┘    └─────────────┘    └──────────┘
                       ↕
               ┌──────────────┐
               │ Local DuckDB │ ← analysts run hybrid queries
               │  (laptops)   │   from notebooks + CLI
               └──────────────┘
The key architectural change is not just swapping one database for another. It is that analysts now query production data directly from their laptops through the hybrid execution model, without maintaining separate exports or dev environments.
Migration Playbook: Snowflake to MotherDuck
Our migration took three weeks: the first exporting data from Snowflake, the second loading into MotherDuck and converting dbt models, the third running both systems in parallel to verify results. Here is the export step:
import snowflake.connector
import duckdb
import os
import pyarrow as pa
import pyarrow.parquet as pq

# Step 1: Export from Snowflake to Parquet
def export_snowflake_tables(sf_conn, tables: list[str], output_dir: str):
    """Export Snowflake tables to local Parquet files, one file per table."""
    cursor = sf_conn.cursor()
    os.makedirs(output_dir, exist_ok=True)
    for table in tables:
        print(f"Exporting {table}...")
        cursor.execute(f"SELECT * FROM {table}")
        parquet_path = os.path.join(output_dir, f"{table.lower()}.parquet")
        writer = None
        # Stream batches so large tables never have to fit in memory,
        # appending each batch to a single Parquet file via pyarrow.
        for batch_df in cursor.fetch_pandas_batches():
            batch = pa.Table.from_pandas(batch_df, preserve_index=False)
            if writer is None:
                writer = pq.ParquetWriter(parquet_path, batch.schema)
            writer.write_table(batch)
        if writer is not None:
            writer.close()
        row_count = duckdb.sql(f"SELECT count(*) FROM '{parquet_path}'").fetchone()[0]
        print(f"  {table}: {row_count:,} rows exported")
    cursor.close()
# Step 2: Load into MotherDuck
def load_to_motherduck(parquet_dir: str, md_database: str):
    """Load Parquet files into a MotherDuck shared database."""
    md = duckdb.connect(f"md:{md_database}?motherduck_token={os.environ['MOTHERDUCK_TOKEN']}")
    parquet_files = [f for f in os.listdir(parquet_dir) if f.endswith(".parquet")]
    for pf in parquet_files:
        table_name = pf.replace(".parquet", "")
        parquet_path = os.path.join(parquet_dir, pf)
        print(f"Loading {table_name} into MotherDuck...")
        md.sql(f"""
            CREATE OR REPLACE TABLE {table_name} AS
            SELECT * FROM read_parquet('{parquet_path}')
        """)
        count = md.sql(f"SELECT count(*) FROM {table_name}").fetchone()[0]
        print(f"  {table_name}: {count:,} rows loaded")
    md.close()

# Usage
if __name__ == "__main__":
    sf = snowflake.connector.connect(
        account="your-account.us-east-1",
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="ANALYTICS_XS",
        database="PRODUCTION",
        schema="PUBLIC",
    )
    tables = [
        "RAW_EVENTS", "RAW_USERS", "RAW_ORDERS",
        "RAW_PRODUCTS", "RAW_SESSIONS", "RAW_PAYMENTS",
    ]
    export_snowflake_tables(sf, tables, "./export_parquet")
    load_to_motherduck("./export_parquet", "analytics_prod")
    sf.close()
For BigQuery users, the export path is even simpler because BigQuery already supports Parquet export to GCS natively:
-- BigQuery: export to GCS as Parquet
EXPORT DATA OPTIONS(
uri='gs://your-bucket/export/raw_events/*.parquet',
format='PARQUET',
overwrite=true
) AS
SELECT * FROM `project.dataset.raw_events`;
-- MotherDuck: load directly from GCS
CREATE TABLE raw_events AS
SELECT * FROM read_parquet('gcs://your-bucket/export/raw_events/*.parquet');
The Hybrid Execution Model: Where the Magic Happens
This is the feature that separates MotherDuck from "just another cloud database." When you connect to MotherDuck from your laptop, you have access to both local and cloud databases in the same SQL session. The engine decides where each part of the query executes.
import duckdb

# Connect to MotherDuck (cloud) with a local database attached
md = duckdb.connect("md:analytics_prod?motherduck_token=your_token")

# Attach a local DuckDB file in the same session
md.sql("ATTACH 'local_scratch.duckdb' AS local_db")

# This query joins cloud data with local data seamlessly
result = md.sql("""
    SELECT
        c.customer_name,
        c.segment,
        o.total_orders,
        o.total_revenue,
        l.custom_score  -- from local experimental data
    FROM analytics_prod.dim_customers c
    JOIN analytics_prod.fct_orders o ON c.customer_id = o.customer_id
    JOIN local_db.experimental_scores l ON c.customer_id = l.customer_id
    WHERE o.total_revenue > 10000
    ORDER BY l.custom_score DESC
    LIMIT 100
""").fetchdf()

print(result)
In this query, dim_customers and fct_orders live in MotherDuck's cloud. experimental_scores is a local table on my laptop. The engine fetches the cloud data, joins it with local data on my machine, and returns results. No export step, no staging tables, no waiting.
You can also force execution location explicitly:
-- Force local execution (useful for sensitive data)
PRAGMA motherduck_local_only;
SELECT * FROM read_parquet('/home/analyst/sensitive_data.parquet');
-- Force cloud execution (useful for large shared datasets)
PRAGMA motherduck_cloud_only;
SELECT count(*), avg(revenue) FROM analytics_prod.fct_orders;
In practice, about 60% of our analysts' queries run locally (data exploration on subsets, local file analysis), 30% run in the cloud (aggregations on full production tables), and 10% are hybrid (joining cloud production data with local scratch work). The DX is genuinely seamless — you stop thinking about where data lives.
dbt + MotherDuck: Production Transformation Layer
We use the dbt-duckdb adapter with MotherDuck as the backend. Setup is straightforward, but there are a few configuration details that tripped us up.
# profiles.yml
analytics:
  target: prod
  outputs:
    prod:
      type: duckdb
      path: "md:analytics_prod"
      extensions:
        - httpfs
        - json
      settings:
        motherduck_token: "{{ env_var('MOTHERDUCK_TOKEN') }}"
      threads: 4
    dev:
      type: duckdb
      path: "md:analytics_dev"
      extensions:
        - httpfs
        - json
      settings:
        motherduck_token: "{{ env_var('MOTHERDUCK_TOKEN') }}"
      threads: 4
    local:
      type: duckdb
      path: "./target/local_dev.duckdb"
      extensions:
        - httpfs
      threads: 8
Notice three targets: prod and dev point to different MotherDuck databases, while local uses a pure local DuckDB file for fast iteration. I run dbt build --target local when developing models, then dbt build --target dev to verify in the cloud, then promote to prod.
Here is a real incremental model from our pipeline:
-- models/marts/fct_daily_revenue.sql
{{ config(
    materialized='incremental',
    unique_key=['date_day', 'customer_segment'],
    on_schema_change='sync_all_columns'
) }}

WITH daily_orders AS (
    SELECT
        date_trunc('day', order_completed_at) AS date_day,
        customer_segment,
        count(*) AS order_count,
        sum(order_total) AS gross_revenue,
        sum(discount_amount) AS total_discounts,
        sum(order_total - discount_amount) AS net_revenue,
        count(DISTINCT customer_id) AS unique_customers,
        avg(order_total) AS avg_order_value
    FROM {{ ref('stg_orders') }}
    WHERE order_status = 'completed'
    {% if is_incremental() %}
      AND order_completed_at >= (
          SELECT max(date_day) - INTERVAL '3 days'
          FROM {{ this }}
      )
    {% endif %}
    GROUP BY 1, 2
)

SELECT
    date_day,
    customer_segment,
    order_count,
    gross_revenue,
    total_discounts,
    net_revenue,
    unique_customers,
    avg_order_value,
    net_revenue / nullif(unique_customers, 0) AS revenue_per_customer,
    current_timestamp AS _loaded_at
FROM daily_orders
Incremental models in MotherDuck work identically to other DuckDB targets. The dbt-duckdb adapter handles the merge logic. Our full dbt pipeline (42 models) runs in 38 seconds on MotherDuck versus 4 minutes 20 seconds on Snowflake XS. That is not a typo.
Sourcing Directly from S3
One pattern we love is sourcing raw data directly from S3 without an explicit load step:
-- models/staging/stg_raw_events.sql
{{ config(materialized='view') }}

SELECT
    event_id,
    user_id,
    event_type,
    event_properties::JSON AS properties,
    epoch_ms(event_timestamp_ms) AS event_at,
    filename AS source_file
FROM read_parquet(
    's3://our-data-lake/events/year=2026/month=*/day=*/*.parquet',
    hive_partitioning=true,
    filename=true
)
WHERE year = 2026  -- hive partition column; prunes files before scanning
DuckDB's read_parquet with S3 and hive partitioning means our staging layer is just views over the data lake. No ingestion pipeline, no COPY INTO, no loading step. The data stays in S3 and gets queried in place.
Python Integration: SDK, Pandas, and Polars
The Python experience is where MotherDuck shines for data teams that live in notebooks. The connection is a one-liner, and you get full interop with pandas and Polars DataFrames:
import duckdb
import polars as pl

# Connect to MotherDuck
md = duckdb.connect("md:analytics_prod?motherduck_token=your_token")

# Query directly into a Polars DataFrame (zero-copy via Arrow)
df = md.sql("""
    SELECT
        customer_segment,
        date_trunc('month', date_day) AS month,
        sum(net_revenue) AS monthly_revenue,
        sum(unique_customers) AS customer_days  -- daily uniques summed; true monthly uniques need order-grain data
    FROM fct_daily_revenue
    WHERE date_day >= '2025-07-01'
    GROUP BY 1, 2
    ORDER BY 1, 2
""").pl()

# Process locally with Polars
pivoted = df.pivot(
    on="customer_segment",
    index="month",
    values="monthly_revenue",
    aggregate_function="sum",
)
print(pivoted)

# Write results back to MotherDuck
md.sql("CREATE OR REPLACE TABLE analysis_revenue_pivot AS SELECT * FROM pivoted")

# Or write to a local file
pivoted.write_parquet("revenue_by_segment.parquet")
The .pl() method returns a Polars DataFrame via Arrow with zero-copy transfer. The .fetchdf() method returns pandas. Both work seamlessly. You can also register local DataFrames as virtual tables in your MotherDuck session:
# Register a local DataFrame as a queryable table
import pandas as pd

local_targets = pd.DataFrame({
    "customer_segment": ["enterprise", "mid-market", "smb"],
    "q1_target": [500000, 200000, 100000],
    "q1_stretch": [650000, 260000, 130000],
})
md.register("targets", local_targets)

# Now join cloud data with local DataFrame in SQL
md.sql("""
    SELECT
        r.customer_segment,
        r.monthly_revenue AS actual,
        t.q1_target AS target,
        round(r.monthly_revenue / t.q1_target * 100, 1) AS pct_to_target
    FROM (
        SELECT customer_segment, sum(net_revenue) AS monthly_revenue
        FROM fct_daily_revenue
        WHERE date_day >= '2026-01-01' AND date_day < '2026-04-01'
        GROUP BY 1
    ) r
    JOIN targets t ON r.customer_segment = t.customer_segment
""").show()
This pattern — cloud production data joined with a local pandas DataFrame containing business targets — replaced an entire workflow that used to involve exporting CSVs, loading them into Snowflake staging tables, and cleaning up afterward.
Performance Benchmarks: MotherDuck vs Snowflake vs BigQuery
I ran five common query patterns against our production dataset (180 GB, 12 tables, largest table 2.1 billion rows). Each query was run five times, and I took the median. Snowflake used an XS warehouse (1 credit/hour). BigQuery used on-demand pricing. MotherDuck used a Standard plan instance.
| Query Pattern | MotherDuck | Snowflake XS | BigQuery On-Demand | Notes |
|---|---|---|---|---|
| Simple aggregation (COUNT/SUM/AVG on 500M rows) | 1.2s | 3.8s | 4.1s | MotherDuck wins on single-query latency |
| Multi-table JOIN (4 tables, 50M result rows) | 2.8s | 5.2s | 6.7s | DuckDB's join engine is remarkably fast |
| Window functions (RANK + LAG over 200M rows) | 3.1s | 4.5s | 5.3s | All three handle windows well; MotherDuck edges it |
| Full table scan with filters (2.1B rows) | 8.4s | 6.1s | 7.9s | Snowflake wins on large scans (more parallel compute) |
| COUNT DISTINCT on high-cardinality column (800M rows) | 2.9s | 4.8s | 3.5s | DuckDB's parallel hash-based distinct aggregation is fast |
The headline: MotherDuck wins 4 out of 5 patterns at our scale. Snowflake's advantage appears on large sequential scans where it can throw more parallel compute at the problem. At data volumes above 500 GB, I would expect Snowflake to close the gap; at 1 TB+, Snowflake would likely win most categories. MotherDuck's sweet spot is datasets from 1 GB to 500 GB, which — let's be honest — covers the majority of analytics workloads.
Cost Comparison: The Numbers That Made Our CFO Smile
This is where the conversation gets real. I modeled three usage tiers based on actual teams I have worked with:
| Usage Tier | MotherDuck | Snowflake | BigQuery On-Demand |
|---|---|---|---|
| Small (5 users, 50 GB, 500 queries/day) | $100/mo | $1,500/mo | $800/mo |
| Medium (15 users, 200 GB, 2K queries/day) | $400/mo | $5,000/mo | $3,200/mo |
| Large (40 users, 1 TB, 8K queries/day) | $1,800/mo | $15,000/mo | $9,500/mo |
MotherDuck's pricing model is based on storage and compute units, not per-query bytes scanned (BigQuery) or per-second warehouse time (Snowflake). For ad-hoc analytics workloads where users run many small-to-medium queries, this model is dramatically cheaper. Our actual bill went from $8,200/month on Snowflake to $380/month on MotherDuck Standard. The 95% cost reduction is not an exaggeration — it is our real invoice.
The caveat: Snowflake and BigQuery offer compute elasticity that MotherDuck does not match. If your workload spikes to 200 concurrent heavy queries during a board meeting, Snowflake scales up seamlessly. MotherDuck will queue them.
Sharing Data: How Teams Collaborate
In Snowflake, sharing data between teams means either granting schema access (and dealing with RBAC complexity) or using Snowflake Data Sharing (which works great but locks you deeper into the Snowflake ecosystem). MotherDuck's approach is simpler and feels more natural for small-to-medium teams.
-- Create a shared database (any team member can query it)
CREATE DATABASE analytics_prod;
-- Share a specific database with a teammate
-- (done through MotherDuck web UI or CLI)
-- motherduck share create analytics_prod --email teammate@company.com --permission read
-- Teammates connect and see shared databases automatically
-- No COPY, no CLONE, no data movement
SELECT * FROM analytics_prod.fct_daily_revenue LIMIT 10;
Every team member sees the same shared databases when they connect. There is no data copying, no synchronization lag, no "which schema has the latest version" confusion. An analyst can open a Jupyter notebook, connect to MotherDuck, and immediately query the same production tables that dbt builds into.
For isolation, each team member can also have personal databases that are not shared. Our analysts keep scratch tables, experimental features, and draft analyses in personal databases while querying shared production data. This replaced our Snowflake dev/staging schema nightmare entirely.
What Works Great
Ad-hoc analytics and data exploration. This is MotherDuck's killer use case. The instant query response times (sub-second for most analytical queries) completely changed how our analysts work. They iterate faster, explore more hypotheses, and spend less time waiting. The web UI with built-in charting is genuinely good — not Looker-level, but adequate for quick exploration.
CI/CD testing for data pipelines. We run our full dbt test suite against a MotherDuck dev database in GitHub Actions. A CI run that took 12 minutes on Snowflake takes 90 seconds on MotherDuck. The cost per CI run dropped from about $0.80 to effectively nothing.
Embedded analytics. We embed MotherDuck queries in an internal tool that serves pre-computed metrics to our ops team. The latency is low enough (50-200ms for simple lookups) that it works for interactive dashboards without a caching layer.
Local development with production data. The hybrid model means engineers can develop locally against real production data without downloading dumps or maintaining fixtures. This sounds small, but it eliminated an entire category of "works in dev, breaks in prod" issues.
What Does Not Work (Yet)
Honesty time. MotherDuck has real limitations that you should know about before migrating.
High concurrency (50+ simultaneous users). MotherDuck is not built for serving 200 dashboard users hitting the same database at once. We saw query queuing and latency spikes when more than 40-50 users were active simultaneously. Snowflake handles this trivially by scaling warehouses. If your analytics platform serves a large non-technical audience, Snowflake or BigQuery is still the right choice.
Petabyte-scale data. DuckDB is a single-node engine. MotherDuck adds cloud storage and sharing, but it is not a distributed query engine. Our 180 GB dataset is well within its comfort zone. At 1-2 TB, it still works but queries slow down. Beyond 5 TB, you will hit walls. If your data is measured in terabytes, stick with Snowflake, BigQuery, or Databricks.
Real-time streaming ingestion. There is no Kafka connector, no Kinesis integration, no streaming insert API. MotherDuck is a batch-oriented system. We land data in S3 via our streaming pipeline and MotherDuck queries it from there, but the freshness is minutes, not seconds. If you need sub-minute latency, look at ClickHouse or Materialize.
Complex RBAC and governance. MotherDuck's access control is basic: you can share databases with read or write permissions at the database level. There is no column-level security, no row-level policies, no data masking, no audit logging to the level that compliance teams expect. For SOC 2-audited analytics environments with PII, Snowflake's governance features are years ahead.
Ecosystem maturity. The BI tool integration is growing but not complete. Metabase works well (we use it). Looker and Tableau have experimental DuckDB connectors but they are not production-grade. Power BI does not work at all. If your reporting layer is Tableau or Power BI, MotherDuck is not ready for you.
Decision Framework: MotherDuck vs Staying on Snowflake
After six months, here is my decision framework. I have shared this with three other data teams considering the move:
Move to MotherDuck if:
- Your total data volume is under 500 GB
- Your team is under 30 active analysts/engineers
- Your primary use case is ad-hoc analytics and dbt transformations
- You value developer experience and iteration speed over enterprise features
- Your BI layer is Metabase, Evidence, Streamlit, or custom dashboards
- You do not have strict compliance requirements (SOC 2 column-level audit, HIPAA)
- Your Snowflake/BigQuery bill feels disproportionate to your data volume
Stay on Snowflake/BigQuery if:
- Your data volume exceeds 1 TB and is growing
- You serve dashboards to 100+ concurrent business users
- You need column-level security, data masking, or compliance audit trails
- Your BI layer is Tableau, Power BI, or Looker (native connectors matter)
- You have real-time streaming requirements built into the warehouse
- You use Snowflake-specific features (Snowpark, data sharing marketplace, Cortex)
- Your team is heavily invested in Snowflake/BigQuery-specific tooling
Consider a hybrid approach if:
- You want MotherDuck for dev/test and Snowflake for production serving
- You have some workloads that fit MotherDuck and others that need Snowflake's scale
- You want to gradually migrate, starting with ad-hoc analytics
Setting Up MotherDuck from Scratch
For teams starting fresh, here is the minimal setup to get a production-ready MotherDuck environment:
# Install DuckDB CLI with MotherDuck support
pip install duckdb --upgrade
# Authenticate (opens browser for OAuth)
duckdb -cmd "PRAGMA md_connect"
# Or set token directly for CI/CD
export MOTHERDUCK_TOKEN="your_token_here"
# Create your production database
duckdb "md:" -cmd "CREATE DATABASE analytics_prod;"
# Load initial data from S3
duckdb "md:analytics_prod" -cmd "
INSTALL httpfs; LOAD httpfs;
SET s3_region='us-east-1';
SET s3_access_key_id='${AWS_ACCESS_KEY_ID}';
SET s3_secret_access_key='${AWS_SECRET_ACCESS_KEY}';
CREATE TABLE raw_events AS
SELECT * FROM read_parquet('s3://your-bucket/events/**/*.parquet');
CREATE TABLE raw_users AS
SELECT * FROM read_parquet('s3://your-bucket/users/**/*.parquet');
SELECT table_name, estimated_size, column_count
FROM duckdb_tables();
"
# .github/workflows/dbt-ci.yml — CI/CD with MotherDuck
name: dbt CI

on:
  pull_request:
    paths: ['dbt/**']

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.12'
      - name: Install dependencies
        run: |
          pip install dbt-duckdb duckdb
          cd dbt && dbt deps
      - name: Run dbt build + test
        env:
          MOTHERDUCK_TOKEN: ${{ secrets.MOTHERDUCK_TOKEN }}
        run: |
          cd dbt
          dbt build --target ci --full-refresh
          dbt test --target ci
      - name: Validate row counts
        env:
          MOTHERDUCK_TOKEN: ${{ secrets.MOTHERDUCK_TOKEN }}
        run: |
          python - <<'PY'
          import duckdb
          md = duckdb.connect('md:analytics_ci')
          results = md.sql('''
              SELECT table_name, estimated_size
              FROM duckdb_tables()
              WHERE estimated_size = 0
          ''').fetchall()
          if results:
              print('ERROR: Empty tables found:', results)
              raise SystemExit(1)
          print('All tables have data.')
          PY
Six Months In: The Honest Verdict
MotherDuck is not a Snowflake replacement for everyone. It is a Snowflake replacement for teams like ours: mid-scale data (under 500 GB), 10-25 active users, heavy ad-hoc analytics, cost-conscious, and willing to accept a less mature ecosystem in exchange for dramatically better developer experience and 90%+ cost savings.
The hybrid execution model is not a gimmick. It genuinely changes how analysts work. The ability to join local scratch data with production tables in a single query, without export/import cycles, without staging schemas, without waiting — that removes friction that I did not even realize was slowing us down until it was gone.
If your Snowflake bill makes you wince every month and your data fits on a single large VM, give MotherDuck a serious look. Start with a dev environment, run your dbt models against it, and compare the results. The migration is surprisingly painless, and the worst case is you go back to Snowflake with a better understanding of what your queries actually need.