JSON Output

Rocky’s JSON output is the interface contract between Rocky and orchestrators such as Dagster. The schema is versioned so that consumers can detect breaking changes. Every command that supports --output json is backed by a typed Rust struct deriving JsonSchema, with autogenerated Pydantic and TypeScript bindings.

The current schema version is 0.3.0. Every JSON response includes a top-level version field. Orchestrators should check this value and handle version mismatches gracefully.
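For example, a consumer might gate on the version field before parsing the rest of the payload. A minimal sketch; the major.minor compatibility policy shown here is an assumption, not a documented guarantee:

```python
EXPECTED_MAJOR_MINOR = (0, 3)  # this consumer was built against schema 0.3.0

def check_schema_version(payload: dict) -> dict:
    """Raise if the response's major.minor differs from what this
    consumer was built against; patch bumps are tolerated."""
    major, minor, _patch = (int(part) for part in payload["version"].split("."))
    if (major, minor) != EXPECTED_MAJOR_MINOR:
        raise ValueError(f"unsupported schema version {payload['version']}")
    return payload
```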

Throughout the output, asset_key arrays follow the format:

[source_type, ...component_values, table_name]

For example, a Fivetran source with tenant acme, region us_west, connector shopify, and table orders produces:

["fivetran", "acme", "us_west", "shopify", "orders"]

This key is designed to map directly to orchestrator asset definitions (e.g., Dagster’s AssetKey).
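A sketch of assembling this key from a discover source entry. The component ordering (tenant, then regions, then source) mirrors the example above and is an assumption for illustration; the real ordering comes from your configured schema pattern:

```python
def build_asset_key(source_type: str, components: dict, table_name: str) -> list:
    """Flatten a discover components object into the documented
    [source_type, ...component_values, table_name] shape.
    The key ordering here is an assumed example, not Rocky's rule."""
    values = []
    for key in ("tenant", "regions", "source"):
        value = components.get(key)
        if isinstance(value, list):
            values.extend(value)  # e.g. regions is an array
        elif value is not None:
            values.append(value)
    return [source_type, *values, table_name]

key = build_asset_key(
    "fivetran",
    {"tenant": "acme", "regions": ["us_west"], "source": "shopify"},
    "orders",
)
# key == ["fivetran", "acme", "us_west", "shopify", "orders"]
```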


rocky discover

Returns all discovered sources and their tables.

{
  "version": "0.3.0",
  "command": "discover",
  "sources": [
    {
      "id": "connector_abc123",
      "components": {
        "tenant": "acme",
        "regions": ["us_west"],
        "source": "shopify"
      },
      "source_type": "fivetran",
      "last_sync_at": "2026-03-30T10:00:00Z",
      "tables": [
        { "name": "orders", "row_count": null },
        { "name": "customers", "row_count": null },
        { "name": "products", "row_count": null }
      ]
    }
  ],
  "checks": {
    "freshness": { "threshold_seconds": 86400 }
  }
}

Field reference:

| Field | Type | Description |
| --- | --- | --- |
| sources[].id | string | Connector identifier from the source system. |
| sources[].components | object | Parsed schema pattern components. |
| sources[].source_type | string | Source type ("fivetran" or "manual"). |
| sources[].last_sync_at | string or null | ISO 8601 timestamp of the last successful sync. Null if unknown. |
| sources[].tables | array | List of tables in this source. |
| sources[].tables[].name | string | Table name. |
| sources[].tables[].row_count | integer or null | Row count if available, otherwise null. |
| checks | object or absent | Pipeline-level check configuration, when [checks] is declared in rocky.toml. |
| checks.freshness.threshold_seconds | integer | Freshness threshold in seconds. |

rocky run

Returns a complete summary of the pipeline execution.

{
  "version": "0.3.0",
  "command": "run",
  "pipeline_type": "replication",
  "filter": "tenant=acme",
  "duration_ms": 45200,
  "tables_copied": 20,
  "tables_failed": 0,
  "materializations": [
    {
      "asset_key": ["fivetran", "acme", "us_west", "shopify", "orders"],
      "rows_copied": null,
      "duration_ms": 2300,
      "metadata": {
        "strategy": "incremental",
        "watermark": "2026-03-30T10:00:00Z",
        "target_table_full_name": "acme_warehouse.staging__us_west__shopify.orders",
        "sql_hash": "a1b2c3d4e5f67890",
        "column_count": 12,
        "compile_time_ms": 8
      }
    }
  ],
  "check_results": [
    {
      "asset_key": ["fivetran", "acme", "us_west", "shopify", "orders"],
      "checks": [
        {
          "name": "row_count",
          "passed": true,
          "source_count": 15000,
          "target_count": 15000
        },
        {
          "name": "column_match",
          "passed": true,
          "missing": [],
          "extra": []
        },
        {
          "name": "freshness",
          "passed": true,
          "lag_seconds": 300,
          "threshold_seconds": 86400
        }
      ]
    }
  ],
  "permissions": {
    "grants_added": 3,
    "grants_revoked": 0,
    "catalogs_created": 1,
    "schemas_created": 2
  },
  "drift": {
    "tables_checked": 20,
    "tables_drifted": 1,
    "actions_taken": [
      {
        "table": "acme_warehouse.staging__us_west__shopify.line_items",
        "action": "drop_and_recreate",
        "reason": "column 'status' changed STRING -> INT"
      }
    ]
  },
  "execution": {
    "concurrency": 8,
    "tables_processed": 20,
    "tables_failed": 0
  },
  "metrics": {
    "tables_processed": 20,
    "tables_failed": 0,
    "statements_executed": 45,
    "retries_attempted": 1,
    "retries_succeeded": 1,
    "anomalies_detected": 0,
    "table_duration_p50_ms": 1200,
    "table_duration_p95_ms": 4500,
    "table_duration_max_ms": 8200,
    "query_duration_p50_ms": 800,
    "query_duration_p95_ms": 3200,
    "query_duration_max_ms": 7100
  },
  "errors": [],
  "anomalies": []
}

Top-level fields:

| Field | Type | Description |
| --- | --- | --- |
| pipeline_type | string or absent | Pipeline type executed (e.g., "replication"). |
| filter | string or null | The filter applied to this run, if any. |
| duration_ms | integer | Total pipeline execution time in milliseconds. |
| tables_copied | integer | Number of tables that were copied (full or incremental). |
| tables_failed | integer | Number of tables that failed during processing. |
| tables_skipped | integer | Number of tables skipped (omitted when 0). |
| resumed_from | string or absent | Run ID this run resumed from, if --resume was used. |
| shadow | boolean | True when running in shadow mode (omitted when false). |
| errors | array | Error details for tables that failed. Each entry has asset_key and error. |
| execution | object | Concurrency and throughput summary. |
| metrics | object or null | Counters and percentile histograms for the run. |
| anomalies | array | Row count anomalies detected by historical baseline comparison. |
| partition_summaries | array | Per-model partition execution summaries (present for time_interval models). |
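Because errors carries asset_key and error per failed table, an orchestrator can fail fast with context. A hedged sketch:

```python
def assert_run_clean(run: dict) -> dict:
    """Raise when a run reports failed tables, surfacing each errors[]
    entry (asset_key plus error message) in the exception text."""
    if run["tables_failed"] > 0:
        details = "; ".join(
            "/".join(entry["asset_key"]) + ": " + entry["error"]
            for entry in run["errors"]
        )
        raise RuntimeError(f"{run['tables_failed']} table(s) failed: {details}")
    return run
```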

materializations[]:

| Field | Type | Description |
| --- | --- | --- |
| asset_key | array of strings | Unique asset identifier. |
| rows_copied | integer or null | Number of rows inserted. Null if the warehouse does not report this. |
| duration_ms | integer | Time spent copying this table in milliseconds. |
| metadata.strategy | string | Replication strategy used ("incremental" or "full_refresh"). |
| metadata.watermark | string or null | The watermark value after this copy. Null for full refresh. |
| metadata.target_table_full_name | string or absent | Fully-qualified target table (catalog.schema.table). |
| metadata.sql_hash | string or absent | 16-char hex hash of the generated SQL. |
| metadata.column_count | integer or absent | Number of columns in the materialized table. |
| metadata.compile_time_ms | integer or absent | Compile time in milliseconds for derived models. |
| partition | object or absent | Partition window info for time_interval materializations. |

check_results[]:

| Field | Type | Description |
| --- | --- | --- |
| asset_key | array of strings | The table this check applies to. |
| checks[].name | string | Check name: "row_count", "column_match", or "freshness". |
| checks[].passed | boolean | Whether the check passed. |

Additional fields vary by check type:

  • row_count: source_count (integer), target_count (integer)
  • column_match: missing (list of column names missing from target), extra (list of unexpected columns in target)
  • freshness: lag_seconds (integer), threshold_seconds (integer)
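A consumer can dispatch on checks[].name to render these variants. A sketch covering the three documented check types:

```python
def summarize_check(check: dict) -> str:
    """Render one check_results[].checks[] entry as a one-line summary,
    branching on the documented check names."""
    name = check["name"]
    if name == "row_count":
        detail = f"source={check['source_count']} target={check['target_count']}"
    elif name == "column_match":
        detail = f"missing={check['missing']} extra={check['extra']}"
    elif name == "freshness":
        detail = f"lag={check['lag_seconds']}s threshold={check['threshold_seconds']}s"
    else:
        detail = "unknown check type"
    status = "PASS" if check["passed"] else "FAIL"
    return f"{status} {name}: {detail}"
```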

permissions:

| Field | Type | Description |
| --- | --- | --- |
| grants_added | integer | Number of GRANT statements executed. |
| grants_revoked | integer | Number of REVOKE statements executed. |
| catalogs_created | integer | Number of catalogs created during this run. |
| schemas_created | integer | Number of schemas created during this run. |

drift:

| Field | Type | Description |
| --- | --- | --- |
| tables_checked | integer | Total tables inspected for schema drift. |
| tables_drifted | integer | Number of tables where drift was detected. |
| actions_taken[].table | string | Fully qualified table name. |
| actions_taken[].action | string | Action taken (e.g., "drop_and_recreate"). |
| actions_taken[].reason | string | Human-readable explanation of the drift. |

rocky plan

Returns the SQL statements that would be executed, without running them.

{
  "version": "0.3.0",
  "command": "plan",
  "filter": "tenant=acme",
  "statements": [
    {
      "purpose": "create_catalog",
      "target": "acme_warehouse",
      "sql": "CREATE CATALOG IF NOT EXISTS acme_warehouse"
    },
    {
      "purpose": "create_schema",
      "target": "acme_warehouse.staging__us_west__shopify",
      "sql": "CREATE SCHEMA IF NOT EXISTS acme_warehouse.staging__us_west__shopify"
    },
    {
      "purpose": "incremental_copy",
      "target": "acme_warehouse.staging__us_west__shopify.orders",
      "sql": "INSERT INTO acme_warehouse.staging__us_west__shopify.orders SELECT *, CAST(NULL AS STRING) AS _loaded_by FROM raw_catalog.src__acme__us_west__shopify.orders WHERE _fivetran_synced > (...)"
    }
  ]
}

statements[]:

| Field | Type | Description |
| --- | --- | --- |
| purpose | string | What this statement does: "create_catalog", "create_schema", "incremental_copy", "full_refresh", "grant", "revoke", "tag_catalog", "tag_schema". |
| target | string | The fully qualified object this statement operates on. |
| sql | string | The exact SQL that would be executed. |
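For example, statements can be filtered by purpose to review DDL separately from data copies. A sketch against the shape above:

```python
def statements_by_purpose(plan: dict, purposes: set) -> list:
    """Select the SQL of plan statements whose purpose is in `purposes`,
    e.g. {"create_catalog", "create_schema"} to review DDL alone."""
    return [s["sql"] for s in plan["statements"] if s["purpose"] in purposes]
```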

rocky state

Returns stored watermarks from the embedded state file.

{
  "version": "0.3.0",
  "command": "state",
  "watermarks": [
    {
      "table": "acme_warehouse.staging__us_west__shopify.orders",
      "last_value": "2026-03-30T10:00:00Z",
      "updated_at": "2026-03-30T10:01:32Z"
    },
    {
      "table": "acme_warehouse.staging__us_west__shopify.customers",
      "last_value": "2026-03-30T09:55:00Z",
      "updated_at": "2026-03-30T10:01:32Z"
    }
  ]
}

watermarks[]:

| Field | Type | Description |
| --- | --- | --- |
| table | string | Fully qualified target table name. |
| last_value | string | The last watermark value (ISO 8601 timestamp). |
| updated_at | string | When this watermark was last written (ISO 8601 timestamp). |
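A sketch that flags stale watermarks by comparing last_value against a cutoff, assuming the documented ISO 8601 UTC format:

```python
from datetime import datetime, timedelta, timezone

def stale_watermarks(state: dict, max_age: timedelta) -> list:
    """Return tables whose last_value is older than max_age.
    The trailing 'Z' is rewritten so datetime.fromisoformat accepts it."""
    now = datetime.now(timezone.utc)
    stale = []
    for wm in state["watermarks"]:
        last = datetime.fromisoformat(wm["last_value"].replace("Z", "+00:00"))
        if now - last > max_age:
            stale.append(wm["table"])
    return stale
```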

rocky doctor

Aggregates health checks across config, state, adapters, and pipelines.

{
  "command": "doctor",
  "overall": "healthy",
  "checks": [
    { "name": "config", "status": "healthy", "message": "Config syntax valid", "duration_ms": 2 },
    { "name": "state", "status": "healthy", "message": "State store healthy (12 watermarks)", "duration_ms": 5 },
    { "name": "adapters", "status": "healthy", "message": "All adapters reachable", "duration_ms": 340 }
  ],
  "suggestions": []
}

Field reference:

| Field | Type | Description |
| --- | --- | --- |
| overall | string | Aggregate status: "healthy", "warning", or "critical". |
| checks[].name | string | Check category ("config", "state", "adapters", "pipelines", "state_sync"). |
| checks[].status | string | "healthy", "warning", or "critical". |
| checks[].message | string | Human-readable result. |
| checks[].duration_ms | integer | Time spent on this check in milliseconds. |
| suggestions | array of strings | Actionable fix suggestions for any non-healthy checks. |
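One way to consume this in a wrapper script is to map overall to an exit code. The 0/1/2 mapping below is an assumed convention, not Rocky's own:

```python
def doctor_exit_code(report: dict) -> int:
    """Echo any suggestions and map the aggregate status to a process
    exit code (healthy=0, warning=1, critical=2; assumed convention)."""
    for suggestion in report.get("suggestions", []):
        print(f"suggestion: {suggestion}")
    return {"healthy": 0, "warning": 1, "critical": 2}[report["overall"]]
```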

rocky drift

Detects schema drift between source and target tables without running the pipeline.

{
  "version": "0.3.0",
  "command": "drift",
  "drift": {
    "tables_checked": 20,
    "tables_drifted": 2,
    "actions_taken": [
      {
        "table": "acme_warehouse.staging__us_west__shopify.orders",
        "action": "alter_column",
        "reason": "column 'amount' widened FLOAT -> DOUBLE (safe)"
      },
      {
        "table": "acme_warehouse.staging__us_west__shopify.line_items",
        "action": "drop_and_recreate",
        "reason": "column 'status' changed STRING -> INT"
      }
    ]
  }
}

Field reference:

| Field | Type | Description |
| --- | --- | --- |
| drift.tables_checked | integer | Total tables inspected for schema drift. |
| drift.tables_drifted | integer | Number of tables where drift was detected. |
| drift.actions_taken[].table | string | Fully qualified table name. |
| drift.actions_taken[].action | string | Action taken ("alter_column", "drop_and_recreate"). |
| drift.actions_taken[].reason | string | Human-readable explanation. |

rocky compare

Compares shadow tables against production targets after a shadow run.

{
  "version": "0.3.0",
  "command": "compare",
  "filter": "tenant=acme",
  "tables_compared": 20,
  "tables_passed": 19,
  "tables_warned": 1,
  "tables_failed": 0,
  "results": [
    {
      "production_table": "acme_warehouse.staging__us_west__shopify.orders",
      "shadow_table": "acme_warehouse.staging__us_west__shopify.orders_rocky_shadow",
      "row_count_match": true,
      "production_count": 15000,
      "shadow_count": 15000,
      "row_count_diff_pct": 0.0,
      "schema_match": true,
      "schema_diffs": [],
      "verdict": "pass"
    }
  ],
  "overall_verdict": "pass"
}

Field reference:

| Field | Type | Description |
| --- | --- | --- |
| filter | string | Filter applied to select tables. |
| tables_compared | integer | Total tables compared. |
| tables_passed | integer | Tables with no differences. |
| tables_warned | integer | Tables with minor differences (within threshold). |
| tables_failed | integer | Tables with significant differences. |
| results[].production_table | string | Fully qualified production table name. |
| results[].shadow_table | string | Fully qualified shadow table name. |
| results[].row_count_match | boolean | Whether row counts match. |
| results[].production_count | integer | Row count in production table. |
| results[].shadow_count | integer | Row count in shadow table. |
| results[].row_count_diff_pct | number | Row count difference as a percentage. |
| results[].schema_match | boolean | Whether schemas match. |
| results[].schema_diffs | array of strings | Column-level schema differences. |
| results[].verdict | string | "pass", "warn", or "fail". |
| overall_verdict | string | Aggregate verdict across all tables. |

rocky compile

Compiles models, resolves dependencies, type-checks SQL, and builds the semantic graph.

{
  "version": "0.3.0",
  "command": "compile",
  "models": 14,
  "execution_layers": 4,
  "diagnostics": [],
  "has_errors": false,
  "compile_timings": {
    "parse_ms": 12,
    "resolve_ms": 8,
    "typecheck_ms": 18,
    "total_ms": 42
  },
  "models_detail": [
    {
      "name": "fct_revenue",
      "strategy": { "type": "incremental", "timestamp_column": "updated_at" },
      "target": { "catalog": "acme_warehouse", "schema": "analytics", "table": "fct_revenue" },
      "freshness": { "warn_after_seconds": 3600, "error_after_seconds": 86400 },
      "contract_source": "auto"
    }
  ]
}

Field reference:

| Field | Type | Description |
| --- | --- | --- |
| models | integer | Number of models compiled. |
| execution_layers | integer | Number of layers in the dependency DAG. |
| diagnostics | array | Compiler diagnostics (errors and warnings). |
| has_errors | boolean | Whether any compilation errors were found. |
| compile_timings | object | Per-phase timing breakdown in milliseconds. |
| models_detail | array | Per-model details (strategy, target, freshness). Empty when no models exist. |
| models_detail[].name | string | Model name. |
| models_detail[].strategy | object | Materialization strategy configuration. |
| models_detail[].target | object | Target table coordinates (catalog, schema, table). |
| models_detail[].freshness | object or absent | Per-model freshness expectation, when declared. |
| models_detail[].contract_source | string or absent | "auto" or "explicit", absent when no contract. |

rocky lineage

Shows model-level lineage with columns, upstream/downstream models, and column-level edges.

{
  "version": "0.3.0",
  "command": "lineage",
  "model": "fct_revenue",
  "columns": [
    { "name": "customer_id" },
    { "name": "revenue_month" },
    { "name": "net_revenue" }
  ],
  "upstream": ["stg_orders", "stg_refunds"],
  "downstream": ["rpt_monthly_summary"],
  "edges": [
    {
      "source": { "model": "stg_orders", "column": "customer_id" },
      "target": { "model": "fct_revenue", "column": "customer_id" },
      "transform": "direct"
    },
    {
      "source": { "model": "stg_orders", "column": "total_amount" },
      "target": { "model": "fct_revenue", "column": "net_revenue" },
      "transform": "expression"
    }
  ]
}

Field reference:

| Field | Type | Description |
| --- | --- | --- |
| model | string | The model being analyzed. |
| columns | array | Columns in the model’s output schema. |
| upstream | array of strings | Models this model depends on. |
| downstream | array of strings | Models that depend on this model. |
| edges[].source | object | Source column (model + column). |
| edges[].target | object | Target column (model + column). |
| edges[].transform | string | Transform kind: "direct", "cast", "expression", etc. |
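The edges array supports reverse lookups, e.g. finding every source column that feeds one output column. A sketch:

```python
def sources_for_column(lineage: dict, target_column: str) -> list:
    """Collect (source model, source column, transform) triples that
    feed one output column of the analyzed model."""
    return [
        (edge["source"]["model"], edge["source"]["column"], edge["transform"])
        for edge in lineage["edges"]
        if edge["target"]["column"] == target_column
    ]
```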

Traces a single column back through its upstream lineage chain.

{
  "version": "0.3.0",
  "command": "lineage",
  "model": "fct_revenue",
  "column": "net_revenue",
  "trace": [
    {
      "source": { "model": "stg_orders", "column": "total_amount" },
      "target": { "model": "fct_revenue", "column": "net_revenue" },
      "transform": "expression"
    },
    {
      "source": { "model": "stg_refunds", "column": "refund_amount" },
      "target": { "model": "fct_revenue", "column": "net_revenue" },
      "transform": "expression"
    }
  ]
}

Field reference:

| Field | Type | Description |
| --- | --- | --- |
| model | string | The model containing the traced column. |
| column | string | The column being traced. |
| trace[].source | object | Source column (model + column). |
| trace[].target | object | Target column (model + column). |
| trace[].transform | string | Transform kind: "direct", "cast", "expression", etc. |

rocky test

Runs local model tests via DuckDB.

{
  "version": "0.3.0",
  "command": "test",
  "total": 14,
  "passed": 12,
  "failed": 2,
  "failures": [
    { "name": "fct_orders::not_null_order_id", "error": "found 3 null values" },
    { "name": "fct_orders::unique_order_id", "error": "found 7 duplicates" }
  ]
}

Field reference:

| Field | Type | Description |
| --- | --- | --- |
| total | integer | Total number of tests run. |
| passed | integer | Number of tests that passed. |
| failed | integer | Number of tests that failed. |
| failures[].name | string | Qualified test name. |
| failures[].error | string | Error message describing the failure. |
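A sketch that renders this payload as a human-readable summary:

```python
def format_test_report(report: dict) -> str:
    """One summary line plus an indented line per failures[] entry."""
    lines = [f"{report['passed']}/{report['total']} tests passed"]
    for failure in report.get("failures", []):
        lines.append(f"  FAIL {failure['name']}: {failure['error']}")
    return "\n".join(lines)
```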

rocky ci

Emits combined compile and test output for CI/CD pipelines.

{
  "version": "0.3.0",
  "command": "ci",
  "compile_ok": true,
  "tests_ok": true,
  "models_compiled": 14,
  "tests_passed": 14,
  "tests_failed": 0,
  "exit_code": 0,
  "diagnostics": [],
  "failures": []
}

Field reference:

| Field | Type | Description |
| --- | --- | --- |
| compile_ok | boolean | Whether compilation succeeded without errors. |
| tests_ok | boolean | Whether all tests passed. |
| models_compiled | integer | Number of models compiled. |
| tests_passed | integer | Number of tests that passed. |
| tests_failed | integer | Number of tests that failed. |
| exit_code | integer | Process exit code (0 = success). |
| diagnostics | array | Compiler diagnostics (errors and warnings). |
| failures | array | Test failures (same shape as rocky test failures). |

rocky history

Shows recent pipeline run history.

{
  "version": "0.3.0",
  "command": "history",
  "runs": [
    {
      "run_id": "run_20260401_143022",
      "started_at": "2026-04-01T14:30:22Z",
      "status": "success",
      "trigger": "cli",
      "models_executed": 20
    },
    {
      "run_id": "run_20260401_080015",
      "started_at": "2026-04-01T08:00:15Z",
      "status": "partial",
      "trigger": "dagster",
      "models_executed": 19
    }
  ],
  "count": 2
}

Field reference:

| Field | Type | Description |
| --- | --- | --- |
| runs[].run_id | string | Unique run identifier. |
| runs[].started_at | string | ISO 8601 timestamp when the run started. |
| runs[].status | string | Run status ("success", "partial", "failed"). |
| runs[].trigger | string | What triggered the run ("cli", "dagster", "scheduler"). |
| runs[].models_executed | integer | Number of models executed in this run. |
| count | integer | Total number of runs returned. |

Shows execution history for a specific model.

{
  "version": "0.3.0",
  "command": "history",
  "model": "fct_revenue",
  "executions": [
    {
      "started_at": "2026-04-01T14:30:22Z",
      "duration_ms": 2300,
      "rows_affected": 1482,
      "status": "success",
      "sql_hash": "a1b2c3d4e5f67890"
    },
    {
      "started_at": "2026-03-31T14:30:05Z",
      "duration_ms": 8900,
      "rows_affected": null,
      "status": "success",
      "sql_hash": "f8e7d6c5b4a39210"
    }
  ],
  "count": 2
}

Field reference:

| Field | Type | Description |
| --- | --- | --- |
| model | string | Model name. |
| executions[].started_at | string | ISO 8601 timestamp. |
| executions[].duration_ms | integer | Execution time in milliseconds. |
| executions[].rows_affected | integer or null | Rows affected, if reported by the warehouse. |
| executions[].status | string | Execution status. |
| executions[].sql_hash | string | Hash of the SQL that was executed. |
| count | integer | Total number of executions returned. |

rocky metrics

Reports quality metrics for a model, including snapshots, alerts, and column trends.

{
  "version": "0.3.0",
  "command": "metrics",
  "model": "fct_revenue",
  "snapshots": [
    {
      "run_id": "run_20260401_143022",
      "timestamp": "2026-04-01T14:30:22Z",
      "row_count": 148203,
      "freshness_lag_seconds": 300,
      "null_rates": { "customer_id": 0.0, "net_revenue": 0.0 }
    }
  ],
  "count": 1,
  "alerts": [
    {
      "type": "anomaly",
      "severity": "warning",
      "message": "null rate increased from 0.0% to 2.3% in last run",
      "run_id": "run_20260401_143022",
      "column": "discount_pct"
    }
  ],
  "column": "net_revenue",
  "column_trend": [
    {
      "run_id": "run_20260401_143022",
      "timestamp": "2026-04-01T14:30:22Z",
      "null_rate": 0.0,
      "row_count": 148203
    }
  ]
}

Field reference:

| Field | Type | Description |
| --- | --- | --- |
| model | string | Model name. |
| snapshots[].run_id | string | Run that produced this snapshot. |
| snapshots[].timestamp | string | ISO 8601 timestamp. |
| snapshots[].row_count | integer | Row count at snapshot time. |
| snapshots[].freshness_lag_seconds | integer or absent | Freshness lag in seconds. |
| snapshots[].null_rates | object | Map of column name to null rate (0.0 to 1.0). |
| count | integer | Number of snapshots returned. |
| alerts | array | Quality alerts (present when --alerts is used). |
| alerts[].type | string | Alert type (e.g., "anomaly"). |
| alerts[].severity | string | "warning" or "critical". |
| alerts[].message | string | Human-readable alert description. |
| alerts[].run_id | string | Run that triggered the alert. |
| alerts[].column | string or absent | Column involved, if applicable. |
| column | string or absent | Column filter, when --column is used. |
| column_trend | array | Column-level trend data (present when --column is used). |
| message | string or absent | Explanatory message when no data is available. |
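For example, a snapshot's null_rates map can be scanned for columns above a tolerance. A sketch; the 1% default threshold is arbitrary:

```python
def high_null_columns(snapshot: dict, threshold: float = 0.01) -> list:
    """Columns in one snapshots[] entry whose null rate exceeds
    `threshold` (rates are fractions between 0.0 and 1.0)."""
    return sorted(
        column
        for column, rate in snapshot["null_rates"].items()
        if rate > threshold
    )
```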

rocky optimize

Recommends materialization strategies based on execution history and a cost model.

{
  "version": "0.3.0",
  "command": "optimize",
  "recommendations": [
    {
      "model_name": "dim_customers",
      "current_strategy": "table",
      "recommended_strategy": "incremental",
      "estimated_monthly_savings": 42.50,
      "reasoning": "Only 0.3% row change rate between runs. Switching to incremental saves ~17s per run."
    }
  ],
  "total_models_analyzed": 1
}

Field reference:

| Field | Type | Description |
| --- | --- | --- |
| recommendations[].model_name | string | Model name. |
| recommendations[].current_strategy | string | Current materialization strategy. |
| recommendations[].recommended_strategy | string | Recommended materialization strategy. |
| recommendations[].estimated_monthly_savings | number | Estimated monthly cost savings in dollars. |
| recommendations[].reasoning | string | Human-readable explanation. |
| total_models_analyzed | integer | Number of models analyzed. |
| message | string or absent | Explanation when no recommendations are available (e.g., insufficient history). |

rocky compact

Generates OPTIMIZE and VACUUM SQL for Delta table compaction.

{
  "version": "0.3.0",
  "command": "compact",
  "model": "acme_warehouse.staging__us_west__shopify.orders",
  "dry_run": true,
  "target_size_mb": 256,
  "statements": [
    { "purpose": "optimize", "sql": "OPTIMIZE acme_warehouse.staging__us_west__shopify.orders" },
    { "purpose": "vacuum", "sql": "VACUUM acme_warehouse.staging__us_west__shopify.orders" }
  ]
}

Field reference:

| Field | Type | Description |
| --- | --- | --- |
| model | string | Target table in catalog.schema.table format. |
| dry_run | boolean | Whether this was a dry run. |
| target_size_mb | integer | Target file size in megabytes. |
| statements[].purpose | string | Statement purpose ("optimize", "vacuum"). |
| statements[].sql | string | The SQL statement. |

rocky archive

Generates or executes DELETE and VACUUM SQL for archiving old data.

{
  "version": "0.3.0",
  "command": "archive",
  "model": "acme_warehouse.staging__us_west__shopify.events",
  "older_than": "90d",
  "older_than_days": 90,
  "dry_run": true,
  "statements": [
    {
      "purpose": "delete",
      "sql": "DELETE FROM acme_warehouse.staging__us_west__shopify.events WHERE _fivetran_synced < '2026-01-10'"
    },
    {
      "purpose": "vacuum",
      "sql": "VACUUM acme_warehouse.staging__us_west__shopify.events"
    }
  ]
}

Field reference:

| Field | Type | Description |
| --- | --- | --- |
| model | string or absent | Target table, if filtered to a specific model. |
| older_than | string | Age threshold as specified (e.g., "90d", "6m", "1y"). |
| older_than_days | integer | Age threshold normalized to days. |
| dry_run | boolean | Whether this was a dry run. |
| statements[].purpose | string | Statement purpose ("delete", "vacuum"). |
| statements[].sql | string | The SQL statement. |
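The relationship between older_than and older_than_days can be mirrored client-side. The 30-day month and 365-day year factors below are assumptions; Rocky's own normalization may differ:

```python
def older_than_to_days(spec: str) -> int:
    """Normalize a "90d"/"6m"/"1y" style age spec to days, mirroring
    the documented older_than_days field (assumed conversion factors)."""
    factors = {"d": 1, "m": 30, "y": 365}
    return int(spec[:-1]) * factors[spec[-1]]
```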

rocky profile-storage

Profiles storage layout and generates encoding recommendations.

{
  "version": "0.3.0",
  "command": "profile-storage",
  "model": "acme_warehouse.staging__us_west__shopify.orders",
  "profile_sql": "SELECT column_name, data_type, ... FROM information_schema.columns WHERE ...",
  "recommendations": [
    {
      "column": "status",
      "data_type": "STRING",
      "estimated_cardinality": "5",
      "recommended_encoding": "TINYINT",
      "reasoning": "Only 5 distinct values. Integer encoding reduces storage by ~60%."
    },
    {
      "column": "customer_notes",
      "data_type": "STRING",
      "estimated_cardinality": "98200",
      "recommended_encoding": "ZSTD",
      "reasoning": "High cardinality text column. Compression reduces storage significantly."
    }
  ]
}

Field reference:

| Field | Type | Description |
| --- | --- | --- |
| model | string | Target table in catalog.schema.table format. |
| profile_sql | string | The SQL used to profile the table. |
| recommendations[].column | string | Column name. |
| recommendations[].data_type | string | Current data type. |
| recommendations[].estimated_cardinality | string | Estimated number of distinct values. |
| recommendations[].recommended_encoding | string | Recommended encoding or type. |
| recommendations[].reasoning | string | Human-readable explanation. |

rocky import-dbt

Imports a dbt project and converts it to Rocky models.

{
  "version": "0.3.0",
  "command": "import-dbt",
  "import_method": "directory",
  "project_name": "acme_analytics",
  "dbt_version": "1.7.0",
  "imported": 24,
  "warnings": 2,
  "failed": 0,
  "sources_found": 6,
  "sources_mapped": 6,
  "tests_found": 18,
  "tests_converted": 15,
  "tests_converted_custom": 2,
  "tests_skipped": 1,
  "macros_detected": 3,
  "imported_models": ["stg_orders", "stg_customers", "fct_revenue"],
  "warning_details": [
    {
      "model": "stg_payments",
      "category": "macro",
      "message": "custom Jinja macro 'cents_to_dollars' not translated",
      "suggestion": "Replace with a Rocky SQL expression"
    }
  ],
  "failed_details": [],
  "report": {}
}

Field reference:

| Field | Type | Description |
| --- | --- | --- |
| import_method | string | Import method used ("directory", "manifest"). |
| project_name | string or absent | dbt project name from dbt_project.yml. |
| dbt_version | string or absent | dbt version detected. |
| imported | integer | Number of models successfully imported. |
| warnings | integer | Number of import warnings. |
| failed | integer | Number of models that failed to import. |
| sources_found | integer | Number of dbt sources found. |
| sources_mapped | integer | Number of sources successfully mapped. |
| tests_found | integer | Number of dbt tests found. |
| tests_converted | integer | Number of tests converted to Rocky format. |
| tests_converted_custom | integer | Number of custom tests converted. |
| tests_skipped | integer | Number of tests skipped. |
| macros_detected | integer | Number of custom macros detected. |
| imported_models | array of strings | Names of successfully imported models. |
| warning_details[].model | string | Model with the warning. |
| warning_details[].category | string | Warning category. |
| warning_details[].message | string | Warning message. |
| warning_details[].suggestion | string or absent | Suggested fix. |
| failed_details[].name | string | Name of the failed model. |
| failed_details[].reason | string | Reason for the failure. |
| report | object | Free-form per-model migration report. |

rocky validate-migration

Compares a dbt project against its Rocky import to verify correctness.

{
  "version": "0.3.0",
  "command": "validate-migration",
  "project_name": "acme_analytics",
  "dbt_version": "1.7.0",
  "models_imported": 24,
  "models_failed": 0,
  "total_tests": 18,
  "total_contracts": 12,
  "total_warnings": 1,
  "validations": [
    {
      "model": "stg_customers",
      "present_in_dbt": true,
      "present_in_rocky": true,
      "compile_ok": true,
      "dbt_description": "Customer staging model",
      "rocky_intent": "Customer dimension with contact details",
      "test_count": 3,
      "contracts_generated": 1,
      "warnings": []
    },
    {
      "model": "dim_products",
      "present_in_dbt": true,
      "present_in_rocky": true,
      "compile_ok": true,
      "test_count": 2,
      "contracts_generated": 1,
      "warnings": ["column order differs from dbt"]
    }
  ]
}

Field reference:

| Field | Type | Description |
| --- | --- | --- |
| project_name | string or absent | dbt project name. |
| dbt_version | string or absent | dbt version detected. |
| models_imported | integer | Total models compared. |
| models_failed | integer | Models that failed validation. |
| total_tests | integer | Total test assertions found. |
| total_contracts | integer | Total data contracts generated. |
| total_warnings | integer | Total warnings across all models. |
| validations[].model | string | Model name. |
| validations[].present_in_dbt | boolean | Whether the model exists in the dbt project. |
| validations[].present_in_rocky | boolean | Whether the model exists in the Rocky project. |
| validations[].compile_ok | boolean | Whether the Rocky model compiles successfully. |
| validations[].dbt_description | string or absent | Description from dbt YAML config. |
| validations[].rocky_intent | string or absent | Intent from Rocky TOML config. |
| validations[].test_count | integer | Number of tests for this model. |
| validations[].contracts_generated | integer | Number of contracts generated. |
| validations[].warnings | array of strings | Validation warnings. |

Runs conformance tests against a warehouse adapter.

{
  "adapter": "duckdb",
  "sdk_version": "0.1.0",
  "tests_run": 26,
  "tests_passed": 22,
  "tests_failed": 0,
  "tests_skipped": 4,
  "results": [
    {
      "name": "create_table",
      "category": "ddl",
      "status": "passed",
      "duration_ms": 12
    },
    {
      "name": "workspace_bindings",
      "category": "governance",
      "status": "skipped",
      "message": "not supported by duckdb adapter",
      "duration_ms": 0
    }
  ]
}

Field reference:

| Field | Type | Description |
| --- | --- | --- |
| adapter | string | Adapter name being tested. |
| sdk_version | string | Adapter SDK version. |
| tests_run | integer | Total tests executed. |
| tests_passed | integer | Tests that passed. |
| tests_failed | integer | Tests that failed. |
| tests_skipped | integer | Tests skipped (unsupported features). |
| results[].name | string | Test name. |
| results[].category | string | Test category ("connection", "ddl", "dml", "query", "types", "dialect", "governance", "discovery", "batch_checks"). |
| results[].status | string | "passed", "failed", or "skipped". |
| results[].message | string or absent | Error or skip reason. |
| results[].duration_ms | integer | Test duration in milliseconds. |

Lists all configured lifecycle hooks.

{
  "hooks": [
    {
      "event": "on_pipeline_start",
      "command": "scripts/notify.sh",
      "timeout_ms": 5000,
      "on_failure": "warn",
      "env_keys": ["SLACK_WEBHOOK_URL"]
    },
    {
      "event": "on_materialize_error",
      "command": "scripts/alert.sh",
      "timeout_ms": 10000,
      "on_failure": "error",
      "env_keys": ["PAGERDUTY_API_KEY"]
    }
  ],
  "total": 2
}

Field reference:

| Field | Type | Description |
| --- | --- | --- |
| hooks[].event | string | Hook event name (e.g., "on_pipeline_start", "on_materialize_error"). |
| hooks[].command | string | Shell command to execute. |
| hooks[].timeout_ms | integer | Timeout in milliseconds. |
| hooks[].on_failure | string | Failure behavior: "warn" or "error". |
| hooks[].env_keys | array of strings | Environment variable keys passed to the hook. |
| total | integer | Total number of configured hooks. |

Fires a synthetic test event to validate hook scripts.

{
  "event": "pipeline_start",
  "status": "continue",
  "result": "HookResult::Continue"
}

Field reference:

| Field | Type | Description |
| --- | --- | --- |
| event | string | The event that was tested. |
| status | string | Result: "continue", "abort", or "no_hooks". |
| message | string or absent | Explanation when no hooks are configured. |
| result | string or absent | Debug rendering of the hook result. |

rocky ai

Generates a model from a natural language description.

{
  "version": "0.3.0",
  "command": "ai",
  "intent": "monthly revenue by customer, joining orders and refunds",
  "format": "rocky",
  "name": "fct_monthly_revenue_by_customer",
  "source": "SELECT\n  o.customer_id,\n  DATE_TRUNC('month', o.order_date) AS revenue_month,\n  SUM(o.total_amount) - COALESCE(SUM(r.refund_amount), 0) AS net_revenue\nFROM ref('stg_orders') o\nLEFT JOIN ref('stg_refunds') r ON o.order_id = r.order_id\nGROUP BY 1, 2",
  "attempts": 1
}

Field reference:

| Field | Type | Description |
| --- | --- | --- |
| intent | string | The natural language intent provided. |
| format | string | Output format ("rocky" or "sql"). |
| name | string | Generated model name. |
| source | string | Generated SQL source code. |
| attempts | integer | Number of generation attempts (retries on validation failure). |

rocky ai-sync

Detects schema changes and proposes intent-guided model updates.

{
  "version": "0.3.0",
  "command": "ai-sync",
  "proposals": [
    {
      "model": "fct_revenue",
      "intent": "Add discount_pct to revenue calculation",
      "diff": "- SUM(o.total_amount) AS net_revenue\n+ SUM(o.total_amount * (1 - o.discount_pct)) AS net_revenue",
      "proposed_source": "SELECT ..."
    }
  ]
}

Field reference:

| Field | Type | Description |
| --- | --- | --- |
| proposals[].model | string | Model name. |
| proposals[].intent | string | Description of the proposed change. |
| proposals[].diff | string | Unified diff of the proposed SQL change. |
| proposals[].proposed_source | string | Full proposed SQL source. |

rocky ai-explain

Generates natural language intent descriptions from existing model SQL.

{
  "version": "0.3.0",
  "command": "ai-explain",
  "explanations": [
    {
      "model": "fct_revenue",
      "intent": "Calculates monthly net revenue per customer by joining orders with refunds.",
      "saved": true
    },
    {
      "model": "dim_customers",
      "intent": "Customer dimension combining profile data with computed lifetime metrics.",
      "saved": false
    }
  ]
}

Field reference:

| Field | Type | Description |
| --- | --- | --- |
| explanations[].model | string | Model name. |
| explanations[].intent | string | Generated intent description. |
| explanations[].saved | boolean | Whether the intent was saved to the model’s TOML config. |

rocky ai-test

Generates test assertions from model intent and SQL logic.

{
  "version": "0.3.0",
  "command": "ai-test",
  "results": [
    {
      "model": "fct_revenue",
      "tests": [
        {
          "name": "net_revenue_is_not_negative",
          "description": "Net revenue should never be negative after refunds",
          "sql": "SELECT COUNT(*) FROM ref('fct_revenue') WHERE net_revenue < 0"
        },
        {
          "name": "customer_id_not_null",
          "description": "Every revenue row must have a customer"
        }
      ],
      "saved": false
    }
  ]
}

Field reference:

| Field | Type | Description |
| --- | --- | --- |
| results[].model | string | Model name. |
| results[].tests[].name | string | Test assertion name. |
| results[].tests[].description | string | Human-readable description. |
| results[].tests[].sql | string or absent | SQL assertion query, when applicable. |
| results[].saved | boolean | Whether the tests were saved to disk. |