The schema of a SALT-sealed accounting archive. Published openly because the verification model depends on it.
SALT's central claim is that anyone can verify a sealed archive without proprietary tools. That claim is only meaningful if the schema is documented and freely available — otherwise a reviewer would still depend on Sealed Ledger to interpret what's inside.
So we publish it. Every table, column, index, and full-text search definition that's part of the query contract. A SALT-sealed bundle is a SQLite 3 database; a reviewer with sqlite3, openssl, this page, and the schema file has everything they need.
Schema versions are recorded in the _schema table inside every bundle so a reviewer can always tell which version a particular archive uses.
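Reading the version from Python's built-in sqlite3 module can be sketched as follows. This is an illustration only: it deliberately selects every column rather than guessing column names, since the exact shape of _schema should be confirmed against the published schema file (.schema _schema in the sqlite3 shell).

```python
import sqlite3

def read_schema_version(db_path: str) -> list:
    """Return the row(s) of the _schema table so a reviewer can see
    which schema version this bundle uses. Column names are not
    assumed; SELECT * returns whatever the published schema defines."""
    con = sqlite3.connect(db_path)
    try:
        return con.execute("SELECT * FROM _schema").fetchall()
    finally:
        con.close()
```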
Tables prefixed with an underscore (_schema, _company, _capture, _seal, _integrity_checks, _diagnostics, _field_provenance) are reserved for SALT internal use — capture metadata, integrity checks, and the cryptographic seal. Tables without the underscore prefix carry source-system data.
Every row in every data table includes four provenance columns:
_source_system — originating platform identifier (e.g. qbo, qbd)
_source_record_hash — SHA-256 of the canonicalized source record
_raw_json — verbatim source-system payload for that record
_raw_json_sha256 — SHA-256 of _raw_json
These columns make tampering detectable at the row level. Any modification to a row's data without a corresponding update to its provenance hashes — which would itself invalidate the bundle's seal — is immediately identifiable.
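The row-level check can be sketched with Python's standard sqlite3 and hashlib modules. A hedged illustration: the table name (invoices) is one example data table, and it assumes _raw_json_sha256 is the SHA-256 of the exact _raw_json text; confirm the canonicalization details against the published schema before relying on it.

```python
import hashlib
import sqlite3

def check_raw_json_hashes(db_path: str, table: str = "invoices") -> list:
    """Recompute SHA-256 over each row's _raw_json (assumed stored as TEXT)
    and compare it to the recorded _raw_json_sha256.
    Returns the ids of any rows whose hashes do not match."""
    con = sqlite3.connect(db_path)
    mismatched = []
    try:
        for row_id, raw_json, recorded in con.execute(
            f"SELECT id, _raw_json, _raw_json_sha256 FROM {table}"
        ):
            recomputed = hashlib.sha256(raw_json.encode("utf-8")).hexdigest()
            if recomputed != recorded:
                mismatched.append(row_id)
    finally:
        con.close()
    return mismatched
```

A tampered row shows up immediately, because editing _raw_json without recomputing _raw_json_sha256 breaks the per-row hash, and recomputing it would break the bundle's seal.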
All monetary values are stored as integer cents, with a _cents suffix on the column name. This eliminates floating-point rounding artifacts and makes equality comparisons exact.
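The motivation is easy to demonstrate: accumulating dollar amounts as binary floats drifts, while integer cents compare exactly. A minimal Python illustration:

```python
# Floats: three $0.10 line items do not sum to exactly $0.30
# under IEEE-754 doubles.
float_total = 0.10 + 0.10 + 0.10
print(float_total == 0.30)          # False

# Integer cents: the same amounts sum and compare exactly.
cents_total = 10 + 10 + 10
print(cents_total == 30)            # True

# Convert to dollars only at the display edge.
print(f"${cents_total / 100:.2f}")  # $0.30
```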
Source-system records that are referenced by transactions but no longer exist as full master-data entries are recorded as tombstones: rows with is_tombstone = 1 and minimal data. This preserves referential integrity while making the gap visible to reviewers.
Records marked inactive in the source system are preserved in full and flagged with is_active = 0. They are never deleted from the archive; they remain queryable.
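Both flags are ordinary columns, so a reviewer can surface the gaps directly. A sketch, using customers as a hypothetical example table; any data table carrying is_tombstone and is_active works the same way:

```python
import sqlite3

def review_gaps(db_path: str, table: str = "customers") -> dict:
    """Count tombstoned and inactive rows so the gaps and flagged
    records in a table are visible at a glance."""
    con = sqlite3.connect(db_path)
    try:
        tombstones = con.execute(
            f"SELECT COUNT(*) FROM {table} WHERE is_tombstone = 1"
        ).fetchone()[0]
        inactive = con.execute(
            f"SELECT COUNT(*) FROM {table} WHERE is_active = 0"
        ).fetchone()[0]
        return {"tombstones": tombstones, "inactive": inactive}
    finally:
        con.close()
```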
The schema is organized into six logical groups:

1. Internal tables: capture metadata, integrity checks, and the cryptographic seal that binds the archive together. Every reserved (underscore-prefixed) table lives here.
2. Master data: reference entities carried forward from the source system. Each table mirrors the source-system shape with provenance columns added.
3. Transactions: one table per source-system transaction type. Reviewers can query a single transaction type directly, or use the unified transaction_index to query across all types.
4. Lines and relationships: polymorphic line items, linked-transaction relationships, the journal-line spine (debits and credits at the line level), and the unified transaction index that lets a reviewer query every transaction regardless of type.
5. Derived datasets: materialized at capture time so reviewers don't have to recompute them: open AR/AP with aging, duplicate clusters, full source-system report payloads, the migration-plan workspace, and the source-system audit log.
6. Full-text search: FTS5 virtual tables that let reviewers search transactions, master data, and line items by free text. SQLite generates internal storage tables behind each virtual table automatically — those are implementation details, not part of the query contract.
The complete DDL — every table, column, index, and constraint — is available as a single SQL file. It is taken verbatim from the Sealed Ledger codebase and is the same DDL from which every SALT bundle's snapshot.db file is constructed. Save it alongside any bundle you're verifying.
A SALT bundle's snapshot.db file is a standard SQLite 3 database. Open it with any SQLite-compatible tool — sqlite3 on the command line, DB Browser for SQLite, DBeaver, or your tool of choice. No special drivers or libraries required.
-- Identify the archive
SELECT * FROM _company;
SELECT * FROM _capture;
-- Inspect the seal
SELECT * FROM _seal;
-- Surface any failed integrity checks
SELECT check_name, status, severity, actual_value
FROM _integrity_checks
WHERE status != 'pass';
-- See every transaction in one query, regardless of type
SELECT entity_type, txn_date, doc_number, total_cents
FROM transaction_index
ORDER BY txn_date DESC
LIMIT 100;
-- Open AR by aging bucket
SELECT age_bucket,
COUNT(*) AS items,
SUM(balance_cents) / 100.0 AS total_dollars
FROM open_ar_items
WHERE balance_cents > 0
GROUP BY age_bucket
ORDER BY MIN(days_overdue);
-- Spot-check provenance hashes for a sample of rows
-- (recompute SHA-256 over _raw_json outside SQL and confirm
--  it matches the recorded _raw_json_sha256)
SELECT id,
_raw_json_sha256 AS recorded_hash,
_source_record_hash AS canonical_hash
FROM invoices
LIMIT 5;
Three FTS5 virtual tables make it easy to search the archive without knowing exact field names:
-- Find any transaction mentioning "banjo" anywhere in
-- doc number, party name, or memo
SELECT * FROM search_transactions
WHERE search_transactions MATCH 'banjo'
LIMIT 20;

-- Find any master-data entity (account, customer, vendor, item)
-- mentioning "shipping"
SELECT * FROM search_lists
WHERE search_lists MATCH 'shipping'
LIMIT 20;

-- Find any line item (transaction line) mentioning "warranty"
SELECT * FROM search_lines
WHERE search_lines MATCH 'warranty'
LIMIT 20;
If you compare the live snapshot.db schema (via sqlite3 snapshot.db .schema) against the published schema.sql, you will see additional objects in the live database that are not defined in the published file. These are auto-generated by SQLite and are not part of SALT's contract:
search_transactions_data, search_transactions_idx, search_transactions_content, search_transactions_docsize, search_transactions_config — and the equivalents for search_lists and search_lines. These are FTS5's internal storage tables, generated automatically by each CREATE VIRTUAL TABLE … USING fts5(…) statement. They contain inverted indexes and segment metadata, not accounting data, and they are not queryable in any meaningful sense. Use the three named virtual tables instead.

sqlite_stat1 and sqlite_stat4 — query-planner statistics populated by SQLite's ANALYZE command. They affect query performance but contain no accounting data.

sqlite_sequence — internal counter for AUTOINCREMENT columns.

A clean diff that ignores those should produce no output. The published schema is the contract; the auto-generated artifacts are SQLite's implementation.
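One way to script that clean diff is to drop the auto-generated object names from the live schema dump before comparing. A Python sketch; the name list mirrors the objects called out above, so extend it if a future schema version adds more virtual tables:

```python
# FTS5 shadow-table prefixes and other SQLite-generated objects that are
# not part of the published schema contract.
AUTO_GENERATED_PREFIXES = (
    "search_transactions_",
    "search_lists_",
    "search_lines_",
)
AUTO_GENERATED_NAMES = {"sqlite_stat1", "sqlite_stat4", "sqlite_sequence"}

def is_contract_object(name: str) -> bool:
    """True if a schema object belongs to the published contract,
    i.e. it is not one of SQLite's auto-generated artifacts."""
    if name in AUTO_GENERATED_NAMES:
        return False
    return not name.startswith(AUTO_GENERATED_PREFIXES)
```

Feed it the names from SELECT name FROM sqlite_master on the live snapshot.db; the surviving set should exactly match the objects defined in the published schema.sql.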
For full cryptographic verification of the seal itself — recomputing the database hash and validating the RFC 3161 timestamp — see the verification procedure on the SALT white paper page.