diff --git a/.codex/agents/document-reviewer.toml b/.codex/agents/document-reviewer.toml new file mode 100644 index 00000000..131904df --- /dev/null +++ b/.codex/agents/document-reviewer.toml @@ -0,0 +1,93 @@ +description = "Reviews PowerSync documentation against writing standards, technical accuracy, and Mintlify formatting. Use when asked to review, audit, or quality-check documentation pages or PRs." +developer_instructions = """ +# PowerSync Documentation Reviewer + +You are an experienced, pragmatic technical writer reviewing PowerSync documentation. Your job is to help maintain accuracy, clarity, and consistency — not to approve content unconditionally. + +**Rule #1**: If you want an exception to any rule, stop and get explicit permission. Breaking the letter or spirit of the rules is failure. + +## Working Relationship + +- Push back on content that violates standards — cite specific rules when you do +- Never rubber-stamp content. Give honest technical judgment +- Ask for clarification rather than assuming intent +- Never lie, guess, or make up information about the product +- If you're unsure whether something is technically accurate, say so + +## Review Checklist + +For each piece of content, check the following areas: + +### 1. Frontmatter + +- `title` is present and uses Title Case +- `description` is present, concise, and does not duplicate the opening paragraph +- No other required fields are missing + +### 2. Writing Standards + +- Second person ("you") throughout +- Active voice, present tense +- No promotional or marketing language ("breathtaking," "robust," "stands as a testament," etc.) +- No editorializing ("it's important to note," "in conclusion") +- No filler words in titles or descriptions ("Comprehensive," "Complete," "Significant") +- No "local-first" or "offline-first" — replace with outcome language: "responds instantly," "stays fully functional in poor network conditions," "responsive" +- No excessive use of "moreover," "furthermore," "additionally" +- Bold used sparingly — only for terms being defined or critical distinctions +- No em-dash connectors between clauses — two sentences instead +- Concepts written out in full sentences, not comma-separated shorthand + +### 3. Headings + +- Title Case throughout +- Hierarchy starts at H2 (H1 is the page title) +- No verb-first headings unless a procedural step + +### 4. Terminology + +| Use | Avoid | |-----|-------| | sync | synchronization | | Postgres | PostgreSQL | | partial sync | dynamic partial replication | | PowerSync Service | powersync service | | Sync Rules | sync rules | | Sync Streams | sync streams | + +### 5. Code Examples + +- Language tag present on every code block +- No aliases in SQL unless required (self-joins, ambiguous columns) +- Realistic data — not `foo`, `bar`, `example.com` +- No real API keys or secrets +- No filename shown on Cloud/dashboard examples (only on self-hosted) + +### 6. Mintlify Components + +- `<Steps>` for sequential procedures +- `<Tabs>` for platform-specific content +- SDK tab order: JS → Dart → Kotlin → Swift → .NET → Rust +- `<Frame>` wrapping all images +- Callouts used appropriately (`<Note>`, `<Tip>`, `<Warning>`, `<Info>`, `<Check>`) + +### 7. Links + +- Internal links use relative paths (`/sync/streams/overview`, not absolute URLs) +- No unverified external links + +### 8.
Sync Streams and Sync Rules + +- Sync Streams are the default — flag any new content that teaches or adds Sync Rules examples +- If existing content shows both side by side, verify the examples return the same data with no mismatched filters + +## Output Format + +For each issue found, state: +1. The specific rule violated +2. The problematic text (quoted) +3. A suggested fix + +Group by section. Be concise — one line per issue where possible. + +At the end, give an overall assessment: **Approve**, **Approve with minor fixes**, or **Needs revision**.""" +name = "document-reviewer" diff --git a/.github/vale/config/vocabularies/PowerSync/accept.txt b/.github/vale/config/vocabularies/PowerSync/accept.txt index 47081d17..6b15e3a5 100644 --- a/.github/vale/config/vocabularies/PowerSync/accept.txt +++ b/.github/vale/config/vocabularies/PowerSync/accept.txt @@ -236,6 +236,7 @@ CloudWatch Cocoapods Cognito Coolify +Convex cron CSQLite Cursor @@ -310,4 +311,4 @@ Yjs Zod # Package names -drift_sqlite_async \ No newline at end of file +drift_sqlite_async diff --git a/architecture/powersync-service.mdx b/architecture/powersync-service.mdx index 92c6a181..518baad6 100644 --- a/architecture/powersync-service.mdx +++ b/architecture/powersync-service.mdx @@ -3,7 +3,7 @@ title: "PowerSync Service" description: "Understand the PowerSync Service architecture and how it replicates data and delivers real-time sync." --- -When we say "PowerSync instance" we are referring to an instance of the [PowerSync Service](https://github.com/powersync-ja/powersync-service), which is the server-side component of the sync engine responsible for the _read path_ from the source database to client-side SQLite databases: The primary purposes of the PowerSync Service are (1) replicating data from your source database (Postgres, MongoDB, MySQL, SQL Server), and (2) streaming data to clients. Both of these happen based on your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)). +When we say "PowerSync instance" we are referring to an instance of the [PowerSync Service](https://github.com/powersync-ja/powersync-service), which is the server-side component of the sync engine responsible for the _read path_ from the source database to client-side SQLite databases: The primary purposes of the PowerSync Service are (1) replicating data from your source database (Postgres, MongoDB, MySQL, SQL Server or Convex), and (2) streaming data to clients. Both of these happen based on your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)). ## Bucket System @@ -62,7 +62,7 @@ When a change occurs in the source database that affects a certain bucket (based ## Bucket Storage -The PowerSync Service persists the bucket state in durable storage: there is a pluggable storage layer for bucket data, and MongoDB and Postgres are currently supported as _bucket storage_ databases. The _bucket storage_ database is separate from the connection to your _source database_ (Postgres, MongoDB, MySQL or SQL Server). Our cloud-hosting offering (PowerSync Cloud) uses MongoDB Atlas as the _bucket storage_ database. +The PowerSync Service persists the bucket state in durable storage: there is a pluggable storage layer for bucket data, and MongoDB and Postgres are currently supported as _bucket storage_ databases. The _bucket storage_ database is separate from the connection to your _source database_ (Postgres, MongoDB, MySQL, SQL Server or Convex). 
Our cloud-hosting offering (PowerSync Cloud) uses MongoDB Atlas as the _bucket storage_ database. Persisting the bucket state in a database is also part of how PowerSync achieves high scalability: it means that the PowerSync Service can have a low memory footprint even as you scale to very large volumes of synced data and users/clients. @@ -87,7 +87,7 @@ When the PowerSync Service replicates data from the source database, it: Whenever a new version of Sync Streams/Sync Rules is deployed, initial replication takes place by means of taking a snapshot of all tables/collections they reference. -After that, data is incrementally replicated using a change data capture stream (the specific mechanism depends on the source database type: Postgres logical replication, MongoDB change streams, the MySQL binlog, or SQL Server Change Data Capture). +After that, data is incrementally replicated using a change data capture stream. The specific mechanism depends on the source database type: Postgres logical replication, MongoDB change streams, the MySQL binlog, SQL Server Change Data Capture, or Convex document deltas. ## Streaming Sync @@ -111,4 +111,3 @@ For more details on exactly how streaming sync works, see [PowerSync Protocol](/ The repo for the PowerSync Service can be found here: - diff --git a/configuration/app-backend/client-side-integration.mdx b/configuration/app-backend/client-side-integration.mdx index 2c9c4186..46f65e8b 100644 --- a/configuration/app-backend/client-side-integration.mdx +++ b/configuration/app-backend/client-side-integration.mdx @@ -10,7 +10,7 @@ After you've [instantiated](/intro/setup-guide#instantiate-the-powersync-databas | Purpose | Description | |---------|-------------| -| **Uploading mutations to your backend:** | Mutations that are made to the client-side SQLite database are uploaded to your backend application, where you control how they're applied to your backend source database (Postgres, MongoDB, MySQL, or SQL Server). This is how PowerSync achieves bi-directional syncing of data: The [PowerSync Service](/architecture/powersync-service) provides the _server-to-client read path_ based on your [Sync Streams or Sync Rules (legacy)](/sync/overview), and the _client-to-server write path_ goes via your backend. | +| **Uploading mutations to your backend:** | Mutations that are made to the client-side SQLite database are uploaded to your backend application, where you control how they're applied to your backend source database (Postgres, MongoDB, MySQL, SQL Server, or Convex). This is how PowerSync achieves bi-directional syncing of data: The [PowerSync Service](/architecture/powersync-service) provides the _server-to-client read path_ based on your [Sync Streams or Sync Rules (legacy)](/sync/overview), and the _client-to-server write path_ goes via your backend. | | **Authentication integration:** (optional) | PowerSync uses JWTs for authentication between the Client SDK and PowerSync Service. Some [authentication providers](/configuration/auth/overview#common-authentication-providers) generate JWTs for users which PowerSync can verify directly. For others, some code must be [added to your application backend](/configuration/auth/custom) to generate the JWTs. | @@ -180,4 +180,3 @@ For an example implementation of a PowerSync 'backend connector', see the SDK gu ## More Examples For additional implementation examples, see the [Examples](/intro/examples) section. 
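To make the connector contract concrete, here is a minimal sketch of a backend connector, assuming the `@powersync/web` JS SDK. The backend URL, token retrieval, and upload payload shape are illustrative placeholders, not a prescribed API.

```typescript
import { AbstractPowerSyncDatabase, PowerSyncBackendConnector } from '@powersync/web';

// Placeholder values: substitute your own PowerSync instance URL and backend API.
const POWERSYNC_URL = 'https://<instance-id>.powersync.journeyapps.com';
const BACKEND_UPLOAD_URL = 'https://api.yourapp.com/powersync/upload';

export class Connector implements PowerSyncBackendConnector {
  async fetchCredentials() {
    // Obtain a JWT that the PowerSync Service can verify (see the auth docs).
    const token = await this.getToken();
    return { endpoint: POWERSYNC_URL, token };
  }

  async uploadData(database: AbstractPowerSyncDatabase): Promise<void> {
    // Pull the next queued transaction of client-side writes, if any.
    const transaction = await database.getNextCrudTransaction();
    if (!transaction) return;

    // Forward the queued writes to your backend, which validates them and
    // applies them to the source database. Throwing on failure leaves the
    // transaction in the upload queue so the SDK retries it later.
    const response = await fetch(BACKEND_UPLOAD_URL, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        writes: transaction.crud.map((op) => ({
          op: op.op, // 'PUT' | 'PATCH' | 'DELETE'
          table: op.table,
          id: op.id,
          data: op.opData
        }))
      })
    });
    if (!response.ok) {
      throw new Error(`Upload failed with status ${response.status}`);
    }

    // Only mark the transaction complete once the backend has durably applied it.
    await transaction.complete();
  }

  private async getToken(): Promise<string> {
    // Placeholder: fetch a token from your auth provider or backend.
    return 'dev-token';
  }
}
```

The key design point is that `transaction.complete()` is called only after the backend confirms the write, so failed uploads stay queued and are retried rather than silently lost.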
- diff --git a/configuration/app-backend/setup.mdx b/configuration/app-backend/setup.mdx index 6825690a..58838647 100644 --- a/configuration/app-backend/setup.mdx +++ b/configuration/app-backend/setup.mdx @@ -8,7 +8,7 @@ PowerSync generally assumes that you have some kind of "backend application" as When you integrate PowerSync into your app project, PowerSync relies on that "backend application" for a few potential purposes: -1. **Allowing client-side mutations to be uploaded** and [applied](/handling-writes/writing-client-changes) to the backend source database (Postgres, MongoDB, MySQL, or SQL Server). When you write to the client-side SQLite database provided by PowerSync, those mutations are also placed into an [upload queue](/architecture/client-architecture#writing-data-via-sqlite-database-and-upload-queue). The PowerSync Client SDK manages uploading of those mutations to your backend using the `uploadData()` function that you define in your [client-side](/configuration/app-backend/client-side-integration) _backend connector_ implementation. Your `uploadData()` implementation should call your backend application API to apply the mutations to your source database. The reason why we designed PowerSync this way is to give you full control over things like server-side data validation and authorization of mutations, while PowerSync itself requires minimal permissions. +1. **Allowing client-side mutations to be uploaded** and [applied](/handling-writes/writing-client-changes) to the backend source database (Postgres, MongoDB, MySQL, SQL Server, or Convex). When you write to the client-side SQLite database provided by PowerSync, those mutations are also placed into an [upload queue](/architecture/client-architecture#writing-data-via-sqlite-database-and-upload-queue). The PowerSync Client SDK manages uploading of those mutations to your backend using the `uploadData()` function that you define in your [client-side](/configuration/app-backend/client-side-integration) _backend connector_ implementation. Your `uploadData()` implementation should call your backend application API to apply the mutations to your source database. The reason why we designed PowerSync this way is to give you full control over things like server-side data validation and authorization of mutations, while PowerSync itself requires minimal permissions. 2. **Authentication integration (optional):** _If_ you are implementing custom authentication (see below), your backend is responsible for securely generating the [JWTs](/configuration/auth/overview) used by the PowerSync Client SDK to authenticate with the [PowerSync Service](/architecture/powersync-service). If you will only use the backend for applying mutations and not for authentication, you can also use some kind of data API service or API platform (e.g. Hasura). @@ -56,4 +56,4 @@ You can use a serverless functions system like Azure Functions, AWS Lambda, Goog ### For MongoDB: PowerSync Hosted/Managed Option -For developers using MongoDB as a backend source database, an alternative option is to use CloudCode, a serverless cloud functions environment provided by a sibling product of PowerSync, that runs in the same cloud environment as PowerSync Cloud. We have a template that you can use as a turnkey starting point. See the [documentation](/configuration/app-backend/cloudcode). 
\ No newline at end of file +For developers using MongoDB as a backend source database, an alternative option is to use CloudCode, a serverless cloud functions environment provided by a sibling product of PowerSync, that runs in the same cloud environment as PowerSync Cloud. We have a template that you can use as a turnkey starting point. See the [documentation](/configuration/app-backend/cloudcode). diff --git a/configuration/source-db/connection.mdx b/configuration/source-db/connection.mdx index 287f844c..c540b98f 100644 --- a/configuration/source-db/connection.mdx +++ b/configuration/source-db/connection.mdx @@ -1,13 +1,13 @@ --- title: "Source Database Connection" -description: "Connect a PowerSync Cloud instance to your Postgres, MongoDB, MySQL or SQL Server source database." +description: "Connect a PowerSync Cloud instance to your Postgres, MongoDB, MySQL, SQL Server or Convex source database." --- import SupabaseConnection from '/snippets/supabase-database-connection.mdx'; Each database provider has their quirks when it comes to specifying connection details, so we have documented database-specific and provider-specific instructions below: -Jump to: [Postgres](#postgres-provider-specifics) | [MongoDB](#mongodb-specifics) | [MySQL](#mysql-specifics) | [SQL Server](#sql-server-specifics) +Jump to: [Postgres](#postgres-provider-specifics) | [MongoDB](#mongodb-specifics) | [MySQL](#mysql-specifics) | [SQL Server](#sql-server-specifics) | [Convex](#convex-specifics) The below instructions are currently written for PowerSync Cloud. For self-hosted PowerSync instances, specify database connection details in the config file as documented [here](/configuration/powersync-service/self-hosted-instances#source-database-connections). @@ -279,4 +279,31 @@ PowerSync deploys and configures an isolated cloud environment for you, which ca Also see: -- [SQL Server Source Database Setup](/configuration/source-db/setup#sql-server) \ No newline at end of file +- [SQL Server Source Database Setup](/configuration/source-db/setup#sql-server) + + +## Convex Specifics + +Convex support is currently in an [Open Alpha release](/resources/feature-status). + +Before connecting PowerSync, add the required `powersync_checkpoints` table and mutation to your Convex deployment. See [Convex Source Database Setup](/configuration/source-db/setup#convex). + +1. In the [PowerSync Dashboard](https://dashboard.powersync.com/), select your project and instance and go to **Database Connections**. +2. Click **Connect to Source Database** and ensure the **Convex** tab is selected. +3. Fill in your Convex connection details: + 1. **Name** can be any name for the connection. + 2. **Deployment URL** is your Convex deployment URL, for example `https://.convex.cloud`. + 3. **Deploy key** is the deploy key for the Convex deployment. In the Convex Dashboard, go to **Settings** → **General** and generate a deploy key. + 4. **Polling interval** controls how often PowerSync polls Convex for document deltas. The default is `1000` milliseconds. +4. Click **Test Connection** and fix any errors. +5. Click **Save Connection**. + +PowerSync deploys and configures an isolated cloud environment for you, which can take a few minutes to complete. + + + Convex deploy keys grant full read and write access to your Convex data. Use a deploy key for the correct environment, store it as a secret, and rotate it if it is exposed. 
+ + +Also see: +- [Convex Source Database Setup](/configuration/source-db/setup#convex) +- [Type Mapping](/sync/types#convex-type-mapping) diff --git a/configuration/source-db/setup.mdx b/configuration/source-db/setup.mdx index 367c9707..072749dc 100644 --- a/configuration/source-db/setup.mdx +++ b/configuration/source-db/setup.mdx @@ -1,9 +1,9 @@ --- -title: "Source Database Setup" -description: "Prepare your Postgres, MongoDB, MySQL or SQL Server database for PowerSync replication." +title: 'Source Database Setup' +description: 'Prepare your Postgres, MongoDB, MySQL, SQL Server or Convex database for PowerSync replication.' --- -Jump to: [Postgres](#postgres) | [MongoDB](#mongodb) | [MySQL](#mysql) | [SQL Server](#sql-server) +Jump to: [Postgres](#postgres) | [MongoDB](#mongodb) | [MySQL](#mysql) | [SQL Server](#sql-server) | [Convex](#convex) import PostgresPowerSyncUser from '/snippets/postgres-powersync-user.mdx'; import PostgresPowerSyncPublication from '/snippets/postgres-powersync-publication.mdx'; @@ -11,9 +11,7 @@ import NeonDatabaseSetup from '/snippets/neon-database-setup.mdx'; ## Postgres - - **Version compatibility**: PowerSync requires Postgres version 11 or greater. - +**Version compatibility**: PowerSync requires Postgres version 11 or greater. Configuring your Postgres database for PowerSync generally involves three tasks: @@ -125,6 +123,7 @@ We have documented steps for some specific hosting providers: Also see our [Neon integration guide](/integrations/neon). + Fly Postgres is a [Fly](https://fly.io/) app with [flyctl](https://fly.io/docs/flyctl/) sugar on top to help you bootstrap and manage a database cluster for your apps. @@ -207,6 +206,7 @@ ALTER SYSTEM SET max_wal_senders = 10; See [Xata's documentation](https://xata.io/docs/tutorials/streaming-replication) for more information on setting up logical replication with Xata. + @@ -224,8 +224,7 @@ If you are managing Postgres yourself, set `wal_level = logical` in your config {" "} ![](/images/setup-6.avif) -Alternatively, you can use the below SQL commands to check and ensure logical replication -is enabled: +Alternatively, you can use the below SQL commands to check and ensure logical replication is enabled: ```sql -- Check the replication type @@ -260,13 +259,13 @@ Due to the logical replication requirement, not all Postgres hosting providers a - [Postgres Maintenance: Logical Replication Slots](/configuration/source-db/postgres-maintenance) - ## MongoDB - - **Version compatibility**: PowerSync requires MongoDB version 6.0 or greater. - -For more information on migrating from MongoDB Atlas Device Sync to PowerSync, see our [migration guide](/migration-guides/atlas-device-sync). +**Version compatibility**: PowerSync requires MongoDB version 6.0 or greater. + + For more information on migrating from MongoDB Atlas Device Sync to PowerSync, see our [migration + guide](/migration-guides/atlas-device-sync). + ### Permissions Required: MongoDB Atlas @@ -325,24 +324,21 @@ db.runCommand( { You can view which collections have the option enabled using: ```js -db.getCollectionInfos().filter( - (c) => c.options?.changeStreamPreAndPostImages?.enabled -); +db.getCollectionInfos().filter((c) => c.options?.changeStreamPreAndPostImages?.enabled); ``` Post-images can be configured for PowerSync instances as follows: - Configure the **Post Images** setting in the database connection configuration in the - [PowerSync Dashboard](https://dashboard.powersync.com/). 
Select your project - and instance and go to **Database Connections** to edit the connection settings. + Configure the **Post Images** setting in the database connection configuration in the [PowerSync + Dashboard](https://dashboard.powersync.com/). Select your project and instance and go to **Database Connections** to + edit the connection settings. + cta="See an example"> Configure `post_images` in the `service.yaml` file. @@ -351,14 +347,11 @@ Post-images can be configured for PowerSync instances as follows: If you need to use private endpoints with MongoDB Atlas, see [Private Endpoints](/configuration/source-db/private-endpoints) (AWS only). - ## MySQL MySQL support is currently in a [Beta release](/resources/feature-status). - - **Version compatibility**: PowerSync requires MySQL version 5.7 or greater. - +**Version compatibility**: PowerSync requires MySQL version 5.7 or greater. PowerSync reads from the MySQL [binary log](https://dev.mysql.com/doc/refman/8.4/en/binary-log.html) (binlog) to replicate changes. We use a modified version of the [Zongji MySQL](https://github.com/powersync-ja/powersync-mysql-zongji) binlog listener to achieve this. @@ -371,7 +364,7 @@ To ensure that PowerSync can read the binary log, you need to configure your MyS - [`enforce_gtid_consistency`](https://dev.mysql.com/doc/refman/8.4/en/replication-options-gtids.html#sysvar_enforce_gtid_consistency): **`ON`**. Enforces GTID consistency. Default is **`OFF`**. - [`gtid_mode`](https://dev.mysql.com/doc/refman/8.4/en/replication-options-gtids.html#sysvar_gtid_mode): **`ON`**. Enables GTID based logging. Default is **`OFF`**. - [`binlog_format`](https://dev.mysql.com/doc/refman/8.4/en/replication-options-binary-log.html#sysvar_binlog_format): **`ROW`**. Sets the binary log format to row-based replication. This is required for PowerSync to correctly replicate changes. Default is **`ROW`**. -- [`binlog_row_image`](https://dev.mysql.com/doc/refman/8.4/en/replication-options-binary-log.html#sysvar_binlog_row_image): **`FULL`**. Captures the complete row data for each change. This is required for PowerSync to correctly replicate changes. Default is **`FULL`**. The `MINIMAL`/`NOBLOB` options will be supported in a future release. +- [`binlog_row_image`](https://dev.mysql.com/doc/refman/8.4/en/replication-options-binary-log.html#sysvar_binlog_row_image): **`FULL`**. Captures the complete row data for each change. This is required for PowerSync to correctly replicate changes. Default is **`FULL`**. The `MINIMAL`/`NOBLOB` options will be supported in a future release. These can be specified in a MySQL [option file](https://dev.mysql.com/doc/refman/8.4/en/option-files.html): @@ -440,10 +433,8 @@ binlog-ignore-db=user_db SQL Server support is currently in a [Beta release](/resources/feature-status). - **Version compatibility**: - - PowerSync requires SQL Server 2019+ or Azure SQL - Database. - - SQL Server support was introduced in version 1.18.1 of the PowerSync Service. + **Version compatibility**: - PowerSync requires SQL Server 2019+ or Azure SQL Database. - SQL Server support was + introduced in version 1.18.1 of the PowerSync Service. PowerSync can replicate data from a change data capture (CDC) enabled SQL Server. The CDC process builds up change tables based on changes to tracked tables, by scanning the SQL Server transaction log on a fixed interval. @@ -548,8 +539,8 @@ END ``` - For Azure SQL Database, the `VIEW SERVER PERFORMANCE STATE` permission is not - available and not required. 
Only `VIEW DATABASE PERFORMANCE STATE` is needed. + For Azure SQL Database, the `VIEW SERVER PERFORMANCE STATE` permission is not available and not required. Only `VIEW + DATABASE PERFORMANCE STATE` is needed. #### 3. Create the PowerSync Checkpoints Table @@ -624,9 +615,13 @@ Recommended Capture Job settings: | `pollinginterval` | 1 second | - For Azure SQL Database, the CDC capture and cleanup jobs are managed automatically. Manual configuration is greatly limited. - See [Azure CDC Customization Limitations](https://learn.microsoft.com/en-us/azure/azure-sql/database/change-data-capture-overview?view=azuresql#cdc-customization). - The main limitation is that the capture job polling interval cannot be modified and is fixed at 20 seconds. It is, however, still possible to [manually trigger](https://learn.microsoft.com/en-us/azure/azure-sql/database/change-data-capture-overview?view=azuresql#manual-cdc-control) the capture job on demand. + For Azure SQL Database, the CDC capture and cleanup jobs are managed automatically. Manual configuration is greatly + limited. See [Azure CDC Customization + Limitations](https://learn.microsoft.com/en-us/azure/azure-sql/database/change-data-capture-overview?view=azuresql#cdc-customization). + The main limitation is that the capture job polling interval cannot be modified and is fixed at 20 seconds. It is, + however, still possible to [manually + trigger](https://learn.microsoft.com/en-us/azure/azure-sql/database/change-data-capture-overview?view=azuresql#manual-cdc-control) + the capture job on demand. #### Latency @@ -643,19 +638,193 @@ During each polling cycle, PowerSync will read a limited number of transactions Increasing this will increase throughput at the cost of increased memory usage. If the volume of transactions being replicated is high, and memory is available, it is recommended to increase this value. - Connection configuration parameters for the PowerSync SQL Server Adapter like `pollingIntervalMs` and `pollingBatchSize` can currently only be set when [self-hosting](/intro/setup-guide#self-hosted-2) PowerSync. - See [SQL Server Additional Configuration](/configuration/source-db/sql-server-additional-configuration#additional-configuration-options) - for more details. - We are planning to expose these settings for SQL Server source database connections in the PowerSync Dashboard for PowerSync Cloud instances. + Connection configuration parameters for the PowerSync SQL Server Adapter like `pollingIntervalMs` and + `pollingBatchSize` can currently only be set when [self-hosting](/intro/setup-guide#self-hosted-2) PowerSync. See [SQL + Server Additional + Configuration](/configuration/source-db/sql-server-additional-configuration#additional-configuration-options) for more + details. We are planning to expose these settings for SQL Server source database connections in the PowerSync + Dashboard for PowerSync Cloud instances. + + +## Convex + + + Convex support is currently in an [Open Alpha release](/resources/feature-status). APIs, configuration, schema-change + handling, metrics, and replication behavior may change before this connector is considered stable. +PowerSync reads Convex data using the Convex Streaming Export API. Initial replication pins a single Convex snapshot cursor and snapshots each selected table at that cursor. Streaming replication then reads the global `document_deltas` stream and filters rows according to your Sync Streams. 
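For intuition, the sketch below shows what this two-phase flow looks like against the Streaming Export API. It is illustrative only: PowerSync performs this internally, and the endpoint parameters and response fields shown here are simplified assumptions rather than a reference implementation.

```typescript
// Illustrative sketch of snapshot-then-deltas replication. Endpoint shapes
// are assumptions for illustration; PowerSync handles all of this for you.
const DEPLOYMENT_URL = 'https://<deployment-name>.convex.cloud'; // placeholder
const DEPLOY_KEY = '<convex-deploy-key>'; // placeholder: keep this secret

async function convexGet(path: string, params: Record<string, string>) {
  const url = new URL(path, DEPLOYMENT_URL);
  for (const [key, value] of Object.entries(params)) url.searchParams.set(key, value);
  const res = await fetch(url, { headers: { Authorization: `Convex ${DEPLOY_KEY}` } });
  if (!res.ok) throw new Error(`Convex export request failed: ${res.status}`);
  return res.json();
}

async function replicate() {
  // Phase 1: initial replication. Snapshot each selected table at a pinned cursor,
  // paging until the snapshot is exhausted (paging omitted here for brevity).
  const snapshot = await convexGet('/api/list_snapshot', { tableName: 'lists' });

  // Phase 2: streaming replication. Poll the global delta stream from the
  // snapshot cursor and filter rows according to the configured Sync Streams.
  let cursor = String(snapshot.snapshot);
  setInterval(async () => {
    const deltas = await convexGet('/api/document_deltas', { cursor });
    for (const doc of deltas.values ?? []) {
      // Apply inserts/updates/deletes for tables referenced by Sync Streams.
    }
    cursor = String(deltas.cursor ?? cursor);
  }, 1000); // matches the default 1000 ms polling interval
}
```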
+ +Convex does not support user-defined database schemas or namespaces in the same way as SQL databases. You do not need to qualify Convex table names in Sync Streams. If you do choose to qualify them, use the default `convex` schema. + +### Connection Requirements + +PowerSync can replicate from Convex Cloud or self-hosted Convex deployments. It requires: + +- A Convex deployment URL. +- A Convex deploy key. In the Convex Dashboard, go to **Settings** → **General** and generate a deploy key for the deployment PowerSync should replicate. +- The `powersync_checkpoints` table and `createCheckpoint` mutation described below. + + + Convex deploy keys grant full read and write access to your Convex data. Use a deploy key for the correct environment, + store it as a secret, and rotate it if it is exposed. + + +### Checkpoint Table + +PowerSync uses a small Convex table to generate write checkpoint markers. Convex table names cannot start with `_`, so the table is named `powersync_checkpoints`. + +Add the table to your Convex schema: + +```typescript convex/schema.ts +import { defineSchema, defineTable } from 'convex/server'; +import { v } from 'convex/values'; + +export default defineSchema({ + // ... your other tables + + powersync_checkpoints: defineTable({ + last_updated: v.float64() + }) +}); +``` + +### Checkpoint Mutation + +Deploy a Convex mutation named `powersync_checkpoints:createCheckpoint`. PowerSync calls this mutation after recording a write checkpoint so the Convex delta stream advances even when the app is otherwise idle. + +```typescript convex/powersync_checkpoints.ts +import { mutation } from './_generated/server'; + +export const createCheckpoint = mutation({ + args: {}, + handler: async (ctx) => { + const existing = await ctx.db.query('powersync_checkpoints').first(); + + if (existing) { + await ctx.db.patch(existing._id, { last_updated: Date.now() }); + } else { + await ctx.db.insert('powersync_checkpoints', { last_updated: Date.now() }); + } + } +}); +``` + +PowerSync excludes `powersync_checkpoints` from replicated source tables. The table exists only to advance the replication cursor for write checkpoint acknowledgements. + +### Client Writes + +PowerSync does not write application data directly to Convex. Your app still needs an upload path that takes queued client-side writes and applies them through Convex mutations. + +In most Convex apps, you already define one or more mutation functions for each writable table. Your PowerSync backend connector can call those same mutations from `uploadData()`. + +If you use Convex Auth tokens directly for PowerSync client authentication, configure PowerSync to accept the `convex` JWT audience. For self-hosted development this is configured in `client_auth.audience`; for PowerSync Cloud you can configure a custom audience in the instance settings. See [Custom Authentication](/configuration/auth/custom). + +### Schema Changes + +PowerSync does not automatically detect Convex schema changes. + +When you add, remove, or change Convex fields or tables: + +1. Update your Convex schema and deploy it. +2. Update and redeploy your Sync Config. + +Convex schema diagnostics can omit fields that have no stored values yet. + +### Int64 and Bytes Values + +PowerSync syncs Convex `Int64` values as base-10 text values and Convex `Bytes` values as base64 text values. + +If you want a Convex `Int64` value to be synced as a SQLite integer, cast it to `INTEGER` in your Sync Streams. 
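On the client you can convert these text representations back when needed. A small sketch, assuming hypothetical `file_size` (Int64) and `thumbnail` (Bytes) columns on a synced row:

```typescript
// Sketch of client-side conversion; column names are illustrative.
// Convex Int64 values arrive as base-10 text; Bytes arrive as base64 text.
type SyncedRow = { file_size: string; thumbnail: string };

function parseInt64(value: string): bigint {
  // Use BigInt to avoid precision loss beyond Number.MAX_SAFE_INTEGER.
  return BigInt(value);
}

function decodeBytes(value: string): Uint8Array {
  // Decode the base64 text back into binary data.
  const binary = atob(value);
  return Uint8Array.from(binary, (c) => c.charCodeAt(0));
}

const row: SyncedRow = { file_size: '9007199254740993', thumbnail: 'iVBORw0KGgo=' };
const size: bigint = parseInt64(row.file_size); // 9007199254740993n
const bytes: Uint8Array = decodeBytes(row.thumbnail);
```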
+ +### Latency + +PowerSync polls the Convex `document_deltas` endpoint for changes. This means there is an inherent latency between a write being committed in Convex and that change being replicated into PowerSync. + +The polling interval defaults to once every 1000ms. When self-hosting PowerSync, you can configure this with the `polling_interval_ms` connection parameter: + +```yaml service.yaml +replication: + connections: + - type: convex + deployment_url: https://.convex.cloud + deploy_key: + polling_interval_ms: 1000 +``` + +Lowering this value can reduce replication latency, but it increases the number of requests made to Convex and the work performed by the PowerSync Service. + +### Mutation Transaction Atomicity + +Convex mutations are ACID transactions. When a mutation writes multiple documents, Convex exposes those writes in `document_deltas` with the same commit timestamp. PowerSync replicates all writes from the same mutation together as one batch, so clients do not observe a partial result from a single Convex mutation. + +### Limitations + +- Convex support is currently limited to the default Convex component. +- Schema changes are not automatically detected. When you change your Convex schema, redeploy your Sync Config so PowerSync can re-evaluate the affected tables and fields. +- Convex `Int64` values sync as text by default. Cast `Int64` fields to `INTEGER` in Sync Streams when you need client-side integer types. +- PowerSync reports time-based replication lag for Convex, but not byte-based lag. + +### Sync Streams Examples + +You do not need to specify a schema when querying Convex tables in Sync Streams. + + + Convex document IDs are generated by Convex and are exposed as `_id`. Clients cannot create Convex IDs before + inserting documents into Convex. PowerSync clients need stable local IDs before writes are uploaded, so use a + client-generated UUID column as the synced `id` and keep `_id` as the Convex server-generated document ID. This is + similar to the pattern described in [Sequential ID Mapping](/client-sdks/advanced/sequential-id-mapping). + + +The client creates a UUID in its local `id` column before the write is uploaded. Your Convex mutation should store that +value in a separate `uuid` field on the Convex document. PowerSync then syncs `uuid AS id` back to the client, so the +client keeps the same stable local ID while Convex keeps its own server-generated `_id`. + +The example below uses one stream for a user's lists and todos: + +- `uuid` as the synced client-side `id` instead of the Convex `_id`. +- `list_uuid` as the synced relationship column instead of the Convex `list_id`. +- `CAST(an_int64_column AS INTEGER)` to sync a Convex `Int64` value as a SQLite integer. +- `substring(auth.user_id(), 1, 32)` to extract the Convex user ID from a Convex Auth JWT subject. Convex Auth subjects include the 32-character user ID followed by `|` and the user session ID. + +```yaml +config: + edition: 3 + +streams: + user_data: + with: + # Extract the Convex user ID from the JWT subject. + # Convex Auth subjects include `[32 character user ID]|[user session ID]`. + user_lists: | + SELECT uuid + FROM lists + WHERE archived != true + AND owner_id = substring(auth.user_id(), 1, 32) + auto_subscribe: true + queries: + - SELECT uuid AS id, name, owner_id FROM lists WHERE uuid IN user_lists + - | + SELECT + -- Use the client-generated todo UUID as the synced `id`. 
+ uuid AS id, + description, + -- Map relationships that use Convex IDs, such as `list_id`, + -- to the related table's local UUID column. + list_uuid, + -- Cast Convex Int64 values to INTEGER when you want a SQLite integer. + CAST(an_int64_column AS INTEGER) AS an_int64_column + FROM todos + WHERE list_uuid IN user_lists +``` + ## Next Step Next, connect PowerSync to your database: - - - - + + diff --git a/handling-writes/writing-client-changes.mdx b/handling-writes/writing-client-changes.mdx index c79e8f62..60e70175 100644 --- a/handling-writes/writing-client-changes.mdx +++ b/handling-writes/writing-client-changes.mdx @@ -17,13 +17,13 @@ Since you get to define the client-side `uploadData()` function as you wish, you You can also use any API style you want — e.g. REST, GraphQL, gRPC, etc. -It's important that your API endpoint be blocking/synchronous with underlying writes to the backend source database (Postgres, MongoDB, MySQL, or SQL Server). +It's important that your API endpoint be blocking/synchronous with underlying writes to the backend source database (Postgres, MongoDB, MySQL, SQL Server, or Convex). In other words, don't place writes into something like a queue for processing later — process them immediately. For more details, see the explainer below. -PowerSync uses a server-authoritative architecture with a checkpoint system for conflict resolution and [consistency](/architecture/consistency). The client advances to a new write checkpoint after uploads have been processed, so if the client believes that the server has written changes into your backend source database (Postgres, MongoDB, MySQL, or SQL Server), but the next checkpoint does not contain your uploaded changes, those changes will be removed from the client. This could manifest as UI glitches for your end-users, where the changes disappear from the device for a few seconds and then re-appear. +PowerSync uses a server-authoritative architecture with a checkpoint system for conflict resolution and [consistency](/architecture/consistency). The client advances to a new write checkpoint after uploads have been processed, so if the client believes that the server has written changes into your backend source database (Postgres, MongoDB, MySQL, SQL Server, or Convex), but the next checkpoint does not contain your uploaded changes, those changes will be removed from the client. This could manifest as UI glitches for your end-users, where the changes disappear from the device for a few seconds and then re-appear. ### Write Operations Recorded on the Client diff --git a/intro/powersync-overview.mdx b/intro/powersync-overview.mdx index 2c453b04..7d997eb7 100644 --- a/intro/powersync-overview.mdx +++ b/intro/powersync-overview.mdx @@ -38,6 +38,8 @@ PowerSync is designed to be backend database agnostic, and supports these source + + ## Supported Client SDKs @@ -53,4 +55,4 @@ PowerSync is also designed to be client-side stack agnostic, and currently has c ## Need Help? -Can't find what you are looking for in these docs? Try **Ask AI** on this site which is trained on all our documentation, repositories and Discord discussions. Also join us on [our community Discord server](https://discord.gg/powersync) where you can browse topics from the PowerSync community and chat with our team. \ No newline at end of file +Can't find what you are looking for in these docs? Try **Ask AI** on this site which is trained on all our documentation, repositories and Discord discussions. 
Also join us on [our community Discord server](https://discord.gg/powersync) where you can browse topics from the PowerSync community and chat with our team. diff --git a/intro/powersync-philosophy.mdx b/intro/powersync-philosophy.mdx index d171877c..c39184fd 100644 --- a/intro/powersync-philosophy.mdx +++ b/intro/powersync-philosophy.mdx @@ -13,7 +13,7 @@ The app is always [fast and responsive](https://www.powersync.com/blog/local-fir PowerSync lets you avoid the complexities of using APIs to move app state [over the network](https://www.powersync.com/blog/escaping-the-network-tarpit). Its goal is to solve the hard problems of keeping data in sync, without getting in your way. -You use a standard Postgres, MongoDB, MySQL, or SQL Server \[[1](#footnotes)\] database on the server, a standard SQLite database on the client, and your [own backend](/configuration/app-backend/setup) to process mutations. PowerSync simply keeps the SQLite database in sync with your backend database. +You use a supported backend source such as Postgres, MongoDB, MySQL, SQL Server, or Convex \[[1](#footnotes)\] on the server, a standard SQLite database on the client, and your [own backend](/configuration/app-backend/setup) to process mutations. PowerSync simply keeps the SQLite database in sync with your backend source. #### State Management @@ -38,7 +38,7 @@ Our goal is also to be stack-agnostic: whether you are switching from MySQL to P #### Simplicity -You use plain Postgres, MongoDB, MySQL, or SQL Server on the server — no extensions, and no significant change in your schema required \[[2](#footnotes)\]. PowerSync [uses](/configuration/source-db/setup) Postgres logical replication, MongoDB change streams, the MySQL binlog, or SQL Server Change Data Capture (CDC) to replicate changes to the [PowerSync Service](/architecture/powersync-service), where data is transformed and partitioned according to [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)), and persisted in a way that allows efficiently streaming incremental changes to each client. +You use a supported backend source on the server, with no significant change in your schema required for most databases \[[2](#footnotes)\]. PowerSync [uses](/configuration/source-db/setup) Postgres logical replication, MongoDB change streams, the MySQL binlog, SQL Server Change Data Capture (CDC), or Convex document deltas to replicate changes to the [PowerSync Service](/architecture/powersync-service), where data is transformed and partitioned according to [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)), and persisted in a way that allows efficiently streaming incremental changes to each client. PowerSync has been used in apps with hundreds of tables. There are no complex migrations to run: You define your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)) and [client-side schema](/intro/setup-guide#define-your-client-side-schema), and the data is automatically kept in sync. If you [change Sync Streams/Rules](/maintenance-ops/implementing-schema-changes), the relevant new set of data is applied atomically on the client. When you do need to make schema changes on the server while still supporting older clients, we [have the processes in place](/maintenance-ops/implementing-schema-changes) to do that without hassle. 
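As an illustration of that simplicity, a client-side schema in the JS SDK is a handful of plain table definitions. A minimal sketch with illustrative table and column names:

```typescript
import { column, Schema, Table } from '@powersync/web';

// Minimal illustrative client-side schema. PowerSync keeps these SQLite
// tables in sync with the data selected by your Sync Streams. Each table
// gets an implicit `id` text column.
const lists = new Table({
  name: column.text,
  owner_id: column.text
});

const todos = new Table({
  description: column.text,
  completed: column.integer,
  list_id: column.text
});

export const AppSchema = new Schema({ lists, todos });
```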
diff --git a/intro/setup-guide.mdx b/intro/setup-guide.mdx index 66378920..09f98879 100644 --- a/intro/setup-guide.mdx +++ b/intro/setup-guide.mdx @@ -19,7 +19,7 @@ import DevTokenSelfHostedSteps from '/snippets/dev-token-self-hosted-steps.mdx'; # 1. Configure Your Source Database -PowerSync needs to connect to your source database (Postgres, MongoDB, MySQL or SQL Server) to replicate data. Before setting up PowerSync, you need to configure your database with the appropriate permissions and replication settings. +PowerSync needs to connect to your source database (Postgres, MongoDB, MySQL, SQL Server or Convex) to replicate data. Before setting up PowerSync, you need to configure your database with the appropriate permissions and replication settings. Using the [PowerSync CLI](/tools/cli) and want an automatically integrated Postgres instance for local development? You can skip to [Step 2](#2-set-up-powersync-service-instance) and set one up with the **CLI (Self-Hosted)** tab. @@ -346,7 +346,7 @@ The next step is to connect your PowerSync Service instance to your source datab In the [PowerSync Dashboard](https://dashboard.powersync.com/), select your project and instance, then go to **Database Connections**: 1. Click **Connect to Source Database** - 2. Select the appropriate database type tab (Postgres, MongoDB, MySQL or SQL Server) + 2. Select the appropriate database type tab (Postgres, MongoDB, MySQL, SQL Server or Convex) 3. Fill in your connection details: **Note**: Use the username (e.g., `powersync_role`) and password you created in Step 1: Configure your Source Database. @@ -354,6 +354,7 @@ The next step is to connect your PowerSync Service instance to your source datab - **Postgres**: Host, Port (5432), Database name, Username, Password, SSL Mode - **MongoDB**: Connection URI (e.g., `mongodb+srv://user:pass@cluster.mongodb.net/database`) - **MySQL**: Host, Port (3306), Database name, Username, Password + - **Convex**: Deployment URL, deploy key, and polling interval. See [Convex Source Database Setup](/configuration/source-db/setup#convex). - **SQL Server**: Name, Host, Port (1433), Database name, Username, Password 4. Click **Test Connection** to verify 5. Click **Save Connection** diff --git a/maintenance-ops/implementing-schema-changes.mdx b/maintenance-ops/implementing-schema-changes.mdx index b1749f7c..ec7c83fb 100644 --- a/maintenance-ops/implementing-schema-changes.mdx +++ b/maintenance-ops/implementing-schema-changes.mdx @@ -1,6 +1,6 @@ --- title: "Implementing Schema Changes" -description: "How PowerSync handles schema changes across Postgres, MongoDB, MySQL, and SQL Server during replication." +description: "How PowerSync handles schema changes across source databases during replication." --- ## Introduction @@ -319,4 +319,4 @@ if CDC is enabled on the table again, the data will be removed and the table re- * [JSON, Arrays and Custom Types](/client-sdks/advanced/custom-types-arrays-and-json) -* [Deploying Schema Changes](/maintenance-ops/deploying-schema-changes) \ No newline at end of file +* [Deploying Schema Changes](/maintenance-ops/deploying-schema-changes) diff --git a/maintenance-ops/replication-lag.mdx b/maintenance-ops/replication-lag.mdx index 94f34a80..03125470 100644 --- a/maintenance-ops/replication-lag.mdx +++ b/maintenance-ops/replication-lag.mdx @@ -3,7 +3,7 @@ title: "Replication Lag" description: "Understand, monitor, and reduce replication lag between your source database and the PowerSync Service." 
--- -Replication lag is the delay between a change being committed in your source database (Postgres, MongoDB, MySQL, SQL Server) and that change being available in the PowerSync Service for clients to sync. A small amount of lag is normal. Sustained or growing lag usually points to a specific cause that you can investigate and act on. +Replication lag is the delay between a change being committed in your source database (Postgres, MongoDB, MySQL, SQL Server or Convex) and that change being available in the PowerSync Service for clients to sync. A small amount of lag is normal. Sustained or growing lag usually points to a specific cause that you can investigate and act on. This page covers what replication lag is, how to monitor it, what commonly causes it, and how to reduce it. @@ -16,6 +16,7 @@ A change committed in the source database goes through roughly three stages befo - **MongoDB**: change streams backed by the oplog. - **MySQL**: the binary log (binlog), read using GTIDs. - **SQL Server**: Change Data Capture (CDC) change tables, populated by a capture job that scans the transaction log. + - **Convex**: document deltas from the Convex Streaming Export API, read on a polling interval. 2. The PowerSync Service reads the change from that stream and processes it into its internal bucket storage. 3. Connected clients receive the change on their next checkpoint. @@ -25,6 +26,10 @@ Replication lag refers specifically to stage 2: the time or volume of changes th SQL Server has an additional source of latency inside stage 1: the CDC capture job itself runs on an interval (default 5 seconds on SQL Server, fixed at 20 seconds on Azure SQL), so changes do not appear in the CDC change tables instantly. See [SQL Server](#sql-server) below. + + Convex replication also has polling latency. PowerSync polls the Convex `document_deltas` endpoint at a configurable interval, which defaults to once every 1000ms. See [Convex Source Database Setup](/configuration/source-db/setup#latency). + + ## How to Monitor Replication Lag ### PowerSync Dashboard diff --git a/resources/feature-status.mdx b/resources/feature-status.mdx index c65b7471..a51ceb9b 100644 --- a/resources/feature-status.mdx +++ b/resources/feature-status.mdx @@ -46,6 +46,7 @@ Below is a summary of the current main PowerSync features and their release stat | **Category / Item** | **Status** | | -------------------------------------------------- | -------------- | | **Database Connectors** | | +| Convex | Open Alpha | | SQL Server | Beta | | MySQL | Beta | | MongoDB | GA | @@ -98,4 +99,4 @@ Below is a summary of the current main PowerSync features and their release stat | CLI | Beta | Also see: -- [PowerSync Roadmap](https://roadmap.powersync.com) \ No newline at end of file +- [PowerSync Roadmap](https://roadmap.powersync.com) diff --git a/resources/usage-and-billing/usage-and-billing-faq.mdx b/resources/usage-and-billing/usage-and-billing-faq.mdx index 8dfd0bc3..81c0ee93 100644 --- a/resources/usage-and-billing/usage-and-billing-faq.mdx +++ b/resources/usage-and-billing/usage-and-billing-faq.mdx @@ -74,7 +74,7 @@ description: "Troubleshoot usage and billing issues and find answers to common q Data processing was calculated as the total uncompressed size of data replicated from your source database(s) to PowerSync Service instances, plus data synced from PowerSync Service instances to user devices. 
These values are still available in your [Usage metrics](/maintenance-ops/monitoring-and-alerting#usage-metrics) as "Data replicated per day/hour" and "Data synced per day/hour". - Data replicated refers to activity from your backend source database (Postgres, MongoDB, MySQL, or SQL Server database) to the PowerSync Service — this is not billed. + Data replicated refers to activity from your backend source database (Postgres, MongoDB, MySQL, SQL Server, or Convex) to the PowerSync Service — this is not billed. Data synced refers to data streamed from the PowerSync Service to client devices — this is used for billing. @@ -254,4 +254,4 @@ Data hosted can temporarily spike during Sync Rule deployments and defragmentati # Accident Forgiveness -Accidentally ran up a high bill? No problem — we've got your back. Reach out to us at [support@powersync.com](mailto:support@powersync.com) and we'll work with you to resolve the issue and prevent it from happening again. \ No newline at end of file +Accidentally ran up a high bill? No problem — we've got your back. Reach out to us at [support@powersync.com](mailto:support@powersync.com) and we'll work with you to resolve the issue and prevent it from happening again. diff --git a/sync/streams/overview.mdx b/sync/streams/overview.mdx index eb228225..eb40a831 100644 --- a/sync/streams/overview.mdx +++ b/sync/streams/overview.mdx @@ -270,7 +270,7 @@ const sub = await db.syncStream('todos', { list_id: 'abc' }) - **SQL Syntax**: Stream queries use a SQL-like syntax with `SELECT` statements. You can use subqueries, `INNER JOIN`, and [CTEs](/sync/streams/ctes) for filtering. `GROUP BY`, `ORDER BY`, and `LIMIT` are not supported. See [Writing Queries](/sync/streams/queries) for details on joins, multiple queries per stream, and other features. -- **Type Conversion**: Data types from your source database (Postgres, MongoDB, MySQL, SQL Server) are converted when synced to the client's SQLite database. SQLite has a limited type system, so most types become `text` and you may need to parse or cast values in your app code. See [Type Mapping](/sync/types) for details on how each type is handled. +- **Type Conversion**: Data types from your source database (Postgres, MongoDB, MySQL, SQL Server or Convex) are converted when synced to the client's SQLite database. SQLite has a limited type system, so most types become `text` and you may need to parse or cast values in your app code. See [Type Mapping](/sync/types) for details on how each type is handled. - **Primary Key**: PowerSync requires every synced table to have a primary key column named `id` of type `text`. If your backend uses a different column name or type, you'll need to map it. For MongoDB, collections use `_id` as the ID field; you must alias it in your stream queries (e.g. `SELECT _id as id, * FROM your_collection`). diff --git a/sync/types.mdx b/sync/types.mdx index 0f911a6f..cecb0df3 100644 --- a/sync/types.mdx +++ b/sync/types.mdx @@ -1,7 +1,7 @@ --- -title: "Types" -sidebarTitle: "Type Mapping" -description: "How Postgres, MongoDB, MySQL and SQL Server types map to PowerSync's SQLite-based sync column definitions." +title: 'Types' +sidebarTitle: 'Type Mapping' +description: "How Postgres, MongoDB, MySQL, SQL Server and Convex types map to PowerSync's SQLite-based sync column definitions." --- import BinaryType from '/snippets/binary-type.mdx'; @@ -14,129 +14,150 @@ The supported client-side SQLite types are: 4. `text`: A UTF-8 text string 5. 
`blob`: Binary data - ## Postgres Type Mapping Postgres types are mapped to SQLite types as follows: -| Postgres Data Type | PowerSync / SQLite Column Type | Notes | -|--------------------|--------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| `text`, `varchar` | `text` | | -| `int2`, `int4`, `int8` | `integer` | | -| `numeric` / `decimal` | `text` | These types have arbitrary precision in Postgres, so can only be represented accurately as text in SQLite | -| `bool` | `integer` | `1` for true, `0` for false. There is no dedicated boolean data type in SQLite. | -| `float4`, `float8` | `real` | | -| `enum` | `text` | | -| `uuid` | `text` | | -| `timestamptz` | `text` | Format: `YYYY-MM-DD hh:mm:ss.sssZ`. This is compatible with ISO8601 and SQLite's functions. Precision matches the precision used in Postgres. `-infinity` becomes `0000-01-01 00:00:00Z` and `infinity` becomes `9999-12-31 23:59:59Z`. | -| `timestamp` | `text` | Format: `YYYY-MM-DD hh:mm:ss.sss`. In most cases, timestamptz should be used instead. `-infinity` becomes `0000-01-01 00:00:00` and `infinity` becomes `9999-12-31 23:59:59`. | -| `date`, `time` | `text` | | -| `json`, `jsonb` | `text` | `json` and `jsonb` values are treated as `text` values in their serialized representation. [JSON functions and operators](/sync/supported-sql#operators) operate directly on these `text` values. | -| `interval` | `text` | | -| `macaddr` | `text` | | -| `inet` | `text` | | -| `bytea` | `blob` | Cannot sync directly to client — convert to hex or base64 first. See [Operators & Functions](/sync/supported-sql). | -| `geometry` (PostGIS) | `text` | Hex string of the binary data. Use the [ST functions](/sync/supported-sql#functions) to convert to other formats | -| Arrays | `text` | JSON array. | -| `DOMAIN` types | `text` / depends | Depending on [compatibility options](/sync/advanced/compatibility#custom-postgres-types), inner type or raw wire representation (legacy). | -| Custom types | `text` | Depending on [compatibility options](/sync/advanced/compatibility#custom-postgres-types), JSON object or raw wire representation (legacy). | -| (Multi-)ranges | `text` | Depending on [compatibility options](/sync/advanced/compatibility#custom-postgres-types), JSON object (array for multi-ranges) or raw wire representation (legacy). | - +| Postgres Data Type | PowerSync / SQLite Column Type | Notes | +| ---------------------- | ------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `text`, `varchar` | `text` | | +| `int2`, `int4`, `int8` | `integer` | | +| `numeric` / `decimal` | `text` | These types have arbitrary precision in Postgres, so can only be represented accurately as text in SQLite | +| `bool` | `integer` | `1` for true, `0` for false. There is no dedicated boolean data type in SQLite. | +| `float4`, `float8` | `real` | | +| `enum` | `text` | | +| `uuid` | `text` | | +| `timestamptz` | `text` | Format: `YYYY-MM-DD hh:mm:ss.sssZ`. This is compatible with ISO8601 and SQLite's functions. Precision matches the precision used in Postgres. 
`-infinity` becomes `0000-01-01 00:00:00Z` and `infinity` becomes `9999-12-31 23:59:59Z`. | +| `timestamp` | `text` | Format: `YYYY-MM-DD hh:mm:ss.sss`. In most cases, timestamptz should be used instead. `-infinity` becomes `0000-01-01 00:00:00` and `infinity` becomes `9999-12-31 23:59:59`. | +| `date`, `time` | `text` | | +| `json`, `jsonb` | `text` | `json` and `jsonb` values are treated as `text` values in their serialized representation. [JSON functions and operators](/sync/supported-sql#operators) operate directly on these `text` values. | +| `interval` | `text` | | +| `macaddr` | `text` | | +| `inet` | `text` | | +| `bytea` | `blob` | Cannot sync directly to client — convert to hex or base64 first. See [Operators & Functions](/sync/supported-sql). | +| `geometry` (PostGIS) | `text` | Hex string of the binary data. Use the [ST functions](/sync/supported-sql#functions) to convert to other formats | +| Arrays | `text` | JSON array. | +| `DOMAIN` types | `text` / depends | Depending on [compatibility options](/sync/advanced/compatibility#custom-postgres-types), inner type or raw wire representation (legacy). | +| Custom types | `text` | Depending on [compatibility options](/sync/advanced/compatibility#custom-postgres-types), JSON object or raw wire representation (legacy). | +| (Multi-)ranges | `text` | Depending on [compatibility options](/sync/advanced/compatibility#custom-postgres-types), JSON object (array for multi-ranges) or raw wire representation (legacy). | +## Convex Type Mapping + +Convex support is currently in an [Open Alpha release](/resources/feature-status). + +Convex values are mapped to SQLite types as follows: + +| Convex Type | TS/JS Type | PowerSync / SQLite Column Type | Notes | +| ----------- | ---------------- | ------------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `Id` | `string` | `text` | Convex document IDs are exposed as `_id` and can be synced as `text`. For synced client tables, use client-side ID mapping with a stable UUID column as `id` instead of relying on Convex-generated `_id` values. | +| `Null` | `null` | `null` | | +| `Int64` | `base-10 string` | `text` | Cast to `INTEGER` in Sync Streams when you want to sync the value as a SQLite integer. | +| `Float64` | `number` | `real` | | +| `Boolean` | `boolean` | `integer` | `1` for true, `0` for false. There is no dedicated boolean data type in SQLite. | +| `String` | `string` | `text` | | +| `Bytes` | `base64 string` | `text` | Decode from base64 in your app if you need binary data on the client. | +| `Array` | `Array` | `text` | Converted to a JSON string. | +| `Object` | `Object` | `text` | Converted to a JSON string. | +| `Record` | `Record` | `text` | Converted to a JSON string. | + +- Convex documents are converted to a flat list of columns, one column per top-level field. +- Convex system fields used internally by replication are not synced as normal columns. The document `_id` is included, while `_creationTime` is excluded. +- Nested objects and arrays are converted to JSON, and [JSON functions and operators](/sync/supported-sql#operators) can be used to query them in Sync Streams or on the client-side SQLite database. +- Cast Convex `Int64` fields to `INTEGER` in Sync Streams when you want SQLite integer values on the client, for example `CAST(an_int64_column AS INTEGER) AS an_int64_column`. 
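Because nested Convex values arrive as JSON text, SQLite's JSON functions work on them directly in client-side queries. A sketch, assuming an opened PowerSync database `db` and a synced `todos` table with a JSON `metadata` column (names are illustrative):

```typescript
// Illustrative client-side query. Assumes `metadata` holds a JSON object
// replicated from a nested Convex field, e.g. {"priority": 2, "tags": ["home"]}.
const urgentTodos = await db.getAll(
  `SELECT id,
          description,
          json_extract(metadata, '$.priority') AS priority
   FROM todos
   WHERE json_extract(metadata, '$.priority') >= ?`,
  [2]
);
```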
+ + ## MongoDB Type Mapping MongoDB types are mapped to SQLite types as follows: -| BSON Type | PowerSync / SQLite Column Type | Notes | -|--------------------|--------------------------------|------------------------------------------------------------------------------------------------------------------------------------------| -| `String` | `text` | | -| `Int`, `Long` | `integer` | | -| `Double` | `real` | | -| `Decimal128` | `text` | | -| `Object` | `text` | Converted to a JSON string | -| `Array` | `text` | Converted to a JSON string | -| `ObjectId` | `text` | Lower-case hex string | -| `UUID` | `text` | Lower-case hex string | -| `Boolean` | `integer` | `1` for true, `0` for false. There is no dedicated boolean data type in SQLite. | -| `Date` | `text` | Format: `YYYY-MM-DD hh:mm:ss.sssZ` | -| `Null` | `null` | | -| `Binary` | `blob` | Cannot sync directly to client — convert to hex or base64 first. See [Operators & Functions](/sync/supported-sql). | -| Regular Expression | `text` | JSON text in the format `{"pattern":"...","options":"..."}` | -| `Timestamp` | `integer` | Converted to a 64-bit integer | -| `Undefined` | `null` | | -| `DBPointer` | `text` | JSON text in the format `{"collection":"...","oid":"...","db":"...","fields":...}` | -| `JavaScript` | `text` | JSON text in the format `{"code": "...", "scope": ...}` | -| `Symbol` | `text` | | -| `MinKey`, `MaxKey` | `null` | | - -* Data is converted to a flat list of columns, one column per top-level field in the MongoDB document. -* Special BSON types are converted to plain SQLite alternatives. For example, `ObjectId`, `Date`, `UUID` are all converted to a plain `TEXT` column. -* Nested objects and arrays are converted to JSON, and [JSON functions and operators](/sync/supported-sql#operators) can be used to query them (in the Sync Streams / Sync Rules and/or on the client-side SQLite statements). -* Binary data nested in objects or arrays is not supported. +| BSON Type | PowerSync / SQLite Column Type | Notes | +| ------------------ | ------------------------------ | ------------------------------------------------------------------------------------------------------------------ | +| `String` | `text` | | +| `Int`, `Long` | `integer` | | +| `Double` | `real` | | +| `Decimal128` | `text` | | +| `Object` | `text` | Converted to a JSON string | +| `Array` | `text` | Converted to a JSON string | +| `ObjectId` | `text` | Lower-case hex string | +| `UUID` | `text` | Lower-case hex string | +| `Boolean` | `integer` | `1` for true, `0` for false. There is no dedicated boolean data type in SQLite. | +| `Date` | `text` | Format: `YYYY-MM-DD hh:mm:ss.sssZ` | +| `Null` | `null` | | +| `Binary` | `blob` | Cannot sync directly to client — convert to hex or base64 first. See [Operators & Functions](/sync/supported-sql). | +| Regular Expression | `text` | JSON text in the format `{"pattern":"...","options":"..."}` | +| `Timestamp` | `integer` | Converted to a 64-bit integer | +| `Undefined` | `null` | | +| `DBPointer` | `text` | JSON text in the format `{"collection":"...","oid":"...","db":"...","fields":...}` | +| `JavaScript` | `text` | JSON text in the format `{"code": "...", "scope": ...}` | +| `Symbol` | `text` | | +| `MinKey`, `MaxKey` | `null` | | + +- Data is converted to a flat list of columns, one column per top-level field in the MongoDB document. +- Special BSON types are converted to plain SQLite alternatives. For example, `ObjectId`, `Date`, `UUID` are all converted to a plain `TEXT` column. 
+- Nested objects and arrays are converted to JSON, and [JSON functions and operators](/sync/supported-sql#operators) can be used to query them (in the Sync Streams / Sync Rules and/or on the client-side SQLite statements). +- Binary data nested in objects or arrays is not supported. - ## MySQL Type Mapping MySQL support is currently in a [Beta release](/resources/feature-status). MySQL types are mapped to SQLite types as follows: -| MySQL Data Type | PowerSync / SQLite Column Type | Notes | -|----------------------------------------------------|--------------------------------|-----------------------------------------------------------------------------------| -| `tinyint`, `smallint`, `mediumint`, `bigint`, `integer`, `int` | `integer` | | -| `numeric`, `decimal` | `text` | | -| `bool`, `boolean` | `integer` | `1` for true, `0` for false. There is no dedicated boolean data type in SQLite. | -| `float`, `double`, `real` | `real` | | -| `enum` | `text` | | -| `set` | `text` | Converted to JSON array | -| `char`, `varchar` | `text` | | -| `tinytext`, `text`, `mediumtext`, `longtext` | `text` | | -| `timestamp` | `text` | ISO 8601 format: `YYYY-MM-DDTHH:mm:ss.sssZ` | -| `date` | `text` | Format: `YYYY-MM-DD` | -| `time`, `datetime` | `text` | ISO 8601 format: `YYYY-MM-DDTHH:mm:ss.sssZ` | -| `year` | `text` | | -| `json` | `text` | There is no dedicated JSON type in SQLite — JSON functions operate directly on text values. | -| `bit` | `blob` | * See note below regarding syncing binary types | -| `binary`, `varbinary` | `blob` | | -| `image` | `blob` | | -| `geometry`, `geometrycollection` | `blob` | | -| `point`, `multipoint` | `blob` | | -| `linestring`, `multilinestring` | `blob` | | -| `polygon`, `multipolygon` | `blob` | | +| MySQL Data Type | PowerSync / SQLite Column Type | Notes | +| -------------------------------------------------------------- | ------------------------------ | ------------------------------------------------------------------------------------------- | +| `tinyint`, `smallint`, `mediumint`, `bigint`, `integer`, `int` | `integer` | | +| `numeric`, `decimal` | `text` | | +| `bool`, `boolean` | `integer` | `1` for true, `0` for false. There is no dedicated boolean data type in SQLite. | +| `float`, `double`, `real` | `real` | | +| `enum` | `text` | | +| `set` | `text` | Converted to JSON array | +| `char`, `varchar` | `text` | | +| `tinytext`, `text`, `mediumtext`, `longtext` | `text` | | +| `timestamp` | `text` | ISO 8601 format: `YYYY-MM-DDTHH:mm:ss.sssZ` | +| `date` | `text` | Format: `YYYY-MM-DD` | +| `time`, `datetime` | `text` | ISO 8601 format: `YYYY-MM-DDTHH:mm:ss.sssZ` | +| `year` | `text` | | +| `json` | `text` | There is no dedicated JSON type in SQLite — JSON functions operate directly on text values. | +| `bit` | `blob` | \* See note below regarding syncing binary types | +| `binary`, `varbinary` | `blob` | | +| `image` | `blob` | | +| `geometry`, `geometrycollection` | `blob` | | +| `point`, `multipoint` | `blob` | | +| `linestring`, `multilinestring` | `blob` | | +| `polygon`, `multipolygon` | `blob` | | - ## SQL Server Type Mapping SQL Server support is currently in a [Beta release](/resources/feature-status). 
SQL Server types are mapped to SQLite types as follows:

-| SQL Server Data Type | PowerSync / SQLite Column Type | Notes |
-|----------------------------------------------------|--------------------------------|--------------------------------------------------------|
-| `tinyint`, `smallint`, `int`, `bigint` | `integer` | |
-| `numeric`, `decimal` | `text` | Numeric string |
-| `float`, `real` | `real` | |
-| `bit` | `integer` | |
-| `money`, `smallmoney` | `text` | Numeric string |
-| `xml` | `text` | |
-| `char`, `nchar`, `ntext` | `text` | |
-| `varchar`, `nvarchar`, `text` | `text` | |
-| `uniqueidentifier` | `text` | |
-| `timestamp` | `text` | ISO 8601 format: `YYYY-MM-DDTHH:mm:ss.sssZ` |
-| `date` | `text` | Format: `YYYY-MM-DD` |
-| `time` | `text` | Format: `HH:mm:ss.sss` |
-| `datetime`, `datetime2`, `smalldatetime`, `datetimeoffset` | `text` | ISO 8601 format: `YYYY-MM-DDTHH:mm:ss.sssZ` |
-| `json` | `text` | Only exists for Azure SQL Database and SQL Server 2025 |
-| `geometry`, `geography` | `text` | `text` of JSON object describing the spatial data type |
-| `binary`, `varbinary`, `image` | `blob` | * See note below regarding binary types |
-| `rowversion`, `timestamp` | `blob` | * See note below regarding binary types |
-| User Defined Types: `hiearchyid` | `blob` | * See note below regarding binary types |
+| SQL Server Data Type                                        | PowerSync / SQLite Column Type | Notes                                                   |
+| ----------------------------------------------------------- | ------------------------------ | ------------------------------------------------------- |
+| `tinyint`, `smallint`, `int`, `bigint`                      | `integer`                      |                                                         |
+| `numeric`, `decimal`                                        | `text`                         | Numeric string                                          |
+| `float`, `real`                                             | `real`                         |                                                         |
+| `bit`                                                       | `integer`                      |                                                         |
+| `money`, `smallmoney`                                       | `text`                         | Numeric string                                          |
+| `xml`                                                       | `text`                         |                                                         |
+| `char`, `nchar`, `ntext`                                    | `text`                         |                                                         |
+| `varchar`, `nvarchar`, `text`                               | `text`                         |                                                         |
+| `uniqueidentifier`                                          | `text`                         |                                                         |
+| `timestamp`                                                 | `text`                         | ISO 8601 format: `YYYY-MM-DDTHH:mm:ss.sssZ`             |
+| `date`                                                      | `text`                         | Format: `YYYY-MM-DD`                                    |
+| `time`                                                      | `text`                         | Format: `HH:mm:ss.sss`                                  |
+| `datetime`, `datetime2`, `smalldatetime`, `datetimeoffset`  | `text`                         | ISO 8601 format: `YYYY-MM-DDTHH:mm:ss.sssZ`             |
+| `json`                                                      | `text`                         | Only exists for Azure SQL Database and SQL Server 2025  |
+| `geometry`, `geography`                                     | `text`                         | `text` of JSON object describing the spatial data type  |
+| `binary`, `varbinary`, `image`                              | `blob`                         | \* See note below regarding binary types                |
+| `rowversion`, `timestamp`                                   | `blob`                         | \* See note below regarding binary types                |
+| User Defined Types: `hierarchyid`                           | `blob`                         | \* See note below regarding binary types                |
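+
+Because `numeric`, `decimal`, `money` and `smallmoney` values sync as numeric strings, cast them before doing arithmetic on the client. A client-side sketch, assuming a hypothetical `invoices` table with a `total_amount` column:
+
+```sql
+-- Client-side SQLite sketch: "invoices" and "total_amount" are hypothetical names.
+-- Money and decimal values sync as numeric strings, so cast before aggregating.
+-- Casting to REAL trades the original fixed-point precision for floating point.
+SELECT SUM(CAST(total_amount AS REAL)) AS total_billed
+FROM invoices;
+```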