
Conversation


@stevensJourney stevensJourney commented Oct 1, 2025

🎯 Changes

This PR adds a PowerSync integration for TanStack DB collections. It is a temporary internal PR for discussion; ideally, a PR would be made upstream once the internal discussion has settled.

There are multiple options for integrating PowerSync with TanStack DB collections. The main options are to either:

  1. Use a PowerSync SDK client as the data source
  2. Connect directly to a PowerSync service sync stream for data operations

The work here investigates and implements option 1; option 2 can be implemented in a separate PR.

The PowerSync SDK is used both to sync data into local SQLite tables and to perform uploads via the PowerSyncBackendConnector.

TanStack DB collections are tied to the local SQLite tables with the help of collection builders. These builders link the TanStack collection state to the SQLite table via trigger-based diff tracking. Mutations made on TanStack DB collections are persisted to SQLite via the PowerSyncTransactor; once persisted, the upload queue ensures the changes are uploaded to the backend.

flowchart TB
    subgraph UI[UI Layer]
        UI_Component[UI Components]
    end

    subgraph TanStack[TanStack DB Layer]
        TS_Cache[In-Memory Cache]
        TS_Collections[TanStack Collections]
    end

    subgraph PowerSync[PowerSync Layer]
        PS_SQLite[Local SQLite DB]
        PS_Client[PowerSync Client]
    end

    subgraph Backend[Backend Infrastructure]
        Backend_App[Application Backend]
        PS_Service[PowerSync Service]
    end

    UI_Component -->|Mutations| TS_Collections
    TS_Cache -->|Updates| UI_Component

    TS_Collections -->|Persist| PS_SQLite
    PS_SQLite -->|Watch| TS_Cache

    PS_Client -->|Sync| PS_SQLite
    PS_SQLite -->|Changes| PS_Client
    PS_Client -->|Changes| Backend_App
    Backend_App -->|Apply| PS_Service
    PS_Service -->|Sync| PS_Client

    style UI fill:#f9f,stroke:#333,stroke-width:2px
    style TanStack fill:#bbf,stroke:#333,stroke-width:2px
    style PowerSync fill:#bfb,stroke:#333,stroke-width:2px
    style Backend fill:#fdb,stroke:#333,stroke-width:2px

Detailed examples are in the unit tests. The main flow is to:

Create a PowerSync SDK Client

const APP_SCHEMA = new Schema({
  users: new Table({
    name: column.text
  }),
  documents: new Table({
    name: column.text
  })
});

type Document = (typeof APP_SCHEMA)[`types`][`documents`];
type User = (typeof APP_SCHEMA)[`types`][`users`];

const db = new PowerSyncDatabase({
  database: {
    dbFilename: `test.sqlite`
  },
  schema: APP_SCHEMA
});

db.connect(connector);

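The `connector` passed to `db.connect()` is assumed to implement the SDK's `PowerSyncBackendConnector` interface. A minimal sketch follows; the `@powersync/web` import path, endpoint, token handling, and the backend call are placeholders, not code from this PR:

```ts
import type {
  AbstractPowerSyncDatabase,
  PowerSyncBackendConnector
} from "@powersync/web";

class Connector implements PowerSyncBackendConnector {
  async fetchCredentials() {
    // Placeholder auth flow: return the PowerSync service endpoint and a JWT.
    return {
      endpoint: "https://example.powersync.journeyapps.com",
      token: "<jwt>"
    };
  }

  async uploadData(database: AbstractPowerSyncDatabase) {
    // Drain the upload queue: forward pending CRUD entries to the app backend.
    const batch = await database.getCrudBatch();
    if (!batch) return;
    // await api.upload(batch.crud); // hypothetical backend call
    await batch.complete();
  }
}

const connector = new Connector();
```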
Define TanStack DB collections using the database

const documentsCollection = createCollection(
  powerSyncCollectionOptions({
    database: db,
    table: APP_SCHEMA.props.documents
  })
);
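The transaction example further down also inserts into a `usersCollection`, which can be created the same way from the `users` table:

```ts
const usersCollection = createCollection(
  powerSyncCollectionOptions({
    database: db,
    table: APP_SCHEMA.props.users
  })
);
```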

Use the TanStack DB collections as one would any other collection

// Create a new item synchronously
const id = randomUUID();
const tx = documentsCollection.insert({
  id,
  name: `new`
});

// Synchronously written data is available immediately
const newDoc = documentsCollection.get(id);

documentsCollection.update(id, (d) => (d.name = `updatedNew`));

// More advanced batched transactions can be built with createTransaction
const addTx = createTransaction({
  autoCommit: false,
  mutationFn: async ({ transaction }) => {
    await new PowerSyncTransactor({ database: db }).applyTransaction(transaction);
  }
});

addTx.mutate(() => {
  for (let i = 0; i < 5; i++) {
    documentsCollection.insert({ id: randomUUID(), name: `tx-${i}` });
    usersCollection.insert({ id: randomUUID(), name: `user` });
  }
});
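Since `autoCommit` is disabled, the transaction is committed explicitly after the mutations have been applied (a sketch, assuming the standard TanStack DB `Transaction` API):

```ts
// Runs mutationFn, which persists the batched mutations to SQLite via the
// PowerSyncTransactor; the PowerSync upload queue then uploads them to the backend.
await addTx.commit();

// Optionally wait until the mutations have been persisted.
await addTx.isPersisted.promise;
```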

✅ Checklist

PowerSync TODOs

  • Test Schema support

  • Add documentation

  • The Node.js SDK requires install scripts which don't run automatically; this breaks tests

  • I have followed the steps in the Contributing guide (seems like a dead link; the repo also does not appear to have a CONTRIBUTING.md file)

  • I have tested this code locally with pnpm test:pr (There does not seem to be a script for this)

🚀 Release Impact

  • This change affects published code, and I have generated a changeset.
  • This change is docs/CI/dev-only (no release).

"module": "dist/esm/index.js",
"packageManager": "pnpm@10.17.0",
"author": "JOURNEYAPPS",
"license": "Apache-2.0",

This might need to match the other packages in this repo's license (MIT)

KyleAMathews and others added 14 commits October 23, 2025 07:35
…ned synced + optimistic store (TanStack#708)

* Add failing test for issue TanStack#706: writeDelete timing bug in onDelete handler

This test reproduces issue TanStack#706 where calling writeDelete() inside an
onDelete handler causes unexpected behavior.

The Root Cause:
When collection.delete() is called, it creates a transaction and calls
commit() before calling recomputeOptimisticState(). Because commit() is
async but starts executing immediately, the onDelete handler runs BEFORE
the optimistic delete is applied to the collection state.

Timeline:
1. collection.delete('1') is called
2. Transaction is created with autoCommit: true
3. commit() is called (async, but starts immediately)
4. Handler runs inside commit() - optimisticDeletes is empty!
5. commit() completes
6. recomputeOptimisticState() is finally called - too late

Expected Behavior:
- optimisticDeletes.has('1') should be TRUE when handler runs
- writeDelete('1') should throw DeleteOperationItemNotFoundError

Actual Behavior (BUG):
- optimisticDeletes.has('1') is FALSE when handler runs
- writeDelete('1') succeeds instead of throwing
- This causes state inconsistencies and silent failures

The test will fail until this timing issue is fixed.

Related: packages/db/src/collection/mutations.ts lines 529-537

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Update test for issue TanStack#706: Root cause is automatic refetch after writeDelete

This test reproduces issue TanStack#706 where calling writeDelete() inside an
onDelete handler causes the deleted item to reappear.

The Complete Root Cause:
========================

When collection.delete() is called with an onDelete handler that uses writeDelete():

1. Transaction is created and commit() starts (mutations.ts:531)
2. Transaction NOT yet added to state.transactions (line 533 runs after)
3. onDelete handler runs while transaction.state = 'persisting'
4. Handler calls writeDelete('1')
5. writeDelete checks for persisting transactions in state.transactions
6. Finds NONE (transaction not added yet), commits synced delete immediately
7. Item removed from syncedData ✓
8. Handler completes
9. wrappedOnDelete automatically calls refetch() (query.ts:681)
10. Refetch fetches data from server
11. Server still has item (transaction delete hasn't executed yet)
12. Refetch OVERWRITES syncedData with server data ✗
13. Item reappears!

The Two-Part Bug:
================

Part 1: Transaction added to state.transactions AFTER commit() starts
- In mutations.ts:529-537, commit() is called on line 531
- Transaction added to state.transactions on line 533 (too late)
- Handler runs before transaction is in the map
- This allows writeDelete to commit immediately

Part 2: Automatic refetch undoes the synced write
- In query.ts:674-686, wrappedOnDelete automatically refetches
- Unless handler returns { refetch: false }
- Refetch restores server data, overwriting synced changes
- This is the reason the item reappears

Test Demonstrates:
- writeDelete succeeds (no error)
- Synced transaction committed immediately (persisting transactions: 0)
- queryFn called twice (initial + refetch)
- Final state: item still present (BUG!)

Expected: Item should stay deleted after writeDelete
Actual: Automatic refetch restores it

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Fix test for issue TanStack#706: Demonstrate silent error swallowing in onDelete

Corrected understanding of the bug based on issue details:
- User returns { refetch: false } so automatic refetch is not the cause
- User also deletes on backend, so refetch would work anyway

The Real Bug:
=============

The issue is that errors thrown by writeDelete() inside onDelete handlers
are silently swallowed by .catch(() => undefined) in mutations.ts:531

When optimistic delete IS applied before handler runs:
1. collection.delete('1') creates optimistic delete
2. collection.has('1') returns false
3. onDelete handler runs
4. Handler calls writeDelete('1')
5. writeDelete validates: !collection.has('1') → throws DeleteOperationItemNotFoundError
6. Error propagates, commit() rejects
7. .catch(() => undefined) SILENTLY SWALLOWS error
8. User sees: execution stops, no error message, item flickers and reappears

The test demonstrates calling writeDelete in onDelete with refetch: false
(the exact pattern from the issue). The .catch(() => undefined) is the
root cause that prevents users from seeing errors.

Note: Due to timing (transaction not in state.transactions when handler runs),
this test hits the scenario where writeDelete succeeds. The bug manifests
when the optimistic delete IS applied, causing writeDelete to throw.

Related code: mutations.ts:531

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Fix issue TanStack#706: writeDelete should check synced store only, not combined view

The bug: When calling writeDelete() inside an onDelete handler, it would throw
DeleteOperationItemNotFoundError because it checked the combined view (synced + optimistic)
which already had the item optimistically deleted.

The fix: Change manual-sync.ts to check only the synced store, not the combined view.

Changes in packages/query-db-collection/src/manual-sync.ts:
- Line 116: Changed from ctx.collection.has(op.key) to ctx.collection._state.syncedData.has(op.key)
- Line 120: Same change for delete validation
- Line 155: Changed from ctx.collection.get(op.key) to ctx.collection._state.syncedData.get(op.key)
- Line 173: Same change for delete operation
- Line 182: Changed ctx.collection.has(op.key) to ctx.collection._state.syncedData.has(op.key) for upsert

Why this fixes the issue:
- writeDelete operates on the synced store, not the optimistic state
- Validation should match the store being modified
- This allows write operations to work correctly even when items are optimistically modified
- Now handlers can safely call writeDelete/writeUpdate regardless of optimistic state

Test updated:
- Renamed test to reflect it now verifies the fix works
- Test passes: writeDelete succeeds, handler completes, item deleted successfully
- No errors thrown, execution continues as expected

Fixes TanStack#706
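
For reference, the pattern from the issue that this fix enables looks roughly like the following (a sketch; `fetchTodos`, `api.deleteTodo`, `queryClient`, and the key handling are placeholder names, not code from this PR):

```ts
const todos = createCollection(
  queryCollectionOptions({
    queryKey: [`todos`],
    queryFn: fetchTodos,
    queryClient,
    getKey: (todo) => todo.id,
    onDelete: async ({ transaction }) => {
      const key = transaction.mutations[0].key;
      await api.deleteTodo(key);
      // Write the delete straight into the synced store and skip the automatic refetch.
      todos.utils.writeDelete(key);
      return { refetch: false };
    }
  })
);
```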

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Clean up test and add changeset for issue TanStack#706

- Removed console.log and debugging output from test
- Removed lengthy comment explanations
- Simplified test to be concise and focused
- Added changeset describing the fix

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
… overridden (TanStack#707)

* fix(query-db-collection): respect QueryClient defaultOptions when not overridden

Previously, queryCollectionOptions would set query options (staleTime, retry,
retryDelay, refetchInterval, enabled, meta) to undefined even when not provided
in the config. This prevented QueryClient's defaultOptions from being used as
fallbacks.

The fix conditionally includes these options in the observerOptions object only
when they are explicitly defined (not undefined), allowing TanStack Query to
properly use defaultOptions from the QueryClient.

Added comprehensive tests to verify:
1. defaultOptions are respected when not overridden in queryCollectionOptions
2. explicit options in queryCollectionOptions override defaultOptions
3. retry behavior from defaultOptions works correctly

Fixes issue where users couldn't use QueryClient defaultOptions with QueryCollection
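
An illustrative sketch of the fixed behaviour (`fetchTodos` is a placeholder): with no `staleTime`/`retry` set on the collection, the QueryClient defaults now apply.

```ts
const queryClient = new QueryClient({
  defaultOptions: { queries: { staleTime: 60_000, retry: 3 } }
});

// No staleTime/retry here, so the QueryClient defaults above are used as fallbacks.
const todos = createCollection(
  queryCollectionOptions({
    queryKey: [`todos`],
    queryFn: fetchTodos,
    queryClient,
    getKey: (todo) => todo.id
  })
);
```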

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* chore: add changeset for queryCollectionOptions defaultOptions fix

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
* Fix optimistic mutation check in transaction processing

* Add test for synced delete after non-optimistic delete

* Add changeset for dedupe filtering non-optimistic mutations fix
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
…ndlers (TanStack#714)

* fix(collection): fire status:change event before cleaning up event handlers

Event handlers are now cleaned up after the status is changed to 'cleaned-up',
allowing status:change listeners to properly detect the cleaned-up state.

The cleanup process now:
1. Cleans up sync, state, changes, and indexes
2. Sets status to 'cleaned-up' (fires the event)
3. Finally cleans up event handlers

This fixes the collection factory pattern where collections listen for the
'cleaned-up' status to remove themselves from the cache.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* style: format changeset with prettier

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
TanStack#552)

* feat: implement exact targeting for refetching queries to prevent unintended cascading effects

* feat: add refetchType option for more granular refetching control

* chore: add changeset

* refactor: make utils.refetch() bypass enabled: false and remove refetchType

Changes:
- Use queryObserver.refetch() for all refetch calls (both utils and internal handlers)
- Bypasses enabled: false to support manual fetch patterns (matches TanStack Query hook behavior)
- Fixes clearError() to work even when enabled: false
- Return QueryObserverResult instead of void for better DX
- Remove refetchType option - not needed with exact targeting via observer
- Add tests for clearError() exact targeting and throwOnError behavior
- Update docs to clarify refetch semantics

With exact targeting via queryObserver, refetchType filtering doesn't add value.
Users always want their collection data refetched, whether from utils.refetch()
or internal mutation handlers.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: clearError should return Promise<void> not QueryObserverResult

* fix: type error in query.test

---------

Co-authored-by: Kyle Mathews <mathews.kyle@gmail.com>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Chriztiaan
Chriztiaan previously approved these changes Oct 29, 2025
sadkebab and others added 10 commits October 30, 2025 07:19
…ections (TanStack#730)

* feat(local-storage): add support for custom parsers/serializers

* added changeset

* feat(local-storage): using parser instead of JSON in loadFromStorage

* feat(local-storage): changed argument order in loadFromStorage to respect previous one

* feat(local-storage): exporting parser type
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
fix(react-db): fix flaky test by preventing race condition

The test "optimistic state is dropped after commit" was flaky because it had a race condition:
1. The test would wait for state size to become 4
2. Then immediately check that the temp-key exists
3. However, the async mutation (with only 10ms delay) could complete between steps 1 and 2

Fixed by moving all assertions into the same waitFor() block, ensuring they execute atomically.
This prevents the mutation from completing between the size check and the temp-key verification.
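
Roughly, the fix moves both assertions into one `waitFor` callback (a sketch; `result.current.state` is an assumption about the test's shape):

```tsx
// Both checks now run inside the same waitFor() retry loop.
await waitFor(() => {
  expect(result.current.state.size).toBe(4);
  expect(result.current.state.has(`temp-key`)).toBe(true);
});
```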

Co-authored-by: Claude <noreply@anthropic.com>
* docs(svelte-db): Add documentation for destructuring reactivity issue (TanStack#414)

## Summary
This commit addresses issue TanStack#414 where users reported that destructuring
the return value from useLiveQuery() breaks reactivity in Svelte 5.

## Root Cause
This is a fundamental limitation of Svelte 5's reactivity system, not a
bug in the library. When objects with getters are destructured, the
destructuring evaluates getters once and captures the values at that
moment, losing the reactive connection.

## Solution
Added comprehensive documentation explaining:
- Why direct destructuring breaks reactivity
- Two correct usage patterns:
  1. Use dot notation (recommended): `query.data`, `query.isLoading`
  2. Wrap with $derived: `const { data } = $derived(query)`

## Changes
- Updated JSDoc comments in useLiveQuery.svelte.ts with detailed
  explanation and examples
- Updated README.md with clear usage guidelines
- Added test case demonstrating the correct $derived pattern
- All 23 existing tests continue to pass

## References
- Issue: TanStack#414
- Svelte documentation: sveltejs/svelte#11002

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* chore(svelte-db): Revert README changes to keep it minimal

The README is intentionally kept small, so reverting the detailed
documentation. The comprehensive documentation remains in the JSDoc
comments in useLiveQuery.svelte.ts.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* chore: Remove package-lock.json (project uses pnpm)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
…anStack#732)

* fix: Optimize queries without joins by combining multiple WHERE clauses

Addresses issue TanStack#445 - performance slowdown when using multiple .where() calls.

## Problem
When using multiple .where() calls on a query without joins:
```javascript
query.from({ item: collection })
  .where(({ item }) => eq(item.gridId, gridId))
  .where(({ item }) => eq(item.rowId, rowId))
  .where(({ item }) => eq(item.side, side))
```

The optimizer was skipping these queries entirely, leaving multiple WHERE
clauses in an array. During query compilation, each WHERE clause was applied
as a separate filter() operation in the D2 pipeline, causing a 40%+ performance
degradation compared to using a single WHERE clause with AND.

## Solution
Modified the optimizer to combine multiple WHERE clauses into a single AND
expression for queries without joins. This ensures only one filter operator is
added to the pipeline, improving performance while maintaining correct semantics.

The optimizer now:
1. Detects queries without joins that have multiple WHERE clauses
2. Combines them using the AND function
3. Reduces pipeline complexity from N filters to 1 filter
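
Illustratively, after this change the query above is compiled as if it had been written with a single combined predicate (a sketch using the query builder's `and` helper):

```ts
query
  .from({ item: collection })
  .where(({ item }) =>
    and(eq(item.gridId, gridId), eq(item.rowId, rowId), eq(item.side, side))
  )
```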

## Testing
- Updated existing optimizer tests to reflect the new behavior
- All 42 optimizer tests pass
- Added new test case for combining multiple WHERE clauses without joins

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* docs: Add changeset and investigation report for issue TanStack#445

- Added changeset for the WHERE clause optimization fix
- Documented root cause analysis and solution details

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: Complete optimizer fix - combine remaining WHERE clauses after pushdown

This completes the fix for issue TanStack#445 by implementing the missing "step 3" of
the optimizer process.

## Problem (Broader than Initially Identified)
The optimizer was missing the final step of combining remaining WHERE clauses
after optimization. This affected:

1. Queries WITHOUT joins: All optimization was skipped, leaving multiple
   WHERE clauses as separate array elements
2. Queries WITH joins: After predicate pushdown, remaining WHERE clauses
   (multi-source + unpushable single-source) were left as separate elements

Both cases resulted in multiple filter() operations in the pipeline instead
of a single combined filter, causing 40%+ performance degradation.

## Solution
Implemented "step 3" (combine remaining WHERE clauses) in two places:

1. **applySingleLevelOptimization**: For queries without joins, combine
   multiple WHERE clauses before returning

2. **applyOptimizations**: After predicate pushdown for queries with joins,
   combine all remaining WHERE clauses (multi-source + unpushable)

## Testing
- Added test: "should combine multiple remaining WHERE clauses after optimization"
- All 43 optimizer tests pass
- Updated investigation report with complete analysis
- Updated changeset to reflect the complete fix

Thanks to colleague feedback for catching that step 3 was missing!

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* style: Run prettier on markdown files

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* docs: Add PR body update for issue TanStack#445 fix

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* docs: Remove specific 40% performance claim

The original issue compared TanStack db with Redux, not the bug itself.
Changed to more general language about performance degradation.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* docs: Remove temporary investigation and PR body files

These were used for context during development but aren't needed in the repo.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: Flatten nested AND expressions when combining WHERE clauses

Addresses reviewer feedback - when combining remaining WHERE clauses after
predicate pushdown, flatten any nested AND expressions to avoid creating
and(and(...), ...) structures.

Changes:
- Use flatMap(splitAndClausesRecursive) before combineWithAnd to flatten
- Added test for nested AND flattening
- Added test verifying functional WHERE clauses remain separate

All 45 optimizer tests pass.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* style: Remove issue reference from code comment

As requested by @samwillis - issue references in code comments can become
stale. The comment is self-explanatory without the reference.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
* fix: enable auto-indexing for nested field paths

This fix allows auto-indexes to be created for nested field paths
(e.g., `profile.score`, `metadata.stats.views`), not just top-level
fields. This resolves performance issues where queries with `eq()`,
`gt()`, etc. on nested fields were forced to do full table scans
instead of using indexes.

Changes:
- Remove the `fieldPath.length !== 1` restriction in `extractIndexableExpressions()`
- Update `ensureIndexForField()` to properly traverse nested paths when creating index accessors
- Add comprehensive tests for nested path auto-indexing with 1, 2, and 3-level nesting
- Verify that nested path indexes are properly used by the query optimizer

Fixes TanStack#727
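
For example, a live query filtering on a nested path can now be served by an automatically created index (a sketch; the collection and field names are illustrative):

```ts
const topUsers = createLiveQueryCollection((q) =>
  q
    .from({ user: usersCollection })
    // The optimizer can now create and use an auto-index for this nested path.
    .where(({ user }) => gt(user.profile.score, 100))
);
```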

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: use colon-prefixed naming for auto-indexes to avoid conflicts

Change auto-index naming from 'auto_field_path' to 'auto:field.path'
to prevent ambiguity between nested paths and fields with underscores.

Examples:
- user.profile -> auto:user.profile
- user_profile -> auto:user_profile
(no conflict!)

Co-authored-by: Sam Willis <sam.willis@gmail.com>

* chore: add changeset for nested auto-index fix

* style: format changeset with prettier

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Sam Willis <sam.willis@gmail.com>
feat: use minified builds for bundle size comparisons

Add build:minified scripts that enable minification during builds,
and configure the compressed-size-action to use these scripts.

This ensures that bundle size measurements in PRs reflect actual
code changes rather than being inflated by comments and whitespace,
while keeping the published packages readable and unminified.

Changes:
- Add build:minified script to root package.json
- Add build:minified scripts to @tanstack/db and @tanstack/react-db
- Configure compressed-size-action to use build:minified script

Co-authored-by: Claude <noreply@anthropic.com>
* feat: add useSerializedMutations hook with timing strategies

Implements a new hook for managing optimistic mutations with pluggable timing strategies (debounce, queue, throttle) using TanStack Pacer.

Key features:
- Auto-merge mutations and serialize persistence according to strategy
- Track and rollback superseded pending transactions to prevent memory leaks
- Proper cleanup of pending/executing transactions on unmount
- Queue strategy uses AsyncQueuer for true sequential processing

Breaking changes from initial design:
- Renamed from useSerializedTransaction to useSerializedMutations (more accurate name)
- Each mutate() call creates mutations that are auto-merged, not separate transactions

Addresses feedback:
- HIGH: Rollback superseded transactions to prevent orphaned isPersisted promises
- HIGH: cleanup() now properly rolls back all pending/executing transactions
- HIGH: Queue strategy properly serializes commits using AsyncQueuer with concurrency: 1

Example usage:
```tsx
const mutate = useSerializedMutations({
  mutationFn: async ({ transaction }) => {
    await api.save(transaction.mutations)
  },
  strategy: debounceStrategy({ wait: 500 })
})
```

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Fix feedback-4 issues and add interactive demo

Fixes for feedback-4 issues:
- Queue strategy: await isPersisted.promise instead of calling commit() again to fix double-commit error
- cleanup(): check transaction state before rollback to prevent errors on completed transactions
- Pending transactions: rollback all pending transactions on each new mutate() call to handle dropped callbacks

Added interactive serialized mutations demo:
- Visual tracking of transaction states (pending/executing/completed/failed)
- Live configuration of debounce/queue/throttle strategies
- Real-time stats dashboard showing transaction counts
- Transaction timeline with mutation details and durations

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: serialized mutations strategy execution and transaction handling

Core fixes:
- Save transaction reference before calling strategy.execute() to prevent null returns when strategies (like queue) execute callbacks synchronously
- Call strategy.execute() on every mutate() call to properly reset debounce timers
- Simplified transaction lifecycle - single active transaction that gets reused for batching

Demo improvements:
- Memoized strategy and mutationFn to prevent unnecessary recreations
- Added fake server sync to demonstrate optimistic updates
- Enhanced UI to show optimistic vs synced state and detailed timing
- Added mitt for event-based server communication

Tests:
- Replaced comprehensive test suite with focused debounce strategy tests
- Two tests demonstrating batching and timer reset behavior
- Tests pass with real timers and validate mutation auto-merging

🤖 Generated with [Claude Code](https://claude.com/claude-code)

* prettier

* test: add comprehensive tests for queue and throttle strategies

Added test coverage for all three mutation strategies:
- Debounce: batching and timer reset (already passing)
- Queue: accumulation and sequential processing
- Throttle: leading/trailing edge execution

All 5 tests passing with 100% coverage on useSerializedMutations hook.

Also added changeset documenting the new serialized mutations feature.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: resolve TypeScript strict mode errors in useSerializedMutations tests

Added non-null assertions and proper type casting for test variables
to satisfy TypeScript's strict null checking. All 62 tests still passing
with 100% coverage on useSerializedMutations hook.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* refactor: convert demo to slider-based interface with 300ms default

Changed from button-based mutations to a slider interface that better
demonstrates the different strategies in action:

- Changed Item.value from string to number (was already being used as number)
- Reduced default wait time from 1000ms to 300ms for more responsive demo
- Replaced "Trigger Mutation" and "Trigger 5 Rapid Mutations" buttons with
  a slider (0-100 range) that triggers mutations on every change
- Updated UI text to reference slider instead of buttons
- Changed mutation display from "value X-1 → X" to "value = X" since slider
  sets absolute values rather than incrementing

The slider provides a more natural and vivid demonstration of how strategies
handle rapid mutations - users can drag it and see debounce wait for stops,
throttle sample during drags, and queue process all changes.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix(demo): improve UI and fix slider reset issue

- Use mutation.modified instead of mutation.changes for updates to preserve full state
- Remove Delta stat card as it wasn't providing value
- Show newest transactions first in timeline for better UX

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix(queue): capture transaction before clearing activeTransaction

Queue strategy now receives a closure that commits the captured transaction instead of calling commitCallback which expects activeTransaction to be set. This prevents "no active transaction exists" errors.

- Capture transaction before clearing activeTransaction for queue strategy
- Pass commit closure to queue that operates on captured transaction
- Remove "Reset to 0" button from demo
- All tests passing

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix(queue): explicitly default to FIFO processing order

Set explicit defaults for addItemsTo='back' and getItemsFrom='front' to ensure queue strategy processes transactions in FIFO order (oldest first).

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* docs: clarify queue strategy creates separate transactions with configurable order

Update changeset to reflect that queue strategy creates separate transactions per mutation and defaults to FIFO (but is configurable).

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* refactor: rename "Serialized Mutations" to "Paced Mutations"

Rename the feature from "Serialized Mutations" to "Paced Mutations" to better reflect its purpose of controlling mutation timing rather than serialization. This includes:

- Renamed core functions: createSerializedMutations → createPacedMutations
- Renamed React hook: useSerializedMutations → usePacedMutations
- Renamed types: SerializedMutationsConfig → PacedMutationsConfig
- Updated all file names, imports, exports, and documentation
- Updated demo app title and examples
- Updated changeset

All tests pass and the demo app builds successfully.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* update lock

* chore: change paced mutations changeset from minor to patch

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: update remaining references to useSerializedMutations

Update todo example and queueStrategy JSDoc to use usePacedMutations instead of useSerializedMutations.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* docs: mention TanStack Pacer in changeset

Add reference to TanStack Pacer which powers the paced mutations strategies.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* docs: clarify key design difference between strategies

Make it crystal clear that debounce/throttle only allow one pending tx (collecting mutations) and one persisting tx at a time, while queue guarantees each mutation becomes a separate tx processed in order.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* docs: add comprehensive Paced Mutations guide

Add new "Paced Mutations" section to mutations.md covering:
- Introduction to paced mutations and TanStack Pacer
- Key design differences (debounce/throttle vs queue)
- Detailed examples for each strategy (debounce, throttle, queue)
- Guidance on choosing the right strategy
- React hook usage with usePacedMutations
- Non-React usage with createPacedMutations

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: remove id property from PacedMutationsConfig

The id property doesn't make sense for paced mutations because:
- Queue strategy creates separate transactions per mutate() call
- Debounce/throttle create multiple transactions over time
- Users shouldn't control internal transaction IDs

Changed PacedMutationsConfig to explicitly define only the properties
that make sense (mutationFn, strategy, metadata) instead of extending
TransactionConfig.

This prevents TypeScript from accepting invalid configuration like:
  usePacedMutations({ id: 'foo', ... })

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: prevent unnecessary recreation of paced mutations instance

Fixed issue where wrapping usePacedMutations in another hook would
recreate the instance on every render when passing strategy inline:

Before (broken):
  usePacedMutations({ strategy: debounceStrategy({ wait: 3000 }) })
  // Recreates instance every render because strategy object changes

After (fixed):
  // Serializes strategy type + options for stable comparison
  // Only recreates when actual values change

Now uses JSON.stringify to create a stable dependency from the
strategy's type and options, so the instance is only recreated when
the strategy configuration actually changes, not when the object
reference changes.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* test: add memoization tests for usePacedMutations

Add comprehensive tests to verify that usePacedMutations doesn't
recreate the instance unnecessarily when wrapped in custom hooks.

Tests cover:
1. Basic memoization - instance stays same when strategy values are same
2. User's exact scenario - custom hook with inline strategy creation
3. Proper recreation - instance changes when strategy options change

These tests verify the fix for the bug where wrapping usePacedMutations
in a custom hook with inline strategy would recreate the instance on
every render.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: stabilize mutationFn to prevent recreating paced mutations instance

Wrap the user-provided mutationFn in a stable callback using useRef,
so that even if the mutationFn reference changes on each render,
the paced mutations instance is not recreated.

This fixes the bug where:
1. User types "123" in a textarea
2. Each keystroke recreates the instance (new mutationFn on each render)
3. Each call to mutate() gets a different transaction ID
4. Old transactions with stale data (e.g. "12") are still pending
5. When they complete, they overwrite the correct "123" value

Now the mutationFn identity is stable, so the same paced mutations
instance is reused across renders, and all mutations during the
debounce window batch into the same transaction.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Refactor paced mutations to work like createOptimisticAction

Modified the paced mutations API to follow the same pattern as
createOptimisticAction, where the hook takes an onMutate callback
and you pass the actual update variables directly to the mutate
function.

Changes:
- Updated PacedMutationsConfig to accept onMutate callback
- Modified createPacedMutations to accept variables instead of callback
- Updated usePacedMutations hook to handle the new API
- Fixed all tests to use the new API with onMutate
- Updated documentation and examples to reflect the new pattern

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
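
A rough sketch of the resulting usage, based on the names in these commits (`itemsCollection`, `itemId`, and `api.save` are placeholders):

```tsx
const mutate = usePacedMutations({
  // Apply the optimistic update; the variables are passed straight to mutate().
  onMutate: (newValue: number) => {
    itemsCollection.update(itemId, (draft) => {
      draft.value = newValue;
    });
  },
  // Persist the merged mutations according to the strategy.
  mutationFn: async ({ transaction }) => {
    await api.save(transaction.mutations);
  },
  strategy: debounceStrategy({ wait: 300 })
});

// Each slider change passes the new value directly.
mutate(42);
```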

* Update paced mutations demo to use new onMutate API

Modified the example to use the new variables-based API where you pass
the value directly to mutate() and provide an onMutate callback for
optimistic updates. This aligns with the createOptimisticAction pattern.

Changes:
- Removed useCallback wrappers (hook handles stabilization internally)
- Pass newValue directly to mutate() instead of a callback
- Simplified code since hook manages ref stability

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>