Synchronization troubleshooting
This technical article covers performance considerations and possible issues during the synchronization process.
Evaluate performance
Evaluating synchronization performance is mainly relevant for full synchronization; measurements taken during an incremental synchronization are not reliable.
One of the most important metrics to evaluate is the download time per record: Time_spent_in_entities_download / #_of_records_downloaded.
- 1 ms/rec = good
- Best times ~0.3 ms/rec
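As a minimal sketch, the metric is simple arithmetic over two figures reported in a full-sync log (the variable names below are illustrative, not actual log field names):

```python
# Illustrative figures read from a FullSync log (variable names are ours)
download_ms = 84_000   # Time_spent_in_entities_download, in milliseconds
records = 120_000      # #_of_records_downloaded

ms_per_record = download_ms / records
print(f"{ms_per_record:.2f} ms/rec")  # prints "0.70 ms/rec"

# Rough interpretation per the guideline above
if ms_per_record <= 0.3:
    verdict = "excellent"
elif ms_per_record <= 1.0:
    verdict = "good"
else:
    verdict = "worth investigating"
print(verdict)  # prints "good"
```

Anything well above 1 ms/rec suggests looking for a bottleneck in one of the modules listed below.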
Identify the problematic module (if any)
- Downloader
- Uploader
- Cleanup
- SharePoint
- Any module that takes too much time...
Possible performance problems:
- Slow connection
- Web latency
- Server performance: overloaded server or a large POA table; often only 1-2 entities cause the problem
- Client performance
  - Device quality (rare)
  - Large data
  - Database indexes (seldom a sync problem)
  - SyncDownloader pauses
What does "large data" mean?
- Overall database size (Database_8_0.sdf on Windows; equivalent SQLite files on other platforms)
  - Normal case: O(100) MB
  - Possible problems: > 500 MB
- Huge tables, i.e. tables with >> 100K records
  - Table record counts can be found in the FullSync detailed log
- Slow Reindex operation, which may take >20 seconds
  - Causes downloader pauses
- Too much data in blobs (attachments, images, cloud documents)
  - Can be measured if you have access to the AppData folder, or via the Storage analyzer on the device
  - The sync log contains only indirect data: attachment count and time
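When you have direct access to the device database file (a SQLite file on non-Windows platforms), table record counts can also be checked outside the sync log. A minimal sketch, using an in-memory database as a stand-in for the real file:

```python
import sqlite3

def table_record_counts(conn):
    """Return {table_name: row_count} for all user tables in a SQLite DB."""
    cur = conn.execute(
        "SELECT name FROM sqlite_master "
        "WHERE type='table' AND name NOT LIKE 'sqlite_%'"
    )
    counts = {}
    for (name,) in cur.fetchall():
        counts[name] = conn.execute(f'SELECT COUNT(*) FROM "{name}"').fetchone()[0]
    return counts

# Demo on an in-memory database standing in for the device DB file
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER)")
conn.executemany("INSERT INTO account VALUES (?)", [(i,) for i in range(3)])
print(table_record_counts(conn))  # prints "{'account': 3}"
```

To inspect a real device database, pass `sqlite3.connect(path_to_db_file)` instead; tables with record counts far beyond 100K are the first candidates for filter review.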
Entity size and filter strategy
Choosing the right filter strategy for each entity is one of the most effective ways to keep sync times reasonable. Match the strategy to the size of the entity on the server, not on the device — what determines sync cost is how much the server has to scan and return:
| Entity size on server | Strategy |
|---|---|
| Tiny / Small (up to ~10 MB) | Usually no sync filter needed. Adding filter logic for tiny entities costs more in query complexity than it saves. |
| Large (up to ~100 MB) | Sync filters recommended. There's a trade-off: less selective filters mean more local storage but faster incremental sync; more selective filters mean less storage but more server-side work each cycle. |
| Huge (~500 MB and above) | Selective filters are usually required. Consider SmartSync for use-case-specific subsets, and review the entity's enabled fields — dropping unused fields is often a bigger win than tightening the filter. |
For attachments and cloud documents, the same logic applies: size limits and filtration criteria prevent runaway blob growth.
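For illustration, on a Dataverse backend a sync filter is typically expressed in Fetch XML. The hypothetical filter below (standard Dataverse entity and attribute names, not taken from this article) limits downloaded accounts to those modified in the last 90 days and owned by the synchronizing user:

```xml
<fetch version="1.0">
  <entity name="account">
    <filter type="and">
      <!-- only recently modified records -->
      <condition attribute="modifiedon" operator="last-x-days" value="90" />
      <!-- only records owned by the synchronizing user -->
      <condition attribute="ownerid" operator="eq-userid" />
    </filter>
  </entity>
</fetch>
```

Note the trade-off described in the table: a more selective filter like this reduces local storage but makes the server re-evaluate more conditions on every incremental sync.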
Did the synchronization complete?
What can happen:
- Aborted synchronization (reported as UserAbort in the log)
- SyncEngine was not able to write the sync log (the SyncLog contains only SYNCSTART info and nothing else)
Common reasons:
- The user switches the app to the background during the synchronization process
  - The operating system needs more resources and kills the app.
  - iOS warns the app that it will be terminated, but the SyncEngine is executing a long-running operation that cannot be interrupted (typically REINDEX).
Other problems
The more you understand the sync process, the better you'll be able to identify the problem.
- Incomplete download (MaxSyncCount)
- Unneeded downloads — records are downloaded and then deleted in cleanup. Check sync filters; often the filter on the client and the filter on related entities don't match, so records arrive only to be immediately discarded.
- Slow upload, mainly in questionnaires: if there are too many upload records, increase MaxExecuteMultiple.
- POA table problems (Dataverse only) — the PrincipalObjectAccess table stores record sharing information. When it grows very large, or when individual users/entities have an unusual number of shares, server-side queries during sync slow down dramatically. Use the POA analyzer to identify which entities and users are responsible. This is one of the most common causes of "1-2 entities take forever" symptoms on Dataverse backends.
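As a hypothetical starting point when the Dataverse TDS (SQL) endpoint is enabled, the distribution of shares per entity type can be inspected with a direct query (the POA analyzer performs this kind of aggregation for you):

```sql
-- Count PrincipalObjectAccess rows per shared entity type;
-- entity types at the top with very large counts are the usual suspects
SELECT objecttypecode, COUNT(*) AS share_count
FROM principalobjectaccess
GROUP BY objecttypecode
ORDER BY share_count DESC;
```

A similar grouping by principal (user/team) can reveal individual users with an unusual number of shares.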
How to investigate problems
- Log files
  - Analyze the Sync log, especially the <Analysis> node
  - Use the SyncStats analyzer if you need to see the big picture across multiple sync logs
  - Switch on Diagnostic Sync Logs (either globally in the project configuration or for a particular device on the Setup screen)
  - Look for problems in other logs (Online log etc.; available from the AboutForm)
  - Try to collect info about the data size (FullSync detailed logs list record counts per table)
- On-device tools
  - Storage analyzer: see what's actually taking up space on the device (entities, blobs, attachments, cloud documents, SmartSync definitions)
  - Change List: local changes waiting for the next sync; useful when troubleshooting upload problems
  - SyncStats analyzer: see above; also available on-device
  - POA analyzer: Dataverse only; inspects the PrincipalObjectAccess table for sharing-related performance issues
- Network inspection
  - Fiddler: inspects HTTP traffic between the device and the server. Note: enabling Fiddler decreases performance, so use it for diagnosis only, not measurement
AI-assisted analysis
Resco Agents is an MCP server that gives an AI assistant (Claude, ChatGPT, Copilot, Cursor, etc.) direct access to your project's sync logs and configuration. Instead of exporting CSV from the Sync Dashboard and reading through it manually, you can ask questions in natural language.
Relevant tools include fetchSyncLogs (query the sync log corpus with a prompt, date range, and user filter), getSyncLogDetail (pull a full log body for one sync session), listSyncFilters / getSyncFilter (inspect the filter that produced a given sync behavior), and dataverse_get_table_stats (record counts, growth history, and indexes for any Dataverse table). See Resco Agents for setup and the full tool list.