Synchronization steps


This is a technical description of the steps performed during a synchronization. Knowing these steps can help you understand and resolve issues that you may encounter.

Connection to the CRM server

Security note: All communication runs over HTTPS; some users additionally use a VPN.

Step 1: Authentication

  • Usually requires a user name and password. (OAuth2 supports multi-factor authentication and can use, for example, supplementary email verification.)
  • Result: a security token (tokens expire after a few hours)

Step 2: WhoAmI -> SystemUserId, OrganizationId, BusinessUnitId

The security token is included in every subsequent server request. Based on the token, the server decides:

  • Which client actions are allowed
  • Which server records are visible to the user
  • ...
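
For Dynamics, the WhoAmI step corresponds to the standard WhoAmIRequest of the Dynamics SDK. A minimal sketch, assuming an already authenticated IOrganizationService:

using Microsoft.Crm.Sdk.Messages;   // WhoAmIRequest / WhoAmIResponse
using Microsoft.Xrm.Sdk;            // IOrganizationService

static class WhoAmIStep
{
    // Returns the SystemUserId (UserId), BusinessUnitId and OrganizationId of the authenticated user.
    public static WhoAmIResponse GetIdentity(IOrganizationService service)
        => (WhoAmIResponse)service.Execute(new WhoAmIRequest());
}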

Uploading local changes

Right at the beginning, the app uploads local changes to the server. This makes sense: even if later phases of the synchronization fail, the server will have received all changes and thus be up to date.

The application logs all changes to the local database in two special tables:

  • rs_changelog - changes in entity tables
  • rs_changelognn - changes in relationship tables

Entities are sorted before upload so that forward references are minimized. This is not always possible, for example, in the case of circular references. Whenever possible, parent records (quote, invoice...) are uploaded before child records (quotedetail, invoicedetail...).

Tip By modifying ChildParentMap (i.e., adding a new parent-child relation), you can influence the upload order; see the sketch below.
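
A minimal sketch of this ordering, assuming a hypothetical ChildParentMap that maps a child entity to its parent; entities not present in the map keep their relative order:

using System.Collections.Generic;
using System.Linq;

static class UploadOrder
{
    // Hypothetical map: child entity -> parent entity (e.g. "quotedetail" -> "quote").
    static readonly Dictionary<string, string> ChildParentMap = new Dictionary<string, string>
    {
        { "quotedetail", "quote" },
        { "invoicedetail", "invoice" },
    };

    // Parents get a smaller depth than their children, so sorting by depth
    // uploads parent records before child records whenever possible.
    // The depth limit guards against circular references, which cannot be fully resolved.
    static int Depth(string entity)
    {
        int depth = 0;
        while (ChildParentMap.TryGetValue(entity, out var parent) && depth < 100)
        {
            entity = parent;
            depth++;
        }
        return depth;
    }

    public static IEnumerable<string> Sort(IEnumerable<string> entities)
        => entities.OrderBy(Depth);   // OrderBy is stable: unrelated entities keep their order
}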

Records within an entity are uploaded sorted either by creation date (Woodford Configuration: Chronological Upload Order) or by last modification date (default).

  • ExecuteMultiple requests: 100 records in a batch
  • Questionnaires need an increased batch size of 500 records per batch (see the sketch below)
Tip Use Change List to display the list of changes in the app.
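
A conceptual sketch of the batching, assuming a hypothetical list of pending changes; the batch size corresponds to MaxExecuteMultiple (100 by default, higher for questionnaires):

using System;
using System.Collections.Generic;

static class UploadBatcher
{
    // Splits pending changes into batches of at most 'batchSize' records,
    // so each batch can be sent as one ExecuteMultiple-style server request.
    public static IEnumerable<List<T>> Batches<T>(IReadOnlyList<T> changes, int batchSize = 100)
    {
        for (int i = 0; i < changes.Count; i += batchSize)
        {
            var batch = new List<T>(Math.Min(batchSize, changes.Count - i));
            for (int j = i; j < i + batchSize && j < changes.Count; j++)
                batch.Add(changes[j]);
            yield return batch;
        }
    }
}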

Special considerations

  • Lookups may present a problem: they may point to a record that does not exist on the server.
  • In some cases, the state of a record cannot be updated immediately. For example, certain prerequisites must be met before a quote can be closed. Therefore, the status change must be performed at the end of the synchronization.
    • As of release 13.2, this special behavior for the status field only applies to the Account entity and all parent entities. Other entities are unaffected. You can disable this behavior per entity; see Upload Status with Data below.
  • We may create a new record on the server but receive no confirmation; on the next attempt, we would try to re-create an already existing record. A similar problem can happen with record updates.
  • Some entities, such as quote, require special processing.
  • Conflicts
    • We may update server record that was meanwhile deleted on the server.
    • Both server and client edited the same record.

The upload

The uploader performs two passes:

  1. The first pass executes "easy" updates: all regular fields from all entities, and lookups, if possible.
    If the update fails, we try to remove the lookups and repeat the update. (Log: Update NoLookups for: account[...])
  2. The second pass executes status changes, owner changes, and lookup values that were not accepted in the first pass. (See the sketch below.)
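
A rough sketch of the two passes. The record type and the helper methods are hypothetical; only the two-pass idea and the "Update NoLookups" fallback come from the text above:

using System;
using System.Collections.Generic;

sealed class TwoPassUploader
{
    public sealed class ChangedRecord
    {
        public string Entity;
        public Guid Id;
        public bool HasStatusOrOwnerChange;
    }

    public void Upload(IEnumerable<ChangedRecord> changes)
    {
        var secondPass = new List<ChangedRecord>();

        // Pass 1: "easy" updates - regular fields and, if possible, lookups.
        foreach (var rec in changes)
        {
            bool deferred = rec.HasStatusOrOwnerChange;     // status/owner changes always go to pass 2

            if (!TryUpdate(rec, includeLookups: true))
            {
                Console.WriteLine($"Update NoLookups for: {rec.Entity}[{rec.Id}]");
                TryUpdate(rec, includeLookups: false);      // retry without lookups
                deferred = true;                            // refused lookups are retried in pass 2
            }

            if (deferred)
                secondPass.Add(rec);
        }

        // Pass 2: status changes, owner changes and lookups that were not accepted in pass 1.
        foreach (var rec in secondPass)
            UploadSpecialFields(rec);
    }

    // Placeholders standing in for the real server calls.
    bool TryUpdate(ChangedRecord rec, bool includeLookups) => true;
    void UploadSpecialFields(ChangedRecord rec) { }
}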

Information about the upload process ends up in the synchronization log:

  • SyncLog contains information about all problems (including conflicts).
  • If you enable sync details (using the Setup form in the Resco Mobile CRM app (parameter Diag. Sync Log) or via Woodford), you will see all upload actions.

Upload Status with Data

When you enable this option in the Woodford entity configuration, status changes for that entity are executed in the first upload pass.

(Salesforce) Force legacy batch upload API

Use the default option (in the project configuration) unless you experience issues with APEX triggers or when recommended by Resco support. The default is faster and consumes fewer API calls.

Result

In the optimal case, all changes are uploaded to the server.

Some changes may fail to upload, due to:

  • Communication problems
  • Server errors
  • Conflicts

In case of upload problems, the Sync Errors form opens in the application:

  • It presents all upload problems (incl. conflicts).
  • It allows users to resolve the conflicts.

Consequences of upload failure

  • ForcedFullSync is refused (unless AppSettings.ForcedFullSyncDateAlways == true).
  • New customization is refused (except for the special case when the new customization leaves the data schema and SyncFilter intact; such a customization is applied).
  • The downloader skips locally changed records (so that the changes are not lost).

Example log files

Standard log

Upload:NoChangesLeft			// Or: SomeChangesNotUploaded + <Error>...</Error>
...
<UploadSync> Entitys=969ms Sent=3 SvrCalls=1 TotalTim=969</UploadSync>

Detailed log

<UploadSync>
<Upload entity=resco_mobileaudit>

1> Create resco_mobileaudit[applicationstate, 5ceaf7df-b283-4cff-8e09-a9e79b9e3bf7]
2> Create resco_mobileaudit[applicationstate, f2d10927-51a6-4db4-9f90-1c9df524b59e]
3> Create resco_mobileaudit[synchronization, 4f20ab68-744d-41bb-9393-40adb057dd6a]
MultiSend> 3 packets in 844ms
ClearLog 5ceaf7df-b283-4cff-8e09-a9e79b9e3bf7
ClearLog f2d10927-51a6-4db4-9f90-1c9df524b59e
ClearLog 4f20ab68-744d-41bb-9393-40adb057dd6a
</Upload>

</UploadSync>

Upload:NoChangesLeft
...
<UploadSync> Entitys=969ms Sent=3 SvrCalls=1 TotalTim=969</UploadSync>

Detailed log with file column operations

       <Upload entity=account>
              account PrepareFileActions=122ms
              1> Update account[JohnV, 10407f42-2cfe-4ce5-87b2-e1ee9df1b338]
              2> Update account[JaneQ, 8ed4e5c5-2e4c-4841-ac33-357dbb000e09]
              MultiSend> 2 packets in 9828ms
              DeleteFile account[JohnV, 10407f42-2cfe-4ce5-87b2-e1ee9df1b338].cr180_filetypecolumn1 - 2ae404fe-7e0f-ee11-8f6d-6045bd8958b2 -> 5360ms
              UploadFile account[JohnV, 10407f42-2cfe-4ce5-87b2-e1ee9df1b338].cr180_filetypecolumn3 - 5.txt -> 672ms
              ClearLog 10407f42-2cfe-4ce5-87b2-e1ee9df1b338
       </Upload>

Note: The PrepareFileActions line represents an extra web request needed for file deletes. The line is omitted if this action is not needed or when the server request completes very quickly (currently, under 100 ms).

Customization

In the next step of synchronization, the app checks licenses and updates the customization (i.e., downloads and applies a new app project).

License check

To check licenses, the app connects to the Resco licensing service at iservices.resco.net.

The result of the license check is written into the synchronization log. The result can be either success (License: Valid) or one of the following errors:

Trial Expired | NOT FOUND | EXPIRED | INVALID ORGANIZATION | INVALID USER | EDITION NOT ALLOWED | URL NOT ALLOWED | INVALID SIGNATURE | INSUFFICIENT

Device configuration (remote device control)

The app sends device description to the backend server, including the following parameters:

  • SystemUserId
  • Device ID, name, operating system, version of the mobile CRM app
  • Device state (Normal | Wiped | Locked)
Tip App users can find their device ID in the About form and in the log file.

This information is stored on the backend server (Dynamics or Resco Cloud), in the entity resco_mobiledevice (Dynamics) or mobiledevice (Resco). Installations with Salesforce store this information on Resco Cloud.

When a device registers at the server during its first synchronization, a new record is created. Each device has exactly one such record.

What happens in the app during synchronization:

  1. App downloads its resco_mobiledevice record from the server.
  2. App performs the necessary management actions (for example, locking the device) or replaces the security policy. (The administrator can plan these actions using Woodford's Device control function. Technically, Woodford updates the resco_mobiledevice record for that device.)
  3. App updates the record on the server.

Download app project from backend server

  1. Download resco_mobileproject for this user (settings.SystemUserId).
    • If the project is not found (log "Customization NOT FOUND") => Switch to DEMO mode (delete customization, WWW...) & exit.
    • If the project did not change (log "Customization NOT MODIFIED") => Exit.
  2. Check whether the version of the app supports the customization. Users running an obsolete version of the app receive the error "Your Server customization requires new version of the Application. Please Update your Application."
  3. If this is a background synchronization, it ends here and the app's UI indicates that an update is waiting and a foreground synchronization is necessary.
  4. Download (up to) three customization files. We store hashes of the previously downloaded files; only changed files are downloaded.
  5. Unpack customization.zip
  6. Check license: license.ValidateEntities(). This can result in an exception "LICENSE: INSUFFICIENT".
    At this point, we have two customizations: the old one and the new one. The old one is active. The new one is unpacked, except for the WWW/Images data.
  7. Adapt database schema to the new customization:
    • Drop tables that were removed.
    • Create new tables.
    • Drop columns that were removed.
    • Add columns that were added. (This forces full synchronization for the table.)
    AppSettings.DatabaseSchemaHash <- new database schema
  8. Apply server settings (= Woodford Configuration):
    The configuration is in the file ServerConfig.generated.bin.
    UseFlexiForms,636661280237246393,1,True		// Setting, version, readonly, value
    MaxImageSize,636665620336850414,1,Max_2048x1536
    RecordQuality,636661324791448237,1,Low
    ...
    
  9. Unpack the WWW/Images data. This overwrites the old customization.
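
Step 4 above (download only changed customization files) can be sketched as follows; the class and helpers are illustrative, only the idea of comparing stored hashes comes from the step description:

using System.Collections.Generic;

sealed class CustomizationDownloader
{
    readonly Dictionary<string, string> storedHashes = new Dictionary<string, string>();

    public void DownloadChangedFiles(IEnumerable<(string Name, string ServerHash)> serverFiles)
    {
        foreach (var file in serverFiles)
        {
            if (storedHashes.TryGetValue(file.Name, out var localHash) && localHash == file.ServerHash)
                continue;                                // unchanged since the last sync - skip

            DownloadFile(file.Name);                     // e.g. customization.zip
            storedHashes[file.Name] = file.ServerHash;   // remember the hash for next time
        }
    }

    void DownloadFile(string name) { /* server call */ }
}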

Summary: At this moment, the new customization is fully activated, except for one thing: the app remembers that it is in the process of changing the customization. Later, when the sync completes (successfully or not) and the updated user interface is loaded, this transitional state is cleared.

In the opposite case (when the app is terminated during synchronization or the new user interface fails to load), at the next app start, MCRM deletes the database and returns to its previous customization. However, the WWW/Images folders are not reverted.

Forced full synchronization

Forced full synchronization essentially means one thing: delete the local data. This can be done by the user, but it can also be initiated by an admin via Woodford.

How it works: When the Woodford admin enables Full Sync (using Device control), the respective attribute (resco_forcefullsyncon) in the resco_mobiledevice record belonging to the client's device is set to the current date and time. This value is passed to the client during synchronization. The client compares this date/time with the time of the last synchronization; if the last synchronization started prior to the Woodford device control action, the client attempts a full synchronization.

In addition to the date/time value in ForcedFullSyncDate (which can also be null), there is also the flag ForcedFullSyncDateAlways. When enabled, full synchronization continues even if there are unsynchronized local changes. The changes are deleted and everything is re-downloaded from the server.

Caveats:

  • If this is a background synchronization, it is aborted and an error is displayed: "Please Start Sync Manually: Administrator Forced Full Sync".
  • If ForcedFullSyncDateAlways is false and there are unsynchronized local changes, synchronization is aborted with error "Can't execute Full Sync forced by Administrator because upload of local changes failed".
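
A sketch of the decision, combining the date comparison with the caveats above; only ForcedFullSyncDate, ForcedFullSyncDateAlways, and the two error messages come from the text, the remaining names are illustrative:

using System;

static class ForcedFullSyncCheck
{
    public static bool ShouldRunFullSync(
        DateTime? forcedFullSyncDate,      // resco_forcefullsyncon value received from the server
        DateTime lastSyncStart,
        bool forcedFullSyncDateAlways,
        bool hasUnsyncedLocalChanges,
        bool isBackgroundSync)
    {
        // Admin did not request a full sync, or the last sync already started after the request.
        if (forcedFullSyncDate == null || lastSyncStart >= forcedFullSyncDate.Value)
            return false;

        if (isBackgroundSync)
            throw new InvalidOperationException("Please Start Sync Manually: Administrator Forced Full Sync");

        if (hasUnsyncedLocalChanges && !forcedFullSyncDateAlways)
            throw new InvalidOperationException("Can't execute Full Sync forced by Administrator because upload of local changes failed");

        return true;                       // delete local data and redownload everything
    }
}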

Download permissions

This step is performed:

  • during full synchronization
  • when the new customization contains new entities
  • at the first synchronization after app start
  • (Dynamics) when Field Level Permissions is set to true in Woodford configuration

Permissions include:

  • Entity privileges (Create, Read, Write, Delete, Append, AppendTo)
  • Field-level permissions (if AppSettings.SyncFieldLevelPermissions is enabled; CRM5 only)
  • List of teams the current user is a member of (if AppSettings.Teams is enabled)

In addition, a number of other global settings and information items are downloaded from the server, for example, in the case of CRM5:

  • OrganizationId, BusinessUnitId
  • CurrencyDecimalPrecision, CurrencyDisplayOption, CurrencyFormatCode, UserDefaultCurrencyId
  • FullNameConventionCode
  • DiscountCalculationMethod

Salesforce-specific information

  • The download permissions step is executed for each customization update and on the first sync after the application is restarted.
  • Field-level permissions are always applied (AppSettings.SyncFieldLevelPermissions is ignored).
  • A few extra server actions are executed during each sync: layout configurations are updated (unless multi-layout support is disabled).
Note In the permission download phase, we download all security roles assigned to the user and remove the roles that don't contain permissions for entities found in the customization, or those that are not assigned to the project itself.

Cleaning up deleted records

The app downloads a list of records deleted from the server (and also the list of records where ownership changed) and deletes these records from the local database.

  • DeletePlugin is a Resco module that can be installed on the Dynamics server. It collects information about deleted records, as well as records whose ownership changed. We recommend activating the DeletePlugin on the Dynamics server; without it, server-side deletions or ownership changes are not propagated to the client. See Plugins for more information.
  • Salesforce maintains a RecycleBin, which serves a similar purpose. It has its own set of problems, such as limited retention. Also, information about ownership changes is not available for Salesforce.
  • Resco Cloud offers a similar functionality out of the box, covering both deletions and ownership changes without any additional configuration.


Incremental and full synchronization

Full synchronization fetches all records that match the Sync Filter.

Incremental synchronization fetches all records changed since the last synchronization.

In both cases, the synchronization performs a cleanup as the last step: records that do not match Sync Filter are deleted.

For more information about the differences between full and incremental sync, see Types of synchronization. For information about how to force full sync for an entity, see Managing entities.

When to use Always Full Sync for an entity?

Let's say we have a sync filter that only downloads active contacts. If a contact is deactivated on the server, then this record no longer matches the sync filter.

A naive implementation of incremental sync that downloaded only changed records matching the sync filter would miss this change, i.e., the record would remain on the client. Therefore, Resco Mobile CRM performs incremental sync in two steps: 1. Download all changes. 2. Delete records that do not match the sync filter.

This is a safe algorithm. However, in some cases, it can be inefficient because of increased traffic (we may download records that are deleted in the cleanup phase).

Therefore, when your incremental sync downloads too many records that are then deleted in the cleanup, consider enforcing full sync for this entity. See Sync Filter and incremental sync for a more detailed analysis of how these two can be combined.
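
A minimal sketch of the safe two-step algorithm described above, with hypothetical storage and filter delegates standing in for the real database layer:

using System;
using System.Collections.Generic;

static class SafeIncrementalSync
{
    public static void Run<T>(
        IEnumerable<T> changedOnServer,      // all records changed since the last sync
        List<T> localTable,
        Action<T, List<T>> storeOrUpdate,
        Predicate<T> matchesSyncFilter)
    {
        // Step 1: store every changed record, even if it no longer matches the sync filter.
        foreach (var record in changedOnServer)
            storeOrUpdate(record, localTable);

        // Step 2 (cleanup): delete local records that do not match the sync filter,
        // e.g. a contact that was deactivated on the server.
        localTable.RemoveAll(r => !matchesSyncFilter(r));
    }
}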

Example

Download all accounts except inactive:

  <fetch version="1.0">
    <entity name="account">
      <filter type="and"> <condition attribute="statuscode" operator="ne" value="2" /> </filter>
    </entity>
  </fetch>
  1. Sync1: Account A is downloaded to the client using FullSync-fetch.
  2. Account A is deactivated on the server.
  3. This record won't be downloaded using SyncFilter-fetch.
  4. This record will be downloaded using IncSync-fetch. It will be later deleted in the cleanup phase.

SyncAnalyzer

This step is used during incremental synchronization, unless Sync Analyzer is disabled in the configuration.

For each entity, it answers the question: is it better to download all records changed since the last synchronization, or to download all records that currently match the SyncFilter? (See the sketch below.)
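
Conceptually, the analyzer compares, per entity, the number of records changed since the last sync with the number of records currently matching the SyncFilter, and picks the cheaper path. A simplified sketch (special cases such as UploadOnly, FirstSync, ForcedFullSync, or CantAnalyze from the log below are omitted):

static class SyncAnalyzerSketch
{
    public enum SyncMode { Skip, IncSync, FullSync }

    public static SyncMode Analyze(int changedSinceLastSync, int? matchingSyncFilter)
    {
        if (changedSinceLastSync == 0)
            return SyncMode.Skip;             // e.g. "transactioncurrency Inc:0 -> Skip"

        if (matchingSyncFilter.HasValue && matchingSyncFilter.Value < changedSinceLastSync)
            return SyncMode.FullSync;         // e.g. "annotation Inc:1166 Full:34 -> FullSync"

        return SyncMode.IncSync;              // e.g. "contact Inc:15 -> IncSync"
    }
}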

The results are written to the log:

<SyncAnalyzer SkipEntitys=35 Tim=1375ms/>
<SyncAnalyzer <FullSync sf_contentdocumentlink sf_leadstatus sf_opportunitystage/> SkipEntitys=24 Tim=640ms/>

Logged details (optional):

*** SyncAnalyzer ***
contact Inc:15 -> IncSync
annotation Inc:1166 Full:34 -> FullSync (750ms)
koo_herbagetestresult: CantAnalyze(SyncLinked)
koo_soiltestresult: CantAnalyze(SyncLinked)
duplicaterule: ForcedFullSync
resco_chatpost: UploadOnly
resco_chatcomment: UploadOnly
resco_mobileaudit: UploadOnly
duplicaterulecondition: ForcedFullSync
resco_mobilereport: FirstSync
salesorderdetail: FirstSync
businessprocessflowinstance: FirstSync
transactioncurrency Inc:0 -> Skip
pricelevel Inc:0 -> Skip
businessunit Inc:0 -> Skip

Advantages and disadvantages

  • Helps you avoid excessive downloads.
  • It is fast (uses 5 threads simultaneously).
  • On the other hand, SyncAnalyzer increases the number of API calls. (The pricing policy of CRM servers is often directly related to the number of API calls.)

RowVersion versus LastSyncDate

Database tables often have a hidden column, RowID, unless the database admin explicitly requested that this column not be created. For each write operation, this column is filled from a global database counter: table.RowID = ++GlobalRowID

Initially (during full synchronization), we request all records that match the SyncFilter condition. The fetch response usually contains the so-called MinActiveRowVersion.

<b:MinActiveRowVersion>3329419</b:MinActiveRowVersion>

If we receive this value, we store it for each table. During the subsequent (incremental) synchronization, we request all records whose RowVersion is greater than the stored value.

If the response does not contain MinActiveRowVersion

<b:MinActiveRowVersion>-1</b:MinActiveRowVersion>

we store the current date and time instead. This is less exact, but good enough.

We store these values in the rs_sync2 table (one row per table). The table is updated after the table download successfully completes.

[rs_sync2]
table			LastSyncDate		maxrowversion
rs_delete_tracking	12/4/2018 9:38:41 AM
progres_country		12/4/2018 9:38:41 AM	569257891
businessunit		12/4/2018 9:38:42 AM	569257891
systemuser		12/4/2018 9:38:44 AM	569257891
resco_mobileaudit	12/4/2018 9:38:44 AM
team			12/4/2018 9:38:45 AM	569257893
...

The parameters LastSyncDate and maxrowversion are sometimes called anchors.
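
An illustration of how the stored anchors translate into the incremental fetch condition; the attribute names (versionnumber, modifiedon) follow the Dynamics convention mentioned in the settings section, and the real fetch built by the app may differ:

using System;

static class IncSyncAnchor
{
    public static string Condition(long? maxRowVersion, DateTime lastSyncDate)
    {
        return maxRowVersion.HasValue
            // RowVersion anchor available: exact change tracking.
            ? $"<condition attribute='versionnumber' operator='gt' value='{maxRowVersion.Value}' />"
            // Otherwise fall back to the stored date/time (less exact, but good enough).
            : $"<condition attribute='modifiedon' operator='gt' value='{lastSyncDate:o}' />";
    }
}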

Synchronization can be interrupted. In such a case:

  • Some anchors may be missing. The affected tables will be full-synced next time.
  • Some tables may have old anchors. Only part of the changes was downloaded; they will be re-downloaded next time.

SyncEntities

In this step, we download and apply the entity changes for normal tables and relationship (NN) tables.

foreach "normal" entity
{
	Start transaction
		Download server records (Either FullSync or IncSync fetch is used)
		If this is activity-entity: Download ActivityParty's for this entity
		For entity "list": Download "listmember" records
	Commit transaction
}

Remarks:

  • This module uses a multi-downloader: three simultaneous download threads (described in a separate document, SyncSetup.txt).
  • Page size = 500, except for emails (100) // Can be changed
  • FullSync drops indexes before the download and re-creates them at the end. This optimizes performance.
  • Huge tables: after every 20,000 records, there is an intermediate commit. This prevents huge (>> 1 GB) WAL log files.
  • Background sync (BkSync) checks whether there is a waiting writer. If so, it temporarily pauses the transaction (commit, pause... re-open transaction).
  • Records changed locally (if any) are never overwritten.

foreach NN-entity
{
	Download server records (FullSync or IncSync)
}

Remarks:

  • NN-entities use a page size of 2500.
  • The transaction is open only for short periods (while storing one page), so it is basically non-blocking.

Cleanup

The cleanup step is performed for each entity: we delete records that do not match the SyncFilter.

Special cleanup actions:

  • Delete old audit records that were uploaded to the server.
  • Delete activityparty records that do not belong to any activity.
  • Delete orphaned annotations.

<Cleanup ... Sum:140ms, Deleted 2869+0nn />
<Cleanup 265ms, Deleted 1+0nn />
<Cleanup ActivityParty*419 annotation.orphans*529 Sum:141ms, Deleted 1016+0nn />
<Cleanup 
	contact*413/3625ms 
	atx_dlsrequest*0/344ms 
	atx_visittopic*0/344ms 
	email*0/313ms 
	activitymimeattachment*0/359ms 
	atx_additionalinstrumentimplant*41857/1188ms 
	annotation*0/407ms 
	atx_navnotes*3743/765ms 
	ActivityParty*22630/672ms 
	annotation.orphans*11556/2172ms 
	listmember*24866/421ms 
	Sum:13719ms, Deleted 80246+24987nn />

Notes:

  • Cleanup of a large database may take a long time.
  • We avoid partial cleanup actions if we know that they are not needed. (For example, the entity was not changed and the SyncFilter is not time-dependent.)
  • There is no cleanup if the sync was interrupted.

If there are too many deletes, try to find out why.

  • In the case of incremental synchronization: Wouldn't it be better to set ForcedFullSync for the given entity?
  • In the case of full synchronization: Can you improve the SyncFilter?

There is additional cleanup related to topics not discussed yet, for example, documents.

Interrupted synchronization

Some errors are fatal in the sense that they interrupt synchronization. Typical reasons for interruption are:

  • Dropped connection
  • Program error

Synchronization is a step-by-step process. If it is interrupted during a certain step, the following steps are not executed.

Hence, if the sync is interrupted when syncing some entity:

  • Some entities may be completely updated, while the remaining ones may not be updated at all.
  • The entity that was interrupted may be partially updated. (Downloads are committed in steps of 20,000 records.)
  • Cleanup was not executed.

Hence the data is workable, but be careful. Your next synchronization will resume where the last sync stopped. For partially updated entities, the records downloaded during the last sync will be re-downloaded.

If you have a huge database with millions of records, there is nothing wrong if the sync executes in several steps.

Special actions

Attachments

Quick facts about attachment sync:

  • Attachment entity records are downloaded along with other entities - the actual attachment body is omitted and downloaded later, after the cleanup phase.
  • Only attachments related to records existing in the offline database will be downloaded.
  • Document filters allow you to further restrict the download of attachments; for example, you can specify max attachment size.

What are attachments?

In case of Dynamics CRM:

  • activitymimeattachment.body
  • annotation.documentbody
  • salesliteratureitem.documentbody

In case of Salesforce:

  • sf_attachment.body
  • sf_document
  • sf_quotedocument.document
  • sf_contentversion.versiondata
  • *__kav.*__body__s

All these attributes can contain large data. Hence they are processed separately and stored as record attachments.

Remark for annotations (notes): An annotation can contain a text note, an attachment, or both.

CREATE TABLE [annotation] (
    [annotationid]   UNIQUEIDENTIFIER PRIMARY KEY,
    [createdon]      DATETIME,
    [documentbody]   TEXT,		// Null or blob that is stored as attachment (filename content)
    [filename]       TEXT,		// Label for documentbody
    [filesize]       INT,
    [isdocument]     BIT,		// True if documentbody!=null
    [mimetype]       TEXT,
    [modifiedon]     DATETIME,
    [notetext]       NTEXT,		// Note text, stored as normal attribute in the DB
    [objectid]       UNIQUEIDENTIFIER,  // Record to whom this note belongs; 1 record can have multiple notes
    [objecttypecode] TEXT,
    [ownerid]        UNIQUEIDENTIFIER,  // Owner of the note
    [subject]        TEXT,              // Note title
    [objectidTarget] TEXT,
    [owneridTarget]  TEXT
);

How are attachments downloaded during sync?

  1. Collection: Download of annotation (activitymimeattachment, salesliteratureitem) records. The attribute documentbody is excluded from the download.
    For each downloaded record
    {
    	if( isdocument == true &&
    		fileSize <= AppSettings.MaxAttachmentSize &&
    		BlobStoreFilter.AttachmentFilter.Matches("annotation", fileSize, fileName, objectidTarget) )
    	{
    		Store (annotationid, "annotation", fileSize) -> AttachmentQueue table
    	}
    }
    
  2. Attachments download. This is a separate synchronization step executed once the download of "normal" entity records is over. In case of a foreground synchronization, the app shows "Attachments" in the synchronization progress bar.
    Download jobs (*) stored in AttachmentQueue table are executed:
    • Small attachments are downloaded in parallel (10 threads)
    • Large attachments are downloaded sequentially one by one.
    (*) This is actually a fetch of the annotation record with the given ID, but this time only one column is requested: documentbody.
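
A sketch of step 2 (draining the AttachmentQueue), assuming a hypothetical queue item shape; the 10-thread limit for small attachments and the sequential handling of large ones come from the list above, while the size threshold itself is only illustrative:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

sealed class AttachmentDownloader
{
    const long LargeThreshold = 1_000_000;   // illustrative boundary, not a documented value

    public async Task DrainAsync(IReadOnlyList<(Guid Id, string Entity, long Size)> queue)
    {
        var small = queue.Where(a => a.Size <= LargeThreshold).ToList();
        var large = queue.Where(a => a.Size > LargeThreshold).ToList();

        // Small attachments: up to 10 parallel downloads.
        using (var slots = new SemaphoreSlim(10))
        {
            var tasks = small.Select(async a =>
            {
                await slots.WaitAsync();
                try { await DownloadBodyAsync(a.Entity, a.Id); }   // fetch only the body column
                finally { slots.Release(); }
            }).ToList();
            await Task.WhenAll(tasks);
        }

        // Large attachments: sequentially, one by one.
        foreach (var a in large)
            await DownloadBodyAsync(a.Entity, a.Id);
    }

    // Placeholder for the real server fetch of a single attachment body.
    Task DownloadBodyAsync(string entity, Guid id) => Task.CompletedTask;
}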

Azure attachments

Dynamics 365 users may store note and email attachments using Azure Blob storage. This storage is much cheaper than storing large files directly in Dynamics.

Using this external storage requires adjusting the fetches that MCRM normally uses. To support Azure Blob storage, set AppSettings.SyncAzureAttachments=true.

Note This is a rather undocumented area. So far, MCRM supports annotation attachments; possibly activitymimeattachment attachments should be included, too.

How are the attachments stored?

  • Blob\annotation\{recordID}
  • Blob\activitymimeattachment\{recordID}
  • ...

Attachments FAQ

What happens to non-downloaded (refused) attachments?

  • (For example, those exceeding MaxAttachmentSize)
  • These attachments are downloaded on-demand, when the respective form is opened. The app knows that the attachments exist and it displays a download button.

What happens when the sync is aborted during the attachment download?

  • Attachments that were already downloaded are removed from the AttachmentQueue periodically (about once per second) during the download.
  • Non-downloaded attachments remain in the queue and will be downloaded during the next sync.

Asynchronous attachment download

  • (If AppSettings.AsyncAttachmentsDownload==true; only applies to foreground synchronization)
  • The synchronization engine initiates the attachment download, but does not wait until it completes. Synchronization is gracefully terminated (attachment download is usually the last major step), progress switches to background and users can continue working with MCRM.

Download multiple attachments in one batch

Since release 14.2, you can modify the behavior of the sync downloader to combine the download of several attachments into one batch request. (By default, attachments are downloaded one by one.) This option can be enabled in the project configuration, in the setting Download Multiple Attachments.

Limitations:

  • Available for Dynamics or Resco Cloud servers and (so far) annotations only.
  • The maximum attachment size eligible for batch download is 100 KB.
  • One batch combines attachments with a total size of up to 10 MB. (See the packing sketch below.)
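
A sketch of how eligible attachments could be packed into batches under the limits above; the greedy packing strategy itself is an assumption:

using System;
using System.Collections.Generic;

static class AttachmentBatcher
{
    const long MaxItemSize  = 100 * 1024;        // 100 KB per-attachment limit (from the list above)
    const long MaxBatchSize = 10 * 1024 * 1024;  // 10 MB total per batch (from the list above)

    public static IEnumerable<List<(Guid Id, long Size)>> Pack(IEnumerable<(Guid Id, long Size)> queue)
    {
        var batch = new List<(Guid Id, long Size)>();
        long batchSize = 0;

        foreach (var item in queue)
        {
            if (item.Size > MaxItemSize)
                continue;                        // too large for batching - downloaded individually

            if (batchSize + item.Size > MaxBatchSize && batch.Count > 0)
            {
                yield return batch;              // current batch is full - emit it
                batch = new List<(Guid Id, long Size)>();
                batchSize = 0;
            }

            batch.Add(item);
            batchSize += item.Size;
        }

        if (batch.Count > 0)
            yield return batch;                  // emit the last, partially filled batch
    }
}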

Effect on API calls (examples):

  • If your attachments are predominantly images with an average size of 50 KB, you may decrease the number of API calls used for attachment download by more than 100x (at 50 KB per attachment, a 10 MB batch holds roughly 200 attachments).
  • If you typically use PDF attachments with an average size over 1 MB, there will be no savings on API calls (such attachments exceed the 100 KB limit and are downloaded individually).

Effect on performance:

  • Attachments are binary files downloaded using BASE64 encoding, which makes them only weakly compressible. Hence the batch downloader typically won't change the amount of data transferred.
  • The second important fact is that the default downloader already downloads attachments in parallel and often uses the maximum available throughput.
  • Hence, apart from savings in web latency, we cannot expect a noticeable performance increase in general. The exception is when the client data contains many duplicated annotations (which can be considered a design bug, but it happens); duplicates allow for higher compression of the transferred data and a noticeable performance increase.

How to analyze the attachment download:

In the sync log, look for these items:

<Attachments Recv='211' Skipped='1' TotalTim='10563ms' />
...
<ApiCalls DelPlugin='2' SyncAnalyzer='31' Downloader='30' NNDownloader='3' Images='2' Attachments='54' Total='182' />

In this case, 211 attachments were downloaded in 10.5 seconds using 54 web requests (API calls).

BlobStoreFilter

Blob store filters allow you to restrict which files are downloaded from external storage, for example, a SharePoint server.

See Document filters for more details.

<?xml version="1.0" encoding="utf-8"?>
<BlobStoreFilter>
  <FilesFilter>
    <Quota>3000000000</Quota>
    <MaxFileSize>30000000</MaxFileSize>

    <Filters>
      <BlobStoreFileTypeFilter>
        <FileSuffix>jpg</FileSuffix> <MaxFileSize>1000</MaxFileSize>
      </BlobStoreFileTypeFilter>

      <BlobStoreFileTypeFilter>
        <FileSuffix>doc</FileSuffix> <MaxFileSize>1000000</MaxFileSize>
      </BlobStoreFileTypeFilter>

      <BlobStoreFileTypeFilter>
        <FileSuffix>rtf</FileSuffix> <MaxFileSize>0</MaxFileSize>
      </BlobStoreFileTypeFilter>

      <BlobStoreFileTypeFilter>
        <FileSuffix>*</FileSuffix> <MaxFileSize>1000</MaxFileSize>
      </BlobStoreFileTypeFilter>
    </Filters>
  </FilesFilter>

  <AttachmentsFilter>...</AttachmentsFilter>
</BlobStoreFilter>

Examples of standalone AttachmentsFilter elements:

<AttachmentsFilter>
    <MaxAttachmentSize>1000000</MaxAttachmentSize>
    <NotSyncingAttachmentsFor>
      <string>contact</string>
      <string>campaign</string>
    </NotSyncingAttachmentsFor>
</AttachmentsFilter>
<AttachmentsFilter>
    <MaxAttachmentSize>1000000</MaxAttachmentSize>
    <OnlySyncingAttachmentsFor>
      <string>contact</string>
      <string>campaign</string>
    </OnlySyncingAttachmentsFor>
</AttachmentsFilter>
<AttachmentsFilter>
    <EntityFilters>

      <EntityAttachmentsFilter Name="annotation" AttachedTo="account">
        <Quota>1000000</Quota>				// Quota is ignored (Not implemented) for attachments
        <MaxFileSize>100001</MaxFileSize>
        <Filters>
          <BlobStoreFileTypeFilter>
            <FileSuffix>jpg</FileSuffix> <MaxFileSize>5000</MaxFileSize>
          </BlobStoreFileTypeFilter>
          <BlobStoreFileTypeFilter>
            <FileSuffix>doc</FileSuffix> <MaxFileSize>10000</MaxFileSize>
          </BlobStoreFileTypeFilter>
        </Filters>
      </EntityAttachmentsFilter>

      <EntityAttachmentsFilter Name="annotation">
        <Quota>-1</Quota>
        <MaxFileSize>12345</MaxFileSize>
        <Filters />
      </EntityAttachmentsFilter>

      <EntityAttachmentsFilter Name="activitymimeattachment">
        <Quota>-1</Quota>
        <MaxFileSize>-1</MaxFileSize>
        <Filters />
      </EntityAttachmentsFilter>
    </EntityFilters>
</AttachmentsFilter>

DownloadEntityImages

In Dynamics, entity images are icon-sized images that can be defined for each record of selected entities. For example, you can associate a photo to a person record.

Only some entities support images; such entities have the entityimage attribute and its related image columns.

The following system entities support images (however, only some of them are enabled by default):

  • Account, Contact
  • Product, Resource
  • Lead, Competitor
  • SalesOrder, Invoice, OpportunityProduct
  • Incident, Contract, Goal, Campaign
  • KbArticle, SalesLiterature
  • Connection, TransactionCurrency
  • EmailServerProfile, Mailbox
  • Queue, Territory
  • SystemUser, Publisher, Organization

During the image download phase of the synchronization process, the app shows "Images" in the progress bar.

We execute the following for each entity that has the "entityimage" attribute:

{
    Fetch all record IDs with image timestamp > previously stored fetch time
    {
        Add record ID to the AttachmentQueue
    }
    Store TimeWhenThisFetchWasExecuted permanently (it becomes the threshold for the next sync).
}

Actual images are downloaded later together with attachments.

The images are stored in the same way as attachments, e.g., as Blob\Account\{recordID}. There is no conflict with attachments because attachment entities cannot have images.

SyncSharedEntities

Sharing is a means of making records accessible that would normally be invisible to the current user. The synchronization of shared records requires special treatment. See shared records for more information.

Cloud document servers

Multiple solutions exist for saving and sharing documents online, for example: SharePoint, OneDrive, GoogleDrive, DropBox, Box, etc. Users may use any number of them.

Basic idea: A CRM record may have a cloud folder (or multiple folders in the case of SharePoint) containing one or more attachments. These attachments are listed on the Documents tab of the record's DetailView. A single attachment cannot be assigned to multiple CRM records.

Cloud servers must maintain a specific folder structure that makes it possible to determine the document location.

SharePoint

SharePoint is a web-based collaborative platform that integrates with Microsoft Office. Launched in 2001, SharePoint is primarily sold as a document management and storage system, but the product is highly configurable and supports a wide range of protocols and usage scenarios, so usage varies substantially among organizations.

Logical storage

  • Site collections
    • Sites (top level site)
      • Sites (optional sites)
      • Lists
        • Lists items organized in a folder hierarchy (task, announcement, contact)
      • (Document) libraries
        • Documents (with their associated metadata) organized in a folder hierarchy

List items and documents (the basic units of content) have unique URLs.

Users can create lists and customize them with new columns, forms, and views.

SharePoint integration with Microsoft Dynamics

Dynamics CRM sees SharePoint as a set of document libraries that store documents related to CRM records; other SharePoint aspects are ignored.

  • Cloud folders (subfolders of document libraries) are assigned to CRM records.
  • Files in these subfolders are record attachments.

In the standard integration, the CRM admin selects entities (account, contact…) that can have SharePoint attachments. In turn, the respective SharePoint document libraries are created: one library per entity, with library names identical to the entity names.

Formally, the Dynamics server contains multiple entities that describe the locations of SharePoint documents managed by Dynamics CRM:

  • sharepointsite entity records describe SharePoint sites.
  • sharepointdocumentlocation records (“locations”) contain a URL (subfolder of some SharePoint document library) and a reference to the CRM record (regardingobjectid) to which the documents are attached.
  • sharepointdocument entity: From Resco CRM point of view, this entity contains redundant information; do not use it in your projects.
CREATE TABLE sharepointsite (
    sharepointsiteid UNIQUEIDENTIFIER PRIMARY KEY,       // PK
    name TEXT,
    absoluteurl TEXT, relativeurl TEXT,                  // Absolute/Relative URL of the SharePoint site.
    parentsite UNIQUEIDENTIFIER, parentsiteTarget TEXT,  // Parent SharePoint site
);
CREATE TABLE sharepointdocumentlocation (
    sharepointdocumentlocationid UNIQUEIDENTIFIER PRIMARY KEY,
    name TEXT,
    regardingobjectid UNIQUEIDENTIFIER, regardingobjectidTarget TEXT, // Points to CRM record that "owns" attachments in this location
    absoluteurl TEXT, relativeurl TEXT,
    parentsiteorlocation UNIQUEIDENTIFIER,               // FK sharepointdocumentlocation|sharepointsite. Relation sharepointdocumentlocation_parent_sharepointsite uses cascading delete.
    parentsiteorlocationTarget TEXT,
);

Default folder structure:

  • <entity_type_name> (Document location RootFolder)
    • <record_name>_<record_id> (Subfolder for specific entity record)
      • <file_name> (Cloud document attached to the record)

SharePoint synchronization

a) Using the old List service API
  1. For each site: GetListCollection (does not return the list RootFolder, hence we need additional GetList calls)
  2. For each potentially interesting list: GetList (get list details)
  3. Get the list content (file names and attributes): GetListItems
  4. New files that match the BlobStoreFilter are stored in the download queue

b) Using the new REST API (enable the SharePoint REST API parameter in the Woodford configuration)

The same process, except that step 2 is skipped. Faster.

c) FileDownloadQueue

Download requests are stored in the rs_documentQueue table. The download starts as soon as the first request is enqueued. From that moment on, new requests are added to the queue while earlier ones are processed in parallel using 10 threads. The queue runs asynchronously and is also used by other file engines (if any). However, after the file engines' sync is over, SyncEngine waits for the queue to complete. If the sync is interrupted, the queue download is resumed at the next sync.

Log examples
// Nice case - using REST API
<SharePointDownload REST=1 Tim=3015ms GetListCollection=3x/1438ms GetListItems=9x/1390ms #FileInfosSaved=256/>
<FileDownloadQueue Threads=10 Tim=12625ms Downloaded=256 TotalDownloadSize=56712K Speed=35.094Mbps/>
// Nice case without REST API (Using REST API would save ~860ms)
<SharePointDownload Tim=3016ms GetListCollection=1x/1515ms GetList=31x/860ms IgnoredLists=7 GetListItems=2x/563ms #FileInfosSaved=62 #NotDownloaded=11/>
<FileDownloadQueue Threads=10 Tim=4922ms Downloaded=51 TotalDownloadSize=18714K Speed=29.704Mbps/>
// Nothing downloaded because of BlobStoreFilter
<SharePointDownload REST=0 Tim=2438ms GetListCollection=1x/891ms GetList=6x/438ms IgnoredLists=19 GetListItems=3x/1000ms #FileInfosSaved=30 #NotDownloaded=30/>
// BlobStore quota in action
<SharePointDownload Tim=10328ms GetListCollection=4x/3719ms GetList=41x/1172ms GetListItems=15x/4392ms SaveListItems=548ms #FileInfosSaved=1662 #NotDownloaded=6/>
<FileDownloadQueue Threads=10 Tim=205750ms RefusedForQuotaExceeded=1301 Downloaded=355 TotalDownloadSize=1034536K Speed=39.282Mbps/>
// Slow download (reason unknown)
<SharePointDownload Tim=59547ms GetLists=6954ms/41webcalls GetListItems=16x/52312ms SaveListItems=2499ms #FileInfosSaved=24873 #NotDownloaded=22459/>
<FileDownloadQueue Threads=10 Tim=327531ms Downloaded=2414 TotalDownloadSize=66104K Speed=1.577Mbps/>
// Slow because of many sites (192). At least 1/2 of them could be eliminated in setup. (sharepointsite table)
// Using REST API would help marginally.
<SharePointDownload REST=0 Tim=73094ms GetListCollection=192x/44304ms GetList=81x/3688ms IgnoredLists=1240 GetListItems=173x/16294ms #FileInfosSaved=0/>
// Extreme case - many lists
<SharePointDownload Tim=1215375ms GetListCollection=282x/161684ms GetList=60145x/995196ms IgnoredLists=1332 GetListItems=175x/44298ms #FileInfosSaved=10 #NotDownloaded=10/>

Other file engines

Other cloud storage servers (such as OneDrive, GoogleDrive, DropBox, Box, ...) must maintain a specific folder structure that makes it possible to determine the document location automatically.

File engines have to use the following folder structure (inspired by the folder structure used by the Dynamics CRM / SharePoint integration):

  • <root_folder> (setup)
    • <entity_name> (account, contact...)
      • <record_name>_<record_id> (John_Smith_12aaa7f78a7f41deaf4bcaccb43df2ce)
        • files (Capture_170627_171139.jpg, note.txt...)
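
A sketch of how a file engine might derive a record's folder path under this convention; the name normalization (spaces to underscores) is an assumption, while the dash-less GUID matches the example above:

using System;

static class CloudFolders
{
    public static string RecordFolder(string rootFolder, string entityName, string recordName, Guid recordId)
    {
        var safeName = recordName.Replace(' ', '_');                 // "John Smith" -> "John_Smith" (assumed normalization)
        return $"{rootFolder}/{entityName}/{safeName}_{recordId:N}"; // e.g. root/contact/John_Smith_12aaa7f78a7f41deaf4bcaccb43df2ce
    }
}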

Common setup error: users forget to specify the root folder in the Woodford Configuration. In this case, nothing is downloaded.

Log examples
WARN: GoogleDrive: RootFolder=null, sync skipped
<OneDriveDownloadSync TotalTim=58641ms #FileInfosSaved=3573 #FilterIgnoredFiles=2892 SaveChangesCalls=21x/595ms/>
<FileDownloadQueue TotalTim=57375ms FilesRefusedForQuotaExceeded=531 FilesDownloaded=150 TotalDownloadSize=4882K Speed=4.741Mbps/>

DocuSign offline signatures

If you enable true-offline, legally binding document signing using the third-party DocuSign service, the documents are signed on the DocuSign server during synchronization.

Signing runs at the start of the sync. Documents are signed one by one in a loop while the user sees a progress bar. If there is no problem, signed documents are stored in the client database; from there, they are uploaded to the server in the sync upload phase.

Signing summary:

<DocuSign Signed='3' Failed='1' Waiting='1' TotalTime='5245ms'/>

If signing encounters an error, it is presented in the log in the following way:

<ERROR>DocuSign failure: … </ERROR>

Any DocuSign error breaks the signing loop. However, the error is considered non-fatal (i.e., a warning) and the sync continues.

What happens to the document?

  • The unsigned document is uploaded to the server.
  • On the client side, the document is still marked as changed, and signing will be reattempted during the next sync.

Closing remarks

These are the app settings that control synchronization:

General

  • bool LogSyncDetails; Whether the synchronization log should include low-level messages.
  • bool MultiThreadSync; Whether to use multi-threaded synchronization (both upload and download). Default true.

SyncDownloader - IncSync fetches

  • bool IncSyncUsesVersionNumberSorting; Whether incremental sync fetches use "order by versionnumber" wherever possible. Since 10.2, default true.
  • bool UseRowVersionChangeTracking; Whether to use RowVersion or ModifiedOn in IncSync fetches. Default=true, i.e., RowVersion.
  • bool UseLinkedSyncFilter; Whether to use an OR-grouped incremental sync filter created from linked parents.

SyncDownloader - all fetches

  • int EmailFetchPageSize; Download page size for the email entity used in fetches. Must be between 20 and 500; default is 100.
  • bool SyncNoLock; Whether to use "no-lock" in FetchXML during sync. Default=false. Consequence: SQL Server uses READ UNCOMMITTED for sync fetches. Faster fetches (on transaction-heavy tables), at the risk of reading data that will be discarded by a rollback.

SyncDownloader - other settings

  • bool SyncEmailAsText; Whether to strip HTML from the email body during sync. Since 6.4, default false.
  • int MaxSyncCount; Maximum number of records (per entity) to download; ActivityPartys use a 5*MaxSyncCount limit.

SyncUploader

  • int MaxExecuteMultiple; Max number of records to upload in one request. Default=100, Max=1000. (Current questionnaire design needs higher values.)
  • bool UploadRecordsInChronologicalOrder; Whether sync uploads records in the order in which they were created/deleted/edited for the first time. Default=true.

Attachments

  • int MaxAttachmentSize; Maximum attachment size to download during sync (in bytes).
  • bool AsyncAttachmentsDownload; See Asynchronous attachment download above.
  • bool SyncAzureAttachments; Whether to use a special web method to download note attachments stored in Azure Blob storage.

Sync modules

  • bool DisableSyncAnalyzer; Disables the SyncAnalyzer step of the synchronization.
  • bool SyncShared; Whether to download shared entities. (Extra sync pass.)
  • bool UseRestApiForSharePoint; Whether to use the (more efficient) REST API for SharePoint server calls; default false. So far implemented only for one web call.

Customization

  • bool UpdateObjectTypeCodes; Whether to update entity objecttypecodes (ObjectTypeMap.csv) as part of DownloadPermissions(). Dynamics only, default=false. Overrides objecttype codes stored in EntitySchema.csv.
  • bool FullSchemaSync; Whether to do DeleteData for a new customization instead of an incremental DB schema update.
  • bool SyncFieldLevelPermissions; Whether to download field-level permissions as part of DownloadPermissions(). CRM5 only. Default=false.
  • bool EnableCustomizationSecurity; Enables content verification of Offline HTML and customization files. Default=false.

N/A yet

  • bool ShowSyncWarnings; Whether the SyncEngine should return warnings (if any) in the same way as errors. ALWAYS TRUE.

Exotic settings

  • bool IsDatabaseAuditEnabled; Special server log that contains SYNCSTART/SYNCEND commands and DB read commands for selected entities. Used for a single customer only!
  • bool UseDatabaseEncryption; Whether the database is encrypted. Only used when the database is created (full sync). Default true.
  • bool UseDatabaseBlobStore; Whether to store attachment blobs in the database or in files.
  • int SyncEmailOnSend; Whether to sync emails when a new email is sent. 0 - enabled.

Advanced SyncSetup

  • BlobStoreFilter.xml - see Document filters
  • SyncSetup.xml - see Advanced sync setup
    Fine-tuning of the SyncDownloader (number of threads, cache size, fetch page size, order of entities). This tool can speed up (mainly) FullSync. Look for the sync log item: <Downloader CacheSize=100MB UsedCache=81MB Paused=18% />. If there are large pauses, or UsedCache approaches CacheSize, it makes sense to apply a special sync setup.

Synchronization result

SyncEngine = Code name for the synchronization module.

SyncEngine.Execute() = Method executing the synchronization.

Returns:

  1. Fatal error (error that interrupted sync process.)
  2. Otherwise warning (*)
    • "Warning:\nAccount entity has more records than allowed in the configuration; not all records were downloaded."
    • "Warning:\nBackground Sync cannot reliably synchronize contact entity. Consider using Foreground Sync.\n(Go to Setup > Accounts > CRM and tap the sync button.)"
    • "Warning:\nSharePoint synchronization failed.\n(The reason is logged.)""
    • "Application can't be updated because of Sync Errors.\nPlease resolve them and synchronize again."
    • "Sync generated multiple warnings. Please see the log."
  3. Null (everything ok)

(*) AppSettings.ShowSyncWarnings is a setting that could be used to suppress warnings. So far, it is not used.

Email synchronization

Synchronization of emails is not included in SyncEngine; even the log is separate (available from the About form). We support the following email servers: Exchange and Gmail.

Email synchronization is launched after SyncEngine.

errMsg = Data.Synchronization.SyncEngine.Execute();
if( errMsg == null              // THIS IS WHERE AppSettings.ShowSyncWarnings MATTERS
    && !Aborted                 // Abort is done by the OS when the app runs in the background and the OS needs more resources. iOS gives us a chance to react; Android/W10 do not.
    && !AppLocked && !AppWiped) // App can be locked/wiped from Woodford. (The check is done during the customization check.)
{
    errMsg = EmailEngine.Execute();
}
if (errMsg != null)
    ShowError(errMsg);

Email synchronization can also run without SyncEngine. Users can initiate it by tapping the dedicated email synchronization button, or it can run automatically when they create a new email (assuming AppSettings.SyncEmailOnSend is enabled).