In some scenarios, however, it is preferable to perform automation directly on the server. Dynamics and Salesforce servers come with their own business logic solutions. In this article, we will discuss server-side business logic options in Resco Cloud. Processes are managed using the Admin Console.
The backend business logic is triggered at the data level - in your Resco Cloud organization, not in the mobile app. This is useful, for example, when you have several forms/views for one entity and you want to trigger an action when that entity changes. If you did that in the mobile app, you would have to write the logic in each form/view. If you trigger the action server-side, you write the logic once and you can be sure the action happens. You can also send emails from the server (if you set up a mail server), trigger push notifications, run plugins, etc.
When creating a new process, select one of the following types or categories:
- Workflow: When a workflow is triggered, it is added to the queue of processes and performed asynchronously at a later time. Asynchronous workflows don't affect the triggering actions (because those actions are already complete). When the server processes the workflow queue, work is ordered by Priority: higher-priority workflows are executed first.
- Real-time: A real-time process is performed immediately. It runs synchronously with the triggering action and it can affect the result of the action.
- Job: Jobs are not triggered. Instead, you can schedule them (e.g., every night at 2 AM) or run them on demand.
What can start a process
Workflows and real-time workflows are always related to a particular entity. They can be triggered:
- When a record is created
- When a record field changes (you can select which fields trigger the action)
- When a record is deleted
Jobs are not related to an entity. Instead, you can directly configure when to start the process:
- Periodically: for example, every 45 minutes
- Daily, Monthly, or Yearly
- On-demand: You must start the job manually using the Run Now button.
Additionally, jobs and workflows can be executed from a different process. The job or workflow must be created before you can use it. You can even call the same process recursively, but it must be saved first. The number of iterations is limited to 8192.
What can a process do
The series of commands that a process performs is written using a user interface similar to the rules editor:
- Define variables, optionally querying data from the database using FetchXML
- Add conditions that branch the workflow
- Add functions that create, update, or delete records and fields, or perform other actions:
  - StopWorkflow: Stop workflow execution, marking the run as either a success or a failure
  - SyncService: Synchronize with another server configured via Resco CRM sync
  - SendEmail: Send an email, configuring from, to, subject, body, attachments, and regarding
  - SendEmailReference: Send an existing email record (a record from the email entity, typically created outside of the process editor)
  - SendSms: Send a text message (requires SMS integration)
  - InvokeWebRequest: Send an HTTP request (GET, POST, PUT, PATCH, DELETE)
  - Execute: Run a plugin or process
    - Process: Execute a custom process (must be of the category "job" or "workflow")
    - Plugin: Execute a server plugin. Select a custom plugin or one of the three built-in RescoCRM plugins:
      - Chat (CommentCreate, PostCreate, SendChatMessage)
      - Extensions (AddInitialRole, AddUserRole, SetUserPassword)
      - Notifications (SendEntityNotification)
If your process includes a string variable, you can access additional built-in plugins:
- GeneratePassword: Generate a password; the return value can be used with SetUserPassword
- GenerateReport: Generate a report based on a user definition (output format: pdf, excel, word, or html)
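As an illustration of the first step above (defining a variable that queries data), a process variable might be populated by a FetchXML query like the one below. The entity and attribute names are illustrative, not taken from a real schema; the Python snippet only shows how such a query is structured:

```python
import xml.etree.ElementTree as ET

# A FetchXML query such as a process variable might use to load records.
# Entity and attribute names here are hypothetical examples.
fetch_xml = """
<fetch top="50">
  <entity name="account">
    <attribute name="name" />
    <attribute name="emailaddress" />
    <filter>
      <condition attribute="statecode" operator="eq" value="0" />
    </filter>
  </entity>
</fetch>
"""

# Inspect the query with the standard library's XML tools.
root = ET.fromstring(fetch_xml)
entity = root.find("entity").get("name")
attributes = [a.get("name") for a in root.iter("attribute")]
print(entity, attributes)  # account ['name', 'emailaddress']
```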
Who executes the process
In the process configuration, you can select the user account that executes the process: use the Run As option to select the user. This function is available for workflows and real-time workflows.
If you are interested in some examples, consider starting a trial for a resco.FieldService organization. It comes with several processes preconfigured.
Distinguish between triggering actions
You can use the OperationName parameter to distinguish which operation triggered the workflow. Three values are defined: “Changed”, “Created”, “Deleted”.
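The branching itself is configured in the graphical process editor, but its logic corresponds to a simple dispatch on the OperationName value. The sketch below is a hypothetical illustration; the action names are invented:

```python
def handle_trigger(operation_name: str) -> str:
    """Branch a workflow on the operation that triggered it.

    operation_name is one of the three defined values:
    "Changed", "Created", "Deleted".
    """
    if operation_name == "Created":
        return "send-welcome-email"        # hypothetical action names
    elif operation_name == "Changed":
        return "recalculate-totals"
    elif operation_name == "Deleted":
        return "archive-related-records"
    raise ValueError(f"Unknown operation: {operation_name}")

print(handle_trigger("Created"))  # send-welcome-email
```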
For workflows and jobs, you can also select the transaction mode:
- None - The process does not run as a transaction.
- No Lock - Use isolation level "Read Uncommitted": a transaction is allowed to read data from a row that has been modified by another running transaction and not yet committed (a dirty read).
- Default - Use isolation level "Read Committed": dirty reads are prevented, but it is still possible that a row retrieved twice during the course of a transaction differs between reads (a non-repeatable read), or that new rows are added or removed by another transaction to the records being read (a phantom read).
Technical aspects of process execution
This section offers insight into how processes and plugins are executed on the server.
A transaction is the propagation of one or more changes to the database: a sequence of operations performed against the database as a single logical unit of work, in a defined order.
Resco servers use standard Microsoft SQL Server as their database engine. Every backend process, such as workflow, job, or real-time process, is defined as a single operation performed against this database.
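The all-or-nothing nature of a transaction can be demonstrated with a small self-contained example. SQLite is used here only because it ships with Python; the principle is the same on Microsoft SQL Server:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO account VALUES (1, 100), (2, 0)")
conn.commit()

# Transfer 40 from account 1 to account 2 as one logical unit of work:
# either both UPDATEs are committed, or neither is.
try:
    conn.execute("UPDATE account SET balance = balance - 40 WHERE id = 1")
    conn.execute("UPDATE account SET balance = balance + 40 WHERE id = 2")
    conn.commit()
except sqlite3.Error:
    conn.rollback()  # on any failure, undo the partial change

balances = dict(conn.execute("SELECT id, balance FROM account"))
print(balances)  # {1: 60, 2: 40}
```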
Processes and transactions
Each workflow operation runs in a separate transaction. This transaction is propagated to the database only after the transaction containing its trigger is committed. Only one workflow process can be executed per organization at a time. When several workflows exist in one organization, they are stacked in a queue and executed sequentially. If concurrent workflows are created for the same entity and triggered by the same event, they are ordered by their system ID and executed one by one.
Real-time processes run in the same transaction as their initial trigger. That means the changes defined in a real-time process are propagated to the database in the same transaction as the create/update/delete operation that triggered them. Concurrent real-time processes (triggered by the same event) are all propagated in the same transaction, as a sequence of operations.
Jobs are scheduled, trigger-independent operations; as long as data isolation and consistency are ensured, several jobs can run at the same time on Resco Cloud. Even when server-side processes are designed so that workflows and real-time processes do not interfere with each other, concurrent data access can still occur, and it is very important to ensure data integrity.
Modifications made by concurrent transactions must be isolated from the modifications made by any other concurrent transactions. Any attempt to modify data in a database requires a system of control: transaction locking. Transaction locks block other transactions from modifying the data in a way that would cause problems for the transaction that requested the lock. Each transaction frees its locks when it completes with either a COMMIT or ROLLBACK statement.
The level of concurrency control is defined by selecting transaction isolation levels for connections. Resco SQL database uses the default, read-committed isolation level. This allows a transaction to read data previously read (not modified) by another transaction without waiting for the prior transaction to end. The server engine keeps write locks until the completion of the transaction.
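The effect of write locks being held until the end of a transaction can be demonstrated with SQLite's shared-cache mode (again only an analogy for the SQL Server behavior described above):

```python
import sqlite3

# Two connections to the same shared-cache in-memory database.
uri = "file:lockdemo?mode=memory&cache=shared"
writer = sqlite3.connect(uri, uri=True)
reader = sqlite3.connect(uri, uri=True, timeout=0.1)

writer.execute("CREATE TABLE account (id INTEGER, balance INTEGER)")
writer.execute("INSERT INTO account VALUES (1, 100)")
writer.commit()

# The writer modifies a row but has not committed yet: it now holds
# a write lock until COMMIT or ROLLBACK.
writer.execute("UPDATE account SET balance = 50 WHERE id = 1")

# A concurrent read-committed reader is blocked by that lock.
try:
    reader.execute("SELECT balance FROM account").fetchone()
    blocked = False
except sqlite3.OperationalError:  # "database table is locked"
    blocked = True

writer.commit()  # lock released; the committed value is now visible
after = reader.execute("SELECT balance FROM account").fetchone()[0]
print(blocked, after)
```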
Plugins and transaction locks
Like other server-side processes, a server-side plugin runs in a single transaction, and this transaction locks the corresponding resources according to the defined isolation level. This is why plugin assemblies are generally not designed for reading and modifying tens or hundreds of records: such a plugin might fetch and update records for several minutes, undesirably locking your database for that time.
Multiple records update
Modifying a large number of records is better handled via external services: you fetch the records separately and upload the changes to the server only after modifying them. This way, you avoid the plugin assembly and its transaction locks completely.
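One way to keep each server-side transaction short is to upload the modified records in small batches rather than in one long-locking operation. The helper below is a hypothetical sketch of that pattern; the record fields and batch size are invented:

```python
from typing import Iterator, List

def chunked(records: List[dict], size: int) -> Iterator[List[dict]]:
    """Split records into small batches so that each server call
    (and therefore each server-side transaction) stays short."""
    for start in range(0, len(records), size):
        yield records[start:start + size]

# Hypothetical usage: fetch all records first, modify them locally
# (no server lock is held during this step), then upload per batch.
records = [{"id": i, "balance": i * 10} for i in range(250)]
for rec in records:
    rec["balance"] += 5

batches = list(chunked(records, 100))
print(len(batches), [len(b) for b in batches])  # 3 [100, 100, 50]
```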
Another option is to accept this limitation and execute the plugin once a day via a scheduled job, ideally at a time when temporary database locking does not cause any major issues.
Alternatively, you can use fetch operations with the “NoLock” parameter to allow at least dirty reads during transactions within your plugin.