Frequently Asked Questions

Have a question you don't see here? Let us know!

On-Premise Installation

Need help installing Reflection? Find installation resources here or contact our support team.

Reflection requires:

  • Administrative access to your server and permission to install software.
  • Docker, installed before the Reflection Enterprise software package.
  • At a minimum: 2 processing cores, 16 GB of available RAM, 30 GB of available disk space, and high-speed internet access.

Reflection is simple to install! The documentation we send over will walk you through the process step by step. If you need help at any time, our dedicated support professionals can walk you through the entire process: reflection@riptidesoftware.com

Frequently Asked Technical Questions

How does Reflection work? Here are the answers to some of our prospects' most common questions.

On-premise and cloud-hosted options are available. Free trials of Reflection are all cloud-hosted so you can easily test functionality.

Reflection supports MySQL or Microsoft SQL Server. These can be local, in-house physical machines or hosted in AWS or Azure.

Yes – the Bulk API with PK Chunking is used to collect data from all objects that Salesforce allows (there are some objects that Salesforce requires to be queried via the SOAP API).

API usage is optimized by defaulting to the Bulk API whenever the result set is greater than 5,000 records and the object is available via the Bulk API. The defaults in Reflection are set to incremental, such that after the initial backup, Reflection only collects those records in Salesforce that are either new or have been changed since the last run of a Job configuration.
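The API-selection default described above can be sketched as follows. This is an illustration of the stated rule, not Reflection's actual code; the function and parameter names are assumptions.

```python
# Sketch of the documented default: use the Bulk API when the object
# supports it and the result set exceeds 5,000 records; otherwise
# fall back to the SOAP API. (Names are illustrative.)
BULK_THRESHOLD = 5_000

def choose_api(record_count: int, bulk_supported: bool) -> str:
    """Pick the Salesforce API for a given object's result set."""
    if bulk_supported and record_count > BULK_THRESHOLD:
        return "bulk"
    return "soap"
```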

The schedule on which objects are replicated from Salesforce is completely configurable. You can replicate as often as every 5 minutes, or as infrequently as once per year.

As part of Reflection’s replication process, a count of records is performed and used to validate completion per object in a Job configuration.

Any additions to the schema are automatically translated into the destination database. Destructive changes such as the removal of a field in SFDC are not replicated in the destination database automatically. Should there be a need to force the column removal, the option “Drop Table Before Run?” in Step 3 of the Reflection Job Configuration can be selected to do so.
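The additive schema handling described above can be illustrated with a small sketch; the helper name and set-based representation are hypothetical, not Reflection's internals.

```python
def columns_to_add(sf_fields: set, db_columns: set) -> set:
    """Additive schema sync sketch: new Salesforce fields become new
    destination columns; fields removed in SFDC are left in place
    (no automatic column drop unless the table is dropped and rebuilt)."""
    return sf_fields - db_columns
```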

Deleted/archived record processing options are configurable in Step 4 of Reflection’s Job Configuration. If you choose to preserve those records, the IsDeleted/IsArchived field will be set to True when a record has been deleted or archived. If you choose not to preserve them, the last step of the replication process will delete those records from the destination database.
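A sketch of the two Step 4 behaviors, assuming a simple list-of-dicts representation of destination rows; the function name and shape of the data are assumptions for illustration only.

```python
def apply_deleted_records(rows, deleted_ids, preserve):
    """Sketch of Step 4: either flag deleted/archived rows
    (IsDeleted=True) or purge them from the destination."""
    if preserve:
        for row in rows:
            if row["Id"] in deleted_ids:
                row["IsDeleted"] = True
        return rows
    return [r for r in rows if r["Id"] not in deleted_ids]
```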

Yes – formula fields are collected like any other field. However, incremental replication will not collect records whose only changes are to formula fields, since the incremental logic relies on LastModifiedDate to detect changes. If you select the “Drop Table Before Run?” option in the Job configuration, then all records from the objects will be collected at every run and the formula fields will be as up to date as the most recent run of the job.

Yes – job configurations are processed at the object level, and objects are replicated in parallel. Also, if multiple jobs are configured, they will run simultaneously should their independent schedules intersect.

Reflection is able to replicate and backup data from Salesforce into an on-premise database or cloud instance. Pushing data back into Salesforce is available through Reflection’s restoration feature. This allows for data in the local database (or cloud) to be pushed back into the SFDC Org.

Yes – the PK-Chunking feature of the Bulk API is used as a primary means of collecting data over the Bulk API.
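For reference, PK chunking is requested through a documented Salesforce Bulk API request header. Below is a hedged sketch of building the headers for such a job; the helper itself is illustrative and not Reflection's code, though `Sforce-Enable-PKChunking` and `X-SFDC-Session` are Salesforce's documented Bulk API headers.

```python
def bulk_job_headers(session_id, chunk_size=100_000):
    """Headers for a Bulk API job with PK chunking enabled.
    Sforce-Enable-PKChunking asks Salesforce to split the query
    server-side into chunks of record Id ranges."""
    return {
        "X-SFDC-Session": session_id,
        "Content-Type": "application/xml; charset=UTF-8",
        "Sforce-Enable-PKChunking": "chunkSize={}".format(chunk_size),
    }
```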

Failed processes are retried up to a maximum of 3 times before the system stops attempting. However, a given Job configuration is responsible for the replication of one or many objects. Should any one object fail, the job continues to process until all objects are processed.
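The retry-and-continue behavior described above can be sketched like this; the function names are assumptions, not Reflection's actual implementation.

```python
MAX_ATTEMPTS = 3

def replicate_job(objects, replicate_one):
    """Per-object retry sketch: each object gets up to MAX_ATTEMPTS
    tries; a persistent failure is recorded but does not stop the
    remaining objects in the job."""
    failed = []
    for obj in objects:
        for attempt in range(1, MAX_ATTEMPTS + 1):
            try:
                replicate_one(obj)
                break
            except Exception:
                if attempt == MAX_ATTEMPTS:
                    failed.append(obj)
    return failed
```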

Reflection offers a restoration feature which allows the user to select the source local database, destination SFDC Org, object(s) to be restored, and the records from the database to restore. You can restore anything from the entirety of the objects’ records to one specific record within that object.

Yes – binary files, or the physical files that make up the body field on the Attachment object, are downloaded and stored in a configurable location. The Attachment record itself is created in the local database but the binary is stored outside of the database.

Reflection utilizes the SFDC system field: LastModifiedDate to discern which records are new or changed.
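A minimal sketch of building such an incremental query, assuming a stored timestamp of the last successful run; the helper name is hypothetical, while LastModifiedDate and the SOQL datetime format are standard Salesforce conventions.

```python
from datetime import datetime, timezone

def incremental_soql(obj, fields, last_run=None):
    """Build an incremental SOQL query: only records whose
    LastModifiedDate is later than the previous run, or a full
    query when no prior run exists."""
    soql = "SELECT {} FROM {}".format(", ".join(fields), obj)
    if last_run is not None:
        # SOQL expects ISO-8601 UTC datetimes, e.g. 2024-01-01T00:00:00Z
        stamp = last_run.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
        soql += " WHERE LastModifiedDate > {}".format(stamp)
    return soql
```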

Frequently Asked Security Questions

We've compiled a list of answers to common questions around Reflection's security.

There are 7 components in the Reflection architecture, each of which we recommend hosting in Docker and networked using Docker Swarm. More information on security using Swarm can be found at:


Reflection’s architecture is composed of the webapp, the coordinator service, one or more worker services, the database for internal application use, Kafka, ElasticSearch, and RabbitMQ for communication between the coordinator and worker(s). Security can be imposed between the components by imposing security on where the components are hosted, such as restricting traffic except from specific IPs.

As part of the application setup, you must add Reflection (including the location where it can be reached) as a Connected App within the Salesforce Organization itself. Doing so allows Reflection to acquire an access token through the OAuth 2 web server flow, which it uses to authenticate requests to Salesforce.


The user kicks off this flow from within the Reflection webapp. On the Organization list in the Reflection webapp, there are two buttons, one for sandbox and one for production. Clicking either of these will begin the OAuth flow. A new window will open, taking you directly to Salesforce’s secure OAuth login portal. After logging in, you will be redirected to the webapp, where you can name your Organization so it is easy to identify within Reflection. All calls made by Reflection to Salesforce include the access token gained during the OAuth flow.
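The first leg of that flow, the authorization URL the new window opens, looks roughly like this. The helper is illustrative rather than Reflection's code, though the login.salesforce.com and test.salesforce.com authorize endpoints are Salesforce's documented OAuth endpoints, and the client ID and redirect URI come from the Connected App definition.

```python
from urllib.parse import urlencode

def authorization_url(client_id, redirect_uri, sandbox=False):
    """First leg of the OAuth 2 web server flow: the URL where the
    user logs in on Salesforce's own login page. Salesforce then
    redirects back to redirect_uri with an authorization code."""
    host = "https://test.salesforce.com" if sandbox else "https://login.salesforce.com"
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
    }
    return "{}/services/oauth2/authorize?{}".format(host, urlencode(params))
```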

Reflection stores data retrieved from Salesforce in a database that the user chooses. As a result, the user has full control over the security of the database. Users define information to connect to the database from the Web Application. The full credentials are never shown on the webapp.

For the most part, Reflection never displays data collected from Salesforce on the webapp or in the logs. When performing a restoration, users can preview the records affected, but only the Id, Name, IsDeleted, CreatedDate, and LastModifiedDate fields are shown. Users can also download a CSV containing the full record information for the records to be restored, but it is never displayed in the webapp. The logs and events generated may contain the name of the Object being processed; this is the only Salesforce data that appears in the logs.
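The preview restriction described above amounts to a field whitelist; a minimal sketch, assuming records are represented as dicts (the helper and constant names are hypothetical):

```python
# The five fields the FAQ says the restoration preview may display.
PREVIEW_FIELDS = {"Id", "Name", "IsDeleted", "CreatedDate", "LastModifiedDate"}

def preview_record(record):
    """Strip a full Salesforce record down to the preview-safe fields;
    everything else is reserved for the downloadable CSV."""
    return {k: v for k, v in record.items() if k in PREVIEW_FIELDS}
```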

Enterprise-Grade Features

Check out the features included with Reflection Enterprise:

Database Support - SQL Server or MySQL
Salesforce Compatibility - Enterprise, Unlimited, Developer, Professional (API enabled)
Still can’t find what you need? Our team is here to help.