Backup & Archive: Implementation Best Practices

Overview

Proper implementation of Backup & Archive is essential to ensure your Salesforce data is secure, recoverable, and resilient against loss or disaster. This guide outlines key best practices for planning, configuring, and maintaining a robust data protection strategy with Backup & Archive.


Develop a Comprehensive Backup Strategy

  • Define what data needs to be backed up and how often.

  • Identify critical data (e.g., customer, financial, transactional) for high-frequency backups.

  • Include procedures for testing and validating backup completeness.

  • Align backup frequency with business continuity goals and your recovery time and recovery point objectives (RTO/RPO).
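The RPO alignment check above can be sketched in a few lines. The helper name and the numbers below are illustrative, not part of Backup & Archive:

```python
# Sketch: check whether a backup cadence satisfies a recovery point
# objective (RPO). Names and numbers are illustrative, not Flosum APIs.

def meets_rpo(backup_interval_hours: float, rpo_hours: float) -> bool:
    """Worst-case data loss equals the interval between backups, so a
    cadence satisfies the RPO only if that interval fits inside it."""
    return backup_interval_hours <= rpo_hours

# Daily delta backups against a 24-hour RPO: acceptable.
print(meets_rpo(24, 24))   # True
# Weekly full backups alone against a 24-hour RPO: not acceptable.
print(meets_rpo(168, 24))  # False
```

The same comparison, run per data class, tells you which objects need a tighter backup schedule.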


Understand Data Growth and Impact

  • Monitor which data types are expanding (e.g., objects with frequent inserts/updates).

  • Use Salesforce analytics tools (e.g., CRM Analytics, formerly Einstein Analytics) to identify growth trends.

  • Ensure storage infrastructure is scalable to accommodate future growth.

  • Classify data by criticality to determine backup cadence (daily, weekly, etc.).
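One way to turn growth monitoring into a cadence decision is a simple trend check. The counts and the threshold below are illustrative; in practice they would come from Salesforce storage reports or analytics tools:

```python
# Sketch: classify objects by monthly record growth to pick a backup
# cadence. All data and thresholds are illustrative assumptions.

def growth_rate(counts):
    """Average month-over-month growth across a series of record counts."""
    deltas = [b - a for a, b in zip(counts, counts[1:])]
    return sum(deltas) / len(deltas)

def suggest_cadence(counts, daily_threshold=10_000):
    """Fast-growing objects get daily backups; static ones weekly."""
    return "daily" if growth_rate(counts) >= daily_threshold else "weekly"

monthly_opportunity_counts = [120_000, 150_000, 190_000]  # fast-growing
monthly_pricebook_counts = [5_000, 5_050, 5_080]          # nearly static

print(suggest_cadence(monthly_opportunity_counts))  # daily
print(suggest_cadence(monthly_pricebook_counts))    # weekly
```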


Use SSDs for Self-Hosted Cloud Environments

  • If you deploy Flosum in a self-hosted environment, choose SSDs (Solid State Drives) over HDDs.

  • SSDs offer significantly faster write speeds and greater reliability, both critical for backup performance.


Create and Automate a Backup Schedule

  • Recommended cadence:

    • Full backup: Weekly

    • Delta backup: Daily (incremental)

  • Align the schedule with your data volatility and compliance requirements.
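The recommended cadence can be expressed as a small scheduling rule. The choice of Sunday for the full backup is an assumption (pick your own low-traffic window):

```python
# Sketch: decide which backup type is due on a given date under the
# recommended cadence (weekly full, daily delta). The full-backup day
# is an illustrative assumption.
from datetime import date

FULL_BACKUP_WEEKDAY = 6  # Sunday (Monday == 0), assumed low-traffic day

def backup_type_for(day: date) -> str:
    return "full" if day.weekday() == FULL_BACKUP_WEEKDAY else "delta"

print(backup_type_for(date(2024, 6, 2)))  # a Sunday -> "full"
print(backup_type_for(date(2024, 6, 3)))  # a Monday -> "delta"
```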


Set API Quotas for Initial Full Backups

  • After connecting an org, go to Organization Settings and set:

    • REST API quota: 80–90%

    • Bulk API quota: 80–90%

  • This allows the system to allocate maximum API capacity during the first full backup.

  • After the initial backup, you can lower the quotas, since delta backups consume far fewer API calls.
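The quota-to-budget arithmetic is simple; the 24-hour limit below is an example only, as real limits depend on your Salesforce edition and licenses:

```python
# Sketch: translate a quota percentage into an API call budget.
# The daily limit is an illustrative figure, not a universal value.

def api_call_budget(daily_limit: int, quota_percent: int) -> int:
    return daily_limit * quota_percent // 100

daily_limit = 1_000_000                  # example org-wide 24-hour limit
print(api_call_budget(daily_limit, 90))  # 900000 calls for the first full backup
print(api_call_budget(daily_limit, 50))  # 500000 once deltas take over
```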


Test Backup and Restore Procedures Regularly

  • Simulate recovery scenarios in sandbox or test environments.

  • Validate that the restored data is complete and accurate.

  • Document and refine your recovery process regularly.
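Validation of a restore can be sketched as a record-by-record comparison between the source snapshot and the restored data. The records and helper names below are illustrative, not a Backup & Archive API:

```python
# Sketch: validate a restore by comparing record IDs and field hashes
# between a source snapshot and the restored data. Data is illustrative.
import hashlib

def fingerprint(record: dict) -> str:
    """Stable hash of a record's sorted field values."""
    payload = "|".join(f"{k}={record[k]}" for k in sorted(record))
    return hashlib.sha256(payload.encode()).hexdigest()

def validate_restore(source: dict, restored: dict) -> list:
    """Return IDs of records that are missing or whose contents differ."""
    problems = []
    for rec_id, rec in source.items():
        if rec_id not in restored:
            problems.append(rec_id)
        elif fingerprint(rec) != fingerprint(restored[rec_id]):
            problems.append(rec_id)
    return problems

source = {"001A": {"Name": "Acme", "Stage": "Closed"}}
print(validate_restore(source, source))  # [] -> complete and accurate
```

An empty result confirms completeness and accuracy; any returned IDs point at records to investigate before relying on the backup.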


Develop a Disaster Recovery Plan

  • Define a clear step-by-step plan for restoring backups.

  • Include contact information for stakeholders and escalation paths.

  • Review and test the plan quarterly or after major releases.


Store Backups Securely

  • Backup & Archive encrypts backups by default.

  • Ensure access is limited to authorized users only.

  • Use access controls and audit logs to track access events.


Monitor Backup and Recovery Activity

  • Use logs and success/failure reports to validate backup jobs.

  • Set up alerts for failed or skipped jobs.

  • Regularly review status reports to detect anomalies.
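The alerting step above might look like the following sketch. The job records are an assumed shape, not Flosum's actual log format:

```python
# Sketch: scan backup-job results and surface failures or skipped runs
# for alerting. The job-record shape is an illustrative assumption.

def jobs_needing_alert(jobs):
    return [j["name"] for j in jobs if j["status"] in ("failed", "skipped")]

jobs = [
    {"name": "prod-full-weekly", "status": "succeeded"},
    {"name": "prod-delta-daily", "status": "failed"},
    {"name": "sandbox-delta-daily", "status": "skipped"},
]
print(jobs_needing_alert(jobs))  # ['prod-delta-daily', 'sandbox-delta-daily']
```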


Use Version Control for Historical Tracking

  • Leverage version control within Backup & Archive to retain historical data states.

  • Quickly restore previous versions of records when needed for audits or rollback scenarios.


Integrate Backups into Change Management

  • Configure Backup & Archive to trigger backups after key system changes.

  • This supports compliance and enables rollback if a change introduces errors.

  • Useful for deployments, CPQ changes, or regulatory data operations.


Handle Refreshed Sandboxes Correctly

  • A refreshed sandbox has a new Org ID and is treated as a new org.

  • Reconnect the refreshed sandbox using Add Organization.

  • Remove the obsolete org and its backup data to avoid confusion.


Use Multiple Backup Locations

  • Store copies in both cloud and off-site environments.

  • Helps protect against regional outages, physical damage, or ransomware.

  • Backup redundancy enhances availability and supports compliance mandates.
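Redundancy is often checked against the common 3-2-1 rule (three copies, two distinct locations or media, one off-site). A minimal sketch, with illustrative copy metadata:

```python
# Sketch: check backup copies against the 3-2-1 rule. The copy
# metadata below is illustrative, not a Backup & Archive structure.

def satisfies_3_2_1(copies):
    locations = {c["location"] for c in copies}
    return (len(copies) >= 3
            and len(locations) >= 2
            and any(c["offsite"] for c in copies))

copies = [
    {"location": "primary-cloud", "offsite": False},
    {"location": "secondary-cloud-region", "offsite": True},
    {"location": "onprem-vault", "offsite": True},
]
print(satisfies_3_2_1(copies))  # True
```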


Restrict Access by Source IP

  • Limit data access by configuring IP whitelisting at the infrastructure level (e.g., EC2 security groups).

  • Helps ensure that only trusted IP addresses can retrieve or manage backup data.
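On AWS, for example, the restriction could be expressed as a security-group ingress rule. The group ID and CIDR below are placeholder values; substitute your own:

```shell
# Allow HTTPS access to the backup host only from a trusted CIDR range.
# sg-0123456789abcdef0 and 203.0.113.0/24 are placeholder values.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 443 \
  --cidr 203.0.113.0/24
```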

