Thursday, May 14, 2026

Designing Backup & Restore Strategy for US Bank 200 TB Database on SAP ASE


1 — Foundations: What You MUST Understand Before Touching Anything

1.1 What is SAP ASE and Why Backup Strategy Matters


SAP Adaptive Server Enterprise (ASE) is a high-performance relational database widely used in banking, trading, and telecom.

In a bank-like environment:

  • Transactions are constant

  • Data is regulated

  • Downtime means millions lost

  • Data loss is a legal disaster

So backups are not optional. They must be:

  • Reliable

  • Fast

  • Verified

  • Restorable under pressure


1.2 Types of Failures You Must Design For

Before writing a single script, understand what can go wrong:

1. Hardware Failures

  • Disk crashes

  • Storage corruption

2. Logical Failures

  • Accidental deletes

  • Bad deployments

3. System Failures

  • Server crash

  • OS failure

4. Catastrophic Events

  • Data center outage

  • Cyberattack / ransomware

Your backup strategy must handle ALL of these.


1.3 Core ASE Backup Types (You MUST Know These)

There are only a few core commands — but mastering them is everything.

FULL DATABASE BACKUP

dump database mydb to "/backup/mydb_full.dmp"

TRANSACTION LOG BACKUP

dump transaction mydb to "/backup/mydb_log.dmp"

DIFFERENTIAL (via cumulative log chain concept)

ASE doesn’t have a “differential” backup type named like SQL Server’s, but the same effect comes from the backup chain:

  • A full database dump as the baseline

  • Followed by continuous transaction log dumps, which together give point-in-time recovery


1.4 Key Concepts 

Recovery Model Equivalent in ASE

  • Transaction log must be backed up regularly

  • Otherwise the log fills up and the database stops accepting writes

Backup Chain

  • Full backup → log backups → restore sequence

Break the chain = lose recoverability
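A concrete way the chain gets broken, sketched below with illustrative file names: truncating the log without dumping it discards log records that later restores would need, so a fresh full dump is required immediately afterwards.

-- BREAKS the chain: discards log records without backing them up
dump transaction mydb with truncate_only
go

-- Recoverability is only re-established by a new full dump
dump database mydb to "/backup/mydb_full_after_truncate.dmp"
go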


1.5 Storage Planning for 20–200 TB Databases

At enterprise scale:

Rule of thumb:

  • Full backup ≈ 60–100% of DB size (compression results vary)

  • Logs = depends on activity

So for 200 TB DB:

  • Full backup: 120–200 TB

  • Logs per day: could be multiple TB

👉 You need:

  • Dedicated backup storage

  • Possibly multiple backup stripes
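A quick sizing sketch using the rule of thumb above (the compression ratio and stripe count are illustrative assumptions, not measurements):

  • Compressed full dump ≈ 200 TB × 0.6 ≈ 120 TB

  • Spread over 8 stripes ≈ 120 TB ÷ 8 = 15 TB per stripe file

  • 7 days of daily fulls ≈ 7 × 120 TB = 840 TB of backup capacity, plus several TB of log dumps per day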


1.6 Backup Strategy Philosophy (Very Important)

In a bank-scale system:

You don’t design backups for convenience.
You design for:

  • Worst-case recovery

  • Time constraints (RTO)

  • Data loss tolerance (RPO)


1.7 Define RTO and RPO 

RTO (Recovery Time Objective)

How fast must the system be back?

Example:

  • 2 hours max downtime

RPO (Recovery Point Objective)

How much data can you lose?

Example:

  • 5 minutes max data loss

These define your backup frequency.
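To see why these numbers drive the whole design, a rough worked example using the figures above (illustrative arithmetic, not a benchmark):

  • RPO of 5 minutes → transaction log dumps at least every 5 minutes

  • RTO of 2 hours on a 200 TB database → 200 TB ÷ 2 h = 100 TB/hour ≈ 28 GB/s of sustained restore throughput, which is only realistic with heavy striping and fast, dedicated backup storage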


1.8 Baseline Strategy

Start simple, for example:

  • Full backup: once daily

  • Log backup: every 15 minutes

  • Store backups on separate storage


1.9 Baseline Backup Job

Create a simple SQL script:

-- Full backup
dump database mydb to "/backup/mydb_full.dmp" with compression = 5

-- Log backup
dump transaction mydb to "/backup/mydb_log.dmp"

(Date-stamped file names are added by the shell wrapper in section 2.4, where the date is expanded before isql runs.)

1.10 Verify Backup (CRITICAL)

Backup is useless unless verified.

load database mydb_test
from "/backup/mydb_full.dmp"
with headeronly
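A minimal verification sketch, assuming the same sa login, server name, and dump path used elsewhere in this post; it runs the headeronly check (which reads only the dump header, loading nothing) through isql and flags any server error message in the output:

#!/bin/bash
# Read only the dump header; nothing is loaded into a real database.
OUT=$(isql -Usa -Ppassword -SASE_SERVER <<EOF
load database mydb_test
from "/backup/mydb_full.dmp"
with headeronly
go
EOF
)

# ASE errors come back as "Msg <n>, Level <n>, ..." lines in the isql output.
if echo "$OUT" | grep -q "^Msg "; then
    echo "Backup verification FAILED for /backup/mydb_full.dmp" >&2
    echo "$OUT" >&2
    exit 1
fi
echo "Header check passed for /backup/mydb_full.dmp"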

2 — Building a Real Enterprise Backup Strategy

2.1 Multi-Stripe Backups (MUST for Large DBs)


For large databases, use striping:

dump database mydb
to "/backup1/mydb_1.dmp"
stripe on "/backup2/mydb_2.dmp"
stripe on "/backup3/mydb_3.dmp"
with compression = 5

Benefits:

  • Faster backups

  • Parallel IO
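The restore has to reference the same stripe set. A sketch using the paths from the dump above (remember to bring the database online afterwards, as covered in part 3):

load database mydb
from "/backup1/mydb_1.dmp"
stripe on "/backup2/mydb_2.dmp"
stripe on "/backup3/mydb_3.dmp"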


2.2 Compression (Save Massive Space)

with compression = 5

Levels:

  • 1 = low CPU, low compression

  • 9 = high CPU, best compression

For production: use 3–5


2.3 Backup Scheduling Strategy

Example (Enterprise Standard)

  • Full backup: nightly

  • Log backup: every 5–15 min

  • Weekly archive backup: long retention
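Expressed as a crontab sketch (the script names, paths, and times are illustrative assumptions):

# Nightly full dump at 01:00
0 1 * * *    /opt/dba/full_dump.sh    >> /var/log/ase_backup.log 2>&1
# Transaction log dump every 15 minutes
*/15 * * * * /opt/dba/log_dump.sh     >> /var/log/ase_backup.log 2>&1
# Weekly archive dump, Sunday 03:00
0 3 * * 0    /opt/dba/archive_dump.sh >> /var/log/ase_backup.log 2>&1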


2.4 Automating with OS Scripts

Shell Script Example

#!/bin/bash

isql -Usa -Ppassword -SASE_SERVER <<EOF
dump database mydb 
to "/backup/mydb_full_$(date +%Y%m%d).dmp"
with compression = 5
go
EOF
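A matching log-dump script, following the same pattern (layout and paths are illustrative); this is the one cron would run every 5–15 minutes:

#!/bin/bash
# Timestamp to the minute so repeated runs never overwrite an earlier log dump
isql -Usa -Ppassword -SASE_SERVER <<EOF
dump transaction mydb
to "/backup/mydb_log_$(date +%Y%m%d_%H%M).dmp"
go
EOF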

2.5 Monitoring Backup Jobs

You must check:

  • Success/failure

  • Duration

  • Size

Query system tables:

select * from master..sysprocesses
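A narrower version of that query, as a sketch; it shows only sessions currently running a dump or load (the exact cmd strings can vary between ASE versions):

select spid, cmd, status, time_blocked
from master..sysprocesses
where cmd like 'DUMP%' or cmd like 'LOAD%'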

2.6 Backup Retention Policy

Typical:

  • Daily backups: keep 7 days

  • Weekly backups: keep 4 weeks

  • Monthly backups: keep 6–12 months
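A simple enforcement sketch for the daily tier, assuming date-stamped dump files in /backup as in the scripts above (weekly and monthly archives would live in their own directories with their own ages):

#!/bin/bash
# Delete daily full and log dumps older than 7 days
find /backup -name "mydb_full_*.dmp" -mtime +7 -print -delete
find /backup -name "mydb_log_*.dmp"  -mtime +7 -print -delete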


2.7 Common Mistakes (Avoid These)

  • Not backing up transaction logs

  • Storing backups on same disk as database

  • Not testing restore

  • Not monitoring failures


3 — Restore Strategy (Most Important Skill)

3.1 Full Restore

load database mydb
from "/backup/mydb_full.dmp"

3.2 Restore with Logs (Point-in-Time)

load database mydb
from "/backup/mydb_full.dmp"

load transaction mydb
from "/backup/mydb_log1.dmp"

load transaction mydb
from "/backup/mydb_log2.dmp"

3.3 Point-in-Time Recovery

load transaction mydb
from "/backup/mydb_log.dmp"
with until_time = "May 14 2026 10:15:00"

3.4 Restore Validation Checklist

After restore:

  • DB online?

  • Data consistent?

  • Apps working?
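One way to answer the "Data consistent?" question is the built-in consistency checks (they can take a long time on a 200 TB database, so run them on the restored test copy rather than production):

dbcc checkdb(mydb)
go
dbcc checkalloc(mydb)
go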


3.5 Disaster Recovery Simulation (YOU MUST PRACTICE)

At least once per month:

  • Restore entire DB to test server

  • Measure time

  • Fix issues
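A minimal timing sketch for the drill, assuming mydb already exists on the test server and a single-file dump for brevity (a real drill would use the full stripe set):

#!/bin/bash
# Measure wall-clock restore time on the test server
START=$(date +%s)

isql -Usa -Ppassword -STEST_ASE_SERVER <<EOF
load database mydb from "/backup/mydb_full.dmp"
go
online database mydb
go
EOF

END=$(date +%s)
echo "Restore completed in $(( (END - START) / 60 )) minutes"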


4 — Tools You Can Use (FREE + ENTERPRISE)

4.1 Built-in ASE Tools

  • isql

  • bcp

  • dump/load commands


4.2 Free Tools

1. Cron (Linux scheduling)

  • Automate backups

2. Shell scripting

  • Custom workflows

3. Rsync

  • Copy backups to remote site

rsync -av /backup remote:/backup

4.3 Monitoring Tools

  • Nagios (free)

  • Zabbix (free)


5 — Step-by-Step Execution Plan

Step 1

Understand database size and log growth
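The built-in procedures answer this directly; a quick sketch run from isql:

-- Database size, device usage, and free space
sp_helpdb mydb
go

-- Space used inside the database (run from within mydb)
use mydb
go
sp_spaceused
go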

Step 2

Set backup directory

Step 3

Create full backup script

Step 4

Create log backup script

Step 5

Automate with cron

Step 6

Test restore

Step 7

Add monitoring


6 — MASTER CHECKLIST 

Daily

  • Check backup success

  • Verify log backups

Weekly

  • Test restore

  • Clean old backups

Monthly

  • Full DR drill


