As part of our ongoing series, we have previously explored the validation of database objects for migrations from SAP ASE (formerly known as Sybase ASE) to various target databases, including Amazon RDS for SQL Server, Amazon RDS for MySQL, Amazon RDS for MariaDB, and Amazon Aurora MySQL-Compatible Edition. In this final installment, we will concentrate on the validation process for migrations from SAP ASE to Amazon RDS for PostgreSQL or Amazon Aurora PostgreSQL-Compatible Edition.
For this schema migration, you can utilize the AWS Schema Conversion Tool (AWS SCT), similar to our previous discussions, but ensure that the target database is set to either Amazon RDS for PostgreSQL or Aurora PostgreSQL-Compatible.
Prerequisites
To perform the post-migration validation outlined in this guide, you will need:
- A source database from SAP ASE
- An Amazon RDS for PostgreSQL or Amazon Aurora PostgreSQL-Compatible target database after migration
- An Amazon Elastic Compute Cloud (Amazon EC2) instance or a SQL client with the necessary permissions to connect to both the source and target databases and run the validation SQL statements
- A database user with at least the public role on the source SAP ASE database
- The database user must have SELECT permissions on the following system tables:
- [source_db].dbo.sysobjects
- [source_db].dbo.syscolumns
- [source_db].dbo.sysconstraints
- [source_db].dbo.syspartitionkeys
- [source_db].dbo.sysindexes
- [source_db].dbo.sysreferences
- [source_db].dbo.sysusers
- master.dbo.spt_values
- A database user with read privileges on the target database, including permissions on the following schemas:
- information_schema
- pg_catalog
Database Object Identification and Validation
To perform database object validation, you first need to identify the types of database objects involved. The following sections cover the database objects you should compare between SAP ASE and the corresponding PostgreSQL database. Validating the schema also helps mitigate issues during the final stages of migration.
- Schemas
- Tables
- Views
- Functions
- Stored Procedures
- Indexes
- Triggers
- Constraints
- Primary Key Constraints
- Foreign Key Constraints
- Check Constraints
In the subsequent sections, we will delve into each of these object types, providing the necessary SQL queries to assist in identifying any discrepancies in the migrated schema objects.
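Before examining each object type individually, it can be useful to get a high-level inventory of objects on the SAP ASE side by grouping sysobjects by object type. The following query is a sketch; the type codes shown ('U' for user tables, 'V' for views, 'P' for stored procedures, 'TR' for triggers) are standard sysobjects codes, and your database may contain additional types.

```sql
-- High-level inventory of user-created objects in the source SAP ASE database,
-- grouped by object type code from sysobjects
SELECT type,
       COUNT(*) AS ObjectCount
FROM sysobjects
WHERE type IN ('U', 'V', 'P', 'TR')  -- user tables, views, procedures, triggers
GROUP BY type
ORDER BY type;
```

These counts give you a quick baseline to compare against per-object-type validation results as you work through the sections that follow.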
Schemas
To validate schemas at both the source (SAP ASE) and target (PostgreSQL), use the following SQL queries:
For SAP ASE:
SELECT DISTINCT su.name AS SchemaName
FROM sysusers su
INNER JOIN sysobjects so ON su.uid = so.uid
ORDER BY su.name;
For PostgreSQL:
SELECT schema_name AS "SchemaName"
FROM information_schema.schemata
WHERE schema_name NOT IN ('pg_catalog', 'public', 'information_schema', 'aws_sapase_ext')
  AND schema_name NOT LIKE 'pg_toast%'
  AND schema_name NOT LIKE 'pg_temp%'
  AND schema_name NOT LIKE '%pg_toast_temp%'
ORDER BY schema_name;
Extension Packs
When using AWS SCT to convert your database schema, additional schemas known as extension packs may be created in your target database. Specifically, when migrating from SAP ASE to Amazon RDS for PostgreSQL or Aurora PostgreSQL, AWS SCT generates an extension pack named aws_sapase_ext. You can verify its presence with the following PostgreSQL query:
SELECT schema_name
FROM INFORMATION_SCHEMA.schemata
WHERE SCHEMA_NAME = 'aws_sapase_ext'
ORDER BY schema_name;
Tables
Next, we will examine the tables under each schema (or owner) and the detailed information for each table. The migration tool (in this case, AWS SCT) converts the source SAP ASE tables into equivalent PostgreSQL target tables, maintaining the same or compatible data types.
To ensure all tables have been successfully migrated, use the following SQL queries:
For SAP ASE:
SELECT user_name(uid) AS SchemaName,
COUNT(name) AS TableCount
FROM sysobjects
WHERE type='U'
GROUP BY uid
ORDER BY user_name(uid);
For PostgreSQL:
SELECT schemaname AS "SchemaName", COUNT(tablename) AS "TableCount"
FROM pg_tables
WHERE schemaname NOT IN ('aws_sapase_ext', 'public', 'information_schema', 'pg_catalog', 'pg_toast', 'pg_temp')
GROUP BY schemaname
ORDER BY schemaname;
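If the table counts differ for a schema, comparing table names on both sides helps pinpoint which tables are missing. The following queries are a sketch of that drill-down, following the same system catalogs used above; adjust the excluded schemas to match your environment.

For SAP ASE:

```sql
-- List each user table with its owner (schema) for name-level comparison
SELECT user_name(uid) AS SchemaName,
       name AS TableName
FROM sysobjects
WHERE type = 'U'
ORDER BY user_name(uid), name;
```

For PostgreSQL:

```sql
-- List each migrated table with its schema for name-level comparison
SELECT schemaname AS "SchemaName",
       tablename AS "TableName"
FROM pg_tables
WHERE schemaname NOT IN ('aws_sapase_ext', 'public', 'information_schema', 'pg_catalog')
ORDER BY schemaname, tablename;
```

Sorting both result sets by schema and table name makes a side-by-side comparison straightforward.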
Conclusion
Run these queries against both the source and target databases and compare the results for discrepancies. Identifying differences early lets you address the root cause, or consult the AWS SCT logs for guidance, before proceeding with the migration.
