This article demonstrates how Redis and PostgreSQL work together to provide caching for improved SQL query performance. It is intended for SAP and Red Hat administrators supporting SAP deployments who are new to Redis and PostgreSQL and may have applications that use these databases running alongside SAP ERP applications.
Background
A common database used by the SAP business suite of applications in Linux environments is SAP HANA. SAP business scenarios now include applications utilizing databases like PostgreSQL and Redis that you can deploy in containers and operate alongside the SAP business suite software, such as:
- Data Services Supplement for Big Data: You can use PostgreSQL as sources and targets in SAP Data Services.
- SAP side-by-side extensions are software applications that run outside of core SAP ERP and interact with it using standard SAP APIs. These custom applications may require their own persistence of business data in databases like PostgreSQL and/or Redis. You can deploy these extensions on SAP’s cloud-native development platform, SAP Business Technology Platform, or on Red Hat Application Foundations on OpenShift.
- SAP Edge Lifecycle Management delivers containerized workloads to customer edge computing locations. An example workload includes the Edge Integration Cell, part of the SAP Integration Suite, which enables customers to manage APIs and run integration scenarios on-premise. The deployment uses PostgreSQL and Redis.
While core SAP applications gain optimized performance from the SAP HANA in-memory database, applications based on PostgreSQL can improve performance by caching PostgreSQL data with Redis.
PostgreSQL is a relational database management system with its own caching capabilities. However, PostgreSQL’s built-in caching gives the application little control over what data is cached and when it is cached.
Redis can help by providing more granular control. Redis is an in-memory data store that delivers faster caching performance than PostgreSQL and allows you to reduce the load on the PostgreSQL database. Combining Redis with PostgreSQL enables high-performance SQL operations while ensuring atomic and durable persistence to disk via PostgreSQL’s transactional database engine.
Example program demonstrates caching
We introduce a short Python program that demonstrates the caching of data with Redis and PostgreSQL.
The lab environment is a single KVM virtual machine running Red Hat Enterprise Linux (RHEL) 9.4 (based on RHEL 9.4 KVM Guest Image downloaded from Red Hat Customer Portal -> Downloads).
The Redis and PostgreSQL versions used here are based on the packages available in RHEL 9 Application Streams delivered through the AppStream repository.
Install Redis
You can install Redis by referencing How to install and configure Redis in Red Hat Enterprise Linux:
[root@node1 ~]# dnf install redis
[root@node1 ~]# systemctl start redis
[root@node1 ~]# systemctl enable redis
[root@node1 ~]# systemctl status redis
● redis.service - Redis persistent key-value database
Loaded: loaded (/usr/lib/systemd/system/redis.service; enabled; preset: disabled)
Drop-In: /etc/systemd/system/redis.service.d
└─limit.conf
Active: active (running) since Mon 2025-05-05 12:28:14 EDT; 1min 46s ago
Main PID: 705 (redis-server)
Status: "Ready to accept connections"
Tasks: 5 (limit: 10552)
Memory: 9.9M
CPU: 277ms
CGroup: /system.slice/redis.service
└─705 "/usr/bin/redis-server 127.0.0.1:6379"
May 05 12:28:14 node1 systemd[1]: Starting Redis persistent key-value database...
May 05 12:28:14 node1 systemd[1]: Started Redis persistent key-value database.
[root@node1 ~]#
After installing Redis, further configuration typically involves editing the redis.conf file to customize settings. For this simple demo, these steps have been omitted.
Test Redis by connecting to it with the built-in redis-cli command-line tool, then store, retrieve, and remove a key:
[root@node1 ~]# redis-cli -h 127.0.0.1 -p 6379
127.0.0.1:6379>
127.0.0.1:6379> set key1 test
OK
127.0.0.1:6379> get key1
"test"
127.0.0.1:6379> del key1
(integer) 1
127.0.0.1:6379> get key1
(nil)
127.0.0.1:6379>
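Once the Python environment described later in this article is set up (see the Create the Python program section, which installs the redis package), you can run an equivalent check from Python. The following is a minimal sketch, assuming Redis is listening on 127.0.0.1:6379 as shown above:
import redis

# Connect to the local Redis instance started above
r = redis.Redis(host="127.0.0.1", port=6379, db=0)

# Store, retrieve, and remove a key, mirroring the redis-cli test
r.set("key1", "test")
print(r.get("key1"))   # b'test' (values are returned as bytes by default)
r.delete("key1")
print(r.get("key1"))   # None, because the key has been removed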
Install PostgreSQL
For PostgreSQL installation, refer to the Red Hat product documentation, Installing PostgreSQL. See the following:
[root@node1 ~]# dnf install postgresql-server
[root@node1 ~]# postgresql-setup --initdb
* Initializing database in '/var/lib/pgsql/data'
* Initialized, logs are in /var/lib/pgsql/initdb_postgresql.log
[root@node1 ~]#
[root@node1 ~]# systemctl start postgresql.service
[root@node1 ~]# systemctl enable postgresql.service
Created symlink /etc/systemd/system/multi-user.target.wants/postgresql.service → /usr/lib/systemd/system/postgresql.service.
[root@node1 ~]#
Configure database password encryption as follows:
- Set the password hashing algorithm to “scram-sha-256” in /var/lib/pgsql/data/postgresql.conf.
- Update /var/lib/pgsql/data/pg_hba.conf to allow connections to use “scram-sha-256”.
- Restart PostgreSQL:
[root@node1 ~]# more /var/lib/pgsql/data/postgresql.conf | grep scram-sha-256
#password_encryption = md5 # md5 or scram-sha-256
password_encryption = scram-sha-256
[root@node1 ~]#
[root@node1 ~]# more /var/lib/pgsql/data/pg_hba.conf | grep scram-sha-256
# METHOD can be "trust", "reject", "md5", "password", "scram-sha-256",
# "scram-sha-256" are preferred since they send encrypted passwords.
host all all 127.0.0.1/32 scram-sha-256
[root@node1 ~]#
[root@node1 ~]# systemctl stop postgresql.service
[root@node1 ~]# systemctl start postgresql.service
Create the database and table
The installation of PostgreSQL automatically creates the postgres system user, which is used to run the PostgreSQL server. Create a separate database user “mydbuser” for database creation and user management:
[root@node1 ~]# su - postgres
[postgres@node1 ~]$ psql
psql (13.20)
Type "help" for help.
postgres=#
postgres=# CREATE USER mydbuser WITH PASSWORD 'mypasswd' CREATEROLE CREATEDB;
CREATE ROLE
postgres=#
Log in as “mydbuser” and create database “mydatabase”:
[root@node1 ~]# psql -U mydbuser -h 127.0.0.1 -d postgres
Password for user mydbuser:
psql (13.20)
Type "help" for help.
postgres=>
postgres=> CREATE DATABASE mydatabase;
CREATE DATABASE
postgres=>
Test the connection to the database:
[root@node1 ~]# psql -U mydbuser -h 127.0.0.1 -d mydatabase
Password for user mydbuser:
psql (13.20)
Type "help" for help.
mydatabase=>
mydatabase=> \conninfo
You are connected to database "mydatabase" as user "mydbuser" on host "127.0.0.1" at port "5432".
mydatabase=>
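After the Python client libraries are installed (see the Create the Python program section below), you can perform a similar connection check from Python. This is a minimal sketch using psycopg2, assuming the database, user, and password created above:
import psycopg2

# Connect with the credentials created earlier in this article
conn = psycopg2.connect(
    dbname="mydatabase",
    user="mydbuser",
    password="mypasswd",
    host="127.0.0.1",
    port=5432
)

# Confirm which database and user the session is connected as
cur = conn.cursor()
cur.execute("SELECT current_database(), current_user;")
print(cur.fetchone())   # ('mydatabase', 'mydbuser')
cur.close()
conn.close()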
Create a sales order table named “orders” with fields for order number, material/item, and quantity, then insert sample data:
mydatabase=> CREATE TABLE orders ( ordernumber SERIAL PRIMARY KEY, material VARCHAR (50), quantity integer );
mydatabase=> INSERT INTO orders (ordernumber, material, quantity) VALUES (100, 'widget1', 5), (101, 'widget4', 12), (102, 'sink', 1);
INSERT 0 3
mydatabase=>
Verify the contents of the table:
mydatabase=> SELECT * FROM orders;
ordernumber | material | quantity
-------------+----------+----------
100 | widget1 | 5
101 | widget4 | 12
102 | sink | 1
(3 rows)
mydatabase=>
Create the Python program
The program is based on this tutorial.
Set up the Python environment. Python 3.9 is installed by default on RHEL 9. Install the Python package installer, pip:
[root@node1 ~]# python3 --version
Python 3.9.18
[root@node1 ~]# yum install python3-pip
Create a Python virtual environment “test,” activate it, and install Python libraries to interact with PostgreSQL and Redis:
[root@node1 ~]# python3 -m venv test
[root@node1 ~]#
[root@node1 ~]# source test/bin/activate
(test) [root@node1 ~]# pip install psycopg2-binary redis
Create a Python program named redis_postgresql_test.py. The program logic integrates Redis with PostgreSQL:
- Goal: fetch details of order number 101.
- Read the Redis cache first.
- If not in the Redis cache, fetch the data from PostgreSQL.
- Insert data into Redis cache.
- Subsequent read requests for order 101 will be retrieved from the Redis cache:
import psycopg2
import redis
import json

# Connect to PostgreSQL
conn = psycopg2.connect(
    dbname="mydatabase",
    user="mydbuser",
    password="mypasswd",
    host="127.0.0.1"
)

# Create a Redis client
redis_client = redis.StrictRedis(host='localhost', port=6379, db=0)

def get_order_from_db(order_num):
    """Fetch order details from PostgreSQL."""
    cur = conn.cursor()
    # Use a parameterized query so the order number is passed safely
    cur.execute("SELECT * FROM orders WHERE ordernumber = %s;", (order_num,))
    i_order = cur.fetchone()
    cur.close()
    return i_order

def get_order(order_num):
    """Fetch order details from Redis cache or PostgreSQL."""
    # Check if the order number is in the Redis cache
    cache_key = f"order:{order_num}"
    cached_order = redis_client.get(cache_key)
    if cached_order:
        print("Fetching from Redis Cache...")
        return json.loads(cached_order)
    else:
        # If not in cache, fetch from PostgreSQL and cache the result
        print("Fetching from PostgreSQL...")
        i_order = get_order_from_db(order_num)
        if i_order:
            # Cache the row as JSON with a 600-second (10-minute) expiry
            redis_client.setex(cache_key, 600, json.dumps(i_order))
        return i_order

# Example: Fetch order details of order number = 101
i_order = get_order(101)
if i_order:
    print(f"Order Details: Order number = {i_order[0]}, Material = {i_order[1]}, Qty = {i_order[2]}")
else:
    print("Order number not found")
Execute the program
Execute the program twice to verify the source of the results:
- First time data is retrieved from PostgreSQL.
- Second time data is retrieved from Redis cache:
(test) [root@node1 ~]# python redis_postgresql_test.py
Fetching from PostgreSQL...
Order Details: Order number = 101, Material = widget4, Qty = 12
(test) [root@node1 ~]# python redis_postgresql_test.py
Fetching from Redis Cache...
Order Details: Order number = 101, Material = widget4, Qty = 12
(test) [root@node1 ~]#
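One aspect the example program does not cover is cache invalidation: if an order changes in PostgreSQL before its cached copy expires, Redis will keep serving the stale row for up to 600 seconds. A common approach is to delete the cache key in the same code path that updates the database. The following is a minimal sketch; the update_order_quantity function is a hypothetical addition, not part of the original program, and it reuses the conn and redis_client objects defined in redis_postgresql_test.py:
def update_order_quantity(order_num, quantity):
    """Update an order in PostgreSQL and invalidate its Redis cache entry."""
    cur = conn.cursor()
    cur.execute(
        "UPDATE orders SET quantity = %s WHERE ordernumber = %s;",
        (quantity, order_num),
    )
    conn.commit()
    cur.close()
    # Remove the cached copy so the next read fetches fresh data from PostgreSQL
    redis_client.delete(f"order:{order_num}")

# Example: change the quantity of order 101; the next get_order(101) call
# will report "Fetching from PostgreSQL..." and re-populate the cache
update_order_quantity(101, 15)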
Summary
SAP business scenarios now include applications that require databases like PostgreSQL and Redis. These applications are deployed in the same landscape as the SAP Business Suite of applications and S/4HANA.
In this example, we have shown a simple custom program managing the interaction between PostgreSQL and Redis. Applications can implement their own custom logic to manage the integration between Redis and PostgreSQL. Alternatively, separate tools can facilitate this, such as Redis Data Integration, n8n, and Airbyte.
The deployment of PostgreSQL and Redis shown here is on a single RHEL host. In production environments, you can deploy Redis and PostgreSQL in containers on Red Hat OpenShift Container Platform for performance and scale (see Databases and data analytics on Red Hat OpenShift).
SAP’s business-critical integration software, Process Integration/Process Orchestration (PI/PO), is used by more than 12,500 SAP customers. This application will be migrated to a new architecture that uses PostgreSQL and Redis. For more details on this new deployment on Red Hat, refer to SAP Edge Integration Cell on OpenShift.