PostgreSQL 18: Asynchronous I/O for Scalability

PostgreSQL 18 was officially released on September 25, 2025, and it brings one of the biggest architectural upgrades in years: a next-generation asynchronous I/O subsystem. Before PostgreSQL 18, most disk I/O was synchronous; the new async I/O layer can fetch data without blocking the backend process and intelligently batch operations. This upgrade delivers meaningful real-world performance gains:
✅ Faster queries, especially for read-heavy workloads
✅ Higher throughput under concurrency
✅ Better hardware utilization without extra tuning
For teams building scalable products, this means smoother performance during peak usage, without over-engineering infrastructure. If you would like to better understand how this upgrade can impact your architecture and improve scalability in your stack, feel free to reach out to us at hello@whitecodelabs.com
#PostgreSQL #CTO #CIO #CEOPerspective #StartupTech #Scalability #CloudArchitecture #Databases #OpenSource #BackendEngineering
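As a concrete starting point, the new subsystem is controlled by the `io_method` setting introduced in PostgreSQL 18. A minimal sketch of inspecting and changing it (the specific values shown are illustrative; check your platform's release notes before switching):

```sql
-- Inspect the async I/O mode on a PostgreSQL 18+ server.
SHOW io_method;    -- e.g. 'worker' (the default); 'io_uring' is available on Linux
SHOW io_workers;   -- number of background I/O worker processes

-- Opt into io_uring on a supported Linux build (requires a server restart).
ALTER SYSTEM SET io_method = 'io_uring';
```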
More Relevant Posts
Scaling Postgres 388 is released! In this episode, we discuss PG17 and PG18 benchmarks across storage types, more about Postgres locks, sanitizing SQL, and whether a faster software and hardware environment can cause performance problems. https://lnkd.in/eHXb6uwi #Postgres #PostgreSQL
Solving a Kubernetes Storage Challenge with Longhorn

We hit a storage dilemma in Kubernetes: some databases can’t be clustered — if a pod dies on one node, it must restart fast on another node with the same data. Another challenge appeared when we needed to migrate a PostgreSQL cluster’s backup into a new Kubernetes environment with a new PostgreSQL (CNPG) cluster — a flow that differs from typical app data restores.

After testing our options, it all came down to how we handle Volumes/PVCs. We evaluated Ceph vs. Longhorn:
• Ceph ✅ powerful and feature-rich, but resource-heavy and complex to operate at our scale.
• Longhorn ✅ lightweight, easy to deploy, and a great fit for our use case.

What we achieved with Longhorn:
• Brought up PostgreSQL (CNPG) in a new cluster from existing backups/snapshots.
• Added high availability to single-node SQL databases via replicated volumes. Each volume keeps 2–3 replicas across nodes; if a node or pod fails, the workload can be rescheduled and attach a replica on another node in seconds (controller and scheduler permitting).

Why Longhorn for this scenario?
• Simple to run and resource-friendly
• Kubernetes-native operations (CSI snapshots, backups/DR)
• Fast restore paths for both single-node DBs and CNPG-managed clusters

Ceph still has powerful, unique capabilities — especially at very large scale or when you need unified block/file/object storage — but for our goals, Longhorn was the perfect fit.

🔗 I’ve shared a step-by-step doc on restoring a CNPG cluster from an existing Longhorn backup — links below.
https://lnkd.in/dDvqpFX2
https://lnkd.in/dwzi2UwH

#kubernetes #longhorn #cloudnative #devops #sre #postgresql #cnpg #statefulsets #storage #ceph
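The 2–3 replica setup described above corresponds to a Longhorn StorageClass. A minimal sketch, assuming a standard Longhorn install (the class name and replica count are illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-replicated
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "3"      # Longhorn keeps 3 replicas spread across nodes
  staleReplicaTimeout: "30"  # minutes before a failed replica is cleaned up
reclaimPolicy: Retain
allowVolumeExpansion: true
```

A PVC that references this class gets a volume whose replicas survive a node loss, which is what allows the fast reattach-on-reschedule behavior described in the post.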
The overhaul of NOT NULL constraints in PostgreSQL 18 redefines one of the database’s oldest foundations, adding flexibility, better validation, and modern architecture without disrupting existing systems. It’s a great example of how EDB engineers and the PostgreSQL community keep refining the core, quietly improving the reliability millions of users count on every day. Read the story in Álvaro Herrera's latest blog post: https://bit.ly/42TX1zr #PostgreSQL #Postgres18 #OpenSource #PostgresCommunity #EDBPostgresAI #DatabaseDevelopment #PostgresDevelopers
Too many PostgreSQL connections slowing things down? Here’s the simplest fix most teams overlook.

When every client connects directly to PostgreSQL, the server quickly gets overloaded — each connection consumes memory, CPU, and a backend process. PgBouncer changes the game: it sits between clients and PostgreSQL, pooling and reusing connections so the database handles only a small, manageable number of backend sessions.

Why PgBouncer is essential:
• PostgreSQL still struggles with very high connection counts
• Connection creation is expensive, especially at scale
• Pooling delivers smoother performance under unpredictable workloads

Key PgBouncer features:
• Ultra-lightweight connection pooling
• Session / transaction / statement pooling modes
• Hot-reload of configuration
• Minimal overhead, massive stability improvement

PgBouncer has been battle-tested in production for years and remains under active development. If your PostgreSQL workloads spike, it is one of the most reliable ways to keep your database calm and efficient.

#PostgreSQL #PgBouncer #CloudDatabases #DatabaseScaling #Postgres #pgsql
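A minimal `pgbouncer.ini` sketch tying the pieces above together (the database name, host, and pool sizes are illustrative and should be tuned to your workload):

```ini
; Minimal PgBouncer configuration sketch
[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction   ; session / transaction / statement
max_client_conn = 1000    ; clients PgBouncer will accept
default_pool_size = 20    ; actual backend connections per db/user pair
```

With this, a thousand application clients connect to port 6432 while PostgreSQL itself only ever sees about twenty backend sessions; `transaction` mode returns a server connection to the pool at each transaction boundary.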
Zero(ish)-Downtime PostgreSQL Upgrade

Recently completed a successful upgrade from PostgreSQL 11 → 15.13 using AWS RDS Blue/Green Deployments, achieving less than 2 minutes of downtime during the final cutover. Here’s the high-level process we followed:

1️⃣ Schema Preparation – Removed deprecated OID columns (SET WITHOUT OIDS) and dropped old aggregates to ensure Postgres 15 compatibility.
2️⃣ Feature Flag – Temporarily disabled weblog inserts (via a feature flag) to prevent blocking on large tables during schema migration.
3️⃣ Blue/Green Deployment – Used AWS-managed replication to sync the upgraded “Green” DB with production “Blue.”
4️⃣ Cutover – Switched traffic to the upgraded DB (took <2 minutes).
5️⃣ Validation & Monitoring – Verified transactions, cron jobs, and application connectivity post-upgrade.

This approach allowed us to complete a major version upgrade without noticeable downtime and with no data loss. Next stop → PostgreSQL 16

#PostgreSQL #AWSRDS #DatabaseUpgrade #BlueGreenDeployment #DevOps #ZeroDowntime
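The schema-preparation step can be sketched in SQL. This is a sketch only; the table and aggregate names below are hypothetical stand-ins for whatever a real schema audit turns up:

```sql
-- Tables declared WITH OIDS are unsupported from PostgreSQL 12 onward,
-- so drop the hidden OID column before upgrading past 11.
ALTER TABLE legacy_events SET WITHOUT OIDS;

-- Drop an old custom aggregate that is incompatible with the target version.
DROP AGGREGATE IF EXISTS legacy_median(numeric);
```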
Some PostgreSQL features take years to get right; virtual generated columns are one of them. In PostgreSQL 18, this long-awaited capability finally arrived thanks in large part to the patience, creativity, and persistence of core developer and EDB VP, Chief Architect, Peter Eisentraut. Peter's story traces seven years of trial, redesign, and collaboration to bring this feature to life. It’s a great glimpse into the kind of engineering care that keeps PostgreSQL advancing release after release. Read Peter's blog here: https://lnkd.in/eSFaemXz #PostgreSQL #OpenSource #EDBPostgresAI #PostgresCommunity #Postgres18 #EngineeringCulture #DatabaseDevelopment
Achieving Four 9s of Availability with Open Source PostgreSQL

99.99% availability with PostgreSQL is absolutely possible using open-source components — a fact that often surprises teams who assume it requires proprietary systems. 99.99% availability means less than 52.6 minutes of downtime per year, and reaching that standard demands architectural discipline and redundancy across every layer.

The foundational setup includes:
- A primary and replicas across isolated availability zones
- A separately protected site to remove regional single points of failure
- Streaming replication for seamless continuity
- Quorum-based autofailover without human intervention

When designed intentionally, PostgreSQL can deliver uptime comparable to that of enterprise-grade systems without the licensing overhead. PostgreSQL is fully capable of production-grade HA when the architecture is built for it.

Need help designing and validating a four-9s PostgreSQL architecture? Explore our Signature HA Methodology: https://lnkd.in/dcKWiC57
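The 52.6-minute figure follows directly from the availability percentage. A quick sketch of the arithmetic for several common targets:

```python
# Downtime budget implied by an availability target.
# Using an average year of 365.25 days, as the "52.6 minutes" figure does.
MINUTES_PER_YEAR = 365.25 * 24 * 60

def downtime_minutes_per_year(availability: float) -> float:
    """Minutes of allowed downtime per year at the given availability."""
    return (1.0 - availability) * MINUTES_PER_YEAR

for nines, target in [(2, 0.99), (3, 0.999), (4, 0.9999)]:
    print(f"{nines} nines ({target:.2%}): "
          f"{downtime_minutes_per_year(target):.1f} min/yr")
```

Four nines works out to roughly 52.6 minutes per year, or about 8.6 seconds per day — which is why the post stresses automated, quorum-based failover; a human paged at 3 a.m. cannot respond inside that budget.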
A #database architected and built for #Kubernetes. #CockroachDB: a #DistributedSQL database built for Kubernetes. CockroachDB is the only database architected and built from the ground up to deliver on the core distributed principles of #atomicity, #scale, and #survival, so you can manage your database IN Kubernetes, not alongside it. https://lnkd.in/gE7Hp4KW
🚀 𝗗𝗮𝘆 𝟲𝟲 - 𝗗𝗲𝗽𝗹𝗼𝘆 𝗠𝘆𝗦𝗤𝗟 𝗼𝗻 𝗞𝘂𝗯𝗲𝗿𝗻𝗲𝘁𝗲𝘀 #KodeKloud #100DaysOfDevOps

𝗛𝗲𝗿𝗲’𝘀 𝘁𝗵𝗲 𝘁𝗮𝘀𝗸 👇
✅ Created a PersistentVolume & PersistentVolumeClaim for data persistence
✅ Secured credentials using Kubernetes Secrets 🔐
✅ Deployed a MySQL container using a Deployment manifest
✅ Exposed MySQL using a NodePort Service on port 30007
✅ Verified everything with kubectl get pods, svc, pvc, pv

🔗 GitHub link: https://lnkd.in/gUvh5Kcj

💡 𝗞𝗲𝘆 𝗧𝗮𝗸𝗲𝗮𝘄𝗮𝘆𝘀:
• Always store credentials securely in Secrets — never hardcode them.
• PersistentVolumes ensure data stays safe even if Pods restart.
• Kubernetes makes deploying databases predictable and repeatable.

#DevOps #Kubernetes #MySQL #100DaysOfDevOps #KodeKloud #Containers #CloudNative
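The Secret and NodePort pieces of the task above can be sketched in two manifests (the object names, label selector, and placeholder password are illustrative; the real Deployment and PV/PVC manifests are in the linked repo):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysql-root-pass
type: Opaque
stringData:
  password: "changeme"   # placeholder only; inject real values, never commit them
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  type: NodePort
  selector:
    app: mysql           # must match the Deployment's pod labels
  ports:
    - port: 3306         # service port inside the cluster
      targetPort: 3306   # MySQL container port
      nodePort: 30007    # externally reachable port, as in the task
```

The Deployment would then reference the Secret via `env.valueFrom.secretKeyRef` to set `MYSQL_ROOT_PASSWORD`, keeping the credential out of the manifest itself.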