I have a Cloud SQL PostgreSQL instance with a master and a replica configured.
The replica is correctly reporting around 355 GB of storage usage, which aligns with the expected volume of data.
However, the primary (master) instance is showing more than 8 TB of usage in the Google Cloud Console, which is inconsistent with the actual amount of stored data.
I performed a manual check by connecting to the primary database via the console and running queries to estimate the real storage usage, which also returned a value close to 355 GB.
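For reference, the check I ran was along these lines (a sketch using standard PostgreSQL size functions; run from a psql session on the primary):

```sql
-- Total size of each database on the instance
SELECT datname,
       pg_size_pretty(pg_database_size(datname)) AS size
FROM pg_database
ORDER BY pg_database_size(datname) DESC;

-- Largest tables and materialized views (including their indexes)
-- in the currently connected database
SELECT relname,
       pg_size_pretty(pg_total_relation_size(oid)) AS total_size
FROM pg_class
WHERE relkind IN ('r', 'm')
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 10;
```

Summing these across all databases gave me roughly the same 355 GB figure the replica reports.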
I would like to understand:
Why is the Cloud SQL dashboard showing more than 8 TB of usage?
Does this value represent some form of hidden storage (e.g., old WALs, internal backups, invisible temporary tables)?
Is there anything I can do to recover this space or correct the display?
Thank you in advance for your help.
Hi @Jasar,
Welcome to Google Cloud Community!
The extra usage is most likely a combination of point-in-time recovery (PITR) logs and temporary data. PITR relies on write-ahead logs (WALs), which are retained for the configured transaction log retention period (about 7 days by default) before being deleted along with the associated backups, and they can consume a large amount of disk space in the meantime. If these logs are the cause, you can either increase your storage or enable automatic storage increases to avoid surprises.
If you don’t need PITR, disabling it deletes the logs and frees up the space, although Cloud SQL never shrinks the provisioned disk size itself. Temporary data can also contribute to storage usage, but it is removed during maintenance and doesn’t incur extra costs.
A newly created database uses about 100 MB for system files, so that alone won’t explain the gap. Checking the breakdown of storage by type (data, WAL, temporary files) should help pinpoint where the 8 TB is going. You can also check out Default Metrics to learn more.
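As a sketch of how to get that breakdown: Cloud SQL exposes a per-type disk metric that you can chart in Metrics Explorer. Something like the following MQL query (the instance ID is a placeholder, and you should verify the exact metric name in your project’s metrics list) groups usage by data type, which makes runaway WAL or temporary-file growth stand out:

```
fetch cloudsql_database
| metric 'cloudsql.googleapis.com/database/disk/bytes_used_by_data_type'
| filter resource.database_id == 'my-project:my-primary-instance'
| group_by [metric.data_type], mean(val())
```

If the WAL/log series accounts for most of the 8 TB, that points squarely at PITR log retention rather than your actual table data.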
Was this helpful? If so, please accept this answer as “Solution”. If you need additional assistance, reply here within 2 business days and I’ll be happy to help.