🚀 Built comprehensive Azure PostgreSQL passwordless authentication samples for Java! Microsoft recommends passwordless authentication as a security best practice. I’ve created two complete implementations with interactive testing to explore both approaches outlined in their guidance: ✅ Zero hardcoded secrets — going fully passwordless ✅ Azure Plugin vs Manual Token comparison 🔍 Explore the guidance: https://lnkd.in/g6BGX6gb 📂 Sample code on GitHub: https://lnkd.in/gaJTQJz8 #Azure #Java #PostgreSQL #Security #Passwordless #CloudDevelopment
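For readers weighing the two approaches, here is a language-agnostic sketch (in Python, with a fake credential standing in for the Azure SDK — `FakeCredential` and `build_connect_args` are illustrative names, not the sample's actual API) of the "manual token" idea: acquire a short-lived Entra ID access token and hand it to the driver in place of a password.

```python
# Sketch of the "manual token" passwordless pattern: an Entra ID access
# token is acquired at connect time and used as the password, so no
# secret is ever stored. FakeCredential is a stand-in for a real Azure
# credential object; the scope string is the one used for Azure
# Database for PostgreSQL.

OSSRDBMS_SCOPE = "https://ossrdbms-aad.database.windows.net/.default"

class FakeCredential:
    """Stand-in for an Azure credential; returns a canned token."""
    def get_token(self, scope):
        assert scope == OSSRDBMS_SCOPE
        return "eyJ...fake-access-token"

def build_connect_args(host, user, credential):
    # The short-lived token takes the password's place; nothing is
    # hardcoded in config or source.
    return {
        "host": host,
        "user": user,                      # the Entra ID principal name
        "password": credential.get_token(OSSRDBMS_SCOPE),
        "sslmode": "require",              # Azure requires TLS
    }

args = build_connect_args("myserver.postgres.database.azure.com",
                          "myuser@contoso.com", FakeCredential())
print(args["password"].startswith("eyJ"))  # token, not a stored secret
```

The "Azure plugin" approach wraps this same token exchange inside the JDBC driver plugin, so application code never touches the token at all.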
Azure PostgreSQL passwordless authentication samples for Java
Connection Pooling in PostgreSQL 17 — What to use and when? 🚀 Spinning up more DB connections isn’t scaling—it’s thrashing. In my latest post, I break down how connection pooling keeps latency predictable and throughput high in PostgreSQL 17. What’s inside: Why pooling matters: cut connection overhead, smooth traffic spikes, protect max_connections. Options compared: pgBouncer — lightweight, fast; session vs. transaction vs. statement pooling. pgpool-II — pooling plus read load balancing & failover. pgjdbc-ng — in-app pooling for JVM stacks. Quick chooser: Need simple, low-overhead pooling? → pgBouncer Need pooling and read split / HA? → pgpool-II Pure Java and want per-service control? → pgjdbc-ng Takeaway: Picking the right pooler is one of the highest-leverage moves for Postgres performance and scalability. 📢 Stay Updated with Daily PostgreSQL & Cloud Tips! If you’ve been finding my blog posts helpful and want to stay ahead with daily insights on Snowflake, AWS, PostgreSQL, Cloud Infrastructure, Performance Tuning, and DBA Best Practices — I invite you to subscribe to my Medium account. 🔔 Subscribe here 👉 https://lnkd.in/gjTTWWKk 🔗 Read the full post: https://lnkd.in/gzwHQbXh Your support means a lot — and you’ll never miss a practical guide again! #PostgreSQL #PostgreSQL17 #Database #DBA #DatabaseAdministrator #Linux #RedHat #RockyLinux #AlmaLinux #OracleLinux #DatabaseInstallation #OpenSource #DevOps #SystemAdministrator #CloudComputing #DataEngineering #DatabaseManagement #ITInfrastructure #CloudDatabase #PostgreSQLInstallation #DatabasePerformance #TechBlog #MediumBlog #Tutorial #PostgresWomenIndia #StepByStepGuide #Learning #CareerGrowth #TechnicalWriting #KnowledgeSharing #MySQL #MSSQL #Mongodb #Oracle #DatabaseSecurity #pg_hba #DataSecurity #Backend #TechWriteUp #DevCommunity #RDS #EC2 #S3 #Aurora #pgBouncer #pgpoolII #JDBC #SRE #Scalability
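The core win of pooling — reusing a small, fixed set of connections instead of opening one per request — can be shown with a toy pool. This is a sketch, not pgBouncer; `MiniPool` is an illustrative name and `object` stands in for a real connection factory.

```python
import queue

class MiniPool:
    """Toy connection pool: illustrates why reuse beats reconnecting."""
    def __init__(self, factory, size):
        self._q = queue.Queue()
        self._factory = factory
        self.created = 0
        for _ in range(size):
            self._q.put(self._make())

    def _make(self):
        # In real life this is the expensive part: TCP + auth + backend fork.
        self.created += 1
        return self._factory()

    def acquire(self):
        return self._q.get()   # blocks instead of opening a new connection

    def release(self, conn):
        self._q.put(conn)

# 100 "queries" over a pool of 5 -> only 5 connections ever created.
# That per-request setup cost is exactly what pgBouncer-style pooling
# avoids, and why max_connections stays protected under spikes.
pool = MiniPool(factory=object, size=5)
for _ in range(100):
    c = pool.acquire()
    pool.release(c)
print(pool.created)  # 5
```

Transaction-level pooling (pgBouncer's most common mode) applies the same trick per transaction rather than per session, which is what lets a few dozen server connections serve thousands of clients.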
🚀 Automating MongoDB Backups with Bash + Email Notifications 🗄️ Recently, I created a simple yet powerful Bash script that automates the MongoDB schema backup process for Non-Production environments. Here’s what the script does 👇 ✅ Verifies if the MongoDB database exists before taking backup ✅ Performs schema-only mongodump ✅ Logs all activities ✅ Sends a neat HTML-formatted email notification with backup status (✅ Success / ❌ Failure) ✅ Uses msmtp for lightweight email delivery It’s a handy automation for DB admins and DevOps engineers who want to keep things clean and monitored without manual intervention. 💻 Check out the full script on GitHub: 🔗 https://lnkd.in/g72B633Y If you’re working on database or infrastructure automation, would love to hear your thoughts or improvements! 💬 #MongoDB #Bash #Automation #DevOps #DatabaseAdministration #Scripting #Linux #Cloud
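The script's command assembly and status-email logic can be sketched roughly like this (`build_dump_cmd` and `email_subject` are hypothetical helpers, not the script's actual function names; the real script's schema-only filtering, existence check, and msmtp delivery are elided):

```python
import shlex

def build_dump_cmd(host, db, out_dir):
    # Assemble a mongodump invocation (--host/--db/--out are real
    # mongodump flags). The original script restricts this to a
    # schema-only dump and first verifies the database exists.
    return ["mongodump", "--host", host, "--db", db, "--out", out_dir]

def email_subject(db, returncode):
    # Map the dump's exit code onto the notification's status line.
    status = "SUCCESS" if returncode == 0 else "FAILURE"
    return f"[{status}] MongoDB schema backup: {db}"

cmd = build_dump_cmd("localhost:27017", "appdb", "/backups/appdb")
print(shlex.join(cmd))
print(email_subject("appdb", 0))  # [SUCCESS] MongoDB schema backup: appdb
```

Keeping the command as a list (rather than an interpolated string) sidesteps shell-quoting bugs when database names contain unusual characters.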
I’ve built a new repository that automates the export of MySQL/Aurora databases from AWS Backup snapshots to Amazon S3, using Ansible integrated with Jenkins CI/CD. This project extends my previous repo aurora-db-export (https://lnkd.in/geaqeQmn) — now enhanced with a full Jenkins pipeline that orchestrates Ansible playbooks in a controlled, repeatable, and auditable workflow. ⚙️ Pipeline Workflow 1. Jenkins pipeline triggers (manual or scheduled) 2. Jenkins agent labeled ansible initializes a Python virtual environment 3. Repository is checked out from SCM (GitHub) 4. Required Ansible collections are installed 5. Main playbook runs to: - Retrieve DB credentials from AWS Secrets Manager - Identify the latest AWS Backup snapshot - Create a temporary RDS instance - Dump a specific MySQL database - Export data to Amazon S3 - Clean up temporary resources 6. Jenkins archives logs and reports success/failure 7. Workspace is cleared after completion 💡 This integration bridges DevOps automation and AWS data operations, enabling reliable, hands-free database exports directly through CI/CD. 🔗 Check it out here: GitHub – Jenkins-Ansible MySQL Export Automation 👉 https://lnkd.in/gkPDxrB4 #DevOps #Jenkins #Ansible #AWS #Automation #CloudOps #InfrastructureAsCode #MySQL #RDS
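Step 5's "identify the latest AWS Backup snapshot" boils down to selection logic like the following sketch. The field names mirror typical recovery-point metadata but are assumptions here, and the real playbook queries AWS APIs rather than filtering an in-memory list:

```python
from datetime import datetime, timezone

def latest_snapshot(snapshots):
    """Pick the most recent completed recovery point. Only snapshots
    whose Status is COMPLETED are candidates; partial or expired ones
    must not be restored from."""
    done = [s for s in snapshots if s["Status"] == "COMPLETED"]
    return max(done, key=lambda s: s["CreationDate"]) if done else None

snaps = [
    {"Status": "COMPLETED", "CreationDate": datetime(2024, 5, 1, tzinfo=timezone.utc), "Arn": "arn:a"},
    {"Status": "COMPLETED", "CreationDate": datetime(2024, 6, 1, tzinfo=timezone.utc), "Arn": "arn:b"},
    {"Status": "PARTIAL",   "CreationDate": datetime(2024, 7, 1, tzinfo=timezone.utc), "Arn": "arn:c"},
]
print(latest_snapshot(snaps)["Arn"])  # arn:b
```

Note the newest snapshot (arn:c) is skipped because it never completed — a small but important guard before spinning up the temporary RDS instance from it.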
🚀 Setting Up Multi-Instance MySQL & MongoDB with Docker Compose on a Single VM Recently, I built a structured setup to run multiple MySQL (8.0.37) containers on different ports (3306 & 3307) along with a MongoDB (6.x) container on port 27018 — all within a single Ubuntu VM (4GB RAM, 8 vCPU, 100GB SSD). Each service is isolated through its own Docker Compose YAML file, configured for performance, persistence, and easy management. This setup ensures: ✅ Simplified multi-environment management (Dev & PreProd) ✅ Clean data directory and logging separation ✅ Easy scalability for future replication or monitoring setups ✅ Quick rebuild/recovery using Docker volumes You can find the detailed step-by-step guide and YAML configurations in the document I created 👇 📘 MySQL & MongoDB Docker Setup Guide #MySQL #MongoDB #Docker #DevOps #DatabaseAdministration #Containerization #OpenSource #InfrastructureAsCode
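A per-service compose file in this kind of setup typically looks like the fragment below. This is a sketch, not the document's actual YAML: the service name, host port, and volume paths are illustrative, and the root password is expected from a `.env` file rather than hardcoded.

```yaml
# Sketch: one of the isolated compose files (e.g. the PreProd MySQL on 3307).
services:
  mysql-preprod:
    image: mysql:8.0.37
    container_name: mysql-preprod
    ports:
      - "3307:3306"                # second instance on a non-default host port
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD:?set in .env}
    volumes:
      - ./preprod/data:/var/lib/mysql   # persistence across container rebuilds
      - ./preprod/logs:/var/log/mysql   # clean log separation per environment
    restart: unless-stopped
```

Because each environment has its own file, data directory, and port, a single `docker compose -f <file> up -d` rebuilds one instance without touching the others.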
Day 66 – #100DaysOfDevOps #KodeKloud

Situation: The Nautilus DevOps team needed to deploy a MySQL database on the Kubernetes cluster with persistent storage and secure credential management using Kubernetes secrets. My task was to handle the full setup, from storage provisioning to deployment and service exposure.

Task: Deploy a MySQL instance with persistent storage, environment-based configuration, and secure credentials using secrets.

Action:
1) Created PersistentVolume (PV):
i) Name: mysql-pv
ii) Capacity: 250Mi
iii) Defined access mode and storage class.
2) Created PersistentVolumeClaim (PVC):
i) Name: mysql-pv-claim
ii) Requested 250Mi storage to bind with the PV.
3) Configured Secrets:
i) mysql-root-pass → key: password=YUIidhb667
ii) mysql-user-pass → keys: username=kodekloud_rin, password=LQfKeWWxWD
iii) mysql-db-url → key: database=kodekloud_db3
4) Created MySQL Deployment:
i) Name: mysql-deployment
ii) Image: mysql:latest
iii) Volume mounted /var/lib/mysql using the PVC.
iv) Environment variables linked to the secrets via secretKeyRef: MYSQL_ROOT_PASSWORD, MYSQL_DATABASE, MYSQL_USER, MYSQL_PASSWORD
5) Exposed the service:
i) Name: mysql
ii) Type: NodePort
iii) Port: 3306
iv) NodePort: 30007
6) Verified the setup with kubectl get pods,svc,pvc,pv,secrets and ensured all resources were in a running and bound state.

Result:
1) ✅ MySQL deployed successfully with persistent storage.
2) ✅ Secrets securely managed the database credentials.
3) ✅ Accessible externally via NodePort 30007.

Learning: This task solidified my understanding of how persistent volumes ensure data durability in Kubernetes and how secrets enhance security by separating sensitive information from application logic. 👉 Here’s the link to join the challenge: https://lnkd.in/g4wnpsF2 #Day66 #KodeKloudEngineer #Kubernetes #MySQL #DevOps #PersistentVolume #Secrets #CloudComputing #ContinuousLearning #DevOpsChallenge
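The secret wiring from step 4 looks roughly like this fragment of the Deployment's container spec (a sketch; the secret and key names match the ones listed above, but the surrounding Deployment fields are omitted):

```yaml
# Fragment: env section of the mysql container, pulling values from Secrets
# instead of hardcoding them in the manifest.
env:
  - name: MYSQL_ROOT_PASSWORD
    valueFrom:
      secretKeyRef:
        name: mysql-root-pass
        key: password
  - name: MYSQL_USER
    valueFrom:
      secretKeyRef:
        name: mysql-user-pass
        key: username
  - name: MYSQL_PASSWORD
    valueFrom:
      secretKeyRef:
        name: mysql-user-pass
        key: password
  - name: MYSQL_DATABASE
    valueFrom:
      secretKeyRef:
        name: mysql-db-url
        key: database
```

With `secretKeyRef`, rotating a credential means updating the Secret object — the Deployment manifest itself never changes.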
Build and deploy a Spring Boot REST API on Azure Kubernetes Service and wire it to Azure Cosmos DB (NoSQL). The step-by-step tutorial covers containerizing with Docker, pushing to ACR, deploying to AKS, and CRUD ops against your database—plus guidance that Azure Spring Apps is generally recommended for Spring workloads. Dive in: https://msft.it/6043s7OHd #AzureCosmosDB #Java
Is your monolithic database groaning under the weight of your microservices? As applications scale, the database often becomes the biggest bottleneck. You can throw more CPU and RAM at it (vertical scaling), but you eventually hit a wall. So, how do you scale your database horizontally, just like your stateless services? Enter Vitess. Vitess is a CNCF graduated project that makes MySQL scalable on an epic level. Originally born at YouTube to handle massive traffic, it’s now a go-to for running MySQL on Kubernetes. So, what’s the magic? In simple terms, Vitess acts as a smart proxy layer on top of multiple MySQL instances. It takes a large database and breaks it into smaller, faster pieces called shards. To your application, it still looks like you're talking to a single, giant MySQL database. But behind the scenes, Vitess is routing queries to the correct shard, handling connection pooling, and rewriting queries for better performance. This means you get: → Horizontal scaling for your database writes. → Resilience against failures. → Improved performance without changing your application code. It effectively brings the scalability and operational ease of a cloud-native database to the familiar and battle-tested MySQL ecosystem. If you’re running stateful workloads on Kubernetes and feeling the scaling pain, Vitess is a project worth exploring. #DevOps #Kubernetes #CNCF #Vitess #MySQL #Database #CloudNative #OpenSource #K8s
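The "smart proxy" idea can be illustrated with a toy router. This is a sketch only — `shard_for` is not Vitess's algorithm; real Vitess routes through VSchema vindexes and keyspace ID ranges. The point is just that the application issues one query while a layer in front picks the shard:

```python
import hashlib

def shard_for(sharding_key, shards):
    """Toy stand-in for vindex-based routing: hash the row's sharding
    key and map it deterministically onto one of N shards, so the same
    key always lands on the same shard."""
    h = int(hashlib.md5(str(sharding_key).encode()).hexdigest(), 16)
    return shards[h % len(shards)]

# Range-style shard names, as Vitess displays them (illustrative split).
shards = ["-40", "40-80", "80-c0", "c0-"]
s = shard_for(12345, shards)
print(s in shards)  # True: every key maps to exactly one shard
```

Because the mapping is deterministic, writes for the same key never conflict across shards — which is how horizontal write scaling stays consistent without application changes.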
🚀 Kubernetes Series — Part 6 Deploying MySQL Database on Kubernetes using ConfigMap, Secrets & Persistent Volumes After deploying our first application in Part 5, it’s time to make things more powerful — by bringing in a real-world database setup inside Kubernetes ⚙️ In this part, I explored how to deploy MySQL using Kubernetes objects that make applications dynamic, secure, and persistent: ✅ ConfigMap – Store configuration data (like DB name) separate from the application code ✅ Secrets – Securely store sensitive information such as passwords 🔐 ✅ Persistent Volume (PV) & Persistent Volume Claim (PVC) – Ensure data persistence even if Pods restart ✅ Deployment – Manage MySQL Pods with reliability and scalability ✅ Namespace – Organize your Kubernetes resources effectively 🧩 With this setup, I learned how all these components connect together — ensuring data security, consistency, and resilience in the Kubernetes ecosystem. 📘 Attached: A complete hands-on document with all YAML manifests and kubectl commands, step-by-step for beginners. #Kubernetes #DevOps #Cloud #Containers #Automation #LearningJourney #AWS #Ansible #TechCommunity #MySQL
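A minimal PV/PVC pair of the kind described might look like this (a sketch: sizes, names, and the `hostPath` are illustrative — `hostPath` suits a single-node lab, while production clusters would use a real StorageClass):

```yaml
# Fragment: a PersistentVolume and the claim that binds to it, so MySQL's
# data in /var/lib/mysql survives Pod restarts.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
spec:
  capacity:
    storage: 1Gi
  accessModes: ["ReadWriteOnce"]
  hostPath:
    path: /mnt/data/mysql          # lab-only backing store
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
  namespace: database
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
```

The Deployment then mounts the claim (not the volume directly), which keeps the Pod spec independent of where the storage actually lives.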
End-to-End Flow Complete! Receiver Service is Live on Kubernetes 🚀 We've hit a huge milestone in our DevOps lab! I've successfully built and deployed the Receiver Service, officially closing the loop on our initial messaging architecture. Crucially, this entire infrastructure—from Kafka to the application and databases—is running locally on my machine using the open-source power of Kind and Docker. We are validating a production-grade, multi-service pipeline without consuming any cloud resources in this phase. What is the Receiver Service for? This Spring Boot microservice is the heart of our data processing tier. Its job is to: Consume messages from our Kafka topic. Persist those messages to MySQL. Cache the data in Redis. Display the full data flow on a dashboard via Thymeleaf. By running this as a Pod on our kind cluster, we've validated the entire journey: from a user's web form (Sender Service) straight through to a durable database (MySQL). Explore the Code & Guides: Main Project Repository: https://lnkd.in/gb8Z9BQz Receiver Service Codebase: https://lnkd.in/gG-2Yr_U Guide: Creating the Service: https://lnkd.in/gPJdmqRN Guide: Deploying as a Pod: https://lnkd.in/gWZhwZHs #DevOps #Kubernetes #Microservices #Kafka #SpringBoot #MySQL #Redis #CloudNative #TechJourney #LocalDevOps #KindCluster
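The consume → persist → cache flow at the heart of the Receiver Service can be sketched with in-memory stand-ins (`process` is an illustrative function, not the service's code; Kafka, MySQL, and Redis are replaced by a list and a dict — only the shape of the pipeline is the point):

```python
def process(messages, db, cache):
    """Sketch of the receiver loop: consume from Kafka, persist to
    MySQL, then cache in Redis keyed by message id."""
    for msg in messages:            # stand-in for a Kafka consumer poll
        db.append(msg)              # stand-in for a JPA/MySQL insert
        cache[msg["id"]] = msg      # stand-in for a Redis SET
    return len(messages)

db, cache = [], {}
n = process([{"id": 1, "body": "hello"},
             {"id": 2, "body": "world"}], db, cache)
print(n, len(db), cache[2]["body"])  # 2 2 world
```

Persisting before caching matters: if the Redis write fails, the durable MySQL copy is still intact, and the dashboard can fall back to it.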
Impressed. Thanks bro.