Four databases. One arena. Expect chaos, comedy, and questionable benchmarking as PostgreSQL takes on the rest in this live, totally biased battle for DB supremacy.
Thursday, September 4 at 17:40 – 18:05
Thursday, September 4 at 18:10 – 19:30
Thursday, September 4 at 17:35 – 17:40
Thursday, September 4 at 17:05 – 17:35
Thursday, September 4 at 16:15 – 17:00
Understanding a project’s ecosystem is the main barrier to entry. I’ve held community roles for different projects (Passenger app server, k6) and technology areas (Ruby, DevOps, Microsoft), but I found the Postgres ecosystem one of the most difficult to grasp.
However, the project and the people grew on me. I believe that to keep up with growing demand as the most popular database according to StackOverflow, Postgres needs to onboard new contributors, and it needs to do so quickly.
There are plenty of people who want to contribute but don’t know where to start. I think I do. Join me for a module I developed for in-house training at EDB, which I’ll have open sourced by the time PGDay Austria takes place, in line with the project’s ethos.
Thursday, September 4 at 16:15 – 17:00
In late 2023, the Java community started a challenge to find the most efficient way to process a file with 1 billion rows of data. Unsurprisingly, many database communities quickly took on the same challenge with varying results. Postgres, in many cases, performed the worst without close attention to settings and efficient resource utilization. But, with a little more effort, could it compete head-to-head?
In this session, we’ll look at the original challenge and how to approach it with vanilla Postgres beyond the basics. Next, we’ll explore how the increasingly popular in-memory analytics database, DuckDB, handles the same challenge. Finally, we’ll explore recent opportunities to integrate the two databases together to provide a powerful analytical engine with Postgres for the best of both worlds.
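To give a flavor of the vanilla-Postgres starting point, here is a hedged sketch of the 1BRC-style workflow (the file path, table name, and column layout are assumptions for illustration, not taken from the challenge rules):

```sql
-- Hypothetical 1BRC-style setup: one station name plus one
-- temperature reading per row.
CREATE UNLOGGED TABLE measurements (
    station text NOT NULL,
    temp    numeric(4,1) NOT NULL
);

-- Bulk-load the semicolon-separated input file (path is an assumption).
COPY measurements FROM '/tmp/measurements.txt' (FORMAT csv, DELIMITER ';');

-- The challenge's expected output: min/mean/max per station, sorted.
SELECT station,
       min(temp),
       round(avg(temp), 1) AS mean,
       max(temp)
FROM measurements
GROUP BY station
ORDER BY station;
```

The `UNLOGGED` keyword skips write-ahead logging during the load, one example of the kind of settings-level attention the abstract alludes to.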
Thursday, September 4 at 16:15 – 17:00
Picking up the topic of previous talks about using PostgreSQL to store a lot of data, very much data, and absurdly much data, this talk covers the problems and challenges you may run into when trying to shove all that data into PostgreSQL.
Many things that are simple with small amounts of data can become headache-inducing.
As data grows, even a simple `SELECT count(*)` can be unpleasantly slow.
Taking a naive backup could easily take more than a day, and restoring such a backup is similarly time-consuming.
We will explore how far PostgreSQL can go, the limits imposed by physics, hardware, and software, and ideas for how to push beyond those limits.
We’ll also look at some performance problems and the many strategies we have to mitigate them.
Hopefully this talk will give you the confidence that PostgreSQL can grow with your data, and show you some of the options you have to improve performance when your data just keeps growing and growing.
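As one commonly cited mitigation for the slow `SELECT count(*)` problem mentioned above, the planner’s statistics can stand in for an exact count. A sketch (the table name is hypothetical, and the estimate is only as fresh as the last VACUUM/ANALYZE):

```sql
-- Exact count: scans the table (or an index), slow on huge tables.
SELECT count(*) FROM events;

-- Planner's estimate from pg_class: effectively instant, but only
-- as accurate as the most recent VACUUM/ANALYZE statistics.
SELECT reltuples::bigint AS estimated_rows
FROM pg_class
WHERE relname = 'events';
```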
Thursday, September 4 at 15:50 – 16:10
Thursday, September 4 at 15:00 – 15:45
PostgreSQL’s autovacuum tuning in the cloud era is a delicate balancing act: it demands careful juggling of application performance, cloud costs, and maintenance (autovacuum) efficiency. Databases on bare metal enjoyed superior throughput and ultra-low latency. In contrast, the cloud era has introduced performance speed bumps, even as it simplifies management for DBAs.
This talk will unravel how to optimize autovacuum settings for peak efficiency across AWS, GCP, and Azure. Discover strategies to overcome throughput and resource limits without busting your cloud budget, keeping your cloud FinOps team smiling all along.
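As a taste of the kind of tuning involved, here is a hedged sketch of per-table autovacuum settings (the table name and values are illustrative assumptions, not recommendations):

```sql
-- Make autovacuum trigger earlier on a large, hypothetical 'orders'
-- table, and give it a bigger I/O budget per cycle.
ALTER TABLE orders SET (
    autovacuum_vacuum_scale_factor = 0.02,  -- vacuum after ~2% dead rows
    autovacuum_vacuum_cost_limit   = 2000   -- more work before sleeping
);
```

On managed cloud services, cost-limit settings like these trade vacuum speed against the provisioned IOPS you are billed for, which is exactly the balancing act the talk describes.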
Thursday, September 4 at 15:00 – 15:45
Join this talk to learn how to secure your Postgres database using streaming replication, from the very BASIC to the very ADVANCED. After all, even if your database is small, your data might still be precious. And as your project grows, your Postgres environment will need to be adapted to scale with it.
This talk will give you a beginner’s guide to streaming replication in Postgres and more specifically you will learn about WAL (write-ahead logging), how to prepare for DR, and how to use the `pg_receivewal` utility in a large setup with cascading replicas, continuous backup, etc. You’ll also learn about the concepts of RTO and RPO and how to scale your data protection architecture as your application grows.
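A minimal sketch of the slot-plus-`pg_receivewal` pattern the talk covers (the slot name, archive directory, host, and user are assumptions):

```sql
-- Create a physical replication slot so the primary retains WAL
-- segments until the archiving client has fetched them.
SELECT pg_create_physical_replication_slot('wal_archiver');

-- Then, on the backup host, stream WAL continuously (shell command):
--   pg_receivewal -D /archive/wal -S wal_archiver \
--                 -h primary.example.com -U replicator
```

With WAL streamed continuously rather than shipped per 16 MB segment, the recovery point objective (RPO) shrinks to near zero, one of the concepts the talk ties together.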
Why attend this talk? Because the amazing power of streaming replication in PostgreSQL is far too under-appreciated.
Drawing on over two decades of practical experience, this talk will guide you through common pitfalls and introduce the full range of Postgres disaster-resilience techniques.