Databases are magic from my perspective. They take in queries, find all the information you asked for, and POOF. In the blink of an eye, there it is, all packaged up nice and neat. Or it blows up in your face, but that never happens, right? Let’s take a look at some steps we can take when things are not returned in the blink of an eye. What can we do to improve our database performance?
Let’s use this query to get a quick overview of what tables in your database are getting scanned too often and could benefit from a carefully placed index.
SELECT relname,
       seq_scan - idx_scan AS too_much_seq,
       CASE WHEN seq_scan - idx_scan > 0 THEN 'Missing Index?' ELSE 'OK' END AS index_hint,
       pg_relation_size(relname::regclass) AS rel_size,
       seq_scan,
       idx_scan
FROM pg_stat_all_tables
WHERE schemaname = 'public'
  AND pg_relation_size(relname::regclass) > 1000
ORDER BY too_much_seq DESC;
That gives us a nice starting point to see whether we need to employ some indexes. Thanks to https://github.com/miguelvps for the great query.
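For example, if the query above flags a table that is constantly being sequentially scanned because of a filter on one column, a single index can eliminate most of those scans. A sketch, using a made-up orders table and customer_id column for illustration:

CREATE INDEX index_orders_on_customer_id ON orders (customer_id);

If the table is large and taking live traffic, CREATE INDEX CONCURRENTLY builds the index without blocking writes (it takes longer, but your app keeps running).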
If you have a slow query, you can use the EXPLAIN command to understand how Postgres plans to execute it. This is helpful for identifying bottlenecks and the costly parts of a query.
EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS, FORMAT JSON) SELECT * FROM people WHERE age > 25;
Take the resulting JSON from that query and paste it into a New Plan on http://tatiyants.com/pev/#/plans
From there, you can explore which parts of the query are most costly and see suggestions for improvement.
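If you prefer to stay in psql, you can also run a plain EXPLAIN ANALYZE before and after adding an index and compare the plans. A sketch, assuming the people table from above has no index on age yet:

EXPLAIN ANALYZE SELECT * FROM people WHERE age > 25;
-- look for a Seq Scan node in the plan
CREATE INDEX index_people_on_age ON people (age);
EXPLAIN ANALYZE SELECT * FROM people WHERE age > 25;
-- with enough rows and a selective enough predicate, the planner may now choose an Index Scan or Bitmap Heap Scan instead

Note that the planner will still prefer a sequential scan if the predicate matches most of the table, so a new index is not guaranteed to change the plan.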
If you want to try a quick database inspection tool, look into https://github.com/ankane/pghero.
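One quick way to kick the tires is the Docker image from the PgHero README; swap in your own connection string:

docker run -ti -e DATABASE_URL=postgres://user:password@hostname:5432/dbname -p 8080:8080 ankane/pghero

Then visit http://localhost:8080 to browse your database's stats.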
Did this help you out? It took me a few days to piece all this information together, and I hope it saves you some time (who knows, maybe future me will be thankful I wrote this down). Let me know your thoughts: firstname.lastname@example.org