pgBadger: how to monitor your database?

In June 2019, I attended the PGDay conference in Lyon. There were many interesting sessions. One of them was about pgBadger and was given by Gilles Darold, the author of the tool.

This post is a short synthesis of the session.

pgBadger is simple.

pgBadger is a portable tool with no prerequisites. It was created in 2010 and is used to audit PostgreSQL servers, based on the logs of the PostgreSQL instance. Example of a run command:

    pgbadger /var/log/postgresql/postgresql-11.log

pgBadger is powerful.

pgBadger auto-detects the log format. It can process many log files in parallel. After processing, it generates a complete, professional report on the state of the PostgreSQL instance. You can see a sample of the tool's UI at http://pgbadger.darold.net/samplev7.html
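
For example, with the -j option (number of parallel jobs) pgBadger can spread the parsing over several CPU cores; the job count and file names below are only illustrative:

    # illustrative: parse two log files using 4 parallel jobs
    pgbadger -j 4 /var/log/postgresql/postgresql-11.log.1 /var/log/postgresql/postgresql-11.log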

There are more than 80 command-line options.

    See pgbadger --help

Log filtering

You can filter the logs:

  • to keep only the errors, use the --watch-mode option;
  • to apply time filters, use --begin, --end, --include-time and --exclude-time;
  • to filter on identifiers such as client IP, database, user or application, use --dbclient, --dbname, --dbuser and --appname;
  • to filter on the log lines and queries themselves, use --exclude-line, --include-query, --exclude-query, --include-file and --exclude-file.
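
As an illustration, several of these filters can be combined in a single run; the database name, time window and query pattern below are made up for the example:

    # illustrative: keep one database, one time window, and skip some queries
    pgbadger --dbname myappdb \
             --begin "2019-06-01 08:00:00" --end "2019-06-01 18:00:00" \
             --exclude-query "^(COPY|VACUUM)" \
             /var/log/postgresql/postgresql-11.log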

Report management

pgBadger lets you control the sections of the report. You can (a command-line sketch follows the list):

  • remove a section with the corresponding --disable-* option;
  • remove all the graphs;
  • disable query formatting;
  • modify the report title;
  • anonymize the queries;
  • increase the number of queries reported.
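
Here is a sketch of such a run; the title, the number of queries and the disabled section are arbitrary examples, and --title, --top, --nograph and --anonymize are the option names I noted for these points (check pgbadger --help for the exact spelling):

    # illustrative: custom title, top 50 queries, no graphs,
    # anonymized query literals, checkpoint section removed
    pgbadger --title "Production audit" --top 50 \
             --nograph --anonymize --disable-checkpoint \
             /var/log/postgresql/postgresql-11.log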

If you want to customize the report, you can edit the CSS stylesheet resources/pgbadger.css. Do not forget to integrate your modifications with the Perl script tools/updt_embedded.

You can also generate your own reports: you can use the binary output with a Perl script, or the JSON output with whatever language you like.
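
For instance, assuming jq is available, you can dump the report as JSON and explore it from the shell (the output file name is arbitrary):

    # the .json extension selects the JSON output format
    pgbadger -o report.json /var/log/postgresql/postgresql-11.log
    # list the top-level keys of the generated report
    jq 'keys' report.json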

Be careful with intensive logging. It is recommended to set log_min_duration_statement to a value greater than 0 ms.

Since PostgreSQL 12, you can sample the logged transactions with the log_transaction_sample_rate parameter.
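
As a sketch, in postgresql.conf this could look like the following; the 250 ms threshold and the 0.1 sampling rate are arbitrary examples, not recommendations:

    # log only statements slower than 250 ms (0 would log everything)
    log_min_duration_statement = 250
    # PostgreSQL 12+: log all statements of roughly 10% of transactions
    log_transaction_sample_rate = 0.1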

Incremental mode

With the -I option, you can enable an incremental mode so that only the log entries that have not been processed yet are taken into account.
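
A typical incremental run could look like this (the output directory is a placeholder; as far as I know, -O sets the directory where pgBadger stores the reports and remembers what it has already parsed):

    # illustrative: incremental mode with a dedicated output directory
    pgbadger -I -O /var/www/pgbadger /var/log/postgresql/postgresql-11.log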

Remote log processing

You can access the logs remotely via SSH or HTTP(S).
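
If I read the documentation correctly, the log file can be given as a URI; the host names and paths below are placeholders:

    # illustrative: parse a log file on a remote host over SSH
    pgbadger ssh://postgres@dbserver/var/log/postgresql/postgresql-11.log
    # or fetch it over HTTPS
    pgbadger https://dbserver.example.com/logs/postgresql-11.log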

Conclusion

This session was very interesting and the tool seems useful; I still need to test it further. More details about pgBadger at: http://pgbadger.darold.net
