Tiny Tiny RSS: Community

Postgresql 10 experience?


I am curious if folks have cut over their tt-rss installations from the 9.x to the 10.x version of PostgreSQL. Any issues/gotchas? Did you just do a dump/restore to do the migration? Thanks in advance for any assistance.


I upgraded to 10 and everything went fine. My installation is on Docker, so I ran an upgrade with this image: tianon/postgres-upgrade
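For anyone curious, that image is typically invoked along these lines. The host paths and the `9.6-to-10` tag are assumptions here; adjust both to your actual versions and layout:

```shell
# Sketch: upgrade a 9.6 data directory to 10 with tianon/postgres-upgrade.
# PGDATAOLD / PGDATANEW are hypothetical host paths -- adjust to your setup.
PGDATAOLD=/srv/postgres/9.6/data
PGDATANEW=/srv/postgres/10/data

docker run --rm \
  -v "$PGDATAOLD":/var/lib/postgresql/9.6/data \
  -v "$PGDATANEW":/var/lib/postgresql/10/data \
  tianon/postgres-upgrade:9.6-to-10
```

The image runs pg_upgrade internally, so the usual caveat applies: keep the old data directory (or a backup) around until you've verified the result.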

Feels slightly faster… as everything new does :wink:


I did a new install on a new server a couple of months ago. Imported the dump from a 9.x database into a 10.x database and everything went just fine.


I used the default Debian scripts for cluster migration without any issues. Of course, a backup is recommended.


I upgraded to 10, via pg_upgrade, shortly after 10 was released. No issues at all. Don’t forget to set max_parallel_workers_per_gather to match your cores.


Thanks to all for sharing your experience.


Hi everybody, I read on the main page “PostgreSQL (9.1 or newer) or MySQL - InnoDB is required.”
Has anyone already tested PostgreSQL 11? Is there a test suite which could be run to see if everything works fine with PG11?


I’ve been running 11 for a few days now with no issues.


I’ve been running 11.1 for over a month and haven’t had a single issue.


I forget when I upgraded to 11.1. I actually skipped 11.0 (except for a few minutes of upgrade time)… not on purpose, but I kinda forgot it was out. :lol:


Just upgraded from PostgreSQL 9.6 to 11.1 via dump/import, and took the opportunity to upgrade PHP 7.0 to 7.3 at the same time. No issues so far…


How is it? I’ve always used pg_upgrade.


I can’t detect any differences TBH, and everything still seems as functional as before. I’m still on an older codebase at the moment: Tiny Tiny RSS v18.8 (df0115f).


Actually, I meant the dump/import. :slight_smile:


@SleeperService: When migrating from one major PostgreSQL version to another, pg_dump/pg_restore is the safest way, although the slowest (which should not be a problem for tt-rss).
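A minimal sketch of that logical migration, assuming a database and role both named `ttrss` (adjust names and connection options to your install; run the dump with the old server still up, the restore against the new one):

```shell
# Sketch: dump/restore migration of a tt-rss database between major versions.
# "ttrss" (database and role) is an assumption -- adjust to your install.
pg_dump -Fc -U ttrss -d ttrss -f ttrss.dump    # custom-format dump of the old db

# ...install and start the new PostgreSQL major version, recreate the role,
# then create an empty database owned by it...
createdb -U postgres -O ttrss ttrss

pg_restore -U ttrss -d ttrss ttrss.dump        # restore into the new server
```

Using the custom format (`-Fc`) lets pg_restore parallelize and selectively restore; a plain SQL dump piped into psql works just as well for a database this size.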

pg_upgrade is much quicker, especially with the --link option, as it converts your old data files in place into the new format. Bugs have happened in the past, so paranoid people avoid it, and you inherit the cruft of your old instance. Obviously, you cannot change servers at the same time with this method.
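For reference, a pg_upgrade invocation looks roughly like this (the Debian-style paths are assumptions; run it as the postgres OS user with both old and new binaries installed and both clusters stopped):

```shell
# Sketch: in-place major-version upgrade with pg_upgrade.
# Paths below are assumptions (Debian-style layout) -- adjust to your system.
pg_upgrade \
  --old-datadir=/var/lib/postgresql/9.6/main \
  --new-datadir=/var/lib/postgresql/10/main \
  --old-bindir=/usr/lib/postgresql/9.6/bin \
  --new-bindir=/usr/lib/postgresql/10/bin \
  --link    # hard-link data files instead of copying: fast, but no going back
```

With --link the old cluster is unusable once the new one has been started, which is exactly why the backup-first advice below matters.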

There are other methods to reduce the downtime but this is probably overkill here.

Of course, a full logical backup (for example with pg_dumpall or pg_back) MUST be done before.
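That pre-upgrade backup can be as simple as (the output filename is just an example):

```shell
# Sketch: full logical backup of the whole cluster before any upgrade attempt.
# Dumps globals (roles, tablespaces) plus every database as plain SQL.
pg_dumpall -U postgres > all_databases_$(date +%F).sql
```

Restoring it is a matter of feeding the file back to psql on a fresh cluster.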

BTW: I’ve read in this thread that you must set ‘max_parallel_workers_per_gather’ to the number of cores; this is false and dangerous. The default is 2 and often enough (or already too much if you have only 1 or 2 cores). ‘max_parallel_workers’ is what should be set to the number of cores or less, especially if the server is not dedicated to PostgreSQL.
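In postgresql.conf terms, that advice comes out as something like the following (values are illustrative for a 4-core server, not a recommendation for every setup):

```ini
# postgresql.conf -- illustrative values for a 4-core server
max_parallel_workers = 4              # cap at the number of cores, or fewer
max_parallel_workers_per_gather = 2   # the default; rarely needs raising
```

max_parallel_workers_per_gather is a per-query limit, so raising it multiplies across concurrent queries, which is why blindly matching it to the core count is dangerous.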


I just take a ZFS snapshot beforehand. So much quicker, and more reliable. :slight_smile:
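For anyone unfamiliar, that approach is roughly (the dataset name `tank/pgdata` is hypothetical; stop PostgreSQL first so the snapshot is consistent):

```shell
# Sketch: instant rollback point before an upgrade.
# "tank/pgdata" is a hypothetical dataset holding the PostgreSQL data directory.
zfs snapshot tank/pgdata@pre-upgrade

# ...run the upgrade; if it goes wrong, roll the data directory back:
# zfs rollback tank/pgdata@pre-upgrade
```

Note this protects the data files, not the binaries, so it pairs well with pg_upgrade rather than replacing a logical backup entirely.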