Intermittent PDOException

Every week or two I find that I’m getting a PDOException in my logs from the update daemon. I’m currently running from tt-rss.git - commit 32c0c07cc1fb1019a61b83814bdd7c3830b621e6. I’m running on Debian Linux 9.5 and php 7.0.30-0+deb9u1. PostgreSQL version 9.6.10. The full log from the most recent update that gave an error is below. Any ideas what would cause such an error?

update_daemon2.php[25726]: [01:15:09/25726] [MASTER] active jobs: 0, next spawn at 0 sec.
update_daemon2.php[25726]: [01:15:10/25726] [MASTER] spawned client 0 [PID:10431]...
update_daemon2.php[25726]: [01:15:10/25726] [MASTER] spawned client 1 [PID:10432]...
update_daemon2.php[25726]: [01:15:10/10433] Using task id 1
update_daemon2.php[25726]: [01:15:10/10433] Lock: update_daemon-10432.lock
update_daemon2.php[25726]: [01:15:10/10433] Waiting before update (5)
update_daemon2.php[25726]: [01:15:10/10434] Using task id 0
update_daemon2.php[25726]: [01:15:10/10434] Lock: update_daemon-10431.lock
update_daemon2.php[25726]: [01:15:10/10434] Waiting before update (0)
update_daemon2.php[25726]: [01:15:10/10434] Scheduled 7 feeds to update...
update_daemon2.php[25726]: [01:15:38/10434] Base feed: http://feed.theregister.co.uk/atom?a=Simon%20Travaglia
update_daemon2.php[25726]: [01:15:38/10434]  => 2018-08-31 00:44:54.957764, 6 2
update_daemon2.php[25726]: [01:15:53/10433] Base feed: http://feed.theregister.co.uk/atom?a=Simon%20Travaglia
update_daemon2.php[25726]: [01:15:53/10433]  => 2018-08-31 00:44:54.957764, 6 2
update_daemon2.php[25726]: [01:15:54/10434]     16.0714 (sec)
update_daemon2.php[25726]: [01:15:54/10434] Base feed: http://feeds.feedburner.com/Hoeg
update_daemon2.php[25726]: [01:15:54/10434]  => 2018-08-31 00:44:55.479556, 33 2
update_daemon2.php[25726]: [01:15:54/10433]     0.2356 (sec)
update_daemon2.php[25726]: [01:15:54/10433] Base feed: http://feeds.feedburner.com/Hoeg
update_daemon2.php[25726]: [01:15:54/10433]  => 2018-08-31 00:44:55.479556, 33 2
update_daemon2.php[25726]: [01:15:54/10434]     0.0950 (sec)
update_daemon2.php[25726]: [01:15:54/10434] Base feed: http://githubengineering.com/atom.xml
update_daemon2.php[25726]: [01:15:54/10434]  => 2018-08-31 00:44:55.730238, 113 2
update_daemon2.php[25726]: [01:15:54/10433]     0.0956 (sec)
update_daemon2.php[25726]: [01:15:54/10433] Base feed: http://githubengineering.com/atom.xml
update_daemon2.php[25726]: [01:15:54/10433]  => 2018-08-31 00:44:55.730238, 113 2
update_daemon2.php[25726]: [01:15:54/10433]     0.2819 (sec)
update_daemon2.php[25726]: [01:15:54/10433] Base feed: http://overthehedgeblog.wordpress.com/feed/
update_daemon2.php[25726]: [01:15:54/10434]     0.3238 (sec)
update_daemon2.php[25726]: [01:15:54/10433]  => 2018-08-31 00:44:56.032159, 68 2
update_daemon2.php[25726]: [01:15:54/10434] Base feed: http://overthehedgeblog.wordpress.com/feed/
update_daemon2.php[25726]: [01:15:54/10434]  => 2018-08-31 00:44:56.032159, 68 2
update_daemon2.php[25726]: [01:15:54/10434]     0.2821 (sec)
update_daemon2.php[25726]: [01:15:54/10434] Base feed: http://rss.slashdot.org/Slashdot/slashdot
update_daemon2.php[25726]: [01:15:54/10434]  => 2018-08-31 00:44:56.732091, 38 2
update_daemon2.php[25726]: [01:15:54/10433]     0.3308 (sec)
update_daemon2.php[25726]: [01:15:54/10433] Base feed: http://rss.slashdot.org/Slashdot/slashdot
update_daemon2.php[25726]: [01:15:54/10433]  => 2018-08-31 00:44:56.732091, 38 2
update_daemon2.php[25726]: PHP Fatal error:  Uncaught PDOException: There is no active transaction in /home/httpd/reader/classes/rssutils.php:1177
update_daemon2.php[25726]: Stack trace:
update_daemon2.php[25726]: #0 /home/httpd/reader/classes/rssutils.php(1177): PDO->commit()
update_daemon2.php[25726]: #1 /home/httpd/reader/classes/rssutils.php(190): RSSUtils::update_rss_feed(38, true, false)
update_daemon2.php[25726]: #2 /home/httpd/reader/update.php(199): RSSUtils::update_daemon_common(50)
update_daemon2.php[25726]: #3 {main}
update_daemon2.php[25726]:   thrown in /home/httpd/reader/classes/rssutils.php on line 1177
update_daemon2.php[25726]: [01:15:55/10434]     0.6960 (sec)
update_daemon2.php[25726]: [01:15:55/10434] Base feed: https://github.com/blog/engineering.atom
update_daemon2.php[25726]: [01:15:55/10434]  => 2018-08-31 00:44:57.362355, 107 2
update_daemon2.php[25726]: [01:15:56/10432] removing lockfile (10432)...
update_daemon2.php[25726]: [01:15:56/25726] [reap_children] child 10432 reaped.
update_daemon2.php[25726]: [01:15:56/25726] [SIGCHLD] jobs left: 1
update_daemon2.php[25726]: [01:15:56/10434]     1.0755 (sec)
update_daemon2.php[25726]: [01:15:56/10434] Base feed: http://feeds.feedblitz.com/icringely
update_daemon2.php[25726]: [01:15:56/10434]  => 2018-08-31 00:44:57.844137, 20 2
update_daemon2.php[25726]: [01:15:57/10434]     0.3739 (sec)
update_daemon2.php[25726]: [01:15:57/10434] Processed 7 feeds in 18.9624 (sec), 2.7089 (sec/feed avg)
update_daemon2.php[25726]: [01:15:57/10434] Running housekeeping tasks for user 2...
update_daemon2.php[25726]: [01:15:57/10434] Sending digests, batch of max 15 users, headline limit = 1000
update_daemon2.php[25726]: [01:15:57/10434] All done.
update_daemon2.php[25726]: [01:15:57/10434] cache/simplepie: removed 0 files.
update_daemon2.php[25726]: [01:15:57/10434] cache/feeds: removed 0 files.
update_daemon2.php[25726]: [01:15:57/10434] cache/images: removed 0 files.
update_daemon2.php[25726]: [01:15:57/10434] cache/export: removed 0 files.
update_daemon2.php[25726]: [01:15:57/10434] cache/upload: removed 0 files.
update_daemon2.php[25726]: [01:15:57/10434] Removed 0 old lock files.
update_daemon2.php[25726]: [01:15:57/10434] Removing old error log entries...
update_daemon2.php[25726]: [01:15:57/10434] Feedbrowser updated, 47 feeds processed.
update_daemon2.php[25726]: [01:15:57/10434] Purged 0 orphaned posts.
update_daemon2.php[25726]: [01:15:57/10434] Removed 0 (feeds) 0 (cats) orphaned counter cache entries.
update_daemon2.php[25726]: [01:15:58/25726] [reap_children] child 10431 reaped.
update_daemon2.php[25726]: [01:15:58/25726] [SIGCHLD] jobs left: 0
update_daemon2.php[25726]: [01:16:10/25726] [MASTER] active jobs: 0, next spawn at 60 sec.

you need to find the previous exception in the tt-rss error log, and then search before posting, which you didn't, or else you'd know that already

Sorry about that, I missed the logs inside tt-rss. Here it is
PDOStatement::execute(): SQLSTATE[23505]: Unique violation: 7 ERROR: duplicate key value violates unique constraint "ttrss_entries_guid_key" DETAIL: Key (guid)=(SHA1:d0dbe2f01752c60a8cd5ae735d36e5579785731f) already exists.
1. classes/rssutils.php(895): execute(Array)
2. classes/rssutils.php(190): update_rss_feed(38, 1, )
3. update.php(199): update_daemon_common(50)

unless you have an actual database inconsistency (which is unlikely) you’re probably running into a problem similar to this thread:

https://discourse.tt-rss.org/t/cant-update-the-feeds-freebsd/1397

unfortunately so far nobody (including me) figured out an exact cause for this.

you can find a bunch of other threads on similar issues which happened to other people. keywords being foreign key, etc.

e: are there any other errors before that one you posted?

Those are the only errors that I have seen. I don’t do anything to make them go away either. They go away on their own.

I’m using PostgreSQL for my database.

The thread that you reference appears to be for something else. It's not a duplicate key violation. This error is about a duplicate key on a guid column. Perhaps tt-rss has a bug in its guid implementation.

the error is different but the principle is the same:

inside a PDO transaction the first query checks for entity existence, and the next query inserts it if it doesn't exist (i.e. the first query doesn't return anything).

the column which identifies the entity may be protected by a unique constraint or a foreign key; this ultimately doesn't matter. what matters is that sometimes this sequence fails for some unknown reason and causes the transaction to fail.
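
roughly, the sequence looks like this (a simplified sketch, not the actual tt-rss code; the table and column names are made up and $pdo / $guid / $title are assumed to exist already):

$pdo->beginTransaction();

// check whether the entry already exists
$sth = $pdo->prepare("SELECT id FROM entries WHERE guid = ?");
$sth->execute([$guid]);

if (!$sth->fetch()) {
    // another process can insert the same guid between the SELECT
    // above and this INSERT, which then violates the unique constraint
    $sth = $pdo->prepare("INSERT INTO entries (guid, title) VALUES (?, ?)");
    $sth->execute([$guid, $title]);
}

$pdo->commit();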

it’s entirely possible I’m using PDO wrong and it’s a bug in tt-rss but what’s absolutely 100% clear is that you have no idea what you’re talking about.

Actually, I do know a little bit about this. The check that you’re doing isn’t valid. If there are 2 connections to the database doing the same select and insert, one will succeed and one will fail. Here’s an example using the postgres command line.

Open up 2 terminals:

In terminal 1 execute:

psql
=> create table test (guid integer primary key, value varchar);
CREATE TABLE
=> begin;
BEGIN
=> select * from test where guid = 2;
 guid | value
------+-------
(0 rows)

In terminal 2 execute:

psql
=> begin;
BEGIN
=> select * from test where guid = 2;
 guid | value
------+-------
(0 rows)
=> insert into test (guid, value) VALUES (2, 'foo');
INSERT 0 1
=> commit;
COMMIT

In terminal 1 execute:

=> insert into test (guid, value) VALUES(2, 'bar');
ERROR:  duplicate key value violates unique constraint "test_pkey"
DETAIL:  Key (guid)=(2) already exists.

Selecting for a value does not prevent another database connection from inserting the value that was queried for; therefore you will get these errors.

To avoid the duplicate key error you need to either put a write lock on the whole table or always insert unique data.
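
In PDO terms the table-lock option would look something like this (just a sketch, not actual tt-rss code; the table name is made up and $pdo / $guid are assumed to exist):

$pdo->beginTransaction();

// EXCLUSIVE blocks other writers until this transaction commits,
// so the SELECT-then-INSERT below cannot race with another connection
$pdo->exec("LOCK TABLE entries IN EXCLUSIVE MODE");

$sth = $pdo->prepare("SELECT id FROM entries WHERE guid = ?");
$sth->execute([$guid]);

if (!$sth->fetch()) {
    $sth = $pdo->prepare("INSERT INTO entries (guid) VALUES (?)");
    $sth->execute([$guid]);
}

$pdo->commit();

The obvious downside is that it serializes every insert on that table.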

@jpschewe,

Are you using single- or multi-process updating (i.e. update.php or update_daemon2.php)?

I’m using update_daemon2.php

Right, so there is some code in place to prevent multiple processes from updating the same feed simultaneously. Fox knows his code better than I do, but I guess it’s technically possible for it to still happen.

I guess one possible workaround would be to put some random delay in there right before starting the process so parallel processes (hopefully) don’t pick the same feed(s), similar to how a network switch creates random delays in the event of a collision. Not sure how well that would work with PHP though; I don’t think it’s well-suited for that type of thing.
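
The delay itself would be trivial to sketch (purely hypothetical, not actual tt-rss code):

// sleep a random few seconds before picking feeds, so parallel
// workers are less likely to grab the same batch
sleep(random_int(0, 5));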

Trying to start the processes at different times doesn’t seem like it would help much. However one could catch the exception and keep retrying since it appears that it’s expected.

do {
  error = false
  if (!select(guid)) {
    try {
      insert
    } catch (duplicate key) {
      error = true
    }
  }
} while (error)

Or maybe just catch the duplicate key exception and keep going under the assumption that someone else inserted the right data.

parallel updates causing stuff like that is an old known thing. it’s just the most boring explanation i guess, and in this case the error is mostly harmless. there are some workarounds for that in place, as posted above, but if the batch takes too long it might happen.

e: also, preventing inconsistency because of concurrency is partially why transactions are there in the first place, so arguably this is all working as designed :thinking:

i don’t think there’s any point in trying to recover the transaction in this situation; if another process is working on the same feed, why duplicate the effort. also, trying to guess why a query-related error has happened sounds like a terrible idea.

the only really bad thing about it is that the entire batch is aborted if that happens. this is definitely not optimal.

e: i think a good compromise would be proceeding to the next feed on a PDOException, which is also easy to implement and shouldn’t introduce any weird consistency issues. log it and move on.
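
something along these lines, i guess (just a sketch, not the actual update loop; the feed list and update function names are made up):

foreach ($feeds as $feed_id) {
    try {
        update_one_feed($feed_id);
    } catch (PDOException $e) {
        // most likely another task got to this feed first; clean up
        // the aborted transaction, log it, and move on to the next feed
        if ($pdo->inTransaction()) $pdo->rollBack();
        error_log("feed $feed_id: " . $e->getMessage());
    }
}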

alternatively, maybe we should select the feed row FOR UPDATE near the beginning of update_rss_feed(). the downside is other tasks would have to wait (either that or fail with a lock error)
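
the row-lock version would be roughly this (sketch only, not actual tt-rss code; $pdo and $feed_id are assumed to exist):

$pdo->beginTransaction();

// blocks here if another task already holds the lock on this feed row;
// "FOR UPDATE NOWAIT" would fail immediately with a lock error instead
$sth = $pdo->prepare("SELECT id FROM ttrss_feeds WHERE id = ? FOR UPDATE");
$sth->execute([$feed_id]);

// ... process articles for this feed ...

$pdo->commit();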

e: tt-rss currently uses transactions per-article not per-feed though

If the error is expected, then it seems reasonable to just catch it, log a warning instead of an error, and then go to the next feed under the assumption that the feed is already being processed elsewhere. This has 2 advantages:

  1. The batch process doesn’t fail
  2. Users don’t keep asking questions about errors that aren’t really errors

I’ve yet to see any mention of PDFs…

So many people to please… fixed.

well, the batch should no longer get aborted. i’m also going to experiment with locks, so in case of a PDO exception caused by concurrency it would be obvious why it happened

Spoilsport. ’Twas a mere observation, not a complaint.