Daemon no longer updating feeds

Everything had been working great until about last Wednesday, when the daemon simply stopped updating feeds. I can still update manually with update.php --feeds, but the background daemon no longer does, and it shows no errors in the logs.

Any suggestions? I’ve gone as far as completely wiping the site, DB and all, and trying again with the same image, but still no luck.

Please see the following wiki article:

https://git.tt-rss.org/fox/tt-rss/wiki/UpdatingFeeds

How are you running the daemon? If systemd, have you tried systemctl status tt-rss (replace tt-rss with the name of your service)?

Are you running the multi-threaded daemon or the single-threaded one? What operating system? What versions? Have there been any system or application updates since the issue started?

  1. I’m not seeing any logs.
  2. If the daemon is running but not scheduling anything, start with update.php --force-updates.
  3. Find and post the daemon logs.

Good question. It’s the standard update.php daemon; service ttrss status shows it is running, and the logs continue to be updated. The issue started last Wednesday at 13:35 ET. The daemon config is at github/chriswiegman/docker-ttrss/ttrss.conf

Here’s the log, which might help:

[17:00:42/31] [MASTER] active jobs: 0, next spawn at 60 sec.
[17:01:42/31] [MASTER] active jobs: 0, next spawn at 0 sec.
[17:01:43/31] [MASTER] spawned client 0 [PID:649]…
[17:01:43/31] [MASTER] spawned client 1 [PID:650]…
[17:01:43/654] Using task id 0
[17:01:43/654] Lock: update_daemon-649.lock
[17:01:43/654] Waiting before update (0)
[17:01:43/653] Using task id 1
[17:01:43/653] Lock: update_daemon-650.lock
[17:01:43/653] Waiting before update (5)
[17:01:43/654] Scheduled 0 feeds to update…
[17:01:43/654] Sending digests, batch of max 15 users, headline limit = 1000
[17:01:43/654] All done.
[17:01:43/654] cache/feeds: removed 0 files.
[17:01:43/654] cache/images: removed 0 files.
[17:01:43/654] cache/export: removed 0 files.
[17:01:43/654] cache/upload: removed 0 files.
[17:01:43/654] Removed 0 old lock files.
[17:01:43/654] Removing old error log entries…
[17:01:43/654] Feedbrowser updated, 133 feeds processed.
[17:01:43/654] Purged 0 orphaned posts.
[17:01:43/654] Removed 0 (feeds) 0 (cats) orphaned counter cache entries.
[17:01:44/31] [reap_children] child 649 reaped.
[17:01:44/31] [SIGCHLD] jobs left: 1
[17:01:48/653] Scheduled 0 feeds to update…
[17:01:48/653] Sending digests, batch of max 15 users, headline limit = 1000
[17:01:48/653] All done.
[17:01:49/31] [reap_children] child 650 reaped.
[17:01:49/31] [SIGCHLD] jobs left: 0
[17:02:43/31] [MASTER] active jobs: 0, next spawn at 60 sec.
[17:03:43/31] [MASTER] active jobs: 0, next spawn at 0 sec.
[17:03:44/31] [MASTER] spawned client 0 [PID:782]…
[17:03:44/31] [MASTER] spawned client 1 [PID:783]…
[17:03:44/786] Using task id 1
[17:03:44/786] Lock: update_daemon-783.lock
[17:03:44/786] Waiting before update (5)
[17:03:44/787] Using task id 0
[17:03:44/787] Lock: update_daemon-782.lock
[17:03:44/787] Waiting before update (0)
[17:03:44/787] Scheduled 0 feeds to update…
[17:03:44/787] Sending digests, batch of max 15 users, headline limit = 1000
[17:03:44/787] All done.
[17:03:44/787] cache/feeds: removed 0 files.
[17:03:44/787] cache/images: removed 0 files.
[17:03:44/787] cache/export: removed 0 files.
[17:03:44/787] cache/upload: removed 0 files.
[17:03:44/787] Removed 0 old lock files.
[17:03:44/787] Removing old error log entries…
[17:03:44/787] Feedbrowser updated, 133 feeds processed.
[17:03:44/787] Purged 0 orphaned posts.
[17:03:44/787] Removed 0 (feeds) 0 (cats) orphaned counter cache entries.
[17:03:45/31] [reap_children] child 782 reaped.
[17:03:45/31] [SIGCHLD] jobs left: 1
[17:03:49/786] Scheduled 0 feeds to update…
[17:03:49/786] Sending digests, batch of max 15 users, headline limit = 1000
[17:03:49/786] All done.
[17:03:50/31] [reap_children] child 783 reaped.
[17:03:50/31] [SIGCHLD] jobs left: 0
[17:04:44/31] [MASTER] active jobs: 0, next spawn at 60 sec.
[17:05:44/31] [MASTER] active jobs: 0, next spawn at 0 sec.
[17:05:45/31] [MASTER] spawned client 0 [PID:798]…
[17:05:45/31] [MASTER] spawned client 1 [PID:799]…
[17:05:45/803] Using task id 0
[17:05:45/803] Lock: update_daemon-798.lock
[17:05:45/803] Waiting before update (0)
[17:05:45/802] Using task id 1
[17:05:45/802] Lock: update_daemon-799.lock
[17:05:45/802] Waiting before update (5)
[17:05:45/803] Scheduled 0 feeds to update…
[17:05:45/803] Sending digests, batch of max 15 users, headline limit = 1000
[17:05:45/803] All done.
[17:05:45/803] cache/feeds: removed 0 files.
[17:05:45/803] cache/images: removed 0 files.
[17:05:45/803] cache/export: removed 0 files.
[17:05:45/803] cache/upload: removed 0 files.
[17:05:45/803] Removed 0 old lock files.
[17:05:45/803] Removing old error log entries…
[17:05:45/803] Feedbrowser updated, 133 feeds processed.
[17:05:45/803] Purged 0 orphaned posts.
[17:05:45/803] Removed 0 (feeds) 0 (cats) orphaned counter cache entries.
[17:05:46/31] [reap_children] child 798 reaped.
[17:05:46/31] [SIGCHLD] jobs left: 1
[17:05:50/802] Scheduled 0 feeds to update…
[17:05:50/802] Sending digests, batch of max 15 users, headline limit = 1000
[17:05:50/802] All done.
[17:05:51/31] [reap_children] child 799 reaped.
[17:05:51/31] [SIGCHLD] jobs left: 0
[17:06:45/31] [MASTER] active jobs: 0, next spawn at 60 sec.
[17:07:45/31] [MASTER] active jobs: 0, next spawn at 0 sec.
[17:07:46/31] [MASTER] spawned client 0 [PID:804]…
[17:07:46/31] [MASTER] spawned client 1 [PID:805]…
[17:07:46/808] Using task id 1
[17:07:46/809] Using task id 0
[17:07:46/808] Lock: update_daemon-805.lock
[17:07:46/808] Waiting before update (5)
[17:07:46/809] Lock: update_daemon-804.lock
[17:07:46/809] Waiting before update (0)
[17:07:46/809] Scheduled 0 feeds to update…
[17:07:46/809] Sending digests, batch of max 15 users, headline limit = 1000
[17:07:46/809] All done.
[17:07:46/809] cache/feeds: removed 0 files.
[17:07:46/809] cache/images: removed 0 files.
[17:07:46/809] cache/export: removed 0 files.
[17:07:46/809] cache/upload: removed 0 files.
[17:07:46/809] Removed 0 old lock files.
[17:07:46/809] Removing old error log entries…
[17:07:46/809] Feedbrowser updated, 133 feeds processed.
[17:07:46/809] Purged 0 orphaned posts.
[17:07:46/809] Removed 0 (feeds) 0 (cats) orphaned counter cache entries.
[17:07:47/31] [reap_children] child 804 reaped.
[17:07:47/31] [SIGCHLD] jobs left: 1
[17:07:51/808] Scheduled 0 feeds to update…
[17:07:51/808] Sending digests, batch of max 15 users, headline limit = 1000
[17:07:51/808] All done.
[17:07:52/31] [reap_children] child 805 reaped.
[17:07:52/31] [SIGCHLD] jobs left: 0

It’s possible that the time on your server went backwards; run update.php --force-updates.
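If this is a Docker setup, the container clock can also drift from the host. A quick sanity check, nothing tt-rss-specific, just plain GNU coreutils, that the clock is sane and still moving forward:

```shell
# Print current UTC time to compare against an external reference clock.
date -u

# Sample the epoch clock twice; if the second sample is not later than the
# first, the clock is stalled or went backwards, which would leave feeds
# "scheduled" at a point the daemon thinks is still in the future.
t1=$(date +%s)
sleep 1
t2=$(date +%s)
[ "$t2" -gt "$t1" ] && echo "clock is advancing"
```

If the clock looks fine here, the next place to look is whatever syncs time for the host (ntpd, chrony, or the hypervisor).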

By the way, you are logging in to the tt-rss UI, right? Otherwise, after 30 days (the stock timeout) updates stop; see the FAQ.

--force-updates has reset the clock, if you will: the feeds all show as last updated Wed, Dec 31 1969 - 20:00, but the daemon is still not updating them. I’ll take a look at the time issue.
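Worth noting: that 1969 date is expected rather than a new symptom. --force-updates zeroes each feed’s last-updated timestamp, and Unix epoch 0 (1970-01-01 00:00 UTC) rendered in a negative-UTC-offset timezone falls on the evening of Dec 31, 1969. A quick illustration:

```shell
# Epoch 0 as UTC: Thu Jan  1 00:00:00 UTC 1970.
date -u -d @0

# The same instant displayed in a US Eastern timezone lands on the
# evening of Dec 31, 1969 -- matching the dates the feeds show.
TZ=America/New_York date -d @0
```

So the feeds being "stuck in 1969" just means their last-updated time is zero; the real question is why the daemon then schedules nothing.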

Just saw your FAQ update… that might be what I’m missing; the timing is about right.

Same issue here. I tracked it down to the query that selects the feeds to update from the database: with all of its WHERE clauses, no results are returned anymore, despite the feeds’ update times being set to 1970. Strangely, if you give a feed a manual update interval, at least that feed gets updated.

Another thing I am seeing is that no settings are displayed in my account, yet they seem to exist in the database.

related? https://discourse.tt-rss.org/t/empty-prefs-with-default-profile/2523

Following the linked issue with prefs, that definitely appears to be the issue here as well.

Hi, I recently registered just to say I have the same issue here… the logs say “Scheduled 0 feeds to update…”
My system is:
Docker MariaDB:10
openSUSE 15.0, apache 2.4.33-lp150.2.17.1, php7.2.5-lp150.2.19.1
ttrss v19.2 (088fcf8) © 2005-2019

This happened after a general system update; maybe the prefs issue is related too.

https://discourse.tt-rss.org/t/empty-prefs-with-default-profile/2523/43?u=fox

e: since it looks like there’s one underlying issue here, I suggest continuing the discussion in that other thread.