Possible Bug With Search Requests to backend.php?


Describe the problem you’re having:

I’ve been trying to solve slow search times on many of my feeds for a long time now. First I switched from a Synology NAS to a dedicated Linux box, then from MySQL to PostgreSQL, then discovered I needed to build the search index for PostgreSQL, then tried Sphinx and confirmed it contained data and was being hit. Nothing worked: some searches still took over 30 seconds and left a spinning loading icon next to the feed for what felt like forever. Today I finally did some debugging, and I think I now know why.

Turns out the URL that gets hit when you make a search is something like this:


The handling of ForceUpdate in feeds.php could take 30 seconds or more for some of my feeds, making it seem like search was taking forever, when really it was quite fast even without Sphinx. As a user, I certainly didn’t expect search to have this side effect of re-fetching the feed.

So the question is: is this behavior intentional and in place for a reason, or should the code be changed to not put m=ForceUpdate in there on search requests? I’ve made such a change to feedlist.js locally and can confirm that with it in place, search is as fast as I’d expect it to be.
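For illustration, the local change amounts to something like the sketch below. This is not tt-rss’s actual code; the function and option names (`buildViewQuery`, `opts`) are hypothetical, but the query-string parameters match the ones visible in the request URLs above:

```javascript
// Hypothetical sketch: only request a synchronous feed refresh
// (m=ForceUpdate) when the user is NOT running a search, so that a
// search query never triggers a slow re-fetch of the feed.
function buildViewQuery(opts) {
  const params = new URLSearchParams({
    op: "feeds",
    method: "view",
    feed: String(opts.feed),
    view_mode: opts.viewMode || "adaptive",
  });

  if (opts.query) {
    // Search request: pass the query and skip the forced update,
    // so the backend answers from the index without re-fetching.
    params.set("query", opts.query);
  } else if (opts.forceUpdate) {
    // Normal view of an already-active feed: keep the refresh.
    params.set("m", "ForceUpdate");
  }

  return params.toString();
}
```

With a guard like this, both a search and the “cancel search” round trip stay fast, because neither carries `m=ForceUpdate`.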


FWIW, hitting the “cancel search” link also causes m=ForceUpdate to end up on the query string: “op=feeds&method=view&feed=108&view_mode=unread&order_by=feed_dates&m=ForceUpdate&cat=false&csrf_token=blahblahblah”


tbh, synchronous updates on ForceUpdate should probably be removed altogether

nobody with a sane php configuration (i.e. with open_basedir) is going to be seeing it anyway

the only criterion currently is “a feed that was already active is selected again”

e: https://git.tt-rss.org/git/tt-rss/commit/8dedacf497c31560bb8723cfe02b84921f44d576 done


Thanks fox, and happy new year.