After having read Filtering duplicates + Logic of dupe elimination, I'm wondering whether duplicate elimination has progressed in any way workable for users not on PostgreSQL - perhaps there is a plugin or something? I'm not thinking of anything super sophisticated, just a simple URL check: if a URL already exists in the DB, add any newly found tags / categories to the stored entry instead of creating the same item again with differing taxonomy.
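Just to illustrate the idea, here is a minimal sketch of that "merge instead of duplicate" check. The schema (a `links` table plus a `link_tags` table) and all names are illustrative assumptions, not the application's actual database layout:

```python
import sqlite3

# Hypothetical minimal schema - table and column names are assumptions
# made up for this sketch, not the application's real schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE links (id INTEGER PRIMARY KEY, url TEXT UNIQUE);
CREATE TABLE link_tags (link_id INTEGER, tag TEXT, UNIQUE(link_id, tag));
""")

def add_link(url, tags):
    """Insert the URL only if it is new; either way, attach the tags to the single stored row."""
    row = conn.execute("SELECT id FROM links WHERE url = ?", (url,)).fetchone()
    if row is None:
        link_id = conn.execute("INSERT INTO links (url) VALUES (?)", (url,)).lastrowid
    else:
        link_id = row[0]
    for tag in tags:
        # INSERT OR IGNORE so re-adding an already-present tag is a no-op
        conn.execute(
            "INSERT OR IGNORE INTO link_tags (link_id, tag) VALUES (?, ?)",
            (link_id, tag),
        )
    return link_id

# Adding the same URL twice yields one row with the union of the tags.
a = add_link("https://example.com/post", ["python"])
b = add_link("https://example.com/post", ["sqlite", "python"])
```

Here `a` and `b` refer to the same row, which ends up carrying both tags - exactly the "add the taxonomy to the existing item" behaviour described above.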
(I'm sadly not proficient enough in programming to really assess how hard it would be to go beyond that. From a pure user's point of view I'm thinking "the more characteristics match, the likelier an entry should be treated as a duplicate"? (URL, GUID, timestamp, title?))
If that's already covered somewhere, I'd be grateful for a pointer;
much appreciated; cheers - LX