Inoreader supports importing/exporting entries (especially starred ones) in a JSON format.
Here is an example:
{
  "crawlTimeMsec": "1516226902000",
  "timestampUsec": "1516226902000000",
  "id": "tag:google.com,2005:reader\/item\/000000035d74960b",
  "categories": [
    "user\/1006616538\/state\/com.google\/reading-list",
    "user\/1006616538\/state\/com.google\/read",
    "user\/1006616538\/state\/com.google\/starred"
  ],
  "title": " THIS IS Titile THIS IS Titile THIS IS Titile THIS IS Titile THIS IS Titile THIS IS Titile THIS IS Titile ",
  "published": 1516210059,
  "updated": 1516230449,
  "starred": 1516226902,
  "canonical": [
    { "href": "http:\/\/www.google.com" }
  ],
  "alternate": [
    { "href": "http:\/\/www.google.com", "type": "text\/html" }
  ],
  "summary": {
    "direction": "ltr",
    "content": "THIS IS CONTENT THIS IS CONTENTTHIS IS CONTENTTHIS IS CONTENTTHIS IS CONTENTTHIS IS CONTENTTHIS IS CONTENTTHIS IS CONTENTTHIS IS CONTENTTHIS IS CONTENTTHIS IS CONTENTTHIS IS CONTENTTHIS IS CONTENTTHIS IS CONTENTTHIS IS CONTENTTHIS IS CONTENTTHIS IS CONTENTTHIS IS CONTENTTHIS IS CONTENTTHIS IS CONTENTTHIS IS CONTENTTHIS IS CONTENTTHIS IS CONTENTTHIS IS CONTENTTHIS IS CONTENTTHIS IS CONTENTTHIS IS CONTENTTHIS IS CONTENTTHIS IS CONTENT"
  },
  "author": "",
  "likingUsers": [],
  "comments": [],
  "commentsNum": -1,
  "annotations": [],
  "origin": {
    "streamId": "feed\/http:\/\/www.google.com/feed.xml",
    "title": "SOMEONE's RSS FEED",
    "htmlUrl": "http:\/\/www.google.com"
  }
}
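For reference, here is a minimal sketch of how one such entry could be read with Python's standard library. The field names follow the example above; the helper name and the commented filename are my own (hypothetical), not part of any existing tool:

```python
import json

def parse_entry(entry):
    """Extract the fields a reader would need from one Inoreader entry
    (keys follow the example export above)."""
    return {
        "title": entry["title"].strip(),
        "url": entry["canonical"][0]["href"],
        "published": entry["published"],  # Unix timestamp, in seconds
        # The originating feed is stored in origin.streamId with a "feed/" prefix.
        "feed_url": entry["origin"]["streamId"].removeprefix("feed/"),
        # Starred state is encoded as a category ending in /state/com.google/starred.
        "starred": any(c.endswith("/state/com.google/starred")
                       for c in entry["categories"]),
    }

# Hypothetical usage, assuming the export is a JSON array of such objects:
# with open("inoreader_starred.json") as f:
#     for entry in json.load(f):
#         print(parse_entry(entry))
```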
I think it would be a cool feature for a seamless migration from other readers to TT-RSS.
Currently TT-RSS can import the list of feeds (OPML) and fetch the entries (contents) from that list. Since the remote feed might not preserve the full history, it is hard to transfer the starred entries from the old reader to TT-RSS.
It would be awesome if TT-RSS could import a list of entries (like the Inoreader JSON format), sort them into the corresponding feeds according to the URL, and ideally also mark them as starred automatically.
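The sorting step could in principle look like the sketch below, which buckets exported entries by their originating feed URL so each bucket can be matched against an existing TT-RSS subscription. This is only an illustration of the idea, not any existing TT-RSS code; the function name and entry shape are assumptions based on the example export:

```python
from collections import defaultdict

def group_by_feed(entries):
    """Bucket exported entries by their originating feed URL so each
    bucket can be matched against an existing subscription."""
    buckets = defaultdict(list)
    for entry in entries:
        # origin.streamId carries the feed URL with a "feed/" prefix.
        feed_url = entry["origin"]["streamId"].removeprefix("feed/")
        buckets[feed_url].append(entry)
    return dict(buckets)
```

Entries whose feed URL matches no subscription could then be reported to the user instead of being silently dropped.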
Then I tried to upload the XML file, but ran into several problems. First, it wouldn't upload because of the max_header_size limits on the PHP and Nginx side. I increased both to 120MB, and then it worked for files under 2MB.
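For large uploads, the directives usually involved are nginx's client_max_body_size and PHP's upload_max_filesize/post_max_size (the values below are illustrative, not a recommendation for this setup):

```
# nginx (http, server, or location block)
client_max_body_size 128M;

# php.ini — both limits must cover the upload size
upload_max_filesize = 128M
post_max_size = 128M
```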
However, when I tried uploading a 10MB XML file, it showed
Great!
Now I have imported everything from my Inoreader.
But I also found a problem during the process: the imported articles are not searchable, and the Feed Debugger trick doesn't seem to work for them. So I had to manually update the index with SQL:
update ttrss_entries set tsvector_combined = to_tsvector(content);
Thanks for reporting this; tsvector_index has likely been added after this plugin was initially written. I'll make a note to update it so that the index is generated properly.