[SOLVED] SQL disk IO issues after tt-rss update on a Raspberry Pi

I recently updated my Raspberry Pi (Raspbian) server from wheezy to stretch. This included an update from tt-rss 1.13+dfsg-1 to 16.8+git20160826+dfsg-3. However, I’m now having massive disk IO problems relating to mysql (it identifies itself as MariaDB when I log in to it). I tested with atop and iotop, and mysql uses 80–100% of the disk IO, which obviously slows everything else down on the server. I have tt-rss set to update every 30 minutes, but this disk usage occurs at all times. I tested uninstalling tt-rss, and disk IO drops to <10%.
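For reference, this is roughly how I was watching per-process disk IO (flag details are from the respective man pages, so double-check them on your system):

# only processes actually doing IO, with accumulated totals
sudo iotop -o -P -a

# refresh every 5 seconds; press 'd' for the disk-oriented view
sudo atop 5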

I know that the tt-rss FAQ recommends PostgreSQL for performance, but I originally just used MySQL/MariaDB because it was already installed for ownCloud/Nextcloud. In any case, when I was on wheezy, I had no problems with this choice. I only have one user, so presumably tt-rss shouldn’t tax the database too much.

I searched the forums, but the only reference to SQL performance I could find referred to disabling feedbrowser. However, I couldn’t find any reference to this in /etc/tt-rss/config.php, nor by searching the web. Is there anything else I could try?

Cheers.

i was not aware this is a “mysql is slow on my toy computer” support central

also try this Read before posting / reporting bugs

and maybe even this https://git.tt-rss.org/git/tt-rss/wiki/FAQ#im-having-performance-problems-why-is-tt-rss-so-slow

I don’t consider you to be any kind of support central! I recognise that tt-rss is open source, and I have no feelings of entitlement to any kind of support! I apologise if I have been offensive.

I also was unsure about the processing power of the Pi when I originally purchased it, but I now have one simultaneously running Nextcloud, ejabberd, OwnTracks/Mosquitto, Subsonic and rsync for backups of three computers, and performance is excellent. tt-rss was also perfectly fine, until the recent update to Stretch.

Apologies for not reading the sticky. I think when I signed up/in, I wasn’t taken to the main forum. Anyway, point taken, here are the missing versions:

mysql-server 5.5.9999+default
mysql-common 5.8+1.0.2
php7.0 7.0.19-1

I checked in /var/log/tt-rss.log, and there was nothing obviously wrong. /var/log/mysql/error.log is also empty.

I haven’t updated to Stretch because it was only just released (at least wait till the first point update!). In Stretch the Debian maintainers have dropped MySQL and moved to MariaDB; that in itself shouldn’t be a problem, but jumping two major OS releases might be. I don’t know; it might make a difference on an under-powered machine like a Raspberry Pi.

Anyway… MySQL/MariaDB versus PostgreSQL is kind of like Beta/VHS… the crappy one won the popularity contest. Speaking from experience (I use MariaDB because other stuff I run needs it and I didn’t want two database daemons on one system), it will work if you do one thing: throw memory at it. So log in to the database, run SHOW ENGINE INNODB STATUS\G, then Google that and follow the results. You’ll likely need to increase the innodb_buffer_pool_size value. But this won’t magically fix a slow machine.

If your bottleneck is during feed updates then it’s probably waiting to write. You can experiment with changing update times, for example running the updater more frequently but setting feeds to update less frequently. This should space the load out over a longer period and give each run of the updater less work to do.
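For reference, the two checks from the MariaDB prompt are roughly:

-- current buffer pool size, in bytes
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

-- full InnoDB report: buffer pool usage, free buffers, IO waits, semaphores, etc.
SHOW ENGINE INNODB STATUS\G

As far as I know the buffer pool can’t be resized at runtime on the MariaDB version Stretch ships, so the new value goes into the MariaDB config under a [mysqld] section and the daemon gets restarted.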

But here’s the thing: you’re running a Raspberry Pi, not exactly a machine built for performance. Also, it looks like you’re not running from the Git repository but from a package. And based on the version numbers I’m reading, you jumped from 1.13 to 16.8? That’s about two years’ worth of updates to the TT-RSS code.

At the end of the day you completely overhauled your entire (under-powered) system in one shot and now want to know why disk IO sucks.

That’s a lot to run on one of those machines and TT-RSS is very database heavy. There’s simply no way around it. In an ideal world you’d run it off PostgreSQL on SSDs.

I’m also moving this to unsupported because your setup is not.

^
to be fair i don’t think there were massive performance changes in tt-rss for the last, uh, few years. i think it’s more likely that either he’s running out of IOPS on his rpi or mysql is misconfigured somehow / is a general bottleneck.

i would still recommend you try pulling the latest git code, because you did switch between two ancient packages, neither of which is technically supported (here) anymore.

e: i would also suggest postgres but adding yet another database server to this poor thing might make everything worse.

Just a couple more thoughts before I fall asleep:

If you can, turn off caching of images/media for each feed. Version 16.8 is before fox added support for caching audio/video to disk, but if you have feeds with more than a few images, caching will further stress the disk.

How much memory do you actually have available?

Just to add a data point, I found it cheaper (in terms of memory) on a limited-memory VPS to run postgres and mysql in parallel than to run tt-rss with mysql. At a casual glance, the bottleneck/bloat for mysql was background feed updates: more feeds -> more mysql activity. You might be hitting swap as well if you’re getting a lot of disk IO.

innodb is literal garbage so it does make some sense

@JustAMacUser Thank you for that heap of useful information! I’ll investigate those leads and report back. Regarding being underpowered, as I mention above, it was perfectly capable of running all those programs on Wheezy; it’s only Stretch that has suddenly caused issues, presumably from the tt-rss upgrade.

Yes, given that everything was fine in the previous Debian/Raspbian version, I thought that might be it.

Good idea. I can do this to at least exclude any issues with tt-rss directly, and see if it’s mysql.

Does this toggle individually for each feed? Aren’t they off by default?

I have 1 GB, but ~60% is unused.

Hmmm… I wondered about that. I might test this too. Thank you.

I don’t have any swap enabled, and I’m not running out of memory, so I don’t think this is it.

Thank you everyone for your replies. I have a bit to go on, so I’ll check it all out and report back. Cheers.

Yes and yes. But I don’t know whether you have them enabled so I suggested turning the feature off for troubleshooting purposes.

Yeah, then definitely up the buffer size. But do the reading I suggested about engine status, etc. That report will tell you not only MySQL’s memory usage but also IO waits.

You should have it enabled as most modern operating systems expect it to be there. I don’t think the world will explode if it’s not there, but you can set it small and tune swappiness so it’s only used if absolutely necessary.
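Something along these lines would do it (the 512M size is just an example; pick whatever suits you):

# create a small swap file and enable it
sudo dd if=/dev/zero of=/swapfile bs=1M count=512
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# add '/swapfile none swap sw 0 0' to /etc/fstab to keep it across reboots

# make the kernel very reluctant to actually use it
sudo sysctl vm.swappiness=10
echo 'vm.swappiness=10' | sudo tee -a /etc/sysctl.conf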

From my general reading, it’s not really recommended on a Raspberry Pi, because the SD card’s read/write speed is so ridiculously low.

Thank you all again. I’ve spent a few hours working on this. Here are the results of my first test, which was to increase innodb_buffer_pool_size. (Apparently I can only put in one link since I’m new to the forum.)

Check current innodb_buffer_pool_size:

MariaDB [(none)]> SELECT (@@innodb_buffer_pool_size / POWER(1024,3)) AS "innodb_buffer_pool_size in GB";
+-------------------------------+
| innodb_buffer_pool_size in GB |
+-------------------------------+
|                         0.125 |
+-------------------------------+
1 row in set (0.00 sec)

Get size of databases: stackoverflow<dot>com/a/1733523

MariaDB [(none)]> SELECT table_schema                                        "DB Name", 
    ->    Round(Sum(data_length + index_length) / 1024 / 1024, 1) "DB Size in MB" 
    -> FROM   information_schema.tables 
    -> GROUP  BY table_schema; 
+--------------------+---------------+
| DB Name            | DB Size in MB |
+--------------------+---------------+
| information_schema |           0.2 |
| mysql              |           0.9 |
| owncloud_database  |          99.5 |
| performance_schema |           0.0 |
| ttrss              |          51.8 |
+--------------------+---------------+
5 rows in set (0.22 sec)

So it looks like the buffer size is 0.125 GB, but the total data is ~150 MB. I can check what percent is in use, too. dba<dot>stackexchange<dot>com/a/56497

MariaDB [(none)]> SELECT CONCAT(FORMAT(DataPages*100.0/TotalPages,2),' %') BufferPoolDataPercentage FROM
    -> (SELECT variable_value DataPages FROM information_schema.global_status
    -> WHERE variable_name = 'Innodb_buffer_pool_pages_data') A,
    -> (SELECT variable_value TotalPages FROM information_schema.global_status
    -> WHERE variable_name = 'Innodb_buffer_pool_pages_total') B;
+--------------------------+
| BufferPoolDataPercentage |
+--------------------------+
| 65.88 %                  |
+--------------------------+
1 row in set (0.01 sec)

It seems like it should be fine? Anyway, just in case, I tried increasing the buffer size by editing /etc/mysql/mariadb.cnf and appending the following lines.

[mysqld]
innodb_buffer_pool_size=200M

Then restart with sudo service mysql restart.

I repeated the mysql commands to confirm the change had been applied, and usage is now a mere 13.70 %. However, there is no difference in performance, as confirmed with sudo atop.

I’ll try testing the other suggestions soon.

Restarting the MySQL service and then immediately checking the results is not reflective of actual usage, because the database won’t have had any opportunity to use its cache. It needs to run for a bit (inserting, updating, selecting, etc.), and then you check its status.

Nevertheless, the problem is probably this:

I know you said it was working fine on Wheezy and now on Stretch it’s not, so maybe search for general IO issues on Stretch. Maybe the OS as a whole needs some performance tuning for the Raspberry Pi.

Side note: Those are some crazy queries. If you just run the SHOW ENGINE INNODB STATUS\G command you’ll get stuff like:

Buffer pool size        8191
Buffer pool size, bytes 134201344
Free buffers            7299

Which shows free buffers (7299 in the above example). The same report details IO waits*. This page is old but explains the results.


*You’re having performance issues, so it’s important to know if MySQL is waiting to use the disk or if something else is. (Probably MySQL.)
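If digging through the whole report is a pain, you can also pull a few of the IO-related counters directly; this uses the same information_schema.global_status table as your earlier queries, and these are standard InnoDB status variable names, so something like:

-- pending InnoDB IO; values that are consistently non-zero suggest the disk can't keep up
SELECT variable_name, variable_value
  FROM information_schema.global_status
 WHERE variable_name IN ('Innodb_data_pending_reads',
                         'Innodb_data_pending_writes',
                         'Innodb_data_pending_fsyncs',
                         'Innodb_buffer_pool_wait_free',
                         'Innodb_os_log_pending_fsyncs');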

Your general reading is mistaken. You should have at least 2 GB of swap even if it is never used. This has nothing to do with actual swapping but entirely to do with the way the system allocates memory.

The 2 GB is added to the system’s pool of available memory and allows processes to request more than they actually need (which they commonly do) and gives the system the freedom to permit the process to run. Without the swap space, you are wasting actual physical memory that could be used by processes that could fit into it, but that are denied it by the operating system because not enough virtual memory is available (and yes, this could be impacting your database).

Your system need never actually swap anything to disk. It simply needs the swap space there.

That’s not how Linux VM works. Overcommit is a thing, but it will happen even beyond the total of RAM+swap if the system is configured to allow it. You can perfectly well have no swap and not impact how much a process can overcommit.

See Do we really need swap on modern systems? (yes, I know the system for this thread is Debian/Raspbian, but the general nature of Linux VM is the same).

Ah yes, good point. I’ll wait a bit and then see how it goes.

Yes, I suspect you are correct. I did search previously for issues with Stretch and mysql, but to no avail. I’ve got an excellent set of leads from this thread, and I’m still working my way through them as time permits, so I’ll see if perhaps PostgreSQL can help, just as a reference point. I’ll also test the git version of tt-rss and the other suggestions.

I tested mine and got the following:

Buffer pool size        12799
Buffer pool size, bytes 209698816
Free buffers            8046

But I think that’s just another way of looking at the output from my “percentage of buffer in use” query above, which now gives 36.72 % in use. Here are a few other possibly pertinent sections from SHOW ENGINE INNODB STATUS\G, but I wasn’t sure what reasonable values look like.

SEMAPHORES
----------
OS WAIT ARRAY INFO: reservation count 96934
OS WAIT ARRAY INFO: signal count 92154
Mutex spin waits 105554, rounds 2557628, OS waits 83528
RW-shared spins 9152, rounds 270858, OS waits 8672
RW-excl spins 1806, rounds 138869, OS waits 4448
Spin rounds per wait: 24.23 mutex, 29.60 RW-shared, 76.89 RW-excl

FILE I/O
--------
I/O thread 0 state: waiting for completed aio requests (insert buffer thread)
I/O thread 1 state: waiting for completed aio requests (log thread)
I/O thread 2 state: waiting for completed aio requests (read thread)
I/O thread 3 state: waiting for completed aio requests (read thread)
I/O thread 4 state: waiting for completed aio requests (read thread)
I/O thread 5 state: waiting for completed aio requests (read thread)
I/O thread 6 state: waiting for completed aio requests (write thread)
I/O thread 7 state: waiting for completed aio requests (write thread)
I/O thread 8 state: waiting for completed aio requests (write thread)
I/O thread 9 state: waiting for completed aio requests (write thread)
Pending normal aio reads: 0 [0, 0, 0, 0] , aio writes: 0 [0, 0, 0, 0] ,
 ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0
Pending flushes (fsync) log: 0; buffer pool: 0
4498 OS file reads, 324801 OS file writes, 131871 OS fsyncs
0.00 reads/s, 0 avg bytes/read, 0.64 writes/s, 0.25 fsyncs/s

Yep, it’s definitely mysql, from checking atop and iotop.

Yes, this was my understanding. Although from everything above, it seems that mysql probably has enough memory anyway, so this issue is probably moot.

It is, but a whole lot easier to type. :slight_smile:

How long was MySQL running before you pulled that particular report?

(Also possible you’re having read/write errors on the SD card. I’ve had drives fail and because of the bad sectors write operations locked the whole system up.)

About 17 hours. That was with the default innodb_buffer_pool_size. I’ve just restarted with the new values.

Hmm… I’ve certainly had problems with that before on my Raspberry Pi systems. I think this SD card is only a few months old; I replaced it because the previous one corrupted. I’m not sure if that means it’s more or less likely that this one is also corrupt! I’ll put that on the list to check! Thanks again.

Your waits are higher in 17 hours than mine are after running MySQL for several days. Memory doesn’t seem to be an issue.

I’m about at the limit of my knowledge at this point. While the issue began after the upgrade to Stretch, it’s difficult to say that was actually the cause, and I have trouble believing it (because Debian isn’t that fragile in my experience). But I’m by no means an expert in all things.

At the end of the day, I think you’d simply be better off re-purposing an old computer to run your web applications.

Check for disk errors. Try PostgreSQL (Nextcloud also works with that; not sure about your other applications). Or go back to Wheezy.

But I don’t have an old computer! :disappointed: Also, the reason why I’m on a “toy computer” is because I just can’t afford it. Eternal studentship will do that to you. :disappointed:

So this was the next thing that I tried. I moved tt-rss to PostgreSQL (while keeping Nextcloud on MariaDB). It turned out that PostgreSQL was running reasonably well, but MariaDB was still running terribly! Presumably Nextcloud or something else (or something internal to MariaDB?) was causing the problems.

I’m now thinking that my initial tests were flawed. Originally, I uninstalled tt-rss and tested the disk IO. It was definitely fine… but perhaps I just didn’t watch it for long enough. Maybe Nextcloud (or whatever) just wasn’t hitting the disk hard at that moment.

I’ve since done a few tests of IO throughput on the various media. My external USB hard drive was ~7 times quicker than the SD card! Hence, I moved the MariaDB data directory to the external USB hard drive… and everything now works perfectly!
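For anyone else wanting to try this, the comparison and the move were roughly as follows (the /mnt/usb path is just a placeholder for wherever your drive is mounted, and obviously back up /var/lib/mysql first):

# rough sequential-write comparison: SD card vs USB drive
sudo dd if=/dev/zero of=/root/sd_test bs=1M count=256 conv=fdatasync
sudo dd if=/dev/zero of=/mnt/usb/usb_test bs=1M count=256 conv=fdatasync

# move the MariaDB data directory onto the USB drive (/mnt/usb is a placeholder)
sudo service mysql stop
sudo rsync -a /var/lib/mysql/ /mnt/usb/mysql/
# then point datadir at the new location in /etc/mysql/mariadb.cnf:
#   [mysqld]
#   datadir = /mnt/usb/mysql
sudo service mysql start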

Since Nextcloud hadn’t been updated recently, I guess most evidence points to some MariaDB update having caused these issues. tt-rss alone cannot have caused them, although it’s plausible that both Nextcloud and tt-rss contributed… but I have no real evidence for this. Thank you everyone for all your help, and apologies if this ended up being off-topic, but hopefully it might help someone else down the track with a Raspberry Pi.

(Also, FWIW in terms of CPU power, my Pi is running at a load of ~0.7 over four cores, and memory is at ~40% free. It’s just the SD card speed that’s horrific!)

Yes, quite a few, but as I mention above, they were running perfectly fine on Wheezy. I use Nagios for monitoring, and I rarely see much load, except when things like Subsonic misbehave. TBH I was just considering moving everything off the SD card onto USB, given how well mysql works now… but it’s so rare that I have problems that I’m not sure I could be bothered.

Yes, I think that’s likely correct.