It operates at a lower level than git. Git tracks files; ZFS tracks blocks of data. When you snapshot a dataset, the snapshot takes up more or less 0 bytes. It only grows as the live filesystem diverges from it: delete or overwrite a 1 GB file that existed at snapshot time, and the snapshot now accounts for that 1 GB, because ZFS has to keep those blocks around to restore the filesystem to that point. That's also why deleting 30 GB doesn't give you the space back while a snapshot still references it. But the idea is that you'd keep rolling snapshots.

One caveat: if you roll back to a snapshot, you can't then restore forward to a later one. A rollback discards everything newer than the target, including newer snapshots (zfs rollback actually refuses to cross them unless you pass -r, which destroys them), so once you roll back, you are rolled back. You can, though, pull individual files out of previous snapshots through the hidden .zfs directory in each dataset's root directory, without rolling anything back.
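For reference, the commands are simple (tank/home and the snapshot name here are just made-up examples): zfs snapshot tank/home@before-upgrade creates a snapshot, zfs rollback tank/home@before-upgrade rolls the dataset back to it, and you can copy individual old files straight out of /tank/home/.zfs/snapshot/before-upgrade/ with plain cp.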
A large company might keep snapshots at 15-minute intervals for a few days and daily snapshots for a month, so old ones are constantly being destroyed as new ones are created.
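If you wanted to script a policy like that yourself, here's a minimal sketch of the pruning half in Python, just shelling out to the real zfs commands. The dataset name and retention windows are assumptions, not a recommendation, and you'd want to comment out the destroy line for a dry run first:

    #!/usr/bin/env python3
    # Prune snapshots: keep everything from the last 2 days,
    # one snapshot per day for 30 days, destroy the rest.
    import subprocess
    import time

    DATASET = "tank/home"        # example dataset
    KEEP_ALL = 2 * 86400         # seconds: keep every snapshot this recent
    KEEP_DAILY = 30 * 86400      # seconds: keep one per day up to this age

    # -H: no header, -p: machine-readable (creation comes back as epoch seconds)
    listing = subprocess.run(
        ["zfs", "list", "-H", "-p", "-t", "snapshot",
         "-o", "name,creation", "-s", "creation", "-r", DATASET],
        capture_output=True, text=True, check=True,
    ).stdout

    now = time.time()
    kept_days = set()
    for line in listing.splitlines():
        name, creation = line.split("\t")
        age = now - int(creation)
        day = time.strftime("%Y-%m-%d", time.localtime(int(creation)))
        if age <= KEEP_ALL:
            continue                      # recent enough: keep everything
        if age <= KEEP_DAILY and day not in kept_days:
            kept_days.add(day)            # first snapshot of that day: keep it
            continue
        print("destroying", name)
        subprocess.run(["zfs", "destroy", name], check=True)

In real life you'd probably just use an existing tool like sanoid or zfs-auto-snapshot rather than rolling your own.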
Snapshots are also cheap, performance-wise. Because ZFS is copy-on-write, creating a snapshot just pins the current block tree rather than copying any data, so it takes virtually no time or CPU effort. Again, it all happens at a very low level, so it's quick and efficient.
ZFS does need a fair bit of free space to operate well. I've experienced noticeable performance drops once my array got to about 70% capacity. Thankfully, growing the pool is pretty easy, either by adding new drives or by slowly replacing the existing ones with bigger ones.
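For example (pool and device names made up): zpool add tank mirror sdx sdy bolts on another mirror vdev, or you can zpool replace tank sda sdx one disk at a time; once every disk in a vdev has been swapped for a bigger one, the pool grows into the new space, automatically if you set zpool set autoexpand=on tank beforehand (otherwise zpool online -e does it).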
ZFS will do this, but that's not really what it's for. For that kind of experimenting you want something like VirtualBox, where you can snapshot and restore the whole OS, create different branches of experiments, and so on.
e: Use the right tool for the right job.
No, sorry. I'm not here to promote other forums, and in all honesty I don't even frequent that many. You might have to learn like the rest of us… by breaking things along the way.