>decided to clean/tune up twitter and pixiv name detection and found a few missing ugoiras along the way (including the pixiv.pictures one)
>Thank you for your hard work ♥
E-he-he, you're welcome, big-boob-dog-lady ( ̷ ̷ ̷´ω` ̷ ̷ ̷)
And danbooru has a custom ugoira player now, that's nice, I can probably copy it at some point.
Meanwhile Gelbooru has re-encoded all their video files three or four times by now.
You know, I should totally scrape all the furry* source:*pixiv* stuff from danbooru (add this task to the infinite backlog). It's the only place where the originals of many ugoira since deleted from pixiv can still be found.
I wanted to scrape the entirety of danbooru when the whole AI drama started, downloaded all the post metadata, went "okay, now I've got to figure out how danbooru calculates perceptual hashes so I can reuse their hashes, then store images keyed by p-hash and relate file hashes to them in some SQLite DB". And never did any of that.
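The p-hash-keyed storage idea could look something like the sketch below. Everything here is hypothetical: the dhash function is a generic difference hash operating on a raw grayscale grid (real use would decode the actual image first, e.g. with Pillow), and it is not danbooru's actual perceptual-hash algorithm; the table names and the choice of SHA-256 for file hashes are mine too.

```python
import hashlib
import sqlite3

def dhash(pixels: bytes, width: int, height: int) -> str:
    """Generic difference hash: one bit per horizontal neighbour
    comparison, returned as a hex string (NOT danbooru's algorithm)."""
    bits = 0
    for y in range(height):
        for x in range(width - 1):
            left = pixels[y * width + x]
            right = pixels[y * width + x + 1]
            bits = (bits << 1) | (left < right)
    return f"{bits:016x}"

con = sqlite3.connect(":memory:")
con.executescript("""
    -- one row per perceptual hash, i.e. per "visually distinct" image
    CREATE TABLE images (phash TEXT PRIMARY KEY);
    -- many exact files can map to the same perceptual hash
    CREATE TABLE files (
        sha256 TEXT PRIMARY KEY,
        phash  TEXT REFERENCES images(phash)
    );
""")

data = bytes(range(72))            # fake 9x8 grayscale "image"
ph = dhash(data, 9, 8)             # 8 comparisons x 8 rows = 64 bits
con.execute("INSERT OR IGNORE INTO images VALUES (?)", (ph,))
con.execute("INSERT INTO files VALUES (?, ?)",
            (hashlib.sha256(data).hexdigest(), ph))

# find every exact file that shares this perceptual hash
rows = con.execute("SELECT sha256 FROM files WHERE phash = ?",
                   (ph,)).fetchall()
```

The split into two tables is the point: re-encodes and resizes get different file hashes but (ideally) the same p-hash, so dupes collapse onto one `images` row.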
Honestly, I probably wouldn't have made meaningful progress on the entirety of danbooru anyway: even setting images aside, post metadata is only the tip of the iceberg, and, for example, translations would have taken a lot more effort to scrape and keep up to date.
But if it's only kemonos, perhaps it could be more manageable.