P762 link reply
kimono.avif
What do you think of the AVIF image format? Do you appreciate improvements in image compression, or do you think people should stick to older, more compatible formats that work well enough?
P764 link reply
dog link
P765 link reply
beast
P766 link reply
>do you think people should stick to older, more compatible formats that work well enough?
even webp hasn't been properly adopted and google won't stop pushing it. there is no choice, jpg/png/gif is forever
nice for some autistic network, but >we and ffmpeg still won't *****ing support webp animations
P769 link reply
>ffmpeg wont *****ing support webp animations still
Apparently it will encode to animated webp but not decode. But even when encoding, transparency is broken, so there's not much point.
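Something like this is what I mean, as a sketch (assuming an ffmpeg built with libwebp; file names are just examples):
```ksh
# encode a GIF to animated WebP via the libwebp wrapper
ffmpeg -i input.gif -c:v libwebp -lossless 0 -quality 75 -loop 0 output.webp

# going the other way with the native decoder only gets you the first frame
# at best, since it doesn't handle animated WebP
ffmpeg -i output.webp frame_%03d.png
```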
P771 link reply
P769
last i tried even the encoder is broken unless you depend on libwebp
and the libwebp "hooks" won't support muxing, decoding, and everything else in ffmpeg's "pipeline"
P772 link reply
P771
also tried a patch someone posted, no go
P790 link reply
Is the easiest way to deal with webp animations to use Google's tools to make them from/into a sequence of static images? Not very convenient.
https://developers.google.com/speed/webp/docs/webpmux

Maybe the situation will be better with AVIF due to AV1 not being exclusively a Google thing?
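For reference, the Google-tools route looks roughly like this (a sketch assuming libwebp's command-line utilities are installed; file names are examples):
```ksh
# build an animated WebP from a sequence of stills (100 ms per frame, loop forever)
img2webp -loop 0 -d 100 frame_*.png -o anim.webp

# pull a single frame back out for inspection
webpmux -get frame 1 anim.webp -o frame1.webp
```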
P795 link reply
P790
using ffmpeg as a shit test
a similar shit test would be imagemagick depending on what it is
ffmpeg usually supports exotic formats properly, and the newest ones early
in real life ffmpeg and imagemagick are a 0day sink for encoding and decoding that one should avoid
google can be seen as irrelevant, since that's the same as only the reference having an implementation, meaning irrelevant forever
>Maybe the situation will be better with AVIF due to AV1 not being exclusively a Google thing?
webp was just an example, jxl is also *****ing out: ffmpeg still doesn't support jxl animations
type ffmpeg -codecs and find out
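rough sketch of that check (codec names are what i'd expect, builds vary):
```ksh
# see what a given ffmpeg build claims to handle
ffmpeg -hide_banner -codecs | grep -Ei 'webp|av1|jpegxl'
```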
P796 link reply
P795 to P790
the biggest issues are these forced adoptions where everyone who doesnt want to deal with media is pulling in libwebp and hoping for the best
had to wait a while until a mediocre av1 decoder came around
P809 ## link reply
could you guys get your ***** of an admin to fact check himself before making wild claims like https://boards.4chan.org/g/thread/86946736#p86947429 ?
t. nanon
P810 link reply
P809
Not me. Are browsers actually going to start supporting JPEG XL? (Looks like they might.) Might add thumbnails for that too.
P811 link reply
P810 they literally do. it's hidden behind a flag because libjxl is not mature and there may still be security-critical bugs. that browsers have had support for this long despite the lib being in beta speaks volumes about the demand for jxl.
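for reference, roughly where the flag lives; treat the exact names as an assumption since they change:
```ksh
# Firefox Nightly: about:config -> image.jxl.enabled -> true
# Chrome:          chrome://flags/#enable-jxl -> Enabled
```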
P813 link reply
jpeg xl is coded in c++ and bloated. Compared to the video-driven "web 3.0", image size is not that important.
btw, hitomi.la converted all their stashes to webp. More hate to google.
P809
P816 link reply
JPEG XL sounds promising.

https://github.com/mozilla/standards-positions/issues/522#issuecomment-834162694
>Can recompress JPEG without further visual changes. (I think this is a big deal for migration path, since it allows size wins without human supervision for quality loss. Also, this gives the format a more legitimate claim to a "JPEG something" brand than e.g. JPEG2000. It should be pretty non-controversial that this property can be verified, though I haven't personally done the verification.)
>Is designed for high-fidelity still photos. (The primary way both JPEG2000 and the video codec-based image formats improve on JPEG is that they have non-awful but still visible artifacts at compression ratios where JPEG has very awful and very visible artifacts. I think it's important to also address the case where you don't want visible artifacts but still want a substantial compression win over JPEG. How well JXL meets its design intent on this point is harder to assess than the previous point. I haven't done this assessment personally, either, but I react favorably to this even being the design intent.)
>Less generation loss, so better suited for authoring workflow. (I haven't verified this personally)
>Better progressive decoding story: Does not need a site-provided mechanism for low-res placeholder swapping if the browser implements incremental painting of incremental decode.
>No tile boundaries based on maximum video frame size. (Unclear if at all relevant at high-fidelity compression ratios.)
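The first point (lossless JPEG recompression) can be tried with libjxl's own tools; a sketch, assuming cjxl/djxl are installed and photo.jpg is just an example file:
```ksh
# repack an existing JPEG into JPEG XL without re-encoding the pixels
cjxl photo.jpg photo.jxl
# decoding back to .jpg should reconstruct the original file bit-exactly
djxl photo.jxl roundtrip.jpg
```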

P848 link reply
P811
>using libjxl
only proper implementation being reference = irrelevant
P813
jxl is still bloated, but you're confusing the reference with implementations
but look at that, ffmpeg doesn't have an in-house implementation for jxl unlike parts of avif and webp; it's a wrapper around libjxl that's broken like libwebp's wrapper
how long has it been now? even patents didn't stop ffmpeg from implementing drafts
P865 link reply
that's because they all use the av1 decoders you disingenuous *****wit. video decoders see a lot more development for obvious reasons, algorithmic improvements actually make sense for video. and even then there's only the reference impl and intel's dumb little software decoder. not to mention that avif is a second class format that will expire the moment av1 expires like it did for vp8 and vp9.
P874 link reply
>that's because they all use the av1 decoders
the implementations i'm seeing don't use av1 for webp's lossy and lossless compression, and ffmpeg uses its own vp8 decoder for its webp. the patches for avif use the in-house av1 code, so how was that a consideration
>hurr second class durr impl
not sure what video decoders seeing more development has to do with using ffmpeg as a shit test for adoption. ffmpeg still ends up implementing shit like targa and txd, but how many years has it been now for these truemotion-infested image formats
>and even then there's only the reference impl and intel's dumb little software decoder
ffmpeg supports decoding indeo2 and above to indeo5 the code is in house
if you're talking about av1, i see dav1d, rav1e and svtav1; something like libjxl and libavif using those makes sense. i never said anything as disingenuous as that an implementation should reimplement nonreference algorithms to be considered relevant
P875 link reply
P874
if you meant intel by aom, that's effectively another reference for this shit test
P931 link reply
Here are some usage stats, according to
https://w3techs.com/technologies/market/image_format/10

Have you seen AVIF in the wild anywhere?
P938 link reply
P931
op is the first place i heard about this meme format
hope it doesnt end up like manga sites adopting *****ing webp animations
>bmp tiff
kek
https://w3techs.com/technologies looking at this, a majority of sites hosting apng would be duplicates, like boorus mirroring from pixiv and each other
why is ico so low, are they not counting favicons?
where is jxl?
im more interested in stats from everything, not just the web
P1049 link reply
P938
They list some of the sites they've found using APNG:
https://w3techs.com/technologies/details/im-apng
The top one is a pokemon-esque "game" where the creatures are NFTs for sale. It doesn't link to where it found APNG files, though, and I can't find them myself. I did see plenty of WebP, which it doesn't list the site as using.
P1052 link reply
P1049
lmao
>APNG is used by less than 0.1% of all the websites.
this is being generous
P1084 link reply
heh
P1089 link reply
P1084
its over
P1122 link reply
testing JPEG XL
P1140 link reply
Did you encode it yourself? I grabbed mine directly from a test site.
P1141 link reply
Never mind, same size. Was reported as above 30kb when I downloaded it.
P1144 link reply
I wish 4chan supported AVIF because The Daily Mail uses AVIF, and sometimes I find their images on Google Image search, so I have to convert them to JPG before uploading to 4chan

For the same reason, I wish 4chan supported WebP, so I don't have to convert WebPs. I'm probably not the only one who would like that
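The conversion itself is at least a one-liner; a sketch, assuming ffmpeg (built with an AV1 decoder) or ImageMagick is installed and the file names are placeholders:
```ksh
# AVIF to JPG with ffmpeg
ffmpeg -i picture.avif picture.jpg
# WebP to JPG with ImageMagick 7 (use `convert` with ImageMagick 6)
magick picture.webp picture.jpg
```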
P1146 link reply
anyone try making an animated jxl
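something like this is what i'd try first (just a guess, assuming a recent cjxl that accepts animated GIF/APNG input):
```ksh
# cjxl should keep the animation when fed an animated GIF
cjxl anim.gif anim.jxl -q 90
```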
P1205 link reply
Compressed image with virtually no visible artifacts.
P1206 link reply
Original lossless image
P1207 link reply
JPEG with a few visible artifacts.
P1208 link reply
P1207
How do we normally show up compression artefacting in a jpeg? Iirc we could keep rolling the image back and forth one pixel and reencoding it, and the edges will become more clear.
P1211 link reply
I'm pretty sure there is some kind of visual diff tool, but I just toggle between the images like you say. If that's not good enough, it might as well be lossless.
P1212 link reply
Can you do something analogous to this, but maybe also take the difference of the images after?
```ksh (with apologies for the next lines)
# roll one pixel right, re-encode, then roll back and re-encode again
convert a.jpg -roll +1+0 b.jpg
convert b.jpg -roll -1+0 a.jpg
```
a bunch of times, also for your jxl (and then put the jxl in some other intermediate format I can actually open). I don't have a setup I'm happy with for working with images right now. I will think of or find something this weekend.
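For the difference part, something like this is what I had in mind (a sketch, assuming ImageMagick; a.png/b.png stand in for the original and re-encoded image):
```ksh
# bright pixels = bigger error between the two versions
convert a.png b.png -compose difference -composite -auto-level diff.png
```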
P1217 link reply
P1208
If you repeatedly reencode an image as JPEG with the same settings, there's usually not much generation loss. If you want to make something look really JPEG, you can switch back and forth between two quality settings.
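Roughly like this, as a sketch (assuming ImageMagick; the quality values are arbitrary):
```ksh
# alternate between two quality settings to pile up artifacts on purpose
cp input.jpg fried.jpg
for i in 1 2 3 4 5; do
    convert fried.jpg -quality 85 fried.jpg
    convert fried.jpg -quality 40 fried.jpg
done
```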
P1219 link reply
P1208
I thought that jpeg compression was such that rolling the image by a pixel changes which pixels made up particular subregions, changing the color palette. Maybe I had to do it once and then subtract it from the original or something. I think there was some way to do it that showed up the edges between palette regions, which was clearly a little different to the right image in P1212. How would you estimate image fidelity?
P1220 link reply
P1219
Oh, for some reason I read that as changing a pixel. Yeah, rolling the entire image by a pixel would cause generation loss, too.
It's nothing to do with color palettes, though. JPEG doesn't use those.

>How would you estimate image fidelity?
Usually by looking, switching back and forth as P1211 suggested if necessary. There are some automated tools to measure the difference between images, but in the end they're just trying to model human judgments.
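For example, ImageMagick's compare can put a number on it, though treat this as a sketch rather than a real quality judgment (file names are placeholders):
```ksh
# PSNR in dB between original and re-encoded image; higher = closer
# (the value is printed to stderr, the "null:" output image is discarded)
compare -metric PSNR original.png encoded.png null:
```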
P1222 link reply
P1220
My memory, largely filled in by imagination and speculation, goes like this:
Jpeg achieves lossy compression by splitting the image into blocks and performing a DCT over each block. Dubious -> Iirc a DCT will turn something kinda more continuous into something kinda more spikey, which is useful for a compression strategy. I thought that we should be able to highlight edges between blocks with some chicanery. I know I said palette, which is another, different kind of compression that's available in PNGs.
P1224 link reply
P1222
Oh, I see what you mean now. Usually the edges between blocks aren't the most significant JPEG artifacts, although they do show up at very low quality settings. More often the biggest problem is ringing at sharp edges in the image. You can see a lot of this in the second image of P1212, for example in the sky right at the edge of the yellow structure, and really all over the image.
P1225 link reply
>not using jpeg
P1260 link reply
>not using AV1
P1264 link reply
P1260
I'm surprised this thing plays as smoothly as it does on my potato of a machine that often chokes on 1080p video.
P1303 link reply
P1320 link reply
Another AV1 stress test.

Wasn't patient enough for a slow encoding preset.
P1321 link reply
Not embedding? Too big?
Let's try again.
P1327 link reply
Using crf instead of constant bitrate.
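Roughly the two command shapes I'm comparing, as a sketch (assuming an ffmpeg built with SVT-AV1 and recent enough to expose -crf; input name and values are examples):
```ksh
# bitrate-targeted encode
ffmpeg -i clip.mkv -c:v libsvtav1 -preset 8 -b:v 300k cbr.webm
# CRF encode: constant quality, bitrate floats
ffmpeg -i clip.mkv -c:v libsvtav1 -preset 8 -crf 45 crf.webm
```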
P1328 link reply
P1320
I'm seeing a lot of artifacting in this one.
P1343 link reply
CRF encoding seems to have a weakness with motion artifacts. While it cleans up a lot of the noise and blur you get from constant bitrate, a lot of the details look kind of choppy, as if they are being interpolated in some frames.
P1344 link reply
How much does AV1 improve on bitrate at levels of quality that aren't terrible?
P1345 link reply
Considering that Google is using it for 8K videos on Youtube, it probably fares best there.
This is kind of the best it can do at hypothetically 4chan-compatible sizes. I'm eagerly awaiting the day I can upload this to /wsg/ after they flip a boolean.
P1355 link reply
P1356 link reply
P1373 link reply
Would be easier to see how much it's improving things with a comparison to another codec. You can post multiple files here, just select multiple files in the dialog. Posting the settings used would also be good.
P1382 link reply
I guess I should give h265 a try. The source I'm using is already in that format, though obviously at a higher bitrate.
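Something like this is what I'd try for the h265 side (a sketch; CRF values aren't comparable across codecs, so treat the number as a placeholder and match on output size instead):
```ksh
ffmpeg -i clip.mkv -c:v libx265 -preset slow -crf 28 -an h265_test.mp4
```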
P1404 link reply
VP8 encode from August 2013.webm
P1382
Well, that's definitely progress.
P1426 link reply
nintendo.com now uses AVIF
P1586 link reply
vent.png
tv.avif
pc.avif
10bit.avif
I'm convinced that 10-bit is a gimmick and the real solution to banding is to avoid broadcast/TV ranges.
Here's a pure RGB image encoded in the default TV range, a full PC range, and with a 10-bit TV range.

In my opinion, PC looks best. There's just too much information wasted on chroma with 10-bit.
To be fair, this is an extreme example that makes the encoder shit itself.
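Roughly how I'd reproduce the three variants with ffmpeg's libaom wrapper; a sketch only, since range handling and .avif output depend on the build:
```ksh
# 8-bit, limited ("TV") range
ffmpeg -i vent.png -vf scale=out_range=limited -pix_fmt yuv420p     -c:v libaom-av1 -still-picture 1 tv.avif
# 8-bit, full ("PC") range
ffmpeg -i vent.png -vf scale=out_range=full    -pix_fmt yuv420p     -c:v libaom-av1 -still-picture 1 pc.avif
# 10-bit, limited range
ffmpeg -i vent.png -vf scale=out_range=limited -pix_fmt yuv420p10le -c:v libaom-av1 -still-picture 1 10bit.avif
```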
P1594 link reply
HD video using libaom
P1666 link reply
There is no need for AVIF.

P1586
10-bit is needed for higher latitude. HDR on the consumer side, but mainly for source material so that there is more source information to play around with to create the final 8-bit image.
P2852 link reply
Now that 4chan supports VP9, comparing it with AV1 becomes interesting.

It doesn't seem like such a huge leap compared to the jump from VP8 to VP9, but there is of course still a lot of time for AV1 to mature.
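For the VP9 side, a sketch of a comparable encode (assuming libvpx-vp9; values are placeholders):
```ksh
# libvpx-vp9 wants -b:v 0 for pure constant-quality mode
ffmpeg -i clip.mkv -c:v libvpx-vp9 -crf 45 -b:v 0 -row-mt 1 vp9.webm
```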
P2853 link reply
VP9
P4008 link reply
Is --passes supposed to not do anything when using CRF with SVT-AV1?
Only the VBR mode seems to change depending on that setting.

Anyway, I encoded this tiny WebM using CRF 62. I'd try with libaom too, but it takes too damn long at its best settings.
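For reference, roughly the invocations in question (a sketch; file names and values are examples):
```ksh
# standalone SVT-AV1 app; --passes appears to make no difference in CRF mode
SvtAv1EncApp -i clip.y4m -b out.ivf --crf 62 --preset 6 --passes 2
# the same idea through ffmpeg's wrapper
ffmpeg -i clip.y4m -c:v libsvtav1 -crf 62 -preset 6 tiny.webm
```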
P4010 link reply
P4008
>Is --passes supposed to not do anything when using CRF with SVT-AV1?
That would make sense. I don't see what you'd need a second pass for in CRF mode.
P4017 link reply
According to the documentation:

>When using CRF (constant visual rate factor) mode, multi-pass encoding is designed to improve quality for corner case videos--it is particularly helpful in videos with high motion because it can adjust the prediction structure (to use closer references, for example). Multi-pass encoding, therefore, can be said to have an impact on quality in CRF mode, but is not critical in most situations.

Perhaps the video has no such corner case, and it ends up being treated the same for that reason. It has only one keyframe and no global motion, after all.
P5607 link reply
P7199 link reply
Lol @ broken thumbnail
P7810 link reply
P7199
Probably timed out when trying to create it.
P10684 link reply
It takes *****ing hours to encode this using butteraugli tuning.
I hope it was worth it.
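For anyone curious, roughly how the butteraugli tune is reached (a sketch; it assumes libaom was built with the tune enabled, and the values are arbitrary):
```ksh
# libaom needs -DCONFIG_TUNE_BUTTERAUGLI=1 at build time for this tune to exist
aomenc --passes=1 --cpu-used=3 --end-usage=q --cq-level=30 --tune=butteraugli \
       -o out.ivf clip.y4m
```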
P10850 link reply
P10684
It looked better in P4008. The artifacts in the new one are way more noticeable.
P16225 link reply
P16230 link reply
P16225
>There is not enough interest from the entire ecosystem to continue experimenting with JPEG XL
That's it, bois, big G said it himself, no more small-image-size flags for you. Better go update your chrome app. Fukk yor entire ecosystem!
P16231 link reply
P16244 link reply
[bold .webp] or gtfo
P16245 link reply
[bold: .webp] or gtfo
P16339 link reply
P16230
It would be nice if Firefox continued work on it, but all they seem to do lately is copy Google.
P16351 link reply
P16245
P16339
It seems google didn't snatch the patent for the "free algorithm" in time and decided to drop support for it to spoil competition from microshaft. and webp2 will be the "playground of compression" for chrome devs.
P17134 link reply
What image viewer do you guys use? I've been using feh since it isn't that bloated but I'm wondering if there's anything more lightweight than feh.
P17146 link reply
P17134
Depending on the file: usually feh, ImageMagick display, or eog, or occasionally even mpv when I've got a bunch of images and videos mixed.

>I've been using feh since it isn't that bloated
It's nice, although sometimes I want something that works with animated GIFs.
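As a fallback for those, something like mpv works (just a sketch):
```ksh
# mpv loops an animated GIF/WebP fine and stays fairly light
mpv --loop-file=inf anim.gif
```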