The iPhone 16 family has arrived and includes many new features, some of which Apple has kept very close to its vest. One such improvement is support for the JPEG XL file format, which promises improved image quality compared to standard JPEG files while delivering smaller file sizes.
[…]Overall, JPEG XL addresses many of JPEG’s shortcomings. The 30-year-old format is not very efficient, only offers eight-bit color depth, doesn’t support HDR, doesn’t do alpha transparency, doesn’t support animations, doesn’t support multiple layers, includes compression artifacts, and exhibits banding and visual noise. JPEG XL tackles these issues, and unlike WebP and AVIF formats, which each have some noteworthy benefits too, JPEG XL has been built from the ground up with still images in mind.
↫ Jeremy Gray at PetaPixel
Excellent news, and it will hopefully mean others will follow – something that tends to happen when Apple finally supports the new thing.
My photos? How rude.
Unfortunately, JPEG XL introduced a regression… it supports animations.
Yet another format where, if I support it, I’m going to need to run a sandboxed image parser in situations like “select the image to use as your avatar” to strip out later frames, and my file manager needs more than a simple magic number match to determine what kind of file I’m dealing with.
At least APNG is a flat-out violation of the PNG spec, which is why the reference implementation will never support it. (The PNG spec explicitly says that the PNG header denotes that all images in the PNG file will be derived from a single abstract image, and the rationale document associated with the PNG spec explicitly says they thought Animated GIF was a mistake.)
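To make the avatar-upload concern concrete, here is a minimal sketch in Python of the kind of check such a handler ends up needing. It assumes Pillow is installed (and, for JPEG XL itself, a decoder plugin such as pillow-jxl-plugin, which is an assumption rather than part of stock Pillow); it re-encodes only the first frame and reports whether the file was animated:

```python
from PIL import Image

def flatten_to_first_frame(path: str, out_path: str) -> bool:
    """Re-encode only the first frame as PNG; return True if the file was animated."""
    with Image.open(path) as im:
        animated = getattr(im, "is_animated", False)  # False for single-frame formats
        if animated:
            im.seek(0)  # make sure we are on the first frame before re-encoding
        im.convert("RGBA").save(out_path, format="PNG")
        return animated
```

Re-encoding (rather than trying to strip frames in place) also sidesteps the magic-number problem, since whatever comes out is a known single-frame format.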
ssokolow,
I think animations are a side product of having multiple layers:
From https://en.wikipedia.org/wiki/JPEG_XL
Basically, someone would eventually have hacked this into the format in a non-standard way. It is too tempting to use multiple images (which were supposed to be layers) in succession as animations. So they took the preemptive step of including that in the standard.
But of course I might be misreading this.
This sort of happened with the GIF 89a format. Support for multiple images per file was originally added to stack layers and produce images with more than 256 colors, but animation support was then included because it was such a minor modification. Netscape later added support for looping via the format’s Application Extension mechanism, and other browsers eventually adopted it.
But is it better than JPEG 2000? Now that the patents have expired…
Yes.
The good news is that, unlike HEIF/HEIC, JPEG XL is royalty-free, which is a nice surprise.
Since HDR will break compatibility with most JPEG implementations out there anyway, why not embrace a state-of-the-art royalty-free format?
kurkosdr,
And I don’t understand why Google does not support it.
This random site for example (really random, just Googled for an example, … ah the irony):
https://jpegxl.info/old/art/
Does not render correctly in Chrome.
They used to champion open formats, so much so that they developed their own video formats (and also bought an existing company to do it).
Thing is, Firefox doesn’t support it either (I just visited the website, Firefox is my main browser, and the pictures won’t render). My guess is that both Google and Mozilla are waiting for any patent trolls to come out of the woodwork first. The JPEG group may have done their due diligence, but that’s usually not enough to stop patent trolls wielding questionable patents.
We have reached the stage where we can’t have new media formats out of fear of patent trolls.
JPEG XL is royalty-free, I think. Google does not like it because they are pushing AVIF, which is derived from their own AV1 codec. Firefox dropped JPEG XL claiming “too much complexity”.
I am using the Zen browser these days (a Firefox fork) and it handles that site (so, JPEG XL) just fine.
I think even Ladybird supports JPEG XL.
“Too much complexity” is code for “it’s too new a format and we would like to see if any patent trolls come out of the woodwork first”.
Remember that VC-1 was supposed to be royalty-free (and was supposedly designed to not step on third-party patents), but when the spec was released, patent trolls started pointing to parts of the spec and claiming they have a patent on that.
When I said that we have reached the stage where we can’t have new media formats out of fear of patent trolls, this is what I meant. New formats are only adopted when a headline feature (such as HDR or 4K video) needs to be supported.
kurkosdr,
Software/math patents shouldn’t be allowed at all. They end up encouraging the development and use of subpar algorithms for avoidance purposes. Patent trolls create intellectual lockdowns by establishing idea monopolies on math and algorithms. This harms software developers, who can trigger patent landmines at any time even when they write their own code and don’t copy. Software patents are good for patent trolls and lawyers, but practically everyone else loses, with costs inevitably passed on to consumers by way of higher prices to pay for wasted time, patent licensing, and lawsuits, not to mention products having to be stripped of “infringing ideas”.
But they are a thing in several countries. Even in the EU, where software patents are theoretically not a thing, you can get a patent on an algorithm as long as the algorithm is disguised as a machine (and if you do that, any software that implements the algorithm counts as an implementation of the machine). This prevents the most ridiculous “do it on a computer” patents (for business processes etc.) but it still allows novel algorithms to be patented. There are EU countries that have chosen not to enforce software patents (like France), but they are the exception.
So, software patents are a thing in several important markets, and companies have to deal with the world as it is, not how it should be. That’s why I can’t blame them for taking a wait-and-see stance on JPEG XL.
PS: But if you want to get philosophical on the “should” question, there is the open question of why a unique modulation algorithm should be patentable when implemented entirely in hardware (no firmware whatsoever) but not patentable when implemented as a software-defined radio. The DVB-T2 modulation and demodulation algorithms are patented for example (and all DVB-T2 receivers pay a royalty). Should a software-defined radio implementation not be liable to royalties while it does the same thing?
kurkosdr,
I know 🙁
Right, I sympathize. Although the patent holders may be playing the same game. Wait for wider adoption and THEN float the submarine patents. The only way to be sure something doesn’t infringe is to wait ~20 years for any would-be patents to expire.
That’s a valid question. My opinion is that a hardware implementation of math is still math. If math is the only thing that makes an invention unique, then it probably shouldn’t be patentable. I doubt the framers of the constitution ever anticipated that patents would be used in pursuit of monopolies on the implementation of math algorithms. 🙁
Chrome dropped it because of personal drama. Check who made the pull request dropping support. In short, it’s the WebP and AVIF guys being pissed off that their useless formats have been superseded.
Mozilla probably just waits and copies Google, as usual.
joscher,
The reasons for dropping it were weak, although I’m not aware of any personal drama. Can you elaborate?
https://www.phoronix.com/news/Chrome-Dropping-JPEG-XL-Reasons
I am reminded of Colitti though. He is the Google employee holding back DHCPv6, which works everywhere else except Android. Because of this, the /64 IPv6 network that most ISPs hand out cannot be subnetted, at least not without NAT66, which is batshit crazy. Google probably owns millions of such blocks so they don’t care, but an average business or home lab may want to subnet that giant address space and can’t because of this stupid limitation.
https://issuetracker.google.com/issues/36949085
https://www.nullzero.co.uk/android-does-not-support-dhcpv6-and-google-wont-fix-that/
Unfortunately Colitti bears a lot of responsibility for limiting corporate IPv6 deployments where SLAAC isn’t going to be deployed. Google employees are entitled to their opinions, but at the end of the day they should not be dictators when it’s not their house.
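Since the “can’t subnet a /64” point may not be obvious, here is a small illustration using Python’s ipaddress module (the prefix is the RFC 3849 documentation range, purely hypothetical). Splitting a /64 yields prefixes longer than /64, and SLAAC needs a 64-bit interface identifier, so hosts on those subnets would have to use DHCPv6 (which Android does not speak) or hide behind NAT66:

```python
import ipaddress

# Purely illustrative: the RFC 3849 documentation prefix, not a real allocation.
delegated = ipaddress.ip_network("2001:db8:abcd:1234::/64")

# Splitting the /64 produces /66 prefixes. SLAAC requires 64-bit interface
# identifiers, so hosts on these subnets cannot autoconfigure; they would need
# DHCPv6 (unsupported on Android) or NAT66 instead.
for subnet in delegated.subnets(new_prefix=66):
    print(subnet)
```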
It’s as if Chrome is the new IE when it comes to browser market share, and Google is the new Microsoft when it comes to abusing that browser market share in whatever way they see fit.
But when it comes to new media formats specifically, I am willing to give them a pass after what happened with VC-1. It’s not just Google, nobody wants to support media formats newer than 25 years old unless they absolutely have to in order to support some headline feature (like Netflix UHD or YouTube 4K), because media formats newer than 25 years old are patent minefields. Your personal website full of JPEG XL pictures is not a headline feature, not enough for Google or Mozilla to take the risk anyway.
Good step for faster general adoption. AVIF is very good too; both are way better than WebP, which I really dislike…
Actually, Avif is much worse than Webp (or even classic PNG) when it comes to lossless images. And PNG is the most common image format on the web, so Avif only really makes sense for some niches like animations and auto-generated thumbnails of AV1 videos.
My very controversial opinion:
Do you know what avoids a boatload of CO2 emissions? Backwards compatibility, keeping old devices supported as long as possible. There’s no reason for my old 2009 Mac Pro with a Vega 64, 128GB of RAM, and 4x NVMe SSDs not to be compatible with the newest OS. There’s no reason why an old iPhone 6, which can perfectly well play 1080p video, browse the web, etc., shouldn’t be compatible with the newest iOS (bloat does not count as a necessary feature).
Do you know what avoids a boatload of CO2 emissions? Keeping unprocessed metal ore underground: not transported, not smelted, not manufactured, not shipped around the world as millions of parts for a finished product.
Do you know what avoids a boatload of CO2 emissions? Not taking 300 vanity pictures per day that no one will EVER see, not synchronizing them to the cloud, and not dumbwalking while mindlessly scrolling through pictures of other people’s food.
Trying to wrap this up as ecological gain is pure hypocrisy.
Unfortunately, the capitalist model needs endless pointless production of new devices – thus, obsolescence of old devices. It is intrinsically incompatible with ecological gain. At best, it can very slowly evolve towards slightly less impactful ways of production while letting the marketing wheel spin full time to present it as a green revolution.
Winter_skies,
I don’t think this is about capitalism, or rather the free market; it is actually about practicality.
A software company like Microsoft would be more than happy to support older devices and sell as many copies as possible. However, after a certain point, not having features in hardware means they will have more difficulty implementing those features in software, or will drop them altogether.
Even Linux, which would be the closest thing to a collectivist system, is dropping older hardware support, or when it is kept, it is not as good as before. You can no longer expect to run IceWM and Firefox on your Pentium with the latest Linux. (Yes, you can with some effort, but it won’t be pleasant.)
It is the way of life. We progress, and rarely look back.
Shit like WebP should be legally banned. If not by the US, at least the EU can do something? It is all bad, and any content posted in WebP has no value to society.
If Google got what they wanted the web would be:
1. User uploads a JPEG
2. Website converts it into a lossy WebP
3. Server converts it another time into a lossy Avif (if supported by the browser)
4. User downloads it, can’t use the file and converts it back to a JPEG.
So three generation losses just to get back to the same file.
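A rough sketch of that chain in Python with Pillow, leaving out the Avif hop (stock Pillow cannot write Avif without a plugin) and using made-up file names and quality settings:

```python
from PIL import Image, ImageChops, ImageStat

# JPEG -> lossy WebP -> JPEG again, i.e. two of the lossy steps described above.
Image.open("original.jpg").convert("RGB").save("step1.webp", quality=80)
Image.open("step1.webp").convert("RGB").save("step2.jpg", quality=80)

# Measure how far the pixels drifted from the original upload.
diff = ImageChops.difference(
    Image.open("original.jpg").convert("RGB"),
    Image.open("step2.jpg").convert("RGB"),
)
print("mean per-channel error after the round trip:", ImageStat.Stat(diff).mean)
```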
Meanwhile, JPEG XL can convert from and to existing JPEGs without any generation loss. And it’s the best lossless codec (slightly better than WebP and much better than Avif and PNG) so it can also replace the endless amount of existing PNGs without introducing quality loss.
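For what it’s worth, a sketch of that JPEG round trip driven from Python, assuming the libjxl command-line tools (cjxl and djxl) are installed and on PATH and that your build’s defaults perform lossless JPEG recompression; the file names are hypothetical:

```python
import filecmp
import subprocess

# Recompress an existing JPEG into JPEG XL, then reconstruct the JPEG from it.
subprocess.run(["cjxl", "photo.jpg", "photo.jxl"], check=True)
subprocess.run(["djxl", "photo.jxl", "restored.jpg"], check=True)

# If the recompression was lossless, the reconstructed file matches the original.
print("byte-identical:", filecmp.cmp("photo.jpg", "restored.jpg", shallow=False))
```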
JPEG’s design is simple and elegant, and worth keeping.
Many of these “new” features were already in JPEG or are trivial to implement.
A problem is that only a subset of the original JPEG spec was commonly implemented.
I would really like to retain JPEG’s simplicity, implement the neglected features (like arithmetic coding, higher bit depths, etc.), and add a few more modern features. At that point it couldn’t use the jpg/jpeg extension, but it should be backward compatible.
traeak,
It’s served us well, but I don’t see a reason for us to be beholden to a 32-year-old format if we have something better. There are features that I frequently need where JPEG falls short. Alpha channels in particular are something I need all the time, and I often end up resorting to PNG files just for that reason. I’m still going to keep the old JPEGs around, but if there were a new format that was widely supported, with more features, better compression, and better quality, then why shouldn’t we use it for new content moving forward?