It’s taken a Herculean seven-year effort, but GIMP 3.0 has finally been released. There are so many new features, changes, and improvements in this release that it’s impossible to highlight all of them. First and foremost, GIMP 3.0 marks the shift to GTK3 – this may be surprising considering GTK4 has been out for a while, but major applications such as GIMP tend to stick to more tried and true toolkit versions. GTK4 also brings with it the prickly discussion concerning a possible adoption of libadwaita, the GNOME-specific augmentations on top of GTK4. The other major change is full support for Wayland, but users of the legacy X11 windowing system don’t have to worry just yet, since GIMP 3.0 supports that, too.
As far as actual features go, there’s a ton here. Non-destructive layer effects are one of the biggest improvements.
Another big change introduced in GIMP 3.0 is non-destructive (NDE) filters. In GIMP 2.10, filters were automatically merged onto the layer, which prevented you from making further edits without repeatedly undoing your changes. Now by default, filters stay active once committed. This means you can re-edit most GEGL filters in the menu on the layer dockable without having to revert your work. You can also toggle them on or off, selectively delete them, or even merge them all down destructively. If you prefer the original GIMP 2.10 workflow, you can select the “Merge Filters” option when applying a filter instead.
↫ GIMP 3.0 release notes
There’s also much better color space management, better layer management and control, the user interface has been improved across the board, and support for a ton of file formats has been added, from macOS icons to Amiga ILBM/IFF formats, and much more. GIMP 3.0 also improves compatibility with Photoshop files, and it can import more palette formats, including proprietary ones like Adobe Color Book (ACB) and Adobe Swatch Exchange (ASE).
This is just a small selection, as GIMP 3.0 truly is a massive update. It’s available for Linux, Windows, and macOS, and if you wait for a few days it’ll probably show up in your distribution’s package repositories.
Amazing! Hopefully we will see bigger improvements more often now, especially with colour spaces (e.g. CMYK and CIELAB) and non-destructive editing. Maybe even UX.
It is easy to be cynical about the pace of GIMP development and indeed it has been slow. However, I think a lot of the problem has simply been that the version that the rest of us were using was not the one that the devs were adding new things to. The big architectural changes meant that new features added years ago are only seeing the light of day now.
ReactOS has the same problem. The “stable” version is years out of date. Anybody close to the project has been using versions with far more functionality for a really long time but, unless you are downloading nightly dev versions, you would never know that progress is being made. As a result, progress is much slower than it should be and far fewer people are able to participate.
Now that 3.0 is out, I do hope that the apparent pace of GIMP evolution accelerates and we can all get more excited about where it is going next.
GIMP fell into the same trap as Netscape in the 90s: rewriting too much at once. KDE learned that with KDE 4. For both it was a necessary step for future evolution though, so kudos to the GIMP devs for sticking with it.
https://www.joelonsoftware.com/2000/04/06/things-you-should-never-do-part-i/
Serafean,
It’s not always the case though. Sometimes it really is better to tear down the house and build a better foundation than to keep building on what you’ve got. Inadequate wiring, leaks, bad insulation, bad windows, mold, rotten boards, structural limitations, not up to code, etc. The same can apply to software – patching a code base with many problems could be the wrong choice. This isn’t to say everything should be rewritten, only that it depends and it should not be ruled out if the code base has a lot of problems. Believe it or not, I’ve seen a lot of terrible code in corporate environments. Of course it can be hard for managers to justify rewrites when they have such short-term incentives – apply some putty to cover up the mold and make it someone else’s problem (typically my problem when I end up working on it).
It has been many years since I read that Joel article but I mostly agree with him and primarily because I agree that the replacement is always much bigger than you expect. You typically find out that your customers rely on behaviour or features you did not anticipate. You often realize that there were features you did not yourself understand or allow for in your new design. By the time you are back to feature parity, the replacement system is ready to be replaced.
That said, I am a massive fan of refactoring. Old code needs to constantly be improved, made more modular, and made more testable. You cannot really refactor unless your code is well tested. I think it was Michael Feathers who defined “legacy” code as code that could not be tested and therefore could not be safely refactored. Refactoring and modernizing old code seems like a lot of work and expense, that is until you have been through a massive multi-year re-write on a core product that just about kills the company. With a couple of those under your belt, migrating even the crustiest pile of ancient spaghetti bit by bit does not seem so bad. If you can create a core, you can wrap it in an interface or API, change out the UX, and significantly change the functionality without losing what works. If you can break the core even further into a few modules, you can replace them individually over time without anything approaching the risk or scale of a full re-write. In the end, you can get to pretty much the same (probably better) result without triggering existential threats along the way.
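The wrap-and-replace strategy described above can be sketched generically. This is a minimal illustration, not code from any real migration; all the names (`TaxCalculator` and friends) are hypothetical:

```python
# Hypothetical sketch: wrap a legacy routine behind a small interface so
# implementations can be swapped module-by-module instead of rewritten at once.

def _old_tax_routine(amount):
    # Stands in for the decades-old code we dare not rewrite all at once.
    return round(amount + amount * 0.20, 2)

class TaxCalculator:
    """The interface the rest of the system depends on."""
    def total(self, amount: float) -> float:
        raise NotImplementedError

class LegacyTaxCalculator(TaxCalculator):
    """Thin adapter around the old routine, behaviour unchanged."""
    def total(self, amount: float) -> float:
        return _old_tax_routine(amount)

class ModernTaxCalculator(TaxCalculator):
    """New implementation, introduced once tests pin down the old behaviour."""
    def total(self, amount: float) -> float:
        return round(amount * 1.20, 2)

def checkout(calc: TaxCalculator, amount: float) -> float:
    # Callers only ever see the interface, so swapping implementations is safe.
    return calc.total(amount)
```

In practice you would first write characterization tests asserting that both implementations return identical results, then flip callers over one at a time.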
These days when you can create containers and VMs that provide software the environment it needs without holding everything else back, it is more viable than ever to keep the lights on while you renovate.
LeFantome,
I know that replacements can be a big job, but at the same time it’s also true that many managers are influenced by sunk cost fallacy and underestimate how bad foundations are holding back business operations.
Again, I’d say it depends on the circumstances. Some software worked poorly on day 1, sometimes it just doesn’t age well. A lot of enterprise software has been hanging on for decades in programming environments like classic VB that don’t scale well and aren’t even supported any more.
To be fair, that depends on the competency of the team. If the project manager insists on cutting corners, offshoring to the cheapest bidders, etc., then yeah, I concede a new project can be set up for failure. I’ve definitely seen instances of this. However I’ve also seen success stories where the new system works so much better that everyone is absolutely pleased to cut off the old dead weight. I think it’s wrong to declare a solution without even considering project specifics.
Funnily enough, I’ve done a lot of work for a company that web-enables COBOL applications on the mainframe. Being able to build new GUIs without having to rewrite existing applications is the pitch. But there are lots of caveats that may not be immediately obvious. The effort that goes into reusing the COBOL code can be very significant when the GUI doesn’t have a 1:1 representation with the COBOL application, which practically all of them want to improve on. Also, instead of maintaining one code base you will end up having to make changes in two places going forward: the COBOL application, and the GUI built on top of it. This involves a ton of technical nuance like sync problems and marshaling errors that we probably don’t have time to go into.
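To give a flavour of the marshaling work involved: the GUI layer typically has to unpack fixed-width COBOL records field by field, and any drift between the copybook and the parser corrupts data silently. A minimal sketch in Python, with an entirely invented record layout:

```python
# Hypothetical fixed-width record, as if from a copybook like:
#   01 CUSTOMER.
#      05 CUST-ID   PIC 9(6).
#      05 CUST-NAME PIC X(20).
#      05 BALANCE   PIC 9(7)V99.
LAYOUT = [("cust_id", 6), ("name", 20), ("balance_raw", 9)]

def unmarshal(record: str) -> dict:
    """Unpack one fixed-width record into a dict the web GUI can use."""
    out, pos = {}, 0
    for field, width in LAYOUT:
        out[field] = record[pos:pos + width]
        pos += width
    out["cust_id"] = int(out["cust_id"])
    out["name"] = out["name"].rstrip()
    # PIC 9(7)V99 has an *implied* decimal point: the last two digits
    # are cents, with no "." in the stored data.
    out["balance"] = int(out.pop("balance_raw")) / 100.0
    return out

record = "000042" + "Jane Doe".ljust(20) + "000012345"
customer = unmarshal(record)
```

If a field width changes on the COBOL side and the `LAYOUT` table isn’t updated in lockstep, every subsequent field shifts and parses as garbage, which is exactly the class of sync bug the comment alludes to.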
Is reusing existing code the right decision? Sometimes but not always. Sometimes a rewrite can be easier/cheaper/more future proof than trying to modernize old code. It’s important to evaluate this question on a case by case basis.
@Alfman
We are probably not disagreeing as much as it sounds. Though I have certainly become more and more cautious of all-at-once rewrites over time. It would be fascinating to share war stories.
Your VB example resonates. I was involved in the replacement of a very large VB6-based application not long ago. I agreed to (and advocated for) a rewrite for many of the reasons you cite. The replacement was a .NET-based, cloud-native, Kubernetes monster. Many, many lessons on how not to do a re-write were learned from that experience.
Joel, the guy who wrote the article, somewhat famously wrote his own VBScript compiler to port a Windows-only VBScript product he had to PHP/Linux without having to rewrite the VBScript code. Then they started enhancing the compiler so they could use features VBScript lacked. They could generate both VBScript and PHP versions of the product as output from one code base. If I recall, they called the resulting “language” Wasabi (enhanced VBScript).
Years later, they created a .NET version of Wasabi that generated .NET assemblies from the same source code (no re-write). At some point, they retired Wasabi and moved to C#. Instead of a rewrite though, they had Wasabi generate readable C#, which they then maintained directly. This all happened over, I think, close to 20 years. At no point was the original application ever really retired or “rewritten from scratch”. Customers saw regular releases and feature progression the entire time without realizing all the changes that were going on under the hood.
What we probably agree on is that bad managers can make it suck no matter what strategy you use.
LeFantome,
This isn’t a reflection on your message, but I feel compelled to say that VBScript (and related ASP) are some of the worst languages I’ve ever had the misfortune of using as a software professional. Yuck, yuck, yuck! If ever I saw a language worth leaving behind… this is it! Joel may have done the world and himself a disservice in trying to continue the VBScript legacy, haha. I wouldn’t say this of Visual Basic, which offered much better primitives and was actually good at rapid application development (although ActiveX would go on to be an Achilles heel).
That’s a fair point. Fundamentally all languages tend to support the same math/conditionals/loops/arrays/etc. In principle it’s not that hard to take an algorithm from one syntax and rewrite it for another syntax. However syntax can be superficial, while my bigger concerns have to do with software foundations and structure. I’ll give you an example…
I recently worked on a code base from the 90s/00s to make needed cryptographic upgrades. We did not rewrite the application, which is fair enough, but the upgrade effort was massive because the code suffers from spaghetti code syndrome, and as it grew they failed to design things modularly using clean, loosely coupled interfaces. With no self-contained unit, crypto logic ended up being peppered throughout the code base; everywhere inside of the event loops and GUI there were if conditions taking different branches depending on cryptographic states. Because we needed to upgrade the crypto, it became a hell of an ordeal, and even though we were reusing code we ended up breaking things in the process.
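The anti-pattern being described, versus the loosely coupled alternative, can be sketched like this. It’s a hypothetical illustration with stand-in functions, not the actual code base:

```python
# Stand-ins for real crypto, purely illustrative:
def xor_obfuscate(data: bytes) -> bytes:
    return bytes(b ^ 0x5A for b in data)   # "legacy scheme"

def better_scheme(data: bytes) -> bytes:
    return data[::-1]                      # placeholder, not real cryptography

# BEFORE (scattered): every GUI handler branches on crypto state itself,
# so a crypto upgrade means touching dozens of handlers.
def on_save_clicked_before(data: bytes, crypto_version: int) -> bytes:
    if crypto_version == 1:
        payload = xor_obfuscate(data)
    elif crypto_version == 2:
        payload = better_scheme(data)
    # ...the same branching repeated all over the event loop and GUI code...
    return payload

# AFTER (decoupled): one interface hides the scheme; upgrading the crypto
# means adding a class, not editing every event handler.
class Cipher:
    def seal(self, data: bytes) -> bytes:
        raise NotImplementedError

class LegacyCipher(Cipher):
    def seal(self, data: bytes) -> bytes:
        return xor_obfuscate(data)

class ModernCipher(Cipher):
    def seal(self, data: bytes) -> bytes:
        return better_scheme(data)

def on_save_clicked_after(data: bytes, cipher: Cipher) -> bytes:
    return cipher.seal(data)  # no crypto knowledge in the GUI layer
```

With the second shape, the crypto upgrade described above would have been one new class plus one wiring change, rather than a sweep through every event handler.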
In the end it worked out, and at least the crypto code is better off for it, but this was just one change in a code base that honestly is still a mess. The code is written across two languages, portions of which use MFC dialogs, others written in win32 resource dialogs. There are lots of shenanigans to marshal program state across DLLs. It all technically works, but they’ve been living with a fairly large maintenance burden and burning out devs. If they had wanted to fix everything, then at some point the calculus could favor a rewrite rather than working with such sloppy foundations.
Obviously not all projects are like this and I’m not trying to make the case everything needs to be rewritten, but my point is that we need to consider whether code reuse is actually the best path forward and not automatically assume it is in all cases.
Sure, that’s an easy thing to agree on.
In my example the product was a corporate acquisition and not originally developed by the company I worked for.
@Alfman GIMP 3.0 is a serious achievement, they took the hard road, but the result is worthy of being lauded. I hope Wayland is watching.
I much prefer what GIMP did, even if the wait was excruciating, it’s still way better than a death by a thousand cuts.
cpcf,
I look forward to trying it. It may be a while, since Debian stable tends to be slow to pick up new versions.
Non-destructive filters sound awesome though. This is actually a big reason I prefer vector graphics software – I want to be able to make changes without having to “commit” when it’s time to apply a filter.
Years ago my favorite graphics software was Paint Shop Pro. It had a non-destructive workflow, but I lost access to it when I moved to Linux, and now I use GIMP. I’ve tried Krita as well, but it was too artistic for my needs.
As a light user who finds the ui of most image editors equally obscure, I am curious as to what the photoshop crowd thinks of this new version.
I tried it really quickly, and for me it is as ugly as always.
I used to use Photoshop, but I’ve moved 100% to GIMP now for editing my scanned, traditionally-painted paintings. Some of the ways you do things are a bit different, but nothing important; you can learn that. The biggest problem really is performance. GIMP is slow. It’s not visible on small images, but on 600 DPI scanned TIFF images, some simple transformations can take anywhere from 3 to 8 seconds, while in Photoshop they take 1 second, or are instantaneous.
Eugenia Loli,
I use GIMP’s box transformations to correct distortions for documents I take pictures of. My god, the performance leaves a lot to be desired. It should be real time. Obviously Blender isn’t the right tool for the job, but its shaders are real time, and that proves the computer is not the limiting factor, far from it. GIMP’s code just lacks optimization. Also, whatever algorithm GIMP uses is not pixel perfect, which is very frustrating when you know exactly where a mark on paper needs to be mapped to.
I’m thankful to have gimp, but yea it could use a lot more optimization and acceleration.
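For what it’s worth, the math behind that kind of box/perspective correction is cheap: the 3×3 homography that maps four marked corners onto a rectangle comes out of a single 8×8 linear solve, and applying it per pixel is embarrassingly parallel. A sketch with NumPy (this is not GIMP’s actual implementation, just the standard direct-linear-transform setup):

```python
import numpy as np

def homography(src, dst):
    """Solve for the 3x3 perspective matrix H mapping each src point to its
    dst point, from four point correspondences (classic DLT formulation)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # u = (h00*x + h01*y + h02) / (h20*x + h21*y + 1), similarly for v;
        # cross-multiplying gives two linear equations per correspondence.
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def apply_h(H, x, y):
    """Map one pixel coordinate through the homography."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Map a skewed quadrilateral (photographed page corners) onto a rectangle:
src = [(12, 8), (205, 15), (198, 290), (5, 280)]
dst = [(0, 0), (100, 0), (100, 150), (0, 150)]
H = homography(src, dst)
```

Setting up `H` is microseconds of work; the per-pixel resampling is the only heavy part, and it parallelizes trivially, which is why GPU shaders make it real time.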
Is there a list of what file formats are actually supported? I couldn’t find any at the site.
Are there any graphics file formats that GIMP does not support? I’ve thrown all kinds of weird and obscure file endings at it over the years, can’t recall it ever letting me down.
If IFF ILBM support has only just been added then I suspect there are a great many other file formats not yet supported.
Minuous,
If you go to the open dialog box, there’s a long list of supported file formats.
Is it missing something you need?
Probably. Since it has online docs I was hoping this kind of thing would be documented rather than having to download and install it and check a file requester to find out. Otherwise there doesn’t seem much point to having the online docs.
So you don’t have a specific need and just want a list. Maybe these pages are helpful?
https://developer.gimp.org/core/standards/
https://fileinfo.com/software/gimp/gimp
GTK? Motif or GTFO! =)))))
On a more serious note, I am happy this is out now and I am eager to try it. Years ago, I dumped all subscription software out of my life (I don’t mind perpetual licenses), but GIMP and even Luminar Neo choke on 4×5 and 8×10 300–800 MP scans.
GIMP scrolls and redraws the screen very slowly, and previewing any levels adjustment is impossibly slow. Luminar Neo leaks memory like there’s no tomorrow, slowly going through all 64GB of RAM and more than 100GB of swap space.
I was just yesterday considering going back to Adobe, but will now give GIMP another shot.
Shiunbird,
I use GIMP almost exclusively for raster graphics. It’s not critical for me because I don’t do much graphics editing, but I concur: the software is badly optimized and would benefit from a major overhaul. It takes a while for Debian repos to push feature updates, but I look forward to v3.
Unless I’m actually working with photos, I prefer vector graphic tools to generate graphics. I use inkscape there, but it needs some work too.
Incidentally, I’ve been working on an Inkscape fork that takes the Inkscape scene and forwards it to a laser-projecting daemon in real time. My wife wanted something to draw outlines for her projects. Using ILDA software is clunky and doesn’t work with normal vector formats. So I thought it would be really cool to project directly out of real vector graphics software, and Inkscape was an obvious candidate. I wanted to implement it as a plugin for Inkscape, but it quickly became apparent that Inkscape’s built-in facilities weren’t up to the task, so I had to create a fork that allows objects to be extracted from the scene and sent to the projector.
Optimisation isn’t the biggest problem with GIMP; the biggest issue is the lack of shape drawing tools. Things which MS Paint, Paint Shop Pro, and Photoshop can do, GIMP can’t do, or does in a very clunky manner. Hopefully this will all be fixed in GIMP 3.2.