In the best case, we may be able to avoid feeding the data through the compositing pipeline of the compositor as well, if the compositor supports direct scanout and the dmabuf is suitable for it. In particular on mobile systems, this may avoid using the GPU altogether, thereby reducing power consumption.
I don’t understand what’s happening, but it seems like a good idea? Can anyone help?
Instead of chucking the information for a whole bunch of pixels into the compositor, they’re going to send a little note describing where to find the pixel data in memory, to be used when those pixels finally get sent out to the display.
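Concretely (and hedging a bit, since the field names below are made up rather than copied from the actual Wayland linux-dmabuf protocol), the “note” is basically a dmabuf descriptor: a file descriptor for memory the GPU or video decoder already owns, plus the metadata the compositor needs to interpret it. A minimal sketch in C:

```c
#include <stdint.h>

/* Illustrative sketch of the "note": instead of copying pixel data,
 * the client hands the compositor a dmabuf file descriptor plus the
 * metadata needed to make sense of the memory it refers to.
 * Field names are invented for clarity; the real linux-dmabuf
 * protocol carries equivalent information per buffer plane. */
struct dmabuf_note {
    int      fd;        /* handle to the driver-allocated buffer      */
    uint32_t width;     /* pixel dimensions of the buffer             */
    uint32_t height;
    uint32_t stride;    /* bytes per row in memory                    */
    uint32_t format;    /* pixel format (e.g. a DRM FourCC code)      */
    uint64_t modifier;  /* tiling/compression layout used by the GPU  */
    uint32_t offset;    /* where the image starts within the buffer   */
};
```

The compositor can import that fd and either sample from it on the GPU or, if the buffer qualifies, hand it straight to a display plane without ever copying the pixels.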
My understanding is that they were already sending “notes” to the compositor, but one note per window, after an internal per-window composition. Now they’ll send a note for each part to skip that intermediate step. The notes have to describe rectangular areas; when the content isn’t rectangular they have to fall back to alpha channels, and I’m not sure that will be more efficient.
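As a rough illustration of the direct-scanout half of this (everything here is a placeholder sketch: real compositors use the atomic KMS API and TEST_ONLY commits to decide suitability, not a hypothetical plane_can_scanout() helper), the compositor’s choice boils down to: if the client’s buffer fits a hardware plane, program the plane with it; otherwise composite it on the GPU.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <xf86drmMode.h>   /* libdrm legacy plane API */

/* Placeholder: real compositors answer this by checking the plane's
 * supported formats/modifiers and doing an atomic test commit. */
static bool plane_can_scanout(uint32_t plane_id, uint32_t fb_id)
{
    (void)plane_id; (void)fb_id;
    return false; /* pretend the buffer doesn't qualify */
}

/* Placeholder for the normal GPU composition path. */
static void composite_with_gpu(uint32_t fb_id)
{
    printf("compositing fb %u on the GPU\n", fb_id);
}

/* Either put the client's framebuffer straight onto a display plane
 * (no GPU work at all) or fall back to compositing it. */
void present(int drm_fd, uint32_t plane_id, uint32_t crtc_id,
             uint32_t fb_id, uint32_t w, uint32_t h)
{
    if (plane_can_scanout(plane_id, fb_id)) {
        /* src_* coordinates are 16.16 fixed point in this legacy API */
        drmModeSetPlane(drm_fd, plane_id, crtc_id, fb_id, 0,
                        0, 0, w, h,               /* where on screen  */
                        0, 0, w << 16, h << 16);  /* which part of fb */
    } else {
        composite_with_gpu(fb_id);
    }
}
```

When the first branch is taken the GPU never touches the frame, which is where the power savings mentioned in the quoted paragraph come from.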