(There are sample images below)
Color Space Blues
It was supposed to be a simple thing. “Get this in, and you’re done.” I figured it would take 3 days or so. As I complete my 25,000th line of code on this “one simple thing,” I can see it was never going to be that simple.
I’m talking about a Sagelight feature called Power Curves. It allows manually-adjusted curves in a number of color spaces, such as RGB, CIE LAB, Hunter LAB, XYZ, Yxy, HSL, HSB, YCrCb, and some hybrid color spaces that I think are very helpful. In one sense, offering curves in so many color spaces is overkill; YCrCb, for example, is there just for completeness. Hunter LAB, however, I think can be much more useful than CIE LAB in many cases, because it avoids a secondary non-linear color space conversion that can skew colors as they move around in your image.
Colors changing in your image as you edit in non-RGB spaces has always been a pet peeve of mine.
Using non-RGB color spaces that split the light and the color into separate channels can be important. For example, it’s better to sharpen your image in just the luminance channel. Many editors use LAB mode for this, while Sagelight’s new version uses XYZ space because it is a purer method of the same approach (well, actually, the ‘L’ of LAB and the XZ for the colors, but that’s why it gets complicated). Also, more accurate color changes can sometimes be obtained in alternate color spaces, which can yield more perceptually-based changes than operations in RGB space.
The Science of Color
Lately, I’ve been obsessed with color. It didn’t start off as an obsession, but once I started to really delve into how we want to edit pictures today, the issue of color in our images was suddenly front and center. Today, we can do much more with images than we could in the past. One of the main reasons is that pictures have far less noise than they used to. One of the talking points of the new high-ISO CCD cameras is that “you no longer need an image editor”. I’m repeating myself from another blog post, but it’s a good point: the fact that images are coming out of cameras so much better today means we can do much more with pictures that are better from the start.
One of the main things we want to do is make our pictures more powerful and more striking. That means adding color, which may mean toning of some sort, but in just about all cases (in one form or another) it means adding saturation.
This generally requires working in different color spaces — most of the time behind the scenes by converting your image on-the-fly to some color space, performing an operation, and converting back — to get a desired result in terms of adding color to your image.
As I mentioned, it’s always been a pet peeve of mine that colors can tend to change to something other than intended in alternate color spaces. Many editors use either CIE LAB or Hunter LAB to increase the saturation of your image. (Sagelight currently uses Hunter LAB or HSL, and the new version has some new technology along these lines, with a color space created specifically for perceptually accurate saturation.) If you’re a Lightroom, Sagelight, et al. user, have you ever noticed that sometimes the brownish areas go yellow and the skies go cyan? I have, and it’s always annoyed me.
I’ve recently spent a lot of time working on some code to deal with that and, really, to find out what is happening in the math that causes this.
Here comes the science
I’ve discovered (or at least realized) a few things about color. One is that the color shift I mentioned happens because the typically-known formulas for converting RGB to XYZ color space (necessary for moving into CIE LAB, Hunter LAB, Yxy, and a number of other color spaces) are not necessarily the best formulas for image processing that is driven by human interaction at a perceptual level. Another reason is that every pixel in your image goes through a lot of calculations just to perform a simple operation. For example, converting a pixel to CIE LAB space and adjusting the saturation involves something along the lines of:
- fRed = fRed > .04045 ? ((fRed+.055)/1.055)^2.4 : fRed/12.92
- fGreen = fGreen > .04045 ? ((fGreen+.055)/1.055)^2.4 : fGreen/12.92
- fBlue = fBlue > .04045 ? ((fBlue+.055)/1.055)^2.4 : fBlue/12.92
- X = fRed*.4124 + fGreen*.3576 + fBlue*.1805
- Y = fRed*.2126 + fGreen*.7152 + fBlue*.0722
- Z = fRed*.0193 + fGreen*.1192 + fBlue*.9505
- X = X / kTristimulusX
- Y = Y / kTristimulusY
- Z = Z / kTristimulusZ
- X = X > .008856 ? X^(1/3) : 7.787*X + 16/116
- Y = Y > .008856 ? Y^(1/3) : 7.787*Y + 16/116
- Z = Z > .008856 ? Z^(1/3) : 7.787*Z + 16/116
- CIE-L* = ( 116 * Y ) - 16
- CIE-a* = 500 * ( X - Y )
- CIE-b* = 200 * ( Y - Z )
Finally, a simplified way to increase the saturation:
- A = A * fSaturationIncrease
- B = B * fSaturationIncrease
All of that just to perform what turns out to be fairly simple math, even in the non-simplified version. And now, we get to convert it all back to the original RGB color space!
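For the curious, the whole round trip can be sketched in a few lines of Python. These are the commonly published sRGB/D65 constants, not Sagelight’s tuned numbers:

```python
import numpy as np

# D65 reference white and the textbook sRGB matrix -- commonly published
# values, not Sagelight's tuned constants.
WHITE = np.array([0.95047, 1.0, 1.08883])
M_RGB2XYZ = np.array([[0.4124, 0.3576, 0.1805],
                      [0.2126, 0.7152, 0.0722],
                      [0.0193, 0.1192, 0.9505]])

def rgb_to_lab(rgb):
    rgb = np.asarray(rgb, dtype=float)
    # undo the sRGB gamma (first non-linear step)
    lin = np.where(rgb > 0.04045, ((rgb + 0.055) / 1.055) ** 2.4, rgb / 12.92)
    # linear RGB -> XYZ, normalized by the white point
    xyz = (M_RGB2XYZ @ lin) / WHITE
    # cube-root non-linearity (second non-linear step)
    f = np.where(xyz > 0.008856, np.cbrt(xyz), 7.787 * xyz + 16.0 / 116.0)
    return np.array([116.0 * f[1] - 16.0,      # L*
                     500.0 * (f[0] - f[1]),    # a*
                     200.0 * (f[1] - f[2])])   # b*

def lab_to_rgb(lab):
    L, a, b = lab
    fy = (L + 16.0) / 116.0
    f = np.array([fy + a / 500.0, fy, fy - b / 200.0])
    xyz = np.where(f ** 3 > 0.008856, f ** 3, (f - 16.0 / 116.0) / 7.787)
    lin = np.linalg.inv(M_RGB2XYZ) @ (xyz * WHITE)
    # re-apply the sRGB gamma
    srgb = np.where(lin > 0.0031308,
                    1.055 * np.clip(lin, 0.0, None) ** (1 / 2.4) - 0.055,
                    12.92 * lin)
    return np.clip(srgb, 0.0, 1.0)

def saturate(rgb, gain):
    L, a, b = rgb_to_lab(rgb)
    return lab_to_rgb([L, a * gain, b * gain])  # scale only the color axes
```

A neutral gray survives this untouched (a* and b* are near zero), while anything with color gets pushed outward in the a*/b* plane.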
There are a significant number of calculations that occur for every pixel of your image just to increase the saturation, most of them (by far) just to convert to the color space and back again. It’s no wonder that the color varies, considering that there are two non-linear transformations occurring on your image before the color is even touched. The real art of programming is to perform these calculations in SSE code so they work in real time. A lot of people have complained that Lightroom is a little sluggish. Well, in Lightroom’s defense, here is why: like Sagelight, Lightroom takes color very seriously, and doing so means performing multiple sets of calculations such as the above (keep in mind that I only wrote out the first half of one of these sets) on every single pixel while you’re moving a slider or curve. My way of dealing with it is to write everything in SSE code to make it ultra-fast, but that also means it can take a little while to get things like the Power Curves out (and it’s also why they blossomed into 25,000 lines of code).
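To show what that vectorization buys, here is an illustrative sketch (numpy’s whole-array operations stand in for SSE; this is not Sagelight’s code). The per-value branch in the sRGB linearization becomes a branchless, array-at-once select, which is essentially what SSE blend/mask instructions do for four floats at a time:

```python
import numpy as np

def srgb_to_linear_scalar(v):
    # the per-channel branch from the formulas above
    return ((v + 0.055) / 1.055) ** 2.4 if v > 0.04045 else v / 12.92

def srgb_to_linear_vectorized(img):
    # Branchless whole-image version: both sides are computed and
    # np.where selects per element -- the same trick SSE code uses,
    # replacing the branch with a compare-and-blend so several pixels
    # move through the pipeline per instruction.
    img = np.asarray(img, dtype=float)
    return np.where(img > 0.04045,
                    ((img + 0.055) / 1.055) ** 2.4,
                    img / 12.92)
```

The two produce identical results; the vectorized form is simply doing the work in bulk instead of one channel at a time.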
Tristimulus values and such
Note that the above formulas are not the actual formulas used in Sagelight. Part of the work I’ve been doing is getting the right formulas to keep the color drift as minimal as possible. The tristimulus values, as well as the conversion multipliers for RGB->XYZ and XYZ->RGB, can be the difference between your image moving toward an ugly yellowish-green hue or not.
Basically, the numbers have to be very precise, but one of the main issues is that these formulas were designed to split the color from the light, and the numbers date back 70+ years. For example, YUV and YCrCb (akin to LAB) were developed for transmitting color television in a way that was still compatible with black-and-white TV. They’re still very useful today, but there are issues to deal with that weren’t necessarily at the forefront at the time.
The main issue, I believe, is that we’re in a world where we want to modify the information in our images in new and significant ways, and that causes problems when these non-linear conversions are applied to the image.
I’d like to talk more about this, but I think I’d rather post some examples….
Getting Tangible Results
The rocks definitely go yellow. It looks nice as an effect, but rocks aren’t typically yellow unless they contain a lot of sulfur. Plus, the yellowish color is not the original hue of the rocks. It turns out that the hue differential between the original image and the saturated version is only 5 degrees, but that is the difference between yellow and brown. These images are highly saturated to demonstrate the issue, but under normal circumstances this issue can be the difference in how your picture turns out. Here is the same result image moved over 5 degrees to match the original color of the rocks:
Now the sky is a little skewed, since I adjusted the entire image by 5 degrees (I could have used the Undo Brush to put the sky back). This demonstrates the problem with adding color to images from a programming point of view. It’s very hard to be exact, and there can be a big difference between what you might want perceptually and what the actual colors in the image are. For example, consider a yellow flower against a nice blue sky. It may look great, but if you deepen the sky in the picture, the same yellow won’t look natural because, as human beings, we’re used to yellows turning toward red as the light drops, since our yellow sun turns orange later in the day.
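For the curious, a “move the hue 5 degrees” adjustment can be sketched for a single pixel with Python’s standard colorsys module. This is an HSV rotation for illustration only; it is not how Sagelight applies the correction:

```python
import colorsys

def rotate_hue(rgb, degrees):
    """Shift a pixel's hue by the given number of degrees.

    rgb: an (r, g, b) tuple in [0, 1].  Illustrative HSV sketch, not
    Sagelight's method.  Saturation and value are left untouched, so
    only the hue angle moves around the color wheel.
    """
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return colorsys.hsv_to_rgb((h + degrees / 360.0) % 1.0, s, v)
```

A 120-degree rotation turns pure red into pure green; a 5-degree nudge is the kind of small shift that separates brown from yellow in the rock example above.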
Here is the same picture saturated using a new color space model developed for Sagelight:
(Addendum: some people have suggested that the result image looks too artificial (because it has a lot of color). It’s all in the eye of the beholder, I suppose, but I do see the point. See the comments section of this article for more discussion on the issue. Also note that I became a little embarrassed about the over-saturated original result and replaced it with the one you see. That’s a subject for a different article: how we can become so calibrated to over-saturation and strong color hues that we often overdo a picture without realizing it.)
The Art of Image Science
The tangible results of the science really get into the art of it. My job, as I view it, is to find a way to make the science easy to work with. Part of that is making hefty algorithms accessible by writing them in SSE code, which can make things upwards of 100 times faster than optimized C++ code. But it’s also about figuring out what to put in and what to leave out. As I said, I’ve been working with color lately, which turned into a bit of a nightmare, as I had to deal with numbers and formulas where any slight deviation (or just using the standard/published versions) would wind up causing color drift in your image.
As I wind down on this part of the next version of the software (I should have a pre-release out any day now), and it becomes more and more tangible, it suddenly becomes worth it:
You may have seen this Impala before. It’s one of the test images I use. There are certain images that are challenging for various reasons, and this is one of them. In this case, my goal has been to find different ways to give it a new paint job. It’s a challenge because it’s hard to bring out the colors in this car without killing the reflections and making the picture look unrealistic.
As I mentioned, part of my job is to find a way to get out the power of image editing in a way that is as easy as possible. With the developments and new technology I’ve been working on, here is a way we can bring the color out in this car (without killing the reflections) with the new Vibrance Curves:
As I mentioned before, I really think manual curves are obsolete for general image toning. But they are very useful for things like this.
With three simple curves done in real time (so you can see what you get as you do it), we can really make a difference with this picture, and then go on to make more adjustments as desired (I would brighten it, for example).
For me, that’s the art and the science of image processing coming together.