The Science (and Art) of Image Processing

(There are sample images below)

Color Space Blues

It was supposed to be a simple thing.  “Get this in, and you’re done.”  I was thinking it would take three days or so.  As I complete my 25,000th line of code on this “one simple thing,” I can see it was never going to be that simple.

I’m talking about a Sagelight feature called Power Curves.  It allows manually-adjusted curves in a number of color spaces, such as RGB, CIE LAB, Hunter LAB, XYZ, Yxy, HSL, HSB, YCrCb, and some hybrid color spaces that I think are very helpful.  In one sense, offering curves in so many color spaces is overkill — YCrCb, for example, is there just for completeness.  Hunter LAB, however, can be much more useful than CIE LAB in many cases, because it avoids a secondary non-linear color space conversion that can skew colors as they move around in your image.

Colors shifting in your image as you edit in non-RGB spaces has always been a pet peeve of mine.

Using non-RGB color spaces that split the light and the color into separate channels can be important.  For example, it’s better to sharpen your image in just the luminance channel.  Many editors use LAB mode for this, whereas Sagelight’s new version uses XYZ space because it is a purer form of the same approach (well, strictly, the ‘L’ of LAB carries the luminance while X and Z carry the color, which is why it gets complicated).  Also, more accurate color changes can sometimes be obtained in alternate color spaces, yielding more perceptually-based results than the same operations performed in RGB.
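As a sketch of the luminance-only idea (not Sagelight’s actual code — the helper names and the 3-tap blur here are my own simplifications), one way to sharpen only the light is to unsharp-mask a luminance channel and then scale each pixel’s RGB by the ratio of new to old luminance, leaving the chromaticity alone:

```python
# A minimal sketch of luminance-only sharpening. Sharpen only the
# luminance, then scale each pixel's RGB by the ratio of new to old
# luminance, so the color (chromaticity) is untouched.

def luminance(rgb):
    """Rec. 709 weights -- the same Y row used in the RGB->XYZ matrix."""
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def sharpen_luminance(row, amount=0.5):
    """Unsharp-mask a 1-D row of RGB pixels in the luminance channel only."""
    ys = [luminance(px) for px in row]
    out = []
    for i, (px, y) in enumerate(zip(row, ys)):
        # 3-tap box blur as the low-pass (edges clamp to the border pixel)
        blur = (ys[max(i - 1, 0)] + y + ys[min(i + 1, len(ys) - 1)]) / 3.0
        y_new = y + amount * (y - blur)          # boost local contrast in Y only
        scale = y_new / y if y > 1e-6 else 1.0   # leave near-black pixels alone
        out.append(tuple(min(max(c * scale, 0.0), 1.0) for c in px))
    return out
```

Because every channel of a pixel is multiplied by the same scale, the channel ratios (and therefore the hue) survive the sharpening, which is the whole point of doing it in a luminance channel.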

The Science of Color

Lately, I’ve been obsessed with color.  It didn’t start off as an obsession, but once I started to really delve into how we want to edit pictures today, the issue of color in our images was suddenly front and center.  Today, we can do much more with images than we could in the past.  One of the main reasons is that pictures have far less noise than they used to.  One of the talking points of the new high-ISO CCD cameras is that “you no longer need an image editor.”  I’m repeating myself from another blog post, but it’s a good point: the fact that images come out of cameras so much better today means we can do much more, because we’re working with pictures that are better from the start.

One of the main things we want to do is make our pictures more powerful and more striking.  This means adding color, which can mean toning of some sort, but in just about all cases it also means, in one form or another, adding saturation.

This generally requires working in different color spaces — most of the time behind the scenes, by converting your image on-the-fly to some color space, performing an operation, and converting back — to get the desired result.

As I mentioned, it’s always been a pet peeve of mine that colors can shift to something other than intended in alternate color spaces.  Many editors use either CIE LAB or Hunter LAB to increase the saturation of your image.  (Sagelight currently uses Hunter LAB or HSL, and the new version has some new technology along these lines, with a color space created specifically for perceptually accurate saturation.)  If you’re a Lightroom, Sagelight, or similar user, have you ever noticed that sometimes the brownish areas go yellow and the skies go cyan?  I have, and it’s always annoyed me.

I’ve recently spent a lot of time working on some code to deal with that and, really, to find out what is happening in the math that causes this.

Here comes the science

I’ve discovered (or at least realized) a few things about color.  One is that the color shift I mentioned happens because the commonly-published formulas for converting RGB to XYZ color space (necessary for moving into CIE LAB, Hunter LAB, Yxy, and a number of other color spaces) are not necessarily the best formulas for image processing that is driven by human interaction at a perceptual level.  Another reason is that every pixel in your image goes through a lot of calculations just to perform a simple operation.  For example, converting a pixel to CIE LAB space and adjusting the saturation involves something along the lines of:

  • fRed = fRed > .04045 ? ((fRed+.055)/1.055)^2.4 : fRed/12.92
  • fGreen = fGreen > .04045 ? ((fGreen+.055)/1.055)^2.4 : fGreen/12.92
  • fBlue = fBlue > .04045 ? ((fBlue+.055)/1.055)^2.4 : fBlue/12.92
  • X = fRed*.4124 + fGreen*.3576 + fBlue*.1805
  • Y = fRed*.2126 + fGreen*.7152 + fBlue*.0722
  • Z = fRed*.0193 + fGreen*.1192 + fBlue*.9505
  • X = X / kTristimulusX
  • Y = Y / kTristimulusY
  • Z = Z / kTristimulusZ
  • X = X > .008856 ? X^(1/3) : 7.787*X + 16/116
  • Y = Y > .008856 ? Y^(1/3) : 7.787*Y + 16/116
  • Z = Z > .008856 ? Z^(1/3) : 7.787*Z + 16/116
  • CIE-L* = (116 * Y) - 16
  • CIE-a* = 500 * (X - Y)
  • CIE-b* = 200 * (Y - Z)

Finally, a simplified way to increase the saturation:

  • A = A * fSaturationIncrease
  • B = B * fSaturationIncrease

All of that just to perform what turns out to be fairly simple math, even in the non-simplified version.  And now we get to convert it all back to the original RGB color space!
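To make the round trip concrete, here is the whole sequence as a sketch in Python: sRGB in, the a*/b* channels scaled in CIE LAB, and sRGB back out.  It uses the standard published constants (the sRGB matrices and the D65 white point); the function names are mine, and Sagelight’s own tuned formulas differ from these textbook ones.

```python
# The full round trip: sRGB -> XYZ -> CIE L*a*b*, scale a*/b* to add
# saturation, then back again. A sketch with the standard published
# constants (D65 white), not Sagelight's tuned formulas.

def srgb_to_linear(c):
    return ((c + 0.055) / 1.055) ** 2.4 if c > 0.04045 else c / 12.92

def linear_to_srgb(c):
    return 1.055 * c ** (1 / 2.4) - 0.055 if c > 0.0031308 else 12.92 * c

# D65 reference white (the tristimulus values), 2-degree observer
WHITE = (0.95047, 1.00000, 1.08883)

def f(t):  # L*a*b* companding: cube root above the linear toe
    return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116

def f_inv(t):
    return t ** 3 if t ** 3 > 0.008856 else (t - 16 / 116) / 7.787

def saturate_lab(rgb, gain):
    r, g, b = (srgb_to_linear(c) for c in rgb)
    x = (0.4124 * r + 0.3576 * g + 0.1805 * b) / WHITE[0]
    y = (0.2126 * r + 0.7152 * g + 0.0722 * b) / WHITE[1]
    z = (0.0193 * r + 0.1192 * g + 0.9505 * b) / WHITE[2]
    fx, fy, fz = f(x), f(y), f(z)
    L, A, B = 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)
    A *= gain                        # the "simple" saturation step
    B *= gain
    # ...and all the way back again
    fy = (L + 16) / 116
    fx, fz = fy + A / 500, fy - B / 200
    x, y, z = f_inv(fx) * WHITE[0], f_inv(fy) * WHITE[1], f_inv(fz) * WHITE[2]
    r = 3.2406 * x - 1.5372 * y - 0.4986 * z
    g = -0.9689 * x + 1.8758 * y + 0.0415 * z
    b = 0.0557 * x - 0.2040 * y + 1.0570 * z
    return tuple(min(max(linear_to_srgb(max(c, 0.0)), 0.0), 1.0)
                 for c in (r, g, b))
```

With the gain set to 1.0 a pixel should come back essentially unchanged; set to 0.0 it comes back as neutral gray — a handy sanity check on the conversion itself.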

There are a significant number of calculations performed on every pixel of your image just to increase the saturation, most of them (by far) just to convert to the color space and back again.  It’s no wonder that the color varies, considering that two non-linear transformations occur on your image before the color is even touched.  The real art of the programming is performing these calculations in SSE code so they work in real time.  A lot of people have complained that Lightroom is a little sluggish.  Well, in Lightroom’s defense, here is why: like Sagelight, Lightroom takes color very seriously, and doing so means performing multiple sets of calculations like the ones above (keep in mind that I only wrote out the first half of one of these sets) on every single pixel while you’re moving a slider or curve.  My way of dealing with it is to write everything in SSE code to make it ultra-fast, but that also means it can take a little while to get things like the Power Curves out (and is also why it blossomed into 25,000 lines of code).
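That per-pixel cost is exactly why vectorization matters.  Sagelight does it with SSE; as a rough Python analogue (a sketch assuming numpy and a floating-point image, not Sagelight’s implementation), the same RGB-to-XYZ step can be lifted from a per-pixel loop to one whole-image matrix multiply:

```python
# The RGB->XYZ step from the list above, applied to a whole image at
# once -- the Python analogue of processing pixels in SIMD batches.
# Assumes numpy and a float image of shape (H, W, 3).
import numpy as np

RGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                       [0.2126, 0.7152, 0.0722],
                       [0.0193, 0.1192, 0.9505]])

def image_to_xyz(img):
    """Convert a linear-RGB image (H, W, 3) in a single matrix multiply."""
    return img @ RGB_TO_XYZ.T
```

A white pixel (1, 1, 1) maps to roughly (0.9505, 1.0, 1.089), the D65 white point, which is a quick way to check the matrix was entered correctly.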

Tristimulus values and such

Note that the formulas above are not the actual formulas used in Sagelight.  Part of the work I’ve been doing is getting the right formulas to keep the color drift as small as possible.  The tristimulus values, as well as the conversion multipliers for RGB->XYZ and XYZ->RGB, can be the difference between your image drifting toward an ugly yellowish-green hue or holding its color.

Basically, the numbers have to be very precise, but one of the main issues is that these formulas were designed to split the color from the light, and the numbers date back 70+ years.  For example, YUV and YCrCb (akin to LAB) were developed for transmitting color television in a way that remained compatible with black-and-white TV.  They’re still very useful today, but there are issues to deal with that weren’t necessarily at the forefront at the time.

The main issue, I believe, is that we’re now in a world where we want to modify the information in our images in new and significant ways, and that causes problems when these non-linear conversions are applied to the image.

I’d like to talk more about this, but I think I’d rather post some examples….

Getting Tangible Results

Original Image

Let’s look at this image.  It’s a nice picture.  It could use some work, most notably some color and a little darkening.  I’m not going to darken it here; I want to concentrate on the color.  It just happens to be a good example of the yellowing I was talking about.  Let’s apply some XYZ/LAB-based saturation:

XYZ/LAB-based saturation (Lightroom) (purposely highly saturated)

The rocks definitely go yellow.  It looks nice as an effect, but rocks aren’t typically yellow unless they contain a lot of sulfur.  Plus, the yellowish color is not the original hue of the rocks.  It turns out that the hue differential between the original image and the saturated version is only 5 degrees, but that is the difference between yellow and brown.  These images are highly saturated to demonstrate the issue, but under normal circumstances this shift can be the difference in how your picture turns out.  Here is the same result image moved over 5 degrees to match the original color of the rocks:

Previous saturated Image hue-adjusted -5 degrees to match original rock hue

Now the sky is a little skewed, since I adjusted the entire image by 5 degrees (I could have used the Undo Brush to put the sky back).  This demonstrates the problem with adding color to images from a programming point of view.  It’s very hard to be exact, and there can be a big difference between what you might want perceptually and what the actual colors in the image might be.  For example, consider a yellow flower against a nice blue sky.  It may look great, but if you deepen the sky in the picture, the same yellow won’t look natural: as human beings, we’re used to yellows turning toward the red as the light drops, because our yellow sun turns orange later in the day.
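The 5-degree correction is just a hue rotation.  Here is a sketch of the idea using Python’s standard-library colorsys module (a real editor would do this per pixel, and likely in a better-behaved space than HSL):

```python
# Rotate a pixel's hue by some number of degrees, as in the -5 degree
# fix described above. A sketch using the stdlib colorsys module.
import colorsys

def shift_hue(rgb, degrees):
    """Rotate hue in HSL; colorsys stores hue as a 0..1 fraction."""
    h, l, s = colorsys.rgb_to_hls(*rgb)
    return colorsys.hls_to_rgb((h + degrees / 360.0) % 1.0, l, s)
```

For example, shift_hue(rgb, -5) applies the same 5-degree move back toward brown to a single pixel.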

Here is the same picture saturated using a new color space model developed for Sagelight:

Original Image Saturated with New Color Space Model (Sagelight) (purposely highly saturated)

This image is very close to the hue-adjusted image above, where I moved the rocks back to their original color.  Now, with very little effort, I can tone and otherwise work on this image to make it into whatever I want.  Just deepening the sky here would bring out the rest of the picture (which is easily done with masking).

The Result

Now that I’ve been able to work with the colors more precisely, here is something I could do with the image by saturating it with better color (and not oversaturating as I did for the purpose of example) and then performing a few minor adjustments… if I wanted to make it look like it was taken later in the afternoon, for example:
(You might want to cleanse your palate by not looking at the above pictures, i.e. scroll this page down until they disappear.  Since they are highly saturated, the result may look too plain unless you haven’t seen the above pictures for a couple of seconds.)

Sample Result

(Addendum: some people have suggested that the result image looks too artificial (because it has a lot of color).  It’s all in the eye of the beholder, I suppose, but I do see the point.  See the comments section of this article for more discussion of the issue.  Also note that I became a little embarrassed about the over-saturated original result and replaced it with the one you see — that’s a subject for a different article, about how we can become so calibrated to over-saturation and shifted color hues that we often overdo a picture without realizing it.)

The Art of Image Science

The tangible results of the science are really where the art comes in.  My job, as I view it, is to find a way to make the science easy to work with.  Part of that is making hefty algorithms accessible by writing them in SSE code, which can make things upwards of 100 times faster than optimized C++ code.  But it’s also about figuring out what to put in and what to leave out.  As I said, I’ve been working with color lately, which turned into a bit of a nightmare, because I had to deal with numbers and formulas where any slight deviation (or just using the standard/published versions) would wind up causing color drift in your image.

As I wind down on this part of the next version of the software (I should have a pre-release out any day now), and it becomes more and more tangible, it suddenly becomes worth it:

Original Image

You may have seen this Impala before.  It’s one of the test images I use.  Certain images are challenging for various reasons, and this is one of them.  In this case, my goal has been to find different ways to give it a new paint job.  It’s a challenge because it’s hard to bring out the colors in this car without killing the reflections and making the picture look unrealistic.

As I mentioned, part of my job is to find a way to deliver the power of image editing in a way that is as easy as possible.  With the developments and new technology I’ve been working on, here is how we can bring out the color in this car (without killing the reflections) with the new Vibrance Curves:

Image after using Vibrance Curves in Sagelight

As I mentioned before, I really think manual curves are obsolete for general image toning.  But, they are very useful for things like this.

With three simple curves done in real time (so you can see what you get as you do it), we can really make a difference with this picture, and then go on and make more adjustments as desired (I would brighten it, for example).

For me, that’s the art and the science of image processing coming together.

12 thoughts on “The Science (and Art) of Image Processing”

  1. Wow!
    You are obsessed, stay that way for our sake.

    I am sure that all other Sagelight users are also obsessed with this incredible editing software.

    May you never be satisfied. LOL


  2. Hello Rob,

    this looks seriously demanding and, of course, potentially very, very helpful.
    I hope you have some people giving you some good feedback regarding the implementation of such new technology – how to make it intuitive and user friendly.

  3. Great to hear about all the work you are putting in.

    However, as a keen rock climber, I don’t at all think the rock edits look realistic. I assume the rock is light sandstone? Sandstone can look orange when it has a lot of iron in it, but that doesn’t seem to be the case here.

    I tried some fill light, so I could see more rock in the very dark areas; turned down the highlights, to bring out the sky; then some smart contrast. To me, it gives much more realistic looking rock.

    Removing saturation from the rock, rather than adding it, made it look even more realistic. Saturating the sky more does make it head toward cyan, as you suggest, and it required about a 30-degree color shift to bring it back to sky blue.

    I also wonder how much variation there is in the color curves of the eye? I know that 10% of men have very poor red/green response.

    • That’s why it’s hard to work with deepening color, because it’s always subjective. I probably should have used another example, such as a portrait or something that we all tend to agree more on in terms of hues.

      Maybe I should replace the picture. I think (somewhere) someone made the point that I find to be true all the time: what looks good to me doesn’t always look good to another person. I mentioned somewhere that I can go through 100 pictures for every example I actually post, for exactly that reason. My criterion for whether something has a cast is whether the color cast extends onto multiple identifiable objects. For example, if you move the picture toward the red to bring out skin tones, it can make the other objects in the picture go a little yellowish, which creates a cast and causes a loss of depth in the picture.

      Also, keep in mind that I highly saturated the picture on purpose. In the end, I was going for something that was more of a later-in-the-day look, as the rocks would almost certainly turn toward the red (unless they were filled with sulfur) later in the day.

      I’ve seen rocks both colors, but the main issue in the article was that the end-result was in concert with the actual hue of the rocks in the original picture, whereas the yellow wasn’t (I even kind of liked the yellow cast, in a way).

      • I happened to wander through a photography exhibition a few days ago. All of the fellow’s pics were super saturated. The effect looked fantastic, largely because of the type of scenes he had selected … tropical; underwater; waves with surfers.

        Your pic of the old car looks good too.

        I think it is important to be selective if extra vibrancy is to be used.

  4. Hi Rob,

    I did just check the rock images on a second computer (a PC with a calibrated monitor) and I have to agree with dramcam that the rocks look quite unrealistic (perhaps they should be more artsy? OK), but in my eyes they are still way too yellow.

    • Yeh, maybe. I noticed that when I put the picture up, and was probably swayed by the original yellowing, because I didn’t want to go too red, either.

      But, that’s what I’m working with — how to bring out the colors in an image without making them too off-base, so this is great feedback. As soon as I get this pre-release out, I will be posting (on the discussion board) a number of examples, and it would be great to get some commentary on what is going right with them and what is going wrong, so I can really zero-in on the right formulas.

      • I just looked at the original, and it points out an issue I’ve worked on with Sagelight since the beginning:

        color casts.

        Sometimes I think I sound like a broken record on this subject, but after looking at the picture, it has a small yellow cast (most pictures do), which becomes more pronounced when saturated. That can be good when you want a picture that seems more filled with color but, definitely according to some, not so much in this picture. The squirrel picture (below) is the same: it started off with a yellow cast, which became more intense with both saturation methods.

        (again, the original result in the article was saturated more highly on purpose to make the point about color shifting with different saturation algorithms)

  5. Thanks for the very informative post. I just wished I understood it all as well as you do.

    About the yellow rock. It all comes down to what you like. I tweak a picture and get it where I like it, then my wife says, “I liked it the way it was better.” I usually very nicely say, “Hey, who is the artist here?” At the end of the day you have to make it look how you like it, because you can’t please everyone. Also, when it comes to colors on the monitor, the proof is in the print.

    Anyway I think the rock looks just fine even though it is quite yellow, but I like vivid colors.

    You can have the rock though, and I will take the Impala. Rob, can you give it a pearlescent white paint job and send it my way? I mean the actual car, not the picture. 😉

    • I was making more of a technical point: the saturation changed the hue, when the definition of saturation is to increase the color without changing the hue.

      I mentioned in the article that I kind of liked the effect, but I guess, for lack of space, I didn’t get to some of the points I wanted.

      One of those is the issue that when the color drift happens on a constant basis, then many of your pictures are moving towards the yellow (or cyan in the case of blues), so this tends to be the case whether you like it or not.

      I read an article recently that reviewed Sagelight and a few other editors — the author is a Lightroom user and only started looking at other editors because of what he called the “Lightroom Look” (I don’t pin this issue on Lightroom by any means; it’s just that Lightroom is 1,000 times better known than anything else, so it gets the good and the bad here). Basically, his pictures were starting to have the same recognizable look to them.

      I started realizing that I wanted to avoid the “Sagelight Look”. I realized that there is a central issue in image processing programs that becomes more and more important as they become more powerful: make that power as simple and as accessible as possible.

      And I think that’s part of the problem. Lightroom can do many, many things, but the interface really focuses on just a few: saturation, clarity, vibrance, and a couple of others. As Sagelight grows more powerful, I see the same sort of issues happening — the most prominent and easily-accessible features are the ones used the most.

      So, if your saturation (and vibrance) are always moving your image to a different hue (in certain color areas), then you’ve got a look that is associated with that algorithm/product. Whereas, if it can be more exact, then it doesn’t really matter.

      I wish I could post images in these comments. Maybe I will do an addendum to the article with a couple more shots to make the point, but I don’t want to look too focused on the issue… though it did take a month of research…

      It turns out I can post images… Here is another example:

      Original Image of Squirrel

      Here it is in the XYZ/LAB version:

      XYZ/LAB version (Lightroom)

      Here it is with the same essential operation performed with a saturation algorithm designed to keep the original hue as close as possible (it turns out that this is difficult to do, and not necessarily always the best thing anyway — XYZ-based saturation is really great under many circumstances).

      Sagelight Saturation

      Keep in mind that these are both raw edits (i.e. one operation performed on the file globally, with no other alteration). In the Sagelight case, for example, I would probably reduce the saturation effect in his paw and the surrounding fur, and then Dodge & Burn (or otherwise darken) the surrounding area. But, the yellowed version is going to need more work to remove the yellow cast that was just added. These images are also purposely saturated a little more (though not much) to make a point.

      (my synopsis on the Sagelight version: It actually has a yellow cast to it, but also has a little more red than I’d like — this is because the original picture has a yellow cast that is hard to see. Normally, I would have auto-balanced the picture first. The Sagelight version has more depth because it doesn’t have nearly as much of a yellow color-cast to it. It’s fairly easy to deal with the over-saturation (by either backing off, darkening the picture first, or adjusting the red later), much more so than the color cast).

      Again, the XYZ/LAB version goes yellow — everywhere. The idea is that (as you mentioned) this version may look better to some people depending on their tastes (it doesn’t to me, personally), but since the yellow casting happens on a regular basis, it can be an issue.

      Plus, if I saturate a picture, I want to make sure it is going to go in the general direction of the colors already in the picture. As I mentioned in the article, as we can do more and more with our pictures, this is one of the things we want to do — not universally, but being able to add powerful colors can be a good thing. In these examples, I’m performing the operation on the entire picture to make a point. But in reality, I might just do the background or foreground, or specific objects (like eyes in a portrait, for example) to bring out specific areas. In my experience, adding a lot of color like this to the entirety of the picture does tend to make it look unrealistic, whereas applying it only to specific areas can really bring out the depth.

      The main thing, for me (and through my experience), is that when we start off with an image with as little color drift as possible, we end up with more choices as we continue editing, rather than having that cast propagate, which (in my experience) it invariably does. So the idea here is that the corrected version gives a better starting point, even with a lower amount of initial saturation (which still goes to the yellow in the XYZ-based version).

      Oh, here is the pearl version of the Impala. Sorry, I couldn’t get a real one. ha.

      Sort-of-pearl Impala

      Sort-a-kinda pearl Impala. I guess it depends on how you define “pearl” (i.e. more white or more beige/yellow). But, that goes to your point. 🙂

  6. “to bring out specific areas” … speaking of which, I’ve been doing a lot of bokeh lately (large aperture, large lens, to emphasize the subject with a blurred background). However, my camera doesn’t allow anywhere near as much of the effect as the top professionals’ big cameras.

    Is it possible to do this automatically with software? That is, to make blurred areas more blurred but leave sharp areas untouched?
