Colour Correction with a colour passport basics
I am measuring the change of vegetation indices over time, so I am taking a picture every day. Unfortunately the light can't be the same every single time, so I would need to do some colour correction on the images before processing the indices. I have an image taken with a colour passport in it (attached) and I was wondering how to go about doing the colour correction. I have attached the data sheet for the colour passport as well. Would it be a case of identifying the region in the photo and finding the ratio between the expected RGB values from the data chart and the values in the photo?
Answers (1)
Image Analyst
on 25 Jun 2020
Edited: Image Analyst
on 25 Jun 2020
1 vote
I do this all the time. Calibrated color measurement is my specialty as you might guess from my avatar.
It looks like you have the x-Rite Color Checker Classic, not the passport. And certainly not the DataColor chart from the PDF you attached - I mean, that one doesn't even have the same number and color of chips on it!
Basically you have to identify the locations of the chart's chips. The best way is to have the chart and camera on a jig where they are in the same location every time; then you can read the RGB values right from known row/column locations. If you can't do that and the chart moves around from image to image, then you have to find the chips. First convert to HSV color space and look for highly saturated regions. To distinguish chips from leaves, look for the black frame of the chart and exclude everything outside of it. Then you'll have to do certain things to make sure you find the centroid of every chip. If you can assume the chart is fairly well aligned with the image edges, you can simply use kmeans() and then add a row for the neutral-colored row, which doesn't show up in the thresholded saturation image. If it's tilted, you'll have to use fitPolynomialRANSAC().
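As a sketch of the fixed-jig case in Python/NumPy (the thread is MATLAB-based, but the idea carries over; the function name, grid spacing, and window size here are made-up examples), reading mean RGB values at known chip locations looks like:

```python
import numpy as np

def sample_chart(img, top_left, pitch, rows=4, cols=6, win=5):
    """Read mean RGB at known chip centers (chart and camera on a fixed jig).
    top_left: (row, col) of the first chip center; pitch: (dy, dx) spacing."""
    means = np.zeros((rows, cols, 3))
    for r in range(rows):
        for c in range(cols):
            y = top_left[0] + r * pitch[0]
            x = top_left[1] + c * pitch[1]
            patch = img[y - win:y + win + 1, x - win:x + win + 1]
            means[r, c] = patch.reshape(-1, 3).mean(axis=0)
    return means

# demo on a synthetic flat mid-gray image
img = np.full((200, 300, 3), 0.5)
chip_rgb = sample_chart(img, top_left=(30, 30), pitch=(45, 45))
```

Averaging over a small window around each center, rather than taking a single pixel, reduces sensor noise in the chip readings.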
The next step is to read the RGB colors of the chips in the order that your reference table has them, which is not necessarily the order in which regionprops() finds them. So there is some reordering you will need to do.
So now you have the RGB values of the chips and you need to develop a transform to convert RGB values into LAB values. I'm attaching a Powerpoint tutorial on that. Basically you should convert the reference LAB to reference XYZ, then pick a model, like cross-channel quadratic, and use least squares to determine a transform to go from RGB to XYZ. You go to XYZ instead of LAB because if you have white in your image that is brighter than the white color chip, it won't predict the correct value if you go straight to LAB (long story, just trust me). Then you can use analytical equations to go from XYZ to LAB. Once you have the image in LAB color space, you can compare it to the time-zero image and get the Delta E color differences.
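A minimal NumPy sketch of that least-squares fit with a cross-channel quadratic model (the function names and the synthetic chip data are illustrative, not from the attached tutorial):

```python
import numpy as np

def quad_features(rgb):
    """Cross-channel quadratic terms: R, G, B, R^2, G^2, B^2, RG, RB, GB, 1."""
    R, G, B = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.column_stack([R, G, B, R*R, G*G, B*B,
                            R*G, R*B, G*B, np.ones_like(R)])

def fit_rgb_to_xyz(chip_rgb, ref_xyz):
    """Least-squares coefficients mapping measured chip RGB to reference XYZ."""
    coeffs, *_ = np.linalg.lstsq(quad_features(chip_rgb), ref_xyz, rcond=None)
    return coeffs  # 10 x 3 matrix, one column per X, Y, Z

# demo with 24 synthetic chips whose XYZ is an exact linear function of RGB
rng = np.random.default_rng(0)
chip_rgb = rng.random((24, 3))
ref_xyz = chip_rgb @ rng.random((3, 3))
coeffs = fit_rgb_to_xyz(chip_rgb, ref_xyz)
pred_xyz = quad_features(chip_rgb) @ coeffs
```

Once fitted on the chips, the same `coeffs` matrix is applied to every pixel of the image to get estimated XYZ values.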
Note, you'll have to do a background correction before you do anything else. This is because the exposure will not be uniform: not only might there be illumination variation over the field of view, but all cameras produce shading on the image, mostly due to the lens. For example, you may have only 80% of the brightness at the corner that you have at the middle, and you have to correct for that. Don't do a background subtraction like novices will recommend to you. You need to do a background division, not subtraction. Why? Well, if the light at the corner is only 0.8 as bright as the middle, what do you need to do? You need to divide it by 0.8 to bring it up to what it should be, right?
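A toy NumPy illustration of that background division (flat-fielding), assuming you have a reference shot of a uniform background (variable names are made up for the example):

```python
import numpy as np

def flatten(img, background):
    """Flat-field by division: scale each pixel up by how dim the background is there."""
    return img * (background.max() / background)

# background shot of a uniform gray sheet; one corner only 80% as bright
background = np.ones((4, 4))
background[0, 0] = 0.8
shaded = 0.5 * background            # a truly uniform scene, as the camera records it
flat = flatten(shaded, background)   # uniform again after the division
```

Subtracting the background instead would leave the corner too dark, because shading is a multiplicative effect, not an additive one.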
Why do you need to do background correction? Well, you don't want different colors for an object depending on where it is in the field of view, do you? If you have the same color and just move it from the middle to the corner, it will have a different RGB value, and if you don't correct for that then you'd get a different calibrated LAB value.
Anyway, look over the attachment for more info. I've left out a lot of real world considerations, even in the attachment. To explain everything to you I'd need about a full day or even two. But this should get you started.
21 Comments
Simon Kirkman
on 25 Jun 2020
Simon Kirkman
on 26 Jun 2020
Image Analyst
on 26 Jun 2020
Sorry, color science is confusing even for those of us in the field. Even experts can get confused since it's a mixture of radiometry and psychology.
It's only OK to use the built-in rgb2xyz() or other built-in color space conversion functions if you don't want true, calibrated color values that will match your spectrophotometer. Those are just "book formulas". So if you measured some red object with your spectrophotometer and found its LAB was (50, 20, 0), with my formulas you will get those values to within 2-5 units. If you use the built-in book formulas, you may get (60, 30, 10) or values that could be off by 20 or 30 units or more. Whatever it gives you, it will be less accurate than if you used the true values to do the calibration. This is because the book formula just goes one way -- you put in your RGB values and it gives you XYZ values according to some hard-coded formula, so those XYZ values might not be the actual, true values of your sample. With my way, the formula is not hard-coded -- it's determined from your data. So the values will be closer to the "true" values than just some book formula would give.
Look at it this way. The XYZ and LAB color values of a material are an intrinsic property of the material. The color of your sample does not change whether it's in a dark closet or bright sunshine. It has a true color determined by its spectrum, like you can measure on a spectrophotometer. You do not want the sample's reported/estimated color values to change just because your illumination level changes. With the book formula, if you brighten your image by doubling your exposure time, then your XYZ value using the built-in formula will be like twice as bright (well, not exactly, but you get the idea). However with my method you will always get an accurate XYZ out, because the formula adjusts. So in super simple terms, just imagine that x was 0.8*red (it's not, but just play along). If your scene gets brighter (but your physical object doesn't change), then the book formulas will give you twice the x value, whereas my formula will adjust so that now x = 0.4*red, and you will get the same x as before, which is the true x, because that's what you trained the formulas with. Which is what should happen, because the color is an intrinsic value of the material, not something that changes with the brightness of your light source.
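That exposure-invariance argument can be demonstrated with a small NumPy toy model (all values synthetic; a plain linear fit stands in for the full quadratic model):

```python
import numpy as np

rng = np.random.default_rng(0)
true_xyz = rng.random((24, 3))   # stand-in "true" chart XYZ values
rgb_1x = 0.8 * true_xyz          # toy camera response at some exposure
rgb_2x = 2.0 * rgb_1x            # same chart, doubled exposure time

def fit(rgb, xyz):
    """Least-squares RGB-to-XYZ transform, re-derived from each image's own chips."""
    coeffs, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
    return coeffs

# the fitted coefficients halve when the exposure doubles, so both images
# recover the same intrinsic XYZ values
xyz_1x = rgb_1x @ fit(rgb_1x, true_xyz)
xyz_2x = rgb_2x @ fit(rgb_2x, true_xyz)
```

A fixed book formula applied to `rgb_2x` would report XYZ values twice as large; the refitted transform does not.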
For the least squares method to determine what the formula is, you put in your measured RGB values from your chips, and the "true" XYZ values from the chart's specification sheet. This will give you the alpha, beta, and gamma coefficients of the equations. You then use those equations to put in any arbitrary RGB value to get out estimated XYZ values, which you then put into analytical formulas, like from http://www.easyrgb.com/en/math.php, to get estimated LAB values.
The presentation talks about 2 conversion schemes: color correction, and color calibration. Color correction is an RGB-to-RGB "repair" of your RGB values to match some "gold standard" RGB values, which can be some RGB values measured at time zero, or RGB values from a different system. This is generally only used if you want to compare two images side-by-side, like you have images taken with different exposure times (so one is brighter and one is darker) but you want to put those into a Powerpoint document and have them look the same brightness. It is not needed for doing scientific analysis of the color, like you want to know the Delta E color difference or the LAB color values.
I generally do not do RGB-to-RGB correction since I nearly always want to measure the color, not match the color to some reference color. So you should do the RGB-to-LAB color calibration, not the RGB-to-RGB color correction. It's not that one is more or less accurate than the other. It's just that they are used for different things. You can do both, but it's usually not necessary. If you want the true LAB color, then why bother going from one RGB image to another as an intermediate stepping stone to get to LAB? You don't need to. It will work fine, or probably better, if you just go from the original RGB to the LAB rather than doing a correction in between.
BYK Gardner is having "office hours" and webinars on color and appearance
"60 Minutes" Webinars
A 45-minute presentation with discussion, questions and answers on a variety of color, appearance and physical test topics.
- June 25 - Elements of a Color Program - note time is 2pm ET
- July 9 - Automotive Reporting with Smart-Chart
- July 14 - Dispersion for Plastic Resin, Raw Materials and Compounders - note time is 2pm ET
- July 16 - Measuring Film Thickness of Coatings
- August 20 - Basic Building Blocks of Color
- September 17 - Get a Clear View - How to Ensure the Quality of Your Transparent Product
- October 8 - Viscosity Basics
- November 12 - Color Systems for Solid & Effect Colors
- December 3 - Effect Colors
Simon Kirkman
on 26 Jun 2020
Image Analyst
on 27 Jun 2020
- Yes, you must do background correction to flatten the image before you calibrate the color (develop the RGB-to-XYZ transform).
- No. Again, you cannot use any "book formulas", whether from Bruce Lindbloom's site, easyrgb.com, or any other site. If you do, you will have lost any ability for your color calibration to compensate for (take into account) changing light levels, and you will be throwing away any possibility of comparing results to instruments such as colorimeters or spectrophotometers. You cannot do that. Again, you need to use least squares to come up with the transform, not use one from a web site or book. I'd use D65/10°. It's pretty much an industry standard (except in publishing, where D50 is popular).
- For converting from XYZ to LAB, you can use the reference chromaticities here: https://www.easyrgb.com/en/math.php
Observer 2° (CIE 1931) 10° (CIE 1964) Note
Illuminant X2 Y2 Z2 X10 Y10 Z10
A 109.850 100.000 35.585 111.144 100.000 35.200 Incandescent/tungsten
B 99.0927 100.000 85.313 99.178 100.000 84.3493 Old direct sunlight at noon
C 98.074 100.000 118.232 97.285 100.000 116.145 Old daylight
D50 96.422 100.000 82.521 96.720 100.000 81.427 ICC profile PCS
D55 95.682 100.000 92.149 95.799 100.000 90.926 Mid-morning daylight
D65 95.047 100.000 108.883 94.811 100.000 107.304 Daylight, sRGB, Adobe-RGB
D75 94.972 100.000 122.638 94.416 100.000 120.641 North sky daylight
E 100.000 100.000 100.000 100.000 100.000 100.000 Equal energy
F1 92.834 100.000 103.665 94.791 100.000 103.191 Daylight Fluorescent
F2 99.187 100.000 67.395 103.280 100.000 69.026 Cool fluorescent
F3 103.754 100.000 49.861 108.968 100.000 51.965 White Fluorescent
F4 109.147 100.000 38.813 114.961 100.000 40.963 Warm White Fluorescent
F5 90.872 100.000 98.723 93.369 100.000 98.636 Daylight Fluorescent
F6 97.309 100.000 60.191 102.148 100.000 62.074 Lite White Fluorescent
F7 95.044 100.000 108.755 95.792 100.000 107.687 Daylight fluorescent, D65 simulator
F8 96.413 100.000 82.333 97.115 100.000 81.135 Sylvania F40, D50 simulator
F9 100.365 100.000 67.868 102.116 100.000 67.826 Cool White Fluorescent
F10 96.174 100.000 81.712 99.001 100.000 83.134 Ultralume 50, Philips TL85
F11 100.966 100.000 64.370 103.866 100.000 65.627 Ultralume 40, Philips TL84
F12 108.046 100.000 39.228 111.428 100.000 40.353 Ultralume 30, Philips TL83
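For example, the XYZ-to-LAB step with the D65/10° reference white from that table can be sketched in NumPy as follows (a translation of the standard analytical formulas, not code from the thread):

```python
import numpy as np

XN, YN, ZN = 94.811, 100.000, 107.304   # D65 / 10 degree row from the table above

def xyz_to_lab(xyz):
    """Analytical XYZ -> CIELAB (XYZ on a 0-100 scale, D65/10 reference white)."""
    def f(t):
        # cube root above the CIE threshold, linear segment below it
        return np.where(t > (6/29)**3, np.cbrt(t), t / (3 * (6/29)**2) + 4/29)
    x, y, z = xyz[..., 0] / XN, xyz[..., 1] / YN, xyz[..., 2] / ZN
    return np.stack([116 * f(y) - 16,           # L*
                     500 * (f(x) - f(y)),       # a*
                     200 * (f(y) - f(z))], axis=-1)

lab_white = xyz_to_lab(np.array([XN, YN, ZN]))  # the reference white maps to (100, 0, 0)
```

Swapping in a different row of the table (e.g. D50) only changes the `XN, YN, ZN` constants.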
Simon Kirkman
on 27 Jun 2020
Image Analyst
on 27 Jun 2020
I'm not familiar with that index. It is the contrast between the red band and the NIR band because it's basically the delta value divided by the average value. It looks like it uses the red channel but I'm not sure what wavelength range. Was it developed for certain satellite wavelength bands? If so, you'd have to do some tricky things.
If it's just the general-purpose red band, which of course is different for every digital camera because they use different sensors, then you'd have to get that red band after calibrating to LAB. So what I'd do in that case is use the conversion from LAB to sRGB via the built-in MATLAB function lab2rgb(). It's okay to use this because of the prior calibration process you went through to get calibrated LAB values.
Even though I/we don't know the spectral emissivity of whatever "red" was used in the formula, it should be okay for comparing different NDVI indices as long as you're always using the same camera, or at least the same model of camera.
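The LAB-to-sRGB step can be sketched in NumPy as below (the standard D65 formulas, analogous to what MATLAB's lab2rgb() computes with defaults; this is not the attached code):

```python
import numpy as np

def lab_to_srgb(lab, white=(95.047, 100.0, 108.883)):
    """CIELAB -> sRGB under D65, using the standard analytical formulas."""
    L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
    fy = (L + 16) / 116
    fx, fz = fy + a / 500, fy - b / 200
    finv = lambda t: np.where(t > 6/29, t**3, 3 * (6/29)**2 * (t - 4/29))
    xyz = np.stack([finv(fx) * white[0], finv(fy) * white[1],
                    finv(fz) * white[2]], axis=-1) / 100
    M = np.array([[ 3.2406, -1.5372, -0.4986],     # XYZ -> linear sRGB
                  [-0.9689,  1.8758,  0.0415],
                  [ 0.0557, -0.2040,  1.0570]])
    lin = np.clip(xyz @ M.T, 0, 1)
    # sRGB gamma encoding
    return np.where(lin <= 0.0031308, 12.92 * lin, 1.055 * lin**(1/2.4) - 0.055)

# the calibrated red band of an image would then be lab_to_srgb(lab_img)[..., 0]
```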
Simon Kirkman
on 29 Jun 2020
Image Analyst
on 29 Jun 2020
You can use your red, I'm just saying that if other people used a much more narrow red band than you, your NDVI index will not match theirs.
No, I don't have a source for the formulas. It's pretty much common sense. It's like asking for a source that says it's okay to fit a quadratic to your data. I guess you could cite the Wikipedia page on least squares fitting if you want.
You cannot use a gray chip to compute background non-uniformities because the gray chip obviously does not cover the entire image. If you know that the gray chip has a gray level of 140, how does that tell you what it might be in the corner, or edge, or middle? Nothing - all it tells you is what the intensity is in the small spot where the chip is. You need to put a uniform gray sheet that covers your entire field of view so you can find out the background intensity everywhere. I'm attaching my demo.
Simon Kirkman
on 29 Jun 2020
Image Analyst
on 30 Jun 2020
I guess I forgot to say the first thing you should do, before anything else, is to white balance your camera. Your images seem to have a strong yellowish cast and are probably not white balanced. Your camera should have a procedure to do that. I'll see if I have time to run your code tomorrow.
Simon Kirkman
on 30 Jun 2020
Image Analyst
on 30 Jun 2020
RGB-to-RGB correction is not needed unless you want to do something like show a gallery of a bunch of photos taken at different times in a Powerpoint presentation or something. It's not needed to determine Delta E color difference.
I'm attaching a little program that will take an image and alter it, then correct it, so you can see how rgb-to-rgb correction works.
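A minimal NumPy sketch of that rgb-to-rgb repair idea (synthetic chip values; a linear gain/offset drift model is assumed purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
ref_rgb = rng.random((24, 3))          # "gold standard" chip RGBs (e.g. time zero)
altered = 0.7 * ref_rgb + 0.1          # the same chips after an exposure/offset drift

# least-squares linear map [R G B 1] -> reference RGB, fitted on the chips only
A = np.column_stack([altered, np.ones(len(altered))])
M, *_ = np.linalg.lstsq(A, ref_rgb, rcond=None)

corrected = A @ M                      # in practice: apply M to every image pixel
```

After the fit, applying `M` to the whole drifted image brings it back to the reference appearance, which is exactly the side-by-side display use case described above.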
Simon Kirkman
on 1 Jul 2020
Image Analyst
on 1 Jul 2020
You need to turn off auto-white balancing if it's the kind that re-balances whenever it thinks it needs to. If you do white balancing, it must only happen when you tell it to, and that is when the entire field of view is white. If you just want color difference (Delta E) then you don't need rgb-to-rgb correction. Indeed, if you don't do any color correction, it might alert you to some problems that you should address, like intensity changes. It's best to control illumination as much as possible, but if your scene is subject to the weather (clouds, time of day, etc.) then there's not much you can do about that. As long as you have a color chart in every image, or a separate image taken immediately before, the calibration should handle (compensate for) any illumination changes.
Simon Kirkman
on 2 Jul 2020
Image Analyst
on 2 Jul 2020
Yes, it's possible to get high delta Es for pixels that have drastically different colors.
Don't use image as the name of your variable, since it's the name of a built-in function.
You need to attach your images, mat files, and functions if I'm to replicate your situation.
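For reference, the simplest Delta E (CIE76) is just the Euclidean distance between two LAB values; a NumPy sketch:

```python
import numpy as np

def delta_e76(lab1, lab2):
    """CIE76 Delta E: Euclidean distance between two LAB colors."""
    diff = np.asarray(lab1, float) - np.asarray(lab2, float)
    return np.sqrt(np.sum(diff**2, axis=-1))

# two pixels with drastically different hues give a large Delta E
de = delta_e76([50, 60, 30], [50, -60, 30])   # -> 120.0
```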
Simon Kirkman
on 2 Jul 2020
Simon Kirkman
on 7 Jul 2020
María da Fonseca
on 26 Aug 2022
Hi!
Does anyone know the selection criteria for the 24 colors of the x-Rite Color Checker Classic?
María
Image Analyst
on 27 Aug 2022
They picked a gray scale selection, for obvious reasons. And then they wanted colors out near the extremes of the color gamut so that's why they have the 4 "pure" RGBMCY colors which are as vivid and saturated as they can get. Then for the other 6 they tried to pick 6 colors from natural scenes and photographs that sort of evenly sampled the 3-D gamut. By the way, x-rite sold the color chart product line to Calibrite.com.