
Colour identification with noisy image

2 views (last 30 days)
Noah Noah on 23 Mar 2022
Answered: Image Analyst on 24 Mar 2022
I have an image and I need to identify its colour using LAB space.
My question is: do I denoise the image before converting it to LAB space (i.e. via applycform),
or do I convert the image to LAB space and then denoise the uint8 image before converting it to double,
or do I convert to LAB space, then to double, and then denoise the double image?
Thank you
  2 Comments
DGM on 23 Mar 2022
I'm not sure what exactly you're intending to do, but you'd probably want to start by getting out of uint8 before anything else. Depending on what you do for denoising, you don't want to have a scenario where the tools you're using are internally converting to floating point and back to uint8 repeatedly. That might be minor, but it seems contrary to the goals of noise reduction.
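For illustration only, a minimal sketch of that order of operations (the file name is a placeholder): convert to double once, up front, so any later filtering and the LAB conversion stay in floating point.
rgb = imread('noisy.png');   % uint8 straight from the file
rgb = im2double(rgb);        % convert to double in [0,1] once, up front
% ... denoise and convert to LAB on the double image from here on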
yanqi liu on 24 Mar 2022
Yes, sir. Maybe upload your image file for analysis.


Answers (1)

Image Analyst on 24 Mar 2022
You don't necessarily need to denoise the image. It really depends on what you're going to do with it. If you're just getting the mean LAB in some region, you're going to average together a bunch of values anyway, so whether they are noisy or noise-free won't matter too much. However, if the noise is so bad that it prevents you from locating the region's boundaries accurately, then you might denoise first. There are tons of denoising algorithms. Probably the simplest is medfilt2(), a non-linear filter that smooths yet retains sharp edges. There are better ones, though they are more complicated, like non-local means, BM3D, Mean Shift, etc.
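As an illustration only (not necessarily this answer's exact workflow), a minimal sketch of median-filtering before the LAB conversion; the file name and region mask are placeholders you would replace with your own.
rgb = im2double(imread('noisy.png'));            % placeholder file name
den = rgb;
for k = 1:3
    den(:,:,k) = medfilt2(rgb(:,:,k), [3 3]);    % 3x3 median filter, per channel
end
lab = rgb2lab(den);                              % convert to CIELAB
mask = true(size(den,1), size(den,2));           % replace with your actual ROI mask
L = lab(:,:,1);  a = lab(:,:,2);  b = lab(:,:,3);
meanLab = [mean(L(mask)), mean(a(mask)), mean(b(mask))]   % mean LAB in the region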
The bigger problem is getting the LAB values. If you merely use the built-in formulas for converting RGB to LAB, you won't get the LAB you'd get from a calibrated, high-tech instrument like a spectrophotometer. This is because if you don't calibrate, you're just using "book formulas", so your values will be arbitrary. Consider this thought experiment. Let's say you take a photo of some object and you also measure its LAB values on a spectrophotometer. Then you use rgb2lab() to convert the photo to CIELAB values. Guess what - your values will be nowhere near the values you got from your spectrophotometer. Now let's say you cut the exposure time of your camera in half so that your image brightness is halved. If you convert to LAB, your L value will be half the prior value. Huh? What happened? Your object is the same - it didn't change color and your lighting stayed the same - but now your LAB values are half of what they were before. So basically you can make the LAB values almost whatever you want.
The only way to get the "true" LAB values is to calibrate. You do this by showing the camera a known reference chart, for example the Calibrite Color Checker Chart, with known, true LAB values. Then you can measure the chart and develop a transform to convert RGB into LAB. That chart should be embedded in the scene - in the field of view. Now your estimated LAB values will be much closer to your true spectrophotometric values.
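As a rough illustration of the calibration idea only: a simple linear fit from measured chart patch colours to their known LAB values. The variable names (patchRGB, refLAB, rgbImage) are hypothetical, and a real calibration workflow may use a more sophisticated (e.g. polynomial) model.
% patchRGB: N-by-3 mean RGB of each chart patch, measured from your image
% refLAB:   N-by-3 known reference LAB values for those same patches
% (both are assumed to exist already; measuring them is not shown)
X = [patchRGB, ones(size(patchRGB,1), 1)];   % linear terms plus an offset column
T = X \ refLAB;                              % 4-by-3 transform via least squares
% apply the fitted transform to every pixel of the photo
rgbCols = reshape(im2double(rgbImage), [], 3);          % rgbImage: your photo
estLAB  = [rgbCols, ones(size(rgbCols,1), 1)] * T;
estLAB  = reshape(estLAB, size(rgbImage));              % estimated LAB image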
