Huffman encoding for an image

For the code:
A=imread('xyz.jpg');
A1=rgb2gray(A);
A = A1(:);
[symbols,p]=hist(A,double(unique(A)))
p=p/sum(p)
symbols=symbols/sum(symbols)
[dict,avglen]=huffmandict(symbols,p)
comp=huffmanenco(A,dict)
I am getting this error:
Error using huffmandict (line 164)
Source symbols repeat
Error in new (line 7)
[dict,avglen]=huffmandict(symbols,p)
Please suggest the necessary changes.

3 Comments

Hello @Nidhi
Walter sir has already answered the question, hasn't he? I also pointed the way.
Have you looked at that? p must have the same dimensions as symbols.
But how do I make them the same?
@Nidhi
I have updated the answer; please check it here.


 Accepted Answer

I haven't used huffmandict before, but based on the documentation and the error message you have, I suspect your inputs are wrong.
Double-check your inputs to huffmandict. Make sure symbols contains unique values and p contains the probability of each value. That means
[symbols, p] = hist(A, double(unique(A))) should be
[p, symbols] = hist(A, double(unique(A)))
because the first output of hist is the frequency (which, once normalized, gives the probability) and the second output is the unique bins (the unique symbols).
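Putting that corrected call together, a minimal sketch (assuming the Communications System Toolbox for huffmandict/huffmanenco; A is cast to double because hist requires a single or double input):

```matlab
A = imread('xyz.jpg');                 % any RGB image
A = rgb2gray(A);
A = double(A(:));                      % hist requires single/double input
[p, symbols] = hist(A, unique(A));     % counts first, bin centers second
p = p / sum(p);                        % normalize counts into probabilities
[dict, avglen] = huffmandict(symbols, p);
comp = huffmanenco(A, dict);           % encoded bitstream of 0s and 1s
```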

30 Comments

In that case I am getting a different error:
The Huffman dictionary provided does not have the codes for all the input signals.
Hm, try this:
A = uint8(randi(255, 100, 100, 3)); %imread('xyz.jpg');
A1 = rgb2gray(A);
A = A1(:);
[symbols, ~, idx] = unique(A);
p = histcounts(idx, 1:max(idx)+1);
p = p/sum(p);
[dict, avglen] = huffmandict(symbols, p);
comp = huffmanenco(A, dict)
I am using R2014a, so histcounts() is not present. Can you suggest any other alternative?
OCDER on 11 Oct 2018 (edited)
p = hist(idx, symbols)
Still no luck:
Error using eps
Class must be 'single' or 'double'.
Error in hist (line 116)
edgesc = edges + eps(edges);
[symbols, ~, idx] = unique(A);
p = zeros(size(symbols));
for j = 1:max(idx)
    p(j) = sum(j == idx);  % count occurrences of symbol j
end
Probability of an input symbol cannot be greater than 1
Add this; I forgot to normalize the probability p:
p = p/sum(p)
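For completeness, the loop plus normalization can also be written without a loop using accumarray (which is available in R2014a); a small alternative sketch:

```matlab
[symbols, ~, idx] = unique(A);     % unique pixel values and their indices
p = accumarray(idx, 1);            % count occurrences of each symbol
p = p / sum(p);                    % normalize counts into probabilities
```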
OK, now I got something, but the image is a thin vertical line with the message:
Image is too big to fit on screen; displaying at 0%
Also, 'p' is 256x1 double while 'symbols' is 256x1 uint8.
It's 256x1 because the uint8 numeric format ranges from 0 to 255 (256 values). p should be a double because it's a probability value between 0 and 1. symbols should be whatever the unique values are, here integers from 0 to 255. You could do double(symbols) to make it double.
Your image is a vertical line because of this code you used:
A = A1(:); % makes the image a column vector
To fix it, you'll have to reshape A back to an MxN matrix via
A = reshape(A, size(A1, 1), size(A1, 2))
Where do I have to put this line?
Before you call imshow. Can you paste your current, full code here? Where are you calling imshow?
The current code is:
A = imread('xyz.jpg');
A1 = rgb2gray(A);
A = A1(:);
[symbols, ~, idx] = unique(A);
counts = accumarray(idx, 1); % same counts as the loop below
p = zeros(size(symbols));
for j = 1:max(idx)
    p(j) = sum(j == idx);
end
p = p/sum(p)
[dict, avglen] = huffmandict(symbols, p);
comp = huffmanenco(A, dict);
imshow(comp);
The output of huffmanenco is not an image: it is a double vector of 0s and 1s. You should not be using imshow() on it unless you are prepared for exactly what you got -- an image a single pixel wide and probably extending to the limits of the screen.
The output of huffmanenco is encoded data, not an image.
You'll need to use the Huffman decoder to get the image back:
image > huffmandict > huffmanenco > huffmandeco > image
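That pipeline, sketched end to end (the file name is a placeholder; the reshape at the end turns the decoded vector back into an MxN image):

```matlab
A1 = rgb2gray(imread('xyz.jpg'));        % grayscale image, MxN
[M, N] = size(A1);
A = A1(:);                               % pixels as a column vector
[symbols, ~, idx] = unique(A);
p = accumarray(idx, 1) / numel(A);       % symbol probabilities
[dict, avglen] = huffmandict(symbols, p);
comp = huffmanenco(A, dict);             % encoded bitstream (not an image)
Im = huffmandeco(comp, dict);            % decode back to pixel values
decomp = reshape(uint8(Im), M, N);       % rebuild the image
isequal(decomp, A1)                      % lossless: should be true
imshow(decomp);
```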
huffmandeco(comp,dict) is taking a lot of time. What should I do?
It should be fairly fast for a dictionary of reasonable size. The worst case would probably be when the symbols all have close to the same probability, as that maximizes the average code length and therefore the amount of data to decode.
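To illustrate, compare avglen (the second output of huffmandict) for a uniform versus a skewed symbol distribution; a standalone sketch, not code from this thread:

```matlab
symbols = 1:8;
p_uniform = ones(1, 8) / 8;                          % all symbols equally likely
[~, len_uniform] = huffmandict(symbols, p_uniform);  % 3 bits per symbol
p_skewed = [1/2 1/4 1/8 1/16 1/32 1/64 1/128 1/128]; % dyadic probabilities
[~, len_skewed] = huffmandict(symbols, p_skewed);    % just under 2 bits per symbol
% The encoded stream length is avglen * numel(A), so near-uniform
% pixel histograms give the longest bitstream and the slowest decode.
```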
The dictionary is a 256x2 cell in my case, and it is taking more than 20 minutes for a 40 KB image. Is there any chance that my program is wrong?
I tested on random data that was 199 x 201. The huffmandeco took less than 1 second.
a = imread('xyz.jpg');
imshow(a);
A1 = rgb2gray(a);
imhist(A1);
[M, N] = size(A1);
A = A1(:);
count = 0:255;
p = imhist(A1) / numel(A1);
[dict, avglen] = huffmandict(count, p); % build the Huffman dictionary
comp = huffmanenco(A, dict); % encode the original image with the dictionary you just built
compression_ratio = (M*N*8)/length(comp) % compression ratio: raw bits (8 per pixel) over encoded bits
%% DECODING
Im = huffmandeco(comp, dict); % decode the bitstream
I11 = uint8(Im);
decomp = reshape(I11, M, N);
imshow(decomp);
This is my whole code, and I am not able to find the cause of the execution delay. Can you please provide your code for reference?
Using your code on examples/deeplearning_shared/test.jpg (480 x 640, about 300 KB), the decoding takes about 7 seconds.
I am sorry, but I don't know what the 'examples/deeplearning_shared/test.jpg' image is or how to access it.
Do you have the Image Processing Toolbox?
If you use
imshow('peppers.png')
then does an image show up?
As of R2018b that particular image was moved to the Deep Learning Toolbox (formerly known as Neural Network Toolbox), and a new image test.jpg was added to that directory.
But since peppers.png was present for quite a number of releases, use that. In my test on peppers.png, the decoding took about 4.3 seconds.
function huff
a = imread('xyz.jpg');
imshow(a);
A1=rgb2gray(a);
imhist(A1);
[M, N]=size(A1);
A = A1(:);
count = 0:255;
p = imhist(A1) / numel(A1);
tic; [dict,avglen]=huffmandict(count,p); toc % build the Huffman dictionary
tic; comp= huffmanenco(A,dict); toc %encode your original image with the dictionary you just built
compression_ratio = (M*N*8)/length(comp); % compression ratio: raw bits (8 per pixel) over encoded bits
display(compression_ratio)
%%DECODING
tic; Im = huffmandeco(comp,dict); toc % Decode the code
I11=uint8(Im);
decomp=reshape(I11,M,N);
imshow(decomp);
But when I used peppers.png, it took 6 minutes in total to produce the 'decomp' image, and it was grayscale at that. What should I do?
What are your computer's RAM and CPU? It could be that there isn't enough physical RAM for processing, so you're swapping to the HDD, which is much slower. If that's not the issue, use profile to see which step is taking the most time.
profile on
runCode %run your code here
profview
You need to expect grayscale, since you are doing A1=rgb2gray() and encoding A1.
Is your computer especially slow, or does it have a very small amount of memory? If you are using MS Windows, what does
memory
show?
If you invoke
bench
then where does your system show up in comparison to other systems?
@OCDER
@Walter Roberson
You are using a version of MATLAB no later than R2017b: I can tell because is_a_valid_code was removed from huffmandeco as of R2018a.
Looking at the amount of memory you have, I suspect you are running R2015b or earlier with 32-bit MATLAB.
When I test with peppers.png on R2017b on my system, the decoding takes about 53 seconds.
I recall reporting an inefficiency having to do with is_a_valid_code, namely that it was being called far too often. However, I do not seem to find that in my case list, so perhaps I reported it years ago on a different license.
Anyhow: upgrading to R2018a or later would probably speed things up quite a bit.

