# Huffman Coding Compression Algorithm

Huffman Coding (also known as Huffman Encoding) is an algorithm for lossless data compression, and it forms the basic idea behind file compression. This post covers fixed-length and variable-length encoding, uniquely decodable codes, the prefix rule, and the construction of a Huffman tree.

We already know that every character is stored as a sequence of 0’s and 1’s using 8 bits. This is called `fixed-length encoding`, as each character uses the same fixed number of bits of storage.

#### Given a text, how can we reduce the amount of space required to store a character?

The idea is to use `variable-length encoding`. We can exploit the fact that some characters occur more frequently than others in a text to design an algorithm that represents the same piece of text using fewer bits. In variable-length encoding, we assign a variable number of bits to each character depending on its frequency in the given text: some characters might end up taking a single bit, some two bits, some three, and so on. The problem with variable-length encoding lies in its decoding.

#### Given a sequence of bits, how can we decode it uniquely?

Let’s consider the string “aabacdab”. It has 8 characters and uses 64 bits of storage under fixed-length encoding. Note that the frequencies of the characters ‘a’, ‘b’, ‘c’ and ‘d’ are 4, 2, 1 and 1, respectively. Let’s try to represent “aabacdab” using fewer bits, exploiting the fact that ‘a’ occurs more frequently than ‘b’, which in turn occurs more frequently than ‘c’ and ‘d’. We start by arbitrarily assigning the single-bit code 0 to ‘a’, the 2-bit code 11 to ‘b’, and the 3-bit codes 100 and 011 to ‘c’ and ‘d’, respectively.

a 0
b 11
c 100
d 011

Using the above codes, the string aabacdab will be encoded to 00110100011011 (0|0|11|0|100|011|0|11). But the real problem lies in decoding. If we try to decode the string 00110100011011, the result is ambiguous, since it can also be decoded as,

0|0|11|0|100|0|11|011   aabacabd

and so on.

To prevent ambiguity in decoding, we will ensure that our encoding satisfies what’s called the `prefix rule`, which results in `uniquely decodable codes`. The prefix rule states that no code is a prefix of another code. (By code, we mean the bits used for a particular character.) In the above example, 0 is a prefix of 011, which violates the prefix rule. If our codes satisfy the prefix rule, the decoding will be unambiguous (and vice versa).

Let’s consider the above example again. This time we assign codes that satisfy the prefix rule to the characters ‘a’, ‘b’, ‘c’ and ‘d’.

a   0
b   10
c   110
d   111

Using the above codes, the string aabacdab will be encoded to 00100110111010 (0|0|10|0|110|111|0|10). Now we can uniquely decode 00100110111010 back to our original string aabacdab.

#### Huffman Coding –

Now that we are clear on variable-length encoding and the prefix rule, let’s talk about Huffman coding.

The technique works by creating a binary tree of nodes. A node can be either a leaf node or an internal node. Initially, all nodes are leaf nodes, each containing a character and its weight (frequency of appearance). Internal nodes contain a weight and links to two child nodes. As a common convention, bit ‘0’ represents following the left child and bit ‘1’ represents following the right child. A finished tree has n leaf nodes and n-1 internal nodes. The tree should be built only from characters that actually occur in the text; including unused characters would make the code lengths worse than optimal.

We will use a priority queue to build the Huffman tree, where the node with the lowest frequency has the highest priority. Below are the complete steps –

1. Create a leaf node for each character and add them to the priority queue.

2. While there is more than one node in the queue:

a. Remove the two nodes of highest priority (lowest frequency) from the queue.

b. Create a new internal node with these two nodes as children and with
frequency equal to the sum of the two nodes’ frequencies.

c. Add the new node to the priority queue.

3. The remaining node is the root node and the tree is complete.

Consider a text consisting only of the characters ‘A’, ‘B’, ‘C’, ‘D’ and ‘E’, with frequencies 15, 7, 6, 6 and 5, respectively. The path from the root to any leaf node stores the optimal prefix code (also called the Huffman code) corresponding to the character associated with that leaf node.

Below are the C++ and Java implementations of the Huffman coding compression algorithm:

## C++

Output:

Huffman Codes are :

h 111110
f 11110
i 1110
t 11011
l 110100
o 1100
n 1011
r 10101
d 0010
g 0001
H 00001
u 00000
c 0011
a 010
e 110101
011
m 1000
. 111111
s 1001
p 10100

Original string was :
Huffman coding is a data compression algorithm.

Encoded string is :
00001000001111011110100001010110110011110000101110101100010111110100101101001100100101101101001100111100100010100101011101011001100111101100101101101011010000011100101011110110111111101000111111

Decoded string is:
Huffman coding is a data compression algorithm.

## Java

Output:

Huffman Codes are :

100
a 010
c 0011
d 11001
e 110000
f 0000
g 0001
H 110001
h 110100
i 1111
l 101010
m 0110
n 0111
. 10100
o 1110
p 110101
r 0010
s 1011
t 11011
u 101011

Original string was :
Huffman coding is a data compression algorithm.

Encoded string is :
11000110101100000000011001001111000011111011001111101110001100111110111000101001100101011011010100001111100110110101001011000010111011111111100111100010101010000111100010111111011110100011010100

Decoded string is:

Huffman coding is a data compression algorithm.

Note – The input string occupies 47 × 8 = 376 bits of storage, but our encoded string takes only 194 bits, i.e. about 48% data compression. To keep the program readable, the encoded string is stored in a C++ string object in the above program.

Since an efficient priority queue requires O(log n) time per insertion, and the Huffman tree is a full binary tree with n leaves and therefore 2n-1 nodes, this algorithm runs in O(n log n) time, where n is the number of distinct characters.
