Huffman Coding (also known as Huffman Encoding) is an algorithm for lossless data compression, and it forms the basic idea behind file compression. This post talks about fixed-length and variable-length encoding, uniquely decodable codes, prefix rules, and the construction of a Huffman tree.

We already know that every character is stored as a sequence of 0s and 1s, using 8 bits. This is called `fixed-length encoding`, as every character uses the same fixed number of bits of storage.

**Given a text, how can we reduce the amount of space required to store a character?**

The idea is to use `variable-length encoding`. We can exploit the fact that some characters occur more frequently than others in a text to design an algorithm that represents the same piece of text using fewer bits. In variable-length encoding, we assign a variable number of bits to each character, depending on its frequency in the given text. So some characters might end up encoded in one bit, some in two bits, some in three bits, and so on. The problem with variable-length encoding lies in its decoding.

**Given a sequence of bits, how can we decode it uniquely?**

Let's consider the string “aabacdab”. It has 8 characters and takes 64 bits of storage under fixed-length encoding. Note that the frequencies of the characters ‘a’, ‘b’, ‘c’ and ‘d’ are 4, 2, 1 and 1 respectively. Let's try to represent “aabacdab” using fewer bits by exploiting the fact that ‘a’ occurs more frequently than ‘b’, and ‘b’ occurs more frequently than ‘c’ and ‘d’. We start by assigning the single-bit code 0 to ‘a’, the 2-bit code 11 to ‘b’, and the 3-bit codes 100 and 011 to ‘c’ and ‘d’ respectively.

a 0

b 11

c 100

d 011

So the string aabacdab will be encoded as 00110100011011 (0|0|11|0|100|011|0|11) using the above codes. But the real problem lies in decoding. If we try to decode the string 00110100011011, the result is ambiguous, as it can be decoded as

0|0|11|0|100|0|11|011 aabacabd

0|011|0|100|0|11|0|11 adacabab

and so on.

To prevent ambiguity in decoding, we will ensure that our encoding satisfies the `prefix rule`, which results in `uniquely decodable codes`. The prefix rule states that no code is a prefix of another code. By code, we mean the bits used for a particular character. In the above example, 0 is a prefix of 011, which violates the prefix rule. If our codes satisfy the prefix rule, the decoding will be unambiguous (and vice versa).

Let's consider the above example again. This time we assign codes that satisfy the prefix rule to the characters ‘a’, ‘b’, ‘c’ and ‘d’.

a 0

b 10

c 110

d 111

Using the above codes, the string aabacdab will be encoded as 00100110111010 (0|0|10|0|110|111|0|10). Now we can uniquely decode 00100110111010 back to our original string aabacdab.

**Huffman Coding –**

Now that we are clear on variable-length encoding and the prefix rule, let's talk about Huffman coding.

The technique works by creating a binary tree of nodes. A node can be either a leaf node or an internal node. Initially, all nodes are leaf nodes; each contains the character itself and its weight (frequency of appearance). An internal node contains a weight and links to its two child nodes. As a common convention, bit ‘0’ represents following the left child and bit ‘1’ represents following the right child. A finished tree has n leaf nodes and n-1 internal nodes, where n is the number of distinct characters. The Huffman tree should omit characters that do not appear in the text, so that the code lengths stay optimal.

We will use a priority queue to build the Huffman tree, where the node with the lowest frequency has the highest priority. Below are the complete steps –

1. Create a leaf node for each character and add it to the priority queue.

2. While there is more than one node in the queue:

a. Remove the two nodes of highest priority (lowest frequency) from the queue.

b. Create a new internal node with these two nodes as children and with frequency equal to the sum of the two nodes’ frequencies.

c. Add the new node to the priority queue.

3. The remaining node is the root node, and the tree is complete.

Consider a text consisting only of the characters ‘A’, ‘B’, ‘C’, ‘D’ and ‘E’, with frequencies 15, 7, 6, 6 and 5 respectively. The figures below illustrate the steps followed by the algorithm –

The path from the root to any leaf node stores the optimal prefix code (also called the Huffman code) of the character associated with that leaf node.

## C++

```cpp
#include <iostream>
#include <string>
#include <queue>
#include <unordered_map>
#include <vector>
using namespace std;

// A Tree node
struct Node
{
    char ch;
    int freq;
    Node *left, *right;
};

// Function to allocate a new tree node
Node* getNode(char ch, int freq, Node* left, Node* right)
{
    Node* node = new Node();
    node->ch = ch;
    node->freq = freq;
    node->left = left;
    node->right = right;
    return node;
}

// Comparison object to be used to order the heap
struct comp
{
    bool operator()(Node* l, Node* r)
    {
        // the highest priority item has the lowest frequency
        return l->freq > r->freq;
    }
};

// traverse the Huffman Tree and store Huffman codes in a map
void encode(Node* root, string str, unordered_map<char, string> &huffmanCode)
{
    if (root == nullptr)
        return;

    // found a leaf node
    if (!root->left && !root->right)
        huffmanCode[root->ch] = str;

    encode(root->left, str + "0", huffmanCode);
    encode(root->right, str + "1", huffmanCode);
}

// traverse the Huffman Tree and decode the encoded string
void decode(Node* root, int &index, string str)
{
    if (root == nullptr)
        return;

    // found a leaf node
    if (!root->left && !root->right)
    {
        cout << root->ch;
        return;
    }

    index++;

    if (str[index] == '0')
        decode(root->left, index, str);
    else
        decode(root->right, index, str);
}

// Builds the Huffman tree, generates the codes, and decodes the given text
void buildHuffmanTree(string text)
{
    // count the frequency of appearance of each character
    // and store it in a map
    unordered_map<char, int> freq;
    for (char ch: text)
        freq[ch]++;

    // Create a priority queue to store live nodes of the Huffman tree
    priority_queue<Node*, vector<Node*>, comp> pq;

    // Create a leaf node for each character and add it
    // to the priority queue
    for (auto it: freq)
        pq.push(getNode(it.first, it.second, nullptr, nullptr));

    // do till there is more than one node in the queue
    while (pq.size() != 1)
    {
        // Remove the two nodes of highest priority
        // (lowest frequency) from the queue
        Node* left = pq.top(); pq.pop();
        Node* right = pq.top(); pq.pop();

        // Create a new internal node with these two nodes as children
        // and with frequency equal to the sum of the two nodes'
        // frequencies. Add the new node to the priority queue.
        int sum = left->freq + right->freq;
        pq.push(getNode('\0', sum, left, right));
    }

    // root stores a pointer to the root of the Huffman tree
    Node* root = pq.top();

    // traverse the Huffman tree and store the Huffman codes
    // in a map; also print them
    unordered_map<char, string> huffmanCode;
    encode(root, "", huffmanCode);

    cout << "Huffman Codes are :\n" << endl;
    for (auto it: huffmanCode)
        cout << it.first << " " << it.second << endl;

    cout << "\nOriginal string was :\n" << text << endl;

    // print the encoded string
    string str = "";
    for (char ch: text)
        str += huffmanCode[ch];

    cout << "\nEncoded string is :\n" << str << endl;

    // traverse the Huffman tree again and this time
    // decode the encoded string, until every bit is consumed
    int index = -1;
    cout << "\nDecoded string is: \n";
    while (index < (int)str.size() - 1)
        decode(root, index, str);
}

// main function
int main()
{
    string text = "Huffman coding is a data compression algorithm.";
    buildHuffmanTree(text);
    return 0;
}
```

**Output:**

Huffman Codes are :

h 111110

f 11110

i 1110

t 11011

l 110100

o 1100

n 1011

r 10101

d 0010

g 0001

H 00001

u 00000

c 0011

a 010

e 110101

(space) 011

m 1000

. 111111

s 1001

p 10100

Original string was :

Huffman coding is a data compression algorithm.

Encoded string is :

00001000001111011110100001010110110011110000101110101100010111110100101101001100100101101101001100111100100010100101011101011001100111101100101101101011010000011100101011110110111111101000111111

Decoded string is:

Huffman coding is a data compression algorithm.

**Note –** The storage used by the input string is 47 * 8 = 376 bits, but our encoded string takes only 194 bits, i.e. about 48% data compression. To keep the program readable, we use the C++ string class to store the encoded string in the above program.

Since an efficient priority queue requires O(log(n)) time per insertion, and a full binary tree with n leaves has 2n-1 nodes (and the Huffman tree is a full binary tree), this algorithm operates in O(n log(n)) time, where n is the number of distinct characters.

**References:**

https://en.wikipedia.org/wiki/Huffman_coding

https://en.wikipedia.org/wiki/Variable-length_code

Dr. Naveen Garg, IIT-D (Lecture 19 – Data Compression)

**Thanks for reading.**



## Comments

Funny, I was writing an implementation of Huffman coding to compress network data yesterday (delta compressed, so there are lots of zeros).

I made a table of frequencies, used qsort, and assigned prefix-tree values without really building a binary tree, but then I got to taking the codes and writing out the bits, promptly gave up there (not trivial), and just did run-length encoding.

I’ll take another stab at it later though

For new development, use ANS to get ratios near arithmetic coding at a computational cost near Huffman coding:

https://en.wikipedia.org/wiki/Asymmetric_Numeral_Systems

This is what I used in my multiplayer action shooter game project. It operates virtually the same way as Huffman encoding (as far as the API goes) but can achieve compression ratios similar to range encoding: frequencies do not have to be powers of two for optimal encoding, as they do with Huffman.

It took a day or two to decode the whitepaper and get it all up and running, though, because it’s not as intuitive as Huffman.

Just spent the last 6-7 hours diving into compression, which started with this article.

Amazing site, thank you.

Both Wikipedia reference links (“Huffman coding” and “Variable-length code”) point to a non-existent article because of a trailing slash (/) in the URL.

Thanks for bringing this to our notice. We have updated the hyperlinks.

This is a good write up. Several minutes of reading in my metro home made me understand the whole algorithm easily. I plan to implement this myself when I get home. Thanks for sharing!