CS61B
Lecture 38: Compression
Zip Files, How Do They Work?
Size in Bytes
$ zip mobydick.zip mobydick.txt
adding: mobydick.txt (deflated 59%)
$ ls -l
-rw-rw-r-- 1 jug jug 643207 Apr 24 10:55 mobydick.txt
-rw-rw-r-- 1 jug jug 261375 Apr 24 10:55 mobydick.zip
File is unchanged by zipping / unzipping.
$ unzip mobydick.zip
replace mobydick.txt? [y]es, [n]o, [A]ll, [N]one, [r]ename: r
new name: unzipped.txt
inflating: unzipped.txt
$ diff mobydick.txt unzipped.txt
$
Compression Model #1: Algorithms Operating on Bits
In a lossless algorithm we require that no information is lost.
[Figure: a bitstream B (01010101000001010101110...) passes through Compression Algorithm C, producing the compressed bits C(B) (1001010101...); Decompression Algorithm C^-1 maps C(B) back to B.]
Prefix Free Codes
Increasing Optimality of Coding
By default, English text is usually represented by sequences of characters, each 8 bits long, e.g. ‘d’ is 01100100.
Easy way to compress: Use fewer than 8 bits for each letter.
word | binary | hexadecimal |
dog | 01100100 01101111 01100111 | 64 6f 67 |
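Since characters are just small integers, these bit patterns can be recovered directly in Java. A minimal sketch (the class name `CharBits` is made up for illustration):

```java
public class CharBits {
    // Chars are just integers: 'd' is 100 in decimal, 0x64 in hex.
    public static String bits(char c) {
        String s = Integer.toBinaryString(c);
        // Pad to 8 bits to match the table above.
        while (s.length() < 8) {
            s = "0" + s;
        }
        return s;
    }
}
```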
More Code: Mapping Alphanumeric Symbols to Codewords
Example: Morse code.
Note: Without pauses between letters, Morse code is ambiguous, because some codewords are prefixes of others.
Alternate strategy: Avoid ambiguity by making code prefix free.
Morse Code (as a Tree)
From Wikimedia
Prefix-Free Codes [Example 1]
A prefix-free code is one in which no codeword is a prefix of any other. Example for English:
space | 1 |
E | 01 |
T | 001 |
A | 0001 |
O | 00001 |
I | 000001 |
... | |
I ATE: 0000011000100101
[Figure: the code above as a trie. From the start node, the edge labeled 1 leads to space; following 0s deeper, each 1 edge leads in turn to E, T, A, O, I, and so on.]
Prefix-Free Codes [Example 2]
A prefix-free code is one in which no codeword is a prefix of any other. Example for English:
space | 111 |
E | 010 |
T | 1101 |
A | 1011 |
O | 1001 |
I | 1000 |
... | |
I ATE: 100011110111101010
[Figure: the code above as a trie, with space, E, T, A, O, and I at the leaves; left edges are labeled 0 and right edges 1.]
Prefix Free Code Design
Observation: Some prefix-free codes are better for some texts than others.
Observation: It’d be useful to have a procedure that calculates the “best” code for a given text.
space | 1 |
E | 01 |
T | 001 |
A | 0001 |
O | 00001 |
I | 000001 |
... | |
space | 111 |
E | 010 |
T | 1101 |
A | 1011 |
O | 1001 |
I | 1000 |
... | |
The first code is better for EEEEAT (8+4+3 = 15 bits) but much worse for JOSH (25+5+8+10 = 48 bits).
The second code is worse for EEEEAT (12+4+4 = 20 bits) but better for JOSH (7+4+6+6 = 23 bits).
Code Calculation Approach #1 (Shannon-Fano Coding)
Symbol | Frequency |
我 | 0.35 |
爸 | 0.17 |
是 | 0.17 |
李 | 0.16 |
刚 | 0.15 |
[Figure: the symbols, sorted by frequency, are split into a left half and a right half of roughly equal total frequency; 35% of all characters are 我.]
Code Calculation Approach #1 (Shannon-Fano Coding)
Symbol | Frequency | Code |
我 | 0.35 | 0... |
爸 | 0.17 | 0... |
是 | 0.17 | 1... |
李 | 0.16 | 1... |
刚 | 0.15 | 1... |
[Figure: the left half (我, 爸) hangs off the 0 edge and the right half (是, 李, 刚) off the 1 edge.]
Code Calculation Approach #1 (Shannon-Fano Coding)
Symbol | Frequency | Code |
我 | 0.35 | 00 |
爸 | 0.17 | 01 |
是 | 0.17 | 1... |
李 | 0.16 | 1... |
刚 | 0.15 | 1... |
[Figure: the left half is split again, giving 我 = 00 and 爸 = 01.]
Code Calculation Approach #1 (Shannon-Fano Coding)
Symbol | Frequency | Code |
我 | 0.35 | 00 |
爸 | 0.17 | 01 |
是 | 0.17 | 10 |
李 | 0.16 | 11... |
刚 | 0.15 | 11... |
[Figure: the right half is split, giving 是 = 10, with 李 and 刚 remaining under the prefix 11.]
Code Calculation Approach #1 (Shannon-Fano Coding)
Symbol | Frequency | Code |
我 | 0.35 | 00 |
爸 | 0.17 | 01 |
是 | 0.17 | 10 |
李 | 0.16 | 110 |
刚 | 0.15 | 111 |
[Figure: the finished trie, with 我 = 00, 爸 = 01, 是 = 10, 李 = 110, and 刚 = 111 at the leaves.]
Efficiency Assessment: http://yellkey.com/??
How many bits per symbol do we need to compress a file with the character frequencies listed below using the Shannon-Fano code that we created?
Symbol | Frequency | S-F Code |
我 | 0.35 | 00 |
爸 | 0.17 | 01 |
是 | 0.17 | 10 |
李 | 0.16 | 110 |
刚 | 0.15 | 111 |
A. (2*3 + 3*2) / 5
= 2.4 bits per symbol
B. (0.35 + 0.17 + 0.17) * 2 + (0.16 + 0.15) * 3
= 2.31 bits per symbol
C. Not enough information, we need to know the exact characters in the file being compressed.
Efficiency Assessment of Shannon-Fano Coding
How many bits per symbol do we need to compress a file with the character frequencies listed below using the Shannon-Fano code that we created?
B. (0.35 + 0.17 + 0.17) * 2 + (0.16 + 0.15) * 3 = 2.31 bits per symbol.
Symbol | Frequency | S-F Code |
我 | 0.35 | 00 |
爸 | 0.17 | 01 |
是 | 0.17 | 10 |
李 | 0.16 | 110 |
刚 | 0.15 | 111 |
Example assuming we have 100 symbols:
Total: 231 bits
231 / 100 = 2.31 bits/symbol
Efficiency Assessment of Shannon-Fano Coding
If we had a file with 350 我 characters, 170 爸 characters, 170 是 characters, 160 李 characters, and 150 刚 characters, how many total bits would we need to encode this file using 32-bit Unicode? Using our Shannon-Fano code?
You don’t need a calculator.
Symbol | Frequency | S-F Code |
我 | 0.35 | 00 |
爸 | 0.17 | 01 |
是 | 0.17 | 10 |
李 | 0.16 | 110 |
刚 | 0.15 | 111 |
2.31 bits per symbol for texts with this distribution
Efficiency Assessment of Shannon-Fano Coding
If we had a file with 350 我 characters, 170 爸 characters, 170 是 characters, 160 李 characters, and 150 刚 characters, how many total bits would we need to encode this file using 32-bit Unicode? Using our Shannon-Fano code?
1000 total characters.
Space used: 32-bit Unicode: 1000 × 32 = 32,000 bits. Shannon-Fano: 1000 × 2.31 = 2,310 bits.
Our code is 14 times as efficient!
Symbol | Frequency | S-F Code |
我 | 0.35 | 00 |
爸 | 0.17 | 01 |
是 | 0.17 | 10 |
李 | 0.16 | 110 |
刚 | 0.15 | 111 |
2.31 bits per symbol for texts with this distribution
Code Calculation Approach #1 (Shannon-Fano Coding)
Shannon-Fano coding is NOT optimal. Does a good job, but possible to find ‘better’ codes (see CS170).
Symbol | Frequency | Code |
我 | 0.35 | 00 |
爸 | 0.17 | 01 |
是 | 0.17 | 10 |
李 | 0.16 | 110 |
刚 | 0.15 | 111 |
[Figure: the finished Shannon-Fano trie from before, with the five symbols at the leaves.]
Code Calculation Approach #2: Huffman Coding
As before, calculate relative frequencies.
[Figure: start with one node per symbol (我 0.35, 爸 0.17, 是 0.17, 李 0.16, 刚 0.15); merge the two smallest, 李 and 刚, into a node of weight 0.31; then merge 爸 and 是 into a node of weight 0.34.]
Code Calculation Approach #2: Huffman Coding
As before, calculate relative frequencies.
[Figure: continuing, merge the 0.31 and 0.34 nodes into a node of weight 0.65, then merge that with 我 (0.35) to complete the trie; labeling left edges 0 and right edges 1 yields the codewords.]
Huffman vs. Shannon-Fano
Shannon-Fano code below results in an average of 2.31 bits per symbol, whereas Huffman is only 2.3 bits per symbol.
Symbol | Frequency | S-F Code | Huffman Code |
我 | 0.35 | 00 | 0 |
爸 | 0.17 | 01 | 100 |
是 | 0.17 | 10 | 101 |
李 | 0.16 | 110 | 110 |
刚 | 0.15 | 111 | 111 |
Never worse than Shannon-Fano coding, and often strictly better. There is NO downside to Huffman coding instead.
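The bottom-up merging procedure can be sketched with a priority queue. This is a hypothetical sketch (class and method names are made up, and tie-breaking among equal frequencies may produce different, but equally optimal, codewords than the figures):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.PriorityQueue;

public class HuffmanSketch {
    // A node is either a leaf (symbol != null) or an internal node with two children.
    static class Node implements Comparable<Node> {
        Character symbol;
        double freq;
        Node left, right;
        Node(Character symbol, double freq, Node left, Node right) {
            this.symbol = symbol; this.freq = freq; this.left = left; this.right = right;
        }
        public int compareTo(Node other) { return Double.compare(freq, other.freq); }
    }

    // Build the Huffman trie by repeatedly merging the two least-frequent nodes.
    public static Node buildTrie(Map<Character, Double> freqs) {
        PriorityQueue<Node> pq = new PriorityQueue<>();
        for (Map.Entry<Character, Double> e : freqs.entrySet()) {
            pq.add(new Node(e.getKey(), e.getValue(), null, null));
        }
        while (pq.size() > 1) {
            Node a = pq.poll();
            Node b = pq.poll();
            pq.add(new Node(null, a.freq + b.freq, a, b));
        }
        return pq.poll();
    }

    // Read off codewords: left edge = 0, right edge = 1.
    public static void codes(Node n, String prefix, Map<Character, String> out) {
        if (n.symbol != null) {
            out.put(n.symbol, prefix);
            return;
        }
        codes(n.left, prefix + "0", out);
        codes(n.right, prefix + "1", out);
    }
}
```

With the lecture's frequencies, the 0.35 symbol gets a 1-bit codeword and the other four get 3-bit codewords, matching the 2.3 bits-per-symbol average above.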
Prefix-Free Codes
Question: For encoding (bitstream to compressed bitstream), what is a natural data structure to use? Assume characters are of type Character, and bit sequences are of type BitSequence.
space | 111 |
E | 010 |
T | 1101 |
A | 1011 |
O | 1001 |
I | 1000 |
... | 0111 |
space | 1 |
E | 01 |
T | 001 |
A | 0001 |
O | 00001 |
I | 000001 |
... | |
I ATE: 0000011000100101
I ATE: 100011110111101010
Prefix-Free Codes
Question: For encoding (bitstream to compressed bitstream), what is a natural data structure to use? chars are just integers, e.g. ‘A’ = 65.
Compared to HashMaps, arrays are faster (just get the item from the array) and only use more memory if some characters in the alphabet are unused.
I ATE: 0000011000100101
I ATE: 100011110111101010
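Using an array indexed directly by character, encoding is a single lookup per symbol. A minimal sketch (the class name `ArrayEncoder` is made up; a real implementation would emit packed bits rather than a `String` of '0'/'1' characters):

```java
public class ArrayEncoder {
    // codewords[c] holds the bit string for character c, or null if c is unused.
    public static String encode(String text, String[] codewords) {
        StringBuilder bits = new StringBuilder();
        for (int i = 0; i < text.length(); i += 1) {
            bits.append(codewords[text.charAt(i)]);  // chars are just integers
        }
        return bits.toString();
    }
}
```

With the first code above (space = 1, E = 01, T = 001, A = 0001, I = 000001), encoding "I ATE" yields 0000011000100101, as on the slide.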
Prefix-Free Codes
Question: For decoding (compressed bitstream back to bitstream), what is a natural data structure to use?
space | 111 |
E | 010 |
T | 1101 |
A | 1011 |
O | 1001 |
I | 1000 |
... | 0111 |
space | 1 |
E | 01 |
T | 001 |
A | 0001 |
O | 00001 |
I | 000001 |
... | |
I ATE: 0000011000100101
I ATE: 100011110111101010
Prefix-Free Codes
Question: For decoding (compressed bitstream back to bitstream), what is a natural data structure to use?
Answer: A trie. Use each bit of the compressed stream to walk down from the root; every time you reach a leaf, output that symbol and return to the root.
Huffman Coding in Practice
Huffman Compression
Two possible philosophies for using Huffman Compression:
1. Build one code per category of input (e.g. English text, bitmaps) and reuse it for every file of that category.
2. For each input file, build a code specific to that file, and store the code alongside the compressed bits.
What are some advantages/disadvantages of each idea? Which is better?
$ java HuffmanEncodePh1 ENGLISH mobydick.txt
$ java HuffmanEncodePh1 BITMAP horses.bmp
$ java HuffmanEncodePh2 mobydick.txt
$ java HuffmanEncodePh2 horses.bmp
Huffman Compression (Your Answers)
Two possible philosophies for using Huffman Compression:
What are some advantages/disadvantages of each idea? Which is better?
Huffman Compression
Two possible philosophies for using Huffman Compression:
In practice, Philosophy 2 is used in the real world.
Huffman Compression Example [Demo Link]
Given input text: 我我刚刚刚是我是我刚李刚我李是爸李爸李是李我我李刚是我是刚爸是刚我爸我李是是李是我我刚爸是李我我我是爸我是我爸是我爸是我是刚我是爸刚爸我刚我我刚爸我我爸我刚爸爸李李李李我我爸李我我刚爸李我我李我爸我我
Step 1: Count frequencies.
Step 2: Build encoding array and decoding trie.
Step 3: Write decoding trie to output.huf.
Step 4: Write codeword for each symbol to output.huf.
Output bits: 010101010101001…00111111111101…
[Figure: the Huffman decoding trie for the frequencies 0.35, 0.17, 0.17, 0.16, 0.15, together with the resulting codeword table.]
Writing the decoding trie to a file: see the optional textbook for how to do this.
Huffman Decompression Example [Demo Link]
Given input bitstream: 010101010101001…00111111111101…
Step 1: Read in decoding trie.
Step 2: Use codeword bits to walk down the trie, outputting symbols every time you reach a leaf.
Output symbols: 我我刚刚刚是…
[Figure: the same decoding trie and codeword table as on the compression slide.]
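Step 2 can be sketched as a bit-by-bit walk over a decoding trie. This is a hypothetical sketch (the class name `TrieDecoder` is made up, and bits are modeled as a `String` of '0'/'1' characters for clarity):

```java
public class TrieDecoder {
    // Minimal decoding trie node: leaves carry a symbol; internal nodes have children.
    static class Node {
        Character symbol;
        Node zero, one;
    }

    // Insert a (symbol, codeword) pair into the trie rooted at root.
    static void insert(Node root, char symbol, String code) {
        Node cur = root;
        for (char bit : code.toCharArray()) {
            if (bit == '0') {
                if (cur.zero == null) cur.zero = new Node();
                cur = cur.zero;
            } else {
                if (cur.one == null) cur.one = new Node();
                cur = cur.one;
            }
        }
        cur.symbol = symbol;
    }

    // Walk the trie bit by bit; on reaching a leaf, output its symbol and restart.
    static String decode(Node root, String bits) {
        StringBuilder out = new StringBuilder();
        Node cur = root;
        for (char bit : bits.toCharArray()) {
            cur = (bit == '0') ? cur.zero : cur.one;
            if (cur.symbol != null) {
                out.append(cur.symbol);
                cur = root;  // prefix-freeness: symbols live only at leaves
            }
        }
        return out.toString();
    }
}
```

With the second code from earlier (space = 111, E = 010, T = 1101, A = 1011, O = 1001, I = 1000), decoding 100011110111101010 yields "I ATE".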
Huffman Coding Summary
Given a file X.txt that we’d like to compress into X.huf:
To decompress X.huf:
See Huffman.java for an example implementation on 8-bit symbols.
Compression Algorithms (General)
The big idea in Huffman Coding is representing common symbols with small numbers of bits.
Many other approaches, e.g.
General idea: Exploit redundancy and existing order inside the sequence.
Compression Theory
Comparing Compression Algorithms
Different compression algorithms achieve different compression ratios on different files.
We’d like to try to compare them in some nice way.
Let’s start with a straightforward puzzle.
SuperZip
Suppose an algorithm designer says their algorithm SuperZip can compress any bitstream by 50%. Why is this impossible?
Universal Compression: An Impossible Idea
Argument 1: If true, they’d be able to compress any bitstream down to a single bit. Interpreter would have to be able to do the following (impossible) task for ANY output sequence.
[Figure: applying the claimed 50% compressor repeatedly: 01010101000001010101110 → 101010100001 → 111001 → 101 → 00 → 1.]
Universal Compression: An Impossible Idea
Argument 2: There are far fewer short bitstreams than long ones, so guaranteeing even a single 50% compression is impossible. Proof: there are 2^1000 bitstreams of length 1000, but fewer than 2^501 bitstreams of length 500 or less, so no invertible (decompressable) scheme can map every 1000-bit stream to a distinct stream of at most 500 bits.
A Sneaky Situation
Universal compression is impossible, but the following example implies that comparing compression algorithms could still be quite difficult.
Suppose we write a special purpose compression algorithm that simply hardcodes small bit sequences into large ones.
[Figure: Decompression Algorithm C^-1 maps the 3-bit sequence 010 to the hardcoded 8,927,472,560-bit bitstream 00000000001111111111... (GameOfThronesSeason6-Razor1911-Rip-Episode1.mp4).]
A Sneaky Situation
Suppose we write a special purpose compression algorithm that simply hardcodes small bit sequences into large ones.
To avoid this sort of trickery, we should include the bits needed to encode the decompression algorithm itself.
[Figure: the same trick, but now counting the decompression algorithm itself: 3 compressed bits plus the algorithm total 8,927,472,707 bits.]
Compression Model 2: Self-Extracting Bits
As a model for the decompression process, let’s treat the algorithm and the compressed bitstream as a single sequence of bits.
[Figure: SelfExtractingGoT.java (8,927,472,707 bits, beginning 0111000001110101...) fed to an interpreter produces GameOfThronesSeason6-Razor1911-Rip-Episode1.mp4 (8,927,472,560 bits, beginning 0100001001001101...).]
HugPlant
Huffman Coding can be used to compress any data, not just text. In bitmap format, the plant below is simply the stream of bits shown on the right.
42 4d 7a 00 10 00 00 00 00 00 7a 00 00 00 6c 00 00 00 00 02 00 00 00 02 00 00 01 00 20 00 03 00 00 00 00 00 10 00 12 0b 00 00 12 0b 00 00 00 00 00 00 00 00 00 00 00 00 ff 00 00 ff 00 00 ff 00 00 00 00 00 00 ff 01 00 00 00 00 00 00 00 00 00 00 00 01 00 00 00 00 00 00 00 00 00 00 00 01 00 ...
Total: 8389584 bits
Original Uncompressed Bits B
HugPlant Compressed
00 20 00 f4 c3 b7 6d c2 31 24 92 dc 24 a7 c9 25 ae 24 b5 c4 85 88 40 be c4 92 46 25 79 2f c4 af 25 f8 92 49 24 92 64 c9 92 49 30 b1 24 92 49 24 2c 49 24 92 49 0b 12 49 24 92 42 c4 92 49 24 92 49 ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ...
Decoding Trie: 2560 bits
Image data: 1991464 bits
74 68 65 20 70 61 73 73 63 6F 64 65 20 69 73 20 68 75 67 39 31 38 32 37 78 79 7A 2E 65 75 7a c0
09 eb cd d4 2a 55 9f d8 98 d1 4e e7 97 56 58 68
0c 7a 43 dd 80 00 7b 11 58 f4 75 73 77 bc 26 01
e0 92 28 ef 47 24 66 9b de 8b 25 04 1f 0e 87 bd
87 9e 03 c9 f1 cf ad fa 82 dc 9f a1 31 b5 79 13
9b 95 d5 63 26 8b 90 5e d5 b0 17 fb e9 c0 e6 53
c7 cb dd 5f 77 d3 bd 80 f9 b6 5e 94 aa 74 34 3a
a9 c1 ca e6 b8 9c 60 ab 36 3b a5 8a b4 3a 5c 5a
62 e9 2f 16 4c 34 60 6e 51 28 36 2c e7 4e 50 be
c0 15 1b 01 d9 c0 bd b4 20 87 42 be d4 e2 23 a2
b6 84 22 4c cf 74 cd 4f 23 06 54 e6 c2 0f 2d bd
e5 81 f4 c6 de 15 59 f1 68 a4 a5 88 16 b0 7f bf
8a 1d 98 bd 33 b4 d5 71 22 93 81 af e0 cc ce 12
57 23 62 3a e4 3d 8c f1 12 8d a5 40 3b 70 d6 9b
12 49 62 8d 6f d4 52 f6 7f d5 11 7c ca 07 dd e3
dc 1c 7f c4 a4 69 77 6e 5e 60 db 5a 69 01 95 c8
d7 2e 57 62 b7 8e 5c 51 f9 70 55 1b 7c ba 68 bc
[Figure: Huffman.java's compress() maps the original 8,389,584-bit HugPlant.bmp bitstream to a 1,994,024-bit compressed bitstream.]
Opening a .huf File
Of course, major operating systems have no idea what to do with a .huf file.
Or an Even Simpler View
To keep things conceptually simpler, let’s package the compressed data plus decoder into a single self-extracting .java file.
70 75 62 6c 69 63 20 63 6c 61 73 73 20 53 65 6c 66 45 78 74 72 61 63 74 69 6e 67 48 75 67 50 6c 61 6e 74 20 7b 0d 0a ... 74 68 65 20 70 61 73 73 63 6F 64 65 20 69 73 20 68 75 67 39 31 38 32 37 78 79 7A 2E 65 75 7a c0 09 eb cd d4 2a 55 9f d8 98 d1 4e e7 97 56 58 68 0c 7a 43 dd 80 00 7b 11 58 f4 75 73 77 bc 26 01 e0 92 28 ef 47 24 66 9b de 8b 25 04 1f 0e 87 bd 87 9e 03 c9 f1 cf ad fa 82 dc 9f a1 31 b5 79 13 9b 95 d5 63 26 8b 90 5e d5 b0 17 fb e9 c0 e6 53 c7 cb dd 5f 77 d3 bd 80 f9 b6 5e 94 aa 74 34 3a ... 00 20 00 f4 c3 b7 6d c2 31 24 92 dc 24 a7 c9 25 ae 24 b5 c4 85 88 40 be c4 92 46 25 79 2f c4 af 25 f8 92 49 24 92 64 c9 92 49 c4 92 49 24 92 49 ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ...
SelfExtractingHugPlant.java
42 4d 7a 00 10 00 00 00 00 00 7a 00 00 00 6c 00 00 00 00 02 00 00 00 02 00 00 01 00 20 00 03 00 00 00 00 00 10 00 12 0b 00 00 12 0b 00 00 00 00 00 00 00 00 00 00 00 00 ff 00 00 ff 00 00 ff 00 00 00 00 00 00 ff 01 00 00 00 00 00 00 00 00 00 00 00 01 00 00 00 00 00 00 00 00 00 00 00 01 00 00 00 00 00 00 00 00 00 00 00 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff...
HugPlant.bmp
2,037,424 bits
8,389,584 bits
Compression Model #2: Self-Extracting Bits
As a model for the decompression process, let’s treat the algorithm and the compressed bitstream as a single sequence of bits.
Will discuss the implications of this model next time.
[Figure: SelfExtractingHugPlant.java (2,037,424 bits, beginning 0111000001110101...) fed to an interpreter produces HugPlant.bmp (8,389,584 bits, beginning 0100001001001101...).]
LZW Style Compression (Extra)
Thought Experiment
How might we compress the following bitstreams (underlines for emphasis only)?
The LZW Approach
Key idea: Each codeword represents multiple symbols.
Demo Example: http://goo.gl/68Dncw
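The key idea can be sketched in a few lines. This is a simplified, hypothetical sketch (the class name `LzwSketch` is made up; it works over ASCII text with an unbounded table, ignoring real-world details like fixed codeword widths):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class LzwSketch {
    // Simplified LZW compression over ASCII text: the table starts with all
    // single characters (codes 0-127) and learns longer strings as it goes.
    public static List<Integer> compress(String text) {
        Map<String, Integer> table = new HashMap<>();
        for (int c = 0; c < 128; c += 1) {
            table.put("" + (char) c, c);
        }
        int nextCode = 128;
        List<Integer> out = new ArrayList<>();
        String cur = "";
        for (char c : text.toCharArray()) {
            String extended = cur + c;
            if (table.containsKey(extended)) {
                cur = extended;                  // keep extending the current match
            } else {
                out.add(table.get(cur));         // emit code for the longest match
                table.put(extended, nextCode);   // learn the new, longer string
                nextCode += 1;
                cur = "" + c;
            }
        }
        if (!cur.isEmpty()) {
            out.add(table.get(cur));
        }
        return out;
    }
}
```

For example, "ababab" compresses to the codes 97 ('a'), 98 ('b'), then 128 twice, where 128 is the learned code for "ab".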
LZW
Named for inventors Lempel, Ziv, Welch.
Our version in lecture is simplified, for example:
LZW
Neat fact: You don’t have to send the codeword table along with the compressed bitstream.
LZW decompression example:
Side Trip: Lossy Compression
Lossy Compression
Most media formats lose information during compression process:
Why?
Lossy Compression
Basic idea: Throw away information that human sensory system doesn’t care about.
Examples:
See EE20 (or perhaps 16A/16B?) for more.
Summary
Compression: Make common bitstreams more compact.
Huffman coding:
LZW:
Slides Moved to Next Lecture
TANSTAAFL: There ain't no such thing as a free lunch.
Compression can be thought of as a ‘remapping’ of bitstreams.
[Figure: SelfExtractingHugPlant.java (2,037,424 bits, beginning 0111000001110101...) fed to an interpreter produces HugPlant.bmp (8,389,584 bits, beginning 0100001001001101...).]
Even Better Compression
Compression ratio of 25% is certainly very impressive, but we can do much better. MysteryX achieves a 0.35% compression ratio!
[Figure: MysteryX (29,432 bits, beginning 0010000001110000...) fed to an interpreter produces HugPlant.bmp (8,389,584 bits), compared with SelfExtractingHugPlant.java (2,037,424 bits) producing the same output.]
Anybody want to guess what this is?
MysteryX: HugPlant.java
MysteryX is just HugPlant.java, the piece of code that I used to generate the .bmp file originally.
Interesting question:
[Figure: CB: HugPlant.java (29,432 bits, beginning 0010000001110000...) fed to an interpreter produces B: HugPlant.bmp (8,389,584 bits, beginning 0100001001001101...).]
69 6d 70 6f 72 74 20 6a 61 76 61 2e 61 77 74 2e
43 6f 6c 6f 72 3b 0a 0a 0a 70 75 62 6c 69 63 20
63 6c 61 73 73 20 48 75 67 50 6c 61 6e 74 20 7b
0a 09 2f 2f 73 65 6e 64 20 65 6d 61 69 6c 20 74
6f 20 70 72 69 7a 65 40 6a 6f 73 68 68 2e 75 67
20 74 6f 20 72 65 63 65 69 76 65 20 79 6f 75 72
20 70 72 69 7a 65 0a 0a 09 70 72 69 76 61 74 65
20 73 74 61 74 69 63 20 64 6f 75 62 6c 65 20 73
63 61 6c 65 46 61 63 74 6f 72 3d 32 30 2e 30 3b
0a 0a 09 70 72 69 76 61 74 65 20 73 74 61 74 69
63 20 69 6e 74 20 67 65 6e 43 6f 6c 6f 72 56 61
6c 75 65 28 69 6e 74 20 6f 6c 64 56 61 6c 2c