This HTML5 document contains 38 embedded RDF statements represented using HTML+Microdata notation.

The embedded RDF content will be recognized by any processor of HTML5 Microdata.

Namespace Prefixes

Prefix     IRI
dct        http://purl.org/dc/terms/
yago-res   http://yago-knowledge.org/resource/
dbo        http://dbpedia.org/ontology/
foaf       http://xmlns.com/foaf/0.1/
dbt        http://dbpedia.org/resource/Template:
rdfs       http://www.w3.org/2000/01/rdf-schema#
freebase   http://rdf.freebase.com/ns/
rdf        http://www.w3.org/1999/02/22-rdf-syntax-ns#
owl        http://www.w3.org/2002/07/owl#
n13        http://en.wikipedia.org/wiki/
dbp        http://dbpedia.org/property/
dbc        http://dbpedia.org/resource/Category:
prov       http://www.w3.org/ns/prov#
xsdh       http://www.w3.org/2001/XMLSchema#
gold       http://purl.org/linguistics/gold/
dbr        http://dbpedia.org/resource/

Statements

Subject Item
dbr:Canonical_Huffman_code
rdfs:label
Canonical Huffman code
rdfs:comment
In computer science and information theory, a canonical Huffman code is a particular type of Huffman code with unique properties which allow it to be described in a very compact manner. Data compressors generally work in one of two ways. Either the decompressor can infer what codebook the compressor has used from previous context, or the compressor must tell the decompressor what the codebook is. Since a canonical Huffman codebook can be stored especially efficiently, most compressors start by generating a "normal" Huffman codebook, and then convert it to canonical Huffman before using it.
owl:sameAs
freebase:m.0gy99r yago-res:Canonical_Huffman_code
dbp:wikiPageUsesTemplate
dbt:Multiple_issues dbt:Compression_methods dbt:No_footnotes dbt:Tmath dbt:Technical
dct:subject
dbc:Lossless_compression_algorithms dbc:Coding_theory
gold:hypernym
dbr:Type
prov:wasDerivedFrom
n13:Canonical_Huffman_code?oldid=1065477844&ns=0
dbo:wikiPageID
6946171
dbo:wikiPageLength
9240
dbo:wikiPageRevisionID
1065477844
dbo:wikiPageWikiLink
dbr:Symbol_code dbr:Algorithm dbr:Information_theory dbr:Codebook dbr:Binary_number dbr:Bit dbr:Huffman_coding dbr:JPEG_File_Interchange_Format dbr:Kraft–McMillan_inequality dbr:Value_(computer_science) dbc:Lossless_compression_algorithms dbc:Coding_theory dbr:Logical_shift dbr:Pseudocode dbr:8-bit_computing dbr:Decimal_separator dbr:Bit-length dbr:Alphabet dbr:Data_compression dbr:Computer_science
dbo:abstract
In computer science and information theory, a canonical Huffman code is a particular type of Huffman code with unique properties which allow it to be described in a very compact manner. Data compressors generally work in one of two ways. Either the decompressor can infer what codebook the compressor has used from previous context, or the compressor must tell the decompressor what the codebook is. Since a canonical Huffman codebook can be stored especially efficiently, most compressors start by generating a "normal" Huffman codebook, and then convert it to canonical Huffman before using it.

In order for a scheme such as the Huffman code to be decompressed, the same model that the encoding algorithm used to compress the source data must be provided to the decoding algorithm so that it can use it to decompress the encoded data. In standard Huffman coding this model takes the form of a tree of variable-length codes, with the most frequent symbols located at the top of the structure and being represented by the fewest bits.

However, this code tree introduces two critical inefficiencies into an implementation of the coding scheme. Firstly, each node of the tree must store either references to its child nodes or the symbol that it represents. This is expensive in memory usage and if there is a high proportion of unique symbols in the source data then the size of the code tree can account for a significant amount of the overall encoded data. Secondly, traversing the tree is computationally costly, since it requires the algorithm to jump randomly through the structure in memory as each bit in the encoded data is read in.

Canonical Huffman codes address these two issues by generating the codes in a clear standardized format; all the codes for a given length are assigned their values sequentially. This means that instead of storing the structure of the code tree for decompression only the lengths of the codes are required, reducing the size of the encoded data. Additionally, because the codes are sequential, the decoding algorithm can be dramatically simplified so that it is computationally efficient.
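The abstract's key point, that codewords of each length are assigned sequentially so only the code lengths need to be stored, can be illustrated with a minimal sketch. The function name and the example length assignment below are illustrative, not taken from the source; the length-to-code rule (shift left when moving to a longer length, then count up) is the standard canonical construction:

```python
def canonical_codes(lengths):
    """Assign canonical Huffman codewords from per-symbol code lengths.

    Symbols are ordered by (length, symbol); within each length the
    codewords are consecutive integers, so the lengths alone fully
    determine the codebook.
    """
    codes = {}
    code = 0
    prev_len = 0
    # Sort by code length first, then by symbol to break ties.
    for sym, length in sorted(lengths.items(), key=lambda kv: (kv[1], kv[0])):
        # Moving to a longer code length: shift the running value left.
        code <<= (length - prev_len)
        codes[sym] = format(code, f"0{length}b")  # zero-padded bit string
        code += 1
        prev_len = length
    return codes

# Hypothetical codebook: lengths 1, 2, 3, 3 satisfy the Kraft inequality.
print(canonical_codes({"a": 1, "b": 2, "c": 3, "d": 3}))
# → {'a': '0', 'b': '10', 'c': '110', 'd': '111'}
```

Because the decoder can rerun this same construction from the transmitted lengths, no tree structure needs to be stored alongside the compressed data.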
foaf:isPrimaryTopicOf
n13:Canonical_Huffman_code