It was way back in 1948 that Claude Shannon, then at Bell Telephone Laboratories, published his landmark paper on communication theory. In it he set the goal for data encoding: Shannon proved that for any given noise level there is a maximum rate at which information can be transmitted reliably, no matter how much redundancy is added for data integrity. He is the Shannon in Shannon's limit.
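For a band-limited channel with Gaussian noise, that limit has a standard closed form, the Shannon-Hartley capacity (stated here for reference; the original post does not give the formula):

C = B \log_2\!\left(1 + \frac{S}{N}\right)

where C is the channel capacity in bits per second, B is the bandwidth in hertz, and S/N is the signal-to-noise ratio. No code can push reliable data through the channel faster than C, but good codes can get close to it.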
The practical answer, arriving decades later, was turbo codes. This type of data encoding sends two different redundant sub-blocks of parity data along with each block of the message. The resulting transfer runs just a notch below the theoretical Shannon limit. It was the elegant answer and was embraced quickly by mobile phones, satellite data, mobile television, and many other forms of data transfer. Eventually it will likely be part of most wireless data transfer.
The way it works is that the data is encoded into three sub-blocks of bits. The first sub-block is the m-bit block of payload data. The second sub-block is n/2 parity bits computed from the payload, and the third is n/2 parity bits computed from a known permutation of the payload. Both parity blocks are produced by a recursive systematic convolutional code (RSC code); the permutation is performed by an interleaver, which rearranges the payload bits non-contiguously before the second parity computation. The three sub-blocks are then transmitted together.
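A minimal sketch of that encoder structure in Python is below. The RSC encoder here is the classic small memory-2 example code and the interleaver is a fixed pseudo-random shuffle; both are illustrative stand-ins rather than the generator polynomials or interleaver of any real standard, and the puncturing that would trim each parity stream down to n/2 bits is omitted.

import random

def rsc_parity(bits):
    """Toy memory-2 recursive systematic convolutional (RSC) encoder:
    feedback taps 1+D+D^2, feedforward taps 1+D^2, one parity bit per input bit."""
    s1 = s2 = 0
    parity = []
    for b in bits:
        feedback = b ^ s1 ^ s2        # recursive feedback into the shift register
        parity.append(feedback ^ s2)  # feedforward taps produce the parity bit
        s1, s2 = feedback, s1         # shift the register
    return parity

def turbo_encode(payload, seed=42):
    """Produce the three sub-blocks: the payload itself, the parity of the
    payload, and the parity of an interleaved (permuted) copy of the payload."""
    rng = random.Random(seed)
    interleaver = list(range(len(payload)))
    rng.shuffle(interleaver)                      # fixed, known permutation
    permuted = [payload[i] for i in interleaver]
    return payload, rsc_parity(payload), rsc_parity(permuted)

message = [1, 0, 1, 1, 0, 0, 1, 0]
systematic, parity1, parity2 = turbo_encode(message)
print(systematic, parity1, parity2)

Because the same payload is encoded twice through different orderings, the two parity streams carry largely independent views of the same bits, which is what the decoder exploits.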
The message is sent, and when it arrives it is decoded with a set of "soft decisions." The decoder produces an integer for each bit in the data block. This integer is not the bit's value itself, but a measure of how likely the bit is to be a binary 0 or 1. The same process happens for each parity sub-block, and the two constituent decoders then pass this likelihood data back and forth over several iterations to reconcile differences between the blocks before a final hard decision is made. The two Frenchmen behind turbo codes, Claude Berrou and Alain Glavieux, do give credit to the earlier work of Andrew Viterbi in 1967 with his Viterbi decoding algorithm for trellis codes.
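One common way to form those soft decisions is the log-likelihood ratio: its sign says which bit value is more likely and its magnitude says how confident the decoder should be, and it is exactly this kind of value that the two decoders exchange on each iteration. The sketch below assumes BPSK signalling over an additive white Gaussian noise channel (an assumption of this example, not something the post specifies); hardware decoders typically quantize these values to small integers, which matches the "integer" in the description above.

import math

def soft_decisions(received, noise_var):
    """Log-likelihood ratio per received sample under BPSK (+1 for bit 1,
    -1 for bit 0) over an AWGN channel: LLR = 2*r / noise variance.
    Positive means 'probably 1', negative means 'probably 0';
    the magnitude is the confidence."""
    return [2.0 * r / noise_var for r in received]

def hard_decisions(llrs):
    """Collapse the soft values to bits only at the very end of decoding."""
    return [1 if llr > 0 else 0 for llr in llrs]

# A noisy received block: the third sample is ambiguous (near zero),
# so its soft value carries very little confidence.
received = [0.9, -1.1, 0.05, -0.8, 1.2]
llrs = soft_decisions(received, noise_var=0.5)
print([round(l, 2) for l in llrs])   # [3.6, -4.4, 0.2, -3.2, 4.8]
print(hard_decisions(llrs))          # [1, 0, 1, 0, 1]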