Definition of Cyclic Redundancy Check in Network Encyclopedia.
What is Cyclic Redundancy Check (CRC)?
Cyclic Redundancy Check is a number mathematically calculated for a packet by its source computer and then recalculated by the destination computer. If the original and recalculated values differ, the packet is assumed to be corrupt and must be discarded or retransmitted.
The mathematical procedure for performing a CRC is specified by the International Telecommunication Union (ITU) and involves applying a 16-bit polynomial to the data carried by packets of 4 KB or less, or a 32-bit polynomial to larger packets.
The result of this calculation is appended to the packet as a trailer. The receiving station applies the same polynomial to the data and compares its result to the trailer appended to the packet. Implementations of Ethernet use a 32-bit polynomial to calculate their CRC.
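The append-and-recheck exchange described above can be sketched in Python using the standard library's zlib.crc32 (a 32-bit CRC, as used by Ethernet). The 4-byte trailer layout and the helper names here are illustrative assumptions, not a real link-layer frame format:

```python
import struct
import zlib


def add_crc_trailer(payload: bytes) -> bytes:
    """Sender side: append the CRC-32 of the payload as a 4-byte trailer."""
    crc = zlib.crc32(payload)
    return payload + struct.pack(">I", crc)


def check_and_strip(frame: bytes) -> bytes:
    """Receiver side: recompute the CRC over the data and compare it to
    the trailer. A mismatch means the packet is corrupt and would be
    dropped or retransmitted."""
    payload, trailer = frame[:-4], frame[-4:]
    (expected,) = struct.unpack(">I", trailer)
    if zlib.crc32(payload) != expected:
        raise ValueError("CRC mismatch: packet corrupt")
    return payload


frame = add_crc_trailer(b"hello, network")
assert check_and_strip(frame) == b"hello, network"

# Flip a single bit in transit: the receiver detects the corruption.
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]
try:
    check_and_strip(corrupted)
except ValueError:
    print("corruption detected")
```

Note that the CRC detects corruption but does not locate or repair it; recovery is left to the higher-level protocol.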
History of Cyclic Redundancy Check
The CRC was invented by W. Wesley Peterson in 1961. The 32-bit CRC function, used in Ethernet and many other standards, is the work of several researchers and was published in 1975.
This is the algorithm for the CRC-32 variant of CRC. The CRC table is a memoization of a calculation that would otherwise have to be repeated for each byte of the message.
Function CRC32
   Input:  data: Bytes     // Array of bytes
   Output: crc32: UInt32   // 32-bit unsigned CRC-32 value

   // Initialize CRC-32 to starting value
   crc32 ← 0xFFFFFFFF

   for each byte in data do
      nLookupIndex ← (crc32 xor byte) and 0xFF
      crc32 ← (crc32 shr 8) xor CRCTable[nLookupIndex]   // CRCTable is an array of 256 32-bit constants

   // Finalize the CRC-32 value by inverting all the bits
   crc32 ← crc32 xor 0xFFFFFFFF
   return crc32
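As a minimal sketch, the pseudocode above translates directly to Python. The 256-entry table is built from the bit-reversed CRC-32 polynomial 0xEDB88320 (an assumption consistent with the reflected CRC-32 used by Ethernet and zlib), and the result can be cross-checked against the standard library:

```python
import zlib

# Build the 256-entry lookup table (the memoization mentioned above)
# from the bit-reversed CRC-32 polynomial 0xEDB88320.
CRC_TABLE = []
for n in range(256):
    c = n
    for _ in range(8):
        c = (c >> 1) ^ 0xEDB88320 if c & 1 else c >> 1
    CRC_TABLE.append(c)


def crc32(data: bytes) -> int:
    crc = 0xFFFFFFFF                 # initialize to starting value
    for byte in data:
        index = (crc ^ byte) & 0xFF  # low byte selects the table entry
        crc = (crc >> 8) ^ CRC_TABLE[index]
    return crc ^ 0xFFFFFFFF          # finalize by inverting all the bits


# The conventional CRC-32 check value for the ASCII string "123456789"
# is 0xCBF43926, and the function agrees with zlib.crc32.
assert crc32(b"123456789") == 0xCBF43926
assert crc32(b"123456789") == zlib.crc32(b"123456789")
```

Building the table once and indexing it per byte is what makes the loop fast: eight conditional shift-and-xor steps per byte are replaced by a single table lookup.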