...one of the most highly
regarded and expertly designed C++ library projects in the
world.

— Herb Sutter and Andrei
Alexandrescu, C++
Coding Standards

This is the documentation for a snapshot of the develop branch, built from commit dbfa60bede.

**Note:** Some of the introductory material is based on *A Painless Guide to CRC Error Detection Algorithms* by Ross N. Williams.

When binary data is transmitted, usually electronically, there is a chance that it gets corrupted. One method of detecting such corruption is to generate a check value coded from the original data, send that value along to the receiver, and then confirm that the received data generates the same value when coded at the destination.

There are several possibilities after the receiver's check:

- The check value matches at both places and the data was transmitted intact.
- The data got corrupted and the check values do not match.
- The data is intact, but the check value got corrupted. This can't be distinguished from the previous case, but it is a (relatively) safe false positive.
- The data and/or the check value got corrupted, but in such a way that the results of the check-coding are still consistent. This is a dangerous false negative!

The way to minimize false negatives is to choose coding algorithms that cause a lot of churn per unit of input, especially a variable amount of churn.

The check values are known as **checksum**s because
they are used to *check* for data consistency and the first
coding algorithms were addition- (i.e. *sum*ming-) based.

**Cyclic Redundancy Codes** are a type of consistency
check that treats the message data as a (long) dividend of a modulo-2 polynomial
division. Modulo-2 arithmetic doesn't use carries/borrows when combining
numbers. A specific CRC defines a set number of bits to work on at a time,
where said number is also the degree of a fixed polynomial (with modulo-2
coefficients) used as a divisor.
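Because coefficients combine without carries or borrows, modulo-2 addition and subtraction are the same operation, and both reduce to a bitwise exclusive-or. A quick illustration (the variable names and sample polynomials are just for exposition):

```cpp
#include <cassert>
#include <cstdint>

int main()
{
    std::uint8_t a = 0b1011;           // t^3 + t + 1
    std::uint8_t b = 0b0110;           // t^2 + t

    // No carries, no borrows: adding and subtracting are both XOR.
    std::uint8_t sum        = a ^ b;   // t^3 + t^2 + 1, i.e. 0b1101
    std::uint8_t difference = a ^ b;   // identical, since each term is its own negative

    assert(sum == 0b1101 && difference == sum);
}
```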

Since magnitude comparisons don't apply in modulo-2 arithmetic, the check between the current high part of the dividend and the trial partial product (of the divisor and the trial new quotient coefficient) reduces to seeing whether the highest-degree coefficient of the dividend is one. (The highest-degree coefficient of the divisor must be one by definition, since it's the only non-zero choice.) The remainder left after the division finishes is used as the basis of the CRC checksum.

For a given degree *x* for the modulo-2 polynomial divisor,
the remainder will have at most *x* terms (from degree
*x* - 1 down to the constant term). The coefficients
are modulo-2, which means that they can be represented by 0's and 1's.
So a remainder can be modeled by an (unsigned) integer of at least *x*
bits in **width**.

The divisor must have its degree-*x* term equal to one, which means that term is always known and can be implied rather than explicitly included in representations. Its lower *x* terms must be specified, so a divisor can be modeled the same way as remainders. With such a modeling, the divisor representation could be said to be truncated, since the uppermost term's value is implied and not stored.
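For instance, taking the widely used CRC-16/CCITT divisor as an illustration (a polynomial not otherwise discussed here):

```cpp
#include <cstdint>

// CRC-16/CCITT divisor: x^16 + x^12 + x^5 + 1.  The degree-16 term is
// implied, so only the lower sixteen terms need to be stored.
constexpr std::uint16_t truncated_poly = 0x1021;  // bits 12, 5, and 0 set
```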

The remainder and **(truncated) divisor polynomial**s
are stored as basic computer integers. This is in contrast to the dividend,
which is modeled from the input stream of data bits, where each new incoming
bit is the next lower term of the dividend polynomial. Long division can
be processed piecemeal, reading new upper terms as needed. This maps
to reading the data a byte (or bit) at a time, generating updated remainders
just-in-time, without needing to read (and/or store(!)) the entire data
message at once.

Long division involves appending new dividend terms after the previous terms have been processed into the (interim) remainder. So the remainder is the only thing that has to change during each division step; a new input byte (or bit) is combined with the remainder to make the interim dividend, which is then combined with the partial product (based on the divisor and the top dividend bit(s)) to become a remainder again.
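A minimal sketch of one such step, assuming an 8-bit register, a truncated divisor `poly`, and the augmented style where each new dividend bit enters at the bottom:

```cpp
#include <cstdint>

// One modulo-2 long-division step: the next message bit enters at the
// bottom of the register while the top term, if set, is cancelled by
// "subtracting" (XORing) the divisor's partial product.
std::uint8_t shift_in(std::uint8_t remainder, bool next_bit, std::uint8_t poly)
{
    bool const top = (remainder & 0x80u) != 0;  // top dividend coefficient
    remainder = (remainder << 1) | (next_bit ? 1 : 0);
    if (top)
        remainder ^= poly;                      // modulo-2 subtraction
    return remainder;
}
```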

When all of the input data has been read during division, the last *x*
bits are still stuck in the interim remainder. They have not been pushed
through the division steps; to do so, *x* zero-valued
extra bits must be passed into the system. This ensures all of the message's
data bits get processed. The post-processed remainder is the checksum.
The system requires the message to be **augmented**
with *x* extra bits to get results.
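Continuing the sketch above with *x* = 8, the augmentation amounts to pushing eight zero bits through after the message:

```cpp
// Push the x (here 8) zero-valued augment bits through; the result is
// the checksum of everything previously fed through shift_in()
// (assuming no reflection, a zero initial remainder, and no final mask).
std::uint8_t finish_augmented(std::uint8_t remainder, std::uint8_t poly)
{
    for (int i = 0; i < 8; ++i)
        remainder = shift_in(remainder, false, poly);
    return remainder;
}
```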

Alternatively, if the post-division augmentation bits are the expected checksum instead of zeros, then the division will "subtract" the checksum from itself, giving zero as the final remainder. The remainder ends up non-zero if bit errors exist in the data, the checksum, or both. This option requires the checksum to be fed in from its highest-order bit on down (i.e. big-endian).
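A sketch of that verification, reusing `shift_in()` from above:

```cpp
// True when the message plus its checksum divides cleanly, i.e. the
// final remainder is zero and no errors were detected.
bool augmented_check_ok(std::uint8_t remainder, std::uint8_t checksum,
                        std::uint8_t poly)
{
    for (int i = 7; i >= 0; --i)   // highest-order checksum bit first
        remainder = shift_in(remainder, ((checksum >> i) & 1) != 0, poly);
    return remainder == 0;
}
```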

Exploiting the properties of how the division is carried out, the steps
can be rearranged such that the post-processing zero-valued bits are not
needed; their effect is merged into the start of the process. Such systems
read **unaugmented** messages and expose the
checksum directly from the interim remainder afterwards. (You can't use
the "augment-message-with-checksum-and-zero-check" technique
with this, of course.)
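One common rearrangement, sketched here for an 8-bit unreflected CRC with a zero initial remainder and no final mask: each incoming byte is combined with the register up front, so no trailing zero bits are ever needed:

```cpp
#include <cstdint>

// Unaugmented ("direct") update: new terms are XORed straight into the
// register, then eight division steps run.  The checksum can be read
// directly from the returned remainder once the last byte is processed.
std::uint8_t update_direct(std::uint8_t remainder, std::uint8_t byte,
                           std::uint8_t poly)
{
    remainder ^= byte;
    for (int bit = 0; bit < 8; ++bit)
        remainder = (remainder & 0x80u) ? ((remainder << 1) ^ poly)
                                        : (remainder << 1);
    return remainder;
}
```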

Since long division proceeds from the uppermost terms on down, it's easiest to treat an incoming byte as the uppermost unprocessed terms, and to read the bits within that byte from the highest-order bit (the uppermost unprocessed term) downward. However, some hardware implementations have an easier time reading each byte from the lowest-order bit upward. To simulate those systems in software, the program needs to be flagged that **input reflection** is to be applied. Reflecting a built-in integer reverses the order of its bits, such that the lowest- and highest-order bits swap states, the next-lowest- and next-highest-order bits swap, etc. Input reflection can be done either by reflecting each byte as it comes in, or by keeping the bytes unchanged and reflecting the other internal workings instead. The latter sounds harder, but it is what's usually done in the real world, since it's a one-time cost, unlike reflecting every byte.
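A straightforward (if unoptimized) sketch of the first option, reflecting one byte:

```cpp
#include <cstdint>

// Reverse the bit order of a byte: bit 0 trades places with bit 7,
// bit 1 with bit 6, and so on.
std::uint8_t reflect_byte(std::uint8_t b)
{
    std::uint8_t r = 0;
    for (int i = 0; i < 8; ++i)
        if (b & (1u << i))
            r |= 1u << (7 - i);
    return r;
}
```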

Similarly, the final remainder is processed by some hardware in reverse order, which means software that simulates such systems needs to flag that **output reflection** is in effect.

Some CRCs don't return the remainder directly (reflected or not), but add an extra step that complements the output bits. Complementing turns 1 values into 0 values and vice versa. This can be simulated by applying an XOR (exclusive-or) bit mask of all 1-values (of the same bit length as the remainder). Some systems use a **final XOR mask** that isn't all 1-values, for variety. (This mask is applied *after* any output reflection.)
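Expressed in code for an 8-bit register (names illustrative, `reflect_byte()` from above):

```cpp
// Final output stage: optional reflection happens first, then the mask.
std::uint8_t finalize(std::uint8_t remainder, bool reflect_output,
                      std::uint8_t final_xor_mask)
{
    if (reflect_output)
        remainder = reflect_byte(remainder);
    return remainder ^ final_xor_mask;
}
```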

At the other end of the process, the built-in-integer register normally starts at zero as the first bytes are read. Instead of doing nothing but loading input bits for the first *x* steps, some CRC systems use a non-zero **initial remainder** to add extra processing. This initial value has to differ between the augmented and unaugmented versions of the same system, due to its possible interaction with the zero-valued augment bits.

The Rocksoft™ Model CRC Algorithm, or RMCA for short, was designed by Ross Williams to describe all the specification points of a given CRC system (quoted):

**RMCA Parameters**

- WIDTH
This is the width of the algorithm expressed in bits. This is one less than the width of the Poly.

- POLY
This parameter is the poly. This is a binary value that should be specified as a hexadecimal number. The top bit of the poly should be omitted. For example, if the poly is 10110, you should specify 06. An important aspect of this parameter is that it represents the unreflected poly; the bottom bit of this parameter is always the LSB of the divisor during the division regardless of whether the algorithm being modelled is reflected.

- INIT
This parameter specifies the initial value of the register when the algorithm starts. This is the value that is to be assigned to the register in the direct table algorithm. In the table algorithm, we may think of the register always commencing with the value zero, and this value being XORed into the register after the N'th bit iteration. This parameter should be specified as a hexadecimal number.

- REFIN
This is a boolean parameter. If it is FALSE, input bytes are processed with bit 7 being treated as the most significant bit (MSB) and bit 0 being treated as the least significant bit. If this parameter is TRUE, each byte is reflected before being processed.

- REFOUT
This is a boolean parameter. If it is set to FALSE, the final value in the register is fed into the XOROUT stage directly, otherwise, if this parameter is TRUE, the final register value is reflected first.

- XOROUT
This is a W-bit value that should be specified as a hexadecimal number. It is XORed to the final register value (after the REFOUT stage) before the value is returned as the official checksum.

His description assumes an octet-sized byte. The *POLY*
is the (truncated) divisor. The *INIT* is the initial
remainder, assuming the unaugmented version of CRC processing is used.
(If you're using an augmented-style CRC, you have to undo the effect of
the built-in zero-augment before initialization.)
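Tying the parameters together, here is a minimal, unoptimized sketch of an unaugmented evaluator driven by the RMCA parameters. It assumes octet-sized bytes and widths of 8 through 32 bits; none of the names below come from this library:

```cpp
#include <cstddef>
#include <cstdint>

// Reverse the low `width` bits of `value`.
std::uint32_t reflect_bits(std::uint32_t value, unsigned width)
{
    std::uint32_t result = 0;
    for (unsigned i = 0; i < width; ++i, value >>= 1)
        result = (result << 1) | (value & 1u);
    return result;
}

// Bit-at-a-time, unaugmented CRC over the RMCA parameters.
std::uint32_t rmca_crc(unsigned char const *data, std::size_t length,
                       unsigned width,         // WIDTH (8..32 here)
                       std::uint32_t poly,     // POLY (truncated divisor)
                       std::uint32_t init,     // INIT (initial remainder)
                       bool refin,             // REFIN
                       bool refout,            // REFOUT
                       std::uint32_t xorout)   // XOROUT
{
    std::uint32_t const top  = std::uint32_t(1) << (width - 1);
    std::uint32_t const mask = (top << 1) - 1;  // low `width` bits (wraps at 32)

    std::uint32_t reg = init;
    for (std::size_t i = 0; i < length; ++i)
    {
        std::uint32_t byte = refin ? reflect_bits(data[i], 8) : data[i];
        reg ^= byte << (width - 8);             // combine at the register's top
        for (int bit = 0; bit < 8; ++bit)       // eight division steps
            reg = (reg & top) ? ((reg << 1) ^ poly) & mask
                              : (reg << 1) & mask;
    }
    if (refout)
        reg = reflect_bits(reg, width);         // REFOUT before XOROUT
    return reg ^ xorout;                        // XOROUT last
}
```

With the well-known CRC-32 parameter set (WIDTH 32, POLY 0x04C11DB7, INIT 0xFFFFFFFF, REFIN and REFOUT both true, XOROUT 0xFFFFFFFF), feeding it the nine bytes of "123456789" yields the standard check value 0xCBF43926.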

The two function templates and two class templates in this library provide ways to carry out CRC computations. You give the various Rocksoft™ Model CRC Algorithm parameters as template parameters and/or constructor parameters. You then submit all the message data bytes at once (for the functions) or piecemeal (for the class objects).
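For instance, computing that same standard CRC-32 check value both ways (the buffer and output formatting are just for illustration):

```cpp
#include <boost/crc.hpp>
#include <cstdint>
#include <iostream>

int main()
{
    unsigned char const data[] = { '1', '2', '3', '4', '5',
                                   '6', '7', '8', '9' };

    // One-shot function template: the RMCA parameters are template arguments.
    std::uint32_t const one_shot =
        boost::crc<32, 0x04C11DB7, 0xFFFFFFFF, 0xFFFFFFFF, true, true>(
            data, sizeof data);

    // Piecemeal computation: crc_32_type is a ready-made typedef of
    // crc_optimal with those same parameters.
    boost::crc_32_type computer;
    computer.process_bytes(data, sizeof data);

    std::cout << std::hex << one_shot << ' '
              << computer.checksum() << '\n';  // both print cbf43926
}
```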

Note that some error-detection techniques merge their checksum results within the message data, while CRC checksums are either at the end (when augmented, without either kind of reflection, with a bit-width that's a multiple of byte size, and no XOR mask) or out-of-band.