# Introduction to Low-Density Parity-Check Codes

Low-density parity-check (LDPC) codes are forward error-correction codes, invented by Robert Gallager in his PhD thesis in 1962 [1]. After a long period of neglect, LDPC codes drew the attention of numerous researchers. Nowadays, design techniques enable LDPC codes to approach the capacity of classical memoryless channels within hundredths of a decibel [2]. Beyond the theoretical research, LDPC codes have been adopted in many fields, such as satellite digital video broadcasting and long-haul optical communication standards, and are strong candidates for the IEEE wireless local area network standard and for third-generation mobile communications [26].

Forward error-control codes, such as LDPC codes, extend the message bits with deliberately introduced redundant bits, namely check bits, to produce a codeword for each message. With the help of the added check bits, codewords are sufficiently distinct from one another. Therefore, even though some bits of the codeword may be corrupted by channel noise, the transmitted message can still be decoded at the receiver.
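In the simplest case the redundancy is a single parity-check bit chosen so that the codeword has even overall parity. The sketch below (the helper names are ours, for illustration) shows that this detects one flipped bit but lets two flips pass unnoticed:

```python
def add_parity(bits):
    """Append one check bit so the codeword has even overall parity."""
    return bits + [sum(bits) % 2]

def parity_ok(codeword):
    """The single parity-check equation: all bits must XOR to 0."""
    return sum(codeword) % 2 == 0

msg = [1, 0, 1, 1]
cw = add_parity(msg)          # [1, 0, 1, 1, 1]

one_flip = cw.copy(); one_flip[0] ^= 1
two_flips = cw.copy(); two_flips[0] ^= 1; two_flips[1] ^= 1

print(parity_ok(one_flip))    # False: a single error is detected
print(parity_ok(two_flips))   # True: a double error passes the check
```

This is exactly the weakness discussed next: any even number of flips preserves the parity constraint.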

Although the flip of a single bit due to channel noise can easily be detected with a single parity-check bit, this code cannot indicate which bit, or bits, were inverted. Moreover, since flipping any even number of bits of a codeword produces another word satisfying the constraint (2.1), errors affecting an even number of bits cannot be detected by this single parity-check code. Therefore, in order to detect more than a single bit error, more redundancy must be introduced in the form of additional parity-check bits. More sophisticated codes contain multiple parity-check equations, and each codeword must satisfy every one of them. To illustrate, we define another codeword c:

Actually, except for the sparseness of H, there is no difference between LDPC codes and other block codes. Any block code that can be represented by a sparse parity-check matrix can be decoded successfully using the LDPC iterative decoding methods. Nevertheless, it is generally not practical to find a sparse parity-check matrix for an arbitrary block code. Therefore, to design LDPC codes, the sparse parity-check matrix H is constructed first, and a generator matrix is then determined from it according to equation (2.8).

It is the sparseness of the parity-check matrix that constitutes the biggest difference between classical block codes and LDPC codes, and this difference determines how they are decoded. In general, classical block codes are decoded with ML-like decoding methods, which requires the codes to be short and designed algebraically so as to keep the decoding process simple. LDPC codes, by contrast, owing to the special properties of H, are decoded iteratively using a bipartite graph that represents their parity-check matrix [3].

LDPC codes can be divided into two categories: regular LDPC codes [1] and irregular LDPC codes [4], [5]. If each codeword bit participates in exactly $w_c$ parity-check equations and each parity-check equation involves exactly $w_r$ codeword bits, the parity-check matrix is called $(w_c, w_r)$-regular. Otherwise, it is an irregular LDPC code.

If equation (2.4) is satisfied by all of the codewords in the code, H is a valid parity-check matrix. It should be noted that an error-correction code can be described by more than one parity-check matrix; two parity-check matrices for the same code do not even need to have the same number of rows. What is required, however, is that their rank over GF(2) be the same, because the number of message bits k satisfies

$$k = n - \mathrm{rank}_2(H), \qquad (2.11)$$

where $\mathrm{rank}_2(H)$, the rank of the parity-check matrix H over GF(2), is the number of linearly independent rows of H. For instance, a regular parity-check matrix for the code of (2.2), with $w_c = 2$ and $w_r = 3$, is

$$H = \begin{bmatrix} 1 & 1 & 0 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 1 & 1 & 0 & 1 \end{bmatrix}. \qquad (2.12)$$
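Equation (2.11) can be checked numerically. The sketch below (an illustration only, with Gaussian elimination over GF(2) written out by hand) computes the rank of the regular matrix of (2.12); one of its four rows is linearly dependent, so the code has k = 6 − 3 = 3 message bits:

```python
def gf2_rank(matrix):
    """Rank over GF(2); each row is a list of 0/1 entries."""
    rows = [r[:] for r in matrix]
    rank, n = 0, len(rows[0])
    for col in range(n):
        # find a pivot row with a 1 in this column
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        # XOR the pivot row into every other row with a 1 in this column
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 0, 0, 1, 1],
     [0, 0, 1, 1, 0, 1]]

n = len(H[0])
print(gf2_rank(H))      # 3: one of the four rows is linearly dependent
print(n - gf2_rank(H))  # k = 3 message bits
```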

For an irregular parity-check matrix H, we define the degree distributions v and h of the code: $v_i$ is the fraction of columns of H with weight i, while $h_i$ is the fraction of rows of H with weight i. According to this definition, the original parity-check matrix of the code in (2.2) is irregular, with degree distribution $v_1 = 1/2$, $v_2 = 1/2$ and $h_3 = 1$.

In summary, a regular LDPC code needs to satisfy

$$m\,w_r = n\,w_c, \qquad (2.13)$$

while for an irregular LDPC code the degree distributions satisfy

$$m \sum_i i\,h_i = n \sum_i i\,v_i. \qquad (2.14)$$

Both equations simply count the edges of the Tanner graph twice: once from the check-node side and once from the bit-node side.

As Tanner suggested [3], LDPC codes can be represented by bipartite graphs. This bipartite graph, namely the Tanner graph, consists of two sets of vertices: n vertices called variable nodes, corresponding to the bits of the codeword, and m vertices called check nodes, corresponding to the set of parity-check constraints which define the code. An edge connects a variable node and a check node if the corresponding code bit is included in the corresponding parity-check equation. Therefore, the number of edges in the bipartite graph equals the number of non-zero entries in the parity-check matrix. The Tanner graph in Figure 2.1 defines an irregular code.
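The correspondence between H and its Tanner graph is easy to make concrete. The sketch below (using the regular matrix of (2.12) for illustration) lists one edge per non-zero entry of H:

```python
H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 0, 0, 1, 1],
     [0, 0, 1, 1, 0, 1]]

# edge (j, i): check node j is connected to variable node i where H[j][i] = 1
edges = [(j, i) for j, row in enumerate(H) for i, v in enumerate(row) if v]

# one edge per non-zero entry of H
print(len(edges))                       # 12
# the neighbours of check node 0 are the bits in its parity-check equation
print([i for j, i in edges if j == 0])  # [0, 1, 3]
```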

[Figure 2.1: variable nodes and check nodes connected by the edges of the Tanner graph.]

Figure 2.1: The Tanner graph representation of an irregular parity-check matrix

A cycle in a Tanner graph is a sequence of connected vertices and edges that starts and ends at the same vertex and contains no other vertex more than once. The bold edges in Figure 2.1 indicate a cycle. The number of edges a cycle contains is the length of the cycle, and the length of the smallest cycle determines the girth of a Tanner graph.
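The girth can be found mechanically: run a breadth-first search from every vertex of the Tanner graph and record the shortest cycle closed by a non-tree edge. The sketch below (function and variable names are ours) applies this to the graph of the regular matrix of (2.12), which has girth 6:

```python
from collections import deque

def girth(adj):
    """Length of the shortest cycle in an undirected graph (inf if acyclic).
    adj maps each vertex to a list of its neighbours."""
    best = float("inf")
    for src in adj:
        dist, parent = {src: 0}, {src: None}
        q = deque([src])
        while q:
            x = q.popleft()
            for y in adj[x]:
                if y not in dist:
                    dist[y] = dist[x] + 1
                    parent[y] = x
                    q.append(y)
                elif parent[x] != y:
                    # a non-tree edge closes a cycle of at most this length
                    best = min(best, dist[x] + dist[y] + 1)
    return best

H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 0, 0, 1, 1],
     [0, 0, 1, 1, 0, 1]]

# build the bipartite adjacency: ("c", j) check nodes, ("v", i) variable nodes
adj = {}
for j, row in enumerate(H):
    for i, v in enumerate(row):
        if v:
            adj.setdefault(("c", j), []).append(("v", i))
            adj.setdefault(("v", i), []).append(("c", j))

print(girth(adj))   # 6: this Tanner graph contains no 4-cycle
```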

In summary, the sparseness of the bipartite graph structure is the essential property that guarantees both the performance of LDPC codes and the efficiency of their decoding algorithms. We illustrate the decoding algorithms and code design in the following sections.

**2.2 Message-Passing Decoding**

The classical LDPC decoding algorithms belong to a class called message-passing algorithms [1]. The name describes their operation: messages pass along the edges of the Tanner graph, from bit nodes to check nodes and from check nodes back to bit nodes. Since the information passes iteratively between bit nodes and check nodes, this class of algorithms is also called iterative decoding, which was analyzed in detail in [6]. Message-passing algorithms can be further subdivided according to the type of messages passed or the type of operation conducted at the nodes. For example, in bit-flipping decoding the messages are binary, while in belief-propagation decoding the messages are probabilities that indicate a level of belief regarding the values of the bit nodes. In the following parts, we focus on some typical message-passing decoding algorithms [1], [7].

**2.2.1 Message-passing decoding on the binary erasure channel**

Message-passing decoding on the binary erasure channel (BEC) provides a simple example with which to illustrate these algorithms. The input of this channel is a sequence of binary bits, while the output alphabet consists of the binary bits and one extra element, called the erasure 'e'. When transmitted through the BEC, each bit is either received correctly or completely erased, the latter with some probability ε. Therefore, the decoder only needs to determine the values of the erased bits.

In the message-passing decoder, if only one bit in a parity-check equation is erased, the check node can determine the correct value of that erased bit by choosing the value that satisfies the parity-check equation.

The basic process of message-passing decoding on the BEC is as follows. After the messages are received, let $M_i$ denote the value, '0', '1' or 'e' (erased), of the i-th bit node. Each bit node sends its channel information to all the check nodes directly connected to it. If a check node receives only one 'e' message, it determines the correct value of that erased bit by satisfying its parity-check equation; otherwise, it keeps the message as 'e'. The message $E_{j,i}$, denoting the message from the j-th check node to the i-th bit node, is then sent to the corresponding bit nodes. If an erased bit node receives a message which is '0' or '1', the bit node changes its value to the incoming '0' or '1', respectively. This decoding process does not halt until all bit values are known, or the maximum number of iterations is reached.

Consider, for instance, that the codeword c = (1, 0, 0, 1, 0, 1), generated according to the parity-check matrix (2.12), is transmitted through the BEC and y = (1, e, e, 1, 0, e) is received. Following the basic message-passing decoding process on the BEC, the messages are initialized to M = (1, e, e, 1, 0, e). In the first step of the first iteration, the check nodes calculate the check-node messages in terms of their parity-check equations. Since the 1st check node connects bit nodes 1, 2 and 4, with incoming messages '1', 'e' and '1' respectively, the outgoing message $E_{1,2}$ from the 1st check node to the 2nd bit node is 1 ⊕ 1 = '0'. For the 2nd check node, which connects bit nodes 2, 3 and 5 with incoming messages 'e', 'e' and '0' respectively, two 'e' messages are received, so it cannot determine the value of any bit, and all its outgoing messages are 'e'. In the same way, the 3rd check node, which connects bit nodes 1, 5 and 6 with incoming messages '1', '0' and 'e', sends $E_{3,6}$ = 1 ⊕ 0 = '1' to the 6th bit node, while the 4th check node cannot determine the values of the 3rd, 4th and 6th bits, due to the two 'e' messages from the 3rd and 6th bit nodes. In the second step, each bit node whose message is 'e' updates its value using the incoming messages. The 2nd bit, which is unknown, has incoming messages $E_{1,2}$ = '0' and $E_{2,2}$ = 'e', so it updates its value to '0'. The 6th bit, which is unknown as well, has incoming messages $E_{3,6}$ = '1' and $E_{4,6}$ = 'e', so its value is updated to '1'. However, the 3rd bit node, also unknown, has incoming messages $E_{2,3}$ = 'e' and $E_{4,3}$ = 'e', so it cannot change its value. Hence, after the second step of the first iteration, the messages are updated to M = (1, 0, e, 1, 0, 1) from the previous M = (1, e, e, 1, 0, e).

Testing the messages, there is still an unknown bit, namely the 3rd bit, so the decoding process above is iterated. Repeating the steps of the first iteration, the 2nd and 4th check nodes now each see only one erasure and both send '0' to the 3rd bit node, so at the end of the second iteration the messages are M = (1, 0, 0, 1, 0, 1). Testing again, there is no unknown bit; the algorithm halts and returns the correctly decoded codeword c = (1, 0, 0, 1, 0, 1). The whole process of message-passing decoding on the BEC is illustrated in Figure 2.2.
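The procedure above can be condensed into a short erasure decoder. The sketch below uses a simplified schedule (it sweeps the check equations sequentially and uses newly recovered bits immediately, rather than the strict two-phase exchange described in the text, so it may converge in fewer passes), with None standing for the erasure 'e':

```python
H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 0, 0, 1, 1],
     [0, 0, 1, 1, 0, 1]]

def decode_bec(H, received, max_iters=10):
    """Message-passing decoding on the BEC: a check with exactly one
    erased neighbour resolves it as the XOR of its known bits."""
    bits = list(received)
    for _ in range(max_iters):
        if None not in bits:
            break                       # every bit known: decoding finished
        progress = False
        for row in H:
            idx = [i for i, h in enumerate(row) if h]
            erased = [i for i in idx if bits[i] is None]
            if len(erased) == 1:        # exactly one unknown: parity fixes it
                bits[erased[0]] = sum(bits[i] for i in idx
                                      if bits[i] is not None) % 2
                progress = True
        if not progress:
            break                       # stuck: >= 2 erasures in every check
    return bits

# the worked example: y = (1, e, e, 1, 0, e)
print(decode_bec(H, [1, None, None, 1, 0, None]))   # [1, 0, 0, 1, 0, 1]
```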

[Figure 2.2: panels showing the initialization, the check messages and bit messages of iteration 1, and the check messages and bit messages of iteration 2.]

Figure 2.2: Message-passing decoding of the received messages on the binary erasure channel. The solid arrow, the black dotted arrow and the red dashed arrow indicate the messages '1', '0' and 'e', respectively.

**2.2.2 Bit-flipping decoding**

The bit-flipping algorithm, first proposed by Gallager [1], is a hard-decision message-passing decoding algorithm, in which the messages passed along the edges of the Tanner graph are binary. A check node determines its outgoing values by checking whether the incoming messages satisfy its parity-check equation.

The bit-flipping decoding algorithm proceeds as follows. At the initialization stage, each bit node sends the bit value it received from the channel to the check nodes directly connected to it. In the first step of the main iteration, each check node determines an outgoing value for each connected bit node and sends it; the outgoing value is chosen so that, together with the incoming values on the other edges, it satisfies the parity-check equation. The second step is the bit-node update: if the majority of the incoming messages to a bit node differ from its current value, the bit node flips its current value. After the messages from the bit nodes are sent to the check nodes again, each check node tests whether its parity-check equation is satisfied. If all parity-check equations are satisfied, the algorithm halts; otherwise the decoding is repeated until the maximum number of decoder iterations is reached, in which case a failure to converge to a codeword is declared.

Consider that the codeword c = (1, 0, 0, 1, 0, 1) is sent through the channel and y = (0, 0, 0, 1, 0, 1) is received, with the 1st bit in error. At the initialization stage, each bit node receives its value from the channel and sends it to the check nodes connected directly. In the first step, the 1st check node, which connects the 1st, 2nd and 4th bit nodes, determines that the values sent to these bit nodes are '1', '1' and '0', respectively. Similarly, the messages from the 2nd check node to bit nodes 2, 3 and 5 are '0', '0' and '0'; from the 3rd check node to bit nodes 1, 5 and 6 they are '1', '1' and '0'; and from the 4th check node to bit nodes 3, 4 and 6 they are '0', '1' and '1'. In the second step, the majority of the messages from the 1st and 3rd check nodes to the 1st bit node differ from its current value '0', so the 1st bit node flips its value to '1'. For the 2nd bit node, which receives the messages '1' and '0' from the 1st and 2nd check nodes, there is no majority differing from the current value '0', so the value remains '0' without a flip. In the same way, the other bits keep their current values, so the new messages are (1, 0, 0, 1, 0, 1). These messages are sent to the check nodes again and tested. Since all the parity-check equations are now satisfied, the decoding algorithm terminates and returns the decoded codeword (1, 0, 0, 1, 0, 1). The whole bit-flipping decoding process is illustrated in Figure 2.3.
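The bit-flipping rules translate directly into code. In the sketch below (names are ours; the flip rule is the strict-majority rule described above), each check sends every neighbouring bit the XOR of the other bits in its equation, and a bit flips when most of its check messages disagree with its current value:

```python
H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 0, 0, 1, 1],
     [0, 0, 1, 1, 0, 1]]

def bit_flip_decode(H, received, max_iters=20):
    bits = list(received)
    n = len(bits)
    for _ in range(max_iters):
        if all(sum(bits[i] for i, h in enumerate(row) if h) % 2 == 0
               for row in H):
            return bits                 # all parity checks satisfied
        flip = []
        for i in range(n):
            # message to bit i from each check on i: XOR of the *other* bits
            msgs = [sum(bits[k] for k, h in enumerate(row) if h and k != i) % 2
                    for row in H if row[i]]
            disagree = sum(1 for m in msgs if m != bits[i])
            if 2 * disagree > len(msgs):    # strict majority disagrees
                flip.append(i)
        if not flip:
            return bits                 # no progress: decoding failure
        for i in flip:
            bits[i] ^= 1
    return bits

# worked example: codeword (1,0,0,1,0,1) received with an error in bit 1
print(bit_flip_decode(H, [0, 0, 0, 1, 0, 1]))   # [1, 0, 0, 1, 0, 1]
```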

[Figure 2.3: panels showing the initialization, the check messages, the bit update, and the final test.]

Figure 2.3: Bit-flipping decoding of the received messages. The solid arrow and the dashed arrow indicate the messages '1' and '0', respectively. A cross represents an unsatisfied parity-check equation, while a tick represents a satisfied one.

However, cycles in the Tanner graph make the iterative decoding process less effective. Consider, for instance, decoding a received word for the code whose Tanner graph is illustrated in Figure 2.4. In this Tanner graph there is a 4-cycle, highlighted in bold, between the first two bit nodes and the first two check nodes. As the figure shows, each of the first two codeword bits is included in the same two parity-check equations. The detrimental result is that if one of these two bits is incorrect, both parity-check equations are unsatisfied, it is impossible to determine which bit is in error, and the algorithm fails to converge.


Figure 2.4: Bit-flipping decoding of the received codeword. The edges in bold indicate a 4-cycle. The solid arrow and the dashed arrow indicate the messages '1' and '0', respectively. A cross represents an unsatisfied parity-check equation, while a tick represents a satisfied one.

**2.2.3 Sum-product decoding**

In contrast to the bit-flipping decoding algorithm, the sum-product decoding algorithm is a soft-decision message-passing algorithm, also first proposed by Gallager [1]. However, it was not developed further until some 30 years later, owing to the limited computing resources of the early 1960s; it was rediscovered by researchers during work on Turbo decoding [6].

The obvious difference from the bit-flipping algorithm is that the messages transmitted along the edges are probabilities, usually represented as log-likelihood ratios. Since these probabilities indicate a confidence level about the values of the codeword bits, the algorithm is also called belief-propagation decoding. Each message is a real value with two parts: its sign represents a binary decision, and its magnitude indicates the confidence in that decision. Therefore, compared with bit-flipping, which uses only hard decisions, the sum-product algorithm with its soft information improves the effectiveness of LDPC decoding significantly.

For a bit $c_i$, $p_i$ denotes the probability that $c_i = 1$, and $1 - p_i$ denotes the probability that $c_i = 0$.
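The log-likelihood-ratio form of these messages, and the standard check-node combining rule of sum-product decoding (often called the tanh rule), can be sketched as follows. The sign convention (positive LLR favours '0') and the function names are ours, for illustration:

```python
import math

def llr(p1):
    """Log-likelihood ratio of a bit: log((1-p)/p), where p is the
    probability that the bit is 1. Positive favours 0, negative favours 1."""
    return math.log((1.0 - p1) / p1)

def check_update(incoming):
    """Extrinsic LLR a check node sends along one edge, computed from the
    LLRs arriving on its other edges: 2 * atanh( prod tanh(L/2) )."""
    prod = 1.0
    for L in incoming:
        prod *= math.tanh(L / 2.0)
    return 2.0 * math.atanh(prod)

# two bits that are probably 0: their XOR is probably 0 as well ...
print(check_update([llr(0.1), llr(0.1)]) > 0)   # True
# ... while a probable 0 and a probable 1 favour 1 for the remaining bit
print(check_update([llr(0.1), llr(0.9)]) < 0)   # True
```

The magnitude of the returned LLR shrinks as any incoming belief weakens, matching the intuition that a parity check is only as reliable as its least reliable participant.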

[1] R. G. Gallager, Low-Density Parity-Check Codes. Cambridge, MA: MIT Press, 1963.

[2] S. Y. Chung, G. D. Forney, Jr., T. J. Richardson, and R. L. Urbanke, "On the design of low-density parity-check codes within 0.0045 dB of the Shannon limit," IEEE Commun. Letters, vol. 5, no. 2, pp. 58-60, February 2001.

[3] R. M. Tanner, "A recursive approach to low complexity codes," IEEE Trans. Inform. Theory, vol. IT-27, no. 5, pp. 533-547, September 1981.

[4] M. G. Luby, M. Mitzenmacher, M. A. Shokrollahi, D. A. Spielman, and V. Stemann, "Practical loss-resilient codes," in Proc. 30th ACM Symp. on the Theory of Computing, 1998, pp. 249-258.

[5] M. G. Luby, M. Mitzenmacher, M. A. Shokrollahi, and D. A. Spielman, "Improved low-density parity-check codes using irregular graphs," IEEE Trans. Inform. Theory, vol. 47, no. 2, pp. 585-598, February 2001.

[6] C. Berrou, A. Glavieux, and P. Thitimajshima, "Near Shannon limit error-correcting coding and decoding: Turbo-codes," in Proc. IEEE International Conference on Communications (ICC), 1993, pp. 1064-1070.

[7] S. J. Johnson, "Low-density parity-check codes," in Wiley Encyclopedia of Telecommunications, J. G. Proakis, Ed. New York: Wiley, 2003.

[26] 3rd Generation Partnership Project (3GPP), "Technical specification group radio access network; Multiplexing and channel coding (FDD), 3GPP TS 25.212 V4.0.0 (2000-12)." [Online]. Available: http://www.3gpp.org.
