Information Theory: Coding Theorems for Discrete Memoryless Systems Essay


16-APSK digital communication system

The 16-APSK digital communication system is a digital modulation scheme that uses both amplitude and phase changes of the carrier signal to carry the information being conveyed. The 16-APSK scheme is currently being considered for technologies such as the 5G mobile communication system because of its high spectral efficiency, robustness to noise and ease of use. The main concept behind 16-APSK is the reduction of the number of power levels needed to transmit data (Chayratsami and Thuaykaew, 2014, p. 323). The advantages of the 16-APSK digital communication system are that it uses fewer amplitude levels, has a lower peak-to-average power ratio, and can be combined with Gray coding to minimize the bit error rate (BER), as the sketch below illustrates.
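A minimal sketch in Python (assuming NumPy; the ring ratio of 2.57 is an example value, not a figure from the cited paper) of how a 4+12 16-APSK constellation is built from only two amplitude levels and sixteen phases:

```python
import numpy as np

# Sketch of a 4+12 16-APSK constellation in complex baseband.
# Only two amplitude levels (the two ring radii) are used, which is what
# keeps the peak-to-average power ratio lower than in 16-QAM.
r1, r2 = 1.0, 2.57   # inner/outer radii; gamma = r2/r1 = 2.57 is an assumed example
inner = r1 * np.exp(1j * (np.pi / 4 + (np.pi / 2) * np.arange(4)))    # 4-PSK ring
outer = r2 * np.exp(1j * (np.pi / 12 + (np.pi / 6) * np.arange(12)))  # 12-PSK ring
constellation = np.concatenate([inner, outer])  # 16 points -> 4 bits per symbol

print("amplitude levels used:", sorted(set(np.round(np.abs(constellation), 3))))
```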


The 16-APSK modulation is used, for example, in the Digital Video Broadcasting - Satellite to Handhelds (DVB-SH) standard in its satellite transmission mode. The 16-APSK constellation comprises two concentric rings of regularly spaced PSK points: 4 points on the inner ring (R1) and 12 points on the outer ring (R2). One of the main parameters of the 16-APSK modulation is the ratio of the radii of the two concentric circles on which the constellation points are positioned. The ratio of the outer-ring to the inner-ring radius is often matched to the forward error correction (FEC) coding rate so that performance can be optimized for the given channel's attributes (Chayratsami and Thuaykaew, 2014, p. 324). The attributes of the 16-APSK digital communication system can therefore be used to overcome problems encountered with other modulation formats that are currently in wide use.
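A hedged sketch of how the ring ratio might be swept when tuning the constellation for a given FEC rate; the specific ratios 2.57, 2.85 and 3.15 are assumed example values only:

```python
import numpy as np

# Sketch: the ring ratio gamma = R2/R1 is the key design parameter of 16-APSK.
# A typical optimization fixes the average symbol energy to 1 and sweeps gamma
# for a given FEC coding rate; the ratios below are assumed example values.
def apsk16(gamma):
    """Return a unit-average-energy 4+12 16-APSK constellation for ratio gamma."""
    inner = np.exp(1j * (np.pi / 4 + (np.pi / 2) * np.arange(4)))
    outer = gamma * np.exp(1j * (np.pi / 12 + (np.pi / 6) * np.arange(12)))
    pts = np.concatenate([inner, outer])
    return pts / np.sqrt(np.mean(np.abs(pts) ** 2))   # normalize mean power to 1

for gamma in (2.57, 2.85, 3.15):
    pts = apsk16(gamma)
    radii = sorted(set(np.round(np.abs(pts), 3)))
    print(f"gamma = {gamma}: ring radii after normalization = {radii}")
```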

Shannon's Information Capacity Theorem

Shannon's information capacity theorem states that the capacity of a continuous channel with a bandwidth of W Hz, disturbed by additive white Gaussian noise with a power spectral density of n0/2, is given by C = W log2(1 + S/N) bits per second. S is the average transmitted signal power, and N, the average noise power, is N = n0W (the noise spectral density n0/2 integrated over the band from -W to W) (Csiszar and Korner, 2011, n.p.). In the photographic analogy, W, the channel bandwidth, corresponds to the sharpness of the given image, S corresponds to the signal energy, and N, the noise energy, corresponds to the grain found in most films.
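The capacity formula can be evaluated directly; the bandwidth, noise density and SNR below are assumed example numbers, not values from the cited sources:

```python
import numpy as np

# Shannon capacity of an AWGN channel: C = W * log2(1 + S/N) with N = n0 * W.
def awgn_capacity(bandwidth_hz, signal_power_w, n0):
    noise_power = n0 * bandwidth_hz          # N = n0 * W
    return bandwidth_hz * np.log2(1.0 + signal_power_w / noise_power)

# Illustrative example: a 1 MHz channel at 20 dB SNR.
W = 1e6
n0 = 1e-9                                    # one-sided noise PSD in W/Hz (assumed)
S = (10 ** (20 / 10)) * n0 * W               # S chosen so that S/N = 100 (20 dB)
print(f"C = {awgn_capacity(W, S, n0) / 1e6:.2f} Mbit/s")
```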

The Shannon information capacity is thus the maximum amount of information that can be conveyed through a given channel without error. The actual amount of data passed successfully depends on the code used and can be expressed in terms of the signal-to-noise ratio (SNR), which is limited by the bandwidth, the level of the signal, and the level of the noise. The capacity gives the number of bits of information that can be conveyed per second without error over a channel of a given bandwidth, where the signal power is expressed in watts and the signal is exposed to additive white Gaussian noise (Csiszar and Korner, 2011, n.p.).

The Shannon information limit is thus the fundamental maximum transmission capacity that can be attained on any channel, regardless of the combination of coding, transmission and decoding technique. The bandwidth restricts the rate at which data symbols can be conveyed over a particular channel, while the SNR restricts the amount of data that can be packed into every transmitted symbol. Increasing the SNR makes each transmitted symbol more robust against noise (Schreuder, 2014, p. 261). The SNR, which is measured at the receiver's end, is a function of the quality of the signal, the power of the signal, and the attributes of the channel. In order to increase the Shannon information capacity, the SNR and the assigned bandwidth have to be traded against each other; bandwidth can therefore be traded off for SNR. However, even as the bandwidth tends to infinity, the capacity of the channel remains finite, because increasing the bandwidth also increases the noise power, as the sketch below illustrates.
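A short numerical sketch of this trade-off, with assumed example values of S and n0, showing the capacity saturating at the finite limit S/(n0 ln 2) as the bandwidth grows:

```python
import numpy as np

# Sketch of the bandwidth/SNR trade-off: as W grows, N = n0*W grows with it,
# so C = W*log2(1 + S/(n0*W)) saturates at the finite limit S/(n0*ln 2).
S, n0 = 1e-3, 1e-9                           # assumed example values
for W in (1e5, 1e6, 1e7, 1e8, 1e9):
    C = W * np.log2(1.0 + S / (n0 * W))
    print(f"W = {W:12.0f} Hz  ->  C = {C / 1e6:7.3f} Mbit/s")
print("limit S/(n0*ln 2) =", S / (n0 * np.log(2)) / 1e6, "Mbit/s")
```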

Every communication system has a maximum rate of information transfer that is referred to as the channel capacity (C). When the information rate (R) is less than the channel capacity (C), the probability of error can be made arbitrarily small by the use of coding. In order to achieve even lower error probabilities, the engineer has to work with longer blocks of the data signal, which entails much longer delays and higher computational demands. Therefore, if R is kept below C, the transmission of information can be achieved with an arbitrarily small probability of error despite the noise (Schreuder, 2014, p. 262). However, the proof of Shannon's information capacity theorem is not constructive: it shows that suitable codes exist but does not propose a specific coding technique for reducing the error. This lack of a constructive proof impedes the development of coding methods that allow the information rate to approach the channel capacity (Csiszar and Korner, 2011, n.p.).

The Relationship between BER and Eb/N0 for Different Constellations

The ratio of the energy per bit (Eb) to the noise power spectral density (N0) is a major parameter in digital communication and data transmission. The efficiency of any communication system is usually measured by the relationship between BER and Eb/N0. Eb/N0 is considered a normalized SNR measure and is thus used to compare the BER performance of various digital modulations without considering the bandwidth (Maral, 2004, p. 94). Eb is the signal energy associated with each user data bit; it is equal to the signal power divided by the user's bit rate. The BER is high for low Eb/N0 values and decreases as Eb/N0 increases. For coded bitstreams, the BER falls towards zero at lower Eb/N0 values than for uncoded ones (Angueira and Romo, 2012, n.p.). When similar input voice signals are checked for various lengths of message bits, however, the coded bitstream can show a higher BER than the uncoded one at very low Eb/N0, below the code's threshold. The BER versus Eb/N0 curves also differ between constellation schemes, as the comparison below illustrates.
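For illustration, the standard textbook BER approximations for Gray-coded BPSK/QPSK and 16-QAM over an AWGN channel can be compared numerically (SciPy's erfc is assumed available); this is a generic comparison, not the exact curves of the cited sources:

```python
import numpy as np
from scipy.special import erfc

# Approximate BER-vs-Eb/N0 curves for Gray-coded constellations over AWGN
# (standard textbook approximations, used here only for comparison).
def ber_bpsk_qpsk(ebno_db):
    ebno = 10 ** (ebno_db / 10)
    return 0.5 * erfc(np.sqrt(ebno))          # Pb = Q(sqrt(2*Eb/N0))

def ber_16qam(ebno_db):
    ebno = 10 ** (ebno_db / 10)
    # Gray-coded 16-QAM approximation: Pb ~ (3/8) * erfc(sqrt(0.4 * Eb/N0))
    return (3.0 / 8.0) * erfc(np.sqrt(0.4 * ebno))

for ebno_db in (0, 4, 8, 12):
    print(f"Eb/N0 = {ebno_db:2d} dB:  BPSK/QPSK {ber_bpsk_qpsk(ebno_db):.2e},"
          f"  16-QAM {ber_16qam(ebno_db):.2e}")
```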

Maral (2004, p. 94) argues that, using channel characterization and the BER versus Eb/N0 relationship, it has been shown that a single-carrier system can transmit good-quality signals at Eb/N0 values below 10 dB even without a channel equalizer at the receiver's end. When plotting curves for the various constellation schemes, the conversion of the values on the horizontal axis is carried out independently for every curve.

Reed-Solomon (255,233) Block Code

The Reed-Solomon codes are a crucial family of error-correcting codes that were developed to correct multiple errors in digital communication and storage systems such as digital television, high-speed modems, satellite systems and mobile phones. The RS (255, 233) code has a high coding rate, which makes it suitable for applications such as the storage and transmission of data. The probability of an error remaining in the decoded data when the RS (255, 233) code is used is much lower than when it is not used.

The first stage of the Reed-Solomon (255, 233) decoding cycle is the syndrome calculation, in which the received word is divided by the generator polynomial to check whether the remainder is 0 (Kumar and Gupta, 2011, p. 11). When the remainder is not 0, errors exist in the received word and the RS (255, 233) decoder proceeds to the next stage, the determination of the error-locator polynomial. The syndrome calculation produces a set of '2t' syndromes for each code word, whose values depend on the positions and values of the errors, as sketched below. After the syndrome polynomial has been computed, the RS (255, 233) decoder calculates the error values and their positions. This usually involves solving simultaneous equations with 't' unknowns using fast algorithms that exploit the special matrix structure of the RS (255, 233) block code to minimize the computational effort required; the syndrome equations, whose error terms are unknown before decoding, are also solved at this stage (Abdesslam et al., 2016, p. 12). The error-locator polynomial is usually found using the Berlekamp-Massey algorithm or Euclid's algorithm for greater efficiency.
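A minimal, self-contained sketch of the syndrome-calculation stage for a Reed-Solomon code over GF(2^8); the primitive polynomial 0x11d and the toy received word are assumptions for illustration, and a real RS (255, 233) decoder would compute 2t = 22 syndromes per block:

```python
# Sketch of RS syndrome calculation over GF(2^8) with primitive polynomial 0x11d.
# Build exponential and logarithm tables for the field.
GF_EXP = [0] * 512
GF_LOG = [0] * 256
x = 1
for i in range(255):
    GF_EXP[i] = x
    GF_LOG[x] = i
    x <<= 1
    if x & 0x100:
        x ^= 0x11d
for i in range(255, 512):
    GF_EXP[i] = GF_EXP[i - 255]

def gf_mul(a, b):
    """Multiply two GF(2^8) elements via the log/exp tables."""
    if a == 0 or b == 0:
        return 0
    return GF_EXP[GF_LOG[a] + GF_LOG[b]]

def syndromes(received, nsym):
    """Evaluate the received polynomial at alpha^1 .. alpha^nsym.

    All-zero syndromes mean the word is accepted as error-free; otherwise
    decoding proceeds to the error-locator polynomial (Berlekamp-Massey).
    """
    synd = []
    for i in range(1, nsym + 1):
        s = 0
        for coeff in received:                 # Horner evaluation at alpha^i
            s = gf_mul(s, GF_EXP[i]) ^ coeff
        synd.append(s)
    return synd

print(syndromes([1, 2, 3, 4], nsym=4))         # toy word, not a real codeword
```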

The third stage is the solution of the error-locator polynomial, which involves evaluating the polynomial to find its roots; these roots point to the locations of the errors in the received message. The fourth step is the calculation of the error values once the errors have been located. This stage uses the syndromes and the roots of the error-locator polynomial to find the error values. If an error symbol has a set bit, then the corresponding bit of the received symbol is in error and has to be inverted. The correction process is automated by reading the received symbols once more and correcting the erroneous symbols (Abdesslam et al., 2016, p. 13), as in the correction sketch below.
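A hedged sketch of that final correction step, assuming the earlier stages have already produced the error positions and error values (the example numbers are hypothetical):

```python
def apply_corrections(received, error_positions, error_values):
    """Final correction step (sketch): once the error locations and values
    are known, each erroneous symbol is fixed by adding its error value
    back onto the received symbol (XOR, i.e. addition in GF(2^8))."""
    corrected = list(received)
    for pos, err in zip(error_positions, error_values):
        corrected[pos] ^= err          # flips exactly the erroneous bits
    return corrected

# Hypothetical example: the symbol at position 2 was corrupted by error value 0x1b.
print(apply_corrections([0x10, 0x22, 0x7f, 0x05], [2], [0x1b]))
```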

When plotting graphs to show the variation of BER with Eb/N0 for the RS (255, 233) code, similar voice signals are checked for different lengths of the encoded messages and for the error-correcting capability of the RS (255, 233) block code. At very low Eb/N0 values the coded bit stream can show a higher BER than the uncoded one, but above this threshold the RS (255, 233) block code provides a coding gain that lowers the BER (Ketterling, 2003, p. 147). A rough estimate of how often a block exceeds the code's correction capability is sketched below.
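One way to quantify that correction capability is the probability that a block suffers more symbol errors than t = (n - k)/2 = 11, the number the RS (255, 233) code can correct; a minimal sketch with assumed channel symbol error probabilities:

```python
from math import comb

# Sketch: probability that an RS(n, k) block contains more than t = (n - k)//2
# symbol errors, i.e. more than the code can correct, as a function of the
# raw channel symbol error probability p (example values of p assumed).
def block_failure_probability(n, k, p):
    t = (n - k) // 2
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(t + 1, n + 1))

n, k = 255, 233                      # t = 11 correctable symbol errors per block
for p in (1e-2, 5e-3, 1e-3):
    print(f"p = {p:.0e}: P(block not correctable) = "
          f"{block_failure_probability(n, k, p):.3e}")
```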

References

Abdesslam, H., El Habti, E.I.A., El Abbassi, A. and Mohamed, H., 2016. Performance Study of RS (255, 239) and RS (255, 233) Used Respectively in DVB-T and NASA. International Journal of Engineering Research and Applications, 6(11), pp.10-14.

Angueira, P. and Romo, J., 2012. Microwave line of sight link engineering. John Wiley & Sons.

Chayratsami, P. and Thuaykaew, S., 2014. The optimum ring ratio of 16-APSK in LTE uplink over nonlinear system. In 16th International Conference on Advanced Communication Technology (ICACT), 2014 (pp. 322-328). IEEE.

Csiszar, I. and Korner, J., 2011. Information theory: coding theorems for discrete memoryless systems. Cambridge University Press.

Ketterling, H.P., 2003. Introduction to digital professional mobile radio. Artech House.

Kumar, S. and Gupta, R., 2011. Bit error rate analysis of Reed-Solomon code for efficient communication system. International Journal of Computer Applications, 30(12), pp.11-15.

Maral, G., 2004. VSAT networks. John Wiley & Sons.

Schreuder, D.A., 2014. Vision and Visual Perception. Archway Publishing.
