Image Compression: Exploring Beyond Huffman Coding


Introduction

Image compression is one of the vital applications in the field of digital image processing. Compression removes redundant data from an image, thereby reducing the memory space required without necessarily distorting the image. The Huffman coding algorithm is a natural starting point because it is simple to implement. The compression ratio is directly related to the storage space required: the better the compression ratio, the less storage space is needed.


Different algorithms can be used to perform the compression; some keep the same information as the original and are described as lossless, while others are lossy and discard some of the original information upon compression. Every compression method is designed for a specific kind of image and may not work well with unintended images. Many algorithms let you change parameters and adjust the compression to obtain a better image [10]. For authenticity, it is better to apply a cryptographic method before concealing the message. Many popular algorithms can be used in cryptography, including the Advanced Encryption Standard (AES), Blowfish, the Data Encryption Standard (DES), RC4, and Rivest-Shamir-Adleman (RSA).

In this research, we combine cryptography and steganography using RSA together with Huffman coding, DWT, and RLE, which can be efficient.

Related Work

This section introduces the relevant background on image compression. Several state-of-the-art image compression techniques are already available, the Discrete Wavelet Transform and wavelet-based compression being two examples; these methods are vital for many image processing applications. RSA is a renowned asymmetric cryptographic algorithm. The Montgomery representation described in [3] plays an essential role in RSA implementations and even in elliptic curve cryptography. In [5], the authors give their insights on different applications of the Montgomery multiplication algorithm, which forms the base of an optimization scheme effective in modular exponentiation. In [14], the need to eliminate the transfer of two random numbers is explained as a way of implementing RSA securely.

The Huffman-related coding scheme discussed there depends on an integer discrete cosine transform and a novel, efficient, low-complexity entropy encoder that uses the adaptive Golomb-Rice algorithm rather than Huffman tables. Quantization is an important module in a wavelet-transform-based codec and minimizes visual redundancy; it is also the only operation that introduces distortion. PSNR is used to judge the quality of the input images, and the DWT algorithm is essential in providing a better compression ratio [18]. The DWT is computed through sub-sampling and convolution with a pair of filters, producing a low-pass (approximation) output and a detailed high-pass output. Multiresolution decomposition can be achieved by repeating the sub-sampling and convolution of the two filters on the approximation component. For two-dimensional signals there are separable wavelets, where the computation decomposes into horizontal processing followed by vertical processing using only one-dimensional filters.

According to Patel et al. (2016) [11], image compression using Huffman coding is simple and straightforward. Compression of images is vital since its implementation requires less memory and is convenient. The purpose of that paper is to give insight into Huffman coding and how it removes redundant bits from a piece of information by examining several parameters, such as the peak signal-to-noise ratio (PSNR), bits per pixel, compression ratio, and mean square error, for input images of various sizes; new ways of splitting such images also provide excellent results while keeping the data content secure. The compression technique has many advantages in image analysis, including security of the image.
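The quality measures referred to above can be made concrete with a short sketch. The following Python functions compute MSE, PSNR, compression ratio, and bits per pixel for 8-bit grayscale images stored as NumPy arrays; the definitions are the standard ones and are not taken from [11].

    import numpy as np

    def mse(original, reconstructed):
        # Mean squared error between two images of the same shape.
        diff = original.astype(np.float64) - reconstructed.astype(np.float64)
        return np.mean(diff ** 2)

    def psnr(original, reconstructed, max_value=255.0):
        # Peak signal-to-noise ratio in decibels for 8-bit images.
        error = mse(original, reconstructed)
        if error == 0:
            return float("inf")  # identical images
        return 10.0 * np.log10((max_value ** 2) / error)

    def compression_ratio(original_bytes, compressed_bytes):
        # Uncompressed size divided by compressed size.
        return original_bytes / compressed_bytes

    def bits_per_pixel(compressed_bytes, width, height):
        # Average number of stored bits per image pixel.
        return (compressed_bytes * 8) / (width * height)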

S. Srikanth et al. (2013) [20] also present an image compression technique that uses embedded wavelet-based image coding together with Huffman coding to aid in further compression. They utilized the EZW and SPIHT algorithms accompanied by Huffman encoding, applied different wavelet families, and compared the PSNRs to rate the families. The algorithms were tested on several images, and the results obtained through the technique had excellent quality and also gave a very high compression ratio in comparison with previously used lossless image compression methods.

Lastly, Hitoshi Kiya et al. (2012) proposed a lossy data compression technique for histogram-packed image signals. The proposal builds on attainable lossless coding, being dependent on lossless coding and lossless histogram packing. They established a lossy mapping with a low computational load, which can be combined with rate-distortion-optimized Lloyd-Max quantization and lossless coding. The planned approach produces a better rate-distortion curve than the other approaches. The method can therefore exploit histogram sparseness in images, with an inverse mapping that does not amplify quantization noise.

Basic Model Structure, Compression and Encryption Algorithms

Figure 1 shows the proposed algorithm's basic model and how the secret data travels from the sender to the receiver. The initial step is to encode the secret input message using the RSA cryptography technique. The input cover image is then quantized with the quantization technique, and we finally obtain the compressed stego-image, which can be stored or transmitted. These are the necessary steps applied in these techniques. The compression techniques involved are lossless and lossy methods: in lossy compression, some information is lost during compression, while in lossless compression no data is lost. RLE and Huffman coding are used in the lossless category, while DWT-based coding with quantization falls in the lossy category.
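The ordering in this model (encrypt the secret data first, then apply lossless coding) can be illustrated with a minimal, self-contained sketch. The toy RSA key and the use of Python's built-in zlib as a stand-in for the Huffman/RLE stage are assumptions for illustration only; the cover-image quantization and the embedding step of the actual model are omitted here.

    import zlib

    def textbook_rsa_encrypt_bytes(data, e, n):
        # Encrypt each byte separately with textbook RSA (insecure; illustration only).
        return [pow(b, e, n) for b in data]

    public_e, modulus = 17, 3233            # toy key derived from p = 61, q = 53
    secret = b"hidden message"
    ciphertext = textbook_rsa_encrypt_bytes(secret, public_e, modulus)

    # Serialize the ciphertext and compress it losslessly before storage or transmission.
    payload = b",".join(str(c).encode() for c in ciphertext)
    compressed = zlib.compress(payload)
    print(len(payload), "->", len(compressed), "bytes")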

Fig. 1. Block Diagram of Image Compression.

RSA Encryption Algorithm

The RSA algorithm is utilized by modern systems to both encrypt and decrypt information. It is an example of an asymmetric cryptographic algorithm: asymmetric means that two different keys are used in the process. The technique is also called public-key cryptography because the public key can be distributed to anyone. It is the most common public-key implementation strategy and is named after the three MIT researchers who developed it in 1977: Ronald Rivest, Adi Shamir, and Leonard Adleman. A modified RSA can be used to produce ciphertext blocks of equal length.
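A minimal textbook sketch of RSA key generation, encryption, and decryption is shown below, using deliberately small primes; real deployments use keys of 2048 bits or more together with a padding scheme, so this illustrates only the underlying mathematics.

    from math import gcd

    def generate_keys(p=61, q=53, e=17):
        # Textbook RSA with tiny primes, for illustration only.
        n = p * q                      # public modulus
        phi = (p - 1) * (q - 1)        # Euler's totient of n
        assert gcd(e, phi) == 1        # e must be coprime to phi
        d = pow(e, -1, phi)            # private exponent: modular inverse of e (Python 3.8+)
        return (e, n), (d, n)          # (public key, private key)

    def encrypt(m, public_key):
        e, n = public_key
        return pow(m, e, n)            # c = m^e mod n

    def decrypt(c, private_key):
        d, n = private_key
        return pow(c, d, n)            # m = c^d mod n

    public, private = generate_keys()
    cipher = encrypt(42, public)
    assert decrypt(cipher, private) == 42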

Compression Techniques

Compression techniques are mainly classified as lossless and lossy. In lossless compression, the data is searched for long, repeated strings of code, which are then replaced with shorter codes. The technique is unique in that it can recreate the whole file exactly as it was before compression. Lossy compression, on the other hand, searches the code for pieces that can be deleted.

Although these techniques can be applied to program files, they are most useful for multimedia files, where much of the stored information is hard for the human senses to detect. The compressed data might appear identical to the original but differ at the code level.

Run Length Encoding

Run-length encoding is a straightforward method of image compression in which a run of identical data is stored as a single data value and a count instead of the original run. The technique can be used for sequential pieces of information and is very useful for repetitive data. Sequences of identical symbols or pixels, called runs [28], are replaced. Run-length encoding gets the best results on images with contiguous areas of color, especially monochrome images [29].
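A minimal sketch of run-length encoding over one row of pixel values, assuming the row fits in memory as a Python list:

    def rle_encode(pixels):
        # Replace each run of identical values with a [value, count] pair.
        encoded = []
        for value in pixels:
            if encoded and encoded[-1][0] == value:
                encoded[-1][1] += 1          # extend the current run
            else:
                encoded.append([value, 1])   # start a new run
        return encoded

    def rle_decode(encoded):
        # Expand [value, count] pairs back into the original sequence.
        return [value for value, count in encoded for _ in range(count)]

    row = [255, 255, 255, 0, 0, 255]
    packed = rle_encode(row)                 # [[255, 3], [0, 2], [255, 1]]
    assert rle_decode(packed) == row

As the example shows, the method pays off only when runs are long; on data with few repeats the value/count pairs can be larger than the original.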

Huffman Encoding

Huffman coding has various advantages over other techniques: it is a lossless compression method that is effective and easy to implement. The implementation involves counting the occurrences of each symbol and then sorting them in ascending order. The technique produces a Huffman tree that can be used to restore the compressed data to the original data.
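A minimal sketch of building a Huffman code from symbol frequencies with Python's heapq module is given below; a complete codec would also have to store the tree (or the code table) alongside the bitstream so the receiver can decode.

    import heapq
    from collections import Counter

    def huffman_codes(data):
        # Build a prefix code: more frequent symbols receive shorter codewords.
        heap = [[freq, i, [sym, ""]]
                for i, (sym, freq) in enumerate(Counter(data).items())]
        heapq.heapify(heap)
        if len(heap) == 1:                       # degenerate case: one distinct symbol
            return {heap[0][2][0]: "0"}
        next_id = len(heap)
        while len(heap) > 1:
            lo = heapq.heappop(heap)             # two least frequent subtrees
            hi = heapq.heappop(heap)
            for pair in lo[2:]:
                pair[1] = "0" + pair[1]          # prepend a 0 bit in the lighter subtree
            for pair in hi[2:]:
                pair[1] = "1" + pair[1]          # prepend a 1 bit in the heavier subtree
            heapq.heappush(heap, [lo[0] + hi[0], next_id] + lo[2:] + hi[2:])
            next_id += 1
        return {sym: code for sym, code in heap[0][2:]}

    message = "AAAABBBCCD"
    codes = huffman_codes(message)
    encoded = "".join(codes[s] for s in message)   # 19 bits instead of 80 at 8 bits/symbol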

Discrete Wavelet Transformation (DWT)

DWT is an important technique that plays a vital role in compressing a given image while ensuring that no information of the picture is lost in the transform step itself; the transform is invertible, and distortion is introduced only by the subsequent quantization. The technique transforms discrete-time signals into a discrete wavelet representation.

It is based on a time-scale representation that provides multiresolution analysis. Wavelets are well suited to compressing signals and are considered among the most useful and advantageous computational tools for a multiplicity of image and signal processing applications. They are mainly used in images to reduce unwanted noise and blurring. The wavelet transform is emerging as one of the most useful and powerful tools for image and data compression.
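A minimal sketch of a single-level one-dimensional Haar DWT and its inverse, written with NumPy and assuming an even-length signal, is shown below; image codecs apply the same pair of filters along the rows and then along the columns of the two-dimensional data.

    import numpy as np

    def haar_dwt(signal):
        # One level of the Haar wavelet transform on an even-length 1-D signal.
        x = np.asarray(signal, dtype=np.float64)
        approx = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass output: scaled pairwise sums
        detail = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass output: scaled pairwise differences
        return approx, detail

    def haar_idwt(approx, detail):
        # Invert the transform; reconstruction is exact if coefficients are kept exact.
        x = np.empty(2 * len(approx))
        x[0::2] = (approx + detail) / np.sqrt(2)
        x[1::2] = (approx - detail) / np.sqrt(2)
        return x

    signal = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
    cA, cD = haar_dwt(signal)
    assert np.allclose(haar_idwt(cA, cD), signal)   # perfect reconstruction

Repeating haar_dwt on the approximation coefficients cA yields the multiresolution decomposition described earlier; compression schemes then quantize the detail coefficients, which is where distortion enters.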

References

[1] K. Sarmah, "Improved Cohort Intelligence-A high capa...
