Volume 1, Issue 2, pp. 74-80, 2020 | Full Length Article
Philippe Schweizer 1*
We would like to show the small distance between neutrosophy applications in the sciences and in the humanities, as both ultimately have a human as their end user. The pace of data production continues to grow, leading to increased needs for efficient storage and transmission. Indeed, this information is increasingly consumed on mobile terminals, over connections billed to the user and with only limited storage capacity. Deep learning neural networks have recently exceeded the compression rates of algorithmic techniques for text. We believe that they can also significantly challenge classical methods for both audio and visual data (images and videos). To obtain the best physiological compression, i.e. the highest compression ratio achievable by coming as close as possible to the specifics of human perception, we propose using a neutrosophic representation of the information throughout the entire compression-decompression cycle. Such a representation attaches to each elementary piece of information a simple neutrosophic number that informs the neural network, during processing, about that element's characteristics with respect to compression. This neutrosophic number is a triplet (t, i, f) expressing the element's membership in the three constituent components of information in compression: 1° t = the true, significant part to be preserved; 2° i = the indeterminate, redundant part or noise to be eliminated during compression; and 3° f = the false artifacts produced by the compression process (to be compensated). The complexity of human perception, and the subtle niches of its imperfections that compression seeks to exploit, require a detailed and complex mapping that a neural network can produce better than any other algorithmic solution; deep learning networks have already proven their ability to produce detailed decision boundaries in classifiers.
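To make the triplet concrete, the sketch below (Python with NumPy, assuming a grayscale image) attaches a hypothetical (t, i, f) value to every pixel before handing the enriched tensor to a compressing network. The function name neutrosophic_triplet and the local-contrast heuristics for t and i are illustrative assumptions, not the method described here; f is simply initialised to zero, since artifacts only appear after encoding.

```python
import numpy as np

def neutrosophic_triplet(image: np.ndarray, window: int = 3) -> np.ndarray:
    """Attach a hypothetical (t, i, f) triplet to every pixel of a grayscale image.

    t: true, significant part to preserve (approximated here by local contrast),
    i: indeterminate, redundant part to discard (approximated here by local smoothness),
    f: artifacts introduced by compression (unknown before encoding, so initialised to 0).
    These heuristics are illustrative assumptions, not the paper's method.
    """
    img = image.astype(np.float64) / 255.0
    pad = window // 2
    padded = np.pad(img, pad, mode="edge")

    # Local standard deviation as a crude measure of perceptually significant detail.
    h, w = img.shape
    local_std = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + window, x:x + window]
            local_std[y, x] = patch.std()

    t = local_std / (local_std.max() + 1e-12)   # significant part to keep
    i = 1.0 - t                                 # redundant / indeterminate part
    f = np.zeros_like(img)                      # artifacts: filled in after decoding

    # Stack the pixel value and its triplet as a 4-channel tensor
    # that a compressing neural network could take as input.
    return np.stack([img, t, i, f], axis=-1)


if __name__ == "__main__":
    demo = (np.random.rand(32, 32) * 255).astype(np.uint8)
    enriched = neutrosophic_triplet(demo)
    print(enriched.shape)  # (32, 32, 4): pixel value plus (t, i, f)
```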
Compression, physiological data compression, neural nets, neutrosophic, deep learning