One of the biggest problems we encounter when scaling a small picture up to a large dimension is that the downscaled image loses important bits of information which cannot be recovered when it is scaled back. This problem, however, can be addressed with a multi-objective genetic algorithm. Researchers Kishor Datta Gupta and Sajib Sen describe a genetic algorithm approach that recovers the bits lost when an image is resized to a smaller version, using bit counts of the original image data that are stored at the time the image is scaled. The method scales well to distributed systems, and the same approach can be applied to recover error bits in any type of data block.
In the introduction, the authors discuss how many users prefer to store their pictures in the cloud. Currently, 4.7 trillion photos are stored in the cloud (Perret, 2017), and only a small percentage are ever accessed again. Rarely used files can therefore be stored in compressed form, which saves space and makes the cloud system faster, since time spent on memory redundancy is reduced. The authors then describe how an image can be modeled: an image file can be represented as a continuous function of three variables, X, Y, and T, where X and Y are coordinates in a plane and T is time, in case the image changes over time. For an ordinary still image, T is fixed at 1. Image compression techniques are normally divided into two categories: lossy and lossless. Lossy compression leaves negligible differences after recovery, while lossless compression reproduces the image exactly. In 2008, Roger Johansson was able to regenerate an image of the Mona Lisa from random sampling (Roger Johansson, 2017). His approach uses a genetic algorithm to model a population of individuals, each containing a string of DNA which can be visualized in the form of an image (Grow Your Own Picture Genetic Algorithms & Generative Art, 2017).
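The image model described above can be sketched in a few lines of code. This is only an illustration of the f(X, Y, T) formulation, not code from the paper; the function and variable names are invented here.

```python
# Sketch of the paper's image model: an image as a function of three
# variables (x, y, t). For a normal still image, t is fixed at 1 and
# the pixel value does not depend on it.

def make_image_function(pixels):
    """Wrap a 2D list of pixel values as a function f(x, y, t)."""
    def f(x, y, t=1):
        # Static image: the value is the same for every t.
        return pixels[y][x]
    return f

# A tiny 2x2 grayscale "image".
img = make_image_function([[0, 255],
                           [128, 64]])
print(img(1, 0))  # pixel at x=1, y=0 -> 255
```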
Starting with a population consisting of a randomly generated gene pool, each individual is compared to the reference image (the one on the left) and ranked by its likeness to it, known as its "fitness", with the best fit displayed on the output image (the one on the right). By breeding the fittest individuals from the population, the DNA which produces the most accurate representation of the reference image is selected over successive generations, effectively demonstrating the power of a natural selection process to produce the best candidate for any given environment.
The authors' proposal is to store each column's and row's bit count in a separate file and use those counts to reproduce the image with a genetic algorithm. Their method first resizes the image using the normal resizing facility provided by an operating system or standard library, then attaches two extra arrays of data containing the number of 1s in each row and each column of the original image. The total number of 1s in the image is stored as well. The researchers then explain in detail how their method works.
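The storage step can be sketched as follows. This computes the two extra arrays and the total described above for a binary image; the exact file format and names are not specified in the paper, so the structure here is an assumption.

```python
# Sketch of the metadata stored alongside the resized image: the count
# of 1-bits in each row and each column of the original binary image,
# plus the overall total. The dict layout is illustrative only.

def bit_count_metadata(binary_image):
    """binary_image: 2D list of 0/1 pixels from the original image."""
    row_counts = [sum(row) for row in binary_image]
    col_counts = [sum(col) for col in zip(*binary_image)]
    return {
        "rows": row_counts,
        "cols": col_counts,
        "total": sum(row_counts),  # total number of 1s in the image
    }

original = [
    [1, 0, 1],
    [0, 1, 1],
    [1, 1, 0],
]
meta = bit_count_metadata(original)
print(meta)  # {'rows': [2, 2, 2], 'cols': [2, 2, 2], 'total': 6}
```

During reconstruction, candidate images whose row and column sums match these stored counts can be ranked higher by the genetic algorithm's fitness function, which is what constrains the search toward the original image.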
Their method is scalable to deployment in any distributed system, where it can run faster. With additional constraints and filtering, the algorithm can provide better results, and the same technique can also be applied to any file system. The procedure can reduce noise in QR codes better than (Gupta, 2018) had done before; with parallel computing, the time overhead is reduced and the process can generate a near-perfect reconstruction quickly.