Image preprocessing: the image generator ImageDataGenerator

ImageDataGenerator is used to generate batches of image data with real-time data augmentation. Its constructor exposes a long list of parameters, beginning with ImageDataGenerator(featurewise_center=False, ...); take a look at the official documentation for the full signature. Try each effect a few times, and only then decide which parameters to use.

All of the above has been published on my GitHub, together with the Jupyter notebook from the experiments; everyone can play with it, have fun!
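As a minimal sketch of the generator in action (assuming TensorFlow 2.x is installed; the augmentation parameters and dummy data here are illustrative, not from the original post), you can pull one augmented batch from an in-memory array with .flow():

```python
# Minimal sketch (assumes TensorFlow 2.x). The augmentation parameters and the
# random dummy images are illustrative only.
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(featurewise_center=False,
                             rotation_range=20,
                             width_shift_range=0.1,
                             horizontal_flip=True)

x = np.random.rand(4, 32, 32, 3)                  # four dummy RGB images
batch = next(datagen.flow(x, batch_size=4, shuffle=False))
print(batch.shape)                                # (4, 32, 32, 3)
```

Each call to next() yields a freshly augmented batch, which is what "real-time" augmentation means: the transforms are applied on the fly rather than stored on disk.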
In the face of a small dataset, using DataAugmentation to expand it becomes very important. Before applying DataAugmentation, though, think about whether your dataset actually needs a given kind of picture: the cats-vs-dogs dataset, for example, has no use for vertically flipped images. Also consider whether the degree of each transformation is reasonable; shifting the target horizontally until it moves outside the image, for instance, is unreasonable.

# Find the locally generated images and print 9 of them on the same figure
gen_data = datagen.flow_from_directory(PATH,
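The snippet above is truncated, so here is a self-contained sketch of the same idea (assumptions: TensorFlow 2.x and Matplotlib are installed; the temporary directory, the class name "cats", and the random images stand in for the real dataset, which flow_from_directory expects to be laid out as PATH/<class>/*.jpg):

```python
# Sketch only: a fabricated tiny dataset replaces the real PATH so the example
# runs anywhere. All file names and parameter values are illustrative.
import os, tempfile
import numpy as np
import matplotlib
matplotlib.use("Agg")                 # render off-screen
import matplotlib.pyplot as plt
from tensorflow.keras.preprocessing.image import ImageDataGenerator, array_to_img

PATH = tempfile.mkdtemp()             # stand-in for the real dataset directory
os.makedirs(os.path.join(PATH, "cats"), exist_ok=True)
rng = np.random.default_rng(0)
for i in range(9):                    # write 9 random "cat" images to disk
    img = array_to_img(rng.integers(0, 256, (64, 64, 3)).astype("uint8"))
    img.save(os.path.join(PATH, "cats", f"{i}.png"))

datagen = ImageDataGenerator(rotation_range=30, horizontal_flip=True)
gen_data = datagen.flow_from_directory(PATH, target_size=(64, 64),
                                       batch_size=9, class_mode=None)

# Print 9 augmented images on the same figure.
batch = next(gen_data)
fig, axes = plt.subplots(3, 3, figsize=(6, 6))
for img, ax in zip(batch, axes.flat):
    ax.imshow(img.astype("uint8"))
    ax.axis("off")
fig.savefig("augmented_grid.png")
```

With class_mode=None the generator yields only the image arrays, which is all you need for eyeballing augmentation results.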
fill_mode

datagen = image.ImageDataGenerator(fill_mode='wrap', zoom_range=...)

fill_mode is the fill mode. As mentioned earlier, when the image is shifted, zoomed, or sheared, some missing areas appear in the picture. So how are these missing areas completed? That is determined by the fill_mode parameter, which can be one of "constant", "nearest" (the default), "reflect", or "wrap". The effect of these four fill modes is shown in Figure 18, from left to right and top to bottom: "reflect", "wrap", "nearest", "constant". When fill_mode is "constant" there is an additional optional parameter, cval, the fixed color value used for the filling; Figure 19 shows the effect of cval=100, which can be compared with the cval-free "constant" image at the lower right of Figure 18. While debugging these parameters it is convenient to use a Jupyter notebook and print the resulting pictures directly on the page.
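The four fill modes map onto NumPy's padding modes, so you can build intuition without Keras by padding a one-row "image" (the constant/nearest/reflect/wrap to constant/edge/reflect/wrap mapping is my analogy, not Keras code):

```python
# Illustrative analogy: Keras fill_mode -> closest np.pad mode.
import numpy as np

row = np.array([1, 2, 3, 4])          # a one-row "image"
print(np.pad(row, 2, mode="constant", constant_values=100))
                                      # [100 100 1 2 3 4 100 100]  "constant" + cval=100
print(np.pad(row, 2, mode="edge"))    # [1 1 1 2 3 4 4 4]          "nearest": edge repeated
print(np.pad(row, 2, mode="reflect")) # [3 2 1 2 3 4 3 2]          "reflect": mirrored
print(np.pad(row, 2, mode="wrap"))    # [3 4 1 2 3 4 1 2]          "wrap": content tiled
```

The same patterns appear in Figure 18, just in two dimensions instead of one.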
rescale

Figure 4

datagen = image.ImageDataGenerator(rescale=1/255, width_shift_range=0.1)

The role of rescale is to multiply every pixel value of the picture by this scaling factor, and the operation is performed before all of the other transformations. In some models, raw pixel values can push the activation function into its "dead" region, so the scaling factor is set to 1/255, squashing the pixel values into the range 0 to 1. Scaling the pixel values between 0 and 1 is conducive to the convergence of the model and avoids neuron "death".

After the picture is rescaled, the copy saved locally is visually indistinguishable from the original. But if we directly print the value of the picture in memory, we see the result in Figure 16: the pixel values have been scaled to between 0 and 1. If you open the picture saved locally, however, its values remain unchanged, as shown in Figure 17. It appears that Keras restores the pixel values to their original scale when saving to disk, but not in memory.

A note ahead of the zca_whitening section below: when my images were resized to 224×224, the code reported a memory error; presumably the values during the SVD computation became too large. After resizing to 28×28 there was no memory error, but the code still had not finished after running for a whole night, so the effect cannot be reproduced with the cats-vs-dogs pictures. Here is the result another blog reproduced using MNIST. For other DataAugmentation results on MNIST, please see the blog post Image Augmentation for Deep Learning With Keras. Friends who have corrections or comments are welcome to leave a message.
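What rescale=1/255 does to the in-memory values can be reproduced with plain NumPy (a minimal sketch, independent of Keras):

```python
import numpy as np

img = np.array([[0, 127, 255]], dtype="uint8")    # three sample pixel values
rescaled = img * (1 / 255)                        # what rescale=1/255 performs
print(rescaled.min(), rescaled.max())             # 0.0 1.0, values now in [0, 1]
```

This is exactly the Figure 16 situation: the array in memory lives in [0, 1], while the file on disk keeps its original 0 to 255 values.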
zca_whitening

datagen = image.ImageDataGenerator(zca_whitening=True)

The role of ZCA whitening is to apply a PCA-style transform to the pictures, reducing their redundant information while retaining the most important features. For details, please refer to: Whitening transformation - Wikipedia, Whitening - Stanford. Sorry: even using the official Keras demo code, I could not reproduce the effect of zca_whitening.
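To see what the transform computes, and why large images run out of memory, here is a from-scratch ZCA sketch in NumPy (an illustration of the math, not Keras's exact implementation; zca_whiten, its eps default, and the toy data are all assumptions). The SVD runs on an n_features by n_features covariance matrix, and for a 224×224×3 image n_features is about 150,000, so that matrix alone has on the order of 2×10^10 entries:

```python
# From-scratch ZCA whitening sketch (not Keras's internal code).
import numpy as np

def zca_whiten(X, eps=1e-5):
    """ZCA-whiten the rows of X, shape (n_samples, n_features)."""
    Xc = X - X.mean(axis=0)                        # center each feature
    cov = Xc.T @ Xc / Xc.shape[0]                  # (n_features, n_features)
    U, S, _ = np.linalg.svd(cov)                   # this SVD is the memory hog
    W = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T  # ZCA whitening matrix
    return Xc @ W

rng = np.random.default_rng(0)
Xw = zca_whiten(rng.normal(size=(100, 5)))
# After whitening, the empirical covariance is (near) the identity matrix,
# i.e. the redundant cross-feature correlations have been removed.
print(np.round(Xw.T @ Xw / Xw.shape[0], 2))
```

Unlike PCA whitening, the extra rotation back by U.T keeps the whitened data as close as possible to the original, which is why ZCA-whitened images still look like images.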