
Validation accuracy is stuck at 50% even after data augmentation

Topic starter

I need to implement a CNN model for multiclass classification on reduced-size grayscale images (56*56). I have 20 classes.

Code using the reduced dataset of 2280 grayscale images:

````
# Imports (assumed; not shown in the original snippet)
import tensorflow as tf
from tensorflow import keras
from sklearn.model_selection import train_test_split

# NOTE: with test_size=0.7, the second returned array holds 70% of the data,
# so X_train receives 1596 of the 2280 images
X_temp, X_train, Y_temp, Y_train = train_test_split(all_images, all_labels_one_hot, test_size=0.7, random_state=99)

X_test, X_val, Y_test, Y_val = train_test_split(X_temp, Y_temp, test_size=0.5, random_state=99)

print('X_train.shape:', X_train.shape)
print('X_test.shape:', X_test.shape)
print('Y_train.shape:', Y_train.shape)
print('Y_test.shape:', Y_test.shape)
print('X_val.shape:',X_val.shape)
print('Y_val.shape:',Y_val.shape)

"""
X_train.shape: (1596, 28, 28)
X_test.shape: (342, 28, 28)
Y_train.shape: (1596, 20)
Y_test.shape: (342, 20)
X_val.shape: (342, 28, 28)
Y_val.shape: (342, 20)
"""

tf.random.set_seed(99)

img_size1 = all_images.shape[1]
# in case of rectangular image
img_size2 = all_images.shape[2]

img_shape = (img_size1, img_size2,1)

# Define the model

model = keras.Sequential([
    keras.layers.Conv2D(filters=32, kernel_size=5, activation='elu',
                        kernel_initializer='he_uniform', padding='same',
                        input_shape=img_shape, name='conv_01'),
    keras.layers.BatchNormalization(),
    # keras.layers.AveragePooling2D(pool_size=2, strides=2, name='pool_01'),
    keras.layers.Conv2D(filters=32, kernel_size=5, activation='elu',
                        kernel_initializer='he_uniform', padding='same', name='conv_02'),
    keras.layers.BatchNormalization(),
    keras.layers.AveragePooling2D(pool_size=2, strides=2, name='pool_02'),
    keras.layers.Conv2D(filters=64, kernel_size=5, activation='elu',
                        kernel_initializer='he_uniform', padding='same', name='conv_03'),
    keras.layers.BatchNormalization(),
    # keras.layers.MaxPooling2D(pool_size=2, strides=2, name='pool_03'),
    keras.layers.Conv2D(filters=64, kernel_size=5, activation='elu',
                        kernel_initializer='he_uniform', padding='same', name='conv_04'),
    keras.layers.BatchNormalization(),
    keras.layers.AveragePooling2D(pool_size=2, name='pool_04'),
    keras.layers.Flatten(name='flatten_01'),
    keras.layers.Dropout(0.3, name='dropout_04'),
    keras.layers.Dense(20, activation='softmax', name='classification'),
])

Model: "sequential_31"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv_01 (Conv2D) (None, 28, 28, 32) 832

batch_normalization_145 (B (None, 28, 28, 32) 128
atchNormalization)

conv_02 (Conv2D) (None, 28, 28, 32) 25632

batch_normalization_146 (B (None, 28, 28, 32) 128
atchNormalization)

pool_02 (AveragePooling2D) (None, 14, 14, 32) 0

conv_03 (Conv2D) (None, 14, 14, 64) 51264

batch_normalization_147 (B (None, 14, 14, 64) 256
atchNormalization)

conv_04 (Conv2D) (None, 14, 14, 64) 102464

batch_normalization_148 (B (None, 14, 14, 64) 256
atchNormalization)
...
Total params: 243700 (951.95 KB)
Trainable params: 243316 (950.45 KB)
Non-trainable params: 384 (1.50 KB)

categorical_crossentropy = keras.losses.CategoricalCrossentropy()
optimizer = keras.optimizers.Adam(learning_rate=0.001)
model.compile(optimizer=optimizer, metrics=['accuracy'], loss=categorical_crossentropy)

from keras.callbacks import LearningRateScheduler,ReduceLROnPlateau

# Set up ReduceLROnPlateau callback
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=5, min_lr=1e-7, verbose=1)

def lr_schedule(epoch, lr):
    if epoch < 10:
        return lr
    else:
        return lr * tf.math.exp(-0.1)

lr_scheduler = LearningRateScheduler(lr_schedule)

# history = model.fit(X_train, Y_train, batch_size=64, epochs=100,
#                     validation_data=(X_val, Y_val), shuffle=True,
#                     callbacks=[lr_scheduler, reduce_lr])
history = model.fit(X_train, Y_train, batch_size=32, epochs=100,
                    validation_data=(X_val, Y_val), shuffle=True,
                    callbacks=[lr_scheduler])
````
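
Note: the shapes printed above are 3-D (N, 28, 28), while Conv2D expects 4-D input with a channel axis. If fit() complains about the input rank, a reshape like the sketch below is needed before training; the [0, 1] scaling is likewise an assumption, only needed if the pixels are still raw uint8 values:

````
import numpy as np

# Add the grayscale channel axis: (N, 28, 28) -> (N, 28, 28, 1)
X_train = X_train[..., np.newaxis]
X_val = X_val[..., np.newaxis]
X_test = X_test[..., np.newaxis]

# Scale pixels to [0, 1] if they are still in [0, 255] (assumption)
X_train = X_train.astype('float32') / 255.0
X_val = X_val.astype('float32') / 255.0
X_test = X_test.astype('float32') / 255.0
````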

Code for the augmented reduced grayscale image dataset:

It is the same as above, except that I applied augmentation to generate thousands of images in each subfolder. I ran the image augmentation first and stored the augmented images in folders; later, the whole dataset is split into train, test, and validation sets, and the same kind of architecture is applied.

````
# Import (assumed; not shown in the original snippet)
from keras.preprocessing.image import ImageDataGenerator

def augment_image(image):
    # Apply data augmentation
    datagen = ImageDataGenerator(
        rotation_range=10,
        width_shift_range=0.2,
        zoom_range=0.1,
        height_shift_range=0.2,
        horizontal_flip=True,
        vertical_flip=True,
        fill_mode='reflect'  # can also use 'wrap'
    )

    # Reshape the image to (1, height, width, channels) as required by flow()
    img = image.reshape((1,) + image.shape + (1,))

    # Generate one augmented image
    augmented_image = next(datagen.flow(img, batch_size=1))[0].squeeze()

    return augmented_image

# Splitting the data into train, test and validation sets

X_train, X_temp, Y_train, Y_temp = train_test_split(all_images, all_labels_one_hot, test_size=0.3, random_state=99)

X_test, X_val, Y_test, Y_val = train_test_split(X_temp, Y_temp, test_size=0.2, random_state=99)

print('X_train.shape:', X_train.shape)
print('X_test.shape:', X_test.shape)
print('Y_train.shape:', Y_train.shape)
print('Y_test.shape:', Y_test.shape)
print('X_val.shape:',X_val.shape)
print('Y_val.shape:',Y_val.shape)
print('X_temp.shape:',X_temp.shape)
print('Y_temp.shape:',Y_temp.shape)

"""
X_train.shape: (14000, 56, 56)
X_test.shape: (4800, 56, 56)
Y_train.shape: (14000, 20)
Y_test.shape: (4800, 20)
X_val.shape: (1200, 56, 56)
Y_val.shape: (1200, 20)
X_temp.shape: (6000, 56, 56)
Y_temp.shape: (6000, 20)
"""

tf.random.set_seed(99)

img_size1 = all_images.shape[1]
# in case of rectangular image
img_size2 = all_images.shape[2]

img_shape = (img_size1, img_size2,1)

# Define the model
model = keras.Sequential([
    keras.layers.Conv2D(filters=32, kernel_size=5, activation='elu',
                        kernel_initializer='he_uniform', padding='same',
                        input_shape=img_shape, name='conv_01'),
    keras.layers.BatchNormalization(),
    # keras.layers.AveragePooling2D(pool_size=2, strides=2, name='pool_01'),
    keras.layers.Conv2D(filters=64, kernel_size=5, activation='elu',
                        kernel_initializer='he_uniform', padding='same', name='conv_02'),
    keras.layers.BatchNormalization(),
    keras.layers.AveragePooling2D(pool_size=2, strides=2, name='pool_02'),

    keras.layers.Conv2D(filters=64, kernel_size=3, activation='elu',
                        kernel_initializer='he_uniform', padding='same', name='conv_03'),
    keras.layers.BatchNormalization(),
    # keras.layers.AveragePooling2D(pool_size=2, strides=2, name='pool_03'),
    keras.layers.Conv2D(filters=64, kernel_size=3, activation='elu',
                        kernel_initializer='he_uniform', padding='same', name='conv_04'),
    keras.layers.BatchNormalization(),
    keras.layers.AveragePooling2D(pool_size=2, name='pool_04'),

    keras.layers.Conv2D(filters=64, kernel_size=3, activation='elu',
                        kernel_initializer='he_uniform', padding='same', name='conv_05'),
    keras.layers.BatchNormalization(),
    keras.layers.AveragePooling2D(pool_size=2, name='pool_05'),

    keras.layers.Flatten(name='flatten_01'),
    keras.layers.Dropout(0.3, name='dropout_04'),
    keras.layers.Dense(20, activation='softmax', name='classification'),
])

Model: "sequential_5"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv_01 (Conv2D) (None, 56, 56, 32) 832

batch_normalization_21 (Ba (None, 56, 56, 32) 128
tchNormalization)

conv_02 (Conv2D) (None, 56, 56, 64) 51264

batch_normalization_22 (Ba (None, 56, 56, 64) 256
tchNormalization)

pool_02 (AveragePooling2D) (None, 28, 28, 64) 0

conv_03 (Conv2D) (None, 28, 28, 64) 36928

batch_normalization_23 (Ba (None, 28, 28, 64) 256
tchNormalization)

conv_04 (Conv2D) (None, 28, 28, 64) 36928

batch_normalization_24 (Ba (None, 28, 28, 64) 256
tchNormalization)

pool_04 (AveragePooling2D) (None, 14, 14, 64) 0

...
Total params: 226772 (885.83 KB)
Trainable params: 226196 (883.58 KB)
Non-trainable params: 576 (2.25 KB)

categorical_crossentropy = keras.losses.CategoricalCrossentropy()
optimizer = keras.optimizers.Adam(learning_rate=0.001)
model.compile(optimizer=optimizer, metrics=['accuracy'], loss=categorical_crossentropy)

from keras.callbacks import LearningRateScheduler
from keras.callbacks import ReduceLROnPlateau

def lr_schedule(epoch, lr):
    if epoch < 10:
        return lr
    else:
        return lr * tf.math.exp(-0.1)

lr_scheduler = LearningRateScheduler(lr_schedule)

history = model.fit(X_train, Y_train, batch_size=64, epochs=100,
                    validation_data=(X_val, Y_val), shuffle=True,
                    callbacks=[lr_scheduler])
````
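
Note: an alternative to pre-generating and storing augmented copies is to split first and augment only the training set on the fly, so the validation set never contains augmented variants of training images. A minimal sketch, assuming X_train and Y_train already include the channel axis:

````
from keras.preprocessing.image import ImageDataGenerator

# Augmentation is applied only to training batches, after the split
train_datagen = ImageDataGenerator(
    rotation_range=10,
    width_shift_range=0.2,
    height_shift_range=0.2,
    zoom_range=0.1,
    fill_mode='reflect'
)

train_flow = train_datagen.flow(X_train, Y_train, batch_size=64)

# Validation data is passed through unaugmented
history = model.fit(train_flow, epochs=100,
                    validation_data=(X_val, Y_val),
                    callbacks=[lr_scheduler])
````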

I have experimented with batch size, number of epochs, adding/removing Dense and Conv2D layers, changing the number of filters and kernel sizes, and trying different optimizers, but all in vain. In my case, AveragePooling2D and the ELU activation turned out to work best. Could you please recommend anything I can try, either on the original dataset code or on the code for the augmented+original dataset? I cannot figure out how to extract more features or which direction to take to improve my model.

It would be helpful if someone could tell me how to proceed, for example by adding or removing Conv2D or Dense layers, because I have tried almost everything but may have missed something.

1 Answer
Hi, it's hard to tell exactly how to make your model perfect, but based on the code you have provided we would recommend a few things:

First of all, based on your plots your training accuracy is around 63-65% and your validation accuracy is around 52%, which is roughly what we would expect; it is not the best, but it is a reasonable level to start from. To improve the score, build simpler models first and then increase complexity step by step: create a base model, check the score, add complexity, check again, and so on. This way you will also understand why your model performs the way it does. So the best thing you can do is build a series of models of increasing complexity; a minimal baseline sketch follows below.
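
For example, a bare-bones baseline could look like this (a sketch only; the layer sizes are illustrative, and img_shape, X_train, etc. are the variables from the code above):

````
baseline = keras.Sequential([
    keras.layers.Conv2D(16, 3, activation='relu', padding='same',
                        input_shape=img_shape),
    keras.layers.MaxPooling2D(2),
    keras.layers.Flatten(),
    keras.layers.Dense(20, activation='softmax'),
])
baseline.compile(optimizer='adam', loss='categorical_crossentropy',
                 metrics=['accuracy'])
baseline.fit(X_train, Y_train, epochs=20, validation_data=(X_val, Y_val))
````

Once this baseline score is known, each added block can be judged by whether it actually moves the validation accuracy.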

Secondly, check whether you have imbalanced classes in your dataset, and make sure you are preprocessing the data properly. Don't jump straight to data augmentation; try the raw data first, then improve. Also try different kernel sizes: a smaller kernel captures fine details, while a larger kernel captures more global structure. Inspect the confusion matrix and the per-class classification scores for a better understanding; a sketch follows below.
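
For instance, something along these lines (a sketch using scikit-learn; variable names follow your code):

````
import numpy as np
from sklearn.metrics import confusion_matrix, classification_report

# Class balance: count examples per class from the one-hot labels
print('examples per class:', all_labels_one_hot.sum(axis=0))

# Confusion matrix and per-class precision/recall on the test set
y_true = np.argmax(Y_test, axis=1)
y_pred = np.argmax(model.predict(X_test), axis=1)
print(confusion_matrix(y_true, y_pred))
print(classification_report(y_true, y_pred))
````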

If you wish to improve your model score further, try transfer learning (fine-tuning or feature extraction) or different architectures, for example ones built with the functional API; a hedged sketch follows below.
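
As an illustration, a feature-extraction sketch with the functional API (MobileNetV2 is just one possible backbone; ImageNet weights expect 3-channel input, so the grayscale channel is repeated, and scaling the images with keras.applications.mobilenet_v2.preprocess_input beforehand is assumed):

````
from tensorflow import keras

inputs = keras.Input(shape=(56, 56, 1))
# Repeat the grayscale channel to get the 3-channel input the backbone expects
x = keras.layers.Concatenate()([inputs, inputs, inputs])

base = keras.applications.MobileNetV2(input_shape=(56, 56, 3),
                                      include_top=False, weights='imagenet')
base.trainable = False  # feature extraction: freeze the pretrained weights

x = base(x, training=False)
x = keras.layers.GlobalAveragePooling2D()(x)
outputs = keras.layers.Dense(20, activation='softmax')(x)

tl_model = keras.Model(inputs, outputs)
tl_model.compile(optimizer='adam', loss='categorical_crossentropy',
                 metrics=['accuracy'])
````

After the new head converges, the usual fine-tuning step is to unfreeze the top of the backbone and continue training with a low learning rate.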

So you will have to try different things. Additionally, you need to decide what accuracy level you are targeting for your project; this is the most important factor.

We hope this helps you improve your model further.

