1. Installing Libraries¶
!pip install split-folders
Collecting split-folders
  Downloading split_folders-0.5.1-py3-none-any.whl (8.4 kB)
Installing collected packages: split-folders
Successfully installed split-folders-0.5.1
2. Importing libraries¶
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import random
from sklearn.metrics import confusion_matrix, classification_report, f1_score
import seaborn as sns
import pathlib
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import splitfolders
import os
import cv2
3. Loading Dataset¶
# Directory path for the dataset
Dataset_path = pathlib.Path('/kaggle/input/rice-image-dataset/Rice_Image_Dataset')
4. Introduction¶
In this project, we explore image classification using the Rice_Image dataset, a collection of 75,000 images divided into five classes: 'Arborio', 'Basmati', 'Ipsala', 'Jasmine', and 'Karacadag'.
The objective is to build and train convolutional neural network (CNN) models that accurately classify these rice images. We use TensorFlow, one of the most popular deep learning frameworks, throughout: for loading and normalizing the dataset, and for constructing, training, and testing the CNN models. The goals of this notebook are to develop, train, and assess several CNN models, identify the best one among them, and visualize the results.
4.1 Introduction to Dataset¶
The Rice_Image dataset is a rich collection of rice images, with each of the five classes containing 15,000 images.
The images are 250x250 pixels, providing ample visual information for accurate classification. The dataset's diversity reflects the distinctive characteristics of each rice variety, making it a good candidate for showcasing the potential of CNN models in image classification tasks.
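As a quick sanity check of this class balance, the per-class image counts can be listed directly from the dataset directory (a minimal sketch, reusing the Dataset_path defined in section 3 and assuming the images are stored as .jpg files in one folder per class):
# Count the images in each class folder (expected: 15,000 per class)
for class_dir in sorted(Dataset_path.iterdir()):
    if class_dir.is_dir():
        print(f"{class_dir.name}: {len(list(class_dir.glob('*.jpg')))} images")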
4.2 TensorFlow Approach¶
In the TensorFlow framework, we loaded and normalized the dataset.
We constructed three distinct CNN models, each with its own architecture, to explore a range of model configurations.
The first model uses a relatively simple architecture with a single convolutional layer; the second adds a second convolution and max-pooling block; the third builds on the second by replacing dropout with an additional dense layer.
The training process included the use of early stopping to prevent overfitting. Through training and validation, we assessed the models' performance, aiming to find the best architecture for accurate classification.
5. Preparing Dataset¶
5.1 Splitting the data into three folders: Train, Validation, and Test¶
splitfolders.ratio(Dataset_path, output='Images', seed=42, ratio=(.7,.15,.15))
Copying files: 75000 files [12:16, 101.78 files/s]
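splitfolders writes the result into Images/train, Images/val, and Images/test, each containing one sub-folder per class. A small check like the following (a sketch, assuming that layout) confirms that the 70/15/15 ratio yields roughly 10,500 / 2,250 / 2,250 images per class:
# Count images per class in each split produced by splitfolders
for split in ['train', 'val', 'test']:
    split_dir = os.path.join('Images', split)
    counts = {cls: len(os.listdir(os.path.join(split_dir, cls))) for cls in sorted(os.listdir(split_dir))}
    print(split, counts)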
Img_Size = (250, 250)
seed = 42
Batch_Size = 32
Train = keras.utils.image_dataset_from_directory(
    'Images/train',
    labels='inferred',
    label_mode='int',
    batch_size=Batch_Size,
    image_size=Img_Size,
    seed=seed,
    shuffle=True,
)

Validation = keras.utils.image_dataset_from_directory(
    'Images/val',
    labels='inferred',
    label_mode='int',
    batch_size=Batch_Size,
    image_size=Img_Size,
    seed=seed,
    shuffle=True,
)

# Note: shuffle=False is often preferred for the test split so that predictions
# stay in a fixed order; shuffle=True is kept here to match the rest of the notebook.
Test = keras.utils.image_dataset_from_directory(
    'Images/test',
    labels='inferred',
    label_mode='int',
    batch_size=Batch_Size,
    image_size=Img_Size,
    seed=seed,
    shuffle=True,
)
Found 52500 files belonging to 5 classes.
Found 11250 files belonging to 5 classes.
Found 11250 files belonging to 5 classes.
# Confirm that all three splits share the same class names
for ds in [Train, Validation, Test]:
    Class_labels = ds.class_names
    print(Class_labels)
['Arborio', 'Basmati', 'Ipsala', 'Jasmine', 'Karacadag']
['Arborio', 'Basmati', 'Ipsala', 'Jasmine', 'Karacadag']
['Arborio', 'Basmati', 'Ipsala', 'Jasmine', 'Karacadag']
5.2 Printing dataset shapes:¶
def print_dataset_shapes(dataset, dataset_name):
    # Take a single batch just to report the tensor shapes
    for image_batch, labels_batch in dataset:
        print(f"{dataset_name} Shape: {image_batch.shape} (Batches = {len(dataset)})")
        print(f"{dataset_name} Label: {labels_batch.shape}\n")
        break
datasets = [
(Train, "Train"),
(Validation, "Validation"),
(Test, "Test")
]
# Print shapes for each dataset
for dataset, dataset_name in datasets:
    print_dataset_shapes(dataset, dataset_name)
Train Shape: (32, 250, 250, 3) (Batches = 1641)
Train Label: (32,)

Validation Shape: (32, 250, 250, 3) (Batches = 352)
Validation Label: (32,)

Test Shape: (32, 250, 250, 3) (Batches = 352)
Test Label: (32,)
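The batch counts reported above follow directly from the split sizes and the batch size of 32: the number of batches is the dataset size divided by Batch_Size, rounded up. A quick check using the file counts reported by image_dataset_from_directory:
import math
for name, n in [('Train', 52500), ('Validation', 11250), ('Test', 11250)]:
    print(f"{name}: ceil({n} / {Batch_Size}) = {math.ceil(n / Batch_Size)} batches")  # 1641, 352, 352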
6. Visualising the Data¶
font1 = {'family': 'serif', 'size': 12}
font2 = {'family': 'serif', 'size': 10}

def show_sample_images(dataset, title):
    # Display 15 images from the first batch of the dataset in a 3x5 grid
    f = plt.figure()
    f.set_figwidth(10)
    f.set_figheight(10)
    for images, labels in dataset.take(1):
        for i in range(1, 16):
            plt.subplot(3, 5, i)
            plt.imshow(images[i].numpy().astype("uint8"))
            plt.title(Class_labels[labels[i]], backgroundcolor='grey', color='white', fontdict=font1)
            plt.axis("off")
    plt.suptitle(title, y=0.92)
    plt.show()

show_sample_images(Train, 'Random Rice Images From Train data')
show_sample_images(Validation, 'Random Rice Images From Validation data')
show_sample_images(Test, 'Random Rice Images From Test data')
7. Modeling¶
7.1 First model¶
A Simple Model with One Convolutional Layer:¶
CNN_model1 = models.Sequential(
    [
        layers.Rescaling(1./255, input_shape=(250, 250, 3)),
        layers.Conv2D(filters=32, kernel_size=3, activation='relu'),
        layers.MaxPool2D(pool_size=2, strides=2),
        layers.Flatten(),
        layers.Dense(units=512, activation='relu'),
        layers.Dropout(0.5),  # Adding dropout for regularization
        layers.Dense(units=5, activation='sigmoid')  # Note: 'softmax' is the conventional choice for mutually exclusive classes with this loss
    ]
)
# Compile the model
CNN_model1.compile(
optimizer='adam',
loss='sparse_categorical_crossentropy', # Use appropriate loss for multi-class classification
metrics=['accuracy']
)
# Summary of the model architecture
CNN_model1.summary()
Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= rescaling (Rescaling) (None, 250, 250, 3) 0 conv2d (Conv2D) (None, 248, 248, 32) 896 max_pooling2d (MaxPooling2D (None, 124, 124, 32) 0 ) flatten (Flatten) (None, 492032) 0 dense (Dense) (None, 512) 251920896 dropout (Dropout) (None, 512) 0 dense_1 (Dense) (None, 5) 2565 ================================================================= Total params: 251,924,357 Trainable params: 251,924,357 Non-trainable params: 0 _________________________________________________________________
Function for Comparing Training and Validation Results of Different Models:¶
def visualizing_results(history, model_name):
    font1 = {'family': 'serif', 'size': 15}
    font2 = {'family': 'serif', 'size': 12}
    plt.figure(figsize=(10, 10))

    plt.subplot(2, 1, 1)
    plt.plot(history.history['accuracy'], label='Train Accuracy', color='b', marker='o')
    plt.plot(history.history['val_accuracy'], label='Validation Accuracy', color='orange', marker='d')
    plt.xlabel('Epoch', fontsize=12, labelpad=16)
    plt.ylabel('Accuracy', fontsize=12, labelpad=16)
    plt.title(f'{model_name} - Training and Validation Accuracy', backgroundcolor='grey', color='white', fontdict=font1)
    plt.legend()

    plt.subplot(2, 1, 2)
    plt.plot(history.history['loss'], label='Train Loss', color='b', marker='o')
    plt.plot(history.history['val_loss'], label='Validation Loss', color='orange', marker='d')
    plt.xlabel('Epoch', fontsize=12, labelpad=16)
    plt.ylabel('Loss', fontsize=12, labelpad=16)
    plt.title(f'{model_name} - Training and Validation Loss', backgroundcolor='grey', color='white', fontdict=font1)
    plt.legend()

    plt.tight_layout()
    plt.show()
7.1.1 Fitting the Model Using EarlyStopping¶
In this section, we enhance the model training process by incorporating the EarlyStopping callback.
This callback monitors the validation loss during training; if the validation loss fails to improve for a defined number of consecutive epochs (set by the patience parameter), training stops, and with restore_best_weights=True the model reverts to the weights from its best epoch.
By integrating EarlyStopping, we reduce the risk of overfitting and avoid spending training time once performance on unseen validation data has stopped improving.
# Define the EarlyStopping callback
early_stopping = EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True)
# Fit the model with the EarlyStopping callback
history1_m1 = CNN_model1.fit(
Train,
epochs=20,
validation_data=Validation,
callbacks=[early_stopping],
verbose=1
)
Epoch 1/20
1641/1641 [==============================] - 97s 54ms/step - loss: 0.2636 - accuracy: 0.9564 - val_loss: 0.0439 - val_accuracy: 0.9857
Epoch 2/20
1641/1641 [==============================] - 87s 53ms/step - loss: 0.0494 - accuracy: 0.9841 - val_loss: 0.0486 - val_accuracy: 0.9858
Epoch 3/20
1641/1641 [==============================] - 82s 50ms/step - loss: 0.0297 - accuracy: 0.9900 - val_loss: 0.0491 - val_accuracy: 0.9862
Epoch 4/20
1641/1641 [==============================] - 86s 52ms/step - loss: 0.0298 - accuracy: 0.9896 - val_loss: 0.0466 - val_accuracy: 0.9876
Visualizing Model Comparison Results:¶
Training stopped after epoch 4: the validation loss never improved on its epoch-1 value of 0.0439 for three consecutive epochs, so with restore_best_weights=True the model reverted to the epoch-1 weights.
visualizing_results(history1_m1, "Model 1")
7.1.2 Fitting the Model Without Using EarlyStopping¶
Note that this run continues training the same CNN_model1 instance, so it starts from the weights learned above rather than from a fresh initialization.
history2_m1 = CNN_model1.fit(
Train,
epochs=20,
validation_data=Validation,
verbose=1
)
Epoch 1/20
1641/1641 [==============================] - 85s 52ms/step - loss: 0.0659 - accuracy: 0.9771 - val_loss: 0.0407 - val_accuracy: 0.9875
Epoch 2/20
1641/1641 [==============================] - 82s 50ms/step - loss: 0.0478 - accuracy: 0.9833 - val_loss: 0.0445 - val_accuracy: 0.9859
Epoch 3/20
1641/1641 [==============================] - 84s 51ms/step - loss: 0.0189 - accuracy: 0.9934 - val_loss: 0.1025 - val_accuracy: 0.9718
Epoch 4/20
1641/1641 [==============================] - 82s 50ms/step - loss: 0.0256 - accuracy: 0.9912 - val_loss: 0.1391 - val_accuracy: 0.9683
Epoch 5/20
1641/1641 [==============================] - 83s 50ms/step - loss: 0.0192 - accuracy: 0.9937 - val_loss: 0.1373 - val_accuracy: 0.9721
Epoch 6/20
1641/1641 [==============================] - 83s 51ms/step - loss: 0.0126 - accuracy: 0.9958 - val_loss: 0.1338 - val_accuracy: 0.9721
Epoch 7/20
1641/1641 [==============================] - 86s 52ms/step - loss: 0.0076 - accuracy: 0.9974 - val_loss: 0.1396 - val_accuracy: 0.9747
Epoch 8/20
1641/1641 [==============================] - 82s 50ms/step - loss: 0.0055 - accuracy: 0.9983 - val_loss: 0.1812 - val_accuracy: 0.9745
Epoch 9/20
1641/1641 [==============================] - 84s 51ms/step - loss: 0.0059 - accuracy: 0.9979 - val_loss: 0.1541 - val_accuracy: 0.9756
Epoch 10/20
1641/1641 [==============================] - 85s 51ms/step - loss: 0.0045 - accuracy: 0.9986 - val_loss: 0.1554 - val_accuracy: 0.9764
Epoch 11/20
1641/1641 [==============================] - 84s 51ms/step - loss: 0.0026 - accuracy: 0.9992 - val_loss: 0.2012 - val_accuracy: 0.9755
Epoch 12/20
1641/1641 [==============================] - 84s 51ms/step - loss: 0.0029 - accuracy: 0.9992 - val_loss: 0.1951 - val_accuracy: 0.9771
Epoch 13/20
1641/1641 [==============================] - 82s 50ms/step - loss: 0.0037 - accuracy: 0.9988 - val_loss: 0.2095 - val_accuracy: 0.9751
Epoch 14/20
1641/1641 [==============================] - 82s 50ms/step - loss: 0.0033 - accuracy: 0.9991 - val_loss: 0.1897 - val_accuracy: 0.9749
Epoch 15/20
1641/1641 [==============================] - 83s 51ms/step - loss: 0.0024 - accuracy: 0.9993 - val_loss: 0.2154 - val_accuracy: 0.9757
Epoch 16/20
1641/1641 [==============================] - 85s 52ms/step - loss: 0.0042 - accuracy: 0.9988 - val_loss: 0.2404 - val_accuracy: 0.9740
Epoch 17/20
1641/1641 [==============================] - 82s 50ms/step - loss: 0.0017 - accuracy: 0.9995 - val_loss: 0.2324 - val_accuracy: 0.9745
Epoch 18/20
1641/1641 [==============================] - 86s 52ms/step - loss: 0.0019 - accuracy: 0.9995 - val_loss: 0.2467 - val_accuracy: 0.9756
Epoch 19/20
1641/1641 [==============================] - 83s 50ms/step - loss: 0.0035 - accuracy: 0.9991 - val_loss: 0.2454 - val_accuracy: 0.9738
Epoch 20/20
1641/1641 [==============================] - 83s 50ms/step - loss: 0.0021 - accuracy: 0.9994 - val_loss: 0.2766 - val_accuracy: 0.9744
Visualizing Model Comparison Results:¶
visualizing_results(history2_m1, "Model 1")
# Evaluate the model on the test data
test_loss, test_accuracy = CNN_model1.evaluate(Test, verbose=1)
print(f"Test loss: {test_loss:.4f}")
print(f"Test accuracy: {test_accuracy:.4f}")
352/352 [==============================] - 8s 23ms/step - loss: 0.2471 - accuracy: 0.9762
Test loss: 0.2471
Test accuracy: 0.9762
7.2 Second Model¶
A Dual-Layer Convolution and Dual-Layer Pooling:¶
CNN_model2 = models.Sequential(
    [
        layers.Rescaling(1./255, input_shape=(250, 250, 3)),
        layers.Conv2D(filters=32, kernel_size=3, activation='relu'),
        layers.MaxPool2D(pool_size=2, strides=2),
        layers.Conv2D(filters=64, kernel_size=3, activation='relu'),  # Additional convolutional layer
        layers.MaxPool2D(pool_size=2, strides=2),
        layers.Flatten(),
        layers.Dense(units=512, activation='relu'),
        layers.Dropout(0.5),
        layers.Dense(units=5, activation='sigmoid')  # See the note on Model 1: 'softmax' is the conventional choice here
    ]
)
CNN_model2.compile(
optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy']
)
CNN_model2.summary()
Model: "sequential_1" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= rescaling_1 (Rescaling) (None, 250, 250, 3) 0 conv2d_1 (Conv2D) (None, 248, 248, 32) 896 max_pooling2d_1 (MaxPooling (None, 124, 124, 32) 0 2D) conv2d_2 (Conv2D) (None, 122, 122, 64) 18496 max_pooling2d_2 (MaxPooling (None, 61, 61, 64) 0 2D) flatten_1 (Flatten) (None, 238144) 0 dense_2 (Dense) (None, 512) 121930240 dropout_1 (Dropout) (None, 512) 0 dense_3 (Dense) (None, 5) 2565 ================================================================= Total params: 121,952,197 Trainable params: 121,952,197 Non-trainable params: 0 _________________________________________________________________
history1_m2 = CNN_model2.fit(
Train,
epochs=20,
validation_data=Validation,
verbose=1
)
Epoch 1/20
1641/1641 [==============================] - 81s 48ms/step - loss: 0.2017 - accuracy: 0.9408 - val_loss: 0.0640 - val_accuracy: 0.9794
Epoch 2/20
1641/1641 [==============================] - 78s 47ms/step - loss: 0.1103 - accuracy: 0.9681 - val_loss: 0.0452 - val_accuracy: 0.9878
Epoch 3/20
1641/1641 [==============================] - 78s 47ms/step - loss: 0.3500 - accuracy: 0.9502 - val_loss: 0.2117 - val_accuracy: 0.9636
Epoch 4/20
1641/1641 [==============================] - 80s 48ms/step - loss: 0.1758 - accuracy: 0.9476 - val_loss: 0.1356 - val_accuracy: 0.9633
Epoch 5/20
1641/1641 [==============================] - 90s 55ms/step - loss: 0.1083 - accuracy: 0.9665 - val_loss: 0.0663 - val_accuracy: 0.9836
Epoch 6/20
1641/1641 [==============================] - 78s 47ms/step - loss: 0.1011 - accuracy: 0.9731 - val_loss: 0.0563 - val_accuracy: 0.9893
Epoch 7/20
1641/1641 [==============================] - 78s 47ms/step - loss: 0.1085 - accuracy: 0.9737 - val_loss: 0.1629 - val_accuracy: 0.9705
Epoch 8/20
1641/1641 [==============================] - 77s 47ms/step - loss: 0.1014 - accuracy: 0.9734 - val_loss: 0.0991 - val_accuracy: 0.9759
Epoch 9/20
1641/1641 [==============================] - 78s 48ms/step - loss: 0.1203 - accuracy: 0.9670 - val_loss: 0.0743 - val_accuracy: 0.9852
Epoch 10/20
1641/1641 [==============================] - 77s 47ms/step - loss: 0.1048 - accuracy: 0.9730 - val_loss: 0.0864 - val_accuracy: 0.9852
Epoch 11/20
1641/1641 [==============================] - 79s 48ms/step - loss: 0.2086 - accuracy: 0.9481 - val_loss: 0.0822 - val_accuracy: 0.9860
Epoch 12/20
1641/1641 [==============================] - 79s 48ms/step - loss: 0.1784 - accuracy: 0.9556 - val_loss: 0.1119 - val_accuracy: 0.9772
Epoch 13/20
1641/1641 [==============================] - 79s 48ms/step - loss: 0.1477 - accuracy: 0.9596 - val_loss: 0.0783 - val_accuracy: 0.9858
Epoch 14/20
1641/1641 [==============================] - 81s 49ms/step - loss: 0.1444 - accuracy: 0.9650 - val_loss: 0.1079 - val_accuracy: 0.9871
Epoch 15/20
1641/1641 [==============================] - 79s 48ms/step - loss: 0.1613 - accuracy: 0.9605 - val_loss: 0.0780 - val_accuracy: 0.9852
Epoch 16/20
1641/1641 [==============================] - 78s 47ms/step - loss: 0.2205 - accuracy: 0.9467 - val_loss: 0.1001 - val_accuracy: 0.9748
Epoch 17/20
1641/1641 [==============================] - 79s 48ms/step - loss: 0.1706 - accuracy: 0.9523 - val_loss: 0.1582 - val_accuracy: 0.9604
Epoch 18/20
1641/1641 [==============================] - 78s 47ms/step - loss: 0.1623 - accuracy: 0.9518 - val_loss: 0.1093 - val_accuracy: 0.9833
Epoch 19/20
1641/1641 [==============================] - 78s 48ms/step - loss: 0.1180 - accuracy: 0.9629 - val_loss: 0.0955 - val_accuracy: 0.9838
Epoch 20/20
1641/1641 [==============================] - 78s 47ms/step - loss: 0.1317 - accuracy: 0.9636 - val_loss: 0.0928 - val_accuracy: 0.9841
Visualizing Model Comparison Results:¶
visualizing_results(history1_m2, "Model 2")
7.3 Third Model¶
This section uses a LeNet-5 inspired architecture:¶
# Define the model
CNN_model3 = models.Sequential(
    [
        layers.Rescaling(1./255, input_shape=(250, 250, 3)),
        layers.Conv2D(filters=32, kernel_size=3, activation='relu'),
        layers.MaxPool2D(pool_size=2, strides=2),
        layers.Conv2D(filters=64, kernel_size=3, activation='relu'),
        layers.MaxPool2D(pool_size=2, strides=2),
        layers.Flatten(),
        layers.Dense(units=512, activation='relu'),
        layers.Dense(units=120, activation='relu'),  # LeNet-5 style fully connected layer
        layers.Dense(units=5, activation='sigmoid')  # See the note on Model 1: 'softmax' is the conventional choice here
    ]
)
# Compile the model
CNN_model3.compile(
optimizer='adam',
loss='sparse_categorical_crossentropy', # Use appropriate loss for multi-class classification
metrics=['accuracy']
)
# Summary of the model architecture
CNN_model3.summary()
Model: "sequential_2" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= rescaling_2 (Rescaling) (None, 250, 250, 3) 0 conv2d_3 (Conv2D) (None, 248, 248, 32) 896 max_pooling2d_3 (MaxPooling (None, 124, 124, 32) 0 2D) conv2d_4 (Conv2D) (None, 122, 122, 64) 18496 max_pooling2d_4 (MaxPooling (None, 61, 61, 64) 0 2D) flatten_2 (Flatten) (None, 238144) 0 dense_4 (Dense) (None, 512) 121930240 dense_5 (Dense) (None, 120) 61560 dense_6 (Dense) (None, 5) 605 ================================================================= Total params: 122,011,797 Trainable params: 122,011,797 Non-trainable params: 0 _________________________________________________________________
history1_m3 = CNN_model3.fit(
Train,
epochs=20,
validation_data=Validation,
verbose=1
)
Epoch 1/20
1641/1641 [==============================] - 81s 48ms/step - loss: 0.1643 - accuracy: 0.9494 - val_loss: 0.0909 - val_accuracy: 0.9663
Epoch 2/20
1641/1641 [==============================] - 78s 48ms/step - loss: 0.0874 - accuracy: 0.9710 - val_loss: 0.0874 - val_accuracy: 0.9703
Epoch 3/20
1641/1641 [==============================] - 79s 48ms/step - loss: 0.0610 - accuracy: 0.9788 - val_loss: 0.0803 - val_accuracy: 0.9749
Epoch 4/20
1641/1641 [==============================] - 78s 48ms/step - loss: 0.0390 - accuracy: 0.9872 - val_loss: 0.0766 - val_accuracy: 0.9786
Epoch 5/20
1641/1641 [==============================] - 80s 48ms/step - loss: 0.0254 - accuracy: 0.9913 - val_loss: 0.0752 - val_accuracy: 0.9809
Epoch 6/20
1641/1641 [==============================] - 79s 48ms/step - loss: 0.0186 - accuracy: 0.9938 - val_loss: 0.1173 - val_accuracy: 0.9764
Epoch 7/20
1641/1641 [==============================] - 83s 50ms/step - loss: 0.0155 - accuracy: 0.9952 - val_loss: 0.1105 - val_accuracy: 0.9795
Epoch 8/20
1641/1641 [==============================] - 80s 49ms/step - loss: 0.0104 - accuracy: 0.9965 - val_loss: 0.1153 - val_accuracy: 0.9818
Epoch 9/20
1641/1641 [==============================] - 79s 48ms/step - loss: 0.0102 - accuracy: 0.9968 - val_loss: 0.0871 - val_accuracy: 0.9820
Epoch 10/20
1641/1641 [==============================] - 78s 48ms/step - loss: 0.0091 - accuracy: 0.9975 - val_loss: 0.1027 - val_accuracy: 0.9820
Epoch 11/20
1641/1641 [==============================] - 78s 48ms/step - loss: 0.0110 - accuracy: 0.9971 - val_loss: 0.0814 - val_accuracy: 0.9850
Epoch 12/20
1641/1641 [==============================] - 78s 47ms/step - loss: 0.0079 - accuracy: 0.9980 - val_loss: 0.1337 - val_accuracy: 0.9821
Epoch 13/20
1641/1641 [==============================] - 79s 48ms/step - loss: 0.0073 - accuracy: 0.9980 - val_loss: 0.1196 - val_accuracy: 0.9817
Epoch 14/20
1641/1641 [==============================] - 79s 48ms/step - loss: 0.0109 - accuracy: 0.9974 - val_loss: 0.1237 - val_accuracy: 0.9847
Epoch 15/20
1641/1641 [==============================] - 80s 49ms/step - loss: 0.0058 - accuracy: 0.9986 - val_loss: 0.1862 - val_accuracy: 0.9806
Epoch 16/20
1641/1641 [==============================] - 78s 48ms/step - loss: 0.0083 - accuracy: 0.9978 - val_loss: 0.1192 - val_accuracy: 0.9833
Epoch 17/20
1641/1641 [==============================] - 79s 48ms/step - loss: 0.0037 - accuracy: 0.9990 - val_loss: 0.1727 - val_accuracy: 0.9828
Epoch 18/20
1641/1641 [==============================] - 78s 47ms/step - loss: 0.0060 - accuracy: 0.9985 - val_loss: 0.1415 - val_accuracy: 0.9823
Epoch 19/20
1641/1641 [==============================] - 79s 48ms/step - loss: 0.0083 - accuracy: 0.9982 - val_loss: 0.1240 - val_accuracy: 0.9818
Epoch 20/20
1641/1641 [==============================] - 78s 47ms/step - loss: 0.0064 - accuracy: 0.9987 - val_loss: 0.1252 - val_accuracy: 0.9855
Visualizing Model Comparison Results:¶
visualizing_results(history1_m3, "LeNet-5")
# Evaluate the model on the test data
test_loss, test_accuracy = CNN_model3.evaluate(Test)
print(f"Test loss: {test_loss:.4f}")
print(f"Test accuracy: {test_accuracy:.4f}")
352/352 [==============================] - 9s 26ms/step - loss: 0.1078 - accuracy: 0.9878
Test loss: 0.1078
Test accuracy: 0.9878
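The scikit-learn and seaborn imports at the top of the notebook point towards a per-class error analysis; the sketch below (not executed above, so its output is not shown) builds a confusion matrix and classification report for this model on the test data. Predictions are collected batch by batch so that labels and predictions stay aligned even though the Test dataset was created with shuffle=True:
# Collect true labels and predictions batch by batch to keep them aligned
y_true, y_pred = [], []
for images, labels in Test:
    probs = CNN_model3.predict(images, verbose=0)
    y_true.extend(labels.numpy())
    y_pred.extend(np.argmax(probs, axis=1))

# Confusion matrix heatmap
cm = confusion_matrix(y_true, y_pred)
plt.figure(figsize=(6, 5))
sns.heatmap(cm, annot=True, fmt='d', cmap='Blues', xticklabels=Class_labels, yticklabels=Class_labels)
plt.xlabel('Predicted')
plt.ylabel('Actual')
plt.title('Model 3 - Confusion Matrix (Test data)')
plt.show()

# Per-class precision, recall and F1
print(classification_report(y_true, y_pred, target_names=Class_labels))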
8. Conclusion¶
Comparative Analysis of Rice Image Classification Models
In this study, three distinct CNN models were developed for the classification of rice varieties based on images. The performance of each model was assessed, leading to valuable insights into their strengths and areas of improvement.
Model 1: Simple Convolutional Network
The first model, a straightforward convolutional network, achieved competitive accuracy on the validation set, around 98%. However, when training continued without early stopping, the validation loss rose steadily (from roughly 0.04 to 0.28) while training accuracy kept climbing, a clear sign of overfitting that suggests further regularization would be beneficial.
Model 2: Enhanced Convolutional Network
The second model, which adds a second convolutional block, reached validation accuracy similar to the first model's. However, its convergence was noticeably more erratic, with large swings in training loss (for example from 0.11 up to 0.35 between epochs 2 and 3) and in validation accuracy. This suggests difficulty in optimization and generalization, warranting closer attention to model complexity and training stability.
Model 3: LeNet-5 Inspired Architecture
The third model, drawing inspiration from the classic LeNet-5 architecture, emerged as a strong contender. It showed consistent convergence, with training and validation accuracy curves aligning closely. Most notably, it reached around 98.5% accuracy on the validation set and 98.78% on the test set, reflecting effective learning and generalization. The relative simplicity of the LeNet-5 inspired architecture seemed to contribute positively to its performance and stability.
In evaluating these models, the LeNet-5 inspired architecture demonstrated remarkable performance, balancing accuracy, convergence, and model complexity effectively. While all models presented potential, the LeNet-5 inspired approach stands out as a compelling choice for rice image classification. Further refinements and optimizations to this architecture hold the promise of even more impressive results, not only in rice classification but also in various image classification tasks.
9. Saving Model¶
Final_Model = CNN_model3
Final_Model.save("CNN_model3.h5")
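For later inference, the saved file can simply be reloaded (a minimal sketch; it assumes CNN_model3.h5 is present in the working directory):
# Reload the saved model and verify its architecture
restored_model = keras.models.load_model("CNN_model3.h5")
restored_model.summary()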
This model could be further quantized using TensorFlow Lite.
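As a rough sketch of that idea (post-training dynamic-range quantization; the converter settings here are illustrative and the resulting file is not produced in this notebook):
# Convert the final Keras model to a quantized TensorFlow Lite model
converter = tf.lite.TFLiteConverter.from_keras_model(Final_Model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("CNN_model3.tflite", "wb") as f:
    f.write(tflite_model)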