
import tensorflow as tf
from tensorflow import keras
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
TensorFlow is a Google product.

We do not need to install it when we use Google Colab.

If you want to use it on a local laptop, you need to install it first.

It runs more efficiently in Google Colab, where GPU runtimes are available.

Keras is the high-level API bundled with TensorFlow (TensorFlow acts as its backend).

It makes building neural networks easy.

Python lists and NumPy arrays live on the CPU; tensors (TensorFlow) and torch tensors (PyTorch) can be placed on the GPU.

l = [1, 2, 3, 4]
tf.constant(l)
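
A small sketch of the conversion (device placement depends on the runtime; on a Colab GPU runtime tensors land on the GPU):

import numpy as np
import tensorflow as tf

l = [1, 2, 3, 4]
t = tf.constant(l)                     # tensor from a Python list
arr = np.array(l, dtype=np.float32)
t2 = tf.convert_to_tensor(arr)         # tensor from a NumPy array
print(t.dtype, t.shape)                # int32 (4,)
print(t2.device)                       # device string, e.g. ...GPU:0 on a GPU runtime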
load the data

import keras

dir(keras)

[‘DTypePolicy’,
‘FloatDTypePolicy’,
‘Function’,
‘Initializer’,
‘Input’,
‘InputSpec’,
‘KerasTensor’,
‘Layer’,
‘Loss’,
‘Metric’,
‘Model’,
‘Operation’,
‘Optimizer’,
‘Quantizer’,
‘Regularizer’,
‘RematScope’,
‘Sequential’,
‘StatelessScope’,
‘SymbolicScope’,
‘Variable’,
'__builtins__',
'__cached__',
'__doc__',
'__file__',
'__loader__',
'__name__',
'__package__',
'__path__',
'__spec__',
'__version__',
‘_tf_keras’,
‘activations’,
‘applications’,
‘backend’,
‘callbacks’,
‘config’,
‘constraints’,
‘datasets’,
‘device’,
‘distribution’,
‘dtype_policies’,
‘export’,
‘initializers’,
‘layers’,
‘legacy’,
‘losses’,
‘metrics’,
‘mixed_precision’,
‘models’,
‘name_scope’,
‘ops’,
‘optimizers’,
‘preprocessing’,
‘quantizers’,
‘random’,
‘regularizers’,
‘remat’,
‘saving’,
‘src’,
‘tree’,
‘utils’,
‘version’,
‘visualization’,
‘wrappers’]
from keras import datasets
dir(datasets)
['__builtins__',
'__cached__',
'__doc__',
'__file__',
'__loader__',
'__name__',
'__package__',
'__path__',
'__spec__',
'boston_housing',
'california_housing',
'cifar10',
'cifar100',
'fashion_mnist',
'imdb',
'mnist',
'reuters']
from keras.datasets import fashion_mnist
import keras
from keras import datasets
from keras.datasets import fashion_mnist
 fashion_mnist


dir(fashion_mnist)
['__builtins__',
'__cached__',
'__doc__',
'__file__',
'__loader__',
'__name__',
'__package__',
'__path__',
'__spec__',
'load_data']
fashion_mnist.load_data

(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()

# load_data() returns two tuples of arrays, unpacked in one step,
# just like (a, b), (c, d) = ((1, 2), (3, 4))

Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-labels-idx1-ubyte.gz
29515/29515 ━━━━━━━━━━━━━━━━━━━━ 0s 0us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-images-idx3-ubyte.gz
26421880/26421880 ━━━━━━━━━━━━━━━━━━━━ 0s 0us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-labels-idx1-ubyte.gz
5148/5148 ━━━━━━━━━━━━━━━━━━━━ 0s 1us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-images-idx3-ubyte.gz
4422102/4422102 ━━━━━━━━━━━━━━━━━━━━ 0s 0us/step
x_train.shape
(60000, 28, 28)
x_train[0]

y_train[0]
np.uint8(9)
Label Description
0 T-shirt/top
1 Trouser
2 Pullover
3 Dress
4 Coat
5 Sandal
6 Shirt
7 Sneaker
8 Bag
9 Ankle boot
class_names = ['top', 'trouser', 'pullover', 'dress', 'coat',
               'sandal', 'shirt', 'sneaker', 'bag', 'ankle boot']
class_names
[‘top’,
‘trouser’,
‘pullover’,
‘dress’,
‘coat’,
‘sandal’,
‘shirt’,
‘sneaker’,
‘bag’,
‘ankle boot’]
plot the images

plt.imshow(x_train[0])
plt.colorbar()
plt.xlabel(class_names[y_train[0]])
plt.show()

class_names = ['top', 'trouser', 'pullover', 'dress', 'coat',
               'sandal', 'shirt', 'sneaker', 'bag', 'ankle boot']

import matplotlib.pyplot as plt
plt.figure(figsize=(14,14))
for i in range(25):
    plt.subplot(5,5,i+1)
    plt.imshow(x_train[i])
    plt.colorbar()
    plt.xlabel(y_train[i])
    plt.title(class_names[y_train[i]])

Scale the data

We divide the data by 255 so that each pixel value is scaled to the range 0 to 1.
x_train=x_train/255
x_test=x_test/255
class_names = ['top', 'trouser', 'pullover', 'dress', 'coat',
               'sandal', 'shirt', 'sneaker', 'bag', 'ankle boot']

import matplotlib.pyplot as plt
plt.figure(figsize=(14,14))
for i in range(25):
    plt.subplot(5,5,i+1)
    plt.imshow(x_train[i])
    plt.colorbar()
    plt.xlabel(y_train[i])
    plt.title(class_names[y_train[i]])

Develop a model

input layer

mention the input shape
hidden layer

number of neurons

activation

we can use HeNormal or Xavier (Glorot) initializers for the random weights and biases

Dropout rate: the fraction of neurons randomly dropped, based on probability

Batch Normalization

output layer

10 classes, so 10 neurons

activation : Softmax

Sequential model

The input layer takes a 28×28 image and flattens it into 784 pixels

the model adds layers one after another; each hidden layer is Dense (fully connected)

from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense,Flatten, Dropout, BatchNormalization,Activation
from tensorflow.keras.initializers import HeNormal
model = Sequential()

########## Input 

model.add(Flatten(input_shape = (28, 28)))   # Input

################ Hidden1 

model.add(Dropout(0.2))

model.add(Dense(128, activation = 'relu', kernel_initializer = HeNormal()))

model.add(Dense(128,kernel_initializer = HeNormal())) # logits

############ Batch Norm 

model.add(BatchNormalization())

########## Activation 

model.add(Activation('relu')) # non linearity

################ Hidden2 

model.add(Dropout(0.2))

model.add(Dense(128, activation = 'relu', kernel_initializer = HeNormal()))

model.add(Dense(128,kernel_initializer = HeNormal())) # logits

############ Batch Norm 

model.add(BatchNormalization())

########## Activation 

model.add(Activation('relu')) # non linearity

############## Output layer 

model.add(Dense(10, activation = 'softmax', kernel_initializer = HeNormal()))
/usr/local/lib/python3.11/dist-packages/keras/src/layers/reshaping/flatten.py:37: UserWarning: Do not pass an input_shape/input_dim argument to a layer. When using Sequential models, prefer using an Input(shape) object as the first layer in the model instead.
super().__init__(**kwargs)
Model summary

model.summary()

Trainable and non-trainable parameters

weights and biases are trainable parameters

Hidden layer 1: 100,480

Hidden layer 2: 16,512

Output layer: 1,290

Total Dense parameters: 100,480 + 16,512 + 1,290 = 118,282

Batch Normalization

– gamma (scale) and beta (shift) are trainable:

    – Hidden layer 1: 2*128 = 256

    – Hidden layer 2: 2*128 = 256

– Moving average and moving variance are non-trainable:

    – Hidden layer 1: 2*128 = 256

    – Hidden layer 2: 2*128 = 256

Total parameters (trainable + non-trainable):

– Trainable: 118,282 + 256 + 256 = 118,794

– Non-trainable: 256 + 256 = 512

– Total: 118,794 + 512 = 119,306
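
A quick sketch of the arithmetic behind the Dense-layer counts above (weights = inputs * units, plus one bias per unit):

hidden1 = 784 * 128 + 128   # 100,480
hidden2 = 128 * 128 + 128   # 16,512
output  = 128 * 10 + 10     # 1,290
print(hidden1 + hidden2 + output)   # 118,282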

shape in Summary

None, 784

None indicates the batch size: (batch, 28, 28)

None means the batch size is decided only when data is actually processed.

Example: if we process a batch of 32 samples at a time, the shape becomes (32, 784).

If we predict on 1000 samples, the shape is (1000, 28, 28), flattened to (1000, 784).

If we pass a single sample for inference, the shape becomes (1, 28, 28), flattened to (1, 784).
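
For example, with the arrays already loaded (the batch sizes below are just illustrative):

x_test[:1].shape    # (1, 28, 28)     -> flattened inside the model to (1, 784)
x_test[:32].shape   # (32, 28, 28)    -> (32, 784)
x_test.shape        # (10000, 28, 28) -> (10000, 784)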

Model compile

step-1: model development

step-2: Model summary

step-3: model compile

– loss : cross entropy

– optimizer : Adam (the learning rate is left at its default)

– metrics : accuracy
model.compile(optimizer='adam',
              loss = 'sparse_categorical_crossentropy',
              metrics = ['accuracy'])
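
Note: sparse_categorical_crossentropy expects integer class labels, which is exactly what y_train holds; one-hot encoded targets would pair with categorical_crossentropy instead. A small sketch:

y_train[:3]                                                        # integer labels in 0-9
tf.keras.utils.to_categorical(y_train[:3], num_classes=10).shape   # (3, 10) one-hot version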
model fit

step-1: model development

step-2: model summary

step-3: model compile

step-4: model fit

data : x_train,y_train

epochs : 10

batch size : 64

validation

if you have separate validation data, pass it in directly

otherwise just give a fraction; the training data alone is then split into validation and train (see the sketch below)
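
If a separate hold-out set exists, it can be passed explicitly instead of using validation_split (x_val and y_val below are hypothetical arrays, not defined in this notebook):

# model.fit(x_train, y_train, epochs=10, batch_size=64,
#           validation_data=(x_val, y_val))   # x_val, y_val: hypothetical hold-out set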

help(model.fit)
Help on method fit in module keras.src.backend.tensorflow.trainer:

fit(x=None, y=None, batch_size=None, epochs=1, verbose=’auto’, callbacks=None, validation_split=0.0, validation_data=None, shuffle=True, class_weight=None, sample_weight=None, initial_epoch=0, steps_per_epoch=None, validation_steps=None, validation_batch_size=None, validation_freq=1) method of keras.src.models.sequential.Sequential instance
Trains the model for a fixed number of epochs (dataset iterations).

Args:
    x: Input data. It can be:
        - A NumPy array (or array-like), or a list of arrays
        (in case the model has multiple inputs).
        - A backend-native tensor, or a list of tensors
        (in case the model has multiple inputs).
        - A dict mapping input names to the corresponding array/tensors,
        if the model has named inputs.
        - A `keras.utils.PyDataset` returning `(inputs, targets)` or
        `(inputs, targets, sample_weights)`.
        - A `tf.data.Dataset` yielding `(inputs, targets)` or
        `(inputs, targets, sample_weights)`.
        - A `torch.utils.data.DataLoader` yielding `(inputs, targets)`
        or `(inputs, targets, sample_weights)`.
        - A Python generator function yielding `(inputs, targets)` or
        `(inputs, targets, sample_weights)`.
    y: Target data. Like the input data `x`, it can be either NumPy
        array(s) or backend-native tensor(s). If `x` is a
        `keras.utils.PyDataset`, `tf.data.Dataset`,
        `torch.utils.data.DataLoader` or a Python generator function,
        `y` should not be specified since targets will be obtained from
        `x`.
    batch_size: Integer or `None`.
        Number of samples per gradient update.
        If unspecified, `batch_size` will default to 32.
        Do not specify the `batch_size` if your input data `x` is a
        `keras.utils.PyDataset`, `tf.data.Dataset`,
        `torch.utils.data.DataLoader` or Python generator function
        since they generate batches.
    epochs: Integer. Number of epochs to train the model.
        An epoch is an iteration over the entire `x` and `y`
        data provided
        (unless the `steps_per_epoch` flag is set to
        something other than None).
        Note that in conjunction with `initial_epoch`,
        `epochs` is to be understood as "final epoch".
        The model is not trained for a number of iterations
        given by `epochs`, but merely until the epoch
        of index `epochs` is reached.
    verbose: `"auto"`, 0, 1, or 2. Verbosity mode.
        0 = silent, 1 = progress bar, 2 = one line per epoch.
        "auto" becomes 1 for most cases.
        Note that the progress bar is not
        particularly useful when logged to a file,
        so `verbose=2` is recommended when not running interactively
        (e.g., in a production environment). Defaults to `"auto"`.
    callbacks: List of `keras.callbacks.Callback` instances.
        List of callbacks to apply during training.
        See `keras.callbacks`. Note
        `keras.callbacks.ProgbarLogger` and
        `keras.callbacks.History` callbacks are created
        automatically and need not be passed to `model.fit()`.
        `keras.callbacks.ProgbarLogger` is created
        or not based on the `verbose` argument in `model.fit()`.
    validation_split: Float between 0 and 1.
        Fraction of the training data to be used as validation data.
        The model will set apart this fraction of the training data,
        will not train on it, and will evaluate the loss and any model
        metrics on this data at the end of each epoch. The validation
        data is selected from the last samples in the `x` and `y` data
        provided, before shuffling.
        This argument is only supported when `x` and `y` are made of
        NumPy arrays or tensors.
        If both `validation_data` and `validation_split` are provided,
        `validation_data` will override `validation_split`.
    validation_data: Data on which to evaluate
        the loss and any model metrics at the end of each epoch.
        The model will not be trained on this data. Thus, note the fact
        that the validation loss of data provided using
        `validation_split` or `validation_data` is not affected by
        regularization layers like noise and dropout.
        `validation_data` will override `validation_split`.
        It can be:
        - A tuple `(x_val, y_val)` of NumPy arrays or tensors.
        - A tuple `(x_val, y_val, val_sample_weights)` of NumPy
        arrays.
        - A `keras.utils.PyDataset`, a `tf.data.Dataset`, a
        `torch.utils.data.DataLoader` yielding `(inputs, targets)` or a
        Python generator function yielding `(x_val, y_val)` or
        `(inputs, targets, sample_weights)`.
    shuffle: Boolean, whether to shuffle the training data before each
        epoch. This argument is ignored when `x` is a
        `keras.utils.PyDataset`, `tf.data.Dataset`,
        `torch.utils.data.DataLoader` or Python generator function.
    class_weight: Optional dictionary mapping class indices (integers)
        to a weight (float) value, used for weighting the loss function
        (during training only).
        This can be useful to tell the model to
        "pay more attention" to samples from
        an under-represented class. When `class_weight` is specified
        and targets have a rank of 2 or greater, either `y` must be
        one-hot encoded, or an explicit final dimension of `1` must
        be included for sparse class labels.
    sample_weight: Optional NumPy array or tensor of weights for
        the training samples, used for weighting the loss function
        (during training only). You can either pass a flat (1D)
        NumPy array or tensor with the same length as the input samples
        (1:1 mapping between weights and samples), or in the case of
        temporal data, you can pass a 2D NumPy array or tensor with
        shape `(samples, sequence_length)` to apply a different weight
        to every timestep of every sample.
        This argument is not supported when `x` is a
        `keras.utils.PyDataset`, `tf.data.Dataset`,
        `torch.utils.data.DataLoader` or Python generator function.
        Instead, provide `sample_weights` as the third element of `x`.
        Note that sample weighting does not apply to metrics specified
        via the `metrics` argument in `compile()`. To apply sample
        weighting to your metrics, you can specify them via the
        `weighted_metrics` in `compile()` instead.
    initial_epoch: Integer.
        Epoch at which to start training
        (useful for resuming a previous training run).
    steps_per_epoch: Integer or `None`.
        Total number of steps (batches of samples) before declaring one
        epoch finished and starting the next epoch. When training with
        input tensors or NumPy arrays, the default `None` means that the
        value used is the number of samples in your dataset divided by
        the batch size, or 1 if that cannot be determined.
        If `x` is a `keras.utils.PyDataset`, `tf.data.Dataset`,
        `torch.utils.data.DataLoader` or Python generator function, the
        epoch will run until the input dataset is exhausted. When
        passing an infinitely repeating dataset, you must specify the
        `steps_per_epoch` argument, otherwise the training will run
        indefinitely.
    validation_steps: Integer or `None`.
        Only relevant if `validation_data` is provided.
        Total number of steps (batches of samples) to draw before
        stopping when performing validation at the end of every epoch.
        If `validation_steps` is `None`, validation will run until the
        `validation_data` dataset is exhausted. In the case of an
        infinitely repeating dataset, it will run indefinitely. If
        `validation_steps` is specified and only part of the dataset
        is consumed, the evaluation will start from the beginning of the
        dataset at each epoch. This ensures that the same validation
        samples are used every time.
    validation_batch_size: Integer or `None`.
        Number of samples per validation batch.
        If unspecified, will default to `batch_size`.
        Do not specify the `validation_batch_size` if your data is a
        `keras.utils.PyDataset`, `tf.data.Dataset`,
        `torch.utils.data.DataLoader` or Python generator function
        since they generate batches.
    validation_freq: Only relevant if validation data is provided.
        Specifies how many training epochs to run
        before a new validation run is performed,
        e.g. `validation_freq=2` runs validation every 2 epochs.

Unpacking behavior for iterator-like inputs:
    A common pattern is to pass an iterator like object such as a
    `tf.data.Dataset` or a `keras.utils.PyDataset` to `fit()`,
    which will in fact yield not only features (`x`)
    but optionally targets (`y`) and sample weights (`sample_weight`).
    Keras requires that the output of such iterator-likes be
    unambiguous. The iterator should return a tuple
    of length 1, 2, or 3, where the optional second and third elements
    will be used for `y` and `sample_weight` respectively.
    Any other type provided will be wrapped in
    a length-one tuple, effectively treating everything as `x`. When
    yielding dicts, they should still adhere to the top-level tuple
    structure,
    e.g. `({"x0": x0, "x1": x1}, y)`. Keras will not attempt to separate
    features, targets, and weights from the keys of a single dict.
    A notable unsupported data type is the `namedtuple`. The reason is
    that it behaves like both an ordered datatype (tuple) and a mapping
    datatype (dict). So given a namedtuple of the form:
    `namedtuple("example_tuple", ["y", "x"])`
    it is ambiguous whether to reverse the order of the elements when
    interpreting the value. Even worse is a tuple of the form:
    `namedtuple("other_tuple", ["x", "y", "z"])`
    where it is unclear if the tuple was intended to be unpacked
    into `x`, `y`, and `sample_weight` or passed through
    as a single element to `x`.

Returns:
    A `History` object. Its `History.history` attribute is
    a record of training loss values and metrics values
    at successive epochs, as well as validation loss values
    and validation metrics values (if applicable).

history=model.fit(x_train,
                  y_train,
                  epochs=10,
                  batch_size=64,
                  validation_split=0.2)  #Verbose=1
history
Epoch 1/10
750/750 ━━━━━━━━━━━━━━━━━━━━ 6s 5ms/step – accuracy: 0.7241 – loss: 0.7836 – val_accuracy: 0.8367 – val_loss: 0.4306
Epoch 2/10
750/750 ━━━━━━━━━━━━━━━━━━━━ 4s 6ms/step – accuracy: 0.8324 – loss: 0.4599 – val_accuracy: 0.8507 – val_loss: 0.4033
Epoch 3/10
750/750 ━━━━━━━━━━━━━━━━━━━━ 5s 6ms/step – accuracy: 0.8435 – loss: 0.4228 – val_accuracy: 0.8665 – val_loss: 0.3542
Epoch 4/10
750/750 ━━━━━━━━━━━━━━━━━━━━ 4s 5ms/step – accuracy: 0.8543 – loss: 0.3954 – val_accuracy: 0.8694 – val_loss: 0.3419
Epoch 5/10
750/750 ━━━━━━━━━━━━━━━━━━━━ 6s 6ms/step – accuracy: 0.8602 – loss: 0.3728 – val_accuracy: 0.8692 – val_loss: 0.3504
Epoch 6/10
750/750 ━━━━━━━━━━━━━━━━━━━━ 4s 5ms/step – accuracy: 0.8575 – loss: 0.3783 – val_accuracy: 0.8783 – val_loss: 0.3293
Epoch 7/10
750/750 ━━━━━━━━━━━━━━━━━━━━ 4s 5ms/step – accuracy: 0.8673 – loss: 0.3569 – val_accuracy: 0.8798 – val_loss: 0.3173
Epoch 8/10
750/750 ━━━━━━━━━━━━━━━━━━━━ 4s 6ms/step – accuracy: 0.8696 – loss: 0.3538 – val_accuracy: 0.8834 – val_loss: 0.3164
Epoch 9/10
750/750 ━━━━━━━━━━━━━━━━━━━━ 4s 5ms/step – accuracy: 0.8735 – loss: 0.3379 – val_accuracy: 0.8810 – val_loss: 0.3247
Epoch 10/10
750/750 ━━━━━━━━━━━━━━━━━━━━ 4s 5ms/step – accuracy: 0.8790 – loss: 0.3249 – val_accuracy: 0.8898 – val_loss: 0.3008

history.history
{‘accuracy’: [0.7848333120346069,
0.8345624804496765,
0.8475000262260437,
0.8540208339691162,
0.8598750233650208,
0.8617916703224182,
0.8652083277702332,
0.8694999814033508,
0.8713958263397217,
0.874625027179718],
‘loss’: [0.5992023348808289,
0.4512609541416168,
0.41365811228752136,
0.3957699239253998,
0.37709933519363403,
0.368355929851532,
0.3608276844024658,
0.3518081605434418,
0.34145137667655945,
0.3366551995277405],
‘val_accuracy’: [0.8367499709129333,
0.8506666421890259,
0.8665000200271606,
0.8694166541099548,
0.8691666722297668,
0.878250002861023,
0.8798333406448364,
0.8834166526794434,
0.8809999823570251,
0.8897500038146973],
‘val_loss’: [0.4306168556213379,
0.4033019244670868,
0.35415950417518616,
0.3418999910354614,
0.3503606915473938,
0.3293328881263733,
0.31731531023979187,
0.3163609504699707,
0.3246942460536957,
0.3008052706718445]}
plt.plot(history.history['loss'], label='loss')
plt.plot(history.history['val_loss'], label='val_loss')
plt.legend()
plt.xlabel('epochs')
plt.ylabel('loss')
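
The same kind of plot works for the accuracy curves recorded in history.history (a sketch):

plt.plot(history.history['accuracy'], label='accuracy')
plt.plot(history.history['val_accuracy'], label='val_accuracy')
plt.legend()
plt.xlabel('epochs')
plt.ylabel('accuracy')
plt.show()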

model evaluation on test data

step-1: model development

step-2: model summary

step-3: model compile

step-4: model fit

data : x_train,y_train

epochs : 10

batch size : 64

validation

if you have separate validation data, pass it in directly

otherwise just give a fraction; the training data alone is then split into validation and train

Step-5: model evaluation

60,000 training samples: 48,000 for training and 12,000 for validation (the 80/20 split)

10,000 test samples are evaluated as batch predictions

model.evaluate(x_test,y_test)

313 batches, each of 32 samples, cover the 10,000 test samples in total

313/313 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step – accuracy: 0.8808 – loss: 0.3246
[0.3262534737586975, 0.8794000148773193]
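
As a quick check of the batch count (evaluate uses the default batch size of 32 when none is given):

import math
math.ceil(10000 / 32)   # 313 batches cover the 10,000 test samples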
model prediction on test data

step-1: model development

step-2: model summary

step-3: model compile

step-4: model fit

data : x_train,y_train

epochs : 10

batch size : 64

validation

if you have separate validation data, pass it in directly

otherwise just give a fraction; the training data alone is then split into validation and train

Step-5: model evaluation

60,000 training samples: 48,000 train and 12,000 validation

10,000 test samples are evaluated as batch predictions

Step-6: model predict (pass only x_test)

for each sample we get 10 probability values

apply np.argmax

compare with y_test

the accuracy should match the evaluate accuracy (about 0.88)

y_pred=model.predict(x_test)
313/313 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step
np.argmax(y_pred[0])==y_test[0]
np.True_
check with the image

plt.imshow(x_test[0])
plt.colorbar()
plt.xlabel(class_names[y_test[0]])
plt.ylabel(class_names[np.argmax(y_pred[0])])
plt.show()


max_prob=[np.max(i)  for i in y_pred]
index=[np.argmax(i) for i in y_pred]
prediction_class=[class_names[i] for i in index]
Ground_Truth_class=[class_names[i] for i in y_test]

d1 = pd.DataFrame(zip(max_prob, index, prediction_class,
                      Ground_Truth_class),
                  columns=['Max_proba', 'Index', 'Prediction_class',
                           'Ground_Truth_class'])
con = d1['Prediction_class'] == d1['Ground_Truth_class']
d1['output'] = np.where(con, 1, 0)
accuracy = d1['output'].sum() / len(d1['output'])
accuracy
np.float64(0.8794)
d1
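
The same accuracy can also be computed in one line with NumPy instead of building a DataFrame (a sketch):

np.mean(np.argmax(y_pred, axis=1) == y_test)   # ~0.8794, matching the evaluate result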

model Save

step-1: model development

step-2: model summary

step-3: model compile

step-4: model fit

data : x_train,y_train

epochs : 10

batch size : 64

validation

if you have separate validation data, pass it in directly

otherwise just give a fraction; the training data alone is then split into validation and train

Step-5: model evaluation

60,000 training samples: 48,000 train and 12,000 validation

10,000 test samples are evaluated as batch predictions

Step-6: model predict pass only x_test

we will get for each sample 10 probability values

apply np.argmax

compare with y_test

the accuracy should match the evaluate accuracy (about 0.88)

Step-7: Model save

.keras

.h5

model.save('ANN.h5')
model.save('ANN.keras')
WARNING:absl:You are saving your model as an HDF5 file via model.save() or keras.saving.save_model(model). This file format is considered legacy. We recommend using instead the native Keras format, e.g. model.save('my_model.keras') or keras.saving.save_model(model, 'my_model.keras').
model load

step-1: model development

step-2: model summary

step-3: model compile

step-4: model fit

data : x_train,y_train

epochs : 10

batch size : 64

validation

if you have separate validation data, pass it in directly

otherwise just give a fraction; the training data alone is then split into validation and train

Step-5: model evaluation

60,000 training samples: 48,000 train and 12,000 validation

10,000 test samples are evaluated as batch predictions

Step-6: model predict pass only x_test

we will get for each sample 10 probability values

apply np.argmax

compare with y_test

the accuracy should match the evaluate accuracy (about 0.88)

Step-7: Model save

.keras

.h5

Step-8: load the model

loaded_model = tf.keras.models.load_model('ANN.h5')
loaded_model
WARNING:absl:Compiled the loaded model, but the compiled metrics have yet to be built. model.compile_metrics will be empty until you train or evaluate the model.

loaded_model.summary()
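
To confirm the reloaded model behaves like the original, it can be scored on the same test data (a sketch; the numbers should be close to the earlier evaluate call):

loaded_model.evaluate(x_test, y_test)   # expect roughly [0.33, 0.88]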

REAL TIME PREDICTION

step-1: model development

step-2: model summary

step-3: model compile

step-4: model fit

data : x_train,y_train

epochs : 10

batch size : 64

validation

if you have separate validation data, pass it in directly

otherwise just give a fraction; the training data alone is then split into validation and train

Step-5: model evaluation

60,000 training samples: 48,000 train and 12,000 validation

10,000 test samples are evaluated as batch predictions

Step-6: model predict pass only x_test

we will get for each sample 10 probability values

apply np.argmax

compare with y_test

the accuracy should match the evaluate accuracy (about 0.88)

Step-7: Model save

.keras

.h5

Step-8: load the model

Step-9: Real time predictions

The model was trained on shape (None, 784)

(None, 28, 28): None indicates the batch size

The new image should also follow the same shape

(1, 28, 28): we are passing only one image

The new image's shape might be different

It might be a color image, not grayscale

So we first check the shape; if it is (h, w, c)

convert color to gray: (h, w)

resize down to 28×28

then reshape into (1, 28, 28)

class_names
[‘top’,
‘trouser’,
‘pullover’,
‘dress’,
‘coat’,
‘sandal’,
‘shirt’,
‘sneaker’,
‘bag’,
‘ankle boot’]
load the image

import cv2
img = cv2.imread('/content/bag.jpg')
img.shape
(1500, 1500, 3)
plt.imshow(img)

Convert into gray

gray=cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
resized_gray=cv2.resize(gray,(28,28))
resized_gray.shape
(28, 28)
plt.subplot(1,2,1).imshow(resized_gray)
plt.subplot(1,2,2).imshow(img)

resized_gray_scale=resized_gray/255
import numpy as np
sample=np.expand_dims(resized_gray_scale,axis=0)
sample.shape
(1, 28, 28)
class_names[np.argmax(loaded_model.predict(sample))]
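
The preprocessing and prediction steps above can be wrapped into a small helper for reuse (a sketch; the function name and file path are illustrative):

def predict_image(path, model, class_names):
    img = cv2.imread(path)                        # BGR image, any size
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # (h, w)
    resized = cv2.resize(gray, (28, 28)) / 255    # scale pixels to 0-1
    sample = np.expand_dims(resized, axis=0)      # (1, 28, 28)
    probs = model.predict(sample)                 # (1, 10) class probabilities
    return class_names[np.argmax(probs)]

predict_image('/content/bag.jpg', loaded_model, class_names)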

        shape `(samples, sequence_length)` to apply a different weight
        to every timestep of every sample.
        This argument is not supported when `x` is a
        `keras.utils.PyDataset`, `tf.data.Dataset`,
        `torch.utils.data.DataLoader` or Python generator function.
        Instead, provide `sample_weights` as the third element of `x`.
        Note that sample weighting does not apply to metrics specified
        via the `metrics` argument in `compile()`. To apply sample
        weighting to your metrics, you can specify them via the
        `weighted_metrics` in `compile()` instead.
    initial_epoch: Integer.
        Epoch at which to start training
        (useful for resuming a previous training run).
    steps_per_epoch: Integer or `None`.
        Total number of steps (batches of samples) before declaring one
        epoch finished and starting the next epoch. When training with
        input tensors or NumPy arrays, the default `None` means that the
        value used is the number of samples in your dataset divided by
        the batch size, or 1 if that cannot be determined.
        If `x` is a `keras.utils.PyDataset`, `tf.data.Dataset`,
        `torch.utils.data.DataLoader` or Python generator function, the
        epoch will run until the input dataset is exhausted. When
        passing an infinitely repeating dataset, you must specify the
        `steps_per_epoch` argument, otherwise the training will run
        indefinitely.
    validation_steps: Integer or `None`.
        Only relevant if `validation_data` is provided.
        Total number of steps (batches of samples) to draw before
        stopping when performing validation at the end of every epoch.
        If `validation_steps` is `None`, validation will run until the
        `validation_data` dataset is exhausted. In the case of an
        infinitely repeating dataset, it will run indefinitely. If
        `validation_steps` is specified and only part of the dataset
        is consumed, the evaluation will start from the beginning of the
        dataset at each epoch. This ensures that the same validation
        samples are used every time.
    validation_batch_size: Integer or `None`.
        Number of samples per validation batch.
        If unspecified, will default to `batch_size`.
        Do not specify the `validation_batch_size` if your data is a
        `keras.utils.PyDataset`, `tf.data.Dataset`,
        `torch.utils.data.DataLoader` or Python generator function
        since they generate batches.
    validation_freq: Only relevant if validation data is provided.
        Specifies how many training epochs to run
        before a new validation run is performed,
        e.g. `validation_freq=2` runs validation every 2 epochs.

Unpacking behavior for iterator-like inputs:
    A common pattern is to pass an iterator like object such as a
    `tf.data.Dataset` or a `keras.utils.PyDataset` to `fit()`,
    which will in fact yield not only features (`x`)
    but optionally targets (`y`) and sample weights (`sample_weight`).
    Keras requires that the output of such iterator-likes be
    unambiguous. The iterator should return a tuple
    of length 1, 2, or 3, where the optional second and third elements
    will be used for `y` and `sample_weight` respectively.
    Any other type provided will be wrapped in
    a length-one tuple, effectively treating everything as `x`. When
    yielding dicts, they should still adhere to the top-level tuple
    structure,
    e.g. `({"x0": x0, "x1": x1}, y)`. Keras will not attempt to separate
    features, targets, and weights from the keys of a single dict.
    A notable unsupported data type is the `namedtuple`. The reason is
    that it behaves like both an ordered datatype (tuple) and a mapping
    datatype (dict). So given a namedtuple of the form:
    `namedtuple("example_tuple", ["y", "x"])`
    it is ambiguous whether to reverse the order of the elements when
    interpreting the value. Even worse is a tuple of the form:
    `namedtuple("other_tuple", ["x", "y", "z"])`
    where it is unclear if the tuple was intended to be unpacked
    into `x`, `y`, and `sample_weight` or passed through
    as a single element to `x`.

Returns:
    A `History` object. Its `History.history` attribute is
    a record of training loss values and metrics values
    at successive epochs, as well as validation loss values
    and validation metrics values (if applicable).

history=model.fit(x_train,
                  y_train,
                  epochs=10,
                  batch_size=64,
                  validation_split=0.2)  # verbose defaults to 'auto' (progress bar)
history
Epoch 1/10
750/750 ━━━━━━━━━━━━━━━━━━━━ 6s 5ms/step - accuracy: 0.7241 - loss: 0.7836 - val_accuracy: 0.8367 - val_loss: 0.4306
Epoch 2/10
750/750 ━━━━━━━━━━━━━━━━━━━━ 4s 6ms/step - accuracy: 0.8324 - loss: 0.4599 - val_accuracy: 0.8507 - val_loss: 0.4033
Epoch 3/10
750/750 ━━━━━━━━━━━━━━━━━━━━ 5s 6ms/step - accuracy: 0.8435 - loss: 0.4228 - val_accuracy: 0.8665 - val_loss: 0.3542
Epoch 4/10
750/750 ━━━━━━━━━━━━━━━━━━━━ 4s 5ms/step - accuracy: 0.8543 - loss: 0.3954 - val_accuracy: 0.8694 - val_loss: 0.3419
Epoch 5/10
750/750 ━━━━━━━━━━━━━━━━━━━━ 6s 6ms/step - accuracy: 0.8602 - loss: 0.3728 - val_accuracy: 0.8692 - val_loss: 0.3504
Epoch 6/10
750/750 ━━━━━━━━━━━━━━━━━━━━ 4s 5ms/step - accuracy: 0.8575 - loss: 0.3783 - val_accuracy: 0.8783 - val_loss: 0.3293
Epoch 7/10
750/750 ━━━━━━━━━━━━━━━━━━━━ 4s 5ms/step - accuracy: 0.8673 - loss: 0.3569 - val_accuracy: 0.8798 - val_loss: 0.3173
Epoch 8/10
750/750 ━━━━━━━━━━━━━━━━━━━━ 4s 6ms/step - accuracy: 0.8696 - loss: 0.3538 - val_accuracy: 0.8834 - val_loss: 0.3164
Epoch 9/10
750/750 ━━━━━━━━━━━━━━━━━━━━ 4s 5ms/step - accuracy: 0.8735 - loss: 0.3379 - val_accuracy: 0.8810 - val_loss: 0.3247
Epoch 10/10
750/750 ━━━━━━━━━━━━━━━━━━━━ 4s 5ms/step - accuracy: 0.8790 - loss: 0.3249 - val_accuracy: 0.8898 - val_loss: 0.3008

history.history
{'accuracy': [0.7848333120346069,
0.8345624804496765,
0.8475000262260437,
0.8540208339691162,
0.8598750233650208,
0.8617916703224182,
0.8652083277702332,
0.8694999814033508,
0.8713958263397217,
0.874625027179718],
'loss': [0.5992023348808289,
0.4512609541416168,
0.41365811228752136,
0.3957699239253998,
0.37709933519363403,
0.368355929851532,
0.3608276844024658,
0.3518081605434418,
0.34145137667655945,
0.3366551995277405],
'val_accuracy': [0.8367499709129333,
0.8506666421890259,
0.8665000200271606,
0.8694166541099548,
0.8691666722297668,
0.878250002861023,
0.8798333406448364,
0.8834166526794434,
0.8809999823570251,
0.8897500038146973],
'val_loss': [0.4306168556213379,
0.4033019244670868,
0.35415950417518616,
0.3418999910354614,
0.3503606915473938,
0.3293328881263733,
0.31731531023979187,
0.3163609504699707,
0.3246942460536957,
0.3008052706718445]}
plt.plot(history.history['loss'], label='loss')
plt.plot(history.history['val_loss'], label='val_loss')
plt.legend()
plt.xlabel('epochs')
plt.ylabel('loss')
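The accuracy curves can be plotted the same way, a small sketch using the same history object:

plt.plot(history.history['accuracy'], label='accuracy')
plt.plot(history.history['val_accuracy'], label='val_accuracy')
plt.legend()
plt.xlabel('epochs')
plt.ylabel('accuracy')
plt.show()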

model evaluation on test data

step-1: model development

step-2: model summary

step-3: model compile

step-4: model fit

data : x_train,y_train

epochs : 10

batch size : 64

validation

if you have separate validation data then provide it

or just give a fraction and the training data alone is split into validation and train

Step-5: model evaluation

60,000 training samples split into 48,000 train and 12,000 validation

10,000 test samples are evaluated with batch prediction

model.evaluate(x_test,y_test)

 313 batches, each of up to 32 samples, covering the 10k test samples

313/313 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - accuracy: 0.8808 - loss: 0.3246
[0.3262534737586975, 0.8794000148773193]
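These batch counts follow directly from the sample counts and batch sizes; a quick arithmetic sketch:

import math
train_samples = int(60000 * 0.8)        # 48000 used for training (validation_split=0.2)
print(math.ceil(train_samples / 64))    # 750 training batches per epoch
print(math.ceil(10000 / 32))            # 313 evaluation batches (default batch_size=32)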
model prediction on test data

step-1: model development

step-2: model summary

step-3: model compile

step-4: model fit

data : x_train,y_train

epochs : 10

batch size : 64

validation

if you have separate validation data then provide it

or just give a fraction and the training data alone is split into validation and train

Step-5: model evaluation

60,000 training samples split into 48,000 train and 12,000 validation

10,000 test samples are evaluated with batch prediction

Step-6: model predict pass only x_test

we will get 10 probability values for each sample

apply np.argmax

compare with y_test

the accuracy should match the evaluate accuracy (0.8794)

y_pred=model.predict(x_test)
313/313 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step
np.argmax(y_pred[0])==y_test[0]
np.True_
check with an image

plt.imshow(x_test[0])
plt.colorbar()
plt.xlabel(class_names[y_test[0]])
plt.ylabel(class_names[np.argmax(y_pred[0])])
plt.show()


max_prob=[np.max(i)  for i in y_pred]
index=[np.argmax(i) for i in y_pred]
prediction_class=[class_names[i] for i in index]
Ground_Truth_class=[class_names[i] for i in y_test]

d1=pd.DataFrame(zip(max_prob,index,prediction_class,
                    Ground_Truth_class),
             columns=['Max_proba','Index','Prediction_class',
                      'Ground_Truth_class'])
con=d1[‘Prediction_class’]==d1[‘Ground_Truth_class’]
d1[‘output’]=np.where(con,1,0)
accuracy=d1[‘output’].sum()/len(d1[‘output’])
accuracy
np.float64(0.8794)
d1
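The same number can be reproduced in a couple of lines without the DataFrame, a quick vectorized sketch:

pred_labels = np.argmax(y_pred, axis=1)   # predicted class index per test sample
print((pred_labels == y_test).mean())     # 0.8794, matching model.evaluate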

model Save

step-1: model development

step-2: model summary

step-3: model compile

step-4: model fit

data : x_train,y_train

epochs : 10

batch size : 64

validation

if you have separate validation data then provide it

or just give a fraction and the training data alone is split into validation and train

Step-5: model evaluation

60,000 training samples split into 48,000 train and 12,000 validation

10,000 test samples are evaluated with batch prediction

Step-6: model predict pass only x_test

we will get 10 probability values for each sample

apply np.argmax

compare with y_test

the accuracy should match the evaluate accuracy (0.8794)

Step-7: Model save

.keras

.h5

model.save('ANN.h5')
model.save('ANN.keras')
WARNING:absl:You are saving your model as an HDF5 file via model.save() or keras.saving.save_model(model). This file format is considered legacy. We recommend using instead the native Keras format, e.g. model.save('my_model.keras') or keras.saving.save_model(model, 'my_model.keras').
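Both files should now exist in the Colab working directory; a quick check (sketch):

import os
print(os.path.exists('ANN.h5'), os.path.exists('ANN.keras'))   # True True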
model load

step-1: model development

step-2: model summary

step-3: model compile

step-4: model fit

data : x_train,y_train

epochs : 10

batch size : 64

validation

if you have separate validation data then provide it

or just give a fraction and the training data alone is split into validation and train

Step-5: model evaluation

60,000 training samples split into 48,000 train and 12,000 validation

10,000 test samples are evaluated with batch prediction

Step-6: model predict pass only x_test

we will get 10 probability values for each sample

apply np.argmax

compare with y_test

the accuracy should match the evaluate accuracy (0.8794)

Step-7: Model save

.keras

.h5

Step-8: load the model

loaded_model=tf.keras.models.load_model('ANN.h5')
loaded_model
WARNING:absl:Compiled the loaded model, but the compiled metrics have yet to be built. model.compile_metrics will be empty until you train or evaluate the model.

loaded_model.summary()
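A quick sanity check (a sketch): the reloaded model should reproduce the test-set metrics, and the native .keras file loads without the HDF5 legacy warning.

loaded_model = tf.keras.models.load_model('ANN.keras')   # native format, no legacy warning
print(loaded_model.evaluate(x_test, y_test))              # expect roughly [0.33, 0.8794]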

REAL TIME PREDICTION

step-1: model development

step-2: model summary

step-3: model compile

step-4: model fit

data : x_train,y_train

epochs : 10

batch size : 64

validation

if you have separate validation data then provide it

or just give a fraction and the training data alone is split into validation and train

Step-5: model evaluation

60,000 training samples split into 48,000 train and 12,000 validation

10,000 test samples are evaluated with batch prediction

Step-6: model predict pass only x_test

we will get 10 probability values for each sample

apply np.argmax

compare with y_test

the accuracy should match the evaluate accuracy (0.8794)

Step-7: Model save

.keras

.h5

Step-8: load the model

Step-9: Real time predictions

The model was trained on shape (None, 784)

(None, 28, 28) : None indicates the batch size

The new image should also follow the same shape

(1, 28, 28) : we are passing only one image

The new image's shape might be different

it might be a color image, not grayscale

so first check the shape; if it is (h, w, c)

convert color to gray : (h, w)

resize it down to (28, 28)

then convert into (1, 28, 28)

class_names
['top',
'trouser',
'pullover',
'dress',
'coat',
'sandal',
'shirt',
'sneaker',
'bag',
'ankle boot']
load the image

import cv2
img=cv2.imread('/content/bag.jpg')
img.shape
(1500, 1500, 3)
plt.imshow(img)

Convert into gray

gray=cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
resized_gray=cv2.resize(gray,(28,28))
resized_gray.shape
(28, 28)
plt.subplot(1,2,1).imshow(resized_gray)
plt.subplot(1,2,2).imshow(img)

resized_gray_scale=resized_gray/255
import numpy as np
sample=np.expand_dims(resized_gray_scale,axis=0)
sample.shape
(1, 28, 28)
class_names[np.argmax(loaded_model.predict(sample))]
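The whole pipeline above can be bundled into one small helper. A minimal sketch, reusing the image path, loaded_model and class_names from this notebook; the function name is just for illustration:

def predict_fashion_item(path, model, class_names):
    img = cv2.imread(path)                        # (h, w, 3) BGR image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # (h, w)
    resized = cv2.resize(gray, (28, 28)) / 255    # scale pixels to [0, 1]
    sample = np.expand_dims(resized, axis=0)      # (1, 28, 28)
    return class_names[np.argmax(model.predict(sample))]

predict_fashion_item('/content/bag.jpg', loaded_model, class_names)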
