core package

Submodules

core.data_loader module

class core.data_loader.DataLoader(featurewise_center: bool = False, featurewise_std_normalization: bool = False, zca_whitening: bool = True, rotation_range: int = 30, width_shift_range: float = 0.2, height_shift_range: float = 0.2, zoom_range: float = 0.15, shear_range: float = 0.15, horizontal_flip: bool = True, validation_split: float = 0.3)

Bases: object

API to load and augment image data.

from_common_dir(directory: str, target_size: Tuple[int, int], color_mode: str = 'rgb', batch_size: int = 32)

Takes the path to a directory and generates batches of augmented data. Use this only if the directory is common to both training and validation data.

The folder structure should be Dir/sub_dir/*.jpeg, i.e. Dir/class/*.jpeg

directory :str

Path of the image directory. It should contain one subdirectory per class. Any PNG, JPG, BMP, PPM or TIF images inside each of the subdirectories will be included in the generator.

target_size: tuple, optional

The dimensions to which all images found will be resized, by default (256,256)

color_mode: str, optional

One of “grayscale”, “rgb”, “rgba”. Whether the images will be converted to have 1, 3, or 4 channels, by default “rgb”

batch_size: int, optional

Size of batches of images to be used, by default 32

Returns

ImageGenerator : Data generator to be used with the Keras fit_generator() API
x_test : Validation data
y_test : Validation labels

Visit: https://keras.io/preprocessing/image/#flow
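A usage sketch of from_common_dir, assuming the package is importable as core; the data path is illustrative, and model is assumed to be an already-compiled Keras model:

```python
from core.data_loader import DataLoader

loader = DataLoader()  # default augmentation settings from the constructor
train_gen, x_test, y_test = loader.from_common_dir(
    "data/images",            # one subdirectory per class
    target_size=(256, 256),
    batch_size=32,
)

# The generator feeds augmented batches to the model, while
# (x_test, y_test) hold the held-out validation split.
model.fit_generator(train_gen, validation_data=(x_test, y_test), epochs=10)
```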

from_dir(directory: str, target_size: tuple = (256, 256), color_mode: str = 'rgb', classes: list = None, class_mode: str = 'categorical', batch_size: int = 32)

Takes the path to a directory and generates batches of augmented data. Use this when training and validation data are in separate directories; call this method separately to create the training and validation data generators.

directory: str

Path of the image directory. It should contain one subdirectory per class. Any PNG, JPG, BMP, PPM or TIF images inside each of the subdirectories will be included in the generator.

target_size: tuple, optional

The dimensions to which all images found will be resized, by default (256,256)

color_mode: str, optional

One of “grayscale”, “rgb”, “rgba”. Whether the images will be converted to have 1, 3, or 4 channels, by default “rgb”

classes: list, optional

list of class subdirectories (e.g. [‘dogs’, ‘cats’]). If not provided, the list of classes will be automatically inferred from the subdirectory names/structure under directory, where each subdirectory will be treated as a different class, by default None

class_mode: str, optional

One of “categorical”, “binary”, “sparse”, “input”, or None. Determines the type of label arrays that are returned, by default “categorical”

batch_size: int, optional

Size of the batches of data, by default 32

Returns

data_generator :

Data generator for the given directory

The folder structure should be Dir/sub_dir/*.jpeg, i.e. Dir/class/*.jpeg

Visit: https://keras.io/preprocessing/image/#flow_from_directory
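When training and validation images sit in separate directories, from_dir is called once per directory. A sketch, with illustrative paths and model assumed to be a compiled Keras model:

```python
from core.data_loader import DataLoader

loader = DataLoader()
train_gen = loader.from_dir("data/train", target_size=(256, 256),
                            class_mode="categorical", batch_size=32)
val_gen = loader.from_dir("data/valid", target_size=(256, 256),
                          class_mode="categorical", batch_size=32)

# Train with separate generators for training and validation data
model.fit_generator(train_gen, validation_data=val_gen, epochs=10)
```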

core.resnet module

core.resnet.build_resnet_layer(inputs, num_filters_in: int, depth: int)

Appends the desired number of residual layers to the current layers. The input should be an instance of either the Keras Sequential or Functional API with an input shape. You may use the output of this function to connect a fully connected layer or any other layer.

e.g. Input layer –> Conv2D layer –> resnet_layer –> FC layer

inputs: Tensor layer

Input tensor from previous layer

num_filters_in: int

Conv2d number of filters

depth: int

Number of residual layers to append to the current layers

Returns

Network with residual layers appended.
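A Functional API sketch of the flow described above; the input shape, filter counts, and head are illustrative, and build_resnet_layer is the function documented here:

```python
from tensorflow.keras import Input, Model, layers
from core.resnet import build_resnet_layer

inputs = Input(shape=(32, 32, 3))
x = layers.Conv2D(16, 3, padding="same")(inputs)   # initial Conv2D layer
x = build_resnet_layer(x, num_filters_in=16, depth=2)  # append residual layers
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(10, activation="softmax")(x)    # FC head
model = Model(inputs, outputs)
```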

core.resnet.build_resnet_model(input_shape: Tuple[int, int, int], depth: int, num_classes: int)

ResNet Version 2 Model builder [b]

Stacks of (1 x 1)-(3 x 3)-(1 x 1) BN-ReLU-Conv2D, also known as a bottleneck layer. The first shortcut connection per layer is a 1 x 1 Conv2D; the second and onward shortcut connections are identity. At the beginning of each stage, the feature map size is halved (downsampled) by a convolution layer with strides=2, while the number of filter maps is doubled. Within each stage, the layers have the same number of filters and the same feature map sizes.

input_shape: tuple

3D tensor shape of input image

depth: int

Number of core convolution layers. Depth should be 9n+2 for some positive integer n (e.g. 56 or 110)

num_classes: int

Number of classes

Returns

Keras Model
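The 9n+2 constraint on depth can be sanity-checked before building the model. This helper is an illustrative sketch, not part of the documented API:

```python
def is_valid_resnetv2_depth(depth: int) -> bool:
    """Return True if depth = 9n + 2 for some positive integer n."""
    return depth > 2 and (depth - 2) % 9 == 0
```

For example, 56 (n = 6) and 110 (n = 12) are valid depths, while 50 is not.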

core.resnet.build_resnet_pretrained(base_model: str = 'ResNet50V2', input_shape: Tuple[int, int, int] = None, no_classes: int = None, freeze: bool = True)

Build a ResNetV2 model using pretrained weights (e.g. trained on ImageNet). Use this if you want a transfer-learning approach. For fine-tuning, the convolution layers can be frozen so that only the fully connected layer is trained.

base_model: str, optional

Base model to be used. One of ‘ResNet50V2’, ‘ResNet101V2’, ‘ResNet152V2’, by default ‘ResNet50V2’

input_shape: tuple

3D tensor shape of the input images

no_classes: int

Number of classes to classify

freeze: bool, optional

Freeze all convolution layers and train only the fully connected layer. Keep it True for transfer learning, by default True

Returns

Keras Model

Raises ValueError

If base_model is not one of ‘ResNet50V2’, ‘ResNet101V2’, ‘ResNet152V2’

Note:

The model will also include a fully connected layer at the end.
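A transfer-learning sketch using the function above; the class count and compile settings are illustrative, not prescribed by the API:

```python
from core.resnet import build_resnet_pretrained

# Frozen ResNet101V2 backbone with a trainable FC head for 10 classes
model = build_resnet_pretrained(
    base_model="ResNet101V2",
    input_shape=(224, 224, 3),
    no_classes=10,
    freeze=True,
)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```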

core.resnet.build_resnet_pretrained_customized(base_model: str = 'ResNet50V2', input_shape: Tuple[int, int, int] = None)

Build a ResNetV2 model using pretrained weights (e.g. trained on ImageNet). Use this if you want a transfer-learning approach with a custom fully connected layer. You have to append a fully connected layer at the end using the Keras Functional API.

Example
# Base model without an FC head
X = build_resnet_pretrained_customized(input_shape=(224, 224, 3))
# Append FC layer
y = X.output
y = Flatten()(y)
y = Dense(10, activation='softmax')(y)
# Combine the base model with the FC head
model = Model(inputs=X.input, outputs=y)
# Freeze the base model's weights
for layer in X.layers:
    layer.trainable = False

base_model: str, optional

Base model to be used. One of ‘ResNet50V2’, ‘ResNet101V2’, ‘ResNet152V2’, by default ‘ResNet50V2’

input_shape: tuple

3D tensor shape of the input images

Returns

Keras Model

Raises ValueError

If base_model is not one of ‘ResNet50V2’, ‘ResNet101V2’, ‘ResNet152V2’

Note:

The model will not include a fully connected layer at the end; you have to add one yourself.

core.resnet.residual_block(X, num_filters: int, stride: int = 1, kernel_size: int = 3, activation: str = 'relu', bn: bool = True, conv_first: bool = True)

2D convolution building block used to construct residual layers: conv-bn-activation when conv_first is True, bn-activation-conv otherwise.

X: Tensor layer

Input tensor from previous layer

num_filters: int

Conv2d number of filters

stride: int, optional

Conv2D square stride dimensions, by default 1

kernel_size: int, optional

Conv2D square kernel dimensions, by default 3

activation: str, optional

Activation function to use, by default ‘relu’

bn: bool, optional

Whether to use BatchNormalization, by default True

conv_first: bool, optional

conv-bn-activation (True) or bn-activation-conv (False), by default True
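The conv_first flag only changes the order in which the three operations are applied. A minimal sketch of the two orderings (illustrative, not the library code):

```python
def block_order(conv_first: bool = True) -> list:
    """Op ordering selected by residual_block's conv_first flag."""
    if conv_first:
        return ["conv2d", "batch_norm", "activation"]
    return ["batch_norm", "activation", "conv2d"]
```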

Module contents