1. Convolutional Neural Network

First, the functions needed to build the neural network layers are imported. The first layer of the model is a Conv2D layer with 32 filters (output channels), a kernel (weight matrix) size of 5×5, an input shape matching each image (47, 67, 3), and the rectified linear unit, ReLU(x) = max(x, 0), as the activation function. The layer does not apply padding, since preserving the output tensor's spatial dimensions is not necessary for a fairly large dataset. Instead, max pooling with a 3×3 window is used to reduce the spatial size, keeping only the largest value from each 3×3 region. After that, dropout with a rate of 0.25 is applied, meaning 25 percent of the units are randomly dropped during training. A second Conv2D block is added in the same way. The matrix-formatted feature maps are then converted to a vector using the Flatten layer. Lastly, a fully connected (Dense) layer is added, followed by a dropout of 0.5, and then an output Dense layer with a softmax activation for the two classes. Calling the model's summary at the end lists the layers that were added.
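The sketch below illustrates this architecture with the Keras Sequential API. It is a minimal sketch under assumptions: the filter count of the second convolutional block (64) and the width of the fully connected layer (128) are not specified in the text and are illustrative values only.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

model = Sequential([
    # 32 filters, 5x5 kernel, no padding ("valid"), ReLU activation
    Conv2D(32, (5, 5), activation="relu", input_shape=(47, 67, 3)),
    MaxPooling2D(pool_size=(3, 3)),   # keep the maximum of each 3x3 window
    Dropout(0.25),                    # randomly drop 25% of units during training

    # Second convolutional block (filter count assumed, not given in the text)
    Conv2D(64, (5, 5), activation="relu"),
    MaxPooling2D(pool_size=(3, 3)),
    Dropout(0.25),

    Flatten(),                        # matrix-shaped feature maps -> 1D vector
    Dense(128, activation="relu"),    # fully connected layer (width assumed)
    Dropout(0.5),
    Dense(2, activation="softmax"),   # two output classes: smile / no smile
])

model.summary()                       # lists the layers added above
```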
2. Model Testing

The model generated above is saved to a Keras file named smile_dector_model. The model is compiled with the parameters described above and evaluated on accuracy. After some fine-tuning of the model's hyperparameters, the batch size, and the number of epochs, the model reached an accuracy of 99.2 percent. On the training dataset the accuracy was 99.9 percent, whereas on the testing dataset it was 90.8 percent.
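A minimal sketch of the compile, train, save, and evaluate steps is shown below, continuing from the model defined in the previous sketch. The optimizer, loss, batch size, epoch count, and the placeholder data arrays are assumptions for illustration; they are not values taken from the original project.

```python
import numpy as np

# Placeholder data purely to make the sketch runnable; in the original
# project these would be the real smile / no-smile images and one-hot labels.
X_train = np.random.rand(100, 47, 67, 3)
y_train = np.eye(2)[np.random.randint(0, 2, 100)]
X_test = np.random.rand(20, 47, 67, 3)
y_test = np.eye(2)[np.random.randint(0, 2, 20)]

# Compile with an assumed optimizer and loss, tracking accuracy
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Train; batch size and epoch count here are illustrative
model.fit(X_train, y_train, batch_size=32, epochs=5, validation_split=0.1)

# Save the trained model to a Keras file
model.save("smile_dector_model.keras")

# Evaluate on the held-out test data
test_loss, test_acc = model.evaluate(X_test, y_test)
print(f"Test accuracy: {test_acc:.3f}")
```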