Deep learning is a class of machine learning algorithms that uses multiple layers to progressively extract higher-level features from the raw input. For example, in image processing, lower layers may identify edges, while higher layers may identify concepts relevant to a human, such as digits, letters, or faces.
In machine learning, one of the most important tasks is to extract the characteristic features of an object. So how do we extract features?
Filtering, also known as sifting, is the process of eliminating what you don’t want and keeping what you do want (see Figure 2-21). It is one of the most effective ways to extract features from an object. Just as many scientific laws assume an ideal environment, filtering places features in a near-ideal environment, making them easier to analyze and generalize into laws.
Filtration is ubiquitous in everyday equipment: coffee-pot filters, water filters, various filter papers, and so on.
In optics, various filters are used to pass or block light of specific wavelengths.
In signal processing, there are high-pass filters, which remove the low-frequency part of a signal; low-pass filters, which remove the high-frequency part; and band-pass filters, which remove the components outside a given frequency band to obtain the desired signal.
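As an illustration, a moving average is one of the simplest low-pass filters: averaging neighboring samples suppresses rapid (high-frequency) fluctuations while preserving the slow trend. A minimal sketch in plain Python (the window size and the toy signal are illustrative choices, not from the text):

```python
def moving_average(signal, window=3):
    """Simple low-pass filter: each output sample is the mean of a
    sliding window, which smooths out high-frequency fluctuations."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo = max(0, i - half)
        hi = min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

# A noisy signal: a slow upward ramp plus fast alternating noise.
noisy = [i + (1 if i % 2 == 0 else -1) for i in range(8)]
smooth = moving_average(noisy)
```

After filtering, `smooth` follows the underlying ramp much more closely than `noisy` does, which is exactly the low-pass behavior described above.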
In computer science, filters are even more widely used, for example to process digital signals or to filter out spam.
In the field of deep learning, algorithms extract the features of an object automatically. During feature extraction, filters, similar to those in optics, are combined with the convolution operation to finally obtain the features of an object.
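A minimal sketch of how a filter plus convolution extracts a feature: sliding a small vertical-edge kernel over a tiny image produces a strong response exactly where the brightness changes. The kernel and image below are made-up toy values, not taken from the figures:

```python
def conv2d(image, kernel):
    """Valid 2D convolution (strictly, cross-correlation, as in most
    deep learning libraries): slide the kernel over the image and sum
    the elementwise products at each position."""
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    out = []
    for y in range(ih - kh + 1):
        row = []
        for x in range(iw - kw + 1):
            s = 0
            for dy in range(kh):
                for dx in range(kw):
                    s += image[y + dy][x + dx] * kernel[dy][dx]
            row.append(s)
        out.append(row)
    return out

# A 4x4 "image" whose left half is dark (0) and right half bright (1).
image = [[0, 0, 1, 1]] * 4
# A simple vertical-edge filter: responds where brightness changes left to right.
edge_filter = [[-1, 1],
               [-1, 1]]
response = conv2d(image, edge_filter)
```

The response is zero over the flat regions and large along the dark-to-bright boundary, so the filter has extracted the "vertical edge" feature from the image.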
Figure 2-21 How to extract features?
Let’s look at the schematic diagram of a deep learning system. Figure 2-22 illustrates the principle and process of deep learning, taking the building of a car-recognition model as an example.
We take the collected big data and label each image for classification. First, we normalize the dataset, for example to 256×256×3 images. (Normalization rescales one or more attributes to the range 0 to 1, so that the largest value of each attribute becomes 1 and the smallest becomes 0.) Next, we use filter matrices to sample the data, reducing the amount of data and improving processing efficiency, and then apply the convolution operation to extract features. After the features are extracted, pooling further downsamples them. (The pooling layer reduces the resolution of the feature map while retaining the features required for classification.) This process can be repeated N times as needed. After the feature processing, the data enters the fully connected layers, which relate the features to the output classification, and we obtain the probability of each class, that is, the likelihood. In general, a class with a probability of 90% or higher can be taken as the recognized classification.
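The three operations described above, min-max normalization, pooling, and turning the fully connected layer’s scores into class probabilities, can each be sketched in a few lines of plain Python. The input values here are made-up toy examples:

```python
import math

def normalize(values):
    """Min-max normalization: rescale values to the range [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def max_pool2x2(feature_map):
    """2x2 max pooling: halve the resolution of a feature map while
    keeping the strongest activation in each 2x2 block (assumes the
    height and width are even)."""
    return [[max(feature_map[y][x], feature_map[y][x + 1],
                 feature_map[y + 1][x], feature_map[y + 1][x + 1])
             for x in range(0, len(feature_map[0]), 2)]
            for y in range(0, len(feature_map), 2)]

def softmax(logits):
    """Turn raw output scores into probabilities that sum to 1."""
    exps = [math.exp(v - max(logits)) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

pixels = normalize([0, 64, 128, 255])      # e.g. 8-bit pixel values
pooled = max_pool2x2([[1, 3, 2, 0],        # a toy 4x4 feature map
                      [4, 2, 1, 1],
                      [0, 0, 5, 6],
                      [1, 2, 7, 8]])
probs = softmax([2.0, 0.5, 0.1])           # toy scores for three classes
```

Note how pooling shrinks the 4×4 map to 2×2 while preserving the strongest responses, and how softmax concentrates most of the probability on the class with the highest score.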
Figure 2-22 Deep learning algorithm
For better model building in deep learning, the data is divided into three sets; see Figure 2-23.
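A common way to perform this split is to shuffle the samples and slice them by ratio. The 70/15/15 split below is an illustrative choice, not one prescribed by the text:

```python
import random

def split_dataset(samples, train=0.7, val=0.15, seed=0):
    """Shuffle the samples and split them into training, validation,
    and test sets; whatever the first two ratios leave over becomes
    the test set."""
    items = list(samples)
    random.Random(seed).shuffle(items)  # fixed seed for reproducibility
    n = len(items)
    n_train = int(n * train)
    n_val = int(n * val)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

train_set, val_set, test_set = split_dataset(range(100))
```

Shuffling before splitting matters: if the data were sorted by class, an unshuffled split could leave some classes entirely out of the training set.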
Figure 2-23 Data set classification
Figure 2-24 Model training process
The model’s training can begin after the dataset is divided into the training set, validation set, and test set. First, input the labeled training data, perform feature extraction, and build the model. Next, feed the validation set into the trained model, check whether its outputs are correct, and compute the accuracy. Repeat this process to continuously optimize the training parameters, including the selection of filters, until satisfactory results are achieved. Figure 2-24 shows the model training process and how to use the trained model to run inference on input images.
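The train-then-validate loop can be sketched with a toy one-parameter classifier. A real deep learning model has millions of parameters and a very different update rule, but the control flow, update on the training set, measure accuracy on the validation set, stop when it is satisfactory, is the same:

```python
def accuracy(predict, dataset):
    """Fraction of (input, label) pairs the model classifies correctly."""
    correct = sum(1 for x, label in dataset if predict(x) == label)
    return correct / len(dataset)

def train(train_set, val_set, epochs=20, target=0.9):
    """Toy training loop for a single-threshold classifier."""
    threshold = 0.0
    for epoch in range(epochs):
        # "Training" step: nudge the threshold toward the midpoint
        # between the two classes' mean inputs.
        ones = [x for x, y in train_set if y == 1]
        zeros = [x for x, y in train_set if y == 0]
        midpoint = (sum(ones) / len(ones) + sum(zeros) / len(zeros)) / 2
        threshold += 0.5 * (midpoint - threshold)
        predict = lambda x, t=threshold: 1 if x >= t else 0
        # Validation step: stop once accuracy is satisfactory.
        if accuracy(predict, val_set) >= target:
            break
    return predict

train_set = [(x, 0) for x in [1, 2, 3]] + [(x, 1) for x in [7, 8, 9]]
val_set = [(2, 0), (8, 1)]
model = train(train_set, val_set)
```

The test set, which the loop never touches, would then give an unbiased final estimate of the model’s accuracy.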
From https://en.wikipedia.org/wiki/Deep_learning, Oct 18, 2021