Notes on "Practical deep learning": launching
Chapter 2: Production
Start with the objective when designing your model.
For example, Google started with the question "What is the user's main objective in typing in a search query?", which led to the objective "show the most relevant search result."
The next step is to determine what actions (levers) you can take to achieve the objective. In Google's case, it was the ranking of search results.
The third step is determining what new data you need to support those actions. In Google's case: what new data would they need to produce such a ranking?
They realized that the implicit information about which pages linked to which other pages could be useful for this purpose.
Only after these three steps do we begin thinking about building predictive models. The objective, the levers, the data we have, and the data we need determine which models we can build. The models take the levers and the uncontrollable variables as their inputs; their outputs can be combined to predict the final state of the objective (see the sketch below).
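A minimal sketch of that modelling step, with hypothetical names (`lever`, `context`, `models`, and the plain-sum combination rule are all assumptions, not anything from the book): each model predicts one uncertain quantity from a lever setting plus the uncontrollable context, and a combiner scores each possible action so we can pick the best one.

```python
# Hypothetical sketch: score every lever setting by combining model outputs,
# then pick the action that best serves the objective.
def expected_objective(lever, context, models):
    # Each model takes the lever (controllable) and context (uncontrollable) as input.
    predictions = [model(lever, context) for model in models]
    # How predictions combine into one objective score is problem-specific;
    # a plain sum is just a placeholder.
    return sum(predictions)

def best_action(levers, context, models):
    # Choose the lever setting with the highest predicted objective.
    return max(levers, key=lambda lever: expected_objective(lever, context, models))
```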
Recap: answers to the chapter questionnaire
- If the images for the bear classifier come from a video game, the app won't work well, because the model was trained on photos of real bears.
- Text models cannot truly understand semantics, which can lead to confident but incorrect predictions.
- Text models could be used to troll the internet and spread disinformation at scale.
- Using AI doesn't guarantee 100% correct results, which is especially critical for medical reports. One solution is to keep humans in the loop: high-confidence predictions can be automated, while low-confidence ones are checked manually.
- Deep learning is good at high-cardinality tabular data. High cardinality means a column contains a large number of discrete values (e.g., zip codes or product IDs).
- Recommendation systems work from user-generated behavior: they recommend products that similar users bought. But that only tells us what a user might like, not what would be helpful—users may already know those products, or may want something nobody similar has picked yet.
- Drivetrain approach: a) define an objective; b) define the actions (levers) you can take to reach it; c) define the data you need to support those actions; d) build models that combine levers and data to predict the objective.
- Drivetrain approach for a video recommendation system: a) objective: keep users on the platform; b) actions: recommend videos liked by people with similar tastes; c) data: collect which videos each user liked.
- The DataLoaders class prepares raw, unprepared data for training.
- To create DataLoaders (via a DataBlock), we need to tell fastai four things: a) what kinds of data we are working with (e.g., images in, categories out); b) how to get the items (from files, a CSV, or a database); c) how to label the items (e.g., by filename or folder); d) how to create a validation set (e.g., split items randomly with a fixed seed). See the DataBlock sketch after this list.
- The splitter parameter of DataBlock describes how to split the items list into training and validation sets.
- RandomSplitter, for example, takes two arguments: the percentage of items to put in the validation set and a seed so the random split is reproducible.
- By convention, x denotes the independent variable (the input) and y the dependent variable (the target).
- When images have different sizes, we need to resize them to a common size—by cropping, padding, or squishing.
- Data augmentation creates variants of each item by applying transformations (rotation, flipping, brightness changes, etc.). It effectively increases the size of the training set.
- Item transforms apply a function to each item individually; batch transforms apply a function to a whole batch at once (typically on the GPU). See the augmentation sketch after this list.
- The confusion matrix tabulates actual versus predicted categories on the validation set. It helps debug both the model and the data we used for training (see the interpretation sketch after this list).
- The export function saves the architecture and the trained model parameters together, along with the definition of how the DataLoaders were created.
- Using a model to make predictions, rather than training it, is called inference (see the export/inference sketch after this list).
- IPython widgets (ipywidgets) are GUI components that combine JavaScript and Python code in the browser (see the widget sketch after this list).
- We may use a CPU in production, since we typically handle one item at a time; for training we use a GPU to accelerate the process.
- If the model runs on a server, there is a risk of handling confidential user data, and network latency may slow down predictions.
- For a bear classifier, the camera might capture bears from angles missing from the training data, it may not work with night-vision footage, and it may have a lower resolution than the training photos.
- Out-of-domain data is data seen at inference time that is different in kind from anything the model saw during training.
- Domain shift is when the type of data the model sees changes gradually over time, so the training data no longer represents production data.
- Three deployment steps: a) export the model for inference; b) create a UI; c) host the model.
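The four DataBlock choices from the DataLoaders bullets above, as a sketch in fastai. The `bears/` folder path is an assumption; the pattern follows the book's bear-classifier example.

```python
from fastai.vision.all import *

bears = DataBlock(
    blocks=(ImageBlock, CategoryBlock),               # a) kinds of data: images in, categories out
    get_items=get_image_files,                        # b) how to get items: image files on disk
    get_y=parent_label,                               # c) how to label: from the parent folder name
    splitter=RandomSplitter(valid_pct=0.2, seed=42),  # d) validation set: random 20% split, fixed seed
    item_tfms=Resize(128),                            # bring all images to a common size
)
dls = bears.dataloaders(Path("bears"))                # assumed local folder of bear photos
```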
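Item versus batch transforms, continuing the same sketch: item transforms run per image before batching, while batch transforms run on whole batches at once (typically on the GPU). `aug_transforms` is fastai's standard augmentation set.

```python
# Reuse the DataBlock above with different transforms.
bears = bears.new(
    item_tfms=RandomResizedCrop(128, min_scale=0.3),  # per item: crop a random area of each image
    batch_tfms=aug_transforms(mult=2),                # per batch: flips, rotations, warps, brightness
)
dls = bears.dataloaders(Path("bears"))
```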
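Inspecting results with a confusion matrix, assuming a trained learner named `learn` (e.g. from `vision_learner(dls, resnet18, metrics=error_rate)` followed by `fine_tune`):

```python
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix()      # rows: actual category, columns: predicted category
interp.plot_top_losses(5, nrows=1)  # the validation items the model got most wrong
```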
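Export and inference, following the chapter; the image filename is a placeholder:

```python
learn.export("export.pkl")   # saves architecture + parameters + DataLoaders definition

# Later, in production (a CPU is fine for one item at a time):
learn_inf = load_learner("export.pkl")
pred_class, pred_idx, probs = learn_inf.predict("some_bear_photo.jpg")  # placeholder file
print(pred_class, probs[pred_idx])
```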
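A sketch of the ipywidgets GUI, assuming the `learn_inf` learner above and ipywidgets 7 as used in the book (in ipywidgets 8 the upload API is `uploader.value[0].content` instead of `uploader.data[-1]`):

```python
import ipywidgets as widgets
from IPython.display import display
from fastai.vision.all import PILImage

uploader = widgets.FileUpload()
btn = widgets.Button(description="Classify")
out = widgets.Output()

def on_click(_):
    img = PILImage.create(uploader.data[-1])  # last uploaded image (ipywidgets 7 API)
    pred, pred_idx, probs = learn_inf.predict(img)
    with out:
        out.clear_output()
        print(f"Prediction: {pred}; probability: {probs[pred_idx]:.4f}")

btn.on_click(on_click)
display(uploader, btn, out)
```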
Objective: help users understand a text in depth without being distracted.
Action: track every word the user encounters; fetch its definition and insert it into the context. If it's a link, fetch a summary of the target and add it.
Data: track which words the user already knows and how often they read; keep annotating a term until its frequency of encounters stops increasing (a sketch follows below).
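A hypothetical sketch of that tracking logic; `FAMILIAR_AFTER` and all other names are assumptions, just to make the idea concrete:

```python
from collections import Counter

encounters = Counter()   # term -> how many times the user has seen it
known = set()            # terms the user has marked as known
FAMILIAR_AFTER = 5       # assumed threshold: stop annotating after this many encounters

def should_annotate(term: str) -> bool:
    """Inline a definition only while the term is still unfamiliar."""
    encounters[term] += 1
    return term not in known and encounters[term] <= FAMILIAR_AFTER
```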
What should happen if this launched at scale and everyone on the internet used it?