Artificial Intelligence (AI) Lifecycle

Sep 01, 2022

In this video we will outline the AI lifecycle and its elements. The AI lifecycle can be thought of in three stages: project scoping, building the model, and deploying it to production. The six steps in the typical data science and AI process are explained: asking the right questions, conducting research, choosing appropriate methods, validating assumptions, testing, and interpreting conclusions. Deployment of machine learning models is also discussed, including when to deploy and the different methods of deployment: on-call systems, on-demand systems, and on-edge systems.

As you start to think through implementing AI within your organization, you need to take the time to manage it and scope it appropriately. The AI lifecycle can be thought of in three stages: project scoping, building the model, and deploying it to production. We'll start by talking about Project scoping and Building the model. This diagram shows the typical data science and AI process. Of these six steps, the first, Ask, and the last, Interpret, are the ones where you're most likely to be involved.

There are various iterations of this process that you may have seen before, but they're all similar. One thing to note here is that it's common to move back and forth between the steps as you work through the project. This process works in conjunction with the AI lifecycle. We will revisit and dive deeper into these steps in the next few videos. [Video description begins] The process includes six steps: 1. Ask. What is the problem(s) we need to solve? 2. Research. What data do we need and how do we get it? 3. Model. Which method(s) is appropriate to use? 4. Validate. Do the model and assumptions work as expected? 5. Test. How does the model generalize to real-world data? and 6. Interpret. How can we use the conclusions in the real world? [Video description ends] Now that we've discussed a high-level overview of Project scoping and Building the model, let's move on to deploying it to production.
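The six steps listed above can be sketched as a simple ordered checklist that a team iterates over; this is an illustrative data structure, not part of any particular framework, and the names simply mirror the process in the video description.

```python
# The six steps of the typical data science and AI process, as an
# ordered checklist. Moving back and forth between steps is expected,
# so this is a reference list rather than a strict pipeline.
STEPS = [
    ("Ask", "What is the problem we need to solve?"),
    ("Research", "What data do we need and how do we get it?"),
    ("Model", "Which method is appropriate to use?"),
    ("Validate", "Do the model and assumptions work as expected?"),
    ("Test", "How does the model generalize to real-world data?"),
    ("Interpret", "How can we use the conclusions in the real world?"),
]

# The step names, in order, for quick reference.
step_names = [name for name, _ in STEPS]
```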

Machine learning models are most often developed against a sample of labeled data, often on an analyst's local machine or on a series of dedicated training servers. Deployment allows you to integrate a machine learning model into an existing production environment to make practical business decisions based on data at scale. You deploy a model when you need to make automated predictions on previously unseen data.

You don't need to deploy a model if predictions don't need to be automated, if predictions are for a one-time analysis, or if predictions are not accurate or useful. The reason you need to keep these things in mind is that it can take significant resources to deploy these models, so make sure you have a clear understanding of how you will use them once they're developed. There are several ways to deploy machine learning algorithms, and the method you choose will depend on the use case.

The methods differ in their architecture and can generally be categorized into one of three types. On-call systems are designed to provide predictions on an occasional basis. The model is called to make predictions on groups of observations at one time. The predictions are then stored in a database for downstream use cases.
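A minimal sketch of the on-call pattern, assuming a placeholder `predict` function and an in-memory dictionary standing in for the downstream database; in a real system these would be a trained model and a persistent store.

```python
def predict(observation):
    # Placeholder model: flag observations whose value exceeds a threshold.
    # A deployed system would load and call a trained model here.
    return observation["value"] > 10

def run_batch_job(observations, database):
    # On-call: the model is called on a group of observations at one time...
    predictions = [predict(obs) for obs in observations]
    # ...and the predictions are stored for downstream use cases.
    for obs, pred in zip(observations, predictions):
        database[obs["id"]] = pred
    return database

# Usage: score an occasional (e.g. nightly) batch and persist the results.
db = {}
batch = [{"id": 1, "value": 7}, {"id": 2, "value": 15}]
run_batch_job(batch, db)
```

The key design point is that nothing needs to stay running between batches: the job wakes up occasionally, scores the whole group, writes the results, and exits.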

In on-demand systems, prediction services are always available and can be provided in real time. They typically make predictions one at a time, hence the name on-demand. With on-edge systems, prediction services are deployed on the device itself, without connecting to any external services or applications. These systems sit on the edge of the Internet of Things, so think of devices like thermostats, robot vacuums, phones, and the web browsers you use most often.
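To contrast this with batch scoring, here is a hedged sketch of the on-demand pattern: a request handler that stays available and answers each request immediately, one observation at a time. The `handle_request` and `predict` names are illustrative, not a real serving API.

```python
def predict(observation):
    # Placeholder model, standing in for a trained model loaded at startup.
    return observation["value"] > 10

def handle_request(observation):
    # On-demand: the service is always up and returns a prediction
    # in real time for a single observation, rather than storing
    # results for later like a batch job would.
    return {"id": observation["id"], "prediction": predict(observation)}

# Usage: each incoming request is answered immediately.
result = handle_request({"id": 42, "value": 3})
```

An on-edge system would run the same kind of `predict` call, but with the model shipped onto the device itself so no network round trip to an external service is needed.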