Imagine a fitness app where a virtual assistant helps users reach their goals and stay motivated. For the assistant to be engaging, it must give actionable, personalized advice and adjust the plan to the user's actual progress. In other words, a good coach must know its coachee.
We will show you how you can use machine learning to build an automated coach that understands the habits of its users, by getting insights on their behaviors through individual predictive models. We will describe how to extract relevant insights from models through explainable AI (XAI) techniques, and how to use them to provide a personalized coaching experience.
ML Models
With machine learning, you can generate models and make predictions based on them. From tracking data of someone’s meals, you can generate a model that predicts how much they may eat at their next dinner. But individual models can be used for much more than forecasts. They are a representation of an expected behavior. As such they can be the foundation of an automated coach that understands its users, and uses this understanding to give them personalized challenges and actionable advice.
The method presented here requires tracking data on someone’s behavior, on a quantity that they may need help to decrease or increase: for example, calorie intake minus calories burned from a fitness tracking app, power consumption from a smart power plug, sleep times, or app usage times.
We will use predictive models based on this tracking data to get insights on the progress of a user. Consider the two following examples.
- Fitness: We have tracking data on a user’s calorie intake minus calorie burn. We want to help them reach diet goals. We generate a model that predicts their hourly calorie balance depending on the available context: time of day, day of the week, weather and temperature, whether a day is a holiday, etc.
- Home energy: We have data on a user’s home electricity consumption. We want to help them reduce their electricity use. We generate a model that predicts the home’s daily or hourly energy use depending on the available context, as above.
From these models, we can identify conditions in which the user is making progress or stalling, and anticipate the effects of the current situation on the big picture. With one model per user, the assistant can devise a personalized, adaptive plan for each of them.
In this article, we will focus on how to get insights from individual predictive models, so we will assume that you already have the data and the models. If you only have the data, we can’t help but recommend our own shop to generate and manage individual machine learning models: Craft AI.
Coaching insights
When the coach interacts with the user, it uses the model to find insights on their progress. Then from these insights, it generates personalized messages or images.
Here are some of the possible insights, how to retrieve them from predictive models, and how they can be used in practice in a coaching app.
Set personalized goals
Set a quantified goal for the following week that is both challenging and reachable.
How: Add up predictions for the following week to get a baseline of what the user can be expected to do. From there, set a reasonable improvement goal. For example a flat 5% difference, or a slight increase over the improvement achieved the previous week.
Making predictions for the following week requires estimating the model’s input features, for example using weather forecasts. For this step you will need a model based only on features that can be estimated in advance. But as we will see below, some other insights can be gained from past data alone, in which case any feature with information on the context will be useful.
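As a minimal sketch of this goal-setting step: sum the predictions to get a baseline, then apply the improvement factor. The hourly values below are toy numbers standing in for real model output.

```python
# Toy hourly calorie-balance predictions for the following week;
# in practice these would come from the user's individual model.
predicted_hourly_balance = [120.0, -80.0, 200.0, 150.0, -30.0]

# Baseline: what the user can be expected to do next week.
baseline = sum(predicted_hourly_balance)

# Goal: a flat 5% improvement (here, a 5% lower calorie balance).
goal = baseline * 0.95

print(f"baseline: {baseline:.0f} kcal, goal: {goal:.0f} kcal")
```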
Anticipate problem areas to work on, foster good behavior
Anticipate when users are likely to make mistakes.
How: Find when predictions for the following week take their most extreme values, to identify when the user can make the most significant improvements, or conversely when they are likely to be on the right track.
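A sketch of this lookup, assuming you already have per-slot predictions for the coming week (the slot labels and values below are made up):

```python
import numpy as np

# Made-up hourly calorie-balance predictions for a few time slots
# next week; real values would come from the user's model.
slots = ["Mon 19h", "Tue 12h", "Wed 20h", "Thu 08h", "Fri 21h"]
predictions = np.array([250.0, -40.0, 310.0, -120.0, 180.0])

# Most extreme predictions: where to warn, and where to encourage.
riskiest_slot = slots[int(np.argmax(predictions))]
best_slot = slots[int(np.argmin(predictions))]
```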
Point out unusual behavior
Raise an alert when going off track.
How: Compare predictions with recent data, to find when the user went significantly over or under the model’s predictions. Then you can look for common conditions in which this happens: For example, find out which type of activity or which day of the week occurs the most when data is over predictions. This can help the user know what triggers undesirable behavior.
Here, you don’t need to predict future behavior, so it is possible to use a model based on features that you cannot estimate in advance.
Anything that can label data points can be used to find commonalities between times when the user went beyond predictions, even if the labels are not used in the predictive model itself.
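One way to sketch this in code: compute residuals between actual data and predictions, flag the points that go significantly over, and look for the most common label among them. The data below is invented, and the 1.0 threshold is an arbitrary example.

```python
import pandas as pd

# Invented daily data: actual vs. predicted consumption, labeled by weekday.
df = pd.DataFrame({
    "day":       ["Mon", "Tue", "Wed", "Thu", "Fri", "Mon", "Tue"],
    "actual":    [2.1, 3.5, 2.0, 2.2, 2.2, 2.1, 3.6],
    "predicted": [2.1, 2.1, 2.1, 2.2, 2.1, 2.1, 2.1],
})
df["residual"] = df["actual"] - df["predicted"]

# Days where the user went significantly over the model's expectations.
flagged = df[df["residual"] > 1.0]

# The most frequent condition among those days hints at a trigger.
trigger_day = flagged["day"].mode()[0]
```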
Explain habits and find effective actions
Help the user understand their own behavior and how to make improvements.
How: Find which variables/features affect the model’s prediction the most, to show which changes to a variable can have a significant impact globally.
This can be a way to find which actions are the most effective, to give advice on what the user can do to make improvements. For example in a fitness application, you can use as a feature in the model whether a type of physical activity was done on a given day. This lets you see its impact on the prediction, to find out which type of activity is associated with the best results. This step doesn't require forecasting, so you don't need to be able to estimate these features in the future.
With decision tree models, you can directly see which decision rules have the most impact: they are the ones closest to the root node. With linear regression models, you can use the coefficients. Or you can use a model-agnostic method to find important variables, like those presented in this online book on interpretable machine learning.
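As an illustration of the model-agnostic route, here is a sketch using scikit-learn's permutation importance on synthetic data. The feature names and the relationship in the data are invented: the third feature is built to drive the target, standing in for a real behavioral signal.

```python
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.tree import DecisionTreeRegressor

# Synthetic tracking data: "did_sport" (third column) is constructed
# to be the main driver of the target.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 3.0 * X[:, 2] + 0.1 * X[:, 0] + rng.normal(scale=0.1, size=200)

model = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X, y)

# Permutation importance: how much the score drops when a feature
# is shuffled, i.e. how much the model relies on it.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
features = ["temperature", "hour", "did_sport"]
ranked = sorted(zip(features, result.importances_mean), key=lambda t: -t[1])
```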
Show the impact of context changes
This is similar to the previous point, but applied locally: small changes around a single prediction for the current situation. This tells you the effect of having done things differently at a specific time.
How: Look at variations of the prediction when making small changes on variables one by one, compared to the current situation.
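Sketched in code, with a hypothetical `predict` function standing in for the user's model:

```python
# Hypothetical stand-in for the user's model: predicted consumption
# rises when it is cold and late in the day.
def predict(temperature, hour):
    return 2.0 + 0.1 * max(0.0, 18.0 - temperature) + 0.05 * hour

# Current situation, and the one-at-a-time nudges to probe.
current = {"temperature": 10.0, "hour": 20}
base = predict(**current)

effects = {}
for feature, delta in [("temperature", 1.0), ("hour", 1)]:
    tweaked = dict(current)
    tweaked[feature] += delta
    # Local effect of nudging this feature, all else kept equal.
    effects[feature] = predict(**tweaked) - base
```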
Show conditions in which the “worst” behavior occurs
Identify the worst offenders.
How: Find conditions in which the model makes extreme predictions.
With decision trees, you can look at tree nodes whose predicted values are in the top 10%. This gives you areas of high consumption, described in terms of the important variables. For example, this could tell you that high consumption happens between Tuesday and Thursday when the temperature is between X and Y.
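A sketch of that leaf inspection with a scikit-learn tree, on synthetic data where the hour-of-day pattern is invented:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Synthetic hourly data where consumption jumps after 19h.
rng = np.random.default_rng(1)
X = rng.uniform(0, 24, size=(300, 1))  # hour of day
y = np.where(X[:, 0] > 19, 5.0, 1.0) + rng.normal(scale=0.2, size=300)

tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y)

# Collect leaf nodes and keep those whose predicted value is in the
# top 10%: these are the "high consumption" regions of the tree.
t = tree.tree_
leaves = [i for i in range(t.node_count) if t.children_left[i] == -1]
leaf_values = np.array([t.value[i][0][0] for i in leaves])
threshold = np.quantile(leaf_values, 0.9)
hot_leaves = [n for n, v in zip(leaves, leaf_values) if v >= threshold]
```

The decision path from the root to each of these leaves gives the human-readable conditions (e.g. "hour > 19") to report to the user; `sklearn.tree.export_text` can help display them.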
Explain variations
Talking about changes rather than absolute values emphasizes what happened recently, and lets the user think in bite-sized incremental improvements.
How: Apply some of the methods described above to changes from one day to the next, instead of applying them to a day’s consumption. Was an increase unusual? Or if it was expected, which features account for the increase the most?
You can use a model that predicts consumption as above, and look at the difference between the prediction for two consecutive days. Or you can train a model that learns changes from one period to the next.
To find which features explain a change that is consistent with the predictions of a consumption model: with decision trees, you can look at which decision rules saw a significant change. Did most data points on day 1 match a decision rule, but not on day 2? Then this decision rule may describe a boundary between the behaviors of those two days. With linear regression, you can find out which variable had the largest change in its contribution to the prediction (its coefficient times its value).
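For the linear regression case, the day-to-day shift decomposes feature by feature, since each feature contributes its coefficient times its value. The coefficients and feature values below are invented:

```python
import numpy as np

# Invented linear model coefficients for three features.
features = ["temperature", "daylight_hours", "heating_on"]
coef = np.array([0.3, -0.5, 2.0])

# Feature values on two consecutive days.
day1 = np.array([12.0, 9.0, 0.0])
day2 = np.array([11.0, 8.5, 1.0])

# Change in each feature's contribution to the prediction.
contribution_change = coef * (day2 - day1)
driver = features[int(np.argmax(np.abs(contribution_change)))]
```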
Continuous learning
We described how to extract several insights on the user’s progress from an individual predictive model. In practice, this requires a constant flow of data. The model must be updated frequently, so that the insights take the user’s recent behavior into account. The model should also disregard old data to account for evolving habits (concept drift).
A simple way to do this is with a time window of a few weeks or months on the data. As the model represents the user’s behavior during that time period, the time window should be long enough for the model to learn habits, but short enough for it to be able to evolve.
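A minimal sketch of that sliding window, assuming timestamped tracking data in a pandas DataFrame; the eight-week window length is an arbitrary example:

```python
import pandas as pd

WINDOW = pd.Timedelta(weeks=8)

def training_window(df, now):
    """Keep only the rows inside the sliding window ending at `now`."""
    return df[df["timestamp"] > now - WINDOW]

# Toy data: 100 days of daily tracking values.
df = pd.DataFrame({
    "timestamp": pd.date_range("2023-01-01", periods=100, freq="D"),
    "value": range(100),
})

# At retraining time, fit the model on recent data only.
recent = training_window(df, pd.Timestamp("2023-04-10"))
```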
Conclusion
Predictive models get you more than predictions. They can give you insights on the behavior behind the data, thanks to explainable AI methods. With individual tracking data, these insights let you build a personalized coach that understands its users, and is therefore more engaging and useful.
Here at Craft AI we leverage machine learning for real-life use cases every day, and we see a real appetite for explainable predictive models and their creative applications.
We presented a method to build an automated coach that focuses on a single topic. A natural next step could be to duplicate this approach on several subjects for the same virtual assistant, in which case it would also need to be able to choose the most relevant topic to talk about in a personalized way.
Once again, it is essential to understand the user’s habits and preferences to provide them with a personalized experience, and explainable machine learning models can give you the necessary insights.