The Harvard Business Review reports that acquiring a new customer is 5 to 25 times more expensive than retaining an existing one. Research also shows that increasing customer retention by just 5% can increase profits by 25% to 95%.
Customers of consumer businesses such as banking, telecom, utilities, and insurance are constantly influenced by factors such as competitors' rates and promotions. These influences shape a customer's decision to stay or switch. A customer may call to express their doubts, but their mind is probably already made up by the time they get to speak to a customer service representative. The business is left reacting to the situation instead of getting ahead of it.
Imagine a world where you can predict which customer is likely to leave and why. A company’s efforts in retention could be proactive and focused.
This is where AI can help. What it can do that traditional analytics can't is discern such behavioural characteristics and patterns from your customer data and reliably predict the risk of a customer leaving.
But before we start, let's define two important terms: data and prediction.
“Data”, in the context of a telecom company, could be demographic data such as home address, work address, occupation, income, age, and gender. It could be purchase data such as the type of service, the value of the purchase, the date of purchase, the payment type, the purchase channel (e.g. online or in a shop), the options bought, etc. The data may come from the use of services, such as the number of calls (outgoing, incoming, local, international, roaming), text messages, Wi-Fi vs. network use, total minutes, etc. The data could come from billing, such as the total bill amount, the amounts for voice calls, text, or other services (e.g. data use or 1-800 calls), or the number of days taken to pay. Further, customer relationship data can be gathered, such as the number of interactions through the call centre, retail shop visits, the website, or the mobile application, the total number of complaints, the complaint types, and the time taken to resolve each issue. Data can arrive as a file or from an online source, a database, or a machine controller, in formats such as text, numbers, images, audio, video, etc.
For the purpose of this note, an open public dataset was downloaded from Kaggle and used.
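As a concrete illustration, here is a minimal sketch of loading and profiling such data with pandas. The frame below is a tiny synthetic stand-in for the Kaggle dataset; its column names and values are assumptions for illustration, not the dataset's actual schema.

```python
import pandas as pd

# Tiny synthetic sample in the spirit of the public telco churn
# dataset on Kaggle (column names here are illustrative assumptions).
df = pd.DataFrame({
    "tenure":         [1, 34, 2, 45, 8],
    "MonthlyCharges": [29.85, 56.95, 53.85, 42.30, 99.65],
    "Contract":       ["Month-to-month", "One year", "Month-to-month",
                       "One year", "Month-to-month"],
    "Churn":          ["No", "No", "Yes", "No", "Yes"],
})

# Summary statistics, analogous to the platform's auto-computed Stat view.
print(df.describe(include="all").loc[["count", "mean"]])
```

In practice the real CSV would be read with `pd.read_csv(...)` instead of being built inline.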
“Prediction” is the output of a machine learning algorithm that has been trained on historical data. Such algorithms build a model which, when applied to new data, can reliably forecast the likelihood of an outcome. For example: what is the chance of an existing customer leaving in the next 30 days? These models generate estimated values for unknown variables in each record of the new data by learning patterns among the known variables and calculating what the unknown value is likely to be.
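A minimal sketch of this idea, using scikit-learn's logistic regression on toy data (all figures are invented for illustration): once trained on history, the model turns a new customer's attributes into a churn probability.

```python
from sklearn.linear_model import LogisticRegression

# Toy history: monthly charge and whether the customer churned (1 = left).
X_hist = [[20.0], [25.0], [30.0], [80.0], [90.0], [95.0]]
y_hist = [0, 0, 0, 1, 1, 1]

model = LogisticRegression().fit(X_hist, y_hist)

# Applied to a new customer, the model yields a churn probability,
# i.e. "the chance of this customer leaving".
p_churn = model.predict_proba([[85.0]])[0][1]
print(f"Estimated churn risk: {p_churn:.0%}")
```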
Step 1 – Create a project
The user starts by creating a project and names it customer_churn_prediction.
The project can be shared with team members to collaborate.
The following steps illustrate how data is ingested, cleaned, and transformed to train a machine-learning algorithm and then deployed to production.
Step 2 – Ingest, clean, and transform data
The user selects the data file (in this case a .csv) from their desktop and clicks Upload.
Once uploaded, the user can view the raw data. The user can also view statistics of the data, already auto-computed by the system, by clicking the Stat icon.
The user starts defining a dataset, selecting “Churn” as the Target (Output) and the other variables as the Features (Input).
The user defines a dataset from raw data
Think of it this way: the target variable is what the user wants to predict. The Features (Input) are the variables from which the user wants machine learning to draw patterns.
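In code, the same separation might look as follows; this pandas sketch uses a small invented frame with an assumed "Churn" column.

```python
import pandas as pd

# Invented sample records with an assumed "Churn" target column.
df = pd.DataFrame({
    "tenure": [1, 34, 2],
    "MonthlyCharges": [29.85, 56.95, 53.85],
    "Churn": ["No", "No", "Yes"],
})

# Target: the column the user wants to predict.
y = df["Churn"]

# Features: everything the model may learn patterns from.
X = df.drop(columns=["Churn"])
```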
The user proceeds to the next page.
The data may have missing values, which may need to be treated (e.g. imputed) before modelling.
The user checks for any missing values in the raw data using the Missing Value Handler feature.
The system has prescribed a transformation step.
The user follows the recommendation.
The user checks for missing values
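A typical treatment, sketched here with pandas on invented data, is to fill numeric gaps with the median and categorical gaps with the most frequent value; the platform's prescribed transformation may of course differ.

```python
import pandas as pd

# Invented records with deliberate gaps.
df = pd.DataFrame({
    "MonthlyCharges": [29.85, None, 53.85, 42.30],
    "Contract": ["Month-to-month", "One year", None, "One year"],
})

# Numeric gaps -> median; categorical gaps -> most frequent value.
df["MonthlyCharges"] = df["MonthlyCharges"].fillna(df["MonthlyCharges"].median())
df["Contract"] = df["Contract"].fillna(df["Contract"].mode()[0])
```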
The data has to be transformed into a form that algorithms can understand and compute with.
The user clicks on Feature Pre-processor.
The system has prescribed the transformations. The user follows the recommendation.
The user preprocesses the features
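Under the hood, such preprocessing typically means one-hot encoding categorical columns and scaling numeric ones. A sketch with pandas on invented data:

```python
import pandas as pd

df = pd.DataFrame({
    "tenure": [1, 34, 2, 45],
    "Contract": ["Month-to-month", "One year", "Month-to-month", "Two year"],
})

# One-hot encode the categorical column so it becomes numeric indicators.
encoded = pd.get_dummies(df, columns=["Contract"])

# Min-max scale the numeric column into [0, 1].
t = encoded["tenure"]
encoded["tenure"] = (t - t.min()) / (t.max() - t.min())
```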
The user gives this transformed dataset a name, e.g. ds_all_features.
The user names the dataset
The data will be split into training and test sets.
The system recommends an 80/20 train/test split.
The user follows the recommendation.
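The split itself is simple; here is a plain-Python sketch of an 80/20 shuffle-and-split over 100 stand-in records.

```python
import random

records = list(range(100))  # stand-in for 100 customer rows

random.seed(0)              # reproducible shuffle
random.shuffle(records)

# 80% of rows for training, the remaining 20% held out for testing.
split = int(len(records) * 0.8)
train, test = records[:split], records[split:]
print(len(train), len(test))  # 80 20
```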
The dataset is now ready for machine learning.
Step 3 – Create models
Many machine learning algorithms are available for modelling – some solve classification problems, some solve regression problems, and some are suited to deep learning. Our AI platform ships with many algorithms and also allows you to add or create your own.
The user clicks “Add Base Model” and selects the dataset created in the previous section.
At the same time, the user selects a machine learning algorithm.
The user selects an algorithm to create the ML model
The system creates the model that the user selected.
The user can create as many models as they like by selecting other algorithms, or use the Auto ML feature, where the system automatically generates many models and ranks them for you while you sit back and relax. For now, let's create one more version of this model using a neural network classifier.
The user selects the MultiLayerPerceptronNNClassifier.
The system recommends optimum parameters for the algorithm. The user goes with the system-optimized recommendations.
The user creates another version using the neural network classifier
Version 2 of the model is instantly created, now using the neural network classifier.
The user can see the performance metrics of all models.
The model accuracy statistics are instantly displayed.
The user can see actual vs. predicted results and compare the models to pick the one that offers the best results. In this case, the Logistic Regression Classifier performed best.
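The comparison can be reproduced outside the platform with scikit-learn. The sketch below trains a logistic regression and a small multilayer perceptron on synthetic data and scores both on held-out data; which model wins will vary with the data, so treat this as an illustration of the workflow rather than of the result.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the prepared churn dataset.
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "MLPClassifier": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                   random_state=0),
}

# Fit each model on the training split and score it on the held-out split.
scores = {name: accuracy_score(y_te, m.fit(X_tr, y_tr).predict(X_te))
          for name, m in models.items()}
print(scores)
```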
This completes the machine learning part. The model is ready for deployment so that it can be used by other applications.
Step 4 – Model review
It’s always good practice to have someone review your work, and building AI is no different. The model has to be checked for bias, interpretability, and safe and ethical AI considerations.
The user submits the model for review by adding comments for deployment, attaching any extra information, and submitting the review request.
The reviewer now has access to all the documentation that was automatically generated, from the state of the raw data through to the point at which the model was created.
When satisfied, the reviewer accepts it to be deployed to a production environment.
Step 5 – Deploy the model
The model is now approved for deployment. It’s time to deploy it as an API so that new and legacy applications can start using it as a prediction service.
The user selects the model to be deployed, clicks Add default app code, and confirms. The user then clicks Deploy and confirms again.
An API is instantly created and the application starts running.
The user generates an API key and access token and shares them with others so that new and legacy applications can start using the prediction service that was just created.
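A client application might call the deployed service roughly as follows. The endpoint URL, key, and payload fields below are hypothetical placeholders, not values generated by the platform; substitute the ones produced at deployment.

```python
import json
import urllib.request

# Hypothetical endpoint and key -- replace with the values the platform
# generated when the model was deployed.
API_URL = "https://example.com/api/v1/churn/predict"
API_KEY = "your-api-key"

# Illustrative feature payload for one customer.
payload = json.dumps({"tenure": 2, "MonthlyCharges": 85.0,
                      "Contract": "Month-to-month"}).encode()

request = urllib.request.Request(
    API_URL, data=payload, method="POST",
    headers={"Content-Type": "application/json",
             "Authorization": f"Bearer {API_KEY}"},
)

# With a real endpoint, send the request and read the prediction:
# response = json.load(urllib.request.urlopen(request))
```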
Step 6 – Dashboard
For each model deployed as an API, a real-time dashboard is automatically created.
The user can type input values and hit the predict button to get the model output.
The API results can also be connected to tools such as Microsoft Power BI or Spotfire to make custom reports.
Figure: Custom dashboard created in Power BI
Further models can be developed to fine-tune your customer retention strategy. For example, not all customers may be worth retaining. To understand which are, a Lifetime Customer Value model can be built that learns from data such as charges, margins, demographics, past purchases, billing type, etc. to predict the total value of a customer over their expected lifetime with you.
Figure: Customer 360 dashboard
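As a back-of-envelope illustration (not the platform's model), a customer's undiscounted lifetime value can be estimated from monthly margin and monthly churn risk: if a customer churns with probability p each month, their expected lifetime is 1/p months.

```python
def customer_lifetime_value(monthly_margin, monthly_churn_rate):
    """Simple undiscounted estimate: expected lifetime is 1 / churn rate,
    so total value is margin per month times expected months as a customer."""
    return monthly_margin / monthly_churn_rate

# A customer yielding $30/month margin with a 2% monthly churn risk
# is expected to stay ~50 months, so ~$1500 in total.
print(customer_lifetime_value(30.0, 0.02))
```

A real model would learn these inputs per customer and discount future months; this sketch only conveys the intuition.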
Another example is a model that predicts which customers will respond positively if a particular product or service were offered to them. Think of it as how Netflix suggests what you might like by understanding what others like you are watching.
A combination of these models, used together, allows your retention teams to become highly effective at what they do.
We believe that machine learning can help you boost customer loyalty and your profitability at the same time. It does not have to be one or the other. Start using it to supercharge your sales and marketing, make next-best offers, cross-sell and up-sell, and optimize your campaigns for specific customer characteristics – the possibilities are endless.