Commit 8c715274 authored by Carp

churn demo

parent 60ddad86
A Notebook is a special type of item in a Databricks workspace folder that is used to create Spark
scripts. Notebooks can call other Notebooks to build a hierarchy of functionality.
When a Notebook is created, its language must be specified (Python, Scala, or SQL), and
it can then be attached to a cluster so that its code runs against that cluster. The
following screenshot shows Notebook creation.
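For example, one notebook can pull in another either with the `%run` magic or with `dbutils.notebook.run`. The sketch below assumes a Python notebook and a hypothetical child notebook path (`./setup_utils`); `dbutils` is only defined inside a Databricks notebook session.

```python
# Minimal sketch of notebook composition inside a Python Databricks notebook.
# "./setup_utils" is a hypothetical child notebook path -- replace with your own.

# Option 1: %run executes the child notebook inline, so its functions and
# variables become available in the current notebook.
# %run ./setup_utils

# Option 2: dbutils.notebook.run launches the child notebook as a separate run,
# passes string arguments, and returns whatever the child hands to
# dbutils.notebook.exit(). dbutils only exists inside a Databricks session.
result = dbutils.notebook.run("./setup_utils", 600, {"env": "demo"})
print(result)
```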
## Demo Overview
This demo is for **Customer Churn**. Using the notebook, we want to predict and prevent customer churn.
The goal is to predict a likely churn event, report it, and then suggest strategies (such as special offers) to prevent the churn.
This example is for mobile phone carriers.
Even if you or your customer knows nothing about mobile phone churn, the notebook helps tell a story with the data.
The dataset is the public-domain "Customer Churn dataset" from [Kaggle](https://www.kaggle.com/c/customer-churn-prediction/data).
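To give a rough idea of the modelling step, here is a minimal sketch of training a churn classifier with Spark ML logistic regression. It is not the exact code in the notebook: the file path, label handling, and feature column names below are placeholders and will differ from the Kaggle CSV, and `spark` is the session Databricks provides.

```python
# Minimal churn-model sketch with Spark ML logistic regression.
# The path and column names are placeholders; adjust feature_cols and the
# label column to match your copy of the data.
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression
from pyspark.sql.functions import col

churn_df = (spark.read                        # `spark` is predefined in Databricks
            .option("header", "true")
            .option("inferSchema", "true")
            .csv("/mnt/demo/churn.csv"))      # hypothetical mount point

# LogisticRegression needs a numeric 0/1 label; this assumes `churn` is
# boolean or 0/1 -- map string labels first if your file uses text.
churn_df = churn_df.withColumn("label", col("churn").cast("double"))

feature_cols = ["total_day_minutes", "total_eve_minutes",
                "total_intl_minutes", "customer_service_calls"]  # assumed names

pipeline = Pipeline(stages=[
    VectorAssembler(inputCols=feature_cols, outputCol="features"),
    LogisticRegression(labelCol="label", featuresCol="features"),
])

train_df, test_df = churn_df.randomSplit([0.8, 0.2], seed=42)
model = pipeline.fit(train_df)
model.transform(test_df).select("label", "prediction", "probability").show(5)
```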
## Setup
* Spin up a small cluster
* Import the notebook
* Copy the data files to either WASB or ADLS (see the sketch after this list).
  * the notebook describes both methods of accessing the data
* We assume you can get the data into one of those two areas; I give some pointers in the notebook.
* Each cell is well-documented and should be easy to demo.
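As a rough illustration of the two access paths, the snippet below reads the CSV from either WASB or ADLS. Every account, container, and key name is a placeholder from your own Azure subscription, not a value from this repo, and the exact credential setup (account key, service principal, or DBFS mount) depends on your environment.

```python
# Two ways to point Spark at the churn CSV; every <placeholder> below is a
# value from your own Azure subscription.

# Option 1: WASB (Azure Blob Storage) via an account key set on the session.
spark.conf.set(
    "fs.azure.account.key.<storage-account>.blob.core.windows.net",
    "<storage-account-key>")
churn_df = (spark.read
            .option("header", "true")
            .option("inferSchema", "true")
            .csv("wasbs://<container>@<storage-account>.blob.core.windows.net/churn.csv"))

# Option 2: ADLS (shown here as Gen2 / abfss) -- assumes the cluster already
# has credentials configured (service principal, passthrough, or a mount).
churn_df = (spark.read
            .option("header", "true")
            .option("inferSchema", "true")
            .csv("abfss://<container>@<storage-account>.dfs.core.windows.net/churn.csv"))

churn_df.printSchema()
```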
## Next Steps
* Connect Power BI to Databricks. It's not hard. [Documentation](https://medium.com/@mauridb/powerbi-and-azure-databricks-193e3dc567a)