Rule Generator (Decision Tree algorithm) Spark Example¶
This notebook contains an example of how the Rule Generator (Decision Tree algorithm) can be used to create rules based on a labelled dataset stored as a Koalas DataFrame. This algorithm generates rules by extracting the highest-performing branches from a tree ensemble model.
You should use this module when the dataset is too large to load into memory. In that case, the standard Rule Generator algorithm cannot be used, as it relies on Pandas and Sklearn.
Requirements¶
To run, you’ll need the following:
A labelled, processed dataset (nulls imputed, categorical features encoded).
Import packages¶
[1]:
from iguanas.rule_generation import RuleGeneratorDTSpark
from iguanas.metrics.classification import FScore
import databricks.koalas as ks
from pyspark.ml.classification import RandomForestClassifier
from pyspark.sql import SparkSession
Create Spark session¶
[2]:
spark = SparkSession.builder.config('spark.dynamicAllocation.enabled', True).getOrCreate()
21/11/23 17:30:14 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
21/11/23 17:30:15 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
Read in data¶
Let’s read in some labelled, processed dummy data.
[3]:
X_train = ks.read_csv(
    'dummy_data/X_train.csv',
    index_col='eid'
)
y_train = ks.read_csv(
    'dummy_data/y_train.csv',
    index_col='eid'
).squeeze()
X_test = ks.read_csv(
    'dummy_data/X_test.csv',
    index_col='eid'
)
y_test = ks.read_csv(
    'dummy_data/y_test.csv',
    index_col='eid'
).squeeze()
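The Requirements above assume nulls have been imputed and categorical features encoded, so it can be worth verifying this before generating rules. Below is a minimal sketch, assuming Koalas' isnull and dtypes behave as in Pandas:

# Optional sanity check: confirm no nulls remain and all feature
# columns are numeric (i.e. categoricals have been encoded).
assert X_train.isnull().sum().sum() == 0, 'Impute nulls before generating rules'
assert not any(str(dtype) == 'object' for dtype in X_train.dtypes), \
    'Encode categorical features before generating rules'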
Generate rules¶
Set up class parameters¶
Now we can set our class parameters for the Rule Generator. Here we’re using the F1 score as the rule performance metric (you can choose a different function from the metrics.classification module or create your own).
Note that if you’re optimising for the F-score, precision or recall, use the FScore, Precision or Recall classes in the metrics.classification module rather than the equivalent functions from Sklearn’s metrics module, since Sklearn’s functions do not work on Koalas DataFrames.
Please see the class docstring for more information on each parameter.
[4]:
fs = FScore(beta=1)
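If you wanted to optimise for precision instead of the F1 score, you could swap in the Precision class from the same module. Below is a minimal sketch; it assumes Precision takes no constructor arguments and exposes the same .fit interface as FScore:

from iguanas.metrics.classification import Precision

# Assumption: Precision exposes the same .fit interface as FScore,
# so its .fit method could be passed as the 'opt_func' parameter below.
p = Precision()
# e.g. 'opt_func': p.fit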
[5]:
params = {
    'n_total_conditions': 4,
    'opt_func': fs.fit,
    'tree_ensemble': RandomForestClassifier(numTrees=5, seed=0),
    'precision_threshold': 0.5,
    'target_feat_corr_types': 'Infer',
    'verbose': 1
}
Instantiate class and run fit method¶
Once the parameters have been set, we can run the .fit() method to generate rules.
[6]:
rg = RuleGeneratorDTSpark(**params)
[7]:
X_rules = rg.fit(
    X=X_train,
    y=y_train,
    sample_weight=None
)
--- Calculating correlation of features with respect to the target ---
21/11/23 17:30:23 WARN package: Truncated the string representation of a plan since it was too large. This behavior can be adjusted by setting 'spark.sql.debug.maxToStringFields'.
21/11/23 17:30:27 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
--- Returning column datatypes ---
--- Creating Spark DataFrame for training ---
--- Training tree ensemble ---
--- Extracting rules from tree ensemble ---
/Users/jlaidler/venvs/iguanas_os_dev/lib/python3.8/site-packages/databricks/koalas/frame.py:11847: UserWarning: Koalas doesn't allow columns to be created via a new attribute name
warnings.warn(msg, UserWarning)
Outputs¶
The .fit() method returns a dataframe giving the binary columns of the generated rules as applied to the training dataset.
Useful attributes created by running the .fit() method are:
rule_strings: The generated rules, defined using the standard Iguanas string format (values) and their names (keys).
rule_descriptions: A dataframe showing the logic of the generated rules and their performance metrics as applied to the training dataset.
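For example, you can inspect the generated rules directly via the rule_strings attribute. A minimal sketch (rule_strings is a dictionary mapping rule names to their Iguanas string logic, as described above):

# Print the first three generated rules (name -> Iguanas string logic)
for rule_name, rule_logic in list(rg.rule_strings.items())[:3]:
    print(f'{rule_name}: {rule_logic}')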
[8]:
X_rules.head()
[8]:
| eid | RGDT_Rule_20211123_7 | RGDT_Rule_20211123_8 | RGDT_Rule_20211123_11 | RGDT_Rule_20211123_10 | RGDT_Rule_20211123_0 | RGDT_Rule_20211123_1 | RGDT_Rule_20211123_3 | RGDT_Rule_20211123_6 | RGDT_Rule_20211123_5 | RGDT_Rule_20211123_9 | RGDT_Rule_20211123_12 | RGDT_Rule_20211123_4 | RGDT_Rule_20211123_2 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 867-8837095-9305559 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 974-5306287-3527394 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 584-0112844-9158928 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 956-4190732-7014837 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 349-7005645-8862067 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
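Since each column of X_rules is a binary flag, you can, for example, count how many of the generated rules flag each record. A minimal sketch, assuming Koalas supports row-wise sums as in Pandas:

# Count how many rules flag each record (row-wise sum of the binary columns)
num_rules_fired = X_rules.sum(axis=1)
num_rules_fired.head()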
[9]:
rg.rule_descriptions.head()
[9]:
| Rule | Precision | Recall | PercDataFlagged | OptMetric | Logic | nConditions |
|---|---|---|---|---|---|---|
| RGDT_Rule_20211123_7 | 0.991837 | 1.000000 | 0.027547 | 0.995902 | (X['account_number_num_fraud_transactions_per_... | 1 |
| RGDT_Rule_20211123_8 | 0.991837 | 1.000000 | 0.027547 | 0.995902 | (X['account_number_num_fraud_transactions_per_... | 2 |
| RGDT_Rule_20211123_11 | 0.991837 | 1.000000 | 0.027547 | 0.995902 | (X['account_number_num_fraud_transactions_per_... | 1 |
| RGDT_Rule_20211123_10 | 0.972000 | 1.000000 | 0.028109 | 0.985801 | (X['account_number_num_fraud_transactions_per_... | 1 |
| RGDT_Rule_20211123_0 | 1.000000 | 0.378601 | 0.010344 | 0.549254 | (X['account_number_avg_order_total_per_account... | 2 |
Apply rules to a separate dataset¶
Use the .transform() method to apply the generated rules to a separate dataset.
[10]:
X_rules_test = rg.transform(
    X=X_test,
    y=y_test,
    sample_weight=None
)
/Users/jlaidler/venvs/iguanas_os_dev/lib/python3.8/site-packages/databricks/koalas/frame.py:11847: UserWarning: Koalas doesn't allow columns to be created via a new attribute name
warnings.warn(msg, UserWarning)
Outputs¶
The .transform() method returns a dataframe giving the binary columns of the rules as applied to the given dataset.
A useful attribute created by running the .transform() method is:
rule_descriptions: A dataframe showing the logic of the generated rules and their performance metrics as applied to the given dataset.
[11]:
rg.rule_descriptions.head()
[11]:
| Rule | Precision | Recall | PercDataFlagged | OptMetric | Logic | nConditions |
|---|---|---|---|---|---|---|
| RGDT_Rule_20211123_7 | 0.991453 | 1.000000 | 0.026700 | 0.995708 | (X['account_number_num_fraud_transactions_per_... | 1 |
| RGDT_Rule_20211123_8 | 0.991453 | 1.000000 | 0.026700 | 0.995708 | (X['account_number_num_fraud_transactions_per_... | 2 |
| RGDT_Rule_20211123_11 | 0.991453 | 1.000000 | 0.026700 | 0.995708 | (X['account_number_num_fraud_transactions_per_... | 1 |
| RGDT_Rule_20211123_10 | 0.958678 | 1.000000 | 0.027613 | 0.978903 | (X['account_number_num_fraud_transactions_per_... | 1 |
| RGDT_Rule_20211123_1 | 1.000000 | 0.396552 | 0.010497 | 0.567901 | (X['account_number_avg_order_total_per_account... | 2 |
[12]:
X_rules_test.head()
[12]:
| eid | RGDT_Rule_20211123_7 | RGDT_Rule_20211123_8 | RGDT_Rule_20211123_11 | RGDT_Rule_20211123_10 | RGDT_Rule_20211123_1 | RGDT_Rule_20211123_0 | RGDT_Rule_20211123_3 | RGDT_Rule_20211123_6 | RGDT_Rule_20211123_9 | RGDT_Rule_20211123_4 | RGDT_Rule_20211123_12 | RGDT_Rule_20211123_5 | RGDT_Rule_20211123_2 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 975-8351797-7122581 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 785-6259585-7858053 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 057-4039373-1790681 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 095-5263240-3834186 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 980-3802574-0009480 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
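Finally, you can re-score an individual rule column on the test set with the same metric instance, to verify the values shown in rule_descriptions. A minimal sketch; it assumes fs.fit accepts a binary predictions column followed by the labels, matching its use as opt_func above:

# Re-score the first rule column on the test set.
# Assumption: fs.fit(y_preds, y_true) - matching its use as 'opt_func' above.
first_rule = X_rules_test.columns[0]
print(fs.fit(X_rules_test[first_rule], y_test))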