seldonian.parse_tree.nodes.ConfusionMatrixBaseNode

class ConfusionMatrixBaseNode(name, cm_true_index, cm_pred_index, lower=-inf, upper=inf, conditional_columns=[], **kwargs)

Bases: BaseNode

__init__(name, cm_true_index, cm_pred_index, lower=-inf, upper=inf, conditional_columns=[], **kwargs)

A base node for the confusion matrix. Inherits all of the attributes and methods of BaseNode and sets the (i, j) indices into the K x K confusion matrix, C_ij, where i indexes the true label and j indexes the predicted label:

                        Predicted labels
                | j=0     | j=1     | ... | j=K-1   |
                -------------------------------------
        i=0     | C_0,0   | C_0,1   | ... | C_0,K-1 |
                |_________|_________|_____|_________|
True    i=1     | C_1,0   | C_1,1   | ... | C_1,K-1 |
labels          |_________|_________|_____|_________|
        ...     | ...     | ...     | ... | ...     |
                |_________|_________|_____|_________|
        i=K-1   | C_K-1,0 | C_K-1,1 | ... | C_K-1,K-1 |
                |_________|_________|_____|___________|
Parameters:
  • name (str) – The name of the node

  • cm_true_index (int) – The index of the row in the confusion matrix. Rows correspond to the true labels

  • cm_pred_index (int) – The index of the column in the confusion matrix. Columns correspond to the predicted labels

  • lower (float) – Lower confidence bound

  • upper (float) – Upper confidence bound

  • conditional_columns (List(str)) – When calculating confidence bounds on a measure function, condition on these columns being == 1
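The (i, j) indexing convention can be illustrated with a minimal, self-contained sketch. This is not the library's implementation; confusion_matrix and the label arrays below are hypothetical stand-ins:

```python
# Illustrative sketch (not the library's code): how the
# (cm_true_index, cm_pred_index) = (i, j) pair selects one entry of a
# K x K confusion matrix built from true and predicted labels.

def confusion_matrix(y_true, y_pred, K):
    """Count matrix C where C[i][j] = #(true == i and pred == j)."""
    C = [[0] * K for _ in range(K)]
    for t, p in zip(y_true, y_pred):
        C[t][p] += 1
    return C

y_true = [0, 0, 1, 1, 1, 2]
y_pred = [0, 1, 1, 1, 2, 2]
C = confusion_matrix(y_true, y_pred, K=3)

# A node with cm_true_index=0, cm_pred_index=1 refers to C[0][1]:
# the count of class-0 examples that were misclassified as class 1.
print(C[0][1])  # -> 1
```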

__repr__()

Overrides Node.__repr__()

Methods

calculate_bounds(**kwargs)

Calculate confidence bounds given a bound_method, such as t-test.

Returns:

A dictionary mapping each bound name to its value, e.g., {"lower": -1.0, "upper": 1.0}
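The shape of a t-test-style bound can be sketched with the stdlib alone. This is a hedged illustration, not the library's calculate_bounds: the library's t-test uses the Student-t quantile, while here the standard-normal quantile approximates it (reasonable for large n), and each side is bounded at level delta:

```python
# Hedged sketch of a t-test-style confidence bound of the form
#   mean +/- quantile * stddev / sqrt(n).
from math import sqrt
from statistics import NormalDist, mean, stdev

def approx_ttest_bounds(data, delta=0.05):
    n = len(data)
    m, s = mean(data), stdev(data)
    # Normal quantile as a large-n stand-in for t_{n-1, 1-delta}.
    q = NormalDist().inv_cdf(1 - delta)
    half_width = q * s / sqrt(n)
    return {"lower": m - half_width, "upper": m + half_width}

bounds = approx_ttest_bounds([0.1, 0.2, 0.15, 0.3, 0.25])
print(bounds["lower"] < 0.2 < bounds["upper"])  # -> True
```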

calculate_data_forbound(**kwargs)

Prepare data inputs for confidence bound calculation.

Returns:

data_dict, a dictionary containing the prepared data

calculate_value(**kwargs)

Calculate the value of the node given model weights, etc. This is the expected value of the base variable, not the bound.

compute_HC_lowerbound(data, datasize, delta, **kwargs)

Calculate the high-confidence lower bound. Used in the safety test.

Parameters:
  • data (numpy ndarray) – Vector containing base variable evaluated at each observation in dataset

  • datasize (int) – The number of observations in the safety dataset

  • delta (float) – Confidence level, e.g. 0.05

Returns:

lower, the high-confidence lower bound

compute_HC_upper_and_lowerbound(data, datasize, delta_lower, delta_upper, **kwargs)

Calculate the high-confidence lower and upper bounds. Used in the safety test. The confidence levels for the lower and upper bounds do not have to be equal.

Depending on the bound_method, this is not always equivalent to calling compute_HC_lowerbound() and compute_HC_upperbound() independently.

Parameters:
  • data (numpy ndarray) – Vector containing base variable evaluated at each observation in dataset

  • datasize (int) – The number of observations in the safety dataset

  • delta_lower (float) – Confidence level for the lower bound, e.g. 0.05

  • delta_upper (float) – Confidence level for the upper bound, e.g. 0.05

Returns:

(lower, upper), the high-confidence lower and upper bounds

compute_HC_upperbound(data, datasize, delta, **kwargs)

Calculate the high-confidence upper bound. Used in the safety test.

Parameters:
  • data (numpy ndarray) – Vector containing base variable evaluated at each observation in dataset

  • datasize (int) – The number of observations in the safety dataset

  • delta (float) – Confidence level, e.g. 0.05

Returns:

upper, the high-confidence upper bound

mask_data(dataset, conditional_columns)

Mask features and labels using a joint AND mask where each of the conditional columns is True.

Parameters:
  • dataset (dataset.Dataset object) – The candidate or safety dataset

  • conditional_columns (List(str)) – List of columns for which to create the joint AND mask on the dataset

Returns:

The masked dataframe

Return type:

numpy ndarray
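The joint AND mask can be sketched with plain numpy arrays standing in for the dataset.Dataset object. The column names and data below are made up for illustration, and joint_and_mask is a hypothetical helper, not the library's mask_data:

```python
# Sketch of a joint AND mask: keep only the rows where every
# conditional column equals 1.
import numpy as np

columns = ["M", "F", "label"]
data = np.array([
    [1, 0, 1],
    [0, 1, 0],
    [1, 0, 0],
])

def joint_and_mask(data, columns, conditional_columns):
    idxs = [columns.index(c) for c in conditional_columns]
    # A row passes only if all conditional columns are 1 (joint AND).
    return np.all(data[:, idxs] == 1, axis=1)

mask = joint_and_mask(data, columns, ["M"])
masked = data[mask]     # rows where M == 1
print(masked.shape[0])  # -> 2
```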

predict_HC_lowerbound(data, datasize, delta, **kwargs)

Calculate the high-confidence lower bound that we expect to pass the safety test. Used in candidate selection.

Parameters:
  • data (numpy ndarray) – Vector containing base variable evaluated at each observation in dataset

  • datasize (int) – The number of observations in the safety dataset

  • delta (float) – Confidence level, e.g. 0.05

Returns:

lower, the predicted high-confidence lower bound

predict_HC_upper_and_lowerbound(data, datasize, delta_lower, delta_upper, **kwargs)

Calculate the high-confidence lower and upper bounds that we expect to pass the safety test. Used in candidate selection. The confidence levels for the lower and upper bounds do not have to be equal.

Depending on the bound_method, this is not always equivalent to calling predict_HC_lowerbound() and predict_HC_upperbound() independently.

Parameters:
  • data (numpy ndarray) – Vector containing base variable evaluated at each observation in dataset

  • datasize (int) – The number of observations in the safety dataset

  • delta_lower (float) – Confidence level for the lower bound, e.g. 0.05

  • delta_upper (float) – Confidence level for the upper bound, e.g. 0.05

Returns:

(lower, upper), the predicted high-confidence lower and upper bounds

predict_HC_upperbound(data, datasize, delta, **kwargs)

Calculate the high-confidence upper bound that we expect to pass the safety test. Used in candidate selection.

Parameters:
  • data (numpy ndarray) – Vector containing base variable evaluated at each observation in dataset

  • datasize (int) – The number of observations in the safety dataset

  • delta (float) – Confidence level, e.g. 0.05

Returns:

upper, the predicted high-confidence upper bound
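The difference between the predict_* and compute_* families can be sketched as follows. This is a hedged illustration, not the library's code: the factor of 2 below is one common choice from the Seldonian literature for inflating the interval width during candidate selection so the predicted bound is conservative, and the normal quantile again stands in for the Student-t quantile:

```python
# Sketch: predicted bounds widen the confidence interval so that the
# bound is more likely to hold when recomputed on the safety dataset.
from math import sqrt
from statistics import NormalDist, mean, stdev

def half_width(data, datasize, delta):
    # Interval half-width at confidence level delta.
    return NormalDist().inv_cdf(1 - delta) * stdev(data) / sqrt(datasize)

def compute_hc_upperbound(data, datasize, delta):
    return mean(data) + half_width(data, datasize, delta)

def predict_hc_upperbound(data, datasize, delta):
    # Inflated width (factor of 2 here) makes candidate selection
    # conservative about what the safety test will conclude.
    return mean(data) + 2 * half_width(data, datasize, delta)

d = [0.1, 0.2, 0.15, 0.3, 0.25]
print(predict_hc_upperbound(d, 100, 0.05) > compute_hc_upperbound(d, 100, 0.05))  # -> True
```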

zhat(model, theta, data_dict, sub_regime, **kwargs)

Calculate an unbiased estimate of the base variable node.

Parameters:
  • model (models.SeldonianModel object) – The machine learning model

  • theta (numpy ndarray) – model weights

  • data_dict (dict) – Contains inputs to the model, such as features and labels

  • sub_regime (str) – The sub-regime of the problem, e.g. "classification"

Returns:

A vector of unbiased estimates of the measure function
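For a confusion matrix entry, one way to form such per-observation estimates is to restrict to the observations whose true label is i and record an indicator of whether the model predicted j; the mean of that vector then estimates the conditional rate P(pred = j | true = i). This is a hedged sketch with precomputed predictions standing in for the model and theta arguments, not the library's zhat:

```python
# Sketch: per-observation indicator estimates for a confusion matrix
# entry C_ij, restricted to observations with true label i.
from statistics import mean

def zhat_confusion_entry(y_true, y_pred, i, j):
    return [1.0 if p == j else 0.0
            for t, p in zip(y_true, y_pred) if t == i]

y_true = [0, 0, 1, 1, 1, 2]
y_pred = [0, 1, 1, 1, 2, 2]
z = zhat_confusion_entry(y_true, y_pred, i=1, j=1)
# Mean of the indicators estimates P(pred = 1 | true = 1).
print(round(mean(z), 3))  # -> 0.667
```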