Utilities

Utilities for the entire package.

coremltools.models.utils.evaluate_classifier(model, data, target='target', verbose=False)

Evaluate a CoreML classifier model and compare against predictions from the original framework (for testing correctness of conversion). Use this evaluation for models that don’t deal with probabilities.

Parameters:

model: [str | MLModel]

Path to the model file, or a loaded MLModel instance.

data: [str | Dataframe]

Test data on which to evaluate the model (a dataframe, or the path to a CSV file).

target: str

Column to interpret as the target column.

verbose: bool

Set to True for more verbose output.

Examples

>>> metrics = coremltools.utils.evaluate_classifier(spec, 'data_and_predictions.csv', 'target')
>>> print(metrics)
{'samples': 10, 'num_errors': 0}

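
The check this utility performs can be sketched in plain Python. The helper below is a hypothetical illustration (not the library's implementation): it counts the rows where the converted model's label disagrees with the target column.

```python
# Hypothetical sketch of what classifier evaluation computes:
# count the rows where the predicted label differs from the target.
def evaluate_classifier_sketch(predictions, targets):
    num_errors = sum(1 for p, t in zip(predictions, targets) if p != t)
    return {"samples": len(targets), "num_errors": num_errors}

metrics = evaluate_classifier_sketch(["cat", "dog", "dog"], ["cat", "dog", "cat"])
# metrics == {"samples": 3, "num_errors": 1}
```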
coremltools.models.utils.evaluate_classifier_with_probabilities(model, data, probabilities='probabilities', verbose=False)

Evaluate a classifier specification for testing.

Parameters:

model: [str | MLModel]

Path to the model file, or a loaded MLModel instance.

data: [str | Dataframe]

Test data on which to evaluate the model (a dataframe, or the path to a CSV file).

probabilities: str

Column to interpret as the probabilities column.

verbose: bool

Set to True for more verbose output.
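
Since exact equality is too strict for floating-point probabilities, probability-based evaluation compares probability vectors within a tolerance. A hypothetical sketch of such a comparison (illustrative only; `max_probability_error` is not part of coremltools):

```python
# Hypothetical sketch: the largest absolute difference between two
# aligned lists of per-class probability vectors.
def max_probability_error(predicted, expected):
    return max(
        abs(p - e)
        for row_p, row_e in zip(predicted, expected)
        for p, e in zip(row_p, row_e)
    )

predicted = [[0.9, 0.1], [0.2, 0.8]]
expected = [[0.88, 0.12], [0.2, 0.8]]
err = max_probability_error(predicted, expected)  # ~0.02
```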

coremltools.models.utils.evaluate_regressor(model, data, target='target', verbose=False)

Evaluate a CoreML regression model and compare against predictions from the original framework (for testing correctness of conversion).

Parameters:

model: [str | MLModel]

Path to the model file, or a loaded MLModel instance.

data: [str | Dataframe]

Test data on which to evaluate the model (a dataframe, or the path to a CSV file).

target: str

Name of the column in the dataframe that must be interpreted as the target column.

verbose: bool

Set to True for more verbose output.

Examples

>>> metrics = coremltools.utils.evaluate_regressor(spec, 'data_and_predictions.csv', 'target')
>>> print(metrics)
{'samples': 10, 'rmse': 0.0, 'max_error': 0.0}

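
The metrics in the output above can be sketched in plain Python; the helper name and exact formulas below are illustrative assumptions, not the library's code:

```python
import math

# Hypothetical sketch of regressor evaluation: RMSE plus the largest
# absolute deviation between predictions and reference targets.
def evaluate_regressor_sketch(predictions, targets):
    errors = [p - t for p, t in zip(predictions, targets)]
    rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
    max_error = max(abs(e) for e in errors)
    return {"samples": len(targets), "rmse": rmse, "max_error": max_error}

metrics = evaluate_regressor_sketch([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
# metrics == {"samples": 3, "rmse": 0.0, "max_error": 0.0}
```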
coremltools.models.utils.evaluate_transformer(model, input_data, reference_output, verbose=False)

Evaluate a transformer specification for testing.

Parameters:

model: [str | MLModel]

Path to the model file, or a loaded MLModel instance.

input_data: list[dict]

Test data on which to evaluate the model.

reference_output: list[dict]

Expected results for the model.

verbose: bool

Set to True for more verbose output.

Examples

>>> input_data = [{'input_1': 1, 'input_2': 2}, {'input_1': 3, 'input_2': 3}]
>>> expected_output = [{'input_1': 2.5, 'input_2': 2.0}, {'input_1': 1.3, 'input_2': 2.3}]
>>> metrics = coremltools.utils.evaluate_transformer(scaler_spec, input_data, expected_output)

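
The comparison behind this utility can be sketched as a record-by-record, key-by-key check within a tolerance; the helper below is a hypothetical illustration, not coremltools code:

```python
# Hypothetical sketch: count output values that differ from the
# reference records by more than a small absolute tolerance.
def transformer_num_errors(outputs, reference, atol=1e-6):
    errors = 0
    for out, ref in zip(outputs, reference):
        for key, expected in ref.items():
            if abs(out[key] - expected) > atol:
                errors += 1
    return errors

outputs = [{'input_1': 2.5, 'input_2': 2.0}]
reference = [{'input_1': 2.5, 'input_2': 2.1}]
n = transformer_num_errors(outputs, reference)  # 1 mismatched value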
coremltools.models.utils.load_spec(filename)

Load a protobuf model specification from a file.

Parameters:

filename: str

Location on disk (a valid filepath) from which the file is loaded as a protobuf spec.

Returns:

model_spec: Model_pb

Protobuf representation of the model.

See also

save_spec

Examples

>>> spec = coremltools.utils.load_spec('HousePricer.mlmodel')

coremltools.models.utils.rename_feature(spec, current_name, new_name, rename_inputs=True, rename_outputs=True)

Rename a feature in the specification.

Parameters:

spec: Model_pb

The specification containing the feature to rename.

current_name: str

Current name of the feature. If this feature doesn’t exist, the rename is a no-op.

new_name: str

New name of the feature.

rename_inputs: bool

If True, search for current_name in the input features (i.e., output features are ignored).

rename_outputs: bool

If True, search for current_name in the output features (i.e., input features are ignored).

Examples

>>> # In-place rename of the spec
>>> coremltools.utils.rename_feature(spec, 'old_feature', 'new_feature_name')

coremltools.models.utils.save_spec(spec, filename)

Save a protobuf model specification to file.

Parameters:

spec: Model_pb

Protobuf representation of the model.

filename: str

File path where the spec gets saved.

See also

load_spec

Examples

>>> coremltools.utils.save_spec(spec, 'HousePricer.mlmodel')