Handle Batch Prediction
To record batch predictions from your model, follow these steps:
Step 1: Perform batch predictions using your model.
Step 2: Iterate over the predictions and create a SingleTagInferenceRecord for each prediction.
Step 3: Add each record to the evaluation recorder.
How does it work?
In the validation loop, the model predicts a label and a score for each input record. These predictions are used to create SingleTagInferenceRecord
objects, and a unique record ID (urid) is generated for each one. Each record contains the inferred (predicted) label, the actual label, and the corresponding score.
These records are added to the evaluation recorder and stored in the MarkovML backend. Once all records have been added, the evaluation recorder is marked as finished, which initiates the generation of the evaluation report.
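For example, a single prediction from a hypothetical sentiment classifier could be recorded like this. The input text, labels, and score are illustrative values only, and the snippet assumes evaluation_recorder and SingleTagInferenceRecord are already set up as in the sample code below:

# Generate a unique record ID (urid) from the raw input
urid = evaluation_recorder.gen_urid("I loved this product!")
record = SingleTagInferenceRecord(
    urid=urid,
    inferred="positive",  # label predicted by the model
    actual="negative",    # ground-truth label
    score=0.87            # model score for the predicted label
)
evaluation_recorder.add_record(record)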
Note
Additional records cannot be added once the recording is marked finished, ensuring the integrity of the evaluation process.
Sample Code
# Your validation loop that goes over each record to get a predicted label and score from the model
predicted_labels, scores = model.predict(input_values)
for input_value, actual, predicted_label, score in zip(
    input_values, actual_values, predicted_labels, scores
):
    # Generate a unique record ID (urid) for this input
    urid = evaluation_recorder.gen_urid(input_value)
    record = SingleTagInferenceRecord(
        urid=urid,
        inferred=predicted_label,
        actual=actual,
        score=score
    )
    # Add the record to the evaluation recorder
    evaluation_recorder.add_record(record)

# Call finish() to mark the recording as finished.
# This starts the generation of the evaluation report for this recording.
# Additional records can't be added once the recording is marked finished.
evaluation_recorder.finish()
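If you record evaluations for several datasets or models, the same pattern can be wrapped in a small helper. This is a minimal sketch, not part of the MarkovML API; it assumes, as in the sample above, that model.predict returns predicted labels and scores, and that the evaluation recorder and SingleTagInferenceRecord are already imported and created:

def record_batch_predictions(evaluation_recorder, model, input_values, actual_values):
    # Run batch prediction; assumed to return (predicted_labels, scores) as above
    predicted_labels, scores = model.predict(input_values)
    for input_value, actual, predicted_label, score in zip(
        input_values, actual_values, predicted_labels, scores
    ):
        urid = evaluation_recorder.gen_urid(input_value)
        record = SingleTagInferenceRecord(
            urid=urid,
            inferred=predicted_label,
            actual=actual,
            score=score
        )
        evaluation_recorder.add_record(record)
    # Seal the recording and start evaluation report generation
    evaluation_recorder.finish()

Because finish() is called inside the helper, each call records one complete, sealed evaluation; create a new recorder for every evaluation run.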