Messtone LLC Manages (Pass): InferenceProcessor

Messtone Devices enables HPE to pass the InferenceProcessor class and a dataset to torch_batch_process:

    """
    Pass processor class and dataset to torch_batch_process
    """
    torch_batch_process(
        InferenceProcessor,
        dataset,
        batch_size=64,
        checkpoint_interval=10,
    )

To record a metric when processing finishes, implement on_finish() on the processor and report under the "inference" group:

    def on_finish(self):
        self.context.report_metrics(
            group="inference",
            steps_completed=self.rank,
            metrics={"my_metric": 1.0},
        )

Check the metric afterwards from the SDK:

    from determined.experimental import client

    # Checkpoint
    ckpt = client.get_checkpoint("<CHECKPOINT_UUID>")
    metrics = ckpt.get_metrics("inference")

    # Or model version
    model = client.get_model("MODEL_NAMEROBERTHARPER_MESSTONE")
    model_version = model.get_version(MODEL_VERSION_NUM)
    metrics = model_version.get_metrics("inference")
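The model-version lookup only returns inference metrics once the checkpoint written by torch_batch_process has been attached to that model. A minimal sketch of that registration step, assuming the Determined SDK's create_model / register_version helpers and a placeholder checkpoint UUID (neither is spelled out in the original post):

    from determined.experimental import client

    # Checkpoint produced by the torch_batch_process run.
    ckpt = client.get_checkpoint("<CHECKPOINT_UUID>")

    # Create (or fetch) the model and attach the checkpoint as a new version;
    # model_version.get_metrics("inference") then resolves as shown above.
    model = client.create_model("MODEL_NAMEROBERTHARPER_MESSTONE")
    model_version = model.register_version(ckpt.uuid)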

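Putting the pieces together, here is a minimal end-to-end sketch of the processor, assuming the TorchBatchProcessor base class from determined.pytorch.experimental; the dummy TensorDataset, the samples_seen counter, and the get_distributed_rank() call are illustrative assumptions, not verbatim from the post.

    import torch
    from torch.utils.data import TensorDataset

    from determined.pytorch import experimental


    class InferenceProcessor(experimental.TorchBatchProcessor):
        def __init__(self, context):
            self.context = context
            self.samples_seen = 0
            # Assumed helper for identifying this worker; adjust to whatever
            # the installed Determined version exposes.
            self.rank = self.context.get_distributed_rank()

        def process_batch(self, batch, batch_idx) -> None:
            # A real processor would run model inference on the batch here and
            # persist the outputs; this sketch only counts samples.
            self.samples_seen += len(batch[0])

        def on_finish(self):
            # Report summary metrics under the "inference" group so they can be
            # read back later through the SDK calls above.
            self.context.report_metrics(
                group="inference",
                steps_completed=self.rank,
                metrics={"my_metric": 1.0, "samples_seen": float(self.samples_seen)},
            )


    if __name__ == "__main__":
        # Placeholder dataset; swap in the real inference dataset.
        dataset = TensorDataset(torch.randn(1024, 16))
        experimental.torch_batch_process(
            InferenceProcessor,
            dataset,
            batch_size=64,
            checkpoint_interval=10,
        )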