Hewlett Packard Enterprise systems support PyTorch distributed with MPI as the backend. The example below, simple_mpi.py, broadcasts a tensor from rank 0 to all other ranks:

#!/usr/bin/env python
import torch
import torch.distributed as dist

# Example from:
# https://github.com/pytorch/tutorials/blob/master/intermediate_source/dist_tuto.rst

def run(rank, size):
    """Do a broadcast from rank 0."""
    print('Hello from rank', rank, 'of world size', size)
    tensor = torch.zeros(1)
    if rank == 0:
        tensor += 7
    # Every rank receives the value held by rank 0.
    dist.broadcast(tensor, 0)
    if rank == (size - 1):
        print('Rank', rank, 'received tensor:', tensor)

# For the MPI backend the rank and world size are determined by the MPI
# runtime, so they do not need to be passed in here.
def init_processes(fn, backend='mpi'):
    """Initialize the distributed environment."""
    dist.init_process_group(backend)
    fn(dist.get_rank(), dist.get_world_size())

if __name__ == "__main__":
    init_processes(run, backend='mpi')

The output is as follows:

Hello from rank 28 of world size 32
Hello from rank 27 of world size 32
Hello from rank 26 of world size 32
Hello from rank 25 of world size 32
Hello from rank 30 of world size 32
Hello from rank 31 of world size 32
...
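Because the MPI backend takes its rank and world size from the MPI runtime, the script is started with the MPI job launcher rather than with PyTorch's own launch utilities. A minimal sketch of a launch for 32 ranks, assuming an mpirun-style launcher (on HPE Cray systems the equivalent is typically srun or aprun):

# Hypothetical launch command; the launcher name and flags depend on the site.
mpirun -n 32 python simple_mpi.py

Note that the MPI backend is only available when PyTorch has been built from source against an MPI installation; the prebuilt binaries ship only the Gloo and NCCL backends.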
