pylibwholegraph API doc
APIs
init_torch_env
    Init WholeGraph environment for PyTorch.
init_torch_env_and_create_wm_comm
    Init WholeGraph environment for PyTorch and create a single communicator for all ranks.
finalize
    Finalize WholeGraph.
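A minimal usage sketch of these entry points, assuming pylibwholegraph.torch is imported as wgth and that init_torch_env_and_create_wm_comm returns the global and local-node communicators (as in the WholeGraph example scripts); rank and size values would normally come from the process launcher's environment.

```python
import os
import pylibwholegraph.torch as wgth

# Rank/size information normally comes from the launcher (e.g. torchrun).
world_rank = int(os.environ.get("RANK", 0))
world_size = int(os.environ.get("WORLD_SIZE", 1))
local_rank = int(os.environ.get("LOCAL_RANK", 0))
local_size = int(os.environ.get("LOCAL_WORLD_SIZE", 1))

# Initialize the WholeGraph environment and create communicators for all ranks.
# Assumed return value: (global communicator, local-node communicator).
global_comm, local_comm = wgth.init_torch_env_and_create_wm_comm(
    world_rank, world_size, local_rank, local_size
)

# ... allocate WholeMemory tensors / embeddings and run the job ...

# Tear down the WholeGraph environment at the end of the job.
wgth.finalize()
```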
WholeMemoryCommunicator
    WholeMemory Communicator.
set_world_info
    Set the global world's information.
create_group_communicator
    Create WholeMemory Communicator.
destroy_communicator
    Destroy a WholeMemoryCommunicator. Parameters: wm_comm, the WholeMemoryCommunicator to destroy. Returns None.
get_global_communicator
    Get the global communicator of this job. Returns the WholeMemoryCommunicator that has all GPUs in it.
get_local_node_communicator
    Get the local node communicator of this job. Returns the WholeMemoryCommunicator that has the GPUs in the same node.
get_local_device_communicator
    Get the local device communicator of this job. Returns the WholeMemoryCommunicator that has only the GPU belonging to the current process.
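After initialization, the getters above provide communicators at the three scopes; a short sketch (the no-argument call form is assumed from the summaries above).

```python
import pylibwholegraph.torch as wgth

global_comm = wgth.get_global_communicator()        # all GPUs in the job
node_comm = wgth.get_local_node_communicator()      # GPUs on this node
device_comm = wgth.get_local_device_communicator()  # only this process's GPU
```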
WholeMemoryTensor
    WholeMemory Tensor.
create_wholememory_tensor
    Create empty WholeMemory Tensor.
create_wholememory_tensor_from_filelist
    Create WholeMemory Tensor from a list of binary files.
destroy_wholememory_tensor
    Destroy an allocated WholeMemory Tensor. Parameters: wm_tensor, the WholeMemory Tensor to destroy. Returns None.
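A sketch of allocating and releasing a WholeMemory Tensor; the argument order (communicator, memory type, memory location, sizes, dtype, strides) is an assumption based on the related create_embedding parameters below, not a verified signature.

```python
import torch
import pylibwholegraph.torch as wgth

comm = wgth.get_global_communicator()

# Allocate an empty 2D tensor shared across all ranks.
# Assumed argument order: comm, memory_type, memory_location, sizes, dtype, strides.
wm_tensor = wgth.create_wholememory_tensor(
    comm, "distributed", "cpu", [1_000_000, 128], torch.float32, None
)

# ... load data into the tensor and use it ...

wgth.destroy_wholememory_tensor(wm_tensor)
```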
WholeMemoryOptimizer
    Sparse Optimizer for WholeMemoryEmbedding.
create_wholememory_optimizer
    Create WholeMemoryOptimizer.
destroy_wholememory_optimizer
    Destroy a WholeMemoryOptimizer. Parameters: optimizer, the WholeMemoryOptimizer to destroy. Returns None.
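A minimal sketch of creating the sparse optimizer that is later attached to an embedding; the ("adam", {}) call pattern (optimizer type name plus hyperparameter dict) follows the WholeGraph example scripts and is an assumption here.

```python
import pylibwholegraph.torch as wgth

# Assumed call pattern: optimizer type name plus a dict of hyperparameters
# (an empty dict keeps the defaults).
optimizer = wgth.create_wholememory_optimizer("adam", {})

# ... pass the optimizer to create_embedding (see the embedding section below) ...

wgth.destroy_wholememory_optimizer(optimizer)
```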
WholeMemoryCachePolicy
    Cache policy to create WholeMemoryEmbedding.
create_wholememory_cache_policy
    Create WholeMemoryCachePolicy. NOTE: in most cases, the builtin cache policy created by create_builtin_cache_policy is sufficient.
create_builtin_cache_policy
    Create builtin cache policy.
destroy_wholememory_cache_policy
    Destroy a WholeMemoryCachePolicy. Parameters: cache_policy, the WholeMemoryCachePolicy to destroy. Returns None.
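A hedged sketch of the builtin cache policy helper; the positional arguments (builtin cache type, embedding memory type, embedding memory location, access type, cache ratio) mirror the pattern used in the WholeGraph example scripts and are assumptions, not a documented signature.

```python
import pylibwholegraph.torch as wgth

# Assumed arguments, following the WholeGraph examples.
cache_policy = wgth.create_builtin_cache_policy(
    "local_device",  # builtin cache type: cache hot entries on each local GPU
    "distributed",   # embedding memory type
    "cpu",           # embedding memory location
    "readonly",      # access type
    0.2,             # fraction of entries to cache
)

# ... pass cache_policy to create_embedding ...

wgth.destroy_wholememory_cache_policy(cache_policy)
```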
WholeMemoryEmbedding
    WholeMemory Embedding.
create_embedding
    Create embedding.
    Parameters: comm (WholeMemoryCommunicator), memory_type (WholeMemory type, should be continuous, chunked or distributed), memory_location (WholeMemory location, should be cpu or cuda), dtype (data type), sizes (size of the embedding, must be 2D), optimizer (optimizer), cache_policy (cache policy), gather_sms (the number of SMs used in the gather process).
    Returns: WholeMemoryEmbedding.
create_embedding_from_filelist
    Create embedding from file list.
    Parameters: comm (WholeMemoryCommunicator), memory_type (WholeMemory type, should be continuous, chunked or distributed), memory_location (WholeMemory location, should be cpu or cuda), filelist (list of files), dtype (data type), last_dim_size (size of the last dim), optimizer (optimizer), cache_policy (cache policy), gather_sms (the number of SMs used in the gather process).
    Returns: WholeMemoryEmbedding.
destroy_embedding
    Destroy WholeMemoryEmbedding. Parameters: wm_embedding, the WholeMemoryEmbedding to destroy. Returns None.
WholeMemoryEmbeddingModule
    torch.nn.Module wrapper of WholeMemoryEmbedding.
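Putting the pieces together: a sketch that creates an embedding table with the parameters listed above, then wraps it in the torch.nn.Module wrapper so rows can be gathered in a forward pass. The keyword form of optimizer/cache_policy, the int64 CUDA indices, and the concrete sizes are assumptions based on the WholeGraph example scripts.

```python
import torch
import pylibwholegraph.torch as wgth

comm = wgth.get_global_communicator()
optimizer = wgth.create_wholememory_optimizer("adam", {})

# 2D embedding table: num_entries x embedding_dim.
wm_embedding = wgth.create_embedding(
    comm,
    "distributed",     # memory_type: continuous, chunked or distributed
    "cpu",             # memory_location: cpu or cuda
    torch.float32,     # dtype
    [1_000_000, 128],  # sizes, must be 2D
    optimizer=optimizer,
    cache_policy=None,
)

# torch.nn.Module wrapper; gathers the embedding rows for the given indices.
emb_module = wgth.WholeMemoryEmbeddingModule(wm_embedding)
indices = torch.randint(0, 1_000_000, (1024,), dtype=torch.int64, device="cuda")
features = emb_module(indices)  # expected shape: [1024, 128]

wgth.destroy_embedding(wm_embedding)
```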
GraphStructure
    Graph structure storage. It holds the graph structure of one relation, represented in CSR format.
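To make the CSR layout concrete, here is a small, library-independent illustration of how one relation's adjacency is stored as a row-pointer array plus a column-index array (plain torch tensors; the GraphStructure class itself is not exercised here).

```python
import torch

# Edges of one relation: (0 -> 1), (0 -> 2), (1 -> 2), (3 -> 0).
# CSR keeps, for each source node, an offset range into the column-index array.
csr_row_ptr = torch.tensor([0, 2, 3, 3, 4], dtype=torch.int64)  # length: num_nodes + 1
csr_col_ind = torch.tensor([1, 2, 2, 0], dtype=torch.int64)     # length: num_edges

# Neighbors of node 0 are csr_col_ind[csr_row_ptr[0]:csr_row_ptr[1]] -> tensor([1, 2])
print(csr_col_ind[csr_row_ptr[0]:csr_row_ptr[1]])
```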