RMM  23.12
RAPIDS Memory Manager
rmm::mr::cuda_async_memory_resource Class Reference (final)

device_memory_resource derived class that uses cudaMallocAsync/cudaFreeAsync for allocation/deallocation. More...

#include <cuda_async_memory_resource.hpp>

Inheritance diagram for rmm::mr::cuda_async_memory_resource: derives from rmm::mr::device_memory_resource.

Public Types

enum class  allocation_handle_type { none = 0x0, posix_file_descriptor = 0x1, win32 = 0x2, win32_kmt = 0x4 }
 Flags for specifying memory allocation handle types. More...
 

Public Member Functions

 cuda_async_memory_resource (thrust::optional< std::size_t > initial_pool_size={}, thrust::optional< std::size_t > release_threshold={}, thrust::optional< allocation_handle_type > export_handle_type={})
 Constructs a cuda_async_memory_resource with the optionally specified initial pool size and release threshold. More...
 
 cuda_async_memory_resource (cuda_async_memory_resource const &)=delete
 
 cuda_async_memory_resource (cuda_async_memory_resource &&)=delete
 
cuda_async_memory_resource & operator= (cuda_async_memory_resource const &)=delete
 
cuda_async_memory_resource & operator= (cuda_async_memory_resource &&)=delete
 
bool supports_streams () const noexcept override
 Query whether the resource supports use of non-null CUDA streams for allocation/deallocation. cuda_async_memory_resource supports streams. More...
 
bool supports_get_mem_info () const noexcept override
 Query whether the resource supports the get_mem_info API. More...
 
- Public Member Functions inherited from rmm::mr::device_memory_resource
 device_memory_resource (device_memory_resource const &)=default
 Default copy constructor.
 
 device_memory_resource (device_memory_resource &&) noexcept=default
 Default move constructor.
 
device_memory_resource & operator= (device_memory_resource const &)=default
 Default copy assignment operator. More...
 
device_memory_resource & operator= (device_memory_resource &&) noexcept=default
 Default move assignment operator. More...
 
void * allocate (std::size_t bytes, cuda_stream_view stream=cuda_stream_view{})
 Allocates memory of size at least bytes. More...
 
void deallocate (void *ptr, std::size_t bytes, cuda_stream_view stream=cuda_stream_view{})
 Deallocate memory pointed to by ptr. More...
 
bool is_equal (device_memory_resource const &other) const noexcept
 Compare this resource to another. More...
 
void * allocate (std::size_t bytes, std::size_t alignment)
 Allocates memory of size at least bytes. More...
 
void deallocate (void *ptr, std::size_t bytes, std::size_t alignment)
 Deallocate memory pointed to by ptr. More...
 
void * allocate_async (std::size_t bytes, std::size_t alignment, cuda_stream_view stream)
 Allocates memory of size at least bytes. More...
 
void * allocate_async (std::size_t bytes, cuda_stream_view stream)
 Allocates memory of size at least bytes. More...
 
void deallocate_async (void *ptr, std::size_t bytes, std::size_t alignment, cuda_stream_view stream)
 Deallocate memory pointed to by ptr. More...
 
void deallocate_async (void *ptr, std::size_t bytes, cuda_stream_view stream)
 Deallocate memory pointed to by ptr. More...
 
bool operator== (device_memory_resource const &other) const noexcept
 Comparison operator with another device_memory_resource. More...
 
bool operator!= (device_memory_resource const &other) const noexcept
 Comparison operator with another device_memory_resource. More...
 
std::pair< std::size_t, std::size_t > get_mem_info (cuda_stream_view stream) const
 Queries the amount of free and total memory for the resource. More...
 

Detailed Description

device_memory_resource derived class that uses cudaMallocAsync/cudaFreeAsync for allocation/deallocation.
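A minimal usage sketch (not part of the generated reference; the header paths are RMM's standard include locations and are assumed here): construct the resource and install it as the current device resource so that subsequent RMM allocations draw from the cudaMallocAsync pool.

#include <rmm/mr/device/cuda_async_memory_resource.hpp>
#include <rmm/mr/device/per_device_resource.hpp>

int main()
{
  // Pool-backed resource that allocates via cudaMallocAsync/cudaFreeAsync.
  rmm::mr::cuda_async_memory_resource mr{};

  // Route subsequent RMM device allocations (e.g. rmm::device_buffer,
  // rmm::device_uvector) through this resource.
  rmm::mr::set_current_device_resource(&mr);
  return 0;
}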

Member Enumeration Documentation

◆ allocation_handle_type

Flags for specifying memory allocation handle types.

Note
These values are exact copies from cudaMemAllocationHandleType. We need to define our own enum here because the earliest CUDA runtime version that supports asynchronous memory pools (CUDA 11.2) did not support these flags, so we need a placeholder that can be used consistently in the constructor of cuda_async_memory_resource with all versions of CUDA >= 11.2. See the cudaMemAllocationHandleType docs at https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__TYPES.html
Enumerator
none 

Does not allow any export mechanism.

posix_file_descriptor 

Allows a file descriptor to be used for exporting. Permitted only on POSIX systems.

win32 

Allows a Win32 NT handle to be used for exporting. (HANDLE)

win32_kmt 

Allows a Win32 KMT handle to be used for exporting. (D3DKMT_HANDLE)
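A sketch of how the export handle flag might be passed to the constructor, assuming a POSIX platform: requesting posix_file_descriptor lets allocations from the pool be exported over file descriptors for interprocess communication.

#include <rmm/mr/device/cuda_async_memory_resource.hpp>

using handle_type = rmm::mr::cuda_async_memory_resource::allocation_handle_type;

// Pool whose allocations can be exported via POSIX file descriptors (IPC).
// Initial pool size and release threshold are left at their defaults.
rmm::mr::cuda_async_memory_resource ipc_mr{
    /*initial_pool_size=*/{},
    /*release_threshold=*/{},
    /*export_handle_type=*/handle_type::posix_file_descriptor};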

Constructor & Destructor Documentation

◆ cuda_async_memory_resource()

rmm::mr::cuda_async_memory_resource::cuda_async_memory_resource (
        thrust::optional< std::size_t > initial_pool_size = {},
        thrust::optional< std::size_t > release_threshold = {},
        thrust::optional< allocation_handle_type > export_handle_type = {} )
inline

Constructs a cuda_async_memory_resource with the optionally specified initial pool size and release threshold.

If the pool size grows beyond the release threshold, unused memory held by the pool will be released at the next synchronization event.

Exceptions
    rmm::logic_error    if the CUDA version does not support cudaMallocAsync

Parameters
    initial_pool_size    Optional initial size in bytes of the pool. If no value is provided, the initial pool size is half of the available GPU memory.
    release_threshold    Optional release threshold size in bytes of the pool. If no value is provided, the release threshold is set to the total amount of memory on the current device.
    export_handle_type    Optional handle type specifying how allocations from this resource should be exported for interprocess communication (IPC). Defaults to none (cudaMemHandleTypeNone), i.e. no IPC support.
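A constructor sketch with explicit sizes (the 1 GiB / 4 GiB values are illustrative assumptions, not recommendations): the pool starts at 1 GiB and releases unused memory back to the driver at synchronization events only once it has grown past 4 GiB.

#include <cstddef>
#include <rmm/mr/device/cuda_async_memory_resource.hpp>

constexpr std::size_t GiB = std::size_t{1} << 30;

// Initial pool of 1 GiB; unused memory above the 4 GiB release threshold is
// returned to the driver at the next synchronization event.
rmm::mr::cuda_async_memory_resource mr{
    /*initial_pool_size=*/1 * GiB,
    /*release_threshold=*/4 * GiB};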

Member Function Documentation

◆ supports_get_mem_info()

bool rmm::mr::cuda_async_memory_resource::supports_get_mem_info ( ) const noexcept
inline, override, virtual

Query whether the resource supports the get_mem_info API.

Returns
false

Implements rmm::mr::device_memory_resource.

◆ supports_streams()

bool rmm::mr::cuda_async_memory_resource::supports_streams ( ) const noexcept
inline, override, virtual

Query whether the resource supports use of non-null CUDA streams for allocation/deallocation. cuda_async_memory_resource supports streams.

Returns
true

Implements rmm::mr::device_memory_resource.
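Because the resource supports streams, allocation and deallocation can be issued in stream order through the inherited allocate_async/deallocate_async interface. A minimal sketch (header paths and the 1 MiB size are assumptions for illustration):

#include <cstddef>
#include <rmm/cuda_stream.hpp>
#include <rmm/mr/device/cuda_async_memory_resource.hpp>

int main()
{
  rmm::mr::cuda_async_memory_resource mr{};
  rmm::cuda_stream stream;  // owning CUDA stream wrapper

  // Stream-ordered allocation and deallocation of 1 MiB.
  void* ptr = mr.allocate_async(std::size_t{1} << 20, stream.view());
  mr.deallocate_async(ptr, std::size_t{1} << 20, stream.view());

  stream.synchronize();
  return 0;
}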


The documentation for this class was generated from the following file: cuda_async_memory_resource.hpp