Attention
The vector search and clustering algorithms in RAFT are being migrated to a new library dedicated to vector search called cuVS. We will continue to support the vector search algorithms in RAFT during this move, but will no longer update them after the RAPIDS 24.06 (June) release. We plan to complete the migration by the RAPIDS 24.08 (August) release.
Common#
This page provides pylibraft class references for the publicly exposed elements of the pylibraft.common package.
Basic Vocabulary#
- class pylibraft.common.DeviceResources#
DeviceResources is a lightweight python wrapper around the corresponding C++ class of device_resources exposed by RAFT’s C++ interface. Refer to the header file raft/core/device_resources.hpp for interface-level details of this struct.
- Parameters:
- stream : optional
Stream to use for ordering CUDA instructions. Accepts pylibraft.common.Stream() or uintptr_t (cudaStream_t).
Examples
Basic usage:
>>> from pylibraft.common import Stream, DeviceResources
>>> stream = Stream()
>>> handle = DeviceResources(stream)
>>>
>>> # call algos here
>>>
>>> # final sync of all work launched in the stream of this handle
>>> # this is the same as `raft.cuda.Stream.sync()` call, but safer in case
>>> # the default stream inside the `device_resources` is being used
>>> handle.sync()
>>> del handle  # optional!
Using a CuPy stream with RAFT device_resources:
>>> import cupy
>>> from pylibraft.common import DeviceResources
>>>
>>> cupy_stream = cupy.cuda.Stream()
>>> handle = DeviceResources(stream=cupy_stream.ptr)
Using a RAFT stream with CuPy ExternalStream:
>>> import cupy
>>> from pylibraft.common import Stream
>>>
>>> raft_stream = Stream()
>>> cupy_stream = cupy.cuda.ExternalStream(raft_stream.get_ptr())
Methods
- getHandle(self): Return the pointer to the underlying raft::device_resources instance as a size_t.
- sync(self): Issues a sync on the stream set for this instance.
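For illustration, a minimal sketch using both methods together; since no stream is passed, the handle's default stream is used:
>>> from pylibraft.common import DeviceResources
>>> handle = DeviceResources()
>>> ptr = handle.getHandle()  # size_t address of the raft::device_resources
>>> handle.sync()             # block until work on the handle's stream finishes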
- class pylibraft.common.Stream#
Stream represents a thin wrapper around cudaStream_t and its operations.
Examples
>>> from pylibraft.common.cuda import Stream
>>> stream = Stream()
>>> stream.sync()
>>> del stream  # optional!
Methods
- get_ptr(self): Return the uintptr_t pointer of the underlying cudaStream_t handle.
- sync(self): Synchronize on the CUDA stream owned by this object.
- class pylibraft.common.device_ndarray(np_ndarray)[source]#
pylibraft.common.device_ndarray is meant to be a very lightweight __cuda_array_interface__ wrapper around a numpy.ndarray.
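A minimal round-trip sketch, assuming only the constructor and the copy_to_host() method described on this page:
>>> import numpy as np
>>> from pylibraft.common import device_ndarray
>>>
>>> host = np.random.random_sample((10, 3)).astype(np.float32)
>>> dev = device_ndarray(host)   # copies the host data to device memory
>>> dev.shape
(10, 3)
>>> back = dev.copy_to_host()    # new numpy.ndarray with the same contents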
- Attributes:
- c_contiguous: Is the current device_ndarray laid out in row-major format?
- dtype: Datatype of the current device_ndarray instance.
- f_contiguous: Is the current device_ndarray laid out in column-major format?
- shape: Shape of the current device_ndarray instance.
- strides: Strides of the current device_ndarray instance.
Methods
- property c_contiguous#
Is the current device_ndarray laid out in row-major format?
- copy_to_host()[source]#
Returns a new numpy.ndarray object on host with the current contents of this device_ndarray
- property dtype#
Datatype of the current device_ndarray instance
- classmethod empty(shape, dtype=<class 'numpy.float32'>, order='C')[source]#
Return a new device_ndarray of given shape and type, without initializing entries.
- Parameters:
- shape : int or tuple of int
Shape of the empty array, e.g., (2, 3) or 2.
- dtype : data-type, optional
Desired output data-type for the array, e.g., numpy.int8. Default is numpy.float32.
- order : {‘C’, ‘F’}, optional (default: ‘C’)
Whether to store multi-dimensional data in row-major (C-style) or column-major (Fortran-style) order in memory.
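A short sketch of allocating an uninitialized device array with empty(), using only the parameters listed above:
>>> import numpy as np
>>> from pylibraft.common import device_ndarray
>>>
>>> out = device_ndarray.empty((4, 8), dtype=np.float32, order='C')
>>> out.shape
(4, 8)
>>> out.c_contiguous
True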
- property f_contiguous#
Is the current device_ndarray laid out in column-major format?
- property shape#
Shape of the current device_ndarray instance
- property strides#
Strides of the current device_ndarray instance
Interruptible#
- pylibraft.common.interruptible.cuda_interruptible()[source]#
Temporarily install a keyboard interrupt handler (Ctrl+C) that cancels the enclosed interruptible C++ thread.
Use this on a long-running C++ function imported via Cython:
>>> with cuda_interruptible():
...     my_long_running_function(...)
It’s also recommended to release the GIL during the call, to make sure the handler has a chance to run:
>>> with cuda_interruptible():
...     with nogil:
...         my_long_running_function(...)
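Putting it together, a runnable Python-level sketch; pylibraft.distance.pairwise_distance stands in here for any long-running RAFT call, and its availability in your build is an assumption of this example:
>>> import numpy as np
>>> from pylibraft.common import device_ndarray
>>> from pylibraft.common.interruptible import cuda_interruptible
>>> from pylibraft.distance import pairwise_distance
>>>
>>> x = device_ndarray(np.random.random_sample((1000, 64)).astype(np.float32))
>>> with cuda_interruptible():   # Ctrl+C cancels the interruptible work
...     dists = pairwise_distance(x, x, metric="euclidean")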