tl;dr: Almost certainly no advantage.
cudaMemcpyDefault was added, IIRC, when GPUs became capable of easily identifying the memory space by inspecting the address ("unified virtual addressing"). Before that, you had to specify the direction. See, for example, the CUDA 3 documentation, accessible here. Look for cudaMemcpyKind in the API reference - no Default, just H2H, H2D, D2H and D2D.
When this changed, I guess it made sense to nVIDIA not to overload the function or name it differently, but just to add a different constant value for the new capability.
I'm not 100% certain there's no difference; it's just very reasonable, and speaking from anecdotal personal experience, I've not seen any advantage/difference. Certainly the copying is not faster.
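To make that concrete, here's a minimal sketch (buffer names and sizes are just illustrative): on a UVA-capable system, the explicit kind and cudaMemcpyDefault are expected to perform the same copy.

#include <cuda_runtime.h>

int main() {
    const size_t n = 1 << 20;
    float *h = nullptr, *d = nullptr;
    cudaMallocHost(&h, n * sizeof(float));   // pinned host buffer
    cudaMalloc(&d, n * sizeof(float));       // device buffer

    // Explicit direction: host-to-device
    cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyHostToDevice);

    // Direction inferred from the pointer values (requires UVA);
    // no observable difference in behavior or speed expected
    cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyDefault);

    cudaFree(d);
    cudaFreeHost(h);
    return 0;
}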
[...] Passing cudaMemcpyDefault is recommended, in which case the type of transfer is inferred from the pointer values. However, cudaMemcpyDefault is only allowed on systems that support unified virtual addressing. [...]
Therefore, if you have a GPU that supports unified virtual addressing, use cudaMemcpyDefault; otherwise you have no option but to be explicit.
You can query whether your system supports it with cudaGetDeviceProperties(), checking the device property cudaDeviceProp::unifiedAddressing, as in the sketch below.
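A minimal sketch of that check (the device index and messages are just for illustration):

#include <cuda_runtime.h>
#include <cstdio>

int main() {
    int dev = 0;
    cudaGetDevice(&dev);

    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, dev);

    if (prop.unifiedAddressing) {
        printf("Device %d supports UVA: cudaMemcpyDefault is allowed.\n", dev);
    } else {
        printf("Device %d lacks UVA: specify the copy direction explicitly.\n", dev);
    }
    return 0;
}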