Here is my nvidia-smi output:

Fri Aug  2 23:52:39 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 430.30       Driver Version: 430.30       CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla M60           Off  | 00000000:00:1E.0 Off |                    0 |
| N/A   28C    P8    14W / 150W |    141MiB /  7618MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      3255      G   /usr/lib/xorg/Xorg                            57MiB |
|    0      3286      G   /usr/bin/gnome-shell                          81MiB |
+-----------------------------------------------------------------------------+

When you run XShmGetImage(), does it give you a pointer into GPU memory or into host memory?

If it is GPU memory, I assume you can perform other operations on the NVIDIA card with that data, such as H.264-encoding it.

Is there a way to copy memory from one GPU memory block to another?

I am using the NVENC libraries.
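
For concreteness, by "copy from one GPU memory block to another" I mean something like the CUDA runtime's device-to-device copy. A minimal sketch (the 1080p BGRA frame size is just a placeholder):

    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void) {
        const size_t size = (size_t)1920 * 1080 * 4;  /* one 1080p BGRA frame */
        void *src = NULL, *dst = NULL;
        cudaMalloc(&src, size);
        cudaMalloc(&dst, size);
        cudaMemset(src, 0x7f, size);              /* fill the source buffer */

        /* Device-to-device copy: the data never leaves the GPU. */
        cudaMemcpy(dst, src, size, cudaMemcpyDeviceToDevice);

        printf("status: %s\n", cudaGetErrorString(cudaGetLastError()));
        cudaFree(src);
        cudaFree(dst);
        return 0;
    }

That part I know works (build with nvcc); what I am unsure about is whether the XShmGetImage() buffer is on the device to begin with.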

Reading the MIT Shared Memory (MIT-SHM) extension's documentation:

The next step is to create the shared memory segment. This is best done after the creation of the XImage, since you need to make use of the information in that XImage to know how much memory to allocate. To create the segment, you need a call like:

shminfo.shmid = shmget(IPC_PRIVATE, image->bytes_per_line * image->height, IPC_CREAT|0777);

This implies the extension regards "shared memory" as "whatever shmget (or equivalent) returns". Since shmget cannot allocate GPU memory, my answer is that the XImage lives in host memory, not device memory.
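
For reference, the full sequence the documentation describes looks roughly like the sketch below (error handling trimmed). Note that every byte of the image buffer comes from the System V shm calls, i.e. plain host RAM; there is no device pointer anywhere in this path:

    #include <stdio.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>
    #include <X11/Xlib.h>
    #include <X11/Xutil.h>
    #include <X11/extensions/XShm.h>

    int main(void) {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy || !XShmQueryExtension(dpy))
            return 1;
        Window root = DefaultRootWindow(dpy);
        XWindowAttributes attr;
        XGetWindowAttributes(dpy, root, &attr);

        /* Create the XImage first so bytes_per_line/height are known. */
        XShmSegmentInfo shminfo;
        XImage *image = XShmCreateImage(dpy, attr.visual, attr.depth, ZPixmap,
                                        NULL, &shminfo, attr.width, attr.height);

        /* System V shared memory: mapped by the kernel into ordinary host RAM. */
        shminfo.shmid = shmget(IPC_PRIVATE,
                               image->bytes_per_line * image->height,
                               IPC_CREAT | 0777);
        shminfo.shmaddr = image->data = shmat(shminfo.shmid, NULL, 0);
        shminfo.readOnly = False;
        XShmAttach(dpy, &shminfo);

        /* The X server copies the frame INTO the segment; image->data is
           an ordinary CPU pointer. */
        XShmGetImage(dpy, root, image, 0, 0, AllPlanes);
        printf("top-left pixel, first byte: 0x%02x\n",
               (unsigned char)image->data[0]);

        XShmDetach(dpy, &shminfo);
        XDestroyImage(image);
        shmdt(shminfo.shmaddr);
        shmctl(shminfo.shmid, IPC_RMID, NULL);
        XCloseDisplay(dpy);
        return 0;
    }

Build with cc example.c -lX11 -lXext. The server-side rendering may well happen on the GPU, but what XShmGetImage() hands you is a copy in that shared segment.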

Ouch, so it seems X11 uses the GPU, but these APIs create internal host-memory buffers :(. I wonder whether you would have to use low-level APIs like the Direct Rendering Manager (DRM) and talk to the kernel to get at the actual GPU memory backing the frame that holds the GUI image. – Suhail Doshi Aug 3, 2019 at 16:58
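
In practice that conclusion means a per-frame host-to-device upload before NVENC (or anything else CUDA-side) can touch the pixels. A minimal sketch using the CUDA runtime; host_frame and device_frame are hypothetical names for the captured XImage buffer and a previously allocated device buffer:

    #include <stddef.h>
    #include <cuda_runtime.h>

    /* Upload one captured frame (host RAM, e.g. image->data from
       XShmGetImage) into a device buffer for GPU-side consumption. */
    cudaError_t upload_frame(const void *host_frame, void *device_frame,
                             size_t frame_bytes)
    {
        /* Pageable copy for simplicity; page-locking the host buffer with
           cudaHostRegister() would make this faster and allow async copies. */
        return cudaMemcpy(device_frame, host_frame, frame_bytes,
                          cudaMemcpyHostToDevice);
    }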
