I just read “Google Continues Working On “Magma” For Mesa Cross-Platform System Call Interface” on Phoronix and didn’t get it. That made me realise my knowledge and understanding of these things barely exists. I did write an MS Paint clone on Linux in C++ a really long time ago and the entire thing was done with OpenGL (it looked like crap), but since then… nothing.

So my understanding is that the graphics card (or the CPU if there’s no graphics card) writes to a component connected to the screen, and every cycle (every 1/60 of a second at 60Hz) the contents are sent to or read by the screen. OpenGL provided a common interface to do so, but it has been considered outdated for… a while now and has been replaced by Vulkan. Then there are libraries either built on top of or parallel to OpenGL. Vulkan can be parallel to OpenGL or use OpenGL if that’s the only thing supported, IIRC.
However, I’m not sure if OpenGL is implemented at the hardware level (on the graphics card), software level, or both.
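
To make my mental model concrete, this is roughly the shape of the OpenGL code I remember writing: draw into a buffer, then hand it over to be scanned out every refresh. A minimal sketch, assuming GLFW and desktop OpenGL are available (not my original code):

```cpp
// Minimal render loop: draw into a back buffer, then swap so the display
// engine can scan it out to the screen on the next refresh cycle.
// Build with something like: g++ demo.cpp -lglfw -lGL
#include <GLFW/glfw3.h>

int main() {
    if (!glfwInit()) return 1;
    GLFWwindow* window = glfwCreateWindow(640, 480, "scanout demo", nullptr, nullptr);
    if (!window) { glfwTerminate(); return 1; }
    glfwMakeContextCurrent(window);
    glfwSwapInterval(1);  // wait for vsync: one swap per refresh (1/60 s at 60 Hz)

    while (!glfwWindowShouldClose(window)) {
        glClearColor(0.1f, 0.2f, 0.3f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);   // "draw" into the back buffer
        glfwSwapBuffers(window);        // hand it to whatever the screen reads from
        glfwPollEvents();
    }
    glfwTerminate();
    return 0;
}
```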

Furthermore, I don’t understand where Magma, Meta, and Mesa come in.

Maybe my core understanding is wrong or just outdated. I can’t tell. Can anybody explain?

Anti Commercial-AI license

  • onlinepersona@programming.devOP · 4 days ago
    Thanks, this is the answer to the question I was just asking! (What is Magma trying to solve?)

    Can you dive a little deeper into how Magma solves this? Don’t VMs have a virtual GPU with a driver for that GPU in the guest that, I imagine, forwards the graphics instructions and routines to the driver on the host? (Possibly even translating to OpenGL or VK, which is then handled by Mesa?) Where in that does Magma come in? My guess is that Magma sits in the guest as the graphics driver and on the host before Mesa, but I know little about virtualisation outside of containers.

    Also, what are these “native contexts” you speak of? Are they like the virtualisation extensions on CPUs that VMs can directly use?

    Anti Commercial-AI license

    • vividspecter@aussie.zone · 4 days ago
      I’m not really an expert, but I’ll try and answer your questions one by one.

      Don’t VMs have a virtual GPU with a driver for that GPU in the guest that, I imagine, forwards the graphics instructions and routines to the driver on the host?

      Yes, this is what VirGL (OGL) and Venus (Vulkan) do. The latter works pretty well because Vulkan is lower level and better represents the underlying hardware, so there is less of a performance overhead. However, this does mean you need to translate all APIs one by one: not just OGL and Vulkan, but also hardware decoding and encoding of video, and compute, so it’s a fair amount of work.
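
      Conceptually, the translation approach looks something like the sketch below. This is a made-up, minimal illustration of per-call forwarding; the command names and layout are invented, not the real VirGL/Venus wire format:

      ```cpp
      // Hypothetical API-level forwarding (the VirGL/Venus idea): every guest
      // API call is encoded into a command stream, shipped to the host (over
      // virtio-gpu in practice), decoded there, and replayed on the real driver.
      #include <cstdint>
      #include <cstdio>
      #include <cstring>
      #include <vector>

      enum class GuestCmd : uint32_t { CreateBuffer = 1, SubmitWork = 2 };

      // Guest side: turn one API call into bytes for the host to decode.
      std::vector<uint8_t> encode_create_buffer(uint64_t size_bytes, uint32_t usage_flags) {
          std::vector<uint8_t> stream(sizeof(GuestCmd) + sizeof(size_bytes) + sizeof(usage_flags));
          uint8_t* p = stream.data();
          GuestCmd cmd = GuestCmd::CreateBuffer;
          std::memcpy(p, &cmd, sizeof(cmd));               p += sizeof(cmd);
          std::memcpy(p, &size_bytes, sizeof(size_bytes)); p += sizeof(size_bytes);
          std::memcpy(p, &usage_flags, sizeof(usage_flags));
          return stream;
      }

      int main() {
          // The host would decode this and call e.g. vkCreateBuffer on its own driver.
          auto stream = encode_create_buffer(4096, /*usage_flags=*/0x1);
          std::printf("encoded %zu bytes for the host to decode\n", stream.size());
          return 0;
      }
      ```

      And you need an encoder/decoder pair like that for every entry point of every API you want to support, which is why it adds up to a lot of work.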

      Native contexts, in contrast, are basically the “real” host driver used in the guest, and they essentially pass through everything 1:1 to the host driver where the actual work is carried out. They aren’t really like virtualisation extensions as the hardware doesn’t need to support it AFAICT, just the drivers on both the host and the guest. There’s a presentation and slides on native contexts vs virgl/venus which may be helpful.
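
      In contrast, a native context forwards whatever the real userspace driver produced as an opaque blob, with no per-API translation. Again a made-up sketch, just to show the difference in shape:

      ```cpp
      // Hypothetical native-context style passthrough: the guest runs the real
      // userspace driver, and its hardware-specific command buffers are passed
      // through 1:1; the transport never interprets them.
      #include <cstdint>
      #include <cstdio>
      #include <vector>

      struct OpaqueSubmission {
          std::vector<uint8_t> hw_commands;  // GPU-specific, produced by the real driver
      };

      // Stand-in for the virtio-gpu transport: no translation, just forwarding.
      void forward_to_host(const OpaqueSubmission& s) {
          std::printf("forwarding %zu bytes 1:1 to the host kernel driver\n",
                      s.hw_commands.size());
      }

      int main() {
          OpaqueSubmission s{std::vector<uint8_t>(256, 0xAB)};  // pretend command buffer
          forward_to_host(s);
          return 0;
      }
      ```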

      Where in that does Magma come in? My guess is that Magma sits in the guest as the graphics driver and on the host before Mesa, but I know little about virtualisation outside of containers.

      To be honest, I don’t fully understand the details either, but your interpretation seems more or less correct. From looking at the diagram in the MR, it seems to be a layer between the userspace graphics driver and the native context (virtgpu) layer on the guest side, which in turn communicates with another Magma layer on the host, which finally passes data to the host GPU driver. That host driver may be Mesa, but it could also be another driver, as long as it implements Magma.

      The broader idea is to abstract away implementation details: applications and userspace drivers don’t need to know how the native context is implemented (other than interfacing with Magma), and the native context layer doesn’t need to know which host GPU driver is being used; it just needs to interface with Magma.
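
      If it helps, here’s a very rough sketch of that abstraction idea in code. This is not the real Magma API; the interface, names, and backend are invented purely to show where the boundary sits:

      ```cpp
      // Hypothetical "Magma-like" boundary: the userspace driver only talks to
      // this interface, and the backend behind it can be a virtgpu native
      // context, Mesa on the host, or anything else that implements it.
      #include <cstdint>
      #include <cstdio>
      #include <memory>
      #include <vector>

      class GpuTransport {
      public:
          virtual ~GpuTransport() = default;
          virtual uint64_t create_buffer(uint64_t size_bytes) = 0;
          virtual void submit(const std::vector<uint8_t>& commands) = 0;
      };

      // One possible backend: forward everything over the virtgpu native context.
      // The caller can't tell this apart from a backend that talks to the host
      // GPU driver directly.
      class VirtGpuTransport : public GpuTransport {
      public:
          uint64_t create_buffer(uint64_t size_bytes) override {
              std::printf("guest: asking host for a %llu-byte buffer\n",
                          static_cast<unsigned long long>(size_bytes));
              return next_handle_++;
          }
          void submit(const std::vector<uint8_t>& commands) override {
              std::printf("guest: passing %zu command bytes through to the host\n",
                          commands.size());
          }
      private:
          uint64_t next_handle_ = 1;
      };

      int main() {
          std::unique_ptr<GpuTransport> gpu = std::make_unique<VirtGpuTransport>();
          uint64_t handle = gpu->create_buffer(4096);
          gpu->submit(std::vector<uint8_t>(128, 0));
          std::printf("got buffer handle %llu without knowing the host driver\n",
                      static_cast<unsigned long long>(handle));
          return 0;
      }
      ```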