I recently bought a new computer, and, after again trying Windows 11 for a bit, I decided I wanted to keep using OpenSUSE.

Probably unpopular here: I enjoy screwing around with local LLM models. I used LM Studio on my old computer (on which I had also installed OpenSUSE Leap). I also tested it on my new computer in Windows 11, and it worked very nicely. Now I’m trying on my new computer with OpenSUSE Leap 16, and it doesn’t work at all.

Specifically: no runtimes or engines are present, and my hardware isn’t recognised at all: not my GPU, not my CPU, nothing.

I’m thinking it’s a driver issue. I’ve looked around quite a bit, and also looked up (what seems to me) the most important error messages I got when running the AppImage from the console:

    [BackendManager] Surveying hardware with backends with options: {"type":"newAndSelected"}
    [BackendManager] Surveying new engine '[email protected]'
    [ProcessForkingProvider][NodeProcessForker] Spawned process 13407
    [ProcessForkingProvider][NodeProcessForker] Exited process 13407
    21:17:29.644 › Failed to survey hardware with engine '[email protected]': LMSCore load lib failed - child process with PID 13407 exited with code 127
    [BackendManager] Survey for engine '[email protected]' took 9.47ms
    [BackendManager] Surveying new engine '[email protected]'
    [ProcessForkingProvider][NodeProcessForker] Spawned process 13408
    [ProcessForkingProvider][NodeProcessForker] Exited process 13408
    21:17:29.648 › Failed to survey hardware with engine '[email protected]': LMSCore load lib failed - child process with PID 13408 exited with code 127
    [BackendManager] Survey for engine '[email protected]' took 3.70ms
    [BackendManager] Surveying new engine '[email protected]'
    [ProcessForkingProvider][NodeProcessForker] Spawned process 13409
    [ProcessForkingProvider][NodeProcessForker] Exited process 13409
    21:17:29.651 › Failed to survey hardware with engine '[email protected]': LMSCore load lib failed - child process with PID 13409 exited with code 127
    [BackendManager] Survey for engine '[email protected]' took 3.57ms
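For context, exit code 127 is the conventional "command not found" / loader-failure code, so the engines presumably fail because a shared library they need can't be resolved. A sketch of where that 127 comes from and how one might track down the missing library (the extraction paths are illustrative, not LM Studio's actual layout):

```shell
# Exit code 127 conventionally means "command not found"; it also shows
# up when the dynamic loader can't start a binary (missing .so file).
# Quick demonstration of where the 127 in the log comes from:
sh -c 'some-binary-that-does-not-exist' 2>/dev/null
echo "exit code: $?"    # prints: exit code: 127

# To find which library an engine can't load, extract the AppImage and
# run ldd on the bundled backend binaries (paths under squashfs-root
# will vary; poke around after extraction):
#   ./LM-Studio.AppImage --appimage-extract
#   find squashfs-root -name '*.so*' | head
#   ldd <engine binary> | grep 'not found'
```

Any line `ldd` flags as "not found" is a library the engine expects from the system but can't see.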

This is my system with installed drivers: [screenshot not included]

I did get Ollama to work… Any thoughts?

  • moonpiedumplings@programming.dev · 6 hours ago

    I know this issue; I had a similar issue trying to get the client for krunker.io working with my Nvidia GPU. I might have the solution saved somewhere; this comment is so I can remind myself to check.

  • just_another_person@lemmy.world · 12 hours ago

    This looks like a sandboxing issue. Using the "--no-sandbox" flag has never worked with AppImages from what I remember, except for very light runtimes. Running with sudo will throw that error because the root user has no display manager running.

    Just try running the installer if you don’t want to mess around with debugging the AppImage. Check the GitHub issues for related keywords and see if others are running into the same problem; maybe it’s just a specific release, or SELinux causing it.
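    Checking for SELinux interference is non-destructive; something like this (assumes the standard tooling, and `ausearch` needs the audit package installed):

```shell
# Report the current SELinux mode, falling back gracefully if the
# tooling isn't installed at all.
if command -v getenforce >/dev/null 2>&1; then
    getenforce    # prints Enforcing, Permissive, or Disabled
else
    echo "SELinux tools not installed"
fi
# If it reports Enforcing, look for recent denials:
#   sudo ausearch -m avc -ts recent
```

    If denials show up, a temporary `sudo setenforce 0` (reverted afterwards) is a quick way to confirm SELinux is the cause.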

      • just_another_person@lemmy.world · 10 hours ago (edited)

        They have a simple bash installer from what I see. You can also install everything via pip. A couple of quick commands.

        That bug report mentions a few versions, so maybe just go back to whatever version was working on your other machine.

  • Ⓜ3️⃣3️⃣ 🌌@lemmy.zip · 14 hours ago

    I use LM Studio too, from the official AppImage.

    Never managed to get the Nvidia GPU working, probably because the Nvidia drivers are a pain to install properly (secure boot, drivers, kernel, display … everything must be perfect or it doesn’t work).

    But the CPU is just enough for local models if you are a bit patient and have plenty of RAM.

    It is very unusual for LM Studio to fail to detect your CPU and system memory. I would start by looking for excessive restrictions affecting LM Studio itself (AppImage? Native? Something with SELinux or system hardening?).

    • Don Antonio Magino@feddit.nl (OP) · 13 hours ago

      Thanks for the tips. How do I go about doing this? What I’ve tried is running LM Studio from the console with --no-sandbox; no idea if that has anything to do with what you’re referring to.

      I’ve also tried running it with sudo, which gives this error message:

      [15557:0409/215138.000614:ERROR:ui/ozone/platform/x11/ozone_platform_x11.cc:249] Missing X server or $DISPLAY
      [15557:0409/215138.000647:ERROR:ui/aura/env.cc:257] The platform failed to initialize. Exiting.
      Segmentatiefout

      (That last line is Dutch for "segmentation fault".)
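      That "Missing X server or $DISPLAY" error fits what was said above: sudo sanitizes the session environment, so the root process never sees the display variables. A minimal demonstration using `env -i` to simulate a cleaned environment:

```shell
# sudo normally strips DISPLAY/WAYLAND_DISPLAY from the environment.
# env -i reproduces that cleaned environment, which is why Chromium's
# X11 platform fails to initialize under sudo:
env -i /bin/sh -c 'echo "DISPLAY=[$DISPLAY]"'
# prints: DISPLAY=[]
```

      So running a GUI AppImage under plain sudo is expected to fail regardless of the sandbox issue.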

    • Don Antonio Magino@feddit.nl (OP) · 11 hours ago

      It detects my hardware at least, but for whatever reason the Hub is empty and I can’t download the default Jan model… But it works when I import the models manually.
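      For anyone else going the manual route: importing usually just means copying the .gguf file into the directory the app scans. A sketch; the path below is an assumption (Jan shows its actual data folder in its settings), so adjust accordingly:

```shell
# Copy a downloaded GGUF model into the app's model directory.
# MODELS_DIR is an *assumed* default -- check the app's settings for
# the real data folder on your system.
MODELS_DIR="$HOME/jan/models/my-model"
mkdir -p "$MODELS_DIR"
cp ~/Downloads/my-model.Q4_K_M.gguf "$MODELS_DIR"/ 2>/dev/null \
  || echo "place your .gguf here: $MODELS_DIR"
```

      After a restart (or a rescan in the settings) the model should show up in the local list.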