[… Lumo] is the least open “open” AI assistant that we have ever added to our index. One of our reasons for inclusion is an openness claim: a model provider that calls their system “open” or “open source” or a variation on that. Lumo is “open” in that sense (Proton calls it open source) but in no other way. Nothing about it is currently open. […]

  • Sonalder@lemmy.ml · 8 days ago

    I was really pissed off by Proton’s marketing on this one. The service is so closed and opaque; only the clients are open. This is clearly open-washing marketing!

    • Screen_Shatter@lemmy.world · 8 days ago

      I know I should just move from Proton to another service, but dammit, I finally finished transitioning to their email service. I’m so tired of having that trust betrayed.

    • Alphane Moon@lemmy.world · 8 days ago (edited)

      They deleted my message when I said that Lumo is not actually private (I didn’t know this community was company-run).

      No cloud LLM can be private (unlike email or even cloud storage); only a locally run LLM can be truly private.
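
      To make “locally run” concrete, here’s a minimal sketch using the llama-cpp-python bindings with open weights you’ve downloaded yourself (the model path is just a placeholder); nothing in it touches the network:

      ```python
      # Minimal local-inference sketch: assumes llama-cpp-python is installed
      # and an open-weight GGUF file has already been downloaded manually.
      from llama_cpp import Llama

      # Placeholder path -- any locally stored GGUF model works here.
      llm = Llama(model_path="./models/open-weights.gguf", n_ctx=2048)

      # The prompt and the response never leave this machine.
      out = llm("Why is local inference private?", max_tokens=128)
      print(out["choices"][0]["text"])
      ```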

  • good_hunter@lemmy.world · 8 days ago

    At the moment I’ve been pretty satisfied with the LLMs that Kagi offers in their standard tier for my low-requirement queries. Supposedly most of the APIs they use go to alternative servers that host instances of the models, some of which claim they don’t use your data. I’m not aware to what extent open source factors into those models, but I have little interest in Lumo for the time being. Can someone explain to me why open source matters for LLMs?

    • Sonalder@lemmy.ml · 8 days ago (edited)

      LLMs are becoming widely used, and whoever controls LLM training controls people’s lives to some extent. It’s crazy how people rely more and more on ChatGPT for convenience; some even think it is actually intelligent/smart (it is not). LLMs have biases because training data and training recipes are not neutral. Governments and big corporations can use this to extend control and to manipulate/influence opinion, behavior, etc. They can choose how and what to censor, and which point of view on information gets presented, which changes how users perceive it. With closed data and a closed training recipe, we cannot see how the model was made or what rules it has been trained to follow.

      Most “open source” models only publish the final weights. No training data (mainly because publishing it would show they violate IP, copyright, etc., if you ask me) and no training recipe, so all you can do is host the model yourself. They slap “open source” on it and voilà… but it often isn’t, and when it truly is, you can see that it performs worse (because it hasn’t read all of Sci-Hub and Z-Library, unlike Llama and ChatGPT, for example).
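
      As a quick illustration (the repo id is only an example), you can list what a typical “open” release on the Hugging Face Hub actually ships: weight shards, tokenizer and config files, and nothing that would let you reproduce the training run.

      ```python
      # Sketch: list the files an "open" model release actually publishes.
      # Requires the huggingface_hub package; the repo id is only an example.
      from huggingface_hub import list_repo_files

      for name in list_repo_files("mistralai/Mistral-7B-v0.1"):
          print(name)

      # Expect *.safetensors weight shards plus tokenizer/config files --
      # no training data and no training recipe.
      ```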

      Edit: Clarified some sentences.

    • MHLoppy@fedia.io (OP) · 8 days ago

      Leaving this comment just shows that you haven’t read the submission lol

      The only open source code we have found is for the Lumo mobile and web apps. Proton calling the Lumo AI assistant open source based on that is a bit like Microsoft calling Windows open source just because there’s a github repository for Windows Terminal.

        • Sonalder@lemmy.ml · 8 days ago

          The Lumo apps are one thing; Lumo itself is another. The way Proton is marketing this service is truly misleading.

          As a user of Proton, I am really disappointed by the gap between their Lumo marketing and the actual reality of the service. It reminds me of how they marketed Proton Pass at launch in a way that differed from the reality. I’m starting to think it’s a bad thing to have them developing such a wide ecosystem of tools. They should focus on a narrower set and make those stand out.