Clearly the whole drama of the Pentagon making a big deal of trying to force AI companies to build autonomous AI killing machines and spy on citizens is completely manufactured.

Anthropic was always going to comply; the goal is just to create a marketing campaign portraying them as heroically resisting. All the media have been running the story of a plucky Anthropic defying the US military to defend ethical AI and protect humanity.

    • venusaur@lemmy.world · 7 days ago

      Thanks! Can you explain what you just wrote? Do you own these GPU’s? Are you in China?

      • Zedd@lemmy.dbzer0.com · 7 days ago

        No problem. My desktop has an Nvidia RTX 3050 card with 8 GB of VRAM; it’s a basic, modern-ish video card. Ollama is an open-source framework for running large language models locally. The model I’m using is Qwen 2.5 with 3 billion (3B) parameters (the parameter count is basically the size of the LLM). Docker is a program that lets you run isolated containers, basically smaller dedicated computers inside your computer.
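        For anyone wanting to try the same setup, here’s a rough sketch of the two commands involved. This assumes the official `ollama/ollama` Docker image, and `qwen2.5:3b` is my guess at the exact model tag the commenter is running:

        ```shell
        # Start the Ollama server in a container, with GPU access,
        # persistent model storage, and the API exposed on port 11434
        docker run -d --gpus=all \
          -v ollama:/root/.ollama \
          -p 11434:11434 \
          --name ollama ollama/ollama

        # Pull the ~3B-parameter Qwen 2.5 model and chat with it interactively
        docker exec -it ollama ollama run qwen2.5:3b
        ```

        The `--gpus=all` flag needs the NVIDIA Container Toolkit installed; without it, Ollama falls back to CPU, which is much slower for a 3B model.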

        I am not in China. I’m an American living in Albania. I recommended DeepSeek because it’s free, works well, and if a company is going to have the information on what you’re chatting about, it might as well be one that isn’t in the same country as you.

        • venusaur@lemmy.world · 6 days ago

          Thanks for all the info! I’d love to run a model locally, but I don’t have the money for a decent enough setup right now, though I know it’s getting close. How effective is the 3B model? Does it do the job for you, or do you feel like it’s lacking? Are requests pretty slow on that machine?