  • Rose@slrpnk.net · 2 days ago

    Yeah, I took one course where we used MongoDB. I was like “still unconvinced, but I’ll keep this in mind if I run into situations not covered by PostgreSQL.” …I’ve not run into situations not covered by PostgreSQL. Everything will be covered by PostgreSQL.

  • velxundussa@sh.itjust.works · 1 day ago

    I manage instances of both mongo and postgres at work.

    I’ll say Mongo OpsManager is pretty sweet, and HA is way easier on Mongo.

  • Fargeol@lemmy.world · 2 days ago

    “You know what ELSE everybody likes? Postgres! Have you ever met a person, you say, ‘Let’s use some Postgres,’ they say, ‘Hell no, I don’t like Postgres’? Postgres is perfect!”

    • RustyNova@lemmy.world · 2 days ago

      I 100% agree… if you don’t need portable databases. For those, everybody likes SQLite (even if it can be annoying sometimes).

      • wetbeardhairs@lemmy.dbzer0.com · 23 hours ago

        You can pry SQLite out of my cold dead hands. Because I’ll probably die of frustration while using it, thanks to the poor performance of triggers.

        • RustyNova@lemmy.world · 20 hours ago

          Tbh, trigger performance isn’t that much of a concern unless you need to write lots of data, which most use cases don’t.

          Also, try CHECK constraints instead, or even re-evaluate your schema to avoid needing triggers at all.
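
          For the record, the CHECK-constraint route looks roughly like this in SQLite (shown via Python’s sqlite3; the table and column names are made up for illustration):

```python
import sqlite3

# Minimal sketch: a CHECK constraint doing validation that might otherwise
# live in a trigger. "accounts"/"balance" are hypothetical names.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE accounts (
        id      INTEGER PRIMARY KEY,
        balance INTEGER NOT NULL CHECK (balance >= 0)
    )
""")
conn.execute("INSERT INTO accounts (balance) VALUES (100)")

rejected = False
try:
    conn.execute("UPDATE accounts SET balance = -5 WHERE id = 1")
except sqlite3.IntegrityError:
    rejected = True  # the constraint fires inline; no trigger involved
```

          The constraint is checked on every write, so there’s no per-row trigger dispatch to pay for.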

          Personally, my death would be multiple write-transaction deadlocks. Sadly, SQLite doesn’t play that well with async code, like with sqlx (Rust).

            • RustyNova@lemmy.world · 14 hours ago

              This is also part of my death, because it’s much easier not to deadlock when you are FIFO.

              Personally, I went for the nuclear option: every transaction is spawned as a tokio task, to make sure it keeps getting polled even while other futures are being polled. Coupled with a generous busy timeout (60 s) and WAL mode, it works pretty well.
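
              In Python’s sqlite3 (rather than Rust/sqlx), those two knobs look roughly like this; the path and table name are illustrative only:

```python
import os
import sqlite3
import tempfile

# Sketch of the two settings mentioned above, minus the async machinery:
# WAL journaling plus a generous busy timeout.
path = os.path.join(tempfile.mkdtemp(), "app.db")
conn = sqlite3.connect(path, timeout=60)  # wait up to 60 s on a locked DB
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]  # readers stop blocking the writer
conn.execute("CREATE TABLE IF NOT EXISTS log (id INTEGER PRIMARY KEY, msg TEXT)")
with conn:  # keep write transactions short to limit lock contention
    conn.execute("INSERT INTO log (msg) VALUES (?)", ("hello",))
```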

              I should probably also add the mutex strategy (perhaps a tokio semaphore instead?), although due to lifetimes it might be hard to write a begin() function on my DB pool wrapper.

              … Congratulations. You nerd sniped me. Time for it to go on the todo stack.

              Hyped for it too, but I wouldn’t use it until sqlx supports it. Compile-time-checked queries are just so good. I don’t use rusqlite for that reason alone (you often don’t need async SQLite anyway).

      • ksh@aussie.zone · 19 hours ago

        Is it actually any good for small personal projects? I just want someone who has used it to answer, as I’m considering putting some work into it.

        • qaz@lemmy.worldOP · 16 hours ago

          I tested it once and it didn’t really impress me. Perhaps you can try using something like Grist.

      • tyfon@sh.itjust.works · 2 days ago

        I have used LibreOffice Base and found it’s a buggy mess.

        1. Not all drivers support all functions, so if you’re wondering why some options aren’t present, it’s probably the adapter not supporting them.
        2. Error messages and help are usually empty or super generic, like ‘syntax incorrect’.
        3. The interface sometimes bugs out when an input field contains long syntax.
        4. Because of 1, it also doesn’t support all syntax from Microsoft SQL Server, MySQL, etc.

        I suggest DBeaver for any DB; it’s different, but at least it’s not a buggy mess. Or pgAdmin for PostgreSQL, or DB Browser for SQLite.

  • Rikudou_Sage@lemmings.world · 2 days ago

    Just use Mongo, it scales so well!

    Never understood why anyone chose Mongo. Though I have some funny memories of getting rid of it because it was slowing the app down sooo much.

    If you need something for storing JSONs and querying, just use ElasticSearch/OpenSearch.

    • Venator@lemmy.nz · 2 days ago

      Or add a column next to the JSON with some data extracted from it, and index that.
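
      Sketched in SQLite via Python’s sqlite3 for brevity (the table and field names are hypothetical), the extract-and-index approach looks like:

```python
import sqlite3

# Hypothetical "events" table: the raw JSON blob plus one extracted column
# that gets its own index, so lookups never have to parse every document.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE events (
        id      INTEGER PRIMARY KEY,
        doc     TEXT NOT NULL,   -- the JSON document itself
        user_id TEXT             -- extracted at insert time, indexed below
    )
""")
conn.execute("CREATE INDEX idx_events_user ON events (user_id)")

doc = '{"user_id": "u42", "action": "login"}'
conn.execute(
    "INSERT INTO events (doc, user_id) VALUES (?, json_extract(?, '$.user_id'))",
    (doc, doc),
)
rows = conn.execute("SELECT id FROM events WHERE user_id = 'u42'").fetchall()
```

      Queries filter on the indexed column, while the full document stays available in `doc`.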

    • NigelFrobisher@aussie.zone · 2 days ago

      Oh god, all the people storing massive JSON documents, and then having to lock the whole thing to modify sub-entities.

    • Eager Eagle@lemmy.world · 2 days ago (edited)

      I used OpenSearch in a recent project, but the number of annoyances with it is through the roof: the SSL cert setup, the bad defaults in the settings, the type inference for indices that forces you to manually recreate an index, the Docker container that takes 30 s to start every time…

      If you can use mongo, just use that. Or pick something other than OpenSearch if that’s overkill for you.

    • Mirror Giraffe@piefed.social · 2 days ago

      Where I work we use Mongo. It’s not what I would’ve picked, but I guess it helped early dev speed, along with bad practices like having product people do direct DB edits to save a situation because the app isn’t mature yet.

      By now the collections (and the documents themselves) are getting huge, we’ve had to archive more and more recent data, which causes problems, and we really have to make sure our queries are sharp or cost and lag go through the roof.

      With that said, it actually works pretty OK for a production platform with quite a big customer base, and there are many improvements we could make if we had the time.

      If I had been there on day one I’d have rooted for SQL, mainly because of how much these different collections have to relate to each other, but I don’t think Mongo is as horrible as many people make it out to be, and it does have upsides.

    • Flamekebab@piefed.social · 2 days ago

      I’ve used it for one small project and quite liked it. I struggle with the concepts behind relational databases and Mongo’s approach was understandable for me.

  • Psaldorn@lemmy.world · 2 days ago

    Had to roll my own JSON storage system after spending weeks trying to get SQLite to work on Godot/Android.

    It took a day, and it will suck at scale because there are no indexes: it just goes through the whole file, line by line, when you search for an id.

    BUT IT WORKS.

    Hopefully the repos and stuff I piled on top have made it abstracted enough that I can move it to a real database if the issue ever gets resolved.
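
    A rough sketch of that scheme (all names made up): append-only JSON lines, plus a lookup that scans every line because there’s no index:

```python
import json
import os
import tempfile

# One JSON object per line, appended to a flat file.
path = os.path.join(tempfile.mkdtemp(), "store.jsonl")

def save(record):
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def find(record_id):
    # O(n): reads every line until the id matches
    with open(path, encoding="utf-8") as f:
        for line in f:
            rec = json.loads(line)
            if rec.get("id") == record_id:
                return rec
    return None

save({"id": 1, "name": "sword"})
save({"id": 2, "name": "shield"})
```

    Ugly, but it works, and the `save`/`find` surface is small enough to swap for a real database later.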

    • fxdave@lemmy.ml · 2 days ago (edited)

      That’s not the point of JSONB. Use normalized tables whenever you can. JSONB lets you store a document with an unknown structure, and it lets you access that data from within SQL.

      • Simulation6@sopuli.xyz · 2 days ago

        I probably have just run into a bad example of its use. I can see it being useful for unknown documents.

    • jubilationtcornpone@sh.itjust.works · 2 days ago (edited)

      I run a web app that processes at least one third-party JSON document so large that it would exceed the table column limit if flattened out. It gets stored in a JSONB column. EF Core with Npgsql can query JSON documents in Postgres, and it works just fine as long as you put indexes on the fields you’re going to query.

    • Slotos@feddit.nl · 2 days ago

      I can’t muster any sarcasm out of sheer disappointment. You win this time…