Yeah, I took one course where we used MongoDB. I was like “still unconvinced, but I’ll keep this in mind if I run into situations not covered by PostgreSQL.” …I’ve not run into situations not covered by PostgreSQL. Everything will be covered by PostgreSQL.
I manage instances of both mongo and postgres at work.
I’ll say Mongo OpsManager is pretty sweet, and HA is way easier on Mongo.
Jesus Christ, that’s JSON Bourne.
Yes that’s right, it goes into postgres.
who needs any of that when you have microsoft access
Ow, my integrity
There is actually an open source alternative for that in the Libreoffice suite called “Base”
Is it actually any good for small personal projects? Just want someone who has used it to answer as I’m considering putting some work into it.
I tested it once and it didn’t really impress me. Perhaps you can try using something like Grist.
I have used LibreOffice Base and found it's a buggy mess.
- Not all drivers support all functions, so if you're wondering why some options are missing, it's probably the adapter not supporting them.
- Errors and help messages are usually empty or super generic, like "syntax incorrect".
- The interface sometimes bugs out when long syntax is present in input fields.
- Because of 1, it also doesn't support all syntax from Microsoft SQL Server, MySQL, etc.
I suggest using DBeaver for any DB; it's different, but at least it's not a buggy mess. Or pgAdmin for PostgreSQL, or DB Browser for SQLite.
“You know what ELSE everybody likes? Postgres! Have you ever met a person, you say, ‘Let’s use some Postgres,’ they say, ‘Hell no, I don’t like Postgres’? Postgres is perfect!”
Yeah! Postgres is great!
- Mutters something under his breath about MariaDB.
MariaDB
Let’s schedule a meet-up at 00/00 year 0000 to talk about it.
elephant walks in
I 100% agree… if you don't need portable databases. For those, everybody likes SQLite (even if it can be annoying sometimes).
You can pry sqlite out of my cold dead hands. Because I’ll probably die while using it out of frustration due to the poor performance of triggers.
Tbh trigger performance isn't that much of a concern unless you need to write lots of data, which most usage doesn't.
Also, try CHECK constraints instead, or re-evaluate your schema to avoid them if you really need to.
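To illustrate the CHECK suggestion above, here's a minimal sketch using Python's stdlib `sqlite3` (table and column names are made up for illustration): the constraint rejects bad rows declaratively at insert time, with no per-row trigger firing.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A CHECK constraint enforces the rule declaratively,
# instead of a validation trigger running on every insert.
conn.execute("""
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        quantity INTEGER NOT NULL CHECK (quantity > 0)
    )
""")

conn.execute("INSERT INTO orders (quantity) VALUES (3)")   # accepted
try:
    conn.execute("INSERT INTO orders (quantity) VALUES (-1)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)  # the CHECK constraint fired
```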
Personally, my death would be multiple write-transaction deadlocks. Sadly it doesn't play that well with async code, like with sqlx (Rust).
My death was the fact that table lock acquisition is not FIFO.
https://sqlite.org/forum/forumpost/8d7d253df1b9811b4b76c2c4c26ac0740e73d06e9edfeb2ab8aabaebd899cbc8
Thankfully I can at least have FIFO in a single process by wrapping every write transaction in a mutex.
P.S. Can't wait for Turso's SQLite replacement to reach feature parity and sqlx support.
This is also part of my death, because it’s much easier to not deadlock when you are FIFO.
Personally I went for the nuclear option: every transaction is spawned as a tokio task, so it keeps getting polled independently of the other futures. Coupled with a generous busy-timeout (60 secs) and WAL mode, it works pretty well.
Probably should also add the mutex strategy (perhaps a tokio semaphore instead?), although due to lifetimes it might be hard to make a `begin()` function on my DB pool wrapper.… Congratulations, you nerd sniped me. Time for it to go on the todo stack.
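The mutex strategy discussed above can be sketched in Python's stdlib `sqlite3` for brevity (the thread's actual setup is Rust/tokio; names here are invented, and note that most mutexes, including this one, don't strictly guarantee FIFO wakeup): one process-wide lock serializes write transactions, so writers queue instead of racing for SQLite's database lock.

```python
import sqlite3
import threading

# One process-wide lock: all write transactions queue behind it,
# so they never contend for SQLite's write lock concurrently.
write_lock = threading.Lock()

# isolation_level=None puts the connection in autocommit mode,
# so we control transactions explicitly with BEGIN/COMMIT.
conn = sqlite3.connect(":memory:", check_same_thread=False,
                       isolation_level=None)
conn.execute("CREATE TABLE counters (name TEXT PRIMARY KEY, value INTEGER)")
conn.execute("INSERT INTO counters VALUES ('hits', 0)")

def increment():
    with write_lock:                      # serialize writers
        conn.execute("BEGIN IMMEDIATE")   # grab the write lock up front
        conn.execute("UPDATE counters SET value = value + 1 "
                     "WHERE name = 'hits'")
        conn.execute("COMMIT")

threads = [threading.Thread(target=increment) for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()
print(conn.execute("SELECT value FROM counters").fetchone()[0])  # 10
```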
Hyped for it too, but I wouldn't use it until sqlx support lands. Compile-time-checked queries are just so good. I don't use rusqlite for that reason alone (you often don't need async SQLite anyway).
Yeah, but is it web scale?
/dev/null is web scale, it maintains sub 1ms times no matter how much load you give it!
Does /dev/null support sharding?
Yes, you can run as many replicas as you want. It’s also incredibly lean on the synchronization bandwidth.
It certainly supports the admin sharting when he finds out where all the data went.
What data?
How to avoid an alien invasion, according to War of the Worlds (2025).
lololol yusssss
With SQL you scale when it's required, via sharding, read replicas, cache layers, and denormalization.
With NoSQL, afaik, you have to deal with scaling from the beginning by keeping denormalized data consistent, which adds code overhead. Is MongoDB different in this regard?
EDIT: I got whooshed. Thanks for the reference :)
edit edit: Holy shit, how did I miss this for 15 years? This is great, stayed accurate all this time.
Shoutout to software that had to deal with y2k and is still popular, gotta be one of my favourite genders.
It’s a reference to this masterpiece: https://youtu.be/b2F-DItXtZs
I’ve never seen that one. It’s a masterpiece for sure. Thanks.
Just fyi you’re taking a meme seriously
Just use Mongo, it scales so well!
Never understood why anyone chose Mongo. Though I have some funny memories getting rid of it because it was slowing the app down sooo much.
If you need something for storing JSONs and querying, just use ElasticSearch/OpenSearch.
Or add a column next to the json with some data about the json and index that.
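A minimal sketch of the "column next to the JSON" approach above, in Python's stdlib `sqlite3` (table and field names are invented for illustration): the raw document stays in one column, and the one field you actually query gets its own indexed column extracted at write time.

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
# Store the full JSON blob, plus a plain column holding the one
# field we actually filter on, and index that column.
conn.execute("""
    CREATE TABLE events (
        id INTEGER PRIMARY KEY,
        customer TEXT,   -- extracted from the JSON at write time
        doc TEXT         -- the full JSON document
    )
""")
conn.execute("CREATE INDEX idx_events_customer ON events(customer)")

def insert_event(event: dict) -> None:
    conn.execute(
        "INSERT INTO events (customer, doc) VALUES (?, ?)",
        (event["customer"], json.dumps(event)),
    )

insert_event({"customer": "acme", "total": 42, "items": ["a", "b"]})
insert_event({"customer": "globex", "total": 7, "items": []})

# The lookup hits the index instead of parsing every JSON blob.
row = conn.execute(
    "SELECT doc FROM events WHERE customer = ?", ("acme",)
).fetchone()
print(json.loads(row[0])["total"])  # 42
```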
Oh god, all the people storing massive JSON documents, and then having to lock the whole thing to modify sub-entities.
But is Elasticsearch web scale?
I say this with all appropriate irony: as the guy that deployed it for Wikipedia, yes.
Used OpenSearch in a recent project, but the number of annoyances with it is through the roof: the SSL cert setup, bad default settings, the type inference for indices that forces you to manually recreate the index, the Docker container that takes 30s to start every time…
If you can use mongo, just use that. Or pick something other than OpenSearch if that’s overkill for you.
Where I work we use Mongo. It's not what I would've picked, but I guess it helped early dev speed, along with bad practices like having product do direct DB edits to save a situation because the app isn't mature yet.
By now collections and documents are getting huge, we've had to archive more and more recent data, which causes problems, and we have to really make sure our queries are sharp or cost and lag will go through the roof.
With that said, it actually works pretty ok for a production platform with quite a big customer base, and there are many improvements we could do if we had the time.
If I'd been there on day one I'd have rooted for SQL, mainly based on how much these different collections have to relate to each other, but I don't think Mongo is as horrible as many people make it out to be, and it does have upsides.
I’ve used it for one small project and quite liked it. I struggle with the concepts behind relational databases and Mongo’s approach was understandable for me.
Wait, you guys don't just write data to a `.txt` file? /s
I don't use file extensions, so no `.txt`. /s
Had to roll my own JSON storage system after spending weeks trying to get sqlite to work on Godot/android.
It took a day and will suck at scale because there are no indexes. It just goes through the whole file, line by line when you search for an id.
BUT IT WORKS.
Hopefully the repos and stuff I piled on top have made it abstracted enough that I can move it to a real database if the issue ever gets resolved.
Just store the JSON in a sqlite table with an extra column or two for commonly indexed stuff…?
No, you misunderstand. Sqlite just does not work when it’s packaged by Godot mono for mobile (see the ticket in the other replies)
It worked fine on desktop which made it more frustrating
I’m confused about your SQLite troubles … it compiles for pretty much everything - as long as you have a file system mapping.
It's not just me; it seems to affect Godot C# deployments to mobile.
https://github.com/godotengine/godot/issues/97859
Worked fine on desktop
Ahh, it’s not an issue about SQLite but about whether the right libraries are bundled by Godot. Got it, that explains it.
It’s weird tho because everything looks like it’s there inside the apk. 🤷
Pros and cons
I really dislike JSONB in Postgres. Just use an ORM at that point.
That’s not the point of JSONB. Use normalized tables whenever you can. JSONB allows you to store a document with unknown structure, and it allows you to access that data within SQL.
I probably have just run into a bad example of its use. I can see it being useful for unknown documents.
I run a web app that processes at least one third party JSON document that is so large it would exceed the table column limit if flattened out. It gets stored in a JSONB column. EFCore with Npgsql can query JSON documents in Postgres. Works just fine as long as you put indexes on the fields you’re going to be querying.
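The setup described above looks roughly like this in Postgres (a sketch with hypothetical table and field names, not the poster's actual schema): the whole document lives in one JSONB column, with indexes only on the fields that actually get queried.

```sql
-- Hypothetical names; the third-party document is stored whole.
CREATE TABLE vendor_reports (
    id      bigserial PRIMARY KEY,
    payload jsonb NOT NULL
);

-- Expression index on one field used in WHERE clauses...
CREATE INDEX idx_reports_status
    ON vendor_reports ((payload->>'status'));

-- ...or a GIN index for general containment queries.
CREATE INDEX idx_reports_payload
    ON vendor_reports USING gin (payload);

-- A containment query that can use the GIN index:
SELECT id FROM vendor_reports
WHERE payload @> '{"status": "open"}';
```

Without those indexes, every such query falls back to scanning and parsing each document, which is where JSONB columns get their bad reputation.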
Ok, I was wrong. The only example I have worked with was just someone being lazy.
I can’t muster any sarcasm out of sheer disappointment. You win this time…