

There are at least 3 comments with well-laid-out arguments (hint: they have way more upvotes than the post). You have responded to none of them.


I’ve never used them. AI is shit, and they’re still at the “burning money” stage. Wait 2-3 years and they’ll enter the enshittification stage, where it will be even worse.
Plenty of times I’ve seen coworkers stuck on the same problem for hours, until they come and ask for help and I give them a simple answer to their simple problem. Every time it’s “well, I asked the AI and it said this thing and it didn’t work, so I asked it to fix it and that didn’t work either, a bunch of times.” I just tell them “you’re surrounded by a lot of people here who know a lot about programming, why don’t you ask any of them?”.
For real, why use an AI at work where you are surrounded by people who can actually answer your question? It just makes no sense. Leave AI to those who can’t pay an artist for their game. Or to those who have a “game design idea that will change the world” but won’t pay a programmer even though they can’t program themselves.


It’s not. Numbers are written (in both binary and base 10) with the most significant digit on the left.
Whether you read the number left to right or right to left is irrelevant; you can choose whichever you want.
But it is completely consistent with base 10 (normal numbers).
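A quick sketch of the point in Rust (the literal and the formatting are just illustrations): both the binary literal and the formatted output put the most significant digit on the left, exactly like decimal.

```rust
fn main() {
    // 0b1100 is twelve: the most significant bit is written first,
    // just like the "1" comes first in the decimal "12".
    let n: u8 = 0b1100;
    assert_eq!(n, 12);

    // Formatting follows the same most-significant-digit-left convention in both bases.
    assert_eq!(format!("{n:b}"), "1100");
    assert_eq!(format!("{n}"), "12");
    println!("{n} in decimal is {n:b} in binary");
}
```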


Pay your seniors, then. Changing jobs is a hassle; no one will do it if they’re happy with their current job. Also give them work from home, so that not even moving because of a partner will make a senior change jobs.
Yes, of course! It’s one of the three changes in this version.
Removing ICEs is always good work. They’re awful to run into, although I haven’t hit one myself yet.


Well, yes, the LLMs are not the ones that actually generate the images. They basically act as a translator between the human text input and the image generator. Well, probably just the tokenizer. But that’s beside the point. Both LLMs and image generators are generative AI, and they have similar mechanisms: both can create never-before-seen content by mixing things they have “seen”.
I’m not claiming that they didn’t use CSAM to train their models. I’m just saying that this is not definitive proof of it.
It’s like claiming that you’re a good mathematician because you can calculate 2+2. Good mathematicians can do that, but so can bad mathematicians.


We have all been children, we all know the anatomical differences.
It’s not like children are aliens; most differences are just “this is smaller and a slightly different shape in children”. Many of those differences can be seen on fully clothed children. And for the rest, there are non-CSAM images that happen to include nude children. As I said earlier, it is not uncommon for children to be fully nude at beaches.


What, you don’t think so?
Why does being a parent give any authority in this conversation?


The wine thing could prove me wrong if someone could answer my question.
But I don’t think my theory is that wild. LLMs can interpolate, and that is a fact. You can ask one to make a bear with duck hands and it will do it. I’ve seen LLM-generated images of things like that on the internet.
Who is to say interpolating nude children from regular children+nude adults is too wild?
Furthermore, you don’t need CSAM for photos of nude children.
Children are nude at beaches all the time, there probably are many photos on the internet where there are nude children in the background of beach photos. That would probably help the LLM.
As a rust developer I feel obligated by religion to make this comment:
Then you’d love Rust! Rust only has “interfaces” (called traits) and doesn’t have inheritance. You just have traits that don’t inherit from anything, and structs (which don’t inherit from other structs) that implement any number of traits.
So you can have the good things about OOP without the bad ones.
And these traits allow you to make trait objects, which are like regular objects in C# (with vtables for the methods). If 2 different structs implement the same trait, you can coerce (“upcast”) either of them to a trait object and store both in the same array. Or pass one as an argument to a function that wants something implementing that trait but doesn’t care about the specific struct. You can of course downcast it back to the original struct later.
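A minimal sketch of what that looks like (`Shape`, `Square`, and `Circle` are made-up names for illustration): two different structs stored in one `Vec` behind a trait object, then downcast back via `std::any::Any`.

```rust
use std::any::Any;

trait Shape {
    fn area(&self) -> f64;
    fn as_any(&self) -> &dyn Any;
}

struct Square { side: f64 }
struct Circle { radius: f64 }

impl Shape for Square {
    fn area(&self) -> f64 { self.side * self.side }
    fn as_any(&self) -> &dyn Any { self }
}

impl Shape for Circle {
    fn area(&self) -> f64 { std::f64::consts::PI * self.radius * self.radius }
    fn as_any(&self) -> &dyn Any { self }
}

fn main() {
    // Two different structs coerced to the same trait-object type.
    let shapes: Vec<Box<dyn Shape>> = vec![
        Box::new(Square { side: 2.0 }),
        Box::new(Circle { radius: 1.0 }),
    ];

    // Dynamic dispatch through the vtable, no matter the concrete type.
    let total: f64 = shapes.iter().map(|s| s.area()).sum();
    assert!((total - (4.0 + std::f64::consts::PI)).abs() < 1e-9);

    // Casting back to the original struct (a true downcast) via Any.
    let square = shapes[0].as_any().downcast_ref::<Square>().unwrap();
    assert_eq!(square.side, 2.0);
    println!("total area: {total}");
}
```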


Did it have any full glasses of water? According to my theory, it has to have data for both “full” and “wine”.


Tbf it’s not needed. If it can draw children and it can draw nude adults, it can draw nude children.
Just like it doesn’t need to have trained on purple geese to draw one. It just needs to know how to draw purple things and how to draw geese.
The windows crate is full of Deref, because the Windows API is full of inheritance.
It may not be what the trait was designed for, but I’m glad we have it to interface with APIs that have actual inheritance.
But if I have to make an Array I have to inherit from Indexable which inherits from Collection which inherits from Object! How else am I supposed to implement an Array?


How many of those 8 hours are because Bitbucket is down? I’d bet at least 1. I swear their uptime is measured in 8s instead of 9s (as in 88.88%, not 99.88%).
Deref works on top of encapsulation, though. Inheritance syntactically hides the encapsulation; with Deref it’s out in the open.
It’s true that it feels like inheritance, but I’m grateful for it; otherwise the Windows API would be a pain to use.
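A tiny sketch of the pattern (`Base` and `Derived` are hypothetical, not actual windows-crate types): the wrapped value is visible right in the struct definition, and Deref just forwards to it.

```rust
use std::ops::Deref;

// A hypothetical base/derived pair mimicking an inheritance-style API.
struct Base { name: String }

impl Base {
    fn describe(&self) -> String { format!("base: {}", self.name) }
}

struct Derived { base: Base, extra: u32 }

impl Deref for Derived {
    type Target = Base;
    // The encapsulation is in plain sight: Derived simply exposes its inner Base.
    fn deref(&self) -> &Base { &self.base }
}

fn main() {
    let d = Derived { base: Base { name: "win".into() }, extra: 7 };
    // Base's methods are callable on Derived through auto-deref,
    // which is what makes it feel like inheritance.
    assert_eq!(d.describe(), "base: win");
    assert_eq!(d.extra, 7);
    println!("{}", d.describe());
}
```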
I know this thread is old. But I disagree with you.
I agree that depending on how you use a debugger, some race conditions might not happen.
However, I don’t agree that debuggers are useless to fix race conditions.
I have a great example that happened to me to prove my point:
While I was using a debugger to fix an ordinary bug, another quite strange, unknown bug happened. That other bug was indeed a race condition; I had just never encountered it before.
The issue was basically that handling the session start and the session end at the same time resulted in a bug. It was more complicated than that (we do use mutexes), but it was along those lines.
We develop in lab-like conditions with fast networking and computers, so this issue practically never happens on its own. But thanks to the breakpoint I had put in the session-initiation function, I was able to observe it. And in a real-world scenario it is something that may happen.
Not only that, I could reproduce the “incredibly rare” race condition 100% of the time. I just needed to place a breakpoint in the correct place and wait for some amount of time.
Could this be done without a debugger? Most of the time yes, just put a sleep call in there. Would I have found this issue without a debugger? Not at all.
An even better example:
Deadlocks.
How do you fix a deadlock? You run the program under a debugger and make the deadlock happen. You then look at which threads are waiting at a lock call and there’s your answer. It’s as simple as that.
How do you print-debug a deadlock? Put a log before and after each lock call in the program and look for unpaired logs? Sounds like a terrible experience. Some programs have thousands of lock calls, and some execute them tens of times per second. Additionally, the time needed to print those logs changes the behaviour of the program itself and may make the deadlock harder to reproduce.
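A minimal sketch of the classic two-lock deadlock scenario and the usual fix (names are illustrative): if one thread took `a` then `b` while the other took `b` then `a`, each could end up waiting on the lock the other holds — exactly the state a debugger shows you when you pause and inspect both threads. Keeping a consistent lock order avoids it.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let a = Arc::new(Mutex::new(0));
    let b = Arc::new(Mutex::new(0));

    let (a1, b1) = (Arc::clone(&a), Arc::clone(&b));
    let t1 = thread::spawn(move || {
        let mut x = a1.lock().unwrap(); // consistent order: a before b
        let mut y = b1.lock().unwrap();
        *x += 1;
        *y += 1;
    });

    let (a2, b2) = (Arc::clone(&a), Arc::clone(&b));
    let t2 = thread::spawn(move || {
        // Taking b before a here would risk the deadlock described above;
        // keeping the same a-then-b order everywhere is the standard fix.
        let mut x = a2.lock().unwrap();
        let mut y = b2.lock().unwrap();
        *x += 1;
        *y += 1;
    });

    t1.join().unwrap();
    t2.join().unwrap();
    assert_eq!(*a.lock().unwrap(), 2);
    assert_eq!(*b.lock().unwrap(), 2);
    println!("both threads finished: no deadlock");
}
```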


4K is noticeable on a standard PC.
I recently bought a 1440p screen (for productivity, not gaming) and I can fit so much more UI with the same visual fidelity compared to 1080p. Of course, the screen needs to be physically bigger for the text to stay the same size.
So if 1080p -> 1440p is noticeable, 1080p -> 4K must be too.
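The pixel counts back this up (standard 16:9 resolutions): 1440p has about 1.78x the pixels of 1080p, while 4K has exactly 4x.

```rust
fn main() {
    let fhd = 1920 * 1080; // 1080p
    let qhd = 2560 * 1440; // 1440p
    let uhd = 3840 * 2160; // 4K UHD

    assert_eq!(fhd, 2_073_600);
    assert_eq!(qhd, 3_686_400);
    assert_eq!(uhd, 8_294_400);

    // 4K is exactly four 1080p screens' worth of pixels.
    assert_eq!(uhd / fhd, 4);
    println!(
        "1440p/1080p = {:.2}x, 4K/1080p = {}x",
        qhd as f64 / fhd as f64,
        uhd / fhd
    );
}
```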
You went through my comment history and quoted me, only to not read the whole quote.
Here, I’ll help you:
As I said, there are already 3 top comments explaining to you why you’re being downvoted. I don’t need to explain myself when I mostly agree with them, I just upvote them.
If everyone had to explain every downvote, we would have hundreds of comments on each post, and most of them would say the same thing.