How’s it compare to Greenshot?
California has pushed out badly worded laws in the past. Here’s a definition from the bill.
“Artificial intelligence model” means an engineered or machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs that can influence physical or virtual environments and that may operate with varying levels of autonomy.
Tell me that wouldn’t also apply to a microwave oven.
After several years of using Linux for work and school, I made the leap to daily driving Linux on my personal computer. I stuck with it for two years. I sank hundreds of hours into an endless stream of inane troubleshooting. Linux preys on my desire to fix stuff and my insane belief that just one more change, suggested by just one more obscure forum post, will fix the issue.
… the lack of an increment operation, no “continue” instruction, and array indices starting from 1 instead of 0. These differences can be jarring
Understatement
It depends. It will not affect many of them until 2025, when enterprise support for v2 ends, and by then other arrangements and fixes might be in place. I wouldn’t worry about Brave in particular yet.
I wonder how good this model would be at an obfuscated code challenge.
This is all they really said IMO:
My tendency these days is to try to use the term “machine learning” rather than “AI”.
The initial results showed something that should have been obvious to anyone: *more data beats more parameters.*
That makes a lot of sense!
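If the result being referenced here is the Chinchilla-style scaling finding (an assumption on my part), the “more data” point boils down to a rough rule of thumb of about 20 training tokens per parameter. A quick back-of-envelope sketch:

```python
# Back-of-envelope illustration of "more data beats more parameters",
# assuming the Chinchilla-style heuristic of ~20 training tokens per parameter.
TOKENS_PER_PARAM = 20  # rough compute-optimal ratio, not an exact figure

for params_b in (7, 13, 70):  # model sizes in billions of parameters
    tokens_b = params_b * TOKENS_PER_PARAM
    print(f"{params_b}B params -> ~{tokens_b}B tokens to train compute-optimally")
```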
Purely speculation, but I wonder if this is a case of having some old, very low-quality photos and trying to enhance and upscale them for the show.
You can generate your own tracks using Bing Chat.
All the Suno tracks I’ve heard have a similar style: very procedural and formulaic. Calling it AI seems like a stretch.
Relevant article: https://lemmy.ml/post/12857742
Prompt engineering is a thing, but I wouldn’t say it’s much of a job title. There are people doing it: optimizing system prompts, preprocessing and postprocessing. LLMs are just one piece of a complex pipeline, and someone has to build all of that. Prompt engineering is part of the bootstrapping for making better LLMs, but this work is largely being done by data scientists who are at the forefront of understanding how AI works.
So is prompt engineering just typing questions? IDK. Who knows what those people mean when they say that, but whatever it’s called, there is a specialized field around improving AI tech, and prompt engineering is certainly a part of it.
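To make the “one piece of a pipeline” point concrete, here is a minimal sketch of what that work tends to look like in practice. The names (`call_llm`, the prompt text, the length cap) are hypothetical, not any specific product’s API:

```python
# Minimal sketch: a system prompt, light preprocessing, a model call, and
# postprocessing. The model itself is only one stage of the pipeline.
SYSTEM_PROMPT = "You are a support assistant. Answer in one short paragraph."

def preprocess(user_input: str) -> str:
    # Strip noise and cap length before anything reaches the model.
    return user_input.strip()[:2000]

def postprocess(raw_output: str) -> str:
    # Enforce output constraints the prompt alone can't guarantee.
    return raw_output.strip().split("\n")[0]

def answer(user_input: str, call_llm) -> str:
    # `call_llm` is a hypothetical stand-in for whatever model/API is in use.
    prompt = f"{SYSTEM_PROMPT}\n\nUser: {preprocess(user_input)}\nAssistant:"
    return postprocess(call_llm(prompt))
```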
Nothing in the article corroborated the claim in the title that human intervention made things worse, just that the problem goes deeper.
“AI Prompt Engineering Is Dead” long live LLMOps which is totally not the same thing /s
I thought it was a strange choice to use such a technical descriptor in their name. This makes sense.
Is this good? How does it compare to the existing tools?
I would be curious what the article means by AI. For example, this might include some transcription and sentiment analysis. I didn’t see anything too complicated in their description of what the software does.
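For what it’s worth, the transcription-plus-sentiment combination speculated about above can be a few lines with off-the-shelf models. A rough sketch, assuming the Hugging Face `transformers` pipelines and a Whisper model (the article doesn’t say what they actually use):

```python
# Rough sketch of what "AI" in such a product might amount to:
# speech-to-text followed by sentiment scoring.
from transformers import pipeline

# Whisper-based transcription (model choice is an assumption)
transcriber = pipeline("automatic-speech-recognition", model="openai/whisper-small")
# Generic sentiment classifier (library picks a default model)
sentiment = pipeline("sentiment-analysis")

text = transcriber("call_recording.wav")["text"]
print(sentiment(text))  # e.g. [{'label': 'NEGATIVE', 'score': 0.98}]
```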
If I understand correctly, they are basically testing whether the LLM can create novel outputs.
Got to say, I especially don’t get the environmental part. Generative AI is not cryptocurrency mining.
So is this what Mozilla meant when they announced a privacy push back in February?
https://fortune.com/2024/02/08/mozilla-firefox-ceo-laura-chambers-mitchell-baker-leadership-transition/