They/Them, agender-leaning scalie.
ADHD software developer with far too many hobbies/trades: AI, gamedev, webdev, programming language design, audio/video/data compression, software 3D, mass spectrometry, genomics.
Learning German (B2), Chinese (HSK 3-4ish), French (A2).
Good to see them learning LaTeX young. It’s one of those life skills that no one should need, but everybody does need at some point
Why do I find “match-3” the most offensive part of that thought?
Google is also responsible for the SEO industry. They made ads hugely profitable, then started directing traffic to sites that serve more of their ads, regardless of quality.
Western companies no longer operating in the Russian market, but still producing desirable content. … Western companies have ‘legalized’ piracy in Russia.
100% this.
Media is culture, and IMO people have a right to participate in culture. If it’s excessively difficult or impossible to legitimately access culture, one has the moral right to illegitimately access culture, and share it so others also have access.
It’s inexcusable to refuse to directly sell media. The internet has made it easier than ever to trade access to media for money. Geo-restricted subscription services should be a nice add-on option for power-consumers, not the only way to get access to something.
anthropomorphic behavior
Anyone else morbidly curious about what happens if they don’t fix the bill’s wording and accidentally ban “human-shaped behavior” at school?
The funny thing is that YouTube’s code is already so laggy that we all believed this without a second thought.
The website does a bad job explaining what its current state actually is. Here’s the GitHub repo’s explanation:
Memory Cache is a project that allows you to save a webpage while you’re browsing in Firefox as a PDF, and save it to a synchronized folder that can be used in conjunction with privateGPT to augment a local language model.
So it’s just a way to get data from the browser into privateGPT, which is:
PrivateGPT is a production-ready AI project that allows you to ask questions about your documents using the power of Large Language Models (LLMs), even in scenarios without an Internet connection. The project provides an API offering all the primitives required to build private, context-aware AI applications.
So basically it’s something you can ask questions like “how much butter is needed for that recipe I saw last week?” and “what are the big trends across the news sites I’ve looked at recently?”. But eventually it’ll automatically summarize and data-mine everything you look at to help you learn/explore.
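If that description is accurate, then once your saved pages are ingested, asking it something would look roughly like this. This is only a sketch: the port, endpoint path, and `use_context` flag are my guesses at privateGPT’s local API from memory, so check its docs before trusting any of it.

```python
# Rough sketch: once Memory Cache has synced your saved pages into privateGPT,
# ask about them via its local HTTP API.
# NOTE: the port, endpoint path, "use_context" flag, and response shape are
# assumptions from memory of privateGPT's docs (double-check the real project).
import requests

PRIVATEGPT_URL = "http://localhost:8001/v1/chat/completions"  # assumed default

def ask(question: str) -> str:
    resp = requests.post(PRIVATEGPT_URL, json={
        "messages": [{"role": "user", "content": question}],
        "use_context": True,  # answer from the ingested documents (your saved pages)
    })
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

print(ask("How much butter is needed for that recipe I saw last week?"))
```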
Neat.
I agree that older commercialized battery types aren’t so interesting, but my point was about all the battery types that haven’t had enough R&D yet to be commercially mass-produced.
Power grids don’t care much about density - they can build batteries where land is cheap, and for fire control they need to artificially space out higher-density batteries anyway. There are heaps of known chemistries that might be cheaper per unit stored (molten salt batteries, flow batteries, and solid state batteries based on cheaper metals), but many only make sense for energy grid applications because they’re too big/heavy for anything portable.
I’m saying it’s nuts that lithium ion is being used for cases where energy density isn’t important. It’s a bit like using bottled water on a farm because you don’t want to pay to get the nearby river water tested. It’s great that sodium ion could bring new economics to grid energy storage, but weird that the only reason it got developed in the first place was for a completely different industry.
This is awesome news. Not because of the car, but because it builds the supply lines for an alternative battery chemistry.
People have been using lithium-ion batteries for home and grid storage, which is nuts if you compare it to other battery types. Lithium is expensive and polluting and only makes sense if you’re limited by weight & space. Cheaper batteries, even if they’re bigger/heavier, will do wonders to the economics of sustainable electricity production.
But the comments below say they’re not able to access the new page, even with the direct URL… It seems certain tiers of customers can’t opt out. Possibly they can’t be included in the first place (e.g. EU users), but it’s a pretty big screw up to hide one’s status on such an important privacy setting.
I’m glad to hear I’m not missing out on anything. (It’s still not out in Europe.)
Yeah, I was over-enthusiastic based on their cherry-picked examples. SeamlessExpressive still leaves a lot to be desired.
It has a limited range of emotions and can’t change emotion in the middle of the clip. It can’t produce the pitch shifts of someone talking excitedly, making the output sound monotonous. Background noise in the input causes a raspy, distorted output voice. Sighs, inter-sentence breaths, etc. aren’t reproduced. Sometimes the sentence pacing is just completely unnatural, with missing pauses or pauses in bad places (e.g. before the sentence-final verb in German).
IMO their manual dataset creation is holding them back. If I were in this field, I would try to follow the LLM route: start with a next-token predictor trained indiscriminately on large-scale speech+text data (e.g. TV shows, movies, news radio, all with subtitles, even if the subs need to be AI-generated), fine-tune it for specific tasks (mainly learning to predict and generate based on “style tokens” for speaker, emotion, accent, and pacing), then generate a massive “textbook”-style synthetic dataset. The translation aspect could be almost completely outsourced to LLMs or multilingual subtitles.
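To make the “style tokens” idea a bit more concrete, here’s a toy sketch of what I mean by conditioning the token stream. Every token name and the vocabulary here are made up for illustration; none of it is taken from Seamless or any real model.

```python
# Toy illustration of style-token conditioning for a speech next-token predictor.
# All token names and the vocabulary are invented for this example.

STYLE_VOCAB = {
    "speaker": ["<spk:narrator>", "<spk:child>", "<spk:newsreader>"],
    "emotion": ["<emo:neutral>", "<emo:excited>", "<emo:sad>"],
    "pacing":  ["<pace:slow>", "<pace:normal>", "<pace:fast>"],
}

def build_training_example(style: dict, text_tokens: list, audio_tokens: list) -> list:
    """Prepend style tokens so the model learns to generate audio conditioned on them.

    At inference time you keep the style + text prefix and let the model
    predict the audio tokens autoregressively.
    """
    for kind, token in style.items():
        assert token in STYLE_VOCAB[kind], f"unknown {kind} token: {token}"
    prefix = [style["speaker"], style["emotion"], style["pacing"]]
    return prefix + ["<text>"] + text_tokens + ["<audio>"] + audio_tokens

example = build_training_example(
    style={"speaker": "<spk:newsreader>", "emotion": "<emo:excited>", "pacing": "<pace:fast>"},
    text_tokens=["Guten", "Morgen", "!"],
    audio_tokens=["<a:412>", "<a:87>", "<a:901>"],  # e.g. quantized codec units
)
print(example)
```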
This is so exciting!
I can’t wait to see how well the Expressive model does on anime and foreign films. I wouldn’t be surprised if this was the end of terrible dubs.
This is gonna be great for language learning as well. Finally being able to pick any media and watch it in any language. It might even be possible to rig it up to an LLM to tune the vocab to your exact level…
Thanks! That’s a well-written paper. I don’t know why I keep falling for science journalism’s simplified explanations.
I’ve so far only skimmed it, but to answer my own question: they find light dark matter to be the simplest case (I didn’t see a specific range, but they used 250 keV as an example), though they also considered a scenario where “dark-zillas” (mass >> 10^10 GeV) are plausible. At least that still narrows the search space a bit 😅
Sadly archive.li seems to be in a broken CAPTCHA loop, so I can’t see the full article. However, I’m struggling to imagine a fundamental universe-spanning interaction that triggers weeks after the big bang, given that the universe has already expanded/cooled enough by 20 minutes to stop fusing nuclei. If there is evidence for a Dark Matter big bang weeks after the Matter big bang, surely this must have some extreme implications about the possible mass range of DM particles?
One thing nobody seems to be talking about: Just like String Theory, the more new phenomena are needed to make the Dark Matter model work, the further we stray from the edge of Occam’s Razor. While all the research into detecting hypothetical particles has been fun to follow, I can’t help but feel we’re just a few equations away from discovering that the universe is actually pretty MONDane.
With teacher hours, isn’t that still often over 40 hours a week?
They’ve had days to prepare this response. They didn’t rescind or explain the one thing that people universally hated, which means they’re just stalling and trying to save their reputation without actually changing trajectory.
We’ve seen this corporate bullshit so much in recent years. No more “benefit of the doubt”.
ooo, I love this. It reminds me of how nice C#'s LINQ is…
“Pipeline style” DB queries have some interesting advantages as well.
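For anyone who hasn’t seen the style, here’s a rough sketch of what I mean by a pipeline query, using MongoDB’s aggregation pipeline as a stand-in. The database, collection, and field names are made up for the example.

```python
# Sketch of a "pipeline style" query: each stage transforms the output of the
# previous one, much like chained LINQ operators. Uses MongoDB's aggregation
# pipeline; the database, collection, and field names are invented.
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["shop"]

top_customers = db.orders.aggregate([
    {"$match": {"status": "shipped"}},                # filter rows (WHERE)
    {"$group": {"_id": "$customer_id",
                "total": {"$sum": "$amount"}}},       # GROUP BY + SUM
    {"$sort": {"total": -1}},                         # ORDER BY
    {"$limit": 10},                                   # LIMIT
])

for row in top_customers:
    print(row["_id"], row["total"])
```

Each stage only needs to know the shape of the previous stage’s output, which is what makes these queries so easy to build up and rearrange incrementally, much like chaining LINQ operators.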
I had no idea Omeleto existed. Looks like I’ve got a few weekends of watching their vids ahead of me!