

Off-and-on trying out an account over at @[email protected] due to scraping bots bogging down lemmy.today to the point of near-unusability.




It looks like I was wrong about it being the default journaling mode for ext3; the default is apparently to journal only metadata. However, if you’re journaling data, writes get pushed out to a new location on disk (the journal) rather than directly on top of where the previous data lived.
https://linux.die.net/man/1/shred
CAUTION: Note that shred relies on a very important assumption: that the file system overwrites data in place. This is the traditional way to do things, but many modern file system designs do not satisfy this assumption. The following are examples of file systems on which shred is not effective, or is not guaranteed to be effective in all file system modes:
log-structured or journaled file systems, such as those supplied with AIX and Solaris (and JFS, ReiserFS, XFS, Ext3, etc.)
file systems that write redundant data and carry on even if some writes fail, such as RAID-based file systems
file systems that make snapshots, such as Network Appliance’s NFS server
file systems that cache in temporary locations, such as NFS version 3 clients
compressed file systems
In the case of ext3 file systems, the above disclaimer applies (and shred is thus of limited effectiveness) only in data=journal mode, which journals file data in addition to just metadata. In both the data=ordered (default) and data=writeback modes, shred works as usual. Ext3 journaling modes can be changed by adding the data=something option to the mount options for a particular file system in the /etc/fstab file, as documented in the mount man page (man mount).
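For reference, switching a filesystem to full data journaling via /etc/fstab looks something like this (the device and mount point here are just placeholders):
# example /etc/fstab entry forcing full data journaling on an ext3 filesystem
/dev/sda2   /home   ext3   defaults,data=journal   0   2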


open-sources
To repeat my comment over on [email protected], “open-sources” isn’t really the right term here, as the source code that runs the speakers isn’t being released. This is just releasing API documentation to let software interact with the speakers.


ATSC 3.0 allows broadcasters to track consumer viewing habits much like Facebook and Google use today.
Sure wouldn’t want to miss out on that.


Typically when (some) 3D games don’t work, I’ve found that 3D library support for either the 32-bit or the 64-bit binaries isn’t present (Steam relies on the systemwide libraries), and the game bails or tries to do software rendering. I’ve run into some other users on here who have had the same issue.
It looks like the full versions of those are all Windows binaries run through Proton, though there are Linux-native demo binaries.
I have Dystopika myself.
installs
$ file Dystopika.exe
Dystopika.exe: PE32+ executable for MS Windows 6.00 (GUI), x86-64, 7 sections
$
So probably 64-bit.
There’s some environment variable that will force Proton to use the older Direct3D backend based on OpenGL (WineD3D) instead of Vulkan (DXVK). Let me see if I can find that.
searches
You want:
PROTON_USE_WINED3D=1 %command%
In the Steam launch properties for the game; that’ll force it to use OpenGL instead of Vulkan. Here, it will run with or without it. Does that magically make it work?
One useful tool for debugging 3D issues is mangohud. If you stick it in the Steam launch properties before “%command%” and it can display anything at all, it’ll show an overlay with which API (WineD3D or DXVK) is being used as well as which rendering device is being used, which will let you know whether it’s trying to render in software or hardware. So MANGOHUD_CONFIG=full mangohud %command%.
On my system, Dystopika appears able to render in pure software (not at a great framerate, mind):
PROTON_USE_WINED3D=1 LIBGL_ALWAYS_SOFTWARE=1 MANGOHUD_CONFIG=full mangohud %command%
So I don’t know whether falling back to software rendering is what’s causing that. Software rendering is listed in the mangohud overlay as “llvmpipe”.
Another way to check that each path functions is to run the following programs and see if they display correctly and at a reasonable clip. They’re in the mesa-utils-bin:i386 and mesa-utils-bin:amd64 packages on Debian, so probably the same for Mint:
$ glxgears.i386-linux-gnu
$ glxgears.x86_64-linux-gnu
$ vkgears.i386-linux-gnu
$ vkgears.x86_64-linux-gnu
That’ll be a simple test of all of the OpenGL and Vulkan 32-bit and 64-bit interfaces.
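If those aren’t installed yet, pulling them in on a Debian-family system should be something along these lines (the i386 architecture may already be enabled if Steam is installed):
$ sudo dpkg --add-architecture i386
$ sudo apt update
$ sudo apt install mesa-utils-bin:i386 mesa-utils-bin:amd64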


So, it’s not really a problem I’ve run into, but I’ve met a lot of people who have difficulty on Windows understanding where they’ve saved something, even though they do remember having worked on or looked at it at some point in the past.
My own suspicion is that part of the problem stems from the fact that, back in the day, DOS had a filesystem layout that was not exactly aimed at non-technical users, and Windows tried to avoid this by hiding it and stacking an increasing number of “virtual” interfaces on top, ones that didn’t just show you the filesystem, whether that be the Start menu or Windows Explorer and file dialogs having a variety of things other than just the filesystem to navigate around. The result is that Microsoft has been banging away for much of the lifetime of Windows adding more ways to access files, most of which make it harder to fully understand what is actually going on underneath the extra layers. But regardless of why, some users do have trouble with it.
So if you can just provide a search that can summon up that document where they were working on that had a picture of giraffes by typing “giraffe” into some search field, maybe that’ll do it.
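For plain-text-ish documents, the crude command-line equivalent of that kind of content search is just a recursive, case-insensitive grep (the directory here is only a placeholder):
$ grep -ril giraffe ~/Documents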


I don’t use OpenHAB or Home Assistant, but I’d be extremely surprised if they don’t have existing functionality for connecting microphones, speakers, and LLMs to set up voice-controlled stuff.
searches
Willow Is a Practical, Open Source, Privacy-focused Platform for Voice Assistants and Other Applications
Willow is an ESP IDF based project primarily targeting the ESP32-S3-BOX hardware family from Espressif. Our goal is to provide Amazon Echo/Google Home competitive performance, accuracy, cost and functionality with Home Assistant, openHAB and other platforms.
100% open source and completely self-hosted by the user with “ready for the kitchen counter” low cost commercially available hardware.
https://rhasspy.readthedocs.io/en/latest/
Rhasspy (ɹˈæspi) is an open source, fully offline set of voice assistant services for many human languages that works well with:


My understanding, from a very brief skim of what Microsoft was doing with Copilot, is that it takes screenshots constantly, runs image recognition on them, and then makes them searchable as text, with the ability to go back and view those screenshots in a timeline. Basically, adding more search without requiring application-level support.
They may also have other things that they want to do, but that was at least one.
EDIT: They specifically called that feature “Recall”, and it was apparently the “flagship” feature of Copilot.
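Not Microsoft’s code, obviously, but the general idea is simple enough to sketch with stock tools, here scrot for screenshots and tesseract for OCR; the directory, interval, and search term are just placeholders:
# take a screenshot every minute and OCR it into a searchable text file
mkdir -p ~/recall-ish
while true; do
    ts=$(date +%Y%m%d-%H%M%S)
    scrot ~/recall-ish/"$ts".png                          # grab the screen
    tesseract ~/recall-ish/"$ts".png ~/recall-ish/"$ts"   # writes ~/recall-ish/$ts.txt
    sleep 60
done
# later, to find when something was on screen:
grep -ril invoice ~/recall-ish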


Not the position Dell is taking, but I’ve been skeptical that building AI hardware directly into laptops specifically is a great idea unless people have a very concrete goal, like text-to-speech, and existing models to run on it, probably specialized ones. This is not to diminish AI compute elsewhere.
Several reasons.
Models for many useful things have been getting larger, and you have a bounded amount of memory in those laptops, which, at the moment, generally can’t be upgraded (though maybe CAMM2 will improve the situation and move things back away from soldered memory). Historically, most users did not upgrade memory in their laptop, even if they could. Just throwing the compute hardware in there in the expectation that models will come is a bet that the models people might want to use won’t get a whole lot larger. This is especially true for the next year or two, since we expect high memory prices, with people probably being priced out of sticking very large amounts of memory in laptops.
Heat and power. The laptop form factor exists to be portable. They are not great at dissipating heat, and unless they’re plugged into wall power, they have sharp constraints on how much power they can usefully use.
The parallel compute field is rapidly evolving. People are probably not going to throw out and replace their laptops on a regular basis to keep up with AI stuff (much as laptop vendors might be enthusiastic about this).
I think that a more-likely outcome, if people want local, generalized AI stuff on laptops, is that someone sells an eGPU-like box that plugs into power and into a USB port or via some wireless protocol to the laptop, and the laptop uses it as an AI accelerator. That box can be replaced or upgraded independently of the laptop itself.
When I do generative AI stuff on my laptop, for the applications I use, the bandwidth that I need to the compute box is very low, and latency requirements are very relaxed. I presently remotely use a Framework Desktop as a compute box, and can happily generate images or text or whatever over the cell network without problems. If I really wanted disconnected operation, I’d haul the box along with me.
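To give an idea of how thin that link is, my setup amounts to forwarding one port over SSH; the hostname, port, and model below are placeholders, and I’m assuming an ollama-style HTTP API on the compute box:
# forward the compute box's local API port to the laptop
$ ssh -N -L 11434:localhost:11434 compute-box &
# then any local client just points at localhost; a trivial smoke test:
$ curl http://localhost:11434/api/generate -d '{"model": "llama3", "prompt": "Say hello.", "stream": false}'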
EDIT: I’d also add that all of this is also true for smartphones, which have the same constraints, and harder limitations on heat, power, and space. You can hook one up to an AI accelerator box via wired or wireless link if you want local compute, but it’s going to be much more difficult to deal with the limitations inherent to the phone form factor and do a lot of compute on the phone itself.
EDIT2: If you use a high-bandwidth link to such a local, external box, bonus: you also potentially get substantially-increased and upgradeable graphical capabilities on the laptop or smartphone if you can use such a box as an eGPU, something where having low-latency compute available is actually quite useful.


I know open ai bought ~40% of microns memory production.
IIRC Micron was the only Big Three DRAM manufacturer that OpenAI didn’t sign a contract with. I think that they signed contracts with SK Hynix and Samsung for their supply, and didn’t with Micron.
searches
Yeah:
OpenAI ropes in Samsung, SK Hynix to source memory chips for Stargate
Not signing was actually probably to Micron’s advantage; I understand that OpenAI didn’t let Samsung know that they were negotiating with SK Hynix, didn’t let SK Hynix know that they were negotiating with Samsung, and signed both deals concurrently. That is, each of Samsung and SK Hynix probably sold the DRAM that went to OpenAI for less than they could have gotten on the open market, since neither was aware at the time of signing that open-market supply outside of themselves would sharply decrease during the period of the contract, which would be expected to drive up prices.
I mean, they still made a lot more money than they had been making. Just that they could have probably managed to get even more money for the DRAM that they sold.
IIRC the 40% number was OpenAI signing for 40% of global production output, not for any particular company’s output.


According to TrendForce, the boom is expected to continue, as conventional DRAM contract prices in 1Q26 are forecast to rise 55–60% QoQ, while NAND Flash prices are expected to increase 33–38% QoQ.
And that’s in an environment where DRAM output is significantly ramping up:
https://www.theverge.com/news/847344/micron-ram-memory-shortage-2026-earnings
Micron aims to ramp up production and expects to increase its shipments of DRAM and NAND flash memory by 20 percent next year
SK hynix to boost DRAM production by a huge 8x in 2026, still won’t be enough for RAM shortages
EDIT: Also:
The memory sector has surged into a full-blown seller’s market, with both South Korean giants rejecting long-term agreements (LTAs) of two to three years and sticking to quarterly contracts, anticipating stepwise DRAM price increases each quarter through 2027, the report suggests.
Just a few years ago, they were losing a ton of money due to low DRAM prices, so I imagine that rejecting long-term contracts at current (already high) prices drives home even more that they expect demand to increase further relative to supply:
SK Hynix reports Q2 loss as chip glut continues
SEOUL (Reuters) -South Korea’s SK Hynix posted a quarterly operating loss on Wednesday, as the company said the memory chip market is beginning to recover from a deep downturn.
Published on 07/25/2023 at 07:29 pm EDT
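To put rough numbers on the quarterly-contract logic above: assuming, purely for illustration, a 15% bump each quarter, eight quarters of that roughly triples the price, which is money a two-to-three-year LTA signed at today’s prices would leave on the table:
$ echo "1.15^8" | bc -l    # eight quarters of +15% compounds to about 3.06x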


“Open source” really isn’t the right term here, if they’re just releasing API specifications. “Open sourcing” the speakers would be releasing the source code to the software that runs on the speakers.
Like, all of Microsoft’s libraries on Windows have a publicly-documented interface. That hardly makes them open source. It just means that people can write software that makes use of them.


He’s still not sure Niemann cheated, though. “It is of course suspicious,” he said. “But it could be luck or it could be that Magnus had a bad day… maybe it’s not even possible to do this. That’s why I thought to make this program. Let people try. Maybe if people figure out it doesn’t even work at all, then this whole theory of butt plugs was just a waste of time.”
Hmm. Actually… you probably can determine that. Assuming that the device’s radio is talking Bluetooth, which not all do, if anyone had a cell phone near the environment and made use of Google’s or Apple’s Location Services, said companies probably have a log of it being seen. Those location services work by having phones scan for nearby Bluetooth and WiFi devices and upload the MAC addresses and signal strengths they see to Google and Apple, who then compare them against prior position reports, IDs, and strengths to determine a position, so there will be a log of devices and their active periods floating around in their databases.
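As a concrete illustration of how chatty those radios are, on any Linux box with a Bluetooth adapter you can watch nearby BLE devices announce themselves (the address and name below are made up):
$ bluetoothctl
[bluetooth]# scan on
Discovery started
[NEW] Device AA:BB:CC:DD:EE:FF SomeToy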


looks confused
searches
Ah.
A cheating controversy rocking the chess world just won’t let up. One conspiracy theory promoted by Elon Musk without evidence is that young chess wiz Hans Niemann defeated world chess champion Magnus Carlsen in early September with the aid of a vibrating set of anal beads.
It’s an intriguing idea, but is such a thing even possible? Ron Sijm, a software engineer in the Netherlands, wants to find out and has developed software to test the theory. He’s posted the code to open-source coding platform GitHub, and all he needs now is the right sex toy.
With the code built, Sijm started hunting for a butt plug or set of anal beads to test his theory. He’s turned to a community that knows the systems best, the butt plug sex toy control project Buttplug.io. Sijm has been talking with the folks on Buttplug.io Discord server in an effort to find someone who already has a device and is willing to test the software.
Sijm said coding out the basic software took about four hours and that, hypothetically, it would be easy for someone like Niemann or his team to put together. The list of compatible anal vibrating devices is long.


Razer has officially pulled the curtain back on Project AVA, a “Friend for Life” AI desk companion featuring a 5.5-inch 3D holographic display. Moving beyond simple voice assistants, AVA utilizes human-like vision and audio sensing to provide full contextual awareness, acting as a real-time gaming wingman, professional consultant, and personal organizer.
The hardware is a sleek cylindrical unit equipped with a dual far-field mic array, an HD camera with ambient light sensors, and Razer’s signature Chroma RGB. At its core, the device currently leverages xAI’s Grok engine to power its “PC Vision Mode,” allowing it to analyze on-screen gameplay or complex documents via a high-bandwidth USB-C connection to a Windows PC.
For some time, man had suffered in a world lacking a smart speaker with a camera, tits, short skirt, and ability to monitor everything he did on his computer. That world was about to end.


Web devs need hardware integration support too.


They said they thought they were within Github’s acceptable use guidelines; even though they make mods for hentai games and things like interactive vibrator plugins, they took care to not host anything explicit directly in their repositories.
A developer who goes by Sauceke, who Github suspended in mid-November without explanation, said their open-source adult toy mod users are now encountering broken links or simply can’t find any of their work.
Hmm. Buttplug.io’s GitHub repositories are still up, and I’d think that those would be a rather-more-prominent target if the issue were sex toy code.


According to ShinyHunters, the records contain extensive data on Premium members including email addresses, activity type, location, video URL, video name, keywords associated with the video and the time the event occurred. Activity types include whether the subscriber watched or downloaded a video, or viewed a channel and events include search histories.
This sort of thing is one of those examples of why a “no logs, no profile” service is probably a good idea. The service could have offered the option to charge a fee for access but not retain customer activity data. They didn’t do that. At some point down the line, someone got hold of the data, which I imagine their customers are not really super keen on having floating around attached to their identities.
Probably a lot of companies out there that log and retain a lot of data about their customers.


It’s not, and I think that Excel is often used where other tools would be more-appropriate because of existing expertise with Excel, but you don’t necessarily need to use a database for all tasks where a bunch of data gets stored.
I have plenty of scripts that deal with large amounts of schlorped-up data that just leave it in a text file, and Unix has a long and rich tradition and toolset for using text files for data storage and processing the data in them in bulk.
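As a trivial example of that tradition, totaling a numeric column per key in a whitespace-delimited text file (data.txt is a placeholder) is a one-liner:
$ awk '{sum[$1] += $2} END {for (k in sum) print sum[k], k}' data.txt | sort -rn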
GNU R, a statistics package, has a lot of tools to schlorp up data from many sources, including scraping it from the web, and storing it in large data frames to be processed and maybe visualized. It’s probably rather more performant than databases for some kinds of bulk data processing.
Okay, so…is it appropriate here?
One thing that spreadsheets can be handy for is for making specialized calculators that plonk some data into some simple model and spit out a result. Having, say, the current temperature in a given city may be a perfectly reasonable input to make available to a spreadsheet, I think.
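As a toy example of that kind of calculator done at the command line instead (wttr.in is a real weather service; the city and the formula are just placeholders):
# pull the current temperature in Celsius and feed it into a pretend heating-cost model
$ temp=$(curl -s 'wttr.in/Oslo?format=%t' | tr -dc '0-9-')
$ echo "scale=2; (20 - $temp) * 0.35" | bc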


I don’t know if I can count this as mine, but I certainly didn’t disagree with predictions of others around 1990 or so that the smart home would be the future. The idea was that you’d have a central home computer and it would interface with all sorts of other systems and basically control the house.
While there are various systems for home automation, things like Home Assistant or OpenHAB, and some people use them, and I’ve used some of the technologies that were expected to be part of this myself, like X10 for device control over power circuits, the vision of a heavily-automated, centrally-controlled home never became the norm. I think that the most-widely-deployed piece of home automation that has shown up since then is maybe the smart thermostat, which isn’t generally hooked into some central home computer.