

There are a fair number of third party boards based on the RP2040/RP2350 silicon. Even ESPHome can target it, despite originally targeting the ESP32.
The silicon itself is pretty nice, although the original had some problems with deep sleep.


Are they as well supported? There are lots of SBCs out there, but if they are only supported by vendor kernels and have no documentation then I’d rather pay the Pi premium.
ETA: that said, for a lot of stuff microcontrollers are a much better bet.
They are pretty good at summarisation. If I want to catch up with a long review thread on a patch series I’ve just started looking at, I occasionally ask Gemini to outline the development so far and the remaining issues.


What was wrong with working with Godot that made them want to fork?
I guess somewhere between 6 and 7…urm 6/7 👐 (and my kids say I don’t understand memes 😅).
If you have ever read the “thought” process on some of the reasoning models, you can catch them going into loops of circular reasoning, just slowly burning tokens. I’m not even sure this isn’t by design.
I thought they were also blood relations?
I think the OP’s analysis might have made a bit of a jump from overall levels of hobbyist maintainers to what percentage of shipping code is maintained by people in their spare time.
While the experiences of OpenSSL and xz should certainly drive us to find better ways of funding underlying infrastructure, you do see higher participation rates from paid maintainers where the returns are more obvious. The silicon vendors get involved in the kernel because it’s in their underlying interests to do so, and the kernel benefits as a result.
I maintain a couple of hobbyist packages in my spare time, but it will never be a funded gig because comparatively few people use them compared to DAYJOB’s project, which can make a difference to companies’ bottom lines.
The year of Linux on the desktop is whatever year you personally switched over.


Now I’ve read the article: it’s unnamed industry analysts, and it’s written by an AI. For all I know the AI has hallucinated the number.


I assume microcontrollers. Most of those are invisible to consumers.
I would not want anything that requires a cloud connection to be responsible for securing my house. The security record of these smart locks also isn’t great.
The final question you need to ask yourself is: how do they fail safe? There have been Tesla owners trapped in burning cars. If, god forbid, your house caught fire, could you get out through a door secured with a smart lock?
Once we summit the peak of inflated expectations and the bubble bursts, hopefully we’ll get back to evaluating the technology on its merits.
LLMs definitely have some interesting properties, but they are not universal problem solvers. They are great at parsing and summarising language. Their ability to vibe code is entirely based on how closely your needs match the (vast) training data. They can synthesise tutorials and Stack Overflow answers much faster than you can. But if you are writing something new or specialised, the limits of their “reasoning” soon show up in dead ends and sycophantic “you are absolutely right, I missed that” responses.
More than the technology, the social context is a challenge. We are already seeing humans form dangerous parasocial relationships with token predictors, with some tragic results. If you abdicate your learning to an LLM you are not really learning, and that could have profound impacts on the current cohort of learners, who might be assuming they no longer need to learn as the computer can do it for them.
We are certainly experiencing a very fast technological disruption event and it’s hard to predict where the next few years will take us.


Fundamentally, the reason they want to use kernel modules is to observe the system for other executables interfering with the game. This is a hacky solution at best.
The TPM hardware can support attested boot, so you can verify with the hardware that nothing but the verified kernel and userspace is running. That gives you the same guarantees without letting third parties mess with your kernel.
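To illustrate the idea, here is a minimal Python sketch of the PCR replay at the heart of attested boot. The event names and log format are made up for illustration; a real verifier parses the binary TPM event log and checks a quote signed by the TPM:

```python
import hashlib

def pcr_extend(pcr: bytes, digest: bytes) -> bytes:
    # TPM2-style PCR extend: new_pcr = SHA-256(old_pcr || digest)
    return hashlib.sha256(pcr + digest).digest()

# PCRs start zeroed at boot.
pcr = bytes(32)

# Hypothetical boot measurements; real entries come from the TPM event log.
for event in [b"firmware", b"verified-kernel", b"verified-initrd", b"game-launcher"]:
    pcr = pcr_extend(pcr, hashlib.sha256(event).digest())

# An attestation service replays the log like this and compares the result
# against the PCR value the TPM signs in its quote; any unexpected code
# loaded during boot changes the final value.
print("expected PCR:", pcr.hex())
```

Because each extend folds the previous value into the next hash, the final PCR value commits to the whole boot chain in order, which is what lets a remote service trust the result without running code on your kernel.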


It’s nice to see Valve and Igalia seeing the benefit of open GPU drivers for Proton and FEX to utilise.
I would have thought unified memory would pay off; otherwise you spend your time shuffling stuff between system memory and VRAM. Isn’t the Deck unified memory?


mu4e inside my Emacs session.
I ran into something similar when, in haste, I went from Raspbian Stretch to plain Bookworm and discovered the Debian version of Kodi didn’t have all the userspace drivers to drive the hardware decoding. In the end I worked around it by running Kodi from a Stretch-based container until the official Raspbian Bookworm got released. Maybe you could build a Stretch-based container for your VLC setup?


Did you ever play with the audio visualiser? I believe it came built in with the CD-ROM drive? What about Tempest 2000?
Imgur has been offline in the UK since the original investigation. Do they even want to be in the UK market?