  • I’ve had a Blade Stealth 13 QHD+ touchscreen (RZ09-02393E32) since 2017. Until recently it ran Windows and Ubuntu side by side. A few months ago I realized I never boot into Windows, so I removed it. I also got tired of Ubuntu pushing its own package management system, which I don’t find useful. Consequently I’m back to “just” Debian stable and it works great for me. I didn’t have to tinker with anything; it just works.





  • I used Kodi with LibreELEC for years in a similar setup. It was nice… but in practice I didn’t really use the “cool” functionalities (like indexing, image preview, Web remote control, etc), so instead I looked at how Kodi works and noticed DLNA. I saw that my favorite video player, namely VLC, supports DLNA. I then looked for a DLNA server on Linux, found a few, and stuck with the simplest one, namely minidlna. It’s quite basic, at least the way I use it, but for my usage it’s enough:

    • install VLC on the clients, including an Android video projector, phones, XR HMDs, etc
    • install minidlna on the server (an RPi5)
    • configure minidlna to serve the right directory and its subdirectories (/var/lib/minidlna by default)
    • configure the few extra programs that fetch videos so they push them (via an scp script and an SSH key) to rpi5:/var/lib/minidlna/

    voilà… a very reliable setup (I’ve been using it daily for more than a year); a minimal sketch of the server side is below.
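
    To make that concrete, here is a sketch of the server side (the hostname rpi5 comes from my setup, the video file name is a placeholder, and the paths are the Debian defaults):

      # On the server (RPi5): install minidlna and check its configuration
      sudo apt install minidlna
      # Relevant lines in /etc/minidlna.conf (Debian defaults shown):
      #   media_dir=/var/lib/minidlna
      #   inotify=yes        # pick up newly pushed files automatically
      sudo systemctl restart minidlna

      # From any machine that produces videos: push over ssh (key-based auth)
      scp some-video.mkv user@rpi5:/var/lib/minidlna/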



  • So if you are genuinely worried about this, don’t be.

    First, because as numerous people have already clarified, the researchers here are breaking deprecated cryptography.

    It’s a bit like opening an old lock with toothpicks, while the locks used in your modern car are very different. Yes, it IS actually interesting, but the same technique applies only in principle, not in practice.

    Second, because even IF in principle there IS a path to radically grow in power, there are already modern cryptographic techniques that are resistant to that scaling of quantum computing power. Consequently it is NOT just about how small the key is, but also HOW that key is made: the mathematical foundations on which a key is built, and on which it can be broken.

    Anyway, for a few years now there has been research, roughly tracking the interest in quantum computers, into what is called post-quantum encryption, or quantum-resistant encryption. Basically the goal of this research is to find new ways to make keys that are very cheap to generate and verify, literally with something as cheap and underpowered as the chip in your credit card, BUT practically impossible to “crack” for any computer, even a powerful quantum one. The results of that ongoing research are schemes like Kyber, FALCON, SPHINCS+, etc, which meet those requirements. Organizations like NIST in the US verify that the schemes are actually free of flaws and then make recommendations.
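
    If you are curious which of these schemes your own machine already knows about, recent OpenSSL can list its key-encapsulation mechanisms. A hedged sketch (it assumes OpenSSL 3.5+, which ships ML-KEM, the standardized descendant of Kyber, or an older build with the oqs-provider loaded):

      # List the key-encapsulation mechanisms (KEMs) this OpenSSL build supports;
      # post-quantum entries such as ML-KEM-512/768/1024 appear on OpenSSL 3.5+.
      openssl list -kem-algorithms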

    So… all this to say that a powerful quantum computer is still not something that breaks encryption overall.

    If you are worried TODAY, you can even “play” with implementations like https://github.com/open-quantum-safe/oqs-demos and set up a server, e.g. Apache, and a client, e.g. Chromium, so that they communicate using such schemes.

    Now, practically speaking, if you are not technically inclined or just don’t want to bother, you can “just” use modern software, e.g. Signal, which announced last year (https://signal.org/blog/pqxdh/) that it is doing just that on your behalf.

    Finally, you can expect all the actors you rely on daily to access content, e.g. hosts like Lemmy and browsers like Firefox, to gradually integrate post-quantum encryption while also gradually deprecating older, and thus risky, schemes. In fact, if you try to connect to old hardware today via e.g. ssh, you might find yourself forced to accept older encryption. That is interesting in itself because it shows that encryption changes over the years: old schemes get deprecated and replaced.
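
    You can watch both ends of that transition with a recent OpenSSH (user@old-device is a placeholder):

      # List the key-exchange algorithms your ssh client supports; recent
      # versions include the post-quantum hybrid
      # sntrup761x25519-sha512@openssh.com (the default since OpenSSH 9.0).
      ssh -Q kex

      # Conversely, connecting to old hardware may force you to explicitly
      # re-enable a deprecated exchange (weaker; do it knowingly):
      ssh -o KexAlgorithms=+diffie-hellman-group1-sha1 user@old-device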

    TL;DR: cool, but not worried, even with a properly powerful quantum computer, because post-quantum encryption is already being rolled out.


  • What this shows is a total lack of originality.

    AI is not new. Open source is not new. Putting two well-known concepts together wasn’t new either because… AI has historically been open. A lot of the cutting-edge research is done in public laboratories, with public funding, and is published in journals (sadly often behind paywalls, but still).

    So the name and the concept are both unoriginal.

    A lot of the popularity OpenAI gained through a chatbot is not new either. Neither is relying on ever-larger datasets and benefiting from Moore’s law.

    So I’m not taking any side, neither this person’s nor the corporation’s.

    I find that claiming to “own” common ideas is destructive for most of us.




  • Just yesterday I pinned VLC to my KDE Plasma Task Manager. Why? Because this way I can open “Recent Files” directly from it. I discovered this functionality just last week with LibreOffice Draw. It’s so efficient, it absolutely changed how I use my computer daily!

    But… why do I bother with this long example? Because IMHO that feature comes from KDE, not Debian. When a distro improves the UX, as I also wish it would, it is mostly by selecting the best software among the packages it maintains (e.g. KDE here, though yes, it could also be their own custom-made packages, even though that requires a lot more resources AND other distros could use them back, assuming it’s FLOSS). Arguably the UX of the distribution itself is mostly limited to the installation process.


  • more cutting edge than Debian

    In what aspect? How about Debian Unstable?

    I’m personally on Stable, but I also have some AppImages (and recently discovered AM https://github.com/ivan-hc/AM thanks to someone here), my own ~/bin directory, and quite a few tools. I feel there are very few things that, from an end-user standpoint, need to be done only through the distribution package manager. I believe having a stable OS but “cutting edge” specific apps (say Cura, Blender, etc) is a good compromise. As you mention, Firefox over a PPA (which I also have) is such a compromise. So I’m curious (genuinely, not trying to “convert” you to Debian on desktop): what is better on that front in Ubuntu rather than Debian?
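
    As an illustration of the “stable OS, cutting-edge apps” approach, an AppImage needs very little (a generic sketch; the file name is a placeholder, and AM automates these steps, cf its README):

      # A per-user bin directory; on Debian the default ~/.profile adds ~/bin
      # to $PATH when it exists (log in again after creating it).
      mkdir -p ~/bin

      # Drop the downloaded AppImage there, make it executable, run it.
      mv ~/Downloads/SomeApp.AppImage ~/bin/
      chmod +x ~/bin/SomeApp.AppImage
      ~/bin/SomeApp.AppImage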

    Edit: to clarify, I both pay my bills (literally, and work too) and play (including recent VR Windows-only games) on my Debian stable desktop.


  • a shortage of meaningful innovation

    Well… a distribution IS a selection of packages and a way to keep them working together. Arguably the “only” innovation in that context is HOW to do that and WHICH packages to rely on. For the first, the “latest” real changes could be considered immutable distributions, as on the Steam Deck, and declarative setups, e.g. NixOS. For the second… well, I don’t actually know if anybody is doing that; maybe things like PrimTux for kids in schools in France?

    Anyway, I agree, but I think it’s tricky to be innovative there, so let me flip the question: what would YOU expect from an innovative distribution?




  • I’d happily give technical advice but first I need to understand the actual need.

    I don’t mean “what would be cool” but rather the absolute bare minimum that would make a solution acceptable.

    Why do I insist so much? Well, because installing a distribution, e.g. Debian, takes less than 1h. Assuming you have a separate /home partition, there is no need to “copy” anything, only to mount it correctly. If it is on another physical computer, then the speed will depend on your storage capacity and hardware (e.g. SSD vs HDD). Finally, “configuring” each piece of software will take a certain amount of time, especially if you didn’t save the configuration (which you should).
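
    For example, “mounting correctly” is essentially one /etc/fstab line during or after the install (a sketch: the UUID is a placeholder, find the real one with blkid, and the filesystem type may differ):

      # /etc/fstab: reuse an existing partition as /home, no copying needed
      # (the UUID is a placeholder; run `sudo blkid` to find yours)
      UUID=0f3e9f2a-1111-2222-3333-444455556666  /home  ext4  defaults  0  2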

    Anyway, my point being that:

    • installing the OS takes little time
    • copying data across physical devices takes a lot more time
    • manually configuring specific software takes a bit of time

    So, if you repeat the operation several times a week, investing time in finding a solution can be useful. If you do this a few times a year or less, it’s probably NOT actually efficient.

    So, again, is this an intellectual endeavor, for the purpose of knowing what an “ideal” scenario would be, or is it a genuine need?




  • I thought saying

    contribute however they can up to their own capabilities

    was actually very clear, but it seems I wasn’t clear enough, so let me spell it out: it means literally doing ANYTHING except only criticizing. That can mean being an open-source developer, yes, but it can also mean translating, giving literally 1 cent, etc. It means doing anything at all rather than ONLY saying “this is good, but it’s not good enough” without doing a single thing to change that, especially while using another free-of-charge browser that is funded by advertising. Honestly, if that’s not clear enough I’m not sure what would be… but please do ask again and I will genuinely try to be clearer.


  • I hope everybody criticizing the move either does not use products from Mozilla or, if they do, contributes however they can up to their own capabilities. If you don’t, if you ONLY criticize yet use Firefox (or a derivative, e.g. LibreWolf), or arguably worse use something fueled by ads (e.g. Chromium-based browsers), then you are unfortunately contributing precisely to the model you are rejecting.