Tomorrow? Oh, so you already forgot the announcement from last month? Well, I mean I guess we have a lot of our own stuff going on right now…
/s?
Yup. Or anything held against them is now just fakery.
Sometimes it feels like technology may doom us all in the end. We’ve got a rough patch ahead in society now that liars and cheats can be more convincingly backed up, and honest folk can be hidden behind credible doubt that they are the liars.
AI isn’t just on the path to making convincing lies, it’s on the path to ensuring that all truth can be doubted as well. At which point there is no such thing as truth, until we learn yet another way to tell the difference.
“They don’t need to convince us that what they are saying, the lies, is true. Just that there is no truth, and you cannot believe anything you are told.”
Are you installing needed libraries?
For example, the installer runs because it doesn’t need any, but then your app needs, say, VCRedist 2010, and so won’t run until you add the vcrun2010 extra library with Winetricks or the equivalent menu in Bottles.
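If it helps, here is a minimal sketch of the Winetricks route from a terminal, assuming the app lives in the default prefix at ~/.wine (point WINEPREFIX elsewhere if you keep one prefix per app):
# vcrun2010 is the standard Winetricks verb for the Visual C++ 2010 runtime
WINEPREFIX=~/.wine winetricks vcrun2010
In Bottles the same thing should be a click in the bottle’s dependency list, if I remember the UI right.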
The way I understand it, the real issue here is that Proton Drive should clear its sync state or machine identity when it is uninstalled. The identification of the PC should be unique to each install, so that when you reinstall it later it understands that this is now a “new” system needing to be reworked from scratch, and that the empty folder is awaiting initial download, not signalling mass cloud deletion. Would that lead to multiple copies in the “Computers” backup section? Sure, but that can be a good thing too, or at least better than wiping the drive, and more easily remedied.
I wonder if Proton could shave off some work hours by just putting the API team in contact with the RClone backend developer, or by contributing to it.
I get the feeling that even if Proton released a Drive app for Linux, all but the most casual users would just be waiting for RClone to learn from it and improve.
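For anyone curious what the rclone route looks like in practice today, a rough sketch, assuming a reasonably recent rclone with its (still experimental, as I understand it) Proton Drive backend and a remote you have named proton:
# One-time interactive setup; choose the protondrive backend when prompted
rclone config
# Then a one-way sync of a local folder up to Proton Drive
rclone sync ~/Documents proton:Documents --progress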
What happens next? A wave of even worse disregard for things.
After all, if we can bring back the mammoth, who cares if we off <insert species here>, they’ll just bring it back next rotation. /s
It’s not really because it fell over. It’s because it wasn’t supposed to fall over. Expendable launch vehicles don’t contend with this, because for them failure to return is a success. This is a failure. It must be learned from and fought against/prevented going forward.
RClone? I understand it’s a bit hacky but it works well for me in testing and is a generally accepted option for cloud storage of all kinds on Linux.
Since you mention setup instead of any manual install screwery, I’d say root (UID 0) is still very real, you just didn’t set up a login for it. Every time you sudo (substitute user do), you (probably UID 1000) are running that command as root instead of as yourself. In fact, just run sudo -i and you are now “logged in” as root.
Edit: Missed the context. This should still be useful info, but you probably are not accidentally remoting into an account you never set up a login for.
Raspbian is sometimes a compromise between security and usability, because it is designed to go into the hands of new users. It also used to ship with a hardcoded default “pi/raspberry” login and, IIRC, permitted root password login over SSH. Those are things experienced users change or turn off, but it needs to start friendly for everyone else, you know?
By doing this, they can take a step in the right direction, separating the root and login user, without the annoyance of asking for a password every time a newbie copies and pastes tutorial commands all week.
And as I said, it’s unlikely, even very unlikely, but just not impossible. Everything comes with a risk; I just believe it’s up to you, not me, to decide what those risks mean in your environment. It might be that you’d like the convenience on a home dev server, but want as much security as possible on a public-facing one.
Or maybe you’d like to get really dialed in and only allow specific commands to be run without a password, so you can be quick and convenient about rebooting but lock down the rest. Up to you, really, that’s the power of Linux.
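As a sketch of that idea: a sudoers entry can name a specific command instead of ALL, and since the last matching entry wins, a line like this placed after the general rule makes only reboot passwordless (the path may differ on your system, check it with which reboot first):
# Passwordless reboot only; everything else still prompts
%sudo ALL=(ALL:ALL) NOPASSWD: /usr/sbin/reboot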
In Debian, you will want to modify your /etc/sudoers file (ideally via visudo, which checks your syntax before saving) to use the NOPASSWD directive.
So where you find something like this in that file:
%sudo ALL=(ALL:ALL) ALL
Make it like this:
%sudo ALL=(ALL:ALL) NOPASSWD:ALL
In this example, the powers are given to the %sudo group; yours might just say pi or something else your user belongs to.
Also, please note that while this is convenient, it does mean anyone with access to your shell has a quick escalation to root privileges. If some program you run has a shell escape vulnerability and pops a shell without a password, it now gets root without one too. Unlikely to happen, sure, but I believe one should make informed decisions.
Now would be a good time to look for a .com you like, or one of the other more common TLDs, and register it at Namecheap, Porkbun, or Cloudflare. (Cloudflare is the cheapest, but the all-eggs-in-one-basket aspect is a concern for some.)
Sadly, all the cheap or fun TLDs have a habit of being blocked wholesale, either because the cheap ones are overused by bad actors or because corporate IT just blacklists “abnormal” TLDs (or only whitelists the old ones?) because it’s “easy security”.
Notably, .xyz also runs that 1.111B initiative, selling numbered domains for 99¢, which further feeds the affordability for bad actors and justifies flat-out sinkholing the entire TLD.
I got a three-character .xyz to use as a personal link shortener. Half the people I used it with said it was blocked at school or work. My longer .com poses no issue.
Is there a list anywhere of this and other settings and features that could or should be changed to improve Firefox privacy?
Other than that, I’m not sure I’m really going to jump ship. I think I’m getting too old for the “clunkiness” that comes with trying to use third-party or self-hosted alternatives to replace the features that ultimately break the privacy angle, or to bolt them onto barebones privacy-focused browsers. Containers and profile/bookmark syncing, for example. But if there’s a list of switches I can flip to turn off the most egregious things, that would be good for today.
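Not the comprehensive list you’re asking for, but a few of the commonly flipped switches live in about:config (or a user.js in your profile folder); pref names do drift between releases, so treat these as examples of the kind of thing to look for rather than gospel:
// Telemetry and studies
user_pref("toolkit.telemetry.enabled", false);
user_pref("datareporting.healthreport.uploadEnabled", false);
user_pref("app.shield.optoutstudies.enabled", false);
// Sponsored tiles on the new tab page
user_pref("browser.newtabpage.activity-stream.showSponsored", false);
user_pref("browser.newtabpage.activity-stream.showSponsoredTopSites", false);
// Pocket integration
user_pref("extensions.pocket.enabled", false);
The community-maintained arkenfox user.js is probably the closest thing to a full, documented list, though it leans toward the aggressive end.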
You would go for a Raspberry Pi when you need something it was invented for.
Putting a computer on your motorcycle, robot, or solar-powered RV. Very small spaces, very low power budgets, or direct GPIO control.
A MiniMicro will run laps around a Pi for general compute, but you can’t run it off a cell phone battery pack. People only associate Pis with general compute because of the push to sell them as affordable school computers, not because they were awesome at it, but because they were cheap and just barely enough.
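To make the GPIO point concrete, here is the kind of thing a Pi does out of the box and a mini PC can’t without extra hardware: a minimal sketch using the gpiozero library that ships with Raspberry Pi OS, where the pin number and wiring are just an example.
from time import sleep
from gpiozero import LED  # preinstalled on Raspberry Pi OS

status_light = LED(17)  # hypothetical LED wired to GPIO 17

while True:
    status_light.on()   # drive the pin high
    sleep(1)
    status_light.off()  # drive the pin low
    sleep(1)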
Forgive me, I’m no AI expert and can’t fully relate tokens-per-second figures to the average query Siri might handle, but I will say this:
Even in your article, only the largest model ran at 8 tokens per second; the others ran much faster, and none of them were optimized for a task, just benchmarked.
Would it be impossible for Apple to run a model optimized for the expected mobile tasks, and to leverage their own hardware more efficiently than we can, to meet their needs?
I imagine they cut out most worldly knowledge and use a lightweight model, which is why there is still a need to hand some requests off to ChatGPT or Apple. Would that let them trim Siri down to perform well enough on phones for most requests? They also advertised launching this AI on M1 and M2 chip devices, which are not M3 Max either…
Onboard AI chips will allow this to be local.
Phones do not have the power to ~~~
Perhaps this is why these features will only be available on iPhone 15 Pro/Max and newer? Gotta have those latest and greatest chips.
It will be fun to see how it all shakes out. If the AI can’t run most queries on the phone with all this advertising of local processing…there’ll be one hell of a lawsuit coming up.
EDIT: Finished looking for what I thought I remembered…
Additionally, Siri has been locally processed since iOS 15.
https://www.macrumors.com/how-to/use-on-device-siri-iphone-ipad/
I think there’s a larger picture at play here that is being missed.
Getting the weather has been a standard feature for years now. Nothing AI about it.
What is “AI” is asking: “Hey Siri, what is the weather at my daughter’s recital coming up?”
The AI processing, calculated on-device if what they claim is true, is:
Well {Your phone contact name}, it looks like it will {remote weather response} during your {calendar event from phone} with {daughter from contacts} on {event date}.
That is the difference between on-device and cloud processing. The phone already has your contacts and calendar, so it does that work offline rather than educating an online server about your family, events, and location, and it requests the bare minimum from the internet: in this case, nothing more than if you had opened the weather app yourself and put in a zip code.
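A toy sketch of that split, in plain Python with entirely made-up helper names since none of us know Apple’s internals; the only point is which data stays on the phone and which single value goes over the network.
from dataclasses import dataclass

# Hypothetical illustration only; these are not Apple APIs.

@dataclass
class Event:
    title: str
    date: str
    location_zip: str
    attendee: str

def local_calendar_search(keyword: str) -> Event:
    # Stand-in for an on-device calendar query; nothing leaves the phone.
    return Event("recital", "2025-06-14", "90210", "Daughter")

def local_contact_lookup(name: str) -> str:
    # Stand-in for an on-device contacts query; nothing leaves the phone.
    return name

def fetch_forecast(zip_code: str, date: str) -> str:
    # The only network request: a zip code and a date, nothing personal.
    return "sunny and 75F"  # pretend this came from a weather API

def answer_recital_weather() -> str:
    event = local_calendar_search("recital")         # on-device
    daughter = local_contact_lookup(event.attendee)  # on-device
    forecast = fetch_forecast(event.location_zip, event.date)
    return (f"It looks like it will be {forecast} during your "
            f"{event.title} with {daughter} on {event.date}.")

print(answer_recital_weather())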
Plug it into a monitor or TV and keep an eye on the console.
I have an older NUC that will not cooperate with certain brands of NVMe drive under PVE. The issue sounds like yours: it would work for an arbitrary amount of time before the file system crashed and attempted to remount read-only, rendering the system inert and unable to handle changes like plugging in a monitor later, even though it was still “on”.
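If plugging a monitor in after the fact shows nothing, the kernel log from the previous boot is usually where that kind of NVMe/filesystem fallout shows up. A quick sketch of where I would look; the device name is an assumption, so adjust for your box:
# Kernel messages from the boot before this one, filtered for storage trouble
journalctl -k -b -1 | grep -iE "nvme|ext4|i/o error|read-only"
# SMART/health data for the NVMe drive (smartmontools package)
smartctl -a /dev/nvme0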
Does this mean the RClone integration can be improved?
It feels a little hacky right now, and as I learned when I last recommended it, it does not do things like image thumbnails, which turns people away.
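If the integration in question is an rclone mount, one thing that sometimes helps with the thumbnail complaint is letting rclone keep full local copies in its VFS cache, so the file manager’s thumbnailer can actually read the image data. A sketch, again assuming a remote named proton:
# Cache file contents locally so thumbnailers and previews have something to read
rclone mount proton: ~/ProtonDrive --vfs-cache-mode full --daemon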