Shout out to FDroid for being awesome. But realistically it’s not going to cover all the apps you’ll ever need.
My concern is with malware that exploits the software stack though, and those links pertain to scams that exploit human nature. Hence they don’t really support the argument that the iOS/android stack is more/less secure.
Scams that exploit human nature are an inevitable part of being online and there is no foolproof way to prevent them. I never said that either company was better or worse at reactive removal.
Scam apps require user interaction to achieve their goals. They largely aren’t doing anything that the user doesn’t allow them to do. So while I would always advocate swift removal, the onus is on me to protect myself rather than on the store itself.
The links I posted related to software on the Play Store exploiting aspects of the Android stack to surreptitiously perform tasks without the user’s knowledge. If somebody downloads one of those apps, it can do things that the user isn’t aware of and never allowed. This is the kind of exploitation that is preventable by thorough fuzzing. And this is the kind of threat that iOS does a fantastic job of protecting against.
Put it this way: I can safely download any app from the Apple App Store knowing that it is highly unlikely it will fuck with my device. I know that if it does it’ll probably be noteworthy enough to make the news. I can’t say the same for the Google Play Store.
Except that somehow it just keeps happening to google:
Whatever Apple is doing, you just don’t see this level of compromise on iOS. It’s not just that the Google store is no better; it seems to be so much worse.
The Wileyfox Swift was a rebadged device from an ODM, and at the time was quite well known and liked because the company was UK based and touted responsive local support. The hardware was good and the software support certainly no worse than any other at the time. The frustration of using it came from the problems inherent in the android stack, not the device itself.
I wanted to use android and I tried my best to make it a rational choice. The issues I encountered applied all the same to phones many times the price I paid, hence making iOS my only option. All these years later most of those core issues persist.
No I did not, and the Swift was (at the time) an official Lineage target. It performed well, but the amount of work and effort it took to attain and maintain that performance was simply unacceptable to me. I like the concept of Android and I like how open it is, but that doesn’t mean I’m going to be an apologist for its shortcomings. Of which there are many. I would love to be able to justify using an Android device but it is just not a rational choice for me. And, it would seem, for many others.
Denigrating something is by definition unfair criticism - and I don’t think even the most evangelical of Android fans can defend the mediocre manufacturer support and security history of the platform.
I tried a full phone cycle on Android. A Wileyfox Swift. I stuck with it for 4 years. I’ve dealt with a handful of Android tablets. I still have to wrangle Android on fire sticks.
I love to mess around with electronics but holy shit, never again. These are devices that need to work and perform, and I got so damn tired of messing with Lineage and TWRP - the alternative being zero updates from the manufacturer. The whole stack is a janky mess, and a moving target in terms of security and performance. Flagship phones that might stay current and perform well for a couple of years? Wtf?
So many android apps are dogshit. There’s no minimum bar to entry. Malicious apps sneak onto the play store. Out of date apps linger around.
My phone is not a project piece. It’s an essential device. Apple gives me a stringently vetted App Store, strong privacy controls, dependable hardware and performance. They expose the settings that I need and optimise everything else. My iPhone works and does its job with far less painful maintenance. I’m definitely willing to trade some freedom for that utility.
Not only that, but Apple hasn’t tried to DRM the open web lately. Are you sure this is consumerism and peer pressure? And not a dogshit software stack with poor performance, security and hardware driving away the users who are most engaged with their devices?
Do I care what phone you’re using? No. But I think bullshit click bait articles which effectively denigrate an entire demographic for the sake of instigating a tired back and forth about apples vs oranges should stay on the other side of the fucking paywall.
Hasn’t been an issue for me. HA would only be depending on OPNsense for a DHCP lease, so assuming you have reasonable lease times it’ll just pick up where it left off.
Without checking, I would imagine you could just set a startup delay for the HA container to make sure OPNsense starts first, if it does become an issue.
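If it ever did become an issue, Proxmox’s start/shutdown order settings would be the first thing I’d reach for. A minimal sketch, assuming OPNsense runs as VM 100 and Home Assistant as container 101 (both IDs made up for the example):

```
# Start OPNsense first on boot, then wait 60 seconds before the HA container starts.
qm set 100 --onboot 1 --startup order=1
pct set 101 --onboot 1 --startup order=2,up=60
```

The same values can be set in the GUI under each guest’s Options > Start/Shutdown order.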
I use a generic N5105 mini PC running Proxmox and OPNsense. You can get them fairly cheaply from AliExpress. They’re particularly low power and come with 4-6 gigabit network ports. I have two containers, the second of which hosts my Home Assistant instance. As an added bonus they often don’t have a fan.
For WiFi I use Ubiquiti WiFi 6 Lite APs, with the controller running under Home Assistant.
You can ignore the Windows machine unless it’s using NFS; it’s not relevant.
Your screenshot suggests my guess was incorrect, because you do not have any authorised Networks or Hosts defined.
Even so, if it were me I would configure authorised hosts or authorised networks anyway just to rule it out, as an IP restriction would neatly explain why it works on one container but not another. Does the clone have the same IP by any chance?
The only other thing I can think of for you to try is to set maproot user/group to root/wheel and see if that helps, but it’s just a shot in the dark.
The two docker containers can access the share, but the new proxmox container can’t?
The new Proxmox container will have a different IP. My guess would be that the IP of the docker host is permitted to access the NFS share, but the IP of the new Proxmox container is not.
To test, you can allow access from your entire LAN subnet (e.g. 192.168.1.0/24)
Edit: For reference see: https://www.truenas.com/docs/scale/scaletutorials/shares/addingnfsshares/#adding-nfs-share-network-and-hosts
In particular: If you want to enter allowed systems, click Add to the right of Add hosts. Enter a host name or IP address to allow that system access to the NFS share. Click Add for each allowed system you want to define. Defining authorized systems restricts access to all other systems. Press the X to delete the field and allow all systems access to the share.
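To make that concrete, this is roughly how I’d verify it from inside the new Proxmox container once the share is (re)configured - the IP and export path are just placeholders for whatever yours actually are:

```
# Ask the TrueNAS box which exports it will offer this client:
showmount -e 192.168.1.10

# Then try a manual mount to see the actual error message, if any:
mkdir -p /mnt/test
mount -t nfs 192.168.1.10:/mnt/tank/share /mnt/test
```

Running the same commands from the working docker host and comparing the results should show whether the restriction really is IP-based.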
I can’t even show you a COTS WiFi 7 device - unless I’m missing something, the two models Asus have listed aren’t available on Amazon.com. Not only that, but are there any clients yet? So it doesn’t really support your point.
Even then… how are you even getting 30Gbps into the device - three 10GbE ports in a LAG? And then you’re what… pushing that 30Gbps over your home fibre? Looking at the spec, WiFi 7 is designed for large scale deployment, not home use. Anyway, I’m getting off topic.
I mean you do realise I’m largely in agreement with you when it comes to discrete access points? I was just pointing out a factual flaw in your assertion that so called DIY devices did not support 802.11ax. My strong disagreement was with the state of COTS routers.
I think you kind of missed my point. The WiFi 7 magic - or any magic, really - that you’re ascribing to Asus or any consumer-facing manufacturer doesn’t even come from them: they buy that shit in, slap on a load of marketing drivel, try to con your grandma or some gaming kid out of a few hundred bucks and call it a day. At best they’re gonna be sending it out for emissions testing because they have to, to get it certified. Maybe they test the antenna placement, but given some of the testing I’ve seen it’s clear they don’t even always do that.
If any of those guys did anything considerably different to anyone else it wouldn’t be a standard, right? The clients would only work with matching routers! In fact years back you used to see this: around 802.11n, some manufacturers had some superfast bullshit mode that only worked when you had a matching pair.
The whole point of standards like 802.11be is to make sure everything works together and does more or less the same thing, and the whole point of their marketing department is to convince you that their special brand of bullshit does something super special and unique when by definition it cannot without breaking standards, rendering it unable to use the term wifi.
Home routers have been dog shit for years, and behind the marketing they largely all still are. Don’t allow that shit. Don’t forgive them. I literally linked you to a laundry list of vulnerabilities in Asus routers patched last month, some of which had been known for years.
Sorry my dude. I know this is a bit of a ranty winding post, but holy shit I’m guessing you haven’t been around for the last 20 years of bullshit that these companies have been pulling.
NONE of them deserve your loyalty and they definitely don’t know the meaning of the word kindness. They have proven time and time again that they would sell their own granny for a few pennies.
Don’t accept that shit.
I agree entirely. Look at these lovely radiation patterns:
https://help.ui.com/hc/en-us/articles/115005212927-UniFi-Network-AP-Antenna-Radiation-Patterns
I strongly suspect that those antennas are highly optimised and they lend themselves to being mounted in optimal locations. A couple of rubber duck antennas will work in the same room, but keeping that stuff up and away from all the other gear will pay dividends on the fringes of your wifi coverage.
While I agree in general that turnkey solutions for access points (not routers) are largely preferable, I must point out that it is at least possible to achieve 802.11ax with OpenWrt (and similar projects like DD-WRT): https://openwrt.org/toh/views/toh_available_16128_ax-wifi for example, as I found out from this excellent post: https://lemmy.ninja/post/224052
That post also does a fantastic job of explaining the inherent issues of dealing with wifi hardware from an open source perspective.
Features like MU-MIMO and beamforming that call for antenna arrays are part of the respective WiFi specifications, and are baked into the closed firmware of the radios. While manufacturers will fight hard to make you believe they are implementing something special, the fact is that they must abide by the WiFi standards and are just rebranding capabilities built into the radios they buy. Hence even FOSS software can implement them. Check out this thread I found which describes what’s going on:
https://forum.dd-wrt.com/phpBB2/viewtopic.php?p=1215880
What troubles me about the AP/router combos from Asus and the like is that they charge so much for so little, and they have a history of being generally shitty: https://www.pcworld.com/article/447083/netgear-accuses-asus-of-submitting-fraudulent-test-results-to-the-fcc.html
It was these same companies that claimed gigabits of WiFi throughput, when they were in fact advertising the combined speed of three antennas over two bands. No one device would ever see the speed they slapped on the package. Heck, even if they did, grandma probably can’t appreciate the fact that faster WiFi doesn’t mean shit if you have a 20/3 asymmetric DSL connection.
The specialised hardware - ASICs that push packets - is what allows them to get away with megabytes of RAM and tiny amounts of storage along with extremely anemic CPUs. Very little if any of this is designed in house: they pick components or even an entire SoC, lay out a board, test it and ship it with a nauseating markup. Those ASICs aren’t expensive: they’re in the most basic switches, and the super duper WiFi hardware is just a rebadged product from another company. This isn’t really a criticism, it just means that they are efficient and low power but hardly unique. It is, though, an observation that even the high end router/AP combos are far from the bleeding edge tech worthy of the high prices they charge, imho. Why the fuck is 10GbE still so expensive in 2023? There are 10 year old SATA3 drives that can saturate a GigE uplink.
The software side usually consists of a minimised Linux build, often running much of the same open source software found on DIY builds. Back in the bad old days it even took some pressure to get them to abide by the respective OSS licenses and give their code back to the communities they were using to make money.
They’re charging a premium for very low spec hardware, and not doing a great deal to earn their keep.
Finally while these companies are now being forced to provide updates, they are still shipping products with security issues:
One of the most relevant examples from that article being: ‘The other critical patch is for an almost five-year-old CVE-2018-1160 bug caused by an out-of-bounds write Netatalk weakness that can also be exploited to gain arbitrary code execution on unpatched devices.’
So while I can agree that a DIY Wifi AP will likely cause a certain amount of avoidable grief, I simply can’t abide by the notion that OPNsense or PFsense is unable to offer feature parity with COTS routers.
As an addendum, if my $100 x86 router can route 1GbE as well as a $300 RGB monstrosity, what are they bringing to the party exactly? Why should we indulge that? Why should we tolerate such gratuitous bullshit?
If your only goal is working HTTPS then, as the other comment correctly suggests, you can do DNS-01 authentication with Let’s Encrypt + Certbot + some brand of DynDNS.
However the other comment is incorrect in stating that you need to expose an HTTP server. This method means you don’t need to expose anything. For instance, if you do it with HA:
https://github.com/home-assistant/addons/blob/master/letsencrypt/DOCS.md
Certbot uses the API of your DDNS provider to authenticate the cert request by adding a TXT record, and then pulls the cert. No proxies, no exposed servers and no fuss. Point the A record at your RFC 1918 IP.
You can then configure your DNS to keep serving cached responses. I think SSL will still be broken while your connection is down, but you will be able to access your services.
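What that looks like depends on your resolver; as an illustration, if you happen to run Unbound locally it has options for exactly this (the values are just examples):

```
# unbound.conf - keep answering from cache even after records expire,
# e.g. while the upstream connection is down
server:
    serve-expired: yes
    serve-expired-ttl: 86400   # serve stale records for up to a day
```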
Edit to add: I don’t understand why so many of the HTTPS tutorials are so complicated and so focused on adding a proxy into the mix even when remote access isn’t the target.
Certbot is a simple command line tool. It asks the Let’s Encrypt API for a challenge. It puts the challenge value in a TXT record on a subdomain of the domain you want a certificate for. Let’s Encrypt confirms the record is there and spits out a cert. You add the cert to whatever server it belongs to, or ideally Certbot does that for you. That’s it, working HTTPS. And all you have to expose is the RFC 1918 address. This, to me at least, is preferable to proxies and exposed servers.
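As a concrete illustration, this is roughly what the whole thing boils down to with Certbot’s DNS plugins - here using the Cloudflare plugin purely as an example, your DDNS provider’s plugin (or a manual hook script) will differ:

```
# Request a cert for an internal-only hostname via DNS-01.
# home.example.com is a placeholder; the credentials file holds the DNS API token.
certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials /etc/letsencrypt/cloudflare.ini \
  -d home.example.com
```

Renewal is then just `certbot renew` on a timer; nothing ever needs to be reachable from the internet.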
Not that I don’t love Ubiquiti, but OPNsense and pfSense will also handle failover:
https://docs.opnsense.org/manual/multiwan.html
This is also possible within Linux, Windows and *BSD by just adding both possible routes and weighting them accordingly:
https://serverfault.com/questions/226530/get-linux-to-change-default-route-if-one-path-goes-down
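For example, on Linux with iproute2 it can be as simple as the following (gateways and interface names are made up for illustration):

```
# Primary WAN via eth0, backup via wwan0.
# The lower-metric route is preferred while its interface/link is up.
ip route add default via 192.168.1.1 dev eth0 metric 100
ip route add default via 192.168.2.1 dev wwan0 metric 200
```

The caveat, as the serverfault thread discusses, is that the kernel only falls back automatically when the primary link actually goes down; if the link stays up but the path beyond it is dead, you need something watching the route (a ping script or a proper failover daemon) to pull the primary route.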
Yes. Depending on your network configuration you could consider using cellular data as a backup form of connectivity.
He asked for a recommendation which I can’t provide because I haven’t gone down the route he wants to know about, hence the first line and my explanation of why I chose not to do that.
I then speculated how I would do it if I were in his position. Then I broke down his question to help him examine what he really wanted: a completely free (as in open source) appliance, a free operating system and/or free drivers.
Then finally I explained why you’re unlikely to get a truly free radio. I’m sorry if you or others found this unhelpful, I was just trying to condense quite a lot of information into a short post.
I did just see this posted: https://lemmy.ninja/post/224052
The short answer is no, because it’s a pain in the ass and offers little tangible benefit. But I can speculate.
If I was going down this path I would look for an x86 box with a WiFi card that is supported by OPNsense or pfSense (that’s usually going to depend on the available *BSD drivers). I don’t know how well they would function, but I would expect quirks. You could also check the compatibility lists of the open router distributions to find something that’s well supported. You can check the forums for posts from people with similar goals and check their mileage.
You might even be able to achieve this with an ESP32.
But what are you hoping to achieve? Do you mean open radio firmware or do you mean open drivers? Or an open OS talking to a closed radio? What’s the benefit?
Radios in any device are discrete components running their own show.
Open drivers should be possible. However I have a feeling that open firmware for the radio hardware in WiFi access points is going to be extremely hard to find. The regulatory agencies really don’t want the larger public to have complete control because of the possibility of causing interference and breaking the rules (for good reason - imagine if your neighbour had bad signal so he ignorantly cranked up the power output, not realising that he couldn’t do the same with his client devices, rendering the change useless).
I seem to remember a change in FCC rules some time back that seemed to disallow manufacturers obtaining certification for devices that permitted end users to modify the firmware, much to the concern of open router users at the time. The rule was aimed at radio firmware but the concern was that the distinction would be lost and the rule applied to the entire router by overzealous manufacturers who hate third party firmware at best.
A fully open radio is basically an SDR. Can you move packets over an SDR? Hell yes, but now you’re in esoteric ham radio territory. It’s going to be a hell of a fun project and you’re going to learn a lot, but as far as a practical WiFi AP goes, your results will be limited.
I use FOSS wherever it’s practical but if you want working wifi just stick to the well tested brand names. For what it’s worth you probably won’t gain any security by going open, if there’s any weakness it’ll probably be baked in at the protocol level which open devices would need to follow anyway. At least a discrete AP can be isolated and has no reason to be given internet access.
I would take these projects over stock firmware on traditional home routers any day. And I have done where I’ve been unable to rig a more permanent solution. They have an honourable mission in a section of hardware filled with absolute junk.
But the trouble is that the sheer number of hardware targets and the meagre resources on these devices, combined with most manufacturers’ contempt for third party firmware, make them hard to flash and leave them rarely updated - if you’re lucky enough to have a supported device at all. Even then they are prone to quirks and bugs. Some devices are well supported and do keep receiving updates, but they often cost more than an equivalent low-TDP general purpose computer.
Just imagine: the developers of DD-WRT have to target not just each individual router model but every single revision, as the manufacturers have a habit of switching major components or even entire chipsets between product revisions. On top of that, the documentation for the components used might be sparse or non-existent. I’m impressed that these router distributions can make it work at all, but that doesn’t make it any more practical or sustainable.
At this point you may as well flip the router into modem mode and run OPNsense or pfSense and get a fully fledged operating system running on far more resources than any of these SoCs. Assuming you have the power budget you’ll get assured updates and far more flexibility with fewer compatibility issues and quirks. My passively cooled N5105 box with 8GB of RAM and a 128GB drive happily routes a 1Gb/s WAN while simultaneously hosting a busy Home Assistant instance. The resources aren’t even maxed out.
Following my experience I will always opt to run dedicated APs. DD-WRT WiFi support is amazing considering what they have to work with, but there are only so many WiFi chipsets they can support, and because they try to support as much as they can there are always problems with something. I really don’t have time to constantly troubleshoot the WiFi following cryptic posts from years ago. Ubiquiti stuff isn’t flawless either, but it’s stable and a lot less prone to hard-to-trace issues. YMMV.
DD-WRT and friends, I love you - you really saved my ass a few times when all I had was some shitty CPE. You’re still way nicer than Cisco gear. But I find it hard to justify using a gimped-out SoC from a couldn’t-care-less manufacturer when I can buy a 5W TDP passively cooled x86 computer for ~$100.
I ended up using Aqara switches that talk to a Sonoff ZB BRIDGE-P flashed with Tasmota.
Sonoff TX series might fit your bill but I wanted a real switch rather than a capacitive one. Shelly are usually good quality but they aren’t easily available where I am: https://templates.blakadder.com/shelly_1.html
https://templates.blakadder.com/ is a pretty comprehensive overview of your options
https://www.zigbee2mqtt.io/supported-devices/ covers devices that are compatible with z2m
https://zigbee.blakadder.com/zha.html lists devices supported by ZHA
I started with ZHA in my zigbee setup but moved to z2m due to device support.
Edit: Be aware that some of the Aqara switches are quite big on the back and you may struggle to fit them. I use E series switches that are considerably smaller than other types.
A second vote for Reolink. They’re entirely adequate for most home scenarios.
Dahua are also very good if you can find them, however they are aimed at professional installers. They cover almost every scenario imaginable and have good on-device AI features. They do have their idiosyncrasies, but they do everything you could need and offer excellent low-light performance for very little cost. There is also a very good Home Assistant integration.
You’ll find a lot of people tend to choose between Dahua and the more expensive Hikvision on cctvforums. You should be able to pick up a capable 4MP Dahua with tripwire detection for 60GBP. These cameras can (sometimes literally) see in the dark.
Avoid ESP32 Cams. They are very low frame rate and produce a very noisy image. They’re fun to tinker with but are nowhere near the quality of a real IPC.