nice.
can usually get a pretty good performance increase with hand-written asm where appropriate.
don’t know if it’s a coincidence, but i’ve never seen someone who’s good at writing assembly say that it’s never useful.
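to make that concrete, here’s a throwaway sketch (mine, not from anyone above) of the kind of thing i mean: a portable C bit-count loop next to a hand-written x86-64 POPCNT via gcc-style inline asm. on modern toolchains __builtin_popcountll usually gets you the same instruction anyway, which is exactly the “where appropriate” caveat:

```c
#include <stdio.h>

/* portable C version: Kernighan's clear-the-lowest-set-bit loop */
static unsigned popcount_c(unsigned long long x) {
    unsigned n = 0;
    while (x) { x &= x - 1; n++; }
    return n;
}

#if defined(__x86_64__)
/* hand-written alternative: a single POPCNT instruction.
   assumes the cpu actually supports POPCNT - a real build would gate this on cpuid */
static unsigned popcount_asm(unsigned long long x) {
    unsigned long long r;
    __asm__ ("popcnt %1, %0" : "=r"(r) : "r"(x));
    return (unsigned)r;
}
#endif

int main(void) {
    unsigned long long v = 0xDEADBEEFCAFEBABEull;
    printf("c loop: %u\n", popcount_c(v));
#if defined(__x86_64__)
    printf("asm:    %u\n", popcount_asm(v));
#endif
    return 0;
}
```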
this is a complex topic and probably belongs in a different thread.
essentially i don’t personally believe in punishing citizens of a country for the actions of its politicians.
at best it’s misguided, at worst it basically empowers politicians on both sides who draw power from friction between citizens of different nations. typical divide-and-conquer bs.
why do you not think a software developer wouldn’t have to
wouldn’t or shouldn’t? if you mean wouldn’t, it’s not surprising, and it’s not the dev’s fault they have to comply with policy, so the criticism is not with them.
if you mean shouldn’t, i don’t agree with punishing athletes either. but regarding foss specifically, isn’t the “friendly competition” of the olympics equivalent to that? sort of. in some ways yes. in other ways it’s actually the opposite.
collaboration is actually the opposite of competition.
and while there’s a case for the benefits of healthy sports competition, i don’t believe it truly fulfills the spirit of international goodwill to the degree it says on the packaging. foss and other forms of international collaboration for the betterment of greater society are definitely on a higher rung - in my opinion at least.
personally i don’t agree with sanctioning foss communities.
but fuckit, bring on more forks i say.
among other benefits, the sci-fi-type scenario of nations trying to patch each other’s backdoors and slip in new backdoors (and hopefully innovations) could make for an exciting OS space-race type scenario.
gonna use this as an opportunity to launch my ted talk:
there’s no such thing as anything but “race mixing” since every single human on the planet is a mix of different ancient races anyway
(or to put it another way, race is a bs term anyway since we’re all homo sapiens)
agreed the existing system is deeply flawed and currently on a trajectory to critical failure.
regarding peer review itself, this is another point. people treat peer review as this binary thing which takes place prior to publication and is like a box that’s ticked after publication.
which is ofc ridiculous: peer review is an ongoing process, meaning many of the important parts take place after publication. fortunately this does happen in a variety of fields and situations, but it not being the norm leads to a number of the issues under discussion. further, it creates an erroneous mindset that simply because something has been published it’s now fully vetted, which is ofc absurd.
also agreed, the process should be blind. i believe it often already means the reviewers’ identities are hidden, but i also agree the authors should be hidden during the process too.
don’t see the role being unpaid as a problem though; introducing money would complicate things a lot, create even more conflicts of interest, and undermine what little integrity the process still has.
i really love your idea of standardising the process in a network-like protocol. this would actually make an excellent RFC and i’d totally support that.
in a similar vein, this is why i’ve been advocating for a complete restructuring of the support given to reproduction. as you mentioned, the current process is vulnerable to a variety of human network effects, and among other issues with that problem, i also see the broken reproduction system playing a role here.
as it currently stands, reviewers can request more explanation or data, the introduction of changes/additional caveats etc, or reject the paper entirely. what this means is a reviewer can only really gauge whether something sounds right, or plausible. and as you correctly identify, certain personalities or flavours of prevailing culture will play a role in the reviewer’s assessment of what merely seems plausible or correct. this has been shown to make major breakthroughs more difficult to communicate and to face unfair resistance, which has frankly held back society at large.
whereas if there were an organised system of reproduction, it’s no longer left as just a matter of opinion about how something sounds. this is ofc how it’s supposed to work already, and sometimes does, but all too often does not. imo it would be a great detail to include in your idea for a protocol-based review process.
i don’t envision this as always being something which must take place prior to publication; it can and should be an ongoing process, where papers could have their classification formally upgraded over time. currently the only ‘upgrade’ a paper really receives is publicity or number of citations, the flaws of which are yet another discussion again.
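to make the protocol idea a bit more concrete, here’s a rough sketch of what a single record in such a scheme might carry - every name and field here is purely my own guess, nothing standardised:

```c
#include <stdint.h>

/* purely hypothetical record layout - field names and statuses are my own invention */
enum review_status {
    REVIEW_SUBMITTED,
    REVIEW_IN_PROGRESS,
    REVIEW_REVISIONS_REQUESTED,
    REVIEW_ACCEPTED,
    REVIEW_REJECTED
};

enum repro_status {
    REPRO_NOT_ATTEMPTED,
    REPRO_IN_PROGRESS,
    REPRO_CONFIRMED,
    REPRO_PARTIAL,
    REPRO_FAILED
};

struct review_record {
    char               paper_id[64];    /* stable identifier, e.g. a DOI */
    char               reviewer_id[64]; /* pseudonymous, so the process stays blind */
    enum review_status review;
    enum repro_status  reproduction;    /* independent reproduction tracked explicitly */
    uint32_t           classification;  /* can be formally upgraded post-publication */
    int64_t            updated_at;      /* unix timestamp of the latest change */
};
```

the point being that review and reproduction status would live in the same record and keep changing after publication, rather than review being a one-off tick-box.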
unrelated: @OP looks like you accidentally posted this many times. Imo would be good to delete the others to keep the conversation in 1 place.
I generally agree. The system is utterly rotten.
Only thing I’d mention slightly counter to that is that peer review - as a process - is still something I believe is useful.
That is, the process of people with relevant domain expertise critiquing methodology, findings etc. When it’s done right, it absolutely produces better results which everyone benefits from.
Where it fails is when cliques and ingroups are resistant to change on principle, which is ofc actually an anti-scientific stance. To put it another way, the best scientist wants to be proven wrong (or less correct) if that is indeed the truth.
It also fails, as you identify, when the corrupt rot of powerful publishers (who are merely leeches) gate-keep the potential for communicating alternate models.
It also fails where laypeople parrot popsci talking points without understanding that peer review is far from infallible. Even the best of the best journals still contain errors - any genuine scientist is the first to admit this. Meanwhile, popsci enthusiast laypeople think that just because something was printed in any journal, it must be unequivocally 100.000% truth, and are salivating at the opportunity to label any healthy dose of skepticism as “antiscience” or “conspiracy theorist” etc.
It also seems to fail when popsci headlines invariably don’t include the caveats all good scientists include with their findings etc.
Final point which I think would help enormously: it’s very, very difficult to get funding or high-worth publications in reproduction. The obsession with novelty is not only unhealthy, it’s unproductive.
Reproduction is vastly undervalued. Sadly it’s not easy to get funding or support for ‘merely’ reproducing recent results. There are two reasons why this should change: firstly, it will ofc help with the reproducibility crisis, and it will also afford upcomers excellent opportunities to sharpen their skills and properly prepare for future ground-breaking work. To put it another way, when reading a novel paper you think you understand it. Only when you take it to the lab do you truly understand.
why does it need to be device agnostic?
happy to get into these subtopics, but it’s also possible i may not be understanding you properly because i agree with a lot of what you just said.
what are you attributing the close to 0 probability to?
if you wanna say “what’s the probability that CMG was at least partly talking out their arse about their capabilities (and especially any claim they were currently in possession of that capability)?”
i’d also give it like >90% probability they (CMG) are full of shit. in which case you could say i agree with you (to within say 10% error margin).
if you’re instead saying the probability is ~100% that audio surveillance capability cannot possibly currently exist outside TLAs because “someone would’ve published it already” then i really cannot agree. (and afaict that ars article does not support that stance either)
Not disputing the three letter agencies
The capability they were claiming to have would make a three letter agency very excited.
sorry i didn’t understand. didn’t you say you don’t doubt TLAs likely already have this capability?
oppressive regimes
most (all?) of whom are operating outside typical legal constraints and likely already have access to the existing million-dollar exploit trade.
further, i’m not sure how this changes the landscape anyway? it’s not without precedent that variations on capabilities can be useful to more than one market segment concurrently.
trivial to discover and flag as malware
can you explain further what you mean by this? i’m not sure there’s anything trivial about conclusive analysis of the deep complexities and dependencies of modern smart devices
Apple and Google would also be very keen to find and squash whatever loophole lets them record without showing the notification.
historically we’ve seen google can take over half a decade to address such things; afaict (welcome correction on this) apple’s generally been faster to respond, and i do agree it would run contrary to apple’s current public image to be seen enabling this. [not simping for apple btw, just stating that part of their brand currently seems to be invested in this]
in reality there’s a confluence of many agendas, and there are likely A LOT of global users running non-bleeding-edge or other variations on the myriad of sub-system components, regardless of what upstream entities like google implement. if you are aware of any conclusive downstream binary analyses please link
which if true would have been exposed/validated by security researchers long ago.
i agree the probability of discovery increases over time. and the landscape is growing more hostile to such activities. yet i’m not aware that a current lack of published discovery is actual proof it’s never happened.
tbh we have our doubts this leak is directly connected to solid proof “they are listening”.
but we’re not currently aware of any substantiated reasons to say with certainty “they’re absolutely not listening”
well they’re an ad company, so being full of shit is pretty much mandatory.
but i’m not aware of any evidence either way that they’re actually 100% full of shit on this exact issue? can you explain a little more how you know for certain they’re full of shit, or did you just mean “they’re most likely full of shit”?
honestly i think this is due to unplanned voice calls essentially being broken technology now.
imagine we had 2020s email spammers while mail servers still had 1990s spam filters: that’s basically where we’re at now with unplanned voice.
When you work in an industry where the entire collaborative workflow of everyone is based on software that doesn’t run on Linux, then not running that software is equal to not being able to work in that industry.
there’s no denying that’s true, though ofc it has a lot to do with microsoft’s very aggressive and anti-competitive practices.
though it’s all a bit tangential, the main issue i think comes down to what someone means when they say “everything”. certainly if someone said “you can do everything”, i’d expect what is (or should be) obviously a slight exaggeration to be taken as parlance. they don’t literally mean “everything”, they just mean most everyday things. i think it’s fairly common in everyday speech for someone to be able to work out that’s what they meant.
in the few rare cases when someone literally means absolutely everything, then yes, that silly statement would be incorrect, and if strictly intended with that meaning it would certainly qualify as misinformation.
Not sure if when people say you can “do everything that windows does”, they should be interpreted to mean “every single piece of software/drivers ever written for windows was also written for linux”.
Yeah I don’t know. Just see how the modern world is shaping society to the negative. I just don’t see where we are close to utopia. But right now we are on a different path.
That was essentially a big part of my point. We could be close to a utopia by now (from the perspective of technological possibilities).
Instead, as I said
for some suspicious reason we took a very different road, and here we are
That said I don’t currently believe technology itself is inherently bad.
Like all tools, it depends what you do with it.
Is a general-purpose tool like a hammer good or bad? It has the capacity for both. And therefore it’s up to the user which is which.
And that’s the issue really: what are we doing with our wondrous technology?
This might be a bit of a radical take. But in that ~125 year window i was referring to, a lot of the machines we’ve invented are actually weapons.
Weapons to destroy eachother physically (conflict/threats of violence etc).
Weapons to destroy nature (deforestation and probably most mining).
Weapons to destroy the mind (social media etc, actually most media now).
What if we’d had 1+¼ century of building a collective utopia instead of all these weapons?
afaict from the technical perspective it’s not really infeasible; it’s the non-technical problem: the user and what they use the tools for.
Another clue for us is probably the term “appropriate technology”, which is a vibe i think e.g. solarpunk is helping to cultivate.
Anyway, we’ve done A LOT of misuse. That’s why i don’t blame technology itself.
I still think it’s more about what we’ve done with it.
necessary decline in our quality of life
i’m not refuting your core premise.
but on the note of this issue, not sure i can agree.
have a look at this public infrastructure technology from 122 years ago:
imagine if we’d spent the last 1+¼ century collectively working towards the utopia this kind of project hinted at - instead of developing new machines to destroy?
typically they say utopian dreams scatter in the face of increased technological awareness. have to say my experience has been the opposite.
the more i learn about technology, the more i realise we could probably be very close to a near-utopia by now. for some suspicious reason we took a very different road, and here we are.
i reckon there’s a good chance she was trolling. and if so, fuckin well played.
you make excellent points, but not sure i can agree with your conclusion
if we had full source, a variety of automated analysis and hardening tooling could be applied, which is much, much more efficient than parsing the arch docs.
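as a toy illustration of what i mean by source-level tooling (my own contrived example, not tied to any specific vendor or project): once you have source, real existing tools like AddressSanitizer, -fstack-protector-strong, _FORTIFY_SOURCE and clang’s static analyzer become push-button, instead of reverse-engineering a shipped binary against the docs.

```c
/* overflow.c - deliberately buggy toy, to show what source-level tooling buys you.
 *
 * with source in hand you can just do things like (all real, commonly used flags):
 *   gcc -Wall -Wextra -O2 -fstack-protector-strong -D_FORTIFY_SOURCE=2 overflow.c
 *   gcc -g -fsanitize=address overflow.c && ./a.out   (ASan reports the overflow at runtime)
 *   clang --analyze overflow.c
 * none of which is an option when all you have is a binary and the arch docs.
 */
#include <string.h>

static void copy_name(const char *input) {
    char buf[8];
    strcpy(buf, input);   /* no bounds check: classic stack buffer overflow */
    (void)buf;
}

int main(void) {
    copy_name("definitely longer than eight bytes");
    return 0;
}
```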
sure. specifically, different tools are good for different tasks. if someone’s tasks don’t require the use of a certain tool, it’s quite reasonable for them to say “this tool isn’t useful for me”.
it’s another thing altogether to say “this tool isn’t useful at all”, especially when they have not performed the tasks for which it is useful.