• 0 Posts
  • 14 Comments
Joined 1 year ago
Cake day: June 9th, 2023




  • “Lossless” has a specific meaning: you haven’t lost any data, perceptible or not. The original can be recreated down to the exact 1s and 0s. “Lossy” compression generally means “data is lost, but it’s worth it and still does the job,” which is what it sounds like you’re looking for.

    With images, if technology has advanced, you can sometimes find ways to apply even more compression without any more data loss, but that’s less common in video. People can choose to keep raw photos with all the information the sensor captured when the photo was taken, but a “raw” uncompressed video would be preposterously huge, so video codecs have to throw out a lot more data than photo formats do. That’s fine because videos keep moving; you don’t stare at a single frame for more than a fraction of a second anyway. But it doesn’t leave much room for improvement without throwing out even more, and going from one lossy algorithm to another has the downside that the new algorithm can’t tell what’s “good” visual data from the original and what’s just compression noise from the first lossy algorithm, so it will try to preserve junk while also adding its own. You can always give it a try and see what happens, of course, but there are limits before it starts looking glitchy and bad.
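
    To make the “exact 1s and 0s” point concrete, here’s a minimal sketch in TypeScript (Node) using the built-in zlib module, which is a lossless compressor; the input data is just a made-up stand-in:

    ```typescript
    // A minimal lossless round trip with Node's built-in zlib.
    // The "original" here is stand-in data for a photo, frame, or document.
    import { deflateSync, inflateSync } from "node:zlib";

    const original = Buffer.from("exactly the bytes we started with ".repeat(100));

    const compressed = deflateSync(original); // smaller, but fully reversible
    const restored = inflateSync(compressed); // the identical 1s and 0s come back

    console.log(original.equals(restored)); // true: no data was lost
    console.log(`${original.length} -> ${compressed.length} bytes`);
    ```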


  • Archive Team often uses the Internet Archive to share the things they save, and obviously they have a shared goal of saving a copy of everything ever made, but they aren’t the same people. Archive Team is a vigilante white hat hacker group (well, maybe a little bit grey), and running a Warrior basically means you’re volunteering to be part of their botnet. When a website is about to be shut down, they’ll whip together a script and push it out to the botnet to try to grab as much of the dying site as they can, and when there’s more downtime they have other projects, like trying to brute force all those awful link shorteners so that when those services inevitably die, people can still figure out where each link was supposed to point.
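
    As a rough sketch of what the link-shortener side of that looks like (the shortener domain and codes below are made up, and a real project is far more careful about throttling and error handling), resolving a short link is just asking for the redirect without following it:

    ```typescript
    // Resolving a short link: request it, but don't follow the redirect.
    // example.st and the codes are made up for illustration.
    async function resolveShortLink(code: string): Promise<string | null> {
      const res = await fetch(`https://example.st/${code}`, { redirect: "manual" });
      // 3xx responses carry the real destination in the Location header.
      return res.status >= 300 && res.status < 400 ? res.headers.get("location") : null;
    }

    // "Brute forcing" is just walking the code space and saving whatever comes back.
    for (const code of ["aB3", "aB4", "aB5"]) {
      resolveShortLink(code).then((target) => console.log(code, "->", target));
    }
    ```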



  • I know TiddlyWiki quite well but have only poked at Logseq, so maybe the two are more similar than I realize, but TiddlyWiki is almost entirely implemented in itself. There’s a very small JavaScript core, but most of it is implemented as wiki objects (they call them “tiddlers,” yes, really), and almost everything you interact with can be tweaked, overridden, or imitated. There’s almost nothing that “the system” can do that you can’t. It’s idiosyncratic, kind of its own little universe of concepts to learn, but if you do learn it, it’s insanely flexible.

    Dig deep enough, and you’ll discover that it’s not a weird little wiki — it’s a tiny, self-contained object database and web frontend framework that they have used to make a weird little wiki, but you can use it for pretty much anything else you want, either on top of the wiki or tearing it down to build your own thing. I’ve used it to make a prediction tracker for a podcast I follow, I’ve made my own todo list app in it, and I made a Super Bowl prop bet game for friends to play that used to be spreadsheet-based. For me, it’s the perfect “I just want to knock something together as a simple web app” tool.

    And it has the fun party trick (this used to be the whole point of it, but I’d argue it has moved beyond that now) that your entire wiki can be exported to a single HTML file containing the entire fully functional app, even allowing people to make their own edits and save a new copy of the HTML file with the new contents. If running a small web server isn’t an issue, that’s the easiest way to use it, because saving is automatic and everything is centralized; otherwise you need to jump through some hoops to get your web browser to allow writing to the HTML file on disk, or just save a new copy every time.
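
    For a rough picture of that “object database” idea, here’s a small TypeScript sketch of the kind of data model involved; it’s an analogy, not TiddlyWiki’s actual API or wikitext syntax:

    ```typescript
    // Each "tiddler" is roughly a titled object with tags and arbitrary key/value fields.
    interface Tiddler {
      title: string;
      text: string;
      tags: string[];
      fields: Record<string, string>;
    }

    const wiki: Tiddler[] = [
      { title: "GettingStarted", text: "Welcome!", tags: ["docs"], fields: {} },
      { title: "2024-01-08", text: "Watched the game.", tags: ["Journal"], fields: { mood: "good" } },
    ];

    // TiddlyWiki's filters play roughly the role of a query like this one.
    const journal = wiki.filter((t) => t.tags.includes("Journal"));
    console.log(journal.map((t) => t.title)); // ["2024-01-08"]
    ```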



  • OPML files really aren’t much more than a list of the feeds you’re subscribed to. Individual posts or articles aren’t in there. I would expect that importing a second OPML file would just add more subscriptions, but it’d be up to the reader app to decide what it does.
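
    As a sketch of what an import amounts to, if you think of each OPML file as little more than a set of feed URLs (the addresses below are placeholders), merging a second one is basically a set union:

    ```typescript
    // Feed URLs as pulled out of two OPML files (placeholder addresses).
    const existing = new Set([
      "https://example.com/feed.xml",
      "https://blog.example.org/rss",
    ]);
    const imported = ["https://blog.example.org/rss", "https://news.example.net/atom"];

    // Importing the second file just adds the subscriptions you don't already have.
    for (const url of imported) existing.add(url);

    console.log([...existing]); // three unique subscriptions, the duplicate collapsed
    ```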


  • If you ask an LLM to help you with a legal brief, it’ll come up with a bunch of stuff for you, and some of it might even be right. But it’ll very likely do things like make up a case that doesn’t exist, or misrepresent a real case, and as has happened multiple times now, if you submit that work to a judge without a real lawyer checking it first, you’re going to have a bad time.

    There’s a reason LLMs make stuff up like that, and it’s because they have been very, very narrowly trained compared to a human. The training process is almost entirely about getting good at predicting what words follow what other words, but humans get that and so much more. Babies aren’t just associating the sounds they hear; they’re also associating the things they see, the things they feel, and the signals their body is sending them. Babies are highly motivated to learn and predict the behavior of the humans around them, and as they get older and more advanced, they get rewarded for creating accurate models of the mental states of others, mastering abstract concepts, and doing things like making art or singing songs. Their brains are many times bigger than even the biggest LLM, their initial state has been primed for success by millions of years of evolution, and the training set is every moment of human life.

    LLMs aren’t nearly at that level. That’s not to say what they do isn’t impressive, because it really is. They can synthesize unrelated concepts in a stunningly human way, even ones they’ve never been trained on specifically. They’ve picked up a lot of surprising nuance just from the text they’ve been fed, and it’s convincing enough to make you think something magical is going on. But ultimately, they’ve been optimized to predict words, and that’s what they’re good at; although they’ve clearly developed some impressive skills to accomplish that task, it’s not even close to human level. They spit out a bunch of nonsense when what they should be saying is “I have no idea how to write a legal document, you need a lawyer for that,” but that would require a sense of their own capabilities, a sense of what they know and why they know it and where it all came from, knowledge of the consequences of their actions, and a desire to avoid causing harm, and they don’t have any of that. And how could they? Their training didn’t include any of that; it was mostly about words.

    One of the reasons LLMs seem so impressive is that human words are a reflection of the rich inner life of the person you’re talking to. You say something to a person, your ideas are broken down and manipulated in an abstract way in their head, and then they’re turned back into words forming a response which gets said back to you. LLMs are piggybacking off of that a bit: by getting good at mimicking language, they’re able to hide that their heads are relatively empty. Spitting out a statistically likely answer to the question “as an AI, do you want to take over the world?” is very different from considering the ideas, forming an opinion about them, and responding with that opinion. LLMs aren’t just doing statistics, but you don’t have to go far along that spectrum before the answers start seeming thoughtful.
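
    To make the “predicting what words follow what other words” point concrete, here’s a toy sketch of that prediction task; real models use huge neural networks over enormous corpora rather than a little lookup table, so treat this purely as an illustration:

    ```typescript
    // Count which word follows which in a tiny corpus, then "generate" by always
    // picking the most common follower: the prediction task at its crudest.
    const corpus = "the cat sat on the mat the cat ran".split(" ");

    const followers = new Map<string, Map<string, number>>();
    for (let i = 0; i < corpus.length - 1; i++) {
      const counts = followers.get(corpus[i]) ?? new Map<string, number>();
      counts.set(corpus[i + 1], (counts.get(corpus[i + 1]) ?? 0) + 1);
      followers.set(corpus[i], counts);
    }

    function predict(word: string): string | undefined {
      const counts = followers.get(word);
      if (!counts) return undefined;
      return [...counts.entries()].sort((a, b) => b[1] - a[1])[0][0];
    }

    console.log(predict("the")); // "cat", the statistically likely next word
    ```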


  • In its complaint, The New York Times alleges that because the AI tools have been trained on its content, they sometimes provide verbatim copies of sections of Times reports.

    OpenAI said in its response Monday that so-called “regurgitation” is a “rare bug,” the occurrence of which it is working to reduce.

    “We also expect our users to act responsibly; intentionally manipulating our models to regurgitate is not an appropriate use of our technology and is against our terms of use,” OpenAI said.

    The tech company also accused The Times of “intentionally” manipulating ChatGPT or cherry-picking the copycat examples it detailed in its complaint.

    https://www.cnn.com/2024/01/08/tech/openai-responds-new-york-times-copyright-lawsuit/index.html

    The thing is, it doesn’t really matter if you have to “manipulate” ChatGPT into spitting out training material word-for-word; the fact that it’s possible at all is proof that, intentionally or not, that material has been encoded into the model itself. That might still be fair use, but it’s a lot weaker than the original argument, which was that nothing of the original material really remains after training, that it’s all synthesized and blended with everything else to create something entirely new that doesn’t replicate the original.
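
    A sketch of what that kind of test looks like in principle: give the model the start of a known passage and measure how much of the real continuation comes back verbatim. The generate function here is a hypothetical stand-in, not a real OpenAI API call:

    ```typescript
    // "generate" is a hypothetical stand-in for whichever model API is being tested.
    type Generate = (prompt: string) => Promise<string>;

    // How many leading words of the real continuation came back verbatim?
    function verbatimPrefixWords(expected: string, actual: string): number {
      const e = expected.split(/\s+/);
      const a = actual.split(/\s+/);
      let i = 0;
      while (i < e.length && i < a.length && e[i] === a[i]) i++;
      return i;
    }

    async function memorizationCheck(article: string, generate: Generate): Promise<void> {
      const prompt = article.slice(0, 200);  // the opening of a known published passage
      const expected = article.slice(200);   // what the original actually says next
      const actual = await generate(prompt);
      console.log("verbatim words reproduced:", verbatimPrefixWords(expected, actual));
    }
    ```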


  • There just isn’t much use for an approach like this, unfortunately. TypeScript doesn’t stand alone enough for it. If you want to know how functions work, you need to learn how JavaScript functions work, because TypeScript doesn’t change that. It adds some error checking on top of what’s already there, but that’s it.

    An integrated approach would just be a JavaScript book with all the code samples edited slightly to include type annotations, a heavily revised chapter on types (which would be the only place where all those type annotations make any difference at all, in the rest of the book they’d just be there, unremarked upon), and a new chapter on interoperating with vanilla JavaScript. Seeing as the TypeScript documentation is already focused on those exact topics (adding type annotations to existing code, describing how types work, and how to work with other people’s JavaScript libraries that you want to use too), you can get almost exactly the same results by taking a JavaScript book and stapling the TypeScript documentation to the end of it, and it’d have the advantage of keeping the two separate so that you can easily tell what things belong to which side.
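
    As a tiny illustration of that relationship, here’s the same function with and without TypeScript’s annotations; the types are the only difference, and the runtime behavior is untouched:

    ```typescript
    // The JavaScript version is exactly this, minus the ": number" annotations:
    // function area(width, height) { return width * height; }

    // TypeScript adds the annotations; the logic and runtime behavior are unchanged.
    function area(width: number, height: number): number {
      return width * height;
    }

    area(3, 4);       // fine
    // area("3", 4);  // TypeScript flags this at compile time; plain JS would not
    ```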


  • I use TiddlyWiki for, well, a bunch of my projects, but primarily for my task management. You can use it as a single HTML file, which contains the entire wiki, your data, its own code, all of it, and of course use it in any browser you like. Saving changes is a bit of a pain until you find a browser extension or some other way of enabling more seamless editing than re-saving the edited wiki as another single HTML file, but there are many solutions to that as described on their site above.

    The way I use it, which is more technical but also logistically simpler, is by running their very minimal Node.js server, which takes care of saving and syncing entirely; you just visit it and use it in any browser.

    The thing I like about TiddlyWiki is that although on its surface it’s a quirky little wiki with a fun party trick of fitting into an HTML file, what it actually is is a self-contained, lightweight object database with a simple yet powerful query language and a miniature front-end web development environment, which they have used to implement a quirky little wiki. Each “article” is an object that is taggable and has key/value data, and “widgets” can be used in the text to edit and display that data, pulling from the “database” using filters. You can use it to make simple web apps for yourself, and they come together very quickly once you know what you’re doing, and the wiki itself is a demonstration that a more complex web app is possible too. The wiki is implemented entirely using those same tools, and everything is open for you to tweak and edit to your liking.

    I moved a Super Bowl guessing/fake gambling game that I run from a form and spreadsheet to a TiddlyWiki and now I can share an online dashboard that live updates for everyone and it was decently easy to make and works really well. With my task manager, I recently decided to add a feature where I can set an “agenda” value on any task, and they all show up in one place, so I could set it as “Boss” and then quickly see everything I wanted to bring up in our next 1 on 1 meeting. It took just a few minutes to add the text box to anything that gets tagged “Task” and then make another page that collected them all and displayed them in sections.