
  • I feel like setting up a new machine is just the easiest example to explain.

    Personally, I find dotfiles messy: you often just want to change one or two settings, but you always carry along the whole file with all kinds of irrelevant other settings. That also makes it impractical to diff two versions of a dotfile, especially when programs write semi-permanent state into them.

    I guess your mileage will vary depending on what programs or desktop environment you use.
    For example, I love KDE, but they really don’t do a good job of keeping their config files clean. Nix Plasma-Manager generally fixes that and, for example, allows defining the contents of the panel in a readable form.


  • Personally, the stepping stone I needed to know about is Nix Home-Manager, which basically lets you manage your dotfiles independently of the distro. From what I understand, if I do switch to NixOS, I’ll continue using this code with just some minor tweaks.

    But yeah, I agree with the verdict in the post. I like it a lot, but I would not have made it past the initial learning curve if I didn’t happen to be a software engineer. Sysadmins will probably be able to figure out how to put it to use, too, but it’s just not for non-technical Linux users.



  • Ephera@lemmy.ml to linuxmemes@lemmy.world · Desktop PTSD · 7 days ago

    On KDE, I’d recommend getting a KWin Script for tiling. Krohnkite is what people use currently.

    It’s not as buttery smooth as dedicated tiling window managers and it can be a bit glitchy at times, but it is better than one might expect and significantly easier (and likely less glitchy) than trying to get bspwm to work in Plasma.


  • Yeah, after writing that comment, I was thinking: if I do promote it, there’s a certain expectation that I’ll integrate or implement functionality that others want. At that point, it becomes less of an egoistic thing. And I’ll be doing more communication and whatnot, and therefore less programming.

    Maybe that’s the puzzle piece that OP is missing? If you don’t promote it, you have practically no extra work compared to developing it under a proprietary license. In fact, it often reduces the workload if you can just post the code publicly without having to secure the repo.
    And you don’t incur costs from giving it away either. So, as long as you only put in the work that you wanted to put in in the first place, you have no disadvantage from publishing it under an open-source license.




  • I mean, it sounds like it’s gonna be a fairly large codebase. Rust is definitely better equipped for large codebases than Python…

    I do agree that Python could attract more outside contributors, but from my experience, it’s not worth straying from your preferred tooling for that. Outside contributions will make up barely a fraction of code changes either way, so you should rather make sure that your core team is productive.





  • That’s definitely being done. It’s referred to as “tool calling” or “function calling”: https://python.langchain.com/docs/how_to/tool_calling/

    This isn’t as potent as one might think, because:

    1. each tool needs to be hooked up and described extensively.
    2. the naive approach where the LLM generates heaps of text when calling these tools, for example to describe the entire state of the chessboard as JSON or CSV, is unreliable, because text generation is unreliable.
    3. smarter approaches, like having an external program keep track of the chessboard state and send it to a chess engine, so that the LLM only has to forward the move the user described (see the sketch below), don’t really make sense to incorporate into a general-purpose language model. You can find chess chatbots on the internet, though.
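
    To illustrate the third point, here’s a minimal sketch of LangChain tool calling (as in the docs linked above), with the python-chess library keeping the board state outside the model. The make_move tool and the model name are illustrative assumptions, not something taken from the linked docs:

    ```python
    import chess  # python-chess keeps the authoritative board state outside the LLM
    from langchain_core.tools import tool
    from langchain_openai import ChatOpenAI

    board = chess.Board()

    @tool
    def make_move(move_san: str) -> str:
        """Apply a chess move in standard algebraic notation (e.g. 'Nf3')
        and return the resulting position as a FEN string."""
        board.push_san(move_san)  # raises on illegal moves, so the state stays consistent
        return board.fen()

    # The model only extracts the move from natural language; it never has to
    # serialize the whole board as JSON/CSV itself.
    llm = ChatOpenAI(model="gpt-4o-mini").bind_tools([make_move])  # hypothetical model choice
    response = llm.invoke("I'll open with knight to f3.")
    print(response.tool_calls)  # e.g. [{'name': 'make_move', 'args': {'move_san': 'Nf3'}, ...}]
    ```

    The point of this split is that the unreliable part (free-form text generation) shrinks to emitting one short argument, while the board bookkeeping stays deterministic.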

    But all in all, it is a path forward where the LLMs could just do the semantics and then call a different tool for each thinky job, serving at least as a user interface.
    The hope is for it to also serve as glue between these tools, automatically calling the right ones and passing their output into other tools. I believe the next step in this direction is “agentic AI”, but I haven’t yet managed to cut through the buzzword soup to figure out what that actually means.








  • It feels more solid to have a complex program covered by tests, yes, but how can this be confirmed in an objective way? And if it can, for which kind of software is this valid? Are the same methodologies adequate for web programming as for industrial embedded devices or a text editor?

    Worth noting here that tests should primarily serve as a (self-checking) specification, i.e. documentation of what the code is supposed to do.
    The more competent your type checking is and the better your abstractions are, the less you need to rely on tests to find bugs in the initial version of the code. You might be able to write the code, fix the compiler errors and then just have working code (assuming your assumptions match reality). You don’t strictly need tests for that.

    But you do need tests to document what the intended behaviour is and conversely which behaviours are merely accidental, so that you can still change the code after your initial working version.
    In particular, tests also check the intended behaviour of all the code parts you might not have realized you’ve changed, so that you don’t need to understand the entire codebase every time you want to make a small change.
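
    As a small illustration of tests acting as a self-checking specification, here’s a sketch in Python; slugify() and its behaviour are hypothetical examples, not something from the discussion above:

    ```python
    import re

    def slugify(title: str) -> str:
        """Turn a title into a URL slug."""
        return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

    def test_slugify_intended_behaviour():
        # Pins down the intended behaviour, so later changes can't silently break it.
        assert slugify("Hello, World!") == "hello-world"

    def test_slugify_collapses_separators():
        # Documents that runs of punctuation collapse into a single dash;
        # behaviours not asserted anywhere are merely accidental and free to change.
        assert slugify("a --- b") == "a-b"
    ```

    Run it with pytest: a reader can learn the intended behaviour from the test names and assertions alone, without having to understand the implementation.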