• 9 Posts
  • 92 Comments
Joined 1 year ago
Cake day: June 12th, 2023

  • The common thread I’ve seen online is this:

    • Google’s search algorithm sucks. I always append reddit.com to get good forum results
    • Reddit’s search algorithm sucks.

    These two tools are quickly becoming coupled for Google-Fu expert users. The forum history going back 3-5 years on Reddit is their goldmine. You can’t just make a new subreddit overnight when a sub gets paywalled. All of that historical data would be locked behind the paywall.

    I think a paywall could be an effective money maker for Reddit because they’ve basically become their own Google - in that each subreddit acts like a unique website with real, human responses. The only problem is that Reddit has a god-awful search algorithm that they refuse to improve, so people use Google to search Reddit instead. The “whales”, so to speak, are the only people they need to capture. People like myself (frugal people) aren’t on their radar. But the people who think “I’ll pay each month for NYT” or “it’s just a few dollars for the WSJ” are going to apply the same logic to Reddit: “it’s a small amount of money to have access to high-quality forums on X, Y, and Z”.

    In addition, this might bolster Reddit’s content even further. Paywalled subs will naturally see less AI spam, which should increase the legitimacy of each forum.

    Lastly, this will give them a path towards monetization for moderators which doesn’t require skimming it off of their own paychecks.

    Do I like this? No. Is this fair? Also no. People contributed to Reddit under the impression that their data would be available and accessible to anyone with an Internet connection. That implicit guarantee is being violated. It’s an affront to the hard-working individuals that have built these communities brick by brick.

    But does this “solution” make a lot of business sense? Possibly. As long as they survive the changeover in the short term, I think they’ll thrive from this choice for the reasons I stated above.

    Again, it’s going to give them a pathway to:

    • Monetization for moderators
    • Reduced AI spam (a big fear of all forums)
    • Even more money made off the back of this

    I’m pretty much over Reddit anyway. Lemmy has been my backup social media for a while now. The Internet is still free - for now. I just hope we can all find better search engines and forums in the future. Google has been degrading. Reddit has been locking things down. We obviously need to pivot to other platforms. Or maybe just go back to the old days where you find niche forums hosted by some dude in his basement. Nothing wrong with that.







  • Display and layout rules aren’t difficult at all. Maybe I’m just not experienced enough. I’ve been a web dev for nearly a decade now and I feel like I’ve got the hang of it. That being said, I don’t work on projects that have to work on everything from a Nokia to an ultra-wide monitor. We shoot for a few common sizes and hope everything in between scales nicely. What is an example of something that wraps randomly?


  • Genuinely, though, CSS is fairly clear-cut about the rules of positioning and space. Relative positioning is one of the most important concepts to master since it lets things flow with the HTML structure rather than extra CSS. Fixed positioning is as if you had no relative container other than the window itself. Absolute positioning is a little weird, but it’s just like fixed positioning except it’s anchored to the nearest positioned ancestor (usually a parent with relative positioning).
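
    A rough sketch of the three modes (class names are just made up for illustration):

        .parent { position: relative; }                    /* becomes the reference box for positioned children */
        .child  { position: absolute; top: 0; right: 0; }  /* pinned to .parent's top-right corner */
        .banner { position: fixed; bottom: 0; left: 0; }   /* pinned to the window's bottom-left corner */

    If .parent had no position set, .child would instead anchor to the nearest positioned ancestor further up the tree, or to the page itself.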

    Everything else is incredibly straightforward. Padding adds space within a container. Margins add space outside a container. Color changes text color. Background-color changes the background color of an element.
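
    For example (the .card name is arbitrary):

        .card {
          padding: 16px;            /* space inside the box, around the content */
          margin: 16px;             /* space outside the box, pushing neighbours away */
          color: #333;              /* text color */
          background-color: #eee;   /* fill behind the content and padding */
        }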

    Top, left, right, and bottom dictate where the element should be positioned after the default rules are applied. So if you have a relative div inside a parent which is halfway down the page, top/right/left/bottom would nudge the element relative to where it would normally sit within the parent. If you made the div fixed, it would be moved relative to the window.
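
    Roughly, same offsets, different reference point (class names made up):

        .nudged { position: relative; top: 10px; left: 10px; }  /* shifted 10px from where it would normally sit in the flow */
        .pinned { position: fixed;    top: 10px; left: 10px; }  /* 10px from the window's top-left corner, regardless of the flow */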

    Lastly, if you’re designing a webpage, just think in boxes or rows and columns. HTML can define 75% of the webpage structure. Then with just a bit of CSS you can organize the content into rows/columns. That’s pretty much it. Most web pages boil down to simple boxes within boxes. It just requires reading and understanding, but most people don’t want to do that for CSS since it feels like it should just “know”.
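
    As a sketch of the boxes-in-boxes idea (flexbox is just one way to do the rows/columns, and the class names are arbitrary):

        <div class="page">
          <div class="row">
            <div class="col">sidebar</div>
            <div class="col">main content</div>
          </div>
          <div class="row">
            <div class="col">footer</div>
          </div>
        </div>

        .row { display: flex; gap: 16px; }  /* children sit side by side as columns */
        .col { flex: 1; }                   /* each column takes an equal share of the width */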

    As someone who has built Qt, Swing, and JavaFX applications, I much prefer the separation of concerns afforded to us by HTML, JS, and CSS.







  • I see. Well, without a command line, I wouldn’t call it a terminal. I think you just want tooling to be available on Android? It would probably look like a button or series of buttons in an app. Maybe you could connect the dots between them to suggest a pipe? E.g., you have a “mv” button and a “file” button. When you drag from mv -> file, you could kick off a process that moves the file. Maybe it would prompt you for other arguments like the destination? I suppose this theoretical app could let people install additional tooling and make their own custom commands.

    But I just feel like a button UI for these kinds of things will always be awkward. If you don’t have a keyboard/terminal interface, it’s hard to implement anything that would even behave like a terminal in terms of functionality.





  • I think this article does a good job of asking the question “what are we really measuring when we talk about LLM accuracy?” If you judge an LLM by its hallucinations, its ability to analyze images, its ability to critically analyze text, etc., you’re going to see low scores for all LLMs.

    The only metric an LLM should excel at is “did it generate human-readable and contextually relevant text?” I think we’ve all forgotten the humble origins of “AI” chatbots. They often struggled to generate anything more than a few sentences of relevant text. They often made syntactical errors. Modern LLMs solved these issues quite well. They can produce long-form content which is coherent and syntactically error-free.

    However, the content comes with no guarantee of being accurate or critically meaningful. While it often is, these models are certainly capable of half-assed answers that dodge difficult questions. LLMs are approaching 95% “accuracy” if you think of them as good human-text fakers. They are pretty impressive at that. But people keep expecting them to do their math homework, analyze contracts, and generate perfectly valid content. They just aren’t built to do that. We work really hard just to keep them from hallucinating as much as they do.

    I think the desperation to see these things essentially become indistinguishable from humans is causing us to lose sight of the real progress that’s been made. We’re probably going to hit a wall with this method. But this breakthrough has made AI a viable technology for a lot of jobs. So it’s definitely a breakthrough. I just think either I finitely larger models (of which we can’t seem to generate the data for) or new models will be required to leap to the next level.