• HiddenLayer555@lemmy.ml · 23 hours ago (edited)

    TL;DR: A lot of interesting history about Japanese game and console development, and some good points about AI in software development, but I’m mainly disappointed in the title, which made me read the article through the wrong lens for way too long. I was eager to learn how, specifically, Miyamoto jumped to mastery of high-level software architecture without personally writing code, in a time long before generative AI, and the article did not deliver on that.

    > never compiled a line of code

    Like, taking high-level code and hand-writing the corresponding executable binary? Unless he worked with magnetic core memory, yeah, I doubt he ever needed to do that.

    I get what the article is saying but this is a weird way to word it.

    > Miyamoto sat next to programmers and iterated on numbers until the feel was right. The programmers typed. Miyamoto decided what they were typing toward.
    >
    > […]
    >
    > He made them by understanding the machine, the medium, and the human at a level most of his programmers could not reach.

    OK, but I doubt he has never programmed himself. Being that good probably means he did his fair share of stumbling through mistakes in his own software.

    > output of a design process that operated above the opcodes and above the data structures, at the level where the whole system coheres or doesn’t.

    With the age of the games referenced and the mention of opcodes, I assume the “actual” programmers were writing assembly? In that case it makes more sense. They were probably hand-implementing a lot of the high-level features modern developers now expect to be built into the language. Mad respect.
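    To make concrete what gets hand-rolled at that level, here’s a rough sketch of my own (not from the article), mimicking in Python the explicit counter/compare/branch bookkeeping a 6502-era assembly programmer wrote out by hand where a modern language just gives you a loop construct:

```python
def sum_to(n):
    """The modern shape: init, test, and increment are built into the language."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_to_manual(n):
    """The assembly-era shape: every step of the control flow is spelled out,
    the way a 6502 programmer wrote the load/add/compare/branch sequence by hand."""
    total = 0
    i = 1               # initialize the counter "register"
    while i <= n:       # compare against the limit, branch out when done
        total += i      # accumulate (plus manual carry handling on an 8-bit CPU)
        i += 1          # increment the counter
    return total
```

    Both return the same result; the point is only how much of the bookkeeping the language absorbs for you.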

    > The industry calls that “design,” and files it in a separate drawer from engineering/coding.

    Is that true? That definitely sounds like engineering to me. When I hear “design” I tend to associate it with things like UI mockups in a program like Figma or something. Or in game development, defining the worlds/levels/progression tree.

    > Level 1: syntax. Loops, types, recursion, the standard stuff. How you write a correct sentence in the language. […]
    >
    > Level 2: flow. What you do with Level 1. Formally correct and good versus formally correct and terrible. […]
    >
    > Level 3: architecture. Macro decisions. Full awareness of all the consequences of each call before you make it. Why your node system is trigger-pull-based and not event-pushed. Why your data layer returns synchronously even though everyone expects async. Why you chose a flat table over a pretty schema because the retrieval pattern matters more than the shape. Level 3 is where systems either cohere or silently fall apart two years later, and you usually can’t tell which it is until it’s too late.
    >
    > […]
    >
    > And Level 3 judgment usually grows out of Level 1 time: the architects whose decisions hold up ten years later are, overwhelmingly, people who spent a decade at the keyboard before they stopped needing to. Miyamoto is a genuine exception, not a template.
    >
    > […]
    >
    > Miyamoto operates almost entirely at Level 3. He doesn’t operate at Level 1, he delegates it, and has for nearly fifty years.

    Again, the article vaguely implies that he has only ever done Level 3 but doesn’t explain further. Nothing so far except the title has explicitly stated that he’s never personally written a program.

    > The industry lets Miyamoto be a genius. It does not let him be a coder. Those are different containers, and the border between them is patrolled harder than almost any other line in this profession. Designer is allowed to mean taste, vision, intuition, feel. Coder is reserved for the people who type. A definition as narrow as narrow can be. Not architect, not systems thinker, not person-who-decides-what-the-machine-should-do, no. It is: “Person who types”. The entire conceptual territory beyond the keyboard has been recast as “design” and quietly removed from the coding conversation, because coding as a word has been collapsed to its most accidental layer. Philosophically speaking.

    Maybe this is different in the game industry, but I don’t really hear the term “coder” used that much in the sense of someone who only writes code with no input on higher-level design. Usually the term is “developer,” which implies all three levels the article mentions.

    > The work Miyamoto actually does - the architectural decisions that determine whether the system coheres - has no home inside that definition. So it gets filed under “design,” which is the industry’s word for “important but not coding.”

    Again, maybe game development is different, but usually those decisions are lumped in with writing code. The project manager or client doesn’t really care what the internal architecture is, just that it does the thing it needs to. Design, to me, is even higher level than that: what the application should do, who will use it, and what problems it’s supposed to solve for them.

    The article then goes into how the Game Boy came to be, and for the most part I agree here. Gunpei Yokoi focused on the experience instead of technical specs, which is what consumers actually care about, especially for a game console. This is definitely something more people in software development should learn from, because in the end no one cares what kind of technology is behind an application as long as it does the thing. It’s even more true for hardware design, where a given technology has an orders-of-magnitude longer life cycle, and where cost is directly proportional to how “advanced” the tech is.

    > The industry’s blindness to Yokoi isn’t an accident. It’s structural. The industry only measures Level 1 reliably and Level 2 occasionally, and has no vocabulary for Level 3 that isn’t vibes. The Yokois and the Miyamotos get classified as “designers” or “visionaries”: nice words, soft words, words that imply they don’t really belong in the engineering conversation… while the Level 1 people who implemented their ideas get the engineer title.

    Again, from what I’ve seen they’re definitely treated as part of the engineering conversation.

    > People thought AI would make Level 1 coders ten times more productive and leave the architects behind. It’s closer to the opposite. AI does a growing slice of the Level 1 work now, and that slice is growing faster than the industry’s ability to readjust to that change. Level 1 coders worry for a reason, and their anger is rational.

    I mean, I’ve definitely seen the conversation about development shifting from writing code to higher-level design, pretty much as soon as AI coding tools came out. You could even say that’s the same move every programming language above assembly embodies: offloading low-level implementation in favour of higher-level “design.” There’s a whole other conversation to be had about whether current AI is “good enough” to actually be trusted to replace manual coding, but I think most people in software development could see from very early on that it won’t be long until AI can code as well as humans.

    > If you’re a coder reading this and you’re angry, I understand. But the worst thing you can do right now is double down on Level 1 signaling. The LeetCode grinding, the “real engineers do it by hand” posture, the insistence that craft means typing is the direction that gets you nothing, because the machines do that pretty well by now and are going to keep getting better. Sorry.

    Probably good advice for most people, but just because AI can do it now doesn’t mean we’ll have no need for Level 1 people. You still have to maintain the programming languages AI uses; sure, you can use AI for that too, but at some point you still need humans involved for troubleshooting. It will just become a niche rather than the default. Take assembly as a parallel: most people don’t need to write assembly anymore, but every once in a while you still have to go through the assembly the compiler generates and check things like whether it’s optimizing correctly or using certain hardware-acceleration features. Or how processor design is mostly done in an HDL nowadays, but you still need people who know the actual physics governing the circuits themselves.
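    An everyday parallel to “reading the compiler’s assembly,” sketched by me in Python rather than C (the function name `dot` and the example are mine, not from the thread): dropping one level of abstraction to inspect the bytecode the interpreter actually executes.

```python
import dis

def dot(a, b):
    """A high-level one-liner; the 'real' control flow lives a level below."""
    return sum(x * y for x, y in zip(a, b))

# Sanity-check the behaviour at the level we normally work at.
assert dot([1, 2, 3], [4, 5, 6]) == 32

# Drop a level: print the bytecode the interpreter runs for dot() —
# the everyday analogue of reading the assembly a C compiler emits
# to verify what your high-level source actually turned into.
dis.dis(dot)
```

    Nobody writes bytecode by hand, but someone still has to be able to read it when the abstraction misbehaves; that’s the niche I’d expect Level 1 skills to settle into.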

    IMO, if you find that your calling TRULY IS the manual implementation of algorithms in code, I say still pursue that passion. You’ll still be needed, and that passion will make you extremely valuable. I think AI can eventually free people from being forced to do that when what they really want is higher-level design, which I suspect describes a good majority of developers. But that only means the people who do want the low-level work can focus more on their specific niche.

  • ☆ Yσɠƚԋσʂ ☆@lemmy.ml (OP) · 15 hours ago

      The key point being made there is that we can separate the intention from the implementation details. The core of programming isn’t banging out lines of code; it’s understanding the behavior of the system being implemented, the states it flows through, and how the user interacts with it. Those are the fundamental skills, and they lie at the level of abstraction where coding and design significantly overlap.

      I don’t think it’s so much that we no longer need the skills to work at a detailed level of code, but that we shouldn’t see them as an essential part of programming. Like you said, there are still people who know how to write assembly by hand, but they’re few and far between, working in specific niches where extreme optimization is required. A general coder doesn’t really think about what’s happening at the hardware level at all.

      My read of the article was more that we should expand the conception of what we mean by coding to include people who work at a higher level of abstraction as well.