
  • For me it’s the text (too regular and perfectly ruled to be hand-lettered, but too much variance between the letterforms to be a font) and the little AI artifact on the random doohickey directly under the bottom left corner of the AI computer’s monitor.

    Aside from that, it’s just the weight of unmotivated choices. Why is the “good” side of the image grayscale while the “bad” side is in color (a human probably would’ve done it the other way)? Why are the desks drawn slightly differently while the person, chair, and computer are drawn the same (a human would’ve probably made everything identical to better illustrate their point)? Why all the random clutter on one but not the other (if the point was to make the AI computing experience look scattered and cluttered, surely they would’ve made it more overwhelmingly cluttered, but if it was for verisimilitude they’d have put clutter on both desks)? Also, subjectively, the “AI” logo on the screen suggests a pleasant experience, not an oppressive one.

    An unmotivated choice on its own isn’t necessarily an AI calling card, but enough of them together alongside one or two smoking guns can definitely make the case pretty strongly.

  • Ok. The classic answer is “turn on switch 1 for five minutes. Then turn switch 1 back off, turn on switch 2, and go into the room immediately. If the bulb is off but warm, it’s controlled by switch 1; if it’s on, it’s controlled by switch 2; if it’s off and cold, it’s controlled by switch 3.”

    Except that a light bulb in 2025 is very likely to be an LED bulb, so it wouldn’t actually get hot, at least not hot enough to feel even a few moments later. And in a corporate setting (this is classically an interview question), the switch is more likely to control a fluorescent tube, which can get hot, but typically not as quickly as an incandescent one.
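    Assuming the bulb really did hold onto its heat, the classic answer boils down to reading two bits once you’re in the room: is the bulb lit, and is it warm? A minimal sketch in Python; the function and flag names are mine, purely illustrative:

    ```python
    # Decision logic of the classic answer, assuming an incandescent bulb
    # that retains heat. Protocol: switch 1 on for five minutes, then
    # switch 1 off, switch 2 on, and enter the room immediately.
    def identify_switch(bulb_is_on: bool, bulb_is_warm: bool) -> int:
        if bulb_is_on:
            return 2   # only switch 2 is currently powering the bulb
        if bulb_is_warm:
            return 1   # off now, but heated during switch 1's five minutes
        return 3       # never powered at all, so it must be switch 3

    print(identify_switch(bulb_is_on=False, bulb_is_warm=True))  # -> 1
    ```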

    My answer, if I were in an interview, would be to ask questions (Chesterton’s Fence).

    • First of all, why do we have the one-visit limit? Is this a prod light bulb? We need a dev light bulb environment, with the bulbs and switches in the same room. (While we’re making new environments, let’s get a QA and regression environment, too. Maybe a fallback environment, depending on SLAs.)

    • Second, what might the other switches do? What’s the downside to just turning them all on? If that’s not known, why not? What is the risk? For that matter, do we know that only one switch needs to be turned on to turn on the light, or is it possible that the switches represent some sort of 3-bit binary encoding (see the sketch after this list)?

    • Third, why were the switches designed this way? Can they be redesigned to provide better feedback? Or simplified to a single switch? If not, better documentation (labeling) is a must.

    • Fourth, we need to shorten the feedback loop. A five-minute test followed by physically walking over to touch the bulb is way too long. Let’s look into moving the switches or the light in our dev environment so that the light can be seen from the switches.
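    To put the 3-bit worry from the second point in concrete terms (the sketch referenced above): three independent switches have 2^3 = 8 possible settings, not 3, so if the light responds to a combination of switches rather than to a single one, every setting is a separate test case. A quick illustrative Python snippet; nothing here beyond the three switches comes from the original puzzle:

    ```python
    from itertools import product

    # Three on/off switches enumerate 2**3 = 8 distinct settings.
    settings = list(product((0, 1), repeat=3))
    print(len(settings))       # -> 8
    for setting in settings:
        print(setting)         # (0, 0, 0), (0, 0, 1), ..., (1, 1, 1)
    ```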

  • Tali Roth, then the product manager for the core Windows user experience (the Start menu, taskbar, and notifications), took up the question and explained that building the taskbar from scratch meant they had to cherry-pick what went into the feature list first, and the ability to move the taskbar didn’t make the cut, for several reasons tied to what Microsoft values.

    WHY WOULD YOU DO THAT?!

    If you have working code, why would you rewrite it from scratch? Refactor, sure. Overhaul, maybe. But why rewrite the whole thing?! You’re gaining nothing but unnecessary bugs.

    I know all the joke answers. To justify a product manager’s salary, because Microsoft gonna Microsoft, whatever. I want to know the real reason. Why would you ever rewrite working code from scratch if you don’t have to?