• luciole (he/him)@beehaw.org · 50 points · 9 months ago

    Personally I find the “there is no such thing as a real picture” argument facetious and dangerous. Filters, optimized zoom, and autofocus are not the same as convincingly removing someone from a scene they were in, or inserting them into a scene they never were in. One is a purely aesthetic adjustment, while the other purveys false information. Samsung’s Generative Edit further trivializes the latter and leaves no indication of the manipulation.

    • BitOneZero@beehaw.org · 5 points · 9 months ago

      While I agree with what you are saying, I think audiences strongly crave the falsehoods, regardless of how the sausage is made. And the technology itself may get regulated for ordinary consumers, while ‘professionals’ use their wealth to get another set of tools that does it better. Much like in the USA, where prostitution is generally illegal but filming sex for pornographic media is legal. Very few people are preaching to level the playing field on media production hardware. And if you look at the energy requirements and cost of a high-end GPU just at run-time, you start to get a sense of how a $15,000 camera will be able to do post-production that a consumer smartphone won’t.

  • Otter@lemmy.ca · 35 points · 10 months ago

    Well there are analog cameras

    Also, I agree that nearly every digital camera has to do some correction, and correcting for lighting / time of day makes our photos nicer. But shouldn’t the end goal be a photo that looks as close as possible to what we’d see naturally?

    • jarfil@beehaw.org · 21 points · 10 months ago

      Analog cameras don’t have the dynamic range of human vision, fall quite short in the gamut area, use various grain sizes, and can take vastly different photos depending on aperture shape (bokeh), F stop, shutter speed, particular lens, focal plane alignment, and so on.

      More basically, human eyes can change focus and aperture when looking at different parts of a scene, which photos don’t allow.

      To take a “real photo”, one would have to capture a HDR light field, then present it in a way an eye could focus and adjust to any point of it. There used to be a light field digital camera, but the resolution was horrible, and no HDR.

      https://en.m.wikipedia.org/wiki/Light_field_camera

      Everything else is subject to more or less interpretation… and phone cameras in particular have to correct for some crazy diffraction effects because of the tiny sensors they use.

          • jarfil@beehaw.org · 3 points · 9 months ago

            Wouldn’t mind getting a second hand “like new” one with a scratched front glass plastic… for the right price, as long as the inner plastic lenses aren’t scratched.

            (I know, there’s about no chance of that ever happening)

        • dfyx@lemmy.helios42.de · 4 points · 9 months ago

          But not on a static image. They use eye tracking to figure out what you’re looking at and refocus the external cameras based on that.

      • ReallyActuallyFrankenstein@lemmynsfw.com · 2 points · 9 months ago

        It’s actually a great idea: an up-to-date light field camera combined with eye tracking to adjust focus. It could work right now in some VR headsets, and presumably the same presentation could work without VR via a front-facing two-camera (or maybe one camera with good calibration) smartphone array.

        • jarfil@beehaw.org · 2 points · 9 months ago

          Yup, I was seriously considering getting the Lytro, just to mess around. The main problem is the resolution drop due to needing multiple sensor pixels per “image pixel”, while still having to store them all anyway. So if you wanted a 10Mpx output image, you might need a 100Mpx sensor, and shuffle around 100Mpx of data… just for the result to look like 10Mpx.

          If we aim at 4K (8Mpx) displays, it might still take some time for the sensors, and data processing capability on both ends to catch up. If we were to aim at something like an immersive 360 capture, it might take even longer. Adding HDR, and 60fps video recording, would push things way out of current hardware capabilities.
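The trade-off described above can be sketched with some back-of-the-envelope arithmetic (the number of angular samples per output pixel is an illustrative assumption, not Lytro’s actual design):

```python
# Illustrative sketch of the light field resolution penalty: each output
# pixel needs several sensor pixels to capture different ray directions.
sensor_mpx = 100         # raw sensor resolution, in megapixels
samples_per_pixel = 10   # hypothetical angular samples per output pixel
output_mpx = sensor_mpx / samples_per_pixel
print(output_mpx)  # 10.0 -> a 100Mpx sensor yields only a ~10Mpx refocusable image
```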

    • Kichae@lemmy.ca · 13 points · 9 months ago

      The end goal should be some kind of representation of reality, at the very least, even if it’s not “what we see naturally”. A camera can see some things that we can’t, and can’t see some things that we can - at least in a single exposure - so the image is never going to be a perfect visual representation of how anyone remembers the scene.

      But to suggest that they don’t represent some aspect of reality because they’re a simulacrum generated by visual data is just self-indulgent too-convenient-to-not-embrace pseudo-philosophy coming from someone whose wealth is tied to selling such bullshit to the public.

      The goal here is to make people feel like they’re good at something - taking photos - by manufacturing the result, which not only totally defeats the point of what most people take photos for, but has some incredibly dark and severe edge cases which they clearly haven’t considered (and are motivated to not consider).

      Which is just par for the course for tech bros.

    • mobyduck648@beehaw.org · 8 points · 9 months ago

      It depends on the artistic and technological intent, I think. Valve (tube) amplifiers are inferior to any modern amplifier in every way you could actually measure with an oscilloscope, yet people still build them, and valves are still produced the same way they were in the 1950s, because the imperfections they introduce into the sound can be pleasant. That comes down to psychoacoustic factors, which have subjective as well as objective components. A photo that looks exactly like what we’d see naturally is one potential goal, but it’s not the only one, in my opinion.

        • jarfil@beehaw.org · 2 points · 9 months ago

          It’s not an MBA thing, it’s a technological progress thing.

          We’ve gone from photos full of “ghosts” (double exposures, light leaks, poor processing), to photos that took some skill to modify while developing them, to celluloid that could be spliced but took a lot of effort to edit, to Photoshop and video editing software that allowed compositing all sorts of stuff… and we’re entering an age where everyone will be able to record some cell phone footage, then tell the AI to “remove the stop sign”, “remove the gun from the hand of the guy getting chased, then add one to the cop chasing them”, or “actually, turn them into dancing bears”, and the cell phone will happily oblige.

          Right now, watermarking and footage certification legislation is being discussed, because there is an ice cube’s chance in hell of Samsung or any other phone manufacturer not adding those AI editing features and marketing them to oblivion.

          In this article, as a preemptive move, Samsung claims to “add a watermark” to modified photos, so you could tell them apart from “actual footage”… except it’s BS, because they’re only adding a metadata field, which anyone can easily strip away.

          TL;DR: thanks to AI, your evidence will get thrown away unless it’s certified to originate from a genuine device and hasn’t been tampered with. Also expect a deluge of fake footage to pop up.
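To illustrate how fragile a metadata-only “watermark” is, here’s a sketch using Python’s Pillow library; the “AI-Generated” tag value and the use of the EXIF ImageDescription field are illustrative assumptions, since vendors use their own fields:

```python
from PIL import Image

# Create a JPEG carrying an EXIF tag that stands in for a vendor's
# "AI edited" marker (hypothetical value, for illustration only).
img = Image.new("RGB", (64, 64), "gray")
exif = img.getexif()
exif[0x010E] = "AI-Generated"          # 0x010E = ImageDescription
img.save("tagged.jpg", exif=exif)

# Stripping the marker: re-save only the pixel data, dropping all metadata.
tagged = Image.open("tagged.jpg")
clean = Image.new(tagged.mode, tagged.size)
clean.putdata(list(tagged.getdata()))
clean.save("untagged.jpg")

print(dict(Image.open("tagged.jpg").getexif()))    # contains the marker
print(dict(Image.open("untagged.jpg").getexif()))  # empty: marker is gone
```

The image itself is untouched; only the label disappears, which is why a metadata field proves nothing about authenticity.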

          • spujb@lemmy.cafe · 5 points · 9 months ago

            “it’s not an MBA thing, it’s a technical progress thing”

            proceeds to describe how MBAs (Samsung marketers and business leaders) are doing this with technology

            again with the acting like i disagree with you? lol

            • jarfil@beehaw.org · 1 point · 9 months ago

              proceeds to describe how MBAs (Samsung marketers and business leaders) are doing this with technology

              Non-MBAs are already using the same technology for deep fake porn, including fake revenge porn, or to blackmail and bully their school classmates.

              You seem to blame it on businesses, like Samsung, which is what I disagree with. All those MBAs are just desperately trying (and failing) to anticipate regulations caused by average people, that will be way stricter than what even you might want.

  • Fizz@lemmy.nz · 15 points · 9 months ago

    Anyone using their phone for photography has been using heavily edited images already.

      • admiralteal@kbin.social · 24 points · 9 months ago

        Analog cameras also do not catch an image exactly as-is. Most likely, the idea of a “true” image of exactly how a thing exists in the real world is just a fantasy. This is qualia. An image is definitionally subjective. Just look at the history of film technology and the racial biases it helped preserve.

        But there’s undeniably a huge difference between how you interpret and commit the photons going through the lens versus entirely inventing photons going through the lens.

        • Zorind@beehaw.org · 2 points · 9 months ago

          Very neat article, glad you shared it!

          Interesting to think about now; they mention how modern digital cameras are not great at taking photos of interracial couples. I’m sure / hope someone is working on that, or at least maybe that’s a use case for some of the fancy photo post-processing: take two photos with different exposure levels and somehow combine them to get accurate features from people of various complexions in one photo.
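A toy sketch of that combine-two-exposures idea, assuming a simple per-pixel “well-exposedness” weighting (real multi-exposure fusion pipelines are far more involved, with pyramids, alignment, and tone mapping):

```python
import numpy as np

# Toy exposure fusion: blend two exposures of the same scene, weighting
# each pixel by how close it is to mid-gray (i.e. how well exposed it is).
def fuse(under: np.ndarray, over: np.ndarray) -> np.ndarray:
    w_u = np.exp(-((under - 0.5) ** 2) / 0.08)  # weight for the dark shot
    w_o = np.exp(-((over - 0.5) ** 2) / 0.08)   # weight for the bright shot
    return (w_u * under + w_o * over) / (w_u + w_o)

dark = np.full((2, 2), 0.1)    # a face underexposed in shot one
bright = np.full((2, 2), 0.9)  # the same face overexposed in shot two
print(fuse(dark, bright))      # pixels pulled toward a usable mid-range
```

Each output pixel leans on whichever exposure captured it best, which is roughly the intuition behind getting accurate skin tones for everyone in one frame.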

  • helenslunch@feddit.nl · 8 points · 9 months ago

    Nonsense. There is a very clear difference between analyzing the contents of a photo for modification and literally just overlaying another image altogether.

    Also my Pixel, and many other digital cameras, can shoot “raw” images.

    • jarfil@beehaw.org · 5 points · 9 months ago

      my Pixel, and many other digital cameras, can shoot “raw” images

      The raw data a tiny phone sensor with tiny lenses captures is highly distorted, with strong chromatic aberration and diffraction effects. These only go away (to an extent) with large-sensor cameras and high-end lenses.

      If the “raw” images that Pixel produces have none of those distortions, then they aren’t raw.