Chinese scientists have unveiled an optical computing chip that outperformed Nvidia’s leading AI hardware by over a hundredfold in speed and energy efficiency – particularly for generative tasks such as video production and image synthesis.

The LightGen chip was developed by a team from Shanghai Jiao Tong University and Tsinghua University, harnessing the speed of light to execute complex artificial intelligence workloads.

With more than 2 million photonic neurons integrated into a compact chip, LightGen can generate high-resolution images, including 3D scenes, and create videos.

The research, led by Professor Chen Yitong from Shanghai Jiao Tong University, was published in the journal Science on Friday.

Chen said LightGen could be “further scaled up” and added: “It provides a new way to bridge the new chip architectures to daily complicated AI without impairment of performance and with speed and efficiency that are orders of magnitude greater, for sustainable AI.”

With artificial intelligence advancing rapidly, generative AI can now produce realistic images and even videos – but it needs immense computing capacity and consumes large amounts of energy.

As a result, scientists have turned to photonic computing as conventional electronic chips reach their limits.

Traditional computers rely on the flow of electrons to send and process information, while photonic computing uses laser pulses instead, performing operations at the speed of light.

Optical signals also consume less power and respond to user requests more quickly.

However, although photonic computing systems have shown potential in specific tasks, they previously struggled to handle high-complexity generative AI tasks – such as synthesising images and generating videos – because of limitations in their computing architecture and underdeveloped training algorithms.

The LightGen team’s work focused on three areas: a new computing architecture, a novel training algorithm and high integration density on the chip.

Architecturally, the team created an “optical latent space” – similar to an expandable “highway hub” for light – where data can flow rapidly in its most compact form, allowing for the efficient compression and reconstruction of information, according to the study.

The researchers also developed a generative training algorithm that, compared with conventional versions, removed the need for massive labelled data sets.

Instead, they used an unsupervised algorithm that allowed LightGen to learn and create by discerning statistical patterns in data, along similar lines to the human learning process.
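To make the “optical latent space” and label-free training ideas concrete, the sketch below is an electronic analogy only, not anything from the LightGen paper: a tiny autoencoder in Python/NumPy that compresses unlabelled data into a small latent representation and learns purely from reconstruction error. All sizes and hyperparameters are illustrative assumptions.

```python
# Electronic analogy only, not LightGen's actual optics or algorithm:
# a tiny linear autoencoder that compresses unlabelled data into a small
# latent representation and learns purely from reconstruction error.
# All dimensions and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_samples, input_dim, latent_dim = 256, 64, 8     # assumed toy sizes

# Unlabelled data with hidden low-dimensional structure (rank = latent_dim),
# so it can in principle be compressed and reconstructed almost perfectly.
X = rng.normal(size=(n_samples, latent_dim)) @ rng.normal(size=(latent_dim, input_dim))
X /= np.sqrt(input_dim)

# Encoder/decoder weights: map data into the latent space and back out.
W_enc = rng.normal(scale=0.1, size=(input_dim, latent_dim))
W_dec = rng.normal(scale=0.1, size=(latent_dim, input_dim))

lr = 0.01
for step in range(3000):
    Z = X @ W_enc                  # compress: data -> latent representation
    X_hat = Z @ W_dec              # reconstruct: latent -> data
    err = X_hat - X                # reconstruction error is the only signal;
                                   # no labels are used anywhere.
    # Gradients of the mean-squared reconstruction loss for both weight matrices.
    grad_dec = (Z.T @ err) / n_samples
    grad_enc = (X.T @ (err @ W_dec.T)) / n_samples
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

print("final reconstruction MSE:", float(np.mean((X @ W_enc @ W_dec - X) ** 2)))
```

The only training signal is how closely the reconstruction matches the input, which is what lets such a model learn without any labelled examples.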

The team packed more than 2 million photonic “neurons” onto a chip of 136.5 sq mm (0.2 square inches), constructing a sophisticated network capable of handling high-resolution image generation.

Experiments highlighted some of LightGen’s abilities, including the generation of animal images at 512×512 pixel resolution with diverse categories, colours, expressions and backgrounds, which were rich in detail and logically correct.

The study said: “LightGen experimentally implemented high-resolution semantic image generation, denoising [making grainy images appear cleaner and sharper], style transfer, three-dimensional generation and manipulation.”

At a conservative estimate, LightGen achieved a system computing speed of 3.57×10⁴ tera operations per second (TOPS) and an energy efficiency of 6.64×10² TOPS per watt.

This meant its overall performance surpassed that of leading electronic chips, such as Nvidia’s market-leading A100, by more than a hundredfold.

“The improvement in computing speed and energy efficiency of LightGen corresponded well with the experimentally measured end-to-end reduction in time and energy cost when LightGen experimentally achieved generation quality comparable with that of real-world electronic AI models on Nvidia A100,” the paper said.
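For rough context on the “more than a hundredfold” claim, the back-of-the-envelope calculation below compares the reported LightGen figures with the A100’s publicly listed peak numbers. The A100 values (roughly 312 dense BF16/FP16 TOPS at a 400 W TDP) are assumptions taken from Nvidia’s spec sheet rather than from the study, and peak datasheet numbers are not the same as measured end-to-end performance.

```python
# Back-of-the-envelope only. LightGen figures are the reported system numbers;
# the A100 figures are assumed peak datasheet values (~312 dense BF16/FP16
# TOPS, ~400 W TDP), not measured end-to-end workload results.
lightgen_tops = 3.57e4            # reported computing speed, TOPS
lightgen_tops_per_w = 6.64e2      # reported energy efficiency, TOPS per watt

a100_tops = 312.0                 # assumed A100 peak throughput
a100_tdp_w = 400.0                # assumed A100 power budget
a100_tops_per_w = a100_tops / a100_tdp_w          # ~0.78 TOPS per watt

print(f"speed advantage:      ~{lightgen_tops / a100_tops:.0f}x")               # ~114x
print(f"efficiency advantage: ~{lightgen_tops_per_w / a100_tops_per_w:.0f}x")   # ~851x
```

Under these assumptions both ratios clear the hundredfold mark, consistent with the comparison reported above.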

The researchers said LightGen could mark a significant shift in the hardware used for generative AI by making photonic computing a core platform capable of independently executing complex creative tasks.

They added that its extraordinary energy efficiency also offered a practical pathway to alleviate the growing energy demands of AI computing.

  • CriticalResist8@lemmygrad.ml · 18 hours ago

    Step 1: China manufactures over 30% of all consumer commodities in the world

    Step 2: western chip makers announce cutoffs for next year to focus on b2b

    Step 3: China starts making chips that rival Nvidia’s

    There is a >80% chance that by 2027 virtually all big retailers will carry Chinese GPU and RAM brands because it’ll be too complicated to sell Samsung and RTX GPUs.

    • 201dberg@lemmygrad.ml · 17 hours ago

      As a PC user, please, for the love of fuck, China, we need parts. RAM has more than quadrupled in price overnight. SSDs are close behind. Even recycled HDDs are getting hit.