I thought that they were managing that stuff on a per-pixel basis, no engine, assets, or other abstractions, just raw-dogging pixel colors.
And before I even played video games at all I was watching somebody play some assassin’s creed game I think and I thought the player had to control every single limb qwop-style.
In the first few Assassin’s Creed games, they did use the idea of a “puppeteer” system for the control scheme, although it wasn’t physics-based or anywhere near as hard as QWOP. Each of the controller’s face buttons performed actions associated with a body part (head, weapon hand, empty hand, and legs), and holding the right trigger would swap between low-profile and high-profile actions.
In the top right of the screen there was always a UI element showing what the buttons did at that moment in that context, which might’ve been why you thought it was a QWOP-style system. It’s not exactly what you were thinking of at the time, but you were closer than you realise.
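To make the scheme concrete, here’s a minimal sketch of that button-to-body-part idea with a profile modifier. The limb mapping (head, weapon hand, empty hand, legs) is from the games; the specific action names and function are hypothetical, just to show the structure, not Ubisoft’s actual code.

```python
# Hypothetical sketch of the "puppeteer" control idea: each face button
# drives one body part, and holding the right trigger swaps the whole
# set from low-profile to high-profile actions.

FACE_BUTTON_LIMBS = {
    "Y": "head",
    "X": "weapon_hand",
    "B": "empty_hand",
    "A": "legs",
}

ACTIONS = {
    # (limb, high_profile) -> action name (illustrative names only)
    ("head", False): "blend",          ("head", True): "eagle_vision",
    ("weapon_hand", False): "sheathe", ("weapon_hand", True): "attack",
    ("empty_hand", False): "push",     ("empty_hand", True): "grab",
    ("legs", False): "walk",           ("legs", True): "sprint_climb",
}

def resolve_action(button: str, right_trigger_held: bool) -> str:
    """Map a face-button press plus the profile modifier to an action."""
    limb = FACE_BUTTON_LIMBS[button]
    return ACTIONS[(limb, right_trigger_held)]
```

So `resolve_action("A", True)` gives `"sprint_climb"`: same button, different action, depending only on whether the trigger is held.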
Apparently AI is learning to do that first thing you said, pure pixel management. It’s crazy that it works at all.
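For a sense of what “playing from raw pixels” means mechanically, here’s a toy sketch: a policy that sees only an RGB frame and outputs an action index. Real systems (e.g. deep RL agents trained on Atari) use trained convolutional networks; the untrained linear layer here is a stand-in purely to show the data flow, and all the names and sizes are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

FRAME_SHAPE = (84, 84, 3)  # downsampled RGB frame (illustrative size)
N_ACTIONS = 6

# Untrained random weights, standing in for a learned network.
W = rng.standard_normal((int(np.prod(FRAME_SHAPE)), N_ACTIONS)) * 0.01

def act(frame: np.ndarray) -> int:
    """Pick an action from raw pixel values alone."""
    assert frame.shape == FRAME_SHAPE
    x = frame.astype(np.float32).ravel() / 255.0  # normalize pixels to [0, 1]
    logits = x @ W                                # pixels in, scores out
    return int(np.argmax(logits))                 # action index out

frame = rng.integers(0, 256, FRAME_SHAPE)  # fake screen capture
action = act(frame)                        # some index in [0, N_ACTIONS)
```

The whole interface is just that: an image goes in, a button press comes out, with no engine state, assets, or other abstractions exposed to the agent.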