- cross-posted to:
- [email protected]
Following on from the success of the Steam Deck, Valve is creating its very own ecosystem of products. The Steam Frame, Steam Machine, and Steam Controller are all set to launch in the new year. We’ve tried all three, and here’s what you need to know about each one.
“From the Frame to the Controller to the Machine, we’re a fairly small industrial design team here, and we really made sure it felt like a family of devices, even to the slightest detail,” Clement Gallois, a designer at Valve, tells me during a recent visit to Valve HQ. “How it feels, the buttons, how they react… everything belongs and works together kind of seamlessly.”
For more detail, make sure to check out our in-depth stories linked below:
Steam Frame: Valve’s new wireless VR headset
Steam Machine: Compact living room gaming box
Steam Controller: A controller to replace your mouse
Valve’s official video announcement.
So uh, ahem.
Yes.
Valve can indeed count to three.


I mean, yeah, but… I’m thinking of like a, uh, distributed-compute type of model, like you see in scalable server-rack deployments for what we used to call supercomputers.
If the latency is 10-ish ms, that’s easily low enough that you could, say, split off a chunk of the total game render pipeline instruction set, maybe a separated physics thread: run the whole game from the x86 Steam Machine, use Steam Link as a communication layer, and send x86 code to the Frame, which then ‘solves’ it via the FEX emulation layer. The Frame doesn’t do any other part of rendering the game; it just accepts player inputs and receives graphical render data.
Physics runs at only 60 fps; the rest of the game runs at 90 or 120 or whatever.
The Steam Machine is the master/coordinator and the Frame is the slave/subject: the Frame has various game processes distinctly dedicated to its compute hardware, so the Steam Machine can potentially get/use more compute. Assuming synchronization is stable, that means an overall experienced performance gain: more fps or higher-quality settings than just using a Steam Machine on its own.
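Just to make that concrete, here’s a toy Python sketch of the kind of split I mean: one thread standing in for the Steam Machine’s 60 Hz physics, one for the Frame presenting at 120 Hz, and a plain queue standing in for the Steam Link transport. Everything here is invented for illustration; none of it is a real Valve API.

```python
import queue
import threading
import time

link = queue.Queue()  # stand-in for the Steam Link transport between devices

def machine_physics(ticks=60):
    """'Steam Machine' side: advance the world at a fixed 60 Hz, publish state."""
    pos, vel = 0.0, 1.0
    for t in range(ticks):
        pos += vel / 60.0            # one fixed-rate physics step
        link.put((t, pos))           # ship the authoritative state across
        time.sleep(1 / 60)

def frame_present(frames=120):
    """'Frame' side: present at 120 Hz, blending between received physics states."""
    prev = curr = (0, 0.0)
    for _ in range(frames):
        while not link.empty():      # drain whatever arrived since last frame
            prev, curr = curr, link.get()
        shown = (prev[1] + curr[1]) / 2  # crude blend: what the headset displays
        time.sleep(1 / 120)
    print(f"presented through physics tick {curr[0]}")

threading.Thread(target=machine_physics, daemon=True).start()
frame_present()
```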
They are already kind of doing this via what they are calling Foveated Rendering.
Basically, the Frame eye-tracks you, and it uses that data to prioritize which parts of the overall scene are rendered at what detail level.
I.e., the edges of your vision don’t actually need the same render resolution as the center of your vision, because human eyes literally lose detail away from the center of the visual field.
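In sketch form (thresholds completely made up, just to show the idea of detail falling off with distance from the gaze point):

```python
import math

def detail_level(tile_center, gaze, full_deg=10.0, half_deg=25.0):
    """Pick a render scale for a screen tile from its angular distance to the
    gaze point (everything in made-up degrees of visual angle)."""
    dist = math.dist(tile_center, gaze)
    if dist <= full_deg:
        return 1.0    # fovea: full resolution
    if dist <= half_deg:
        return 0.5    # near periphery: half resolution
    return 0.25       # far periphery: a quarter is plenty

print(detail_level((2, 1), (0, 0)))    # near the gaze point -> 1.0
print(detail_level((40, 0), (0, 0)))   # far periphery -> 0.25
```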
So they already have a built-in system that showcases the Frame and the Machine rapidly exchanging fairly low-level data, as far as a game render pipeline goes.
I’m saying: use whatever that transport buffer is, and make a mode where you could potentially shunt more data through it, into a distributed, sort of two-computer version of multithreading.
How is that really different from a game with a multiplayer client/server model?
Like, all Source games are basically structured as multiplayer games, with the server doing the world and the client doing the player… when you play a single-player Source game, you’re basically just running both the client and the server at the same time on your one machine, without any actual networking.
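Something like this toy shape, which is just illustrative and not Source’s actual interfaces:

```python
class GameServer:
    """Owns the world, like the server half of a Source game."""
    def __init__(self):
        self.tick_count = 0
    def tick(self, inputs):
        self.tick_count += 1                 # simulate the world one step
        return {"tick": self.tick_count, "inputs_seen": inputs}

class GameClient:
    """Owns input and presentation, like the client half."""
    def frame(self, snapshot):
        return f"drawing world state from tick {snapshot['tick']}"

def singleplayer(steps=3):
    # single player == both halves in one process, no sockets anywhere
    server, client = GameServer(), GameClient()
    for _ in range(steps):
        snap = server.tick(inputs={"move": "forward"})
        print(client.frame(snap))

singleplayer()
```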
That’s an accurate description of foveated rendering, but what the Frame has is foveated streaming. Foveated rendering needs to be incorporated into the game, typically at the engine level. Foveated streaming can happen system-wide because it’s not reducing the rendering load; instead it reduces the bitrate of the streamed video when you’re streaming wireless VR over Wi-Fi from your desktop PC or Steam Machine.
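A rough sketch of the distinction: the render is left alone, and the encoder’s bit budget is spent where the eye is looking. Numbers and weighting here are invented for illustration:

```python
import math

def allocate_bitrate(tiles, gaze, total_kbps=50_000):
    """Split a fixed encoder budget across video tiles, weighted toward gaze."""
    weights = [1.0 / (1.0 + math.dist(t, gaze)) for t in tiles]
    scale = total_kbps / sum(weights)
    return [w * scale for w in weights]

tiles = [(x, y) for y in range(3) for x in range(3)]   # 3x3 grid of video tiles
rates = allocate_bitrate(tiles, gaze=(1, 1))
print([round(r) for r in rates])   # the tile under the gaze gets the most bits
```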
Ok, yeah, I mixed up Foveated Rendering and Streaming.
So what I’m trying to say is that Valve has figured out a generalized Foveated Streaming layer that just works on all games, full stop.
So you, a game dev… look at that, and build around whatever they’re using in Steam Link to accomplish it… use that as a template/basis to build a sort of dual-device, separated-out async threading mode.
If Steam Link can move data that fast, then build your game/engine from the ground up with that protocol in mind: assume you’re always gonna have a second hardware instance that you can run some subportion of game logic / engine calls on, some kind of async multithreading.
Not just native Foveated Rendering in a VR context, but potentially any kind of game/engine-level API or… mode or something that would let this super-fast Steam Link streaming have a game be… collaboratively generated by multiple devices linked at this kind of nearby, wireless speed.
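As a very loose sketch of the engine-side shape I’m imagining (a thread pool faking the second device, since whatever protocol Steam Link actually uses isn’t public):

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError

link = ThreadPoolExecutor(max_workers=1)   # stands in for the second device

def remote_step(state):
    """Pretend chunk of game logic that runs on the other box."""
    return {k: v + 1 for k, v in state.items()}

future = link.submit(remote_step, {"x": 0})
# ... the main device keeps rendering the current frame here ...
try:
    state = future.result(timeout=0.010)   # ~10 ms budget, per the latency above
except TimeoutError:
    state = {"x": 0}                       # link too slow: fall back locally
print(state)
```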
Maybe a rough analog here is the difference you get with a game that is built from the ground up with full Steam Input support, so it gives you more options than if you’re just using Steam as an over-the-top, generalized input-translation layer.
Of course… I think basically only Source games actually have full Steam Input support, and I have no idea whether this Foveated Streaming thing is open source or proprietary to Valve.
If it’s open, theoretically anyone could try to do this.
If it’s proprietary, well, then it’d just be Valve.
Maybe another analogy that comes to mind is a weird, little-used way of running ARMA servers.
So with ARMA games… they’re big, fuckass-complex, open-world milsims; ARMA is literally just the commercial, streamlined version of the much more hardcore milsims they sell to governments/militaries.
ARMA servers can be run in a way where you have more than one server doing the actual computations. This mode, these protocols, do exist in the commercial variants, but they’re not well documented and almost nobody uses them, because it’s quite complex.
Players join the main dedicated server, but the dedicated server can also shunt off tasks, like simulating the movement patterns of hundreds or thousands of NPCs, to other, synchronized servers.
So the server you as a player actually join, the dedi, is basically just coordinating traffic and world state between players and NPCs, while the NPCs are essentially being simulated on their own servers, running in a mode optimized for just simulating them.
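In toy form, with objects faking what would really be separate networked processes:

```python
class HeadlessWorker:
    """One of the synchronized side servers that only simulates NPCs."""
    def __init__(self, npc_ids):
        self.npcs = {n: (0.0, 0.0) for n in npc_ids}
    def simulate(self):
        # advance every NPC this worker owns; the main server never does this
        self.npcs = {n: (x + 1.0, y) for n, (x, y) in self.npcs.items()}
        return self.npcs

class DedicatedServer:
    """The server players actually join; it just coordinates world state."""
    def __init__(self, workers):
        self.workers = workers
    def tick(self):
        world = {}
        for w in self.workers:            # gather results from the side servers
            world.update(w.simulate())
        return world                      # forwarded on to connected players

workers = [HeadlessWorker(range(0, 500)), HeadlessWorker(range(500, 1000))]
print(len(DedicatedServer(workers).tick()))   # 1000 NPCs, none simulated here
```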
That ARMA setup is kinda sorta analogous to what I’m trying to describe with the Frame and Machine. It’s just that now you as a player, externally networked or not, have a two-hardware-system combo that is cumulatively producing your whole game experience. That should theoretically be possible if you can do a low-level implementation of whatever protocol Steam Link is using to stream data both ways so fast, as they do with Foveated Streaming.
Right?
Because that Foveated Streaming has to be hooking into actual hardware inputs from the Frame, the eye tracking, then it bounces that to the Steam Machine, does calcs, and throws some result back to the Frame, super fast.
So it has some low-level hardware input access, and obviously the Frame alters what the display renders based on what the Machine tells it, so we have two-way, super-low-latency, local wireless communication going on here.
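The whole loop has to fit inside one display frame, which is why the latency number matters. A trivial timing sketch, with the Machine hop faked as a local function call (everything here is hypothetical):

```python
import time

def machine_side(gaze):
    """Pretend Steam Machine work: turn a gaze sample into an encode decision."""
    return {"high_bitrate_region": gaze}

frame_budget = 1 / 90                     # one frame at 90 Hz is ~11.1 ms
start = time.perf_counter()
gaze = (0.4, 0.6)                         # eye-tracker sample on the Frame
decision = machine_side(gaze)             # hop to the Machine and back
elapsed = time.perf_counter() - start
print(f"loop took {elapsed * 1000:.3f} ms of an {frame_budget * 1000:.1f} ms frame")
```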
From the Gamers Nexus interview: basically, this is only possible because the Frame and Machine are talking via a dedicated Wi-Fi 7 dongle that is solely used by them.
Anyway, no clue if this would actually make practical sense in terms of taking the time to develop it, or what the actual gains or benefits would be… I’m just trying to brainstorm things that might potentially be neat/possible.