It’s no surprise that NVIDIA is gradually dropping support for older video cards, with the Pascal (GTX 10xx) GPUs most recently getting axed. What’s more surprising is the terrible way t…
I don’t get what needs support, exactly. Maybe I’m not yet fully awake, which tends to make me stupid. But the graphics card doesn’t change. The driver translates OS commands to GPU commands, so if the target is not moving, changes can only be forced by changes to the OS, which puts the responsibility on the Kernel devs. What am I missing?
The driver needs to interface with the OS kernel which does change, so the driver needs updates. The old Nvidia driver is not open source or free software, so nobody other than Nvidia themselves can practically or legally do it. Nvidia could of course change that if they don’t want to do even the bare minimum of maintenance.
The driver needs to interface with the OS kernel which does change, so the driver needs updates.
That’s a false implication. The OS just needs to keep the interface to the kernel stable, just like it has to with every other piece of hardware or software. You don’t just double the current you send over USB and expect cable manufacturers to adapt. As the consumer of the API (which the driver is from the kernel’s point of view) you deal with what you get and don’t make demands to the API provider.
You don’t just double the current you send over USB and expect cable manufacturers to adapt
I don’t generally disagree, but that’s pretty much how we got to the point where USB is the universal charging standard: by progressively pushing the allowed current from the initially standardized 100 mA all the way to today’s 5 A. A few of those pushes were just manufacturers winging it, pushing or pulling significantly more current than the standard allowed and assuming the other side would adapt.
The default standard power limit is still the same as it ever was on each USB version. There’s a negotiation that has to happen to tell the device how much power is allowed, and if you go over, I think overcurrent protection is part of the USB spec for safety reasons. There are a bunch of different protocols, but USB always starts at 5 V and 0.1 A for USB 2.0, and devices need to negotiate for more (0.15 A, I think, for USB 3.0, which has more conductors).
As an example, under the USB Battery Charging spec (BC 1.2), a dedicated charging port (5 V / 1.5 A max) identifies itself by shorting the two data pins through no more than 200 ohms.
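For anyone curious what that negotiation actually looks like: in its simplest form it’s just the device declaring its maximum draw in the bMaxPower field of its configuration descriptor, which the host reads during enumeration and can refuse to honor. A rough sketch in C follows; the struct layout matches the USB 2.0 spec, but the struct name and the example values are made up for illustration.

    #include <stdint.h>

    /* Standard USB configuration descriptor (USB 2.0 spec, section 9.6.3).
     * A bus-powered device states its maximum draw in bMaxPower; the host
     * reads it during enumeration and may refuse to configure the device
     * if the port can't supply that much. */
    struct usb_config_descriptor {
        uint8_t  bLength;             /* size of this descriptor: 9 bytes */
        uint8_t  bDescriptorType;     /* 0x02 = CONFIGURATION */
        uint16_t wTotalLength;        /* this plus interface/endpoint descriptors */
        uint8_t  bNumInterfaces;
        uint8_t  bConfigurationValue;
        uint8_t  iConfiguration;      /* string index, 0 = none */
        uint8_t  bmAttributes;        /* bit 6: self-powered, bit 5: remote wakeup */
        uint8_t  bMaxPower;           /* max current in units of 2 mA (USB 2.0) */
    } __attribute__((packed));

    /* Illustrative values only: a bus-powered device asking for 500 mA. */
    static const struct usb_config_descriptor example_cfg = {
        .bLength             = 9,
        .bDescriptorType     = 0x02,
        .wTotalLength        = 9,     /* a real device would add more descriptors */
        .bNumInterfaces      = 1,
        .bConfigurationValue = 1,
        .iConfiguration      = 0,
        .bmAttributes        = 0x80,  /* bus-powered (bit 7 is reserved, always 1) */
        .bMaxPower           = 250,   /* 250 * 2 mA = 500 mA */
    };

Everything beyond that, like 1.5 A charging ports or the higher USB-PD voltages, is negotiated outside this descriptor, via BC 1.2 signalling like the resistor trick above or the PD protocol on the CC pins.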
The default standard power limit is still the same as it ever was on each USB version
Nah, the default power limit started with 100 mA or 500 mA for “high power devices”. There are very few devices out there today that limit the current to that amount.
It all began with non-spec host ports that just pushed however much current the circuitry could muster, rather than just the required 500 mA. Some had a proprietary way to signal how much they were willing to push (this is why iPhones used to be very fussy about the charger you plugged them into), but most cheap ones didn’t. Then all the device manufacturers started pulling as much current as the host would provide, rather than limiting themselves to 500 mA. USB-BC was mostly an attempt to standardize some of the existing usage, and USB-PD came much later.
Device drivers are not like other software in at least one important way: they have access to and depend on kernel internals that are not visible to applications, and they need to be rebuilt when those change. Something as huge and complicated as a GPU driver depends on quite a lot of them. The kernel does not provide a stable binary interface for drivers, so they frequently need to be recompiled to work with new versions of Linux, and less frequently the source code also needs modification as things are changed, added to, and improved.
This is not unique to Linux, it’s pretty normal. But it is a deliberate choice that its developers made, and people generally seem to think it was a good one.
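To make that concrete, here’s a toy out-of-tree module (nothing to do with Nvidia’s driver, purely an illustration). Even something this trivial is compiled against the headers of one specific kernel, and the module loader’s version checks will reject it on another, so every kernel update means at least a rebuild and, when internal APIs move, source changes too.

    /* hello_mod.c - toy out-of-tree kernel module, illustrative only.
     * It depends on in-kernel APIs (module macros, pr_info, utsname)
     * that carry no stability guarantee across kernel versions, so it
     * has to be rebuilt for each kernel it runs on. */
    #include <linux/module.h>
    #include <linux/kernel.h>
    #include <linux/init.h>
    #include <linux/utsname.h>

    static int __init hello_init(void)
    {
        pr_info("hello_mod: loaded on kernel %s\n", init_utsname()->release);
        return 0;
    }

    static void __exit hello_exit(void)
    {
        pr_info("hello_mod: unloaded\n");
    }

    module_init(hello_init);
    module_exit(hello_exit);
    MODULE_LICENSE("GPL");
    MODULE_DESCRIPTION("Toy module showing the rebuild-per-kernel problem");

You’d build it with the usual "make -C /lib/modules/$(uname -r)/build M=$PWD modules", i.e. against one specific kernel’s build tree. DKMS exists mainly to automate redoing exactly that on every kernel update, which only works when the source is available.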
That sounds like a stupid idea to me. But what do I know? I live in the ivory tower of application development where APIs are well-defined and stable.
Thanks for explaining.
Using 10-year-old hardware with 10-year-old drivers on a 10-year-old OS requires no further work.
The hardware doesn’t change, but the OS does.