Those who are now beating up on the new 12VHPWR connector (although I don’t really like the part either) may generate nice traffic with it, but they simply haven’t recognized the actual problem behind the supposedly fire-hazardous, melting connections and cables. Even if certain YouTube celebrities are of a different opinion, because they seem to have found a willing object of hate in the 12VHPWR once again: this connection is actually quite safe, even if there are understandable concerns regarding its handling.
However, “safe” only holds if, for example, the supply lines from a power supply with a native 12VHPWR connector are of good quality and use 16AWG wires, or if the 12VHPWR to 4x 6+2-pin adapter in use actually delivers what it promises. Which brings us directly to the real cause of the reported failures: it’s the adapter exclusively provided by NVIDIA to all board partners, whose internal construction has fire-hazardous flaws!
GPUs have gotten absolutely insane these last few years, and it was only a matter of time before something like this happened.
Some users are reporting problems and others are not; as a percentage it could be in line with normal failure rates. What makes it so dangerous is that the problem isn’t evident from outside the cable – you have to physically cut the cable open to see it. It’s such a dangerous condition that I think the only responsible thing to do is to recall 100% of the cables.
Aside: even if there were no quality problems with the cables, I personally hate the design of these things. They’re aesthetically ugly, bulky and bad for airflow. You can’t do a nice job with them. The power connection is in an orientation that requires a U-turn to reach the power source, requires significantly more case clearance for safe cable installation, maximizes stress on the cable, and even risks damaging the GPU connector. A new angle adapter could help, but IMHO it would have been better to design the connectors and cables so that the cables can be routed directly to the backplane without any front-facing protrusions. It’s too late to fix this generation, but hopefully they can learn from these mistakes going forward.
Alfman,
This is entirely on NVIDIA.
First, the cables are apparently low quality. There has long been a discussion in homelab communities on “crimped” vs “molded” SATA power connectors. Yes, it is not exactly the same, but the results are very familiar:
https://nerdtechy.com/best-lp4-molex-sata-power-adapter
Second, I have no idea why the 4090 draws 500W of power. That is entirely inefficient.
Their slightly older datacenter model (the A100) offers 8x the theoretical compute limit at only 250W: https://www.leadtek.com/eng/products/AI_HPC(37)/NVIDIA_A100(30891)/detail (and that one also comes with a proper connector).
– 9.7 TFLOPS fp64 vs only 1.27 TFLOPS
– 40 GB RAM vs only 24GB
– 1,555 GB/s memory bandwidth vs only 1,000 GB/s
Yes, it is much more expensive. But that is beside the point. The last-gen card can deliver significantly more performance at a lower power draw. And it comes with a proper adapter cable.
sukru,
I don’t know if that “250W” is correct, because nvidia’s specs show 300W-400W depending on the model.
https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/a100/pdf/nvidia-a100-datasheet-nvidia-us-2188504-web.pdf
Also, nvidia’s website puts 4090 at 450W (though different brands might use more).
Overlooking this, though, a power discrepancy is to be expected given that these are completely different configurations optimized for different tasks.
1) The 4090 won’t use ~450W all the time, and we don’t know what the power draw will be under a specific benchmark – it has to be measured. Without running tests we don’t know the true power draw when these cards are running FP64 benchmarks; we cannot assume it equals the max. This puts the “performance per watt” divisor into question.
2) The “slow” FP64 performance on the 4090 might simply come down to the fact that none of nvidia’s consumer GPUs are configured & optimized for FP64. In the world of GPGPU, FP64 is targeted at scientific/enterprise models.
When you take a look at the FP32 performance, the A100 gets blown away by the 4090…
(I broke the source links intentionally to overcome wordpress spam filters)
techpowerup.com/gpu-specs/geforce-rtx-4090.c3889
videocardz.com/newz/nvidia-announces-tesla-a100-specifications
nvidia.com/en-in/geforce/graphics-cards/40-series/rtx-4090/
A100 FP32 -> 19.5 TFLOPS
4090 FP32 -> 82.58 TFLOPS
A100 FP16 -> 78 TFLOPS
4090 FP16 -> 82.58 TFLOPS
A100 Cuda cores -> 6912
4090 Cuda cores -> 16384
I am curious about the tensor specs of the 4090, but they’re not published, bah.
A100 Tensor cores -> 432
4090 Tensor cores -> 512
We can’t overlook pixel/texture shader performance, but I did not find any details for the A100. Also, ray tracing is very likely one of the reasons behind the 4090’s high TDP.
A100 RT cores -> 0?
4090 RT cores -> 128
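If we go ahead and use the published board-power figures as a rough divisor anyway (450W for the 4090, and the 400W SXM figure from nvidia’s A100 datasheet – both are spec maximums rather than measured draw, so treat this strictly as a ceiling comparison), the numbers above work out like this:

```python
# Perf-per-watt from the peak figures listed above; board power is the
# advertised maximum, not measured draw, so this is only indicative.
cards = {
    "A100": {"fp32": 19.5,  "fp16": 78.0,  "board_w": 400},
    "4090": {"fp32": 82.58, "fp16": 82.58, "board_w": 450},
}
for name, c in cards.items():
    print(f"{name}: {c['fp32']/c['board_w']:.3f} FP32 TFLOPS/W, "
          f"{c['fp16']/c['board_w']:.3f} FP16 TFLOPS/W")
# Roughly: the 4090 leads clearly on FP32 per watt, while on FP16 per watt
# the two land close together - which is the apples-to-oranges point.
```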
I got these results on a 3080ti, which is obviously a different card, but you may find them enlightening anyways:
Q2 demo power consumption (out of 350W max)
3080TI opengl, 60FPS cap -> 41W
3080TI raytrace, 60FPS cap -> 250W-273W
3080TI opengl, no cap -> 41W-86W, 1000FPS
3080TI raytrace, no cap -> 346W-348W, 142-150FPS
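For anyone who wants to collect similar numbers on their own card, a minimal sketch is to poll the driver’s board-power readout via nvidia-smi while the benchmark runs (the one-second interval is my choice, and this only reads the first GPU):

```python
# Minimal power logger sketch: polls the driver's board-power readout once
# a second while a benchmark runs. Assumes nvidia-smi is on PATH; some GPUs
# may report "[N/A]" for power.draw, which this sketch doesn't handle.
import subprocess, time

def read_power_watts() -> float:
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=power.draw", "--format=csv,noheader,nounits"],
        text=True,
    )
    return float(out.strip().splitlines()[0])  # first GPU only

if __name__ == "__main__":
    samples = []
    try:
        while True:
            samples.append(read_power_watts())
            print(f"{samples[-1]:.0f} W (min {min(samples):.0f}, max {max(samples):.0f})")
            time.sleep(1)
    except KeyboardInterrupt:
        pass
```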
Do you still think this after considering my point that we’re not comparing apples to apples?
Alfman,
That is a fair point, but the responsibility is still on nvidia.
As for power usage: the A100 is a larger chip, whereas the 4090 is a smaller, denser one (826 mm2 vs 608 mm2). It looks like the A100 is not using all that area in regular workflows, at least in my case (I never saw it go much beyond 200W), while the 4090 is using more areas of the chip.
(This assumes silicon resistance would be the same, but in fact it is not: better-quality silicon has fewer impurities and less resistance. It also assumes similar clock rates, which of course depend on a lot of factors.)
Anyway, nvidia can optimize their chips to run cool (the Tesla series), or burn laptop motherboards (https://www.tomshardware.com/news/nvidia-geforce-faulty-defect-gpu,7795.html).
sukru,
Yeah, most applications only ever use a fraction of the silicon at a given time. Most gamers will never use the tensor cores, while most HPC applications will never use the ray tracing cores, etc. Meanwhile, things like 64-bit FP can sometimes have consumer applications, but nvidia decided to use this as a criterion to differentiate enterprise cards.
Conceivably yes, but I’m not actually sure whether enterprise cards are spec’d to use different silicon at the fab? I think it may be more about binning and/or under-clocking to run enterprise chips less aggressively for a longer expected lifetime.
That’s a long term grudge, haha. Regarding the cable problems, as I understood it nvidia didn’t manufacture them, so it isn’t clear yet where the engineering fault lies, but it is ultimately nvidia’s responsibility to make it right by recalling any potentially faulty cables they sold. That’s the standard I am going to judge them on personally.
Alfman,
Btw, thank you for bringing more accurate numbers.
sukru,
Haha, yes I like numbers 🙂
Things get too darned political & divisive when people express opinions absent a set of common facts to agree on. (Generally speaking, not pointing a finger at you)
The root of the problem lies not with the design of the cables, but the fact that computers began to draw more power than a vacuum cleaner. And the loudest parts of the community came to accept that fact. That’s the absurdity we have to speak against. The shape and layout of the cables carrying that power is only a product of that actual problem. So is putting water coolers or more and more exotic cooling solution designs…
Imagine your refrigerator drawing 5 times the power it draws now and having 3 loud fans blowing into its cooling radiator. Imagine your TV having a water cooler and requiring expensive thermal paste replacements every second year. Imagine your vacuum cleaner having a diesel engine… These are crazy, right? Isn’t water cooling in a computer case just as crazy? But the gamer teens embraced these ‘solutions’ in their race for having bigger dicks, and the rest of the community followed them.
I hope the EU will introduce a tax on these absurd GPUs and the recent trend of CPUs drawing huge amounts of power.
You can do most stuff on a modern, low power 10W PC – even play Deus Ex (2000)!
If I may add, even if we concede that the e-penis race cannot stop, then hardware manufacturers should come together to redesign the desktop PC. We cannot keep pretending that the RTX 4090 or even the RTX 4080 are regular expansion cards, and that having the entire thing supported by a PCB and a couple of case screws on one side is good mechanical design. Because it’s not. Or that feeding the entire thing via a wire that carries one-fourth of the power carried by a US-spec wall outlet but is much thinner (and attaches to a much smaller connector) is somehow good electrical design, even if it can be made to work with the right materials and construction. Because it’s not good electrical design.
It’s possibly worse than that. Conductor size primarily depends on current. 450W at 120 volts is only 3.75 amps, and 450W at 12 volts is 37.5 amps – literally 10 times as much.
However, conductor size also depends on voltage drop (a wire acts like a resistor, and longer wires have more resistance), the insulation’s thermal properties/melting point, and the heat dissipation (e.g. for the same current, a wire run through the insulation in a house’s ceiling needs to be a lot bigger than the wiring on a motorbike).
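To make the margin concrete, here’s a quick back-of-the-envelope sketch (the six supply pins and the roughly 9.5A per-pin rating are my assumptions for illustration, not quotes from the spec):

```python
# Current at the wall vs on the 12V rail, and what that means per pin.
POWER_W = 450
V_WALL, V_RAIL = 120, 12
PINS = 6                 # 12VHPWR carries 12V over 6 supply pins (assumed)
PIN_RATING_A = 9.5       # commonly cited per-pin rating (assumed, not spec)

i_wall = POWER_W / V_WALL        # 3.75 A
i_rail = POWER_W / V_RAIL        # 37.5 A
per_pin = i_rail / PINS          # ~6.25 A per pin if current shares evenly

print(f"wall: {i_wall:.2f} A, 12V rail: {i_rail:.1f} A, "
      f"per pin: {per_pin:.2f} A of a ~{PIN_RATING_A} A rating")
# Not much headroom if the sharing is upset by a bad contact or broken joint.
```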
Anyway, I don’t think redesigning everything to suit the absurd GPU power consumption is the best choice – I’d prefer banning cryptocurrencies and putting an extreme “luxury tax” on GPUs that consume more than 100W (under the pretense of saving the environment). At least that way, game developers might actually try to optimize their bloated and inefficient trash instead of making unsuspecting consumers pay for the increasingly poor quality of games.
Note: Do you remember when Crysis was like the gold standard in graphics quality (with “Ah, but will it run Crysis?” becoming a meme)? Yeah, Crysis runs perfectly fine on a 10 year old integrated GPU that consumes less than 15W (and I’m honestly surprised that Crysis hasn’t been ported to smartphones).
I’d vote for that. Really.
Brendan,
You are technically right, however I think in this case it was a quality control problem rather than the cables not being to spec. The cable only overheats when broken. Of course it’s a huge problem regardless of the cause. It’s only a matter of time before somebody’s house/building burns down.
I am no fan of crypto either. I’ll sure get some flack for saying it, but most of the appeal was from financial speculation rather than practical real world applications. The shift to proof of stake is so much better for the environment, but if the days when you could just buy hardware, sit back, and let your GPUs convert electricity into money are gone, then a lot of cryptocurrency’s backers are out. Maybe the bubble has burst on its own?
Regarding games, you have a point about optimization, however IMHO the real elephant in the room is raytracing. This is by and large the main culprit for these enormous energy loads.
Purely out of curiosity, I tried to find evidence for that claim…
“Crysis 3 Intel igpu @1300MHz HD 4000 3570k OC Benchmark FPS”
https://www.youtube.com/watch?v=1w9aY8Lbuq8
Yes, it’s crysis running on a 10 year old iGPU. Although honestly the frame rate is low and the quality looks horrendous to me.
I wasn’t aware of this, but apparently there is an RTX remaster…
“Crysis 3 Remastered vs Original RTX 3080 4K Ultra Graphics Comparison”
https://www.youtube.com/watch?v=-a6l-0rWolA
To me, RTX ray-tracing in most titles, including Crysis, is extremely subtle. Even so, they’re both dramatically better than a 10 year old iGPU. I get the point you are making about excess, but you need to be careful with examples. iGPUs can’t hold a candle to discrete graphics.
It occurs to me now that you may have been implicitly talking about Crysis 1 rather than the series generically. Oh well. I think the question is: at what point do graphics become “good enough”? That’s hard to say as this is so subjective. Graphics get better, but it’s been diminishing returns. What’s the point of such high settings if we can barely tell the difference without carefully looking at frames in side-by-side comparisons?
I use GPUs for GPGPU, where more processing power is very appreciated, but for general purpose gaming I tend to agree with you that these massive GPUs are overkill.
Honestly (I’m not trying to be silly), the correct answer is that graphics becomes “good enough” after about 15 minutes.
To explain; the human brain has an amazing ability to adapt to different visual stimuli. If you drop the graphics quality your brain starts filling in the gaps (and you become immersed in gameplay) and you stop noticing the difference after a little time. If you increase graphics quality it looks awesome for a short while before you become accustomed to it.
Graphics quality is really about marketing. It’s first impressions, and things like trailers that aren’t long enough for your brain to adapt to.
Brendan,
You know what, I laughed here before getting to your explanation 🙂
I can relate to this sentiment with game immersion. Heck even today I still think back to games I enjoyed when I was younger. I had a hell of a lot of fun with driving games regardless of the graphics. I still do like good graphics, but it doesn’t make the whole experience.
The EU did something similar when it comes to TV energy consumption, by mandating energy efficiency minimums for TVs in an effort to combat the “backlight wars”. The “backlight wars” refers to TV manufacturers putting more and more powerful backlights in their LCD TVs and shipping those TVs with “Dynamic” mode as the default (a mode which runs the backlight at or close to its maximum power) in order to impress customers at the electronics store. Even stricter rules come into effect in 2023.
Guess what happened next? OLED TVs happened, and the best OLED TVs won’t be able to meet the 2023 energy efficiency minimums. Neither will micro-LED. Also, 8K LCD TVs need the extra backlight power so the light can push through the denser screendoor of 8K LCD panels, but they can’t have that extra backlight power because then they won’t meet the energy efficiency minimums. And if 3D becomes a thing again, which needs double the backlight power (the glasses cut half of the light), EU citizens will again be restricted.
Now, there is a workaround: TV manufacturers can ship their EU-bound TVs with some kind of “eco” mode as the default (the EU regulations only care about the default mode the TV ships with), but my point is: You don’t want people like the Eurocrats in Brussels making rules about those things. They don’t care about the underlying technology and won’t change course later to accommodate changes in technology.
For example, what if 5 years from now the GPU becomes the CPU and main memory of the system and a 450W consumption total is perfectly reasonable for advanced/creative workflows? Let me repeat again: You don’t want people like the Eurocrats in Brussels making rules about those things.
Without veering into the trillionth “regulation vs. freedom” debate on the net, let’s return to the initial comment in this thread. GPUs drawing cable-melting watts of power, or 200+ watt CPUs, are not reasonable. We all agree on that. And 95 percent of computers should not and need not draw such power levels, yet are sold with “the fastest hardware” no matter what. We all agree on that as well.
Add to it the current global problems we have. We have a growing e-waste problem, because the computer industry embraced planned obsolescence, requiring us to discard huge numbers of PCBs, screens, batteries, etc. each year. The CO2 levels in the atmosphere are way higher than the levels recorded when I was a kid. And there is a lunatic in Moscow who believes he can do whatever he wants, just because he controls vast amounts of fuel.
Ergo, I can live without an 8K TV, or if I must buy one, pay 50% more for it, if that sacrifice will, when applied collectively, force that lunatic onto a more sensible plane. Whoever applies those brakes – be it Brusselites or someone else – is welcome.
kurkosdr,
I would also want less regulation overall, but what nvidia does is pretty wasteful.
Their main selling point on the 4090 is DLSS 3.0, which is an AI-based boost to frame rates. That actually works, sometimes enabling 3x the perceived performance, which is impressive.
But that has nothing to do with the higher power draw. More power draw enables larger chips, more transistors, and lower-quality silicon (datacenter chips have a higher number of transistors and draw less power, but are much more expensive).
Anyway, the point is, there is probably nothing that stops the next gen features from being implemented on 30×0 cards. Worst case, they could have made a “3080 AI” with some additional tensor cores or whatever new structure they need.
Using 500W is pretty much unreasonable.
As I said previously, it’s not only 8K TVs that have a problem meeting the new efficiency standards, it’s also the best OLED TVs and also the upcoming Micro-LED TVs. Here is a good summary:
https://www.youtube.com/watch?v=Ssgna9Mg6a4
Also, if 3D ever comes back, for example in the form of glasses-free 3D (btw Acer is working on 3D with SpatialLabs) or even in the form of regular glasses-powered 3D TV, it’s highly unlikely any 3D technology will qualify. But even if this doesn’t happen, not having the best OLED TVs and upcoming Micro-LED TVs qualify is already a loss. Sure, there is the “eco” mode workaround, but then you have to tell everyone how to mess with modes.
My point is, considering how lazily the Eurocrats regulated energy efficiency in TVs, I don’t want them regulating things like power consumption in any area of cutting-edge technology. What they did with TVs is take whatever technology was popular at the time (LCD), they noticed that some backlight settings weren’t what they wanted them to be, screamed “market failure!!!” three times (it’s a Eurocrat tradition, don’t ask why), and drafted legislation for TVs assuming LCD panels will be the end-all-be-all for TV technology for at least the next two decades, and then they doubled down on the stupid (because bureaucracies never give up turf once they acquire it). In fact, capping power consumption in GPUs would be a dumb move today, considering how semi-pro creative professionals use them for advanced rendering work, video effects, and video encoding.
If you don’t like the RTX4000 series GPUs, don’t buy them. Nvidia themselves would prefer you bought an RTX3000 series GPU anyway (which is why they are withholding the RTX4070). Also, DLSS3 frame interpolation is crap and causes artifacts on UI elements, during panning, and during scene changes.
Look, I would also like someone to teach Nvidia a lesson, but not via legislation that will never go away. I would prefer if users did it, and it looks like they are doing that by not buying.
kurkosdr,
That’s an interesting idea. I agree we could do much better! However if we abandoned today’s standards, the market might break off into factions and we’d no longer have a standard, just a bunch of incompatible pseudo-proprietary computers.
Those fears aside though, there are interesting possibilities for new concept PCs to address not only these power delivery issues, but also the other antiquated designs our computers are based on including PCI. Computer initialization in particular needs to be better. Booting and device detection need to be instantaneous. And I don’t mean sleep/hibernation, I mean from fully off to fully on. Everything needs to be properly event oriented. Power management & hotplugging shouldn’t be an afterthought. It’d be a stretch, but device drivers ought to be universal.
I’d love to see a new concept PC to succeed the computers we have today, but I honestly can’t see the industry coming together to pull it off.
They were engineered to carry the right amount of current; the problem is that the soldering junctions broke. If you’re saying they shouldn’t have soldering junctions at all, then that’s a fair point, but that’s a byproduct of existing ATX connectors :-/
It’s highly unlikely the actual architecture will change, for the reason you mention. But I would be happy if they just re-engineered the mechanical aspect. I mean, make the GPU something like a server blade inside the case (firmly attached on both sides and possibly with its own airflow). If we could move the PSU to the bottom, we can do this too. For better or worse, the modern discrete GPU is not an ordinary expansion card and hasn’t been since at least the days of Fermi. The fact we keep pretending modern discrete GPUs are ordinary expansion cards that can be supported by a PCB and a couple of case screws is, quite frankly, ridiculous.
Btw, with regards to mechanical support, all they have to do is come up with some kind of standardized bracket to anchor the GPU to the front of the case, not just the rear. That way, at least the damn thing won’t be hanging from its own PCB (which often leads to sag, or damage in shipment if packing foam isn’t fitted prior to shipping).
kurkosdr,
No denying it.
While computers keep evolving out of necessity, they’ve got a shoddy foundation at the base. At least originally ISA cards were so long that they were held by slots at the front of the case. Slots which are long gone…
https://upload.wikimedia.org/wikipedia/commons/9/93/8088-inside-2.jpg
For better or worse, when ISA & PCI cards got shorter, they kept the same form factor and only secured cards with a single screw on one side – at least those cards were small and lightweight compared to the old giants. But now that cards are becoming huge and heavy again, we’re left with no support at the front of the case whatsoever, which is insane.
That would be pretty neat. We need an all-in-one solution that takes care of power, airflow and the data bus, with optional provisions for things like water cooling. I’d like the chassis/motherboard/devices to have enough metadata about themselves to produce an accurate on-screen representation showing exactly where the fans, temp & power sensors, ports and cards are. Not only would this enable new, innovative interfaces, but you could actually get realtime intelligent system optimization instead of a bunch of proprietary software & hardware from multitudes of vendors that can’t talk to each other.
Physically I think it would be really neat if they took the form of hot-swap disk & power supplies used in servers. Quick and easy to install & remove, no fidgeting with alignment or cables, extremely robust and one might not even have to open the case.
Imagine not understanding what the problem actually is, what the actual failure rate is, or the fact that you’re literally getting the same computing power that a couple of decades ago would have taken a whole data center floor and thousands of kilowatts – for 1/1000th of the power and space – and yet talking about taxing in order to solve a problem you don’t understand.
This isn’t quite what happened. In reality laptops saw a massive increase in market share while desktops saw a massive decrease. People still choosing desktops at this point are typically going to invest in these power sucking monstrosities.
That happened because, instead of using two bog-standard, tried-and-tested 8-pin power connectors, NVIDIA went with some untested tiny crap. They saved a few bucks on the PCB.
Oh, and people bend the cable too much because of the humongous size of cards which don’t fit well into most cases.
Surely, if you’re plopping down this kind of money, it might make sense to swap the case to be compatible, no? I mean I’d have to be pretty wealthy and lazy to do that.
Bill Shooter of Bul,
I think it’s a legitimately bad design. Sure, the cards are big, but even so you shouldn’t have to add 2 more inches to a case dimension just to fit a cable without straining it. It’s one thing to buy a bigger case to fit a large GPU, but if you’re buying a bigger case just to fit the cable, then it’s a bad design. At least 3rd-party manufacturers are said to be designing a 90-degree connector to solve this, but it should have been solved upstream before going to market.
I mean, it’s all a bad design. I can’t defend any of it, but with 1000-watt power supplies and 3-slot GPUs, a slightly bigger case isn’t that much of an additional absurd request. Like, if you want to buy the dumb thing, buy the right parts for it. Go whole-hog absurd, or stick with a rational computer.
Bill Shooter of Bul,
Well, you are coming at this from the perspective that a 450W-500W 4090 GPU is ridiculous, therefore any other ridiculous requirements should just be tolerated. Not to oversell the 4090 as anything other than niche, but for owners who are actually interested in GPUs like this, a bad design isn’t good or reasonable for them – it should be fixed.
Here you’ve got a manufacturer of these 12VHPWR cables officially instructing users not to bend the wires within 35mm of the connector (note this is a 3rd party and not nvidia’s connector). This is really difficult to guarantee even in cases with enough clearance.
https://cablemod.com/12vhpwr/
I already thought it was a shame for the standard to spec smaller pins for this connector in the first place. But now that tests are independently revealing how fragile these can be after normal cable management, it’s becoming a bigger problem.
For its part, AMD decided against using the new standard, and I think it was the right decision.
https://pokde.net/system/pc/gpu/amd-rdna3-will-not-use-12vhpwr
Actually the tiny connector is more expensive.
This is part of the reason I have gone on to tinker with older computers. Being able to ditch massive heatsinks and using passively cooled gpus is nicer on the ears and a lot of times on the power usage…
FWIW, there are others that have looked into this and found the adapters they were able to get their hands on were different – thicker gauge wires and a different connector setup. Those they couldn’t get to fail. So maybe it’s not super widespread? In any case it’s not a real problem for most people, who will never buy the $1600 card. Heaven knows I won’t. If you’re into esports (does esports really need a 4090?) or max-quality AAA games, I’m not judging you at all; it’s just more niche than a lot of the attention deserves.
Sorry, you just said something that is hazardous.
Hazard 1: Thicker gauge wire. Each of the pins in the socket can only handle a certain number of amps safely. 16AWG is what is designed to be crimped into the pin, but the Nvidia adapter has 14AWG soldered on instead. Now you have a problem if this ever becomes 1-to-1, as in one wire to one pin: 14AWG can carry more current than the pin is designed to take. In a correct design the wire is the fusible link – the weakest part – not the socket or pin.
Hazard 2: Flat-plate bridging. The pins are already bridged on the card side, and in Nvidia’s design there is now another bridge right at the plug. That is a big problem electrically. Remember, electricity takes the path of least resistance, so with 2 or 4 pins on a plate you cannot be sure the current will split evenly between the pins; given slight differences in contact resistance you should assume that at some point most of the current will try to flow through a single pin and overload it. Most people wouldn’t think of it, but copper wire is itself a small resistor, and the adapter design shipped with the Nvidia vendor cards has removed that resistor. It would have been more costly to do a PCB with 12 small resistors to even out the current flow in case of imperfect contact. The simplest solution is just to run the 12 power wires into the plug with the correct wire size, as the approved PCIe design says to do.
Yes, 16AWG is the maximum allowed by the 12VHPWR PCIe standard for the power-carrying wires, and it is the maximum that is meant to be crimped into a 12VHPWR power pin. That is so the 16AWG wire is the fusible link if something over-draws through a single pin – the replaceable cable breaks, not the plug or the socket. Please also be aware that the power pins in 12VHPWR are not a new pin design; their amp limits are absolutely known, 16AWG is absolutely the limit, and even 15AWG cable will result in socket/plug failures.
Sorry, but saying the problem is not widespread is like saying it’s fine to wire up houses without a fuse box because in most cases everything works fine and some other safety kicks in. Nvidia’s adapter design removed 12 safety fuses, so of course a few things are frying badly for the unlucky few at this stage.
All these early 12VHPWR adapters using 14AWG wire should be forced into a recall; they are electrically unsafe. There are certification tests for the 12VHPWR power pins covering how much power you can draw through them before they melt, and that is why they are only designed to have 16AWG at most crimped in. The reality is that the Nvidia adapter using 14AWG should have had small resistors/fuses on each pin to make sure an individual pin could not be overloaded, and so that if an overload did happen the cable failed without damaging anything else.
Yes, surprise: cables are meant to be fuses to prevent worse things from happening.
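A rough sketch of the failure mode I am describing (the 6 supply pins and the roughly 9.5A per-pin rating are illustrative assumptions, not figures quoted from the PCI-SIG document):

```python
# Illustrative only: per-pin current on a 6-supply-pin connector as pins
# stop carrying (snapped off the bus bar or lost contact). The 37.5 A
# total and ~9.5 A per-pin rating are assumptions for the example.
TOTAL_A = 450 / 12        # ~37.5 A on the 12V side
PIN_RATING_A = 9.5        # commonly cited rating, not a spec quote

for good_pins in (6, 5, 4, 3):
    per_pin = TOTAL_A / good_pins
    status = "over the rating!" if per_pin > PIN_RATING_A else "within rating"
    print(f"{good_pins} pins sharing: {per_pin:.1f} A each ({status})")
# With a solid bus bar and no per-wire fusing there is nothing to limit
# the surviving pins, so they heat until the housing melts.
```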
Hope you understand I’m just trying to lay out the scope in my comment. It seems to affect a minority of users, who have this somewhat rare and expensive GPU and aren’t using a power supply with the native connection it kind of needs. That’s not great, and the problem should be fixed – kind of like a recall car companies issue: usually not an event that will kill 100% of the people exposed to it, but a big enough problem to remove dangerous components that reduce the safety margin.
oiaohm,
I don’t agree that thicker wires are the problem. Thick wires do not cause high current to be pushed into the pins; current is drawn under load. Using thicker & higher quality cables does not increase current – in fact, if a device is using voltage regulation (as GPUs do), then there will be less current thanks to the smaller voltage drop across a thicker cable.
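To put rough numbers on that point (the per-metre resistances are standard copper values; the 0.6m length and 6 supply + 6 return wires are my assumptions):

```python
# Rough sketch: for a constant-power load behind a regulator, a thicker
# cable *lowers* the current slightly because less voltage is dropped
# in the wire. Resistances are typical per-metre copper values; the
# 0.6 m length and 6 parallel supply + 6 return wires are assumptions.
POWER_W, V_SUPPLY = 450.0, 12.0
LENGTH_M, N_WIRES = 0.6, 6
OHM_PER_M = {"16AWG": 0.0132, "14AWG": 0.0083}

for gauge, ohm_per_m in OHM_PER_M.items():
    r_loop = 2 * (ohm_per_m * LENGTH_M) / N_WIRES   # supply + return, in parallel
    i = POWER_W / V_SUPPLY                          # initial guess, then iterate
    for _ in range(20):
        i = POWER_W / (V_SUPPLY - i * r_loop)
    print(f"{gauge}: loop R = {r_loop*1000:.1f} mOhm, current = {i:.2f} A")
# Thicker 14AWG ends up carrying slightly *less* current than 16AWG here.
```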
Yes, I agree that everything has resistance, including the PCB pads & solder bridge that these cables are using for a bus bar. In this case the problem is that the solder joints break, but is there any evidence that it can’t handle the current when it’s intact? If so it would be a new problem that affects more users, but so far I haven’t seen anyone reporting this to be an issue.
Adding 12 resistors would add cost, complexity, resistance and failure modes while not really solving anything that needs to be solved. Tiny imbalances in the milliamps aren’t a problem, and unless the resistors are extremely precise they’d potentially increase the imbalance rather than solve it. This is an interesting academic problem perhaps, but it’s counterproductive to the bigger goal of minimizing resistance. To the extent that there were a significant imbalance, a higher-capacity bus bar would be better and easier than 12 individual resistors.
Can you cite specific sources for any of that?
It’s not normal for any of the power cables in a PC to have fuses outside of the power limiting and fuses in the power supply. Maybe you want to make the case that every pin needs to be individually fused, but this is not and has never been standard practice in the industry.
“I don’t agree that thicker wires are the problem. Thick wires do not cause high current to be pushed into the pins; current is drawn under load. Using thicker & higher quality cables does not increase current – in fact, if a device is using voltage regulation (as GPUs do), then there will be less current thanks to the smaller voltage drop across a thicker cable.”
Design certification for PCI-SIG requires testing to destruction, not just checking that everything is fine if the GPU behaves a certain way.
“Yes, I agree that everything has resistance, including the PCB pads & solder bridge that these cables are using for a bus bar. In this case the problem is that the solder joints break, but is there any evidence that it can’t handle the current when it’s intact? If so it would be a new problem that affects more users, but so far I haven’t seen anyone reporting this to be an issue.”
There is no evidence that the solder joint breaks are causing the socket failures.
“Adding 12 resistors would add cost, complexity, resistance and failure modes while not really solving anything that needs to be solved. Tiny imbalances in the milliamps aren’t a problem, and unless the resistors are extremely precise they’d potentially increase the imbalance rather than solve it. This is an interesting academic problem perhaps, but it’s counterproductive to the bigger goal of minimizing resistance. To the extent that there were a significant imbalance, a higher-capacity bus bar would be better and easier than 12 individual resistors.”
The problem here is a high-capacity bus bar connected straight to the socket pins. PCI-SIG and the maker of the sockets for 12VHPWR both say AWG16 wire at most. In fact, to get a clean, solid crimp and to allow movement for pin misalignment in the socket, the maker of the sockets recommends high-grade AWG17 wire.
Next is how Nvidia did the bus bar. When the Nvidia 12VHPWR adapters are cut apart, you can see they used pins designed to be crimped and left the part that is meant to be broken off as the bus bar. Flexing in this setup is not so likely to break the solder joint as to snap the designed-in weak point, separating a pin from the bus bar. AWG14 carries multiple pins’ worth of amps, so once a pin snaps off the bus bar you have too much current going through the remaining pins, and they overheat and melt the plastic – that is one of the causes. Another cause is that with the pins directly connected to the bus bar they cannot move. It might not look like it, but a male 12VHPWR connector designed to be soldered onto a PCB has intentional flexibility designed in, allowing the pins to move slightly to prevent deforming the contact area. The Nvidia bus bar is likely to cause contact deformation, and therefore incorrect mating between the two sides, so even if you don’t snap a pin off you end up with more amps travelling through a pin than there should be. It has been documented that the pins in the Nvidia-designed 12VHPWR adapter are getting deformed due to the lack of movement caused by the way Nvidia did the bus bar. And of course, with a cable using AWG16 pin-to-pin, a deformed pin would not have ended in melting inside the socket of the plug; there would instead simply have been no power flow down that pin, and if the wires were coming from a bus bar the AWG16 wire would fry – again, not the socket.
Yes, a male 12VHPWR plug soldered onto a PCB, with the PCB acting as the bus bar, would still not be ideal, but it would have been far less likely to do what just happened. Using a 12VHPWR male PCB plug plus a PCB would have been more expensive, but it would also have given the cable a 90-degree bend, avoiding the problem of running into the side of the case, which ends up putting load on the connection and possibly pushing the pins in the snap-off direction. Yes, the current design uses the crimp pins’ snap-off plate as the bus bar.
The individual resistors would each have an amp limit – they act as fuses as well. This would be to prevent putting too many amps through an individual pin. Yes, it would most likely still be a cascading failure.
“It’s not normal for any of the power cables in a PC to have fuses outside of the power limiting and fuses in the power supply. Maybe you want to make the case that every pin needs to be individually fused, but this is not and has never been standard practice in the industry.”
No, you need to refer to PCI-SIG. All PCI-SIG cables are meant to fail in the parts between the plug and socket, not in the plugs/sockets themselves; if a cable fails inside the plug/socket, something is outside specification. Yes, this is meant to be tested to overload. The cable Nvidia has provided is not to PCI-SIG specification. My suggestion of a board behind the plug with resistors that blow when overloaded would be a way of keeping a cable with oversized wires inside PCI-SIG specification.
When it comes to your computer’s power connectors, it is standard practice that, when testing to overload, the wire should be the failure point/fuse. If not the wire, then something else in the cable should break – not the socket.
Alfman, the reality here is that the wire sizes defined for a connection are in effect the recommended maximum fuse sizes by PCI-SIG. If you are not going to obey those wire sizes, you should add something else in the cable designed to break so the sockets don’t get busted up.
Like it or not, the 12VHPWR adapter that has been failing is cost cutting of the worst kind. Using a reduced number of AWG14 wires instead of AWG16 wires means a cheaper material requirement. Using the snap-off part of crimp pins as the bus bar is cost cutting. Having no safety to prevent overloading the pins is also cost cutting. Yes, they put a bit of wrapping around it to discourage bending, to keep the pins from being flexed and hopefully from being snapped off. This is stupid, by the way: the flex is between the pin and the snap-off plate, exactly where the failure is going to happen, and it is going to flex every time you plug in and out, no matter how carefully you do it. These failing 12VHPWR adapters are so poorly designed that, when you know what you are looking at, they are always going to break badly. They are a dirt-cheap, unsafe solution being provided with very expensive cards.
oiaohm,
They probably were tested to destruction. Some youtubers tested the cables themselves and didn’t have a problem under realistic loads. This makes sense if load isn’t the cause of the problem but rather physical damage is.
Look at the pictures. The solder and copper are being ripped apart from the neighboring wires. It’s obvious there’s a physical problem, because neither the copper nor the solder is keeping the wires properly connected.
Please cite all your sources, and this goes for the entirety of your post as you keep making claims without any sources.
An nvidia email obtained by Gamers Nexus claims connecting the pins to a PCB is to spec. Their tests included a 16AWG cable becoming severely current-imbalanced after bending. So no, the problem isn’t the gauge of the wire.
https://youtu.be/p48T1Mo9D3Q?t=449
Electrically, it’s the minimum wire size that’s critical. In overload conditions you really need a fuse and overload protection. If the fuse is rated lower than what the wire can handle, that’s not a problem; the opposite is.
Physically speaking, a crimp connector can only take a certain size wire, but in this case 14AWG wires are not being crimped inside the pins. So as much as you’ve made a huge deal about it, that is not the problem here!!
We agree that they have broken badly, but the problem has more to do with the bus bar and not wire thickness. If you still want to disagree, then at least provide a source for your claims (otherwise they are very debatable).