You may have noticed that, due to “AI” companies buying up literally all the RAM in the world, prices for consumer RAM and SSDs have gone completely batshit insane. Well, it’s only going to get worse, since Micron has announced it’s going to exit the market for consumer RAM and is, therefore, retiring its Crucial brand. The reason?
You know the reason.
“The AI-driven growth in the data center has led to a surge in demand for memory and storage. Micron has made the difficult decision to exit the Crucial consumer business in order to improve supply and support for our larger, strategic customers in faster-growing segments,” said Sumit Sadana, EVP and Chief Business Officer at Micron Technology.
↫ Micron’s press release
First it was the crypto pyramid scheme, and now it’s the “AI” pyramid scheme. These MLMs for unimpressive white males who couldn’t imagine themselves out of a wet paper bag are ruining not just the environment, software, and soon the world’s economy when the bubble pops, but are now also making it extraordinarily expensive to buy some RAM or a bit of storage. Literally nothing good is coming from these techbro equivalents of Harlequin romance novels, and yet, we’re forced to pretend they’re the next coming of the railroads every time some guy who was voted most likely to die a middle manager at Albertsons in Casper, Wyoming, farts his idea out on a napkin.
I am so tired.

Why all the harping about “white males”? People who don’t happen to be white males also work in LLMs (and the datacenters that power them) and have leadership positions too. Why does everything have to be the fault of teh ebil white males?
My concerns are more pedestrian: Who the hell is paying for all those chips and the datacenters that house them? Let alone the energy that powers them? Even the dot-com bubble didn’t consume so much of the world’s chip supply, and that was in an era when fewer people owned computers, too. Where are the actual dollars coming from? At the rate LLM datacenters are munching through capital, even the vast financial firepower of Apple, Microsoft, Oracle, and Meta will be exhausted by the end of next year. Are those companies really betting the farm on LLMs? Do they have a government bailout lined up when their financial firepower dries up? That’s the real question.
kurkosdr,
It is a mix of real potential and “yet another bubble”. Like most bubbles, there is fundamental value in AI (even though many would act like luddites until it becomes pervasive). However, the current competition makes it a “we can’t afford not to run in this race” situation.
Google?
They were the ones who literally invented large language models. Specifically the Transformer architecture, or the T in GPT. Now with Search deteriorating and ad revenue going down, they have to make a move. Fortunately, they actually have the hardware and cash reserves to burn. So Google will be fine.
Nvidia
One of the larger players who once again landed on all fours like a lucky cat. They have a reliable, albeit archaic, API, CUDA, which has become the de facto training language for machine learning. They will also be fine, as they are the “shovel vendors” in this gold rush.
Microsoft
They are selling cloud services and also experimenting with AI on the desktop and in gaming. I don’t think they will make anything from the client effort. But as a B2B vendor, Azure is likely to make a killing.
OpenAI
They are pretty much toast. They have no moat, and no path to profitability. They have depended on one bailout after another (from the likes of Microsoft). But their latest attempt was preemptively killed by the US government.
Meta / Facebook
Frankly I have no idea which way they will go.
Open Source
The actual winners. With the hardware becoming available, models becoming commodities, and the software constantly being enhanced, the local LLM offerings are maybe only six months or so behind the big commercial ones.
Today I can download ollama + open-webui, get a reasonable model like Qwen3 or Gemma, upload my documents — entirely locally, index them, and query for my own stuff. Entirely private. Entirely local. (except the initial download of course)
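For what it’s worth, the local setup really is that simple. Here is a minimal sketch of querying a locally running Ollama instance from Python (assuming “ollama serve” is running on its default port and a model such as Qwen3 has already been pulled; the document indexing part is handled by open-webui and isn’t shown here):

    import requests

    # Minimal sketch: ask a locally running Ollama model a question over its HTTP API.
    # Assumes "ollama serve" is running on the default port and the model tag below
    # ("qwen3", used here only as an example) has already been pulled.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "qwen3",   # any local model tag, e.g. a Gemma variant
            "prompt": "Summarise the attached meeting notes in three bullet points.",
            "stream": False,    # return a single JSON object instead of a stream
        },
        timeout=300,
    )
    resp.raise_for_status()
    print(resp.json()["response"])  # the completion, generated entirely on your own machine

The retrieval and indexing layer sits on top of this, but the point stands: nothing leaves the machine after the initial download.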
We will be fine; there will be a natural correction, and many who have overextended will go bankrupt, like in every other cycle.
See, that’s where my disagreement is: They don’t have the cash reserves to burn. They may have a market cap in the trillions, they may have assets in the hundreds of billions, but when it comes to money available for spending, they have a little under $100bn:
https://companiesmarketcap.com/alphabet-google/cash-on-hand/
With the rate AI datacenters are munching through capital, all this money should be gone sometime in 2026. I mean, AI datacenters are eating the world’s memory supply. Do you know how much capital it takes to do that? And that’s in addition to the fact that they’ve been eating most of TSMC’s wafer production for some time now, which also requires a monumental amount of capital to do. And then there is the capital to power all those chips.
So, this is my question: Where is all that capital coming from? The only comparison I can draw is back when housebuilders in the US were acquiring land in the middle of nowhere to build houses and paying top dollar for it, with no indication where all this money was coming from. Post GFC, we learned that the capital was coming from subprime mortgages, financed by the deposits of US citizens (aka, people’s deposits were being hollowed out to pay for the boondoggle that was McMansions in the middle of nowhere). That’s why I am worried right now: What is being hollowed out to pay for all those AI datacenters with almost no real-world revenue and the power to run them? Officially, it’s the cash Google, Microsoft, and Meta have in hand, but really?
kurkosdr,
That might be the limit to how much cash they want to keep on hand. In other words, they don’t have to choose between holding $100bn and spending on AI, because they can do both at the same time. Google alone makes about $100B a quarter in revenue, and the massive layoffs in recent years probably mean they have more money coming in than they know what to do with right now.
https://www.macrotrends.net/stocks/charts/GOOG/alphabet/revenue
Holding cash is a notoriously bad investment, which leads to a unique problem for extremely wealthy people and companies: they don’t want to be stuck holding cash long term. They are desperate to invest their money somewhere, and for better or worse they chose AI. Yeah, it seems awfully nice to have those kinds of problems, but the point is they may not have to touch that $100B at all.
Alfman, and kurkosdr,
My point was not the cash on hand, but that they can keep burning it for a long time (ads are not going away tomorrow, or in the next 5-6 years). — sorry for not putting it correctly.
Architecture-wise, they switched to TPU-based models. That is a massive benefit, since they not only save costs, but they can “co-design” the model and the hardware that runs it.
(The advantage of this cannot be overstated. A generic GPU with Tensor cores is nice to have. But model-architecture-specific hardware is significantly better.)
Google has no shortage of “traditional” datacenters. I think I can share this much: for an “experimental” job with xx,000 CPU cores and yyy TB storage, I would not even need to fill in any forms. (As long as it had business reasons).
Google would have bigger problems if they were not investing in AI (we can see this in the market reaction in 2022 and the post-ChatGPT era, and in what happened after Gemini became widely known).
sukru,
Somebody’s going to be the first to turn one of these LLM models into an ASIC. Rather than having to fetch neuron weights from memory, they can be baked right into the transistor fabric. Multiplication by fixed values can use mathematical tricks to speed up calculation compared to generic multiplication (*2 is simply a shift, *3 is simply a shift and an addition, *4 is simply a shift, etc). Furthermore, it could eliminate most propagation and latching delays between the ALU and RAM. Doing it in the analog domain instead of digital could speed things up even further and resemble NNs found in nature. This would give up the flexibility of programmable hardware, but even so, a hardcoded LLM chip could be worth it just for the performance advantages.
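To make the shift-and-add point concrete, here is a toy software sketch of multiplying by a hard-coded, non-negative integer weight using only shifts and adds (purely illustrative; real synthesis tools go further with signed-digit recoding and sharing of common subexpressions):

    def times_const(x: int, weight: int) -> int:
        # Each set bit of the fixed weight contributes one shifted copy of x,
        # so no general-purpose multiplier (and no weight fetch) is needed.
        result = 0
        bit = 0
        while weight:
            if weight & 1:
                result += x << bit
            weight >>= 1
            bit += 1
        return result

    assert times_const(7, 2) == 7 << 1            # *2 is a single shift
    assert times_const(7, 3) == (7 << 1) + 7      # *3 is a shift plus an add
    assert times_const(7, 4) == 7 << 2            # *4 is a single shift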
Alfman,
They don’t go that far, but there are significant architectural changes compared to general-purpose Tensor cores.
The TPU has a massive local RAM, which is content-addressed and used for embeddings. (I remember we discussed this, “hardware hashtables”, before.)
They are “systolic arrays”, not parallel NUMA machines. What this means is that they do “streaming” of work. No RAM/cache back and forth; it just “flows” like a heartbeat. (I’ll be honest, I don’t know all the details.)
That ALU -> RAM issue is “solved” because the data is not pushed to RAM or cache in between steps at all.
Memory is “software defined”, as is the networking. (They use hardware optical switches: tiny mirrors that can be aligned to create direct node-to-node paths between TPUs.)
Here is an older document, but I had found it fascinating back in the day:
https://newsletter.semianalysis.com/p/google-ai-infrastructure-supremacy
(The site keeps most of their findings private, and was asking something like $1,000 per year subscription fee. So… we read what we can)
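To give a rough software picture of that “streaming” idea, here is a toy model of an output-stationary systolic matrix multiply (a loose sketch only; a real array also skews the inputs so each cell only ever talks to its neighbours):

    import numpy as np

    def systolic_matmul(A: np.ndarray, B: np.ndarray) -> np.ndarray:
        # Toy output-stationary systolic array: each cell (i, j) holds one
        # accumulator and adds one product per "tick" as A streams in from the
        # left and B from the top, with nothing written back to RAM in between.
        n, k = A.shape
        k2, m = B.shape
        assert k == k2
        C = np.zeros((n, m))
        for t in range(k):                      # one wavefront per clock tick
            C += np.outer(A[:, t], B[t, :])     # in hardware, all cells update in parallel
        return C

    A = np.random.rand(4, 3)
    B = np.random.rand(3, 5)
    assert np.allclose(systolic_matmul(A, B), A @ B)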
sukru,
Yes, I understand that TPUs can beat a GPGPU and that there are some ways to optimize memory overhead. But it still doesn’t beat a hard-coded model that doesn’t rely on memory in the first place. Google’s TPUs do seem more optimal for training, but for inference an optimized ASIC could dramatically beat the same LLM running more generically on google’s TPUs.
The main shortcoming of hard-coding an LLM into an ASIC is obviously that it’s not programmable, but the speed and energy advantages could still prove beneficial over the more generic hardware TPUs that google has.
Alfman,
But the TPU is the ASIC for specialized linear algebra execution.
They are very different from our CPUs or GPUs and do not follow the von Neumann architecture. We could go into details for hours, but basically they require a specialized “compilation” of your compute graph, done by the XLA compiler.
https://www.reddit.com/r/Compilers/comments/1hmy886/backend_codegenoptimizations_for_tpus/ (some light discussion on it)
And you do NOT want to “burn in” model inference. It is always changing over time:
https://github.com/ml-explore/mlx-lm (MLX is one of the most optimized runtimes, look at how often this changes)
I would estimate we have achieved about a 10x runtime improvement since the early ChatGPT 3.5 days. And that is why people are able to run SOTA models on commodity hardware at reasonable speeds. If we had built a “ChatGPT 3.0 ASIC” it would be extremely slow today.
sukru,
Linear algebra is easily solved; NNs explicitly need a non-linear component known as an activation function, although I assume you already know this and that calling it a linear algebra ASIC was an oversimplification.
Yes, but I think you’re missing the point I was getting at. While Google’s TPUs are specialized at NNs, they are still generic with regards to a specific NN. Maybe this can be better understood by looking at a single neuron….
To handle neurons generically you need transistors both to store all the neuron’s weight values AND to multiply those weights to arbitrary input values. That’s what google are doing. It works, but it’s generic with regards to the NN values being programmable. This generic NN engine has obvious advantages over a hard coded NN, however if you hard-code specific neural network values into the transistors you can not only eliminate all the transistors needed for memory, but you also get to optimize away most of the multiplication transistors too. Hard coding those neural values directly into the circuit leads to trivial boolean logic optimizations (ie this bit is always ONE and that bit is always ZERO). Moreover, what I was trying to say before is that if you know a neuron weight has a specific hard coded value, there can be algebraic solutions with substantial reductions in transistor counts (like bit shifts not requiring any transistors at all).
Building an ASIC is financially out of reach for most people like us, but the math itself isn’t and even people like us could beat google’s TPUs by applying basic math to a hard coded LLM.
I’m going to stick with “baking” since that’s the term normally used for optimizing a generic solution with hard-coded values. As for the merits, I agree that a generic TPU has obvious benefits over a hard-coded one. However, to the extent that we’re looking at AI inferencing in data centers with gigawatts and potentially terawatts of power, a hard-coded NN chip that is faster, uses fewer transistors, and draws less power carries some obvious benefits.
I think TPUs are optimal for training, but for inferencing you don’t need to be google to see that an ASIC with a baked in LLM would outperform their TPUs.
Yes, limited lifespan is the obvious con. But the chip derives value from how much work it does during that lifespan and could pay for itself through higher performance and lower energy use. Hard-coded chips may seem wasteful, but against the backdrop of terawatt data centers it could still make more sense to make temporary chips that operate much more efficiently during their life.
I’d envision a chip fab process that’s highly automated and optimizes end to end time to market. Of course in reality the fabs are in such high demand that even if you can optimize everything on your end, the fabs may not be able to get around to making chips for your NN in a timely fashion. These market conditions could offset many of the benefits that a more efficient specialized chip would bring over a generic one. So while these aren’t engineering factors, they still can hinder us from achieving faster and more efficient inferencing.
Alfman,
Transformers are very specialized architectures. Almost nobody uses generic Neural Networks to train or run them except in education or very small settings.
Basically a series of matrix operations — hence low level linear algebra.
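To illustrate, single-head attention really is just a handful of matrix multiplies plus a softmax (a numpy sketch; real transformers add multiple heads, masking, and the MLP blocks, but the shape of the computation is the same):

    import numpy as np

    def attention(X, Wq, Wk, Wv):
        # Single-head scaled dot-product attention: almost entirely matrix
        # multiplies, with softmax as the one non-linear step.
        Q, K, V = X @ Wq, X @ Wk, X @ Wv                 # three matmuls
        scores = Q @ K.T / np.sqrt(Q.shape[-1])          # another matmul
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)   # softmax (non-linear)
        return weights @ V                               # final matmul

    d = 16
    X = np.random.rand(8, d)                             # 8 tokens, d-dim embeddings
    out = attention(X, *(np.random.rand(d, d) for _ in range(3)))
    print(out.shape)                                     # (8, 16)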
Also, TPUs match this perfectly without needing any more specialization. They are “Systolic arrays”
https://en.wikipedia.org/wiki/Systolic_array
And I’m not sure why people think they are training-only. They are pretty much used for inference in widespread settings due to much better power efficiency. Android has had those Tensor cores for a long while. (And I think Microsoft is bringing something similar, and even Nvidia as well, finally.)
You could even get a TPU “hat” for the Raspberry Pi at one time, but I’m not sure whether they are still available.
sukru,
Linear algebra is mathematically solved. We can trivially collapse arbitrarily deep linear algebra nodes just by evaluating them algebraically. However, without the non-linear activation function it does not and cannot rise to the level of a practical neural net. We can get more into it if you want, but I expect this is something we should already be in agreement on.
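A quick numpy sketch of that point, purely for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(8)
    W1 = rng.standard_normal((8, 8))
    W2 = rng.standard_normal((8, 8))

    # Two purely linear layers collapse into a single matrix: no extra depth gained.
    assert np.allclose(W2 @ (W1 @ x), (W2 @ W1) @ x)

    # With a non-linearity (ReLU) in between, the collapse no longer holds, which is
    # exactly why NNs need activation functions to be more than one big matrix.
    relu = lambda v: np.maximum(v, 0)
    print(np.allclose(W2 @ relu(W1 @ x), (W2 @ W1) @ x))   # False (with these random weights)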
I don’t disagree with anything you are saying here; I accept all of it. But it doesn’t seem like you’re addressing the crux of what I’m saying: all of that inference can be done more efficiently by a hard-coded LLM. The math isn’t particularly hard; it wouldn’t take many of us to do it. Of course we lack the means to actually fabricate a cutting-edge ASIC, and that’s the main barrier, and a pretty large one at that.
Alfman,
True, an ASIC would make inference faster… if model architectures and algorithms stayed the same.
However, by the time you get the first prototypes from the fab (EVT), your ASIC would be matched by algorithmic improvements on the TPU. And before you get the final product (DVT, PVT)… it would have become completely obsolete and a candidate for e-waste headed to a landfill.
One thing about the LLM landscape is that it is extremely fast-moving. I would recommend following the LocalLLaMA community:
https://www.reddit.com/r/LocalLLaMA/
(That is where I get majority of my information)
It would have to play catch-up with things like: mixture of experts, kv-cache quantization, virtual context windows, 8-bit floats, 4-bit ints, 4-bit floats, 1.58-bit weights (not joking), and so on.
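As a toy example of how simple, and how disruptive to fixed hardware, one of those tricks is, here is a minimal per-tensor symmetric int4 quantization sketch (real schemes are per-group or per-channel and far more careful; the point is that the weight representation keeps changing underneath the silicon):

    import numpy as np

    def quantize_int4(w: np.ndarray):
        # Toy symmetric, per-tensor int4 quantization with round-to-nearest.
        # The range [-7, 7] is used here for symmetry; real schemes vary.
        scale = np.abs(w).max() / 7.0
        q = np.clip(np.round(w / scale), -7, 7).astype(np.int8)
        return q, scale

    def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
        return q.astype(np.float32) * scale

    w = np.random.randn(4, 4).astype(np.float32)
    q, s = quantize_int4(w)
    print(np.abs(w - dequantize(q, s)).max())   # quantization error, traded for far less memory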
How can we know this?
Turns out there is an LLM “chip”: Groq (not to be confused with Elon’s Grok).
https://en.wikipedia.org/wiki/Groq
They were the first to break some speed barriers, and were designed by some of the original team behind Google’s TPUs, receiving billions in funding.
The problem?
https://artificialanalysis.ai/
They are neither the fastest nor the cheapest today. A static (or let’s say slow-moving) architecture competing with a very fast, dynamic algorithmic ecosystem. — to be pedantic, they are extremely fast; it is just that they are memory-constrained and need more chips, hence the slowdown at larger model sizes.
(missed edit)
… and again TPUs are linear algebra ASICs.
sukru,
If your point is that we can’t fabricate new hardcoded LLM ASICs as quickly as we can reprogram a new LLM into a generic TPU, well, that’s obviously true and needs to be conceded up front. While a generic TPU running a given LLM won’t be able to match the performance of the same LLM hardcoded into an ASIC, there are obvious cons for the latter.
Although this does make me wonder: theoretically, what is the minimum possible turnaround time achievable with a “just in time” fab that solved all the engineering & bureaucratic bottlenecks? With maximal automation, is it physically possible to get all the lithography masks completed and be outputting silicon wafers within a week? If so, that could revolutionize rapid custom ASICs in much the same way 3d printers and milling machines have revolutionized rapid custom parts.
I already do and I’ve even built a small project using it. Awesome stuff!
Ok, we can go into that. I’d agree they do linear algebra as well, but it sounds like you are pushing back on my claim that google TPUs (and NNs more broadly speaking) need to implement non-linear algebra for the activation functions?
https://medium.com/@varun_mishra/activation-functions-in-focus-understanding-relu-gelu-and-silu-841ed1c6df0c
https://introl.com/blog/google-tpu-architecture-complete-guide-7-generations
https://qengineering.eu/google-corals-tpu-explained.html
So although a lot of linear algebra is involved, boiling a TPU down to a “linear algebra ASIC” sounds like an oversimplification to me, because you can’t stop there: you also need non-linear activation functions for a NN to be viable.
Edit: Shoot, I would have found a different link if I had noticed that medium article was behind a login barrier.
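For reference, the activation functions in question are only a few lines of numpy each (a sketch; the GELU below is the commonly used tanh approximation):

    import numpy as np

    def relu(x):
        return np.maximum(x, 0.0)

    def gelu(x):
        # tanh approximation commonly used in transformer implementations
        return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

    def silu(x):
        # a.k.a. swish: x * sigmoid(x)
        return x / (1.0 + np.exp(-x))

    x = np.linspace(-3, 3, 7)
    for name, fn in [("relu", relu), ("gelu", gelu), ("silu", silu)]:
        print(name, np.round(fn(x), 3))   # all three bend the line; that is the non-linearity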
Alfman,
I’m not an expert, but I think the timeframes are 1 to 2 years. There are a lot of issues, including physics not playing ball (leakage, and such), and factories allocating capacity way ahead into the future.
You are correct. “Technically correct, the best kind of correct”
The core of the TPU is that matrix multiply engine. But it also has a VPU which can do ReLU/GELU/similar and much more. (I would not be surprised if there are ARM cores as well. I had an FPGA board, an AMD Zynq, which was similar: a combination of FPGA fabric plus ARM cores to control execution.)
sukru,
I get that things are badly backlogged today. Everyone has to compete against the likes of apple, amazon, google, nvidia, etc. Normal size companies have no real shot unless they go to much older generations.
That said, I was more curious about how long it actively takes to fully engineer and fabricate an ASIC excluding the queuing delays where nothing’s happening. What’s the shortest theoretical turnaround time physically achievable? I guess this could be a question for LLM, haha.
This isn’t relevant today, but in the future maybe it could be.
Every single major tech corporation in the world relies heavily on AI now internally. There are big bucks coming in just from that.
The entire B2B stack has been reworked almost overnight, and that is a $20-30 trillion market.
As usual a lot of “tech” “enthusiasts” have missed yet another memo. Ergo the usual “hurrr durrr AI bubble”
This really sucks, Crucial was my go-to brand for sensible, inexpensive, broadly compatible RAM for pretty much every x86 device I have owned in the past 15 years or so. Obviously with the prices hiked across the board I won’t be buying memory from any brand for a while, but if the prices ever become sensible again (and I really doubt they will after such a stratospheric rise) I’ll probably have to hold my nose and pay more for “gaming RAM” with all the LEDs and fake heat spreaders and garish colors, and deal with aggressive timings when I just need to upgrade a plain old workstation.
Morgan,
I think Micron chips are used in more than just Crucial-branded DIMMs. I use G.Skill-branded DIMMs; I’m not sure who makes the chips for them. Regardless, this only makes a bad situation even worse.
I haven’t been entirely successful at avoiding LEDs; I don’t want any lights or screens using power and releasing heat, however modest, into the case. Unfortunately many manufacturers keep pushing light-up components, and those without lights go out of stock first, so buyers like me are often stuck with unwanted LEDs. My motherboard, GPU, and water cooler all have bright lights I would rather not have there. I can turn off the ones on my motherboard, but the others remain on all the time.
Crucial is Micron’s “house brand” of Plain Jane memory, but yes Micron chips are (or were until now) used across the industry by DIMM manufacturers. G.Skill uses memory chips from all three major manufacturers (Micron, Samsung, SK Hynix) across their product lines so you never really know which you will end up with. I have used their RAM in the past but I always had issues with timing and custom clock speed profiles so I would always fall back to Crucial, and sometimes AData, the latter of which will likely be my go-to for the future as they have a solid line of OEM-style memory that has no flashy bits or aggressive timings.
It’s pretty much unavoidable in GPUs these days unless you go for OEM or reference cards. My latest GPU is a Sapphire Pure 7700 XT that has an unfortunate red glow coming from the top. Given the case it’s in (white Fractal Ridge) the entire top of my case now glows red all the time. There’s no way to turn it off short of a possible firmware hack, but I’m not brave enough to attempt it and risk bricking the card over it.
Thankfully the rest of my system is plain and ordinary; I try to buy things like motherboards and CPU coolers that are OEM style whenever possible. My CPU cooler is the standard AMD Wraith Stealth that came with the 3400G I had in this system before I upgraded it to a 5700GT, no glowing bits there.
Morgan,
Do you run them out of spec? I always run them at spec(*) XMP and buy RAM for the speed and timings I want. Of course one can just get unlucky with defects, but I’ve been using them for so long with reliable results.
* I’ve had one set start to fail but when I looked closer the MSI motherboard was guilty of overvolting them to 1.365V (supposed to be 1.35V). I looked at the XMP profile and the correct voltage was set in XMP. I was rather disappointed to discover that MSI were ignoring the limits and overvolting! It’s just 1.1% over, but still you have to “undervolt” just to be in spec! I’ve had it running stably at 1.344V and haven’t had any problems.
I’ve also witnessed motherboards overvolting CPUs out of the box. In 2025, modern motherboards from ASUS and ASRock have both been called out for supplying voltages that kill CPUs.
https://www.howtogeek.com/heres-how-to-check-if-your-motherboard-is-overvolting-your-cpu/
Manufacturers not being responsible for operating in spec by default makes DIY computer assembly much more stressful since now the consumers have to proactively monitor/change settings to be safe. I never used to pay attention to this because I wasn’t an overclocker. Not only this, but this ordeal revealed the existence of “shadow voltages” that can’t be adjusted by normal users even if they’re too high. How can I trust them now? They’ve leaned in hard to the overclocking culture and now when I shop for motherboards I genuinely don’t know which ones are safety engineered to respect the limits of components. Is it too much to ask that default settings are always safe and stable and that overclockers need to explicitly request overvolting?
It’s hard to believe an engineer would approve this; I feel like at the center of all this there must be a room where MBAs are pressuring engineers to push the limits of the hardware so they can differentiate their products on performance.
In an era of stock shortages for GPUs, I get what I can get. I was a fan of corsair’s maglev fans on their water coolers, but all the new models have that iCUE lighting and you can’t get the old ones any more. Between brands leaving the market and prices rising by 100-200%, memory could get to this point too.
EVGA, another one of my favorites for power supplies…OMG I just took a look at the price of new power supplies, double what I paid. Everything’s going to hell.
The specific issue I had with G.Skill RAM was that it ran more aggressive than advertised, while other brands of RAM on the same board ran at default XMP settings for the board. Even dialing back the settings on the board didn’t allow that specific RAM to boot reliably whereas Crucial RAM booted fine with default settings. This was nearly a decade ago on a Gigabyte board with an Intel Skylake CPU, all run of the mill stuff. I haven’t used G.Skill since then so it definitely could have just been bad RAM or a compatibility fluke between the board and memory. I’m just a “once bitten twice shy” kind of guy so that brand fell off my list for good.
EVGA was a goto PSU brand for me for a long time too, but these days I stick with Seasonic, they are an ODM for many other brands but they sell under their own brand as well. I’ve also found FSP, Lian Li, and Be Quiet! brands to be reliable and, well, quiet PSUs. I avoid Corsair like the plague; I had a brand new Corsair PSU die after two months, and it was still well within their warranty period. They refused warranty service unless I paid shipping both ways (defective unit to them, replacement unit to me) and shipped it in the original box with all original documents and accessories. By the time I paid both shipping fees I could have bought another new PSU so that’s exactly what I did, from Seasonic. I will never, ever buy anything made by Corsair again after that horrible experience.
Morgan,
I’m confused by this. Unless you are alleging that GSkill falsified XMP data, it’s 100% on the motherboard to run memory at spec. The RAM technically can’t do anything about it. Honestly I’ve never come across GSkill RAM with wrong XMP data and it’s pretty easy to check when enabling XMP so it makes me wonder what happened in your case. Did you buy new from a reputable source?
Yeah, that’s human nature. Most people get working components, but if you’re one of the unlucky ones that can shift your opinion pretty quickly. Maybe it wasn’t statistically fair on my part, but after having experienced catastrophic failures in two unrelated NAND devices wherein the entire media was immediately lost, the experience has led me down the path of never again relying on NAND devices without RAID. Even though I’ve seen HDDs fail, they usually gave early warning signs of failure: SMART errors, bad sectors, noise before motor failure, what have you. I didn’t get any warning with my SSDs. Being bitten shaped my opinions even though my experience may not be statistically representative.
My warranty stories are very mixed and I’ve shared some of them on osnews in the past. Sometimes no problem at all; other times you can tell the customer service agents have been trained to deny everything. I wouldn’t actually mind buying products with no warranty beyond the return period if it was honestly disclosed up front in exchange for lower prices (i.e. most of the stuff I buy off of ebay). The vast majority of the time I’ll never need a warranty, but when a company goes out of its way to make a warranty a selling point and charges for the privilege, then it damn well should honor it. Yet my experience is that about 50% of companies selling products with warranties don’t actually intend to honor them, and sometimes whistleblowers confirm these allegations are true where they work. Unfortunately many consumers can’t know whether they’ll be denied a warranty at the time of purchase. In principle, going to court is the way to force corporations to honor their contracts; in practice it takes lawyers and time that ordinary buyers don’t have and costs more than the product is worth, which means manufacturers get to deny warranties with little risk to themselves. As I understand things, the EU takes this more seriously and companies can get in a lot more trouble for reneging on warranties.
@Alfman:
I don’t think it was anything intentional, I think either that specific RAM was incorrectly binned or it was just a quirk of incompatibility with that specific board. Regardless, my lizard brain said “G.Skill no good no more” and I’ve never given them a shot again. If the situation had been reversed and the Crucial RAM had failed while the G.Skill worked I would have avoided Crucial from that moment forward and would probably have G.Skill RAM in my current workstations instead of Crucial.
Morgan,
Fair enough, it may not have performed to spec. What confused me was this specific wording “it ran more aggressive than advertised”, haha.
@Alfman:
Gotcha, what I meant by “aggressive” was a bit similar to your experience with that motherboard where voltages were higher than the profile you selected. For me the timings were also more “gaming focused” rather than the default timing profile I had set. It’s as if the motherboard and the RAM were fighting over which settings to run with, and the RAM won. At least that’s how I saw it. The Crucial RAM took the defaults with no issues and from there I could just set the speed to the CPU’s preferred 2400 from the motherboard’s default of 2133 (from what I recall, that was a long time ago). I want to say the G.Skill RAM defaulted to some crazy number like 2933 on that motherboard even though it was sold and labeled as 2400, and when I tried setting it to 2400 it didn’t persist after a reboot. It was also unstable in the OS at any set speed, but the Crucial RAM ran perfectly fine.
In short, incorrectly binned or just plain faulty.
I can see why a user would likely blame the memory under the circumstances described, yet I am not convinced you’ve strictly ruled out the potential for the motherboard being at fault. The RAM merely provides the specs in an EEPROM, and only the motherboard can decide what speed to run the RAM at, which is why I was curious whether the GSkill RAM was reporting the correct XMP data.
It’s ancient history and this information may be useless to you now, but if you are curious you can use CPU-Z to view the EEPROM under the SPD tab (see the screenshots here):
https://smart.dhgate.com/how-to-verify-xmp-is-enabled-using-cpu-z-a-step-by-step-guide/
On linux, the tool to do this is decode-dimms, though it notoriously lacks support for XMP, so only the base profiles get listed. Still, it gives you an idea of what’s on the EEPROM…
The lack of XMP support on linux is notorious and has been brought up in nearly every forum thread asking how to read DDR RAM specs on linux for over a decade. Alas, it still doesn’t decode it 🙁
dmidecode shows the current memory configuration as reported by the BIOS.
@Alfman:
The motherboard worked perfectly fine with other brands of RAM, otherwise I would definitely have blamed it instead of the RAM. The G.Skill sticks were the outlier, not the board or other RAM. I ran that system with the Crucial memory for years as my main workstation, and never once had stability issues like I did that first week or so with the G.Skill RAM.
Morgan,
Yes, I understood that, and I am sorry if it seems like I am not listening. My point, though, is that if the GSkill EEPROM didn’t contain the invalid speeds you were seeing, then we can definitively conclude the motherboard is at fault for setting invalid speeds, because it’s the motherboard that chooses the speeds. DDR simply goes along for the ride without any technical ability to override the motherboard.
This doesn’t tell us “why” the motherboard did what it did (again, assuming the EEPROM data was in fact correct) – it’s a black box to which there is no public source code. I hesitate to speculate as to the “why” part, but hypothetically there could be a parsing/overflow/whatever bug that happened to be triggered by your GSkill RAM. Alternately, there could be code that says “if brand/model matches XYZ then use this logic instead”. This could possibly stem from a well-intentioned patch for previous RAM that is now causing new issues. Note that the Crucial memory wouldn’t necessarily be affected even if it had similar specs.
By now I probably sound like a lunatic GSkill RAM fanboy to you, haha. I am sorry 🙂 All I am trying to say is that it’s within the realm of possibility that the motherboard could have been at fault – switching RAM doesn’t necessarily disprove it.
To look at another example: Apple has programmed new iPhone models to make it appear that 3rd party repairs use defective components even when it can be proven that the repair parts are both working and authentic. Apple did this intentionally to make customers think the repair components are glitching when in fact Apple’s own programming is the real culprit causing the glitches. Of course the Apple case involves actual malice, but it still highlights the way a user’s experience can yield a conclusion that’s the opposite of the facts.
TLDR; it probably was just bad RAM, but it would have been more conclusive to eliminate the motherboard as a variable by testing it in a different system. Had you done so and it worked, that could have drawn attention to a problem with the first motherboard.
@Alfman:
I am beginning to wonder… 😉
In all seriousness, you seem to be coming from a position that the RAM absolutely cannot be bad, or that it’s completely impossible that it was incorrectly binned, and you’ve gone into all kinds of convoluted theories as to why it absolutely positively must be a defective board no matter what I experienced.
Remember that the simplest answer is usually the correct one: Does it make more sense that the RAM was immaculate and my board was somehow broken only with that specific RAM, even though the board worked perfectly with anything else I threw in it, and worked with zero issues for years before I retired it in favor of a full upgrade? Or that the RAM was indeed just that one in 100,000 that shouldn’t have passed QC but slipped through anyway, and I ended up “winning” that lottery?
For some perspective, one of the companies I work for sells health, beauty, and medical equipment along with parts and components for that equipment, and I often get a returned part at my workbench to evaluate where it went wrong and figure out whether we need to request a credit or replacement from the vendor or if it was damaged by the customer. In my experience, no single part or component is a Holy Grail that absolutely cannot be defective. I’ve seen resistors that have the wrong value stamped on them, EEPROM chips that didn’t get flashed at the factory or were flashed incorrectly, capacitors and diodes that have been installed backwards; all things that can and do happen in mass produced circuit boards. DIMMs are not exempt from manufacturing mistakes any more than those other components.
Anyway, it’s all moot as it is a decade old anecdote, but my gods man, you really don’t want to let it go, so I’ll stop us here before we both lose our minds on this thought experiment. Good day my friend. 🙂
Morgan,
I do see your point that the simplest answer could be that the RAM is bad. I really never meant to suggest that “the RAM absolutely cannot be bad”, but rather that ruling out other possibilities may require more tests. Whether it’s 1 in 50 or 1 in 1000, some will end up with bad RAM. But motherboards can also be a culprit and it’s not unusual for manufacturers to release BIOS updates to improve RAM compatibility. Skim through some BIOS updates to see what I mean…
https://www.msi.com/Motherboard/MAG-X870E-TOMAHAWK-WIFI/support
How many users affected by these incompatibilities end up thinking their RAM was bad and not testing it on a different system? Do you still think this is such a convoluted theory?
It is moot, I need to have a better sense for when I’m not being helpful.
Worse, I feel bad for annoying Lurch.
Morgan,
I do buy Crucial every now and then, however they were not the best… just “value”
I think there might be only two consumer manufacturers remaining: Samsung and SK Hynix (SK = South Korea, as far as I know). So basically the entire world depends on the Koreans! Go Koreans Go!
If you think the AI hurrah is bad, you should listen to financial news during market hours. I think it’s nothing but a fantasy that people pour endless amounts of money into, sort of like a black hole. There is no structure, plan, action, or standardization for AI, so it cannot be defined, yet the stockholders and C-suite execs dump money into this fantasy. I get a good chuckle each day as Jensen Huang and financial analysts attempt to explain the tech and the direction of American companies, either in AI itself or in their company’s implementation plans for it. It’s even funnier when they dip into the tech side of things.
Unfortunately, there is something more sinister going on in the AI industry. Report by Gamers Nexus:
https://www.youtube.com/watch?v=9A-eeJP0J7c
Yep. It’s tiring. This site would be more enjoyable if Thom only covered things he enjoyed and nothing that angered him.
I find it tiring even when I agree with him. The tone is just off putting.
And yes, Satya Nadella is clearly Caucasian
Just to add, I doubt there is a single person on the planet that Thom has ever convinced. Persuasion involves more than just being angry and off putting.
Micron are simply ceasing their value-brand, direct-to-consumer subsidiary, which they have wanted to exit for ages because it was one of their lowest-margin divisions.
But there are still vendors using Micron DRAM for consumer products.
FWIW Samsung, SK Hynix, Micron, Kioxia, Winbond, Nanya, Alliance, Infineon, ST Micro, and WD are still in the DRAM and NAND business.
Xanady Asem,
It’s not just that Crucial is dropping out; it’s that Crucial dropping out reflects the headwinds the consumer industry is facing at large. Many people witnessing what is transpiring are concerned for the future. Maybe it’s overblown; I still hope the worst-case scenario doesn’t actually play out in a permanent way. But between tariffs, inflation, and systemic product shortages, things don’t look likely to get better in the near term. Even planned obsolescence is popping up courtesy of microsoft, leading to a simultaneous increase in demand and decrease in supply. Crucial dropping out didn’t cause this, but it does exacerbate it. 🙁