We need to consume.
The average American now holds onto their smartphone for 29 months, according to a recent survey by Reviews.org, and that cycle is getting longer. The average was around 22 months in 2016.
While squeezing as much life out of your device as possible may save money in the short run, especially amid widespread fears about the strength of the consumer and job market, it might cost the economy in the long run, especially when device hoarding occurs at the level of corporations.
↫ Kevin Williams at CNBC
Line must go up. Ļ̷̩̺̾i̶̼̳͍͂̒ͅn̵͕̉̾e̴̞͛̓̀̍ ̴͙̙̥͋͐m̸͚̉̆u̴̖̰̪̽̔ͅs̶̨̛̾ţ̷̢̂͛̆͝ ̵̱̐̓̾̔͜ğ̷͕̮̮͆o̷̟͈̐̏̄͝ ̷̢̨̞̉u̴̢̪̭̱̿͑͛̌p̴͈̜̫̖̌.

Funny how the whole “sustainability” and “climate emergency” thing doesn’t seem to be all that relevant when it comes to gadgets and surveillance tech.
And shareholders.
ESG is a sham. Always was. But it is now way too obvious to hide behind nice propaganda.
Just take a peek at that farce called COP30 that just happened and you will see that not even governments take it seriously.
A relative spent 20 days at the COP working as technical staff for one of the embassies. We got some goldies:
– French president’s plane lands, the president disembarks, and the plane takes off and keeps a holding pattern due to lack of parking slots.
– Airlifted V12 Mercedeses for the dignitaries.
– Picking a completely inadequate city, without infrastructure, and having to do a lot of work without proper environmental impact studies due to the urgency.
– Locals being priced out of the rental market due to landlords wanting to make a quick buck.
– A fire at one of the venues.
Circus.
Almost everyone here in Brazil knew, and said, that the location chosen was crazy and a waste of money for everyone.
There’s more that happened there, like Lula’s government putting up wooden walls around trash and open-air sewers to hide them, in the best communist-country style.
jbauer,
I would say it is more than that. Because they have never been serious about climate or our environment.
I’m really invested in our future. We need higher efficiency, and better energy sources.
However there is one answer that has been here for 60 years: nuclear power, clean, safe, extremely reliable.
But we made it a boogeyman, and made it extremely expensive (no it is not the “market”)
Today if we want to fix our emissions, we have to do two things: Solar + Nuclear
Solar is being sabotaged at the local level, as antique power companies want to hold onto their profits (and states like California are pretty much on their side)
Nuclear has been sabotaged by those who don’t like the advancement of humanity (and their useful illiterates)
(Strange side note: when we “cleaned the sulphur” from large marine shipping fuel, the atmosphere got worse. It turns out the sulphur was reflecting sunlight and had an anti-greenhouse effect. This is clearly visible in the data, and a major reason for the recent shift to hotter summers. Question: why don’t we enforce even higher sulphur levels in marine transportation, like… yesterday? I mean, it should not be too hard to accept that “pollution was actually good” if the data is clear, should it?)
When you look at it from a systems view, there is no good choice.
CO2 dissolved in the ocean, increasing its acidity, is already interfering with pteropods’ ability to form shells [1], and likely affecting many other organisms.
SO2 emissions cause acid rain, which would increase the acidity further. [2]
If acidity increases enough to cause grave problems at the absolute bottom of the ocean food chain (plankton [3]) it’s essentially game over.
Another issue to consider is “termination shock”: imagine an event that stops or reduces shipping (covid?), and while dealing with that, civilization also has to deal with the sudden change in climate from temperatures catching up to where they would’ve been.
At this point we’re in a damned if you do, damned if you don’t kind of situation…
> We need higher efficiency, and better energy sources.
We mainly need to use less. Increases in efficiency usually lead to an increase in usage, leading to increased consumption.
[1] https://www.noaa.gov/noaa-led-researchers-discover-ocean-acidity-dissolving-shells-tiny-snails-us-west-coast
[2] https://antipollutionplan.com/what-effects-does-acid-rain-have-on-our-oceans.html
[3] https://news.mit.edu/2015/ocean-acidification-phytoplankton-0720
I am amused by the claim that longer hardware upgrade cycles result in lower productivity, as opposed to bad software.
I do agree! Productivity is not hindered by skipping hardware upgrades, but by software over-requirements.
How come I need software that consumes 3 GB of memory to show me less than 700 KB of messages in total?
Why does my phone lag when opening my text messages, when the total useful characters weigh less than 1 KB?
Why is my computer so slow to start some software, when a PC from 2000 emulated in JavaScript feels snappier on every interaction???
How come, in 2025, with a 6-core/12-thread Ryzen and 24 GB of RAM, I can type on my keyboard and not see the characters appear on the screen at the same time in MS Word??? (while I can use a JavaScript editor that is way more effective and even beats Word at correcting spelling)
Nobody needs or should be expected to buy a new $1000+ phone every 2 years or sooner. You can’t squeeze every penny possible from people, then whine about how they don’t consume enough. If `they` want people to buy more, `they’ll` have to put more money in people’s pockets to do so.
The industry should be grateful we remain collectively daft enough to only leave it 29 months between ‘upgrades’.
The mean time between significant productivity or lifestyle enhancing feature additions to phones (notably better cameras, more storage, err… something else..?) must be at least three years by now.
And the rate at which your hardware starts to feel slow due to higher content demands has more or less stopped, or would have except that the OS manufacturers keep conveniently slowing old devices down regardless of the content consumed.
As it stands my personal and work phones are from 2020, my work laptop is from 2011, my personal laptop is from 2010. I might be a slight outlier and certainly don’t have very intensive requirements, but other than having to use XFCE Mint on the laptops I don’t have to compromise at all on how much or what data types I work with, nor which programs/apps I use.
15 years ago there was no way you could have run hardware that old without seriously compromising on modernity. It would have been seen as eccentric (I know, I was). Now, though, no-one even notices the kit is old, that’s how inspiring the newer replacements have been.
So yeah, the industry should thank their lucky stars that they’re keeping the public/hamsters on the treadmill, even if they aren’t running it quite as fast any more.
Perhaps they should try coming up with some actually new hardware categories that can do actually new things if they want us to buy more stuff? Substituting iteration and AI fluffery for actual progress will not continue to drive sales forever.
Make like a Victorian and try invention again, just for a laugh.
people are upgrading their phones every 29 months?! That’s crazy! No wonder we have an e-waste problem.
This trend became apparent on computers long ago and is why microsoft took (and continues to take) measures to transition away from one time purchases towards recurring subscription streams and forcing product obsolescence onto customers who weren’t otherwise interested in buying new hardware.
At the time, recall that the stagnation of computer sales was widely being blamed on mobile phones replacing computers. However my phone did not replace my computer; that would absolutely kill my productivity. The real reason computer sales stagnated is that the desktop experience and innovation stopped improving many years ago. Yes, computers were still getting marginal spec bumps, but it wasn’t a noticeably better experience for most desktop use cases. And the mobile market was still growing because it hadn’t yet reached its saturation point. However the article seems to confirm that mobiles are now in the same boat and mobile customers aren’t seeing much value in upgrading. There was a huge boom in phone sales when (US) carriers killed off 3G and bricked almost every old phone, forcing new sales. But absent obsolescence scenarios, most customers would go without upgrading hardware.
This isn’t a bad thing! Using/reusing hardware longer is important and should be encouraged. What is unfortunate is that so many mobile vendors have done a terrible job with long term support. I fault many of them for engineering both hardware and software to be unserviceable. PCs have proven this doesn’t technically have to be the case, but given the stubbornness of the parties involved I don’t see a solution for mobile users for the foreseeable future. Manufacturers have demonstrated over decades that they have no intention of solving long term support. And meanwhile linux kernel devs, for their part, have demonstrated over decades that they have no intention of ever solving the non-stable kernel ABI, which also impedes upgrading EOL phones. So although consumers holding onto devices longer is good, the culmination of these factors means that aftermarket support is not widely available, and even though the hardware still works fine, old devices become vulnerable to already-patched exploits.
Alfman,
There was an actual need to upgrade hardware, due to increasing capabilities, but that hit a wall.
In terms of real, absolute, raw power, we are way beyond what one would expect. Forget the Raspberry Pi, which has become a very capable machine in its 5th iteration; even cheap, sub-$10 “microcontrollers” have surpassed the $1,000 PCs we paid for back in the day.
Look at the Windows 95 minimum requirements (most of us had PCs with similar specs; many of us did not even own one, but used one in a computer lab, later an Internet cafe):
386 / 486 CPU, 4MB RAM, 50MB hard drive, VGA (640×480) output
What does ESP32-S3 have today? (not the base model, though)
Dual-core Xtensa LX7 processor, 8MB of PSRAM, 16/32MB of flash
(Search “ESP32-S3-DevKitC-1-N16R8”, which comes to about $8 a piece, maybe even cheaper bulk)
I have many of these (or rather, similar models) lying around. I still have 3-4 Raspberry Pis I have not even unboxed. Computing has become so cheap, so abundant, that I tend to buy in bulk “in case I need to scale up”.
But why do we need more hardware?
Windows 95 used bitmap fonts on VGA display with no hardware acceleration other than mipmap and cursor rendering.
Today we have multiple 4K (or even 5K, 6K!) monitors with different DPI, “font ligatures” with “Turing-complete” font languages (a single font larger than the entire RAM of a Windows 95 machine), high speed WiFi, many services running in the background to support all of this, and so on.
But that, too, was “solved” about 5 years ago. Nothing is absolutely necessary as an upgrade. Similar for tablets, phones, and other devices.
(Except maybe for running latest games, local large language models… and VR, which is still way behind natural limits).
sukru,
You must be the reason they were always out of stock for so many years 🙁 haha.
Upgrading created a big difference in the 90s and 00s. Increasing the resolution of displays and cameras was night and day. The quality was bad enough that the market had built-in demand for improvements even before new hardware came out! But over time there were diminishing returns and upgrades became harder to notice. It’s kind of funny that we now have to freeze-frame and magnify output to see the new specs in action, and even then the difference can be too subtle to notice in normal use.
I haven’t tried VR on account of not really having a use for it personally. Games are definitely demanding applications, which might justify upgrading. But even there a lot of the effects are very subtle because of diminishing returns too. The latest innovations like ray tracing are cool and all, but games can look beautiful without it and I don’t miss not having it. If you use something like blender, it’s extremely beneficial and almost mandatory to have high end hardware, but average consumers don’t have such high end needs.
Alfman,
I would say Ray Tracing is a must if you want better than Xbox 360 era graphics and modern large open environments.
That is why recent titles like Doom and Assassin’s Creed made it a requirement.
Very briefly:
In the 360 era, consoles could not do indirect lighting well, but they had large, freely explorable, and destructible environments:
https://web.archive.org/web/20090607011853/http://www.allegory-of-the-game.com/archives/99 — extremely good article, almost lost to time
In the Xbox One era, they had prebaked lighting, but they limited the environment size and interactions:
https://www.resetera.com/threads/assassins-creed-shadows-gdc-talk-on-ray-tracing-console-performance-developer-pains-and-more.1186575/
But today, this gen…
Basically AC: Shadows would have required 2TB of pre-baked data for lighting only. And it would take months to calculate that.
Instead they went with real-time lighting (RT).
The sad part? Even though we actually need it now, “gaming” companies like amd or nvidia will allocate the silicon to AI cores instead of RT ones.
sukru,
Oh wow, the funny part is that I buy GPUs for cuda and tensor cores rather than gaming, haha.
We should do a poll. Of course it’s all just a matter of opinion, but I don’t think RT is worth it because games already look good without RT, and I doubt it has much impact on whether games are enjoyable for the majority of players.
Alfman,
Then there is probably some real market segmentation here, unlike the crypto one.
I think you missed my point above. Yes the games looked good, but they had to sacrifice world size and interactivity.
sukru,
That’s not really true though, those are engine specific limitations. I’m not saying hardware RT isn’t a useful tool, but you are underselling what games can achieve without it. I’ve seen realtime raytracing techniques used before RT hardware existed.
Apparently linux devs have even created a software solution to enable RT only games to run on hardware that does not even have it!
“Linux Saves Old AMD Cards! Guide to Play Indiana Jones on Unsupported GPUs”
https://www.youtube.com/watch?v=57MxplKwOuQ
https://www.phoronix.com/news/RADV-LBVH-Lands
“Emulate Hardware Ray Tracing Support on Old GPUs (GCN Old)”
https://www.youtube.com/watch?v=VEo7066YoVo
Of course hardware acceleration will provide more rays to use with high RT settings, but it shows how much can realistically be achieved on older hardware when the developers working on it don’t have a vested interest in selling new hardware. The same is true with other new innovations like DLSS, which are officially touted as a feature of new hardware but in fact the algorithms run just fine on old hardware if someone implements them.
Alfman,
I think you are still missing the point. The issue is not whether RT is accelerated or not, but rather that RT as a feature is a requirement of modern engines.
Of course everything can be emulated in software (either on CPU or GPU). We had Mesa render OpenGL on Intel CPUs before many of us could afford nvidia cards. But the results were similarly low quality (the referenced Indiana Jones game runs much worse on an RX Vega 64 using RADV than on a base Series S console ($300 hardware), let alone modern GPUs like a 5070 or 9070 XT).
Anyway…
RT will be required in more and more games, while older hardware incapable of doing that will be left behind. (Or they will emulate those APIs with 720p native resolution, low quality presets, 30 fps, just like a Raspberry Pi running games from 10 years ago).
That is what game design dictates, no way around it.
sukru,
So if I understand this correctly, you are not talking about RT hardware specifically, but just the idea of ray tracing in a game as a general concept? I don’t think people have a problem with the idea of raytracing. It’s the implementation they have problems with. We don’t need to look long to find hordes of gamers who just don’t find RT worth it because in their opinion the benefits don’t make up for the higher cost/energy/heat and worse performance. To this end, I think low overhead lite/software ray tracing (ie less accurate shadows, screen space reflections only, lightweight hardware dependencies, etc) is good enough without needing tons of RT cores and I think it’s a sensible solution to the problem.
However I concede that game publishers might not care about customer demographics with lower specs. They may even get incentives from nvidia to make games RT-hardware exclusive not because they need to be, but because nvidia wants to sell more hardware.
Because of the previous ambiguities over what was meant by “RT”, it’s unclear when you are talking about hardware RT versus software RT capable engines. For their part, many users are happy to forego high quality RT and I suspect many would agree that gameplay is more important than accurate shadows and reflections.
It’s probably an open question whether the game publishers will ultimately succeed in forcing gamers to buy hardware they didn’t want, or if they’ll lose sales over game requirements and poor performance. Frankly, RT sometimes kills performance even with high end cards, particularly at high resolutions. It will be interesting to see how this plays out.
I got distracted watching ray tracing game videos, haha.
This video reflects my opinions as a user…
“Is Ray Tracing Good?”
https://www.youtube.com/watch?v=DBNH0NyN8K8
As the video concludes, new titles are likely to include more hardware RT support in the future and some may decide to make RT mandatory as you’ve said. I still find the impact is often very subtle, especially in the context of playing a game without side by side comparisons. If this pans out to become the new norm, the net result could be that we need to spend more money on heavy compute hardware and electricity to play future games for visual improvements that are hardly noticed during gameplay. It feels like a wasteful future.
Alfman,
Yes, that is precisely my point.
And RT will be required more and more.
(You should really look at the article I shared above to see — side by side — what was missing in the 360 era, and what tricks devs used in Xbox One era — which no longer work).
As a high level summary, modern rendering is all about light. But when the hardware could not do lighting accurately in real time, developers used techniques like:
Baked GI, Light maps, Light probes, SSAO, SSR, Shadow maps, Cube maps, … (I won’t go into details, look them up if you are interested)
They were not only inaccurate, but they came at massive costs. They were hard to calculate, and made the world “static”.
Basically if you were an artist designing a game, and if you moved a building slightly to the left, you’d have to wait a week before the next pre-calculated asset data was ready.
And if the game had dynamic weather, and time of day, that would explode the processing. That one week becomes literal months.
So game devs either had to spend months’ worth of compute resources, add to delays, and then distribute 2TB of baked data… for a leaky abstraction with very obvious quality issues…
Or just do lighting in real time (RT).
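To give a sense of scale, a rough back-of-the-envelope in Python for a 16km x 16km open world like AC: Shadows. Every number other than the world size (texel density, texel size, time-of-day and season counts) is purely my own assumption for illustration, not a Ubisoft figure:

# Back-of-the-envelope only; densities and counts are assumptions, not Ubisoft figures.
world_side_m    = 16_000   # 16 km x 16 km open world
texels_per_m    = 4        # assumed lightmap density
bytes_per_texel = 8        # assumed compressed HDR lightmap texel size
times_of_day    = 8        # assumed lighting snapshots per day/night cycle
seasons         = 4        # assumed seasonal variants

one_snapshot = (world_side_m * texels_per_m) ** 2 * bytes_per_texel
total        = one_snapshot * times_of_day * seasons
print(f"one snapshot: ~{one_snapshot / 1e9:.0f} GB")  # ~33 GB
print(f"all variants: ~{total / 1e12:.1f} TB")        # ~1.0 TB

Even with these fairly conservative guesses you land in the terabyte range, the same ballpark as the 2TB figure, and that is before probes, cube maps, or interiors.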
sukru,
You might be right but you might be wrong. If the performance doesn’t align with expectations, then these games will flop. Even modern entry/mid level GPUs bought today take a serious hit under RT, not merely due to RT core bottlenecks, but also from not having enough RAM, which nvidia have chronically under-provisioned. Time will tell if hardware will improve and become more accessible, or whether consumers observing RT’s poor performance for themselves will avoid games that don’t allow them to disable RT.
IMHO there isn’t an issue with games implementing simple ray tracing techniques for general illumination; this can be done on existing hardware as already proven by mesa. The problem is games that completely throw away the rasterization optimizations we’ve perfected over decades and rely heavily on RT cores to render most of the scene. Adding ever more RT cores means we can make the inefficient possible, but it’s still very inefficient, and on top of the high barriers to entry for a good RT experience, it requires a huge amount of power. When I run RT my house lights literally start flickering and I don’t even know if they’re on the same circuit 🙁
I don’t know if this is rhetorical, but why would they have to wait a week? Accelerated RT hardware isn’t just for users; in principle the very same hardware that we use to render RT scenes on our machines can be used by developers too. Maybe their dev tools don’t support RT acceleration, but if they don’t, it seems like an obvious area for improvement that would save them a ton of time.
Again, most consumers don’t actually care as long as approximations are good enough. Regardless of our opinions about RT, whether vendors will be able to push the gaming industry towards mandatory RT render pipelines will depend on whether a critical mass of consumers buy into it. It’s irrelevant what vendors claim if consumers don’t buy it.
Alfman,
With RT, we are going through a transition not unlike the one from software-rendered older games like DOOM to fully hardware-rendered ones. Of course the early adopters struggled, and DOOM looked much better than many fully 3D games like Descent.
(Though difference in capabilities were acutely visible).
Yes, they are throwing away those optimizations, usually called the “bag of tricks”. But they were nowhere near perfect. They could at best be called “adhering to the 80/20 rule”.
A week was an upper limit. For older games, it was hours, then days. When it really started requiring a week, they gave up.
From a recent GDC talk:
Source:
https://gdcvault.com/play/1035526/Rendering-Assassin-s-Creed-Shadows , Page 54 in the PDF (it is very rare for them to share this for free. Normally it requires a very expensive GDC membership).
They had no choice but to use real time for the quality mode, as “baked” lighting that was “perfectly good” no longer scaled to that 16km x 16km area.
It is a massive difference for level designers, as with the RT engine they now have WYSIWYG rendering of their designs. They no longer have to “guess” how it will look in game, they know exactly how it would look in real time.
(That final “bake time” includes multiple seasons and so on. Usually during development we can assume they would have a single snapshot)
sukru,
Don’t get me wrong, you are entitled to complain about the flaws of traditional techniques, but I still don’t think many gamers care. RT enhancements can be subtle to the point where they don’t matter at all, especially when RT on looks nearly indistinguishable to the untrained eye in titles where rasterization techniques are already doing a fantastic job. Given how many reviewers are stating this opinion too, I wouldn’t dismiss the possibility that they could be in the majority.
Obviously I get that being on the cutting edge of engine building takes a lot of work, and that GDC paper offers an interesting insight into the process. Yes it would be easy to throw it all away and rely on RT alone, but even with high end cards the hardware is not there yet! Doing everything on RT cores without any rasterization techniques has gotten a lot faster, but as users of 3d rendering programs can attest, they’re still not “gaming fast”. Scenes with complex geometry like hair look fantastic when done purely with RT, but it *crawls*! Until we get GPUs with a whole lot more RT cores to compute every single pixel in real time, we still have to rely on traditional rasterization techniques.
As for baking shadow data taking too long, that is computationally solved by the very same RT hardware you want for rendering shadows on end user systems. I am able to bake the shadow maps of very complex scenes in blender and I promise you it doesn’t take as long as you are claiming. It’s possible they did not have state of the art RT hardware acceleration during game development, so I can see the baking taking longer, but that’s a solvable problem, not an insurmountable one.
The GDC paper reveals that even with RT they were still baking data for indoor illumination and weather. And the outdoor grass and shrubbery had too much detail for the BVH, things I can confirm are slow even running RT hardware.
Interestingly the GDC paper says assassin’s creed uses the very same software ray tracing technique that I’ve been suggesting in our discussion and they do so for the very same reason I’ve been suggesting it: supporting some low end hardware. I thought this was a good compromise and it turns out assassin’s creed devs think so too.
Honestly that’s an extremely misleading claim without context, but at least now I understand where you are getting that from. This is coming from “Baked Volumetric Diffuse Global Illumination”. For starters, volumetric diffuse is optional – accurately sampling dust particle density isn’t a necessity, and rasterization engines can fake it quite well. In fact I was surprised when I first saw EEVEE (blender’s non-RT renderer) emulating volumetric diffusion. RT is obviously going to be more accurate, but we’ve gotten good at faking it and many gamers are happy with that.
That’s a tooling problem. I’ll accept that maybe their dev tools are inadequate, but in principle there’s no reason the tools a studio relies on can’t do both. Like with blender, I can create complex scenes and effects and see the output of different engines in real time with the click of a button.
I realize these rasterization tricks will never be as photo-realistic as RT, but users may not care. And as for the tooling issues, I acknowledge this may be a problem for studios, but it can be fixed.
Alfman,
For all practical purposes, the 80/20 “hacks” we used last gen are obsolete for modern engines. We will only see more and more being required.
(Though some games will have a “fallback” path. But they will be restricted in terms of interactivity)
You make a fair point that for a static snapshot, rasterization “tricks” often look indistinguishable from Ray Tracing to the average player. If the camera and the geometry never moved, I would agree that we should stick to the “80/20 rule.”
If we have moving objects, destructible environments, and time of day… things go out of the window.
That is true. And that is part of my frustration.
The future direction is clear, and manufacturers need to focus on fixing the modern real time lighting pipeline, instead of adding “yet another ML mechanism”
The shift to RT isn’t just about making pixels look better; it’s about making the environment behave correctly.
You don’t bake “one” snapshot. You bake many, many of them.
Again, the world is not static. You, the player, are not static.
The “baking” relies on assumptions that break the moment the player interacts with the world in a meaningful way. That is why it has to be repeated for different angles, and generally for each and every room.
Yes, it is a transition. Think of this as the PS1 -> PS2 era where many parts were static, but we started real 3D rendering for main objects.
The context was literally there. For getting the “high” quality they had two choices: bake for months, and distribute 2TB of game data (15? blu-ray discs) or just do it in real time.
All of these techniques fail; for example:
They fail on destructible environments. Why do you think we did not have Battlefield-like immersion in the 360 era for ~10 years?
The reflections can only show what is visible on the screen. Mirrors in real life don’t work that way (things disappear from mirrors or water puddles as soon as they move off screen; tilt your camera down and the entire mirror is broken!). There is a toy sketch of this below.
“Oily shadows”
They are a secondary texture precalculated ahead of time. They never look exactly right, and they don’t work for interactivity.
Incorrect perspectives. They just need much more post-processing in real time to look better, but they still fail to do the real thing.
Light “peeks” in from outside at weird angles as you move inside a room. Sometimes you just happen to be near a probe on the other side of the wall, and immersion breaks.
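To make the mirror point concrete, a deliberately silly toy in Python (a one-column “screen”, nothing like a real SSR shader; the names and numbers are made up):

# Screen-space reflections can only sample pixels the camera has already rendered.
# Anything outside the frame has no data to reflect from.
SCREEN_ROWS = 10
color_buffer = {row: "sky" for row in range(SCREEN_ROWS)}
color_buffer[3] = "red ball"   # an object currently visible on screen

def ssr_sample(reflected_row):
    """The 'mirror' asks for the color at the row its reflected ray lands on."""
    if 0 <= reflected_row < SCREEN_ROWS:
        return color_buffer[reflected_row]   # on screen: the reflection works
    return None                              # off screen: the reflection vanishes

print(ssr_sample(3))    # 'red ball' -> reflects fine while the ball is in frame
print(ssr_sample(-2))   # None       -> the ball leaves the frame, the mirror breaks

Ray traced reflections do not have this failure mode, because they query the scene geometry instead of the rendered frame.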
Bottom line, expect to invest in RT hardware as we have all bought Riva and 3dfx cards back in the past.
Anyway…
This has become a bit long… And I think we can still hold onto older cards for a while.
But don’t expect things not to change.
sukru,
I disagree. Rasterization techniques are far from being obsolete and that GDC link reveals that many of the same precomputing “hacks” are still used by the RT pipeline to help accelerate RT itself. These hacks aren’t going away until we get RT hardware that can compute every pixel in real time, but we’re far from that point. Even with RT on this creates visible anomalies in scenes with high motion, which can sometimes be worse than rasterization techniques. Maybe this will change in the future, but until then we’re going to have to disagree that precomputing hacks are obsolete. We’re far from a point where consumer grade hardware can handle a pure RT pipeline in realtime.
But I don’t think it’s a big problem because if anything I see raytracing as enhancing raster graphics rather than replacing it. And this is where I see RT offering the highest gains right now.
And yet they’re still baking data for Assassin’s creed’s RT engine because a 100% dynamic scene can’t be handled by today’s RT hardware in real time.
Of course, but it’s really stunning what rasterization can do even with dynamic motion. We don’t have to switch to a fully RT pipeline to get dynamic effects.
The context was about volumetric diffuse lighting, the other techniques don’t pose a problem. Also raster engines have ways to fake effects without the data. Is it accurate? No, but it can still look good anyway. When I was in school I created awesome smoke effects using raster techniques. A fake effect for smoke can even look better than RT done in real time with insufficient rays. I often notice this in blender: RT volumetric effects are very beautiful if you throw enough rays at them but it requires tons of rays to be effective. If realtime constraints force us to limit the rays, the result is blurry and washed out. So I’d say rasterization techniques still have a place given that RT power is still limited even on modern cards.
Yes, these are all faking it, but they look pretty darn good anyway. Even when it comes to dynamic scenes I am not convinced it’s the end of the road for these techniques because rasterization is not mutually exclusive with lightweight raytracing, which I think is a great compromise.
I’m not so sure consumers are as enthusiastic about RT as they were for 3dfx and other innovations that yielded such drastic and jaw dropping improvements in the 90s.
Even assuming future hardware is one day capable of fully replacing the need for all precomputed values and eliminating all “obsolete” rasterization techniques without compromising performance – something we don’t have yet – it also needs to be affordable for the masses. I also worry about the power requirements for such a beast. What about games being able to run on phones/tablets/laptops? Unless we can make RT cores significantly more efficient than they are today, an RT-only future either precludes gaming on portable devices, or requires games to be streamed remotely.
I do expect things to change, but it will be a long time before full scene RT can fully replace optimized rasterization techniques and precomputation. IMHO hybrid techniques are likely to lead the way forward.
I didn’t have enough time to edit, but I notice some paragraphs were highly repetitive, sorry. In any case, I agree we’ve probably said all there is to say about it, haha.
I am really fond of graphics & physics topics. I used to program graphics effects when I was learning CS. It was challenging in a fun way. I miss that. These days I face challenges that just suck and have zero motivational value. It’s such a huge difference between the reasons I got into CS and what I am actually getting out of it. I regret not landing more fulfilling work, though not for the lack of trying.
I remember buying Super Street Fighter 2 for MS-DOS. I was excited, until I tried to play it and it ran in slow motion. Then we upgraded the RAM from 8 MB to 16 MB and it was playable at normal speed.
jgfenix,
You can still play it in slow motion:
https://www.retrogames.cz/play_304-DOS.php
(The JavaScript emulator is not quite up to speed, even on a modern machine.)
That aside, yes, RAM really made a difference back then. I think just DOOM alone forced many people to upgrade back in the day.
Indeed pretty crazy, only 29 months. Thanks to Lineage, my 6-year-old phone is still up to date, happily running Android 15. As with PCs, the problem is in the business model. As software updates are given away (and who would be willing to pay for them?), the only way to earn money is by convincing consumers that what used to be a fine phone with good enough hardware and more than enough performance has suddenly become obsolete.
With PCs, consumers can benefit from those frequent hardware upgrades by corporate entities, as it provides nice 2nd-hand computers for a relatively low price. I have not dared that yet with 2nd-hand smartphones.
Gert-Jan,
All of my phones have been second hand for a long time. I even bought one for my dad recently. You can buy from a recycler with a high reputation that spells out the condition clearly, oftentimes with a new battery. I understand very much the worry that comes with buying used. Still, both your credit card and ebay do provide some protections if you get a product that doesn’t work or match the listing. You lose the warranty for sure, but I’m ok with that; if the product works for a month then I have some confidence that it’s not a dud. If it dies outside the return window, that would suck, but considering I paid less than half retail I could buy a 2nd and still come out ahead – the savings can be that much.
I *do* take precautions to record the packaging and unboxing in case something is a scam. I’ve heard of that happening, but if you have evidence on your side then at least you shouldn’t be out the money.
I haven’t tried, but craigslist might be an option to find someone local. Does anyone here use that?
Alfman,
Having a foot in the second hand market means you get to try out more things. I remember replacing my phone 5 times a year thanks to eBay. I was a student with a lot of free time, and this allowed me to sample many brands.
Today, I would buy used from reputable sellers only. Amazon has their own program. Best Buy has one too. They have been mostly good, or at least I get a full refund, if I don’t like something.
For phones, the concern is IMEI blocking, though. There are some shady people out there. So, be careful.
Manufacturers: We reduced the features and jacked up the prices. We don’t know why nobody’s buying. Wait a sec…. tariffs!!! It’s always the tariffs.
chriscox,
Blaming tariffs is a very convenient excuse. And one camp wants to blame the government, and the other wants to blame companies, and they both agree on using this in their framing.
It of course benefits the companies, and hurts the consumers.
sukru,
I blame corporations for a lot, but the blame for the tariff situation lies in the hands of one man, a president who isn’t supposed to have a right to impose tariffs absent an emergency. To this end congress deserves blame for ceding their constitutional authority to regulate trade.
https://www.independent.co.uk/news/world/americas/us-politics/trump-tariffs-supreme-court-truth-social-b2871345.html
Alfman,
Believe me there is a lot of blame to go around. But none of them helps us, the consumers.
(And do you think that if another party comes into power they will actually change the status quo by much? Please look at what happened the last 4 times they changed places.)
This is really a very good excuse for companies.
Did tariffs cause Microsoft to require 30% profitability from departments?
Did tariffs suddenly increase RAM prices in the last two months by more than 100%? (I really checked: the RAM I bought in September is about 2.5x the price today.)
How were companies that chose not to increase prices able to survive?
Please do not fall for the bait.
(Edit: link to an article about MS requiring 30% profits: https://currently.att.yahoo.com/att/ridiculous-microsoft-decision-could-explain-103013414.html)
(Edit 2: On tariffs “increase” by those who blamed them: https://www.newsweek.com/biden-slammed-trump-tariffs-kept-experts-worry-1983365)
sukru,
This presumption that I’m a fan of democrats is misplaced – my interests are not represented by either party. In any case, there’s no point in deflecting blame for the tariffs to anyone other than Trump. If you want to blame Biden for not reversing Trump’s tariffs, go right ahead. We can debate whether they did anything to promote local business. But you can’t objectively blame anyone other than Trump for the tariffs and inflationary mess we’re seeing today, and I’m not sure why you would want to dance around it.
Alfman,
Of course the tariffs are initiated by Trump, and he has responsibility. My point is entirely different.
Companies raise prices for unrelated reasons, but then immediately call out “because of economic conditions, … because of tariffs”, which is usually b.s. (It sometimes is actually true, but it is much rarer than they would want us to believe).
It is just an excellent excuse for them. Those two things (tariffs, and corporate greed) should be handled separately.
And, btw, sorry for dragging this too long.
sukru,
Pointing the finger at corporate accountability and greed was kind of my thing. Given previous discussions I assumed you were a free market capitalist.
Anyway, just focusing on the economics, we should talk about Adam Smith’s invisible hand and how it determines fair market prices. A company’s job is to maximize profits, and if it can do so by raising prices, then it will. Adam Smith’s invisible hand rewards the companies that sell their products at fair market value. Setting prices too high results in lost customers and therefore lower profits; setting prices too low loses income and therefore lower profits.
The law of supply and demand still applies even when we throw new tariffs into the mix. Trump’s tariffs were so exorbitantly high and applied so chaotically that the new fair market prices are volatile, and it takes time for the effects to propagate throughout the market. We might try to estimate the effects of 10%-150% tariffs; prices are going up and there’s no avoiding that, but the real question is where they land, given that businesses can have widely different exposure to them. In the end the market will converge around the new tariffs.
Assuming you are right that microsoft are overcharging, then they will lose customers and their profits will suffer for it. If on the other hand microsoft’s math checks out, their high prices will maximize profits under this new economy. If you want to blame microsoft for maximizing profits, well then maybe I was wrong to think of you as a capitalist, because maximizing profits is central to this world view.
Alfman,
I’m pretty much still a free market person, advocating voluntary exchange of goods and ideas. This, people choosing what is best for themselves, has brought prosperity to all, as was evident in 19th and 20th century advancements and the near elimination of abject poverty.
But I also believe governments have a role. I would not be naive like some anarcho-capitalists. Even Adam Smith recognized that, and he advocated a limited but important welfare role in modern society (something like 20% of GDP).
Unfortunately that is not what is happening today. Companies are colluding, they are manipulating the market, and practically lying to the consumers.
That is not something acceptable.
In other words, I am more for Henry Ford-style industrial capitalism, and against the Dodge brothers’ financial kind:
https://en.wikipedia.org/wiki/Dodge_v._Ford_Motor_Co.
(This would probably make a much better movie than Ford v. Ferrari)
For those who don’t know, and for posterity (in case the Wikipedia article is inaccessible):
Ford argued that his company made too much profit. He wanted to reduce the price of the Model T and increase the pay of his workers. He saw customers and employees as “stakeholders”.
He was sued by the Dodge brothers, who had invested in the company back then. They argued this was illegal, as shareholder interests were supreme, and the company had to maximize profits at everyone else’s expense.
You can guess who won, and the rest is history.
Love the ZALGO effect Ļ̷̩̺̾i̶̼̳͍͂̒ͅn̵͕̉̾e̴̞͛̓̀̍ ̴͙̙̥͋͐m̸͚̉̆u̴̖̰̪̽̔ͅs̶̨̛̾ţ̷̢̂͛̆͝ ̵̱̐̓̾̔͜ğ̷͕̮̮͆o̷̟͈̐̏̄͝ ̷̢̨̞̉u̴̢̪̭̱̿͑͛̌p̴͈̜̫̖̌ I learned something new here.
https://zalgo.org/
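For anyone curious how it works: it is just ordinary text with random Unicode combining marks stacked onto each character, and the renderer dutifully draws them all. A quick sketch in Python (the mark range and intensity are my guesses, not whatever zalgo.org actually uses):

import random

# Unicode combining diacritical marks (U+0300..U+036F) attach to the preceding character.
COMBINING = [chr(c) for c in range(0x0300, 0x0370)]

def zalgo(text, intensity=4):
    out = []
    for ch in text:
        out.append(ch)
        if not ch.isspace():                       # keep spaces readable
            out.extend(random.choices(COMBINING, k=intensity))
    return "".join(out)

print(zalgo("Line must go up."))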