“nVidia Corp. will announce its next-generation desktop graphics chip, which may be known as the GeForce4, in early February. […] Several reports on the capabilities of the NV25 have already been published, most believing the chip will feature six pixel processing pipelines versus the four used by the GeForce3. The reports also suggest that the NV25 will feature significantly higher clock rates, faster memory interfaces, and improved antialiasing capabilities. The Nvidia spokesman declined to comment on any of the features of the new chip.” Read the whole story at ExtremeTech.
Geez, most people who want a GeForce3 probably haven't even come close to getting one yet. And still there's no use for this card. The human mind can only handle so much. After a point you can't tell the difference, and I think we are past that. I'm waiting for the video card with better resolution and everything else than the human eye. And people will still buy it.
Great, outdated again.
I say they work on making these cards run cooler and cost less instead of continuing to develop technology no game has used thus far. Oh, and improve the 2D quality while you’re at it.
And still there's no use for this card. The human mind can only handle so much. After a point you can't tell the difference.
———————–
Not exactly. There are several games out that take advantage of the GeForce3, whether for its unique capabilities or its raw grunt. Besides, I step outside and see a world of vivid detail and effortless expression, then I go back to the PC and notice that there's still a lot of work to do before photorealism happens in any game, let alone utterly convincing gameplay. There are always compromises, and there's definitely a long, long way to go.
The release of the GeForce4 will drive GeForce3 prices down a bit, and don't you know Nvidia has a six-month product cycle? Quit your whining, Brad and Androo.
Isn't that just as much a memory problem as a speed/rendering problem? If you want photorealism you will need more detail, and more detail will need more memory. I say we'll have to wait for the next generation of memory for something like that.
Note that games got more vivid and detailed when CD-ROMs became a standard (Riven being a good example). So DVD games might be better, and then…
I dunno. I like to buy cutting-edge hardware (i.e. my new 2 GHz P4 with RDRAM; no flames, I like it), but I just now bought a GeForce3, and I had to opt for the Ti 200 JUST so I could keep my purchase in the $200 range. I just can't afford a $300+ video card no matter how you dice it. I think many consumers see the CPU/motherboard as the big-ticket items and expect any add-on card to be priced lower than those (excluding bargain-basement deals like K6-2s, etc.). Of course this isn't the case with the more experienced users, hardcore gamers, etc., but we are a niche market. A company can't really survive on JUST us.
I'd have to side with Androo and ask why not focus on making them less expensive. Shouldn't be hard to do: I've seen the actual chip pricing in 10,000-unit quantities, and it doesn't come remotely close to the selling price of the cards (yes, I know there's a lot more to it than that). Even after the R&D they make back from the sale of the chips, there's a lot of profit on top. Guess that's why they're #1?
I don't understand the comments people make when they say that "the technology is far enough" or "the human can't handle more," etc. If I said that about cars, then F1 racing would be run with Ladas (no offense here) and everybody would have the same car?! Aren't you glad that the nice ride you have now is the product of high-end tech, now available to you? And YOU are paying quite a bit less than when it was first introduced. The new GeForce4 is exactly that: it's for the people and applications that need that kind of power RIGHT NOW. Of course, in 2 or 3 years, everybody and his dog will be able to purchase one for $100 or so, but for now, it's nice to see what's coming.
I'm never going to understand the mindset that says something like "the technology is far enough," especially when it comes to computers and graphics displays. It seems that every couple of years some technology pundit will lament, moan, and whine that today's XX MHz/GHz/THz machine is far more powerful than the needs of the average user, and that there's no point in making anything faster or more powerful. Luckily, the tech companies just ignore the naysayers and drive onward.
How many of you remember computers only 10 years ago? What was your computing experience like? Would you be willing to go back to 50 MHz 486’s? How about VGA or SVGA cards? No 3D acceleration? No 3D Surround Sound? Very limited RAM? Very limited hard drive sizes?
If you can’t afford the latest & greatest tech development, so what? It’ll become affordable sooner than you think, and in the meantime the previous generation of stuff will go down in price. Ain’t life grand?
Ungoilant
I don't know in what way it sounded like Androo and I were whining. I was just making a comment.
Harky
I don't believe in halting a technology at all; if there's room for improvement, go for it. Though at times, if the supporting technology isn't there, such as memory or manufacturing technique, and the new technology would be insanely expensive for a small gain, then one might want to wait a bit for it to catch up. But still, if the video on your computer were better than what any human could handle, there would be no point in improving it, since video is for the human using the computer. If you improved human vision, then improving the card would make sense. Otherwise it's like designing some supercar meant for humans to ride in, but it goes so fast the g-forces (as in the forces on an object, not the video card) crush the human and kill him/her. Would it make sense to make a car that goes even faster and will kill the human also?
Brad
My five-year-old video card can produce better framerates than my eyes can perceive. So can my current GeForce3. The difference between the two cards is that for a given framerate the newer card can do a *lot* more drawing, and it can do it at a higher resolution. So what do we gain? Visual fidelity. There's still a lot left to be done in that department. Computer games (the main application for these cards) are nowhere near photorealistic.
So, if the only game you play is Pong, then this card is only going to boost your framerate from one ridiculously high figure to another ridiculously high figure.
All I can tell you is that I have a 3-year-old card and the thing is slow, and I can see the frames, EACH ONE OF THEM.
Yes, I am 18 and yes, I do play games… but even the most basic games that are coming out are too much for the card. I would buy a new one, but 1) I don't have the $ and 2) I am leaving to join the forces here soon (I just learned that the unit I will be joining in one year is going to fight).
So if you ask me whether they need to get better and faster… YES… it's the way we are evolving… bigger and better things… better graphics and better game quality mean that you will need A LOT to run games…
I would say that a video card has a shelf life of about 2 years, because by then games are getting too choppy…
And yes, they need to get better at 2D graphics… but think… will we need 2D graphics much longer??? I can tell you that I would like to work in a 3D environment…
I used my Monster Fusion 3D card (one of the first that was a 3D accelerator with 2D display on the same card) until last year, because it played every game I owned. Why did I switch? Because I bought Black & White (which sort of sucks in replay) and the Monster could not handle the graphics even remotely. So I went out and bought a GF2 MX card… bam, it played. I then upgraded to a GF2 GTS because my brother, who I built a computer for, gave his PC with the card in it to my other brother. It needed the mobo replaced (BIOS problem), so I bought a board with onboard sound and swapped the MX for the GTS.
My other brother does not play huge high-performance games that require the GTS; I don't even think they make any that can only be played on the GTS and beyond… so what does he miss, eh? I also snagged his SB PCI 128 because he had the onboard sound; again, he is not into sound editing or MIDI stuff, so no biggie.
My point is, I used my hardware (I also have an SB 16 ISA) until the software could not be used, or used well, that is (if it has crappy performance, I upgrade).
Don't upgrade just because it is the latest and greatest; you waste too much cash. Upgrade incrementally over time like I did, and then one day you will see that all you need to throw down is $150 for a new mobo/proc and you have a new PC.
Much cheaper.
I have liberated myself from needing $200+ video cards every couple of years by sticking to gaming on consoles. Consequently, I have had a TNT2 card for the longest time, and it's still more than enough for what I use it for.
Nvidia can do what they want; it's still a stupid video card design. The last advancement was 3D acceleration, and before that it was the removal of the monitor text mode.
Now cards do not evolve anymore. We need a raytracing hardware standard and cards that use it. More than that, we need to see voxel-based video. The bottleneck is no longer the resolution, the refresh rate, the FPU power, or the RAM. The bottleneck is the development tools and technology that come with the current polygon way of thinking. Raytracing removes a LOT of programming tricks that aren't really needed.
Also, less emphasis should be placed on the card's RAM and processor power. The same way computer buses are going serial and high-speed, video cards should go slimmer, with fewer chips and less RAM. I want to be able to harness the power of all my transistors when I compute stuff (not have a card with more transistors than my entire computer that does nothing when I'm not in a game).
One good concept, for example, was the video/audio DSP of the Atari Falcon030. That machine was ahead of its time. It's funny that the PC is called an open computer and the ST and Amiga 500 were called closed, because to me the PC is the least innovative computer I have ever seen. No company wants to take the risk of doing a different video card concept.
AlienSoldier,
Why voxel-based video? I'm curious about why that would be anything but frivolous. And you've got conflicting ideas: you were talking about how video cards should go with fewer chips and less RAM. Do you have any idea how much RAM voxels take? A single 256x256x256 volume (pretty low resolution) comes out to 16,777,216 voxels!!! And that is a small volume. Could you imagine having a scene full of them? You couldn't get by with less RAM on the video card. And with voxels, you get the same problem as pixels: jaggies. When you get close to a voxel model, you'll see the stair-stepped edges. Maybe curved polygons (like ATI's technology) or subdivision surfaces will be the future, but I really doubt that voxels will be, because you can get better results with less power from other technologies.
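Just to put a number on that, here's my own back-of-the-envelope sketch (the 4 bytes of color per voxel is an assumption of mine, not anything AlienSoldier specified):

    #include <cstdio>

    int main() {
        // One 256 x 256 x 256 volume, assuming 4 bytes (RGBA) per voxel.
        const long long dim = 256;
        const long long bytesPerVoxel = 4;                 // assumption: plain RGBA8 color, nothing else stored
        const long long voxels = dim * dim * dim;          // 16,777,216 voxels
        const long long bytes  = voxels * bytesPerVoxel;   // 67,108,864 bytes
        std::printf("%lld voxels, about %lld MB for one small volume\n",
                    voxels, bytes / (1024 * 1024));        // roughly 64 MB
        return 0;
    }

That's around 64 MB for a single low-resolution volume, before you store anything beyond a plain color and before you fill a scene with them. That's roughly the entire frame buffer of a typical GeForce3 board.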
Raytracing… it’s not really necessary, is it? Look at the movie industry’s leading renderer, PRMan. It does NOT raytrace, and yet it produces high quality images. Raytracing is too computationally expensive, and you can get similar results with cheaper methods. Also, are you tying polygons to raytracing? How are they tied?
Lastly, I don't think that you understand how video cards work. You can have either a fast video card, or a slow video card that helps you "compute stuff". The chips nowadays are much more specific in their purpose, so you can't just use them to "compute stuff". They are NOT general purpose. If they were, they'd be bigger, slower, more expensive, draw more power, and run hotter. So even if you could get them to do work for you, it wouldn't be very fast, and for some operations they couldn't do what you asked of them, because the GPU doesn't support the same operations as the CPU.
You only mentioned the Atari Falcon… could you expand on it?
gtada,
About the Falcon: the computer didn't have a video chipset in today's terms; it was a DSP fast enough to do video and sound (and other stuff if you wanted). For example, it was possible to have better sound in a game with less video, and better video in a game with less sound.
I understand your comment about RAM, but my idea is better for a system with unified memory. If we take games for example, the 3D modelling is a lot more precise on the video card than the corresponding "collision boxes" in the RAM for the processor. Now with all the particle effects etc… collision needs to be far better than what is currently the norm. That means that at some point we will need the same RAM space for those two things.
As for voxels taking too much memory, I agree that this is a major problem. But the resolution of a voxelized object does not need to be equal throughout the object. The BeOS app Behavior is a great proof of that: by using bigger 3D pixels at the center and going smaller at the surface, it gives almost the same result.
Yes, raytracing is very intensive, but it's the only way to fool the eyes. Looking at how slow software OpenGL is at rendering stuff, and looking at how fast a good raytracer can render a scene, it really makes you wonder whether a hardware raytracer would not in fact be… faster. After all, it's the shadows that take time to render, and current 3D card technology also needs a lot of power to do shadows. Because an object rendered with raytracing looks more real, it can be made with far fewer polygons. Not to mention that a lot of optical effects come free with raytracing.
I think NVIDIA should start by making open-source drivers for their cards; if you don't have an x86 and you want to have an NVIDIA-based card, you are downright screwed. Video cards are getting absurdly hot nowadays; I truly think it's time they start looking for ways to make sure they don't get so hot, instead of just smacking bigger and bigger fans on them.
Not true… x86 isn't the only platform sporting NVIDIA GeForce technology; the PowerMac will get its cut as usual, and by the time I get my QuickSilver, it should be sporting the latest GeForce :-)
About the Falcon: the computer didn't have a video chipset in today's terms; it was a DSP fast enough to do video and sound (and other stuff if you wanted).
Hmmm… pretty interesting. Sounds like a great idea… when I get some time, I’d like to look into it.
I understand your comment about RAM, but my idea is better for a system with unified memory. If we take games for example, the 3D modelling is a lot more precise on the video card than the corresponding "collision boxes" in the RAM for the processor. Now with all the particle effects etc… collision needs to be far better than what is currently the norm. That means that at some point we will need the same RAM space for those two things.
You don't need unified memory to do true collisions. The reason either bounding boxes or spheres are used is that they're faster (boxes check the x, y, and z intervals, and spheres check the distance between centers).
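For what it's worth, here's roughly what those cheap tests look like (my own sketch, not code from any particular engine):

    // Axis-aligned bounding box: the boxes overlap only if their
    // intervals overlap on all three axes.
    struct AABB { float minX, minY, minZ, maxX, maxY, maxZ; };

    bool overlaps(const AABB& a, const AABB& b) {
        return a.minX <= b.maxX && a.maxX >= b.minX &&
               a.minY <= b.maxY && a.maxY >= b.minY &&
               a.minZ <= b.maxZ && a.maxZ >= b.minZ;
    }

    // Bounding sphere: the spheres overlap if the squared distance between
    // centers is no more than the squared sum of the radii (no square root needed).
    struct Sphere { float cx, cy, cz, r; };

    bool overlaps(const Sphere& a, const Sphere& b) {
        float dx = a.cx - b.cx, dy = a.cy - b.cy, dz = a.cz - b.cz;
        float rsum = a.r + b.r;
        return dx * dx + dy * dy + dz * dz <= rsum * rsum;
    }

That's a handful of comparisons per pair of objects, versus testing every triangle of one detailed mesh against every triangle of another. That's why engines settle for boxes and spheres even when the render mesh on the card is far more detailed, and it has nothing to do with where the memory lives.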
But the resolution of a voxelized object does not need to be equal throughout the object. The BeOS app Behavior is a great proof of that: by using bigger 3D pixels at the center and going smaller at the surface, it gives almost the same result.
I can see that, but it still doesn’t address the overall aesthetics of voxels. Jagged edges, ugh. And, voxels use polygons for display anyways… so why do it? I don’t understand what the advantage would be.
Yes, raytracing is very intensive, but it's the only way to fool the eyes. Looking at how slow software OpenGL is at rendering stuff, and looking at how fast a good raytracer can render a scene, it really makes you wonder whether a hardware raytracer would not in fact be… faster. After all, it's the shadows that take time to render, and current 3D card technology also needs a lot of power to do shadows. Because an object rendered with raytracing looks more real, it can be made with far fewer polygons. Not to mention that a lot of optical effects come free with raytracing.
Raytracing… fast? Hmmm. And OpenGL to me seems, well, pretty fast. There are ways to speed up raytracing, but you start to lose precision. If you're not concerned about total precision, there are cheaper ways to get the same optical tricks. But don't get me wrong, it would be waaaaay cool if we started seeing more real-time full-scene raytracing. I just don't think it's possible right now in that price range.
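To make the cost concrete: even a toy raytracer has to run something like the following ray-sphere test for every pixel, and then again for every shadow ray, reflection, and so on (my own simplified sketch, assuming a unit-length ray direction):

    #include <cmath>

    struct Vec3 { float x, y, z; };

    static float dot(const Vec3& a, const Vec3& b) {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    // Returns true if a ray (origin o, unit direction d) hits a sphere of
    // radius r centered at c; t receives the distance to the nearest hit.
    bool raySphere(const Vec3& o, const Vec3& d, const Vec3& c, float r, float& t) {
        Vec3 oc = { o.x - c.x, o.y - c.y, o.z - c.z };
        float b = dot(oc, d);
        float disc = b * b - (dot(oc, oc) - r * r);   // discriminant of the quadratic
        if (disc < 0.0f) return false;                // ray misses the sphere
        t = -b - std::sqrt(disc);                     // nearest intersection distance
        return t > 0.0f;
    }

Multiply a test like that by every object in the scene, every pixel on the screen, and an extra shadow ray per light, and you can see both why shadows "come free" once you're already paying that cost, and why real-time full-scene raytracing in hardware still looks a long way off.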
"I understand your comment about RAM, but my idea is better for a system with unified memory."
I believe the reason they put RAM on the cards is that they can access the RAM a lot faster on the card than they can over the system bus.
AlienSoldier, it sounds like you want to centralize the whole computer. I doubt that is going to happen.
"AlienSoldier, it sounds like you want to centralize the whole computer. I doubt that is going to happen."
Too bad, because it's the only way to get anything out of a processor over 333 MHz these days. Other than games, my processor is used at 10%, 90% of the time (I have a 450 MHz)! Of course, I use BeOS.
At 2 GHz my PC would have around 1.7 GHz going spare (choose your weapons: face recognition, voice recognition… raytracing?)