Linked by Thom Holwerda on Fri 27th Mar 2009 21:32 UTC
On Friday, Intel engineers are detailing the inner workings of the company's first graphics chip in over a decade at the Game Developers Conference in San Francisco - sending a signal to the game industry that the world's largest chipmaker intends to be a player. During a conference call that served as a preview to the GDC sessions, Tom Forsyth, a software and hardware architect at Intel working on the Larrabee graphics chip project, discussed the design of Larrabee, a chip aimed squarely at Nvidia and at AMD's ATI unit.
RE: About time.
by Wrawrat on Sat 28th Mar 2009 06:00 UTC in reply to "About time."
Wrawrat Member since:
2005-06-30

I'd like to know more on that "slowing Microsoft technology" and how Intel will overcome it on a platform "with no limitations". Last I heard, Intel was developing GPUs with mediocre performance and I don't expect a change soon.

Yes, I am quite skeptical of Larrabee. Developing a GPU around the x86 ISA is just strange to me. I understand that x86 compilers are more mature than the ones used for proprietary architectures, but massive parallelisation is quite a new field. Thus, I suppose they are pretty much at the same point. Furthermore, NVidia and ATI will have enough time to develop alternatives, especially since Larrabee won't ship until next year. Sure, Larrabee will get 32 cores, but the current NVidia offerings already have 240. Oh, they lack functionalities, but that's exactly why it's easier to put more of them on a silicon wafer.
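
To make that concrete, here's a rough C sketch (purely illustrative, not actual Larrabee or Nvidia code) of the kind of embarrassingly parallel per-pixel loop both camps are chasing. Larrabee's pitch is that this stays ordinary x86 spread across its ~32 cores with wide vector units, while Nvidia splits the same independent iterations across its 240 simple stream processors:

    #include <stdio.h>
    #include <stddef.h>

    /* Toy per-pixel kernel: every iteration is independent, which is
     * what makes graphics workloads so easy to parallelise. How you
     * split the loop (a few fat x86 cores with wide vectors, or many
     * thin stream processors) is exactly the design fight described
     * above. The shading math here is a stand-in, not real shader code. */
    static void shade(float *out, const float *in, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            out[i] = in[i] * 0.5f + 0.25f;
    }

    int main(void)
    {
        float in[8] = {0, 1, 2, 3, 4, 5, 6, 7};
        float out[8];
        shade(out, in, 8);
        for (int i = 0; i < 8; i++)
            printf("%.2f ", out[i]);
        printf("\n");
        return 0;
    }

Either way the loop itself is trivial to split; the argument is over how many execution units you get and how fat each one is.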

I'm happy to see more competition, but I just don't expect a savior, even for Linux graphics, as high-end 3D graphics is a cut-throat business.

Score: 4

RE[2]: About time.
by helf on Sat 28th Mar 2009 14:24 UTC in reply to "RE: About time."
helf Member since:
2005-07-06

What I don't get is why people seem to think "massive parallelism" is a new field. It's been around since supercomputers first started appearing... The Crays, Thinking Machines, etc. were MASSIVELY parallel. That is how they had such insane performance in the '80s and early '90s.

Even current supercomputers are built that way. It ain't a new field.

Score: 3

RE[2]: About time.
by cyclops on Mon 30th Mar 2009 05:12 UTC in reply to "RE: About time."
cyclops Member since:
2006-03-12

"I'd like to know more on that 'slowing Microsoft technology' and how Intel will overcome it on a platform 'with no limitations'. Last I heard, Intel was developing GPUs with mediocre performance and I don't expect a change soon.

Yes, I am quite skeptical of Larrabee. Developing a GPU around the x86 ISA is just strange to me. I understand that x86 compilers are more mature than the ones used for proprietary architectures, but massive parallelisation is quite a new field. Thus, I suppose they are pretty much at the same point. Furthermore, NVidia and ATI will have enough time to develop alternatives, especially since Larrabee won't ship until next year. Sure, Larrabee will get 32 cores, but the current NVidia offerings already have 240. Oh, they lack functionalities, but that's exactly why it's easier to put more of them on a silicon wafer.

I'm happy to see more competition, but I just don't expect a savior, even for Linux graphics, as high-end 3D graphics is a cut-throat business."


IE4 --> Netscape, Word --> WordPerfect, XP --> Vista, Monopoly --> Governments Worldwide, etc. I'll keep my words short; fill in the sentences yourself.

This is not about 3D graphics; this is about selling an end-to-end hardware computing platform, and AMD, Intel, VIA, and Nvidia are all up for it.

Score: 2