Linked by Thom Holwerda on Sat 11th Dec 2010 18:35 UTC
General Development "Using a GPU for computational workloads is not a new concept. The first work in this area dates back to academic research in 2003, but it took the advent of unified shaders in the DX10 generation for GPU computing to be a plausible future. Around that time, Nvidia and ATI began releasing proprietary compute APIs for their graphics processors, and a number of companies were working on tools to leverage GPUs and other alternative architectures. The landscape back then was incredibly fragmented and almost every option required a proprietary solution - either software, hardware or both. Some of the engineers at Apple looked at the situation and decided that GPU computing had potential - but they wanted a standard API that would let them write code and run on many different hardware platforms. It was clear that Microsoft would eventually create one for Windows (ultimately DirectCompute), but what about Linux, and OS X? Thus an internal project was born, that would eventually become OpenCL."
Thread beginning with comment 453263
not just for gpu's
by rebel787 on Sun 12th Dec 2010 18:45 UTC

I'm pleasantly surprised to read the part stating that multi-core CPUs are also targeted by OpenCL.


RE: not just for gpu's
by big_gie on Sun 12th Dec 2010 19:13 in reply to "not just for gpu's "

Yes, OpenCL is really nice. It's essentially a front-end to many different architectures: you write your program once in OpenCL and it can run on GPUs (Nvidia and ATI), on CPUs (e.g. via AMD's Stream SDK), or on other kinds of processors, with no porting needed.
The resulting programs might not be as fast as Nvidia's CUDA (I haven't seen anything proving that, though), but at least they can run _everywhere_.
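For readers new to OpenCL, the portability comes from shipping the kernel as source text and letting the driver compile it at run time for whatever device is present. Below is a minimal vector-add kernel; the host side here is a plain-Python emulation of the NDRange execution model (not a real OpenCL runtime), so it runs without any GPU driver. With a real binding, the same kernel string would be handed unchanged to either a GPU or a CPU device's compiler.

```python
# OpenCL C kernel source, shipped as a string and compiled at run time by
# the vendor's driver -- the identical source works on a GPU or CPU device.
KERNEL_SRC = """
__kernel void vec_add(__global const float *a,
                      __global const float *b,
                      __global float *c)
{
    int i = get_global_id(0);   /* this work-item's index in the NDRange */
    c[i] = a[i] + b[i];
}
"""

def emulate_vec_add(a, b):
    """Plain-Python emulation of enqueueing vec_add over a 1-D NDRange:
    one 'work-item' per element, each computing c[i] = a[i] + b[i]."""
    c = [0.0] * len(a)
    for i in range(len(a)):      # i plays the role of get_global_id(0)
        c[i] = a[i] + b[i]
    return c

# emulate_vec_add([1.0, 2.0, 3.0], [10.0, 20.0, 30.0])
# -> [11.0, 22.0, 33.0]
```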


RE[2]: not just for gpu's
by Jondice on Sun 12th Dec 2010 19:22 in reply to "RE: not just for gpu's "

Except on OSes that don't have OpenCL available, which is everything other than the big three.


CUDA platform support on x86
by fran on Sun 12th Dec 2010 19:29 in reply to "RE: not just for gpu's "

Interesting. A lot of development is happening in this area.

A few months ago Nvidia also announced a port of the CUDA platform to x86.

I wonder how the integrated CPU/GPU chips coming out next year (Intel's Sandy Bridge and AMD's Fusion) are going to affect these developments.


RE[2]: not just for gpu's
by mat69 on Sun 12th Dec 2010 21:19 in reply to "RE: not just for gpu's "

That does not mean, though, that you shouldn't create different versions for each device (or device type), whether by auto-generating the code or by hand-crafting it.

E.g., each device has a preferred vector width, and using it will make things faster. Furthermore, each device has a different amount of local memory, etc.
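To illustrate the vector-width point: on a device whose preferred float vector width is 4 (queryable via `clGetDeviceInfo` with `CL_DEVICE_PREFERRED_VECTOR_WIDTH_FLOAT`), a `float4` variant of a kernel lets each work-item process four elements with one vector operation. The sketch below pairs such a kernel with a plain-Python emulation of its work-item semantics (the emulation is illustrative, not a real runtime, and assumes the array length is a multiple of 4):

```python
# float4 variant of a vector add: each work-item handles 4 floats at once,
# matching a device whose preferred float vector width is 4.
KERNEL_SRC_VEC4 = """
__kernel void vec_add4(__global const float4 *a,
                       __global const float4 *b,
                       __global float4 *c)
{
    int i = get_global_id(0);
    c[i] = a[i] + b[i];   /* one vector op instead of four scalar ops */
}
"""

def emulate_vec_add4(a, b):
    """Emulation: the NDRange shrinks to len(a) // 4 work-items, each
    adding one 4-wide slice (len(a) must be a multiple of 4)."""
    assert len(a) % 4 == 0 and len(a) == len(b)
    c = [0.0] * len(a)
    for i in range(len(a) // 4):   # one work-item per float4
        for lane in range(4):      # the hardware does the lanes in parallel
            c[4 * i + lane] = a[4 * i + lane] + b[4 * i + lane]
    return c
```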

What I find great is that it is relatively easy to write a kernel, and with a little time it gets quite fast. Only writing the boilerplate code sucks, though the bindings can help there. ;)
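For context, the boilerplate in question is the host-side setup that every OpenCL program repeats before a single kernel runs. In the C API it is roughly this fixed sequence (an outline, with arguments and error handling omitted):

```
clGetPlatformIDs(...)             /* 1.  enumerate platforms              */
clGetDeviceIDs(...)               /* 2.  pick a device (GPU, CPU, ...)    */
clCreateContext(...)              /* 3.  create a context                 */
clCreateCommandQueue(...)         /* 4.  ...and a queue to submit work    */
clCreateProgramWithSource(...)    /* 5.  hand over the kernel source      */
clBuildProgram(...)               /* 6.  run-time compile for the device  */
clCreateKernel(...)               /* 7.  get a kernel handle              */
clCreateBuffer(...)               /* 8.  allocate device buffers          */
clEnqueueWriteBuffer(...)         /* 9.  move input data in               */
clSetKernelArg(...)               /* 10. bind the kernel arguments        */
clEnqueueNDRangeKernel(...)       /* 11. launch over the NDRange          */
clEnqueueReadBuffer(...)          /* 12. read the results back            */
```

Higher-level bindings collapse most of these steps into one or two calls, which is the help the comment alludes to.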

