Linked by Thom Holwerda on Sat 11th Dec 2010 18:35 UTC
General Development "Using a GPU for computational workloads is not a new concept. The first work in this area dates back to academic research in 2003, but it took the advent of unified shaders in the DX10 generation for GPU computing to be a plausible future. Around that time, Nvidia and ATI began releasing proprietary compute APIs for their graphics processors, and a number of companies were working on tools to leverage GPUs and other alternative architectures. The landscape back then was incredibly fragmented and almost every option required a proprietary solution - either software, hardware or both. Some of the engineers at Apple looked at the situation and decided that GPU computing had potential - but they wanted a standard API that would let them write code and run on many different hardware platforms. It was clear that Microsoft would eventually create one for Windows (ultimately DirectCompute), but what about Linux, and OS X? Thus an internal project was born, that would eventually become OpenCL."
Thread beginning with comment 453266
To view parent comment, click here.
To read all comments associated with this story, please click here.
RE: not just for gpu's
by big_gie on Sun 12th Dec 2010 19:13 UTC in reply to "not just for gpu's "
big_gie
Member since:
2006-01-04

Yes, OpenCL is really nice. It's essentially a front-end to many different architectures: you write OpenCL code once and it can run on GPUs (Nvidia and ATI), CPUs (via AMD's Stream SDK), or other kinds of processors. No need for porting.
The resulting programs might not be as fast as Nvidia's CUDA (though I haven't seen anything proving that), but at least they can run _everywhere_

Reply Parent Score: 1

RE[2]: not just for gpu's
by Jondice on Sun 12th Dec 2010 19:22 in reply to "RE: not just for gpu's "
Jondice Member since:
2006-09-20

Except on OSes that don't have OpenCL available, which is everything other than the Big 3.

Reply Parent Score: 2

RE[3]: not just for gpu's
by big_gie on Sun 12th Dec 2010 19:30 in reply to "RE[2]: not just for gpu's "
big_gie Member since:
2006-01-04

Yes, it's still a new technology. The 1.1 specification is just five months old, and 1.0 is only two years old. It's not mature like MPI, but then MPI has been around for something like two decades...

Reply Parent Score: 1

RE[3]: not just for gpu's
by tyrione on Mon 13th Dec 2010 17:46 in reply to "RE[2]: not just for gpu's "
tyrione Member since:
2005-11-21

Except on OSes that don't have OpenCL available, which is everything other than the Big 3.


I've got OpenCL from Nvidia running on Debian; they packaged it in Experimental.

Reply Parent Score: 2

CUDA platform support on x86
by fran on Sun 12th Dec 2010 19:29 in reply to "RE: not just for gpu's "
fran Member since:
2010-08-06

Interesting. There are a lot of developments in this area.

A few months ago Nvidia also announced it is porting the CUDA platform to x86:
http://arstechnica.com/business/news/2010/09/nvidia-ports-its-cuda-...

I wonder how the integrated GPU/CPU chips coming out next year (Intel's Sandy Bridge and AMD's Fusion) are going to affect these developments.

Reply Parent Score: 1

RE: CUDA platform support on x86
by big_gie on Sun 12th Dec 2010 19:31 in reply to "CUDA platform support on x86"
big_gie Member since:
2006-01-04

Hm, interesting. So that's why Nvidia's drivers do not support OpenCL on the CPU: they still want to lock people in with CUDA.

Reply Parent Score: 2

RE: CUDA platform support on x86
by kaiwai on Sun 12th Dec 2010 21:35 in reply to "CUDA platform support on x86"
kaiwai Member since:
2005-07-06

Interesting. There are a lot of developments in this area.

A few months ago Nvidia also announced it is porting the CUDA platform to x86:
http://arstechnica.com/business/news/2010/09/nvidia-ports-its-cuda-...

I wonder how the integrated GPU/CPU chips coming out next year (Intel's Sandy Bridge and AMD's Fusion) are going to affect these developments.


From what I understand, on Intel's Sandy Bridge OpenCL will be based on the AVX extensions, which should provide the sort of performance one would normally get from a dedicated GPU. There are rumours that Apple may switch to AMD, but I think those rumours are premature and misplaced: even though AMD has made great strides when it comes to battery life, Intel still has the crown in that area.

Reply Parent Score: 2

CodeMonkey Member since:
2005-09-22

I wonder how the integrated GPU/CPU chips coming out next year (Intel'sandy Bridge and AMD Fusion) is going to affect these developments.


While these integrated chips are on a single package, they're still two discrete components placed inside a single box. From the OpenCL perspective, whether the devices are integrated or sit on separate PCIe lanes, the programming API is unchanged. The OS and the framework still see them as two logically separate devices: when retrieving the list of available OpenCL devices, you'd get a CPU device and a GPU device. The physical integration into a single package is invisible to the API.

Reply Parent Score: 2

RE[2]: not just for gpu's
by mat69 on Sun 12th Dec 2010 21:19 in reply to "RE: not just for gpu's "
mat69 Member since:
2006-03-29

Yeah.
Though that does not mean you shouldn't create different versions for each device (or device type), be it by auto-generating the code or by hand-crafting it.

E.g. each device has a preferred vector size, and using it will make things faster. Furthermore, each device has a different amount of local memory, etc.

What I find great is that it is relatively easy to write a kernel, and with a little time it gets quite fast. Only writing the boilerplate code sucks, though the bindings can help there. ;)

Edited 2010-12-12 21:20 UTC

Reply Parent Score: 2

RE[3]: not just for gpu's
by tylerdurden on Mon 13th Dec 2010 02:02 in reply to "RE[2]: not just for gpu's "
tylerdurden Member since:
2009-03-17

Exactly. OpenCL is really not that portable when it comes to optimized kernels; a lot of templating is necessary.

I.e. code that performs well on ATI GPUs will not necessarily perform efficiently on NVIDIA parts, and vice versa.

Ironically, the most "portable" of these technologies is CUDA. Granted, it is only portable across NVIDIA architectures, and now x86 CPUs.

The biggest issue with these sorts of tools is that as long as GPUs and the like sit in separate address spaces, the programming models will remain significantly hindered.

Reply Parent Score: 3