Linked by Thom Holwerda on Tue 22nd May 2012 23:26 UTC
Internet & Networking "Just over two months ago, Chrome sponsored the Pwnium browser hacking competition. We had two fantastic submissions, and successfully blocked both exploits within 24 hours of their unveiling. Today, we'd like to offer an inside look into the exploit submitted by Pinkie Pie." A work of pure art, this. Also, this is not the same person as the other PinkiePie. Also also, you didn't think I'd let a story with a headline like this go by unnoticed, did you?
Thread beginning with comment 519148
RE[7]: Comment by Radio
by anevilyak on Wed 23rd May 2012 17:53 UTC in reply to "RE[6]: Comment by Radio"
anevilyak Member since: 2005-09-14


Really.

static uint32 ComputeMaxResults(size_t size_of_buffer) {
    // Note: size_of_buffer is unsigned, so if it is smaller than sizeof(uint32)
    // the subtraction wraps around to a huge value instead of going negative.
    return (size_of_buffer - sizeof(uint32)) / sizeof(T);
}

So, say, Delphi would prevent someone from making a mistake when doing a subtraction and then a division, knowing what the calculation would be used for, would it?


I don't know about Delphi specifically, but for most of the languages that are considered "safe" in this respect, you're not allowed to pass around raw blocks of memory without any type safety/control. As a consequence, the language's runtime always knows and tracks the size of any currently allocated arrays. Given this knowledge, the runtime effectively converts any instance of array[x] to something analogous to:

if (x < 0 || x >= arraySize)
    raise_runtime_error();
else
    return array[x];

Obviously this incurs some overhead compared to the raw C version, but it does indeed prevent these kinds of problems. So yes, this class of mistake really can be caught by a safe/high-level language, as long as the logic error in question is confined to the level of the language. However, if the logic error is actually in the instructions passed to the GPU itself, and the latter is able to access arbitrary areas of memory, then that's a different story.
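
For what it's worth, even C++ can opt into exactly the check sketched above on a per-access basis. A purely illustrative snippet (my own example, not anything from Chrome's code) using std::vector::at(), which performs the same range test and throws std::out_of_range instead of reading arbitrary memory:

#include <iostream>
#include <stdexcept>
#include <vector>

int main() {
    std::vector<int> values = {1, 2, 3, 4};

    // at() does the bounds check described above; operator[] does not.
    try {
        int v = values.at(10);   // out of range for a four-element vector
        std::cout << v << '\n';
    } catch (const std::out_of_range& e) {
        std::cout << "caught: " << e.what() << '\n';
    }
    return 0;
}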

Edited 2012-05-23 17:54 UTC

Reply Parent Score: 2

RE[8]: Comment by Radio
by kwan_e on Wed 23rd May 2012 23:20 in reply to "RE[7]: Comment by Radio"
kwan_e Member since: 2007-02-18

However, if the logic error is actually in the instructions passed to the GPU itself, and the latter is able to access arbitrary areas of memory, then that's a different story.


Yes, it is a different story. That is the whole context of this discussion. This SPECIFIC problem with raw GPU access won't be fixed by using a different language.

But on a more general note, I mentioned before the C++ libraries that provide automatic memory management, and which ARE intended to be considered part of the language. I highly recommend using STL and Boost containers when programming in C++, especially now that standard library implementations have started adopting move semantics, which can make container operations an order of magnitude faster. A rough sketch of what that buys you follows below.
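
As a rough illustration (a hypothetical snippet, not code from the discussion): with move semantics, a container hands over its heap buffer instead of deep-copying it, both when returned by value and when moved explicitly.

#include <iostream>
#include <string>
#include <utility>
#include <vector>

// Building the result locally and returning it by value used to imply a deep
// copy; with move semantics (or copy elision) the buffer's ownership is
// simply transferred to the caller.
std::vector<std::string> make_lines() {
    std::vector<std::string> lines;
    lines.push_back("hello");
    lines.push_back("world");
    return lines;   // moved or elided, not copied
}

int main() {
    std::vector<std::string> a = make_lines();

    // Explicit move: b takes over a's buffer; a is left in a valid but
    // unspecified (typically empty) state.
    std::vector<std::string> b = std::move(a);
    std::cout << "b has " << b.size() << " lines, a has " << a.size() << '\n';
    return 0;
}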

Edited 2012-05-23 23:23 UTC

Reply Parent Score: 2