Linked by Thom Holwerda on Tue 22nd May 2012 23:26 UTC
Internet & Networking "Just over two months ago, Chrome sponsored the Pwnium browser hacking competition. We had two fantastic submissions, and successfully blocked both exploits within 24 hours of their unveiling. Today, we'd like to offer an inside look into the exploit submitted by Pinkie Pie." A work of pure art, this. Also, this is not the same person as the other PinkiePie. Also also, you didn't think I'd let a story with a headline like this go by unnoticed, did you?
Thread beginning with comment 519112
RE[5]: Comment by Radio
by moondevil on Wed 23rd May 2012 11:09 UTC in reply to "RE[4]: Comment by Radio"
moondevil
Member since:
2005-07-08

And do they prevent you from making the LOGIC ERROR that was explained about the functions?


YES! Because the LOGIC ERROR is about a MEMORY ACCESS ALGORITHM known to ANY C PROGRAMMER.

Reply Parent Score: 3

RE[6]: Comment by Radio
by kwan_e on Wed 23rd May 2012 11:17 in reply to "RE[5]: Comment by Radio"
kwan_e Member since:
2007-02-18

YES! Because the LOGIC ERROR is about a MEMORY ACCESS ALGORITHM known to ANY C PROGRAMMER.


Really.

static uint32 ComputeMaxResults(size_t size_of_buffer) { return (size_of_buffer - sizeof(uint32)) / sizeof(T); }

So a language like, say, Delphi would prevent someone from making a mistake in a subtraction followed by a division, knowing what the calculation was going to be used for, would it?

size_of_buffer is an integer.
So is sizeof(uint32).
So is sizeof(T).

Are you seriously telling me there are programming languages out there that would actually tell the programmer "hey, did you know the size_of_buffer you passed in was smaller than sizeof(uint32), and that I checked every usage of ComputeMaxResults and noticed these unsafe uses"?

Reply Parent Score: 4

RE[7]: Comment by Radio
by moondevil on Wed 23rd May 2012 13:11 in reply to "RE[6]: Comment by Radio"
moondevil Member since:
2005-07-08

Yes, because if you really cared to read everything, you would see that the outcome of such functions is used for buffer manipulation tricks.

This calculation then overflowed and made the result of this function zero, instead of a value at least equal to sizeof(uint32). Using this, Pinkie was able to write eight bytes of his choice past the end of his buffer. The buffer in this case is one of the GPU transfer buffers, which are mapped in both processes’ address spaces and used to transfer data between the Native Client and GPU processes. The Windows allocator places the buffers at relatively predictable locations; and the Native Client process can directly control their size as well as certain object allocation ordering. So, this afforded quite a bit of control over exactly where an overwrite would occur in the GPU process.

The next thing Pinkie needed was a target that met two criteria: it had to be positioned within range of his overwrite, and the first eight bytes needed to be something worth changing. For this, he used the GPU buckets, which are another IPC primitive exposed from the GPU process to the Native Client process. The buckets are implemented as a tree structure, with the first eight bytes containing pointers to other nodes in the tree. By overwriting the first eight bytes of a bucket, Pinkie was able to point it to a fake tree structure he created in one of his transfer buffers. Using that fake tree, Pinkie could read and write arbitrary addresses in the GPU process. Combined with some predictable addresses in Windows, this allowed him to build a ROP chain and execute arbitrary code inside the GPU process.
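
For anyone following along, here is a minimal, self-contained sketch of the arithmetic pattern the quote is describing. It is not the actual Chromium code path (the element type T is assumed here to be 8 bytes wide, and the helper is compiled standalone), but it shows how the unchecked unsigned subtraction silently wraps around instead of failing when size_of_buffer is smaller than sizeof(uint32):

// Sketch only -- not the real Chromium code. The element type T is
// assumed to be 8 bytes wide; uint32 is modelled with std::uint32_t.
#include <cstddef>
#include <cstdint>
#include <iostream>

static std::uint32_t ComputeMaxResults(std::size_t size_of_buffer) {
    // If size_of_buffer < sizeof(std::uint32_t), the unsigned subtraction
    // wraps around to a huge value instead of going negative, so the
    // returned "maximum number of results" has nothing to do with the
    // real buffer size.
    return (size_of_buffer - sizeof(std::uint32_t)) / sizeof(std::uint64_t);
}

int main() {
    std::cout << ComputeMaxResults(1024) << '\n'; // 127: plausible
    std::cout << ComputeMaxResults(2) << '\n';    // wrapped: a huge, bogus count
    return 0;
}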


A safer language would raise a runtime error when such a situation is detected.

The logic error, as you call it, is only there because they need to calculate specific values for pointer math. Without pointer math there would be no need for such calculations, and no logic error to turn into a buffer exploit.
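
To make that concrete without switching languages at all: C++ itself already offers a checked accessor, and the difference in behaviour is exactly the one being argued about. A rough illustration (the buffer name is made up, not taken from the Chrome code):

// Rough illustration, not Chrome code: the same out-of-range index either
// corrupts memory silently (unchecked []) or is reported at runtime (at()).
#include <cstdint>
#include <iostream>
#include <stdexcept>
#include <vector>

int main() {
    std::vector<std::uint32_t> transfer_buffer(4); // hypothetical small buffer

    // transfer_buffer[10] = 0xdeadbeef;  // unchecked: undefined behaviour,
                                          // silently writes past the end

    try {
        transfer_buffer.at(10) = 0xdeadbeef; // checked: detected immediately
    } catch (const std::out_of_range& e) {
        std::cerr << "bounds error caught: " << e.what() << '\n';
    }
    return 0;
}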

Reply Parent Score: 2

RE[7]: Comment by Radio
by anevilyak on Wed 23rd May 2012 17:53 in reply to "RE[6]: Comment by Radio"
anevilyak Member since:
2005-09-14


Really.

static uint32 ComputeMaxResults(size_t size_of_buffer) { return (size_of_buffer - sizeof(uint32)) / sizeof(T); }

So, say, Delphi, would prevent someone from making a mistake doing a subtraction and then a division, knowing what the calculation would be used for, would it?


I don't know about Delphi specifically, but for most of the languages that are considered "safe" in this respect, you're not allowed to pass around raw blocks of memory without any type safety/control. As a consequence, the language's runtime always knows and tracks the size of any currently allocated arrays. Given this knowledge, the runtime effectively converts any instance of array[x] to something analogous to:

if (x < 0 || x >= arraySize)
    raise_runtime_error();
else
    return array[x];

Obviously this incurs some overhead compared to the raw C version, but it does indeed prevent these kinds of problems. So yes, it really is possible to handle this in a safe/high-level language, as long as the logic error in question is confined to the level of the language. However, if the logic error is actually in the instructions passed to the GPU itself, and the latter is able to access arbitrary areas of memory, then that's a different story.
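
For what it's worth, the transformation described above is easy to sketch in C++ itself: a wrapper whose operator[] performs the check that the runtime of a "safe" language would insert automatically. The names here are illustrative only, not from any real runtime:

// Sketch of the check described above, wrapped into a type.
#include <cstddef>
#include <stdexcept>

template <typename T, std::size_t N>
class CheckedArray {
public:
    T& operator[](std::size_t x) {
        // Equivalent of: if (x < 0 || x >= arraySize) raise_runtime_error();
        // (the x < 0 half is impossible for an unsigned index, so only the
        // upper bound needs testing here)
        if (x >= N)
            throw std::out_of_range("array index out of range");
        return data_[x];
    }

private:
    T data_[N] = {};
};

// Usage: CheckedArray<int, 8> a; a[12] = 1;  // throws instead of
// scribbling past the end, at the cost of one comparison per access.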

Edited 2012-05-23 17:54 UTC

Reply Parent Score: 2