Linked by Thom Holwerda on Sat 11th May 2013 21:41 UTC
Windows "Windows is indeed slower than other operating systems in many scenarios, and the gap is worsening." That's one way to start an insider explanation of why Windows' performance isn't up to snuff. Written by someone who actually contributes code to the Windows NT kernel, the comment on Hacker News, later deleted but reposted with permission on Marc Bevand's blog, paints a very dreary picture of the state of Windows development. The root issue? Think of how Linux is developed, and you'll know the answer.
RE[21]: Too funny
by Alfman on Thu 16th May 2013 15:48 UTC in reply to "RE[20]: Too funny"

satsujinka,

"If you're not interested in discussing how to implement a relational I/O stream abstraction (which we already both agree would be nice,) I guess there's really nothing else to talk about."

I'll play along, but I must emphasize that any implementation could be swapped out and all programs using the record I/O library would continue to work, oblivious to how the library is implemented; the implementation details are irrelevant to them. If you're wondering why I'm being such a stickler here, it's because I don't want to hear you criticizing an implementation merely because it happens to be binary. That's like criticizing C for doing calculations in binary under the hood when you want your numbers to be decimal: binary is used under the hood because it's more efficient, and it doesn't affect your program's I/O.
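To make that concrete, here's roughly the shape of interface I have in mind. Every name and signature below is hypothetical, purely for illustration; the point is that programs call these and never see the wire format underneath:

/* Hypothetical record I/O interface -- illustrative only. Whether the
   library moves binary blobs, delimited text, or shared memory pages
   underneath is a detail the caller never sees. */
typedef struct record record_t;                     /* opaque record handle */

record_t   *rec_read(int fd);                       /* fetch next record, NULL at end of stream */
const char *rec_get(const record_t *r,
                    const char *field);             /* look up a field value by name */
int         rec_set(record_t *r, const char *field,
                    const char *value);             /* set or replace a field */
int         rec_write(int fd, const record_t *r);   /* emit the record downstream */
void        rec_free(record_t *r);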


"As it stands, there is no hardware notion of a tuple. We just have data streams. So we either have to delimit the data or we have to multiplex the stream. If there are other options then please let me know, but 'use higher abstraction' is not a means to implement a system."

We actually have much more than data streams; we also have a wealth of data structures that can be transferred via shared memory pages to create much more powerful IPC than simple data streams. These aren't used often because of 1) the lack of portability between platforms, 2) the lack of network transparency, and 3) the fact that many developers never learned about them. Nevertheless, I just wanted to bring it up to point out that IPC isn't limited to streams.
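As an aside, here's a minimal POSIX sketch of that kind of non-stream IPC, just to show the mechanism exists; the region name is made up and synchronization is left out entirely:

#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>

/* Map a named shared memory region that a cooperating process can also
   shm_open()/mmap(), so the two can exchange real data structures rather
   than byte streams. "/demo_region" is illustrative; locking is omitted. */
void *open_shared_region(size_t size)
{
    int fd = shm_open("/demo_region", O_CREAT | O_RDWR, 0600);
    if (fd < 0)
        return NULL;
    if (ftruncate(fd, (off_t)size) < 0) {
        close(fd);
        return NULL;
    }
    void *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);                       /* the mapping stays valid after close */
    return p == MAP_FAILED ? NULL : p;
}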


"Moving back, my methodology is perfectly sound. I was not trying to show CSV is easier to parse. I was disproving your claim that CSV is particularly hard to parse."

CSV in particular (as documented here: https://en.wikipedia.org/wiki/Comma-separated_values) has several issues which your examples completely omitted. One would like to implement a trivial CSV parser this way:

- Read one line to fetch one record
- Split on commas
- Add fields to a name/value collection (using external knowledge about the field names)

This is easy, but it breaks on legitimate CSV input files.
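As a strawman, here's roughly what that trivial parser looks like in C (my own sketch, with the name/value collection handling omitted); it works right up until the input exercises any of the cases listed next:

#include <stdio.h>
#include <string.h>

/* The "trivial" CSV reader described above: one line per record, split on
   commas. It falls apart on quoted commas, embedded newlines and escaped
   quotes -- exactly the problems listed below. */
int main(void)
{
    char line[4096];
    while (fgets(line, sizeof line, stdin)) {
        line[strcspn(line, "\r\n")] = '\0';            /* assume line == record */
        char *save = NULL;
        for (char *field = strtok_r(line, ",", &save); field;
             field = strtok_r(NULL, ",", &save)) {
            printf("[%s]", field);                     /* would feed a name/value collection */
        }
        putchar('\n');
    }
    return 0;
}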

1. Quoting characters may or may not be present based on value heuristics.
2. The significance of whitespace is controversial and ambiguous between implementations (i.e. users often add whitespace to make the columns align, but that shouldn't be considered part of the data).
3. There are data escaping issues caused by special characters that show up inside a field (quotes, newlines, commas), which need to be quoted and escaped. This is especially troubling because the main record-fetching logic has no idea whether a particular newline marks the end of a record or is just a data byte unless it knows the quoting context in which that newline appeared, and it gets even more complicated once you consider that quote characters themselves have their own rules.

Ideally you could identify record boundaries without any knowledge of field-level quoting; the fact that you can't is a very serious deficiency of CSV, IMHO.
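To make that concrete, here's a made-up file in which one perfectly legal record spans two physical lines and contains a comma and embedded quotes; a line-at-a-time reader cannot know that without tracking quote state:

name,comment
"Smith, John","He said ""hi""
and then left"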


It turns out that XML is somewhat easier to parse without a fancy state machine, because the characters used to delimit records/fields are never present inside text fields. I'm not saying XML is the right choice; its support for rich multidimensional structures makes it complicated for other reasons. But for the sake of argument, just consider this subset:

<record a="a" b="b&quot;" c="b&lt;"/>


When the parser reaches a "<" the parser can ALWAYS find the end of that node by searching for ">".
When the parser reaches a quote, it can ALWAYS find the end of the string by finding the next quote. It's trivial because special characters NEVER show up in the data. This is much easier than with CSV.
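Here's a little sketch of that scanning logic (my own illustrative code, not a full XML parser), using the example record from above:

#include <stdio.h>
#include <string.h>

/* Scan the flat <record .../> subset described above. Because '<', '>'
   and '"' never appear raw inside attribute values (they are always
   entity-escaped), strchr() alone finds every boundary. */
static void scan_records(const char *buf)
{
    const char *p = buf;
    while ((p = strchr(p, '<')) != NULL) {        /* start of a node */
        const char *end = strchr(p, '>');         /* its end is always the next '>' */
        if (!end)
            break;                                 /* truncated input */
        printf("node: %.*s\n", (int)(end - p + 1), p);
        p = end + 1;
    }
}

int main(void)
{
    scan_records("<record a=\"a\" b=\"b&quot;\" c=\"b&lt;\"/>");
    return 0;
}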

As far as escaping and unescaping values goes, here's a trivial implementation of that:

// escape a string
str = replace(str, "&", "&amp;");
str = replace(str, "<", "&lt;");
str = replace(str, ">", "&gt;");
str = replace(str, "\"", "&quot;");


I hope this example helps clarify why CSV is awkward to implement for arbitrary data.



"You do go on to say 'redesign bash to handle this'. I assume you also mean 'provide a library that has multiplexed stdin/stdout' as you also have to write and read from an arbitrary number of stdin/stdouts (as corresponds to the number of fields.)"

Hmm, I'm not quite sure what you're saying, but my point was that since all applications would be communicating via data records, a shell like bash could receive those records and output them as text. When piped into a logger, the logger would take the records and save them using whatever format it uses. The under-the-hood format, and even the specific IPC mechanism used by this new record-I/O library, could be left unspecified so that each OS could pick the mechanism that works best for it.


Now, the moment you've been waiting for... My implementation would probably use a binary format with length prefixes (aka Pascal strings) so that one wouldn't need to scan through the text data at all, which eliminates 100% of the quoting and escaping issues. Moreover, the parser would know the length of the text right away without having to scan through its bytes first, so it could allocate new string variables of exactly the right size. Without the length prefix you'd have two options: 1) use one pass to determine the length first and another to copy the data, or 2) guess the size of the string needed and then dynamically grow it when it overflows.
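To sketch what I mean (the 4-byte little-endian prefix here is just my assumption for illustration, not a proposed standard):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Read one length-prefixed field: a 4-byte little-endian length, then the
   payload. No byte of the payload is ever scanned for delimiters or
   escapes, and the buffer is allocated at exactly the right size. */
char *read_field(FILE *in)
{
    uint8_t hdr[4];
    if (fread(hdr, 1, 4, in) != 4)
        return NULL;                              /* end of stream */
    uint32_t len = (uint32_t)hdr[0] | (uint32_t)hdr[1] << 8 |
                   (uint32_t)hdr[2] << 16 | (uint32_t)hdr[3] << 24;

    char *buf = malloc((size_t)len + 1);          /* exact size, known up front */
    if (!buf)
        return NULL;
    if (fread(buf, 1, len, in) != len) {
        free(buf);
        return NULL;
    }
    buf[len] = '\0';                              /* convenience terminator */
    return buf;
}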


I hope this post is clear because I don't think we have much more time to discuss it.

Edited 2013-05-16 16:06 UTC
