Microsoft will continue the trend of integrating managed code support in major product releases—first with SQL Server 2005 and later in the Windows Longhorn operating system.
That’s a good way.
Just out of curiosity, will MS be writing their low level APIs, drivers, and kernel related stuff in managed code and environment?
I’m guessing that you’re asking about Longhorn. With the exception of the HAL, the microkernel, and some system devices/services, the answer is yes.
I am going to disagree with Heinr!ch and say no.
I really do think that a lot of Longhorn will be rewritten in managed code. I don’t think the drivers or the kernel will be, though.
By Huh (IP: —.tc-1.roc-pt.ny.localnet.com) – Posted on 2004-04-06 09:14:52
I am going to disagree with Heinr!ch and say no.
Care to back up your statement, or are you just going to troll?
cool!
I’ve always thought about a spelling/grammar check for programming languages.
Just as spell check increases office productivity,
this will also increase coding productivity
(just think of it like auto-help/an API reference or something).
It can even integrate with Excel and Word documents,
and every developer has to write documents anyway
(program documentation, progress reports, one-pagers, asking the manager for company-sponsored free beer in the pantry, etc.).
Well, I’m not pro-MS, and actually a bit anti-,
but this is…
good job, MS.
I went to DevDays and got to see a long demonstration of the new VS.net 2005 features. I was unimpressed. There were lots of VB developers oohing and ahhing, but I never saw what the big deal was.
BTW, the prebuilt code templates are just that: prebuilt. They asked at the show if we could define our own, and the guy said no. What a pointless feature; it’s useless unless you follow MS’s coding style to a tee. Letting us define our own would have been a great way to offer it.
Longhorn will not have a managed kernel or managed drivers. Win32, MFC and other backwards compatibility things will remain in C++. Explorer will be at least mostly managed and all the new APIs will be implemented with managed code as well.
I don’t believe you can write even complex applications in a garbage-collected environment, let alone an entire operating system. The .NET GC’s complexity is so enormous that the hardware will not be able to keep up with it. If you don’t believe it, check out some independent benchmarks:
http://www.geocities.com/carsten_frigaard/the_toll_of_garbage_colle…
I predict that the following is going to happen. Only the interfaces will be managed in Longhorn and Yukon; the underlying code will be 100% pure C++. A few MS partners will have a chance to purchase the unmanaged APIs for millions of dollars, but most of us small developers will only have access to the managed API, which is thousands of times slower (the speed penalty depends heavily on the number of objects and the amount of memory allocated). They either have to find a radically new garbage collector, or drop it, or else the whole concept is going to suffer.
Anonymous (IP: —.dsl.lsan03.pacbell.net)
I don’t believe you can write even complex applications in a garbage-collected environment, let alone an entire operating system. The .NET GC’s complexity is so enormous that the hardware will not be able to keep up with it. If you don’t believe it, check out some independent benchmarks:
http://www.geocities.com/carsten_frigaard/the_toll_of_garbage_colle…..
bad link dude. 🙂
Here’s a shorter format: http://tinyurl.com/3ek75
I think his site has exceeded its data transfer limits. But here is the conclusion:
Complexity of allocation:
O(1) in C++
O(S) in C#, where S is the size of the object.
Complexity of deallocation:
O(1) in C++
O(N * log M + M) in C#
where N is the preallocated memory size, M is the number of allocated objects.
Time spent re-inventing the wheel:
O(1) in C#
O(S) in C++, where S is the size of the object.
Time spent debugging:
O(1) in C#
O(N * log M + M) in C++
where N is the number of lines of code you had to write, M is the number of objects.
Money wasted because you’re still not on target and still don’t know when you’ll deliver the final product:
O(1) in C#
O((N*M*T)^2) in C++
where N is the number of lines of code you had to reinvent, M is the number of objects, and T is the time spent to “fudge” the whole thing together.
…
I don’t believe you can write even complex applications in a garbage-collected environment, let alone an entire operating system. The .NET GC’s complexity is so enormous that the hardware will not be able to keep up with it. If you don’t believe it, check out some independent benchmarks:
Your comment is biased: lots of complex applications have been written in a garbage collected environment. Entire operating systems have been written in a garbage collected environment. Take a look at LISP some time. It’s always been a garbage-collected environment, and entire operating systems have been written in LISP dialects. And then there’s GNU Emacs. 🙂
Then there’s the paper you link to, which is also biased. Whether it’s deliberately biased, I cannot say, but it’s biased and wrong in many areas.
First of all, it’s using Mono, or rather the Boehm GC, as its GC system. Microsoft .NET uses a far more advanced generational system, which could change the results of the testing. So trying to use the referenced paper to bash .NET really can’t work.
Second is Table 1, which is biased against C#. C# provides stack allocation, as the article notes in a footnote. If the article were fully consistent, then “Per-container allocator” with C++ would be a “no” with a footnote. And VLA should certainly be a “no”, as you can’t use C99 features from C++. Then there’s “stack manipulation via alloca()”. C# has an equivalent: “stackalloc”:
unsafe { byte* data = stackalloc byte[256]; }
The author also needs to ensure that his paper is consistent. He says that C# doesn’t support Resource Acquisition Is Initialization (RAII) (page 4), but he mentions the C# using statement (page 17), which can be used in RAII-style programming:
using (FileStream file = File.OpenWrite ("some-file")) {
/* write to file… */
} /* file is automatically closed here */
The C# using statement is also exception-safe, just like C++ destructors, so you are assured that the cleanup method (System.IDisposable.Dispose()) will be invoked, whether or not an exception is thrown.
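For comparison, the destructor-based C++ version of the same pattern might look like this (a minimal sketch using standard iostreams, not code from the paper):

```cpp
#include <fstream>
#include <string>

// The std::ofstream destructor closes the file when the scope is left,
// whether control leaves normally or via an exception (RAII).
void write_and_close(const std::string &path) {
    std::ofstream file(path);    // resource acquired by the constructor
    file << "write to file...\n";
}                                // file is automatically closed here
```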
See also: http://www.interact-sw.co.uk/iangblog/2004/03/23/locking
For anyone who has spent any time looking at malloc(3) implementations and GC systems, Table 4 (quoted above) is bunk. Complete, unadulterated, bunk.
C++ allocation and deletion are never O(1). To keep the runtime heap from becoming overly fragmented, the memory allocator needs to perform bookkeeping. This bookkeeping is overhead, no matter how you look at it. Look at the GNU malloc implementation some time; it’s certainly not a constant-time algorithm.
GC allocation, on the other hand, can be O(1). Strictly speaking, the GC doesn’t need to allocate anything (the memory is already allocated), so it just returns the current heap pointer and increments it by sizeof(allocated-object). GC heap allocation can be as fast as the runtime stack, in principle, if not in practice. Of course, you pay for this blazing allocation speed when memory deallocation time comes around, as memory is searched looking for objects to collect. Finalizable objects (objects that have finalizers) take an additional cycle to collect under .NET. Generational GC systems such as the one .NET uses try to minimize this overhead by using smaller (and thus more quickly searched) heap pools, so there are workarounds.
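The O(1) allocation path can be sketched in a few lines of C (a toy bump-pointer model, not any real collector’s code):

```c
#include <stddef.h>

#define HEAP_SIZE 4096

static unsigned char heap[HEAP_SIZE];   /* the pre-reserved GC nursery */
static size_t heap_top = 0;             /* the "current heap pointer"  */

/* Bump-pointer allocation: hand out the current pointer and advance it
 * by the requested size.  Constant time; a real collector would trigger
 * a GC cycle instead of returning NULL when the nursery fills up. */
void *gc_alloc(size_t size) {
    if (heap_top + size > HEAP_SIZE)
        return NULL;
    void *p = &heap[heap_top];
    heap_top += size;
    return p;
}
```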
Finally, his test cases look to be written by someone with little experience with .NET. In particular (page 6)
ArrayList v = new ArrayList ();
for (int i = 0; i < 10; ++i)
    v.Add (i);
will have terrible performance characteristics, due to the required boxing of the integer (and the additional object allocation that boxing implies). Of course this algorithm will suck. Furthermore, he compares stack-allocated C++ objects against heap-allocated C# objects (page 7). Using a C# “struct” would help immensely here, but a C# struct isn’t used anywhere in the paper.
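To illustrate the struct point, here is a sketch (the Point type is made up for illustration; in C#, a struct is a value type, allocated inline rather than on the GC heap):

```csharp
using System;

// A struct is a value type: instances live on the stack (or inline in
// their container), so creating one causes no GC-heap allocation.
struct Point {
    public int X, Y;
    public Point(int x, int y) { X = x; Y = y; }
}

class Demo {
    static void Main() {
        Point p = new Point(3, 4);      // stack-allocated, no GC pressure
        int[] values = new int[10];     // a plain int[] also avoids boxing
        for (int i = 0; i < 10; ++i)
            values[i] = i;              // unlike ArrayList.Add(i), no box
        Console.WriteLine(p.X + p.Y);
    }
}
```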
I haven’t fully read the whole paper, so there may be other issues I’m missing, but this is not a well-researched paper, not by any means.
I recently switched to C#.NET; however, I found it difficult to switch from using C++/Qt. Correct me if I am wrong, but in Windows Forms I couldn’t find any layout boxes (such as HBox and VBox in Qt and GTK), and the widgets would not grow to accommodate their child widgets. It was extremely difficult porting applications over, and in the end it really wasn’t worth the bother. I found a few third-party layout widgets, but for a development environment as big as this there should be at least something to make life easier. Maybe that’s why a license for Qt is $1500?
Anyone know if they plan to implement this in .NET 2.0?
If you’re writing .NET or other RAD code, chances are your program is spending most of its time waiting for user input. To make really efficient code in .NET, you’d have to do so many unsafe and otherwise unnatural things, that you might as well work in MC++ or just C++ itself. I think (as I read on one of the Longhorn blogs) that Microsoft’s new version of Explorer is either a mixed-mode (MC++) program or a .NET hosting program written in pure C++.
I’m sure performance-critical code will remain in lower-level languages than .NET.
I have heard (I can’t remember where) that .NET 1.2 will contain containers for Windows Forms. I’m not sure how true this is, though.
I am sure that it won’t matter, long-term. Windows Longhorn is coming, and with it is Avalon, an entirely new model for window layout & drawing, which looks virtually identical to Qt and GTK box layout. 🙂
Look up XAML, which looks like: (1) ASP.NET for Windows Forms; (2) Glade with inline C# code; (3) probably some Qt equivalent (I’m not that familiar with Qt or KDE).
As for the problems you’re having switching from C++/Qt to Windows Forms, have you tried looking into using Qt? There’s a .NET interface to Qt now, so that might be useful for you:
http://www.trolltech.com/developer/changes/changes-3.3.0.html
I recently switched to C#.NET; however, I found it difficult to switch from using C++/Qt. Correct me if I am wrong, but in Windows Forms I couldn’t find any layout boxes (such as HBox and VBox in Qt and GTK), and the widgets would not grow to accommodate their child widgets. It was extremely difficult porting applications over, and in the end it really wasn’t worth the bother.
WinForms uses a layout method very similar to that of Delphi. You either “dock” form elements to a side of the parent element or set them to fill up what remains of the space. “Docked” elements do not resize; however, you can additionally “anchor” widgets to any of the sides, and that will make them resize and/or move to keep the anchored sides at the same distance from the parent border.
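A minimal sketch of both techniques (Dock and Anchor are the real WinForms properties; the control names here are made up):

```csharp
using System.Windows.Forms;

class LayoutDemo : Form {
    public LayoutDemo() {
        // A docked panel sticks to the top edge; it stretches horizontally
        // with the form but keeps its fixed height.
        Panel toolbar = new Panel();
        toolbar.Dock = DockStyle.Top;
        toolbar.Height = 32;

        // DockStyle.Fill takes whatever space the docked elements leave over.
        TextBox content = new TextBox();
        content.Multiline = true;
        content.Dock = DockStyle.Fill;

        // An anchored control keeps a constant distance to the anchored
        // sides; anchoring to all four sides makes it resize with the form.
        Button grower = new Button();
        grower.Anchor = AnchorStyles.Top | AnchorStyles.Bottom
                      | AnchorStyles.Left | AnchorStyles.Right;

        Controls.Add(content);
        Controls.Add(toolbar);
        Controls.Add(grower);
    }
}
```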
But that all is very different from Qt boxes or Swing’s layouts…