When Microsoft launches Windows Server 2003 next month, many additional pieces of the operating system will be works in progress. In the meantime an article asks: when porting applications to .Net, do you have to forsake style to enable speed? It’s a tough choice, TechUpdate says.
I’ve had a little experience coding in C# for .Net, and applications appear to take slightly longer to start up than Java ones on my AMD 475 MHz laptop; once running, they’re about the same speed as Java. This was using the Microsoft .NET runtime environment. I’ve also played around with running console-based .Net applications in the Mono runtime environment on Linux, and it doesn’t exhibit this slow-startup problem, so it appears to be quite usable. We’ll see how usable it really is once they get the Wine/Windows.Forms stuff working fully.
Anyway, applications should be coded to fit in with the platform’s default ‘style’, rather than trying to look fancy. If people want it to look different, use a different desktop theme, with different looking widgets.
Talking of standard look and feel, what is the standard look on Windows these days? The following apps all look different (toolbars and menus) from each other, despite all being made by Microsoft…
* Visual Studio
* Office
* Explorer
* Wordpad
…just a case of a long “to-do list” for Mr. Bill and sheer Microsoft laziness?
Are these “additional components” going to be free?
“Are these “additional components” going to be free?”
Why not read the article and find out. The answer is there.
Due to the nature of the caching system used in .Net, the first time ANY .Net app starts up it takes a performance hit. Subsequent requests don’t take as long because the compiled code is cached. If a file or .dll changes, it gets recompiled and recached; again the initial performance hit takes place and then the result is cached. The speed in .Net is fantastic!
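That first-run cost is inherent to any JIT-based runtime, not just .Net. A rough sketch in Java (an illustrative workload only, nothing .NET-specific; actual timings vary by VM and machine): the first timed call includes compilation work, while later calls hit already-compiled code.

```java
// Rough sketch of JIT warm-up: the first call runs cold (interpreted,
// plus compilation overhead), later calls reuse the compiled code the
// runtime has cached. Timings are machine-dependent; only the final
// arithmetic result is deterministic.
public class WarmupDemo {
    // A small workload for the JIT to chew on.
    static long sumOfSquares(int n) {
        long total = 0;
        for (int i = 1; i <= n; i++) {
            total += (long) i * i;
        }
        return total;
    }

    static long timeOneCall() {
        long start = System.nanoTime();
        sumOfSquares(10_000);
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        long cold = timeOneCall();      // includes warm-up cost
        long warm = 0;
        for (int i = 0; i < 1_000; i++) {
            warm = timeOneCall();       // eventually hits compiled code
        }
        System.out.println("cold run (ns): " + cold);
        System.out.println("warm run (ns): " + warm);
        System.out.println("result: " + sumOfSquares(10_000));
    }
}
```

On most VMs the warm figure ends up well below the cold one, which is the same effect the .Net assembly cache is amortizing across runs.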
TechUpdate write “Like the Assembler and COBOL developers before them, the C++ folks’ insistence that their language is the only choice for real applications makes it difficult for architects to sell them on the benefits of a CLR and a core set of .Net Framework classes.”
Most of Tim’s comments make sense, apart from his lack of understanding of the difference between ‘systems programming languages’ and ‘glue and scripting’ ones. There is a reason that real code is written in C++, and he has missed the point.
>>Talking of standard look and feel, what is the standard look on Windows these days? The following apps all look different (toolbars and menus) from each other, despite all being made by Microsoft…
* Visual Studio
* Office
* Explorer
* Wordpad <<
Well, I don’t know what you see, but they all look uniform to me. Maybe in pre-XP versions there was some difference. The only error I see is that “View” and “Format” are flipped in order from Notepad to Word, but seeing as one has been around since Win 1.0 and the other is much newer, I can see how that happens. Also, Notepad doesn’t have the little Windows logo in the corner; then again, it’s not like they’ve changed it in forever, so that will happen, and I couldn’t care less.
Now, if you’re talking about how Notepad has very little going on in its GUI while Explorer and Word have tons of buttons and stuff, well, of course: they do a whole lot more. Buttons from Explorer or Word would be pointless in Notepad since it wouldn’t have a use for them. All these apps are consistent with each other. I don’t have VS on here at the moment to compare, but I don’t remember it ever being any different. The only app I can think of that isn’t very consistent with the others is WMP, and even that comes into line if set to the right skin. Also, I don’t think you could make it very nice without it being a bit different from the rest. MS’s apps are very consistent from one to the next; you always know where to look for things and how they work. Just because the apps themselves have many differences due to what each app needs doesn’t make them inconsistent. Explorer needs an address bar; Notepad doesn’t. If this is your idea of inconsistency, you’re way off base.
Does anybody know of any good benchmarks for .NET software (both services and applications)? Possibly with a Mono comparison thrown in?
—
Jamie Burns
Web Templates from http://www.dynamicexpression.com/
—
The CLR in Linux is a module that stays persistent in memory (correct me on that), so it starts faster. In Windows it gets loaded each time you start a .NET app. That’s going to change, though, because the CLR moves into the kernel with LH.
From what I see, Mono does not have any kernel modules. The speed difference you’re seeing is probably more intelligent caching on the Linux side. But moving the CLR into the kernel? Are they insane? Not only is the CLR code too new to trust, but VMs are just plain too complex to live in the kernel. Is this going to be NT 4.0 all over again (when the introduction of the GDI into the kernel destabilized NT significantly)?
As for the C++ comments: I see C# and Java as just cut-down versions of C++. They don’t really offer any power that’s not available with a good set of C++ class libraries. All of them are too low level, IMHO, for general purpose application coding. Moving forward, this segment is going to be taken up by languages like Python and Perl, especially once their performance gets a shot in the arm with technologies like Parrot (the new, very fast, Perl Virtual Machine).
Weeman:
> Gonna change though because the CLR moves into kernel
> with LH.
Rayiner Hashem:
> From what I see, mono does not have any kernel modules.
Weeman was probably referring to the .NET CLR moving into the Windows kernel with its Longhorn version. MS converts slow subsystems into faster kernel modules all the time – eventually the instabilities are worked out. They will probably set up the CLR as another execution environment like posix, OS/2 and Win16.
As it is, I haven’t had any instability problems with the .NET runtime. Isn’t increased stability supposed to be the whole point of all that code management?
Why do you mention Notepad? Sandy didn’t mention it, but said Visual Studio, Office, Explorer and Wordpad have different looks and feels, and asked which one is the standard now…
The article states that .NET will be more “mature” for embedding when next-generation PDAs run at 1 GHz.
So that means the .NET embedded version will only work correctly with the computing power of a top-of-the-line PC from two years ago!
Forgive me, but how the hell can .NET be so slow?!?!
Their applications don’t seem to be very heavy!
Sch:
> Forgive me, but how the hell can .NET be so slow?!?!
> Their applications don’t seem to be very heavy!
JIT compilation is always a tradeoff between time and space. The more space you have to cache precompiled assemblies, the less time you spend recompiling them. Most WinCE machines don’t have much space to work with. Also, ARM is a RISC architecture, where 1 GHz doesn’t mean as much as it does on x86. Don’t forget the emulated floating-point operations either – most ARM processors don’t have FPUs.
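That emulated floating point is why code targeting FPU-less chips often falls back on fixed-point arithmetic. A minimal sketch, in Java rather than anything WinCE-specific (the 16.16 format and the helper names here are just illustrative):

```java
// Minimal 16.16 fixed-point sketch: the fractional part lives in the
// low 16 bits of an int, so all arithmetic stays in the integer unit --
// handy on FPU-less ARM cores where floats are emulated in software.
public class Fixed1616 {
    static final int ONE = 1 << 16;  // 1.0 in 16.16 format

    static int fromDouble(double d) { return (int) Math.round(d * ONE); }
    static double toDouble(int f)   { return (double) f / ONE; }

    // Multiply through a 64-bit intermediate to avoid overflow,
    // then shift back down into 16.16 format.
    static int mul(int a, int b)    { return (int) (((long) a * b) >> 16); }

    public static void main(String[] args) {
        int threeHalves = fromDouble(1.5);
        int product = mul(threeHalves, threeHalves);  // 1.5 * 1.5
        System.out.println(toDouble(product));        // prints 2.25
    }
}
```

The same trick shows up throughout the embedded world; the cost is range and precision, which is exactly the space/time style of tradeoff being discussed here.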
On a similar note, whatever happened to that compile on install feature that I remember from the .NET whitepapers? I would love it if my ActiveSync host computer could do a heavily optimized cross-compile of Compact .NET apps as part of the process of installing them onto the WinCE machine. No need to have the PDA interpret (and store) what the desktop can compile ahead of time, right?
>Weeman was probably referring to the .NET CLR moving into
> the Windows kernel with its Longhorn version. MS converts
> slow subsystems into faster kernel modules all the time –
> eventually the instabilities are worked out. They will
> probably set up the CLR as another execution environment
> like posix, OS/2 and Win16.
In NT’s architecture the execution environments all live in user space. See csrss.exe, os2ss.exe, etc.
MS is just making sure SA has value. There’s already been talk of OS updates and patches requiring SA – especially for server OSes. This just makes sure you buy it!
Of course, if you have to wait Y months for X features you need right now, how exactly is this different from open source? [Other than being buggy, expensive, and secretive, I mean :P]
Me:
> They will probably set up the CLR as another execution
> environment like posix, OS/2 and Win16.
Anonymous:
> In NT’s architecture the execution environments all live
> in user space. See csrss.exe, os2ss.exe, etc.
Good point, although services used by these subsystems are often in kernel space, usually services shared by all of them. Even Win32 is such a subsystem.
This brings to mind a possible reason that the CLR would need certain components running in kernel space: WinFS. WinFS is supposed to be based on the Yukon version of SQL Server, which is supposed to use the CLR as its execution engine, at least for stored procedures. Perhaps WinFS will need the CLR too? Filesystems run in kernel space on NT, don’t they?
I agree with your statements, but I work in very low-end embedded (smart card chips), JavaCard technology, and the STIP/JEFF platform.
The lowest-end chips (8/16-bit at ~3 MHz with 2.5 KB RAM + 32 KB of EEPROM) are able to run JavaCard at reasonable speed when used for scripting. The highest-end chips (32-bit at ~20 MHz, sometimes 30-50 MHz, with 8 KB RAM and 128 KB of EEPROM) are comfortable with the JEFF/STIP platform, and we are even now able to do integer computation without fear.
JEFF/STIP is a version of Java with some size optimisation of the bytecode mnemonics and a much lighter framework. It is very close to the Java platform on the power-consumption side. Serialisation of data is not a problem thanks to the “huge” CPU power.
So I find it amazing that they are not able to run their “small” applications at sufficient speed with a 400 MHz processor (and ARM is quite good on CPU power in the embedded world).
I have already had a look at the MSIL of .NET and I saw some serious flaws for “interpretive” mode. This article has almost convinced me that .NET is not embeddable at all, and that you need huge amounts of memory and CPU power to run it because it is only usable in JIT mode.
Embedded .NET seems like a big joke from MS.
Sorry, I misread. At any rate, they’re all the same for all intents and purposes. He was trying to make it sound like Windows is some inconsistent thing even across MS apps, and it simply isn’t.
Brian Hawley is absolutely right. There has to be a reason, other than speed, for putting the CLR into the kernel. Moving something into kernel space doesn’t automagically make it faster. You only need to put things in the kernel if:
a) Your subsystem has extensive contact with kernel data structures or hardware
b) Your subsystem needs to access the address space of multiple processes
In both cases, you can cut the number of context switches by putting something into the kernel. The CLR has neither of these requirements, unless some part of the kernel depends on it. The interesting bit here is that the CLR in the kernel will probably *decrease* system performance. Since the kernel is called by all code, system performance is very sensitive to the kernel’s cache footprint. While dynamically compiled .NET code might be quite fast for straight-line code, the CLR entails a huge cache-footprint impact.
As for environment subsystems, they no longer run in user space. NT is an inherently client/server design, but it’s client/server with all the servers running in the kernel level.
As for Microsoft “eventually stabilizing things,” remember that “eventually” for the GDI transition meant four years, from the release of NT4 in 1996 to Win2K in 2000. Even if this latest stabilization is just as easy (and since the cleanliness of NT’s design has severely degraded since 3.5, I have a feeling it won’t be), we won’t have a stable Windows again until 2007. Of course, this is entirely speculation at this point. I can’t find any hard information that the CLR will in fact be moved into kernel mode.
First, it has to be said that there are fast virtual machines, like OCaml or Scheme48. I don’t know about the Limbo language used in Inferno, though.
Second, any language which forces you into bad style to get speed SUCKS.
This shows four rather different looking toolbar and menu styles. They look even more different if I was to click on one of the pull-down menus.
http://www.sorn.net/misc/inconsistent.jpeg
For a company that publishes documents about user interface design standards, they’re not setting a very good example.
I put Acrobat there instead of Office because I don’t have Office installed, and Acrobat’s yet another example of a slightly different UI.
I could take a screenshot of the KDE desktop I’m using on Solaris at work, and all apps there would exhibit the same look and feel. That’s more like a standard. Even 3rd party KDE apps follow the same standard.
Note: I’m not saying KDE’s better or worse than Windows here – I’m purely comparing the look of standard UI components.
> Weeman was probably referring to the .NET CLR moving into the Windows kernel with its Longhorn version. MS converts slow subsystems into faster kernel modules all the time – eventually the instabilities are worked out.
Yes. MS would have to anyway, since Avalon is supposed to replace Win32 (though Win32 will stay in Windows in parallel to Avalon for backwards compatibility), and Avalon is supposed to be 90-95% managed code. So you need a kernel level CLR and GC, for speed and memory management issues.
I was unclear too. Moving the CLR into kernel space ‘guarantees’ one CLR and one GC for the whole system, not an instance for each application that runs managed code. And since it’s persistently available, applications start way faster too. I just hope MS is going to add some sort of cache for precompiled code, in the style of Temporary Internet Files.
> This shows four rather different looking toolbar and menu styles. They look even more different if I was to click on one of the pull-down menus.
> http://www.sorn.net/misc/inconsistent.jpeg
what are you using for a taskbar as shown in the image?
She’s using litestep as a replacement shell for explorer.exe.
http://www.litestep.net
I used to use it when I was working with windows. It’s really configurable. Looks like it’s gotten a lot better since then
Sch:
> I agree with your statements but I work in very low
> embedded (smart card chip) and JavaCard technology and
> STIP/JEFF platform.
AFAIK JavaCards use JavaChips, the no-longer-virtual machine in hardware. No interpretation there. Sounds interesting from your description, though.
Java chips are used in a variety of platforms, including at least one handheld computer and a GBA cartridge for J2ME games (any cell phones?).
> I have already had a look at the MSIL of .NET and I saw
> some serious flaws for “interpretive” mode. This article
> has almost convinced me that .NET is not embeddable at
> all, and that you need huge amounts of memory and CPU
> power to run it because it is only usable in JIT mode.
To me, one of the most interesting differences between JVM bytecodes and MS IL is just that. The JVM was originally just an emulator of the Java chips that Sun wanted to sell – this architecture makes it great for interpreting, poor for JIT compilation. The CLR was designed from the start to be compiled to native code, either JIT or ahead of time – this makes it comparatively awkward to interpret directly, but easier to optimize and verify.
Trade-offs, I suppose.
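The “great for interpreting” property of a stack design is easy to see in miniature. A toy sketch (the opcodes below are invented for illustration and far simpler than real JVM bytecode): a stack machine’s dispatch loop needs no register allocation at all, just pushes and pops.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy stack-machine interpreter: the dispatch loop is trivial because
// every opcode only pushes to or pops from the operand stack -- the
// property that makes JVM-style stack bytecode cheap to interpret.
public class ToyStackVM {
    static final int PUSH = 0, ADD = 1, MUL = 2, HALT = 3;

    static int run(int[] code) {
        Deque<Integer> stack = new ArrayDeque<>();
        int pc = 0;
        while (true) {
            switch (code[pc++]) {
                case PUSH -> stack.push(code[pc++]);  // operand follows opcode
                case ADD  -> stack.push(stack.pop() + stack.pop());
                case MUL  -> stack.push(stack.pop() * stack.pop());
                case HALT -> { return stack.pop(); }
            }
        }
    }

    public static void main(String[] args) {
        // (2 + 3) * 4, flattened into stack code:
        int[] program = { PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, HALT };
        System.out.println(run(program));  // prints 20
    }
}
```

Compiling that same stack code to good native code means reconstructing expressions and assigning registers, which is the extra work a JIT for stack bytecode has to do and which register-friendly IL was designed to sidestep.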
Rayiner Hashem:
> You only need to put things in the kernel if:
> a) Your subsystem has extensive contact with kernel data structures or hardware
> b) Your subsystem needs to access the address space of multiple processes
This sounds like an excellent reason to move the code management and GC facilities of .NET into the kernel. Native code can be managed and GCed as well (Managed C++) and if you are going to reinforce the management with TCPA hardware, you’ll probably need to put it in the kernel.
I don’t see any need to put the interpreter in the kernel, unless the transition from JIT data to executable code has some process security implications that need to be handled by kernel code. Even in that case, only that transition facility need be put there. Since most of the overhead of the CLR is the interpreter (or JIT), that would still be able to run as user code.
The “eventually” comment was from MS’s perspective. From me, that was sarcasm.
> I just hope MS is going to add some sort of cache for
> precompilations, style of Temporary Internet Files.
Done. It’s called the Global Assembly Cache, or GAC for short.