Windows testers will get a new beta version of Windows Vista, dubbed the December Community Technology Preview beta build, just before next week’s holidays, according to tester scuttlebutt. New to the December release, testers say, will be a number of features and user-interface tweaks: a new defrag module, tight integration of Windows Defender (formerly known as Windows AntiSpyware), and a functional parental-controls filter are all rumored to be in the December Vista build.
It must be difficult to juggle playing catch-up with Apple and patching all the various holes. I wish they had gone with the original (rumored) plan and done a complete re-write. Surely the developers must have learned from their mistakes enough to make a sound OS by now, right? Considering how much they’ve been imitating OS X since XP came out, they may as well base a new Windows on FreeBSD; I doubt people would really fault them for it. I’d be grateful; I would be willing to pay for a new Windows version if it was actually as good as or better than most of the stuff I can download freely.
It must be difficult to juggle playing catch-up with Apple and patching all the various holes. I wish they had gone with the original (rumored) plan and done a complete re-write.
Based on how much of the core has already been rewritten, I think they’re actually doing that to some degree.
I would be willing to pay for a new Windows version if it was actually as good as or better than most of the stuff I can download freely.
Yeah, perhaps you can go and download Gentoo Linux – I hear they’re finally getting an installer. Progress, my friend … progress
> I wish they had gone with the original (rumored) plan and done a complete re-write.
They were going to do it, but after a year or so it became apparent that their new kernel was an utterly unusable, buggy piece of crap, and Gates decided to drop it and revert back to Windows 2003’s codebase. That is most likely the most important decision Microsoft has ever made, and I sincerely believe that it was a good one. I can’t remember a precise link right now, but there are several articles lying around the web telling you about that.
Here’s the story. It’s interesting. Geeks won this time
http://www.site.uottawa.ca/~ssome/Cours/SEG3202/microsoftImportance…
“It’s not going to work,” Mr. Allchin says he told the Microsoft chairman. The new version, code-named Longhorn, was so complex its writers would never be able to make it run properly.
…
That’s not exactly right.
It was based on XP SP2 (not a complete rewrite); then they decided to scrap all the work they’d done and start fresh from the 2003 SP1 codebase.
Furthermore, a complete rewrite would be an incredibly DUMB thing to do. The Windows kernel is a very good kernel, and very well designed. There is no reason to scrap it. Not to mention you really have to have a VERY VERY VERY good reason to throw away millions of lines of code that have been maturing over the last 20 years. A rewrite just for the sake of a rewrite is not good.
Complete rewrites are almost NEVER a good option.
Everything you just wrote can be said with a single word: ‘Netscape’.
CPUGUY: you’re right about the re-write. As a programming teacher said, it may bring new features and optimisations, but it brings a whole new lot of bugs in the process – Windows is 20 years of new features and bug fixes, and a complete re-write would basically require yet another 20 years to get back up to the existing level of stability.
As for Windows: yes, you are correct, the kernel and the lower layers are actually well designed. Things became unstuck, however, during the heady late 1990s, when all programming rules were thrown out the window in favour of rapid feature adoption, with little thought as to the overall impact of those features on stability, security and reliability.
What I would like to see, however, is the eventual official killing off of old crap – win32 is a nice API IF the dead wood were finally killed off. Remove the old support: companies who have maintained their programme to a reasonable level should find that their product works with the new version; as for companies who have failed, Windows won’t suffer, those companies will – people will forever label them as the company that didn’t give a toss about their customers by failing to release an update.
I think the quote you’re talking about came from an interview with Jim Allchin.
This would be in regards to anti-spyware:
Make installing software require a 2nd password, or a 3rd password, or at least give a warning, when it has to change (or get access to) core system files or alter the startup environment. Perhaps give the user a 2nd thought (chance) before installing a rogue program.
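A minimal sketch of what such a gate could look like, purely hypothetical: the protected paths, the prompt and the password check are invented for illustration, and in practice the OS itself has to enforce this (the way UAC or sudo do), not the installer.

```python
# Toy illustration of the idea above: refuse to touch "core system"
# locations unless the user explicitly re-confirms.  Everything here is
# made up for the sketch -- the protected paths, the prompt, the payload.
import getpass
from pathlib import Path

PROTECTED = [Path("C:/Windows/System32"), Path("C:/Windows/Startup")]  # hypothetical

def needs_confirmation(target: Path) -> bool:
    """True if the target lives under one of the protected locations."""
    return any(p == target or p in target.parents for p in PROTECTED)

def install_file(target: Path, payload: bytes, admin_password: str) -> None:
    if needs_confirmation(target):
        print(f"WARNING: {target} is a core system location.")
        if getpass.getpass("Re-enter the admin password to continue: ") != admin_password:
            raise PermissionError("Installation of this component was refused.")
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_bytes(payload)
```

A rogue installer could of course skip such a check, which is why this kind of gate only works if the operating system enforces it rather than the installer.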
PS: I say break backwards compatibility.
You have to realize that Microsoft’s job is a hell of a lot harder than Apple’s. Apple has no problem breaking all manner of applications to implement new features. I’m fine with that, and it sure encourages rapid progress, but it pisses off regular consumers. If you’ve ever worked in tech support you’ll know what a nightmare each OS X upgrade is in terms of compatibility.
Microsoft has gone the compatibility route, going to great lengths to make sure every broken old application still works. I think it has gotten to a point where the overhead has severely hampered their ability to get anything done. Eventually you just have to drop compatibility for old technology to move forward. I think we’re seeing a start of that in Vista.
>Considering how much they’ve been imitating OSX since XP
>came out they may as well base a new windows on FreeBSD
Just crossed my mind – crazy thought, but maybe they are using it and we don’t even know!
The BSD license permits this, right?
The Windows [NT onward] kernel is fine for the most part. It’s the Windows userland that can suck bad sometimes. Explorer comes to mind.
Please everyone, buy Windows Vista on a new computer next fall okay? Please do it so we can stop Linux, OS X, Solaris and all the other evil bad operating systems.
We must love and cherish Windows, it is the standard, grandma loves Windows. Windows is better, please buy more Windows now! Install Windows every chance you get, pirated or legit, spread Windows. Please ensure that all 6 billion people on planet Earth use Windows, whether it’s 95 or Vista Beta 1 build 5112. Everybody must be able to use Windows, Server or Client.
Help me make this wish become a reality.
Please do this for me, let us stop that company that purchased 24 thousand Dell workstations, laptops and servers and is installing Linux on them, please help! Windows must rule forever until God comes.
Thank you
Even Apple didn’t do a complete rewrite. They just broke native compatibility with just about everything so it seemed like they did but it’s still Mach, BSD, and NeXT API’s.
True, but they did realise that there was no use flogging a dead horse, so they purchased something that worked, added some features, changed some things around and voilà, you have Mac OS X. True, it wasn’t a re-write, but it was a complete replacement for an operating system rather than a migration – it would be the equivalent of Microsoft having chosen, for example, to base Windows on a UNIX/POSIX API instead of putting win32 on top of the NT kernel, which the NT kernel is quite capable of doing.
OS News stop deleting my post!
OS News is not deleting your post. You are being modded down by your peers for disrupting the discussion.
I don’t think they were rewriting the kernel… It’d seem awfully odd to hire David Cutler 15-20 years ago to design you an OS and throw it all away and start anew without hiring another OS expert…
I thought what they threw away wasn’t anything as basic as the kernel. I really don’t see a reason to throw away the NT kernel; it’s the only part that really works right.
Fancy still needing to defrag in the twenty-first century! Is Windows the only OS with a file system that still requires this?!
All Linux filesystems also require defragmentation. It’s just that none of them has a defrag module.
Fancy still needing to defrag in the twenty-first century! Is Windows the only OS with a file system that still requires this?!
Yes, because no one else needs defragging. Well except
linux (http://www.oo-software.com/en/products/oodlinux/index.html)
and mac (http://www.speedtools.com/Defrag.shtml).
Fancy still not having a journalling file system in the 20th century. NTFS had one since 1992, Ext3 finally crossed that finish line in 2001… OMG T0T4LLy C0PY1NG MS!!!
😛
Linux was created in 1991, care to retract that statement?
Linux filesystems don’t get fragmented anything like FAT or NTFS, and since hard drives nowadays make up for it, you wouldn’t even notice.
Linux was created in 1991, care to retract that statement?
What’s to retract? Linux 1991 = 20th century. Ext3 2001 – 21st century. Perhaps you need to re-read.
Linux filesystems don’t get fragmented anything like FAT or NTFS, and since hard drives nowadays make up for it, you wouldn’t even notice.
I see. We went from ‘Linux doesn’t need defrag’ to ‘you don’t need it very much’.
PS: no one uses FAT16/32 anymore. You should also get rid of your Members Only jacket. 😛
Yes, they DON’T need to be defragmented; that’s different from them getting fragmented to a point where they DO need to be defragmented.
To clarify:
Linux filesystems do get fragmented but don’t need to be defragmented the way NTFS does. Get back into the real world: a lot of people still use Win98/FAT32, and not all the world’s Windows users are on XP.
I’m confused then. According to this article, they are constantly fragmented, and get defragmented as part of the file system. Exactly like NTFS does. Ignoring the point on journaling file systems was a nice touch though. 🙂
You haven’t been able to buy Win98 for 6 years. Most of the world has moved on, since you haven’t been able to buy new software for it in ages. Finding precise sales numbers has been a beyotch, but this article and quote speak to how rapidly Win9X was outpaced, starting years ago:
http://www.microsoft.com/presspass/press/2001/nov01/11-11xpcomdexpr…
“Sales of Windows XP by computer manufacturers are over 200 percent higher than sales of Windows 98 in the first month of its availability. Also, Windows XP has been adopted by computer manufacturers faster than any previous operating system. Retail sales of Windows XP have surpassed those of Windows 95 and Windows Millennium Edition (Windows Me) by a significant margin.” XP has been all you can buy for 5 years, so of course it’s outpaced Win98 heavily. Define ‘a lot’ in terms of per capita.
This was five years ago, btw.
Not everyone buys their computer from a large retailer. I used to work at a computer shop that installed Win98 as the default and still does – that’s 50 computers a week, maybe 70 – but XP was an option.
Not everyone buys their computer from a large retailer. I used to work at a computer shop that installed Win98 as the default and still does – that’s 50 computers a week, maybe 70 – but XP was an option.
Those customers are getting ripped off with pirated software. It is no longer possible to buy Win98 through OEM, customer volume licensed, or high volume retail channels. Even if it was, Win98 hits end of life support June 30, 2006, so they are double screwed on patches, updates, or assistance. Unless your shop bought a few thousand copies of Win98 years ago and has been doling them out at 50 a week, I’d say call http://spa.org and get your reward.
Care to enlighten us on what makes Linux filesystems so smart that they don’t get fragmented? In most cases it is the usage pattern that causes fragmentation. There is no voodoo that Linux has and others don’t, because if there were, everyone else would have copied it.
MS is not stupid; it has a lot of brilliant engineers, and I don’t think it would be too hard for them to implement such voodoo magic if it existed. So please, the next time you make such an authoritative statement, bring on some facts as well…
The issue isn’t whether the file system requires defragmenting, but how well the operating system handles fragmentation – HFS+ and Linux file systems ALL fragment, and in general all file systems fragment; the issue is how well these file systems avoid it using various techniques.
For me, I don’t care either way whether or not a file system requires defragmentation, as it can be done within a few minutes as long as one does it regularly (once a week) – hardly something I would call a waste of time when compared to the myriad of other issues that plague operating systems – spyware, viruses, half-assed APIs loaded with bugs, etc.
Apple introduced on-the-fly optimizations for the HFS+ filesystem in 10.3. At the end of this article http://www.kernelthread.com/mac/apme/fragmentation/ , the conclusion speaks for itself:
“Defragmentation on HFS+ volumes should not be necessary at all, or worthwhile, in most cases, because the system seems to do a very good job of avoiding/countering fragmentation.”
NTFS exists in five versions, and none of them is really a journaled file system: they can write changes to the journal but cannot revert them in case of failure. Because of that, you still have to sit in front of the PC waiting for ScanDisk to check the entire disk after a power outage, for example.
http://www.backupbook.com/03Freezes_and_Crashes/02Journaling.html
On Linux, you only sit in front of fsck for maintenance reasons. If the system was not shut down properly, there is no need for fsck at all; the system can boot at full speed, reverting the changes in the journal automagically.
And no modern Linux file system needs to be defragmented. Since ext2, data has been laid out on disk in an ordered fashion, so fragmentation rates are always very small – they rarely go above 6%. Just run fsck on a Linux file system; it reports the level of fragmentation. You will notice it sometimes sits at 6% and then, after some file operations during normal use of the PC, drops below 3% for no apparent reason.
And just to remind you: ext2/ext3 are just two of the file systems Linux can use by default; you also have ReiserFS, JFS and XFS.
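To make the journal-replay point concrete, here is a toy write-ahead journal for a tiny key/value data file – a sketch of the general idea only, not of how ext3 or NTFS actually implement journaling (the file names and record format are invented for the example):

```python
# Toy write-ahead "journal": every update is forced to the journal file
# before the data file is touched, so crash recovery only has to replay
# (or discard) the journal instead of scanning the whole data file.
import json
import os

JOURNAL = "demo.journal"   # hypothetical file names for this sketch
DATA = "demo.data"

def load_data():
    if os.path.exists(DATA):
        with open(DATA) as f:
            return json.load(f)
    return {}

def write_data(data):
    with open(DATA, "w") as f:
        json.dump(data, f)
        f.flush()
        os.fsync(f.fileno())

def put(key, value):
    # 1. journal the intended change and force it to disk
    with open(JOURNAL, "w") as j:
        json.dump({"key": key, "value": value}, j)
        j.flush()
        os.fsync(j.fileno())
    # 2. apply the change to the data file
    data = load_data()
    data[key] = value
    write_data(data)
    # 3. the change is durable, so the journal record can go
    os.remove(JOURNAL)

def recover():
    # Crashed between step 1 and step 3?  Replay the pending record;
    # there is no need to check every record in the data file.
    if os.path.exists(JOURNAL):
        try:
            with open(JOURNAL) as j:
                entry = json.load(j)
        except ValueError:
            entry = None   # torn journal record: the update simply never happened
        if entry is not None:
            data = load_data()
            data[entry["key"]] = entry["value"]
            write_data(data)
        os.remove(JOURNAL)

if __name__ == "__main__":
    recover()              # the fast, bounded "fsck" replacement
    put("answer", 42)
    print(load_data())
```

The same shape scales up: ext3 journals filesystem metadata (and optionally data) in this fashion, which is why an unclean shutdown costs a quick replay on mount rather than a full fsck.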
NTFS exists in five versions, and none of them is really a journaled file system: they can write changes to the journal but cannot revert them in case of failure
Patently false. All of them are journalled, and of course can revert in case of failure. Your article’s writer is misinformed. Try using more than just your first hit from Google.
http://en.wikipedia.org/wiki/NTFS
http://en.wikipedia.org/wiki/Journaling_file_system
http://www.microsoft.com/technet/prodtechnol/windowsserver2003/libr…
(Read the sections on NTFS File System Recoverability and Recovering NTFS File Structures if you want to learn how this actually works. The 2003 Technical Reference should always be your first stop for understanding MS technologies, not backupbook.com, whatever that is). My original fact remains – it was 10 years after NTFS that Ext3 came around and Linux finally had journalling.
Because of that, you still have to sit in front of the PC waiting for ScanDisk to check the entire disk after a power outage, for example
I have no idea what this means. There is no such thing as ScanDisk in NT-Vista or in NTFS. You’re thinking of Win95/98.
I still don’t understand why I find these links for defragging Ext2, with useful utilities and methods to do so. I get it that ExtX has built in real-time defragmentation – so does NTFS. It just sounds like ExtX is better – but not enough that people aren’t writing apps to do it anyways.
I have no idea what this means. There is no such thing as ScanDisk in NT-Vista or in NTFS. You’re thinking of Win95/98.
It’s quite amusing to see Linux fanboys shooting themselves in the foot with proof that they don’t use any NT-kernel-based Windows such as XP/W2K.
Speaking of filesystems, how many Linux filesystems have support for transactions at the API level (i.e. complete ACID semantics), like NTFS 6 on Vista has?
Errr..none.
How many of them have self-healing capabilities like NTFS 6 on Vista has (no more fsck, no more autochk.exe/chkdsk.exe)?
Errr…none.
How many linux filesystems have object-relational mappings such as WinFS, which is currently in beta 1 and will be available as redistributable shortly after Vista RTMs?
Errr…none
It took ext* 10 years to get something as “complex” as journalling; it will probably fully support ACID and self-healing somewhere around 2020.
(Read the sections on NTFS File System Recoverability and Recovering NTFS File Structures if you want to learn how this actually works. The 2003 Technical Reference should always be your first stop for understanding MS technologies, not backupbook.com, whatever that is). My original fact remains – it was 10 years after NTFS that Ext3 came around and Linux finally had journalling.
No, the technical article posted is right; some people do not consider NTFS a pure journaling file system because it does not support block journaling, only metadata journaling, for performance reasons.
I still don’t understand why I find these links for defragging Ext2
You are confusing fragmentation resistance with on-the-fly defrag. ExtX does not run a defrag process in the background using idle CPU time; it in fact writes the data to disk in an organized manner in the first place. Now, which method is better – seriously, I don’t know.
No, the article is opinion. Saying ‘pure’ journaling is completely subjective – if I journal files and recover them when I lose power, I am running a ‘pure’ journaling file system. Ext3 does metadata-based journaling as well (and NTFS for Vista/LH (transactional NTFS) does metadata and block journaling as well). Where do we draw the line?
I’m not speaking to ExtX doing on the fly defragging, the other fella was. My expertise is in NTFS, not Ext. 🙂
My personal belief is – there is no file system on earth that does not need a defrag module. Programmers seem to agree with me, as you can get versions of defrag for every FS ever made. I’m honestly quite surprised that the Linux guys here were so resistant to the argument – I thought one of the points to running a tuner OS was to tune tune tune for better performance.
Now yes! I agree with you =) Metadata journaling and block journaling are both very good at their jobs, even when not combined.
NTFS is a good file system, as are ext3/JFS/ReiserFS.
The fragmentation-resistance algorithm that ext3 uses is based on the assumption that you always have a good percentage of free space, so it can write the data in an order that doesn’t require fragmenting files between already-used blocks. As you would expect, the performance of that algorithm decreases when the disk starts to run out of space. It still works fairly well with fill rates of 80%–90%, or with large numbers of small-file operations (in other words: most of the time).
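A toy simulation of that trade-off may help. This is an invented, much-simplified allocator, not ext3’s real one: it prefers a single free run big enough for the whole file and only splits the file when no such run exists, so files stay contiguous while free space is plentiful and start fragmenting as the volume fills up.

```python
# Toy block allocator with a crude fragmentation-resistance rule: put the
# whole file in one contiguous free run whenever such a run exists, and
# only split it across runs when nothing big enough is left.
import bisect
import random

BLOCKS = 20_000                      # hypothetical volume size, in blocks

def free_runs(free):
    """Group the sorted free-block list into contiguous runs."""
    runs = []
    for b in free:
        if runs and b == runs[-1][-1] + 1:
            runs[-1].append(b)
        else:
            runs.append([b])
    return runs

def allocate(free, size):
    """Return the list of extents (contiguous runs) the new file occupies."""
    extents = []
    while size > 0 and free:
        runs = free_runs(free)
        fitting = [r for r in runs if len(r) >= size]
        run = (min(fitting, key=len) if fitting else max(runs, key=len))[:size]
        i = bisect.bisect_left(free, run[0])
        del free[i:i + len(run)]        # those blocks are contiguous in `free`
        extents.append(run)
        size -= len(run)
    return extents

def simulate(target_fill, operations=2000):
    random.seed(1)
    free = list(range(BLOCKS))
    files = []                          # each file is a list of extents
    for _ in range(operations):
        fill = 1 - len(free) / BLOCKS
        if fill < target_fill or not files:
            files.append(allocate(free, random.randint(1, 64)))
        else:
            # delete a random file, punching holes back into the free space
            for extent in files.pop(random.randrange(len(files))):
                free.extend(extent)
            free.sort()
    return sum(len(f) for f in files) / len(files)   # average extents per file

if __name__ == "__main__":
    for fill in (0.50, 0.90, 0.99):
        print(f"target fill {fill:.0%}: {simulate(fill):.2f} extents per file on average")
```

The exact numbers don’t matter; the point is that the same policy that keeps files in one extent while there is headroom has nothing left to work with once the free runs shrink, which is the degradation described above.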
Now yes! I agree with you =) Metadata journaling and block journaling are both very good at their jobs, even when not combined.
I’ll drink to that! 🙂
The fragmentation-resistance algorithm that ext3 uses is based on the assumption that you always have a good percentage of free space, so it can write the data in an order that doesn’t require fragmenting files between already-used blocks. As you would expect, the performance of that algorithm decreases when the disk starts to run out of space. It still works fairly well with fill rates of 80%–90%, or with large numbers of small-file operations (in other words: most of the time).
NTFS definitely has the same problem. I’d guess that all FS’s do, unless they can guarantee files never getting fragmented in the first place (through magical variable sized clusters 🙂 ). Fragmentation fighting is pretty much a holding action.
I wish the 5270 TAP build had more info posted about it publicly, so we could talk about more than the previous 15 years of file systems…
Sorry, wrong name – it’s now called chkdsk on XP and 2003. I still call it ScanDisk out of habit. =)
For the de facto standard file system, ext3, there is e2defrag.
JFS provides a defrag utility.
XFS provides one called xfs_fsr
I don’t think there’s a defrag for reiser.
However, the filesystem methods for most fs’s, probably including ntfs, mean that fragmentation’s performance penalty is small: Disk cost only, no algorithmic cost.
No filesystem, except maybe FAT based, *requires* defragmentation.
Also, I don’t believe I’ve ever seen an ext3 filesystem with higher than 3% fragmentation; and I’ve seen some big, old, and abused ext3 partitions.
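If anyone wants to check a claim like that on their own machine, here is a rough wrapper around filefrag(8) from e2fsprogs. It assumes filefrag is installed and that its summary line ends in “N extents found” (older releases word it slightly differently), so treat it as a sketch:

```python
# Rough per-file fragmentation report built on filefrag(8).
import re
import subprocess
import sys
from pathlib import Path

def extent_count(path):
    """Parse 'N extent(s) found' out of filefrag's output, or None on failure."""
    out = subprocess.run(["filefrag", str(path)],
                         capture_output=True, text=True).stdout
    m = re.search(r"(\d+) extents? found", out)
    return int(m.group(1)) if m else None

def report(root):
    counts = [(extent_count(p), p) for p in Path(root).rglob("*") if p.is_file()]
    counts = [(c, p) for c, p in counts if c]
    fragmented = [(c, p) for c, p in counts if c > 1]
    print(f"{len(fragmented)} of {len(counts)} files are in more than one extent")
    for c, p in sorted(fragmented, reverse=True)[:10]:
        print(f"{c:5d} extents  {p}")

if __name__ == "__main__":
    report(sys.argv[1] if len(sys.argv) > 1 else ".")
```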
Lies. I have seen Linux ext2/3 get corrupted on a power outage, and it forced me to run fsck to get it into a usable state. Linux also needs a clean unmount to properly stuff data back into its FS.
HFS does not suffer from major fragmentation issues.
According to this site, the non-contiguity is under a percent:
http://www.kernelthread.com/mac/apme/fragmentation/
Says a guy who’s written a program to defrag HFS.
Defrag utilities are being put into Windows Vista for one of two reasons:
1.) NTFS doesn’t deal with non-contiguity as it allocates files.
2.) NTFS doesn’t need it, but Microsoft doesn’t want to get this call 50,000,000 times: “Where’s the defrag utility?! My computer is slow!”
How many times have you lost a fs due to a fs issue?
How many times have you lost a fs due to a failing disk?
I think most people are more concerned about fs features and fs performance than data security. There’s sort of a minimal level of data integrity which most modern filesystems provide quite well. Beyond that I think most people would rather see performance tuning and features like file rollbacks, and software partitions, etc.
And by people I really mean mostly administrators and also, to a much smaller extent, developers and some power users.
The average user doesn’t care as long as it doesn’t die on him.
There are so many performance tuning decisions to make while deciding how to build your fs. It’s really quite confusing, and you can’t please everyone.
It’s likely that Microsoft has chosen to please something in the middle ground with a slight tendency to please database users.
I don’t think NTFS actually requires defragmenting. I’ve seen people do it, and it doesn’t take very long because it’s not doing much…
Maybe there’s an actual expert on NTFS here who can clear this up and point out if or if not NTFS really needs defragmenting: Hint: If it has effective algorithms for fragmentation resistance it does not need defragmenting.
The discussion is not that Linux filesystems don’t get fragmented; it’s that they don’t need to be defragmented. I’ve been using Slackware for a year and a half with no degraded performance – boot times are always the same, unlike Windows XP, where the boot time just degrades over time. I bet if I used Windows like I use Slackware it would get heavily fragmented, since every time you install something it’s fragmented, according to defrag.
I don’t know why, but build 5259 was the slowest, most horrible version of all the betas and alphas I ever tested; maybe we need a new build to bring the quality back.
Does anyone know why that build was so bad?
This post got lost during defragmentation ^_^
Why all this moaning and groaning over defrag? Is this the ONLY feature in the new build? C’mon people, let’s discuss all the features in the new build. Gee, what a waste of good oxygen.
” issues that plague operating systems – spyware, viruses, half-assed API’s loaded with bugs etc.”
Most non-Windows users like me are not plagued by these issues at all; it is funny to see Windows users expect that from their working environment. There are defrag utilities, but no one uses them in the Unix world because they are not needed in 99% of cases. This topic won’t interest anyone but Windows users, who expect it to be an issue.
I also think a complete rewrite sometimes makes sense because, with time, you gain new experience writing (for example) operating systems; you can apply it and avoid a lot of design problems in the future. Giants like Microsoft especially can afford to do it – they don’t have to release it into production, but can draw on all their huge experience and benefit from it after a few years.
try using the drivers on the 5219 DVD by clicking on ‘install supplemental….’
once done, you should find 5259 performing very nicely,
IMHO 5259 was the best release so far, and certainly the nicest to look at.
cheers
anyweb
http://anyweb.kicks-ass.net/computers/os/windows/longhorn/lh5259/po…
I thought you no longer need defragmentation?
ext2 always had about 5% fragmentation, and that level did not rise over time – a number I can live with. Reiser3 has no defragging utility, so I assume its fragmentation level is also rather low. All the others are also low on fragmentation.
Why can NTFS become so fragmented that it needs a defragger?
I mean this seriously! Any answers?