Underscoring Novell’s commitment to the Linux operating system, the company’s chief financial officer on Friday hinted at further acquisitions. Joseph Tibbetts, pointing to Novell’s purchase late last year of two leading Linux companies — Boston’s Ximian and Germany’s SuSE — said the networking giant is on its way to becoming the world’s leading Linux solutions provider. Elsewhere: an alternative to Windows should play to its own strengths, says a Novell Linux guru.
But what about Sun then? They are doing the same thing…
Will Novell go after Windows users?
And will SuSE have ISOs for downloading, like everyone else?
And what will the DE be: GNOME, KDE, or something else?
Isn’t that the typical way to compete nowadays… if you can’t do something yourself, just buy up all the companies who CAN and then claim how great you are afterwards.
I am keeping an eye on the local Staples store for a copy of SuSE 9.1 for purchase.
Sun is creating their own Linux distro, as opposed to Novell, who bought SuSE. As far as I recall SuSE has never offered ISOs; instead you can install via FTP and such.
And SuSE’s new DE will most likely be GNOME, since Ximian is GNOME-based. Or perhaps a GNOME/KDE hybrid, who knows.
Oh yeah, I think Novell should not shoot for end users just yet. Instead they should shoot for educational establishments and businesses.
You gotta start by educating people at work/school; then they will want to install your operating system on their PCs.
Though the article isn’t so good, the statement is important. Innovate rather than clone.
In a lot of senses, I think Linux should look at Mac OS X not for things to copy, but from a general methodology/way of thinking. Mac systems share a lot in common with Windows systems, but there are a lot of things that are simply “done differently,” which you can call “innovation” if you like.
It’s important that a desktop not be completely foreign to a user. The basic metaphors [of a windowing environment] should be there. But let’s get away from everything beyond that and rethink things from the ground up. Linux has already done this in some projects. For example, package installation in Debian and Gentoo (and things like zeroinstall) are much different than they are in the windows world.
Mac OS has a lot of changes like this. Package installation doesn’t exist–just drag-and-drop an application to your Applications folder. Expose is another one. And MacOS made some modifications to the generic windowing system by adding hardware effects like transparency and shadows, sheets instead of dialogs, etc.
I think the mainstream projects need to look at some of the innovation being done by ROX-Filer as well… I haven’t installed it, but it looks interesting (and new).
> and will SuSE have ISOs for downloading, like everyone else?
They currently don’t, and I hope they never do. They abide by the GPL. All the sources are available. However, if you want binaries, you either need to buy the distro (preferred), or they let you install it by FTP (which isn’t hard; you just need a little more working info). If you just want to ‘try out’ SuSE, they have live CDs available.
This is a wonderful example of how a company can and should make money with the GPL/LGPL. Those licenses don’t require you to give away binaries, only the source. Let SuSE keep their ISOs. Maybe people will, gasp, buy the distro!
Staples sells linux distros?
I thought Ximian was based in Mexico City?
I think it’s Ximian because of the ape logo: SIMIO = ape in Spanish.
Just a theory.
The last thing Linux needs is yet another company rebranding RH.
In a recent interview Nat Friedman basically came out and said that the desktop isn’t that important right now. So it looks like it’ll be the status quo for SuSE, which means KDE and probably Evolution.
Ximian is based in Boston; you might be referring to some of its employees, who are Mexican.
I believe Miguel attended UNAM or something.
Hooray for Novell! Finally someone who gets it. I’ve always thought that Linux should just do its own thing. Copying other OSes does _not_ make it an interesting OS.
This is most clear with mono. All they do is copy. Why the heck do they not try to create a better .NET with their own ideas, and make that compatible with Windows? That way they would be in control, not someone else. And if it’s good enough, people would use mono, instead of MS .NET.
Damn I’m happy Novell is with us
Sun is creating their own Linux distro, as opposed to Novell, who bought SuSE
Ummmm… you do realize that Sun’s distribution is really SuSE underneath, don’t you?
The great thing about Linux is that nearly every dollar of investment that goes into Linux will show up in the GPLed code. While it is a Microsoft tactic, Novell’s dollars will be going to Linux and not to development of a crappy version of NetWare.
Good news.
Ximian is definitely based in Boston.
The generally accepted theory for the name is that Ximian is much like “simian” which is plain old English for “like a monkey.” The X is pretty typical for Unix/Linux stuff, so “Ximian.”
The monkey, which is their logo, is named Rupert, probably named so for the doll Stewie carries around in the show “The Family Guy,” which is the source of the evil monkey cartoon you’ll often see used by the Ximian guys.
As for Mexicans, the only Mexican employee I know of is Miguel de Icaza, the founder of Ximian and creator of GNOME, who was raised in Mexico. A quick glance down the Ximian employees list (http://primates.ximian.com/) does show some other Spanish names, but I don’t know their origins.
> Mac OS has a lot of changes like this.
> Package installation doesn’t exist–just drag-and-drop
> an application to your Applications folder.
Ever used a Mac? There are a lot of packages. But they are similar to Windows’ MSI packages and not like RPMs, where the files are spread all over the system during installation.
Are MSI packages one program wrapped into a single folder, with all required support packages in the folder?
“This is most clear with Mono. All they do is copy. Why the heck do they not try to create a better .NET with their own ideas, and make that compatible with Windows? That way they would be in control, not someone else. And if it’s good enough, people would use Mono, instead of MS .NET.”
I don’t remember where (maybe part of the Novell BrainShare videos?), but I saw a picture about Mono that showed its goals. On the bottom you have the .NET base; on top of that there were two main parts. One part was being 100% compatible with Microsoft .NET and Windows.Forms, so that it is easy for Windows developers to port their applications to Linux. The other part was Mono’s own feature set that would enhance .NET development for Linux. So they are not just copying Microsoft; they are using Microsoft’s own tactic of “embrace and extend” against them.
Right now they are focusing on the important parts: getting the basic .NET foundation and getting Windows.Forms working. Remember, this isn’t even a 1.0 release yet. Give it some time before you judge it.
Personally, I prefer Qt, but if it helps Linux gain support, I am all for it.
Why the heck do they not try to create a better .NET with their own ideas, and make that compatible with Windows?
Sure…why not. I mean, a couple junior-highers can set up a sourceforge project and work on it after band practice. I’m sure in 70-80 years it’ll be better than .NET 1.0
-> Jack
> Sun is creating their own Linux distro, as opposed to Novell,
> who bought SuSE. As far as I recall SuSE has never offered
> ISOs; instead you can install via FTP and stuff.
Funny enough, Sun’s JDS is based on SuSE Linux Desktop, and SuSE is based on Slackware (like Mandrake is based on Red Hat, and Xandros (former Corel Linux) is Debian-based).
Regarding the “not offering free ISOs” issue: well, actually, IMO there is no issue. I like SuSE. I have supported them in the past, and unless they do something really stupid (like abandoning KDE) I will continue to do so in the future. A SuSE Linux Pro license is still cheaper than an OEM Microsoft Windows XP Home license.
Getting nervous, are we?
Novell is doing a lot of smart things. For years they have depended on Microsoft playing nice with their products, and it hasn’t been good to them. Now they have the freedom of being completely independent of MS in every way.
I started talking to Novell early last year over GroupWise technology and have continued to this day (last Friday was my last communication with them). It’s really surprising how much the company has changed within this period. There is an excitement that hasn’t existed in such a long time, I’m being told.
Pretty interesting.
Reading this quote:
“We look at what we have to have — whether we need to license that from someone, build it on our own or need to acquire the technology,” Tibbetts said.
screamed one thing to me: Buy Trolltech!!!
This would be a phenomenal move, IMHO.
Eron
What’s in it for them if they buy Trolltech? I don’t see the advantage of Novell owning a toolkit unless they want to go after the development market too. That might be spreading things a little thin, since right now they’re trying to target the server and the desktop.
I wouldn’t cry if they did, though. It’d reaffirm their commitment to KDE.
Well of course they’re eyeing the development market–they have a keen interest in things like C#, for instance.
Eron
> Ever used a Mac? There are a lot of packages. But they are similar to
> Windows’ MSI packages and not like RPMs, where the files are spread all
> over the system during installation.
I do use a Mac, thanks. And the packages to which you refer are only used for the installation of large numbers of programs. For example, Apple’s Development Tools package, or even the Mac OS Panther installer, or Fink, etc.
If you are an individual developer making a single program, then you can wrap that program into a single file which can be d-n-d’ed into the Applications folder. This is indeed how many developers distribute their software (a DMG file which is automounted after download; the program sits in the mounted image and is d-n-d’ed onto the Applications folder, and voila, install complete).
So to be thorough: there are package files, but they are not the preferred form of distribution for individual apps.
> Ever used a Mac? There are a lot of packages. But they are similar to Windows’ MSI packages and not like RPMs, where the files are spread all over the system during installation.
The end user should not have to keep track of where files go when installed. The system should do that. If the files are spread all over the disk or not is irrelevant.
In Linux, RPMs could easily be installed by a double-click if there are no dependency problems. It would be quite simple to set up any Linux to collect the needed files with yum or apt-get. So why is this not done by most, if any, Linux distros?
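That “collect the needed files” step is just dependency resolution. Here is a toy Python sketch of what an apt or yum front end does behind such a double-click; the package names and the dependency table are invented purely for illustration:

```python
# Invented dependency table: package -> packages it needs first.
DEPENDS = {
    "myapp": ["libgui", "libnet"],
    "libgui": ["libc"],
    "libnet": ["libc"],
    "libc": [],
}

def install_order(pkg, resolved=None):
    """Return packages in an order where every dependency precedes its user."""
    if resolved is None:
        resolved = []
    for dep in DEPENDS[pkg]:
        if dep not in resolved:
            install_order(dep, resolved)
    if pkg not in resolved:
        resolved.append(pkg)
    return resolved

print(install_order("myapp"))  # libc comes first, myapp last
```

Real resolvers also handle versions and conflicts, but the core idea is the same: walk the dependency graph, then fetch and install in order.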
The number of apps that use an installer is going down very fast, and they are relegated to things like developer tools. 90% of all apps are packaged to be drag and drop. Then to uninstall, drag and drop to the trash. There isn’t an easier way to manage apps.
Uno Engborg,
> The end user should not have to keep track of where files
> go when installed. The system should do that. If the files
> are spread all over the disk or not is irrelevant.
But the beauty of Mac OS X is that the user *doesn’t* have to keep track, because the application files are always in only one place: the Applications folder.
It’s worth adding that this single-file application installation was also a big benefit of BeOS, which to my admittedly limited knowledge was the first instance of this approach in the modern GUI-driven OS era (of course, in DOS, you didn’t really install anything).
Also, as I understand it, it’s more resource-efficient to take this approach precisely because the system *doesn’t* have to keep track of where installed applications’ files are strewn about the OS’s file structure.
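The “application as one folder” model described above can be sketched in a few lines: install is a recursive copy, uninstall is a recursive delete. This is only a toy Python model; the directory names are illustrative, not real macOS paths:

```python
import os
import shutil
import tempfile

root = tempfile.mkdtemp()

# A "bundle": the entire application lives inside one directory.
bundle = os.path.join(root, "MyApp.app")
os.makedirs(os.path.join(bundle, "Contents", "Resources"))
open(os.path.join(bundle, "Contents", "MyApp"), "w").close()

apps = os.path.join(root, "Applications")
os.makedirs(apps)

# "Install": drag-and-drop is just a copy of the whole folder.
shutil.copytree(bundle, os.path.join(apps, "MyApp.app"))
installed = sorted(os.listdir(apps))

# "Uninstall": drag to trash is just deleting that one folder.
shutil.rmtree(os.path.join(apps, "MyApp.app"))
print(installed, os.listdir(apps))
```

Because everything the app needs lives under one directory, there is no system-wide database of installed files to keep in sync.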
Not true.
Mono 1.0, to be released in June, implements all of .NET 1.1 (the last version released by Microsoft) except for SWF (December 2004). Mono 1.0 establishes parity with Microsoft. On top of that, Mono 1.0 innovates and goes beyond .NET 1.1, delivering many new features found in the ECMA spec and not yet released by Microsoft, in addition to many bindings for GNOME desktop APIs (including GTK#, which people love) and other open source projects. Also, as far as I know, Microsoft .NET doesn’t support Mac OS X, Linux PPC, Solaris, or S390. Mono does.
Hardly a 2-person SourceForge project, IMO. Why don’t you try it before you discard it? Do you people ever do that anymore?
I think you misunderstood my extreme sarcasm. I am a big fan of Mono. See, every once in a while someone comes around and says something like “why doesn’t the ‘community’ do something better than Java or .NET”, totally clueless about the scope of such a project. The reason Mono is going to go 1.0 in June while something like Parrot is years (if ever) away from anything production-worthy is that when you get the ECMA and API specs, half, if not more, of the battle is already over. Just think if Mono had to design a runtime, languages, an IL, the CLS, the CTS, and all of the APIs. They would be nowhere near a 1.0 release.
“…also a big benefit of BeOS, which to my admittedly limited knowledge was the first instance of this approach in the modern GUI-driven OS era…”
Nope, earlier versions of Mac OS worked the same way. Many apps were just drag/drop. For example, IE, GraphicConverter, etc. could all be installed this way as well. Granted, I think BeOS pre-dated Mac versions of IE, but GraphicConverter was around before BeOS (if my memory serves correctly, anyway…). I’m sure there are other examples anyway.
Ahh, the beauty of statically linking binaries… if only everyone did it that way, installing SW would be a snap! Let the developers worry about the dependencies, and the users about USING their computers.
Actually, Sun’s Java Desktop is based off of SuSE 8.x.
@Pakdawg:
> Also, as I understand it, it’s more resource-efficient to take this approach precisely because the system *doesn’t* have to keep track of where installed applications’ files are strewn about the OS’s file structure.
No, it needs significantly more resources. What about shared libraries? Linux puts them in one place and all the applications can use them (simply by having a dependency on them). If the application can’t have dependencies, then it must bring its own libraries, so there are multiple copies of some popular libraries in the application directories: every instance of a program loads its own copy of the same library into memory. (Note that I don’t know MacOS X first hand; I am only interpreting the comments here.)
@Andrew Ego:
> Ahh, the beauty of statically linking binaries… if only everyone did it that way, installing SW would be a snap! Let the developers worry about the dependencies, and the users about USING their computers.
That was the way we did it in the 80’s and early 90’s. Linking statically has some grave disadvantages. The main ones are inefficiency (multiple copies on disk and in memory, leading to longer loading times, more swapping and more cache thrashing) and inflexibility (if a security bug in e.g. libc or zlib is found, you will have to recompile 95% of your system’s binaries; using shared libraries you only replace the libraries).
I hope that this isn’t really the way MacOS X works…
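The duplication argument above is easy to put in rough numbers. The sizes here are made up purely to illustrate the trade-off:

```python
# Made-up sizes: one library shared by many applications.
lib_size_mb = 2.0   # one copy of the library
n_apps = 50         # applications that use it

# Shared linking: one copy on disk, one mapping in memory,
# and a security fix means replacing just that one copy.
shared_cost = lib_size_mb

# Bundled/static linking: every application carries its own copy,
# and a security fix means updating every application separately.
bundled_cost = n_apps * lib_size_mb

print(f"shared: {shared_cost} MB, bundled: {bundled_cost} MB")
```

The exact numbers do not matter; the point is that the bundled cost scales with the number of applications while the shared cost does not.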
But linking dynamically without version checking leads to the so-called “dll hell”. (Instability.)
Good point. This is an effect of the way of thinking of GNU/FS/OSS vs MS/CSS.
Both ways tout ‘just works’. The difference is between working now, and working forever.
There are a lot of small/medium companies out there that are still using OLDER versions of Novell as their network OS. Particularly, for some reason, in my part of the country it seems like 30% of businesses still have Novell & AS/400! They’ve been wise enough to resist the MS “creep”, but it’s getting harder and harder. Many of these are fairly shrewd business people, and MS Licensing 6 really pissed them off… But with all the web-based stuff becoming reality, you can’t ignore MS with their Exchange servers and all the available tools from third parties to backup, share, etc. Novell has a shot at grabbing a lot of their loyal customers and converting them to the “Linux Way” if they can get the right mix of solutions ready to go. One of my previous employers switched over to OO.org because they simply weren’t paying for another round of MS “licenses”. I can bet that when Novell proves its wares they’ll be signing up in a hot minute!
We’ve just about reached the point where you can find SOMEBODY to tackle your integration projects on Linux… it may not be the best solution, but if you look hard enough you’ll find it. Give it another 1-2 years and there will be very few MS-only consultant shops… they will learn Linux or die! There’s a lot of money to be made setting up Linux for people… but the projects and companies are small, and there’s lots of software infrastructure still to build.
While I completely agree that creating something like .NET cannot be done overnight, it’s also a fact that Microsoft has been taking a lot of time to build it.
For example, the same goes for designing a kernel. That too is something you don’t do overnight, yet it didn’t stop Linus and the team. And for the whole GNU/Linux thing!
Who says one team should develop all the parts of something like .NET? Do you think only one team designed all the parts of .NET?
The OpenSource community is big, but it’s difficult to coordinate. That’s probably why it’s impossible to come up with something better than .NET (while not copying).
There’s also a huge difference between open source and others. Many people will probably see me as a troll right now, but very little innovation comes from the open source community. If people in the open source community had started developing something like .NET around the same time Microsoft did, they’d probably have a pretty functional product by now. But Mono only started development when Microsoft already had a first version out. It was Microsoft once again who came up with something new. Not the open source community.
It hurts, I know. I feel that I am part of this community too, having done some small projects and helped out others. But I am not blinding myself to the truth because of it.
But I do hope that one day the tables will be turned.
One more thing: I do like Mono and I most likely will use it in the future. Although my examples keep referring to Mono and .NET, this does not apply only to them.
It was Microsoft once again who came up with something new. Not the open source community.
What exactly is new here? Every single non-trivial technology in .NET has been done before, often in open source form, and better. And at least a decade ago! Once again, Microsoft is not being innovative; they are copying. Not to mention the fact that Microsoft didn’t invent .NET. The CLR was originally written and designed by a company called Colusa Software. The current .NET framework is a complete (or mostly complete) rewrite of that technology, but it’s nothing new.
But I am not blinding myself to the truth because of it.
If you’re not blind to the truth, you are certainly ignorant of it.
You know what a common problem is with people who claim some group doesn’t innovate? They don’t look further than the end of their own nose.
While I completely agree that creating something like .NET cannot be done overnight, it’s also a fact that Microsoft has been taking a lot of time to build it.
For example, the same goes for designing a kernel. That too is something you don’t do overnight, yet it didn’t stop Linus and the team. And for the whole GNU/Linux thing!
I shall throw your analogy right back at you. You see, Linus didn’t write the UNIX spec which he implemented, he wrote a UNIX clone for use on x86 hardware. Mono pretty much does the same thing.
What exactly is new here? Every single non-trivial technology in .NET has been done before, often in open source form, and better. And at least a decade ago!
Then where is this better, open-source technology right now? Don’t even try to bring up Lisp. It had 40 years to prove itself and didn’t. Java? Not better and not open source. Pascal P-code… uhmm, pretty much irrelevant. If this supposed open-source and better virtual machine, compilers, languages, APIs, and documentation were already invented a decade ago, why isn’t it being used today? If you want to get the word out on these mystery technologies, then at least give some concrete examples.
You might live in some little academic world where some half-finished technology that is better than anything out there is sitting on some grad student’s hard drive, but the rest of us don’t have time to be messing around with non-existent or broken APIs, no documentation, no good tools, everything that is always “almost there”. Once you get out of school and into the business world, you’ll understand that people need a whole set of tools that work today… not tomorrow, when a few grad students might decide to work on it.
@Mayard:
Darn, so you mean even the kernel lacks serious innovation?
Also, I didn’t know UNIX supported all the things Linux does, today… Sorry for using a bad example, then.
@Rayiner:
But .NET is *not* a 1:1 Java clone. It tried something new, and that has worked out. What is Mono? A 1:1 .NET clone, with maybe some little add-ons. Which one is more innovative?
One more thing: looking at what other people are doing is okay. But trying to copy it almost exactly is bad.
Then where is this better, open-source technology right now? Don’t even try to bring up Lisp. It had 40 years to prove itself and didn’t.
All Lisp proved is that the majority of programmers can’t handle too much innovation at one time. They need it in bite-sized, easily digested pieces, which is precisely what C# and Java are doing.
Java? Not better and not open source. Pascal P-code… uhmm, pretty much irrelevant. If this supposed open-source and better virtual machine, compilers, languages, APIs, and documentation were already invented a decade ago, why isn’t it being used today?
http://www.jwz.org/doc/worse-is-better.html
There are numerous factors at work here. First, programmers can’t see past their own noses. Second, there is the bandwagon effect. Why try a new technology with few available programmers when there are millions of existing Java/C++ programmers? Third, there is the fact that many of these advances are driven by small companies that cannot survive long enough to see the technology to maturation. Fourth, there is the hype factor. Programmers seem to have this sheepish tendency where they consider hype much more important than technical merit. The primary problem today is the second one. Who wants to take the risk of using Lisp, and having a small pool of developers to choose from, when they could use Java, and have a large pool of developers?
Current developments in commercial computer languages are absolutely comical to those who are familiar with more advanced languages. Java/C# developers get excited about stuff like Xen, when Lisp developers have been using macros to get the same effect, with infinitely more flexibility, for decades. Java developers get excited about ‘foreach’, when such extensions are child’s play to define using macros. Java/C# developers get excited about GC, when GC’ed languages have been around for decades. C# developers get excited about lambda/closures, when Lisp has been around longer than I have. Maybe one day Java and C# will get pattern matching, predicate dispatching, and real macros!
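To make that list concrete, here is a sketch in Python rather than Lisp (Python is itself a garbage-collected language that has had these constructs for years); it only aims to show closures and a user-defined ‘foreach’ in action:

```python
def make_counter():
    """Return a closure that captures and updates the local 'count'."""
    count = 0
    def increment():
        nonlocal count
        count += 1
        return count
    return increment

counter = make_counter()
counter()
counter()
print(counter())  # the captured variable persists across calls -> 3

# A user-defined 'foreach' is just a higher-order function:
def foreach(seq, fn):
    for item in seq:
        fn(item)

squares = []
foreach([1, 2, 3], lambda x: squares.append(x * x))
print(squares)  # -> [1, 4, 9]
```

Neither construct needed any change to the language itself, which is the point being made about macros and closures predating C#.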
Read up on the optimization technologies developed for Lisp compilers. Java/C# compilers are toys in comparison. They have no storage allocation analysis, no closure analysis, no type inference, and no support for extending classes at runtime. You’ll see why Lisp/Dylan programmers snicker at the fact that Java/C# developers think it’s necessary to have a distinction between primitive and object types to retain performance.
Now that this rant is over, I have to say that your original point was moot. I wasn’t arguing about the technical merits of .NET. I was pointing out that none of it is innovative at all. Whether something is innovative or not is entirely independent of whether it is successful. So, when talking about innovation, it doesn’t matter if anybody uses Lisp/Pascal/etc today, what matters is who had the technology first.
Innovation is sometimes rewarded with success, though often it is not. So saying that Mono is a copy of .NET is stupid. Who cares if you are the second person to copy something? The only thing that matters is the first act of innovation — everything after that is just a copy. And there is *nothing* wrong with copying. We stand on the shoulders of our predecessors. What bothers me is that people continue to confuse these actions with actual innovation.
You might live in some little academic world where some half-finished technology that is better than anything out there is sitting on some grad student’s hard drive
These are all mature, well-tested, well-understood technologies. Research into Lisp compilers took place in the late ’80s and early ’90s. These are all very stable tools. All they need now is people willing to get over the hype, get over the inertia, and just *try* them.