Ars has just published part three in their series “From Win32 to Cocoa”, in which Peter Bright explains why he thinks “Windows is dying, Windows applications suck, and Microsoft is too blinkered to fix any of it.” Part one dealt with the history of both development platforms, part two delved into .Net, different types of programmers, and Windows Vista, and part three details the development platform and tools Apple has to offer, and in what ways they are superior or inferior to Windows’.

I don’t wish to dwell for too long on the actual merits of Objective-C over C, or the benefits of Xcode, Interface Builder, and other tools – I mean, which tools someone prefers is a very personal thing, and debating that issue here is pointless and bound to end in tears and torn stuffed animals anyway. What is a very interesting point is that Apple gives away all its development tools, including its various profilers, for free, and how that affects the Mac software ecosystem.
“These profiling tools are, as with all the Apple developer tools, free,” Bright writes. “In a world where most equivalent tools are rather expensive, this is very welcome. There’s a clear contrast here with Microsoft.” According to Bright, a ‘bean counter’ at Microsoft realised that you could sell development tools and make money from them. While the Visual Studio Express tools are free, it is in the extras that Microsoft is weak.
To get a profiler for Visual Studio, you have to spend money. Quite a lot of money. The Express versions don’t have one. Nor does the Standard version, nor even the Professional version. So to get a profiler, you need Visual Studio Team System 2008 Development Edition. That’ll set you back a couple of thousand bucks. And it’s not even a very good profiler. It’s hard to use, limited in what it can tell you, and just plain ugly.
The problem with this is that it is a rather short-term way of generating income. If you purchase development tools, you’re going to write software. That software can attract new users, who buy Microsoft products to run it – and let’s face it, that is going to generate more money than a single developer buying tools.
As Bright explains:
Giving developers better tools strengthens the platform. Good software is more appealing to users; business-critical software is essential to corporations. So while you could monetize developers, it’s short-sighted. Developers make your platform better, and charging them a lot of money for the privilege just doesn’t make sense. Intel does much the same thing; the chipmaker has a profiler product of its own, and it costs money. And yet, the purpose of that product is to make software that runs better on Intel processors so that people go out and buy more of them.
Apple also ‘eats its own dog food’, as the saying goes. It actually uses the tools, APIs, and frameworks it provides, reassuring its development community that those tools, APIs, and frameworks are sound and stable. In fact, in the Apple world, it’s usually the case that Apple will introduce some sort of new UI element or framework, which will later be made available to everyone else. If there’s an API in OS X, Apple is bound to be using it somewhere.
Finally, Apple has settled on Cocoa. It took them a while, but ever since 10.4 and 10.5, the direction has been clear: the future of the Mac OS is Cocoa. While Carbon is still there, and is still used in, for instance, the Finder, it’s on its way out, as evidenced by the fact that it won’t be made available in 64-bit. “This willingness to leave old technology behind is a great strength of the Apple platform,” Bright writes, “Rather than enshrining past decisions in perpetuity, Apple has a willingness to say “enough’s enough; this new way is better, so you should use it”.”
Part four will detail what Microsoft can improve, and how.
It was sad to learn that the Java bridge was deprecated by Apple. There was a time when Java was a first-class citizen on OS X. Sadly, that is no longer the case.
The good news is that the Ruby-Cocoa bridge is still there. So programmers who don’t want to learn Objective-C still have a chance of writing good apps.
Of course, Apple itself broke DRb in Ruby as part of the 10.5.3 update: http://www.ruby-forum.com/topic/154556
OS X may not be the developer’s heaven as it is made out to be.
You say “learn Objective-C” as if it is this huge chore you have no hope of ever completing. Even with all the fanciness of version 2.0, Objective-C is still just a sliver-thin superset of C. I think it is easier, at least in certain regards, for a Java developer to pick up Objective-C than to switch to Python or Ruby.
I hope it doesn’t sound impolite, but… you usually don’t “learn” ObjC, you *do* it – at least if you come from a C programming background. As you mentioned, ObjC is a superset of C, while, for example, C++ is another language, and so is Java. So let’s assume you have good C knowledge, did some C++ and are familiar with concepts from the field of object orientation; then ObjC won’t take you long to learn. And if you’re familiar with Java, ObjC isn’t a big deal either, because many elements of the language will look familiar to you. For non-Apple programmers, a look at GNUstep on x86 is worth taking.
At least that has been my individual experience; maybe it’s different when you’re coming from “VisualBasic”. 🙂
I disagree with that. Java/C++/C#, I would say, are about 90% the same; Obj-C is closer to 60%. It is more like C++ with a dash of Smalltalk than Java/C#, which are more like C++ with the problems fixed.
If we oversimplify things, yes, Objective-C is C with a dash of Smalltalk, just as C++ is C with a dash of Simula. I agree that C++, Java and C# are much easier to switch between, given their common ancestry and design philosophy, but, except for a few fairly small conceptual differences (e.g. message passing vs. method calling), when it comes to syntax, design patterns and so on, Objective-C is not that drastically different from them.
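For readers who have never seen the two side by side, here is a minimal, purely illustrative Foundation sketch (the strings and variable names are invented for the example, not taken from anything above) showing what an Objective-C message send looks like next to the familiar C++/Java call style:

// Message sends vs. method calls; the message is resolved by the runtime.
#import <Foundation/Foundation.h>

int main(void) {
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    NSMutableArray *orders = [NSMutableArray array];
    [orders addObject:@"regular order"];           // message send: [receiver selector:argument]
    [orders insertObject:@"rush order" atIndex:0]; // multi-part selector names each argument

    // Roughly the same thing in C++/Java style would read:
    //     orders.insert(0, "rush order");
    // The difference is mostly syntax, plus the fact that the Objective-C
    // message is looked up at runtime rather than bound at compile time.

    NSLog(@"%@", [orders objectAtIndex:0]);        // prints "rush order"

    [pool drain];
    return 0;
}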
Sorry, but Objective-C is crap, in my opinion. The syntax is too arcane, and the whole sender-receiver model is unnecessarily ambiguous. It makes C++ look incredibly elegant, by comparison.
That’s your opinion and I respect it.
But would you care enough to argue your point of view? You know, arguments, facts… because it seems that you dislike Objective-C simply because you don’t understand it and find it ambiguous. This really gives you less credibility than you surely deserve.
Oh, I understand it. I just don’t like it. Here are some examples of why I don’t like Objective-C. It has a late-binding contract, so when you call a particular object method, there is no guarantee that the receiver will even process it. You don’t know until RUNTIME. That’s weak and error-prone. Contrast that with C++, which has very strict compile-time type-safety built into it. Additionally, I don’t like the syntax of Objective-C. It looks like it was put together by someone who thought that things like consistency and readability are secondary considerations. The code just looks UGLY. Contrast that with C++, with namespaces, visibility modifiers, better inheritance semantics, and there’s no competition.
– Thanks for your answer.
– Late binding is a feature in ObjC, and while I agree with you that it can be error-prone for the ObjC beginner, it allows for better flexibility (see the sketch below).
– In ObjC programs there is just no need for namespaces and all that C++-related stuff. Multiple inheritance makes no sense in ObjC (and isn’t needed). It’s a bit like arguing that C++ lacks “protocols”, “categories” or “properties” from ObjC.
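To make both sides of that trade-off concrete, here is a small sketch (the Reporter class and the export selectors are invented for the example; this is one common idiom, not the only way to handle it): because the message is resolved at runtime, the caller can ask the receiver what it understands instead of relying on a compile-time guarantee.

// Late binding: ask at runtime rather than trust the compiler.
#import <Foundation/Foundation.h>

@interface NSObject (HypotheticalExport)   // informal protocol: the method MAY exist
- (void)exportAsHTML;
@end

@interface Reporter : NSObject
- (void)exportAsPDF;                        // note: no exportAsHTML implementation
@end

@implementation Reporter
- (void)exportAsPDF { NSLog(@"PDF written"); }
@end

int main(void) {
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    id receiver = [[[Reporter alloc] init] autorelease];

    // C++ rejects a call to a method the static type does not declare.
    // Objective-C resolves the message at runtime, so the idiom is to ask
    // the receiver first instead of risking an "unrecognized selector" error.
    if ([receiver respondsToSelector:@selector(exportAsHTML)]) {
        [receiver performSelector:@selector(exportAsHTML)];
    } else {
        [receiver exportAsPDF];
    }

    [pool drain];
    return 0;
}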
Flexibility = Instability.
Objective-C is a poor man’s object-oriented language, circa 1990. It doesn’t go far enough in implementing the kinds of useful features that C++ provides, and I’m stunned that anybody would want to use it when they could be using C++.
It really comes down to what you want to use the language for. Objective-C isn’t a one-size-fits-all thing. By that reasoning you would have to disregard all dynamic languages (PHP, Smalltalk, Perl, Ruby, Self, Python, to name just a few) as ill-conceived approaches.
On the other side one could argue that C++ is so bloated that most programmers don’t use even one third of the language’s features in their work. Some languages are designed maximalistically (C++), some minimalistically (ObjC, Ruby, Self). It really comes down to taste.
Flexibility = Instability
With such a statement I (we) have a much better understanding of how the logical part of your brain works…
And it reminds me of the famous slogans of the party in G. Orwell’s book “1984”. After “War is peace”, “Freedom is slavery” and “Ignorance is strength” we finally have a fourth slogan: “Flexibility is instability”. Thank you so so much!!!
Objective-C is a poor man’s object-oriented language, circa 1990. It doesn’t go far enough in implementing the kinds of useful features that C++ provides…
You do realize that in the above sentence one can freely interchange Objective-C and C++ and your sentence will remain as “true” as the original. It’s just a matter of preference (very subjective by essence), and your preferences are valuable to only one person on this planet… you!!!
…and I’m stunned that anybody would want to use it when they could be using C++
Funny but I’m not stunned at all. I wonder why….
You’re welcome. You plainly see an archaic defect in Objective-C as some kind of benefit, so whatever. More power to you. Enjoy it, if it works for you.
LMAO! Rrrrrright. I’m quite confident that, if you were to actually check around, you’d find that the number of people who prefer Objective-C to C++ is very small, indeed. The problem (I think) is that you’ve been using Objective-C so long that you’ve lost the [objective] capacity to know that it’s crap.
Enjoy it, if it works for you.
It works for me, for Apple devs and for thousands of 3rd-party devs… So you learned ObjC (a mandatory step to understanding it), you wrote several real-life programs with it (another mandatory step to understanding it), and still it doesn’t work for you… There’s something weird….
LMAO
You’re at the first step of a three-step process:
“If you want to make someone angry, tell him a lie; if you want to make him furious, tell him the truth. All truth passes through three stages. First, it is ridiculed*; second, it is violently opposed; and third, it is accepted as self-evident.” – Arthur Schopenhauer, Philosopher, 1788-1860
*: where you are.
I’m quite confident that, if you were to actually check around, you’d find that the number of people who prefer Objective-C to C++ is very small, indeed
Using exactly the same kind of intellectual (biased, unargued, dishonest) stunt, one can say that the “Elite” is always composed of a very small number, indeed!
The problem (I think) is that you’ve been using Objective-C so long that you’ve lost the [objective] capacity to know that it’s crap.
I’ve been using ObjC for about 5 years and C++ for about 7 years.
While I can easily count your subjective, unargued statements against ObjC, you can’t give a single example of a subjective, unargued statement of mine against C++, since I use and like C++ too. Still, according to you, “I” have lost the objective capacity to know…
Again, if it works for you, great. Whatever. I could care less. If you want a preprocessor masquerading as an object-oriented language, fine. Use Objective-C. If you want a real object-oriented language, step up to C++.
Perhaps it’s slipped your mind that YOU were the one who introduced the exclusionary argument earlier in this thread:
“… your preferences are valuable for only one person on this planet…. you !!!”
Because you don’t know any better.
Oh please…
Java developers value portability. Java + Cocoa ties you to OS X while incurring the overhead of the JVM. Ergo, few (any?) developers used the Java Cocoa bridge. Right from the start, it was a solution going in search of a problem.
No one, not even Java developers, mourns the demise of the Java Cocoa bridge. Neither should you.
Stop complaining about Java and OS X. There are hardly any applications that require Java 1.6, and most of those are simply compiled with 1.6 and do not really use any features not present in 1.5. Also, there is a 1.6 for the Mac, although it is still not a final version. Yes, it sucks that Sun Java is not open source and that it takes so long to get it to Mac OS, but the problem is nowhere near the magnitude it’s being presented at.
To me, the best thing about OS X compared to Windows is that you get free and very powerful development tools. Xcode is great, but then again there is the issue of Objective-C… Oh, and the author of the article commends Apple for being able to say “enough is enough”, but they paid dearly for that ability and are still paying for it. Adobe, anyone? I guess this is the risk of relying on 3rd-party vendors, but what I really respect in Apple is that they acknowledge the risks and try to offer alternatives to their valued customer base. A good example is the office suite. While it’s not as powerful as MS’s own, it is a lot cheaper and quite capable.
This is how you do a menu with a textbox in .NET:
<Menu Height="25" VerticalAlignment="Top">
  <TextBox Width="50"></TextBox>
</Menu>
Not exactly rocket science…
Welcome to 2006. Winforms has been deprecated for a very long time now.
That’s all well and good, but it’s still slightly irrelevant. How many end-user applications are written in .NET? How easy is that to accomplish in unmanaged C++ code, which is what the majority of desktop applications use?
AFAIK Obj-C is managed (it has GC, so it almost has to be).
Regardless, I would say 90-95% of Windows apps are managed. Only the most performance-sensitive stuff is written in unmanaged code nowadays, and we are even starting to see that phased out.
If you want to talk deprecated, nothing written in the last decade should have been using C++. On Windows, anyway.
“I would say 90-95%. Only the most performance-sensitive stuff is written in unmanaged code nowadays, and we are even starting to see that phased out.
If you want to talk deprecated, nothing written in the last decade should have been using C++.”
That’s true of internal corporate apps, where performance and end user experience matter less, and a lot of code is “throw-away”.
Anything public facing, shrink-wrapped proprietary or open source, or otherwise, and desktop or systems oriented, is almost always C/C++.
For software that is not forced on its users (unlike internal corporate apps), performance, look and feel, and overall end-user satisfaction matter very, very much. If all those things are not excellent, the software is ignored. For accomplishing these things, C/C++ usually offer the best solutions.
Woah, now that’s something unusual. The question I’d ask as a reply to that is something like “says who?”. This depends on so many factors that stating something like this is more like the “640K should be enough” claim than anything else. Just as one example: where I do my stuff, we develop mostly algorithms, with the GUI coming much later, and since almost all the algorithms are written in C++ and on Windows, it’s easier to create an MFC GUI for them. We also use C# and Java when the need arises, but the language is not the goal, it’s the tool. I never use a language or a tool because it’s suddenly fashionable; I always use the one in which I can do what I have to do most conveniently – from a certain point of view.
Maybe I should have been more clear. The official way to develop Windows apps is .NET, and has been for a very long time now. You can use MFC if you want; you could even use Tcl if you really wanted to. But the API that is being actively worked on and encouraged by the guys doing the OS is .NET.
How many of Microsoft’s products are written in .NET?
I’m curious because I am under the impression that the majority of their products are still written in C/C++ as Process Explorer reports very few applications using the mscor*.dll files.
I just finished listening to a podcast where a Microsoft guy was complaining that too few people at Microsoft had any knowledge of unmanaged code. Outside of the Windows/Office teams, it’s all about .NET.
Oh boy! I remember upgrading my 512K computer to 640K!
It’s a known fact of life that you need at least 1MB to get anything done. How was anyone getting anything done in those days 🙂
I remember upgrading from my 640K XT to a 1MB 286 and still being steamed that various memory managers ate away at my precious base memory.
Objective-C code is *not* managed.
How would you implement garbage collection without a garbage collector? A Google search of http://www.google.ca/search?hl=en&client=firefox-a&rls=org.mozilla~… leads me to believe there is a VM.
A GC for C/C++:
http://www.hpl.hp.com/personal/Hans_Boehm/gc/
Just to show that it can be done without a VM.
I don’t see how garbage collection necessitates a VM. It can very well be implemented as a library of memory allocation/deallocation functions that are just an interface to a garbage collection engine, meant to replace the standard memory management functions in the respective language. For example, this is how the Boehm garbage collector library works (and I believe Objective-C uses exactly that, although don’t quote me on it). Objective-C is most definitely compiled to bare-metal machine instructions.
EDIT: Oops, beaten to the punch by rob_mx.
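For anyone curious what that looks like in practice, here is a minimal sketch against the Boehm collector’s C API (it assumes libgc is installed and the program is linked with -lgc; whether or not Apple’s own collector is built on it, it demonstrates garbage collection with no VM anywhere in sight). It is plain C, so it also compiles unchanged as Objective-C:

// GC as a library: allocate through the collector, never call free().
#include <gc.h>
#include <stdio.h>

int main(void) {
    int i;

    GC_INIT();                               /* set up the collector */

    for (i = 0; i < 1000000; i++) {
        /* GC_MALLOC replaces malloc(); there is no matching free().
           Blocks that become unreachable are reclaimed when the collector
           runs, which it does by scanning the stack, registers and heap. */
        char *scratch = GC_MALLOC(4096);
        scratch[0] = 'x';
    }

    printf("collected heap is about %lu bytes\n",
           (unsigned long)GC_get_heap_size());
    return 0;
}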
You are right – blame it on the education of the Java generation. I was confusing a managed-memory runtime with managed code, and was unaware you could implement it in that way.
Looks like you’re confusing memory management with managed code. Just like C++, Objective-C code is compiled down to native code, which uses the services provided by a runtime for the language-specific features, e.g. message routing.
You are right, I stand corrected
Let me just say that if you had any knowledge about Objective-C, then you’d not have to “believe” [sic] anything about its features, and could actually make some substantial statement.
Objective-C code is compiled to machine code and not “managed” in the sense the word is usually used.
Objective-C manages memory either manually or by reference counting (whether that counts as “garbage-collected” is not universally accepted, but most would say it doesn’t).
If you knew anything about Obj-C, you would know that with 2.0 there is real, full GC.
Like I have said in other places, I was mistaking a managed-memory runtime for a managed language.
Objective-C 2.0 is pretty much restricted to Leopard and up-to-date Tiger installs.
Quite easily: you look at the memory, see if it’s being referred to; if it’s not, free it. Do this periodically, and you have a garbage collector.
Sure, that’s oversimplified, but it captures the essence. Hell, you can get garbage collection in pure C code (google Boehm GC) and some applications even use it (see: GCC, Inkscape, and the GNU native-compiled Java).
Err… No, end-user applications are virtually never written in .Net. Just corporate stuff – the same kind of stuff that might otherwise be written in Java.
You also seem to have this problem with dates.
10 years ago was 1998 – Microsoft were still trying to get away with using a modified, non-standard version of Java.
The first release of the .Net framework was in 2002 – a mere 6 years ago. It wasn’t usable for anything much until version 2, released in 2006. Realistically, developers could have started writing end-user apps in 2006. That’s only two years ago.
As for WPF, that’s only available in .Net 3.0 (early 2007), and only got support from Microsoft’s tools in .Net 3.5 (late 2007). Any applications written with that won’t even have come close to being released yet.
Clearly, you’re not a software developer. Not on Windows anyway.
The vast majority of software written for Windows used to be done in VB, not C++, or VB for the front end and C++ as COM objects. Since the shift to .NET, everything that used to be done in VB, and most of what used to be done in C++, is now being done in .NET. If you are talking about software pushing hardware to the limits (games, CG software, etc.), yes, typically it is still done in C++. Other than that it is almost totally .NET (with some notable exceptions).
I was not developing professionally on Windows at the time (I was a Java guy), but I have been using .NET since the first beta, closer to 2000, which was 8 years ago, so it feels like a decade 😉
Like I said, I have been writing Windows stuff with .NET for closer to 8 years now. Granted, they were trivial things, but still.
3.0 came a bit before Vista, which was mid-2006. There were some early tools for WPF, but the first real designers only really became available in early 2007 (as an out-of-band CTP release).
The problem with WPF is not the tooling, it is the vast armies of Winforms developers who don’t know how to code a UI with anything but a visual designer. The Studio tooling for WPF is (IMO) some of the best work they have done so far, which is saying something, because MS does great designers. However, you need to actually know WPF to use it properly.
I’ve been writing software for close to 15 years now, professionally for about 8, professionally on windows for about 3.
Oh, really? The ATI/AMD control panel comes to mind. It is quite the consumer application and has nothing to do with corporate activities other than the fact that it was produced by a corporation and is distributed by the same. And this is hardly the only example.
I’m not against .NET but that application is really awful. Slow and bloated…
The software used to flash the N800 and N810 in Windows is written in DotNet and *requires* the DotNet 2 framework.
“The first release of the .Net framework was in 2002 – a mere 6 years ago. It wasn’t usable for anything much until version 2, released in 2006. Realistically, developers could have started writing end-user apps in 2006”
The .NET framework was very usable out of the gate; I developed tonnes of stuff with it using Visual Studio 2002. I also used 1.1, which came out with 2003, and version 2, which came out in 2005, not 2006.
Just because you think something is not useful, doesn’t mean that others don’t. Clearly you aren’t a software developer, at least not on Windows anyway.
1) GC in ObjC is optional, and many (most?) apps don’t use it.
2) GC and “managed code” (whatever that means — usually it seems to mean a VM) are entirely separate considerations. You often have GC without a VM, and it’s quite possible to have a VM that doesn’t do GC. (I believe that this is the case with LLVM, although it gives hooks to implement your own GC)
Actually, most applications written for Windows are in-house apps written in Visual Basic or C#. Most developers for Windows are working for a company writing apps in VB, VB.NET or C#. Therefore, most user apps are written using the .NET framework (or the VB6 runtime, which predates the .NET framework).
WPF is only available in Visual Studio 2008. The 2005 plug-in was never released for production use.
If I had time to hand code forms I would be writing in C++, not using a RAD environment.
Winforms is here and will be here for a long time yet. WPF is pretty much unused. I know of no one using it in production.
As with all discussions of this nature, use the right tool for the job. C/C++ is the correct tool for most platforms without a strong framework (Linux, for example). Java is the correct platform for weenies and losers. Win32 is the only API if you want nice lean apps which are pretty much unmaintainable; MFC and ATL are slight movements in a positive direction, but still way off the mark. Winforms is pretty much the only choice for DotNet (especially if you want to be cross-platform) and Objective-C is for the Mac. All other discussions are window dressing 😉 YMMV
Edit: messed up spelling.
Yeah, I sorta disagree.
You’re right that the 2k5 designer wasn’t ever RTM’d, but that doesn’t mean it can’t be used to learn. WPF not only addresses the (valid) points in the article about the inflexibility of Winforms, it flips things around and makes the Mac environment look inflexible by comparison.
You are right that WPF adoption has been extremely limited, but for the purposes of this conversation winforms is being used as a straw man.
As for hand-coding, you don’t need to hand-code, but for the best experience you want to be in split screen (which is what the designer gives you by default), switching back and forth between the designer and the code. XAML is a high-level declarative markup language, and it is really not hard to learn at all if you have any web experience. The benefits are definitely there, and (IMO) make it the best tool for the job for any UI that goes beyond typical LOB stuff.
Just so we are on the same page, check this out http://www.beacosta.com/blog/?p=40 . I like to pull that out in any discussion on why WPF is worth learning, just because so many people really don’t understand how wide the gulf is between it and winforms.
Agreed in pretty much all respects except for the Winforms one 😉 But I have the perspective of a web guy who does occasional smart clients, and the soul of an early adopter, so I guess I fall into the YMMV disclaimer.
GNUstep Renaissance (http://en.wikipedia.org/wiki/GNUstep_Renaissance), which also runs natively on OSX, has been around for quite a while now too. The only downside is that it never gained enough traction in the community to force us to write a GUI designer for it.
Strange, but IIRC, I once read about how Windows gained market share by charging people for their dev tools rather than giving them away.
IIRC, it goes like this: in those days, hardware prices were sky-high, so Windows dev tools looked cheap, especially compared to the UNIX ones. Moreover, Microsoft opened up their specs, so there were a lot of freeware compilers that were lousy but got the job done. This allowed hobbyists and professionals alike to be drawn to the Wintel platform. When they achieved dominance, the cost of a dev licence hindered freeware development, and this, in turn, allowed Microsoft to quality-control the software produced for Wintel, to a small but noticeable extent.
However, I would agree that leeching off developers is a suicidal move since they are any platform’s greatest asset. Not the users, but the developers. Just ask any open source developer in touch with the times.
Moreover, notice how hardware prices have plummeted over the past decades. If Microsoft had kept the hardware-to-dev-tool price ratio constant, I would bet that those dev tools would have to cost less than a burger.
While I liked the article, I found it funny that the author portrayed Microsoft as a company that does not use its own APIs.
I guess Office, Internet Explorer, Live Messenger, Streets and Trips, Visual Studio and every other program MS has ever released were written without an API of any kind at their disposal. lol.
I believe the point the author was trying to make is that, for one, Apple themselves are using, for the majority of their apps, the framework they are promoting as preferred for OS X development, unlike Microsoft, who still develop most of their software in Win32 instead of .Net. Also, especially when it comes to UI, most new objects in Cocoa and most new frameworks are “librarized” versions of code Apple had to implement for their own apps, while other frameworks (including .Net) are usually formally designed first, and then the features they implement are used in actual apps. I personally am still on the fence as to whether this necessarily leads to better framework quality, but it is certainly an approach quite different from Microsoft’s.
As I pointed out in a previous post, most developers don’t write the kind of publicly-consumed applications that Microsoft produces (i.e. Word, Excel, Visual Studio, IE, etc.). They write line-of-business applications that pertain to their particular business use-cases. C/C++ isn’t a good tool to use in those scenarios, but .NET/Java are excellent for that purpose. Hence, that’s why Microsoft advocates .NET (and Sun advocates Java) for LOB apps. I’ve never heard Microsoft suggest that you should be using .NET to produce an app like Word. It’s just common sense: use whatever tool is most appropriate for the job. Apple is handing every developer a hammer, regardless of the problem, when they really only need a screwdriver. The reason is simple: they want to tie you to their platform, no different than Microsoft or Sun or anybody else. At least with .NET and Java, you can run your code on other platforms.
Perhaps on the Windows platform, but I’m truly hard-pressed to find Java/.NET applications [Mono being the exception for GNOME, in part] on Linux and OS X that are client-server applications in the traditional client-server model.
How much of KDE is written in C/C++ and how much of it is written in Java/.NET?
How much of GNUstep is written in C/ObjC/ObjC++/C++ and how much is written in Java/.NET?
How much of OS X is written in Java/.NET versus C/ObjC/ObjC++/C++?
How much of LaTeX Tools are written in Java/.NET?
How much of Adobe’s applications are written in Java/.NET?
Intuit? Corel? Oracle? Sybase? IBM? Sun?
Are Safari, Firefox, IE, OmniWeb, Konqueror, Opera, et al. written in Java/.NET?
The article is honest. Microsoft wants more people to move to .NET but realizes that cutting ties with C/C++ and demanding people switch to C# for general application development would produce an exodus to Linux and OS X tenfold greater than the one already happening.
I think you kind of missed my point. Most app developers don’t write the kinds of apps that would require C/C++. Your average corporate or small-business developer doesn’t need the horsepower or complexity offered by C/C++; so, .NET and Java applications really work well in those environments. In my company, for example, our support devs write data collection apps in C#. Most of the data goes to payroll/accounting or HR. More and more of those apps are moving from client machines up to web servers. But web applications aren’t always possible/useful to all users; for example, the sales folks who are on the road all the time need apps which do projections for them, keep track of expenses, and other stuff. They don’t want to VPN into the corporate network to do all of those things, so client apps work for them. The devs who are writing these apps are using C# because C/C++ would be overkill for what they need. The apps are corporate, highly targeted at specific scenarios, and relatively simple. That, really, is what most devs do in the real world. They aren’t writing Internet Explorer or KDE or whatever. For THOSE guys, yeah, sure, I’ll grant you, C/C++ is the better tool. But those guys only represent a thimbleful of the larger market for developers. All that you have to do is crack open the classified ads and see what sorts of jobs people are looking for. It’s a common theme: Java, .NET, databases, web development. You won’t see many jobs for C/C++ developers, on average, compared to the others, and I think that is reflected in Microsoft’s emphasis on managed code.
The problem here is that you are putting Objective-C/Cocoa into the same league, in terms of programming complexity, as C/C++, whereas in reality it is much more similar to Java/C#. 99% of the time you don’t really have to deal with pointers (one can regard ‘ClassName *var;’ as syntax sugar, so that doesn’t really qualify as pointer manipulation), and Objective-C’s strong dynamic mechanisms work very well for GUI environments, which are by nature hard to describe flexibly using static typing. This is further emphasized in ObjC 2.0, which features an automatic GC.
As the author pointed out, no single feature of Objective-C or Cocoa really put them above other players. It’s much more about the combination and implementation of these features. Also, one can transition programming styles from dynamic RAD with good code manageability all the way down to assembly-level programming without having to switch languages.
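As a small, invented illustration of those 2.0 conveniences (properties, dot syntax, fast enumeration) and of how little raw pointer work typical code involves, consider a sketch like this; under the optional 2.0 garbage collector the release/drain calls at the end would simply be omitted:

// Objective-C 2.0 niceties in a few lines; the Track class is made up.
#import <Foundation/Foundation.h>

@interface Track : NSObject {
    NSString *title;
}
@property (copy) NSString *title;   // declared property: no hand-written accessors
@end

@implementation Track
@synthesize title;                  // compiler generates -title and -setTitle:
- (void)dealloc { [title release]; [super dealloc]; }
@end

int main(void) {
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    Track *t = [[Track alloc] init];
    t.title = @"Intro";             // dot syntax is sugar for [t setTitle:@"Intro"]

    NSArray *titles = [NSArray arrayWithObjects:@"Intro", @"Outro", nil];
    for (NSString *name in titles) {    // fast enumeration, also new in 2.0
        NSLog(@"track: %@", name);
    }

    [t release];
    [pool drain];
    return 0;
}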
Yet MS has poured loads of development hours into .NET, designing Winforms and then abandoning it in favor of WPF. All this while, MFC has not dramatically changed since its inception about 15 years ago and WTL is not officially supported. I think the message from Redmond is pretty clear.
C++ developers, sod off.
Nonsense. VS 2008 updated MFC.
http://channel9.msdn.com/posts/Charles/Pat-Brenner-New-Updates-to-M…
I think most of Microsoft’s new apps are .NET apps. For example, World Wide Telescope is a .NET app. And I believe that the Expression suite (Blend, Media Encoder, etc.) consists of .NET apps. Sure, older apps like Office aren’t .NET, but there’s no reason to rewrite what already works.
Also, I recall it took Apple years to finally write the Finder in Cocoa (it was a Carbon app for years).
Visual Studio is not written in .NET.
Whereas you’ll find that Xcode was written in Obj-C, using Xcode.
Microsoft are not dogfooding their own .NET APIs that they are advertising as the way forward for everybody else.
They are eating their own dog food for new development. .NET was not around when VS was first written. The Live suite, on the other hand, is relatively new. Ditto Media Center.
“Whereas you’ll find that Xcode was written in Obj-C, using Xcode.”
Xcode was written using Xcode? Really? How’d that work? I imagine the computers used were also powered by perpetual motion machines that self-assembled in the presence of the RDF.
Xcode is the natural son of “Project Builder” (inherited from the NeXT days). Xcode was first compiled with Project Builder; then Xcode was used to compile the following versions of Xcode.
It’s pretty simple and logical IMO.
I was about to point out the same thing. Regardless of the name changes and the different UI, OS X at its core is still NeXTSTEP. Like the author of the article pointed out, some of the development tools used today in OS X come from NeXT.
I can’t agree more.
Besides my 2 Macs, I have 2 NeXT stations, and countless PCs and old Sun stations….
The same way that GCC compiles GCC and Visual Studio is developed using Visual Studio.
For the VAST majority of line-of-business applications, .NET (or Java, if you don’t mind a GUI circa 1995) is more than adequate; and, quite frankly, it’s advisable, given the RAD development model. Not a lot of people actually write huge apps like Visual Studio or Word. Those that do should pick whatever tool is most appropriate.
…right tool for the right job.
What I did like about this article was that at least it shows that there are good tools and good languages available on the Mac, some guys here may not have used a Mac before or maybe assumed it wasn’t up there with other platforms…
Personally, I love Cocoa and Obj-C, but I also love Java, Ruby/Rails and Delphi too… I am just getting into MacRuby too, which might be an interesting way to write apps for the Mac too…
I like Win32 too; sure, it has its problems, but it gets you there in the end (most of the time)… I haven’t used C#/.Net very much, but the little I did do was pretty cool…
Oh, just in case anyone is still wondering about how hard it is to learn Obj-C: I came from a C/C++/Java background when I first used Obj-C and had no problem adapting, very very simple… Ruby is more of a mind-bender in a lot of ways (good ways 😉
I don’t think it’s the language that is hard, it’s just learning the new APIs; that goes for .Net, JFC, Rails and so on…
The theoretical advantage of Java is portability.
– Why are Sun’s cross-platform applications (for instance OpenOffice) not fully written in Java?
– Will we ever see a VirtualBox written in Java? I doubt it…
Numerous other apps could be written to take advantage of portability; however, I’ve never seen Photoshop or Cubase written fully in Java. Now a simple clue: portability’s drawback is performance…
Beyond the portability myth are simple facts…
.Net: I’m sorry, I can’t think of any application that is used every day by millions of users and is written using only .Net/C#….
I have no doubt this is a great platform for developing applications; I just wonder why even Microsoft doesn’t use it for all their home-made applications…
Objective-C: is a very simple OO language, basically an OO layer (the Smalltalk way) on top of plain C.
In contrast, Apple uses this language in virtually all their apps. Millions of people use ObjC-written applications. There are thousands of applications written using ObjC and Cocoa (or GNUstep).
Guess what ? I love ObjC
Paint.net may not be used by millions but it’s a good example of an excellent .Net application.
Paint.net is NOT used by millions of users.
A lot of ObjC apps are used by millions of users every day. Do you see the difference?
Paint.net is NOT used by millions of users.
Paint.Net 3.31 alone has 2 million downloads from downloads.com (and there are other mirrors).
Nikon Capture NX is another tool that needs .NET (and judging by its speed, it is using it a lot).
Sure numbers like this can be misinterpreted.
2 million downloads? This surely is a “reference” application…
Can you tell me approximately how big the total Windows user base is?
100 million? 200 million? Let’s say 200 million (although I’m sure it’s far beyond that); that means 1% of the user base is using Paint.net… Again, 100% of the Mac OS X user base uses ObjC applications on a daily basis… You can cite any numbers you want, you won’t change that simple fact.
You can argue with your math teacher about whether 2 million qualifies as “millions” or not; I won’t waste further time on this topic.
But why don’t you compare .NET with an Apple language that was introduced years after the switch to OS X?
I won’t waste further time on this topic
Good bye! Au revoir! Hasta luego!
100% of Mac users eh? I am sure Adobe would disagree with you. And Finder itself is largely a 32-bit Carbon app:
http://www.carbondev.com/site/?page=64-bit+Carbon
Adobe? By default, you have no Adobe app pre-installed on a Mac.
-> Adobe: irrelevant.
Finder: largely??? Hmm, you mean not entirely, right?
Funny you don’t go further in your explanation…
Yes, 100% of OS X users use ObjC/Cocoa applications each and every time they use their computer… (Preview and the iLife and iWork suites all use Cocoa extensively through ObjC, let alone Address Book, Safari, Mail, iCal, Xcode, Interface Builder…) Whether or not Cocoa apps are “mapped” to Carbon features at some point in their execution doesn’t change anything at all!
I really hope that Paint.net or Nikon Capture NX are not the best examples you can give of high-profile .NET apps, because their usage numbers and the category of their users somehow pale in comparison to those of, say, Motion, Soundtrack Pro, Color, Logic or Aperture (and those are just the apps made by Apple).
Expression Blend and Expression Design are two awesome apps written using WPF. They greatly influenced .Net 3.5. Silverlight and these apps are examples of Microsoft eating their own dog food and making changes based on what they learn.
Seriously though, Microsoft has some awesome technology for developers, not just around the GUI. WPF is as good as it gets, and on the stuff that is not GUI-related Apple has nothing comparable.
And that’s precisely why anyone who develops software for public consumption stays away from .NET or Java.
The reason you can’t is that the only way to really tell a .NET app at this point (from a user point of view) is the unhandled exception dialog.
Dragging the exe onto Reflector is more technical, but yields faster results.
Microsoft Management Studio for SQL Server is at least in part written in DotNet. The shell app seems not to be (or MS hacked it to look like it isn’t), but the tonne of assemblies it uses says different.
Yeah, I was talking from an end-user point of view. The rendering engines of both Avid and Sony Vegas also use .NET, but I’m pretty sure the main apps don’t.
This is the worst series of fanboy articles disguised as technical. This guy has simply no clue, even if he claims to be a developer.
Pearls of wisdom:
* Summary: “WinForms is bad. Java is much better designed.”
That must be because basically not a single Java desktop program is installed on ANY of my PCs while I use many Winforms ones. Go figure.
* Summary: “Apple loves developers, so that’s why they give their developer tools out for free. Microsoft instead tries to monetize its developers… thus Microsoft doesn’t understand that more developers means more software for your platform.”
Yeah, right! Could it be that Apple desperately needs more developers while Microsoft has loads of them writing new apps?
* Summary: “Great! Apple ‘innovated’ (lol!) by introducing Core Audio, Core Video, Core Data so developers don’t need to write their own modules! YaY!”
Do you mean Apple introduced what Windows provided like 13 years ago? YaY!
(and these last two are my favourites…)
* Citation: “Finally, Apple also gives away Quartz Composer, which I have to give a shout out to. Although I have no practical use for it, I’m sure someone does, and it’s really rather funky.”
no words for it…
* Citation: “For example, a kind of “palette” window used for inspecting and adjusting object properties was used in iPhoto and other applications. This is something that lots of software can make use of, and so with Mac OS X 10.5, a system-level palette window was introduced. Instead of a proliferation of slightly different first- and third-party implementations of the concept, Apple has taken a good idea and exposed it to any developer.”
System-wide palette for developers to use… rather innovative ;-D
I mean, if this guy is pretending, he’s great… really…
If he’s not pretending… gosh! If I were Jobs, after reading this, I would wonder “Where the hell am I going wrong?!?!?” ;-P
A well-designed API from a programmer’s point of view doesn’t necessarily mean you will see it a lot as a user. The majority of client-side code you will come into contact with as a Windows user is going to be written in Win32. No one will claim Win32 is well designed, not unless they’ve had 5-6 beers.
You’ve missed the point. Apple needs many 3rd-party developers, and they’ve done a lot to reduce the barrier to entry by making their development tools free. Microsoft, too, needs as many 3rd-party developers as it can get. Sadly, its tools are all commercial. Stuff like Shark, Instruments, and the tools in /Developer/Applications/Performance all come for free with Xcode, but you’d need a high-end VS to get the equivalent.
You accuse the author of being a fanboy, and yet you make asinine comments like this? You clearly have no idea what Core Audio and Core Video do.
Have you seen the palettes he’s talking about? When Apple introduces a new UI theme/widget, it’s available to all developers from the get-go. How long did it take C++ developers on Windows to get access to ribbons?
I agree with you that Quartz Composer is a waste of time, though it does allow you to create some very amazing-looking demos. Pity I have the artistic ability of a hippopotamus.
It depends on what you do with your client computer. For many people, who only use a web browser to read email and play games, that’s probably true. But there’s a huge number of people running in corporate environments that run nothing but managed (C#, Java) LOB applications. Most (if not all) of the applications in these sorts of environments are managed.
Neither would anyone say the same of Cocoa or any of Apple’s APIs. It’s all crap, when it comes right down to it.
That’s simply not true. Not all editions of Visual Studio are commercial. For example, Visual Studio Express (see http://www.microsoft.com/express/default.aspx) is freely available. Also, the Win32 SDK includes a compiler, debugger, headers/libs, and lots of tools. For free. In addition, there’s a huge raft of open source tools such as Eclipse (http://www.eclipse.org/downloads/) which developers can use. Some people would even argue that the open source tools are BETTER. So, really, the idea that you HAVE to use a commercial edition of Visual Studio to develop apps for Windows is just ridiculous and wrong.
DirectSound, DirectShow, DirectInput, Direct3D, etc have all been around for a VERY long time. That’s his point — and it’s valid to mention. Apple isn’t breaking new ground here.
It may be valid to mention that DirectX components have been available for years, but the scope and integration into the OS aren’t the same. Besides that, Apple also had system services for sound and video even as far back as System 7.5, so I really don’t see his point. No one uses DirectSound for anything other than games or simple applications. Yes, it’s there and has been for a while, but who really uses it? Even MS themselves don’t fully use it, and that is where Apple and MS differ.
Uh, no. I disagree. Microsoft’s technologies are layered in a way that makes it possible/probable to use them either directly or indirectly. For example, at the bottom of the stack is Direct3D/DirectSound/DXVA/etc. Above that is DirectShow for video playback. And above that are the Windows Media Player controls and APIs.
None of this stuff is new. Both MS and Apple provide (and have provided) audio/video APIs. In other words, the level of innovation is only very incremental.
Yes, they do use DirectSound. They just don’t realize it. Whenever most apps call PlaySound(), it eventually calls down to DirectSound. The fact that you aren’t AWARE that you’re calling DirectSound is irrelevant. You are. Why do you think that it’s necessary to call into the lowest layer of the stack in order to call that INTEGRATED? That’s a nonsensical definition of integration. Integration is all about targeting the layer of technology that makes the most sense. If I don’t want to learn about DirectShow or Direct3D or whatever, I can simply host a Windows Media Player control and have IT do the heavy lifting. Or, if my needs are more complex, I have the option of targeting the lower-level runtimes directly.
With VS Express, you’re shafted if you’re a C++ developer and want to use a slightly modern framework instead of coding directly to the Win32 API. There is no MFC. Neither is there any ATL. As a result, you can’t use WTL either. If you’re after designing UIs, you’re stuck with a resource editor that hasn’t been updated since .rc files were introduced. If you’re going to write COM components, tough luck: no GUID generator.
While Xcode doesn’t compete with VS Team System, it’s definitely better than VS Express, Standard or Professional. I mean, come on. What kind of self-respecting developer doesn’t profile his code? Or search for memory leaks? The lack of a profiler means no PGO either.
Those components are part of the free Windows Platform SDK. Since many people get both VS Express and Windows Platform SDK, there’s no sense in downloading the same components twice. So, download the Platform SDK. Game over.
No, they are not. I have looked, and you *are* shafted if you wish to develop COM, ATL, WTL or MFC applications with VS Express. It was possible to use WTL/ATL with VS 2005 Express. Realizing this mistake, MS has removed all ATL headers from the Platform SDK.
Wrong. The source code for ATL et al is downloadable. You just go and get it.
http://www.microsoft.com/downloads/details.aspx?FamilyID=a55b6b43-e…
What a load of dung. The entire industry has been copying what we did at NeXT for the past 15 years. Now that they’ve caught up, suddenly Cocoa is crap? Truly classic.
All the Direct-this and Direct-that have rich, mature and modern Cocoa counterparts on OS X. What Apple did, before Steve’s return, was let their lead in graphics and audio languish and go into the ground outside of QuickTime.
Let’s revisit this thread in 3 years and see where the toolkits currently stand.
With Carbon gone you’ll see far more resources extending Cocoa, as should have happened five years ago.
Maybe you missed my point: I wasn’t singling Apple out. I think that BOTH Windows AND Apple graphics APIs are the suck. Microsoft hasn’t really provided a way to bridge GDI and Direct3D in any meaningful way. And Apple’s graphics perf with Quartz hardware support is so laughably bad that they won’t turn it on, by default. As for Cocoa/Carbon, who cares? They’re just lame frameworks, like MFC/Win32.
People have been throwing around Visual Studio claims (one way or another) all over the place in this post. Here is a rough overview of the actual SKUs.
The VS Express editions give you all you need to do most things. They are limited in:
C#/VB.NET only
Web designer is kinda sucky
No class diagrams
No .NET Compact Framework support (for PDAs/smartphones)
Can only connect to local databases
XML editor only supports XML
Can only debug locally
Can only compile 32-bit
No SQL Server or Windows Server WMI integration
No Office add-in support
No code profiling
No code analysis
No plugins
No database tools (like schema compares or refactoring)
No integrated testing
No issue/project management features
No database deployment features
Out of all of that, 32-bit-only binaries are the only thing that really sucks. Other than that, it is a very good IDE (for the price), and while the latest Xcode may be better, it is in the same range. What you are missing are things you would only really miss in more serious projects than stuff for end users.
—————
With Studio Standard, you get:
All the .NET languages
The full web designer
Plugin support
Mobile support
Ability to connect to remote databases and services
Database diagrams
64-bit support
This is for the more serious hobbyist, or the economy version for independent professionals who do not have an MSDN subscription.
—————————
With Pro you get:
Full remote abilities
XSLT support
SQL Server integration (ssis/dts projects)
Usually Pro is another $100 or so. You would only really get it if you need the heavy-duty data transformation stuff in SQL Server.
———————-
After that, we get into the Team System stuff. Those editions are designed for teams of developers working on large projects. You don’t usually actually buy these; you buy MSDN subscriptions.
These are divided into SKUs for
Software Developers
Software Architects
Software Testers
Database Professionals
Team Suite (i.e. Ultimate, either for the VP of technology or motivation to get that MVP award)
———————
The Apple tools are designed for people doing desktop apps. Personally, I like WPF/C# better, but overall, when it comes to smaller things, I think Xcode is the better product. As soon as you start talking databases, web services, or server-side scripting, Xcode is not even in the same class, and even the Express versions of VS would be a better choice.
As for the criticism that MS doesn’t include a profiler in the free SKU, IMO that is something that should be pushed down to Pro (like testing was). However, a profiler is not something that everyone needs, and in a team setting the guy who needs it will have it.
The only reason Xcode may be lacking on the database stuff is that, for some stupid reason, Apple has pulled the Enterprise Objects Framework from the mix.
Put that back in and suddenly they leap-frog VS on this matter.
I dunno, Studio has very good DB tooling. And what about creating/consuming web services? I can’t imagine working on a large system without services. WCF/Studio makes this ridiculously easy.
Visual Studio has crappy database tooling. It’s a bit like the source control integration: great if you want basic stuff, but for power users it just gets in the way. I am constantly in Microsoft Management Studio to get any real work done.
I developed Mac programs for years, and in my opinion, Apple treats its 3rd party developers like garbage. They bundle anything and everything so there’s almost no room to compete with them (people complain about the few apps that Microsoft bundles but seem to not mind the massive bundling Apple does). The changes they make to their API are more likely to break apps than Microsoft’s API changes, and it’s not even close. They killed off CodeWarrior by not providing them with the information necessary to create universal binaries. For years, CodeWarrior was the preferred development tool over Apple’s offerings, even though it wasn’t free like Apple’s were. That should tell you something.
Well, the Office team invented Ribbons, not the Windows team. The Office team is independent. They “rolled” their own UI, and anyone else could’ve done the same without waiting on Microsoft. I find your complaint that Microsoft didn’t immediately make the Ribbon UI available to others ironic, because if Microsoft had been split into an “Apps” company and an “OS” company, then this is exactly the behavior you’d get. Any UI that the “Apps” company invented would not be provided by the “OS” company since they would be separate companies. Isn’t this what lots of Microsoft bashers wanted?
At any rate, Microsoft pretty quickly did provide a free license to the UI that app devs and component devs can use, and there are already multiple component devs providing Ribbon UI to devs, and Microsoft’s own new .NET and MFC libs provide these Ribbon UI tools written by 3rd-party component devs. (Note that the actual Office code is NOT provided by Microsoft, since the Office team is independent and writes their code in a way that is suitable for their own internal use, but not in a way that is suitable for use in a library usable by 3rd parties.) And these 3rd-party Ribbon UI components were available even before Office 2007 RTM’ed.
Good and informative post. Shame I can’t mod you up
Oh man, you got no clue. I had to join just to debunk you.
I’ve been developing software for 30 years. Yeah, CodeWarrior rocked – I used it for years. Before that I used Symantec C for the Mac. Before that…. However, people used it because it was way better than the expensive MPW Shell – back in the days when Steve was NOT there and Apple did not give its tools away for free…. So to say it cost more and people still used it is disingenuous. It cost LESS and was the 1st tool available for building PPC apps, and was pretty good at it. They killed off Symantec on the Mac as well (wait, Symantec killed off Symantec tools all on its own…).
Metrowerks was bought by Motorola so that they could have dev tools like Intel has theirs: to make money and make tools for their chips. Motorola switched focus to embedded systems, and abandoned their x86 products (yes, Metrowerks made a compiler for x86 BEFORE Motorola bought them. You could even cross-debug Mac>PC or PC>Mac).
Motorola had an interest in supporting THEIR CPUs, the PowerPC and embedded chips. It was Metrowerks, under the ownership of Motorola, that had no interest in creating Universal Binaries. Motorola and Metrowerks were gone from x86 LONG before the transition, and had no interest in supporting Mac OS X strongly.
Timeline – Motorola acquires Metrowerks in 1999.
2000 – Motorola ceases developing the compiler for x86.
2001 – Motorola persists in supporting legacy tools, with no new development for a couple of years now.
June 2005 – Apple announces the transition to Intel.
July 2005 – unwilling to resurrect the x86 compiler killed as soon as Motorola bought Metrowerks, Metrowerks abandons the tools it had not really advanced in 5+ years on the Mac.
Get your facts straight
IIRC it was even worse than that… Metrowerks’ x86 compilers were sold off to a third party. They didn’t own them at the time of the transition. Not to worry, they were pretty crappy and only did PE executables. They used GCC for anything else x86 (including their folly into Linux).
AFAIR, their Mac PowerPC compiler only ever targeted the CFM too, so Universal Binaries were always going to be a little more tricky.
EDIT: by CFM, we of course mean the PEF format (as in, BeOS people don’t need to try to contradict me on that one), as opposed to the native Mach-O Mac OS X executable format inherited from OPENSTEP/NEXTSTEP. The NeXT platform always did “fat” binaries in the exact same way Mac OS X does them now (well, I didn’t look at the layout, but I assume any changes are streamlining)… An OPENSTEP .app can be 68000, x86, SPARC and whatever the HP chip was, all at the same time. Much like Universal Binaries now. The old Mac “fat” binaries don’t even come close, IMO, to the elegance of the approach used by Mac OS X.
Thanks, well put. I forgot about the whole CFM thing….
Bottom line is, it was Metrowerks (Motorola, Freescale, what have you) that did not care. They did not bring it forward because they would have had to actually DO something, which they had not done in 5+ years at that point. I say good riddance to them (it was such a shame – circa 1996, using CW with PowerPlant was the tits. I loved the environment and the object-oriented framework. They rocked).
Cocoa is targeted at desktop development.
WPF is for the desktop, the mobile web, and the web itself (Silverlight 2 and 3 will support a limited set of WPF controls).
You guys are taking this article TOO seriously. It comes down to what you’re building and who you’re targeting.
Cocoa Touch is targeted at mobile.
I’ve had extensive experience developing with Cocoa, along with Java Swing, Win 32 API, and GTK. Here are some points I’d like to make:
1) GUI application writing should not require fast APIs.
Yes, Obj-C is not nearly as fast as, say, C++. However, sending messages dynamically at runtime makes writing applications very flexible. The speed benefits of C++ are not critical to writing code that handles menus. It is Obj-C’s message-passing system that gives Cocoa its incredible flexibility and rapid development.
2) Message passing determined at runtime is a good thing.
The author correctly points out that Obj-C makes it easy to extensively modify the behavior of the provided widgets, and that Windows’ APIs don’t allow much customization of behavior, which severely hampers them.
I once had to write some Swing code that uses JTables. It took me several hundred lines of code and many hours sifting through documentation to get pretty basic behavior. This sort of thing is much easier to do in Cocoa, and in much less code.
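To give a feel for the difference, here is a hedged sketch (the controller and its data are invented; the table, the column identifiers and the orders array would be wired up in Interface Builder or elsewhere) of everything NSTableView asks of a data source in Cocoa: two methods on an ordinary object, found by the runtime, with no subclassing of the table.

// A complete NSTableView data source.
#import <Cocoa/Cocoa.h>

@interface OrderController : NSObject {
    NSArray *orders;   // array of NSDictionary objects, one per row
}
@end

@implementation OrderController

- (NSInteger)numberOfRowsInTableView:(NSTableView *)tableView {
    return [orders count];
}

- (id)tableView:(NSTableView *)tableView
        objectValueForTableColumn:(NSTableColumn *)column
        row:(NSInteger)row {
    // The column's identifier doubles as the key into the row's dictionary,
    // so adding a column needs no new code here.
    return [[orders objectAtIndex:row] objectForKey:[column identifier]];
}

@end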
3) Speed-critical areas in an application can easily switch over to high-performance languages like C.
Obj-C easily allows mixing of languages. In fact, you can even have C++ code mixed in with Obj-C. So, if there’s something slow in the program, it can be replaced by C or C++. The best platforms are those that can be integrated with foreign platforms.
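A minimal sketch of that mixing (the class and the C function are invented): the method below is ordinary Objective-C handling the object-level plumbing, while the inner loop is plain C in the same file; renaming the file to .mm would let C++ be pulled in the same way.

// Hot path in C, everything else in Objective-C.
#import <Foundation/Foundation.h>

// Plain C: no message sends, no objects, just tight arithmetic.
static double sum_of_squares(const double *values, unsigned long count) {
    double total = 0.0;
    unsigned long i;
    for (i = 0; i < count; i++) {
        total += values[i] * values[i];
    }
    return total;
}

@interface SignalAnalyzer : NSObject
- (double)energyOfSamples:(NSData *)samples;
@end

@implementation SignalAnalyzer
- (double)energyOfSamples:(NSData *)samples {
    // Hand the raw buffer to the C function for the speed-critical part.
    return sum_of_squares([samples bytes],
                          [samples length] / sizeof(double));
}
@end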
4) Xcode isn’t great, but Makefiles can be used instead.
I love using vim and screen for development. I abhor even good IDEs like Visual Studio and Eclipse. I loathe Xcode more than all the other main IDEs. I feel I am very productive with vim and screen even though they have a much steeper learning curve than GUI IDEs. While Apple does not make it easy to develop Makefiles that compile a Mac OS X application, it can be done. Traditional Unix users should be happy they can use their typical tools and yet produce beautiful OS X applications.
I love using vim and screen for development. I abhor even good IDEs like Visual Studio and Eclipse. I loathe Xcode over all the other main IDEs. I feel I am very productive with vim and screen even though they require a much higher learning curve than GUI IDEs. While Apple does not make it easy to develop Makefiles that compile a MacOS X application, it can be done. Traditional Unix users should be happy they can use typical tools yet produce beautiful OS X applications.