Usually, when developers write articles about their experiences developing for a platform they have little to no experience with, the end result comes down to “they do things differently, therefore it is bad actually”, which is deeply unhelpful. This article, though, is from a longtime Windows user and developer, but one who hasn’t had to work on native Windows development for a long time now. When he decided to write his own native Windows application to scratch a personal itch, it wasn’t a great experience.
While I followed the Windows development ecosystem from the sidelines, my professional work never involved writing native Windows apps. (Chromium is technically a native app, but is more like its own operating system.) And for my hobby projects, the web was always a better choice. But, spurred on by fond childhood memories, I thought writing a fun little Windows utility program might be a good retirement project.
Well. I am here to report that the scene is a complete mess. I totally understand why nobody writes native Windows applications these days, and instead people turn to Electron.
↫ Domenic Denicola
Denicola decided to try and use the latest technologies and best practices from Microsoft regarding Windows development, and basically came away aghast at just how poor of an experience it really is. I’m not a developer, but you don’t need to be to grasp the severity of the situation after following his development timeline and reading about his struggles.
If this is truly representative of the Windows application development experience, it’s really no surprise just how few new, quality Windows applications there are, and why even Microsoft’s own Windows developers resort to things like React for the Start menu to enable faster and easier iteration.
This is a complete dumpster fire.

I would argue this is NOT native development.
I love C#, but .Net is not native. It might be a bit more native than Electron, of course; at least it can compile ahead of time to x86-64 code. But true native would be Win32/WinRT with C++.
For those issues with direct access to hardware or input settings… yes, that was one area C# was lacking in the API department. But as the article suggested, it has had the awesome P/Invoke mechanism since day one. (Something Java has been seriously lacking for decades)
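For context, a P/Invoke declaration is just an attribute over an extern method. Here is a minimal sketch: GetCursorPos and the POINT layout follow the documented user32.dll export, while the surrounding class and the Windows guard are illustrative scaffolding, not from the article.

```csharp
using System;
using System.Runtime.InteropServices;

public static class PInvokeDemo
{
    // Matches the Win32 POINT struct: two 32-bit signed integers.
    [StructLayout(LayoutKind.Sequential)]
    public struct POINT { public int X; public int Y; }

    // Declares the native entry point; the runtime resolves it from
    // user32.dll at call time, so this compiles on any platform.
    [DllImport("user32.dll")]
    public static extern bool GetCursorPos(out POINT lpPoint);

    public static void Main()
    {
        // Only actually call into user32.dll when running on Windows.
        if (OperatingSystem.IsWindows() && GetCursorPos(out POINT p))
            Console.WriteLine($"Cursor at {p.X},{p.Y}");
        else
            Console.WriteLine("Not on Windows (or call failed); skipping.");
    }
}
```

The declaration itself carries no runtime dependency until the method is invoked, which is why C# code can ship such bindings without any glue libraries.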
> .Net is not native.
Agreed.
.NET came about when the US DOJ was going to break MS into an apps company and an OS company. This was MS’s desperate ploy: a way to take its dev tools cross-platform and keep them competitive even when they were being made and sold by a different company than the company that made the OS.
It didn’t happen, and so .NET was never really needed after the legal threats went away. So after a strong start, it’s been a directionless mess ever since.
A few years ago I found a story about Microsoft acquiring the company that originated it, but I can’t find it again. Of course, MS is as free to edit Wikipedia, destroy evidence and cover its traces as anyone else…
lproven,
I think you are mixing a few different stories here.
The company that originated .Net is Microsoft.
The company they bought was Xamarin, which had built a .Net compatible open source ecosystem (Mono) for Linux. They rewrote that runtime, and released a new fully open source version (the current .Net core is open source) along with Visual Studio for Mac, which was also a rebranding of Mono’s IDE.
I’m not sure where this is coming from, but they were trying to do the exact opposite. They took multi-platform Java, and made it better for Windows. On that I’m not kidding. They wanted to replace Visual Basic with something modern, and built J++ for this purpose.
Of course Sun did not like it. But objectively it was a good language.
When they were told they could not play with Sun’s toys, they built their own. C# and .Net came out of that effort, which was closed. Again, until they bought Xamarin, and later built another clean open source implementation.
sukru,
I disagree with you here. Microsoft’s implementation was not in good faith and left windows users with performance issues and were taking steps to defeat Sun’s portability goals. I don’t know if you recall, but they did the same thing with javascript, the goal being to introduce capricious compatibility issues between netscape and the more popular IE browser (due to MS bundling). These outcomes were not incidental, it was classic embrace, extend, extinguish by Microsoft.
I’m not saying there were no fans of microsoft’s IE, J++, etc, but whether fans liked it or not there was no room to deny that microsoft were employing monopolistic tactics to crush everyone else’s technology. It would be wise not to forget just how bad things were for competition! People were disappointed the government didn’t do enough to break up the monopoly, but even so, if it weren’t for microsoft losing its antitrust lawsuits the world would be very different today. Microsoft’s tentacles would have a far stronger grip on new markets. One of the conditions for not breaking up microsoft was to keep antitrust auditors inside microsoft for the next 20 years. Obviously your disdain for government oversight is well established on osnews, but if not for antitrust oversight google would not have been able to compete against an unencumbered MS monopoly. Apple was already on the verge of collapse and microsoft literally gave them the cash infusion to survive.
I know we haven’t been able to agree on the importance of antitrust to keep corporate powers in check, but what happens over and over again is when corporations get too strong, they stop innovating and start relying on market manipulation and coercion to get their way. I think you may be coming around to agreeing this is happening with google. Even people who were big fans of google in the past are seeing how this sucks. Apple fans are learning this lesson as well. Dominant companies are no longer interested in innovation. All the talent and creativity they once had gets phased out, and it gets replaced with market dominance strategies to block competition from the market. The old saying hits the nail on the head: power corrupts, absolute power corrupts absolutely.
Alfman,
The way I look at these “monolithic” companies is a conglomeration of separate entities.
They might look like a whole, but they are internally not. And more often than not, they would be actively fighting against each other.
So, that “monopolistic” Microsoft that wants to crush competition… is a different branch (Windows) than the one that wants to reach out to developers (Dev).
sukru,
Sure, however the presence of different competing division heads within a company is not a mitigation for monopoly abuses outside the company.
In the 90s, prior to the antitrust lawsuits, 3rd party developers had to pay and agree to MS terms to get access to windows programming documentation. It was actually an antitrust case that opened MS programming APIs to the public. It’s not that obvious to me that microsoft would have done it voluntarily.
Of course I understand that all companies have insiders (ie developers or other staff) who genuinely care about the product and customers. Be it microsoft/apple/google/adobe/etc, all have some passionate employees who care about doing what’s right. The issue though is that the well-intentioned are not necessarily the ones running these corporations. Wall Street incentivizes a very different kind of leadership.
Alfman,
Yes, it is a constant tug of war, but I don’t think it is just employees vs management.
It is more of clash of ideals.
And specifically for Microsoft, the hardliners lost. And we got open source .Net, open source WinUI, Linux services, Visual Studio Code, a free basic tier of Visual Studio, free documentation, and whatnot…
(Of course it is not all unicorns, and rainbows. The team that has lost was in charge of Windows. It would not be a stretch to say Windows has not been in good stewardship in the last 10 years or so)
sukru,
The antitrust rulings have since expired in the ensuing years, but some of these things were mandated by the settlements.
https://en.wikipedia.org/wiki/United_States_v._Microsoft_Corp.#Settlement
I don’t like the direction either, but I think it has a lot to do with executives reacting to market realities. It was easier for past executives to satisfy Wall Street when the market was still growing. Now that the desktop & mobile markets are saturated though, corporations know they can no longer grow by adding customers, so instead they focus on squeezing more out of the existing customers. Of course they know most customers hate it, but recurring subscriptions with built-in price hikes are high on the list of changes that corporations desire. Locking us into subscriptions and walled gardens increases money flow even if customers get nothing out of it.
.NET was never intended to be cross-platform. This was a major point of contention for the Mono project (renamed Xamarin) right up until Microsoft bought it, because it was never 100% certain that Microsoft wouldn’t send in the lawyers to shut it down.
Athlander,
I would say, yes and no…
The teams in charge of .Net clearly wanted it to be multi-platform. Their JWS/Macromedia Flash alternative, Silverlight, ran on Windows, Mac and Linux platforms. And it was a (limited) portable .Net
They also released all specifications as part of ECMA. And later published a patent covenant.
Up until Steven Sinofsky left of course, there was constant doubt. The higher ups could pull the plug anytime. And they did. At least for Silverlight.
(Long story short, back then the Windows team basically beat the Dev team, had Silverlight shut down and replaced with WinRT and the C++/CLI + HTML based infamous “METRO” Windows Store applications. Dark times nobody wants to remember.
Scott’s face was all purple that day. Fortunately he had the last laugh)
Always has been.
It has not. Before the .NET craziness, it was super smooth.
> Before the .NET craziness
Define when that was.
.NET dates back to the late 1990s and was announced in 2000:
https://www.zdnet.com/article/microsofts-ngws-no-easy-answers/
Before .net? Win32 era?
Well, to be fair, it was super smooth if you used a RAD or some sort of framework. Win32 was, and still is, a pain to use; far from smooth.
Considering Microsoft has been evolving Metro/UWP APIs since they started development of Windows 8, I expected the APIs to be more fleshed out (and not require dropping down to the win32 APIs at all). But then again, I expected Windows to have a consistent UI by now, and it hasn’t. There is a reason most people don’t bother with Metro/UWP apps and why the Microsoft Store is a ghost town.
But anyway, here is the breakdown:
– App doesn’t need performance -> Use electron or some other “webpage and browser in a trenchcoat” solution
– App needs some performance and you also need managed code -> .NET (with C#) or Java SE (for .NET use the version that ships with Windows)
– App needs native performance -> C++, preferably using some framework like Qt over win32 (so you can port to macOS or desktop Linux if needed)
kurkosdr,
I believe UWP is no longer a thing:
https://news.ycombinator.com/item?id=28932345
We PC gamers refer to the whole stack built on top of WinRT as Metro/UWP/WinUI interchangeably. Also known as the part of Windows we hate. No need to track what Microsoft calls it this week, we still hate it.
kurkosdr.
WinRT is cool though, they learned their lessons and combined the best parts of Win32 + .Net in a native C++ environment.
UWP was an abomination of trying to fit the “web model” onto the desktop with Windows RT style sandboxed apps (not to be confused with WinRT).
Today they call it WinUI to avoid that very confusion, and the legacy distaste.
Basically a good wrapper of COM/OLE (which is the heart of Windows) + native C++ APIs + tooling / metadata from .Net.
The whole “web model” was only one of the issues. The other is that the whole stack built on top of WinRT is tied to the Microsoft Store with onerous sideloading restrictions if you want to install apps outside the Microsoft Store.
PC gamers are used to getting their software from a variety of sources (Steam, GOG, Zoom Platform) and trying to force the smartphone-inspired model (of an OS-blessed app store) on us is a major regression.
kurkosdr,
Again, that was a mistake, and they are not doing this anymore.
It is of course difficult to fix people’s perceptions, but it literally has no restrictions other than an MIT License
https://github.com/microsoft/microsoft-ui-xaml
(Yep, fully open source)
sukru,
I wish it were “a mistake”. However, executives inside of Microsoft probably don’t think the idea itself was the mistake; rather, failing to achieve it was the mistake.
Apple and microsoft both want this outcome, but rather than make the changes suddenly and face all the backlash, they’re learning how to normalize restrictions in incremental steps. It’s the boiling frog strategy, so to speak. Google are already ahead of the restriction curve with chromebooks, they’re currently pushing for more on android. Also adding restrictions into the chrome browser…they don’t want people to take notice, but the direction of the change is always the same: keep eroding freedom and independence a little bit at a time. Over generations the losses add up, but people will accept it.
I can’t reply to the correct comment, but sukru
According to the article, it’s still hard to distribute a non-Store app.
The lack of a consistent ui “by now” is kind of the point… They’ve introduced new things several times over the years, but also retain backwards compatibility so a lot of software continues using the old stuff anyway.
“Windows native application”
https://github.com/domenic/display-blackout says “Languages: C# 66.2%, PowerShell 33.8%”. 🙂
I’ve never found Windows application development to be particularly difficult. The Win32 API is fairly straightforward and nothing else is needed.
Would be good if the Linux world had a standard API.
Indeed, it shows in its fragmentation; even the shells are not completely compatible with each other. At least on Windows a batch file works the same on all Windows versions (minus extensions and delayed expansion).
You have compatibility at the lowest common denominator – eg the basic bourne shell. A script written for the basic bourne shell will work on linux, bsd, solaris, sunos, ultrix etc.
This is no different to a windows batch script, lowest common denominator compatibility.
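The “lowest common denominator” point is easy to see in practice. As a sketch, a script that sticks to plain POSIX sh constructs (no arrays, no `[[ ]]`, no bashisms) behaves the same under dash, bash, ksh, or a BSD /bin/sh; the function and names here are invented for illustration:

```shell
#!/bin/sh
# Portable Bourne-style function: only POSIX constructs
# (case, positional parameters, plain echo), so it runs
# identically under any sh-compatible shell.
greet() {
    case "$1" in
        "") echo "usage: greet NAME" ;;
        *)  echo "hello, $1" ;;
    esac
}

for os in linux bsd solaris; do
    greet "$os"
done
```

The moment you reach for arrays or `[[ ]]`, you are writing for one specific shell, which is exactly the batch-file-with-extensions situation on Windows.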
Well… the whole thing looks like an attempt to make Linux viable: if one couldn’t make the awful experience of developing Linux GUI apps easier, then one can make the experience of developing Windows apps even worse!
Not sure why Microsoft wants that, though.
zde,
Linux *had* some very good UI APIs, but all of them were squandered for one reason or another.
KDE was the first true Linux desktop. But the old Qt license made it non-viable for anything commercial, and the license change came too little too late.
I’m discounting OpenSTEP, TK, Xt, WindowMaker, Motif, and others as they were not Linux first, and to be fair they did not stand a chance.
Then GTK came, it was great in GNOME 2.0 era. Basically almost perfect. But… of course clash of visions meant it was broken, and now is a mess.
Okay, I partly lied. The original was Enlightenment. The best out there for a software-based renderer. Even today their CPU code rivals GPU optimized desktops. However, again… no large following (because their APIs are kinda… bad)
Anything else?
Nothing native. We had XUL, SDL, wxWidgets, U++, ….
In all that time Windows consistently had Win32 and its wrappers.
An ancient Windows 3.1 application using Win32s (the 32-bit extensions) from 1993 will still run today on a 2026 Windows.
(Most likely)
Cannot say the same for almost any Linux application at all (if X11 was still a thing, Motif or Xt would do that though)
No mention of FLTK? It looks native (I don’t know if it uses native controls under the hood…?) It has more of an older/classic look, which some people won’t like but it’s a plus for me.
https://www.fltk.org/shots.php
And it truly is lightweight. I absolutely love that it requires no external dependencies!!! Just one binary for the platform gets the job done. It solved the conundrum of how I should write custom software for windows customers. I didn’t want to use windows only toolkits, and yet I didn’t want it to look like out of place linux software either. And I favored C++ over C (although admittedly I think managed languages are better for GUI software than C++ is).
Here’s something I built for a client, this interface is quite dynamic. Looking back at it now it looks a bit phallic, haha. BTW it’s running under wine…
https://ibb.co/ZRMqpP0X
Hard to believe it’s been nearly two decades when I was starting contract work and this was near the beginning.
Alfman,
Yes, FLTK is cool. But, let’s be honest, how much of an impact has it made in the Linux application world?
Everyone was jumping from one GTK version to another, GNOME to Canonical’s Unity, and then back, but GNOME had moved on by then.
I really wish the community had chosen one “good set” of reliable UI toolkits, at least as a fallback.
The Linux way, or rather the UNIX way, meant the core would be separate from the UI. ffmpeg/cdparanoia/udev/alsa/… would handle the actual work, and people would use thin layer UIs on top of them.
Unfortunately those great years in the early 2000s were cut short.
(I’m not even going to discuss Electron everywhere + one or more versions of “flat packages”. We would all start crying from despair)
sukru,
Obviously it didn’t win a popularity contest, but with most things popularity begets popularity. A popular company can and will reach way more people regardless of the quality of their software. For better or worse, markets favor the giants and this is true even for FOSS.
I agree, we went in too many directions. The ability to branch is a strength of FOSS, but can also prove quite frustrating for users.
That idea resonates with me. There should be a layer between the functional and GUI aspects. Some projects kind of do this, but we don’t seem to be very consistent.
I understand why things like Flatpak are seen as sub-optimal, the bloat is very real. Yet it exists to work around the notorious dependency hell problem with 3rd party software that has plagued us forever. Even though I prefer more optimized software, I find that over time flatpak and appimage save so much effort over old-school dependency resolution that I can’t blame people for choosing these new self contained packages as the easy way forward.
Alfman,
It is funny… but I believe this is where Windows shines.
All this hoopla about Visual Basic, J++, and .Net is all about this issue.
COM
They managed to build a golden goose. The very basic Windows 3.1 model is still the foundation of everything today. And it works as a cross language programming interface.
Visual Basic was just a scripting language over COM/ActiveX
When it got stuck Microsoft saw Java as a legitimate replacement (infamous J++)
When Sun objected, they built .Net.
.Net was heavily tied into COM for system integration. (Most people miss this part)
And today WinUI is another layer on top of COM
Just like Google builds everything with ProtoBuf and gRPC and avoids fragmentation, Windows managed to make everything compatible with COM (and OLE + ActiveX + MFC + C++/CLI + WinRT + WinUI + … on top of it)
Linux so far only has DBUS as a standard cross framework interface. But it has no UI capabilities.
“Evil” idea: Let’s bring Registry and GBUS! to Linux
sukru,
Some of those technologies are hated by devs though. MFC, COM and ActiveX had their time to shine, but they only stick around today because the world remains encumbered by legacy software and APIs. Few software developers actually want to work with them. I’m currently working on an old legacy win32 application, and several members of the team have already quit over how aggravating the work is. Businesses don’t normally want to replace code that already works (“don’t fix what ain’t broke”), but maintaining legacy code can still be a nightmare. IE was built on that house of cards and even Microsoft decided it needed to be thrown out.
Modern languages have improved programming significantly, but there’s so much old technology cemented into the stack such that we often get stuck with it.
I actually do think Linux would benefit from a structured database for configuration. I could get behind well designed standards and tools for this, but it definitely should not be modeled on the registry. The Windows registry wasn’t great to begin with and has only become worse with the way Microsoft botched the 64bit transition. Every hack Microsoft adds becomes a burden on the future to maintain it forever, yuck.
Alfman,
That is the point.
We have high level wrappers around them. WinUI is one such thing. Visual Basic was another. And C# definitely leaned onto COM especially in early days (Windows.Forms)
Nobody writes raw COM with C++ macros anymore. Not even MFC or WTL is very popular today.
But the core stays. Whatever language / tooling you choose, it stays compatible with others on Windows.
Again, “Component Object Model”: its simplicity transcends time.
I think this was misunderstood. Windows “Registry” was for registering COM components. So that we could use IUnknown and its higher level cousins to instantiate objects
It becoming the de facto config store was just feature creep. Someone, somehow thought “these great INI files are no longer cool, let’s use something convoluted and opaque”
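Concretely, that registration layer was just a handful of registry keys mapping a class ID to the DLL that implements it, so CoCreateInstance can find and load the component. A hypothetical in-process registration might look like this (the GUID, name, and path are invented for illustration):

```reg
Windows Registry Editor Version 5.00

; Hypothetical component -- GUID, display name, and path are made up.
[HKEY_CLASSES_ROOT\CLSID\{12345678-ABCD-4321-9876-112233445566}]
@="Sample Widget Component"

; Tells CoCreateInstance which DLL hosts the class,
; and which threading model it expects.
[HKEY_CLASSES_ROOT\CLSID\{12345678-ABCD-4321-9876-112233445566}\InProcServer32]
@="C:\\Components\\SampleWidget.dll"
"ThreadingModel"="Apartment"
```

Everything else that accumulated in the registry over the years sits on top of this originally narrow lookup mechanism.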
sukru,
The thing is, just because it’s possible to use COM/ActiveX/whatever doesn’t imply that legacy components designed in the 90s are a strong foundation for robust software and operating systems going forward. The more experience you have with these technologies, the more I am convinced you would agree that they deserve their bad reputation. Modern language frameworks are better organized, safer to use, have transformative features, and are just more productive.
COBOL transactions are simple and have objectively transcended time longer than COM… but nobody is suggesting this fact makes it the better interface, right? Just because Microsoft kept it around for compatibility doesn’t really mean it’s the best solution going forward. Modern languages bring many improvements over COM and ActiveX, which have had notoriously bad track records. I’d equate it to driving down a road full of potholes. A good driver might get lucky most of the time, but if we don’t fix the problems we increase the risk of severe damage on bad days. This danger needs to be factored into COM’s merit. Frameworks that make good practices easy and bad practices hard are a pro. I assure you that’s not the case with COM/ActiveX; it’s always been a con.
So you’re focusing only on COM components in the registry. I actually don’t think this was the ideal way to do it because the registry could and did get out of sync with the active-x control file, especially if you needed to juggle different versions such as beta versions that weren’t yet ready for production. Even though the files were separate and run under different application instances, the fact that different versions had to share the registry would break this setup.
I haven’t built an ActiveX control in years, but it was very convoluted. Visual Basic did a lot of dirty things in the registry behind your back to try to maintain compatibility (ie changes to function prototypes). But it didn’t always work and your control could get out of sync with the registry settings… You could get these bizarre mismatch errors on an end user machine who happened to install a different fork. And since Visual Basic wasn’t smart enough to fix the registry issues, the easiest fix was to generate a new CLSID ensuring there’s no conflict. Just thinking about it is giving me a headache.
.net fixed most of these problems. You can even use reflection on arbitrary .net DLLs, no registering needed at all. I’ve used this several times to implement application plugins. You just place the DLL in a plugin folder and boom, done! No active-x registry complications needed. Want to run two versions of an application in different folders? No problem, that works as expected. I’m a huge fan of applications that just run without needing setup software or registry changes. The registry is highly antithetical to this.
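The plugin-folder approach described above can be sketched in a few lines of C#. Note that IPlugin, SamplePlugin, and the folder layout are made-up names for illustration, not from the article:

```csharp
using System;
using System.IO;
using System.Linq;
using System.Reflection;

// A hypothetical plugin contract the host application defines.
public interface IPlugin
{
    string Name { get; }
}

// Stands in for a class that would normally live in a dropped-in DLL.
public class SamplePlugin : IPlugin
{
    public string Name => "sample";
}

public static class PluginLoader
{
    // Find every concrete IPlugin implementation in an assembly and
    // instantiate it -- no registry, no COM registration step.
    public static IPlugin[] LoadFrom(Assembly assembly) =>
        assembly.GetTypes()
                .Where(t => typeof(IPlugin).IsAssignableFrom(t)
                            && !t.IsAbstract && !t.IsInterface)
                .Select(t => (IPlugin)Activator.CreateInstance(t)!)
                .ToArray();

    // For a real plugin folder, load each DLL first, then scan it.
    public static IPlugin[] LoadFolder(string folder) =>
        Directory.GetFiles(folder, "*.dll")
                 .Select(Assembly.LoadFrom)
                 .SelectMany(a => LoadFrom(a))
                 .ToArray();
}

public static class Program
{
    public static void Main()
    {
        // Demonstrate discovery against the current assembly,
        // standing in for a freshly loaded plugin DLL.
        foreach (var p in PluginLoader.LoadFrom(Assembly.GetExecutingAssembly()))
            Console.WriteLine(p.Name);
    }
}
```

Because the type information lives inside the assembly itself, two versions of the application in different folders never fight over shared machine-wide state.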
Alfman,
You need to decouple the implementation from the high level language
Take this Direct2D example in Rust:
https://github.com/microsoft/windows-rs/blob/master/crates/samples/windows/direct2d/src/main.rs
How does this work?
With COM magic
Direct2D (and cousins like Direct3D, DirectSound, etc) is designed as COM exported C++ classes. And Rust… or any other language can directly include them.
Even 20 years later they stay relevant, and code still compiles
The same cannot be said for say SDL1 programs on Linux (without significant effort)
ID2D1Factory never changes, .sonames in Linux always do
sukru,
Well, I never said it couldn’t work. You can take technology from Apollo Missions and make that work too. My point was more about merit. In every post you are pointing out where COM/ActiveX/etc are used, but I’m not claiming that to be wrong. What I am claiming is that new languages have improved significantly over the years and legacy APIs are objectively not able to make the best use of them.
I don’t deny there are DLL hell issues in Linux. But new languages have merit and I don’t think you’ve made a strong case for locking down programming interfaces to the 1990s for the rest of time, especially given their notoriety. I’m not looking for examples of where it’s used, but rather a justification for holding things back.
I concede we don’t need to be jumping on every single bandwagon as we wouldn’t get anything done, however the technology in question is three decades old now and there’s an opportunity cost in missing out on modern features.
Alfman,
Once again, we are talking about COM, which is not an API, but rather an ABI to be more correct.
It supports various languages and APIs on top of it; for example, C++ and WTL or WinRT, C# and .Net, Java and J++, Python and pywin32/win32com, Rust and windows-rs, and so on…
The ABI does not enforce (too many) restrictions on modern languages, code idioms and APIs. It just allows interoperability across program and operating system version boundaries.
Linux unfortunately has nothing comparable at that level. And that is very upsetting.
We have C ABI, GObject, DBUS and CORBA. None of which fits well.
sukru,
Of course, we build wrappers for absolutely everything including COM.
In my view neither does COM. So much language innovation has happened since the 90s, not only in critical ways like memory safety, but also with hugely helpful features like async methods. These are game changers. Just because we technically can go back and use technology from the 90s doesn’t mean we should. We should focus on innovating forward rather than backward. Obviously neither of us are going to change our mind, so agree to disagree?
Alfman,
We probably should.
But…. I have a stubborn vein.
But those are orthogonal concerns. COM is an ABI, not a language. An ABI that has stood the test of time.
Can be handled by wrappers like any other interop block. At high level you still get the benefits of modern languages like Rust, while the interfaces are automatically converted.
Example at:
https://github.com/microsoft/windows-rs/tree/master/crates/libs/windows
Look at the code, and see how easy it is to use a modern language with existing libraries. Concerns like allocation, reference counting, and argument checks translate automatically across boundaries. (Again, something that unfortunately cannot be done easily on Linux)
Actually, async/await patterns work perfectly with COM as well. If the components are marked as MTA (rather than STA), regular async calls work directly out of the box. I mean almost. They are thread safe, so a standard wrapper is added by C# and other languages with almost zero overhead.
And even for older components, you can easily wrap them yourself.
Wrapping example:
https://learn.microsoft.com/en-us/dotnet/csharp/asynchronous-programming/async-scenarios#:~:text=In%20this%20example%2C%20when%20the,in%20interacting%20with%20Task%20objects.
Again if they are marked as MTA (not STA), then regular async calls will work directly.
Basically WinRT is modern COM with IInspectable, IAsyncOperation and friends.
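For an older synchronous component, the wrapping mentioned above boils down to Task.Run: offload the blocking call to the thread pool and await the result. A minimal sketch, where SlowComputation is an invented stand-in for a blocking legacy call, not a real COM method:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public static class AsyncWrapDemo
{
    // Stand-in for a blocking legacy call
    // (imagine a synchronous COM method here).
    public static int SlowComputation()
    {
        Thread.Sleep(100); // simulate blocking work
        return 42;
    }

    // Wrap the blocking call so callers can await it
    // without freezing their own thread.
    public static Task<int> SlowComputationAsync() =>
        Task.Run(() => SlowComputation());

    public static async Task Main()
    {
        int result = await SlowComputationAsync();
        Console.WriteLine(result); // 42
    }
}
```

This is backward-compatibility glue rather than true asynchrony: the work still occupies a thread-pool thread for its duration, which is the distinction drawn earlier between wrapped blocking calls and natively async interfaces.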
Anyway… sorry for the longer rant.
Okay, okay, one final thing…
This is also why Borland Delphi failed. Or rather… its lack of such a stable ABI did.
None of the TPU/BPU/BPL components you built, or even bought, would work with the next version. Every year, your software would become obsolete.
sukru,
It changes what language features are supported by programming interfaces.
Really though? Aside from legacy software, it’s practically non-existent in modern software development, and especially not in portable software. It was big in its day, but the software world has moved beyond it. It’s weird to be promoting COM/ActiveX now when most of the world soured on it decades ago.
That’s the equivalent of wrapping every programming interface in “unsafe” sections. Your own example shows this as well. We do it when we have to, but it’s a really bad practice that modern languages are trying to phase out and should be strongly discouraged. Earlier when I spoke about making good practices easy and bad practices hard, this is what I was talking about. COM makes bad practices easy and good practices hard.
learn.microsoft.com is offline right now so I couldn’t view your example. The benefit of async and coroutines is that they don’t need threads and don’t have the shortcomings of MT code. This was a nice evolutionary step and COM isn’t compatible. You’d have to use threads to achieve this with COM, but it isn’t really the same thing.
I feel you’re going to suggest we create multi-threaded wrappers around COM objects. We could do that but it adds more complexity and overhead compared to modern async methods. It’s ok for backwards compatibility reasons, but for forward looking solutions I’d rather programming interfaces that natively offer feature parity with modern languages with no need for wrapping.
It’s fine, but I still don’t think we will agree.
It does have a standard API for many things.
In Linux gaming we have. Win32.
Jeffrey Snover, the man who designed and built Powershell, agrees with Domenic Denicola:
https://www.jsnover.com/blog/2026/03/13/microsoft-hasnt-had-a-coherent-gui-strategy-since-petzold/
It is interesting to see former MS people starting to come out of the woodwork as Copilot crashes and burns and Satnav Nutella’s grasp on power starts to slip.
The important part is even highlighted there — but that’s never expanded: “you either have a Plausible Theory of Success that covers the full lifecycle – adoption, investment, maintenance, and migration – or you have a developer conference keynote”.
That’s something that happens in all companies that achieve a monopoly position: people stop thinking about what may happen if they fail, and start only thinking about what would happen to them, personally, if they succeed. And, well… “a developer conference keynote” is better, in that regard. Just look at all these graphs and try to imagine how many people got promoted, got bonuses, got nice things… and compare to the “boring alternative future” where Win32 and, maybe, MFC are steadily improved. See who it may help and why?
The same exact thing happens with Android, and even macOS (it’s maybe not a “true” monopoly, the courts have never decided, but “our developers” are nicely isolated on an Apple-only platform and couldn’t easily leave it).
Not only problematical for normal Windows-developer.
Also problematical to run Windows-programs on other systems.
WINE supports Win32.
MFC, Windows Forms and WPF are based on Win32, so they can in principle run on WINE.
But WinRT/UWP was developed differently. It isn’t based on Win32, and WINE currently doesn’t support it.
There is, for example, a port of the UWP Windows Calculator to Linux via the Uno Platform:
https://platform.uno/blog/windows-calculator-on-linux-via-uno-platform/
But it is slow. And if a program mixes Win32 API and UWP API calls, it will not run on Linux.
Also, the Windows Calculator is mostly written in C++, and the port to Uno is a port to C#.
There is currently no way to run a Windows C++ UWP program on Linux.
theuserbl,
Again, UWP is no longer a thing, and is fully deprecated.
They switched to WinUI 3, which is open source.
The answer to this is to never touch the latest and greatest. No matter how good it looks, or how committed the OS vendor seems to be. Give it a few years to settle down and not disappear or be quietly abandoned. If it does work out, there will probably be options to help with the switch, but don’t expect downgrading to ever be easy.
This is also vital with Apple: using any of their new “magic” technologies that are going to “revolutionize software development” comes with a big risk of some suit suddenly deciding to cut their losses. I got slightly burned with QuickDraw3D, but dodged OpenDoc.
QuickDraw 3D came too late. Mac OS X brought OpenGL and then Metal.
QD3D was a complete 3D library that came out 5 years before OS X. OpenGL is mostly just a graphics interface: by no stretch did it (or could it ever) replace QD3D. IIRC, OpenGL was supposed to become a rendering plugin for QD3D to give access to non-Apple video cards. It ended up as a way for Apple to stop supporting its 3D API without admitting it.
QD3D was pushed so hard for so long that everyone believed it was safe. Anyone who relied on QD3D’s scene graph, file format, gaming support, QTVR, etc, was cut off at the knees (or ankles.) I avoided being caught because I had created my own 3D API before QD3D was released. The only wasted effort was supporting the 3DMF file format.
Metal I can’t talk about: I quit using or developing for any Apple products as a result of OS X and the corporate culture that created it, long before Metal appeared.
A long time ago, quite literally from the dawn of NT, I specialized in native Windows applications and their native UI/UX. I still have some of the original Win32 system services books from that era, and believe it or not, there is even some software I wrote – for government – still running in Europe.
It was a highly performant and reasonably well-defined architecture. You could understand it all from just a couple of books. Even as shell integration and COM came in, you could still grasp ‘the vision’.
I did this all the way through the MFC days. After that, Microsoft just lost its way with an explosion of languages, APIs and different architectures. Today you can’t even tell what’s current and what’s not, and much of it is poorly maintained, poorly understood – even by Microsoft – and often just not very performant. And it’s frequently subject to weird quirks as the underlying platform evolves.
These days, if you want consistency and performance, it’s essentially a custom framework running right on top of the 3D rendering. I bought You.i for AT&T for this very reason. For more prosaic solutions I build desktop apps in Go and Wails with an HTML front end. It doesn’t look anything like whatever the visual default is, but since the advent of web applications no one really cares, and I get consistently solid performance, compact packages, and no DLL hell. Works great on Mac too.
I seem to be the only one who likes Windows 11.
But considering they came to their senses, the path is clear: WinUI 3 with C++ or C#. I’d like to try creating a WinUI frontend for Transmission.
The macOS version is nice, the GTK one too. But the Qt-based version that is available on Windows is ugly.
A heavily debloated installation of Windows 11 is actually quite nice. Windows has a solid core and security model – Dave Cutler is a smart guy, after all. Hyper-V is great and performant, and the backwards compatibility is excellent – I use Windows 11 to run a film scanner that went out of production in 2005. I load the SilverFast driver that was not supposed to even support that scanner, paired with the original Minolta software, and it all runs perfectly: color management, autofocus, infrared dust removal, etc. I can also run my old FireWire sound cards that stopped working even under macOS.
But the bean counters keep adding stuff no one is asking for, and redefining the UI/UX in the schizophrenic way Microsoft is doing is no good. File Explorer is laggy. The Start Menu is laggy. There’s no easy way to stop useless services, and some of them keep coming back online after a reboot just so I can stop them again.
But the core is indeed solid. If there would be no clownstrike and anti-cheat crap running in the kernel, etc.. and also the whole thing of applying a distributed highly granular security model to hundreds of thousands of objects is quite incredible (active directory is great).
BUUUUUUUUUUUUUUUUUUUUUUUUUUT
After one of the updates last week, one of my laptops is blue-screening and won’t boot even in safe mode… so hey, how about slowing down with the crap a little bit and focusing on making the OS better and more stable?
Shiunbird,
My luck has been worse with hardware driver compatibility. So many printer/scanner/fax machines only work as basic printers. I’ve lost scanners and video capture cards after upgrading Windows too.
https://www.osnews.com/story/135814/microsoft-reportedly-shows-full-screen-windows-11-upgrade-ads-with-two-yes-buttons/
I’ve also encountered drivers containing time-bomb logic to intentionally bork older hardware. Prolific is guilty of this, and of using driver error messages to sell new hardware. The old drivers work fine even on Windows 11; however, given Windows 11’s propensity to upgrade drivers without permission, it leads to hardware breakage that is difficult to avoid. While this is Prolific’s fault/intention, Microsoft is the one automatically distributing and installing this malware. So I think Microsoft needs to step up and ban Windows updates that contain these kinds of kill features.
I’ve found that most people don’t like the changes, but people always have different opinions about UI. However, I’ve definitely noticed major performance regressions in Explorer and the Start Menu too, and I don’t really know what’s responsible for it.
This goes back further than Windows 11, but yeah, Microsoft has been using updates to override user settings. It does suck.
Luckily I haven’t seen a blue screen in ages, but I do have other problems such as an HP laptop waking up and draining the battery when it’s supposed to be hibernating.
I might be the lucky one. Maybe it’s because I have an HP tower from their pro line (i7-8700) and HP support is good: there were recent BIOS and firmware updates. Windows 11 runs perfectly (32 GB DDR4).
As a musician I still need Windows (or macOS) because of proprietary software that has no FOSS counterpart or even a commercial Linux version. Otherwise, my media center runs Ubuntu LTS.
I’m a former Mac OS X user, so I like when things look nice. But I think my little project to make a native WinUI version of Transmission, like they did for macOS, won’t be easy. I will have to figure out how to bind the UI to their C++ code!
It’s not laggy there because it’s my workstation and it has enough RAM. They admitted Win11 used too much RAM. They also don’t want to force-feed AI as much as before. Management was tired of hearing about “Microslop”, I guess. Copilot will be removed from Notepad and Paint. There’s hope that working on the innards and converting as much as they can to WinUI with C++ or C# will be considered seriously. I could use some good news these days.
I too like Windows 11. The UI is, for my taste, worlds prettier than Windows 10 – I like the rounded corners, soft gradients and translucent effects – and on my 3-year-old AMD laptop, it all feels plenty snappy. The first versions were a little rough around the edges, and it took them forever to make the widgets, start menu and taskbar customizable – but now I’ve got the widget and start menu recommendations turned off, and the taskbar set to the bottom left-hand side with text labels and app window grouping turned off, just like I had Windows 7 set up back in the day and just like I have my KDE set up, too – it’s unobtrusive and requires a minimum of clicks to get a handle on whatever’s open.

I turned off the telemetry and AI stuff and have essentially forgotten it’s there – it’s honestly not hard if you take 5 minutes and use a guide. MS Store and Windows Update work fine for me; it only took them like 20 years to get it more or less right. 99% of my app updates are done via UniGetUI with the Winget backend, which works a treat as well. PowerToys are the shit and get better with every release, though tbh I only use like 5% of the potential functionality. Terminal is modern and does everything you could want; the only necessary addition there is to install clink (via UniGetUI, of course), so I can have command history, completion and highlighting even when using cmd. The only thing I don’t use is Hyper-V or WSL2, because I’m a Hibernation die-hard and unfortunately Microsoft makes you choose one or the other, AFAIK.
Do I think it’s better in every way than Linux? No. Fedora with KDE is a smidge snappier, and offers a lot of software that’s simply not available on Windows, not to mention being worlds better for any kind of web dev work. But for me, Windows is a tad more polished and a tad more stable when it comes to video playback within Firefox, and more compatible with a variety of printers/scanners without having to jump through hoops – which admittedly seems like a niche use case, but it has come back to bite me in the ass a few times when I’ve been using other people’s devices while traveling. Also, Windows lets me use Hibernation (after jumping through some hoops), while Fedora for whatever reason still doesn’t – or the hoops to set it up are too convoluted for any mortal.
Actually, my main reason for keeping Windows around is music production – otherwise I’d probably have bitten the bullet and switched to Linux full-time long ago. But as things stand, I find Windows 11 to be an extremely solid daily driver, more flexible and nicer to use by far than macOS at this point (and that’s saying something, considering I was a macOS die-hard for something like 15 years, until around 2020).