We recently announced Bash on Ubuntu on Windows, which enables native Linux ELF64 binaries to run on Windows via the Windows Subsystem for Linux (WSL). This subsystem was created by the Microsoft Windows Kernel team and has generated a lot of excitement. One of the most frequent questions we get asked is how this approach differs from a traditional virtual machine. In this first of a series of blog posts, we will provide an overview of WSL that will answer that and other common questions. In future posts we will dive deep into the component areas introduced.
The subsystem relies on ideas and technologies developed as part of Project Drawbridge (more details).
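A minimal sketch of what that looks like in practice, assuming a Windows 10 build with the WSL beta enabled: the Ubuntu user space runs directly against the NT kernel’s syscall-translation layer, so no second kernel boots and the Windows drives are visible from inside the Linux environment.

    REM From a Windows command prompt, launch the Ubuntu user space (no VM starts):
    C:\> bash

    # Inside, the translation layer reports a Linux-style kernel identity,
    # even though the only kernel actually running is NT:
    $ uname -sr

    # Windows fixed drives are mounted into the Linux filesystem view:
    $ ls /mnt/c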
“This subsystem was created by the Microsoft Windows Kernel Team…”
🙂
I wonder how many resources MS will dedicate to keeping their implementation of Linux system calls up to date.
I’d be a bit leery about relying on it for anything more than a Cygwin replacement right now.
They have to. A lot of younger devs don’t work primarily on Windows and a lot of the newer web tech just doesn’t work well with Windows.
A lot of younger developers will not commit to learning ‘One Microsoft Way’ intricacies.
A lot of developers in shops that weren’t into Macs were spinning up Vagrant instances or VMs on Windows and using Windows for window management.
Most web dev shops (I contract as a .NET dev) are a mix of iOS, .NET, Node, and PHP. Unfortunately Python doesn’t seem to get a look-in in the UK, for reasons I cannot fathom, as I’d rather be spending my time building my web apps in Python – everything takes me about half the time it does with .NET.
So, is this supposed to work on Windows Server? And Microsoft expects people to run Linux servers on it? Not sure why they wouldn’t just run on Linux.
This is different from Docker on Windows, which makes total sense to me. Well, more sense than this does, anyway.
I’m probably too biased to even consider the implications of linux on windows. So I should probably just leave it alone.
No, it’s meant for developers to develop on Windows, and deploy to Linux.
http://www.hanselman.com/blog/DevelopersCanRunBashShellAndUsermodeU…
Redis is used as an example because it is a PITA to run under windows.
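A rough sketch of that workflow, assuming the stock Ubuntu redis-server package and its default port:

    # Inside the WSL bash prompt, install and start Redis the same way you would on Ubuntu:
    $ sudo apt-get update
    $ sudo apt-get install redis-server
    $ redis-server &

    # Then talk to it from another bash prompt:
    $ redis-cli ping
    PONG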
The way I see it, this is mainly a Cygwin replacement. In all the places where you’d normally have to use Cygwin today, you’ll just launch this new bash.exe instead.
The most common use case is development systems that were built on Linux with no care for compatibility with Windows. There are countless non-web examples of such systems too. Of course you CAN also use it to run a server, but I don’t think that is its primary function.
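As a sketch of that drop-in-for-Cygwin usage (the project path below is just a placeholder, and I’m assuming bash.exe passes its arguments through to the Linux bash), you can point the GNU tools at a Windows-side checkout through the /mnt/<drive> mapping, or pull in extra tooling with apt:

    REM From cmd.exe or PowerShell, hand a one-liner to the Ubuntu userland:
    C:\> bash -c "grep -rn 'TODO' /mnt/c/Users/alice/src/myproject | wc -l"

    REM Or drop into an interactive shell and install whatever you'd otherwise get from Cygwin:
    C:\> bash
    $ sudo apt-get install git build-essential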
What annoys me the most about this announcement is that it doesn’t really solve the “console situation” in Windows. Running this bash will feel like entering an alien system just as much as it did with Cygwin. I find it really attractive in OS X that something like Terminal.app + the BSD utils + brew feels like a natural part of the OS, while Cygwin totally does not.
I use CMDER + PowerShell so for me there isn’t a “console problem”.
But it will be modern GNU utils. BSD userland feels like I’ve time traveled back into the early 1990s…
I rather like the BSD userland. It’s far more consistent. The OS X-specific commands, though, I must say, bear no resemblance to most *NIX commands, whether BSD or otherwise.
It’s not consistent with GNU.
If you are using OS X or the Windows Subsystem for Linux to develop an app that will be run on Linux in production, you should probably choose Windows.
Of course, if you are deploying to a *BSD, then a Mac is your friend.
To be fair, GNU is not consistent with GNU half the time. At least the days of having to play “guess the bzip2 extraction switch” are gone from GNU tar though.
It’s not just young devs mate!
I got jacked off with MS’s tactics back in the ’90s and stayed annoyed with them until I finally saw what real competition has done for them (Google/Android in the mobile space – man, what a bloodbath. MS just can’t compete!)
I got annoyed with the FUD, with the “embrace, extend, extinguish” crap, which left me with unportable code. I got annoyed at the, generally, crappy nature of Windows (I stuck with Win2K for a decade – it was that much better than the alternatives!).
Most web coding is certainly not based on MS platforms and even application coding isn’t anymore.
I jumped ship in the early noughties and picked up Qt to go cross-platform, and then bought Macs as they are, nearly, the perfect balance of Unix with a properly good GUI.
I’ve been coding/hacking for pay since ’87. Certainly no young ‘un. Just felt MS could only inflict their tactical (convicted monopolists and market abusers) and technical (Vista/ME anyone) crap on me so many times.
However, C# is damn good and LINQ is a work of genius. What with them porting these tools to other platforms, I may well jump back on board for those tools alone. I’ll leave the GUI and graphics to someone else.
This isn’t really my field so I may just be being ignorant here, but as potentially cool as this is, I don’t see it delivering what MS wants in the long run: attracting system admins back to the Windows platform.
If you’re already running a LAMP stack what benefits are there to switching to running a LAMP stack inside windows?
Does the average Windows user have a burning desire to run the latest Linux killer app? Isn’t Linux itself the killer app?
Hell no! I’ve administered both UNIX-based systems (including various flavors of Linux in that) and Windows Server (what I do now). Of the two, Windows is decidedly more complex and harder to maintain. It’s as if they take pride in making it as complex as possible just for the sake of complexity. It’s reliable once it’s all up and running, but the instant you have to do anything to it, you’d better be prepared. What would be a simple configuration change on *NIX often becomes many separate tasks spanning multiple tools in a Windows environment. Sure the GUI tools look more polished than the CLI or the Linux GUI tool equivalents, but once you start to actually dive into them they are a quagmire of absolute inconsistency. On top of that, those who say *NIX errors are cryptic have never seen Microsoft’s error codes. I still don’t know what error code 0x000001904 is; seems they forgot to document that one in their KB.
Just grep the source code for that.
No, wait…
Is there any chance that being unfamiliar with the OS means you are less likely to know the solutions to common issues?
I suspect this is the real problem rather than anything that is endemic with the platform itself.
Don’t be an asshole. I gave a specific example of complexity. So, what’s error code 0x000001904 in the context of the DNS subsystem, since you’re so ready to accuse me of unfamiliarity?
Then don’t say stupid shit like winblows.
I am not going to argue over stuff like one KB article you couldn’t find. I am sure I can find missing man pages on *nix systems.
Winblows.
Your mum
You must be a really bad Linux guy if you don’t know how to fricking use Google; it’s a permissions issue. Basically the system has permissions but YOU DO NOT.
Do I really need to cite the “imaginary problems kill Windows” TM from TM repo dated 2006 because you can’t figure out how to spend a whole 3 seconds using Google? Sheesh.
BTW, this “problem” he is bleating on about? Takes all of 4 seconds to fix and is 100% a case of PEBKAC, because he didn’t bother giving himself proper permissions. It’s no different on Linux- and BSD-based systems: if you don’t have proper permissions, you can’t perform certain actions... duh.
Where did you find this info? I “Googled” for it (and even “DuckDuckGoed”) without practical results (and, yes, I took out the leading zeroes and inserted the DNS clue to try to get better results).
Not really, I’ve spent significant quantities of time administering both Windows and Linux servers.
The OP described the situation accurately.
I agree with you.
I’ve been administering Windows since I started in the profession when I was a kid in the late ’90s.
The hardest thing for me when administering a Windows server compared to a Unix server is the unknown. What I mean by this is that I can make a configuration change on a Windows server and sometimes, for some unknown reason, something else breaks.
I think a lot of this is the layers and layers of crud that have collected over the years. Personally, this seemed to begin with Windows 2000. Windows NT very much had a “change this and only this occurs” behaviour, like UNIX; however, Windows 2000, especially with the .NET stuff, started introducing weird bugs and a lot of added complexity, and so on until we arrive at today.
The other thing, which is getting a little better but was very disappointing, is the lower QA on patches, which started occurring a year back.
Don’t get me wrong, I am a big Windows 2012 R2 fan; it’s great, stable and works well when set up. However, with all of the tech and cruft in it, it does give me the feeling of a house of cards when making changes to it.
I believe it’s something Microsoft have recognised, which is why they are pushing stripped-down, stable, controlled versions of Windows Server such as Server Core and Nano.
I find with Unix that there is usually a good reason why something has stopped working or I can’t get it working; there’s something more logical and not so ‘black box’ about maintaining those systems.
I’m hoping that with the direction Microsoft is taking with Server Core, Nano and related technology, we will arrive at a point, as with Unix, where things are more logical.
(The next step would be for Microsoft to apply this methodology to their server products like Exchange/SharePoint; however, with O365 I wonder what incentive they have to push their on-premises server products.)
Because there are still a lot of businesses that don’t want to just hand their confidential data over to Microsoft, or who don’t want to outsource their support to someone like Microsoft. Sure they’re great now, but who knows how well they’ll support each customer five years down the road, or ten. The failure of a small business’ Exchange server wouldn’t even be a stinging gnat to Microsoft, but can easily mean the death of that business.
It looks to me that this is more aimed at developers rather than sysadmins. My guess is they want to support Docker containers without needing a Linux VM running underneath.
A lot of web dev tooling doesn’t work properly on Windows, or there are big problems getting it to work there. This is partly because most of the people creating the tooling use Macs.
Fine with me. I would too, in their place. OS X is a breath of air every night after dealing with windblows server.
Lost any credibility with “Winblows”.
Why don’t you f–k off to slashdot.
Microsoft have the best server-side web dev tooling; now they are making sure that those who need the front-end tooling that usually only works properly on MacOS or Linux can use it via an officially supported mechanism.
Windows Server is irrelevant to any of this.
There is much more to server side dev than web tooling.
I build complex systems that run multi-billion-dollar installations. If these go wrong then we could see major loss of life especially as there are large quantities of very flammable liquids moving around.
For us, Server 2008 R2 was the best in class. Since then it has (in our opinion) gone downhill rapidly. Vast acres of whitespace in the GUIs. Several tools required where before one was all that was needed. Etc., etc.
I too use OS X at home after fighting Microsoft’s idiosyncrasies all day. Simple, gets the job done and keeps out of the way. Just like my operating systems lecturer told me more than 40 years ago.
If MS keeps going the way it is with their server OS, then Server 2014 will be the last one we use. We already have a pilot plant running on RHEL. We may well jump ship in 12-18 months.
Just my opinion on the matter. Not worth anything, but I just want to show that there are at least two dissenters here from your POV.
Surely you are the one responsible for that multi-billion installation on Server 2014 that gives you trouble with all the whitespace… Obvious troll is obvious, and none of this has anything to do with WSL, which only runs on client versions of Windows (although I think they will reconsider, given the enormously positive response).
It doesn’t make a lot of sense to run WSL when most shops would deploy these to a *nix environment in the cloud and take advantage of the cheaper pricing of Linux stuff.
“If these go wrong then we could see major loss of life especially as there are large quantities of very flammable liquids moving around. ”
You need human redundancy there, besides digital.
So you can replenish losses. </JokeAlert>
I was obviously talking only about web development (because that is what I care about), and because this tooling is so obviously aimed at web developers who have to work in environments where both *nix and Windows are being used.
That “breath of air” is my face hitting the desk as even after 16 years, OS X is still several ticks less responsive than Windows.
😉
Hopefully you don’t surf this same way.
Yuk, Windows has improved a lot since those days. With this change, I’d prefer Windows over OS X.
That’s correct, it’s a developer tool:
https://blogs.windows.com/buildingapps/2016/03/30/run-bash-on-ubuntu…
“Second, while you’ll be able to run native Bash and many Linux command-line tools on Windows, it’s important to note that this is a developer toolset to help you write and build all your code for all your scenarios and platforms. This is not a server platform upon which you will host websites, run server infrastructure, etc. For running production workloads on Ubuntu, we have some great solutions using Azure, Hyper-V, and Docker, and we have great tooling for developing containerized apps within Windows using Docker Tools for Visual Studio, Visual Studio Code and yo docker.”
What benefits are there in running it under OS X? OS X offers some level of compatibility, but it does so in the context of a desktop/laptop operating system, and ended up being very popular among LAMP developers. In theory, this can do a similar thing but with higher compatibility than OS X.
Did you notice? They mentioned a number of subsystems they have done over the years (like POSIX), followed by this statement:
In other words, Cygwin and MinGW will still be needed, since they have no intention of maintaining any of the subsystems other than their own.
“We describe a working prototype of a Windows 7 library OS that runs the latest releases of major applications such as Microsoft Excel, PowerPoint, and Internet Explorer”
There are many references to the host OS kernel in Drawbridge being NTOSKRNL, but given the roots of the project in creating Azure VM-lites, I can only hazard a guess that somewhere in MS labs there are Linux-hostable versions of this Windows 7 library OS too — does anyone have any evidence suggesting, confirming or denying this?
I’d love an “official” Windows 7 SuperWINE to be released or leaked out one day…
Why would Azure VM-Lites imply that there’s a Linux version somewhere?
Oh I’m mostly looking for wish fulfillment.
But a bit like how OS X had an x86 version cooking on the R&D back burner for a good while before it saw the light of day in public – I wouldn’t be at all surprised if MS had an experimental version of the library OS spin of Windows 7 on the boil (on the Linux kernel rather than the NT kernel, as officially shown with the Drawbridge project).
It seems much more efficient than VMs on the one hand, and on the other it could be an insurance policy if, heaven forbid, they decided to move to a kernel other than NT, even for a subset of devices, mobile/tablet – an Ubuntu/Windows phone (with a Linux kernel and command line, but Windows GUI and behaviour when docked).
Currently very unlikely, I know. For now, locked-down systems, walled app stores, etc. seem to be in vogue with the hardware vendors.
But who knows which way the tide of public favor will turn. Windows, not just Office, as a service atop a minimal FOSS core? I know Microsoft have the Singularity project and no doubt other experimental kernel options before ever looking to the “insidious” Linux kernel, but that’s discounting the potential attractiveness – admittedly mostly to a nerd subset now, but maybe a growing security- and privacy-aware subset in future.
If they can engineer the current Linux translation engine and libraries for Bash and Ubuntu on Windows, they can probably manage the reverse for Windows on Linux. I mean, the upper layers looked to be in place already, at least at an alpha level, even with Drawbridge in 2011. So it’s only the kernel-translation pieces still to do.
Given the recent work architecting the Linux-on-Windows stuff, I’d almost be surprised if the reverse hasn’t been done at least as a proof of principle – it could even be beyond that, as a bona fide insurance option…?
apologies for being the opposite of succinct there
Gotcha. At first, I thought you were suggesting that Azure was running on Linux KVM or something related, which is something I’ve seen repeated over and over again in all sorts of places.
Which, for the record, is wrong. It is a modded version of Windows Hyper-V Server 2008R2.
But if you’re just going to run Windows on top of it anyway, what good would it do you to run it on Linux because of privacy concerns? Most of that telemetry crap isn’t in the kernel, but in the userland.
I guess I was thinking along the lines of a cut-down Win 7 (or Win-7-like) Windows library OS that could be spun up on – maybe not Azure, but other cloud infrastructures, or Linux servers/desktops alike – for occasional Windows program needs.
…A fantasy telemetry-free Windows compatibility layer – mostly for Office, some lite SQL, VS, media-editing apps?
You’re right in reality of course.
Something like this also existed on the Linux side, called Longene, or the Linux Unified Kernel:
https://en.wikipedia.org/wiki/Longene
Sadly, it looks like development has stalled. Hopefully development picks up again someday.
I personally find it more interesting to get Windows applications working on Linux via Wine or PlayOnLinux than running Linux apps on Windows, since Windows already has ports of nearly all Linux apps and Cygwin has always been available too for the CLI tools.
I’ve been working on a script to help integrate my headless Windows 10 VMs into my Arch Linux desktop. Windows 10 x64 can run in a VM with as little as 256 MB of allocated vRAM, and quite well with 512 MB vRAM and above. Here’s an example of two separate headless VM instances running simultaneously with said script:
https://www.youtube.com/watch?v=hy1KaOt74Ys
Whenever you talk about bash, you have to take into account that it was designed in the ’70s. There are many ways to start programs and use admin tools with better technology than bash. PowerShell is a very good example, with its ability to call methods on freshly created system objects.
And it’s only available on Windows.
PowerShell is almost certainly coming to Linux (and OS X) as well. Microsoft doesn’t have entirely clear goals with any of their recent steps except for the one big underlying trend: run Microsoft software and tooling everywhere.
If Microsoft truly expects to create a ‘Windows’ culture within the Linux ecosystem…
Unknown new Territories. Not because of the Players. Because of the Plays.
Really wonder about Near Term Horizons.
Why? PowerShell has literally ZERO value proposition for either the Linux or OSX world. Plus it is too bolted onto NT’s intrinsics to make the effort worthwhile for Microsoft.
tylerdurden,
I don’t really see why the logic wouldn’t go both ways: PowerShell has the same value proposition for Unix as bash has for Windows, particularly compatibility and familiarity.
While you and I don’t want to use a Windows environment under Linux, it could help Windows users feel more at home on Linux. With that said, I’m not sure what value this would have for Microsoft. They’d want to make the Linux->Windows transition easy, but Windows->Linux hard.
The lower in the stack, the more immovable it has to be. And nowadays, ’70s Bash is NOT highly related to current Bash, even if the calls look alike.
First off, there is a project to create an open-source PowerShell implementation (http://pash-project.github.io/). They’re not very far along, but it’s still a direct counterexample to your last statement.
Secondly, it’s worth considering why POSIX sh has stuck around so long. It’s not just the legacy compatibility, but the fact that it works, and that alone is more than enough reason not to change it. For what it’s worth, there are a lot of people using shells other than bash (I and quite a few people I know use zsh; RHEL (at least last I checked) uses tcsh for Sun compatibility; many BSD systems use ksh; Debian has dash; etc.).
PowerShell has its own issues though. There are some critical system tools in Windows that can’t be used under it (bcdedit, for example), it has issues with quoting when invoking sub-shells, and it relies pretty heavily on object-oriented constructs, which in turn makes things slow (the number of cases where OO code is faster than procedural can be counted on one hand).
The shell itself does not dictate the administrative tools as much as the OS it runs on, and how that OS handles things. I’m personally fine with PowerShell other than what I mentioned above and the fact that it often takes more typing to do something than it would in sh. I stick with Linux (and as a result zsh) because I have issues with the management tools in Windows itself, not PowerShell. If my system dies, I want to be able to put it back together again without needing a GUI and installation media which inexplicably does not include administrative tools.
Just be glad though that we’re not stuck using JCL or Rexx…
The Amiga had a shell as part of AmigaDOS, but it was pretty bad. We were happy to get a port of REXX, called ARexx. Of course, we were happier when folks started making POSIX libraries for the Amiga and we could finally use bash.
ARexx. Ahhh, the memories!
It’s hard to figure out what was the more uninformed part of your post…
A lot of people seem to mistake what this is about, and so they start questioning things like security, or how robust it is, or how good performance is relative to a real Linux box. Security and robustness and speed are good, of course, but this is in no way meant to be used in a production environment — certainly not any kind of public-facing one. It’s meant simply so that developers on Windows boxes have access to first-rate (with WSL, one-and-the-same) versions of the Linux command-line tools they know, love, and need. It’s so they can set up a local version of their production environment for testing. In fact, for these uses it should be an even better situation than using something like MacPorts on OS X, even with its *nix underpinnings.
But it’s not even really about running general-purpose Linux apps (although it’s really neat to see people getting Linux GUI apps running on Windows X server ports, and I hope it spurs more development on them), and it’s *explicitly not* about being able to run a public web server using a LAMP stack on a Windows Server box. Not now, and probably not ever.
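To make the “local copy of your production environment” point concrete, here is a hypothetical provisioning sketch; the package list is illustrative only, and the point is simply that the same apt-based script can run unchanged inside Bash on Windows and on the real Ubuntu server:

    #!/bin/bash
    # provision.sh -- hypothetical example; adjust the package list to your own stack.
    # The same script should behave the same inside WSL and on a production Ubuntu host.
    set -e
    sudo apt-get update
    sudo apt-get install -y nginx redis-server python3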