The PC industry isn’t doing so well. Sales have dramatically slumped, despite the industry’s efforts to tempt consumers with Windows 8 tablets and transforming touchscreen laptops. But next week, the Consumer Electronics Show in Las Vegas may be the launching pad for a new push – a new brand of computer that runs both Windows and Android.
Sources close to the matter tell The Verge that Intel is behind the idea, and that the chipmaker is working with PC manufacturers on a number of new devices that could be announced at the show. Internally known as “Dual OS,” Intel’s idea is that Android would run inside of Windows using virtualization techniques, so you could have Android and Windows apps side by side without rebooting your machine.
I’m going to make a very daring prediction that is sure to send ripples across the entire industry: this is not going to turn the tide for the PC.
A dual-purpose system is not something I would go for. I see laptops/netbooks and tablets as serving different purposes/needs.
I remember the old days when Microsoft strong-armed OEMs into not allowing anything other than Windows to boot. For a very short time there was a Hitachi system that dual-booted BeOS and Windows.
It would be strange for Microsoft to have changed its ways, given that it is trying to gain traction with Surface and Windows 8.1.
Does Intel have the (legal) muscle to win such a dual-booting battle?
It is not going to turn the tide for the PC, but I hope it turns the tide for Windows… for the worse.
I also remember when MS tried to boycott any OEM that wanted to make some models with another OS and others with Windows. In that case MS would not give the manufacturer the standard OEM discount and would charge more for Windows.
I always thought HP’s strategy of shipping WebOS dual-booting alongside Windows was good, but instead it worsened HP’s relationship with Microsoft.
But I remember there was a small attempt at this: “HP QuickBoot OS” was a little Linux (Splashtop?), but it never gained traction.
Yes, this is truly terrible for Microsoft.
CES is the most important consumer electronics show, and media all over the world relay its announcements.
This is a clear message that the Windows-centric computing age is nearing its end.
I’m still surprised that Android hasn’t added a desktop mode yet. The Asus Transformer Prime actually worked pretty decently as a desktop, but it’s clear the OS wasn’t ready.
Actually this message is not about the fate of Windows-centric computing (which has already taken a hit in the home market), but more about Intel’s [lack of] imagination. The only purpose the whole thing serves is to show that Intel’s hardware is capable of running Android via virtualization more efficiently than commodity tablets do natively.
Actually, the trend in the PC market is more of a problem for Intel than for any software vendor: Microsoft earns most of its money with Office and [IMO overpriced] server products, and it only needs to hold enough of the PC market to keep the main products dominant (e.g. “.docx” as the default text format or .NET as a viable development platform). Losing the home market will make Microsoft less comfortable, but won’t do much beyond that. (Or may even cut Microsoft’s expenses on desktop OS R&D.)
In the meantime Intel gets its money from x86 hardware. The server part of Intel’s business is only growing, but the mass consumer market – the one Intel had in its pocket until recently – is about to vanish (according to some, at least). While Intel also seems to make more money outside the mass consumer market, the proportion isn’t as favorable for Intel as it is for Microsoft.
P.S.: bear in mind that when you buy an Android device, you also pay for a Windows Phone license, albeit without getting one. It’s less money per item, but lower risks and expenses and a larger market compensate for that. I’m not particularly sure whether Microsoft is indeed losing anything in terms of financial results. And piracy is also of no concern on the mobile side of things. It may even be that Microsoft is OK with losing the desktop market to mobile as long as its Office monopoly remains unchallenged.
P.P.S.: Ironically, if desktop Linux had been growing at the pace of Android back in the days of Symbian dominance, it would have hurt Microsoft much more.
P.P.P.S.: And the next version of Windows Phone will turn things around for Microsoft’s share of mobile market – Nelson can’t be wrong… 😛
I understand what you mean and I agree with you,
but despite the message they may be trying to send, I don’t think the general public will understand it the same way.
CES is a very mainstream event; it’s the magic time of the year when new products and trends are announced. It’s not the place for Intel to send that kind of message, because the media and consumers will give it a different interpretation.
Mass consumers (those who can’t get this setup on their Windows 8 touch-enabled device already) don’t really care about CES as far as I’m aware. Intel needs publicity, and with this show it reminds everyone that only x86 hardware offers viable virtualization for consumer-ready devices. They communicate this message frequently enough to get it hyped and well-remembered, and all the rest will simply fade away after CES ends. In the end it will be just another fresh mention of Intel in the context of virtualization.
I wouldn’t be so sure about that; one of the local news channels (WISN 12 Milwaukee) does a tech report and sends a reporter to trade shows like CES and E3 to do a piece on tech trends.
While not good coverage by our own geeky standards, it does bring CES and other tech trade shows to the uninitiated masses.
Then my point about Intel’s desire to be mentioned again in connection with virtualization is even more valid: Intel shows a device that runs something the general consumer expects on a phablet, and Intel’s technology allows it to do so without sacrificing the ability to run “your normal apps”™. Again, enough time will pass after CES that the general consumer forgets it ever existed, but the impression of Intel doing something everyone else failed to do will already have taken root.
Lol, CES is like the tech Comic-Con: no one pays attention, and a year later most mainstream people are hard-pressed to remember a single product that even made it to consumer shelves in a meaningful fashion.
It’s a circus for vaporware, flexible screens, and 80-inch TVs that cost $7000.
People care about products in stores, on retail shelves. Not the latest science projects from LG, Samsung, Lenovo, et al.
What do you mean by legal muscle? Microsoft got into deep antitrust doo-doo for that. I don’t think they’d try to revive it.
Edit: removed the portion of the comment that assumed this was dual-boot. It’s virtualized… Doesn’t help me at all…
It isn’t that Microsoft changed…
But the terms Microsoft used in those contracts were canceled by the antitrust lawsuits.
Currently Microsoft uses “advertising” kickbacks to keep vendors in line. What is happening now is that there are no kickbacks when users don’t buy the item in the first place.
So they are starting to try to use Android to get the sale… and include Windows to try and get the kickback…
Why do they need Android on the desktop? Any normal Linux is light years better.
On the Intel side, I hope they’ll release proper glibc drivers for their Merrifield SoC, so Linux can be run natively there (Sailfish and co.). After all, they participate in Tizen. And the ARM situation isn’t getting any better; being stuck with libhybris forever is kind of sad.
Android has a much better (i.e. nonzero) selection of touch-friendly apps, obviously.
First of all, if we are talking about regular desktops/laptops, you don’t care about applications being touch-friendly. Secondly, you don’t need Android as an OS for that (as an OS it’s pretty crippled). It’s enough to have the runtime built for any particular OS. Not sure why that isn’t widely done, the way it is with OpenJDK. I.e., where is the Android runtime for Linux or for Windows?
Obviously the motivation here is to create consumer interest in touchscreen laptops, or hybrids with a “real OS” when the keyboard is connected and a casual touch-friendly OS when it’s not.
The assumption, therefore, is that we actually need touchscreen laptops. Personally I think this is backwards.
What we need is the ability to turn a tablet temporarily into a PC to run the odd PC app (yeah, I know you can already do this using Citrix and the like, but it’s not the same).
While you are right, the truth is that Android is the only Linux distribution that has had commercial success for the average Joe.
Tizen will never go anywhere. I have yet to see any major commercial use of it.
Yes, so far nothing has come of it, but that wasn’t the point. Since Intel participates, they should at least make sure their hardware works with glibc Linux. The sickening situation around Qualcomm and co. is really annoying, so Intel could be a game changer, if only they would finally release a suitable SoC.
Android has an official SDK, something desktop Linux will probably never have. No dependency hell, no repackaging; once you develop something it is guaranteed to work for many years. This is the main reason for Android’s success and desktop Linux’s failure. The only real problem with Android on the desktop is a lack of productivity apps.
Are you really unaware that dependency hell was resolved 7 long years ago by both deb and rpm systems?
Tell another one please.
Hasn’t been resolved at all. Better, but not resolved.
“Both rpm and deb systems”? Android: one package, all devices, no dependencies to manage. Linux: hopefully someone in the community bothered to repackage the latest version and upload it to the distro’s repository – and last time I checked, GIMP and Blender were severely out of date in Ubuntu’s repository. Need something that wasn’t put in a repository, like the latest version of MakeHuman or Citrix? Better figure out how to manage those dependencies yourself, since no one bothered to fix the errors with MakeHuman’s dependencies, and Citrix is commercial software so the community doesn’t care and instead wants them to do more work, because the community is clearly entitled to tell a business what to do. Your software doesn’t use the latest version of everything? Oh well, none of those old dependencies are supported anymore. Android? iOS, Mac OS X, Windows, BlackBerry, etc.? All have official SDKs that make sure the end user and developer never have to put up with this. At what cost? A little wasted disk space. Worth it? Clearly.
Are you saying Citrix doesn’t know how to package binary software in a redistributable manner? I’ve got binaries of my (GNUstep) app that were built on a Debian-derived distro over 6 years ago and run just fine on the latest Slackwares. The trick is pretty simple: bring everything you need with you, exactly as you said below.
The latest version of Gimp I see in the Ubuntu repos is 2.8.6 from 2013-06-21, i.e. the most up-to-date version before the release of Ubuntu 13.10. The latest stable release of Gimp is one point release above that (2.8.8), hardly what you’d call “severely out of date” (unless you update your software by physical age instead of by features).
As for Blender, the version in Ubuntu is 2.66 whereas the latest pre-built binaries I could find were 2.69. Don’t know how old that is, I couldn’t be bothered to track that down, so perhaps it’s pretty old. However, you can always download the latest pre-built 64-bit Linux tarball from blender.org – just dusted off my old Mint system to try it and it worked flawlessly. Download, unpack, click binary, running. Didn’t have to touch the terminal once.
Well, so far the examples you provided above and my own experience in doing so don’t seem to support your case that somehow packaging software for Linux systems is inherently “hard”. Packaging complex pieces of software for any platform has its challenges, but once you know how to do it, it isn’t that hard.
Android has a decent library of third-party apps that Joe Public might actually want. He doesn’t want Blender or Eclipse, he wants Temple Run and Angry Birds. GNU/Linux is great for content creators, Google/Linux is great for content consumers.
Android is bad for both creators and consumers. It has horrendous multitasking support (better to say none) and a really badly designed system architecture. Sorry, but no. Android on the desktop is a stupid idea. It was created for dummified mobile devices where one application runs at a time. Retrofitting it onto a multitasking desktop is pointless.
They’re bundling BlueStacks? YAWN.
I like this idea, but only with certain conditions.
First, it’s only useful on touch devices – there’s no reason to have Android on the desktop. However, an x86 tablet with the ability to run Win 8 and Android apps, along with the ability to connect a keyboard and turn into a full Windows laptop, is appealing.
Second, it would need to have Google Play or a similar app market that can keep apps updated automatically. Android apps get updated so frequently I can’t imagine having to keep up with it manually. I don’t see Google allowing Play to be on a Windows tablet, so that means another market.
I would be interested in such a device if it were dual-boot. But not Android in a VM. The battery life when using it as an Android device will suck compared to running natively, and speed will most likely suck too.
If done right, not necessarily. It can’t be full Android, otherwise it would be a mess of three unintegrated, intermingled environments (Windows desktop, Windows Metro, Android launcher). They have to find a way of isolating just the Android apps. That way they could run side by side with full-screen Metro apps and wouldn’t stand out so much (style aside).
Pure Java apps could run in a native Windows Dalvik reimplementation, while C-based apps would use some rudimentary Linux VM with a stripped-down Android just to launch them.
But that needs Windows running, hence the battery life and speed issues.
Sounds retarded. What’s the point?
I figure it’s another desperate attempt by Intel to grab a piece of the largely ARM-centric Android market. I bet they’re kicking themselves for selling off XScale…
I agree; it’s a blatant attempt by Intel to try and get the mobile market hooked on x86 just like the desktop market. Intel must be nearly as scared as Microsoft at the direction modern-day computing is going. It’s moving from power-hungry boxes running expensive software to throwaway machines running open-source software. If MS and Intel don’t sort their s**t out, they’ll end up like DEC or National Semiconductor.
Maybe they’re “too big to fail”, but if they can’t make stuff that people want to buy, there’s no point in them trying to sell it.
Were you referring to comment titles or the news?
Edit: I agree on the former but am still unsure about the latter.
I think once Android, Kindle, and iOS can work with devices such as a wireless DVD burner, printer, mouse, etc… then the PC will be done.
At least in my case, as a consumer (who does not produce content), that’s about all I need my laptop for. That, and of course multitasking.
The PC still has its place, by all means… just expect it to be relegated to office use and content production (media, programming, etc.).
If you need a cursor, CD burner and RHW keyboard, don’t buy a tablet. I own both and use both, but the laptop is used more for creative pursuits and content production than the tablet. My tablet is treated, most of the time, like an interactive book. I use it to read PDFs, news articles, etc. relevant to any creative work I’m doing on the laptop. Two screens are great, especially one that you can scowl at because your code won’t compile, whilst playing Angry Birds on the other.
In a strange way, we’ve gone back to the days of the typewriter. Except that typewriter can play Portal 2 and the textbook can play GTA San Andreas.
To paraphrase Mark Twain: “The death of the desktop has been highly exaggerated.”
I say that on the unspoken assumption that tablet makers (or perhaps Google, through Android) will want to finally overthrow Microsoft’s Windows once and for all.
I occasionally use my Bluetooth keyboard with my Kindle Fire HDX, but certainly not often enough to warrant having it attached. In that setup, I’ll prop up the tablet on a picture frame holder, so I’ve got a mini desktop setup going.
And if I could indeed make use of the various hardware peripherals that are standard on a laptop or desktop, then I could easily consider abandoning Windows, as my KF-HDX is far faster than my laptop (its 2.2 GHz quad core only having to deal with a smaller mobile OS makes for a pretty good “power-to-weight ratio”, so to speak).
They should forget about Android on the desktop and just offer dual boot to a standard Linux distro. I guarantee you, if they have it boot to Linux Mint by default, most customers will never see the need to go into Windows AT ALL.
Hardly. But granted, I don’t think Linux Mint fits the bill; a Linux desktop needs a proper company behind it – Mandriva, Ubuntu, Red Hat or SUSE.
Hahahaha…ahahahaha. No. Consumers want a device that provides a software ecosystem for them. They want the iTunes store, the Windows Store or the Google Play store. They don’t want to apt-get-blah-blah in a terminal or put up with similar-but-not-quite-the-same applications. They want something familiar. And given the market penetration of Windows, iOS, and Android that’s what is considered familiar. Putting any Linux other than Android on a consumer device as a user-facing OS would be the most ridiculous thing ever. Unless of course your plan is to have people buy products other than yours.
My mother went from a Windows PC to an Android tablet. No trouble switching and learning a new different platform. If it’s easy to use, most people would adapt. Android is easy to use, Linux is not.
I’m not too sure about that. The Windows software ecosystem extends to many areas that Android would never encounter coming from a mobile platform. Server tools, advanced networking tools, .NET framework and legacy applications are all reasons why Windows is going to be around for a long time. Not to say that Android could not be extended to the PC, but as a stand-alone, that would take a while.
Why not just use plain Android? It is time to force Microsoft to acknowledge that Windows needs to keep up with the recent trend – a free OS (not necessarily open source) – and to charge only for services/technical support and apps on top of Windows. I believe MS needs to do this in order to remain competitive in the near future.
Removing Windows would make the whole x86 baggage, with its associated cost, pointless.
Wouldn’t this be handled better with a compatibility layer that runs the Java code directly?
Yes, but that would take much longer to build and get to market compared to putting Android in a VM.
But they don’t need to run Android in a VM. Android already runs on Intel based phones.
Android is a mobile OS, not a desktop or laptop OS. Its paradigm is fundamentally different. Windows 8 sucks so much precisely because Microsoft tried to force mobile paradigms onto a PC. How is using an OS that is designed only for mobile devices, rather than one that at least tries to serve both, going to help when the target is a PC? I don’t see how this really solves anything.
This is big news, because it shows a lack of faith in Windows and the fact that PC vendors are looking for ways to make the PC do better in spite of the stupidity that is Windows 8. But if you’re selling a PC, put a PC OS on it. To be honest, at first glance at least, what they’re doing sounds stupid. Maybe it somehow makes sense, but IMHO, if you have a PC, it should run an OS designed for a PC, and if Windows doesn’t cut it anymore, your primary options are Mac OS X and Linux – and since Mac OS X is restricted to Apple devices, your only real option is Linux.
So, it could make sense if they were dual-booting with Linux, but Android? Not so much. It’s like they’re trying to be schizophrenic about paradigms the way Windows 8 is, but with dual-booting thrown in.
About the only thing about this that makes sense to me is that if you’re going to be stuck with an OS designed for mobile devices on your PC, you might as well go with a good mobile OS rather than Windows 8. But the real solution is to use an OS designed for the PC.
Putting Android on makes much more sense to them, even though it doesn’t make sense for Android to be a laptop/desktop OS – it’s because of Android’s momentum. Android apps run full screen, and so do Windows 8 apps, so Android apps will run on Windows 8 the same way native Win 8 apps do; users won’t notice the difference.
Basically, Windows 8 is as crippled as Chrome OS unless you run previous generations of Windows applications to stay productive on it – as the Windows 8 TV ad shows, the only app they showed off was a useless drawing app.
It also means that Windows 8 is relevant only because it has the name Windows on it, not because it is better than Windows 7 in terms of user experience. Intel saw an opportunity to create a better way to merge Android apps into Windows 8. Microsoft needs to thank Intel for doing this, because, from a user perspective, Windows 8 gets tons of apps overnight. Win8 apps are basically the same as Android apps in terms of user experience. Users do not care whether they run Android or native applications on Windows as long as they work for them.
Interesting. One potential problem is whether those Android apps “feel” native, or like they are running in an entirely different virtual machine on top of Windows.
I’d be surprised if they pulled this off and made it compelling, but I’ve been surprised before.
As an alternative for desktops, Acer has an interesting new 27″ 2560×1440 monitor that doubles as a (really big!) Android tablet. I scratched my head at first, but the idea has grown on me a little – kind of a “smart touch monitor” equivalent to a “smart TV”, and as a bonus it provides a great environment for the monitor setup and configuration apps.
Would love to give it a try sometime. In sufficient volume, I suspect Android would add little to the cost of a touch monitor, so it would only have to appeal to a portion of the market to be eventually included by default.
Certainly a bold and creative idea – which often equals “crazy” and occasionally “game changing”. 🙂
http://www.techspot.com/news/55202-acers-new-27-all-in-one-is-a-hig…
Another manufacturer doing the same is Lenovo with the ThinkVision 28, but they chose a 28″ Ultra HD (3840×2160) TN panel.
http://reviews.cnet.com/lcd-monitors/lenovo-thinkvision-28-smart/45…
As the hardware to run Android costs next to nothing compared to a 1440p or 4K panel, this might be something we will see more often in high end displays this year.
But we need compatibility with the original 4004 and 8080 running on one of the old S-100 bus IMSAI or Altair computers! Who doesn’t want to use toggle switches to enter their bootloader?
For everything else, there’s ARM. Oh, except Windows, which needs x86 compatibility (even spoiling 64-bit). But even Windows is now ARMed and dangerous to Intel.
ARM is to hardware what Windows was to software.
AMD should have a 12-core ARM chip with some ultra GPU if they were smart.
I take it you haven’t heard of AMD’s upcoming Seattle series of ARM Cortex-A57 based SoCs w/ Radeon GPUs? If AMD holds true to form it could be the first ARM system with fully OSS drivers, something that has been sorely lacking for ARM hardware.
http://www.serverwatch.com/server-news/amd-enlists-arm-based-seattl…
https://slashdot.org/topic/datacenter/amd-seattle-adds-high-speed-in…
I don’t want a glossy touchscreen on my laptop.
I have a smartphone and a tablet: they run Android and I like it. They are ARM-based and could be x86 if Intel’s offering were competitive.
I want a laptop that has a nice matte screen around 7-9 inches, is as light as possible and has as much battery life as possible, with all the practical ports, decent power, a great keyboard (this is the selling point, otherwise you’d get a tablet) and no fan. The latest Atoms would be great for such laptops, but the closest thing is the shitty Asus Transformer Win 8.1 convertible tablet.
That thing has a glossy screen and a useless keyboard that is weighted to counter the weight of the tablet: such nonsense! And too few ports.
I’m waiting for an intelligent ultralight laptop, but nobody seems to be willing to build it.
The closest thing available today is the Acer c720 chromebook.
Typing this on my less-than-a-month-old Acer C720P, a nice little touch Chromebook with 2 GB RAM and a 32 GB SSD. It’s light, fast, and fun to use IMHO – and I love the keyboard, though that’s always a personal preference.
But back to touch: it’s certainly not required, but it’s very useful. My favorite use is swiping left-to-right as a “back button” and right-to-left as a “forward button”. It’s also really great for scrolling and pinch-to-zoom (although the latter only works on web apps, not normal web pages – yet).
The non-touch Acer C720 is $199, and touch (C720P) adds $100 to the price. It’s worth it to me, but I think it will vary from person to person and OS to OS.
It would obviously be mandatory for effective use of Android. Not sure I’d make much effort to run Android on a laptop, though – the Chrome app store hasn’t disappointed me at all. I do plan to install Crouton (the Ubuntu that shares the kernel with ChromeOS), and look forward to trying that with touch soon.
A touch-enabled laptop that adds $100 for that privilege doesn’t make any sense. Touch is only optional; keyboard/mouse is still the best input you have for your PC/laptop. Touch must be optional, not required – this is the same reason Win8 native apps are a mess. You can certainly replace touch input with a mouse, but replace mouse/keyboard with TOUCH? Hardly.
Augments, not replaces. Best of both worlds.
Then asking the customer for an additional hundred bucks for that privilege?
That is my point.
“Then asking the customer for an additional hundred bucks for that privilege? That is my point.”
Doesn’t sound like your point in the above context, but whatever.
Having used it for a month, I’d say that touch on a laptop is exceptionally useful for scrolling, zooming, and moving among web pages. It’s also needed for certain classes of casual gaming, e.g., Angry Birds and friends, that involve direct manipulation of objects on-screen.
It was worth it for the 44 of 50 Amazon reviewers who gave it 5 stars, and for those who caused it to sell out in their store within a week and begin selling at an additional premium from third-party suppliers. And it’s still less expensive than the cheapest non-touch Windows 8 laptop I’ve seen, and it’s price-competitive with, and more usable for text input than, any iPad or Android tablet with a Bluetooth keyboard I’ve tried.
And it was easily worth $100 for the extra capability to me. YMMV.
Sure, and you can make them _actually_ useful by ditching ChromeOS entirely and replacing it with a distro of your choice.
You needn’t ditch ChromeOS; you can run a more traditional GNU-focused Linux simultaneously. We use Crouton.
But sure, unlike those Windows machines that encrypt the bootloader to protect Microsoft^H^H^H the user, it’s unlocked and trivial to put any OS you prefer on a Chromebook. Haiku? Android? MorphOS?
Laptops are fun again.
Secure Boot can be disabled, except on embedded stuff, like the Windows RT tablets… then again, good luck installing a non-Android OS on an Android phone. That argument can (sadly) be made for pretty much any embedded or near-embedded type of device on the market today, including your Chromebook.
I just cannot fathom why you’d pay for a machine that can run a full OS, but then decide to run just a browser. That same browser can be run from within a fully fledged OS, AND you’ll be able to actually do something with it if you don’t have a network connection.
I like Chromebooks, sure… as a source of cheap hackable hardware. ChromeOS is a failure, and for good reason.
<– Warning rant ahead –>
Now, I don’t really like MS, but you have to give them credit where credit is due. Even they haven’t felt the need to sell you hardware that you can only use with their online services. Even an Xbox One still allows you to run local applications.
Truth is, if MS tried to sell something like a Chromebook, people would be up in arms.
Apple does it, and they get lots of flak for it. Their evil app store plans are met with serious criticism, and so it should be.
Bottom line: me no comprendo why you buy something that is capable of doing anything a fully fledged laptop can do (albeit slower), and then let Google close it all up and make you depend on their services and web “apps” only.
Don’t even get me started on web apps for that matter.
Oh well, it’s late. I am prone to ranting when it’s late, so please excuse me for my behavior.
Um, I was talking about RT tablets. And the RT-based Surfaces. And Windows phones, of course. Pretty much all of the computing hardware Microsoft sells except Surface Pro. Comprende?
Seriously? You’ve never heard of cyanogenmod.org, or Ubuntu (ubuntu.com/phone/install and wiki.ubuntu.com/Touch/DualBootInstallation), or FirefoxOS (http://www.themobimag.com/how-to-install-firefox-os-on-google-nexus…), or webOS, or any other alternate?
Well, I’m honored to introduce you to a whole new world. Welcome to freedom! 😉
Try Crouton or ChrUbuntu, for example. However, it’s just an x86 laptop, so I could install pretty much any OS that I wanted. Just pop it into developer mode, and off we go.
Given that you’ve never heard of loading an alternate operating system on any mobile device, this doesn’t surprise me in the least.
That you think ChromeOS can’t run apps off-line doesn’t surprise me, either.
On the other hand, I am surprised that you are aware that the Chrome browser can run exactly the same apps as ChromeOS under Linux, OS X, or Windows. It’s a little surprising that you haven’t considered the implications of this yet.
I doubt you realize that installing an app (say, the Gimp or Angry Birds) on one installs it on all, so that my environment is consistent for a given account regardless of the hardware I happen to be using. You’d probably miss having to purchase and manually install a copy of each application on each machine, though. What could be more fun, right? *sigh*
That you’re unaware that ChromeOS is by far the fastest growing pre-installed laptop OS doesn’t surprise me, either. It picked up 10% of the market last year (http://www.extremetech.com/computing/173691-chromebooks-pick-up-10-…), starting from 0.2% the previous year.
In fact, you’ve convinced me – you definitely don’t get out much!
I would encourage you to read beyond the Microsoft fan base that has limited you thus far. There’s an entire world of computing out there beyond the Microsoft walls. You should learn about it, or at least not comment so incorrectly about it.
I wasn’t.
CyanogenMod is Android; they are also forced to take in any binary blobs that come with the manufacturer image, since, as I’m sure you know… all of that stuff is off-limits for the FOSS community.
Ubuntu Touch only runs on devices that come with (semi-)open drivers, a.k.a. some of the Nexus devices.
Firefox OS follows the same trend: some Nexus devices and some open ARM boards.
WebOS is dead.
No, seriously. Try installing webOS on your Android phone. Let me know how that works out for you.
That’s a tad pretentious, isn’t it? I run leenucks on MIPS (SGI), PA-RISC (HP 9K) and Alpha (AS1K), as well as on regular desktop hardware.
Chrooting into something is hardly installing a new OS, now is it?
Yes, it’s just an x86 laptop with severely non-standard firmware. That’s my whole point. I said Chromebooks were only good as a source of cheap hackable hardware to install a full OS on.
lol
What, HTML5 offline storage? Give me a break.
What else are you going to do with it? Intricate tasks such as browsing files? Listen to an MP3?
It can’t. It can only load locally stored web pages that try to hack some usefulness out of a web page that’s not on the internet.
Other than that, there’s a file manager, and little else.
Would you also be surprised to hear that C code can be compiled anywhere?
That wouldn’t change whether you just run Chrome, or ChromeOS. Chrome apps are Chrome apps. This is not an inherent ChromeOS advantage.
5% of those buyers bought them to hack them.
The rest sold because Windows 8 failed.
I don’t care at all about market share.
Most of the world runs Windows… wise choice? Nuff said.
I get it, you’re hip. You live in the cloud. I could never understand.
Son …
Intel seems to be betting on consumers wanting one device that serves as both a mobile PC and a desktop/laptop-replacement. That seems to be the only reason to include both Android (great apps for mobile users) and Win 8.x (boundless selection of desktop apps but the Metro apps are limited) on the same device.
Perhaps Intel is saying they’ve bought into Canonical’s vision for the future of computing, with phones and tablets that become desktop PCs when placed in a dock. It certainly speaks volumes about Intel’s lack of faith in Microsoft’s mobile strategy. Only time will tell if consumers really want a device that can do it all, or if they’re content to live with the desktop/laptop + tablet + phone paradigm that currently exists.
If Intel and partners choose the right form factor(s), this idea could be very successful. E.g., imagine a 5″ Android phablet that (wirelessly) docks with a desktop or laptop to become a fully functional Windows PC.
Back in 2011, BlueStacks released their Intel-based virtualised Android solution for Windows and it didn’t really take off. It’s not clear to me what this “new” proposal will do differently, except for perhaps persuading some OEMs to bundle the virtualisation with their Windows pre-install (which is probably the only way a BlueStacks-style solution could succeed).
It does sound as though Intel have something of an NIH issue – there’s no mention of BlueStacks in the Verge article, and BlueStacks themselves seem to think that Intel are going it alone on this (despite Intel being an investor in BlueStacks!):
http://venturebeat.com/2014/01/03/bluestacks-responds-to-intels-and…
It didn’t take off because most Android apps are useless without a touchscreen.
I can see only 2 possible outcomes:
1) The PC dies, replaced by phones and tablets. But offices and businesses still need PCs.
2) Somebody totally new comes along and replaces MS Windows with something totally new (not Android). This one would be the ideal solution, IMO.
Personally I use OS X on a 17″ MacBook Pro, but considering that I am never going to buy a 15″ MacBook Pro for 3000 euros, and even less a recycle bin (the so-called Mac Pro) for 4000 euros, this is also a temporary solution.
I agree with those who say this isn’t a game changer for the PC. However it is a feature, one that I wish I had on the Mac. I wish it could run, unmodified, all my iOS apps purchased from the App Store, with cut and paste working between environments.
Just another feature… not a world-changer, but it would be nice.
Microsoft tried to get Android handset makers to dual-boot Windows Phone and Android; I don’t think any of the handset makers agreed – it was a stupid idea.
http://www.androidcentral.com/microsoft-wants-htc-s-android-phones-…
This is Microsoft and Intel trying a different approach to the same thing… Why bother? I think Thom’s response was spot on; this isn’t going to turn the tide of anything.
As for a couple of the commenters here, I really can’t believe there are people who think Linux still has dependency hell – this is something that existed before YUM, a problem Debian never had because of APT; it’s such an old problem, sorted out so long ago, and it has absolutely nothing to do with Android. It’s just such a shill thing to say. I got bored of arguing with marketing companies (acting as individuals) paid to spread misinformation; my time is more valuable.
Truth is, Linux just does not need the evangelism any more; it’s not a small kernel/OS with great potential, it’s a kernel/OS that dominates practically every industry.
I have had a non-IT friend mention that I am the only person they know who uses Linux, without even realising that they were using Linux on their Android tablet. I don’t even bother telling them they are actually using Linux.
Even PC gaming is beginning the move to Linux; only a week to go until the Steam Machines are revealed. The world has changed in the last 10 years; desktops are no longer relevant.
Once upon a time every company had desktops – now every company I work in uses laptops (so many MacBook Pros out there in the corporate world now), and Windows laptops/desktops are becoming less and less ubiquitous.
Truth is, I no longer dislike Microsoft. I find all of the money they throw at terrible marketing campaigns hilarious; it only tarnishes their own image when these campaigns backfire on them (every major outlet that runs Windows Phone / Windows tablet stories is infested with Microsoft shills – all the misinformation they try to spread about their competitors only pisses off the regular readers), and none of it is helping the Windows cause.
I no longer dislike Microsoft because it no longer dominates the desktop OS space and I can accept the OS; I hate being forced to use something. I hated Microsoft dictating to me that I had to use their software, that they had completely cornered everything.
I realise this is another vain attempt by Intel and Microsoft to try and corner the tablet market, but it is just that: another vain attempt. Think about it: they are virtualising Android to run Android apps in Windows, and Android is a Linux-based OS. It’s quite a turn of events; what a difference a decade makes.
I don’t think Intel has anything to worry about; ARM will get better, but Intel will get its power usage down – Haswell is a massive step towards this. Competition will be fierce, but we the consumers will benefit massively from it, which is great.
Debian still has massive dependency problems, as indicated by the fact that Wine still doesn’t get regular daily builds that are only 1-3 revisions out of date. The official builds of Wine are like 1-2 YEARS out of date. And this is on sid, which is the experimental distro! It would be nice if Debian had more developers working on keeping packages up to date and attacking architectural problems like Wine 32- and 64-bit packages on 64-bit Linux, rather than working on stuff like keeping 3-year-old Debian updated.
How is an out-of-date Wine package “dependency problems”?
I just decided to check your claim out:
http://packages.debian.org/sid/wine -> Package: wine (1.6.1-11)
http://www.winehq.org/download/ -> Latest stable release: Wine 1.6.1
Looks like Debian sid (sid is not experimental – it’s unstable – http://www.debian.org/releases/) has the latest stable version of Wine available.
FYI, Debian stable is used for server installations; I have worked in many organisations that use Debian stable as their server OS – I mean multi-million-pound companies that rely on Debian stable (the standard is usually RHEL or CentOS, but there are a few massive companies that I know of that rely on Debian). Its stable branch is just that: very stable, production-ready stable, and this is why the desktop side may seem out of date. Look at RHEL 6 / CentOS 6 and you will notice that it too uses older versions of common desktop applications. It’s intentional – it’s not about bleeding edge, but about using software that has proven itself to work reliably. If you want easy-to-use, up-to-date desktop packages, use Mint or Ubuntu or Fedora or Debian unstable or Arch or Gentoo, whatever you want – but what you just described above has absolutely nothing to do with dependency problems.
If you want the development release of Wine (by its very nature beta or alpha software) there is a repo for that as well: http://dev.carbon-project.org/debian/wine-unstable/
I don’t understand how you think this is dependency hell; it’s a complete non-issue anyway.
The more important question to ask is: why do you need Wine?
Also, if you really value Wine, maybe you should buy the commercial version of it (http://www.codeweavers.com/) so you can help the Wine project: http://www.codeweavers.com/about/support_wine/
It’s insane that you’re claiming Linux dependency hell was solved long ago when the truth is it continues to be an ongoing problem. Package managers have nothing to do with why dependency hell exists. YUM and APT can’t fix it. The problem is how poorly designed things are written and how over-simplistic building has been made. For example, you write something that depends on just one thing from A. But A depends on B & C. B depends on D, E, F. C depends on G, H, I. D depends on… E depends on… F depends on… Blah blah blah. A person trying to argue that Linux dependency hell doesn’t exist is like trying to argue that the sun doesn’t exist while standing in the desert at high noon with it beaming straight down on you.
Are you nuts? It’s hard to take comments like these even half seriously because they’re so blatantly absurd.
Laptops only replace desktops where it makes sense – where desktops weren’t the optimal setup in the first place but where good alternatives may not have been available at the time. But to imply that every desktop could be replaced by a laptop, or anything in that ballpark, is completely idiotic. Desktops, laptops, and tablets are not equal devices. They each have their advantages and disadvantages. While there’s a lot of common ground they can share, each sees its strengths in different areas & tasks. Anyone who has trouble grasping that fact needs to take an Intro to Computers class, or just ask themselves why they own both a butter knife and a screwdriver.
Seriously, I’ve read your past comments; I know you’re a massive Microsoft fan, and some of your comments come across as someone with a massive vested interest in Microsoft. Out of curiosity, do you get paid to comment on these message threads?
Just the fact that you believe dependency hell exists without understanding why shared libraries are better than statically linking everything means there is no point arguing with you…
Distributions have long since resolved dependency issues – that is the whole point of APT and YUM…
I’ve got better things to do with my time than argue with you…
Sorry to piss in your Fruit Loops, but I’m not a fan of any company. If I were this “massive Microsoft fan” you ignorantly claim, I’d be a pretty poor one, considering I use Linux as much if not more than Windows on average.
I find it silly that people become emotionally attached to for-profit companies.
It’s dumb that I have to say this, but the discussion is not about shared vs. static libraries. It’s about whether or not Linux dependency hell exists. It does, is well known, and is commonly talked about. The fact that you believe no Linux dependency hell exists means you are either trolling or clueless.
That you have to be told again that package managers cannot address the issue of dependency hell tells me you don’t understand the role a package manager plays. The problem is not the grouping & delivery of required packages; it’s that way too much software is designed poorly, because doing it correctly requires what many devs see as too much work. Linux dev mentality tends to be to take the easy route, throw everything at you, and don’t complain about it cuz the cost of storage is cheap. Maybe we should move this chat elsewhere, like IRC, so you can disagree with devs directly.
You got that right.. I suggest joining and reading the Linux dev mailing lists and major Linux forums for starters. Then perhaps you can learn something about this topic and stop posting such ridiculous comments.
There was a commenter above who believed out-of-date versions in a repo constituted dependency hell.
What, in your opinion, causes dependency hell?
Why do package managers not address dependency hell, in your opinion?
Please explain clearly and concisely what you believe dependency hell to be.
What is the difference between statically linking all dependent libraries into a program and dynamically linking them, and why is this directly related to dependency hell?
As a regular Linux user, please name two recent examples of where you experienced dependency hell; please explain clearly and concisely how you hit the stated dependency issues, what you were trying to do, and what distribution you were using.
Once you have provided this information I will continue with this conversation; until such time there is no point discussing this further. You have provided at best anecdotal evidence, and have managed to offend practically every open-source / Linux developer with your bullshit about:
“Linux dev mentality tends to be to take the easy route, throw everything at you, and don’t complain about it cuz the cost of storage is cheap”
Already stated. See previous posts.
AGAIN, the discussion is not about static vs dynamic libraries. Stop going off-topic.
Unnecessary. What you’re asking for is already available on various Linux mailing lists & major forums. Go read, as you have already been told to do.
We’ve already established there’s no point in continuing this nonsense because you can’t even comprehend the actual subject. That’s why you keep veering off.
This is the cherry on top. I haven’t said anything new in that comment, and were you to frequent the dev mailing lists and IRC channels you would see it coming straight from the horse’s mouth. If that isn’t enough, there isn’t a single Linux dev I know, of many, who has ever told me they’ve been offended by anything I’ve said. On the contrary, many of them couldn’t care less what other people’s opinions are, because it’s not going to change their own or have any effect on their dev work. It’s not a big secret; it’s something I’ve seen openly & willingly admitted countless times over the years.
You were told to go straight to the source and hear/read it for yourself yet you refuse because that eliminates the buffer between yourself and those you’re talking about. For some reason you think you look like less of a dumbfuck if devs aren’t saying it directly to you.
Which basically confirms you’re full of shit…
Considering you don’t even know the difference between a package manager and included dependencies, your comments hold about as much water as a piece of petrified wood. ZZZZZzzzzzzzzzzzzz…
To clarify:
You, as a regular Linux user, can’t remember/provide the last time you suffered from dependency issues, which makes my whole point.
You still don’t get why I keep mentioning dynamic vs. statically linked libraries: it’s the key to dependency issues, it’s what reduces binary size, yet you still don’t get it…
The majority of Linux apps can have features enabled and disabled at compile time, depending on what you want the program to support, which in turn reduces or increases dependencies on various dynamic libraries (you can also statically compile all dependencies into the program, which removes the runtime dependencies entirely). This is done by the package maintainers for the various distributions, which in turn is directly tied to the distributions’ package management systems – and you’re telling me it has nothing to do with this and everything to do with lazy devs who don’t care about disk space…
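To make the compile-time feature switch described above concrete, here is a minimal sketch (an editor’s illustration, not from the thread): a hypothetical WITH_PNG build flag that, when defined, pulls in a libpng dependency, and when left out yields a binary with no such dependency at all.

```c
/* feature_flag.c -- hypothetical example of a compile-time feature switch.
 * Built plainly:        cc feature_flag.c -o demo            (no libpng needed)
 * Built with the flag:  cc -DWITH_PNG feature_flag.c -lpng -o demo
 * Only the second build depends on libpng at run time. */
#include <stdio.h>

#ifdef WITH_PNG
#include <png.h>   /* header and library only required when the feature is compiled in */
#endif

int main(void) {
#ifdef WITH_PNG
    printf("built against libpng %s\n", PNG_LIBPNG_VER_STRING);
#else
    printf("built without PNG support; no libpng dependency\n");
#endif
    return 0;
}
```

Distribution packagers flip exactly this kind of switch when deciding which optional dependencies a package drags in.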
You provide more anecdotal evidence about Linux devs, painting them all in a bad light, pure FUD at its finest.
At this point my only question for you is: are you paid to comment, or are you just a clueless troll?
You don’t have a clue about Linux, about dependencies, or about package management, and the more you talk the more apparent you make it…
This leads to my previous succinct comment:
“Which basically confirms you’re full of shit…”
Wrong. That’s an assumption you made without merit or basis. You were already told where to find such information, yet you refused to go look. Typical of a person unprepared to admit they’re wrong.
Completely irrelevant. At no point has binary size been mentioned or discussed. This is you throwing something random at the wall to distract from the truth — that you haven’t been able to comprehend the root of the problem from the very beginning. Since you can’t grasp what the problem is, you’re not capable of having an intelligent discussion about it, as proven by literally every one of your replies.
How many times do you have to be told that dependency hell is not about grouping package requirements together for easy delivery, but rather a product of poor design? This is not a complicated subject so are you trolling, or just an idiot?
I made no claim that Linux devs are good or bad; I simply shared an observation I’ve witnessed for years. I even encouraged you to go hear it straight from the horse’s mouth yourself, but you refuse to do so because it would force you to admit you’re wrong. After all the dumb nonsense you’ve been spewing, that would cause you too much embarrassment to handle.
Neither, and your making such absurd claims is a show of your desperation.
This is like a 3-year-old calling me dumb because I won’t agree that 1+1=apples. It’s funny that now you’re using the “I am rubber, you are glue” defense. I guess that’s what happens when that’s all you have left and you haven’t earned more than an absolute minimum expectation at best…
Your failure to comprehend something doesn’t make those who do full of shit. It just makes you an idiot for not learning about a topic before speaking on it. Good job little buddy. No, … really.
You made the claim; it’s your job to prove it. I wanted you to provide evidence of when you last faced dependency issues, yet you still avoid the question and redirect.
You avoid the whole section where I explain dynamic / static… Your words:
You directly contradict yourself here… You bring up lazy devs and cheap storage; dynamically linking libraries reduces application size, which means less disk space used…
In your opinion… Prove this with pure hard facts, please.
To quote myself again:
It’s there for anyone to see. You refuse to go look. I can’t force you to go hear it straight from the horse’s mouth any more than you can fault me when you refuse to do so.
I made no contradictions. Sadly I have to point out that “lazy devs” was my own comment, while “cheap storage” was an observation/quote I’ve witnessed. Do you not know the difference between an opinion and an observation? One describes what I think, the other describes what I’ve seen. Two completely different things. So no, the conversation was never about disk usage.
You acknowledge that to be my opinion, yet you demand I “prove” my opinion to you? Do you have any clue how stupid that sounds?
I want you to provide an example of when you last faced dependency hell; if it’s a real problem, then surely you must have experienced it recently. What I ask is really not that difficult to do, but you seem to have so much difficulty doing it.
When I state
I mean your opinions mean nothing; please provide pure hard facts… I can’t believe you have such a hard time comprehending such simple stuff.
What a stupid assumption to make. You have no clue in what context I use Linux, how controlled I keep it, and how often I update the software I use. Aside from that, you still don’t understand that the dependency hell I’ve been talking about since post 1 has been code-level, not end-user package delivery. Try asking a question that actually has something to do with what I’m talking about instead of this off-topic nonsense you insist on.
Another stupid assumption. You keep rambling about something else. If you ever learn to stay on-topic and ever learn enough about it to have a semi-intelligent conversation, I might oblige a request from you. But as long as you want to ramble on about off-topic nonsense, I simply don’t care what you’re asking because it has nothing to do with anything I’ve said.
The problem isn’t comprehension. The problem is your inability to differentiate between apples and oranges, opinions and observations, facts and your own imagination. You refuse to join me on any of the major Linux mailing lists or IRC channels to have this discussion directly in front of devs, many of whom are my friends, whom you claim I’ve offended somehow by sharing what I’ve seen them say countless times freely & in open view. You haven’t even shown the ability to understand what the subject matter is, much less shown yourself capable of staying on-topic or saying anything of any substance pertaining to the subject. On top of all that you take any criticisms and apply the “I am rubber, you are glue” defense to them. You even said you have better things to do with your time, yet you’ve behaved in the exact opposite way. Not surprising, since very little coming from you holds any water.
The software must be built before it is packaged, and here come the dinosaurs. The most obvious example these days is the cups<->ghostscript combo with its circular dependency – you have to build a semi-functional version of one of them in order to get a fully functional version of the other on a clean system. Or at least that was the case half a year ago. More dinosaurs are hiding in binary-only land, so you end up with packages like “libpng14” and “libpng15” in addition to your normal libpng package – which is outright ugly. Right after the new year I was packaging a binary-only game that wanted obsolete versions of udev (“libudev.so.0”) and curl (with “CURL_OPENSSL_3”, which vanished a year ago or so).
These are the problems every packager faces sooner or later, and they are not solved – merely worked around by the package management system, build scripts, and packagers’ wasted time. Sure, static linking would solve some of these problems, but (1) not all of them (e.g. not circular dependency problems) and (2) not for long (once you find out that your libpng14 can’t be built any more because of some news from the gcc department or a major version bump elsewhere).
So please, quit pretending that the problems don’t exist simply because someone else is constantly working hard to free you from them.
At least you understand what statically linking the libraries does; with the guy before, it was like talking to a clueless brick wall.
This is the whole point, though: you’re basically proving exactly what I said – dependency hell is not an issue, because it is handled by the package management systems and by the package maintainers. Saying otherwise is simply fallacious.
It doesn’t matter whether you think it’s worked around or not; the result for an end user using a distribution is the same: it’s a seamless process that works and has worked for many years.
You’re packaging apps; I occasionally package apps as well. The problems you see packaging apps are a completely different problem set. If you see dependency problems at this level it’s to be expected – you’re no longer just an end user, you’re usually dealing with the source code of the program. All of the different operating systems have different issues at this level; you just don’t see them because you don’t have access to the same low level you do with Linux.
There is no simple fix to this. It’s not to do with lazy devs or poor design; like everything, it has its advantages and disadvantages.
Have you seen the crap Microsoft .NET programs on Windows can cause? You need .NET v1 through whatever, depending on what the app was built against, and multiple different apps are built against different versions – massive bloat, not clean… Or try messing around with Windows DLLs and see how quickly it all goes to shit…
The conversation was never, and continues not to be, about static vs. dynamic linking of libraries. The conversation was never about the grouping of packages or package management for easy delivery to the end user. That being the case, why do you insist on going off-topic?
Oh looky here. You finally acknowledge the problem runs deeper than simply managing packages, just like I have always said. And yes, design has a lot to do with why. Again, I’m not saying anything new or groundbreaking – this stuff is old news. Considering how hard it is for you to comprehend this subject though, I won’t expect much of a reply from you other than your typical rambling off-topic nonsense.
Btw, for somebody who claims to have better things to do with his/her time, you sure do have a lot to say. Albeit destined for the trash bin, but a lot to say none-the-less. I guess being butthurt does that to a person. Sorry little buddy, I didn’t set out to get you so emotionally upset.
Let the grown-ups speak. You are absolutely clueless.
Read what Saso has written, you might learn something.
I think ddc’s reply to you sums things up perfectly. In case you missed it:
"Every user of Linux knows of package management systems and no one ever argued they didn't solve problems for end users. But the dependency problems this whole thread was about have nothing to do with end users – developers, package maintainers and other people who have to build code from source are the ones who suffer. This was repeatedly pointed out to you throughout the whole thread."
Obviously others understand the subject, and see your total inability to comprehend it.
Sorry, but are you genuinely unable to see the difference between a solved problem and a problem you are not dealing with? For me as a packager, no package management system provides any kind of solution.
Every user of Linux knows of package management systems and no one ever argued they didn't solve problems for end users. But the dependency problems this whole thread was about have nothing to do with end users – developers, package maintainers and other people who have to build code from source are the ones who suffer. This was repeatedly pointed out to you throughout the whole thread.
Static linking magically solves most problems. (You still have to find and build dependencies, though.) The Go people, for example, have actually made dependencies a non-issue for Go programs.
And it introduces huge problems of its own at the same time, such as:
– wanna ship a security update for libpng? Better be prepared to update every piece of GUI code you have. Or libc? Every single binary that you have.
– wanna add support for a new format or feature in your library (e.g. new profiles in H.264)? Tough luck, shipped software can’t benefit.
– wanna let the user dynamically load some kind of plugins into your app? Fat chance, it has to come with everything it will ever do.
– launch a few statically linked GNOME apps and be prepared to load all the gnome libraries into memory as many times as necessary, slowing down startup and costing you a lot more memory (not an insurmountable amount, but enough to feel the sting).
For these and other reasons Solaris has adopted strict ABI compatibility rules and no longer even ships a static libc. You can still link your internal libraries statically if you want to (“-Wl,-Bstatic -l<mylib> -Wl,-Bdynamic”), but the OS distributed ones are not intended for that and the public interfaces are committed and stable. Linux could use a stable libc ABI and perhaps a few other details, but other than that, ABI stability is pretty much a non-issue there as well.
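To make the mechanics concrete, here is a rough sketch of what that looks like on the compile line (library and file names are invented for illustration; both GNU ld and Solaris ld accept the -Bstatic/-Bdynamic toggles):
$ # link our own libmylib statically, everything else (libm, libc, ...) dynamically
$ gcc -o app main.o -L./lib -Wl,-Bstatic -lmylib -Wl,-Bdynamic -lm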
Btw: for a nice primer on what other pitfalls static linking brings, see: https://blogs.oracle.com/rie/entry/static_linking_where_did_it
Dealing with issues in the repository is what package management systems are really good at. And keep in mind that versioned symbols bring this situation to parity; e.g. you have to relink every single piece of code depending on curl when you update the package to a new major (and sometimes even minor) version.
Were package management systems optimized for static linking, you could actually mark the packages that need the update, so that a security update in libpng wouldn't trigger a rebuild of dependents that are not affected by the vulnerability (e.g. when software uses libpng only for encoding, or only for decoding images generated within the program). Such a system would let you relink less than you do with the now-common systems.
Linking has nothing to do with it – you are speaking about dynamic loading, which is completely different and, may I add, a very dangerous thing. It's roughly equivalent to exec()ing external stuff. If it's adequate for the task, link in a TCL/Lua/whatever interpreter and you get the same result with a more auditable and otherwise sane implementation.
Very wrong:
1. Static linking allows you to link in only the routines you need (depending on the design of the dependencies' libraries).
2. UNIX systems have implemented page sharing since System V, I believe, so if you run two apps that both include the same code (e.g. libpng), you'll have only one instance of libpng loaded with both binaries referencing it. Actually, you even save space by doing so, as dynamic linking requires some memory overhead for loader routines, data, hints, etc. You waste a bit more disk space, though.
I would suggest you read Rob Pike on it: http://9fans.net/archive/2008/11/142
And here’s the same source after three years: https://blogs.oracle.com/rvs/entry/what_does_dynamic_linking_and
This massively inflates downloads, beyond what is reasonable. Downloading potentially hundreds of megabytes to fix a couple of instructions is ludicrous.
Hence why distros don’t update major versions of libraries within major versions of the distro. Besides, I don’t think versioned symbols are a good solution (we already have soname and that works reasonably well).
How exactly? If we’re talking about security fixes, then every single fix would trigger a relink in (almost) every single GUI application and a redownload of its executable code.
Dynamic loading into non-PIC (static) binaries is dangerous and can and will result in serious headaches. Static linking also defeats ASLR.
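For what it's worth, the difference is easy to check on a Linux box (assuming GNU binutils; the exact wording of the output varies by version): a dynamically linked or PIE binary reports ELF type DYN, while a classic statically linked executable reports EXEC and is loaded at a fixed address.
$ readelf -h /bin/ls | grep Type
$ file /bin/ls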
It is not up to you to judge that. There are legitimate use cases and you can’t afford to not support them.
And if not adequate, then what?
Unless your library is huge and includes a lot of redundant code, most or almost all of it is used in some internal capacity. The situation is a trade-off. Sure, shared linking is a "catch-all" kind of thing, but it allows significant savings if you have sufficient uses for it. E.g. try building a static library and compare its size to the total in-memory size of a shared-object linked one. In my testing, even a basic Hello World kind of thing comes in at around 578k statically linked vs. 5k dynamically linked. libc.so + ld.so meanwhile are around 1800k, so if at least 3-4 binaries use them, it already turns into a net benefit. Let's make an experiment. Assuming you statically linked just libc.so into all binaries and assuming it only adds 500k of extra gunk to each binary (remember, above I only did printf("hello world\n") – most useful software does a lot more), how does it stack up:
$ find /usr/bin /bin/ -maxdepth 1 -type f | xargs ls -l | awk '{sum_norm += $5; sum_static += ($5 + 500000)} END{print sum_norm; print sum_static}'
292362139
1331862139
Congratulations, you've just increased the storage requirement for binaries (and to a large part the memory requirements) by a factor of about 4.5x.
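For the record, the hello-world comparison above can be reproduced with something along these lines (exact sizes will vary by toolchain and libc):
$ cat > hello.c <<'EOF'
#include <stdio.h>
int main(void) { printf("hello world\n"); return 0; }
EOF
$ gcc -o hello_dyn hello.c
$ gcc -static -o hello_static hello.c
$ ls -l hello_dyn hello_static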
I suggest you crack open a book teaching OS fundamentals, because you got this pretty wrong (I recommend Maurice J. Bach’s The Design of the UNIX Operating System). The “shared” nature of shared objects is implemented entirely by the runtime linker (ld.so) by mmap()ing the files into multiple address spaces. The kernel then loads the pages only a single time. Meanwhile the runtime linker sets up a per-process relocation table that all code calls into to locate each symbol (since the load addresses depend on where mmap() decided to create the mappings, more or less). The kernel, contrary to what you probably believe, doesn’t do any runtime linking in user processes. All it does is load the app binary plus the interpreter that’s designated in the ELF .interp section and transfers control to the interpreter. The interpreter then takes care of the rest (loading dependent libraries, handling translations, etc.).
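This is easy to observe from a shell with the standard binutils/glibc tools: the .interp section names the runtime linker, and ldd lists the shared objects that interpreter will mmap() in.
$ readelf -p .interp /bin/ls    # typically prints /lib64/ld-linux-x86-64.so.2
$ ldd /bin/ls                   # the objects the runtime linker will map and relocate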
Statically linked binaries are different files on disk and are thus different pages in memory. To unify them would require kernel-level memory deduplication (called KSM in Linux) – an enormous performance penalty while being entirely counterproductive (the much easier route is to simply use shared-object libraries) and hence why nobody implements it unless forced to (KSM is used in environments where shared-object information is not available, e.g. in VDI hypervisors).
I read it; the guy gives no reasoning whatsoever, only assertions. Zero analysis.
In what universe are two different people the same source? Also, you seem to mistakenly think that I cited Rod Evans because he is an authority. I didn't. I cited him because he made some good points.
Now to Roman Shaposhnik’s points: yeah, he’s right, sometimes dynamic linking isn’t a good fit for the problem at hand. But that’s why we retain the capability to produce statically linked libraries and can link a certain subset of them (using the flags I showed you before). But to think that it’s more trouble than it’s worth is ludicrous – that’s throwing the baby out with the bath water.
No it doesn't – all you need is to relink several binaries, so the whole transfer for a single lib's security update may easily boil down to one libxyz.a download. The package manager could do the extraction and relinking on the receiver side, which is actually a very small computational load.
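No mainstream package manager actually works this way, so purely as a hypothetical sketch of what "relink on the receiver side" would mean (file names and layout invented for illustration):
$ # only the fixed static library is downloaded; each app's object files are kept locally
$ gcc -o /usr/bin/someapp /var/cache/relink/someapp/*.o -L/usr/lib/static -lxyz -lpng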
I have absolutely no interest in distros that don't do rolling release. That said, the model I'm describing here might indeed be suboptimal for such distros. I simply don't have enough time to obtain the necessary data.
Again, it depends on the nature of the flaw. E.g. with a vulnerability in libpng you most likely need to update only the handful of apps that render PNGs from untrusted sources. And again: relinking is very cheap.
Wrong. Dynamic loading into whatever is dangerous and will result in serious headaches. This technology is largely insecure by design.
I'm not sure whether something akin to ASLR is completely impossible with static binaries, though. Given that addresses are randomized at load time, I believe static executables could be remapped on the fly and even spread throughout memory. Provided that most systems maintain a per-process virtual address space, this shouldn't be much of an undertaking.
Then exec(). Same risks.
Yes, this way I increase the storage requirements for binaries, and even if I use a filesystem with smart deduplication I still lose a lot of storage capacity compared to dynamically linked executables. (Though I may add that the people from suckless.org claim they've seen an insignificant difference in size in real-world scenarios.)
I didn't say that static linking is better in every possible aspect. But when making the argument about size you should consider how important the storage savings actually are. On my laptop with a 300GB HDD I have a bit more than 1GB of binaries, 2GB of other stuff outside ~ and 195GB of user data. Now, let's assume that static linking would increase the amount of storage required for binaries 10 times (which is way more than it really would). So:
• dynamic linking scenario: 3GB of system vs. 195GB of user data (ratio: .015, overhead: 0);
• static linking scenario: 12GB of system vs. 195GB of user data (ratio: .062, overhead: 9GB).
Do these numbers convince you that I should be concerned with saving storage space? They don't convince me. And 300GB is not all that much storage these days, just as 195GB is not all that much user data. In fact I use external storage for ~400GB that I would store locally were my drive big enough. BTW, do you think this 9GB overhead would influence my decision to use external storage at that scale?
OK, I fucked up here and I should admit it. Nonetheless, on my Arch system the dynamically loaded dependencies of konqueror (web browsers are unmatched in dependency count) total 49288 KB, which is orders of magnitude less than its memory consumption. I'm still not convinced that the increased storage requirements are really a game changer here.
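For reference, a number like that can be estimated with something along these lines (assumes the usual glibc ldd output format; the result is approximate):
$ ldd $(which konqueror) | awk '/=> \//{print $3}' | xargs du -cLk | tail -1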
He (Rob Pike) summarizes Sun's report on their analysis. If you want, I can hunt for the paper, but I'm quite confident he wouldn't lie.
I meant Sun as the same source for both blog posts.
Or accepting the fact that this baby already didn’t make it.
Dynamic linking reduces storage requirements at the cost of a major increase in complexity and a minor slowdown. Given that storage and processing speed keep getting cheaper while complexity normally only grows, static linking becomes more attractive with time and dynamic linking less so. I'm too lazy to invest the required time into this issue, so I'm waiting for suckless.org's stali to see a working all-static Linux which I can compare to a conventional distro.
P.S.:
It's up to me as a developer to either use the technology or not, just as it's up to me as a developer to recommend it or not. And it is up to me as a user to use software based on it or not. So it's totally up to me to judge that. Please quit speaking to me as if I were your son, employee or debtor – I'm not, and you are not in a position to tell me what I can and can't do. Seriously, I don't enjoy reading you patronizing me, and I'm not obliged to keep this up. So please, don't do that.
If I fucked up again somewhere, I’d be glad to know though.
Couple of problems:
1) First you’d have to have a central repository of all possibly affected binaries. Out-of-repo binaries would simply be left behind – shared objects address this.
2) It is not a small computational load. It’d involve reading e.g. all apps in KDE, in their entirety, modifying them in memory and rewriting them.
3) You can't actually do it with statically linked ELFs, because they lack an object table, and file/symbol inlining (aka function body inclusion) destroys all hope of ever getting it back.
Or, on shared objects, when you do break the ABI, bump the soname. Old versions continue to work and new versions will as well.
No, you'll need to relink every single GUI app because – you may not realize it – all GUI apps need to load icons etc. Moreover, they use abstractions (e.g. NSImage in Cocoa) which provide that facility. To satisfy your requirement above you'd have to force everybody to write extremely specific code with almost no abstractions. Lastly, assumptions like "does this read from insecure sources" are just a recipe for mistakes slipping through the cracks. By updating the shared-object version, you simplify this decision process. Your proposal makes it more complicated.
Dynamic loading and security are entirely orthogonal and it wasn’t what I was talking about. I was talking about stuff like this:
What gets randomized are the load addresses of shared objects, not of individual symbols. That would be impossible even with a second layer of virtual memory indirection (which the hardware doesn't provide, mind you). I'm beginning to suspect you don't actually understand how virtual memory and paging work in detail.
"Then exec(). Same risks."
Communicating out-of-address space over IPC is hugely more expensive. No-go.
Storage deduplication is very expensive due to CPU and memory overhead. If you try to optimize this by just deduplicating your binaries, then you’ve made it even more complicated. You’re simply patching up your leaky ship with more workarounds.
Because this code will take up memory, which is a lot more scarce than disk space (not all of it, obviously, but 1.5% vs 6.2% is significant). Moreover, you constantly think of desktops where you have an entire machine to yourself. You completely forget about servers and especially VDI, where there’s not an embarrassment of riches to spend. Lastly, enjoy relinking several gigabytes worth of binaries on security updates (assuming you could actually make them work – see above for why you can’t).
Now multiply that by the number of apps in your system and you’ll understand why it would be significant.
It’s not about lying, it’s about being wrong. The analysis can be incomplete and/or bad.
These are personal blogs and don’t represent the company’s stance. People are not their companies.
Well so far the alternatives you presented are even worse. You have a pet peeve with shared objects and are willing to rework pretty much how the entire OS works from the binutils, package manager up to software architecture (just exec() instead of dlopen(), you’ll like it!).
I meant that as in “were you given your way and were a system vendor”, not you personally.
Yes, plenty, but sadly you don’t know it.
Yes, this is intended. You either make a package for your binaries or enjoy the consequences. This is true for all modern packaging systems as well.
You don't need to do it with statically linked ELFs. You may as well relink the .o and .a files, as you did the first time. And it is not a big deal to keep them around, especially as every package management tool leaves the packages on disk after installing them.
And this fails when you have binaries linked against libxyz.N and an ABI change happened between libxyz.N.M and libxyz.N.M+1. And please, don't tell me this doesn't happen, because I have witnessed it multiple times.
No. You don't have abstractions in your static binaries – you have several groups of binary code linked together, so when you relink the binary you don't care how libxyz got there; all you need to know is which parts of libxyz are there. And that is a task the package manager can undertake easily.
Not at all. The whole point of updating only the affected binaries is to mitigate the cases when an update breaks something rather than fixes it (not uncommon, and I already gave you a link on that). With dynamically linked binaries such cases are more difficult to solve. Furthermore, my proposal makes things more complicated during setup/update actions, while the current systems make things more complicated at runtime.
Wrong, but OK.
Again: don’t do dynamic loading.
And totally avoidable in most cases. In fact I have yet to see a case where the use of dlopen() is not a design flaw.
Yes, because desktops (specifically with DEs) are the most dependency-heavy.
And where you actually need a handful of binaries with a set of dependencies so slim that you don’t really notice the difference between static and dynamic linking.
I enjoy a rolling-release distro. Are you still trying to scare me?
I actually decided to calculate the difference in size of my Arch system with KDE and other desktop stuff if I had every dependency linked statically. I'll post my findings when I'm finished.
Now you deny the possibility of measuring the practical storage impact of static linking by comparing a conventional distro with theirs, because they link to a paper? Nice!
Sorry for being an ass in the last post, it’s been a long day. I mean you no disrespect, though I still firmly stand behind all the points I’ve made.
Spoils of internet I guess…
Yeah, building complex pieces of software can be quite complicated. So far I don’t think you’ve discovered anything new. Anyhow, what solution do you propose to the cups-ghostscript situation?
And taking the older libraries and including them with the game package didn’t work why exactly? When running legacy binary-only software, you simply need to make sure that all the ABIs you link against directly are stuff you can control. The portions of your runtime environment you don’t link against directly, such as the kernel ABI, the X11-protocol and all relevant X11 extensions, are stable.
That is not to say that Linux couldn’t do a bit more here. I work in a Solaris-derived environment and there ABI stability is the tradition, even for e.g. kernel-loaded drivers (I can load a Solaris 8 NIC driver in the latest OpenIndiana and it will work).
I don’t propose anything – I never had a look at these, and I don’t really want to get onto it.
Because there is no reliable place to take old binaries from and expect them to keep working – I had to build them within the package script.
Then why’d you bring it up? Just to complain?
What does “expect them to work thereafter” mean? You can easily download legacy packages from e.g. packages.debian.org, unpack them, load them into your source tree’s distribution environment and include them in your build. You don’t have to recompile each time you build the app. This is how most of us do it. And you can be sure they’ll work. As noted, all the relevant external interfaces (e.g. kernel syscall ABI) are stable.
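In practice that usually means shipping the legacy .so files alongside the binary and pointing the loader at them. A rough sketch, with purely illustrative package and path names:
$ mkdir -p /opt/oldgame/lib
$ dpkg-deb -x legacy-libudev0.deb extracted/    # hypothetical package file fetched from e.g. packages.debian.org
$ cp extracted/lib/*/libudev.so.0* /opt/oldgame/lib/
$ LD_LIBRARY_PATH=/opt/oldgame/lib /opt/oldgame/bin/game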
I didn't bring it up – I only got into this discussion because you were denying the existence of this problem.
That means you either ship a huge number of libraries (with potential security problems that were fixed in newer versions ages ago) or you indeed recompile the dependencies against the newest versions of their dependencies you can get. The latter is better from a security, space-efficiency and (sometimes) feature-availability perspective, so blindly using old versions of dependencies is not a good way to go.
The problem is a non-problem. You’re just complaining about stuff that is normal and you provide no solution (because if you did, you’d understand that you can’t).
So you want everybody in your dependencies (and not just the system-provided ones) to have committed stable interfaces, never break API/ABI compatibility, yet you want them to keep on making progress? Third parties are not obligated to provide you new functionality or bugfixes on top of old APIs/ABIs. If you want that, backport the stuff yourself.
This is a logical fallacy: the facts that this stuff is common ("normal" is the wrong choice of word) and that I have no solution don't make it a non-issue. There was a time when gunpowder weapons had a good chance of killing their operators, and every major battle resulted in several deaths from misbehaving firearms. Nobody had a solution for quite some time… I bet you wouldn't call it a non-issue were you assigned today to operate that kind of weaponry.
Yes, when it makes sense, and no when it doesn't. I'm OK with the necessity of rebuilding stuff when needed, but that doesn't make it good practice. I'm not able to fix it and you are not in a position to demand fixes from me.
Actually, you’re not in a position to demand anything here. Upstream projects have zero obligation to you, unless you pay for them or provide some other compensation. Don’t like it that libpng changes its interface? Don’t use it or help out with its development (either through donating time or by paying developers) or backport patches or fork it or write your own. There’s plenty of options besides complaining with no suggestions for a solution.
Anyway, all of this is entirely tangential to "dependency hell", which is what was originally talked about. My claim was that these are known and surmountable problems.
I don’t demand anything. Please, pay attention.
Q: So you want everybody in your dependencies never break API/ABI compatibility, yet you want them to keep on making progress?
A: Yes, when it makes sense, and no when it doesn’t.
You could nitpick “want” =/= “demand”, but that’s simply because I misspoke.
You know, if you replace “want” with “demand” in my comment, I won’t sign it. I got into this thread simply to point out that someone was denying an existing problem. I wouldn’t write on this topic otherwise.
The program did not obey the Linux Standard Base. Likewise, there is a list of functions exposed by the Linux kernel that are stable.
The same outright ugliness of many versions of the same library kept around for binary compatibility exists on Windows and OS X as well.
curl (with “CURL_OPENSSL_3”, which vanished a year ago or so)
This is not exactly something to complain about. It disappeared from distributions due to a security issue. There are still versions of curl built against newer OpenSSL.
So you were working with a program that requires a special hack. The end user deserves a warning that they are using something insecure.
By the way, Ghostscript can be built without CUPS, and the CUPS-dependent part of Ghostscript can be built separately on its own. So you don't have to rebuild the core Ghostscript. Some distributions ship it this way.
So your example of a circular dependency is in fact void. If you want to see how, get the build files for Debian's ghostscript and ghostscript-cups packages.
In a way, your complaint about that circular dependency could be solved if the ghostscript package cut itself in two. Then you would not have misunderstood it.
So the build order for building CUPS and Ghostscript is:
1. Ghostscript engine only, no CUPS.
2. CUPS.
3. Ghostscript CUPS parts only.
No rebuilding of anything. The Ghostscript engine without the CUPS parts is fully functional – without CUPS support is how Ghostscript is commonly built for use on Windows. The CUPS parts add printer drivers and filter interfaces so that CUPS can use Ghostscript.
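As a rough sketch only (the exact configure flags and make targets differ between versions – check ./configure --help in each source tree), the three-step order above looks something like:
$ (cd ghostscript-X.Y && ./configure --disable-cups && make && make install)   # 1. engine only
$ (cd cups-X.Y && ./configure && make && make install)                         # 2. CUPS
$ (cd ghostscript-X.Y && ./configure && make cups && make install-cups)        # 3. GS CUPS parts (target names illustrative)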
The reason CUPS will not build without Ghostscript is that printer filters from the Ghostscript developers have been merged into CUPS over the years. So the other option is for all the drivers and filters from Ghostscript to be merged into CUPS. Mind you, that's completely unnecessary.
If you are not installing CUPS you don't need Ghostscript's CUPS drivers and filters either. Now, some distributions don't cut ghostscript into two packages.
Yes, most circular dependency problems when building Linux normally come from not reading the configure options and not working out that a source package splits into two or more installable packages – two independently buildable packages, in some cases with a dependency order for the build: the Ghostscript engine has to be built before the CUPS drivers and filters.
There is one thing with a true, unquestionable circular dependency, and that is gcc. But the existence of the circular dependency is hidden inside its build scripts.
gcc's internal libraries can only be built by the same version of gcc that will use them. Yes, there is an xgcc inside the gcc build world that is a partial gcc used to build the core parts. Without it gcc would not be buildable.
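You can see this in gcc's standard bootstrap: stage 1 builds xgcc with whatever system compiler is available, and that xgcc then rebuilds gcc (and its runtime libraries) with itself. Roughly, using the out-of-tree build the gcc docs recommend (version placeholder X.Y.Z):
$ mkdir build && cd build
$ ../gcc-X.Y.Z/configure --prefix=/opt/gcc --enable-languages=c,c++
$ make bootstrap    # stage1 xgcc -> stage2 -> stage3, then libgcc/libstdc++ built with the new compiler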
Ilovebeer, in my limited opinion, dependency hell exists only when an application is run and linked against a library version not installed on the Linux host, or that version is impossible to install on that host. This is just one scenario; I have encountered it only when I download tarballs and compile them myself. It is normal. Try that on Windows: download some source code and compile it, if you can, without manually installing compilers and IDEs.
But with YUM and APT you don't need to worry about dependency hell, because if an app is present in the host's repositories it will be installed with just one command, and the rest of the dependent programs/libraries will also be installed as needed without your input.
And my other opinion is that shared libraries are what made dependencies a problem in the past. Again, dependency hell exists only if you add custom repositories that are not supported by your Linux distro, or if you download and compile programs yourself with ./configure && make.
Please do explain in more depth. What does “over-simplistic building” mean and how would you address it, assuming it’s a problem.
And that's exactly what APT and YUM solve, just as the poster before you said. You simply type "[yum|apt-get] install <what-you-want>", hit Return a couple of times and you're done. In the likes of Synaptic or Ubuntu's Software Center (which is what most beginners would use) it's even simpler – one click, agree to download 'X' many MB and you're done. It could hardly be simpler.
As already stated, I’m not talking about grouping required dependencies and delivering them all together, which is what a package manager does. I’m talking about the spaghetti mess that creates bloat and dependencies that, were things designed better to begin with, would be completely unnecessary. I am talking about code level design, not end-user pre-compiled package management.
Such as? Give examples, I still can’t see what you mean.
You're talking about code-level design? What does that have to do with DEPENDENCY HELL?
I could write completely spaghetti-style code without worrying about dependencies.
http://www.techradar.com/news/mobile-computing/tablets/microsoft-ma…
Maybe it's a plan from Microsoft to take over Android…
Android is (at least currently) ill-suited to the role of a desktop OS and isn't in the same league as Windows on a traditional desktop. It lacks windowing and has only rudimentary support for peripheral hardware. While full-fledged Linux would be well suited to such a role, many people hear the word "Linux" and immediately think geekware! A much better option would be to simply dump Windows altogether and supply new lightweight desktops (super-thin AIO types) with ChromeOS. Google's new tech darling has already proven itself a dark horse, having sold well over the holiday shopping season compared to comparable Windows systems. It also feels like a more traditional OS compared to MacOS or Windows. Of course it lacks touch, which is most certainly why OEMs are choosing Android instead. The best of both worlds would be for Google to quickly advance ChromeOS by making it touch capable and adding a layer that would allow the tens of thousands of Android apps to run natively. Then I suspect Microsoft will really have something to worry about!
I can’t honestly say I agree with you about Chrome OS. But, it will be interesting to see where Chrome OS stands after it matures more.
"Lacks windowing" is incorrect. Open up the SDK and you will find that windowing is in fact supported; the default window manager of Android just doesn't expose it. There are third-party ones, used by Samsung, that fully support windowed mode on tablets – yes, over 20 percent of the existing Android market.
"Rudimentary hardware support" is also very questionable. The Linux /dev and /proc directories are exposed to Android applications, so all the hardware support of the Linux kernel is included. A lack of applications that use more than the rudimentary is more the case.
So your issues are with applications, not Android itself.
ChromeOS, as running for example on the Google Pixel laptop, does support touch.