Linked by Thom Holwerda on Fri 3rd Jan 2014 19:44 UTC
Hardware, Embedded Systems

The PC industry isn't doing so well. Sales have dramatically slumped, despite the industry's efforts to tempt consumers with Windows 8 tablets and transforming touchscreen laptops. But next week, the Consumer Electronics Show in Las Vegas may be the launching pad for a new push - a new brand of computer that runs both Windows and Android.

Sources close to the matter tell The Verge that Intel is behind the idea, and that the chipmaker is working with PC manufacturers on a number of new devices that could be announced at the show. Internally known as "Dual OS," Intel's idea is that Android would run inside of Windows using virtualization techniques, so you could have Android and Windows apps side by side without rebooting your machine.

I'm going to make a very daring prediction that is sure to send ripples across the entire industry: this is not going to turn the tide for the PC.

Permalink for comment 580069
ddc_
Member since:
2006-12-05

1) First you'd have to have a central repository of all possibly affected binaries. Out-of-repo binaries would simply be left behind - shared objects address this.

Yes, this is intended. You either make a package for your binaries or enjoy the consequences. This is true for all modern packaging systems as well.

3) You can't actually do it with statically linked ELFs because they lack an object table and file/symbol inlining (aka function body inclusion) will destroy all hope of ever getting it back.

You don't need to do it with statically linked ELFs. You may as well relink the .o and .a files just as you did the first time. And it is not a big deal to keep them around, especially since every package management tool leaves the packages on disk after installing them.

Or, on shared objects, when you do break the ABI, bump the soname. Old versions continue to work and new versions will as well.

And this fails when you have binaries linked against libxyz.N and an ABI change happens between libxyz.N.M and libxyz.N.M+1. And please, don't tell me this doesn't happen, because I have witnessed it multiple times.
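To make this concrete, here is a minimal, invented sketch of that situation (libxyz, xyz_ctx and xyz_frob are made-up names, and both struct layouts are shown in one file so it compiles and runs): the soname stays libxyz.so.N, but a field is inserted into a public struct between N.M and N.M+1, so an old binary and the new library disagree about the layout.

```c
/* Hypothetical sketch of an ABI break between libxyz.N.M and
 * libxyz.N.M+1 with no soname bump; all names are invented.
 * Both layouts appear in one file purely for illustration.        */
#include <stdio.h>

/* Layout from libxyz.N.M's header, baked into the old application. */
struct xyz_ctx_old { int flags; int reserved; };

/* Layout shipped by libxyz.N.M+1: a field inserted at the front.
 * The soname is still libxyz.so.N, so the old binary loads it.     */
struct xyz_ctx_new { int mode; int flags; int reserved; };

/* The updated library function, compiled against the new layout. */
static void xyz_frob(void *opaque)
{
    struct xyz_ctx_new *ctx = opaque;
    /* Reads "flags" at offset 4, where the old caller put "reserved". */
    printf("library thinks flags = %d\n", ctx->flags);
}

int main(void)
{
    /* The old, not-relinked binary still builds the old layout. */
    struct xyz_ctx_old ctx = { .flags = 42, .reserved = 0 };
    xyz_frob(&ctx);   /* prints 0, not 42: a silent ABI break */
    return 0;
}
```

Run it and you get 0 instead of 42 - exactly the kind of breakage no soname check will catch, but which relinking the affected binaries against the patched library would fix.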

To satisfy your requirement above is to force everybody to write extremely specific code with almost no abstractions.

No. You don't have abstractions in your static binaries - you have several groups of binary code linked together, so when you relink the binary you don't care how libxyz got there - all you need to know is which parts of libxyz are there. And this is a task a package manager can undertake easily.

By updating the shared-object version, you simplify this decision process. Your proposal makes it more complicated.

Not at all. The whole point of updating only the affected binaries is to mitigate the cases where an update breaks something rather than fixes it (not uncommon, and I already gave you a link about that). With dynamically linked binaries such cases are more difficult to solve. Furthermore, my proposal makes things more complicated during setup/update, while the existing system makes things more complicated at runtime.

Dynamic loading and security are entirely orthogonal and it wasn't what I was talking about.

Wrong, but OK.

I was talking about stuff like this:
"One common failing we have discovered is that many folks built against libc.a but otherwise used dynamic objects to supply other interfaces. [...]
"
Again: don't do dynamic loading.

"Then exec(). Same risks.

Communicating out-of-address space over IPC is hugely more expensive. No-go.
"
And totally avoidable in most cases. In fact, I have yet to see a case where the use of dlopen() is not a design flaw.
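For illustration, a rough sketch of the dlopen() pattern I mean, next to the direct-link alternative that makes the dependency visible to the linker and the package manager; libplugin.so and plugin_init() are invented names:

```c
/* Rough sketch of the dlopen() pattern versus direct linking.
 * "libplugin.so" and plugin_init() are invented names.          */
#include <dlfcn.h>
#include <stdio.h>

/* The dlopen() way: the dependency is invisible to the static
 * linker and to the package manager, and a missing or mismatched
 * library only shows up at run time.                             */
static int call_plugin_via_dlopen(void)
{
    void *h = dlopen("libplugin.so", RTLD_NOW);
    if (!h) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return -1;
    }
    int (*plugin_init)(void) = (int (*)(void))dlsym(h, "plugin_init");
    if (!plugin_init) {
        fprintf(stderr, "dlsym: %s\n", dlerror());
        dlclose(h);
        return -1;
    }
    return plugin_init();
}

/* The avoidable alternative: declare the symbol and link against
 * the library at build time (statically or dynamically), e.g.
 *
 *     extern int plugin_init(void);   // then link with -lplugin
 *
 * so the dependency is explicit and the binary can be relinked.  */

int main(void)
{
    if (call_plugin_via_dlopen() != 0)
        fprintf(stderr, "plugin not available\n");
    return 0;
}
```

(Build with cc sketch.c -ldl; on a system without a libplugin.so it just reports the failed dlopen, which is the point: the dependency only materializes at run time.)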

Moreover, you constantly think of desktops where you have an entire machine to yourself.

Yes, because desktops (specifically with DEs) are the most dependency-heavy.

You completely forget about servers and especially VDI, where there's not an embarrassment of riches to spend.

And where you actually need only a handful of binaries with a set of dependencies so slim that you don't really notice the difference between static and dynamic linking.

Lastly, enjoy relinking several gigabytes worth of binaries on security updates (assuming you could actually make them work - see above for why you can't).

I enjoy running a rolling release distro. Are you still trying to scare me?

"Nontheless, on my Arch system the total amount of dynamicly loaded dependencies for konqueror (web browsers are unmatched in dependencies count) total in 49288 Kb, which is orders of magnitude less then its memory consumption.

Now multiply that by the number of apps in your system and you'll understand why it would be significant.
"
I actually decided to calculate how much larger my Arch system with KDE and other desktop stuff would be if every dependency were linked statically. I'll post my findings when finished.
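For reference, a figure like the 49288 Kb above can be obtained along these lines - a rough, Linux-specific sketch (not necessarily the exact method used for that number) that sums the on-disk size of the unique shared objects a running process has mapped, by reading /proc/<pid>/maps:

```c
/* Rough sketch: sum the on-disk size of the unique shared objects a
 * running process has mapped, by parsing /proc/<pid>/maps (Linux).
 * Usage: ./sosize <pid>                                            */
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/stat.h>

#define MAX_LIBS 1024

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }

    char path[64];
    snprintf(path, sizeof path, "/proc/%s/maps", argv[1]);
    FILE *maps = fopen(path, "r");
    if (!maps) { perror("fopen"); return 1; }

    static char seen[MAX_LIBS][512];   /* crude de-duplication table */
    size_t nseen = 0;
    long long total = 0;
    char line[1024];

    while (fgets(line, sizeof line, maps)) {
        char *file = strchr(line, '/');         /* backing file, if any */
        if (!file || !strstr(file, ".so"))      /* keep shared objects  */
            continue;
        file[strcspn(file, "\n")] = '\0';

        int dup = 0;
        for (size_t i = 0; i < nseen; i++)
            if (strcmp(seen[i], file) == 0) { dup = 1; break; }
        if (dup || nseen == MAX_LIBS)
            continue;
        snprintf(seen[nseen++], sizeof seen[0], "%s", file);

        struct stat st;
        if (stat(file, &st) == 0)
            total += st.st_size;
    }
    fclose(maps);

    printf("%zu shared objects, %lld KiB on disk\n", nseen, total / 1024);
    return 0;
}
```

Point it at a running konqueror and compare the total against the process's actual memory consumption.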

"I'm too lazy to invest required time into this issue, so I'm waiting suckless.org's stali to see a working all-static linux which I may compare to conventional distro.

I'm sorry, but that distro looks like the Linux equivalent of science denialism. E.g. he links [...]
"
So now you deny the possibility of measuring the practical storage impact of static linking by comparing a conventional distro with theirs, just because they link to a paper? Nice!
