Linked by Thom Holwerda on Wed 21st Sep 2016 22:55 UTC
Linux

There's a story going round that Lenovo have signed an agreement with Microsoft that prevents installing free operating systems. This is sensationalist, untrue and distracts from a genuine problem.

With that solved, let's get to the real root cause of the problems here:

The real problem here is that Intel do very little to ensure that free operating systems work well on their consumer hardware - we still have no information from Intel on how to configure systems to ensure good power management, we have no support for storage devices in "RAID" mode and we have no indication that this is going to get better in future. If Intel had provided that support, this issue would never have occurred. Rather than be angry at Lenovo, let's put pressure on Intel to provide support for their hardware.

As someone who tried to move his retina MacBook Pro to Linux only a few weeks ago, I can attest to Intel's absolutely terrible Linux drivers and power management. My retina MacBook Pro has an Intel Iris 6100 graphics chip, and the driver for it is so incredibly bad that even playing a simple video made the laptop so hot I was too scared to leave it running. Playing that same video in OS X or Windows doesn't even spin up the fans, and the laptop stays entirely cool. Battery life in Linux came in at 2-3 hours, whereas on OS X or Windows I easily get 8-10 hours.
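
(If you want to experiment, some of the i915 driver's power-saving features can be toggled via module parameters - worth a try but very much a hedge, since enable_psr in particular is known to cause flicker on some panels. The file name below is just a convention:)

# /etc/modprobe.d/i915.conf
options i915 enable_fbc=1 enable_psr=1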

Don't Know what to say...
by dionicio on Wed 21st Sep 2016 23:22 UTC
dionicio
Member since:
2006-07-12

Except for this: [as most of the time] I'd bet engineering isn't the stumbling block ;)

Edited 2016-09-21 23:23 UTC

Reply Score: 3

RE: Don't Know what to say...
by segedunum on Thu 22nd Sep 2016 09:32 UTC in reply to "Don't Know what to say..."
segedunum Member since:
2005-07-06

Let's just say I have never seen, and never thought I would see, a fake RAID driver be required by default to access a single disk. How on Earth this helps anything like power management is utterly beyond me.

I would also imagine if you took the disk out of the laptop you wouldn't be able to read it unless you were using the same interface and drivers.
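
(For what it's worth, Linux's mdadm can usually drive Intel's fake RAID through its IMSM metadata support, so you can at least see what the firmware is doing - a sketch, assuming mdadm is installed:)

mdadm --detail-platform    # show the firmware RAID (Intel IMSM) capabilities, if any
mdadm --examine --scan     # look for RAID metadata on the attached disks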

Reply Score: 5

RE[2]: Don't Know what to say...
by dionicio on Fri 23rd Sep 2016 14:50 UTC in reply to "RE: Don't Know what to say..."
dionicio Member since:
2006-07-12

"...I would also imagine if you took the disk out of the laptop you wouldn't be able to read it unless you were using the same interface and drivers..."

Agreed, if the file system is virtualized or scrambled.

Reply Score: 2

RE: Don't Know what to say...
by dionicio on Fri 23rd Sep 2016 14:30 UTC in reply to "Don't Know what to say..."
dionicio Member since:
2006-07-12

Retired now. Still, friends and family keep disturbing these aged bones with the recent, cozy China 'itch' for dominion of the Cyber-From-Always-Fucked-Space.

If only Governments had not adopted this Mil Toy. Too late, anyway. Just commit and start cleaning the $h!t.

All of the 'stack' concept is wrong. Making one thing vitally depend on a previous one. And that previous one on another. And so on, and so on.

Don't know how interwoven the 'stack' is with the Von Neumann concept. [But suspecting nothing at all.]

At this time this environment is so fucked. And yes, I also had an account at Yahoo ;)

Reply Score: 2

Windows on the MacBook
by LaceySnr on Wed 21st Sep 2016 23:31 UTC
LaceySnr
Member since:
2009-09-28

I was thinking of trying a Linux distro on my 2013 MBP, but maybe I won't even bother now.

Thom, do you have any driver issues with Windows? The moment I install Realtek's audio drivers I hit BSODs, and Apple don't appear to have bothered updating their Boot Camp driver suite in a while now.

Reply Score: 1

Writings on the wall
by viton on Thu 22nd Sep 2016 00:19 UTC in reply to "Windows on the MacBook"
viton Member since:
2005-08-09

Apple don't appear to have bothered updating their Boot Camp driver suite in a while now.
This is the sign of inevitable switch! ;)

Reply Score: 3

RE: Windows on the MacBook
by No it isnt on Thu 22nd Sep 2016 05:30 UTC in reply to "Windows on the MacBook"
No it isnt Member since:
2005-11-14

You won't lose anything from trying. 2013 is old enough that power management should be slightly better. My 2013 HP ultrabook runs fairly cool and quiet.

I did spend some time tweaking it a couple of years ago, though.

Reply Score: 3

Not this particular case, but...
by dionicio on Thu 22nd Sep 2016 19:36 UTC in reply to "RE: Windows on the MacBook"
dionicio Member since:
2006-07-12

Seems Lenovo's signatures render 100% control & 'trust' of the whole boot stack to Microsoft. Which I see as good, if sold as such.

Why? Those units will be stronger at remote diagnostics.

Edited 2016-09-22 19:42 UTC

Reply Score: 2

darknexus Member since:
2008-07-15

Why? Those units will be stronger at remote diagnostics.

Judging by how difficult Windows 10 makes remote admin tasks, somehow I doubt it.

Reply Score: 2

dionicio Member since:
2006-07-12

But not for themselves, at least 8 and 10.

Reply Score: 2

RE: Windows on the MacBook
by dionicio on Thu 22nd Sep 2016 15:55 UTC in reply to "Windows on the MacBook"
dionicio Member since:
2006-07-12

Low-end Intel graphics drivers for Linux are quite decent on energy, compared to their Windows cousins.

Reply Score: 2

Comment by ddc_
by ddc_ on Thu 22nd Sep 2016 00:13 UTC
ddc_
Member since:
2006-12-05

Intel allocates its resources to the development of a custom proprietary interface it doesn't really bother to support in any reasonable manner: no Linux drivers, and the default Windows driver does things wrong. Sounds like a bad idea that was not dismissed until it was too late. Fine.

But in what kind of universe is the proper workaround to disable the standard interface and force the aforementioned custom proprietary vendor-specific interface onto innocents? And more so, how the hell did someone get the idea that removing the firmware setting that would unbreak hard disk access is a reasonable thing to do?

Sure, this story tells a lot about Intel's decision making, but Lenovo's response is far more idiotic. And on top of it, there were reports of Lenovo staff bullying people out of reporting the issue on Lenovo forums...

Reply Score: 6

RE: Comment by ddc_
by dionicio on Thu 22nd Sep 2016 01:12 UTC in reply to "Comment by ddc_"
dionicio Member since:
2006-07-12

Sometimes it's better to think of PR as -by design- a race to the bottom.

Edited 2016-09-22 01:13 UTC

Reply Score: 3

RE[2]: Comment by ddc_
by dionicio on Thu 22nd Sep 2016 15:59 UTC in reply to "RE: Comment by ddc_"
dionicio Member since:
2006-07-12

Could this just be a slight hint of anticompetitive behavior? Of course, their resource allocation is more than enough to dismiss any regulatory attempt.

Reply Score: 2

RE: Comment by ddc_
by segedunum on Thu 22nd Sep 2016 10:56 UTC in reply to "Comment by ddc_"
segedunum Member since:
2005-07-06

The general rule is that if something takes effort, a manufacturer won't do it. This took effort, so there is certainly a reason behind it.

Reply Score: 4

RE[2]: Comment by ddc_
by ddc_ on Thu 22nd Sep 2016 11:03 UTC in reply to "RE: Comment by ddc_"
ddc_ Member since:
2006-12-05

This is a perfectly valid reason for defaulting to Intel's proprietary implementation. It is not a valid reason for removing a knob from the firmware, though. It almost looks like Lenovo did something stupid to AHCI mode on those laptops and is trying to hide the problem instead of working around it in software or recalling the hardware. I won't be amazed if people flashing custom firmware with the knob re-enabled find some interesting error patterns in disk access, or maybe even fried disks.

Edited 2016-09-22 11:04 UTC

Reply Score: 4

RE[3]: Comment by ddc_
by Lennie on Thu 22nd Sep 2016 20:24 UTC in reply to "RE[2]: Comment by ddc_"
Lennie Member since:
2007-09-22

If you make it possible to use something, you need to test it, create fixes when people notice problems, etc. That takes resources. It's easier to disable it, not build it, or whatever it is they do with that function.

Reply Score: 2

RE[3]: Comment by ddc_
by Drumhellar on Mon 26th Sep 2016 01:29 UTC in reply to "RE[2]: Comment by ddc_"
Drumhellar Member since:
2005-07-12

Well, considering that the number of people who will run Linux on this laptop is probably significantly smaller than the number of users who would accidentally break their systems or unknowingly kill their battery life by disabling the RAID setup, disabling standard AHCI mode in favor of RAID is perfectly valid.

Reply Score: 2

RE[4]: Comment by ddc_
by Alfman on Mon 26th Sep 2016 02:38 UTC in reply to "RE[3]: Comment by ddc_"
Alfman Member since:
2011-01-28

Drumhellar,

Well, considering that the number of people who will run Linux on this laptop is probably significantly smaller than the number of users who would accidentally break their systems or unknowingly kill their battery life by disabling the RAID setup, disabling standard AHCI mode in favor of RAID is perfectly valid.


Most users don't even know how to get into the BIOS, especially these days with fast boot. Even if AHCI performs worse, they could simply add a warning in the description of the field to make it clear.

The rest of the industry doesn't have an AHCI problem, and even Lenovo's firmware itself doesn't have an AHCI problem when flashed with a hacked version, so the need for proprietary RAID seems implausible to me. If blocking Linux was unintentional, as they claim, then the right thing to do is to remove the restriction. If they dig in their heels and continue to block alternatives even after they know about the problem, then it's no longer plausible for them to say it's unintentional.

https://forums.lenovo.com/t5/Linux-Discussion/Installing-Ubuntu-16-0...

Lenovo can do what it wants, but if I were a customer and discovered this restriction, I'd be demanding my money back.

Edited 2016-09-26 02:49 UTC

Reply Score: 2

Some links you might want to try
by galvanash on Thu 22nd Sep 2016 00:57 UTC
galvanash
Member since:
2006-01-25

Power Management:

http://www.webupd8.org/2013/04/improve-power-usage-battery-life-in....

Fan Speed Control:

https://github.com/MikaelStrom/macfanctld

...and if you have a discreet Nvidia GPU as well:

http://bumblebee-project.org/

Note: These may or may not work depending on your MacBook Pro model, but not having at least TLP on a Linux MacBook is 100% guaranteed not to work ;)

You probably won't get 8-10 hours, but 6-8 is doable if you configure everything right.
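
To get started with TLP, something like this should do (assuming a Debian/Ubuntu-based distro; package names may differ elsewhere):

sudo apt install tlp tlp-rdw   # tlp-rdw adds radio (wifi/bluetooth) handling
sudo tlp start                 # apply the settings immediately
sudo tlp-stat -s               # confirm the daemon is active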

Edited 2016-09-22 00:58 UTC

Reply Score: 7

Block vs fake support
by laffer1 on Thu 22nd Sep 2016 01:10 UTC
laffer1
Member since:
2007-11-09

I haven't tried a recent Lenovo, but I can say that some Asus motherboards do not support anything but Windows and Linux for GPT boot, even with Secure Boot disabled. I am unable to boot two different BSDs off of GPT with a recent AMD AM3+ Asus motherboard, but MBR works.

Every computer and motherboard sold should be allowed to boot an OS off GPT with an unsigned boot loader as long as secure boot is off.

As for power management, Macs are sometimes weird about fan control. Apple has their own fan management stuff, and they don't use the stock UEFI found in PCs, but EFI based on an older standard. It's not an apples-to-apples comparison, if you'll forgive the bad pun.

Reply Score: 9

RE: Block vs fake support
by dionicio on Thu 22nd Sep 2016 01:20 UTC in reply to "Block vs fake support"
dionicio Member since:
2006-07-12

Haven't tried anything else [adding FreeDOS] in recent years. Has the x86 grip-on already reached the 'Great UEFI Black-Out'?

Edited 2016-09-22 01:34 UTC

Reply Score: 2

Secure Boot
by Alfman on Thu 22nd Sep 2016 01:17 UTC
Alfman
Member since:
2011-01-28

Or it could be a misunderstanding regarding UEFI Secure Boot - Microsoft do require that Secure Boot be enabled on all Windows 10 systems, but (a) the user must be able to manage the key database...


I know Garrett is an authority on this topic, but I have trouble reconciling point '(a)' with Microsoft's earlier decision to stop requiring that Secure Boot be user-overridable in Windows 10.

http://arstechnica.com/information-technology/2015/03/windows-10-to...

Did Microsoft revert its Windows 10 policy back to the Windows 8 one, and when did that happen?


... and (b) there are several free operating systems that support UEFI Secure Boot and have appropriate signatures.


It's not really good enough that "several free operating systems" are signed by Microsoft's key. IMHO an owner should be entitled to install ANY free operating system, even if it is not signed by MS/UEFI keys, and even one they've compiled themselves!
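
(If you want to check where your own machine stands, a sketch - assuming a distro that ships shim and mokutil:)

mokutil --sb-state    # reports "SecureBoot enabled" or "SecureBoot disabled"
efibootmgr -v         # list the UEFI boot entries and their loaders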


Anyway, it seems that neither Secure Boot nor Microsoft is responsible for this particular problem. Apparently Lenovo added a single jump to the firmware to force RAID to remain on. The comments mention that a user was able to reverse engineer the firmware, remove the jump, and boot just fine without RAID.

Garrett, the author, wants to shame Intel into providing a RAID driver, and I agree entirely with that. However, I'm at a loss to understand Lenovo's motivation for going through the trouble of making the tiny change that caused these problems. This is entering pure conspiracy theory, but was this breakage an executive decision rather than an accident? We'll have to see if Lenovo is forthcoming with a firmware fix. If not, then it seems likely this is the outcome they want.

Edited 2016-09-22 01:35 UTC

Reply Score: 5

RE: Secure Boot
by segedunum on Thu 22nd Sep 2016 10:59 UTC in reply to "Secure Boot"
segedunum Member since:
2005-07-06

Did Microsoft revert its Windows 10 policy back to the Windows 8 one, and when did that happen?

No, and that will gradually be coming as critical mass is reached. Until then drivers are a good way to lock out other operating systems, or make sure people can't install older Windows versions or upgrade to new ones.

Reply Score: 3

It's not Linux; it's the laptop.
by Flatland_Spider on Thu 22nd Sep 2016 02:25 UTC
Flatland_Spider
Member since:
2006-09-01

MBPs are special snowflakes, and the best thing to do is to run MacOS on them. I've been thinking about getting a MacBook just for the battery life and a hobbled Unix environment that isn't the really nerfed crap on Windows.

MBPs are known for having terrible Linux support, and they use EFI rather than UEFI. (https://support.apple.com/en-us/HT201518) They aren't 100% a standard x86 laptop, so there are lots of things that don't work as well on a MBP versus a Dell Precision.

On the Linux side, the power management is behind MacOS and Windows. It's gotten better over the years, as my T420 will attest, and people are paying more attention to it.

Also, setting tuned to desktop mode might help.
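
(A sketch of that, assuming the tuned daemon is installed and running:)

tuned-adm list              # show the available profiles
tuned-adm profile desktop   # switch to the desktop profile
tuned-adm active            # confirm which profile is in effect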

Reply Score: 2

RE: It's not Linux; it's the laptop.
by cpcf on Thu 22nd Sep 2016 03:54 UTC in reply to "It's not Linux; it's the laptop."
cpcf Member since:
2016-09-09

I'm just a newbie in this regard, but my personal experiences seem to support these findings.

I've put Linux on several various-brand "Windows laptops", and I find a modern distro gives me great performance and good battery life. I make most of my PCs dual-boot these days, and I extend the life of old hardware with Linux.

By comparison, I've had nothing but trouble trying to do the same on Apple hardware, even when using distros that are supposed to be "Mac-centric". Nearly always there is some third-party driver issue or a specific hardware combo required to fix the situation.

I'd be inclined to put the hard questions to Apple.

Edited 2016-09-22 03:55 UTC

Reply Score: 2

Brendan Member since:
2005-11-16

Hi,

By comparison I've had nothing but trouble trying to do the same on Apple hardware, even when using Dstros that are supposed to be "Mac Centric". Nearly always there is some third party driver issue or specific hardware combo required to fix the situation.

I'd be inclined to put the hard questions to Apple.


From Apple's point of view, they're selling "hardware + software" as one product (and are not selling "generic hardware" as one product and an "OS" as a separate product). For this reason they have no reason to care about any other OS whatsoever (in the same way that Toyota has no reason to care whether you can put a Volvo engine in their cars).

If an alternative OS wants to support Apple's systems, then hardware support issues (drivers, etc.) are purely the alternative OS developer's problem, not Apple's.

- Brendan

Reply Score: 4

Priorities.
by Brendan on Thu 22nd Sep 2016 05:08 UTC
Brendan
Member since:
2005-11-16

Hi,

The problems here are (in order of importance):

* There is no standard "RAID controller" hardware interface specification that OSs can support. This means that (unlike AHCI where there is a standard/specification) every different RAID controller needs yet another pointlessly different driver that's been specifically written for it.

* Apparently, AHCI has power management limitations that should be fixed (?). There should be no reason to require RAID just to get power management to work, and power management should work for all AHCI controllers without special drivers.

* All OSs struggle to provide drivers (especially when there's no standardised hardware interface specification and you need a different driver for every device, or when something is "very new" and there hasn't been time for people to write drivers yet). This doesn't just affect Linux, and will never change.

- Brendan

Edited 2016-09-22 05:09 UTC

Reply Score: 5

RE: Priorities.
by ahferroin7 on Thu 22nd Sep 2016 13:08 UTC in reply to "Priorities."
ahferroin7 Member since:
2015-10-30

The RAID thing isn't really an issue on Linux though. You have tools like LVM and MD, and these days BTRFS and ZFS, which provide the same functionality with much greater flexibility than a hardware implementation, and quite often more efficiently and reliably than a hardware implementation could. With limited exceptions for really big storage arrays, I know nobody who uses Linux who doesn't run whatever RAID controllers they may have in pass-through mode these days, and even when they don't, they're usually running those smaller RAID arrays as components in an LVM or MD based RAID set.

As far as the power management, the issue is just as much the drives as the controllers, and even more, is an issue with ATA in general.

Reply Score: 2

RE[2]: Priorities.
by Alfman on Thu 22nd Sep 2016 15:04 UTC in reply to "RE: Priorities."
Alfman Member since:
2011-01-28

ahferroin7,

The RAID thing isn't really an issue on Linux though. You have tools like LVM and MD, and these days BTRFS and ZFS, which provide the same functionality with much greater flexibility than a hardware implementation, and quite often more efficiently and reliably than a hardware implementation could. With limited exceptions for really big storage arrays, I know nobody who uses Linux who doesn't run whatever RAID controllers they may have in pass-through mode these days, and even when they don't, they're usually running those smaller RAID arrays as components in a LVM or MD based RAID set.


Linux MD arrays (and other soft-RAIDs) implicitly use more disk bandwidth and CPU than a true hardware RAID, which does hurt overall performance. As an added benefit, most HW-RAIDs include a BBU cache, which makes a tremendous difference for small writes and even makes RAID-6 perform acceptably. My preferred solution is to run LVM on top of a *real* HW-RAID, getting the benefits of both. That said, I do have many Linux systems without a real hardware RAID where I use MD RAID instead.

Side note to anyone experiencing poor MD performance: give this a shot. For me the performance boost over the Linux defaults is drastic.
/bin/find /sys/devices/ -path "*/block/*/md/stripe_cache_size" -exec sh -c 'echo 32768 > "$1"' _ {} \;
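
Keep in mind that stripe_cache_size is measured in pages per device, so (if I recall the kernel docs correctly) the memory consumed is roughly page size x stripe_cache_size x number of member disks - at 32768 that can pin a few hundred MiB on a larger array, so scale it down on memory-constrained boxes.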



As far as the power management, the issue is just as much the drives as the controllers, and even more, is an issue with ATA in general.


Can you explain what the problem is?

Reply Score: 2

RE[3]: Priorities.
by ahferroin7 on Thu 22nd Sep 2016 15:26 UTC in reply to "RE[2]: Priorities."
ahferroin7 Member since:
2015-10-30

I won't dispute that MD and LVM based arrays often use more processor time, but I will comment that:
1. In quite a few systems I've seen, this extra processor time actually results in less power consumption than using the hardware RAID controller (this includes a number of server systems with 'good' RAID HBA's we have where I work).
2. These days I quite often see LVM-based RAID arrays (which use MD code internally for RAID now too) outperform a majority of low-end and quite a few mid-range hardware RAID controllers.

The other thing to consider though (and this is part of the reason we almost exclusively use our HBA's in pass through mode at work) is that even on Windows, it's a whole lot easier to get SMART data and other status info out of a disk that you can talk to directly.

For the power management stuff:
With SCSI drives the issue is that not all of them even support power management. Most of the FC based ones do, but you can still find SAS drives without too much difficulty that have poor power management support.

With SATA drives, things get more complicated. There are roughly three relevant standards, and most drives only support a subset of them. Many good drives these days support APM-based power management, but those that do don't always implement it correctly. Most desktop drives don't support link power management (the standard that lets the controller and drive power down the link when idle), but most laptop ones do, and it's hit or miss with enterprise drives. Some support AAM, which isn't actually power management but can be used for it in a hackish way, though those that do usually don't support using it at the same time as APM.
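
(On Linux, that APM byte and the AAM setting can be inspected and changed with hdparm - a sketch, assuming /dev/sda is the drive in question:)

hdparm -B /dev/sda       # query the APM level (1-254; 127 and below permit spin-down)
hdparm -B 128 /dev/sda   # set a middle-of-the-road APM level
hdparm -M /dev/sda       # query AAM, where the drive supports it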

In a more abstract sense, part of the issue is with Linux, but it's also an issue with other OS'es, just to a lesser degree. Linux doesn't handle nested power management very well on x86. If any of your drives can't enter a low power state, it blocks the controller from doing so on Linux. In some special cases, it's possible to have the drive remain active but idle while the controller or even just the PCIe link enters a low power state, but Linux doesn't do this.

Reply Score: 2

RE[4]: Priorities.
by darknexus on Thu 22nd Sep 2016 16:59 UTC in reply to "RE[3]: Priorities."
darknexus Member since:
2008-07-15

There's another advantage too: data recovery. Most hardware RAID controllers I've seen use a proprietary RAID scheme. To recover data from a downed machine by moving the drives to another machine is not possible unless the RAID schemes match, which usually means the controller has to be from the same manufacturer at the least.
With software RAID, be it md, MacOS, raidctl (*BSD) or whatever, you can easily put the array back together regardless of the underlying hardware. All that's necessary is to use the same software system on the destination machine, and the array can be put back together right away. To me, this safety is worth the slight CPU cost. It guarantees that you'll be free from vendor lock-in.
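
(With md, for example, the reassembly is a one-liner or two - a sketch, assuming the member disks show up as /dev/sdb1 and /dev/sdc1 on the destination machine; the device names are hypothetical:)

mdadm --examine /dev/sdb1                        # inspect the md superblock on a member
mdadm --assemble --scan                          # auto-assemble any arrays it finds
mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1    # or assemble one explicitly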

Reply Score: 5

RE[5]: Priorities.
by Alfman on Thu 22nd Sep 2016 18:41 UTC in reply to "RE[4]: Priorities."
Alfman Member since:
2011-01-28

darknexus,

There's another advantage too: data recovery. Most hardware RAID controllers I've seen use a proprietary RAID scheme. To recover data from a downed machine by moving the drives to another machine is not possible unless the RAID schemes match, which usually means the controller has to be from the same manufacturer at the least.


Definitely true; however, there should already be a contingency plan for catastrophic failures anyway. In other words, you should plan for complete failure by having full backups, so that when failure happens you still have recovery options. I personally keep local and offsite backups.

Even Linux md arrays can fail, and while the MD format might theoretically lend itself to easier recovery options, I'd just as soon recover from a backup instead. Of course, YMMV!


With software RAID be it md, MacOS, raidctl (*BSD) or whatever, you can easily put the array back together regardless of the underlying hardware.


Yes, in theory. But I was caught off guard when I tried to move an MD array in an eSATA enclosure from one computer to another. It turns out that version 2 of the MD disk format ties the array to the host, which is incompatible with my use case.

There's a second reason I think this MD "feature" is very shortsighted: on my system, the hostname is stored on the array itself, but mdadm won't load the arrays correctly prior to the hostname getting set - a catch-22. I ended up having to patch the mdadm source code to get the expected behavior. I haven't checked whether they've fixed the design.
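
(For anyone hitting the same thing: mdadm does have knobs for the homehost, which may or may not address this exact catch-22 - a sketch, with hypothetical array/device names:)

# in /etc/mdadm/mdadm.conf, accept arrays regardless of the recorded homehost:
HOMEHOST <ignore>
# or rewrite the recorded homehost while assembling:
mdadm --assemble /dev/md0 --update=homehost --homehost=newhost /dev/sdb1 /dev/sdc1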

Reply Score: 2

RE[6]: Priorities.
by darknexus on Thu 22nd Sep 2016 19:18 UTC in reply to "RE[5]: Priorities."
darknexus Member since:
2008-07-15

I'm not actually referring to backups. I'm referring to getting that system back up and running, as is. Sometimes that's what needs to be done, ASAP.
Yes, you should have backups. Full backups, in multiple locations. Sometimes though, the focus is "GET THIS THING UP AND RUNNING NOW!!!!!", which is what I was referring to.

Reply Score: 1

RE[7]: Priorities.
by Alfman on Thu 22nd Sep 2016 22:01 UTC in reply to "RE[6]: Priorities."
Alfman Member since:
2011-01-28

darknexus,

I'm not actually referring to backups. I'm referring to getting that system back up and running, as is. Sometimes that's what needs to be done, ASAP.
Yes, you should have backups. Full backups, in multiple locations. Sometimes though, the focus is "GET THIS THING UP AND RUNNING NOW!!!!!", which is what I was referring to.


I know what you were saying, but that scenario implies a hardware failure of some kind, which is going to take time to fix. If you planned for it, your backup system could be up and running even before you manage to fix the primary system.

In my case I can start using the backup file server in place of the normal file server just by turning it on and changing the IP, because it's already set up. My file server actually did die once (after an SSD upgrade, of all things), and that's exactly what I did until I was able to fix the primary server. You dislike HW-RAID, ok... fine. But for some people RAID controller failure is not as big a problem as you make it out to be. It's never even happened to me, and even if it did, I would handle it the exact same way.

Conceivably the backup server could fail at the same time; then it would take a long while to get all that offsite data back. But keep in mind that my backup server doesn't use HW-RAID, so in this scenario HW-RAID wouldn't be at fault anyway.

Please understand I'm not trying to take an elitist attitude here, I know it's not for everyone, I'm just sharing a different point of view ;)

Edited 2016-09-22 22:03 UTC

Reply Score: 2

RE[4]: Priorities.
by Alfman on Thu 22nd Sep 2016 17:32 UTC in reply to "RE[3]: Priorities."
Alfman Member since:
2011-01-28

ahferroin7,

I won't dispute that MD and LVM based arrays often use more processor time, but I will comment that:
1. In quite a few systems I've seen, this extra processor time actually results in less power consumption than using the hardware RAID controller (this includes a number of server systems with 'good' RAID HBA's we have where I work).


That's probably true. The dedicated HW-RAID will need energy for the controller, battery charging and cache circuits, so the energy per unit time must be higher. Since the HW-RAID can get work done faster, it's less clear to me that the energy per unit of work is necessarily worse. That's an interesting question.

For the power management stuff:
With SCSI drives the issue is that not all of them even support power management. Most of the FC based ones do, but you can still find SAS drives without too much difficulty that have poor power management support.


Interesting; I can't even find the sleep command in a SCSI command reference manual for SAS. Not that this matters at all in consumer computers. As a side note, it would be foolish of Lenovo to block Linux on enterprise computers, where Linux is popular.
http://www.seagate.com/staticfiles/support/disc/manuals/Interface~*...



With SATA drives, things get more complicated. There's roughly 3 relevant standards, and most drives only support a subset of them. Many good drives these days support APM based power management, but those that do don't always implement it correctly. Most desktop drives don't support Link State Power Management (the standard that lets the controller and drive power down the link when idle), but most laptop ones do, and it's hit or miss with enterprise drives. Some support AAM, which isn't actually power management but can be used in a hackish way for it, but those that do don't support using it as the same time as APM usually.


APM/ACPI are host APIs; a storage device wouldn't care about them at all. Rather, the relevant standard in terms of device compatibility is the Serial ATA command spec. I guess there could be exceptions, but to my knowledge all SATA devices implement the same sleep commands.

Power Management Section 5.3.4
http://www.intel.com/content/www/us/en/io/serial-ata/serial-ata-ahc...
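
(Relatedly, Linux exposes the SATA link power management policy per host through sysfs - a sketch, assuming an AHCI controller showing up as host0:)

cat /sys/class/scsi_host/host0/link_power_management_policy
echo min_power > /sys/class/scsi_host/host0/link_power_management_policy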


In a more abstract sense, part of the issue is with Linux, but it's also an issue with other OS'es, just to a lesser degree. Linux doesn't handle nested power management very well on x86. If any of your drives can't enter a low power state, it blocks the controller from doing so on Linux. In some special cases, it's possible to have the drive remain active but idle while the controller or even just the PCIe link enters a low power state, but Linux doesn't do this.


I've definitely encountered PM issues with APM/ACPI (sometimes even with Windows), but these are incompatibilities in the host, not the devices. I'd be very surprised to see a compliant, non-faulty SATA device be incompatible with Linux. If you have any evidence to the contrary, of course I'll look at it, but far more likely is that the incompatibility lies between Linux and the host controller.

In any case, even if this is all true, I still don't see an argument for Lenovo disabling standard AHCI - a compliant AHCI controller should work equally well regardless of the vendor, which seems to be confirmed by the user who hacked it back in.

Edited 2016-09-22 17:43 UTC

Reply Score: 2

RE[5]: Priorities.
by ahferroin7 on Thu 22nd Sep 2016 19:08 UTC in reply to "RE[4]: Priorities."
ahferroin7 Member since:
2015-10-30

ahferroin7,
...
That's probably true. The dedicated HW-raid will need energy for the controller, battery charging & cache circuits. So the energy per unit time must be higher. Since the HW-raid can get work done faster, it's less clear to me that the energy per unit work is necessarily worse. That's an interesting question.

It really depends on a bunch of factors. In most of the systems I've seen, it usually is more energy efficient on the CPU (I've tested power requirements on some systems running the same things both ways), but I'm not entirely sure why myself. I've also seen systems where it isn't any better, and some where it's worse; it's just that on most I've seen, it is more energy efficient to use the CPU instead of the HBA for the processing.

It's probably worth mentioning though that hardware 'acceleration' is not always faster, or even more efficient, than just running it directly in software (If you want, I have all kinds of other examples, but they're not really related to the discussion at hand other than as anecdotes reinforcing this point).

Interesting, I can't even find the sleep command in a scsi command reference manual for SAS. Not that this matters at all in consumer computers. As a side note, it would be foolish of Lenovo to block Linux in enterprise computers where linux is popular.
http://www.seagate.com/staticfiles/support/disc/manuals/Interface~*...

I'm not certain if there is one in the SAS spec specifically, but I'm pretty sure there are some PM control commands in the regular SCSI command sets used on most block storage devices. It may be a de-facto standard vendor specific command though.

APM/ACPI are host APIs, a storage device wouldn't care about it at all. Rather the relevant standard in terms of device compatibility is the Serial ATA command spec. I guess there could be exceptions, but to my knowledge all SATA devices implement the same sleep commands.

Power Management Section 5.3.4
http://www.intel.com/content/www/us/en/io/serial-ata/serial-ata-ahc...

I'm not talking about the sleep commands, I'm talking about the stuff that gets reported by almost all tools as the 'APM level'. It's a one-byte firmware setting that is supposed to adjust power consumption (and actually controls whether or not the sleep commands are even honored), which is part of the ATA spec, not the AHCI spec (AHCI is an HBA standard; ATA is the actual communications protocol between the HBA and the device).


I've definitely encountered PM issues with APM/ACPI (sometimes even with windows), but these are incompatibilities in the host, not the devices. I'd be very surprised to see a compliant/non-faulty SATA device be incompatible with linux. If you have any evidence to the contrary, of course I'll look at it but far more likely is that the incompatibility lies between linux and the host controller.

I don't know about SATA disks not working with Linux, but there are all kinds of removable devices that have issues (almost nothing that costs less than about 15 USD is actually USB compliant, for example, except for cables), and I know of quite a few storage controllers that cause issues.

In any case, even if this is all true, I still don't see an argument for Lenovo disabling standard AHCI - a compliant AHCI controller should work equally well regardless of the vendor, which seems to be confirmed by the user who hacked it back in.

I don't understand why they might completely disable it, but I can understand why they may not want to use it. AHCI is not particularly efficient, and it's usually not MP safe (which hurts efficiency even more on most modern systems). If you take the same set of SATA disks and test performance on an AHCI controller and a good SAS HBA, they will almost always get better performance on the SAS HBA. In the same manner, Intel's 'RAID' mode for their SATA controllers is usually more efficient if you can use it.

Reply Score: 1

RE[6]: Priorities.
by Alfman on Thu 22nd Sep 2016 21:18 UTC in reply to "RE[5]: Priorities."
Alfman Member since:
2011-01-28

ahferroin,

It's probably worth mentioning though that hardware 'acceleration' is not always faster, or even more efficient, than just running it directly in software (If you want, I have all kinds of other examples, but they're not really related to the discussion at hand other than as anecdotes reinforcing this point).


Obviously it depends on the hardware and use cases. Just to be clear, I categorically rule out the hardware-assist RAIDs, which I wouldn't advise anyone to use.

The thing is, there are bottlenecks that hardware RAID can solve which software RAID cannot. Consider databases that have to wait for I/O confirmation before committing a transaction, or servers running many VMs concurrently. Data quickly fills the queue and software RAID can't do anything but wait, whereas an enterprise HW-RAID with a BBU write-back cache can acknowledge the write instantly - even faster than an SSD. Another related benefit is that HW-RAID can perform RAID-6 at line speed for burst transactions. Software RAID-6 is often painfully slow because it has to wait on several read/write operations across multiple disks in order to compute the XOR patterns before committing. This is a fundamental bottleneck for software RAID no matter how fast the CPU. Enabling write-back caching with software RAID is unsafe and potentially breaks consistency guarantees at the FS and DB levels.


That said, I think software RAID is fine for most people building a home NAS or something like that. Also, SSDs could expose bottlenecks in an HW-RAID that don't exist in software RAID, and there could be issues with HW-RAID not supporting TRIM. So I think we can agree that there's no definitive answer; it will depend on the hardware and use cases.

I'm not talking about the sleep commands, I'm talking about the stuff that gets reported by almost all tools as 'APM level'. It's a one byte firmware setting that is supposed to adjust power consumption (and actually controls whether or not the sleep commands are even honored), which is part of the ATA spec, not the AHCI spec (AHCI is a HBA,standard, ATA is the actual communications protocol between the HBA and device).


Alright but that's not a problem with the "drives", that's a problem with the firmware, which is something that Lenovo clearly has control over.




If you take the same set of SATA disks and test performance on an AHCI controller and a good SAS HBA, they will almost always get better performance on the SAS HBA.


I had trouble finding benchmarks for this even though there seem to be people posing the question ( http://serverfault.com/questions/297759/anyone-seen-a-meaningful-sa... ). Maybe I'll try to benchmark it myself. However this still does not sound like a good reason to disable AHCI.
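
(If I do, fio makes a reasonable harness - a sketch, assuming /dev/sdX is a scratch disk whose contents can be destroyed:)

fio --name=randread --filename=/dev/sdX --ioengine=libaio --direct=1 \
    --rw=randread --bs=4k --iodepth=32 --runtime=60 --time_based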

Reply Score: 2

RE[7]: Priorities.
by ahferroin7 on Fri 23rd Sep 2016 11:50 UTC in reply to "RE[6]: Priorities."
ahferroin7 Member since:
2015-10-30

Alright but that's not a problem with the "drives", that's a problem with the firmware, which is something that Lenovo clearly has control over.

I mean a one-byte setting in the drive firmware, which Lenovo and most other OEMs have essentially zero control over (the firmware, not the setting).


I had trouble finding benchmarks for this even though there seem to be people posing the question ( http://serverfault.com/questions/297759/anyone-seen-a-meaningful-sa... ). Maybe I'll try to benchmark it myself. However this still does not sound like a good reason to disable AHCI.

I don't have any solid numbers myself since I don't really have the means to properly benchmark this, but we see measurably improved throughput on the file servers we run this way at work compared to just using a regular SATA controller.

As far as performance being a reason to disable AHCI mode, I agree it's not an excuse for completely disabling access to it, but it is a valid reason not to use it by default. Dell, for example, has been shipping systems with RAID mode as the default for years now, including laptops with only one drive bay, for exactly this reason.

Reply Score: 2

RE[6]: Priorities.
by dionicio on Sun 25th Sep 2016 16:30 UTC in reply to "RE[5]: Priorities."
dionicio Member since:
2006-07-12

"(almost nothing that costs less than about 15USD is actually USB compliant for example, except for cables)".

Not to forget 'embedded' DRM issues. To play it safe I have to scale back to USB2. Better success rate with e-Sata, but that a bit more cable&energy messy.

Reply Score: 2

RE: Priorities.
by Alfman on Thu 22nd Sep 2016 14:38 UTC in reply to "Priorities."
Alfman Member since:
2011-01-28

Brendan,

* There is no standard "RAID controller" hardware interface specification that OSs can support. This means that (unlike AHCI where there is a standard/specification) every different RAID controller needs yet another pointlessly different driver that's been specifically written for it.


It's not clear to me whether this is a real HW-RAID or just the soft-RAID that sometimes comes on cheap boards. But for actual HW-RAID, it's not necessary for the OS to see/access a RAID array differently than a normal disk. Consider the existence of so-called driverless RAID controllers that are exposed through a standard AHCI interface:

http://www.span.com/product/SATA-RAID-Bridge-SPM394-for-5x-SATA-HD-...

There's no technical reason that all hardware RAIDs couldn't do this and then provide an API (or BIOS) just for configuring the RAID. I really wish this were how Dell's PERC RAIDs worked, since it would make using them easier, but alas, proprietary drivers are the norm.
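
(Incidentally, you can see which mode a controller is in from Linux without any vendor tools - a sketch:)

lspci -nn | grep -iE 'sata|raid'
# "SATA controller ... AHCI" = AHCI mode; "RAID bus controller" = Intel RST RAID mode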

* Apparently, AHCI has power management limitations that should be fixed (?). There should be no reason to require RAID just to get power management to work, and power management should work for all AHCI controllers without special drivers.


My SATA and even IDE drives can enter power saving modes just fine, so I'd like to understand what problem was big enough to justify AHCI being forcefully disabled. If there was a bug in Lenovo's implementation, then it should have been fixed.
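
(Those power saving modes are easy to poke at from Linux, which makes the "AHCI breaks power management" claim all the stranger - a sketch:)

hdparm -C /dev/sda    # report the current power mode (active/idle, standby, sleeping)
hdparm -y /dev/sda    # put the drive into standby immediately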

Reply Score: 4

What's behind this?
by MrHood on Thu 22nd Sep 2016 06:28 UTC
MrHood
Member since:
2014-12-02

I can attest to Intel's absolutely terrible Linux drivers and power management.


And I thought Intel was Linux-friendly!

The only idea I can come up with, then: would they prefer us to use their (cough) commercial (cough) Linux distribution exclusively, on their hardware?

What do you all think about this?

Reply Score: 2

RE: What's behind this?
by ahferroin7 on Thu 22nd Sep 2016 15:33 UTC in reply to "What's behind this?"
ahferroin7 Member since:
2015-10-30

Intel is extremely Linux friendly compared to most hardware vendors. The issues with running Linux on a MBP are Apple issues, not Intel issues. Newer Apple hardware (since the switch to EFI and x86) is notoriously hard to get Linux working correctly on.

As far as the Lenovo stuff, I'm dubious of that too. I've got a Thinkpad L560 I bought less than a year ago that I run Linux on daily, and I have had absolutely zero issues with it (at least, zero Linux-related ones; I've had a couple of problems with Windows, and with the BIOS, but that's a separate problem). I actually get better performance and battery life on it using Linux than running the same workloads under the Windows 10 Pro it came with, and the system usually runs cooler too.

Reply Score: 2

RE[2]: What's behind this?
by dionicio on Thu 22nd Sep 2016 17:49 UTC in reply to "RE: What's behind this?"
dionicio Member since:
2006-07-12

Thanks for the hard-lining, ahferroin7. I have not used the new graphics engines from Intel. And the low-end ones don't give an uncompetitive energy saving under Linux.

Reply Score: 2

I totally disagree
by unclefester on Thu 22nd Sep 2016 06:31 UTC
unclefester
Member since:
2007-01-13

I bought a Lenovo Ideapad 100s netbook a few months ago. The bootloader was locked. I'm 99.99% sure that MS is responsible for the chicanery because they know many people would simply install Linux.

Intel's hardware is usually known for exceptional compatibility with Linux (particularly laptops). Hardware RAID is a non-issue for 99.9% of home users.

Reply Score: 2

ridiculed
by unclefester on Thu 22nd Sep 2016 07:03 UTC
unclefester
Member since:
2007-01-13

Lenovo make single-HD consumer hardware which can't use RAID. So the entire premise of the original article is complete nonsense.

Edited 2016-09-22 07:04 UTC

Reply Score: 4

RE: ridiculed
by Alfman on Thu 22nd Sep 2016 14:12 UTC in reply to "ridiculed"
Alfman Member since:
2011-01-28

unclefester,

Lenovo make single HD consumer hardware which can't use RAID. So the entire premise of the original article is complete nonsense.


From what I understand, even systems that don't use multi-disk "RAID" are still being forced to use the RAID controller, and Lenovo has modified the firmware to prevent it from being disabled.

Reply Score: 3

2015 Macbook Pro Retina..
by Darkmage on Thu 22nd Sep 2016 07:31 UTC
Darkmage
Member since:
2006-10-20

I've got a late 2015 MacBook Pro Retina 13" here. I don't have power management issues. However, there is a nasty bug in the Intel wifi chipset where the 2.4GHz frequencies interact with the video chipset, making the screen flash. The only fix is to switch to the 5GHz band. OS X and Windows have the same bug. Intel needs to step up their driver game on the storage controller/wireless side. I think Intel should provide more doco; their programmers are great, but they seem to have an Intel way instead of a Linux way.

Reply Score: 3

Thom, please read this
by p13. on Thu 22nd Sep 2016 07:44 UTC
p13.
Member since:
2005-07-10

A word of warning about running linux on macs.

MAKE SURE that you have mbpfanctl (or macfanctl, etc.) running.
The SMC does NOT take care of the system on its own in the way that it should.

I've run Linux on my 2010 Mac Pro since pretty much the day I got it, and it's very easy to run it up to a Tcase of 80+ degrees without the fans doing much at all. I don't dare take it any further than that.

Reply Score: 5

Lenovo B-series w/o Windows pre-installed
by crystall on Thu 22nd Sep 2016 10:23 UTC
crystall
Member since:
2007-02-06

Ironically enough, I often recommend buying Lenovo laptops from their B-series, because they're the only models I can find in Europe without Windows pre-installed. Without the Windows license you can have a decent entry-level laptop for less than 300€.

Reply Score: 3

Really?
by ahferroin7 on Thu 22nd Sep 2016 13:02 UTC
ahferroin7
Member since:
2015-10-30

So, Intel supposedly has horrible support for Linux...

I find this claim, combined with your complaints, rather interesting.

Regarding energy efficiency, I've actually never seen any issues with it on Linux. Every x86 system I've ever had (both Intel and AMD) has gotten better energy efficiency under Linux than under Windows running the same workload. On the Thinkpad L560 I use daily, I get 8 hours of battery life on average when using Linux (the best I've ever gotten on this system was almost 15, but it was mostly idle), as compared to about 5 on Windows 10 when doing essentially the exact same thing (in comparison, the best on Windows was about 7, but it was also mostly idle). This of course requires a small amount of effort, but at least Linux lets you put in the effort, and even without doing so, I get equivalent battery times on Linux and Windows on this system.
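
(The "small amount of effort" mostly amounts to running something like PowerTOP - a sketch, assuming the powertop package is installed:)

sudo powertop --html=report.html   # generate a report of tunables and power statistics
sudo powertop --auto-tune          # apply all of PowerTOP's suggested tweaks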

As for the 'RAID' operational mode for their SATA controllers, it's crap to begin with. You can do the same things from Linux (or Windows, if you use Storage Spaces), and they will almost always run more efficiently. Intel has near zero support for it in Linux not because they don't care about Linux, but because the functionality they're trying to provide with it is already provided in a more configurable, more reliable, and often more efficient manner by existing tools available on Linux. All the fake RAID stuff originated because Windows provided no sane volume management, and it is still around because Windows still doesn't really have good volume management (it's possible with Storage Spaces, but it's not easy).

As far as the GPU drivers, that's easy: Intel's GPU support in Linux is a bit crappy, but it's not anywhere near as bad as AMD's or NVIDIA's. I can't comment on the Iris GPUs, but for the traditional HD Graphics branded ones, I actually get better performance on Linux on both my i5-6200 based Thinkpad and my Xeon 3-12xx-v4 based workstation which I use as a desktop, especially for 3D stuff (on both systems, I get 10-20 fps better frame rates in OpenGL performance tests under Linux than I do on Windows).

On top of all of that, did you know that Intel actually develops most of their Linux drivers upstream in the mainline Linux kernel? They actually have pretty amazing levels of support compared to many ARM SoCs and a lot of other platforms, although a lot of the default PM options are somewhat poor choices (they're targeted towards servers, which makes some sense, but is still annoying). If you make sure the correct options are set (and in most distros they are), you should have near zero issues getting decent performance and energy efficiency out of Linux on most Windows OEM systems.

Now, as to your specific case: as other posters have mentioned, MBPs have crap Linux support, but it's more of an issue with Apple than Intel. Their EFI implementation is broken, and they have an insane number of odd ACPI and SMBIOS bits that only work with OS X. If you boot Linux on one of them and then boot it on an equivalent Windows system and look at the hardware on both, you'll see that about the only similarities are the CPU, the PCH, the RAM, and some of the very minimalistic bits of firmware that Apple can't make proprietary. They also design the OS and hardware in concert with each other, so they can do pretty much whatever they want and it will work fine for their software. When you're buying a Mac, most of what you're paying for (other than the brand and the customer support) is that integration between the hardware and software, which is something no PC manufacturer can do, but it also means that other software doesn't run as well on that system. Poor performance of Linux on a MBP isn't an indication that Intel has poor support for it; it's an indication that Apple has no support for it, and the only reason it runs at all is that Intel has above-average support for Linux.

Reply Score: 3

So original..
by kurkosdr on Thu 22nd Sep 2016 17:03 UTC
kurkosdr
Member since:
2011-04-11

As someone who tried to move his retina MacBook Pro to Linux only a few weeks ago - I can attest to Intel's absolutely terrible Linux drivers and power management. My retina MacBook Pro has an Intel Iris 6100 graphics chip, and the driver for it is so incredibly bad that even playing a simple video will cause the laptop to become so hot I was too scared to leave it running.


Yet another Linuxero blaming GPU vendors for crap desktop Linux graphics. No sir, it is not the fact that X.org is a horrible piece of software which has stayed mostly the same at its core since the mid-90s, while Microsoft and Apple have gone through multiple graphics and windowing subsystems since then.

No sir, those GPU vendors should devote time and resources to hack around X.org just to get that lucrative 1-2% of the market. Oh wait, there is such a vendor: Nvidia.

zOMG their drivers aren't FOSS!!!111 You see, those bastards at Nvidia had the nerve to keep secret the drivers they paid dearly for, employing full-time developers to hack around X.org and have the only working desktop Linux GPU drivers in existence. They must devote full-time developers to support our little 1-2%er operating system and its broken X.org AND release the code for everyone to see.

PS: Of course, Nvidia wrote those GPU drivers because they have lucrative contracts with rendering houses using Linux (the price of paying Nvidia to write drivers for it was less than the cost of buying Windows), not for regular users. But you see, Nvidia made the mistake of releasing the software to regular users, instead of giving them the usual broken open-source driver that Intel and AMD give to regular users. Now they are the most hated GPU vendor by the Desktop Linux communitah...

tl;dr Desktop Linux does not deserve good GPU drivers, and neither does its community.

Edited 2016-09-22 17:09 UTC

Reply Score: 1

RE: So original..
by ahferroin7 on Thu 22nd Sep 2016 19:18 UTC in reply to "So original.."
ahferroin7 Member since:
2015-10-30

Really? NVIDIA drivers get good performance? That's odd, because I get better performance on the Quadro K620 in my desktop just using it as a framebuffer (no NVIDIA drivers, no nouveau, nothing but the regular framebuffer driver for NVIDIA cards) and doing all the work in software on the CPU than I do running the official NVIDIA drivers. I get even better performance just ditching the Quadro and using the cheap integrated GPU on the Xeon E3-1765 v3 CPU I have, and that's with FOSS drivers officially supported by the upstream vendor, which only provide an ancient version of OpenGL and happen to work perfectly fine on the most recent mainline kernel the moment it gets a new release.

I will not dispute that X is a stinking pile of excrement, but that's not by any means the only issue. The biggest problem has nothing to do with X, and is the fact that none of the hardware vendors are willing to pull their heads out of their arses and realize that people do use Linux outside of pre-built embedded systems and servers.

Reply Score: 1

RE[2]: So original..
by kurkosdr on Fri 23rd Sep 2016 01:23 UTC in reply to "RE: So original.."
kurkosdr Member since:
2011-04-11

I will not dispute that X is a stinking pile of excrement, but that's not by any means the only issue. The biggest problem has nothing to do with X, and is the fact that none of the hardware vendors are willing to pull their heads out of their arses and realize that people do use Linux outside of pre-built embedded systems and servers.


Yeah, those vendors have their heads in their asses for not spending truckloads of money to support OSes with horribly broken, "stinking pile of excrement" graphics subsystems like X.org, which are also 1-2% of the market.

Seriously dude, get real. A vendor -any vendor- will support an OS if it's popular, or if the OS makes it easy for the vendor to support it. A vendor will never properly support an OS that is both unpopular and hard to support. It's like someone asking you to code in a very difficult programming language you'd have to learn, only for some small contract job that won't pay well. You are not going to do it if you are a professional software developer. Right?

I am aware that Linuxeros have wet dreams of Intel, AMD and Nvidia spending lots of man-hours to make good desktop Linux drivers, working around X.org's flaws and such, just to offer them to that 1-2% of desktop Linux users, but it ain't gonna happen when they can devote those man-hours to Windows (and maybe OS X), which bring in 98% of the company's income.

Intel, AMD and Nvidia are not charities. I repeat, not charities. Not supporting desktop Linux and its X.org is not a case of "having their heads in their asses"; it is a case of making decisions that are sound from a business perspective. Just like not taking a programming job that is both hard and low-paid makes sense for a professional developer.

Don't like this? Get an OGD-1 board or code your own drivers.

PS: I love it when Linuxeros ask "what keeps you on Windows?" and some time afterwards say "I wish desktop Linux had graphics drivers as good as Windows'". The fact that Microsoft paid full-time developers very handsomely to develop WDDM, making those GPU drivers possible (and that is one of the reasons Windows costs money), never crosses their minds (other reasons Windows costs money are not having a bad audio subsystem like PulseAudio or ALSA, and not being infested with something like systemd)

Edited 2016-09-23 01:35 UTC

Reply Score: 0

RE[3]: So original..
by Alfman on Fri 23rd Sep 2016 01:51 UTC in reply to "RE[2]: So original.."
Alfman Member since:
2011-01-28

kurkosdr,

Seriously dude, get real. A vendor -any vendor- will support an OS if it's popular or if the OS makes it easy for the vendor to support it. I know you Linuxeros have wet dreams of Intel, AMD and Nvidia spending man-hours to make good desktop Linux drivers, working around X.org's flaws and such, just to get that 1-2% of desktop Linux users, but it ain't gonna happen when they can devote those man-hours to Windows (and maybe OS X) which bring 98% of the income.


Actually, in all seriousness, Linux/other-OS devs are perfectly willing to do the work themselves; the main issue is the lack of h/w specs. FOSS devs are forced to resort to reverse engineering and trial and error.

I don't know if you remember this, but a long time ago it was quite normal for hardware to come with full schematics, bus layouts, instruction cycles, etc. Programmers needed to access the hardware directly and manufacturers wanted to encourage them to support their hardware. Oh how times have changed.


Methinks there's something else behind all this linux rage you have. Oh well, if you don't like it, don't use it. Problem solved! ;)

Reply Score: 5

RE[4]: So original..
by kurkosdr on Fri 23rd Sep 2016 17:49 UTC in reply to "RE[3]: So original.."
kurkosdr Member since:
2011-04-11


Actually, in all seriousness, Linux/other-OS devs are perfectly willing to do the work themselves; the main issue is the lack of h/w specs. FOSS devs are forced to resort to reverse engineering and trial and error.

I don't know if you remember this, but a long time ago it was quite normal for hardware to come with full schematics, bus layouts, instruction cycles, etc. Programmers needed to access the hardware directly and manufacturers wanted to encourage them to support their hardware. Oh how times have changed.


Why should companies expose trade secrets to a competitor by giving out specs? Sure, it might help some FOSS coders improve the drivers, maybe, but at the cost of risking leaking trade secrets, which can be bad if the company has a design that beats the competition and paid lots of money for it. This is the exact reason why hardware vendors stopped giving out specs.

Again, those companies are not charities. They do not have their heads up their asses; they just make rational choices instead of pandering to 1-2%ers who run an OS with frickin' X.org as its graphics subsystem (X.org, the graphics subsystem of the 80s, still clanking along in 2016, really?). Not pandering to eccentricity is not a case of having your head up your ass, and no amount of whining will change that fact. Deal with it.

Methinks there's something else behind all this linux rage you have. Oh well, if you don't like it, don't use it. Problem solved! ;)


Yes, but I am sick and tired of being asked "what keeps you on Windows" for the billionth time, literally minutes after (or before) the same person laments the state of desktop Linux graphics and the suckiness of X.org. And I am sick and tired of the endless flaming of GPU vendors in an attempt to shift the blame. STOP IT!

Edited 2016-09-23 17:52 UTC

Reply Score: 1

RE[5]: So original..
by Alfman on Sat 24th Sep 2016 02:03 UTC in reply to "RE[4]: So original.."
Alfman Member since:
2011-01-28

kurkosdr,

Why should companies expose trade secrets to a competitor by giving out specs? Sure, it might help some FOSS coders improve the drivers, maybe, but at the cost of risking leaking trade secrets, which can be bad if the company has a design that beats the competition and paid lots of money for it. This is the exact reason why hardware vendors stopped giving out specs.

Again, those companies are not charities. They do not have their heads up their asses; they just make rational choices ...



To me, this is very much like the "tragedy of the commons", which describes a scenario where individuals have an incentive to exploit a shared resource for themselves, but when the group does it collectively, the group as a whole ends up much worse off. Modern IP policy can be classified as that kind of problem. Through misguided greed, manufacturers end up reinventing the wheel over and over again, which ultimately costs more, wastes resources, and hurts consumers.


Not pandering to eccentrism is not a case of having your head up your ass, and no amount of whining will change that fact. Deal with it.



Take a chill. It's not whining, it's philosophy ;)


Yes, but I am sick and tired of being asked "what keeps you to windows" for the billionth time, literally minutes after (or before) the same person lamented the state of Desktop Linux graphics and the suckiness of X.org. And I am sick and tired of the endless flaming of GPU vendors in an attempt to shift the blame. STOP IT!


Are you referring to me? This seems completely disproportionate to anything anyone has said.

Edited 2016-09-24 02:07 UTC

Reply Score: 3

RE[3]: So original..
by dionicio on Sun 25th Sep 2016 16:38 UTC in reply to "RE[2]: So original.."
dionicio Member since:
2006-07-12

Simply not the truth, at least for server MBs, kurkosdr. They do care, and for reasons not related to 'charity'.

Reply Score: 2

The moral of the story? Buy AMD
by bassbeast on Sun 25th Sep 2016 00:56 UTC
bassbeast
Member since:
2007-11-11

AMD has been opening up their docs as fast as the lawyers can sign off on them; they have paid the salaries of several devs to help speed up the advancement of their OSS drivers (and have stated that their goal is to get rid of the proprietary driver in favor of the open one), and as you can see from the link below, they are spending a ton of money making open tools to help the community.

http://developer.amd.com/tools-and-sdks/open-source/

Reply Score: 2