Linked by Hadrien Grasland on Wed 4th Apr 2012 06:45 UTC
Fedora Core "The ARM architecture is growing in popularity and is expected to expand its reach beyond the mobile and 'small embedded' device space that it currently occupies. Over the next few years, we are likely to see ARM servers and, potentially, desktops. Fedora has had at least some ARM support for the last few years, but always as a secondary architecture, which meant that the support lagged that of the two primary architectures (32 and 64-bit x86) of the distribution. Recently, though, there has been discussion of 'elevating' ARM to a primary architecture, but, so far, there is lots of resistance to a move like that."
Thread beginning with comment 512836
RE[5]: Comment by EvaTheDog
by d3vi1 on Wed 4th Apr 2012 16:47 UTC in reply to "RE[4]: Comment by EvaTheDog"
d3vi1
Member since:
2006-01-28

Would it be feasible to have a shim layer for each ARM variant, but use one ARM kernel?


Yes, it's called firmware. It's not only about the platform configuration tables (ACPI and others, which 99% of existing ARM implementations don't actually offer). It's also about having a scannable bus architecture for discovering devices (like the PCI bus on most systems), which would allow starting the required drivers (compiled into the kernel). That is what Plug and Play offered. Before Plug and Play appeared, you had to know what devices you had and what settings they used (I/O ports, IRQs, DMA channels, etc.). After Plug and Play, you only had to provide the driver; discovery and basic configuration were done by the OS or firmware and the bus.
My point is that most ARM implementations are missing the plug-and-play part that normal PCs have always offered. If, upon booting, the Linux kernel doesn't find a VGA-class PCI device, it won't display anything, even if there is a memory range available as a frame buffer. How can Linux know about that?
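(For what it's worth, this is exactly the problem the ARM Device Tree work is meant to solve: the boot loader hands the kernel a static description of hardware that can't be discovered by scanning a bus. A hypothetical fragment describing such a frame buffer might look like the following; the addresses and resolution are made up, and the exact node layout is only a sketch of the "simple-framebuffer" style of binding:)

```
framebuffer@1c000000 {
    compatible = "simple-framebuffer";
    reg = <0x1c000000 0x800000>;  /* physical base address and size */
    width = <1024>;
    height = <768>;
    stride = <2048>;              /* bytes per scanline */
    format = "r5g6b5";
};
```

With a description like that in hand, the kernel knows where the frame buffer lives without probing for a VGA-class PCI device.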


Now, regarding Secure Boot in UEFI, there are three things to look at:
1) UEFI itself. A good thing, as it might allow a single Linux kernel for all hardware.
2) The secure boot locking. This is what everyone is complaining about: Microsoft requires that a user can't change the keys used for booting a system. It's a bad thing.
3) The secure boot keys. Microsoft does not require that the locked keys be only the ones that they provide, so, in theory, an OEM could include keys from Red Hat or Ubuntu. It's certainly incompatible with a GPLv3 boot loader, but it would still work with any other license, both technically and legally. It might work for us if we pressured the vendors to include a single CA (an OSS CA). Then again, that defeats the purpose of the Secure Boot initiative, since a hacker could also get code signed by it.

Reply Parent Score: 2

RE[6]: Comment by EvaTheDog
by Alfman on Wed 4th Apr 2012 18:43 in reply to "RE[5]: Comment by EvaTheDog"
Alfman Member since:
2011-01-28

d3vi1,

"Yes, it's called a firmware. It's not only about the platform configuration tables (ACPI & others that 99% of existing ARM implementations don't actually offer)."

Well, no, actually I meant a shim layer that could be installed alongside the kernel on devices which specifically lack a firmware (or have a custom one). If this is technically feasible (I don't know why it wouldn't be), it would offer a means to standardize the kernel itself without worrying about mainboard/firmware variations. The shim might work something like a Linux bootloader that's specific to the ARM device it's running on.

The benefit of this would be to eliminate the nuisance of compiling a whole custom kernel for every device.

"After plug and play, you only had to provide the driver, the discovery and basic configuration was done by the OS or firmware and the Bus."

With a standard x86 PCI bus, the devices themselves all implement the same PNP protocol, which is not at all specific to the mainboard, and it's not overly complex. If the OS wanted to, it could perform its own PCI PNP scan without any help from the mainboard firmware.

I suspect ARM's system initialization sequence is similar to x86's (but I want to know if I'm wrong about this). It might require mapping DIMM modules to physical addresses (if they're not already hard-coded?), enabling the "chip enable" lines for the mainboard devices, and providing a RAM map for the OS; beyond that, the OS can do everything, including PNP and device initialization with drivers.


"My point is that most ARM implementations actually are missing that plug and play part that normal PCs always offer. If upon booting the Linux kernel doesn't find a VGA class PCI device it will not display anything, even if there is a certain memory range available as a frame buffer. How can Linux know about that?"

Well, I'd hope that ARM devices got rid of these legacy aspects, but on x86 the PCI video hardware is hard-coded to watch the bus for certain physical addresses (I'm talking, of course, about the legacy ranges at 0xA0000 and 0xB0000, in addition to the PNP mappings). Most mainboards now have integrated video, but in principle the mainboard is not responsible for video initialization; rather, the video card's own ROM is.

I like this tangential conversation topic, but to go back to what I was suggesting: can't all device-specific knowledge be packaged into a shim layer that sits between the proprietary, hard-coded firmware (or lack thereof) and a standard, universal Linux binary for ARM?



"Now regarding Secure Booting in UEFI..."

I'm not sure if you've read my take on it already or not, but yeah, I think it was designed as a mechanism to restrict the owner's own control over the machine. It's a scam that the owner can't control the chain of trust.

Edited 2012-04-04 18:56 UTC

Reply Parent Score: 2