“The ARM architecture is growing in popularity and is expected to expand its reach beyond the mobile and ‘small embedded’ device space that it currently occupies. Over the next few years, we are likely to see ARM servers and, potentially, desktops. Fedora has had at least some ARM support for the last few years, but always as a secondary architecture, which meant that the support lagged that of the two primary architectures (32 and 64-bit x86) of the distribution. Recently, though, there has been discussion of ‘elevating’ ARM to a primary architecture, but, so far, there is lots of resistance to a move like that.”
Easy decision for me. If they make ARM the primary architecture, then I switch to a different distribution. Sitting two or three releases behind for a stable release is just not an option for me.
Whoever suggested this change needs their head examined. http://tinyurl.com/pxrfyp
Looking at the article, I don’t think the discussion is whether to make ARM the primary architecture, but a primary architecture.
Fedora already officially has two primary-level architectures (x86 and x86_64) according to http://fedoraproject.org/wiki/Architectures so this would just be adding a third.
Edited 2012-04-04 08:11 UTC
And a fourth. And soon enough a fifth. You have ARMv5 and ARMv7, and you should soon enough have the new 64-bit ARMv8. This complicates things a bit. Furthermore, it’s hard to support a distribution for ARM given that there are no generic ARM systems available yet.
Every configuration for almost every ARM system has to be hardcoded at compile time, and if you’re planning to support 100 different ARM systems you might find out that you need 100 kernels.
Linux needs to auto-detect the hardware available to it, such as frame buffers, I/O devices, etc. Most ARM systems out there (mobile phones, for example) don’t have any auto-detection features and only work with hardcoded defaults, and this just won’t work.
Future ARM systems will work just fine with Fedora, as they will be closer to following a standard: they will use UEFI as firmware, they will have a standard PCI bus, etc.
Right now I don’t think that Fedora on ARM is doable but I applaud their attempt to make it happen.
Agree. They need to figure out how to reduce the compile time for the ARM ports. The article says it takes over a day, compared to an hour and a half for x86. By the time they solve that problem, I imagine a standardized ARM architecture will have emerged. I just hope all of the systems are not secure boot locked out of booting anything other than win 8.
Bill Shooter of Bul,
“I just hope all of the systems are not secure boot locked out of booting anything other than win 8.”
One step forward, a hundred steps back.
I know there’s no BIOS standard for ARM platforms, but does anyone know just how different the ARM variants are in practice? Is it just a matter of missing the platform configuration tables to initialize the system, or are things really so incompatible in the kernel/drivers that we’d need dozens of different kernels?
Would it be feasible to have a shim layer for each ARM variant, but use one ARM kernel?
Yes, it’s called a firmware. It’s not only about the platform configuration tables (ACPI & others that 99% of existing ARM implementations don’t actually offer). It’s also about having a scannable bus architecture for discovering devices (like a PCI bus on most systems), which would allow the starting of the required drivers (compiled into the kernel). This is what Plug and Play really offered. Until Plug and Play appeared, YOU had to know what devices you had and what settings they used (I/O ports, IRQ, DMA, etc.). After plug and play, you only had to provide the driver, the discovery and basic configuration was done by the OS or firmware and the Bus.
My point is that most ARM implementations actually are missing that plug and play part that normal PCs always offer. If upon booting the Linux kernel doesn’t find a VGA class PCI device it will not display anything, even if there is a certain memory range available as a frame buffer. How can Linux know about that?
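(For what it’s worth, the direction the ARM kernel developers are taking for exactly this problem is the flattened device tree: a small board description handed to one generic kernel at boot, instead of hardcoding addresses in per-board code. A hypothetical fragment — the board name, addresses, and sizes below are made up for illustration — could describe such a frame buffer like this:)

```dts
/dts-v1/;

/ {
	model = "Hypothetical ARM board";
	compatible = "example,hypothetical-board";

	/* Instead of hardcoding the address in the kernel, the board
	 * description tells the kernel where the frame buffer lives. */
	framebuffer@80000000 {
		compatible = "simple-framebuffer";
		reg = <0x80000000 0x300000>;	/* base address, length */
		width = <1024>;
		height = <768>;
		stride = <2048>;		/* bytes per scanline */
		format = "r5g6b5";
	};
};
```

That way a single kernel binary can still find a frame buffer on boards that have no scannable bus at all.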
Now regarding Secure Booting in UEFI. There are 3 things to look at when you’re talking about it:
1) The actual UEFI. A good thing as it might allow a single Linux kernel for all hardware.
2) The secure boot locking. This is what everyone is complaining about. Microsoft requires that a user can’t change the keys for booting a system. It’s a bad thing.
3) The secure boot keys. Microsoft does not require that the locked keys are only the ones that they provide. So, in theory, an OEM could include the keys from RedHat or Ubuntu. It’s certainly incompatible with a GPLv3 boot loader, but it would still work with any other license, both technically and legally. It might work for us if we pressure the vendors to include a single CA (an OSS CA). Then again, it defeats the purpose of the Secure Boot initiative, as a hacker could also get stuff signed by it.
d3vi1,
“Yes, it’s called a firmware. It’s not only about the platform configuration tables (ACPI & others that 99% of existing ARM implementations don’t actually offer).”
Well no, actually I meant a shim layer that could be installed alongside the kernel for devices which specifically lack a firmware (or have a custom one). If this is technically feasible (I don’t know why it wouldn’t be), then it would offer a means to standardize the kernel itself without worrying about mainboard/firmware variations. The shim might work something like a Linux bootloader that’s specific to the ARM device it’s running on.
The benefit of this would be to eliminate the nuisance which is compiling a whole custom kernel for every device.
“After plug and play, you only had to provide the driver, the discovery and basic configuration was done by the OS or firmware and the Bus.”
With a standard x86 PCI bus, the devices themselves all implement the same PNP protocol, which is not at all specific to the mainboard, and it’s not overly complex. If the OS wanted to, it could perform its own PCI PNP scan without any help from the mainboard firmware.
I suspect ARM’s system initialization sequence is similar to that of an x86 (but I want to know if I’m wrong about this). It might require mapping DIMM modules to physical addresses (if they’re not already hardcoded?), enabling the “chip enable” lines for the mainboard devices, and providing a RAM map for the OS; beyond that, the OS can do everything, including PNP and device initialization with drivers.
“My point is that most ARM implementations actually are missing that plug and play part that normal PCs always offer. If upon booting the Linux kernel doesn’t find a VGA class PCI device it will not display anything, even if there is a certain memory range available as a frame buffer. How can Linux know about that?”
Well, I’d be hopeful that ARM devices got rid of these legacy aspects, but on x86 the PCI video hardware is hardwired to watch the bus for certain physical addresses (I’m talking, of course, about the 0xA0000 and 0xB0000 regions, in addition to the PNP mappings). Most mainboards now have integrated video, but in principle the mainboard is not responsible for video initialization; that’s the job of the video card’s own ROM.
I like this tangential conversation topic, but to go back to what I was suggesting: can’t all device-specific knowledge be packaged into a shim layer that fits between the proprietary hardcoded firmware (or lack thereof) and a standard, universal Linux binary for ARM?
“Now regarding Secure Booting in UEFI…”
I’m not sure if you’ve read my take on it already or not, but yes, I think it was designed as a mechanism to restrict the owner’s own control over the machine. It’s a scam that the owner can’t control the chain of trust.
Edited 2012-04-04 18:56 UTC
Exactly.
Have you read the article?
They don’t plan to make ARM the primary architecture. They plan to make ARM one primary architecture, besides x86 and x86_64.
Technically speaking it is “a” (inclusive) primary architecture, not “one” (the latter being exclusive) :p
They’re already late.
There is a delay before people realize they’re dead, and it is the same with a company. ARM is not optional, and the Linux vendors either know that or are being left behind.
Yes, and Fedora know this as well as anybody – they’ve supported ARM for quite a while now.
The question is simply over whether that support should be elevated to the status of “primary architecture”, which imposes a lot of extra requirements. For example, it means that if a package is broken on ARM, that should block the release of that package to every other architecture too. That kind of thing.
/rant on
But which ARM are you talking about? ARMv5? ARMv7? ARMv8? Profile A or M? With or without an FPU?
Then, if you want to do anything slightly useful with an ARM processor (like, say, simply blinking LEDs or sending text on a serial port for debugging), comes the joy of per-SoC support: an implementation of “ARM support” which works on, say, the TI OMAP 4430 of a Pandaboard won’t work on a competing chipset like Qualcomm’s Snapdragon family, nor on the next generation of the same chipset (the OMAP 5 series), and may not even work very well on another chipset of the same family (the OMAP 4470) without adjustments.
Sad truth is, distribution support does not matter much on ARM. As long as ARM doesn’t have a unified architecture, the most crucial thing is having a manufacturer rewrite half of your kernel according to half-proprietary specs in order to make it work on each SoC it sells. And test it, too. If you thought that Linux’s GPU support was bad, wait until you see someone actually trying to run it on ARM hardware that’s even slightly exotic.
And the only big company which has made any effort towards standardizing ARM so far, Microsoft, has added the requirement that competing OSs may not be installed on “their” hardware, so this situation is unlikely to change anytime soon.
/rant off
Edited 2012-04-05 06:39 UTC
I am sure Red Hat is going to go with the most common chips. They target enterprise server rooms, so that will probably mean ARMv5, v7, and v8. There is no reason why this can’t be handled like Gentoo: you compile for the processor you have. This is actually one place where that paradigm would make a lot of sense.
Right now it’s a chicken-and-egg thing: not enough people have ARM devices to work on it. As for devices, there are actually quite a few out there: plenty of dev boards and also a few consumer devices. Red Hat does have full-time people working on ARM. It is coming to the server room.
As for development, updating the code is not that simple. There are things that x86 CPUs will do mathematically that an ARM processor won’t. Heck, there are even differences between the different ARM processors.
This is really just a question of timing, not if.