Linked by Thom Holwerda on Mon 17th Sep 2012 16:56 UTC, submitted by Andy McLaughlin
OSNews, Generic OSes "Visopsys (VISual OPerating SYStem) is an alternative operating system for PC-compatible computers, developed almost exclusively by one person, Andy McLaughlin, since its inception in 1997. Andy is a 30-something programmer from Canada, who, via Boston and San Jose, ended up in London, UK, where he spends much of his spare time developing Visopsys. We had the great fortune to catch up with Andy via email and ask him questions about Visopsys, why he started the project in the first place, and where it is going in the future."
Useful website
by error32 on Mon 17th Sep 2012 17:26 UTC
error32
Member since:
2008-12-10

The Visopsys website has a lot of useful links to OS development documents. Probably already well known to homebrew OS devs.

Reply Score: 2

The hardest part
by Alfman on Mon 17th Sep 2012 17:51 UTC
Alfman
Member since:
2011-01-28

Pingdom: "What’s been the hardest part of working on Visopsys?"

Andy: "Hardware support, definitely. The world of PC hardware has always been a bit daunting, since there are so many different manufacturers, all interpreting the standards in their own ‘unique’ ways. You might write something like an IDE driver that works on every system you try, release it, and then find out it isn’t working properly on hundreds of other peoples’ systems."


I have to agree completely with his answer. Designing and implementing the OS foundation is the "easy" part. It's the part we're all most interested in as programmers. Going through and actually implementing the drivers (often without specifications and even without all hardware configurations) is the part that ultimately bogs down the hobby OS developer.


What we need is some kind of universal driver standard that can be shared across all operating systems. Ideally this would be in source form and the layer could be optimised away by the compiler. This way a driver wouldn't be written for "Windows X" but instead for the "2012 PC driver standard". The OS would implement the standard and immediately support numerous compatible hardware devices. It's a pipe dream though. For its part, MS would never participate, and their cooperation would be pretty much mandatory.
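To make this concrete, here's a rough sketch in C of what a tiny slice of such a standard might look like. Every name here is invented for illustration (there is no real "osdi" standard), and a RAM-backed "disk" stands in for actual hardware:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical "2012 PC driver standard" contract for block devices.
 * All names invented for illustration. */
struct osdi_block_ops {
    int (*read)(void *dev, uint64_t lba, void *buf, size_t sectors);
    int (*write)(void *dev, uint64_t lba, const void *buf, size_t sectors);
};

/* A toy "device": a RAM-backed disk standing in for real hardware. */
#define SECTOR 512
struct ramdisk { uint8_t data[16 * SECTOR]; };

static int rd_read(void *dev, uint64_t lba, void *buf, size_t sectors) {
    struct ramdisk *rd = dev;
    memcpy(buf, rd->data + lba * SECTOR, sectors * SECTOR);
    return 0;
}

static int rd_write(void *dev, uint64_t lba, const void *buf, size_t sectors) {
    struct ramdisk *rd = dev;
    memcpy(rd->data + lba * SECTOR, buf, sectors * SECTOR);
    return 0;
}

/* The vendor ships this table once; any OS implementing the standard
 * can drive the hardware through it without a per-OS port. */
static const struct osdi_block_ops ramdisk_ops = { rd_read, rd_write };
```

The point is just the shape: the driver exports one ops table against a published contract, and any OS that implements the contract can call through it.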

Reply Score: 3

RE: The hardest part
by ferrels on Mon 17th Sep 2012 18:01 UTC in reply to "The hardest part"
ferrels Member since:
2006-08-15

That all sounds good in theory, but in practice it reduces your OS to running on hardware of the least common denominator (in terms of performance)....your driver may work on 98%+ of the hardware out there but the performance suffers immensely. VESA drivers for video are a good example of this. Almost all video hardware will work in VESA mode, but you're essentially left with only the most basic functions and features....i.e. no hardware GPU acceleration, limited color depths and resolutions, etc.

Edited 2012-09-17 18:02 UTC

Reply Score: 2

RE[2]: The hardest part
by Alfman on Mon 17th Sep 2012 18:13 UTC in reply to "RE: The hardest part"
Alfman Member since:
2011-01-28

ferrels,

Well, VESA was written ages ago, and at that time it performed pretty well because graphics were not accelerated.

The reason I stated the "2012 PC driver standard" was because I envisioned the standard itself being updated every few years to adapt to new hardware interfaces. Non-standard extensions could be implemented too, but the idea would be for new functionality to ultimately be incorporated into the standard at some point.


The great thing about this is that the standard could be both forward and backward compatible.

As an example, an OS might support the 2012 standard for webcams. Come 2020, when the standard is deprecated, a generic 2012->2020 wrapper layer could nevertheless ensure that all 2020 operating systems could continue to run the 2012 webcam drivers. (I think I uncovered another disincentive for manufacturers to support this ;) )

Conversely, it'd be possible to have a generic 2020->2012 conversion driver to allow an older OS to run the newer hardware drivers.
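A minimal sketch of that wrapper idea, with all interfaces invented for illustration: an imaginary "2020" OS keeps an imaginary "2012" webcam driver alive by forwarding the calls that still exist and supplying a safe default for the one that didn't exist yet:

```c
#include <stddef.h>
#include <string.h>

/* Imaginary "2012" webcam driver interface. */
struct cam2012_ops {
    int (*grab_frame)(void *dev, unsigned char *buf, size_t len);
};

/* Imaginary "2020" interface: same grab, plus a format query. */
struct cam2020_ops {
    int (*grab_frame)(void *dev, unsigned char *buf, size_t len);
    int (*get_format)(void *dev);
};

/* A pretend 2012-era driver, standing in for a vendor's binary. */
static int old_grab(void *dev, unsigned char *buf, size_t len) {
    (void)dev;
    memset(buf, 0x7F, len);            /* fake pixel data */
    return 0;
}
static const struct cam2012_ops old_driver = { old_grab };

/* The adapter: forward what the old driver has, supply a safe
 * default for what it predates. */
struct cam_adapter { const struct cam2012_ops *old; void *dev; };

static int adapt_grab(void *a, unsigned char *buf, size_t len) {
    struct cam_adapter *ad = a;
    return ad->old->grab_frame(ad->dev, buf, len);
}
static int adapt_format(void *a) {
    (void)a;
    return 0;                          /* 2012 drivers: assume base format */
}

/* What the "2020" OS actually sees. */
static const struct cam2020_ops wrapped = { adapt_grab, adapt_format };
```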


Edit: Extending EOL is only a side benefit, but the intention would be to eliminate the duplication of work in OS drivers and make it much easier for all operating systems to support all hardware at least in their basic modes. More advanced features should still be possible even if they're not supported in all operating systems.

Edited 2012-09-17 18:19 UTC

Reply Score: 2

RE[3]: The hardest part
by zima on Mon 17th Sep 2012 19:45 UTC in reply to "RE[2]: The hardest part"
zima Member since:
2005-07-06

Webcams are already more or less covered, by USB video class - they don't really even need driver installation (sure, manufacturer drivers are often provided, but they tend to offer just superfluous trinkets beyond what the OS driver does, and they sometimes introduce weird issues & performance degradation - that happened to me once; the default OS drivers clearly bogged down the system much less during webcam operation)

PS. Compatibility with USB video class is BTW something MS made mandatory for a product to get the Vista & up logo certification - which also brought much better support for any random webcam to, say, Linux than used to be the case - so overall your "It's a pipe dream though. For its part, MS would never participate" seems not entirely warranted...

Edited 2012-09-17 19:55 UTC

Reply Score: 3

RE[4]: The hardest part
by ssokolow on Mon 17th Sep 2012 20:12 UTC in reply to "RE[3]: The hardest part"
ssokolow Member since:
2010-01-21

Yeah. Microsoft actually does have an incentive to standardize drivers.

Aside from malware, the biggest source of kernel instability these days is buggy drivers. So, the more hardware Microsoft can get sharing code that they write and test themselves, the more stable Windows becomes.

(And the easier it is to push the PR line that locking down the system, iOS-style, will kill rootkits and spyware without harming any legitimate users)

Edited 2012-09-17 20:24 UTC

Reply Score: 2

RE[4]: The hardest part
by Alfman on Mon 17th Sep 2012 20:31 UTC in reply to "RE[3]: The hardest part"
Alfman Member since:
2011-01-28

zima,

I've purchased a few webcams, one this year. I have yet to own a webcam where Windows drivers weren't necessary... but then it's a noname brand. If what you are saying is true and they are becoming standardised, that's a very welcome change!

Reply Score: 2

RE[5]: The hardest part
by ssokolow on Mon 17th Sep 2012 20:41 UTC in reply to "RE[4]: The hardest part"
ssokolow Member since:
2010-01-21

Did they say "USB Video Class", "UVC", or something along the lines of "Designed for Windows Vista/7"?

If not, they may be older designs. If so, then your problem is Windows's approach to drivers.

There are some devices that will work with one of the drivers Windows has built in, but, because of the metadata their USB microcontrollers report, you need to craft your own INF file to make Windows aware of that.
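For anyone curious, such a hand-crafted INF is mostly a pointer at the inbox driver. The fragment below is only a sketch - the vendor/product IDs are made up, the section names are illustrative, and the exact sections required vary by Windows version:

```inf
; Sketch only: binds a (made-up) VID/PID to Windows's inbox UVC driver.
[Version]
Signature="$WINDOWS NT$"
Class=Image
ClassGUID={6bdd1fc6-810f-11d0-bec7-08002be2092f}

[Manufacturer]
%Vendor%=Models,NTx86

[Models.NTx86]
%MyCam%=MyCam.Install,USB\VID_1234&PID_5678

; Reuse the inbox UVC driver instead of shipping any binaries.
[MyCam.Install]
Include=usbvideo.inf
Needs=USBVideo.NT

[Strings]
Vendor="Example"
MyCam="Generic UVC Webcam"
```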

Reply Score: 2

RE[5]: The hardest part
by zima on Tue 18th Sep 2012 17:35 UTC in reply to "RE[4]: The hardest part"
zima Member since:
2005-07-06

I have yet to own a webcam where windows drivers weren't necessary...but then it's a noname brand. If what you are saying is true and they are becoming standardised, that's a very welcome change!

"if"? They're out there, for better part of a decade ( http://en.wikipedia.org/wiki/USB_video_device_class#Revision_Histor... & when I was shopping for a webcam 5 years ago or so, there were certainly some USB video class models available; few on http://en.wikipedia.org/wiki/List_of_USB_video_class_devices or http://www.ideasonboard.org/uvc/#devices list are at least that old)

I guess some noname models might still use oldish innards, not modified for a long time ...as usual with hardware, you check whether it fulfils your criteria before buying (but, if you care about such "total" plug'n'play in a webcam, how did you miss the existence of USB video class?), and/or get something very popular - hence widely supported (that example I gave, when the default Windows drivers worked better than the manufacturer-provided ones - it wasn't even a USB video class webcam; it was pre-UVC, but as standard as they come, the classic QuickCam Express)

Reply Score: 2

RE[6]: The hardest part
by Alfman on Tue 18th Sep 2012 18:25 UTC in reply to "RE[5]: The hardest part"
Alfman Member since:
2011-01-28

Yes, I say "if" because clearly some new webcams are not supported by Windows 7 out of the box. I'm looking now at Newegg and I can't tell which ones are and which are not; they don't list their USB classes or identification numbers. To be honest, I don't mind installing drivers the old-school way; it was never really a criterion for me.

The only device type where not having built-in drivers really hurts is network cards - since without one there's no way to go online and download new drivers.

Reply Score: 2

RE[7]: The hardest part
by zima on Tue 18th Sep 2012 19:00 UTC in reply to "RE[6]: The hardest part"
zima Member since:
2005-07-06

But total world domination of UVC is not what I had in mind when writing "Webcams are already more or less covered, by USB video class" - for there to be no outliers, you'd have to force everybody to use UVC, and how do you propose doing that? (other than... MS becoming much more aggressive - and not only about the logo, but outright banning all non-compliant devices from Windows; I can bet you'd grumble much more about such scenario :p ).

So yes, webcams following their USB class are out there and quite numerous, no need for "if" - salesmen not advertising it, and consumers seemingly not caring much, is another issue.

Because it's good to have built-in drivers, or even such a device class. It makes hardware more likely to stay usable down the line (that decade-plus-old QuickCam Express I mentioned is still recognized & working flawlessly; no such luck with a similarly old and much nicer - but also rarer - Philips webcam; plus, OS-included drivers for chipsets and such often tended to be more trouble-free, in my experience)

Reply Score: 2

RE: The hardest part
by tidux on Mon 17th Sep 2012 18:12 UTC in reply to "The hardest part"
tidux Member since:
2011-08-13

That's what the BIOS does. Welcome to 16-bit mode.

Reply Score: 4

RE[2]: The hardest part
by Alfman on Mon 17th Sep 2012 19:07 UTC in reply to "RE: The hardest part"
Alfman Member since:
2011-01-28

Of course, BIOS is strictly a legacy interface today, essentially undeveloped since the 1980s. Nevertheless, in its prime, consider what a phenomenal success BIOS was in achieving a level of hardware independence that would have been impossible without it.

The small OS I wrote did run on PCs other than mine. I didn't have to do anything extra to make it run on a laptop; it just did, because those standards existed. I am absolutely positive that Andy and every other indie OS dev you'll find would agree that they crave a standard interface that would just make their OS work everywhere without having to reinvent the wheel and write new drivers for the Nth time.

Let's all have a good laugh comparing the idea to a 16 bit BIOS, but on a serious level I'd rather not dismiss the notion of a modernised standard as a joke. It would be tremendously useful in promoting innovation in the operating system space by making it much easier for alternative operating systems to be taken seriously as competitive platforms.

Reply Score: 3

RE[3]: The hardest part
by zima on Mon 17th Sep 2012 19:35 UTC in reply to "RE[2]: The hardest part"
zima Member since:
2005-07-06

Well then, that's what the UEFI does now. Or at least was supposed to, I think - it didn't really work out (never mind forcing all OSes into the same idiosyncrasies; IIRC it follows some of WinNT's, so MS even kinda participates...)

Reply Score: 2

RE[4]: The hardest part
by Alfman on Mon 17th Sep 2012 20:27 UTC in reply to "RE[3]: The hardest part"
Alfman Member since:
2011-01-28

Well, maybe, but I only posted that in response to the BIOS comment. I don't consider UEFI firmware a great substitute for a software driver standard. I don't have practical experience with UEFI, but I see some possible negative implications:

1. The hardware firmware can't be managed as easily/safely as software drivers can be. Can I update firmware drivers for one device independently from the rest, or is it one monolithic firmware image?

2. For UEFI services to work, my devices would have to be supported through my mainboard. If a device uses a newer standard, and my OS supports the newer standard, is it possible that my mainboard could nevertheless prevent me from using it because it lacks firmware updates?

3. I don't think UEFI can contain drivers to support all external peripherals - like webcams, cameras, scanners, various adapters, voip devices, etc. It seems like a bad idea to try and cram all the drivers for these in the motherboard's UEFI services.

To be honest, I'd rather have a standard that is capable of scaling to all sorts of devices and not one that depends on my motherboard's firmware implementation. So I think a software solution would be better....however I'd like to hear other ideas.

Reply Score: 2

RE[5]: The hardest part
by ssokolow on Mon 17th Sep 2012 20:34 UTC in reply to "RE[4]: The hardest part"
ssokolow Member since:
2010-01-21

To be honest, I find UEFI ominous because, apparently, most motherboard manufacturers start with Intel's reference implementation and end up with something as big and complex as an OS kernel.

It's bad enough that the motherboard's firmware now contains enough of a network stack to spy on you and phone home if subverted. Does it really also need to be so big that it's statistically guaranteed to have exploits?

(When this BIOS-based motherboard breaks, I'm either going to buy a replacement from the crop of pre-Win8 mobos or I'm going to start my shopping at the CoreBoot compatibility list.)

Reply Score: 2

RE[6]: The hardest part
by ssokolow on Mon 17th Sep 2012 21:59 UTC in reply to "RE[5]: The hardest part"
ssokolow Member since:
2010-01-21

Update: Here are a couple of things on the pros and cons of UEFI and how they relate to Linux:

https://www.youtube.com/watch?v=V2aq5M3Q76U
http://mjg59.dreamwidth.org/11235.html

Reply Score: 2

RE[7]: The hardest part
by Alfman on Tue 18th Sep 2012 01:04 UTC in reply to "RE[6]: The hardest part"
Alfman Member since:
2011-01-28

ssokolow,

Yeah, I know mjg59 isn't a big fan of UEFI, although I've only heard his take as it relates to secure boot. I'll have to wait till I have more time to really follow your links. (Very good finds though, thanks for linking them!)

Regarding webcam compatibility, I haven't a clue what the box said, only that win7 was supported, which was good enough for me (user reviews revealed linux compatibility too). I don't mind that I needed a driver - the main point is that one is available. To me, the benefit of standard drivers would mean that manufacturer drivers could be loaded in independent operating systems even if manufacturers don't specifically cater to them. It would give a new breath of opportunity to homebrew OS developers.

Edited 2012-09-18 01:08 UTC

Reply Score: 2

RE[5]: The hardest part
by zima on Tue 18th Sep 2012 18:07 UTC in reply to "RE[4]: The hardest part"
zima Member since:
2005-07-06

I only posted that in response to the BIOS comment. I don't consider UEFI firmware a great substitute for a software driver standard

It was mostly just a half-joking continuation of the "That's what the BIOS does. Welcome to 16-bit mode" post just above ...though UEFI did also have that goal in mind, IIRC. And, as I said, it didn't really work out - and you pointed out some possible issues with the UEFI approach.

Problem is, perhaps that's one of the very few ways of achieving such total plug'n'play? The others would be a) drawing on the work of existing standards bodies (but I have some doubts whether Haiku, Syllable or Visopsys support, say, even USB video class) b) drawing on the work done for the big boys (what NDISwrapper does, and ReactOS has the goal of using standard Windows drivers IIRC; I believe there's also notable BSDs <-> Linux cross-pollination, extending also to Haiku and such).
Either way, it would force more idiosyncrasies of dominant OS onto independent ones - would that be good?

Cameras are also covered BTW, VoIP devices largely fall under some device class for external USB soundcards, and there was also some standard for scanners IIRC.

Reply Score: 2

RE[6]: The hardest part
by Alfman on Tue 18th Sep 2012 19:23 UTC in reply to "RE[5]: The hardest part"
Alfman Member since:
2011-01-28

zima,

"Problem is, perhaps that's one of the very few ways of achieving such total plug'n'play?"

I think it depends what you mean by PNP. As a hardware spec, PNP refers to the mechanisms for allocating resources (memory, ports, interrupts, DMA) for hardware and identifying it. This is mostly solved by BIOS/UEFI already and I see no reason to change it. Even when no drivers are available in the OS, the system is still able to allocate device resources via PNP. I don't think there's a PNP problem for hobby operating systems in general.


You keep suggesting drawing on the work of existing standards bodies, and to the extent that we can, I agree it's a good idea. However, I am not aware of any standards that approach the driver problem holistically and with the goal of being applied across operating systems. Your USB video class example is fine, but it falls short of solving the more general problem (even assuming all USB webcams could use this standard). Once my OS has implemented this USB standard, can I plug in any PCI frame buffer capture card and use it? Can I plug in a firewire device and capture from it? Can I plug in an ethernet/wifi webcam device and capture from it? Can I pair to a bluetooth webcam? The answer in most cases is going to be no, because the USB standard is just that: it's not intended as a generic solution to the driver problem. I want a driver solution that can continue to work even if a new bus comes along to replace USB. I want something specifically designed to solve the driver problem for all operating systems without regard to the standards of a specific bus.
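To illustrate the distinction with a toy sketch (all names invented): the OS-facing contract is one "video source" interface, and per-bus transports plug in underneath it, so replacing USB with some future bus wouldn't touch the OS side at all:

```c
#include <stddef.h>

/* One bus-agnostic contract the OS programs against. */
struct video_source {
    const char *transport;   /* "usb", "pci", "net", ... informational only */
    int (*capture)(struct video_source *self, unsigned char *buf, size_t len);
};

/* Two pretend transports implementing the same contract. */
static int usb_capture(struct video_source *s, unsigned char *buf, size_t len) {
    (void)s;
    for (size_t i = 0; i < len; i++) buf[i] = 1;   /* fake USB frame data */
    return 0;
}
static int net_capture(struct video_source *s, unsigned char *buf, size_t len) {
    (void)s;
    for (size_t i = 0; i < len; i++) buf[i] = 2;   /* fake network frame data */
    return 0;
}

static struct video_source usb_cam = { "usb", usb_capture };
static struct video_source net_cam = { "net", net_capture };

/* The OS-side code never mentions the bus. */
static int grab(struct video_source *cam, unsigned char *buf, size_t len) {
    return cam->capture(cam, buf, len);
}
```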


A second thing to keep in mind is that this driver standard should not seek to dictate how manufacturers should build their hardware. I feel they should be unrestricted to build the hardware however they want. It would be the software driver's responsibility to bridge the gap between the driver standard and the hardware interface.

If needed they could add proprietary extensions until the time when the standard officially adopted those extensions. Even then, the existing hardware wouldn't need to be updated, only the drivers. This would mean the functionality defined in the standard would always be available to all operating systems, only non-standard functionality would be OS specific.


"Either way, it would force more idiosyncrasies of dominant OS onto independent ones - would that be good?"

Your example was taking existing drivers today (say, from Windows) and making them the de facto standard. That's not what I had in mind. Shared drivers would be designed from the ground up to be more agnostic.

Reply Score: 2

RE[7]: The hardest part
by zima on Mon 24th Sep 2012 23:58 UTC in reply to "RE[6]: The hardest part"
zima Member since:
2005-07-06

Thing is, an approach like that of USB video class is likely pretty much one of the very few workable solutions. Another being just using Windows (and/or Linux...*) drivers.
If you wish for such "total PNP" (and I used the term more broadly, in the sense the abbreviation suggests), that is what you want in practice, I think - what is realistic (on several fronts, including how it impacts/constrains OS architectures).

And all USB webcams can use this standard - it will just take some time. Though the thing is, webcams typically get integrated now ...more and more peripherals either get integrated or disappear, so random driver woes become less of an issue.
Firewire also has had a similar ~class for a long time BTW ...oh, but it's disappearing. As are TV tuners. Also, ethernet/wifi cameras don't need drivers, and USB will almost certainly be around for a long time...

OTOH, if something really were to come along that was as all-encompassing a shared driver model as you envision, it could quite possibly preclude future improvements, or at least get in their way more than is already the case (with how we must care about compatibilities, legacies, and such). Once really introduced, not much would get improved. Your description reminded me a bit of the Khronos Group, OpenGL stagnation, and the relatively limited "standards" of doing things within it.

And I don't really think there's any big "driver problem" to be solved, anyway.


* BTW, funny thing about Linux kernel devs that came to my attention - apparently they are quite willing to work under NDAs when working on drivers. Which basically says "now that Linux is strong, we'll also do the things which impair newcomer operating systems"

Edited 2012-09-25 00:12 UTC

Reply Score: 2

RE: The hardest part
by Brendan on Tue 18th Sep 2012 04:12 UTC in reply to "The hardest part"
Brendan Member since:
2005-11-16

What we need is some kind of universal driver standard that can be shared across all operating systems. Ideally this would be in source form and the layer could be optimised away by the compiler. This way a driver wouldn't be written for "Windows X" but instead for the "2012 PC driver standard". The OS would implement the standard and immediately support numerous compatible hardware devices. It's a pipe dream though. For its part, MS would never participate, and their cooperation would be pretty much mandatory.


This has been attempted several times before. The most popular attempt was UDI (Uniform Driver Interface), which failed despite being backed by several large companies (including Sun/Solaris).

Some OSs (e.g. Linux) had religious objections ("OMG what if we wrote drivers and Microsoft could use them!"), some OSs had security problems ("OMG binary blobs created by unknown third-party developers running at the highest privilege level because our kernel is monolithic!"), and some OSs had technical reasons for not using it (e.g. very different driver interfaces, capabilities and feature sets). The end result was that very few people wrote drivers for it because most OSs didn't support it (and then OSs that didn't have religious, security or technical reasons for avoiding it didn't bother supporting it anyway because there weren't many drivers).

The ancient BIOS services are not acceptable because they're single-tasking synchronous interfaces (e.g. your OS freezes while the firmware waits until a DMA transfer completes). For exactly the same reason, UEFI is not an acceptable answer either. For a decent OS, you want to be able to be transferring data to/from disk while transferring data to/from network while transferring data to/from sound card while transferring data to/from USB; where the CPUs are all still free to do unrelated/useful work (and aren't stuck in a "while(waiting) {}" loop).
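The contrast can be sketched in a few lines of C (all names invented; the "interrupt handler" is simulated by a poll function): submission returns immediately and completion arrives later via callback, so the CPU stays free in between, unlike a BIOS-style call that blocks until the transfer finishes:

```c
#include <stddef.h>

typedef void (*io_done_fn)(void *ctx, int status);

struct io_request {
    void      *buf;
    size_t     len;
    io_done_fn done;          /* called when the "hardware" finishes */
    void      *ctx;
    struct io_request *next;
};

static struct io_request *queue;     /* pending requests */

/* Non-blocking submit: just enqueue and return immediately. */
static void io_submit(struct io_request *req) {
    req->next = queue;
    queue = req;
}

/* In a real OS this would run from an interrupt handler; here we
 * drain the queue by hand to simulate completions. */
static void io_poll_completions(void) {
    while (queue) {
        struct io_request *req = queue;
        queue = req->next;
        req->done(req->ctx, 0);      /* report success */
    }
}

/* Demo callback: count finished requests. */
static int completed;
static void count_done(void *ctx, int status) {
    (void)ctx;
    if (status == 0) completed++;
}
```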

- Brendan

Reply Score: 2

RE[2]: The hardest part
by Alfman on Tue 18th Sep 2012 05:30 UTC in reply to "RE: The hardest part"
Alfman Member since:
2011-01-28

Brendan,

Those are great points. I'm not under any delusion that it would be easy to get everyone on board with the shared driver model. Even if we managed to solve all the technical issues, I think many corporations would reject it because they benefit from high barriers to entry in the field of alternative operating systems. So I'm really very sceptical myself that we could pull it off in this day and age, when computers are only getting more closed rather than more open.


Nevertheless, I still like dissecting the theory and appreciate your critique.


1. "Some OSs (e.g. Linux) had religious objections ('OMG what if we wrote drivers and Microsoft could use them!')"

Well, presumably the manufacturers would be on board (if they weren't, the model would be bound to fail anyway). This means the manufacturers would be providing the drivers, and that they would run on Linux, Visopsys, Hurd, etc. as well as Windows.


2. "some OSs had security problems ('OMG binary blobs created by unknown third-party developers running at the highest privilege level because our kernel is monolithic!')"

This is true. In principle I don't object to requiring source code with drivers, but that's not a very realistic sell. Many Linux distros already have proprietary blobs, and nobody would be forced to install those. On the one hand, the high availability of shared manufacturer drivers could decrease the incentive for volunteers to write open source drivers for the same hardware. On the other hand, those volunteers could put their time to much better use instead of reinventing the wheel.

Ultimately though, if you don't trust the manufacturer of your hardware to write stable/trustworthy software, then arguably you've got no business installing their hardware in your machine either.


3. "and some OSs had technical reasons for not using it (e.g. very different driver interfaces, capabilities and feature sets)."

This is indeed probably one of the more controversial aspects, but the way I see it the drivers should be as modular as possible, to make it easy to hook them in however the OS needs them. At the extreme, all operating systems today have no choice but to work with fixed interfaces anyway (the ones provided by the hardware). Moving this into software shouldn't be that much of a burden on a system's design. Like I said earlier, *ideally* this wrapping layer would be in a kind of source form and its overhead could be optimised away when it's compiled & linked with callees within the OS.
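Here's a tiny sketch of that "source form" idea (names invented; an array stands in for device registers so the sketch is self-contained): the standard's wrapper is a static inline, so after compilation it collapses into the OS's own primitive with no runtime indirection:

```c
#include <stdint.h>

static uint8_t fake_port[4];          /* stand-in for device registers */

/* The host OS's native primitive; a real OS would issue an actual
 * port-I/O instruction here. */
static inline void os_outb(uint16_t port, uint8_t val) {
    fake_port[port & 3] = val;
}

/* What a "standard" driver would call; the compiler inlines it down
 * to os_outb, so the portability layer costs nothing at runtime. */
static inline void drv_write_reg(uint16_t port, uint8_t val) {
    os_outb(port, val);
}
```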


"The ancient BIOS services are not acceptable because they're single-tasking synchronous interfaces (e.g. your OS freezes while the firmware waits until a DMA transfer completes)."

Great observation. Clearly these would have to be asynchronous. Also, all device drivers should be able to run in parallel on SMP - which is a reasonable requirement assuming they don't share resources.

Just a note: many Linux drivers themselves were subject to the big kernel lock until recently, which caused similar driver serialisation bottlenecks.
http://lwn.net/Articles/380174/


I wish there were a viable market for this; it's a project I think I would enjoy working on.

Edited 2012-09-18 05:38 UTC

Reply Score: 4

RE[3]: The hardest part
by Brendan on Wed 19th Sep 2012 11:55 UTC in reply to "RE[2]: The hardest part"
Brendan Member since:
2005-11-16

1. "Some OSs (e.g. Linux) had religious objections ('OMG what if we wrote drivers and Microsoft could use them!')"

Well, presumably the manufacturers would be on board (if they weren't the model would be bound to fail anyways). This means the manufacturers would be providing the drivers and that they would run on linux, visopsys, hurd, etc. as well as windows.


Almost all hardware manufacturers only care about Windows; and like you already said, Microsoft has nothing to gain from adopting this. The rare few manufacturers that do care about anything else (NVidia, ATI/AMD, Intel) only care about Linux. Linux developers (and GNU specifically) are the people that refused to support UDI for religious reasons ( http://www.gnu.org/philosophy/udi.html ).

2. "some OSs had security problems ('OMG binary blobs created by unknown third-party developers running at the highest privilege level because our kernel is monolithic!')"

This is true. In principle I don't object to requiring source code with drivers, but that's not a very realistic sell. Many Linux distros already have proprietary blobs, and nobody would be forced to install those. On the one hand, the high availability of shared manufacturer drivers could decrease the incentive for volunteers to write open source drivers for the same hardware. On the other hand, those volunteers could put their time to much better use instead of reinventing the wheel.


Linux has some binary blobs; but they don't like it and would get rid of them if they could. It'd be extremely hard to convince them to replace existing/working open source drivers with alternative (potentially closed source) drivers.

Ultimately though, if you don't trust the manufacturer of your hardware to write stable/trustworthy software, then arguably you've got no business installing their hardware in your machine either.


It's not the hardware manufacturers that would be writing drivers. It's people volunteering their time to slap together something that "works for me(tm)".

3. "and some OSs had technical reasons for not using it (e.g. very different driver interfaces, capabilities and feature sets)."

This is indeed probably one of the more controversial aspects, but the way I see it the drivers should be as modular as possible to make it easy to hook them into however the OS needs them.


In this case, people/volunteers will only write the modules for the OS they care about and the driver won't work on any OS that needs different modules.

At the extreme, all operating systems today have no choice but to work with fixed interfaces anyways (ones provided by the hardware). Moving this into software shouldn't be that much of a burden to a system's design.


A driver is a kind of abstraction layer between whatever interface the OS wants and whatever interface the hardware provides. What you're talking about is having drivers that provide an unwanted interface, with a "driver driver" between the unwanted interface and the OS itself.

Like I said earlier, *ideally* this wrapping layer would be in a kind of source form and its overhead could be optimised away when it's compiled & linked with callees within the OS.


Trying to get different developers to agree on a common device driver standard will be hard. Trying to get different developers to agree on a common programming language is going to be even harder.

"The ancient BIOS services are not acceptable because they're single-tasking synchronous interfaces (e.g. your OS freezes while the firmware waits until a DMA transfer completes)."

Great observation. Clearly these would have to be asynchronous. Also, all device drivers should be able to run in parallel on SMP. Which is a reasonable requirement assuming they don't share resources.


Not just asynchronous, but "asynchronous with IO priorities". For example, you should be able to say "pre-fetch these sectors as low priority" and also say "change the priority of my earlier request (I need them sectors ASAP!)" or "cancel that request, I don't need those sectors now".

Not just different drivers running at the same time, but the same driver being able to complete multiple requests at the same time (e.g. if there's some sort of sector cache or something) and accepting new requests while existing requests are being performed.
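A toy sketch of such an interface (invented names): requests stay addressable after submission, so callers can bump their priority or cancel them, and the driver always services the most urgent live request first:

```c
#include <stddef.h>

struct ioreq {
    int  prio;                /* higher = more urgent */
    int  cancelled;
    int  done;
    struct ioreq *next;
};

static struct ioreq *pending;

static void ioreq_submit(struct ioreq *r)   { r->next = pending; pending = r; }
static void ioreq_set_prio(struct ioreq *r, int prio) { r->prio = prio; }
static void ioreq_cancel(struct ioreq *r)   { r->cancelled = 1; }

/* Service one request: pick the most urgent non-cancelled one,
 * unlink it, and mark it complete. Returns NULL if nothing is left. */
static struct ioreq *ioreq_service_next(void) {
    struct ioreq **best = NULL;
    for (struct ioreq **p = &pending; *p; p = &(*p)->next)
        if (!(*p)->cancelled && (!best || (*p)->prio > (*best)->prio))
            best = p;
    if (!best)
        return NULL;
    struct ioreq *r = *best;
    *best = r->next;          /* unlink from the pending list */
    r->done = 1;
    return r;
}
```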

Of course this is just the tip of the iceberg (e.g. I was only really looking at some aspects of storage device drivers). If you want to look at something like video driver interfaces then the sheet really hits the fan.

I wish there would be a viable market for this, it is a project I think I would enjoy working on.


A lot of people wish it was viable. Some try to do it (UDI, EDI/Extensible Device Interface). It hasn't worked yet.

I'm not necessarily saying it's impossible though - just very unlikely to become widespread.

- Brendan

Reply Score: 2

RE[4]: The hardest part
by Alfman on Wed 19th Sep 2012 14:59 UTC in reply to "RE[3]: The hardest part"
Alfman Member since:
2011-01-28

Brendan,

We're talking over one another. I keep hearing that this wouldn't become widespread and manufacturers wouldn't get involved, etc, but we've established this already and I've been saying it explicitly from the start. Without cooperation from the top corporations, this would not succeed. It would most likely have the same fate as UDI.

I'm just talking about how it should be.


"A driver is a kind of abstraction layer between whatever interface the OS wants and whatever interface the hardware provides. What you're talking about is having drivers that provide an unwanted interface, with a 'driver driver' between the unwanted interface and the OS itself."

I think if you talked to most OS developers you'd find that drivers are by far the most difficult problem to address (Andy McLaughlin said the same thing in the interview), so this would be something they want very much. It'd be extremely desirable to support all hardware by implementing one interface.


Linux users might view it as a non-issue because Linux is actually big enough to have manufacturers and tons of volunteers working on its drivers. But even then we'd save a lot of work and reduce tons of kernel bloat by using standardised drivers, which in some cases could even be of better quality and performance than today's drivers.


On the one hand, I can understand your point: what if the interface is not the one you want? But on the other hand, it's not a new problem being caused by this model. The USB webcam interface may not be the one we want, but it's the one we get. The video hardware interface might not be the one we want, but it's the one we get. The sound card hardware interface might not be the one we want, but it's the one we get. In all these instances an OS has to deal with not one, but hundreds or thousands of subtly different hardware interfaces doing the exact same thing. So adding an interface for standardised drivers really shouldn't be a big deal. Of course I'd want real experts to work together at creating good interfaces from the get go.


"Trying to get different developers to agree on a common device driver standard will be hard. Trying to get different developers to agree on a common programming language is going be even harder."

Yep, standards are always hard.


"Not just asynchronous, but 'asynchronous with IO priorities'"

Good idea. I don't want us to reinvent the wheel here, but basically we'd want to be able to expose everything the drives are capable of. I think most operating systems use a SCSI interface internally even for representing drives that aren't on a SCSI bus.

"Not just different drivers running at the same time, but the same driver being able to complete multiple requests at the same time (e.g. if there's some sort of sector cache or something) and accepting new requests while existing requests are being performed."

That's usually implied by asynchronous designs, but let's say it explicitly.

Reply Score: 2

RE: The hardest part
by aargh on Tue 18th Sep 2012 07:28 UTC in reply to "The hardest part"
aargh Member since:
2009-10-12

What we need is some kind of universal driver standard that can be shared across all operating systems.


You mean like Genode? Now, why hasn't someone already thought of that...

Reply Score: 1

RE[2]: The hardest part
by Laurence on Tue 18th Sep 2012 08:16 UTC in reply to "RE: The hardest part"
Laurence Member since:
2007-03-26

"What we need is some kind of universal driver standard that can be shared across all operating systems.


You mean like Genode? Now, why hasn't someone already thought of that...
"


Genode isn't really a universal driver standard though.

In fact, I'm still not 100% certain what Genode is setting out to achieve even after (loosely) following the project for a few years, but from what I can gather, it's a complete micro kernel architecture designed to allow running guest kernels (such as Linux) and thus their drivers.

Reply Score: 2

RE: The hardest part
by Laurence on Tue 18th Sep 2012 08:03 UTC in reply to "The hardest part"
Laurence Member since:
2007-03-26

That's a great idea in theory, but wouldn't it push kernels into a more hybrid or even -dare I say it- micro kernel design?

Without wanting to get into a debate about micro vs monolithic kernels; if a unified driver format meant constraints on the design then I could see a lot of potential advocates turning their back on said driver format. Particularly those who have a monolithic kernel (Linux being the biggest loss).

I guess, even if the above supposition was correct (and there's a very good chance I'm talking complete BS here lol), then at least a unified format could offer "fall back" drivers for occasions when optimised / native kernel drivers are not available (much like the aforementioned VESA mode - which is invaluable for setting up servers)

Reply Score: 2

RE[2]: The hardest part
by Alfman on Tue 18th Sep 2012 16:03 UTC in reply to "RE: The hardest part"
Alfman Member since:
2011-01-28

Laurence,

"That's a great idea in theory, but wouldn't it push kernels into a more hybrid or even -dare I say it- micro kernel design?"

I don't think so, but it's worth investigating. Linux can call VESA or UEFI without becoming more hybrid (note that's not the exact model I'm proposing per se, but I nevertheless think it's a valid counter-example). Actually my gut instinct is to say the opposite may be more of a concern: how would a microkernel incorporate these drivers?

Obviously the microkernel's goal is to isolate the drivers from one another; would it be able to jail the drivers and still have them work? That depends on how they're written. The standard would have to be very clear about how drivers could interact with the system: no direct manipulation of the GDT or interrupt tables, drivers would need to request permission to access ports instead of assuming they're running in ring-0, and they'd need standard ways to coordinate memory mapping.

These murky details all need to be ironed out for sure, but with a well defined standard, a good reference implementation, a robust test suite, and a certification process, we should have quality drivers that work everywhere without worrying about OS-specific quirks. I don't think an existing operating system would need too many changes (assuming its drivers were already modular and self-contained). It wouldn't be too different from writing a new OS-specific driver for a new piece of hardware, only this particular OS-specific driver would be capable of driving all hardware supported by the shared driver standard.
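To make that concrete, here's a minimal sketch of the kind of capability-requesting interface I mean. Every name here is hypothetical (not from UDI or any real spec): the idea is just that the host OS hands the driver a table of services at load time, and the driver asks for its resources instead of assuming ring-0.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical services a host OS would hand a portable driver at load
 * time. A microkernel can grant these to a jailed user-space process;
 * a monolithic kernel can grant them trivially in kernel space. */
struct os_services {
    int  (*request_ioports)(uint16_t base, uint16_t count);
    void (*release_ioports)(uint16_t base, uint16_t count);
    int  (*register_irq)(int irq, void (*handler)(void *), void *ctx);
};

struct driver {
    const struct os_services *os;
    uint16_t io_base;
};

/* The driver requests access rather than touching hardware directly;
 * if the host refuses, it fails cleanly instead of crashing the system. */
int driver_init(struct driver *drv, const struct os_services *os,
                uint16_t io_base)
{
    drv->os = os;
    drv->io_base = io_base;
    if (os->request_ioports(io_base, 8) != 0)
        return -1;
    return 0;
}
```

The same pattern extends to DMA buffers and memory-mapped regions: everything goes through the service table, so the standard (not the OS) defines what a driver is allowed to do.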

Edited 2012-09-18 16:07 UTC

Reply Score: 2

RE[3]: The hardest part
by Laurence on Wed 19th Sep 2012 07:59 UTC in reply to "RE[2]: The hardest part"
Laurence Member since:
2007-03-26


I don't think so, but it's worth investigating. Linux can call VESA or UEFI without becoming more hybrid (note that's not the exact model I'm proposing per se, but I nevertheless think it's a valid counter-example).

I'm not sure those examples are applicable as VESA is an agreed standard where each OS has incorporated their own drivers and UEFI happens outside of the OS.

ndiswrapper might be an applicable comparison though, as that runs Windows drivers on a Linux kernel. I don't pretend to be an expert on how ndiswrapper works, but from what I gather, it's quite similar to FUSE; i.e. it has a kernel driver but the actual imported drivers (be that ntfs-3g in FUSE or a Windows wireless driver in ndiswrapper) will run in user space.

I'm not sure if the same would be required if going down the route of a totally universal driver set. Probably not. I'm not a kernel developer so not really in a position to comment hehehe


Actually my gut instinct is to say the opposite may be more of a concern, how would a microkernel incorporate these drivers?

I don't really follow your line of thinking there. I'm not saying you're wrong (I really don't want to come across like I'm knowledgeable here because I'm really not!) but I'd appreciate it if you could elaborate a little more please ;)


Obviously the microkernel's goal is to isolate the drivers from one another, would it be able to jail the drivers and still have them work? That depends how they're written. The standard would have to be very clear about how drivers could interact with the system, no direct manipulation of GDT or interrupt tables, drivers would need to request permission to access ports instead of assuming they're running in ring-0. They'd need standard ways to coordinate memory mapping. These murky details all need to be ironed out for sure, but with a well defined standard, a good reference implementation, a robust test suite, and a certification process, then we should have quality drivers that work everywhere without worrying about OS-specific quirks. I don't think an existing operating system would need too many changes (assuming its drivers were already modular and self-contained). It wouldn't be too different from writing a new OS-specific driver for a new piece of hardware, only this particular OS-specific driver will be capable of driving all hardware supported by the shared driver standard.

I might be saying something really stupid here, so please forgive me; but if the existing architecture has drivers written in a modular / self-contained way, then wouldn't that be a hybrid kernel?

I think I have a basic grasp on all this (I did experiment with writing my own kernel many years ago), but I'm definitely no more experienced than a curious n00b. So I apologise if I'm making no sense there.

Reply Score: 2

RE[4]: The hardest part
by Alfman on Wed 19th Sep 2012 13:43 UTC in reply to "RE[3]: The hardest part"
Alfman Member since:
2011-01-28

Laurence,

"I might be saying something really stupid here, so please forgive me; but if the existing architecture has drivers written in a modular / self-contained way, then wouldn't that be a hybrid kernel?"

Oh I see what you are thinking. Instead of explaining it in my own words, I'll drop a fairly decent Wikipedia article on the matter:

https://en.wikipedia.org/wiki/Monolithic_kernel

"A monolithic kernel is an operating system architecture where the entire operating system is working in the kernel space and alone as supervisor mode."

and
"Modular operating systems such as OS-9 and most modern monolithic operating systems such as OpenVMS, Linux, BSD, and UNIX variants such as SunOS, and AIX, in addition to MULTICS, can dynamically load (and unload) executable modules at runtime. This modularity of the operating system is at the binary (image) level and not at the architecture level. Modular monolithic operating systems are not to be confused with the architectural level of modularity inherent in Server-Client operating systems (and its derivatives sometimes marketed as hybrid kernel) which use microkernels and servers (not to be mistaken for modules or daemons)."


In short, a hybrid or microkernel differs in that it uses the CPU's privilege-separation mechanisms to protect pieces of the kernel from itself. This typically has further implications, like microkernel modules needing to communicate via IPC instead of being able to hook into each other more directly via dynamic linking or function pointers. But either kernel style could have pluggable modules (similar to DLLs).
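For example, "hooking in via function pointers" in a monolithic kernel usually means the module hands the core a table of operations. This is a simplified illustration with made-up names, modelled loosely on the idea behind Linux's file_operations, not any actual kernel API:

```c
#include <stddef.h>
#include <string.h>

/* A loadable module registers a table of function pointers; the core
 * kernel then dispatches through the table without knowing whether an
 * IDE driver, a SATA driver, or a ramdisk is behind it. */
struct blockdev_ops {
    const char *name;
    int (*read)(unsigned long sector, void *buf);
    int (*write)(unsigned long sector, const void *buf);
};

#define MAX_DEVS 8
static const struct blockdev_ops *devs[MAX_DEVS];
static int ndevs;

/* Called by a module at load time. */
int register_blockdev(const struct blockdev_ops *ops)
{
    if (ndevs == MAX_DEVS) return -1;
    devs[ndevs++] = ops;
    return 0;
}

/* Called by the core kernel: generic code, device-specific behaviour. */
int blockdev_read(const char *name, unsigned long sector, void *buf)
{
    for (int i = 0; i < ndevs; i++)
        if (strcmp(devs[i]->name, name) == 0)
            return devs[i]->read(sector, buf);
    return -1;
}
```

Everything here still runs in one address space at one privilege level, which is why the module stays "monolithic" no matter how modular the source layout is; a microkernel would put the same table behind an IPC boundary instead.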

Reply Score: 2

RE[5]: The hardest part
by Laurence on Thu 20th Sep 2012 07:18 UTC in reply to "RE[4]: The hardest part"
Laurence Member since:
2007-03-26

I see what you're saying. I guess even if there wasn't a technical limitation, there might still be a political one: to have a universal driver format, you'd have to have a universal binary format. You raised a good point about how Windows uses DLLs and Linux uses .ko files. So on one OS PEs are the preferred binary format, and on the other -Linux- ELFs are.

I'm fairly certain I read somewhere that the Linux kernel is written in such a way that it can have support for other binary formats (in fact a.out is still natively supported), but the question is, would Linus, Redmond, or any of the other kernel devs want a "foreign" (for want of a better term) executable format to be supported in kernel space? Maybe I'm just being naive or overly critical, but I couldn't see that happening.

You did also raise the point about compatible source code, but Linux has a hard enough time getting source code for things like 3D graphic acceleration and Wireless chipsets, so I couldn't see any universal model working unless it supported closed binary blobs.

I don't mean to be pessimistic as I think your idea is a great one. I'm just trying to understand the logistics of it all ;)

Reply Score: 2

RE: The hardest part
by Vanders on Tue 18th Sep 2012 10:44 UTC in reply to "The hardest part"
Vanders Member since:
2005-07-06

...all interpreting the standards in their own ‘unique’ ways. You might write something like an IDE driver that works on every system you try, release it, and then find out it isn’t working properly on hundreds of other peoples’ systems


IDE? All. My. Hate.

I often said that it appears every single manufacturer of an ATA controller took the specification, had it translated into Japanese by a 13 year old Spanish teenager and then handed it to a Brazilian programmer to read out loud to a Polish developer as he tried to implement it. I have no other explanation for how hardware designers could all come to such odd interpretations of something that was supposedly a written standard.

Reply Score: 2

Beautiful.
by gus3 on Tue 18th Sep 2012 20:17 UTC in reply to "RE: The hardest part"
gus3 Member since:
2010-09-02

My friend, this belongs in the Fortunes database.

Reply Score: 1

RE: The hardest part
by demetrioussharpe on Wed 19th Sep 2012 00:10 UTC in reply to "The hardest part"
demetrioussharpe Member since:
2009-01-09

What we need is some kind of universal driver standard that can be shared across all operating systems. Ideally this would be in source form and the layer could be optimised away by the compiler. This way a driver wouldn't be written for "Windows X" but instead for the "2012 PC driver standard". The OS would implement the standard and immediately support numerous compatible hardware devices. It's a pipe dream though. For its part, MS would never participate, and their cooperation would be pretty much mandatory.


This has already been tried; it's called UDI (Uniform Driver Interface). Unfortunately, it's exceptionally hard to get everyone on board with such an effort. The open source community largely ignored it, because it allowed the proliferation of binary drivers to continue. None of the other OS vendors had any incentive to participate in it because they don't have a problem getting manufacturers to create drivers for them. It's a shame, because there's also a reference implementation available, but no one seems to care. Perhaps all of the hobby OS's should work to create drivers for this API to allow driver sharing. If you'd like to check UDI out, here're a few links:

http://en.wikipedia.org/wiki/Uniform_Driver_Interface
http://www.projectudi.org/
http://projectudi.sourceforge.net/

Reply Score: 1

RE[2]: The hardest part
by Alfman on Wed 19th Sep 2012 00:45 UTC in reply to "RE: The hardest part"
Alfman Member since:
2011-01-28

demetrioussharpe,

Thank you for the links. UDI had been mentioned already, and as far as I can tell it fell into obscurity a decade ago for the political reasons that have already been mentioned. It was a good idea, but I don't think it was a complete solution either, only targeting system devices like network and block devices. I couldn't find anything in UDI for webcams, scanners, or even mice.

To those of you who may be questioning why bother talking about this when the chance of adoption is next to none, well...I'm an os-dever at heart, I fantasize about how things should be. I suspect it's the same thing that drives people like Visopsys's creator.

Reply Score: 2

RE[3]: The hardest part
by demetrioussharpe on Wed 19th Sep 2012 03:11 UTC in reply to "RE[2]: The hardest part"
demetrioussharpe Member since:
2009-01-09

demetrioussharpe,

Thank you for the links. UDI had been mentioned already, and as far as I can tell it fell into obscurity a decade ago for the political reasons that have already been mentioned. It was a good idea, but I don't think it was a complete solution either, only targeting system devices like network and block devices. I couldn't find anything in UDI for webcams, scanners, or even mice.

To those of you who may be questioning why bother talking about this when the chance of adoption is next to none, well...I'm an os-dever at heart, I fantasize about how things should be. I suspect it's the same thing that drives people like Visopsys's creator.


You're welcome. I'll say this, though: since UDI is defunct, there's room for someone(s) to pick up the project & create a standard for the rest of the types of devices. Going forward, this could be a worthy API for the hobby OS community to pick up & work on as an overall solution to this problem, since drivers will be the main showstopper preventing most hobby OSes from going mainstream. Just because Linux & the other main open source OSes didn't pick it up doesn't mean that it wouldn't be a boon for other groups. In fact, this might actually level the playing field for the other OSes, since Linux seems to get most of the open source OS developer talent.

Reply Score: 1

Comment by benb320
by benb320 on Mon 17th Sep 2012 20:33 UTC
benb320
Member since:
2010-02-23

What part of Canada is he from?

Reply Score: 1

RE: Comment by benb320
by Lennie on Tue 18th Sep 2012 16:55 UTC in reply to "Comment by benb320"
Lennie Member since:
2007-09-22

The article mentions:

"University in Calgary Canada"

So maybe he is from the west coast?

Reply Score: 2

Pretty interesting.
by UltraZelda64 on Tue 18th Sep 2012 00:02 UTC
UltraZelda64
Member since:
2006-12-05

I've played around with Visopsys a few times before, and I figured... why not give it another try, see if there's been a new release since I last tried it. There has been. Problem is, every single time I attempt to run it in VirtualBox the entire (host) machine crashes. First time, the screen just locked up. Then, two or three times after, the whole system crashed and rebooted. I found that this problem still occurs even with the older release (0.69), which I have successfully run before (though I'm not sure if it was in VirtualBox or even in a virtual machine at all). OS? openSUSE 12.2 with VirtualBox 4.1.18_OSE, so pretty damn recent.

I wish there was a better virtualization program for Linux... VirtualBox seems too damn buggy and causes nothing but problems. Over the years, it still is a pain in the ass.

Update: Damn. I burned both versions of Visopsys to a CD-RW, and neither one would boot on my hardware. Error initializing. But at least it ended more gracefully than VirtualBox, giving me the option to press a key to reboot the system.

Edited 2012-09-18 00:15 UTC

Reply Score: 2

RE: Pretty interesting.
by Alfman on Tue 18th Sep 2012 01:28 UTC in reply to "Pretty interesting."
Alfman Member since:
2011-01-28

UltraZelda64,

This is my experience:

apt-get install kvm
Download the cdrom image on this page:
http://visopsys.org/download/index.php


# I can get the cdrom to run without the installer
# run (as normal user)
kvm -cdrom visopsys-0.71.iso

This allows you to go in and play around with some of the built in software. Does this work for you?


For me installation fails...

# This creates a 1GB hard drive image (optional)
kvm-img create -f qcow2 viso.hd 1G

# Run the installation cdrom
kvm -hda viso.hd -cdrom visopsys-0.71.iso -boot d

# visop wouldn't allow me to partition and install on the same boot, but after creating a partition I was able to restart kvm and run the installer "successfully" (at least it said so).

# After install, run the virtual machine without the cdrom
kvm -hda viso.hd


For me, it just hangs there.

Given that you've tested it on bare metal too, I suspect it's likely an issue with the OS itself.

Edited 2012-09-18 01:33 UTC

Reply Score: 2

RE[2]: Pretty interesting.
by UltraZelda64 on Tue 18th Sep 2012 02:28 UTC in reply to "RE: Pretty interesting."
UltraZelda64 Member since:
2006-12-05

I don't know; like I said, I did run the older version on an older Gateway machine from around 2001 (P4 1.7GHz, 256MB RAM, ATAPI). The problem machine I am trying to run it on is a Dell from around 2006 with an AMD Athlon 64 X2 Dual Core 3800+, a gig of memory and a SATA connection to all drives. If I had to guess, I bet the problem lies in the SATA interface or the BIOS, but really I have no clue. I can't remember exactly what the error was when running it directly on the system, but I think it did mention something about the drives. Still, the OS initially loaded from the CD fine, so I don't know. Anyone else able to get this OS running on a SATA-based system?

Reply Score: 2

RE: Pretty interesting.
by Laurence on Tue 18th Sep 2012 13:46 UTC in reply to "Pretty interesting."
Laurence Member since:
2007-03-26

I wish there was a better virtualization program for Linux... VirtualBox seems too damn buggy and causes nothing but problems. Over the years, it still is a pain in the ass.


You mean apart from VMWare, Xen and KVM? You could even run qemu from the desktop, if you just wanted pure hardware emulation without any kernel optimisations.

Reply Score: 2

Great piece
by jido on Tue 18th Sep 2012 06:12 UTC
jido
Member since:
2006-03-06

Thanks for linking to the article. Really inspirational.

Reply Score: 2