Linked by Thom Holwerda on Mon 17th Sep 2012 16:56 UTC, submitted by Andy McLaughlin
OSNews, Generic OSes "Visopsys (VISual OPerating SYStem) is an alternative operating system for PC-compatible computers, developed almost exclusively by one person, Andy McLaughlin, since its inception in 1997. Andy is a 30-something programmer from Canada, who, via Boston and San Jose ended up in London, UK, where he spends much of his spare time developing Visopsys. We had the great fortune to catch up with Andy via email and ask him questions about Visopsys, why he started the project in the first place, and where is it going in the future."
Thread beginning with comment 535666
RE[3]: The hardest part
by Laurence on Wed 19th Sep 2012 07:59 UTC in reply to "RE[2]: The hardest part"

"I don't think so, but it's worth investigating. Linux can call VESA or UEFI without becoming more hybrid (note that's not the exact model I'm proposing per se, but I nevertheless think it's a valid counter-example)."

I'm not sure those examples are applicable, as VESA is an agreed standard for which each OS has written its own drivers, and UEFI happens outside of the OS.

ndiswrapper might be an applicable comparison though, as that runs Windows drivers on a Linux kernel. I don't pretend to be an expert on how ndiswrapper works, but from what I gather it's quite similar to FUSE; i.e. it has a kernel driver, but the actual imported drivers (be that ntfs-3g in FUSE or a Windows wireless driver in ndiswrapper) run in user space.

I'm not sure if the same would be required if going down the route of a totally universal driver set. Probably not. I'm not a kernel developer, so I'm not really in a position to comment hehehe

"Actually my gut instinct is to say the opposite may be more of a concern: how would a microkernel incorporate these drivers?"

I don't really follow your line of thinking there. I'm not saying you're wrong (I really don't want to come across like I'm knowledgeable here, because I'm really not!) but I'd appreciate it if you could elaborate a little more please ;)

"Obviously the microkernel's goal is to isolate the drivers from one another; would it be able to jail the drivers and still have them work? That depends on how they're written. The standard would have to be very clear about how drivers could interact with the system: no direct manipulation of the GDT or interrupt tables, and drivers would need to request permission to access ports instead of assuming they're running in ring 0. They'd need standard ways to coordinate memory mapping. These murky details all need to be ironed out for sure, but with a well-defined standard, a good reference implementation, a robust test suite, and a certification process, we should have quality drivers that work everywhere without worrying about OS-specific quirks. I don't think an existing operating system would need too many changes (assuming its drivers were already modular and self-contained). It wouldn't be too different from writing a new OS-specific driver for a new piece of hardware, only this particular OS-specific driver would be capable of driving all hardware supported by the shared driver standard."

I might be saying something really stupid here, so please forgive me; but if the existing architecture has drivers written in a modular / self-contained way, then wouldn't that be a hybrid kernel?

I think I have a basic grasp on all this (I did experiment with writing my own kernel many years ago), but I'm definitely no more experienced than a curious n00b. So I apologise if I'm making no sense there.

Reply Parent Score: 2

RE[4]: The hardest part
by Alfman on Wed 19th Sep 2012 13:43 in reply to "RE[3]: The hardest part"


"I might be saying something really stupid here, so please forgive me; but if the existing architecture has drivers written in a modular / self-contained way, then wouldn't that be a hybrid kernel?"

Oh, I see what you are thinking. Instead of explaining it in my own words, I'll quote a fairly decent Wikipedia article on the matter:

"A monolithic kernel is an operating system architecture where the entire operating system is working in the kernel space and alone as supervisor mode."

"Modular operating systems such as OS-9 and most modern monolithic operating systems such as OpenVMS, Linux, BSD, and UNIX variants such as SunOS, and AIX, in addition to MULTICS, can dynamically load (and unload) executable modules at runtime. This modularity of the operating system is at the binary (image) level and not at the architecture level. Modular monolithic operating systems are not to be confused with the architectural level of modularity inherent in Server-Client operating systems (and its derivatives sometimes marketed as hybrid kernel) which use microkernels and servers (not to be mistaken for modules or daemons)."

In short, a hybrid or microkernel differs in that it uses the CPU's protection mechanisms to shield pieces of the kernel from each other. This typically has further implications, like microkernel modules needing to communicate via IPC instead of hooking into each other more directly via dynamic linking or function pointers. But either kernel style can have pluggable modules (similar to DLLs).

Reply Parent Score: 2

RE[5]: The hardest part
by Laurence on Thu 20th Sep 2012 07:18 in reply to "RE[4]: The hardest part"

I see what you're saying. I guess even if there weren't a technical limitation, there might still be a political one: to have a universal driver format, you'd have to have a universal binary format. You raised a good point about how Windows uses DLLs and Linux uses .ko's. So on one OS PEs are the preferred binary format, and on the other (Linux) ELFs are.

I'm fairly certain I read somewhere that the Linux kernel is written in such a way that it can support other binary formats (in fact a.out is still natively supported), but the question is, would Linus, Redmond, or any of the other kernel devs want a "foreign" (for want of a better term) executable format to be supported in kernel space? Maybe I'm just being naive or overly critical, but I can't see that happening.

You did also raise the point about compatible source code, but Linux has a hard enough time getting source code for things like 3D graphics acceleration and wireless chipsets, so I couldn't see any universal model working unless it supported closed binary blobs.

I don't mean to be pessimistic as I think your idea is a great one. I'm just trying to understand the logistics of it all ;)

Reply Parent Score: 2