BareMetal OS now supports TCP/IP by way of a port of lwIP, the lightweight TCP/IP stack originally written by Adam Dunkels for embedded devices.
BareMetal is a 64-bit OS for x86-64 based computers. The OS is written entirely in Assembly, while applications can be written in Assembly or C/C++.
BareMetal boots via Pure64 and has a command line interface with the ability to load programs/data from a hard drive. Current plans for v0.7.0 call for basic TCP/IP support, improved file handling, as well as general bug fixes and optimizations.
What advantages would there be in using this over something like FreeDOS? I get the usage in HPC, but they also name education and embedded, and I just can’t think of any advantage in those contexts.
If you are doing an embedded project where every byte counts, you 1) probably aren’t going to need 64-bit, and 2) Linux and FreeDOS are better supported. And education? Unless there is some other use, like a “how to build your own OS 101” class (where, again, the simpler FreeDOS or better-supported Linux would probably be better), I just can’t picture a 64-bit CLI OS having much use, not when there are Linux distros like Puppy Linux that will run on 12-year-old hardware just fine.
So is there an angle I’m missing here?
Same could be said about your useless contribution. Yes, you are indeed missing the point, perhaps because you didn’t even dare to visit the BareMetal OS page and read the author’s original intent.
FreeDOS is 16-bit; a DOS4GW layer could be proven enough to reach 32 bits, but nowadays even ARM provides 64-bit processors, which permit hypervisors.
Otherwise, as a base for a new 64-bit GRUB-like OS selector with native WoL and PXE loading, that’s fine.
Next time, don’t forget to switch your brain on, and pay the Imaginarium of Doctor Parnassus a visit
Kochise
From the site, here is what they say about Education – “Provide an environment for learning and experimenting with programming in x86-64 Assembly as well as Operating System fundamentals.”
Is there some kind of “how to build your own OS 101”? Well yes, actually. There are a great many of them. I myself would be pretty upset if I enrolled in one in 2014 and they tried to teach me the internals of a 16-bit, single-tasking interrupt handler (DOS). Linux is great, but hardly the “simple” system you might want to use in education settings.
Also, if your goal is x86-64 assembly, is DOS really your environment of choice? You do know that it operates in real mode, right? The BareMetal OS code itself (written entirely in assembly) is probably a fantastic resource for learning (although I have not looked at it). Certainly, it must be better for that than the FreeDOS or Linux C code.
Frankly, your comment confuses me greatly.
It is 64-bit and assembly.
Aha. I’ve done an embedded project with 4GB of RAM and a lot of fixed-point mathematics, so yes, you can well benefit from 64 bits.
Yes indeed, you are missing many points:
– doing an assembly OS is fun.
– 64-bit is the native word size of x86 CPUs going forward, so why stick to something smaller?
– one intention (AFAIK) is to have a small multi-CPU OS (note: not multi-tasking), and when small is the goal, Linux is the wrong choice.
– in terms of education: it is for sure nothing for students, but for professionals who want to learn new stuff.
– BMOS is not intended to replace a GPOS like BSD or Linux.
BTW: I am in no way related to the Baremetal OS author. But I like the new OS approach.
Examining Linux in a “how to build an OS” type class is a terrible idea, simply due to the vast size of the project. OpenBSD would be better; an older version of Minix probably even better than that. This is because more mature operating systems have lots of performance hacks and compatibility hacks that make the actual effective design less clear from an education standpoint.
But, as far as use in an embedded (or perhaps even HPC) context where performance matters, it would be much better than FreeDOS, which is 16-bit.
Running 64-bit code on FreeDOS would require switching between 64-bit long mode and 16-bit real mode every time any of the DOS services are needed; if any of the DOS services are 32-bit, well, that’s extra context switching.
Plus, long mode has far more general-purpose registers available than the handful you get in real mode or protected mode (32-bit), which makes a significant improvement in performance AND ease of use compared to old x86. If you’re using less than 4GB of RAM, then depending on your code, the benefit from the extra registers could easily be larger than the penalty from longer word sizes. If you need more than 4GB of RAM (well, 3GB in practice), you absolutely need 64-bit. 32-bit + PAE is only fast enough when your alternative is to swap to disk.
Yes: the one where you don’t know anything about this subject but still like to write stuff.
FreeDOS is a much more complicated 16-bit system written in C. Linux is a vastly more complex operating system written in C. And 64-bit code needn’t be larger than 16-bit code _in_a_practical_system_.
With that said/written I don’t see much use for an assembly language operating system except for education/entertainment (of the developer(s)).
That still doesn’t explain my question which is WHY U NEED 64BITS IN ASM???????
In HPC yes, you use large datasets, in the other 2 contexts? You have Kolibri, Minix, FreeDOS, a whole bunch of choices that would seem better suited to the task.
Oh and Kochise really needs to take a Midol, just saying.
If you want to use 64 bits at the higher level without context switching from 16/32 bits, better to start the whole system in 64 bits and stay that way.
But this requires Rtfm and Stfu, not Midol…
Kochise
bassbeast,
“That still doesn’t explain my question which is WHY U NEED 64BITS IN ASM???????”
The question of bitsize (16/32/64) is orthogonal to the question of language (c/asm). The reason for using assembly on 64bit is the same as using it on 32bit or 16bit – direct access to the CPU. Whether asm is actually the best choice is debatable, however it is a separate factor from bit size.
As for bit size, there is absolutely no doubt that we want to use an OS with the same bit size as the application. Otherwise we end up with an OS that cannot properly manage resources, system calls that have to be marshaled back through legacy CPU modes, indirection via specialized low-memory buffers, much more difficult debugging, etc. This isn’t a new problem; it’s something 32-bit developers had to deal with on DOS. However, now the problem is even worse, because long mode (amd64) explicitly dropped support for vm86 as well as segmentation.
Since none of those are 64-bit OSes, none of them are great choices for 64-bit development.
Edit: I found this, a 64-bit port of FreeDOS. It’s not clear that it ever got off the ground. Assuming they care to address backwards compatibility, they would need to support all the modes (16, 32, and 64 bit), but that doesn’t alleviate the incompatibilities between modes (i.e. how is a 64-bit app going to call a 16-bit driver?). Dropping support for 16-bit would be the easiest option; however, that kills off one of the biggest use cases of FreeDOS for me today (flashing new firmware to hardware).
http://sourceforge.net/projects/dos64/
IMHO it’s better to start with an OS that doesn’t have a legacy problem.
Because x86 sucks in 16- and 32-bit mode.
In 64-bit mode you have lots of registers to play with, and more instructions that aren’t available in the other modes.
Is that good for you?
Would be awesome to run Node.js on this. Is it POSIX compliant?
Thank god, no. Why, for heaven’s sake, is there always the question about “POSIX”?
Because POSIX compliance makes it easy to get your software up and running, since there’s a buttload of tools available for POSIX systems.
But there are enough POSIX-compliant systems already: all the UNIXes, Linux, the BSDs, mainframes, and even a deprecated Windows user-mode POSIX runtime.
POSIX is cool, but there are things in it that are stupid; case-sensitive file names, for one. If the reason for that is simply that it keeps the code simpler when it doesn’t have to check that a ‘w’ is the same as a ‘W’, then yeah, okay, I sort of understand how that’s needed in the few situations where resources are so tight that it matters. But if somebody actually thought it was a good idea to have “Readme” and “readme” in the same directory, well, that person deserves to lose a couple of fingers.
It’s the 21st century and somebody is making a case against case sensitivity with a straight face. Wow… 🙂
For file names? Absolutely.
Besides potentially confusing the user and making file names with lots of upper-case characters a pain to type, why is it better?
Long ago, some scientists were among the few that needed computers and could use them. For example, people dealing with chemical compounds benefited from distinguishing “CO” (carbon monoxide) from “Co” (cobalt) in the filenames.
There were some other users that needed to distinguish some words in the filenames, for example “Digital” (the company) from “digital” (the adjective), which was very useful when searching files, and so on.
Nth Man,
Outside of special circumstances I generally don’t think it’s good to have multiple files where the only distinction is in letter case. Having a case insensitive file system doesn’t (strictly) mean you can’t perform a case sensitive search.
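A minimal sketch of that point in Python (the file names here are hypothetical): a case-insensitive store still preserves the spelling names were created with, so userspace can filter either way on top of it.

```python
# A case-insensitive filesystem still hands back the stored spelling,
# so a case-sensitive search can be layered on top of it in userspace.
names = ["Word.doc", "word.doc", "Password.txt"]  # hypothetical listing

def search(names, term, case_sensitive=True):
    """Return the names containing `term`, with or without case sensitivity."""
    if case_sensitive:
        return [n for n in names if term in n]
    term = term.lower()
    return [n for n in names if term in n.lower()]

print(search(names, "Word"))         # ['Word.doc']
print(search(names, "Word", False))  # ['Word.doc', 'word.doc', 'Password.txt']
```

Note that the case-insensitive search even picks up “Password.txt”, since “password” contains “word”; substring semantics are a separate question from case semantics.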
The thing I don’t like about (mandatory) case sensitivity in file systems is that it affects all downstream components as well. Consider the following:
http://forum.nginx.org/read.php?11,83120
This problem has affected me, and unfortunately it cannot be fixed from userspace, because the kernel file system itself is forcing the userspace code to have case-sensitive semantics even when the case is not intended to be significant.
Edit: of course there’s no right or wrong answer, but I’m just offering an alternative point of view.
> Having a case insensitive file system doesn’t (strictly)
> mean you can’t perform a case sensitive search.
I’m curious. Let’s talk about the particular case of Windows: does someone know if Windows users are able to look for filenames that have the text “Word” inside and get those results, instead of having them mixed with the filenames that contain “word”?
> The thing I don’t like about (mandatory) case sensitivity
> in file systems is that it affects all downstream
> components as well. Consider the following:
> http://forum.nginx.org/read.php?11,83120
The “find” command in Linux has the “-name” parameter, and also the “-iname” parameter for case-insensitive searches. Nginx could also be prepared for people who do not follow capitalization rules when writing. On a somewhat related note, there were webmasters who shared a directory over Samba to a single “user”, etc., and then used its mount point so the web server’s access to those files was case-insensitive. Go figure 🙂
> Edit: of course there’s no right or wrong answer, but I’m
> just offering an alternative point of view.
Yes, you talked about the “(mandatory) case sensitivity in file systems” although the “(mandatory) case insensitivity” also has its consequences, of course.
Nth_Man,
Why not? Applications that perform directory scans don’t really exhibit the problem. For example, in a unix environment ported to windows, “find” command works just as it does under unix with regards to -iname and -name.
Well, the trouble is that when the syscalls are case sensitive, you HAVE to pass in the case-sensitive names. If you don’t have the case-sensitive name, then you HAVE to perform a directory scan before being able to open the file, which is a lot more work. This is somewhat more difficult to program, but the real dealbreaker is the overhead, which is not acceptable for high performance daemons like nginx.
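A sketch of that extra work (in Python for brevity; the file names and directory are made up for illustration): on a case-sensitive filesystem, honoring a case-insensitive request means a linear scan of the directory before the file can even be opened.

```python
import os
import tempfile

def resolve_ci(directory, name):
    """Find the on-disk spelling of `name` in `directory`, ignoring case.

    This O(n) scan per lookup is the overhead a case-sensitive kernel
    forces onto userspace, and it can race against concurrent renames.
    """
    wanted = name.lower()
    for entry in os.listdir(directory):
        if entry.lower() == wanted:
            return entry
    return None

# Hypothetical demo directory with one mixed-case file:
demo = tempfile.mkdtemp()
open(os.path.join(demo, "Photo.jpg"), "w").close()
print(resolve_ci(demo, "photo.jpg"))    # Photo.jpg
print(resolve_ci(demo, "missing.jpg"))  # None
```

A kernel with case-insensitive semantics would do the equivalent lookup once, inside its directory index, instead of forcing every daemon to re-implement it.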
So to make http://MyStore.com/NewProducts/Photo.jpg equivalent to http://mystore.com/newproducts/photo.jpg, a seemingly simple requirement, you have to open up a Pandora’s box of workarounds (i.e. mod_speling as used by Apache) and live with the inherent overhead and atomicity problems.
You could lowercase the entire URL and prohibit upper-case letters in the filesystem. It would technically work, since lowercase URLs will always match lowercase file names, but it seems rather excessive to give up upper-case characters just to work around this problem, especially if the names are exposed to users and upper-case characters are actually desired.
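That lowercase-everything workaround is simple to sketch (Python; the docroot and URL are hypothetical): force every request path to lowercase, and require that every file on disk is stored with a lowercase name.

```python
from urllib.parse import urlsplit
import posixpath

def url_to_disk_path(docroot, url):
    """Map a request URL to a file path under an all-lowercase policy.

    This only works if every file on disk is stored with a lowercase
    name; mixed-case names become unreachable, which is the trade-off.
    """
    path = urlsplit(url).path.lower()
    path = posixpath.normpath(path).lstrip("/")  # crude ../ guard
    return posixpath.join(docroot, path)

print(url_to_disk_path("/var/www", "http://MyStore.com/NewProducts/Photo.jpg"))
# /var/www/newproducts/photo.jpg
```

No directory scan is needed, because a lowered request path always matches the stored name exactly; the cost is the upper-case prohibition on disk.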
> Why not?
The answers at https://superuser.com/questions/266110/how-do-you-make-windows-7-ful… cite a lot of factors in a more comprehensive way 🙁 .
> > let’s talk about the particular case of Windows
> Applications that perform directory scans don’t really
> exhibit the problem. For example, in a unix environment
> ported to windows, “find” command works just as it does
> under unix with regards to -iname and -name.
The original question was about the particular case of Windows, thinking of a normal user with normal means who, e.g., wants to look for “Access” but doesn’t want to get the results for the word “access”. But your example of a “unix environment ported to windows” using the “find” program from Unix was something that I liked :-). Good move 🙂
> […] the real dealbreaker is the overhead, which is not
> acceptable for high performance daemons like nginx.
Mmm… If we mix users with inconsistent criteria (call this file “tree.jpg” here and “Tree.jpg” somewhere else) + Nginx not ready to cope with it + a desire to get maximum speed + users not willing to follow capitalization rules + problems like “they are not going to always use lowercase characters” + etc., we’re going to have several problems :-(. Anyway, the users who depend on you can be thankful that at least they have you, someone who worries about looking into a lot of options, like the ones you quoted in your last message.
Nth_Man,
I was pointing out that case-insensitive file systems (including NTFS on Windows) are capable of case-sensitive searches. Whether MS chooses to expose such functionality in their userland tools is a different matter. Case sensitivity seems fairly unimportant to most people; i.e. it’s rarely if ever that I want to search for xyz.doc and not include Xyz.doc. So MS not implementing a case-sensitive search in Explorer doesn’t seem like a huge loss to me (YMMV).
Nginx already supports case insensitivity, but the problem is in the filesystem. The fact of the matter is that many website operators want to use case-insensitive paths. Can you give a good reason why the administrators should NOT be allowed to set the case policy for themselves in the data partitions where case sensitivity is completely undesired?
Still, it’s disappointing that one needs to ditch their native file system to achieve the desired results. Should the Linux community really consider this an acceptable long-term solution?
> Can you give a good reason why the administrators should NOT be allowed to set the case policy for themselves in the data partitions where case sensitivity is completely undesired?
“No. Can you?” It looks like there is a misunderstanding there. This way this thread will never end; it has reached its sixth nesting level, and that’s when I stop reading and writing in it. Good bye.
Those are remarkably narrow use cases.
Also, some users like to look for filenames that have the text “Word” inside and get those results instead of having them mixed with the filenames that contain “word”. The same happens when searching for “Writer”, “Wine”, “Access”, “Excel”, etc.
> Those are remarkably narrow use cases.
Let’s notice that I said “for example”, not “those are all the cases”.
Using personal preference to restrict the overall programmability of a system in an arbitrary way leads to a certain lack of scalability.
I can see how an arbitrary limitation, imposed in order to establish a determined behavior in the system, may be OK in products with very specific application/audience targets. But for something as generalized as an OS or API, that’s just asking for trouble, because there is a very high probability that you completely missed a very common scenario that may be crucial for lots of potential customers.
It’s the 21st century, and computing technology and capabilities have changed a hell of a lot since the 60s/70s.
tylerdurden,
Well, case sensitivity is in and of itself an arbitrary limitation for some users/applications.
The point isn’t that one is better than the other, but rather that system administrators should be able to decide this policy for themselves, rather than having it hard-coded arbitrarily into the kernel/fs.
This post talks about exactly this topic.
http://www.krisdavidson.org/2010/11/25/mounting-a-case-insensitive-…
They actually recommended switching to a vfat or ntfs partition. It certainly feels wrong to me to have to recommend a file system based on case sensitivity rather than on more technical characteristics, but there you have it: that’s what hard-coded policy makes you do.
As a poster in the comments wrote, IBM anticipated this very duality with JFS, which can be formatted as case sensitive or not. It seems to me the best solution would be for more file systems to take JFS’s lead in putting case-sensitivity policy in the hands of the administrator.
Hi, author of the OS checking in.
I wasn’t expecting to see this get announced here but it’s good to see the publicity anyway! There is still some work to be done before the port is released to the public.
While an internal IP stack would be preferred, this is a pretty good proof of concept.
-Ian
Can I just ask you why LODS*/STOS* are so commonly used? In many cases in such a way that they are both bigger and slower than simply using MOV instructions?
OnT: Congratulations on the TCP/IP stack – the majority of small operating systems never get to that stage. Next up USB?
Good question! I started working on the OS while I was still learning Assembly. I also tried to keep things very simple to improve the readability of the code. The source code is slowly being updated to use faster opcodes.
As for USB support… it’s not on the roadmap. The OS is being targeted mainly for server applications so USB isn’t that much of a requirement.