MicroWeb is a web browser for DOS! It is a 16-bit real mode application, designed to run on minimal hardware.
This is a thing that exists. Incredible.
Interesting as a pet project. It seems appropriate to mention that there is another DOS browser called Arachne that I’ve used, which is a bit more sophisticated, supporting images, CSS, VESA modes, etc.
https://en.wikipedia.org/wiki/Arachne_(web_browser)
I think it needs a 386 though. But by this point I doubt that’s a limiting factor for users of DOS who are probably running and/or emulating much more modern hardware.
Arachne cheats by requiring the presence of 1MB of EMS or XMS to break the conventional memory barrier (in addition to the 500KB of conventional memory it also requires). On real hardware, EMS must be purchased as an add-on ISA card (and hence its existence cannot be taken for granted on most machines) and XMS is only supported on 286 CPUs and above.
MicroWeb runs on an 8088. This means the programmer has at most 640KB of conventional memory to work with (the README explicitly states that no EMS or XMS is required).
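For illustration, this is roughly what the standard XMS installation check (INT 2Fh, AX=4300h) looks like from real-mode C. It’s a minimal sketch, assuming a Watcom/Borland-style <dos.h> with int86(); it is not code from MicroWeb itself.

    /* Minimal sketch: XMS installation check via the DOS multiplex
     * interrupt. AL=80h on return means an XMS driver such as
     * HIMEM.SYS is loaded. Assumes a Watcom/Borland-style <dos.h>. */
    #include <dos.h>
    #include <stdio.h>

    static int xms_driver_present(void)
    {
        union REGS r;
        r.x.ax = 0x4300;        /* XMS installation check */
        int86(0x2F, &r, &r);    /* multiplex interrupt */
        return r.h.al == 0x80;  /* 80h => driver installed */
    }

    int main(void)
    {
        if (xms_driver_present())
            printf("XMS driver present - extended memory can be used\n");
        else
            printf("No XMS driver - conventional memory (640KB) only\n");
        return 0;
    }

On an 8088 with no HIMEM-style driver loaded, this check fails, so a browser targeting that hardware has nothing beyond conventional memory to fall back on.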
BTW, neither Arachne nor MicroWeb is meant for any serious browsing. They exist mostly to show off old hardware. MicroWeb’s differentiator is that it can show off really old hardware, down to the first IBM PC.
kurkosdr,
I wouldn’t call it “cheating”; it’s just a different target. Consider that when Arachne was first released, new DOS computers would have had Pentium Pros, and 386 machines were already 11 years old. You might target older CPUs if you have something to prove, but just as a matter of practicality the computing world had long since evolved well past the days of low conventional memory.
Yes, I read that as well, but it’s going to severely limit what this browser can do. In principle the project could increase capacity by “swapping” to/from period-appropriate bit-banged PIO floppy controllers, but even those only held a few hundred KB. There’s no doubt the memory limits of the 8088 will hold back the project, but it really comes down to the author’s goal. Maybe he’s ok keeping it as a basic text markup browser.
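For a rough sense of what that kind of “swapping” would amount to, here is a minimal sketch of spilling a buffer to a scratch file and reading it back on demand. The names are illustrative; this is not anything MicroWeb actually does.

    /* Minimal sketch of a crude "swap to disk" scheme: write a buffer
     * out to a scratch file so its memory can be reused, then read it
     * back when needed. Illustrative only. */
    #include <stdio.h>

    /* Spill len bytes to a scratch file on the target drive. */
    static int page_out(const char *path, const void *buf, size_t len)
    {
        FILE *f = fopen(path, "wb");
        size_t written;
        if (!f) return -1;
        written = fwrite(buf, 1, len, f);
        fclose(f);
        return written == len ? 0 : -1;
    }

    /* Read the spilled bytes back when they are needed again. */
    static int page_in(const char *path, void *buf, size_t len)
    {
        FILE *f = fopen(path, "rb");
        size_t got;
        if (!f) return -1;
        got = fread(buf, 1, len, f);
        fclose(f);
        return got == len ? 0 : -1;
    }

Every page-out/page-in pair costs a seek and a transfer on a floppy, so even where it works it is painfully slow and only buys a few hundred KB.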
You can say that today, but the practical applications for Arachne were more obvious back then. DOS users might have wished to download files without having to reboot between DOS and Windows. This was my main use case, and Arachne let you do exactly that! In fact some computers still didn’t have Windows installed at all. Today I’d agree with you that there’s little need for them, but I would argue that Arachne did have practical uses back then, like downloading maps for Quake, which I played in DOS 🙂
I had actually used Arachne back in the day. It was better than Lynx for having image support.
Now of course, even the most basic web pages are impractical to use on Arachne or text-only browsers. They are huge! Just the google.com homepage is about 2 megabytes, with a lot of heavy JavaScript, not to mention tons of anchors, making keyboard navigation really difficult. (And google.com is actually trying to be lean.)
Interesting proof of concept.
It would be nice if web standards people took a step back and reconsidered the monstrosity they created, not to mention how traumatic it is to view the web without an ad blocker.
I can’t remember the earliest Watcom compiler version my portability layer covered. I know it was fine with Windows; I can’t remember if it covered versions back to DOS.
It crossed my mind to write a software renderer. Scalability wasn’t straightforward, since you have to deal with translating data sets depending on whether you were doing 3D or 2.5D. There are some techniques you can use, as well as content editor and content constraints. Shading models can be performant, but on older machines texture mapping can be slow. At that level you’re getting into machine code.
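To give a concrete sense of why texture mapping hurts on old machines, here’s a minimal sketch of an affine texture-mapped span inner loop in 16.16 fixed point. Texture size and buffer layout are illustrative, not taken from any particular engine.

    /* Minimal sketch: affine texture-mapped span in 16.16 fixed point. */
    #include <stdint.h>

    #define TEX_W 64
    #define TEX_H 64

    void draw_span(uint8_t *dst, int count,
                   const uint8_t tex[TEX_H][TEX_W],
                   int32_t u, int32_t v,    /* texel coords, 16.16 fixed point */
                   int32_t du, int32_t dv)  /* per-pixel steps along the span */
    {
        while (count--) {
            /* Per pixel: two shifts, two masks, a 2D index, and a store. */
            *dst++ = tex[(v >> 16) & (TEX_H - 1)][(u >> 16) & (TEX_W - 1)];
            u += du;
            v += dv;
        }
    }

Even this “cheap” affine version does several operations per pixel, which is why inner loops like this ended up hand-written in assembly on period hardware.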
I think we’re past that, sadly, but I agree in spirit.
The improvements to the web in the last 15 years or so seem to just be new ways to serve us sponsored content.
Or perhaps I’m just getting old.
As I said below, the improvements to the web in the last 15 years or so were made with the intention of making SWFs and Java applets obsolete (which they did).
Try coding something like YouTube (complete with a player at full feature parity with YouTube’s) with web standards from 15 years ago. No Flash or Java allowed.
So, the improvements seem like bloat in a vacuum, but considering what they replaced (SWFs and Java Applets), I understand the reason for their existence.
Web standards are an API now. This means that web standards come with the complexity of an API. It also means that you have API levels (this is what quirks modes such as “strict”, “transitional”, “standards”, etc. are; they are API levels like Android’s API levels, but with cute names instead of numbers).
Basically, embarking on the task to implement the web standards is like embarking on the task to re-implement the Android API.
You can complain about how HTML documents are supposed to be documents, and I promise to lend an ear, but I doubt anyone in power will listen. Modern web standards were designed with the intention of making SWFs and Java applets obsolete (which they did), and your opinion or mine doesn’t really matter.
kurkosdr,
I agree with your points on the subject, although I think about it from the opposite perspective as well: HTML’s document heritage makes things difficult for some applications. HTML obviously treats everything as text markup, but simple things like columns and positioning panels can be surprisingly frustrating when you use CSS. It can be like fitting square pegs into round holes. This is doubly frustrating if you are a programmer and know exactly how to position things with simple JavaScript, but that goes against the spirit of HTML and CSS. As impressive as HTML is, sometimes I think we use it simply because it’s there and not necessarily because it’s the best tool for every job.
Yes, HTML is now pretty much “legacy”, but I don’t think it will be easy to change.
Some web applications will use Canvas directly. However, that requires duplicating a lot of functionality: popup menus, copy-paste, accessibility, keyboard navigation, …
But the same can be said for x86 opcodes, the C subset of C++, non-generic C#/Java collections, CSMA/CD in Ethernet, and many other technologies we use every day. If we were to redesign them today, things would be different, but also incompatible.
As always: https://xkcd.com/927/
sukru,
Agreed.
I don’t think we need to go directly to canvas; ideally, widgets would be the way to go for UI design. I really miss environments like VB and other tools for rapid UI design. And while ASP.NET has tried to bring that to the web, IMHO it’s bad in large part because the integration between the server side and a client-side browser is so jarring. For documents HTTP/HTML works well, but for UI purposes I wish we could throw away the HTTP postback model entirely. It is inefficient for both the client and the server, and it’s just clumsy to work with UIs that need to be regenerated constantly. You can technically implement whatever you want in JavaScript, but it’s built on top of a slow DOM object model that lacks custom controls, lacks encapsulation, and lacks stateful capabilities beyond regenerating everything on every screen. HTML is nice for some things, but in other cases it seems like we’re using it because it’s there rather than because it’s good at the job. Still, there isn’t much point complaining about it, because I fully agree that it’s not going to change. At least having a common standard is better than proprietary solutions.
I’d say x86 isn’t really that important outside of Windows. If you run a Linux desktop on ARM, you may not even realize it, since most programs will compile and run just fine on ARM.
But anyway, to get to your point, I agree there are a lot of legacy engineering decisions baked into our technology. For better or worse they continue to shape the future. Had engineers at the time known that their work would set the standard for many decades into the future, they probably would have done a better job of future-proofing. Things like 32-bit addresses, 1500-byte packets, and unencrypted data are examples of things that were good enough for the time but shortsighted for the future.
It’s mildly interesting as a DOS-specific reinvention of a wheel that already exists (and is much better) elsewhere. You can easily run much more fully featured text-based browsers on ancient ISA 10 Mbit or modem-based equipment today using Linux. Shoot, you might even get some things that are “ok” at 800×600 fully graphical.
I know because I’ve done this.
A minimally effective platform for displaying modern interactive rich media web content from first principles would be an interesting exercise. That would blow a few smug vested interests out of the water.
My framework had an XML parser and a URL parser. It was on my roadmap to build an in-game browser back when browsers were fairly modest, as well as in-game rich content like a book reader that let you read real books you could pick off a shelf in-game. Because my VM could process scripts either internally or as an external desktop application, the book reader could have been used as an ordinary standalone application. I thought it was cool.
Game developers blast high-performance content at the screen all the time. Web infrastructure developers, not so much, and it shows.