The Register has an article about persistent storage for computers using magnetic memory. This triggered me to recall Genera, the Symbolics Object Oriented OS, and what an amazing system could be built by pulling together an object-oriented OS with persistent storage, where there would be no need for files and pipes and everything could know about everything else. I’m way out of my depth here, but come on someone, build the future!
Object Oriented OS and Magnetic Memory
2003-06-12 OS News 33 Comments
and MRAM….in 10 years when we can get a fat chunk of this into the 20 GB range then we can talk about dumping Hard drives.
Yeah but in 10 years, 20 GB will be roughly equiv to what 20 MB is now…
It’ll be laughable as a hard drive replacement, just as flash RAM is today. I can see this being very beneficial for small-footprint devices like PDAs and network-appliance-type machines
“MRAM essentially comprises a grid of microscopic magnets rather than the transistor-based memory cells used to create standard SDRAM.”
I thought SDRAM was capacitor-based. Can anyone clear this up for me?
I believe SDRAM uses both transistors and capacitors; the transistors are what actually store the data, and the capacitors allow this stored data to be updated
Anyway, it works something like that… I’m sure this is an oversimplification
Someone correct me if I’m wrong. IANAE
Correct me if I’m wrong, and I’m probably dating myself by saying this, but doesn’t the concept of MRAM sound a lot like the ferromagnetic core memory used in the immense room-sized computers of the late 70’s? If I remember properly, the core elements were strung along a grid of wires and held a charge correlating to the binary value to be stored: 0 = no charge, 1 = positive charge… or something like that. I don’t think it was able to hold its state once the grid was discharged. Even still, dontcha think the engineers that thought this up could have come up with persistent memory centered around something a little more 21st century? *shrug* Doesn’t seem that fascinating to me.
SRAM (Static RAM) uses transistors for the actual storage: two cross-coupled inverters per cell, plus access transistors.
DRAM uses capacitors and needs periodic refreshing.
The definition of SDRAM:
SDRAM (synchronous DRAM) is a generic name for various kinds of dynamic random access memory (DRAM) that are synchronized with the clock speed that the microprocessor is optimized for.
However, the name SDRAM is used in a much wider sense these days, but the fact still remains that common SDRAM modules are DRAM, whose storage element is a capacitor. As far as I know there are also a few transistors associated with each cell, although they are not part of the actual storage; they provide the access interface to the capacitor.
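To make the capacitor-stores / transistor-gates / refresh-restores division concrete, here is a toy software model of a single DRAM cell (hypothetical Python, purely illustrative — the constants and class are made up, not any real datasheet):

```python
class DRAMCell:
    """Toy DRAM cell model: a capacitor holds the bit as charge that
    slowly leaks away; the access transistor merely gates reads and
    writes; a periodic refresh re-reads and rewrites the bit before
    the charge decays past the sense threshold."""

    LEAK = 0.1        # charge lost per clock tick (made-up figure)
    THRESHOLD = 0.5   # sense-amplifier decision point

    def __init__(self):
        self.charge = 0.0          # the capacitor IS the storage

    def write(self, bit):          # transistor opens, capacitor charges
        self.charge = 1.0 if bit else 0.0

    def read(self):                # sense the remaining charge
        return self.charge > self.THRESHOLD

    def tick(self):                # leakage between refreshes
        self.charge = max(0.0, self.charge - self.LEAK)

    def refresh(self):             # read the bit out, write it back full
        self.write(self.read())

cell = DRAMCell()
cell.write(1)
for _ in range(4):
    cell.tick()    # charge has decayed but the bit is still readable
cell.refresh()     # restored to full charge
assert cell.read() is True
```

Skip the `refresh()` call for long enough and the bit is simply gone — which is the whole reason DRAM needs its refresh cycles and MRAM, being magnetic, does not.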
The magnetic memory arrays you are talking about had the size of a chessboard, even for a small memory. This means that a 64-bit memory was very large.
The ones in question here have cell sizes on the nanometer scale, which makes several megabytes of memory possible in small packages in integrated circuits, like ordinary memory.
“This triggered me to recall Genera, the Symbolics Object Oriented OS, and what an amazing system could be built pulling together an object oriented OS with a persistent storage. ”
Apparently the idea of an OOO (Object Oriented OS) is appealing.
Only a few more decades.
It’s a shame that Symbolics failed. But their (financial) failure seems to have had more to do with expensive hardware than any failings of the operating system design. So my question is this: does anyone know of any Lisp machine based operating systems available for x86 systems? If not, can anyone give insight into how one could be built?
What do you know? Less than 24 hours and my wish has come true.
> I’m way out of my depth here
Yeah, unfortunately you are…..
> This triggered me to recall Genera, the Symbolics Object Oriented OS
which afaik wasn’t OO – CLOS wasn’t invented until much later in the history of Lisp. The Lisp used at the time had many interesting properties, however OO as we know it today wasn’t one of them.
> and what an amazing system could be built pulling together an object oriented OS with a persistent storage.
Well, all desktop operating systems support persistent storage (hard disks). What I was talking about was transparent object persistence – i.e. the idea of “saving” and of “files” is removed; instead there are only objects that are swapped out to disk (or even the network) in much the same way that Linux pages out memory in a transparent fashion.
So, you don’t need to spend ages writing serialization code, you just create an object. The OS will automatically store it in the background. Of course it’s more complex than that, you can tag objects as temporary and so on, plus you can do some cool things with FS tech as the objects are all identified by GUID.
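A rough sketch of what that might feel like from the programmer’s side (hypothetical Python using `shelve` and `uuid` — the store class, path, and `temporary` flag are all invented for illustration; Genera’s actual facilities were nothing like this):

```python
import shelve
import uuid

class PersistentStore:
    """Toy object store: every object gets a GUID and is written
    through to disk automatically -- there is no explicit "save"
    or "file", just object creation."""

    def __init__(self, path):
        self._db = shelve.open(path)

    def create(self, obj, temporary=False):
        guid = str(uuid.uuid4())
        if not temporary:          # objects tagged temporary never hit disk
            self._db[guid] = obj
        return guid

    def get(self, guid):
        return self._db[guid]

    def close(self):
        self._db.close()

store = PersistentStore("/tmp/objects.db")
guid = store.create({"title": "my note", "body": "hello"})
assert store.get(guid)["title"] == "my note"
store.close()
```

The point of the sketch: the caller never serialized anything or named a file — it created an object and got back a GUID, and the object survives the process.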
> Where there was no need for files and pipes and everything could know about everything else
Huh? I think Genera did have files, though I’m not sure they bore much resemblance to the type we use today. But I’m not an expert on LispMs. Somebody who is should correct both me and Paul.
OO is not suitable for an OS because it cannot efficiently provide a base for all data models. OO is a complex and inconsistent data model and cannot, therefore, efficiently provide a foundation for data models that are simpler or more complex. It is not atomic enough. Also, there are too many conditions in the OO model, which reduces the opportunity for optimization and automation.
A better model might be something based on relational algebra and set theory. Sets of ordered pairs have this amazing ability to efficiently provide the building blocks for any other data model, yes even OO.
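As a toy illustration of that claim (hypothetical Python — the names `pen`, `world`, and `attributes_of` are made up, not any real system): an “object” can be flattened into a set of (attribute, value) pairs, and a whole object graph into a relation of (entity, attribute, value) triples, from which the object view is recovered by ordinary relational selection:

```python
# An "object" as a set of ordered pairs: (attribute, value).
pen = {("colour", "blue"), ("ink_level", 80)}

# A whole object graph as a single relation of
# (entity, attribute, value) triples.
world = {
    ("pen1", "colour", "blue"),
    ("pen1", "ink_level", 80),
    ("desk1", "holds", "pen1"),
}

def attributes_of(relation, entity):
    """Relational-style selection + projection: recover the
    'object' view of one entity on demand."""
    return {(a, v) for (e, a, v) in relation if e == entity}

assert attributes_of(world, "pen1") == pen
```

Nothing OO-specific was baked into the base model here — objects, references (`desk1` holds `pen1`), even inheritance could all be layered on top of plain sets of pairs.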
linked to by this OSnews story (wouldn’t you know it, after all, there is a reason we all love this site)
http://www.osnews.com/comment.php?news_id=752, indicates that LispM’s were in fact used for OOP.
Oh, didn’t see this post before:
It’s a shame that Symbolics failed. But their (financial) failure seems to have more to do with expensive hardware than any failings of the operating system design.
Well, you could credibly argue that the LispM wasn’t really up to it in the real world. The main problem was that Lisp was pretty inefficient due to its high level of abstraction (it’s an outgrowth of the lambda calculus). The need to tag everything slowed it down compared to programs written in C, despite it being much nicer to work in. To try to alleviate that problem, special CPUs were built with support for this tagging built in; however, they were obviously more expensive than the generic CPUs being built at the time…
So my question is this, does anyone know of any lisp machine based operating systems available for x86 systems
You could (and maybe still can) get expansion cards for old-skool Macs (System 7 era, I think; they use NuBus). You can’t buy them any more as far as I know; you’d have to use eBay. Although originally LispMs were entire machines, eventually they were reduced to what were essentially “Lisp accelerators” that took advantage of the Mac hardware. You could boot into the OS using them (there were several competing LispMs, I think – well, at least 2).
So, it would certainly be possible to build such a thing for x86; modern hardware is fast enough to make it more than possible. Not sure that you’d want to, though. The sheer number of years and resources thrown at the problem since then means that despite Lisp’s elegance, the frameworks provided by things like Java/.NET/KDE/GNOME etc. blow away anything that Genera could ever do. You have to look at it in the context of the time it was built.
If not can anyone give insight into how one could be built.
Take a look at Emacs for an example of a simple Lisp based environment. ELisp is a very old form of Lisp, of course, not really representative of the modern-day language. It’s also rather limited – i.e. no threading, limited GUI support. BUT you can still write useful apps with it, and many people do.
If you wanted to try experimenting with truly out of this world operating systems today, I’d advise you to read the works of Fare over at http://tunes.org/, and investigate research languages like slate, http://slate.tunes.org/
PS – don’t go pester the authors of this stuff on IRC, like I did. You’ll get flamed
New languages and new OS concepts go hand in hand, you’ll find.
A better model might be something based on relational algebra and set theory. Sets of ordered pairs have this amazing ability to efficiently provide the building blocks for any other data model, yes even OO.
Yeah, but then you end up with RDF *shudder*
The main problem with this approach is how to represent it in a way that humans can usefully interact with. Dunno about you, but I’d hate programming by specifying sets of ordered pairs. I’d guess Lisp is semi-close to that.
In an article that I read recently they stated that OOP is defined differently by almost everyone who uses it. This might explain your insistence that it is unsuitable, while to others (including myself) it seems like the ideal paradigm for operating system development. Also, don’t forget that OOP doesn’t have to be used for every part of the system. Simply using it for defining device drivers would help build a much higher machine abstraction than current operating systems provide.
OK, I suck. It seems CLOS did indeed exist at that time. History has never been my strong point, despite my love for it.
So yeah, Common Lisp had (still has) quite a powerful object system, if one that’s rather inscrutable to people used to Java or C++.
It’s kind of like GObject in that the syntax wasn’t really meant for it, but on the other hand the syntax/structure of Lisp is a lot more flexible than C is…
What I would be interested in is what system you could set up for defining device drivers in a LispM-style system. Does Emacs (by the way, the language used in Emacs is Emacs Lisp; ELisp is an entirely different dialect) provide the ability to directly communicate with hardware? It would seem to me that using an object-oriented Lisp with hardware access would be a good way to get automatic hardware detection.
Thanks to Anonymous for posting that link. I’d read Fare’s article before, but not for some time. Good to read about this old stuff. Too bad… I seriously doubt any OS in wide usage will ever have such an uncompromising design.
Still…. that’s for a reason. The world is a messy place. And it’s not hard to produce a slick and integrated system when you control everything. I guess that’s why the LispMs were popular in academia but not so much in industry…
For those who doubt that OOP can be used to build a platform: explain why it’s not possible, given the knowledge that the real world we live in is able to handle extreme complexity and is based on decentralized responsibility. If the real world of objects that we live in can handle such complexity, then why shouldn’t we demand the same from our software?
I don’t doubt that OOP can be used to build a platform, but I don’t understand your reasoning. The “objects” of OOP software and “objects” such as a pen or table are merely homonymic.
“If the real world of objects that we live in can handle such complexity, then why shouldn’t we demand the same from our software?”
We do live in the real world of objects, but we don’t think in it. We learn, think, and understand in the world of emotion, or in the world of discourse, i.e. language. It is a fallacy to say that because we live in a world of objects, we should write in a language of objects.
Every person on this planet understands and perceives the real world of objects through a combination of emotion and language. I won’t even assume that we can create an emotional computer system, so addressing the languages that everyone on this planet uses to think: none of them are OO.
What each of them does have, however, is words representing ideas or objects, and connecting them are prepositions. Any data model that does not start and stop very close to the simple model of the prepositional phrase is not something upon which to build. After all, human language has built everything we know so far – unemotionally, of course.
How so? I mean, a pen has characteristics, right, and can be operated upon to change those characteristics. Isn’t that the point of an object? (Very basically put.)
In the world of objects that we live in, I can break a cup by dropping it on the cement, and because I did that, I can still expect that my bedroom will be the same as I left it. On the other hand, in the world of action based architecture, if I drop my cup on the floor and it breaks, then that might mean that when I go into my bedroom, all of the furniture is levitating.
I am talking about complexity. An action based design cannot handle extreme complexity, while an object oriented design obviously can. Look at the real world.
An action based architecture is centralized: behavior is tied to global data structures. An object oriented design is decentralized, because data and behavior are encapsulated.
I would like to have a decentralized and organic architecture. That way no single entity can control it. The current architectures are vendor architectures, they are centralized.
The entire point of device drivers is to do the abstracting itself. Writing a driver in OO would be stupid and pointless, as the hardest part of driver development is actually extracting the raw data from the hardware and manipulating it. An object oriented model will not help this at all; straightforward C is much better. What’s next, device drivers in .NET? How about Prolog? (actually, Prolog isn’t such a bad idea for another layer…keeping a database of all hardware data; just devise a system for converting raw data to Prolog facts and have the listener interact with the kernel…nah, that would be dumb.)
This is probably the most interesting and straightforward persistence OS.
Essentially, you have a large, persistent VM space.
The “whole” system is stored within that VM space.
On boot, several core pages are loaded, and execution starts. As state changes within memory, all of those pages are flushed to disk.
Should your machine suddenly lose power, the system picks back up at its last checkpoint. So, imagine you’re playing a zany game of Civ III or something and your little sister kicks out your power cord. You plug it back in, and not only is your Civ III game back where you left off (say, 10 seconds ago), but so is the MP3 song you were playing off of your hard disk.
The system had truly pervasive persistence, but it wasn’t in the world of modern networks. Broken network connections screw up a lot of things.
Another fellow built a similar system (custom hardware) with a persistent VM and a HUGE Smalltalk image. Used it to keep track of the audiophile record players he made and sold.
Smalltalks are as persistent as you are. As often as you save the image, your system is persistent. Lisp is similar, but folks save the image less often than in Smalltalk.
You can almost get there easily if you simply mmap a large image file and start making changes to your heart’s content. Things get nasty when the system crashes, however.
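A minimal sketch of that mmap trick (Python; the file name and size are hypothetical, and a real image-based system does vastly more bookkeeping — this only shows that mutated pages survive the process):

```python
import mmap
import os

IMAGE = "/tmp/image.bin"   # stand-in for the "large image file"
SIZE = 4096                # one page; a real image would be huge

# Create the backing file once, if it doesn't exist yet.
if not os.path.exists(IMAGE):
    with open(IMAGE, "wb") as f:
        f.truncate(SIZE)

# Map the file and mutate it exactly like ordinary memory.
with open(IMAGE, "r+b") as f:
    mem = mmap.mmap(f.fileno(), SIZE)
    mem[0:5] = b"hello"    # change to your heart's content
    mem.flush()            # "checkpoint": force dirty pages to disk
    mem.close()

# The state survives the process: reopen the file and it's there.
with open(IMAGE, "rb") as f:
    assert f.read(5) == b"hello"
```

And this is exactly where the nastiness lives: crash between mutations and a `flush()`, and the on-disk image may hold a half-applied set of changes with no way to tell.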
Of course, object persistence is “easy” when you’re just doing heap snapshots. And save for their MP3 or video collections, 99% of humanity today could get by with a “mere” 4 GB of persistent storage. (Heard a story that Hertz Rental Cars runs their entire reservation system out of a 4 GB RAM-resident Oracle database.) Of course, once we get 64-bit computers, that 4 GB space explodes to… umm… well, more than 4 GB.
See, when folks want “transparent persistence”, they also seem to want “transparent, portable persistence”. They want to move the persisted bits around, like a Word file.
That gets a LOT more difficult. It’s not just objects but also more context that’s being saved.
OS X Cocoa has “free” persistence, so does Java. Does .NET/C# have it? Probably. But, they’re expensive and slow operations.
But, to me, it’s not transparent if you have to “write” the object. That alone makes it less transparent. That’s what makes the image based systems so nice. Just create whatever you want, and snapshot the whole thing. If the system has a built in facility where that snapshot is essentially automatic, then so much the better.
This is how your PDAs work. You may commit your changes, but you don’t have to. If I scribble half a note on my Newton and turn it off, by gum, there’s a half a note there when I turn it back on.
This is where home computers need to be, IMHO. Instant off, instant on, total persistence. Like your refrigerator.
When your system is completely persistent, then you no longer “save” documents. You just close them. You keep track of changes, perhaps “close points”, whatever. Open up a document, change it, close it. The entire undo log is there with it. You can “compress” the document (and lose its undo log), you can “delete” it, do whatever.
“Saving” the document becomes more of an exporting / externalizing function, to, say, mail it. That takes a direct action, and creates a different document. It’s in your “sent” folder.
The idea is that you can work better with your environment when actions are less permanent. Closing an unsaved document is permanent, thus the warnings. Closing a saved document isn’t. You can get it back. As long as your changes can be “undone”, there’s no reason not to save your documents. If you’re worried about corrupting an original, make a copy, just like you do with a piece of paper.
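A toy sketch of a document that never needs “saving” because every change is journaled and undoable (hypothetical Python; the class and method names are invented, and Raskin’s actual proposals are considerably more subtle):

```python
class Document:
    """Every edit appends the prior state to an undo log; 'closing'
    loses nothing, because the log travels with the document. The
    only destructive operation is explicitly compressing history."""

    def __init__(self):
        self.text = ""
        self.undo_log = []            # prior states, newest last

    def edit(self, new_text):
        self.undo_log.append(self.text)
        self.text = new_text

    def undo(self):
        if self.undo_log:
            self.text = self.undo_log.pop()

    def compress(self):
        """Discard history -- like flattening a paper original."""
        self.undo_log.clear()

doc = Document()
doc.edit("draft one")
doc.edit("draft two")
doc.undo()                 # no warning dialog needed: nothing is lost
assert doc.text == "draft one"
```

“Saving” in the export sense would then just be writing `doc.text` out to RTF or mail, deliberately leaving the undo log behind.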
Raskin talks a lot about these kinds of things.
Regarding Lisp Machines, they were an image based persistent system.
But they also had your everyday normal file system and other what nots. They had a versioned file system, but big deal.
The nice thing about the Lisp machines was that you had the source to most everything on the machine, that you could do things to really core system facilities, in a high level language. For example, imagine wanting to tweak your ethernet drivers on the fly.
Or say you’re trying to track down a network problem so you look at your “write” call on the socket-stream. You trace it down into the socket-streams implementation, and then delve deeper into the systems lowlevel io syscall, which leads you to the driver where you go “aHA!”, fix it, and move on.
A simple example: a program is streaming out to a file. The disk gets full. You go and remove some old files, freeing up some space, and tell the original errored program to continue, and it finishes the save without ever “knowing” it was out of space. The code to write the file was something like:
(with-open-file (output "file.out" :direction :output)
  (dolist (thing list-of-things)
    (save-thing output thing)))
The fact that the code doesn’t even check for the file-system-full condition is what makes this an elegant facility.
But, it’s a Lisp machine, for developing in Lisp. While you can use it for word processing, you don’t give it to secretaries for that.
Hmmm…thanks, Will. That was an excellent post. I now see what Mike Hearn was trying to get at with his system. Well, I think it would be a lot simpler if the system had its own native document format that supported undo logs within itself, for instance, and then to port it you would just export it to RTF or whatever, which does not support the undoing and all the features of the native format. Documents tend to be small enough that duplication does not become a problem.
The entire point of device drivers is to do the abstracting itself. Writing a driver in OO would be stupid and pointless, as the hardest part of driver development is actually extracting the raw data from the hardware and manipulating it. An object oriented model will not help this at all;
It would seem to me that having one consistent hardware abstraction that can be kept consistent even while being extended (through the use of encapsulation and inheritance) is anything but stupid and pointless. For example, as you say, one of the hardest things about device drivers is extracting raw data from hardware. If a generic driver is written for each type of interface (for instance USB), then drivers for USB devices wouldn’t need to worry about extracting data from the hardware. They can simply inherit that functionality from the generic driver, and then they only have to worry about manipulating that data. That way every driver does not have to reimplement the same functions.
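A hedged sketch of that inheritance split (hypothetical Python; the class names, the fake `read_raw` payload, and the byte layout are all invented for illustration — no real OS or USB stack looks like this):

```python
from abc import ABC, abstractmethod

class USBDriver(ABC):
    """Generic interface driver: owns the bus plumbing that every
    USB device needs. Written once, inherited by all device drivers."""

    def read_raw(self, endpoint):
        # Stand-in for the real bus-transfer logic (the hard part
        # the parent driver writes once for everyone).
        return bytes([endpoint, 0x01, 0x02])

    @abstractmethod
    def interpret(self, raw):
        """Device-specific: turn raw bytes into something useful."""

    def poll(self, endpoint):
        return self.interpret(self.read_raw(endpoint))

class USBMouse(USBDriver):
    # Only the device-specific manipulation lives here; the data
    # extraction is inherited unchanged from USBDriver.
    def interpret(self, raw):
        return {"buttons": raw[0], "dx": raw[1], "dy": raw[2]}

mouse = USBMouse()
event = mouse.poll(endpoint=3)
assert event == {"buttons": 3, "dx": 1, "dy": 2}
```

The design point is the division of labour: extending the system to a new device means writing one `interpret`, not re-doing the bus transfer code.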
In theory it would be nice if we needed only one driver to run common-by-function hardware. In practice, it would not be trivial to reduce drivers for, say, NVidia and ATI graphics cards to one well-performing driver with little special code. Face it, the only thing that seems to evolve about as fast as OSS/FS code is cutting-edge hardware. And yes, many USB devices share the same or similar drivers, but this was by design. The same could be said for legacy serial devices and PC parallel port devices (assuming you knew which control bits/code were important).
Not to downplay KeyKOS, but many of these capabilities are available in Linux and other systems. Combinations of
User-Mode Linux http://user-mode-linux.sf.net
and a sheer plethora of other research and experimental systems implement similar concepts. The idea of trivial persistence (i.e., “it’s everywhere you want to be”) is hot, but hard. I would suspect the hardest issue is that for it to truly work, everyone has to play nice. Given the need for market share and capitalization, that will most likely not happen. However, if it does happen, I think the OSS/FS systems have the best shot at working, because they *must* work with each other. This is by social design as well as technology.
Persistence becomes hard once networks come into the equation. With true local state information, you can pretty much guarantee to be able to restore it; with information concerning remote states, there is no such guarantee.
OS’s that interoperate and play nice make this easier, but it’s still decidedly non-trivial.
And as for OOP and device drivers… makes a lot of sense to me, but then I used to own an Amiga, and its executive was highly OO, with every device inheriting from the base device “abstract class” (it was written in assembly & C, so these concepts weren’t used explicitly, but it’s still what it was doing).
and of course:
Best I can do for you.