Linked by Thom Holwerda on Thu 2nd Feb 2017 22:52 UTC
General Unix

But today's breakthroughs would be nowhere and would not have been possible without what came before them - a fact we sometimes forget. Mainframes led to personal computers, which gave way to laptops, then tablets and smartphones, and now the Internet of Things. Today much of the interoperability we enjoy between our devices and systems - whether at home, the office or across the globe - owes itself to efforts in the 1980s and 1990s to make an interoperable operating system (OS) that could be used across diverse computing environments - the UNIX operating system.

[...]

As part of its standardization efforts, the IEEE developed a small set of application programming interfaces (APIs). This effort was known as POSIX, or Portable Operating System Interface. Published in 1988, the POSIX.1 standard was the first attempt outside the work at AT&T and BSD (the UNIX derivative developed at the University of California at Berkeley) to create common APIs for UNIX systems. In parallel, X/Open (an industry consortium consisting at that time of over twenty UNIX suppliers) began developing a set of standards aligned with POSIX that consisted of a superset of the POSIX APIs. The X/Open standard was known as the X/Open Portability Guide and had an emphasis on usability. ISO also got involved in the effort, taking the POSIX standard and internationalizing it.

A short look at the history of UNIX standardisation and POSIX.

Double sided
by Alfman on Fri 3rd Feb 2017 00:13 UTC
Alfman
Member since:
2011-01-28

Posix brought many benefits, like source code portability - the importance of which can never be overstated. With that said, what a headache it can be; sometimes I wish POSIX could be replaced with something more modern and less quirky. They would standardize existing APIs without asking how much merit they had for standardization.

Reply Score: 3

RE: Double sided
by Lennie on Fri 3rd Feb 2017 12:49 UTC in reply to "Double sided"
Lennie Member since:
2007-09-22

They would standardize existing APIs without asking how much merit they had for standardization.


I wonder what the view on that was in those days. Would they also have thought those were bad choices?

Reply Score: 2

RE: Double sided
by Rugxulo on Sat 4th Feb 2017 03:00 UTC in reply to "Double sided"
Rugxulo Member since:
2007-10-09

Posix brought many benefits, like source code portability - the importance of which can never be overstated.


So how many "POSIX" code bases are still maintained for lesser platforms? Everything nowadays is effectively Windows or "POSIX" (Linux or Mac), whether it would run elsewhere or not. Only very few still care about lesser platforms, unfortunately. Which means most code is (indirectly) platform-specific.

I feel like portability is a false promise, never living up to its ideals. It's very discouraging. Even simple things aren't portable (unless you kick the tires until your foot falls off).

Keep in mind that AutoTools still relies (unwisely, IMO) on a POSIX Bourne shell. Can you imagine Linux having to use a 4DOS/4NT clone just to configure and build sources? It would be ridiculous, yet here we are.

I'm not really complaining about any one part, just saying that the ideals are often loftier than the reality.

Reply Score: 1

RE[2]: Double sided
by Alfman on Sat 4th Feb 2017 04:30 UTC in reply to "RE: Double sided"
Alfman Member since:
2011-01-28

Rugxulo,

So how many "POSIX" code bases are still maintained for lesser platforms? Everything nowadays is effectively Windows or "POSIX" (Linux or Mac), whether it would run elsewhere or not. Only very few still care about lesser platforms, unfortunately. Which means most code is (indirectly) platform-specific.


A few years ago I learned that marketers have a concept called "the rule of three":
http://www.inc.com/scott-elser/the-marketing-rules-of-three.html

Basically it describes the way markets consolidate around 3 significant players and the rest essentially become irrelevant as markets mature. It's obviously not an exact science, but I think the rule is reliable enough to predict that alternatives aren't going to come back on their own so long as we leave things to capitalistic forces, which have always favored consolidation of market power. The odd thing is that it applies even to non-commercial/free software.

Regarding operating systems, the main reason alternative operating systems have to clone existing APIs is because they can't get software ported to them otherwise. Sadly, the same rule also means that even once they do achieve software compatibility, they'll still have to fight it out in a tiny niche of the market that may not be sustainable for them. And it has absolutely nothing to do with merit, it's just the dynamics of natural unregulated markets.


I feel like portability is a false promise, never living up to its ideals. It's very discouraging. Even simple things aren't portable (unless you kick the tires until your foot falls off).


Well, they can be, but IMHO the main obstacle to portability tends to be the lack of robust standardization.

Keep in mind that AutoTools still relies (unwisely, IMO) on a POSIX Bourne shell. Can you imagine Linux having to use a 4DOS/4NT clone just to configure and build sources? It would be ridiculous, yet here we are.


I hate autotools with a passion! Its whole reason for existence is poor standards, and the way it goes about brute-forcing everything by reinvoking a compiler hundreds or thousands of times is absolutely grotesque. I just hate how they addressed one problem with another.
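
For anyone who hasn't watched a configure run up close, this is roughly what each of those thousands of compiler invocations looks like: a throwaway probe program compiled just to see whether a feature exists. Only an illustrative sketch (the real probes are machine-generated, and strndup here is an arbitrary example of a function being tested for):

    /* conftest.c - the kind of disposable probe a configure script
       compiles once per feature it wants to detect. If this compiles
       and links, configure records something like HAVE_STRNDUP=1. */
    #include <string.h>

    int main(void)
    {
        char *p = strndup("hello", 3);   /* does this platform have it? */
        return p == 0;
    }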

I'm not sure what direction you want this discussion to go in, but yeah there's no disagreement from me that things could be much better ;)

Reply Score: 2

Nonsense
by Brendan on Fri 3rd Feb 2017 11:09 UTC
Brendan
Member since:
2005-11-16

Hi,

That article is extremely biased (so biased that it can be considered "almost pure bullshit").

The breakthroughs we've seen would've been made regardless of whether Unix existed or not; and without *NIX and POSIX stifling innovation we would have seen far more breakthroughs.

Much of the interoperability we enjoy between our devices happened despite Unix/POSIX, not because of Unix/POSIX; and could be more accurately be attributed to the existence of multiple very different OSs (Unix vs. VMS vs. Novell vs. Windows vs. ...).

Any OS may or may not be known for its stability and reliability, regardless of whether it is or isn't an implementation of *nix. Without any correlation between "implementations of Unix" and "stability and reliability" you can't pretend there's causation. If you look at OSs for mission critical servers (NonStop OS, z/OS, etc) you'll see the opposite - most aren't Unix.

There's a huge amount of evidence to show that Unix/POSIX failed to evolve. For every new type of device that has been introduced in the last 30 years (mouse, sound cards, touchscreens/tablets, scanners, cameras, 2D graphics, 3D graphics, virtualisation, ...); every single "Unix OS" has to add non-standard (and typically incompatible) extensions. If you write a modern application (anything that doesn't make the end user vomit) portability (even just portability between "*nix clones" alone, and even just portability between "Linux running Gnome" and "Linux running KDE") can only be obtained through non-standard libraries that have nothing to do with POSIX. Almost all of the portability that actually does exist comes from programming languages and not the OS (e.g. being able to run Java applications on almost everything).

Apple's OS X, the first HTTP server, the establishment of WWW, IBM's Deep Blue chess computer, DNA and RNA sequencing and Silicon Graphics' digital effects all owe their fame to talented application developers and/or marketing people and/or other factors (porn!); and Unix/POSIX is not even slightly responsible for any of these things.

The fact is that (despite the pure drivel that this article is) for every possible use case, Unix is either dead (e.g. anything involving modern user interfaces) or irrelevant (e.g. anything where the only thing that matters is the application and programming language and not the OS).

- Brendan

Reply Score: 5

RE: Nonsense
by Alfman on Fri 3rd Feb 2017 13:24 UTC in reply to "Nonsense"
Alfman Member since:
2011-01-28

Brendan,

The breakthroughs we've seen would've been made regardless of whether Unix existed or not; and without *NIX and POSIX stifling innovation we would have seen far more breakthroughs.


I see what you mean, but I usually give that distinction to MSDOS ;) At least it eventually became obsolete.


POSIX is considered by many to be "good enough" and continues to displace better alternatives in exchange for basic compatibility. Despite this, gaps in functionality and scalability have led to fragmentation anyway; just take the basic socket polling mechanisms for example:

select
poll
ppoll
epoll
kqueue
io_getevents
/dev/poll

Did I miss any? The functional overlap is ridiculous; they all do the same thing in different ways to address the limitations of the original select call. I've come across countless examples of this over the years and IMHO it's one of the banes of bad standards.
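
To make the overlap concrete, here's the same trivial job - block until one socket is readable - written against two of the interfaces above. A minimal sketch with error handling omitted; sockfd is assumed to be an already-open socket descriptor:

    #include <poll.h>
    #include <sys/epoll.h>
    #include <unistd.h>

    /* Portable POSIX poll() version. */
    int wait_with_poll(int sockfd)
    {
        struct pollfd pfd = { .fd = sockfd, .events = POLLIN };
        return poll(&pfd, 1, -1);              /* block until readable */
    }

    /* Linux-only epoll version doing exactly the same thing. */
    int wait_with_epoll(int sockfd)
    {
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = sockfd };
        struct epoll_event out;
        int epfd = epoll_create1(0);
        int n;

        epoll_ctl(epfd, EPOLL_CTL_ADD, sockfd, &ev);
        n = epoll_wait(epfd, &out, 1, -1);     /* block until readable */
        close(epfd);
        return n;
    }

Same behaviour, different API per kernel family - which is exactly the fragmentation being complained about.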

Unquestionably we could develop better standards, but since POSIX isn't realistically going away, this is what happens in practice:

https://xkcd.com/927/

Edited 2017-02-03 13:36 UTC

Reply Score: 5

RE[2]: Nonsense
by zlynx on Fri 3rd Feb 2017 22:50 UTC in reply to "RE: Nonsense"
zlynx Member since:
2005-07-20

Look at all that innovation that was somehow "stifled!"

Reply Score: 2

RE[3]: Nonsense
by Alfman on Fri 3rd Feb 2017 23:58 UTC in reply to "RE[2]: Nonsense"
Alfman Member since:
2011-01-28

zlynx,

Look at all that innovation that was somehow "stifled!"



Are you suggesting that POSIX has not stifled the industry because over the years different implementations have solved its shortcomings in different ways?

I can't tell if you are being sarcastic or not, haha ;)

IMHO it would be far better to collectively push for a unified standard that actually fixes the problems everyone has with the earlier standard. Although I fully appreciate that doesn't happen and so we end up in the xkcd cartoon I linked to earlier: standards = standards + 1

Reply Score: 2

RE: Nonsense
by christian on Fri 3rd Feb 2017 16:35 UTC in reply to "Nonsense"
christian Member since:
2005-07-06

Hi, That article is extremely biased (so biased that it can be considered "almost pure bullshit"). The breakthroughs we've seen would've been made regardless of whether Unix existed or not; and without *NIX and POSIX stifling innovation we would have seen far more breakthroughs.



Not sure how UNIX has stifled any innovation.

By contrast, competing systems either had no innovation, and certainly no third party innovation (think contemporary systems such as RSX-11, VMS, z/OS) or were so badly implemented as to stifle innovation for fear of breaking compatibility (DOS/Windows debacle.)



Much of the interoperability we enjoy between our devices happened despite Unix/POSIX, not because of Unix/POSIX; and could be more accurately be attributed to the existence of multiple very different OSs (Unix vs. VMS vs. Novell vs. Windows vs. ...).



Actually, the interoperability between devices is OS agnostic. The internet, and with it TCP/IP, largely paved the way for inter-device interoperability.

Compatibility can certainly be achieved with high level run times (like Java, Python etc.) but it's easier to write just one or two runtimes (UNIX runtime, Windows runtime, z/OS runtime) than it is to write a runtime for every single disparate OS we might have had without UNIX.




...snip...

There's a huge amount of evidence to show that Unix/POSIX failed to evolve. For every new type of device that has been introduced in the last 30 years (mouse, sound cards, touchscreens/tablets, scanners, cameras, 2D graphics, 3D graphics, virtualisation, ...);



I think it's a bit unfair to single out the UNIX core API for failing to handle future advances in IO devices. But at least with the ioctl interface, there is a method to go beyond the "stream of bytes" interface used by UNIX. And even then, any device can be encoded into that stream of bytes, so not even ioctl is required if the stream of bytes is considered a transport link to devices.

I think the UNIX file abstraction is proving remarkably robust in the face of new device developments.
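
A small concrete example of that escape hatch: the terminal is still just a file descriptor you can read() and write(), but ioctl() lets you ask it device-specific questions that a pure byte stream can't express. A minimal sketch using the standard TIOCGWINSZ request:

    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void)
    {
        struct winsize ws;

        /* Ask the tty driver for the window size - something the plain
           read()/write() byte-stream interface has no way to convey. */
        if (ioctl(STDOUT_FILENO, TIOCGWINSZ, &ws) == -1) {
            perror("ioctl(TIOCGWINSZ)");       /* e.g. stdout is not a tty */
            return 1;
        }
        printf("%d rows x %d cols\n", ws.ws_row, ws.ws_col);
        return 0;
    }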



...snip... portability (even just portability between "*nix clones" alone, and even just portability between "Linux running Gnome" and "Linux running KDE") can only be obtained through non-standard libraries that have nothing to do with POSIX. Almost all of the portability that actually does exist comes from programming languages and not the OS (e.g. being able to run Java applications on almost everything).



Most of these programming languages or libraries came from the UNIX/POSIX background. UNIX makes writing a runtime relatively easy. It has a small number of simple abstractions, on which you can build more specialized and higher level abstractions and implementation details.

Compare this with e.g. Windows. Win32, and heaven help us, Win16 before it, had barely heard of the word abstraction. As an example, I present Winsock: an I/O API totally divorced from other I/O methods in Windows, to the point that it is very difficult to write code that works with network connections mostly the same way it works with files or with input/output from the console.
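
To illustrate the contrast: on a POSIX system a connected socket is just another file descriptor, so one function can drain a file, a pipe, stdin or a TCP connection without knowing which it has been handed. A minimal sketch (error handling kept terse); the Winsock equivalent needs recv() on a SOCKET handle that ordinary file code can't use:

    #include <unistd.h>

    /* Read and discard everything from fd until EOF; returns the byte
       count, or -1 on error. Works identically for files, pipes and
       sockets because they are all the same kind of descriptor. */
    long drain(int fd)
    {
        char buf[4096];
        long total = 0;
        ssize_t n;

        while ((n = read(fd, buf, sizeof buf)) > 0)
            total += n;
        return n < 0 ? -1 : total;
    }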

Hell, Windows was barely compatible with itself until they managed to move everyone to WinXP. Win9x and WinNT compatibility was a joke, with WinNT especially compromised security wise until well into this century.

At the same time my company was struggling to support a product that worked across Windows NT 4.x/2000/XP, we also supported SunOS 4/5, HP-UX 9/10/11, AIX 3/4/5+ and Linux, and had ports to Irix and BSD just to keep the code portable. And the UNIX side was vastly easier to support, with most of the UNIX side problems caused by the various pthread implementations across all those OSes (pthreads evolved somewhat between the initial drafts implemented by the likes of HP-UX 9/10.1 and the final pthreads used in later versions).



Apple's OS X, the first HTTP server, the establishment of WWW, IBM's Deep Blue chess computer, DNA and RNA sequencing and Silicon Graphics' digital effects all owe their fame to talented application developers and/or marketing people and/or other factors (porn!); and Unix/POSIX is not even slightly responsible for any of these things.



Not directly, but the likes of SGI, IBM, HP and SUN chose UNIX for very valid reasons. They could all have certainly had home brew operating systems, and indeed, IBM/HP had many to choose from, but everyone chose UNIX largely because it's just a nice system. All those VAXen on which BSD was developed already had an OS, VMS. But Berkeley chose to develop BSD from UNIX instead.

Even the original UNIX research team preferred to move UNIX around, rather than trying to port the tools created with UNIX to other systems. UNIX was among the first truly portable systems, running on disparate processor types with different word sizes by the late 1970s.



The fact is that (despite the pure drivel that this article is) for every possible use case, Unix is either dead (e.g. anything involving modern user interfaces) or irrelevant (e.g. anything where the only thing that matters is the application and programming language and not the OS). - Brendan



But as a historical perspective, almost everything we use today came out of the fact UNIX/POSIX is a relatively clean and simple interface, which does well to keep out of your way. The wealth of tools, libraries and applications on top of UNIX is, I think, testament to the original UNIX design.

And without the critical mass afforded multiple vendors supporting a single API, we might well have continued living in a proprietary networked world, or perhaps not even had a single internet as we know it at all. We might have all been stuck with AOL/CompuServe/MSN BBS like systems if no single system had gained momentum.

The MS/DOS should serve as a warning of what the world would have been like without UNIX. I for one, thank my UNIX/POSIX creating overlords. Once PCs were powerful enough to run UNIX (like) OS, MS/DOS and Windows was forced to catch up. Without UNIX, we'd all be running Windows on our PCs, servers and phones. And a monoculture is not good for anyone.

Reply Score: 3

RE[2]: Nonsense
by moondevil on Fri 3rd Feb 2017 17:38 UTC in reply to "RE: Nonsense"
moondevil Member since:
2005-07-08

The main reason why those companies, and universities like Berkeley and Stanford chose UNIX was that it was free of charge.

Why spend tons of resources developing their own OSes when AT&T was obliged to charge just a symbolic price for the license?

Had AT&T been allowed to charge for UNIX something like the price of VMS, history would have been quite different.

Reply Score: 2

RE[2]: Nonsense
by Alfman on Fri 3rd Feb 2017 18:32 UTC in reply to "RE: Nonsense"
Alfman Member since:
2011-01-28

christian,

Actually, the interoperability between devices is OS agnostic. The internet, and with it TCP/IP, largely paved the way for inter-device interoperability.


That right there is another example of legacy that's holding us back.

There's no denying that interoperability is extremely important, which is why we continue to use IPv4, and yet there's also no denying that these legacy standards are tearing the modern internet apart at the seams. We are in a scenario where we simultaneously desperately need to replace the existing standards, and yet we are unable to due to legacy compatibility.

I'm not going to pretend I would have foreseen today's problems four or five decades ago when IP communications were invented - maybe I would have, maybe not. But we have to take our heads out of the sand and at least admit there have been several negative long term consequences of those early decisions. What we have to decide now is whether we want to continue to live with bad IPv4 standards for today's internet or whether we want to break them to get something better, like IPv6. Both choices are painful and will have negative repercussions.

Left to its own devices, the industry has avoided short term transition costs at the expense of continued long term problems.

IMHO it's pretty clear that in the long term, breaking away from inadequate legacy standards is the right thing to do. Nobody wants to go through the transition, but the alternative violates the end-to-end connectivity principle that the internet founders took for granted. So in order to uphold their vision for how the internet should work, we HAVE to replace their legacy protocols. It's ironic.

Reply Score: 3

RE[3]: Nonsense
by Vanders on Fri 3rd Feb 2017 19:57 UTC in reply to "RE[2]: Nonsense"
Vanders Member since:
2005-07-06

The long slow transition from IPv4 to IPv6 is not due to any technical limitations. The problem is, as always, money: companies don't want to pay to upgrade network kit that can't handle IPv6, ISPs don't want to spend an extra 10 cents on every consumer router for IPv6 support, and so on.

So I'm really not sure what you think is being "held back" by IP or UNIX.

Reply Score: 2

RE[4]: Nonsense
by Alfman on Fri 3rd Feb 2017 22:28 UTC in reply to "RE[3]: Nonsense"
Alfman Member since:
2011-01-28

Vanders,

The long slow transition from IPv4 to IPv6 is not due to any technical limitations. The problem is, as always, money: companies don't want to pay to upgrade network kit that can't handle IPv6, ISPs don't want to spend an extra 10 cents on every consumer router for IPv6 support, and so on.

So I'm really not sure what you think is being "held back" by IP or UNIX.


The old legacy IPv4 protocol is clearly no longer adequate, but the fact is that IPv4 is so critical (aka mandatory) for so much technology that it has become an impediment to its successor, IPv6.

The official plan was a universal dual stack deployment of IPv6 with the eventual intent of phasing out IPv4. Well, we're nearly six years past "World IPv6 Day" and the first phase of IPv6 deployment is MIA. I don't even have the option, since my monopoly ISP doesn't offer IPv6; I can only test it through an IPv6 tunnel broker running over my IPv4 connection. Even then, most of the web doesn't support IPv6.

Don't take my word for it: go ahead, turn on IPv6 and disable IPv4, and the majority of the web goes black, including some properties from big companies like Microsoft. If you are an IPv6 user and don't have IPv4 to fall back on, you are effectively a second class user on the web, and it's quite likely many of your websites and P2P games/file sharing/telephony won't work for you. This catch-22 is a large part of the problem for IPv6. This is what I mean when I say legacy incumbent standards can end up displacing better ones indefinitely.
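
For what it's worth, the application side of the dual-stack plan is the easy part; the code below is a minimal sketch of the standard getaddrinfo() idiom that tries every address a name resolves to, so it uses IPv6 where it actually works and quietly falls back to IPv4 where it doesn't (error handling trimmed for brevity):

    #include <netdb.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int connect_by_name(const char *host, const char *port)
    {
        struct addrinfo hints, *res, *ai;
        int fd = -1;

        memset(&hints, 0, sizeof hints);
        hints.ai_family = AF_UNSPEC;        /* ask for both AAAA and A records */
        hints.ai_socktype = SOCK_STREAM;

        if (getaddrinfo(host, port, &hints, &res) != 0)
            return -1;

        /* Walk the list; resolvers typically sort usable IPv6 first, IPv4 after. */
        for (ai = res; ai != NULL; ai = ai->ai_next) {
            fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
            if (fd == -1)
                continue;
            if (connect(fd, ai->ai_addr, ai->ai_addrlen) == 0)
                break;                      /* connected */
            close(fd);
            fd = -1;
        }
        freeaddrinfo(res);
        return fd;                          /* -1 if every address failed */
    }

The catch is everything the application can't control: the ISP, the home router, and the server side actually publishing AAAA records.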

Don't get me wrong, I'm in total agreement with you that we have to blame the industry for not investing in IPv6 to make the transition happen, but you have to concede that so long as IPv4 support is much better than IPv6, that's an obstacle to creating critical mass for IPv6 such that people and companies would naturally demand it.


So I think we'll both agree on the merit of IPv6, but without some kind of artificial incentive I'm afraid that because of our complacency with legacy standards, we could end up with carrier grade IPv4 NAT as a permanent fixture of our infrastructure.

Edited 2017-02-03 22:44 UTC

Reply Score: 2

RE[2]: Nonsense
by Brendan on Sat 4th Feb 2017 01:04 UTC in reply to "RE: Nonsense"
Brendan Member since:
2005-11-16

Hi,

Not sure how UNIX has stifled any innovation.


I'd recommend reading Rob Pike's "Systems Software Research is Irrelevant" paper: http://doc.cat-v.org/bell_labs/utah2000/utah2000.html

By contrast, competing systems either had no innovation, and certainly no third party innovation (think contemporary systems such as RSX-11, VMS, z/OS) or were so badly implemented as to stifle innovation for fear of breaking compatibility (DOS/Windows debacle.)


VMS was designed for distributed systems and had multiple unique features for this purpose, including (e.g.) a versioning file system. It was also one of the earliest OSs to use modern virtual memory management.

NonStop OS (originally from Tandem Computers, I think; it later became part of Hewlett Packard Enterprise) was designed for hardware redundancy - software was split into isolated pieces, and 3 copies of each piece would run on isolated CPUs, such that if any CPU failed (or even just gave different results) it could auto-detect and auto-recover. It used a radically different approach to programming (a "shared nothing, message passing" approach) and used radically different hardware.

z/OS is part of a long line of mainframe OS from IBM. It's this long line of mainframe OSs that are responsible for a lot of ideas (checkpointing, capability based addressing, virtualisation, etc).

"IBM i" (which is what I was originally thinking of - I goofed) is very unique for multiple reasons - it's object based (no files!), it uses "single level store", etc. It's also designed to minimise maintenance/administration (automated self-care, etc) and is unmatched by any other OS.

I think it's a bit unfair to single out the UNIX core API for failing to handle future advances in IO devices. But at least with the ioctl interface, there is a method to go beyond the "stream of bytes" interface used by UNIX. And even then, any device can be encoded into that stream of bytes, so not even ioctl is required if the stream of bytes is considered a transport link to devices.


Sure, there are multiple ways that all the different and incompatible *nix clones have worked around the utter failure of Unix to evolve, including IOCTLs (which originally had to exist as a hack to work around the problem caused by the unusable idiocy of "everything is a file").

I think the UNIX file abstraction is proving remarkably robust in the face of new device developments.


No; people are just getting better at necromancy (tricks to make a dead OS seem like it's undead).

Most of these programming languages or libraries came from the UNIX/POSIX background.


The idea of "programming language as portable abstraction" pre-dates UNIX by several decades.

Compare this with e.g. Windows. Win32, and heaven help us, Win16 before it, had barely heard of the word abstraction. As an example, I present Winsock: an I/O API totally divorced from other I/O methods in Windows, to the point that it is very difficult to write code that works with network connections mostly the same way it works with files or with input/output from the console.


So you're saying that change is/was necessary for Windows to evolve, and because (unlike Unix) it didn't fail to evolve it dominated the "desktop" for decades even though other OSs (Unix) had a head start and ample opportunity to become entrenched?

Not directly, but the likes of SGI, IBM, HP and SUN chose UNIX for very valid reasons. They could all have certainly had home brew operating systems, and indeed, IBM/HP had many to choose from, but everyone chose UNIX largely because it's just a nice system.


Sure, once upon a time (a long time ago), Unix was a nice OS - a "9 out of 10" if you like. Then technology improved, and what people expect from an OS changed with it, and Unix failed to notice; and now Unix is still a nice "9" but it's "9 out of 20" in an era where most OSs surpassed it.

All those VAXen on which BSD was developed already had an OS, VMS. But Berkeley chose to develop BSD from UNIX instead.


They chose to develop BSD from source code they were given instead of choosing to develop BSD from proprietary source code they never saw?

Even the original UNIX research team preferred to move UNIX around, rather than trying to port the tools created with UNIX to other systems. UNIX was among the first truly portable systems, running on disparate processor types with different word sizes by the late 1970s.


Again, primarily because of C (and its ancestor, B); but also because the hardware was evolving quickly then too.

Note that most of the people that created Unix thought it was so awesome that they threw it in the trash and started Plan 9.

But as a historical perspective, almost everything we use today came out of the fact UNIX/POSIX is a relatively clean and simple interface, which does well to keep out of your way. The wealth of tools, libraries and applications on top of UNIX is, I think, testament to the original UNIX design.


No; some of the stuff we actually use today came out of the fact that Unix/POSIX sucks so badly that people were forced to replace/bury/extend it. The remainder came from other places that pre-date Unix or are unrelated to Unix.

And without the critical mass afforded multiple vendors supporting a single API, we might well have continued living in a proprietary networked world, or perhaps not even had a single internet as we know it at all. We might have all been stuck with AOL/CompuServe/MSN BBS like systems if no single system had gained momentum.


Sure; without Unix we might still be using the telegraph, or we might be flying around in hovercars, or we might all be dead, or... Most likely is that Unix made no difference - the Internet would've existed and become ubiquitous regardless of which OS happened to be used when hardware became affordable.

The MS/DOS should serve as a warning of what the world would have been like without UNIX. I for one, thank my UNIX/POSIX creating overlords. Once PCs were powerful enough to run UNIX (like) OS, MS/DOS and Windows was forced to catch up. Without UNIX, we'd all be running Windows on our PCs, servers and phones. And a monoculture is not good for anyone.


Unix should serve as a warning of what the world would have been like if people weren't smart enough to distinguish between idiotic hyperbole and a logical argument.

If history were different it's impossible to predict what would have happened; but increased competition between different OSs in the 1970s and 1980s (rather than too many people settling for "bad but convenient Unix") is a more likely possibility than most, and in that case maybe Microsoft would've had stronger competition in the 1990s and might not even exist now.

- Brendan

Reply Score: 2

RE[3]: Nonsense
by tylerdurden on Tue 7th Feb 2017 01:56 UTC in reply to "RE[2]: Nonsense"
tylerdurden Member since:
2009-03-17


Unix should serve as a warning of what the world would have been like if people weren't smart enough to distinguish between idiotic hyperbole and a logical argument.


Dear lord, the lack of self awareness...

Reply Score: 2

RE[2]: Nonsense
by Rugxulo on Sat 4th Feb 2017 04:45 UTC in reply to "RE: Nonsense"
Rugxulo Member since:
2007-10-09

The MS/DOS should serve as a warning of what the world would have been like without UNIX.


DOS and OS/2 and Win9x/NT all had POSIX toolsets (MKS, gnuish, DJGPP, EMX, Cygwin). It did help tremendously ... for a while, but eventually everyone mostly gave up on those (probably excepting Cygwin, but even MinGW is more popular, and that's not very POSIX at all).

So I don't think "POSIX" is a savior for anything. You can certainly praise certain specific tools or APIs or OSes, but overall everything in software is chaotic. Thus, success isn't tied to anything else but hard work and dedication. (Or luck, timing, inertia, marketing, money, licensing. But I prefer to dream that good software will always rise to the top. FPC, FTW!)

Reply Score: 1

RE[2]: Nonsense
by Carewolf on Sat 4th Feb 2017 11:31 UTC in reply to "RE: Nonsense"
Carewolf Member since:
2005-09-08

"
...snip... portability (even just portability between "*nix clones" alone, and even just portability between "Linux running Gnome" and "Linux running KDE") can only be obtained through non-standard libraries that have nothing to do with POSIX. Almost all of the portability that actually does exist comes from programming languages and not the OS (e.g. being able to run Java applications on almost everything).



Most of these programming languages or libraries came from the UNIX/POSIX background. UNIX makes writing a runtime relatively easy. It has a small number of simple abstractions, on which you can build more specialized and higher level abstractions and implementation details.

"

Background maybe, but the times something like Qt can use a POSIX standard are vanishingly small, and increasingly so. The biggest pro left is probably filename encoding and shell escaping, and I am not even sure those were standardized by POSIX. The filesystem and memory management implementation does use POSIX APIs, but they have diverged so much that the POSIX implementation is in practice split four ways: Linux, Darwin (Mach kernel), BSDs+Solaris and QNX. For a long time pthreads was a proud POSIX API, but it has now been deprecated in favor of C++11 threads. Add to that that the mobile offspring of these systems all removed POSIX APIs (iOS, Android and BB10), and a cross-platform toolkit can rely even less on anything POSIX.

Still, I do appreciate that filesystem conventions are the same. Just had to fix a new build system for Chromium to integrate with Windows. Baah.

Edited 2017-02-04 11:33 UTC

Reply Score: 2

RE[2]: Nonsense
by Megol on Sat 4th Feb 2017 21:47 UTC in reply to "RE: Nonsense"
Megol Member since:
2011-04-11


<snip>

The MS/DOS should serve as a warning of what the world would have been like without UNIX. I for one, thank my UNIX/POSIX creating overlords. Once PCs were powerful enough to run UNIX (like) OS, MS/DOS and Windows was forced to catch up. Without UNIX, we'd all be running Windows on our PCs, servers and phones. And a monoculture is not good for anyone.


MSDOS was basically a clone of CP/M (though on an API level) and later integrated some POSIX inspired stuff. The first IBM PC had 64kiB RAM - try to fit a Unix type system + application into that.

PCs could run Unix type operating systems when released. Xenix, QUNIX (later QNX) etc.

Microsoft planned to evolve MSDOS and Xenix into one system, as Xenix was popular (for a long time the Unix with the most installations); however, interest in Xenix shrank while interest in MSDOS exploded. Microsoft prioritized what their customers wanted.

Later MS and IBM collaborated on a modernization of (IBM/MS) DOS called OS/2. Different development practices, different goals (e.g. IBM wanted to run on 80286 systems which caused extreme problems* while MS wanted to optimize for 80386 systems) and the increasing sales of Windows made MS hire the main designer behind DEC VMS and create Windows NT.

*: one example: OS/2 ran in protected mode on 80286 processors and was to support multitasking even for MSDOS real mode (no protection features, 1MiB memory max.) programs. This meant that the system had to switch between protected mode and real mode at preemption time - but the 80286 was braindead in that it couldn't, by design, switch back to real mode once running in protected mode. IBM solved this by patching the BIOS code so that when a flag was set by software it would jump to a programmer-specified location and bypass the normal BIOS setup; the operating system could then specify a routine to handle the real mode task and send a reset command to the keyboard controller, which would reset the processor and return it to real mode. That was _slow_.

Reply Score: 2

RE[3]: Nonsense
by Alfman on Sun 5th Feb 2017 03:10 UTC in reply to "RE[2]: Nonsense"
Alfman Member since:
2011-01-28

Megol,

This meant that the system had to switch between protected mode and real mode at preemption time - but the 80286 was braindead in that it couldn't, by design, switch back to real mode once running in protected mode. IBM solved this by patching the BIOS code so that when a flag was set by software it would jump to a programmer-specified location and bypass the normal BIOS setup; the operating system could then specify a routine to handle the real mode task and send a reset command to the keyboard controller, which would reset the processor and return it to real mode. That was _slow_.



I remember all that; it was such a hack, and it probably still exists in today's keyboard controllers too, just like abusing the keyboard controller to mask the physical A20 memory address line. What a glorious mess we made of things ;)
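
For the curious, this is roughly what that abuse looks like in typical x86 boot/OS-dev code - a rough sketch, not something you can run as a normal userland program, using the classic documented keyboard-controller ports and commands:

    /* outb/inb are the usual GCC inline-assembly port I/O helpers. */
    static inline void outb(unsigned short port, unsigned char val)
    {
        __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
    }

    static inline unsigned char inb(unsigned short port)
    {
        unsigned char val;
        __asm__ volatile ("inb %1, %0" : "=a"(val) : "Nd"(port));
        return val;
    }

    static void kbc_wait(void)      /* wait until the KBC will accept a byte */
    {
        while (inb(0x64) & 0x02)
            ;
    }

    void enable_a20_via_kbc(void)   /* the A20 gate hack mentioned above */
    {
        kbc_wait();
        outb(0x64, 0xD1);           /* command: write the KBC output port */
        kbc_wait();
        outb(0x60, 0xDF);           /* output-port value with the A20 bit set */
    }

    void reset_cpu_via_kbc(void)    /* the OS/2-era route back to real mode */
    {
        kbc_wait();
        outb(0x64, 0xFE);           /* pulse the CPU reset line */
    }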

Reply Score: 2

RE: Nonsense
by Vanders on Fri 3rd Feb 2017 20:00 UTC in reply to "Nonsense"
Vanders Member since:
2005-07-06

Unix is either dead (e.g. anything involving modern user interfaces) or irrelevant (e.g. anything where the only thing that matters is the application and programming language and not the OS).

It's so irrelevant that Microsoft have invested quite some effort into Linux compatibility on Windows.

POSIX/Single Unix is a programming interface. There are quite literally billions of devices out there that support POSIX. That's quite an odd definition of "dead". Is there going to be a film about it at 11?

Reply Score: 2

RE[2]: Nonsense
by Brendan on Sat 4th Feb 2017 01:23 UTC in reply to "RE: Nonsense"
Brendan Member since:
2005-11-16

Hi,

It's so irrelevant that Microsoft have invested quite some effort into Linux compatibility on Windows.


Sure - Linux compatibility. Microsoft deprecated "Windows Services for UNIX" and then introduced "Windows Subsystem for Linux". Why? Because Linux compatibility matters (for Azure) and Unix compatibility doesn't.

POSIX/Single Unix is a programming interface. There are quite literally billions of devices out there that support POSIX. That's quite an odd definition of "dead". Is there going to be a film about it at 11?


Wrong. The Single UNIX Specification is a set of standards that defines the design of an OS, including things like shells, commands, utilities (and their command line arguments and behaviour), etc. It does include APIs (for one language only), but that's only part of a larger whole.

- Brendan

Reply Score: 2

RE[3]: Nonsense
by Vanders on Sat 4th Feb 2017 10:55 UTC in reply to "RE[2]: Nonsense"
Vanders Member since:
2005-07-06

How did I know someone would come along and attempt to split a hair over "Linux" v's "UNIX"?

PROTIP: Linux is, for all intents and purposes, Single Unix Specification compliant. It's Unix.

Reply Score: 2

RE[4]: Nonsense
by Brendan on Sat 4th Feb 2017 12:35 UTC in reply to "RE[3]: Nonsense"
Brendan Member since:
2005-11-16

Hi,

How did I know someone would come along and attempt to split a hair over "Linux" v's "UNIX"?


That's easy - you knew someone would correct you because you knew you were technically wrong.

PROTIP: Linux is, for all intents and purposes, Single Unix Specification compliant. It's Unix.


Linux distributions are "Unix like", but are not "100% Unix compatible" and not "Unix(tm)". However this is missing the point.

Even if Linux was both "100% Unix compatible" and "Unix(tm)"; "Linux compatible" (including support for the extensions Linux added on top of POSIX that software designed for Linux has grown to depend on) would still matter more than "Unix compatible".

- Brendan

Reply Score: 2

RE[5]: Nonsense
by Vanders on Sat 4th Feb 2017 14:45 UTC in reply to "RE[4]: Nonsense"
Vanders Member since:
2005-07-06

That's easy - you knew someone would correct you because you knew you were technically wrong.

...but I'm not technically wrong? I knew someone would sperg because they couldn't help themselves.

Linux distributions are "Unix like", but are not "100% Unix compatible" and not "Unix(tm)". However this is missing the point.

a) The trademark is UNIX (see I can do it too)
b) That is exactly the point.

Every Unix (and every UNIX) has its own extensions. They always have. Those extensions sometimes get folded back into the SuS and become new features of SuS; sometimes those features are not adopted wholesale but are used to create a generic standard that everyone can implement. Sometimes they're never merged into SuS.

So unless you are claiming that Linux is not a Unix because it has non-SuS extensions, you're insane.

Reply Score: 2

RE[4]: Nonsense
by Drumhellar on Sat 4th Feb 2017 18:54 UTC in reply to "RE[3]: Nonsense"
Drumhellar Member since:
2005-07-12

How did I know someone would come along and attempt to split a hair over "Linux" v's "UNIX"?


Because, in the specific example you cited, it is extremely important.

WSU was a UNIX-like POSIX environment, originally designed, in large part, so Windows NT could be used on government production systems where POSIX compatibility was mandatory, even if the software being written for it wasn't POSIX.

It didn't run binaries from other Unix systems. You couldn't drop in an iBCS binary and expect it to work. You had to build software from source.

However, WSL is different - it isn't just "Linux compatible" with source code, it runs actual Linux binaries - the exact same binaries that you download in Ubuntu. WSL isn't meant to run on production systems; it is so Linux developers can run their development stack on Windows.

This isn't splitting hairs. It is a significant and important difference between the two.

Reply Score: 2

RE[5]: Nonsense
by Vanders on Sun 5th Feb 2017 10:19 UTC in reply to "RE[4]: Nonsense"
Vanders Member since:
2005-07-06

...but it isn't a significant difference in the context of this discussion? WSU & the Linux compatibility layers implemented POSIX APIs for Windows. That the Linux compatibility goes farther by offering ABI compatibility is irrelevant in a discussion about APIs.

The point is incredibly simple: POSIX & Single Unix and Unix are not dead, nor are they in any way irrelevant. They're relevant for multiple reasons, not the least of which is that Linux implements them, and it's clear how important Linux is to the world: so important that even Microsoft consider compatibility with the APIs (& ABI) that Linux implements: namely a large portion of the Single Unix Specification!

Reply Score: 2

RE: Nonsense
by Rugxulo on Sat 4th Feb 2017 03:59 UTC in reply to "Nonsense"
Rugxulo Member since:
2007-10-09

Almost all of the portability that actually does exist comes from programming languages and not the OS


I'm not sure if I'm understanding your point, but indeed "portability" is a ruse. Most compatibility is really just avoiding compiler bugs, tiptoeing around dialect differences, and tons of preprocessor #ifdef magic (or separate modules, libs, etc). Rarely is anything automatically supported except for simple stuff. Heck, compilers themselves are rarely portable, irony of ironies.
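
And the "#ifdef magic" in question usually looks something like this - the same trivial operation spelled differently per platform, which is about as far as most real-world "portability" goes (a minimal sketch; sleep_ms is just an arbitrary example):

    #ifdef _WIN32
    #include <windows.h>
    static void sleep_ms(unsigned ms) { Sleep(ms); }     /* Win32 API */
    #else
    #include <time.h>
    static void sleep_ms(unsigned ms)                    /* POSIX */
    {
        struct timespec ts = { ms / 1000, (long)(ms % 1000) * 1000000L };
        nanosleep(&ts, 0);
    }
    #endif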

Reply Score: 1

RE: Nonsense
by Drumhellar on Sat 4th Feb 2017 22:57 UTC in reply to "Nonsense"
Drumhellar Member since:
2005-07-12

If you look at OSs for mission critical servers (NonStop OS, z/OS, etc) you'll see the opposite - most aren't Unix.


They are POSIX, though.

Reply Score: 2

RE[2]: Nonsense
by christian on Mon 6th Feb 2017 01:13 UTC in reply to "RE: Nonsense"
christian Member since:
2005-07-06

" If you look at OSs for mission critical servers (NonStop OS, z/OS, etc) you'll see the opposite - most aren't Unix.


They are POSIX, though.
"

z/OS is actually a bona-fide UNIX:

http://www-03.ibm.com/systems/z/os/zos/features/unix/

Reply Score: 1

RE: Nonsense
by tylerdurden on Sun 5th Feb 2017 20:09 UTC in reply to "Nonsense"
tylerdurden Member since:
2009-03-17

Hi,

That article is extremely biased (so biased that it can be considered "almost pure bullshit").


You mean, like your post?

Reply Score: 2

Irrelevant comment by a biologist
by Gone fishing on Sat 4th Feb 2017 08:30 UTC
Gone fishing
Member since:
2006-02-22

Biology is full of legacy nonsense: you have more or less the same Hox genes as a fruit fly - that's 670 million years of legacy. Cytochromes are ubiquitous and represent over a billion years of legacy. In evolved systems legacy doesn't seem to stop innovation. Evolution is capable of absorbing bad design and mitigating its consequences.

Not saying redesign isn't a good idea, just that maybe legacy isn't the brake on innovation that one would expect.

Reply Score: 2

Alfman Member since:
2011-01-28

Gone fishing,

Biology is full of legacy nonsense: you have more or less the same Hox genes as a fruit fly - that's 670 million years of legacy. Cytochromes are ubiquitous and represent over a billion years of legacy. In evolved systems legacy doesn't seem to stop innovation. Evolution is capable of absorbing bad design and mitigating its consequences.

Not saying redesign isn't a good idea, just that maybe legacy isn't the brake on innovation that one would expect.


That's a very insightful comparison; however, I also think there's a crucial difference: our DNA evolved by means of a (near-)infinite amount of entropy as input evaluated over hundreds of millions of years, using the natural fitness selector we call surviving on earth. In effect, the allowable complexity in DNA is practically unbounded when physics itself is the computer.

Focusing now on human programmers, we do have limits to the complexity that we can handle. It's the reason we have to actively fight spaghetti code or suffer the consequences of complexity overload. It's the reason we have to refactor overly complex code. When our systems become too complex, it does actively impede our mental abilities to work on them, even if physics itself wouldn't otherwise have a problem with it.


Mind you, I think it's a very interesting point and this is merely my initial gut reaction; there's a lot to think about.

Reply Score: 2

Gone fishing Member since:
2006-02-22

Alfman

I think it is an interesting thought too.

I'm not sure how much of what we call thought or design isn't actually evolutionary in a Darwinian sense - design initiating the almost sexual reproduction of ideas and then an initial culling process.

The biological process is heavily constrained. Redesign is almost impossible; working bad solutions (such as RuBP carboxylase) are kept because fitness cannot be reduced even temporarily; and although there is a "(near-)infinite amount of entropy as input evaluated over hundreds of millions of years", the initial inputs are extremely modest changes. They always need to maintain backwards compatibility (at least initially) and are generally the cumulative product of point changes.

Do you think these constraints are similar to design constraints (with less famine and death)?

Reply Score: 2

Alfman Member since:
2011-01-28

Gone fishing,

I'm not sure how much of what we call thought or design isn't actually evolutionary in a Darwinian sense - design initiating the almost sexual reproduction of ideas and then an initial culling process.


That's quite meta ;)



The biological process is heavily constrained. Redesign is almost impossible; working bad solutions (such as RuBP carboxylase) are kept because fitness cannot be reduced even temporarily; and although there is a "(near-)infinite amount of entropy as input evaluated over hundreds of millions of years", the initial inputs are extremely modest changes. They always need to maintain backwards compatibility (at least initially) and are generally the cumulative product of point changes.


That's what I figured. As programmers we can make a deliberate decision to 'cull', as you say, because we know the end result can become more fit. However, a biological system might have great difficulty making the leap from the current local maximum (A) to another more optimal solution (B), because the required changes are too great to achieve through random luck, and the gradual evolutionary paths could be too mutated to survive on their own.

What if we could get to B using artificial means, Frankenstein style? ;)

One question I've had about DNA: biologists say there's a lot of legacy leftover DNA, but does it serve any purpose at all? What would happen if it were changed and/or removed?


Do you think these constraints are similar to design constraints (with less famine and death)?


In the sense that we are biological systems, and as designers we are able to decide when to 'cull' complexity from designs of our own, it would seem rational to conclude that biological systems are technically capable of culling, at least indirectly. Does it follow then that a biological process could somehow encode the tools & processes & knowledge for managing "planned" changes to its own DNA within its own DNA?

I definitely have questions about the limits of protein folding. If DNA can produce a human body & brain, I would think that maybe organs that deliberately alter DNA could be theoretically possible too?

If so, then the question would be whether there could ever exist a natural (or artificial) fitness function that would produce DNA for such an organ. Would there be a meta-fitness-function for this intelligent DNA to control its self-altering capabilities?

It all has the feel of a bad scifi movie ;)

Reply Score: 2

Gone fishing Member since:
2006-02-22

Alfman

One question I've had about DNA: biologists say there's a lot of legacy leftover DNA, but does it serve any purpose at all? What would happen if it were changed and/or removed?


That legacy DNA still has its own purpose - that is, being copied. If you consider that 5-8% is made up of viral DNA, that DNA has found an excellent way of immortalising itself and continues to replicate, although its lack of direct functionality means that its code is comparatively unstable (as natural selection is not acting on it). I think about 1% of the human genome codes for proteins and a further 25% has known functions such as regulatory functions, so most DNA has little function from our point of view. I guess some of it will have unknown functions, but most is just legacy, although it might be needed to allow chromosome pairing. From a programmer's point of view, this is very bad coding.

In the sense that we are biological systems, and as designers we are able to decide when to 'cull' complexity from designs of our own, it would seem rational to conclude that biological systems are technically capable of culling, at least indirectly. Does it follow then that a biological process could somehow encode the tools & processes & knowledge for managing "planned" changes to its own DNA within its own DNA?


Biology does have sex! Which is often a non-random process where organisms choose which characteristics, and therefore which genes, are the fittest and which to cull. The odd thing about sex is that it demonstrates the effects of unintended consequences, where apparently rational choices lead to bizarre outcomes. I'm thinking of peacock tails, birds of paradise and even human mate selection; it turns out sexual fitness and fitness for natural selection are not always the same.

I definitely have questions about the limits of protein folding. If DNA can produce a human body & brain, I would think that maybe organs that deliberately alter DNA could be theoretically possible too?


Well, the cell does use proteins, even ribozymes (RNA that has catalytic functionality and may be a legacy of pre-protein biology), to change and manipulate DNA. Proteins are also essential factors in gene expression. However, the central dogma tells us that information always flows from code to phenotype and not the other way, but code (genes) can change code (genes) indirectly. If Richard Dawkins is right, all your genes are in competition with each other and you represent a temporary alliance within this competition. The problem is that the gene is working for the gene, and not you; you might think a characteristic is good for you, society or the planet, but the gene acts for the gene, not you, society or the planet. Obviously genes, being inert code, aren't looking into the future - consider cancer as an example.

Reply Score: 2

Alfman Member since:
2011-01-28

Gone fishing,

Biology does have sex! Which is often a non-random process where organisms choose which characteristics, and therefore which genes, are the fittest and which to cull. The odd thing about sex is that it demonstrates the effects of unintended consequences, where apparently rational choices lead to bizarre outcomes. I'm thinking of peacock tails, birds of paradise and even human mate selection; it turns out sexual fitness and fitness for natural selection are not always the same.


Sexual selection only works within a species though. Would there be any way for the process to make a dramatic change in one go? If not, then that seems to constrain the possible paths of evolution. Evolutionary paths from A to B that require sub-optimal intermediary steps (aka less desirable for the sexual/natural selection process) would end up being culled, which makes B unattainable even if it would be favorable to A.


So, with the above in mind, my unqualified answer to your earlier question would be that these constraints seem to be different from design constraints, which don't have to have this limitation if we make a conscious decision to reach point B.

I guess you could make sexual selection a "conscious decision to reach point B" as well, but it would require tens of thousands of generations acting in unison based on artificial data - I don't think it would actually work in practice.

Reply Score: 2

Gone fishing Member since:
2006-02-22

Sexual selection only works within a species though. Would there be any way for the process to make a dramatic change in one go? If not, then that seems to constrain the possible paths of evolution. Evolutionary paths from A to B that require sub-optimal intermediary steps (aka less desirable for the sexual/natural selection process) would end up being culled, which makes B unattainable even if it would be favorable to A.


Well put! This cannot happen in biology - redesign in this sense cannot happen, and so biology is full of ad hoc fixes to sub-optimal design. Modest redesign can occur: with all this legacy code, and often significant redundancy, we can have evolutionary experimentation, but the redesign cannot significantly reduce fitness.

I know in human design this is theoretically possible, but is it in reality? I started visiting OSNews as I was interested in BeOS; it seemed a system with more potential than Win9x. However, it was less fit, and Windows with all its legacy problems is here and BeOS isn't (not making a comment about Haiku). We, as this thread points out, have significant legacy systems in IT, and redesigning these systems seems problematic. That "sub-optimal intermediary" seems problematic in IT and not just biology.

Edited 2017-02-06 15:46 UTC

Reply Score: 2

Alfman Member since:
2011-01-28

Gone fishing,

I know in human design this is theoretically possible, but is it in reality? I started visiting OSNews as I was interested in BeOS; it seemed a system with more potential than Win9x. However, it was less fit, and Windows with all its legacy problems is here and BeOS isn't (not making a comment about Haiku). We, as this thread points out, have significant legacy systems in IT, and redesigning these systems seems problematic. That "sub-optimal intermediary" seems problematic in IT and not just biology.


I see what you mean now, I wasn't thinking quite in these terms. I guess we were fortunate to be around during the earlier years.

Edited 2017-02-06 19:00 UTC

Reply Score: 2