Linked by David Adams on Thu 20th Nov 2008 04:19 UTC
General Unix Linux and other Unix-like operating systems use the term "swap" to describe both the act of moving memory pages between RAM and disk and the disk space that holds them. It is common to use a whole partition of a hard disk for swapping. However, with the 2.6 Linux kernel, swap files are just as fast as swap partitions. Many admins (both Windows and Linux/UNIX) still follow an old rule of thumb that your swap partition should be twice the size of your main system RAM. Say I have 32 GB of RAM: should I set swap space to 64 GB? Is 64 GB of swap space really required? How big should your Linux/UNIX swap space be?
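One practical upshot of the summary's point about 2.6-kernel swap files is that swap can be added after installation without repartitioning. A minimal sketch of creating a 1 GiB swap file (the path /swapfile and the size are arbitrary choices for illustration; the commands need root):

```shell
# Create a 1 GiB file of zeroes; dd avoids sparse holes, which swap cannot use
dd if=/dev/zero of=/swapfile bs=1M count=1024

# Restrict access: a world-readable swap file leaks memory contents
chmod 600 /swapfile

# Write the swap signature, then enable the file as swap space
mkswap /swapfile
swapon /swapfile

# Verify the new space is active
swapon -s
```

To make the file survive reboots, it would also need an entry such as `/swapfile none swap sw 0 0` in /etc/fstab.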
Thread beginning with comment 337869
RE[2]: XDMCP/NX server
by sbergman27 on Thu 20th Nov 2008 13:54 UTC in reply to "RE: XDMCP/NX server"
sbergman27 Member since:
2005-07-24

Wouldn't it make sense to have more than one machine do what you are making that poor server do?

No. More machines mean more administration. I cannot and would not go to management to say we need to allocate funds to buy another server to increase our administration load. People pay big bucks for virtualization tools to *consolidate* their servers. Why would I push for server *proliferation*? I segregate functions only when security or other practical considerations require it.

BTW, the C/ISAM -> SQL gateway operates on the COBOL files used by the POS/Accounting system, and the desktop users use the POS/Accounting system as one of the applications on their desktops. It makes a great deal of sense to keep all the disk and socket accesses local.

Although performance is only "OK" right now, it will be excellent in about a week when we add the memory. The dual 3.2GHz Xeons are only moderately loaded, being shared by only 60 users. (Multicore is vastly over-hyped for standard business desktop workloads. But that's a topic for another post.)

Even after all these years, I still find Unix/Linux multi-user efficiency to be amazing.

Edited 2008-11-20 14:01 UTC

Reply Parent Score: 3

RE[3]: XDMCP/NX server
by google_ninja on Thu 20th Nov 2008 17:55 in reply to "RE[2]: XDMCP/NX server"
google_ninja Member since:
2006-02-05

The only thing I would say is that that works fine as long as you won't need to scale dramatically upward, and as long as you never expose that machine outside your firewall. As soon as you let users remote-desktop in, I would move the ISAM datastore onto its own, more locked-down machine, or at the least look at virtualization options (which make scaling a lot easier, too).

Although performance is only "OK" right now, it will be excellent in about a week when we add the memory. The dual 3.2GHz Xeons are only moderately loaded, being shared by only 60 users. (Multicore is vastly over-hyped for standard business desktop workloads. But that's a topic for another post.)


Dual core makes sense: one core dedicated to your active application, the other dedicated to everything else going on in the system, assuming the workload is greater than email/productivity apps. You have to work a lot harder to find a case where quad core makes any sort of sense on a business machine, let alone the stuff coming down the pipe like Nehalem.

It will be really interesting to see what happens in the programming world over the next few years, since the languages we have right now make multithreading so painful. It is very conceivable that we will finally see the rise of LISP (or some other functional language) that everyone has been jokingly predicting for so many years now.

Reply Parent Score: 3

RE[4]: XDMCP/NX server
by sbergman27 on Thu 20th Nov 2008 18:25 in reply to "RE[3]: XDMCP/NX server"
sbergman27 Member since:
2005-07-24

The only thing I would say is that that works fine as long as you won't need to scale dramatically upward, and as long as you never expose that machine outside your firewall.

This client buys a new server every three years, around the time the manufacturer's hardware support contract runs out. Next spring, we'll have a very economical new server that will scale to 4 or 5 times the load that this one handles. Figure about 300 users. Not that we'll reach that number of users during its lifetime.

I imagine that some admins would try to overcomplicate this with multiple servers talking over the network, virtualization, and a generally buzzword-compliant topology. And they would end up with a slower, more expensive, less reliable, harder-to-administer system. Since I semi-retired, I'm more interested than ever in systems that just work. And this config has proven itself over 6 years, 3 servers, the addition of a couple of remote offices, and a tripling of users.

Massive scaling, of the kind that requires the whole "shared nothing" approach, is interesting to think about... but very unlikely to be needed.

Edited 2008-11-20 18:26 UTC

Reply Parent Score: 3