Linked by Thom Holwerda on Thu 30th Aug 2018 21:27 UTC
Java

Java 11 has recently been feature-frozen and contains some really great features, one of which we'd like to highlight in particular. The release contains a brand new garbage collector, ZGC, developed by Oracle, which promises very low pause times on multi-terabyte heaps. In this article we'll cover the motivation for a new GC, a technical overview, and some of the really exciting possibilities ZGC opens up.

Well, would you look at that - I get to use the Java database category.

Java needs this
by uridium on Thu 30th Aug 2018 21:54 UTC
uridium
Member since:
2009-08-20

Whilst this will probably get down-voted as trolling or something, I really think this new ZGC is exciting.

Java and garbage collection let probably hundreds of thousands of mediocre-to-poor-quality devs - who can't get their heads around writing code that doesn't leak, or designing something tight and efficient - keep shipping software. So this is a really good (!) thing. Some of the horrendous things I've seen in large systems over the decades... ;)

Reply Score: 1

RE: Java needs this
by ssokolow on Thu 30th Aug 2018 23:16 UTC in reply to "Java needs this"
ssokolow Member since:
2010-01-21

Whilst this will probably get down-voted as trolling or something, I really think this new ZGC is exciting.

Java and garbage collection let probably hundreds of thousands of mediocre-to-poor-quality devs - who can't get their heads around writing code that doesn't leak, or designing something tight and efficient - keep shipping software. So this is a really good (!) thing. Some of the horrendous things I've seen in large systems over the decades... ;)


Agreed. Fundamentally, it's the same kind of "enable developers with restricted skills to successfully expand into a new area" innovation that drove a large slice of the popularity of Node.js (allows frontend devs to also work on the backend) and Rust (allows developers from managed languages to write un-managed code).

Basically, it's a win for pragmatism in a world where it simply isn't practical to service demand by training more really good developers.

(That said, I think Sun really made some missteps in designing the standard library APIs which continue to impede development of actively-used applications like JDownloader to this day.)

Edited 2018-08-30 23:18 UTC

Reply Score: 2

RE[2]: Java needs this
by uridium on Sat 1st Sep 2018 03:28 UTC in reply to "RE: Java needs this"
uridium Member since:
2009-08-20

I won't comment on Node.js as I have insufficient experience with, or interest in, it.

Okay. So I was serious when I said the new ZGC is a brilliant idea. I probably shouldn't say that, as I've had a rather large number of gigs where I've been paid and fed to come in, analyse, do call-stack analysis, and refactor code for performance (and, unfortunately, in a few instances stability) reasons. The problem as I see it is that Java devs frequently use System.gc(); and garbage collection in general as a crutch to support bad development habits. Java strongly encourages it, in my opinion.

Example: vehicle tolling operations billing software. I came in at the end to /sbin/fsck the whole thing. 26 developers (a mix of on- and off-shore). Performance on non-live canned datasets was initially around 130 cars a second on a very beefy system, which degraded to 1-2 cars a second after half an hour. The software from the sub-contractors was correct and verifiably functional. (Tick!) Call analysis showed there were >1800 method calls per vehicle across the various tolling points required for a billable journey. Each method call was a "new" allocation on top of potentially thousands of others. Why not use a stateless set of function calls between threads? "It's not how we program and this works just fine; go buy a system with bigger RAM and it'll last longer. That's a day-to-day config management problem now, not a development issue." ...Well, it didn't last. The attitude is common and stuck in my craw. I no longer rail against it.

Four months of refactoring and work later, the code was a few dozen megabytes smaller, easier to maintain (and to extend, which we did two years later), and rocketed along at close to 13,000 cars a second on the same canned data, with application uptime measured in months between OS reboots for updates. Under 30 calls and only one "new" anywhere in sight, which was freed once the transaction was complete.
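For what it's worth, the difference between the two styles can be sketched in a few lines. This is a hypothetical illustration, not the actual tolling code; all names and numbers are invented:

```java
import java.util.ArrayList;
import java.util.List;

public class Billing {
    // Allocation-heavy style: every step wraps the running value in a new object.
    static final class TollEvent {
        final long feeCents;
        TollEvent(long feeCents) { this.feeCents = feeCents; }
        TollEvent surcharge() { return new TollEvent(feeCents + feeCents / 10); } // yet another "new"
    }

    static long billAllocating(long baseCents, int gantries) {
        List<TollEvent> events = new ArrayList<>(); // all of this becomes garbage after the call
        for (int i = 0; i < gantries; i++) {
            events.add(new TollEvent(baseCents).surcharge()); // two allocations per gantry
        }
        long total = 0;
        for (TollEvent e : events) total += e.feeCents;
        return total;
    }

    // Stateless style: plain arithmetic passed between calls, no per-event allocation.
    static long billStateless(long baseCents, int gantries) {
        long fee = baseCents + baseCents / 10;
        return (long) gantries * fee;
    }

    public static void main(String[] args) {
        System.out.println(billAllocating(250, 4)); // 1100
        System.out.println(billStateless(250, 4));  // 1100, with zero garbage generated
    }
}
```

Both produce the same bill; one produces thousands of short-lived objects per vehicle for the GC to chase, the other produces none.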

Please developers, take some pride in your craft. Do things properly.

So, I think this new feature is potentially a wonderful thing that may mitigate situations such as this.

Reply Score: 3

RE[3]: Java needs this
by Alfman on Sat 1st Sep 2018 14:35 UTC in reply to "RE[2]: Java needs this"
Alfman Member since:
2011-01-28

uridium,

Why not use a stateless set of function calls between threads? "It's not how we program and this works just fine; go buy a system with bigger RAM and it'll last longer. That's a day-to-day config management problem now, not a development issue." ...Well, it didn't last. The attitude is common and stuck in my craw. I no longer rail against it.


I sympathize, man! I am very adamant about the importance of efficient coding from the start, and I also have examples of where a lack of efficiency caused big problems down the line.

I had a client with an old website that needed to be replaced, but the client wanted to go to an offshore company to save on costs. This always irks me in my field, but it's pretty common, so "whatever". Yet the website they ended up building was so pathetically slow that even individual page requests would crawl. Even with quad cores, the server could only handle a few requests concurrently before they would queue. This is pathetic, and the outsourced team tried to blame everything but the website code.

They brought me back in to fix it. Upon profiling, I quickly discovered the framework was guilty of doing the same things over and over again throughout the website. Settings were stored in XML files, which would have been fine, except that it would re-read the same XML files over and over again in different sections of the website.
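The shape of the fix is the usual one: parse each settings file once and memoize the result. A minimal hypothetical sketch (not the client's actual framework; the parser function is a stand-in for the real XML read):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

public class SettingsCache {
    final AtomicInteger parseCount = new AtomicInteger(); // instrumentation only
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final Function<String, String> parser; // stand-in for the real XML parse

    SettingsCache(Function<String, String> parser) { this.parser = parser; }

    String get(String file) {
        // computeIfAbsent parses each file at most once, even under concurrent requests
        return cache.computeIfAbsent(file, f -> {
            parseCount.incrementAndGet();
            return parser.apply(f);
        });
    }

    public static void main(String[] args) {
        SettingsCache settings = new SettingsCache(f -> "parsed:" + f);
        for (int i = 0; i < 1000; i++) settings.get("site.xml"); // 1000 page requests...
        System.out.println(settings.parseCount.get());           // ...one actual parse
    }
}
```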

I did manage to cut out about half the overhead, but I was severely constrained by their budget (in their mind they had just bought a new website and didn't want to pay me for more development, regardless of how bad the code was). I tried installing a Varnish caching front end (to hide the latency behind a fast cache). Cache hits were extremely fast, but cache misses were still excruciatingly slow, and caching broke things like logins, etc.

My assessment was that the framework was too shoddy and needed to be replaced; I quoted a third of the price they had paid the original offshore team. They balked at the price and went to another offshore company, who presumably did it for less.

This is always what gets me, at least in the SMB space where my clients are. Too many companies just want to pay the least amount possible, without regard to quality or performance. Sometimes I feel developers get a bad rap even though companies are guilty of handicapping us with trade-offs that they themselves voted for with their money ;)



Please developers, take some pride in your craft. Do things properly.


And also: Please employers, take some pride in the craft of your developers. Do things properly. ;)

Edited 2018-09-01 14:42 UTC

Reply Score: 3

RE[4]: Java needs this
by uridium on Sun 2nd Sep 2018 02:23 UTC in reply to "RE[3]: Java needs this"
uridium Member since:
2009-08-20

Alfman,

Geeze. I actually felt bad reading this and my heart goes out to you. :-\

A dear family friend from Melbourne I've known for 20+ years (Vagues, if you're watching) made an annoyingly pertinent observation a couple of years ago. Paraphrasing, he said:

"Since the 2009 GFC, businesses, especially at the lower SMB end, have changed focus and attitude dramatically. No longer do they want the best solution for their business; now, if a solution is the cheapest one that lets them limp past the post - even if it has warts and bumps, is painful to use, and is only 'good enough' to mostly work - that's the one they want!"

Feels like we're circling the drain.

Reply Score: 3

Comment by jmorgannz
by jmorgannz on Fri 31st Aug 2018 05:03 UTC
jmorgannz
Member since:
2017-11-05

I think dismissing people who code against a managed memory model as somehow lower-skilled or poor-quality is elitist and flat-out wrong.

The reality is that making a human track memory manually is a relic of the past, the type of skill that is destined to be automated away.

Yes, it does take a special type to be able to manage memory manually, safely and correctly.
It's a rare skill - so rare, in fact, that even the "pros" routinely get it wrong, leading to security disasters.

No, being able to code this way is not being more highly skilled.
It's being differently skilled.

I'd much rather have a dev's mind focused on great design, performance, and maintainability than have them waste mental resources on tracking bits.

And that's what I believe it is: a waste.

As a crude analogy, nobody in the electronics industry thinks someone who can assemble a device by hand is more skilled than someone who can design one, simply because the designer can't operate the manual tools.

Reply Score: 9

RE: Comment by jmorgannz
by kwan_e on Fri 31st Aug 2018 06:50 UTC in reply to "Comment by jmorgannz"
kwan_e Member since:
2007-02-18

Yes, it does take a special type to be able to manage memory manually, safely and correctly.
It's a rare skill - so rare, in fact, that even the "pros" routinely get it wrong, leading to security disasters.


Memory management failures don't lead to security disasters, other than through DoS attacks that create massive memory leaks. And managed memory still doesn't do anything about non-memory resources.

Security disasters are still all about accessing something you shouldn't, which happens in managed environments too, as recent processor exploits show.

Reply Score: 1

RE[2]: Comment by jmorgannz
by satai on Fri 31st Aug 2018 07:36 UTC in reply to "RE: Comment by jmorgannz"
satai Member since:
2005-07-30

val a = alloc()
dispose(a)
readto(a)

It's not a DoS-only issue but a more serious security problem.

The errors of manual memory management are not just the "forget to dispose" kind; they go the other way too: "dispose of something still useful".

Reply Score: 4

RE[3]: Comment by jmorgannz
by Brendan on Fri 31st Aug 2018 17:15 UTC in reply to "RE[2]: Comment by jmorgannz"
Brendan Member since:
2005-11-16

Hi,

val a = alloc()
dispose(a)
readto(a)

It's not a DoS-only issue but a more serious security problem.

The errors of manual memory management are not just the "forget to dispose" kind; they go the other way too: "dispose of something still useful".


There's a bunch of tools (static analysers, Valgrind, etc.) to detect those kinds of problems, and it's trivial to roll your own "malloc wrapper" (that inserts canaries, etc.) to detect them without any tools at all; so these kinds of bugs "almost never" survive into released software and therefore "almost never" become security problems.

Of course if (e.g.) Oracle releases a whole new GC that contains bugs (because it's incredibly complex for performance reasons and all complex code has bugs), billions of "previously perfectly secure" Java apps can suddenly become exploitable. For a random estimate, I'd say that there's a 20% chance of this happening soon. ;)

- Brendan

Reply Score: 2

RE[2]: Comment by jmorgannz
by jmorgannz on Fri 31st Aug 2018 08:55 UTC in reply to "RE: Comment by jmorgannz"
jmorgannz Member since:
2017-11-05

Are you serious?

Pointer arithmetic has to be one of the top enablers of security issues.

Note that manual memory management doesn't just include (de)allocation, but also addressing.

Reply Score: 3

RE[3]: Comment by jmorgannz
by kwan_e on Fri 31st Aug 2018 09:51 UTC in reply to "RE[2]: Comment by jmorgannz"
kwan_e Member since:
2007-02-18

Are you serious?

Pointer arithmetic has to be one of the top enablers of security issues.


Are you serious?

Pointer arithmetic does not count as memory management.

You can get access errors with or without memory management because they are different things.

Note that manual memory management doesn't just include (de)allocation, but also addressing.


No it doesn't. Now you're just redefining the term to cover your mistake.

Edited 2018-08-31 09:52 UTC

Reply Score: 2

RE[4]: Comment by jmorgannz
by feamatar on Fri 31st Aug 2018 10:17 UTC in reply to "RE[3]: Comment by jmorgannz"
feamatar Member since:
2014-02-25

kwan_e, I think you are a good enough developer to admit that yes, messing with pointers - that is, addressing memory - is memory management. satai illustrates the problem pretty well.

Reply Score: 2

RE[5]: Comment by jmorgannz
by kwan_e on Fri 31st Aug 2018 11:47 UTC in reply to "RE[4]: Comment by jmorgannz"
kwan_e Member since:
2007-02-18

messing with pointers - that is, addressing memory - is memory management.


Whether or not addressing memory directly is error-prone is a completely separate issue from whether it belongs in the category "memory management".

Memory management issues may lead to memory addressing problems, but that doesn't mean one operation is a subset of the other. Otherwise, you may as well argue that "integer overflow" is also "memory management" because integers are used to manipulate pointers (or as array subscripts), which can lead to the same problem. Hell, why not then argue that Meltdown and Spectre make a language like JavaScript suffer from memory-management security problems.

Quite simply, the common meaning of memory management is merely allocation and deallocation.

Things can be their own class of errors without having to be placed under the same umbrella just because there is overlap. Overlap != subset. Saying "memory access" is "memory management" asserts a subset relationship.*

I'm the last person to argue that language must be rigidly used. But meaning comes from what a definition includes, and excludes. Let's not fudge around terms until it loses all meaning.

* Programmers need to understand that not everything fits into a nice OO hierarchy where all things must be subsets (subclasses) of others. Many things overlap, rather than having to be in an is-a relationship.

Edited 2018-08-31 11:50 UTC

Reply Score: 2

RE[6]: Comment by jmorgannz
by ssokolow on Fri 31st Aug 2018 14:03 UTC in reply to "RE[5]: Comment by jmorgannz"
ssokolow Member since:
2010-01-21

Whether or not addressing memory directly is error-prone is a completely separate issue from whether it belongs in the category "memory management".

Memory management issues may lead to memory addressing problems, but that doesn't mean one operation is a subset of the other. Otherwise, you may as well argue that "integer overflow" is also "memory management" because integers are used to manipulate pointers (or as array subscripts), which can lead to the same problem. Hell, why not then argue that Meltdown and Spectre make a language like JavaScript suffer from memory-management security problems.


Messing up checked array indexing doesn't allow you to escape the array and overwrite things like stack return addresses.

Pointers represent a hole in the type system that array indexing only matches if it's unchecked (which makes it syntactic sugar for pointers).
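As a concrete sketch of that difference: in a checked language, an out-of-range index raises an error instead of reading whatever happens to sit next to the array. A hypothetical example:

```java
public class CheckedIndex {
    static String readAt(int[] buf, int i) {
        try {
            return "value=" + buf[i];
        } catch (ArrayIndexOutOfBoundsException e) {
            // The bad index is rejected at the language level; it cannot become
            // a stray read of adjacent memory or a stack return address.
            return "rejected";
        }
    }

    public static void main(String[] args) {
        int[] buf = {10, 20, 30};
        System.out.println(readAt(buf, 1));  // value=20
        System.out.println(readAt(buf, 99)); // rejected
    }
}
```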

Edited 2018-08-31 14:04 UTC

Reply Score: 3

RE[7]: Comment by jmorgannz
by kwan_e on Fri 31st Aug 2018 14:48 UTC in reply to "RE[6]: Comment by jmorgannz"
kwan_e Member since:
2007-02-18

Messing up checked array indexing doesn't allow you to escape the array and overwrite things like stack return addresses.


That still doesn't make it a memory management issue. It's an access violation issue, regardless of whether memory management is involved.

Furthermore, you don't need to escape the array or overwrite return address to make it a security issue. Even the SEI Cert Java coding standard lists certain exceptions as security concerns:

https://wiki.sei.cmu.edu/confluence/display/java/ERR01-J.+Do+not+all...
https://wiki.sei.cmu.edu/confluence/display/java/ERR02-J.+Prevent+ex...
https://wiki.sei.cmu.edu/confluence/display/java/EXP01-J.+Do+not+use...

There are plenty of other access violation type rules in the standard that don't come under "memory management".

Reply Score: 2

RE[6]: Comment by jmorgannz
by feamatar on Sat 1st Sep 2018 13:01 UTC in reply to "RE[5]: Comment by jmorgannz"
feamatar Member since:
2014-02-25

kwan_e, it is not integer overflow that is the problem, but the fact that pointers are memory references. In Java, where you don't have raw memory references, you cannot read from an invalid memory location even with an integer overflow.

In the same way, the problem of memory management is not allocation and deallocation as such, but how you track your memory.

And direct memory access is the real problem, because you can have fun like char **v and things like char **p = v + 2. Now good luck with free(v).

And the icing on the cake: you have no way to figure out whether p was freed earlier, and **p = 3 can still appear to work without an issue after that free(v).

Reply Score: 3

RE[7]: Comment by jmorgannz
by kwan_e on Sun 2nd Sep 2018 12:11 UTC in reply to "RE[6]: Comment by jmorgannz"
kwan_e Member since:
2007-02-18

I don't think you understand my point.

Pointer access is a problem.

But that doesn't make it a "memory management" problem.

Reply Score: 2

RE[2]: Comment by jmorgannz
by FlyingJester on Fri 31st Aug 2018 20:44 UTC in reply to "RE: Comment by jmorgannz"
FlyingJester Member since:
2016-05-11


Memory management failures don't lead to security disasters


Have you heard of Heartbleed?

Reply Score: 2

RE[3]: Comment by jmorgannz
by kwan_e on Fri 31st Aug 2018 23:51 UTC in reply to "RE[2]: Comment by jmorgannz"
kwan_e Member since:
2007-02-18

"
Memory management failures don't lead to security disasters


Have you heard of Heartbleed?
"

https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0160

"crafted packets that trigger a buffer over-read"

Again, memory access violations are not "memory management". Stop trying to broaden "memory management" into a meaningless term where everything vaguely related to memory counts as "memory management".
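For what it's worth, the Heartbleed pattern - trusting an attacker-supplied length - is reproducible even in a memory-safe language whenever payload and unrelated data share one buffer, which is why it reads more like an access/validation bug than a management one. A contrived sketch (all names invented):

```java
import java.util.Arrays;

public class OverRead {
    // One reused I/O buffer: a 5-byte payload followed by unrelated secret bytes.
    static final byte[] buffer = "hello<SECRET>".getBytes();
    static final int payloadLength = 5;

    static String echo(int claimedLen) {
        // Bug: trusts claimedLen instead of checking it against payloadLength.
        return new String(Arrays.copyOfRange(buffer, 0, claimedLen));
    }

    public static void main(String[] args) {
        System.out.println(echo(5));  // "hello" - honest request
        System.out.println(echo(13)); // "hello<SECRET>" - the over-read leaks adjacent data
    }
}
```

No allocation or deallocation goes wrong here; the leak is purely a failure to validate the requested range.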

Reply Score: 2

RE: Comment by jmorgannz
by Sidux on Fri 31st Aug 2018 11:05 UTC in reply to "Comment by jmorgannz"
Sidux Member since:
2015-03-10

Manual labor is still highly appreciated, but it is increasingly expensive and also hard to find, because most of the time it requires passion. The number of services running in the cloud for even the simplest "things" keeps increasing, so having people dedicated to them is a cost management doesn't want to bear anymore ("operational" cost reduction).
This is the right direction for Java to be going, and it matches the industry standard these days.

Reply Score: 2

RE: Comment by jmorgannz
by Alfman on Fri 31st Aug 2018 14:31 UTC in reply to "Comment by jmorgannz"
Alfman Member since:
2011-01-28

"jmorgannz

I think dismissing people who code against a managed memory model as somehow lower-skilled or poor-quality is elitist and flat-out wrong.

The reality is that making a human track memory manually is a relic of the past, the type of skill that is destined to be automated away.

Yes, it does take a special type to be able to manage memory manually, safely and correctly.
It's a rare skill - so rare, in fact, that even the "pros" routinely get it wrong, leading to security disasters.

No, being able to code this way is not being more highly skilled.


I do agree with the general gist of what you are saying, and I upvoted you for it; however, I think it clearly takes more skill to manage pointers correctly oneself.

As a crude analogy, nobody in the electronics industry thinks someone who can assemble a device by hand is more skilled than someone who can design one, simply because the designer can't operate the manual tools.



Skills can become redundant through automation and other advancements. But at the same time, someone who can do it by hand *IS* more skilled than someone who can only design it on paper and have an automated process take care of the rest.

Consider someone who is very good at long mental arithmetic. The fact that technology makes this skill mostly redundant doesn't mean we ought not to recognize the higher skill of people who can do it manually. The same goes for manual memory management, manual electronics assembly, blacksmithing, etc.

Reply Score: 2

RE: Comment by jmorgannz
by Brendan on Fri 31st Aug 2018 18:11 UTC in reply to "Comment by jmorgannz"
Brendan Member since:
2005-11-16

Hi,

I think dismissing people who code against a managed memory model as somehow lower-skilled or poor-quality is elitist and flat-out wrong.


In theory, yes, it's wrong. There are lots of programmers who know and use multiple languages, and these programmers don't suddenly become lower-skilled when they switch from one language (with manual memory management) to another (with automatic memory management).

In practice, automated memory management sacrifices code quality for faster development time and lower development costs; and universities are pumping out graduates with no experience (because they just graduated) who all "know" Java (because it's taught at almost every uni). Companies that are interested in sacrificing quality for lower development cost tend to hire these inexperienced graduates so they can pay them less and reduce development costs even more. On top of that, because Java is relatively high-level/abstract, people can get by without any basic knowledge of how computers actually work and without any understanding of things like "locality" and the high cost of cache misses and TLB misses. All of these things combine to create a perception of "lower skilled" among experienced developers.

The reality is that making a human track memory manually is a relic of the past, the type of skill that is destined to be automated away.


The reality is that when a language uses a GC, good programmers end up doing things like setting references to null when they're finished with them, to make sure the garbage collector can collect the memory (to avoid higher-than-necessary memory consumption and to prevent leaks caused by keeping references around after they're no longer needed).

In other words, for good programmers there's very little difference between making sure you call "free()" or "delete" (for manual memory management) and making sure you set references to null (for automatic memory management) - it's the same "programmer keeping track of what is/isn't in use" skill in both cases. The only real difference is what happens when you get it wrong (or don't bother): is the mistake detected and fixed easily (thanks to the tools and checks that are common practice with manual memory management), or will it go unnoticed and never be fixed (because that's what a GC is for!)?
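To be fair, the textbook case where nulling a reference genuinely matters is a class that manages its own long-lived storage - the classic array-backed stack leak. A minimal hypothetical sketch (not from any real library):

```java
import java.util.Arrays;

public class ArrayStack {
    private Object[] elements = new Object[16];
    private int size = 0;

    void push(Object o) {
        if (size == elements.length) elements = Arrays.copyOf(elements, 2 * size);
        elements[size++] = o;
    }

    Object pop() {
        Object result = elements[--size];
        elements[size] = null; // without this line the popped object stays reachable forever
        return result;
    }

    boolean slotIsCleared(int i) { return elements[i] == null; } // for demonstration only

    public static void main(String[] args) {
        ArrayStack s = new ArrayStack();
        s.push("x");
        System.out.println(s.pop());            // x
        System.out.println(s.slotIsCleared(0)); // true - nothing left for the GC to keep alive
    }
}
```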

Yes, it does take a special type to be able to manage memory manually, safely and correctly.
It's a rare skill - so rare, in fact, that even the "pros" routinely get it wrong, leading to security disasters.


Sounds like a nice fantasy; but if you look at lists of CVEs you won't find anything to imply a correlation between manual memory management and security problems. It's all plain old bugs (e.g. forgetting an "else"), clueless people ("Let's use XOR to encrypt our passwords!"), integer range problems (including "array index out of bounds"), etc.

I'd much rather have a dev's mind focused on great design, performance, and maintainability than have them waste mental resources on tracking bits.


First, design and implementation are separate things. E.g. you can design a bicycle without building one; and you can build a bicycle without designing one (using an existing design).

Second, for implementation, performance and maintainability are almost opposites - you're continually making compromises between them because you have to sacrifice one to get the other. Focusing on both at the same time is like trying to focus on walking north while also focusing on walking south.

As a crude analogy, nobody in the electronics industry thinks someone who can assemble a device by hand is more skilled than someone who can design one, simply because the designer can't operate the manual tools.


You're confusing design and implementation again.

For a better analogy; imagine there are 3 artists. One can spend 5 minutes with charcoal to make a sketch. One can spend 2 months with oil paint and brushes to make an extremely realistic painting. One can do both.

This is the same "development time vs. quality of finished product" compromise.

The artist that can do both is obviously more skilled (even when they happen to be doing a sketch in 5 minutes). The artist that can't do high quality/extremely realistic paintings is less skilled.

- Brendan

Edited 2018-08-31 18:14 UTC

Reply Score: 2

RE[2]: Comment by jmorgannz
by joshv on Fri 31st Aug 2018 18:44 UTC in reply to "RE: Comment by jmorgannz"
joshv Member since:
2006-03-18

If you think you have to set things to null to allow the GC to collect them, you really don't understand Garbage Collection.

The vast majority of collection happens to objects that simply fall out of scope. If you ever see a programmer setting a local variable to null at the end of a method, they don't know what they are doing.

If you've got static state somewhere then yes, you do have to be careful to either null out references or remove them from static collections, or they won't be GC'd - but that seems like common sense to me. If you create a static collection somewhere and put things in it, you are explicitly telling the VM to keep those things around. If you don't want something kept around: don't put it in there, remove it, or use something like a WeakHashMap.
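A minimal sketch of that point - a static collection is an explicit instruction to keep things reachable, and removal is the deliberate "free" (names invented):

```java
import java.util.ArrayList;
import java.util.List;

public class Registry {
    // Static state is a GC root: everything in this list stays live indefinitely.
    static final List<Object> sessions = new ArrayList<>();

    static void open(Object session)  { sessions.add(session); }
    static void close(Object session) { sessions.remove(session); } // forget this and it "leaks"

    public static void main(String[] args) {
        Object s = new Object();
        open(s);
        System.out.println(sessions.size()); // 1: reachable from a root, must be kept
        close(s);
        System.out.println(sessions.size()); // 0: collectible once no other references remain
    }
}
```

A WeakHashMap keyed on the session would be the alternative when you want the GC, rather than explicit removal, to decide the lifetime.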

Reply Score: 1

RE[3]: Comment by jmorgannz
by kwan_e on Sat 1st Sep 2018 00:05 UTC in reply to "RE[2]: Comment by jmorgannz"
kwan_e Member since:
2007-02-18

If you think you have to set things to null to allow the GC to collect them, you really don't understand Garbage Collection.


It's not that you "have to set things to null"; it's that people do do this in the hope that it will help the GC. Whether it works or not, people actually do it.

Reply Score: 3

RE[4]: Comment by jmorgannz
by feamatar on Sun 2nd Sep 2018 23:21 UTC in reply to "RE[3]: Comment by jmorgannz"
feamatar Member since:
2014-02-25

Mind you, in 12 years of coding, I have seen this in practice only from C++ programmers who were new to Java or C#.

Really, if you have a minimal understanding of how GC works, you don't do that.

Reply Score: 2

RE[5]: Comment by jmorgannz
by kwan_e on Mon 3rd Sep 2018 02:17 UTC in reply to "RE[4]: Comment by jmorgannz"
kwan_e Member since:
2007-02-18

Mind you, in 12 years of coding, I have seen this in practice only from C++ programmers who were new to Java or C#.


And I have seen this in programmers who only programmed in Java. Or in code examples from Java programmers.

Reply Score: 2

RE[3]: Comment by jmorgannz
by Brendan on Sat 1st Sep 2018 01:43 UTC in reply to "RE[2]: Comment by jmorgannz"
Brendan Member since:
2005-11-16

Hi,

If you think you have to set things to null to allow the GC to collect them, you really don't understand Garbage Collection.

The vast majority of collection happens to objects that simply fall out of scope. If you ever see a programmer setting a local variable to null at the end of a method, they don't know what they are doing.


That's equivalent to (e.g.) trying to free a local variable (or local structure, array, ...) in C - you shouldn't, with either manual or automatic memory management, because "memory management" isn't involved in the first place (in C it's typically just some space on the stack, not allocated from the heap at all).

If you've got static state somewhere, yes, you do have to be careful to either null out references, or remove them from static Collections, or they won't be GCd - but this seems to me to be common sense. If you create some sort of static collection somewhere and put things in it, you are explicitly telling the VM to keep things around. If you don't want it kept around, either don't put it in there, remove it, or use something like a WeakHashMap.


Exactly - in most of these cases it's common sense to do something to ensure the memory is freed (regardless of whether it's manual memory management or automatic memory management).

The only cases where GC actually makes a "more than superficial" difference is when it's an arrangement of multiple things; where (for GC) you can free the parent without worrying about any of the things it referred to (the children). That's where it comes down to "development time vs. quality" (e.g. a little time writing code to manually free the children before freeing the parent vs. letting GC search through multiple GiBs of stuff to find the children).

- Brendan

Reply Score: 3

RE[4]: Comment by jmorgannz
by feamatar on Sun 2nd Sep 2018 23:45 UTC in reply to "RE[3]: Comment by jmorgannz"
feamatar Member since:
2014-02-25


That's where it comes down to "development time vs. quality" (e.g. a little time writing code to manually free the children before freeing the parent vs. letting GC search through multiple GiBs of stuff to find the children).


I seriously wonder whether any of the supposed programmers writing in here took any CS classes.

What is "a little time writing code to manually free the children"?
m->v = malloc(...)
doSomething(m->v)
free(m->v)

Do you dare make that last call?
If so, my doSomething function might have a little surprise for you.


Regarding the idea of searching through multiple GiBs of stuff to find the children: that is not how it works.

Reply Score: 2

RE[2]: Comment by jmorgannz
by kwan_e on Fri 31st Aug 2018 23:55 UTC in reply to "RE: Comment by jmorgannz"
kwan_e Member since:
2007-02-18

Sounds like a nice fantasy; but if you look at lists of CVEs you won't find anything to imply a correlation between manual memory management and security problems. It's all plain old bugs (e.g. forgetting an "else"), clueless people ("Let's use XOR to encrypt our passwords!"), integer range problems (including "array index out of bounds"), etc.


Haven't you heard? People here are now redefining memory management to include "integer range problems" and "array index out of bounds".

They seem to think that if something touches "raw memory" it automatically becomes a memory management issue.

Reply Score: 2

Comment by jmorgannz
by jmorgannz on Fri 31st Aug 2018 05:07 UTC
jmorgannz
Member since:
2017-11-05

Also, I am aware it wasn't specifically stated that devs who don't manage memory manually are mediocre or poor.

So take the above as a statement rather than a reply.

Reply Score: 2

RE: Comment by jmorgannz
by avgalen on Fri 31st Aug 2018 08:12 UTC in reply to "Comment by jmorgannz"
avgalen Member since:
2010-09-23

Managing memory manually is mostly a good thing when you are working in an environment with limited hardware resources or when you are writing software that is directly accessing hardware (OS/Drivers).

However, most consumer/server software written now doesn't know or care which hardware or OS it runs on, and only interfaces with the layers beneath it via libraries, virtual machines, runtimes, etc. It doesn't make much sense to spend a lot of code/time on resource management when you don't know which resources you are actually using (5% or 50%). The only exception seems to be when garbage piles up and the GC suddenly executes, stalling your program (and others) in an unacceptable way. That seems to be exactly the situation addressed here, so well done, Java!

Reply Score: 4

RE[2]: Comment by jmorgannz
by anevilyak on Fri 31st Aug 2018 18:43 UTC in reply to "RE: Comment by jmorgannz"
anevilyak Member since:
2005-09-14

To be fair, the existing Java GCs already handled it pretty decently. The new one appears to be specifically designed to also handle it in extremely large environments, hence the mention of "very low pause times on multi-terabyte heaps". I don't know about you, but that's definitely not a situation I'm going to encounter in day-to-day use.

Reply Score: 3

RE[3]: Comment by jmorgannz
by avgalen on Mon 3rd Sep 2018 08:29 UTC in reply to "RE[2]: Comment by jmorgannz"
avgalen Member since:
2010-09-23

To be fair, the existing Java GCs already handled it pretty decently. The new one appears to be specifically designed to also handle it in extremely large environments, hence the mention of "very low pause times on multi-terabyte heaps". I don't know about you, but that's definitely not a situation I'm going to encounter in day-to-day use.

Big memory was indeed the reason they started developing the collector, but in general this seems to fix the "stop the world" problems of GC by marking and tracing memory from a limited pool of roots (scaling GC linearly - O(n) - instead of worse). Solutions like these will indeed make the most difference on big systems and might even perform worse on small ones. That is why Java has configurable GCs.

I am very curious about this part of the article, with a view to five years from now, when we might start to see multiple memory tiers in servers:

Future possibilities
Coloured pointers and load barriers provide some interesting future possibilities.
Multi-tiered heaps and compression
With flash and non-volatile memory becoming much more common, one possibility is to allow for multi-tiered heaps in the JVM where objects that are technically live but rarely if ever used are stored on a slower tier of memory.
This could be possible by expanding the pointer metadata to include some usage-counter bits and using this information to decide where to move an object that requires relocation. The load barrier could then retrieve the object from storage if it was required in future.
Alternatively, instead of relocating objects to slower tiers of storage, objects could be kept in main memory in a compressed form. This could be decompressed and allocated into the heap by the load barrier when requested.

Reply Score: 2

RE: Comment by jmorgannz
by ssokolow on Fri 31st Aug 2018 14:01 UTC in reply to "Comment by jmorgannz"
ssokolow Member since:
2010-01-21

Nonetheless, I should probably have mentioned that I count myself as one of those "managed developers incapable of un-managed work until Rust came around" that I spoke of.

Reply Score: 3

RE: Comment by jmorgannz
by Dasher42 on Fri 31st Aug 2018 17:56 UTC in reply to "Comment by jmorgannz"
Dasher42 Member since:
2007-04-05

It depends on the kind of work, really. All the business services that do a lot of grunt work without regard to latency are fine with the likes of Java.

However, I once heard a programmer from a particle accelerator lab give a talk about his work. When you spend millions of dollars to smash atomic particles together and have to detect every little thing that happens without a nanosecond's hitch, garbage collection is something that has to be done manually, and done right.

I wish programming were always to that standard: lean, simple, and secure. Unfortunately, mediocrity is pragmatic for a lot of people.

Reply Score: 2