Linked by Thom Holwerda on Wed 15th Mar 2017 23:22 UTC
Apple

Some interesting figures from LinkedIn, who benchmarked the compile times of their Swift-based iOS application. You'd think the Mac Pro would deliver the fastest compiles, but as it turns out - that's not quite true.

As you can see, the 12-core Mac Pro is indeed the slowest machine to build our code with Swift, and going from the default 24-job setting down to only 5 threads improves compilation time by 23%. Because of this, even a 2-core Mac mini ($1,399.00) builds faster than the 12-core Mac Pro ($6,999.00).

As Steven Troughton-Smith notes on Twitter - "People suggested that the Mac Pro is necessary because devs need more cores; maybe we just need better compilers? There's no point even theorizing about a 24-core iMac Pro if a 4-core MBP or mini will beat it at compiling."

Thread beginning with comment 641985
Parallel Programming
by theTSF on Thu 16th Mar 2017 19:57 UTC

The problem we have today is that processor speed has more or less peaked; we make up for it by going parallel with multiple cores. However, most developers write software with a single thread in mind, so a computer with 24 cores will perform most tasks about as well as one with 4, because (over-simplified):
Core 1 - OS
Core 2 - IDE
Core 3 - Browser
Core 4 - Compiling

Beyond that, a lot of cores will just sit idle. Having many cores makes sense for jobs that run on many threads working together, or that serve a lot of requests at the same time.

However, most programs are written top-down: one thing happens, then the next. So the advantage of multiple cores falls off rather fast.
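The diminishing return described above is captured by Amdahl's law: if only a fraction p of a program can run in parallel, n cores can never speed it up beyond 1/((1-p) + p/n). A quick sketch in Python (the numbers are purely illustrative, not LinkedIn's measurements):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Maximum overall speedup for a program whose parallel
    fraction is `parallel_fraction`, run on `cores` cores."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# Even a program that is 90% parallel tops out quickly:
print(round(amdahl_speedup(0.9, 4), 2))    # 3.08
print(round(amdahl_speedup(0.9, 24), 2))   # 7.27
# ...and can never exceed 10x no matter how many cores are added.
```

Going from 4 to 24 cores here buys barely a 2.4x improvement, and the serial 10% of the program caps the speedup at 10x forever - which is why "just add cores" stops paying off so quickly.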

Reply Score: 3

RE: Parallel Programming
by Alfman on Thu 16th Mar 2017 20:19 in reply to "Parallel Programming"

theTSF,

"Beyond that, a lot of cores will just sit idle. Having many cores makes sense for jobs that run on many threads working together, or that serve a lot of requests at the same time.

However, most programs are written top-down: one thing happens, then the next. So the advantage of multiple cores falls off rather fast."


I agree with your points: even though many of us acknowledge the limits of sequential programming, we do it anyway. We're going to have to make some big changes to take advantage of parallelism. We keep assuming this change is coming down the pipeline, but it never arrives. Even though this is impeding progress, it might never happen... much like IPv6. (Now let's duck for cover, haha).

Reply Parent Score: 2

RE: Parallel Programming
by tidux on Thu 16th Mar 2017 22:56 in reply to "Parallel Programming"

This is part of why I enjoy my current Linux setup. Other than Firefox, most of the processes I use are small and depend only on shared network or filesystem features to work together.

* mail sync is separate from mail reading

* music playback is separate from the UI

* most things live in terminals unless they're innately tied to graphics (media viewers, GIMP, etc.)

* software building is usually done through gcc, g++, or go, which all scale beautifully to the 16 threads I have available

* VMs can have a few cores and a chunk of RAM carved off for themselves without impacting the rest of the system
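The embarrassingly parallel build case mentioned above - many independent compile jobs fed to a fixed pool of workers, as with make -j16 - can be sketched in a few lines of Python (the file names, job count, and compile_unit stand-in are made up for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

def compile_unit(source_file):
    # Stand-in for invoking the compiler on one translation unit.
    return f"{source_file}.o"

sources = [f"module{i}.c" for i in range(32)]

# Like `make -j16`: at most 16 jobs in flight at once. Because the
# units are independent, throughput scales with the worker count
# (up to the number of physical cores).
with ThreadPoolExecutor(max_workers=16) as pool:
    objects = list(pool.map(compile_unit, sources))
```

This is the shape of workload that actually rewards many cores: no shared state between jobs, so there is nothing to serialize on.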

Reply Parent Score: 3

RE: Parallel Programming
by Flatland_Spider on Fri 17th Mar 2017 17:53 in reply to "Parallel Programming"

Most desktop workflows are stupidly linear, and we can't do much about that because humans. Multithreading vim only gets me so much.

It's nice to say "developers develop software with a single thread in mind," but then there is reality. Only so many workloads can be parallelized, and only so much concurrency can be squeezed out of a problem. The work doesn't scale because the data can't be broken down enough due to interdependence, or because the set isn't big enough - especially end-user stuff.

Servers are a different story because they have a lot more going on, provided the language isn't a barrier to parallelization. There are lots of opportunities to use many cores, since much of the work has low interdependence between requests.

Reply Parent Score: 1