Linked by Thom Holwerda on Tue 19th Dec 2017 19:22 UTC
Android

Today, we are excited to announce Quick Boot for the Android Emulator. With Quick Boot, you can launch the Android Emulator in under 6 seconds. Quick Boot works by snapshotting an emulator session so you can reload in seconds. Quick Boot was first released with Android Studio 3.0 in the canary update channel and we are excited to release the feature as a stable update today.

There are quite a few other improvements and new features, as well.

Thread beginning with comment 652300
gilboa
Member since:
2005-07-06

In all honesty, most of the complexity is bloat. Even Linus Torvalds has acknowledged that Linux has a bloat problem. Android is tuned for very specific hardware. If the kernel on your Android phone does "support 1000 upon 1000 of CPU types, chipsets, devices, etc.", then whoever built it did a pretty bad job of trimming it down to only the hardware present on the device. Know what I mean? The build is supposed to be optimized for that specific hardware.


The OP in this sub-thread was comparing the C64 to a PC. He wasn't talking about Android, nor was I.


Arguably not true by performance per clock...


Actually, the number of rows/sec PostgreSQL-on-Linux can store or index is a billion years ahead of any database that existed in the C64 or even the early PC days. Even if you compute odd metrics such as rows/sec per MHz or rows/sec per disk RPM, the performance difference is simply staggering.
The same goes for networking (packets/sec per MHz) and graphics (triangles/sec per MHz).


Sure, but it doesn't explain why the overhead is so much greater.


Sure it does.
Back in the C64 or even in the XT days, a graphics card was nothing more than a dual-ported memory chip.
Today a graphics card is a huge SMP CPU that's expected to push billions of vertices and handle complex requests simultaneously. How can you possibly expect to keep programming such a beast by calling 'int 10h'?

Networking is no different. How can you possibly compare a C64 modem that was barely capable of pushing 1200bps via the simplest of interfaces (a serial port) to a multi-PCI-E 100GbE network device that includes smart buffering, packet filtering and load balancing?


IMHO it's true of most code.


Being a system developer I can't say that I care much for user facing code ;)

- Gilboa

Edited 2017-12-21 13:22 UTC

Reply Parent Score: 4

Alfman Member since:
2011-01-28

gilboa,

The OP in this sub-thread was comparing the C64 to a PC. He wasn't talking about Android, nor was I.


The actual OP was likely speaking generically, but nevertheless we can talk in terms of PCs if you like. Do you have any reason to believe there's less bloat on PCs?

Sure it does.
Back in the C64 or even in the XT days, a graphics card was nothing more than a dual-ported memory chip.
Today a graphics card is a huge SMP CPU that's expected to push billions of vertices and handle complex requests simultaneously. How can you possibly expect to keep programming such a beast by calling 'int 10h'?

Networking is no different. How can you possibly compare a C64 modem that was barely capable of pushing 1200bps via the simplest of interfaces (a serial port) to a multi-PCI-E 100GbE network device that includes smart buffering, packet filtering and load balancing?


The memory-mapped devices are significantly faster than the legacy PIO ones, and on top of this, bus speeds have increased dramatically. Hardware initialization time is so fast that a stopwatch would be too slow to measure it. Most of the time is a result of software deficiencies. While complexity can contribute to software deficiencies, it's not the inherent cause of slowdowns on modern hardware that you are making it out to be.

One problem is that network drivers, graphics drivers, audio drivers, printer drivers, USB drivers, etc. come in packages of 10-100+ MB, which is quite unnecessary and can end up causing delays and consuming system resources. At least modern SSDs are so fast that they help mask the worst I/O bottlenecks caused by bloat, but a lot of it is still happening under the hood.



I appreciate that fast hardware is considered much cheaper than optimizing code. However, there's little question that legacy programmers were writing more tightly optimized software; that's really the gist of what we're saying. It was out of necessity, since on old hardware they couldn't afford to be wasteful like we are today.

Reply Parent Score: 1

gilboa Member since:
2005-07-06

The actual OP was likely speaking generically, but nevertheless we can talk in terms of PCs if you like. Do you have any reason to believe there's less bloat on PCs?


Bloat is a general term that doesn't mean much.
Moreover, I was commenting about bloat within the kernel, so I suggest we talk about specific cases of *bloat* within the kernel.

The memory-mapped devices are significantly faster than the legacy PIO ones, and on top of this, bus speeds have increased dramatically. Hardware initialization time is so fast that a stopwatch would be too slow to measure it. Most of the time is a result of software deficiencies. While complexity can contribute to software deficiencies, it's not the inherent cause of slowdowns on modern hardware that you are making it out to be.


Yeah, but you expect far more from your hardware than you did 30 years ago.
The Linux kernel boots within 2-5 seconds; within this window it needs to configure dozens of devices, load complex firmware, program these devices (in the case of GPUs, network devices and RAID controllers), set up virtualization (including virtualized devices) and start user mode.
The amount of work executed during these 5 seconds is billions of times more complex than what io.sys and msdos.sys did during the MS-DOS days.

One problem is that network drivers, graphics drivers, audio drivers, printer drivers, USB drivers, etc. come in packages of 10-100+ MB, which is quite unnecessary and can end up causing delays and consuming system resources.


A. This is an issue only on Windows machines. Linux drivers are quite small and never bundle a billion useless applications.
B. Most of the bloat comes from user-facing applications (again) that have nothing to do with the actual driver.
C. Having a large number of garbage applications doesn't necessarily have any effect on actual performance. E.g. nVidia's fat management application doesn't necessarily cost a single FPS in any game it manages (quite the opposite).

At least modern SSDs are so fast that they help mask the worst I/O bottlenecks caused by bloat, but a lot of it is still happening under the hood.


Sure it does.
I doubt that you'd be willing to live with C64 functionality these days.
Anecdotal evidence: I'm considered a Neanderthal because I can actually work a full day out of a text console + VIM.

I appreciate that fast hardware is considered much cheaper than optimizing code. However, there's little question that legacy programmers were writing more tightly optimized software; that's really the gist of what we're saying. It was out of necessity, since on old hardware they couldn't afford to be wasteful like we are today.


Look, being a kernel/system developer, I still write asm code from time to time. But to be honest, even when I really, really try, I seldom get any meaningful performance increase compared to cleanly written C code + GCC optimization. (And writing cross-platform asm is a real b**ch.)
Even if you're talking about UI: it's very easy to write highly optimized code when you're dealing with simple requirements. Complexity (what you consider bloat) usually trails requirements.
E.g. it's fairly easy to develop a simple-yet-fast VI; it's 1000 times harder when you try to develop a fully fledged IDE with a debugger, UI designer, syntax checker, multi-language spell checker, project tools, testing tools, built-in browser, cloud support and what not.

- Gilboa

Edited 2017-12-23 07:55 UTC

Reply Parent Score: 2