Google has released the first Developer Preview for Android O, which will probably see a final release sometime in the fall. There are a lot of changes in this one, but the biggest is probably the limits Android O is going to place on applications running in the background.
Building on the work we began in Nougat, Android O puts a big priority on improving a user’s battery life and the device’s interactive performance. To make this possible, we’ve put additional automatic limits on what apps can do in the background, in three main areas: implicit broadcasts, background services, and location updates. These changes will make it easier to create apps that have minimal impact on a user’s device and battery. Background limits represent a significant change in Android, so we want every developer to get familiar with them. Check out the documentation on background execution limits and background location limits for details.
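To give a rough idea of what the background service limits mean in practice – this is a sketch of mine using the documented JobScheduler pattern, not Google’s sample code, and SyncJobService is a made-up name – work that used to sit in a long-running background service gets scheduled as a job and runs only when its constraints are met:

```java
// Sketch only: hand background work to the JobScheduler instead of keeping a
// service alive. SyncJobService is a hypothetical name; the framework classes
// (JobService, JobInfo, JobScheduler) are real Android APIs (API 21+).
// The service must also be declared in the manifest with
// android:permission="android.permission.BIND_JOB_SERVICE".
import android.app.job.JobInfo;
import android.app.job.JobParameters;
import android.app.job.JobScheduler;
import android.app.job.JobService;
import android.content.ComponentName;
import android.content.Context;

public class SyncJobService extends JobService {
    private static final int JOB_ID = 1;

    // Ask the system to run the work when the constraints are satisfied,
    // rather than keeping our own process alive in the background.
    public static void schedule(Context context) {
        JobInfo job = new JobInfo.Builder(JOB_ID,
                new ComponentName(context, SyncJobService.class))
                .setRequiredNetworkType(JobInfo.NETWORK_TYPE_UNMETERED) // Wi-Fi only
                .setRequiresCharging(true)                              // and while charging
                .build();
        JobScheduler scheduler =
                (JobScheduler) context.getSystemService(Context.JOB_SCHEDULER_SERVICE);
        scheduler.schedule(job);
    }

    @Override
    public boolean onStartJob(JobParameters params) {
        // Real work belongs on a background thread; return true if it keeps
        // running asynchronously and call jobFinished() when done.
        jobFinished(params, /* wantsReschedule= */ false);
        return false;
    }

    @Override
    public boolean onStopJob(JobParameters params) {
        return false; // do not reschedule if the system stops the job early
    }
}
```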
There’s more – improvements in keyboard navigation, Notification Channels for managing notifications, picture-in-picture on smartphones, wide-gamut colour support for applications, several new Java 8 features, and more. A big one for audio people: Sony has contributed a lot of work to audio in Android O, adding the LDAC wireless audio codec.
It’s available on the usual Nexus devices.
“Sony has contributed a lot of work to audio in Android O, adding the LDAC wireless audio codec.”
Great, just what Android needs, yet more proprietary closed standards and blobs of code no one else can use without a license and NDA – assuming Sony would even license it should someone else want to use it.
Not to mention completely useless for headphones, since almost any codec is transparent at half the bandwidth allowed by A2DP in Bluetooth 2.1… absolutely no need for anything proprietary.
Except that it sounds a lot better than normal bluetooth. I think it’s nice for people who want the option.
Actually, it’s not. Bluetooth is perfectly capable of handling 16-bit/44.1 kHz playback (roughly 175 kB/s), which is considered the optimal digital playback format based on empirical analysis. People like to say higher sample rates and bit depths are better for playback as well as production, but the scientific evidence doesn’t support that conclusion at all.
https://people.xiph.org/~xiphmont/demo/neil-young.html
The takeaway: “Empirical evidence from listening tests backs up the assertion that 44.1kHz/16 bit provides highest-possible fidelity playback.”
Look under the subtitle “Listening Tests”, where it gives links to scholarly research along with links to discussions on the topic, if you don’t want to read through all the signal theory and human auditory anatomy.
16/44.1 doesn’t require any proprietary codecs beyond what Bluetooth already carries to meet such a target.
It’s good to have the choice.
BTW, restrictions on background tasks are very welcome. I hate it when app vendors think they have dominion over my device because I gave their app permission to be installed.
The codecs aren’t for higher than needed bitrates, but for better compression. I don’t think any speaker or headphone uses uncompressed PCM audio over bluetooth.
And which Bluetooth profile are you using that provides 1.75 Mbps of bandwidth? And what audio devices are you connecting to that support that?
A2DP provides less than 330 kbps (kilobits) of bandwidth for audio. And has pretty bad codecs running over that.
Apt-X provides better audio quality using the existing A2DP bandwidth.
LDAC provides 3x the bandwidth of A2DP (up to 990 kbps), along with better codecs, for overall better sound quality.
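To put those bitrates in perspective (my rough numbers, using nominal maximums only – raw bitrate says nothing about codec efficiency):

```java
// Nominal maximum bitrates mentioned in this thread, compared against
// uncompressed 16-bit/44.1 kHz stereo PCM. Figures are approximate.
public class CodecBitrates {
    public static void main(String[] args) {
        double pcmKbps = 44.1 * 16 * 2;          // ~1411 kbit/s of raw CD audio
        String[] names = {"SBC (A2DP)", "LDAC"};
        int[]    kbps  = {328, 990};             // typical SBC max, LDAC max
        for (int i = 0; i < names.length; i++) {
            System.out.printf("%-10s %4d kbit/s (~%.0f%% of CD PCM)%n",
                    names[i], kbps[i], 100.0 * kbps[i] / pcmKbps);
        }
    }
}
```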
Bluetooth itself easily provides considerably more bandwidth when it’s utilized. And I wrote 175 KILObytes, not megabytes. In fact, Bluetooth 4 & 5 can handle that easily, and there’s no reason vendors can’t implement a royalty-free codec over Bluetooth via the data channels. As I keep saying, LDAC is over-engineered and proprietary. Android doesn’t need this IN THE UPSTREAM. If you want LDAC, then buy a Sony device, but it shouldn’t be in the upstream system with code that can’t be used by anyone else.
175 kiloBYTES/sec is 1.75 megaBITS/sec.
What? No it isn’t. It’s 1.4mbps
https://www.wolframalpha.com/input/?i=175+kilobytes+per+second+in+me…
Also, feeling anal retentive today, I will say that it is surely not exactly 1.4 Mbps either, once you dice the PCM data into packets and wrap them in the Bluetooth protocol.
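For the record, the arithmetic behind the 175 kB/s figure (my own sanity check, not from either poster):

```java
// Uncompressed 16-bit/44.1 kHz stereo PCM, before any packet or protocol
// overhead. This is the figure the "175 kB/s" and "1.4 Mbps" numbers refer to.
public class PcmBitrate {
    public static void main(String[] args) {
        int sampleRate    = 44_100; // samples per second, per channel
        int bitsPerSample = 16;
        int channels      = 2;
        int bitsPerSecond = sampleRate * bitsPerSample * channels; // 1,411,200
        System.out.printf("%.3f Mbit/s%n", bitsPerSecond / 1_000_000.0); // ~1.411
        System.out.printf("%.1f kB/s%n",   bitsPerSecond / 8_000.0);     // ~176.4
    }
}
```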
phoenix,
You mean 1.4mbps, right?
Anyways, I cringe every time the xiph audio link comes up because of some of their inaccurate claims. They put forward the science behind the Nyquist frequency in explaining why 44 kHz sampling is enough to cover our 22 kHz hearing range, and if they had stopped there I’d be ok with it, but then they misrepresent higher sample rates as being worse than 44 kHz, which is a falsehood.
They claimed that anything over 44 kHz is not perceivable; however, to the extent that it were audible, that would directly contradict their earlier claim, and technically the higher-fidelity signal that best matches nature is the “better” audio, assuming the intention is to reproduce actual sounds as they happened. After all, there’s no 44 kHz limit to be found in nature, so unless they want to criticize nature for producing >44 kHz audio, they have no business suggesting higher fidelity is worse.
I believe what they might have meant was that some ADC implementations that advertise 192 kHz fail to achieve that level of response due to poor circuitry, inadequate calibration, or whatever. I could accept criticism on that basis, but then they should at least retract their blanket statement that 192 kHz is considered worse and instead emphasize that some implementations fail to achieve the claimed fidelity (hopefully with evidence to back their claim). Ironically, I agree with them that I can’t hear above 22 kHz, but without evidence for their claims about the inferiority of 192 kHz sampling, they are just as guilty as those they’re trying to debunk, which bothers me.
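For reference, this is the standard statement of the sampling theorem both sides are arguing around (my summary, not a quote from the xiph article):

```latex
% Shannon–Nyquist sampling theorem: a signal containing no spectral content
% above f_max is exactly determined by its samples taken at rate f_s whenever
% f_s > 2 f_max. With human hearing topping out around roughly 20–22 kHz,
% f_s = 44.1 kHz already satisfies the condition; higher sample rates add no
% information within the audible band.
\[
  f_s > 2 f_{\max}
  \quad\Longrightarrow\quad
  x(t) \;=\; \sum_{n=-\infty}^{\infty} x\!\left(\frac{n}{f_s}\right)
             \operatorname{sinc}\!\left(f_s t - n\right)
\]
```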
Anything above 44 kHz means nothing, whether or not it is audible to more than 0.1% of the world’s population. Music is sold almost exclusively as 16-bit/44.1 kHz samples, the market for super hi-fi is minuscule, and almost nobody has ever shown any interest in it whatsoever.
Anyway, if there were any measurable advantages to super hi-fi, they surely would not be perceivable anywhere but in expensively conditioned rooms and on very expensive equipment. Pumping 96 kHz, 24-bit sound out of your iPhone’s headphone socket (oops! it doesn’t have one anymore!) and then shoving it into your ears together with all the noise in your car, your office or the subway makes no sense.
Lobotomik,
I agree. Although I can see some genuine merit in recording at 24-bit HDR. Even if 16 bits is enough to represent the output, the fact that the *input* will never be perfectly calibrated to 16 bits means it would be practically impossible to achieve perfect 16-bit resolution straight off the bat during a live recording. Either you risk overloading the 16-bit register during recording, or you deliberately attenuate the signal to less than the 16-bit range just so you don’t risk overflowing it. But if you recorded at 24-bit HDR to begin with, you wouldn’t have this dilemma.
Theoretically, 192 kHz could be put to good use by a microphone array to cancel out unwanted noise (think “MIMO” technology, but applied to audio). Note I’m not saying the output needs to be 192 kHz, but having a 192 kHz input would allow software to triangulate the source of audio with far more precision than 44.1 kHz, and this information could be used to clean up environmental noise even more than would otherwise be possible. I don’t know if anyone actually does this, but it could help separate noise from the audience from the performers on stage, for example.
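To put rough numbers on both of those points (my back-of-the-envelope arithmetic, not from the comment above):

```java
// Ideal quantization dynamic range per bit depth, and how far sound travels
// in one sample period (which bounds the timing resolution a mic array gets).
public class AudioNapkinMath {
    public static void main(String[] args) {
        // Dynamic range of ideal N-bit quantization: ~6.02*N + 1.76 dB
        for (int bits : new int[] {16, 24}) {
            System.out.printf("%d-bit: ~%.0f dB dynamic range%n", bits, 6.02 * bits + 1.76);
        }
        // Distance sound travels in one sample period, at ~343 m/s
        for (int rate : new int[] {44_100, 192_000}) {
            System.out.printf("%d Hz: one sample ~ %.1f mm of travel%n",
                    rate, 343.0 / rate * 1000.0);
        }
    }
}
```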
No, what they meant was that with crappy hardware, the noise reconstructed from higher sample rates can lead to audible artefacts. If you take an audio spectrometer and actually compare things, there are visible artefacts well within the normal human hearing range in the output from most DACs that cost less than a few hundred dollars.
They have other issues, namely that the Nyquist limit is an idealized factor and they discuss it in ways that don’t matter. Sure, you can sample a 22.05 kHz sine wave at 44.1 kHz and get an accurate reconstruction, but you have to have the correct dithering algorithm to do so, because you only have 2 samples for every cycle, and if you drop the waveform you are sampling down by 50 Hz, your reconstruction will fall apart. From a practical perspective, you actually need closer to 4-8 times the highest frequency you want to reconstruct to get things accurately. Taken together with the fact that most music doesn’t have any frequency components above about 5.5 kHz (which they also fail to mention, and which is an important part of why 44.1 kHz has remained the standard for so long), this in turn means that with a reasonable dithering algorithm you can actually cover almost 100% of what most people actually hear in any given musical or vocal recording perfectly with a 44.1 kHz sample rate. Based on this, and the fact that the higher sample rate can cause other issues with cheap (read as ‘prevalent’) hardware, there is no practical advantage to using a sample rate over 44.1 kHz for non-scientific or studio usage.
The problem overall though is that most people don’t understand that the biggest contributing factors to poor quality are actually the encoding and the hardware, not the sample rate and bit depth. Killing off MP3 and most other lossy codecs for general distribution would do more for improving audio quality than amping the sample rate up to 192k and insisting on 24-bit samples ever would.
And that’s why we have Bluetooth higher than 2.1, no?
While adequate bandwidth is available, using it consumes power. The more data you transmit or receive, the more power is consumed by the radio. If you could maintain the same quality while reducing power consumption, that would be useful.
Meanwhile my S7 is stuck at 6.0 for some reason.
Weird. My brand new Galaxy S7 Edge got the Nougat update right as I booted it up for the first time. Then again – carriers don’t fuck with phones in The Netherlands, and mine is bought off-contract anyway.
I live in Belgium.
Bought mine off contract too.
The edge seems to have gotten the update sooner.
As the Android people on here are so fond of saying: you bought it. You knew what you were getting into, going with Samsung.
Sucks from the other side…
Nothing from this list has been addressed: http://itvision.altervista.org/why-android-sucks.html
This is truly appalling.
Have you read the article you linked? He said that although these problems are not yet fixed, Android is still the best mobile OS.
Being the best mobile OS doesn’t make it perfect or resolve long-standing issues.
In the release notes they mention support for wider gamut displays – but I’m curious if this means they’ve added true color management to the system (it doesn’t have it in the current release) or if they are just using some kind of flatter hack.
Anyone have more info on that?
Meanwhile, rumor has it I might get my Nougat update this month… but probably not.
Be careful what you wish for. I have a Nexus 6P and Nougat has been fraught with issues from fast battery drain to Bluetooth connectivity to missing features compared to the cheaper Nexus 5X. And this is a Google phone for the love of Pete! I would leave the house in the morning with 90+% battery and with my phone in my coat pocket all day at work (we can’t have them in the labs) power would be in the 20% range at the end of the day. Bluetooth wouldn’t connect with my car at all for many months whereas all of my previous Nexus phones worked perfectly. The battery and bluetooth issues appear to have finally been addressed with version 7.1.2 (which is technically a beta) but it looks like you’d have a very long wait to get that high if you haven’t yet received version 7.0. With all that being said, this is my last Google phone and I’ve been exclusively a Google phone customer since I owned the Nexus One. I’m eyeing the Samsung S8 when it comes time to upgrade. Good luck!
cmost,
I have a BLU phone running 5.1, and every once in a while it gets into a state where the battery loses almost its entire charge in a day. It comes and goes, and I wasn’t able to figure out why. GSAM battery monitor (root mode) pointed to “Kernel (Android OS)” as the culprit, not a userspace app, yet almost all websites tend to blame applications, so out of desperation I tried background application blockers; of course, those did nothing to fix the kernel bug. Because of the severity of the drain, I had to keep the phone plugged in any time I was at my desk. I tried powering the phone off and on, but all to no avail.
Then I came across a post for an unrelated 4.x phone with the same symptoms. It said to unplug it and then power it off and back on – that’s it. I was resetting the phone, but I didn’t register the significance of unplugging it first: while I was at my desk I would reset the phone while it was plugged in. When I followed the unplug/power-off/power-on sequence consistently, it actually worked. The Android kernel would finally let the CPU sleep, and the battery usage graph showed a sudden and dramatic improvement! It’s still a major Android bug, but it’s a big relief now that I know how to work around it consistently. Since my last charge it has been running for 5 days and indicates 50% remaining.
I can’t say if your Nexus has the same bug – if so, shame on Google – but it’s worth a shot. Let me know if that does anything.
Why is Opus not used as an official BT codec? It is freely available, patent-free, as high quality as any, and offers a wide gamut of bandwidths and latencies.
Not only that, but heavy Bluetooth stakeholders like Broadcom have invested in developing that codec, so it is not something they are forced to adopt from a competitor. And I believe Google uses it for their web video format, so there are probably hundreds of millions of devices that secretly support it.
I thought the BT codec list was set in stone (SBC, aptX and nothing else), but that is clearly not the case if Apple and Sony can use their own.
Does anybody know why Opus seemingly stopped in its tracks the moment it was finalized?
Because, for representing data other than spoken word, the reference Opus encoder is horrible. In particular, no matter how I adjust the parameters, it does a horrible job at the extreme low and extreme high end of wide-ranging music. The best way I can describe it is that it sounds like there are grains of sand in the audio.
If they want Opus to be taken seriously, they need a better reference encoder. The codec itself is capable, but it is not currently demonstrated well just how capable it really could be. I have a suspicion that, if Opus were used, they’d simply stick the reference encoder in place, and that would be terrible both for the users and for Opus’ future.