It seems Google is hell-bent on removing anything from Android that makes the platform stand apart from iOS. One of the features of Android and the Play Store that users of rooted and/or de-Googled phones will be familiar with is SafetyNet Attestation, which Android applications can use to check, among other things, whether the device they're running on is rooted, and take action based on that information. Notoriously, some banking applications on Android will refuse to work on rooted and/or de-Googled devices because of this.
Earlier this year, at Google I/O, the company unveiled the successor to SafetyNet Attestation, called the Google Play Integrity API, and it comes with a whole lot more functionality for developers to control what their application can do based on the status of the device and of the application binary in question. Play Integrity will let the developer's application know if its binary has been tampered with, if Google Play Protect is enabled, if the Android device it's running on is "genuine", and a whole lot more.
Based on that information, the application could warn users that their device is rooted when they're about to do something sensitive, or it could throw up its hands entirely and refuse to function at all – and there's really not much the user can do about it. A new capability of the Play Integrity API is that developers can now also determine where their application came from – i.e., whether it was sideloaded or installed through a non-Play application store – and then throw up a dialog prompting the user to switch to the version from the Play Store instead. Doing so will delete the original binary and all its data, and replace it with the Play Store version.
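To give a sense of what this looks like from the developer's side, here is a minimal sketch of the kind of branching an app's backend might do on a decoded integrity verdict. The field and value names (`deviceRecognitionVerdict`, `MEETS_STRONG_INTEGRITY`, and so on) follow Google's public documentation, but the decision logic is purely illustrative; every app picks its own policy.

```python
# Sketch of backend policy on a decoded Play Integrity verdict.
# Field/value names follow Google's public docs; the thresholds
# chosen here are illustrative, not a recommendation.

def decide_access(verdict: dict) -> str:
    """Return 'allow', 'warn', or 'block' for a decoded integrity verdict."""
    device = verdict.get("deviceIntegrity", {}).get("deviceRecognitionVerdict", [])
    app = verdict.get("appIntegrity", {}).get("appRecognitionVerdict", "UNEVALUATED")

    # Binary tampered with or not recognized by Play: hard block.
    if app == "UNRECOGNIZED_VERSION":
        return "block"
    # Device passes the strictest (hardware-backed) check.
    if "MEETS_STRONG_INTEGRITY" in device:
        return "allow"
    # Device looks genuine; a cautious bank might still gate
    # sensitive actions at this level.
    if "MEETS_DEVICE_INTEGRITY" in device:
        return "allow"
    # Rooted/custom-ROM devices often only meet basic integrity:
    # this is where many apps choose to warn or refuse to run.
    if "MEETS_BASIC_INTEGRITY" in device:
        return "warn"
    return "block"
```

The point of the sketch is that the user has no say in any of these branches: whether a rooted device lands in "warn" or "block" is entirely the developer's call.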
The problem here is that the only other option is to cancel, and not have the application load at all.
As you can see, the remediation dialog tells you to “get this app from Play” in order to continue using it. There’s an option to close the dialog, but there’s no way to bypass it entirely. If you close the dialog, a response is sent to the app that lets the developer know so they can decide whether to continue blocking access.
↫ Mishaal Rahman at Android Authority
Several applications appear to already be using this new capability, and while it won't mean much for people running Google's, Samsung's, or any other "blessed by Google" version of Android on unrooted devices, people running, say, /e/OS, GrapheneOS, LineageOS, or any other de-Googled and/or rooted device are going to have a very bad time if more and more applications adopt this capability. If you're running a device without Play Services, relying solely on the vast and varied library of applications on F-Droid, for instance, while also sideloading a few applications only available in the Play Store, you could very well run into problems.
We'll have to see just how widespread this capability becomes, but I can already foresee this becoming yet another major headache for anyone trying to use a smartphone that isn't blessed by Apple or Google. Personally, I'm lucky in that Swedish banking and ID applications work on de-Googled Android phones, but with the expanding reach of the Play Integrity API, as well as possible "let's enable this by default" shenanigans by Google, I'm definitely worried about whether that will remain the case.
One of the reasons I jumped into Android was the ability to downgrade Play Store apps by downloading an older version of the app from a place like APKMirror and sideloading it (you can't do this for purchased apps, obviously, but it works for free apps). This trick is also useful for older phones, since the Play Store no longer shows you the last version of an app compatible with your Android version: if your device can't run the latest and greatest version of an app, the Play Store won't serve you any version at all, and all you get is an error message.
So, it’s not only users rocking de-Googled Android phones that are getting the shaft, people like me with “blessed” Android devices are getting the shaft too.
So, now that sideloading has come to iOS for EU users, Android’s remaining advantage is that Google doesn’t charge a Core Technology Fee. Yet.
I hope rooting will become 'mainstream' again, if for nothing else, then for this. About 5 years ago most open source devs swallowed Google's pill that rooting is a security risk, and now I have to manually patch the boot.img every time Calyx decides to update itself.
You kind of answered your own question: Rooting won’t become “mainstream” again because several popular apps consider rooted devices a security risk and will refuse to run on rooted devices. I mean, Netflix doesn’t work on rooted Android devices, nor do some banking apps, so these two are already a huge disqualifier for going mainstream.
And yes, you can play the cat-and-mouse game by deploying countermeasures against those apps' countermeasures, but the reason rooting was semi-mainstream was that you didn't have to do that; rooting was a set-it-and-forget-it affair.
kurkosdr,
This is what I think as well. It’s a way to effectively ban rooting for most users by making it useless without officially making android incapable of rooting.
I don’t want my device tethered to a google account, but I’ve had to give up some apps. Some that I needed for work required me to get a 2nd phone (I’m still upset over this). The google dependencies are a big part of the android ecosystem and for a normal user who doesn’t want to fight technology, well life is easier if they embrace google.
This of course is by design. This wouldn’t be as bad if there were more viable alternatives, but the duopoly leaves us stuck embracing google or apple, which is part of the problem. There’s tons of pressure on consumers to choose a duopoly phone because of 3rd party apps that aren’t subject to antitrust proceedings against apple or google. Google Play Integrity API exploits this, google can say they didn’t block non-google configurations, 3rd parties did.
Alternatives are in a really hard spot. Can antitrust regulators do anything to allow owners to choose alternatives without being blocked? Theoretically they could break up google’s exclusive monopoly over device attestation and let owners choose a different attestation provider (such as /e/ or lineageos). Otherwise, if nothing is done, Google Play Integrity API becomes inherently discriminatory to alternatives and that will be that.
Google’s trick is that it outsources Google lock-in to the developer in a voluntary fashion. Sure, you can develop an Android application that works with rooted AOSP and you can also use a non-Google identity provider (Outlook for Android does this), but most developers lazily assume the existence of a Google identity provider in Android devices and some developers go even further and use Google Play Integrity API and ban rooted devices. But none of this is required to develop an Android app, and this keeps Google in the clear.
So yes, you can develop your app to use a variety of attestation/integrity APIs, but most other developers assume the existence of Google Play Integrity API if they want to use an attestation/integrity API.
kurkosdr,
Yep. Agree with you on every point.
For now, some devices – such as my fairphone4 – pass the integrity checks with e/os and can thus install these apps fine. They should theoretically continue to be able to run them. But time will tell if google manages to tighten up the integrity checks, this is a cat and mouse game not dissimilar to the adblocking situation.
I (somewhat) understand the possible security risks but this would be a highly uncompetitive and thus risky move for google.
A status quo was somehow reached with video DRM in browsers.
Will the Aurora Store be deemed as “installed by Play”? I refuse to create a Google account to use the Play Store.
And here’s the pinch. What alternative is there? iOS?
Google and Apple effectively killed all the alternatives in the market, Blackberry, windows, WebOS etc.
So we have no choice but to lose more access to, and choice over, the phones that have become integral to our day-to-day lives.
Adurbe,
Yes, this is the problem. Too much of our day to day lives are dependent on the apple/google duopoly, and unfortunately both are designing technology to block alternatives and restrict owner control. Technology is being designed to serve their interests rather than ours. I do not approve of this, but this is happening whether I like it or not. Without some kind of intervention, this is the future and consumers will have little say.
Supporting normal installation of applications for Android and iOS should be mandated by law.
With A/B partitions we could have one system image with it and another without, and switch whenever we want. I wonder if somebody has already done this.
yeah no…. i am not getting this update. I have gotten so many antifeatures even on lineageos that enough is enough. Chrome is purged and all google apps as well. It works for me until it does not and i accept that. No more updates, EVER.
I’m obviously in the minority here, but if the app is a closed source, copyrighted application, I don’t see why they shouldn’t optionally have the capability to require it be installed from a trusted source. The application is licensed to your use with certain rights, if you don’t agree to them don’t install it. I’m not sure what applications from the google play store are typically sideloaded, or really why, but I can think of many where that is a really really really bad idea.
I’ve got a three sim from the UK. If I want to load the three app, since I don’t live in the UK, I can’t get it from the store. I have to sideload it. Just a legitimate example. I hope I still will be able to do so.
Bill Shooter of Bul,
You probably already realize this, but it’s not that owners don’t want trusted applications. The real problem is that they are forced to go through the mobile duopoly to get them. IMHO this is unreasonable and anticompetitive. Better solutions are possible, but get overlooked in favor of solutions that keep us dependent on apple & google.
This doesn’t resolve the antitrust concerns though. It’s like other monopolized industries where consumers are forced to go to the monopoly, whether they want to or not. Want to see your favorite band? Tough, either deal with the monopoly or you don’t get to see them.
https://www.cnbc.com/2023/01/25/the-live-nation-and-ticketmaster-monopoly-of-live-entertainment.html
Monopolies and this lack of competition are becoming a huge problem for us. In theory we tolerate the negatives of capitalism because a healthy free market promotes efficiency and companies competing for customer business. Absent this competition however, we’re just left with all-powerful corporations squeezing captive users for all they can without us being able to go elsewhere because the alternatives are blocked.
The duopoly is what it is. An app developer cannot break out of it; their role is to deliver the best app they can within the existing technology and policy constraints. If another app store pops up that supports the required features, sure, the app can be added to that one too without much problem for app developers. This is very orthogonal to that concern. If you as an end user feel like you're sticking it to Apple or Google? Ok, have fun with that.
Bill Shooter of Bul,
I cannot agree that we should just be complacent. A tyrant can make the case for safety, which in principle we can all agree with, but that is not a good excuse for dismissing safe solutions that don't involve a tyrant. In the same sense, the fact that a locked-down ecosystem can provide safety is not a good excuse to do away with safe solutions that aren't tied to vendor lock-in.
Yes, in principle other competitors can enter the fray with their own features. In practice, though, it's the giant tech companies that set the de facto standards we all become dependent on. Others struggle for relevancy, doubly so because the biggest players control the OS. Nobody else is really able to overcome the barriers to entry, and that's why we're living under a duopoly today. If we decide not to address those barriers, we essentially doom ourselves and our children to living under a bastardized form of capitalism without competition for the long term. For me, it's hard to make peace with the idea that it's ok for a couple of companies to control everything.
It's really, really hard for me to take this kind of screed seriously when it compares these companies' behavior to tyrants. It's not comparable, and kind of disgusting to make that comparison.
Bill Shooter of Bul,
I meant it as a simile.
https://www.wordnik.com/words/simile
They are two different things, but the logic is the same and that’s the point. It is a rhetorical device to create distance from a point of contention in order to speak about logical relationships without being so encumbered by the original topic. I guess you don’t like my example, but I still say the logic is sound and you should take the point seriously.
Anyway, I'm not going to change your mind, I know, but can you acknowledge that corporate power plays can be hidden under the guise of safety? We're not against safety, but we are against it being used as a smokescreen for taking away our rights and for anticompetitive activities.
We expect that from a microwave, router, Echo Show, DVD player, etc. But we know Amazon's flavor of Android is cross-compatible with Huawei's flavor of Android.
Doing that on a device you purchased with the intention of loading apps is something else. iPhones and Android phones lose support and can eventually stop being able to load the Play Store at all. Try connecting an ASUS TV cube and you'll see why it's a problem.
Let's play devil's advocate and say Netflix or banking apps need some kind of integrity check; but those checks can exist outside the app. I was already using Netflix on my Windows phone and Bank of America from the browser, until Google actively started using bugs to go after browsers.
Every APK from the Play Store is digitally signed, so this solves the “trusted source” problem. The reason an app vendor might want to restrict sideloading is to prevent the installation of old versions of the app via sideloading (as of now, you can download an old version of the app from a site like ApkMirror and sideload it).
Sideloading allows users to escape gradual enshittification of an app by sideloading an old version of the app (so they don’t have to use the enshittified latest version of the app), so it’s obvious why some app vendors might not want that.
Then there are people on de-Googled Android phones with no access to the Play Store.
It’s not a bad idea if you verify the signature, or use a website that does it for you.
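A minimal sketch of that "verify the signature" idea: compare the SHA-256 fingerprint of the signing certificate from a sideloaded APK against a fingerprint you already trust (for example, one taken from the Play Store copy). The certificate bytes below are stand-ins; a real tool would extract the certificate from the APK's v2/v3 signing block.

```python
import hashlib

# Sketch: pin the signing certificate of an app by its SHA-256
# fingerprint. The byte strings used here are placeholders for
# real DER-encoded certificates pulled out of an APK.

def cert_fingerprint(cert_der: bytes) -> str:
    """SHA-256 fingerprint of a signing certificate, as hex."""
    return hashlib.sha256(cert_der).hexdigest()

def same_signer(trusted_cert: bytes, sideloaded_cert: bytes) -> bool:
    """True if the sideloaded APK is signed by the cert we already trust."""
    return cert_fingerprint(trusted_cert) == cert_fingerprint(sideloaded_cert)
```

This is essentially what Android itself enforces on updates (an update must be signed by the same key as the installed app); sites like APKMirror publish these fingerprints so users can check them manually.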
kurkosdr,
Neither apple nor google are interested in doing it, but 3rd party code signatures are a solved problem. There are certificate authorities that offer code signing certificates. Alternatively there are federated/decentralized solutions that exist as well, which is what I prefer. It would not be hard for apple/google to develop standards for this and incorporate into their respective operating systems.
For example, HTTPS and DKIM provide cryptographic signatures for the content of web pages and emails…
https://www.cloudflare.com/learning/dns/dns-records/dns-dkim-record/
…any of these approaches could cryptographically verify the software. And they could additionally provide a whitelist/blacklist on top: "this package is signed by bank.com and is trusted by google.com", or whatever CA principals are installed. We're not held back by a lack of technical solutions; browsers have been doing this forever. Rather, it's the tech giants' lack of will to cede control. They benefit from the dependency and the lack of openness.
All apk files served by the Play Store are signed, so apk files served by ApkMirror are also signed (since they are downloaded from the Play Store).
kurkosdr,
Ok, but I was really envisioning a scenario where apps could be signed by alternative methods that aren’t strictly dependent on google. Our operating systems would be able to verify this automatically even for packages that are sideloaded from sources other than the app store. I’m not against google additionally signing the packages, but they should not be the exclusive CAs.
Yeah, I understand and agree with most of what you're saying here. But I still think it's a bad idea. As a former app developer: you do not want to use a random old version. It may do very strange, unexpected, and bad things if the backend server's API changes. App developers often assume that, since they control updates through the app store, they can break older versions of the API that were baked into older versions of the app. You may be fine, or if they are nice you may get a clear error message, but you're more than likely to get a buggy app experience. Depending on how critical that app is, you may be in for a very bad time.
The Play Store supports disabling automatic app updates as a documented feature (and it’s not even buried too deep in the Play Store settings), so as a developer, you cannot assume every user is running the latest version of your app.
If your app has to call an API, you must do API versioning and put checks in your app to error out if it sees an API version it doesn’t expect. The WhatsApp and Uber apps do this (I know because I have disabled automatic app updates in the Play Store on my main Android phone).
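The API-versioning discipline described above can be sketched like this; the endpoint shape, version numbers, and error string are made up for illustration.

```python
# Sketch of server-side API versioning: the client reports which API
# version it was built against, and the server rejects versions it no
# longer supports with a clear error instead of silently misbehaving.
# All names and numbers here are illustrative.

SUPPORTED_API_VERSIONS = {3, 4}  # versions the backend still serves

def handle_request(client_api_version: int) -> dict:
    if client_api_version not in SUPPORTED_API_VERSIONS:
        # An old (e.g. sideloaded) build lands here and gets a clean
        # "please update" response rather than a buggy session.
        return {"ok": False, "error": "unsupported_api_version"}
    return {"ok": True}
```

With this check in place, a user running an old sideloaded build gets told to update rather than getting the silent data corruption the comment below worries about.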
There is a difference between what developers should do and what they actually do. Shocking, I know. But as a consumer you should protect yourself by using an app as it was intended, from a trusted source, and keeping it updated. A developer does not need to support anything more than what they say they support. If an older version destroys your data because you did something out of the ordinary, they aren't going to dive into the tape backups in the basement to rebuild it.
Hardware attestation needs to be regulated.
DRM in general needs to be regulated (if we assume DRM should even exist). Hardware attestation is a type of DRM. But I wouldn’t bet on it happening, since industry interests are huge here (for example, region lockout is a big “win” of unregulated DRM).
Not entirely convinced that “banking” and “smartphone” are a safe + wise combination.
As long as you protect your assets in a crypto wallet the buzzwords will keep you safe.
A lot safer than using a browser on a desktop OS with no (process) sandboxing.
You have all missed the elephant in the room.
https://developer.android.com/google/play/integrity/standard
Note the need for an app server here.
So yes, Google can, now or in the future, charge extra for using this feature.
Next issue: what if Google's servers have a network outage or some other issue, meaning the response for application validation never arrives? Think CrowdStrike, but on Android: stacks of applications stop working for users because they cannot validate that they are installed correctly and have a paid-up license.
This is about discouraging applications from rolling their own digital rights management solutions. Google has left it in the application developer's hands to decide whether the binary runs if validation fails.
The reality is that this is part of a bigger problem: software as a service. Software you won't be able to sideload because of this will most likely be operating under a software-as-a-service model. Also note that the application developer can, under Google's system, take the app server offline at any time, causing validation of the shipped application to stop working. So it's not just installation on third-party Android devices being blocked: the developer can use this system to pull the software out from under you as an end user, and that applies to all Android users of the application, not just the ones running custom ROMs.
Detecting that an application uses this API should set off alarm bells: you don't own the application, and you can be rug-pulled.
oiaohm,
Such outages are rare, but not new or unheard of. Despite all the redundancies that are undoubtedly in place, putting all your eggs in one basket is just inherently dangerous. Yet we continue to shun diversity and embrace monopolization, so much so that it’s become normal for businesses to run their entire operation through a single point of failure. This puts the entire business at risk for a widespread site outage or employee errors like this one.
https://arstechnica.com/gadgets/2024/05/google-cloud-explains-how-it-accidentally-deleted-a-customer-account/
Sites like downdetector can be very telling here because when a lot of big companies go down simultaneously, it’s a good indication that they may be sharing a common data center failure mode.
https://downdetector.com/
Incidentally, I've seen YouTube streams going down more often than they used to, and I wonder why. I'll look it up on Downdetector and see that I'm not alone. The website itself doesn't go down (probably handled by automatic failover), but requests for specific video streams fail, which translates into users being able to click and stream other videos but not the one they were watching.
I am curious about the cause of this. Obviously YouTube runs a distributed architecture with redundancy, so why does it occasionally fail? I wonder if it's a problem with the way it's engineered, or just managers trying to keep YouTube's costs down by minimizing redundancy to the point where it sometimes fails.
Back to the topic…
Yes, this is ultimately the trouble with all DRM. We’ve always had to deal with this reality here and there, but the question is whether its use becomes more normalized and harder to avoid in the future.
https://arstechnica.com/gadgets/2024/05/google-cloud-explains-how-it-accidentally-deleted-a-customer-account/
This here is exactly the problem.
https://developer.android.com/google/play/integrity/standard
go down to
“””After you request an integrity verdict, the Play Integrity API provides an encrypted response token. To obtain the device integrity verdicts, you must decrypt the integrity token on Google’s servers. To do so, complete these steps:”””
Yes, the application developer must create a service account, which either the developer can delete themselves or Google can delete by accident, rendering the application non-functional.
So failure here does not even require an outage.
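The dependency the quoted docs describe can be modeled with a toy sketch: the app only receives an opaque encrypted token, and the developer's backend must call out to Google to turn it into a verdict. All function names here are illustrative stand-ins, not real endpoints.

```python
# Toy model of the single point of failure: if the remote decryption
# call fails (service account deleted, outage, developer shut the
# server down), validation fails even though nothing is wrong on the
# phone. Names are illustrative, not the real API.

def google_decrypt(token: str, service_available: bool) -> dict:
    """Stand-in for the server-side call that decrypts the integrity token."""
    if not service_available:
        raise ConnectionError("integrity decryption unavailable")
    return {"verdict": "MEETS_DEVICE_INTEGRITY"}

def app_can_run(token: str, service_available: bool) -> bool:
    try:
        verdict = google_decrypt(token, service_available)
    except ConnectionError:
        # The developer decides what happens here; many apps just refuse.
        return False
    return verdict["verdict"] == "MEETS_DEVICE_INTEGRITY"
```

The failure path is the point: a perfectly healthy device gets refused whenever the external validation chain breaks anywhere.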
Another thing to remember: if Google did not provide this DRM, some third-party vendor would have provided applications with third-party DRM that is even more likely to leak your user data.
I hate software as a service a lot because your terms of use can be changed by the developer at any time including bricking the application.
“””Obviously youtube runs a distributed architecture with redundancy, so why does it occasionally fail? I wonder if it’s a problem with the way it’s engineered or if it’s just a matter of managers trying to manage youtube costs by minimizing redundancy to a point where it sometimes fails?””””
There is a core one: internet traversal. The issue is that in most cases you only get one IP address to connect to, and this resolves to one route through the internet to the other end. Yes, this route should correct itself if something in the middle breaks, but there are limitations. Look up Border Gateway Protocol (BGP), created in 1989: it's how the route of every packet you send over the internet is decided. By default, BGP only checks whether a route path is still valid every 30 seconds, using keepalive messages. There are over 50,000 autonomous systems (ASes), i.e. ISPs and the like, and the BGP system caches calculated routes, so it will hand a stack of people the same no-longer-functional route.
https://www.itnews.com.au/news/china-systematically-hijacks-internet-traffic-researchers-514537
Of course, there are also cases of BGP being abused, leading to non-optimal routes because someone wants to spy. Adding more hops increases the failure rate, and BGP does not always pick the most optimal route even when working correctly.
The reality is that the internet, when you look closer, is held together by luck and stacks of duct tape; it's surprising outages aren't even more common. The core internet route from point A to point B being a mess is not something Google can fix by themselves. The more traffic a site has, the more likely it is that, in any 24-hour window, some percentage of its users won't be able to connect for a period of time due to bad BGP routes. With 50,000 ASes, it's entirely possible to be handed over 120 broken routes in a row and be unable to access a site for an hour: 120 failure points, times 30 seconds for each broken link to be detected, adds up very quickly.
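The arithmetic in the comment above, as a quick back-of-the-envelope check (the 120-routes-in-a-row scenario is the commenter's worst case, not a typical one):

```python
# Worst-case lockout if each broken route takes up to one BGP
# keepalive interval to be detected, and a user is handed 120
# broken routes in a row.

KEEPALIVE_SECONDS = 30            # default BGP keepalive interval
consecutive_broken_routes = 120   # hypothetical unlucky streak

worst_case_seconds = consecutive_broken_routes * KEEPALIVE_SECONDS
worst_case_hours = worst_case_seconds / 3600
print(worst_case_hours)  # prints 1.0
```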
The internet's core design is not reliable; communication breakages and disruptions should be expected as normal.
oiaohm,
Props to you for thinking of that. BGP screwups are possible, although it doesn't seem to me that they would be a common occurrence. I've had SSH connections remain stable for weeks on end when I leave them open.
I guess the next time I witness a youtube failure I could perform trace routes to see if anything fishy is going on.
I performed some traceroutes just now (when nothing is wrong) and I notice two different types of traffic:
1) Some traffic gets streamed via servers identifying themselves as “*.1e100.net”, which I know google uses to host many services.
2) Some traffic gets streamed via servers that don’t respond to traceroute at all, but whois identifies the network range as belonging to my ISP.
It seems probable that #2 are part of Google's content distribution network, used to offload traffic. If I'm not mistaken, these are Google servers colocated within the ISP's network to improve latency and decrease backbone load. In terms of YouTube streaming, this raises the question of whether the failures were happening in #1 or #2. I would think that failures in #2 could quickly fail over to #1, but maybe it doesn't work that way, or it takes some time before traffic stops being directed to faulty servers. As an outside observer, I don't really know how it was built or how failure modes are rectified.
https://support.hpe.com/techhub/eginfolib/networking/docs/switches/5710/5200-4992_l3-ip-rtng_cg/content/517702406.htm
This explains multi-directional traffic routing.
“””If I’m not mistaken, these are google servers that are colocated within the ISP network to improve latency and decrease backbone loads. In terms of youtube streaming, it raises the question of whether the failures where happening in #1 or #2. I would think that failures in #2 could quickly switch to #1, but maybe it doesn’t work that way or it takes some time before traffic stops being directed to faulty servers.”””
This depends on how BGP goes belly up.
https://www.noction.com/blog/bgp-flapping
Yes, let's say your route to Google happens to randomly suffer from BGP flapping. This can result in packet loss and failures to connect, as you keep being handed non-functional routes.
Remember what I said: it can take the BGP system 30 seconds to realize a route is dead, so as a user you may have been attempting to use that first route for 29 seconds already. And if you have BGP flapping, the next route you're given can be busted as well, and so on, until some network administrator notices and forces a fix.
“””I’ve had ssh connections remain stable for weeks on end when I leave them open.”””
Is this connection going through a high-traffic area of the network? Because if it's not, you're most likely getting a single stable route.
Google and other large vendors are connected in high-traffic areas of the network, areas where traffic is more likely to be load-balanced through less reliable ASes, i.e. ISPs and the like.
The reality is that it's human network admins constantly stepping in and fixing BGP routing issues that keeps the complete internet from progressively grinding to a halt. One paper said that if admins only replaced defective hardware and did no BGP route tweaking or fixing, the internet would be non-functional in about six months.
The failure modes of BGP are written up in many standards documents, and it's not good. Really, it's kind of impressive that major sites like Google, Facebook, and YouTube don't have even more outages coming from simple BGP failures.
oiaohm,
Thanks, but I already know how BGP is used. I can see empirically that YouTube doesn't rely on it for traffic steering; they explicitly send clients to different public IP addresses rather than relying on BGP to shift traffic between routes. When you think about it, this makes sense, because they have much more control when they explicitly tell clients which servers to connect to.
They might update BGP announcements, but switching paths on the fly would be bad for routing continuous traffic flows; it's subpar and would actually hurt traffic. You don't want to be in those states for long, especially not with loads as big as YouTube's.
I'm not saying Google never does BGP maintenance, but as a mechanism for directing YouTube users to the server that has their content, BGP is not a good way to dial in fine-grained control. It's far easier and more effective to just tell clients which IP you want them to connect to. Furthermore, the IP paths for these servers can and should remain stable, which is to say static and optimal. A packet dump clearly shows YouTube directing clients to different IPs, and when I tried the same YouTube request from different ISPs/servers, I got different IPs rather than the same IP being routed to different servers via BGP.
So while the youtube-BGP-failure theory is interesting food for thought, you seem to be pushing that theory without evidence for it. It seems more likely to me that a server simply failed or became overloaded.
“””I’m not saying google never do BGP maintenance at all, but as a mechanism for directing youtube users to the server that has their content, BGP is not a good way to dial in fine grained control. It’s far easier and more effective to just tell clients what IP you want them to connect to. Furthermore the IP paths for these servers can and should remain stable, which is to say static and optimal.”””
The reality is that the paths don't remain stable. You do not get from point A to point B on the internet with just an IP address; the route from IP to IP is worked out by BGP.
This is where the mistake comes in: IP paths across the internet are not static and stable. And this dynamic reconfiguration, meant to deal with network load and routers going offline, does not always work right.
“””A packet dump clearly shows youtube directing clients to different IPs. And I tried the same youtube request from different ISPs/servers and I got different IPs rather than the same IP being routed to different servers via BGP.”””
What Google is doing here is attempted mitigation. By spreading users out over many different IP addresses, you should reduce BGP congestion points and the number of users who get broken when BGP does something stupid. Note the key word: should. BGP has been caught routing traffic around the entire world just to get from one side of a city to the other. Congestion is when BGP gets creative and does the wrong thing while attempting to find a performant route.
You have to remember that BGP is built on the idea that servers do and will go offline, and that a link will run out of bandwidth on route X and traffic will then be rerouted.
There are minor breaks in the BGP system every day; most self-correct quickly. Just because you cannot connect to IP address X does not mean the server at X is down or that its ISP is having any issue – your route across the internet may simply not be working.
The reality here is that a percentage of YouTube's issues, and those of any other big service, is routing.
And if you have magically been routed around the world, of course your connection is going to be slow.
Alfman, people taking care of BGP routing systems are given a lot of training to deal with the failures. If you ever do that job, you'll find you're dealing with issues roughly every half hour. Your idea that routes should be statically defined does not work. Say I'm at ISP1 routing traffic through ISP2, and ISP2 wants to restart its servers facing my direction: they're perfectly allowed to. Any route through ISP2 now has to be rerouted, say through ISP3, while ISP2 is down, and you don't want users to notice that much. Of course, sometimes it's straight-up hardware failure without notice. Given the size of the internet, this is happening all over the place, all the time.
Some of these events will cause traffic jams; in others, the traffic will route around without issue.
Lots of places collect YouTube outage data and the like, but they don't collect enough detail to pinpoint where the failure is: is it YouTube, or the routing to YouTube? Seeing different IP addresses from different ISPs is just one way to mitigate, by increasing the number of individual BGP routes to their servers. It's a mitigation, not a fix.
oiaohm,
I know what BGP does, but I'm simply not able to agree that BGP is responsible for YouTube stream errors without any evidence. There are a whole lot of other failure modes that are just as likely, if not much more so.
Yes, we need BGP to tell routers how to route – great. But it does not follow that YouTube uses BGP for real-time traffic control; that has all the cons you brought up and more. It is perfectly valid for YouTube to control traffic by telling clients which IPs to connect to. This is scalable and robust, and from what I'm seeing it's exactly what YouTube does.
I'm not denying that glitches are possible, but you seem bent on blaming BGP when other failure modes are more likely. While I don't mind discussing these hypothetical scenarios, we clearly need evidence before we can point the finger at BGP.