About 20 percent of third-party apps available through the Android marketplace allow third-party access to sensitive data, and can do things like make calls and send texts without the owners’ knowledge, according to a recent report from security firm SMobile Systems. There’s no indication that any of the highlighted apps is malicious, but the report does underscore the inherent risks of a more open ecosystem as opposed to Apple’s oppressive yet more controlled environment, where every app is vetted before availability. With a freer marketplace, the burden will fall on the user to choose their apps carefully, and there will certainly be an opening for trusted sources to create repositories where only apps certified as safe are available for download. We’ve talked a lot about the post-PC era, and it’s certain that as users flock to new platforms, criminals and mischief-makers will follow. Microsoft has taken the heat for its insecure desktop platform for the past couple of decades, and we’ve been able to pretty much lay the blame for botnets and phishing at Microsoft’s doorstep. It will be interesting to see how these new platforms fare once they find themselves in the full spotlight, with all the attention of the world’s malicious hackers.
Shocking news at 11… PEBKAC.
At least they agree this is not an Android problem per se, just a common problem… there are users. lol.
PEWTD – Problem Exists With The Designer. Security is a process, not a feature, and the user should remain safe and in control should the worst happen, because the system is designed as such. You could blame the user for causing a car crash, but you shouldn’t blame them if the car’s engineering fails to protect them; that’s down to the design of the car, not the user.
The user chooses to install an application and is given a list of what the app has access to. The user has the choice, if they choose to continue then that is their decision.
Written like a designer complaining that an end user should have the experience, knowledge and foresight to be cautious like a security expert.
Kroc is correct: this falls on the architect, not the end user.
But how much is too much when it comes to an application giving off warnings before an end user does something? Or how restrictive should it be, when there is a weighing up between keeping the individual safe and giving maximum flexibility? At some point one has to take off the training wheels and allow the user to stay upright on the bike – and yes, that might mean going into the gutter or straying into the road and getting hit by a car.
There is a thing called personal responsibility that is sorely lacking these days – time that end users exercised that instead of being mindless click and drool mouth breathers.
The problem with the “PEWTD – Problem Exists With The Designer” thinking is that some designers deliberately design malicious code. When they do so, they will also do their utmost to obscure the fact that the code is malicious from the end user.
According to this article:
http://www.pcmag.com/article2/0,2817,2365651,00.asp
In which a Google spokesperson (Cannings) addresses the issue directly.
Google themselves are apparently going to take responsibility for “malicious applications”. If Google are alerted by one end user to the existence of a malicious application, or if Google identify such an application themselves, they apparently can and will take action and delete it from everybody’s android phone.
Edited 2010-06-25 05:57 UTC
No need to spam the same post over and over and over again simply to get your post count up; I read it once in reply to a previous post, there is no need to repeat it over and over again. As much as I’d love to believe in the benevolent dictatorship of Google, we have already seen it take a nasty turn in the case of Apple and the iOS platform – which is the reason why I’ve stuck to my good old iPod Classic and ZTE R6 Mobile phone.
Remember when Apple was the darling of geeks? There is nothing stopping Google from making the ‘tough decisions’ when they need to, even if it means angering a few geeks along the way.
I wasn’t spamming… I personally think this is very problematic. Google have already zapped two applications out of existence in the Android universe.
On the one hand, there is nobody in a better position than Google to take on such an “Android app police” role. They could be everybody’s Android anti-malware monitor, without taking up anybody’s Android CPU power. That bit is pretty neat, really.
However, having said that, they have also effectively built in a “veto” for themselves on what can be installed on Android phones. If Google don’t like something, they can wipe it from everyone’s Android phone. In fact, if someone else (maybe the government, or the RIAA or MPAA) doesn’t like it, perhaps they may be able to force Google to zap it from everyone’s Android phone.
That aspect of it seems even worse than Apple’s shenanigans.
So how about a bit of sane discussion on Google’s real-world provisions here, on what Google are actually doing and planning to do, instead of pontificating from on high about how users cannot be relied upon to do the best thing by themselves. Apparently, for Android phones, they aren’t going to be asked to.
From the original quoted article, Google’s response was this:
Google apparently really meant it when they said they would disable any apps that are found to be malicious.
Google’s in-built provision to remotely zap malicious Android apps destroys the original article’s criticism of Android, but to my mind it opens up a whole plethora of utterly different potential criticisms for Google to answer. The self-same zapper can presumably, at Google’s say-so, zap anything at all on people’s phones.
Apple has the same power over iPhoneOS/iOS devices.
What if the user doesn’t want to learn how to ride a bike? To break from your analogy, a user doesn’t want to “use a phone”. He wants to share his photos, send an email or whatever.
This situation is primarily our fault (“our” in a very broad way; as in our industry). At some point in the past, we decided that computers weren’t going to stay in the server rooms; there was going to be one in every home. Since this is the direction we’ve taken, we have to actively support it by providing “training wheels” to everyone who needs them. Since you could say that a mobile app’s audience generally needs these training wheels, you can’t just shout RTFM at anyone who is confused and expect to be successful.
Then your analogy breaks down when one considers “I don’t want to learn how to drive a car, I just want to get from A to B” to which one could say, “well, use public transport”. Analogous to public transport would be devices like iPad/iPhone/iPod Touch – and with that comes restrictions, the very restrictions that people here decry as draconian. Apple has taken it upon themselves to be the benevolent ‘provider’ in lieu of individuals making such choices. What has been delivered to end users is an environment where they’re taken care of – but we have people decry what Apple is providing. You either have security and less freedom, or freedom and all of the responsibility in your own hands – you can’t have both.
Edited 2010-06-25 12:16 UTC
That’s what the rest of my comment was about. At some point in the past we decided that they wouldn’t need a landline or snail mail to stay in touch, that they wouldn’t need to go to the store for their shopping, etc, but they could do all those things from a computer. So we advertised that fact and sold a computer (in some form) to everyone under the sun.
Or, to return to your analogy, the car companies promised that everyone could use a car for anything. “Use public transport” is not an answer any more.
Wrong. You can. It’s just about two things: a file explorer which allows the user to access his own files, and the ability to download and install packages from sources other than the App Store.
You don’t have to use them as a beginner, and they’ll help experienced users get things done. Exactly like Mac OS X’s terminal. Does anybody think that said terminal is a security flaw when put in a simpler form on iDevices? Except Apple, who want to make more money, I mean…
Edited 2010-06-25 14:12 UTC
“Apple – The public transport of high-tech”, has a ring to it.
What does that make Linux and FOSS, a push bike? E.g. you can go wherever you want, it doesn’t cost you anything and it will be more work – but the exercise will do you good.
I’d just stick with the normal approach:
”
This program accesses your: address book, phone calling
[ ] do not show this warning in future.
[allow] [deny]
”
Each program can then warn the user that it will be touching stuff or, ideally, the OS provides the monitoring, warning and permissions on a per application basis.
I don’t think a minimum of one warning per application is too much and people can always opt to leave the box unchecked and see the warning each time if desired.
The internet has been buzzing with this story. After an overwhelming response from commenters, most authors have amended their articles to reflect the fact that this report is mostly FUD from a company trying to sell Android security software, given that you have to give these apps permission at install time. Still important to be vigilant, though.
That report is more or less garbage. The apps certainly can’t do so without the user’s knowledge, for the simple reason that when you go to install an app, the OS shows you a list of the security capabilities the app requests (which includes things like making calls, accessing your data, etc.) and confirms before installing it.
I’m curious how it asks. I don’t own an Android-based phone, but how these warnings are presented to the user makes all the difference. Are they ONLY asked at install time? What if a seemingly innocuous app starts making random calls/texts at a later time?
If it works the way installing apps is planned to work on my OS, then the app will be killed by the OS, since it asks to do something it wasn’t given the right to do at install time.
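As a conceptual sketch of that model (class and permission names are invented, not Android’s real API), install-time granting with strict runtime enforcement might look like this:

```python
# Conceptual sketch: an app may only use capabilities it declared at
# install time; anything else is refused outright, with no second
# prompt. All names here are hypothetical, for illustration only.

class PermissionDenied(Exception):
    pass

class App:
    def __init__(self, name, declared_permissions):
        self.name = name
        # The grant set is fixed when the user approves the install;
        # the app cannot widen it later.
        self.granted = frozenset(declared_permissions)

    def use(self, capability):
        if capability not in self.granted:
            # The OS refuses (or kills the app) instead of asking again.
            raise PermissionDenied(
                f"{self.name} never declared {capability!r}")
        return f"{self.name} used {capability}"

dialer = App("ContactsPlus", {"READ_CONTACTS"})
print(dialer.use("READ_CONTACTS"))   # allowed: declared at install time
try:
    dialer.use("MAKE_CALL")          # never declared, so refused
except PermissionDenied as e:
    print("blocked:", e)
```

The point of the sketch is that the decision surface is entirely at install time; the runtime check is mechanical.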
(Security based on fine-grained permissions like that is the way any OS should work. The user/admin model is so outdated that it’s laughable. As someone said here, what do you fear most: losing /bin or losing /home?)
Just out of curiosity, are people other than the article’s author seriously thinking that malware can’t get onto Apple’s App Store as easily as onto the Android Market?
Edited 2010-06-24 19:45 UTC
Can it? I have no idea, other than that Apple’s approval process seems Draconian at times, right down to the search for usage of unapproved APIs.
I don’t believe that Apple gets the source when they are reviewing applications. They can detect usage of undocumented APIs through the symbols in the binary.
It’s quite easy to do malicious things even when the source can be reviewed (see the underhanded C code contest), so I doubt that Apple has any way of detecting malware that isn’t blindingly obvious.
No it can’t get on as easily, especially when Android apps can update themselves without having to go through the market.
http://blogs.forbes.com/firewall/2010/06/21/researcher-builds-mock-…
Of course malware is less likely on the Apple App Store than the Android Market. That’s one of the benefits of Apple’s curated app distribution model with its built-in quality control system. It’s actually the main reason Apple do it that way.
It’s best not to try to hide this – better to tell the truth, which is that the Android way is freer but less secure.
It’s up to the end user which model they prefer. The Android way, where you get less security (and fewer apps) but those apps are distributed in a more decentralised and less controlled way. Or the Apple way, where there are more apps and their distribution and quality control are more restrictive.
Then leave the consumer to select the model they prefer.
I think the consumer will prefer safety (given the general experience of consumers during the Windows dominated desktop era) but I could be wrong – the end users will decide through their purchasing decisions.
http://www.businessinsider.com/google-activates-160000-android-phon…
I’m not so sure about that “strong second” notion, particularly in markets other than the US. Like this:
http://ausdroid.net/
Edited 2010-06-25 01:11 UTC
If I understand your question correctly, the rights it asks for at install time are all it ever gets. Ergo, if it didn’t ask for the “make calls” right when you go to install it, it cannot suddenly change its mind later and will fail if it tries to.
Well, that’s sort of what I was getting at: how the user is alerted. For example, some sort of address-book-related app might, logically, ask for permission to make calls when first installed. Isn’t this a situation where, if it were a malicious app, it could then later make calls/send texts without notice?
That’s certainly the case. What it does help you catch though is cases where an app is asking for rights it very obviously shouldn’t need. For instance, suppose you go to install a game, and it asks for the right to make calls. Why would it possibly need that? That’s the case it’s designed to catch. An app later making malicious use of the capabilities you’ve allowed it to have is another animal entirely, and I don’t see an easy way around that short of prompting every single time the app tries to do anything whatsoever, which would be a cure worse than the disease.
Well, there’s actually a 3rd option where you could be prompted the first time such usage is actually requested with the option to keep being alerted each time or to “always allow” for that app. This still might not entirely cure the issue but it is then at least up to the user whether to completely trust an app or not and/or when to decide to trust it. The plus side to this, also, is that a malicious coder has no way of knowing when the client would “trust” the app – unlike now, knowing that if it makes it past the install it is home free.
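That first-use prompting scheme could be sketched roughly like this (a hypothetical API, not any real platform’s):

```python
# Sketch of "prompt on first actual use": the user is asked when a
# capability is first exercised, and may choose to remember the
# decision ("always allow") per app. Names are invented.

class Gatekeeper:
    def __init__(self, ask_user):
        self.ask_user = ask_user   # callback -> (allow, remember)
        self.remembered = {}       # (app, capability) -> bool

    def request(self, app, capability):
        key = (app, capability)
        if key in self.remembered:
            # User chose "always allow" (or "always deny") earlier:
            # no prompt, reuse the stored decision.
            return self.remembered[key]
        allow, remember = self.ask_user(app, capability)
        if remember:
            self.remembered[key] = allow
        return allow

# Simulated user: always allows, but only remembers for "TrustedApp".
def fake_user(app, capability):
    return True, app == "TrustedApp"

gk = Gatekeeper(fake_user)
gk.request("TrustedApp", "SEND_SMS")   # prompted once, then remembered
gk.request("TrustedApp", "SEND_SMS")   # no prompt this time
gk.request("ShadyApp", "SEND_SMS")     # prompted every single time
```

This also captures the commenter’s point that a malicious author can no longer predict when the user will grant trust.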
I believe a similar scheme is already used on certain smartphone OS’s for things such as Location Services.
Edited 2010-06-24 20:47 UTC
And the average user is supposed to understand exactly what’s being asked and make the correct judgement call at install time? The same users who see a popup on their home PC telling them that they have a virus and need to install this you-beaut software to fix it, to later find they actually installed a trojan?
OK, I can see how that will work…
I’m not entirely convinced that it is a good idea, but this is Google’s answer to that scenario:
Google Remotely Deletes Android Apps
http://www.pcmag.com/article2/0,2817,2365651,00.asp
Hmmmmm. I can see some good aspects about that, and some not-so-good.
Edited 2010-06-25 04:03 UTC
This might not be directly related to the article, but why is there more emphasis on installing apps in the mobile world? Didn’t we, on the desktop, move from installing apps to doing virtually everything in a browser?
Steve said HTML5 is the future, so why aren’t there many HTML5 apps, which would negate many of these security issues?
Here, I’ll explain in detail why I think that the App Store is not a revolution in security, and how things can really be improved, based on simple computer security notions.
Let’s take as a basic assumption that absolute security does not exist. That’s effectively true, because it would require the user to make every piece of hardware and software he uses all by himself. The best we can do is to reduce the amount of trust that the user has to put in third parties. As far as I know, these are the strategies which exist as of today:
1-Keeping the user well-informed and in control.
2-Make knowingly malicious software disappear.
3-Have experts analyzing the software and say if it can be relied upon.
4-Be cautious about what kind of software is installed; do not let just anything make its way in.
5-Put limitations on what software is able to do.
Now let’s analyze those strategies. First from a philosophical point of view.
1, 4, and 5 are at the same time the safest and most liberal (as opposed to authoritarian) solutions. They don’t require trusting an additional third party about what is safe; the user is independent instead of being treated like a child.
2 and 3, on the other hand, are more dangerous, because you have to rely on an unknown bunch of people who claim to know what is safe (and can themselves be authors of malicious software). For that reason, those options should not be used independently, but rather as a complement to other options that one can bypass.
Now, let’s get more technical.
1/Keeping user informed
One might argue that it requires the user to have some previous knowledge of malware. However, everybody has such knowledge, to some extent, in the form of common sense. If an unknown guy comes to your home and asks if he can borrow your TV set, you’ll probably say “no”, because you’re almost sure that he will never come back. What the system manufacturer has to do is describe, in an understandable yet precise fashion, what the application wants to do. Precision is important: an application should not ask for “access to system files”, but rather for the “ability to change the active wi-fi connection”. This requires a fine-grained underlying security permission system.
A second thing the system manufacturer can do is make the system analyze the permissions being asked for, and specifically warn the user about dangerous ones. As an example, “make a phone call with prior acknowledgement from the user” is relatively safe, while “make a phone call without prior acknowledgement” or “access all system files” are dangerous options which the security system should warn the user about.
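The per-permission warning idea can be sketched like this (the permission names and the risk table are made up for illustration; no real platform’s list is implied). Unknown permissions are surfaced too, on the principle that anything the installer cannot classify deserves attention:

```python
# Sketch of "warn specifically about dangerous permissions": the
# installer classifies each fine-grained permission by risk and only
# surfaces strong warnings for dangerous or unknown ones.

RISK = {
    "CHANGE_WIFI_CONNECTION":    "normal",
    "CALL_WITH_CONFIRMATION":    "normal",
    "CALL_WITHOUT_CONFIRMATION": "dangerous",
    "ACCESS_ALL_SYSTEM_FILES":   "dangerous",
}

def install_warnings(requested):
    """Return the permissions the installer should warn loudly about."""
    return [p for p in requested
            if RISK.get(p, "unknown") != "normal"]

# A game asking to place silent calls should stand out immediately:
print(install_warnings(["CHANGE_WIFI_CONNECTION",
                        "CALL_WITHOUT_CONFIRMATION"]))
```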
A security system built around those ideas can help both an expert who wants to know if the application is safe and a non-technical user who can check, at his level of knowledge, whether the software is asking for reasonable things.
2/Remove known malware from user sight
This is the most obvious benefit of a central repository system. All properly maintained central repositories (from Debian’s APT repos to the Android Market) provide this kind of security, and it cannot work properly outside of such a system, as anyone who has experienced antivirus software on Windows can attest. This system has, however, a major flaw that makes it insufficient on its own except in technical environments: in order to work, it requires a large and representative part of the user base to keep the repository maintainers informed about what malware they have found.
3/Expert check
This one is quite interesting, because it is where being open source can actually result in increased software security. That’s because source code can generally be fully checked for malicious behaviour in a reasonable amount of time. In a binary-based system, however, such a full check cannot be performed, and the sole advantage that an expert has over an average user is his experience (which is not necessarily worth much in a rapidly evolving and polymorphic area like malware).
All the expert can do, when given a binary file, is try to run it in every possible way and check for unwanted behaviour. However, this is insufficient. Suppose that an attacker makes malware which looks like a plane reservation application. The application checks plane availability on a remote server (which is controlled by the attacker), and allows the user to book a seat. However, if a specific available plane is registered on the server, the application suddenly goes evil. This is commonly called a backdoor.
The attacker first submits his application to the expert, with the usual content on the server. The expert stress-tests the app and notices no strange behaviour, so he approves it, and people start to download and use the app. Once the app is widespread, all the attacker has to do is put the fatal entry on the server, and BOOM! Millions of phones are suddenly tainted by malware.
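A toy model of such a backdoor makes it clear why stress-testing the binary cannot catch it (the flight names and trigger value are of course invented):

```python
# Illustration of why black-box review misses server-triggered
# backdoors: the app behaves correctly for every input the reviewer
# can provoke, and only misbehaves when the attacker's server later
# serves a specific trigger value. Purely a toy model.

TRIGGER = "FLIGHT-666"

def book_seat(flight, send_sms):
    if flight == TRIGGER:
        # Dormant malicious path: invisible during review, because the
        # attacker's server never lists this flight until later.
        send_sms("premium-rate-number", "$$$")
        return "booked"
    return "booked"

# The reviewer tests every flight the server advertises at review
# time, sees identical benign behaviour in all cases, and approves.
```

Once the app is widespread, the attacker adds the trigger flight to the server and the malicious path goes live, with no update to the reviewed binary at all.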
4/Thinking before installing
This one can be very effective against phishing-like attacks (“Your phone is slow because it is tainted with malware! Download this absolutely free software to remove it!”), but it requires prior education of the user.
Does one really need to replace a phone book? A core component of the system like the control panel or the file manager? One shouldn’t give very high permissions to an application without thinking not twice but three times. If a core component of the system doesn’t work, you should consider moving to another system, warning the manufacturer about its defects, or patching it yourself if you’re a technical user and it’s free software. Relying on several third parties for core components of an OS is generally a bad thing, as some Linux users may acknowledge.
5/Limiting apps capabilities
This one is *extremely dangerous*. The people who wrote the security system just don’t know what users will do with their computers. Moreover, computer usage varies wildly as time passes. Who are we to decide what is good for the user?
However, this idea can find its place in a less extreme form, namely carefully choosing what an application can do without a security permission being granted. As an example, consider a situation where applications have a private folder, sort of like in OS X. They can do whatever they want in said folder, but they can’t do anything outside of it without prior acknowledgement (for example through use of an “open file” dialog, or by getting the appropriate security permission). This way, the number of warnings about security permissions can be reduced, and hence once one does appear, there is a higher chance that the user will read it instead of just clicking “next”/“ok”/“ack”/whatever right away.
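A minimal sketch of such a private-folder policy (paths and API invented for illustration): the app’s own sandbox is always accessible, and everything else requires an explicit user grant, e.g. one obtained through an “open file” dialog.

```python
# Sketch of the "private folder" limitation: each app may freely touch
# its own sandbox directory; any path outside it requires an explicit
# user grant. Hypothetical layout and API.

import os.path

class Sandbox:
    def __init__(self, app_name, user_grants=()):
        self.home = f"/apps/{app_name}/data"
        self.user_grants = set(user_grants)   # paths the user approved

    def may_access(self, path):
        # Normalize first, so "../" tricks can't escape the sandbox.
        path = os.path.normpath(path)
        inside = path == self.home or path.startswith(self.home + "/")
        return inside or path in self.user_grants

sb = Sandbox("notes", user_grants={"/home/user/todo.txt"})
print(sb.may_access("/apps/notes/data/db.sqlite"))  # True: own folder
print(sb.may_access("/home/user/todo.txt"))         # True: user granted
print(sb.may_access("/home/user/secret.txt"))       # False: no grant
print(sb.may_access("/apps/notes/data/../../x"))    # False: traversal
```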
Now, where does the App Store fit in this model? Most smartphone OSs use a combination of repositories and limited abilities. Android adds an informed user. The App Store chooses to add an expert check, which, as we’ve seen before, certainly isn’t the most effective way of managing security, except if the users are dumb or uninformed (I prefer to think that the latter is the case). It’s not intrinsically superior; given that the security systems are properly designed and patched, it’s still three security principles out of the five available.
One must then ask oneself whether relying on experts, who can do little because they have to check binary apps, is worth being forced to download from this repository, which has no security benefit and is a purely jerkish move from Apple aimed at maximizing income…
Edited 2010-06-25 08:31 UTC
Of course the Apple system is more secure. It’s no contest.
Android apps can phone home and change themselves without user permission
http://blogs.forbes.com/firewall/2010/06/21/researcher-builds-mock-…
Thanks for posting your manuscript though.
Those are security defects, which indeed require patching, in the Android operating system, not in the market model. The previous posts were about the App Store model, not about iOS’ specific implementation.
(I won’t be advocating Android facing iOS, since in my opinion both operating systems are canned crap. In fact, I think that the whole touchscreen smartphone idea has only spawned canned crap in all of its current implementations, though Windows Phone 7 Series looks somewhat promising if they sell it on phones with a physical keyboard)
You’re welcome, sir =p
Edited 2010-06-25 09:07 UTC
It’s a very poor design that will likely be exploited.
As much as geeks lament the locked-down nature of the App Store, it does have a pristine security record.
There’s more to improving the security of applications than your list shows; there is also developer verification, which is part of the App Store application process.
As for binary security checks, they can be performed with software. Not 100% effective, but when combined with developer verification you have a strong deterrent.
You can be dismissive of the App Store, but it has an excellent security record that cannot be denied.
What do you call developer verification, exactly? Some kind of digital signing that (is supposed to) identify the guy who submitted the app?
Moreover, I agree that the App Store has an excellent security record… But it’s just like Nokia’s Ovi Store, Microsoft’s Marketplace, the Android Market, RIM’s I-don’t-remember-what-they-called-it, or even the old $5 Java game download pages in that respect: there are little to no recorded exploits in each case, so we can’t draw conclusions yet. It’d be like saying “Oh, dammit, those mobile OSs are so much more secure than Windows!”
To get a good picture, we would need good data in the form of hundreds of recorded exploits, which the mobile phone repository system does not have yet, because it’s just an uninteresting target at the moment. Plus, it lacks global market penetration: at the moment, smartphones are still mostly used by geeks and some executives who want to show how rich they are because they can…
Edited 2010-06-25 10:30 UTC
Verify that the developer has a legal address and bank account. Requiring that the developer have a verified PayPal account is an easy way of doing this. It’s just an additional security precaution that deters criminals.
Allowing unverified submissions from all parts of the world is too risky. Very little malware comes from the US and Western Europe, and security policies should take this into account.
It’s just a repository where packages are checked before admission. Tons of these exist in the rest of the computing world. I’m not dismissive of that, as long as it’s coupled with other strategies. What I don’t understand is why the App Store is presented as some kind of revolutionary product.
Edited 2010-06-25 10:35 UTC
Oh… I never considered it to be revolutionary.
Well implemented, yes, but I thought only Apple fans believed it was revolutionary.
There were app vaults well before the App Store, and well-designed ones like Steam.
I think this is a little more complex. A user with limited computer experience already places a lot of trust in his computer. If he can’t understand what is going on behind the scenes, and of course it’s not reasonable to expect everyone to do so, he simply must trust his computer. This ranges from the simplest of things (the app will launch when I double-click the icon) to the most complex (click here to restore the PC after it has broken). Since the user is trained to trust what the computer says, he will most likely do so even when what the computer says comes from an untrusted source. This is why the most successful attacks are the fake antiviruses.
When a legit alert from a user’s antivirus pops up, he doesn’t really understand what is happening behind the scenes, so he just clicks the “Keep me safe” button. When a malicious pop-up says the exact same thing, he will do the exact same thing.
In your analogy, the user would most likely lend his TV set if the person who asked was his brother. If the user doesn’t have at least some rudimentary training in computer security, there is no difference between the OS and an untrusted third party. It’s all the same entity, “the computer”.
That’s something I’ve been thinking about for some time. My answer is that system warnings have to look and behave in a way that no other app can imitate, with imitation being forbidden in some way (or privileged, in the sense that the OS issues an “only install if you know what you’re doing” warning during installation).
As an example, let’s suppose applications can’t draw borderless windows or arbitrarily sized UI widgets, nor register double right-clicks from the mouse. If the system reserves these for itself, this result can be achieved. This is akin to Windows 2000’s use of Ctrl+Alt+Del to prevent login window spoofing.
The user then only has to know that he can trust borderless windows, but must be careful about strange behaviour from the rest of the applications. That doesn’t sound very complicated.
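A (purely hypothetical) window manager enforcing this could be as simple as refusing the reserved style to anything but the OS itself:

```python
# Sketch of reserving a visual style for system dialogs so apps
# cannot spoof them: the hypothetical window manager refuses to
# create borderless windows for anyone but the OS.

class SpoofingAttempt(Exception):
    pass

def create_window(owner, borderless=False):
    if borderless and owner != "system":
        # Only the OS may use the trusted borderless style, much like
        # Ctrl+Alt+Del guarantees a genuine login screen on Windows.
        raise SpoofingAttempt(f"{owner} may not draw system-style UI")
    style = "borderless" if borderless else "standard"
    return (owner, style)

print(create_window("system", borderless=True))   # trusted system dialog
print(create_window("some_app"))                  # normal app window
try:
    create_window("phishy_app", borderless=True)  # spoof attempt blocked
except SpoofingAttempt as e:
    print("blocked:", e)
```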
Edited 2010-06-25 14:13 UTC
But that violates principle #5 of your rant: limiting what apps are allowed to do. In particular, limiting the way the UI presents itself in regard to borders, widgets, etc. strikes me as a very Apple-esque way of handling things that would certainly lead to developer discontent on any kind of open platform.
It’s true that you have to accept some limitations, but here I think there is no choice. I can’t find a way of visually separating system and app controls, without possible spoofing, that doesn’t involve giving system messages a specific UI toolkit and forcing other applications to use the standard one.
However, you have to consider that no application, except maybe games, should use non-standard widgets without a very good reason. Non-fullscreen borderless windows, bitmap buttons, and bitmap background images of a specific colour are all things which cause bugs when changing the visual theme or screen resolution, and which reduce usability due to inconsistent behaviour. Moreover, blind people cannot use such applications easily, because screen readers won’t read text when it’s written in an image.
If enforcing use of the standard toolkit for all non-system applications is Apple-ish, then so be it, but I think it’s the best choice for millions of good reasons. And as an example of why non-standard toolkits are bad, I’ll invoke the many advantages of HTML websites over Flash websites.
Flash is good for tiny apps, like games and video players, but it shouldn’t ever be used as a website creation platform. The reason it’s here is that it fixes the shortcomings of HTML+JavaScript, which were never designed for building full applications.
Modern operating systems have the chance to include complete UI toolkits which can be used together with real application programming languages to create full-fledged apps. This removes the need for non-standard toolkits like Flash, which should hence be wiped off the surface of the user application world.
Edited 2010-06-26 14:34 UTC
I have to say that all this talk about how to make installing un-vetted apps on Android as safe as installing vetted apps from the Apple App Store seems a bit pointless.
Why would Apple make their devices that complex and that potentially dangerous in order to add a feature (i.e. the ability to install apps from an “open” source) that the vast majority of consumers don’t care about? Apple’s aim, its model, is to make complex devices as easy and safe to use as possible. Hence the App Store.
If the App Store model proves unpopular, Apple will not prosper and will probably change direction to be more Android-like. If the approach proves popular and Apple prospers, it will keep the model. It’s pretty simple, really. It will all work itself out in the long run.
Indeed, the discussion becomes pointless the moment we start invoking Apple’s opinion.
You are saying that Apple have safety as their goal. I personally think that allowing people to install whatever they want, instead of having them lick Stevie’s boots first, does not reduce security as long as it’s done properly, for the reasons stated above. Taking this reasoning further, I think that this is not the reason why Apple did it, and propose a reasonable alternative explanation (companies all want to maximize their profit, no matter how jerkish the chosen option is).
But again, in the end, none of us knows Apple’s motivations. It’s even possible that they had several reasons for doing it, including false beliefs of increased security.
A couple of points.
Its true we don’t exactly know Apple’s motivation’s re the App Store model but they don’t make a lot of profit on on it, I thinks economically its a bit like the iTunes music store that way. I saw recently a figure of $1 billion paid to developers in total so far which would make Apple’s cut roughly 300-400 million dollars since the launch of the App Store. Given the scale of Apple’s operations and profitability nowadays that’s chump change for them. Remember that’s only revenue not profit – the cost of the App Store has to be deducted. I wouldn’t be surprised if it only just made a profit or even just broke even.
I think their motivations are mixed, but they probably include a desire not to let anybody else control the iOS development platform by getting between them and their community of developers (that’s happened before in Apple’s long history, and it’s always been a disaster for them), and a desire to ensure a high-quality end-user experience by controlling as much of the product stack as possible. The latter reason is probably the most important – everything I have ever read about Steve Jobs says the guy is a product perfectionist, and that what he lives to do is make fantastic products that people want to buy.
In the end it’s all moot. The only thing that matters is what succeeds in the marketplace, and it seems as if, for now, consumers absolutely adore the iPhone (App Store model included). Did you see those lines on launch day for the iPhone 4, or the pre-order numbers?
I don’t think Apple is too obsessed with market share (although more aware of it since its defeat by Microsoft in the 90s) as long as the iPhone continues to grow sales and make huge profits. Their strategy re any threat from Android may well be similar to the strategy Jobs adopted when he came back to rescue Apple in relation to the Windows threat. Back then he realised that Apple’s competitors on the desktop were the OEM desktop makers, not Windows itself; now Apple’s competitors are the other handset makers, not so much Android itself.
Could say more – fascinating subject, but too tired and too hungry – so perhaps another time….
Profit does not necessarily take the form of direct financial benefits. Increased control over a population is profit, too.
As an example, Google makes little to no direct profit by distributing Android or Chrome for free. There are no ads included, no fee for using them. However, if those products gain large market share thanks to their low price tag, that can be beneficial to Google.
Let’s consider the example of usage statistics: people who gave Google the right to watch what they’re doing are handing over precious information about their lives in exchange for a better browsing experience. Google can then use this information to make better-targeted ads, which people are more likely to click. “Real” financial benefits can ensue.
It’s not a good thing to dwell on past failures when doing something new. Screwdrivers get sold even though their vendors do not tightly control the spec, and it’s the real product value which wins in the end…
Perfectionism is important in order to make a great product. But in something as large as the computer industry, there are times when you have to make a choice: specialize in something and leave the rest to other people.
As an example, Apple doesn’t make MacBooks itself. It partly designs them, and it makes the software for them, but it’s ASUS who works on the hardware in the end. And from a quality point of view, that doesn’t sound so bad, except maybe for heat management (which is the necessary drawback of wanting small computers to be silent).
Well, Windows sells very well, and I’m not ready to admit that other OSs are irrelevant because of their low sales. Every innovation in history started out as something small.
Moreover, the evolution of mankind is not adiabatic. You cannot say at some instant that an equilibrium state has been reached, that someone in the market has “won”. Things like speculation show how irrelevant the notion of equilibrium is when considering how things move on a market.
Apple has managed to make some sales, good. But once all current potential iPhone customers have bought one, will Apple manage to match the competition and adapt its product to the needs of more people, as it did by widening the iPod product line but didn’t with the Mac? That remains to be seen.
I totally and absolutely agree. People buy a phone, not an OS. That’s why carriers should stop bloating phone OSs with crap: it gives a false impression that the phone itself is of poor quality, when they are the ones to blame.
This is where I’m against Apple’s holistic approach, by the way. In my opinion, everyone should stick to their job and only hand design docs to others. Laptops stuffed with crapware by the hardware manufacturer? Unacceptable. Carriers redoing phones’ home screens and preventing people from updating, instead of just doing their job of carrying phone calls and texts? Unacceptable too. People should stick to the areas where they are competent, and a finite number of people cannot be competent in every area.
Why not! It was an interesting read.
Edited 2010-06-27 15:59 UTC
My posts are likely to be few and far between in the next couple of months, as I am about to embark on a long road trip through the more remote parts of the American south west. I live in crowded London, and it’s a great city, but the first time I hit the open highway on cruise control on my first trip to the States, I fell in love with the US of A. When you guys sing about America the beautiful, you are not joking; I find the beauty of the Rockies and the deserts of the Colorado plateau simply sublime.
So late August, 5000 miles and about 3000 photos on my Nikon D700 later I will be back.
I hacked a Dell Mini 10v to run Snow Leopard as cheap expendable road kit a while ago and this will be its first field trip. I will let you know how I get on. I will probably buy an iPad in the states where they are cheaper than in the UK.
And remember “When the going gets weird, the weird turn pro”
and we’ve been able to pretty much lay the blame for botnets and phishing at Microsoft’s doorstep
Well, I would blame botnets on the people who write them, and they can only exist in their current numbers because of the number of people who engage in poor security practices, like keeping the OS out of date or installing pirated software.
As for phishing most attempts are through email which is OS independent. It’s the browser that catches phishing attempts, not the OS.
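To illustrate the point about the browser (not the OS) being what catches phishing attempts: modern browsers check each URL against a constantly updated blocklist before loading it (e.g. via hashed-prefix lookups against services like Google Safe Browsing). The sketch below is only a toy version of that idea, with a made-up blocklist; the hostnames and the `is_phishing` helper are purely illustrative, not any real browser’s API.

```python
# Toy sketch of a browser-side phishing filter: check each URL's host
# against a local blocklist before loading the page. Real browsers use
# hashed prefix lookups against an updated remote list; this shows only
# the basic idea.

from urllib.parse import urlparse

# Hypothetical blocklist of known phishing hosts (made up for illustration).
PHISHING_HOSTS = {"paypa1-secure-login.example", "bank-verify.example"}

def is_phishing(url: str) -> bool:
    """Return True if the URL's host or any parent domain is blocklisted."""
    host = urlparse(url).hostname or ""
    # Generate the host plus every parent domain, so a blocklisted
    # domain also covers all of its subdomains.
    parts = host.split(".")
    candidates = {".".join(parts[i:]) for i in range(len(parts))}
    return bool(candidates & PHISHING_HOSTS)

print(is_phishing("http://paypa1-secure-login.example/account"))  # True
print(is_phishing("http://login.bank-verify.example/"))           # True (subdomain)
print(is_phishing("https://www.example.com/"))                    # False
```

Note that none of this depends on the underlying OS, which is the commenter’s point: the same check runs identically wherever the browser does.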
Absolute tosh! Honestly, one of the main reasons Microsoft had to deal with decades of security problems was its dodgy design decisions (ActiveX) and lack of security focus.
So to quietly suggest that Apple might have the safest route, and that Google should rectify the Android platform to be more like Apple’s, is utter nonsense. What is this article trying to achieve!? Other than selling security software.
More open development and peer review would be beneficial to Android. On the Apple side, they validate all application submissions, but at the same time every developer makes their own fart application. Android has the potential to foster more shared development (let all the fart-app developers polish the same code). At minimum, this would restore the peer-review benefit of FOSS development. Over in the Maemo stables, the list of programs is shorter but appears to be shared and developed with far more goodwill than what we’re seeing with Android.