Linked by Thom Holwerda on Thu 3rd Nov 2011 22:54 UTC
And so the iOS-ification of Mac OS X continues. Apple has just announced that all applications submitted to the Mac App Store have to use sandboxing by March 2012. While this has obvious security advantages, the concerns are numerous - especially since Apple's current sandboxing implementation and associated rules make a whole lot of applications impossible.
RE[7]: Comment by frderi
by Neolander on Sun 6th Nov 2011 22:11 UTC in reply to "RE[6]: Comment by frderi"

"I'm not sure if you're aware of how the black hat industry works. Make no mistake, this is a multi-million-dollar industry. There are people out there who make a living out of it. There are people who do nothing all day but find these zero-day bugs. And when they find them, they sell them on the black market, for hundreds or thousands of dollars. These aren't the kinds of bugs that come to light through patches. The black hat industry has moved beyond that. These are bugs that aren't known by their respective vendors and aren't patched in any of their products. This information is then bought by malware writers, who exploit these bugs in their malicious code for keylogging, botnets, whatever. There's not a hair on my head that thinks black hats are not capable of writing Stuxnet-like functionality. Don't underestimate these guys, they're way smarter than you think."

I don't think that black hat guys are stupid or incapable of pulling off top-quality exploits. For all I know, Stuxnet may just have been the American government hiring some black hats. But it is a fact that the more information about an exploit spreads, the more likely it is to reach the ears of developers, who will then be able to patch it.

So if a black hat has a high-profile, Stuxnet-like exploit at hand, wouldn't he rather sell it for a hefty sum of money to high-profile malware authors, who will then use it to attack high-profile targets, than sell it for the regular price to a random script kiddie who will use it to write yet another fake antivirus that displays ads and tries to steal credit card information?

"True. On a Mac, .pkg/.mpkg packages do that. They actually are little more than a bundle of archive files and some XML data to describe their contents. They support scripting, resources, …"

Indeed, these are relatively close to what can be found on Linux. Now, personally, what I'd like to see is something between DMGs and this variety of packages: a standard package format which does not require root access for ordinary installations and keeps the procedure extremely streamlined for mundane, harmless software, but still offers all the bells and whistles of a full installation procedure when they are needed.
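Purely to illustrate what I mean (this is entirely hypothetical, not any existing format), such a package could carry a tiny manifest telling the installer whether the streamlined path is enough or whether the full, privileged procedure is warranted. A rough C sketch, with invented field names:

/* Hypothetical manifest for the hybrid package format described above.
 * Nothing like this exists today; every field name is made up for illustration. */
#include <stdbool.h>

struct pkg_manifest {
    const char *name;         /* human-readable application name          */
    const char *version;      /* version string                           */
    bool        needs_root;   /* false: streamlined, per-user install     */
    bool        has_scripts;  /* true: pre/post-install scripts to run    */
    const char *payload_path; /* archive containing the application files */
    const char *scripts_path; /* only consulted when has_scripts is true  */
};

/* A harmless coffee-break game would ship with needs_root and has_scripts
 * both false, so it can be unpacked into the user's home directory without
 * a password prompt; a driver package would set both and trigger the full
 * installation procedure. */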

"It's an interesting train of thought, but I still think there would be a lot of human design decisions to be made for the different devices, and I don't know if the net gain of letting the computer do this would be greater than just redesigning the UI yourself, especially on iOS devices, where it's trivial to set up a UI."

Oh, sure, I'm not talking about making UI design disappear, just about shifting the balance of what's easy and what's difficult in it, in favor of making software work for a wider range of hardware and users.

Adopting a consistent terminology, designing good icons, writing good error messages, avoiding modal dialogs like the plague: many ingredients of good UI design as it exists today would remain. But making desktop software scale well when the main window is resized, or designing for blind people, would be easier, whereas a price would be paid in how easily you can mentally picture what you are working on during design work, making good IDEs even more important.

"It has to have the functionality to support the use cases for the device. Everything else is just clutter."

This is not as trivial as you make it sound, though. Sometimes, the same use cases can be supported with more or less functionality, and there is a trade-off between comfort and usability.

Take, as an example, dynamically resizable arrays in the world of software development. Technically, all a good C developer needs in order to implement them is malloc(), free() and memcpy(). But this is a tedious and error-prone process, so if arrays are to be resized frequently (as with strings), something which abstracts the resizing away, such as realloc(), becomes desirable.
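To make that concrete, here is a rough sketch (just an illustration, not production code) of growing an integer array by hand next to the realloc() version:

#include <stdlib.h>
#include <string.h>

/* Growing a buffer "by hand": allocate a bigger block, copy, free the old one.
 * Every caller has to get the copy size and the error handling right. */
static int *grow_manually(int *old, size_t old_count, size_t new_count)
{
    int *bigger = malloc(new_count * sizeof *bigger);
    if (!bigger)
        return NULL;                              /* old block stays valid   */
    memcpy(bigger, old, old_count * sizeof *old); /* copy the existing items */
    free(old);
    return bigger;
}

/* The same operation with the resizing abstracted away. */
static int *grow_with_realloc(int *old, size_t new_count)
{
    return realloc(old, new_count * sizeof *old); /* copy and free handled for us */
}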

But that was just a parenthesis.

"Some UIs are basically displays of the underlying functionality. These tend to be very tedious and time-consuming to work with. There are others which actually take the effort to make the translation between a simple user interaction and the underlying technology. A lot of thought can go into the process of trying to come to grips with how these interactions should present themselves to the user, and in some cases, it takes an order of magnitude more effort than it takes to actually write the code behind it."

Well, we totally agree that UI design really is tedious and important stuff, and will remain so for the foreseeable future ;)

"You're looking at it from a developer perspective, I'm looking at it from a user perspective. As a user I don't care if there's a windowing technology behind it or not. I don't see it, I don't use it, so it doesn't exist."

By this logic, a whole lot of computer technology does not exist, until the day it starts crashing or being exploited after being treated as low priority because users don't touch it directly ;)

More seriously, I see your point. Mine was just that if you took a current desktop operating system, set the taskbar to auto-hide, and used a window manager which runs every app full screen and doesn't draw window decorations, you'd get something extremely close in behaviour to a mobile device, and all software which doesn't use multiple windows wouldn't need to change one bit. So full-screen windows are not such a big deal as far as UI design is concerned, in my opinion.
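To give an idea of how little that takes, here is a toy sketch (X11-specific, and stripped of everything a real window manager must also handle: focus, input, ConfigureRequest events, multiple monitors) of a window manager that simply maps every client full screen with no decorations:

/* toy full-screen "window manager" sketch; build with: cc fswm.c -lX11 */
#include <X11/Xlib.h>
#include <stdio.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) {
        fprintf(stderr, "cannot open display\n");
        return 1;
    }

    Window root = DefaultRootWindow(dpy);
    int sw = DisplayWidth(dpy, DefaultScreen(dpy));
    int sh = DisplayHeight(dpy, DefaultScreen(dpy));

    /* Ask the X server to redirect map requests to us; only one client
     * (the window manager) may do this at a time. */
    XSelectInput(dpy, root, SubstructureRedirectMask);

    for (;;) {
        XEvent ev;
        XNextEvent(dpy, &ev);
        if (ev.type == MapRequest) {
            /* No decoration frame, no tiling: every app just fills the screen. */
            XMoveResizeWindow(dpy, ev.xmaprequest.window, 0, 0, sw, sh);
            XMapWindow(dpy, ev.xmaprequest.window);
        }
    }
}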

"Desktop computers have windowing functionality (the classic Mac OS even had way too much of it). There are more differences than that. Some popups, like authorizations, are modal; some others, like notifications, are non-modal. The way they display these things is different as well. But these are just individual elements, and in the grand scheme of things, trivialities."

And mobile OSs have modal dialogs and notifications too. No, seriously, I don't see what the deal is with windows on mobile devices. AFAIK, the big differences, as far as UI design is concerned, are that there is a very small amount of screen real estate and that touchscreens require very big controls to be operated. But you talk about this later, so...

"(...) Good tablet apps are laid out differently than good desktop apps. This is not a coincidence. Some of those differences are based on the different platform characteristics, as you mentioned. But other reasons have to do with the fact that the use cases for these apps differ greatly. I'm convinced that when you are designing UIs, you have to start from the user experience and define these use cases properly to be able to come to an application design that truly empowers your users."

And this is precisely where I wanted to go. Is there such a difference in use cases between desktops and tablets? I can use a desktop as well as a tablet to browse the web, fetch mail, or play coffee-break games. And given some modifications to tablet hardware, such as an optional stylus, plus more capable OSs, tablets could be used for a very wide range of desktop use cases.

Now, there is some stuff which will always be more convenient on a desktop than on a tablet, and vice versa, because of the fundamental differences in hardware design criteria. But in the end, a personal computer remains a very versatile machine, and those we have nowadays are particularly similar to each other. Except for manufacturers who want to sell lots of hardware, there is little point in artificially segregating "tablet-specific" use cases from "desktop-specific" use cases. That would be like mocking laptop owners who play games because they don't have "true" gaming hardware, which I hope you agree would be just wrong. Everyone should use whatever works best for them.
