Linked by Thom Holwerda on Thu 3rd Nov 2011 22:54 UTC
And so the iOS-ification of Mac OS X continues. Apple has just announced that all applications submitted to the Mac App Store have to use sandboxing by March 2012. While this has obvious security advantages, the concerns are numerous - especially since Apple's current sandboxing implementation and associated rules make a whole lot of applications impossible.
Thread beginning with comment 496190
RE[6]: Comment by frderi
by frderi on Sun 6th Nov 2011 19:37 UTC in reply to "RE[5]: Comment by frderi"


I agree that software protections which are good enough against everyday desktop and mobile threats will be insufficient against targeted attacks with colossal financial and human means like Stuxnet. When you're facing this sort of attack, you need NASA-like permanent code auditing and warfare-like financial and human means to achieve good security.

However, I also believe that the average desktop/mobile user is not likely to have to worry about this anytime soon.


I'm not sure if you're aware of how the black hat industry works. Make no mistake, this is a multi-million-dollar industry. There are people out there who make a living out of it. There are people who do nothing all day but find these zero-day bugs. And when they find them, they sell them on the black market for hundreds or thousands of dollars. These aren't the kinds of bugs that come to light through patches; the black hat industry has moved beyond that. These are bugs that aren't known to their respective vendors and aren't patched in any of their products. This information is then bought by malware writers, who exploit it in their malicious code for keylogging, botnets, whatever. I have no doubt that black hats are capable of writing Stuxnet-like functionality. Don't underestimate these guys, they're way smarter than you think.


Hmmm... Which version of OS X are we talking about here? I think that on the (admittedly a little old) 10.5 machines which I'm used to, Safari automatically mounts and opens DMGs but does not do anything else.


Opening "safe" files is an option you can turn on and off in Safari's preferences; it also works with ZIP files.


I really, really do not like Windows-like installers, but I see the value in standard packages whose installation goes a bit beyond copying a folder to a standard location. File associations, applications which start on system boot, security permissions... All that benefits from being managed at once during "installation" time.


True. On a Mac, .pkg/.mpkg packages do that. They actually are little more than a bundle of archive files plus some XML data describing their contents. They support scripting, resources, …


You are right that cross-device portability, if possible, would be about much more than basic UI fixes. I've not started full work on that yet, but an interesting path to study, in my opinion, would be to start with a relatively abstract theory of human-computer interactions, then gradually specialize it towards the kind of devices and users which the OS or application wants to target.


It's an interesting train of thought, but I still think there would be a lot of human design decisions to be made for the different devices, and I don't know if the net gain of letting the computer do this would be greater than just redesigning the UI yourself, especially on iOS devices, where it's trivial to set up a UI.


And when you do too little, people just say "meh" and move along ;) I guess that defining reasonable goals for a product must be one of the hardest tasks of engineering!


It has to have the functionality to support the use cases for the device. Everything else is just clutter. After defining the goals of your app, you need to design the practical implementation of the functionality. As a user, I really appreciate it when a lot of thought has gone into this process. Some UIs are basically just displays of the underlying functionality; these tend to be very tedious and time-consuming to work with. Others actually take the effort to translate between a simple user interaction and the underlying technology. A lot of thought can go into working out how these interactions should present themselves to the user, and in some cases it takes an order of magnitude more effort than actually writing the code behind it.

Well, they do have windows, in the sense of a private display which the application may put its UI into without other software interfering. It just happens that these windows are not resizable, always run full screen, and as a consequence are hard to close and can only be switched using the operating system's task switcher, which makes multi-window interfaces impractical. But those ought to disappear anyway ;)


You're looking at it from a developer perspective; I'm looking at it from a user perspective. As a user I don't care if there's a windowing technology behind it or not. I don't see it, I don't use it, so it doesn't exist. Desktop computers have windowing functionality (the classic Mac OS even had way too much of it). There are more differences than that. Some popups, like authorizations, are modal; some others, like notifications, are non-modal. The way they display these things is different as well. But these are just individual elements, and in the grand scheme of things, trivialities.


Although they do not have a mouse, they still have pointer-based UIs. Only this time, the pointer is a huge greasy finger instead of a pixel-precise mouse, so hovering actions must not be a vital part of the UI, and controls must be made bigger to be usable. Since controls are bigger and screens are smaller, fewer controls can be displayed at once, and some controls must either go or be only accessible through scrolling. But this does not have to be fully done by hand; UI toolkits could do a part of the job if the widget set was designed with cross-device portability in mind...


Try to think a little bit further than the practicalities of the UI elements and think about the overall user experience instead of the engineering challenges. Good tablet apps are laid out differently than good desktop apps. This is not a coincidence. Some of those differences are based on the different platform characteristics, as you mentioned. But others have to do with the fact that the use cases for these apps differ greatly. I'm convinced that when you are designing UIs, you have to start from the user experience and define these use cases properly to be able to come to an application design that's truly empowering for your users.


RE[7]: Comment by frderi
by Neolander on Sun 6th Nov 2011 22:11 in reply to "RE[6]: Comment by frderi"

I'm not sure if you're aware of how the black hat industry works. Make no mistake, this is a multi-million-dollar industry. There are people out there who make a living out of it. There are people who do nothing all day but find these zero-day bugs. And when they find them, they sell them on the black market for hundreds or thousands of dollars. These aren't the kinds of bugs that come to light through patches; the black hat industry has moved beyond that. These are bugs that aren't known to their respective vendors and aren't patched in any of their products. This information is then bought by malware writers, who exploit it in their malicious code for keylogging, botnets, whatever. I have no doubt that black hats are capable of writing Stuxnet-like functionality. Don't underestimate these guys, they're way smarter than you think.

I don't think that black hat guys are stupid or incapable of pulling off top-quality exploits. For all I know, Stuxnet may just have been the American government hiring some black hats. But it is a fact that the more information about an exploit spreads, the more likely it is to reach the ears of developers, who will then be able to patch it.

So if a black hat has a high-profile, Stuxnet-like exploit at hand, won't he rather sell it for a hefty sum of money to high-profile malware outfits, which will then use it to attack high-profile targets, than sell it for the regular price to a random script kiddie who will use it to write yet another fake antivirus that displays ads and attempts to steal credit card information?

True. On a Mac, .pkg/.mpkg packages do that. They actually are little more than a bundle of archive files plus some XML data describing their contents. They support scripting, resources, …

Indeed, these are relatively close to what can be found on Linux. Now, personally, what I'd like to see is something between DMGs and this variety of packages: a standard package format which does not require root access for standard installation procedures and has an extremely streamlined installation procedure for mundane and harmless software, but still has all the bells and whistles of a full installation procedure when it is needed.

It's an interesting train of thought, but I still think there would be a lot of human design decisions to be made for the different devices, and I don't know if the net gain of letting the computer do this would be greater than just redesigning the UI yourself, especially on iOS devices, where it's trivial to set up a UI.

Oh, sure, I'm not talking about making UI design disappear, just changing the balance a bit between what's easy and what's difficult in it, in favor of making software work for a wider range of hardware and users.

Adopting consistent terminology, designing good icons, writing good error messages, avoiding modals like the plague: many ingredients of good UI design as it exists today would remain. But making desktop software scale well when the main window is resized, or designing for blind people, would be easier, whereas a price would be paid in terms of how easy it is to mentally picture what you are working on during design work, making good IDEs even more important.

It has to have the functionality to support the use cases for the device. Everything else is just clutter.

This is not as trivial as you make it sound, though. Sometimes, the same use cases can be supported with more or less functionality, and there is a trade-off between comfort and usability.

Take, as an example, dynamically resizable arrays in the world of software development. Technically, all a good C developer needs in order to do that is malloc(), free() and memcpy(). But this is a tedious and error-prone process, so if resizing arrays is to be done frequently (as with strings), stuff which abstracts the resizing process away, such as realloc(), becomes desirable.
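
To make the contrast concrete, here's a rough sketch in C of what I mean (the helper names are just for illustration, not anyone's real API):

#include <stdlib.h>
#include <string.h>

/* Growing an array by hand: allocate a bigger block, copy, free the old one. */
int *grow_by_hand(int *old, size_t old_count, size_t new_count)
{
    int *bigger = malloc(new_count * sizeof *bigger);
    if (bigger == NULL)
        return NULL;                          /* the old block is still valid */
    memcpy(bigger, old, old_count * sizeof *old);
    free(old);
    return bigger;
}

/* The same job, with the tedious part abstracted away by the allocator. */
int *grow_with_realloc(int *old, size_t new_count)
{
    return realloc(old, new_count * sizeof *old);
}

Both do the same thing; the second just hides the copy-and-free steps, which are exactly the ones people tend to get wrong.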

But that was just a parenthesis.

Some UIs are basically just displays of the underlying functionality; these tend to be very tedious and time-consuming to work with. Others actually take the effort to translate between a simple user interaction and the underlying technology. A lot of thought can go into working out how these interactions should present themselves to the user, and in some cases it takes an order of magnitude more effort than actually writing the code behind it.

Well, we totally agree that UI design really is tedious and important stuff, and will remain so for the foreseeable future ;)

You're looking at it from a developer perspective; I'm looking at it from a user perspective. As a user I don't care if there's a windowing technology behind it or not. I don't see it, I don't use it, so it doesn't exist.

By this logic, an awful lot of computer technology does not exist, until the day it starts crashing or being exploited because it was treated as low-priority since users don't touch it directly ;)

More seriously, I see your point. Mine was just that if you took a current desktop operating system, set the taskbar to auto-hide, and used a window manager which runs every app full screen and doesn't draw window decorations, you'd get something that's extremely close in behaviour to a mobile device, and all software which doesn't use multiple windows wouldn't need to be changed one bit. So full-screen windows are not such a big deal as far as UI design is concerned, in my opinion.

Desktop computers have windowing functionality (the classic Mac OS even had way too much of it). There are more differences than that. Some popups, like authorizations, are modal; some others, like notifications, are non-modal. The way they display these things is different as well. But these are just individual elements, and in the grand scheme of things, trivialities.

And mobile OSs have modal dialogs and notifications too. No, seriously, I don't see what the deal is with windows on mobile devices. AFAIK, the big differences, as far as UI design is concerned, are that there is a very small amount of screen real estate and that touchscreens require very big controls to be operated. But you talk about this later, so...

(...) Good tablet apps are laid out differently than good desktop apps. This is not a coincidence. Some of those differences are based on the different platform characteristics, as you mentioned. But others have to do with the fact that the use cases for these apps differ greatly. I'm convinced that when you are designing UIs, you have to start from the user experience and define these use cases properly to be able to come to an application design that's truly empowering for your users.

And this is precisely the area I wanted to get to. Is there such a difference in use cases between desktops and tablets? I can use a desktop as well as a tablet to browse the web, fetch mail, or play coffee-break games. And given some modifications to tablet hardware, such as the addition of an optional stylus, and the addition of more capable OSs, tablets could be used for a very wide range of desktop use cases.

Now, there is some stuff which will always be more convenient on a desktop than on a tablet, and vice versa, because of the fundamental differences in hardware design criteria. But in the end, a personal computer remains a very versatile machine, and those we have nowadays are particularly similar to each other. Except for manufacturers who want to sell lots of hardware, there is little point in artificially segregating "tablet-specific" use cases from "desktop-specific" use cases. That would be like deriding laptop owners who play games because they don't have "true" gaming hardware, which I hope you agree would be just wrong. Everyone should use whatever works best for them.


RE[8]: Comment by frderi
by frderi on Mon 7th Nov 2011 13:02 in reply to "RE[7]: Comment by frderi"


So if a black hat has a high-profile, Stuxnet-like exploit at hand, won't he rather sell it for a hefty sum of money to high-profile malware outfits, which will then use it to attack high-profile targets, than sell it for the regular price to a random script kiddie who will use it to write yet another fake antivirus that displays ads and attempts to steal credit card information?


I don't think Android exploits are really that "high profile", and if there's money to be made, I don't think black hats really care about what profile it has. It's all about return on investment. The more of the same systems there are in the market, the more interesting an exploit becomes, since your attack surface increases by a great margin.

To give you an example: suppose I'm a malware writer, and I write a worm that on a certain night every month, at 2 am, silently calls an overseas toll number, allowing me to collect $1 from the call. I wrote the app, but I need some clever exploits to insert it into the system. Suppose it's not one hack but a collection of pretty neat hacks, and after shopping around, it sets me back $25K to have them. Then I write the worm and release it, and it's able to infect a good 100,000 smartphones. Since the call is sporadic and it only costs a couple of dollars, it's improbable that people will discover it right away. Hardly anyone checks every call every month. So over the course of a year, I collect a cool 5.8 million dollars. Say people check their bill once every few months and 1% discover it every month, that's still $3.8M. Say 4% check it every month, that's still $1.3M. Still quite a nice investment. Say the exploits cost me 5 times as much, or even 25 times as much, it's still a nice return. I don't know if the numbers are realistic because I'm not a black hat, I just wanted to show what kind of potential a dominant smartphone platform has.
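
Just to show how those back-of-the-envelope quantities combine, here's a rough model in C (the figures are placeholders of my own, pick whatever you find plausible; the only point is how fleet size, per-call revenue and discovery rate interact):

#include <stdio.h>

int main(void)
{
    /* Placeholder figures, roughly in the spirit of the example above. */
    double fleet          = 100000.0; /* infected smartphones at launch            */
    double dollars_call   = 1.0;      /* skimmed per device per monthly call       */
    double discovery_rate = 0.01;     /* fraction of victims who notice each month */
    double exploit_cost   = 25000.0;  /* what the zero-days cost on the black market */

    double take = 0.0;
    for (int month = 0; month < 12; month++) {
        take  += fleet * dollars_call;   /* this month's haul            */
        fleet *= 1.0 - discovery_rate;   /* victims who noticed drop out */
    }

    printf("gross take after one year: $%.0f\n", take);
    printf("profit after paying for the exploits: $%.0f\n", take - exploit_cost);
    return 0;
}

Whatever the exact numbers, the take scales linearly with the installed base, which is exactly why a dominant platform is such an attractive target.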


Indeed, these are relatively close to what can be found on Linux. Now, personally, what I'd like to see is something between DMGs and this variety of packages: a standard package format which does not require root access for standard installation procedures and has an extremely streamlined installation procedure for mundane and harmless software, but still has all the bells and whistles of a full installation procedure when it is needed.


I'm not quite sure what you mean by "between" the two. A .dmg is a virtual disk file describing the contents of a disk volume; a .pkg is an installation description for a bill of materials that is read by an application and executed. Both can be combined with each other. .pkg files are scriptable, extensible through code, and combinable into metapackages. You can make them as simple or as complicated as you like. You can specify in your .pkg whether the installation requires authentication or not. If you're just installing into a user's home folder, you can do so without authentication.


This is not as trivial as you make it sound, though. Sometimes, the same use cases can be supported with more or less functionality, and there is a trade-off between comfort and usability.


The best user interaction designs are often the ones that take a novel approach to doing things, with an ingenious simplicity. Take the iPod, for example. The click wheel is an inherently simple design, much simpler than a set of buttons, yet it's still a lot faster to navigate around your device with it than with buttons, even though buttons are more complicated.


Mine was just that if you took a current desktop operating system, set the taskbar to auto-hide, and used a window manager which runs every app full screen and doesn't draw window decorations, you'd get something that's extremely close in behaviour to a mobile device, and all software which doesn't use multiple windows wouldn't need to be changed one bit. So full-screen windows are not such a big deal as far as UI design is concerned, in my opinion.

Is there such a difference in use cases between desktops and tablets? I can use a desktop as well as a tablet to browse the web, fetch mail, or play coffee-break games. And given some modifications to tablet hardware, such as the addition of an optional stylus, and the addition of more capable OSs, tablets could be used for a very wide range of desktop use cases.


I think they are. UIs aren't flat surfaces. Every good UI has depth: the important things are on the surface, the less important stuff is tucked away deeper. A good UI balances what goes where on the frequency of the use case. If manipulations are frequent, you'd better make them obvious on the surface of the UI. If they're infrequent, it's better to tuck them away deeper, so they don't clutter up the important stuff. Although our post-PC devices do similar things, I think their use cases can differ greatly because of circumstantial characteristics. So I think that to tune them well towards their intended use, their UIs need to be different as well. I'll give you some examples:

Consider mail. When I'm using mail on my desktop, I want to have all the tools at my fingertips to be as productive as possible in my mail client. All the "power tools", like sorting, moving and labeling my email, advanced editing functionality, and so on, need to be right where I want them. A smartphone does mail too, but that's just about where the similarities end. Email on a smartphone is more a way to keep you up to date on your inbox and shoot the occasional short reply when things can't wait. No sane person is going to write lengthy emails on their smartphone or do mailbox maintenance; that stuff's just way too tedious on a tiny screen. Now tablets are somewhere in the middle between smartphones and desktop computers, but I still don't think people will want to do a lot of mailbox management on a tablet, because it's still too tedious. A tablet is more something to take with you when you're on the move or on the couch, when you want more comfort while doing email and tend to do more than just skimming your inbox and typing a short reply. So the use case for mail on a tablet will be somewhere between those of smartphones and PCs. You'll want a couple of features more than on a smartphone, but fewer than on a desktop.

Another example is the GarageBand application. GarageBand is a sequencer which has shipped with Macs for a while now, and it has a version for the iPad and now the iPhone. The mobile versions are essentially a visual multitrack recorder with extras thrown in. The desktop version is more of an editing, polishing and export tool. So you can record your jams on your iPhone or iPad, transfer the recordings to your desktop computer, clean them up and export them. This "software on a standardized platform replacing dedicated appliances" approach works really well to turn post-PC devices into truly versatile tools. Controlling virtual appliances with a mouse has always been awkward, but they make a lot more sense on a post-PC device. The biggest mistake one could make with the tablet form factor is to look at it as a PC in a frame. We both know that technically, that's what it is. But it's this technical myopia that made tablets a dud in the marketplace and kept anyone from coming up with a compelling solution until the iPad came along.
