Last week, we mentioned that the extremely popular open source video player VLC is getting a brand-new interface in its upcoming 4.0 release, expected to debut later this year. VLC 4.0 isn’t ready for prime time yet—but because the program is open source, adventurous users can grab nightly builds of it to take a peek at what’s coming. The screenshots we’re about to show come from the nightly build released last Friday—20210212-0431. VLC is an incredibly popular application, so any major user interface overhaul like this is sure to lead to a lot of bikeshedding.
Wayland (the protocol and architecture) is still lacking proper consideration for color management. Wayland also lacks support for high dynamic range (HDR) imagery, which has been around in the movie and broadcasting industries for a while now (e.g. Netflix HDR UI). While there are well-established tools and workflows for color management on X11, even X11 has not gained support for HDR. There were plans for it (Alex Goins, DeepColor Visuals), but as far as I know nothing really materialized from them. Right now, the only way to watch HDR content on an HDR monitor in Linux is to use the DRM KMS API directly, in other words, not use any window system, which means not using any desktop environment. Kodi is one of the very few applications that can do this at all. This is a story about starting the efforts to fix the situation on Wayland. This is a great article to read – and an important topic, too. Colour management and HDR should be a core aspect of Wayland, and these people are making it happen.
If you have used tools like Google’s PageSpeed Insights, you have probably run into a suggestion to use “next-gen image formats”, namely Google’s WebP image format. Google claims that their WebP format is 25-34% smaller than JPEG at equivalent quality. I think Google’s result of 25-34% smaller files is mostly caused by the fact that they compared their WebP encoder to the JPEG reference implementation, Independent JPEG Group’s cjpeg, not Mozilla’s improved MozJPEG encoder. I decided to run some tests to see how cjpeg, MozJPEG and WebP compare. I also tested the new AVIF format, based on the open AV1 video codec. AVIF support is already in Firefox behind a flag and should be coming soon to Chrome if this ticket is to be believed. Spoiler alert: WebP doesn’t really provide any benefits, and since websites generally use JPEG as a fallback anyway, you end up having to store two images at the same time, defeating the purpose entirely.
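For readers wondering how a figure like “25-34% smaller at equivalent quality” is computed, it is simply the relative reduction in file size between the two encodings of the same image. A minimal sketch (the byte counts below are hypothetical placeholders, not measurements from the article):

```python
def percent_smaller(baseline_bytes: int, candidate_bytes: int) -> float:
    """How much smaller the candidate encoding is than the baseline, as a percentage."""
    return (baseline_bytes - candidate_bytes) / baseline_bytes * 100

# Hypothetical sizes for one image encoded at visually comparable quality:
jpeg_size = 100_000  # e.g. cjpeg output
webp_size = 70_000   # e.g. cwebp output
print(f"WebP is {percent_smaller(jpeg_size, webp_size):.0f}% smaller")
```

The author’s point is that the baseline matters: measured against MozJPEG output rather than the reference cjpeg, the candidate’s apparent savings shrink.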
I am a programmer. I do not deal with digital painting, photo processing, video editing. I don’t really care for wide gamut or even proper color reproduction. I spend most of my days in a text browser, text editor and text terminal, looking at barely moving letters. So I optimize my setup for showing really, really good letters. A good monitor is essential for that. Not nice to have. A MUST. And by “good” I mean as good as you can get. These are my thoughts, based on my own experience, on what monitors work best for programming. There’s a lot of good advice in here. We all know higher pixel densities make our user interfaces and text crisper, but a surprising number of people still don’t seem to know just how much of a game-changer high refresh rates can be. If you’re shopping around for a new monitor, and you have to choose between a higher pixel count or a high refresh rate, you should 100% without a doubt go for the higher refresh rate. The difference 120Hz or 144Hz will make in just how smooth and responsive a UI can be is astonishing. I think the sweet spot is 1440p at 144Hz, preferably with FreeSync or G-Sync. Both Windows and Linux support high refresh rates out of the box, but as the linked article notes, macOS basically has no clue anything above 60Hz exists, and you’ll have to be very careful about what display you buy, and be willing to jump through annoying hoops every time you load up macOS just to enable high refresh rates.
Inkscape 1.0 has been released. A major milestone was achieved in enabling Inkscape to use a more recent version of the software used to build the editor’s user interface (namely GTK+3). Users with HiDPI (high resolution) screens can thank teamwork that took place during the 2018 Boston Hackfest for setting the updated-GTK wheels in motion. This is just the tip of the iceberg of this massive release.
Today, it seems we’re on another track completely. Despite being endlessly fawned over by an army of professionals, Usability, or as it used to be called, “User Friendliness”, is steadily declining. During the last ten years or so, adhering to basic standard concepts seems to have fallen out of fashion. On comparatively new platforms, i.e. smartphones, it’s inevitable: the input mechanisms and interactions with the display are so different from desktop computers that new paradigms are warranted. Worryingly, these paradigms have begun spreading to the desktop, where keyboards for fast typing and pixel-precision mice effectively render them pointless. Coupled with the flat design trend, UI elements are increasingly growing both bigger and yet somehow harder to locate and tell apart from non-interactive decorations and content. I doubt anyone here will disagree with the premise of this article, even if you might disagree with some of the examples. These past few weeks I’ve set up virtual machines of all the old Windows releases just to remind myself of just how the excellent graphical user interface introduced in Windows 95 was perfected over the years, culminating in the near-perfect Classic theme in Windows XP and Server 2003. Later iterations of the Classic theme, in Vista and onward, would sadly retain some of the Aero UI elements even when setting the Classic theme, ruining the aesthetic, and of course, the Classic theme is gone altogether now – you can’t set it in Windows 10. Similarly, Platinum in Mac OS 9 is still more coherent, more usable, and more intentional than whatever macOS brought to the table over the years. We can find solace in the fact that trends tend to be cyclical, so there’s a real chance the pendulum will eventually swing back.
I stopped there because we had to get back to work, but without even leaving the Finder and Desktop I was able to find a bunch of things that long-time Mac users had never known about because they never discovered them in their daily use. None of this is meant to say macOS is garbage or anything like that. It’s just interesting to observe, given how people who love the Mac are so critical of “discoverability” on the iPad. I’m not even saying the iPad is better than the Mac here, I’m just saying that “discoverability” is one of the big things that has people in a tizzy right now about the iPad, but I think some are laying into the iPad harder than is warranted. You have no idea how many undiscoverable or obtuse features, functions, tricks, and so on you take for granted when using old, established platforms like Windows or macOS.
Glass is a simulated operating system user interface (UI) project and it is being made with Unity 2018.4. It is not a real OS, although everything in the package is functional and can be changed easily. Not really an operating system, of course, but still a fascinating project. It also highlights just how versatile modern game engines really are – this is the same engine some of my favourite modern cRPGs and Cities: Skylines are running on.
What explains the popularity of terminals with 80×24 and 80×25 displays? A recent blog post “80×25” motivated me to investigate this. The source of 80-column lines is clearly punch cards, as commonly claimed. But why 24 or 25 lines? There are many theories, but I found a simple answer: IBM, in particular its dominance of the terminal market. In 1971, IBM introduced a terminal with an 80×24 display (the 3270) and it soon became the best-selling terminal, forcing competing terminals to match its 80×24 size. The display for the IBM PC added one more line to its screen, making the 80×25 size standard in the PC world. The impact of these systems remains decades later: 80-character lines are still a standard, along with both 80×24 and 80×25 terminal windows. As noted, a follow-up to our earlier discussion.
A month ago, we discussed an article about just how difficult text rendering is, and today we get to take a look at the other side of the coin – text editing. Alexis Beingessner’s Text Rendering Hates You, published exactly a month ago today, hits very close to my heart. Back in 2017, I was building a rich text editor in the browser. Unsatisfied with existing libraries that used ContentEditable, I thought to myself “hey, I’ll just reimplement text selection myself! How difficult could it possibly be?” I was young. Naive. I estimated it would take two weeks. In reality, attempting to solve this problem would consume several years of my life, and even landed me a full time job for a year implementing text editing for a new operating system.
A rollicking and surprisingly political blog post takes us through a fascinating history, connecting 1860-era US bank note presses to the 80×24 terminal standard, passing through the Civil War, the US census, mechanical computers, punch cards, IBM, early display technology, VT100, ANSI, CP/M, and DOS along the way.
Rendering text, how hard could it be? As it turns out, incredibly hard! To my knowledge, literally no system renders text “perfectly”. It’s all best-effort, although some efforts are more important than others. Text rendering is, indeed, in the eye of the beholder, and often preferences revolve around what people are used to more than anything else. Still, displays with higher and higher DPI have taken some of the guesswork out of text rendering, but that doesn’t mean it’s a walk in the park now.
GUIs are bloatware. I’ve said it before. However, rather than just complaining about IDEs I’d like to provide an understandable guide to a much better alternative: the terminal. IDE stands for Integrated Development Environment. This might be an accurate term, but when it comes to a real integrated development environment, the terminal is a lot better. In this post, I’ll walk you through everything you need to start making your terminal a complete development environment: how to edit text efficiently, configure its appearance, run and combine a myriad of programs, and dynamically create, resize and close tabs and windows. I don’t agree with the initial premise, but an interesting article nonetheless.
At its annual Adobe Max conference, Adobe announced plans to bring a complete version of Photoshop to the iPad in 2019.
Photoshop CC for iPad will feature a revamped interface designed specifically for a touch experience, but it will bring the power and functionality people are accustomed to on the desktop.
This is the real, full Photoshop - the same codebase as the regular Photoshop, but running on the iPad with a touch UI. The Verge's Dami Lee and her artist colleagues got to test this new version of Photoshop, and they are very clear to stress that the biggest news here isn't even having the "real" Photoshop on the iPad, but the plans Adobe has for the PSD file format.
But the biggest change of all is a total rethinking of the classic .psd file for the cloud, which will turn using Photoshop into something much more like Google Docs. Photoshop for the iPad is a big deal, but Cloud PSD is the change that will let Adobe bring Photoshop everywhere.
This does seem to be much more than a simple cash grab, and I'm very intrigued to see if Adobe finally taking the iPad seriously as a computing platform will convince others to do so, too - most notably Apple.
GrafX2 is a bitmap paint program inspired by the Amiga programs Deluxe Paint and Brilliance. Specialized in 256-color drawing, it includes a very large number of tools and effects that make it particularly suitable for pixel art, game graphics, and generally any detailed graphics painted with a mouse.
The program is mostly developed on Haiku, Linux and Windows, but is also portable to many other platforms.
This program has been around since the early '90s, and runs, among other platforms, on Haiku today. Amazing.
There's something about the macOS operating system that kind of drives people wild. (Heck, even the original Mac OS has its strong partisans.) In the 17 years since Apple launched the first iteration of the operating system based on its Darwin Unix variant, something fairly curious started to happen: People without Macs suddenly wanted the operating system, if not the hardware it ran on. This phenomenon is somewhat common today - I personally just set up a Hackintosh of my own recently - but I'd like to highlight a different kind of "Hackintosh", the kind that played dress-up with Windows. Today's Tedium talks about the phenomenon of Mac skinning, specifically on Windows. Hide your computer's true colors under the hood.
I used to do this back in the early 2000s (goodness, I've been here way too long!). It was a fun thing to do, since you could never make it quite good enough - there was always something to improve. Good times.
Today (May 15, 2018) is the 30 year anniversary of CHI'88 (May 15-19, 1988), where Jack Callahan, Ben Shneiderman, Mark Weiser and I (Don Hopkins) presented our paper "An Empirical Comparison of Pie vs. Linear Menus". We found pie menus to be about 15% faster and with a significantly lower error rate than linear menus!
This article will discuss the history of what's happened with pie menus over the last 30 years (and more), present both good and bad examples, including ideas half baked, experiments performed, problems discovered, solutions attempted, alternatives explored, progress made, software freed, products shipped, as well as setbacks and impediments to their widespread adoption.
Fantastic read with fantastic examples. Set some time aside for this one - you won't regret it.
Look at this screenshot of MacPaint from the mid-1980s. Now look at this screenshot of a current version of Microsoft Excel for Mac. Finally, consider just how different the two applications actually are. The former is a 30-year-old black-and-white first-party application for painting while the latter is a current and unabashedly third-party application for creating spreadsheets. Yet despite having been created in very different decades for very different purposes by very different companies, these two very different applications still seem a part of the same thread. Anyone with experience in one could easily find some familiarity in the other, and while the creators of the Macintosh set out to build a truly consistent experience, there is only one significant piece of UX that these two mostly disparate applications share - the menu bar.
The lack of a menu bar in (most) touch applications is really what sets them apart from regular, mouse-based applications. It makes it virtually impossible to add more complex functionality without resorting to first-run onboarding experiences (terrible) or undiscoverable gestures (terrible). While menus would work just fine on devices with larger screens such as tablets and touch laptops - I use touch menus on my Surface Pro 4 all the time and they work flawlessly - the real estate they take up is too precious on smartphones.
If touch really wants to become a first-class citizen alongside the mouse and keyboard, developers need to let go of their fear of menus. Especially for more complex, productivity-oriented touch applications on tablets and touch laptops, menus are a perfectly fine UI element. Without them, touch applications will never catch up to their mouse counterparts.
Ending the year with a new release of the "desktop engine" Arcan and its reference desktop environment, Durden.
Arcan is a different take on how to glue the user-experience side of operating systems together. It has been in development for well over a decade, with the modest goals of providing a more secure, faster, safer and flexible alternative to both Xorg and terminal emulators, as well as encouraging research.
The latest release improves on areas such as crash resilience, Wayland client support, VR devices, OpenBSD support and visual goodies. You can read through the full release post, with some of the more technical bits in the related articles about crash-resilient Wayland compositing and "AWK" for multimedia.
I wonder if these rugged aesthetics, now commonplace in cutting-edge websites, can work at scale - in mobile apps used by more than a billion people. Instagram's new UI paved the way: can this effort be replicated in other categories (e.g. gaming)? Is brutalism a fad or the future of app design? Would it make apps more usable, easy to use and delightful? To end with, would it generate more growth? Conversion experts sometimes suggest that more text equals more engagement - what if we push this idea to the extreme?
There's something unsettling about these brutalist redesigns by Pierre Buttin - but I don't outright hate them. There's something very functional about them.