Linked by Thom Holwerda on Sun 4th Sep 2016 00:10 UTC
BeOS & Derivatives

Haiku uses a custom vector image format to store icons. This is surprising both because most OSes consider bitmaps to be totally sufficient to represent icons and because there are plenty of vector graphics formats out there (e.g. SVG). The goal of the Haiku Vector Icon Format (HVIF) is to make vector icon files as small as possible. This allows Haiku to display icons at several sizes while still keeping the files small enough to fit into an inode (i.e., inside a file's metadata). The goal of keeping the icons in the metadata is to reduce the disk reads needed to display a folder - it also allows each file to require only one disk read to display.

This blog post examines the details of the HVIF format using a hex editor and the canonical parser's source code. In the process of dissecting an example icon, I'll also show you an optimization bug in the icon image editor.
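
For readers who want to poke at this themselves: a minimal sketch of what "icon in the metadata" means in practice, using Haiku's Storage Kit (this assumes the "BEOS:ICON" attribute Haiku uses for HVIF data, and trims error handling):

    // Sketch: read a file's vector icon straight out of its BFS attribute.
    #include <Node.h>       // BNode (Haiku Storage Kit)
    #include <fs_attr.h>    // attr_info
    #include <cstdint>
    #include <vector>

    std::vector<uint8_t> ReadVectorIcon(const char* path)
    {
        BNode node(path);
        attr_info info;
        if (node.InitCheck() != B_OK
            || node.GetAttrInfo("BEOS:ICON", &info) != B_OK)
            return {};                      // no vector icon on this file

        std::vector<uint8_t> data(info.size);
        // One read; a ~500-byte icon sits in the inode itself.
        if (node.ReadAttr("BEOS:ICON", info.type, 0, data.data(),
                data.size()) != (ssize_t)info.size)
            return {};
        return data;
    }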

Great article.

svgz?
by birdie on Sun 4th Sep 2016 05:17 UTC
birdie
Member since:
2014-07-15

Sounds like they were hell-bent on not using SVG + GZ compression, which many Linux distros already employ.

Also, smaller icons (16x16, 24x24, 32x32) are almost always manually painted and come as PNGs, because there are no good algorithms to downscale vector graphics to such tiny areas.

In short I still don't understand them ;-)

Reply Score: 2

RE: svgz?
by geleto on Sun 4th Sep 2016 06:49 UTC in reply to "svgz?"
geleto Member since:
2005-07-06

Actually the best way to draw small icons is to create them as vectors on a 16x16 grid, snapping the vertices and edges to the pixel centers.
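
A minimal sketch of that snapping (hypothetical helper; assumes one grid unit per pixel):

    // Snap a coordinate to the nearest pixel center so edges and
    // 1-pixel strokes land exactly on the pixel grid.
    #include <cmath>

    float SnapToPixelCenter(float coord)
    {
        return std::floor(coord) + 0.5f;    // e.g. 3.2 -> 3.5
    }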

Edited 2016-09-04 06:50 UTC

Reply Score: 3

RE: svgz?
by Sauron on Sun 4th Sep 2016 07:05 UTC in reply to "svgz?"
Sauron Member since:
2005-08-02

You're not on your own in not understanding things. The aim of Haiku is to be compatible with BeOS, but so many things are so different I'm surprised there's any compatibility at all!
I use both BeOS 5 and Haiku, and some things are different and won't work on the other. Still, it's early days yet and things do keep improving; here's hoping for a version 1 release before too long.

Reply Score: 2

RE[2]: svgz?
by Morgan on Sun 4th Sep 2016 23:50 UTC in reply to "RE: svgz?"
Morgan Member since:
2005-06-29

I've been following the project since its inception, and honestly with every year it just gets more and more difficult to maintain backward compatibility while still practicing good OS design and usability. I love BeOS, and I still say it was by far the greatest OS of its time. But, its time has passed, and in an alternate universe where Be Inc. survived and became a true rival to Microsoft, I could see it evolving into something like our Haiku. And I think the Haiku devs are just fine with being in such a place.

Reply Score: 1

RE: svgz?
by le_c on Sun 4th Sep 2016 07:09 UTC in reply to "svgz?"
le_c Member since:
2013-01-02

Here is some more info:

https://www.haiku-os.org/news/2006-11-06/icon_facts

Zeta had gzipped SVGs, but they were still too big...

Moreover, Haiku icons support different levels of detail (can SVG do that?):

https://www.haiku-os.org/docs/userguide/en/applications/icon-o-matic...

Reply Score: 2

RE[2]: svgz?
by Chrispynutt on Mon 5th Sep 2016 08:33 UTC in reply to "RE: svgz?"
Chrispynutt Member since:
2012-03-14

Theoretically you could create groups based on different LODs, then switch on the additional detail with CSS as they scale larger.

Although this looks far more elegant.

It does get me thinking though!

Reply Score: 2

RE: svgz?
by agentj on Sun 4th Sep 2016 07:10 UTC in reply to "svgz?"
agentj Member since:
2005-08-19

These resolutions are good if you use monitors made from dirt or watch it on your calculator ...

Reply Score: 1

RE[2]: svgz?
by Sauron on Sun 4th Sep 2016 08:48 UTC in reply to "RE: svgz?"
Sauron Member since:
2005-08-02

Which many in the undeveloped world do, almost! So all is good.

Reply Score: 3

RE[2]: svgz?
by nicubunu on Sun 4th Sep 2016 11:56 UTC in reply to "RE: svgz?"
nicubunu Member since:
2014-01-08

Icons are not only shown on your desktop; they can appear in menus, panels, toolbars, the notification area and so on, where space is limited.

Reply Score: 5

A bit confused
by Lennie on Sun 4th Sep 2016 11:30 UTC
Lennie
Member since:
2007-09-22

Are they storing the icon in the metadata of the file for which they want to display the icon, or in a general icon file?

I would expect a general icon file and not the file itself, because if it's the latter, how do you easily change the icon of all files of a file type without having to rewrite the metadata of a whole lot of files?

Reply Score: 2

RE: A bit confused
by looncraz on Sun 4th Sep 2016 21:56 UTC in reply to "A bit confused"
looncraz Member since:
2005-07-24

To not get too technical, imagine the file system as a series of spreadsheets - one sheet per folder, with nested sheets for subfolders. Each file is a row in a sheet, and the first cell of that row contains a fixed amount of data that is read just to get its basic details (name, size, dates).

Now, imagine, years later, you find that there is room inside this cell to place more data at no extra cost in terms of storage space or performance... but what to put there? What is the next most commonly used piece of data for a file? Answer: its icon. But icons are big... and new tech demands even larger icons...

So, the only way to make the icons fit was a new approach to representing the data. It was managed, and now the data fits in that same cell... basically giving free storage - and free loading. So when anyone needs the icon attached to a file, its data is already loaded by the file system... and if it's not needed, the fact that it was loaded has no negative impact.
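
A rough sketch of the idea in code (illustrative field names, not the real BFS on-disk layout):

    // The fixed fields rarely fill the inode's whole disk block; BFS
    // exposes the remainder as a "small data" area where tiny attributes
    // - such as a sub-kilobyte HVIF icon - ride along with the inode read.
    #include <cstdint>

    struct InodeBlock {
        char     name[256];
        int64_t  size;
        int64_t  createTime;
        int64_t  modifyTime;
        // ... block runs, permissions, attribute references, etc. ...
        uint8_t  smallData[1024];   // leftover space: inline attributes
    };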

Reply Score: 3

RE[2]: A bit confused
by Kochise on Mon 5th Sep 2016 06:31 UTC in reply to "RE: A bit confused"
Kochise Member since:
2006-03-03

Yes it has: needless repetition of icons of the same file type that could be cached instead.

In that spare space, I'd rather put an MD5 hash for file consistency checking, and search tags.

Reply Score: 3

RE[3]: A bit confused
by ahwayakchih on Mon 5th Sep 2016 08:06 UTC in reply to "RE[2]: A bit confused"
ahwayakchih Member since:
2006-03-22

Yes it has: needless repetition of icons of the same file type that could be cached instead.


That would be true only if every file in the system had its own, custom icon set. BeOS/Haiku does not do that, of course (unless the user has a lot of time and motivation to actually set a custom icon on every file and folder in the file system).

There are "global" icons per file type (with a fallback to a "generic" icon), and then any file or folder can have its own custom icon. An icon specific to a file or folder is stored in that file's metadata (BFS attributes), which is read when info about that file is read.
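
In other words, the lookup order is roughly this (hypothetical helper names, not Haiku's actual API):

    #include <cstdint>
    #include <string>
    #include <vector>

    using Icon = std::vector<uint8_t>;

    // Three sources, in priority order (declarations only, for illustration):
    Icon PerFileIcon(const std::string& path);  // custom icon in the file's inode
    Icon PerTypeIcon(const std::string& mime);  // "global" icon from the MIME database
    Icon GenericIcon();                         // last-resort fallback

    Icon IconFor(const std::string& path, const std::string& mime)
    {
        if (Icon icon = PerFileIcon(path); !icon.empty())
            return icon;
        if (Icon icon = PerTypeIcon(mime); !icon.empty())
            return icon;
        return GenericIcon();
    }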

Reply Score: 1

RE[3]: A bit confused
by phoudoin on Mon 5th Sep 2016 08:14 UTC in reply to "RE[2]: A bit confused"
phoudoin Member since:
2006-06-09

BeOS and Haiku have a database of MIME types, where generic icons are stored. These are cached by the system and, obviously, the same icon for a generic file type is not stored over and over in each inode.

For a specific file's icon, like an application binary for instance (each of which has a distinct MIME type, known as its application MIME signature, BTW), the icon is stored only in that file's inode, in a compressed binary vector format.

And for orthogonality's sake, the MIME type database is itself stored as a hierarchy of files (text/plain is a file named "plain" stored under a .../mimetypes/text subfolder) with some metadata, rules for auto-detection and, you guessed it, the MIME-specific icon stored in the "plain" file's inode.
To change the default text/plain icon, only one HVIF icon must be updated, not every text/plain file's icon, which would be crazy! And that would work only for files on a BFS file system, which would be even crazier...

Reply Score: 2

SVG
by nicubunu on Sun 4th Sep 2016 11:42 UTC
nicubunu
Member since:
2014-01-08

I expect one issue with SVG to be complexity: SVG is a complex standard, with a lot of features. If the OS is supposed to use SVG icons, then it may have to support a lot of those features, leading to a complex (and slow) SVG renderer.
Also, regarding size: I use Inkscape to create SVG images, and it can save in two types of SVG (perhaps other editors work the same). One is "Inkscape SVG", which preserves a lot of metadata, useful when your image is to be further edited, and the other is "Plain SVG", which renders identically but has a smaller file size.
Still, as defined in its standard, an SVG file will have a lot of headers and metadata, which for a simple image can be really big. If one knows precisely how the image was created and how it is to be used, one can discard such headers and metadata and save space. Sure, it would result in broken SVG images, but that's not a big issue if the usage scope is limited.

Reply Score: 2

The clue is in the quote
by kwan_e on Sun 4th Sep 2016 11:57 UTC
kwan_e
Member since:
2007-02-18

So many people questioning the reasons behind the approach when the answer was already in the summary:

This allows Haiku to display icons at several sizes while still keeping the files small enough to fit into an inode


https://en.wikipedia.org/wiki/Inode, for those who don't know what an inode is.

Reply Score: 6

RE: The clue is in the quote
by teco.sb on Sun 4th Sep 2016 13:34 UTC in reply to "The clue is in the quote"
teco.sb Member since:
2014-05-08

So many people questioning the reasons behind the approach when the answer was already in the summary:

"This allows Haiku to display icons as several sizes while still keeping the files small enough to fit into an inode

"
While I saw that statement, it doesn't make much sense. It's equivalent to saying, "I want to jump off an airplane because I like the feeling of wind blowing on my face." While you do get what you're looking for, there are other, less risky ways of achieving that, like rolling the windows down on your car, standing in front of a fan, etc. The same applies here. There are a few different ways to get a small icon size. But why do ICON files, out of all the other ones out there, need to be stored in a single inode or in the file metadata?

The real reason to me, reading the article, seems to be:
The goal of keeping the icons in the metadata is to reduce the disk reads needed to display a folder - it also allows each file to require only one disk read to display.

But is that really a problem? Are disk reads so slow that we need a whole new file format to deal with icons? No other graphical OS I've ever heard of seems to have this "slow disk read" problem. Plus, it's not like icons are constantly accessed. If you have multiple files with the same format in a folder, it's not like you're going to have multiple file accesses... you just cache the icon in memory and display it however many times you need.

This kind of reeks of over-optimization.


Reply Score: 2

Earl C Pottinger Member since:
2008-07-12

The answer to that is yes!

I have watched Windows and Linux take a number of seconds to fill in their icons at times. Not all the time; it just seems to happen when you need the system the soonest ;)

On Haiku the icons just appear - no waiting. If you have an SSD like me, the speed difference between Haiku and other OSes seems not to exist, but access that flash drive for the first time today and sometimes the icons will take a long time to appear when you are not using Haiku.

I know the icons are not stored on the flash drive, but something is slowing down icon drawing in both Windows and Linux.

Edited 2016-09-04 14:01 UTC

Reply Score: 2

RE[3]: The clue is in the quote
by teco.sb on Sun 4th Sep 2016 15:13 UTC in reply to "RE[2]: The clue is in the quote"
teco.sb Member since:
2014-05-08

But that's mostly because most advanced file managers do more than just display the file-type icon. At least, Windows Explorer and Nautilus try to display more than just the file-type icon. They produce a thumbnail view, which is likely what takes the most time, since the file needs to be opened and read, and a thumbnail created and added to a database. I have Nautilus set up to not create thumbnails, and accessing folders is seamless, even on slow USB sticks.

Obviously, having more "functionality" is going to require more code, requiring more time to process.

Reply Score: 2

RE[3]: The clue is in the quote
by cdude on Tue 6th Sep 2016 05:50 UTC in reply to "RE[2]: The clue is in the quote"
cdude Member since:
2008-09-21

It's the same with Symbian, which also embeds thumbnails into images (metadata) to minimize seek time.

Reply Score: 2

RE[2]: The clue is in the quote
by kwan_e on Sun 4th Sep 2016 16:55 UTC in reply to "RE: The clue is in the quote"
kwan_e Member since:
2007-02-18

No other graphical OS I've ever heard of seems to have this "slow disk read" problem.


This being Haiku, maybe they can't move as fast as the development of other graphical OSes, and so are targeting older, more constrained systems where slow disk reads are a problem?

Plus, it's not like icons are constantly accessed. [...] you just cache the icon in memory and display it however many times you need.


Unless you're on a memory constrained system where you can't cache even a sizeable percentage of recent files visited.

Furthermore, maybe developing a highly custom (probably small) processing library means they don't have to drag in dependencies like SVG or PNG libraries when building the filesystem driver.

And actually, in some desktop environments, folder icons are used for thumbnails, which may be updated on the fly. Having them fit inside an inode structure makes updates faster.

This kind of reeks of over-optimization.


Maybe, but it's good to try things out to see where it may lead for other metadata.

Reply Score: 2

RE[2]: The clue is in the quote
by Alfman on Sun 4th Sep 2016 17:14 UTC in reply to "RE: The clue is in the quote"
Alfman Member since:
2011-01-28

teco.sb,

While I saw that statement, it doesn't make much sense. It's equivalent to saying, "I want to jump off an airplane because I like the feeling of wind blowing on my face." While you do get what you're looking for, there are other, less risky ways of achieving that, like rolling the windows down on your car, standing in front of a fan, etc. The same applies here. There are a few different ways to get a small icon size. But why do ICON files, out of all the other ones out there, need to be stored in a single inode or in the file metadata?



It makes sense when you consider disk access time. Today indirection is less expensive on SSDs, but before them, disk latency was a nearly universal performance bottleneck. On a conventional disk a seek might take 9-15 ms, incurred for every indirection. That could add over a second of overhead per hundred files. Every level of indirection that can be eliminated makes it that much faster. So this seems like a clever way to improve performance.
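
To put rough numbers on that claim (illustrative figures only, assuming a ~12 ms average seek on a period hard disk):

    // Back-of-the-envelope: one extra seek per file, across a folder.
    #include <cstdio>

    int main()
    {
        const double seekMs = 12.0;                 // assumed average seek time
        const int    files  = 100;
        std::printf("%.0f ms\n", seekMs * files);   // 1200 ms - over a second
        return 0;
    }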

Of course there would have been other ways to solve the problem and if I were to tackle it myself I'd probably do something like adding a cache entry to each directory (something like materialized views in the database world).


Edit: I don't know how many people used BFS on CDROMs, but this could have been useful on that media too.

Edited 2016-09-04 17:29 UTC

Reply Score: 3

RE[3]: The clue is in the quote
by malxau on Sun 4th Sep 2016 18:17 UTC in reply to "RE[2]: The clue is in the quote"
malxau Member since:
2005-12-04

It makes sense when you consider disk access time... On a conventional disk a seek might take 9-15 ms, incurred for every indirection... Every level of indirection that can be eliminated makes it that much faster.


It makes that one aspect faster, at the cost of spending more CPU to rasterize the vector format. In the beginning, icons were uncompressed bitmaps, which was really about minimizing CPU. Obviously these tradeoffs change over time, and today fewer reads at the expense of CPU makes more sense.

But teco.sb nailed it with the thumbnail comment. This discussion is confusing two things: generic icons, per type, which are easily cached and don't need to be stored per file at all, so there are no reads to eliminate in directory listings; and thumbnails, which are intrinsically per file and need to be read for every file. But what's missing in the latter case is that these also need to be generated by reading the contents of files, which is the slow part, and it's unclear what it means to generate a vectorized thumbnail for a jpeg/pdf/docx file anyway. For something like docx, it'd be possible to generate a perfectly scalable vector, but it would be big; or it could be approximate, but that means it won't look nice at arbitrary resolutions anyway. Take your pick.

I suspect the motivation here is for file types with embedded icons - mainly executables. But is that the common case? And if the file had an embedded icon, some process still needs to pull it out of the file and put it into the inode, so it's akin to a thumbnail generation process, and the question becomes when that extraction occurs.

Engineering is tradeoffs. Something gets better by making something else worse, and the real skill is knowing which cases are the most common, and the best candidates for making "better."

Reply Score: 4

RE[4]: The clue is in the quote
by Alfman on Sun 4th Sep 2016 19:06 UTC in reply to "RE[3]: The clue is in the quote"
Alfman Member since:
2011-01-28

malxau,

I suspect the motivation here is for file types with embedded icons - mainly executables. But is that the common case? And if the file had an embedded icon, some process still needs to pull it out of the file and put it into the inode, so it's akin to a thumbnail generation process, and the question becomes when that extraction occurs.


Does Haiku use embedded icons or does it use dedicated icon files? I got the impression that storing icon files inside of inodes is merely a byproduct of having small icon files rather than having a special process to achieve this. Although someone more familiar with Haiku would have to confirm.

Reply Score: 2

RE[2]: The clue is in the quote
by axeld on Wed 7th Sep 2016 08:03 UTC in reply to "RE: The clue is in the quote"
axeld Member since:
2005-07-07

But is that really a problem? Are disk reads so slow that we need a whole new file format to deal with icons? No other graphical OS I've ever heard of seems to have this "slow disk read" problem. Plus, it's not like icons are constantly accessed. If you have multiple files with the same format in a folder, it's not like you're going to have multiple file accesses... you just cache the icon in memory and display it however many times you need.

This kind of reeks of over-optimization.


It's just a different kind of optimization; Windows, for example, has an on disk icon cache for this purpose.

This has the downside that it doesn't notice if an icon changes on disk; if an icon of an application changes after an update, you'll need to manually empty the icon cache to make it visible.
Also, if that icon cache is actually empty, or Windows decides to rebuild it, icons will be slow to appear.

On Haiku, such a cache isn't necessary, as the system is fast enough to render the icons when needed.

Also, inventing a new icon vector format has other benefits, like greatly reducing complexity compared to SVG, introducing features like level-of-detail, snap-to-grid for making the icons extra sharp (like hinting for fonts -- unfortunately, the author of that article decided to resize the images, so the sharpness of the vector icons is completely gone), etc.

Edited 2016-09-07 08:05 UTC

Reply Score: 1

RE: The clue is in the quote
by nicubunu on Mon 5th Sep 2016 13:39 UTC in reply to "The clue is in the quote"
nicubunu Member since:
2014-01-08

Questioning is legit, since the solution has its drawbacks: I imagine it is very difficult to implement icon theme switching and user-selectable custom icons, and an icon designer is limited in the authoring tools available. It's a trade-off, and a trade-off is always open to questioning.

Reply Score: 2

Overengineering is a real word.
by hdjhfds on Sun 4th Sep 2016 16:58 UTC
hdjhfds
Member since:
2013-08-19

You should check it out, really.

Reply Score: 1

Thom_Holwerda Member since:
2005-06-29

You should check it out, really.


And that's why programmers and coders tend to crank out crap software at an alarming rate.

Good software doesn't exist, and this attitude is the reason.

Reply Score: 1

Pro-Competition Member since:
2007-08-20

I have to say, "under-engineering" is at least as big a problem as "over-engineering". I can't begin to count the number of times I've run into code that was written without any thought to future expansion. The inevitable result is hacked-on kludges, which become nearly impossible to maintain, to cover use-cases that could easily have been foreseen with a little thought.

I admit to tending toward the "over-engineering" side. I always try to think of directions the requirements might go in the future, and make allowances for painless expansion. If it is never used, then I have "over-engineered" slightly. But if it is, I have saved myself (and/or a future developer) a lot of time and grey hair, and the company money and embarrassment.

It is definitely a balance, but over time, one gets a feel for the general directions things tend to go.

To bring it back on topic, I like the balance that was struck in the HVIF file format. It has enough optimization options to meet its mission, but not so many that it becomes burdensome to understand. The number of commands, flags and options seems reasonable. Custom number storage formats might seem excessive, but as long as the code/result is simple and predictable, and the space savings are real (which they seem to be), I have no problem with it.
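
For a taste of those number formats: here is a sketch of the coordinate encoding the article walks through (a re-implementation from its description, so treat the constants as unverified). One byte covers small integer coordinates; a second byte kicks in when the high bit is set:

    #include <cstdint>

    // Decode one HVIF coordinate, advancing the read pointer.
    // 1-byte form: integers -32..95. 2-byte form (high bit set):
    // -128..~193 in steps of 1/102.
    float ReadCoord(const uint8_t*& p)
    {
        uint8_t b = *p++;
        if (b & 0x80) {
            uint16_t v = uint16_t(b & 0x7f) << 8 | *p++;
            return float(v) / 102.0f - 128.0f;
        }
        return float(b) - 32.0f;
    }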

BTW, this was a very interesting low-level, hands-on article. Please keep them coming as you find them!

Reply Score: 2

kwan_e Member since:
2007-02-18

"You should check it out, really.


And that's why programmers and coders tend to crank out crap software at an alarming rate.

Good software doesn't exist, and this attitude is the reason.
"

No, it isn't. Every well-thought-out design decision has opportunity costs. Design decisions that seem good at the time may turn out to be bad - like X11 - and it takes engineering work to get away from them.

There's only one reason why good software can't exist - evolution by natural selection. Evolution by natural selection creates irreducible complexity, and nothing can prevent that.

http://www.talkorigins.org/faqs/comdesc/ICsilly.html

Add a part. Then make it necessary. That's all you need. Trying to prevent making things necessary simply adds a layer of abstraction, which itself becomes necessary. It becomes even more necessary if such a layer becomes a standard and has to be supported over time.

Reply Score: 2

dpJudas Member since:
2009-12-10

There's only one reason why good software can't exist - evolution by natural selection. Evolution by natural selection creates irreducible complexity, and nothing can prevent that.

Such nonsense. While it is true you can't remove the top of the bridge after you removed the stone below, you can easily do so if you add some temporary support at that spot while rearranging the structure.

Reply Score: 2

kwan_e Member since:
2007-02-18

"There's only one reason why good software can't exist - evolution by natural selection. Evolution by natural selection creates irreducible complexity, and nothing can prevent that.

Such nonsense. While it is true you can't remove top of the bridge after you removed the stone below, you can easily do so if you add some temporary support at that spot while rearranging the structure.
"

The only one spouting nonsense here is you. The example doesn't even talk about removing the "top of the bridge". It talks about removing any part of the structure, with the structure losing its specified function when it does so.

When you add temporary support, that becomes a NECESSARY part of the structure while the operation is being carried out. You haven't removed the irreducible complexity. You only rearranged it, as you said. And for that time span when it's being rearranged, it has lost its specified function.

Now in the world of software development, if you have to support a released version, you can't add a temporary support structure. The released version exists. You can only add a temporary support structure in the development version, which, once released, would likely break previous versions. Either way, you just added to the complexity, not removed it.

You obviously didn't even consider the implications of the Mullerian Two-Step properly.

Reply Score: 2

dpJudas Member since:
2009-12-10

When you add temporary support, that becomes a NECESSARY part of the structure while the operation is being carried out. You haven't removed the irreducible complexity. You only rearranged it, as you said. And for that time span when it's being rearranged, it has lost its specified function.

What the premise forgets to take into account is that temporary support allows for the original complexity to be altered/become irrelevant. As an example, in biology, something like 95% of our DNA is dormant doing nothing. That is PRIOR complexity no longer needed in order to describe a modern human.

Now in the world of software development, if you have to support a released version, you can't add a temporary support structure. The released version exists. You can only add a temporary support structure in the development version, and once released, would likely break previous versions. Either way, you just added to the complexity, not removed it.

Not true. If we take the evolution of Windows as an example, there once was a complexity called 16-bit Windows that no longer exists. Likewise in biology we once had a tail, we didn't need it anymore, and now it is no more.

You obviously didn't even consider the implications of the Mullerian Two-Step properly.

Oh I considered it alright and see endless examples out there (both in engineering and biology) that show it to be wrong.

Reply Score: 2

kwan_e Member since:
2007-02-18

What the premise forgets to take into account is that temporary support allows for the original complexity to be altered/become irrelevant.


So what? Whether the original complexity can be altered or become irrelevant is beside the point of the premise.

The point is the original function has become altered or irrelevant. Once the original function has become altered or made irrelevant, it no longer serves the ORIGINAL PURPOSE. That's what makes the original function irreducibly complex. This is what the Mullerian two-step is about, but you keep missing it.

You seem to think irreducible complexity is about "not being able to change". Irreducible complexity is about not being able to take things away without breaking something. THAT is what causes the problems. It's not that things can't change. It's that things can't change without breaking the ORIGINAL thing.

As an example, in biology, something like 95% of our DNA is dormant doing nothing. That is PRIOR complexity no longer needed in order to describe a modern human.


No. A lot of non-coding DNA is there for structural and epigenetic purposes. Of the remaining junk DNA that's truly junk.

Not true. If we take the evolution of Windows as an example, there once was a complexity called 16-bit Windows that no longer exists. Likewise in biology we once had a tail, we didn't need it anymore, and now it is no more.


Your example doesn't prove your point. Instead, it disproves it. The mess that went into slowly taking out the 16-bit Windows part still left an imprint on the design of Windows for a long time. Same goes for Win32, which is an even harder job. Windows is one of the best examples proving my point regarding Thom's argument - irreducible complexity makes things extremely hard to change and leaves an imprint.

And no, our tail isn't gone. The genes are still there. They're just not fully expressed, and some people do have a vestigial tail of varying lengths. We have an appendix that's no longer used for digestion. Whales can grow vestigial legs.

"You obviously didn't even consider the implications of the Mullerian Two-Step properly.

Oh I considered it alright and see endless examples out there (both in engineering and biology) that shows it to be wrong.
"

Only because you completely misunderstand the point of the Mullerian two-step. And you keep getting simple facts wrong.

Reply Score: 2

dionicio Member since:
2006-07-12

Multi-environmental species also tend to have a larger amount of non-expressing DNA.

[Just appending that both strategies coexist and predate].

Reply Score: 2

kwan_e Member since:
2007-02-18

Seems I didn't finish this thought:

No. A lot of non-coding DNA is there for structural and epigenetic purposes. Of the remaining junk DNA that's truly junk.


Of the remaining junk DNA that's truly junk, the fact that it is still there shows the complexity still leaves an imprint long after its usefulness is gone. Rather than getting rid of the junk, it is simply much safer to leave it there. Removing it may cause more damage than the cost of having to replicate it - and replicating it is cheap because it doesn't have the same fidelity requirements as non-junk DNA.

And that is software development in a nutshell. No one intends for these things to happen. They just do.

Reply Score: 2

Bill Shooter of Bul Member since:
2006-07-14

Absolutely not true. Thom, I respect your opinion on most things, but on software engineering, I think you're out of your realm of understanding.

Over-engineering leads to crazy parts of the software which do things that no one really needs, while neglecting common features that everyone will need. Classic Mac OS had a bunch of neat UI things, while the core was pretty rotten. The engineering dollars would have been better spent in the underbelly of the beast. It's why OS X is so successful, while the predecessor was not, despite the fact that classic had better gee-whiz UI parts.

Reply Score: 2

Megol Member since:
2011-04-11

Absolutely not true. Thom, I respect your opinion on most things, but on software engineering, I think you're out of your realm of understanding.

Over-engineering leads to crazy parts of the software which do things that no one really needs, while neglecting common features that everyone will need. Classic Mac OS had a bunch of neat UI things, while the core was pretty rotten. The engineering dollars would have been better spent in the underbelly of the beast. It's why OS X is so successful, while the predecessor was not, despite the fact that classic had better gee-whiz UI parts.


But this case isn't about over-engineering; it is about a good solution to a real problem. And about Mac OS - Apple tried to update the core, several times in fact.

Reply Score: 1

Try The "Icons" Screensaver
by Pro-Competition on Sun 4th Sep 2016 17:28 UTC
Pro-Competition
Member since:
2007-08-20

For me, the most impressive demonstration of the vector icons is in the "Icons" screensaver. It draws random icons in random sizes, some larger than 128x128, and they all look great at all sizes.

Reply Score: 2

RE: Try The "Icons" Screensaver
by alphaseinor on Sun 4th Sep 2016 20:05 UTC in reply to "Try The "Icons" Screensaver"
alphaseinor Member since:
2012-01-11

That is of course the definition of a vector graphic: it scales well.

Some of the commenters in this thread don't realize this is a 10+ year old file format. It doesn't matter what size it is drawn at, 16x16 or 256x256 - it scales exactly the way it should thanks to parametrics and geometry, not like a bitmap.

I forgot who wrote the format, but even at the time there were a lot of people who thought this approach was very elegant for tiny files.

Back in 2006, Linux used bitmaps, Mac used PDF, and Haiku used HVIF... lord only knows what Windows was doing at that time, as I remember waiting for icons to paint on the screen... if they were bitmap-based, they had no optimization.

Edited 2016-09-04 20:07 UTC

Reply Score: 1

phoudoin Member since:
2006-06-09

HVIF was designed by Stephan Aßmus.

Reply Score: 2

nicubunu Member since:
2014-01-08

Actually, a vector icon can look awfully blurry at small sizes like 16x16 if it was not designed with that resolution in mind, and it can look awfully simplistic at 256x256 if it was designed for 16x16.

Reply Score: 2

phoudoin Member since:
2006-06-09

HVIF supports levels of detail for this purpose: some shapes won't be drawn below a certain size, some only above a certain size...
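
Conceptually something like this (illustrative names, not Haiku's actual structures):

    // Each shape carries a visibility range; the renderer skips any
    // shape whose range excludes the current rendering scale.
    struct ShapeLOD {
        float minScale;   // hide when rendered smaller than this
        float maxScale;   // hide when rendered larger than this
    };

    bool ShapeVisibleAt(const ShapeLOD& lod, float renderScale)
    {
        return renderScale >= lod.minScale && renderScale <= lod.maxScale;
    }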

Reply Score: 2

Great article...
by dionicio on Mon 5th Sep 2016 00:17 UTC
dionicio
Member since:
2006-07-12

Actually, a lesson on Haiku. Good open software is Haiku, philosophically.

Reply Score: 3

RE: Great article...
by dionicio on Mon 5th Sep 2016 00:34 UTC in reply to "Great article..."
dionicio Member since:
2006-07-12

"I like playing with Haiku; after trying a number of less-popular-than-Linux operating systems, it’s the most stable and usable of the ones I tried. It’s also different enough from Linux to have an interesting point of view. This vector icon format is just one example of that."

Of course, being different enough, it lacks the whole toolchain that actually makes icon compiling such a 'breeze' on Linux.

Think the key word here is 'playing'. This is - without excuses - good for our minds.

Reply Score: 2

RE: Great article...
by dionicio on Mon 5th Sep 2016 00:48 UTC in reply to "Great article..."
dionicio Member since:
2006-07-12

Would love to see a working superset of HVIF. Good enough for a quotidian, quick 'sketch' editor ;)

Reply Score: 3

RE[2]: Great article...
by umccullough on Mon 5th Sep 2016 17:15 UTC in reply to "RE: Great article..."
umccullough Member since:
2006-01-26

The guy who developed the HVIF format was also one of the developers of WonderBrush (for BeOS, Zeta, and Haiku) - a vector image creation/editing tool.

Reply Score: 2

RE[2]: Great article...
by dionicio on Mon 5th Sep 2016 20:30 UTC in reply to "RE: Great article..."
dionicio Member since:
2006-07-12

"...One of the most important features of WonderBrush concerning its use for graphics design is the nonlinear and nondestructive editing. Basically, it means that you can adjust anything later on. Even though WonderBrush is an editor for bitmap graphics, the application doesn't edit pixel data, it generates it."

WonderBrush Editor seems to use vector data internally (which allows the 'nonlinear and nondestructive editing'), but doesn't seem to produce it.

Resource: http://www.yellowbites.com/wonderbrush.html

Reply Score: 2

RE[3]: Great article...
by umccullough on Mon 5th Sep 2016 22:07 UTC in reply to "RE[2]: Great article..."
umccullough Member since:
2006-01-26

It's been a while since I used it, but it does open/save HVIF and SVG files, IIRC.

On the other hand, I might be thinking of Icon-O-Matic (written by the same dev)... but I'm pretty sure WonderBrush has a proprietary vector format that it uses - and you can then export as bitmap (PNG, etc.).

Reply Score: 2

Remember the time ...
by DeepThought on Tue 6th Sep 2016 09:30 UTC
DeepThought
Member since:
2010-07-17

... the format is dated from.
It is from 2006 (at least the post is), so maybe even 2005.
At that time, disk speeds were much lower than today, and SSDs weren't even invented.

On the other hand: he did it because he could. I like this format and wish this "old school" programming (saving space and time) would come back.

Reply Score: 1

RE: Remember the time ...
by Megol on Tue 6th Sep 2016 12:55 UTC in reply to "Remember the time ..."
Megol Member since:
2011-04-11

... the format is dated from.
It is from 2006 (at least the post is), so maybe even 2005.
At that time, disk speeds were much lower than today, and SSDs weren't even invented.


Of course SSDs had been invented by 2006! Solid state drives are old technology, with RAM and bubble memory designs stretching back at least to the '60s. Even if we limit "SSD" to modern designs, the CompactFlash standard (flash memory, ATA interface) dates back to the '90s.


On the other hand: he did it because he could. I like this format and wish this "old school" programming (saving space and time) would come back.


While this format has advantages, it also has disadvantages: with a fast storage device, just reading a pre-rendered bitmap is faster. Using multiple pre-rendered bitmaps of different sizes enables higher quality, too.

With that said, I really do like vector icons and this format - a good solution for the problem at hand.

Reply Score: 1

RE[2]: Remember the time ...
by DeepThought on Tue 6th Sep 2016 13:48 UTC in reply to "RE: Remember the time ..."
DeepThought Member since:
2010-07-17

Of course SSDs had been invented by 2006! Solid state drives are old technology, with RAM and bubble memory designs stretching back at least to the '60s.


Ok, yes. I should have been more precise: large, fast flash-based SSDs were not available to the average you and me.
;-)


While this format has advantages, it also has disadvantages: with a fast storage device, just reading a pre-rendered bitmap is faster. Using multiple pre-rendered bitmaps of different sizes enables higher quality, too.


That's a point. If rendering takes longer than reading from slow media, the only benefit remaining is the size.

Edited 2016-09-06 13:52 UTC

Reply Score: 1

RE[3]: Remember the time ...
by Alfman on Tue 6th Sep 2016 15:11 UTC in reply to "RE[2]: Remember the time ..."
Alfman Member since:
2011-01-28

DeepThought,


Ok, yes. I should have been more precise: large, fast flash-based SSDs were not available to the average you and me.
;-)
...
That's a point. If rendering takes longer than reading from slow media, the only benefit remaining is the size.



SSDs are clearly much faster than spinning disks, but they're still one of the slowest parts of a computer.

The vast majority of the time, the CPU is twiddling its thumbs waiting for I/O. Consider a modern system with an Intel SSD with a 500µs access time and a 2.4GHz processor: the CPU will run through 1.2M cycles before the SSD returns any data.


Of course this is just academic; optimizing icons in this day and age is a bit silly! On the other hand, if we got rid of most indirection during bootup, it could help with load times, which are still far from optimal.

Reply Score: 2