Benchmarks Archive

Intel Performance Strategy Team publishing intentionally misleading benchmarks

Today something happened that many may not have seen. Intel published a set of benchmarks showing the advantage of a dual Intel Xeon Platinum 9282 system over the AMD EPYC 7742. Vendors present benchmarks that flatter their own products from time to time. There is one difference in this case: we checked Intel's work and found that it presented numbers intentionally crafted to mislead would-be buyers about the company's performance relative to AMD. Intel is desperate, and it's really starting to show.

Is C++ fast?

A library that I work on often these days, meshoptimizer, has changed over time to use fewer and fewer C++ library features, up until the current state where the code closely resembles C even though it uses some C++ features. There have been many reasons behind the changes: dropping the C++11 requirement made sure anybody can compile the library on any platform, removing std::vector substantially improved the performance of unoptimized builds, and removing <algorithm> includes sped up compilation. However, I've never quite taken the leap all the way to C with this codebase. Today we'll explore the gamut of possible C++ implementations for one specific algorithm, a mesh simplifier, henceforth known as simplifier.cpp, and see if going all the way to C is worthwhile.
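As a concrete illustration of the std::vector point, consider the following minimal sketch (mine, not meshoptimizer code). In an unoptimized build, every element access in the first version goes through a non-inlined std::vector member function, while the raw-pointer version compiles to a plain indexed load; at -O2 the two are usually identical.

    // Not meshoptimizer code: a minimal sketch of the std::vector-removal idea.
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // C++-library style: convenient, but every v[i] is a function call at -O0.
    static float sum_vector(const std::vector<float>& v)
    {
        float s = 0.f;
        for (size_t i = 0; i < v.size(); ++i)
            s += v[i];
        return s;
    }

    // C style: pointer + explicit count; a plain indexed load even at -O0.
    static float sum_raw(const float* data, size_t count)
    {
        float s = 0.f;
        for (size_t i = 0; i < count; ++i)
            s += data[i];
        return s;
    }

    int main()
    {
        std::vector<float> v(1024, 1.f);
        printf("%f %f\n", sum_vector(v), sum_raw(v.data(), v.size()));
    }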

GeForce RTX 2080 Ti and RTX 2080 FE review

AnandTech benchmarked the new RTX graphics cards, and concludes:

So where does that leave things? For traditional performance, both RTX cards line up with current NVIDIA offerings, giving a straightforward point of reference for gamers. The observed performance delta between the RTX 2080 Founders Edition and GTX 1080 Ti Founders Edition is at a level achievable by the Titan Xp or overclocked custom GTX 1080 Tis. Meanwhile, NVIDIA mentioned that the RTX 2080 Ti should be equal to or faster than the Titan V, and while we currently do not have the card on hand to confirm this, the performance difference from when we reviewed that card is in line with NVIDIA's statements.

The easier takeaway is that these cards would not be a good buy for GTX 1080 Ti owners, as the RTX 2080 would be a sidegrade and the RTX 2080 Ti would be offering 37% more performance for $1200, a performance difference akin to upgrading to a GTX 1080 Ti from a GTX 1080. For prospective buyers in general, it largely depends on how long the GTX 1080 Ti will be on shelves, because as it stands, the RTX 2080 is around $90 more expensive and less likely to be in stock. Looking to the RTX 2080 Ti, diminishing returns start to kick in, where paying 43% or 50% more gets you 27-28% more performance.

Neither of the two new RTX cards seems to be a particularly smart purchase at this point - the 2080 barely performs any better than a 1080 Ti, and while the 2080 Ti does offer a decent performance improvement over the 1080 Ti, it's also $1200. You might want to wait and see if NVIDIA's raytracing efforts pay off and get adopted in video games, and if said raytracing features don't suck up too much performance.
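For concreteness, here is the back-of-the-envelope math behind those figures, assuming the launch prices (RTX 2080 at $699 MSRP/$799 FE, RTX 2080 Ti at $999 MSRP/$1199 FE - my assumption, not from the review) and treating the 2080 Ti as roughly 27% faster than the 2080:

    // Sketch of the price/performance arithmetic; prices are assumed launch
    // prices, and performance is normalized to the RTX 2080.
    #include <cstdio>

    int main()
    {
        const double perf_2080 = 1.00, perf_2080ti = 1.27; // ~27% faster

        const double msrp[2] = { 699.0, 999.0 };  // 999/699  -> +43% cost
        const double fe[2]   = { 799.0, 1199.0 }; // 1199/799 -> +50% cost

        printf("cost increase (MSRP): %+.0f%%\n", (msrp[1] / msrp[0] - 1) * 100);
        printf("cost increase (FE):   %+.0f%%\n", (fe[1] / fe[0] - 1) * 100);

        // Performance per dollar drops to ~0.89x (MSRP) or ~0.85x (FE):
        // the "diminishing returns" the review describes.
        printf("perf/$ ratio (MSRP): %.2fx\n",
               (perf_2080ti / msrp[1]) / (perf_2080 / msrp[0]));
        printf("perf/$ ratio (FE):   %.2fx\n",
               (perf_2080ti / fe[1]) / (perf_2080 / fe[0]));
    }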

Huawei, Honor caught cheating on benchmarks

Does anyone remember our articles regarding unscrupulous benchmark behavior back in 2013? At the time we called the industry out on the fact that most vendors were increasing thermal and power limits to boost their scores in common benchmark software. Fast forward to 2018, and it is happening again.

Companies lie. They lie all the time. As with anything related to performance measurements and comparisons: wait for trusted third-party benchmarks from places like AnandTech and GamersNexus. Company-provided figures range from unrealistic best-case scenarios at best to downright lies at worst.

Performance of the 8088 on PC, PCjr, and Tandy 1000

It's well known that you should measure the performance of your code, and not rely solely on the documented "cycle counts" for each opcode.

But how fast is an IBM PC 5150 compared to a PCjr? Or to a Tandy 1000? Or how fast is the Tandy 1000 HX in fast mode (7.16 MHz) compared to the slow mode (4.77 MHz)? Or how fast is a nop compared to a cwd?

I created a test (perf.asm) that measures the performance of different opcodes and ran it on different Intel 8088 machines. I ran the test multiple times just to make sure the results were stable enough. All interrupts were disabled, except the timer (of course), and on the PCjr the NMI was disabled as well.
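perf.asm itself is 8088 assembly, but the methodology translates to any platform: time a large batch of the operation under test, subtract the empty-loop overhead, and repeat the runs until the numbers settle. A rough portable C++ sketch of that approach (illustrative only; the stand-in workload and names are mine):

    // Illustration of the measure-and-subtract-overhead technique (not perf.asm).
    #include <chrono>
    #include <cstdio>

    volatile long long sink; // keeps the compiler from deleting the work

    // Time `iters` loop iterations, optionally doing the operation under test.
    static double time_loop(long long iters, bool do_work)
    {
        auto t0 = std::chrono::steady_clock::now();
        for (volatile long long i = 0; i < iters; i = i + 1)
            if (do_work)
                sink = sink + 1; // stand-in for the opcode being measured
        auto t1 = std::chrono::steady_clock::now();
        return std::chrono::duration<double>(t1 - t0).count();
    }

    int main()
    {
        const long long iters = 100000000;
        for (int run = 0; run < 3; ++run) // multiple runs: results should be stable
        {
            double overhead = time_loop(iters, false); // empty-loop cost
            double total    = time_loop(iters, true);  // loop + operation
            printf("run %d: ~%.2f ns/op\n", run, (total - overhead) / iters * 1e9);
        }
    }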

There's no point in any of these benchmarks, but that doesn't make them any less interesting.

iPhone 8 is world’s fastest phone (it’s not even close)

The "Bionic" part in the name of Apple's A11 Bionic chip isn't just marketing speak. It's the most powerful processor ever put in a mobile phone. We've put this chip to the test in both synthetic benchmarks and some real-world speed trials, and it obliterates every Android phone we tested.

As far as SoCs go, Apple is incredibly far ahead of Qualcomm and Samsung. These companies have some serious soul-searching to do.

I can't wait for AnandTech to dive into the A11 Bionic, so we can get more detail than just people comparing Geekbench scores.

Intel Core i9-7900X review

Intel's latest 10-core, high-end desktop (HEDT) chip - the Core i9-7900X - costs £900/$1000. That's £500/$700 less than its predecessor, the i7-6950X. In previous years, such cost-cutting would have been regarded as generous. You might, at a stretch, even call it good value. But that was at a time when Intel's monopoly on the CPU market was at its strongest, before a resurgent AMD laid waste to the idea that a chip with more than four cores should be reserved for those with the fattest wallets.

AMD's Ryzen is far from perfect. But when you can buy eight cores that serve even the heaviest of multitaskers and content creators for well under half the price of an Intel HEDT chip, i9 and X299 are a hard sell (except, perhaps, to fussy gamers that demand a no-compromises system).

The question is: Are you willing to pay a premium for the best performing silicon on the market? Or is Ryzen, gaming foibles and all, good enough?

I've said this countless times, but I want to keep bringing this one home: this is what competition does. It lowers prices, improves performance, and makes Intel look like a stumbling fool. And what better day to celebrate the benefits of competition than today?

Cheers, America. Party safe!

Google Now vs. Siri vs. Cortana

So there you have it. As of October 4, Google Now has a clear lead in terms of the sheer volume of queries it can address, and it answers those queries more accurately than either Siri or Cortana. All three parties will keep investing in this type of technology, but the cold hard facts are that Google is progressing the fastest on all fronts.

Not surprising, really, considering Google's huge information lead. Still, I have yet to find much use for these personal assistants - I essentially only use Google Now to set alarms and do simple Google queries, but even then only the English ones that do not contain complicated names.

The state of cheating in Android benchmarks

With the exception of Apple and Motorola, literally every single OEM we've worked with ships (or has shipped) at least one device that runs this silly CPU optimization. It's possible that older Motorola devices might've done the same thing, but none of the newer devices we have on hand exhibited the behavior. It's a systemic problem that seems to have surfaced over the last two years, and one that extends far beyond Samsung.

Pathetic, but this has been going on in the wider industry for as long as I can remember - graphics chip makers come to mind, for instance. Still, this is clearly scumbag behaviour designed to mislead consumers.

On the other hand, if you buy a phone based on silly artificial benchmark scores, you deserve to be cheated.

Real world comparison: GC vs. manual memory management

"During the 4th Semester of my studies I wrote a small 3d spaceship deathmatch shooter with the D-Programming language. It was created within 3 Months time and allows multiple players to play deathmatch over local area network. All of the code was written with a garbage collector in mind and made wide usage of the D standard library phobos. After the project was finished I noticed how much time is spend every frame for garbage collection, so I decided to create a version of the game which does not use a GC, to improve performance."

Web Browser Grand Prix VI: Chrome 13, Firefox 6, Mac OS X Lion

The latest browser benchmarks are in... again - seems like there's a new one every week. This is one of the best "browser battle" articles though. Chrome 13, Firefox 6, IE9, Opera 11.50, and Safari 5.1 are put through 40-something tests on both Windows 7 and Mac OS X Lion. As a PC guy I was pretty impressed with the performance of Safari on OS X, and the reader feature looks awesome too. The author also uncovered a nasty Catalyst bug that makes IE9 render pages improperly and freeze up under heavy loads of tabs. The tables at the end pinpoint the strengths and weaknesses of each browser, which is nicer than a 1-10 or star rating. Good article, and thorough.

Test Driving GNU Hurd, with Benchmarks Against Linux

Phoronix has conducted some preliminary benchmarks, comparing Debian GNU/Hurd to Debian GNU/Linux. "There was only a handful of tests that could be successfully run under Debian GNU/Hurd and in those results the numbers were generally close, though Debian GNU/Linux was running about 4% faster in some and with the MP3 encoding the Linux OS was nearly 20% faster. Debian GNU/Hurd is an interesting project but for now its support is still in shambles, the hardware support is vastly outdated, and there is also no SMP support at this time. Regardless, it will be interesting to see how Debian GNU/Hurd turns out for the 7.0 Wheezy milestone."

GCC 4.6, LLVM/Clang 2.9, DragonEgg Benchmarks

"Version 4.6 of GCC was released over the weekend with a multitude of improvements and version 2.9 of the Low-Level Virtual Machine is due out in early April with its share of improvements. How though do these two leading open-source compilers compare? In this article we are providing benchmarks of GCC 4.5.2, GCC 4.6.0, DragonEgg with LLVM 2.9, and Clang with LLVM 2.9 across five distinct AMD/Intel systems to see how the compiler performance compares."

WebM, H264: Encoder Speed Benchmark

A comment on the recent article about the Bali release of Google's WebM tools (libvpx) claimed that one of the biggest problems facing the adoption of WebM video was the slow speed of the encoder compared to x264. This article sets out to benchmark the encoder against x264 to see if this is indeed true and, if so, how significant the speed difference really is.

Compiler Benchmarks of GCC, LLVM-GCC, DragonEgg, Clang

"LLVM 2.8 was released last month with the Clang compiler having feature-complete C++ support, enhancements to the DragonEgg GCC plug-in, a near feature-complete alternative to libstdc++, a drop-in system assembler, ARM code-generation improvements, and many other changes. With there being great interest in the Low-Level Virtual Machine, we have conducted a large LLVM-focused compiler comparison at Phoronix of GCC with versions 4.2.1 through 4.6-20101030, GCC 4.5.1 using the DragonEgg 2.8 plug-in, LLVM-GCC with LLVM 2.8 and GCC 4.2, and lastly with Clang on LLVM 2.8."

Workstation Benchmarks: Windows 7 vs. Ubuntu Linux

Here is the continuation of a series of comparison tests that is without doubt bound to cause a huge amount of controversy. There are performance wins and losses on both sides of the fence, but Ubuntu compares very well with Windows 7, and no doubt these tests indicate a much closer performance race than most people would have expected.

First Look: VP8 vs. H264

Now that Google has opened up VP8, the big question is obviously how it'll hold up to H264. Of course, VP8 already wins by default because it's open source and royalty free, but that doesn't mean we should neglect the quality issue. Jan Ozer from StreamingMedia.com has put up an article comparing the two codecs, and concludes that the differences are negligible - in fact, only in some high-motion videos did H264 win out. As always, this is just one comparison and most certainly anything but conclusive. Update: Another comparison. I can't spot the difference, but then again, I'm no expert.