Overall I am impressed with the PureDarwin project and have enjoyed conducting my research around it. They have achieved a lot, considering that the project is funded by community donations and run by volunteers. It definitely isn’t a production-ready system, but for developers it has the potential to be very useful. The PureDarwin team has managed to get MacPorts running on PureDarwin, allowing many software packages, such as Apache HTTPd, Git, and even XFCE, to be installed. This is non-trivial to achieve without strong networking support, but it shows the potential use cases of PureDarwin. The problem with Darwin is that you’re always at Apple’s whim: the company has a history of long delays between new macOS releases and the corresponding Darwin code dumps, didn’t include any ARM/iOS code for almost a decade, and the releases themselves come without any commit history or comments – they’re just big code dumps. I guess Darwin is interesting from an enthusiast’s perspective, but as far as Apple goes, they don’t really seem to care all that much about it, beyond scoring the occasional bit of good press.
Ubuntu 19.10 is unusual for an October Ubuntu release in that I would call it a must-have upgrade. While it retains some of the experimental elements Ubuntu’s fall releases have always been known for, the speed boosts to GNOME alone make this release well worth your time. If you prefer to stick with more stable releases, most of what’s new in 19.10 will eventually be backported to 19.04 and possibly even the last LTS release, 18.04. Still, unless you’re unflinchingly committed to the stability of LTS releases, I see no reason not to upgrade. As I said at the start, Ubuntu 19.10 is quite possibly the best release of Ubuntu Canonical has ever delivered. It’s well worth upgrading if you’re already an Ubuntu user, and it’s well worth trying even if you’re not. The speed improvements to GNOME are incredibly enticing. I’m a Mint/Cinnamon user, but this is definitely intriguing me.
A U.S.-based foundation overseeing promising semiconductor technology developed with Pentagon support will soon move to Switzerland after several of the group’s foreign members raised concerns about potential U.S. trade curbs. The nonprofit RISC-V Foundation wants to ensure that universities, governments and companies outside the United States can help develop its open-source technology, its Chief Executive Calista Redmond said in an interview with Reuters. Can’t blame them.
Four of our colleagues took a stand and organized for a better workplace. This is explicitly condoned in Google’s Code of Conduct, which ends: “And remember… don’t be evil, and if you see something that you think isn’t right — speak up.” When they did, Google retaliated against them. Today, after putting two of them on sudden and unexplained leave, the company fired all four in an attempt to crush worker organizing. Google hired a union-busting firm, so the olden days of the Pinkerton Detective Agency (“We never sleep”), the Homestead Strike, the Colorado Labor Wars, and the other late 19th and early 20th century battles between workers on one side and factory owners, the government, and private “security” agencies on the other seem to be back in swing. Not that it matters. Extremists will praise Google, centrists will excuse it away, and the rest will condemn Google but keep using Google Search and Android anyway – and Google knows it. In a corporatocracy, companies and their leaders are untouchable.
AnandTech reviews AMD’s latest and greatest. AMD has scored wins across almost all of our benchmark suite. In anything embarrassingly parallel it rules the roost by a large margin (except for our one AVX-512 benchmark). Single threaded performance trails the high-frequency mainstream parts, but it is still very close. Even in memory sensitive workloads, an issue for the previous generation Threadripper parts, the new chiplet design has pushed performance to the next level. These new Threadripper processors win on core count, on high IPC, on high frequency, and on fast memory. If you had told me three years ago that AMD were going to be ruling the roost in the HEDT market with high-performance 32-core processors on a leading-edge manufacturing node, I would have told you to lay off the heavy stuff. But here we are, and AMD isn’t done yet, teasing a 64-core version for next year. This is a crazy time we live in, and I’m glad to be a part of it. I need one of these for translating, posting to OSNews, and playing a few non-demanding games, right?
Yes, I know Wayland has made some controversial design choices. The fact is, Wayland is the only viable X11 successor, and it will hopefully bring more security and stability to the Linux desktop. Regardless of how it pans out, there’s nothing like a bit of competition to drive innovation. I won’t discuss any more politics in this post. Also, a disclaimer: I’m no systems programming expert (though I aspire to be), nor am I an expert in X11, Wayland, or their associated protocols and codebases. This post merely draws on my experiences as an end user who enjoys a highly customised workflow. Wayland has been the talk of the town in the Linux world for quite a while now, but it seems a lot of important pieces of a modern desktop Linux distribution simply aren’t ready for it.
When Jean-Baptiste Kempf joined École Centrale Paris as a student in 2003, he was tasked with helping run the university’s computer network. It included an unusual project: student-run open-source software that had been running on a couple of university servers for seven years. To students, the project was known as “Network 2000.” To the rest of the world, it was VLC media player. Kempf—now the president of VLC’s parent organization, the nonprofit VideoLAN—is the person who helped guide VLC’s journey from student project to ubiquitous software. (VideoLAN Client, the original name for the project, is where VLC gets its name.) On the surface, he’s laid-back, casual, and frank, though that belies a steely determination. As the person overseeing the project and its team, he sets the tone for VLC as a whole. VLC is one of those quintessential pieces of software. An outstanding application.
I like using Linux. I use it on my desktop – especially now that League of Legends runs incredibly well on Linux thanks to Lutris and the League of Linux reddit community. I’d also like to use Linux on my laptop (an XPS 13 9370), but here I run into a major hurdle that, despite a lot of trials and tribulations, I have been unable to overcome: playing video. Of course, Linux – in my case, Linux Mint – can play any format under the sun just fine, whether locally, on-demand, or streaming, and in my case, it’s YouTube video that matters (720p-1080p). The problem lies not in what desktop Linux can play, but in how it does so. Decoding video on my laptop running Linux is apparently remarkably inefficient, to the point where the processor reaches temperatures of 60-70°C, and since the fan kicks in at around 60°C, watching video on Linux means constant fan noise. When playing the same videos on Windows on the exact same laptop, temperatures stay comfortably below 40°C, without ever even coming close to spinning up a fan. I have tried everything. Here’s an itemised list of things I’ve tried, including multiple different combinations:

- I’ve installed tlp. This has had no effect.
- I’ve manually configured my processor – through tlp – to make sure it doesn’t turbo beyond 50%. This has had no effect.
- I’ve disabled Intel Turbo Boost in UEFI altogether. This has had no effect.
- I’ve undervolted my CPU. This gives me maybe 1-2 degrees every now and then, so effectively it hasn’t helped.
- I’ve tried the latest mainline kernel, just to see if there had been improvements in power management or any Intel drivers. This has had no effect.
- I’ve tried Chromium builds with VAAPI support to enable hardware acceleration of YouTube video. This has had no effect.
- I’ve tried downloading YouTube videos with youtube-dl and playing them back locally. This has had no effect.
- I’ve tried forcing H264 on YouTube. This has had no effect.
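For anyone wrestling with the same problem, the usual way to verify whether hardware-accelerated decoding is actually engaged looks roughly like this – a sketch, assuming an Intel integrated GPU and a Debian/Ubuntu-family distribution such as Mint (package names may differ elsewhere):

```shell
# Sketch: check whether the GPU's video decode engine is available and in use.
# Assumes an Intel iGPU and Debian/Ubuntu-style package names.

# 1. Install the VA-API driver, the inspection tools, and a player.
sudo apt install intel-media-va-driver vainfo intel-gpu-tools mpv

# 2. List the codec profiles the GPU can decode in hardware; for YouTube,
#    look for H.264 (VAProfileH264...) and VP9 (VAProfileVP9...) entries.
vainfo

# 3. Play a downloaded file with VA-API decoding requested; if it works,
#    mpv reports that it is using hardware decoding (vaapi).
mpv --hwdec=vaapi video.mp4

# 4. While the video plays, watch the GPU's "Video" engine utilisation;
#    near-zero usage there means decoding is still happening on the CPU.
sudo intel_gpu_top
```

If vainfo shows no VP9 profile, forcing H264 on YouTube is the right instinct, since VP9 streams would otherwise fall back to software decoding.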
There are probably things I’ve tried that I’ve forgotten about and that thus aren’t on this list. As you can imagine, my past few days and weeks have been frustrating, to say the least. I even decided to install Linux Mint on my Surface Pro 4 to see if similar problems pop up there, and lo and behold, that device, too, sees massive temperature spikes when running Linux instead of Windows. I understand and can accept that Linux isn’t as efficient as Windows when it comes to power management and decoding video, and I’m okay with a few degrees here and there. However, I just cannot understand or accept a 20-30°C difference in something as elemental as decoding video. After all of this, I can only conclude that desktop Linux has an incredibly bad video decoding pipeline compared to Windows, and considering I’ve been struggling with this several times over the past few years without any noticeable improvement, it seems it’s not high on anybody’s list of things to improve. Linux’s inefficient video decoding pipeline won’t be much of an issue on desktop machines – playing video has virtually no material temperature impact on my desktop, since my custom watercooled GTX 1070 and i7-7700K are way overkill – but on thermally constrained laptops, the problem becomes massively apparent. It is frustrating. I prefer Linux over Windows, and I want to use it on my laptop, but as it stands now, I simply can’t. I’m at my wits’ end.
When you upgrade to macOS 10.15 Catalina, your boot volume will effectively be split into two. Assuming it’s the standard internal storage, your existing boot volume will be renamed to Macintosh HD – Data, and a new read-only system volume will be created and given the name Macintosh HD. However, when your Mac starts up in Catalina, you won’t see the Data volume, as it’s hidden inside the System volume, in what Apple refers to as a Volume Group. I miss the olden days, when disk layouts were simple and straightforward. Look at the partition layout of any recent operating system, and you’ll be greeted by several small partitions with specific functions, such as boot manager partitions, restore partitions, and so on. These partitions are hidden, and I’ve always been of the school that if you need to hide something, you probably designed it wrong. In any event, I understand why this is necessary, but that doesn’t make it any less hacky and messy.
webOS OSE 2.1.0 has been released. Since I’m sure not everyone has kept track of where webOS has ended up, this is where we stand today: webOS is a web-centric and usability-focused software platform for smart devices, which has proven its performance and stability in over 70 million LG Smart TVs. Since its adaptation to display products, webOS has come a long way and evolved into a software platform applicable to a broader range of products. The open source project of webOS, called webOS Open Source Edition (OSE), was announced in March 2018 under the philosophy of open platform, open partnership, and open connectivity. On top of the core architecture of webOS, webOS OSE offers additional features that allow extension to more diverse industry verticals. This release seems light on changes, as the release notes illustrate.
Apple Inc. is overhauling how it tests software after a swarm of bugs marred the latest iPhone and iPad operating systems, according to people familiar with the shift. Software chief Craig Federighi and lieutenants including Stacey Lysik announced the changes at a recent internal “kickoff” meeting with the company’s software developers. The new approach calls for Apple’s development teams to ensure that test versions, known as “daily builds,” of future software updates disable unfinished or buggy features by default. Testers will then have the option to selectively enable those features, via a new internal process and settings menu dubbed Flags, allowing them to isolate the impact of each individual addition on the system. If the many issues with and complaints about iOS 13 are to be believed, this seems like a much needed intervention.
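The approach described here – unfinished features disabled by default, with testers opting in selectively – is ordinary feature flagging. A minimal sketch in Python, with all names hypothetical (Apple’s internal “Flags” system is not public, so this illustrates the general technique, not their implementation):

```python
# Minimal feature-flag sketch: daily builds register unfinished features
# as off by default, and a tester enables one at a time to isolate its
# impact on the system. All names here are hypothetical.

class FeatureFlags:
    def __init__(self):
        self._flags = {}  # flag name -> enabled?

    def register(self, name, default=False):
        # Unfinished or buggy features ship disabled by default.
        self._flags[name] = default

    def enable(self, name):
        # A tester selectively turns a single feature on.
        self._flags[name] = True

    def is_enabled(self, name):
        # Unknown flags are treated as off.
        return self._flags.get(name, False)


flags = FeatureFlags()
flags.register("new_share_sheet")          # unfinished: off in the daily build
flags.register("dark_mode", default=True)  # finished: on everywhere

if flags.is_enabled("new_share_sheet"):
    print("share sheet v2")
else:
    print("share sheet v1")  # taken: the flag defaults to off
```

The point of the pattern is that a crash or regression can be attributed to one flipped flag rather than to a build containing dozens of half-finished changes.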
Yesterday, Trump visited a six-year-old factory where Mac Pros are being assembled, and Tim Cook appeared in a Trump campaign ad. After Mr. Trump departed the factory, he tweeted, “Today I opened a major Apple Manufacturing plant in Texas that will bring high paying jobs back to America.” About the only thing that’s true in this tweet is that the factory is located in Texas. First, Trump didn’t open the factory – it’s been in use for six years now. Second, it’s not major at all – it only assembles the Mac Pro, with about 500 employees. Third, it won’t bring any jobs back, because it’s been open for six years already. Lastly, it isn’t an Apple factory – it’s owned by another, independent company. Cook stood next to him and didn’t correct Trump at all. On Wednesday, Mr. Trump called Mr. Cook a “very special person” because of his ability to create jobs. He turned to Mr. Cook and said, “What would you say about our economy compared to everybody else?” Mr. Cook replied, “I think we have the strongest economy in the world.” “Strongest in the world,” Mr. Trump said. The president then took questions on the impeachment inquiry and launched into a tirade against “the fake press.” Mr. Cook stood silently nearby. John Gruber, longtime Apple blogger and one of the most outspoken defenders of Apple’s policies: I’ve been on board with Cook’s stance on engaging Trump. Participating in Trump’s technology council does not imply support for Trump. Engaging Trump personally, in private phone calls and dinners, does not imply support. But appearing alongside Trump at an Apple facility in a staged photo-op is implicit support for Trump and his re-election. A low moment in Apple’s proud history, and a sadly iconic moment for Tim Cook. I hope avoiding those tariffs is worth it. History rarely bestows consequences on companies that cooperate with the far right and Nazi extremists.
IG Farben’s directors were all released by the US within only a few years, and IG Farben still exists today in the form of several highly profitable companies, namely Agfa, BASF, Bayer, and Sanofi. Volkswagen was founded by a Nazi labour union, produced what would become the Beetle for Nazi Germany, built military vehicles during the war using 15,000 slaves from concentration camps, and still exists today as one of the biggest automobile conglomerates in the world. IBM aided the Nazi regime in the organisation of the Holocaust, while in the US it orchestrated the concentration camps where Japanese Americans were held. Meanwhile, Henry Ford’s antisemitism and Nazi sympathies are well-documented, and Ford, too, is one of the largest automobile makers in the world. The point is, there’s zero risk for Cook in openly associating himself with someone like Trump. Extremists will praise him, centrists will excuse it away, and the rest will condemn Cook but keep buying iPhones and Macs anyway – and Tim Cook knows it. In a corporatocracy, companies and their leaders are untouchable.
It seems like Google is working hard to update and upstream the Linux kernel that sits at the heart of every Android phone. The company was a big participant in this year’s Linux Plumbers Conference, a yearly meeting of the top Linux developers, and Google spent a lot of time talking about getting Android to work with a generic Linux kernel instead of the highly customized version it uses now. It even showed an Android phone running a mainline Linux kernel. Android is the most popular Linux distribution by far, so a move to a more generic Linux kernel benefits the ecosystem as a whole.
A couple of weeks ago, we landed a commit that took years of effort at Mozilla. It removed “XBL”, which means we’ve completed the process of migrating the Firefox UI to Web Components. It wasn’t easy – but I’ll get to that later. It’s taken a couple of years of remarkably steady progress by a small team of engineers, along with the support of the rest of the organization, and I’m happy to report that we’ve now finished. This is a big accomplishment on its own, and also a foundational improvement for Firefox. It allows teams to focus their efforts on modern web standards, and means we can remove a whole lot of duplicated and complicated functionality that wasn’t exposed to websites. The fact that the people at Mozilla have been able to do this without any major disruptions for Firefox users is pretty impressive.
Sean Gallagher: In a post yesterday to the Microsoft Tech Community blog, Microsoft Windows Core Networking team members Tommy Jensen, Ivan Pashov, and Gabriel Montenegro announced that Microsoft is planning to adopt support for encrypted Domain Name System queries in order to “close one of the last remaining plain-text domain name transmissions in common web traffic.” That support will first take the form of integration with DNS over HTTPS (DoH), a standard proposed by the Internet Engineering Task Force and supported by Mozilla, Google, and Cloudflare, among others. “As a platform, Windows Core Networking seeks to enable users to use whatever protocols they need, so we’re open to having other options such as DNS over TLS (DoT) in the future,” wrote Jensen, Pashov, and Montenegro. “For now, we’re prioritizing DoH support as the most likely to provide immediate value to everyone. For example, DoH allows us to reuse our existing HTTPS infrastructure.” But Microsoft is being careful about how it deploys this compatibility given the current political fight over DoH being waged by Internet service providers concerned that they’ll lose a lucrative source of customer behavior data. This clearly isn’t the sexiest of subjects, but there’s an important tug of war happening here between ISPs and privacy advocates.
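For the curious, a DoH query is just an ordinary DNS message carried over HTTPS (RFC 8484). The sketch below builds a raw DNS query using only the Python standard library and prints the GET-style URL it would be sent to; the Cloudflare endpoint is one of the public resolvers mentioned above, and no request is actually made:

```python
# Sketch of what a DoH (RFC 8484) query looks like on the wire: a normal
# DNS message, base64url-encoded into the "dns" parameter of an HTTPS GET.
# Nothing is sent over the network here.

import base64
import struct

def build_dns_query(hostname, qtype=1):  # qtype 1 = A record
    # Header: ID=0 (RFC 8484 suggests 0 so HTTP caches can reuse replies),
    # flags=0x0100 (recursion desired), 1 question, 0 other records.
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # Question name: each label is length-prefixed, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    # QTYPE and QCLASS (IN = 1) complete the question section.
    return header + qname + struct.pack("!HH", qtype, 1)

query = build_dns_query("example.com")
# RFC 8484 GET form: base64url without padding, carried in "?dns=".
encoded = base64.urlsafe_b64encode(query).rstrip(b"=").decode("ascii")
url = f"https://cloudflare-dns.com/dns-query?dns={encoded}"
print(url)
```

The same 29-byte message could instead be POSTed with the `application/dns-message` content type; either way, an on-path observer sees only TLS to the resolver, which is exactly what worries the ISPs.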
The Debian project is pleased to announce the second update of its stable distribution Debian 10 (codename buster). This point release mainly adds corrections for security issues, along with a few adjustments for serious problems. Security advisories have already been published separately and are referenced where available. Debian users probably already have this installed, because Debian package management is awesome and you can pry APT from my cold, dead hands and yes I’m totally biased when I say that APT is massively better than any of its alternatives. Sue me.
I recently found some USB devices on eBay (Epiphan VGA2USB LR) that could take VGA as input and present the output as a webcam. Given that I was keen on the idea of not needing to lug out a VGA monitor ever again and there was claimed Linux support I took the risk and bought the whole job lot for about £20 (25 USD). When they arrived, I plugged one in under the expectation that it would come up as USB UVC Devices but they did not. Was I missing something? Turns out that he was, and that was the start of a rather wild ride.
An independent developer has managed to hack a calculator to run the Windows 10 operating system – but it’s not the basic or scientific calculator we normally use. According to the photos, the device is actually HP’s Prime Graphing Calculator, which comes with a touch screen interface and good industrial design. The photos shared by the developer, Ben, show off Windows 10 IoT (Internet of Things) edition running on the HP Prime Graphing Calculator. Perhaps not the most useful hack in the world, but still very cool.
Ars Technica reports: The Supreme Court has agreed to review one of the decade’s most significant software copyright decisions: last year’s ruling by an appeals court that Google infringed Oracle’s copyrights when Google created an independent implementation of the Java programming language. The 2018 ruling by the Federal Circuit appeals court “will upend the longstanding expectation of software developers that they are free to use existing software interfaces to build new computer programs,” Google wrote in its January petition to the Supreme Court. In a sane world, this idiotic ruling would be overturned and Larry Ellison would be left to cry into his huge pile of money. Sadly, this world is far from sane, so this could really go either way.
Deciding between building a mainstream PC and a high-end desktop has historically been very clear cut: if budget is a concern and you’re interested in gaming, then typically a user looks to the mainstream; otherwise, if a user is looking to do more professional high-compute work, then they look at the high-end desktop. Over the course of AMD’s recent run of high-core-count Ryzen processors, that line has blurred. This year, that line has disappeared. As recently as 2016, mainstream CPUs topped out at four cores; today they top out at sixteen. Does anyone need sixteen cores? Yes. Does everyone need sixteen cores? No. Do I want sixteen cores? Yes.