Monthly Archive: November 2024
Managing unexpected or significant expenses can be daunting, especially when immediate funds are unavailable. In Wisconsin, installment loans offer a practical solution for individuals seeking a manageable way to cover such expenses. Let’s dive into what makes them a versatile option and how they can benefit borrowers.

How Installment Loans Work

Installment loans in Wisconsin offer borrowers a lump sum of money upfront, which is repaid over time in regular, manageable payments. Typically made monthly, these payments include both the principal and interest, ensuring borrowers can budget their repayments consistently. This type of borrowing stands out due to its predictability. The terms, repayment schedule, and the total amount owed are clearly defined at the time of the agreement, reducing uncertainty. Borrowers can plan their finances better, knowing exactly how much they need to repay and when. This straightforward process makes installment loans an accessible and appealing option for many individuals.

Accessibility for Diverse Financial Needs

A significant advantage of these loans is their accessibility to many borrowers. In Wisconsin, installment loans cater to various financial situations, such as covering medical expenses, repairing vehicles, or funding home improvements. Unlike traditional credit options, which may have strict requirements, installment loans often have a simpler application process. Borrowers generally need proof of income, identification, and banking details. This flexibility makes them an option for individuals with varying credit histories, including those with limited or poor credit. By offering an alternative to conventional lending, installment loans empower more people to address their immediate needs without unnecessary hurdles. This inclusivity ensures that a broader audience can benefit from them, regardless of their financial background.

Manageable Repayment Plans for Better Budgeting

One of the most notable features of installment loans in Wisconsin is their structured repayment plan. Instead of repaying the entire amount at once, borrowers can make smaller, periodic payments over a set timeframe. This approach helps distribute the financial burden and makes the repayment process more manageable. These loans allow borrowers to align repayments with their income cycles, such as monthly paychecks. This predictability ensures that individuals can meet their obligations without disrupting other financial priorities. Moreover, the fixed payment amounts make budgeting easier, reducing the risk of financial strain (a short worked example of this fixed-payment arithmetic appears at the end of this article). The ability to spread out payments over time also makes installment loans preferable for larger expenses. Borrowers can address significant financial needs without depleting their savings or compromising their ability to handle future obligations.

Who Benefits Most from This Type of Borrowing?

Installment loans are especially beneficial for individuals in Wisconsin who need a larger sum but prefer the flexibility of paying it back over time. For instance, someone facing an unexpected medical bill or planning a home renovation can use them. They are also ideal for people with steady incomes who can commit to regular repayments. The structured plan helps them stay on track while addressing financial challenges. Borrowers who prefer predictable terms and payments often find installment loans to be a practical option.
Additionally, those who may not qualify for traditional credit options due to limited credit history or other factors can benefit from the accessibility of installment loans. This inclusivity allows more individuals to meet their financial needs effectively.

The Importance of Responsible Borrowing

While installment loans provide flexibility and convenience, Wisconsin borrowers need to approach them responsibly. Borrowers should only take out what they truly need and ensure they can comfortably meet their repayment obligations. Before committing to a loan, reviewing the agreement in detail is important. Understanding the total cost, including interest and additional fees, helps borrowers plan better and avoid surprises. Budgeting for monthly payments and setting aside funds for repayments ensures the debt is manageable within their financial means. It’s also wise to compare different lending options and terms before deciding. Researching and choosing the most favorable terms can significantly impact the overall cost and ease of repayment. Responsible borrowing ensures that installment loans remain a helpful tool rather than a source of financial stress.

Evaluating Alternatives for Financial Flexibility

While installment loans are a valuable solution, they may not always be the best fit for every situation. Exploring alternatives can help individuals find the most appropriate option for their needs. Credit cards or personal lines of credit may be more practical for smaller amounts. These options often offer flexibility for short-term needs. Secured loans, such as those backed by assets, may provide lower interest rates for larger expenses. Additionally, depending on their circumstances, some individuals might benefit from assistance from community programs or family support. Planning and building an emergency fund can also reduce reliance on borrowing for future expenses. By evaluating all available options and considering the long-term implications, individuals can choose the best solution for their specific financial situation.

Installment loans in Wisconsin provide a structured and accessible way to manage various financial needs. Predictable payments and broad eligibility allow borrowers to address significant expenses without overwhelming their budgets. By understanding the terms and borrowing responsibly, individuals can effectively use them to maintain financial stability and achieve their goals.
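For readers who want the fixed-payment arithmetic described above spelled out, the standard amortization formula below shows how a level monthly payment covering both principal and interest is calculated. The figures are purely hypothetical and are not a quote from any Wisconsin lender.

M = P \cdot \frac{r(1+r)^n}{(1+r)^n - 1}

where P is the principal, r is the monthly interest rate, and n is the number of monthly payments. As a hypothetical example, borrowing P = $3,000 at a 12% annual rate (r = 0.01 per month) over n = 24 months gives:

M = 3000 \cdot \frac{0.01(1.01)^{24}}{(1.01)^{24} - 1} \approx 141.22

so the borrower would pay roughly $141 per month, with the interest share of each payment shrinking and the principal share growing over the term.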
Are your online efforts helping or hurting your visibility in search results? Many local businesses overlook critical issues that can impact their search performance. Addressing these challenges early can prevent long-term damage to your online reputation and rankings. This article will discuss common SEO mistakes businesses should avoid in order to have a robust online presence. Let’s see what difficulties might be holding your website back. Understanding the pitfalls of technical SEO in Portland can help you optimize your website better.

Ignoring Website Speed Issues

Slow websites frustrate users and drive them away. Search engines also factor speed into rankings, affecting your visibility. If your site is sluggish, it could be costing you potential customers and reducing engagement. Use tools to monitor and improve your site’s performance regularly. Compress images, optimize code, and leverage caching to ensure your site stays fast. Make speed optimization a routine part of your website maintenance plan.

Failing to Optimize Mobile Responsiveness

Many users access websites through their phones, but is your site truly mobile-friendly? Pages that don’t adapt to smaller screens lose traffic and negatively impact search rankings. Poor design leads to frustrated visitors and lost conversions. Ensure your design works seamlessly across all devices, from phones to tablets. Regularly test updates to confirm mobile compatibility and promptly fix any issues that arise.

Overlooking Broken Links

Broken links create a poor user experience and damage your search visibility. These dead ends not only frustrate visitors but also make your site appear unprofessional and outdated. Use tools to identify and fix broken links as quickly as possible. Replacing or redirecting these links helps maintain trust and ensures smooth navigation. A website free of errors keeps users engaged and boosts your credibility.

Skipping Regular Crawl Error Checks

Crawl errors prevent search engines from indexing your site properly, leaving valuable pages invisible in search results. These issues can arise from server errors, incorrect URLs, or blocked resources. Regularly monitor for and address crawl issues to maintain consistent search engine access. A well-maintained website improves your overall visibility and helps search engines rank your content effectively. Fixing these errors is critical for ensuring your audience can find you online.

Forgetting to Use Structured Data

Structured data helps search engines understand your content better, leading to more prominent results. Without it, you miss opportunities for enhanced visibility in search snippets. Schema markup highlights critical information like services, locations, or events that users are looking for. Adding structured data is a simple way to stand out in search results in Portland. Don’t overlook this tool to improve your website’s relevance and reach.

Ignoring Duplicate Content Problems

Duplicate content confuses search engines and diminishes your ranking potential. It also frustrates visitors who are seeking unique and valuable information. Use tools to identify and eliminate redundant pages or sections across your website. Replace duplicated content with fresh, engaging material that aligns with your audience’s needs. Focusing on originality helps your site stand out from competitors and improves your online presence in Portland.

Addressing technical SEO mistakes is essential for boosting your Portland business’s online presence.
If you need expert assistance with technical SEO in Portland, connect with professionals who can help you thrive. Focus on resolving speed issues, mobile compatibility, and crawl errors to enhance visibility. Don’t let broken links, duplicate content, or missing structured data hold your site back from success. Mastering these basics ensures your website works harder for your business.
Today is “Black Friday”, the day when a lot of retailers, both online and offline, pretend to have massive discounts on things whose prices they raised a few weeks ago, or on useless garbage they bought in bulk that’ll end up in a landfill within a year. Technology media happily partakes in this event, going full mask-off, posting an endless stream of “stories” promoting these discounts. They’re writing ads for fake discounts, often for products from the very companies they’re supposed to report on, and dressing them up as normal articles. It’s sad and revealing, highlighting just how much of the technology media landscape is owned by giant media conglomerates. OSNews does not partake. We’re independent, answer to nobody, and are mostly funded directly by you, our readers. If you want to keep it this way, and keep OSNews free from the tripe you see on every other technology site around this time, consider supporting us through Patreon, making a one-time donation through Ko-Fi, or buying some merch. That’s it. That’s our extra special discount bonanza extravaganza Black Friday super coverage.
The Cinnamon Desktop, the GTK desktop environment developed by the Linux Mint project, has just released version 6.4. The focus of this release is on nips and tucks in the default theme, dialogs, menus, and other user interface elements. They seem to have taken a few pages out of GNOME’s book, especially when it comes to dialogs and the OSD, which honestly makes sense considering Cinnamon is also GTK and most Cinnamon users will be running a ton of GNOME/Libadwaita applications. There’s also a new night light feature to reduce eyestrain, vastly improved options for power profiles and management, and more. Cinnamon 6.4 will be part of Linux Mint’s next major release, coming in late December, but is most likely already making its way to various other distributions’ repositories.
Recently, I’ve been moving away from macOS to Linux, and have settled on using KDE Plasma as my desktop environment. For the most part I’ve been comfortable with the change, but it’s always the small things that get me. For example, the Mail app built into macOS provides an “Unsubscribe” button for emails. Apparently this is also supported in some webmail clients, but I’m not interested in accessing my email that way. Unfortunately, I haven’t found an X11 or Wayland email client that supports this sort of functionality, so I decided to implement it myself. And anyway, I’m trying out Kontact for my mail at the moment, which supports plugins. So why not use this as an opportunity to build one? ↫ datagirl.xyz Writing a Kmail plugin like this feels a bit like an arcane art, because the process is not documented as well as it could be, and I suspect that, other than KDE developers themselves, very few people are interested in writing these kinds of plugins. In fact, I can’t find a single one listed on the KDE Store, and searching around I can’t find anything either, other than the ones that come with KDE. It seems like this particular plugin interface is designed more to make it easy for KDE developers to extend and alter Kmail than it is for third parties to do so – and that’s fine. Still, this means that if some third party does want to write such a plugin, there’s some sleuthing and hacking to be done, and that’s exactly the process this article details. In the end, we’re left with a working unsubscribe plugin, with the code on git so others can learn from it. While this may not interest a large number of people, it’s vital to have information like this out on the web for those precious few to find – so excellent work.
A three-year fight to help support game preservation has come to a sad end today. The US copyright office has denied a request for a DMCA exemption that would allow libraries to remotely share digital access to preserved video games. ↫ Dustin Bailey at GamesRadar This was always going to end in favour of the massive gaming industry with effectively bottomless bank accounts and more lawyers than god. The gist is that Section 1201 of the DMCA prevents libraries from circumventing the copy protection to make games available remotely. Much like they do with books, libraries would loan out these games not just for research purposes, but also for entertainment purposes, and that’s where the issue lies, according to the Copyright Office, which wrote that “there would be a significant risk that preserved video games would be used for recreational purposes”. The games industry doesn’t care about old titles nobody wants to buy anymore and no consumer is interested in. There’s a long tail of games that have no monetary value whatsoever, and there’s a relatively small number of very popular older games that the industry wants to keep repackaging and reselling forever – I mean, we can’t have a new Nintendo console without the opportunity to buy Mario Bros. for the 67th time. That’d be ludicrous. In order to protect the continued free profits from those few popular retro titles, the endless list of other games only a few nerds are interested in is sacrificed.
There have been some past rumblings on the internet about a capacitor being installed backwards in Apple’s Macintosh LC III. The LC III was a “pizza box” Mac model produced from early 1993 to early 1994, mainly targeted at the education market. It also manifested as various consumer Performa models: the 450, 460, 466, and 467. Clearly, Apple never initiated a huge recall of the LC III, so I think there is some skepticism in the community about this whole issue. Let’s look at the situation in more detail and understand the circuit. Did Apple actually make a mistake? ↫ Doug Brown Even I had heard of these claims, and I’m not particularly interested in Apple retrocomputing, other than whatever comes by on Adrian Black or whatever. As such, it surprises me that there hasn’t been any definitive answer to this question – with the amount of interest in classic Macs you’d think this would simply be a settled issue and everyone would know about it. This vintage of Macs pretty much requires recaps by now, so I assumed that if Apple had indeed soldered on a capacitor backwards, it’d just be something listed in the various recapping guides. It took some very minor digging with the multimeter, but yes, one of the capacitors on this family of boards is soldered on the wrong way, with the positive terminal where the negative terminal should be. It seems the error does not lie with whoever soldered the capacitors onto the boards – or whoever set up the machine that did so – because the silkscreen is labeled incorrectly, too. The reason it doesn’t seem to have been a noticeable problem during the expected lifespan of the computer is that the capacitor was rated at 16V, but was only taking in -5V. So, if you plan on recapping one of these classic Macs – you might as well fix the error.
The EB-5 Immigrant Investor Program, revitalized by the Reform and Integrity Act (RIA) of March 2022, has created new opportunities for foreign investors through rural Targeted Employment Areas (TEAs). These areas offer significant advantages, such as priority processing and set-aside visas, making them an attractive option for investors seeking faster immigration processes. In this article, we explore what rural TEAs are, how they benefit investors and developers, and their significance across the states.

What is a Rural TEA?

A rural Targeted Employment Area (TEA) is a location that meets specific criteria under the EB-5 program. These criteria ensure that rural TEAs remain focused on areas that truly need economic stimulation and development. For investors, this designation represents not only an opportunity to support underserved communities but also a pathway to streamlined immigration benefits.

Key Benefits of Investing in a Rural TEA

Investing in a rural TEA comes with numerous advantages, including priority processing, set-aside visas, and reduced investment thresholds.

Compliance and Documentation

To benefit from the rural TEA designation, EB-5 developers and investors must provide thorough documentation proving the project’s eligibility. This includes location and population data from credible and up-to-date sources. The RIA grants USCIS exclusive authority to designate high-unemployment TEAs, emphasizing the importance of accuracy and reliability in submitted documents.

How the RIA Transformed TEAs

The Reform and Integrity Act introduced several pivotal changes to the EB-5 program, most notably the visa set-asides and priority processing described above.

Why Rural TEAs Are a Strategic Choice

Rural TEAs offer a win-win scenario for both investors and communities. Investors gain access to faster immigration processing and reduced investment thresholds, while rural areas benefit from economic growth, job creation, and infrastructural development. With only 20% of annual EB-5 visas set aside for rural TEAs, competition remains low, making them an optimal choice for forward-thinking investors.

Explore Rural TEA Opportunities by State

Designated rural TEAs can be found across many states, each offering unique investment opportunities.

Conclusion

Rural TEAs have redefined the EB-5 investment landscape, creating unparalleled opportunities for foreign investors while promoting economic development in underserved areas. With benefits like priority processing, reserved visas, and reduced investment thresholds, rural TEAs are a strategic choice for those looking to navigate the EB-5 program effectively. For investors seeking to make an impact and secure U.S. residency, understanding and leveraging rural TEAs is the key to success. Stay ahead in the competitive EB-5 arena by exploring rural TEA opportunities and aligning your investments with the provisions of the RIA. Whether you’re a developer or an investor, rural TEAs offer a pathway to mutual growth and success.
The moment a lot of us have been fearing may soon be upon us. Among the various remedies proposed by the United States Department of Justice to address Google’s monopoly abuse, there’s also banning Google from spending money to become the default search engine on other devices, platforms, or applications. “We strongly urge the Court to consider remedies that improve search competition without harming independent browsers and browser engines,” a Mozilla spokesperson tells PCMag. Mozilla points to a key but less eye-catching proposal from the DOJ to regulate Google’s search business, which a judge ruled as a monopoly in August. In their recommendations, federal prosecutors urged the court to ban Google from offering “something of value” to third-party companies to make Google the default search engine over their software or devices. ↫ Michael Kan at PC Mag Obviously Mozilla is urging the courts to reconsider this remedy, because it would instantly cut more than 80% of Mozilla’s revenue. As I’ve been saying for years now, the reason Firefox seems to be getting worse is that Mozilla is desperately trying to find other sources of revenue, and they seem to think advertising is their best bet – even going so far as working together with Facebook. Imagine how much more invasive and user-hostile these attempts are going to get if Mozilla suddenly loses 80% of its revenue? For so, so many years now I’ve been warning everyone about just how fragile the future of Firefox was, and every one of my worries and predictions has become reality. If Mozilla now loses 80% of its funding, which platform Firefox officially supports do you think will feel the sting of inevitable budget cuts, scope reductions, and even more layoffs first? The future of especially Firefox on Linux is hanging by a thread, and with everyone lulled into a false sense of complacency by Chrome and its many shady skins, nobody in the Linux community seems to have done anything to prepare for this near inevitability. With no proper, fully-featured replacements in the works, Linux distributions, especially ones with strict open source requirements, will most likely be forced to ship with de-Googled Chromium variants by default once Firefox becomes incompatible with such requirements. And no matter how much you take Google out of Chromium, it’s still effectively a Google product, leaving most Linux users entirely at the whim of big tech for the most important application they have. We’re about to enter a very, very messy time for browsing on Linux.
There are so many ecological, environmental, and climate problems and disasters taking place all over the world that it’s sometimes hard to see the burning forests through the charred tree stumps. As at best middle-income individuals living in this corporate line-must-go-up hellscape, there’s only so much we can do to turn the rising tides of fascism and leave at least a semblance of a livable world for our children and grandchildren. Of course, the most elementary thing we can do is not vote for science-denying death cults who believe everything is some non-existent entity’s grand plan, but other than that, what’s really our impact if we drive a little less or use paper straws, when some wealthy robber baron flying his private jet to Florida to kiss the gaudy gold ring to signal his obedience does more damage to our world in one flight than we do in a year of driving to our underpaid, expendable job? Income, financial, health, and other circumstances allowing, all we can do are the little things to make ourselves feel better, usually in areas in which we are knowledgeable. In technology, it might seem like there’s not a whole lot we can do, but actually there are quite a few steps we can take. One of the biggest things you, as an individual knowledgeable about and interested in tech, can do to give the elite and ruling class the finger is to move away from big tech, their products, and their services – no more Apple, Amazon, Microsoft, or Google. This is often a long, tedious, and difficult process, as most of us will discover that we rely on a lot more big tech products than we initially thought. It’s like an onion that looks shiny and tasty on the outside, but is rotting from the inside – the more layers you peel away, the dirtier and nastier it gets. Also you start crying. I’ve been in the process of eradicating as much big tech from my life as I can for a long time now. For the past four or five years, all my desktop and laptop PCs have run Linux, from my dual-Xeon workstation to my high-end gaming PC (ignore that spare parts PC that runs Windows just for League of Legends. That stupid game is my guilty pleasure and I will not give it up), from my XPS 13 laptop to my little Home Assistant thin client. I’ve never ordered a single thing from Amazon and have no Prime subscription or whatever it is, so that one was a freebie. Apple I banished from my life long ago, so that’s another freebie. Sadly, that other device most of us carry with us remained solidly in the big tech camp, as I’ve been using an Android phone for a long time, filled to the brim with Google products, applications, and services. There really isn’t a viable alternative to the Android and iOS duopoly. Or is there? Well, in a roundabout way, there is an alternative to iOS and Google’s Android. You can’t do much to take the Apple out of an iPhone, but there’s a lot you can do to take the Google out of an Android phone. Unless or until an independent third platform ever manages to take serious hold – godspeed, our saviour – de-Googled Android, as it’s called, is your best bet at having a fully functional, modern smartphone that’s as free from big tech as you want it to be, without leaving you with a barely usable, barebones experience. While you can install a de-Googled ROM yourself, as there are countless to choose from, this is not an option for everyone, since not everyone has the skills, time, and/or supported devices to do so.

Murena, Fairphone, and sustainable mining

This is where Murena comes in.
Murena is a French company – founded by Gaël Duval, of Mandrake Linux fame – that develops /e/OS, a de-Googled Android using microG (which Murena also supports financially), which it makes available for anyone to install on supported devices, while also selling various devices with /e/OS preinstalled. Murena goes one step further, however, by also offering something called Murena Workspace – a branded Nextcloud offering that works seamlessly with /e/OS. In other words, if you buy an /e/OS smartphone from Murena, you get the complete package of smartphone, mobile operating system, and cloud services that’s very similar to buying a regular Android phone or an iPhone. To help me test this complete package of smartphone, de-Googled Android, and cloud services, Murena loaned me a Fairphone 5 with /e/OS preinstalled, and while this article mostly focuses on the /e/OS experience, we should first talk a little bit about the relationship between Murena and Fairphone. Murena and Fairphone are partners, and Murena has been selling /e/OS Fairphones for a while now. Most of us will be familiar with Fairphone – it’s a Dutch company focused on designing and selling smartphones and related accessories that are user-repairable and long-lasting, while also trying everything within their power to give full insight into their supply chain. This is important, because every smartphone contains quite a few materials that are unsustainably mined. Many mines are destructive to the environment, have horrible working conditions, or even sink as low as employing children. Even companies priding themselves on being environmentally responsible and sustainable, like Apple, are guilty of partaking in and propping up such mining endeavours. As consumers, there isn’t much we can do – the network of supply chains involved in making a smartphone is incredibly complex and opaque, and there’s basically nothing normal people can do to really fully know on whose underpaid or even underage shoulders their smartphone is built. This holiday season, Murena and Fairphone are collaborating on exactly this issue of the conditions in mines used to acquire the metals and minerals in our phones. Instead of offering big discounts (that barely eat into margins and often follow sharp price increases right before the holidays), Murena and Fairphone will donate
Every now and then, news from the club I’m too cool to join, the plan9/9front community, pierces the veil of coolness and enters our normal world. This time, someone accidentally made a package manager for 9front. I’ve been growing tired of manually handling random software, so I decided to find a simple way to automate the process and ended up making a sort of “package manager” for 9front¹. It’s really just a set of shell scripts that act as a frontend for git and keep a simple database of package names and URLs. Running the pkginit script will ask for a location to store the source files for installed packages (/sys/pkg by default) which will then be created if non-existent. And that’s it! No, really. Now you can provide a URL for a git repository to pkg/add. ↫ Kelly “bubstance” Glenn As I somehow expected from 9front, it’s quite a simple and elegant system. I’m not sure how well it would handle more complex package operations, but I doubt many 9front systems are complex to begin with, so this may just be enough to take some of the tedium out of managing software on 9front, as the author originally intended. One day I will be cool enough to use 9front. I just have to stay cool.
The author of this article, Dr. Casey Lawrence, mentions the opt-out checkbox is hard to find, and they aren’t kidding. On Windows, here’s the full snaking path you have to take through Word’s settings to get to the checkbox: File > Options > Trust Center > Trust Center Settings > Privacy Options > Privacy Settings > Optional Connected Experiences > Uncheck box: “Turn on optional connected experiences”. That is absolutely bananas. No normal person is ever going to find this checkbox. Anyway, remember how the “AI” believers kept saying “hey, it’s on the internet so scraping your stuff and violating your copyright is totally legal you guys!”? Well, what about when you’re using Word, installed on your own PC, to write private documents, containing, say, sensitive health information? Or detailed plans about your company’s competitor to Azure or Microsoft Office? Or correspondence with lawyers about an antitrust lawsuit against Microsoft? Or a report on Microsoft’s illegal activity you’re trying to report as a whistleblower? Is that stuff fair game for the gobbledygook generators too? This “AI” nonsense has to stop. How is any of this even remotely legal?
A month and a bit ago, I wondered if I could cope with a terminal-only computer. The only way to really find out was to give it a go. My goal was to see what it was like to use a terminal-only computer for my personal computing for two weeks, and more if I fancied it. ↫ Neil’s blog I tried to do this too, once. Once. Doing everything from the terminal just isn’t viable for me, mostly because I didn’t grow up with it. Our family’s first computer ran MS-DOS (with a Windows 3.1 installation we never used), and I’m pretty sure the experience of using MS-DOS as my first CLI ruined me for life. My mental model for computing didn’t start forming properly until Windows 95 came out, and as such, computing is inherently graphical for me, and no matter how many amazing CLI and TUI applications are out there – and there are many, many amazing ones – my brain just isn’t compatible with it. There are a few tasks I prefer doing with the command line, like updating my computers or editing system files using Nano, but for everything else I’m just faster and more comfortable with a graphical user interface. This comes down to not knowing most commands by heart, and often not even knowing the options and flags for the most basic of commands, meaning even very basic operations that people comfortable using the command line do without even thinking, take me ages. I’m glad any modern Linux distribution – I use Fedora KDE on all my computers – offers both paths for almost anything you could do on your computer, and unless I specifically opt to do so, I literally – literally literally – never have to touch the command line.
I had to dive into our archive all the way back to 2017 to find the last reference to the MaXX Interactive Desktop, and it seems this wasn’t exactly my fault – the project has been on hiatus since 2020, and is only now coming back to life, as MaXXdesktop v2.2.0 (nickname Octane) Alpha-1 has been released, alongside a promising and ambitious roadmap for the future of the project. For the uninitiated – MaXX is a Linux reimplementation of the IRIX Interactive Desktop with some modernisations and other niceties to make it work properly on modern Linux (and FreeBSD) machines. MaXX has a unique history in that its creator and lead developer, Eric Masson, managed to secure a special license agreement with SGI way back in 2005, under which he was allowed to recreate, from scratch, the IRIX Interactive Desktop on Linux, including the use of SGI’s trademarks and IRIX’ unique look and feel. It’s important to note that he did not get access to any code – he was only allowed to reverse-engineer and recreate it, and because some of the code falls under this license agreement and some doesn’t, MaXX is not entirely open source; parts of it are, but not all of it. Any new code written that doesn’t fall under the license agreement is released as open source though, and the goal is to, over time, make everything open source. And as you can tell from this v2.2.0 screenshot, MaXX looks stunning even at 4K. This new alpha version contains the first changes to adopt the freedesktop.org application specifications, a new Exposé-like window overview, tweaks to the modernised version of the IRIX look and feel (the classic one is also included as an option), desktop notifications, performance improvements, various modernisations to the window manager, and so, so much more. For the final release of 2.2.0 and later releases, more changes are planned, like brand new configuration and system management panels, a quick search tool, a new file manager, and a ton more. MaXX runs on RHEL/Rocky and Ubuntu, and probably more Linux distributions, and FreeBSD, and is entirely free.
This is a Silicon Graphics workstation from 1995. Specifically, it is a ‘Teal’ Indigo 2 (as opposed to a ‘Purple’ Indigo 2, which came later). Ordinarily that’s rare enough – these things were about £30,000 brand new. A close look at the case badge though, marks this out as a ‘Teal’ POWER Indigo 2 – where instead of the usual MIPS R4600 or R4400SC CPU modules, we have the rare, unusual, expensive and short-lived MIPS R8000 module. ↫ Jonathan Pallant It’s rare these days to find an article about exotic hardware that has this many detailed photographs – most people just default to making videos now. Even if the actual contents of the article aren’t interesting, this is some real good hardware pornography, and I salute the author for taking the time to both take and publish these photos in a traditional way. That being said, what makes this particular SGI Indigo 2 so special? The R8000 is not a CPU in the traditional sense. It is a processor, but that processor is comprised of many individual chips, some of which you can see and some of which are hidden under the heatsink. The MIPS R8000 was apparently an attempt to wrestle back the Floating-Point crown from rivals. Some accounts report that at 75 MHz, it has around ten times the double-precision floating point throughput of an equivalent Pentium. However, code had to be specially optimised to take best advantage of it and most code wasn’t. It lasted on the market for around 18 months, before being replaced by the MIPS R10K in the ‘Purple’ Indigo 2. ↫ Jonathan Pallant And here we see the first little bits of writing on the wall for the future of all the architectures trying to combat the rising tide of x86. SGI’s MIPS, Sun’s SPARC, HP’s PA-RISC, and other processors would stumble along for a few more years after this R8000 module came on the market, but looking back, all of these companies knew which way the wind was blowing, and many of them would sign onto Intel’s Itanium effort. Itanium would fail spectacularly, but the cat was out of the bag, and SGI, Sun, and HP would all be making standard Xeon and Opteron workstations within a few years. Absolutely amazing to see this rare of a machine and module lovingly looked after.
This is the first post in what will hopefully become a series of posts about a virtual machine I’m developing as a hobby project called Bismuth. This post will touch on some of the design fundamentals and goals, with future posts going into more detail on each. But to explain how I got here I first have to tell you about Bismuth, the kernel. ↫ Eniko Fox It’s not every day that a developer of an awesome video game details a project they’re working on that also happens to be excellent material for OSNews. Eniko Fox, one of the developers of the recently released Kitsune Tails, has also been working on an operating system and virtual machine in her spare time, and has recently been detailing the experience in, well, more detail. This one here is the first article in the series, and a few days ago she published the second part, about memory safety in the VM. The first article goes into the origins of the project, as well as the design goals for the virtual machine. It started out as an operating systems development side project, but once it was time to develop things like the MMU and virtual memory mapping, Fox started wondering if programs couldn’t simply run inside a virtual machine atop the kernel instead. This is how the actual Bismuth virtual machine was conceived. Fox wants the virtual machine to care about memory safety, and that’s what the second article goes into. Since the VM is written in C, which is anything but memory-safe, she’s opting for implementing a form of sandboxing – which also happens to be the point in the development story where my limited knowledge starts to fail me and things get a little too complicated for me. I can’t even internalise how links work in Markdown, after all (square or regular brackets first? Also Markdown sucks as a writing tool but that’s a story for another time). For those of you more capable than me – so basically most of you – Fox’s series is a great series to follow along as she further develops the Bismuth VM.
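To make the sandboxing idea a little more concrete for readers following along, here is a minimal, hypothetical sketch in C of the general technique: the VM owns one contiguous allocation for guest memory and bounds-checks every guest address before touching it. This is purely illustrative and not taken from Fox’s Bismuth code; all names in it are made up.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical example: guest memory is one contiguous host allocation,
 * and every guest address is bounds-checked before it is dereferenced. */

#define GUEST_MEM_SIZE (64 * 1024)

typedef struct {
    uint8_t *mem;     /* backing store for the guest's address space */
    size_t   size;    /* size of that address space in bytes */
    int      faulted; /* set when the guest performs an illegal access */
} vm_t;

static int vm_check(vm_t *vm, uint32_t addr, size_t len) {
    /* Reject accesses that start or end outside guest memory.
     * The second comparison is written this way to avoid overflow. */
    if (addr >= vm->size || len > vm->size - addr) {
        vm->faulted = 1;
        fprintf(stderr, "guest fault: access at 0x%08x (+%zu)\n", addr, len);
        return 0;
    }
    return 1;
}

static uint32_t vm_load32(vm_t *vm, uint32_t addr) {
    uint32_t value = 0;
    if (vm_check(vm, addr, sizeof value))
        memcpy(&value, vm->mem + addr, sizeof value);
    return value;
}

static void vm_store32(vm_t *vm, uint32_t addr, uint32_t value) {
    if (vm_check(vm, addr, sizeof value))
        memcpy(vm->mem + addr, &value, sizeof value);
}

int main(void) {
    vm_t vm = { .mem = calloc(GUEST_MEM_SIZE, 1), .size = GUEST_MEM_SIZE };
    if (!vm.mem)
        return 1;

    vm_store32(&vm, 0x100, 0xdeadbeef);                     /* in bounds: succeeds */
    printf("0x%08x\n", (unsigned)vm_load32(&vm, 0x100));    /* prints 0xdeadbeef */

    vm_store32(&vm, GUEST_MEM_SIZE - 1, 1);                 /* straddles the end: faults */
    printf("faulted: %d\n", vm.faulted);

    free(vm.mem);
    return 0;
}

Raising a fault flag instead of crashing mirrors how a real VM would surface a guest-visible exception rather than let a bad guest pointer corrupt the host’s own memory.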
Valve, entirely going against the popular definition of Vendor, is still actively working on improving and maintaining the kernel for their Steam Deck hardware. Let’s see what they’re up to in this 6.8 cycle. ↫ Samuel Dionne-Riel Just a quick look at what, exactly, Valve does with the Steam Deck Linux kernel – nothing more, nothing less. It’s nice to have simple, straightforward posts sometimes.
Ah, the Common Hardware Reference Platform, IBM’s and Apple’s ill-fated attempt at taking on the PC market with a reference PowerPC platform anybody could build and expand upon while remaining (mostly) compatible with one another. Sadly, like so many other things Apple was trying to do before Steve Jobs returned, it never took off, and even Apple itself never implemented CHRP in any meaningful way. Only a few random IBM and Motorola computers ever fully implemented it, and Apple didn’t get any further than basic CHRP support in Mac OS 8, and some PowerPC Macs were based on CHRP, without actually being compatible with it. We’re roughly three decades down the line now, and pretty much everyone except weird nerds like us have forgotten CHRP was ever even a thing, but Linux has continued to support CHRP all this time. This support, too, though, is coming to an end, as Michael Ellerman has informed the Linux kernel community that they’re thinking of getting rid of it. Only a very small number of machines are supported by CHRP in Linux: the IBM B50, bplan/Genesi’s Pegasos/Pegasos2 boards, the Total Impact briQ, and maybe some Motorola machines, and that’s it. Ellerman notes that these machines seem to have zero active users, and anyone wanting to bring CHRP support back can always go back in the git history. CHRP is one of the many, many footnotes in computing history, and with so few machines out there that supported it, and so few machines Linux’ CHRP support could even be used for, it makes perfect sense to remove this from the kernel, while obviously keeping it in git’s history in case anyone wants to work with it on their hardware in the future. Still, it’s always fun to see references to such old, obscure hardware and platforms in 2024, even if it’s technically sad news.
Windows 10’s free, guaranteed security updates stop in October 2025, less than a year from now. Windows 10 users with supported PCs have been offered the Windows 11 upgrade plenty of times before. But now Microsoft is apparently making a fresh push to get users to upgrade, sending them full-screen reminders recommending they buy new computers. ↫ Andrew Cunningham at Ars Technica That deadline sure feels like it’s breathing down Microsoft’s neck. Most Windows users are still using Windows 10, and all of those hundreds of millions (billions?) of computers will become unsupported less than a year from now, which is going to be a major headache for Microsoft once the unaddressed security issues start piling up. CrowdStrike is still fresh in Microsoft’s memory, and the company made a ton of promises about changing its security culture and implementing new features and best practices to stop it from ever happening again. Those are going to be some very tough promises to keep when the majority of Windows users are no longer getting any support. The obvious solution here is to accept the fact that if people haven’t upgraded to Windows 11 by now, they’re not going to until forced to do so because their computer breaks or becomes too slow and Windows 11 comes preinstalled on their new computer. No amount of annoying fullscreen ads interrupting people’s work or pleasure is going to get people to buy a new PC just for some half-baked “AI” nonsense or whatever – in fact, it might just put even more people off from upgrading in the first place. Microsoft needs to face the music and simply extend the end-of-support deadline for Windows 10. Not doing so is massively irresponsible to a level rarely seen from big tech, and if they refuse to do so I strongly believe authorities should get involved and force the company to extend the deadline. You simply cannot leave this many users with insecure, non-maintained operating systems that they rely on every day to get their work done.
VMS Software, the company migrating OpenVMS to x86 (well, virtualised x86, at least), has announced the release of OpenVMS 9.2-3, which brings with it a number of new features and changes. It won’t surprise you to hear that many of the changes are about virtualisation and enterprise networking stuff, like adding passthrough support for fibre channel when running OpenVMS in VMware, a new VGA/keyboard-based guest console, automatic configuration of TCP/IP and OpenSSH during installation, improved performance for virtualised network interfaces on VMware and KVM, and much more. Gaining access to OpenVMS requires requesting a community license, after which OpenVMS will be delivered in the form of a preinstalled virtual disk image, complete with a number of development tools.