Apple announced a trio of major new hearing health features for the AirPods Pro 2 in September, including clinical-grade hearing aid functionality, a hearing test, and more robust hearing protection. All three will roll out next week with the release of iOS 18.1, and they could mark a watershed moment for hearing health awareness. Apple is about to instantly turn the world’s most popular earbuds into an over-the-counter hearing aid. ↫ Chris Welch at The Verge Most of us here have a lot of issues with the major technology companies and the way they do business, and rightfully so, but every now and then, even they accidentally stumble into doing something good for the world. AirPods are already a success story, and gaining access to hearing aid-level features at their price point is an absolute game changer for a lot of people with hearing issues – and for a lot of people who don’t even know yet that they have hearing issues in the first place. If you have people in your life with hearing issues, or whom you suspect may have hearing issues, a pair of AirPods may just be the perfect gift this Christmas season. Yes, I too think hearing aids should be something nobody has to pay for, covered as part of your country’s universal healthcare – assuming you have such a thing – but in the meantime, this is not a bad option.
System76, purveyor of Linux computers, distributions, and now also desktop environments, has just unveiled its latest top-end workstation, but this time, it’s not an x86 machine. They’ve been working together with Ampere to build a workstation based around Ampere’s Altra ARM processors: the Thelio Astra. Phoronix, fine purveyor of Linux-focused benchmarks, was lucky enough to benchmark one, and has more information on the new workstation. System76 designed the Thelio Astra in collaboration with Ampere Computing. The System76 Thelio Astra makes use of Ampere Altra processors up to the Ampere Altra Max 128-core ARMv8 processor that in turn supports 8-channel DDR4 ECC memory. The Thelio Astra can be configured with up to 512GB of system memory, choice of Ampere Altra processors, up to NVIDIA RTX 6000 Ada Generation graphics, dual 10 Gigabit Ethernet, and up to 16TB of PCIe 4.0 NVMe SSD storage. System76 designed the Thelio Astra ARM64 workstation to be complemented by NVIDIA graphics given the pervasiveness of NVIDIA GPUs/accelerators for artificial intelligence and machine learning workloads. The Astra is contained within System76’s custom-designed, in-house-manufactured Thelio chassis. Pricing on the System76 Thelio Astra will start out at $3,299 USD with the 64-core Ampere Altra Q64-22 processor, 2 x 32GB of ECC DDR4-3200 memory, 500GB NVMe SSD, and NVIDIA A400 graphics card. ↫ Michael Larabel This pricing is actually remarkably favourable considering the hardware you’re getting. System76 and its employees have been dropping hints for a while now that they were working on an ARM variant of their Thelio workstation, and knowing some of the prices others are asking, I definitely expected the base price to hit $5000, so this is a pleasant surprise. With the Altra processors getting a tiny bit long in the tooth, you do notice some oddities here, specifically the DDR4 RAM instead of the more modern DDR5, as well as the lack of PCIe 5.0. The problem is that while the Altra has a successor in the AmpereOne processor, its availability is quite limited, and most of them probably end up in datacentres and expensive servers for big tech companies. This newer variant does come with DDR5 and PCIe 5.0 support, but doesn’t yet have a lower core count version, so even if it were readily available, it might simply push the price up too far. Regardless, the Altra is still a ridiculously powerful processor, and at anywhere between 64 and 128 cores, it’s got power to spare. The Thelio Astra will be available come 12 November, and while I would perform a considerable number of eyebrow-raising acts to get my hands on one, it’s unlikely System76 will ship one over for a review. Edit: here’s an excellent and detailed reply to our Mastodon account from an owner of an Ampere Altra workstation, highlighting some of the challenges related to your choice of GPU. Required reading if you’re interested in a machine like this.
How a vehicle’s accident history influences insurance premiums holds the key to making a better decision about a car purchase. Higher accident-proneness translates to a higher premium; hence, the impact on the overall cost of owning and running the car can be enormous. This article explores the relationship between a vehicle’s accident history and its insurance premium, and underlines the importance of knowing that history before purchasing. Consumers must also determine how old accidents might impact the car’s ability to protect its occupants or last long enough without significant failures. Moreover, some insurance companies will refuse to insure such vehicles, or will charge higher deductibles, because of a history of high-severity crashes. Knowing about such indirect impacts can help consumers negotiate terms and avoid surprise financial liabilities.

How Accident History Affects Insurance Premiums

Insurance companies assess risk when they calculate premiums, and a car’s accident history reflects part of that risk: the more accidents a vehicle has been involved in, the higher the premium. A car that has been in several accidents may be considered high-risk, whereas a vehicle with no accident history will generally be quoted lower rates. Potential buyers should thoroughly research a vehicle’s history to make informed decisions. Resources like a Texas VIN Lookup can provide essential insights into whether the car has been involved in any accidents. By entering the Vehicle Identification Number (VIN) into a reliable service, buyers can access detailed reports that outline the car’s past, including any reported accidents and their severity.

The Role of Insurance Companies

Insurance companies consider a variety of factors when determining premiums. Understanding these factors can help consumers anticipate the costs of insuring a used vehicle.

The Importance of Vehicle History Reports

These reports contain important information about the vehicle’s past involvement in accidents, the status of its title, and other details that could affect insurance rates. Services like StatReport provide detailed VIN checks showing whether the vehicle has been involved in an accident and the resulting damage. The reports often include previous owners, service records, accident history, and more. This transparency allows prospective buyers to make informed decisions about the vehicles they are considering. Knowing a vehicle’s history helps buyers avoid cars with concealed problems that could cost them dearly in repairs. By looking at the report, they will also know whether the asking price is in line with the car’s book value.

Hidden Costs of Accident-Damaged Vehicles

Buying a car with an accident history may cost more to insure, and there may also be hidden costs beyond the premium.

Checking for Stolen Vehicles

In addition to examining accident history, checking whether the vehicle has been reported stolen is crucial. A stolen vehicle check is essential in ensuring you are not inadvertently purchasing a car with legal complications. Many VIN check services provide this information as part of their reports. Knowing whether a vehicle has been reported stolen can save you from potential legal issues and financial losses. If you buy a stolen vehicle unknowingly, you could lose both your investment and the car itself once law enforcement gets involved.
Understanding State-Specific Regulations

Each state has regulations regarding how accidents affect insurance rates and what information must be disclosed when selling a vehicle. Familiarising yourself with these regulations is vital for protecting your rights as a consumer. For example, some states require sellers to disclose any known accidents or damage when selling a used car. Understanding these laws can empower buyers to ask the right questions and demand transparency from sellers.

The Role of Professional Inspections

In addition to the vehicle history report, every used car should be taken for a professional inspection before purchase. A qualified mechanic might uncover issues from past accidents that would not be visible in a casual inspection. A careful examination might reveal poor repairs or structural damage that could impact safety and performance. This additional level of scrutiny safeguards buyers by ensuring they know what they are getting into before making any financial commitment.

Conclusion

A vehicle’s accident history has a significant relationship with insurance rates. While accidents push premiums up, their exact effect on insurance remains one of the most misunderstood aspects of vehicle ownership. Buyers need to understand how accident history affects a car’s premium when making a purchase decision. Texas VIN Lookup services, complete vehicle history reports, and the like come into play here, helping buyers avoid hidden costs and risks when buying a pre-owned or secondhand car. Accidents in a car’s history impact insurance rates, resale prices, and the cost of keeping it on the road. The confidence that comes with researching a vehicle before purchase, combined with a professional inspection, allows buyers to make choices that fit their financial goals and personal safety concerns. Knowledge is power: the more informed one is about these factors, the better a used vehicle purchase decision can be.
It’s no secret that a default Windows installation is… Hefty. In more ways than one, Windows is a bit on the obese side of the spectrum, from taking up a lot of disk space, to requiring hefty system requirements (artificial or not), to coming with a lot of stuff preinstalled not everyone wants to have to deal with. As such, there’s a huge cottage industry of applications, scripts, modified installers, custom ISOs, and more, that try to slim Windows down to a more manageable size. As it turns out, even Microsoft itself wants in on this action. The company that develops and sells Windows also provides a Windows debloat script. Over on GitHub, Microsoft maintains a repository of scripts to simplify setting up Windows as a development environment, and amid the collection of scripts we find RemoveDefaultApps.ps1, a PowerShell script to “Uninstall unnecessary applications that come with Windows out of the box”. The script is about two years old, and as such it includes a few applications no longer part of Windows, but looking through the list is a sad reminder of the kind of junk Windows comes with, most notably mobile casino games for children like Bubble Witch and March of Empires, but also other nonsense like the Mixed Reality Portal or Duolingo. It also removes something called “ActiproSoftwareLLC”, which is apparently a set of third-party, non-Microsoft UI controls for WPF? Which comes preinstalled with Windows sometimes? What is even happening over there? The entire set of scripts makes use of Chocolatey wrapped in Boxstarter, which is “a wrapper for Chocolatey and includes features like managing reboots for you”, because of course, the people at Microsoft working on Windows can’t be bothered to fix application management and required reboots themselves. Silly me, expecting Microsoft’s Windows developers to address these shortcomings internally instead of using third-party tools. The repository seems to be mostly defunct, but the fact it even exists in the first place is such a damning indictment of the state of Windows. People keep telling us Windows is fine, but if even Microsoft itself needs to resort to scripts and third-party tools to make it usable, I find it hard to take claims of Windows being fine seriously in any way, shape, or form.
In early 2022 I got several Sun SPARC servers for free off of a FreeCycle ad: I was recently called out for not providing any sort of update on those devices… so here we go! ↫ Sidneys1.com Some information on booting old-style SPARC machines, as well as pretty pictures. Nice palate-cleanser if you’ve had to deal with something unpleasant this weekend. This world would be a better place if we all had our own Sun machines to play with when we get sad.
I don’t think most people realize how Firefox and Safari depend on Google for more than “just” revenue from default search engine deals and prototyping new web platform features. Off the top of my head, Safari and Firefox use the following Chromium libraries: libwebrtc, libbrotli, libvpx, libwebp, some color management libraries, libjxl (Chromium may eventually contribute a Rust JPEG-XL implementation to Firefox; it’s a hard image format to implement!), much of Safari’s cryptography (from BoringSSL), Firefox’s 2D renderer (Skia)…the list goes on. Much of Firefox’s security overhaul in recent years (process isolation, site isolation, user namespace sandboxes, effort on building with ControlFlowIntegrity) is directly inspired by Chromium’s architecture. ↫ Rohan “Seirdy” Kumar Definitely an interesting angle on the browser debate I hadn’t really stopped to think about before. The argument is that while Chromium’s dominance is not exactly great, the other side of the coin is that non-Chromium browsers also make use of a lot of Chromium code all of us benefit from, and without Google doing that work, Mozilla would have to do it by themselves, and let’s face it, it’s not like they’re in a great position to do so. I’m not saying I buy the argument, but it’s an argument nonetheless. I honestly wouldn’t mind a slower development pace for the web, since I feel a lot of energy and development goes into things making the web worse, not better. Redirecting some of that development into things users of the web would benefit from seems like a win to me, and with the dominant web engine Chromium being run by an advertising company, we all know where their focus lies, and it ain’t on us as users. I’m still firmly on the side of less Chromium, please.
Google has gotten a bad reputation as of late for being a bit overzealous when it comes to fighting ad blockers. Most recently, it’s been spotted automatically turning off popular ad blocking extension uBlock Origin for some Google Chrome users. To a degree, that makes sense—Google makes its money off ads. But with malicious ads and data trackers all over the internet these days, users have legitimate reasons to want to block them. The uBlock Origin controversy is just one facet of a debate that goes back years, and it’s not isolated: your favorite ad blocker will likely be affected next. Here are the best ways to keep blocking ads now that Google is cracking down on ad blockers. ↫ Michelle Ehrhardt at LifeHacker Here’s the cold and harsh reality: ad blocking will become ever more difficult as time goes on. Not only is Google obviously fighting it, other browser makers will most likely follow suit. Microsoft is an advertising company, so Edge will follow suit in dropping Manifest v2 support. Apple is an advertising company, and will do whatever they can to make at least their own ads appear. Mozilla is an advertising company, too, now, and will continue to erode their users’ trust in favour of nebulous nonsense like privacy-respecting advertising in cooperation with Facebook. The best way to block ads is to move to blocking at the network level. Get a cheap computer or Raspberry Pi, set up Pi-Hole, and enjoy some of the best adblocking you’re ever going to get. It’s definitely more involved than just installing a browser extension, but it also happens to be much harder for advertising companies to combat. If you’re feeling generous, set up Pi-Holes for your parents, friends, and relatives. It’s worth it to make their browsing experience faster, safer, and more pleasant. And once again I’d like to reiterate that I have zero issues with anyone blocking the ads on OSNews. Your computer, your rules. It’s not like display ads are particularly profitable anyway, so I’d much rather you support us through Patreon or a one-time donation through Ko-Fi, which is a more direct way of ensuring OSNews continues to exist. Also note that the OSNews Matrix room – think IRC, but more modern, and fully end-to-end encrypted – is now up and running and accessible to all OSNews Patreons as well.
Something odd happened to Qualcomm’s Snapdragon Dev Kit, an $899 mini PC powered by Windows 11 and the company’s latest Snapdragon X Elite processor. Qualcomm decided to abruptly discontinue the product, refund all orders (including for those with units on hand), and cease its support, claiming the device “has not met our usual standards of excellence.” ↫ Taras Buria at Neowin The launch of the Snapdragon X Plus and Elite chips seems to have mostly progressed well, but there have been a few hiccups for those of us who want ARM but aren’t interested in Windows and/or laptops. There’s this story, which is just odd all around, with an announced, sold, and even shipped product suddenly taken off the market, which I think at this point was the only non-laptop device with an X Elite or Plus chip. If you are interested in developing for Qualcomm’s new platform, but don’t want a laptop, you’re out of luck for now. Another note is that the SoC SKU in the Dev Kit was clocked a tiny bit higher than the laptop SKUs, which perhaps plays a role in its cancellation. The bigger hiccup is the problematic Linux bring-up, which is posing many more problems and taking a lot longer than Qualcomm very publicly promised it would. For now, if you want to run Linux on a Snapdragon X Elite or Plus device, you’re going to need a custom version of your distribution of choice, tailored to a specific laptop model, using a custom kernel. It’s an absolute mess and basically means that at this point in time, months and months after release, buying one of these to run Linux on is a bad idea. Quite a few important bits will arrive with Linux 6.12 to supposedly greatly improve the experience, but seeing is believing. Qualcomm made a lot of grandiose promises about Linux support, and they simply haven’t delivered.
I want to take advantage of Go’s concurrency and parallelism for some of my upcoming projects, allowing for some serious number crunching capabilities. But what if I wanted EVEN MORE POWER?!? Enter SIMD, Single Instruction Multiple Data. SIMD instructions allow for parallel number crunching capabilities right down at the hardware level. Many programming languages either have compiler optimizations that use SIMD or libraries that offer SIMD support. However, (as far as I can tell) Go’s compiler does not utilize SIMD, and I could not find a general purpose SIMD package that I liked. I just want a package that offers a thin abstraction layer over arithmetic and bitwise SIMD operations. So like any good programmer I decided to slightly reinvent the wheel and write my very own SIMD package. How hard could it be? After doing some preliminary research I discovered that Go uses its own internal assembly language called Plan9. I consider it more of an assembly format than its own language. Plan9 uses the target platform’s instructions and registers with slight modifications to their names and usage. This means that x86 Plan9 is different than, say, ARM Plan9. Overall, pretty weird stuff. I am not sure why the Go team went down this route. Maybe it simplifies the compiler by having this bespoke assembly format? ↫ Jacob Ray Pehringer Another case of light reading for the weekend. Even as a non-programmer I learned some interesting things from this one, and it created some appreciation for Go, even if I don’t fully grasp things like this. On top of that, at least a few of you will think this has to do with Plan9 the operating system, which I find a mildly entertaining ruse to subject you to.
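For a concrete flavour of what this looks like in practice, here’s a minimal sketch of how a Go package can expose a single SSE instruction through Plan9-style assembly on amd64. The addFloat32x4 name and the file layout are illustrative assumptions, not taken from the linked package.

```go
// simd.go
package simd

// addFloat32x4 adds the four float32 values in b into a, element-wise.
// The body lives in simd_amd64.s and compiles down to one packed SSE add.
//
//go:noescape
func addFloat32x4(a, b *[4]float32)
```

```
// simd_amd64.s
#include "textflag.h"

// func addFloat32x4(a, b *[4]float32)
TEXT ·addFloat32x4(SB), NOSPLIT, $0-16
	MOVQ   a+0(FP), AX    // AX = pointer to a
	MOVQ   b+8(FP), BX    // BX = pointer to b
	MOVUPS (AX), X0       // X0 = a[0..3] (unaligned 128-bit load)
	MOVUPS (BX), X1       // X1 = b[0..3]
	ADDPS  X1, X0         // four float32 additions in a single instruction
	MOVUPS X0, (AX)       // write the result back into a
	RET
```

Calling addFloat32x4(&a, &b) from ordinary Go code then performs all four additions at once, which is exactly the kind of thin abstraction layer the author describes. Note how the Plan9 mnemonics (MOVQ, MOVUPS, ADDPS) mirror the underlying x86 instructions, but with Go’s own register naming and operand order.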
We’ve pulled together all kinds of resources to create a comprehensive guide to installing and upgrading to Windows 11. This includes advice and some step-by-step instructions for turning on officially required features like your TPM and Secure Boot, as well as official and unofficial ways to skirt the system-requirement checks on “unsupported” PCs, because Microsoft is not your parent and therefore cannot tell you what to do. There are some changes in the 24H2 update that will keep you from running it on every ancient system that could run Windows 10, and there are new hardware requirements for some of the operating system’s new generative AI features. We’ve updated our guide with everything you need to know. ↫ Andrew Cunningham at Ars Technica In the before time, the things you needed to do to make Windows somewhat usable mostly came down to installing applications replicating features other operating systems had been enjoying for decades, but as time went on and Windows 10 came out, users now also had to deal with disabling a ton of telemetry, deleting preinstalled adware, dodging the various dark patterns around Edge, and more. You have to wonder if it was all worth it, but alas, Windows 10 at least looked like Windows, if you squinted. With Windows 11, Microsoft really ramped up the steps users have to take to make it usable. There’s all of the above, but now you also have to deal with an ever-increasing number of ads, even more upsells and Edge dark patterns, even more data gathering, and the various hacks you have to employ to install it on perfectly fine and capable hardware. With Windows 10’s support ending next year, a lot of users are in a rough spot, since they can’t install Windows 11 without resorting to hacks, and they can’t keep using Windows 10 if they want to keep getting updates. And here comes 24H2, which makes it all even worse. Not only have various avenues to make Windows 11 installable on capable hardware been closed, it also piles on a whole bunch of “AI” garbage, with accompanying upsells and dark patterns Windows users are going to have to deal with. Who doesn’t want Copilot regurgitating nonsense in their operating system’s search tool, or Paint strongly suggesting it will “improve” your quick doodle to illustrate something to a friend with that unique AI Style™ we all love and enjoy so much? Stay strong out there, Windows folks. Maybe it’ll get better. We’re rooting for you.
If you read my previous article on DOS memory models, you may have dismissed everything I wrote as “legacy cruft from the 1990s that nobody cares about any longer”. After all, computers have evolved from sporting 8-bit processors to 64-bit processors and, on the way, the amount of memory that these computers can leverage has grown orders of magnitude: the 8086, a 16-bit machine with a 20-bit address space, could only use 1MB of memory while today’s 64-bit machines can theoretically access 16EB. All of this growth has been in service of ever-growing programs. But… even if programs are now more sophisticated than they were before, do they all really require access to a 64-bit address space? Has the growth from 8 to 64 bits been a net positive in performance terms? Let’s try to answer those questions to find some very surprising answers. But first, some theory. ↫ Julio Merino It’s not quite the weekend yet, but I’m still calling this some light reading for the weekend.
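As a quick sanity check on the numbers in the quote (reading the quoted 1MB and 16EB as binary units):

\[
2^{20}\ \text{bytes} = 1\ \text{MiB}, \qquad 2^{64}\ \text{bytes} = 2^{4} \times 2^{60}\ \text{bytes} = 16\ \text{EiB},
\]

a jump of \(2^{44}\) (roughly \(1.8 \times 10^{13}\)) between the 8086’s address space and what a 64-bit machine can theoretically reach.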
Android 15 started rolling out to Pixel devices Tuesday and will arrive, through various third-party efforts, on other Android devices at some point. There is always a bunch of little changes to discover in an Android release, whether by reading, poking around, or letting your phone show you 25 new things after it restarts. In Android 15, some of the most notable involve making your device less appealing to snoops and thieves and more secure against the kids to whom you hand your phone to keep them quiet at dinner. There are also smart fixes for screen sharing, OTP codes, and cellular hacking prevention, but details about them are spread across Google’s own docs and blogs and various news sites’ reports. ↫ Kevin Purdy at Ars Technica It’s a welcome collection of changes and features to better align Android’s theft and personal privacy protection with how thieves steal phones in this day and age. I’m not sure I understand all of them, though – the Private Space, where you can drop applications to lock them behind an additional pin code, confuses me, since everyone can see it’s there. I assumed Private Space would also give people in vulnerable positions – victims of abuse, journalists, dissidents, etc. – the option to truly hide parts of their life to protect their safety, but it doesn’t seem to work that way. Android 15 will also use “AI” to recognise when a device is yanked out of your hands and lock it instantly, which is a great use case for “AI” that actually benefits people. Of course, it will be even more useful once thieves are aware this feature exists, so that they won’t even try to steal your phone in the first place, but since this is Android, it’ll be a while before Android 15 makes its way to enough users for it to matter.
Earlier this year we talked about Huawei’s HarmonyOS NEXT, which is most likely the only serious competitor to Android and iOS in the world. HarmonyOS started out as a mere Android skin, but over time Huawei invested heavily into the platform to expand it into a full-blown, custom operating system with a custom programming language, and it seems the company is finally ready to take the plunge and release HarmonyOS NEXT into the wild. It’s indicated that HarmonyOS made up 17% of China’s smartphone market in Q1 of 2024. That’s a significant amount of potential devices breaking off from Android in a market dominated by either it or iOS. HarmonyOS NEXT is set to begin rolling out to Huawei devices next week. The OS will first come to the Mate 60, Mate X5, and MatePad Pro on October 15. ↫ Andrew Romero at 9To5Google Huawei has been hard at work making sure there’s no ‘application gap’ for people using HarmonyOS NEXT, claiming it has 10,000 applications ready to go that cover “99.9%” of their users’ use cases. That’s quite impressive, but of course, we’ll have to wait and see if the numbers line up with the reality on the ground for Chinese consumers. Here in the west, HarmonyOS NEXT is unlikely to gain any serious traction, but that doesn’t mean I would mind taking a look at the platform if at all possible. It’s honestly not surprising the most serious attempt at creating a third mobile ecosystem is coming from China, because here in the west the market is so grossly rusted shut we’re going to be stuck with Android and iOS until the day I die.
Engineers at Google started work on a new Terminal app for Android a couple of weeks ago. This Terminal app is part of the Android Virtualization Framework (AVF) and contains a WebView that connects to a Linux virtual machine via a local IP address, allowing you to run Linux commands from the Android host. Initially, you had to manually enable this Terminal app using a shell command and then configure the Linux VM yourself. However, in recent days, Google began work on integrating the Terminal app into Android as well as turning it into an all-in-one app for running a Linux distro in a VM. ↫ Mishaal Rahman at Android Authority There already are a variety of ways to do this today, but having it as a supported feature implemented by Google is very welcome. This is also going to greatly increase the number of spammy articles and lazy YouTube videos telling you how to “run Ubuntu on your phone”, which I’m not particularly looking forward to.
Next up in my backlog of news to cover: the US Department of Justice’s proposed remedies for Google’s monopolistic abuse. Now that Judge Amit Mehta has found Google is a monopolist, lawyers for the Department of Justice have begun proposing solutions to correct the company’s illegal behavior and restore competition to the market for search engines. In a new 32-page filing (included below), they said they are considering both “behavioral and structural remedies.” That covers everything from applying a consent decree to keep an eye on the company’s behavior to forcing it to sell off parts of its business, such as Chrome, Android, or Google Play. ↫ Richard Lawler at The Verge While I think it would be a great idea to break Google up, such an action taken in a vacuum seems to be rather pointless. Say Google is forced to spin off Android into a separate company – how is that relatively small Android, Inc. going to compete with the behemoth that is Apple and its iOS, to which such restrictions do not apply? How is Chrome Ltd. going to survive Microsoft’s continued attempts at forcing Edge down our collective throats? Being a dedicated browser maker is working out great for Firefox, right? This is the problem with piecemeal, retroactive measures to try and “correct” a market position that you have known for years is being abused – sure, this would knock Google down a peg, but other, even larger megacorporations like Apple or Microsoft will be the ones to benefit most, not any possible new companies or startups. This is exactly why a market-wide, equally-applied set of rules and regulations, like the European Union’s Digital Markets Act, is a far better and more sustainable approach. Unless similar remedies are applied to Google’s massive competitors, these Google-specific remedies will most likely only make things worse, not better, for the American consumer.
Internet Archive’s “The Wayback Machine” has suffered a data breach after a threat actor compromised the website and stole a user authentication database containing 31 million unique records. News of the breach began circulating Wednesday afternoon after visitors to archive.org began seeing a JavaScript alert created by the hacker, stating that the Internet Archive was breached. “Have you ever felt like the Internet Archive runs on sticks and is constantly on the verge of suffering a catastrophic security breach? It just happened. See 31 million of you on HIBP!,” reads a JavaScript alert shown on the compromised archive.org site. ↫ Lawrence Abrams at Bleeping Computer To make matters worse, the Internet Archive was also suffering from waves of distributed denial-of-service attacks, forcing the IA to take down the site while strengthening everything up. It seems the attackers have no real motivation other than the fact that they can, but it’s interesting, shall we say, that the Internet Archive has been under legal assault by big publishers for years now, too. I highly doubt the two are related in any way, but it’s an interesting note nonetheless. I’m still catching up on all the various tech news stories, but this one was hard to miss. A lot of people are rightfully angry and dismayed about this, since attacking the Internet Archive like this kind of feels like throwing Molotov cocktails at a local library – there’s literally not a single reason to do so, and the only people you’re going to hurt are underpaid librarians and chill people who just want to read some books. Whoever is behind this, they’re just assholes, no ifs or buts about it.
I finally seem to be recovering from a nasty flu that is now wreaking havoc all across my tiny Arctic town – better now than when we hit -40 I guess – so let’s talk about something that’s not going to recover because it actually just fucking died: Windows 7. For nearly everyone, support for Windows 7 ended on January 14th, 2020. However, if you were a business who needed more time to migrate off of it because your CEO didn’t listen to the begging and pleading IT department until a week before the deadline, Microsoft did have an option for you. Businesses could pay to get up to 3 years of extra security updates. This pushes the EOL date for Windows 7 to January 10th, 2023. Okay but that’s still nearly 2 years earlier than October 8th, 2024? ↫ The Cool Blog I’d like to solve the puzzle! It’s POSReady, isn’t it? Of course it is! Windows Embedded POSReady’s support finally ended a few days ago, and this means that for all intents and purposes, Windows 7 is well and truly dead. In case you happen to be a paleontologist, think of Windows Embedded POSReady adding an extra two years of support to Windows 7 as the mammoths who managed to survive on Wrangel Island until as late as only 4000 years ago. Windows 7 was one of the good ones, for sure, and all else being equal, I’d choose it over any of the releases that came after. It feels like Windows 7 was the last release designed primarily for users of the Windows platform, whereas later releases were designed more to nickel-and-dime people with services, ads, and upsells that greatly cheapened the operating system. I doubt we’ll ever see such a return to form again, so Windows 7 might as well be the last truly beloved Windows release. If you’re still using Windows 7 – please don’t, unless you’re doing it for the retrocomputing thrill. I know Windows 8, 10, and 11 are scary, and as much as it pains me to say this, you’re better off with 10 or 11 at this point, if only for security concerns.
Sometimes I have the following problem to deal with: An OS/2 system uses NetBIOS over TCP/IP (aka TCPBEUI) and should communicate with an SMB server (likewise using TCPBEUI) on a different subnet. This does not work on OS/2 out of the box without a little bit of help. ↫ Michal Necasek My 40°C fever certainly isn’t helping, but this goes way over my head. Still, it seems like an invaluable article for a small group of people, and anyone playing with OS/2 and networking from here on out can refer back to this excellent and detailed explanation.
In today’s world, everything is turning digital: manufacturing, retail, and agriculture. The global digital transformation market is set to reach a worth of $1,009.8 billion by 2025, according to a report from Grand View Research, and this is one of the many reasons why technology has become the go-to method for streamlining operations, creating efficiency, and unlocking new possibilities. Development teams, specialised groups of tech talent, are at the heart of this transformation, moving material digitisation forward. Their influence is felt across many industries, redefining how firms approach innovation, sustainability, and customer interaction.

The Role of Dedicated Development Teams in Material Digitization

The consistency, expertise, and focus that dedicated development teams bring often provide the impetus needed to tackle the complexities of material digitisation in depth. It is not all about coding; it is about teams made up of project managers, analysts, engineers, and designers who integrate digital technologies into material handling and processing.

Why a Dedicated Team?

Choosing a dedicated team model for digitisation projects offers several advantages.

Driving Innovation and Efficiency

Dedicated development teams have been making revolutionary contributions to material digitisation. They digitise conventional materials and, in the process, create completely new avenues for innovation and efficiency in handling them.

Navigating Challenges Together

Of course, material digitisation comes with its problems. Data security, integration with existing systems, and guaranteeing true-to-life digital representations of materials are difficulties most dedicated development teams face. Partnering with an IT outstaffing company can add skill and teamwork, helping to overcome these setbacks.

Overcoming Data Security Concerns

Among the most critical issues in any digitisation project is data security. Dedicated teams build solid protection measures, including encryption and secure access controls for digital materials. Additionally, regular security audits and updates are needed to locate weaknesses that emerging threats could exploit. By prioritising data security, organisations earn user trust and ensure their services comply with regulatory standards.

Seamless Integration With Existing Systems

Similarly, dedicated teams work on seamlessly integrating digital materials with existing systems so that they can be put to practical use. In most cases, this demands bespoke API development or middleware solutions that keep data flowing smoothly across platforms. Rigorous testing and validation are required to establish that all systems communicate effectively and that data integrity is not compromised. Here, integration means increased productivity and an enhanced ability for users to apply digital resources effectively.

The Multifaceted Benefits of Material Digitization

The impact of dedicated development teams on material digitisation reaches well beyond operational efficiencies, driving it toward sustainability and personalisation.

Sustainability Through Digitization

By digitising materials, companies can reduce waste and optimise resources. For example, digital inventory systems prevent overproduction and excess inventory through efficient demand forecasting. This helps not only the environment but also the company’s bottom line.
Moreover, real-time data analytics enable organisations to make more informed decisions and respond promptly to changes in markets and industries. Sustainable practices also enable companies to remain competitive in their respective industries.

Enhancing Customer Engagement

Material digitisation also opens up several new opportunities related to customer experiences. Immersive experiences offered by VR and AR let customers try out a product virtually before buying it. Not only does this improve the buying experience, it also helps develop a better brand relationship. Moreover, personalised experiences can be built around user preferences, making customers feel unique and understood. Hence, businesses can build customer loyalty and encourage repeat purchases by offering memorable and unique interactions.

The Road Ahead: Collaborating for a Digitized Future

Material digitisation is an ongoing journey full of potential and challenges. As companies continue their exploration, the role of dedicated development teams will become ever more important. Specialised teams are not simple service providers but strategic partners in innovation that help businesses navigate the complexities of the digital landscape.

A Collaborative Ecosystem

The digitisation of materials needs an ecosystem approach in which businesses, developers, and even end users work together. Encouraging open communication, feedback, and co-innovation leads to more practical digitisation solutions. Partnerships across different sectors let stakeholders draw on diverse experience and insight for continuous improvement. This collaborative approach accelerates the development of new technologies and ensures solutions fit real user needs.

Staying Ahead of the Curve

Keeping one’s head above water in a continuously changing digital world is only possible with continuous learning and adaptation. Development teams should continually explore new technologies, methodologies, and practices to ensure that the digitisation of materials not only meets current needs but also addresses future trends and opportunities. This allows teams to be proactive in introducing innovative solutions that maximise efficiency and improve the user experience. With a culture of continuous improvement, organisations can lead their industries and be prepared for whatever complications arise from the ever-changing digital landscape.

Conclusion

The influence of dedicated development teams on material digitisation runs deep and wide. Committed to expertise, innovation, and a forward-looking perspective, they are helping industries across the value chain unlock new potential, efficiency, and sustainability while making the customer experience more engaging. No doubt this collaboration between teams and businesses will form a cornerstone of the digital transformation journey, and of the way we interact with materials in our everyday lives.
Entirely coincidentally, the KDE team released Plasma 6.2 yesterday, the latest release in the well-received 6.x series. As the version number implies, it’s not a groundbreaking release, but it does contain a number of improvements that are very welcome to a few specific, often underserved groups. For instance, 6.2 overhauls the Accessibility settings panel, and adds, among other things, colourblindness filters for a variety of types of colourblindness. This condition affects roughly 8% of men (and a far smaller percentage of women), so it’s an important new feature. Another group of people served by Plasma 6.2 are artists. Plasma 6.2 includes a smorgasbord of new features for users of drawing tablets. Open System Settings and look for Drawing Tablet to see various tools for configuring drawing tablets. New in Plasma 6.2: a tablet calibration wizard and test mode; a feature to define the area of the screen that your tablet covers (the whole screen or a section); and the option to re-bind pen buttons to different kinds of mouse clicks. ↫ KDE Plasma 6.2 release announcement Artists and regular users alike can now also enjoy better colour management, more complete HDR support, a tone-mapping feature in KWin, and much more. Power management has been improved as well, so you can now manage brightness per individual monitor, control which applications block the system from going to sleep, and so on. There’s also the usual array of bug fixes, UI tweaks, and so on. Plasma 6.2 is already available in at least Fedora and openSUSE, and it will find its way to your distribution soon enough, too.