MicroPythonOS is a lightweight, fast, and versatile operating system designed to run on microcontrollers like the ESP32 and desktop systems. With a modern Android-like touch screen UI, App Store, and Over-The-Air updates, it’s the perfect OS for innovators and developers. ↫ MicroPythonOS’ website It’s quite neat to see this running in such a constrained environment, especially considering it comes with a graphical user interface, some basic applications, and niceties like OTA updates and an application repository. As the name implies, MicroPythonOS uses native MicroPython for application and driver development, making cross-platform portability from microcontrollers to regular PCs a possibility. It’s built on the MicroPython runtime, with LVGL for graphics, packaged by the lvgl_micropython project. It’s still relatively early in development, but it’s completely open source, so anyone can help out and improve the project. I’m personally not too well-versed in the world of microcontrollers like the popular ESP32, so I’m not entirely sure just how capable other operating systems and platforms built on top of it are. This particular operating system seems to make it rather easy and straightforward for anyone to build and distribute an application for such microcontrollers, to the point where even an idiot like myself could relatively easily buy, say, an ESP32 kit with a display and assemble my own collection of small applications. To repeat myself: it simply looks neat.
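For a taste of what development on this stack looks like, here’s a rough sketch of a minimal LVGL app in MicroPython. This assumes the v8-style lvgl_micropython binding and that the OS has already initialised the display and touch drivers; MicroPythonOS’ own app API may differ:

```python
# Minimal LVGL "hello world" in MicroPython (v8-style binding API;
# display and touch driver setup is assumed to be handled by the OS).
import lvgl as lv

lv.init()
scr = lv.obj()                        # create a bare screen object
label = lv.label(scr)                 # attach a label widget to the screen
label.set_text("Hello from MicroPython")
label.center()                        # center the label on its parent
lv.scr_load(scr)                      # make this screen the active one
```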
It was good while it lasted, I guess. Arduino will retain its independent brand, tools, and mission, while continuing to support a wide range of microcontrollers and microprocessors from multiple semiconductor providers as it enters this next chapter within the Qualcomm family. Following this acquisition, the 33M+ active users in the Arduino community will gain access to Qualcomm Technologies’ powerful technology stack and global reach. Entrepreneurs, businesses, tech professionals, students, educators, and hobbyists will be empowered to rapidly prototype and test new solutions, with a clear path to commercialization supported by Qualcomm Technologies’ advanced technologies and extensive partner ecosystem. ↫ Qualcomm’s press release Qualcomm’s track record when it comes to community engagement, open source, and long-term support is absolutely atrocious, and there’s no way Arduino will be able to withstand the pressures from management. We’ve seen this exact story play out a million times; it always begins with lofty promises, and always ends with all of them being broken. I have absolutely zero faith Arduino will be able to continue to do its thing like it has. Arduino devices are incredibly popular, and it makes sense for Qualcomm to acquire the company. If I were using Arduinos for my open source projects, I’d be a bit on edge right now.
Bradford Morgan White has published an excellent retrospective of QNX, the realtime microkernel operating system focused on embedded use cases. The final paragraph made me sad, though. QNX is a fascinating operating system. It was extremely well designed from the start, and while it has been rewritten, the core ideas that allowed it to survive for 45 years persist to this day. While I am sad that Photon was deprecated, the reasoning is sound. Most vendors using QNX either do not require a GUI, or they implement their own. For example, while Boston Dynamics uses QNX in their robots, they don’t really need Photon, and neither do SpaceX’s Falcon rockets. While cars certainly have displays, most vehicle makers desire their screen interfaces to have a unique look and feel. Of course, just stating these use cases of robots, rockets, and cars speaks to the incredible reliability and versatility of QNX. Better operating systems are possible, and QNX proves it. ↫ Bradford Morgan White at Abort Retry Fail Way back in 2004, before I even joined OSNews properly, I wrote about QNX as a desktop operating system, because back then I went through a short stint where I used QNX and its amazing Photon MicroGUI as my primary desktop. Back then, there was a short-lived but very enthusiastic community using QNX on desktops, sharing tips and findings, supported by one or two QNX employees who tried their best to support this fledgling community in the face of corporate indifference. Eventually, these QNX employees left the company, and QNX started making it clearer than ever that they were not, in any way, interested in people using QNX on desktops, and in all honesty, they were most likely correct. However, I still think we had something special there, and had QNX’ management decided to help us out, it could’ve grown into something more sustainable. An open source QNX and Photon could’ve had an impact. Using QNX on the desktop back then was much easier than you might imagine, with graphical package managers, capable browsers and email clients, a massive pile of open source packages, pretty great performance, and little to no need to ever leave the GUI and use a CLI. If your hardware was properly supported, you could have a great experience. One of the very small “what-ifs” from the early 2000s.
Can these months please stop passing us by this quickly? It seems we’re getting a monthly Redox update every other week now, and that’s not right. Anyway, what have the people behind this Rust-based operating system been up to this past month? One of the biggest changes this month is that Redox is now multithreaded by default, at least on x86 machines. Unsurprisingly, this can enable some serious performance gains. Also contributing to performance improvements this month is inode data inlining for small files, and the installation is now a lot faster too. LZ4 compression has been added to Redox, saving storage space and improving performance. As far as ports go, there’s a ton of new and improved ports, like OpenSSH, Nginx, PHP, Neovim, OpenSSL 3.x, and more. On top of that, there’s a long list of low-level kernel improvements, driver changes, and relibc improvements, changes to the main website, and so on.
Every single “vibe coding is the future,” “the power of AI,” and “AI job loss” story written perpetuates a myth that will only lead to more regular people getting hurt when the bubble bursts. Every article written about OpenAI or NVIDIA or Oracle that doesn’t explicitly state that the money doesn’t exist, that the revenues are impossible, that one of the companies involved burns billions of dollars and has no path to profitability, is an act of irresponsible make believe and mythos. ↫ Edward Zitron The numbers are clear. People aren’t paying for “AI”, and those that do are using up way more resources than they’re actually paying for. The profits required to make all of this work just aren’t realistic in any way, shape, or form. The money being pumped around doesn’t even exist. It’s a scam of such utterly massive proportions, it’s easier for many of us to just assume it can’t possibly be one. Too big to fail? Too many promises to be a scam. It’s going to be a bloodbath, but as usual when the finance and tech bros scam entire sectors, it’s us normal folk who will be left to foot the bill. Let’s blame immigrants some more while we implement harsh austerity measures to bail out the billionaire class. Again.
Your lovely host, late last night: Google claims they won’t be sharing developer information with governments, but we all know that’s a load of bullshit, made all the more relevant after whatever the fuck this was. If you want to oppose the genocide in Gaza or warn people of ICE raids, and want to create an Android application to coordinate such efforts, you probably should not, and stick to more anonymous organising tools. ↫ Thom Holwerda Let’s check in with how that other walled garden Google is trying to emulate is doing. Apple has removed ICEBlock, an app that allowed users to monitor and report the location of immigration enforcement officers, from the App Store. “We created the App Store to be a safe and trusted place to discover apps,” Apple said in a statement to Business Insider. “Based on information we’ve received from law enforcement about the safety risks associated with ICEBlock, we have removed it and similar apps from the App Store.” ↫ Katherine Tangalakis-Lippert, Peter Kafka, and Kwan Wei Kevin Tan for Business Insider Oh. Apple and Google are but mere extensions of the state apparatus. Think twice about what device you bring with you the next time you wish to protest your government’s actions.
Google has been on a bit of a marketing blitz to try and counteract some of the negative feedback following its new developer verification requirement for Android applications, and while they’re using a lot of words, none of them seem to address the core concerns. It basically comes down to the fact that they just don’t care about the consequences this new requirement has for projects like F-Droid, nor are they really bothered by any of the legitimate privacy concerns this whole thing raises. If this new requirement is implemented as proposed, F-Droid will simply not be able to continue to exist in its current form. F-Droid builds the applications in its repository itself and signs them, and developer verification does not fit into that picture at all. F-Droid works this way to ensure its applications are built from the publicly available sources, so developers can’t sneak anything nefarious into binaries they would otherwise be submitting themselves. The privacy angle doesn’t seem to bother Google much, either, which shouldn’t be a surprise to anyone. With this new requirement, Android application developers can simply no longer be anonymous, which has a variety of side-effects, not least of which is that anyone developing applications for, say, dissidents can now no longer be anonymous. Google claims they won’t be sharing developer information with governments, but we all know that’s a load of bullshit, made all the more relevant after whatever the fuck this was. If you want to oppose the genocide in Gaza or warn people of ICE raids, and want to create an Android application to coordinate such efforts, you probably should not, and should stick to more anonymous organising tools. Students and hobbyists are getting the short end of the stick, too, as Google’s promised program specifically for these two groups is incredibly limited. Yes, it waives the $25 fee, but that’s about the only positive here: Developers who register with Google as a student or hobbyist will face severe app distribution restrictions, namely a limit on the number of devices that can install their apps. To enforce this, any user wanting to install software from these developers must first retrieve a unique identifier from their device. The developer then has to input this identifier into the Android Developer Console to authorize that specific device for installation. ↫ Mishaal Rahman at Android Authority Google does waive the requirement for developer certification for one particular type of user, and in doing so, highlights the only group of users Google truly cares about: enterprise users. Any application installed by an enterprise on managed devices will not need to have its developer certified. Google states that in this particular use case, the enterprise’s IT department is responsible for any security issues that may arise. Isn’t it funny how the only group of users who won’t have to deal with this nonsense are companies who pay Google tons of money for their enterprise tools? The only way we’re going to get out of this is if governments step up and put a stop to it. We can safely assume the United States’ government won’t be on our side – they’re too busy with their recurring idiotic song-and-dance anyway – so our only hope is the European Commission stepping in, but I’m not holding my breath. After all, Apple’s rules and regulations regarding installing applications outside of the App Store in the EU are not that different from what Google is going to do.
While the EU is not happy with the details of Apple’s rules, it seems to be okay with their general gist. I’m afraid governments won’t be stepping in to stop this one.
And here we have yet another case of the EU’s consumer protection legislation working in our favour. Dutch privacy and consumer rights organisation Bits of Freedom sued Facebook over the company’s little trick of disregarding a user’s settings under a variety of circumstances, such as when a user opts for a chronological, non-profiled timeline, only to have Facebook reset itself to the profiled timeline upon a restart. The judge states that Meta is indeed acting in violation of the law. He says that “a non‑persistent choice option for a recommendation system runs counter to the purpose of the DSA, which is to give users genuine autonomy, freedom of choice, and control over how information is presented to them.” The judge also concludes that the way Meta has designed its platforms constitutes “a significant disruption of the autonomy of Facebook and Instagram users.” The judge orders Meta to adjust its apps so that the user’s choice is preserved, even when the user navigates to another section or restarts the app. ↫ Bits of Freedom press release This is good news, of course, but I really wish we would take this a step further: a complete ban on targeted advertising and timeline manipulation based on harvested user data. I just don’t believe these business models and ragebait machines offer anything of value to society, and in fact, they do far more harm than good. I am convinced that our world would be a better place without these business models. We restrict or outright ban dangerous substances or activities all the time. This should be among them.
With Google closing up Android at a rapid pace, there’s some renewed interest in mobile platforms that aren’t either iOS or Android, and one of those is Ubuntu Touch. It’s been steadily improving over the years under the stewardship of the UBports Foundation, and today they released Ubuntu Touch 24.04-1.0. Ubuntu Touch 24.04-1.0 is the first release of Ubuntu Touch which is based on Ubuntu 24.04 LTS, a major upgrade from Ubuntu 20.04. This might not be as big compared to our last upgrade from Ubuntu 16.04 to 20.04, but this still brings newer software stack to Ubuntu Touch (such as Qt 5.15). ↫ Ubuntu Touch 24.04-1.0 release announcement In this release, aside from the upgrade to Ubuntu 24.04 LTS, there’s now also a light mode for the shell, including experimental support for switching themes on the fly. Applications have already supported a light theme since previous releases, so adding support for it in the main shell is a welcome improvement. We’ve also got experimental support for encrypting personal data, which needs to be enabled per device; I think this indicates that not all devices support it. On top of that, there are some changes to the phone application, and a slew of smaller fixes and improvements as well. The list of supported devices has grown too, with the Fairphone 5 as the newcomer this release. The list is still relatively small, but to be fair to the project, it includes a number of popular devices, as well as a few that are still readily available. If you want to opt for running Ubuntu Touch as your smartphone platform, there are definitely plenty of devices to choose from.
Microsoft is reorganising the Windows teams. Again. For those unaware, the Windows organization has essentially been split in two since 2018. Teams that work on the core of Windows were moved under Azure, and the rest of the Windows team (those that focused on top level features and user experiences) remained under the Windows org. That is finally changing, with Davuluri saying that the Windows client and server teams are now going to operate under the same roof once again. “This change unifies Windows engineering work under a single organization … Moving the teams working on Windows client and server together into one organization brings focus to delivering against our priorities.” ↫ Zac Bowden at Windows Central I mean, it’s obviously far too simplistic to attribute Windows’ many user-facing problems and failures to something as simple as this particular organisational split, but it sure does feel like it could be a contributing factor. It seems like the core of Windows is mostly fine and working pretty well, while the user experience is the area that has suffered greatly in recent years, pressured as the Windows team seems to have been to add advertising, monetisation, tons of sometimes dangerous dark patterns, and more. I hope that bringing these two teams back together will eventually lead to an overall improvement of the Windows user experience, and not a deterioration of the core of the platform. In other words, that the core team lifts up the user experience team, instead of the user experience team dragging the core team down. A Windows that takes its users seriously and respects them could be a fine operating system to use, but reorganisations like this take a long time to have any measurable effect. Of course, it could also just have no effect at all, or perhaps the rot has simply spread too far and wide. In a few years, depressing as it may seem, Windows 11 might be regarded as a highlight.
This article is intended to be a comprehensive guide to writing your first GNOME app in Lua using LuaGObject. The article assumes that you already understand Lua and want to get started with building beautiful native applications for GNOME. I also assume you know how to use a command line to install and compile software. Having some knowledge of the C programming language, as well as the Make, Gettext, and Flatpak software will be helpful, but shouldn’t be required to understand this guide. ↫ Victoria Lacroix Exactly what it says on the tin.
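LuaGObject sits on GObject Introspection, the same machinery PyGObject uses for Python, so a GNOME app has the same basic shape in either language. For flavour, here’s the minimal Python equivalent of such an app (the application id is a made-up example):

```python
import sys

import gi
gi.require_version("Gtk", "4.0")
gi.require_version("Adw", "1")
from gi.repository import Adw, Gtk


class HelloApp(Adw.Application):
    def __init__(self):
        # The application id is a hypothetical example; use your own
        # reverse-DNS name in a real app.
        super().__init__(application_id="org.example.Hello")

    def do_activate(self):
        # Build a window owned by the application and show a label in it.
        win = Adw.ApplicationWindow(application=self, title="Hello")
        win.set_content(Gtk.Label(label="Hello, GNOME!"))
        win.present()


sys.exit(HelloApp().run(sys.argv))
```

The Lua version in the guide maps almost one-to-one onto this structure, since both bindings generate their APIs from the same introspection data.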
Many Outlook users encounter the error message “PST file is not an Outlook data file” while trying to open or import a PST file into the application on another system. It is obvious from the error message that Outlook cannot correctly read the file. This happens if there is corruption in the PST file, or due to some other internal or external factors. In this guide, you’ll find the possible reasons for this error and the solutions to resolve it without any hassle.

Reasons behind the ‘PST File is not an Outlook Data File’ Error

There are several reasons that may cause this error, such as:

Corrupted or damaged PST file: There is a high chance that your PST file is corrupted, due to which Outlook is unable to read the file and throws this error message.

PST file is read-only: If your PST file’s attribute is set to read-only, then Outlook fails to make any changes or modifications to the file, resulting in the error.

Compatibility issues: Compatibility issues can also prevent you from opening or importing a PST file, for example when trying to open a PST file created in a higher Outlook version in a lower one.

Oversized PST file: Too large or oversized PST files are difficult to open or import. You may face issues or errors when importing such large files into Outlook.

Solutions to Resolve the ‘PST File is not an Outlook Data File’ Error

Depending on the cause of the error, you can follow the solutions below.

1. Check the attributes of the PST file

It might happen that the PST file’s attributes are set to read-only, which would explain the error when accessing the file. The solution is simple: verify the file’s attributes and change them if required. Right-click the PST file and select Properties. In the Properties window, ensure that the Read-Only and Hidden attributes are not selected; if they are, uncheck them and click OK. Now try to access the PST file and check whether the issue is resolved. (A small script for clearing the read-only attribute is sketched at the end of this article.)

2. Update your Outlook application

An outdated Outlook application may develop bugs or issues which can prevent you from performing certain operations. If this is the case, updating the application can help fix the issue: in Outlook, go to File > Office Account > Update Options and select Update Now. Sometimes updates are managed by your IT administrator, in which case the ‘Update Now’ option is unavailable or disabled in your account. You can still update manually by reinstalling the latest version of Outlook; once Outlook is installed, configure your profile. If the error is not resolved, follow the next method.

3. Create a new Outlook profile

Issues or errors may arise if your Outlook profile gets corrupted or damaged. Creating a new profile can help resolve such issues: open Control Panel > Mail > Show Profiles, click Add, and name the new profile. After that, complete the email setup by following the sign-in wizard and instructions.

4. Reduce the PST file size

Accessing or importing an excessively large PST file may cause this error, so reduce the PST file size if it is large, for example by archiving older items or removing large attachments.

5. Repair the corrupted PST file

Corruption or inconsistencies in the PST file can result in the said error. You can conveniently repair the PST file with Microsoft Outlook’s Inbox Repair Tool, also known as ScanPST.exe: locate SCANPST.EXE in your Office installation folder, browse to the affected PST file, and let the tool scan and repair it. While ScanPST can fix damaged PST files, it has certain limitations, particularly with severely corrupted or very large files. As an alternative that avoids such limitations, you can opt for a more powerful PST repair tool, like Stellar Repair for Outlook.
This tool can repair PST files with any level of corruption and without any file size limitations. After successfully repairing the file, it saves all the mailbox items in a fresh new PST file, maintaining the same folder hierarchy and keeping the data intact. It can also auto-split large PST files to reduce the file size. The tool can seamlessly repair PST files from any Outlook version – 2021, 2019, 2016, 2013, and earlier.

Conclusion

As you have seen above, the ‘PST file is not an Outlook data file’ error is the result of corruption in the file, large file size, incompatibility problems, or other causes. It is easy to resolve: you can follow the stepwise solutions explained above, depending on the cause. If the error is the result of corruption in the PST file, you can rely on Stellar Repair for Outlook, as it is one of the most advanced tools to repair corrupted PST files. It recovers all the items from a corrupted PST file and saves them in a new PST file with complete data integrity.
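As a small coda to step 1 above: the read-only attribute can also be cleared programmatically. A minimal sketch in Python, with a hypothetical file path:

```python
import os
import stat

# Hypothetical path; point this at your own PST file.
pst = r"C:\Users\me\Documents\Outlook Files\archive.pst"

# On Windows, setting the user-write bit via os.chmod clears the
# Read-Only file attribute that blocks Outlook from modifying the PST.
os.chmod(pst, stat.S_IWRITE)
```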
Have you ever heard of the Encore 91 computer system, developed and built by Encore Computer Corporation? I stumbled upon the name of this system on the website for the Macintosh-like virtual window manager (MLVWM), an old X11 window manager designed to copy some of the look and feel of the classic Mac OS, and wanted to know more about it. An old website from what appears to be a reseller of the Encore 91 has a detailed description and sales pitch of the machine still online, and it’s a great read. The hardware architecture of the Encore 91 series is based on the Motorola high-performance 88100 25MHz RISC processor. A basic system is a highly integrated fully symmetrical single board multiprocessor. The single board includes two or four 88100 processors with supporting cache memory, 16 megabytes of shared main memory, two synchronous SCSI ports, an Ethernet port, 4 asynchronous ports, real-time clocks, timers, interrupts and a VME-64 bus interface. The VME-64 bus provides full compatibility with VME plus enhancements for greater throughput. Shared main memory may be expanded to 272 megabytes (mb) by adding up to four expansion cards. The expansion memory boards have the same high-speed access characteristics as local memory. ↫ Encore computing 91 system The Encore 91 ran a combination of AT&T’s System V.3.2 UNIX and Encore’s POSIX-compliant MicroMPX real-time kernel, and would be followed by machines with more powerful processors in the 88xxx series, as well as machines based on the Alpha architecture. The company also created and sold its own modified RISC architecture, RSX, for which there are still some details available online. Bits and bobs of the company were spun off and sold off, and I don’t think much of the original company is still around today. Regardless, it’s an interesting system with an interesting history, but we’ll most likely never get to see one in action – unless it turns up in some weird corner of the United States, where the rare working examples of hardware like this invariably tend to end up.
The consequences of Google requiring developer certification to install Android applications, even outside of Google’s own Play Store, are starting to reverberate. F-Droid, probably the single most popular non-Google application repository for Android, has made it very clear that Google’s upcoming requirement is most likely going to mean the end of F-Droid. If it were to be put into effect, the developer registration decree will end the F-Droid project and other free/open-source app distribution sources as we know them today, and the world will be deprived of the safety and security of the catalog of thousands of apps that can be trusted and verified by any and all. F-Droid’s myriad users will be left adrift, with no means to install — or even update their existing installed — applications. ↫ F-Droid’s blog post A potential loss of F-Droid would be a huge blow to anyone trying to run Android without Google’s applications and frameworks installed on their device. It’s pretty clear that Google is doing whatever it can to utterly destroy the Android Open Source Project, something I’ve been arguing is what the rumours about Google killing AOSP really mean. Why kill AOSP, when you can just make it utterly unusable and completely barren? Sadly, there isn’t much F-Droid can do. They’re proposing that regulators the world over look at Google’s plans, and hopefully come to the conclusion that they’re anti-competitive. The European Union in particular has tools, provided by the Digital Markets Act, that could prove useful here, but in the end, these tools only matter if the will exists to use them. It’s dark times for the smartphone world right now, especially if you care about consumer rights and open source. iOS has always been deeply anti-consumer, and while the European Union has managed to soften some of the rough edges, nothing much has changed there. Android, on the other hand, had a thriving open source, Google-free community, but decision by decision, Google is beating it into submission and killing it off. The Android of yesteryear doesn’t exist anymore, and it’s making people who used to work on Android back during the good old times extremely sad. Jean-Baptiste Quéru, husband of OSNews’ amazing and legendary previous managing editor Eugenia Loli-Queru, worded it like this a few days ago: All the tidbits of news about Android make me sad. I used to be part of the Android team. When I worked there, making the application ecosystem as open as the web was a goal. Releasing the Android source code as soon as something hit end-user devices was a goal. Being able to run your own build on actual consumer hardware was a goal. For a while after I left, there continued to be some momentum behind what I had pushed for. But, now, 12 years later, this seems to have all died. I am sad… ↫ Jean-Baptiste Quéru And so am I. Like any operating system, Android is far from perfect, but it was remarkable just how open it used to be. I guess good things just don’t survive once unbridled capitalism hits.
WordPress has evolved into a central platform for content delivery, marketing execution, and business data interaction. As organizations increasingly rely on CRMs, ERPs, and automation tools, integrating WordPress into broader system architectures becomes essential to maintain data consistency, reduce manual input, and enable real-time operational visibility. Whether used to synchronize leads with Salesforce, trigger campaigns in Mailchimp, or connect WooCommerce to inventory systems, these integrations transform WordPress from a standalone CMS into a fully embedded component of enterprise workflows.

Key Integration Scenarios and Business Use Cases

CRM systems like HubSpot, Salesforce, and Zoho integrate with WordPress to automatically capture and sync form submissions as leads, assign tags, or trigger workflows. This reduces manual entry and aligns marketing with sales pipelines. In e-commerce setups, WooCommerce can be connected to inventory management tools, shipping providers, and accounting platforms to ensure stock accuracy, automate order fulfillment, and streamline financial reporting. Marketing automation platforms such as Mailchimp or ActiveCampaign can respond to WordPress events, such as user signups, purchases, or content downloads, by triggering segmented email campaigns or adjusting lead scoring. For internal operations, user data from WordPress membership portals, employee directories, or learning management systems can be pushed to intranet tools or custom platforms to maintain consistency across HR, training, or access control systems. Integrating WordPress with external platforms such as CRMs or enterprise resource planning systems often requires a tailored backend architecture and reliable API management, areas where teams like IT Monks bring deep expertise in custom implementations and data flow optimization.

Integration Methods: APIs, Webhooks, and Middleware

External systems frequently rely on WordPress as a structured content source, authentication layer, and interaction point. Through its REST API or WPGraphQL, WordPress can expose posts, pages, users, custom fields, and taxonomies to external applications, allowing seamless data retrieval without duplicating backend infrastructure. Mobile apps, single-page applications, and business dashboards can fetch and render this data in real time, keeping interfaces synchronized while maintaining a clean separation between content management and presentation logic. Business platforms may integrate WordPress-sourced data into broader enterprise systems, such as product information management (PIM), knowledge bases, or customer portals. For example, product catalogs or event schedules created in WordPress can be indexed or consumed by search engines, ERPs, or partner extranets without requiring manual export or reformatting. This enables centralized content governance while distributing structured data across multiple digital endpoints. Authentication workflows also benefit from WordPress extensibility. By extending or interfacing with its login mechanisms, external applications can support single sign-on (SSO) via OAuth2, JWT, or SAML, reducing user friction and maintaining a unified identity framework. When WordPress functions as the identity provider, it can grant or revoke access to multiple connected systems based on roles, capabilities, or metadata stored in its user schema, facilitating secure access control across the ecosystem.
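To make the “structured content source” idea concrete, here is a minimal sketch of an external application pulling content over the WordPress core REST API. The /wp/v2/posts route and the _fields parameter are standard WordPress features; example.com stands in for a real site:

```python
import requests

# Fetch the five most recent posts, asking WordPress to return only
# the fields we actually need (the _fields parameter is part of the
# core REST API and trims the response payload).
resp = requests.get(
    "https://example.com/wp-json/wp/v2/posts",
    params={"per_page": 5, "_fields": "id,title,link"},
    timeout=10,
)
resp.raise_for_status()

for post in resp.json():
    print(post["id"], post["title"]["rendered"], post["link"])
```

The same route accepts authenticated POST requests for creating content, which is the direction CRM- and middleware-driven integrations typically use.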
Structuring Data Flows for Reliability and Security

Integrating WordPress with external systems at scale requires well-defined data flows that prioritize structural alignment, security, and resilience. Each data entity (posts, users, orders, and custom fields) must be explicitly mapped to corresponding fields in the target system to prevent schema mismatches or logic errors. This is particularly critical when syncing complex objects, such as nested product attributes, user roles with permissions, or multi-step form inputs.

Sync timing is another strategic layer. Real-time data syncing is essential for event-driven scenarios such as lead capture, user registrations, or payment confirmations, where immediate availability is critical for downstream processes. In contrast, batch processing is more efficient for scheduled operations like syncing analytics, exporting order logs, or updating inventory, where latency is acceptable but throughput and stability are prioritized.

Security mechanisms must be enforced at every endpoint. REST API and GraphQL endpoints should require OAuth2 tokens, API keys with granular scopes, or signed requests with expiration logic. WordPress capabilities and user roles should be used to constrain who or what can initiate a sync, especially for write operations. Logging mechanisms should capture each exchange (request payloads, response statuses, and timestamps) to create an audit trail for monitoring and debugging.

Robust error handling ensures resilience in production environments. Network failures, rate limits, invalid payloads, or upstream errors can be mitigated using retries with backoff, queued requests, or temporary storage of unsynced data. Middleware services or custom-built retry logic can help isolate failures without interrupting the entire workflow, enabling graceful degradation and easier recovery.

Plugins and custom development approaches offer flexible paths for integrating WordPress with business systems, depending on complexity, performance needs, and system architecture. Many off-the-shelf plugins provide streamlined connectors for popular platforms. For example, WP Fusion syncs user and e-commerce data with CRMs like HubSpot or ActiveCampaign, Uncanny Automator links WordPress events to external triggers, and Gravity Forms can be extended via Zapier to push submissions to hundreds of services. These plugins are effective for standardized use cases where integration logic is predictable and platform support is mature. They reduce development time, provide visual configuration, and often include built-in error handling, logging, and authentication. However, they may be limited when working with proprietary APIs, large data volumes, or tightly coupled enterprise workflows.

Custom development becomes essential when integrating with unique APIs, enforcing conditional logic, or maintaining performance in high-traffic environments. Developers can use wp_remote_post() and wp_remote_get() to send and retrieve data securely, implementing authentication headers, request timeouts, and structured payloads. API responses should be sanitized, validated, and stored using scalable mechanisms such as custom database tables or transient caching, avoiding reliance on generic options or post meta fields that could slow down the admin interface or lead to data bloat. For long-term maintainability, custom integrations should follow WordPress coding standards, be encapsulated in modular plugins, and include logging layers to monitor data exchange and catch failures early.
This ensures that even complex, business-critical integrations remain stable and traceable as both platforms evolve.
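The API calls named above (wp_remote_post() and wp_remote_get()) are PHP-side WordPress functions, but the retry-with-backoff pattern described is language-agnostic. A minimal middleware-style sketch in Python, with a hypothetical CRM endpoint, to make the idea concrete:

```python
import time

import requests

RETRYABLE = (429,)  # rate-limited responses are worth retrying


def sync_record(url: str, payload: dict, attempts: int = 4) -> dict:
    """POST a record to an external API, retrying transient failures
    with exponential backoff (1s, 2s, 4s, ...)."""
    for attempt in range(attempts):
        try:
            resp = requests.post(url, json=payload, timeout=10)
            # Treat rate limits and server errors as transient.
            if resp.status_code in RETRYABLE or resp.status_code >= 500:
                raise requests.ConnectionError(f"transient {resp.status_code}")
            resp.raise_for_status()  # permanent 4xx errors propagate, no retry
            return resp.json()
        except (requests.ConnectionError, requests.Timeout):
            if attempt == attempts - 1:
                raise  # caller can queue the payload for later replay
            time.sleep(2 ** attempt)  # back off before the next attempt


# Hypothetical usage:
# sync_record("https://crm.example.com/api/leads", {"email": "a@b.com"})
```

Permanent client errors (a malformed payload rejected with a 400, say) surface immediately, while timeouts, rate limits, and 5xx responses are retried; anything that still fails after the last attempt can be queued for later replay, as described above.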
Unite is an operating system in which everything is a process, including the things that you normally would expect to be part of the kernel. The hard disk driver is a user process, so is the file system running on top of it. The namespace manager is a user process. The whole thing (in theory, see below) supports network transparency from the ground up, you can use resources of other nodes in the network just as easily as you can use local resources, just prefix them with the node ID. In the late 80’s, early 90’s I had a lot of time on my hands. While living in the Netherlands I’d run into the QNX operating system that was sold locally through a distributor. The distributors brother had need of a 386 version of that OS but Quantum Software, the producers of QNX didn’t want to release a 386 version. So I decided to write my own. ↫ Jacques Mattheij What a great story. Mattheij hasn’t done anything with, or even looked at, the code for this operating system in decades, but recently got the urge to fix it up and publish it online for all of us to see. Of course, resurrecting something this old and long untouched required some magic, and there are still a few things he simply can’t get to work properly. I like how the included copy of vi is broken and adds random bits of garbage to files, and how things like the mouse driver don’t work because the driver requires a COM port and COM ports don’t seem to work in an emulated environment. Unite is modeled after QNX, so it uses a microkernel. It uses a stripped-down variant of the MINIX file system, only has one user (though that user can run multiple sessions), and there’s a basic graphics mode with some goodies. Sadly, the graphics mode is problematic and requires some work to get going, and because you’ll need the COM ports to work to use it properly, it’s a bit useless anyway at the moment. Regardless, it’s cool to see people going back to their old work and fixing it up to publish the code online.
Some time ago, I described Windows 3.0’s WinHelp as “a program for browsing online help files.” But Windows 3.0 predated the Internet, and these help files were available even if the computer was not connected to any other network. How can it be “online”? ↫ Raymond Chen at The Old New Thing I doubt this will be a conceptual problem for many people reading OSNews, but I can definitely understand especially younger people finding this a curious way of looking at the word “online”. You’ll see the concept of “online help” in quite a few systems from the ’90s (and possibly earlier), so if you’re into retrocomputing you might’ve run into it as well.
Exchange rates can move incredibly fast because modern global financial markets are all interconnected. A single piece of political news from the USA can seriously shake global markets, and having super-fast tools to track and convert currencies at favorable rates can be critical. Modern systems rely on precise, real-time currency data, which usually flows through a currency converter API. This technology is a digital bridge that allows apps and platforms to access up-to-date data from global markets within milliseconds. From fintech startups to major banks, modern APIs quietly power countless international transactions and form the backbone of the modern financial ecosystem.

What is a currency converter API?

API is an abbreviation for Application Programming Interface, a mechanism that lets different software systems talk to each other and exchange data. A currency converter API is a dedicated API that connects financial platforms to exchange rate sources such as central banks, financial exchanges, or liquidity providers. These APIs return live rate data instantly. This means that when someone checks a price in another currency or sends a payment, the app uses an API to fetch and apply the current rate automatically. For example, a live currency converter API helps fintech apps and retail users calculate real-time exchange values, ensuring customers always see the latest data with transparent prices. It eliminates the need for manual updates and the human error that comes with them, creating an automated and seamless experience. What these APIs do is make it possible for global apps to remain consistent and reliable, with accurate data at all times.

How real-time converter APIs actually work

Every time a user views or makes a foreign exchange conversion online, a series of steps happens in the background. These APIs use the JSON format to quickly send and receive data online. Caching systems ensure speed and reliability, while servers handle heavy traffic from global fintech platforms. For large-scale apps like PayPal, uptime and consistency are critical. These APIs are designed to process millions of requests daily without lag or errors, which allows users to see current rates and execute transactions instantly.

Key tech behind currency converter APIs

Modern converter APIs rely on complex infrastructure to keep up with global financial developments. There are several key technologies behind current converter APIs, including cloud computing, machine learning, blockchain technology, and security protocols. These technologies make APIs reliable and fast, coupled with top-notch security and robustness.

The role of APIs in fintech innovation

As we can see, APIs are crucial to modern online currency converters and transaction apps. As a result, they are common among fintechs. Many startups utilize APIs to reduce costs, increase scalability, and ensure overall security while maintaining convenience. Modern APIs enable fintech companies to provide super-fast services at a low cost, and conversion rates are typically much better than those of traditional banks. Currency converter APIs are at the heart of fintech’s evolution. They power everything from cross-border payments, remittances, and digital banking platforms to global e-commerce price displays. Without them, the modern financial app world would have to rely on outdated or manually developed solutions, which would increase risk.
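As a sketch of what such an integration looks like from the developer’s side, here is a minimal request/response round-trip in Python. The endpoint, parameters, and key are hypothetical; real providers differ in URL layout and authentication, but the JSON shape is typical:

```python
import requests

# Hypothetical converter endpoint and API key; substitute your provider's.
API_URL = "https://api.example-rates.com/v1/latest"

resp = requests.get(
    API_URL,
    params={"base": "USD", "symbols": "EUR,JPY", "apikey": "YOUR_KEY"},
    timeout=5,
)
resp.raise_for_status()

# A typical response body: {"base": "USD", "rates": {"EUR": 0.92, "JPY": 148.3}}
rates = resp.json()["rates"]
print(f'100.00 USD = {100 * rates["EUR"]:.2f} EUR')
```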
APIs level the playing field for startups and let small fintech firms access the same high-quality data as banks, allowing them to be flexible. As a result, fintechs often offer better rates and cheaper transaction costs, making them strong competition for banks. For developers, APIs are simple to integrate into an app, as there is no need to manually track or store exchange rates. They save time, enhance accuracy, and reduce maintenance expenses.
Bitcoin transactions reached nearly 500,000 daily in 2024. Meanwhile, there were over 1.9 million Ethereum transactions on a record-breaking day in January 2024. These are two of the leading cryptocurrencies driving the most on-chain payments, and retailers are starting to notice. Bitcoin, among other cryptocurrencies, has started functioning like real money, with payment processors and retailers returning it to the checkout. The checkout shift has been in the works for years, as various successful integrations have proven how much users want to be able to spend crypto anywhere. Crypto-linked debit cards, Bitcoin ATMs, and Lightning-enabled vending machines hinted at what was to come. Users can easily pay for software, buy coffee, or play online games using crypto. For example, no-verification casinos have used blockchain tech to enable greater privacy and anonymity, letting users sign up through their wallets without sending verification documents. Users didn’t have to wait for endless registration processes. Instead, they could play slots, blackjack, roulette, baccarat, or other popular games in seconds while enjoying super-fast payout options that came with various tokens. Crypto has become an everyday currency users spend on various platforms, which has largely inspired the push to bring it to regular commerce. Retail rails could see between $1.7 and $2.5 million in on-chain payments daily, a modest share of worldwide transaction volume if you consider that Bitcoin has already moved over $19 trillion. However, it represents a new type of growth. Every payment settles directly on the Bitcoin network without banks or other third-party intermediaries. Retailers love the idea because it reduces transaction fees and the risk of fraud. Bitcoin’s Lightning Network makes it possible, handling millions of microtransactions every month through integrations with Cash App, Strike, and OpenNode. Retailers accept payments immediately while the settlements complete on the chain later, offering speed and finality that make it practical for commerce. For the first time, developers and retailers will use the same rails thanks to innovative collaborations like the One Pay partnership between Zero Hash and Walmart. Smaller merchants have long adopted similar models across North America and Europe using BTCPay Server, CoinCorner, and BitPay retail solutions. For years, crypto payments were processed off the chain through custodial wallets, but the return to on-chain settlements marks a significant shift. Transactions that weren’t truly Bitcoin transfers off the chain can now return to being 100% BTC. Retail rails will restore the direct, peer-to-peer settlement that made Bitcoin hit the ground running in 2009. The idea appeals to users who want to hold their own keys and transact without third parties. It also means reduced exposure to frozen funds and chargeback risks for retailers. Many large-scale commerce platforms like Shopify have integrated Bitcoin processors to let their smaller retailers simplify how they accept the cryptocurrency at checkout. Customers pay directly from wallets while merchants receive the funds instantly, whether as Bitcoin or converted to local currencies. Settlements clear in minutes. User-facing wallets are also maturing. Apps like Phoenix, Wallet of Satoshi, and Breez simplify Lightning payments down to a QR code scan.
Modern users spend their Bitcoin without thinking about the underlying technology. For retailers, it becomes a system similar to Apple Pay, but one that also removes the intermediaries and settlement delays. Some digital marketplaces take it further by allowing their content creators to accept Bitcoin for sales, creating a traceable, global, and fast payment cycle. Retailers are also considering various ways to integrate payment types into their apps, but interest is certainly growing fastest around Bitcoin. The commerce industry is watching how developers experiment with microtransactions for in-game purchases and streaming services. Blockchain games allow players to earn and spend their Bitcoin within the game, creating an entire gaming economy that exists purely on the chain. Retailers are implementing the same rails for stores and online shops, creating a unified payment system. Bitcoin’s math works, which attracts more retailers. Transaction fees were once unpredictable, but they are much easier to manage now because of payment channels and batching. Smaller transactions move through the Lightning Network to settle quicker when the network becomes busy, maintaining more stable costs. Retailers can choose when and how to move the funds instead of waiting for card settlements and paying higher conversion fees. This flexibility creates opportunities for better margins and fund deliveries in cross-border commerce. Synchrony even plans to integrate crypto capabilities into Walmart’s One Pay app, which can later be enabled if the features are activated. The new Walmart cards will then have a native wallet users can access. Steak ’n Shake is another American brand that rolled out the Lightning Network’s capabilities to welcome crypto users who want to pay using Bitcoin. Retailers are simply wondering whether on-chain checkouts can preserve unit economics, reconcile with back-office systems, and reduce payment friction. While developers build tools to make these systems easier to deploy, open-source plugins from BTCPay Server support platforms like Magento and WooCommerce. These plugins handle invoicing and lock in exchange rates. Meanwhile, bigger retailers like Walmart have the opportunity to activate those added features within their app. Zero Hash has already secured approval to offer custody, trading, and transfers for the retail giant should they turn the settings on. Analytics platforms have already shown where and when Bitcoin payments occur, and the data is promising: retailers enjoy more repeat customers and deliver much faster settlements with lower fees. Meanwhile, consumers respond well to the control they regain using on-chain payments. Bitcoin transactions are transparent, final, and timestamped. Users don’t need to wait hours for banks or worry about frozen accounts. This will be particularly useful for those in regions with poor payment infrastructure. Bitcoin checkouts could be the ultimate upgrade, not just for local checkouts but for global commerce.
What if you have a PC-98 machine, and you want to run Linux on it, as you do? I mean, CP/M, OS/2, or Windows (2000 and older) might not cut it for you, after all. Well, it turns out that yes, you can run Linux on PC-98 hardware, and thanks to a bunch of work by Nina Kalinina – yes, the same person from a few days ago – there’s now more information gathered in a single place to get you started. Plamo Linux is one of the few Linux distributions to support PC-98 series. Plamo 3.x is the latest distribution that can be installed on PC-9801 and PC-9821 directly. Unfortunately, it is quite old, and is missing lots of useful stuff. This repo is to share a-ha moments and binaries for Plamo on PC-98. ↫ Plamo98 goodies The repository details “upgrading” – it’s a bit more involved than plain upgrading, but it’s not hard – Plamo Linux from 3.x to 4, which gives you access to a bunch of things you might want, like GCC 3.3 over 2.95, KDE 3.x, Python 2.3, and more. There are also custom BusyBox config files, a newer version of make, and a few other goodies and tools you might want to have. Once it’s all said and done, you can Linux like it’s 2003 on your PC-98. The number of people to whom this is relevant must be extraordinarily small, but at some point, someone is going to want to do this, only to find this repository of existing work. We’ve all been there.