Debian has released its latest version, Debian 13 “trixie”.
This release contains over 14,100 new packages for a total count of 69,830 packages, while over 8,840 packages have been removed as obsolete. 44,326 packages were updated in this release. The overall disk usage for trixie is 403,854,660 kB (403 GB), and is made up of 1,463,291,186 lines of code.
↫ Debian 13 release announcement
I’m never quite sure what to say about new Debian releases, as Debian isn’t exactly the kind of distribution to make massive, sweeping changes or introduce brand new technologies before anyone else. That being said, Debian is a massively important cornerstone of the Linux world, forming the base for many of the most popular Linux distributions.
At some point, you’re going to deal with Debian 13.
This will be the final release with ARMv4 EABI soft-float (armel) support (although I think ARMv7 hard-float, armhf, continues on). This probably doesn't affect the Raspberry Pi, since they famously recompile everything for ARMv6 hard-float anyway. But it is interesting to see that we're coming to the end of ARM7, ARM9, and ARM11 support in upstream Debian stable. Not that anyone should be making new devices with any of those processors in 2025.
This release has also dropped MIPS support (mipsel and mips64el). I haven't used one, but I understood that this ISA was popular in routers for a while. I guess it's all ARMv7 or ARMv8 now. Or actually, I think my TP-Link has an Intel Atom in it; I wonder if it's 32-bit or 64-bit.
Anything older than ARMv8 Cortex-A is basically dead, aside from some low-cost trash. Even AArch32 is being removed from modern SoCs, so they no longer execute 32-bit ARM code at all.
pikaczex,
Well, it will be “trash” on the basis that it’s not supported. But oftentimes for IoT applications, simple CPUs are more than enough. The applications they enabled were always more about connectivity and accessibility than CPU horsepower, which hardly matters at all. And even to this day, 32-bit is plenty for many systems that need less than 4GB. For applications where a microcontroller has enough horsepower but you’d like the connectivity and tooling of a real OS, Linux is a great choice.
It’s not practical to continue support indefinitely, and the end-of-support line has to be drawn somewhere, so it’s understandable why this has to be done. I can’t really be critical of projects that shift resources around to keep up with mainline hardware changes. But even so, that doesn’t mean the previous hardware was trash; it just means the cost/benefit analysis wasn’t in its favor.
> oftentimes for IoT applications, simple CPUs are more than enough
I would go even further. For anything requiring less than 4 GB of RAM, 64 bits is wasteful. Why bother with 64 bits if the top 32 of them are always zero?
32-bit systems need less silicon, generate less heat, have smaller code size, and draw less power. What are the 64-bit benefits that justify giving all that up?
64 bits is not inherently any faster unless you are processing numbers bigger than 32 bits. We think of 64-bit as being faster, but that is because our PCs added not just 64-bit registers but also a bunch more of them, plus things like vector extensions, at the same time as the extra bits. Even the 32-bit version of RISC-V has 32 registers, though.
Perhaps if you are doing encryption or video encoding on the CPU, you need 64 bits. But how often are you doing those in an embedded context? My company makes mobile video recorders, but the encoding and encryption are both done with dedicated hardware anyway, not on the CPU. And if you are doing AI, you have a GPU or NPU.
The biggest obstacle I can think of is 64-bit time. You can still do 64-bit time on 32-bit hardware, but if you are using time a lot, this could really slow things down. For embedded or IoT-style devices, network timestamps will not be a problem until 2038, as all timestamps before then fit in 32 bits. I suppose many filesystems require 64-bit time, but again, the top 32 bits are going to be zero until 2038 even there. Perhaps a specialized instruction to spit out a 64-bit number with 32 zeros at the front is in order, just for timestamps.
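For what it’s worth, 64-bit time on 32-bit hardware is already an opt-in on Linux: glibc 2.34 and later accept feature-test macros that widen time_t to 64 bits on 32-bit targets. A minimal sketch (the macros are real glibc ones; the program around them is just an illustration):

```c
/* Opt into 64-bit time_t on a 32-bit glibc target.
   glibc requires _FILE_OFFSET_BITS=64 alongside _TIME_BITS=64,
   and both must be defined before any header is included. */
#define _FILE_OFFSET_BITS 64
#define _TIME_BITS 64

#include <stdio.h>
#include <time.h>

int main(void) {
    time_t now = time(NULL);
    /* With _TIME_BITS=64 this prints 8 even on a 32-bit build,
       so values past 2038-01-19 no longer overflow. */
    printf("sizeof(time_t) = %zu bytes\n", sizeof(time_t));
    printf("now = %lld\n", (long long)now);
    return 0;
}
```

As I understand it, this is essentially the switch Debian 13 flipped distribution-wide on armel and armhf, while i386 deliberately stayed at 32-bit time_t for binary compatibility.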
This is another one of those classic cases of something being deprecated and the internet going up in arms about how they think it’ll suddenly stop working.
If you’re using one of those smaller ARM SoCs, you still have older software that’ll happily run on it. Most commercial users will be in embedded applications anyway, things like household appliances, industrial controllers, etc. You don’t need to run a full up-to-date Debian on your dishwasher, and the manufacturers really don’t care about the “latest and greatest”. For those that actually make hardware with these older SoCs, it’s basically irrelevant.
The123king,
If you already have everything you need, then you can just keep using it. What can happen, though, is that you have all the original media that came with the product (let’s say from 2010), but you want a library or program that needed to be installed from the repos and isn’t present on the install media, which is a common occurrence. You can see where this is going. In one instance at work, they had purchased a turnkey CentOS application server. It wasn’t a problem that it used old software per se; we were able to roll out new functionality on the old stack. But it turned out to be a big problem when the original repos were no longer accessible: compiler tools, libraries, etc. The absence of repos makes it several orders of magnitude harder to work with today than it originally was.
I’ve encountered this with old embedded systems as well, and frankly I gave up on them. The hardware was fine, but it wasn’t worth the effort to use them without repos. Theoretically we could archive entire repos, and not just the software that comes with a product, but that’s a lot of effort and cost in anticipation of future requirements we may not know about for decades.
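For Debian at least, this particular failure mode has a partial escape hatch: end-of-life releases move to archive.debian.org rather than disappearing outright. A sketch of repointing a hypothetical end-of-life “wheezy” machine at the archive (the host and suite names are real; the machine is made up):

```
# /etc/apt/sources.list on a hypothetical EOL wheezy box
deb http://archive.debian.org/debian wheezy main
```

The archived Release files have long since expired, so updates need the validity check relaxed, e.g. `apt-get -o Acquire::Check-Valid-Until=false update`. No help for a turnkey CentOS box, of course, but a reason to grieve a little less over old Debian installs.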
64-bit time to get us past the Year 2038 problem, and the end of 32-bit x86 support. I have exactly one legacy laptop system affected by this.
Also a new format for apt sources files and some changes to the size of /boot and the way swap is handled.
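The new sources format is the deb822 style: one multi-line stanza per source in /etc/apt/sources.list.d/*.sources, replacing the old one-line entries. A representative stanza (your mirror, components, and keyring path may differ):

```
Types: deb
URIs: https://deb.debian.org/debian
Suites: trixie trixie-updates
Components: main
Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg
```

If I understand the apt 3.0 changes correctly, `apt modernize-sources` will convert existing one-line entries for you.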
PipeWire is usable in this release without having to pull from backports to get your wireless earbuds working. The newer KDE has been solid.
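A quick sanity check that PipeWire’s PulseAudio shim is the one serving audio (pactl and its “Server Name” field are real; the exact version string will vary):

```
$ pactl info | grep 'Server Name'
Server Name: PulseAudio (on PipeWire ...)
```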
Have fun everybody…