On GitHub, there has been an increasing trend of using “staleness detector bots” that auto-close issues that have had no activity for X amount of time.
In concept this may sound fine, but in practice these bots poison the core principles of Open Source, and they have been quietly damaging and eroding projects for a long time, often without anyone noticing.
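For what it’s worth, setting one of these bots up takes only a few lines. A minimal sketch using GitHub’s own actions/stale workflow (parameter names from that action; the schedule, thresholds, and message are purely illustrative):

```yaml
# .github/workflows/stale.yml -- hypothetical staleness-bot setup
name: "Close stale issues"
on:
  schedule:
    - cron: "30 1 * * *"         # run once per day
jobs:
  stale:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/stale@v9
        with:
          days-before-stale: 60  # label an issue "stale" after 60 quiet days
          days-before-close: 7   # then close it 7 days later
          stale-issue-message: "This issue is stale because it has had no recent activity."
```

Which is part of the problem: the barrier to auto-closing someone’s painstakingly written report is a ten-line config file.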
I’m not a developer and even I can instantly see such bots would create countless problems. I had no idea such bots were being used.
This is only the tip of the iceberg. There are bots that star your project on GitHub to make it look more popular. Devs typically use this rating to decide whether a project is worth trying out or participating in, and it gives employers the impression that your project is a significant contribution to the OSS space. I guess once a service gets popular enough, there will be people who try to game it for a competitive advantage.
Why would anyone think closing a bug just because it hadn’t been fixed was a good idea? That is exactly the reason *not* to close a bug yet. Bugs don’t magically fix themselves. I would not bother to submit a bug report to any project that was in the habit of discarding unfixed bug reports like this.
Minuous,
I think Stack Exchange has a similar problem with open questions getting closed before they can be answered. There it’s not due to bots, though, but to overly aggressive moderators incentivized by gamification that rewards over-involvement. Mods don’t get points for doing nothing, even when there’s nothing that needs fixing and doing nothing is the best course of action.
Well, the reason behind them kind of makes sense. Say a bug was logged against version 3.45 of a project, complete with a stack trace, all sorts of wonderful screenshots, and debugging information, but the project is now on version 10.56. Usually those wonderful details that were painstakingly collected by the user aren’t really helpful to devs working on the current version. It really sucks as a bug reporter, though, and it happens to me a lot. I try to help to the extent that I can without busting out the unfamiliar code base and trying to fix it myself, but this is what happens to most of my reported bugs. Either that, or I get a reply a year or two later asking how to reproduce. First of all, it’s in the report, and second of all, all hope of fixing the issue is pretty much lost by then: the bug hasn’t happened again, or I’ve given up on the project because of it, and I don’t really have the time or energy to reproduce it on demand now that the dev has finally gotten around to it.
*nod* That dynamic is why I gave up on posting “Yes, it’s still a problem. Don’t auto-close” replies on the dozens of bug reports and feature requests I’d filed against various KDE apps after a decade of no movement.
JWZ’s CADT Model at its finest. I wish they’d at least require human triage of feature requests.
Yeah, a few times I found some trivial KDE apps had obvious, reproducible bugs and thought: I know, I’ll fix them! So I downloaded the source from my distro’s repo, learned it, found the bug, fixed it, then downloaded the upstream source to patch it in the right spot and… it was already fixed. D’oh. The fixes just hadn’t landed in my distro yet.
Maybe I should step up and become a maintainer of some apps I care about, that would allow me to be better versed in the current state of things.
I understand this is counterintuitive but enormous backlogs tend to keep dev teams overwhelmed and hamper prioritisation and planning. Teams/devs should feel free to declare Backlog Bankruptcy https://www.productplan.com/blog/backlog-bankruptcy/
Are you suggesting that the answer to a project full of errors is to act like those errors don’t exist? Man, no wonder we real engineers don’t like it when software developers refer to themselves as Software Engineers.
I guess maybe this is what Boeing was trying to do when they released the 767-MAX full of bugs. After all, what could possibly go wrong when you act like your problems don’t exist?
teco.sb,
Software engineers deserve a lot of flak for creating software bugs, but in all seriousness Boeing had real engineers certify the 737 MAX despite its issues. Those engineers even reissued an airworthiness notice after the accidents because the “Maneuvering Characteristics Augmentation System” was operating to their spec. There was a lot of pressure to blame the pilots rather than to admit the MCAS was inherently dangerous.
These disasters highlighted fraud and questionable policy within Boeing and the FAA, but I really don’t think they are cases you can point to and blame software engineering.
https://en.wikipedia.org/wiki/Boeing_737_MAX
If you are looking for a mission failure to blame on software engineering, here’s an interesting case that might be blamed on software engineers not doing enough testing against unexpected inputs/conditions.
https://spacenews.com/software-problem-blamed-for-ispace-lunar-lander-crash/
Wow, that’s an interesting one. What’s the point of putting a sensor in a spacecraft and sending it to the moon, if you’re gonna disregard its readings anyway?
What’s the point of sending a fucking spacecraft to the moon if you’re not serious about it? In all honesty, any real engineer would probably have put two or three of those same sensors aboard and had the computer require them to report coherent data, ignoring only the sensor whose readings were off.
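The redundancy scheme described above can be sketched in a few lines. This is a hypothetical illustration of 2-out-of-3 sensor voting, not any lander’s actual flight software:

```python
def fused_altitude(readings, tolerance=5.0):
    """Fuse three altimeter readings by majority vote.

    A reading is trusted only if it agrees (within `tolerance` metres)
    with at least one other sensor; a lone outlier is ignored.
    """
    trusted = [
        r for r in readings
        if sum(abs(r - other) <= tolerance for other in readings) >= 2
    ]
    if not trusted:
        raise ValueError("no quorum: sensors disagree with each other")
    return sum(trusted) / len(trusted)

# A wildly divergent sensor is outvoted by the two that agree:
print(fused_altitude([101.0, 103.0, 950.0]))  # -> 102.0
```

The point being: with a single sensor, your only choices are “trust it blindly” or “ignore it blindly”; with three, the software can tell which story is coherent.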
@sj87:
You’re not wrong, and in any Earth-bound situation that’s the most sensible course of action. However, in spaceflight every gram counts; just breaking free from Earth’s gravity requires a massive amount of fuel, and every bit of mass added to the craft means more fuel must be added, and when you add more fuel you’re adding more mass so it’s a runaway problem.
Until we can further miniaturize the electronics and/or create a more efficient fuel source there will be hard limits on any spacebound craft, especially one expected to successfully land on another planet or moon.
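The runaway problem described above is captured by the Tsiolkovsky rocket equation, delta-v = v_e * ln(m0 / mf): the propellant required grows exponentially with the delta-v you need. A rough sketch, with purely illustrative numbers:

```python
import math

def propellant_needed(dry_mass_kg, delta_v_ms, exhaust_velocity_ms):
    """Tsiolkovsky rocket equation: delta_v = v_e * ln(m0 / mf),
    rearranged to give the propellant mass for a desired delta-v."""
    full_mass = dry_mass_kg * math.exp(delta_v_ms / exhaust_velocity_ms)
    return full_mass - dry_mass_kg

# Illustrative figures: ~9,400 m/s to reach low Earth orbit, with a
# ~4,400 m/s exhaust velocity. Every kilogram of payload costs several
# kilograms of propellant -- before the craft even heads for the Moon.
print(propellant_needed(1.0, 9400.0, 4400.0))
```

So a spare sensor isn’t just the sensor’s mass; it’s that mass multiplied through the whole fuel budget.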
Hi Alfman,
I’m pretty sure we agree on this, maybe I just didn’t properly express myself earlier.
When engineers cross the line, there are real consequences. The whole 767-MAX debacle was a true example of what happens when engineers and regulators get complacent. Unfortunately, it’s not the only such example.
Yes, that was the point I was trying to make. When engineers disregard real issues/problems in their designs (such as bugs in the software), really bad things happen. The idea that you should just close bug reports and tickets because they would negatively affect the mental health of the developers is ludicrous! If your product has that many issues, then it shouldn’t be out in the world.
Declaring “Backlog Bankruptcy” by deleting bug reports and tickets because the author’s “stress grew as the backlog ballooned” is ridiculous, and the fact that someone actually put this down in writing and published it for others to see is mind-boggling. If your design is that crappy, then you should be stressed. Instead of declaring Backlog Bankruptcy, they should think about not shipping that steaming pile of crap to their customers. But, as the article suggests, they’ll ship an unfinished and bug-ridden product anyway.
Now, imagine if that article had been written by an engineer at Boeing. “We disregarded the known issues because they were too much and decided they would be fixed in a future iteration of the design.”
This is why I can’t take anyone defending the use of “Software Engineer” to refer to developers. Once someone can show me how a developer addresses the Engineer’s Code of Ethics (https://www.nspe.org/resources/ethics/code-ethics), then maybe things can start looking rosier in the software world.
teco.sb,
I assumed you were referring to the 737-max disasters that I linked to and 767 was a typo, but since you’ve said it twice now I need to ask does 767-max refer to something else I’m not aware of? If so then we might not be talking about the same thing?
Sure, but to be clear my nitpick was with the example and the suggestion that software bugs were at fault. In the case of MCAS specifically, that was not a “software bug”: it was certified and then re-certified as being to Boeing’s engineering specs. To the extent there was a flaw (IMHO there was), it was the fault of the real engineers who designed it that way! Granted, maybe I’m just biased due to my profession, but I don’t like classifying flaws in the specs as “software bugs”. To me, bugs are unintentional behaviors, not intended ones. Throwing software professionals under the bus for following the engineers’ specs in good faith seems very unfair to me. I’m not trying to deny all the bad practices in the software industry, but it irks me a bit that my profession is getting blamed for the faults of actual engineers.
“Skyscrapers are sinking into the ground. Fricking software engineers and their bugs…”
https://www.theguardian.com/us-news/2022/jan/10/san-francisco-millennium-tower-sinking
Procedurally, I agree with you: closing open bugs without doing anything to fix them is lazy. If bug-tracking employees have been doing their jobs, open bugs have informational value, and closing them without a resolution discards that information. But if a bug system was so poorly maintained (i.e. maybe nobody was doing it) that it just contains an unmaintained mountain of garbage anyway, then I do see the point in closing it all. Still, this clearly speaks poorly of the company’s or organization’s commitment to doing things the right way.
IIRC, companies like Red Hat will mass-close tickets on major version changes to clean the slate, which transfers the onus onto users to reopen them with every new version of the software.
Other companies, like Google, take a unique approach: make sure customers cannot reach you. That way there are no open bugs that have to be closed. It’s brilliant. /sarcasm
Well, now that’s embarrassing… I did mean the 737-MAX. A 767-MAX never existed, I don’t think.
When this popped up on lobste.rs and someone attempted to write a defense of stale-botting in reply, this was my response:
…and, to quote another commenter (caboteria) whose stance aligns with mine: