Linux Foundation drops the ban-hammer on University of Minnesota over controversial 'research'...

Cal Jeffrey

WTF?! It's probably not unusual for individual users to get banned from contributing to Linux for making poor decisions. However, for the first time to my knowledge, the Linux Foundation has soft-banned an entire domain. Any user submitting commits from a umn.edu (University of Minnesota) address will be "default-rejected" until further notice.

The Linux Foundation has banned the entire University of Minnesota from contributing to the Linux kernel. The expulsion comes after researchers from the school published a paper titled "Open Source Insecurity: Stealthily Introducing Vulnerabilities via Hypocrite Commits." The paper details how Qiushi Wu, a graduate student, and Kangjie Lu, an assistant professor, both at U of M, intentionally submitted code with security flaws to "test the kernel community's ability to review 'known malicious' changes."

Linux Foundation fellow Greg Kroah-Hartman did not appreciate the "bad faith" experiment. In an email to other kernel maintainers, including Linus Torvalds himself, he said that from now on, they should reject all submissions from users with a umn.edu email address.

"I'll take this through my tree, so no need for any maintainer to worry about this, but they should be aware that future submissions from anyone with a umn.edu address should be by default-rejected unless otherwise determined to actually be a valid fix," Kroah-Hartman wrote. He said that maintainers are still free to approve submissions, but only if "they provide proof and can verify it." So it is essentially a soft ban.

"But really, why waste your time doing that extra work?" Kroah-Hartman added as an afterthought.

The two researchers' actions were not the only factor in the decision to ban the entire school. A third U of M user had submitted several patches of junk code that did nothing. Kroah-Hartman ordered that all past commits from the university be reverted and re-reviewed. He started the work himself, listing scores of commits he had already reverted.
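For a sense of what that re-review entails, here is a minimal, purely illustrative Python sketch of how one might enumerate the candidate commits in a local kernel checkout. This is an assumption for illustration, not the maintainers' actual tooling; the only moving parts are the author-domain filter and the subsequent revert.

  import subprocess

  # Illustrative only: list commits whose author address is at umn.edu,
  # the starting point for the mass revert-and-re-review described above.
  # Assumes the current directory is a clone of the kernel repository.
  log = subprocess.run(
      ["git", "log", "--format=%H %ae %s"],
      capture_output=True, text=True, check=True,
  ).stdout

  for line in log.splitlines():
      parts = line.split(" ", 2)
      if len(parts) < 3:
          continue  # skip any malformed or subject-less entries
      commit_hash, author_email, subject = parts
      if author_email.endswith("@umn.edu"):
          # Each hit is a candidate for "git revert <hash>" plus manual re-review.
          print(commit_hash, subject)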

In response to the ban, UMN leadership issued a statement promising "remedial action."

"Leadership in the University of Minnesota Department of Computer Science & Engineering learned today about the details of research being conducted by one of its faculty members and graduate students into the security of the Linux Kernel.

"We take this situation extremely seriously. We have immediately suspended this line of research. We will investigate the research method and the process by which this research method was approved, determine appropriate remedial action, and safeguard against future issues, if needed. We will report our findings back to the community as soon as practical."

The researchers also issued a statement denying that they ever intentionally introduced bugs into the kernel. They said the flawed patches were sent to maintainers via email for feedback. Once they got a reply amounting to "looks good," they informed the maintainers of the intentional bug and told them not to make the commit. However, the whole controversy started after someone found at least four vulnerabilities, submitted from a UMN email address, that had made it through review.

While not apologizing for their work, the researchers did express regret for the extra work they caused maintainers.

"We would like to sincerely apologize to the maintainers involved in the corresponding patch review process; this work indeed wasted their precious time," Wu and Lu explained. "We had carefully considered this issue, but could not figure out a better solution in this study."

The issue has understandably provoked heated exchanges within the Linux community. Kernel developer Laura Abbott condemned the frivolous nature and conclusions of the study, pointing out that the possibility of malicious code being intentionally introduced is already well-known in the community.

Image credit: Stanislaw Mikulski


 
"Laura Abbot condemned the frivolous nature and conclusions of the study, pointing out that the possibility of malicious code being intentionally introduced is already well-known in the community."

Hmmmm.....so you already knwo about this problem, this experiement showed it is still a problem, so you ban the ones pointing the problem out.

Sounds like Laura should be doing something about the malicious code issue instead of condemming those experiemnting with the system.
 
"Laura Abbot condemned the frivolous nature and conclusions of the study, pointing out that the possibility of malicious code being intentionally introduced is already well-known in the community."

Hmmmm.....so you already knwo about this problem, this experiement showed it is still a problem, so you ban the ones pointing the problem out.

Sounds like Laura should be doing something about the malicious code issue instead of condemming those experiemnting with the system.

Constantly submitting bogus patches is not "pointing out the problem", it's "poisoning the well".
 
Constantly submitting bogus patches is not "pointing out the problem", it's "poisoning the well".
You're right, it is. Of course, if you point out that a well has an issue and the village refuses to fix it, poisoning it with some harmless junk is a good way to get their attention that this problem SERIOUSLY needs fixing. The maintainers should probably address the problem rather than ban the ones pointing out that the problem still exists.
 
Constantly submitting bogus patches is not "pointing out the problem", it's "poisoning the well".
In the public domain, IMO, any code submitted by anyone should absolutely be verified by the maintainers of the code. I have no idea if this is done by The Linux Foundation, but if it is not, that is great cause for concern, IMO.

If code submissions are not verified, The Linux Foundation should expect flak from researchers like these, or from someone whose system is compromised by malicious code that was not verified.
 
In the public domain, IMO, any code submitted by anyone should absolutely be verified by the maintainers of the code. I have no idea if this is done by The Linux Foundation, but if it is not, that is great cause for concern, IMO.

If code submissions are not verified, The Linux Foundation should expect flak from researchers like these, or from someone whose system is compromised by malicious code that was not verified.

So you really don't understand the concept of "trust"? Because most organizations rely on it heavily. Many sets of eyes keep Linux safe, but intentionally wasting the time of those testers and reviewers is incredibly stupid and exemplifies the "just a prank, bro" mentality. Or perhaps these Chinese researchers were cloaking their REAL hopes for their malware in a facade of testing the system...
 
So you really don't understand the concept of "trust"? Because most organizations rely on it heavily. Many sets of eyes keep Linux safe, but intentionally wasting the time of those testers and reviewers is incredibly stupid and exemplifies the "just a prank, bro" mentality. Or perhaps these Chinese researchers were cloaking their REAL hopes for their malware in a facade of testing the system...

Good thing the paper was published then.
 
Good thing the paper was published then.
There is white hat security and black hat security. White hats discover a vulnerability and then inform the programmers of the hole, only revealing the hole after it's been patched in collaboration with the programmers. Black hats don't give a **** and do whatever they want, no matter who gets hurt.

This is black hat masquerading as white hat, in a maneuver whose subtext is more an attempt to intimate that the entire open-source model is invalid than to identify and exploit a specific code weakness, probably because they serve masters whose interests are aligned against open source in general.
 
Extra effort, wasted precious time? In the case of a critical door, making sure it isn't a backdoor should be part of the effort at all times. Looks like somebody has an interest in keeping that backdoor open.

You can never be too cautious. If you made it, double-check it anyway. If someone else made it, assume it's dangerous until proven otherwise.
I love that quote.
 
some harmless junk
I know most people won't do the due diligence of reading the paper before commenting, but in this case, I recommend everyone go and read at least the abstract.

"The introduced vulnerabilities are critical because they may be stealthily exploited to impact massive devices."

Those are the kinds of bugs they are bragging about. It's not harmless. To be fair, the ethics section did mention they didn't intend to land them in the merge window, but clearly some of them slipped through. If you meant to test the safeguards of a water reservoir critical to lots of people, and you used real poison and accidentally succeeded, I am pretty sure you wouldn't escape prison just because it's your "research".
 
It's always fun watching TechSpot tech specialists give advice on programming when some of them have never actually reviewed a pull request themselves. Reviewing is hard, takes a lot of time, and is a human task, meaning errors can get through the review process. Someone sabotaging your work on purpose can be dramatic for any enterprise. You can blame security or "backdoors," but if you are bound by contract to the enterprise, you'll be fired and maybe also sued; for open source it's different, and that's why it's important to only work with people of good integrity. I think the guys at the Linux Foundation have been very nice to only give a soft ban. Ruining trust between universities and open source is something very serious.
 
"Laura Abbot condemned the frivolous nature and conclusions of the study, pointing out that the possibility of malicious code being intentionally introduced is already well-known in the community."

Hmmmm.....so you already knwo about this problem, this experiement showed it is still a problem, so you ban the ones pointing the problem out.

Sounds like Laura should be doing something about the malicious code issue instead of condemming those experiemnting with the system.
Actually the process is working as intended. If you are not a reliable submitter of quality code, you should be banned from submitting. This is extremely logical imo.

As a senior dev, I apply extra scrutiny to low-reliability co-workers' work. If they aren't reliable with quality, there are some things I'd vet line by line, or even request they not work on at all, like mission-critical core code. To not do that is negligence.
 
The problem here is not so much the experiment, but how they went about it. This was a study that should have been run up the flagpole. They should have at least contacted Torvalds or a senior fellow before proceeding to try tricking maintainers with malicious code. If someone in the Foundation had known this was going on, the risks of anything getting through would have been minimized, as would the work involved in reverting the bad changes. It would not have damaged the study as long as the LF higher-up in the know allowed them to conduct it while being kept in the loop. Instead, they did the whole thing in secret, which IMO is unethical for a project of this nature.

You could equate it to those "secret shopper" programs where the shopper goes into a restaurant or whatever to test the customer service. They do not do this without management approval. In fact, management is the one asking for the CS audit. Nobody in management at the Foundation asked for this or even knew it was going on. That's where the researchers fugged up.
 
I have to agree with the Foundation; the actions of those university researchers showed incredibly poor judgment and were not at all an act of good faith, nor were they at all trustworthy.
 
If you can't contribute anything constructive, never mind. You may not be capable of it.

But don't be the lowest of the lowlifes and be destructive.

If you can't be helpful, then stay away. Away from productivity. Let others improve things and bring about positive change.
 
You're right, it is. Of course, if you point out that a well has an issue and the village refuses to fix it, poisoning it with some harmless junk is a good way to get their attention that this problem SERIOUSLY needs fixing. The maintainers should probably address the problem rather than ban the ones pointing out that the problem still exists.

You do realize that this is the entire point of having an open source system? Every one of these "villagers" can contribute a bucket of water to the well. Every villager will review every other villager's bucket to make sure that it's good before it's put in the well. If it's a poisoned bucket, or malicious code in the real-world example, then it's removed. There's nothing more to be done about malicious code being added to an open source system, because catching it is literally what open source review is meant to do. It's obviously working well, given that the code submitted during this study was removed.
 
The problem here is not so much the experiment, but how they went about it. This was a study that should have been run up the flagpole. They should have at least contacted Torvalds or a senior fellow before proceeding to try tricking maintainers with malicious code. If someone in the Foundation had known this was going on, the risks of anything getting through would have been minimized, as would the work involved in reverting the bad changes. It would not have damaged the study as long as the LF higher-up in the know allowed them to conduct it while being kept in the loop. Instead, they did the whole thing in secret, which IMO is unethical for a project of this nature.

You could equate it to those "secret shopper" programs where the shopper goes into a restaurant or whatever to test the customer service. They do not do this without management approval. In fact, management is the one asking for the CS audit. Nobody in management at the Foundation asked for this or even knew it was going on. That's where the researchers fugged up.
And therein lies the rub.
 
Always has been because it's open source.

Still better than closed-source software hiding existing vulnerabilities (and built-in backdoors, for that matter) while the maintainers say "it's secure" xD

:)
Well, I've always heard the opposite logic: open source is the safest because it's open and controlled.
 