safe and sound

Security in the Age of Open Source Software
Wednesday, April 8th, 2015


I remember the first time I heard about “open source” software. I saw that a friend was browsing the internet on something called “Mozilla Firefox,” and I was intrigued. I soon learned (with a fairly limited understanding) that Firefox was free because dedicated programmers teamed up to make it, without being paid. I was amazed, and I still am. What a fantastic contribution to society, and what an amazing gift to have the talent to create something like that. I figured that with so many people working on it, all with the simple goal of making it great, it must be perfect.

The collaborative power of passionate developers has made countless invaluable contributions to the tech world. But while we assume that all those eyes on the code make it phenomenal, and they often do, that assumption also carries certain risks. One place where open source software may be lacking is security.

Jeff Atwood, co-founder of Stack Overflow, wrote this week about the security of open source software, and his post opened my eyes to a few points. He quotes John Viega, a well-known author and software security specialist, who said:

The fact that many eyeballs are looking at a piece of software is not likely to make it more secure. It is likely, however, to make people believe that it is secure. The result is an open source community that is probably far too trusting when it comes to security.

Experts have pointed out that yes, many eyes do look over the code in the giant repositories of open source projects, but the sheer amount of code is often overwhelming, and few of the people examining it are security specialists.

In essence, my view of open source software, one shared by many people, has led to a quite literal false sense of security, as famously demonstrated by the “Heartbleed” bug exposed in the OpenSSL cryptography library. Atwood explains that “there are not enough qualified eyeballs to look at code.” He goes on to describe the ways software teams do work hard to catch bugs, often through a bug-finding bounty system.
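For readers unfamiliar with Heartbleed: at its core, it was a missing length check. OpenSSL echoed back however many bytes a peer claimed to have sent rather than however many it actually received, leaking neighboring memory. The sketch below is hypothetical C, not OpenSSL's actual code, and simply illustrates that class of bug and the straightforward fix.

    /* A sketch of the class of bug behind Heartbleed: trusting a length
       field supplied by the peer. Illustrative only; not OpenSSL's code. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical heartbeat handler: echo back claimed_len bytes of payload. */
    static void handle_heartbeat(const unsigned char *payload,
                                 size_t payload_len, size_t claimed_len)
    {
        unsigned char *reply;

        /* The Heartbleed-style bug was, in effect, copying claimed_len bytes
           without checking it against payload_len:
               memcpy(reply, payload, claimed_len);
           which leaks whatever happens to sit in memory past the payload.
           The fix is to reject lengths the peer did not actually send. */
        if (claimed_len > payload_len) {
            fprintf(stderr, "bogus length %zu, only %zu bytes received\n",
                    claimed_len, payload_len);
            return;
        }

        reply = malloc(claimed_len);
        if (reply == NULL)
            return;
        memcpy(reply, payload, claimed_len);
        printf("echoing %zu bytes\n", claimed_len);
        free(reply);
    }

    int main(void)
    {
        const unsigned char payload[] = "hat";  /* 3 bytes of real data */
        handle_heartbeat(payload, 3, 3);        /* honest request */
        handle_heartbeat(payload, 3, 65535);    /* malicious request, now rejected */
        return 0;
    }

The point isn't the specific code, it's that a one-line oversight in widely trusted, widely reviewed software sat unnoticed for years.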

Methods of bug-finding aside, it's worth acknowledging that while open source software is amazing, it isn't perfect. Many writers in the tech industry have examined the incentive systems behind open source work: in a field where money is not a factor, security may not be the most “glamorous” category of contribution, and at times it is left lacking.

What could fix the issue? The idea of paying people when they find bugs certainly has its merits, but it also exposes moral pitfalls. Are there better ways? It's fun to imagine a… say, pro-social environment, where bugs are exposed for the good of the project, and for the users in general. Linus Torvalds, the creator of Linux, has stated that security bugs are really just like any other type of bug, and that they shouldn't inherently be seen as exceptionally notable failures. Of course, security rightly ranks among the most important concerns in the creation of software. But if we treat security bug-finding just like bug-finding anywhere else, we avoid shaming the creators, and perhaps even make the task a more attractive one. By praising the work of good engineers and avoiding the attribution of mistakes to blanket ineptitude, we may also attract smarter minds to the field.

And that's important to note as well. As Atwood said, we need more qualified eyes. Lots of praise is heaped on the developers who create amazing products; perhaps more of it can be shifted to those who make those products more secure. In a field where money doesn't drive decisions, community makes things happen. If we can foster an environment where the “rockstars” include those keeping our information safe, perhaps more people will be attracted to the task, and we'll all be safer. And the open source movement can continue on with the trust and admiration that it deserves.
