The Trouble with Giblets

I don't write about the SDL very much, because I figure that the SDL team does a good enough job of it on their blog, but I was reading the news a while ago and realized that one aspect of the SDL would have helped some of our competitors, had they adopted it.

 

A long time ago, I wrote a short post about "giblets", and they're showing up a lot in the news lately.  "Giblet" is a term coined by Steve Lipner, and it has entered the lexicon of "micro-speak".  Essentially, a giblet is a chunk of code that you've included from a third party.  Michael Howard wrote about them on the SDL blog a while ago (early January), and now news comes out that Google's Android SDK contains giblets with known exploitable vulnerabilities.

I find this vaguely humorous, and a bit troubling.  As I commented in my earlier post (almost 4 years ago), adding a giblet to your product carries with it the responsibility to monitor the security mailing lists to make sure that you're running the most recent (and presumably secure) version of the giblet.
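
To make that concrete, here's a minimal sketch of what "watching your giblets" could look like in practice: a build step that compares the component versions you've pinned against the minimum versions currently believed to be secure. Everything here is hypothetical - the component names, the version floors, and the idea that the data lives in a couple of dictionaries; a real process would feed this from the component's security mailing list or an advisory database.

```python
# giblet_check.py - a hypothetical sketch, not a real tool.
# Compares pinned third-party component versions against the minimum
# versions currently believed to be free of known vulnerabilities.

# Pinned giblets in our (hypothetical) product tree.
PINNED = {
    "libpng": "1.2.7",
    "zlib": "1.2.3",
}

# Minimum known-good versions, maintained by whoever watches each
# component's security announcements (hypothetical values).
KNOWN_GOOD = {
    "libpng": "1.2.12",
    "zlib": "1.2.3",
}

def parse(version: str) -> tuple:
    """Turn '1.2.7' into (1, 2, 7) so versions compare numerically."""
    return tuple(int(part) for part in version.split("."))

def stale_giblets():
    """Yield (name, pinned, floor) for every out-of-date component."""
    for name, pinned in sorted(PINNED.items()):
        floor = KNOWN_GOOD.get(name)
        if floor is not None and parse(pinned) < parse(floor):
            yield name, pinned, floor

if __name__ == "__main__":
    stale = list(stale_giblets())
    for name, pinned, floor in stale:
        print(f"GIBLET ALERT: {name} {pinned} is older than known-good {floor}")
    raise SystemExit(1 if stale else 0)
```

Run as part of the build, a check like this turns "we forgot to update the giblet" from a silent omission into a build break.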

What I found truly surprising was that the Android development team had shipped code (even in beta) with those vulnerabilities.  The team should have known about the problem with giblets and never accepted the vulnerable versions in the first place.  That in turn leads me to wonder about the process management associated with the development of Android.

I fully understand that you need to lock down the components that are contained in your product during the development process; that's why fixes take time to propagate into distributions. From watching FOSS bugs, the typical lifecycle of a security bug in FOSS code goes like this: the bug is found in the component and fixed quickly, and then over the next several months the fix propagates into the various distributions that contain the component.  In other words, the fix itself arrives very quickly (but is largely untested), and the teams that package up each distribution consume the fix and test it in the context of their distribution.  As a result, distributions naturally lag behind fixes.  (Btw, MSFT security vulnerabilities follow roughly the same sequence - the fix is usually known within days of the bug being reported, but it takes time to test the fix and ensure that it doesn't break things, especially since Microsoft patches vulnerabilities on multiple platforms and the fixes for all of them need to be released simultaneously.)

But even so, it's surprising that a team would release a beta that contained a version of one of its giblets that was almost 4 years old (according to the original report, it contained libPNG version 1.2.7, from September 12, 2004)!  This is especially surprising given that a similar vulnerability was found in the iPhone last year (ironically, the finder of that vulnerability was Tavis Ormandy of Google).  And I'm not picking on Google out of spite - other vendors like Apple and Microsoft were each bitten by exactly this vulnerability 3 years ago.  In Apple's case, they did EXACTLY the same thing that the Android team did: they released a phone that contained a 3-year-old vulnerability that had previously been fixed in their mainstream operating system.
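
Catching a giblet that stale doesn't take much machinery.  As one illustration (a sketch, not a description of anyone's actual build process), libpng's png.h defines PNG_LIBPNG_VER_STRING (e.g. "1.2.7"), so a build step can refuse to proceed when the tree contains an older copy than the one you've vetted.  The header path and the version floor below are made up for the example.

```python
# check_libpng.py - hypothetical build-time gate for a bundled libpng.
# png.h really does define PNG_LIBPNG_VER_STRING (e.g. "1.2.7"); the
# header path and the minimum version below are assumptions.
import re
import sys

PNG_H = "third_party/libpng/png.h"   # hypothetical location in the tree
MINIMUM = (1, 2, 12)                 # hypothetical vetted floor

def bundled_version(header_path: str) -> tuple:
    """Extract (major, minor, patch) from PNG_LIBPNG_VER_STRING."""
    with open(header_path) as header:
        match = re.search(
            r'#define\s+PNG_LIBPNG_VER_STRING\s+"(\d+)\.(\d+)\.(\d+)',
            header.read())
    if match is None:
        raise ValueError(f"no PNG_LIBPNG_VER_STRING in {header_path}")
    return tuple(int(group) for group in match.groups())

if __name__ == "__main__":
    found = bundled_version(PNG_H)
    if found < MINIMUM:
        print(f"bundled libpng is {'.'.join(map(str, found))}, "
              f"older than vetted minimum {'.'.join(map(str, MINIMUM))}")
        sys.exit(1)
```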

 

So how would the SDL have helped the Android team?  The SDL requires that you track the giblets in your code - it forces you to have a plan to deal with the inevitable vulnerabilities in them.  In this case, the SDL would have forced the development team to have a process in place to monitor vulnerabilities (and of course to track the history of the component), so they hopefully would never have shipped the vulnerable components.  It also means that when a vulnerability is found after shipping, they would have a plan in place to roll out a fix ASAP.  The latter is critically important because history has shown us that when one component is known to have a vulnerability, the vultures immediately swoop in to find similar vulnerabilities in related code bases (on the theory that if you make a mistake once, you're likely to make it a second or third time).  In fact, that's another requirement of the SDL: when a vulnerability is found in a component, the SDL requires that you also look for similar vulnerabilities in related code bases.
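
If it helps to picture what "tracking giblets" means, here's one possible shape for that inventory (my sketch; the SDL doesn't prescribe this format): each entry records where the component came from, which feed announces its vulnerabilities, who owns updating it, and which of your own code bases are related closely enough to deserve an audit when it breaks.

```python
# A sketch of a giblet inventory record; the field names are my
# invention, not an SDL-mandated schema.
from dataclasses import dataclass, field

@dataclass
class Giblet:
    name: str                     # e.g. "libpng"
    version: str                  # version actually shipped
    upstream_url: str             # where new releases appear
    security_feed: str            # mailing list / advisory feed to watch
    owner: str                    # person or team responsible for updates
    ships_in: list = field(default_factory=list)      # products containing it
    related_code: list = field(default_factory=list)  # code bases to audit
                                                      # when this one breaks

inventory = [
    Giblet(
        name="libpng",
        version="1.2.12",
        upstream_url="http://www.libpng.org/pub/png/libpng.html",
        security_feed="png-mng-implement mailing list",
        owner="imaging team",
        ships_in=["phone image viewer", "browser"],
        related_code=["our own PNG thumbnailer"],
    ),
]
```

The format doesn't matter; what matters is that every giblet has an owner and a feed that somebody actually reads, so "we didn't know there was a fix" stops being possible.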

Yet another example where adopting the SDL would have helped to mitigate a vulnerability[1].

 

[1] Btw, I'm not saying that the SDL is the only way to solve this problem.  There absolutely are other methodologies that would allow these problems to be mitigated.  But when you're developing software that's going to be deployed connected to a network (any network), you MUST have a solution in place to manage your risk (and giblets are just one form of risk).  The SDL is Microsoft's way, and so far it's clearly shown its value.

Comments

  • Anonymous
    March 07, 2008
    I think that often enough, simply having a formal security process with teeth in it will inevitably reduce security problems.  I get the feeling that lots of development efforts do not think about security until after the fact.  

  • Anonymous
    March 07, 2008
    MS: I agree with you on both points.  MSFT had an extreme wake-up call several years ago and turned the ship around (which takes a LOT of time, given that many of our products operate on an 18-month ship cycle). I'm hoping other developers realize this.

  • Anonymous
    March 07, 2008
    Yep, even the latest Linux distros don't have such old libraries. BTW, MS is not the only one who does a formal security review. OpenBSD also does, and it also fixes vulnerabilities fast, which helps compensate for the risks of the "full disclosure" approach the project uses.

  • Anonymous
    March 07, 2008
    Yuhong: I never said that OpenBSD doesn't have a formal process.  In fact, I've singled them out in the past as being the only *nix distro I know of that seems to "get" the idea of security. And I did say that there were other methodologies other than the SDL that work.  I was just pointing out how the SDL would have never permitted this kind of vulnerability to happen.

  • Anonymous
    March 07, 2008
    Larry, I agree with most of what you are saying. The logic of using outdated open-source libraries baffles me. However, you said: >> In Apple's case, they did EXACTLY the same thing that the Android team did: They released a phone that contained a 3 year old vulnerability that had previously been fixed in their mainstream operating system. But I say: the phones have not been released yet. The Android software is in early alpha stages and is not in beta yet.

  • Anonymous
    March 08, 2008
    The comment has been removed

  • Anonymous
    March 08, 2008
    Now that Google and Apple are getting into the big time like Microsoft is, we are about to see them eat a lot of their words regarding Windows security. JamesNT

  • Anonymous
    March 10, 2008
    "Now that Google and Apple are getting into the big time like Microsoft is, we are about to see them eat a lot of their words regarding Windows security." Yep, Mac malware is becoming more common, which is why Apple added things like ASLR into Leopard.

  • Anonymous
    March 10, 2008
    Yuhong: Yes, they added ASLR, but they didn't turn it on for most of the "interesting" binaries in the system - including the network-facing binaries (Safari, Rendezvous, iTunes). So I wonder how much benefit they get from it.

  • Anonymous
    March 13, 2008
    BTW, if you can read APSL code, there is some ASLR code in dyld, if you are interested in seeing how it works in Mac OS X.

  • Anonymous
    March 18, 2008
    The comment has been removed

  • Anonymous
    March 18, 2008
    Igor: Actually, with SxS deployment, the servicing model for GDI+ is pretty clear.  The problems happened when applications distributed GDI+ without using SxS. Apps that used SxS deployments for GDI+ didn't have any issues with security fixes; MSFT was able to service them without requiring application involvement.

  • Anonymous
    March 18, 2008
    Larry, I mentioned GDI+ specifically because Microsoft at one point released a hotfix which searched the whole system in an attempt to find GdiPlus.dll and replace it with an up-to-date copy. I am curious to hear your opinion on code "reuse".

  • Anonymous
    March 18, 2008
    Igor: I know - that hotfix was to find the apps that didn't use SxS deployment. As for code re-use, I think you believe that developing code for Windows has the same level of complexity as a university project. It doesn't (and that's an understatement).  Windows is (as I understand it) the most complicated single code base in history. There IS code re-use in the Windows OS, and there are teams that obsessively look for opportunities to reuse code as much as possible.  But it's not as simple as picking a function or a library and finding out how many times it has been duplicated.

  • Anonymous
    March 18, 2008
    Larry, of course I do not think that Windows is simple. That would be plain stupid and I am not that stupid. Unfortunately, real-world examples say the opposite of what you are saying -- for example, Vista takes 10x more disk space than XP, and it doesn't offer 10x the performance or 10x more features. That can only mean code bloat, which comes from the lack of proper reuse.

  • Anonymous
    March 18, 2008
    Oh, and before you say "it is 10x bigger because of all the security checks we have put in" -- it is not 10x more secure than XP either ;)

  • Anonymous
    March 18, 2008
    Igor, you're simply being rude. It's not bigger because of the security checks.  It's bigger for lots of reasons.  The first is that you don't know how to add: you need to discard all the files in the WinSxS directory, because those files don't take space on the disk (they're hard links to the actual files). Other reasons: larger (higher resolution) bitmaps (media count), larger (higher resolution) icons, more icons, etc. It's also not possible to relate code size to increased functionality.  Again, that's an extremely simplistic view. Are there opportunities to reduce the bloat in the OS?  Of course.  But is Vista 10x as bloated as XP?  Not a chance.

  • Anonymous
    March 18, 2008
    The comment has been removed

  • Anonymous
    March 21, 2008
    "It's also not possible to relate code size with increased functionality" I strongly disagree with that but since my previous comment wasn't published I won't bother to explain why. Whoever believes that code size doesn't relate to functionality or performance is a bad programmer in my opinion regardless of his formal education and regardless of the size of a project he has been working on so far. There are countless examples of that in the software market. One only has to look around and compare.

  • Anonymous
    March 23, 2008
    The comment has been removed

  • Anonymous
    March 24, 2008
    The comment has been removed

  • Anonymous
    March 25, 2008
    Yuhong: Do you have evidence that ntoskrnl.exe for Vista is 10x the size of ntoskrnl.exe for XP?  My Vista ntoskrnl.exe is 3.5M; my XP test machine's ntoskrnl is 2.5M.  That's not a 10x difference. Vista is bigger than XP - no question there.  But it's not 10x bigger. As I mentioned above, you're using tools that don't understand hard links, so they double count disk space usage (for example, on Vista, the winsxs directory is almost entirely hard links, so most naive disk usage apps double count its contents - see the sketch after the comments for how a scanner can count hard-linked files only once).

  • Anonymous
    March 25, 2008
    Larry, you need a "Frequently [Un]Asked [but should be asked] Questions" page.  It should include answers to questions such as "What is a kernel, and what does an operating system contain in addition to a kernel?" and "What is a hard link, and why doesn't it take up space?" I think you and Raymond are both taking the wrong approach to Igor Levicki.  The best way to deal with trolls (even trolls who actually know something about the topic) is to ignore them.

  • Anonymous
    March 25, 2008
    "Yuhong: Do you have evidence that ntoskrnl.exe for Vista is 10x the size of ntoskrnl.exe for XP?  My Vista ntoskrnl.exe is 3.5M, my test machine's ntoskrnl is 2.5M.  That's not a 10x difference." No I don't and that is my point.

  • Anonymous
    March 26, 2008
    I mean, you are misinterpreting my comment on kernel bloat.

  • Anonymous
    April 05, 2008
    BTW, hard links are the primary reason why Vista has to be installed on an NTFS file system.
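
As a footnote to the hard-link discussion in the comments above: a disk-usage scanner can avoid double counting because the file system exposes a unique identity for each file (st_dev/st_ino in POSIX terms; recent Python versions populate the same fields on Windows/NTFS), letting the scanner charge each file's size exactly once no matter how many directory entries point at it. A minimal sketch (my illustration, not any particular tool):

```python
# du_dedupe.py - a sketch of hard-link-aware disk usage accounting.
# os.stat() exposes st_nlink (link count) and st_dev/st_ino (a unique
# file identity), which is all we need to avoid double counting.
import os
import sys

def disk_usage(root: str) -> int:
    """Total bytes under root, counting each hard-linked file once."""
    seen = set()          # (st_dev, st_ino) pairs already charged
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                info = os.stat(path)
            except OSError:
                continue  # unreadable file; skip it
            identity = (info.st_dev, info.st_ino)
            if identity in seen:
                continue  # another name for a file we already counted
            seen.add(identity)
            total += info.st_size
    return total

if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "."
    print(f"{disk_usage(root)} bytes under {root}")
```

A naive scanner that sums sizes per directory entry charges a hard-linked file once per name, which is exactly how a directory like winsxs ends up looking far larger than the space it actually consumes.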