2 August 2007

Ship Now, Patch Later


Subject: Re: It would be nice if MS could settle on a single subnet for updates

On Fri, 27 Jul 2007 15:13:52 +0100, "Mike Brannigan"
>"Leythos" <void@nowhere.lan> wrote in message
>> Mike.Brannigan@localhost says...

This thread is about the collision between...

    No automatic code base changes allowed

...and...

    Vendors need to push "code of the day"

Given that the only reason we allow vendors to push "code of the day" is that their existing code fails too often for us to manage manually, one wonders if our trust in these vendors is well-placed.

A big part of this is knowing that only the vendor is pushing the code, and that's hard to be sure of.  If malware were to hijack a vendor's update pipe, it could blow malicious code into the core of systems, right past all those systems' defenses.

With that in mind, I've switched from wishing MS would use open standards for patch transmission to being grateful for whatever they can do to harden the process.  I'd still rather not have to leave myself open to injections of "code of the day", though.

>NO never ever ever in a production corporate environment do you allow ANY of
>your workstations and servers to directly access anyone for patches
>I have never allowed this or even seen it in real large or enterprise
>customers. (the only place it may crop up is in mom and pop 
>10 PCs and a Server shops).

And there's the problem.  MS concentrates on scaling up to enterprise needs, where the enterprise consolidates patches in one location and then drives them into systems under its own in-house control.

So scaling up is well catered for.

But what about scaling down? 

Do "mom and pop" folks not deserve safety?  How about single-PC users who have everything they own tied up in that one vulnerable box?  What's best practice for them - "trust me, I'm a software vendor"?

How about scaling outwards? 

When every single vendor wants to be able to push "updates" into your PC, even for things as trivial as printer and mouse drivers, how do you manage these?  How do you manage 50 different ad-hoc update delivery systems, some from vendors who are not much beyond "Mom and Pop" status themselves?  Do we let Zango etc. "update" themselves?

The bottom line: "Ship now, patch later" is an unworkable model.

>As you said your only problem is with Microsoft then the solution I have
>outlined above is the fix - only one server needs access through your
>draconian firewall policies.  And you get a real secure enterprise patch
>management solution that significantly lowers the risk to your environment.

That's probably the best solution, for those with the resources to manage it.  It does create a lock-in advantage for MS, but at least it is one that is value-based (i.e. the positive value of a well-developed enterprise-ready management system).
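Concretely, the "only one server needs access" pattern is what WSUS provides: one in-house server syncs with Microsoft, and every workstation is pointed at it by policy rather than at the public update hosts.  A minimal sketch of the client-side registry policy, assuming a hypothetical intranet WSUS server named wsus.corp.example on port 8530 (the keys are the documented Windows Update policy values; the host name and port are illustrative):

```shell
:: Point Automatic Updates at the in-house WSUS server instead of
:: Microsoft's public update hosts (server name is hypothetical).
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate" ^
    /v WUServer /t REG_SZ /d "http://wsus.corp.example:8530" /f
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate" ^
    /v WUStatusServer /t REG_SZ /d "http://wsus.corp.example:8530" /f

:: Tell the Automatic Updates client to honour WUServer.
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" ^
    /v UseWUServer /t REG_DWORD /d 1 /f
```

In practice these values are pushed centrally via Group Policy rather than set by hand, and only the WSUS box itself needs outbound access through the firewall - which is exactly the "draconian firewall" arrangement being described.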

However, I have to wonder how effective in-house patch evaluation really is, especially if it is to keep up with tight time-to-exploit cycles.  It may be the closed-source equivalent of the open source boast that "our code is validated by a thousand reviewers"; looks good on paper, but is it really effective in practice?
