[Full-Disclosure] Security & Obscurity: First-time attacks and lawyer jokes
peter at peterswire.net
Thu Sep 2 17:24:29 BST 2004
Dave Aitel wrote detailed comments, which I appreciate, and I respond to
some of them here. Others in the threads have made similar points.
> As the Japanese Proverb says, "Only painters and lawyers can change
> black to white."
> What are your goals with this paper? If you seem to have gotten a
> hostile response, then keep in mind that this is a ten-year-old debate
> in this, and other, on-line forums, and that despite your previous
> "White House Privacy Czarness," you don't have any information
> security background.
I plead guilty to being a lawyer and law professor who has
practiced law, worked in the government, and taught for a bunch of
years. Small measures of self-defense on my lack of infosec background:
(1) I've taught semester courses on the Law of Cybersecurity twice in
the past two years. (2) I presented earlier versions of this paper
before technical audiences that included Bruce Schneier, Matt Blaze, and
lots of other IT experts and stayed up late at night trying to learn
from them. (That doesn't mean they agree with the paper, but a lot of
earlier flaws have been fixed.) (3) In government, I worked daily with
the people in OMB who were responsible for computer security for the
federal government. (4) For the past couple of years I have been on the
Microsoft Trustworthy Computing Academic Advisory Board, with IT experts
including Eugene Spafford and a bunch more. That has immersed me in a
lot of security discussions, and I have continued to talk with many Open
Source programmers as well.
> In addition, legal academia often provides a lot of the
> background for actual law. The laws (DMCA, etc.) in this area are
> horribly dysfunctional, and if based on "research" such as your paper,
> are only going to get more so. Furthermore, these awful but well-meaning
> laws directly impact the freedom of many people, hinder business, and
> generally cause misfortune even to the causes they claim to serve,
> such as "Homeland Security (tm)".
> If, as is suspected, you are trying to begin a legal framework for
> future laws which will put penalties on the disclosure of certain
> kinds of information, or the groundwork for a government agency to
> mandate information security on private citizens, then you can expect
> a long bloody fight in this, and every other, arena.
My belief is that the Department of Homeland Security and the
current Administration generally have gone far overboard in their
insistence on secrecy. The paper, by clarifying the military/intel
assumptions, seeks to show the relatively limited set of conditions
where the secrecy approach holds true in a networked world. Readers of
FD understandably are concerned that I am a secrecy nut, but in the
policy debates I am in fact much more likely to be supporting the
Freedom of Information Act and other openness initiatives than I am to
support secrecy and over-classification. More at
> The flaw in your specific example [about a software program freezing
> up when it is attacked] is that every program can be run as
> many times as you need to "attack" it. You would never need more than
> one copy.
First, there are times when you cannot attack the program over
and over. For instance, the software may be running on someone
else's system, where you do not have continuous access. Second, other
people on FD have written to me privately about self-modifying code
that would render Dave Aitel's point untrue. With that said, the
example could be better written.
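Dave's point about unlimited local runs can be made concrete. In the sketch below (the four-letter key and the check function are hypothetical stand-ins, not anything from the paper), a program guarding a secret can simply be executed as many times as the attacker likes, so the whole key space falls to brute force:

```python
import itertools
import string

# Hypothetical stand-in for a protected program: it checks a hidden
# four-letter key. Nothing here is from the paper itself.
SECRET = "wxyz"

def program(guess: str) -> bool:
    """One run of the target program; True means the attack succeeded."""
    return guess == SECRET

# With a local copy there is no lockout and no alarm: the attacker just
# runs the program repeatedly, enumerating every possible key.
attempts = 0
for candidate in itertools.product(string.ascii_lowercase, repeat=4):
    attempts += 1
    if program("".join(candidate)):
        break

print(f"recovered key after {attempts} runs")
```

Against a remote system that locks out after a few failures, the same loop would be stopped almost immediately -- which is exactly the difference in the number of attacks the paper turns on.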
Much more important, though, is that Dave, in trying to refute
my paper, accepts one of its fundamental points. He says "every
program can be run as many times as you need to attack it." Exactly!
The big difference between physical and computer security that I
emphasize is the number of attacks. Dave emphasizes the number of
attacks. Hey, it's a unifying principle that even lawyers and
non-experts can understand in the future! (See separate post today on
why the analogy between physical and cyber security is useful.)
A theme of the paper: when attacks are closer to first-time
attacks (when they have high uniqueness), then secrecy can be an
effective tool. When there is low uniqueness, security through
obscurity is BS. And many, many cyberattacks fall into the second
category.
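The uniqueness theme has a simple closed form. If each probe independently succeeds with probability p (the figure below is an assumed number for illustration, not from the paper), then the attacker's overall chance after k allowed attempts is 1 - (1 - p)^k. Secrecy matters when k is small (a first-time attack) and washes out when k is effectively unlimited:

```python
# Probability that at least one of k independent probes succeeds,
# given a per-probe success chance p. Illustrative numbers only.
def p_success(p: float, k: int) -> float:
    return 1 - (1 - p) ** k

p = 0.001  # assumed chance a single probe finds the hidden flaw
print(f"first-time attack (k=1):     {p_success(p, 1):.4f}")
print(f"rate-limited (k=10):         {p_success(p, 10):.4f}")
print(f"unlimited runs (k=100000):   {p_success(p, 100_000):.4f}")
```

With k = 1 the defender's obscurity buys almost total protection; with a hundred thousand free tries the attacker succeeds almost surely, whatever is hidden.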
> The paper goes into some sidetrack about people trying to find the
> hidden gems in video games - an activity that may or may not have
> something to do with computer security, but is clearly irrelevant
> ("fluffy") in this context.
My students love the video game part when they read it -- it
helps them see the similar patterns of cyberattacks, physical attacks,
and video game "attacks."
> Also, the paper doesn't do a good job of proving that the Efficient
> Capital Markets Hypothesis is relevant to the discussion. It's clearly
> true that attackers will gain a lot from disclosure, but the Open
> Source model doesn't care, because they only have one way to fix their
> software - disclose bugs. The paper even goes so far as to say the ECMH
> doesn't apply. But if it doesn't apply, why mention it? (page 30 says
> that the paper was simply suggesting it as an area for further
> "research", but that would make a better footnote than paper section).
> Adding to the fuzziness feel is the way the paper reaches for an
> analogy in another social science, and fails.
Some discussion on FD and elsewhere assumes that vulnerabilities
will be found very, very efficiently -- if the flaw is there, then it's
a matter of only a short wait before someone finds the flaw. In talking
with people who write software, however, I was repeatedly struck by
their observation that it takes considerable hard work and expertise to
find new vulnerabilities. The ECMH discussion gives reasons for
thinking that vulnerability discovery will, in some settings, be less
instantaneous than many seem to have assumed.
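One way to make the "less instantaneous" point concrete (my own toy model and notation, not the paper's): treat each searcher as having a small independent per-week probability q of finding a given flaw. The time until someone finds it is then geometric, and even many searchers leave a meaningful expected wait:

```python
# Toy model: n independent searchers, each with a small per-week
# probability q of finding a given flaw. Illustrative numbers only.
def expected_weeks(q: float, n: int) -> float:
    per_week = 1 - (1 - q) ** n          # chance someone succeeds this week
    return 1 / per_week                  # mean of a geometric distribution

print(expected_weeks(0.01, 1))   # lone searcher: about 100 weeks
print(expected_weeks(0.01, 20))  # twenty searchers: about 5.5 weeks
```

Discovery accelerates with more eyes, but it is a costly search with a waiting time, not the near-instant arbitrage that a strong efficient-markets picture would suggest.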
Paper at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=531782
Full-Disclosure is hosted and sponsored by Secunia.