This page features general information about computer security. Information is organized by source and each section is organized by topic. See the Table of Contents for a more detailed look at the organization of this site.
NOTE: This site is updated infrequently so some information and links may be out of date.
Advisories
A number of groups from around the world provide information about security vulnerabilities in different computer operating systems, along with methods to remove or reduce the danger of particular vulnerabilities.
Documents
Many articles about various topics in computer and network security have been written and published on the Internet.
Electronic Magazines, Newsletters and News Sites
There are some magazines, newsletters and news sites available online that provide timely information about computer security.
Frequently Asked Questions (FAQ)
A FAQ is a summary document, written by knowledgeable individuals on a particular topic, that collects commonly requested information about that topic.
Groups and Organizations
A number of computer security organizations exist that provide information to the public or to their members.
Mailing Lists
Mailing lists provide a dialogue on areas of interest to their members.
Newsgroups
USENET newsgroups are discussion groups that can be useful for obtaining current information on a specific topic. Some newsgroups are a better source of information than others.
Request for Comments (RFC) on computer and network security topics
An RFC is a document from the Internet Engineering Task Force (IETF) containing information about a proposed standard.
Software
A large amount of software is available to improve the security of a system.
World Wide Web (WWW) Sites
Many WWW sites provide a large amount of information about various topics in computer security. Some of these sites are simply large indexes but others contain a collection of information on a specific topic.
http://www.jmu.edu/computing/security/
This article describes how security can be achieved through design and engineering. Please see the computer insecurity article for an alternative approach that describes the current battlefield of computer security exploits and defenses.
Computer security is a field of computer science concerned with the control of risks related to computer use.
The means traditionally taken to realize this objective is to attempt to create a trusted and secure computing platform, designed so that agents (users or programs) can only perform actions that have been allowed. This involves specifying and implementing a security policy. The actions in question can be reduced to operations of access, modification and deletion. Computer security can be seen as a subfield of security engineering, which looks at broader security issues in addition to computer security.
In a secure system, the authorised users of that system are still able to do what they should be able to do. One might be able to secure a computer against any misuse only by using extreme measures:
The only truly secure system is one that is powered off, cast in a block of concrete and sealed in a lead-lined room with armed guards - and even then I have my doubts.
Eugene H. Spafford, director of the Purdue Center for Education and Research in Information Assurance and Security. [1]
However, this would not be regarded as a useful secure system.
It is important to distinguish the techniques used to increase a system's security from the issue of that system's security status. In particular, systems which contain fundamental flaws in their security designs cannot be made secure without compromising their usability. Consequently, most computer systems cannot be made secure even after the application of extensive "computer security" measures. Furthermore, if they are made secure, often it is to the detriment of usability.
Contents
1 Secure Operating System Context
2 Computer Security By Design
3 Early History of Security By Design
4 Secure Coding
5 Techniques for Creating Secure Systems
6 Capabilities vs. ACLs
7 Other Uses of the Term "trusted"
8 Notable Persons in Computer Security
9 See Also
10 References
11 External Links
Secure Operating System Context
One context of the term computer security is its use to describe the technology needed to implement a secure operating system. Much of this technology is based on science developed in the 1980s and used to produce what may be some of the most impenetrable operating systems ever. Although the science remains valid, the technology is almost inactive today, perhaps because it is complex or not widely understood. Such highly secure operating systems are based on operating system kernel technology that can guarantee that certain security policies are absolutely enforced on an operating environment; an example of such a security policy is the Bell-LaPadula model. The strategy is based on coupling special microprocessor hardware features, often involving the memory management unit, to a correctly implemented operating system kernel. This forms the foundation for a secure operating system which, if certain critical parts are designed and implemented correctly, can ensure that it is impossible for arbitrarily hostile or intelligently subversive applications to violate the security policy. This capability is possible because such systems not only impose a security policy but also completely protect themselves from corruption; ordinary operating systems lack this completeness. The design methodology for producing such secure systems is not an ad hoc, best-effort activity, but a precise, deterministic and logical one.
Systems designed with this methodology represent the state of the art of computer security, and the capability to produce them is not widely known. In sharp contrast to most kinds of software, they meet specifications with verifiable certainty comparable to specifications for size, weight and power. Secure operating systems designed this way are used primarily to protect national security information and military secrets. These are very powerful security tools, and very few secure operating systems have been certified at the highest level (Orange Book A-1) to operate over the range of Top Secret to unclassified (including Honeywell SCOMP, USAF SACDIN, NSA Blacker and Boeing MLS LAN). The assurance of security depends not only on the soundness of the design strategy, but also on the assurance of correctness of the implementation, and therefore there are degrees of security strength defined for COMPUSEC. The Common Criteria quantifies the security strength of products in terms of two orthogonal components: security capability (as a Protection Profile) and assurance level (as an EAL level). For reasons that are the subject of another article, none of these ultra-high-assurance, general-purpose secure operating systems has been produced for decades or certified under the Common Criteria.
Computer Security By Design
Computer security is a logic-based technology. There is no universal standard notion of what secure behavior is. “Security” is a property that is unique to each situation and so must be explicitly defined, in a security policy, if it is to be seriously enforced. Security is not an ancillary function of a computer application, but often what the application doesn’t do. Unless the application is simply trusted to ‘be secure,’ security can only be imposed as a constraint on the application’s behavior from outside of the application. There are several approaches to security in computing, and sometimes a combination of approaches is valid:
1. Trust all the software to abide by a security policy but the software is not trustworthy (this is computer insecurity).
2. Trust all the software to abide by a security policy and the software is validated as trustworthy (by tedious branch and path analysis for example).
3. Trust no software but enforce a security policy with mechanisms that are not trustworthy (again this is computer insecurity).
4. Trust no software but enforce a security policy with trustworthy mechanisms.
Many approaches unintentionally follow 1. Obviously, 1 and 3 lead to failure. Since 2 is expensive and non-deterministic, its use is very limited. Because 4 often relies on hardware mechanisms and avoids abstractions and a multiplicity of degrees of freedom, it is more practical. Combinations of 2 and 4 are often used in a layered architecture, with thin layers of 2 and thick layers of 4.
There are a variety of strategies and techniques used to design in security. There are few, if any, strategies for adding on security after design. Some of the strategies to design in security are discussed in this section.
One technique enforces the principle of least privilege, where an entity has only the privileges that are needed for its function. That way, even if an attacker subverts one part of the system, fine-grained security ensures that it is just as difficult to subvert the rest.
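As a minimal sketch of least privilege in practice (Python; the port, UID and GID values are placeholders, and the program must start as root for the privileged bind to succeed), a network service can perform its one privileged operation first and then permanently drop to an unprivileged account before touching any untrusted input:

    import os
    import socket

    UNPRIVILEGED_UID = 1000   # placeholder: some non-root service account
    UNPRIVILEGED_GID = 1000

    def serve():
        # The only privileged step: binding a port below 1024 requires root
        # on most Unix-like systems.
        listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        listener.bind(("0.0.0.0", 80))
        listener.listen(5)

        # Drop group then user privileges, permanently, before handling input.
        # A later compromise of the request handler yields only an
        # unprivileged process.
        os.setgid(UNPRIVILEGED_GID)
        os.setuid(UNPRIVILEGED_UID)

        while True:
            conn, _ = listener.accept()
            try:
                conn.sendall(b"hello\n")   # stand-in for real request handling
            finally:
                conn.close()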
Furthermore, by breaking the system up into smaller components, the complexity of individual components is reduced, opening up the possibility of using techniques such as automated theorem proving to prove the correctness of crucial software subsystems. This enables a closed-form solution to security that works well when only a single well-characterized property can be isolated as critical, and that property is also amenable to mathematical analysis. Not surprisingly, it is impractical for generalized correctness, which probably cannot even be defined, much less proven. Where formal correctness proofs are not possible, rigorous use of code review and unit testing represents a best-effort approach to making modules secure.
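Where such a critical property can be isolated in a small component, exhaustive review and testing of that component become feasible. A sketch of the idea, using Python's standard unittest module against a deliberately tiny, hypothetical default-deny policy function:

    import unittest

    # A small, isolated policy function: deny unless explicitly allowed.
    ALLOWED = {
        ("alice", "payroll.db"): {"read"},
        ("bob", "payroll.db"): {"read", "write"},
    }

    def is_allowed(user, resource, action):
        return action in ALLOWED.get((user, resource), set())

    class PolicyTests(unittest.TestCase):
        def test_explicit_grants(self):
            self.assertTrue(is_allowed("bob", "payroll.db", "write"))
            self.assertTrue(is_allowed("alice", "payroll.db", "read"))

        def test_everything_else_is_denied(self):
            self.assertFalse(is_allowed("alice", "payroll.db", "write"))
            self.assertFalse(is_allowed("mallory", "payroll.db", "read"))

    if __name__ == "__main__":
        unittest.main()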
The design should use "defense in depth", where more than one subsystem must be compromised to compromise the security of the system and the information it holds. Defense in depth works when subverting one hurdle does not provide a platform for subverting another. The cascading principle also acknowledges that several low hurdles do not make a high hurdle, so cascading several weak mechanisms does not provide the safety of a single stronger mechanism.
Subsystems should default to secure settings, and wherever possible should be designed to "fail secure" rather than "fail insecure" (see fail safe for the equivalent in safety engineering). Ideally, a secure system should require a deliberate, conscious, knowledgeable and free decision on the part of legitimate authorities in order to make it insecure. What constitutes such a decision and what authorities are legitimate is obviously controversial.
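A "fail secure" default can be as simple as making sure that any error along the decision path results in denial rather than access. A minimal sketch, assuming a hypothetical policy service that may be unreachable:

    def policy_lookup(user, action):
        # Hypothetical call to an external policy service; it may raise if the
        # service is unreachable or returns something unexpected.
        ...

    def access_permitted(user, action):
        try:
            return bool(policy_lookup(user, action))
        except Exception:
            # Fail secure: an error in the decision path denies access
            # rather than silently granting it.
            return False

    print(access_permitted("alice", "read"))   # False until a real policy source exists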
In addition, security should not be an all or nothing issue. The designers and operators of systems should assume that security breaches are inevitable in the long term. Full audit trails should be kept of system activity, so that when a security breach occurs, the mechanism and extent of the breach can be determined. Storing audit trails remotely, where they can only be appended to, can keep intruders from covering their tracks. Finally, full disclosure helps to ensure that when bugs are found the "window of vulnerability" is kept as short as possible.
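One way to make an audit trail tamper-evident, in addition to shipping it to append-only remote storage, is to chain each record to a hash of the previous one, so that rewriting history invalidates everything that follows. A minimal sketch (the file name and record format are illustrative):

    import hashlib
    import json
    import time

    def append_audit_record(log_path, event):
        # Find the hash of the last record, if any, to chain the new one to it.
        prev_hash = "0" * 64
        try:
            with open(log_path, "rb") as f:
                lines = f.read().splitlines()
            if lines:
                prev_hash = json.loads(lines[-1])["hash"]
        except FileNotFoundError:
            pass

        record = {"time": time.time(), "event": event, "prev": prev_hash}
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()

        # Append only: an intruder who edits an earlier line breaks the chain
        # of hashes for every later record.
        with open(log_path, "a") as f:
            f.write(json.dumps(record) + "\n")

    append_audit_record("audit.log", "user alice logged in")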
Early History of Security By Design
The early Multics operating system was notable for its early emphasis on computer security by design, and Multics was possibly the very first operating system to be designed as a secure system from the ground up. In spite of this, Multics' security was broken, not once, but repeatedly. The strategy was known as 'penetrate and test' and has become widely known as a non-terminating process that fails to produce computer security. This led to further work on computer security that prefigured modern security engineering techniques producing closed form processes that terminate.
Secure Coding
The majority of software vulnerabilities result from a few known kinds of coding defects. Common software defects include buffer overflows, format string vulnerabilities, integer overflow, and code/command injection.
Some common languages such as C and C++ are vulnerable to all of these defects (see Seacord, "Secure Coding in C and C++"). Other languages, such as Java, are immune to some of these defects, but are still prone to code/command injection and other software defects which lead to software vulnerabilities.
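Code/command injection arises when untrusted input is spliced into a command or query string, and it affects memory-safe languages too. The sketch below (Python with the built-in sqlite3 module; the table and input are invented) contrasts an injectable query with a parameterized one:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

    attacker_input = "x' OR '1'='1"

    # Vulnerable: the untrusted input becomes part of the SQL statement itself,
    # so the OR clause is executed and every row is returned.
    unsafe = conn.execute(
        "SELECT secret FROM users WHERE name = '%s'" % attacker_input
    ).fetchall()

    # Safer: the input is bound as data and never interpreted as SQL,
    # so no rows match the literal string.
    safe = conn.execute(
        "SELECT secret FROM users WHERE name = ?", (attacker_input,)
    ).fetchall()

    print(unsafe, safe)   # [('s3cret',)] []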
Techniques for Creating Secure Systems
The following techniques can be used in engineering secure systems. These techniques, whilst useful, do not of themselves ensure security. One security maxim is "a security system is no stronger than its weakest link".
Automated theorem proving and other verification tools can enable critical algorithms and code used in secure systems to be mathematically proven to meet their specifications.
Thus, simple microkernels can be written so that we can be sure they do not contain any bugs: examples include EROS and Coyotos.
A bigger OS, capable of providing a standard API like POSIX, can be built on a microkernel using small API servers running as normal programs. If one of these API servers has a bug, the kernel and the other servers are not affected: the Hurd is an example.
Cryptographic techniques can be used to defend data in transit between systems, reducing the probability that data exchanged between systems can be intercepted or modified (a TLS sketch follows this list).
Strong authentication techniques can be used to ensure that communication end-points are who they say they are.
Secure cryptoprocessors can be used to leverage physical security techniques into protecting the security of the computer system.
Chain of trust techniques can be used to attempt to ensure that all software loaded has been certified as authentic by the system's designers.
Mandatory access control can be used to ensure that privileged access is withdrawn when privileges are revoked. For example, deleting a user account should also stop any processes that are running with that user's privileges.
Capability and access control list techniques can be used to ensure privilege separation and mandatory access control. The next sections discuss their use.
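As an illustration of the cryptographic-protection and endpoint-authentication items above, the sketch below uses Python's standard ssl module to wrap a TCP connection in TLS; the server name is just an example:

    import socket
    import ssl

    # Default context: certificate verification against the system CA store.
    context = ssl.create_default_context()

    with socket.create_connection(("example.org", 443)) as raw_sock:
        # wrap_socket performs the TLS handshake and checks the hostname against
        # the server certificate, authenticating the far endpoint and encrypting
        # all subsequent traffic.
        with context.wrap_socket(raw_sock, server_hostname="example.org") as tls:
            tls.sendall(b"GET / HTTP/1.1\r\nHost: example.org\r\nConnection: close\r\n\r\n")
            print(tls.recv(200))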
Beyond design-time techniques, a number of practical measures also help secure systems in operation:
Don't run an application with known security flaws. Either leave it turned off until it can be patched or otherwise fixed, or delete it and replace it with some other application. Publicly known flaws are the main entry point used by worms to automatically break into a system and then spread to other systems connected to it. The security website Secunia provides a search tool for unpatched known flaws in popular products.
Cryptographic techniques involve transforming information, scrambling it so that it becomes unreadable during transmission. The intended recipient can unscramble the message, but eavesdroppers cannot.
Backups are another way of securing your information; they are a copy of all your important computer files kept in another location. These files are kept on hard disks, CD-Rs, CD-RWs, and tapes. Backups can be kept in a multitude of locations; suggested places include a fireproof, waterproof and heat-proof safe, or a separate, offsite location from that in which the original files are kept. Some individuals and companies also keep their backups in safe deposit boxes inside bank vaults. A further option is to use one of the file hosting services that back up files over the Internet for both businesses and individuals.
Backups are also important for reasons other than security. Natural disasters, such as earthquakes, hurricanes, or tornadoes, may strike the building where the computer is located; the building may catch fire, or an explosion may occur. There needs to be a recent backup at an alternate secure location in case of such a disaster. The backup needs to be moved between the geographic sites in a secure manner, so as to prevent it from being stolen.
Anti-virus software consists of computer programs that attempt to identify, thwart and eliminate computer viruses and other malicious software (malware).
Firewalls are systems which help protect computers and computer networks from attack and subsequent intrusion by restricting the network traffic which can pass through them, based on a set of system administrator defined rules.
Access authorization restricts access to a computer to a group of users through the use of authentication systems. These systems can protect either the whole computer, such as through an interactive logon screen, or individual services, such as an FTP server. There are many methods for identifying and authenticating users, such as passwords, identification cards and, more recently, smart cards and biometric systems (a password-handling sketch follows this list).
Encryption is used to protect your message from the eyes of others. It can be done in several ways: switching the characters around, replacing characters with others, or even removing characters from the message. These techniques have to be used in combination to make the encryption secure enough, that is to say, sufficiently difficult to crack. Public key encryption is a refined and practical way of doing encryption. It allows, for example, anyone to write a message for a list of recipients, and only those recipients will be able to read that message.
Intrusion-detection systems can scan a network for people who are on the network but should not be there, or who are doing things that they should not be doing, for example trying many passwords to gain access to the network.
Social engineering awareness - Keeping yourself and your employees aware of the dangers of social engineering and/or having a policy in place to prevent social engineering can reduce successful breaches of your network and servers.
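For the access-authorization and encryption items above, one widely used safeguard is to store only a salted, slow hash of each password rather than the password itself, and to compare digests in constant time. A minimal sketch using Python's standard library (the iteration count is illustrative, not a recommendation):

    import hashlib
    import hmac
    import os

    def hash_password(password, salt=None, iterations=200_000):
        salt = salt or os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
        return salt, iterations, digest

    def verify_password(password, salt, iterations, stored_digest):
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
        # Constant-time comparison avoids leaking how many bytes matched.
        return hmac.compare_digest(candidate, stored_digest)

    salt, iters, digest = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, iters, digest))  # True
    print(verify_password("guess", salt, iters, digest))                         # False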
Capabilities vs. ACLs
Within computer systems, the two fundamental means of enforcing privilege separation are access control lists (ACLs) and capabilities. The semantics of ACLs have been proven to be insecure in many situations (e.g., Confused deputy problem). It has also been shown that ACL's promise of giving access to an object to only one person can never be guaranteed in practice. Both of these problems are resolved by capabilities. This does not mean practical flaws exist in all ACL-based systems — only that the designers of certain utilities must take responsibility to ensure that they do not introduce flaws.
Unfortunately, for various historical reasons, capabilities have been mostly restricted to research operating systems and commercial OSes still use ACLs. Capabilities can, however, also be implemented at the language level, leading to a style of programming that is essentially a refinement of standard object-oriented design. An open source project in the area is the E language [2].
The Cambridge CAP computer demonstrated the use of capabilities, both in hardware and software, in the 1970s, so this technology is hardly new. A reason for the lack of adoption of capabilities may be that ACLs appeared to offer a 'quick fix' for security without pervasive redesign of the operating system and hardware.
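The contrast can also be sketched at the language level, in the style described above: with an ACL the object consults a list of identities on every access, while with a capability the unforgeable reference itself carries the authority and can be delegated by handing it over. A simplified Python sketch (class and variable names are invented):

    # ACL style: the object checks the caller's identity against an attached list.
    class AclFile:
        def __init__(self, contents, acl):
            self._contents = contents
            self._acl = acl                      # e.g. {"alice": {"read"}}

        def read(self, user):
            if "read" not in self._acl.get(user, set()):
                raise PermissionError(user)
            return self._contents

    # Capability style: holding a reference that can read *is* the permission.
    class ReadCapability:
        def __init__(self, contents):
            self._contents = contents

        def read(self):
            return self._contents

    acl_file = AclFile("payroll data", {"alice": {"read"}})
    print(acl_file.read("alice"))            # allowed because "alice" is listed

    cap = ReadCapability("payroll data")     # whoever holds `cap` may read
    print(cap.read())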
The most secure computers are those not connected to the Internet and shielded from any interference. In the real world, the strongest security comes from operating systems where security is not an add-on, such as OS/400 from IBM. OS/400 almost never shows up in lists of vulnerabilities, for good reason: years may elapse between one problem needing remediation and the next.
A good example of a current secure system is EROS, but see also the article on secure operating systems. TrustedBSD is an example of an open-source project with a goal, among other things, of building capability functionality into the FreeBSD operating system. Much of the work is already done.
Other Uses of the Term "trusted"
The term "trusted" is often applied to operating systems that meet different levels of the common criteria, some of which are discussed above as the techniques for creating secure systems.
A computer industry group led by Microsoft has used the term "trusted system" to include making computer hardware that could impose restrictions on how people use their computers. The project is called the Trusted Computing Group (TCG). See also Next-Generation Secure Computing Base.
Computer security is a highly complex field, and it is relatively immature, except on certain very secure systems that never make it into the news media because nothing ever goes wrong that can be publicized, and for which there is not much literature because the security details are proprietary. The ever-greater amounts of money dependent on electronic information make protecting it a growing industry and an active research topic.
Notable Persons in Computer Security
See List of Computer security specialists and List of Cryptographers
See Also
Authentication
Authorization
Cryptography
Computer security model
Differentiated security
Internet Firewalls
Network security
Data security
Formal methods
Identity management
Internet privacy
Cyber security standards
Wireless LAN Security
Timeline of hacker history
Information Leak Prevention