Terms in this set (553)

A series of actions to be taken in order to make it hard for an attacker to successfully attack computers in a network environment. Hardening refers to providing various means of protection in a computer system. Protection is provided in various layers and is often referred to as defense in depth. Protecting in layers means to protect at the host level, the application level, the operating system level, the user level, the physical level and all the sublevels in between. Each level requires a unique method of security.

A hardened computer system is a more secure computer system.

Hardening is also known as system hardening.

Hardening's goal is to eliminate as many risks and threats to a computer system as possible. Hardening activities for a computer system can include the following (a brief Python audit sketch follows the list):

•Keeping security patches and hot fixes updated
•Monitoring security bulletins that are applicable to a system's operating system and applications
•Installing a firewall
•Closing unneeded network ports, such as unused server ports
•Not allowing file sharing among programs
•Installing virus and spyware protection, including an anti-adware tool so that malicious software cannot gain access to the computer on which it is installed
•Keeping a backup of the computer system, such as on an external hard drive
•Disabling cookies
•Creating strong passwords
•Never opening emails or attachments from unknown senders
•Removing unnecessary programs and user accounts from the computer
•Using encryption where possible
•Hardening security policies, such as local policies governing how often a password must be changed, how long it must be and what format it must follow
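
As a concrete illustration of two items on this list, the sketch below (Python, assuming a Linux host; the port choices and the /etc/login.defs path are examples rather than requirements) checks for legacy listening ports and reads the password-aging policy. It is only an audit aid, not a hardening tool in itself.

    # Minimal audit sketch for two items above: legacy listening ports and
    # the password-aging policy. Assumes a Linux host with /etc/login.defs.
    import re
    import socket

    def port_is_open(host: str, port: int, timeout: float = 0.5) -> bool:
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def max_password_age(path: str = "/etc/login.defs") -> int | None:
        """Return PASS_MAX_DAYS from login.defs, or None if it is not set."""
        try:
            with open(path) as f:
                for line in f:
                    match = re.match(r"\s*PASS_MAX_DAYS\s+(\d+)", line)
                    if match:
                        return int(match.group(1))
        except FileNotFoundError:
            pass
        return None

    if __name__ == "__main__":
        for port in (21, 23, 445):  # FTP, Telnet and SMB: common candidates for closing
            if port_is_open("127.0.0.1", port):
                print(f"WARNING: port {port} is listening locally")
        age = max_password_age()
        if age is None or age > 90:
            print("WARNING: password expiry is unset or longer than 90 days")
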
A Certificate Revocation List (CRL) is a list of digital certificates that have been revoked by the issuing Certificate Authority (CA) before their scheduled expiration date and should no longer be trusted. CRLs are a type of blacklist and are used by various endpoints, including Web browsers, to verify whether a certificate is valid and trustworthy. Digital certificates are used in the encryption process to secure communications, most often by using the TLS/SSL protocol. The certificate, which is signed by the issuing Certificate Authority, also provides proof of the identity of the certificate owner.

When a Web browser makes a connection to a site using TLS, the Web server's digital certificate is checked for anomalies or problems; part of this process involves checking that the certificate is not listed in a Certificate Revocation List. These checks are crucial steps in any certificate-based transaction because they allow a user to verify the identity of the owner of the site and discover whether the Certificate Authority still considers the digital certificate trustworthy.

The X.509 standard defines the format and semantics of a CRL for a public key infrastructure. Each entry in a Certificate Revocation List includes the serial number of the revoked certificate and the revocation date. The CRL file is signed by the Certificate Authority to prevent tampering. Optional information includes a time limit if the revocation applies for only a period of time and a reason for the revocation. CRLs contain certificates that have either been irreversibly revoked (revoked) or that have been marked as temporarily invalid (hold).
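
As a sketch of how software consumes this format, the snippet below loads a CRL and looks up a single serial number. It assumes the third-party Python cryptography package and a DER-encoded CRL saved locally as ca.crl; the file name and the serial number are illustrative.

    # Parse an X.509 CRL and look up one certificate serial number.
    # Assumes the third-party "cryptography" package; ca.crl and the
    # serial number below are illustrative.
    from cryptography import x509

    with open("ca.crl", "rb") as f:
        crl = x509.load_der_x509_crl(f.read())

    print("Issued by:", crl.issuer.rfc4514_string())
    print("This update:", crl.last_update, "Next update:", crl.next_update)

    serial_to_check = 0x1A2B3C  # illustrative serial number
    entry = crl.get_revoked_certificate_by_serial_number(serial_to_check)
    if entry is None:
        print("Serial is not on this CRL")
    else:
        print("Revoked on:", entry.revocation_date)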

Digital certificates are revoked for many reasons. If a CA discovers that it has improperly issued a certificate, for example, it may revoke the original certificate and reissue a new one. Or if a certificate is discovered to be counterfeit, the CA will revoke it and add it to the CRL. The most common reason for revocation occurs when a certificate's private key has been compromised. Other reasons for revoking a certificate include the compromise of the issuing CA, the owner of the certificate no longer owning the domain for which it was issued, the owner of the certificate ceasing operations entirely or the original certificate being replaced with a different certificate from a different issuer.


The problem with Certificate Revocation Lists, as with all blacklists, is that they are difficult to maintain and are an inefficient method of distributing critical information in real time. When a certificate authority receives a CRL request from a browser, it returns a complete list of all the revoked certificates that the CA manages. The browser must then parse the list to determine if the certificate of the requested site has been revoked. Although the CRL may be updated as often as hourly, this time gap could allow a revoked certificate to be accepted, particularly because CRLs are cached to avoid incurring the overhead involved with repeatedly downloading them. Also, if the CRL is unavailable, then any operations depending upon certificate acceptance will be prevented and that may create a denial of service.
A virtual private network (VPN) is a technology that creates an encrypted connection over a less secure network. The benefit of using a secure VPN is that it ensures an appropriate level of security for the connected systems when the underlying network infrastructure alone cannot provide it. The justification for using VPN access instead of a private network usually boils down to cost and feasibility: It is either not feasible to have a private network -- e.g., for a traveling sales rep -- or it is too costly to do so. The most common types of VPNs are remote-access VPNs and site-to-site VPNs.

A remote-access VPN uses a public telecommunication infrastructure like the internet to provide remote users secure access to their organization's network. This is especially important when employees are using a public Wi-Fi hotspot or other untrusted paths to the internet to connect to their corporate network. A VPN client on the remote user's computer or mobile device connects to a VPN gateway on the organization's network. The gateway typically requires the device to authenticate its identity. Then, it creates a network link back to the device that allows it to reach internal network resources -- e.g., file servers, printers and intranets -- as though it were on that network locally.

A remote-access VPN usually relies on either IPsec or Secure Sockets Layer (SSL) to secure the connection, although SSL VPNs are often focused on supplying secure access to a single application, rather than to the entire internal network. Some VPNs provide Layer 2 access to the target network; these require a tunneling protocol like PPTP or L2TP running across the base IPsec connection.
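
An SSL VPN builds on the same TLS handshake and certificate verification that any TLS client performs before application data flows. The sketch below is not a VPN; it shows only that underlying verification step using Python's standard ssl module, with vpn.example.com standing in for a real gateway.

    # Not a VPN: only the TLS handshake and certificate-verification step
    # an SSL VPN client performs before tunneling traffic.
    # "vpn.example.com" is an illustrative host name.
    import socket
    import ssl

    context = ssl.create_default_context()  # verifies the server's certificate chain

    with socket.create_connection(("vpn.example.com", 443), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname="vpn.example.com") as tls:
            print("Negotiated:", tls.version(), tls.cipher())
            print("Server certificate subject:", tls.getpeercert()["subject"])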



A site-to-site VPN uses a gateway device to connect the entire network in one location to the network in another -- usually a small branch connecting to a data center. End-node devices in the remote location do not need VPN clients because the gateway handles the connection. Most site-to-site VPNs connecting over the internet use IPsec. It is also common to use carrier MPLS clouds, rather than the public internet, as the transport for site-to-site VPNs. Here, too, it is possible to have either Layer 3 connectivity (MPLS IP VPN) or Layer 2 (Virtual Private LAN Service, or VPLS) running across the base transport.

VPNs can also be defined between specific computers, typically servers in separate data centers, when security requirements for their exchanges exceed what the enterprise network can deliver. Increasingly, enterprises also use VPN connections in either remote-access mode or site-to-site mode to connect -- or connect to -- resources in a public infrastructure-as-a-service environment. Newer hybrid-access scenarios put the VPN gateway itself in the cloud, with a secure link from the cloud service provider into the internal network.
Network Address Translation (NAT) is the process by which a network device, usually a firewall, assigns a public address to a computer (or group of computers) inside a private network. The main use of NAT is to limit the number of public IP addresses an organization or company must use, for both economy and security purposes.

The most common form of network translation involves a large private network using addresses in a private range (10.0.0.0 to 10.255.255.255, 172.16.0.0 to 172.31.255.255, or 192.168.0.0 to 192.168.255.255). The private addressing scheme works well for computers that only have to access resources inside the network, like workstations needing access to file servers and printers. Routers inside the private network can route traffic between private addresses with no trouble. However, to access resources outside the network, like the Internet, these computers need a public address so that responses to their requests can return to them. This is where NAT comes into play.
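
Python's standard ipaddress module knows these reserved ranges, so a quick check of which side of the public/private split an address falls on looks like this:

    # Check which addresses fall in the private (RFC 1918) ranges.
    import ipaddress

    for addr in ("10.0.0.5", "172.16.40.1", "192.168.1.20", "8.8.8.8"):
        ip = ipaddress.ip_address(addr)
        print(addr, "is private" if ip.is_private else "is public")
    # The first three addresses print "is private"; 8.8.8.8 prints "is public".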

Internet requests that require Network Address Translation (NAT) are quite complex but happen so rapidly that the end user rarely knows it has occurred. A workstation inside a network makes a request to a computer on the Internet. Routers within the network recognize that the request is not for a resource inside the network, so they send the request to the firewall. The firewall sees the request from the computer with the internal IP. It then makes the same request to the Internet using its own public address, and returns the response from the Internet resource to the computer inside the private network. From the perspective of the resource on the Internet, it is sending information to the address of the firewall. From the perspective of the workstation, it appears that communication is directly with the site on the Internet. When NAT is used in this way, all users inside the private network appear to share the same public IP address when they access the Internet. That means only one public address is needed for hundreds or even thousands of users.
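
A toy model of the bookkeeping involved: the firewall keeps a translation table that maps each internal address and port to a port on its single public address and rewrites packets in both directions. The sketch below shows only that table logic; the addresses are illustrative and this is not a real NAT implementation.

    # Toy port-address-translation (PAT) table: many internal hosts share one
    # public address and are told apart by the public source port.
    PUBLIC_IP = "203.0.113.10"  # documentation-range address standing in for the firewall

    class NatTable:
        def __init__(self) -> None:
            self._next_port = 40000
            self._outbound = {}  # (internal_ip, internal_port) -> public_port
            self._inbound = {}   # public_port -> (internal_ip, internal_port)

        def translate_out(self, internal_ip: str, internal_port: int):
            """Allocate (or reuse) a public port for an outbound flow."""
            key = (internal_ip, internal_port)
            if key not in self._outbound:
                self._outbound[key] = self._next_port
                self._inbound[self._next_port] = key
                self._next_port += 1
            return PUBLIC_IP, self._outbound[key]

        def translate_in(self, public_port: int):
            """Map a reply on a public port back to the internal host."""
            return self._inbound[public_port]  # KeyError for unsolicited traffic

    nat = NatTable()
    print(nat.translate_out("192.168.1.20", 51515))  # ('203.0.113.10', 40000)
    print(nat.translate_in(40000))                   # ('192.168.1.20', 51515)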

There are other uses for Network Address Translation (NAT) beyond simply allowing workstations with internal IP addresses to access the Internet. In large networks, some servers may act as Web servers and require access from the Internet. These servers are assigned public IP addresses on the firewall, allowing the public to access the servers only through that IP address. However, as an additional layer of security, the firewall acts as the intermediary between the outside world and the protected internal network. Additional rules can be added, including which ports can be accessed at that IP address. Using NAT in this way allows network engineers to more efficiently route internal network traffic to the same resources, and allow access to more ports, while restricting access at the firewall. It also allows detailed logging of communications between the network and the outside world.

Additionally, NAT can be used to allow selective access to the outside of the network, too. Workstations or other computers requiring special access outside the network can be assigned specific external IPs using NAT, allowing them to communicate with computers and applications that require a unique public IP address. Again, the firewall acts as the intermediary, and can control the session in both directions, restricting port access and protocols.

NAT is a very important aspect of firewall security. It conserves public IP addresses within an organization, and it allows for stricter control of access to resources on both sides of the firewall.
Kerberos /ˈkərbərɒs/ is a computer network authentication protocol which works on the basis of 'tickets' to allow nodes communicating over a non-secure network to prove their identity to one another in a secure manner. The protocol was named after the character Kerberos (or Cerberus) from Greek mythology, the ferocious three-headed guard dog of Hades (hellhound).

Kerberos is a protocol for authenticating service requests between trusted hosts across an untrusted network, such as the internet. Kerberos is built into all major operating systems, including Microsoft Windows, Apple OS X, FreeBSD and Linux.

Since Windows 2000, Microsoft has incorporated the Kerberos protocol as the default authentication method in Windows, and it is an integral component of the Windows Active Directory service. Broadband service providers also use Kerberos to authenticate DOCSIS cable modems and set-top boxes accessing their networks.

Kerberos was originally developed for Project Athena at the Massachusetts Institute of Technology (MIT). The three heads of the Kerberos protocol represent a client, a server and a Key Distribution Center (KDC), which acts as Kerberos' trusted third-party authentication service.

Users, machines and services using Kerberos need only trust the KDC, which runs as a single process and provides two services: an authentication service and a ticket granting service. KDC "tickets" provide mutual authentication, allowing nodes to prove their identity to one another in a secure manner. Kerberos authentication uses conventional shared secret cryptography to prevent packets traveling across the network from being read or changed and to protect messages from eavesdropping and replay attacks.
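
The toy sketch below illustrates only that shared-secret ticket idea: the KDC holds one long-term key per party and gives the client and the service the same session key without ever sending it in the clear. It is not the Kerberos protocol itself (no timestamps, lifetimes or replay caches), and it assumes the third-party Python cryptography package; the names are illustrative.

    # Toy illustration of the shared-secret ticket idea only; not real Kerberos.
    from cryptography.fernet import Fernet

    client_key = Fernet.generate_key()   # long-term secret shared with the client
    service_key = Fernet.generate_key()  # long-term secret shared with the service

    def kdc_issue(client: str, service: str):
        """The KDC mints a session key and wraps it twice: one copy readable
        only by the client, one (the 'ticket') readable only by the service."""
        session_key = Fernet.generate_key()
        for_client = Fernet(client_key).encrypt(session_key + b"|" + service.encode())
        ticket = Fernet(service_key).encrypt(session_key + b"|" + client.encode())
        return for_client, ticket

    for_client, ticket = kdc_issue("alice", "fileserver")
    client_session = Fernet(client_key).decrypt(for_client).split(b"|")[0]
    service_session = Fernet(service_key).decrypt(ticket).split(b"|")[0]
    assert client_session == service_session  # both ends now hold the same session key
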
Trusted computing is a broad term that refers to technologies and proposals for resolving computer security problems through hardware enhancements and associated software modifications. Several major hardware manufacturers and software vendors, collectively known as the Trusted Computing Group (TCG), are cooperating in this venture and have come up with specific plans. The TCG develops and promotes specifications for the protection of computer resources from threats posed by malicious entities without infringing on the rights of end users.

Microsoft defines trusted computing by breaking it down into four technologies, all of which require the use of new or improved hardware at the personal computer (PC) level:
•Memory curtaining -- prevents programs from inappropriately reading from or writing to each other's memory.
•Secure input/output (I/O) -- addresses threats from spyware such as keyloggers and programs that capture the contents of a display.
•Sealed storage -- allows computers to securely store encryption keys and other critical data.
•Remote attestation -- detects unauthorized changes to software by generating encrypted certificates for all applications on a PC (a toy measurement sketch follows this list).
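
As a toy sketch of the measurement idea behind remote attestation (hashing software into a log that a verifier can compare against known-good values), with no actual TPM involved and an illustrative file path:

    # Toy measurement log in the spirit of remote attestation: hash files and
    # record the digests for a verifier to compare. No TPM is involved, and
    # the path below is illustrative.
    import hashlib

    def measure(paths):
        """Return a SHA-256 digest for each file in the measurement list."""
        log = {}
        for path in paths:
            digest = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(65536), b""):
                    digest.update(chunk)
            log[path] = digest.hexdigest()
        return log

    if __name__ == "__main__":
        for path, value in measure(["/bin/ls"]).items():
            print(path, value)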

In order to be effective, these measures must be supported by advances and refinements in the software and operating systems (OSs) that PCs use.

Within the larger realm of trusted computing, the trusted computing base (TCB) encompasses everything in a computing system that provides a secure environment. This includes the OS and its standard security mechanisms, computer hardware, physical locations, network resources and prescribed procedures.

The term trusted PC refers to the industry ideal of a PC with built-in security mechanisms that place minimal reliance on the end user to keep the machine and its peripheral devices secure. The intent is that, once effective mechanisms are built into hardware, computer security will be less dependent on the vigilance of individual users and network administrators than it has historically been. Concerns have arisen, however, about possible loss of user privacy and autonomy as a result of such changes.