CISSP - 3) Security Engineering Domain
Engineering and Management of Security (Second largest domain in terms of number of covered topics)
Terms in this set (192)
Implement and manage engineering processes using secure design principles
Understand the fundamental concepts of security models
Confidentiality, Integrity, and Multi-level Models
An International systems engineering standard covering processes and life cycle stages:
Left Side represents concept development and the decomposition of requirements into functions and physical entities that can be architected, designed, and developed.
Right side represents integration of these entities and their ultimate transition into the field, where they are operated and maintained.
Systems Engineering topics
Technical: Requirements definition, requirements analysis, architectural design, implementation, integration, verification, validation, transition.
Managerial: Decision analysis, technical planning, technical assessment, requirements management, risk management, configuration management, Interface management, technical data management.
Securing Information and systems.
Requires the use of multiple, overlapping protection approaches addressing the people, technology, and operational aspects of information systems. Thus the failure or circumvention of any individual protection approach will not leave the system unprotected.
Common Criteria - ISO/IEC 15408 was the first truly international product evaluation criteria
Provides a structured methodology for documenting security requirements, documenting and validating security capabilities, and promoting international cooperation in the area of IT Security.
Protection Profiles - are a common set of functional and assurance requirements for a category of vendor products deployed in a particular type of environment.
The vendor product (ToE) is then examined against the specific profile by a third-party lab using common evaluation methodology.
EAL1 - Functionally tested - lowest level
EAL2 - Structurally tested
EAL3 - Methodically tested and checked
EAL4 - Methodically designed, tested, and reviewed
EAL5 - Semi-formally designed and tested
EAL6 - Semi-formally verified design and tested
EAL7 - Formally verified design and tested - highest level
NIST SP 800-27 - Engineering principles for Information Technology Security (A baseline for achieving security)
Five life cycle planning phases:
- Initiation: the need for the system is expressed and its purpose is documented.
- Development/Acquisition: the system is designed, purchased, programmed, developed, or otherwise constructed.
- Implementation: the system is tested and installed.
- Operation/Maintenance: the system performs its work.
- Disposal: involves the disposition of information, HW and SW.
- Security foundation
- Risk based
- Ease of use
- Increase resilience
- Reduce vulnerabilities
- Design with the network in mind
Has its own discrete security methodology
Comprises its own discrete views and viewpoints
Addresses non-normative flows through systems and among applications
Introduces its own normative flows through systems and among applications
Introduces unique, single-purpose components in the design
Calls for its own unique set of skills and competencies of the enterprise and IT architects
ISO/IEC 21827:2008 - Systems Security Engineering Capability Maturity Model (SSE-CMM)
Describes the essential characteristics of an organization's security engineering process that must exist to ensure good security engineering.
- The entire life cycle
- The whole organization
- Concurrent interactions with other disciplines
- Interactions with other organizations
Common System Components: Processors
Processors perform four main tasks:
Fetching: from memory
It is vital that the system provide a means to protect multiple processes, tasks, and threads from other processes/tasks/threads that may contain bugs or exhibit unfriendly actions.
Multitasking - switches from one process to another quickly to speed up processing
Multiprocessing - increases the number of CPUs in the system, where each processor can assume some of the load.
Multithreading - split programs into threads; OS time slices the threads and gives one thread some time on the CPU, then switches to another thread.
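A minimal sketch of multithreading (the thread count and workload here are arbitrary): the OS time-slices several threads of one program, and a lock keeps their access to shared state consistent.

```python
import threading

# Shared state that all threads of this program update.
counter = 0
lock = threading.Lock()

def worker(increments):
    global counter
    for _ in range(increments):
        with lock:          # serialize access to the shared counter
            counter += 1

# Four threads are time-sliced onto the CPU(s) by the OS scheduler.
threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 4 threads x 10,000 increments = 40000
```

Without the lock, the unsynchronized increments could interleave and corrupt the counter, which is exactly the kind of inter-thread interference the text warns about.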
Processor Key Features
Tamper detection sensors
Battery backed logic with a physical mesh
The ability to customize a device with secure boot capabilities.
Secure memory access controller with on-the-fly encrypt and decrypt capabilities
Static and differential power analysis (SPA/DPA) countermeasures
Smart card UART controllers
Regardless of its location, primary storage stores data that has a high probability of being requested by the CPU.
Memory Protection - purpose is to prevent a process from accessing memory that has not been allocated to it.
Segmentation: Dividing memory into segments
Paging: divides the memory address space into equal-sized blocks called pages
Protection Keying - a protection key mechanism divides physical memory up into blocks of a particular size, each of which has an associated numerical value called a protection key.
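The paging and protection-keying mechanics above can be sketched with simple arithmetic (the page size, block layout, and key values are illustrative, not any real MMU):

```python
# Paging: a virtual address splits into a page number and an offset
# within that page.
PAGE_SIZE = 4096  # equal-sized pages, 4 KiB here

def split_address(vaddr):
    """Divide a virtual address into (page number, offset within page)."""
    return vaddr // PAGE_SIZE, vaddr % PAGE_SIZE

# Protection keying: each physical block carries a numeric key, and a
# process may only touch blocks whose key matches its own.
block_keys = {0: 1, 1: 1, 2: 7}   # hypothetical block-to-key assignment

def may_access(block, process_key):
    """Access is allowed only when the process key matches the block key."""
    return block_keys[block] == process_key

page, offset = split_address(10_000)
print(page, offset)        # 10000 = 2*4096 + 1808 -> page 2, offset 1808
print(may_access(2, 7))    # matching key -> True
print(may_access(2, 1))    # mismatched key -> False
```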
Secondary storage - holds data not currently being used by the CPU and is used when data must be stored for an extended period of time using high-capacity nonvolatile storage.
Virtual memory - storing part of the data on secondary storage which is called a virtual page.
Address Space Layout Randomization (ASLR)
Memory protection which involves randomly arranging the positions of key data areas of a program, including the base of the executable and the positions of the stack, heap, and libraries in a process's memory address space. The address space layout randomization is based on the low chance an attacker can guess the locations of randomly placed areas.
Executable space protection is the marking of memory regions as non-executable, thus any attempt to execute machine code will cause exception errors.
ASLR and Executable space protection help prevent buffer overflow attacks like return-to-libc and return-to-plt.
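The core idea of ASLR can be simulated in a few lines (this is a toy model of base-address selection, not a real loader; the address window is hypothetical):

```python
import random

# ASLR picks a randomized, page-aligned base address for the stack,
# heap, and libraries inside a large window, so an attacker has a low
# chance of guessing where they land.
PAGE = 0x1000
LOW, HIGH = 0x7f0000000000, 0x7fffffffffff  # hypothetical user-space window

def random_base(rng):
    """Choose a page-aligned base somewhere in the window."""
    pages = (HIGH - LOW) // PAGE
    return LOW + rng.randrange(pages) * PAGE

rng = random.Random()
base = random_base(rng)
print(hex(base))  # differs on (almost) every run, which is the point
```

The defensive value comes from the size of the window: with billions of candidate pages, a single blind guess at a library's location almost always misses.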
Loads and runs binary programs
Schedules the task swapping
Tracks the physical location of files on the computer's hard drive
Enterprise Security Architecture
Focused on setting the long-term strategy for security services in the enterprise.
- Long-term view of control
- provides a unified vision for common security controls
- leverages existing technology investments
- Provides a flexible approach to current and future threats
- all information is not equal or constant in terms of value and risk over time.
- an efficient security program applies the right technology to protect the most critical assets combined with quality processes that reduce risks to acceptable business levels.
- includes regular management reviews and technology assessments to ensure controls are working as intended and providing feedback so that technology and processes can adapt to changes in value and risk over time.
ESA intended benefits
- Provide guidance to IT architects and senior management to enable them to make better security-related investment and design decisions
- Establish future-state technology architecture for the security environment
- Support, enable and extend security policies and standards
- Describe general security strategies use to guide security-related decisions at technical architecture and solution levels.
- Leverage industry standards and models to ensure security best practices are being applied
- Present and document the various elements of the security architecture in order to ensure proper linkage and alignment.
- Define technology security architecture in relation to other technology domains
- Provide an understanding of the impact on the security posture
- Manage IT solution risk consistently across the project, while leveraging industry best practices
- Reduce costs and improve flexibility by implementing reusable, common security services
- Provide a secure mechanism for end-of-life and decommissioning solutions when necessary.
Common Security Services - a sample taxonomy of services that may be used as building blocks for EAS
Boundary Control Services - concerned with how and whether information is allowed to flow from one set of systems to another or from one state to another - (security zones of control)
Access Control Services - focus on the identification, authentication, and authorization of subject entities as they are deployed and employed to access the organization's assets (reduced-sign-on or single-sign-on).
Integrity Services - focus on the maintenance of high integrity systems and data through automated checking to detect and correct corruption. (i.e., anti virus, content filtering, whitelisting, IPS)
Cryptographic Services - involve public key infrastructure as well as the continued use of PKI functions through external providers. Can include common hashing and encryption services, tools, and technologies.
Audit and Monitoring Services - focus on secured collection, storage, and analysis of audited events through centralized logging as well as events themselves through IDS or similar services.
Zones of control
An area or grouping within which a defined set of security policies and measures are applied to achieve a specific level of security. Used to group together those entities with similar security requirements and levels of risk.
separation of the zones ensures that the capability of accessing or modifying information and systems in a more secure zone does not leak through to a less secure zone.
Defined as encryption that is strong enough to make brute-force attacks impractical because there is a higher work factor than the attacker wants to invest into the attack.
Zachman Framework
Developed a common context for understanding complex architectures. Allows for the communication and collaboration of all entities in the development of the architecture.
"Is a logical structure for identifying and organizing the descriptive representations (models) that are important in the management of enterprises and to the development of the systems, both automated and manual, that comprise them."
Sherwood Applied Business Security Architecture (SABSA)
Holistic life cycle for developing security architecture that begins with assessing business requirements and subsequently creating a "chain of traceability" through the phases of strategy, concept, design, implementation, and metrics. It represents any architecture using six layers
The Open Group Architecture Framework (TOGAF)
provides a common set of terms; an Architecture Development Method (ADM) that describes the step-by-step process employed by TOGAF architects; and an Architecture Content Framework (ACF) that describes standard building blocks and components, as well as numerous reference models.
IT Infrastructure Library (ITIL)
Defines the organizational structure and skill requirements of an IT organization as well as the set of operational procedures and practices that direct IT operations and infrastructure, including security operations.
- Service strategy
- service design
- service transition
- service operations
- continuous service improvement
Types of security models
State Machine Model: a state describes a system at a point in time. The model describes the behavior of a system as it moves between one state and another, from one moment to the next. The purpose is to define which actions will be permitted at any point in time to ensure that a secure state is preserved.
Time is very important: according to the rule set (i.e., the security policy), a modeled system's secure state can only change at distinct points in time, such as when an event occurs or a clock triggers it.
Multilevel Lattice Models: describe strict layers of subjects and objects and define clear rules that allow or disallow interactions between them based on the layers they are in. Subjects are assigned security clearances that define what layer they are assigned to, and objects are classified into similar layers.
Noninterference Models: may be considered a type of multilevel model with a high degree of strictness, severely limiting any higher-classified information from being shared with lower-privileged subjects even when higher-privilege subjects are using the system at the same time.
Thus they address obvious and intentional interactions between subjects and objects, but they also deal with the effects of covert channels that may leak information inappropriately.
The model minimizes leakage that may happen through covert channels because there is a complete separation between security levels; a higher security level has no way to interface with the activities at a lower level.
Matrix-based Models: an access control matrix is a two-dimensional table that allows individual subjects and objects to be related to each other. It is a concise way to represent the capabilities that subjects have when accessing particular objects. Typical access methods for content are read, write, edit, and delete.
Information Flow Models: focus on how information is allowed or not allowed to flow between individual objects. They are used to determine whether information is being properly protected throughout a given process and may be used to identify potential covert channels: unintended information flows between compartments in compartmented systems.
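The access control matrix described above maps naturally onto a dict of dicts (the subjects, objects, and rights here are made up for illustration):

```python
# Rows are subjects, columns are objects, and each cell holds the set
# of access methods that subject has on that object.
matrix = {
    "alice": {"payroll.db": {"read", "write"}, "report.doc": {"read"}},
    "bob":   {"report.doc": {"read", "edit", "delete"}},
}

def allowed(subject, obj, action):
    """Look up the cell for (subject, object) and check the action."""
    return action in matrix.get(subject, {}).get(obj, set())

print(allowed("alice", "payroll.db", "write"))  # True: cell grants write
print(allowed("bob", "payroll.db", "read"))     # False: empty cell denies
```

An empty cell means no access at all, which matches the default-deny reading of the matrix model.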
Bell-LaPadula Confidentiality Model
Uses labels to keep track of clearances and classifications and implements a set of rules to limit interactions between different types of subject and objects.
- Does not address "need-to-know"
- Lattice based model
- State-machine model
- information flow model
No read up; no write down; constrained to its own level
May read down and write up
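The Bell-LaPadula rules reduce to simple level comparisons, sketched here with illustrative labels (the levels themselves are not part of the model):

```python
# Higher number = more sensitive classification.
LEVELS = {"public": 0, "confidential": 1, "secret": 2}

def may_read(subject_level, object_level):
    # Simple Security Property: no read up (read down is permitted)
    return LEVELS[subject_level] >= LEVELS[object_level]

def may_write(subject_level, object_level):
    # *-Property: no write down (write up is permitted)
    return LEVELS[subject_level] <= LEVELS[object_level]

print(may_read("secret", "public"))   # True:  read down is allowed
print(may_read("public", "secret"))   # False: no read up
print(may_write("public", "secret"))  # True:  write up is allowed
print(may_write("secret", "public"))  # False: no write down
```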
Biba Integrity Model
It focuses on ensuring that the integrity of information is being maintained by preventing corruption. It assigns integrity levels to subjects and objects depending on how trustworthy they are considered to be.
#1 goal - prevent data modification by unauthorized parties
- Lattice based model
- State-machine model
- information flow model
No read down; no write up; cannot invoke up
May read up and write down
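Biba's read/write rules are the mirror image of Bell-LaPadula's, since they protect integrity rather than confidentiality. A sketch with illustrative integrity labels:

```python
# Higher number = more trustworthy (higher integrity).
INTEGRITY = {"untrusted": 0, "user": 1, "system": 2}

def may_read(subject, obj):
    # Simple Integrity Property: no read down (read up is permitted)
    return INTEGRITY[subject] <= INTEGRITY[obj]

def may_write(subject, obj):
    # *-Integrity Property: no write up (write down is permitted)
    return INTEGRITY[subject] >= INTEGRITY[obj]

print(may_read("user", "system"))   # True:  reading more trusted data is fine
print(may_read("system", "user"))   # False: no read down
print(may_write("system", "user"))  # True:  write down is allowed
print(may_write("user", "system"))  # False: no write up
```

Note how each comparison is the reverse of its Bell-LaPadula counterpart: Biba stops corruption flowing up, where BLP stops secrets flowing down.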
Clark-Wilson Integrity Model
Recommended a strict definition of well-formed transactions
Access Triple - Subject-Program-Object: subject no longer has direct access to the object.
- Prevent data modification by unauthorized parties
- Prevent unauthorized data modification from authorized parties.
- Maintain internal and External consistency (data reflects the real world)
Lipner Model (Implementation Model)
Combines elements of Bell-LaPadula and Biba together with the idea of job functions or roles in a novel way to protect both confidentiality and integrity.
Assign security levels and functional categories to subjects and objects.
- subjects: Clearance level and job function
- objects: sensitivity of the data or program and its functions are defined according to its classification.
Was the first to separate objects into data and programs.
Brewer-Nash (Chinese Wall) Model (Confidentiality)
Focuses on preventing conflict of interest when a given subject has access to objects with sensitive information associated with two competing parties.
It is unusual because the access control rules change based on subject behavior (i.e., once the subject becomes associated with one client).
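The behavior-dependent rule can be sketched as follows (conflict classes and company names are made up for illustration): once a subject touches one company's data, competitors in the same conflict-of-interest class become off limits.

```python
# Companies grouped into conflict-of-interest classes.
conflict_classes = {"banks": {"BankA", "BankB"}, "oil": {"OilCo"}}

accessed = {}  # subject -> set of companies already accessed

def request_access(subject, company):
    """Grant access unless the subject already accessed a competitor."""
    history = accessed.setdefault(subject, set())
    for members in conflict_classes.values():
        if company in members and history & (members - {company}):
            return False  # a competitor in this class was already accessed
    history.add(company)
    return True

print(request_access("eve", "BankA"))  # True:  first access in the class
print(request_access("eve", "OilCo"))  # True:  different conflict class
print(request_access("eve", "BankB"))  # False: BankA's competitor is walled off
```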
Graham-Denning Model
Primarily concerned with how subjects and objects are created, how subjects are assigned rights or privileges, and how ownership of objects is managed.
A set of objects
A set of subjects: (process or a domain). The domain is the set of constraints controlling how subjects may access objects.
A set of rights: govern how subjects may manipulate the passive objects.
Controls how subjects and objects are created, how subjects are assigned rights
It describes eight primitive protection rights (commands) that subjects can execute to have an effect on other subjects or objects.
Harrison-Ruzzo-Ullman Model (Integrity)
It is composed of a set of generic rights and a finite set of commands, and defines where subjects should be restricted from gaining particular privileges.
Capturing and Analyzing Requirements
An SA should start with establishing key principles and guidelines for the design. Principles are defined as functional statements of belief, mandatory elements that will restrict the overall design and establish the key priorities for protection.
- Functional requirements - address what the design must do or accomplish.
- Nonfunctional requirements - focus on the qualities of the services, including any requirements for reliability and performance.
A security model is a specification that describes the rules to be implemented to support and enforce the security policy.
The security policy is the what while the security model can be the how.
Certification and Accreditation
The objective is to determine how well a system measures up to a preferred level of security in the real world, then make a decision whether to proceed with its use in the enterprise.
Certification phase: - the product or system is tested to see whether it meets the documented requirements
Accreditation Phase: - Management evaluates the capacity of a system to meet the needs of the organization and if so they will accredit it.
Trusted Computer System Evaluation Criteria (TCSEC)
Frequently referred to as the Orange Book, it was the U.S. DoD standard that set basic requirements for the implementation of security protections in computing systems.
TCSEC was used to evaluate, classify, and select computer systems being considered for the processing, storage, and retrieval of sensitive or classified information on military and government systems.
Focused mostly on Confidentiality
Has been superseded by Common Criteria
Trusted computer Base - starts with the principle that there are some functions that simply must be working correctly for security to be possible and consistently enforced in a computing system.
Information Technology Security Evaluation Criteria (ITSEC)
Security requirements are not prescribed in ITSEC. Instead, the consumer or the vendor defines a set of requirements from a menu of possible requirements into a Security Target (ST); vendors develop products (the Target of Evaluation, or ToE) and have them evaluated against that target.
Functional levels - unlike TCSEC, ITSEC also addresses integrity and availability.
Assurance Levels (E levels) - level of confidence
E0 through E6, where E0 is the lowest level.
ISO/IEC 27001 and 27002
Are universally recognized as the standards for sound security practices.
ISO/IEC 27001:2013 is focused on the standardization and certification of an organization's information security management system (ISMS), which is a governance structure supporting an information security program.
- General requirements of the ISMS
- management responsibility
- Internal ISMS audits
- Management review of the ISMS
- ISMS improvement
ISO/IEC 27002:2013 - it provides a "code of practice for information security management" which lists security control objectives and recommends a range of specific security controls according to industry best practices.
This is more of a guideline than a standard.
The recommended control objectives are the "how": they demonstrate the implementation of operational controls.
When no subject can gain access to any object without authorization, this is referred to as complete mediation. It is normally the responsibility of the security kernel implementing the reference monitor concept
Secure Memory Management
With a common pool of memory, a variety of techniques are used to keep subjects isolated from objects and from each other. These include the use of processor states, layering, and data hiding.
- Supervisor state (kernel mode) the processor is operating at the highest privilege level on the system. It can access any system resource and execute both privileged and non-privileged instructions.
- Problem state (user mode) the processor limits the access to system data and hardware granted to the running process.
One of the ways that privileged parts of the system are protected is through the use of discrete layers that control interactions between more privileged and less privileged processes on the system. This is accomplished through a series of concentric rings where the innermost ring is assigned the lowest number and the outermost ring is assigned the highest number. Ring 0 is associated with core system functions, such as the most sensitive parts of the OS kernel, while Ring 3 is associated to end-user applications.
Data Hiding: maintains activities at different security levels to separate these levels from each other. This assists in preventing data at one security level from being seen by processes operating at the other security levels.
Process isolation can be used to prevent individual processes from interacting with each other, even when they are assigned to the same ring. This can be done by providing distinct address spaces for each process.
Naming distinctions are also used to distinguish between different processes, as is virtual mapping, which assigns randomly chosen areas of actual memory to a process to prevent other processes from finding those locations easily.
Encapsulation of processes can be used to isolate them: since an object includes the functions for operating on it, the details of how it is implemented can be hidden.
Abstraction - involves the removal of characteristics from an entity in order to easily represent its essential properties. Abstraction negates the need for users to know the particulars of how an object functions.
Trusted Platform Module
is an example of a specialized cryptoprocessor that provides for secure generation, use, and storage of cryptographic keys. Since each TPM is unique, it is often used to provide hardware authentication using its keys.
Vulnerabilities of SA
Misuse of system privileges
Buffer Overflows and other memory attacks
Denial of Service
System emanations are unintentional electrical, mechanical, optical, or acoustical energy signals that contain information or metadata about information being processed, stored, or transmitted in a system.
TEMPEST - a set of standards designed to shield buildings and equipment to protect them against eavesdropping and passive emanations-gathering attempts.
Red/Black separation - requirements that mean installing physical security controls, such as shielding, between normal unclassified circuits and equipment and classified ones.
Also known as "race conditions," these attacks attempt to take advantage of how a system handles multiple requests.
They are also caused by poorly written code and the adoption of applications without assessing the security posture of the system and how it will be integrated into the existing environment.
A time-of-check to time-of-use (TOC/TOU) attack is a common race condition in programming. The attack involves changing the system between the checking of a condition and the action that results from the check.
Are communications mechanisms hidden from the access control and standard monitoring systems of an information system.
They may use irregular methods of communication such as free space sections of a disk or even the timing of processes to transmit information.
- Storage channels that communicate via a stored object
- Timing channels that modify the timing of events relative to each other.
"The only way to mitigate covert channels is through the secure design of an information system"
Mainframe, Middleware, Embedded Systems
Mainframe - centralized and thus easy to patch, though any unpatched vulnerabilities will be inherited across the entire system, making them more pervasive and dangerous than on other computing platforms. It also requires careful control of privileged and non-privileged subjects to eliminate leakage of information.
Embedded systems - small form factor, with limited processing power. They embed the necessary hardware, firmware, and software into a single platform.
It is frequently more difficult to patch security vulnerabilities in constrained embedded devices.
Pervasive Computing and Mobile Devices
- Mobile devices need antimalware software
- Secure Mobile Communications
- Require strong authentication, use Password controls
- Control of third-party software
- Create separate, secured mobile gateways
- Choose (or require) secure mobile devices, help users lock them down.
- Perform Regular Mobile Security Audits, Penetration Testing.
Refer to NIST Special Publication 800-124 Rev 1
And Appendix A of NIST Special Publication 800-53
Desktops, Laptops, and thin clients
- supported and licensed OS
- Anti-malware and anti-virus
- Host based IDS
- Whole drive or sensitive information encryption
- Changes to OS or new software validated before implemented.
- Operate in limited user mode without admin privileges
- Be part of a continuous vulnerability monitoring program
Mobile Devices - integration with MDM
- Whole device wipe
- Account management
- GPS/WIFI/Cellular location of the device
- OS, application and firmware updates
Device authentication and enrollment
Information archive with integrity validation for legal hold situations
Secure encrypted container technology for organizational system access
Data warehousing: a repository for information collected from a variety of data sources. The data stored in a data warehouse is not used for operational tasks but rather for analytical purposes.
Data marts are smaller versions of data warehouses and may contain information from just a division or only about a specific topic.
Inference: is the ability to deduce (infer) sensitive or restricted information from observing available information.
Aggregation: is combining non-sensitive data from separate sources to create sensitive information.
Data Mining: is a process of discovering information in data warehouses by running queries on the data. Its used to reveal hidden relationships, patterns, and trends in the data warehouse.
- Mountains of data that contain valuable information
- The abundance of cheap commodity computing resources
- "Free" analytics tools (very low to non-existent barriers t acquire). C & I are challenged with these tools.
Challenges - Distributed Computing Architectures
Trust - Key verification, mitigation of trust-based DoS attacks, and content leakage detection within trusted networks.
Privacy - may include remote authentication schemes for wireless network access to data, traffic masking to obfuscate data, anonymization of large-scale data sets, decentralized access control solutions for cloud-based data access.
General security - response mechanisms in the face of fast-spreading-fast-acting intrusion vectors, existence of inconsistent authorization policies and/or user credentials within distributed database access by cloud-based systems, and concerns associated with securely, efficiently, and flexibly sharing data using public key cryptosystems.
Peer-to-peer (P2P) computing - users log into their own computers and data is saved locally or remotely at various sites. There is no central authority that administers user authentication and accounts or manages data. No central server is necessary, although servers may have an assortment of roles in such systems.
Grid computing - is the sharing of CPU and other resources across a network in such a manner that all machines function as one larger computer.
- grid computing is heterogeneous while cluster computing is homogenous
- grid computers can have different operating systems, hardware, and software
- Grid computers are also associated with multi-tasking, where as cluster is devoted to a single task.
- Clusters are most often physically close together with a fast bus or network connecting the nodes, while grid is geographically dispersed.
defined as "a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources, that can be rapidly provisioned and released with minimal management effort or service provider interaction.
- On-demand Self-Service
- Broad Network Access
- Resource Pooling
- Rapid Elasticity
- Measured Service
Software as a Service (SaaS) - the provider gives the consumer access to applications running on a cloud infrastructure (Office 365, etc.)
Platform as a Service (PaaS) - provides the consumer the capability to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages, libraries, services, and tools supported by the provider. The consumer has control over the deployed applications.
Infrastructure as a Service (IaaS) - customer has control over operating systems, storage, and deployed applications and possibly limited control of select networking components (host firewalls)
Private Cloud - cloud infrastructure is provisioned for exclusive use by a single organization comprising multiple consumers.
Community Cloud - cloud infrastructure is provisioned for exclusive use by a specific community of consumers from organizations that have shared concerns.
Public Cloud - is provisioned for open use by the general public.
Hybrid Cloud - is a composition of two or more distinct cloud infrastructures that remain unique entities but are bound together by standardized or proprietary technology that enables data and applications portability.
Key clustering - when different encryption keys generate the same ciphertext from the same plaintext message.
Synchronous encryption - each encryption or decryption request is performed immediately.
Asynchronous encryption - encrypt/decrypt requests are processed in queues. A key benefit is the utilization of hardware devices and multiprocessor systems for cryptographic acceleration.
Digital signature - the message is hashed to produce a message digest.
The message digest is encrypted with the sender's private key and sent along with the message.
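The hash-then-sign flow can be sketched with textbook RSA and tiny toy parameters (n = 61 * 53; these numbers are illustrative only and offer no real security):

```python
import hashlib

# Classic toy RSA key pair: public (n, e), private exponent d.
n, e, d = 3233, 17, 2753

def sign(message: bytes) -> int:
    """Hash the message, then encrypt the digest with the private key."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)

def verify(message: bytes, signature: int) -> bool:
    """Recover the digest with the public key and compare."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest

msg = b"wire $100 to account 42"
sig = sign(msg)
print(verify(msg, sig))            # True: digest matches, message intact
print(verify(b"wire $9999", sig))  # almost surely False: message altered
```

Because only the sender holds d, a valid signature also supports nonrepudiation: anyone with the public key can check it, but only the key holder could have produced it. (With this toy modulus the digest is reduced mod n, so accidental matches are possible; real signatures use full-size keys and padding.)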
Asymmetric cryptography - two different but mathematically related keys are used, where one key is used to encrypt and the other is used to decrypt (e.g., PKI).
Digital certificate - an electronic document that contains the name of an organization or individual, the business address, the digital signature of the CA issuing the certificate, the certificate holder's public key, a serial number, and an expiration date.
Certificate authority (CA)
an entity trusted by one or more users as an authority in a network that issues, revokes, and manages digital certificates.
Registration authority (RA)
performs certificate registration services on behalf of a CA. A single-purpose server responsible for the accuracy of the information contained in the certificate request as well as user validation.
Plaintext or cleartext
a message in its natural format. Human readable and extremely vulnerable from a confidentiality perspective.
Ciphertext or cryptogram
the altered form of a plaintext message, so as to be unreadable for anyone except the intended recipients.
Cryptosystem - represents the entire cryptographic operation, including the algorithm, key, and key management functions.
Encryption - the process of converting a message from plaintext to ciphertext.
Decryption - the reverse process from encryption: converting ciphertext back to plaintext.
Key or Cryptovariable
input that controls the operation of the cryptographic algorithm. It determines the behavior of the algorithm and permits the reliable encryption and decryption of the message.
Nonrepudiation - a security service by which evidence is maintained so that the sender and the recipient of data cannot deny having participated in the communication. Individually referred to as "nonrepudiation of origin" and "nonrepudiation of receipt."
An algorithm is a mathematical function that is used in the encryption and decryption process
Cryptanalysis - the study of techniques for attempting to defeat cryptographic techniques and, more generally, information security services.
Cryptology - the science that deals with hidden, disguised, or encrypted communications. It embraces communications security and communications intelligence.
Collision - occurs when a hash function generates the same output for different inputs.
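Collisions can be found quickly when the hash output is small, which a sketch with a deliberately weakened hash makes concrete (truncating SHA-256 to 16 bits is the illustrative weakening, not a real attack on SHA-256):

```python
import hashlib

def weak_hash(data: bytes) -> bytes:
    """A deliberately weak hash: only the first 16 bits of SHA-256."""
    return hashlib.sha256(data).digest()[:2]

# Birthday-style search: with only 65,536 possible outputs, a repeat
# is guaranteed within 65,537 distinct inputs (and found much sooner).
seen = {}
i = 0
while True:
    data = str(i).encode()
    h = weak_hash(data)
    if h in seen:
        a, b = seen[h], data   # two different inputs, same hash
        break
    seen[h] = data
    i += 1

print(a != b, weak_hash(a) == weak_hash(b))  # True True: a collision
```

This is why real hash functions use large outputs: a 256-bit digest makes the same birthday search computationally infeasible.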
Key space - represents the total number of possible values of keys in a cryptographic algorithm or other security measure such as a password. For example, a 20-bit key would have a key space of 1,048,576.
Work factor - represents the time and effort required to break a protective measure. A high work factor stops most attackers because the time commitment to break the measure may not be worth the effort.
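The key space and work factor definitions reduce to simple arithmetic (the attacker's guessing rate below is a made-up figure for illustration):

```python
# Key space: a 20-bit key has 2**20 possible values.
key_bits = 20
key_space = 2 ** key_bits
print(key_space)                  # 1048576, matching the example above

# Work factor framed as worst-case brute-force time at an assumed rate.
guesses_per_second = 1_000_000    # hypothetical attacker speed
worst_case_seconds = key_space / guesses_per_second
print(worst_case_seconds)         # about 1 second: 20 bits is far too small

# Each added key bit doubles the work, so key space grows exponentially.
print(2 ** 128)                   # the scale a modern symmetric key offers
```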
Encoding - the action of changing a message into another format through the use of a code, often done by taking plaintext and converting it into a format that can be transmitted via radio or some other medium. Usually used for message integrity instead of secrecy.
Decoding - the reverse process from encoding (e.g., a MODEM).
Transposition or permutation
process of reordering the plaintext to hide the message.
process of exchanging one letter or byte for another
described by Claude Shannon and used in most block ciphers to increase their strength. SP = substitution and permutation; most block ciphers do a series of repeated substitutions and permutations to add confusion and diffusion to the encryption process.
Uses a series of S-boxes to handle the substitutions of the blocks of data.
Provided by mixing (changing) the key values used during the repeated rounds of encryption. When the key is modified for each round, it provides added complexity that the attacker would encounter.
Provided by mixing up the location of the plaintext throughout the ciphertext. Through transposition, the location of the first character of the plaintext may change several times during the encryption process.
Avalanche Effect
an important consideration in all cryptography, used to design algorithms where a minor change in either the key or the plaintext will have a significant change in the resulting ciphertext.
Also a feature of a strong hashing algorithm.
Ciphertext-Only Attack
One of the most difficult attacks because the attacker has so little information with which to start.
Known Plaintext Attack
The attacker has access to both the ciphertext and the plaintext version of the same message. The goal is to find the cryptographic key (cryptovariable) that was used to encrypt the message.
Chosen Plaintext Attack
The attacker knows the algorithm used for the encryption, or, even better, may have access to the machine used to do the encryption, and is trying to determine the key.
An adaptive chosen plaintext attack is where the attacker can modify the chosen input files to see what effect that has on the resulting ciphertext.
Chosen Ciphertext Attack
The attacker has access to the decryption device or software and is attempting to defeat the cryptographic protection by decrypting chosen pieces of ciphertext to discover the key.
"Asymmetric cryptosystems are vulnerable to chosen ciphertext attacks"
RSA recommends modifying the plaintext using a process called optimal asymmetric encryption padding (OAEP).
Also called a side channel attack, this more complex attack is executed by measuring the exact execution times and power required by the crypto device to perform the encryption or decryption. From these measurements it may be possible to determine the value of the key and the algorithm used.
A known plaintext attack that uses a linear approximation to describe the behavior of the block cipher.
Implementation attacks are common due to their ease and their reliance on system elements outside of the algorithm:
- Side channel attacks, which are passive attacks that rely on a physical attribute of the implementation, such as power consumption/emanation.
- Fault analysis, which attempts to force the system into an error state to gain erroneous results.
- Probing attacks, which attempt to watch the circuitry surrounding the cryptographic module in hopes that the complementary components will disclose information about the key or the algorithm.
A replay attack is meant to disrupt and damage processing through the attacker re-sending repeated files to the host.
Are a class of techniques that rely for their success on block ciphers exhibiting a high degree of mathematical structure.
Hash functions map plaintext into a hash.
Hash each plaintext until a matching hash is found
Hash each plaintext but store each generated hash in a table that can be used as a look-up table, so hashes do not need to be generated again.
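The look-up table approach can be sketched as follows; the candidate word list and the choice of SHA-256 are illustrative assumptions:

```python
import hashlib

# Precomputed look-up table attack sketch: each candidate plaintext is hashed
# once and stored, so reversing an observed hash becomes a dictionary lookup.
candidates = ["password", "letmein", "123456", "qwerty"]
table = {hashlib.sha256(p.encode()).hexdigest(): p for p in candidates}

observed = hashlib.sha256(b"letmein").hexdigest()  # hash captured by the attacker
print(table.get(observed))  # letmein
```

This is why salting password hashes matters: a unique salt per entry defeats a single precomputed table.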
Frequency analysis works closely with several other types of attacks. It is useful when attacking a substitution cipher where the statistics of the plaintext language are known.
Because a hash is a short representation of a message, given enough time and resources another message can be found that gives the same hash value.
Usual countermeasure is to use a hash algorithm with twice the message digest length as the desired work factor.
Aimed at RSA algorithm, because it uses the product of large prime numbers to generate the public and private keys, this attack attempts to find the keys through solving the factoring of these numbers.
Social Engineering - con game
Purchase Key Attack - bribery and extortion
Rubber Hose Attack - assault and battery
This is the most common type of attack and usually the most successful.
This attack is used most commonly against password files.
Trying all possible keys until one is found that decrypts the ciphertext.
A competing firm buys a crypto product from another firm and then tries to reverse engineer the product.
Attacking Random Number Generators
Because the random number generator is too predictable, attackers gain the ability to guess the random numbers so critical in setting up initialization vectors or a nonce.
Most cryptosystems will use temporary files to perform their calculations. If these are not deleted or overwritten, they pose a vulnerability.
0+0=0; 1+1=0; 1+0=1
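The truth table above is the Exclusive-OR (XOR) operation, which Python exposes as the `^` operator; a minimal check:

```python
# XOR truth table: identical bits give 0, differing bits give 1.
for a in (0, 1):
    for b in (0, 1):
        print(a, "XOR", b, "=", a ^ b)

# XOR is its own inverse, which is why it is useful for encryption:
plain, key = 0b1011, 0b0110
cipher = plain ^ key
assert cipher ^ key == plain  # XORing with the key again recovers the plaintext
```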
Initialization vector (IV)
a non-secret binary vector used as the initializing input algorithm for the encryption of a plaintext block sequence to increase security by introducing additional cryptographic variance and to synchronize cryptographic equipment.
Reusing an IV leaks some information about the first block of plaintext, and about any common prefix shared by the two messages. Therefore the IV must be randomly generated at encryption time.
Stream-Based Ciphers
A cryptosystem that performs encryption on a bit-by-bit basis.
The cryptographic operation for a stream-based cipher is to mix the plaintext with a keystream that is generated by the cryptosystem. The mixing operation is usually an Exclusive-OR operation. Very fast mathematical operation.
- It relies primarily on substitution - the substitution of one character or bit for another in a manner governed by the cryptosystem and controlled by the cipher key.
A keystream (sequence of bits used as a key) is generated and combined with the plaintext using an Exclusive-OR (XOR) operation:
- Statistically unpredictable and unbiased
- Keystream should not be linearly related to the cryptovariable (key)
- Functionally complex - each keystream bit should depend on most or all of the cryptovariable bits.
- Long periods with no repeats
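The keystream-XOR mixing described above can be sketched as follows. The keystream generator here is a toy built from SHA-256 for illustration only, not a vetted cipher:

```python
import hashlib

# Toy keystream generator (illustrative assumption, NOT a vetted cipher):
# pseudo-random bytes derived by hashing key + nonce + a block counter.
def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

# The mixing operation: plaintext XOR keystream.
def stream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, keystream(key, nonce, len(data))))

msg = b"attack at dawn"
ct = stream_xor(b"secret", b"nonce-01", msg)
assert stream_xor(b"secret", b"nonce-01", ct) == msg  # XOR twice restores plaintext
```

Because encryption and decryption are the same XOR operation, the cipher adds no size overhead and suits serial hardware implementations.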
Important factors in the implementation are to ensure that the key management processes are secure and cannot be readily compromised or intercepted by an attacker.
Used for: voice or video, WEP, WAP, audio streaming, GSM
Block Ciphers
Operate on blocks or chunks of text. As plaintext is fed into the cryptosystem, it is divided into blocks of a preset size.
Most use a combination of substitution and transposition to perform their operations.
- Stronger, but more computationally intensive and expensive than stream-based ciphers.
- Stream-based ciphers are typically implemented in hardware; block-based ciphers in software.
Because messages may be of any length and because encrypting the same plaintext using the same key always produces the same ciphertext, several modes of operation have been invented to provide confidentiality for messages of arbitrary length
Electronic Code Book (ECB)
- Each block of plaintext is encrypted independently using the same key
In ECB mode, each block is encrypted independently, allowing random-access files to be encrypted and still accessed without having to process the file in a linear encryption fashion.
Best suited to very short messages (less than 64 bits in length), such as a DES key.
Cipher Block Chaining (CBC)
- The first plaintext block is XOR'd with an IV
- Resulting ciphertext is chained into the next plaintext block.
The result of encrypting one block of data is fed back into the process to encrypt the next block of data.
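The chaining can be sketched as follows. The "block cipher" is a keyed XOR, a deliberately weak stand-in used only to show how each ciphertext block feeds the next:

```python
# CBC chaining sketch (toy_encrypt_block is NOT a real cipher).
BLOCK = 8

def toy_encrypt_block(block: bytes, key: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(block, key))

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(plaintext: bytes, key: bytes, iv: bytes) -> bytes:
    prev, out = iv, b""
    for i in range(0, len(plaintext), BLOCK):
        mixed = xor(plaintext[i:i + BLOCK], prev)  # XOR with IV / previous CT block
        prev = toy_encrypt_block(mixed, key)       # result feeds the next block
        out += prev
    return out

# Unlike ECB, identical plaintext blocks yield different ciphertext blocks:
ct = cbc_encrypt(b"SAMEDATA" * 2, b"8bytekey", b"randomIV")
assert ct[:BLOCK] != ct[BLOCK:]
```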
Cipher Feedback (CFB)
- Similar to CBC
- IV is encrypted and then XOR'd with the first plaintext block
each bit produced in the keystream is the result of a predetermined number of fixed ciphertext bits.
Used for Authentication
Drawback is that if a bit is corrupted or altered, all of the data from that point onward will be damaged.
Output Feedback (OFB)
- Operates very much like CFB
- Only the RESULT of encrypting the IV is fed back to the next operation
the keystream is generated independently of the message. Thus it is possible to generate the entire keystream in advance and store it for later use.
The keystream is chained but the ciphertext is not.
used for Authentication
Counter (CTR)
- Similar to OFB
- A counter value is used instead of the IV
a counter - 64 bit random data block - is used as the first IV
Used in high-speed applications such as IPSec and ATM
A requirement is that the counter must be different for every block of plaintext so for each subsequent block, the counter is incremented by 1.
Because the keystream is independent from the message, it is possible to process several blocks at the same time, thus speeding up the throughput of the algorithm.
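The counter-to-keystream flow can be sketched as follows; the keyed-XOR "block cipher" is a toy stand-in for a real cipher such as AES:

```python
# CTR mode sketch: the incrementing counter block is "encrypted" to form
# the keystream (toy_encrypt_block is NOT a real cipher).
BLOCK = 8

def toy_encrypt_block(block: bytes, key: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(block, key))

def ctr_keystream(key: bytes, counter_start: int, n_blocks: int) -> bytes:
    stream = b""
    for i in range(n_blocks):
        counter = (counter_start + i).to_bytes(BLOCK, "big")  # +1 per block
        stream += toy_encrypt_block(counter, key)
    return stream

def ctr_crypt(key: bytes, counter_start: int, data: bytes) -> bytes:
    n_blocks = -(-len(data) // BLOCK)  # ceiling division
    ks = ctr_keystream(key, counter_start, n_blocks)
    return bytes(d ^ k for d, k in zip(data, ks))

msg = b"high-speed payload"
ct = ctr_crypt(b"8bytekey", 42, msg)
assert ctr_crypt(b"8bytekey", 42, ct) == msg  # same operation decrypts
```

Because each counter block is independent of the message, all blocks can be computed in parallel, which is the source of CTR's speed advantage.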
Key length is the size of a key, usually measured in bits or bytes, which a cryptographic algorithm uses during ciphering and deciphering of protected information.
Goal is to make breaking the key cost more than the worth of the information and not a penny more.
Block ciphers produce a fixed-length block of ciphertext. If the block size does not come out to be a full block, it is padded.
The padding algorithm is to calculate the smallest nonzero number of bytes, which must be suffixed to the plaintext to bring it up to a multiple of the block size.
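The suffix-to-a-full-block rule above can be sketched with PKCS#7-style padding (the scheme choice is an illustrative assumption):

```python
# PKCS#7-style padding: suffix N bytes, each of value N, where N is the
# smallest nonzero count that brings the data to a multiple of the block size.
def pad(data: bytes, block_size: int = 8) -> bytes:
    n = block_size - (len(data) % block_size)  # always 1..block_size, never zero
    return data + bytes([n] * n)

def unpad(data: bytes) -> bytes:
    return data[:-data[-1]]  # the last byte records how many bytes to strip

padded = pad(b"hello")
assert len(padded) % 8 == 0
assert unpad(padded) == b"hello"
assert len(pad(b"12345678")) == 16  # a full extra block when already aligned
```

Encoding the pad length in the pad bytes themselves lets the receiver strip padding unambiguously.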
Message is encrypted with the recipient's public key
CT is decrypted with the recipient's private key
Proof of Origin
Message is encrypted with the sender's private key
CT is decrypted with the sender's public key
Used in the case where encryption is not necessary, yet a cipher must still be configured in order for the system to work.
Null ciphers are used when testing/debugging, when low security is needed, or when using authentication-only communications. For example, IPSec and SSL may offer the choice to authenticate only and not encrypt.
Also a reference to an ancient form of ciphering where the plaintext is mixed together with non-cipher material, today regarded as a type of steganography.
simple process of substituting one letter for another based upon a cryptovariable. The Caesar cipher and ROT-13 are examples.
Playfair - used by the Allies in WWII. Sender and receiver agreed upon a key word.
Caesar Cipher - shifted plaintext over three places to create the ciphertext (monoalphabetic system).
Shift (rotate) alphabet (move letters 3 spaces)
substitute one letter for another
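The three-place shift can be sketched in a few lines (the `caesar` helper name is illustrative; ROT-13 is the same function with a shift of 13):

```python
# Caesar cipher: a monoalphabetic shift of the alphabet.
def caesar(text: str, shift: int = 3) -> str:
    return "".join(
        chr((ord(c) - ord("A") + shift) % 26 + ord("A")) if c.isalpha() else c
        for c in text.upper()
    )

print(caesar("ATTACK"))      # DWWDFN
print(caesar("DWWDFN", -3))  # shifting back decrypts: ATTACK
```

With only 25 possible shifts, the cipher falls instantly to brute force or frequency analysis.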
Scytale - a method of transmitting a message by wrapping a leather belt around a tapered dowel.
A simple substitution cipher that uses multiple alphabets rather than just one.
Designed to make breaking the cipher by frequency analysis more difficult.
Disguises a message by rearranging or transposing the letters (or bits) in the message.
- Rail Fence: the message is written and read in two or more lines, in alternating diagonal rows.
- Rectangular Substitution Tables: the sender and receiver decide on the size and structure of a table to hold the message and the order in which to read the message.
Vigenere Cipher (Blaise de Vigenere)
Polyalphabetic cipher that uses a keyword rather than a number as the key. Based on 26 alphabets, each one offset by one place.
Modern Mathematics and the representation of each letter by its numerical place in the alphabet are the key to many modern ciphers.
Ciphertext=plaintext + key (mod 26)
If the ciphertext number is greater than 26, then 26 is subtracted from the ciphertext until a number between 1 and 26 is derived; that number represents the character/letter.
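The plaintext + key (mod 26) arithmetic above can be sketched directly; the classic ATTACKATDAWN/LEMON example is used for illustration:

```python
# Vigenere cipher: ciphertext = plaintext + key (mod 26), with each letter's
# key value drawn from the repeating keyword.
def vigenere(plaintext: str, keyword: str) -> str:
    out = []
    for i, c in enumerate(plaintext.upper()):
        k = ord(keyword.upper()[i % len(keyword)]) - ord("A")
        out.append(chr((ord(c) - ord("A") + k) % 26 + ord("A")))
    return "".join(out)

print(vigenere("ATTACKATDAWN", "LEMON"))  # LXFOPVEFRNHR
```

Because the keyword selects a different shift for each position, single-alphabet frequency analysis no longer applies directly.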
Quantum Cryptography (QKD)
Uses physics to secure data; security should be based on known physical laws rather than on mathematical difficulties.
Quantum cryptography solves the key distribution problem by allowing the exchange of a cryptographic key between two remote parties with complete security, as dictated by the laws of physics.
Running Key Cipher
Uses the numerical value of letters in the plaintext and is coded and decoded by using a copy of the text in a book as the key
"the key is repeated for the same length as the plaintext input"
One-Time Pads
Also known as Vernam ciphers. Asserted to be unbreakable as long as they are implemented properly. They require a key that can only be used once and that must be as long as the plaintext.
One-time pads use the principle of the running key cipher, using the numerical values of the letters and adding those to the value of the key; however, the key is a string of random values the same length as the plaintext. It never repeats, unlike the running key, which may repeat several times.
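The scheme can be sketched with mod-26 letter arithmetic and a random key as long as the message (helper names are illustrative):

```python
import secrets
import string

# One-time pad sketch: the key is random, the same length as the plaintext,
# and used only once; letters are added mod 26 as described above.
ALPHA = string.ascii_uppercase

def otp_encrypt(plaintext: str, key: str) -> str:
    return "".join(ALPHA[(ALPHA.index(p) + ALPHA.index(k)) % 26]
                   for p, k in zip(plaintext, key))

def otp_decrypt(ciphertext: str, key: str) -> str:
    return "".join(ALPHA[(ALPHA.index(c) - ALPHA.index(k)) % 26]
                   for c, k in zip(ciphertext, key))

msg = "ATTACKATDAWN"
key = "".join(secrets.choice(ALPHA) for _ in msg)  # random, as long as the message
assert otp_decrypt(otp_encrypt(msg, key), key) == msg
```

The unbreakability claim holds only while the key is truly random, never reused, and kept secret, which is exactly what makes one-time pads impractical at scale.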
Data Encryption Standard
Based on the work of Horst Feistel.
Steganography
Takes one piece of information and hides it within another.
Art of hiding information
Prevents a third party from knowing that a secret message exists
Traditionally accomplished via:
- Physical techniques
- Null ciphers
A phrase is converted to a simple value
Visible or invisible marking embedded within a digital file to indicate copyright or other handling instructions, or to embed a fingerprint to detect unauthorized copying and distribution of images.
A key exchange algorithm. Used to enable two users to exchange or negotiate a secret symmetric key that will be used subsequently for message encryption. Useful in PKI; based on discrete logarithms.
COSO - Committee of Sponsoring Organizations of the Treadway Commission.
Was formed in 1985 to sponsor the National Committee on Fraudulent Financial Reporting, whose studies produced recommendations for public companies, their auditors, securities and exchange commission and other regulators.
An IT governance framework and supporting tool set published by the IT Governance Institute. Currently ISACA is rolling out the supporting modules of the new COBIT 5 framework, which uses the term "management processes" instead of "controls".
It provides a set of generally accepted processes to assist in maximizing the benefits derived using information technology (IT) and developing appropriate IT Governance.
Often thought of as the base minimum security services that every IT organization will need to implement.
Focused on compliance with the standard that includes prevention, detection, and reaction to security incidents.
Provides best practice recommendations on information security management, risks, and controls within the context of an overall Information Systems Management System.
ISF - Information Security Forum
Standard of Good Practice delivers practical guidance and solutions to overcome wide-ranging security challenges impacting business information.
Standards of Attestation
was developed as a result of CPAs providing attestations on subject matter other than the fairness of the presentation of financial statements.
Cryptography Substitution & Transposition
Substitution Running Key
Stream Ciphers Adv & Dis
- Emulates a one-time pad
- No size difference between plaintext and ciphertext
- Very suitable for hardware implementation and serial communications.
- Can be difficult to implement correctly
- Generally weaker than a block-mode cipher
- Difficult to generate a truly random unbiased keystream
Blocks of plaintext are encrypted into ciphertext blocks
Multiple modes of operation
Variable key size, block size, rounds.
Used for data transport & data storage
CCMP (Counter Mode with Cipher Block Chaining Message Authentication Code Protocol)
CCMP is an encryption protocol that forms part of the 802.11i standard for wireless local area networks. Based on AES with CTR with CBC-MAC mode of operation.
Must use 128-bit key, and 128-bit block size with a 48-bit IV. CBC authentication component produces a message integrity code that provides data origin authentication and data integrity for the packet payload data.
Uses block sizes of 128, 192, or 256, with keys of 128, 192, or 256 and rounds of 10, 12, 14
Although Rijndael supports multiple block sizes, AES only supports one block size (128 bits).
Based on the Rijndael algorithm, developed by Daemen and Rijmen in 1998.
AES operation works on the entire 128-bit block of input data by first copying it into a square table that it calls State. The inputs are placed into the array by column, so that the first four bytes of the input fill the first column of the array.
Block Options: 128, 192, 256
Key Options: 128, 192, 256
Rounds: 10, 12, 14
Substitute bytes: use of an S-box to do a byte-by-byte substitution of the entire block
Shift rows: transposition or permutation through offsetting each row in the table
Mix columns: a substitution of each value in a column based on a function of all the values in the column
Add round key: XOR each byte with the key for that round; the key is modified for each round of operation
International Data Encryption Algorithm (IDEA)
Developed as a replacement for DES by Xuejia Lai and James Massey in 1991. Uses a 128-bit key and 64-bit block size. 8 rounds using modular addition and multiplication, and bitwise exclusive-or (XOR).
Developed in 1996 by Carlisle Adams and Stafford Tavares. CAST-128 can use keys between 40 and 128 bits, and 12 to 16 rounds depending on key length.
SAFER (Secure and Fast Encryption Routine)
Patent-free developed by James Massey. 64 or 128-bit blocks. A variation of SAFER is used in Bluetooth.
Blowfish - symmetric algorithm developed by Bruce Schneier. An extremely fast cipher that can be implemented in as little as 5K of memory.
Similar to a Feistel-type cipher, however it works against both halves, not just one. Operates with variable key sizes from 32 to 448 bits on 64-bit input and output blocks.
Twofish - a finalist for the AES. It operates with 128, 192, or 256-bit keys on blocks of 128 bits and performs 16 rounds.
RC5 and RC4
RC5 - very versatile; keys can range from 0 to 2,040 bits, rounds from 0 to 255, and the length of the input words can be 16, 32, or 64 bits.
It operates on two words at a time in a fast and secure manner.
RC4 - stream-based cipher and the most widely used stream cipher. Uses a variable-length key from 8 to 2,048 bits, and a period of greater than 10^100, meaning that the keystream should not repeat for at least that length.
If RC4 is used with a 128-bit key, there are no practical ways to attack it.
Same key used for both the encryption and decryption operations. Often called single, same, or shared key encryption. It can also be called secret or private key encryption because the key is meant to be kept secret.
- Very fast
- Very difficult to break
- Freely available tools
- Highly efficient, serial communications
- Multiple modes
- Key negotiation / exchange / distribution
- Poor scalability
- Limited security
- Noisy channels and error correction
Symmetric cryptography does not provide many benefits beyond confidentiality, unlike most asymmetric algorithms, which provide the ability to establish non-repudiation, message integrity, and access control.
It can provide a form of message integrity - the message will not decrypt if changed.
And some measure of access control - without the key the file cannot be decrypted.
Examples of symmetric algorithms
The Caesar cipher, the Spartan scytale, and the Enigma machine.
Asymmetric Key Algorithms
Pair of mathematically related keys (A and B) used separately for encryption and decryption.
- Brute force, trying all possible private keys
- Mathematical attacks, factoring the product of two large prime numbers
- Timing attacks, measuring the running time of the decryption algorithm
Diffie-Hellman - key exchange algorithm. It is used to enable two users to exchange or negotiate a secret symmetric key that will be used subsequently for message encryption. Does not provide for message confidentiality but is used in PKI. Based on discrete logarithms.
El Gamal is based on Diffie-Hellman but adds the ability to provide message confidentiality and digital signature services, not just session key exchange.
Elliptic Curve Cryptography (ECC) - has the highest strength per bit of key length of any of the asymmetric algorithms. Very useful for smartcards, wireless, etc. Provides confidentiality, digital signatures, and message authentication services.
Asymmetric Strengths and Weaknesses
- Key management
- Proof of origin
- access control and authentication
- Computationally intensive
- key size is generally 1024, 2048, 4096 bits
- Significantly slower
Message Integrity Controls (MIC)
- MIC detect alterations (whether intentional or accidental) to a message during transmission
- a MIC is a special value that is calculated based on the message and added to the message to be sent.
Some asymmetric algorithms such as RSA, El Gamal, and ECC have message authentication and digital signature functionality built into the implementation.
Parity XOR RAID, HASH, CBC-MAC, Digital Signature, HMAC, Checksum CRC
Message Digest (Hash)
Is a small representation of a larger message. It is used to ensure the authentication and integrity of information, not confidentiality.
Message Authentication Code (MAC) - cryptographic checksum
Is a small block of data that is generated using a secret key and then appended to the message.
- Much smaller than the message generating it
- Given the MAC, it is impractical to compute the message that generated it.
- Given the MAC and the message that generated it, it is impractical to find another message generating the same MAC.
HMAC provides cryptographic strength similar to a hashing algorithm, with the additional protection of a secret key.
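Python's standard library implements HMAC directly; the key and message below are illustrative:

```python
import hashlib
import hmac

# HMAC combines a hashing algorithm (here SHA-256) with a secret key.
key = b"shared-secret"
msg = b"transfer 100 to account 42"
tag = hmac.new(key, msg, hashlib.sha256).hexdigest()

# The receiver, holding the same key, recomputes the tag and compares it
# in constant time to resist timing attacks:
assert hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).hexdigest())

# A tampered message (or the wrong key) yields a different tag:
assert tag != hmac.new(key, b"transfer 999 to account 42", hashlib.sha256).hexdigest()
```

Without the secret key, an attacker who alters the message cannot produce a matching tag, which is what a plain hash alone cannot guarantee.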
Hash Function Characteristics
Accepts an input message of any length and generates, through a one-way operation, a fixed-length output.
It uses a hashing algorithm to generate the hash but does not use a secret key.
The requirements for a hash function are that they must provide some assurance that the message cannot be changed without detection and would be impractical to find any two messages with the same hash value.
- Uniformly Distributed: the hash output value should not be predictable
- Weak Collision Resistant: difficult to find a second input value that hashes to the same value as another input.
- Difficult to Invert: should be one way; should not be able to derive hash input x by reversing the hash function on output y
- Strong Collision Resistant: difficult to find any two inputs that hash to the same value
- Deterministic: given an input x, it must always generate the same hash value y
Produces a "condensed representation" of the original message.
Should be a one-way function
Non-linear relationship between hashes
Should derive the hash using the whole, original message.
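Several of these properties can be observed directly with SHA-256 from Python's standard library:

```python
import hashlib

# Deterministic: the same input always produces the same digest.
assert hashlib.sha256(b"hello").hexdigest() == hashlib.sha256(b"hello").hexdigest()

# Fixed-length output regardless of input size (32 bytes for SHA-256).
assert len(hashlib.sha256(b"x" * 10_000).digest()) == 32

# Avalanche effect: a one-character change yields a completely different digest.
print(hashlib.sha256(b"hello").hexdigest())
print(hashlib.sha256(b"Hello").hexdigest())
```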
Common Hash Functions
Message Digest 2, 4, 5, 6
Secure Hash Algorithm (SHA) 1, 2, 3
Attacks on Hashing Algorithms and MAC
Brute force - relies on finding a weakness in the hashing algorithm that would allow an attacker to reconstruct the original message from the hash value, defeating the one-way property
- Find another message with the same hash value, or find any pair of messages with the same hash value (defeating collision resistance)
Use of Digital Signatures
Non-repudiation of origin
Integrity of message
Trusted time stamp
Change and Expiry
Public Key Infrastructure (PKI)
Public Key Infrastructure binds people/entities to their public keys
Public keys are published and certified by digital signatures
Certificate Revocation Lists (CRL's)
Trust and Trust Models
Certification establishes trustworthiness of public keys
Certification Authority (CA)
Certificate Policy (CP)
Certificate Practice Statement (CPS)
Registration Authority (RA)
Validate Certification Path
Purpose of Cryptography
P - Protect proprietary and private data from unauthorized disclosure
- at Rest
- in Transit - link vs end-to-end
A - Authenticate participants in an information exchange
I - Integrity - detect alteration, whether accidental or by attack
N - Non repudiation, digital signature
is designed to allow federated systems with different identity management systems to interact through simplified sign-on and single sign-on exchanges
It uses straightforward REST/JSON message flows with a design goal of making simple things simple and complicated things possible. Uses JSON Web Token data structures when signatures are required.
Open Web Application Security Project (OWASP)
Provides an accessible and thorough framework with processes for web application security
- OWASP Top 10 Project - the top 10 web-based application security flaws and how to mitigate them.
- OWASP Guide Project - aimed at architects, a comprehensive manual for designing secure web applications and services.
- OWASP Software Assurance Maturity Model (SAMM) - a framework used to design software that is secure and tailored to an organization's specific risks.
- OWASP Mobile Project - a resource for developers to develop and maintain secure mobile applications.
Cyber Physical Systems (CPS)
are smart networked systems with embedded sensors, processors, and actuators that are designed to sense and interact with the physical world and can support real-time guaranteed performance in safety-critical applications
- Risk assessment
- Bad data detection Mechanisms - should not assume random, independent failures but consider detection of sophisticated attackers.
- Architect resiliency/survivability of the system against attacks
Industrial Control Systems (ICS)
Supervisory Control and Data Acquisition (SCADA) systems.
- Largest subgroup of ICS systems
- large-scale processes that can include multiple sites and large distances.
Distributed Control Systems (DCS)
Programmable Logic Controllers (PLC)
Crypto Life Cycle
Acceptable - algorithm and key length are safe to use
Deprecated - algorithm and key length are allowed, but the user must accept some risk
Restricted - algorithm and/or key length is deprecated and there are additional restrictions required to use them
Legacy-use - algorithm or key length may only be used to process already-protected information
Kerckhoffs' law states
A cryptosystem should be secure even if everything about the system, except the key, is public knowledge. The key, therefore, is the true strength of the cryptosystem (the size and secrecy of the key are the two most important elements).
Approved cryptographic algorithms and key sizes
Transition plans for weakened or compromised algorithms and keys
Procedures and standards for the use of cryptographic systems in the organization
Key generation, escrow, and destruction
Incident reporting surrounding the loss of keys or the compromise of cryptographic systems.
Segregation of duties
Is used as a crosscheck to prevent the misuse and abuse of assets.
Is primarily a business policy and access control issue that can be compensated for in smaller organizations with monitoring, audit trails, and supervision.
High-integrity cryptographic environments:
Dual Control - is implemented as a security procedure that requires two or more persons to come together and cooperate to complete a process
Split knowledge is the unique "what each must bring" and joined together when implementing dual control.
Automated Key Generation: - mechanisms used to automatically generate strong cryptographic keys
Truly random: for a key to be truly effective, it must have an appropriately high work factor.
Random: in the context of cryptography, randomness is the quality of lacking predictability. Randomness intrinsically generated by a computer system is also called pseudo-randomness. Computers and software libraries are well known as weak sources of randomness.
Thus random number generators (RNGs) were developed to generate high-quality randomness.
Asymmetric Key Length: the effectiveness of asymmetric cryptographic systems depends on the hard-to-solve nature of certain mathematical problems. Thus asymmetric algorithm keys must be longer than symmetric algorithm keys for equivalent resistance to attack. NIST suggests that 15,360-bit RSA keys are equivalent in strength to 256-bit symmetric keys.
Key Wrapping and Key Encrypting Keys
One solution is to protect the session key with a special-purpose, long-term-use key called a key encrypting key (KEK). KEKs are used as part of key distribution or key exchange. This is called key wrapping.
Key wrapping uses symmetric ciphers to securely encrypt a plaintext key along with any associated integrity information and data.
Key Distribution Centers (KDC)
Master keys - secret keys shared by each user and the KDC
Session keys, which are created when needed, used for the duration of the communications session, and discarded when the session is complete.
Is a comprehensive overview of the facility including physical security controls, policy, procedures, and employee safety.
- Threat Definition - what is the threat
- Target Identification - what would be the impact and consequence of the loss of the asset (create a threat matrix)
- Facility Characteristics -
Facility security control
Personnel and contract security policies & procedures
Site and building access control
Video surveillance, assessment and archiving
Natural surveillance opportunities
Protocols for responding to internal and external security incidents
Degree of integration of security and other building systems
Shipping and receiving security
Property identification and tracking
Proprietary information security
Computer network security
Workplace violence prevention
Mail screening operations, procedures, and recommendations
Parking lot and site security
Data center security
Business continuity planning and evacuation procedures.
Vulnerability Rating Criteria:
Very High: One or more major weaknesses have been identified that make the organizations assets extremely susceptible to an aggressor or hazard
High: One or more significant weaknesses have been identified that makes the organization's assets highly susceptible to an aggressor or hazard.
Medium High: very susceptible
Medium: fairly susceptible
Medium Low: somewhat susceptible
Low: slightly increases the susceptibility
Very Low: No weaknesses exist.
The single most important goal in planning a site is the protection of life, property, and operations.
Crime Prevention Through Environmental Design (CPTED)
- Organizational (people)
- Mechanical (technology and hardware)
- Natural design (architecture and circulation flow) methods
Types of Glass
Tempered glass - in car windshields
Wired glass - provides resistance to impact from blunt objects
Laminated glass - recommended for installation at street-level windows, doorways and other access areas.
Bullet-resistant (BR) glass - installed in banks and high-risk areas.
Acoustic sensors and shock sensors are used in dual-technology glass break scenarios
Lighting levels of at least 10 to 12 foot-candles over parked cars, and 15 to 20 foot-candles in walking and driving aisles.
Exterior lights should be placed approximately 12 ft above the ground.
Fire Suppression Systems
Fire requires three elements to burn - heat, oxygen and a fuel source.
Class A - Wood and paper
Class B - combustible liquids - gas, kerosene, grease, oil
Class C - electrical equipment
Class D - combustible metals (laboratory: magnesium, titanium, etc.)
Locate petroleum, oil, and lubricant storage tanks downslope from all other occupied buildings and at least 100 ft from buildings.
Locate utilities systems at least 50ft from loading docks, front entrances, and parking areas
Entrance Facility - service entrance is the point at which the network service cables enter or leave a building. It includes the penetration through the building wall and continues to the entrance facility.
Equipment room - serves the entire building and contains the network interfaces, UPS, computing equipment.
Backbone Distribution System: backbone distribution system provides connection between entrance facilities, equipment rooms, and telecommunications rooms.
Telecommunication Room: telecom room typically serves the needs of a floor. Serves as the main cross-connect between the backbone cabling and the horizontal distribution system.
Horizontal Distribution System: distributes the signals from the telecom room to work areas.
Lightning protection - ground potential rise (GPR)
Best engineering for open-ended budgets is the use of dielectric fiber optic cable for all communications.
Isolate the wire-line communications from remote ground, which is accomplished using optical isolators or isolation transformers (high-voltage interface - HVI).
Optical detection (photoelectric)-Beam detectors, operate on the principle of light and a receiver; Refraction type, which has a blocker between the light and the receiver
Physical process (ionization) - monitor the air around the sensors constantly.
Flame detectors - IR detectors primarily detect a large mass of hot gases that emit a specific spectral pattern; UV flame detectors detect flames at speeds of 3-4 milliseconds due to the high-energy radiation emitted by fires and explosions at the instant of their ignition.
Heat detectors - include fixed-temperature or rate-of-rise detectors; rate-of-rise detectors trigger on changes of around 10-15 degrees per minute.
Wet systems: have a constant supply of water in them at all times.
Dry systems: do not have water in them.
Pre-action systems: water is held back until detectors in the area are activated.
Deluge Systems: same function as the pre-action system except all sprinkler heads are in the open position.
Gas systems: Aero-K - uses an aerosol of microscopic potassium compounds in a carrier gas released from small canisters mounted on walls near the ceiling.
FM 200 - is colorless, liquefied compressed gas.