A hybrid cloud is composed of two or more individual clouds, each of which can be a private, community, or public cloud. There can be several possible compositions of a hybrid cloud, as each constituent cloud may be one of the five variants discussed previously.
As a result, each hybrid cloud has different properties in terms of parameters such as performance, cost, security, and so on. A hybrid cloud may also change over time as component clouds join and leave. In a hybrid cloud environment, the component clouds are combined through the use of open or proprietary technology such as interoperable standards, architectures, protocols, data formats, application programming interfaces (APIs), and so on.
Virtualization is the process of abstracting physical resources, such as compute, storage, and network, and creating virtual resources from them. Virtualization is achieved through the use of virtualization software that is deployed on compute systems, storage systems, and network devices. Virtualization software aggregates physical resources into resource pools from which it creates virtual resources. A resource pool is an aggregation of computing resources, such as processing power, memory, storage, and network bandwidth. For example, storage virtualization software pools the capacity of multiple storage systems to create a single large storage capacity. Similarly, compute virtualization software pools the processing power and memory capacity of physical compute systems to create an aggregation of the power of all processors (in megahertz) and all memory (in megabytes).
1. Codebase: Put all the code in a single repository that belongs to a version control system
2. Dependencies: Define the dependencies of the application, automate the collection of the dependent components, and isolate the dependencies to minimize their impact on the application
3. Config: Externalize the values used by the application to connect to things that might change. Applications at times store config as constants in the code, but the 12-factor app requires strict separation of config from code (see the sketch after this list).
4. Backing services: A backing service is any service the application accesses over the network during its operation; examples include datastores, messaging/queueing systems, and caching systems. Treat backing services the same as attached resources, accessed via a URL or other locator stored in the config.
5. Build, Release, and Run: During the build stage, the code is converted into an executable bundle of scripts, assets, and binaries known as a build. The release stage takes the build and combines it with the current config. The resulting release contains both the build and the config and is ready for immediate execution. The run stage runs the application in the execution environment. The 12-factor application uses strict separation between the build, release, and run stages. This separation exists because the build stage requires a lot of work and is managed by developers, whereas the run stage should be as simple as possible, so that the application runs well and, if a server gets restarted, starts up again on launch without the need for human intervention.
6. Processes: Run the application as one or more stateless processes. Any data that requires persistence must be stored in a stateful backing service, typically a database. Usually the application runs on many servers to provide load balancing and fault tolerance. The right approach is to store the state of the system in the database and shared storage, not on the individual server instances. If a server goes down for any reason, another server can handle the traffic.
7. Port Binding: Access services through well-defined URLs or ports. The 12-factor application is self-contained and does not rely on runtime creation of a web-facing service. The application exports HTTP as a service by binding to a port and listening to requests coming in on that port. For example, by following the port binding recommendation, it is possible to point to another service simply by pointing to another URL. That URL could be on the same physical machine or at a public cloud service provider.
8. Concurrency: Scale out via the process model. When an application runs, many processes perform various tasks. Running processes independently allows the application to scale better; in particular, it allows more work to be done concurrently by dynamically adding extra servers.
9. Disposability: Maximize robustness with fast startup and graceful shutdown. Factor #6 (Processes) describes a stateless process that has nothing to preload and nothing to store on shutdown. This approach enables applications to start and shut down quickly. The application should be robust against crashing; if it does crash, it should always be able to start back up cleanly.
10. Dev/Prod Parity: Keep development and production environments, and everything in between, as identical as possible. In recent times, organizations have a much more rapid cycle between developing a change to the application and deploying that change into production; for many organizations, this happens in a matter of hours. To facilitate that shorter cycle, it is desirable to keep a developer's local environment as similar as possible to production.
11. Logs: Treat logs as event streams. This approach enables orchestration and management tools to parse these event streams and create alerts. Furthermore, it makes it easier to access and examine logs for debugging and management of the application.
12. Admin Processes: Ensure that all administrative activities become defined processes that can easily be repeated by anyone. Do not leave anything that must be completed to operate or maintain the application inside someone's head. If something must be completed as part of an administrative activity, build a process that anyone can perform.
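The following is a minimal sketch of Factors 3 (Config), 4 (Backing services), and 7 (Port binding): deploy-specific values are read from environment variables rather than hard-coded, and a backing service is reached only through a locator held in config. The environment variable names, the Redis URL, and the defaults are illustrative assumptions, not part of the 12-factor specification itself.

```python
import os
from urllib.parse import urlparse

def load_config():
    """Collect deploy-specific settings from the environment, not from code (Factor 3)."""
    return {
        # Factor 7: the app binds to a port supplied by the environment.
        "port": int(os.environ.get("PORT", "8080")),
        # Factor 4: backing services are attached resources identified by a locator.
        "database_url": os.environ.get("DATABASE_URL", "postgres://localhost:5432/appdb"),
        "cache_url": os.environ.get("CACHE_URL", "redis://localhost:6379/0"),
    }

def describe_backing_service(url):
    """Treat the backing service as an attached resource known only by its URL."""
    parts = urlparse(url)
    return f"{parts.scheme} service at {parts.hostname}:{parts.port}"

if __name__ == "__main__":
    cfg = load_config()
    print("listening on port", cfg["port"])
    print("using", describe_backing_service(cfg["cache_url"]))
```

Because nothing deployment-specific lives in the code, the same build can be released against development, staging, or production simply by changing the environment, which is exactly the separation the build/release/run factor relies on.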
A cloud-native platform is an integrated stack, from the infrastructure up through the application development framework, that is optimized for rapidly scaling modern applications to meet demand.
A cloud-native platform supports application developers with multiple languages (such as Java, Ruby, Python, and Node.js), frameworks (such as Spring), and middleware (such as MySQL, RabbitMQ, and Redis). It enables IT to develop, deploy, scale, and manage software delivery more quickly and reliably.
•Enables defining services in a service catalog: Cloud service providers should ensure that consumers are able to view the available services, service level options, and service cost. This definition helps consumers make the right choice of services effectively. Cloud services are defined in a service catalog, which is a menu of service offerings from a service provider. The catalog provides a central source of information on the service offerings delivered to the consumers by the provider.
•Enables on-demand, self-provisioning of services: A service catalog also allows a consumer to request or order a service from the catalog. The service catalog matches the consumer's need without manual interaction with a service provider. While placing a service request, a consumer commonly submits service demands, such as required resources, needed configurations, and the location of data. Once the provider approves the service request, appropriate resources are provisioned for the requested service.
•Presents cloud interfaces to consume services: Cloud interfaces are the functional interfaces and the management interfaces of the deployed service instances. Using these interfaces, consumers perform computing activities, such as executing a transaction, and administer their use of rented service instances, for example by modifying, scaling, stopping, or restarting a service instance.
Management interface: It enables a consumer to control the use of a rented service. The management interface is a self-service interface that enables a consumer to monitor, modify, start, and stop rented service instances without manual interaction with the service provider. It also enables consumers to prepare the desired functional interface. Based on the service model (IaaS, PaaS, or SaaS), the management interface presents different management functions. For IaaS, it enables consumers to manage their use of infrastructure resources; for PaaS, their use of platform resources; and for SaaS, their use of business applications.
•Compute System: A compute system is a computing device (a combination of hardware, firmware, and system software) that runs business applications. Examples of compute systems include physical servers, desktops, laptops, and mobile devices. Here, the term compute system refers to the physical servers and hosts on which the platform software, management software, and business applications of an organization are deployed.
•Storage System: Data created by individuals, businesses, and applications needs to be persistently stored so that it can be retrieved when required for processing or analysis. A storage system is the repository for saving and retrieving data and is integral to any cloud infrastructure. A storage system has devices, called storage devices (or storage), that enable the persistent storage and retrieval of data. Storage capacity is typically offered to consumers along with compute systems. Apart from providing storage along with compute systems, a provider may also offer storage capacity as a service (Storage as a Service), which enables consumers to store their data on the provider's storage systems in the cloud.
•Network System: It establishes communication paths between the devices in an IT infrastructure. Devices that are networked together are typically called nodes. A network enables information exchange and resource sharing among many nodes spread across different locations. A network may also be connected to other networks to enable data transfer between nodes. Cloud providers typically leverage different types of networks supporting different network protocols and transporting different classes of network traffic.
•Processor: It is also known as a Central Processing Unit (CPU). It is an integrated circuit (IC) that executes the instructions of a software program by performing fundamental arithmetical, logical, and input and output operations. A common processor and instruction set architecture is the x86 architecture with 32-bit and 64-bit processing capabilities. Modern processors have multiple cores, each capable of functioning as an individual processor.
•Random Access Memory (RAM): It is also called main memory. It is volatile data storage that holds frequently used software program instructions. It allows data items to be read or written in almost the same amount of time regardless of their location, thereby increasing the speed of the system.
•Read Only Memory (ROM): It is a type of semiconductor memory and a nonvolatile memory. It contains the boot firmware, power management firmware, and other device-specific firmware.
•Motherboard: It is a printed circuit board (PCB) to which all compute system components are connected. It holds the major components, such as the processor and memory, needed to carry out computing operations. A motherboard consists of integrated components, such as a graphics processing unit (GPU), a network interface card (NIC), and adapters to connect to external storage devices.
•Operating System (OS): It is system software that manages the system's hardware and software resources. It also controls the execution of the application programs and internal programs that run on it. All computer programs, except firmware, require an operating system to function. The OS also acts as an interface between the user and the computer.
•It is hardware that is mounted horizontally in a rack. The rack contains multiple mounting slots called bays, and each bay holds a hardware unit. These servers collectively host, execute, and manage applications. Typically, a console with a video screen, keyboard, and mouse is mounted on the rack to enable administrators to manage the servers in it.
•Examples: Dell PowerEdge R320, R420, and R520.
•Simplifies network cabling.
•It saves physical floor space and other server resources.
•The horizontal rack chassis can simultaneously hold multiple servers placed above each other.
•Each rack mount server requires its own dedicated processor, motherboard, storage, and other input and output resources.
•Each rack mount server can work independently, but it requires power, cooling, and mounting support from the rack.
•Magnetic Tape Drive: It is a storage device that uses magnetic tape as the storage medium. Magnetic tape is a thin, long strip of plastic film coated with a magnetizable material, packed in plastic cassettes and cartridges. It provides linear, sequential read and write access to data. Organizations use tape to store large amounts of data and for data backups, offsite archiving, and disaster recovery.
•Magnetic Disk Drive: It is a primary storage device that uses a magnetization process to write, read, and access data. The disk is covered with a magnetic coating and stores data in the form of tracks and sectors. Tracks are the circular divisions of the disk and are further divided into sectors that contain blocks of data. All read and write operations on the magnetic disk are performed on the sectors. The hard disk drive is a common example of a magnetic disk. It consists of a rotating magnetic surface and a mechanical arm that moves over the disk to read data from and write data to it.
•Solid-State Drive (SSD): It uses semiconductor-based memory, such as NAND and NOR chips, to store data. SSDs, also known as "flash drives", deliver the ultrahigh performance required by performance-sensitive applications. These devices, unlike conventional mechanical disk drives, contain no moving parts and therefore do not exhibit the latencies associated with read/write head movement and disk rotation. Compared to other available storage devices, SSDs deliver a relatively higher number of input/output operations per second (IOPS) with low response times. They also consume less power and typically have a longer lifetime compared to mechanical drives. However, flash drives have the highest cost per gigabyte ($/GB) among these storage devices.
•Optical Disk Drive: It is a storage device that uses optical storage techniques to read and write data. It stores data digitally by using laser beams, transmitted from a laser head mounted on the optical disk drive, to read and write data. It is used as a portable and secondary storage device. Common examples of optical disks are compact disks (CD), digital versatile/video disks (DVD), and Blu-ray disks.
•A file-based storage system, also known as Network-Attached Storage (NAS), is a dedicated, high-performance file server having either integrated storage or connected to external storage. NAS enables clients to share files over an IP network. NAS supports NFS and CIFS protocols to give both UNIX and Windows clients the ability to share files using appropriate access and locking mechanisms. NAS systems have integrated hardware and software components, including a processor, memory, NICs, ports to connect and manage physical disk resources, an OS optimized for file serving, and file sharing protocols. A NAS system consolidates distributed data into a large, centralized data pool accessible to, and shared by, heterogeneous clients and application servers across the network. Consolidating data from numerous and dispersed general-purpose servers onto NAS results in more efficient management and improved storage utilization. Consolidation also offers lower operating and maintenance costs.
•In a cloud environment, a Cloud File Storage (CFS) service is used. This service is delivered over the internet and is billed on a pay-per-use basis.
•Object-based storage is a way to store file data in the form of objects. It is based on the content and other attributes of the data rather than the name and location of the file. An object contains user data, related metadata, and user-defined attributes of data. The additional metadata or attributes enable optimized search, retention, and deletion of objects. For example, an MRI scan of a patient is stored as a file in a NAS system. The metadata is basic and may include information such as file name, date of creation, owner, and file type. The metadata component of the object may include additional information such as patient name, ID, attending physician's name, and so on, apart from the basic metadata.
•A unique identifier known as the object ID identifies the object stored in the object-based storage system. The object ID allows easy access to objects without having to specify the storage location. The object ID is generated using specialized algorithms such as a hash function on the data, which guarantees that every object is uniquely identified. Any changes in the object, like user-based edits to the file, result in a new object ID. This makes object-based storage a preferred option for long-term data archiving to meet regulatory or compliance requirements. The object-based storage system uses a flat, nonhierarchical address space to store data, providing the flexibility to scale massively. Cloud service providers use object-based storage systems to offer Storage as a Service because of their inherent security, scalability, and automated data management capabilities. Object-based storage systems support web service access via REST and SOAP.
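The following is a minimal sketch of the content-addressed object ID idea described above: the ID is a hash of the object's data, so any edit to the data produces a new ID. The field names and the sample MRI metadata are illustrative only.

```python
import hashlib

def make_object(data: bytes, metadata: dict) -> dict:
    """Build an object whose ID is derived from its content, not its location."""
    object_id = hashlib.sha256(data).hexdigest()   # location-independent, unique identifier
    return {"id": object_id, "metadata": metadata, "data": data}

scan = make_object(b"...MRI pixel data...",
                   {"patient": "Jane Doe", "physician": "Dr. Smith"})
edited = make_object(b"...MRI pixel data (annotated)...",
                     {"patient": "Jane Doe", "physician": "Dr. Smith"})

print(scan["id"][:16], edited["id"][:16])   # the edit produced a different object ID
print(scan["id"] != edited["id"])           # True
```

Because an edited object always receives a new ID, the original object remains addressable and unaltered, which is why this model suits archiving and compliance use cases.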
The hypervisor kernel provides the same functionality as the kernel of any OS, including process management, file system management, and memory management. It is designed and optimized to run multiple VMs concurrently. It receives requests for resources through the VMM and presents the requests to the physical hardware. Each virtual machine is assigned a VMM that gets a share of the processor, memory, I/O devices, and storage from the physical compute system to successfully run the VM. The VMM abstracts the physical hardware and appears to the VM as a physical compute system with processor, memory, I/O devices, and other components that are essential for an OS and applications to run. The VMM receives resource requests from the VM, passes them to the kernel, and presents the virtual hardware to the VM.
There are two types of hypervisors: the bare-metal hypervisor and the hosted hypervisor.
•Bare-metal hypervisor: It is also called a Type 1 hypervisor. It is installed directly on top of the system hardware without any underlying operating system or other software. It is designed mainly for enterprise data centers. A few examples of bare-metal hypervisors are Oracle OVM for SPARC, ESXi, Hyper-V, and KVM.
•Hosted hypervisor: It is also called a Type 2 hypervisor. It is installed as an application or software on an operating system. In this approach, the hypervisor does not have direct access to the hardware; all requests must pass through the operating system running on the physical compute system. A few examples of hosted hypervisors are VMware Fusion, Oracle VirtualBox, Solaris Zones, and VMware Workstation.
A virtual machine (VM) is a logical compute system with virtual hardware on which a supported guest OS and its applications run. A VM is created by a hosted or a bare-metal hypervisor installed on a physical compute system. An OS, called a "guest OS", is installed on the VM in the same way it is installed on a physical compute system. From the perspective of the guest OS, the VM appears as a physical compute system. A VM has a self-contained operating environment, comprising an OS, applications, and virtual hardware, such as a virtual processor, virtual memory, virtual storage, and virtual network resources. As discussed previously, a dedicated virtual machine manager (VMM) is responsible for the execution of a VM. Each VM has its own configuration for hardware, software, network, and security. The VM behaves like a physical compute system, but does not have direct access either to the underlying host OS (when a hosted hypervisor is used) or to the hardware of the physical compute system on which it is created. The hypervisor translates the VM's resource requests and maps the virtual hardware of the VM to the hardware of the physical compute system. For example, a VM's I/O requests to a virtual disk drive are translated by the hypervisor and mapped to a file on the physical compute system's disk drive.
A converged infrastructure system includes five logical sections. They are network, storage, compute, virtualization, and management. Each section performs a specific set of functions and has various types of hardware or software components.
1. Network: It provides connectivity for communication between all components inside a converged infrastructure system and between the converged infrastructure system and the organization's core data center network. The network can be logically divided into two parts: an Ethernet local area network (LAN) and a storage area network (SAN). Converged infrastructure systems may use Ethernet switches for the LAN and Fibre Channel switches for the SAN, or, in some cases, unified switches, such as Fibre Channel over Ethernet switches, as an alternative to separate Ethernet and Fibre Channel switches.
2. Storage: It provides a secure place to store data in converged infrastructure systems. The core components of the storage layer include storage controllers and storage drives, such as flash drives and disk drives. A storage controller runs a purpose-built operating system that is responsible for performing several storage-related functions: provisioning block and file storage for application servers, serving I/Os from the servers, and replicating data. Each storage controller consists of one or more processors and a certain amount of cache memory to process the I/O requests from the servers.
3. Compute: It runs operating systems, such as Linux and Windows, and business applications in converged infrastructure systems. It includes blade servers or rack mount servers, and interconnecting devices. The interconnecting devices connect the blade or rack mount servers to the network and storage layers of the converged infrastructure system and support both LAN and SAN connectivity for all servers. For example, a pair of interconnecting devices may be used to connect all the blade servers to the network.
4. Virtualization: It uses hypervisors to create the virtualization layer in converged infrastructure systems. The hypervisor abstracts the physical hardware of a physical compute system from the operating system and enables the creation of multiple virtual machines. A virtual machine appears to the operating system running on it as a physical compute system with its own CPU, memory, network controller, and storage drives. However, all virtual machines share the underlying hardware of the physical compute system, and the hypervisor dynamically allocates the compute system's hardware resources to each virtual machine.
5. Management: Managing a converged infrastructure system is accomplished in various ways. The most important aspect is that there are various choices to best fit an organization's needs, ranging from individual element managers to a more encompassing unified manager. Element managers are software for managing individual components in a converged infrastructure system; different element managers are used to configure the components present in the compute, network, storage, and virtualization layers. The unified manager software provides a central platform for end-to-end monitoring and management of a converged infrastructure system. It interacts with individual element managers to collect information on infrastructure configurations, connectivity, and utilization, and provides a consolidated view of converged infrastructure resources. It also automates many error-prone, tedious, day-to-day resource provisioning tasks through interaction with the element managers.
1. Simplicity: Many of the benefits of converged infrastructure come from the simplicity and standardization of working with an integrated platform rather than multiple technology stacks. The entire process of deploying infrastructure is simpler and easier, covering planning, purchasing, installation, upgrades, troubleshooting, performance management, and vendor management.
2. Performance: In a highly virtualized environment, server utilization may already be high. Converged infrastructure extends this efficiency to storage and network port utilization and enables better performance optimization of the overall infrastructure.
3. Availability: Greater reliability means higher availability of infrastructure, applications, and services. Converged infrastructure enables IT to meet its service-level agreements and the business to meet its performance promises to customers.
4. Speed: Converged infrastructure can be deployed in record time. If the new infrastructure is for application development, it can be spun up almost instantaneously, which means that developers can do their jobs faster. IT can respond to business requests with, "You can have it now," rather than, "You can have it in a few months." And the time to market of technology-based offerings improves.
5. Scalability: With converged infrastructure, it is also easier to expand or shrink available resources with changing workloads and requirements.
6. Staffing: Converged infrastructure requires less IT staff to operate and manage it. It reduces costs and increases the ability to support business and infrastructure growth without adding staff. If IT professionals spend less time on the mechanics of infrastructure integration and management, they have more time for value-adding activities and can be increasingly customer-facing and responsive.
7. Risk: Converged infrastructure reduces infrastructure supply chain risk through procurement control, testing, and certification of equipment. It reduces operational risk through robust and comprehensive tools for infrastructure control, including security, and automation to minimize human error. Converged infrastructure also reduces risk to business continuity through high availability and reliability, less disruptive upgrades, and a solid platform for disaster recovery.
8. Innovation: Converged infrastructure facilitates business innovation in two powerful ways. One, it provides a simplified path to the cloud: a business can experiment with and use a vast and growing array of innovative and specialized software and services. Two, when software developers have computing environments on demand, they can experiment more, prototype more, iterate with their business partners, and discover superior business solutions.
9. Cost: The cost advantages of converged infrastructure can be sliced and diced many ways, but you should expect to realize and measure savings in four basic areas: procurement, physical operations, infrastructure management, and staff.
Advantages: When an organization builds its own solution, it controls the complete system and can fully customize the solution to the business requirements. The software built into the solution is specific to the organization's analyzed needs.
Disadvantages: Building the solution consumes a lot of time, resources, and cost. It also takes a lot of time to understand the requirements and the technology, and to learn the required skills. If the organization decides to build, it has to make sure it keeps pace with technology developments in the IT sector; for example, if the IT era is all about virtualization, it should not build a traditional physical solution. Once the product is built, it has to be tested to make sure the requirements are satisfied. The solution also needs to be built quickly, before the requirements change.
Dell EMC VxBlock Systems simplifies all aspects of IT and enables customers to modernize their infrastructure and achieve better business outcomes faster. By seamlessly integrating enterprise-class compute, network, storage, and virtualization technologies, it delivers the most advanced converged infrastructure. It is designed to support large-scale consolidation, peak performance, and high availability for traditional and cloud-based workloads, and is optimized for data reduction and copy data management. Customers can quickly deploy, easily scale, and manage their systems simply and effectively, and deliver on both midrange and enterprise requirements with the all-flash design, enterprise features, and support for a broad spectrum of general-purpose workloads.
Dell EMC XC Series Appliance is a hyper-converged appliance. It integrates Dell EMC PowerEdge servers, the Nutanix software, and a choice of hypervisors to run any virtualized workload. It is ideal for enterprise business applications, server virtualization, hybrid or private cloud projects, and virtual desktop infrastructure (VDI). Users can deploy an XC Series cluster in 30 minutes and manage it without specialized IT resources. The XC Series makes managing infrastructure efficient with a unified HTML5-based management interface, enterprise-class data management capabilities, cloud integration, and comprehensive diagnostics and analytics.
The features of Dell EMC XC Series are:
•Available in flexible combinations of CPU, memory, and SSD/HDD
•Includes thin provisioning and cloning, replication, and tiering
•Validated, tested, and supported globally by Dell EMC
•Able to grow one node at a time with nondisruptive, scale-out expansion
Dell EMC ScaleIO software is used to deploy software-defined block storage. It applies the principles of server virtualization (abstract, pool, and automate) to standard x86 server direct-attached storage (DAS). It creates a simplified SDS infrastructure that enables easier storage lifecycle management and provisioning. ScaleIO can lower storage TCO by 50%, accelerate storage deployment by 83%, and enable 32% faster application deployment.
The features of the ScaleIO software are:
•Deploys as all-flash and/or hybrid software-defined block storage.
•Enables storage tiering across server-based HDDs, SSDs, and PCIe flash cards.
•Ensures high availability with two-copy data mirroring, self-healing, and rebalancing.
•Supports multitenancy storage with data at rest encryption.
•Delivers superior data recovery with consistent, writeable snapshots.
A malicious insider could be a current or former employee, contractor, or other business partner of an organization or cloud service provider who has or had authorized access to the cloud service provider's compute systems, network, or storage. These malicious insiders may intentionally misuse that access in ways that negatively impact the confidentiality, integrity, or availability of the cloud service provider's information or resources. This threat can be controlled through strict access control policies, disabling employee accounts immediately after separation from the company, security audits, encryption, and segregation-of-duties policies. Also, a background investigation of a candidate before hiring is another key measure that can reduce the risk due to malicious insiders.
Technical mechanisms are implemented through the tools or devices deployed on the cloud infrastructure. To protect cloud operations, the technical security mechanisms are further classified into two types:
•Mechanisms deployed at the application level: Application security is a critical component of any IT security strategy. Applications running on cloud infrastructure are frequently accessed over a network, which makes them vulnerable to a variety of threats. Various security mechanisms must be deployed at the application level by cloud providers and consumers to provide a secure environment for the application users. These mechanisms include Identity and Access Management (IAM), role-based access control, application hardening, and sandboxing.
•Mechanisms deployed at the infrastructure level: It is equally important to secure the cloud infrastructure that runs the cloud provider's services. Apart from securing the infrastructure physically, various technical mechanisms must be deployed at the compute, network, and storage level to protect sensitive data. These mechanisms include firewalls, intrusion detection and prevention systems (IDPS), network segmentation, virtual private networks (VPN), encryption, and data shredding.
Applications are the most difficult part of the cloud environment to secure because of their complexity. Application hardening is a security practice followed during application development, with the goal of preventing the exploitation of vulnerabilities that are typically introduced during the development cycle. The most secure applications are those that have been built with security in mind from the start. Application architects and developers must focus on identifying the security policies and procedures that define what applications and application users are allowed to do, and what ports and services the applications can access. Many web applications use an authentication mechanism to verify the identity of a user, so to secure the credentials from eavesdropping, the architects and developers must consider how the credentials are transmitted over the network, and how they are stored and verified. Implement ACLs to restrict which applications can access what resources and the types of operations the applications can perform on those resources. Applications dealing with sensitive data must be designed in such a way that the data remains safe and unaltered. It is also important to secure the third-party applications and tools that may be used, because if they are vulnerable, they can open the door to a greater security breach.
Governance determines the purpose, strategy, and operational rules by which companies are directed and managed. Regarding cloud computing, governance defines the policies and the principles of the provider that the customers must evaluate for the use of cloud services. The goal of governance is to secure the cloud infrastructure, applications, and data by establishing a contract between the cloud provider and the customer. It is also important to establish a contract between the provider and its supporting third-party vendors, because the effectiveness of governance processes may be diminished when the cloud provider outsources some or all of its services to third parties, including cloud brokers. In such a scenario, the provider may not have control over many of the outsourced services, which may impact the commitments given by the provider. Also, the security controls of the third party may change, which may impact the terms and conditions of the provider.
RSA Archer eGRC solutions allow an organization to build an efficient, collaborative enterprise governance, risk, and compliance (eGRC) program across IT, finance, operations, and legal domains. With RSA Archer eGRC, an organization can manage risks, demonstrate compliance, automate business processes, and gain visibility into corporate risk and security controls. RSA delivers several core enterprise governance, risk, and compliance solutions, built on the RSA Archer eGRC Platform. Business users have the freedom to tailor the solutions and integrate with multiple data sources through code-free configuration.
A backup is an additional copy of production data, created and retained for the sole purpose of recovering lost or corrupted data. With the growing business and regulatory demands for data storage, retention, and availability, cloud service providers face the task of backing up an ever-increasing amount of data. This task becomes more challenging with the growth of data, reduced IT budgets, and less time available for taking backups. Moreover, service providers need fast backup and recovery of data to meet their service level agreements.
The amount of data loss and downtime that a business can endure, in terms of RPO and RTO, are the primary considerations in selecting and implementing a specific backup strategy. RPO specifies the time interval between two backups and the amount of data loss a customer can tolerate; for example, if a service requires an RPO of 24 hours, the data needs to be backed up every 24 hours. RTO relates to the time taken by the recovery process. To meet the defined RTO, the service provider should choose the appropriate backup media or backup target to minimize the recovery time.
ROBO backup stands for remote office/branch office backup. Today, businesses have their remote or branch offices spread over multiple locations. Typically, these remote offices have their own local IT infrastructure, which includes file, print, web, or email servers, workstations, and desktops, and might also house some applications and databases. Too often, business-critical data at remote offices is inadequately protected, exposing the business to the risk of lost data and productivity. As a result, protecting the data of an organization's branch and remote offices across multiple locations is critical for business. Traditionally, remote-office data backup was done manually using tapes, which were transported to offsite locations for DR support. Some of the challenges with this approach were the lack of skilled onsite technical resources to manage backups and the risk of sending tapes to offsite locations, which could result in loss or theft of sensitive data. Also, branch offices have less IT infrastructure to manage backup copies, huge volumes of redundant data exist across remote offices, and a huge cost is required to manage these ROBO backup environments.
A snapshot is a virtual copy of a set of files, a VM, or a LUN as they appeared at a specific point-in-time (PIT). A point-in-time copy of data contains a consistent image of the data as it appeared at a given point in time. Snapshots can establish recovery points in just a small fraction of time and can significantly reduce RPO by supporting more frequent recovery points. If a file is lost or corrupted, it can typically be restored from the latest snapshot data in just a few seconds. A file system (FS) snapshot creates a copy of a file system at a specific point-in-time, even while the original file system continues to be updated and used normally. An FS snapshot is a pointer-based replica that requires a fraction of the space used by the production FS. It uses the Copy on First Write (CoFW) principle to create snapshots.
When a snapshot is created, a bitmap and a blockmap are created in the metadata of the snapshot FS. The bitmap is used to keep track of blocks that are changed on the production FS after the snapshot creation. The blockmap is used to indicate the exact address from which the data is to be read when the data is accessed from the snapshot FS. Immediately after the creation of the FS snapshot, all reads from the snapshot are actually served by reading the production FS. In the CoFW mechanism, if a write I/O is issued to the production FS for the first time after the creation of a snapshot, the I/O is held and the original data of the production FS corresponding to that location is moved to the snapshot FS. Then the write is allowed to the production FS, and the bitmap and the blockmap are updated accordingly. Subsequent writes to the same location do not initiate the CoFW activity. To read from the snapshot FS, the bitmap is consulted: if the bit is 0, the read is directed to the production FS; if the bit is 1, the block address is obtained from the blockmap and the data is read from that address on the snapshot FS. Read requests from the production FS work as normal. A simplified sketch of this logic follows.
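This is a minimal, illustrative sketch of the CoFW bookkeeping just described, with the production FS and the snapshot store modeled as simple in-memory structures; real file systems track this at the block layer, not in Python objects.

```python
class FilesystemSnapshot:
    """Toy model of a pointer-based FS snapshot using Copy on First Write."""

    def __init__(self, production_blocks):
        self.production = production_blocks                  # live production FS blocks
        self.snapshot_store = {}                             # preserved original blocks
        self.bitmap = [0] * len(production_blocks)           # 1 = block changed since snapshot
        self.blockmap = [None] * len(production_blocks)      # where snapshot data now lives

    def write_production(self, block_no, new_data):
        if self.bitmap[block_no] == 0:                       # first write after the snapshot
            self.snapshot_store[block_no] = self.production[block_no]  # copy original out first
            self.bitmap[block_no] = 1
            self.blockmap[block_no] = block_no
        self.production[block_no] = new_data                 # subsequent writes skip CoFW

    def read_snapshot(self, block_no):
        if self.bitmap[block_no] == 0:
            return self.production[block_no]                 # unchanged: serve from production FS
        return self.snapshot_store[self.blockmap[block_no]]  # changed: serve preserved copy

fs = FilesystemSnapshot(["A", "B", "C"])
fs.write_production(1, "B2")
print(fs.read_snapshot(1), fs.production[1])                 # prints "B B2": snapshot still sees old data
```

The key property the sketch demonstrates is that the snapshot consumes space only for blocks that change after it is taken, which is why it occupies a fraction of the production FS capacity.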
Deduplication is the process of detecting and identifying the unique data segments (chunks) within a given set of data to eliminate redundancy. The use of deduplication techniques significantly reduces the amount of data to be backed up. Data deduplication operates by segmenting a dataset into blocks, identifying redundant data, and writing the unique blocks to a backup target. To identify redundant blocks, the data deduplication system creates a hash value or digital signature, like a fingerprint, for each data block and an index of the signatures for a given repository. The index provides the reference list to determine whether blocks already exist in the repository. When the data deduplication system sees a block it has processed before, instead of storing the block again, it inserts a pointer to the original block in the repository. It is important to note that data deduplication can be performed in the backup as well as the production environment. In a production environment, deduplication is implemented at primary storage systems to eliminate redundant data in the production volume.
The effectiveness of data deduplication is expressed as a deduplication ratio, denoting the ratio of the amount of data before deduplication to the amount of data after deduplication. This ratio is typically depicted as "ratio:1" or "ratio X" (for example, 10:1 or 10X). For example, if 200 GB of data consumes 20 GB of storage capacity after data deduplication, the space reduction ratio is 10:1. Every data deduplication vendor claims that their product offers a certain ratio of data reduction. However, the actual data deduplication ratio varies, based on many factors, which are discussed next. The sketch below illustrates the basic chunk-and-index approach.
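The following is a minimal sketch of block-level deduplication and the resulting ratio. The 4-byte block size and the sample data are illustrative only; real systems use much larger, often variable-length chunks.

```python
import hashlib

def deduplicate(data: bytes, block_size: int = 4):
    """Split data into blocks, store each unique block once, and return the dedupe ratio."""
    index = {}      # signature -> stored block (the repository index)
    layout = []     # ordered signatures (pointers) that reconstruct the original data
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        signature = hashlib.sha256(block).hexdigest()   # fingerprint of the block
        if signature not in index:
            index[signature] = block                    # store only previously unseen blocks
        layout.append(signature)                        # otherwise just record a pointer
    stored_bytes = sum(len(b) for b in index.values())
    return index, layout, len(data) / stored_bytes      # before-vs-after ratio

_, _, ratio = deduplicate(b"ABCDABCDABCDABCDEFGH")
print(f"deduplication ratio {ratio:.1f}:1")             # redundant ABCD blocks stored only once
```

In this toy dataset the 20 input bytes reduce to 8 stored bytes, giving a 2.5:1 ratio; the ratio achieved in practice depends on how much redundancy the workload actually contains.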
Dell EMC Data Domain deduplication storage system is a target-based data deduplication solution. Using high-speed, inline deduplication technology, the Data Domain system provides a storage footprint that is significantly smaller, on average, than that of the original data set. The Data Domain Data Invulnerability Architecture provides defense against data integrity issues. Dell EMC Data Domain Boost software significantly increases backup performance by distributing parts of the deduplication process to the backup server. With Data Domain Boost, only unique, compressed data segments are sent to a Data Domain system. For archiving and compliance solutions, Data Domain systems allow customers to cost-effectively archive non-changing data while keeping it online for fast, reliable access and recovery. Dell EMC Data Domain Extended Retention is a solution for long-term retention of backup data. It is designed with an internal tiering approach to enable cost-effective, long-term retention of data on disk by implementing deduplication technology. Data Domain provides secure multi-tenancy that enables data protection-as-a-service for large enterprises and service providers who are looking to offer services based on Data Domain in a private or public cloud. With secure multi-tenancy, a Data Domain system logically isolates tenant data, ensuring that each tenant's data is only visible and accessible to them. Dell EMC Data Domain Replicator software transfers only the deduplicated and compressed unique changes across any IP network, requiring a fraction of the bandwidth, time, and cost compared to traditional replication methods.
Cloud service management:
Management in the cloud has a service-based focus commensurate with the requirements of each cloud service, rather than an asset-based focus. It is based on a holistic approach that must be able to span all the IT assets in a cloud infrastructure. Depending on the size of a cloud environment, service management may encompass a massive IT infrastructure comprising multivendor assets, various technologies, and multiple data centers. Cloud service management must be optimized to handle increased flexibility, complexity, data access, change rates, and risk exposure; if it is not optimized, it may lead to service outages and SLA violations. To operate sustainably in a cloud environment, service management must rely on automation and workflow orchestration. Cloud service management may still follow traditional IT service management processes such as ITIL. However, the processes should support rapid cloud deployment, orchestration, elasticity, and service mobility.
The service catalog design and implementation process consists of a sequence of steps.
1. Create service definition: Creating a definition for each service offering is the first step in designing and implementing the service catalog. A service definition comprises service attributes such as service name, service description, features and options, provisioning time, and price (a sketch of such a definition follows this list). The cloud portal software provides a standard user interface to create service definitions. The interface commonly provides text boxes, check boxes, radio buttons, and drop-downs to make entries for the service attributes.
2. Define service request: Once the service definition is created, define the web form used to request the service. The portal software includes a form designer for creating the service request form that consumers use to request the service.
3. Define fulfillment process: Once the service request form is defined, the next step is to define the process that fulfills delivery of the service. Once the process is modeled, approved, and validated, it is implemented using workflows in the orchestrator.
4. Publish service: The final step is to publish the service catalog to the consumers. Before publishing, it is a good practice to perform usability and performance testing. After the service is published, it becomes available to consumers on the cloud portal.
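The following sketch shows the kind of data a catalog entry from step 1 might hold and how a request form (step 2) could be derived from it. The attribute names follow the list above; the service name, options, prices, and the render_request_form helper are illustrative assumptions, not the API of any particular portal product.

```python
# Hypothetical service definition, mirroring the attributes named in step 1.
service_definition = {
    "service_name": "Standard Virtual Machine",
    "service_description": "General-purpose VM with a Linux OS",
    "features_and_options": {
        "vcpus": [2, 4, 8],
        "memory_gb": [4, 8, 16],
        "storage_gb": [50, 100],
    },
    "provisioning_time": "4 hours",
    "price": "$0.10 per hour",
}

def render_request_form(definition):
    """Step 2: derive consumer-facing request form fields from the definition."""
    return [{"field": option, "choices": values}
            for option, values in definition["features_and_options"].items()]

for field in render_request_form(service_definition):
    print(field["field"], "->", field["choices"])
```

Keeping the definition as structured data is what lets the portal generate the request form and hand an approved request to the orchestrator's fulfillment workflow without manual rework.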
The goal of availability management is to ensure that the stated availability commitments are consistently met. The availability management process optimizes the capability of the cloud infrastructure, services, and the service management team to deliver a cost-effective and sustained level of service that meets SLA requirements. The activities of the availability management team are described below:
•Gathers information on the availability requirements for upgraded and new services. Different types of cloud services may be subject to different availability commitments and recovery objectives. A provider may also decide to offer different availability levels for the same type of service, creating tiered services.
•Proactively monitors whether the availability of existing cloud services and infrastructure components is maintained within acceptable and agreed levels. The monitoring tools identify differences between the committed availability and the achieved availability of services and notify administrators through alerts (see the sketch after this list).
•Interacts with incident and problem management teams, assisting them in resolving availability-related incidents and problems. Through this interaction, incident and problem management teams provide key input to the availability management team regarding the causes of service failures. Incident and problem management also provide information about errors or faults in the infrastructure components that may cause future service unavailability. With this information, the availability management team can quickly identify new availability requirements and areas where availability must be improved.
•Analyzes, plans, designs, and manages the procedures and technical features required to meet current and future availability needs of services at a justifiable cost. Based on the SLA requirements of enhanced and new services, and the areas identified for improvement, the team provides inputs that may suggest changes to the existing business continuity (BC) solutions or the architecture of new solutions that provide more tolerance and resilience against service failures. Some examples of BC solutions are clustering of compute systems and replication of databases and file systems.
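The following is a minimal sketch of the comparison the monitoring tools described above automate: computing achieved availability from measured uptime and downtime and checking it against the committed level. The 99.9% commitment and the uptime figures are illustrative only.

```python
def achieved_availability(uptime_hours: float, downtime_hours: float) -> float:
    """Availability as the percentage of total time the service was up."""
    return uptime_hours / (uptime_hours + downtime_hours) * 100

committed = 99.9                       # availability level committed in the SLA (%)
achieved = achieved_availability(uptime_hours=719.2, downtime_hours=0.8)  # one 720-hour month

if achieved < committed:
    print(f"ALERT: achieved {achieved:.2f}% is below the committed {committed}%")
else:
    print(f"OK: achieved {achieved:.2f}% meets the committed {committed}%")
```

In this example 0.8 hours of downtime in a 720-hour month yields about 99.89%, just under a 99.9% commitment, which is the kind of gap that would trigger an alert to the availability management team.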
The goal of information security management is to prevent the occurrence of incidents or activities that adversely affect the confidentiality, integrity, and availability of information and processes. It protects corporate and consumer data to the extent required to meet regulatory or compliance concerns, both internal and external, at reasonable and acceptable costs. The interests of all stakeholders of a cloud service, including consumers who rely on information and the IT infrastructure, are considered. The key functions of information security management are described below:
•The information security management team implements the cloud service provider's security requirements. It develops information security policies that govern the provider's approach towards information security management. These policies may be specific to a cloud service, an external service provider, or an organizational unit, or they can be uniformly applicable. Top executive management approves the information security policies. These security assurances are often detailed in SLAs and contracts. Information security management requires periodic reviews and, as necessary, revision of these policies.
•The information security management team establishes a security management framework aligned with the security policies. The framework specifies the security architecture, processes, mechanisms, tools, responsibilities for both consumers and cloud administrators, and standards needed to ensure information security in a cost-effective manner. The security architecture describes the following:
•The structure and behavior of security processes
•Methods of integrating security mechanisms with the existing IT infrastructure
•Service availability zones
•Locations to store data
•Security management roles.
Dell EMC ViPR SRM is a comprehensive monitoring and reporting solution that helps IT visualize, analyze, and optimize today's storage investments. It provides a management framework that supports investments in software-defined storage.
•Visualize: provides detailed relationship and topology views from the application, to the virtual or physical host, down to the LUN, to identify service dependencies. It also provides a view of performance trends across the data path and identifies hosts competing for resources.
•Analyze: helps to analyze health, configurations, and capacity growth. Analysis helps to spot SLA problems through custom dashboards and reports that meet the needs of a wide range of users and roles. ViPR SRM helps to track block, file, and object capacity consumption across data centers with built-in views. The views help to understand who is using capacity, how much they are using, and when more will be required.
•Optimize: ViPR SRM helps to optimize capacity and improve productivity to get the most out of investments in block, file, and object storage. It shows historical workloads and response times to determine whether the right storage tier has been selected. It tracks capacity usage, allowing the creation of showback or chargeback reports to align application requirements with costs.
The VMware vCenter Operations Management Suite includes a set of tools that automates performance, capacity, and configuration management, and provides an integrated approach to service management. It enables IT organizations to ensure service levels, optimum resource usage, and configuration compliance in virtualized and cloud environments. The vCenter Operations Management Suite includes four components. These components are described below:
•vCenter Operations Manager provides operations dashboards to gain visibility into the cloud infrastructure. It identifies potential performance bottlenecks automatically and helps remediate them before consumers notice problems. Further, it enables optimizing the usage of capacity and performs capacity trend analysis.
•vCenter Configuration Manager automates configuration management tasks such as configuration data collection, configuration change execution, configuration reporting, change auditing, and compliance assessment. This automation enables organizations to maintain configuration compliance and to enforce IT policies, regulatory requirements, and security hardening guidelines.
•vCenter Hyperic monitors hardware resources, operating systems, middleware, and applications. It provides immediate notification if application performance degrades or the application becomes unavailable. The notification enables administrators to ensure the availability and reliability of business applications.
•vCenter Infrastructure Navigator automatically discovers application services running on the VMs and maps their dependencies on IT infrastructure components.
Adopting cloud enables digital transformation; therefore, new roles need to be created to perform tasks related to cloud services. Examples of such tasks are service definition and creation, service administration and management, service governance and policy information, and service consumer management. Some of these tasks can be combined to become the responsibility of an individual or organizational role. A few examples of new roles required to perform tasks within a cloud environment include the service manager, account manager, cloud architect, and service operations manager.
•A service manager is responsible for understanding consumers' needs and industry trends to drive an effective product strategy. The service manager ensures that IT delivers cost-competitive services that have the features that clients need. The service manager is also responsible for managing consumers' expectations of product offerings and serves as a key interface between clients and IT staff.
•A service account manager supports service managers in service planning, development, and deployment. The service account manager maintains day-to-day contact to ensure that consumers' needs are met. They also assist clients with demand planning and communicate service offerings.
•A cloud architect is responsible for creating detailed designs for the cloud infrastructure.
•The service operations manager is responsible for streamlining service delivery and execution. The service operations manager is also responsible for providing early warning of service issues, such as emerging capacity constraints or an unexpected increase in cost. The service operations manager also coordinates with the architecture team to define technology roadmaps and ensure that service level objectives are met.
Apart from the above roles, other roles such as cloud engineer, DevOps engineer, and cloud administrator are also required.
•A cloud engineer is responsible for designing, planning, managing, maintaining, and supporting the cloud infrastructure.
•A cloud automation engineer is responsible for design and implementation of cloud provisioning processes for delivering cloud services.
•A vendor relationship manager is responsible for understanding the service needs of LOBs, determining which needs are good candidates for a CSP, and working with service managers.
•A DevOps engineer is responsible for the development, testing, and operation of an application.
Agile is an iterative and incremental software development method. Examples of agile methods are scrum, extreme programming, lean development, and so on.
The agile methodologies are effective at improving time to delivery, feature functionality, and quality. They are iterative, value quick delivery of working software for user review and testing, and are implemented within the development organization.
Developers frequently build their own infrastructure as needed to investigate and test a particular product feature. These environments can be tweaked to fit the desires of the developer; however, transitioning to production availability requires operations to reproduce the developer's environment. The smooth transition from development into operations is affected in the following circumstances:
•Specific configuration of a development environment is undocumented
•Development environment conflicts with the configuration of another environment
•Development environment deviates from established standards
If a feature requires new equipment, there are often delays to accommodate budgets and maintenance windows. Although these problems are common between development and operations teams, overcoming them is a core principle of DevOps practices.
Transforming to DevOps practices takes a clear vision and, more than anything else, commitment from employees and management.
DevOps culture in cloud brings the following benefits:
•Faster application development and delivery to meet the business needs
•User demands that are quickly incorporated into the software
•Reduced cost for development, deployment, testing, and operations
A list of common chargeback models, along with their descriptions, is provided below; a short calculation sketch follows the list.
•Pay-as-you-go: Metering and pricing are based on the consumption of cloud resources by the consumers. Consumers do not pay for unused resources. For example, some cloud providers offer pricing on a monthly, hourly, or per-second basis of consumption, offering extreme flexibility and cost benefits to customers.
•Subscription by time: Consumers are billed for a subscription period. The cost of providing a cloud service for the subscription period is divided among a predefined number of consumers. For example, in a private cloud, if three business units subscribe to a service that costs $60,000 a month to provide, then the chargeback per business unit is $20,000 for the month.
•Subscription by peak usage: Consumers are billed according to their peak usage of IT resources for a subscription period. For example, a provider may charge a consumer for their share of peak usage of network bandwidth.
•Fixed cost or prepay: Consumers commit upfront to the required cloud resources for a committed period, such as one year or three years. They pay a fixed charge periodically through a billing cycle for the service they use, regardless of the utilization of resources.
•User-based: Pricing is based on the identity of a user of the cloud service. In this model, the number of users logged in is tracked and billed based on that number.
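The following sketch contrasts two of the chargeback models above: pay-as-you-go metering versus a time-based subscription split among a known number of consumers. The hourly rate and usage figures are illustrative; the subscription numbers mirror the $60,000-for-three-business-units example given earlier.

```python
def pay_as_you_go(hours_used: float, rate_per_hour: float) -> float:
    """Bill only for metered consumption; unused resources cost nothing."""
    return hours_used * rate_per_hour

def subscription_by_time(monthly_service_cost: float, subscribers: int) -> float:
    """Divide the cost of providing the service for the period among its subscribers."""
    return monthly_service_cost / subscribers

print(pay_as_you_go(hours_used=300, rate_per_hour=0.12))                   # 36.0
print(subscription_by_time(monthly_service_cost=60_000, subscribers=3))    # 20000.0
```

The comparison highlights the trade-off between the models: pay-as-you-go tracks actual consumption, while subscription by time gives each consumer a predictable, fixed share of the service cost regardless of usage.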