Machine learning (ML) is a specific type of ANI, with the goal of giving a computing device access to some store of data and allowing it to learn from it, but nowhere near GAI levels. ML is a subset of AI that includes supervised, unsupervised, reinforcement, and deep learning systems. Supervised machine learning algorithms and models use labeled datasets, beginning with an understanding of how the data is classified, whereas unsupervised models use unlabeled datasets and discover features and patterns in the data without explicit instructions or preexisting categorizations. Reinforcement learning, on the other hand, takes a more iterative approach: instead of being trained on a single dataset, the system learns through trial and error, receiving feedback from data analysis. With faster and bigger computation capabilities, ML has advanced to deep learning, a specific kind of ML that applies algorithms called "artificial neural networks," composed of decision nodes, to more accurately train ML systems for supervised, unsupervised, and reinforcement learning tasks. Deep learning offers a wealth of possibilities and already has promising applications in image recognition, self-driving cars, fake news detection, and more. The important thing is understanding that these techniques can be applied to solve business problems, as long as there is data to train them.

Many IoT devices have limited storage, memory, and processing capability, and they often need to operate on low power, for example, when running on batteries. Security approaches that rely heavily on encryption are not a good fit for these constrained devices, because they cannot perform complex encryption and decryption quickly enough to transmit data securely in real time. These devices are often vulnerable to side-channel attacks, such as power analysis attacks, that can be used to reverse engineer these algorithms.
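Constrained devices therefore tend toward compact block ciphers designed for small code size and low power. One commonly cited example is XTEA; the sketch below is illustrative only (real firmware would use an optimized C implementation, and XTEA itself has known cryptanalytic weaknesses):

```python
# Minimal XTEA sketch: a 64-bit-block, 128-bit-key lightweight cipher
# of the kind used on constrained devices. Python is used here for
# readability; a real device would run an optimized C implementation.

MASK = 0xFFFFFFFF      # keep arithmetic within 32 bits
DELTA = 0x9E3779B9     # XTEA's key-schedule constant
ROUNDS = 32

def xtea_encrypt(v0: int, v1: int, key: list) -> tuple:
    """Encrypt one 64-bit block (two 32-bit words) with a 4-word key."""
    s = 0
    for _ in range(ROUNDS):
        v0 = (v0 + ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (s + key[s & 3]))) & MASK
        s = (s + DELTA) & MASK
        v1 = (v1 + ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (s + key[(s >> 11) & 3]))) & MASK
    return v0, v1

def xtea_decrypt(v0: int, v1: int, key: list) -> tuple:
    """Invert xtea_encrypt by running the rounds in reverse."""
    s = (DELTA * ROUNDS) & MASK
    for _ in range(ROUNDS):
        v1 = (v1 - ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (s + key[(s >> 11) & 3]))) & MASK
        s = (s - DELTA) & MASK
        v0 = (v0 - ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (s + key[s & 3]))) & MASK
    return v0, v1
```

The entire cipher is additions, shifts, and XORs on 32-bit words, which is exactly what makes this class of algorithm feasible on battery-powered hardware; note, however, that such unprotected implementations remain exposed to the power-analysis attacks mentioned above.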
Instead, constrained devices typically employ only fast, lightweight encryption algorithms. To compensate for these device limitations, IoT systems should use multiple layers of defense, for example, segregating devices onto separate networks and using firewalls.

The potential for disruption as a result of connectivity outages or device failures, or arising from attacks such as denial-of-service attacks, is more than just an inconvenience. In some applications, the lack of availability could mean loss of revenue, damage to equipment, or even loss of life. For example, in connected cities, IoT infrastructure is responsible for essential services such as traffic control, and in healthcare, IoT devices include pacemakers and insulin pumps. To ensure high availability, IoT devices must be protected against cyberattacks as well as physical tampering. IoT systems must include redundancy to eliminate single points of failure, and should also be designed to be resilient and fault tolerant, so that they can adapt and recover quickly when problems do arise.

Because microservices are distributed, stateless, and therefore necessarily independent, they produce more logs, and the challenge is that more logs threaten to camouflage issues as they pop up. With microservices running on multiple hosts, it becomes necessary to send logs from all of those hosts to a single, external, centralized location. For microservices security to be effective, user logging needs to correlate events across multiple, potentially differing platforms, which requires a higher vantage point to observe from, independent of any single API or service.

Containerization has become a major trend in software development as an alternative or companion to virtualization. It involves encapsulating or packaging up software code and all its dependencies so that it can run uniformly and consistently on any infrastructure.
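The cross-service log correlation described above is commonly implemented by minting a correlation ID at the edge, propagating it with every downstream call, and shipping structured log records to one central store. A minimal sketch (the `CentralLogStore` class and service names are illustrative, not from any particular product):

```python
import json
import uuid

# Hypothetical central log store: every service ships structured
# records here instead of writing to its own host's local disk.
class CentralLogStore:
    def __init__(self):
        self._records = []

    def ship(self, service: str, correlation_id: str, message: str):
        # Structured (JSON) records make cross-platform correlation possible.
        self._records.append(json.dumps({
            "service": service,
            "correlation_id": correlation_id,
            "message": message,
        }))

    def trace(self, correlation_id: str) -> list:
        """Reassemble one request's path across every service that logged it."""
        return [rec for rec in map(json.loads, self._records)
                if rec["correlation_id"] == correlation_id]

store = CentralLogStore()

def handle_request() -> str:
    # The edge service mints one correlation ID per inbound request;
    # in a real system each downstream call would carry it in a header.
    cid = str(uuid.uuid4())
    store.ship("api-gateway", cid, "request received")
    store.ship("auth-service", cid, "token validated")
    store.ship("orders-service", cid, "order created")
    return cid
```

A query like `store.trace(cid)` then reconstructs a single request's journey across the gateway, auth, and orders services: exactly the "higher vantage point" that no individual API or host log can provide.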
Containerization allows developers to create and deploy applications faster and more securely. With traditional methods, code is developed in a specific computing environment which, when transferred to a new location, often results in bugs and errors, for example, when a developer transfers code from a desktop computer to a virtual machine (VM) or from a Linux to a Windows operating system. Containerization eliminates this problem by bundling the application code together with the related configuration files, libraries, and dependencies required for it to run. This single package of software, or "container," is abstracted away from the host operating system; it stands alone and becomes portable, able to run across any platform or cloud, free of such issues.

Containers are often described as "lightweight": they share the machine's operating system kernel and do not carry the overhead of a separate operating system for each application. Containers are inherently smaller than VMs and require less start-up time, allowing far more containers to run on the same compute capacity as a single VM. This drives higher server efficiencies and, in turn, reduces server and licensing costs. Put simply, containerization allows applications to be "written once and run anywhere."

Containerized applications have an inherent level of security, since they run as isolated processes and can operate independently of other containers. When truly isolated, this can prevent malicious code from affecting other containers or invading the host system. However, application layers within a container are often shared across containers. In terms of resource efficiency this is a plus, but it also opens the door to interference and security breaches across containers. The same can be said of the shared operating system, since multiple containers can be associated with the same host operating system.
Security threats to the common operating system can impact all of the associated containers, and conversely, a container breach can potentially invade the host operating system. But what about the container image itself? How can the applications and open-source components packaged within a container be secured?

Container technology providers, such as Docker, continue to actively address container security challenges. Containerization has taken a "secure-by-default" approach, on the principle that security should be inherent in the platform and not a separately deployed and configured solution. To this end, the container engine supports all of the default isolation properties inherent in the underlying operating system. Security permissions can be defined to automatically block unwanted components from entering containers or to limit communications with unnecessary resources. For example, Linux namespaces give each container an isolated view of the system, covering networking, mount points, process IDs, user IDs, inter-process communication, and hostname settings. Namespaces can be used to limit the access that processes within each container have to those resources, and subsystems without namespace support are typically not accessible from within a container. Administrators can easily create and manage these "isolation constraints" for each containerized application through a simple user interface.

Containers enable microservices, which increases data traffic and the complexity of networking and access control. Containers rely on a base image, and knowing whether the image comes from a secure or insecure source can be challenging. Images can also contain vulnerabilities that spread to every container built from the vulnerable image. Containers have short life spans, so monitoring them, especially at runtime, can be extremely difficult. Another security risk arises from a lack of visibility into an ever-changing container environment.
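The vulnerable-base-image problem above is usually tackled by joining image-scanner findings with an inventory of what is actually running. The sketch below uses a hypothetical hard-coded inventory (`DEPLOYMENTS`); a real pipeline would pull this data from an image scanner and the orchestrator's API:

```python
# Sketch: answering "which deployments are affected by a vulnerable
# base image, and which are highest risk?" from a hypothetical
# inventory mapping deployments to the image they run.

DEPLOYMENTS = {
    # deployment name -> (image, environment, internet-exposed?)
    "web-frontend": ("nginx:1.19", "production", True),
    "batch-jobs":   ("python:3.8-slim", "production", False),
    "web-staging":  ("nginx:1.19", "staging", True),
}

def blast_radius(vulnerable_image: str, deployments=DEPLOYMENTS) -> list:
    """List every deployment running the vulnerable image, flagging the
    highest-risk ones (running in production AND exposed to the Internet)."""
    hits = []
    for name, (image, env, exposed) in deployments.items():
        if image == vulnerable_image:
            hits.append({
                "deployment": name,
                "env": env,
                "exposed": exposed,
                "high_risk": env == "production" and exposed,
            })
    return hits
```

Because one vulnerable image can back many deployments, a single scanner finding fans out across the whole inventory, which is why this join between scan results and runtime state is the first question asked in the next paragraph's triage.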
Containers, unlike VMs, are not necessarily isolated from one another, so a single compromised container can lead to other containers being compromised. Containerized environments also have many more components than traditional VMs, including the Kubernetes orchestrator, which poses its own set of security challenges. Can you tell which deployments or clusters are affected by a high-severity vulnerability? Are any exposed to the Internet? What is the blast radius if a given vulnerability is exploited? Is the container running in production or in a dev/test environment?

Container configuration is yet another area that poses security risks. Are containers running with heightened privileges when they should not be? Are images launching unnecessary services that increase the attack surface? Are secrets stored in images? As one of the biggest security drivers, compliance can be a particular challenge given the fast-moving nature of container environments. Many of the traditional components that helped demonstrate compliance, such as firewall rules, take a very different form in a container environment.

Common security risks in microservices and serverless architectures include:

• Injection attacks
• Authentication flaws
• Cross-site scripting (XSS)
• Deserialization attacks
• Microservices using many dependencies and third-party libraries that carry vulnerabilities
• Event-driven sources that extend the attack surface beyond direct user input
• XSS payloads that can execute from sources such as email, cloud storage, and logs
• Code injection that lets attackers use API functionality to interact with other services in the application
• Leaked API keys that allow unauthenticated and unauthorized use of functionality across the application
• Denial-of-Wallet attacks, a financial attack that abuses the auto-scaling nature of serverless architectures by repeatedly triggering functions with a high concurrency limit
• DoS attacks in a serverless environment. Contributing factors are the function concurrency limit (triggering a function until the limit is hit), environment disk capacity (filling the /tmp folder), and account read/write capacity (triggering the maximum allowed database table scans).
• In serverless environments, if the container is not destroyed, any written application data that was not manually deleted after use could be accessed by an attacker.
• Because of microservices' stateless nature, it is harder to verify that a function should be invoked, leading to access control issues, bypassing of the intended execution flow, and access to unauthorized data.
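The warm-container data-leak risk in the second bullet can be simulated in a few lines. This is a sketch, not a real FaaS runtime: here two "invocations" simply share one process and one temp directory, standing in for a platform reusing a warm container between requests:

```python
import os
import tempfile

# Scratch file a careless function writes during request handling.
# (Filename is illustrative; /tmp persists across invocations when a
# serverless platform reuses a warm container.)
SCRATCH = os.path.join(tempfile.gettempdir(), "invoice_scratch.txt")

def first_invocation():
    # Handles a legitimate request but forgets to clean up its temp data.
    with open(SCRATCH, "w") as f:
        f.write("customer=alice;status=paid")

def second_invocation():
    # A later (possibly attacker-triggered) invocation landing on the
    # same warm container can read whatever the previous one left behind.
    if os.path.exists(SCRATCH):
        with open(SCRATCH) as f:
            return f.read()
    return None
```

The mitigation follows directly from the bullet: delete scratch data before returning (`os.remove`), or avoid writing sensitive data to the shared temp directory at all.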