Azure Infrastructure and Networking
Terms in this set (152)
In what US regions are Azure data centers located?
Central US (Iowa).
East US (Virginia).
East US 2 (Virginia).
US Gov Iowa (Iowa).
US Gov Virginia (Virginia).
North Central US (Illinois).
South Central US (Texas).
West US (California).
In what regions outside of the US are Azure data centers located?
North Europe (Ireland).
West Europe (Netherlands).
East Asia (Hong Kong).
Southeast Asia (Singapore).
Japan East (Saitama Prefecture).
Japan West (Osaka Prefecture).
Brazil South (Sao Paulo State).
Australia East (New South Wales).
Australia Southeast (Victoria).
Which regions correspond to Zone 1, Zone 2, Zone 3?
Zone 1: US and Europe
Zone 2: Asia, Japan, AUS
Zone 3: Brazil
What service allows customers to easily migrate physical, VMware, AWS and Hyper-V virtual machines to Azure (based on acquired InMage Scout technology)?
Azure Site Recovery.
Name the cloud computing infrastructure:
Public cloud.
Customers access cloud services and store documents in large datacenters equipped with hundreds of virtualized servers that house data from multiple organizations.
Name the cloud computing infrastructure:
Private cloud.
A single organization uses a dedicated cloud infrastructure.
Name the cloud computing infrastructure:
Community cloud.
A cloud infrastructure is shared by a group of organizations with common missions, interests, or concerns. For example, a cloud provider may offer an instance of its services in a cloud dedicated to government customers only.
Name the cloud computing infrastructure:
Hybrid cloud.
A private cloud is extended to the public cloud to extend an organization's datacenter; or two or more cloud types are linked to enable data and applications to flow between them in a controlled way.
What are the 3 characteristics of highly available cloud services?
Availability, scalability, and fault tolerance.
Although these characteristics are interrelated, it is important to understand each and how they contribute to the overall availability of the solution.
What is availability considered to be in Azure?
An available application considers the availability of its underlying infrastructure and dependent services. Available applications remove single points of failure through redundancy and resilient design. When we talk about availability in Azure, it is important to understand the concept of the effective availability of the platform. Effective availability considers the Service Level Agreements (SLA) of each dependent service and their cumulative effect on the total system availability.
System availability is the measure of the percentage of a time window the system will be able to operate. For example, the availability SLA of at least two instances of a web or worker role in Azure is 99.95% (out of 100%). It does not measure the performance or functionality of the services running on those roles. However, the effective availability of your cloud service is also affected by the various SLA of the other dependent services. The more moving parts within the system, the more care you must take to ensure the application can resiliently meet the availability requirements of its end users.
For actual SLA's (i.e. 99.9%), see the Azure Overview cards.
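The cumulative effect of dependent-service SLAs can be sketched as simple arithmetic: for serially dependent services, the effective availability is the product of the individual SLAs. The figures below are illustrative only, not the actual SLA of any particular Azure service.

```python
from functools import reduce

def composite_sla(*slas):
    """Effective availability of serially dependent services:
    the product of the individual SLAs (each given as a fraction)."""
    return reduce(lambda a, b: a * b, slas, 1.0)

# Illustrative: compute (99.95%) depending on two other services at 99.9% each.
effective = composite_sla(0.9995, 0.999, 0.999)
print(round(effective * 100, 2))  # 99.75 -- lower than any single SLA
```

Note how adding dependent services always lowers the effective availability; this is why "more moving parts" demands more care.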
If an application cannot scale, it directly affects what?
Its availability.
An application that fails under increased load is no longer available. Scalable applications are able to meet increased demand with consistent results in acceptable time windows. When a system is scalable, it scales horizontally or vertically to manage increases in load while maintaining consistent performance. In basic terms, horizontal scaling adds more machines of the same size (processor, memory, bandwidth) while vertical scaling increases the size of the existing machines. For Azure, you have vertical scaling options for selecting various machine sizes for compute. However, changing the machine size requires a re-deployment. Therefore, the most flexible solutions are designed for horizontal scaling. This is especially true for compute because you can easily increase the number of running instances of any web or worker role.
Applications need to assume that every dependent cloud capability can and will go down at some point in time. What is it called to account for this happening?
Fault tolerance.
A fault tolerant application detects and maneuvers around failed elements to continue and return the correct results within a specific timeframe. For transient error conditions, a fault tolerant system will employ a retry policy. For more serious faults, the application can detect problems and fail over to alternative hardware or contingency plans until the failure is corrected. A reliable application can properly manage the failure of one or more parts and continue operating properly. Fault tolerant applications can use one or more design strategies, such as redundancy, replication, or degraded functionality.
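A retry policy for transient error conditions, as described above, can be sketched like this. This is a minimal illustration; `TransientError` is a hypothetical stand-in for whatever transient exception your client library raises.

```python
import random
import time

class TransientError(Exception):
    """Hypothetical stand-in for a transient fault (throttling, brief network loss)."""

def with_retry(operation, max_attempts=4, base_delay=0.5):
    """Retry with exponential backoff and jitter for transient faults.
    Non-transient exceptions propagate immediately (no retry)."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except TransientError:
            if attempt == max_attempts:
                raise  # retries exhausted: caller can fail over or degrade
            # back off 0.5s, 1s, 2s, ... plus jitter to avoid retry storms
            time.sleep(base_delay * 2 ** (attempt - 1) + random.uniform(0, 0.1))
```

For serious faults, this retry loop would be the first line of defense before failing over to alternative hardware or degraded functionality.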
What technology component in the infrastructure is responsible for provisioning and monitoring the condition of the Azure compute instances?
Azure Fabric Controller (FC)
The Fabric Controller checks the status of the hardware and software of the host and guest machine instances. When it detects a failure, it enforces SLAs by automatically relocating the VM instances. The concept of fault and upgrade domains further supports the compute SLA.
When multiple role instances are deployed, Azure deploys these instances to different fault domains. A fault domain boundary is basically a different hardware rack in the same datacenter. Fault domains reduce the probability that a localized hardware failure will interrupt the service of an application. You cannot manage the number of fault domains that are allocated to your worker or web roles. The Fabric Controller uses dedicated resources that are separate from Azure hosted applications. It has 100% uptime because it serves as the nucleus of the Azure system. It monitors and manages role instances across fault domains.
An upgrade domain is a logical unit of instance separation that determines which instances in a particular service will be upgraded at a point in time. By default, five upgrade domains are defined for your hosted service deployment. However, you can change that value in the service definition file. For example, if you have eight instances of your web role, there will be two instances in each of three upgrade domains and one instance in each of the remaining two. Azure defines the update sequence, but it is based on the number of upgrade domains.
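The placement can be sketched assuming simple round-robin assignment of instances to upgrade domains (instance index modulo the UD count, which matches the sixth-VM-joins-the-first-VM's-UD behavior described elsewhere in this set):

```python
def upgrade_domain(instance_index, ud_count=5):
    """Instances are distributed round-robin across upgrade domains."""
    return instance_index % ud_count

# 8 web role instances across the default 5 UDs:
layout = {}
for i in range(8):
    layout.setdefault(upgrade_domain(i), []).append(i)
print(layout)  # {0: [0, 5], 1: [1, 6], 2: [2, 7], 3: [3], 4: [4]}
```

Three UDs end up with two instances each, and two UDs with one instance each.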
What allows you to group your cloud services by proximity to each other in the Azure datacenter in order to achieve optimal performance?
Affinity Groups.
When you create an affinity group, it lets Azure know to keep all of the services that belong to your affinity group as physically close to each other as possible. For example, if you want to keep the services running your data and your code together, you would specify the same affinity group for those cloud services. They would then run on hardware that is located close together in the datacenter. This can reduce latency and increase performance, while potentially lowering costs.
Note: Affinity groups are being replaced with VNets which also group VM's and Cloud Services into a region. Affinity groups may go away in the future.
Must a VNet be assigned to an affinity group?
No.
Previously, when creating a virtual network (VNet), you were required to associate the VNet with an affinity group, which was, in turn, associated with a Region. This requirement has changed. Now VNets are associated directly with a Region (Location) in the Management Portal. This allows you more freedom when creating your VNets.
You can still associate your cloud services with affinity groups when appropriate, but you are not required to do so.
The Region represents where the Virtual Network overlay will be. Anything you deploy to the virtual network will be physically located in the Region. If you want to further designate that you want your resources in close proximity physically to each other within the same region, you can specify an affinity group for those particular resources. That would mean that not only are those resources in the same physical location, they are very close to each other in the datacenter.
Use this compute service to provision on-demand, scalable compute infrastructure when you need flexible resources. These virtual machines can run Windows, Linux, and enterprise applications. Or, capture your own images to create custom VMs.
Compute: Virtual Machines
What are the different virtual machine classifications?
General Purpose (A0-A4) - See VM.
Memory Intensive (A5-A7) - See VM.
Compute & Network Intensive (A8-A9) - See VM.
Optimized Compute (D1-D14) - See VM.
- all come with integrated health, monitoring, load-balancing, and auto-scaling.
- A1 is the smallest size recommended for production workloads.
An economical option for development workloads, test servers, and other applications that don't require load balancing, auto-scaling, or memory-intensive virtual machines. A1 is the smallest size recommended for production workloads. Select a virtual machine with 4 or 8 CPU cores when using SQL Server Enterprise Edition.
A0 xSmall, shared core, 768MB RAM
A1 Small, 1 core and 1.75GB RAM
A2 Medium, 2 cores and 3.5GB RAM
A3 Large, 4 cores and 7GB RAM
A4 XLarge, 8 cores and 14GB RAM
A5, 2 cores and 14GB RAM
A6, 4 cores and 28GB RAM
A7, 8 cores and 56GB RAM
Compute & Network Optimized
A8 (8 cores, 56GB RAM)
A9 (16 cores, 112GB RAM)
Ideal for Message Passing Interface (MPI) applications, high-performance clusters, modeling and simulations, video encoding, and other compute or network intensive scenarios. Network optimized with Infiniband and remote direct memory access (RDMA).
Compute Optimized D-Series
D1 (3.5GB RAM) - D4 (28GB RAM)
60% faster CPUs, more memory, and local SSD. The faster SSD is used for the D: drive that is used for temporary storage making it ideal for temp DB for a database.
G1-G5 - will provide more memory and more local Solid State Drive (SSD) storage than any current VM size in the public cloud. The largest G-series will offer 448 GB RAM and 6.5 TB of local SSD storage.
Since a storage account is limited to 40 disks, what should be done with the OS disk and data disk to maximize usage of a single storage account for OS disks?
Put OS disks and data disks in different storage accounts.
What is each VM disk (i.e. c, d, e, f etc.) used for?
C: is for OS Disk (Disk Cache used for performance but all writes are written to Blob storage)
D: is the Temporary Disk, also used for SSD in D Series
E:, F:, etc. are for data disks pointing to Blob Storage
What happens to the data disks attached to a VM when the VM is deleted?
By default, the data disks are not deleted; they remain as VHD blobs in Azure Storage until you delete them separately (or you can choose to delete the attached disks along with the VM).
How can you achieve having 10,000 VM's running?
-> Cloud Service (200 per Subscription)
-> -> VM's (50 per Cloud Service) - Gives 10,000 VM's.
-> VNet (100 per Subscription)
Note: 2048 VM's per VNet
-> Storage Account (100 per Subscription)
-> -> Storage Container
-> -> -> Storage Blob (no more than 40 VHD's per Storage Account due to (20,000 req/s) / (500 req/s for each VM))
Given the above limitations, what's the minimum number of VNets needed to support 10,000 VM's?
Five (10,000 VM's / 2,048 VM's per VNet, rounded up).
Given the above limitations, what's the minimum number of Subscriptions, Storage Accounts, and Blobs needed to support 10,000 VM's assuming no data disks were needed?
10,000 Blobs, 250 Storage Accounts, 3 Subscriptions
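The arithmetic behind these numbers can be checked with a quick sketch (limit values taken from the card above; current Azure limits may differ):

```python
from math import ceil

# Limits assumed from the card above (check current Azure documentation):
VMS = 10_000
VHDS_PER_STORAGE_ACCOUNT = 40      # (20,000 req/s) / (500 req/s per VM disk)
STORAGE_ACCOUNTS_PER_SUB = 100
VMS_PER_CLOUD_SERVICE = 50
CLOUD_SERVICES_PER_SUB = 200

blobs = VMS                                                  # one OS VHD blob per VM
storage_accounts = ceil(VMS / VHDS_PER_STORAGE_ACCOUNT)      # 250
subs_for_storage = ceil(storage_accounts / STORAGE_ACCOUNTS_PER_SUB)  # 3
cloud_services = ceil(VMS / VMS_PER_CLOUD_SERVICE)           # 200
subs_for_compute = ceil(cloud_services / CLOUD_SERVICES_PER_SUB)      # 1
print(blobs, storage_accounts, max(subs_for_storage, subs_for_compute))
# 10000 250 3 -- storage accounts, not compute, drive the subscription count
```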
What might be a better Azure service to use if these VM's are used for processing data or computations instead of user interaction?
Azure Batch, because it can be done in 1 subscription with 10,000 VM's.
For the latest limits, visit: http://azure.microsoft.com/en-us/documentation/articles/azure-subscription-service-limits/
How do limitations on create/add/update of services protect resources in Azure?
In a given subscription, only 120 create/add operations can be done in a 5 minute period to protect resources.
Each Cloud Service update takes about 3 minutes to complete which limits how fast changes can be made to a cloud service.
These limits protect the underlying infrastructure from having all resources consumed in adding/updating services.
How can you deploy hundreds of VM's to stay within the 120 create/add/updates per 5 minutes?
Deploy VM's in blocks and spread disks across storage accounts.
When deploying fewer than 500 VM's, use a block size of (number of VM's)/5. This gives the maximum distribution across UD's. Deploy one VM in each cloud service at a time. For example, with 500 VM's, deploy 100 cloud services with 5 UD's used per cloud service.
When over 500, use a block size of 100. Avoid throttling by waiting 5 minutes for every block of 100 to complete. This will stay under the 120 create/add operation max per Subscription.
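The pacing strategy above can be sketched as follows. `create_vm` is a hypothetical callable that issues one VM create request; the block/wait structure is what keeps the subscription under the throttle.

```python
import time

def deploy_in_blocks(total_vms, create_vm, block_size=100, window_secs=300):
    """Pace VM create operations to stay under the per-subscription
    throttle of 120 create/add operations per 5-minute window.
    `create_vm` is a hypothetical callable issuing one create request."""
    deployed = 0
    while deployed < total_vms:
        block = min(block_size, total_vms - deployed)
        for i in range(block):
            create_vm(deployed + i)
        deployed += block
        if deployed < total_vms:
            time.sleep(window_secs)  # wait out the throttling window
    return deployed
```

With `block_size=100` and a 5-minute wait, at most 100 create operations land in any 5-minute window, safely under the 120 limit.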
How many different scaling units can be used on a given Cloud Service?
A scale unit is A0 or A1 or A2...A9. You can't have 2 different scale units (A0 versus A1) on the same Cloud Service. A cloud service can only have 1 size (a.k.a scaling unit).
If first VM is within A0-A4, then range will be restricted to A0-A4.
If first VM is A7, then range will be A0-A7.
If first VM is A8 or A9, then range will be A8-A9.
If first VM is D1-D14, then range will be the entire A0-A7 and D1-D14.
To get your cloud service into a different range, you need to delete the VM but save the attached disk. Then recreate the VM on whatever size is needed using saved attached disk.
A single affinity group is also bound to one scaling unit but affinity groups are used less since VNets accomplish a similar grouping.
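The size-range rules above can be sketched as a lookup. This is a rough illustration of the rules as stated on this card (generalizing the A7 example to A5-A7 is an assumption), not an official API.

```python
def allowed_sizes(first_vm_size):
    """Size range a cloud service is restricted to, keyed off the
    first VM deployed into it (rules as described on this card)."""
    a0_a4 = [f"A{i}" for i in range(5)]
    a5_a7 = [f"A{i}" for i in range(5, 8)]
    a8_a9 = ["A8", "A9"]
    d_series = [f"D{i}" for i in range(1, 15)]
    if first_vm_size in a0_a4:
        return a0_a4                         # restricted to A0-A4
    if first_vm_size in a5_a7:
        return a0_a4 + a5_a7                 # A0-A7
    if first_vm_size in a8_a9:
        return a8_a9                         # A8-A9 only
    if first_vm_size in d_series:
        return a0_a4 + a5_a7 + d_series      # entire A0-A7 and D1-D14
    raise ValueError(f"unknown size: {first_vm_size}")
```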
How are the disks used for web and worker role instances?
Web roles and worker roles require more temporary disk space than Azure Virtual Machines because of system requirements. The system files reserve 4 GB of space for the Windows page file, and 2 GB of space for the Windows dump file.
The local resource disk contains Azure logs and configuration files, Azure Diagnostics (which includes your IIS logs), and any local storage resources you define.
The OS disk contains the Windows guest OS and includes the Program Files folder (including installations done via startup tasks unless you specify another disk), registry changes, the System32 folder, and the .NET framework.
The apps (application) disk is where your .cspkg is extracted and includes your website, binaries, role host process, startup tasks, web.config, and so on.
Are there resources available to assist in communicating architecture diagrams and patterns?
Yes, check out the link below for sample diagrams, images, and patterns that can be reused for Azure.
What supported Windows Server Roles and technologies can be run on a VM?
Network Policy and Access Services
Print and Document Services
Remote Access (Web App Proxy)
Remote Desktop Services*
Web Server (IIS)
Windows Server Update Services
What Partner technologies can be run on VM's?
What is an availability group/set?
An availability set is a group of virtual machines that are deployed across fault domains and update domains. An availability set makes sure that your application is not affected by single points of failure, like the network switch or the power unit of a rack of servers.
During the creation of a VM, how do you configure it to be part of an Availability Set with other VM's?
When defining a VM, there is a field called Availability Set Name, which needs to be the same on all VM's in the set.
The VM's within an Availability Set must all be in the same cloud service to be effective. And, there must be 2 or more VM's in the availability set.
The maximum number of VM's in a cloud service is 50 so that is also the maximum number of VM's in an availability set.
A VM that is not in an availability set will get a special notification when it is going down for whatever reason. So, if you only have a single VM, it's better to leave it outside of an availability set to enable this special notification.
Name 2 places in the Portal where you can see the VM's that exist in a cloud service.
VM Listing
VM's in the same cloud service will have the same DNS Name shown in the listing for VM's.
Clicking the Configure tab for a VM will show the VM's that are in the same Availability Set.
Cloud Service Listing
The Cloud Service listing will show the name of the cloud service that the VM's are part of. Click on the cloud service and then Instance tab to view the same Instances/VM's that appear when looking at the VM Listing.
The list of Instances will then show the Update Domains and Fault Domains across the Instances.
What does Update Domains (UD) prevent when you have 2 or more instances in the same availability set?
Prevents all the instances from being rebooted at the same time.
For a given Availability Set, how many non-user-configurable UDs are assigned to indicate groups of virtual machines and underlying physical hardware that can be rebooted at the same time?
Five (numbers go from 0-4)
When more than five virtual machines are configured within a single Availability Set, the sixth virtual machine will be placed into the same UD as the first virtual machine, the seventh in the same UD as the second virtual machine, and so on. The order of UDs being rebooted may not proceed sequentially during planned maintenance, but only one UD will be rebooted at a time.
When two VM's are in different UDs (0 versus 2), they are guaranteed to be on different hardware. But they may or may not be on the same hardware when in the same UD.
When two VM's are in different FDs (0 versus 1), they are guaranteed to be on different racks. But they may or may not be on a different rack when in the same FD.
If you have a web tier and db tier, should all instances for both be placed into a single availability set, their own availability set, or split across 2 availability sets with some in each?
Their own availability set.
If the virtual machines in your availability set are all nearly identical and serve the same purpose for your application, we recommend that you configure an Availability Set for each tier of your application. If you place two different tiers in the same Availability Set, all virtual machines in the same application tier could be rebooted at once. By configuring at least two virtual machines in an Availability Set for each tier, you guarantee that at least one virtual machine in each tier will be available.
What Azure service can be combined with an Availability Set to get the most application resiliency?
Azure Load Balancer
The Azure Load Balancer distributes traffic between multiple virtual machines. For our Standard tier virtual machines, the Azure Load Balancer is included. Note that not all Virtual Machine tiers include the Azure Load Balancer.
What do Fault Domains (FD) define?
FDs define the group of virtual machines that share a common power source and network switch. Fault domains are usually accomplished by using hardware that exists in different racks.
By default, the virtual machines configured within your Availability Set are separated across how many FDs?
Two.
The portal shows these as 0 and 1, indicating there are currently 2 fault domains available.
While placing your virtual machines into an Availability Set does not protect your application from operating system nor application-specific failures, what failures does it limit the impact of?
physical hardware failures
During the creation of a VM, how do you configure the VM to be used for a specific purpose such that it becomes an endpoint for website using http/https?
The Endpoints section allows you to choose which endpoints to define. This can be done during creation, or after creation via the Endpoints menu option.
Here are some of the options:
For each of the above endpoint types, the default port can be changed if desired.
During the creation of a VM, how do you configure it to be load balanced with additional VM's?
Set Cloud Service=Create a New Cloud Service
Set Cloud Service DNS Name to something more generalized (like LBWEBSITE) because it will be the DNS name used from the outside that will load balance across the two servers.
Then, select the type of EndPoint and complete the creation of the VM.
To complete the load balancing, click Configure menu for newly created VM. It takes you through the process of adding the EndPoints again and at the end has an option called Create a Load Balanced Set that allows entry of the Load Balanced Set Name.
When creating the 2nd VM, follow the same steps but choose "Add an Endpoint to an Existing Load-Balanced Set" when setting up the Endpoint.
What are probes when setting up load balancing for a VM?
A probe is used to determine if VMs have failed (software or hardware). If a failure is detected, failover to healthy VMs will occur.
When configuring VM load balancing, select a probe protocol and port. Select the probe interval and number of probes you'd like sent before the endpoint is considered unresponsive.
The load balancing service offers the capability to probe for health of the various server instances and to take unhealthy server instances out of rotation. What are three types of probes supported?
Guest Agent probe (on PaaS VMs) - load balancing service queries the Guest Agent in the VM to learn about the status of the service.
HTTP custom probes - load balancing service relies on fetching a specified URL to determine the health of an instance.
TCP custom probes - it relies on successful TCP session establishment to a defined probe port.
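A TCP custom probe boils down to attempting a TCP session on the probe port. A minimal sketch of that idea (using a throwaway local listener in place of a real VM endpoint; this illustrates the mechanism, not Azure's internal implementation):

```python
import socket
import socketserver
import threading

def tcp_probe(host, port, timeout=2.0):
    """A TCP custom probe: the instance is healthy if a TCP session
    can be established on the probe port before the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a throwaway local listener standing in for a VM endpoint.
server = socketserver.TCPServer(("127.0.0.1", 0), socketserver.BaseRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
print(tcp_probe("127.0.0.1", server.server_address[1]))  # True: healthy
server.shutdown()
server.server_close()
print(tcp_probe("127.0.0.1", 1))  # False: nothing listening -> unhealthy
```

In the real load balancer, repeated failed probes (per the interval and count you configure) take the instance out of rotation.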
How does the load balancer work?
Load balancing services in Microsoft Azure work with all the tenant types (IaaS or PaaS) and all OS flavors (Windows or any Linux based OS supported).
PaaS tenants are configured via the service model. IaaS tenants are configured either via the Management Portal or via PowerShell.
Azure Load Balancer is a Layer-4, hash-based load balancer. It distributes load among a set of available servers (virtual machines) by computing a hash function on the traffic received on a given input endpoint. The hash function is computed such that all the packets from the same connection (TCP or UDP) end up on the same server. The Azure Load Balancer uses a 5-tuple (source IP, source port, destination IP, destination port, protocol type) to calculate the hash that is used to map traffic to the available servers.
The hash function is chosen such that the distribution of connections to servers is fairly random. However, depending on traffic patterns, it is possible for different connections to be mapped to the same server. (Note that the distribution of connections to servers is NOT round robin, nor is there any queuing of requests, as has been mistakenly stated in some articles and blogs.) The basic premise of the hash function is that, given many requests coming from many different clients, you will get a good distribution of requests across the servers.
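The 5-tuple mapping can be sketched as follows. The hash function here is illustrative only (Azure's actual function is internal); what matters is that the mapping is deterministic per connection, not round robin.

```python
import hashlib

def pick_backend(src_ip, src_port, dst_ip, dst_port, protocol, backends):
    """Map a connection to a backend by hashing its 5-tuple, so every
    packet of one TCP/UDP connection lands on the same server.
    (Illustrative hash; not Azure's actual internal function.)"""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{protocol}".encode()
    digest = hashlib.sha256(key).digest()
    return backends[int.from_bytes(digest[:8], "big") % len(backends)]

backends = ["vm-0", "vm-1", "vm-2"]
# The same connection always maps to the same VM:
a = pick_backend("203.0.113.7", 50123, "191.0.0.1", 80, "tcp", backends)
b = pick_backend("203.0.113.7", 50123, "191.0.0.1", 80, "tcp", backends)
print(a == b)  # True -- deterministic per connection, not round robin
```

Varying any element of the tuple (e.g. the source port of a new connection) can land on a different server, which is how the load spreads out.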
Does Azure Load Balancer allow the swapping of the VIP of two tenants, allowing the move of a tenant that is in "stage" to "production" and vice versa?
Yes.
The VIP Swap operation allows the client to keep using the same VIP to talk to the service while a new version of the service is deployed. The new version of the service can be deployed and tested in a staging environment without interfering with production traffic. Once the new version passes any needed tests, it can be promoted to production by swapping with the existing production service. Existing connections to the "old" production continue unaltered. New connections are directed to the "new" production environment.
Is the integrated health, monitoring, load-balancing, and auto-scaling features available for PaaS Cloud Services (web role and worker role)?
Yes, these features are all included as part of the PaaS.
Load balancing is also included in Azure Basic/Standard Websites.
Note: In IaaS, you have to purchase the Standard tier to get these features since Basic does not include these features.
What are the 3 pieces to IaaS?
How can VM's and Web/Worker Roles be integrated to work with each other?
Create a VNet that includes the VM's and Web/Worker Roles that need to communicate.
What are the different types of IP addresses that can be used on Virtual Machines?
DIP - If you deploy a VM or PaaS Cloud Service to a virtual network, the VM and each PaaS instance always receives an internal IP address (DIP) from a pool of internal IP addresses that you specify. VMs/PaaS instance communicate within the virtual network by using DIPs.
NOTE: Although Azure assigns the DIP, you can request a static DIP for your VM if you deploy the VM using PowerShell. See Configure a Static Internal IP Address for a VM.
VIP - Your VM is also associated with a VIP, although a VIP is never assigned to the VM directly. A VIP is a public IP address that can be assigned to your cloud service. It is not assigned directly to your virtual machine NIC. The VIP stays with the cloud service it is assigned to until all the virtual machines in that cloud service are all Stop (Deallocated) or deleted. At that point, it is released. You can, optionally, reserve a VIP for your cloud service. See Reserved IP Addresses.
Reserved IP (VIP) - You can also reserve static public IP addresses for your cloud service. Reserved IP allows you to reserve a public Virtual IP address (VIP) in Azure and then assign it to your cloud service. The Reserved IP/VIP address is sticky, meaning once it's associated with the cloud service, it won't change unless you decide to disassociate it. And in a Virtual Machine scenario, the Reserved IP address will remain associated with your cloud service even when all the VMs in the cloud service are stop/deallocated. Reserved IP can only be used for VM cloud services and Cloud Service Web/Worker Roles. You must reserve the IP address first, before deploying.
PIP - Your VM can, optionally, also receive an instance-level public IP address (PIP). The PIP is directly associated with the VM, rather than the cloud service, making the virtual machine directly addressable. You can assign one PIP for each VM, and use up to 5 PIPs per subscription. PIP is currently in Preview. See Instance-level Public IP Addresses.
What are some reasons for using the Instance-level Public IP Address?
REASONS TO USE Instance-level Public IP Addresses:
1) If you want to be able to connect to your VM or role instance by an IP address assigned directly to it, rather than using the cloud service VIP:<portnumber>, request a PIP for your VM or your role instance.
2) Passive FTP - By having a PIP on your VM, you can receive traffic on just about any port, you will not have to open up an endpoint to receive traffic. This enables scenarios like passive FTP where the ports are chosen dynamically.
3) Outbound IP - Outbound traffic originating from the VM goes out with PIP as the source and this uniquely identifies the VM to external entities.
How many NIC's (Network Interface Card) can a VM have?
Multiple (the maximum depends on the VM size).
Multiple NIC's enable virtual appliances through Azure Partners (i.e. Barracuda NG Firewall, Citrix Netscaler, etc.).
The MAC and IP addresses persist for the lifetime of the VM.
Can be used to separate front-end and back-end traffic.
What service enables you to create private connections between Azure datacenters and infrastructure that's on your premises or in a colocation environment? These connections do not go over the public Internet, and offer more reliability, faster speeds, lower latencies and higher security than typical connections over the Internet.
Networking: ExpressRoute
What are some reasons for using the Reserved VIP?
REASONS TO USE Reserved IP/VIP Addresses
1) You want to ensure that the VIP is reserved in your subscription until you delete it. You can choose to create a cloud service with the Reserved IP and when the cloud service is deleted, the VIP will remain in your subscription.
2) You want your VIP to stay with your cloud service even when all VM's are stopped/deallocated.
3) You want to ensure that outbound traffic from Azure uses a predictable IP address. You may have your firewall configured to allow only traffic from specific IP addresses. By reserving a VIP, you will know the source IP address and won't have to update your firewall rules due to a VIP change.
What IP address options are available for Compute: Cloud Services?
You can optionally reserve static public IP addresses for your Cloud Services deployments.
How many IP address(es) can be assigned to a cloud service with multiple roles?
One.
The multi-web application service model in Azure allows only one entry point to the web role. This means that all traffic occurs through one IP address.
Since a cloud service can only have 1 IP address assigned to it, how can multiple web roles (websites/web applications) be configured to all have traffic coming over the same port 80 or 443?
Note: This question only pertains to Cloud Services; not Azure Websites. Azure Websites allow multiple sites to be created under the same hosting plan which means multiple websites would run on the same set of resources.
Use Host Header (URL host: www.mydomain.com)
You can configure your web roles in the service definition (ServiceDefinition.csdef) to share a port by configuring a host header to direct each request to the correct location. You can also configure your web roles (websites and web applications) to listen on well-known ports on the IP address.
See below for a snippet of a service definition file that shows an example. The website is configured as the default entry location on port 80, and the web application is configured to receive requests sent to the alternate host header "mail.mysite.cloudapp.net".
<WebRole name="WebRole1">
  <ConfigurationSettings>
    <Setting name="DiagnosticsConnectionString" />
  </ConfigurationSettings>
  <Endpoints>
    <InputEndpoint name="HttpIn" protocol="http" port="80" />
    <InputEndpoint name="Https" protocol="https" port="443" certificate="SSL" />
    <InputEndpoint name="NetTcp" protocol="tcp" port="808" certificate="SSL" />
  </Endpoints>
  <LocalResources>
    <LocalStorage name="Sites" cleanOnRoleRecycle="true" sizeInMB="100" />
  </LocalResources>
  <Sites>
    <Site name="Mysite" packageDir="Sites\Mysite">
      <Bindings>
        <Binding name="http" endpointName="HttpIn" />
        <Binding name="https" endpointName="Https" />
        <Binding name="tcp" endpointName="NetTcp" />
      </Bindings>
    </Site>
    <Site name="MailSite" packageDir="MailSite">
      <Bindings>
        <Binding name="mail" endpointName="HttpIn" hostHeader="mail.mysite.cloudapp.net" />
      </Bindings>
      <VirtualDirectory name="artifacts" />
      <VirtualApplication name="storageproxy">
        <VirtualDirectory name="packages" packageDir="Sites\storageProxy\packages" />
      </VirtualApplication>
    </Site>
  </Sites>
</WebRole>
What integration feature in Websites allows customers to connect to on-premises resources or Azure-hosted virtual machines (VMs) using a VPN connection? This makes Azure Websites an ideal solution for running both internal- and external-facing applications.
Websites Virtual Network (VNET) Integration
With this integration, Websites can easily communicate with Cloud Services and VM's in Azure or on-premises.
Hybrid Connections and Virtual Network integration are compatible, such that you can mix both in the same website.
While you CANNOT place your Azure Website in an Azure VNET, the Virtual Network Integration feature grants your website access to resources running in your VNET. This includes being able to access web services or databases running on your Azure Virtual Machines. If your VNET is connected to your on-premises network with Site-to-Site VPN, then your Azure Website will be able to access on-premises systems through the Azure Websites Virtual Network feature. Azure Websites Virtual Network integration requires your Azure virtual network to have a Dynamic routing gateway and to have Point-to-Site (P2S) enabled.
-> This integration does NOT grant access to your website from the virtual network.
-> It does not allow you to mount a drive.
-> It also currently does not support enabling integration with authentication systems in your VNet.
Note: As of 1/1/2015, the integration is in Preview and some of these restrictions may be removed when it goes GA.
What provides an in-memory, distributed cache platform for Windows Server?
AppFabric Cache (Windows Server AppFabric).
What are the 3 types of Azure cache?
Azure Redis Cache: Built on the open source Redis cache. This is a dedicated service, currently in General Availability. Redis is recommended for all new development.
Managed Cache Service: Built on AppFabric Cache. This is a dedicated service, currently in General Availability.
In-Role Cache: Built on AppFabric Cache. This is a self-hosted cache, available via the Azure SDK. Cloud Services rates apply.
Why is Redis cache popular?
Unlike traditional caches which deal only with key-value pairs, Redis is popular for its highly performant data types. Redis also supports running atomic operations on these types, like appending to a string; incrementing the value in a hash; pushing to a list; computing set intersection, union and difference; or getting the member with highest ranking in a sorted set. Other features include support for transactions, pub/sub, Lua scripting, keys with a limited time-to-live, and configuration settings to make Redis behave more like a traditional cache.
Another key aspect of Redis' success is the healthy, vibrant open source ecosystem built around it, reflected in the diverse set of Redis clients available across multiple languages. This allows it to be used by nearly any workload you would build inside of Azure.
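The data-type operations listed above can be sketched with Python standard-library structures. This is a rough, server-free emulation of the semantics only, not actual Redis commands; real code would use a Redis client such as redis-py against a running cache, and all names here are illustrative:

```python
# Rough stdlib emulation of the Redis data-type semantics described above.

# STRING append and HASH increment
s = "host:"
s += "10.0.0.4"          # APPEND
hits = {"home": 3}
hits["home"] += 1        # HINCRBY

# LIST push
queue = []
queue.insert(0, "job1")  # LPUSH

# SET algebra: intersection, union, difference
a, b = {"x", "y"}, {"y", "z"}
inter, union, diff = a & b, a | b, a - b  # SINTER / SUNION / SDIFF

# SORTED SET: member with the highest score
scores = {"alice": 10, "bob": 25}
top = max(scores, key=scores.get)         # highest-ranking member
```

In Redis these operations are atomic on the server; the sketch only shows what each one computes.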
Can Azure Redis Cache units be scaled similar to the concept in Managed Cache?
No. Azure Redis Cache units are standalone at the advertised sizes, either as a single node in Basic, or as a Primary/Secondary combination in Standard.
What service enables you to create Virtual Private Networks (VPN) within Azure and securely link these with on-premises IT infrastructure?
Networking: Virtual Network (VNET)
Note: Azure Connect is an old service that no longer exists. VNets should be used instead. Don't get confused if this appears on the exam.
Are data transfers over the VPN connection charged separately?
Yes. Data transfers between 2 Virtual Networks are charged at the Inter-VNET rates. Other data transfers over the VPN connections to your on-premises sites or the Internet in general are charged separately at the regular Data Transfer rate.
Is data transfer between Virtual Networks located within the same region charged?
No. Only data transfer between two different regions will be charged.
Express Routes can be established via what 2 providers?
Exchange Provider or Network Service Provider.
The associated charges are different for these two cases. Each monthly price includes 2 ports (on 2 routers) for redundancy.
What Networking service allows you to load balance incoming traffic across multiple hosted Azure services whether they're running in the same datacenter or across different datacenters around the world?
Networking: Traffic Manager
What load-balancing policies/methods does Traffic Manager offer?
Performance - chooses closest service to end user.
Round-robin - distributes equally across services or based on a percentage across different services
Fail-over - redirected to secondary service if primary fails. NOTE: This is also included in the other policies and is accomplished through the health check of instances.
Don't confuse Traffic Manager's Round-robin method with the Azure Load Balancer, which uses a hashing algorithm rather than a round-robin technique. Only Traffic Manager uses round-robin.
Each Traffic Manager profile can use only one load balancing profile/method at a time; however, you can nest Traffic Manager profiles by having one Traffic Manager call another Traffic Manager. Think of a Global Traffic Manager (performance profile) calling Regional Traffic Managers (round-robin profile) that distributes traffic differently for a region.
Note that Azure Websites already provides failover and round-robin load balancing functionality for websites within a datacenter, regardless of the website mode. Traffic Manager allows you to specify failover and round-robin load balancing for websites in different datacenters.
What feature of Traffic Manager enables you to improve the availability of your critical applications by monitoring your hosted Azure service endpoints and providing automatic failover capabilities when a service goes down?
How can I reduce my Traffic Manager bill?
Set the TTL value in Traffic Manager to a higher value. The default is 300 seconds, and the minimum is 30 seconds.
By allowing you to configure the TTL value, Traffic Manager enables you to make the best choice of TTL based on your application's business needs.
What Networking term refers to data moving in and out of Azure data centers other than those explicitly covered by the Content Delivery Network or Express Route pricing?
Networking: Data Transfers/Bandwidth
What needs to be configured to allow and handle inbound traffic to a VM from the internet and other virtual networks?
Endpoints.
All virtual machines that you create in Azure can automatically communicate using a private network channel with other virtual machines in the same cloud service or virtual network. However, other resources on the Internet or other virtual networks require endpoints to handle the inbound network traffic to the virtual machine.
What type of ports does an endpoint have for VMs?
A public port and a private port.
Default values for the ports and protocol for these endpoints are provided when the endpoints are created through the Management Portal. For all other endpoints, you specify the ports and protocol when you create the endpoint. Resources can connect to an endpoint by using either the TCP or UDP protocol. The TCP protocol includes HTTP and HTTPS communication.
What is the private port on an Endpoint used for?
The private port is used internally by the virtual machine to listen for traffic on that endpoint.
What is the public port on an Endpoint used for?
The public port is used by the Azure load balancer to communicate with the virtual machine from external resources.
What provides load balancing between virtual machines that reside within a cloud service or a virtual network with a regional scope?
Azure Internal Load Balancing (ILB)
ILB enables the following new types of load balancing:
Within a Cloud Service
ILB handles traffic from VMs to VMs within a cloud service.
From within a VNet
ILB handles traffic from VMs in a VNet to VMs within a cloud service in the VNet.
From a cross-premises VNet
ILB handles traffic from on-premises computers to a set of VMs within a cloud service in the VNet.
Note: ILB does not support adding an on-premises server to the set of servers to which traffic is being distributed. For example, you cannot add a database server in the on-premises network to the set of servers in the database tier.
What are the 3 steps for configuring an ILB?
Configuring an ILB must be done with PowerShell.
1) Create an instance of an ILB, which gets assigned a virtual IP (VIP) address from the pool of addresses for the cloud service or virtual network. You can also specify the VIP and subnet when the virtual machines are inside a virtual network.
2) Add endpoints corresponding to the virtual machines that will receive the load-balanced traffic.
3) Configure the servers whose traffic will be load-balanced to send their traffic to the VIP of the ILB instance, or to a DNS name that resolves to the VIP of ILB instance.
What is used to define rules that help isolate and control the incoming traffic on the public port of an Endpoint?
Network Access Control List (ACL)
When you create an ACL and apply it to a virtual machine endpoint, packet filtering takes place on the host node of your VM, meaning the host hardware. Traffic from remote IP addresses is therefore filtered against matching ACL rules by the host node instead of on your VM, which prevents your VM from spending precious CPU cycles on packet filtering.
When defining the Endpoint, click the Manage ACL to define these rules.
An ACL provides the ability to selectively permit or deny traffic for a virtual machine endpoint. This packet filtering capability provides an additional layer of security. Currently, you can specify network ACLs for endpoints only.
You can't specify an ACL for a virtual network or a specific subnet contained in a virtual network. ACLs can be configured by using either PowerShell or in the Management Portal.
If your VMs are in a VNet, you'll want to configure Network Security Groups (NSGs) instead of Network ACLs. NSGs provide more granular control and are available only for VMs that are deployed in VNets.
Important: Firewall configuration is done automatically for ports associated with Remote Desktop and Secure Shell (SSH), and in most cases for Windows PowerShell Remoting; however, you have the option to leave these off during initial creation. For ports specified for all other endpoints, no configuration is done automatically to the firewall in the guest operating system. When you create an endpoint, you'll need to configure the appropriate ports in the firewall to allow the traffic you intend to route through the endpoint. In other words, by default, when an endpoint is created, all traffic to the endpoint is denied. EXCEPTION: All traffic is allowed by default when a VNet is connected to on-premises or another VNet through a VPN or Express Route.
What can be done with Network ACLs?
Selectively permit or deny incoming traffic based on remote subnet IPv4 address range to a virtual machine input endpoint.
Blacklist IP addresses
Create multiple rules per virtual machine endpoint
Specify up to 50 ACL rules per virtual machine endpoint
Use rule ordering (processed lowest to highest) to ensure the correct set of rules is applied on a given virtual machine endpoint
Specify an ACL for a specific remote subnet IPv4 address.
Network ACLs can be specified on a Load balanced set (LB Set) endpoint. If an ACL is specified for a LB Set, the Network ACL is applied to all Virtual Machines in that LB Set. For example, if a LB Set is created with "Port 80" and the LB Set contains 3 VMs, the Network ACL created on endpoint "Port 80" of one VM will automatically apply to the other VMs.
How can ACLs be added for Cloud Services in PaaS?
Modify the service configuration file to include the ACLs.
Notice below that a remoteSubnet can be specified with a CIDR range, and the order attribute controls evaluation order. The rules section works like firewall rules, where a Deny All rule should come last with the highest order value.
Here's a sample of the config section for ACLs:
<NetworkConfiguration>
  <AccessControls>
    <AccessControl name="test">
      <Rule action="permit" description="test" order="100" remoteSubnet="188.8.131.52/32" />
      <Rule action="deny" description="test" order="200" remoteSubnet="0.0.0.0/0" />
    </AccessControl>
  </AccessControls>
  <EndpointAcls>
    <EndpointAcl role="WebRole1" endPoint="Endpoint1" accessControl="test" />
  </EndpointAcls>
</NetworkConfiguration>
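The rule semantics described above (rules evaluated in ascending order, first match wins, with a deny-all rule carrying the highest order value) can be sketched in Python. This is an illustrative simulation of the host node's filtering logic, not Azure code; the rule set mirrors the sample config:

```python
import ipaddress

# Endpoint ACL rules, matching the sample config above.
rules = [
    {"action": "permit", "order": 100, "remoteSubnet": "188.8.131.52/32"},
    {"action": "deny",   "order": 200, "remoteSubnet": "0.0.0.0/0"},  # deny all
]

def evaluate(source_ip, rules):
    """Return the verdict for a packet: ascending order, first match wins."""
    ip = ipaddress.ip_address(source_ip)
    for rule in sorted(rules, key=lambda r: r["order"]):
        if ip in ipaddress.ip_network(rule["remoteSubnet"]):
            return rule["action"]
    return "deny"  # no matching rule: traffic is dropped
```

With these rules, only the single whitelisted address is permitted; the trailing 0.0.0.0/0 deny catches everything else.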
Can I define ACLs on my virtual networks?
We do not support ACLs for subnets within virtual networks. However, ACLs can be defined on input endpoints for virtual machines that have been deployed to a virtual network. Note: a virtual machine does not have to be deployed to a virtual network in order to define an ACL for the input endpoint.
If your VMs are in a VNet, what should be used instead of Network ACLs?
Network Security Groups (NSGs)
A network security group (NSG) is a top-level object associated with your subscription. NSGs provide more granular control and are available only for VMs deployed in VNets. An NSG requires a regional VNet.
An NSG contains access control rules that allow or deny traffic to VM instances. The rules of an NSG can be changed at any time, and changes are applied to all associated instances.
NSGs are not compatible with VNets that are associated with an affinity group.
I already use Endpoint ACLs on my VM endpoints, can I also use Network Security Groups?
No, you can use one or the other; not both. You can remove the endpoint ACLs from the VM and associate the VM to a Network Security Group.
I have multiple NICs in my VM, will the Network Security Group rules apply to traffic on all the NICs?
No, the Network Security Group rules apply only to traffic on the primary NIC.
Note: This will likely be added in the future.
What can an NSG be associated with? A subnet, VM, or both?
Both, as long as they reside in a VNet.
When associated with a VM, the NSG applies to all the traffic that is sent and received by the VM instance.
When applied to a subnet within your VNet, it applies to all the traffic that is sent and received by ALL the VM instances in the subnet.
A VM or subnet can be associated with only 1 NSG, but each NSG can contain up to 200 rules. You can have 100 NSGs per subscription.
How are NSGs different from ACLs?
Network ACLs work only on the public port exposed through the input endpoint.
An NSG, by contrast, works on a subnet, one VM, or multiple VM instances, and controls all inbound and outbound traffic on the VM.
How are NSG's created?
NSGs can be configured and modified by using PowerShell cmdlets and REST APIs only. You cannot configure NSGs by using the Management Portal.
Here are sample cmdlets...
Create a Network Security Group
New-AzureNetworkSecurityGroup -Name "MyVNetSG" -Location "West US" -Label "Security group for my VNet in West US"
Add or Update rules
Get-AzureNetworkSecurityGroup -Name "MyVNetSG" | Set-AzureNetworkSecurityRule -Name WEB -Type Inbound -Priority 100 -Action Allow -SourceAddressPrefix 'INTERNET' -SourcePortRange '*' -DestinationAddressPrefix '*' -DestinationPortRange '*' -Protocol TCP
Delete a rule from an NSG
Get-AzureNetworkSecurityGroup -Name "MyVNetSG" | Remove-AzureNetworkSecurityRule -Name WEB
Associate an NSG to a VM
Get-AzureVM -ServiceName "MyWebsite" -Name "Instance1" | Set-AzureNetworkSecurityGroupConfig -NetworkSecurityGroupName "MyVNetSG" | Update-AzureVM
Remove an NSG from a VM
Get-AzureVM -ServiceName "MyWebsite" -Name "Instance1" | Remove-AzureNetworkSecurityGroupConfig -NetworkSecurityGroupName "MyVNetSG" | Update-AzureVM
Associate an NSG to a subnet
Get-AzureNetworkSecurityGroup -Name "MyVNetSG" | Set-AzureNetworkSecurityGroupToSubnet -VirtualNetworkName 'VNetUSWest' -SubnetName 'FrontEndSubnet'
Remove an NSG from the subnet
Get-AzureNetworkSecurityGroup -Name "MyVNetSG" | Remove-AzureNetworkSecurityGroupFromSubnet -VirtualNetworkName 'VNetUSWest' -SubnetName 'FrontEndSubnet'
Delete an NSG
Remove-AzureNetworkSecurityGroup -Name "MyVNetSG"
Get the details of an NSG along with its rules
Get-AzureNetworkSecurityGroup -Name "MyVNetSG" -Detailed
How would you architect a VNet to have a DMZ, trusted zone, and restricted zone?
Create Azure VNET:
Address Space: 10.0.0.0/9
with these Subnets:
Azure-DMZ - 10.0.0.0/11
Azure-App - 10.32.0.0/11
Azure-DB - 10.64.0.0/10
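A quick check with Python's ipaddress module can validate this layout: the three zone subnets must be disjoint and must all fit inside the VNet address space, and 10.0.0.0/9 is the smallest block that contains them. This is an illustrative sketch of the arithmetic, not Azure tooling:

```python
import ipaddress
from itertools import combinations

# VNet address space and the three zone subnets from the layout above.
space = ipaddress.ip_network("10.0.0.0/9")
subnets = {
    "Azure-DMZ": ipaddress.ip_network("10.0.0.0/11"),
    "Azure-App": ipaddress.ip_network("10.32.0.0/11"),
    "Azure-DB":  ipaddress.ip_network("10.64.0.0/10"),
}

# Every subnet must be fully contained in the VNet address space...
contained = all(net.subnet_of(space) for net in subnets.values())

# ...and no two subnets may overlap.
disjoint = not any(
    a.overlaps(b) for a, b in combinations(subnets.values(), 2)
)
```

Running checks like this before submitting a network configuration catches the overlap and containment errors Azure would otherwise reject.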
Are NSGs applied at the subnet or VM level?
Either one: they can be applied at the subnet level, the VM level, or both.
How it works ...
A network security group has a Name, is associated with a Region (one of the supported Azure locations), and has a descriptive label. It contains two types of rules, Inbound and Outbound. Inbound rules are filtered first at the subnet and then at the VM; Outbound rules are filtered first at the VM and then at the subnet. VM-level rules are applied at the server machine where the VM is located. An incoming or outgoing packet has to match an 'Allow' rule to be permitted; otherwise it is dropped.
Rules are processed in the order of priority. For example, a rule with a lower priority number (e.g. 100) is processed before rules with a higher priority numbers (e.g. 200). Once a match is found, no more rules are processed.
A rule specifies the following:
Name: A unique identifier for the rule
Priority: <You can specify an integer between 100 and 4096>
Source IP Address: CIDR of source IP range
Source Port Range: <integer or range between 0 and 65000>
Destination IP Range: CIDR of the destination IP Range
Destination Port Range: <integer or range between 0 and 65000>
Protocol: <TCP, UDP or '*' is allowed>
What are the default rules for NSGs?
Inbound defaults:
- VNet traffic allowed.
- Azure Load Balancer health check probe of the VM allowed.
- All other inbound traffic, including from the internet, denied.
Outbound defaults:
- VNet traffic allowed.
- Internet traffic allowed on all ports.
- All other outbound traffic denied.
NOTE: The default rules cannot be deleted, but because they are assigned the lowest priority, they can be overridden by rules that you create. The default rules describe the default settings recommended by the platform. As the default rules illustrate, traffic originating and ending in a VNet is allowed in both the Inbound and Outbound directions.
NOTE: The platform will not insert any implicit rule to allow traffic to a particular port. In this case, if you create an endpoint in the VM, you also have to create a rule to allow traffic from the Internet. If you don't do this, the VIP:<Port> will not be accessible from outside.
Can one NSG be applied to the VM and a different NSG be applied to the subnet?
Yes. In this case, the VM gets two layers of protection.
For Inbound traffic, the packet goes through the access rules specified for the subnet followed by rules for the VM.
For Outbound traffic, the packet goes through the rules specified for the VM first before going through the rules specified for the subnet.
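The two-layer inbound path (subnet NSG first, then VM NSG, each evaluated in priority order with first match winning) can be sketched in Python. The rule sets below are illustrative, matching rules on destination port only for brevity:

```python
# Each NSG: rules evaluated by ascending priority number, first match wins.
subnet_nsg = [
    {"priority": 100,  "action": "Allow", "port": 443},
    {"priority": 4096, "action": "Deny",  "port": "*"},   # catch-all deny
]
vm_nsg = [
    {"priority": 200,  "action": "Allow", "port": 443},
    {"priority": 4096, "action": "Deny",  "port": "*"},
]

def nsg_allows(nsg, port):
    """Lower priority number is processed first; first match decides."""
    for rule in sorted(nsg, key=lambda r: r["priority"]):
        if rule["port"] in ("*", port):
            return rule["action"] == "Allow"
    return False  # unmatched traffic is dropped

def inbound_allowed(port):
    # Inbound: the subnet NSG filters first, then the VM NSG.
    return nsg_allows(subnet_nsg, port) and nsg_allows(vm_nsg, port)
```

Here inbound port 443 passes both layers, while port 80 is stopped by the catch-all deny at the subnet layer before it ever reaches the VM's rules.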
What are the 3 basic network configuration categories and when is a VNet needed?
Cloud-only, no VNet - used when VMs and cloud services do not need to communicate directly with each other.
Cloud-only VNet - used when VMs and cloud services need to communicate directly with each other.
Cross-premises VNet (which includes hybrid solutions) - a VNet is needed when extending the corporate network to access VMs or cloud services. This allows VNet-to-VNet connectivity, or you can configure either the VPN or Express Route option (just one, not both).
Here are some other reasons a VNet might be needed:
If you want to connect to your VMs and cloud services by hostname or SRV records, rather than using the IP address and/or port number, you'll need name resolution. When you deploy VMs and cloud services to a virtual network you can use Azure-provided name resolution or your own DNS solution, depending on your name resolution requirements. For information about name resolution options, see Name Resolution (DNS).
Enhanced security and isolation
Since each virtual network is run as an overlay, only virtual machines and services that are part of the same network can access each other. Services outside the virtual network have no way to identify or connect to services hosted within virtual networks. This provides an added layer of isolation to your services.
Extended trust and security boundary
The virtual network extends the trust boundary from a single service to the virtual network boundary. You can create several cloud services and virtual machines within a single virtual network and have them communicate with each other without having to go through the internet. You can also setup services that use a common backend database tier or use a shared management service.
Extend your on-premises network to the cloud
You can join VMs in Azure to your domain running on-premises. You can access and leverage all on-premises investments around monitoring and identity for your services hosted in Azure.
Use persistent private IP addresses
Virtual machines within a VNet will have a stable private IP address. We assign an IP address from the address range you specify and offer an infinite DHCP lease on it. You can also choose to configure your virtual machine with a specific private IP address from the address range when you create it. This ensures that your virtual machine retains its private IP address even when stopped and deallocated. See Configure a static internal IP address for a VM.
Name resolution is used when you want to refer to VMs and role instances by hostname or FQDN directly, instead of by IP address and port number. Before deploying role instances or virtual machines, you must consider how you want name resolution to be handled. What 2 DNS options are available?
- Internal name resolution provided by Azure
- Your own DNS server (one not maintained by Azure)
Not all configuration options are available for every deployment type. Carefully consider your deployment scenario before making this choice. Here are some scenarios...
When to use Azure-provided name resolution (internal):
- Name resolution between role instances located in the same cloud service
- Name resolution between VMs located in the same cloud service
- Name resolution between computers on the internet and your public endpoints
- Name resolution within a VNet. NOTE: For resolution using FQDN, you can use Azure-provided name resolution for the first 100 cloud services in the virtual network
When to use your own DNS server:
- Cross-premises name resolution, because it uses a VNet
- Using name resolution to direct traffic between datacenters
What 2 things make up a cloud service in Azure?
code and configuration of an application
When you create an application and run it in Azure, the code and configuration together are called an Azure cloud service (known as a hosted service in earlier Azure releases).
By creating a cloud service, you can deploy a multi-tier web application in Azure, defining multiple roles to distribute processing and allow flexible scaling of your application. A cloud service consists of one or more web roles and/or worker roles, each with its own application files and configuration.
Can VMs and role instances be located in the same cloud service?
No. This is mixing IaaS and PaaS. If the desire is to have them communicating locally, a VNet should be used.
What are the 3 options for cross-premises connectivity?
Site-to-Site - VPN connection over IPsec (IKE v1 and IKE v2)
Allows you to create a secure connection between your on-premises site and your virtual network. To create a site-to-site connection, a VPN device that is located on your on-premises network is configured to create a secure connection with the Azure Virtual Network Gateway. Once the connection is created, resources on your local network and resources located in your virtual network can communicate directly and securely. Site-to-site connections do not require you to establish a separate connection for each client computer on your local network to access resources in the virtual network.
Use a site-to-site connection when you want to create a branch office solution, OR when you want a connection between your on-premises location and your virtual network that's available without requiring additional client-side configuration.
NOTE: You must have an externally facing IPv4 address and a VPN device, or Windows Server 2012 Routing and Remote Access (RRAS), to configure a site-to-site VPN connection.
Point-to-Site - VPN connection over SSTP (Secure Sockets Tunneling Protocol)
Also allows you to create a secure connection to your virtual network. In a point-to-site configuration, the connection is configured individually on each client computer that you want to connect to the virtual network. Point-to-site connections do not require a VPN device. They work by using a VPN client that you install on each client computer. The VPN is established by manually starting the connection from the on-premises client computer. You can also configure the VPN client to automatically restart.
NOTE: Point-to-site and site-to-site configurations can exist concurrently.
Express Route
Lets you create private connections between Azure datacenters and infrastructure that's on your premises or in a co-location environment. Express Route connections do not go over the public Internet, and offer more reliability, faster speeds, lower latencies, and higher security than typical connections over the Internet. In some cases, using Express Route connections to transfer data between on-premises and Azure can also yield significant cost benefits. With Express Route, you can establish connections to Azure at an Express Route location (Exchange Provider facility) or directly connect to Azure from your existing MPLS/WAN network provided by a network service provider.
Rank the cross-premises connectivity options by speed (slowest to fastest).
Point-to-site: slowest because it goes through 2 tunnels and typically connects a PC from anywhere.
Site-to-site: faster than point-to-site because it connects 2 infrastructures.
Express Route: Fastest option and does not go over internet.
What is the "tunneling" called that lets you redirect or "force" all Internet-bound traffic back to your on-premises location via a S2S VPN tunnel for inspection and auditing?
Forced tunneling. Without forced tunneling, Internet-bound traffic from your VMs in Azure will always traverse from the Azure network infrastructure directly out to the Internet, with no option to inspect or audit the traffic. Forced tunneling can prevent information disclosure.
What cross-premises connectivity option allows you to have Azure on your own MPLS network just like all other corporate resources?
Express Route (via a Network Service Provider).
Which cross-premises connectivity option(s) can be used to access public Azure services not within the VNet like SQL Database, Storage, Websites, etc.?
Express Route is the only option that allows this. Customers without this would have to access the Azure public services through the internet endpoint.
Websites have an integration that allows them to access resources within a VNet, but resources in a VNet CANNOT access the website. This means that on-premises resources also CANNOT access websites through a P2S or S2S VPN. Thus, websites are considered a public Azure service.
What can IT administrators control with a Virtual Network?
IT administrators can control network topology, including configuration of DNS and IP address ranges.
What Azure services can be used with a Virtual Network?
Virtual Network can contain Cloud Services (PaaS) and virtual machines ONLY. Virtual Network cannot contain any other services at this time.
Websites have an integration that allows them to access resources within a VNet, but websites cannot be placed within a VNet, and resources in a VNet cannot access the website.
What tools can be used to create a VNET?
The network configuration file (netcfg)
What IP ranges can be used with a VNET?
You can use public IP address ranges and any IP address range defined in RFC 1918 (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16).
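Python's ipaddress module can check whether a candidate range falls inside the RFC 1918 private blocks, which is handy when planning a VNet address space. An illustrative sketch:

```python
import ipaddress

# The three RFC 1918 private address blocks.
rfc1918 = [ipaddress.ip_network(n)
           for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_rfc1918(cidr):
    """True if the given CIDR range lies entirely within an RFC 1918 block."""
    net = ipaddress.ip_network(cidr)
    return any(net.subnet_of(block) for block in rfc1918)
```

For example, 10.1.0.0/16 and 172.20.0.0/24 are private, while 8.8.8.0/24 is not.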
Is there a limit to the number of subnets that can be in a VNET?
There is no limit on the number of subnets you use within a virtual network. All the subnets must be fully contained in the virtual network address space and should not overlap with one another.
Can custom routing be used on VNETs?
No. We do not support custom routing policies with virtual networks.
Does VNET support multicast and broadcast?
No. We do not support multicast or broadcast.
What protocols are supported on VNETs?
Can ping and tracert be used on VNET's?
No. It's blocked.
Do VNETs support IPv6?
No. IPv6 is not currently supported in virtual networks.
Can a VNET be connected to another VNET in Azure?
Yes. You can create VNet to VNet communication by using REST APIs or Windows PowerShell
Can a single VNET connect to multiple sites?
Yes. You can connect to multiple sites by using Windows PowerShell and the Azure REST APIs.
Can any software VPN client for point-to-site that supports SSTP be used?
No. Support is limited only to the Windows operating system versions: Windows 7/8 (64-bit only) and Windows Server 2008/2012
How many Client VPN endpoints can exist for Point-to-Site connectivity?
Up to 128 VPN clients
Can proxies and firewalls be traversed using the point-to-site capability?
Yes. We use SSTP (Secure Sockets Tunneling Protocol) to tunnel through firewalls. This tunnel appears as an HTTPS connection.
How much throughput can I expect through site-to-site or point-to-site connections?
It's difficult to determine the exact throughput through the VPN tunnels. IPsec and SSTP are crypto-heavy VPN protocols. Throughput is also limited by the latency and bandwidth between your premises and the internet.
How does my VPN tunnel for Point-to-Site and Site-to-Site get authenticated?
Azure generates a pre-shared key (PSK) when the VPN gateway is created. You must use the PSK to authenticate. The PSK can be regenerated at any time, and its length can be changed as needed.
By default, Azure generates different pre-shared keys for different VPN connections. You can use the newly introduced Set VPN Gateway Key REST API or PowerShell cmdlet to set the key value you prefer so it is the same across all your VPN connections. The key MUST be an alphanumeric string between 1 and 128 characters long.
Note: This is the only authentication option available as of 12/12/2014.
How do I specify which traffic goes through the VPN gateway?
Add each range that you want sent through the gateway for your virtual network on the Networks page under Local Networks.
When setting up a Site-to-Site VNET VPN in Azure, you'll be asked to enter which fields for the local network?
Name - The name you want to call your local (on-premises) network site.
VPN Device IP Address - The public-facing IPv4 address of your on-premises VPN device that you'll use to connect to Azure. The VPN device cannot be located behind a NAT.
Address Space - including Starting IP and CIDR (Address Count). This is where you specify the address range(s) that you want sent through the virtual network gateway to your local on-premises location. If a destination IP address falls within the ranges that you specify here, it will be routed through the virtual network gateway.
When setting up a Site-to-Site OR Point-to-Site VNET VPN in Azure, you'll be asked to enter which fields for the virtual network?
The following address spaces supply the dynamic IP addresses (DIPs) that will be assigned to the VMs and other role instances that you deploy to this virtual network. There are quite a few rules regarding virtual network address space, so you will want to see the Virtual Network Address Spaces page for more information. It's especially important to select a range that does not overlap with any of the ranges used on your on-premises network.
Address Space - including Starting IP and Address Count. Verify that the address spaces you specify don't overlap any of the address spaces on your on-premises network.
Subnets - including Starting IP and Address Count. Additional subnets are not required, but you may want to create a separate subnet for VMs that will have static DIPs, or you might want your VMs in a subnet that is separate from your other role instances. The smallest supported subnet is /29.
Add gateway subnet - The gateway subnet is used only for the virtual network gateway and is required for this configuration. After the VNet is created, click Create Gateway at the bottom of the page and select either Static Routing or Dynamic Routing to set up the gateway.
When setting up a Point-to-Site VNET VPN in Azure, you'll be asked to enter which fields for client connectivity?
Client Address Space - Include the Starting IP and CIDR (Address Count) for the clients connecting via VPN. Specify the IP address range from which your VPN clients will receive an IP address when connected. There are a few rules regarding the address ranges that you can specify; it's very important to verify that the range doesn't overlap with any of the ranges on your on-premises network.
Are static or dynamic routing VPNs also referred to as policy-based VPNs?
Static routing VPNs are policy-based VPNs. Policy-based VPNs encrypt and route packets through an interface based on a customer-defined policy, usually defined as an access list. Static routing VPNs require a static routing VPN gateway.
Note - Multi-Site VPN, VNet to VNet, and Point-to-Site are not supported with static routing VPN gateways.
Are static or dynamic routing VPNs also referred to as route-based VPNs?
Dynamic routing VPNs are route-based VPNs. Route-based VPNs depend on a tunnel interface specifically created for forwarding packets; any packet arriving on the tunnel interface is forwarded through the VPN connection. Dynamic routing VPNs require a dynamic routing VPN gateway.
Note - A dynamic routing VPN gateway is required for Multi-Site VPN, VNet to VNet, and Point-to-Site.
Do the high performance VPN gateways use policy-based or route-based VPN's?
Route-based (dynamic routing VPN)
High performance VPN gateways offer the same features as dynamic routing VPN gateways but increase performance and support 30 site-to-site VPNs instead of just 10.
Does a point-to-site connection require a virtual network with a dynamic or static routing gateway?
A point-to-site connection requires a virtual network with a dynamic routing gateway.
For Point-to-Site, what other steps are required during setup within Azure?
Create dynamic routing gateway
Create certificates for authentication of client VPN connections.
Which type of gateways can support multi-site and VNet-to-VNet connectivity?
Only the Dynamic Routing VPNs.
If you are trying to connect to another VNet, both VNets require dynamic routing gateways.
Note: This is consistent with Point-to-Site; a dynamic routing gateway is needed whenever multiple connection points come into the VNet.
Is the VNet-to-VNet traffic secure?
Yes, it is protected by IPsec/IKE encryption.
How many on-premises sites and virtual networks can one virtual network connect to?
Max. 10 combined. For example, one Azure virtual network can connect to 6 on-premises sites and 4 virtual networks.
Can VNets be connected across subscriptions?
Yes. VNets can be connected across subscriptions using VNet-to-VNet connectivity, although a single VNet cannot span subscriptions.
Can a virtual network (VNET) SPAN regions?
No. A virtual network is limited to a single region.
Can a virtual network (VNET) SPAN subscriptions?
No. If this were possible, you would see other subscriptions' VNets listed when creating a VM and choosing a region/affinity group/VNet. VNets can be connected across subscriptions, but a single VNet cannot span subscriptions.
Can a cloud service or a load balancing endpoint span across virtual networks even though they are connected together?
No; that would mean splitting the service into pieces, which doesn't make much sense. You can, however, have two cloud services in different VNets that access each other, and you can even use Traffic Manager to load-balance traffic to them.
Can I connect virtual networks in different Azure regions?
Yes. In fact, there is no region constraint: one virtual network can connect to another virtual network in the same region or in a different Azure region.
Does Azure charge for traffic between virtual networks?
Azure charges only for traffic traversing from one Azure region to another. The traffic is charged based on the same rate as the egress data transfer charge listed in the Azure pricing page.
Can I specify DNS servers for virtual networks?
Yes. You can specify DNS server IP addresses in the virtual network definition. This will be applied as the default DNS server(s) for all virtual machines in the virtual network. You can specify up to 12 DNS servers.
Can I move VMs from one subnet to another subnet in a virtual network without re-deploying?
Yes. You can use PowerShell to do so.
Can I move my services in and out of virtual networks?
We do not support the ability to move services in and out of virtual networks. You will have to delete and re-deploy the service into virtual networks.
If I want to RDP to my virtual machine, do I connect by public VIP or internal DIP?
You can do either.
CONNECT WITH A VIP
If you have RDP enabled and you have created an endpoint, you can connect to your virtual machine by using the VIP. In that case, you would specify the VIP and the port that you want to connect to. You'll need to configure the port on your virtual machine for the traffic. Typically, you would go to the Management Portal and save the settings for the RDP connection to your computer. The settings will contain the necessary connection information.
CONNECT WITH DIP using CROSS-PREMISES VNET
If you have a virtual network with cross-premises connectivity configured, you can connect to your virtual machine by using the internal DIP. You can also connect to your virtual machine by internal DIP from another virtual machine that's located on the same virtual network. You can't RDP to your virtual machine by using the DIP if you are connecting from a location outside of your virtual network. For example, if you have a point-to-site virtual network configured and you don't establish a connection from your computer, you can't connect to the virtual machine by DIP.
If my virtual machine is in a virtual network with cross-premises connectivity, does all the traffic from my VM go through that connection?
Only traffic with a destination IP contained in the Local Network IP address ranges that you specified will go through the virtual network gateway. Traffic with a destination IP within the virtual network will stay within the virtual network. Other traffic is sent through the load balancer to the public networks. If you are troubleshooting, make sure your Local Network lists all the ranges you want to send through the gateway, verify that the Local Network address ranges do not overlap with any of the address ranges in the virtual network, and verify that the DNS server you are using resolves the name to the proper IP address.
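The destination-based routing decision described above can be sketched in Python with the standard `ipaddress` module; the Local Network ranges, VNet address space, and destination IPs here are all hypothetical:

```python
import ipaddress

# Hypothetical configuration: Local Network ranges defined for the
# cross-premises gateway, and the VNet's own address space.
local_network = [ipaddress.ip_network("10.0.0.0/16")]
vnet = ipaddress.ip_network("172.16.0.0/20")

def route_for(dest: str) -> str:
    """Mimic the destination-based routing decision for VM traffic."""
    ip = ipaddress.ip_address(dest)
    if any(ip in net for net in local_network):
        return "virtual network gateway"          # cross-premises tunnel
    if ip in vnet:
        return "stays within the virtual network"
    return "load balancer / public network"

print(route_for("10.0.1.5"))     # in a Local Network range -> gateway
print(route_for("172.16.0.10"))  # in the VNet range -> stays internal
print(route_for("8.8.8.8"))      # everything else -> public
```

If a Local Network range overlapped the VNet range, the checks above would be ambiguous, which is exactly why the overlap verification matters.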
What services can I use with Virtual Network?
Only compute services, specifically Cloud Services (web and worker roles) and virtual machines.
What is the security model for virtual networks?
Virtual networks are completely isolated from one another and other services hosted in the Azure infrastructure. Trust boundary = virtual network boundary.
When you create a new VM with Azure Virtual Machines, you can choose to run it standalone or make it part of a group of VMs running together. What is this group called?
A cloud service. When you create a virtual machine, a cloud service is automatically created to contain it. You can create multiple virtual machines within the same cloud service so the virtual machines can communicate with each other. The cloud service created with VMs is the same Cloud Services concept that appears for PaaS; in fact, the VM cloud service will appear in the portal under Cloud Services.
A standalone VM is assigned its own public IP address, while all of the VMs in the same cloud service are accessed through a single public IP address.
You can use Azure load balancing for VMs in the same cloud service, spreading user requests across them. VMs connected in this way can also communicate directly with one another over the local network within an Azure datacenter.
VMs in the same cloud service can also be grouped into one or more availability sets.
This article explains more detail: http://azure.microsoft.com/en-us/documentation/articles/fundamentals-application-models/
Having a VM in a cloud service is also considered being in the same what?
The same deployment. A customer subscription can include multiple deployments, and each deployment can contain multiple VMs.
Windows Azure provides network isolation at several points:
Deployment: Each deployment is isolated from other deployments. Multiple VMs within a deployment are allowed to communicate with each other through private IP addresses.
Virtual Network: Multiple deployments (inside the same subscription) can be assigned to the same virtual network, and then allowed to communicate with each other through private IP addresses. Each virtual network is isolated from other virtual networks.
How can VMs be secured with respect to networking within the same subscription?
Virtual machines inside a cloud service/deployment are allowed to communicate with each other via private IP addresses. Communication between VMs in multiple cloud services/deployments of a subscription can be secured by using virtual networks.
In addition to virtual networks, for enhanced security (similar to on-premises networks), it is possible to use IPsec-based security for all communications.
How can VMs be secured with respect to networking across multiple subscriptions?
A customer may have multiple subscriptions, and VMs may need to communicate across subscriptions. For this case, there are two options:
1. Create VNet in both subscriptions and use VNet to VNet. Connecting a VNet to another VNet is very similar to connecting a virtual network to an on-premises site location. Both connectivity types use a virtual network gateway to provide a secure tunnel using IPsec/IKE. The VNets you connect can be in different subscriptions and different regions. You can even combine VNet to VNet communication with multi-site configurations.
2. VMs can be configured to communicate via public virtual IP addresses, and IP ACLs on input endpoints can be used to allow those VMs to initiate connections only with each other. However, creating ACLs based on IP addresses is not ideal, since the ACLs must be updated any time the public virtual addresses change. This can result in service failures and puts additional burden on the administrator. Public virtual IP addresses can change after compute resources are de-allocated when a virtual machine is shut down, or after a deployment is deleted. Using in-place upgrade enables administrators to deploy new versions of their service without the Public IPs of the VMs changing.
How can VMs be secured with respect to networking across multiple regions?
If an application sends or receives any sensitive data across Windows Azure regions, then communications must be encrypted. Cross-region traffic transits over a WAN and is more open to interception.
Within Windows Azure regions, customers with security concerns should use encryption for all communication that leaves a VM. For example, regulatory compliance standards may require this extra security measure.
VNet-to-VNet communication can also be used. Connecting a VNet to another VNet is very similar to connecting a virtual network to an on-premises site; both connectivity types use a virtual network gateway to provide a secure tunnel using IPsec/IKE. The VNets you connect can be in different subscriptions and different regions, and you can even combine VNet-to-VNet communication with multi-site configurations.
Would these on-premises applications be more difficult to migrate to IaaS or PaaS?
Applications that rely on the local file system and expect locally stored data to persist between restarts
Applications that rely on dynamic TCP and UDP ports
Applications that rely on MAC address for licensing
Applications that require reboots during installation (e.g. installation of a driver)
PaaS, because they would require rework to function properly.
What are the advantages and disadvantages of using IaaS?
Advantages of IaaS
Business: Quick Transition to Cloud - Due to the excellent portability enabled by IaaS, ISVs can now start offering cloud-hosted services to their customers with minimal effort.
Technology: Mature ISV Ecosystem - A mature ISV ecosystem readily offers the various solution and operational components that are popular in an on-premises setting.
Technology: Complete Control - Developers and IT professionals have access to the complete application platform stack, user-mode subsystems, and kernel-level control, so the VM can be customized to the needs of the business domains they serve.
Technology: Solution Portability - IaaS allows excellent design-time portability of application assets because the granularity of the deployment is a virtual hard disk (VHD) containing both the OS and application bits.
Disadvantages of IaaS
Business: Expensive to Operate - Solutions have to factor in the higher server maintenance costs of software patching and upgrades.
Business: Slows Down Innovation - Complete control over the OS and application server stack encourages developers to take dependencies on specific versions of the OS and app server. As a result, migrating applications to future versions of the OS and app server ecosystem becomes progressively harder.
Business: Security Risks from Unpatched Servers - An unpatched server hosting sensitive data and processing logic can pose a huge PR risk for the company. PaaS avoids this problem because server patching is handled automatically.
Technology: Difficult to Maintain Legacy Apps - You can get stuck on older versions of the operating system and application stack, which can make applications difficult to maintain and extend over time.
Technology: Requires Rigorous Processes for Enabling DevOps - IaaS-based applications suffer from the same DevOps issues that plague on-premises deployments. Rigorous processes are required to bring developers and IT pros together to build operations-friendly applications.
Technology: Requires Rigorous Server Maintenance Processes - Diligent processes are required for server patching and upgrades; this is more of a burden for smaller companies than for larger companies with mature server maintenance practices.
What are the advantages and disadvantages of using PaaS?
Advantages of PaaS
Business: Low Total Cost of Ownership - Automated server maintenance and auto-scaling of compute resources to meet temporary spikes in demand are the two biggest contributors to lowering the cost of operations. Optimizing operational cost is a key requirement for services operated by cost centers, such as corporate IT shops that serve internal employees.
Business: Accelerates Innovation - Because the surface area between the application and the underlying platform is minimal in PaaS, developers can move to new releases easily and build innovative solutions to meet market demands. With IaaS, applications tend to be sticky to the underlying platform because of the tight coupling that results from developers' complete control over the OS and application platform stack.
Technology: Better Development Operations - Developers no longer need to work at levels that require a deep understanding of the OS and networking infrastructure. OS patch management and upgrades no longer need to be part of the runbook for operating PaaS-hosted applications.
Technology: Mitigates Vulnerability Risks - The Azure PaaS team takes care of infrastructure health by keeping the infrastructure updated against all known vulnerabilities for which fixes have been distributed.
Disadvantages of PaaS
Harder Transition to Cloud - Applications that rely on the local file system, expect locally stored data to persist between restarts, rely on dynamic TCP and UDP ports, rely on a MAC address for licensing, or require reboots during installation (e.g., installing a driver) are examples that require rework, if they can be migrated to Azure PaaS at all without sacrificing core functionality.
Technology: Application Portability Issues - Because of runtime environment differences between Azure PaaS and the on-premises setup, applications have to be modified to be more transparent about the telemetry they generate, so that IT professionals can gain insight into operations and proactively mitigate availability and scalability risks. Rewiring diagnostics, accommodating resource governance in a multi-tenant setting, handling local file system access, and implementing software metering are a few PaaS-specific work items that will impact time-to-market and application portability.
Technology: PaaS ISV Ecosystem Is Not as Mature as IaaS - If an app requires a specific RMS implementation, management and monitoring product, or license enforcement product, chances are it may not be available on Azure PaaS yet.
Technology: Different Codebases for Cloud and On-Premises - ISVs will have to maintain two different build scripts and two sets of libraries that adapt the build to multiple deployment and runtime environments.
What are some application patterns that can leverage the advantages of Virtual Machines?
Existing non-mission critical database applications
New database applications to be deployed to SQL Server in Virtual Machines when Microsoft Azure SQL Database does not provide all the necessary features
A quick and easy development and test environment for new database applications
A backup solution for on-premises database applications
A solution that can scale on-demand easily and rapidly at peak times
A solution that can overcome virtualization platform inefficiencies on-premises
A solution that has dependencies on resources that require virtual machines, such as SQL Server, Active Directory, MongoDB, MySQL, or SharePoint.
What are the key concepts for VMs?
An operating system image is a set of one or more files used as a template to create a new virtual machine. An image acts like a template because it doesn't have the personalized settings that a configured virtual machine has, such as the computer name and user account settings.
A virtual machine OS disk is a virtual hard disk (in .vhd file format) that can be booted and mounted as a running instance of an operating system. Virtual machines can also use one or more data disks, which can be attached to the virtual machine at any time.
A Microsoft Azure application can have multiple virtual machines. All virtual machines that you create in Microsoft Azure can automatically communicate over a private network channel with other virtual machines in the same cloud service or virtual network, and Microsoft Azure allows you to load-balance traffic between them.
How many usable IP's are available with 10.0.0.0/24-32?
/24 - 254
/25 - 126
/26 - 62
/27 - 30
/28 - 14
/29 - 6
/30 - 2
/31 - 2 (point-to-point links per RFC 3021; 0 under the classic network/broadcast rule)
/32 - 1 (a single host route)
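The table above matches what Python's standard `ipaddress` module reports: `hosts()` excludes the network and broadcast addresses for prefixes up to /30, yields both addresses of a /31 (per RFC 3021), and yields the single address of a /32 (behavior in Python 3.8+):

```python
import ipaddress

# Count usable host addresses for 10.0.0.0/24 through /32.
for prefix in range(24, 33):
    net = ipaddress.ip_network(f"10.0.0.0/{prefix}")
    usable = sum(1 for _ in net.hosts())
    print(f"/{prefix} - {usable}")
```

The general rule for /24 through /30 is 2^(32 - prefix) - 2 usable addresses. Note that Azure itself reserves additional addresses in each subnet, so in-platform usable counts are lower than these classic values.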