SAC-853T: Inside Windows Azure: the cloud operating system


Transcripts
Slide 1

SAC-853T Inside Windows Azure: the cloud operating system
Mark Russinovich, Technical Fellow, Microsoft Corporation

Slide 2

About Me
- Technical Fellow, Windows Azure, Microsoft
- Author of Windows Sysinternals tools
- Coauthor of Windows Internals book series, with Dave Solomon and Alex Ionescu
- Coauthor of Sysinternals Administrator's Reference, with Aaron Margosis
- Author of Zero Day: A Novel

Slide 3

Agenda
- Windows Azure Datacenter Architecture
- Deploying Services
- Maintaining Service Health
- Developing and Operating Windows Azure

Slide 4

Windows Azure Datacenter Architecture

Slide 5

Datacenter Architecture
- Datacenter Routers
- Aggregation Routers and Load Balancers (Agg, LB)
- Top of Rack Switches (TOR)
- Racks of Nodes
- Power Distribution Units (PDU)

Slide 6

Windows Azure Datacenters

Slide 7

Datacenter Clusters
Datacenters are divided into "clusters" of approximately 1000 rack-mounted servers (we call them "nodes").
A cluster provides a unit of fault isolation.
Each cluster is managed by a Fabric Controller (FC). The FC is responsible for:
- Blade provisioning
- Blade management
- Service deployment and lifecycle
(Diagram: the datacenter network connecting FC-managed Cluster 1, Cluster 2 … Cluster n.)

Slide 8

The Fabric Controller (FC)
The "kernel" of the cloud operating system: manages datacenter hardware and manages Windows Azure services.
Four main responsibilities:
- Datacenter resource allocation
- Datacenter resource provisioning
- Service lifecycle management
- Service health management
Inputs:
- Description of the hardware and network resources it will control
- Service model and binaries for cloud applications
(Diagram analogy: the Fabric Controller is to the datacenter and its services what the Windows kernel is to a server and its processes, e.g. Word, SQL Server, Exchange Online, SQL Azure.)

Slide 9

Cluster Resource Description
The Fabric Controller is bootstrapped with a Utility Fabric Controller (UFC): a single-instance FC used for bootstrap and FC upgrades.
The UFC feeds the FC a description of the cluster's physical and logical resources in Datacenter.xml:
- Server IP addresses
- Pool of network IP addresses to assign to services
- Network hardware and Power Distribution Unit addresses
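
The talk does not show the Datacenter.xml schema, so the fragment below is purely hypothetical: every element and attribute name is invented to illustrate the kind of inventory the UFC hands the FC.

```python
import xml.etree.ElementTree as ET

# Hypothetical sketch only: the real Datacenter.xml schema is not shown
# in the talk, so all element/attribute names here are invented.
DATACENTER_XML = """\
<Datacenter>
  <Servers>
    <Server ip="10.100.0.10" rack="1" pdu="PDU-1"/>
    <Server ip="10.100.0.11" rack="1" pdu="PDU-1"/>
  </Servers>
  <ServiceVipPool start="65.52.0.10" end="65.52.0.250"/>
</Datacenter>
"""

root = ET.fromstring(DATACENTER_XML)
# Server IP addresses plus the rack and PDU each blade hangs off of.
servers = [(s.get("ip"), s.get("rack"), s.get("pdu"))
           for s in root.iter("Server")]
# Pool of network IPs the FC can assign to services.
vip_pool = root.find("ServiceVipPool")
print(servers)
print(vip_pool.get("start"), "-", vip_pool.get("end"))
```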

Slide 10

Inside a Cluster
The FC is a distributed, stateful application running on nodes (servers) spread across fault domains.
Top blades are reserved for the FC.
One FC instance is the primary, and all others keep their view of the world in sync.
Supports rolling upgrade, and services continue to run even if the FC fails entirely.
(Diagram: TOR, AGG, LB; FC replicas FC1–FC5 spread across racks of nodes.)

Slide 11

Provisioning a Node
- Power on node
- PXE-boot Maintenance OS
- Agent formats disk and downloads Host OS via Windows Deployment Services (WDS)
- Host OS boots, runs Sysprep /specialize, reboots
- FC connects with the "Host Agent"
(Diagram: Fabric Controller, Windows Deployment Server with PXE server, and image repository holding Maintenance OS, Windows Azure OS, parent OS, and role images; the node runs the FC Host Agent on the Windows Azure OS over the Windows Azure hypervisor.)

Slide 12

Deploying Services

Slide 13

Deploying a Service to the Cloud: The 10,000-foot view
- Package upload to portal: System Center Concero gives the IT pro the upload experience; the Windows Azure portal gives the developer the upload experience
- Service package goes to RDFE
- RDFE sends the service to a Fabric Controller (FC) based on target region and affinity group
- FC stores the image in the repository and deploys the service
(Diagram: System Center App Manager and the Windows Azure Portal feeding RDFE, which targets the Fabric Controller in the US-North Central datacenter.)

Slide 14

RDFE serves as the front end for all Windows Azure services:
- Subscription management
- Billing
- User access
- Service management
RDFE is responsible for choosing clusters to deploy services and storage accounts: first by datacenter region, then by affinity group or cluster load, using normalized VIP and core utilization: A(h, g) = C(h, g) / …
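
The transcript cuts off the normalization formula, so the sketch below substitutes a plain used-over-capacity ratio and takes the max of VIP and core utilization; the Cluster fields and that combination rule are illustrative assumptions, not the talk's definition of A(h, g).

```python
from dataclasses import dataclass

@dataclass
class Cluster:
    name: str
    region: str
    used_cores: int
    total_cores: int
    used_vips: int
    total_vips: int

    def utilization(self) -> float:
        # Assumed normalization: fraction of capacity in use, taking the
        # tighter of the two constrained resources (cores vs. VIPs).
        return max(self.used_cores / self.total_cores,
                   self.used_vips / self.total_vips)

def choose_cluster(clusters, target_region):
    # First filter by datacenter region, then prefer the least-loaded cluster.
    candidates = [c for c in clusters if c.region == target_region]
    return min(candidates, key=Cluster.utilization, default=None)

clusters = [
    Cluster("cluster-1", "US-North Central", 800, 1000, 90, 100),
    Cluster("cluster-2", "US-North Central", 400, 1000, 40, 100),
]
print(choose_cluster(clusters, "US-North Central").name)  # cluster-2
```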

Slide 15

The Several Service Models of Windows Azure
The service model goes through several translations on the way to a node:
- .CSDEF/.CSCFG sent to RDFE: the SDK schema for external use; an internal schema exists for infrastructure services (more update domains, fault domains, early access to features)
- RDFE converts .CSDEF to .WAZ: includes advanced service concepts such as native hardware and role colocation
- RDFE then converts .WAZ to .SVD/.TRD: the Fabric Controller's internal model

Slide 16

Service Deployment Steps
- Process service model files
- Determine resource requirements
- Create role images
- Allocate compute and network resources
- Prepare nodes: place role images on nodes, create virtual machines, start virtual machines and roles
- Configure networking: dynamic IP addresses (DIPs) assigned to blades; virtual IP addresses (VIPs) + ports allocated and mapped to sets of DIPs; configure packet filter for VM-to-VM traffic; program load balancers to allow traffic (see the sketch below)
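
A minimal sketch of the VIP-to-DIP mapping those last steps set up: a public VIP:port pair is load-balanced across the private DIPs of the role instances. The addresses are illustrative, and picking a DIP at random stands in for the real load balancer's policy, which the talk doesn't specify.

```python
import random

# One allocated VIP + port, mapped to the set of DIPs for the role instances.
lb_rules = {
    ("65.52.14.201", 80): ["10.100.0.36", "10.100.0.122", "10.100.0.185"],
}

def route(vip: str, port: int) -> str:
    # Choose a DIP for an incoming connection (random here; the actual
    # balancing policy is not described in the talk).
    return random.choice(lb_rules[(vip, port)])

print(route("65.52.14.201", 80))
```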

Slide 17

Service Resource Allocation
Goal: allocate service components to available resources while satisfying every hard constraint:
- HW requirements: CPU, memory, storage, network
- Fault domains
Secondary goal: satisfy soft constraints (see the sketch below):
- Prefer allocations that will simplify servicing the host OS/hypervisor
- Optimize network proximity: pack nodes
Service allocation produces the goal state for the resources assigned to the service components:
- Node and VM configuration (OS, hosting environment)
- Images and configuration files to deploy
- Processes to start
- Assign and configure resources such as LB and VIPs
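
As a rough illustration of this two-phase allocation, the sketch below filters nodes by hard constraints and then applies a packing preference. The Node fields and the "fullest node that fits" rule are invented for the example, not the FC's actual heuristic.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    free_cores: int
    free_mem_gb: int
    fault_domain: int

def allocate(nodes, cores, mem_gb, used_fault_domains):
    # Hard constraints: enough CPU/memory, and a fault domain not already
    # used by this role (so one failure can't take out every instance).
    ok = [n for n in nodes
          if n.free_cores >= cores
          and n.free_mem_gb >= mem_gb
          and n.fault_domain not in used_fault_domains]
    # Soft constraint (packing): prefer the fullest node that still fits.
    return min(ok, key=lambda n: n.free_cores, default=None)

nodes = [Node("n1", 8, 14, 0), Node("n2", 2, 3, 1), Node("n3", 4, 7, 1)]
print(allocate(nodes, cores=2, mem_gb=3, used_fault_domains={0}).name)  # n2
```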

Slide 18

Deploying a Service
Role A: Web Role (Front End), Count: 3, Update Domains: 3, Size: Large
Role B: Worker Role, Count: 2, Update Domains: 2, Size: Medium
(Diagram: www.mycloudapp.net → Load Balancer → 10.100.0.185, 10.100.0.36, 10.100.0.122)

Slide 19

Mapping Instances to Update/Fault Domains
The algorithm should round-robin across two axes: fault domains and update domains.
APIs return the FD and UD for an instance, but the mapping is invariant.
(Diagram: instances IN_0–IN_8 arranged in a grid of fault domains by update domains.)
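
The slide doesn't give the exact assignment, so the sketch below uses simple modular round-robin, which is one scheme with the stated "invariant mapping" property. The domain counts are illustrative, not taken from the slide.

```python
NUM_UDS, NUM_FDS = 3, 2   # illustrative counts, not from the slide

def placement(i: int) -> tuple[int, int]:
    # Instance IN_i always lands in update domain i mod NUM_UDS and
    # fault domain i mod NUM_FDS; the result never changes for a given i.
    return i % NUM_UDS, i % NUM_FDS

for i in range(9):
    ud, fd = placement(i)
    print(f"IN_{i}: UD={ud} FD={fd}")
```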

Slide 20

Inside a Deployed Node
(Diagram: a physical node hosting several guest partitions, each running a role instance and a guest agent; a trust boundary separates the guest partitions from the host partition, which runs the FC host agent and holds the image repository (OS VHDs, role ZIP files). The node reports to the Fabric Controller primary and its replicas.)

Slide 21

Deploying a Role Instance
- FC pushes role files and configuration information to the target node's host agent
- Host agent creates three VHDs: a differencing VHD for the OS image (D:\), into which the host agent injects the FC guest agent for Web/Worker roles; a resource VHD for temporary files (C:\); and a role VHD for role files (next available drive letter, e.g. E:\, F:\)
- Host agent creates the VM, attaches the VHDs, and starts the VM
- Guest agent starts the role host, which calls the role entry point
- Starts the health heartbeat to, and receives commands from, the host agent
- Load balancer only routes to an external endpoint once it responds to a simple HTTP GET (LB probe)
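
A minimal sketch of an endpoint answering that LB probe: the balancer starts routing traffic only once a plain HTTP GET succeeds. The real guest agent/role host plumbing is far richer than this; the port and handler here are illustrative.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class ProbeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Answer the load balancer's probe; returning 200 signals that
        # this instance is healthy and can receive external traffic.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"OK")

if __name__ == "__main__":
    # Illustrative port; the probed port is whatever endpoint the
    # service model exposes.
    HTTPServer(("0.0.0.0", 8080), ProbeHandler).serve_forever()
```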

Slide 22

Role Instance VHDs
Role virtual machine volumes:
- C:\ Resource Disk
- D:\ Windows Differencing Disk (backed by the Windows VHD)
- E:\ or F:\ Role Image Differencing Disk (backed by the Role VHD)

Slide 23

Inside a Role VM
(Diagram: OS volume, resource volume, and role volume; the guest agent starts the role host, which calls the role entry point.)

Slide 24

Performance Improvements: JBOD
Today: blade disks are striped, so VM VHDs share the multi-disk volume. I/O contention can be high during boot of multiple VMs, and failure of one disk affects all VMs.
Improvement: create one volume per disk, with VHDs spanning volumes only if necessary. This leads to higher I/O isolation, and failure of a disk affects only the VMs hosted on that disk.
(Diagram: JBOD, with VHD1 and VHD2 on separate Volumes 1–3 over Disks 1–3, vs. striping, with both VHDs on one volume spanning Disks 1–3.)

Slide 25

Performance Improvements: Preallocation and Prefetching
Today: differencing VHDs are grown on demand, which causes seeks and zero-filling for expansion, and the volume becomes fragmented over time.
Improvement: preallocate and prefetch VHDs. Differencing and dynamic VHDs are preallocated rather than grown on demand, and the OS VHD is prefetched from a lab-generated prefetch file.
(Diagram: preallocation vs. dynamic allocation of VHD1 and VHD2.)
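
A small sketch of the preallocation idea: reserve a file's full extent up front so later writes don't pay for piecemeal extension and fragmentation. posix_fallocate is a Unix stand-in used only for illustration; Windows would use its own file-preallocation APIs, and the size is invented.

```python
import os

SIZE = 1 * 1024**3  # 1 GB, illustrative

# Allocate real blocks up front (not a sparse hole), so subsequent
# writes to the VHD never stall on on-demand extension.
fd = os.open("role.vhd", os.O_CREAT | os.O_WRONLY)
try:
    os.posix_fallocate(fd, 0, SIZE)
finally:
    os.close(fd)
```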

Slide 26

Updating the Host OS
- Initiated by the Windows Azure team, typically no more than once per month
- Goal: upgrade all machines as quickly as possible
- Constraint: must not violate the service SLA; a service needs at least two update domains and role instances for the SLA, and no more than one update domain of any service may be offline at a time
- Essentially a graph-coloring problem: edges exist between vertices (nodes) if the two nodes have instances of the same service role in different update domains; nodes that don't have edges between them can upgrade in parallel (see the sketch below)
- Note: your role instance keeps the same VM and VHDs, preserving cached data on the resource volume
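
A sketch of that graph-coloring view: each color class is a batch of nodes with no shared edge, so the whole batch can reboot in parallel. The greedy coloring below is illustrative, not the Fabric Controller's actual algorithm.

```python
def update_batches(nodes, edges):
    # Greedy graph coloring: assign each node the lowest color not used
    # by a neighbor; each color class becomes one parallel update batch.
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    color = {}
    for n in nodes:
        taken = {color[m] for m in adj[n] if m in color}
        color[n] = next(c for c in range(len(nodes)) if c not in taken)
    batches = {}
    for n, c in color.items():
        batches.setdefault(c, []).append(n)
    return [batches[c] for c in sorted(batches)]

# An edge joins two nodes hosting instances of the same service role in
# different update domains, so they must not go down together.
nodes = ["node1", "node2", "node3", "node4"]
edges = [("node1", "node2"), ("node3", "node4")]
print(update_batches(nodes, edges))  # [['node1', 'node3'], ['node2', 'node4']]
```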

Slide 27

Allocation Constraints
Example: Service A and Service B each place role instances in UD 1 and UD 2.
Both allocations are valid from the services' perspective, but host OS update rollout is 2x faster with Allocation 1: Allocation 1 allows two nodes to reboot simultaneously, while Allocation 2 permits only a single node to be down at a time.
Allocation algorithm: prefer nodes hosting the same UD as the role instance's UD.
(Diagram: Allocation 1 co-locates Service A and Service B instances with matching UDs on the same nodes; Allocation 2 mixes UDs across nodes.)

Slide 28

Updating a Cluster: The Long Tail
Cluster host OS updates can take many hours because …
