Server-based Characterization and Inference of Internet Performance


Slide 1

Server-based Characterization and Inference of Internet Performance Venkat Padmanabhan Lili Qiu Helen Wang Microsoft Research UCLA/IPAM Workshop March 2002

Slide 2

Outline Overview Server-based characterization of performance Server-based inference of performance Passive Network Tomography Summary and future work

Slide 3

Overview Goals: characterize end-to-end performance; infer characteristics of internal links. Approach: server-based monitoring. Passive monitoring => relatively inexpensive, enables large-scale measurements, diversity of network paths.

Slide 4

(Diagram: the Web server sends DATA packets to clients and receives ACKs back.)

Slide 5

Research Questions Server-based characterization of end-to-end performance: correlation with topological metrics, spatial locality, temporal stability. Server-based inference of internal link characteristics: identification of lossy links.

Slide 6

Related Work Server-based passive measurement: 1996 Olympics Web server study (Berkeley, 1997 & 1998), characterization of TCP properties (Allman 2000). Active measurement: NPD (Paxson 1997), stationarity of Internet path properties (Zhang et al. 2001).

Slide 7

Experiment Setting Packet sniffer at microsoft.com: 550 MHz Pentium III, sits on the spanning port of a Cisco Catalyst 6509, packet drop rate < 0.3%, traces up to 2+ hours long, 20-125 million packets, 50-950K clients. Traceroute source: sits on a different Microsoft network, but all external hops are shared; traceroutes are infrequent and run in the background.

Slide 8

Topological Metrics and Loss Rate Topological distance is a poor predictor of packet loss rate. All links are not equal => need to identify the lossy links.

Slide 9

Spatial Locality Do clients in the same cluster see similar loss rates? Loss rate is quantized into bins: 0-0.5%, 0.5-2%, 2-5%, 5-10%, 10-20%, 20+%, as suggested by Zhang et al. (IMW 2002). Focus on lossy clusters: average loss rate > 5%. Spatial locality => there may be a shared cause of packet loss.

Slide 10

Temporal Stability Loss rate again quantized into bins. Metric of interest: stability period (i.e., time until a move into a new bin). Median stability period ≈ 10 minutes. Consistent with previous findings based on active measurements.
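To make the binning and stability-period metrics concrete, here is a small Python sketch (not from the talk) that quantizes a client's loss-rate time series into the bins listed on the Spatial Locality slide and measures how long the loss rate stays in one bin before moving; the trace data and function names are illustrative assumptions.

```python
# Sketch: quantize loss rates into the slides' bins and measure the stability
# period, i.e. the time until the loss rate moves into a new bin.
import bisect
import statistics

BIN_EDGES = [0.005, 0.02, 0.05, 0.10, 0.20]   # 0-0.5%, 0.5-2%, 2-5%, 5-10%, 10-20%, 20+%

def loss_bin(loss_rate):
    """Map a loss rate (fraction, e.g. 0.03 = 3%) to its bin index."""
    return bisect.bisect_right(BIN_EDGES, loss_rate)

def stability_periods(samples):
    """samples: list of (timestamp_seconds, loss_rate) in time order.
    Returns the lengths (seconds) of runs during which the bin stays unchanged."""
    periods = []
    run_start, current_bin = samples[0][0], loss_bin(samples[0][1])
    for t, rate in samples[1:]:
        b = loss_bin(rate)
        if b != current_bin:            # moved into a new bin: the run ends here
            periods.append(t - run_start)
            run_start, current_bin = t, b
    return periods

# Toy trace: a client whose loss rate crosses a bin boundary after 10 minutes.
trace = [(0, 0.01), (300, 0.015), (600, 0.06), (900, 0.07)]
print(statistics.median(stability_periods(trace)))   # -> 600 seconds
```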

Slide 11

Putting it all together All links are not equal => need to identify the lossy links. Spatial locality of packet loss rate => lossy links may well be shared. Temporal stability => worthwhile to try to identify the lossy links.

Slide 12

Passive Network Tomography Goal: determine characteristics of internal network links using end-to-end, passive measurements. We focus on the link loss rate metric; primary objective: identifying lossy links. Why is this interesting? Finding trouble spots in the network, monitoring your ISP, server placement and server selection.

Slide 13

(Diagram: a Web server asks "Why is it so slow?" while a client complains "Darn, it's slow!"; the paths between them cross multiple ISPs: AT&T, Sprint, C&W, Earthlink, UUNET, AOL, Qwest.)

Slide 14

Related Work MINC (Caceres et al. 1999): multicast-based active probing. Striped unicast (Duffield et al. 2001): unicast-based active probing. Passive measurement (Coates et al. 2002): look for back-to-back packets. Shared bottleneck detection: Padmanabhan 1999, Rubenstein et al. 2000, Katabi et al. 2001.

Slide 15

(Diagram: Active Network Tomography: multicast probes and striped unicast probes sent from source S toward receivers A and B.)

Slide 16

Problem Formulation Collapse linear chains into virtual links. Each client's observed end-to-end loss rate p_j is tied to the loss rates l_i of the links on its path from the server:
(1-l1)*(1-l2)*(1-l4) = (1-p1)
(1-l1)*(1-l2)*(1-l5) = (1-p2)
…
(1-l1)*(1-l3)*(1-l8) = (1-p5)
This is an under-constrained system of equations. (Diagram: a tree rooted at the server with links l1-l8 leading down to the clients, whose observed loss rates are p1-p5.)
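The loss model behind these equations can be illustrated with a short Python sketch. The path table below is an assumed reading of the slide's figure (which link ids lie on each client's path), not data from the talk; the sketch shows how per-link loss rates compose into end-to-end loss rates, and why many different link-loss assignments reproduce the same handful of observations.

```python
# Sketch of the slide's loss model: a path's success probability is the product
# of its links' success probabilities, so (1-l1)*(1-l2)*(1-l4) = (1-p1), etc.
# Eight virtual links, five clients: five equations in eight unknowns.
import math

# client j -> list of link ids from the server to client j (assumed from the figure)
PATHS = {1: [1, 2, 4], 2: [1, 2, 5], 3: [1, 2, 6], 4: [1, 3, 7], 5: [1, 3, 8]}

def path_loss(link_loss, path):
    """End-to-end loss rate implied by the per-link loss rates along `path`."""
    success = math.prod(1.0 - link_loss[i] for i in path)
    return 1.0 - success

link_loss = {1: 0.0, 2: 0.01, 3: 0.08, 4: 0.0, 5: 0.02, 6: 0.0, 7: 0.0, 8: 0.01}
p = {j: path_loss(link_loss, path) for j, path in PATHS.items()}
print(p)   # many other link_loss assignments would yield exactly the same p
```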

Slide 17

#1: Random Sampling Randomly sample the solution space, repeat this a few times, and draw conclusions based on the overall statistics. How to do random sampling? Determine a loss rate bound for each link using its best downstream client; iterate over all links: pick a loss rate at random within the bounds, then update the bounds for the other links. Problem: little tolerance for measurement error. (Diagram: the same server-to-clients tree with links l1-l8 and client loss rates p1-p5.)
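A literal-reading sketch of this recipe follows: each link's loss rate is bounded by its best (lowest-loss) downstream client, links are visited in turn, a loss rate is drawn uniformly within the current bound, and the bounds for the remaining links are tightened. The topology, observations, and visiting order are assumptions, and the sketch only approximately satisfies the path equations; it is meant to show the sampling idea, not to reproduce the talk's exact procedure.

```python
import random

PATHS = {0: [0, 1, 2], 1: [0, 1, 3], 2: [0, 4]}      # link ids per client path (assumed)
P_OBS = {0: 0.02, 1: 0.03, 2: 0.08}                  # observed end-to-end loss rates (assumed)
N_LINKS = 5

def one_random_solution():
    # Remaining "success budget" per path: the product of (1 - l_i) over the
    # links not yet assigned must stay at least this large.
    budget = {j: 1.0 - P_OBS[j] for j in PATHS}
    unassigned = {j: set(links) for j, links in PATHS.items()}
    loss = [0.0] * N_LINKS
    for i in range(N_LINKS):                         # iterate over all links (root first here)
        # Bound from the best downstream client: l_i can be at most 1 - budget_j
        # for every path j that still includes link i.
        bound = min(1.0 - budget[j] for j in PATHS if i in unassigned[j])
        loss[i] = random.uniform(0.0, max(bound, 0.0))
        for j in PATHS:                              # tighten the other links' bounds
            if i in unassigned[j]:
                unassigned[j].discard(i)
                budget[j] = min(budget[j] / (1.0 - loss[i]), 1.0)
    return loss

samples = [one_random_solution() for _ in range(500)]    # repeat a number of times
avg = [sum(s[i] for s in samples) / len(samples) for i in range(N_LINKS)]
print([round(a, 3) for a in avg])                        # overall statistics per link
```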

Slide 18

#2: Linear Optimization Goals: a parsimonious explanation, robust to measurement error. Let Li = log(1/(1-li)) and Pj = log(1/(1-pj)). Minimize Σ Li + Σ |Sj|, where the Sj are slack terms that absorb measurement error, subject to:
L1 + L2 + L4 + S1 = P1
L1 + L2 + L5 + S2 = P2
…
L1 + L3 + L8 + S5 = P5
Li >= 0
This can be transformed into a linear program. (Diagram: the same server-to-clients tree with links l1-l8 and client loss rates p1-p5.)
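Below is a hedged sketch of this formulation using SciPy's linprog; the |Sj| terms are handled in the standard way by splitting each slack into two nonnegative parts. The topology, the observed loss rates, and the slack weight w are illustrative assumptions, not the talk's data.

```python
# Sketch of the slide's LP with SciPy; data and slack weight are made up.
import numpy as np
from scipy.optimize import linprog

PATHS = {0: [0, 1, 3], 1: [0, 1, 4], 2: [0, 1, 5], 3: [0, 2, 6], 4: [0, 2, 7]}  # link ids per client
p_obs = np.array([0.02, 0.03, 0.02, 0.09, 0.08])        # observed end-to-end loss rates
P = np.log(1.0 / (1.0 - p_obs))                         # Pj = log(1/(1-pj))
n_links, n_paths, w = 8, len(PATHS), 1.0

# Variables: [L_0..L_7, Splus_0..Splus_4, Sminus_0..Sminus_4], all >= 0.
# Each slack is S_j = Splus_j - Sminus_j, so |S_j| = Splus_j + Sminus_j.
c = np.concatenate([np.ones(n_links), w * np.ones(2 * n_paths)])
A_eq = np.zeros((n_paths, n_links + 2 * n_paths))
for j, links in PATHS.items():
    A_eq[j, links] = 1.0                   # sum of L_i over the links on path j
    A_eq[j, n_links + j] = 1.0             # + Splus_j
    A_eq[j, n_links + n_paths + j] = -1.0  # - Sminus_j
res = linprog(c, A_eq=A_eq, b_eq=P, bounds=[(0, None)] * (n_links + 2 * n_paths))

L = res.x[:n_links]
inferred_loss = 1.0 - np.exp(-L)           # invert L_i = log(1/(1-l_i))
print(np.round(inferred_loss, 3))          # link 2, shared by the two high-loss paths, should stand out
```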

Slide 19

#3: Bayesian Inference Basics: D: observed data; sj: # packets successfully sent to client j; fj: # packets that client j fails to receive; Θ: unknown model parameters; li: packet loss rate of link i. Goal: determine the posterior P(Θ|D); inference is based on loss events, not loss rates. Bayes' theorem: P(Θ|D) = P(D|Θ)P(Θ) / ∫ P(D|Θ)P(Θ)dΘ, which is hard to compute since Θ is multidimensional. (Diagram: the same server-to-clients tree with links l1-l8; client j observes (sj, fj).)
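The likelihood P(D|Θ) that this posterior is built on can be written down directly if each of client j's packets is assumed to be lost independently with probability pj, the end-to-end loss rate implied by the current link loss rates. The sketch below computes the resulting log-likelihood; the topology and the (sj, fj) counts are made-up examples.

```python
# Sketch of log P(D | Theta) under independent per-packet losses.
import math

PATHS = {0: [0, 1, 3], 1: [0, 1, 4], 2: [0, 2, 5]}     # link ids per client path (assumed)
OBS = {0: (980, 20), 1: (990, 10), 2: (900, 100)}      # (s_j, f_j) per client (assumed)

def log_likelihood(link_loss):
    """log P(D | Theta) = sum_j [ s_j*log(1-p_j) + f_j*log(p_j) ]."""
    total = 0.0
    for j, path in PATHS.items():
        p_j = 1.0 - math.prod(1.0 - link_loss[i] for i in path)
        p_j = min(max(p_j, 1e-12), 1.0 - 1e-12)        # guard against log(0)
        s_j, f_j = OBS[j]
        total += s_j * math.log(1.0 - p_j) + f_j * math.log(p_j)
    return total

print(log_likelihood({0: 0.0, 1: 0.01, 2: 0.09, 3: 0.01, 4: 0.0, 5: 0.01}))
```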

Slide 20

Gibbs Sampling Markov Chain Monte Carlo (MCMC): construct a Markov chain whose stationary distribution is P(Θ|D). Gibbs Sampling defines the transition kernel: start with an arbitrary initial assignment of the li; consider each link i in turn: compute P(li|D) assuming lj is fixed for j≠i, draw a sample from P(li|D), and update li. After a burn-in period, we obtain samples from the posterior P(Θ|D).

Slide 21

Gibbs Sampling Algorithm 1) Initialize link loss rates arbitrarily. 2) For j = 1 : burnIn: for each link i, compute P(li | D, {li'}) and draw a new li from it, where li is the loss rate of link i and {li'} = {lj : j ≠ i}. 3) For j = 1 : realSamples: for each link i, compute P(li | D, {li'}) and draw a new li from it. Use all of the samples obtained at step 3 to approximate P(Θ|D).
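A compact Gibbs-sampling sketch for this model is given below. Because the conditional P(li | D, {li'}) has no simple closed form here, the sketch approximates each conditional draw on a discrete grid of candidate loss rates under a flat prior; this is a simplification for illustration, not the talk's exact sampler, and the topology and observations are also assumed.

```python
import math
import random

PATHS = {0: [0, 1, 2], 1: [0, 1, 3], 2: [0, 4]}      # link ids per client path (assumed)
OBS = {0: (980, 20), 1: (985, 15), 2: (920, 80)}     # (s_j, f_j) per client (assumed)
N_LINKS = 5
GRID = [g / 200 for g in range(1, 100)]              # candidate loss rates 0.005 .. 0.495

def log_lik(link_loss):
    """log P(D | Theta) under independent per-packet losses."""
    total = 0.0
    for j, path in PATHS.items():
        p = 1.0 - math.prod(1.0 - link_loss[i] for i in path)
        p = min(max(p, 1e-12), 1.0 - 1e-12)
        s, f = OBS[j]
        total += s * math.log(1.0 - p) + f * math.log(p)
    return total

def gibbs(burn_in=100, real_samples=400):
    link_loss = [random.choice(GRID) for _ in range(N_LINKS)]   # 1) arbitrary initialization
    samples = []
    for sweep in range(burn_in + real_samples):                 # 2) burn-in, then 3) real samples
        for i in range(N_LINKS):
            logw = []
            for cand in GRID:                                   # P(l_i | D, {l_i'}) on the grid
                link_loss[i] = cand
                logw.append(log_lik(link_loss))                 # flat prior over the grid assumed
            m = max(logw)
            weights = [math.exp(w - m) for w in logw]           # stabilize before exponentiating
            link_loss[i] = random.choices(GRID, weights=weights)[0]
        if sweep >= burn_in:
            samples.append(list(link_loss))
    return samples

samples = gibbs()
post_mean = [sum(s[i] for s in samples) / len(samples) for i in range(N_LINKS)]
print([round(m, 3) for m in post_mean])   # link 4 should show the highest posterior loss rate
```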

Slide 22

Experimental Evaluation Simulation experiments Internet traffic traces

Slide 23

Simulation Experiments Advantage: no uncertainty about link loss rates. Methodology: Topologies used: randomly generated (20-3000 nodes, max degree = 5-50) and a real topology obtained by tracing paths to microsoft.com clients. Packet loss events are randomly generated at each link: a fraction f of the links are good, and the rest are "bad". LM1: good links: 0-1%, bad links: 5-10%. LM2: good links: 0-1%, bad links: 1-100%. Goodness metrics: Coverage: # correctly inferred lossy links. False positives: # incorrectly inferred lossy links.
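A minimal sketch of this methodology under LM1 is shown below: link loss rates are drawn with a fraction f of links good, packets are dropped independently at each link along a path, and a set of inferred lossy links is scored by coverage and false positives. The lossiness threshold and all names are illustrative assumptions.

```python
# Sketch of the simulation setup under LM1 and the two goodness metrics.
import random

def lm1_loss_rates(n_links, f_good=0.8):
    """LM1: a fraction f_good of links are good (0-1% loss), the rest bad (5-10% loss)."""
    return [random.uniform(0.0, 0.01) if random.random() < f_good
            else random.uniform(0.05, 0.10) for _ in range(n_links)]

def simulate_client(loss_rates, path, n_packets=1000):
    """Send n_packets along `path`; a packet is lost if any link on the path drops it."""
    lost = sum(any(random.random() < loss_rates[i] for i in path) for _ in range(n_packets))
    return n_packets - lost, lost                      # (s_j, f_j)

def score(true_rates, inferred_lossy, threshold=0.05):
    """Coverage: # correctly inferred lossy links; false positives: # incorrectly inferred."""
    truly_lossy = {i for i, rate in enumerate(true_rates) if rate >= threshold}
    coverage = len(truly_lossy & set(inferred_lossy))
    false_positives = len(set(inferred_lossy) - truly_lossy)
    return coverage, false_positives

rates = lm1_loss_rates(n_links=8)
print(simulate_client(rates, path=[0, 1, 3]))          # observed (s_j, f_j) for one client
print(score(rates, inferred_lossy=[i for i, r in enumerate(rates) if r >= 0.05]))
```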

Slide 24

Simulation Results

Slide 25

Simulation Results

Slide 26

Simulation Results High confidence in the first few inferences

Slide 27

Trade-off

Slide 28

Internet Traffic Traces Challenge: validation. Divide the client traces into two: a tomography set and a validation set. Tomography data set => loss inference. Validation set => check whether clients downstream of the inferred lossy links experience high loss. Results: the false positive rate is between 5-30%. Likely candidates for lossy links: links crossing an inter-AS boundary, links with a large delay (e.g., cross-country links), links that terminate at clients. Example lossy links: San Francisco (AT&T) <-> Indonesia (Indo.net), Sprint <-> PacBell in California, Moscow <-> Tyumen, Siberia (Sovam Teleport).
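A sketch of this validation step, under assumed data structures (client-to-path and client-to-loss-rate maps) and an assumed high-loss threshold: clients are split at random into a tomography set and a validation set, and an inferred lossy link counts as a false positive when no validation client downstream of it sees high loss.

```python
# Sketch of the validation methodology; split ratio, threshold, and data are illustrative.
import random

def split_clients(clients, tomography_fraction=0.5):
    """Randomly divide clients into a tomography set and a validation set."""
    shuffled = list(clients)
    random.shuffle(shuffled)
    cut = int(len(shuffled) * tomography_fraction)
    return shuffled[:cut], shuffled[cut:]

def false_positive_rate(inferred_lossy, client_paths, client_loss, validation, high_loss=0.05):
    """A lossy-link inference is a false positive if it has downstream validation
    clients and none of them experiences high loss; unverifiable links are skipped."""
    false_pos = 0
    for link in inferred_lossy:
        downstream = [c for c in validation if link in client_paths[c]]
        if downstream and not any(client_loss[c] >= high_loss for c in downstream):
            false_pos += 1
    return false_pos / len(inferred_lossy) if inferred_lossy else 0.0

# Toy example with made-up clients, paths, and observed loss rates.
paths = {"c1": [0, 1], "c2": [0, 2], "c3": [0, 2], "c4": [0, 1]}
loss = {"c1": 0.01, "c2": 0.08, "c3": 0.06, "c4": 0.005}
tomo_set, validation_set = split_clients(paths)
print(false_positive_rate([2], paths, loss, validation_set))
```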

Slide 29

Summary Poor correlation between topological metrics & performance. Significant spatial locality and temporal stability. Passive network tomography is feasible; tradeoff between computational cost and accuracy. Future directions: real-time inference, selective active probing. Acknowledgments: MSR: Dimitris Achlioptas, Christian Borgs, Jennifer Chayes, David Heckerman, Chris Meek, David Wilson. Infrastructure: Rob Emanuel, Scott Hogan. http://www.research.microsoft.com/~padmanab
