Linux-HA Release 2 Tutorial.


Description
If true, nodes we have never heard from are fenced. Otherwise, we only fence nodes that leave the cluster after having been members of it first ...
Transcripts
Slide 1

Alan Robertson, Project Leader – Linux-HA project, alanr@unix.sh, IBM Linux Technology Center
Linux-HA Release 2 Tutorial

Slide 2

Tutorial Overview
HA principles
Installing Linux-HA
Basic Linux-HA setup
Configuring Linux-HA
Sample HA configurations
Testing clusters
Advanced components

Slide 3

Part I
General HA principles
Architectural overview of Linux-HA
Compiling and installing the Linux-HA ("heartbeat") software

Slide 4

What Is HA Clustering?
Putting together a group of computers which trust each other to provide a service even when system components fail
When one machine goes down, others take over its work
This includes IP address takeover, service takeover, etc.
New work goes to the "takeover" machine
Not primarily designed for high performance

Slide 5

What Can HA Clustering Do For You?
It cannot achieve 100% availability – nothing can.
HA clustering is designed to recover from single faults
It can make your outages short – from about a second to a few minutes
It is like a magician's (illusionist's) trick: when it goes well, the hand is quicker than the eye; when it goes not so well, it can be reasonably visible
A good HA clustering system adds a "9" to your base availability: 99->99.9, 99.9->99.99, 99.99->99.999, etc.
Complexity is the enemy of reliability!

Slide 6

High-Availability Workload Failover

Slide 7

Lies, Damn Lies, and Statistics – Counting nines
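
As a back-of-the-envelope guide to counting nines (computed as (1 - availability) x one year): 99% availability allows roughly 3.7 days of downtime per year; 99.9%, roughly 8.8 hours; 99.99%, roughly 53 minutes; 99.999%, roughly 5.3 minutes.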

Slide 8

How is HA Clustering Different from Disaster Recovery?
HA: failover is cheap; failover times are measured in seconds; reliable inter-node communication
DR: failover is expensive; failover times are typically measured in hours; unreliable inter-node communication is expected
2.0.7 doesn't support DR well, but 2.0.8 or so will...

Slide 9

Single Points of Failure (SPOFs)
A single point of failure is a component whose failure will cause near-immediate failure of an entire system or service
Good HA design eliminates single points of failure

Slide 10

Non-Obvious SPOFs
Replication links are sometimes single points of failure
The system may then fail when another failure occurs
Some disk controllers have SPOFs inside them which aren't evident without schematics
Redundant links buried in the same wire run share a common SPOF
Non-obvious SPOFs can require deep expertise to spot

Slide 11

The " Three R\'s " of High-Availability R edundancy R edundancy R edundancy If this sounds repetitive, that is most likely proper... ;- ) Most SPOFs are killed by excess HA Clustering is a decent method for giving and overseeing repetition

Slide 12

Redundant Communications
Intra-cluster communication is critical to HA system operation
Most HA clustering systems provide mechanisms for redundant internal communication – for heartbeats, etc.
External communication is usually essential to delivering the service
External communication redundancy is usually accomplished through routing tricks
Having an expert in BGP or OSPF is a help

Slide 13

Fencing
Fencing guarantees resource integrity in certain difficult cases
Common methods:
FibreChannel switch lockouts
SCSI reserve/release (hard to make reliable)
Self-fencing (like IBM ServeRAID)
STONITH – Shoot The Other Node In The Head
Linux-HA supports the last two methods
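
As a quick way to explore fencing support (assuming the heartbeat-stonith package described later is installed), the stonith command-line tool can list the STONITH device plugins your build knows about:

  stonith -L

The exact plugin list varies by release.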

Slide 14

Data Sharing – None
Strangely enough, some HA configurations don't need any formal disk data sharing:
Firewalls
Load balancers
(Caching) proxy servers
Static web servers whose content is copied from a single source

Slide 15

Data Sharing – Replication
Some applications provide their own replication – DNS, DHCP, LDAP, DB2, etc.
Linux has excellent disk replication methods available – DRBD is my favorite
DRBD-based HA clusters are surprisingly cheap
Some situations can live with less "exact" replication techniques – rsync, etc. (sketched below)
Generally does not support parallel access
Fencing usually required
EXTREMELY cost-effective
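
A minimal sketch of the "less exact" end of that spectrum (the host name and paths are illustrative, not from the slides):

  # push static content to the standby node; -a preserves permissions and
  # timestamps, --delete removes files that vanished from the master copy
  rsync -a --delete /srv/www/ standby:/srv/www/

This suits read-mostly data such as static web content; for continuously changing data, block-level replication like DRBD is the better fit.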

Slide 17

Data Sharing – ServeRAID
IBM ServeRAID disk is self-fencing
This helps integrity in failover situations
It also makes cluster filesystems, etc. impossible – no Oracle RAC, no GPFS, etc.
ServeRAID failover requires a script to perform volume handover
Linux-HA provides such a script in open source
Linux-HA is ServerProven with ServeRAID

Slide 18

Data Sharing – FibreChannel
The most capable data sharing mechanism
Allows for failover mode
Allows for true parallel access – Oracle RAC, cluster filesystems, etc.
Fencing is always required with FibreChannel

Slide 20

Data Sharing – Back-End
Network Attached Storage can act as a data sharing method
Existing back-end databases can also act as a data sharing mechanism
Both make reliable and redundant data sharing Somebody Else's Problem (SEP)
If they've done a good job, you can benefit from it
Beware SPOFs in your local network

Slide 21

Linux-HA Background
The oldest and best-known open-community HA project – providing sophisticated failover and restart capabilities for Linux (and other OSes)
In existence since 1998; ~30k mission-critical clusters in production since 1999
Active, open development community led by IBM and Novell
Wide variety of industries and applications supported
Shipped with most Linux distributions (all but Red Hat)
No special hardware requirements; no kernel dependencies, all user space
All releases tested by automated test suites

Slide 22

Linux-HA Capabilities
Supports n-node clusters – where "n" <= something like 16
Can use serial, UDP bcast, mcast, ucast communication
Fails over on node failure, or on service failure
Fails over on loss of IP connectivity, or on arbitrary criteria
Active/Passive or full Active/Active
Built-in resource monitoring
Support for the OCF resource standard
Sophisticated dependency model with rich constraint support (resources, groups, incarnations, master/slave) (required for SAP)
XML-based resource configuration (see the sketch after this list)
Configuration and monitoring GUI
Support for the OCFS cluster filesystem
Multi-state (master/slave) resource support
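
To give a feel for the XML-based resource configuration, here is a rough sketch of a single IP-address resource in the Release 2 CIB format (the IDs, address value, and exact attribute nesting are illustrative – consult the DTD shipped with your release):

  <primitive id="ip_1" class="ocf" provider="heartbeat" type="IPaddr">
    <instance_attributes id="ip_1_attrs">
      <attributes>
        <nvpair id="ip_1_ip" name="ip" value="192.168.1.10"/>
      </attributes>
    </instance_attributes>
  </primitive>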

Slide 23

Some Linux-HA Terminology
Node – a computer (real or virtual) which is part of the cluster and runs our cluster software stack
Resource – something we manage – a service, an IP address, a disk drive, or whatever. If we manage it and it's not a node, it's a resource
Resource Agent – a script which acts as a proxy to control a resource. Most are closely modeled after standard system init scripts
DC – Designated Coordinator – the "master node" in the cluster
STONITH – acronym for Shoot The Other Node In The Head – a method of fencing out misbehaving nodes by resetting them
Partitioned cluster or Split-Brain – a condition where the cluster is split into two or more pieces which don't know about each other, due to hardware or software failure. Kept from doing BadThings by STONITH
Quorum – normally granted to at most one partition in a cluster, to keep split-brain from causing damage. Typically determined by a voting protocol
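
(For example, under a simple majority-vote quorum protocol, a partition of a three-node cluster must contain at least two nodes to hold quorum, so two halves of a partitioned cluster can never both have it.)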

Slide 24

Key Linux-HA Processes
CRM – Cluster Resource Manager – the main management entity in the cluster
CIB – the Cluster Information Base – keeper of information about resources and nodes; also used to refer to the information managed by the CIB process. The CIB is XML-based
PE – Policy Engine – determines what should be done given the current policy in effect – creates a graph for the TE containing the things that need to be done to bring the cluster back in line with policy (runs only on the DC)
TE – Transition Engine – carries out the directives created by the PE, by way of its graph (runs only on the DC)
CCM – Consensus Cluster Membership – determines who is in the cluster and who is not; a sort of gatekeeper for cluster nodes
LRM – Local Resource Manager – low-level process that does everything that needs doing – not cluster-aware, no knowledge of policy, ultimately driven by the TE (through the various CRM processes)
stonithd – daemon that carries out STONITH directives
heartbeat – low-level initialization and communication module

Slide 25

Linux-HA Release 2 Architecture

Slide 26

Compiling and Installing Linux-HA from Source via RPM or .deb
Grab a recent stable tarball >= 2.0.7 from: http://linux-ha.org/download/index.html
Untar it with: tar xzf heartbeat-2.0.7.tar.gz
cd heartbeat-2.0.7
./ConfigureMe package
rpm --install full-RPM-pathnames
./ConfigureMe package produces packages appropriate to the current environment (including Debian, Solaris, FreeBSD, etc.)
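
Put together, a minimal sketch of that build-and-install sequence (the 2.0.7 version string is an assumption – use whatever tarball you downloaded):

  tar xzf heartbeat-2.0.7.tar.gz    # extract the source tarball
  cd heartbeat-2.0.7
  ./ConfigureMe package             # builds packages suited to this system
  rpm --install full-RPM-pathnames  # install every package the build produced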

Slide 27

Pre-built Packages
The Linux-HA download site includes SUSE-compatible packages
Debian includes heartbeat packages – for Sid and Sarge
Fedora users can use yum to get packages: $ sudo yum install heartbeat
RHEL-compatible versions are available from CentOS:
http://dev.centos.org/centos/4/testing/i386/RPMS/
http://dev.centos.org/centos/4/testing/x86_64/RPMS/

Slide 28

RPM Package Names
heartbeat-pils – plugin loading system
heartbeat-stonith – STONITH libraries and binaries
heartbeat – main heartbeat package
heartbeat-ldirectord – code for managing Linux Virtual Server installations
The ldirectord subpackage is optional; all other subpackages are mandatory
Fedora dropped the heartbeat- prefix from the pils and stonith subpackages

Slide 29

Installing RPMs
rpm --install heartbeat-2.0.7-1.xxx.rpm \
    heartbeat-pils-2.0.7-1.xxx.rpm \
    heartbeat-stonith-2.0.7-1.xxx.rpm
That was simple, wasn't it?

Slide 30

Initial Setup
Create the following files by copying the templates found in your system's documentation directory (/usr/share/doc/heartbeat-<version>) into /etc/ha.d:
ha.cf -> /etc/ha.d/ha.cf
authkeys -> /etc/ha.d/authkeys
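
A minimal sketch of that copy step (the 2.0.7 documentation path is an assumption – substitute your installed version):

  cp /usr/share/doc/heartbeat-2.0.7/ha.cf /etc/ha.d/ha.cf
  cp /usr/share/doc/heartbeat-2.0.7/authkeys /etc/ha.d/authkeys
  chmod 600 /etc/ha.d/authkeys   # heartbeat will not start if authkeys is readable by others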

Slide 31

Fixing up /etc/ha.d/ha.cf
Add the following directives to your ha.cf file:
node node1 node2 node3 # or enable autojoin
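
For context, a minimal ha.cf sketch built around that directive (the interface name and timing values are illustrative; "crm yes" is what enables the Release 2 cluster resource manager):

  node node1 node2 node3   # cluster members (or enable autojoin instead)
  bcast eth0               # heartbeat over UDP broadcast on eth0
  keepalive 1              # heartbeat interval, in seconds
  deadtime 10              # declare a node dead after 10 seconds of silence
  crm yes                  # use the Release 2 CRM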
