(Frame-mount) The IBM System p5 595 server uses fifth-generation 64-bit IBM POWER5 technology in up to 64-core symmetric multiprocessing (SMP) configurations with IBM Advanced POWER Virtualization. It offers the performance, flexibility, scalability, manageability and security features needed to consolidate mission-critical AIX 5L and Linux applications on a single system, saving on hardware, software, energy and space costs.
Processor cores
16 to 64 IBM POWER5+
Clock rates (Min/Max)
2.1 / 2.3 GHz
System memory (Std/Max)
8GB / 2TB
Internal storage (Std/Max)
146.8GB / 28.1TB (using optional I/O drawers)
Performance (rPerf range)*
IBM System p5 570
(Rack-mount) Easily scale from 2 to 16 cores with the IBM System p5™ 570. Unique IBM modular SMP architecture lets you add more powerful IBM POWER5+™ processing capability exactly when needed.
Processor cores
2, 4, 8, 12, 16 POWER5+
Clock rates (Min/Max)
1.9 GHz / 2.2 GHz
System memory (Std/Max)
2GB / 512GB
Internal disk storage (Std/Max)
73.4GB / 79.2TB (with optional I/O drawers)
Performance (rPerf range)*
IBM System p 570
(Rack-mount) Easily scale from 2- to 16-cores with the IBM System p™ 570. Unique IBM modular SMP architecture lets you add more powerful IBM POWER6™ 3.5, 4.2 or 4.7 GHz processing capability exactly when needed. Innovative RAS features and leadership virtualization capabilities make the p570 well suited as a mid-range application or database server, or for server consolidation. And the flexibility to use both the leading-edge AIX® and Linux® operating systems broadens the application offerings available and increases the ways clients can manage growth, complexity and risk.
Processor cores
2, 4, 8, 12, 16 POWER6
Clock rates (Min/Max)
3.5 GHz / 4.7 GHz
System memory (Std/Max)
2 GB / 768 GB
Internal storage (Std/Max)
73.4 GB / 79.2 TB (with optional I/O drawers)
Performance (rPerf range)*
15.85 / 134.35
AIX Files Modified by HACMP
The following AIX files are modified to support HACMP. They are not distributed with HACMP.
/etc/hosts
The cluster event scripts use the /etc/hosts file for name resolution. All cluster node IP interfaces must be added to this file on each node.
If you delete service IP labels from the cluster configuration using SMIT, we recommend that you remove them from /etc/hosts too. This reduces the possibility of having conflicting entries if the labels are reused with different addresses in a future configuration.
Note that DNS and NIS are disabled during HACMP-related name resolution. This is why HACMP IP addresses must be maintained locally.
HACMP may modify this file to ensure that all nodes have the necessary information in their /etc/hosts file, for proper HACMP operations.
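For illustration only, a minimal /etc/hosts fragment for a two-node cluster might look like the following; the node names, service label and addresses are hypothetical examples, not values supplied by HACMP:
127.0.0.1       loopback localhost
192.168.10.1    nodea_boot
192.168.10.2    nodeb_boot
192.168.20.10   app_svc      # service IP label managed by HACMP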
/etc/inittab
During installation, the following entry is made to the /etc/inittab file to start the Cluster Communication Daemon at boot:
clcomdES:2:once:startsrc -s clcomdES >/dev/console 2>&1
The /etc/inittab file is modified in each of the following cases:
HACMP is configured for IP address takeover
The Start at System Restart option is chosen on the SMIT Start Cluster Services panel
Concurrent Logical Volume Manager (CLVM) is installed with HACMP 5.2.
Modifications to the /etc/inittab File due to IP Address Takeover
The following entry is added to the /etc/inittab file for HACMP network startup with IP address takeover:
harc:2:wait:/usr/es/sbin/cluster/etc/harc.net # HACMP network startup
When IP address takeover is enabled, the system edits /etc/inittab to change the rc.tcpip and inet-dependent entries from run level “2” (the default multi-user level) to run level “a”. Entries that have run level “a” are processed only when the telinit command is executed specifying that specific run level.
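As a sketch of what this change looks like (the exact default fields vary by AIX level, so treat this as illustrative only), the rc.tcpip entry before and after the edit might read:
rctcpip:2:wait:/etc/rc.tcpip > /dev/console 2>&1 # Start TCP/IP daemons (before)
rctcpip:a:wait:/etc/rc.tcpip > /dev/console 2>&1 # Start TCP/IP daemons (after, processed only by telinit a)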
Modifications to the /etc/inittab File due to System Boot
The /etc/inittab file is used by the init process to control the startup of processes at boot time. The following line is added to /etc/inittab during HACMP install:
clcomdES:2:once:startsrc -s clcomdES >/dev/console 2>&1
This entry starts the Cluster Communications Daemon (clcomd) at boot.
The following entry is added to the /etc/inittab file if the Start at system restart option is chosen on the SMIT Start Cluster Services panel:
hacmp:2:wait:/usr/es/sbin/cluster/etc/rc.cluster -boot >/dev/console 2>&1 # Bring up Cluster
When the system boots, the /etc/inittab file calls the /usr/es/sbin/cluster/etc/rc.cluster script to start HACMP.
Because the inet daemons must not be started until after HACMP-controlled interfaces have swapped to their service address, HACMP also adds the following entry to the end of the /etc/inittab file to indicate that /etc/inittab processing has completed:
clinit:a:wait:/bin/touch /usr/es/sbin/cluster/.telinit # HACMP for AIX These must be the last entries in run level "a" in inittab!
pst_clinit:a:wait:/bin/echo Created /usr/es/sbin/cluster/.telinit > /dev/console # HACMP for AIX These must be the last entries in run level "a" in inittab!
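To verify these entries after installation, the AIX lsitab command can be used to display individual /etc/inittab records, for example:
lsitab clcomdES
lsitab hacmp
lsitab clinit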
See Chapter 8: Starting and Stopping Cluster Services, for more information about the files involved in starting and stopping HACMP.
/etc/rc.net
The /etc/rc.net file is called by cfgmgr to configure and start TCP/IP during the boot process. It sets hostname, default gateway and static routes. The following entry is added at the beginning of the file for a node on which IP address takeover is enabled:
# HACMP for AIX
# HACMP for AIX These lines added by HACMP for AIX software
[ "$1" = "-boot" ] && shift || { ifconfig lo0 127.0.0.1 up; exit 0; } # HACMP for AIX
# HACMP for AIX
The entry prevents cfgmgr from reconfiguring boot and service addresses while HACMP is running.
/etc/services
The /etc/services file defines the sockets and protocols used for network services on a system. The ports and protocols used by the HACMP components are defined here.
#clinfo_deadman 6176/tcp
#clm_keepalive 6255/udp
#clm_pts 6200/tcp
#clsmuxpd 6270/tcp
#clm_lkm 6150/tcp
#clm_smux 6175/tcp
#godm 6177/tcp
#topsvcs 6178/udp
#grpsvcs 6179/udp
#emsvcs 6180/udp
#clver 6190/tcp
#clcomd 6191/tcp
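To confirm which of these ports are defined and listening on a node, the service definitions and active sockets can be checked, for example (clcomd on port 6191 is used here simply because it appears in the list above):
egrep 'clcomd|clsmuxpd|topsvcs|grpsvcs|emsvcs' /etc/services
netstat -an | grep 6191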
/etc/snmpd.conf
Note: The version of snmpd.conf depends on whether you are using AIX 5L v.5.1 or v.5.2. The default version for v.5.2 is snmpdv3.conf.
The SNMP daemon reads the /etc/snmpd.conf configuration file when it starts up and when a refresh or kill -1 signal is issued. This file specifies the community names and associated access privileges and views, hosts for trap notification, logging attributes, snmpd-specific parameter configurations, and SMUX configurations for the snmpd. The HACMP installation process adds the clsmuxpd password to this file. The following entry is added to the end of the file, to include the HACMP MIB managed by the clsmuxpd:
smux 1.3.6.1.4.1.2.3.1.2.1.5 "clsmuxpd_password" # HACMP clsmuxpd
HACMP supports SNMP Community Names other than "public." If the default SNMP Community Name in /etc/snmpd.conf has been changed to something other than "public," HACMP will still function correctly. The SNMP Community Name used by HACMP is the first name found that is not "private" or "system" in the output of the lssrc -ls snmpd command.
The Clinfo service obtains the SNMP Community Name in the same manner. Clinfo supports the -c option for specifying the SNMP Community Name, but its use is not required. Using the -c option is considered a security risk, because running a ps command could reveal the SNMP Community Name. If it is important to keep the SNMP Community Name protected, change the permissions on /tmp/hacmp.out, /etc/snmpd.conf, /smit.log and /usr/tmp/snmpd.log so that they are not world-readable.
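For example, the community name currently in use and the file permissions can be checked and tightened with commands along these lines (a sketch; adjust the file list to what actually exists on your nodes):
lssrc -ls snmpd | grep -i community
chmod o-r /tmp/hacmp.out /etc/snmpd.conf /smit.log /usr/tmp/snmpd.log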
/etc/snmpd.peers
The /etc/snmpd.peers file configures snmpd SMUX peers. The HACMP install process adds the following entry to include the clsmuxpd:
clsmuxpd 1.3.6.1.4.1.2.3.1.2.1.5 "clsmuxpd_password" # HACMP clsmuxpd
/etc/syslog.conf
The /etc/syslog.conf file is used to control output of the syslogd daemon, which logs system messages. During the install process HACMP adds entries to this file that direct the output from HACMP-related problems to certain files.
# example:
# "mail messages, at debug or higher, go to Log file. File must exist."
# "all facilities, at debug and higher, go to console"
# "all facilities, at crit or higher, go to all users"
# mail.debug /usr/spool/mqueue/syslog
# *.debug /dev/console
# *.crit *
# HACMP Critical Messages from HACMP
local0.crit /dev/console
# HACMP Informational Messages from HACMP
local0.info /usr/es/adm/cluster.log
# HACMP Messages from Cluster Scripts
user.notice /usr/es/adm/cluster.log
The /etc/syslog.conf file should be identical on all cluster nodes.
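After editing /etc/syslog.conf and copying it to the other cluster nodes, syslogd must reread the file; on AIX this is normally done through the System Resource Controller, for example:
refresh -s syslogd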
/etc/trcfmt
The /etc/trcfmt file is the template file for the system trace logging and report utility, trcrpt. The install process adds HACMP tracing to the trace format file. HACMP tracing applies to the following daemons: clstrmgr, clinfo, and clsmuxpd.
/var/spool/cron/crontabs/root
The /var/spool/cron/crontabs/root file contains commands needed for basic system control. The install process adds HACMP logfile rotation to the file.
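To review the log-rotation entries added for root, the crontab can be listed directly, for example (assuming the entries are tagged with an HACMP comment, as is typical):
crontab -l | grep -i hacmp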
IBM System p 570 with POWER 6
* Advanced IBM POWER6™ processor cores for enhanced performance and reliability
* Building block architecture delivers flexible scalability and modular growth
* Advanced virtualization features facilitate highly efficient systems utilization
* Enhanced RAS features enable improved application availability

The IBM POWER6 processor-based System p™ 570 mid-range server delivers outstanding price/performance, mainframe-inspired reliability and availability features, flexible capacity upgrades and innovative virtualization technologies. This powerful 19-inch rack-mount system, which can handle up to 16 POWER6 cores, can be used for database and application serving, as well as server consolidation. The modular p570 is designed to continue the tradition of its predecessor, the IBM POWER5+™ processor-based System p5™ 570 server, for resource optimization, secure and dependable performance and the flexibility to change with business needs. Clients have the ability to upgrade their current p5-570 servers and know that their investment in IBM Power Architecture™ technology has again been rewarded.

The p570 is the first server designed with POWER6 processors, resulting in performance and price/performance advantages while ushering in a new era in the virtualization and availability of UNIX® and Linux® data centers. POWER6 processors can run 64-bit applications, while concurrently supporting 32-bit applications to enhance flexibility. They feature simultaneous multithreading,1 allowing two application "threads" to be run at the same time, which can significantly reduce the time to complete tasks.

The p570 system is more than an evolution of technology wrapped into a familiar package; it is the result of "thinking outside the box." IBM's modular symmetric multiprocessor (SMP) architecture means that the system is constructed using 4-core building blocks. This design allows clients to start with what they need and grow by adding additional building blocks, all without disruption to the base system.2 Optional Capacity on Demand features allow the activation of dormant processor power for times as short as one minute.
Clients may start small and grow with systems designed for continuous application availability. Specifically, the System p 570 server provides:

Common features
* 19-inch rack-mount packaging
* 2- to 16-core SMP design with building block architecture
* 64-bit 3.5, 4.2 or 4.7 GHz POWER6 processor cores
* Mainframe-inspired RAS features
* Dynamic LPAR support
* Advanced POWER Virtualization1 (option)
o IBM Micro-Partitioning™ (up to 160 micro-partitions)
o Shared processor pool
o Virtual I/O Server
o Partition Mobility2
* Up to 32 optional I/O drawers
* IBM HACMP™ software support for near continuous operation*
* Supported by AIX 5L (V5.2 or later) and Linux® distributions from Red Hat (RHEL 4 Update 5 or later) and SUSE Linux (SLES 10 SP1 or later) operating systems

Hardware summary
* 4U 19-inch rack-mount packaging
* One to four building blocks
* Two, four, eight, 12 or 16 3.5 GHz, 4.2 GHz or 4.7 GHz 64-bit POWER6 processor cores
* L2 cache: 8 MB to 64 MB (2- to 16-core)
* L3 cache: 32 MB to 256 MB (2- to 16-core)
* 2 GB to 192 GB of 667 MHz buffered DDR2 or 16 GB to 384 GB of 533 MHz buffered DDR2 or 32 GB to 768 GB of 400 MHz buffered DDR2 memory3
* Four hot-plug, blind-swap PCI Express 8x and two hot-plug, blind-swap PCI-X DDR adapter slots per building block
* Six hot-swappable SAS disk bays per building block provide up to 7.2 TB of internal disk storage
* Optional I/O drawers may add up to an additional 188 PCI-X slots and up to 240 disk bays (72 TB additional)4
* One SAS disk controller per building block (internal)
* One integrated dual-port Gigabit Ethernet per building block standard; one quad-port Gigabit Ethernet per building block available as optional upgrade; one dual-port 10 Gigabit Ethernet per building block available as optional upgrade
* Two GX I/O expansion adapter slots
* One dual-port USB per building block
* Two HMC ports (maximum of two), two SPCN ports per building block
* One optional hot-plug media bay per building block
* Redundant service processor for multiple building block systems2
IBM System Cluster 1350
Reduced time to deployment

IBM HPC clustering offers significant price/performance advantages for many high-performance workloads by harnessing the advantages of low cost servers plus innovative, easily available open source software.

Today, some businesses are building their own Linux and Microsoft clusters using commodity hardware, standard interconnects and networking technology, open source software, and in-house or third-party applications. Despite the apparent cost advantages offered by these systems, the expense and complexity of assembling, integrating, testing and managing these clusters from disparate, piece-part components often outweigh any benefits gained.

IBM has designed the IBM System Cluster 1350 to help address these challenges. Now clients can benefit from IBM's extensive experience with HPC to help minimize this complexity and risk. Using advanced Intel® Xeon®, AMD Opteron™ and IBM PowerPC® processor-based server nodes, proven cluster management software and optional high-speed interconnects, the Cluster 1350 offers the best of IBM and third-party technology. As a result, clients can speed up installation of an HPC cluster, simplify its management, and reduce mean time to payback.

The Cluster 1350 is designed to be an ideal solution for a broad range of application environments, including industrial design and manufacturing, financial services, life sciences, government and education. These environments typically require excellent price/performance for handling high performance computing (HPC) and business performance computing (BPC) workloads. It is also an excellent choice for applications that require horizontal scaling capabilities, such as Web serving and collaboration.
Common features
Hardware summary
Rack-optimized Intel Xeon dual-core and quad-core and AMD Opteron processor-based servers
Intel Xeon, AMD and PowerPC processor-based blades
Optional high capacity IBM System Storage™ DS3200, DS3400, DS4700, DS4800 and EXP3000 Storage Servers and IBM System Storage EXP 810 Storage Expansion
Industry-standard Gigabit Ethernet cluster interconnect
Optional high-performance Myrinet-2000 and Myricom 10g cluster interconnect
Optional Cisco, Voltaire, Force10 and PathScale InfiniBand cluster interconnects
Clearspeed Floating Point Accelerator
Terminal server and KVM switch
Space-saving flat panel monitor and keyboard
Runs with RHEL 4 or SLES 10 Linux operating systems or Windows Compute Cluster Server
Robust cluster systems management and scalable parallel file system software
Hardware installed and integrated in 25U or 42U Enterprise racks
Scales up to 1,024 cluster nodes (larger systems and additional configurations available—contact your IBM representative or IBM Business Partner)
Optional Linux cluster installation and support services from IBM Global Services or an authorized partner or distributor
Clients must obtain the version of the Linux operating system specified by IBM from IBM, the Linux Distributor or an authorized reseller
IBM System Cluster 1600
IBM System Cluster 1600 systems are comprised of IBM POWER5™ and POWER5+™ symmetric multiprocessing (SMP) servers running AIX 5L™ or Linux®. Cluster 1600 is a highly scalable cluster solution for large-scale computational modeling and analysis, large databases and business intelligence applications, and cost-effective datacenter, server and workload consolidation. Cluster 1600 systems can be deployed on Ethernet networks, InfiniBand networks, or with the IBM High Performance Switch and are typically managed with Cluster Systems Management (CSM) software, a comprehensive tool designed specifically to streamline initial deployment and ongoing management of cluster systems.

Common features
· Highly scalable AIX 5L or Linux cluster solutions for large-scale computational modeling, large databases and cost-effective data center, server and workload consolidation
· Cluster Systems Management (CSM) software for comprehensive, flexible deployment and ongoing management
· Cluster interconnect options: industry standard 1/10 Gb Ethernet (AIX 5L or Linux); IBM High Performance Switch (AIX 5L and CSM); SP Switch2 (AIX 5L and PSSP); 4x/12x InfiniBand (AIX 5L or SLES 9); or Myrinet (Linux)
· Operating system options: AIX 5L Version 5.2 or 5.3, SUSE Linux Enterprise Server 8 or 9, Red Hat Enterprise Linux 4
· Complete software suite for creating, tuning and running parallel applications: Engineering & Scientific Subroutine Library (ESSL), Parallel ESSL, Parallel Environment, XL Fortran, VisualAge C++
· High-performance, high availability, highly scalable cluster file system: General Parallel File System (GPFS)
· Job scheduling software to optimize resource utilization and throughput: LoadLeveler®
· High availability software for continuous access to data and applications: High Availability Cluster Multiprocessing (HACMP™)

Hardware summary
· Mix and match IBM POWER5 and POWER5+ servers:
· IBM System p5™ 595, 590, 575, 570, 560Q, 550Q, 550, 520Q, 520, 510Q, 510, 505Q and 505
· IBM eServer™ p5 595, 590, 575, 570, 550, 520 and 510
· Up to 128 servers or LPARs (AIX 5L or Linux operating system images) per cluster depending on hardware; higher scalability by special order
Storage management concepts
The fundamental concepts used by LVM are physical volumes, volume groups, physical partitions, logical volumes, logical partitions, file systems, and raw devices. Some of their characteristics are presented as follows:

- Each individual disk drive is a named physical volume (PV) and has a name such as hdisk0 or hdisk1.
- One or more PVs can make up a volume group (VG). A physical volume can belong to a maximum of one VG.
- You cannot assign a fraction of a PV to one VG. A physical volume is assigned entirely to a volume group.
- Physical volumes can be assigned to the same volume group even though they are of different types, such as SCSI or SSA.
- Storage space from physical volumes is divided into physical partitions (PPs). The size of the physical partitions is identical on all disks belonging to the same VG.
- Within each volume group, one or more logical volumes (LVs) can be defined. Data stored on logical volumes appears to be contiguous from the user point of view, but can be spread across different physical volumes from the same volume group.
- Logical volumes consist of one or more logical partitions (LPs). Each logical partition has at least one corresponding physical partition. A logical partition and a physical partition always have the same size. You can have up to three copies of the data located on different physical partitions. Usually, physical partitions storing identical data are located on different physical disks for redundancy purposes.
- Data from a logical volume can be stored in an organized manner, having the form of files located in directories. This structured and hierarchical form of organization is named a file system.
- Data from a logical volume can also be seen as a sequential string of bytes. This type of logical volume is named a raw logical volume. It is the responsibility of the application that uses this data to access and interpret it correctly.
- The volume group descriptor area (VGDA) is an area on the disk that contains information pertinent to the volume group that the physical volume belongs to. It also includes information about the properties and status of all physical and logical volumes that are part of the volume group. The information from the VGDA is used and updated by LVM commands. There is at least one VGDA per physical volume. Information from the VGDAs of all disks that are part of the same volume group must be identical. The VGDA internal architecture and location on the disk depend on the type of the volume group (original, big, or scalable).
- The volume group status area (VGSA) is used to describe the state of all physical partitions from all physical volumes within a volume group. The VGSA indicates if a physical partition contains accurate or stale information. The VGSA is used for monitoring and maintaining data copy synchronization. The VGSA is essentially a bitmap, and its architecture and location on the disk depend on the type of the volume group.
- A logical volume control block (LVCB) contains important information about the logical volume, such as the number of logical partitions or the disk allocation policy. Its architecture and location on the disk depend on the type of the volume group it belongs to. For standard volume groups, the LVCB resides on the first block of user data within the LV. For big volume groups, there is additional LVCB information in the VGDA on the disk. For scalable volume groups, all relevant logical volume control information is kept in the VGDA as part of the LVCB information area and the LV entry area.
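As a brief illustration of these objects, the following AIX commands create a volume group from two disks, define a mirrored logical volume in it, and display the resulting structures (the disk, volume group and logical volume names are examples only):
mkvg -y datavg hdisk2 hdisk3    # create volume group datavg from two physical volumes
mklv -y datalv -c 2 datavg 10   # logical volume with 10 logical partitions and 2 copies
lsvg -l datavg                  # logical volumes defined in the volume group
lspv -l hdisk2                  # logical volumes using this physical volume
lslv datalv                     # logical volume attributes (copies, allocation policy)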
System management
Cluster Systems Management (CSM) for AIX and Linux

CSM is designed to minimize the cost and complexity of administering clustered and partitioned systems by enabling comprehensive management and monitoring of the entire environment from a single point of control. CSM provides:
Software distribution, installation and update (operating system and applications)
Comprehensive system monitoring with customizable automated responses
Distributed command execution
Hardware control
Diagnostic tools
Management by group
Both a graphical interface and a fully scriptable command line interface

In addition to providing all the key functions for administration and maintenance of distributed systems, CSM is designed to deliver the parallel execution required to manage clustered computing environments effectively. CSM supports homogeneous or mixed environments of IBM servers running AIX or Linux.
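For example, the distributed shell (dsh) provided with CSM can run a command in parallel across all managed nodes or a named node group (the node group AIXNodes below is a hypothetical example):
dsh -a date                    # run 'date' on every managed node
dsh -N AIXNodes "oslevel -r"   # run only on the members of a node group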
Parallel System Support Programs (PSSP) for AIX

PSSP is the systems management predecessor to Cluster Systems Management (CSM) and does not support IBM System p servers or AIX 5L™ V5.3 or above. New cluster deployments should use CSM, and existing PSSP clients with software maintenance will be transitioned to CSM at no charge.
AIX Control Book Creation
List the licensed program products: lslpp -L
List the defined devices: lsdev -C -H
List the disk drives on the system: lsdev -Cc disk
List the memory on the system (MCA): lsdev -Cc memory
List the memory on the system (PCI): lsattr -El sys0 -a realmem; lsattr -El mem0
List system resources: lsattr -EHl sys0
List the VPD (Vital Product Data): lscfg -v
Document the tty setup: lscfg or smit screen capture (F8)
Document the print queues: qchk -A
Document disk Physical Volumes (PVs): lspv
Document Logical Volumes (LVs): lslv
Document Volume Groups (long list): lsvg -l vgname
Document Physical Volumes (long list): lspv -l pvname
Document File Systems: lsfs fsname; /etc/filesystems
Document disk allocation: df
Document mounted file systems: mount
Document paging space (70 - 30 rule): lsps -a
Document paging space activation: /etc/swapspaces
Document users on the system: /etc/passwd; lsuser -a id home ALL
Document users attributes: /etc/security/user
Document users limits: /etc/security/limits
Document users environments: /etc/security/environ
Document login settings (login herald): /etc/security/login.cfg
Document valid group attributes: /etc/group; lsgroup ALL
Document system wide profile: /etc/profile
Document system wide environment: /etc/environment
Document cron jobs: /var/spool/cron/crontabs/*
Document skulker changes if used: /usr/sbin/skulker
Document system startup file: /etc/inittab
Document the hostnames: /etc/hosts
Document network printing: /etc/hosts.lpd
Document remote login host authority: /etc/hosts
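The list above can be scripted; the following ksh sketch gathers a subset of these outputs into a dated directory (the directory name and file names are arbitrary examples, not part of any IBM-supplied tool):
#!/usr/bin/ksh
# Example "control book" snapshot script: collect basic AIX configuration output.
DIR=/tmp/controlbook.$(date +%Y%m%d)
mkdir -p $DIR
lslpp -L         > $DIR/lslpp.out      # licensed program products
lsdev -C -H      > $DIR/lsdev.out      # defined devices
lsattr -EHl sys0 > $DIR/sys0.out       # system resources
lscfg -v         > $DIR/vpd.out        # vital product data
lspv             > $DIR/lspv.out       # physical volumes
lsps -a          > $DIR/lsps.out       # paging space
df -k            > $DIR/df.out         # disk allocation
mount            > $DIR/mount.out      # mounted file systems
cp /etc/inittab /etc/hosts /etc/filesystems $DIR/   # key configuration files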
What is Hot Spare
What is an LVM hot spare?
A hot spare is a disk or group of disks used to replace a failing disk. LVM marks a physical volume missing due to write failures. It then starts the migration of data to the hot spare disk.

Minimum hot spare requirements
The following is a list of minimal hot sparing requirements enforced by the operating system.
- Spares are allocated and used by volume group.
- Logical volumes must be mirrored.
- All logical partitions on hot spare disks must be unallocated.
- Hot spare disks must have at least equal capacity to the smallest disk already in the volume group. Good practice dictates having enough hot spares to cover your largest mirrored disk.

Hot spare policy
The chpv and the chvg commands are enhanced with a new -h argument. This allows you to designate disks as hot spares in a volume group and to specify a policy to be used in the case of failing disks. The following four values are valid for the hot spare policy argument (-h):
- y (lower case): Automatically migrates partitions from one failing disk to one spare disk. From the pool of hot spare disks, the smallest one which is big enough to substitute for the failing disk will be used.
- Y (upper case): Automatically migrates partitions from a failing disk, but might use the complete pool of hot spare disks.
- n: No automatic migration will take place. This is the default value for a volume group.
- r: Removes all disks from the pool of hot spare disks for this volume group.

Synchronization policy
There is a new -s argument for the chvg command that is used to specify synchronization characteristics; two values are valid for the synchronization argument (-s).

Examples
The following command marks hdisk1 as a hot spare disk:
# chpv -hy hdisk1
The following command sets an automatic migration policy which uses the smallest hot spare that is large enough to replace the failing disk, and automatically tries to synchronize stale partitions:
# chvg -hy -sy testvg
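To confirm the resulting settings, the volume group and disk can be queried afterwards, for example (on recent AIX levels the lsvg output includes HOT SPARE and AUTO SYNC fields reflecting the chvg settings):
lsvg testvg      # volume group attributes, including hot spare and auto sync policy
lspv hdisk1      # physical volume state and partition allocation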