To install the Device Manager agent

cd Agent_for_RAID, run install.sh
cd Agent_for_Server_System, run install.sh
cd HDVM/Solaris, run install.sh
Would you like to setup the Device Manager agent? (Y)es or (N)o. (default:Y)
y
Do you want to specify the Device Manager server information? (Y)es or (N)o.(default:Y)
 
Enter the IP address or hostname of the Device Manager server.(default:255.255.255.255)
hcs.domain.com
Enter the port number of the Device Manager server.(default:2001)
 
If you want to use the default value (HaUser), leave this field blank and press the Enter key.
Enter the user ID for logging on to the Device Manager server.
 
If you want to use the default value, leave this field blank and press the Enter key.
Enter the password for logging on to the Device Manager server.
haset
Connecting to the server...
The connection to the server has been verified.
 
Do you want to set the execution period of the HiScan command? (Y)es or (N)o.(default:Y)
 
 
Enter execution period:(H)ourly,(D)aily,(W)eekly (default:D)
H
 
Do you want to set the default time (*:30) to the execution time? (Y)es or (N)o. (default:N)
y
 
This will set the HiScan automatic execution schedule.
Are you sure? (Y)es or (N)o. (default:Y)
(H)ourly
*:30
 
 
Configuration of the HiScan automatic execution schedule has completed.
 
Do you want to specify the RAID Manager installation directory? (Y)es or (N)o.(default:Y)
 
Enter the RAID Manager installation directory.(default:/HORCM)
 
Enter (Y)es when a single host centrally manages the creation, status change, and deletion of copy pairs.
Do you want to enable centrally manage pair configuration?  (Y)es or (N)o.(default:N)
y
 
The Device Manager agent setup has completed successfully.
(End of installation and setup)

To start the Device Manager agent

/opt/HDVM/HBaseAgent/bin/hbsasrv start
cd /opt/HDVM/HBaseAgent/bin/
./HiScan -s 10.20.10.68

To configure the agent

# Note: The same process applies to the Agent for UNIX; just use jpcinssetup agtdu -inst cci1
cd /opt/jp1pc/tools
./jpcinssetup agtd -inst cci1
 Storage Model (1:Thunder/AMS/HUS100 or 2:Lightning/USP/USP V/VSP) : 2
 Command Device File Name [/dev/rdsk/c0t0d0s2] : /dev/rdsk/c0t60060E801606AC00000106AC00000006d0s2
 Unassigned Open Volume Monitoring (Y/N) [N]                  : y
 Mainframe Volume Monitoring (Y/N) [N]                  : 
 Store Version            [2.0]                : 
KAVE05080-I The instance environment is now being created. (servicekey=RAID, inst=cci1)
KAVE05081-I The instance environment has been created. (servicekey=RAID, inst=cci1)
 
./jpcnshostname -s hcs
./jpcstart all

# To list all agents
cd /opt/jp1pc/tools
./jpcctrl list "*"
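If an agent stops reporting, the collector services can be cycled; a minimal sketch, assuming the same tools directory as above (jpcstop is the standard JP1/PFM counterpart to jpcstart):
# Restart all JP1/PFM services on this host
cd /opt/jp1pc/tools
./jpcstop all
./jpcstart all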

CCI Reference

To install CCI

cd /usr/lib/
cpio -idmv < /tmp/RMLIB 
cd RMLIB
./rmlibinstall.sh
cd /
cpio -idmv < /tmp/RMHORC 
cd /HORCM
./horcminstall.sh
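To verify the CCI install before going further, raidqry can report the installed RAID Manager version; a minimal sketch, assuming horcminstall.sh placed the binaries in the PATH:
# Display the installed RAID Manager/CCI version
raidqry -h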

To create a startup script

cat /etc/init.d/horcm
#!/bin/bash
#
# Start/stop script for HORCM instance 1
#
case "$1" in
'start')
        /usr/bin/horcmstart.sh 1
        ;;
'stop')
        /usr/bin/horcmshutdown.sh 1
        ;;
'restart')
        /usr/bin/horcmshutdown.sh 1
        sleep 1
        /usr/bin/horcmstart.sh 1
        ;;
*)
        echo "Usage: $0 { start | stop |restart }"
        exit 1
        ;;
esac
exit 0
ln -s /etc/init.d/horcm /etc/rc2.d/S50horcm
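A quick smoke test of the script; a minimal sketch, assuming HORCM instance 1 is already defined in /etc/horcm1.conf:
# Make the script executable and start instance 1
chmod 744 /etc/init.d/horcm
/etc/init.d/horcm start
# Confirm the horcm1 daemon is running
ps -ef | grep horcm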

To display pair status with hex IDs (-fcex)

# For TC
pairdisplay -g VSP_test1 -fcex -IH10
 
# For SI
pairdisplay -g vsp-9985v_rpt1-si -fcex -IM14
Group   PairVol(L/R) (Port#,TID, LU-M) ,Seq#,LDEV#.P/S,Status,   % ,P-LDEV# M CTG CM EM       E-Seq# E-LDEV#
vsp-9985v_rpt1-si   rpt1-si1(L) (CL3-A-1, 1,   1-0 )67201     b.P-VOL COPY,    4     10b -   -  N  -            -       -
vsp-9985v_rpt1-si   rpt1-si1(R) (CL1-B-1, 4,   6-0 )67201   10b.S-VOL COPY,    4       b -   -  N  V        36011     10b
Note on instance location:
  • -IH0 – DC1
  • -IH1 – DC2
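As an alternative to passing -IH/-IM on every command, the instance (and SI mode) can be set once per shell via environment variables; a minimal sketch using the TC instance from the example above:
# Select HORCM instance 10 for this shell instead of -IH10 on each command
export HORCMINST=10
pairdisplay -g VSP_test1 -fcex
# For SI commands, additionally set MRCF mode
export HORCC_MRCF=1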

To create a new pair

Note: The options below select the instance: -IH[#] for TC and -IM[#] for SI
  • -IH[#] or -ITC[#] Set to HORC mode [and HORCMINST#]
  • -IM[#] or -ISI[#] Set to MRCF mode [and HORCMINST#]
# For SI
paircreate -g vsp-9985v_rpt1-si -vl -IM14 11
 
# For TC
paircreate -g  VSP_test1 -vl -f async -jp 01 -js 01 -IH10 1
 
# For COW
paircreate -g vsp-rpt1-cow -vl -pid 2 -IM14
Options are
  • -v option
    • -vl operates on the local system
    • -vr operates on the remote system
  • -f async – use async instead of sync
  • -jp & -js – the local & remote journal groups; one journal group per consistency group
  • -IM – SI instance number
  • -IH – TC instance number
  • For COW
    • -pid is the COW-SSD pool ID
Note: Make sure to set inflow control to No on the journal group; what it does is explained in the HUR notes below.
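Once a pair is created, the initial copy can be waited on instead of polling pairdisplay by hand; a minimal sketch with pairevtwait, reusing the SI group from above:
# Block until the pair reaches PAIR state, or give up after the -t timeout
pairevtwait -g vsp-9985v_rpt1-si -s pair -t 600 -IM14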

Notes on HUR

CWP: The link between sites and the amount of data being replicated can cause a potential performance impact. If huge amounts of data are being replicated, causing congestion, "Cache Write Pending" (CWP) may go high. High sustained CWP over 33% will cause host I/O shutdown, and the data will be destaged to back-end disk as a priority. The pairs may even be suspended, as there may not be enough cache to cope with the load. So the first thing I would check is whether there were any spikes at the time of replication, using Tuning Manager.

HUR Inflow Control: You need to set this option carefully based on the environment. Below are the definitions.

The HUR "Inflow Control" parameter allows a reduction in the number of I/Os the storage subsystem receives by delaying the response to host I/Os (in other words, whether to slow or delay the response to hosts). This function can restrict any data overflow from the journal volume.

Yes indicates inflow will be restricted.
No indicates inflow will not be restricted.

Having inflow control set to Yes can effectively cause perceived performance issues on the host.

HUR Speed of Line setting:

When using Universal Replicator software with low bandwidth between the production and recovery sites, you may have to change the Speed of Line parameter to a lower value to avoid potential pair suspension. The value should be set to be equal to or less than the available telecommunication bandwidth between sites.
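Journal usage is worth checking before and after tuning either option; a minimal sketch, assuming the TC/HUR instance number from the earlier examples:
# Show journal status/usage for the instance (Q-CNT shows data queued in the journal)
raidcom get journal -IH10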

To split a pair

To split temporarily so the S-VOL LUN can be used
# For SI
pairsplit -g 9985v-test-si -IM14
 
# For TC
pairsplit -g 9985v-test-tc -IH10
To split permanently (delete the pair)
# For SI
pairsplit -S -g 9985v-test-si -IM14
 
# For TC
pairsplit -S -g 9985v-test-tc -IH10
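After a split, the state can be confirmed before mounting the S-VOL; reusing pairdisplay from above:
# A temporary split should show PSUS/SSUS; after -S the volumes return to SMPL
pairdisplay -g 9985v-test-si -fcex -IM14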

To re-sync the split volume

# For SI
pairresync -g 9985v-test-si -IM14
 
# For TC
pairresync -g 9985v-test-tc -IH10

To switch P-VOL and S-VOL

# For TC
horctakeover -g VSP_test1 -IH1
 
# For SI
horctakeover -g si_test1 -IM1
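To confirm the swap took effect, pairdisplay should now show the former S-VOL as the P-VOL; for example:
pairdisplay -g VSP_test1 -fcex -IH1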

To return LUN (hex) status

# Gives LUN numbers in hex, the way it should be
raidscan -p CL3-A -fx
 
raidscan -p CL3-A -CLI

To configure Device Manager on each host

# /opt/HDVM/HBaseAgent/agent/config/server.properties
server.server.serverIPAddress=hcs (change to the non-FQDN hostname)
server.agent.rm.centralizePairConfiguration=enable 
/opt/HDVM/HBaseAgent/bin/hdvmagt_account (to set login / password, HaUser/haset)
#To report in to hcs as an agent
/opt/HDVM/HBaseAgent/bin/HiScan -s hcs
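For the property changes to take effect, the agent service needs to be restarted; a sketch using hbsasrv from above:
/opt/HDVM/HBaseAgent/bin/hbsasrv stop
/opt/HDVM/HBaseAgent/bin/hbsasrv start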

To get a LUN block size

raidcom get ldev -ldev_id 70 -IM3

Server locations

DC1

Server name: cci-srv1
Script locations for regular TC work (split/sync): /HORCM/scripts
Script locations for takeover (reverse): /HORCM/scripts/TAKEOVER

DC2

Server name: cci-srv2
Script locations for regular TC work (split/sync): /HORCM/scripts
Script locations for takeover (reverse): /HORCM/scripts/TAKEOVER