If you use the Virtual Appliance:
- Use the local account lpar2rrd for hosting STOR2RRD on the virtual appliance
- Use /home/stor2rrd/stor2rrd as the product home
Hitachi VSP G/F/E/N, HUS-VM monitoring setups
- REST API, Export Tool: use this on new models which support it; the setup is much easier
- CCI, SNMP, Export Tool: works on all models
- Hitachi Configuration Manager REST API / Export Tool
1. Hitachi VSPG REST API, Export Tool
Supported models
Prerequisites
-
Allow communication from the STOR2RRD host to all Hitachi VSPG storage SVP IPs and nodes on TCP port 1099.
Some additional ports must also be open for the Hitachi Export Tool, such as 51100, 51101 or 51099, depending on the Export Tool version.
Node IPs: enable TCP port 443.
At least the VSP 5500 has no node IPs available; use the SVP IP instead and also allow TCP port 443 to it.
New storage devices do not have an SVP; put the Node1 IP/hostname into the SVP field.
$ perl /home/stor2rrd/stor2rrd/bin/conntest.pl 192.168.1.1 1099
Connection to "192.168.1.1" on port "1099" is ok
$ perl /home/stor2rrd/stor2rrd/bin/conntest.pl 192.168.1.1 443
Connection to "192.168.1.1" on port "443" is ok
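If your Export Tool version also needs one of the additional ports mentioned above (51100, 51101 or 51099), you can check them all in one go. A small sketch using the same conntest.pl helper (replace 192.168.1.1 with your SVP/node address):
$ for port in 1099 443 51100 51101 51099; do perl /home/stor2rrd/stor2rrd/bin/conntest.pl 192.168.1.1 $port; done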
- Storage configuration:
Create a stor2rrd user on the storage with read-only access.
Do not use shell special characters like #!?|$*[]\{}`"'& in the password; use ;:.+-%@ instead.
You can also follow this documentation to fully prepare the storage for monitoring.
Installation of Hitachi Export Tool
It is typically located on a CD that comes packaged with the Service Processor on the HDS USP Array. The Export Tool can also be obtained by contacting HDS support.
(CD location: /ToolsPack/ExportTool)
Hitachi produces a new Export Tool for each firmware release, so unless all of your storage systems run the same firmware version, you will need to obtain the Export Tool version matching each firmware level running at your site.
Find out the firmware release of your storage (like 83-01-28/00).
Export Tool version must match the SVP firmware version.
Install each version of the Export Tool into a separate directory named after the firmware of your storage (just the 6 numbers, e.g. 83-01-28 in this example), as the root user:
# mkdir /opt/hds
# mkdir /opt/hds/83-01-28
# cd /opt/hds/83-01-28
# tar xvf export-tool.tar
# chmod 755 runUnix.sh runUnix.bat # note: only one of these files exists
# chown -R stor2rrd /opt/hds
# chown -R lpar2rrd /opt/hds # do this on the Virtual Appliance, where everything runs under the "lpar2rrd" user
A newer Export Tool version may also work with older storage firmware, as in this example (Export Tool 83-01-28 with storage firmware 73-03-57).
In that case you do not need to install the older Export Tool; just create a symlink:
# cd /opt/hds/
# ls
83-01-28
# ln -s 83-01-28 73-03-57
Test Export Tool 2
$ cd /opt/hds/<firmware level> # example /opt/hds/88-03-23
$ sh ./runUnix.bat show interval -ip <ip controller> -login <user> <password>
Interval : 5 min
show interval command success
The directory /opt/hds is optional; it is configurable in /home/stor2rrd/stor2rrd/etc/stor2rrd.cfg : VSP_CLIDIR=/opt/hds
The HDS Performance Monitor License must exist for each array and monitoring must be enabled.
Storage configuration example
Java 11 is required by the Export Tool for new VSPG 5500 boxes.
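You can quickly check which Java version the stor2rrd user gets on the STOR2RRD host (this assumes java is already in its PATH):
$ java -version   # for the VSP 5500 Export Tool the reported major version should be 11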
Allow monitoring of CU and WWN
Note this configuration option may not be present on all models or firmware levels; you can ignore it if you do not find it on your storage.
- CU
- WWN
Note that monitoring apparently cannot be enabled when the number of WWNs per port exceeds the maximum of 32.
In this case you will not get direct per-host data; host data will instead be aggregated from the attached volumes (which might be misleading when volumes are attached to more than one host).
If you still do not get data, re-enabling monitoring might help.
STOR2RRD storage configuration
2. Hitachi VSPG CCI, SNMP, Export Tool
The program uses 3 Hitachi APIs. You have to install and configure all of them.
- Command Control Interface (CCI)
- Hitachi Export tool
- SNMP API to get Health status
You might also look into the installation procedure described in great detail at
www.sysaix.com.
Note that the Virtual Storage Machines (VSM) feature is not supported by the tool.
Storage configuration
Installation of Hitachi CCI
-
Allow communication from the STOR2RRD host to all Hitachi VSPG storage SVP IPs on TCP ports 1099 and 51100.
Note that new firmware versions might use port 51101 or 51099 instead of 51100.
New storage devices do not have an SVP; put the Node1 IP/hostname into the SVP field.
Test open ports for TCP protocols to SVP IP:
$ perl /home/stor2rrd/stor2rrd/bin/conntest.pl 192.168.1.1 1099
Connection to "192.168.1.1" on port "1099" is ok
$ perl /home/stor2rrd/stor2rrd/bin/conntest.pl 192.168.1.1 51100
Connection to "192.168.1.1" on port "51100" is ok
Allow communication from STOR2RRD host to all storage node IP on UDP 31001
How to test UDP
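A sketch of such a UDP test with the conntest_udp.pl helper shipped with STOR2RRD, using an example node IP and the port from the prerequisite above:
$ perl /home/stor2rrd/stor2rrd/bin/conntest_udp.pl 192.168.1.1 31001
Connection to "192.168.1.1" on port "31001" is ok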
-
Create a stor2rrd user on the storage with read-only access.
Do not use shell special characters like #!?|$*[]\{}`"'& in the password; use ;:.+-%@ instead.
- Obtain CCI installation package from your Hitachi representatives.
-
Install it from the .iso image under the root account
Mount ISO image:
- AIX command:
# loopmount -i /HS042_77.iso -o "-V cdrfs -o ro" -m /mnt
- Linux (Virtual Appliance) command:
# mount -o loop,ro HS042_77.iso /mnt
Make sure the 32-bit libraries are installed; install them if they are not:
# yum -y install glibc.i686
-
Create target directory:
# mkdir /etc/HORCM
# cd /mnt
# ./RMinstsh
-
Install from CD
# mkdir /opt
# cpio -idmu < /dev/XXXX # where XXXX = I/O device with install media
# ln -s /opt/HORCM /HORCM
- Execute the CCI installation command:
-
Verify installation of the proper version using the raidqry command:
# raidqry -h
Model: RAID-Manager/HP-UX
Ver&Rev: 01-29-03/05
Usage: raidqry [options]
-
Assure that everything is executable and writeable by the stor2rrd user.
Use the lpar2rrd user on the Virtual Appliance.
This is a must! Execute the following as the root user:
# touch /HORCM/etc/USE_OLD_IOCT
# chown stor2rrd /HORCM
# chown -R stor2rrd /HORCM/* /HORCM/.uds
# chown -R lpar2rrd /HORCM/* /HORCM/.uds # do this on the Virtual Appliance, where everything runs under the "lpar2rrd" user
# chmod 755 /HORCM /HORCM/usr/bin /HORCM/usr/bin/* /HORCM/log* /HORCM/etc/horcmgr /HORCM/etc/*conf /HORCM/.uds/
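To double-check the result, list the key paths; the owner column should show stor2rrd (or lpar2rrd on the Virtual Appliance):
# ls -ld /HORCM /HORCM/.uds /HORCM/usr/bin /HORCM/etc/horcmgr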
Configuration of CCI
- CCI communication with the storage can be done either via LAN (as described below) or via a command device (a SAN-attached volume from the storage).
When you have many storage systems in place (40+), rather use a command device, as LAN communication might not be reliable enough. CCI command device configuration procedure
- Each storage must have its own config file /etc/horcmXX.conf
- Check that local ports 11001 and 11002 are not in use (nothing is listening on them)
# netstat -an|grep -i listen| egrep "11001|11002"
- storage with controllers IP 192.168.1.1 and 192.168.1.2, conf file /etc/horcm1.conf will use local port 11001 (UDP)
Use the storage node IPs here. The SVP IP must be used later in etc/storage-list.cfg.
# vi /etc/horcm1.conf
HORCM_MON
# ip_address service poll(10ms) timeout(10ms)
localhost 11001 1000 3000
HORCM_CMD
# dev_name dev_name dev_name
\\.\IPCMD-192.168.1.1-31001 \\.\IPCMD-192.168.1.2-31001
- storage with IP 192.168.1.10 and 192.168.1.11, conf file /etc/horcm2.conf
change the localhost port to 11002 (from 11001, which is used above)
# vi /etc/horcm2.conf
HORCM_MON
# ip_address service poll(10ms) timeout(10ms)
localhost 11002 1000 3000
HORCM_CMD
# dev_name dev_name dev_name
\\.\IPCMD-192.168.1.10-31001 \\.\IPCMD-192.168.1.11-31001
- Start it under the stor2rrd account (definitely not under root!). Use the lpar2rrd account on the Virtual Appliance.
This starts HORCM instance 1 (/etc/horcm1.conf):
# su - stor2rrd
$ /HORCM/usr/bin/horcmstart.sh 1
- Start HORCM instances 1 & 2 (/etc/horcm1.conf & /etc/horcm2.conf)
# su - stor2rrd
$ /HORCM/usr/bin/horcmstart.sh 1 2
- Check if they are running
$ ps -ef | grep horcm
stor2rrd 19660912 1 0 Feb 26 - 0:03 horcmd_02
stor2rrd 27590770 1 0 Feb 26 - 0:09 horcmd_01
- Place it into the operating system start/stop scripts
# su - stor2rrd -c "/HORCM/usr/bin/horcmstart.sh 1 2"
# su - stor2rrd -c "/HORCM/usr/bin/horcmshutdown.sh 1 2"
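On Linux (including the Virtual Appliance) one possible way to hook this into system start/stop is a small systemd unit. This is only a sketch: the unit name and options are assumptions, and on the appliance set User=lpar2rrd.
# vi /etc/systemd/system/horcm.service
[Unit]
Description=HORCM instances for STOR2RRD
After=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
User=stor2rrd
ExecStart=/HORCM/usr/bin/horcmstart.sh 1 2
ExecStop=/HORCM/usr/bin/horcmshutdown.sh 1 2

[Install]
WantedBy=multi-user.target

# systemctl enable --now horcm.service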
-
When HORCM does not start:
-
Assure that the filesystem permissions are correct for /HORCM (owned by the stor2rrd user)
-
Check if connections to the storage node IPs are allowed: how to test UDP
Installation of Hitachi Export Tool
It is typically located on a CD that comes packaged with the Service Processor on the HDS USP Array. The Export Tool can also be obtained by contacting HDS support.
(CD location: /ToolsPack/ExportTool)
Hitachi produces a new Export Tool for each firmware release, so unless all of your storage systems run the same firmware version, you will need to obtain the Export Tool version matching each firmware level running at your site.
Find out the firmware release of your storage (like 83-01-28/00) identified by /etc/horcm1.conf (-I1):
Export Tool version must match the SVP firmware version.
Under stor2rrd account!
# su - stor2rrd
$ raidcom -login stor2rrd <password> -I1
$ raidqry -l -I1
No Group Hostname HORCM_ver Uid Serial# Micro_ver Cache(MB)
1 --- localhost 01-35-03-08 0 471234 83-01-28/00 320000
$ raidcom -logout -I1
Install each version of the Export Tool into a separate directory named after the firmware of your storage (just the 6 numbers, e.g. 83-01-28 in this example), as the root user:
# mkdir /opt/hds
# mkdir /opt/hds/83-01-28
# cd /opt/hds/83-01-28
# tar xvf export-tool.tar
# chmod 755 runUnix.sh runUnix.bat # note: only one of these files exists
# chown -R stor2rrd /opt/hds
# chown -R lpar2rrd /opt/hds # do this on the Virtual Appliance, where everything runs under the "lpar2rrd" user
A newer Export Tool version may also work with older storage firmware, as in this example (Export Tool 83-01-28 with storage firmware 73-03-57).
In that case you do not need to install the older Export Tool; just create a symlink:
# cd /opt/hds/
# ls
83-01-28
# ln -s 83-01-28 73-03-57
Test Export Tool 2
$ cd /opt/hds/<firmware level> # example /opt/hds/88-03-23
$ sh ./runUnix.bat show interval -ip <ip controller> -login <user> <password>
Interval : 5 min
show interval command success
The directory /opt/hds is optional; it is configurable in /home/stor2rrd/stor2rrd/etc/stor2rrd.cfg : VSP_CLIDIR=/opt/hds
The HDS Performance Monitor License must exist for each array and monitoring must be enabled.
Storage configuration example
Allow monitoring of CU and WWN
Note this configuration option may not be present on all models or firmware levels; you can ignore it if you do not find it on your storage.
- CU
- WWN
Note that monitoring apparently cannot be enabled when the number of WWNs per port exceeds the maximum of 32.
In this case you will not get direct per-host data; host data will instead be aggregated from the attached volumes (which might be misleading when volumes are attached to more than one host).
If you still do not get data, re-enabling monitoring might help.
Health status
STOR2RRD storage configuration
3. Hitachi Configuration Manager, Export Tool
Hitachi Configuration Manager can be used as a replacement for Hitachi CCI; its configuration is much easier than CCI's.
Alternatively you can use the storage's direct REST API instead of CCI, but that is not available on older systems like the VSP G1000 or VSP G200/400, which have no REST API support.
Hitachi Configuration Manager support is therefore implemented to enable REST API access even for these storage devices.
Configuration is easy: create an account on Hitachi Configuration Manager and put its hostname and user credentials into the STOR2RRD configuration.
You also have to define the device ID of the storage.
Hitachi Configuration Manager is apparently free for Hitachi customers.
Prerequisites
- Open port 23450 (HTTP) or 23451 (HTTPS) from the STOR2RRD server
- It uses the same user account that is used for the Export Tool, already created on the back-end storage
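You can verify that the Configuration Manager is reachable with the same conntest.pl helper used elsewhere in this guide (the hostname and the HTTPS port 23451 are just examples):
$ perl /home/stor2rrd/stor2rrd/bin/conntest.pl hcm-server.example.com 23451
Connection to "hcm-server.example.com" on port "23451" is ok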
Configuration
Follow the above chapters for setting up the Export Tool and the storage
- Installation of Hitachi Export Tool
- Allow monitoring of CU and WWN
STOR2RRD storage configuration
-
Let the storage agent run for 15 - 20 minutes to collect data, then:
$ cd /home/stor2rrd/stor2rrd
$ ./load.sh
- Go to the web UI: http://<your web server>/stor2rrd/
Use Ctrl-F5 to refresh the web browser cache.
If you use the Virtual Appliance:
- Use the local account lpar2rrd for hosting STOR2RRD on the virtual appliance
- Use /home/stor2rrd/stor2rrd as the product home
The tool uses the SNMP protocol to get performance and configuration data from the HNAS storage.
There are apparently two different HNAS hardware products:
- Block storage system with a separate NAS head in front ➡ use the EVS IP in the connection details
- Unified system, which is block storage with an integrated NAS head ➡ connect it to one of the nodes.
Install Prerequisites (skip this in case of the Virtual Appliance)
- AIX
Download Net-SNMP packages and install them.
Do not use the latest packages on AIX, they do not work; use net-snmp-5.6.2.1-1!
# umask 0022
# rpm -Uvh net-snmp-5.6.2.1-1 net-snmp-utils-5.6.2.1-1 net-snmp-perl-5.6.2.1-1
Make sure
- you use PERL=/opt/freeware/bin/perl in etc/stor2rrd.cfg
- PERL5LIB in etc/stor2rrd.cfg contains /opt/freeware/lib/perl5/vendor_perl/5.8.8/ppc-thread-multi path
- Linux
# umask 0022
# yum install net-snmp
# yum install net-snmp-utils
# yum install net-snmp-perl
Note you might need to enable optional repositories on RHEL so that yum can find them
# subscription-manager repos --list
...
# subscription-manager repos --enable rhel-7-server-optional-rpms
Use rhel-7-for-power-le-optional-rpms for Linux on Power etc ...
- Linux Debian/Ubuntu
% umask 0022
% apt-get install snmp libsnmp-perl snmp-mibs-downloader
Assure that this line is commented out in /etc/snmp/snmp.conf
#mibs :
If apt-get does not find snmp-mibs-downloader package then enable contrib and non-free repositories.
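A quick way to confirm the line is commented out (the mibs line should start with '#'):
$ grep -n mibs /etc/snmp/snmp.conf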
Enable SNMP v2c or v3 protocol on the storage
- SNMP v2c
Hitachi docu, follow section "Configuring SNMP access"
Navigate to: Home ➡ Server ➡ Settings ➡ SNMP Access Configuration
Select SNMP v2c, leave port 161, add the stor2rrd server as an allowed host and set the community string
Note: do not use SNMP v1, it does not work
- SNMP v3
Hitachi docu, follow section "Configuring SNMPv3 access"
Use the CLI command snmp-protocol to configure SNMPv3.
When SNMPv3 is enabled the SNMP agent will not respond to SNMPv1 or SNMPv2c requests.
$ ssh supervisor@<EVS hostname/IP>
HNAS1:$ snmp-protocol -v v3
HNAS1:$ snmp-protocol
Protocol: SNMPv3
Add users with the snmpv3-user-add command.
HNAS1:$ snmpv3-user-add stor2rrd
Please enter the authentication password: ********
Please re-enter the authentication password: ********
Please enter the privacy password: ********
Please re-enter the privacy password: ********
Enable enhanced stats
Allow network access
-
Allow access from the STOR2RRD host to the storage (EVS IP) on UDP port 161.
Test if port is open:
$ perl /home/stor2rrd/stor2rrd/bin/conntest_udp.pl 192.168.1.1 161
Connection to "192.168.1.1" on port "161" is ok
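Once SNMP is enabled on the storage and the port is open, you can optionally verify the whole path with snmpwalk from the net-snmp / snmp packages installed above. SNMP v2c is shown; the community string and IP are placeholders, and for SNMPv3 use the -v3 user/authentication options instead:
$ snmpwalk -v 2c -c <community> 192.168.1.1 1.3.6.1.2.1.1   # 1.3.6.1.2.1.1 = system MIB subtree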
STOR2RRD storage configuration
-
Let the storage agent run for 15 - 20 minutes to collect data, then:
$ cd /home/stor2rrd/stor2rrd
$ ./load.sh
- Go to the web UI: http://<your web server>/stor2rrd/
Use Ctrl-F5 to refresh the web browser cache.
If you use the Virtual Appliance:
- Use the local account lpar2rrd for hosting STOR2RRD on the virtual appliance
- Use /home/stor2rrd/stor2rrd as the product home
STOR2RRD uses the Hitachi HCP REST API natively provided by the storage to get configuration and performance data.
Event logs and health status are obtained via SNMP.
Storage connectivity
-
Allow access from the STOR2RRD host to the Hitachi HCP storage on ports 161 (UDP) and 9090 (REST API, TCP)
$ perl /home/stor2rrd/stor2rrd/bin/conntest_udp.pl 192.168.1.1 161
Connection to "192.168.1.1" on port "161" is ok
$ perl /home/stor2rrd/stor2rrd/bin/conntest.pl 192.168.1.1 9090
Connection to "192.168.1.1" on port "9090" is ok
- Storage users
1. Create a global user (preferably stor2rrd) for access to the storage with the Monitor role.
2. Create an additional user in each monitored Tenant with the Monitor role.
-
Allow Management API access from the STOR2RRD hosts/network/globally for the storage and each tenant
- Enable SNMP:
STOR2RRD storage configuration