Table of Contents
NetApp: A Complete Setup of a Netapp Filer
After reinitializing a filer and resetting it to factory defaults, there comes a time when you want to re-use your precious 100k+ baby. Thinking this would be a piece of cake, I ran into some unforeseen surprises, which led to this document. Here I'll show you how to set up a filer from start to finish. Note that this is a filer with a partner, so there is a lot of switching back and forth.
Note that I copied a lot of output from the filers into this article for your convenience. Also note that I mixed output from the two filers, and that all steps need to be done on both filers.
Initial Configuration
When you start up the filer and connect to the management console (serial cable, COM1, etc.; all default settings if you're using a Windows machine with PuTTY), you'll be presented with a configuration wizard. Simply answer the questions, and don't be shy if you're not sure: everything can be changed afterwards.
Note: This wizard can be started by issuing the command setup.
Please enter the new hostname []: filer01b
Do you want to enable IPv6? [n]:
Do you want to configure virtual network interfaces? [n]:
For environments that use dedicated LANs to isolate management traffic from data traffic, e0M is the preferred Data ONTAP interface for the management LAN. The e0M interface is separate from the RLM interface even though they share the same external connector (port with wrench icon). It is highly recommended that you configure both interfaces.
Please enter the IP address for Network Interface e0M []: 10.18.1.32
Please enter the netmask for Network Interface e0M [255.255.0.0]:
Should interface e0M take over a partner IP address during failover? [n]: y
The clustered failover software is not yet licensed. To enable network failover, you should run the 'license' command for clustered failover.
Please enter the IPv4 address or interface name to be taken over by e0M []: 10.18.1.31
Please enter flow control for e0M {none, receive, send, full} [full]:
Please enter the IP address for Network Interface e0a []:
Should interface e0a take over a partner IP address during failover? [n]:
Please enter the IP address for Network Interface e0b []:
Should interface e0b take over a partner IP address during failover? [n]:
Would you like to continue setup through the web interface? [n]:
Please enter the name or IP address of the IPv4 default gateway: 10.18.1.254
The administration host is given root access to the filer's /etc files for system administration. To allow /etc root access to all NFS clients enter RETURN below.
Please enter the name or IP address of the administration host:
Where is the filer located? []: Den Haag
Do you want to run DNS resolver? [n]: y
Please enter DNS domain name []: getshifting.com
You may enter up to 3 nameservers
Please enter the IP address for first nameserver []:
Bad IP address entered - must be of the form a.b, a.b.c, or a.b.c.d
Please enter the IP address for first nameserver []: 10.16.110.1
Do you want another nameserver? [n]:
Do you want to run NIS client? [n]:
The Remote LAN Module (RLM) provides remote management capabilities including console redirection, logging and power control. It also extends autosupport by sending additional system event alerts. Your autosupport settings are used for sending these alerts via email over the RLM LAN interface.
Would you like to configure the RLM LAN interface [y]:
Would you like to enable DHCP on the RLM LAN interface [y]: n
Please enter the IP address for the RLM: 10.16.121.96
Please enter the netmask for the RLM []: 255.255.0.0
Please enter the IP address for the RLM gateway:
Bad IP address entered - must be of the form a.b, a.b.c, or a.b.c.d
Please enter the IP address for the RLM gateway: 10.16.1.254
The mail host is required by your system to send RLM alerts and local autosupport email.
Please enter the name or IP address of the mail host [mailhost]:
You may use the autosupport options to configure alert destinations.
Name of primary contact (Required) []: sjoerd @ getshifting.com
Phone number of primary contact (Required) []: 0151234567
Alternate phone number of primary contact []:
Primary Contact e-mail address or IBM WebID? []: sjoerd @ getshifting.com
Name of secondary contact []:
Phone number of secondary contact []:
Alternate phone number of secondary contact []:
Secondary Contact e-mail address or IBM WebID? []:
Business name (Required) []: SHIFT
Business address (Required) []: Street 1
City where business resides (Required) []: Delft
State where business resides []:
2-character country code (Required) []: NL
Postal code where business resides []: 1234AA
The Shelf Alternate Control Path Management process provides the ability to recover from certain SAS shelf module failures and provides a level of availability that is higher than systems not using the Alternate Control Path Management process.
Do you want to configure the Shelf Alternate Control Path Management interface for SAS shelves [n]:
The initial aggregate current
Setting the administrative (root) password for filer01b ...
New password:
Retype new password:
IP Addresses
Set up the IP addresses:
filer01b> ifconfig -a
e0M: flags=0x2948867<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
        inet 10.18.1.132 netmask-or-prefix 0xffff0000 broadcast 10.18.255.255
        partner inet 10.18.1.131 (not in use)
        ether 00:a0:98:29:16:32 (auto-100tx-fd-up) flowcontrol full
e0a: flags=0x2508866<BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
        ether 00:a0:98:29:16:30 (auto-unknown-cfg_down) flowcontrol full
e0b: flags=0x2508866<BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
        ether 00:a0:98:29:16:31 (auto-unknown-cfg_down) flowcontrol full
lo: flags=0x1948049<UP,LOOPBACK,RUNNING,MULTICAST,TCPCKSUM> mtu 9188
        inet 127.0.0.1 netmask-or-prefix 0xff000000 broadcast 127.0.0.1
filer01b> ifconfig e0M 10.18.1.132 netmask 255.255.0.0 partner 10.18.1.131
Route
Set default route:
filer01a> route delete default
delete net default
filer01a> route add default 10.18.1.254 1
add net default: gateway 10.18.1.254
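To verify the result, you can display the routing table and check that the gateway responds. These are standard 7-mode commands, but double-check them on your release; netstat -rn shows the same table if route -s isn't available:

filer01a> route -s
filer01a> netstat -rn
filer01a> ping 10.18.1.254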
Set Up Startup Files
By setting these files up correctly, your settings will persist across reboots:
Node A:
filer01a> rdfile /etc/rc
#Auto-generated by setup Mon Sep 17 13:34:05 GMT 2012
hostname filer01a
ifconfig e0a `hostname`-e0a flowcontrol full partner 10.18.1.32
route add default 10.18.1.254 1
routed on
options dns.domainname getshifting.com
options dns.enable on
options nis.enable off
savecore
filer01a> rdfile /etc/hosts
#Auto-generated by setup Mon Sep 17 13:34:05 GMT 2012
127.0.0.1 localhost
10.18.1.31 filer01a filer01a-e0a
# 0.0.0.0 filer01a-e0M
# 0.0.0.0 filer01a-e0b
Node B:
filer01b> rdfile /etc/rc
#Auto-generated by setup Mon Sep 17 13:34:05 GMT 2012
hostname filer01b
ifconfig e0a `hostname`-e0a flowcontrol full partner 10.18.1.31
route add default 10.18.1.254 1
routed on
options dns.domainname getshifting.com
options dns.enable on
options nis.enable off
savecore
filer01b> rdfile /etc/hosts
#Auto-generated by setup Mon Sep 17 13:34:05 GMT 2012
127.0.0.1 localhost
10.18.1.32 filer01b filer01b-e0a
# 0.0.0.0 filer01b-e0M
# 0.0.0.0 filer01b-e0b
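If you later need to add a single line to one of these files (an extra host entry, for example), you don't have to rewrite the whole file: wrfile -a appends a line. A small sketch, where the sysloghost entry is just an example:

filer01a> wrfile -a /etc/hosts 10.18.2.240 sysloghost
filer01a> rdfile /etc/hosts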
Password
Set up a password for the root user. Notice that you'll get a warning if you use a password with fewer than 8 characters, but you're still allowed to set it:
filer01a> passwd
New password:
Retype new password:
Mon Sep 17 14:44:09 GMT [snmp.agent.msg.access.denied:warning]: Permission denied for SNMPv3 requests from this user. Reason: Password is too short (SNMPv3 requires at least 8 characters).
Mon Sep 17 14:44:09 GMT [snmp.agent.msg.access.denied:warning]: Permission denied for SNMPv3 requests from root. Reason: Password is too short (SNMPv3 requires at least 8 characters).
Mon Sep 17 14:44:09 GMT [passwd.changed:info]: passwd for user 'root' changed.
CIFS And Authentication
This part is a little bit confusing. Although we're not using CIFS, I still had to run cifs setup to configure the normal authentication for these filers:
filer01a> cifs setup
This process will enable CIFS access to the filer from a Windows(R) system.
Use "?" for help at any prompt and Ctrl-C to exit without committing changes.
Your filer does not have WINS configured and is visible only to clients on the same subnet.
Do you want to make the system visible via WINS? [n]: ?
Answer 'y' if you would like to configure CIFS to register its names with WINS servers, and to use WINS server queries to locate domain controllers. You will be prompted to add the IPv4 addresses of up to 4 WINS servers.
Answer 'n' if you are not using WINS servers in your environment or do not want to use them.
Do you want to make the system visible via WINS? [n]: n
A filer can be configured for multiprotocol access, or as an NTFS-only filer. Since multiple protocols are currently licensed on this filer, we recommend that you configure this filer as a multiprotocol filer
(1) Multiprotocol filer
(2) NTFS-only filer
Selection (1-2)? [1]:
CIFS requires local /etc/passwd and /etc/group files and default files will be created. The default passwd file contains entries for 'root', 'pcuser', and 'nobody'.
Enter the password for the root user []:
Retype the password:
The default name for this CIFS server is 'filer01a'.
Would you like to change this name? [n]:
Data ONTAP CIFS services support four styles of user authentication. Choose the one from the list below that best suits your situation.
(1) Active Directory domain authentication (Active Directory domains only)
(2) Windows NT 4 domain authentication (Windows NT or Active Directory domains)
(3) Windows Workgroup authentication using the filer's local user accounts
(4) /etc/passwd and/or NIS/LDAP authentication
Selection (1-4)? [1]: 3
What is the name of the Workgroup? [WORKGROUP]: SHIFT
Wed Sep 19 13:40:08 GMT [filer01a: snmp.agent.msg.access.denied:warning]: Permission denied for SNMPv3 requests from this user. Reason: Password is too short (SNMPv3 requires at least 8 characters).
Wed Sep 19 13:40:08 GMT [filer01a: snmp.agent.msg.access.denied:warning]: Permission denied for SNMPv3 requests from root. Reason: Password is too short (SNMPv3 requires at least 8 characters).
Wed Sep 19 13:40:08 GMT [filer01a: passwd.changed:info]: passwd for user 'root' changed.
CIFS - Starting SMB protocol...
It is recommended that you create the local administrator account (filer01a\administrator) for this filer.
Do you want to create the filer01a\administrator account? [y]:
Enter the new password for filer01a\administrator:
Retype the password:
Welcome to the SHIFT Windows(R) workgroup
CIFS local server is running.
filer01a>
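Since local authentication is the only reason for running cifs setup here, a quick way to confirm the result is to list the local accounts and the default shares the wizard created; both are standard 7-mode commands, but verify on your release:

filer01a> useradmin user list
filer01a> cifs shares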
Set Up SSH
Configure secure access to the filer:
filer01a*> secureadmin setup ssh
SSH Setup
---------
Determining if SSH Setup has already been done before...no
SSH server supports both ssh1.x and ssh2.0 protocols. SSH server needs two RSA keys to support ssh1.x protocol. The host key is generated and saved to file /etc/sshd/ssh_host_key during setup. The server key is re-generated every hour when SSH server is running.
SSH server needs a RSA host key and a DSA host key to support ssh2.0 protocol. The host keys are generated and saved to /etc/sshd/ssh_host_rsa_key and /etc/sshd/ssh_host_dsa_key files respectively during setup.
SSH Setup will now ask you for the sizes of the host and server keys.
For ssh1.0 protocol, key sizes must be between 384 and 2048 bits.
For ssh2.0 protocol, key sizes must be between 768 and 2048 bits.
The size of the host and server keys must differ by at least 128 bits.
Please enter the size of host key for ssh1.x protocol [768] :
Please enter the size of server key for ssh1.x protocol [512] :
Please enter the size of host keys for ssh2.0 protocol [768] :
You have specified these parameters:
host key size = 768 bits
server key size = 512 bits
host key size for ssh2.0 protocol = 768 bits
Is this correct? [yes]
Setup will now generate the host keys. It will take a minute.
After Setup is finished the SSH server will start automatically.
filer01a*> Thu Sep 20 08:14:06 GMT [filer01a: secureadmin.ssh.setup.success:info]: SSH setup is done and ssh2 should be enabled. Host keys are stored in /etc/sshd/ssh_host_key, /etc/sshd/ssh_host_rsa_key, and /etc/sshd/ssh_host_dsa_key.
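With the keys generated, you can check the result and, assuming you want to allow only the ssh2 protocol, disable ssh1. These options exist in 7-mode, but verify the names on your release:

filer01a> secureadmin status
filer01a> options ssh2.enable on
filer01a> options ssh1.enable off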
Apply a Software Update
Notice that when you reinitialize a NetApp you don't only remove the data, you also remove the filer's software, leaving you with a basic version. You need to get the software update from NetApp or your reseller. You can't just download it yourself, so make sure you can get it before you proceed with the wiping.
There is a really easy way to get a new software version on your filer. If you have a local webserver you can download it from there, or use the miniweb webserver:
- Download the miniweb webserver from: http://sourceforge.net/projects/miniweb/
- Extract the package, run the miniweb executable and place the software update file in the webroot folder.
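As an alternative to miniweb, any machine with Python on the same network can serve the file. This serves the current directory on port 8000, so the URL in the next step becomes http://<webserver ip address>:8000/<software update filename>:

cd <folder containing the software update file>
python3 -m http.server 8000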
Now continue on the filer by issuing these commands:
- software get http://<webserver ip address>/<software update filename>
- software list
- software install <software update filename>
- download
- version -b
- reboot
filer01a> software get http://10.16.61.16/26405.211.788.737_setup_q.exe
software: copying to /etc/software/26405.211.788.737_setup_q.exe
software: 100% file read from location.
software: /etc/software/26405.211.788.737_setup_q.exe has been copied.
filer01a> software list
26405.211.788.737_setup_q.exe
filer01a> software install 26405.211.788.737_setup_q.exe
software: You can cancel this operation by hitting Ctrl-C in the next 6 seconds.
software: Depending on system load, it may take many minutes
software: to complete this operation. Until it finishes, you will
software: not be able to use the console.
software: installing software, this could take a few minutes...
software: installation of 26405.211.788.737_setup_q.exe completed.
Thu Sep 20 08:46:55 GMT [filer01a: cmds.software.installDone:info]: Software: Installation of 26405.211.788.737_setup_q.exe was completed. Please type "download" to load the new software, and "reboot" subsequently for the changes to take effect.
filer01a> download
download: Reminder: upgrade both the nodes in the Cluster
download: You can cancel this operation by hitting Ctrl-C in the next 6 seconds.
download: Depending on system load, it may take many minutes
download: to complete this operation. Until it finishes, you will
download: not be able to use the console.
Thu Sep 20 08:47:33 GMT [filer01a: download.request:notice]: Operator requested download initiated
download: Downloading boot device
download: If upgrading from a version of Data ONTAP prior to 7.3, please ensure
download: there is at least 3% of available space on each aggregate before
download: upgrading. Additional information can be found in the release notes.
........Thu Sep 20 08:48:45 GMT [filer01a: raid.disk.offline:notice]: Marking Disk /aggr0/plex0/rg0/1a.00.0 Shelf 0 Bay 0 [NETAPP X306_HJUPI02TSSM NA00] S/N [B9JMZ14F] offline.
Thu Sep 20 08:48:45 GMT [filer01a: bdfu.selected:info]: Disk 1a.00.0 [NETAPP X306_HJUPI02TSSM NA00] S/N [B9JMZ14F] selected for background disk firmware update.
Thu Sep 20 08:48:45 GMT [filer01a: dfu.firmwareDownloading:info]: Now downloading firmware file /etc/disk_fw/X306_HJUPI02TSSM.NA02.LOD on 1 disk(s) of plex [Pool0]...
....Thu Sep 20 08:49:11 GMT [filer01a: raid.disk.online:notice]: Onlining Disk /aggr0/plex0/rg0/1a.00.0 Shelf 0 Bay 0 [NETAPP X306_HJUPI02TSSM NA02] S/N [B9JMZ14F].
Disk I/O attempted while disk 1a.00.0 is being zeroed.
Thu Sep 20 08:49:11 GMT [filer01a: fmmb.lock.disk.remove:info]: Disk ?.? removed from local mailbox set.
Thu Sep 20 08:49:12 GMT [filer01a: fmmb.current.lock.disk:info]: Disk 1a.00.1 is a local HA mailbox disk.
Thu Sep 20 08:49:12 GMT [filer01a: fmmb.current.lock.disk:info]: Disk 1a.00.2 is a local HA mailbox disk.
Thu Sep 20 08:49:12 GMT [filer01a: fmmb.lock.disk.remove:info]: Disk 1a.00.2 removed from local mailbox set.
Thu Sep 20 08:49:12 GMT [filer01a: fmmb.current.lock.disk:info]: Disk 1a.00.0 is a local HA mailbox disk.
Thu Sep 20 08:49:12 GMT [filer01a: fmmb.current.lock.disk:info]: Disk 1a.00.1 is a local HA mailbox disk.
.....Thu Sep 20 08:50:00 GMT [filer01a: mgr.stack.openFail:warning]: Unable to open function name/address mapping file /etc/boot/mapfile_7.3.4.2O: No such file or directory
download: Downloading boot device (Service Area)
........Thu Sep 20 08:51:11 GMT [filer01a: raid.disk.offline:notice]: Marking Disk /aggr0/plex0/rg0/1a.00.1 Shelf 0 Bay 1 [NETAPP X306_HJUPI02TSSM NA00] S/N [B9JM4AUT] offline.
Thu Sep 20 08:51:11 GMT [filer01a: bdfu.selected:info]: Disk 1a.00.1 [NETAPP X306_HJUPI02TSSM NA00] S/N [B9JM4AUT] selected for background disk firmware update.
Thu Sep 20 08:51:11 GMT [filer01a: dfu.firmwareDownloading:info]: Now downloading firmware file /etc/disk_fw/X306_HJUPI02TSSM.NA02.LOD on 1 disk(s) of plex [Pool0]...
.....Thu Sep 20 08:51:37 GMT [filer01a: raid.disk.online:notice]: Onlining Disk /aggr0/plex0/rg0/1a.00.1 Shelf 0 Bay 1 [NETAPP X306_HJUPI02TSSM NA02] S/N [B9JM4AUT].
Disk I/O attempted while disk 1a.00.1 is being zeroed.
Thu Sep 20 08:51:37 GMT [filer01a: fmmb.lock.disk.remove:info]: Disk ?.? removed from local mailbox set.
Thu Sep 20 08:51:38 GMT [filer01a: fmmb.current.lock.disk:info]: Disk 1a.00.0 is a local HA mailbox disk.
Thu Sep 20 08:51:38 GMT [filer01a: fmmb.current.lock.disk:info]: Disk 1a.00.2 is a local HA mailbox disk.
Thu Sep 20 08:51:38 GMT [filer01a: fmmb.lock.disk.remove:info]: Disk 1a.00.2 removed from local mailbox set.
Thu Sep 20 08:51:38 GMT [filer01a: fmmb.current.lock.disk:info]: Disk 1a.00.0 is a local HA mailbox disk.
Thu Sep 20 08:51:38 GMT [filer01a: fmmb.current.lock.disk:info]: Disk 1a.00.1 is a local HA mailbox disk.
....
filer01a> Thu Sep 20 08:52:17 GMT [filer01a: download.requestDone:notice]: Operator requested download completed
filer01a> version -b
1:/x86_64/kernel/primary.krn: OS 7.3.7
1:/backup/x86_64/kernel/primary.krn: OS 7.3.4
1:/x86_64/diag/diag.krn: 5.6.1
1:/x86_64/firmware/excelsio/firmware.img: Firmware 1.9.0
1:/x86_64/firmware/DrWho/firmware.img: Firmware 2.5.0
1:/x86_64/firmware/SB_XV/firmware.img: Firmware 4.4.0
1:/boot/loader: Loader 1.8
1:/common/firmware/zdi/zdi_fw.zpk: Flash Cache Firmware 2.2 (Build 0x201012201350)
1:/common/firmware/zdi/zdi_fw.zpk: PAM II Firmware 1.10 (Build 0x201012200653)
1:/common/firmware/zdi/zdi_fw.zpk: X1936A FPGA Configuration PROM 1.0 (Build 0x200706131558)
filer01a>
filer01a> reboot
CIFS local server is shutting down...
CIFS local server has shut down...
Thu Sep 20 08:54:53 GMT [filer01a: kern.shutdown:notice]: System shut down because : "reboot".
Thu Sep 20 08:54:53 GMT [filer01a: perf.archive.stop:info]: Performance archiver stopped.
Phoenix TrustedCore(tm) Server
Copyright 1985-2006 Phoenix Technologies Ltd.
All Rights Reserved
BIOS version: 4.4.0
Portions Copyright (c) 2007-2009 NetApp. All Rights Reserved.
CPU= Dual-Core AMD Opteron(tm) Processor 2216 X 1
Testing RAM
512MB RAM tested
4096MB RAM installed
Fixed Disk 0: STEC
Boot Loader version 1.8
Copyright (C) 2000-2003 Broadcom Corporation.
Portions Copyright (C) 2002-2009 NetApp
CPU Type: Dual-Core AMD Opteron(tm) Processor 2216
Starting AUTOBOOT press Ctrl-C to abort...
Loading x86_64/kernel/primary.krn:................0x200000/49212136 0x30eeae8/23754912 0x4796388/7980681 0x4f32a11/7 Entry at 0x00202018
Starting program at 0x00202018
Press CTRL-C for special boot menu
The platform doesn't support service processor
nvram: Need to update primary image on flash from version 2 to 4
nvram: Need to update secondary image on flash from version 2 to 4
Updating nvram firmware, memory valid is off. The system will automatically reboot when the update is complete.
and nvram6 boot block...................
Thu Sep 20 08:56:14 GMT [nvram.new.fw.downloaded:CRITICAL]: Firmware version 4 has been successfully downloaded on to the NVRAM card. The system will automatically reboot now.
Starting AUTOBOOT press Ctrl-C to abort...
Loading x86_64/kernel/primary.krn:................0x200000/49212136 0x30eeae8/23754912 0x4796388/7980681 0x4f32a11/7 Entry at 0x00202018
Starting program at 0x00202018
Press CTRL-C for special boot menu
The platform doesn't support service processor
Thu Sep 20 08:56:58 GMT [nvram.battery.state:info]: The NVRAM battery is currently OFF.
Thu Sep 20 08:57:00 GMT [nvram.battery.turned.on:info]: The NVRAM battery is turned ON. It is turned OFF during system shutdown.
Thu Sep 20 08:57:02 GMT [cf.nm.nicTransitionUp:info]: Interconnect link 0 is UP
Thu Sep 20 08:57:02 GMT [sas.adapter.firmware.download:info]: Updating firmware on SAS adapter 1a from version 01.08.00.00 to version 01.10.00.00.
Thu Sep 20 08:57:02 GMT [sas.adapter.firmware.download:info]: Updating firmware on SAS adapter 1c from version 01.08.00.00 to version 01.10.00.00.
Thu Sep 20 08:57:04 GMT [sas.adapter.firmware.download:info]: Updating firmware on SAS adapter 1a from version 01.09.01.00 to version 01.10.14.00.
Thu Sep 20 08:57:04 GMT [sas.adapter.firmware.download:info]: Updating firmware on SAS adapter 1c from version 01.09.01.00 to version 01.10.14.00.
Thu Sep 20 08:57:12 GMT [fci.initialization.failed:error]: Initialization failed on Fibre Channel adapter 0b.
Thu Sep 20 08:57:12 GMT [fci.initialization.failed:error]: Initialization failed on Fibre Channel adapter 0c.
Data ONTAP Release 7.3.7: Thu May 3 04:27:32 PDT 2012 (IBM)
Copyright (c) 1992-2012 NetApp.
Starting boot on Thu Sep 20 08:56:56 GMT 2012
Thu Sep 20 08:57:28 GMT [kern.version.change:notice]: Data ONTAP kernel version was changed from Data ONTAP Release 7.3.4 to Data ONTAP Release 7.3.7.
Thu Sep 20 08:57:31 GMT [diskown.isEnabled:info]: software ownership has been enabled for this system
Thu Sep 20 08:57:34 GMT [sas.link.error:error]: Could not recover link on SAS adapter 1d after 10 seconds. Offlining the adapter.
Thu Sep 20 08:57:34 GMT [cf.nm.nicReset:warning]: Initiating soft reset on Cluster Interconnect card 0 due to rendezvous reset
Thu Sep 20 08:57:34 GMT [cf.rv.notConnected:error]: Connection for cfo_rv failed
Thu Sep 20 08:57:34 GMT [cf.nm.nicTransitionDown:warning]: Cluster Interconnect link 0 is DOWN
Thu Sep 20 08:57:34 GMT [cf.rv.notConnected:error]: Connection for cfo_rv failed
Thu Sep 20 08:57:34 GMT [fmmb.current.lock.disk:info]: Disk 1a.00.0 is a local HA mailbox disk.
Thu Sep 20 08:57:34 GMT [fmmb.current.lock.disk:info]: Disk 1a.00.1 is a local HA mailbox disk.
Thu Sep 20 08:57:34 GMT [fmmb.instStat.change:info]: normal mailbox instance on local side.
Thu Sep 20 08:57:35 GMT [sas.link.error:error]: Could not recover link on SAS adapter 1b after 10 seconds. Offlining the adapter.
Thu Sep 20 08:57:35 GMT [shelf.config.spha:info]: System is using single path HA attached storage only.
Thu Sep 20 08:57:35 GMT [fmmb.current.lock.disk:info]: Disk 1a.00.12 is a partner HA mailbox disk.
Thu Sep 20 08:57:35 GMT [fmmb.current.lock.disk:info]: Disk 1a.00.13 is a partner HA mailbox disk.
Thu Sep 20 08:57:35 GMT [fmmb.instStat.change:info]: normal mailbox instance on partner side.
Thu Sep 20 08:57:35 GMT [cf.fm.partner:info]: Cluster monitor: partner 'filer01b'
Thu Sep 20 08:57:35 GMT [cf.fm.kernelMismatch:warning]: Cluster monitor: possible kernel mismatch detected local 'Data ONTAP/7.3.7', partner 'Data ONTAP/7.3.4'
Thu Sep 20 08:57:35 GMT [cf.fm.timeMasterStatus:info]: Acting as cluster time slave
Thu Sep 20 08:57:36 GMT [cf.nm.nicTransitionUp:info]: Interconnect link 0 is UP
Thu Sep 20 08:57:36 GMT [ses.multipath.ReqError:CRITICAL]: SAS-Shelf24 detected without a multipath configuration.
Thu Sep 20 08:57:36 GMT [raid.cksum.replay.summary:info]: Replayed 0 checksum blocks.
Thu Sep 20 08:57:36 GMT [raid.stripe.replay.summary:info]: Replayed 0 stripes.
Thu Sep 20 08:57:37 GMT [localhost: cf.fm.launch:info]: Launching cluster monitor
Thu Sep 20 08:57:38 GMT [localhost: cf.fm.partner:info]: Cluster monitor: partner 'filer01b'
Thu Sep 20 08:57:38 GMT [localhost: cf.fm.notkoverClusterDisable:warning]: Cluster monitor: cluster takeover disabled (restart)
sparse volume upgrade done. num vol 0.
Thu Sep 20 08:57:38 GMT [localhost: cf.fsm.takeoverOfPartnerDisabled:notice]: Cluster monitor: takeover of filer01b disabled (cluster takeover disabled)
add net 127.0.0.0: gateway 127.0.0.1
Thu Sep 20 08:57:40 GMT [localhost: cf.nm.nicReset:warning]: Initiating soft reset on Cluster Interconnect card 0 due to rendezvous reset
Vdisk Snap Table for host:0 is initialized
Thu Sep 20 08:57:40 GMT [localhost: rc:notice]: The system was down for 164 seconds
Thu Sep 20 08:57:41 GMT [localhost: rc:info]: Registry is being upgraded to improve storing of local changes.
Thu Sep 20 08:57:41 GMT [filer01a: rc:info]: Registry upgrade successful.
Thu Sep 20 08:57:41 GMT [filer01a: cf.partner.short_uptime:warning]: Partner up for 2 seconds only
***** 5 disks have been identified as having an incorrect
***** firmware revision level.
***** Please consult the man pages for disk_fw_update
***** to upgrade the firmware on these disks.
Thu Sep 20 08:57:42 GMT [filer01a: dfu.firmwareDownrev:error]: Downrev firmware on 5 disk(s)
Thu Sep 20 08:57:43 GMT [filer01a: sfu.partnerNotResponding:error]: Partner either responded in the negative, or did not respond in 20 seconds. Aborting shelf firmware update.
Thu Sep 20 08:57:43 GMT [filer01a: perf.archive.start:info]: Performance archiver started. Sampling 23 objects and 211 counters.
Thu Sep 20 08:57:47 GMT [filer01a: netif.linkUp:info]: Ethernet e0a: Link up.
add net default: gateway 10.18.1.254
Thu Sep 20 08:57:48 GMT [filer01a: rpc.dns.file.not.found:error]: Cannot enable DNS: /etc/resolv.conf does not exist
Thu Sep 20 08:57:48 GMT [filer01a: snmp.agent.msg.access.denied:warning]: Permission denied for SNMPv3 requests from root. Reason: Password is too short (SNMPv3 requires at least 8 characters).
Thu Sep 20 08:57:48 GMT [filer01a: mgr.boot.disk_done:info]: Data ONTAP Release 7.3.7 boot complete. Last disk update written at Thu Sep 20 08:54:54 GMT 2012
Reminder: you should also set option timed.proto on the partner node or the next takeover may not function correctly.
Thu Sep 20 08:57:48 GMT [filer01a: cf.hwassist.notifyEnableOn:info]: Cluster hw_assist: hw_assist functionality has been enabled by user.
Thu Sep 20 08:57:48 GMT [filer01a: cf.hwassist.intfUnConfigured:info]: Cannot get IP address of preferred e0M ethernet interface for hardware assist functionality.
Thu Sep 20 08:57:48 GMT [filer01a: cf.hwassist.emptyPrtnrAddr:warning]: Partner address is empty; set it using the command 'options cf.hw_assist.partner.address' and to make the hardware-assisted takeover work.
Thu Sep 20 08:57:48 GMT [filer01a: mgr.boot.reason_ok:notice]: System rebooted after a reboot command.
CIFS local server is running.
Thu Sep 20 08:57:49 GMT [filer01a: asup.post.host:info]: Autosupport (BATTERY_LOW) cannot connect to url eccgw01.boulder.ibm.com/support/electronic/nas (Could not find hostname 'eccgw01.boulder.ibm.com', hostname lookup resolution error: Unknown host)
filer01a> Thu Sep 20 08:57:50 GMT [filer01a: rlm.driver.mailhost:warning]: RLM setup could not access the mailhost specified in Data ONTAP.
Thu Sep 20 08:57:50 GMT [filer01a: unowned.disk.reminder:info]: 24 disks are currently unowned. Use 'disk assign' to assign the disks to a filer.
Ipspace "acp-ipspace" created
Thu Sep 20 08:57:55 GMT [filer01a: rlm.firmware.upgrade.reqd:warning]: The RLM firmware 3.1 is incompatible with Data ONTAP for IPv6.
Thu Sep 20 08:57:55 GMT [filer01a: cf.hwassist.emptyPrtnrAddr:warning]: Partner address is empty; set it using the command 'options cf.hw_assist.partner.address' and to make the hardware-assisted takeover work.
Thu Sep 20 08:57:56 GMT [filer01a: cf.hwassist.emptyPrtnrAddr:warning]: Partner address is empty; set it using the command 'options cf.hw_assist.partner.address' and to make the hardware-assisted takeover work.
Thu Sep 20 08:58:05 GMT [filer01a: monitor.globalStatus.critical:CRITICAL]: Cluster failover of filer01b is not possible: cluster takeover disabled.
Thu Sep 20 08:58:12 GMT [filer01a: nbt.nbns.registrationComplete:info]: NBT: All CIFS name registrations have completed for the local server.
Thu Sep 20 08:58:14 GMT [filer01a: asup.post.host:info]: Autosupport (BATTERY_LOW) cannot connect to url eccgw01.boulder.ibm.com/support/electronic/nas (Could not find hostname 'eccgw01.boulder.ibm.com', hostname lookup resolution error: Unknown host)
Thu Sep 20 08:58:50 GMT [filer01a: asup.post.host:info]: Autosupport (BATTERY_LOW) cannot connect to url eccgw01.boulder.ibm.com/support/electronic/nas (Could not find hostname 'eccgw01.boulder.ibm.com', hostname lookup resolution error: Unknown host)
Thu Sep 20 08:59:43 GMT [filer01a: raid.disk.offline:notice]: Marking Disk 1a.00.3 Shelf 0 Bay 3 [NETAPP X306_HJUPI02TSSM NA00] S/N [B9JLT7ST] offline.
Thu Sep 20 08:59:43 GMT [filer01a: bdfu.selected:info]: Disk 1a.00.3 [NETAPP X306_HJUPI02TSSM NA00] S/N [B9JLT7ST] selected for background disk firmware update.
Thu Sep 20 08:59:44 GMT [filer01a: dfu.firmwareDownloading:info]: Now downloading firmware file /etc/disk_fw/X306_HJUPI02TSSM.NA02.LOD on 1 disk(s) of plex [Pool0]...

Firmware update note: after a while the disk firmware updates are done and you can reboot again:

Thu Sep 20 09:29:24 GMT [filer01a: dfu.firmwareUpToDate:info]: Firmware is up-to-date on all disk drives
Set Vol0 Settings
Set vol0 to 20 GB and configure some additional options (see NetApp Data Planning for more information on these settings):
filer01a> vol size vol0 20g
vol size: Flexible volume 'vol0' size set to 20g.
filer01a> vol options vol0 no_atime_update on
filer01a> vol options vol0 fractional_reserve 0
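To verify the changes, vol size without a size argument prints the current size and vol status -v lists all volume options; both are standard 7-mode commands:

filer01a> vol size vol0
filer01a> vol status -v vol0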
DNS And NTP
DNS
Configure DNS:
filer01a> rdfile /etc/resolv.conf
search getshifting.com intranet
nameserver 10.16.110.1
filer01a> options dns.enable on
You are changing option dns.enable which applies to both members of the cluster in takeover mode. This value must be the same in both cluster members prior to any takeover or giveback, or that next takeover/giveback may not work correctly.
Thu Sep 20 12:31:00 GMT [filer01a: reg.options.cf.change:warning]: Option dns.enable changed on one cluster node.
Note: wrfile needs an empty line at the end; save with Ctrl-C.
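For example, writing /etc/resolv.conf from the console looks like this: type the lines, end with an empty line, then press Ctrl-C to save (the read error it prints is normal and harmless):

filer01a> wrfile /etc/resolv.conf
search getshifting.com intranet
nameserver 10.16.110.1

read: error reading standard input: Interrupted system call
filer01a> rdfile /etc/resolv.conf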
NTP
Configure NTP:
filer01a> options timed
timed.enable             on         (same value in local+partner recommended)
timed.log                off        (same value in local+partner recommended)
timed.max_skew           30m        (same value in local+partner recommended)
timed.min_skew           0          (same value in local+partner recommended)
timed.proto              rtc        (same value in local+partner recommended)
timed.sched              1h         (same value in local+partner recommended)
timed.servers                       (same value in local+partner recommended)
timed.window             0s         (same value in local+partner recommended)
filer01a> options timed.servers 10.16.123.123
Reminder: you should also set option timed.servers on the partner node or the next takeover may not function correctly.
filer01a> options timed.proto ntp
Reminder: you should also set option timed.proto on the partner node or the next takeover may not function correctly.
filer01a> options timed
timed.enable             on         (same value in local+partner recommended)
timed.log                off        (same value in local+partner recommended)
timed.max_skew           30m        (same value in local+partner recommended)
timed.min_skew           0          (same value in local+partner recommended)
timed.proto              ntp        (same value in local+partner recommended)
timed.sched              1h         (same value in local+partner recommended)
timed.servers            10.16.123.123 (same value in local+partner recommended)
timed.window             0s         (same value in local+partner recommended)
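After the next sync interval (timed.sched is 1h here) you can sanity-check the clock; date simply prints the filer's current time:

filer01a> date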
Autosupport
Configure the Autosupport settings:
options autosupport.from filer01a @ getshifting.com
options autosupport.mailhost 10.16.102.111
options autosupport.support.transport smtp
options autosupport.to sjoerd @ getshifting.com
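You can test the mail path by triggering an autosupport manually; autosupport.doit takes a free-form subject string ('TEST' below is just an example):

filer01a> options autosupport.doit TEST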
Cluster Failover
Configure the Cluster Failover:
First configure hardware-assisted takeover (which uses the RLM), since IP failover is already managed by the startup files:
filer01a> options cf.hw_assist.partner.address 10.18.1.32
Validating the new hw-assist configuration. Please wait...
Thu Sep 20 12:45:19 GMT [filer01a: cf.hwassist.localMonitor:warning]: Cluster hw_assist: hw_assist functionality is inactive.
cf hw_assist Error: can not validate new config. No response from partner(filer01b), timed out.
filer01a> Thu Sep 20 12:46:00 GMT [filer01a: cf.hwassist.hwasstActive:info]: Cluster hw_assist: hw_assist functionality is active on IP address: 10.18.1.31 port: 4444
Thu Sep 20 12:45:46 GMT [filer01b: cf.hwassist.localMonitor:warning]: Cluster hw_assist: hw_assist functionality is inactive.
Thu Sep 20 12:45:46 GMT [filer01b: cf.hwassist.missedKeepAlive:warning]: Cluster hw_assist: missed keep alive alert from partner(filer01a).
Thu Sep 20 12:45:46 GMT [filer01b: cf.hwassist.hwasstActive:info]: Cluster hw_assist: hw_assist functionality is active on IP address: 10.18.1.32 port: 4444
filer01b> options cf.hw_assist.partner.address 10.18.1.31
Validating the new hw-assist configuration. Please wait...
cf hw_assist Error: can not validate new config. No response from partner(filer01a), timed out.
Then check and enable (cf enable) the clustering:
filer01a> cf status
Cluster disabled.
filer01a> cf partner
filer01b
filer01a> cf monitor
current time: 20Sep2012 12:48:38
UP 03:19:42, partner 'filer01b', cluster monitor disabled
filer01a> cf enable
filer01a> Thu Sep 20 12:50:48 GMT [filer01a: cf.misc.operatorEnable:warning]: Cluster monitor: operator initiated enabling of cluster
Thu Sep 20 12:50:48 GMT [filer01a: cf.fsm.takeoverOfPartnerEnabled:notice]: Cluster monitor: takeover of filer01b enabled
Thu Sep 20 12:50:48 GMT [filer01a: cf.fsm.takeoverByPartnerEnabled:notice]: Cluster monitor: takeover of filer01a by filer01b enabled
filer01a> Thu Sep 20 12:51:01 GMT [filer01a: monitor.globalStatus.ok:info]: The system's global status is normal.
filer01a> cf status
Cluster enabled, filer01b is up.
filer01a> cf partner
filer01b
filer01a> cf monitor
current time: 20Sep2012 12:51:21
UP 03:22:22, partner 'filer01b', cluster monitor enabled
VIA Interconnect is up (link 0 up, link 1 up), takeover capability on-line
partner update TAKEOVER_ENABLED (20Sep2012 12:51:21)
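If you want to prove failover actually works at this point, you can run a manual takeover/giveback cycle from one head. These are the standard cf commands, but do this in a maintenance window, since the partner is taken down during takeover:

filer01a> cf takeover
filer01a> cf status
filer01a> cf giveback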
Syslog
Configure syslog to send the logging to a central syslog server, see NetApp Syslog for more information:
filer01a> wrfile /etc/syslog.conf
# Log messages of priority info or higher to the console and to /etc/messages
*.err /dev/console
*.info /etc/messages
*.info @10.18.2.240

read: error reading standard input: Interrupted system call
filer01a> rdfile /etc/syslog.conf
# Log messages of priority info or higher to the console and to /etc/messages
*.err /dev/console
*.info /etc/messages
*.info @10.18.2.240
Disable Telnet
Disable telnet access to your filers:
filer01a> options telnet.enable off
Reminder: you MUST also set option telnet.enable on the partner node or the next takeover will not function correctly.
Disk Config
Now this part is quite personal, I guess: you need to configure your disks. Assign the correct disks to the correct heads. How you do this is up to you; I usually divide them equally:
First a few commands:
- Find disk speed: storage show disk -a
See all disks and their ownership (to only see unowned disks, use disk show -n):
filer01a> disk show -v
  DISK       OWNER                  POOL   SERIAL NUMBER
------------ -------------          -----  -------------
1a.00.13     filer01b  (151762803)  Pool0  B9JGMTRT
1a.00.12     filer01b  (151762803)  Pool0  B9JB8DGF
1a.00.16     filer01b  (151762803)  Pool0  B9JMT0JF
1a.00.15     filer01b  (151762803)  Pool0  B9JM4N7T
1a.00.19     filer01b  (151762803)  Pool0  B9JMDT2T
1a.00.18     filer01b  (151762803)  Pool0  B9JHNPST
1a.00.14     filer01b  (151762803)  Pool0  B9JHNPLT
1a.00.17     filer01b  (151762803)  Pool0  B9JLT7UT
1c.01.8      Not Owned              NONE   6SL3PDBR0000N2407Y4X
1c.01.9      Not Owned              NONE   6SL3T0EP0000N2401BQE
1c.01.10     Not Owned              NONE   6SL3QWQF0000N238L2U8
1c.01.4      Not Owned              NONE   6SL3SHRZ0000N240BGZE
1c.01.23     Not Owned              NONE   6SL3PR9T0000N237EL3Q
1c.01.12     Not Owned              NONE   6SL3QHLG0000N238D6W8
1c.01.11     Not Owned              NONE   6SL3PNB30000N2392UAG
1c.01.1      Not Owned              NONE   6SL3PF180000N24007YD
1c.01.0      Not Owned              NONE   6SL3SRHH0000N2409944
1c.01.6      Not Owned              NONE   6SL3SEE80000N2404HQC
1c.01.16     Not Owned              NONE   6SL3R6KL0000N23907K2
1c.01.22     Not Owned              NONE   6SL3PBX20000N237NBWY
1c.01.13     Not Owned              NONE   6SL3RBAS0000N2395038
1c.01.20     Not Owned              NONE   6SL3PND40000N238H5GX
1c.01.19     Not Owned              NONE   6SL3P3NH0000N238608X
1c.01.18     Not Owned              NONE   6SL3PWDK0000N238L4QM
1c.01.5      Not Owned              NONE   6SL3S87F0000N239GPHT
1c.01.15     Not Owned              NONE   6SL3RC5M0000M125NTE1
1c.01.2      Not Owned              NONE   6SL3SB6E0000N2402QZE
1c.01.14     Not Owned              NONE   6SL3R1FP0000N239553G
1c.01.17     Not Owned              NONE   6SL3R3LJ0000N239575J
1c.01.21     Not Owned              NONE   6SL3QQQP0000N23903DJ
1c.01.7      Not Owned              NONE   6SL3SG830000N2404H3Z
1c.01.3      Not Owned              NONE   6SL3SWBE0000N24007QV
1a.00.2      filer01a  (151762815)  Pool0  B9JMDP0T
1a.00.0      filer01a  (151762815)  Pool0  B9JMZ14F
1a.00.7      filer01a  (151762815)  Pool0  B9JLS1MT
1a.00.1      filer01a  (151762815)  Pool0  B9JM4AUT
1a.00.5      filer01a  (151762815)  Pool0  B9JHNPJT
1a.00.4      filer01a  (151762815)  Pool0  B9JMYGNF
1a.00.3      filer01a  (151762815)  Pool0  B9JLT7ST
1a.00.6      filer01a  (151762815)  Pool0  B9JMDL9T
Now assign the disks to their owners. There are 24 unowned disks: 12 for one head, 12 for the other:
filer01a> disk assign 1c.01.0 -o filer01a
Sep 21 15:01:08 [filer01a: diskown.changingOwner:info]: changing ownership for disk 1c.01.3 (S/N 6SL3SWBE0000N24007QV) from unowned (ID -1) to filer01a (ID 151762815)
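Assigning disks one at a time gets tedious. disk assign also accepts a list of disk names, the keyword all, or -n <count> to grab a number of unowned disks; a sketch of the bulk forms (the disk names here are just illustrative; check the result with disk show -n before and after):

filer01a> disk assign 1c.01.1 1c.01.2 1c.01.3 -o filer01a
filer01a> disk assign -n 12 -o filer01b
filer01a> disk show -n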
Now, because I forgot to set this option:
filer01a*> priv set advanced
filer01a*> options disk.auto_assign off
You are changing option disk.auto_assign which applies to both members of the cluster in takeover mode. This value must be the same in both cluster members prior to any takeover or giveback, or that next takeover/giveback may not work correctly.
THIS happened: all unowned disks got auto-assigned to filer01a:
filer01a> disk show -v
  DISK       OWNER                  POOL   SERIAL NUMBER
------------ -------------          -----  -------------
1a.00.13     filer01b  (151762803)  Pool0  B9JGMTRT
1a.00.12     filer01b  (151762803)  Pool0  B9JB8DGF
1a.00.16     filer01b  (151762803)  Pool0  B9JMT0JF
1a.00.15     filer01b  (151762803)  Pool0  B9JM4N7T
1a.00.19     filer01b  (151762803)  Pool0  B9JMDT2T
1a.00.18     filer01b  (151762803)  Pool0  B9JHNPST
1a.00.14     filer01b  (151762803)  Pool0  B9JHNPLT
1a.00.17     filer01b  (151762803)  Pool0  B9JLT7UT
1c.01.9      filer01a  (151762815)  Pool0  6SL3T0EP0000N2401BQE
1c.01.10     filer01a  (151762815)  Pool0  6SL3QWQF0000N238L2U8
1c.01.4      filer01a  (151762815)  Pool0  6SL3SHRZ0000N240BGZE
1c.01.23     filer01a  (151762815)  Pool0  6SL3PR9T0000N237EL3Q
1c.01.12     filer01a  (151762815)  Pool0  6SL3QHLG0000N238D6W8
1c.01.11     filer01a  (151762815)  Pool0  6SL3PNB30000N2392UAG
1c.01.1      filer01a  (151762815)  Pool0  6SL3PF180000N24007YD
1c.01.6      filer01a  (151762815)  Pool0  6SL3SEE80000N2404HQC
1c.01.8      filer01a  (151762815)  Pool0  6SL3PDBR0000N2407Y4X
1c.01.16     filer01a  (151762815)  Pool0  6SL3R6KL0000N23907K2
1c.01.22     filer01a  (151762815)  Pool0  6SL3PBX20000N237NBWY
1c.01.13     filer01a  (151762815)  Pool0  6SL3RBAS0000N2395038
1c.01.20     filer01a  (151762815)  Pool0  6SL3PND40000N238H5GX
1c.01.19     filer01a  (151762815)  Pool0  6SL3P3NH0000N238608X
1c.01.18     filer01a  (151762815)  Pool0  6SL3PWDK0000N238L4QM
1c.01.5      filer01a  (151762815)  Pool0  6SL3S87F0000N239GPHT
1c.01.15     filer01a  (151762815)  Pool0  6SL3RC5M0000M125NTE1
1c.01.2      filer01a  (151762815)  Pool0  6SL3SB6E0000N2402QZE
1c.01.14     filer01a  (151762815)  Pool0  6SL3R1FP0000N239553G
1c.01.17     filer01a  (151762815)  Pool0  6SL3R3LJ0000N239575J
1c.01.21     filer01a  (151762815)  Pool0  6SL3QQQP0000N23903DJ
1c.01.7      filer01a  (151762815)  Pool0  6SL3SG830000N2404H3Z
1c.01.3      filer01a  (151762815)  Pool0  6SL3SWBE0000N24007QV
1a.00.2      filer01a  (151762815)  Pool0  B9JMDP0T
1a.00.0      filer01a  (151762815)  Pool0  B9JMZ14F
1a.00.7      filer01a  (151762815)  Pool0  B9JLS1MT
1a.00.1      filer01a  (151762815)  Pool0  B9JM4AUT
1a.00.5      filer01a  (151762815)  Pool0  B9JHNPJT
1a.00.4      filer01a  (151762815)  Pool0  B9JMYGNF
1a.00.3      filer01a  (151762815)  Pool0  B9JLT7ST
1a.00.6      filer01a  (151762815)  Pool0  B9JMDL9T
1c.01.0      filer01a  (151762815)  Pool0  6SL3SRHH0000N2409944
So I had to remove the ownership of the disks that got wrongly assigned:
filer01a*> disk remove_ownership 1c.01.12 1c.01.13 1c.01.14 1c.01.15 1c.01.16 1c.01.17 1c.01.18 1c.01.19 1c.01.20 1c.01.21 1c.01.22 1c.01.23
Disk 1c.01.12 will have its ownership removed
Disk 1c.01.13 will have its ownership removed
Disk 1c.01.14 will have its ownership removed
Disk 1c.01.15 will have its ownership removed
Disk 1c.01.16 will have its ownership removed
Disk 1c.01.17 will have its ownership removed
Disk 1c.01.18 will have its ownership removed
Disk 1c.01.19 will have its ownership removed
Disk 1c.01.20 will have its ownership removed
Disk 1c.01.21 will have its ownership removed
Disk 1c.01.22 will have its ownership removed
Disk 1c.01.23 will have its ownership removed
Volumes must be taken offline. Are all impacted volumes offline(y/n)?? y
This gave me this result:
filer01a*> disk show -v
  DISK       OWNER                  POOL   SERIAL NUMBER
------------ -------------          -----  -------------
1a.00.13     filer01b  (151762803)  Pool0  B9JGMTRT
1a.00.12     filer01b  (151762803)  Pool0  B9JB8DGF
1a.00.16     filer01b  (151762803)  Pool0  B9JMT0JF
1a.00.15     filer01b  (151762803)  Pool0  B9JM4N7T
1a.00.19     filer01b  (151762803)  Pool0  B9JMDT2T
1a.00.18     filer01b  (151762803)  Pool0  B9JHNPST
1a.00.14     filer01b  (151762803)  Pool0  B9JHNPLT
1a.00.17     filer01b  (151762803)  Pool0  B9JLT7UT
1c.01.9      filer01a  (151762815)  Pool0  6SL3T0EP0000N2401BQE
1c.01.10     filer01a  (151762815)  Pool0  6SL3QWQF0000N238L2U8
1c.01.4      filer01a  (151762815)  Pool0  6SL3SHRZ0000N240BGZE
1c.01.14     Not Owned              NONE   6SL3R1FP0000N239553G
1c.01.22     Not Owned              NONE   6SL3PBX20000N237NBWY
1c.01.11     filer01a  (151762815)  Pool0  6SL3PNB30000N2392UAG
1c.01.1      filer01a  (151762815)  Pool0  6SL3PF180000N24007YD
1c.01.6      filer01a  (151762815)  Pool0  6SL3SEE80000N2404HQC
1c.01.8      filer01a  (151762815)  Pool0  6SL3PDBR0000N2407Y4X
1c.01.23     Not Owned              NONE   6SL3PR9T0000N237EL3Q
1c.01.19     Not Owned              NONE   6SL3P3NH0000N238608X
1c.01.18     Not Owned              NONE   6SL3PWDK0000N238L4QM
1c.01.15     Not Owned              NONE   6SL3RC5M0000M125NTE1
1c.01.17     Not Owned              NONE   6SL3R3LJ0000N239575J
1c.01.5      filer01a  (151762815)  Pool0  6SL3S87F0000N239GPHT
1c.01.21     Not Owned              NONE   6SL3QQQP0000N23903DJ
1c.01.2      filer01a  (151762815)  Pool0  6SL3SB6E0000N2402QZE
1c.01.12     Not Owned              NONE   6SL3QHLG0000N238D6W8
1c.01.20     Not Owned              NONE   6SL3PND40000N238H5GX
1c.01.16     Not Owned              NONE   6SL3R6KL0000N23907K2
1c.01.7      filer01a  (151762815)  Pool0  6SL3SG830000N2404H3Z
1c.01.3      filer01a  (151762815)  Pool0  6SL3SWBE0000N24007QV
1c.01.13     Not Owned              NONE   6SL3RBAS0000N2395038
1a.00.2      filer01a  (151762815)  Pool0  B9JMDP0T
1a.00.0      filer01a  (151762815)  Pool0  B9JMZ14F
1a.00.7      filer01a  (151762815)  Pool0  B9JLS1MT
1a.00.1      filer01a  (151762815)  Pool0  B9JM4AUT
1a.00.5      filer01a  (151762815)  Pool0  B9JHNPJT
1a.00.4      filer01a  (151762815)  Pool0  B9JMYGNF
1a.00.3      filer01a  (151762815)  Pool0  B9JLT7ST
1a.00.6      filer01a  (151762815)  Pool0  B9JMDL9T
1c.01.0      filer01a  (151762815)  Pool0  6SL3SRHH0000N2409944
So now I only had to assign the other disks:
filer01a*> disk assign 1c.01.12 1c.01.13 1c.01.14 1c.01.15 1c.01.16 1c.01.17 1c.01.18 1c.01.19 1c.01.20 1c.01.21 1c.01.22 1c.01.23 -o filer01b
filer01a*> disk show -v
  DISK       OWNER                  POOL   SERIAL NUMBER
------------ -------------          -----  -------------
1a.00.13     filer01b  (151762803)  Pool0  B9JGMTRT
1a.00.12     filer01b  (151762803)  Pool0  B9JB8DGF
1a.00.16     filer01b  (151762803)  Pool0  B9JMT0JF
1a.00.15     filer01b  (151762803)  Pool0  B9JM4N7T
1a.00.19     filer01b  (151762803)  Pool0  B9JMDT2T
1a.00.18     filer01b  (151762803)  Pool0  B9JHNPST
1a.00.14     filer01b  (151762803)  Pool0  B9JHNPLT
1a.00.17     filer01b  (151762803)  Pool0  B9JLT7UT
1c.01.9      filer01a  (151762815)  Pool0  6SL3T0EP0000N2401BQE
1c.01.10     filer01a  (151762815)  Pool0  6SL3QWQF0000N238L2U8
1c.01.4      filer01a  (151762815)  Pool0  6SL3SHRZ0000N240BGZE
1c.01.12     filer01b  (151762803)  Pool0  6SL3QHLG0000N238D6W8
1c.01.11     filer01a  (151762815)  Pool0  6SL3PNB30000N2392UAG
1c.01.1      filer01a  (151762815)  Pool0  6SL3PF180000N24007YD
1c.01.6      filer01a  (151762815)  Pool0  6SL3SEE80000N2404HQC
1c.01.8      filer01a  (151762815)  Pool0  6SL3PDBR0000N2407Y4X
1c.01.14     filer01b  (151762803)  Pool0  6SL3R1FP0000N239553G
1c.01.13     filer01b  (151762803)  Pool0  6SL3RBAS0000N2395038
1c.01.17     filer01b  (151762803)  Pool0  6SL3R3LJ0000N239575J
1c.01.22     filer01b  (151762803)  Pool0  6SL3PBX20000N237NBWY
1c.01.23     filer01b  (151762803)  Pool0  6SL3PR9T0000N237EL3Q
1c.01.20     filer01b  (151762803)  Pool0  6SL3PND40000N238H5GX
1c.01.5      filer01a  (151762815)  Pool0  6SL3S87F0000N239GPHT
1c.01.16     filer01b  (151762803)  Pool0  6SL3R6KL0000N23907K2
1c.01.2      filer01a  (151762815)  Pool0  6SL3SB6E0000N2402QZE
1c.01.19     filer01b  (151762803)  Pool0  6SL3P3NH0000N238608X
1c.01.21     filer01b  (151762803)  Pool0  6SL3QQQP0000N23903DJ
1c.01.15     filer01b  (151762803)  Pool0  6SL3RC5M0000M125NTE1
1c.01.7      filer01a  (151762815)  Pool0  6SL3SG830000N2404H3Z
1c.01.3      filer01a  (151762815)  Pool0  6SL3SWBE0000N24007QV
1c.01.18     filer01b  (151762803)  Pool0  6SL3PWDK0000N238L4QM
1a.00.2      filer01a  (151762815)  Pool0  B9JMDP0T
1a.00.0      filer01a  (151762815)  Pool0  B9JMZ14F
1a.00.7      filer01a  (151762815)  Pool0  B9JLS1MT
1a.00.1      filer01a  (151762815)  Pool0  B9JM4AUT
1a.00.5      filer01a  (151762815)  Pool0  B9JHNPJT
1a.00.4      filer01a  (151762815)  Pool0  B9JMYGNF
1a.00.3      filer01a  (151762815)  Pool0  B9JLT7ST
1a.00.6      filer01a  (151762815)  Pool0  B9JMDL9T
1c.01.0      filer01a  (151762815)  Pool0  6SL3SRHH0000N2409944
Which got them nicely divided:
filer01a*> disk show -o filer01a
  DISK       OWNER                  POOL   SERIAL NUMBER
------------ -------------          -----  -------------
1c.01.9      filer01a  (151762815)  Pool0  6SL3T0EP0000N2401BQE
1c.01.10     filer01a  (151762815)  Pool0  6SL3QWQF0000N238L2U8
1c.01.4      filer01a  (151762815)  Pool0  6SL3SHRZ0000N240BGZE
1c.01.11     filer01a  (151762815)  Pool0  6SL3PNB30000N2392UAG
1c.01.1      filer01a  (151762815)  Pool0  6SL3PF180000N24007YD
1c.01.6      filer01a  (151762815)  Pool0  6SL3SEE80000N2404HQC
1c.01.8      filer01a  (151762815)  Pool0  6SL3PDBR0000N2407Y4X
1c.01.5      filer01a  (151762815)  Pool0  6SL3S87F0000N239GPHT
1c.01.2      filer01a  (151762815)  Pool0  6SL3SB6E0000N2402QZE
1c.01.7      filer01a  (151762815)  Pool0  6SL3SG830000N2404H3Z
1c.01.3      filer01a  (151762815)  Pool0  6SL3SWBE0000N24007QV
1a.00.2      filer01a  (151762815)  Pool0  B9JMDP0T
1a.00.0      filer01a  (151762815)  Pool0  B9JMZ14F
1a.00.7      filer01a  (151762815)  Pool0  B9JLS1MT
1a.00.1      filer01a  (151762815)  Pool0  B9JM4AUT
1a.00.5      filer01a  (151762815)  Pool0  B9JHNPJT
1a.00.4      filer01a  (151762815)  Pool0  B9JMYGNF
1a.00.3      filer01a  (151762815)  Pool0  B9JLT7ST
1a.00.6      filer01a  (151762815)  Pool0  B9JMDL9T
1c.01.0      filer01a  (151762815)  Pool0  6SL3SRHH0000N2409944
filer01a*> disk show -o filer01b
  DISK       OWNER                  POOL   SERIAL NUMBER
------------ -------------          -----  -------------
1a.00.13     filer01b  (151762803)  Pool0  B9JGMTRT
1a.00.12     filer01b  (151762803)  Pool0  B9JB8DGF
1a.00.16     filer01b  (151762803)  Pool0  B9JMT0JF
1a.00.15     filer01b  (151762803)  Pool0  B9JM4N7T
1a.00.19     filer01b  (151762803)  Pool0  B9JMDT2T
1a.00.18     filer01b  (151762803)  Pool0  B9JHNPST
1a.00.14     filer01b  (151762803)  Pool0  B9JHNPLT
1a.00.17     filer01b  (151762803)  Pool0  B9JLT7UT
1c.01.12     filer01b  (151762803)  Pool0  6SL3QHLG0000N238D6W8
1c.01.14     filer01b  (151762803)  Pool0  6SL3R1FP0000N239553G
1c.01.13     filer01b  (151762803)  Pool0  6SL3RBAS0000N2395038
1c.01.17     filer01b  (151762803)  Pool0  6SL3R3LJ0000N239575J
1c.01.22     filer01b  (151762803)  Pool0  6SL3PBX20000N237NBWY
1c.01.23     filer01b  (151762803)  Pool0  6SL3PR9T0000N237EL3Q
1c.01.20     filer01b  (151762803)  Pool0  6SL3PND40000N238H5GX
1c.01.16     filer01b  (151762803)  Pool0  6SL3R6KL0000N23907K2
1c.01.19     filer01b  (151762803)  Pool0  6SL3P3NH0000N238608X
1c.01.21     filer01b  (151762803)  Pool0  6SL3QQQP0000N23903DJ
1c.01.15     filer01b  (151762803)  Pool0  6SL3RC5M0000M125NTE1
1c.01.18     filer01b  (151762803)  Pool0  6SL3PWDK0000N238L4QM
Create aggr
Now you can create the aggregates; consider reading this page before you continue.
First see the aggregate status:
filer01a> aggr status -r
Aggregate aggr0 (online, raid_dp) (block checksums)
  Plex /aggr0/plex0 (online, normal, active)
    RAID group /aggr0/plex0/rg0 (normal)

      RAID Disk Device    HA  SHELF BAY CHAN Pool Type  RPM   Used (MB/blks)      Phys (MB/blks)
      --------- ------    ------------- ---- ---- ---- ----- --------------      --------------
      dparity   1a.00.0   1a  0     0   SA:A  -   BSAS  7200  1695466/3472315904  1695759/3472914816
      parity    1a.00.1   1a  0     1   SA:A  -   BSAS  7200  1695466/3472315904  1695759/3472914816
      data      1a.00.2   1a  0     2   SA:A  -   BSAS  7200  1695466/3472315904  1695759/3472914816

Spare disks

RAID Disk Device    HA  SHELF BAY CHAN Pool Type  RPM   Used (MB/blks)      Phys (MB/blks)
--------- ------    ------------- ---- ---- ---- ----- --------------      --------------
Spare disks for block or zoned checksum traditional volumes or aggregates
spare     1c.01.0   1c  1     0   SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
spare     1c.01.1   1c  1     1   SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
spare     1c.01.2   1c  1     2   SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
spare     1c.01.3   1c  1     3   SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
spare     1c.01.4   1c  1     4   SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
spare     1c.01.5   1c  1     5   SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
spare     1c.01.6   1c  1     6   SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
spare     1c.01.7   1c  1     7   SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
spare     1c.01.8   1c  1     8   SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
spare     1c.01.9   1c  1     9   SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
spare     1c.01.10  1c  1     10  SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
spare     1c.01.11  1c  1     11  SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
spare     1a.00.3   1a  0     3   SA:A  -   BSAS  7200  1695466/3472315904  1695759/3472914816
spare     1a.00.4   1a  0     4   SA:A  -   BSAS  7200  1695466/3472315904  1695759/3472914816
spare     1a.00.5   1a  0     5   SA:A  -   BSAS  7200  1695466/3472315904  1695759/3472914816
spare     1a.00.6   1a  0     6   SA:A  -   BSAS  7200  1695466/3472315904  1695759/3472914816
spare     1a.00.7   1a  0     7   SA:A  -   BSAS  7200  1695466/3472315904  1695759/3472914816

Partner disks

RAID Disk Device    HA  SHELF BAY CHAN Pool Type  RPM   Used (MB/blks)      Phys (MB/blks)
--------- ------    ------------- ---- ---- ---- ----- --------------      --------------
partner   1c.01.22  1c  1     22  SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
partner   1c.01.15  1c  1     15  SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
partner   1c.01.16  1c  1     16  SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
partner   1c.01.21  1c  1     21  SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
partner   1c.01.20  1c  1     20  SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
partner   1c.01.17  1c  1     17  SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
partner   1c.01.18  1c  1     18  SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
partner   1c.01.13  1c  1     13  SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
partner   1c.01.19  1c  1     19  SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
partner   1c.01.12  1c  1     12  SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
partner   1c.01.14  1c  1     14  SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
partner   1c.01.23  1c  1     23  SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
partner   1a.00.16  1a  0     16  SA:A  -   BSAS  7200  0/0                 1695759/3472914816
partner   1a.00.12  1a  0     12  SA:A  -   BSAS  7200  0/0                 1695759/3472914816
partner   1a.00.15  1a  0     15  SA:A  -   BSAS  7200  0/0                 1695759/3472914816
partner   1a.00.13  1a  0     13  SA:A  -   BSAS  7200  0/0                 1695759/3472914816
partner   1a.00.19  1a  0     19  SA:A  -   BSAS  7200  0/0                 1695759/3472914816
partner   1a.00.17  1a  0     17  SA:A  -   BSAS  7200  0/0                 1695759/3472914816
partner   1a.00.18  1a  0     18  SA:A  -   BSAS  7200  0/0                 1695759/3472914816
partner   1a.00.14  1a  0     14  SA:A  -   BSAS  7200  0/0                 1695759/3472914816
filer01a>
Now create the aggregates, considering the speed and type of the disks (do not mix them):
filer01a> aggr add aggr0 -d 1a.00.3 1a.00.4 1a.00.5 1a.00.6
Addition of 4 disks to the aggregate has completed.
Note: If you do not have mixed types of disks (so nothing to worry about), you can add disks like this: aggr add aggr0 7, which will just add 7 disks to the aggregate.
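When creating a new aggregate you have the same choice: let Data ONTAP pick disks by count and type, as the transcript below does with -T SAS, or name the disks explicitly with -d. A sketch of the explicit form (an alternative to the create command in the transcript, so don't run both):

filer01a> aggr create aggr1_SAS -t raid_dp -d 1c.01.0 1c.01.1 1c.01.2 1c.01.3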
filer01a> aggr status -r
Aggregate aggr0 (online, raid_dp) (block checksums)
  Plex /aggr0/plex0 (online, normal, active)
    RAID group /aggr0/plex0/rg0 (normal)

      RAID Disk Device    HA  SHELF BAY CHAN Pool Type  RPM   Used (MB/blks)      Phys (MB/blks)
      --------- ------    ------------- ---- ---- ---- ----- --------------      --------------
      dparity   1a.00.0   1a  0     0   SA:A  -   BSAS  7200  1695466/3472315904  1695759/3472914816
      parity    1a.00.1   1a  0     1   SA:A  -   BSAS  7200  1695466/3472315904  1695759/3472914816
      data      1a.00.2   1a  0     2   SA:A  -   BSAS  7200  1695466/3472315904  1695759/3472914816
      data      1a.00.3   1a  0     3   SA:A  -   BSAS  7200  1695466/3472315904  1695759/3472914816
      data      1a.00.4   1a  0     4   SA:A  -   BSAS  7200  1695466/3472315904  1695759/3472914816
      data      1a.00.5   1a  0     5   SA:A  -   BSAS  7200  1695466/3472315904  1695759/3472914816
      data      1a.00.6   1a  0     6   SA:A  -   BSAS  7200  1695466/3472315904  1695759/3472914816

Spare disks

RAID Disk Device    HA  SHELF BAY CHAN Pool Type  RPM   Used (MB/blks)      Phys (MB/blks)
--------- ------    ------------- ---- ---- ---- ----- --------------      --------------
Spare disks for block or zoned checksum traditional volumes or aggregates
spare     1c.01.0   1c  1     0   SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
spare     1c.01.1   1c  1     1   SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
spare     1c.01.2   1c  1     2   SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
spare     1c.01.3   1c  1     3   SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
spare     1c.01.4   1c  1     4   SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
spare     1c.01.5   1c  1     5   SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
spare     1c.01.6   1c  1     6   SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
spare     1c.01.7   1c  1     7   SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
spare     1c.01.8   1c  1     8   SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
spare     1c.01.9   1c  1     9   SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
spare     1c.01.10  1c  1     10  SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
spare     1c.01.11  1c  1     11  SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
spare     1a.00.7   1a  0     7   SA:A  -   BSAS  7200  1695466/3472315904  1695759/3472914816

Partner disks

RAID Disk Device    HA  SHELF BAY CHAN Pool Type  RPM   Used (MB/blks)      Phys (MB/blks)
--------- ------    ------------- ---- ---- ---- ----- --------------      --------------
partner   1c.01.22  1c  1     22  SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
partner   1c.01.15  1c  1     15  SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
partner   1c.01.16  1c  1     16  SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
partner   1c.01.21  1c  1     21  SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
partner   1c.01.20  1c  1     20  SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
partner   1c.01.17  1c  1     17  SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
partner   1c.01.18  1c  1     18  SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
partner   1c.01.13  1c  1     13  SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
partner   1c.01.19  1c  1     19  SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
partner   1c.01.12  1c  1     12  SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
partner   1c.01.14  1c  1     14  SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
partner   1c.01.23  1c  1     23  SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
partner   1a.00.16  1a  0     16  SA:A  -   BSAS  7200  0/0                 1695759/3472914816
partner   1a.00.12  1a  0     12  SA:A  -   BSAS  7200  0/0                 1695759/3472914816
partner   1a.00.15  1a  0     15  SA:A  -   BSAS  7200  0/0                 1695759/3472914816
partner   1a.00.13  1a  0     13  SA:A  -   BSAS  7200  0/0                 1695759/3472914816
partner   1a.00.19  1a  0     19  SA:A  -   BSAS  7200  0/0                 1695759/3472914816
partner   1a.00.17  1a  0     17  SA:A  -   BSAS  7200  0/0                 1695759/3472914816
partner   1a.00.18  1a  0     18  SA:A  -   BSAS  7200  0/0                 1695759/3472914816
partner   1a.00.14  1a  0     14  SA:A  -   BSAS  7200  0/0                 1695759/3472914816
filer01a>

filer01a> aggr create aggr1_SAS -T SAS -t raid_dp 11
Creation of an aggregate with 11 disks has completed.
filer01a> aggr status -r
Aggregate aggr1_SAS (online, raid_dp) (block checksums)
  Plex /aggr1_SAS/plex0 (online, normal, active)
    RAID group /aggr1_SAS/plex0/rg0 (normal)

      RAID Disk Device    HA  SHELF BAY CHAN Pool Type  RPM   Used (MB/blks)      Phys (MB/blks)
      --------- ------    ------------- ---- ---- ---- ----- --------------      --------------
      dparity   1c.01.0   1c  1     0   SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
      parity    1c.01.1   1c  1     1   SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
      data      1c.01.2   1c  1     2   SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
      data      1c.01.3   1c  1     3   SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
      data      1c.01.4   1c  1     4   SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
      data      1c.01.5   1c  1     5   SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
      data      1c.01.6   1c  1     6   SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
      data      1c.01.7   1c  1     7   SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
      data      1c.01.8   1c  1     8   SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
      data      1c.01.9   1c  1     9   SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
      data      1c.01.10  1c  1     10  SA:A  -   SAS   15000 560000/1146880000   560208/1147307688

Aggregate aggr0 (online, raid_dp) (block checksums)
  Plex /aggr0/plex0 (online, normal, active)
    RAID group /aggr0/plex0/rg0 (normal)

      RAID Disk Device    HA  SHELF BAY CHAN Pool Type  RPM   Used (MB/blks)      Phys (MB/blks)
      --------- ------    ------------- ---- ---- ---- ----- --------------      --------------
      dparity   1a.00.0   1a  0     0   SA:A  -   BSAS  7200  1695466/3472315904  1695759/3472914816
      parity    1a.00.1   1a  0     1   SA:A  -   BSAS  7200  1695466/3472315904  1695759/3472914816
      data      1a.00.2   1a  0     2   SA:A  -   BSAS  7200  1695466/3472315904  1695759/3472914816
      data      1a.00.3   1a  0     3   SA:A  -   BSAS  7200  1695466/3472315904  1695759/3472914816
      data      1a.00.4   1a  0     4   SA:A  -   BSAS  7200  1695466/3472315904  1695759/3472914816
      data      1a.00.5   1a  0     5   SA:A  -   BSAS  7200  1695466/3472315904  1695759/3472914816
      data      1a.00.6   1a  0     6   SA:A  -   BSAS  7200  1695466/3472315904  1695759/3472914816

Spare disks

RAID Disk Device    HA  SHELF BAY CHAN Pool Type  RPM   Used (MB/blks)      Phys (MB/blks)
--------- ------    ------------- ---- ---- ---- ----- --------------      --------------
Spare disks for block or zoned checksum traditional volumes or aggregates
spare     1c.01.11  1c  1     11  SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
spare     1a.00.7   1a  0     7   SA:A  -   BSAS  7200  1695466/3472315904  1695759/3472914816

Partner disks

RAID Disk Device    HA  SHELF BAY CHAN Pool Type  RPM   Used (MB/blks)      Phys (MB/blks)
--------- ------    ------------- ---- ---- ---- ----- --------------      --------------
partner   1c.01.22  1c  1     22  SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
partner   1c.01.15  1c  1     15  SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
partner   1c.01.16  1c  1     16  SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
partner   1c.01.21  1c  1     21  SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
partner   1c.01.20  1c  1     20  SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
partner   1c.01.17  1c  1     17  SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
partner   1c.01.18  1c  1     18  SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
partner   1c.01.13  1c  1     13  SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
partner   1c.01.19  1c  1     19  SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
partner   1c.01.12  1c  1     12  SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
partner   1c.01.14  1c  1     14  SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
partner   1c.01.23  1c  1     23  SA:A  -   SAS   15000 560000/1146880000   560208/1147307688
partner   1a.00.16  1a  0     16  SA:A  -   BSAS  7200  0/0                 1695759/3472914816
partner   1a.00.12  1a  0     12  SA:A  -   BSAS  7200  0/0                 1695759/3472914816
partner   1a.00.15  1a  0     15  SA:A  -   BSAS  7200  0/0                 1695759/3472914816
partner   1a.00.13  1a  0     13  SA:A  -   BSAS  7200  0/0                 1695759/3472914816
partner   1a.00.19  1a  0     19  SA:A  -   BSAS  7200  0/0                 1695759/3472914816
partner   1a.00.17  1a  0     17  SA:A  -   BSAS  7200  0/0                 1695759/3472914816
partner   1a.00.18  1a  0     18  SA:A  -   BSAS  7200  0/0                 1695759/3472914816
partner   1a.00.14  1a  0     14  SA:A  -   BSAS  7200  0/0                 1695759/3472914816
filer01a>
This makes your filers ready for use!
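As a final sanity check before taking the filers into production, I'd verify the overall state on both heads; these are all standard 7-mode commands (aggr status -s lists the remaining spares, df -A shows aggregate usage):

filer01a> cf status
filer01a> aggr status -s
filer01a> vol status
filer01a> df -A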