All About Adaptec Arcconf, ZFS, and OpenIndiana

This is a quick article about using Adaptec RAID controllers with OpenIndiana. In this case, I am using an Adaptec 5805. The driver handles both hardware-based RAID and JBOD, which is an improvement over the driver Oracle ships with Solaris and OpenSolaris. OpenIndiana does not ship with the StorMan package, which provides arcconf, so you will have to fetch StorMan from Adaptec yourself and install it.
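
Installation is quick once you have the download from Adaptec's support site. A minimal sketch, assuming the archive unpacks to an SVR4 package (the archive and package file names below are illustrative; use whatever the current download is actually called):

# unpack the Adaptec download and install the SVR4 package
cd /tmp
unzip storman_solaris_x64.zip         # hypothetical archive name
pkgadd -d StorMan.pkg                 # installs into /opt/StorMan
cd /opt/StorMan && ./arcconf getconfig 1   # sanity check: should find the controller

All of the examples below are run from /opt/StorMan.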

Like most RAID command-line tools, arcconf has a myriad of options to support your ZFS NAS server. We are going to cover the following typical operations; I hope these examples give readers a feel for how to use arcconf.

  1. How to destroy all JBODs
  2. How to create a hardware RAID10 logical disk
  3. How to check status of a logical disk
  4. Using iostat to check the existence of a logical drive
  5. How to lay down a UFS file system on a logical disk
  6. How to destroy a logical disk
  7. How to get a list of all physical drives
  8. How to turn all physical drives into JBODs suitable for ZFS
  9. How to create a hardware RAID6 logical disk
  10. How to verify the status of a logical disk
  11. How to start a verify_fix operation
  12. How to extract the event log from the RAID controller in XML format
  13. How to extract the event log from the RAID controller in tabular format
  14. How to extract the device log from the RAID controller in XML format
  15. How to extract the device log from the RAID controller in tabular format
  16. How to convert time stamps in the RAID controller to real time
  17. How to rescan the bus for new or removed drives
  18. How to convert a RAID10 volume to RAID5 (I never recommend using RAID5)
  19. Replace a failed JBOD disk

Destroy all JBODs

 

root@vs-lm1577:/opt/StorMan# ./arcconf delete 1 JBOD ALL noprompt
Controllers found: 1

All data in JBOD 0,8 will be lost.
Deleting: JBOD 0,8

All data in JBOD 0,9 will be lost.
Deleting: JBOD 0,9

All data in JBOD 0,10 will be lost.
Deleting: JBOD 0,10

All data in JBOD 0,11 will be lost.
Deleting: JBOD 0,11

// there are 24 disks in this box; we'll stop here, as by now you get the idea

Command completed successfully.

Create RAID10 Volume

I recommend letting ZFS handle your mirrors instead of the RAID controller. I am sure you have your reasons for wanting the controller to do it, but ZFS is better than hardware RAID.
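
If you do go the ZFS route, a minimal sketch of the equivalent striped-mirror pool would look like this (the pool name and the c2t*d0 device names are hypothetical; use whatever iostat -En or format reports on your system):

zpool create tank mirror c2t8d0 c2t9d0 mirror c2t10d0 c2t11d0

If you still want the controller to do the mirroring, here is the arcconf command to build the RAID10 volume: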

root@vs-lm1577:/opt/StorMan# ./arcconf create 1 LOGICALDRIVE \
 Name TestVolume Max 10 0,8 0,9 0,10 0,11 noprompt
Controllers found: 1

Creating logical device: TestVolume

Command completed successfully.

 

Check Status of Existing Volume

 

Check the status of RAID volume 0:

root@vs-lm1577:/opt/StorMan# ./arcconf getconfig 1 LD 0
Controllers found: 1
----------------------------------------------------------------------
Logical device information
----------------------------------------------------------------------
Logical device number 0
Logical device name                      : TestVolume
RAID level                               : 10
Status of logical device                 : Optimal
Size                                     : 1906678 MB
Stripe-unit size                         : 256 KB
Read-cache mode                          : Enabled
MaxIQ preferred cache setting            : Disabled
MaxIQ cache setting                      : Disabled
Write-cache mode                         : Enabled (write-back)
Write-cache setting                      : Enabled (write-back) when protected by battery/ZMM
Partitioned                              : Yes
Protected by Hot-Spare                   : No
Bootable                                 : Yes
Failed stripes                           : No
Power settings                           : Disabled
--------------------------------------------------------
Logical device segment information
--------------------------------------------------------
Group 0, Segment 0                       : Present (0,8)      WD-WMATV4097675
Group 0, Segment 1                       : Present (0,9)      WD-WMATV4111023
Group 1, Segment 0                       : Present (0,10)      WD-WMATV4095249
Group 1, Segment 1                       : Present (0,11)      WD-WMATV4109136

 

Using iostat to Check Existence of a Logical Drive

 

In OpenIndiana, iostat -En sees the RAID10 volume

root@vs-lm1577:/opt/StorMan# iostat -En

c1t0d0           Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: Adaptec  Product: RAID 5805        Revision: V1.0 Serial No:
Size: 1999.31GB <1999307276288 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0

 

How to Lay Down a UFS File System on a Logical Disk

root@vs-lm1577:/opt/StorMan# newfs /dev/dsk/c1t0d0s0
newfs: construct a new file system /dev/rdsk/c1t0d0s0: (y/n)? y
Warning: 304 sector(s) in last cylinder unallocated
/dev/rdsk/c1t0d0s0:     3904880336 sectors in 635560 cylinders of 48 tracks, 128 sectors
1906679.9MB in 4445 cyl groups (143 c/g, 429.00MB/g, 448 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 878752, 1757472, 2636192, 3514912, 4393632, 5272352, 6151072, 7029792,
7908512,
Initializing cylinder groups:
...............................................................................
.........
super-block backups for last 10 cylinder groups at:
3896557984, 3897436704, 3898315424, 3899194144, 3900072864, 3900951584,
3901830304, 3902709024, 3903587744, 3904466464
root@vs-lm1577:/opt/StorMan# mkdir /tmp/a
root@vs-lm1577:/opt/StorMan# mount /dev/dsk/c1t0d0s0 /tmp/a
root@vs-lm1577:/opt/StorMan# df -h /tmp/a
Filesystem            Size  Used Avail Use% Mounted on
/dev/dsk/c1t0d0s0     1.9T  257M  1.8T   1% /tmp/a
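
The mount above will not survive a reboot (and /tmp/a is just a scratch directory for the demo). If you want the file system mounted automatically, an /etc/vfstab entry along these lines would do it; the /export/data mount point is hypothetical, so substitute your own:

/dev/dsk/c1t0d0s0  /dev/rdsk/c1t0d0s0  /export/data  ufs  2  yes  -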

Destroy a Logical Disk

root@vs-lm1577:/opt/StorMan# ./arcconf delete 1 logicaldrive 0 noprompt
Controllers found: 1

WARNING: logical device 0 may contain a partition.
All data in logical device 0 will be lost.
Deleting: logical device 0 ("TestVolume")

Command completed successfully.

Get list of all physical drives

root@vs-lm1577:/opt/StorMan# ./arcconf  getconfig 1 PD | \
grep "Reported Channel,Device(T:L)" |grep -v 0:0 | \
cut -d: -f 3 |cut -d'(' -f 1

0,8
0,9
0,11
0,12
0,13
0,14
0,15
0,16
0,17
0,18
0,19
0,21
0,22
0,23
0,24
0,25
0,26
0,27
0,28
0,29
0,31

How to turn all physical drives into JBODs suitable for ZFS

root@vs-lm1577:/opt/StorMan# list=`./arcconf  getconfig 1 PD |\
 grep "Reported Channel,Device(T:L)" |grep -v 0:0 |cut -d: -f 3 | \
cut -d'(' -f 1 | sed 's/,/ /g'` && ./arcconf create 1 jbod $list

Controllers found: 1
Created JBOD: 0,8
Created JBOD: 0,9
Created JBOD: 0,11
Created JBOD: 0,12
Created JBOD: 0,13
Created JBOD: 0,14
Created JBOD: 0,15
Created JBOD: 0,16
Created JBOD: 0,17
Created JBOD: 0,18
Created JBOD: 0,19
Created JBOD: 0,21
Created JBOD: 0,22
Created JBOD: 0,23
Created JBOD: 0,24
Created JBOD: 0,25
Created JBOD: 0,26
Created JBOD: 0,27
Created JBOD: 0,28
Created JBOD: 0,29
Created JBOD: 0,31

Command completed successfully.
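
Since these JBODs are destined for ZFS, this is normally the point where you hand them straight to zpool rather than build another hardware volume. A minimal sketch, assuming the JBODs show up to the OS as c2t8d0 and up (hypothetical names; check iostat -En or format for the real ones):

zpool create tank raidz2 c2t8d0 c2t9d0 c2t11d0 c2t12d0 c2t13d0 c2t14d0 c2t15d0 c2t16d0

The next section instead reuses the same drive list to show what a hardware RAID6 volume looks like.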

How to create a hardware RAID6 logical disk

root@vs-lm1577:/opt/StorMan# ./arcconf create 1 logicaldrive max 6 $list
Controllers found: 1

Do you want to add a logical device to the configuration?
Press y, then ENTER to continue or press ENTER to abort: y

Creating logical device: Device 0

Command completed successfully.

How to start a verify_fix operation

What to do when your array has an 'Impacted' status: basically, this means the array build process has not yet completed. You can verify this with the following command:

root@vs-lm1577:/opt/StorMan# ./arcconf getstatus 1
Controllers found: 1
Logical device Task:
Logical device                 : 0
Task ID                        : 100
Current operation              : Build/Verify
Status                         : In Progress
Priority                       : High
Percentage complete            : 0

Here we see that while we created the giant RAID6 volume, background initialization is still going on. The volume is healthy, but performance is degraded until the initialization completes.
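
If you want to keep an eye on it, a quick shell loop around getstatus (just a sketch; adjust the sleep interval to taste) will show the percentage ticking up:

while true; do
    ./arcconf getstatus 1 | grep 'Percentage complete'
    sleep 300
done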

This is how you start a build/verify check on your volume. It fails in my example because the job is already running.

root@vs-lm1577:/opt/StorMan# ./arcconf task start 1 logicaldrive 0 verify_fix noprompt
Controllers found: 1
Task 'Build/Verify' is already running on this device.  Aborting

Command aborted.

How to extract the event log from the RAID controller in XML format

root@vs-lm1577:/opt/StorMan# ./arcconf getlogs 1 event
Controllers found: 1

Command completed successfully.

How to extract the event log from the RAID controller in tabular format

root@vs-lm1577:/opt/StorMan# ./arcconf getlogs 1 device tabular
Controllers found: 1

ControllerLog
controllerID ..................... 0
type ............................. 0
time ............................. 1308362117
version .......................... 3
tableFull ........................ false

How to convert time stamps in the RAID controller to real time

root@vs-lm1577:/opt/StorMan# perl -e 'print scalar localtime(shift),"\n";' 1308362117
Fri Jun 17 18:55:17 2011
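
To convert every timestamp in a log dump in one pass, something like this works (a sketch that assumes the "time ..... <epoch>" line format shown above):

./arcconf getlogs 1 device tabular | awk '$1 == "time" {print $NF}' | \
  while read t; do perl -e 'print scalar localtime(shift),"\n";' "$t"; done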

How to rescan the Bus for new or removed drives

root@vs-lm1577:/opt/StorMan# ./arcconf rescan 1
Controllers found: 1

Command completed successfully.

How to convert a RAID10 volume to RAID5

Here we are using the RAID10 volume created above. First, let's view it:

root@vs-lm1577:/opt/StorMan# ./arcconf getconfig 1 ld 0
Controllers found: 1
----------------------------------------------------------------------
Logical device information
----------------------------------------------------------------------
Logical device number 0
   Logical device name                      : TestVolume
   RAID level                               : 10
   Status of logical device                 : Optimal
   Size                                     : 1906678 MB
   Stripe-unit size                         : 256 KB
   Read-cache mode                          : Enabled
   MaxIQ preferred cache setting            : Disabled
   MaxIQ cache setting                      : Disabled
   Write-cache mode                         : Enabled (write-back)
   Write-cache setting                      : Enabled (write-back) when protected by battery/ZMM
   Partitioned                              : No
   Protected by Hot-Spare                   : No
   Bootable                                 : Yes
   Failed stripes                           : No
   Power settings                           : Disabled
   --------------------------------------------------------
   Logical device segment information
   --------------------------------------------------------
   Group 0, Segment 0                       : Present (0,8)      WD-WMATV4097675
   Group 0, Segment 1                       : Present (0,9)      WD-WMATV4111023
   Group 1, Segment 0                       : Present (0,10)      WD-WMATV4095249
   Group 1, Segment 1                       : Present (0,11)      WD-WMATV4109136

Yep, it is still there.

Now, let’s convert it to RAID5

root@vs-lm1577:/opt/StorMan# ./arcconf modify 1 from 0 to max \
5 0 8 0 9 0 10 0 11 noprompt

Controllers found: 1
Reconfiguring logical device: TestVolume

Command completed successfully.

and now verify…

root@vs-lm1577:/opt/StorMan# ./arcconf getconfig 1 ld 0
Controllers found: 1
----------------------------------------------------------------------
Logical device information
----------------------------------------------------------------------
Logical device number 0
Logical device name                      : TestVolume
RAID level                               : 5
Status of logical device                 : Logical device Reconfiguring
Size                                     : 2860022 MB
Stripe-unit size                         : 256 KB
Read-cache mode                          : Enabled
MaxIQ preferred cache setting            : Disabled
MaxIQ cache setting                      : Disabled
Write-cache mode                         : Enabled (write-back)
Write-cache setting                      : Enabled (write-back) when protected by battery/ZMM
Partitioned                              : No
Protected by Hot-Spare                   : No
Bootable                                 : Yes
Failed stripes                           : No
Power settings                           : Disabled
--------------------------------------------------------
Logical device segment information
--------------------------------------------------------
Segment 0                                : Present (0,8)      WD-WMATV4097675
Segment 1                                : Present (0,9)      WD-WMATV4111023
Segment 2                                : Present (0,10)      WD-WMATV4095249
Segment 3                                : Present (0,11)      WD-WMATV4109136

Yep, it worked as advertised.
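
Note the status line above: "Logical device Reconfiguring". The RAID level migration runs in the background, and just as with the RAID6 build earlier, you can watch its progress:

./arcconf getstatus 1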

Replace a Failed JBOD Disk

See my previous article for the full procedure for replacing a failed JBOD.
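
The short version, for reference (a rough sketch only; the channel/device position, pool name, and ZFS device name below are hypothetical): pull the dead drive, insert the replacement, and then:

# make the controller notice the swapped drive
./arcconf rescan 1
# expose the new drive as a JBOD again (channel 0, device 8 as an example)
./arcconf create 1 jbod 0 8
# have ZFS resilver onto the replacement (device name is hypothetical)
zpool replace tank c2t8d0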


 


6 Responses to All About Adaptec Arcconf, ZFS, and OpenIndiana

  1. Jason J. W. Williams says:

    Do the Adaptec cards expose the hard drive model and serial information to the OS when in JBOD mode?

    • Yes they do. It is a decent JBOD implementation. I advise against using the Adaptec 5805 cards with 1TB WD drives, though. At AOL we saw a number of databases (on Linux) become corrupted. WD pointed their finger at Adaptec, and Adaptec pointed their finger at WD.

      ZFS would occasionally detect checksum errors on various blocks. On two occasions I observed corrupted blocks on both mirrors. In my case I was able to copy the corrupted file from a sister system, but you may want to avoid the problem in the first place.

      thanks,
      j.

  2. Joe says:

    Hello. Do you know what it means when all disks are present but one volume shows as degraded? I have two mirrors and one shows as degraded. All disks are present. It's been about two weeks since the volume started to show as degraded.

    • A degraded volume generally means one of the disks has failed and you are at risk of losing all of your data, suffering degraded performance, or both.

      I recommend you replace the failed disk as soon as possible. Even though the disk still shows up in inventory, it may be experiencing a large number of read or write errors, which is why the RAID controller has failed it.

      Good luck,
      j.

  3. Thanks for generating the errata.
