Monitor/Rebuild Hardware RAID (Windows)


For Dedicated Server Windows with hardware RAID

Here you will learn how to check the status of the hardware RAID and how to rebuild it after a hard disk replacement.

Please note: The values shown in this article are examples and may differ from your RAID.

Identify hardware controller

Two types of hardware controllers are used in the 1&1 IONOS root servers: LSI 3ware and Areca.

You can check which controller is installed in your server in the Windows Device Manager under Storage controllers.

LSI 3ware RAID

tw_cli

Download the 3ware Command Line Interface (tw_cli) and run it on your server. (On the provider's download page, search for "CLI" and select "Software" on the results page.)

The help command returns all available commands:

# tw_cli
//XXX> help

Copyright(c) 2012 LSI

LSI/3ware CLI (version 2.00.11.022)


Commands  Description
-------------------------------------------------------------------
focus     Changes from one object to another. For Interactive Mode Only!
show      Displays information about controller(s), unit(s) and port(s).
flush     Flush write cache data to units in the system.
rescan    Rescan all empty ports for new unit(s) and disk(s).
update    Update controller firmware from an image file.
commit    Commit dirty DCB to storage on controller(s). (Windows only)
/cx       Controller specific commands.
/cx/ux    Unit specific commands.
/cx/px    Port specific commands.
/cx/phyx  Phy specific commands.
/cx/bbu   BBU specific commands. (9000 series)
/cx/ex    Enclosure specific commands. (9690SA, 9750)
/ex       Enclosure specific commands. (9550SX/9650SE)


Certain commands are qualified with constraints of controller type/model support.
Please consult the tw_cli documentation for explanation of the controller-qualifiers.

Type help <command> to get more details about a particular command.
For more detailed information, see the tw_cli documentation.

//XXX>

info displays information about the RAID and its current status. In this example, the RAID 5 unit consists of three 1.36 TB disks:

//XXXX> info

Ctl  Model    Ports  Drives  Units  NotOpt  RRate  VRate  BBU
------------------------------------------------------------------------
c0   9750-4i  3      3       1      0       2      1      -

//XXXX> info c0

Unit  UnitType  Status  %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy
-----------------------------------------------------------------------------
u0    RAID-5    OK      -       -       256K    2793.95   RiW    ON

VPort  Status  Unit  Size     Type  Phy  Encl-Slot  Model
-------------------------------------------------------------------------------
p0     OK      u0    1.36 TB  SATA  0    -          ST1500L003-9VT16L
p1     OK      u0    1.36 TB  SATA  0    -          ST1500L003-9VT16L
p2     OK      u0    1.36 TB  SATA  0    -          ST1500L003-9VT16L
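For automated monitoring, the Status columns of this output can be parsed. Below is a minimal Python sketch; the sample text mirrors the output above, but the parsing rules are our own assumption, not an official tw_cli interface (in practice you would capture the output with subprocess instead of a string literal):

```python
# Parse the Status columns of `tw_cli info c0` output and report anything
# that is not healthy. Format assumption based on the sample output above.
SAMPLE = """\
Unit  UnitType  Status  %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy
u0    RAID-5    OK      -       -       256K    2793.95   RiW    ON

VPort  Status  Unit  Size     Type  Phy  Encl-Slot  Model
p0     OK      u0    1.36 TB  SATA  0    -          ST1500L003-9VT16L
p1     OK      u0    1.36 TB  SATA  0    -          ST1500L003-9VT16L
p2     OK      u0    1.36 TB  SATA  0    -          ST1500L003-9VT16L
"""

def failing_entries(output):
    """Return (name, status) for every unit/port whose status is not OK/VERIFYING."""
    bad = []
    for line in output.splitlines():
        f = line.split()
        if not f or len(f[0]) < 2 or not f[0][1:].isdigit():
            continue
        if f[0][0] == "u":        # unit line: Unit UnitType Status ...
            name, status = f[0], f[2]
        elif f[0][0] == "p":      # port line: VPort Status ...
            name, status = f[0], f[1]
        else:
            continue
        if status not in ("OK", "VERIFYING"):
            bad.append((name, status))
    return bad

print(failing_entries(SAMPLE))  # -> []
```

An empty list means all units and ports are healthy; anything else should trigger an alert.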

show alarms displays the latest alarm messages:

//XXXX> show alarms

Ctl  Date                        Severity  AEN Message
------------------------------------------------------------------------------
c0   [Wed Feb 01 2014 03:25:11]  INFO      Rebuild started: unit=0
c0   [Wed Feb 01 2014 08:13:31]  INFO      Rebuild completed: unit=0
c0   [Wed Feb 01 2014 08:14:13]  INFO      Initialize started: unit=0
c0   [Wed Feb 01 2014 08:14:13]  INFO      Initialize completed: unit=0

In the event of an error, the output might look like this. Here the disk on the third port (vport 2) of unit 0 has failed:

//XXXX> show alarms

Ctl  Date                        Severity  AEN Message
------------------------------------------------------------------------------
c0   [Wed Feb 02 2014 08:22:10]  INFO      Rebuild started: unit=0
c0   [Wed Feb 02 2014 08:14:13]  ERROR     Unit degraded: unit=0, vport 2
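When scripting health checks, the Severity column can be filtered for ERROR entries. A minimal Python sketch, assuming the alarm format shown above (the severity follows the bracketed timestamp):

```python
# Extract ERROR entries from `tw_cli show alarms` output.
# Format assumption: severity is the first word after the bracketed timestamp.
ALARMS = """\
Ctl  Date                        Severity  AEN Message
c0   [Wed Feb 02 2014 08:22:10]  INFO      Rebuild started: unit=0
c0   [Wed Feb 02 2014 08:14:13]  ERROR     Unit degraded: unit=0, vport 2
"""

def error_alarms(output):
    """Return the message of every alarm whose severity is ERROR."""
    errors = []
    for line in output.splitlines():
        if "]" not in line:
            continue
        rest = line.split("]", 1)[1].split(None, 1)  # [severity, message]
        if rest and rest[0] == "ERROR":
            errors.append(rest[1] if len(rest) > 1 else "")
    return errors

print(error_alarms(ALARMS))  # -> ['Unit degraded: unit=0, vport 2']
```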

maint remove c0 p2 removes the defective hard disk on the third port (p2) from the RAID:

//XXXX> maint remove c0 p2
Removing port /c0/p2 ... Done.

After replacing the defective disk, a maint rescan is necessary for the controller to recognize the new disk:

//XXXX> maint rescan
Rescanning controller /c0 for units and drives ...Done.
Found the following unit(s): [none].
Found the following drive(s): [/c0/p2].

The new disk on the third port can then be added back to the unit with maint rebuild c0 u0 p2, which starts the rebuild:

//XXXX> maint rebuild c0 u0 p2
Sending rebuild start request to /c0/u0 on 1 disk(s) [2] ... Done.

Display the rebuild status:

//XXXX> info c0

Unit  UnitType  Status      %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy
------------------------------------------------------------------------------------
u0    RAID-5    REBUILDING  0       -       256k    232.885   RiW    ON

Port  Status    Unit  Size     Type  Phy  Encl-Slot  Model
------------------------------------------------------------------------------------
p0    OK        u0    1.36 TB  SATA  0    -          ST1500L003-9VT16L
p1    OK        u0    1.36 TB  SATA  0    -          ST1500L003-9VT16L
p2    DEGRADED  u0    1.36 TB  SATA  1    -          ST1500L003-9VT16L
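The rebuild progress (%RCmpl) can also be read out programmatically. A minimal Python sketch under the same format assumption as above, useful for polling until the rebuild is complete:

```python
# Read the rebuild progress (%RCmpl) of a rebuilding unit from
# `tw_cli info c0` output; the column layout is assumed from the sample above.
REBUILD = """\
Unit  UnitType  Status      %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy
u0    RAID-5    REBUILDING  0       -       256k    232.885   RiW    ON
"""

def rebuild_progress(output):
    """Return %RCmpl of the first REBUILDING unit, or None if no rebuild runs."""
    for line in output.splitlines():
        f = line.split()
        if (len(f) >= 4 and f[0][:1] == "u" and f[0][1:].isdigit()
                and f[2] == "REBUILDING"):
            value = f[3].rstrip("%")
            return int(value) if value.isdigit() else None
    return None

print(rebuild_progress(REBUILD))  # -> 0
```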

3dm2 (3ware Disk Manager 2)

For further information on installation, configuration and application, please refer to the 3ware documentation (http://www.3ware.com/support/userdocs.asp).

Areca RAID

Download the Windows CLI utility from Areca and run it on your server.

You can download the complete CLI manual from Areca at http://www.areca.us/support/download/RaidCards/Documents/Manual_Spec/.

Some example commands are listed below:

Copyright (c) 2004 Areca, Inc. All Rights Reserved.
Areca CLI, Version: 1.71.240( Windows )


Controllers List
----------------------------------------
Controller#01(PCI): ARC-1110
Current Controller: Controller#01
----------------------------------------

CMD    Description
==========================================================
main   Show Command Categories.
set    General Settings.
rsf    RaidSet Functions.
vsf    VolumeSet Functions.
disk   Physical Drive Functions.
sys    System Functions.
net    Ethernet Functions.
event  Event Functions.
hw     Hardware Monitor Information.
exit   Exit CLI.
==========================================================
Command Format: <CMD> [Sub-Command] [Parameters].
Note: Use <CMD> -h or -help to get details.
CLI>

Each category command can be combined with info to query information. For example, hw info displays the hardware monitor information (fan speed and temperatures):

CLI> hw info
The Hardware Monitor Information
===========================================
Fan#1 Speed (RPM) : 2673
HDD #1 Temp. : 48
HDD #2 Temp. : 47
HDD #3 Temp. : 51
HDD #4 Temp. : 0
===========================================
GuiErrMsg<0x00>: Success.

CLI>
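These sensor readings can be checked in a monitoring script. A minimal Python sketch, assuming the `hw info` line format shown above; the 55 °C default threshold is an arbitrary example, not an Areca recommendation:

```python
# Flag disks whose temperature exceeds a threshold in Areca `hw info` output.
# A reading of 0 indicates an empty slot and never exceeds the threshold.
HW = """\
Fan#1 Speed (RPM) : 2673
HDD #1 Temp.      : 48
HDD #2 Temp.      : 47
HDD #3 Temp.      : 51
HDD #4 Temp.      : 0
"""

def hot_disks(output, limit=55):
    """Return (sensor, temperature) pairs whose value exceeds `limit`."""
    hot = []
    for line in output.splitlines():
        if "Temp." in line and ":" in line:
            name, _, value = line.partition(":")
            temp = int(value.strip())
            if temp > limit:
                hot.append((name.strip(), temp))
    return hot

print(hot_disks(HW, limit=50))  # -> [('HDD #3 Temp.', 51)]
```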

disk info displays information about the hard disks:

CLI> disk info
 #  ModelName    Serial#   FirmRev  Capacity  State
===============================================================================
 1  ST3750640AS  5QD5G7Z1  3.AAK    750.2GB   RaidSet Member(1)
 2  ST3750640AS  5QD5G6JR  3.AAK    750.2GB   RaidSet Member(1)
 3  ST3750640AS  5QD5G7XQ  3.AAK    750.2GB   RaidSet Member(1)
===============================================================================
GuiErrMsg<0x00>: Success.

CLI>
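The State column of this output can be parsed to find disks that are not part of any RAID set. A minimal Python sketch; the layout is assumed from the sample above, and "Free" as a state label for an unassigned disk is our assumption, not confirmed Areca output:

```python
# List disks that are not members of any RAID set in Areca `disk info` output.
# Assumption: member disks report a State beginning with "RaidSet Member".
DISKS = """\
 #  ModelName    Serial#   FirmRev  Capacity  State
 1  ST3750640AS  5QD5G7Z1  3.AAK    750.2GB   RaidSet Member(1)
 2  ST3750640AS  5QD5G6JR  3.AAK    750.2GB   RaidSet Member(1)
 3  ST3750640AS  5QD5G7XQ  3.AAK    750.2GB   RaidSet Member(1)
"""

def unassigned_disks(output):
    """Return (slot, model, state) for disks not listed as RaidSet members."""
    out = []
    for line in output.splitlines():
        f = line.split(None, 5)
        if len(f) == 6 and f[0].isdigit() and not f[5].startswith("RaidSet Member"):
            out.append((int(f[0]), f[1], f[5]))
    return out

print(unassigned_disks(DISKS))  # -> []
```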

sys info provides information about the controller itself:

CLI> sys info
The System Information
===========================================
Main Processor : 500MHz
CPU ICache Size : 32KB
CPU DCache Size : 32KB
System Memory : 256MB/333MHz
Firmware Version : V1.43 2007-4-17
BOOT ROM Version : V1.43 2007-4-17
Serial Number : Y813CAAAAR101890
Controller Name : ARC-1110
===========================================
GuiErrMsg<0x00>: Success.

CLI>

event info shows current events:

CLI> event info
Date-Time Device Event Type
===============================================================================
2013-07-09 07:23:14 H/W MONITOR Raid Powered On
2013-09-29 08:06:24 H/W MONITOR Raid Powered On
2013-09-29 07:51:37 H/W MONITOR Raid Powered On
...

rsf info shows information about the current RAID set (three 750 GB disks in this example):

CLI> rsf info
 #  Name             Disks TotalCap  FreeCap DiskChannels       State
===============================================================================
 1  Raid Set # 00        3 2250.5GB    0.0GB 123                Normal
===============================================================================
GuiErrMsg<0x00>: Success.

CLI>

vsf info provides information about the logical RAID volumes:

CLI> vsf info
 #  Name             Raid# Level  Capacity  Ch/Id/Lun  State
===============================================================================
 1  ARC-1110-VOL#00      1 Raid5  1500.3GB  00/00/00   Normal
===============================================================================
GuiErrMsg<0x00>: Success.

CLI>

Rebuild of a defective RAID on an Areca controller

After a disk failure, the RAID set could look like this:

CLI> rsf info
 #  Name             Disks TotalCap  FreeCap DiskChannels       State
===============================================================================
 1  Raid Set # 00        3 2250.5GB    0.0GB 1x3                Degrade
 2  Raid Set # 00        3 2250.5GB 2250.5GB x2x                Incompleted
===============================================================================
GuiErrMsg<0x00>: Success.

Raid Set 2 has the status Incompleted. In the DiskChannels column, an x marks a missing channel: set 1 (1x3) is missing the disk in channel 2, while the incomplete set 2 (x2x) contains only the disk in channel 2.
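If you monitor the Areca controller with a script, the State column of rsf info can be parsed to detect degraded or incomplete sets. A minimal Python sketch, assuming the output format shown above:

```python
# Map RAID set numbers to their State in Areca `rsf info` output, so degraded
# or incomplete sets can be detected automatically.
RSF = """\
 #  Name             Disks TotalCap  FreeCap DiskChannels       State
===============================================================================
 1  Raid Set # 00        3 2250.5GB    0.0GB 1x3                Degrade
 2  Raid Set # 00        3 2250.5GB 2250.5GB x2x                Incompleted
===============================================================================
GuiErrMsg<0x00>: Success.
"""

def raidset_states(output):
    """Return {set_number: state} for every RAID set row."""
    states = {}
    for line in output.splitlines():
        f = line.split()
        if f and f[0].isdigit():
            states[int(f[0])] = f[-1]
    return states

print(raidset_states(RSF))  # -> {1: 'Degrade', 2: 'Incompleted'}
```

Any state other than Normal (or Rebuilding during a rebuild) warrants attention.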

You must enter the controller password before you can make changes to the configuration. The default password is 0000:

CLI> set password=0000

The Raid Set with the status Incompleted should be deleted. In this example it is raid #2, which is deleted with the command rsf delete raid=2:

CLI> rsf delete raid=2
GuiErrMsg<0x00>: Success.
CLI> rsf info
 #  Name             Disks TotalCap  FreeCap DiskChannels       State
===============================================================================
 1  Raid Set # 00        3 2250.5GB    0.0GB 1x3                Degrade
===============================================================================
GuiErrMsg<0x00>: Success.

Afterwards, the replacement disk can be added as a hot spare with rsf createhs drv=2:

CLI> rsf createhs drv=2
GuiErrMsg<0x00>: Success.

The Areca controller uses the new hot spare automatically; manually adding the disk to the RAID set and triggering the rebuild is not necessary.

The rebuild starts automatically and can be monitored:

CLI> rsf info
 #  Name             Disks TotalCap  FreeCap DiskChannels       State
===============================================================================
 1  Raid Set # 00        3 2250.5GB    0.0GB 123                Rebuilding
===============================================================================

GuiErrMsg<0x00>: Success.