Forum Discussion

nycdude
Nimbostratus
Jul 30, 2023

Unable to add disk to array

Hello everyone,

 

We have an F5 7000S that we purchased new in 2013 and have kept running pretty much 24/7 ever since.

After a datacenter outage, one of the drives in the device died. We replaced it with another disk, but adding it back into the array using the GUI gives a "NO access" error.

Attempting to follow https://my.f5.com/manage/s/article/K12756 does not help.

When executing from the command line, the following error is given:

tm_install::process::Process_array_member_add --sdb ownership is none

The F5 is still running 11.4.1.

The replacement disk is of bigger capacity (2TB instead of 1TB).

Does anyone know of this specific error? Searching this forum and the internet returns nothing. I wonder if this is fixed in later OS images, or maybe F5 refuses to work with different disk sizes (most normal RAIDs accept a bigger disk as a replacement).

 

Any help is appreciated!

8 Replies

    • nycdude
      Nimbostratus

      That is correct.

      The replacement disk is a Samsung EVO 870 SSD.

      I tried replacing it with an identical drive from the same vendor (not F5-sourced, though), a WD VelociRaptor 1TB, but the results are the same.

      Here is the output of the command. The disk is recognized, it just won't add to the RAID.

      ---------------------------------------------------------------------
      Sys::Raid::Disk
      Name  Serial Number    Array Member  Array Status  Model
      ---------------------------------------------------------------------
      HD1   WD-WXR1E63HMUYD  yes           ok            ATA WDC WD1000CHTZ-0
      HD2   S6PNNJ0W412826X  no            undefined     ATA Samsung SSD 870

  • Thanks for the outputs.
    So, let's go back to the error log in your first post. SD-A and SD-B should be the labels of the disks in the array. In your log, no ownership is reported for SD-B, so there might still be an issue with "accepting" the new disk even if the output shows it as present.

    Try running these commands:

    dmesg |grep --color sd[a-b]
    sfdisk -l /dev/sdb    # I think the new disk should show up as sdb while the existing one should be sda
    sfdisk -l /dev/sda

    If they report the disk as failed, or return no output at all, the problem is most likely the new disk.
    One more question, just for myself to understand this better -- what was the status of the old disk that was removed?

    I'd also like to check whether the Linux subsystem sees the same array status. Can you run the array command too? You'll want to compare its output with that of the following tmsh commands (again, just to confirm disk status):
    tmsh list /sys disk all-properties   # should list both disks
    tmsh show sys raid   # like the one I gave you last time, just with more output

    The last thing would be checking whether the mdadm array is assembled and in sync -- but I'm not sure it's present in v11, and anyway, having a disk in that state will likely result in bad output from mdadm.

     grep -e "_U" -e "U_" -e "__" /proc/mdstat | wc -l
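    If you want to eyeball the raw md state directly, here's a minimal sketch (assuming /proc/mdstat and the mdadm tool are actually present on this TMOS build -- the md device name is just an example, use whatever /proc/mdstat lists):

     cat /proc/mdstat            # a degraded mirror shows up as [U_] or [_U] in the status line
     mdadm --detail /dev/md1     # per-array detail; swap md1 for the device name from /proc/mdstat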

     

  • The error "tm_install::process::Process_array_member_add --sdb ownership is none" suggests that there may be a configuration or ownership issue with the replacement disk on your F5 7000S running OS version 11.4.1. Since the replacement disk has a larger capacity, it's possible that the RAID configuration is not recognizing the new disk properly. Consider updating the F5 OS to the latest available version, as newer versions may have addressed compatibility issues with different disk sizes.
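    If you do consider an upgrade path, it may help to first confirm what is installed and which boot locations are available -- a quick sketch, assuming the standard tmsh commands on 11.x:

    tmsh show sys version             # currently running version
    tmsh show sys software status     # installed versions per boot location / volume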

  • Thank you CA_Valli, I really appreciate your responses.

    About the original hard drive - when we had the DC outage (complete power failure), the device restarted and basically showed no disk in bay 2 (despite the disk being there). When I ran the commands to check array status with the old disk still inserted, it confirmed that the device was not seeing it in either TMOS or bash. (I have had disks fail like that in PCs, where the drive was spinning and powered but not visible to the OS.)

    Here are the outputs:

    [:Active:Standalone] ~ # dmesg |grep --color sd[a-b]
    sd 1:0:0:0: [sda] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
    sd 1:0:0:0: [sda] 4096-byte physical blocks
    sd 1:0:0:0: [sda] Write Protect is off
    sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00
    sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
    sda: sda1 sda2
    sd 1:0:0:0: [sda] Attached SCSI disk
    md: bind<sda1>
    md: could not bd_claim sda1.
    sd 0:0:0:0: [sdb] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB)
    sd 0:0:0:0: [sdb] Write Protect is off
    sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00
    sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
    sdb: unknown partition table
    sd 0:0:0:0: [sdb] Attached SCSI disk
    sdb: unknown partition table
    sdb:
    [:Active:Standalone] ~ # sfdisk -l /dev/sdb

    Disk /dev/sdb: 243201 cylinders, 255 heads, 63 sectors/track
    Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

    Device Boot Start End #cyls #blocks Id System
    /dev/sdb1 0 - 0 0 0 Empty
    /dev/sdb2 0 - 0 0 0 Empty
    /dev/sdb3 0 - 0 0 0 Empty
    /dev/sdb4 0 - 0 0 0 Empty
    [:Active:Standalone] ~ #


    (cfg-sync Standalone)(Active)(/Common)(tmos)# list /sys disk all-properties
    sys disk application-volume afmdata {
    logical-disk MD1
    owner afm
    preservability precious
    resizeable false
    size 3900
    volume-set-visibility-restraint none
    }
    sys disk application-volume avrdata {
    logical-disk MD1
    owner avr
    preservability precious
    resizeable false
    size 3900
    volume-set-visibility-restraint none
    }
    sys disk application-volume mysqldb_MD1.1 {
    logical-disk MD1
    owner mysql
    preservability precious
    resizeable false
    size 12288
    volume-set-visibility-restraint MD1.1
    }
    sys disk logical-disk HD1 {
    mode control
    size 953869
    vg-free 867148
    vg-in-use 85784
    vg-reserved 40
    }
    sys disk logical-disk HD2 {
    mode none
    size 1907729
    vg-free 0
    vg-in-use 0
    vg-reserved 30720
    }
    sys disk logical-disk MD1 {
    mode mixed
    size 953869
    vg-free 863192
    vg-in-use 89740
    vg-reserved 30720
    }
    (cfg-sync Standalone)(Active)(/Common)(tmos)#


    (cfg-sync Standalone)(Active)(/Common)(tmos)# show sys raid

    ---------------------
    Sys::Raid::Array: MD1
    ---------------------
    Size (MB) 931.5K

    ---------------------------------------------------------
    Sys::Raid::ArrayMembers
    Bay ID  Serial Number    Name  Array Member  Array Status
    ---------------------------------------------------------
    1       WD-WXR1E63HMUYD  HD1   yes           ok

    -------------------------------------------------------------
    Sys::Raid::Bay
    Bay  Shelf  Name  Serial Number    Array Member  Array Status
    -------------------------------------------------------------
    1    1      HD1   WD-WXR1E63HMUYD  yes           ok
    2    1      HD2   S6PNNJ0W412826X  no            undefined

    ---------------------------------------------------------------------
    Sys::Raid::Disk
    Name  Serial Number    Array Member  Array Status  Model
    ---------------------------------------------------------------------
    HD1   WD-WXR1E63HMUYD  yes           ok            ATA WDC WD1000CHTZ-0
    HD2   S6PNNJ0W412826X  no            undefined     ATA Samsung SSD 870

    (cfg-sync Standalone)(Active)(/Common)(tmos)#

    [Active:Standalone] ~ # grep -e "_U" -e "U_" -e "__" /proc/mdstat | wc -l
    19

     

    Thanks again and very much appreciated!!!

    • CA_Valli
      MVP

      I've reviewed this further.
      According to https://my.f5.com/manage/s/article/K15525 (old article, but it should still be relevant for your version), we should perform a logical rebuild of the RAID array.

      This brings us back, again, to that ownership log.
      If you run the following command:    ls /dev/ -l | grep sd 
      do you see different permissions assigned to the sda* / sdb* partitions?
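
      For reference, on a stock Linux box a correctly owned block device shows up in that listing roughly like this, with root:disk as owner and group (major/minor numbers, date, and exact mode bits may differ on TMOS -- just an illustration):

      brw-rw---- 1 root disk 8, 16 Jul 30 12:00 sdb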

       

      If that's the case (my suspicion would be that sdb1 to sdb4 do not have root:disk ownership), I think you'll need to change the owner and/or permissions of the mount.
      However! Before attempting to change the permissions, I would try to perform a physical rebuild of the array.

      According to https://my.f5.com/manage/s/article/K12756#logical%20rebuild 

      The steps would be:
      0- back up the configuration in a UCS archive and copy it to another system (see the sketch right after this list)
      1- shut down the device
      2- remove "bad" drive
      3- start device with single disk and wait for RAID master to be assigned to remaining drive
      4- shutdown device again
      5- reinsert drive
      6- start appliance
      7- (my addition) potentially check for this https://my.f5.com/manage/s/article/K12380
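
      For step 0, a minimal sketch (the archive name and <backup-host> are placeholders, not anything from your setup):

      tmsh save sys ucs pre-rebuild.ucs                                  ## lands in /var/local/ucs/ by default
      scp /var/local/ucs/pre-rebuild.ucs admin@<backup-host>:/var/tmp/   ## copy it off the box before touching the array

      After step 6, re-running tmsh show sys raid (like earlier in the thread) will tell you whether the bay has become an array member again, before you decide on the logical rebuild plus permission changes.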


      If this still fails, you might want to try changing the permissions plus a logical rebuild of the array.

      << Disclaimer >> I would normally suggest you contact support for this type of issue, but I recognize the software has been out of support for quite a long time. Use at your own risk.
      This would be done via: 
      chown root:disk < /path/to/the/mount >  ## check group name (disk) via ls -l /dev/  as said before
      chmod g+rw < /path/to/the/mount >
      tmsh modify sys raid array <array name> remove <disk name>
      ## might not be required -- disk name should be HD2, I guess?
      tmsh modify sys raid array <array name> add <disk name>

  • Unfortunately, I have not had a chance to try anything that requires a reboot, as this is our active ingress/egress device.

    We have decided to replace it with a Netgate PFS+ since we no longer need to aggregate over 10G of throughput (multi-WAN).

    Once that is done, I will try the steps on the device when it is no longer in service. I appreciate all the help here!