Topic
  • 5 replies
  • Latest Post - 2010-11-08T05:24:56Z by Rachid.Keeslar
jsanz
7 Posts

Pinned topic: Command create sfs_server fails if the log & data lvols are larger than 2 GB.

2010-06-29T14:53:04Z
I'm working with TXSeries V7.1 on the HP-UX 11.31 Itanium platform, and we need to create a large sfs_server (up to 100 GB) to manage application data and CICS queues.

1) If the log & data raw lvols are larger than 2 GB, the command fails.

2) If the log & data raw lvols are exactly 2 GB, the command runs OK.

We attach a file with these two tests.
How can I create an sfs_server whose log & data raw lvols are larger than 2 GB?
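
For illustration only (the attached file is not reproduced here, and these lvol names are hypothetical), the two tests amount to something like:

# Test 1: data lvol above 2 GB -- creating the sfs_server fails
lvcreate -L 2048 -n log_Stest1 vgsfs
lvcreate -L 3072 -n sfs_Stest1 vgsfs
# Test 2: log and data lvols at exactly 2 GB -- the sfs_server is created OK
lvcreate -L 2048 -n log_Stest2 vgsfs
lvcreate -L 2048 -n sfs_Stest2 vgsfs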
Thanks in advance.
Updated on 2010-11-08T05:24:56Z by Rachid.Keeslar
  • nikhilraj
    3 Posts

    Re: Command create sfs_server fails if the log & data lvols are larger than 2 GB.

    2010-07-01T14:34:32Z
    Hello,

    For all supported versions of TXSeries, the maximum permitted log volume size is 2 GB. If you set the SFS log volume size to greater than this limit, unexpected behavior such as an SFS crash or a failure to warm start can result.

    Please check the following documentation on log volume size in TXSeries.

    http://www-01.ibm.com/support/docview.wss?uid=swg21417300
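
    For reference, a minimal sketch of creating a raw log lvol at that 2 GB ceiling (the vgsfs volume group and the lvol name are assumptions taken from the rest of this thread):

    lvcreate -L 2048 -n log_Sprue vgsfs    # 2048 MB = the documented 2 GB maximum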
  • IainBoyle
    37 Posts

    Re: Command create sfs_server fails if the log & data lvols are larger than 2 GB.

    2010-07-01T16:27:28Z
    Jsanz,

    Think of the log volume as a temporary store for uncommitted transaction data. If the SFS fails to commit the data to the data volume, the information in the log volume is used during the recovery process. Once the data is committed to the data volume, there is nothing to recover, so the data in the log volume is deleted. The log volume only really needs to be big enough to store the maximum amount of data written during a single transaction.

    Try setting your log volume to something much smaller, such as 100 MB, and the data volume to 100 GB; hopefully that will work for you.
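
    A minimal sketch of that sizing, assuming the vgsfs volume group and the Sprue short name used elsewhere in this thread (lvcreate -L takes a size in MB, and HP-UX LVM rounds it up to whole physical extents):

    lvcreate -L 100 -n log_Sprue vgsfs       # ~100 MB log lvol (rounded up to the extent size)
    lvcreate -L 102400 -n sfs_Sprue vgsfs    # 100 GB data lvol
    chown cics:cics /dev/vgsfs/*Sprue        # the SFS server runs as the cics user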
  • jsanz
    7 Posts

    Re: Command create sfs_server fails if the log & data lvols are larger than 2 GB.

    2010-07-15T08:21:26Z
    IainBoyle wrote:
    Think of the log volume as a temporary store for uncommitted transaction data. If the SFS fails to commit the data to the data volume, the information in the log volume is used during the recovery process. Once the data is committed to the data volume, there is nothing to recover, so the data in the log volume is deleted. The log volume only really needs to be big enough to store the maximum amount of data written during a single transaction.

    Try setting your log volume to something much smaller, such as 100 MB, and the data volume to 100 GB; hopefully that will work for you.
    Thanks for your reply.

    I tried to create the SFS server in two configurations:
    1) log_Sprue (2 GB), sfs_Sprue (2 GB): the server starts correctly.
    2) log_Sprue (2 GB), sfs_Sprue (3 GB): the server fails to initialize. The failing case is shown below:
    /ndsc#lvcreate -L 2048 -n log_Sprue vgsfs
    Logical volume "/dev/vgsfs/log_Sprue" has been successfully created with
    character device "/dev/vgsfs/rlog_Sprue".
    Logical volume "/dev/vgsfs/log_Sprue" has been successfully extended.
    Volume Group configuration for /dev/vgsfs has been saved in /etc/lvmconf/vgsfs.conf
    /ndsc#vgdisplay -v /dev/vgsfs
    --- Volume groups ---
    VG Name /dev/vgsfs
    VG Write Access read/write
    VG Status available
    Max LV 255
    Cur LV 4
    Open LV 4
    Max PV 64
    Cur PV 8
    Act PV 8
    Max PE per PV 30000
    VGDA 16
    PE Size (Mbytes) 32
    Total PE 12936
    Alloc PE 288
    Free PE 12648
    Total PVG 0
    Total Spare PVs 0
    Total Spare PVs in use 0
    VG Version 1.0
    VG Max Size 60000g
    VG Max Extents 1920000

    --- Logical volumes ---
    LV Name /dev/vgsfs/log_Sndsc
    LV Status available/syncd
    LV Size (Mbytes) 2048
    Current LE 64
    Allocated PE 64
    Used PV 1

    LV Name /dev/vgsfs/sfs_Sndsc
    LV Status available/syncd
    LV Size (Mbytes) 2048
    Current LE 64
    Allocated PE 64
    Used PV 1

    LV Name /dev/vgsfs/log_Sprue
    LV Status available/syncd
    LV Size (Mbytes) 2048
    Current LE 64
    Allocated PE 64
    Used PV 1

    LV Name /dev/vgsfs/sfs_Sprue
    LV Status available/syncd
    LV Size (Mbytes) 3072
    Current LE 96
    Allocated PE 96
    Used PV 1
    --- Physical volumes ---
    PV Name /dev/disk/disk31
    PV Status available
    Total PE 1617
    Free PE 1329
    Autoswitch On
    Proactive Polling On

    PV Name /dev/disk/disk32
    PV Status available
    Total PE 1617
    Free PE 1617
    Autoswitch On
    Proactive Polling On

    PV Name /dev/disk/disk33
    PV Status available
    Total PE 1617
    Free PE 1617
    Autoswitch On
    Proactive Polling On

    PV Name /dev/disk/disk34
    PV Status available
    Total PE 1617
    Free PE 1617
    Autoswitch On
    Proactive Polling On

    PV Name /dev/disk/disk46
    PV Status available
    Total PE 1617
    Free PE 1617
    Autoswitch On
    Proactive Polling On

    PV Name /dev/disk/disk45
    PV Status available
    Total PE 1617
    Free PE 1617
    Autoswitch On
    Proactive Polling On

    PV Name /dev/disk/disk44
    PV Status available
    Total PE 1617
    Free PE 1617
    Autoswitch On
    Proactive Polling On

    PV Name /dev/disk/disk43
    PV Status available
    Total PE 1617
    Free PE 1617
    Autoswitch On
    Proactive Polling On
    /ndsc#cd /dev/vgsfs
    /dev/vgsfs#ll *S*
    brw-r----- 1 cics cics 64 0x130001 Jun 29 14:41 log_Sndsc
    brw-r----- 1 root sys 64 0x130003 Jul 15 09:25 log_Sprue
    crw-r----- 1 cics cics 64 0x130001 Jun 29 14:41 rlog_Sndsc
    crw-r----- 1 root sys 64 0x130003 Jul 15 09:25 rlog_Sprue
    crw-r----- 1 cics cics 64 0x130002 Jun 29 14:41 rsfs_Sndsc
    crw-r----- 1 cics cics 64 0x130004 Jul 1 13:31 rsfs_Sprue
    brw-r----- 1 cics cics 64 0x130002 Jun 29 14:41 sfs_Sndsc
    brw-r----- 1 cics cics 64 0x130004 Jul 1 13:31 sfs_Sprue
    /dev/vgsfs#chown cics:cics /dev/vgsfs/*Sprue
    /dev/vgsfs#ll
    total 0
    crw-r--r-- 1 cics cics 64 0x130000 Jun 16 14:48 group
    brw-r----- 1 cics cics 64 0x130001 Jun 29 14:41 log_Sndsc
    brw-r----- 1 cics cics 64 0x130003 Jul 15 09:25 log_Sprue
    crw-r----- 1 cics cics 64 0x130001 Jun 29 14:41 rlog_Sndsc
    crw-r----- 1 cics cics 64 0x130003 Jul 15 09:25 rlog_Sprue
    crw-r----- 1 cics cics 64 0x130002 Jun 29 14:41 rsfs_Sndsc
    crw-r----- 1 cics cics 64 0x130004 Jul 1 13:31 rsfs_Sprue
    brw-r----- 1 cics cics 64 0x130002 Jun 29 14:41 sfs_Sndsc
    brw-r----- 1 cics cics 64 0x130004 Jul 1 13:31 sfs_Sprue
    /dev/vgsfs#cicssfscreate $ENCINA_SFS_SERVER ShortName=Sprue
    ERZ105006I/0011: Directory '/var/cics_servers/SSD/cics/sfs/prue' created
    ERZ084009W/8429: No runtime recovery image for server '/.:/cics/sfs/prue', cold start assumed

    ERZ010130I/0734: Creating subsystem 'cicssfs.Sprue'
    /dev/vgsfs#cicslssrc
    Subsystem Group PID Status
    cicssfs.Sndsc 20548 active
    cicssfs.Sprue inoperative
    cics.ndsc inoperative
    /dev/vgsfs#cicscp -v start sfs_server $ENCINA_SFS_SERVER
    ERZ058504I/0107: Starting RPC daemon.
    ERZ058502I/0101: RPC daemon is already running.
    ERZ096111I/0224: Processing a start sfs_server command
    ERZ096141I/0224: Starting SFS server '/.:/cics/sfs/prue'
    ERZ038214I/0168: Authorization for server '/.:/cics/sfs/prue' has been set to 'none'
    ERZ038216I/0175: Subsystem 'cicssfs.Sprue' has been initialized.
    ERZ038219I/0179: Server '/.:/cics/sfs/prue' is responding to RPCs.
    ERZ036204I/0251: Created logical volume 'log_Sprue' for server '/.:/cics/sfs/prue'
    ERZ036206I/0253: Initialized logical volume 'log_Sprue' for logging by server '/.:/cics/sfs/prue'
    ERZ036208I/0255: Created log file 'log_Sprue/logfile' for server '/.:/cics/sfs/prue'
    ERZ036231I/0260: Log file 'logfile' on server '/.:/cics/sfs/prue' has been enabled.
    ERZ036233I/0262: Logical volumes on server '/.:/cics/sfs/prue' have been recovered.
    ERZ038228I/0189: Server '/.:/cics/sfs/prue' has been enabled.
    ERZ036225E/0102: Server operation 'admin_vol_InitializeDisk' failed with error (2019012644) ENC-vol-0036: A region or physical volume is too small.
    ERZ038073E/0191: Unable to create logical volume 'sfs_Sprue' for server '/.:/cics/sfs/prue'.
    ERZ105093E/0022: Server '/.:/cics/sfs/prue' started, but has not been initialized
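
    Note where it fails: the 2 GB log volume is created, enabled, and recovered successfully, and only the initialization of the 3 GB data volume fails ('A region or physical volume is too small'), matching the 2 GB boundary from the two tests. As a sanity check, after stopping the partially initialized server, a minimal sketch of recreating the data lvol at exactly 2 GB (lvremove is standard HP-UX LVM; the other commands are as used above):

    lvremove -f /dev/vgsfs/sfs_Sprue                # remove the 3 GB data lvol
    lvcreate -L 2048 -n sfs_Sprue vgsfs             # recreate it at exactly 2 GB
    chown cics:cics /dev/vgsfs/*Sprue
    cicscp -v start sfs_server $ENCINA_SFS_SERVER   # should now initialize the data volume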
  • SystemAdmin
    308 Posts

    Re: Command create sfs_server fails if the log & data lvols are larger than 2 GB.

    2010-08-09T14:26:33Z
    I am also observing the same problem when creating an SFS data volume larger than 2 GB on TXSeries V7.1 on the HP-UX 11.31 Itanium platform. It would be better to contact the IBM TXSeries support team for more help.
  • Rachid.Keeslar
    1 Post

    Re: Command create sfs_server fails if the log & data lvols are larger than 2 GB.

    2010-11-08T05:24:56Z
    nikhilraj wrote:
    Hello,

    For all supported versions of TXSeries, the maximum permitted log volume size is 2 GB. If you set the SFS log volume size to greater than this limit, unexpected behavior such as an SFS crash or a failure to warm start can result.

    Please check the following documentation on log volume size in TXSeries.

    http://www-01.ibm.com/support/docview.wss?uid=swg21417300

    Thank you! The information is nice, but it has nothing to do with my issue.