Veritas Volume Manager known issues
The following issues were reported for this release of VxVM.
Installation and upgrade issues
Array support libraries
The upgrade procedure will attempt to remove any array support library (ASL) packages from previous releases. After an upgrade has been completed, the output from swlist should not show any such ASL packages. If a pre-5.0 ASL package is not removed for some reason, you can use the following command to remove it:
# swremove ASL_pkg_name
Upgrading systems running VxVM 3.5 prior to Command Cumulative Patch 06
Before upgrading a system that is running VxVM 3.5 at a patch level prior to Command Cumulative Patch 06 (PHCO_30834), it is strongly recommended that you download and apply this patch, and then run the ckpublen.sh utility script, as documented in TechNote 270407, available at: http://support.Veritas.com/docs/270407. If the script reports that any disks need to be re-initialized, back up the file systems and data residing on the volumes on those disks, and restore them after re-initializing the disks and recreating the volumes. You can then proceed to upgrade the system with the Veritas Storage Foundation 5.0 software.
Utility issues
Current naming scheme
There is no option in the vxddladm command to display the current naming scheme. The naming scheme that is in operation can be deduced from the output of the vxdisk list command. [611320]
vxdiskadm displays error V-5-1-9764 when excluding devices
The vxdiskadm operation displays error V-5-1-9764 if a vendor and product ID combination is specified to exclude devices from multipathing. This error is harmless and can be ignored. The error is not seen if controller or device names are specified instead. [587435]
Specifying an enclosure to the vxdmpadm getportids command
The enclosure attribute should be used to specify an enclosure name to the vxdmpadm getportids command, instead of the enclr attribute that is shown in the Veritas Volume Manager Administrator's Guide and the vxdmpadm(1M) manual page.
Running vxdctl enable causes a core dump
The VxVM configuration daemon, vxconfigd, can dump core under rare conditions if the vxdctl enable command is run on a system with an HDS array. [543803]
Disk group is disabled if private region sizes differ
A disk group is disabled if the vxdg init command is used to create it from a set of disks that have pre-existing private regions that differ in size. This may occur if the disks previously belonged to disk groups in older releases of VxVM.
The workaround is to reinitialize the disks before creating the disk group (for example, by using the vxdisk -f init command), or to use the vxdg adddisk command to add the disks to the disk group after it has been created. [592180]
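As a sketch of the workaround (the device names and disk group name are assumptions), the disks might be reinitialized and then used to create the disk group as follows:
# vxdisk -f init c1t2d0
# vxdisk -f init c1t3d0
# vxdg init newdg newdg01=c1t2d0
# vxdg -g newdg adddisk newdg02=c1t3d0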
Maximum size of a VxVM volume
VxVM supports volume lengths up to 256TB. However, any 32-bit legacy applications that use system calls such as seek, lseek, read and write are limited to a maximum offset that is determined by the operating system. This value is usually 2^31-1 bytes (1 byte less than 2 gigabytes).
Resizing a volume set with an unmounted file system
It is not possible to use the vxresize command to change the size of a component volume of a volume set that has an unmounted file system. This is because the extendfs command is not supported for volume sets with unmounted file systems. [574134, 571997]
Resizing volumes with detached remote plexes
If a volume in a Remote Mirror configuration has detached plexes at a remote site, you can use the following procedure to resize it:
- Turn off the allsites attribute for the volume:
# vxvol -g diskgroup set allsites=off volume
- Remove the detached plexes:
# vxassist -g diskgroup remove mirror volume \
plexnames=plex1,plex2,...
- Use the vxresize command to resize the volume.
When the remote site comes back up:
- Replace the removed plexes using storage at the remote site:
# vxassist -g diskgroup mirror volume nmirror=N \
site:remote_site_name
- Turn on the allsites attribute for the volume:
# vxvol -g diskgroup set allsites=on volume
Warning message at boot time
A message such as the following is displayed if an attempt is made to open a volume at boot time before any disk group has been imported.
WARNING: VxVM vxio V-5-0-23 Open on an spurious volume device (hex_id) encountered. This device may be valid, but has not yet been configured in the kernel.
This message may be ignored. Once the disk group has been imported successfully, there should be no problem in accessing its volumes.
Shrinking a swap volume
vxassist has no built-in protection to prevent you from shrinking the swap volume without first shrinking what the system sees as available swap space. If it is necessary to shrink the swap volume, the operation must be done in single-user mode and the system must be rebooted immediately. Failing to take these precautions can result in unknown system behavior or lock-up. [6154]
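As an illustration only (the disk group, volume name, and target size are assumptions, and the HP-UX shutdown invocations should be adjusted for your environment), the safe sequence is to drop to single-user mode, shrink the volume, and reboot at once:
# shutdown -y 0
# vxresize -g rootdg swapvol 2g
# shutdown -r -y 0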
Adding a log and mirror to a volume
The vxassist command does not add a mirror and a log when processing a command such as the following:
# vxassist mirror volume layout=log ...
The mirror is added, but the log is silently omitted. To add a log and a mirror, add them in two separate vxassist invocations, as follows:
# vxassist mirror volume ...
# vxassist addlog volume ...
[13488]
Using vxdiskadm to replace a failed disk
The vxdiskadm command requires two attempts to replace a failed disk. The first attempt can fail with a message of the form:
/usr/lib/vxvm/voladm.d/bin/disk.repl: test: argument expected
The command is not completed and the disk is not replaced. If you now rerun the command, using Option 5, the replacement successfully completes. [102381]
Replacement of the old_layout attribute
The vxdisksetup command gives the error message Attribute unrecognized when the old_layout attribute is used to make a disk into a VxVM-controlled disk. The old_layout attribute is no longer supported. Use the noreserve attribute instead. [121258]
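For example (the device name is illustrative):
# vxdisksetup -i c2t1d0 noreserve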
Using vxvol and vxmend with layered volumes
The vxvol and vxmend commands do not handle layered volumes very well. When vxmend is executed on the top-level volume to change the state of a volume, it is executed only on the top-level volume; the change is not propagated to the lower-level volumes. As a result, the volume states can become inconsistent and a subsequent vxvol init command might fail.
The vxvol command also exhibits the same problem. When a vxvol init command is executed on the top-level volume, the change is not propagated to the volumes corresponding to its subvolumes.
Workaround: When executing the vxvol or vxmend command on a layered volume, first issue the command to the lower-level volumes in a bottom-up fashion; then execute the command on the top-level volume.
In this example, a volume, vol, has two subvolumes, vol-L01 and vol-L02. The state of the volumes is first set to empty, and then the initialization commands are executed:
# vxmend -o force -g mydg fix empty vol
# vxmend -o force -g mydg fix empty vol-L01
# vxmend -o force -g mydg fix empty vol-L02
# vxvol -g mydg init zero vol
# vxvol -g mydg init zero vol-L01
# vxvol -g mydg init zero vol-L02
[134932]
Growing or shrinking layered volumes
Due to the current implementation of a resize of layered volumes, it is recommended that you do not grow or shrink layered volumes (for example, stripe-mirror or concat-mirror) during resynchronization. This limitation does not apply to ISP layered volumes.
Internally, VxVM converts the layout of layered volumes and updates the configuration database before it does the actual resize. This causes any ongoing operation, such as a resynchronization, to fail.
If the system reboots before the grow or shrink of a layered volume completes, the volume is left with an intermediate layout. In this case, you have to use vxassist convert to restore the volume to its original layout.
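For example, a command of the following form (the disk group and volume names are examples) would restore a volume that was originally a stripe-mirror:
# vxassist -g mydg convert vol01 layout=stripe-mirror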
After a layered volume is resized, the volume, plex, and subdisk names associated with the subvolumes are changed.
vxconfigd hangs due to a faulty disk
If I/O hangs for some reason, such as a disk failing while the VxVM configuration daemon, vxconfigd, is performing I/O from/to the disks, there is no way to communicate with vxconfigd via signals or native interprocess communication. This can potentially cause two kinds of problems:
- The node becomes unavailable for VxVM administrative commands.
- In a clustered or HA environment where Veritas Cluster Server agents need to communicate with vxconfigd to determine the health of VxVM components, service groups start timing out and failing.
Device issues
Unsupported disk arrays
To ensure that DMP is set up correctly on a multiported JBOD or other disk array that is not supported by VxVM, use the procedure given in "Adding Unsupported Disk Arrays to the DISKS Category" in the "Administering Disks" chapter of the Veritas Volume Manager Administrator's Guide. Otherwise, VxVM treats the independent paths to the disks as separate devices, which can result in data corruption.
Hitachi arrays in Active/Active mode
When Hitachi DF400 and DF500 arrays are configured in Active/Active mode, performance is degraded. [73154]
Adding HP-EVA disks
When HP-EVA disks are added to VxVM 5.0, debug messages such as the following are displayed:
# vxdctl enable
Printing Name-Value Pair
CAB_SERIAL_NO : 50001FE100270DF0Printing Name-Value Pair
CAB_SERIAL_NO : Printing Name-Value Pair
Printing Name-Value Pair
Printing Name-Value Pair
CAB_SERIAL_NO : Printing Name-Value Pair
50001FE100270DF0
LUN_SERIAL_NO : 50001FE100270DF0 600508B40010293D00006000012A0000Printing
Name-Value Pair
Printing Name-Value Pair
Printing Name-Value Pair
.
.
.
These messages are harmless and can be ignored.
Hot-relocation issues
Impact of hot-relocation on performance
Except for rootvol and swapvol, hot-relocation does not guarantee the same layout of data or performance after relocation. It is therefore possible that a single subdisk that existed before relocation may be split into two or more subdisks on separate disks after relocation (if there is not enough contiguous space on a single disk to accommodate that subdisk). [14894]
Disk information in notification messages
When a disk failure occurs, the hot-relocation feature notifies the system administrator of the failure and any relocation attempts through electronic mail messages. The messages typically include information about the device offset and disk access name affected by the failure. However, if a disk fails completely or a disk is turned off, the disk access name and device offset information is not included in the mail messages. This is because VxVM no longer has access to this information. [14895]
DMP issues
I/O is not restored on a path
If a path is re-enabled after a failback or a non-disruptive upgrade (NDU) operation, I/O may not be restored on that path. To unblock I/O on the path, run the vxdisk scandisks command. [617331]
DMP obtains incorrect serial numbers
DMP cannot obtain the correct serial number for a device if its LUN serial number contains a comma (,). This problem has been seen on EMC Symmetrix arrays with more than 8096 LUNs. [611333]
DMP threads appear as processes
Unlike the VxVM I/O daemons, DMP daemons, which are also kernel threads, appear in the output from the ps command as they have an associated process table entry. This difference in behavior is harmless. [498970]
Default I/O policy
The default I/O policy for Active/Active (A/A) arrays has been changed from balanced to minimumq. The default I/O policy for Asymmetric Active/Active (A/A-A) and Active/Passive (A/P) arrays has been changed from singleactive to round-robin.
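You can check the policy in effect for an enclosure, and change it back if required, with vxdmpadm; for example (the enclosure name is illustrative):
# vxdmpadm getattr enclosure enc0 iopolicy
# vxdmpadm setattr enclosure enc0 iopolicy=balanced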
Cluster functionality issues
Node rejoin causes I/O failures with A/PF arrays
A cluster node should not be rejoined to a cluster if both the primary and secondary paths are enabled to an A/PF array, but all the other nodes are using only the secondary paths. This is because the joining node does not have any knowledge of the cluster configuration before the join takes place, and it attempts to use the primary path for I/O. As a result, the other cluster nodes can experience I/O failures and leave the cluster.
Workaround
- Before joining the node to the cluster, disconnect the cable that corresponds to the primary path between the node and the A/PF array.
- Check that the node has joined the cluster by using the following command:
# vxclustadm nidmap
The output from this command should show an entry for the node.
- Reconnect the cable that corresponds to the primary path between the node and the array.
- Use the following command to trigger cluster-wide failback:
# vxdisk scandisks
All the nodes should now be using the primary path.
[579536]
Volume persists in SYNC state
If a node leaves the cluster while a plex is being attached to a volume, the volume can remain in the SYNC state indefinitely. To avoid this, after the plex attach completes, resynchronize the volume manually with the following command:
# vxvol -f resync volume
[20448]
RAID-5 volumes
VxVM does not currently support RAID-5 volumes in cluster-shareable disk groups.
File systems supported in cluster-shareable disk groups
The use of file systems other than Veritas Storage Foundation Cluster File System (SFCFS) on volumes in cluster-shareable disk groups can cause system deadlocks.
Reliability of information about cluster-shareable disk groups
If the vxconfigd program is stopped on both the master and slave nodes and then restarted on the slaves first, VxVM output and VEA displays are not reliable until the vxconfigd program is started on the master and the slave is reconnected (which can take about 30 seconds). In particular, shared disk groups are marked disabled and no information about them is available during this time. The vxconfigd program must therefore be started on the master first.
Messages caused by open volume devices
When a node terminates from the cluster, open volume devices in shared disk groups on which I/O is not active are not removed until the volumes are closed. If this node later joins the cluster as the master while these volumes are still open, the presence of these volumes does not cause a problem. However, if the node tries to rejoin the cluster as a slave, this can fail with the following error message:
cannot assign minor #
This message is accompanied by the console message:
WARNING:minor number ### disk group group in use
Remote Mirror issues
Volume relayout
Volume relayout is not supported for site-confined volumes or for site-consistent volumes in this release. [528677]
Setting site consistency on a volume
The vxvol command cannot be used to set site consistency on a volume unless sites and site consistency have first been set up for the disk group. [530484]
Adding a remote mirror
Adding a remote mirror to a new site for a site-consistent volume does not also create a DRL log plex or a DCO plex at that site. The workaround is to use the vxassist addlog command to add a DRL log plex, or the vxsnap command to add a version 20 DCO plex at the specified site (site=sitename). [533208]
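As a sketch of this workaround (the disk group, volume, and site names are assumptions), either of the following forms might be used, with site= restricting the allocation to storage at the named site:
# vxassist -g mydg addlog vol01 logtype=drl site=site2
# vxsnap -g mydg prepare vol01 drl=on site=site2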
Replacing a failed disk
It is not possible to replace a failed disk while its site is detached. You must first reattach the site and recover the disk group by running these commands:
# vxdg -g diskgroup reattachsite sitename
# vxrecover -g diskgroup
The vxdiskadm command gives an error when replacing a disk on which the site tag had been set. Before replacing such a failed disk, use the following commands to set the correct site name on the replacement disk:
# vxdisk -f init disk
# vxdisk settag disk site=sitename
[536853, 536881]
Reattaching a site
Reattaching a site when the disks are in the serial-split brain condition gives an error message similar to the following if the -o overridessb option is not specified:
VxVM vxdg ERROR V-5-1-10127 disassociating sitename: Record not in disk group
Use the following commands to reattach the site and recover the disk group:
# vxdg -g diskgroup -o overridessb reattachsite sitename
# vxrecover -g diskgroup
[540351]
Site records are not propagated during disk group split, move or join
Split, join and move operations fail on a source disk group that has any site-confined volumes. This is because site records cannot be propagated to a target disk group during such operations.
One of the following messages is displayed as a result of a failed disk group split, join or move operation:
There are volume(s) with allsites flag which do not have a plex on site sitename. Use -f flag to move all such the volumes turning off allsites flag on them.
The volume(s) with allsites flags are being moved to the target disk group that doesn't have any site records. Use -f flag to add all such volumes turning off allsites flag on them.
The suggested workaround is to ensure that allsites=off is set on all the volumes that are being moved between disk groups:
- Run the following command on each of the volumes that is being moved, split, or joined to find out if allsites=on is set on any of them:
# vxprint -g diskgroup -F %allsites volume
- Run the following command on each of the volumes with allsites=on set that you found in the previous step:
# vxvol -g diskgroup set allsites=off volume
- Proceed with the disk group split, join or move operation.
[563524]
Restoring site records
The vxmake command can be used to recreate a disk group configuration, but not to restore site records. After restoring a disk group configuration, use the following command to recreate the site records manually:
# vxdg -g diskgroup addsite site
[584200]
Snapshot and snapback issues
Using snapshots as root disks
It is recommended that you do not use snapshots of the root volume as a bootable volume. A snapshot can be taken to preserve the data of the root volume, but the snapshot will not be bootable. The data from the snapshot would have to be restored to the original root volume before the system could be booted with the preserved data.
Warning message when taking a snapshot of a SFCFS file system
When taking a snapshot of a SFCFS file system, the following warning message might appear:
VxVM vxio WARNING V-5-0-4 Plex plex detached from volume volume
Workaround: No action is required. This behavior is normal and is not the result of an error condition.
File system check of a snapshot
Normally, a file system would have no work to do when a snapshot is taken. However, if an SFCFS file system is not mounted, it is likely that the fsck of the snapshot will take longer than is usually necessary, depending on the I/O activity at the time of the snapshot.
Workaround: When taking a snapshot of an SFCFS file system, you should ensure that at least one of the volumes defined in the command line is mounted on the cluster master.
Mount operation can cause inconsistencies in snapshots
Inconsistencies can arise in point-in-time copies if any of the following snapshot operations are performed on a volume while a file system in the volume is being mounted: vxassist snapshot, vxplex snapshot, vxsnap make, vxsnap refresh, or vxsnap restore.
Cache volumes in volume sets
Do not add cache volumes (used by space-optimized instant snapshots) to volume sets. This causes data corruption and system panics. [614061, 614787]
Intelligent Storage Provisioning issues
Creating application volumes
To create application volumes successfully, the appropriate licenses must be present on your system. For example, you need a full Veritas Volume Manager license to use the instant snapshot feature. Vendors of disk arrays may also provide capabilities that require special licenses for certain features of their hardware. [137185]
Miscellaneous issues
Disks with write-back caches
Disk drives configured to use a write-back cache, or disk arrays configured with volatile write-back cache, exhibit data integrity problems. The problems occur after a power failure, SCSI bus reset, or other event in which the disk has cached data, but has not yet written it to non-volatile storage. Contact your disk drive or disk array manufacturer to determine whether your system disk drives use a write-back cache, and if the configuration can be changed to disable write-back-caching.
Auto-import of disk groups
If a disk that failed while a disk group was imported returns to life after the group has been deported, the disk group is auto-imported the next time the system boots. This contradicts the normal rule that only disk groups that are (non-temporarily) imported at the time of a crash are auto-imported.
If it is important that a disk group not be auto-imported when the system is rebooted, the disk group should be imported temporarily when the intention is to deport the disk group (for example, in HA configurations). Use the -t flag to vxdg import. [13741]
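For example (the disk group name is illustrative), a disk group imported as follows is not auto-imported at the next boot:
# vxdg -t import mydg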
Volumes not started following a reboot
During very fast boots on a system with many volumes, vxconfigd may not be able to auto-import all of the disk groups by the time vxrecover -s is run to start the volumes. As a result, some volumes may not be started when an application starts after reboot.
Workaround: Check the state of the volumes before starting the application, or place a sleep (sleep sec) before the last invocation of vxrecover. [14450]
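A minimal sketch of the startup-script workaround (the 30-second delay is an assumption; tune it to your configuration):
sleep 30        # allow vxconfigd time to auto-import all disk groups
vxrecover -s    # then start the volumes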
Forcibly starting a volume
The vxrecover
command starts a volume only if it has at least one plex that is in the ACTIVE or CLEAN state and is not marked STALE, IOFAIL, REMOVED, or NODAREC. If such a plex is not found, VxVM assumes that the volume no longer contains valid up-to-date data, so the volume is not started automatically. A plex can be marked STALE or IOFAIL as a result of a disk failure or an I/O failure. In such cases, to force the volume to start, use the following command:
# vxvol -f start volume
However, try to determine what caused the problem before you run this command. It is likely that the volume needs to be restored from backup, and it is also possible that the disk needs to be replaced. [14915]
Failure of memory allocation
On machines with very small amounts of memory (32 megabytes or less), heavy I/O stress against volumes with high memory usage (such as RAID-5 volumes) can leave the system unable to allocate any more physical memory pages.
Messages about VVR licenses
The following messages may be displayed on the console during a system reboot or during VxVM initialization when you are running vxinstall:
No VVR license installed on the system; vradmind not started
No VVR license installed on the system; in.vxrsyncd not started
These messages are informational only, and can be safely ignored if you are not a Veritas Volume Replicator (VVR) user.
Number of columns in a RAID-5 ISP volume
If an ISP volume is created with the RAID-5 capability, the parameters ncols and nmaxcols refer only to the number of data columns, and do not include the parity column. For this reason, the actual number of columns that are created in such a volume is always one more than the number specified.
Veritas Enterprise Administrator issues
Note: Refer to the Veritas Storage Foundation Installation Guide for information on how to set up and start the VEA server and client.
Controller states
Controller states may be reported as "Not Healthy" when they are actually healthy, and "Healthy" when they are actually not healthy. [599060]
Remote Mirror (campus cluster)
There is no option to create site-based snapshots. [541104]
Action pull-down menu items
No Action pull-down menu items exist for the Layout View, the Disk View or the Volume View. [596284]
Java exception error in the Statistics View
A Java exception error occurs in the Statistics View. [618146]
Out of bounds exception error
When connecting to the central host, an "OutOfBoundException" error occurs. [616661]
Volume tags not displayed
On Microsoft Windows systems, existing volume tags are not displayed when adding a new volume tag. [602953]
Cache volumes shown as available for volume sets
The volume set creation wizard shows cache volumes in the "Available Volumes" list. Cache volumes should not be listed as available. Including cache volumes in volume sets can cause data corruption and system panics. [614761]
Storage Agent dumps core if there are many LUNs
Configurations with more than 10240 LUNs can cause the Storage Agent to dump core in the directory /var/vx/isis. [584092]
Workaround
- Rename the Device Discovery Layer (DDL) library file:
# mv /opt/VRTSddlpr/lib/ddl.sl /opt/VRTSddlpr/lib/ddl.sl.orig
This prevents the DDL provider from loading, but has the effect of making enclosure, path and controller objects no longer available in the VEA client GUI.
- Restart the Storage Agent:
# /opt/VRTSobc/pal33/bin/vxpal -a StorageAgent
Name service switch configuration file
For VEA to operate successfully, the name service switch configuration file, /etc/nsswitch.conf, must be present on the system.
See the nsswitch.conf(4) manual page.
Setting a comment on an ISP volume
If you create a new ISP volume by right-clicking on a user template and selecting the New Volume menu item, a comment that you specify to the Create Volume Dialog is not set on the volume. To specify a comment for the newly created volume, select the volume, choose Properties from the pop-up menu, enter a comment in the Comment field and then click OK. [137098]
Administering a cache volume created on an ISP volume
It may not be possible to use the VEA GUI to add or remove mirrors to or from a cache volume (used by space-optimized instant snapshots) that is created on an ISP volume, or to delete a cache volume. The cache object, but not the cache volume, is visible in the graphical interface.
Workaround: Stop and restart the VEA server. [137625]
Permitting remote access to the X Windows server
The following X Windows system error may occur when starting VEA:
Xlib: connection to "hostname:0.0" refused by server
Xlib: Client is not authorized to connect to Server
Workaround: Allow access to the local X server by using the following command:
# xhost + [hostname]
Disk group creation failure with duplicate disk ID
VEA fails to create a disk group with a duplicate disk ID, and gives no other options.
Incorrect vxpool command
The VEA GUI may incorrectly show the -p option as an argument to the vxpool list command, although the command is not actually invoked. [135566]
Comments in Japanese on a snapshot volume are not saved or displayed correctly
Comments that are entered in the Japanese character set in the Snapshot Options dialog of the Create Instant Snapshot screen of the VEA GUI are not saved or displayed correctly. [322954]
Veritas Volume Manager Web GUI issues
Creating a file system on a disabled volume
Creating a file system on a disabled volume returns both success and failure messages. In fact, the operation fails. [565072]
Maximum size of a volume
The maximum size of a volume is shown as a rounded-down integer number of gigabytes. If the maximum size is less than 1GB, the maximum size is shown as 0GB. [573897]
Creating a volume without an existing disk group
Attempting to create a volume without an existing disk group produces the following misleading error:
Info V-46-1-300 No Volume available to create a file system
[574410]
Disabling paths to SENA storage arrays
Disabling a path to a SENA storage array produces the following dialog:
pathname is the last path to its root disk. Are you sure you want to disable it?
Press Next to continue with this operation or press Cancel to exit this operation.
The message is erroneous, and it is safe to continue the operation. [575262]
Failures when importing disk groups
Messages about failures to import disk groups are not displayed by the Web GUI. [596648]
Failures when creating ISP volumes
Messages about failures to create ISP volumes are not displayed by the Web GUI. [601157]
All Active Alerts View
The All Active Alerts View does not display correct information. [601167]
Deleting an active cache volume
Attempting to delete an active cache volume fails with an error message that is incomplete. [615395]
Corrupted import disk group dialog
If some objects are not present, the import disk group dialog may be displayed as blank or may show the text <!--td align="center" height="287" valign="midd"
. For example, this can occur when attempting to import a disk group from a host that is being rebooted. [607096]
Initializing a disk
At least one object must be selected in the GUI before proceeding to initialize a disk. [607026]
Veritas Storage Foundation Basic soft limitation messages
Messages about exceeding the Storage Foundation Basic soft limitations are not displayed by the Web GUI. [619039]
Create disk group wizard
The create disk group wizard shows internal disks as being available for the creation of shared disk groups. [574717]
Object not found error on creating a volume set
An "object not found" error may be displayed when a volume set is created. [615960]
Java exception when deleting a volume
Deleting a volume that has just been deleted produces a Java exception. This can happen if you do not wait for the Web page to be refreshed after the first delete operation. [608573]
Available controllers not shown
The Scan Disks By Controller View does not list the available controllers. [566619]
Message when forcibly removing a volume from a volume set
Forcibly removing a volume from a volume set displays a message that recommends that the force option be selected. [605468]
Java exception when removing a volume from a volume set
Removing a volume from a volume set returns an incorrect Java exception on success. [564455]
Error message when removing a disk from a disk group
Removing a disk from a disk group gives the incorrect error message "no valid disk selected." [611894]
Disconnecting a disk produces a ghost entry
Ghost entries for disconnected disks in the All Disks View cannot be removed by using the GUI. A command such as vxdg -g diskgroup rmdisk diskname must be used instead. [576794]
Move selected disks window
When managing an HP Legacy Managed Host (LMH), the move selected disks window is very small. [605251]
Site consistency wizard
When managing an HP Legacy Managed Host (LMH), the site consistency wizard window is blank at times. [603701]
Internationalization issues
Some ISP attributes have not been translated
The Intelligent Storage Provisioning (ISP) window for annotating a disk is not fully localized. In particular, auto-discovered attributes such as DiskGroup and Enclosure are not translated. [139162]
Inaccuracies in ISP attribute fields
The ISP User Template Wizard shows two "attribute value" fields rather than one "attribute value" and one "attribute name" field. [139762]
Warning messages about exceeding SF Basic limitations are not propagated to the Web GUI
When the SF Basic limitations are exceeded, the warning message regarding this is sent to the task log, not to the GUI. This only occurs if a volume is successfully created. [619039]
Upgrading disk group versions
All disk groups have a version number associated with them. Each VxVM release supports a specific set of disk group versions and can import and perform tasks on disk groups with those versions. Some new features and tasks work only on disk groups with the current disk group version, so you need to upgrade existing disk groups before you can perform the tasks. The following table summarizes the disk group versions that correspond to each VxVM release from 3.0 onward:
VxVM Release | Cluster Protocol Versions | Disk Group Version | Supported Disk Group Versions
---|---|---|---
3.0 | n/a | 60 | 60
3.1 | n/a | 60 | 60
3.2 | 30 | 60 | 60
3.5 | 40 | 90 | 60, 90
4.1 | 60 | 120 | 60, 90, 120
5.0 | 70 | 140 | 60, 90, 120, 140
You can use the following command to find out the version number of a disk group:
# vxdg list diskgroup
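The detailed listing includes a version field; for example (the disk group name and version shown are illustrative):
# vxdg list mydg | grep version
version:   140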
You can also determine the disk group version by using the vxprint(1M) command with the -l format option.
To upgrade a disk group, use the following command:
# vxdg [-T version] upgrade diskgroup
Unless a disk group version is specified, this command upgrades the disk group to the highest version supported by the VxVM version on your system.
For shared disk groups, the latest disk group version is only supported by the latest cluster protocol version. To see the current cluster protocol version, type:
# vxdctl support
To upgrade the protocol version for the entire cluster, enter the following command on the master node:
# vxdctl upgrade
See the "Administering Cluster Functionality" chapter of the Veritas Volume Manager Administrator's Guide.
Source: https://sort.veritas.com/public/documents/sf/5.0/hpux/html/sf_notes/rn_ch_notes_hpux_sf28.html