[Info-vax] Migrating OpenVMS to a Fiber Channel storage device.

Jan-Erik Soderholm jan-erik.soderholm at telia.com
Sat Dec 21 08:25:51 EST 2013


Jerome Ibanes wrote 2013-12-21 06:33:
> Our ES47 currently has two fiber channel cards, as such:
>
> P00>>>show dev
> dkc0.0.0.1.2               DKC0              COMPAQ BF14689BC5  HPB1
> dkc100.1.0.1.2             DKC100            COMPAQ BF14689BC5  HPB1
> dqa0.0.0.2.2               DQA0                     DW-224E- A  A.SD
> ega0.0.0.3.1               EGA0              00-10-18-10-00-F6
> egb0.0.0.4.1               EGB0              00-10-18-10-00-78
> pga0.0.0.1.0               PGA0        WWN 1000-0000-c945-3594
> pgb0.0.0.1.1               PGB0        WWN 1000-0000-c945-350e
> pka0.7.0.2.1               PKA0                  SCSI Bus ID 7
> pkb0.7.0.102.1             PKB0                  SCSI Bus ID 7
> pkc0.7.0.1.2               PKC0                  SCSI Bus ID 7
>
> The system runs OpenVMS 8.4 from dkc0 (HP 146GB at 15k drive), my goal would
> be to "copy" the contents of this block device to a lun of an equal size
> on the (netapp) SAN.
>
> Would there be an easier alternative to boot from a SAN (I imagine
> installing OpenVMS directly to a SAN isn't an option but please correct
> me if I'm wrong).
>
>
> Cheers,
> Jerome
>

I "migrated" all storage (the system disk and two data disks) for our DS20e
from HSG80/HSZ80 storage to an IBM SAN solution last Sunday (15-Dec).

The IBM SAN has specific support for OpenVMS and AlphaServers, and the
*IBM* documentation has good information about a few things worth
considering when running on a non-StorageWorks SAN. You can google
"SG24-6786" and look up the chapter "16.5 OpenVMS".

A quick google gave a forum thread where NetApp is discussed.
There is also a link to the PDF from NetApp about OpenVMS:

https://communities.netapp.com/servlet/JiveServlet/download/28032-16260/NetApp_OpenVMS_Guide_to_Best_Practices.pdf

Now over to what I did last Sunday.

There were three systems involved (all DS20e systems):

Sys1. Our old prod envir. No access to the IBM SAN. V8.2
Sys2. Our test envir. On the IBM SAN for 2-3 years. V8.4
Sys3. Our new prod envir. FC setup against the IBM SAN.

The IBM SAN has volumes set up for all our systems, and all volumes
are visible to all our DS20e's. Yes, one has to be careful
not to mount the same volumes on multiple systems... :-)
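Mounting by volume label gives some protection here, since MOUNT
verifies the label on the volume before mounting it. A minimal sketch
(device and label names are examples from the listings below, not a
recommendation):

```
$! MOUNT checks that the on-disk label matches "DATA2";
$! a typo in the device name will normally fail the mount
$! rather than silently grab the wrong volume.
$ MOUNT/SYSTEM $1$DGA1610: DATA2
```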

Steps:

1. Image backup of all disks on sys1 to local disk files.
2. FTP the image backups to sys2.
3. Restore of the system disk, using sys2, to the new system disk for sys3.
4. Mount the sys3 system disk on sys2, a few changes in the startup.
5. Dismount the sys3 system disk from sys2.
6. Using the console of sys3, booted the system from the SAN volume.
7. Using sys3, restored the other two image backups to the SAN volumes.
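The backup/restore steps above can be sketched in DCL roughly as
follows. The device and saveset names are illustrative assumptions,
not the actual names used:

```
$! Step 1 (on sys1): image backup of a disk to a local saveset file
$ BACKUP/IMAGE $1$DKC100: DKA0:[SAVESETS]SYS1_SYSDISK.BCK/SAVE_SET
$
$! Step 3 (on sys2): restore the saveset to the new SAN volume,
$! which must be mounted /FOREIGN for an image restore
$ MOUNT/FOREIGN $1$DGA1600:
$ BACKUP/IMAGE DKA0:[SAVESETS]SYS1_SYSDISK.BCK/SAVE_SET $1$DGA1600:
```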

After that it was more or less "running". There were a few COM files
that used the old DSA (shadowed) device names and needed updating.
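Finding those references is easy with SEARCH ("DSA" here stands in
for whatever old shadow-set names were used):

```
$! Scan all COM files on the system disk for old device names
$ SEARCH SYS$SYSDEVICE:[000000...]*.COM "DSA"
```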

I also took the opportunity to rebuild the Rdb database using parameters
more in line with today's storage and systems, not the defaults
from 1999 when the database was last rebuilt. Larger buffer/page
sizes and using "ranked" indexes.

We skipped the 8.2 => 8.4 upgrade for the moment, to avoid having too
many variables at once. Both 8.2 and 8.4 seem to be running well.

We do not run shadowing now, we decided there was no need.

We have two FC cards in the servers and two FC ports on the
IBM SAN, so there are 4 "paths" for each device.
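The paths can be inspected from DCL; a sketch, using one of the
device names from the listing below as an example:

```
$! Show the multipath set for one SAN volume
$! (should list the 4 paths: 2 HBAs x 2 SAN ports)
$ SHOW DEVICE/MULTIPATH_SET $1$DGA1600:
$
$! The full listing also shows the current and available paths
$ SHOW DEVICE/FULL $1$DGA1600:
```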

This setup is *way* faster than the old HSZ/HSG setup: 4-5
times higher I/O per second. Our nightly batch run went from
14-15 minutes to 3-4 minutes.

The changes to the Rdb database also make all Rdb-related
work (most of the work on the system) run much faster. The
*average* system load at daytime went from 10-15% to just a
few percent. When looking at the MONITOR screen I first thought
that something was wrong, but everything was working as usual. :-)

Here is what it looks like from our prod system:

$ show dev dga

Device        Device     Error    Volume         Free
  Name         Status     Count     Label        Blocks
$1$DGA1300:   Online         0
$1$DGA1301:   Online         0
$1$DGA1310:   Online         0
$1$DGA1311:   Online         0
$1$DGA1320:   Online         0
$1$DGA1400:   Online         0
$1$DGA1401:   Online         0
$1$DGA1410:   Online         0
$1$DGA1411:   Online         0
$1$DGA1420:   Online         0
$1$DGA1421:   Online         0
$1$DGA1600:   Mounted        0  ALPHASYS      42563556
$1$DGA1601:   Online         0
$1$DGA1610:   Mounted        0  DATA2         20333144
$1$DGA1611:   Online         0
$1$DGA1620:   Mounted        0  DATA3         54316432
$1$DGA1621:   Online         0
$1$DGA1901:   Online         0
$1$DGA1902:   Online         0
$

The volumes named DGA13xx are for the dev envir.
The volumes named DGA14xx are for the test envir.
The volumes named DGA16xx are for the prod envir.

The same command from our test envir thus looks like:

$ show dev dga

Device        Device     Error    Volume         Free
  Name         Status     Count     Label        Blocks
$1$DGA1300:   Online         0
$1$DGA1301:   Online         0
$1$DGA1310:   Online         0
$1$DGA1311:   Online         0
$1$DGA1320:   Online         0
$1$DGA1400:   Mounted        0  ALPHASYS      17242272
$1$DGA1401:   Online         0
$1$DGA1410:   Mounted        0  DATA2         15748824
$1$DGA1411:   Online         0
$1$DGA1420:   Mounted        0  DATA3         37703344
$1$DGA1421:   Online         0
$1$DGA1600:   Online         0
$1$DGA1601:   Online         0
$1$DGA1610:   Online         0
$1$DGA1611:   Online         0
$1$DGA1620:   Online         0
$1$DGA1621:   Online         0
$1$DGA1901:   Online         0
$1$DGA1902:   Online         0

So, it's VMS as usual, just better... :-)

Jan-Erik.




