[Info-vax] VMS to VMS data copy options/performance when losing a DECnet link

Rich Jordan jordan at ccs4vms.com
Fri Apr 29 14:54:33 EDT 2022


On Friday, April 22, 2022 at 7:51:46 PM UTC-5, Mark Berryman wrote:
> On 4/20/22 3:18 PM, Rich Jordan wrote: 
> > On Thursday, April 14, 2022 at 7:12:07 PM UTC-5, Mark Daniel wrote: 
> >> On 15/4/22 4:22 am, Jeffrey H. Coffield wrote: 
> >>> 
> >>> 
> >>> On 04/14/2022 11:31 AM, Rich Jordan wrote: 
> >>>> On Thursday, April 7, 2022 at 6:24:03 PM UTC-5, Rich Jordan wrote: 
> >> 8< snip 8<
> >>> We had to move large save sets over FTP when a system was replaced with 
> >>> another system at a different location and we used the following settings 
> >>> to speed up the transfers: 
> >>> 
> >>> $ TCPIP 
> >>> sysconfig -r socket sb_max=2000000 
> >>> sysconfig -r socket somaxconn=10240 
> >>> sysconfig -r socket sominconn=10240 
> >>> sysconfig -r inet tcp_sendspace=300000 tcp_recvspace=300000 
> >>> sysconfig -q socket 
> >>> sysconfig -q inet tcp_sendspace tcp_recvspace 
> >>> 
> >>> I know I didn't figure this out but I don't remember where I found these 
> >>> settings. 
> >> A quick consult with Dr Google shows lamentably few hits for 
> >> 
> >> "openvms sysconfig -r socket sb_max" 
> >> 
> >> and the rest (though some which may be of interest). 
> >> 
> >> I also notice an online manual 
> >> 
> >> "HP TCP/IP Services for OpenVMS Tuning and Troubleshooting" 
> >> 
> >> available from various sites, e.g. (quoted to prevent wrapping) 
> >> 
> >>> https://www.digiater.nl/openvms/doc/alpha-v8.3/83final/documentation/pdf/aa_rn1vb_te.pdf 
> >> 
> >> which seems to be missing from the VSI collection 
> >> 
> >> https://docs.vmssoftware.com/ 
> >> 
> >> Are the directives and recommendations still applicable to VSI TCP/IP 
> >> Services 5 and 6? 
> >> 
> >>> Jeff 
> >>> www.digitalsynergyinc.com 
> >> 
> >> -- 
> >> Anyone, who using social-media, forms an opinion regarding anything 
> >> other than the relative cuteness of this or that puppy-dog, needs 
> >> seriously to examine their critical thinking. 
> > 
> > I actually did go through the TCPIP troubleshooting manual and tried a couple of the suggestions. Benefits were minimal and could have just been random impact of actual network load at the time of testing. Could not do jumbo packets (but also found no reference to indicate that FTP would benefit from jumbo packets) because the intermediate network doesn't support them. 
> > 
> > Main VMS server and PC backup server on the same LAN, second VMS server at the remote site. The test saveset was one of the small ones; I'll need to get times on the three much larger ones. 
> > 
> > Main VMS to local PC backup server transfers a 6.7M block backup file in 3 minutes 41 seconds 
> > Main VMS to remote VMS transfers same file in 32 minutes 21 seconds (push or pull) 
> > Remote VMS server pulls the same backup file from the PC backup server in 10 minutes 30 seconds. 
> > 
> > So it is faster to relay backups (and presumably other data of any significant size) through the PC backup server than doing it directly VMS to VMS. 
> > 
> > The sysconfig changes above did not make a measurable difference when set on either VMS system or both; maybe 2% difference in time that was likely more due to network usage. 
> > 
> > Setting TCP protocol DELAY_ACK to disabled made a few percentage point difference overall but still nothing major. 
> > 
> > For now I guess we'll have to live with it. I'll try setting up sftp/scp to test but everything I've read says those will be slower.
> I think there may be something wrong with your network. 
> For me, VMS to VMS is around 309 Mbits/sec. Both FTP and DECnet are 
> essentially the same. 
> VMS to Mac, using FTP, is around 763 Mbits/sec. However, I have jumbo 
> frames turned on (FTP can take advantage of jumbo frames, DECnet not so 
> much) and my Mac uses SSD instead of physical disks. 
> 
> If you are really only getting 1.4 to 1.6 Mbps then either there is 
> something wrong with your network or something is seriously slowing I/O 
> on your VMS systems. If you have the space, how fast does the backup 
> file copy disk to disk on the same VMS system? I used a 4GB file as a 
> test and it took about a minute. 
> 
> Mark Berryman

Mark
     Unfortunately the two servers are not colocated; one is remote, connected by some config of their 'metropolitan area network', but we have no control over or access to that network.

     We have tried tweaking the sysconfig settings on both boxes, and the few possibly relevant FTP logicals, and have run through the HP troubleshooting guide (will look at the VSI one).  Eventually production will be upgraded to VSI, but we're still waiting on dev support to be available to test because we expect issues with the SSL and SSH version changes.  To be clear, the backup server would also be running HPE VMS if it is brought up; the alternate boot disk that it lives on to do the transfers and restore the savesets each night is running VSI.
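One note in case it helps anyone following along: changes made with `sysconfig -r` don't survive a reboot. If memory serves, the HP tuning guide has you put them in TCPIP$ETC:SYSCONFIGTAB.DAT so they're reapplied at TCP/IP Services startup; the stanzas would look something like this (these are the values Jeff posted, not a recommendation):

```
socket:
        sb_max = 2000000
        somaxconn = 10240
        sominconn = 10240

inet:
        tcp_sendspace = 300000
        tcp_recvspace = 300000
```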

     The local disk to disk backups on the main server (which is HPE VMS V8.4 still), running to compressed savesets have times as follows.  The settings used were the result of a lot of testing and tweaking of RMS and backup parameters on the previous RX3600 server, and two backup streams are running simultaneously, again after testing showed that gave us the best overall throughput.   The destination disk is a unit on a mirrorset on the raid controller, the source disks are on a four-drive ADG array.

Backups are image backups with data disks fully mounted but all activity quiesced so no open files.
Some time samples:

System disk DKA7:  Saveset size 6,598,848 blocks compressed, elapsed time 16 minutes 42 seconds.  Source data is 52M blocks including the /NOBACKUP system files
User disk DKA0:   Saveset size 23,283,008 blocks compressed, elapsed time  48 minutes 26 seconds.  Source data is 82M blocks, no /NOBACKUP files
Primary data disk DKA5:   Saveset size 59,619,040 blocks compressed, elapsed time 3 hours 4 minutes.   Source data is 273M blocks, no /NOBACKUP files
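For reference, the nightly jobs are ordinary image backups to compressed savesets, along the lines of the following (device and saveset names here are illustrative, not our actual commands; /DATA_FORMAT=COMPRESSED is the software compression qualifier that came in with V8.4):

```
$ ! Two streams run in parallel as separate batch jobs
$ BACKUP/IMAGE/DATA_FORMAT=COMPRESSED DKA0: DKA100:[SAVESETS]DKA0.BCK/SAVE_SET
$ BACKUP/IMAGE/DATA_FORMAT=COMPRESSED DKA5: DKA100:[SAVESETS]DKA5.BCK/SAVE_SET
```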

I tested copying the DKA7 saveset disk to disk (this time from the mirrored backup disk to a plain old disk used for transfer staging) and back: 29 seconds and 27 seconds respectively.
Copying the DKA5 saveset the same way took 3 minutes 57 seconds and 3 minutes 54 seconds.
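To put the same numbers side by side, here's my back-of-the-envelope arithmetic (assuming the usual 512-byte VMS block and decimal megabytes):

```python
# Rough throughput arithmetic for the figures quoted above.
# Assumes 512 bytes per OpenVMS disk block and decimal MB (1e6 bytes).

BLOCK = 512  # bytes per VMS disk block

def mb_per_sec(blocks, seconds):
    """Throughput in MB/s for a transfer of `blocks` blocks in `seconds` seconds."""
    return blocks * BLOCK / 1e6 / seconds

# Disk-to-disk copy of the DKA7 saveset: 6,598,848 blocks in 29 s
print(round(mb_per_sec(6_598_848, 29), 1))            # ~116.5 MB/s

# Same-size saveset over FTP to the remote VMS box: ~6.7M blocks in 32 min 21 s
print(round(mb_per_sec(6_700_000, 32 * 60 + 21), 2))  # ~1.77 MB/s
```

So the local copies run roughly two orders of magnitude faster than the remote FTP transfer, which is consistent with the disks not being the bottleneck.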

Doesn't sound like we have disk-level issues on the copies.  The backups do seem long, but the user and production disks have a couple hundred thousand small files along with quite a few very large ones.  They're also completing in about 60% of the time they ran on the retired RX3600 server with its SA controller and universal SCSI disks.  But due to time constraints we didn't get to do the full retuning and testing on backups to see if we could get them running faster on the new box, so it's still the same backup commands and RMS extend settings.

We'll see if the tweaks from the VSI troubleshooting guide, if any are applicable, affect things.

BTW, when the backup server was still in our office, our FTP transfers between an HP V8.4 AlphaServer DS10 with GbE and an RX2660 running V8.3-1H1 with GbE on the same ProCurve switch were also terrible, though I don't recall the exact numbers. 

Thanks for responding, sorry for the delay.

Rich


