[Info-vax] VSI and Process Software announcement
Kerry Main
kemain.nospam at gmail.com
Mon Sep 26 16:10:40 EDT 2016
> -----Original Message-----
> From: Info-vax [mailto:info-vax-bounces at rbnsn.com] On Behalf Of Dirk Munk via Info-vax
> Sent: 26-Sep-16 2:55 PM
> To: info-vax at rbnsn.com
> Cc: Dirk Munk <munk at home.nl>
> Subject: Re: [Info-vax] VSI and Process Software announcement
>
> Stephen Hoffman wrote:
> > On 2016-09-25 11:43:29 +0000, Dirk Munk said:
> >
> >> Kerry Main wrote:
> >>> You seem to want to make it so easy that an end user could
> >>> install an OS into a prod environment.
> >
> > Everybody has to go through this, just because we had to walk
> > to school all winter long, in the snow, up hill, both ways?
> >
> >>> Imho, that is just crazy. Regardless of the method, I want
> >>> experienced SysAdmins to have their hands on new OS deployments.
> >>> Yes, I know there is work to be done to make it easier than it
> >>> is today, but the bottom line is there are just way too many
> >>> variables and landmines that could impact other OS's to let a
> >>> rookie deploy a new OS to a prod environment.
> >>
> >> You're absolutely right. I've worked in an environment where
> >> everything was set up with easy-to-deploy templates, etc. The
> >> result was that no one understood what they were doing; they
> >> didn't understand all these settings because they didn't have to
> >> think about them. If there were problems, they didn't know where
> >> to look, or how to fix the problems.
> >
> > I remember the uproar over the shift to depot repairs and board
> > swapping, too. When the repair techs stopped using solder and a 'scope.
> >
> > Welcome to modern computing technology. We each — we all — depend
> > on the knowledge of other folks. Of the code and the tools of
> > others. None of us are experts in everything. We are increasingly
> > integrating our servers and software with more packages and tools
> > and platforms. Trying to make our configurations and deployments
> > easier, more manageable, more repeatable, and requiring less human
> > interaction is the goal that most of us have.
> >
> > I'm glad that OpenVMS moved forward. Part of moving forward is
> > keeping the best of the old ideas, and rethinking or replacing the
> > areas that are no longer advantageous. That includes reworking or
> > rethinking or replacing the console serial line, and
> > manually-configured, local deployments booted from DVD, among
> > other approaches that seem increasingly antebellum.
>
>
> Let me give you a real world example of the results of this kind of
> thinking.
>
> An application wasn't performing very well; it was a bit unclear
> why, but there were more problems at that datacenter.
>
> So we gave that application a brand new x86 server with 32GB of
> memory, in another datacenter.
>
> First it got a *standard* Linux installation. Then the SAN LUNs
> were added. I know how important disk alignment is, so I personally
> took care of partitioning and formatting the LUNs; that was not the
> standard way of setting up the LUNs.
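The post doesn't show the actual partitioning commands, so as a minimal sketch of the alignment rule being applied (the 1 MiB boundary and the example sector numbers are assumptions for illustration, not the original setup):

```python
# A partition is well aligned for a SAN LUN when its starting byte offset
# falls on the array's stripe/page boundary; 1 MiB is a common safe target
# (sector 2048 is the first-partition start most modern partitioners use).

SECTOR_SIZE = 512          # bytes per logical sector (typical)
ALIGNMENT = 1024 * 1024    # 1 MiB alignment target (assumed here)

def is_aligned(start_sector: int,
               sector_size: int = SECTOR_SIZE,
               alignment: int = ALIGNMENT) -> bool:
    """True if the partition's byte offset lands on the alignment boundary."""
    return (start_sector * sector_size) % alignment == 0

# A partition starting at sector 2048 (exactly 1 MiB in) is aligned;
# the old DOS-style start at sector 63 is not.
print(is_aligned(2048))  # True
print(is_aligned(63))    # False
```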
>
> Then the application was installed. I had told the database
> administrator several times to make sure his databases got
> sufficient cache.
>
> A few months later my network colleague asked me if it was
> normal that a database fetch took a certain amount of time. I told
> him that was far too much, and asked him about the application.
>
> And as you can guess, it was this application. At first I couldn't
> understand why, but then I got a suspicion.
>
> I went to the database group, and asked how much free memory
> that system had, and yes, it was 29GB.
>
> So I asked about the database settings. Well, they had done a
> *standard* Oracle install, with 1GB of cache per database, leaving
> 29GB wasted.
>
> I suppressed some curses, and kindly asked them to increase the
> caches, and to use the memory in the system.
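The figures in the anecdote are simple arithmetic; here is a back-of-envelope sketch (the ~2GB OS overhead and the single-instance count are assumptions made to match the numbers in the story):

```python
# Memory budget from the anecdote: a 32GB server where the *standard*
# Oracle install gave each database a 1GB cache.
TOTAL_RAM_GB = 32
OS_AND_PROCESSES_GB = 2    # rough assumption for kernel + processes
N_DATABASES = 1
CACHE_PER_DB_GB = 1        # the *standard* install default in the story

used = OS_AND_PROCESSES_GB + N_DATABASES * CACHE_PER_DB_GB
idle = TOTAL_RAM_GB - used
print(f"{idle}GB of RAM sitting idle")  # 29GB, matching the story
```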
>
> That solved the problem, as you can guess.
>
> An exception? No, a *standard* problem with all those *standard*
> installations. So much so that the support people now have the
> *standard* reply "check your database cache and your free memory"
> when they are confronted with performance problems.
>
And to put things into further perspective as to why one needs L2 folks to install (L1 is OK for daily admin): things are going to get a lot more complex.
As an example, BL890 OpenVMS blade servers now support up to 1.5TB of local memory. How big is even a selective system crash dump file going to be? Would you even create one, or wait for issues? What about process quotas?
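To put a rough number on that dump-file question, here is a back-of-envelope upper bound in 512-byte OpenVMS disk blocks (a full dump of all of physical memory; a selective dump would be smaller, but still enormous):

```python
# Rough upper bound for a crash dump of a 1.5TB machine, expressed in
# OpenVMS 512-byte disk blocks.
BLOCK_SIZE = 512                  # OpenVMS disk block, bytes
memory_bytes = int(1.5 * 2**40)   # 1.5 TiB of physical memory
blocks = memory_bytes // BLOCK_SIZE
print(f"{blocks:,} blocks")                          # ~3.2 billion blocks
print(f"{blocks * BLOCK_SIZE / 2**40:.1f} TiB on disk")
```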
And that TB number will surely be going up as cheap new non-volatile memories appear next year from Intel (3D XPoint) and others.
This is going to require a re-think of not only server tuning but DB and storage tuning as well. The classic debate of "many, many small rack servers connected by higher-latency LAN links" vs. "a much smaller number of very large blade servers" is going to come up (with app-specific considerations).
Even with file storage: I have a local 2TB disk (mirrored 3rd-party 2TB drives - not supported, but works great) on my local rx2600. Cost for the 2 drives, case enclosure and LSI controller was about CAD$400 in total. For customers who struggle with disk storage, this is going to require a re-think as well. I can store 20 full system image backup files (6GB or less each), and at 180GB have not even used up 10% of the available space on that volume. With 10TB single drives now available (to be supported in the next file system on OpenVMS), one has to, again, re-think how files will be managed in the future.
With a 2TB common system disk, other than app-specific hot files, is there still the same push to move apps off the system disk? How do you lay out the app directory structures?
[google BarraCuda Pro drives for info on 10TB drives]
Here is the "show device" output for this big (and really cheap) disk on my system:
Free blocks = 3556200960 (1.65TB)
Total blocks = 3905980417
:-)
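For anyone wanting to sanity-check those figures, OpenVMS disk blocks convert to bytes at 512 bytes per block:

```python
# Converting the SHOW DEVICE figures (512-byte OpenVMS blocks) to TiB.
BLOCK_SIZE = 512
free_blocks = 3556200960
total_blocks = 3905980417

free_tib = free_blocks * BLOCK_SIZE / 2**40
total_tib = total_blocks * BLOCK_SIZE / 2**40
print(f"free:  {free_tib:.2f} TiB")   # ~1.66 TiB, matching the 1.65TB shown
print(f"total: {total_tib:.2f} TiB")  # ~1.82 TiB usable on the 2TB volume
```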
Regards,
Kerry Main
Kerry dot main at starkgaming dot com