[Info-vax] Doing More With RMS Data on OpenVMS - Live Webinar on 2011-06-22

Hein RMS van den Heuvel heinvandenheuvel at gmail.com
Sat Jul 16 11:11:57 EDT 2011


On Jul 16, 8:31 am, Jan-Erik Soderholm <jan-erik.soderh... at telia.com>
wrote:
> VAXman- @SendSpamHere.ORG wrote 2011-07-16 13:49:
>
> > In article<ivrhp0$hm... at news.albasani.net>, Jan-Erik Soderholm<jan-erik.soderh... at telia.com>  writes:
> >> {...snip...}
> >> That was exactly what I meant. Why this whole new toolset when the
> >> same thing could have been done for years with an existing
> >> toolset? You didn't get it wrong.
>
> > What/which toolset gave/gives you realtime access to RMS updates?
>
> I do not know what "access to RMS updates" is, but DBI can at least
> read and write RMS files, which is what most users probably need.

Read/write access to RMS files is typically done through an ODBC or
JDBC driver, of which there are several, both freeware and commercial,
such as ConnX, Easysoft and Attunity. Hoff gives an overview in:
http://labs.hoffmanlabs.org/node/668
No problem.
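
To make that concrete, a minimal JDBC lookup against an RMS-backed
table might look like the sketch below. The driver URL, credentials,
table and column names are all made up for illustration; the real
values depend on which driver (ConnX, Attunity, Easysoft, ...) you
install and how the RMS file is mapped to a relational view.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class RmsLookup {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection string; substitute the vendor's own URL format.
            String url = "jdbc:vendor://vmshost:1234/RMS_DATA";
            try (Connection con = DriverManager.getConnection(url, "user", "password");
                 PreparedStatement ps = con.prepareStatement(
                     "SELECT cust_id, name, balance FROM customer WHERE cust_id = ?")) {
                ps.setInt(1, 42);   // singleton select on the key - RMS's sweet spot
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.printf("%d %s %s%n",
                            rs.getInt("cust_id"), rs.getString("name"),
                            rs.getBigDecimal("balance"));
                    }
                }
            }
        }
    }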

That's great for a Java application lookup, or to pump data into Excel/
Access and so on. But you are restricted to the RMS access methods.
Modern databases can do so much more, with smart query optimizers,
index-only lookups, multi-index lookups, smart heuristics-based joins,
and so on.
While RMS can typically fulfill insert/update speed requirements
nicely, it cannot offer the query speed needed except for simple
singleton selects (where it shines). So folks want their data in a
'real' database.

Assuming that the system of reference remains the data in RMS files,
nowadays we see a need to replicate that RMS data, live or near live,
in off-box databases like Oracle or SQL Server, or on a remote, non-
clustered OpenVMS system.
That data is then used to run reports against, or for general lookups.

For smallish files (tens of gigabytes) and daily or even hourly
updates, this can be done in many ways: notably with a simple full
extract - FTP - import, with ODBC queries, or with more advanced
methods which try to find out what changed by comparing and shipping
only the differences (ConnX has a very fine implementation of this).
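
For the record, the brute-force full reload can be as simple as the
following Java sketch: read every record through an RMS-capable JDBC
driver and batch-insert it into the target. Connection strings, table
and column names here are illustrative assumptions, not anybody's
real schema.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class FullReload {
        public static void main(String[] args) throws Exception {
            try (Connection rms = DriverManager.getConnection("jdbc:vendor://vmshost/RMS_DATA");
                 Connection tgt = DriverManager.getConnection("jdbc:postgresql://dbhost/reports")) {
                tgt.setAutoCommit(false);
                try (Statement wipe = tgt.createStatement()) {
                    wipe.executeUpdate("DELETE FROM customer_copy");   // start from a clean copy
                }
                try (Statement src = rms.createStatement();
                     ResultSet rs = src.executeQuery("SELECT cust_id, name, balance FROM customer");
                     PreparedStatement ins = tgt.prepareStatement(
                         "INSERT INTO customer_copy (cust_id, name, balance) VALUES (?, ?, ?)")) {
                    int n = 0;
                    while (rs.next()) {
                        ins.setInt(1, rs.getInt(1));
                        ins.setString(2, rs.getString(2));
                        ins.setBigDecimal(3, rs.getBigDecimal(3));
                        ins.addBatch();
                        if (++n % 1000 == 0) ins.executeBatch();   // flush in chunks
                    }
                    ins.executeBatch();
                }
                tgt.commit();
            }
        }
    }

Fine for a few gigabytes and a nightly window; it just does not scale
to the next case.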

When dealing with hundreds of gigabytes and looking for up-to-the-
minute or up-to-the-second synchronization, one can no longer process
entire files to get there.

This is where RMS CDC comes in.  Brian made a clever RMS intercept
(and then some) which captures before and after images for records
modified in 'marked' RMS files. Attunity already had a neat Change
Data Capture infrastructure for other data sources. Now they then hook
up RMS files and 'magically' make all changes to RMS files appear as
rows in an SQL-addressable table on a staging system. From there,
relatively straightforward (Java) jobs can update the appropriate
target databases.
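
The downstream apply job can then be little more than the sketch
below: poll a staging table that exposes the captured changes as rows
and replay them against the target. The staging table layout (a
sequence number, an operation code and the after-image columns) is my
own illustrative assumption; the real CDC staging schema is defined by
the vendor tooling.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class ApplyRmsChanges {
        // Replays every change newer than fromSeq and returns the highest
        // sequence number applied, so the caller can resume from there.
        public static long applyBatch(Connection staging, Connection target, long fromSeq)
                throws SQLException {
            long lastSeq = fromSeq;
            String pick = "SELECT seq_no, operation, cust_id, name, balance "
                        + "FROM customer_changes WHERE seq_no > ? ORDER BY seq_no";
            try (PreparedStatement ps  = staging.prepareStatement(pick);
                 PreparedStatement ins = target.prepareStatement(
                     "INSERT INTO customer (cust_id, name, balance) VALUES (?, ?, ?)");
                 PreparedStatement upd = target.prepareStatement(
                     "UPDATE customer SET name = ?, balance = ? WHERE cust_id = ?");
                 PreparedStatement del = target.prepareStatement(
                     "DELETE FROM customer WHERE cust_id = ?")) {
                ps.setLong(1, fromSeq);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        String op = rs.getString("operation");
                        int id = rs.getInt("cust_id");
                        if ("I".equals(op)) {          // insert: use the after-image
                            ins.setInt(1, id);
                            ins.setString(2, rs.getString("name"));
                            ins.setBigDecimal(3, rs.getBigDecimal("balance"));
                            ins.executeUpdate();
                        } else if ("U".equals(op)) {   // update by key with the after-image
                            upd.setString(1, rs.getString("name"));
                            upd.setBigDecimal(2, rs.getBigDecimal("balance"));
                            upd.setInt(3, id);
                            upd.executeUpdate();
                        } else if ("D".equals(op)) {   // delete by key
                            del.setInt(1, id);
                            del.executeUpdate();
                        }
                        lastSeq = rs.getLong("seq_no");
                    }
                }
            }
            target.commit();   // assumes auto-commit is off on the target connection
            return lastSeq;
        }
    }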

For reports one may choose to apply changes in bulk, replicating the
data state as of a given time of day. For OLTP-ish work you want
updates to happen within a few seconds, or even sub-second.
This allows for an evolutionary way to reduce dependencies on the
OpenVMS backend, a desire which unfortunately many customers seem to
have.
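
Whether that ends up as 'bulk' or 'near real time' is then mostly a
scheduling decision around the same apply routine, along the lines of
this fragment (reusing the hypothetical ApplyRmsChanges class from the
previous sketch, with illustrative connection strings):

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class NearRealTimeLoop {
        public static void main(String[] args) throws Exception {
            try (Connection staging = DriverManager.getConnection("jdbc:vendor://staginghost/CDC");
                 Connection target  = DriverManager.getConnection("jdbc:postgresql://dbhost/oltp_copy")) {
                target.setAutoCommit(false);
                long lastSeq = 0;
                while (true) {
                    lastSeq = ApplyRmsChanges.applyBatch(staging, target, lastSeq);
                    Thread.sleep(1000);   // poll roughly every second for OLTP-ish latency;
                                          // run the same routine once a night for the bulk/report case
                }
            }
        }
    }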

New implementations can be done with modern tools, probably browser-
based, and use for example a Linux-hosted database as the data source
while the real data still lives in RMS files. Over time more and more
subsystems can use the non-RMS data until at some point a flip-over
can be made and the other database becomes the system of reference.
But all along, read applications can already be coded against that new
database. Updates to the RMS files can be done through ODBC, and those
can be flipped over to target the new database as well.

Enough rambling, back to the yard work,
Hein
