[Info-vax] Licenses on VAX/VMS 4.0/4.1 source code listing scans

Bill Gunshannon bill.gunshannon at gmail.com
Mon Dec 13 15:51:11 EST 2021


On 12/13/21 3:44 PM, Arne Vajhøj wrote:
> On 12/13/2021 3:26 PM, Bill Gunshannon wrote:
>> On 12/13/21 1:53 PM, Arne Vajhøj wrote:
>>> On 12/12/2021 9:22 AM, Bill Gunshannon wrote:
>>>> On 12/11/21 7:38 PM, Arne Vajhøj wrote:
>>>>> On 12/11/2021 7:12 PM, Bill Gunshannon wrote:
>>>>>> On 12/11/21 2:25 PM, Arne Vajhøj wrote:
>>>>>>> On 12/11/2021 1:40 PM, Bill Gunshannon wrote:
>>>>>>>> On 12/11/21 11:51 AM, Arne Vajhøj wrote:
>>>>>>>>> And all the largest systems are distributed. They use
>>>>>>>>> Hadoop, Cassandra, Kafka, etc. Traditional technologies
>>>>>>>>> simply do not scale to that level.
>>>>>>>>
>>>>>>>> You wanna bet?  While some of the frontend stuff has migrated
>>>>>>>> to the typical web crap, the IRS, for example, is still a Unisys
>>>>>>>> OS2200 shop with the code being mostly legacy ACOB carried
>>>>>>>> forward from its origination on a UNIVAC 1100.
>>>>>>>
>>>>>>> Yes. And that system may have been a big system 30 years ago.
>>>>>>
>>>>>> The US IRS runs one of the biggest information systems in the
>>>>>> world.  Large enough that some of the biggest contracting companies
>>>>>> in the United States looked at an RFP to replace it and said it
>>>>>> probably couldn't be done.  And so it is still written mostly in
>>>>>> COBOL running on Unisys OS2200.
>>>>>>
>>>>>>>
>>>>>>> But today large systems are NNN/NNNN nodes, NNNN CPUs, N/NN TB
>>>>>>> of memory, and N PB of disk.
>>>>>>
>>>>>> In what way does that contradict what I said above?  Or are you
>>>>>> one of those people who think IBM Mainframe still means a 360/40?
>>>>>
>>>>> A z15 maxes out at 24 CPUs with 190 cores for applications and
>>>>> the OS, 40 TB of memory, and 192 I/O cards.
>>>>>
>>>>> The largest Unisys (the 8300) is, as far as I can read, only
>>>>> 8 CPUs with 64 cores for applications and the OS, and 512 GB
>>>>> of memory.
>>>>>
>>>>> It just doesn't scale to what companies with large data processing
>>>>> requirements need today.
>>>>>
>>>>> 11 years ago(!) the largest Hadoop cluster had 2000 CPUs with
>>>>> 22,400 cores, 64 TB of memory, and 21 PB of data on disk.
>>>>
>>>> And yet the IRS is doing it just fine.  Go figure.
>>>
>>> Sure. They have a mid-size problem, and their system capable
>>> of handling mid-size problems does fine.
>>
>> Mid-size?  Do you have any idea what the US IRS is and what they do?
> 
> Yes.
> 
> But you told us what HW they are running on, and from that it is
> a mid-size task.
> 
>>> Those that have a very large problem would not be fine.
>>>
>>> Of course the IRS could get the same mid-size capability for way
>>> less money on a different platform, but porting is probably
>>> expensive. And they do not have any competitors to worry about! :-)
>>
>> Expense wasn't the problem.  They have pretty deep pockets. :-)
>> The problem was the ability to accomplish a port given the constraints
>> they run under.
> 
> It is a hard thing to port. The CPU/memory/disk requirements are
> mid-size, but the functional requirements are very large.
> 
> US tax rules supposedly consist of 2500 pages of law and 9000
> pages of regulations. That is 11,500 pages, or about 29 volumes
> of 400 pages each. Hard problem.
> 
> And does anybody think that they would simplify the rules to make
> a port easier, or even just freeze the rules during a port?
> 

You missed the big one.  No downtime.  The new system would have
to go into operation functioning perfectly not just from day one,
but from minute one.  It's a 24-hours-a-day, 365-days-a-year (except
every four years, when it is 366 days) job :-).  How many large-scale
porting projects have you seen accomplish that?
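
For a sense of how unforgiving "no downtime" is, here is a quick
back-of-envelope sketch (plain Python; the availability levels are
just illustrative, not IRS figures):

    # Downtime budget per year at various availability levels.
    # 365.25 days/year averages in the leap day mentioned above.
    MINUTES_PER_YEAR = 365.25 * 24 * 60

    for availability in (0.99, 0.999, 0.9999, 0.99999):
        downtime = (1 - availability) * MINUTES_PER_YEAR
        print(f"{availability:.3%} uptime allows "
              f"{downtime:8.1f} minutes of downtime per year")

Even "five nines" still permits about five minutes of downtime a
year, and the cut-over described above would have to beat that from
the very first minute.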

bill



