[Info-vax] DECserver/LAT across DECnet areas?

Jan-Erik Söderholm jan-erik.soderholm at telia.com
Wed Jul 26 06:47:53 EDT 2023


On 2023-07-26 at 02:30, Arne Vajhøj wrote:
> On 7/25/2023 6:24 PM, Johnny Billquist wrote:
>> On 2023-07-25 02:06, Arne Vajhøj wrote:
>>> On 7/24/2023 7:53 PM, Scott Dorsey wrote:
>>>>                    Not to mention the added overhead from all those 
>>>> layers.
>>>
>>> Are those layers that bad?
>>>
>>> Sure SSL handshake takes time, but that is not due to the layers
>>> but due to the nature of the key exchange.
>>
>> Let me put a question to you: is there any number of layers, in your 
>> opinion, where it becomes a problem?
>>
>> 1 layer? 10? 100? 1000? At which point does it become a problem, and why? 
>> And if you say 100, for example, why is 99 not a problem, but 100 is?
> 
> Good question. And we all know how it is with answers to good
> questions.  :-)
> 
> It is in many ways similar to "how many software layers in an
> application are too many?" and "how few lines per
> routine/function/method counts as too much splitting up?".
> 
> There is not a single provable correct answer. Based on
> some industry experience most have a feeling for when
> a good thing becomes too much.


Correct. As I have understood this discussion, it has mainly been about 
layers specifically in the communication stacks.

I have also seen a lot of issues where there are "too many" layers in the 
application architecture. Most often in "modern" solutions using some 
fancy framework, where the developer gets further and further from the 
core system with each new version of these frameworks.

We have an old VMS solution where the COBOL code mostly talks directly to 
the equipment (barcode scanners, label printers and PLCs in machines), and 
our response times are in the tens of ms. Another (commercial and highly 
"modern") similar system takes up to a minute for a simple label printout.

There have also been some public projects that have failed due to what I 
see as a failure to understand the whole application architecture...


> 
> application protocol (the web service APIs - not what is called 
> application layer in typical network stack pictures)
> HTTP
> TLS
> TCP
> IP
> the more physical stuff
> 
> does not seem to be too much. It gets used. And the number of
> layers is not a frequent complaint.
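
The per-packet cost of that stack can be put in rough numbers. A small 
sketch (the sizes are the usual fixed header lengths; real traffic varies 
with IP/TCP options, the TLS MAC/tag, and HTTP header size, so treat the 
figures as illustrative assumptions):

```python
# Rough fixed header overhead per packet on an HTTPS request path.
LAYERS = [
    ("Ethernet",   14),
    ("IPv4",       20),   # no options
    ("TCP",        20),   # no options
    ("TLS record",  5),   # record header only, auth tag excluded
]

fixed = sum(size for _, size in LAYERS)
for name, size in LAYERS:
    print(f"{name:10s} {size:3d} bytes")
print(f"total      {fixed:3d} bytes per packet")
```

Against a ~1500-byte Ethernet frame that is a few percent of overhead, 
which is one reason the number of layers by itself is rarely the 
complaint.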
> 
> The (in)famous OSI model was considered to be too much. Not only
> because of the number of layers, but still.
> 
> So it seems to me that experience shows around 7-8 being too
> much.
> 
>> And if everything is depending on TLS to provide security, then it means 
>> if TLS is compromised, you suddenly have no security anywhere. That's the 
>> "all eggs in one basket" point.
>>
>> The fact that TLS supports multiple cryptos does not suddenly make it 
>> several different baskets.
>> TLS has a common framework, which is one single piece, and there is also 
>> always a negotiation between the two sides on cryptos. So if you identify 
>> a problem with a crypto, it's basically an open exploit everywhere you 
>> can negotiate that crypto. Which then would mean pretty much everywhere, 
>> until that crypto is removed, which will certainly take some time in a 
>> lot of places.
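
The negotiation point can be illustrated with a toy model (plain Python, 
not real TLS): the server picks the first suite in its own preference 
order that the client also offers, so a weak suite left enabled on both 
sides can still end up being selected.

```python
def negotiate(server_prefs, client_offer):
    """Simplified TLS-style cipher suite selection: the first suite in
    the server's preference list that the client also offered wins."""
    offered = set(client_offer)
    for suite in server_prefs:
        if suite in offered:
            return suite
    return None  # no common suite: the handshake fails

# A weak suite kept around "for compatibility" is still negotiable
# if it is the only overlap between the two sides.
server = ["AES256-GCM", "CHACHA20", "3DES"]   # 3DES still enabled
client = ["3DES"]                             # downlevel client
assert negotiate(server, client) == "3DES"
# Once removed from the server config, it can no longer be negotiated.
assert negotiate(["AES256-GCM"], ["3DES"]) is None
```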
> 
> TLS has a single point of failure in itself (unless one
> considers 1.2 and 1.3 to be sufficiently different to provide
> some redundancy) but some redundancy in the algorithms.
> 
> Redundancy in algorithms is, I believe, a design goal in itself.
> SHA-3 was not created because SHA-2 was considered weak - it
> was created to have two alternatives.
> 
> It does not take that long to disable old algorithms and
> enable new ones. In case of emergency it could happen very
> quickly. Servers get patched and people have to update their
> browsers if they want to access the servers.
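
How quickly old algorithms can be switched off in practice: with an 
OpenSSL-backed stack it is usually a one-line configuration change. A 
sketch using Python's ssl module (the cipher string is an example policy, 
not a recommendation):

```python
import ssl

# Server-side context that refuses anything below TLS 1.2 and
# restricts the TLS 1.2 suites to ECDHE key exchange with AES-GCM.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.set_ciphers("ECDHE+AESGCM")   # example policy string

enabled = {c["name"] for c in ctx.get_ciphers()}
# Legacy suites such as RC4 or 3DES are simply no longer negotiable.
assert not any("RC4" in name or "3DES" in name for name in enabled)
```

The slow part is not the server-side change, it is waiting for the 
long tail of clients that only speak the disabled algorithms.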
> 
> It takes an extremely long time to create and check new
> algorithms. Which is why it makes sense to have more than
> one algorithm in a given category on the shelf.
> 
> I have no idea how long it takes to create
> a new TLS version. If it takes years, then having an
> alternative on the shelf may make sense: a 1.4A and a 1.4B
> that are sufficiently different to not be vulnerable to the
> same problem.
> 
>> Yes, if someone else is also using that crypto, even without TLS, then 
>> that is just as vulnerable. And if they had used TLS, they would not 
>> have been any less vulnerable. But if they have some other crypto, or if 
>> the problem found is in the TLS code itself, then you likely dodged 
>> that bullet.
> 
> Algorithm ABC used in TLS and algorithm ABC used in XXX are
> obviously the same.
> 
> But unless XXX were a standard with huge industry support,
> XXX would be more risky than TLS.
> 
> The effort that goes into checking TLS is huge. It would cost
> thousands, maybe tens of thousands, of man-years to check
> XXX to the same level as TLS.
> 
> Arne
> 
> 



