[Info-vax] SAMBA and Ransomware

Stephen Hoffman seaohveh at hoffmanlabs.invalid
Fri Jul 28 18:12:45 EDT 2017


On 2017-07-28 17:16:35 +0000, seasoned_geek said:

> On Monday, July 17, 2017 at 9:49:03 AM UTC-5, Stephen Hoffman wrote:
> 
> Hoff,
> 
> Sorry for taking snippets from multiple messages here, but was editing 
> off-line due to flaky Internet. Please forgive if I pasted this in the 
> wrong message thread as well. I had to be off-line for a while.
> 
>> That written, Microsoft has decided to follow your suggestions here, 
>> but is doing so with Windows 10.   Not with Windows XP or older 
>> releases.    We'll eventually learn how well the Windows 10 approach 
>> works for Microsoft, for Microsoft partners and ISVs, and for end-users 
>> of Windows, too.  How well this works from a technical approach, and 
>> around corporate financial, marketing and partnerships?  Even if 
>> software compatibility continues, some new features are inevitably 
>> going to exclude older hardware, and which will force hardware 
>> upgrades; older hardware is inevitably going to age out.
> 
> Odd this since Microsoft is in the process of abandoning Windows in reality,

Microsoft is not abandoning Windows.   No vendor willingly abandons a 
profitable installed base in the billions.    The desktop market has 
been drifting downward in size as the mobile market massively 
increases in size, which also reduces the influence of Microsoft in 
the client market.    Mr. Nadella is looking toward the future of 
Microsoft with Azure and hosted services, however.   With the 
commoditization of and the competition among operating systems, they 
have to look at the next five and ten years.  Whether the bet on 
Azure, and on associated hosted services such as hosted Active 
Directory and Exchange Server, and on apps such as Office 365, pays off?

> not name, and Google is in the process of abandoning Android. Microsoft 
> has already issued EOL for Windows Mobile without announcing any 
> replacement.

Microsoft was not successful in mobile, and — much like HPE and Itanium 
— has decided to exit the market.    They got stuck between Android 
and iOS.

> Both Google and Canonical are fast at work on their own forks of 
> Fuchsia. Both companies have chosen that platform as "the one OS to 
> rule them all." Pieces of Ubuntu have already moved in under Windows 
> 10. It will soon be a few Windows APIs on top of Ubuntu with a 
> Microsoft looking desktop. This was one of the big pushes behind ".Net 
> Anywhere" or ".Net Lite" or whatever it was called. Given the rousing 
> success of Mono I don't hold much hope for Microsoft getting it to work 
> on a non-Microsoft platform, hence the multi-year transition to Linux 
> under the hood.

I'm well aware of the Unix subsystem available in the most recent 
Windows 10 installations.   It's rather like GNV on OpenVMS, but far 
more extensive and better integrated with the operating system.

Whether Linux is also going to be the new kernel?    Dunno.   But I 
doubt it.   If the Microsoft folks were even going to try re-hosting 
Windows onto a new kernel, they'd almost certainly be aiming well past 
the existing kernels.

The advertising giant needs a way to advertise, and Android was how 
they avoided getting strangled by Apple and Microsoft and others as 
mobile really got rolling.   Google then got themselves into some 
trouble with Android and support for earlier versions, particularly due 
to how they positioned and licensed Android to the various handset 
vendors.   Which is why I expect they're headed toward Fuchsia, if 
they're going to replace Android with something sufficiently better.   
That's if they're not simply looking to use Fuchsia for their own 
internal use.   Nobody outside of Google really knows what they're up 
to here.    They're seemingly approaching it as a way to get apps from 
both iOS and Android, though.

Microsoft got themselves into some trouble with mobile because their 
approach was at odds with that of their competitors; the Microsoft 
folks couldn't price Windows Mobile underneath Android, and iOS was 
vacuuming most of the profits.  Among other details.

Here's some fodder for thought...
https://qz.com/1037753/the-windows-phone-failure-was-easily-preventable-but-microsofts-culture-made-it-unavoidable/ 



> I haven't looked at the Fuchsia code base, but, it is an off-shoot of 
> another project. I don't know if that project fully jettisoned the 
> Linux kernel or not.  The Linux kernel has some serious legacy design 
> flaws which are getting worse now that they are trying to utilize CUDA 
> cores.

Legacy software is any complex software package where various 
subsystems are no longer optimally designed for current requirements 
and environments.

As for CUDA or Metal or OpenCL or Vulkan or DirectX, or GPU and GPGPU 
support in general, I've been following those topics mainly on macOS 
and iOS platforms, and not particularly over on Linux or Windows.

> I understand, back in the day compiling the video driver into the 
> kernel made some sense.   It no longer does. We can no longer count on 
> some 90% of hardware providing a VGA address space at a specific 
> address range. Automatic upgrades of the kernel for Nvidia users are 
> currently problematic at least for the YABU distros. Hopefully Neon 
> will be full Debian soon and a large part of the problem "might" go 
> away. At least the Ubuntu don't test sh*t part will go away.

There are some rather long and interesting discussions of the 
trade-offs involved with having the drivers in the kernel, as compared 
with the safety of having more of the code outside of the kernel.   For 
details on that, rummage around for the Windows Driver Model and the 
Windows Driver Framework discussions, as compared with the Graphics 
Device Interface (GDI).     Copying blocks of memory around gets... 
expensive.

Here's a decent starting point for that particular Windows NT GDI 
design discussion:
https://technet.microsoft.com/en-us/library/cc750820.aspx#XSLTsection124121120120 


Also see:
https://docs.microsoft.com/en-us/windows-hardware/drivers/display/submitting-a-command-buffer 


Also have a look at the wrestling that Qubes OS has been having with 
isolating the potential for device shenanigans.
https://blog.invisiblethings.org/index.html

Different operating systems make different trade-offs, too.
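
To put a very rough number on that copying cost (a hypothetical C
sketch, timing just the memcpy() half and ignoring the context
switches and validation that a user-space driver design also pays),
copying one framebuffer-sized buffer looks like this:

#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

int main(void)
{
    /* Roughly one 1920x1080 frame at 4 bytes per pixel: ~8 MB. */
    size_t bytes = 1920UL * 1080UL * 4UL;
    char *src = malloc(bytes);
    char *dst = malloc(bytes);
    if (src == NULL || dst == NULL)
        return EXIT_FAILURE;
    memset(src, 0x5a, bytes);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < 100; i++)      /* the copies an in-kernel driver avoids */
        memcpy(dst, src, bytes);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("about %.2f ms per frame copy (byte 0 = %d)\n",
           ns / 100 / 1e6, dst[0]);    /* print a byte to keep dst live */
    free(src);
    free(dst);
    return EXIT_SUCCESS;
}

Run that at sixty-plus frames per second and the appeal of keeping the 
rendering path in the kernel, or of sharing the buffer rather than 
copying it, becomes obvious.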

> A full redesign of the Linux Kernel making it small and API, not 
> pointer, driven with shelled APIs for future external specialized 
> processors was/is long overdue.

I don't expect to see the Linux kernel redesigned that way, though 
stranger things have happened.   I would expect to see further interest 
in L4 and maybe DragonFly BSD.  There's been more than a little 
research and testing around lowering the overhead of message-passing 
with L4 kernels, and faster hardware certainly helps.
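
For a crude feel for the message-passing overhead in question, here's
a hypothetical POSIX ping-pong between two processes over a
socketpair. A tuned L4 IPC path is far leaner than this, but it's the
same round trip being optimized:

#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/wait.h>

#define ROUNDS 10000

int main(void)
{
    int sv[2];
    char msg[8] = "ping", buf[8];

    /* A pair of connected sockets standing in for an IPC channel. */
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0)
        return EXIT_FAILURE;

    pid_t pid = fork();
    if (pid == 0) {                    /* child: echo every message back */
        for (int i = 0; i < ROUNDS; i++) {
            if (read(sv[1], buf, sizeof buf) != (ssize_t)sizeof buf)
                _exit(1);
            write(sv[1], buf, sizeof buf);
        }
        _exit(0);
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < ROUNDS; i++) { /* parent: send, wait for the echo */
        write(sv[0], msg, sizeof msg);
        if (read(sv[0], buf, sizeof buf) != (ssize_t)sizeof buf)
            break;
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    waitpid(pid, NULL, 0);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("about %.0f ns per message round trip\n", ns / ROUNDS);
    return EXIT_SUCCESS;
}

Every driver request in a microkernel design pays some version of that 
round trip, which is why the L4 research into cheap IPC matters.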

> CUDA is not going to be the last.  There already are a few low-market-share 
> CUDA competitors, but when you can get a 2 GB video card having 384 
> CUDA cores for under $50, that's a lot of market inertia for competitors to 
> overcome. Yes those cores are specialized and can be morphed to do many 
> things, but, the reality is this quad-core now has 388 cores of varying 
> capabilities. From at least one perspective it is a desktop-sized 
> Connection Machine much like people at Fermi and a few other places 
> were creating with cast-off MicroVAXes back in the day. The next 
> logical step is for cards to come with 4 GB of RAM and close to 1024 
> of something more general than CUDA cores, a low-power card people drop 
> into their desktops for massive crunching/big data capabilities. The 
> small-form-factor desktop or even full-sized ATX mobo now becomes a 
> backplane other computing capabilities get stuck into.

There'll continue to be better integration between scalar cores and 
GPUs, for those folks that need that.   CUDA is how NVIDIA allows folks 
to access GPUs.   Metal is what Apple has been using for that in recent 
times.

There are substantial differences in how scalar cores and GPUs work, 
and it's been interesting working with them; GPUs are screaming fast at 
various tasks, and utterly glacial at others.   There's been 
substantial work toward support of machine learning on macOS with Core 
ML, for instance, and other tasks that are well suited to GPU 
computing.   Getting data into and out of the GPUs has been problematic 
in recent years, though that access is improving with each generation.   
With what I've been working on, there's also the overhead of compiling 
the code for the GPU, whether that's compiled ahead of time or happens 
while the application is running.
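
As a sketch of that data-movement overhead (hypothetical C against the
CUDA runtime API, so NVIDIA-specific and illustrative only), timing a
single host-to-device copy shows the bus cost paid before any GPU
computation happens at all:

/* Compile with nvcc, or link a C compiler's output against -lcudart;
 * the runtime API is callable from plain C. */
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

int main(void)
{
    size_t bytes = 64UL * 1024 * 1024;   /* 64 MB test buffer */
    void *host = malloc(bytes);          /* contents don't matter here */
    void *dev = NULL;
    float ms = 0.0f;
    cudaEvent_t start, stop;

    if (host == NULL || cudaMalloc(&dev, bytes) != cudaSuccess)
        return EXIT_FAILURE;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    /* Time one trip across the bus, before any kernel runs. */
    cudaEventRecord(start, 0);
    cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(stop, 0);
    cudaEventSynchronize(stop);
    cudaEventElapsedTime(&ms, start, stop);

    printf("host->device: %.2f ms, about %.2f GB/s\n",
           ms, (double)bytes / (ms * 1e-3) / 1e9);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(dev);
    free(host);
    return EXIT_SUCCESS;
}

Pageable host memory from plain malloc(), as above, also moves more 
slowly than pinned memory from cudaMallocHost(); that's one of the 
knobs folks end up turning when the transfers dominate.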

And for now, folks get to choose Metal, Vulkan or NVIDIA's CUDA as the 
interface, or some higher-level framework that abstracts that, or they 
can use Unity or Unreal and let those tools deal with Metal or CUDA or 
whatever.

> In short, what's old is new again and the Linux kernel was in no shape 
> to handle it. The current ham-fisted CUDA stuff is proof of that. Even 
> my friend who from time to time works with Linus himself readily admits 
> that. He's just not quite ready to get a new baby and bath water.

Linus is most definitely not a fool.   As for what's been happening 
with NVIDIA CUDA support over on Linux, I haven't been following that.  
But it wouldn't surprise me if there's some skepticism around 
supporting a vendor-specific framework such as CUDA in Linux — NVIDIA 
is not the only graphics vendor around — and graphics hardware support 
in general has been a long-running thorn for various open-source 
operating systems.   Yes, there are fully-documented graphics 
controllers, and that's been a very nice change from earlier years.   
The performance of various recent commodity integrated graphics such as 
Intel HD and Iris graphics is actually quite decent, too.  And various 
vendors are interested in Vulkan in addition to or in place of CUDA, 
too.

https://en.wikipedia.org/wiki/Vulkan_(API)

>> Again: we can live in and can desire and seek Y2K-era security and 
>> long-term server stability and the rest of the uptime era, or we can 
>> deal with the environment we have now, with the need to deploy patches 
>> more quickly, and prepare for the environment we're clearly headed 
>> toward.   Wrist watches commonly have more capacity and more 
>> performance than most of the VAX servers.   For those folks here that 
>> are fond of disparaging or ranting about Microsoft or other vendors,  
>> please do look at what they're doing, what they have available now, and 
>> what they're working on.  Microsoft and Linux and other choices are far 
>> more competitive than they once were, far more secure, and are far 
>> ahead of OpenVMS in various areas.   The sorts of areas that show up on 
>> RFPs and bid requests, too.   Times and requirements and environments 
>> all change.   We either change, or we and our apps and our servers 
>> retire in place.
>> 
> 
> 
> I hear what you are saying, but firmly believe it is based on a false 
> premise. Long ago, before VMS got an "Open" pre-pended by the sales 
> resistance force, disgusting low life cretins paid lots of money to 
> even lower forms of biological life, namely The Gartner Group and what 
> became Accenture, to market a false statement:
> 
> <b>Proprietary bad, OpenSource good.</b>
> 
> This was a completely false statement. It was massive spin on the 
> reality "Proprietary expensive, OpenSource cheap" and it completely 
> overlooked the real definition of "cheap" there. North Korean knock-off 
> sold at Walmart cheap, not high quality at low cost.
> 
> This "Proprietary bad, OpenSource good" mantra got beat into people's 
> brains so much they believe it is true today. It's not.

You seem to be misinterpreting my comments.   I'm specifically 
referring to the current and future environments, and not to what 
analysts in the 1980s and 1990s stated, nor about which investments 
and guesses made back then worked out or not, nor am I even remotely 
interested in rehashing the product management decision to rename the 
VMS product to OpenVMS.  Nope.    Wrong direction.   Forward.   History 
and how we got here is fun and interesting and a good foundation for 
learning from successes and not repeating mistakes, but we're just not 
going back that way again.

> Where is it written that every business system must connect directly to 
> the Internet?

Outside of the military and intelligence communities, there are few 
air-gapped systems around, and IPv6 means most every other server is 
connected.   Whether those servers communicate outside of the local 
network is dependent on local requirements.

> Where is it written that your core critical cluster must use TCP/IP?

I wouldn't expect TCP, though I do expect to see DTLS and UDP.   
Because — as many OpenVMS sites learned — local IT requires IP, or the 
servers cannot be networked.
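
As a minimal sketch of the transport half (plain C, a documentation
address, and the DTLS layer merely noted in comments), a connected UDP
socket looks like this:

/* In practice a DTLS implementation (e.g., OpenSSL's
 * DTLS_client_method()) would sit between the application and this
 * socket, authenticating and encrypting each datagram. */
#include <stdio.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* 192.0.2.10:5000 is a documentation address, not a real service. */
    struct sockaddr_in peer = { 0 };
    peer.sin_family = AF_INET;
    peer.sin_port = htons(5000);
    inet_pton(AF_INET, "192.0.2.10", &peer.sin_addr);

    int s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s < 0)
        return 1;

    /* connect() on a UDP socket just fixes the peer address; there is
     * no handshake and no delivery guarantee, unlike TCP. */
    if (connect(s, (struct sockaddr *)&peer, sizeof peer) == 0) {
        const char msg[] = "cluster heartbeat";
        send(s, msg, sizeof msg, 0);                  /* one datagram out... */
        char reply[512];
        ssize_t n = recv(s, reply, sizeof reply, 0);  /* ...may block forever */
        if (n > 0)
            printf("got %zd bytes back\n", n);
    }
    close(s);
    return 0;
}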

> Where is it written that external XML messages must feed directly from 
> the Internet into a server which is directly connected to a critical 
> database?

XML and JSON are how data can be packaged, and frameworks and tools are 
available for those.   Folks are free to use other approaches, though a 
bespoke format or network protocol or database or other such choice is 
code 
that's not particularly differentiated, and that must be written and 
maintained and updated.   Trade-offs and reasons for bespoke code 
certainly do exist, but such decisions are best approached skeptically.
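
As a hypothetical C illustration (both the record layout and the field
names are invented), here's that trade-off in miniature, a bespoke
packed record versus self-describing JSON:

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

/* Bespoke format: compact, but the byte order, field widths, and
 * versioning rules are all code that you alone write, document,
 * maintain, and update on every change. */
struct order_v1 {
    uint32_t id;       /* must be sent big-endian */
    uint32_t cents;    /* implicit currency, implicit scale */
};

int main(void)
{
    struct order_v1 rec = { htonl(42), htonl(1999) };
    unsigned char wire[sizeof rec];
    memcpy(wire, &rec, sizeof rec);      /* 8 opaque bytes on the wire */

    /* JSON: bigger, but self-describing, and every platform you talk
     * to already has parsers and tooling for it. */
    char json[128];
    snprintf(json, sizeof json,
             "{\"id\": %u, \"price_cents\": %u}", 42u, 1999u);

    printf("bespoke: %zu bytes, opaque\n", sizeof wire);
    printf("json:    %zu bytes, self-describing: %s\n",
           strlen(json), json);
    return 0;
}

The eight opaque bytes win on size; everything else about them, from 
debugging to version skew, is maintenance you've signed up for.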

> Where is it written the exact same CPU with the exact same BIOS/UEFI 
> with the exact same cheap hard drive containing programmable firmware 
> as the nearly worthless desktop must run your critical systems?

Ayup.   Alpha was certainly fun and a very nice design, as was DEC's 
DLT and RA and RF storage and the rest of that era.   Or IBM and their 
DASD storage.  Much of the traditional middle market from the 1980s and 
1990s lost out to commodity components and lower prices and higher 
volumes, and the high-end got higher and more expensive.   OpenVMS 
isn't anywhere near the high end.   And for the foreseeable future, 
OpenVMS doesn't have the volume to have dedicated and custom hardware, 
beyond commodity-based servers that've been tested for OpenVMS 
compatibility.   Apple, Microsoft and other vendors are of the scale 
where custom hardware can be feasible, and Apple has the volume where 
A10 and A10X and such are not just possible but advantageous, but the 
folks at VSI have at least a year or two or ten before they're building 
and supporting bespoke microprocessors, custom memory and extreme 
storage devices.   Or requiring such.  Until then, mid- and upper-end 
commodity hardware from existing providers will have to suffice for 
OpenVMS.    But again, looking forward and not backwards.   VSI is in 
2017.   With limited staff and funding.   And with a port to x86-64 
well underway.

> These are __all__ dramatic system architecture failures of biblical 
> proportions. By moving to an x86 platform OpenVMS is now placing itself 
> in the same position other worthless platforms now are in. Processor 
> level binary code which can execute independent of OS can now penetrate 
> OpenVMS infecting the BIOS/UEFI and commodity drive firmware. The 
> OpenSource code gives hackers who've never seen anything other than a 
> PC the way to penetrate and trigger its execution. Firmware viruses are 
> the new frontier for both mafia and clandestine types.

I've yet to encounter binary code that's transportable to OpenVMS in 
the fashion described, nor malware executables that are portable across 
a mix of operating systems that includes OpenVMS.   The executable code 
may or may not run, but — absent some sort of compatibility framework — 
the I/O and system calls will fail.   Malware binary executables — any 
meaningful binary executables, for that matter — are simply not 
magically portable across disparate operating systems.   Sure, maybe 
you somehow get an RCE and manage to get a loop running.   Beyond that? 
  The code is specific to the operating system context.   I've stated 
all this before, of course.
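
As a small illustration of that operating-system specificity
(Linux-only C here, with the other platforms described only in the
comments), even writing a string to the terminal is an OS-specific
request:

/* Why a binary blob is tied to its operating system: on x86-64 Linux
 * the write system call is number 1; on macOS/XNU it's number 4 in
 * the BSD class (0x2000004); on OpenVMS there's no write() syscall at
 * all, and the nearest native path is the SYS$QIOW system service.
 * The same machine code asking for "syscall 1" means something
 * entirely different, or nothing, on each of them. */
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
#ifdef __linux__
    const char msg[] = "hello from a Linux-specific syscall\n";
    syscall(SYS_write, 1, msg, sizeof msg - 1);  /* SYS_write == 1 here */
#endif
    return 0;
}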

As for malware that targets outboard or low-level components of the 
platform such as the Intel management engine or HPE iLO or other 
similar components, or that's written to target (for instance) SMH or 
Java or such execution environments, that's all certainly in play 
irrespective of what operating system might be in use on the server.   
Or reflection attacks or denial-of-service attacks against some network 
servers or related.   That's irrespective of whether x86-64 is in use.

That written, security has gotten much more complex, certainly.

I do not wish to again debate whether or not anybody thinks that x86-64 
is elegant or wonderful or even particularly sane — I certainly don't 
believe it to be all that and a bag of chips — but x86-64 is also the 
only volume server platform processor this side of some future ARM 
AArch64 / ARMv8.x and SBSA server market.  If an operating system 
doesn't support x86-64 servers, then purchases and installations of 
that operating system are going to be at a competitive disadvantage 
because they can't run on commodity hardware and interoperate with 
standard tools such as virtual machines.   Performance-competitive 
microprocessor designs are expensive, and extremely expensive when the 
producer lacks the production volume of Intel or AMD, or of the ARM 
producers, and when there's the need to design and support low-volume 
custom servers.    Then there's also the matter of ending up beholden 
to the producer of whatever custom processor or server design you're 
based on, if not x86-64 (or maybe eventually some commodity ARM 
AArch64 server designs, or some potential and currently-distant future 
RISC-V servers), but that's a rather more advanced business-related topic.

At the very base of the whole discussion of a commercial operating 
system such as OpenVMS is making a profit.    The related details such 
as business economics, production and sales costs, and product 
forecasting are all such fun discussions, of course.   VSI has to sell 
enough OpenVMS licenses and support to cover their costs and recoup 
sufficient profits for their investor, and purchasing and running and 
supporting OpenVMS has to make financial sense to enough third-party 
providers and customers to matter.

If y'all can show a path to better profits than those likely arising 
from commodity hardware and x86-64 processors and the current port, the 
folks at VSI will probably be interested.   Line up enough paying 
customers and you'll have their full attention.   But that new port and 
that lower-volume or bespoke hardware would also tie up the VSI team for 
another three or five years, and now's not an auspicious time for that, 
and the VSI folks have to be able to sell that hardware and software to 
other folks — to many of us — in sufficient volume to matter.   Or if 
you really think there's a market here and have a decent chunk of a 
billion dollars tucked into the couch cushions, start designing and 
building your own operating system and high-end hardware and your 
preferred or your own microprocessor, and clobber everybody else in the 
computing business in ten or twenty years...



-- 
Pure Personal Opinion | HoffmanLabs LLC 



