[Info-vax] clock problems with OpenVMS x86 on VirtualBox
Johnny Billquist
bqt at softjar.se
Mon May 15 13:49:38 EDT 2023
On 2023-05-15 16:12, Dan Cross wrote:
> In article <u3t0se$5jt$6 at news.misty.com>,
> Johnny Billquist <bqt at softjar.se> wrote:
>> On 2023-05-13 02:16, Arne Vajhøj wrote:
>>> On 5/12/2023 1:30 PM, Simon Clubley wrote:
>>>> On 2023-05-12, Dave Froble <davef at tsoft-inc.com> wrote:
>>>>> On 5/12/2023 8:14 AM, Simon Clubley wrote:
>>>>>> That's going to make for some "interesting" real-time program
>>>>>> behaviour... :-)
>>>>>
>>>>> Do you think any serious real time programmer will run a real time
>>>>> task inside a
>>>>> VM? I'm not a real time programmer, and I'd still not do that.
>>>>
>>>> As well as traditional real-time stuff (which I agree with you about
>>>> BTW),
>>>
>>> I would not want to do it on a type 2 hypervisor - there must be
>>> cases where what is happening on the host OS impacts the performance
>>> of the guest OS.
>>>
>>> But with a type 1 hypervisor and no over allocation of resources -
>>> would it be worse than running on bare metal?
>>
>> For sure. You have no guarantee that you will get the CPU cycles when
>> you need them. No matter what kind of hypervisor we're talking about,
>> there is overhead in the host that can affect things way more than what
>> might happen on bare metal.
>
> There are many issues at play when we talk about virtualization
> and soft real-time systems. What you're describing has to do
> with scheduling of guest tasks, and best-in-class techniques can
> mostly deal with that (e.g., schedulers like Tableau or applying
> an EDF scheduler to VCPU management).
>
> But even without over-commit, there are other aspects of the
> overall system configuration that can affect performance. For
> example, there is overhead in managing the guest's physical
> address space: while current x86 architectures can support a set
> of second-level page tables for this, there is still the issue
> of needing to walk those tables, the pressure that puts on the
> equivalent of the TLB, etc. Not to mention that the real TLB
> for virtual addresses (both guest and host) will be put under
> pressure due to both the host and other guest tasks (even a
> non-overcommitted type-1 hypervisor may move tasks around
> between physical host CPUs). And if you're using nested virt
> or shadow-paging...well, good luck.
>
> And separately there are issues of IO: in a type-1 scenario, IO
> is offloaded to another VM (or even a separate system) on behalf
> of a guest; that introduces non-determinate latency and
> noisy-neighbor problems without careful construction. Even with
> SR-IOV and passthru techniques for hypervisor-bypass, there are
> classes of IO that must go through the host, and these can take
> arbitrarily long. What do you do when the `outb` to your
> virtualized console UART blocks? There are some techniques to
> amortize the overhead of these events (posted IOs, for example)
> but they are not perfect.
>
> With careful construction, one can side-step most of the cost
> and get latencies down to within a couple of percentage points
> of running on bare-metal, but there _is_ overhead, and in many
> scenarios, it can be unbounded, even if in practice it is
> relatively small.
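For anyone curious what the EDF approach to VCPU management mentioned above
looks like in miniature, here is a toy sketch (my own illustration, not code
from any hypervisor): each "VCPU" is a periodic task with a budget, and at
every tick the dispatcher runs whichever runnable task has the earliest
deadline.

```python
import heapq

def edf_schedule(tasks, horizon):
    """Simulate Earliest-Deadline-First dispatch of periodic VCPUs.

    tasks: dict of name -> (period, budget). Each release at time t gets
    a deadline of t + period. Returns the dispatch order over `horizon`
    ticks; each dispatched tick consumes one unit of budget, and an idle
    tick is recorded as None.
    """
    ready = []                                  # heap of (deadline, name, remaining_budget)
    next_release = {name: 0 for name in tasks}
    order = []
    for t in range(horizon):
        # Release any task whose next period starts now.
        for name, (period, budget) in tasks.items():
            if t == next_release[name]:
                heapq.heappush(ready, (t + period, name, budget))
                next_release[name] = t + period
        if ready:
            # Dispatch the runnable task with the earliest deadline.
            deadline, name, rem = heapq.heappop(ready)
            order.append(name)
            if rem > 1:
                heapq.heappush(ready, (deadline, name, rem - 1))
        else:
            order.append(None)
    return order
```

With tasks "a" (period 4) and "b" (period 2), both released at t=0, EDF runs
"b" first because its deadline is nearer - which is exactly the property that
lets a VCPU scheduler bound how late a guest's virtual timer tick can fire.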
Thanks for the exhaustive answer. I don't think I need to add much more
to this.
Johnny
More information about the Info-vax
mailing list