[Info-vax] Hard links on VMS ODS5 disks
John Reagan
xyzzy1959 at gmail.com
Mon Aug 28 20:31:55 EDT 2023
On Monday, August 28, 2023 at 4:10:00 PM UTC-4, Brian Schenkenberger wrote:
> On 2023-07-26 00:53:52 +0000, Arne Vajhøj said:
>
> > On 7/25/2023 8:52 PM, Arne Vajhøj wrote:
> >> On 7/24/2023 4:38 AM, h... at end.of.inter.net wrote:
> >>> On Monday, July 24, 2023 at 1:31:46 AM UTC+2, Arne Vajhøj wrote:
> >>>> But difficult to check the difference between /OPT and /NOOPT,
> >>>> because for some unknown reason /LIST/MACH does not list the
> >>>> generated code.
> >>>
> >>> MACRO-32 on x86?
> >>
> >> Yes.
> >>
> >>> You can always (except you compile with /NOOBJECT :-) get the machine
> >>> code listing with ANALYZE/OBJECT/DISASSEMBLE.
> >>
> >> I did not know that.
> >>
> >> But also weird.
> >>
> >> I do not see a difference between /OPT and /NOOPT at all.
> >
> > Tested with a trivial piece of code:
> >
> > .title str
> > .psect $CODE quad,pic,con,lcl,shr,exe,nowrt
> > .entry str_int_val,^m<r2,r3,r4>
> > movl B^4(ap),r0
> > movl B^4(r0),r1 ; address of string
> > movzwl (r0),r2 ; length of string
> > movl #0,r0 ; value=0
> > tstl r2 ; test if empty string
> > bleq 400$
> > clrl r3
> > movl #1,r4 ; scale=1
> > 100$: movb (r1),r3
> > cmpb #32,r3 ; test if " " => skip
> > beql 300$
> > cmpb #45,r3 ; test if "-" => scale=-1*scale
> > bneq 200$
> > mull2 #-1,r4
> > brb 300$
> > 200$: subb2 #48,r3 ; value=10*value+digit
> > mull2 #10,r0
> > addl2 r3,r0
> > 300$: incl r1
> > decl r2
> > tstl r2
> > bgtr 100$
> > mull2 r4,r0 ; value=value*scale
> > 400$: movl r0,@B^8(ap)
> > ret
> > .end
> >
> > with /NOOPT:
> >
> > .section $CODE, "ax", "progbits" # EXE,SHR
> > .align 16
> > .cfi_startproc
> > STR_INT_VAL::
> > 55 00000000: pushq %rbp
> > # 000003
> > E5 89 48 00000001: movq %rsp,%rbp
> > 53 00000004: pushq %rbx
> > 57 41 00000005: pushq %r15
> > 56 41 00000007: pushq %r14
> > 55 41 00000009: pushq %r13
> > DC B6 0F 0000000B: movzbl %ah,%ebx
> > 00 00 00 00 E8 0000000E: callq LIB$ALPHA_REG_VECTOR_BASE@PLT
> > 93 48 00000013: xchgq %rax,%rbx
> > 54 41 00000015: pushq %r12
> > 00 00 00 80 BB 89 48 00000017: movq %rdi,00000080(%rbx)
> > 00 00 00 88 B3 89 48 0000001E: movq %rsi,00000088(%rbx)
> > 20 73 FF 00000025: pushq 20(%rbx)
> > 18 73 FF 00000028: pushq 18(%rbx)
> > 10 73 FF 0000002B: pushq 10(%rbx)
> > 00 6A 0000002E: pushq $00
> > E4 89 49 00000030: movq %rsp,%r12
> > _$$L1:
> > 00 00 00 80 93 63 4C 00000033: movslq 00000080(%rbx),%r10 # 000004
> > 13 89 4C 0000003A: movq %r10,(%rbx)
> > 13 8B 4C 0000003D: movq (%rbx),%r10
> > # 000005
> > 04 5A 63 4D 00000040: movslq 04(%r10),%r11
> > 08 5B 89 4C 00000044: movq %r11,08(%rbx)
> > 13 8B 4C 00000048: movq (%rbx),%r10
> > # 000006
> > 1A B7 0F 4D 0000004B: movzwq (%r10),%r11
> > 10 5B 89 4C 0000004F: movq %r11,10(%rbx)
> > 00 00 00 00 03 C7 48 00000053: movq $00000000,(%rbx)
> > # 000007
> > 10 53 63 4C 0000005A: movslq 10(%rbx),%r10
> > # 000008
> > 00 FA 83 41 0000005E: cmpl $00,%r10d
> > 9F 00000062: lahf
> > C7 89 41 66 00000063: movw %ax,%r15w
> > F8 89 44 66 00000067: movw %r15w,%ax
> > # 000009
> > FF FE E0 81 66 0000006B: andw $-0002,%ax
> > 9E 00000070: sahf
> > 00 00 00 BC 8E 0F 00000071: jle 3_400$
> > C0 31 48 00000077: xorq %rax,%rax
> > # 000010
> > 18 43 89 48 0000007A: movq %rax,18(%rbx)
> > 00 00 00 01 20 43 C7 48 0000007E: movq $00000001,20(%rbx) # 000011
> > 3_100$:
> > 08 53 8B 4C 00000086: movq 08(%rbx),%r10
> > # 000012
> > 1A B6 0F 4D 0000008A: movzbq (%r10),%r11
> > 18 5B 88 44 0000008E: movb %r11l,18(%rbx)
> > 18 53 B6 0F 4C 00000092: movzbq 18(%rbx),%r10
> > # 000013
> > 00 20 BB 41 66 00000097: movw $0020,%r11w
> > D2 BE 0F 4D 0000009C: movsbq %r10l,%r10
> > D3 39 45 66 000000A0: cmpw %r10w,%r11w
> > 9F 000000A4: lahf
> > C7 89 41 66 000000A5: movw %ax,%r15w
> > F8 89 44 66 000000A9: movw %r15w,%ax
> > # 000014
> > 9E 000000AD: sahf
> > 00 00 00 4A 84 0F 000000AE: je 3_300$
> > 18 53 B6 0F 4C 000000B4: movzbq 18(%rbx),%r10
> > # 000015
> > 00 2D BB 41 66 000000B9: movw $002D,%r11w
> > D2 BE 0F 4D 000000BE: movsbq %r10l,%r10
> > D3 39 45 66 000000C2: cmpw %r10w,%r11w
> > 9F 000000C6: lahf
> > C7 89 41 66 000000C7: movw %ax,%r15w
> > F8 89 44 66 000000CB: movw %r15w,%ax
> > # 000016
> > 9E 000000CF: sahf
> > 00 00 00 0A 85 0F 000000D0: jne 3_200$
> > 00 00 00 8B E9 000000D6: jmpq _$$L3
> > # 000016
> > 00 00 00 86 E9 000000DB: jmpq _$$L3
> > 3_200$:
> > 30 18 6B 80 000000E0: subb $30,18(%rbx)
> > # 000019
> > 0A 13 6B 44 000000E4: imul $0A,(%rbx),%r10d
> > # 000020
> > DA 63 4D 000000E8: movslq %r10d,%r11
> > 1B 89 4C 000000EB: movq %r11,(%rbx)
> > 18 53 63 4C 000000EE: movslq 18(%rbx),%r10
> > # 000021
> > 1B 8B 4C 000000F2: movq (%rbx),%r11
> > D3 01 45 000000F5: addl %r10d,%r11d
> > CB 63 4D 000000F8: movslq %r11d,%r9
> > 0B 89 4C 000000FB: movq %r9,(%rbx)
> > 3_300$:
> > 01 08 43 83 48 000000FE: addq $01,08(%rbx)
> > # 000022
> > 01 10 6B 83 48 00000103: subq $01,10(%rbx)
> > # 000023
> > 10 53 63 4C 00000108: movslq 10(%rbx),%r10
> > # 000024
> > 00 FA 83 41 0000010C: cmpl $00,%r10d
> > 9F 00000110: lahf
> > C7 89 41 66 00000111: movw %ax,%r15w
> > F8 89 44 66 00000115: movw %r15w,%ax
> > # 000025
> > FF FE E0 81 66 00000119: andw $-0002,%ax
> > 9E 0000011E: sahf
> > FF FF FF 61 8F 0F 0000011F: jg 3_100$
> > 20 53 63 4C 00000125: movslq 20(%rbx),%r10
> > # 000026
> > 13 AF 0F 44 00000129: imull (%rbx),%r10d
> > DA 63 4D 0000012D: movslq %r10d,%r11
> > 1B 89 4C 00000130: movq %r11,(%rbx)
> > 3_400$:
> > 13 63 4C 00000133: movslq (%rbx),%r10
> > # 000027
> > 00 00 00 88 9B 8B 4C 00000136: movq 00000088(%rbx),%r11
> > 13 89 45 0000013D: movl %r10d,(%r11)
> > _$$_0:
> > C0 65 8D 48 00000140: leaq -40(%rbp),%rsp
> > # 000028
> > FE 00 00 00 F0 A3 80 00000144: andb $-02,000000F0(%rbx)
> > 10 43 8F 0000014B: popq 10(%rbx)
> > 18 43 8F 0000014E: popq 18(%rbx)
> > 20 43 8F 00000151: popq 20(%rbx)
> > 03 8B 48 00000154: movq (%rbx),%rax
> > 08 53 8B 48 00000157: movq 08(%rbx),%rdx
> > 5C 41 0000015B: popq %r12
> > 5D 41 0000015D: popq %r13
> > 5E 41 0000015F: popq %r14
> > 5F 41 00000161: popq %r15
> > 5B 00000163: popq %rbx
> > 5D 00000164: popq %rbp
> > C3 00000165: retq
> > _$$L3:
> > FF 20 53 6B 44 00000166: imul $-01,20(%rbx),%r10d # 000017
> > DA 63 4D 0000016B: movslq %r10d,%r11
> > 20 5B 89 4C 0000016E: movq %r11,20(%rbx)
> > FF FF FF 87 E9 00000172: jmpq 3_300$
> > # 000018
> > .cfi_endproc
> >
> > with /OPT:
> >
> > .cfi_startproc
> > STR_INT_VAL::
> > 55 00000000: pushq %rbp
> > # 000003
> > E5 89 48 00000001: movq %rsp,%rbp
> > 53 00000004: pushq %rbx
> > 57 41 00000005: pushq %r15
> > 56 41 00000007: pushq %r14
> > 55 41 00000009: pushq %r13
> > DC B6 0F 0000000B: movzbl %ah,%ebx
> > 00 00 00 00 E8 0000000E: callq LIB$ALPHA_REG_VECTOR_BASE@PLT
> > 93 48 00000013: xchgq %rax,%rbx
> > 54 41 00000015: pushq %r12
> > 00 00 00 80 BB 89 48 00000017: movq %rdi,00000080(%rbx)
> > 00 00 00 88 B3 89 48 0000001E: movq %rsi,00000088(%rbx)
> > 20 73 FF 00000025: pushq 20(%rbx)
> > 18 73 FF 00000028: pushq 18(%rbx)
> > 10 73 FF 0000002B: pushq 10(%rbx)
> > 00 6A 0000002E: pushq $00
> > E4 89 49 00000030: movq %rsp,%r12
> > _$$L1:
> > 00 00 00 80 93 63 4C 00000033: movslq 00000080(%rbx),%r10 # 000004
> > 13 89 4C 0000003A: movq %r10,(%rbx)
> > 13 8B 4C 0000003D: movq (%rbx),%r10
> > # 000005
> > 04 5A 63 4D 00000040: movslq 04(%r10),%r11
> > 08 5B 89 4C 00000044: movq %r11,08(%rbx)
> > 13 8B 4C 00000048: movq (%rbx),%r10
> > # 000006
> > 1A B7 0F 4D 0000004B: movzwq (%r10),%r11
> > 10 5B 89 4C 0000004F: movq %r11,10(%rbx)
> > 00 00 00 00 03 C7 48 00000053: movq $00000000,(%rbx)
> > # 000007
> > 10 53 63 4C 0000005A: movslq 10(%rbx),%r10
> > # 000008
> > 00 FA 83 41 0000005E: cmpl $00,%r10d
> > 9F 00000062: lahf
> > C7 89 41 66 00000063: movw %ax,%r15w
> > F8 89 44 66 00000067: movw %r15w,%ax
> > # 000009
> > FF FE E0 81 66 0000006B: andw $-0002,%ax
> > 9E 00000070: sahf
> > 00 00 00 BC 8E 0F 00000071: jle 3_400$
> > C0 31 48 00000077: xorq %rax,%rax
> > # 000010
> > 18 43 89 48 0000007A: movq %rax,18(%rbx)
> > 00 00 00 01 20 43 C7 48 0000007E: movq $00000001,20(%rbx) # 000011
> > 3_100$:
> > 08 53 8B 4C 00000086: movq 08(%rbx),%r10
> > # 000012
> > 1A B6 0F 4D 0000008A: movzbq (%r10),%r11
> > 18 5B 88 44 0000008E: movb %r11l,18(%rbx)
> > 18 53 B6 0F 4C 00000092: movzbq 18(%rbx),%r10
> > # 000013
> > 00 20 BB 41 66 00000097: movw $0020,%r11w
> > D2 BE 0F 4D 0000009C: movsbq %r10l,%r10
> > D3 39 45 66 000000A0: cmpw %r10w,%r11w
> > 9F 000000A4: lahf
> > C7 89 41 66 000000A5: movw %ax,%r15w
> > F8 89 44 66 000000A9: movw %r15w,%ax
> > # 000014
> > 9E 000000AD: sahf
> > 00 00 00 4A 84 0F 000000AE: je 3_300$
> > 18 53 B6 0F 4C 000000B4: movzbq 18(%rbx),%r10
> > # 000015
> > 00 2D BB 41 66 000000B9: movw $002D,%r11w
> > D2 BE 0F 4D 000000BE: movsbq %r10l,%r10
> > D3 39 45 66 000000C2: cmpw %r10w,%r11w
> > 9F 000000C6: lahf
> > C7 89 41 66 000000C7: movw %ax,%r15w
> > F8 89 44 66 000000CB: movw %r15w,%ax
> > # 000016
> > 9E 000000CF: sahf
> > 00 00 00 0A 85 0F 000000D0: jne 3_200$
> > 00 00 00 8B E9 000000D6: jmpq _$$L3
> > # 000016
> > 00 00 00 86 E9 000000DB: jmpq _$$L3
> > 3_200$:
> > 30 18 6B 80 000000E0: subb $30,18(%rbx)
> > # 000019
> > 0A 13 6B 44 000000E4: imul $0A,(%rbx),%r10d
> > # 000020
> > DA 63 4D 000000E8: movslq %r10d,%r11
> > 1B 89 4C 000000EB: movq %r11,(%rbx)
> > 18 53 63 4C 000000EE: movslq 18(%rbx),%r10
> > # 000021
> > 1B 8B 4C 000000F2: movq (%rbx),%r11
> > D3 01 45 000000F5: addl %r10d,%r11d
> > CB 63 4D 000000F8: movslq %r11d,%r9
> > 0B 89 4C 000000FB: movq %r9,(%rbx)
> > 3_300$:
> > 01 08 43 83 48 000000FE: addq $01,08(%rbx)
> > # 000022
> > 01 10 6B 83 48 00000103: subq $01,10(%rbx)
> > # 000023
> > 10 53 63 4C 00000108: movslq 10(%rbx),%r10
> > # 000024
> > 00 FA 83 41 0000010C: cmpl $00,%r10d
> > 9F 00000110: lahf
> > C7 89 41 66 00000111: movw %ax,%r15w
> > F8 89 44 66 00000115: movw %r15w,%ax
> > # 000025
> > FF FE E0 81 66 00000119: andw $-0002,%ax
> > 9E 0000011E: sahf
> > FF FF FF 61 8F 0F 0000011F: jg 3_100$
> > 20 53 63 4C 00000125: movslq 20(%rbx),%r10
> > # 000026
> > 13 AF 0F 44 00000129: imull (%rbx),%r10d
> > DA 63 4D 0000012D: movslq %r10d,%r11
> > 1B 89 4C 00000130: movq %r11,(%rbx)
> > 3_400$:
> > 13 63 4C 00000133: movslq (%rbx),%r10
> > # 000027
> > 00 00 00 88 9B 8B 4C 00000136: movq 00000088(%rbx),%r11
> > 13 89 45 0000013D: movl %r10d,(%r11)
> > _$$_0:
> > C0 65 8D 48 00000140: leaq -40(%rbp),%rsp
> > # 000028
> > FE 00 00 00 F0 A3 80 00000144: andb $-02,000000F0(%rbx)
> > 10 43 8F 0000014B: popq 10(%rbx)
> > 18 43 8F 0000014E: popq 18(%rbx)
> > 20 43 8F 00000151: popq 20(%rbx)
> > 03 8B 48 00000154: movq (%rbx),%rax
> > 08 53 8B 48 00000157: movq 08(%rbx),%rdx
> > 5C 41 0000015B: popq %r12
> > 5D 41 0000015D: popq %r13
> > 5E 41 0000015F: popq %r14
> > 5F 41 00000161: popq %r15
> > 5B 00000163: popq %rbx
> > 5D 00000164: popq %rbp
> > C3 00000165: retq
> > _$$L3:
> > FF 20 53 6B 44 00000166: imul $-01,20(%rbx),%r10d # 000017
> > DA 63 4D 0000016B: movslq %r10d,%r11
> > 20 5B 89 4C 0000016E: movq %r11,20(%rbx)
> > FF FF FF 87 E9 00000172: jmpq 3_300$
> > # 000018
> > .cfi_endproc
> >
> > Arne
>
> I'd sure like to see VMS on X86.
Macro doesn't use the LLVM optimizer (it doesn't use the GEM one either). All the
branching between routines doesn't fit the high-level, one-function-per-routine model.
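For illustration, here is a made-up MACRO-32 fragment (ROUT_A, ROUT_B, and COMMON_EXIT
are hypothetical names) showing the kind of cross-routine control flow that has no clean
equivalent in that one-function-per-routine model:

        ; hypothetical routines, for illustration only
        .psect  $CODE quad,pic,con,lcl,shr,exe,nowrt
        .entry  rout_a,^m<r2>
        movl    #1,r2
        brw     common_exit           ; branch out of ROUT_A ...
        .entry  rout_b,^m<r2>
        movl    #2,r2
common_exit:                           ; ... into code that ROUT_B also falls into
        movl    r2,r0                  ; both routines share this exit path
        ret
        .end

Each .entry would have to become its own LLVM function, yet the BRW crosses from one
into the middle of the other.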
Macro does a limited job of pulling address computations out of loops if there are
free registers (and on x86, there almost never are).
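A rough sketch of what that hoisting would look like when a register is free
(hypothetical code, not from the example above):

; hypothetical loop: the loop-invariant address of TABLE is recomputed every iteration
100$:   movab   table,r5
        incb    (r5)[r6]
        aoblss  #100,r6,100$

; what the compiler would like to emit instead, given a free register
        movab   table,r5               ; address computation hoisted out of the loop
200$:   incb    (r5)[r6]
        aoblss  #100,r6,200$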
With GEM, AMACRO/IMACRO gets the benefit of GEM's instruction-level peephole
optimizer. That doesn't exist in that form for LLVM. So XMACRO today emits several
"branches to branches" and is sloppy with x86 condition codes. However, we think
the microarchitecture's predictive execution takes care of those branches to branches
and the like.
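The listings above have examples, such as the redundant second jmpq _$$L3 at 000000DB
right after the one at 000000D6. Schematically (hypothetical labels, AT&T syntax as in
the listings), the cleanup a peephole pass would do looks like:

        # emitted today: a branch whose target is just another branch
        jmp     L1
L1:     jmp     L2

        # what an instruction-level peephole pass would turn it into
        jmp     L2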
In general Macro-32 code is optimized by the human while typing it in.
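For reference, the up-thread /OPT versus /NOOPT comparison can be reproduced with
something along these lines (a DCL sketch; the qualifier spellings are assumptions,
check HELP MACRO and HELP ANALYZE /OBJECT for the exact forms):

$ ! qualifier spellings assumed, adjust as needed
$ MACRO /NOOPTIMIZE /OBJECT=STR_NOOPT.OBJ STR.MAR
$ ANALYZE /OBJECT /DISASSEMBLE /OUTPUT=STR_NOOPT.DIS STR_NOOPT.OBJ
$ MACRO /OPTIMIZE /OBJECT=STR_OPT.OBJ STR.MAR
$ ANALYZE /OBJECT /DISASSEMBLE /OUTPUT=STR_OPT.DIS STR_OPT.OBJ
$ DIFFERENCES STR_NOOPT.DIS STR_OPT.DIS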