Branch: refs/heads/Cog
Home: https://github.com/OpenSmalltalk/opensmalltalk-vm
Commit: 1cbb668e7f45385357bf7480f44ba5edc17a4bbc
https://github.com/OpenSmalltalk/opensmalltalk-vm/commit/1cbb668e7f45385357…
Author: Eliot Miranda <eliot.miranda(a)gmail.com>
Date: 2020-01-30 (Thu, 30 Jan 2020)
Changed paths:
M nsspur64src/vm/cogit.c
M nsspur64src/vm/cogit.h
M nsspur64src/vm/cogitX64SysV.c
M nsspur64src/vm/cogitX64WIN64.c
M nsspur64src/vm/cointerp.c
M nsspur64src/vm/cointerp.h
M nsspur64src/vm/gcc3x-cointerp.c
M nsspursrc/vm/cogit.h
M nsspursrc/vm/cogitARMv5.c
M nsspursrc/vm/cogitIA32.c
M nsspursrc/vm/cogitMIPSEL.c
M nsspursrc/vm/cointerp.c
M nsspursrc/vm/cointerp.h
M nsspursrc/vm/gcc3x-cointerp.c
M nsspurstack64src/vm/gcc3x-interp.c
M nsspurstack64src/vm/interp.c
M nsspurstacksrc/vm/gcc3x-interp.c
M nsspurstacksrc/vm/interp.c
M platforms/Cross/vm/sq.h
M platforms/Mac OS/vm/sqPlatformSpecific.h
M platforms/iOS/vm/OSX/sqPlatformSpecific.h
M platforms/iOS/vm/iPhone/sqPlatformSpecific.h
M platforms/unix/vm/sqPlatformSpecific.h
M platforms/unix/vm/sqUnixSpurMemory.c
M platforms/win32/vm/sqPlatformSpecific.h
M platforms/win32/vm/sqWin32SpurAlloc.c
M scripts/revertIfEssentiallyUnchanged
M spur64src/vm/cogit.h
M spur64src/vm/cogitX64SysV.c
M spur64src/vm/cogitX64WIN64.c
M spur64src/vm/cointerp.c
M spur64src/vm/cointerp.h
M spur64src/vm/cointerpmt.c
M spur64src/vm/cointerpmt.h
M spur64src/vm/gcc3x-cointerp.c
M spur64src/vm/gcc3x-cointerpmt.c
M spurlowcode64src/vm/cogit.c
M spurlowcode64src/vm/cogit.h
M spurlowcode64src/vm/cogitX64SysV.c
M spurlowcode64src/vm/cogitX64WIN64.c
M spurlowcode64src/vm/cointerp.c
M spurlowcode64src/vm/cointerp.h
M spurlowcode64src/vm/gcc3x-cointerp.c
M spurlowcodesrc/vm/cogit.h
M spurlowcodesrc/vm/cogitARMv5.c
M spurlowcodesrc/vm/cogitIA32.c
M spurlowcodesrc/vm/cogitMIPSEL.c
M spurlowcodesrc/vm/cointerp.c
M spurlowcodesrc/vm/cointerp.h
M spurlowcodesrc/vm/gcc3x-cointerp.c
M spurlowcodestack64src/vm/gcc3x-interp.c
M spurlowcodestack64src/vm/interp.c
M spurlowcodestacksrc/vm/gcc3x-interp.c
M spurlowcodestacksrc/vm/interp.c
M spursista64src/vm/cogit.c
M spursista64src/vm/cogit.h
M spursista64src/vm/cogitX64SysV.c
M spursista64src/vm/cogitX64WIN64.c
M spursista64src/vm/cointerp.c
M spursista64src/vm/cointerp.h
M spursista64src/vm/gcc3x-cointerp.c
M spursistasrc/vm/cogit.h
M spursistasrc/vm/cogitARMv5.c
M spursistasrc/vm/cogitIA32.c
M spursistasrc/vm/cogitMIPSEL.c
M spursistasrc/vm/cointerp.c
M spursistasrc/vm/cointerp.h
M spursistasrc/vm/gcc3x-cointerp.c
M spursrc/vm/cogit.h
M spursrc/vm/cogitARMv5.c
M spursrc/vm/cogitIA32.c
M spursrc/vm/cogitMIPSEL.c
M spursrc/vm/cointerp.c
M spursrc/vm/cointerp.h
M spursrc/vm/cointerpmt.c
M spursrc/vm/cointerpmt.h
M spursrc/vm/gcc3x-cointerp.c
M spursrc/vm/gcc3x-cointerpmt.c
M spurstack64src/vm/gcc3x-interp.c
M spurstack64src/vm/interp.c
M spurstack64src/vm/validImage.c
M spurstacksrc/vm/gcc3x-interp.c
M spurstacksrc/vm/interp.c
M spurstacksrc/vm/validImage.c
M src/plugins/B2DPlugin/B2DPlugin.c
M src/plugins/BitBltPlugin/BitBltPlugin.c
M src/plugins/DSAPrims/DSAPrims.c
M src/plugins/FileAttributesPlugin/FileAttributesPlugin.c
M src/plugins/FloatMathPlugin/FloatMathPlugin.c
M src/plugins/GeniePlugin/GeniePlugin.c
M src/plugins/LargeIntegers/LargeIntegers.c
M src/plugins/MD5Plugin/MD5Plugin.c
M src/plugins/MiscPrimitivePlugin/MiscPrimitivePlugin.c
M src/plugins/SHA256Plugin/SHA256Plugin.c
M src/plugins/SqueakFFIPrims/ARM32FFIPlugin.c
M src/plugins/SqueakFFIPrims/ARM64FFIPlugin.c
M src/plugins/SqueakFFIPrims/IA32FFIPlugin.c
M src/plugins/SqueakFFIPrims/X64SysVFFIPlugin.c
M src/plugins/SqueakFFIPrims/X64Win64FFIPlugin.c
M src/vm/cogit.h
M src/vm/cogitARMv5.c
M src/vm/cogitIA32.c
M src/vm/cogitMIPSEL.c
M src/vm/cointerp.c
M src/vm/cointerp.h
M src/vm/cointerpmt.c
M src/vm/cointerpmt.h
M src/vm/gcc3x-cointerp.c
M src/vm/gcc3x-cointerpmt.c
M stacksrc/vm/gcc3x-interp.c
M stacksrc/vm/interp.c
Log Message:
-----------
CogVM source as per VMMaker.oscog-eem.2692
ThreadedFFIPlugins:
See https://github.com/OpenSmalltalk/opensmalltalk-vm/issues/443
FFI support for returning a packed struct by value on X64 SysV
On X64/SysV, structs up to 16 bytes long can be passed by value in a pair of
8-byte registers. The problem is to know whether these are int (RAX RDX) or
float (XMM0 XMM1) registers, or possibly a mix of the two...
For each 8-byte chunk, we must know if it contains at least one int (in which
case we have to use an int register), or exclusively floating-point fields (a
pair of floats or a double). The previous algorithm checked the first two
fields, or the last two fields, which does not correctly cover all cases...
For example, int-int-float has int-float as its last two fields, yet it will
use RAX XMM0.
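The per-eightbyte rule described above can be sketched in C. This is an illustration only: the `Field` list, names, and offsets are assumptions for the example, not the plugin's compiledSpec format.

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the SysV per-eightbyte classification: an eightbyte that
   contains at least one integer field goes in an int register (RAX/RDX);
   one containing only floats/doubles goes in an SSE register (XMM0/XMM1). */
enum { CLASS_INTEGER, CLASS_SSE };

typedef struct { size_t offset, size; int isFloat; } Field;

static int classifyEightbyte(const Field *fields, int n, size_t chunk)
{
    for (int i = 0; i < n; i++) {
        size_t lo = fields[i].offset, hi = lo + fields[i].size;
        /* does this field overlap eightbyte [chunk*8, chunk*8 + 8)? */
        if (lo < (chunk + 1) * 8 && hi > chunk * 8 && !fields[i].isFloat)
            return CLASS_INTEGER;   /* any int forces an int register */
    }
    return CLASS_SSE;               /* floats only: an XMM register */
}
```

For the int-int-float example, eightbyte 0 classifies as INTEGER (RAX) and eightbyte 1 as SSE (XMM0), matching the text above; looking only at the last two fields (int-float) would misclassify eightbyte 0.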
So we have to know about the struct layout... Unfortunately, this information
is not included in the compiledSpec. The idea here is to reconstruct the
information. See #registerTypeForStructSpecs:OfLength:
It's also impossible to cover exotic alignments like the packed structure
cases... But if we really want to pass those, it will mean passing the
alignment information, a more involved change to #compiledSpec (we need
up to 16 bits per field to handle that information, since our FFI structs
are limited to 65535 bytes anyway).
For returning a struct, it's the same problem. We have four possible
combinations of int-float registers. Consequently, the idea is to analyze
the ffiRetSpec compiledSpec object thru the CalloutState (it's the Smalltalk
WordArray object, not a pointer to its firstIndexableField) to perform
this analysis... Not sure if that's the best choice.
Since we have 4 different SixteenByte types, I have changed
value, since it's what will be used to memcpy to the allocated ByteArray handle.
Checking the size of a struct is not the only condition for returning a struct
via registers. Some ABIs, like X64 SysV, also mandate that struct fields be
properly aligned. Therefore, we cannot just rely on #returnStructInRegisters:.
Rename #returnStructInRegisters: -> #canReturnInRegistersStructOfSize:
Perform a more thorough analysis during the setup in #ffiCheckReturn:With:in:
The ABI will #encodeStructReturnTypeIn: a new callout state.
This structReturnType tells how the struct should be returned
- via registers (and which registers)
- or via pointer to memory allocated by caller
This structReturnType will be used at time of:
- allocating the memory in caller - see #ffiCall:ArgArrayOrNil:NumArgs:
- dispatching to the correct FFI prototype - see ThreadedX64SysVFFIPlugin>>#ffiCalloutTo:SpecOnStack:in:
- copying back the struct contents to ExternalStructure handle (a ByteArray) - see #ffiReturnStruct:ofType:in:
Since structReturnType is encoded, it is not necessarily accessed directly,
but rather via the new implementation of #returnStructInRegisters:, which now
takes the calloutState and knows how to decode its structReturnType.
Check for unaligned structs and pass them in MEMORY (alloca'd memory passed
thru a pointer).
Use a new (branchless) formulation for aligning the byteSize to the next
multiple of fieldAlignment.
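The branchless round-up can be expressed with the usual power-of-two trick. This is a sketch of the idea; the commit's exact Slang expression may differ, and it assumes fieldAlignment is a power of two (true for FFI scalar types).

```c
#include <assert.h>

/* Round byteSize up to the next multiple of fieldAlignment without a
   branch, assuming fieldAlignment is a power of two: adding alignment-1
   and masking off the low bits lands on the next aligned boundary. */
static unsigned alignUp(unsigned byteSize, unsigned fieldAlignment)
{
    return (byteSize + fieldAlignment - 1) & ~(fieldAlignment - 1);
}
```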
Encode the registerType of an invalid unaligned candidate as 2r110, and pass
the struct address returned by the foreign function in the RAX register in
place of the callout limit when the struct is returned by MEMORY.
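One way to picture the encoded structReturnType is as a small integer whose value selects the register combination or the MEMORY case. The constants and names below are hypothetical, chosen only to illustrate the dispatch; the actual bit layout in VMMaker may differ, apart from 2r110 marking the unaligned/MEMORY case as stated above.

```c
#include <assert.h>

/* Hypothetical encoding of the four int/float register combinations for a
   16-byte SysV return, plus 2r110 (6) for the unaligned/MEMORY case in
   which the callee returns the struct address in RAX. */
enum {
    RET_IN_RAX_RDX   = 0,  /* both eightbytes in int registers */
    RET_IN_RAX_XMM0  = 1,  /* int then float                   */
    RET_IN_XMM0_RAX  = 2,  /* float then int                   */
    RET_IN_XMM0_XMM1 = 3,  /* both eightbytes in SSE registers */
    RET_IN_MEMORY    = 6   /* 2r110: return via RAX-held pointer */
};

/* Decode: anything but the MEMORY case is returned via registers. */
static int returnStructInRegisters(int structReturnType)
{
    return structReturnType != RET_IN_MEMORY;
}
```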
CoInterpreter: eliminate all but one compiler warning.
Cogit/Slang: fix several C compiler warnings re the Cogits.
Cogit: DUAL_MAPPED_CODE_ZONE (require -DDUAL_MAPPED_CODE_ZONE=1 to enable)
Fix denial of the write/execute facility on modern Linuxes by dual mapping the
code zone into a read/execute address range for code execution and a read/write
address range for code editing. Maintain codeToDataDelta and provide
codeXXXAt:put: accessors that write at address + codeToDataDelta, i.e. into
the offset writable address range.
Hence DUAL_MAPPED_CODE_ZONE requires a new executable permissions applier that
will also do the dual mapping, sqMakeMemoryExecutableFrom:To:CodeToDataDelta:.
Provide writableMethodFor: as a convenience for obtaining a writable cogMethod.
No longer have the fillInXXXHeaderYYY: methods answer anything since they're
given the writable header, not the actual header.
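The dual-mapping idea above can be sketched on Linux by backing the code zone with an anonymous memfd and mapping it twice. This is an illustration only, not the VM's sqMakeMemoryExecutableFrom:To:CodeToDataDelta: implementation, and it assumes memfd_create is available (glibc >= 2.27).

```c
#define _GNU_SOURCE
#include <assert.h>
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

/* Map the same physical pages twice: a read/execute view for running code
   and a read/write view for editing it.  codeToDataDelta is the offset
   added to a code address to obtain its writable alias. */
static int dualMap(size_t size, unsigned char **execView,
                   unsigned char **writeView, long *codeToDataDelta)
{
    int fd = memfd_create("codezone", 0);
    if (fd < 0 || ftruncate(fd, (off_t)size) < 0)
        return -1;
    *execView  = mmap(NULL, size, PROT_READ | PROT_EXEC,  MAP_SHARED, fd, 0);
    *writeView = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);
    if (*execView == MAP_FAILED || *writeView == MAP_FAILED)
        return -1;
    *codeToDataDelta = (long)(*writeView - *execView);
    return 0;
}
```

A byte written through the writable view is visible through the executable view at the same offset, which is what the codeXXXAt:put: accessors rely on; neither mapping is ever writable and executable at the same time, satisfying W^X policies.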
Cogit:
Refactor indexForSelector:in:at: to indexForSelector:in: in the back end so it
can be inlined (via a macro).
Slang:
Emit a constant for (M << N) and (M - N) - L for constant integers.
Fix labels in Slang case statement expansion.
During expansion in case statements, trees are duplicated and expanded.
Eliot Miranda uploaded a new version of VMMaker to project VM Maker:
http://source.squeak.org/VMMaker/VMMaker.oscog-eem.2691.mcz
==================== Summary ====================
Name: VMMaker.oscog-eem.2691
Author: eem
Time: 30 January 2020, 3:51:09.421807 pm
UUID: d6b8b9b3-701f-47a6-a62b-6f95bfd498c3
Ancestors: VMMaker.oscog-nice.2690
CoInterpreter: eliminate all but one compiler warning.
Cogit: hey ho, initializeCodeZoneFrom:upTo:writableCodeZone: has got to go.
=============== Diff against VMMaker.oscog-nice.2690 ===============
Item was changed:
----- Method: CoInterpreter>>callbackEnter: (in category 'callback support') -----
callbackEnter: callbackID
"Re-enter the interpreter for executing a callback"
| currentCStackPointer currentCFramePointer savedReenterInterpreter
wasInMachineCode calledFromMachineCode |
<volatile>
<export: true>
+ <var: #currentCStackPointer type: #usqIntptr_t>
+ <var: #currentCFramePointer type: #usqIntptr_t>
- <var: #currentCStackPointer type: #'void *'>
- <var: #currentCFramePointer type: #'void *'>
<var: #callbackID type: #'sqInt *'>
<var: #savedReenterInterpreter type: #'jmp_buf'>
"For now, do not allow a callback unless we're in a primitiveResponse"
(self asserta: primitiveFunctionPointer ~= 0) ifFalse:
[^false].
self assert: primFailCode = 0.
"Check if we've exceeded the callback depth"
(self asserta: jmpDepth < MaxJumpBuf) ifFalse:
[^false].
jmpDepth := jmpDepth + 1.
wasInMachineCode := self isMachineCodeFrame: framePointer.
calledFromMachineCode := instructionPointer <= objectMemory startOfMemory.
"Suspend the currently active process"
suspendedCallbacks at: jmpDepth put: self activeProcess.
"We need to preserve newMethod explicitly since it is not activated yet
and therefore no context has been created for it. If the caller primitive
for any reason decides to fail we need to make sure we execute the correct
method and not the one 'last used' in the call back"
suspendedMethods at: jmpDepth put: newMethod.
self flag: 'need to debug this properly. Conceptually it is the right thing to do but it crashes in practice'.
false
ifTrue:
["Signal external semaphores since a signalSemaphoreWithIndex: request may
have been issued immediately prior to this callback before the VM has any
chance to do a signalExternalSemaphores in checkForEventsMayContextSwitch:"
self signalExternalSemaphores.
"If no process is awakened by signalExternalSemaphores then transfer
to the highest priority runnable one."
(suspendedCallbacks at: jmpDepth) = self activeProcess ifTrue:
[self transferTo: self wakeHighestPriority from: CSCallbackLeave]]
ifFalse:
[self transferTo: self wakeHighestPriority from: CSCallbackLeave].
"Typically, invoking the callback means that some semaphore has been
signaled to indicate the callback. Force an interrupt check as soon as possible."
self forceInterruptCheck.
"Save the previous CStackPointers and interpreter entry jmp_buf."
currentCStackPointer := CStackPointer.
currentCFramePointer := CFramePointer.
self memcpy: savedReenterInterpreter asVoidPointer
_: reenterInterpreter
_: (self sizeof: #'jmp_buf').
cogit assertCStackWellAligned.
(self setjmp: (jmpBuf at: jmpDepth)) = 0 ifTrue: "Fill in callbackID"
[callbackID at: 0 put: jmpDepth.
self enterSmalltalkExecutive.
self assert: false "NOTREACHED"].
"Restore the previous CStackPointers and interpreter entry jmp_buf."
self setCFramePointer: currentCFramePointer setCStackPointer: currentCStackPointer.
self memcpy: reenterInterpreter
_: (self cCoerceSimple: savedReenterInterpreter to: #'void *')
_: (self sizeof: #'jmp_buf').
"Transfer back to the previous process so that caller can push result"
self putToSleep: self activeProcess yieldingIf: preemptionYields.
self transferTo: (suspendedCallbacks at: jmpDepth) from: CSCallbackLeave.
newMethod := suspendedMethods at: jmpDepth. "see comment above"
argumentCount := self argumentCountOf: newMethod.
self assert: wasInMachineCode = (self isMachineCodeFrame: framePointer).
calledFromMachineCode
ifTrue:
[instructionPointer asUnsignedInteger >= objectMemory startOfMemory ifTrue:
[self iframeSavedIP: framePointer put: instructionPointer.
instructionPointer := cogit ceReturnToInterpreterPC]]
ifFalse:
["Even if the context was flushed to the heap and rebuilt in transferTo:from:
above it will remain an interpreted frame because the context's pc would
remain a bytecode pc. So the instructionPointer must also be a bytecode pc."
self assert: (self isMachineCodeFrame: framePointer) not.
self assert: instructionPointer > objectMemory startOfMemory].
self assert: primFailCode = 0.
jmpDepth := jmpDepth-1.
^true!
Item was changed:
----- Method: CoInterpreter>>mcprimFunctionForPrimitiveIndex: (in category 'cog jit support') -----
mcprimFunctionForPrimitiveIndex: primIndex
<api>
primIndex = PrimNumberHashMultiply ifTrue:
+ [^self cCoerceSimple: #mcprimHashMultiply: to: #sqInt].
- [^#mcprimHashMultiply:].
self error: 'unknown mcprim'.
^nil!
Item was changed:
----- Method: CoInterpreter>>saveCStackStateForCallbackContext: (in category 'callback support') -----
saveCStackStateForCallbackContext: vmCallbackContext
<var: #vmCallbackContext type: #'VMCallbackContext *'>
vmCallbackContext
+ savedCStackPointer: CStackPointer asVoidPointer;
+ savedCFramePointer: CFramePointer asVoidPointer.
- savedCStackPointer: CStackPointer;
- savedCFramePointer: CFramePointer.
super saveCStackStateForCallbackContext: vmCallbackContext!
Item was removed:
- ----- Method: Cogit>>initializeCodeZoneFrom:upTo:writableCodeZone: (in category 'initialization') -----
- initializeCodeZoneFrom: startAddress upTo: endAddress writableCodeZone: writableCodeZone
- <api>
- <var: 'startAddress' type: #usqInt>
- <var: 'endAddress' type: #usqInt>
- <var: 'writableCodeZone' type: #usqInt>
- "If the OS platform requires dual mapping to achieve a writable code zone
- then startAddress will be the non-zero address of the read/write zone and
- executableCodeZone will be the non-zero address of the read/execute zone.
- If the OS platform does not require dual mapping then startAddress will be
- the first address of the read/write/executable zone and executableCodeZone
- will be zero."
- self initializeBackend.
- codeToDataDelta := writableCodeZone = 0 ifTrue: [0] ifFalse: [writableCodeZone - startAddress].
- backEnd stopsFrom: startAddress to: endAddress - 1.
- self cCode:
- [writableCodeZone = 0 ifTrue:
- [self sqMakeMemoryExecutableFrom: startAddress To: endAddress]]
- inSmalltalk:
- [startAddress = self class guardPageSize ifTrue:
- [backEnd stopsFrom: 0 to: endAddress - 1].
- self initializeProcessor].
-
- codeBase := methodZoneBase := startAddress.
- minValidCallAddress := (codeBase min: coInterpreter interpretAddress) min: coInterpreter primitiveFailAddress.
- methodZone manageFrom: methodZoneBase to: endAddress.
- self assertValidDualZone.
- self maybeGenerateCheckFeatures.
- self maybeGenerateCheckLZCNT.
- self maybeGenerateICacheFlush.
- self generateVMOwnerLockFunctions.
- self genGetLeafCallStackPointer.
- self generateStackPointerCapture.
- self generateTrampolines.
- self computeEntryOffsets.
- self computeFullBlockEntryOffsets.
- self generateClosedPICPrototype.
- self alignMethodZoneBase.
- "repeat so that now the methodZone ignores the generated run-time"
- methodZone manageFrom: methodZoneBase to: endAddress.
- "N.B. this is assumed to be the last thing done in initialization; see Cogit>>initialized"
- self generateOpenPICPrototype!
Nicolas Cellier uploaded a new version of VMMaker to project VM Maker:
http://source.squeak.org/VMMaker/VMMaker.oscog-nice.2690.mcz
==================== Summary ====================
Name: VMMaker.oscog-nice.2690
Author: nice
Time: 30 January 2020, 11:04:41.785937 pm
UUID: e61056e6-e4d9-4686-be96-bfcfa8f3afc2
Ancestors: VMMaker.oscog-eem.2689
Finish FFI support for returning a packed struct by value on X64 SysV
Checking the size of a struct is not the only condition for returning a struct via registers.
Some ABIs, like X64 SysV, also mandate that struct fields be properly aligned.
Therefore, we cannot just rely on #returnStructInRegisters:.
Rename it (and preserve timestamps by courtesy):
#returnStructInRegisters: -> #canReturnInRegistersStructOfSize:
Perform a more thorough analysis during the setup in #ffiCheckReturn:With:in:
The ABI will #encodeStructReturnTypeIn: a new callout state.
This structReturnType tells how the struct should be returned
- via registers (and which registers)
- or via pointer to memory allocated by caller
This structReturnType will be used at time of:
- allocating the memory in caller - see #ffiCall:ArgArrayOrNil:NumArgs:
- dispatching to the correct FFI prototype - see ThreadedX64SysVFFIPlugin>>#ffiCalloutTo:SpecOnStack:in:
- copying back the struct contents to ExternalStructure handle (a ByteArray) - see #ffiReturnStruct:ofType:in:
Since structReturnType is encoded, it is not necessarily accessed directly, but rather via the new implementation of #returnStructInRegisters:, which now takes the calloutState and knows how to decode its structReturnType.
TO DO (reminder):
The analysis of structReturnType is not necessarily cheap and should be cached in some unused bits of the compiledSpec literal.
This cache must be invalidated when restarting the image on a different OS.
TO DO (reminder):
Not all composites are structs; we should also handle unions!
TO DO (reminder):
Other ABIs like IA32 also fail the new test in SysV; that should be fixed too.
=============== Diff against VMMaker.oscog-eem.2689 ===============
Item was added:
+ ----- Method: ThreadedARM32FFIPlugin>>canReturnInRegistersStructOfSize: (in category 'marshalling') -----
+ canReturnInRegistersStructOfSize: returnStructSize
+ "Answer if a struct result of a given size is returned in registers or not."
+ ^returnStructSize <= self wordSize!
Item was added:
+ ----- Method: ThreadedARM32FFIPlugin>>encodeStructReturnTypeIn: (in category 'callout support') -----
+ encodeStructReturnTypeIn: calloutState
+ "Set the return type to true if returning the struct via register"
+ <var: #calloutState type: #'CalloutState *'>
+ <inline: true>
+
+ calloutState structReturnType: (self canReturnInRegistersStructOfSize: calloutState structReturnSize)!
Item was changed:
----- Method: ThreadedARM32FFIPlugin>>ffiReturnStruct:ofType:in: (in category 'callout support') -----
ffiReturnStruct: longLongRetPtr ofType: ffiRetType in: calloutState
<var: #longLongRetPtr type: #'void *'>
<var: #calloutState type: #'CalloutState *'>
"Create a structure return value from an external function call. The value has been stored in
alloca'ed space pointed to by the calloutState or in the return value passed by pointer."
| retOop retClass oop |
<inline: true>
retClass := interpreterProxy fetchPointer: 1 ofObject: ffiRetType.
retOop := interpreterProxy instantiateClass: retClass indexableSize: 0.
self remapOop: retOop
in: [oop := interpreterProxy
instantiateClass: interpreterProxy classByteArray
indexableSize: calloutState structReturnSize].
self memcpy: (interpreterProxy firstIndexableField: oop)
+ _: ((self returnStructInRegisters: calloutState)
- _: ((self returnStructInRegisters: calloutState structReturnSize)
ifTrue: [longLongRetPtr]
ifFalse: [calloutState limit])
_: calloutState structReturnSize.
interpreterProxy storePointer: 0 ofObject: retOop withValue: oop.
^retOop!
Item was changed:
----- Method: ThreadedARM32FFIPlugin>>returnStructInRegisters: (in category 'marshalling') -----
+ returnStructInRegisters: calloutState
+ "Return thru registers if structReturnType is true"
+ <var: #calloutState type: #'CalloutState *'>
+ ^calloutState structReturnType!
- returnStructInRegisters: returnStructSize
- "Answer if a struct result of a given size is returned in memory or not."
- ^returnStructSize <= self wordSize!
Item was added:
+ ----- Method: ThreadedARM64FFIPlugin>>canReturnInRegistersStructOfSize: (in category 'marshalling') -----
+ canReturnInRegistersStructOfSize: returnStructSize
+ "Answer if a struct result of a given size is able to be returned in registers.
+ NB: this is a predicate!! #canReturnInRegistersStructOfSize: does NOT return a struct in anything!!"
+ ^returnStructSize <= (2 * self wordSize)!
Item was added:
+ ----- Method: ThreadedARM64FFIPlugin>>encodeStructReturnTypeIn: (in category 'callout support') -----
+ encodeStructReturnTypeIn: calloutState
+ "Set the return type to true if returning the struct via register"
+ <var: #calloutState type: #'CalloutState *'>
+ <inline: true>
+
+ calloutState structReturnType: (self canReturnInRegistersStructOfSize: calloutState structReturnSize)!
Item was changed:
----- Method: ThreadedARM64FFIPlugin>>ffiCalloutTo:SpecOnStack:in: (in category 'callout support') -----
ffiCalloutTo: procAddr SpecOnStack: specOnStack in: calloutState
<var: #procAddr type: #'void *'>
<var: #calloutState type: #'CalloutState *'>
<var: #loadFloatRegs declareC: 'extern void loadFloatRegs(double, double, double, double, double, double, double, double)'>
"Go out, call this guy and create the return value. This *must* be inlined because of
the alloca of the outgoing stack frame in ffiCall:WithFlags:NumArgs:Args:AndTypes:"
| myThreadIndex atomicType floatRet intRet x1Ret |
<var: #floatRet type: #double>
<var: #intRet type: #usqLong>
<var: #x1Ret type: #usqLong>
<inline: true>
myThreadIndex := interpreterProxy disownVM: (self disownFlagsFor: calloutState).
calloutState floatRegisterIndex > 0 ifTrue:
[self loadFloatRegs:
(calloutState floatRegisters at: 0)
_: (calloutState floatRegisters at: 1)
_: (calloutState floatRegisters at: 2)
_: (calloutState floatRegisters at: 3)
_: (calloutState floatRegisters at: 4)
_: (calloutState floatRegisters at: 5)
_: (calloutState floatRegisters at: 6)
_: (calloutState floatRegisters at: 7)].
(self allocaLiesSoSetSpBeforeCall or: [self mustAlignStack]) ifTrue:
[self setsp: calloutState argVector].
atomicType := self atomicTypeOf: calloutState ffiRetHeader.
(atomicType >> 1) = (FFITypeSingleFloat >> 1) ifTrue:
[atomicType = FFITypeSingleFloat
ifTrue:
[floatRet := self
dispatchFunctionPointer: (self cCoerceSimple: procAddr to: 'float (*)(sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t)')
with: (calloutState integerRegisters at: 0)
with: (calloutState integerRegisters at: 1)
with: (calloutState integerRegisters at: 2)
with: (calloutState integerRegisters at: 3)
with: (calloutState integerRegisters at: 4)
with: (calloutState integerRegisters at: 5)
with: (calloutState integerRegisters at: 6)
with: (calloutState integerRegisters at: 7)]
ifFalse: "atomicType = FFITypeDoubleFloat"
[floatRet := self
dispatchFunctionPointer: (self cCoerceSimple: procAddr to: 'double (*)(sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t)')
with: (calloutState integerRegisters at: 0)
with: (calloutState integerRegisters at: 1)
with: (calloutState integerRegisters at: 2)
with: (calloutState integerRegisters at: 3)
with: (calloutState integerRegisters at: 4)
with: (calloutState integerRegisters at: 5)
with: (calloutState integerRegisters at: 6)
with: (calloutState integerRegisters at: 7)].
"undo any callee argument pops because it may confuse stack management with the alloca."
(self isCalleePopsConvention: calloutState callFlags) ifTrue:
[self setsp: calloutState argVector].
interpreterProxy ownVM: myThreadIndex.
^interpreterProxy floatObjectOf: floatRet].
"If struct address used for return value, call is special"
(self mustReturnStructOnStack: calloutState structReturnSize)
ifTrue: [
intRet := 0.
self setReturnRegister: (self cCoerceSimple: calloutState limit to: 'sqLong') "stack alloca'd struct"
andCall: (self cCoerceSimple: procAddr to: 'sqLong')
withArgsArray: (self cCoerceSimple: (self addressOf: calloutState integerRegisters) to: 'sqLong').
] ifFalse: [
intRet := self
dispatchFunctionPointer: (self cCoerceSimple: procAddr to: 'usqIntptr_t (*)(sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t)')
with: (calloutState integerRegisters at: 0)
with: (calloutState integerRegisters at: 1)
with: (calloutState integerRegisters at: 2)
with: (calloutState integerRegisters at: 3)
with: (calloutState integerRegisters at: 4)
with: (calloutState integerRegisters at: 5)
with: (calloutState integerRegisters at: 6)
with: (calloutState integerRegisters at: 7).
x1Ret := self getX1register. "Capture x1 immediately. No problem if unused"
].
"If struct returned in registers,
place register values into calloutState integerRegisters"
(calloutState structReturnSize > 0
+ and: [self returnStructInRegisters: calloutState]) ifTrue:
- and: [self returnStructInRegisters: calloutState structReturnSize]) ifTrue:
["Only 2 regs used in ARMv8/Aarch64 current"
calloutState integerRegisters at: 0 put: intRet. "X0"
calloutState integerRegisters at: 1 put: x1Ret]. "X1"
"undo any callee argument pops because it may confuse stack management with the alloca."
(self isCalleePopsConvention: calloutState callFlags) ifTrue:
[self setsp: calloutState argVector].
interpreterProxy ownVM: myThreadIndex.
(calloutState ffiRetHeader anyMask: FFIFlagPointer+FFIFlagStructure) ifTrue:
["Note: Order is important here since FFIFlagPointer + FFIFlagStructure is used to represent
'typedef void* VoidPointer' and VoidPointer must be returned as pointer *not* as struct."
(calloutState ffiRetHeader anyMask: FFIFlagPointer) ifTrue:
[^self ffiReturnPointer: intRet ofType: (self ffiReturnType: specOnStack) in: calloutState].
^self ffiReturnStruct: (self addressOf: intRet) ofType: (self ffiReturnType: specOnStack) in: calloutState].
^self ffiCreateIntegralResultOop: intRet ofAtomicType: atomicType in: calloutState!
Item was changed:
----- Method: ThreadedARM64FFIPlugin>>ffiReturnStruct:ofType:in: (in category 'callout support') -----
ffiReturnStruct: longLongRetPtr ofType: ffiRetType in: calloutState
<var: #longLongRetPtr type: #'void *'>
<var: #calloutState type: #'CalloutState *'>
"Create a structure return value from an external function call. The value has been stored in
alloca'ed space pointed to by the calloutState or in the integer registers."
| retOop retClass oop |
<inline: true>
retClass := interpreterProxy fetchPointer: 1 ofObject: ffiRetType.
retOop := interpreterProxy instantiateClass: retClass indexableSize: 0.
self remapOop: retOop
in: [oop := interpreterProxy
instantiateClass: interpreterProxy classByteArray
indexableSize: calloutState structReturnSize].
self memcpy: (interpreterProxy firstIndexableField: oop)
+ _: ((self returnStructInRegisters: calloutState)
- _: ((self returnStructInRegisters: calloutState structReturnSize)
ifTrue: [self addressOf: calloutState integerRegisters]
ifFalse: [calloutState limit])
_: calloutState structReturnSize.
interpreterProxy storePointer: 0 ofObject: retOop withValue: oop.
^retOop!
Item was changed:
----- Method: ThreadedARM64FFIPlugin>>returnStructInRegisters: (in category 'marshalling') -----
+ returnStructInRegisters: calloutState
+ "Return thru registers if structReturnType is true"
+ <var: #calloutState type: #'CalloutState *'>
+ ^calloutState structReturnType!
- returnStructInRegisters: returnStructSize
- "Answer if a struct result of a given size is able to be returned in registers.
- NB: this is a predicate!! #returnStructInRegisters: does NOT return a struct in anything!!"
- ^returnStructSize <= (2 * self wordSize)!
Item was changed:
VMStructType subclass: #ThreadedFFICalloutState
+ instanceVariableNames: 'argVector currentArg limit structReturnSize structReturnType callFlags ffiArgSpec ffiArgSpecSize ffiArgHeader ffiRetHeader ffiRetSpec stringArgIndex stringArgs'
- instanceVariableNames: 'argVector currentArg limit structReturnSize callFlags ffiArgSpec ffiArgSpecSize ffiArgHeader ffiRetHeader ffiRetSpec stringArgIndex stringArgs'
classVariableNames: ''
poolDictionaries: ''
category: 'VMMaker-Plugins-FFI'!
+ !ThreadedFFICalloutState commentStamp: 'nice 1/30/2020 21:13' prior: 0!
- !ThreadedFFICalloutState commentStamp: '<historical>' prior: 0!
Instances of the receiver hold the per-thread state of a call-out.
long *argVector pointer to the start of the alloca'ed argument marshalling area
long *currentArg pointer to the position in argVector to write the current argument
long *limit the limit of the argument marshalling area (for bounds checking)
structReturnSize the size of the space allocated for the structure return, if any
+ structReturnType an integer encoding how the struct is returned (typically via registers or pointer to allocated memory)
callFlags the value of the ExternalFunctionFlagsIndex field in the ExternalFunction being called
ffiArgSpec et al type information for the current argument being marshalled
stringArgIndex the count of temporary strings used for marshalling Smalltalk strings to character strings.
stringArgs pointers to the temporary strings used for marshalling Smalltalk strings to character strings.!
Item was added:
+ ----- Method: ThreadedFFICalloutState>>structReturnType (in category 'accessing') -----
+ structReturnType
+ "Answer the value of structReturnType
+ It is an OS dependent field encoding telling, for example, whether structs are to be returned
+ - via register
+ - via pointer to memory allocated by caller"
+
+ ^ structReturnType!
Item was added:
+ ----- Method: ThreadedFFICalloutState>>structReturnType: (in category 'accessing') -----
+ structReturnType: anObject
+ "Set the value of structReturnType"
+
+ ^structReturnType := anObject!
Item was added:
+ ----- Method: ThreadedFFIPlugin>>canReturnInRegistersStructOfSize: (in category 'marshalling') -----
+ canReturnInRegistersStructOfSize: returnStructSize
+ "Answer if a struct result of a given size can be returned via registers or not.
+ Size is a necessary condition, but it might not be a sufficient condition.
+ For example, SysV X64 also requires that struct fields be properly aligned."
+ ^self subclassResponsibility!
Item was added:
+ ----- Method: ThreadedFFIPlugin>>encodeStructReturnTypeIn: (in category 'callout support') -----
+ encodeStructReturnTypeIn: calloutState
+ "Set a variable encoding how the struct is to be returned.
+ It is an OS dependent encoding, left to subclass responsibility."
+ <var: #calloutState type: #'CalloutState *'>
+ <inline: true>
+ self subclassResponsibility
+ !
Item was changed:
----- Method: ThreadedFFIPlugin>>ffiCall:ArgArrayOrNil:NumArgs: (in category 'callout support') -----
ffiCall: externalFunction ArgArrayOrNil: argArrayOrNil NumArgs: nArgs
"Generic callout. Does the actual work. If argArrayOrNil is nil it takes args from the stack
and the spec from the method. If argArrayOrNil is not nil takes args from argArrayOrNil
and the spec from the receiver."
| flags argTypeArray address argType oop argSpec argClass err theCalloutState calloutState requiredStackSize stackSize allocation result primNumArgs |
<inline: true>
<var: #theCalloutState type: #'CalloutState'>
<var: #calloutState type: #'CalloutState *'>
<var: #allocation type: #'char *'>
primNumArgs := interpreterProxy methodArgumentCount.
(interpreterProxy is: externalFunction KindOfClass: interpreterProxy classExternalFunction) ifFalse:
[^self ffiFail: FFIErrorNotFunction].
"Load and check the values in the externalFunction before we call out"
flags := interpreterProxy fetchInteger: ExternalFunctionFlagsIndex ofObject: externalFunction.
interpreterProxy failed ifTrue:
[^self ffiFail: FFIErrorBadArgs].
"This must come early for compatibility with the old FFIPlugin. Image-level code
may assume the function pointer is loaded eagerly. Thanks to Nicolas Cellier."
address := self ffiLoadCalloutAddress: externalFunction.
interpreterProxy failed ifTrue:
[^0 "error code already set by ffiLoadCalloutAddress:"].
argTypeArray := interpreterProxy fetchPointer: ExternalFunctionArgTypesIndex ofObject: externalFunction.
"must be array of arg types"
((interpreterProxy isArray: argTypeArray)
and: [(interpreterProxy slotSizeOf: argTypeArray) = (nArgs + 1)]) ifFalse:
[^self ffiFail: FFIErrorBadArgs].
"check if the calling convention is supported"
self cppIf: COGMTVM
ifTrue:
[(self ffiSupportsCallingConvention: (flags bitAnd: FFICallTypesMask)) ifFalse:
[^self ffiFail: FFIErrorCallType]]
ifFalse: "not masking causes threaded calls to fail, which is as they should if the plugin is not threaded."
[(self ffiSupportsCallingConvention: flags) ifFalse:
[^self ffiFail: FFIErrorCallType]].
requiredStackSize := self externalFunctionHasStackSizeSlot
ifTrue: [interpreterProxy
fetchInteger: ExternalFunctionStackSizeIndex
ofObject: externalFunction]
ifFalse: [-1].
interpreterProxy failed ifTrue:
[^interpreterProxy primitiveFailFor: (argArrayOrNil isNil
ifTrue: [PrimErrBadMethod]
ifFalse: [PrimErrBadReceiver])].
stackSize := requiredStackSize < 0 ifTrue: [DefaultMaxStackSize] ifFalse: [requiredStackSize].
self cCode: [] inSmalltalk: [theCalloutState := self class calloutStateClass new].
calloutState := self addressOf: theCalloutState.
self cCode: [self memset: calloutState _: 0 _: (self sizeof: #CalloutState)].
calloutState callFlags: flags.
"Fetch return type and args"
argType := interpreterProxy fetchPointer: 0 ofObject: argTypeArray.
argSpec := interpreterProxy fetchPointer: 0 ofObject: argType.
argClass := interpreterProxy fetchPointer: 1 ofObject: argType.
(err := self ffiCheckReturn: argSpec With: argClass in: calloutState) ~= 0 ifTrue:
[^self ffiFail: err]. "cannot return"
"alloca the outgoing stack frame, leaving room for marshalling args, and including space for the return struct, if any.
Additional space reserved for saving register args, as mandated by the Win64 X64 or PPC ABIs, will be managed by the call itself"
allocation := self alloca: stackSize + calloutState structReturnSize + self cStackAlignment.
self mustAlignStack ifTrue:
[allocation := self cCoerce: (allocation asUnsignedIntegerPtr bitClear: self cStackAlignment - 1) to: #'char *'].
calloutState
argVector: allocation;
currentArg: allocation;
limit: allocation + stackSize.
(calloutState structReturnSize > 0
and: [self nonRegisterStructReturnIsViaImplicitFirstArgument
+ and: [(self returnStructInRegisters: calloutState) not]]) ifTrue:
- and: [(self returnStructInRegisters: calloutState structReturnSize) not]]) ifTrue:
[err := self ffiPushPointer: calloutState limit in: calloutState.
err ~= 0 ifTrue:
[self cleanupCalloutState: calloutState.
self cppIf: COGMTVM ifTrue:
[err = PrimErrObjectMayMove negated ifTrue:
[^PrimErrObjectMayMove]]. "N.B. Do not fail if object may move because caller will GC and retry."
^self ffiFail: err]].
1 to: nArgs do:
[:i|
argType := interpreterProxy fetchPointer: i ofObject: argTypeArray.
argSpec := interpreterProxy fetchPointer: 0 ofObject: argType.
argClass := interpreterProxy fetchPointer: 1 ofObject: argType.
oop := argArrayOrNil isNil
ifTrue: [interpreterProxy stackValue: nArgs - i]
ifFalse: [interpreterProxy fetchPointer: i - 1 ofObject: argArrayOrNil].
err := self ffiArgument: oop Spec: argSpec Class: argClass in: calloutState.
err ~= 0 ifTrue:
[self cleanupCalloutState: calloutState.
self cppIf: COGMTVM ifTrue:
[err = PrimErrObjectMayMove negated ifTrue:
[^PrimErrObjectMayMove]]. "N.B. Do not fail if object may move because caller will GC and retry."
^self ffiFail: err]]. "coercion failed or out of stack space"
"Failures must be reported back from ffiArgument:Spec:Class:in:.
Should not fail from here on in."
self assert: interpreterProxy failed not.
self ffiLogCallout: externalFunction.
(requiredStackSize < 0
and: [self externalFunctionHasStackSizeSlot]) ifTrue:
[stackSize := calloutState currentArg - calloutState argVector.
interpreterProxy storeInteger: ExternalFunctionStackSizeIndex ofObject: externalFunction withValue: stackSize].
"Go out and call this guy"
result := self ffiCalloutTo: address SpecOnStack: argArrayOrNil notNil in: calloutState.
self cleanupCalloutState: calloutState.
"Cannot safely use argumentCount (via e.g. methodReturnValue:) since it may have been changed by a callback."
interpreterProxy pop: primNumArgs + 1 thenPush: result.
^result!
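When `nonRegisterStructReturnIsViaImplicitFirstArgument` holds, the marshaller above pushes a pointer to the alloca'd return area (`calloutState limit`) as a hidden first argument before the real arguments. A minimal C sketch of the convention being emulated, with the hidden pointer made explicit; `BigStruct` and both function names are illustrative only, not part of the plugin:

```c
#include <string.h>

/* Sketch of the implicit-first-argument struct-return convention: a C
   function returning a large struct by value actually receives a hidden
   pointer to caller-allocated memory and writes its result there. */
typedef struct { long a, b, c, d; } BigStruct;  /* 32 bytes: too big for registers */

static void callee_with_sret(BigStruct *hiddenRet, long x)
{
    BigStruct tmp = { x, x + 1, x + 2, x + 3 };
    memcpy(hiddenRet, &tmp, sizeof tmp);        /* callee writes through the pointer */
}

static BigStruct call_through_hidden_pointer(long x)
{
    BigStruct ret;              /* plays the role of the alloca'd area at calloutState limit */
    callee_with_sret(&ret, x);  /* what ffiPushPointer:in: arranges: address pushed first */
    return ret;
}
```

This is why the alloca in `ffiCall:ArgArrayOrNil:NumArgs:` reserves `structReturnSize` bytes beyond `stackSize`: the hidden pointer must target memory that outlives the call.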
Item was changed:
----- Method: ThreadedFFIPlugin>>ffiCheckReturn:With:in: (in category 'callout support') -----
ffiCheckReturn: retSpec With: retClass in: calloutState
<var: #calloutState type: #'CalloutState *'>
"Make sure we can return an object of the given type"
<inline: true>
retClass = interpreterProxy nilObject ifFalse:
[(interpreterProxy
includesBehavior: retClass
ThatOf: interpreterProxy classExternalStructure) ifFalse:
[^FFIErrorBadReturn]].
((interpreterProxy isWords: retSpec)
and: [(interpreterProxy slotSizeOf: retSpec) > 0]) ifFalse:
[^FFIErrorWrongType].
calloutState ffiRetSpec: retSpec.
calloutState ffiRetHeader: (interpreterProxy fetchLong32: 0 ofObject: retSpec).
(self isAtomicType: calloutState ffiRetHeader) ifFalse:
[retClass = interpreterProxy nilObject ifTrue:
[^FFIErrorBadReturn]].
(calloutState ffiRetHeader bitAnd: (FFIFlagPointer bitOr: FFIFlagStructure)) = FFIFlagStructure ifTrue:
+ [calloutState structReturnSize: (calloutState ffiRetHeader bitAnd: FFIStructSizeMask).
+ self encodeStructReturnTypeIn: calloutState].
- [calloutState structReturnSize: (calloutState ffiRetHeader bitAnd: FFIStructSizeMask)].
^0!
Item was changed:
----- Method: ThreadedFFIPlugin>>returnStructInRegisters: (in category 'marshalling') -----
+ returnStructInRegisters: calloutState
+ "Answer if the struct result is returned in registers.
+ Use the OS-specific encoding stored in structReturnType.
+ Since the encoding is OS-dependent, leave the decision to the subclass."
+ <var: #calloutState type: #'CalloutState *'>
- returnStructInRegisters: returnStructSize
- "Answer if a struct result of a given size is returned in memory or not."
^self subclassResponsibility!
Item was added:
+ ----- Method: ThreadedIA32FFIPlugin>>canReturnInRegistersStructOfSize: (in category 'marshalling') -----
+ canReturnInRegistersStructOfSize: returnStructSize
+ "Answer if a struct result of a given size is returned in registers or not."
+ <cmacro: '(sz) (WIN32_X86_STRUCT_RETURN && (sz) <= 8 && !!((sz)&((sz)-1)))'>
+ ^returnStructSize <= 8 and: [returnStructSize isPowerOfTwo]!
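The `<cmacro:>` above compiles down to a plain size predicate (note that `!!` is chunk-format escaping for a single `!`). A minimal C sketch of that test; the function name is illustrative, not part of the plugin:

```c
#include <stdbool.h>

/* Sketch of the size test behind the <cmacro:> above: on Win32 x86 a struct
   is returned in EAX/EDX only if its size is 1, 2, 4 or 8 bytes, i.e. at
   most 8 bytes and a power of two. (sz & (sz - 1)) == 0 is the usual
   branch-free power-of-two test. */
static bool can_return_in_registers_x86(unsigned sz)
{
    return sz <= 8 && (sz & (sz - 1)) == 0;
}
```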
Item was added:
+ ----- Method: ThreadedIA32FFIPlugin>>encodeStructReturnTypeIn: (in category 'callout support') -----
+ encodeStructReturnTypeIn: calloutState
+ "Set structReturnType to true if the struct is returned via registers."
+ <var: #calloutState type: #'CalloutState *'>
+ <inline: true>
+
+ calloutState structReturnType: (self canReturnInRegistersStructOfSize: calloutState structReturnSize)!
Item was changed:
----- Method: ThreadedIA32FFIPlugin>>ffiReturnStruct:ofType:in: (in category 'callout support') -----
ffiReturnStruct: longLongRetPtr ofType: ffiRetType in: calloutState
<var: #longLongRetPtr type: #'void *'>
<var: #calloutState type: #'CalloutState *'>
"Create a structure return value from an external function call. The value has been stored in
alloca'ed space pointed to by the calloutState or in the return value passed by pointer."
| retOop retClass oop |
<inline: true>
retClass := interpreterProxy fetchPointer: 1 ofObject: ffiRetType.
retOop := interpreterProxy instantiateClass: retClass indexableSize: 0.
self remapOop: retOop
in: [oop := interpreterProxy
instantiateClass: interpreterProxy classByteArray
indexableSize: calloutState structReturnSize].
self memcpy: (interpreterProxy firstIndexableField: oop)
+ _: ((self returnStructInRegisters: calloutState)
- _: ((self returnStructInRegisters: calloutState structReturnSize)
ifTrue: [longLongRetPtr]
ifFalse: [calloutState limit])
_: calloutState structReturnSize.
interpreterProxy storePointer: 0 ofObject: retOop withValue: oop.
^retOop!
Item was changed:
----- Method: ThreadedIA32FFIPlugin>>returnStructInRegisters: (in category 'marshalling') -----
+ returnStructInRegisters: calloutState
+ "Answer if the struct is returned in registers; here structReturnType holds a boolean."
+ <var: #calloutState type: #'CalloutState *'>
+ ^calloutState structReturnType!
- returnStructInRegisters: returnStructSize
- "Answer if a struct result of a given size is returned in memory or not."
- <cmacro: '(sz) (WIN32_X86_STRUCT_RETURN && (sz) <= 8 && !!((sz)&((sz)-1)))'>
- ^returnStructSize <= 8 and: [returnStructSize isPowerOfTwo]!
Item was added:
+ ----- Method: ThreadedPPCBEFFIPlugin>>canReturnInRegistersStructOfSize: (in category 'marshalling') -----
+ canReturnInRegistersStructOfSize: returnStructSize
+ "Answer if a struct result of a given size is returned in registers or not.
+ The ABI spec defines return in registers, but some Linux gcc versions implemented an
+ erroneous draft which does not return any struct in memory. Implement the SysV ABI."
+ ^returnStructSize <= 8!
Item was added:
+ ----- Method: ThreadedPPCBEFFIPlugin>>encodeStructReturnTypeIn: (in category 'callout support') -----
+ encodeStructReturnTypeIn: calloutState
+ "Set structReturnType to true if the struct is returned via registers."
+ <var: #calloutState type: #'CalloutState *'>
+ <inline: true>
+
+ calloutState structReturnType: (self canReturnInRegistersStructOfSize: calloutState structReturnSize)!
Item was changed:
----- Method: ThreadedPPCBEFFIPlugin>>returnStructInRegisters: (in category 'marshalling') -----
+ returnStructInRegisters: calloutState
+ "Answer if the struct is returned in registers; here structReturnType holds a boolean."
+ <var: #calloutState type: #'CalloutState *'>
+ ^calloutState structReturnType!
- returnStructInRegisters: returnStructSize
- "Answer if a struct result of a given size is returned in memory or not.
- The ABI spec defines return in registers, but some linux gcc versions implemented an
- erroneous draft which does not return any struct in memory. Implement the SysV ABI."
- ^returnStructSize <= 8!
Item was added:
+ ----- Method: ThreadedX64SysVFFIPlugin>>canReturnInRegistersStructOfSize: (in category 'marshalling') -----
+ canReturnInRegistersStructOfSize: returnStructSize
+ "Answer if a struct result of a given size is returned in registers or not."
+ ^returnStructSize <= (WordSize * 2)!
Item was added:
+ ----- Method: ThreadedX64SysVFFIPlugin>>encodeStructReturnTypeIn: (in category 'callout support') -----
+ encodeStructReturnTypeIn: calloutState
+ "Set the return type to an integer encoding the type of registers used to return the struct
+ * 2r00 for float float (XMM0 XMM1)
+ * 2r01 for int float (RAX XMM0)
+ * 2r10 for float int (XMM0 RAX)
+ * 2r11 for int int (RAX RDX)
+ * 2r100 for float (XMM0)
+ * 2r101 for int (RAX)
+ * 2r110 for return through memory (struct field not aligned or struct too big)"
+ <var: #calloutState type: #'CalloutState *'>
+ <inline: true>
+
+ | registerType |
+ registerType := (self canReturnInRegistersStructOfSize: calloutState structReturnSize)
+ ifTrue:
+ [ self
+ registerTypeForStructSpecs: (interpreterProxy firstIndexableField: calloutState ffiRetSpec)
+ OfLength: (interpreterProxy slotSizeOf: calloutState ffiRetSpec)]
+ ifFalse:
+ [ "We cannot return via register, struct is too big"
+ 2r110 ].
+ calloutState structReturnType: registerType!
Item was changed:
----- Method: ThreadedX64SysVFFIPlugin>>ffiCalloutTo:SpecOnStack:in: (in category 'callout support') -----
ffiCalloutTo: procAddr SpecOnStack: specOnStack in: calloutState
<var: #procAddr type: #'void *'>
<var: #calloutState type: #'CalloutState *'>
<var: #loadFloatRegs declareC: 'extern void loadFloatRegs(double, double, double, double, double, double, double, double)'>
"Go out, call this guy and create the return value. This *must* be inlined because of
the alloca of the outgoing stack frame in ffiCall:WithFlags:NumArgs:Args:AndTypes:"
| myThreadIndex atomicType floatRet intRet sddRet sdiRet sidRet siiRet returnStructByValue registerType sRetPtr |
<var: #floatRet type: #double>
<var: #intRet type: #sqInt>
<var: #siiRet type: #SixteenByteReturnII>
<var: #sidRet type: #SixteenByteReturnID>
<var: #sdiRet type: #SixteenByteReturnDI>
<var: #sddRet type: #SixteenByteReturnDD>
<var: #sRetPtr type: #'void *'>
<inline: true>
returnStructByValue := (calloutState ffiRetHeader bitAnd: FFIFlagStructure + FFIFlagPointer + FFIFlagAtomic) = FFIFlagStructure.
- returnStructByValue
- ifTrue:
- [(self returnStructInRegisters: calloutState structReturnSize)
- ifTrue: [registerType := self registerTypeForStructSpecs: (interpreterProxy firstIndexableField: calloutState ffiRetSpec) OfLength: (interpreterProxy slotSizeOf: calloutState ffiRetSpec)]
- ifFalse: [registerType := 2r110 "cannot pass by register"]].
myThreadIndex := interpreterProxy disownVM: (self disownFlagsFor: calloutState).
calloutState floatRegisterIndex > 0 ifTrue:
[self
load: (calloutState floatRegisters at: 0)
Flo: (calloutState floatRegisters at: 1)
a: (calloutState floatRegisters at: 2)
t: (calloutState floatRegisters at: 3)
R: (calloutState floatRegisters at: 4)
e: (calloutState floatRegisters at: 5)
g: (calloutState floatRegisters at: 6)
s: (calloutState floatRegisters at: 7)].
(self allocaLiesSoSetSpBeforeCall or: [self mustAlignStack]) ifTrue:
[self setsp: calloutState argVector].
atomicType := self atomicTypeOf: calloutState ffiRetHeader.
(atomicType >> 1) = (FFITypeSingleFloat >> 1) ifTrue:
[atomicType = FFITypeSingleFloat
ifTrue:
[floatRet := self
dispatchFunctionPointer: (self cCoerceSimple: procAddr to: 'float (*)(sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t)')
with: (calloutState integerRegisters at: 0)
with: (calloutState integerRegisters at: 1)
with: (calloutState integerRegisters at: 2)
with: (calloutState integerRegisters at: 3)
with: (calloutState integerRegisters at: 4)
with: (calloutState integerRegisters at: 5)]
ifFalse: "atomicType = FFITypeDoubleFloat"
[floatRet := self
dispatchFunctionPointer: (self cCoerceSimple: procAddr to: 'double (*)(sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t)')
with: (calloutState integerRegisters at: 0)
with: (calloutState integerRegisters at: 1)
with: (calloutState integerRegisters at: 2)
with: (calloutState integerRegisters at: 3)
with: (calloutState integerRegisters at: 4)
with: (calloutState integerRegisters at: 5)].
interpreterProxy ownVM: myThreadIndex.
^interpreterProxy floatObjectOf: floatRet].
returnStructByValue ifFalse:
[intRet := self
dispatchFunctionPointer: (self cCoerceSimple: procAddr to: 'sqInt (*)(sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t)')
with: (calloutState integerRegisters at: 0)
with: (calloutState integerRegisters at: 1)
with: (calloutState integerRegisters at: 2)
with: (calloutState integerRegisters at: 3)
with: (calloutState integerRegisters at: 4)
with: (calloutState integerRegisters at: 5).
interpreterProxy ownVM: myThreadIndex.
(calloutState ffiRetHeader anyMask: FFIFlagPointer) ifTrue:
[^self ffiReturnPointer: intRet ofType: (self ffiReturnType: specOnStack) in: calloutState].
^self ffiCreateIntegralResultOop: intRet ofAtomicType: atomicType in: calloutState].
+ registerType := calloutState structReturnType.
registerType
caseOf:
{[2r00] ->
[sddRet := self
dispatchFunctionPointer: (self cCoerceSimple: procAddr to: 'SixteenByteReturnDD (*)(sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t)')
with: (calloutState integerRegisters at: 0)
with: (calloutState integerRegisters at: 1)
with: (calloutState integerRegisters at: 2)
with: (calloutState integerRegisters at: 3)
with: (calloutState integerRegisters at: 4)
with: (calloutState integerRegisters at: 5).
sRetPtr := (self addressOf: sddRet) asVoidPointer].
[2r01] ->
[sidRet := self
dispatchFunctionPointer: (self cCoerceSimple: procAddr to: 'SixteenByteReturnID (*)(sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t)')
with: (calloutState integerRegisters at: 0)
with: (calloutState integerRegisters at: 1)
with: (calloutState integerRegisters at: 2)
with: (calloutState integerRegisters at: 3)
with: (calloutState integerRegisters at: 4)
with: (calloutState integerRegisters at: 5).
sRetPtr := (self addressOf: sidRet) asVoidPointer].
[2r10] ->
[sdiRet := self
dispatchFunctionPointer: (self cCoerceSimple: procAddr to: 'SixteenByteReturnDI (*)(sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t)')
with: (calloutState integerRegisters at: 0)
with: (calloutState integerRegisters at: 1)
with: (calloutState integerRegisters at: 2)
with: (calloutState integerRegisters at: 3)
with: (calloutState integerRegisters at: 4)
with: (calloutState integerRegisters at: 5).
sRetPtr := (self addressOf: sdiRet) asVoidPointer].
[2r11] ->
[siiRet := self
dispatchFunctionPointer: (self cCoerceSimple: procAddr to: 'SixteenByteReturnII (*)(sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t)')
with: (calloutState integerRegisters at: 0)
with: (calloutState integerRegisters at: 1)
with: (calloutState integerRegisters at: 2)
with: (calloutState integerRegisters at: 3)
with: (calloutState integerRegisters at: 4)
with: (calloutState integerRegisters at: 5).
sRetPtr := (self addressOf: siiRet) asVoidPointer].
[2r100] ->
[floatRet := self
dispatchFunctionPointer: (self cCoerceSimple: procAddr to: 'double (*)(sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t)')
with: (calloutState integerRegisters at: 0)
with: (calloutState integerRegisters at: 1)
with: (calloutState integerRegisters at: 2)
with: (calloutState integerRegisters at: 3)
with: (calloutState integerRegisters at: 4)
with: (calloutState integerRegisters at: 5).
sRetPtr := (self addressOf: floatRet) asVoidPointer].
[2r101] ->
[intRet := self
dispatchFunctionPointer: (self cCoerceSimple: procAddr to: 'sqInt (*)(sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t)')
with: (calloutState integerRegisters at: 0)
with: (calloutState integerRegisters at: 1)
with: (calloutState integerRegisters at: 2)
with: (calloutState integerRegisters at: 3)
with: (calloutState integerRegisters at: 4)
with: (calloutState integerRegisters at: 5).
sRetPtr := (self addressOf: intRet) asVoidPointer].
[2r110] ->
["return a pointer to alloca'd memory"
intRet := self
dispatchFunctionPointer: (self cCoerceSimple: procAddr to: 'sqInt (*)(sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t)')
with: (calloutState integerRegisters at: 0)
with: (calloutState integerRegisters at: 1)
with: (calloutState integerRegisters at: 2)
with: (calloutState integerRegisters at: 3)
with: (calloutState integerRegisters at: 4)
with: (calloutState integerRegisters at: 5).
sRetPtr := intRet asVoidPointer "address of struct is returned in RAX, which also is calloutState limit"]}
otherwise:
[interpreterProxy ownVM: myThreadIndex.
self ffiFail: FFIErrorWrongType. ^nil].
interpreterProxy ownVM: myThreadIndex.
^self ffiReturnStruct: sRetPtr ofType: (self ffiReturnType: specOnStack) in: calloutState!
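The four two-eightbyte cases above dispatch through C struct types whose layout forces the SysV register pairs. A sketch of what the plugin's `SixteenByteReturnDD/ID/DI/II` typedefs presumably look like; the field names are assumptions, as the real typedefs live in the plugin's C support code:

```c
/* On x86-64 SysV each 16-byte struct below comes back in a register pair:
   DD in XMM0:XMM1, ID in RAX:XMM0, DI in XMM0:RAX, II in RAX:RDX.
   The names mirror the plugin's SixteenByteReturn* typedefs. */
typedef struct { double d0, d1; } SixteenByteDD;
typedef struct { long i0; double d0; } SixteenByteID;
typedef struct { double d0; long i0; } SixteenByteDI;
typedef struct { long i0, i1; } SixteenByteII;

/* Returned by value in RAX/RDX on SysV x86-64: no memory round trip. */
static SixteenByteII make_ii(long a, long b)
{
    SixteenByteII r = { a, b };
    return r;
}
```

Casting `procAddr` to a function pointer returning the matching struct type is what lets the compiler read the right register pair after the call.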
Item was changed:
----- Method: ThreadedX64SysVFFIPlugin>>registerTypeForStructSpecs:OfLength: (in category 'marshalling') -----
registerTypeForStructSpecs: specs OfLength: specSize
"Answer with a number characterizing the register type for passing a struct of size <= 16 bytes.
The bit at offset i of registerType is set to 1 if the eightbyte at offset i needs an int register (RAX ...)
The bit at offset 2 indicates if there is a single eightbyte (struct size <= 8)
* 2r00 for float float (XMM0 XMM1)
* 2r01 for int float (RAX XMM0)
* 2r10 for float int (XMM0 RAX)
* 2r11 for int int (RAX RDX)
* 2r100 for float (XMM0)
* 2r101 for int (RAX)
* 2r110 INVALID (not aligned)
Beware, the bits must be read from right to left for decoding register type.
Note: this method reconstructs the struct layout according to X64 alignment rules.
Therefore, it will not work for packed structs or other exotic alignments."
<var: #specs type: #'unsigned int*'>
<var: #subIndex type: #'unsigned int'>
+ <inline: false>
| eightByteOffset byteOffset index registerType spec fieldSize alignment atomic subIndex isInt |
index := 0.
(self checkAlignmentOfStructSpec: specs OfLength: specSize StartingAt: index)
ifFalse: [^2r110].
eightByteOffset := 0.
byteOffset := 0.
registerType := ((specs at: index) bitAnd: FFIStructSizeMask) <= 8 ifTrue: [2r100] ifFalse: [0].
[(index := index + 1) < specSize]
whileTrue:
[spec := specs at: index.
isInt := false.
spec = FFIFlagStructure "this marks end of structure and should be ignored"
ifFalse:
[(spec anyMask: FFIFlagPointer)
ifTrue:
[fieldSize := BytesPerWord.
alignment := fieldSize.
isInt := true]
ifFalse:
[(spec bitAnd: FFIFlagStructure + FFIFlagAtomic)
caseOf:
{[FFIFlagStructure] ->
[fieldSize := 0.
subIndex := index.
alignment := self alignmentOfStructSpec: specs OfLength: specSize StartingAt: (self addressOf: subIndex)].
[FFIFlagAtomic] ->
[fieldSize := spec bitAnd: FFIStructSizeMask.
alignment := fieldSize.
atomic := self atomicTypeOf: spec.
isInt := (atomic >> 1) ~= (FFITypeSingleFloat >> 1)]}
otherwise: ["invalid spec" ^-1]].
(byteOffset bitAnd: alignment - 1) = 0
ifFalse:
["this field requires alignment"
byteOffset := (byteOffset bitClear: alignment - 1) + alignment].
byteOffset + fieldSize > 8
ifTrue:
["Not enough room on current eightbyte for this field, skip to next one"
eightByteOffset := eightByteOffset + 1.
byteOffset := 0].
isInt
ifTrue:
["If this eightbyte contains an int field, then we must use an int register"
registerType := registerType bitOr: 1 << eightByteOffset].
"where to put the next field?"
byteOffset := byteOffset + fieldSize.
byteOffset >= 8
ifTrue:
["This eightbyte is full, skip to next one"
eightByteOffset := eightByteOffset + 1.
byteOffset := 0]]].
^registerType!
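The eightbyte classification above can be sketched in C for the simplified case of flat structs of naturally aligned atomic fields (no nested structs or pointers, which the Smalltalk version does handle); `Field`, the function name, and its interface are illustrative assumptions:

```c
#include <stdbool.h>
#include <stddef.h>

/* Simplified sketch of registerTypeForStructSpecs:OfLength:. Bit i of the
   result is 1 if eightbyte i needs an int register; bit 2 is set when the
   whole struct fits in a single eightbyte. 0x6 (2r110) means via memory. */
typedef struct { unsigned size; bool isInt; } Field;

static unsigned register_type_for(const Field *fields, size_t n, unsigned structSize)
{
    unsigned eightByte = 0, byteOffset = 0;
    unsigned registerType = structSize <= 8 ? 0x4 : 0;
    if (structSize > 16)
        return 0x6;                                  /* too big for registers */
    for (size_t i = 0; i < n; i++) {
        unsigned align = fields[i].size;             /* natural alignment assumed */
        if (byteOffset & (align - 1))                /* align within the eightbyte */
            byteOffset = (byteOffset & ~(align - 1)) + align;
        if (byteOffset + fields[i].size > 8) {       /* no room: spill to next one */
            eightByte++;
            byteOffset = 0;
        }
        if (fields[i].isInt)                         /* any int field taints the */
            registerType |= 1u << eightByte;         /* eightbyte as INTEGER class */
        byteOffset += fields[i].size;
        if (byteOffset >= 8) {                       /* eightbyte full */
            eightByte++;
            byteOffset = 0;
        }
    }
    return registerType;
}
```

For example `{double, double}` classifies as 2r00 (XMM0:XMM1) while `{long, double}` classifies as 2r01 (RAX:XMM0), matching the encoding table in the method comment.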
Item was changed:
----- Method: ThreadedX64SysVFFIPlugin>>returnStructInRegisters: (in category 'marshalling') -----
+ returnStructInRegisters: calloutState
+ "Use the register type encoding stored in structReturnType
+ 2r110 means return in registers is impossible; pass through memory.
+ Anything smaller encodes the registers used, and thus answers true."
+ <var: #calloutState type: #'CalloutState *'>
+ ^calloutState structReturnType < 2r110!
- returnStructInRegisters: returnStructSize
- "Answer if a struct result of a given size is returned in memory or not."
- ^returnStructSize <= (WordSize * 2)!
Item was added:
+ ----- Method: ThreadedX64Win64FFIPlugin>>canReturnInRegistersStructOfSize: (in category 'marshalling') -----
+ canReturnInRegistersStructOfSize: returnStructSize
+ "Answer if a struct result of a given size is returned in registers or not."
+ ^returnStructSize <= WordSize and: ["returnStructSize isPowerOfTwo" (returnStructSize bitAnd: returnStructSize-1) = 0]!
Item was added:
+ ----- Method: ThreadedX64Win64FFIPlugin>>encodeStructReturnTypeIn: (in category 'callout support') -----
+ encodeStructReturnTypeIn: calloutState
+ "Set structReturnType to true if the struct is returned via registers."
+ <var: #calloutState type: #'CalloutState *'>
+ <inline: true>
+
+ calloutState structReturnType: (self canReturnInRegistersStructOfSize: calloutState structReturnSize)!
Item was changed:
----- Method: ThreadedX64Win64FFIPlugin>>ffiReturnStruct:ofType:in: (in category 'callout support') -----
ffiReturnStruct: intRetPtr ofType: ffiRetType in: calloutState
<var: #intRetPtr type: #'void *'>
<var: #calloutState type: #'CalloutState *'>
"Create a structure return value from an external function call. The value has been stored in
alloca'ed space pointed to by the calloutState or in the return value passed by pointer."
| retOop retClass oop |
<inline: true>
retClass := interpreterProxy fetchPointer: 1 ofObject: ffiRetType.
retOop := interpreterProxy instantiateClass: retClass indexableSize: 0.
self remapOop: retOop
in: [oop := interpreterProxy
instantiateClass: interpreterProxy classByteArray
indexableSize: calloutState structReturnSize].
self memcpy: (interpreterProxy firstIndexableField: oop)
+ _: ((self returnStructInRegisters: calloutState)
- _: ((self returnStructInRegisters: calloutState structReturnSize)
ifTrue: [intRetPtr]
ifFalse: [calloutState limit])
_: calloutState structReturnSize.
interpreterProxy storePointer: 0 ofObject: retOop withValue: oop.
^retOop!
Item was changed:
----- Method: ThreadedX64Win64FFIPlugin>>returnStructInRegisters: (in category 'marshalling') -----
+ returnStructInRegisters: calloutState
+ "Answer if the struct is returned in registers; here structReturnType holds a boolean."
+ <var: #calloutState type: #'CalloutState *'>
+ ^calloutState structReturnType!
- returnStructInRegisters: returnStructSize
- "Answer if a struct result of a given size is returned in memory or not."
- ^returnStructSize <= WordSize and: ["returnStructSize isPowerOfTwo" (returnStructSize bitAnd: returnStructSize-1) = 0]!
Eliot Miranda uploaded a new version of VMMaker to project VM Maker:
http://source.squeak.org/VMMaker/VMMaker.oscog-eem.2689.mcz
==================== Summary ====================
Name: VMMaker.oscog-eem.2689
Author: eem
Time: 30 January 2020, 1:33:22.601531 pm
UUID: f47416a6-8fa0-4af1-b9d2-5f46f6e2ea73
Ancestors: VMMaker.oscog-eem.2688
Cogit: DUAL_MAPPED_CODE_ZONE, built on the realization that the original code zone can be mapped shared to the writable zone, reverting to the original layout with the code zone housed in the initial alloc of memory at the base. This requires a new executable-permissions applier that will also do the dual mapping, sqMakeMemoryExecutableFrom:To:CodeToDataDelta:, whose C code will be written shortly.
=============== Diff against VMMaker.oscog-eem.2688 ===============
Item was changed:
StackInterpreterPrimitives subclass: #CoInterpreter
+ instanceVariableNames: 'cogit cogMethodZone gcMode cogCodeSize desiredCogCodeSize heapBase lastCoggableInterpretedBlockMethod deferSmash deferredSmash primTraceLog primTraceLogIndex traceLog traceLogIndex traceSources cogCompiledCodeCompactionCalledFor statCodeCompactionCount statCodeCompactionUsecs lastUncoggableInterpretedBlockMethod flagInterpretedMethods maxLiteralCountForCompile minBackwardJumpCountForCompile CFramePointer CStackPointer'
- instanceVariableNames: 'cogit cogMethodZone gcMode cogCodeSize desiredCogCodeSize heapBase lastCoggableInterpretedBlockMethod deferSmash deferredSmash primTraceLog primTraceLogIndex traceLog traceLogIndex traceSources cogCompiledCodeCompactionCalledFor statCodeCompactionCount statCodeCompactionUsecs lastUncoggableInterpretedBlockMethod flagInterpretedMethods maxLiteralCountForCompile minBackwardJumpCountForCompile CFramePointer CStackPointer effectiveCogCodeSize'
classVariableNames: 'CSCallbackEnter CSCallbackLeave CSCheckEvents CSEnterCriticalSection CSExitCriticalSection CSOwnVM CSResume CSSignal CSSuspend CSSwitchIfNeccessary CSThreadBind CSThreadSchedulingLoop CSWait CSYield HasBeenReturnedFromMCPC HasBeenReturnedFromMCPCOop MFMethodFlagFrameIsMarkedFlag MinBackwardJumpCountForCompile PrimNumberHashMultiply PrimTraceLogSize RumpCStackSize TraceBlockActivation TraceBlockCreation TraceBufferSize TraceCodeCompaction TraceContextSwitch TraceDisownVM TraceFullGC TraceIncrementalGC TraceIsFromInterpreter TraceIsFromMachineCode TraceOwnVM TracePreemptDisowningThread TracePrimitiveFailure TracePrimitiveRetry TraceSources TraceStackOverflow TraceThreadSwitch TraceVMCallback TraceVMCallbackReturn'
poolDictionaries: 'CogMethodConstants VMStackFrameOffsets'
category: 'VMMaker-JIT'!
!CoInterpreter commentStamp: 'eem 10/10/2019 09:08' prior: 0!
I am a variant of the StackInterpreter that can co-exist with the Cog JIT. I interpret unjitted methods, either because they have been found for the first time or because they are judged to be too big to JIT. See CogMethod class's comment for method interoperability.
cogCodeSize
- the current size of the machine code zone
cogCompiledCodeCompactionCalledFor
- a variable set when the machine code zone runs out of space, causing a machine code zone compaction at the next available opportunity
cogMethodZone
- the manager for the machine code zone (instance of CogMethodZone)
cogit
- the JIT (co-jit) (instance of SimpleStackBasedCogit, StackToRegisterMappingCogit, etc)
deferSmash
- a flag causing deferral of smashes of the stackLimit around the call of functionSymbol (for assert checks)
deferredSmash
- a flag noting deferral of smashes of the stackLimit around the call of functionSymbol (for assert checks)
desiredCogCodeSize
- the desired size of the machine code zone, set at startup or via primitiveVMParameter to be written at snapshot time
flagInterpretedMethods
- true if methods that are interpreted should have their flag bit set (used to identify methods that are interpreted because they're unjittable for some reason)
gcMode
- the variable holding the gcMode, used to inform the cogit of how to scan the machine code zone for oops on GC
heapBase
- the address in memory of the base of the objectMemory's heap, which is immediately above the machine code zone
lastCoggableInterpretedBlockMethod
- a variable used to invoke the cogit for a block method being invoked repeatedly in the interpreter
lastUncoggableInterpretedBlockMethod
- a variable used to avoid invoking the cogit for an unjittable method encountered on block evaluation
maxLiteralCountForCompile
- the variable controlling which methods to jit. methods with a literal count above this value will not be jitted (on the grounds that large methods are typically used for initialization, and take up a lot of space in the code zone)
minBackwardJumpCountForCompile
- the variable controlling when to attempt to jit a method being interpreted. If as many backward jumps as this occur, the current method will be jitted
primTraceLog
- a small array implementing a circular buffer logging the last N primitive invocations, GCs, code compactions, etc used for crash reporting
primTraceLogIndex
- the index into primTraceLog of the next entry
reenterInterpreter
- the jmpbuf used to jmp back into the interpreter when transitioning from machine code to the interpreter
statCodeCompactionCount
- the count of machine code zone compactions
statCodeCompactionUsecs
- the total microseconds spent in machine code zone compactions
traceLog
- a log of various events, used in debugging
traceLogIndex
- the index into traceLog of the next entry
traceSources
- the names associated with the codes of events in traceLog
CFramePointer
- if in use, the value of the C frame pointer on most recent entry to the interpreter after start-up or a callback. Used to establish the C stack when calling the run-time from generated machine code.
CStackPointer
- the value of the C stack pointer on most recent entry to the interpreter after start-up or a callback. Used to establish the C stack when calling the run-time from generated machine code.!
Item was changed:
----- Method: CoInterpreter>>effectiveCogCodeSize (in category 'accessing') -----
effectiveCogCodeSize
"With the single-mapped regime the effectiveCogCodeSize is cogCodeSize.
+ With the dual-mapped regime, in production effectiveCogCodeSize is
+ cogCodeSize and in simulation it is two times cogCodeSize (meaning we
+ simulate a dual zone by having two copies of the zone at the start of memory)."
+ ^cogCodeSize!
- With the dual-mapped regime then in production effectiveCogCodeSize is zero
- (meaning that the code zone is outside of memory), and in simulation it is
- two times cogCodeSize (meaning we simulate a dual zone by having two
- copies of the zone at the start of memory)."
- ^effectiveCogCodeSize!
Item was added:
+ ----- Method: CoInterpreter>>initializeCodeGenerator (in category 'initialization') -----
+ initializeCodeGenerator
+ cogit
+ initializeCodeZoneFrom: (self cCode: [objectMemory memory] inSmalltalk: [Cogit guardPageSize])
+ upTo: (self cCode: [objectMemory memory] inSmalltalk: [Cogit guardPageSize]) + cogCodeSize!
Item was changed:
----- Method: CoInterpreter>>readImageFromFile:HeapSize:StartingAt: (in category 'image save/restore') -----
readImageFromFile: f HeapSize: desiredHeapSize StartingAt: imageOffset
"Read an image from the given file stream, allocating an amount of memory to its object heap.
V3: desiredHeapSize is the total size of the heap. Fail if the image has an unknown format or
requires more than the specified amount of memory.
Spur: desiredHeapSize is ignored; this routine will attempt to provide at least extraVMMemory's
+ amount of free space after the image is loaded, taking any free space in the image into account.
- ammount of free space after the image is loaded, taking any free space in teh image into account.
extraVMMemory is stored in the image header and is accessible as vmParameterAt: 23. If
extraVMMemory is 0, the value defaults to the default grow headroom. Fail if the image has an
unknown format or if sufficient memory cannot be allocated.
Details: This method detects when the image was stored on a machine with the opposite byte
ordering from this machine and swaps the bytes automatically. Furthermore, it allows the header
information to start 512 bytes into the file, since some file transfer programs for the Macintosh
apparently prepend a Mac-specific header of this size. Note that this same 512 bytes of prefix
area could also be used to store an exec command on Unix systems, allowing one to launch
Smalltalk by invoking the image name as a command."
| swapBytes headerStart headerSize dataSize oldBaseAddr
minimumMemory heapSize bytesRead bytesToShift firstSegSize
+ hdrNumStackPages hdrEdenBytes hdrCogCodeSize headerFlags hdrMaxExtSemTabSize allocationReserve |
- hdrNumStackPages hdrEdenBytes hdrCogCodeSize headerFlags hdrMaxExtSemTabSize
- allocationReserve executableZone writableCodeZone |
<var: #f type: #sqImageFile>
<var: #heapSize type: #usqInt>
<var: #dataSize type: #'size_t'>
<var: #minimumMemory type: #usqInt>
<var: #desiredHeapSize type: #usqInt>
<var: #allocationReserve type: #usqInt>
<var: #headerStart type: #squeakFileOffsetType>
<var: #imageOffset type: #squeakFileOffsetType>
- <var: #executableZone type: #'char *'>
- <var: #writableCodeZone type: #'char *'>
metaclassNumSlots := 6. "guess Metaclass instSize"
classNameIndex := 6. "guess (Class instVarIndexFor: 'name' ifAbsent: []) - 1"
swapBytes := self checkImageVersionFrom: f startingAt: imageOffset.
headerStart := (self sqImageFilePosition: f) - 4. "record header start position"
headerSize := self getWord32FromFile: f swap: swapBytes.
dataSize := self getLongFromFile: f swap: swapBytes.
oldBaseAddr := self getLongFromFile: f swap: swapBytes.
objectMemory specialObjectsOop: (self getLongFromFile: f swap: swapBytes).
objectMemory lastHash: (self getLongFromFile: f swap: swapBytes). "N.B. not used."
savedWindowSize := self getLongFromFile: f swap: swapBytes.
headerFlags := self getLongFromFile: f swap: swapBytes.
self setImageHeaderFlagsFrom: headerFlags.
extraVMMemory := self getWord32FromFile: f swap: swapBytes. "N.B. ignored in V3."
hdrNumStackPages := self getShortFromFile: f swap: swapBytes.
"4 stack pages is small. Should be able to run with as few as
three. 4 should be comfortable but slow. 8 is a reasonable
default. Can be changed via vmParameterAt: 43 put: n.
Can be set as a preference (Info.plist, VM.ini, command line etc).
If desiredNumStackPages is already non-zero then it has been
set as a preference. Ignore (but preserve) the header's default."
numStackPages := desiredNumStackPages ~= 0
ifTrue: [desiredNumStackPages]
ifFalse: [hdrNumStackPages = 0
ifTrue: [self defaultNumStackPages]
ifFalse: [hdrNumStackPages]].
desiredNumStackPages := hdrNumStackPages.
"This slot holds the size of the native method zone in 1k units. (pad to word boundary)."
hdrCogCodeSize := (self getShortFromFile: f swap: swapBytes) * 1024.
cogCodeSize := desiredCogCodeSize ~= 0
ifTrue: [desiredCogCodeSize]
ifFalse:
[hdrCogCodeSize = 0
ifTrue: [cogit defaultCogCodeSize]
ifFalse: [hdrCogCodeSize]].
cogCodeSize > cogit maxCogCodeSize ifTrue:
[cogCodeSize := cogit maxCogCodeSize].
hdrEdenBytes := self getWord32FromFile: f swap: swapBytes.
objectMemory edenBytes: (desiredEdenBytes ~= 0
ifTrue: [desiredEdenBytes]
ifFalse:
[hdrEdenBytes = 0
ifTrue: [objectMemory defaultEdenBytes]
ifFalse: [hdrEdenBytes]]).
desiredEdenBytes := hdrEdenBytes.
hdrMaxExtSemTabSize := self getShortFromFile: f swap: swapBytes.
hdrMaxExtSemTabSize ~= 0 ifTrue:
[self setMaxExtSemSizeTo: hdrMaxExtSemTabSize].
"pad to word boundary. This slot can be used for anything else that will fit in 16 bits.
Preserve it to be polite to other VMs."
the2ndUnknownShort := self getShortFromFile: f swap: swapBytes.
firstSegSize := self getLongFromFile: f swap: swapBytes.
objectMemory firstSegmentSize: firstSegSize.
- "To cope with modern OSs that disallow executing code in writable memory we dual-map
- the code zone, one mapping with read/write permissions and the other with read/execute
- permissions. In such a configuration the code zone has already been alloated and is not
- included in (what is no longer) the initial alloc."
- self ioAllocateDualMappedCodeZone: (self addressOf: executableZone) OfSize: cogCodeSize WritableZone: (self addressOf: writableCodeZone).
- effectiveCogCodeSize := executableZone > 0 ifTrue: [0] ifFalse: [cogCodeSize].
"compare memory requirements with availability"
allocationReserve := self interpreterAllocationReserveBytes.
+ minimumMemory := cogCodeSize "no need to include the stackZone; this is alloca'ed"
- minimumMemory := "no need to include the stackZone; this is alloca'ed"
- effectiveCogCodeSize
+ dataSize
+ objectMemory newSpaceBytes
+ allocationReserve.
objectMemory hasSpurMemoryManagerAPI
ifTrue:
[| freeOldSpaceInImage headroom |
freeOldSpaceInImage := self getLongFromFile: f swap: swapBytes.
headroom := objectMemory
initialHeadroom: extraVMMemory
givenFreeOldSpaceInImage: freeOldSpaceInImage.
heapSize := objectMemory roundUpHeapSize:
+ cogCodeSize "no need to include the stackZone; this is alloca'ed"
- effectiveCogCodeSize "no need to include the stackZone; this is alloca'ed"
+ dataSize
+ headroom
+ objectMemory newSpaceBytes
+ (headroom > allocationReserve
ifTrue: [0]
ifFalse: [allocationReserve])]
ifFalse:
+ [heapSize := cogCodeSize "no need to include the stackZone; this is alloca'ed"
- [heapSize := effectiveCogCodeSize "no need to include the stackZone; this is alloca'ed"
+ desiredHeapSize
+ objectMemory newSpaceBytes
+ (desiredHeapSize - dataSize > allocationReserve
ifTrue: [0]
ifFalse: [allocationReserve]).
heapSize < minimumMemory ifTrue:
[self insufficientMemorySpecifiedError]].
"allocate a contiguous block of memory for the Squeak heap and ancillary data structures"
objectMemory memory: (self
allocateMemory: heapSize
minimum: minimumMemory
imageFile: f
headerSize: headerSize) asUnsignedInteger.
objectMemory memory ifNil:
[self insufficientMemoryAvailableError].
heapBase := objectMemory
+ setHeapBase: objectMemory memory + cogCodeSize
- setHeapBase: objectMemory memory + effectiveCogCodeSize
memoryLimit: objectMemory memory + heapSize
+ endOfMemory: objectMemory memory + cogCodeSize + dataSize.
- endOfMemory: objectMemory memory + effectiveCogCodeSize + dataSize.
"position file after the header"
self sqImageFile: f Seek: headerStart + headerSize.
"read in the image in bulk, then swap the bytes if necessary"
bytesRead := objectMemory readHeapFromImageFile: f dataBytes: dataSize.
bytesRead ~= dataSize ifTrue: [self unableToReadImageError].
self ensureImageFormatIsUpToDate: swapBytes.
"compute difference between old and new memory base addresses"
bytesToShift := objectMemory memoryBaseForImageRead - oldBaseAddr.
self initializeInterpreter: bytesToShift. "adjusts all oops to new location"
+ self initializeCodeGenerator.
- self initializeCodeGenerator: writableCodeZone executableZone: objectMemory memory.
^dataSize!
Item was changed:
CoInterpreterMT subclass: #CogVMSimulator
+ instanceVariableNames: 'parent enableCog byteCount lastPollCount lastExtPC sendCount lookupCount printSends traceOn myBitBlt displayForm fakeForm imageName pluginList mappedPluginEntries quitBlock transcript displayView eventTransformer printFrameAtEachStep printBytecodeAtEachStep systemAttributes uniqueIndices uniqueIndex breakCount atEachStepBlock startMicroseconds lastYieldMicroseconds externalSemaphoreSignalRequests externalSemaphoreSignalResponses extSemTabSize debugStackDepthDictionary performFilters eventQueue effectiveCogCodeSize expectedSends expecting'
- instanceVariableNames: 'parent enableCog byteCount lastPollCount lastExtPC sendCount lookupCount printSends traceOn myBitBlt displayForm fakeForm imageName pluginList mappedPluginEntries quitBlock transcript displayView eventTransformer printFrameAtEachStep printBytecodeAtEachStep systemAttributes uniqueIndices uniqueIndex breakCount atEachStepBlock startMicroseconds lastYieldMicroseconds externalSemaphoreSignalRequests externalSemaphoreSignalResponses extSemTabSize debugStackDepthDictionary performFilters eventQueue expectedSends expecting'
classVariableNames: 'ByteCountsPerMicrosecond ExpectedSends NLRFailures NLRSuccesses'
poolDictionaries: ''
category: 'VMMaker-JITSimulation'!
!CogVMSimulator commentStamp: 'eem 9/3/2013 11:16' prior: 0!
This class defines basic memory access and primitive simulation so that the CoInterpreter can run simulated in the Squeak environment. It also defines a number of handy object viewing methods to facilitate pawing around in the object memory. Remember that you can test the Cogit using its class-side in-image compilation facilities.
To see the thing actually run, you could (after backing up this image and changes), execute
(CogVMSimulator new openOn: Smalltalk imageName) test
and be patient both to wait for things to happen, and to accept various things that may go wrong depending on how large or unusual your image may be. We usually do this with a small and simple benchmark image.
Here's an example to launch the simulator in a window. The bottom-right window has a menu packed with useful stuff:
(CogVMSimulator newWithOptions: #(Cogit StackToRegisterMappingCogit))
desiredNumStackPages: 8;
openOn: '/Users/eliot/Cog/startreader.image';
openAsMorph;
run
Here's a hairier example that I (Eliot) actually use in daily development with some of the breakpoint facilities commented out.
| cos proc opts |
CoInterpreter initializeWithOptions: (opts := Dictionary newFromPairs: #(Cogit StackToRegisterMappingCogit)).
CogVMSimulator chooseAndInitCogitClassWithOpts: opts.
cos := CogVMSimulator new.
"cos initializeThreadSupport." "to test the multi-threaded VM"
cos desiredNumStackPages: 8. "to set the size of the stack zone"
"cos desiredCogCodeSize: 8 * 1024 * 1024." "to set the size of the Cogit's code zone"
cos openOn: '/Users/eliot/Squeak/Squeak4.4/trunk44.image'. "choose your favourite image"
"cos setBreakSelector: 'r:degrees:'." "set a breakpoint at a specific selector"
proc := cos cogit processor.
"cos cogit sendTrace: 7." "turn on tracing"
"set a complex breakpoint at a specific point in machine code"
"cos cogit singleStep: true; breakPC: 16r56af; breakBlock: [:cg| cos framePointer > 16r101F3C and: [(cos longAt: cos framePointer - 4) = 16r2479A and: [(cos longAt: 16r101F30) = (cos longAt: 16r101F3C) or: [(cos longAt: 16r101F2C) = (cos longAt: 16r101F3C)]]]]; sendTrace: 1".
"[cos cogit compilationTrace: -1] on: MessageNotUnderstood do: [:ex|]." "turn on compilation tracing in the StackToRegisterMappingCogit"
"cos cogit setBreakMethod: 16rB38880."
cos
openAsMorph;
"toggleTranscript;" "toggleTranscript will send output to the Transcript instead of the morph's rather small window"
halt;
run!
Item was added:
+ ----- Method: CogVMSimulator>>effectiveCogCodeSize (in category 'accessing') -----
+ effectiveCogCodeSize
+ "With the single-mapped regime the effectiveCogCodeSize is cogCodeSize.
+ With the dual-mapped regime, in production effectiveCogCodeSize is
+ cogCodeSize, and in simulation it is two times cogCodeSize (meaning we
+ simulate a dual zone by having two copies of the zone at the start of memory)."
+ ^effectiveCogCodeSize!
Item was changed:
----- Method: CogVMSimulator>>openOn:extraMemory: (in category 'initialization') -----
openOn: fileName extraMemory: extraBytes
"CogVMSimulator new openOn: 'clone.im' extraMemory: 100000"
| f version headerSize dataSize count oldBaseAddr bytesToShift swapBytes
headerFlags firstSegSize heapSize
+ hdrNumStackPages hdrEdenBytes hdrMaxExtSemTabSize hdrCogCodeSize
+ stackZoneSize methodCacheSize primTraceLogSize allocationReserve |
- hdrNumStackPages hdrEdenBytes hdrMaxExtSemTabSize
- hdrCogCodeSize stackZoneSize methodCacheSize primTraceLogSize
- allocationReserve executableZone writableCodeZone |
"open image file and read the header"
(f := self openImageFileNamed: fileName) ifNil: [^self].
"Set the image name and the first argument; there are
no arguments during simulation unless set explicitly."
systemAttributes at: 1 put: fileName.
["begin ensure block..."
imageName := f fullName.
f binary.
version := self getWord32FromFile: f swap: false. "current version: 16r1968 (=6504) vive la revolucion!!"
(self readableFormat: version)
ifTrue: [swapBytes := false]
ifFalse: [(version := version byteSwap32) = self imageFormatVersion
ifTrue: [swapBytes := true]
ifFalse: [self error: 'incompatible image format']].
headerSize := self getWord32FromFile: f swap: swapBytes.
dataSize := self getLongFromFile: f swap: swapBytes. "length of heap in file"
oldBaseAddr := self getLongFromFile: f swap: swapBytes. "object memory base address of image"
objectMemory specialObjectsOop: (self getLongFromFile: f swap: swapBytes).
objectMemory lastHash: (self getLongFromFile: f swap: swapBytes). "Should be loaded from, and saved to the image header"
savedWindowSize := self getLongFromFile: f swap: swapBytes.
headerFlags := self getLongFromFile: f swap: swapBytes.
self setImageHeaderFlagsFrom: headerFlags.
extraVMMemory := self getWord32FromFile: f swap: swapBytes.
hdrNumStackPages := self getShortFromFile: f swap: swapBytes.
"4 stack pages is small. Should be able to run with as few as
three. 4 should be comfortable but slow. 8 is a reasonable
default. Can be changed via vmParameterAt: 43 put: n"
numStackPages := desiredNumStackPages ~= 0
ifTrue: [desiredNumStackPages]
ifFalse: [hdrNumStackPages = 0
ifTrue: [self defaultNumStackPages]
ifFalse: [hdrNumStackPages]].
desiredNumStackPages := hdrNumStackPages.
stackZoneSize := self computeStackZoneSize.
"This slot holds the size of the native method zone in 1k units. (pad to word boundary)."
hdrCogCodeSize := (self getShortFromFile: f swap: swapBytes) * 1024.
cogCodeSize := desiredCogCodeSize ~= 0
ifTrue: [desiredCogCodeSize]
ifFalse:
[hdrCogCodeSize = 0
ifTrue: [cogit defaultCogCodeSize]
ifFalse: [hdrCogCodeSize]].
desiredCogCodeSize := hdrCogCodeSize.
self assert: f position = (objectMemory wordSize = 4 ifTrue: [40] ifFalse: [64]).
hdrEdenBytes := self getWord32FromFile: f swap: swapBytes.
objectMemory edenBytes: (desiredEdenBytes ~= 0
ifTrue: [desiredEdenBytes]
ifFalse:
[hdrEdenBytes = 0
ifTrue: [objectMemory defaultEdenBytes]
ifFalse: [hdrEdenBytes]]).
desiredEdenBytes := hdrEdenBytes.
hdrMaxExtSemTabSize := self getShortFromFile: f swap: swapBytes.
hdrMaxExtSemTabSize ~= 0 ifTrue:
[self setMaxExtSemSizeTo: hdrMaxExtSemTabSize].
"pad to word boundary. This slot can be used for anything else that will fit in 16 bits.
Preserve it to be polite to other VMs."
the2ndUnknownShort := self getShortFromFile: f swap: swapBytes.
self assert: f position = (objectMemory wordSize = 4 ifTrue: [48] ifFalse: [72]).
firstSegSize := self getLongFromFile: f swap: swapBytes.
objectMemory firstSegmentSize: firstSegSize.
"For Open PICs to be able to probe the method cache during
simulation the methodCache must be relocated to memory."
methodCacheSize := methodCache size * objectMemory wordSize.
primTraceLogSize := primTraceLog size * objectMemory wordSize.
"To cope with modern OSs that disallow executing code in writable memory we dual-map
the code zone, one mapping with read/write permissions and the other with read/execute
+ permissions. In simulation all we can do is use memory, so if we're simulating dual mapping
+ we use double the memory and simulate the memory sharing in the Cogit's backEnd."
+ effectiveCogCodeSize := (InitializationOptions at: #DUAL_MAPPED_COG_ZONE ifAbsent: [false])
+ ifTrue: [cogCodeSize * 2]
+ ifFalse: [cogCodeSize].
- permissions. In such a configuration the code zone has already been alloated and is not
- included in (what is no longer) the initial alloc."
- self ioAllocateDualMappedCodeZone: (self addressOf: executableZone put: [:v| executableZone := v])
- OfSize: cogCodeSize
- WritableZone: (self addressOf: writableCodeZone put: [:v| writableCodeZone := v]).
- "In simulation all we can do is use memory, so if we're simulating dual mapping we
- use double the memory and simulate the memory sharing in the Cogit's backEnd."
- effectiveCogCodeSize := writableCodeZone > 0 ifTrue: [cogCodeSize * 2] ifFalse: [cogCodeSize].
"allocate interpreter memory. This list is in address order, low to high.
In the actual VM the stack zone exists on the C stack."
heapBase := (Cogit guardPageSize
+ effectiveCogCodeSize
+ stackZoneSize
+ methodCacheSize
+ primTraceLogSize
+ self rumpCStackSize) roundUpTo: objectMemory allocationUnit.
"compare memory requirements with availability"
allocationReserve := self interpreterAllocationReserveBytes.
objectMemory hasSpurMemoryManagerAPI
ifTrue:
[| freeOldSpaceInImage headroom |
freeOldSpaceInImage := self getLongFromFile: f swap: swapBytes.
headroom := objectMemory
initialHeadroom: extraVMMemory
givenFreeOldSpaceInImage: freeOldSpaceInImage.
heapSize := objectMemory roundUpHeapSize:
dataSize
+ headroom
+ objectMemory newSpaceBytes
+ (headroom > allocationReserve
ifTrue: [0]
ifFalse: [allocationReserve])]
ifFalse:
[heapSize := dataSize
+ extraBytes
+ objectMemory newSpaceBytes
+ (extraBytes > allocationReserve
ifTrue: [0]
ifFalse: [allocationReserve])].
heapBase := objectMemory
setHeapBase: heapBase
memoryLimit: heapBase + heapSize
endOfMemory: heapBase + dataSize.
self assert: cogCodeSize \\ 4 = 0.
self assert: objectMemory memoryLimit \\ 4 = 0.
self assert: self rumpCStackSize \\ 4 = 0.
objectMemory allocateMemoryOfSize: objectMemory memoryLimit.
"read in the image in bulk, then swap the bytes if necessary"
f position: headerSize.
count := objectMemory readHeapFromImageFile: f dataBytes: dataSize.
count ~= dataSize ifTrue: [self halt]]
ensure: [f close].
self moveMethodCacheToMemoryAt: objectMemory cogCodeBase + effectiveCogCodeSize + stackZoneSize.
self movePrimTraceLogToMemoryAt: objectMemory cogCodeBase + effectiveCogCodeSize + stackZoneSize + methodCacheSize.
self ensureImageFormatIsUpToDate: swapBytes.
bytesToShift := objectMemory memoryBaseForImageRead - oldBaseAddr. "adjust pointers for zero base address"
UIManager default
informUser: 'Relocating object pointers...'
during: [self initializeInterpreter: bytesToShift].
+ self initializeCodeGenerator!
- writableCodeZone ~= 0
- ifTrue:
- [self initializeCodeGenerator: cogCodeSize + (Cogit guardPageSize * 2)
- executableZone: Cogit guardPageSize]
- ifFalse:
- [self initializeCodeGenerator: 0
- executableZone: Cogit guardPageSize]!
Item was added:
+ ----- Method: Cogit>>initializeCodeZoneFrom:upTo: (in category 'initialization') -----
+ initializeCodeZoneFrom: startAddress upTo: endAddress
+ <api>
+ self initializeBackend.
+ backEnd stopsFrom: startAddress to: endAddress - 1.
+ self sqMakeMemoryExecutableFrom: startAddress
+ To: endAddress
+ CodeToDataDelta: (self addressOf: codeToDataDelta put: [:v| codeToDataDelta := v]).
+ self cCode: '' inSmalltalk:
+ [backEnd stopsFrom: 0 to: guardPageSize - 1.
+ self initializeProcessor].
+ codeBase := methodZoneBase := startAddress.
+ minValidCallAddress := (codeBase min: coInterpreter interpretAddress)
+ min: coInterpreter primitiveFailAddress.
+ methodZone manageFrom: methodZoneBase to: endAddress.
+ self assertValidDualZone.
+ self maybeGenerateCheckFeatures.
+ self maybeGenerateCheckLZCNT.
+ self maybeGenerateICacheFlush.
+ self generateVMOwnerLockFunctions.
+ self genGetLeafCallStackPointer.
+ self generateStackPointerCapture.
+ self generateTrampolines.
+ self computeEntryOffsets.
+ self computeFullBlockEntryOffsets.
+ self generateClosedPICPrototype.
+ self alignMethodZoneBase.
+ "repeat so that now the methodZone ignores the generated run-time"
+ methodZone manageFrom: methodZoneBase to: endAddress.
+ "N.B. this is assumed to be the last thing done in initialization; see Cogit>>initialized"
+ self generateOpenPICPrototype!
Item was changed:
----- Method: Cogit>>initializeCodeZoneFrom:upTo:writableCodeZone: (in category 'initialization') -----
initializeCodeZoneFrom: startAddress upTo: endAddress writableCodeZone: writableCodeZone
<api>
+ <var: 'startAddress' type: #usqInt>
+ <var: 'endAddress' type: #usqInt>
+ <var: 'writableCodeZone' type: #usqInt>
"If the OS platform requires dual mapping to achieve a writable code zone
then startAddress will be the non-zero address of the read/write zone and
executableCodeZone will be the non-zero address of the read/execute zone.
If the OS platform does not require dual mapping then startAddress will be
the first address of the read/write/executable zone and executableCodeZone
will be zero."
self initializeBackend.
codeToDataDelta := writableCodeZone = 0 ifTrue: [0] ifFalse: [writableCodeZone - startAddress].
backEnd stopsFrom: startAddress to: endAddress - 1.
self cCode:
[writableCodeZone = 0 ifTrue:
[self sqMakeMemoryExecutableFrom: startAddress To: endAddress]]
inSmalltalk:
[startAddress = self class guardPageSize ifTrue:
[backEnd stopsFrom: 0 to: endAddress - 1].
self initializeProcessor].
codeBase := methodZoneBase := startAddress.
minValidCallAddress := (codeBase min: coInterpreter interpretAddress) min: coInterpreter primitiveFailAddress.
methodZone manageFrom: methodZoneBase to: endAddress.
self assertValidDualZone.
self maybeGenerateCheckFeatures.
self maybeGenerateCheckLZCNT.
self maybeGenerateICacheFlush.
self generateVMOwnerLockFunctions.
self genGetLeafCallStackPointer.
self generateStackPointerCapture.
self generateTrampolines.
self computeEntryOffsets.
self computeFullBlockEntryOffsets.
self generateClosedPICPrototype.
self alignMethodZoneBase.
"repeat so that now the methodZone ignores the generated run-time"
methodZone manageFrom: methodZoneBase to: endAddress.
"N.B. this is assumed to be the last thing done in initialization; see Cogit>>initialized"
self generateOpenPICPrototype!
Item was added:
+ ----- Method: Cogit>>sqMakeMemoryExecutableFrom:To:CodeToDataDelta: (in category 'initialization') -----
+ sqMakeMemoryExecutableFrom: startAddress To: endAddress CodeToDataDelta: codeToDataDeltaPtr
+ <doNotGenerate>
+ "Simulate setting executable permissions on the code zone. In production this will apply execute permission
+ to startAddress through endAddress - 1. If starting up in the DUAL_MAPPED_CODE_ZONE regime then it
+ will also create a writable mapping for the code zone and assign the distance from the executable zone to the
+ writable zone through codeToDataDeltaPtr. If in this regime when simulating, the CogVMSimulator will
+ have allocated twice as much code memory as asked for (see CogVMSimulator openOn:extraMemory:), and
+ so this method simply sets the delta to the code size."
+ (InitializationOptions at: #DUAL_MAPPED_COG_ZONE ifAbsent: [false]) ifTrue:
+ [codeToDataDeltaPtr at: 0 put: coInterpreter cogCodeSize]!
Item was added:
+ ----- Method: RegisterAllocatingCogit>>initializeCodeZoneFrom:upTo: (in category 'initialization') -----
+ initializeCodeZoneFrom: startAddress upTo: endAddress
+ scratchSimStack := self cCode: [self malloc: self simStackSlots * (self sizeof: CogSimStackEntry)]
+ inSmalltalk: [CArrayAccessor on: ((1 to: self simStackSlots) collect: [:ign| CogRegisterAllocatingSimStackEntry new])].
+ super initializeCodeZoneFrom: startAddress upTo: endAddress!
Item was removed:
- ----- Method: RegisterAllocatingCogit>>initializeCodeZoneFrom:upTo:writableCodeZone: (in category 'initialization') -----
- initializeCodeZoneFrom: startAddress upTo: endAddress writableCodeZone: writableCodeZone
- scratchSimStack := self cCode: [self malloc: self simStackSlots * (self sizeof: CogSimStackEntry)]
- inSmalltalk: [CArrayAccessor on: ((1 to: self simStackSlots) collect: [:ign| CogRegisterAllocatingSimStackEntry new])].
- super initializeCodeZoneFrom: startAddress upTo: endAddress writableCodeZone: writableCodeZone!
Item was added:
+ ----- Method: SistaCogit>>initializeCodeZoneFrom:upTo: (in category 'initialization') -----
+ initializeCodeZoneFrom: startAddress upTo: endAddress
+ initialCounterValue := MaxCounterValue.
+ super initializeCodeZoneFrom: startAddress upTo: endAddress!
Item was removed:
- ----- Method: SistaCogit>>initializeCodeZoneFrom:upTo:writableCodeZone: (in category 'initialization') -----
- initializeCodeZoneFrom: startAddress upTo: endAddress writableCodeZone: writableCodeZone
- initialCounterValue := MaxCounterValue.
- super initializeCodeZoneFrom: startAddress upTo: endAddress writableCodeZone: writableCodeZone!
Item was added:
+ ----- Method: SistaCogitClone>>initializeCodeZoneFrom:upTo: (in category 'initialization') -----
+ initializeCodeZoneFrom: startAddress upTo: endAddress
+ initialCounterValue := MaxCounterValue.
+ super initializeCodeZoneFrom: startAddress upTo: endAddress!
Item was removed:
- ----- Method: SistaCogitClone>>initializeCodeZoneFrom:upTo:writableCodeZone: (in category 'initialization') -----
- initializeCodeZoneFrom: startAddress upTo: endAddress writableCodeZone: writableCodeZone
- initialCounterValue := MaxCounterValue.
- super initializeCodeZoneFrom: startAddress upTo: endAddress writableCodeZone: writableCodeZone!