Eliot Miranda uploaded a new version of VMMaker to project VM Maker Inbox: http://source.squeak.org/VMMakerInbox/VMMaker.oscog-eem.3369.mcz
==================== Summary ====================
Name: VMMaker.oscog-eem.3369 Author: eem Time: 26 April 2024, 11:17:55.33664 am UUID: 3beb7d31-43b2-44b0-b295-5ce40a3386e3 Ancestors: VMMaker.oscog-eem.3368
Simulation: Simulate getAttributeString: much more relevantly. Save a little space & time by using cCode: nil instead of cCode: [] or cCode: '' (I know, a foolish consistency is the hobgoblin of little minds...). Fix a bug in attemptToComputeTempNamesFor: with full blocks containing no temps (multiple blocks with the same initialPC).
=============== Diff against VMMaker.oscog-eem.3368 ===============
Item was changed: ----- Method: BitBltSimulation>>copyLoop (in category 'inner loop') ----- copyLoop | prevWord thisWord skewWord halftoneWord mergeWord hInc y unskew skewMask notSkewMask mergeFnwith destWord | "This version of the inner loop assumes noSource = false." <inline: false> <var: 'prevWord' type: #'unsigned int'> <var: 'thisWord' type: #'unsigned int'> <var: 'skewWord' type: #'unsigned int'> <var: 'halftoneWord' type: #'unsigned int'> <var: 'mergeWord' type: #'unsigned int'> <var: 'destWord' type: #'unsigned int'> <var: 'skewMask' type: #'unsigned int'> <var: 'notSkewMask' type: #'unsigned int'> <var: 'unskew' type: #int> "unskew is a bitShift and MUST remain signed, while skewMask is unsigned." <var: 'mergeFnwith' declareC: 'unsigned int (*mergeFnwith)(unsigned int, unsigned int)'> mergeFnwith := self cCoerce: (opTable at: combinationRule+1) to: #'unsigned int (*)(unsigned int, unsigned int)'. mergeFnwith. "null ref for compiler"
self deny: (preload and: [skew = 0]). self assert: (skew between: -31 and: 31).
hInc := hDir * 4. "Byte delta" skew < 0 ifTrue: [unskew := skew + 32. skewMask := AllOnes << (0 - skew). + self cCode: nil inSmalltalk: [skewMask := skewMask bitAnd: 16rFFFFFFFF]] - self cCode: [] inSmalltalk: [skewMask := skewMask bitAnd: 16rFFFFFFFF]] ifFalse: [skew = 0 ifTrue: [unskew := 0. skewMask := AllOnes] ifFalse: [unskew := skew - 32. skewMask := AllOnes >> skew]].
notSkewMask := skewMask bitInvert32. noHalftone ifTrue: [halftoneWord := AllOnes. halftoneHeight := 0] ifFalse: [halftoneWord := self halftoneAt: 0].
y := dy. "Here is the vertical loop, in two versions, one for the combinationRule = 3 copy mode, one for the general case." combinationRule = 3 ifTrue: [1 to: bbH do: "here is the vertical loop for combinationRule = 3 copy mode; no need to call merge" [ :i | halftoneHeight > 1 ifTrue: "Otherwise, it's always the same" [halftoneWord := self halftoneAt: y. y := y + vDir]. preload ifTrue: "load the 64-bit shifter" [prevWord := self srcLongAt: sourceIndex. self incSrcIndex: hInc] ifFalse: [prevWord := 0].
"Note: the horizontal loop has been expanded into three parts for speed:"
"This first section requires masking of the destination store..." destMask := mask1. thisWord := self srcLongAt: sourceIndex. "pick up next word" self incSrcIndex: hInc. skewWord := ((prevWord bitAnd: notSkewMask) bitShift: unskew) bitOr: "32-bit rotate" ((thisWord bitAnd: skewMask) bitShift: skew). prevWord := thisWord. destWord := self dstLongAt: destIndex. destWord := (destMask bitAnd: (skewWord bitAnd: halftoneWord)) bitOr: (destWord bitAnd: destMask bitInvert32). self dstLongAt: destIndex put: destWord. self incDestIndex: hInc.
"This central horizontal loop requires no store masking" destMask := AllOnes. (skew = 0 and: [halftoneWord = AllOnes]) ifTrue: "Very special inner loop for STORE mode with no skew -- just move words" [(preload and: [hDir = 1]) ifTrue: [2 to: nWords-1 do: [ :word | "Note loop starts with prevWord loaded (due to preload)" self dstLongAt: destIndex put: prevWord. self incDestIndex: hInc. prevWord := self srcLongAt: sourceIndex. self incSrcIndex: hInc]] ifFalse: [2 to: nWords-1 do: [ :word | thisWord := self srcLongAt: sourceIndex. self incSrcIndex: hInc. self dstLongAt: destIndex put: thisWord. self incDestIndex: hInc]. prevWord := thisWord]] ifFalse: [2 to: nWords-1 do: [ :word | thisWord := self srcLongAt: sourceIndex. self incSrcIndex: hInc. skewWord := ((prevWord bitAnd: notSkewMask) bitShift: unskew) bitOr: "32-bit rotate" ((thisWord bitAnd: skewMask) bitShift: skew). prevWord := thisWord. self dstLongAt: destIndex put: (skewWord bitAnd: halftoneWord). self incDestIndex: hInc]].
"This last section, if used, requires masking of the destination store..." nWords > 1 ifTrue: [destMask := mask2. thisWord :=((skewMask bitShift: skew) bitAnd: mask2) = 0 ifTrue: [0 "we don't need more bits, they will all come from prevWord"] ifFalse: [self srcLongAt: sourceIndex. "pick up last bits from next word".]. self incSrcIndex: hInc. "Note: this will be undone by incSrcIndex: sourceDelta below if undue" skewWord := ((prevWord bitAnd: notSkewMask) bitShift: unskew) bitOr: "32-bit rotate" ((thisWord bitAnd: skewMask) bitShift: skew). destWord := self dstLongAt: destIndex. destWord := (destMask bitAnd: (skewWord bitAnd: halftoneWord)) bitOr: (destWord bitAnd: destMask bitInvert32). self dstLongAt: destIndex put: destWord. self incDestIndex: hInc].
self incSrcIndex: sourceDelta. self incDestIndex: destDelta]] ifFalse: [1 to: bbH do: "here is the vertical loop for the general case (combinationRule ~= 3)" [ :i | halftoneHeight > 1 ifTrue: "Otherwise, it's always the same" [halftoneWord := self halftoneAt: y. y := y + vDir]. preload ifTrue: "load the 64-bit shifter" [prevWord := self srcLongAt: sourceIndex. self incSrcIndex: hInc] ifFalse: [prevWord := 0].
"Note: the horizontal loop has been expanded into three parts for speed:"
"This first section requires masking of the destination store..." destMask := mask1. thisWord := self srcLongAt: sourceIndex. "pick up next word" self incSrcIndex: hInc. skewWord := ((prevWord bitAnd: notSkewMask) bitShift: unskew) bitOr: "32-bit rotate" ((thisWord bitAnd: skewMask) bitShift: skew). prevWord := thisWord. destWord := self dstLongAt: destIndex. mergeWord := self mergeFn: (skewWord bitAnd: halftoneWord) with: destWord. destWord := (destMask bitAnd: mergeWord) bitOr: (destWord bitAnd: destMask bitInvert32). self dstLongAt: destIndex put: destWord. self incDestIndex: hInc.
"This central horizontal loop requires no store masking" destMask := AllOnes. 2 to: nWords-1 do: "Normal inner loop does merge:" [ :word | thisWord := self srcLongAt: sourceIndex. "pick up next word" self incSrcIndex: hInc. skewWord := ((prevWord bitAnd: notSkewMask) bitShift: unskew) bitOr: "32-bit rotate" ((thisWord bitAnd: skewMask) bitShift: skew). prevWord := thisWord. mergeWord := self mergeFn: (skewWord bitAnd: halftoneWord) with: (self dstLongAt: destIndex). self dstLongAt: destIndex put: mergeWord. self incDestIndex: hInc].
"This last section, if used, requires masking of the destination store..." nWords > 1 ifTrue: [destMask := mask2. thisWord :=((skewMask bitShift: skew) bitAnd: mask2) = 0 ifTrue: [0 "we don't need more bits, they will all come from prevWord"] ifFalse: [self srcLongAt: sourceIndex. "pick up last bits from next word".]. self incSrcIndex: hInc. "Note: this will be undone by incSrcIndex: sourceDelta below if undue" skewWord := ((prevWord bitAnd: notSkewMask) bitShift: unskew) bitOr: "32-bit rotate" ((thisWord bitAnd: skewMask) bitShift: skew). destWord := self dstLongAt: destIndex. mergeWord := self mergeFn: (skewWord bitAnd: halftoneWord) with: destWord. destWord := (destMask bitAnd: mergeWord) bitOr: (destWord bitAnd: destMask bitInvert32). self dstLongAt: destIndex put: destWord. self incDestIndex: hInc].
self incSrcIndex: sourceDelta. self incDestIndex: destDelta]]!
Item was changed: ----- Method: CoInterpreter>>ceCheckForInterrupt (in category 'trampolines') ----- ceCheckForInterrupt <api> | switched | + self cCode: nil inSmalltalk: - self cCode: [] inSmalltalk: [self maybeCheckStackDepth: 0 sp: stackPointer pc: instructionPointer]. switched := self checkForEventsMayContextSwitch: true. self returnToExecutive: false postContextSwitch: switched!
Item was changed: ----- Method: CoInterpreter>>ceTakeProfileSample: (in category 'cog jit support') ----- ceTakeProfileSample: aCogMethodOrNil "A primitive has succeeded and the nextProfileTick has been reached (all done in machine code). If aCogMethodOrNil is not nil then it is the cog method containing the primitive call. If aCogMethodOrNil is nil then this has been called from primReturnEnterCogCode and newMethod. Now take a sample. c.f. checkProfileTick:" <api> <var: 'aCogMethodOrNil' type: #'CogMethod *'> <inline: false> "Slang type inference can't deal with self ceTakeProfileSample: nil..." + self cCode: nil inSmalltalk: - self cCode: '' inSmalltalk: [aCogMethodOrNil isInteger ifTrue: [^self ceTakeProfileSample: (aCogMethodOrNil = 0 ifFalse: [cogit cCoerceSimple: aCogMethodOrNil to: #'CogMethod *'])]]. profileProcess := self activeProcess. profileMethod := aCogMethodOrNil ifNil: [newMethod] ifNotNil: [aCogMethodOrNil methodObject]. self forceInterruptCheck. self zeroNextProfileTick!
Item was changed: ----- Method: CoInterpreter>>ceTraceLinkedSend: (in category 'debug support') ----- ceTraceLinkedSend: theReceiver | cogMethod | <api> <var: #cogMethod type: #'CogMethod *'> cogMethod := self cCoerceSimple: (self stackTop - cogit traceLinkedSendOffset) to: #'CogMethod *'. + self cCode: nil inSmalltalk: - self cCode: [] inSmalltalk: [cogit checkStackDepthOnSend ifTrue: [self maybeCheckStackDepth: (cogMethod cmNumArgs > cogit numRegArgs ifTrue: [cogMethod cmNumArgs + 1] ifFalse: [0]) sp: stackPointer + objectMemory wordSize pc: (self stackValue: 1)]]. "cogit recordSendTrace ifTrue: is implicit; wouldn't compile the call otherwise." self recordTrace: (objectMemory fetchClassOf: theReceiver) thing: cogMethod selector source: TraceIsFromMachineCode. cogit printOnTrace ifTrue: [self printActivationNameFor: cogMethod methodObject receiver: theReceiver isBlock: false firstTemporary: (self cCode: [nil] inSmalltalk: [0]); cr]. self sendBreakpoint: cogMethod selector receiver: theReceiver!
Item was changed: ----- Method: CoInterpreter>>flushExternalPrimitives (in category 'plugin primitive support') ----- flushExternalPrimitives "Flush the references to external functions from plugin primitives. Then continue execution answering self. This will force a reload of those primitives when accessed next. Note: We must flush the method cache here also, so that any failed primitives are looked up again. Override to ensure that any and all activations of an external method have a bytecode pc so that if code generation changes (e.g. a primitive method is used, unloaded, and the reloaded primitive is marked with the FastCPrimitiveFlag) stale machine code pcs have been eliminated. THIS MUST BE INVOKED IN THE CONTEXT OF A PRIMITIVE." | activeContext theFrame thePage | activeContext := self divorceAllFramesSuchThat: #isMachineCodeFrameForExternalPrimitiveMethod:. objectMemory allObjectsDo: [:oop| (objectMemory isCompiledMethod: oop) ifTrue: [self flushExternalPrimitiveOf: oop] ifFalse: [(objectMemory isContext: oop) ifTrue: [self mapToBytecodePCIfActivationOfExternalMethod: oop]]]. cogit unlinkSendsToMethodsSuchThat: #cogMethodHasExternalPrim: AndFreeIf: true. self flushMethodCache. self flushExternalPrimitiveTable. + self cCode: nil inSmalltalk: - self cCode: '' inSmalltalk: [self assert: (cogit methodZone cogMethodsSelect: [:cogMethod| cogMethod isCMFree not and: [cogit cogMethodHasExternalPrim: cogMethod]]) isEmpty]. "If flushing led to divorce continue in the interpreter." (self isStillMarriedContext: activeContext) ifFalse: [self nilStackPage. "to avoid assert in marryContextInNewStackPageAndInitializeInterpreterRegisters:" self marryContextInNewStackPageAndInitializeInterpreterRegisters: activeContext. self popStack. "pop pushed instructionPointer" self pop: argumentCount. cogit ceInvokeInterpret "NOTREACHED"]. "If not, work out where we are and continue" theFrame := self frameOfMarriedContext: activeContext. thePage := stackPages stackPageFor: theFrame. 
self assert: thePage headFP = theFrame. self setStackPageAndLimit: thePage. self setStackPointersFromPage: thePage. instructionPointer := self popStack. instructionPointer = cogit ceReturnToInterpreterPC ifTrue: [instructionPointer := self iframeSavedIP: framePointer]. self pop: argumentCount!
Item was changed: ----- Method: CoInterpreter>>followForwardingPointersOfReceiverAndTemporariesInStackZone (in category 'object memory support') ----- followForwardingPointersOfReceiverAndTemporariesInStackZone "A more thorough version of followForwardingPointersInStackZone that also follows all temporaries (but not stack contents after the temps). This allows removal of the TempVectReadBarrier in the IGC" <inline: false>
stackPage ifNil: "the system must be snapshotting; nothing to do..." [self assert: (stackPages mostRecentlyUsedPage isNil or: [stackPages mostRecentlyUsedPage isFree]). + self cCode: nil inSmalltalk: [self assert: stackPages allPagesFree]. - self cCode: [] inSmalltalk: [self assert: stackPages allPagesFree]. ^self].
self externalWriteBackHeadFramePointers.
0 to: numStackPages - 1 do: [:i| | thePage theFP theIPPtr theSP callerFP oop offset frameRcvrOffset methodHeader | thePage := stackPages stackPageAt: i. thePage isFree ifFalse: [self assert: (self ifCurrentStackPageHasValidHeadPointers: thePage). theFP := thePage headFP. "Skip the instruction pointer on top of stack of inactive pages." theIPPtr := thePage = stackPage ifFalse: [thePage headSP asUnsignedInteger]. [self assert: (thePage addressIsInPage: theFP). self assert: (theIPPtr isNil or: [thePage addressIsInPage: theIPPtr asVoidPointer]). (self isMachineCodeFrame: theFP) ifTrue: [oop := stackPages longAt: theFP + FoxMFReceiver. (objectMemory isOopForwarded: oop) ifTrue: [stackPages longAt: theFP + FoxMFReceiver put: (objectMemory followForwarded: oop)]. self assert: (objectMemory isForwarded: (self mframeHomeMethod: theFP) methodObject) not. frameRcvrOffset := theFP + FoxMFReceiver. methodHeader := (self mframeHomeMethod: theFP) methodHeader] ifFalse: [oop := stackPages longAt: theFP + FoxIFReceiver. (objectMemory isOopForwarded: oop) ifTrue: [stackPages longAt: theFP + FoxIFReceiver put: (objectMemory followForwarded: oop)]. oop := self iframeMethod: theFP. (objectMemory isForwarded: oop) ifTrue: [| newOop | newOop := objectMemory followForwarded: oop. offset := newOop - oop. (theIPPtr notNil and: [(stackPages longAt: theIPPtr) > oop]) ifTrue: [stackPages longAt: theIPPtr put: (stackPages longAt: theIPPtr) + offset]. stackPages longAt: theFP + FoxIFSavedIP put: (stackPages longAt: theFP + FoxIFSavedIP) + offset. stackPages longAt: theFP + FoxMethod put: (oop := newOop)]. frameRcvrOffset := theFP + FoxIFReceiver. methodHeader := objectMemory methodHeaderOf: oop]. theSP := frameRcvrOffset - ((self temporaryCountOfMethodHeader: methodHeader) * objectMemory wordSize). [theSP <= frameRcvrOffset] whileTrue: [oop := stackPages longAt: theSP. (objectMemory isOopForwarded: oop) ifTrue: [oop := objectMemory followForwarded: oop. stackPages longAt: theSP put: oop]. 
theSP := theSP + objectMemory wordSize].
((self frameHasContext: theFP) and: [(objectMemory isForwarded: (self frameContext: theFP))]) ifTrue: [stackPages longAt: theFP + FoxThisContext put: (objectMemory followForwarded: (self frameContext: theFP))]. offset := self frameStackedReceiverOffset: theFP. oop := stackPages longAt: theFP + offset. (objectMemory isOopForwarded: oop) ifTrue: [stackPages longAt: theFP + offset put: (objectMemory followForwarded: oop)]. (callerFP := self frameCallerFP: theFP) ~= 0] whileTrue: [theIPPtr := (theFP + FoxCallerSavedIP) asUnsignedInteger. theFP := callerFP]. "And finally follow the saved context and the caller context." theSP := thePage baseAddress - objectMemory wordSize. [theSP <= thePage baseAddress] whileTrue: [oop := stackPages longAt: theSP. (objectMemory isForwarded: oop) ifTrue: [stackPages longAt: theSP put: (objectMemory followForwarded: oop)]. theSP := theSP + objectMemory wordSize]]]!
Item was changed: ----- Method: CoInterpreter>>followForwardingPointersOfReceiversInStackZone (in category 'object memory support') ----- followForwardingPointersOfReceiversInStackZone "Spur's become: is lazy, turning the becommed object into a forwarding object to the other. The read-barrier is minimised by arranging that forwarding pointers will fail a method cache probe, since notionally objects' internals are accessed only via sending messages to them, the exception is primitives that access the internals of the non-receiver argument(s).
To avoid a read barrier on bytecode, literal and inst var fetch and non-local return, we scan the receivers (including the stacked receiver for non-local return) and method references in the stack zone and follow any forwarded ones. This is of course way cheaper than scanning all of memory as in the old become.
Override to handle machine code frames" <inline: false>
stackPage ifNil: "the system must be snapshotting; nothing to do..." [self assert: (stackPages mostRecentlyUsedPage isNil or: [stackPages mostRecentlyUsedPage isFree]). + self cCode: nil inSmalltalk: [self assert: stackPages allPagesFree]. - self cCode: [] inSmalltalk: [self assert: stackPages allPagesFree]. ^self].
self externalWriteBackHeadFramePointers.
0 to: numStackPages - 1 do: [:i| | thePage theFP theIPPtr theSP callerFP oop offset | thePage := stackPages stackPageAt: i. thePage isFree ifFalse: [self assert: (self ifCurrentStackPageHasValidHeadPointers: thePage). theFP := thePage headFP. "Skip the instruction pointer on top of stack of inactive pages." theIPPtr := thePage = stackPage ifFalse: [thePage headSP asUnsignedInteger]. [self assert: (thePage addressIsInPage: theFP). self assert: (theIPPtr isNil or: [thePage addressIsInPage: theIPPtr asVoidPointer]). (self isMachineCodeFrame: theFP) ifTrue: [oop := stackPages longAt: theFP + FoxMFReceiver. (objectMemory isOopForwarded: oop) ifTrue: [stackPages longAt: theFP + FoxMFReceiver put: (objectMemory followForwarded: oop)]. self assert: (objectMemory isForwarded: (self mframeHomeMethod: theFP) methodObject) not] ifFalse: [oop := stackPages longAt: theFP + FoxIFReceiver. (objectMemory isOopForwarded: oop) ifTrue: [stackPages longAt: theFP + FoxIFReceiver put: (objectMemory followForwarded: oop)]. oop := self iframeMethod: theFP. (objectMemory isForwarded: oop) ifTrue: [| newOop | newOop := objectMemory followForwarded: oop. offset := newOop - oop. (theIPPtr notNil and: [(stackPages longAt: theIPPtr) > oop]) ifTrue: [stackPages longAt: theIPPtr put: (stackPages longAt: theIPPtr) + offset]. stackPages longAt: theFP + FoxIFSavedIP put: (stackPages longAt: theFP + FoxIFSavedIP) + offset. stackPages longAt: theFP + FoxMethod put: (oop := newOop)]]. ((self frameHasContext: theFP) and: [(objectMemory isForwarded: (self frameContext: theFP))]) ifTrue: [stackPages longAt: theFP + FoxThisContext put: (objectMemory followForwarded: (self frameContext: theFP))]. offset := self frameStackedReceiverOffset: theFP. oop := stackPages longAt: theFP + offset. (objectMemory isOopForwarded: oop) ifTrue: [stackPages longAt: theFP + offset put: (objectMemory followForwarded: oop)]. 
(callerFP := self frameCallerFP: theFP) ~= 0] whileTrue: [theIPPtr := (theFP + FoxCallerSavedIP) asUnsignedInteger. theFP := callerFP]. "And finally follow the saved context and the caller context." theSP := thePage baseAddress - objectMemory wordSize. [theSP <= thePage baseAddress] whileTrue: [oop := stackPages longAt: theSP. (objectMemory isForwarded: oop) ifTrue: [stackPages longAt: theSP put: (objectMemory followForwarded: oop)]. theSP := theSP + objectMemory wordSize]]]!
Item was changed: ----- Method: CoInterpreter>>isCodeCompactingPrimitiveIndex: (in category 'primitive support') ----- isCodeCompactingPrimitiveIndex: primIndex "If instVarAt:, slotAt: or shallowCopy operate on a Context then they compute a bytecode pc and hence may provoke a code compaction. Hence primitive invocation from these primitives must use a static return address (cePrimReturnEnterCogCode:). Note that the process switch primitives may also provoke a code compaction, which happens when switching to a process whose top context has a machine code pc but the method is no longer in the code cache. However, in this case they are switching process and don't go through the normal return. So we don't include them here." <inline: #always> + self cCode: nil inSmalltalk: [#primitiveClone. #primitiveInstVarAt. #primitiveSlotAt. #primitiveFlushExternalPrimitives. #primitiveUnloadModule]. "For senders..." - self cCode: [] inSmalltalk: [#primitiveClone. #primitiveInstVarAt. #primitiveSlotAt. #primitiveFlushExternalPrimitives. #primitiveUnloadModule]. "For senders..." ^primIndex = PrimNumberInstVarAt or: [primIndex = PrimNumberShallowCopy or: [primIndex = PrimNumberSlotAt or: [primIndex = PrimNumberFlushExternalPrimitives or: [primIndex = PrimNumberUnloadModule]]]]!
Item was changed: ----- Method: CoInterpreter>>isPerformPrimitive: (in category 'primitive support') ----- isPerformPrimitive: primIndex <inline: #always> + self cCode: nil inSmalltalk: [#primitivePerform. #primitivePerformWithArgs]. "For senders..." - self cCode: [] inSmalltalk: [#primitivePerform. #primitivePerformWithArgs]. "For senders..." ^primIndex = PrimNumberPerform or: [primIndex = PrimNumberPerformWithArgs]!
Item was changed: ----- Method: CoInterpreter>>printCogMethod: (in category 'debug printing') ----- printCogMethod: cogMethod <api> <var: #cogMethod type: #'CogMethod *'> | address primitive | + self cCode: nil - self cCode: '' inSmalltalk: [transcript ensureCr. cogMethod isInteger ifTrue: [^self printCogMethod: (self cCoerceSimple: cogMethod to: #'CogMethod *')]]. address := cogMethod asInteger. self printHex: address; print: ' <-> '; printHex: address + cogMethod blockSize. cogMethod isCMMethodEtAl ifTrue: [self print: ': method: '; printHex: cogMethod methodObject. primitive := self primitiveIndexOfMethod: cogMethod methodObject header: cogMethod methodHeader. primitive ~= 0 ifTrue: [self print: ' prim '; printNum: primitive]. (objectMemory addressCouldBeObj: cogMethod methodObject) ifTrue: [cogMethod cmIsFullBlock ifTrue: [self print: ' [full]'] ifFalse: [(objectMemory addressCouldBeObj: (self methodClassOf: cogMethod methodObject)) ifTrue: [self space; printNameOfClass: (self methodClassOf: cogMethod methodObject) count: 2]]]]. cogMethod isCMBlock ifTrue: [self print: ': block home: '; printHex: (self cCoerceSimple: cogMethod to: #'CogBlockMethod *') cmHomeMethod asUnsignedInteger]. cogMethod isCMClosedPIC ifTrue: [self print: ': Closed PIC N: '; printHex: cogMethod cPICNumCases]. cogMethod isCMOpenPIC ifTrue: [self print: ': Open PIC ']. self print: ' selector: '; printHex: cogMethod selector. cogMethod selector = objectMemory nilObject ifTrue: [| s | (cogMethod isCMMethodEtAl and: [(s := self maybeSelectorOfMethod: cogMethod methodObject) notNil]) ifTrue: [self print: ' (nil: '; printStringOf: s; print: ')'] ifFalse: [self print: ' (nil)']] ifFalse: [self space; printStringOf: cogMethod selector]. self cr!
Item was changed: ----- Method: CoInterpreter>>printFrame:WithSP: (in category 'debug printing') ----- printFrame: theFP WithSP: theSP <var: #theFP type: #'char *'> <var: #theSP type: #'char *'> <public> "useful for VM debugging" <inline: false> | theMethod theMethodEnd numArgs numTemps rcvrAddress topThing | <var: #addr type: #'char *'> + self cCode: nil inSmalltalk: [transcript ensureCr]. - self cCode: '' inSmalltalk: [transcript ensureCr]. (stackPages couldBeFramePointer: theFP) ifNil: ['%P is not in the stack zone?!!\n' f: transcript printf: theFP. ^nil]. (self isMachineCodeFrame: theFP) ifTrue: [| cogMethod homeMethod | cogMethod := self mframeCogMethod: theFP. homeMethod := self mframeHomeMethod: theFP. theMethod := homeMethod asInteger. theMethodEnd := homeMethod asInteger + homeMethod blockSize. numArgs := cogMethod cmNumArgs. numTemps := self temporaryCountOfMethodHeader: homeMethod methodHeader] ifFalse: [theMethod := self frameMethodObject: theFP. theMethodEnd := theMethod + (objectMemory sizeBitsOfSafe: theMethod). numArgs := self iframeNumArgs: theFP. numTemps := self tempCountOf: theMethod]. (self frameIsBlockActivation: theFP) ifTrue: [| rcvrOrClosure | "No BlockLocalTempCounter in the Cogit's C code, so quick hack is to use numCopied + numArgs" rcvrOrClosure := self pushedReceiverOrClosureOfFrame: theFP. ((objectMemory isNonImmediate: rcvrOrClosure) and: [(objectMemory addressCouldBeObj: rcvrOrClosure) and: [(objectMemory fetchClassOfNonImm: rcvrOrClosure) = (objectMemory splObj: ClassBlockClosure)]]) ifTrue: [numTemps := numArgs + (self stSizeOf: rcvrOrClosure)] ifFalse: [numTemps := numArgs]]. self shortPrintFrame: theFP. rcvrAddress := theFP + (self frameStackedReceiverOffsetNumArgs: numArgs). (self isBaseFrame: theFP) ifTrue: [self frameRange: rcvrAddress + (2 * objectMemory wordSize) to: theSP. self printFrameOop: '(caller ctxt' at: rcvrAddress + (2 * objectMemory wordSize). 
self printFrameOop: '(saved ctxt' at: rcvrAddress + (1 * objectMemory wordSize)] ifFalse: [self frameRange: rcvrAddress to: theSP]. self printFrameOop: 'rcvr/clsr' at: rcvrAddress. numArgs to: 1 by: -1 do: [:i| self printFrameOop: 'arg' index: numArgs - i at: theFP + FoxCallerSavedIP + (i * objectMemory wordSize)]. self printFrameThing: 'caller ip' at: theFP + FoxCallerSavedIP extraString: ((stackPages longAt: theFP + FoxCallerSavedIP) = cogit ceReturnToInterpreterPC ifTrue: ['ceReturnToInterpreter']). self printFrameThing: 'saved fp' at: theFP + FoxSavedFP. self printFrameMethodFor: theFP. (self isMachineCodeFrame: theFP) ifTrue: [self printFrameFlagsForFP: theFP]. self printFrameOop: 'context' at: theFP + FoxThisContext. (self isMachineCodeFrame: theFP) ifFalse: [self printFrameFlagsForFP: theFP]. (self isMachineCodeFrame: theFP) ifTrue: [rcvrAddress := theFP + FoxMFReceiver] ifFalse: [self printFrameThing: 'saved ip' at: theFP + FoxIFSavedIP extra: ((self iframeSavedIP: theFP) = 0 ifTrue: [0] ifFalse: [(self iframeSavedIP: theFP) - theMethod + 2 - objectMemory baseHeaderSize]). rcvrAddress := theFP + FoxIFReceiver]. self printFrameOop: 'receiver' at: rcvrAddress. topThing := stackPages longAt: theSP. (self oop: topThing isGreaterThanOrEqualTo: theMethod andLessThan: theMethodEnd) ifTrue: [rcvrAddress - objectMemory wordSize to: theSP + objectMemory wordSize by: objectMemory wordSize negated do: [:addr| | index | index := rcvrAddress - addr / objectMemory wordSize + numArgs. index <= numTemps ifTrue: [self printFrameOop: 'temp' index: index - 1 at: addr] ifFalse: [self printFrameOop: ((self frameIsBlockActivation: theFP) ifTrue: ['temp/stck'] ifFalse: ['stck']) at: addr]]. 
self printFrameThing: 'frame ip' at: theSP extra: ((self isMachineCodeFrame: theFP) ifTrue: [topThing - theMethod] ifFalse: [topThing - theMethod + 2 - objectMemory baseHeaderSize])] ifFalse: [rcvrAddress - objectMemory wordSize to: theSP by: objectMemory wordSize negated do: [:addr| | index | index := rcvrAddress - addr / objectMemory wordSize + numArgs. index <= numTemps ifTrue: [self printFrameOop: 'temp' index: index - 1 at: addr] ifFalse: [self printFrameOop: ((self frameIsBlockActivation: theFP) ifTrue: ['temp/stck'] ifFalse: ['stck']) at: addr]]]!
Item was changed: ----- Method: CoInterpreter>>printMethodCacheFor: (in category 'debug printing') ----- printMethodCacheFor: thing <public> "useful for VM debugging" | n | n := 0. 0 to: MethodCacheSize - 1 by: MethodCacheEntrySize do: [:i | | s c m p | s := methodCache at: i + MethodCacheSelector. c := methodCache at: i + MethodCacheClass. m := methodCache at: i + MethodCacheMethod. p := methodCache at: i + MethodCachePrimFunction. ((thing = -1 or: [s = thing or: [c = thing or: [p = thing or: [m = thing or: [(objectMemory addressCouldBeObj: m) and: [(self maybeMethodHasCogMethod: m) and: [(self cogMethodOf: m) asInteger = thing]]]]]]]) and: [(objectMemory addressCouldBeOop: s) and: [c ~= 0 and: [(self addressCouldBeClassObj: c) or: [self addressCouldBeClassObj: (objectMemory classForClassTag: c)]]]]) ifTrue: [n := n + 1. + self cCode: nil inSmalltalk: [self transcript ensureCr]. - self cCode: [] inSmalltalk: [self transcript ensureCr]. '%ld %lx\n\t' f: transcript printf: { i. i }. (objectMemory isBytesNonImm: s) ifTrue: ['%P %.*s\n' f: transcript printf: { s asVoidPointer. objectMemory numBytesOfBytes: s. objectMemory firstIndexableField: s }] ifFalse: [self shortPrintOop: s]. self tab. (self addressCouldBeClassObj: c) ifTrue: [self shortPrintOop: c] ifFalse: [self printNum: c; space; printHexnp: c; space; shortPrintOop: (objectMemory classForClassTag: c)]. self tab; shortPrintOop: m; tab. self cCode: [p > 1024 ifTrue: [self printHexnp: p] ifFalse: [self printNum: p]] inSmalltalk: [p isSymbol ifTrue: [self print: p] ifFalse: [self printNum: p]]. self cr]]. n > 1 ifTrue: [self printNum: n; cr]!
Item was changed: ----- Method: CoInterpreter>>returnToMachineCodeFrame (in category 'return bytecodes') ----- returnToMachineCodeFrame "Return to the previous context/frame after assigning localIP, localSP and localFP." <inline: true> cogit assertCStackWellAligned. self assert: localIP asUnsignedInteger < objectMemory startOfMemory. self assert: (self isMachineCodeFrame: localFP). self assertValidExecutionPointe: localIP asUnsignedInteger r: localFP s: localSP imbar: false line: #'__LINE__'. self internalStackTopPut: localIP. self internalPush: localReturnValue. self externalizeFPandSP. + self cCode: nil inSmalltalk: - self cCode: '' inSmalltalk: [self maybeCheckStackDepth: 1 sp: stackPointer pc: localIP]. cogit ceEnterCogCodePopReceiverReg "NOTREACHED"!
Item was changed: ----- Method: CoInterpreterMT>>disownVM: (in category 'vm scheduling') ----- disownVM: flags "Release the VM to other threads and answer the current thread's index. Currently valid flags:
	DisownVMForFFICall - informs the VM that it is entering an FFI call
	DisownVMForThreading - informs the VM that it is entering code during which threading should be permitted
	OwnVMForeignThreadFlag - indicates lowest-level entry from a foreign thread - not to be used explicitly by clients - only set by ownVMFromUnidentifiedThread
	VMAlreadyOwnedHenceDoNotDisown - indicates an ownVM from a callback was made when the vm was still owned - not to be used explicitly by clients - only set by ownVMFromUnidentifiedThread
This is the entry-point for plugins and primitives that wish to release the VM while performing some operation that may potentially block, and for callbacks returning back to some blocking operation. If this thread does not reclaim the VM before- hand then when the next heartbeat occurs the thread manager will schedule a thread to acquire the VM which may start running the VM in place of this thread.
N.B. Most of the state needed to resume after preemption is set in preemptDisowningThread." <public> <inline: false> <returnTypeC: #'void *'> | vmThread activeProc | self assert: flags >= 0. self assert: self successful. self assert: (cogThreadManager vmOwnerIs: cogThreadManager ioGetThreadLocalThreadIndex).
cogit recordEventTrace ifTrue: [self recordTrace: TraceDisownVM thing: (objectMemory integerObjectOf: flags) source: 0]. processHasThreadAffinity ifFalse: [willNotThreadWarnCount < 10 ifTrue: [self print: 'warning: VM parameter 48 indicates Process doesn''t have threadId; VM will not thread'; cr. willNotThreadWarnCount := willNotThreadWarnCount + 1]]. vmThread := cogThreadManager currentVMThread. (flags anyMask: VMAlreadyOwnedHenceDoNotDisown) ifTrue: [disowningVMThread := vmThread. vmThread setVmThreadState: CTMUnavailable. ^nil]. self assertCStackPointersBelongToCurrentThread. self assertValidNewMethodPropertyFlags. + self cCode: nil inSmalltalk: - self cCode: '' inSmalltalk: [cogThreadManager saveRegisterStateForCurrentProcess. cogThreadManager clearRegisterStates.]. (flags anyMask: DisownVMForProcessorRelinquish) ifTrue: [| proc | (proc := objectMemory splObj: foreignCallbackProcessSlot) ~= objectMemory nilObject ifTrue: [foreignCallbackPriority := self quickFetchInteger: PriorityIndex ofObject: proc]. relinquishing := true. self sqLowLevelMFence]. disownCount := disownCount + 1. "If we're disowning the VM because there's no active process to run, there's nothing to preempt later, so don't indicate that there's a disowningVMThread that needs to be restored later." activeProc := self activeProcess. activeProc ~= objectMemory nilObject ifTrue: [disowningVMThread := vmThread. vmThread priority: (self quickFetchInteger: PriorityIndex ofObject: activeProc).].
"OwnVMForeignThreadFlag indicates lowest-level of entry by a foreign thread. If that's where we are then release the vmThread. Otherwise indicate the vmThread is off doing something outside of the VM." (flags anyMask: OwnVMForeignThreadFlag) ifTrue: ["I don't think this is quite right. Josh's use case is creating some foreign thread and then registering it with the VM. That's not the same as binding a process to a foreign thread given that the foreign callback process is about to terminate anyway (it is returning from a callback here). So do we need an additional concept, that of a vmThread being either of the set known to the VM or floating?" self flag: 'issue with registering foreign threads with the VM'. (self isBoundProcess: self activeProcess) ifFalse: [cogThreadManager unregisterVMThread: vmThread]] ifFalse: [vmThread setVmThreadState: CTMUnavailable].
vmThread disownFlags: (flags bitOr: (activeProcessAffined ifTrue: [0] ifFalse: [ProcessUnaffinedOnDisown])). cogThreadManager releaseVM. ^vmThread!
Item was changed: ----- Method: CoInterpreterMT>>loadInitialContext (in category 'initialization') ----- loadInitialContext | activeProc threadAffinity | super loadInitialContext. activeProc := self activeProcess. threadAffinity := self threadAffinityOfProcess: activeProc. self assert: (threadAffinity = 0 or: [threadAffinity = 1]). + self cCode: nil inSmalltalk: [self flag: #todoMT "Ensure we cannot save an image, where the 'activeProc' is affined to another thread!!"]. - self cCode: [] inSmalltalk: [self flag: #todoMT "Ensure we cannot save an image, where the 'activeProc' is affined to another thread!!"]. activeProcessAffined := threadAffinity ~= 0. cogThreadManager startThreadSubsystem.!
Item was changed: ----- Method: CoInterpreterMT>>ownVMFromUnidentifiedThread (in category 'vm scheduling') ----- ownVMFromUnidentifiedThread "Attempt to take ownership from a thread that as yet doesn't know its index. This supports callbacks where the callback could originate from any thread. Answer 0 if the owning thread is known to the VM. Answer 1 if the owning thread is unknown to the VM and now owns the VM. Answer -1 if the owning thread is unknown to the VM and fails to own the VM. Answer -2 if the owning thread is unknown to the VM and there is no foreign callback process installed." | count threadIndex vmThread | <var: #vmThread type: #'CogVMThread *'> <inline: false> + self cCode: nil inSmalltalk: [self halt: 'TODO: Implement processor register switching']. - self cCode: [] inSmalltalk: [self halt: 'TODO: Implement processor register switching']. (threadIndex := cogThreadManager ioGetThreadLocalThreadIndex) ~= 0 ifTrue: [ "this is a callback from a known thread" (cogThreadManager vmOwnerIs: threadIndex) ifTrue: "the VM has not been disowned" [self assert: (disowningVMThread isNil or: [disowningVMThread = self currentVMThread]). disowningVMThread := nil. self currentVMThread setVmThreadState: CTMAssignableOrInVM. ^VMAlreadyOwnedHenceDoNotDisown]]. (threadIndex = 0 and: [foreignCallbackPriority = 0]) ifTrue: [^-2]. count := 0. "Before we can proceed, we need to temporarily lock the vm, so we can either find our CogVMThread struct or allocate a new one." cogThreadManager acquireVMForIndex: threadIndex withPriority: foreignCallbackPriority. threadIndex ~= 0 ifTrue: ["this is a callback from a known thread. Simply own the VM for that thread." vmThread := cogThreadManager vmThreadAt: threadIndex. ^ self ownVM: vmThread]. "If the current thread doesn't have an index it's new to the vm and we need to allocate a new threadInfo, failing if we can't. We also need a process in the foreignCallbackProcessSlot upon which to run the thread's eventual callback." 
[(objectMemory splObj: foreignCallbackProcessSlot) = objectMemory nilObject] whileTrue: [cogThreadManager releaseVM. (count := count + 1) > 1000 ifTrue: [^-2]. cogThreadManager ioMilliSleep: 1. cogThreadManager acquireVMForIndex: threadIndex withPriority: foreignCallbackPriority].
vmThread := cogThreadManager unusedThreadInfo. "N.B. Keep the VM locked anonymously so that we reserve the non-nil ForeignCallbackProcess for this thread, avoiding the race between competing foreign callbacks. The acquireVMFor: in ownVM: will set the vmOwner to the actual index. So only unlock on failure." vmThread ifNil: [cogThreadManager releaseVM. ^-1]. cogThreadManager setVMOwner: vmThread index. vmThread priority: foreignCallbackPriority; disownFlags: OwnVMForeignThreadFlag; setVmThreadState: CTMWantingOwnership. cogThreadManager registerVMThread: vmThread. ^self ownVM: vmThread!
Item was changed: ----- Method: CoInterpreterMT>>primitiveForceDisplayUpdate (in category 'I/O primitives') ----- primitiveForceDisplayUpdate "On some platforms, this primitive forces enqueued display updates to be processed immediately. On others, it does nothing. + Override so that if the platform requires GUI activity to be restricted to a given thread, we only force update if on the right thread." - Override so that if the platform requires GUI activity to be restricted to a given thread, we ony force update if on the right thread."
(self ioEventThreadAffinity > 0 and: [self ioEventThreadAffinity ~= cogThreadManager getVMOwner]) ifFalse: [self ioForceDisplayUpdate]!
Item was changed: ----- Method: CoInterpreterMT>>primitiveProcessBindToThreadAffinity (in category 'process primitives') ----- primitiveProcessBindToThreadAffinity "Attempt to bind the receiver to the thread affinity of the argument or nil, where the receiver is a Process. The thread affinity may be an integer where: 0 - means no thread affinity, the process is free to run on any thread. > 0 - positive values mean the process has to run on the thread with this specific index. < 0 - negative values mean the process may run on any thread **APART** from the thread with the absolute value of the index. Usually values of 1, -1 and 0 are used. Thread number 1 is the thread the VM started with. On some OSes this thread has special privileges. E.g. on macOS only thread 1 can make draw calls. Therefore it is mostly important whether a thread must run on thread 1, must **not** run on thread 1 or whether it doesn't care. If successful the VM will ensure that there is at least one compatible thread active." | aProcess affinity waitingPriority activePriority | <export: true> + self cCode: nil inSmalltalk: [cogThreadManager isNil ifTrue: [^self primitiveFail]]. - self cCode: [] inSmalltalk: [cogThreadManager isNil ifTrue: [^self primitiveFail]]. processHasThreadAffinity ifFalse: [^self primitiveFailFor: PrimErrUnsupported]. affinity := self stackTop. aProcess := self stackValue: 1. ((affinity = objectMemory nilObject or: [(objectMemory isIntegerObject: affinity) and: [affinity ~= (objectMemory integerObjectOf: 0)]]) and: [(objectMemory isPointers: aProcess) and: [(objectMemory slotSizeOf: aProcess) >= (ThreadIdIndex + 1)]]) ifFalse: [^self primitiveFailFor: PrimErrBadArgument]. affinity := affinity = objectMemory nilObject ifTrue: [0] ifFalse: [objectMemory integerValueOf: affinity]. affinity abs >= cogThreadManager maxNumThreads ifTrue: [^self primitiveFailFor: PrimErrLimitExceeded].
(self bindProcess: aProcess toAffinity: affinity) ifNotNil: [:ec| ^self primitiveFailFor: ec]. self methodReturnReceiver.
waitingPriority := self getMaxWaitingPriority. activePriority := self quickFetchInteger: PriorityIndex ofObject: aProcess. affinity := self threadAffinityOfProcess: aProcess. (aProcess = self activeProcess and: [(activeProcessAffined := affinity ~= 0) and: [(cogThreadManager vmOwnerIsCompatibleWith: affinity) not]]) ifTrue: [activePriority < waitingPriority ifTrue: [self reduceWaitingPriorityFrom: waitingPriority to: activePriority "TODO: Check if this is correct?"]. self threadSwitchIfNecessary: aProcess from: CSThreadBind]!
Item was changed: ----- Method: CoInterpreterMT>>primitiveProcessBoundThreadId (in category 'process primitives') ----- primitiveProcessBoundThreadId "Answer the receiver's current threadAffinity or nil, where the receiver is a Process. If the threadAffinity is positive then the receiver is bound to the thread with that id. If the threadAffinity is negative then the receiver is excluded from running on the thread with that id." | aProcess id | <export: true> + self cCode: nil inSmalltalk: [cogThreadManager isNil ifTrue: [^self primitiveFail]]. - self cCode: [] inSmalltalk: [cogThreadManager isNil ifTrue: [^self primitiveFail]]. processHasThreadAffinity ifFalse: [^self primitiveFailFor: PrimErrUnsupported]. aProcess := self stackTop. id := self threadAffinityOfProcess: aProcess. self methodReturnValue: (id = 0 ifTrue: [objectMemory nilObject] ifFalse: [objectMemory integerObjectOf: id])!
Item was changed: ----- Method: CoInterpreterMT>>primitivePropertyFlagsForSpur:numArgs: (in category 'cog jit support') ----- primitivePropertyFlagsForSpur: primIndex numArgs: numArgs <inline: true> "Answer any special requirements of the given primitive. Spur always needs to set newMethod so primitives can retry on failure due to forwarders." + self cCode: nil inSmalltalk: [#(primitiveRelinquishProcessor)]. "For senders..." - self cCode: [] inSmalltalk: [#(primitiveRelinquishProcessor)]. "For senders..." primIndex = PrimNumberRelinquishProcessor ifTrue: [^profileSemaphore ~= objectMemory nilObject ifTrue: [PrimCallNeedsNewMethod + PrimCallMayEndureCodeCompaction + PrimCallCollectsProfileSamples] ifFalse: [PrimCallNeedsNewMethod + PrimCallMayEndureCodeCompaction]]. ^super primitivePropertyFlagsForSpur: primIndex numArgs: numArgs!
Item was changed: ----- Method: CoInterpreterMT>>primitiveRelinquishProcessor (in category 'I/O primitives') ----- primitiveRelinquishProcessor "Relinquish the processor for up to the given number of microseconds. The exact behavior of this primitive is platform dependent. Override to check for waiting threads."
| microSecs vmHandle currentCStackPointer currentCFramePointer | <var: #currentCStackPointer type: #'volatile usqIntptr_t'> <var: #currentCFramePointer type: #'volatile usqIntptr_t'> microSecs := self stackTop. (objectMemory isIntegerObject: microSecs) ifFalse: [^self primitiveFail]. self assert: (objectMemory fetchPointer: MyListIndex ofObject: self activeProcess) = objectMemory nilObject. self assert: relinquishing not. "DO NOT allow relinquishing the processor while we are profiling since this may skew the time base for our measures (it may reduce processor speed etc). Instead we go full speed, therefore measuring the precise time we spend in the inner idle loop as a busy loop." nextProfileTick = 0 ifTrue: "Presumably we have nothing to do; this primitive is typically called from the background process. So we should /not/ try and activate any threads in the pool; they will waste cycles finding there is no runnable process, and will cause a VM abort if no runnable process is found. But we /do/ want to allow FFI calls that have completed, or callbacks a chance to get into the VM; they do have something to do. DisownVMForProcessorRelinquish indicates this." [currentCStackPointer := CStackPointer. currentCFramePointer := CFramePointer. vmHandle := self disownVM: DisownVMForProcessorRelinquish. self assert: relinquishing. self ioRelinquishProcessorForMicroseconds: (objectMemory integerValueOf: microSecs). self assert: relinquishing. self ownVM: vmHandle. self assert: relinquishing not. self assert: cogThreadManager currentVMThread vmThreadState = CTMAssignableOrInVM. self assert: currentCStackPointer = CStackPointer. self assert: currentCFramePointer = CFramePointer. "In simulation we allow ioRelinquishProcessorForMicroseconds: to fail so that we can arrange that the simulator responds to input events promptly. This *DOES NOT HAPPEN* in the real vm." + self cCode: nil inSmalltalk: [primFailCode ~= 0 ifTrue: [^self]]]. 
- self cCode: [] inSmalltalk: [primFailCode ~= 0 ifTrue: [^self]]]. self assert: (objectMemory fetchPointer: MyListIndex ofObject: self activeProcess) = objectMemory nilObject. self pop: 1 "microSecs; leave rcvr on stack"!
Item was changed: ----- Method: CoInterpreterMT>>primitiveVMCurrentThreadId (in category 'process primitives') ----- primitiveVMCurrentThreadId <export: true> "Answer the VM's current thread's Id" + self cCode: nil inSmalltalk: [cogThreadManager isNil ifTrue: [^self primitiveFail]]. - self cCode: [] inSmalltalk: [cogThreadManager isNil ifTrue: [^self primitiveFail]]. self methodReturnInteger: cogThreadManager getVMOwner!
Item was changed: ----- Method: CoInterpreterMT>>restoreVMStateFor:andFlags: (in category 'vm scheduling') ----- restoreVMStateFor: vmThread andFlags: flags "We've been preempted; we must restore state and update the threadId in our process, and may have to put the active process to sleep." | sched activeProc myProc | sched := self schedulerPointer. activeProc := objectMemory fetchPointer: ActiveProcessIndex ofObject: sched. (flags anyMask: OwnVMForeignThreadFlag) ifTrue: [self assert: foreignCallbackProcessSlot == ForeignCallbackProcess. myProc := objectMemory splObj: foreignCallbackProcessSlot. self assert: myProc ~= objectMemory nilObject. objectMemory splObj: foreignCallbackProcessSlot put: objectMemory nilObject] ifFalse: [myProc := self popProcessWithTemporaryAffinity: vmThread index fromList: (objectMemory splObj: ProcessInExternalCodeTag)]. self assert: (myProc ~= objectMemory nilObject and: [activeProc ~= myProc]). (activeProc ~= objectMemory nilObject and: [(objectMemory fetchPointer: MyListIndex ofObject: activeProc) = objectMemory nilObject]) ifTrue: ["If the activeProcess doesn't have a context yet, it needs one from which we can resume later. This mostly only happens when a threadSwitchIfNecessary:from: ends up switching to a thread that's CTMUnavailable (this thread). See the comment in threadSwitchIfNecessary:from:" self ensureProcessHasContext: activeProc. self putToSleep: activeProc yieldingIf: preemptionYields]. objectMemory storePointerUnchecked: MyListIndex ofObject: myProc withValue: objectMemory nilObject; storePointer: ActiveProcessIndex ofObject: sched withValue: myProc.
self setTemporaryThreadAffinityOfProcess: myProc to: 0. self initPrimCall. self cCode: [self externalSetStackPageAndPointersForSuspendedContextOfProcess: myProc] inSmalltalk: ["Bypass the no-offset stack depth check in the simulator's externalSetStackPageAndPointersForSuspendedContextOfProcess:" super externalSetStackPageAndPointersForSuspendedContextOfProcess: myProc. "We're in ownVM:, hence in a primitive, hence need to include the argument count" (self isMachineCodeFrame: framePointer) ifTrue: [self maybeCheckStackDepth: vmThread argumentCount sp: stackPointer pc: instructionPointer]]. "If this primitive is called from machine code maintain the invariant that the return pc of an interpreter callee calling a machine code caller is ceReturnToInterpreterPC." (vmThread inMachineCode and: [instructionPointer >= objectMemory startOfMemory]) ifTrue: [self iframeSavedIP: framePointer put: instructionPointer. instructionPointer := cogit ceReturnToInterpreterPC]. newMethod := vmThread newMethodOrNull. argumentCount := vmThread argumentCount. vmThread newMethodOrNull: nil. + self cCode: nil inSmalltalk: - self cCode: '' inSmalltalk: [| range | range := self cStackRangeForThreadIndex: vmThread index. self assert: ((range includes: vmThread cStackPointer) and: [range includes: vmThread cFramePointer])]. self setCFramePointer: vmThread cFramePointer setCStackPointer: vmThread cStackPointer. self assert: newMethod notNil !
Item was changed: ----- Method: CoInterpreterMT>>setCFramePointer:setCStackPointer: (in category 'callback support') ----- setCFramePointer: cFramePointer setCStackPointer: cStackPointer + self cCode: nil inSmalltalk: - self cCode: '' inSmalltalk: [| index range | index := CStackPointer ifNil: [1] "first time..." ifNotNil: "subsequently..." [self assert: cogThreadManager getVMOwner = cogThreadManager currentVMThread index. cogThreadManager getVMOwner]. range := self cStackRangeForThreadIndex: index. self assert: ((range includes: cFramePointer) and: [range includes: cStackPointer])]. super setCFramePointer: cFramePointer setCStackPointer: cStackPointer!
Item was changed: ----- Method: CoInterpreterMT>>threadSwitchIfNecessary:from: (in category 'process primitive support') ----- threadSwitchIfNecessary: newProc from: sourceCode "Invoked from transferTo:from: or primitiveProcessBindToThreadId to switch threads if the new process is bound or affined to some other thread." | newProcThreadAffinity vmThread threadSwitchNecessary | self assert: (cogThreadManager vmOwnerIs: cogThreadManager ioGetThreadLocalThreadIndex). deferThreadSwitch ifTrue: [^self].
cogThreadManager assertValidProcessorStackPointersForIndex: cogThreadManager getVMOwner.
"If the current process is unaffined or it is affined to the current thread we're ok to run, but we should yield asap if a higher-priority thread wants the VM." newProcThreadAffinity := self threadAffinityOfProcess: newProc. threadSwitchNecessary := (activeProcessAffined := newProcThreadAffinity ~= 0) and: [(cogThreadManager vmOwnerIsCompatibleWith: newProcThreadAffinity) not]. threadSwitchNecessary ifFalse: [(self quickFetchInteger: PriorityIndex ofObject: newProc) < self getMaxWaitingPriority ifTrue: [checkThreadActivation := true. self forceInterruptCheck]. "We're done, no thread switch necessary" ^self].
"The current process is affined to a thread, but not to the current owner. So switch to that owner." + self cCode: nil inSmalltalk: - self cCode: [] inSmalltalk: [transcript ensureCr; f: 'threadSwitchIfNecessary: %08x from: %s(%d) owner %d -> %d\n' printf: { newProc. TraceSources at: sourceCode. sourceCode. cogThreadManager getVMOwner. newProcThreadAffinity }].
"In most cases, we can just switch the thread here, without externalizing the stack pages. If the Process's context is nil, its state is on the stack. As we're already done context switching, the new thread can just use the interpreter state as-is, without restoring the state from the context. tryToExecuteSmalltalk: already includes a check whether the SuspendedContext is nil. If it is, it leaves the interpreter state alone and just assumes it's correct. This is nice and fast. Otherwise it calls externalSetStackPageAndPointersForSuspendedContextOfProcess: to restore the interpreter state. There is however a special case. When we switch to a thread that is currently CTMUnavailable, that thread will need to restore its process when it tries to own the VM again. The check to restore the context has been moved there (in restoreVMStateFor:andFlags:), so that it only happens in that one case and not every time. In case there are other such special-cases later, adding a call to ensureProcessHasContext: here should fix it."
newProcThreadAffinity < 0 ifTrue: [self assert: newProcThreadAffinity negated = cogThreadManager getVMOwner. vmThread := cogThreadManager ensureWillingThread. self deny: vmThread index = cogThreadManager getVMOwner. self assert: (cogThreadManager threadIndex: vmThread index isCompatibleWith: newProcThreadAffinity)] ifFalse: [vmThread := cogThreadManager vmThreadAt: newProcThreadAffinity. vmThread priority: (self quickFetchInteger: PriorityIndex ofObject: newProc). vmThread vmThreadState = CTMUnavailable ifTrue: [vmThread setVmThreadState: CTMWantingOwnership]]. self returnToSchedulingLoopAndReleaseVMOrWakeThread: vmThread source: CSSwitchIfNeccessary!
Item was changed: ----- Method: CoInterpreterStackPages>>initializeStack:numSlots:pageSize: (in category 'initialization') ----- initializeStack: theStackPages numSlots: stackSlots pageSize: slotsPerPage "Initialize the stack pages. In the C VM theStackPages will be alloca'ed memory to hold the stack pages on the C stack. In the simulator they are housed in the memory between the cogMethodZone and the heap."
<var: #theStackPages type: #'char *'> <returnTypeC: #void> | numPages page structStackPageSize pageStructBase count | <var: #page type: #'StackPage *'> <var: #pageStructBase type: #'char *'> + self cCode: nil - self cCode: [] inSmalltalk: [self assert: objectMemory startOfMemory - coInterpreter effectiveCogCodeSize - Cogit guardPageSize - coInterpreter methodCacheSize - coInterpreter primTraceLogSize - coInterpreter rumpCStackSize = (stackSlots * objectMemory wordSize roundUpTo: objectMemory allocationUnit)]. structStackPageSize := coInterpreter sizeof: CogStackPage. bytesPerPage := slotsPerPage * objectMemory wordSize. numPages := coInterpreter numStkPages.
"Because stack pages grow down baseAddress is at the top of a stack page and so to avoid subtracting BytesPerWord from baseAddress and lastAddress in the init loop below we simply push the stackPage array up one word to avoid the overlap. This word is extraStackBytes." pageStructBase := theStackPages + (numPages * bytesPerPage) + objectMemory wordSize. pages := self cCode: [self cCoerceSimple: pageStructBase to: #'StackPage *'] inSmalltalk: [pageMap := Dictionary new. ((0 to: numPages - 1) collect: [:i| CogStackPage surrogateClass new address: pageStructBase + (i * structStackPageSize) simulator: coInterpreter zoneBase: coInterpreter stackZoneBase zoneLimit: objectMemory startOfMemory]) do: [:pageSurrogate| pageMap at: pageSurrogate address put: pageSurrogate]; yourself]. "make sure there's enough headroom" self assert: coInterpreter stackPageByteSize - coInterpreter stackLimitBytes - coInterpreter stackLimitOffset >= coInterpreter stackPageHeadroom. 0 to: numPages - 1 do: [:index| page := self stackPageAt: index. page lastAddress: theStackPages + (index * bytesPerPage); baseAddress: page lastAddress + bytesPerPage; stackLimit: page baseAddress - coInterpreter stackLimitBytes; realStackLimit: page stackLimit; baseFP: 0; nextPage: (self stackPageAt: (index = (numPages - 1) ifTrue: [0] ifFalse: [index + 1])); prevPage: (self stackPageAt: (index = 0 ifTrue: [numPages - 1] ifFalse: [index - 1]))].
"Now compute stackBasePlus1 so that the pageIndexFor: call maps all addresses from aPage baseAddress to aBase limitAddress + 1 to the same index (stacks grow down)" stackBasePlus1 := (self cCoerceSimple: theStackPages to: #'char *') + 1. + self cCode: nil - self cCode: [] inSmalltalk: [minStackAddress := theStackPages. maxStackAddress := theStackPages + (numPages * bytesPerPage) + objectMemory wordSize - 1].
"The overflow limit is the amount of stack to retain when moving frames from an overflowing stack to reduce thrashing. See stackOverflowOrEvent:mayContextSwitch:" page := self stackPageAt: 0. overflowLimit := page baseAddress - page realStackLimit * 3 // 5. 0 to: numPages - 1 do: [:index| page := self stackPageAt: index. self assert: (self pageIndexFor: page baseAddress) == index. self assert: (self pageIndexFor: page baseAddress - (slotsPerPage - 1 * objectMemory wordSize)) == index. self assert: (self stackPageFor: page baseAddress) == page. self assert: (self stackPageFor: page stackLimit) == page. + self cCode: nil - self cCode: [] inSmalltalk: [| memIndex | memIndex := index * slotsPerPage + 1. "this is memIndex in the block above" self assert: (self memIndexFor: (self oopForPointer: page baseAddress)) == (memIndex + slotsPerPage - 1). index < (numPages - 1) ifTrue: [self assert: (self stackPageFor: page baseAddress + objectMemory wordSize) == (self stackPageAt: index + 1)]]. coInterpreter initializePageTraceToInvalid: page].
mostRecentlyUsedPage := self stackPageAt: 0. page := mostRecentlyUsedPage. count := 0. [| theIndex | count := count + 1. theIndex := self pageIndexFor: page baseAddress. self assert: (self stackPageAt: theIndex) == page. self assert: (self pageIndexFor: page baseAddress) == theIndex. self assert: (self pageIndexFor: page stackLimit) == theIndex. self assert: (self pageIndexFor: page lastAddress + 1) == theIndex. (page := page nextPage) ~= mostRecentlyUsedPage] whileTrue. self assert: count == numPages. self assert: self pageListIsWellFormed!
Item was changed: ----- Method: CoInterpreterStackPages>>whereIsMaybeStackThing: (in category 'debug printing') ----- whereIsMaybeStackThing: anOop "If anOop is an address within the stack zone answer a string stating that, otherwise answer nil." <returnTypeC: 'char *'> + self cCode: nil inSmalltalk: [stackBasePlus1 isNil ifTrue: [^nil]]. - self cCode: '' inSmalltalk: [stackBasePlus1 isNil ifTrue: [^nil]]. (self oop: anOop isGreaterThanOrEqualTo: stackBasePlus1 - 1 andLessThan: (self cCode: [pages] inSmalltalk: [(self stackPageAt: 0) asUnsignedInteger])) ifTrue: [^' is in the stack zone']. ^nil!
Item was changed: ----- Method: CogAbstractInstruction>>jmpTarget: (in category 'accessing') ----- jmpTarget: anAbstractInstruction "Set the target of a jump instruction. These all have the target in the first operand." <returnTypeC: #'AbstractInstruction *'> <var: #anAbstractInstruction type: #'AbstractInstruction *'> + self cCode: nil "check for inadvertent smashing of already-set jmpTargets; development only" - self cCode: [] "check for inadvertent smashing of already-set jmpTargets; development only" inSmalltalk: [self assert: ((operands at: 0) ifNil: [true] ifNotNil: [:o| o = 0 or: [(self isAFixup: o) or: [self isAnInstruction: anAbstractInstruction]]])]. operands at: 0 put: anAbstractInstruction asUnsignedInteger. ^anAbstractInstruction!
Item was changed: ----- Method: CogAbstractInstruction>>resolveJumpTarget (in category 'generate machine code') ----- resolveJumpTarget <var: #fixup type: #'BytecodeFixup *'> | fixup | self assert: self isJump. fixup := cogit cCoerceSimple: (operands at: 0) to: #'BytecodeFixup *'. + self cCode: nil inSmalltalk: - self cCode: [] inSmalltalk: [(fixup isKindOf: CogBytecodeFixup) ifTrue: [self assert: (self isAFixup: fixup)]]. (self isAFixup: fixup) ifTrue: [self assert: (cogit addressIsInInstructions: fixup targetInstruction). self jmpTarget: fixup targetInstruction]!
Item was changed: ----- Method: CogBytecodeFixup>>recordBcpc: (in category 'simulation') ----- recordBcpc: theBytecodePC <inline: true> + self cCode: nil inSmalltalk: - self cCode: '' inSmalltalk: [(bcpc isNil or: [bcpc = theBytecodePC]) ifTrue: [bcpc := theBytecodePC] ifFalse: [bcpc := bcpc isInteger ifTrue: [{bcpc. theBytecodePC}] ifFalse: [(bcpc includes: theBytecodePC) ifTrue: [^self]. bcpc, {theBytecodePC}]]]!
Item was changed: ----- Method: CogMIPSELCompiler>>targetFromITypeAtAddress: (in category 'inline cacheing') ----- targetFromITypeAtAddress: mcpc <var: #offset type: #usqInt> <var: #mcpc type: #usqInt> <returnTypeC: #usqInt> | offset | offset := (objectMemory longAt: mcpc) bitAnd: 16rFFFF. + self cCode: nil inSmalltalk: [offset >= 16r8000 ifTrue: [offset := offset - 16r10000]]. - self cCode: '' inSmalltalk: [offset >= 16r8000 ifTrue: [offset := offset - 16r10000]]. offset := offset << 2. ^mcpc + offset + OneInstruction. "Offset is relative to the delay slot"!
Item was changed: ----- Method: CogMethodZone>>addToOpenPICList: (in category 'accessing') ----- addToOpenPICList: anOpenPIC <var: #anOpenPIC type: #'CogMethod *'> self assert: anOpenPIC isCMOpenPIC. self assert: (openPICList == nil or: [openPICList isCMOpenPIC]). cogit assertValidDualZoneWriteAddress: anOpenPIC. anOpenPIC nextOpenPIC: openPICList asUnsignedInteger. openPICList := cogit cCoerceSimple: anOpenPIC asUnsignedInteger - cogit getCodeToDataDelta to: #'CogMethod *'. + self cCode: nil inSmalltalk: [self deny: openPICList isInteger]! - self cCode: '' inSmalltalk: [self deny: openPICList isInteger]!
Item was changed: ----- Method: CogMethodZone>>allocate: (in category 'allocating') ----- allocate: numBytes | roundedBytes allocation | roundedBytes := numBytes + 7 bitAnd: -8. mzFreeStart + roundedBytes >= self allocationLimit ifTrue: [^0]. allocation := mzFreeStart. mzFreeStart := mzFreeStart + roundedBytes. methodCount := methodCount + 1. + self cCode: nil inSmalltalk: - self cCode: '' inSmalltalk: [(cogit breakPC isInteger and: [cogit breakPC between: allocation and: mzFreeStart]) ifTrue: [cogit singleStep: true]]. ^allocation!
Item was changed: ----- Method: CogMethodZone>>freeOlderMethodsForCompaction (in category 'compaction') ----- freeOlderMethodsForCompaction "Free methods, preferring older methods for compaction, up to some fraction, currently a quarter." | zoneSize amountToFree initialFreeSpace freedSoFar freeableUsage cogMethod | <var: #cogMethod type: #'CogMethod *'> zoneSize := self effectiveLimit - baseAddress. initialFreeSpace := self effectiveLimit - mzFreeStart + methodBytesFreedSinceLastCompaction. freedSoFar := initialFreeSpace. amountToFree := zoneSize // 4. "4 needs to be e.g. a start-up parameter" freeableUsage := 0. + [self cCode: nil - [self cCode: '' inSmalltalk: [coInterpreter transcript nextPutAll: 'freeing methods with usage '; print: freeableUsage; cr; flush]. cogMethod := cogit cCoerceSimple: baseAddress to: #'CogMethod *'. [cogMethod asUnsignedInteger < mzFreeStart and: [freedSoFar < amountToFree]] whileTrue: [(self shouldFreeMethod: cogMethod given: freeableUsage) ifTrue: [self freeMethod: cogMethod. freedSoFar := freedSoFar + cogMethod blockSize]. cogMethod := self methodAfter: cogMethod]. freedSoFar < amountToFree and: [(freeableUsage := freeableUsage + 1) < CMMaxUsageCount]] whileTrue. + self cCode: nil - self cCode: '' inSmalltalk: [coInterpreter transcript nextPutAll: 'Code Compaction freeing '; print: freedSoFar; nextPutAll: ' of '; print: zoneSize; nextPutAll: ' (target: '; print: amountToFree; nextPutAll: ' (newly freed: '; print: freedSoFar - initialFreeSpace; cr; flush]!
Item was changed: ----- Method: CogObjectRepresentationFor64BitSpur>>bitAndByteOffsetOfIsFullBlockBitInto: (in category 'in-line cacheing') ----- bitAndByteOffsetOfIsFullBlockBitInto: aBlock <inline: true> "This supplies the bitmask for the isFullBlock bit, and the offset of the byte containing that bit in a CogMethod header to aBlock. We don't have named variables holding this offset. The following assert tests whether the values are correct by creating a surrogate on an empty ByteArray, setting the bit, and checking that the expected values are set in the ByteArray." + self cCode: nil inSmalltalk: - self cCode: [] inSmalltalk: [| pragma | pragma := (CogMethodSurrogate >> #cpicHasMNUCaseOrCMIsFullBlock) pragmaAt: #bitPosition:width:. self assert: pragma arguments = #(12 "4th bit of 2nd byte after objectHeader" 1)]. aBlock value: 16 value: objectMemory baseHeaderSize + 1 "zero-relative"!
Item was changed: ----- Method: CogObjectRepresentationForSpur>>generateObjectRepresentationTrampolines (in category 'initialization') ----- generateObjectRepresentationTrampolines "Do the store check. Answer the argument for the benefit of the code generator; ReceiverResultReg may be caller-saved and hence smashed by this call. Answering it allows the code generator to reload ReceiverResultReg cheaply. In Spur the only thing we leave to the run-time is adding the receiver to the remembered set and setting its isRemembered bit." self cppIf: IMMUTABILITY ifTrue: + [self cCode: nil inSmalltalk: - [self cCode: [] inSmalltalk: [ceStoreTrampolines := CArrayAccessor on: (Array new: NumStoreTrampolines)]. 0 to: NumStoreTrampolines - 1 do: [:instVarIndex | ceStoreTrampolines at: instVarIndex put: (self genStoreTrampolineCalled: (cogit trampolineName: 'ceStoreTrampoline' numArgs: instVarIndex limit: NumStoreTrampolines - 2) instVarIndex: instVarIndex)]]. ceNewHashTrampoline := self genNewHashTrampoline: false called: 'ceNewHash'. SistaVM ifTrue: [ceInlineNewHashTrampoline := self genNewHashTrampoline: true called: 'ceInlineNewHash']. ceStoreCheckTrampoline := self genStoreCheckTrampoline. ceStoreCheckContextReceiverTrampoline := self genStoreCheckContextReceiverTrampoline. ceScheduleScavengeTrampoline := cogit genTrampolineFor: #ceScheduleScavenge called: 'ceScheduleScavengeTrampoline' regsToSave: CallerSavedRegisterMask. ceSmallActiveContextInMethodTrampoline := self genActiveContextTrampolineLarge: false inBlock: 0 called: 'ceSmallMethodContext'. ceSmallActiveContextInBlockTrampoline := self genActiveContextTrampolineLarge: false inBlock: InVanillaBlock called: 'ceSmallBlockContext'. SistaV1BytecodeSet ifTrue: [ceSmallActiveContextInFullBlockTrampoline := self genActiveContextTrampolineLarge: false inBlock: InFullBlock called: 'ceSmallFullBlockContext']. 
ceLargeActiveContextInMethodTrampoline := self genActiveContextTrampolineLarge: true inBlock: 0 called: 'ceLargeMethodContext'. ceLargeActiveContextInBlockTrampoline := self genActiveContextTrampolineLarge: true inBlock: InVanillaBlock called: 'ceLargeBlockContext'. SistaV1BytecodeSet ifTrue: [ceLargeActiveContextInFullBlockTrampoline := self genActiveContextTrampolineLarge: true inBlock: InFullBlock called: 'ceLargeFullBlockContext']. LowcodeVM ifTrue: [ self generateLowcodeObjectTrampolines ]!
Item was changed: ----- Method: CogThreadManager>>populate:from:to: (in category 'thread set') ----- populate: vmThreadPointers from: start to: finish "Populate vmThreadPointers with vmThreads over the given range." <var: #vmThreadPointers type: #'CogVMThread **'> | nThreads vmThreads | <var: #vmThreads type: #'CogVMThread *'> <var: #vmThread type: #'CogVMThread *'> <inline: true> nThreads := finish - start + 1. vmThreads := self cCode: [self calloc: nThreads _: (self sizeof: CogVMThread)] inSmalltalk: [CArrayAccessor on: ((1 to: nThreads) collect: [:ign| CogVMThread new])]. vmThreads ifNil: [^false]. "Since 0 is not a valid index, in C we allocate one extra CogVMThread and use 1-relative indices." self cCode: [start = 1 ifTrue: [vmThreadPointers at: 0 put: nil]] + inSmalltalk: nil. - inSmalltalk: []. start to: finish do: [:i| | vmThread | vmThread := self addressOf: (vmThreads at: i - start). vmThread initializeThreadState. (self ioNewOSSemaphore: (self addressOf: vmThread osSemaphore put: [:sem| vmThread osSemaphore: sem])) ~= 0 ifTrue: [start to: i - 1 do: [:j| vmThread := self addressOf: (vmThreads at: j - start). self ioDestroyOSSemaphore: (self addressOf: vmThread osSemaphore)]. self free: vmThreads. ^false]. vmThreadPointers at: i put: vmThread. vmThread index: i. + self cCode: nil inSmalltalk: [vmThread reenterThreadSchedulingLoop: ReenterThreadSchedulingLoop new]]. - self cCode: [] inSmalltalk: [vmThread reenterThreadSchedulingLoop: ReenterThreadSchedulingLoop new]]. ^true!
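On the C side, populate:from:to: boils down to one calloc of the whole thread array plus filling a 1-relative pointer table, so that slot 0 (never a valid index) stays empty. A stripped-down sketch under assumed names; the OS-semaphore setup, failure unwinding, and the simulation-only ReenterThreadSchedulingLoop branch are omitted:

```c
#include <assert.h>
#include <stdlib.h>

typedef struct { int index; } CogVMThread;   /* stand-in for the real struct */

/* Allocate finish - start + 1 thread records in one block and record a
   pointer to each in vmThreadPointers at its 1-relative index.
   Answers 0 (false) if the calloc fails. */
static int
populate(CogVMThread **vmThreadPointers, int start, int finish)
{
    int nThreads = finish - start + 1;
    CogVMThread *vmThreads = calloc(nThreads, sizeof(CogVMThread));
    if (!vmThreads)
        return 0;
    if (start == 1)
        vmThreadPointers[0] = NULL;   /* index 0 is never valid */
    for (int i = start; i <= finish; i++) {
        CogVMThread *vmThread = &vmThreads[i - start];
        vmThreadPointers[i] = vmThread;
        vmThread->index = i;
    }
    return 1;
}
```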
Item was changed: ----- Method: CogThreadManager>>saveOwnerSwitchTo:successful: (in category 'logging') ----- saveOwnerSwitchTo: newOwner successful: aBoolean <var: #logEntry type: 'CogVMOwnerLog *'> <var: #currentIndex type: 'int'> <var: #newIndex type: 'int'> | currentIndex newIndex timestamp logEntry | timestamp := coInterpreter ioUTCMicrosecondsNow. currentIndex := self atomic_load: (self addressOf: ownerLogIndex). + self cCode: nil inSmalltalk: [currentIndex := AtomicValue newFrom: currentIndex]. - self cCode: '' inSmalltalk: [currentIndex := AtomicValue newFrom: currentIndex]. [newIndex := currentIndex + 1 \ OwnerLogSize. self atomic: (self addressOf: ownerLogIndex) _compare: (self addressOf: currentIndex) _exchange_strong: newIndex] whileFalse: []. newIndex < currentIndex ifTrue: [ownerLogWrapped := true]. logEntry := (self addressOf: (ownerLog at: (self cCode: [currentIndex] inSmalltalk: [currentIndex value]))). logEntry timestamp: timestamp; successfulSwitch: aBoolean; vmOwner: newOwner.!
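The owner log claims slots with a compare-and-swap loop, which wrapping the index in an AtomicValue now lets the simulation exercise faithfully. Roughly the C11 shape, with assumed names; the real code then stores a timestamp, the new owner, and the success flag into the claimed entry, and notes when the index wraps:

```c
#include <assert.h>
#include <stdatomic.h>

#define OwnerLogSize 8

static _Atomic int ownerLogIndex;

/* Claim the next slot of the circular owner log: retry the CAS until we
   advance ownerLogIndex from the value we read, then answer the slot we
   reserved.  atomic_compare_exchange_strong reloads currentIndex for us
   on failure. */
static int
claimOwnerLogSlot(void)
{
    int currentIndex = atomic_load(&ownerLogIndex);
    int newIndex;
    do {
        newIndex = (currentIndex + 1) % OwnerLogSize;
    } while (!atomic_compare_exchange_strong(&ownerLogIndex, &currentIndex, newIndex));
    return currentIndex;
}
```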
Item was changed: ----- Method: CogThreadManager>>wakeVMThread: (in category 'scheduling') ----- wakeVMThread: vmThread <var: #vmThread type: #'CogVMThread *'> <returnTypeC: #void> | threadState | self assert: (self vmIsOwned and: [(self vmOwnerIs: vmThread index) not]). "Instead of going through a #disownVM: call, directly set the new VM owner. This has the advantage of avoiding a race for the different threads to become the new VM owner. In Simulation, this means we need to simulate a thread-switch." + self cCode: nil inSmalltalk: [ - self cCode: [] inSmalltalk: [ self saveRegisterStateForCurrentProcess. self loadOrInitializeRegisterStateFor: vmThread index]. self setVMOwner: vmThread index.
threadState := vmThread vmThreadState. threadState = CTMUninitialized ifTrue: [(self startThreadForThreadInfo: vmThread) ifFalse: [self releaseVM. "TODO: IS THIS SANE?"]] ifFalse: [self assert: ((threadState = CTMWantingOwnership or: [threadState = CTMAssignableOrInVM]) or: [threadState = CTMInitializing]). self ioSignalOSSemaphore: (self addressOf: vmThread osSemaphore)]. self ioTransferTimeslice!
Item was changed: ----- Method: CogVMThread>>initializeThreadState (in category 'initialize-release') ----- initializeThreadState "Unfortunately this cannot be inlined as Slang otherwise screws up the generation of the `atomic_store` call." <inline: false> "In comparison to #initialize, this is also called in C code to initialize the VMThread, not just in the Smalltalk simulation." + self cCode: nil inSmalltalk: [state := AtomicValue new]. - self cCode: [] inSmalltalk: [state := AtomicValue new]. self atomic_store: (self addressOf: self state) _: CTMUninitialized. self cCode: [awolProcesses := self malloc: AWOLProcessesIncrement * (self sizeof: #sqInt)] inSmalltalk: [awolProcesses := CArrayAccessor on: (Array new: AWOLProcessesIncrement)]. awolProcIndex := 0. awolProcLength := AWOLProcessesIncrement.!
Item was changed: ----- Method: Cogit class>>attemptToComputeTempNamesFor: (in category 'in-image compilation support') ----- attemptToComputeTempNamesFor: aCompiledMethod (aCompiledMethod respondsTo: #tempNames) ifTrue: [| schematicTemps blocks | schematicTemps := aCompiledMethod methodNode schematicTempNamesString. blocks := aCompiledMethod embeddedBlockClosures. InitializationOptions at: #tempNames + put: (Dictionary newFrom: + ({aCompiledMethod initialPC -> (self decomposeSchematicTemps: (schematicTemps copyUpTo: $[))}, - put: (Dictionary newFrom: {aCompiledMethod initialPC -> (self decomposeSchematicTemps: (schematicTemps copyUpTo: $[))}, (blocks ifEmpty: [#()] ifNotEmpty: [aCompiledMethod embeddedBlockClosures with: (schematicTemps first = $[ ifTrue: [schematicTemps piecesCutWhere: [:a :b| b = $[]] ifFalse: [(schematicTemps piecesCutWhere: [:a :b| b = $[]) allButFirst]) + collect: [:c :s| c startpc -> (self decomposeSchematicTemps: (s copyWithoutAll: '[]'))]]) + reject: [:assoc| assoc value isEmpty]))]! - collect: [:c :s| c startpc -> (self decomposeSchematicTemps: (s copyWithoutAll: '[]'))]]))]!
Item was changed: ----- Method: Cogit>>addToMap:instruction:byte:at:for: (in category 'method map') ----- addToMap: annotation instruction: instruction byte: byte at: address for: mcpc <inline: true> self codeByteAt: address put: byte. + self cCode: nil inSmalltalk: - self cCode: [] inSmalltalk: [| s bytecode | (compilationTrace anyMask: 64) ifTrue: [(s := coInterpreter transcript) ensureCr; print: annotation; nextPut: $/; nextPutAll: byte hex; space; nextPutAll: address hex; space; nextPutAll: mcpc hex; space; nextPutAll: (AnnotationConstantNames detect: [:name| (Cogit classPool at: name ifAbsent: []) = annotation]); cr; flush. (instruction notNil and: [instruction bcpc isInteger]) ifTrue: [s tab; print: instruction bcpc; nextPut: $/. instruction bcpc printOn: s base: 16. s space. instruction printStateOn: s. s space. bytecode := objectMemory fetchByte: instruction bcpc ofObject: methodObj. bytecode := bytecode + (self bytecodeSetOffsetForHeader: methodHeader). (self generatorAt: bytecode) printStateOn: s. s cr; flush]]]!
Item was changed: ----- Method: Cogit>>addressIsInCurrentCompilation: (in category 'testing') ----- addressIsInCurrentCompilation: address <inline: true> + self cCode: nil inSmalltalk: [address < 0 ifTrue: [^false]]. - self cCode: '' inSmalltalk: [address < 0 ifTrue: [^false]]. ^address asUnsignedInteger >= methodLabel address and: [address asUnsignedInteger < (methodZone youngReferrers min: methodLabel address + MaxMethodSize)]!
Item was changed: ----- Method: Cogit>>cPICCompactAndIsNowEmpty: (in category 'in-line cacheing') ----- cPICCompactAndIsNowEmpty: cPIC "Scan the CPIC for target methods that have been freed and eliminate them. Since the first entry cannot be eliminated, answer that the PIC should be freed if the first entry is to a free target. Answer if the PIC is now empty or should be freed." <var: #cPIC type: #'CogMethod *'> | pc entryPoint targetMethod targets tags methods used | <var: #targetMethod type: #'CogMethod *'> <var: #tags declareC: 'int tags[MaxCPICCases]'> <var: #targets declareC: 'sqInt targets[MaxCPICCases]'> <var: #methods declareC: 'sqInt methods[MaxCPICCases]'> + self cCode: nil inSmalltalk: - self cCode: '' inSmalltalk: [tags := CArrayAccessor on: (Array new: MaxCPICCases). targets := CArrayAccessor on: (Array new: MaxCPICCases). methods := CArrayAccessor on: (Array new: MaxCPICCases)]. used := 0. 1 to: cPIC cPICNumCases do: [:i| | valid | pc := self addressOfEndOfCase: i inCPIC: cPIC. entryPoint := i = 1 ifTrue: [backEnd jumpLongTargetBeforeFollowingAddress: pc] ifFalse: [backEnd jumpLongConditionalTargetBeforeFollowingAddress: pc]. valid := true. "Collect all target triples except for triples whose entry-point is a freed method" (cPIC containsAddress: entryPoint) ifFalse: [targetMethod := self cCoerceSimple: entryPoint - cmNoCheckEntryOffset to: #'CogMethod *'. self assert: (targetMethod isCMMethodEtAl or: [targetMethod isCMFree]). targetMethod isCMFree ifTrue: [i = 1 ifTrue: [^true]. "cannot filter out the first entry cuz classTag is at point of send." valid := false]]. valid ifTrue: [tags at: used put: (i > 1 ifTrue: [backEnd literal32BeforeFollowingAddress: pc - backEnd jumpLongConditionalByteSize]). targets at: used put: entryPoint. methods at: used put: (backEnd literalBeforeFollowingAddress: pc - (i = 1 ifTrue: [backEnd jumpLongByteSize] ifFalse: [backEnd jumpLongConditionalByteSize + backEnd cmpC32RTempByteSize])). used := used + 1]]. 
used = cPIC cPICNumCases ifTrue: [^false]. used = 0 ifTrue: [^true].
(self writableMethodFor: cPIC) cPICNumCases: used. used = 1 ifTrue: [pc := self addressOfEndOfCase: 2 inCPIC: cPIC. self rewriteCPIC: cPIC caseJumpTo: pc. ^false]. "the first entry cannot change..." 1 to: used - 1 do: [:i| pc := self addressOfEndOfCase: i + 1 inCPIC: cPIC. self rewriteCPICCaseAt: pc tag: (tags at: i) objRef: (methods at: i) target: (targets at: i)].
"finally, rewrite the jump 3 instr before firstCPICCaseOffset to jump to the beginning of this new case" self rewriteCPIC: cPIC caseJumpTo: pc - cPICCaseSize. ^false!
Item was changed: ----- Method: Cogit>>ceCPICMiss:receiver: (in category 'in-line cacheing') ----- ceCPICMiss: cPIC receiver: receiver "Code entry closed PIC miss. A send has fallen through a closed (finite) polymorphic inline cache. Either extend it or patch the send site to an open PIC. The stack looks like: receiver args sp=> sender return address" <var: #cPIC type: #'CogMethod *'> <api> "Marked <api> so the code generator won't delete it." <static: true> | outerReturn newTargetMethodOrNil errorSelectorOrNil cacheTag result | + self cCode: nil - self cCode: '' inSmalltalk: [cPIC isInteger ifTrue: [^self ceCPICMiss: (self cogMethodSurrogateAt: cPIC) receiver: receiver]]. (objectMemory isOopForwarded: receiver) ifTrue: [^coInterpreter ceSendFromInLineCacheMiss: cPIC]. outerReturn := coInterpreter stackTop. self deny: (backEnd inlineCacheTagAt: outerReturn) = self picAbortDiscriminatorValue. cPIC cPICNumCases < MaxCPICCases ifTrue: [self lookup: cPIC selector for: receiver methodAndErrorSelectorInto: [:method :errsel| newTargetMethodOrNil := method. errorSelectorOrNil := errsel]] ifFalse: [newTargetMethodOrNil := errorSelectorOrNil := nil]. "We assume lookupAndCog:for: will *not* reclaim the method zone" self assert: outerReturn = coInterpreter stackTop. self ensureWritableCodeZone. cacheTag := objectRepresentation inlineCacheTagForInstance: receiver. (cPIC cPICNumCases >= MaxCPICCases or: [(errorSelectorOrNil notNil and: [errorSelectorOrNil ~= SelectorDoesNotUnderstand]) or: [(objectRepresentation inlineCacheTagIsYoung: cacheTag) or: [newTargetMethodOrNil isNil or: [objectMemory isYoung: newTargetMethodOrNil]]]]) ifTrue: [result := self patchToOpenPICFor: cPIC selector numArgs: cPIC cmNumArgs receiver: receiver. self assert: result not. "If patchToOpenPICFor:.. returns we're out of code memory" self ensureExecutableCodeZone. ^coInterpreter ceSendFromInLineCacheMiss: cPIC]. "Now extend the PIC with the new case." 
self cogExtendPIC: cPIC CaseNMethod: newTargetMethodOrNil tag: cacheTag isMNUCase: errorSelectorOrNil = SelectorDoesNotUnderstand. self ensureExecutableCodeZone. "Jump back into the pic at its entry in case this is an MNU." coInterpreter executeCogPIC: cPIC fromLinkedSendWithReceiver: receiver andCacheTag: (backEnd inlineCacheTagAt: outerReturn). "NOTREACHED" ^nil!
Item was changed: ----- Method: Cogit>>exclude: (in category 'simulation only') ----- exclude: aMethodObj "For debugging, allow excluding methods based on selector or methodClass. Answer if the method should be excluded." <inline: true> + self cCode: nil inSmalltalk: "for debugging, allow excluding methods based on selector or methodClass" - self cCode: [] inSmalltalk: "for debugging, allow excluding methods based on selector or methodClass" [self class initializationOptions at: #DoNotJIT ifPresent: [:excluded| (excluded anySatisfy: [:exclude| aMethodObj = exclude]) ifTrue: [coInterpreter transcript ensureCr; nextPutAll: 'EXCLUDING '; nextPutAll: aMethodObj; nextPutAll: ' (compiled block)'; cr; flush. ^true]]. (compilationTrace anyMask: 1) ifTrue: [| methodClass | methodClass := coInterpreter nameOfClass: (coInterpreter methodClassOf: aMethodObj). coInterpreter transcript ensureCr; nextPutAll: 'compiling compiled block in '; nextPutAll: methodClass; cr; flush]]. ^false!
Item was changed: ----- Method: Cogit>>exclude:selector: (in category 'simulation only') ----- exclude: aMethodObj selector: aSelectorOop "For debugging, allow excluding methods based on selector or methodClass. Answer if the method should be excluded." <inline: true> + self cCode: nil inSmalltalk: - self cCode: [] inSmalltalk: [| methodClass selector | self class initializationOptions at: #DoNotJIT ifPresent: [:excluded| methodClass := coInterpreter nameOfClass: (coInterpreter methodClassOf: aMethodObj). selector := coInterpreter stringOf: aSelectorOop. (excluded anySatisfy: [:exclude| selector = exclude or: [methodClass = exclude]]) ifTrue: [coInterpreter transcript ensureCr; nextPutAll: 'EXCLUDING '; nextPutAll: methodClass; nextPutAll: '>>#'; nextPutAll: selector; cr; flush. ^true]]. (compilationTrace anyMask: 1) ifTrue: [methodClass := coInterpreter nameOfClass: (coInterpreter methodClassOf: aMethodObj). selector := coInterpreter stringOf: aSelectorOop. selector isEmpty ifTrue: [selector := coInterpreter stringOf: (coInterpreter maybeSelectorOfMethod: aMethodObj)]. coInterpreter transcript ensureCr; nextPutAll: 'compiling '; nextPutAll: methodClass; nextPutAll: '>>#'; nextPutAll: selector; cr; flush]]. ^false!
Item was changed: ----- Method: Cogit>>gen: (in category 'compile abstract instructions') ----- gen: opcode "<Integer>" | abstractInstruction | <inline: false> <returnTypeC: #'AbstractInstruction *'> <var: #abstractInstruction type: #'AbstractInstruction *'> self assert: opcodeIndex < numAbstractOpcodes. abstractInstruction := self abstractInstructionAt: opcodeIndex. opcodeIndex := opcodeIndex + 1. abstractInstruction opcode: opcode. + self cCode: nil inSmalltalk: - self cCode: '' inSmalltalk: [abstractInstruction bcpc: bytecodePC. self maybeBreakGeneratingInstructionWithIndex: opcodeIndex - 1]. ^abstractInstruction!
Item was changed: ----- Method: Cogit>>gen:operand: (in category 'compile abstract instructions') ----- gen: opcode "<Integer>" operand: operand "<Integer|CogAbstractInstruction>" | abstractInstruction | <inline: false> <returnTypeC: #'AbstractInstruction *'> <var: #abstractInstruction type: #'AbstractInstruction *'> self assert: opcodeIndex < numAbstractOpcodes. abstractInstruction := self abstractInstructionAt: opcodeIndex. opcodeIndex := opcodeIndex + 1. abstractInstruction opcode: opcode. abstractInstruction operands at: 0 put: operand. + self cCode: nil inSmalltalk: - self cCode: '' inSmalltalk: [abstractInstruction bcpc: bytecodePC. self maybeBreakGeneratingInstructionWithIndex: opcodeIndex - 1]. ^abstractInstruction!
Item was changed: ----- Method: Cogit>>gen:operand:operand: (in category 'compile abstract instructions') ----- gen: opcode "<Integer>" operand: operandOne "<Integer|CogAbstractInstruction>" operand: operandTwo "<Integer|CogAbstractInstruction>" | abstractInstruction | <inline: false> <returnTypeC: #'AbstractInstruction *'> <var: #abstractInstruction type: #'AbstractInstruction *'> self assert: opcodeIndex < numAbstractOpcodes. abstractInstruction := self abstractInstructionAt: opcodeIndex. opcodeIndex := opcodeIndex + 1. abstractInstruction opcode: opcode. abstractInstruction operands at: 0 put: operandOne. abstractInstruction operands at: 1 put: operandTwo. + self cCode: nil inSmalltalk: - self cCode: '' inSmalltalk: [abstractInstruction bcpc: bytecodePC. self maybeBreakGeneratingInstructionWithIndex: opcodeIndex - 1]. ^abstractInstruction!
Item was changed: ----- Method: Cogit>>gen:operand:operand:operand: (in category 'compile abstract instructions') ----- gen: opcode "<Integer>" operand: operandOne "<Integer|CogAbstractInstruction>" operand: operandTwo "<Integer|CogAbstractInstruction>" operand: operandThree "<Integer|CogAbstractInstruction>" | abstractInstruction | <inline: false> <returnTypeC: #'AbstractInstruction *'> <var: #abstractInstruction type: #'AbstractInstruction *'> self assert: opcodeIndex < numAbstractOpcodes. abstractInstruction := self abstractInstructionAt: opcodeIndex. opcodeIndex := opcodeIndex + 1. abstractInstruction opcode: opcode. abstractInstruction operands at: 0 put: operandOne. abstractInstruction operands at: 1 put: operandTwo. abstractInstruction operands at: 2 put: operandThree. + self cCode: nil inSmalltalk: - self cCode: '' inSmalltalk: [abstractInstruction bcpc: bytecodePC. self maybeBreakGeneratingInstructionWithIndex: opcodeIndex - 1]. ^abstractInstruction!
Item was changed: ----- Method: Cogit>>initializeCodeZoneFrom:upTo: (in category 'initialization') ----- initializeCodeZoneFrom: startAddress upTo: endAddress <api> self initializeBackend. self sqMakeMemoryExecutableFrom: startAddress To: endAddress CodeToDataDelta: (self cppIf: #DUAL_MAPPED_CODE_ZONE ifTrue: [self addressOf: codeToDataDelta put: [:v| codeToDataDelta := v]] ifFalse: [nil]). codeBase := methodZoneBase := startAddress. backEnd stopsFrom: startAddress to: endAddress - 1. + self cCode: nil inSmalltalk: - self cCode: '' inSmalltalk: [self initializeProcessor. backEnd stopsFrom: objectMemory memoryOffset "first word may not exist in the simulator's memory" to: guardPageSize - 1. backEnd has64BitPerformanceCounter ifTrue: [self initializeSimulationIOHighResClockForProfiling]]. methodZone manageFrom: methodZoneBase to: endAddress. self assertValidDualZone. backEnd detectFeatures. self maybeGenerateCacheFlush. "self generateVMOwnerLockFunctions." self genGetLeafCallStackPointers. self generateStackPointerCapture. self generateTrampolines. self computeEntryOffsets. self computeFullBlockEntryOffsets. self generateClosedPICPrototype. self alignMethodZoneBase.
"None of the above is executed beyond ceCheckFeatures, so a bulk flush now is the leanest thing to do." backEnd flushICacheFrom: startAddress to: methodZoneBase asUnsignedInteger. self maybeFlushWritableZoneFrom: startAddress to: methodZoneBase asUnsignedInteger. "Repeat so that now the methodZone ignores the generated run-time." methodZone manageFrom: methodZoneBase to: endAddress. "N.B. this is assumed to be the last thing done in initialization; see Cogit>>initialized. This is done only to compute openPICSize; the generated code is discarded." self generateOpenPICPrototype!
Item was changed: ----- Method: Cogit>>maybeEnableSingleStep (in category 'simulation only') ----- maybeEnableSingleStep <inline: true> + self cCode: nil inSmalltalk: - self cCode: '' inSmalltalk: [singleStep ifFalse: [singleStep := breakPC singleStepRequiredToTriggerIn: self]]!
Item was changed: ----- Method: Cogit>>outputInstructionsForGeneratedRuntimeAt: (in category 'initialization') ----- outputInstructionsForGeneratedRuntimeAt: startAddress "Output instructions generated for one of the generated run-time routines, a trampoline, etc" | size endAddress | <inline: false> self computeMaximumSizes. methodLabel address: startAddress. "for addressIsInCurrentCompilation:" size := self generateInstructionsAt: startAddress. endAddress := self outputInstructionsAt: startAddress. self assert: startAddress + size = endAddress. methodZoneBase := self alignUptoRoutineBoundary: endAddress. backEnd stopsFrom: endAddress to: methodZoneBase - 1. + self cCode: nil inSmalltalk: [methodZone freeStart: methodZoneBase]. - self cCode: '' inSmalltalk: [methodZone freeStart: methodZoneBase]. ^startAddress!
Item was changed: ----- Method: Cogit>>printMethodHeader:on: (in category 'disassembly') ----- printMethodHeader: cogMethod on: aStream <doNotGenerate> + self cCode: nil - self cCode: '' inSmalltalk: [cogMethod isInteger ifTrue: [^self printMethodHeader: (self cogMethodOrBlockSurrogateAt: cogMethod) on: aStream]]. aStream ensureCr. cogMethod asInteger printOn: aStream base: 16. cogMethod isCMMethodEtAl ifTrue: [aStream crtab; nextPutAll: 'objhdr: '. cogMethod objectHeader printOn: aStream base: 16]. cogMethod isCMBlock ifTrue: [aStream crtab; nextPutAll: 'homemth: '. cogMethod cmHomeMethod asUnsignedInteger printOn: aStream base: 16. aStream nextPutAll: ' (offset '; print: cogMethod homeOffset; nextPut: $); crtab; nextPutAll: 'startpc: '; print: cogMethod startpc]. aStream crtab; nextPutAll: 'nArgs: '; print: cogMethod cmNumArgs; tab; nextPutAll: 'type: '; print: cogMethod cmType. (cogMethod cmType ~= 0 and: [cogMethod isCMBlock]) ifTrue: [aStream crtab; nextPutAll: 'blksiz: '. cogMethod blockSize printOn: aStream base: 16. cogMethod isCMMethodEtAl ifTrue: [aStream crtab; nextPutAll: 'method: '. cogMethod methodObject printOn: aStream base: 16. aStream crtab; nextPutAll: 'mthhdr: '. cogMethod methodHeader printOn: aStream base: 16]. aStream crtab; nextPutAll: 'selctr: '. cogMethod selector printOn: aStream base: 16. (coInterpreter lookupAddress: cogMethod selector) ifNotNil: [:string| aStream nextPut: $=; nextPutAll: string]. cogMethod selector = objectMemory nilObject ifTrue: [aStream space; nextPut: $(; nextPutAll: (coInterpreter stringOf: (coInterpreter maybeSelectorOfMethod: cogMethod methodObject)); nextPut: $)]. cogMethod isCMMethodEtAl ifTrue: [aStream crtab; nextPutAll: 'blkentry: '. cogMethod blockEntryOffset printOn: aStream base: 16. cogMethod blockEntryOffset ~= 0 ifTrue: [aStream nextPutAll: ' => '. cogMethod asInteger + cogMethod blockEntryOffset printOn: aStream base: 16]]]. 
cogMethod isCMClosedPIC ifTrue: [aStream crtab; nextPutAll: 'cPICNumCases: '. cogMethod cPICNumCases printOn: aStream base: 16. aStream tab; nextPutAll: 'cpicHasMNUCase: '; nextPutAll: (cogMethod cpicHasMNUCase ifTrue: ['yes'] ifFalse: ['no'])] ifFalse: [aStream crtab; nextPutAll: 'stackCheckOffset: '. cogMethod stackCheckOffset printOn: aStream base: 16. cogMethod stackCheckOffset > 0 ifTrue: [aStream nextPut: $/. cogMethod asInteger + cogMethod stackCheckOffset printOn: aStream base: 16]. cogMethod isCMBlock ifTrue: [aStream crtab; nextPutAll: 'cbUsesInstVars '; nextPutAll: (cogMethod cbUsesInstVars ifTrue: ['yes'] ifFalse: ['no'])] ifFalse: [aStream crtab; nextPutAll: 'cmRefersToYoung: '; nextPutAll: (cogMethod cmRefersToYoung ifTrue: ['yes'] ifFalse: ['no']); tab; nextPutAll: 'cmHasMovableLiteral: '; nextPutAll: (cogMethod cmHasMovableLiteral ifTrue: ['yes'] ifFalse: ['no']); tab; nextPutAll: 'cmIsFullBlock: '; nextPutAll: (cogMethod cmIsFullBlock ifTrue: ['yes'] ifFalse: ['no'])]. cogMethod isCMMethodEtAl ifTrue: [([cogMethod nextMethodOrIRCs] on: MessageNotUnderstood do: [:ex| nil]) ifNotNil: [:nmoircs| aStream crtab; nextPutAll: 'nextMethodOrIRCs: '. nmoircs = 0 ifTrue: [aStream print: nmoircs] ifFalse: [coInterpreter printHex: nmoircs]]. ([cogMethod counters] on: MessageNotUnderstood do: [:ex| nil]) ifNotNil: [:cntrs| aStream crtab; nextPutAll: 'counters: '. cntrs = 0 ifTrue: [aStream print: cntrs] ifFalse: [coInterpreter printHex: cntrs]]]]. aStream cr; flush!
Item was changed: ----- Method: Cogit>>trampolineArgConstant: (in category 'initialization') ----- trampolineArgConstant: booleanIntegerOrNil "Encode true and false and 0 to N such that they can't be confused for register numbers (including NoReg) and can be tested for by isTrampolineArgConstant: and decoded by trampolineArgValue:" <inline: true> + self cCode: nil - self cCode: [] inSmalltalk: [booleanIntegerOrNil isInteger ifFalse: [^self trampolineArgConstant: (booleanIntegerOrNil ifNil: [0] ifNotNil: [booleanIntegerOrNil ifTrue: [1] ifFalse: [0]])]]. self assert: booleanIntegerOrNil >= 0. ^-2 - booleanIntegerOrNil "0...N => -2...-(N+2)"!
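The encoding itself is trivial but worth seeing next to its decoder: 0...N maps to -2...-(N+2), so an encoded constant can never collide with a register number, assuming (as the comment implies) that registers, including NoReg, are never below -1. A C sketch mirroring trampolineArgConstant: and its companions isTrampolineArgConstant: and trampolineArgValue::

```c
#include <assert.h>

/* Encode a small non-negative trampoline argument so it cannot be
   confused for a register number: 0...N => -2...-(N+2). */
static int trampolineArgConstant(int n)         { return -2 - n; }
static int isTrampolineArgConstant(int encoded) { return encoded <= -2; }
static int trampolineArgValue(int encoded)      { return -2 - encoded; }
```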
Item was changed: ----- Method: Cogit>>trampolineName:numArgs:limit: (in category 'initialization') ----- trampolineName: routinePrefix numArgs: numArgs limit: argsLimit "Malloc a string with the contents for the trampoline table" <inline: true> <returnTypeC: #'char *'> <var: #routinePrefix type: #'char *'> <var: #numArgs type: #int> | theString | <var: #theString type: #'char *'> + self cCode: nil inSmalltalk: - self cCode: '' inSmalltalk: [^routinePrefix, (numArgs <= argsLimit ifTrue: [numArgs printString] ifFalse: ['N']), 'Args']. theString := self malloc: (self strlen: routinePrefix) + 6. self s: theString pr: '%s%cArgs' in: routinePrefix tf: (numArgs <= argsLimit ifTrue: [$0 + numArgs] ifFalse: [$N]). ^theString!
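The C branch mallocs strlen(routinePrefix) + 6 bytes: one for the digit (or 'N'), four for "Args", and one for the terminating NUL. A self-contained approximation of what the `self s:pr:in:tf:` sprintf idiom produces:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Build "<prefix><n>Args" on the heap, collapsing to 'N' once numArgs
   exceeds argsLimit, as trampolineName:numArgs:limit: does in C. */
static char *
trampolineName(const char *routinePrefix, int numArgs, int argsLimit)
{
    char *theString = malloc(strlen(routinePrefix) + 6);
    if (theString)
        sprintf(theString, "%s%cArgs", routinePrefix,
                numArgs <= argsLimit ? '0' + numArgs : 'N');
    return theString;
}
```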
Item was changed: ----- Method: CroquetPlugin>>primitiveAdj3 (in category 'transforms') ----- primitiveAdj3 "Computes the adjoint of the Matrix4x4 receiver, placing the results in the Matrix4x4 argument." <export: true> <primitiveMetadata: #(FastCPrimitiveFlag FastCPrimitiveAlignForFloatsFlag)> | srcOop src dstOop dst m11 m12 m13 m21 m22 m23 m31 m32 m33 c11 c12 c13 c21 c22 c23 c31 c32 c33 | <var: #c11 declareC: 'const int c11 = 0'> <var: #c12 declareC: 'const int c12 = 1'> <var: #c13 declareC: 'const int c13 = 2'> "<var: #c14 declareC: 'const int c14 = 3'>" <var: #c21 declareC: 'const int c21 = 4'> <var: #c22 declareC: 'const int c22 = 5'> <var: #c23 declareC: 'const int c23 = 6'> "<var: #c24 declareC: 'const int c24 = 7'>" <var: #c31 declareC: 'const int c31 = 8'> <var: #c32 declareC: 'const int c32 = 9'> <var: #c33 declareC: 'const int c33 = 10'> "<var: #c34 declareC: 'const int c34 = 11'>" <var: #m11 type: #double> <var: #m12 type: #double> <var: #m13 type: #double> <var: #m21 type: #double> <var: #m22 type: #double> <var: #m23 type: #double> <var: #m31 type: #double> <var: #m32 type: #double> <var: #m33 type: #double>
"then we need the following no-op to make Smalltalk shut up about vars not being initted." + self cCode: nil inSmalltalk: [ - self cCode: '' inSmalltalk: [ c11 := 0. c12 := 1. c13 := 2. "c14 := 3." c21 := 4. c22 := 5. c23 := 6. "c24 := 7." c31 := 8. c32 := 9. c33 := 10. "c34 := 11." ].
"NOTE: the bottom row of a OpenGL-ordered matrix is always 0 0 0 1, so we don't need consts here for those elements."
srcOop := interpreterProxy stackObjectValue: 1. dstOop := interpreterProxy stackObjectValue: 0. interpreterProxy failed ifTrue: [^nil]. src := self cCoerce: (interpreterProxy firstIndexableField: srcOop) to: #'float *'. dst := self cCoerce: (interpreterProxy firstIndexableField: dstOop) to: #'float *'.
"read in the source matrix 3x3, which contains the encoded rotation and scale factors" m11 := src at: c11. m12 := src at: c12. m13 := src at: c13. m21 := src at: c21. m22 := src at: c22. m23 := src at: c23. m31 := src at: c31. m32 := src at: c32. m33 := src at: c33.
"do the actual work"
"compute our cofactors and transpose. adj = transpose of cofactors" dst at: c11 put: ((m22 * m33) - (m23 * m32)) . dst at: c21 put: (0.0 - ((m21 * m33) - (m23 * m31))). dst at: c31 put: ((m21 * m32) - (m22 * m31)).
dst at: c12 put: (0.0 - ((m12 * m33) - (m13 * m32))). dst at: c22 put: ((m11 * m33) - (m13 * m31)). dst at: c32 put: (0.0 - ((m11 * m32) - (m12 * m31))).
dst at: c13 put: ((m12 * m23) - (m13 * m22)). dst at: c23 put: (0.0 - ((m11 * m23) - (m13 * m21))). dst at: c33 put: ((m11 * m22) - (m12 * m21)). ^interpreterProxy methodReturnValue: dstOop!
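For reference, here is the cofactor-and-transpose computation above extracted into plain C with doubles; it follows the plugin's 4x4 layout (c11=0, c12=1, c13=2, c21=4, ... c33=10, with the translation column at indices 3/7/11 and the implicit bottom row untouched). This is a sketch for checking the algebra, not the plugin's float-typed code:

```c
/* Classical adjoint of the upper 3x3: transpose of the cofactor matrix,
   written straight into the destination with the same index constants
   primitiveAdj3 uses. */
static void
adjoint3x3(const double m[11], double a[11])
{
    double m11 = m[0], m12 = m[1], m13 = m[2];
    double m21 = m[4], m22 = m[5], m23 = m[6];
    double m31 = m[8], m32 = m[9], m33 = m[10];

    a[0]  =   m22*m33 - m23*m32;    /* c11 */
    a[4]  = -(m21*m33 - m23*m31);   /* c21 */
    a[8]  =   m21*m32 - m22*m31;    /* c31 */
    a[1]  = -(m12*m33 - m13*m32);   /* c12 */
    a[5]  =   m11*m33 - m13*m31;    /* c22 */
    a[9]  = -(m11*m32 - m12*m31);   /* c32 */
    a[2]  =   m12*m23 - m13*m22;    /* c13 */
    a[6]  = -(m11*m23 - m13*m21);   /* c23 */
    a[10] =   m11*m22 - m12*m21;    /* c33 */
}
```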
Item was changed: ----- Method: CroquetPlugin>>primitiveInverseByAdjoint (in category 'transforms') ----- primitiveInverseByAdjoint "Computes the inverse of the Matrix4x4 receiver, using the 'classical adjoint' method, placing the results in the Matrix4x4 argument." <export: true> <primitiveMetadata: #(FastCPrimitiveFlag FastCPrimitiveAlignForFloatsFlag)> | srcOop src dstOop dst det m11 m12 m13 m21 m22 m23 m31 m32 m33 c11 c12 c13 c14 c21 c22 c23 c24 c31 c32 c33 c34 x y z | <var: #c11 declareC: 'const int c11 = 0'> <var: #c12 declareC: 'const int c12 = 1'> <var: #c13 declareC: 'const int c13 = 2'> <var: #c14 declareC: 'const int c14 = 3'> <var: #c21 declareC: 'const int c21 = 4'> <var: #c22 declareC: 'const int c22 = 5'> <var: #c23 declareC: 'const int c23 = 6'> <var: #c24 declareC: 'const int c24 = 7'> <var: #c31 declareC: 'const int c31 = 8'> <var: #c32 declareC: 'const int c32 = 9'> <var: #c33 declareC: 'const int c33 = 10'> <var: #c34 declareC: 'const int c34 = 11'> <var: #m11 type: #double> <var: #m12 type: #double> <var: #m13 type: #double> <var: #m21 type: #double> <var: #m22 type: #double> <var: #m23 type: #double> <var: #m31 type: #double> <var: #m32 type: #double> <var: #m33 type: #double> <var: #x type:#double> <var: #y type:#double> <var: #z type:#double> <var: #det type:#double>
"then we need the following no-op to make Smalltalk shut up about vars not being initted." + self cCode: nil inSmalltalk: [ - self cCode: '' inSmalltalk: [ c11 := 0. c12 := 1. c13 := 2. c14 := 3. c21 := 4. c22 := 5. c23 := 6. c24 := 7. c31 := 8. c32 := 9. c33 := 10. c34 := 11. ].
"NOTE: the bottom row of a OpenGL-ordered matrix is always 0 0 0 1, so we don't need consts here for those elements."
"do the dance to get our receiver and argument" srcOop := interpreterProxy stackValue: 1. dstOop := interpreterProxy stackValue: 0. interpreterProxy failed ifTrue: [^nil]. src := self cCoerce: (interpreterProxy firstIndexableField: srcOop) to: #'float *'. dst := self cCoerce: (interpreterProxy firstIndexableField: dstOop) to: #'float *'.
"read in the source matrix 3x3, which contains the encoded rotation and scale factors" m11 := src at: c11. m12 := src at: c12. m13 := src at: c13. m21 := src at: c21. m22 := src at: c22. m23 := src at: c23. m31 := src at: c31. m32 := src at: c32. m33 := src at: c33.
"read in the source translation vector" x := src at: c14. y := src at: c24. z := src at: c34.
"do the actual work"
"first, compute the determinant of the upper 3x3 of the source" det := ( m11 * ((m22 * m33) - (m23 * m32))) + (m12 * ((m23 * m31) - (m21 * m33))) + (m13 * ((m21 * m32) - (m22 * m31))).
"Compute the classical adjunct of the source, and divide by the source determinant storing in the destination. adjoint = transpose of cofactors, so we'll transpose as we store."
det := 1 / det. "let's make div by det a multiply" dst at: c11 put: ((m22 * m33) - (m23 * m32)) * det . dst at: c21 put: (0.0 - ((m21 * m33) - (m23 * m31))) * det. dst at: c31 put: ((m21 * m32) - (m22 * m31)) * det.
dst at: c12 put: (0.0 - ((m12 * m33) - (m13 * m32))) * det. dst at: c22 put: ((m11 * m33) - (m13 * m31)) * det. dst at: c32 put: (0.0 - ((m11 * m32) - (m12 * m31))) * det.
dst at: c13 put: ((m12 * m23) - (m13 * m22)) * det. dst at: c23 put: (0.0 - ((m11 * m23) - (m13 * m21))) * det. dst at: c33 put: ((m11 * m22) - (m12 * m21)) * det. "finally, apply the inversed rotation transform to our translation" "read in the source matrix 3x3" m11 := dst at: c11. m12 := dst at: c12. m13 := dst at: c13. m21 := dst at: c21. m22 := dst at: c22. m23 := dst at: c23. m31 := dst at: c31. m32 := dst at: c32. m33 := dst at: c33.
dst at: c14 put: 0.0 - ((x * m11) + (y * m12) + (z * m13)). dst at: c24 put: 0.0 - ((x * m21) + (y * m22) + (z * m23)). dst at: c34 put: 0.0 - ((x * m31) + (y * m32) + (z * m33)).
^interpreterProxy methodReturnValue: dstOop!
Item was changed: ----- Method: FilePlugin>>primitiveDirectoryEntry (in category 'directory primitives') ----- primitiveDirectoryEntry
"Two arguments - directory path, and simple file name; returns an array (see primitiveDirectoryLookup) describing the file or directory, or nil if it does not exist. Note that in general, the directory path name must not contain syntactic sugar for the current platform (e.g., '.' or '..', or on Windows, forward slashes instead of backslashes). These conventions are only fully supported on Unix platforms; on Windows, they are only supported for short non-UNC file paths containing max 260 characters (for the full path concatenated from the directory path, a backslash, and the file name): DON'T: primitiveDirectoryEntry '<very long path>\foo.' ... DON'T: primitiveDirectoryEntry '<very long path>\foo..' ... DON'T (on Windows): primitiveDirectoryEntry '<very long path>\foo/bar' ... See the comment in sqWin32Directory.c for more details. Primitive fails if the outer path does not identify a readable directory. (This is a lookup-by-name variant of primitiveDirectoryLookup.)"
| requestedName pathName pathNameIndex pathNameSize status entryName entryNameSize createDate modifiedDate dirFlag posixPermissions symlinkFlag fileSize okToList reqNameIndex reqNameSize | <var: 'entryName' declareC: 'char entryName[256]'> <var: 'pathNameIndex' type: #'char *'> <var: 'reqNameIndex' type: #'char *'> <var: 'fileSize' type: #squeakFileOffsetType> <export: true>
requestedName := interpreterProxy stackValue: 0. pathName := interpreterProxy stackValue: 1. (interpreterProxy isBytes: pathName) ifFalse: [^interpreterProxy primitiveFail].
"Outbound string parameters" pathNameIndex := interpreterProxy firstIndexableField: pathName. pathNameSize := interpreterProxy byteSizeOf: pathName.
reqNameIndex := interpreterProxy firstIndexableField: requestedName. reqNameSize := interpreterProxy byteSizeOf: requestedName. + self cCode: nil inSmalltalk: - self cCode: '' inSmalltalk: [entryName := ByteString new: 256. entryNameSize := createDate := modifiedDate := dirFlag := fileSize := posixPermissions := symlinkFlag := nil]. "If the security plugin can be loaded, use it to check for permission. If not, assume it's ok" okToList := sCLPfn ~= 0 ifTrue: [self cCode: '((sqInt (*)(char *, sqInt))sCLPfn)(pathNameIndex, pathNameSize)' inSmalltalk: [true]] ifFalse: [true]. status := okToList ifTrue: [self dir_EntryLookup: pathNameIndex _: pathNameSize _: reqNameIndex _: reqNameSize _: entryName _: (self addressOf: entryNameSize put: [:v| entryNameSize := v]) _: (self addressOf: createDate put: [:v| createDate := v]) _: (self addressOf: modifiedDate put: [:v| modifiedDate := v]) _: (self addressOf: dirFlag put: [:v| dirFlag := v]) _: (self addressOf: fileSize put: [:v| fileSize := v]) _: (self addressOf: posixPermissions put: [:v| posixPermissions := v]) _: (self addressOf: symlinkFlag put: [:v| symlinkFlag := v])] ifFalse: [DirNoMoreEntries].
interpreterProxy failed ifTrue: [^nil]. status = DirNoMoreEntries ifTrue: "no entry; return nil" [interpreterProxy "pop pathName, index, rcvr" pop: 3 thenPush: interpreterProxy nilObject. ^nil]. status = DirBadPath ifTrue: [^interpreterProxy primitiveFail]."bad path"
interpreterProxy pop: 3 "pop pathName, index, rcvr" thenPush: (self cppIf: PharoVM ifTrue: [self makeDirEntryName: entryName size: entryNameSize createDate: createDate modDate: modifiedDate isDir: dirFlag fileSize: fileSize posixPermissions: posixPermissions isSymlink: symlinkFlag] ifFalse: [self makeDirEntryName: entryName size: entryNameSize createDate: createDate modDate: modifiedDate isDir: dirFlag fileSize: fileSize])!
Item was changed: ----- Method: FilePlugin>>primitiveDirectoryLookup (in category 'directory primitives') ----- primitiveDirectoryLookup "Two arguments - directory path, and an index to an item; returns an array (see primitiveDirectoryLookup) describing the file or directory, or nil if it does not exist. Note that in general, the directory path must not contain syntactic sugar for the current platform (e.g., '.' or '..', or on Windows, forward slashes instead of backslashes). These conventions are only fully supported on Unix platforms; on Windows, they are only supported for short non-UNC file paths containing max 260 characters: DON'T: primitiveDirectoryLookup '<very long path>' 'foo.' DON'T: primitiveDirectoryLookup '<very long path>' 'foo..' DON'T (on Windows): primitiveDirectoryLookup '<very long path>' 'foo/bar' DO: primitiveDirectoryLookup '<very long path>' '.' See the comment in sqWin32Directory.c for more details. Primitive fails if the outer path does not identify a readable directory. (For a lookup-by-name variant, see primitiveDirectoryEntry.)"
| index pathName pathNameIndex pathNameSize status entryName entryNameSize createDate modifiedDate dirFlag symlinkFlag posixPermissions fileSize okToList | <var: 'entryName' declareC: 'char entryName[256]'> <var: 'pathNameIndex' type: #'char *'> <var: 'fileSize' type: #squeakFileOffsetType> <export: true>
index := interpreterProxy stackIntegerValue: 0. pathName := interpreterProxy stackValue: 1. (interpreterProxy isBytes: pathName) ifFalse: [^interpreterProxy primitiveFail]. pathNameIndex := interpreterProxy firstIndexableField: pathName. pathNameSize := interpreterProxy byteSizeOf: pathName. + self cCode: nil inSmalltalk: - self cCode: '' inSmalltalk: [entryName := ByteString new: 256. entryNameSize := createDate := modifiedDate := dirFlag := fileSize := posixPermissions := symlinkFlag := nil]. "If the security plugin can be loaded, use it to check for permission. If not, assume it's ok" okToList := sCLPfn ~= 0 ifTrue: [self cCode: '((sqInt (*)(char *, sqInt))sCLPfn)(pathNameIndex, pathNameSize)' inSmalltalk: [true]] ifFalse: [true]. status := okToList ifTrue: [self dir_Lookup: pathNameIndex _: pathNameSize _: index _: entryName _: (self addressOf: entryNameSize put: [:v| entryNameSize := v]) _: (self addressOf: createDate put: [:v| createDate := v]) _: (self addressOf: modifiedDate put: [:v| modifiedDate := v]) _: (self addressOf: dirFlag put: [:v| dirFlag := v]) _: (self addressOf: fileSize put: [:v| fileSize := v]) _: (self addressOf: posixPermissions put: [:v| posixPermissions := v]) _: (self addressOf: symlinkFlag put: [:v| symlinkFlag := v])] ifFalse: [DirNoMoreEntries]. interpreterProxy failed ifTrue: [^nil]. status = DirNoMoreEntries ifTrue: "no more entries; return nil" [interpreterProxy "pop pathName, index, rcvr" pop: 3 thenPush: interpreterProxy nilObject. ^nil]. status = DirBadPath ifTrue: [^interpreterProxy primitiveFail]."bad path"
interpreterProxy pop: 3 "pop pathName, index, rcvr" thenPush: (self cppIf: PharoVM ifTrue: [self makeDirEntryName: entryName size: entryNameSize createDate: createDate modDate: modifiedDate isDir: dirFlag fileSize: fileSize posixPermissions: posixPermissions isSymlink: symlinkFlag] ifFalse: [self makeDirEntryName: entryName size: entryNameSize createDate: createDate modDate: modifiedDate isDir: dirFlag fileSize: fileSize])!
Item was changed: ----- Method: FilePlugin>>primitiveFileStdioHandles (in category 'file primitives') ----- primitiveFileStdioHandles "Answer an Array of file handles for standard in, standard out and standard error, with nil in entries that are unavailable, e.g. because the platform does not provide standard error, etc. Fail if an error occurs determining the stdio handles, if the security plugin denies access or if memory runs out." <export: true> | fileRecords result validMask | <var: 'fileRecords' declareC: 'SQFile fileRecords[3]'> + self cCode: nil inSmalltalk: [fileRecords := Array new: 3]. - self cCode: '' inSmalltalk: [fileRecords := Array new: 3]. sHFAfn ~= 0 ifTrue: [(self cCode: ' ((sqInt (*)(void))sHFAfn)()' inSmalltalk: [true]) ifFalse: [^interpreterProxy primitiveFailFor: PrimErrUnsupported]]. validMask := self sqFileStdioHandlesInto: fileRecords. validMask < 0 ifTrue: [^interpreterProxy primitiveFailForOSError: validMask]. result := interpreterProxy instantiateClass: interpreterProxy classArray indexableSize: 3. result = nil ifTrue: [^interpreterProxy primitiveFailFor: PrimErrNoMemory]. interpreterProxy pushRemappableOop: result. 0 to: 2 do: [:index| (validMask bitAnd: (1 << index)) ~= 0 ifTrue: [result := interpreterProxy instantiateClass: interpreterProxy classByteArray indexableSize: self fileRecordSize. result = nil ifTrue: [interpreterProxy popRemappableOop. ^interpreterProxy primitiveFailFor: PrimErrNoMemory]. interpreterProxy storePointer: index ofObject: interpreterProxy topRemappableOop withValue: result. self cCode: [self memcpy: (interpreterProxy firstIndexableField: result) _: (self addressOf: (fileRecords at: index)) _: self fileRecordSize] inSmalltalk: [(interpreterProxy firstIndexableField: result) unitSize: interpreterProxy wordSize; at: 0 put: (fileRecords at: index + 1)]]]. "In the non-Spur threaded VM ensure the handles are old, so that sqFileReadIntoAt is unaffected by incremental GCs. 
See platforms/Cross/plugins/FilePlugin/sqFilePluginBasicPrims.c. The Spur VM uses pinning, so it doesn't need the GC." self cppIf: COGMTVM ifTrue: [self cppIf: SPURVM ifFalse: [interpreterProxy fullGC]]. result := interpreterProxy popRemappableOop. interpreterProxy methodReturnValue: result!
Item was changed: ----- Method: IA32ABIPlugin>>primInIOProcessEventsFlagAddress (in category 'primitives-Windows-VM-specific') ----- primInIOProcessEventsFlagAddress "Answer the address of the int inIOProcessEvents flag. This can be used to disable invocation of ioProcessEvents and is for backward-compatibility. Please use the core VM primitiveEventProcessingControl in new code." | inIOProcessEvents | <export: true> <var: 'inIOProcessEvents' declareC: 'extern int inIOProcessEvents'> + self cCode: nil inSmalltalk: [inIOProcessEvents = 0]. - self cCode: '' inSmalltalk: [inIOProcessEvents = 0]. interpreterProxy methodReturnValue: (self positiveMachineIntegerFor: (self addressOf: inIOProcessEvents) asUnsignedInteger)!
Item was changed: ----- Method: Interpreter>>bytecodePrimGreaterOrEqual (in category 'common selector sends') ----- bytecodePrimGreaterOrEqual | rcvr arg aBool | rcvr := self internalStackValue: 1. arg := self internalStackValue: 0. (self areIntegers: rcvr and: arg) ifTrue: + [self cCode: nil inSmalltalk: [^self booleanCheat: (self integerValueOf: rcvr) >= (self integerValueOf: arg)]. - [self cCode: '' inSmalltalk: [^self booleanCheat: (self integerValueOf: rcvr) >= (self integerValueOf: arg)]. ^self booleanCheat: rcvr >= arg].
successFlag := true. aBool := self primitiveFloatGreaterOrEqual: rcvr toArg: arg. successFlag ifTrue: [^self booleanCheat: aBool].
messageSelector := self specialSelector: 5. argumentCount := 1. self normalSend!
Item was changed: ----- Method: Interpreter>>bytecodePrimGreaterThan (in category 'common selector sends') ----- bytecodePrimGreaterThan | rcvr arg aBool | rcvr := self internalStackValue: 1. arg := self internalStackValue: 0. (self areIntegers: rcvr and: arg) ifTrue: ["The C code can avoid detagging since tagged integers are still signed. But this means the simulator must override to do detagging." + self cCode: nil inSmalltalk: [^self booleanCheat: (self integerValueOf: rcvr) > (self integerValueOf: arg)]. - self cCode: '' inSmalltalk: [^self booleanCheat: (self integerValueOf: rcvr) > (self integerValueOf: arg)]. ^self booleanCheat: rcvr > arg].
successFlag := true. aBool := self primitiveFloatGreater: rcvr thanArg: arg. successFlag ifTrue: [^self booleanCheat: aBool].
messageSelector := self specialSelector: 3. argumentCount := 1. self normalSend!
Item was changed: ----- Method: Interpreter>>bytecodePrimLessOrEqual (in category 'common selector sends') ----- bytecodePrimLessOrEqual | rcvr arg aBool | rcvr := self internalStackValue: 1. arg := self internalStackValue: 0. (self areIntegers: rcvr and: arg) ifTrue: + [self cCode: nil inSmalltalk: [^self booleanCheat: (self integerValueOf: rcvr) <= (self integerValueOf: arg)]. - [self cCode: '' inSmalltalk: [^self booleanCheat: (self integerValueOf: rcvr) <= (self integerValueOf: arg)]. ^ self booleanCheat: rcvr <= arg].
successFlag := true. aBool := self primitiveFloatLessOrEqual: rcvr toArg: arg. successFlag ifTrue: [^self booleanCheat: aBool].
messageSelector := self specialSelector: 4. argumentCount := 1. self normalSend!
Item was changed: ----- Method: Interpreter>>bytecodePrimLessThan (in category 'common selector sends') ----- bytecodePrimLessThan | rcvr arg aBool | rcvr := self internalStackValue: 1. arg := self internalStackValue: 0. (self areIntegers: rcvr and: arg) ifTrue: ["The C code can avoid detagging since tagged integers are still signed. But this means the simulator must override to do detagging." + self cCode: nil inSmalltalk: [^self booleanCheat: (self integerValueOf: rcvr) < (self integerValueOf: arg)]. - self cCode: '' inSmalltalk: [^self booleanCheat: (self integerValueOf: rcvr) < (self integerValueOf: arg)]. ^ self booleanCheat: rcvr < arg].
successFlag := true. aBool := self primitiveFloatLess: rcvr thanArg: arg. successFlag ifTrue: [^ self booleanCheat: aBool].
messageSelector := self specialSelector: 2. argumentCount := 1. self normalSend!
Item was changed: ----- Method: Interpreter>>fetchIntegerOrTruncFloat:ofObject: (in category 'utilities') ----- fetchIntegerOrTruncFloat: fieldIndex ofObject: objectPointer "Return the integer value of the given field of the given object. If the field contains a Float, truncate it and return its integral part. Fail if the given field does not contain a small integer or Float, or if the truncated Float is out of the range of small integers." "Note: May be called by translated primitive code."
| intOrFloat floatVal frac trunc | <inline: false> <var: #floatVal type: 'double '> <var: #frac type: 'double '> <var: #trunc type: 'double '>
intOrFloat := self fetchPointer: fieldIndex ofObject: objectPointer. (self isIntegerObject: intOrFloat) ifTrue: [^ self integerValueOf: intOrFloat]. self assertClassOf: intOrFloat is: (self splObj: ClassFloat). successFlag ifTrue: [ + self cCode: nil inSmalltalk: [floatVal := Float new: 2]. - self cCode: '' inSmalltalk: [floatVal := Float new: 2]. self fetchFloatAt: intOrFloat + self baseHeaderSize into: floatVal. self cCode: 'frac = modf(floatVal, &trunc)'. "the following range check is for C ints, with range -2^31..2^31-1" self flag: #Dan. "The ranges are INCORRECT if SmallIntegers are wider than 31 bits." self cCode: 'success((-2147483648.0 <= trunc) && (trunc <= 2147483647.0))'.]. successFlag ifTrue: [^ self cCode: '((sqInt) trunc)' inSmalltalk: [floatVal truncated]] ifFalse: [^ 0]. !
Item was changed: ----- Method: Interpreter>>primitiveBeCursor (in category 'I/O primitives') ----- primitiveBeCursor "Set the cursor to the given shape. The Mac only supports 16x16 pixel cursors. Cursor offsets are handled by Smalltalk."
| cursorObj maskBitsIndex maskObj bitsObj extentX extentY depth offsetObj offsetX offsetY cursorBitsIndex ourCursor |
argumentCount = 0 ifTrue: [ cursorObj := self stackTop. maskBitsIndex := nil]. argumentCount = 1 ifTrue: [ cursorObj := self stackValue: 1. maskObj := self stackTop]. self success: (argumentCount < 2).
self success: ((self isPointers: cursorObj) and: [(self lengthOf: cursorObj) >= 5]). successFlag ifTrue: [ bitsObj := self fetchPointer: 0 ofObject: cursorObj. extentX := self fetchInteger: 1 ofObject: cursorObj. extentY := self fetchInteger: 2 ofObject: cursorObj. depth := self fetchInteger: 3 ofObject: cursorObj. offsetObj := self fetchPointer: 4 ofObject: cursorObj]. self success: ((self isPointers: offsetObj) and: [(self lengthOf: offsetObj) >= 2]).
successFlag ifTrue: [ offsetX := self fetchInteger: 0 ofObject: offsetObj. offsetY := self fetchInteger: 1 ofObject: offsetObj.
(argumentCount = 0 and: [depth = 32]) ifTrue: [ "Support arbitrary-sized 32 bit ARGB forms --bf 3/1/2007 23:51" self success: ((extentX > 0) and: [extentY > 0]). self success: ((offsetX >= (extentX * -1)) and: [offsetX <= 0]). self success: ((offsetY >= (extentY * -1)) and: [offsetY <= 0]). cursorBitsIndex := bitsObj + self baseHeaderSize. self success: ((self isWords: bitsObj) and: [(self lengthOf: bitsObj) = (extentX * extentY)]). + self cCode: nil inSmalltalk: - self cCode: '' inSmalltalk: [ourCursor := Cursor extent: extentX @ extentY depth: 32 fromArray: ((1 to: extentX * extentY) collect: [:i | self fetchLong32: i-1 ofObject: bitsObj]) offset: offsetX @ offsetY]] ifFalse: [ self success: ((extentX = 16) and: [extentY = 16 and: [depth = 1]]). self success: ((offsetX >= -16) and: [offsetX <= 0]). self success: ((offsetY >= -16) and: [offsetY <= 0]). self success: ((self isWords: bitsObj) and: [(self lengthOf: bitsObj) = 16]). cursorBitsIndex := bitsObj + self baseHeaderSize. + self cCode: nil inSmalltalk: - self cCode: '' inSmalltalk: [ourCursor := Cursor extent: extentX @ extentY fromArray: ((1 to: 16) collect: [:i | ((self fetchLong32: i-1 ofObject: bitsObj) >> 16) bitAnd: 16rFFFF]) offset: offsetX @ offsetY]]].
argumentCount = 1 ifTrue: [ self success: ((self isPointers: maskObj) and: [(self lengthOf: maskObj) >= 5]). successFlag ifTrue: [ bitsObj := self fetchPointer: 0 ofObject: maskObj. extentX := self fetchInteger: 1 ofObject: maskObj. extentY := self fetchInteger: 2 ofObject: maskObj. depth := self fetchInteger: 3 ofObject: maskObj].
successFlag ifTrue: [ self success: ((extentX = 16) and: [extentY = 16 and: [depth = 1]]). self success: ((self isWords: bitsObj) and: [(self lengthOf: bitsObj) = 16]). maskBitsIndex := bitsObj + self baseHeaderSize]].
successFlag ifTrue: [ argumentCount = 0 ifTrue: [ depth = 32 ifTrue: [(self cCode: 'ioSetCursorARGB(cursorBitsIndex, extentX, extentY, offsetX, offsetY)' inSmalltalk: [ourCursor show. Cursor currentCursor == ourCursor]) ifFalse: [^self success: false. ]] ifFalse: [self cCode: 'ioSetCursor(cursorBitsIndex, offsetX, offsetY)' inSmalltalk: [ourCursor show]]] ifFalse: [self cCode: 'ioSetCursorWithMask(cursorBitsIndex, maskBitsIndex, offsetX, offsetY)' inSmalltalk: [cursorBitsIndex == maskBitsIndex. "placate compiler" ourCursor show]]. self pop: argumentCount]!
Item was changed: ----- Method: Interpreter>>primitiveExternalCall (in category 'plugin primitives') ----- primitiveExternalCall "Call an external primitive. The external primitive methods contain as first literal an array consisting of: * The module name (String | Symbol) * The function name (String | Symbol) * The session ID (SmallInteger) [OBSOLETE] * The function index (Integer) in the externalPrimitiveTable For fast failures the primitive index of any method where the external prim is not found is rewritten in the method cache with zero. This allows for ultra fast responses as long as the method stays in the cache. The fast failure response relies on lkupClass being properly set. This is done in #addToMethodCacheSel:class:method:primIndex: to compensate for execution of methods that are looked up in a superclass (such as in primitivePerformAt). With the latest modifications (e.g., actually flushing the function addresses from the VM), the session ID is obsolete. But for backward compatibility it is still kept around. Also, a failed lookup is reported specially. If a method has been looked up and not been found, the function address is stored as -1 (e.g., the SmallInteger -1 to distinguish from 16rFFFFFFFF which may be returned from the lookup). It is absolutely okay to remove the rewrite if we run into any problems later on. It has an approximate speed difference of 30% per failed primitive call which may be noticeable but if, for any reason, we run into problems (like with J3) we can always remove the rewrite. " | lit addr moduleName functionName moduleLength functionLength index | <var: #addr type: 'void *'> "Fetch the first literal of the method" self success: (self literalCountOf: newMethod) > 0. "@@: Could this be omitted for speed?!!" successFlag ifFalse: [^ nil].
lit := self literal: 0 ofMethod: newMethod. "Check if it's an array of length 4" self success: ((self isArray: lit) and: [(self lengthOf: lit) = 4]). successFlag ifFalse: [^ nil].
"Look at the function index in case it has been loaded before" index := self fetchPointer: 3 ofObject: lit. index := self checkedIntegerValueOf: index. successFlag ifFalse: [^ nil]. "Check if we have already looked up the function and failed." index < 0 ifTrue: ["Function address was not found in this session, Rewrite the mcache entry with a zero primitive index." self rewriteMethodCacheSel: messageSelector class: lkupClass primIndex: 0. ^ self success: false].
"Try to call the function directly" (index > 0 and: [index <= MaxExternalPrimitiveTableSize]) ifTrue: [addr := externalPrimitiveTable at: index - 1. addr ~= 0 ifTrue: [self rewriteMethodCacheSel: messageSelector class: lkupClass primIndex: (1000 + index) primFunction: addr. self callExternalPrimitive: addr. ^ nil]. "if we get here, then an index to the external prim was kept on the ST side although the underlying prim table was already flushed" ^ self primitiveFail].
"Clean up session id and external primitive index" self storePointerUnchecked: 2 ofObject: lit withValue: ConstZero. self storePointerUnchecked: 3 ofObject: lit withValue: ConstZero.
"The function has not been loaded yet. Fetch module and function name." moduleName := self fetchPointer: 0 ofObject: lit. moduleName = nilObj ifTrue: [moduleLength := 0] ifFalse: [self success: (self isBytes: moduleName). moduleLength := self lengthOf: moduleName. + self cCode: nil inSmalltalk: - self cCode: '' inSmalltalk: [ (#('FloatArrayPlugin' 'Matrix2x3Plugin') includes: (self stringOf: moduleName)) ifTrue: [moduleLength := 0 "Cause all of these to fail"]]]. functionName := self fetchPointer: 1 ofObject: lit. self success: (self isBytes: functionName). functionLength := self lengthOf: functionName. successFlag ifFalse: [^ nil].
addr := self ioLoadExternalFunction: functionName + self baseHeaderSize OfLength: functionLength FromModule: moduleName + self baseHeaderSize OfLength: moduleLength. addr = 0 ifTrue: [index := -1] ifFalse: ["add the function to the external primitive table" index := self addToExternalPrimitiveTable: addr]. self success: index >= 0. "Store the index (or -1 if failure) back in the literal" self storePointerUnchecked: 3 ofObject: lit withValue: (self integerObjectOf: index).
"If the function has been successfully loaded process it" (successFlag and: [addr ~= 0]) ifTrue: [self rewriteMethodCacheSel: messageSelector class: lkupClass primIndex: (1000 + index) primFunction: addr. self callExternalPrimitive: addr] ifFalse: ["Otherwise rewrite the primitive index" self rewriteMethodCacheSel: messageSelector class: lkupClass primIndex: 0]!
Item was changed: ----- Method: Interpreter>>signed32BitValueOf: (in category 'primitive support') ----- signed32BitValueOf: oop "Convert the given object into an integer value. The object may be either a positive ST integer or a four-byte LargeInteger." | sz value largeClass negative | <inline: false> <returnTypeC: #int> <var: #value type: #int> (self isIntegerObject: oop) ifTrue: [^self integerValueOf: oop]. largeClass := self fetchClassOf: oop. largeClass = self classLargePositiveInteger ifTrue:[negative := false] ifFalse:[largeClass = self classLargeNegativeInteger ifTrue:[negative := true] ifFalse:[^self primitiveFail]]. sz := self lengthOf: oop. sz = 4 ifFalse: [^ self primitiveFail]. value := (self fetchByte: 0 ofObject: oop) + ((self fetchByte: 1 ofObject: oop) << 8) + ((self fetchByte: 2 ofObject: oop) << 16) + ((self fetchByte: 3 ofObject: oop) << 24). + self cCode: nil - self cCode: [] inSmalltalk: [(value anyMask: 16r80000000) ifTrue: [value := value - 16r100000000]]. "Filter out values out of range for the signed interpretation such as 16rFFFFFFFF (positive w/ bit 32 set) and -16rFFFFFFFF (negative w/ bit 32 set). Since the sign is implicit in the class we require that the high bit of the magnitude is not set which is a simple test here. Note that we have to handle the most negative 32-bit value -2147483648 specially." value < 0 ifTrue: [self assert: (self sizeof: value) == 4. "Don't fail for -16r80000000/-2147483648 Alas the simple (negative and: [value - 1 > 0]) isn't adequate since in C the result of signed integer overflow is undefined and hence under optimization this may fail. The shift, however, is well-defined." (negative and: [0 = (self cCode: [value << 1] inSmalltalk: [value << 1 bitAnd: (1 << 32) - 1])]) ifTrue: [^value]. ^self primitiveFail]. ^negative ifTrue: [0 - value] ifFalse: [value]!
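The range-check comment above is subtle: the magnitude's high bit must be clear, except for the single value -16r80000000, and the naive C test `value - 1 > 0` is unusable because signed overflow is undefined in C. A small Python model of the decode-and-range-check logic (hypothetical helper name; Python integers are unbounded, so the 32-bit reinterpretation and the shift test are emulated with explicit masking):

```python
def decode_signed32(magnitude_bytes, negative):
    """Model of signed32BitValueOf:'s range check: assemble a 4-byte
    little-endian magnitude, then accept it only if it fits a signed
    32-bit value, allowing -16r80000000 as the one special case.
    Returns the signed value, or None where the primitive would fail."""
    value = sum(b << (8 * i) for i, b in enumerate(magnitude_bytes))
    if value & 0x80000000:            # emulate C's reinterpretation as signed int
        value -= 0x100000000
    if value < 0:                     # magnitude has bit 31 set
        # Only -2147483648 is acceptable; detect it via the shift-to-zero
        # test rather than the overflow-prone `value - 1 > 0`
        if negative and ((value << 1) & 0xFFFFFFFF) == 0:
            return value
        return None
    return -value if negative else value
```

For example, magnitude 16r80000000 is accepted only when the receiver is a LargeNegativeInteger, while magnitude 16rFFFFFFFF fails for either sign.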
Item was changed: ----- Method: Interpreter>>signed64BitValueOf: (in category 'primitive support') ----- signed64BitValueOf: oop "Convert the given object into an integer value. The object may be either a positive ST integer or an eight-byte LargeInteger." | sz value largeClass negative szsqLong | <inline: false> <returnTypeC: #sqLong> <var: #value type: #sqLong> (self isIntegerObject: oop) ifTrue: [^self cCoerce: (self integerValueOf: oop) to: #sqLong]. largeClass := self fetchClassOfNonImm: oop. largeClass = self classLargePositiveInteger ifTrue:[negative := false] ifFalse:[largeClass = self classLargeNegativeInteger ifTrue:[negative := true] ifFalse:[^self primitiveFail]]. szsqLong := self sizeof: #sqLong. sz := self lengthOf: oop. sz > szsqLong ifTrue: [^ self primitiveFail]. value := 0. 0 to: sz - 1 do: [:i | value := value + ((self cCoerce: (self fetchByte: i ofObject: oop) to: #sqLong) << (i*8))]. "Filter out values out of range for the signed interpretation such as 16rFFFFFFFF... (positive w/ bit 64 set) and -16rFFFFFFFF... (negative w/ bit 64 set). Since the sign is implicit in the class we require that the high bit of the magnitude is not set which is a simple test here. Note that we have to handle the most negative 64-bit value -9223372036854775808 specially." + self cCode: nil - self cCode: [] inSmalltalk: [(value anyMask: 16r8000000000000000) ifTrue: [value := value - 16r10000000000000000]]. value < 0 ifTrue: [self cCode: [self assert: (self sizeof: value) == 8. self assert: (self sizeof: value << 1) == 8]. "Don't fail for -9223372036854775808/-16r8000000000000000. Alas the simple (negative and: [value - 1 > 0]) isn't adequate since in C the result of signed integer overflow is undefined and hence under optimization this may fail. The shift, however, is well-defined." (negative and: [0 = (self cCode: [value << 1] inSmalltalk: [value << 1 bitAnd: (1 << 64) - 1])]) ifTrue: [^value]. ^self primitiveFail]. ^negative ifTrue:[0 - value] ifFalse:[value]!
Item was changed: ----- Method: InterpreterPlugin>>halt (in category 'debugging') ----- halt + self cCode: nil inSmalltalk: [^super halt "avoid the ^0 below"]. - self cCode: '' inSmalltalk: [^super halt "avoid the ^0 below"]. ^0!
Item was changed: ----- Method: InterpreterPrimitives>>getAttributeString: (in category 'system control primitives') ----- getAttributeString: index + "Fetch the system attribute with the given integer ID. The result is a string, or nil if the attribute is not defined. + Firstly, allow attributes to be overridden explicitly using anInterpreter systemAttributes at: n put: v. + Secondly, compute attributes relevant to the simulator in use. + Thirdly, fall back on the underlying vm's implementation." - "Fetch the system attribute with the given integer ID. The result is a string, or nil if the attribute is not defined." <doNotGenerate> + | describer | + "0 is the vm executable name, e.g. '/Applications/Squeak.app/Contents/MacOS/Squeak'" + "1 is the image name; should be set by openOn:extraMemory:" + "2 to n, max 999, are vm arguments; need to be set explicitly." self systemAttributes ifNotNil: [:systemAttributes| + systemAttributes at: index ifPresent: [:attributeString| ^attributeString]]. + (index between: 2 and: 999) ifTrue: + [^nil]. + describer := [:class| (CCodeGenerator monticelloDescriptionFor: class), ' ', (Date fromSeconds: class timeStamp)]. + index caseOf: { + "1001 is OS type: unix, win32, mac, ..." + "1002 is OS name, e.g. solaris2.5, 1207.4 on macOS (12.7.4), winXP on windows, etc" + [1003] -> [self cogit ~~ self "i.e. have a cogit" ifTrue: + [^self cogit backEnd class ISA]]. + "1004 is interpreterVersionString, e.g. 'Open Smalltalk Cog[Spur] VM [CoInterpreterPrimitives VMMaker.oscog-mt.3362] 5.20240419.0112'" + "1005 is window system name, Aqua, X11, Windows, etc" + "1006 is vm build string, a compiler specific value, e.g. 'Mac OS X built on Apr 19 2024 00:51:32 PDT Compiler: Apple LLVM 14.0.0 (clang-1400.0.29.202)'" + [1007] -> [^describer value: self class theNonSimulatorClass]. + [1008] -> [self cogit ~~ self "i.e. 
have a cogit" ifTrue: + [^describer value: self cogit class theNonSimulatorClass]] + "1009 is source tree version, repository specific, e.g. 'VM: 202404190112 eliot@Machado.local:oscogvm Date: Thu Apr 18 21:12:04 2024 CommitHash: 5473cde Plugins: 202404190112 eliot@Machado.local:oscogvm'" } + otherwise: []. - (systemAttributes at: index ifAbsent: []) ifNotNil: - [:attributeString| ^attributeString]]. ^Smalltalk vm getSystemAttribute: index!
Item was changed: ----- Method: InterpreterPrimitives>>primitiveBeCursor (in category 'I/O primitives') ----- primitiveBeCursor "Set the cursor to the given shape. The Mac only supports 16x16 pixel cursors. Cursor offsets are handled by Smalltalk."
| cursorObj maskBitsIndex maskObj bitsObj extentX extentY depth offsetObj offsetX offsetY cursorBitsIndex ourCursor |
argumentCount = 0 ifTrue: [ cursorObj := self stackTop. maskBitsIndex := nil]. argumentCount = 1 ifTrue: [ cursorObj := self stackValue: 1. maskObj := self stackTop]. self success: argumentCount < 2.
self successful ifTrue: [ self success: ((objectMemory isPointers: cursorObj) and: [(objectMemory lengthOf: cursorObj) >= 5])]. self successful ifTrue: [ bitsObj := objectMemory fetchPointer: 0 ofObject: cursorObj. extentX := self fetchInteger: 1 ofObject: cursorObj. extentY := self fetchInteger: 2 ofObject: cursorObj. depth := self fetchInteger: 3 ofObject: cursorObj. offsetObj := objectMemory fetchPointer: 4 ofObject: cursorObj. self success: ((objectMemory isPointers: offsetObj) and: [(objectMemory lengthOf: offsetObj) >= 2])].
self successful ifTrue: [ offsetX := self fetchInteger: 0 ofObject: offsetObj. offsetY := self fetchInteger: 1 ofObject: offsetObj. (argumentCount = 0 and: [depth = 32]) ifTrue: [ "Support arbitrary-sized 32 bit ARGB forms --bf 3/1/2007 23:51" self success: ((extentX > 0) and: [extentY > 0]). self success: ((offsetX >= (extentX * -1)) and: [offsetX <= 0]). self success: ((offsetY >= (extentY * -1)) and: [offsetY <= 0]). self success: ((objectMemory isWords: bitsObj) and: [(objectMemory lengthOf: bitsObj) = (extentX * extentY)]). cursorBitsIndex := bitsObj + objectMemory baseHeaderSize. + self cCode: nil inSmalltalk: - self cCode: '' inSmalltalk: [ourCursor := Cursor extent: extentX @ extentY depth: 32 fromArray: ((1 to: extentX * extentY) collect: [:i | objectMemory fetchLong32: i-1 ofObject: bitsObj]) offset: offsetX @ offsetY]] ifFalse: [ self success: ((extentX = 16) and: [extentY = 16 and: [depth = 1]]). self success: ((offsetX >= -16) and: [offsetX <= 0]). self success: ((offsetY >= -16) and: [offsetY <= 0]). self success: ((objectMemory isWords: bitsObj) and: [(objectMemory lengthOf: bitsObj) = 16]). cursorBitsIndex := bitsObj + objectMemory baseHeaderSize. + self cCode: nil inSmalltalk: - self cCode: '' inSmalltalk: [ourCursor := Cursor extent: extentX @ extentY fromArray: ((1 to: 16) collect: [:i | ((objectMemory fetchLong32: i-1 ofObject: bitsObj) >> (objectMemory wordSize*8 - 16)) bitAnd: 16rFFFF]) offset: offsetX @ offsetY]]].
argumentCount = 1 ifTrue: [ self success: ((objectMemory isPointers: maskObj) and: [(objectMemory lengthOf: maskObj) >= 5]). self successful ifTrue: [ bitsObj := objectMemory fetchPointer: 0 ofObject: maskObj. extentX := self fetchInteger: 1 ofObject: maskObj. extentY := self fetchInteger: 2 ofObject: maskObj. depth := self fetchInteger: 3 ofObject: maskObj].
self successful ifTrue: [ self success: ((extentX = 16) and: [extentY = 16 and: [depth = 1]]). self success: ((objectMemory isWords: bitsObj) and: [(objectMemory lengthOf: bitsObj) = 16]). maskBitsIndex := bitsObj + objectMemory baseHeaderSize]].
self successful ifTrue: [ argumentCount = 0 ifTrue: [ depth = 32 ifTrue: [(self cCode: 'ioSetCursorARGB(cursorBitsIndex, extentX, extentY, offsetX, offsetY)' inSmalltalk: [ourCursor show. Cursor currentCursor == ourCursor]) ifFalse: [^self success: false]] ifFalse: [self cCode: 'ioSetCursor(cursorBitsIndex, offsetX, offsetY)' inSmalltalk: [ourCursor show]]] ifFalse: [self cCode: 'ioSetCursorWithMask(cursorBitsIndex, maskBitsIndex, offsetX, offsetY)' inSmalltalk: [cursorBitsIndex == maskBitsIndex. "placate compiler" ourCursor show]]. self pop: argumentCount]!
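The recurring change in this commit is `cCode: nil inSmalltalk: aBlock` replacing `cCode: []` / `cCode: ''`: the C translator emits nothing for the first argument, while the simulator evaluates the block. A toy model of that idiom, in Python rather than Slang (the `IN_SIMULATION` flag and `c_code` helper are hypothetical stand-ins, not VMMaker names):

```python
IN_SIMULATION = True  # hypothetical flag standing in for "running under the simulator"

def c_code(c_fragment, in_smalltalk):
    """Toy model of Slang's cCode:inSmalltalk:. When translating to C, the
    fragment (or nothing at all, for None/nil) is emitted verbatim; when
    simulating, the block is evaluated instead. Only the simulation side is
    modeled here -- the real translator never executes the C fragment."""
    if IN_SIMULATION:
        return in_smalltalk()
    return c_fragment  # emitted as source text by the translator, not run

# Simulation-only work: no C is generated, the block runs in the simulator.
result = c_code(None, lambda: "simulated")
assert result == "simulated"
```

The space/time saving the commit message mentions is on the translator side: a `nil` first argument is cheaper to store and skip than an empty block or empty string.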
Item was changed: ----- Method: InterpreterPrimitives>>primitiveBitShift (in category 'arithmetic integer primitives') ----- primitiveBitShift "Perform a bitShift. In 32-bits deal only with non-negative 32-bit integers. In 64-bits deal with signed 64-bit quantities (max (2^63)-1)." | integerReceiver integerArgument shifted | <var: #integerReceiver type: #sqInt> integerArgument := self stackTop. (objectMemory isIntegerObject: integerArgument) ifFalse: [^self primitiveFail]. integerReceiver := self stackValue: 1. objectMemory wordSize = 4 ifTrue: [integerReceiver := self positive32BitValueOf: integerReceiver] ifFalse: [integerReceiver := self signed64BitValueOf: integerReceiver]. self successful ifTrue: [(integerArgument := objectMemory integerValueOf: integerArgument) >= 0 ifTrue: "Left shift -- must fail if bits would be lost" [integerArgument <= objectMemory numSmallIntegerBits ifFalse: [^self primitiveFail]. shifted := integerReceiver << integerArgument. + self cCode: nil inSmalltalk: [shifted := objectMemory wordSize = 4 - self cCode: '' inSmalltalk: [shifted := objectMemory wordSize = 4 ifTrue: [shifted signedIntFromLong] ifFalse: [shifted signedIntFromLong64]]. integerReceiver = (objectMemory wordSize = 4 ifTrue: [shifted >> integerArgument] ifFalse: [shifted >>> integerArgument]) ifFalse: [^self primitiveFail]] ifFalse: "Right shift -- OK to lose bits" [integerArgument >= objectMemory numSmallIntegerBits negated ifFalse: [^self primitiveFail]. shifted := objectMemory wordSize = 4 ifTrue: [integerReceiver >> (0 - integerArgument)] ifFalse: [integerReceiver >>> (0 - integerArgument)]]. shifted := objectMemory wordSize = 4 ifTrue: [self positive32BitIntegerFor: shifted] ifFalse: [(objectMemory isIntegerValue: shifted) ifTrue: [objectMemory integerObjectOf: shifted] ifFalse: [self signed64BitIntegerFor: shifted]]. self pop: 2 thenPush: shifted]!
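primitiveBitShift detects a lossy left shift by shifting back and comparing with the receiver. A sketch of that round-trip check in Python (not the VM's generated C; `word_bits` and the truncation mirror the `signedIntFromLong64` step in the simulation branch):

```python
def shifted_without_loss(receiver, shift, word_bits=64):
    """Sketch of primitiveBitShift's left-shift guard: shift, truncate to the
    word size as C arithmetic would, reinterpret as signed, then shift back.
    If the round trip does not recover the receiver, bits were lost and the
    primitive must fail (returns None here)."""
    mask = (1 << word_bits) - 1
    shifted = (receiver << shift) & mask
    if shifted >= 1 << (word_bits - 1):      # reinterpret as a signed word
        shifted -= 1 << word_bits
    if (shifted >> shift) != receiver:       # arithmetic shift back
        return None                          # primitive fails: bits lost
    return shifted

assert shifted_without_loss(3, 2) == 12
assert shifted_without_loss(-1, 1) == -2
assert shifted_without_loss(1, 63) is None   # shifts into the sign bit
```

The real primitive additionally fails early when the shift count exceeds `numSmallIntegerBits`, before this check is reached.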
Item was changed: ----- Method: InterpreterPrimitives>>primitiveGetNextEvent (in category 'I/O primitives') ----- primitiveGetNextEvent "Primitive. Return the next input event from the VM event queue." | evtBuf arg value eventTypeIs | <var: #evtBuf declareC:'sqIntptr_t evtBuf[8] = { 0, 0, 0, 0, 0, 0, 0, 0 }'> + self cCode: nil inSmalltalk: [evtBuf := objectMemory newInputEventAccessorOfSize: 8]. - self cCode: [] inSmalltalk: [evtBuf := objectMemory newInputEventAccessorOfSize: 8]. arg := self stackTop. ((objectMemory isArray: arg) and:[(objectMemory slotSizeOf: arg) = 8]) ifFalse: [^self primitiveFailFor: PrimErrBadArgument].
self ioGetNextEvent: (self cCoerce: evtBuf to: 'sqInputEvent*'). self successful ifFalse: [^nil].
(eventTraceMask ~= 0 and: [eventTraceMask anyMask: (1 bitShift: (evtBuf at: 0))]) ifTrue: [self traceInputEvent: evtBuf].
"Event type" eventTypeIs := evtBuf at: 0. self storeInteger: 0 ofObject: arg withValue: (evtBuf at: 0). self successful ifFalse: [^nil].
eventTypeIs = 6 ifTrue: "Event is Complex, assume evtBuf is populated correctly and return" [1 to: 7 do: [:i | value := evtBuf at: i. self storePointer: i ofObject: arg withValue: value]] ifFalse: ["Event time stamp" self storeInteger: 1 ofObject: arg withValue: ((evtBuf at: 1) bitAnd: MillisecondClockMask). self successful ifFalse: [^nil].
"Event arguments" 2 to: 7 do:[:i| value := evtBuf at: i. (objectMemory isIntegerValue: value) ifTrue:[self storeInteger: i ofObject: arg withValue: value] ifFalse: [value := self positiveMachineIntegerFor: value. objectMemory storePointer: i ofObject: arg withValue: value]]].
self successful ifTrue: [self pop: 1]!
Item was changed: ----- Method: InterpreterPrimitives>>primitiveGreaterOrEqual (in category 'arithmetic integer primitives') ----- primitiveGreaterOrEqual | integerReceiver integerArgument | integerArgument := self stackTop. integerReceiver := self stackValue: 1. (objectMemory areIntegers: integerReceiver and: integerArgument) + ifTrue: [self cCode: nil inSmalltalk: - ifTrue: [self cCode: '' inSmalltalk: [integerReceiver := objectMemory integerValueOf: integerReceiver. integerArgument := objectMemory integerValueOf: integerArgument]. self pop: 2 thenPushBool: integerReceiver >= integerArgument] ifFalse: [self primitiveFail]!
Item was changed: ----- Method: InterpreterPrimitives>>primitiveGreaterThan (in category 'arithmetic integer primitives') ----- primitiveGreaterThan | integerReceiver integerArgument | integerArgument := self stackTop. integerReceiver := self stackValue: 1. (objectMemory areIntegers: integerReceiver and: integerArgument) + ifTrue: [self cCode: nil inSmalltalk: - ifTrue: [self cCode: '' inSmalltalk: [integerReceiver := objectMemory integerValueOf: integerReceiver. integerArgument := objectMemory integerValueOf: integerArgument]. self pop: 2 thenPushBool: integerReceiver > integerArgument] ifFalse: [self primitiveFail]!
Item was changed: ----- Method: InterpreterPrimitives>>primitiveLessOrEqual (in category 'arithmetic integer primitives') ----- primitiveLessOrEqual | integerReceiver integerArgument | integerArgument := self stackTop. integerReceiver := self stackValue: 1. (objectMemory areIntegers: integerReceiver and: integerArgument) + ifTrue: [self cCode: nil inSmalltalk: - ifTrue: [self cCode: '' inSmalltalk: [integerReceiver := objectMemory integerValueOf: integerReceiver. integerArgument := objectMemory integerValueOf: integerArgument]. self pop: 2 thenPushBool: integerReceiver <= integerArgument] ifFalse: [self primitiveFail]!
Item was changed: ----- Method: InterpreterPrimitives>>primitiveLessThan (in category 'arithmetic integer primitives') ----- primitiveLessThan | integerReceiver integerArgument | integerArgument := self stackTop. integerReceiver := self stackValue: 1. (objectMemory areIntegers: integerReceiver and: integerArgument) + ifTrue: [self cCode: nil inSmalltalk: - ifTrue: [self cCode: '' inSmalltalk: [integerReceiver := objectMemory integerValueOf: integerReceiver. integerArgument := objectMemory integerValueOf: integerArgument]. self pop: 2 thenPushBool: integerReceiver < integerArgument] ifFalse: [self primitiveFail]!
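In the four comparison primitives above, untagging via `integerValueOf:` now happens only in simulation (`cCode: nil`); the generated C compares the tagged oops directly. That works because SmallInteger tagging is strictly monotonic, so tagged signed comparison agrees with untagged comparison. A hedged sketch (the 1-bit tag shown matches 32-bit Squeak; other formats use wider tags, but the monotonicity argument is the same):

```python
import itertools

def tag_small_int(v):
    """SmallInteger tagging as on 32-bit Squeak: (value << 1) | 1."""
    return (v << 1) | 1

# Tagging preserves signed order, so comparing tagged oops gives the same
# answer as comparing the untagged values -- which is why the C side can
# skip integerValueOf: entirely.
values = [-5, -1, 0, 1, 7]
for a, b in itertools.product(values, repeat=2):
    assert (a < b) == (tag_small_int(a) < tag_small_int(b))
```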
Item was changed: ----- Method: InterpreterPrimitives>>primitiveRelinquishProcessor (in category 'I/O primitives') ----- primitiveRelinquishProcessor "Relinquish the processor for up to the given number of microseconds. The exact behavior of this primitive is platform dependent."
| microSecs | microSecs := self stackIntegerValue: 0. self successful ifTrue: "DO NOT allow relinquishing the processor while we are profiling since this may skew the time base for our measures (it may reduce processor speed etc). Instead we go full speed, therefore measuring the precise time we spend in the inner idle loop as a busy loop." [nextProfileTick = 0 ifTrue: [self ioRelinquishProcessorForMicroseconds: microSecs. "In simulation we allow ioRelinquishProcessorForMicroseconds: to fail so that we can arrange that the simulator responds to input events promptly. This *DOES NOT HAPPEN* in the real vm." + self cCode: nil inSmalltalk: [primFailCode ~= 0 ifTrue: [^self]]]. - self cCode: [] inSmalltalk: [primFailCode ~= 0 ifTrue: [^self]]]. self pop: 1] "microSecs; leave rcvr on stack"!
Item was changed: ----- Method: InterpreterPrimitives>>primitiveShortAt (in category 'sound primitives') ----- primitiveShortAt "Treat the receiver, which can be indexible by either bytes or words, as an array of signed 16-bit values. Answer the contents of the given index. Note that the index specifies the i-th 16-bit entry, not the i-th byte or word." <primitiveMetadata: #PrimCallOnSmalltalkStack> | index rcvr value | index := self stackTop. (objectMemory isIntegerObject: index) ifFalse: [^self primitiveFailFor: PrimErrBadArgument]. rcvr := self stackValue: 1. (objectMemory isWordsOrBytes: rcvr) ifFalse: [^self primitiveFailFor: PrimErrInappropriate]. index := objectMemory integerValueOf: index. ((index >= 1) and: [index <= (objectMemory num16BitUnitsOf: rcvr)]) ifFalse: [^self primitiveFailFor: PrimErrBadIndex]. value := objectMemory fetchShort16: index - 1 ofObject: rcvr. + self cCode: nil - self cCode: [] inSmalltalk: [value > 32767 ifTrue: [value := value - 65536]]. self pop: 2 thenPushInteger: value!
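The `cCode: nil` branch in primitiveShortAt is simulation-only sign extension: in C, `fetchShort16:ofObject:` already yields a signed 16-bit value, but the simulator fetches an unsigned one. A sketch of that adjustment:

```python
def signed_from_short16(raw):
    """Sign-extend an unsigned 16-bit fetch, as the simulation branch does:
    values above 32767 represent negative 16-bit quantities."""
    assert 0 <= raw <= 0xFFFF
    return raw - 0x10000 if raw > 0x7FFF else raw

assert signed_from_short16(1) == 1
assert signed_from_short16(0x8000) == -32768
assert signed_from_short16(0xFFFF) == -1
```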
Item was changed: ----- Method: InterpreterPrimitives>>primitiveUtcWithOffset (in category 'system control primitives') ----- primitiveUtcWithOffset "Answer an array with UTC microseconds since the Posix epoch and the current seconds offset from GMT in the local time zone. An empty two element array (or any object with two or more slots) may be supplied as a parameter. This is a named (not numbered) primitive in the null module (ie the VM)" | epochDelta resultArray | <export: true> <var: #epochDelta declareC: 'static usqLong epochDelta = 2177452800000000ULL'> + self cCode: nil inSmalltalk: [epochDelta := 2177452800000000]. - self cCode: '' inSmalltalk: [epochDelta := 2177452800000000]. argumentCount > 0 ifTrue: [argumentCount > 1 ifTrue: [^self primitiveFailFor: PrimErrBadNumArgs]. resultArray := self stackTop. ((objectMemory isPointers: resultArray) and: [(objectMemory lengthOf: resultArray) >= 2]) ifFalse: [^self primitiveFailFor: PrimErrBadArgument]] ifFalse: [resultArray := objectMemory instantiateClass: objectMemory classArray indexableSize: 2]. "N.B. No pushRemappableOop:/popRemappableOop in Cog because positive64BitIntegerFor: et al use eeInstantiate... allocators which are guaranteed not to do a GC." objectMemory storePointerUnchecked: 1 ofObject: resultArray withValue: (objectMemory integerObjectOf: self ioLocalSecondsOffset); storePointer: 0 ofObject: resultArray withValue: (self positive64BitIntegerFor: self ioUTCMicrosecondsNow - epochDelta). self pop: argumentCount + 1 thenPush: resultArray!
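The `epochDelta` constant 2177452800000000 in primitiveUtcWithOffset is the number of microseconds between the Smalltalk epoch (1 January 1901 UTC) and the Posix epoch (1 January 1970 UTC): 69 years, 17 of which are leap years. A quick cross-check of the arithmetic:

```python
from datetime import datetime, timezone

smalltalk_epoch = datetime(1901, 1, 1, tzinfo=timezone.utc)
posix_epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)

delta_us = int((posix_epoch - smalltalk_epoch).total_seconds()) * 1_000_000
assert delta_us == 2177452800000000

# The same number by hand: 69 years, 17 of them leap (1904..1968; 1900 is not).
assert (69 * 365 + 17) * 86400 * 1_000_000 == 2177452800000000
```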
Item was changed: ----- Method: InterpreterStackPages>>initializeStack:numSlots:pageSize: (in category 'initialization') ----- initializeStack: theStackPages numSlots: stackSlots pageSize: slotsPerPage "Initialize the stack pages. For testing I want stack addresses to be disjoint from normal memory addresses so stack addresses are negative. The first address is -pageSize bytes. So for example if there are 1024 bytes per page and 3 pages then the pages are organized as
byte address: -1024 <-> -2047 | -2048 <-> -3071 | -3072 <-> -4096 | page 3 page 2 page 1 mem index: 769 <-> 513 | 512 <-> 257 | 256 <-> 1 |
The byte address is the external address corresponding to a real address in the VM. mem index is the index in the memory Array holding the stack, an index internal to the stack pages. The first stack page allocated will be the last page in the array of pages at the highest effective address. Its base address will be -1024 and it will grow down towards -2047."
"The lFoo's are to get around the foo->variable scheme in the C call to allocStackPages below." <var: #theStackPages type: #'char *'> | numPages page structStackPageSize pageStructBase count | <var: #page type: #'StackPage *'> <var: #pageStructBase type: #'char *'> + self cCode: nil - self cCode: '' inSmalltalk: [self assert: stackMemory size = stackSlots. self assert: stackMemory == theStackPages]. stackMemory := theStackPages. "For initialization in the C code." + self cCode: nil inSmalltalk: [pageSizeInSlots := slotsPerPage]. - self cCode: '' inSmalltalk: [pageSizeInSlots := slotsPerPage]. structStackPageSize := coInterpreter sizeof: CogStackPage. bytesPerPage := slotsPerPage * objectMemory wordSize. numPages := coInterpreter numStkPages.
"Because stack pages grow down baseAddress is at the top of a stack page and so to avoid subtracting BytesPerWord from baseAddress and lastAddress in the init loop below we simply push the stackPage array up one word to avoid the overlap. This word is extraStackBytes." pageStructBase := theStackPages + (numPages * bytesPerPage) + objectMemory wordSize. pages := self cCode: '(StackPage *)pageStructBase' inSmalltalk: [pageStructBase class. (1 to: numPages) collect: [:i| CogStackPage new]].
"Simulation only. Since addresses are negative the offset is positive. To make all stack addresses negative we make the offset a page more than it needs to be so the address of the last slot in memory (the highest address in the stack, or its start) is - pageByteSize and the address of the first slot (the lowest address, or its end) is - pageByteSize * (numPages + 1)" + self cCode: nil inSmalltalk: [indexOffset := (numPages + 1) * slotsPerPage]. - self cCode: '' inSmalltalk: [indexOffset := (numPages + 1) * slotsPerPage]. "make sure there's enough headroom" self assert: coInterpreter stackPageByteSize - coInterpreter stackLimitBytes - coInterpreter stackLimitOffset >= coInterpreter stackPageHeadroom. 0 to: numPages - 1 do: [:index| page := self stackPageAt: index. page lastAddress: (self cCode: '(char *)theStackPages + (index * GIV(bytesPerPage))' inSmalltalk: [(index * slotsPerPage - indexOffset) * objectMemory wordSize]); baseAddress: (page lastAddress + bytesPerPage); stackLimit: page baseAddress - coInterpreter stackLimitBytes; realStackLimit: page stackLimit; baseFP: 0; nextPage: (self stackPageAt: (index = (numPages - 1) ifTrue: [0] ifFalse: [index + 1])); prevPage: (self stackPageAt: (index = 0 ifTrue: [numPages - 1] ifFalse: [index - 1]))]. + self cCode: nil - self cCode: '' inSmalltalk: [| lowestAddress highestAddress | lowestAddress := (pages at: 1) lastAddress + objectMemory wordSize. highestAddress := (pages at: numPages) baseAddress. "see InterpreterStackPages>>longAt:" self assert: lowestAddress // objectMemory wordSize + indexOffset = 1. self assert: highestAddress // objectMemory wordSize + indexOffset = (numPages * slotsPerPage)].
"The overflow limit is the amount of stack to retain when moving frames from an overflowing stack to reduce thrashing. See stackOverflowOrEvent:mayContextSwitch:" page := self stackPageAt: 0. overflowLimit := page baseAddress - page realStackLimit * 3 // 5.
0 to: numPages - 1 do: [:index| page := self stackPageAt: index. self assert: (self pageIndexFor: page baseAddress) == index. self assert: (self pageIndexFor: page baseAddress - (slotsPerPage - 1 * objectMemory wordSize)) == index. self assert: (self stackPageFor: page baseAddress) == page. self assert: (self stackPageFor: page stackLimit) == page. + self cCode: nil - self cCode: '' inSmalltalk: [| memIndex | memIndex := index * slotsPerPage + 1. "this is memIndex in the block above" self assert: (self memIndexFor: (self oopForPointer: page baseAddress)) == (memIndex + slotsPerPage - 1). index < (numPages - 1) ifTrue: [self assert: (self stackPageFor: page baseAddress + objectMemory wordSize) == (self stackPageAt: index + 1)]]. coInterpreter initializePageTraceToInvalid: page].
mostRecentlyUsedPage := self stackPageAt: 0. page := mostRecentlyUsedPage. count := 0. [| theIndex | count := count + 1. theIndex := self pageIndexFor: page baseAddress. self assert: (self stackPageAt: theIndex) == page. self assert: (self pageIndexFor: page baseAddress) == theIndex. self assert: (self pageIndexFor: page stackLimit) == theIndex. self assert: (self pageIndexFor: page lastAddress + objectMemory wordSize) == theIndex. (page := page nextPage) ~= mostRecentlyUsedPage] whileTrue. self assert: count == numPages. self assert: self pageListIsWellFormed!
Item was changed: ----- Method: LargeIntegersPlugin>>cDigitCopyFrom:to:len: (in category 'C core util') ----- cDigitCopyFrom: pFrom to: pTo len: len <var: #pFrom type: #'unsigned int *'> <var: #pTo type: #'unsigned int *'>
+ self cCode: nil inSmalltalk: [ - self cCode: '' inSmalltalk: [ (interpreterProxy isKindOf: InterpreterSimulator) ifTrue: [ "called from InterpreterSimulator" 0 to: (len - 1) * 4 do: [:i | interpreterProxy byteAt: pTo + i put: (interpreterProxy byteAt: pFrom + i) ]. ^ 0 ]. ]. "Note: don't care about endianness here, copy operation is endian neutral" 0 to: len - 1 do: [:i | pTo at: i put: (pFrom at: i)]. ^ 0!
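cDigitCopyFrom:to:len: takes the byte-at-a-time path only under the simulator; the translated C copies whole 32-bit digits. Both paths produce identical memory because source and destination are read and written with the same byte order, so the operation is endian neutral, as the comment notes. A sketch of that equivalence over a flat byte memory (the helper names are illustrative, not VMMaker's):

```python
def byte_copy(mem, p_from, p_to, len_words):
    """Simulation path: copy 4*len bytes one at a time (endian neutral)."""
    for i in range(len_words * 4):
        mem[p_to + i] = mem[p_from + i]

def word_copy(mem, p_from, p_to, len_words):
    """C path sketch: copy 32-bit digits; the result is identical because
    each digit is read and written with the same byte layout."""
    for i in range(len_words):
        mem[p_to + 4 * i:p_to + 4 * i + 4] = mem[p_from + 4 * i:p_from + 4 * i + 4]

mem1 = bytearray(range(32))
mem2 = bytearray(mem1)
byte_copy(mem1, 0, 16, 3)
word_copy(mem2, 0, 16, 3)
assert mem1 == mem2
```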
Item was changed: ----- Method: NewObjectMemory>>dbgFloatValueOf: (in category 'interpreter access') ----- dbgFloatValueOf: oop "Answer the C double precision floating point value of the argument, or, if it is not a Float, answer 0."
| isFloat result | <returnTypeC: #double> <var: #result type: #double> isFloat := self isFloatInstance: oop. isFloat ifTrue: + [self cCode: nil inSmalltalk: [result := Float new: 2]. - [self cCode: '' inSmalltalk: [result := Float new: 2]. self fetchFloatAt: oop + self baseHeaderSize into: result. ^result]. ^0.0!
Item was changed: ----- Method: NewObjectMemory>>floatValueOf: (in category 'interpreter access') ----- floatValueOf: oop "Answer the C double precision floating point value of the argument, or fail if it is not a Float, and answer 0. Note: May be called by translated primitive code." <api> <returnTypeC: #double> | isFloat result | <var: #result type: #double> isFloat := self isFloatInstance: oop. isFloat ifTrue: + [self cCode: nil inSmalltalk: [result := Float new: 2]. - [self cCode: '' inSmalltalk: [result := Float new: 2]. self fetchFloatAt: oop + self baseHeaderSize into: result. ^result]. coInterpreter primitiveFail. ^0.0!
Item was changed: ----- Method: NewObjectMemory>>validate (in category 'simulation') ----- validate "Validate all the objects in the heap." + self cCode: nil inSmalltalk: [ - self cCode: [] inSmalltalk: [ | oop prev | self interpreter transcript show: 'Validating...'. oop := self firstObject. [oop < freeStart] whileTrue: [self validate: oop. prev := oop. "look here if debugging prev obj overlapping this one" oop := self safeObjectAfter: oop]. self touch: prev. "Don't offer to delete this please" self interpreter transcript show: 'done.'; cr]!
Item was changed: ----- Method: ObjectMemory>>dbgFloatValueOf: (in category 'interpreter access') ----- dbgFloatValueOf: oop "Answer the C double precision floating point value of the argument, or, if it is not a Float, answer 0."
| isFloat result | <returnTypeC: #double> <var: #result type: #double> isFloat := self isFloatInstance: oop. isFloat ifTrue: + [self cCode: nil inSmalltalk: [result := Float new: 2]. - [self cCode: '' inSmalltalk: [result := Float new: 2]. self fetchFloatAt: oop + self baseHeaderSize into: result. ^result]. ^0.0!
Item was changed: ----- Method: ObjectMemory>>floatValueOf: (in category 'interpreter access') ----- floatValueOf: oop "Answer the C double precision floating point value of the argument, or fail if it is not a Float, and answer 0. Note: May be called by translated primitive code." <api> <returnTypeC: #double> | isFloat result | <var: #result type: #double> isFloat := self isFloatInstance: oop. isFloat ifTrue: + [self cCode: nil inSmalltalk: [result := Float new: 2]. - [self cCode: '' inSmalltalk: [result := Float new: 2]. self fetchFloatAt: oop + self baseHeaderSize into: result. ^result]. self primitiveFail. ^0.0!
Item was changed: ----- Method: ObjectMemory>>validate (in category 'simulation') ----- validate "Validate all the objects in the heap." + self cCode: nil inSmalltalk: [ - self cCode: [] inSmalltalk: [ | oop prev | self interpreter transcript show: 'Validating...'. oop := self firstObject. [oop < endOfMemory] whileTrue: [ self validate: oop. prev := oop. "look here if debugging prev obj overlapping this one" oop := self objectAfter: oop. ]. self touch: prev. "Don't offer to delete this please" self interpreter transcript show: 'done.'; cr]!
Item was changed: ----- Method: RegisterAllocatingCogit>>assertCorrectSimStackPtr (in category 'compile abstract instructions') ----- assertCorrectSimStackPtr <inline: true> "generates nothing anyway" "self simStackPrintString" + self cCode: nil inSmalltalk: - self cCode: '' inSmalltalk: [deadCode ifFalse: [self assert: simStackPtr + (needsFrame ifTrue: [0] ifFalse: [1]) = (self debugStackPointerFor: bytecodePC). self assert: (simSpillBase >= methodOrBlockNumTemps or: [self maybeCompilingFirstPassOfBlockWithInitialPushNil and: [simSpillBase > methodOrBlockNumArgs]]). (needsFrame and: [simSpillBase > 0]) ifTrue: [self assert: (self simStackAt: simSpillBase - 1) spilled == true. self assert: (simSpillBase >= simStackPtr or: [(self simStackAt: simSpillBase) spilled == false])]]. self deny: self duplicateRegisterAssignmentsInTemporaries]!
Item was changed: ----- Method: RegisterAllocatingCogit>>ensureFixupAt: (in category 'bytecode generator support') ----- ensureFixupAt: targetPC "Make sure there's a flagged fixup at the target pc in fixups. Initially a fixup's target is just a flag. Later on it is replaced with a proper instruction. Override to generate stack merging code if required." | fixup | <var: #fixup type: #'BytecodeFixup *'> self assert: targetPC > bytecodePC. + self cCode: nil inSmalltalk: - self cCode: '' inSmalltalk: [self assert: simStackPtr + (needsFrame ifTrue: [0] ifFalse: [1]) = (self debugStackPointerFor: targetPC)]. fixup := self fixupAt: targetPC. "If a non-merge fixup has already been defined then where-ever that was done didn't realise there needed to be a merge and forgot to save the stack state for that merge." self deny: fixup isNonMergeFixup. fixup needsFixup ifTrue: [fixup mergeSimStack ifNil: [self setMergeSimStackOf: fixup] ifNotNil: [self copySimStackToScratch: simSpillBase. self mergeCurrentSimStackWith: fixup. self restoreSimStackFromScratch]] ifFalse: [self assert: (fixup mergeSimStack isNil or: [compilationPass = 2]). fixup mergeSimStack ifNil: [self setMergeSimStackOf: fixup] ifNotNil: [self assert: (self simStack: simStack isIdenticalTo: fixup mergeSimStack)]]. ^super ensureFixupAt: targetPC!
Item was changed: ----- Method: RegisterAllocatingCogit>>mergeCurrentSimStackWith: (in category 'bytecode generator support') ----- mergeCurrentSimStackWith: fixup "At a merge point the cogit expects the stack to be in the same state as fixup's mergeSimStack. mergeSimStack is the state as of some jump forward or backward to this point. So make simStack agree with mergeSimStack (it is, um, problematic to plant code at the jump). Values may have to be assigned to registers. Registers may have to be swapped. Generate code to merge the current simStack with that of the target fixup, the goal being to keep as many registers live as possible." "self printSimStack; printSimStack: fixup mergeSimStack" "self simStackPrintString-> fixup simStackPrintString" "abstractOpcodes object copyFrom: startIndex to: opcodeIndex" <var: #fixup type: #'BytecodeFixup *'> | currentRegisters targetRegisters mergeSimStack current target spillOffset them | (mergeSimStack := fixup mergeSimStack) ifNil: [^self]. + self cCode: nil inSmalltalk: [them := {self simStackPrintString. fixup simStackPrintString}]. - self cCode: '' inSmalltalk: [them := {self simStackPrintString. fixup simStackPrintString}]. self assert: simStackPtr = fixup simStackPtr. currentRegisters := self liveRegistersFrom: 0 to: simStackPtr in: simStack. targetRegisters := self liveRegistersFrom: 0 to: simStackPtr in: mergeSimStack. self resolveConflicts: (currentRegisters bitAnd: targetRegisters) with: fixup mergeSimStack to: fixup simStackPtr. self assert: (self conflictsResolvedBetweenSimStackAnd: mergeSimStack). (self pushForMergeWith: mergeSimStack) ifTrue: [0 to: simStackPtr do: [:i| spillOffset := i > methodOrBlockNumTemps ifTrue: [self frameOffsetOfTemporary: i - 1] ifFalse: [0]. ((current := self simStack: simStack at: i) reconcileWith: (target := self simStack: mergeSimStack at: i) spillOffset: spillOffset onSpillOrUnspill: [:targetReg| self deny: current spilled. self assert: spillOffset ~= 0. 
current ensureSpilledAt: spillOffset from: FPReg. simSpillBase <= i ifTrue: [simSpillBase := i + 1]]) ifTrue: [| targetReg | (i > methodOrBlockNumTemps and: [(targetReg := target registerOrNone) ~= NoReg]) ifTrue: [self deassignRegister: targetReg in: simStack. self deassignRegister: targetReg in: mergeSimStack. self deny: (self register: targetReg isInMask: self liveRegistersInSelfAndTemps)]]]] ifFalse: [simStackPtr to: 0 by: -1 do: [:i| spillOffset := i > methodOrBlockNumTemps ifTrue: [self frameOffsetOfTemporary: i - 1] ifFalse: [0]. ((current := self simStack: simStack at: i) reconcileWith: (target := self simStack: mergeSimStack at: i) spillOffset: spillOffset onSpillOrUnspill: [:targetReg| self assert: current spilled. self assert: spillOffset ~= 0. targetReg ~= NoReg ifTrue: [self PopR: targetReg] ifFalse: [self AddCq: objectRepresentation wordSize R: SPReg]. current spilled: false. simSpillBase > i ifTrue: [simSpillBase := i]]) ifTrue: [| targetReg | (i > methodOrBlockNumTemps and: [(targetReg := target registerOrNone) ~= NoReg]) ifTrue: [self deassignRegister: targetReg in: simStack. self deassignRegister: targetReg in: mergeSimStack. self deny: (self register: targetReg isInMask: self liveRegistersInSelfAndTemps)]]]]. self updateSimSpillBase!
Item was changed: ----- Method: RegisterAllocatingCogit>>resetSimStack: (in category 'bytecode generator support') ----- resetSimStack: startPC <inline: true> simSpillBase := methodOrBlockNumTemps + 1. simStackPtr := methodOrBlockNumTemps. 0 to: simStackPtr do: [:i| (self simStackAt: i) liveRegister: NoReg. + self cCode: nil inSmalltalk: [(self simStackAt: i) bcptr: startPC]]! - self cCode: '' inSmalltalk: [(self simStackAt: i) bcptr: startPC]]!
Item was changed: ----- Method: RegisterAllocatingCogit>>resolveConflicts:with:to: (in category 'bytecode generator support') ----- resolveConflicts: registersInCommon with: mergeSimStack to: simStackPtr "registersInCommon is the register mask of registers in use in both the current simStack and the target mergeSimStack. Swap any and all conflicting register uses in registersInCommon, until register uses in simStack agree with mergeSimStack." | registerExchanges registerLocations agreements visited initialIndex initialStack | "registerLocations records where a register has moved to during an exchange. This allows a single pass of the stack to rename registers, instead of 1/2 N^2; max stack ~ 56" <var: 'registerExchanges' declareC: 'int registerExchanges[NumRegisters]'> <var: 'registerLocations' declareC: 'int registerLocations[NumRegisters]'> self deny: self duplicateRegisterAssignmentsInTemporaries. registersInCommon = (self registerMaskFor: FPReg) ifTrue: [self assert: (self conflictsResolvedBetweenSimStackAnd: mergeSimStack). ^self]. + self cCode: nil inSmalltalk: - self cCode: '' inSmalltalk: [registerExchanges := CArrayAccessor on: (Array new: NumRegisters). registerLocations := CArrayAccessor on: (Array new: NumRegisters)]. 0 to: NumRegisters - 1 do: [:i| registerExchanges at: i put: i]. + self cCode: nil inSmalltalk: - self cCode: '' inSmalltalk: [initialIndex := opcodeIndex. initialStack := self simStackPrintString]. "for debugging" agreements := visited := 0. 0 to: simStackPtr do: [:i| | currentReg targetReg | currentReg := (self simStackAt: i) registerOrNone. targetReg := (self simStack: mergeSimStack at: i) registerOrNone. (currentReg ~= NoReg and: [targetReg ~= NoReg]) ifTrue: [currentReg := registerExchanges at: currentReg. currentReg = targetReg ifTrue: [(self register: currentReg isInMask: visited) ifFalse: [visited := visited bitOr: (self registerMaskFor: currentReg). 
agreements := agreements bitOr: (self registerMaskFor: currentReg)]] ifFalse: [((self register: currentReg isInMask: registersInCommon) and: [self register: currentReg isNotInMask: (visited bitOr: agreements)]) ifTrue: [| this that | visited := visited bitOr: (self registerMaskFor: currentReg and: targetReg). self SwapR: targetReg R: currentReg Scratch: RISCTempReg. this := registerExchanges at: currentReg. that := registerExchanges at: targetReg. registerExchanges at: currentReg put: that; at: targetReg put: this]]]]. (visited := visited bitClear: agreements) = 0 ifTrue: [self assert: (self conflictsResolvedBetweenSimStackAnd: mergeSimStack). ^self]. 0 to: NumRegisters - 1 do: [:i| registerLocations at: (registerExchanges at: i) put: i]. 0 to: simStackPtr do: [:i| | ssEntry reg | ssEntry := self simStackAt: i. reg := ssEntry registerOrNone. (reg ~= NoReg and: [(self register: reg isInMask: registersInCommon) and: [reg ~= (self simStack: mergeSimStack at: i) registerOrNone]]) ifTrue: [ssEntry type = SSRegister ifTrue: [ssEntry register: (registerLocations at: reg)] ifFalse: [ssEntry liveRegister: (registerLocations at: reg)]]]. self deny: self duplicateRegisterAssignmentsInTemporaries. self assert: (self conflictsResolvedBetweenSimStackAnd: mergeSimStack) "(initialIndex to: opcodeIndex - 1) collect: [:x| abstractOpcodes at: x]"!
Item was changed: ----- Method: SimpleStackBasedCogit>>compileInterpreterPrimitive:flags: (in category 'primitive generators') ----- compileInterpreterPrimitive: primitiveRoutine flags: flags "Compile a call to an interpreter primitive. Call the C routine with the usual stack-switching dance, test the primFailCode and then either return on success or continue to the method body." <var: #primitiveRoutine declareC: 'void (*primitiveRoutine)(void)'> | jmp continueAfterProfileSample jumpToTakeSample | + self cCode: nil inSmalltalk: - self cCode: '' inSmalltalk: [primitiveRoutine isSymbol ifTrue: [^self compileInterpreterPrimitive: (self simulatedAddressFor: primitiveRoutine) flags: flags]].
self deny: (backEnd hasVarBaseRegister and: [self register: VarBaseReg isInMask: ABICallerSavedRegisterMask]).
"Save processor fp, sp and return pc in the interpreter's frame stack and instruction pointers" self genExternalizePointersForPrimitiveCall. "Switch to the C stack." self genLoadCStackPointersForPrimCall.
"Old old full prim trace is in VMMaker-eem.550 and prior. Old simpler full prim trace is in VMMaker-eem.2969 and prior." (coInterpreter recordPrimTraceForMethod: methodObj) ifTrue: [self genFastPrimTraceUsing: ClassReg and: SendNumArgsReg].
"Clear the primFailCode and set argumentCount" self MoveCq: 0 R: TempReg. self MoveR: TempReg Aw: coInterpreter primFailCodeAddress. methodOrBlockNumArgs ~= 0 ifTrue: [self AddCq: methodOrBlockNumArgs R: TempReg]. "As small or smaller than move on most archs" self MoveR: TempReg Aw: coInterpreter argumentCountAddress.
"If required, set newMethod" (flags anyMask: PrimCallNeedsNewMethod) ifTrue: [self genLoadNewMethod].
"Invoke the primitive. If the primitive (potentially) contains a call-back then its code may disappear and consequently we cannot return here, since here may evaporate. Instead sideways-call the routine, substituting cePrimReturnEnterCogCode[Profiling] as the return address, so the call always returns there." self PrefetchAw: coInterpreter primFailCodeAddress. (flags anyMask: PrimCallMayEndureCodeCompaction) ifTrue: ["The ceActivateFailingPrimitiveMethod: machinery can't handle framelessness." needsFrame := true. backEnd genMarshallNArgs: 0 arg: nil arg: nil arg: nil arg: nil; genSubstituteReturnAddress: ((flags anyMask: PrimCallCollectsProfileSamples) ifTrue: [cePrimReturnEnterCogCodeProfiling] ifFalse: [cePrimReturnEnterCogCode]). self JumpFullRT: primitiveRoutine asInteger. ^0].
"Call the C primitive routine." backEnd genMarshallNArgs: 0 arg: nil arg: nil arg: nil arg: nil. self CallFullRT: primitiveRoutine asInteger. backEnd genRemoveNArgsFromStack: 0. objectRepresentation maybeCompileRetryOf: primitiveRoutine onPrimitiveFail: primitiveIndex flags: flags. "Switch back to the Smalltalk stack. Stack better be in either of these two states: success: stackPointer -> result (was receiver) arg1 ... argN return pc failure: receiver arg1 ... stackPointer -> argN return pc" backEnd genLoadStackPointersForPrimCall: ClassReg. "genLoadStackPointersForPrimCall: leaves the stack in these states: NoLinkRegister LinkRegister success: result (was receiver) stackPointer -> result (was receiver) stackPointer -> arg1 arg1 ... ... argN argN return pc
failure: receiver receiver arg1 arg1 ... ... argN stackPointer -> argN stackPointer -> return pc which corresponds to the stack on entry after pushRegisterArgs. In either case we can write the instructionPointer to top of stack or load it into the LinkRegister to reestablish the return pc." backEnd hasLinkRegister ifTrue: [self MoveAw: coInterpreter instructionPointerAddress R: LinkReg] ifFalse: [self MoveAw: coInterpreter instructionPointerAddress R: ClassReg. self MoveR: ClassReg Mw: 0 r: SPReg]. "Test primitive failure" self MoveAw: coInterpreter primFailCodeAddress R: TempReg. self flag: 'ask concrete code gen if move sets condition codes?'. self CmpCq: 0 R: TempReg. jmp := self JumpNonZero: 0. "placing the test here attributes the tick to the primitive plus any checkForAndFollowForwardedPrimitiveState scanning, but attributes all of a failing primitive to the current method (in ceStackOverflow: on frame build)." (backEnd has64BitPerformanceCounter and: [flags anyMask: PrimCallCollectsProfileSamples]) ifTrue: [jumpToTakeSample := self genCheckForProfileTimerTick: (self registerMaskFor: NoReg)]. "Fetch result from stack" continueAfterProfileSample := self MoveMw: (backEnd hasLinkRegister ifTrue: [0] ifFalse: [objectMemory wordSize]) r: SPReg R: ReceiverResultReg. self RetN: objectMemory wordSize. "return to caller, popping receiver" (backEnd has64BitPerformanceCounter and: [flags anyMask: PrimCallCollectsProfileSamples]) ifTrue: [jumpToTakeSample jmpTarget: self Label. self genTakeProfileSample. backEnd genLoadStackPointerForPrimCall: ClassReg. backEnd hasLinkRegister ifTrue: [self MoveAw: coInterpreter instructionPointerAddress R: LinkReg] ifFalse: [self MoveAw: coInterpreter instructionPointerAddress R: ClassReg. self MoveR: ClassReg Mw: 0 r: SPReg]. self Jump: continueAfterProfileSample].
"Jump to restore of receiver reg and proceed to frame build for failure." jmp jmpTarget: self Label. "Restore receiver reg from stack. If on RISCs ret pc is in LinkReg, if on CISCs ret pc is on stack." self MoveMw: objectMemory wordSize * (methodOrBlockNumArgs + (backEnd hasLinkRegister ifTrue: [0] ifFalse: [1])) r: SPReg R: ReceiverResultReg. ^0!
Item was changed: ----- Method: SimpleStackBasedCogit>>compileOnStackExternalPrimitive:flags: (in category 'primitive generators') ----- compileOnStackExternalPrimitive: primitiveRoutine flags: flags "Compile a fast call of a C primitive using the current stack page, avoiding the stack switch except on failure. This convention still uses stackPointer and argumentCount to access operands. Push all operands to the stack, assign stackPointer, argumentCount, and zero primFailCode. Make the call (saving a LinkReg if required). Test for failure and return. On failure on Spur, if there is an accessor depth, assign framePointer and newMethod, do the stack switch, call checkForAndFollowForwardedPrimitiveState, and loop back if forwarders are found. Fall through to frame build." <option: #SpurObjectMemory> <var: #primitiveRoutine declareC: 'void (*primitiveRoutine)(void)'> | calleeSavedRegisterMask linkRegSaveRegister spRegSaveRegister jmpFail retry continueAfterProfileSample jumpToTakeSample | + self cCode: nil inSmalltalk: - self cCode: '' inSmalltalk: [primitiveRoutine isSymbol ifTrue: [^self compileOnStackExternalPrimitive: (self simulatedAddressFor: primitiveRoutine) flags: flags]].
self assert: (objectRepresentation hasSpurMemoryManagerAPI and: [flags anyMask: PrimCallOnSmalltalkStack]). self deny: (backEnd hasVarBaseRegister and: [self register: VarBaseReg isInMask: ABICallerSavedRegisterMask]).
(coInterpreter recordFastCCallPrimTraceForMethod: methodObj) ifTrue: [self genFastPrimTraceUsing: ClassReg and: SendNumArgsReg].
"Clear the primFailCode and set argumentCount" self MoveCq: 0 R: TempReg. self MoveR: TempReg Aw: coInterpreter primFailCodeAddress. methodOrBlockNumArgs ~= 0 ifTrue: [self AddCq: methodOrBlockNumArgs R: TempReg]. "As small or smaller than move on most archs" self MoveR: TempReg Aw: coInterpreter argumentCountAddress. self genExternalizeStackPointerForFastPrimitiveCall. "We may need to save LinkReg and/or SPReg, and given the stack machinations it is much easier to save them in callee saved registers than on the stack itself." calleeSavedRegisterMask := ABICalleeSavedRegisterMask bitClear: (self registerMaskFor: ClassReg). backEnd hasLinkRegister ifTrue: [linkRegSaveRegister := self availableRegisterOrNoneIn: calleeSavedRegisterMask. self deny: linkRegSaveRegister = NoReg. self MoveR: LinkReg R: linkRegSaveRegister. calleeSavedRegisterMask := calleeSavedRegisterMask bitClear: (self registerMaskFor: linkRegSaveRegister)]. spRegSaveRegister := NoReg. (SPReg ~= NativeSPReg and: [(self isCalleeSavedReg: SPReg) not]) ifTrue: [spRegSaveRegister := self availableRegisterOrNoneIn: calleeSavedRegisterMask. self deny: spRegSaveRegister = NoReg. self MoveR: SPReg R: spRegSaveRegister]. retry := self Label. (flags anyMask: PrimCallOnSmalltalkStackAlign2x) ifTrue: [self AndCq: (objectMemory wordSize * 2 - 1) bitInvert R: SPReg R: NativeSPReg] ifFalse: [SPReg ~= NativeSPReg ifTrue: [backEnd genLoadNativeSPRegWithAlignedSPReg]]. backEnd genMarshallNArgs: 0 arg: nil arg: nil arg: nil arg: nil. "If the primitive is in the interpreter then its address won't change relative to the code zone over time, whereas if it is in a plugin its address could change if the module is un/re/over/loaded. So if in the interpreter and in range use a normal call instruction." ((flags noMask: PrimCallIsExternalCall) and: [backEnd isWithinCallRange: primitiveRoutine asInteger]) ifTrue: [self CallRT: primitiveRoutine asInteger] ifFalse: [self CallFullRT: primitiveRoutine asInteger]. 
backEnd genRemoveNArgsFromStack: 0. "test primFailCode and jump to failure sequence if non-zero" self MoveAw: coInterpreter primFailCodeAddress R: TempReg. spRegSaveRegister ~= NoReg ifTrue: [self MoveR: spRegSaveRegister R: SPReg]. self CmpCq: 0 R: TempReg. jmpFail := self JumpNonZero: 0. "Remember to restore the native stack pointer to point to the C stack, otherwise the Smalltalk frames will get overwritten on an interrupt." SPReg ~= NativeSPReg ifTrue: [backEnd genLoadCStackPointer]. "placing the test here attributes the tick to the primitive plus any checkForAndFollowForwardedPrimitiveState scanning, but attributes all of a failing primitive to the current method (in ceStackOverflow: on frame build)." (backEnd has64BitPerformanceCounter and: [flags anyMask: PrimCallCollectsProfileSamples]) ifTrue: [jumpToTakeSample := self genCheckForProfileTimerTick: (self registerMaskFor: NoReg)]. "At this point the primitive has cut back stackPointer to point to the result." continueAfterProfileSample := self MoveAw: coInterpreter stackPointerAddress R: TempReg. "get result and restore retpc" backEnd hasLinkRegister ifTrue: [self MoveMw: 0 r: TempReg R: ReceiverResultReg; AddCq: objectMemory wordSize R: TempReg R: SPReg; MoveR: linkRegSaveRegister R: LinkReg] ifFalse: [| retpcOffset | "The original retpc is (argumentCount + 1) words below stackPointer." retpcOffset := (methodOrBlockNumArgs + 1 * objectMemory wordSize) negated. self MoveMw: retpcOffset r: TempReg R: ClassReg; "get retpc" MoveR: TempReg R: SPReg; MoveMw: 0 r: TempReg R: ReceiverResultReg; MoveR: ClassReg Mw: 0 r: TempReg "put it back on stack for the return..."]. self RetN: 0.
(backEnd has64BitPerformanceCounter and: [flags anyMask: PrimCallCollectsProfileSamples]) ifTrue: [jumpToTakeSample jmpTarget: self Label. self genTakeProfileSample. self Jump: continueAfterProfileSample].
"primitive failure. if there is an accessor depth, scan and retry on failure (but what if faling for out of memory?)" jmpFail jmpTarget: self Label. (coInterpreter accessorDepthForPrimitiveMethod: methodObj) >= 0 ifTrue: [| skip | "Given that following primitive state to the accessor depth is recursive, we're asking for trouble if we run the fixup on the Smalltalk stack page. Run it on the full C stack instead. This won't be a performance issue since primitive failure should be very rare." self MoveR: FPReg Aw: coInterpreter framePointerAddress. self MoveCw: primitiveRoutine asInteger R: TempReg. self MoveR: TempReg Aw: coInterpreter primitiveFunctionPointerAddress. self genLoadNewMethod. self genLoadCStackPointersForPrimCall. backEnd genMarshallNArgs: 0 arg: 0 arg: 0 arg: 0 arg: 0. (backEnd isWithinCallRange: (self cCode: [#checkForAndFollowForwardedPrimitiveState asUnsignedIntegerPtr] inSmalltalk: [self simulatedTrampolineFor: #checkForAndFollowForwardedPrimitiveState])) ifTrue: [self CallRT: (self cCode: [#checkForAndFollowForwardedPrimitiveState asUnsignedIntegerPtr] inSmalltalk: [self simulatedTrampolineFor: #checkForAndFollowForwardedPrimitiveState])] ifFalse: [self CallFullRT: (self cCode: [#checkForAndFollowForwardedPrimitiveState asUnsignedIntegerPtr] inSmalltalk: [self simulatedTrampolineFor: #checkForAndFollowForwardedPrimitiveState])]. backEnd genLoadStackPointersForPrimCall: ClassReg. self CmpCq: 0 R: ABIResultReg. skip := self JumpZero: 0. self MoveCq: 0 R: TempReg. self MoveR: TempReg Aw: coInterpreter primFailCodeAddress. self Jump: retry. skip jmpTarget: self Label] ifFalse: "must reload SPReg to undo any alignment change," [(flags anyMask: PrimCallOnSmalltalkStackAlign2x) ifTrue: [backEnd hasLinkRegister ifTrue: [self MoveAw: coInterpreter stackPointerAddress R: SPReg] ifFalse: "remember to include return address; use scratch to avoid an interrupt overwriting retpc" [self MoveAw: coInterpreter stackPointerAddress R: TempReg. 
self SubCq: objectRepresentation wordSize R: TempReg. self MoveR: TempReg R: SPReg]]]. "Remember to restore the native stack pointer to point to the C stack, otherwise the Smalltalk frames will get overwritten on an interrupt." SPReg ~= NativeSPReg ifTrue: [backEnd genLoadCStackPointer]. "The LinkRegister now contains the return address either of the primitive call or of checkForAndFollowForwardedPrimitiveState. It must be restored to the return address of the send invoking this primitive method." backEnd hasLinkRegister ifTrue: [self MoveR: linkRegSaveRegister R: LinkReg]. "Finally remember to reload ReceiverResultReg if required. Even if arguments have been pushed, the prolog sequence assumes it is live." (self register: ReceiverResultReg isInMask: ABICallerSavedRegisterMask) ifTrue: [self MoveMw: (methodOrBlockNumArgs + (backEnd hasLinkRegister ifTrue: [0] ifFalse: [1])) * objectMemory wordSize r: SPReg R: ReceiverResultReg]. "continue to frame build..." ^0!
Item was changed: ----- Method: SimpleStackBasedCogit>>compilePrimitive (in category 'primitive generators') ----- compilePrimitive "Compile a primitive. If possible, performance-critical primitives will be generated by their own routines (primitiveGenerator). Otherwise, if there is a primitive at all, we call the C routine with the usual stack-switching dance, test the primFailCode and then either return on success or continue to the method body." <inline: false> | primitiveDescriptor primitiveRoutine code flags | <var: #primitiveDescriptor type: #'PrimitiveDescriptor *'> <var: #primitiveRoutine declareC: 'void (*primitiveRoutine)(void)'> primitiveIndex = 0 ifTrue: [^0]. "If a descriptor specifies an argument count (by numArgs >= 0) then it must match for the generated code to be correct. For example for speed many primitives use ResultReceiverReg instead of accessing the stack, so the receiver better be at numArgs down the stack. Use the interpreter version if not." ((primitiveDescriptor := self primitiveGeneratorOrNil) notNil and: [primitiveDescriptor primitiveGenerator notNil and: [(primitiveDescriptor primNumArgs < 0 "means generator doesn't care" or: [primitiveDescriptor primNumArgs = methodOrBlockNumArgs])]]) ifTrue: [| opcodeIndexAtPrimitive | "Note opcodeIndex so that any arg load instructions for unimplemented primitives can be discarded." opcodeIndexAtPrimitive := opcodeIndex. code := objectRepresentation perform: primitiveDescriptor primitiveGenerator.
(code < 0 and: [code ~= UnimplementedPrimitive]) ifTrue: "Generator failed, so no point continuing..." [^code]. "If the primitive can never fail then there is nothing more that needs to be done." code = UnfailingPrimitive ifTrue: [^0]. "If the machine code version handles all cases the only reason to call the interpreter primitive is to reap the primitive error code. Don't bother if it isn't used." (code = CompletePrimitive and: [(self methodUsesPrimitiveErrorCode: methodObj header: methodHeader) not]) ifTrue: [^0]. "Discard any arg load code generated by the primitive generator." code = UnimplementedPrimitive ifTrue: [opcodeIndex := opcodeIndexAtPrimitive]].
primitiveRoutine := coInterpreter functionPointerForCompiledMethod: methodObj primitiveIndex: primitiveIndex primitivePropertyFlagsInto: (self addressOf: flags put: [:val| flags := val]).
(primitiveRoutine = 0 "no primitive" + or: [primitiveRoutine = (self cCoerceSimple: #primitiveFail to: #'void (*)(void)')]) ifTrue: - or: [primitiveRoutine = (self cCoerceSimple: #primitiveFail to: 'void (*)(void)')]) ifTrue: [^self genFastPrimFail].
(backEnd has64BitPerformanceCounter + and: [primitiveRoutine = (self cCoerceSimple: #primitiveHighResClock to: #'void (*)(void)') - and: [primitiveRoutine = (self cCoerceSimple: #primitiveHighResClock to: 'void (*)(void)') and: [methodOrBlockNumArgs = 0]]) ifTrue: [objectRepresentation wordSize = 8 ifTrue: [code := objectRepresentation genPrimitiveHighResClock64] ifFalse: [code := objectRepresentation genPrimitiveHighResClock32]. code ~= UnimplementedPrimitive ifTrue: [^code]].
(objectRepresentation hasSpurMemoryManagerAPI and: [flags anyMask: PrimCallOnSmalltalkStack]) ifTrue: [^self compileOnStackExternalPrimitive: primitiveRoutine flags: flags]. ^self compileInterpreterPrimitive: primitiveRoutine flags: flags!
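[Editorial note, not part of the diff: for readers following the control flow, the dispatch order of SimpleStackBasedCogit>>compilePrimitive above can be summarized as an illustrative Smalltalk comment sketch.]

```smalltalk
"Illustrative summary of compilePrimitive's dispatch order (sketch only):
 1. primitiveIndex = 0 (no primitive)       -> compile nothing, answer 0
 2. matching machine-code generator exists  -> generate inline code
    (discard it again if UnimplementedPrimitive)
 3. no routine, or routine = primitiveFail  -> genFastPrimFail
 4. primitiveHighResClock, 0 args, 64-bit
    perf counter available                  -> inline counter read
 5. Spur + PrimCallOnSmalltalkStack flag    -> compileOnStackExternalPrimitive:flags:
 6. otherwise                               -> compileInterpreterPrimitive:flags:"
```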
Item was changed: ----- Method: SimpleStackBasedCogit>>generateTracingTrampolines (in category 'initialization') ----- generateTracingTrampolines "Generate trampolines for tracing. In the simulator we can save a lot of time and avoid noise instructions in the lastNInstructions log by short-cutting these trampolines, but we need them in the real vm." ceTraceLinkedSendTrampoline := self genTrampolineFor: #ceTraceLinkedSend: called: 'ceTraceLinkedSendTrampoline' arg: ReceiverResultReg regsToSave: CallerSavedRegisterMask. ceTraceBlockActivationTrampoline := self genTrampolineFor: #ceTraceBlockActivation called: 'ceTraceBlockActivationTrampoline' regsToSave: CallerSavedRegisterMask. ceTraceStoreTrampoline := self genTrampolineFor: #ceTraceStoreOf:into: called: 'ceTraceStoreTrampoline' arg: ClassReg arg: ReceiverResultReg regsToSave: CallerSavedRegisterMask. + self cCode: nil inSmalltalk: - self cCode: [] inSmalltalk: [ceTraceLinkedSendTrampoline := self simulatedTrampolineFor: #ceShortCutTraceLinkedSend:. ceTraceBlockActivationTrampoline := self simulatedTrampolineFor: #ceShortCutTraceBlockActivation:. ceTraceStoreTrampoline := self simulatedTrampolineFor: #ceShortCutTraceStore:]!
Item was changed: ----- Method: SimpleStackBasedCogit>>primitiveGeneratorOrNil (in category 'primitive generators') ----- primitiveGeneratorOrNil "If there is a generator for the current primitive then answer it; otherwise answer nil." <returnTypeC: #'PrimitiveDescriptor *'> | primitiveDescriptor | <var: #primitiveDescriptor type: #'PrimitiveDescriptor *'> (coInterpreter isQuickPrimitiveIndex: primitiveIndex) ifTrue: [primitiveDescriptor := self addressOf: (primitiveGeneratorTable at: 0). "an unused one" primitiveDescriptor primitiveGenerator: (coInterpreter quickPrimitiveGeneratorFor: primitiveIndex). ^primitiveDescriptor]. (primitiveIndex between: 1 and: MaxCompiledPrimitiveIndex) ifTrue: + [self cCode: nil inSmalltalk: "for debugging, allow excluding specific primitives" - [self cCode: [] inSmalltalk: "for debugging, allow excluding specific primitives" [self class initializationOptions at: #DoNotJIT ifPresent: [:excluded| ((excluded includes: primitiveIndex) and: [(primitiveGeneratorTable at: primitiveIndex) primitiveGenerator notNil]) ifTrue: [coInterpreter transcript nextPutAll: 'EXCLUDING primitive #'; print: primitiveIndex; space; nextPutAll: (primitiveGeneratorTable at: primitiveIndex) primitiveGenerator; cr; flush. ^nil]]]. ^self addressOf: (primitiveGeneratorTable at: primitiveIndex)]. ^nil!
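[Editorial note, not part of the diff: the #DoNotJIT hook in primitiveGeneratorOrNil above runs only in simulation, guarded by cCode: nil inSmalltalk:. A hypothetical way to use it when debugging, assuming the class-side initializationOptions dictionary the method reads is already set up, might be:]

```smalltalk
"Hypothetical debugging aid: exclude primitives 105 and 110 from JITting
in the simulator so the interpreter versions run instead. The #DoNotJIT
option name is from the diff; this setup expression is illustrative."
Cogit initializationOptions at: #DoNotJIT put: #(105 110)
```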
Item was changed: ----- Method: Spur32BitMemoryManager>>dbgFloatValueOf: (in category 'interpreter access') ----- dbgFloatValueOf: oop "Answer the C double precision floating point value of the argument, or, if it is not a Float, answer 0."
| isFloat result | <returnTypeC: #double> <var: #result type: #double> isFloat := self isFloatInstance: oop. isFloat ifTrue: + [self cCode: nil inSmalltalk: [result := Float new: 2]. - [self cCode: '' inSmalltalk: [result := Float new: 2]. self fetchFloatAt: oop + self baseHeaderSize into: result. ^result]. ^0.0!
Item was changed: ----- Method: Spur32BitMemoryManager>>floatValueOf: (in category 'interpreter access') ----- floatValueOf: oop "Answer the C double precision floating point value of the argument, or fail if it is not a Float, and answer 0. Note: May be called by translated primitive code." <api> <returnTypeC: #double> | isFloat result | <var: #result type: #double> isFloat := self isFloatInstance: oop. isFloat ifTrue: + [self cCode: nil inSmalltalk: [result := Float new: 2]. - [self cCode: '' inSmalltalk: [result := Float new: 2]. self fetchFloatAt: oop + self baseHeaderSize into: result. ^result]. coInterpreter primitiveFail. ^0.0!
Item was changed: ----- Method: Spur64BitMemoryManager>>dbgFloatValueOf: (in category 'interpreter access') ----- dbgFloatValueOf: oop "Answer the C double precision floating point value of the argument, or, if it is not a Float, answer 0."
| result tagBits | <returnTypeC: #double> <var: #result type: #double> (tagBits := oop bitAnd: self tagMask) ~= 0 ifTrue: [tagBits = self smallFloatTag ifTrue: [^self smallFloatValueOf: oop]] ifFalse: [(self classIndexOf: oop) = ClassFloatCompactIndex ifTrue: + [self cCode: nil inSmalltalk: [result := Float new: 2]. - [self cCode: '' inSmalltalk: [result := Float new: 2]. self fetchFloatAt: oop + self baseHeaderSize into: result. ^result]]. ^0.0!
Item was changed: ----- Method: Spur64BitMemoryManager>>floatValueOf: (in category 'interpreter access') ----- floatValueOf: oop "Answer the C double precision floating point value of the argument, or fail if it is not a Float, and answer 0. Note: May be called by translated primitive code." <api> <returnTypeC: #double> | result tagBits | <var: #result type: #double> (tagBits := oop bitAnd: self tagMask) ~= 0 ifTrue: [tagBits = self smallFloatTag ifTrue: [^self smallFloatValueOf: oop]] ifFalse: [(self classIndexOf: oop) = ClassFloatCompactIndex ifTrue: + [self cCode: nil inSmalltalk: [result := Float new: 2]. - [self cCode: '' inSmalltalk: [result := Float new: 2]. self fetchFloatAt: oop + self baseHeaderSize into: result. ^result]]. coInterpreter primitiveFail. ^0.0!
Item was changed: ----- Method: Spur64BitMemoryManager>>loadFloatOrIntFrom: (in category 'interpreter access') ----- loadFloatOrIntFrom: floatOrIntOop "If floatOrIntOop is an integer and we enable mixed arithmetic in primitives, then convert it to a C double float and return it. If it is a Float, then load its value and return it. Otherwise fail, i.e. return with primErrorCode non-zero."
<inline: true> <returnTypeC: #double> | result tagBits | <var: #result type: #double>
(tagBits := floatOrIntOop bitAnd: self tagMask) ~= 0 ifTrue: [tagBits = self smallFloatTag ifTrue: [^self smallFloatValueOf: floatOrIntOop]. (coInterpreter primitiveDoMixedArithmetic and: [tagBits = self smallIntegerTag]) ifTrue: [^(self integerValueOf: floatOrIntOop) asFloat]] ifFalse: [(self classIndexOf: floatOrIntOop) = ClassFloatCompactIndex ifTrue: + [self cCode: nil inSmalltalk: [result := Float new: 2]. - [self cCode: '' inSmalltalk: [result := Float new: 2]. self fetchFloatAt: floatOrIntOop + self baseHeaderSize into: result. ^result]]. coInterpreter primitiveFail. ^0.0!
Item was changed: ----- Method: SpurGenerationScavenger>>writeScavengeLog (in category 'logging') ----- writeScavengeLog "Output the entire record." <inline: #never> | policyNames | <var: 'policyNames' declareC: 'static char *policyNames[] = {"", "by age", "by class", "to shrink rt", "don''t tenure", "mark on tenure"}'> + self cCode: nil - self cCode: [] inSmalltalk: [policyNames := CLiteralArray on: #('' 'by age' 'by class' 'to shrink rt' 'don''t tenure' 'mark on tenure')]. scavengeLog "log data collected by logStartScavenge" f: 'scavenge %ld eden bytes: 0x%lx/%ld past bytes: 0x%lx/%ld\n\trem set: %ld redzone: %ld size: %ld\n' printf:{ manager statScavenges. scavengeLogRecord sEdenBytes. scavengeLogRecord sEdenBytes. scavengeLogRecord sPastBytes. scavengeLogRecord sPastBytes. scavengeLogRecord sRememberedSetSize. scavengeLogRecord sRememberedSetRedZone. scavengeLogRecord sRememberedSetLimit }. scavengeLog "log data collected by logTenuringPolicy" f: (scavengeLogRecord tTenureCriterion = TenureToShrinkRT ifFalse: [' tenure below 0x%lx/%ld %s\n'] ifTrue: [' tenure below 0x%lx/%ld %s refct %ld\n']) printf:{ scavengeLogRecord tTenureThreshold. scavengeLogRecord tTenureThreshold. policyNames at: scavengeLogRecord tTenureCriterion. scavengeLogRecord tRefCountToShrinkRT }. scavengeLog "log data collected by logEndScavenge" f: ' survivor bytes: 0x%lx/%ld rem set: %ld tenured: %ld usecs: %ld\n' printf:{ scavengeLogRecord eSurvivorBytes. scavengeLogRecord eSurvivorBytes. scavengeLogRecord eRememberedSetSize. scavengeLogRecord eStatTenures - scavengeLogRecord sStatTenures. manager statSGCDeltaUsecs}. scavengeLog fflush!
Item was changed: ----- Method: SpurMemoryManager>>checkHeapIntegrity:classIndicesShouldBeValid: (in category 'debug support') ----- (excessive size, no diff calculated)
Item was changed: ----- Method: SpurMemoryManager>>markAccessibleObjectsAndFireEphemerons (in category 'gc - global') ----- markAccessibleObjectsAndFireEphemerons self assert: marking. self assert: self validClassTableRootPages. self assert: segmentManager allBridgesMarked. + self cCode: nil "for debugging markAndTrace: set (MarkStackRecord := OrderedCollection new)" - self cCode: [] "for debugging markAndTrace: set (MarkStackRecord := OrderedCollection new)" inSmalltalk: [MarkStackRecord ifNotNil: [MarkStackRecord resetTo: 1]].
"This must come first to enable stack page reclamation. It clears the trace flags on stack pages and so must precede any marking. Otherwise it will clear the trace flags of reached pages." coInterpreter initStackPageGC. self markAndTraceHiddenRoots. self markAndTraceExtraRoots. self assert: self validClassTableRootPages. coInterpreter markAndTraceInterpreterOops: true. self assert: self validObjStacks. self markWeaklingsAndMarkAndFireEphemerons. self assert: self validObjStacks!
Item was changed: ----- Method: SpurMemoryManager>>markObjects: (in category 'gc - global') ----- markObjects: objectsShouldBeUnmarkedAndUnmarkedClassesShouldBeExpunged <inline: #never> "for profiling" "Mark all accessible objects. objectsShouldBeUnmarkedAndUnmarkedClassesShouldBeExpunged is true if all objects are unmarked and/or if unmarked classes shoud be removed from the class table." "If the incremental collector is running mark bits may be set; stop it and clear them if necessary." + self cCode: nil inSmalltalk: [coInterpreter transcript nextPutAll: 'marking...'; flush]. - self cCode: '' inSmalltalk: [coInterpreter transcript nextPutAll: 'marking...'; flush]. self runLeakCheckerFor: GCModeFull.
self shutDownGlobalIncrementalGC: objectsShouldBeUnmarkedAndUnmarkedClassesShouldBeExpunged. self initializeUnscannedEphemerons. self initializeMarkStack. self initializeWeaklingStack. marking := true. self markAccessibleObjectsAndFireEphemerons. self expungeDuplicateAndUnmarkedClasses: objectsShouldBeUnmarkedAndUnmarkedClassesShouldBeExpunged. self nilUnmarkedWeaklingSlots. marking := false!
Item was changed: ----- Method: SpurMemoryManager>>nilUnmarkedWeaklingSlots (in category 'weakness and ephemerality') ----- nilUnmarkedWeaklingSlots "Nil the unmarked slots in the weaklings on the weakling stack, finalizing those that lost references. Finally, empty the weaklingStack." <inline: #never> "for profiling" + self cCode: nil inSmalltalk: [coInterpreter transcript nextPutAll: 'nilling...'; flush]. - self cCode: '' inSmalltalk: [coInterpreter transcript nextPutAll: 'nilling...'; flush]. self eassert: [self allOldMarkedWeakObjectsOnWeaklingStack]. weaklingStack = nilObj ifTrue: [^self]. self objStack: weaklingStack from: 0 do: [:weakling| | anyUnmarked | anyUnmarked := self nilUnmarkedWeaklingSlotsIn: weakling. anyUnmarked ifTrue: ["fireFinalization: could grow the mournQueue and if so, additional pages must be marked to avoid being GC'ed." self assert: marking. coInterpreter fireFinalization: weakling]]. self emptyObjStack: weaklingStack!
Item was changed: ----- Method: SpurMemoryManager>>noCheckPush:onObjStack: (in category 'obj stacks') ----- noCheckPush: objOop onObjStack: objStack <inline: false> "Push an element on an objStack. Split from push:onObjStack: for testing." | topx | self eassert: [self isValidObjStack: objStack]. + self cCode: nil "for debugging markAndTrace: set (MarkStackRecord := OrderedCollection new)" - self cCode: '' "for debugging markAndTrace: set (MarkStackRecord := OrderedCollection new)" inSmalltalk: [MarkStackRecord ifNotNil: [(self fetchPointer: ObjStackMyx ofObject: objStack) = MarkStackRootIndex ifTrue: [MarkStackRecord addLast: {#push. objOop}]]]. topx := self fetchPointer: ObjStackTopx ofObject: objStack. topx >= ObjStackLimit ifTrue: [self noCheckPush: objOop onObjStack: (self ensureRoomOnObjStackAt: (self fetchPointer: ObjStackMyx ofObject: objStack))] ifFalse: [self storePointer: ObjStackFixedSlots + topx ofObjStack: objStack withValue: objOop. self storePointer: ObjStackTopx ofObjStack: objStack withValue: topx + 1]. ^objOop!
Item was changed: ----- Method: SpurMemoryManager>>popObjStack: (in category 'obj stacks') ----- popObjStack: objStack | topx top nextPage myx | self eassert: [self isValidObjStack: objStack]. topx := self fetchPointer: ObjStackTopx ofObject: objStack. topx = 0 ifTrue: [self assert: (self fetchPointer: ObjStackNextx ofObject: objStack) = 0. + self cCode: nil "for debugging markAndTrace: set (MarkStackRecord := OrderedCollection new)" - self cCode: '' "for debugging markAndTrace: set (MarkStackRecord := OrderedCollection new)" inSmalltalk: [MarkStackRecord ifNotNil: [(self fetchPointer: ObjStackMyx ofObject: objStack) = MarkStackRootIndex ifTrue: [MarkStackRecord addLast: {#EMPTY. nil}]]]. ^nil]. topx := topx - 1. top := self fetchPointer: topx + ObjStackFixedSlots ofObject: objStack. + self cCode: nil "for debugging markAndTrace: set (MarkStackRecord := OrderedCollection new)" - self cCode: '' "for debugging markAndTrace: set (MarkStackRecord := OrderedCollection new)" inSmalltalk: [MarkStackRecord ifNotNil: [(self fetchPointer: ObjStackMyx ofObject: objStack) = MarkStackRootIndex ifTrue: [(MarkStackRecord last first = #push and: [MarkStackRecord last last = top]) ifTrue: [MarkStackRecord removeLast] ifFalse: [MarkStackRecord addLast: {#pop. top}]]]]. self storePointer: ObjStackTopx ofObjStack: objStack withValue: topx. (topx = 0 and: [(nextPage := self fetchPointer: ObjStackNextx ofObject: objStack) ~= 0]) ifTrue: [self storePointer: ObjStackFreex ofObjStack: nextPage withValue: objStack. self storePointer: ObjStackNextx ofObjStack: objStack withValue: 0. myx := self fetchPointer: ObjStackMyx ofObject: objStack. self updateRootOfObjStackAt: myx with: nextPage. self eassert: [self isValidObjStack: nextPage]] ifFalse: [self eassert: [self isValidObjStack: objStack]]. ^top!
Item was changed: ----- Method: SpurMemoryManager>>return:restoringObjectsIn:savedFirstFields:and:savedHashes: (in category 'image segment in/out') ----- return: errCode restoringObjectsIn: firstArray savedFirstFields: savedFirstFields and: secondArray savedHashes: savedHashes <inline: false> "This is part of storeImageSegmentInto:outPointers:roots:." + self cCode: nil inSmalltalk: [errCode ~= 0 ifTrue: [self halt]]. - self cCode: [] inSmalltalk: [errCode ~= 0 ifTrue: [self halt]]. self restoreObjectsIn: firstArray upTo: -1 savedFirstFields: savedFirstFields. self restoreObjectsIn: secondArray savedHashes: savedHashes. self runLeakCheckerFor: GCCheckImageSegment. self assert: self allObjectsUnmarked. ^errCode!
Item was changed: ----- Method: SpurMemoryManager>>return:restoringObjectsIn:upTo:savedFirstFields: (in category 'image segment in/out') ----- return: errCode restoringObjectsIn: firstArray upTo: limitOrTag savedFirstFields: savedFirstFields <inline: false> "This is part of storeImageSegmentInto:outPointers:roots:." + self cCode: nil inSmalltalk: [errCode ~= 0 ifTrue: [self halt]]. - self cCode: [] inSmalltalk: [errCode ~= 0 ifTrue: [self halt]]. self restoreObjectsIn: firstArray upTo: limitOrTag savedFirstFields: savedFirstFields. self runLeakCheckerFor: GCCheckImageSegment. self assert: self allObjectsUnmarked. ^errCode!
Item was changed: ----- Method: SpurMemoryManager>>shorten:toIndexableSize: (in category 'indexing primitive support') ----- shorten: objOop toIndexableSize: indexableSize "Reduce the number of indexable fields in objOop, an arrayFormat or longFormat object, to nSlots. Convert the unused residual to a free chunk (if in oldSpace). Answer the number of bytes returned to free memory, which may be zero." <api> | delta | self assert: (indexableSize >= 0 and: [indexableSize < (self lengthOf: objOop)]). false ifTrue: [self runLeakCheckerFor: GCCheckShorten]. "assume no leaks before hand..." delta := self doShorten: objOop toIndexableSize: indexableSize. self assert: (self lengthOf: (self followMaybeForwarded: objOop)) = indexableSize. + self cCode: nil inSmalltalk: - self cCode: [] inSmalltalk: [(checkForLeaks anyMask: GCCheckShorten) ifTrue: [coInterpreter cr; print: 'leak checking shorten...'; flush]]. self runLeakCheckerFor: GCCheckShorten. ^delta
"coInterpreter printOop: objOop" "{ objOop. self objectAfter: objOop } collect: [:ea| ea hex]" "coInterpreter printOop: (self objectAfter: objOop)"!
Item was changed: ----- Method: SpurMemoryManager>>storePointer:ofObjStack:withValue: (in category 'object access') ----- storePointer: fieldIndex ofObjStack: objStackPage withValue: thang self assert: (self formatOf: objStackPage) = self wordIndexableFormat. + self cCode: nil - self cCode: '' inSmalltalk: [fieldIndex caseOf: { [ObjStackTopx] -> [self assert: (thang between: 0 and: ObjStackLimit)]. [ObjStackMyx] -> [self assert: (thang between: MarkStackRootIndex and: MournQueueRootIndex)]. [ObjStackFreex] -> [self assert: (thang = 0 or: [(self addressCouldBeObj: thang) and: [(self numSlotsOfAny: thang) = ObjStackPageSlots and: [(self formatOf: thang) = self wordIndexableFormat]]])]. [ObjStackNextx] -> [self assert: (thang = 0 or: [(self addressCouldBeObj: thang) and: [(self numSlotsOfAny: thang) = ObjStackPageSlots and: [(self formatOf: thang) = self wordIndexableFormat]]])]. } otherwise: []]. ^self longAt: objStackPage + self baseHeaderSize + (fieldIndex << self shiftForWord) put: thang!
Item was changed: ----- Method: SpurMemoryManager>>unlinkFreeChunk:atIndex:isLilliputianSize: (in category 'free space') ----- unlinkFreeChunk: chunk atIndex: index isLilliputianSize: lilliputian "Unlink and answer a small chunk from one of the fixed size freeLists" <inline: true> "inlining is important because isLilliputianSize: is often true" |next| self assert: ((self bytesInBody: chunk) = (index * self allocationUnit) and: [index > 1 "a.k.a. (self bytesInBody: chunk) > self allocationUnit" and: [(self startOfObject: chunk) = chunk]]). "For some reason the assertion is not compiled correctly" + self cCode: nil inSmalltalk: [self assert: (self isLilliputianSize: (self bytesInBody: chunk)) = lilliputian]. - self cCode: '' inSmalltalk: [self assert: (self isLilliputianSize: (self bytesInBody: chunk)) = lilliputian]. freeLists at: index put: (next := self fetchPointer: self freeChunkNextIndex ofFreeChunk: chunk). (lilliputian not and: [next ~= 0]) ifTrue: [self storePointer: self freeChunkPrevIndex ofFreeChunk: next withValue: 0]. ^chunk!
Item was changed: ----- Method: SpurPlanningCompactor>>freeFrom:upTo:nextObject: (in category 'private') ----- freeFrom: initialToFinger upTo: limit nextObject: nextObject "Free from toFinger up to limit, dealing with possible intervening pinned objects." <inline: false> <var: 'limit' type: #usqInt> <var: 'initialToFinger' type: #usqInt> | toFinger obj objStart | <var: 'objStart' type: #usqInt> <var: 'toFinger' type: #usqInt> + self cCode: nil inSmalltalk: - self cCode: [] inSmalltalk: [coInterpreter cr; cr; print: 'freeing at '; printHexnp: initialToFinger; print: ' up to '; printHexnp: limit; cr]. toFinger := initialToFinger. objStart := manager startOfObject: nextObject. toFinger < objStart ifTrue: [manager addFreeChunkWithBytes: objStart - toFinger at: toFinger]. toFinger := objStart. [objStart < limit] whileTrue: [obj := manager objectStartingAt: objStart. ((manager isMarked: obj) and: [manager isPinned: obj]) ifTrue: [self unmarkPinned: obj. toFinger < objStart ifTrue: [manager addFreeChunkWithBytes: objStart - toFinger at: toFinger]. toFinger := objStart := manager addressAfter: obj] ifFalse: [objStart := manager addressAfter: obj]]. limit > toFinger ifTrue: [manager addFreeChunkWithBytes: limit - toFinger at: toFinger]!
Item was changed: ----- Method: SpurPlanningCompactor>>logPhase: (in category 'private') ----- logPhase: phaseName <inline: true> + self cCode: nil inSmalltalk: [coInterpreter transcript nextPutAll: phaseName; flush].! - self cCode: '' inSmalltalk: [coInterpreter transcript nextPutAll: phaseName; flush].!
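An editorial note for readers new to VMMaker's Slang translator, since every item in this diff makes the same change: `cCode:inSmalltalk:` is a translation-time fork. When a method is translated to C, only the first argument is used (a string is emitted verbatim, a block is translated, and — as this commit establishes — `nil` emits nothing); when the same method runs under the simulator, only the second (block) argument is evaluated. So `cCode: nil` produces the same C as `cCode: ''` or `cCode: []`, while avoiding an empty string or block literal in the image — the "little space & time" the commit message refers to. A minimal sketch, assuming a hypothetical method name (`logSweepStart` is not part of this commit):

```smalltalk
logSweepStart
	"Simulation-only logging: the nil first argument generates no C code,
	 while the block prints to the transcript when run under the simulator."
	self cCode: nil inSmalltalk: [coInterpreter transcript nextPutAll: 'sweeping'; flush]
```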
Item was changed: ----- Method: SpurSegmentManager>>addSegmentOfSize: (in category 'growing/shrinking memory') ----- addSegmentOfSize: ammount <returnTypeC: #'SpurSegmentInfo *'> <inline: false> | allocatedSize | <var: #newSeg type: #'SpurSegmentInfo *'> <var: #segAddress type: #'void *'> <var: #allocatedSize type: #'usqInt'> + self cCode: nil inSmalltalk: [segments ifNil: [^nil]]. "bootstrap" - self cCode: [] inSmalltalk: [segments ifNil: [^nil]]. "bootstrap" (manager "sent to the manager so that the simulator can increase memory to simulate a new segment" sqAllocateMemorySegmentOfSize: ammount Above: (self firstGapOfSizeAtLeast: ammount) AllocatedSizeInto: (self cCode: [self addressOf: allocatedSize] inSmalltalk: [[:sz| allocatedSize := sz]])) ifNotNil: [:segAddress| | newSegIndex newSeg | newSegIndex := self insertSegmentFor: segAddress asUnsignedIntegerPtr. "Simulation insertion code duplicates entries if newSegIndex ~= numSegments - 1" + self cCode: nil inSmalltalk: [segments at: newSegIndex put: SpurSegmentInfo new]. - self cCode: '' inSmalltalk: [segments at: newSegIndex put: SpurSegmentInfo new]. newSeg := self addressOf: (segments at: newSegIndex). newSeg segStart: segAddress asUnsignedIntegerPtr; segSize: allocatedSize; swizzle: 0. "Required in the C version only" self assert: self segmentOverlap not. "self printSegmentAddresses." self bridgeFrom: (self addressOf: (segments at: newSegIndex - 1)) to: newSeg. self bridgeFrom: newSeg to: (newSegIndex = (numSegments - 1) ifFalse: [self addressOf: (segments at: newSegIndex + 1)]). totalHeapSizeIncludingBridges := totalHeapSizeIncludingBridges + allocatedSize. "test isInMemory:" 0 to: numSegments - 1 do: [:i| self assert: (self isInSegments: (segments at: i) segStart). self assert: (self isInSegments: (segments at: i) segLimit - manager wordSize). self assert: ((self isInSegments: (segments at: i) segLimit) not or: [i < (numSegments - 1) and: [(segments at: i) segLimit = (segments at: i + 1) segStart]]). 
self assert: ((self isInSegments: (segments at: i) segStart - manager wordSize) not or: [i > 0 and: [(segments at: i - 1) segLimit = (segments at: i) segStart]])]. ^newSeg]. ^nil!
Item was changed: ----- Method: SpurSegmentManager>>collapseSegmentsPostSwizzle (in category 'snapshot') ----- collapseSegmentsPostSwizzle "The image has been loaded, old segments reconstructed, and the heap swizzled into a single contiguous segment. Collapse the segments into one." <inline: false> canSwizzle := false. + self cCode: nil - self cCode: [] inSmalltalk: [segments ifNil: [self allocateOrExtendSegmentInfos]]. numSegments := 1. self computeTotalHeapSizeIncludingBridges. (segments at: 0) segStart: manager oldSpaceStart; segSize: totalHeapSizeIncludingBridges. manager bootstrapping ifTrue: ["finally plant a bridge at the end of the coalesced segment and cut back the manager's notion of the end of memory to immediately before the bridge." self assert: manager endOfMemory = (segments at: 0) segLimit. manager initSegmentBridgeWithBytes: manager bridgeSize at: manager endOfMemory - manager bridgeSize]. self assert: (manager isSegmentBridge: (self bridgeAt: 0)). self assert: (manager numSlotsOfAny: (self bridgeAt: 0)) = 0!
Item was changed: ----- Method: SpurSegmentManager>>removeSegment: (in category 'growing/shrinking memory') ----- removeSegment: emptySeg <var: #emptySeg type: #'SpurSegmentInfo *'> | i | i := self indexOfSegment: emptySeg. self assert: i > 0.
totalHeapSizeIncludingBridges := totalHeapSizeIncludingBridges - emptySeg segSize. manager sqDeallocateMemorySegmentAt: emptySeg segStart asVoidPointer OfSize: emptySeg segSize.
i to: numSegments - 2 do: [:j| segments at: j put: (segments at: j + 1)]. + self cCode: nil inSmalltalk: [segments at: numSegments - 1 put: SpurSegmentInfo new]. - self cCode: [] inSmalltalk: [segments at: numSegments - 1 put: SpurSegmentInfo new]. numSegments := numSegments - 1.
self bridgeFrom: (self addressOf: (segments at: i - 1)) to: (i <= (numSegments - 1) ifTrue: [self addressOf: (segments at: i)]).
manager setLastSegment: (self addressOf: (segments at: numSegments - 1))!
Item was changed: ----- Method: SpurSegmentManager>>segmentOverlap (in category 'growing/shrinking memory') ----- segmentOverlap "Answers true if a segment overlaps with another one." 0 to: numSegments - 1 do: [:i| | starti endi | starti := (segments at: i) segStart. endi := (segments at: i) segLimit. 0 to: numSegments - 1 do: [:j| | startj endj | startj := (segments at: j) segStart. endj := (segments at: j) segLimit. i = j ifFalse: [(starti < startj or: [starti >= endj]) ifFalse: + [self cCode: nil inSmalltalk: [self error: 'segment overlap ' , i printString , ' and ' , j printString]. - [self cCode: '' inSmalltalk: [self error: 'segment overlap ' , i printString , ' and ' , j printString]. ^true]. (endi <= startj or: [endi >= endj]) + ifFalse: [self cCode: nil inSmalltalk: [self error: 'segment overlap ' , i printString , ' and ' , j printString]. - ifFalse: [self cCode: '' inSmalltalk: [self error: 'segment overlap ' , i printString , ' and ' , j printString]. ^true]. "self printSegmentAddresses" ]]]. ^false!
Item was changed: ----- Method: StackInterpreter>>computeStackZoneSize (in category 'initialization') ----- computeStackZoneSize + self cCode: nil inSmalltalk: - self cCode: [] inSmalltalk: [stackPages ifNil: [stackPages := self stackPagesClass new setInterpreter: self]]. ^numStackPages * ((self sizeof: CogStackPage) + self stackPageByteSize) + stackPages extraStackBytes!
Item was changed: ----- Method: StackInterpreter>>eekcr (in category 'debug printing') ----- eekcr "For marking the end of a leak check print message" <api> <inline: #never> self printf: '\n'. + self cCode: nil inSmalltalk: [self halt]! - self cCode: '' inSmalltalk: [self halt]!
Item was changed: ----- Method: StackInterpreter>>followForwardingPointersOfReceiverAndTemporariesInStackZone (in category 'object memory support') ----- followForwardingPointersOfReceiverAndTemporariesInStackZone "A more thorough version of followForwardingPointersInStackZone that also follows all temporaries (but not stack contents after the temps). This allows removal of the TempVectReadBarrier in the IGC" <inline: false>
stackPage ifNil: "the system must be snapshotting; nothing to do..." [self assert: (stackPages mostRecentlyUsedPage isNil or: [stackPages mostRecentlyUsedPage isFree]). + self cCode: nil inSmalltalk: [self assert: stackPages allPagesFree]. - self cCode: [] inSmalltalk: [self assert: stackPages allPagesFree]. ^self].
self externalWriteBackHeadFramePointers.
0 to: numStackPages - 1 do: [:i| | thePage theFP theSP theIPPtr callerFP offset oop frameRcvrOffset methodHeader | thePage := stackPages stackPageAt: i. thePage isFree ifFalse: [self assert: (self ifCurrentStackPageHasValidHeadPointers: thePage). theFP := thePage headFP. "Skip the instruction pointer on top of stack of inactive pages." theIPPtr := thePage = stackPage ifFalse: [thePage headSP asUnsignedInteger]. [self assert: (thePage addressIsInPage: theFP). self assert: (theIPPtr isNil or: [thePage addressIsInPage: theIPPtr asVoidPointer]). oop := stackPages longAt: theFP + FoxReceiver. (objectMemory isOopForwarded: oop) ifTrue: [stackPages longAt: theFP + FoxReceiver put: (objectMemory followForwarded: oop)]. ((self frameHasContext: theFP) and: [(objectMemory isForwarded: (self frameContext: theFP))]) ifTrue: [stackPages longAt: theFP + FoxThisContext put: (objectMemory followForwarded: (self frameContext: theFP))]. oop := self frameMethod: theFP. (objectMemory isForwarded: oop) ifTrue: [| newOop delta | newOop := objectMemory followForwarded: oop. theIPPtr ifNotNil: [self assert: (stackPages longAt: theIPPtr) > (self frameMethod: theFP). delta := newOop - oop. stackPages longAt: theIPPtr put: (stackPages longAt: theIPPtr) + delta]. stackPages longAt: theFP + FoxMethod put: (oop := newOop)]. methodHeader := objectMemory methodHeaderOf: oop. offset := self frameStackedReceiverOffsetNumArgs: (self argumentCountOfMethodHeader: methodHeader). oop := stackPages longAt: theFP + offset. (objectMemory isOopForwarded: oop) ifTrue: [stackPages longAt: theFP + offset put: (objectMemory followForwarded: oop)]. frameRcvrOffset := theFP + FoxIFReceiver. theSP := frameRcvrOffset - ((self temporaryCountOfMethodHeader: methodHeader) * objectMemory wordSize). [theSP <= frameRcvrOffset] whileTrue: [oop := stackPages longAt: theSP. (objectMemory isOopForwarded: oop) ifTrue: [oop := objectMemory followForwarded: oop. stackPages longAt: theSP put: oop]. 
theSP := theSP + objectMemory wordSize]. (callerFP := self frameCallerFP: theFP) ~= 0] whileTrue: [theIPPtr := (theFP + FoxCallerSavedIP) asUnsignedInteger. theFP := callerFP]. "And finally follow the caller context." self assert: theFP = thePage baseFP. oop := self frameCallerContext: theFP. (objectMemory isForwarded: oop) ifTrue: [self frameCallerContext: theFP put: (objectMemory followForwarded: oop)]]]!
Item was changed: ----- Method: StackInterpreter>>followForwardingPointersOfReceiversInStackZone (in category 'object memory support') ----- followForwardingPointersOfReceiversInStackZone "Spur's become: is lazy, turning the becommed object into a forwarding object to the other. The read-barrier is minimised by arranging that forwarding pointers will fail a method cache probe, since notionally objects' internals are accessed only via sending messages to them; the exception is primitives that access the internals of the non-receiver argument(s).
To avoid a read barrier on bytecode, literal and inst var fetch and non-local return, we scan the receivers (including the stacked receiver for non-local return) and method references in the stack zone and follow any forwarded ones. This is of course way cheaper than scanning all of memory as in the old become." | theIPPtr | <inline: false>
stackPage ifNil: "the system must be snapshotting; nothing to do..." [self assert: (stackPages mostRecentlyUsedPage isNil or: [stackPages mostRecentlyUsedPage isFree]). + self cCode: nil inSmalltalk: [self assert: stackPages allPagesFree]. - self cCode: [] inSmalltalk: [self assert: stackPages allPagesFree]. ^self].
self externalWriteBackHeadFramePointers.
0 to: numStackPages - 1 do: [:i| | thePage theFP callerFP offset oop | thePage := stackPages stackPageAt: i. thePage isFree ifFalse: [self assert: (self ifCurrentStackPageHasValidHeadPointers: thePage). theFP := thePage headFP. "Skip the instruction pointer on top of stack of inactive pages." theIPPtr := thePage = stackPage ifFalse: [thePage headSP asUnsignedInteger]. [self assert: (thePage addressIsInPage: theFP). self assert: (theIPPtr isNil or: [thePage addressIsInPage: theIPPtr asVoidPointer]). oop := stackPages longAt: theFP + FoxReceiver. (objectMemory isOopForwarded: oop) ifTrue: [stackPages longAt: theFP + FoxReceiver put: (objectMemory followForwarded: oop)]. ((self frameHasContext: theFP) and: [(objectMemory isForwarded: (self frameContext: theFP))]) ifTrue: [stackPages longAt: theFP + FoxThisContext put: (objectMemory followForwarded: (self frameContext: theFP))]. oop := self frameMethod: theFP. (objectMemory isForwarded: oop) ifTrue: [| newOop delta | newOop := objectMemory followForwarded: oop. theIPPtr ifNotNil: [self assert: (stackPages longAt: theIPPtr) > (self frameMethod: theFP). delta := newOop - oop. stackPages longAt: theIPPtr put: (stackPages longAt: theIPPtr) + delta]. stackPages longAt: theFP + FoxMethod put: (oop := newOop)]. offset := self frameStackedReceiverOffset: theFP. oop := stackPages longAt: theFP + offset. (objectMemory isOopForwarded: oop) ifTrue: [stackPages longAt: theFP + offset put: (objectMemory followForwarded: oop)]. (callerFP := self frameCallerFP: theFP) ~= 0] whileTrue: [theIPPtr := (theFP + FoxCallerSavedIP) asUnsignedInteger. theFP := callerFP]. "And finally follow the caller context." self assert: theFP = thePage baseFP. oop := self frameCallerContext: theFP. (objectMemory isForwarded: oop) ifTrue: [self frameCallerContext: theFP put: (objectMemory followForwarded: oop)]]]!
Item was changed: ----- Method: StackInterpreter>>initializeInterpreter: (in category 'initialization') ----- initializeInterpreter: bytesToShift "Initialize Interpreter state before starting execution of a new image. N.B. do *NOT* initialize variables that can be initialized via command line arguments since command line arguments are processed before the image is loaded and this initialization takes place after the image is loaded. Anything that is not initialized to either 0 or false (the C default value) should be initialized in StackInterpreter class>>declareCVarsIn:" interpreterProxy := self sqGetInterpreterProxy. self dummyReferToProxy. objectMemory initializeObjectMemory: bytesToShift. self checkAssumedCompactClasses. self initializeExtraClassInstVarIndices. method := newMethod := objectMemory nilObject. + self cCode: nil inSmalltalk: - self cCode: '' inSmalltalk: [breakSelectorLength ifNil: [breakSelectorLength := objectMemory minSmallInteger]. breakLookupClassTag ifNil: [breakLookupClassTag := -1]. reenterInterpreter := ReenterInterpreter new]. methodDictLinearSearchLimit := 8. self initialCleanup. LowcodeVM ifTrue: [ self setupNativeStack ]. profileSemaphore := profileProcess := profileMethod := objectMemory nilObject. + self cCode: nil inSmalltalk: - self cCode: '' inSmalltalk: [InitializationOptions at: #profiling ifPresent: [:profiling| "hack turn on profiling, for testing in the simulator." profiling ifTrue: [profileSemaphore := objectMemory cloneObject: (objectMemory splObj: TheInterruptSemaphore). objectMemory storePointerUnchecked: FirstLinkIndex ofObject: profileSemaphore withValue: objectMemory nilObject; storePointerUnchecked: NextLinkIndex ofObject: profileSemaphore withValue: objectMemory nilObject; storePointerUnchecked: ExcessSignalsIndex ofObject: profileSemaphore withValue: (objectMemory integerObjectOf: 0)]]]. interruptKeycode := 2094. "cmd-. 
as used for Mac but no other OS" [globalSessionID = 0] whileTrue: [globalSessionID := self cCode: [((self time: #NULL) + self ioMSecs) bitAnd: 16r7FFFFFFF] inSmalltalk: [(Random new next * (SmallInteger maxVal min: 16r7FFFFFFF)) asInteger]]. metaAccessorDepth := -2. super initializeInterpreter: bytesToShift!
Item was changed: ----- Method: StackInterpreter>>isCalloutPrimitiveIndex: (in category 'primitive support') ----- isCalloutPrimitiveIndex: primIndex "This virtual machine provides two primitives that call external code: primitiveExternalCall for plugin primitives, and primitiveCalloutToFFI for FFI calls." <inline: true> + self cCode: nil inSmalltalk: [#(primitiveExternalCall primitiveCalloutToFFI)]. "For senders..." - self cCode: [] inSmalltalk: [#(primitiveExternalCall primitiveCalloutToFFI)]. "For senders..." ^primIndex = PrimNumberExternalCall "#primitiveExternalCall" or: [primIndex = PrimNumberFFICall] "#primitiveCalloutToFFI"!
Item was changed: ----- Method: StackInterpreter>>isMetaPrimitiveIndex: (in category 'primitive support') ----- isMetaPrimitiveIndex: primIndex "This virtual machine provides two primitives that execute arbitrary primitives, one for indexed primitives and one for named primitives. These meta primitives are used in the debugger to execute primitives while simulating execution. Spur needs to know the accessor depth for a primitive so that failures due to forwarders can be fixed up and retried. This method identifies such meta primitives so that metaAccessorDepth can be substituted when appropriate." <inline: true> + self cCode: nil inSmalltalk: [#(primitiveDoPrimitiveWithArgs primitiveDoNamedPrimitiveWithArgs)]. "For senders..." - self cCode: [] inSmalltalk: [#(primitiveDoPrimitiveWithArgs primitiveDoNamedPrimitiveWithArgs)]. "For senders..." ^primIndex = PrimNumberDoPrimitive or: [primIndex = PrimNumberDoExternalCall]!
Item was changed: ----- Method: StackInterpreter>>loadInitialContext (in category 'initialization') ----- loadInitialContext <inline: false> | activeProc activeContext | + self cCode: nil inSmalltalk: [self initExtensions]. - self cCode: [] inSmalltalk: [self initExtensions]. objectMemory runLeakCheckerFor: GCModeFull. "primitiveSuspend needs to know the class of LinkedList" self getClassTagOfLinkedList. activeProc := self activeProcess. activeContext := objectMemory fetchPointer: SuspendedContextIndex ofObject: activeProc. self marryContextInNewStackPageAndInitializeInterpreterRegisters: activeContext!
Item was changed: ----- Method: StackInterpreter>>printCallStackFP: (in category 'debug printing') ----- printCallStackFP: theFP <var: #theFP type: #'char *'> | context | <inline: false> + self cCode: nil inSmalltalk: [transcript ensureCr]. - self cCode: '' inSmalltalk: [transcript ensureCr]. context := self shortReversePrintFrameAndCallers: theFP. [context = objectMemory nilObject] whileFalse: [(self isMarriedOrWidowedContext: context) ifTrue: [(self checkIsStillMarriedContext: context currentFP: framePointer) ifFalse: [self shortPrintContext: context. ^nil]. context := self shortReversePrintFrameAndCallers: (self frameOfMarriedContext: context)] ifFalse: [context := self printContextCallStackOf: context]]!
Item was changed: ----- Method: StackInterpreter>>printCallStackOf: (in category 'debug printing') ----- printCallStackOf: aContextOrProcessOrFrame <public> "useful for VM debugging" | context | <inline: false> + self cCode: nil inSmalltalk: [transcript ensureCr]. - self cCode: '' inSmalltalk: [transcript ensureCr]. (stackPages couldBeFramePointer: aContextOrProcessOrFrame) ifTrue: [^self printCallStackFP: (self cCoerceSimple: aContextOrProcessOrFrame to: #'char *')]. aContextOrProcessOrFrame = self activeProcess ifTrue: [^self printCallStackOf: self headFramePointer]. (self couldBeProcess: aContextOrProcessOrFrame) ifTrue: [^self printCallStackOf: (objectMemory fetchPointer: SuspendedContextIndex ofObject: aContextOrProcessOrFrame)]. context := aContextOrProcessOrFrame. [context = objectMemory nilObject] whileFalse: [(self isMarriedOrWidowedContext: context) ifTrue: [(self checkIsStillMarriedContext: context currentFP: framePointer) ifFalse: [self shortPrintContext: context. ^nil]. context := self shortReversePrintFrameAndCallers: (self frameOfMarriedContext: context)] ifFalse: [context := self printContextCallStackOf: context]]!
Item was changed: ----- Method: StackInterpreter>>printFrame:WithSP: (in category 'debug printing') ----- printFrame: theFP WithSP: theSP <var: #theFP type: #'char *'> <var: #theSP type: #'char *'> <public> "useful for VM debugging" | theMethod numArgs topThing | <inline: false> <var: #addr type: #'char *'> + self cCode: nil inSmalltalk: [self transcript ensureCr]. - self cCode: '' inSmalltalk: [self transcript ensureCr]. (stackPages couldBeFramePointer: theFP) ifFalse: ['%P is not in the stack zone?!!\n' f: transcript printf: theFP. ^nil]. theMethod := self frameMethod: theFP. numArgs := self frameNumArgs: theFP. self shortPrintFrame: theFP. self frameRange: theFP + (self frameStackedReceiverOffsetNumArgs: numArgs) to: theSP. self printFrameOop: 'rcvr/clsr' at: theFP + (self frameStackedReceiverOffsetNumArgs: numArgs). numArgs to: 1 by: -1 do: [:i| self printFrameOop: 'arg' at: theFP + FoxCallerSavedIP + (i * objectMemory wordSize)]. self printFrameThing: 'cllr ip/ctxt' at: theFP + FoxCallerSavedIP. self printFrameThing: 'saved fp' at: theFP + FoxSavedFP. self printFrameOop: 'method' at: theFP + FoxMethod. self printFrameFlagsForFP: theFP. self printFrameThing: 'context' at: theFP + FoxThisContext. self printFrameOop: 'receiver' at: theFP + FoxReceiver. topThing := stackPages longAt: theSP. (topThing >= theMethod and: [topThing <= (theMethod + (objectMemory sizeBitsOfSafe: theMethod))]) ifTrue: [theFP + FoxReceiver - objectMemory wordSize to: theSP + objectMemory wordSize by: objectMemory wordSize negated do: [:addr| self printFrameOop: 'temp/stck' at: addr]. self printFrameThing: 'frame ip' at: theSP] ifFalse: [theFP + FoxReceiver - objectMemory wordSize to: theSP by: objectMemory wordSize negated do: [:addr| self printFrameOop: 'temp/stck' at: addr]]!
Item was changed: ----- Method: StackInterpreter>>printMethodCacheFor: (in category 'debug printing') ----- printMethodCacheFor: thing <public> "useful for VM debugging" | n | n := 0. 0 to: MethodCacheSize - 1 by: MethodCacheEntrySize do: [:i | | s c m p | s := methodCache at: i + MethodCacheSelector. c := methodCache at: i + MethodCacheClass. m := methodCache at: i + MethodCacheMethod. p := methodCache at: i + MethodCachePrimFunction. ((thing = -1 or: [s = thing or: [c = thing or: [p = thing or: [m = thing]]]]) and: [(objectMemory addressCouldBeOop: s) and: [c ~= 0 and: [(self addressCouldBeClassObj: c) or: [self addressCouldBeClassObj: (objectMemory classForClassTag: c)]]]]) ifTrue: [n := n + 1. + self cCode: nil inSmalltalk: [self transcript ensureCr]. - self cCode: [] inSmalltalk: [self transcript ensureCr]. self printNum: i; space; printHexnp: i; cr; tab. (objectMemory isBytesNonImm: s) ifTrue: ['%p %.*s\n' f: transcript printf: { s. objectMemory numBytesOfBytes: s. objectMemory firstIndexableField: s }] ifFalse: [self shortPrintOop: s]. self tab. (self addressCouldBeClassObj: c) ifTrue: [self shortPrintOop: c] ifFalse: [self printNum: c; space; shortPrintOop: (objectMemory classForClassTag: c)]. self tab; shortPrintOop: m; tab. self cCode: [p > 1024 ifTrue: [self printHexnp: p] ifFalse: [self printNum: p]] inSmalltalk: [p isSymbol ifTrue: [self print: p] ifFalse: [self printNum: p]]. self cr]]. n > 1 ifTrue: [self printNum: n; cr]!
Item was changed: ----- Method: StackInterpreter>>printStackPage:useCount: (in category 'debug printing') ----- printStackPage: page useCount: n <inline: false> <var: #page type: #'StackPage *'> + self cCode: nil inSmalltalk: [page isInteger ifTrue: [^self printStackPage: (stackPages stackPageFor: page) useCount: n]]. - self cCode: '' inSmalltalk: [page isInteger ifTrue: [^self printStackPage: (stackPages stackPageFor: page) useCount: n]]. self print: 'page '; printHexPtrnp: (self cCode: [page] inSmalltalk: [page baseAddress]); print: ' ('; printNum: (stackPages pageIndexFor: page realStackLimit). n >= 0 ifTrue: [self print: ','; printNum: n]. self print: ') (trace: '; printNum: page trace; printChar: $). (stackPages isFree: page) ifTrue: [self print: ' (free)']. page = stackPages mostRecentlyUsedPage ifTrue: [self print: ' (MRU)']. page prevPage = stackPages mostRecentlyUsedPage ifTrue: [self print: ' (LRU)']. self cr; tab; print: 'ba: '; printHexPtrnp: page baseAddress; print: ' - sl: '; printHexPtrnp: page realStackLimit; print: ' - sl-so: '; printHexPtrnp: page realStackLimit - self stackLimitOffset; print: ' - la:'; printHexPtrnp: page lastAddress. (stackPages isFree: page) ifFalse: [self cr; tab; print: 'baseFP '; printHexPtrnp: page baseFP. self "cr;" tab; print: 'headFP '; printHexPtrnp: page headFP. self "cr;" tab; print: 'headSP '; printHexPtrnp: page headSP]. self cr; tab; print: 'prev '; printHexPtrnp: (self cCode: 'page->prevPage' inSmalltalk: [page prevPage baseAddress]); print: ' ('; printNum: (stackPages pageIndexFor: page prevPage realStackLimit); printChar: $). self tab; print: 'next '; printHexPtrnp: (self cCode: 'page->nextPage' inSmalltalk: [page nextPage baseAddress]); print: ' ('; printNum: (stackPages pageIndexFor: page nextPage realStackLimit); printChar: $). self cr!
Item was changed: ----- Method: StackInterpreter>>retryPrimitiveOnFailure (in category 'primitive support') ----- retryPrimitiveOnFailure "In Spur two cases of primitive failure are handled specially. A primitive may fail due to validation encountering a forwarder. On failure, check the accessorDepth for the primitive and if non-negative scan the args to the depth, following any forwarders. Retry the primitive if any are found. Hence lazily and transparently following forwarders on primitive failure. Additionally a primitive might fail due to an allocation failing. Retry if external primitives have failed with PrimErrNoMemory after running first the scavenger and then on a subsequent failure, the global mark-sweep collector. Hence lazily and transparently GC on memory exhaustion." <option: #SpurObjectMemory> <inline: false> | primitiveIndex gcDone followDone canRetry retry retried | primitiveIndex := self primitiveIndexOf: newMethod. self assert: (self saneFunctionPointerForFailureOfPrimIndex: primitiveIndex). gcDone := 0. followDone := canRetry := retried := false. [retry := false. primFailCode = PrimErrNoMemory ifTrue: [(gcDone := gcDone + 1) = 1 ifTrue: [canRetry := primitiveIndex = PrimNumberExternalCall]. canRetry ifTrue: [gcDone = 1 ifTrue: [objectMemory scavengingGC]. gcDone = 2 ifTrue: [objectMemory fullGC]. retry := gcDone <= 2]] ifFalse: [followDone ifFalse: [followDone := true. retry := self checkForAndFollowForwardedPrimitiveState]]. retry] whileTrue: [self assert: primFailCode ~= 0. retried := true. self initPrimCall. + self cCode: nil inSmalltalk: - self cCode: [] inSmalltalk: [self maybeMapPrimitiveFunctionPointerBackToSomethingEvaluable]. self dispatchFunctionPointer: primitiveFunctionPointer]. ^retried!
Item was changed: ----- Method: StackInterpreter>>writeImageFileIO (in category 'image save/restore') ----- writeImageFileIO "Write the image header and heap contents to imageFile for snapshot. c.f. writeImageFileIOSimulation. The game below is to maintain 64-bit alignment for all putLong:toFile: occurrences." <inline: #never> | imageName headerStart headerSize f imageBytes bytesWritten sCWIfn okToWrite | <var: 'f' type: #sqImageFile> <var: 'headerStart' type: #squeakFileOffsetType> <var: 'sCWIfn' type: #'void *'> <var: 'imageName' declareC: 'extern char imageName[]'>
+ self cCode: nil inSmalltalk: [imageName := 'sooth compiler'. ^self writeImageFileIOSimulation]. - self cCode: [] inSmalltalk: [imageName := 'sooth compiler'. ^self writeImageFileIOSimulation].
"If the security plugin can be loaded, use it to check for write permission. If not, assume it's ok" sCWIfn := self ioLoadFunction: 'secCanWriteImage' From: 'SecurityPlugin'. sCWIfn ~= 0 ifTrue: [okToWrite := self cCode: '((sqInt (*)(void))sCWIfn)()'. okToWrite ifFalse:[^self primitiveFail]].
"local constants" headerStart := 0. headerSize := objectMemory wordSize * 16. "64 or 128; header size in bytes; do not change!!"
f := self sqImageFile: imageName Open: 'wb'. (self invalidSqImageFile: f) ifTrue: "could not open the image file for writing" [^self primitiveFailFor: PrimErrOperationFailed].
imageBytes := objectMemory imageSizeToWrite. headerStart := self sqImage: f File: imageName StartLocation: headerSize + imageBytes. self cCode: '/* Note: on Unix systems one could put an exec command here, padded to 512 bytes */'. "position file to start of header" self sqImageFile: f Seek: headerStart.
self putWord32: self imageFormatVersionForSnapshot toFile: f. self putWord32: headerSize toFile: f. self putLong: imageBytes toFile: f. self putLong: objectMemory baseAddressOfImageSnapshot toFile: f. self putLong: objectMemory specialObjectsOop toFile: f. self putLong: objectMemory newObjectHash toFile: f. self putLong: self getSnapshotScreenSize toFile: f. self putLong: self getImageHeaderFlags toFile: f. self putWord32: extraVMMemory toFile: f. self putShort: desiredNumStackPages toFile: f. self putShort: self unknownShortOrCodeSizeInKs toFile: f. self putWord32: desiredEdenBytes toFile: f. self putShort: (maxExtSemTabSizeSet ifTrue: [self ioGetMaxExtSemTableSize] ifFalse: [0]) toFile: f. self putShort: the2ndUnknownShort toFile: f. objectMemory hasSpurMemoryManagerAPI ifTrue: [self putLong: objectMemory firstSegmentBytes toFile: f. self putLong: objectMemory bytesLeftInOldSpace toFile: f. 2 timesRepeat: [self putLong: 0 toFile: f] "Pad the rest of the header."] ifFalse: [4 timesRepeat: [self putLong: 0 toFile: f]]. "Pad the rest of the header."
objectMemory wordSize = 8 ifTrue: [3 timesRepeat: [self putLong: 0 toFile: f]]. "Pad the rest of the header."
self assert: headerStart + headerSize = (self sqImageFilePosition: f). "position file after the header" self sqImageFile: f Seek: headerStart + headerSize.
self successful ifFalse: "file write or seek failure" [self sqImageFileClose: f. ^nil].
"write the image data" objectMemory hasSpurMemoryManagerAPI ifTrue: [bytesWritten := objectMemory writeImageSegmentsToFile: f] ifFalse: [bytesWritten := self sq: (self pointerForOop: objectMemory baseAddressOfImageSnapshot) Image: (self sizeof: #char) File: imageBytes Write: f]. self success: bytesWritten = imageBytes. self sqImageFileClose: f!
Item was changed: ----- Method: StackInterpreterPrimitives>>primitiveDeferDisplayUpdates (in category 'I/O primitives') ----- primitiveDeferDisplayUpdates "Set or clear the flag that controls whether modifications of the Display object are propagated to the underlying platform's screen." | flag | cannotDeferDisplayUpdates ifTrue: [^self primitiveFail]. flag := self stackTop. flag = objectMemory trueObject ifTrue: [deferDisplayUpdates := true] ifFalse: [flag = objectMemory falseObject ifTrue: [deferDisplayUpdates := false] ifFalse: [^self primitiveFail]]. + self cCode: nil inSmalltalk: [self fullDisplayUpdate]. - self cCode: [] inSmalltalk: [self fullDisplayUpdate]. self pop: 1!
Item was changed: ----- Method: StackToRegisterMappingCogit>>assertCorrectSimStackPtr (in category 'compile abstract instructions') ----- assertCorrectSimStackPtr <inline: true> "Would like to assert simply simSpillBase > methodOrBlockNumTemps but can't because of the initialNils hack for nested blocks in SqueakV3PlusClosures" self assert: (simSpillBase >= methodOrBlockNumTemps or: [self maybeCompilingFirstPassOfBlockWithInitialPushNil and: [simSpillBase > methodOrBlockNumArgs]]). (needsFrame and: [simSpillBase > 0]) ifTrue: [self assert: ((self simStackAt: simSpillBase - 1) spilled == true or: [self maybeCompilingFirstPassOfBlockWithInitialPushNil and: [simSpillBase > methodOrBlockNumArgs]]). self assert: (simSpillBase > simStackPtr or: [(self simStackAt: simSpillBase) spilled == false])]. + self cCode: nil inSmalltalk: - self cCode: '' inSmalltalk: [deadCode ifFalse: [self assert: simStackPtr + (needsFrame ifTrue: [0] ifFalse: [1]) = (self debugStackPointerFor: bytecodePC)]].!
Item was changed: ----- Method: StackToRegisterMappingCogit>>compileCogFullBlockMethod: (in category 'compile abstract instructions') ----- compileCogFullBlockMethod: numCopied <option: #SistaV1BytecodeSet> methodOrBlockNumTemps := coInterpreter tempCountOf: methodObj. + self cCode: nil inSmalltalk: - self cCode: '' inSmalltalk: [debugStackPointers := coInterpreter debugStackPointersFor: methodObj]. ^super compileCogFullBlockMethod: numCopied!
Item was changed: ----- Method: StackToRegisterMappingCogit>>compileCogMethod: (in category 'compile abstract instructions') ----- compileCogMethod: selector methodOrBlockNumTemps := coInterpreter tempCountOf: methodObj. + self cCode: nil inSmalltalk: - self cCode: '' inSmalltalk: [debugStackPointers := coInterpreter debugStackPointersFor: methodObj]. ^super compileCogMethod: selector!
Item was changed: ----- Method: StackToRegisterMappingCogit>>ensureFixupAt: (in category 'compile abstract instructions') ----- ensureFixupAt: targetPC "Make sure there's a flagged fixup at the target pc in fixups. Initially a fixup's target is just a flag. Later on it is replaced with a proper instruction." <returnTypeC: #'BytecodeFixup *'> | fixup | <var: #fixup type: #'BytecodeFixup *'> fixup := self fixupAt: targetPC. self traceFixup: fixup merge: true. + self cCode: nil inSmalltalk: - self cCode: '' inSmalltalk: [self assert: simStackPtr = (self debugStackPointerFor: targetPC). (fixup isMergeFixupOrIsFixedUp and: [fixup isBackwardBranchFixup not]) ifTrue: "ignore backward branch targets" [self assert: fixup simStackPtr = simStackPtr]]. fixup isNonMergeFixupOrNotAFixup ifTrue: "convert a non-merge into a merge" [fixup becomeMergeFixup. fixup simStackPtr: simStackPtr. LowcodeVM ifTrue: [ fixup simNativeStackPtr: simNativeStackPtr. fixup simNativeStackSize: simNativeStackSize]] ifFalse: [fixup isBackwardBranchFixup ifTrue: "this is the target of a backward branch and so doesn't have a simStackPtr assigned yet." [fixup simStackPtr: simStackPtr. LowcodeVM ifTrue: [fixup simNativeStackPtr: simNativeStackPtr. fixup simNativeStackSize: simNativeStackSize]] ifFalse: [self assert: fixup simStackPtr = simStackPtr. LowcodeVM ifTrue: [self assert: fixup simNativeStackPtr = simNativeStackPtr. self assert: fixup simNativeStackSize = simNativeStackSize]]]. fixup recordBcpc: bytecodePC. ^fixup!
Item was changed: ----- Method: StackToRegisterMappingCogit>>ensureNonMergeFixupAt: (in category 'compile abstract instructions') ----- ensureNonMergeFixupAt: targetPC "Make sure there's a flagged fixup at the target pc in fixups. Initially a fixup's target is just a flag. Later on it is replaced with a proper instruction." <returnTypeC: #'BytecodeFixup *'> | fixup | <var: #fixup type: #'BytecodeFixup *'> fixup := self fixupAt: targetPC. self traceFixup: fixup merge: true. fixup notAFixup ifTrue: [fixup becomeNonMergeFixup]. + self cCode: nil inSmalltalk: - self cCode: '' inSmalltalk: [fixup isMergeFixupOrIsFixedUp ifTrue: [self assert: (fixup isBackwardBranchFixup or: [fixup simStackPtr = (self debugStackPointerFor: targetPC)])]]. fixup recordBcpc: bytecodePC. ^fixup!
Item was changed: ----- Method: StackToRegisterMappingCogit>>genStaticallyResolvedSpecialSelectorComparison (in category 'bytecode generator support') ----- genStaticallyResolvedSpecialSelectorComparison "Assumes both operands are ints" <var: #primDescriptor type: #'BytecodeDescriptor *'> | rcvrInt argInt primDescriptor result | primDescriptor := self generatorAt: byte0. argInt := self ssTop constant. rcvrInt := (self ssValue: 1) constant. + self cCode: nil inSmalltalk: "In Simulator ints are unsigned..." - self cCode: '' inSmalltalk: "In Simulator ints are unsigned..." [rcvrInt := objectMemory integerValueOf: rcvrInt. argInt := objectMemory integerValueOf: argInt]. primDescriptor opcode caseOf: { [JumpLess] -> [result := rcvrInt < argInt]. [JumpLessOrEqual] -> [result := rcvrInt <= argInt]. [JumpGreater] -> [result := rcvrInt > argInt]. [JumpGreaterOrEqual] -> [result := rcvrInt >= argInt]. [JumpZero] -> [result := rcvrInt = argInt]. [JumpNonZero] -> [result := rcvrInt ~= argInt] }. "Must annotate the bytecode for correct pc mapping." self ssPop: 2. ^self ssPushAnnotatedConstant: (result ifTrue: [objectMemory trueObject] ifFalse: [objectMemory falseObject])!
Item was changed: ----- Method: StackToRegisterMappingCogit>>generateTracingTrampolines (in category 'initialization') ----- generateTracingTrampolines "Generate trampolines for tracing. In the simulator we can save a lot of time and avoid noise instructions in the lastNInstructions log by short-cutting these trampolines, but we need them in the real vm." ceTraceLinkedSendTrampoline := self genTrampolineFor: #ceTraceLinkedSend: called: 'ceTraceLinkedSendTrampoline' arg: ReceiverResultReg regsToSave: CallerSavedRegisterMask. ceTraceBlockActivationTrampoline := self genTrampolineFor: #ceTraceBlockActivation called: 'ceTraceBlockActivationTrampoline' regsToSave: CallerSavedRegisterMask. ceTraceStoreTrampoline := self genTrampolineFor: #ceTraceStoreOf:into: called: 'ceTraceStoreTrampoline' arg: TempReg arg: ReceiverResultReg regsToSave: CallerSavedRegisterMask. + self cCode: nil inSmalltalk: - self cCode: [] inSmalltalk: [ceTraceLinkedSendTrampoline := self simulatedTrampolineFor: #ceShortCutTraceLinkedSend:. ceTraceBlockActivationTrampoline := self simulatedTrampolineFor: #ceShortCutTraceBlockActivation:. ceTraceStoreTrampoline := self simulatedTrampolineFor: #ceShortCutTraceStore:]!
Item was changed: ----- Method: StackToRegisterMappingCogit>>mergeWithFixupIfRequired: (in category 'simulation stack') ----- mergeWithFixupIfRequired: fixup "If this bytecode has a fixup, some kind of merge needs to be done. There are 4 cases: 1) the bytecode has no fixup (fixup isNotAFixup) do nothing 2) the bytecode has a non merge fixup the fixup has needsNonMergeFixup. The code generating non merge fixup (currently only special selector code) is responsible for the merge so no need to do it. We set deadCode to false as the instruction can be reached from jumps. 3) the bytecode has a merge fixup, but execution flow *cannot* fall through to the merge point. the fixup has needsMergeFixup and deadCode = true. ignores the current simStack as it does not mean anything restores the simStack to the state the jumps to the merge point expects it to be. 4) the bytecode has a merge fixup and execution flow *can* fall through to the merge point. the fixup has needsMergeFixup and deadCode = false. flushes the stack to the stack pointer so the fall through execution path simStack is in the state the merge point expects it to be. restores the simStack to the state the jumps to the merge point expects it to be. In addition, if this is a backjump merge point, we patch the fixup to hold the current simStackPtr for later assertions." <var: #fixup type: #'BytecodeFixup *'>
self assertCorrectSimStackPtr.
"case 1" fixup notAFixup ifTrue: [^0].
"case 2" fixup isNonMergeFixup ifTrue: [deadCode := false. ^0].
"cases 3 and 4" self assert: fixup isMergeFixup. self traceMerge: fixup. deadCode ifTrue: "case 3" ["Would like to assert fixup simStackPtr >= methodOrBlockNumTemps but can't because of the initialNils hack." self assert: (fixup simStackPtr >= methodOrBlockNumTemps or: [self maybeCompilingFirstPassOfBlockWithInitialPushNil]). simStackPtr := fixup simStackPtr. LowcodeVM ifTrue: [simNativeStackPtr := fixup simNativeStackPtr. simNativeStackSize := fixup simNativeStackSize]] ifFalse: "case 4" [self ssFlushTo: simStackPtr].
"cases 3 and 4" deadCode := false. fixup isBackwardBranchFixup ifTrue: [fixup simStackPtr: simStackPtr. LowcodeVM ifTrue: [fixup simNativeStackPtr: simNativeStackPtr. fixup simNativeStackSize: simNativeStackSize]]. fixup targetInstruction: self Label. self assert: simStackPtr = fixup simStackPtr. LowcodeVM ifTrue: [self assert: simNativeStackPtr = fixup simNativeStackPtr. self assert: simNativeStackSize = fixup simNativeStackSize].
+ self cCode: nil inSmalltalk: - self cCode: '' inSmalltalk: [self assert: fixup simStackPtr = (self debugStackPointerFor: bytecodePC)]. self restoreSimStackAtMergePoint: fixup.
^0!
Item was changed: ----- Method: ThreadedARM64FFIPlugin>>ffiCalloutTo:SpecOnStack:in: (in category 'callout support') ----- ffiCalloutTo: procAddr SpecOnStack: specOnStack in: calloutState <var: #procAddr type: #'void *'> <var: #calloutState type: #'CalloutState *'> <var: #loadFloatRegs declareC: 'extern void loadFloatRegs(double, double, double, double, double, double, double, double)'> "Go out, call this guy and create the return value. This *must* be inlined because of the alloca of the outgoing stack frame in ffiCall:WithFlags:NumArgs:Args:AndTypes:" | vmHandle atomicType floatRet intRet structRet specSize | <var: 'vmHandle' type: #'void *'> <var: 'floatRet' type: #ThirtyTwoByteReturnDF> <var: 'structRet' type: #SixteenByteReturnII> <var: 'intRet' type: #usqLong> <inline: #always> + self cCode: nil inSmalltalk: [floatRet := ByteArray new: 32]. "Just a hack to placate the Smalltalk compiler; these should be proper struct types..." - self cCode: [] inSmalltalk: [floatRet := ByteArray new: 32]. "Just a hack to placate the Smalltalk compiler; these should be proper struct types..." vmHandle := interpreterProxy disownVM: (self disownFlagsFor: calloutState). calloutState floatRegisterIndex > 0 ifTrue: [self loadFloatRegs: (calloutState floatRegisters at: 0) _: (calloutState floatRegisters at: 1) _: (calloutState floatRegisters at: 2) _: (calloutState floatRegisters at: 3) _: (calloutState floatRegisters at: 4) _: (calloutState floatRegisters at: 5) _: (calloutState floatRegisters at: 6) _: (calloutState floatRegisters at: 7)].
(self allocaLiesSoSetSpBeforeCall or: [self mustAlignStack]) ifTrue: [self setsp: calloutState argVector].
atomicType := self atomicTypeOf: calloutState ffiRetHeader. ((atomicType >> 1) = (FFITypeSingleFloat >> 1) or: [(calloutState ffiRetHeader bitAnd: FFIFlagPointer+FFIFlagStructure) = FFIFlagStructure and: [self structIsHomogenousFloatArrayOfSize: (calloutState ffiRetHeader bitAnd: FFIStructSizeMask) typeSpec: (self cCoerce: (interpreterProxy firstIndexableField: calloutState ffiRetSpec) to: #'unsigned int *') ofLength: (specSize := interpreterProxy byteSizeOf: calloutState ffiRetSpec) / (self sizeof: #'unsigned int')]]) ifTrue: [floatRet d: (self dispatchFunctionPointer: (self cCoerceSimple: procAddr to: 'struct dprr (*)(sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t)') with: (calloutState integerRegisters at: 0) with: (calloutState integerRegisters at: 1) with: (calloutState integerRegisters at: 2) with: (calloutState integerRegisters at: 3) with: (calloutState integerRegisters at: 4) with: (calloutState integerRegisters at: 5) with: (calloutState integerRegisters at: 6) with: (calloutState integerRegisters at: 7)).
"undo any callee argument pops because it may confuse stack management with the alloca." (self isCalleePopsConvention: calloutState callFlags) ifTrue: [self setsp: calloutState argVector]. interpreterProxy ownVM: vmHandle.
atomicType = FFITypeDoubleFloat ifTrue: [^interpreterProxy floatObjectOf: (floatRet d doubles at: 0)]. atomicType = FFITypeSingleFloat ifTrue: [^interpreterProxy floatObjectOf: (floatRet f floats at: 0)]. "If the struct is a vector of floats then move float[2] to float[1], float[4] to float[2] and float[6] to float[3], to pack the float data in the double fields. We can tell if the struct is composed of floats if its size is less than the spec size, since the spec size is (1 + n fields) * 4 bytes, and the struct size is n fields * 4 bytes for floats and n fields * 8 bytes for doubles. We can't access the spec post call because it may have moved." specSize > calloutState structReturnSize ifTrue: [floatRet f floats at: 1 put: (floatRet f floats at: 2). floatRet f floats at: 2 put: (floatRet f floats at: 4). floatRet f floats at: 3 put: (floatRet f floats at: 6)]. ^self ffiReturnStruct: (self addressOf: floatRet) ofType: (self ffiReturnType: specOnStack) in: calloutState].
"Integer and Structure returns..." "If struct address used for return value, call is special; struct return pointer must be in x8" (self mustReturnStructOnStack: calloutState structReturnSize) ifTrue: [intRet := 0. self setReturnRegister: (self cCoerceSimple: calloutState limit to: #sqLong) "stack alloca'd struct" andCall: (self cCoerceSimple: procAddr to: #sqLong) withArgsArray: (self cCoerceSimple: (self addressOf: calloutState integerRegisters) to: #sqLong)] ifFalse: [structRet := self dispatchFunctionPointer: (self cCoerceSimple: procAddr to: 'SixteenByteReturnII (*)(sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t)') with: (calloutState integerRegisters at: 0) with: (calloutState integerRegisters at: 1) with: (calloutState integerRegisters at: 2) with: (calloutState integerRegisters at: 3) with: (calloutState integerRegisters at: 4) with: (calloutState integerRegisters at: 5) with: (calloutState integerRegisters at: 6) with: (calloutState integerRegisters at: 7). intRet := structRet a]. "X1"
"undo any callee argument pops because it may confuse stack management with the alloca." (self isCalleePopsConvention: calloutState callFlags) ifTrue: [self setsp: calloutState argVector]. interpreterProxy ownVM: vmHandle.
(calloutState ffiRetHeader anyMask: FFIFlagPointer+FFIFlagStructure) ifTrue: [| returnType | "Note: Order is important here since FFIFlagPointer + FFIFlagStructure is used to represent 'typedef void* VoidPointer' and VoidPointer must be returned as pointer *not* as struct." returnType := self ffiReturnType: specOnStack. (calloutState ffiRetHeader anyMask: FFIFlagPointer) ifTrue: [^self ffiReturnPointer: intRet ofType: returnType in: calloutState]. ^self ffiReturnStruct: (((self returnStructInRegisters: calloutState) ifTrue: [self cCoerceSimple: (self addressOf: structRet) to: #'char *'] ifFalse: [calloutState limit])) ofType: returnType in: calloutState]. ^self ffiCreateIntegralResultOop: intRet ofAtomicType: atomicType in: calloutState!
Item was changed: ----- Method: ThreadedFFIPlugin>>ffiCall:ArgArrayOrNil:NumArgs: (in category 'callout support') ----- ffiCall: externalFunction ArgArrayOrNil: argArrayOrNil NumArgs: nArgs "Generic callout. Does the actual work. If argArrayOrNil is nil it takes args from the stack and the spec from the method. If argArrayOrNil is not nil takes args from argArrayOrNil and the spec from the receiver." | flags argTypeArray address argType oop argSpec argClass err theCalloutState calloutState requiredStackSize stackSize allocation result primNumArgs | <inline: #always> <var: #theCalloutState type: #'CalloutState'> <var: #calloutState type: #'CalloutState *'> <var: #allocation type: #'char *'>
primNumArgs := interpreterProxy methodArgumentCount. (interpreterProxy is: externalFunction KindOfClass: interpreterProxy classExternalFunction) ifFalse: [^self ffiFail: FFIErrorNotFunction]. "Load and check the values in the externalFunction before we call out" flags := interpreterProxy fetchInteger: ExternalFunctionFlagsIndex ofObject: externalFunction. interpreterProxy failed ifTrue: [^self ffiFail: FFIErrorBadArgs].
"This must come early for compatibility with the old FFIPlugin. Image-level code may assume the function pointer is loaded eagerly. Thanks to Nicolas Cellier." address := self ffiLoadCalloutAddress: externalFunction. interpreterProxy failed ifTrue: [^0 "error code already set by ffiLoadCalloutAddress:"]. argTypeArray := interpreterProxy fetchPointer: ExternalFunctionArgTypesIndex ofObject: externalFunction. "must be array of arg types" ((interpreterProxy isArray: argTypeArray) and: [(interpreterProxy slotSizeOf: argTypeArray) = (nArgs + 1)]) ifFalse: [^self ffiFail: FFIErrorBadArgs]. "check if the calling convention is supported" self cppIf: COGMTVM ifTrue: [(self ffiSupportsCallingConvention: (flags bitAnd: FFICallTypesMask)) ifFalse: [^self ffiFail: FFIErrorCallType]] ifFalse: "not masking causes threaded calls to fail, which is as they should if the plugin is not threaded." [(self ffiSupportsCallingConvention: flags) ifFalse: [^self ffiFail: FFIErrorCallType]]. requiredStackSize := self externalFunctionHasStackSizeSlot ifTrue: [interpreterProxy fetchInteger: ExternalFunctionStackSizeIndex ofObject: externalFunction] ifFalse: [-1]. interpreterProxy failed ifTrue: [^interpreterProxy primitiveFailFor: (argArrayOrNil isNil ifTrue: [PrimErrBadMethod] ifFalse: [PrimErrBadReceiver])]. stackSize := requiredStackSize < 0 ifTrue: [DefaultMaxStackSize] ifFalse: [requiredStackSize]. + self cCode: nil inSmalltalk: [theCalloutState := self class calloutStateClass new]. - self cCode: [] inSmalltalk: [theCalloutState := self class calloutStateClass new]. calloutState := self addressOf: theCalloutState. self cCode: [self memset: calloutState _: 0 _: (self sizeof: #CalloutState)]. calloutState callFlags: flags. "Fetch return type and args" argType := interpreterProxy fetchPointer: 0 ofObject: argTypeArray. argSpec := interpreterProxy fetchPointer: 0 ofObject: argType. argClass := interpreterProxy fetchPointer: 1 ofObject: argType. 
"Witten this way to allow Slang to inline ffiCheckReturn:With:in:" err := self ffiCheckReturn: argSpec With: argClass in: calloutState. err ~= 0 ifTrue: [^self ffiFail: err]. "cannot return" "alloca the outgoing stack frame, leaving room for marshalling args, and including space for the return struct, if any. Additional space reserved for saving register args like mandated by Win64 X64 or PPC ABI, will be managed by the call itself" allocation := self alloca: stackSize + calloutState structReturnSize + self cStackAlignment. self mustAlignStack ifTrue: [allocation := self cCoerce: (allocation asUnsignedIntegerPtr bitClear: self cStackAlignment - 1) to: #'char *']. calloutState argVector: allocation; currentArg: allocation; limit: allocation + stackSize. (self nonRegisterStructReturnIsViaImplicitFirstArgument and: [calloutState structReturnSize > 0 and: [(self returnStructInRegisters: calloutState) not]]) ifTrue: [err := self ffiPushPointer: calloutState limit in: calloutState. err ~= 0 ifTrue: [self cleanupCalloutState: calloutState. self cppIf: COGMTVM ifTrue: [err = PrimErrObjectMayMove negated ifTrue: [^PrimErrObjectMayMove]]. "N.B. Do not fail if object may move because caller will GC and retry." ^self ffiFail: err]]. 1 to: nArgs do: [:i| argType := interpreterProxy fetchPointer: i ofObject: argTypeArray. argSpec := interpreterProxy fetchPointer: 0 ofObject: argType. argClass := interpreterProxy fetchPointer: 1 ofObject: argType. oop := argArrayOrNil ifNil: [interpreterProxy stackValue: nArgs - i] ifNotNil: [interpreterProxy fetchPointer: i - 1 ofObject: argArrayOrNil]. err := self ffiArgument: oop Spec: argSpec Class: argClass in: calloutState. err ~= 0 ifTrue: [self cleanupCalloutState: calloutState. self cppIf: COGMTVM ifTrue: [err = PrimErrObjectMayMove negated ifTrue: [^PrimErrObjectMayMove]]. "N.B. Do not fail if object may move because caller will GC and retry." ^self ffiFail: err]]. 
"coercion failed or out of stack space" "Failures must be reported back from ffiArgument:Spec:Class:in:. Should not fail from here on in." self assert: interpreterProxy failed not. self ffiLogCallout: externalFunction. (requiredStackSize < 0 and: [self externalFunctionHasStackSizeSlot]) ifTrue: [stackSize := calloutState currentArg - calloutState argVector. interpreterProxy storeInteger: ExternalFunctionStackSizeIndex ofObject: externalFunction withValue: stackSize]. "Go out and call this guy" result := self ffiCalloutTo: address SpecOnStack: argArrayOrNil notNil in: calloutState. self cleanupCalloutState: calloutState. "Can not safely use argumentCount (via e.g. methodReturnValue:) since it may have been changed by a callback." interpreterProxy pop: primNumArgs + 1 thenPush: result. ^result!
Item was changed: ----- Method: ThreadedFFIPlugin>>primitiveFFIIntegerAt (in category 'primitives') ----- primitiveFFIIntegerAt "Answer a (signed or unsigned) n byte integer from the given byte offset in the receiver, using the platform's endianness." <export: true> <primitiveMetadata: #FastCPrimitiveFlag> | isSigned byteSize byteOffset rcvr value mask valueOop | <var: 'value' type: #usqLong> <var: 'mask' type: #usqLong> <export: true> isSigned := interpreterProxy booleanValueOf: (interpreterProxy stackValue: 0). byteSize := interpreterProxy stackIntegerValue: 1. byteOffset := interpreterProxy stackIntegerValue: 2. rcvr := interpreterProxy stackObjectValue: 3. interpreterProxy failed ifTrue:[^0]. (byteOffset > 0 and: [(byteSize between: 1 and: 8) and: [(byteSize bitAnd: byteSize - 1) = 0 "a.k.a. isPowerOfTwo"]]) ifFalse: [^interpreterProxy primitiveFail]. (self ffiAddressOf: rcvr startingAt: byteOffset size: byteSize) ifNil: [^interpreterProxy primitiveFailFor: PrimErrBadIndex] ifNotNil: [:addr| byteSize <= 2 ifTrue: [byteSize = 1 ifTrue: [value := self cCoerceSimple: (interpreterProxy byteAt: addr) to: #'unsigned char'] ifFalse: [value := self cCoerceSimple: (interpreterProxy unalignedShortAt: addr) to: #'unsigned short']] ifFalse: [byteSize = 4 ifTrue: [value := self cCoerceSimple: (interpreterProxy unalignedLong32At: addr) to: #'unsigned int'] ifFalse: [value := interpreterProxy unalignedLong64At: addr]]. byteSize < BytesPerWord ifTrue: [isSigned ifTrue: "sign extend value" [mask := 1 asUnsignedLongLong << (byteSize * 8 - 1). value := (value bitAnd: mask-1) - (value bitAnd: mask)]. "note: byte/short (&long if BytesPerWord=8) never exceed SmallInteger range" valueOop := interpreterProxy integerObjectOf: value] ifFalse: "general 64 bit integer; note these never fail" [isSigned ifTrue: [byteSize < 8 ifTrue: "sign extend value" [mask := 1 asUnsignedLongLong << (byteSize * 8 - 1). value := (value bitAnd: mask-1) - (value bitAnd: mask)]. 
+ self cCode: nil inSmalltalk: - self cCode: [] inSmalltalk: [(byteSize = 8 and: [(value bitShift: -56) >= 128]) ifTrue: [value := value - (1 bitShift: 64)]]. valueOop := interpreterProxy signed64BitIntegerFor: value] ifFalse:[valueOop := interpreterProxy positive64BitIntegerFor: value]]. ^interpreterProxy methodReturnValue: valueOop]!
Item was changed: ----- Method: ThreadedFFIPlugin>>setInterpreter: (in category 'simulation') ----- setInterpreter: anInterpreter "Initialization of the plugin in the simulator. The real routine is in the superclass." + self cCode: nil - self cCode: [] inSmalltalk: [self class == thisContext method methodClass ifTrue: [self morphIntoConcreteSubclass: anInterpreter]]. ^super setInterpreter: anInterpreter!
Item was changed: ----- Method: ThreadedRiscV64FFIPlugin>>ffiCalloutTo:SpecOnStack:in: (in category 'callout support') ----- ffiCalloutTo: procAddr SpecOnStack: specOnStack in: calloutState <var: #procAddr type: #'void *'> <var: #calloutState type: #'CalloutState *'> <var: #loadFloatRegs declareC: 'extern void loadFloatRegs(double, double, double, double, double, double, double, double)'> "Go out, call this guy and create the return value. This *must* be inlined because of the alloca of the outgoing stack frame in ffiCall:WithFlags:NumArgs:Args:AndTypes:" | vmHandle atomicType floatRet intRet structRet specSize | <var: 'vmHandle' type: #'void *'> <var: 'doubleRet' type: #double> <var: 'floatRet' type: #ThirtyTwoByteReturnDF> <var: 'structRet' type: #SixteenByteReturnII> <var: 'intRet' type: #usqLong> <inline: #always> + self cCode: nil inSmalltalk: [floatRet := ByteArray new: 32]. "Just a hack to placate the Smalltalk compiler; these should be proper struct types..." - self cCode: [] inSmalltalk: [floatRet := ByteArray new: 32]. "Just a hack to placate the Smalltalk compiler; these should be proper struct types..." vmHandle := interpreterProxy disownVM: (self disownFlagsFor: calloutState). calloutState floatRegisterIndex > 0 ifTrue: [self loadFloatRegs: (calloutState floatRegisters at: 0) _: (calloutState floatRegisters at: 1) _: (calloutState floatRegisters at: 2) _: (calloutState floatRegisters at: 3) _: (calloutState floatRegisters at: 4) _: (calloutState floatRegisters at: 5) _: (calloutState floatRegisters at: 6) _: (calloutState floatRegisters at: 7)].
(self allocaLiesSoSetSpBeforeCall or: [self mustAlignStack]) ifTrue: [self setsp: calloutState argVector].
"float or double returns" atomicType := self atomicTypeOf: calloutState ffiRetHeader. (atomicType >> 1) = (FFITypeSingleFloat >> 1) ifTrue: [| doubleRet | atomicType = FFITypeDoubleFloat ifTrue: [doubleRet := (self dispatchFunctionPointer: (self cCoerceSimple: procAddr to: 'double (*)(sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t)') with: (calloutState integerRegisters at: 0) with: (calloutState integerRegisters at: 1) with: (calloutState integerRegisters at: 2) with: (calloutState integerRegisters at: 3) with: (calloutState integerRegisters at: 4) with: (calloutState integerRegisters at: 5) with: (calloutState integerRegisters at: 6) with: (calloutState integerRegisters at: 7))] ifFalse: [doubleRet := (self dispatchFunctionPointer: (self cCoerceSimple: procAddr to: 'float (*)(sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t)') with: (calloutState integerRegisters at: 0) with: (calloutState integerRegisters at: 1) with: (calloutState integerRegisters at: 2) with: (calloutState integerRegisters at: 3) with: (calloutState integerRegisters at: 4) with: (calloutState integerRegisters at: 5) with: (calloutState integerRegisters at: 6) with: (calloutState integerRegisters at: 7))]. "undo any callee argument pops because it may confuse stack management with the alloca." (self isCalleePopsConvention: calloutState callFlags) ifTrue: [self setsp: calloutState argVector]. interpreterProxy ownVM: vmHandle. ^self floatObjectOf: doubleRet].
"homogenous array of float/double returns" ((calloutState ffiRetHeader bitAnd: FFIFlagPointer+FFIFlagStructure) = FFIFlagStructure and: [self structIsHomogenousFloatArrayOfSize: (calloutState ffiRetHeader bitAnd: FFIStructSizeMask) typeSpec: (self cCoerce: (interpreterProxy firstIndexableField: calloutState ffiRetSpec) to: #'unsigned int *') ofLength: (specSize := interpreterProxy byteSizeOf: calloutState ffiRetSpec) / (self sizeof: #'unsigned int')]) ifTrue: [floatRet d: (self dispatchFunctionPointer: (self cCoerceSimple: procAddr to: 'struct dprr (*)(sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t)') with: (calloutState integerRegisters at: 0) with: (calloutState integerRegisters at: 1) with: (calloutState integerRegisters at: 2) with: (calloutState integerRegisters at: 3) with: (calloutState integerRegisters at: 4) with: (calloutState integerRegisters at: 5) with: (calloutState integerRegisters at: 6) with: (calloutState integerRegisters at: 7)).
"undo any callee argument pops because it may confuse stack management with the alloca." (self isCalleePopsConvention: calloutState callFlags) ifTrue: [self setsp: calloutState argVector]. interpreterProxy ownVM: vmHandle.
"If the struct is a vector of floats then move float[2] to float[1], float[4] to float[2] and float[6] to float[3], to pack the float data in the double fields. We can tell if the struct is composed of floats if its size is less than the spec size, since the spec size is (1 + n fields) * 4 bytes, and the struct size is n fields * 4 bytes for floats and n fields * 8 bytes for doubles. We can't access the spec post call because it may have moved." specSize > calloutState structReturnSize ifTrue: [floatRet f floats at: 1 put: (floatRet f floats at: 2). floatRet f floats at: 2 put: (floatRet f floats at: 4). floatRet f floats at: 3 put: (floatRet f floats at: 6)]. ^self ffiReturnStruct: (self addressOf: floatRet) ofType: (self ffiReturnType: specOnStack) in: calloutState].
"Integer and Structure returns..." (self mustReturnStructOnStack: calloutState structReturnSize) ifTrue: [intRet := 0. self setReturnRegister: (self cCoerceSimple: calloutState limit to: #sqLong) "stack alloca'd struct" andCall: (self cCoerceSimple: procAddr to: #sqLong) withArgsArray: (self cCoerceSimple: (self addressOf: calloutState integerRegisters) to: #sqLong)] ifFalse: [structRet := self dispatchFunctionPointer: (self cCoerceSimple: procAddr to: 'SixteenByteReturnII (*)(sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t, sqIntptr_t)') with: (calloutState integerRegisters at: 0) with: (calloutState integerRegisters at: 1) with: (calloutState integerRegisters at: 2) with: (calloutState integerRegisters at: 3) with: (calloutState integerRegisters at: 4) with: (calloutState integerRegisters at: 5) with: (calloutState integerRegisters at: 6) with: (calloutState integerRegisters at: 7). intRet := structRet a]. "X1"
"undo any callee argument pops because it may confuse stack management with the alloca." (self isCalleePopsConvention: calloutState callFlags) ifTrue: [self setsp: calloutState argVector]. interpreterProxy ownVM: vmHandle.
(calloutState ffiRetHeader anyMask: FFIFlagPointer+FFIFlagStructure) ifTrue: [| returnType | "Note: Order is important here since FFIFlagPointer + FFIFlagStructure is used to represent 'typedef void* VoidPointer' and VoidPointer must be returned as pointer *not* as struct." returnType := self ffiReturnType: specOnStack. (calloutState ffiRetHeader anyMask: FFIFlagPointer) ifTrue: [^self ffiReturnPointer: intRet ofType: returnType in: calloutState]. ^self ffiReturnStruct: (((self returnStructInRegisters: calloutState) ifTrue: [self cCoerceSimple: (self addressOf: structRet) to: #'char *'] ifFalse: [calloutState limit])) ofType: returnType in: calloutState]. ^self ffiCreateIntegralResultOop: intRet ofAtomicType: atomicType in: calloutState!
vm-dev@lists.squeakfoundation.org