Source file src/runtime/mbarrier.go
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// Garbage collector: write barriers.
//
// For the concurrent garbage collector, the Go compiler implements
// updates to pointer-valued fields that may be in heap objects by
// emitting calls to write barriers. The main write barrier for
// individual pointer writes is gcWriteBarrier and is implemented in
// assembly. This file contains write barrier entry points for bulk
// operations. See also mwbbuf.go.

package runtime

import (
	"internal/abi"
	"internal/goarch"
	"internal/goexperiment"
	"unsafe"
)

// Go uses a hybrid barrier that combines a Yuasa-style deletion
// barrier—which shades the object whose reference is being
// overwritten—with a Dijkstra-style insertion barrier—which shades
// the object whose reference is being written. The insertion part of
// the barrier is necessary while the calling goroutine's stack is
// grey. In pseudocode, the barrier is:
//
//	writePointer(slot, ptr):
//		shade(*slot)
//		if current stack is grey:
//			shade(ptr)
//		*slot = ptr
//
// slot is the destination in Go code.
// ptr is the value that goes into the slot in Go code.
//
// shade indicates that it has seen a white pointer by adding the referent
// to wbuf as well as marking it.
//
// The two shades and the condition work together to prevent a mutator
// from hiding an object from the garbage collector:
//
// 1. shade(*slot) prevents a mutator from hiding an object by moving
// the sole pointer to it from the heap to its stack. If it attempts
// to unlink an object from the heap, this will shade it.
//
// 2. shade(ptr) prevents a mutator from hiding an object by moving
// the sole pointer to it from its stack into a black object in the
// heap. If it attempts to install the pointer into a black object,
// this will shade it.
//
// 3. Once a goroutine's stack is black, the shade(ptr) becomes
// unnecessary. shade(ptr) prevents hiding an object by moving it from
// the stack to the heap, but this requires first having a pointer
// hidden on the stack. Immediately after a stack is scanned, it only
// points to shaded objects, so it's not hiding anything, and the
// shade(*slot) prevents it from hiding any other pointers on its
// stack.
//
// For a detailed description of this barrier and proof of
// correctness, see https://github.com/golang/proposal/blob/master/design/17503-eliminate-rescan.md
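//
// As an illustration only (not runtime code), the pseudocode above
// corresponds to ordinary Go along these lines, where object, shade,
// and stackIsGrey are hypothetical stand-ins for runtime internals;
// the real barrier is emitted by the compiler and implemented in
// assembly (gcWriteBarrier) and in mwbbuf.go:
//
//	type object struct {
//		marked bool // shaded: grey or black
//	}
//
//	// shade greys a white referent: mark it and (in the real
//	// runtime) queue it for scanning.
//	func shade(p *object) {
//		if p != nil && !p.marked {
//			p.marked = true
//		}
//	}
//
//	// writePointer performs *slot = ptr with the hybrid barrier.
//	func writePointer(slot **object, ptr *object, stackIsGrey bool) {
//		shade(*slot) // deletion barrier: grey the old referent
//		if stackIsGrey {
//			shade(ptr) // insertion barrier: grey the new referent
//		}
//		*slot = ptr
//	}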
//
//
// Dealing with memory ordering:
//
// Both the Yuasa and Dijkstra barriers can be made conditional on the
// color of the object containing the slot. We chose not to make these
// conditional because the cost of ensuring that the object holding
// the slot doesn't concurrently change color without the mutator
// noticing seems prohibitive.
//
// Consider the following example where the mutator writes into
// a slot and then loads the slot's mark bit while the GC thread
// writes to the slot's mark bit and then as part of scanning reads
// the slot.
//
// Initially both [slot] and [slotmark] are 0 (nil).
//
//	Mutator thread       GC thread
//	st [slot], ptr       st [slotmark], 1
//
//	ld r1, [slotmark]    ld r2, [slot]
//
// Without an expensive memory barrier between the st and the ld, the final
// result on most HW (including 386/amd64) can be r1==r2==0. This is a classic
// example of what can happen when loads are allowed to be reordered with older
// stores (avoiding such reorderings lies at the heart of the classic
// Peterson/Dekker algorithms for mutual exclusion). Rather than require memory
// barriers, which would slow down both the mutator and the GC, we always grey
// the ptr object regardless of the slot's color.
//
// Another place where we intentionally omit memory barriers is when
// accessing mheap_.arena_used to check if a pointer points into the
// heap. On relaxed memory machines, it's possible for a mutator to
// extend the size of the heap by updating arena_used, allocate an
// object from this new region, and publish a pointer to that object,
// but for tracing running on another processor to observe the pointer
// but use the old value of arena_used. In this case, tracing will not
// mark the object, even though it's reachable. However, the mutator
// is guaranteed to execute a write barrier when it publishes the
// pointer, so it will take care of marking the object. A general
// consequence of this is that the garbage collector may cache the
// value of mheap_.arena_used. (See issue #9984.)
//
//
// Stack writes:
//
// The compiler omits write barriers for writes to the current frame,
// but if a stack pointer has been passed down the call stack, the
// compiler will generate a write barrier for writes through that
// pointer (because it doesn't know it's not a heap pointer); a sketch
// of this case follows this comment block.
//
//
// Global writes:
//
// The Go garbage collector requires write barriers when heap pointers
// are stored in globals. Many garbage collectors ignore writes to
// globals and instead pick up global -> heap pointers during
// termination. This increases pause time, so we instead rely on write
// barriers for writes to globals so that we don't have to rescan
// globals during mark termination.
//
//
// Publication ordering:
//
// The write barrier is *pre-publication*, meaning that the write
// barrier happens prior to the *slot = ptr write that may make ptr
// reachable by some goroutine that currently cannot reach it.
//
//
// Signal handler pointer writes:
//
// In general, the signal handler cannot safely invoke the write
// barrier because it may run without a P or even during the write
// barrier.
//
// There is exactly one exception: profbuf.go omits a barrier during
// signal handler profile logging. That's safe only because of the
// deletion barrier. See profbuf.go for a detailed argument. If we
// remove the deletion barrier, we'll have to work out a new way to
// handle the profile logging.
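
// A minimal sketch of the stack-writes rule above. f and g are
// hypothetical, and we assume escape analysis keeps x in f's frame:
//
//	func g(p **int) {
//		// Write through a pointer passed down the call stack: the
//		// compiler emits a write barrier, since p may point into
//		// the heap.
//		*p = new(int)
//	}
//
//	func f() {
//		var x *int // pointer slot in f's own frame
//		x = nil    // write to the current frame: no write barrier
//		g(&x)      // writes through &x in g are barriered
//	}
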
// typedmemmove copies a value of type typ to dst from src.
// Must be nosplit, see #16026.
//
// TODO: Perfect for go:nosplitrec since we can't have a safe point
// anywhere in the bulk barrier or memmove.
//
//go:nosplit
func typedmemmove(typ *abi.Type, dst, src unsafe.Pointer) {
	if dst == src {
		return
	}
	if writeBarrier.needed && typ.PtrBytes != 0 {
		bulkBarrierPreWrite(uintptr(dst), uintptr(src), typ.PtrBytes)
	}
	// There's a race here: if some other goroutine can write to
	// src, it may change some pointer in src after we've
	// performed the write barrier but before we perform the
	// memory copy. This is safe because the write performed by
	// that other goroutine must also be accompanied by a write
	// barrier, so at worst we've unnecessarily greyed the old
	// pointer that was in src.
	memmove(dst, src, typ.Size_)
	if goexperiment.CgoCheck2 {
		cgoCheckMemmove2(typ, dst, src, 0, typ.Size_)
	}
}

// wbZero performs the write barrier operations necessary before
// zeroing a region of memory at address dst of type typ.
// Does not actually do the zeroing.
//
//go:nowritebarrierrec
//go:nosplit
func wbZero(typ *_type, dst unsafe.Pointer) {
	bulkBarrierPreWrite(uintptr(dst), 0, typ.PtrBytes)
}

// wbMove performs the write barrier operations necessary before
// copying a region of memory from src to dst of type typ.
// Does not actually do the copying.
//
//go:nowritebarrierrec
//go:nosplit
func wbMove(typ *_type, dst, src unsafe.Pointer) {
	bulkBarrierPreWrite(uintptr(dst), uintptr(src), typ.PtrBytes)
}

//go:linkname reflect_typedmemmove reflect.typedmemmove
func reflect_typedmemmove(typ *_type, dst, src unsafe.Pointer) {
	if raceenabled {
		raceWriteObjectPC(typ, dst, getcallerpc(), abi.FuncPCABIInternal(reflect_typedmemmove))
		raceReadObjectPC(typ, src, getcallerpc(), abi.FuncPCABIInternal(reflect_typedmemmove))
	}
	if msanenabled {
		msanwrite(dst, typ.Size_)
		msanread(src, typ.Size_)
	}
	if asanenabled {
		asanwrite(dst, typ.Size_)
		asanread(src, typ.Size_)
	}
	typedmemmove(typ, dst, src)
}

//go:linkname reflectlite_typedmemmove internal/reflectlite.typedmemmove
func reflectlite_typedmemmove(typ *_type, dst, src unsafe.Pointer) {
	reflect_typedmemmove(typ, dst, src)
}

// reflectcallmove is invoked by reflectcall to copy the return values
// out of the stack and into the heap, invoking the necessary write
// barriers. dst, src, and size describe the return value area to
// copy. typ describes the entire frame (not just the return values).
// typ may be nil, which indicates write barriers are not needed.
//
// It must be nosplit and must only call nosplit functions because the
// stack map of reflectcall is wrong.
//
//go:nosplit
func reflectcallmove(typ *_type, dst, src unsafe.Pointer, size uintptr, regs *abi.RegArgs) {
	if writeBarrier.needed && typ != nil && typ.PtrBytes != 0 && size >= goarch.PtrSize {
		bulkBarrierPreWrite(uintptr(dst), uintptr(src), size)
	}
	memmove(dst, src, size)

	// Move pointers returned in registers to a place where the GC
	// can see them.
	for i := range regs.Ints {
		if regs.ReturnIsPtr.Get(i) {
			regs.Ptrs[i] = unsafe.Pointer(regs.Ints[i])
		}
	}
}
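
// A hypothetical illustration of when the compiler reaches for
// typedmemmove: assigning a pointer-bearing value through pointers
// can be lowered to a bulk barrier plus memmove rather than
// per-field barriered stores. The exact lowering is a compiler
// decision and varies by type and version:
//
//	type T struct {
//		p *int
//		x [16]byte
//	}
//
//	func assign(dst, src *T) {
//		// May lower to a typedmemmove call with T's type descriptor:
//		// barrier over T's pointer bytes, then a memmove of the value.
//		*dst = *src
//	}
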
//go:nosplit
func typedslicecopy(typ *_type, dstPtr unsafe.Pointer, dstLen int, srcPtr unsafe.Pointer, srcLen int) int {
	n := dstLen
	if n > srcLen {
		n = srcLen
	}
	if n == 0 {
		return 0
	}

	// The compiler emits calls to typedslicecopy before
	// instrumentation runs, so unlike the other copying and
	// assignment operations, it's not instrumented in the calling
	// code and needs its own instrumentation.
	if raceenabled {
		callerpc := getcallerpc()
		pc := abi.FuncPCABIInternal(slicecopy)
		racewriterangepc(dstPtr, uintptr(n)*typ.Size_, callerpc, pc)
		racereadrangepc(srcPtr, uintptr(n)*typ.Size_, callerpc, pc)
	}
	if msanenabled {
		msanwrite(dstPtr, uintptr(n)*typ.Size_)
		msanread(srcPtr, uintptr(n)*typ.Size_)
	}
	if asanenabled {
		asanwrite(dstPtr, uintptr(n)*typ.Size_)
		asanread(srcPtr, uintptr(n)*typ.Size_)
	}

	if goexperiment.CgoCheck2 {
		cgoCheckSliceCopy(typ, dstPtr, srcPtr, n)
	}

	if dstPtr == srcPtr {
		return n
	}

	// Note: No point in checking typ.PtrBytes here:
	// the compiler only emits calls to typedslicecopy for types with pointers,
	// and growslice and reflect_typedslicecopy check for pointers
	// before calling typedslicecopy.
	size := uintptr(n) * typ.Size_
	if writeBarrier.needed {
		// The barrier can stop at the last element's PtrBytes,
		// skipping the pointer-free tail of the final element.
		pwsize := size - typ.Size_ + typ.PtrBytes
		bulkBarrierPreWrite(uintptr(dstPtr), uintptr(srcPtr), pwsize)
	}
	// See typedmemmove for a discussion of the race between the
	// barrier and memmove.
	memmove(dstPtr, srcPtr, size)
	return n
}

//go:linkname reflect_typedslicecopy reflect.typedslicecopy
func reflect_typedslicecopy(elemType *_type, dst, src slice) int {
	if elemType.PtrBytes == 0 {
		return slicecopy(dst.array, dst.len, src.array, src.len, elemType.Size_)
	}
	return typedslicecopy(elemType, dst.array, dst.len, src.array, src.len)
}
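
// For illustration (hypothetical callers, not runtime code): copying
// between slices whose element type contains pointers goes through
// typedslicecopy, while pointer-free elements can take an unbarriered
// path:
//
//	func copyPtrs(dst, src []*int) int {
//		// Element type has pointers: may lower to typedslicecopy.
//		return copy(dst, src)
//	}
//
//	func copyInts(dst, src []int) int {
//		// No pointers in the element type: no write barrier needed.
//		return copy(dst, src)
//	}
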
// typedmemclr clears the typed memory at ptr with type typ. The
// memory at ptr must already be initialized (and hence in type-safe
// state). If the memory is being initialized for the first time, see
// memclrNoHeapPointers.
//
// If the caller knows that typ has pointers, it can alternatively
// call memclrHasPointers.
//
// TODO: A "go:nosplitrec" annotation would be perfect for this.
//
//go:nosplit
func typedmemclr(typ *_type, ptr unsafe.Pointer) {
	if writeBarrier.needed && typ.PtrBytes != 0 {
		bulkBarrierPreWrite(uintptr(ptr), 0, typ.PtrBytes)
	}
	memclrNoHeapPointers(ptr, typ.Size_)
}

//go:linkname reflect_typedmemclr reflect.typedmemclr
func reflect_typedmemclr(typ *_type, ptr unsafe.Pointer) {
	typedmemclr(typ, ptr)
}

//go:linkname reflect_typedmemclrpartial reflect.typedmemclrpartial
func reflect_typedmemclrpartial(typ *_type, ptr unsafe.Pointer, off, size uintptr) {
	if writeBarrier.needed && typ.PtrBytes != 0 {
		bulkBarrierPreWrite(uintptr(ptr), 0, size)
	}
	memclrNoHeapPointers(ptr, size)
}

//go:linkname reflect_typedarrayclear reflect.typedarrayclear
func reflect_typedarrayclear(typ *_type, ptr unsafe.Pointer, len int) {
	size := typ.Size_ * uintptr(len)
	if writeBarrier.needed && typ.PtrBytes != 0 {
		bulkBarrierPreWrite(uintptr(ptr), 0, size)
	}
	memclrNoHeapPointers(ptr, size)
}

// memclrHasPointers clears n bytes of typed memory starting at ptr.
// The caller must ensure that the type of the object at ptr has
// pointers, usually by checking typ.PtrBytes. However, ptr
// does not have to point to the start of the allocation.
//
//go:nosplit
func memclrHasPointers(ptr unsafe.Pointer, n uintptr) {
	bulkBarrierPreWrite(uintptr(ptr), 0, n)
	memclrNoHeapPointers(ptr, n)
}
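
// A hypothetical illustration of when clearing takes the barriered
// path: zeroing a value whose type contains pointers must shade the
// old referents first so the GC cannot lose them mid-mark:
//
//	type node struct {
//		next *node
//		val  int
//	}
//
//	func reset(n *node) {
//		// May lower to typedmemclr with node's type descriptor:
//		// barrier over n.next, then memclrNoHeapPointers.
//		*n = node{}
//	}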