/*-------------------------------------------------------------------------
 *
 * inval.c
 *    POSTGRES cache invalidation dispatcher code.
 *
 *  This is subtle stuff, so pay attention:
 *
 *  When a tuple is updated or deleted, our standard visibility rules
 *  consider that it is *still valid* so long as we are in the same command,
 *  ie, until the next CommandCounterIncrement() or transaction commit.
 *  (See access/heap/heapam_visibility.c, and note that system catalogs are
 *  generally scanned under the most current snapshot available, rather than
 *  the transaction snapshot.)  At the command boundary, the old tuple stops
 *  being valid and the new version, if any, becomes valid.  Therefore,
 *  we cannot simply flush a tuple from the system caches during heap_update()
 *  or heap_delete().  The tuple is still good at that point; what's more,
 *  even if we did flush it, it might be reloaded into the caches by a later
 *  request in the same command.  So the correct behavior is to keep a list
 *  of outdated (updated/deleted) tuples and then do the required cache
 *  flushes at the next command boundary.  We must also keep track of
 *  inserted tuples so that we can flush "negative" cache entries that match
 *  the new tuples; again, that mustn't happen until end of command.
 *
 *  Once we have finished the command, we still need to remember inserted
 *  tuples (including new versions of updated tuples), so that we can flush
 *  them from the caches if we abort the transaction.  Similarly, we'd better
 *  be able to flush "negative" cache entries that may have been loaded in
 *  place of deleted tuples, so we still need the deleted ones too.
 *
 *  If we successfully complete the transaction, we have to broadcast all
 *  these invalidation events to other backends (via the SI message queue)
 *  so that they can flush obsolete entries from their caches.  Note we have
 *  to record the transaction commit before sending SI messages, otherwise
 *  the other backends won't see our updated tuples as good.
 *
 *  When a subtransaction aborts, we can process and discard any events
 *  it has queued.  When a subtransaction commits, we just add its events
 *  to the pending lists of the parent transaction.
 *
 *  In short, we need to remember until xact end every insert or delete
 *  of a tuple that might be in the system caches.  Updates are treated as
 *  two events, delete + insert, for simplicity.  (If the update doesn't
 *  change the tuple hash value, catcache.c optimizes this into one event.)
 *
 *  We do not need to register EVERY tuple operation in this way, just those
 *  on tuples in relations that have associated catcaches.  We do, however,
 *  have to register every operation on every tuple that *could* be in a
 *  catcache, whether or not it currently is in our cache.  Also, if the
 *  tuple is in a relation that has multiple catcaches, we need to register
 *  an invalidation message for each such catcache.  catcache.c's
 *  PrepareToInvalidateCacheTuple() routine provides the knowledge of which
 *  catcaches may need invalidation for a given tuple.
 *
 *  Also, whenever we see an operation on a pg_class, pg_attribute, or
 *  pg_index tuple, we register a relcache flush operation for the relation
 *  described by that tuple (as specified in CacheInvalidateHeapTuple()).
 *  Likewise for pg_constraint tuples for foreign keys on relations.
 *
 *  We keep the relcache flush requests in lists separate from the catcache
 *  tuple flush requests.  This allows us to issue all the pending catcache
 *  flushes before we issue relcache flushes, which saves us from loading
 *  a catcache tuple during relcache load only to flush it again right away.
 *  Also, we avoid queuing multiple relcache flush requests for the same
 *  relation, since a relcache flush is relatively expensive to do.
 *  (XXX is it worth testing likewise for duplicate catcache flush entries?
 *  Probably not.)
 *
 *  Many subsystems own higher-level caches that depend on relcache and/or
 *  catcache, and they register callbacks here to invalidate their caches.
 *  While building a higher-level cache entry, a backend may receive a
 *  callback for the being-built entry or one of its dependencies.  This
 *  implies the new higher-level entry would be born stale, and it might
 *  remain stale for the life of the backend.  Many caches do not prevent
 *  that.  They rely on DDL for can't-miss catalog changes taking
 *  AccessExclusiveLock on suitable objects.  (For a change made with less
 *  locking, backends might never read the change.)  The relation cache,
 *  however, needs to reflect changes from CREATE INDEX CONCURRENTLY no later
 *  than the beginning of the next transaction.  Hence, when a relevant
 *  invalidation callback arrives during a build, relcache.c reattempts that
 *  build.  Caches with similar needs could do likewise.
 *
 *  If a relcache flush is issued for a system relation that we preload
 *  from the relcache init file, we must also delete the init file so that
 *  it will be rebuilt during the next backend restart.  The actual work of
 *  manipulating the init file is in relcache.c, but we keep track of the
 *  need for it here.
 *
 *  Currently, inval messages are sent without regard for the possibility
 *  that the object described by the catalog tuple might be a session-local
 *  object such as a temporary table.  This is because (1) this code has
 *  no practical way to tell the difference, and (2) it is not certain that
 *  other backends don't have catalog cache or even relcache entries for
 *  such tables, anyway; there is nothing that prevents that.  It might be
 *  worth trying to avoid sending such inval traffic in the future, if those
 *  problems can be overcome cheaply.
 *
 *  When wal_level=logical, we write invalidations into WAL at each command
 *  end to support decoding of in-progress transactions.  See
 *  CommandEndInvalidationMessages.
 *
 * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
 * Portions Copyright (c) 1994, Regents of the University of California
 *
 * IDENTIFICATION
 *    src/backend/utils/cache/inval.c
 *
 *-------------------------------------------------------------------------
 */
#include "postgres.h"

#include <limits.h>

#include "access/htup_details.h"
#include "access/xact.h"
#include "access/xloginsert.h"
#include "catalog/catalog.h"
#include "catalog/pg_constraint.h"
#include "miscadmin.h"
#include "storage/sinval.h"
#include "storage/smgr.h"
#include "utils/catcache.h"
#include "utils/inval.h"
#include "utils/memdebug.h"
#include "utils/memutils.h"
#include "utils/rel.h"
#include "utils/relmapper.h"
#include "utils/snapmgr.h"
#include "utils/syscache.h"

/*
 * Pending requests are stored as ready-to-send SharedInvalidationMessages.
 * We keep the messages themselves in arrays in TopTransactionContext
 * (there are separate arrays for catcache and relcache messages).  Control
 * information is kept in a chain of TransInvalidationInfo structs, also
 * allocated in TopTransactionContext.  (We could keep a subtransaction's
 * TransInvalidationInfo in its CurTransactionContext; but that's more
 * wasteful, not less so, since in very many scenarios it'd be the only
 * allocation in the subtransaction's CurTransactionContext.)
 *
 * We can store the message arrays densely, and yet avoid moving data around
 * within an array, because within any one subtransaction we need only
 * distinguish between messages emitted by prior commands and those emitted
 * by the current command.  Once a command completes and we've done local
 * processing on its messages, we can fold those into the prior-commands
 * messages just by changing array indexes in the TransInvalidationInfo
 * struct.  Similarly, we need to distinguish messages of prior
 * subtransactions from those of the current subtransaction only until the
 * subtransaction completes, after which we adjust the array indexes in the
 * parent's TransInvalidationInfo to include the subtransaction's messages.
 *
 * The ordering of the individual messages within a command's or
 * subtransaction's output is not considered significant, although this
 * implementation happens to preserve the order in which they were queued.
 * (Previous versions of this code did not preserve it.)
 *
 * For notational convenience, control information is kept in two-element
 * arrays, the first for catcache messages and the second for relcache
 * messages.
 */
#define CatCacheMsgs 0
#define RelCacheMsgs 1

/* Pointers to main arrays in TopTransactionContext */
typedef struct InvalMessageArray
{
    SharedInvalidationMessage *msgs;    /* palloc'd array (can be expanded) */
    int         maxmsgs;        /* current allocated size of array */
} InvalMessageArray;

static InvalMessageArray InvalMessageArrays[2];

/* Control information for one logical group of messages */
typedef struct InvalidationMsgsGroup
{
    int         firstmsg[2];    /* first index in relevant array */
    int         nextmsg[2];     /* last+1 index */
} InvalidationMsgsGroup;

/* Macros to help preserve InvalidationMsgsGroup abstraction */
#define SetSubGroupToFollow(targetgroup, priorgroup, subgroup) \
    do { \
        (targetgroup)->firstmsg[subgroup] = \
            (targetgroup)->nextmsg[subgroup] = \
                (priorgroup)->nextmsg[subgroup]; \
    } while (0)

#define SetGroupToFollow(targetgroup, priorgroup) \
    do { \
        SetSubGroupToFollow(targetgroup, priorgroup, CatCacheMsgs); \
        SetSubGroupToFollow(targetgroup, priorgroup, RelCacheMsgs); \
    } while (0)

#define NumMessagesInSubGroup(group, subgroup) \
    ((group)->nextmsg[subgroup] - (group)->firstmsg[subgroup])

#define NumMessagesInGroup(group) \
    (NumMessagesInSubGroup(group, CatCacheMsgs) + \
     NumMessagesInSubGroup(group, RelCacheMsgs))

/*----------------
 * Invalidation messages are divided into two groups:
 *  1) events so far in current command, not yet reflected to caches.
 *  2) events in previous commands of current transaction; these have
 *     been reflected to local caches, and must be either broadcast to
 *     other backends or rolled back from local cache when we commit
 *     or abort the transaction.
 * Actually, we need such groups for each level of nested transaction,
 * so that we can discard events from an aborted subtransaction.  When
 * a subtransaction commits, we append its events to the parent's groups.
 *
 * The relcache-file-invalidated flag can just be a simple boolean,
 * since we only act on it at transaction commit; we don't care which
 * command of the transaction set it.
 *----------------
 */

typedef struct TransInvalidationInfo
{
    /* Back link to parent transaction's info */
    struct TransInvalidationInfo *parent;

    /* Subtransaction nesting depth */
    int         my_level;

    /* Events emitted by current command */
    InvalidationMsgsGroup CurrentCmdInvalidMsgs;

    /* Events emitted by previous commands of this (sub)transaction */
    InvalidationMsgsGroup PriorCmdInvalidMsgs;

    /* init file must be invalidated? */
    bool        RelcacheInitFileInval;
} TransInvalidationInfo;

static TransInvalidationInfo *transInvalInfo = NULL;

/* GUC storage */
int         debug_discard_caches = 0;

/*
 * Dynamically-registered callback functions.  Current implementation
 * assumes there won't be enough of these to justify a dynamically resizable
 * array; it'd be easy to improve that if needed.
 *
 * To avoid searching in CallSyscacheCallbacks, all callbacks for a given
 * syscache are linked into a list pointed to by syscache_callback_links[id].
 * The link values are syscache_callback_list[] index plus 1, or 0 for none.
 */

#define MAX_SYSCACHE_CALLBACKS 64
#define MAX_RELCACHE_CALLBACKS 10

static struct SYSCACHECALLBACK
{
    int16       id;             /* cache number */
    int16       link;           /* next callback index+1 for same cache */
    SyscacheCallbackFunction function;
    Datum       arg;
} syscache_callback_list[MAX_SYSCACHE_CALLBACKS];

static int16 syscache_callback_links[SysCacheSize];

static int  syscache_callback_count = 0;

static struct RELCACHECALLBACK
{
    RelcacheCallbackFunction function;
    Datum       arg;
} relcache_callback_list[MAX_RELCACHE_CALLBACKS];

static int  relcache_callback_count = 0;

/* ----------------------------------------------------------------
 *              Invalidation subgroup support functions
 * ----------------------------------------------------------------
 */

/*
 * AddInvalidationMessage
 *      Add an invalidation message to a (sub)group.
 *
 * The group must be the last active one, since we assume we can add to the
 * end of the relevant InvalMessageArray.
 *
 * subgroup must be CatCacheMsgs or RelCacheMsgs.
 */
static void
AddInvalidationMessage(InvalidationMsgsGroup *group, int subgroup,
                       const SharedInvalidationMessage *msg)
{
    InvalMessageArray *ima = &InvalMessageArrays[subgroup];
    int         nextindex = group->nextmsg[subgroup];

    if (nextindex >= ima->maxmsgs)
    {
        if (ima->msgs == NULL)
        {
            /* Create new storage array in TopTransactionContext */
            int         reqsize = 32;   /* arbitrary */

            ima->msgs = (SharedInvalidationMessage *)
                MemoryContextAlloc(TopTransactionContext,
                                   reqsize * sizeof(SharedInvalidationMessage));
            ima->maxmsgs = reqsize;
            Assert(nextindex == 0);
        }
        else
        {
            /* Enlarge storage array */
            int         reqsize = 2 * ima->maxmsgs;

            ima->msgs = (SharedInvalidationMessage *)
                repalloc(ima->msgs,
                         reqsize * sizeof(SharedInvalidationMessage));
            ima->maxmsgs = reqsize;
        }
    }
    /* Okay, add message to current group */
    ima->msgs[nextindex] = *msg;
    group->nextmsg[subgroup]++;
}

/*
 * Append one subgroup of invalidation messages to another, resetting
 * the source subgroup to empty.
 */
static void
AppendInvalidationMessageSubGroup(InvalidationMsgsGroup *dest,
                                  InvalidationMsgsGroup *src,
                                  int subgroup)
{
    /* Messages must be adjacent in main array */
    Assert(dest->nextmsg[subgroup] == src->firstmsg[subgroup]);

    /* ... which makes this easy: */
    dest->nextmsg[subgroup] = src->nextmsg[subgroup];

    /*
     * This is handy for some callers and irrelevant for others.  But we do it
     * always, reasoning that it's bad to leave different groups pointing at
     * the same fragment of the message array.
     */
    SetSubGroupToFollow(src, dest, subgroup);
}

/*
 * Process a subgroup of invalidation messages.
 *
 * This is a macro that executes the given code fragment for each message in
 * a message subgroup.  The fragment should refer to the message as *msg.
 */
#define ProcessMessageSubGroup(group, subgroup, codeFragment) \
    do { \
        int         _msgindex = (group)->firstmsg[subgroup]; \
        int         _endmsg = (group)->nextmsg[subgroup]; \
        for (; _msgindex < _endmsg; _msgindex++) \
        { \
            SharedInvalidationMessage *msg = \
                &InvalMessageArrays[subgroup].msgs[_msgindex]; \
            codeFragment; \
        } \
    } while (0)

/*
 * Process a subgroup of invalidation messages as an array.
 *
 * As above, but the code fragment can handle an array of messages.
 * The fragment should refer to the messages as msgs[], with n entries.
 */
#define ProcessMessageSubGroupMulti(group, subgroup, codeFragment) \
    do { \
        int         n = NumMessagesInSubGroup(group, subgroup); \
        if (n > 0) { \
            SharedInvalidationMessage *msgs = \
                &InvalMessageArrays[subgroup].msgs[(group)->firstmsg[subgroup]]; \
            codeFragment; \
        } \
    } while (0)


/* ----------------------------------------------------------------
 *              Invalidation group support functions
 *
 * These routines understand about the division of a logical invalidation
 * group into separate physical arrays for catcache and relcache entries.
 * ----------------------------------------------------------------
 */

/*
 * Add a catcache inval entry
 */
static void
AddCatcacheInvalidationMessage(InvalidationMsgsGroup *group,
                               int id, uint32 hashValue, Oid dbId)
{
    SharedInvalidationMessage msg;

    Assert(id < CHAR_MAX);
    msg.cc.id = (int8) id;
    msg.cc.dbId = dbId;
    msg.cc.hashValue = hashValue;

    /*
     * Define padding bytes in SharedInvalidationMessage structs to be
     * defined.  Otherwise the sinvaladt.c ringbuffer, which is accessed by
     * multiple processes, will cause spurious valgrind warnings about
     * undefined memory being used.  That's because valgrind remembers the
     * undefined bytes from the last local process's store, not realizing
     * that another process has written since, filling the previously
     * uninitialized bytes.
     */
    VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));

    AddInvalidationMessage(group, CatCacheMsgs, &msg);
}

/*
 * Add a whole-catalog inval entry
 */
static void
AddCatalogInvalidationMessage(InvalidationMsgsGroup *group,
                              Oid dbId, Oid catId)
{
    SharedInvalidationMessage msg;

    msg.cat.id = SHAREDINVALCATALOG_ID;
    msg.cat.dbId = dbId;
    msg.cat.catId = catId;
    /* check AddCatcacheInvalidationMessage() for an explanation */
    VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));

    AddInvalidationMessage(group, CatCacheMsgs, &msg);
}

/*
 * Add a relcache inval entry
 */
static void
AddRelcacheInvalidationMessage(InvalidationMsgsGroup *group,
                               Oid dbId, Oid relId)
{
    SharedInvalidationMessage msg;

    /*
     * Don't add a duplicate item.  We assume dbId need not be checked
     * because it will never change.  InvalidOid for relId means all
     * relations, so we don't need to add individual ones when it is present.
     */
    ProcessMessageSubGroup(group, RelCacheMsgs,
                           if (msg->rc.id == SHAREDINVALRELCACHE_ID &&
                               (msg->rc.relId == relId ||
                                msg->rc.relId == InvalidOid))
                               return);

    /* OK, add the item */
    msg.rc.id = SHAREDINVALRELCACHE_ID;
    msg.rc.dbId = dbId;
    msg.rc.relId = relId;
    /* check AddCatcacheInvalidationMessage() for an explanation */
    VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));

    AddInvalidationMessage(group, RelCacheMsgs, &msg);
}

/*
 * Add a snapshot inval entry
 *
 * We put these into the relcache subgroup for simplicity.
 */
static void
AddSnapshotInvalidationMessage(InvalidationMsgsGroup *group,
                               Oid dbId, Oid relId)
{
    SharedInvalidationMessage msg;

    /* Don't add a duplicate item */
    /* We assume dbId need not be checked because it will never change */
    ProcessMessageSubGroup(group, RelCacheMsgs,
                           if (msg->sn.id == SHAREDINVALSNAPSHOT_ID &&
                               msg->sn.relId == relId)
                               return);

    /* OK, add the item */
    msg.sn.id = SHAREDINVALSNAPSHOT_ID;
    msg.sn.dbId = dbId;
    msg.sn.relId = relId;
    /* check AddCatcacheInvalidationMessage() for an explanation */
    VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));

    AddInvalidationMessage(group, RelCacheMsgs, &msg);
}

/*
 * Append one group of invalidation messages to another, resetting
 * the source group to empty.
 */
static void
AppendInvalidationMessages(InvalidationMsgsGroup *dest,
                           InvalidationMsgsGroup *src)
{
    AppendInvalidationMessageSubGroup(dest, src, CatCacheMsgs);
    AppendInvalidationMessageSubGroup(dest, src, RelCacheMsgs);
}

/*
 * Execute the given function for all the messages in an invalidation group.
 * The group is not altered.
 *
 * catcache entries are processed first, for reasons mentioned above.
 */
static void
ProcessInvalidationMessages(InvalidationMsgsGroup *group,
                            void (*func) (SharedInvalidationMessage *msg))
{
    ProcessMessageSubGroup(group, CatCacheMsgs, func(msg));
    ProcessMessageSubGroup(group, RelCacheMsgs, func(msg));
}

/*
 * As above, but the function is able to process an array of messages
 * rather than just one at a time.
 */
static void
ProcessInvalidationMessagesMulti(InvalidationMsgsGroup *group,
                                 void (*func) (const SharedInvalidationMessage *msgs, int n))
{
    ProcessMessageSubGroupMulti(group, CatCacheMsgs, func(msgs, n));
    ProcessMessageSubGroupMulti(group, RelCacheMsgs, func(msgs, n));
}

/* ----------------------------------------------------------------
 *              private support functions
 * ----------------------------------------------------------------
 */

/*
 * RegisterCatcacheInvalidation
 *
 * Register an invalidation event for a catcache tuple entry.
 */
static void
RegisterCatcacheInvalidation(int cacheId,
                             uint32 hashValue,
                             Oid dbId)
{
    AddCatcacheInvalidationMessage(&transInvalInfo->CurrentCmdInvalidMsgs,
                                   cacheId, hashValue, dbId);
}

/*
 * RegisterCatalogInvalidation
 *
 * Register an invalidation event for all catcache entries from a catalog.
 */
static void
RegisterCatalogInvalidation(Oid dbId, Oid catId)
{
    AddCatalogInvalidationMessage(&transInvalInfo->CurrentCmdInvalidMsgs,
                                  dbId, catId);
}

/*
 * RegisterRelcacheInvalidation
 *
 * As above, but register a relcache invalidation event.
 */
static void
RegisterRelcacheInvalidation(Oid dbId, Oid relId)
{
    AddRelcacheInvalidationMessage(&transInvalInfo->CurrentCmdInvalidMsgs,
                                   dbId, relId);

    /*
     * Most of the time, relcache invalidation is associated with system
     * catalog updates, but there are a few cases where it isn't.  Quick hack
     * to ensure that the next CommandCounterIncrement() will think that we
     * need to do CommandEndInvalidationMessages().
     */
    (void) GetCurrentCommandId(true);

    /*
     * If the relation being invalidated is one of those cached in a relcache
     * init file, mark that we need to zap that file at commit.  For
     * simplicity, invalidations for a specific database always invalidate
     * the shared file as well.  Also zap when we are invalidating the whole
     * relcache.
     */
    if (relId == InvalidOid || RelationIdIsInInitFile(relId))
        transInvalInfo->RelcacheInitFileInval = true;
}

/*
 * RegisterSnapshotInvalidation
 *
 * Register an invalidation event for MVCC scans against a given catalog.
 * Only needed for catalogs that don't have catcaches.
 */
static void
RegisterSnapshotInvalidation(Oid dbId, Oid relId)
{
    AddSnapshotInvalidationMessage(&transInvalInfo->CurrentCmdInvalidMsgs,
                                   dbId, relId);
}

/*
 * PrepareInvalidationState
 *      Initialize inval data for the current (sub)transaction.
 */
static void
PrepareInvalidationState(void)
{
    TransInvalidationInfo *myInfo;

    if (transInvalInfo != NULL &&
        transInvalInfo->my_level == GetCurrentTransactionNestLevel())
        return;

    myInfo = (TransInvalidationInfo *)
        MemoryContextAllocZero(TopTransactionContext,
                               sizeof(TransInvalidationInfo));
    myInfo->parent = transInvalInfo;
    myInfo->my_level = GetCurrentTransactionNestLevel();

    /* Now, do we have a previous stack entry? */
    if (transInvalInfo != NULL)
    {
        /* Yes; this one should be for a deeper nesting level. */
        Assert(myInfo->my_level > transInvalInfo->my_level);

        /*
         * The parent (sub)transaction must not have any current (i.e.,
         * not-yet-locally-processed) messages.  If it did, we'd have a
         * semantic problem: the new subtransaction presumably ought not be
         * able to see those events yet, but since the CommandCounter is
         * linear, that can't work once the subtransaction advances the
         * counter.  This is a convenient place to check for that, as well as
         * being important to keep management of the message arrays simple.
         */
        if (NumMessagesInGroup(&transInvalInfo->CurrentCmdInvalidMsgs) != 0)
            elog(ERROR, "cannot start a subtransaction when there are unprocessed inval messages");

        /*
         * MemoryContextAllocZero set firstmsg = nextmsg = 0 in each group,
         * which is fine for the first (sub)transaction, but otherwise we need
         * to update them to follow whatever is already in the arrays.
         */
        SetGroupToFollow(&myInfo->PriorCmdInvalidMsgs,
                         &transInvalInfo->CurrentCmdInvalidMsgs);
        SetGroupToFollow(&myInfo->CurrentCmdInvalidMsgs,
                         &myInfo->PriorCmdInvalidMsgs);
    }
    else
    {
        /*
         * Here, we need only clear any array pointers left over from a prior
         * transaction.
         */
        InvalMessageArrays[CatCacheMsgs].msgs = NULL;
        InvalMessageArrays[CatCacheMsgs].maxmsgs = 0;
        InvalMessageArrays[RelCacheMsgs].msgs = NULL;
        InvalMessageArrays[RelCacheMsgs].maxmsgs = 0;
    }

    transInvalInfo = myInfo;
}

/* ----------------------------------------------------------------
 *              public functions
 * ----------------------------------------------------------------
 */

void
InvalidateSystemCachesExtended(bool debug_discard)
{
    int         i;

    InvalidateCatalogSnapshot();
    ResetCatalogCaches();
    RelationCacheInvalidate(debug_discard); /* gets smgr and relmap too */

    for (i = 0; i < syscache_callback_count; i++)
    {
        struct SYSCACHECALLBACK *ccitem = syscache_callback_list + i;

        ccitem->function(ccitem->arg, ccitem->id, 0);
    }

    for (i = 0; i < relcache_callback_count; i++)
    {
        struct RELCACHECALLBACK *ccitem = relcache_callback_list + i;

        ccitem->function(ccitem->arg, InvalidOid);
    }
}

/*
 * LocalExecuteInvalidationMessage
 *
 * Process a single invalidation message (which could be of any type).
 * Only the local caches are flushed; this does not transmit the message
 * to other backends.
 */
void
LocalExecuteInvalidationMessage(SharedInvalidationMessage *msg)
{
    if (msg->id >= 0)
    {
        if (msg->cc.dbId == MyDatabaseId || msg->cc.dbId == InvalidOid)
        {
            InvalidateCatalogSnapshot();

            SysCacheInvalidate(msg->cc.id, msg->cc.hashValue);

            CallSyscacheCallbacks(msg->cc.id, msg->cc.hashValue);
        }
    }
    else if (msg->id == SHAREDINVALCATALOG_ID)
    {
        if (msg->cat.dbId == MyDatabaseId || msg->cat.dbId == InvalidOid)
        {
            InvalidateCatalogSnapshot();

            CatalogCacheFlushCatalog(msg->cat.catId);

            /* CatalogCacheFlushCatalog calls CallSyscacheCallbacks as needed */
        }
    }
    else if (msg->id == SHAREDINVALRELCACHE_ID)
    {
        if (msg->rc.dbId == MyDatabaseId || msg->rc.dbId == InvalidOid)
        {
            int         i;

            if (msg->rc.relId == InvalidOid)
                RelationCacheInvalidate(false);
            else
                RelationCacheInvalidateEntry(msg->rc.relId);

            for (i = 0; i < relcache_callback_count; i++)
            {
                struct RELCACHECALLBACK *ccitem = relcache_callback_list + i;

                ccitem->function(ccitem->arg, msg->rc.relId);
            }
        }
    }
    else if (msg->id == SHAREDINVALSMGR_ID)
    {
        /*
         * We could have smgr entries for relations of other databases, so no
         * short-circuit test is possible here.
         */
        RelFileLocatorBackend rlocator;

        rlocator.locator = msg->sm.rlocator;
        rlocator.backend = (msg->sm.backend_hi << 16) | (int) msg->sm.backend_lo;
        smgrreleaserellocator(rlocator);
    }
    else if (msg->id == SHAREDINVALRELMAP_ID)
    {
        /* We only care about our own database and shared catalogs */
        if (msg->rm.dbId == InvalidOid)
            RelationMapInvalidate(true);
        else if (msg->rm.dbId == MyDatabaseId)
            RelationMapInvalidate(false);
    }
    else if (msg->id == SHAREDINVALSNAPSHOT_ID)
    {
        /* We only care about our own database and shared catalogs */
        if (msg->sn.dbId == InvalidOid)
            InvalidateCatalogSnapshot();
        else if (msg->sn.dbId == MyDatabaseId)
            InvalidateCatalogSnapshot();
    }
    else
        elog(FATAL, "unrecognized SI message ID: %d", msg->id);
}

/*
 * InvalidateSystemCaches
 *
 *      This blows away all tuples in the system catalog caches and
784 : : * all the cached relation descriptors and smgr cache entries.
785 : : * Relation descriptors that have positive refcounts are then rebuilt.
786 : : *
787 : : * We call this when we see a shared-inval-queue overflow signal,
788 : : * since that tells us we've lost some shared-inval messages and hence
789 : : * don't know what needs to be invalidated.
790 : : */
791 : : void
8336 tgl@sss.pgh.pa.us 792 : 1935 : InvalidateSystemCaches(void)
793 : : {
904 noah@leadboat.com 794 : 1935 : InvalidateSystemCachesExtended(false);
795 : 1935 : }
796 : :
797 : : /*
798 : : * AcceptInvalidationMessages
799 : : * Read and process invalidation messages from the shared invalidation
800 : : * message queue.
801 : : *
802 : : * Note:
803 : : * This should be called as the first step in processing a transaction.
804 : : */
805 : : void
8335 tgl@sss.pgh.pa.us 806 : 15916573 : AcceptInvalidationMessages(void)
807 : : {
808 : 15916573 : ReceiveSharedInvalidMessages(LocalExecuteInvalidationMessage,
809 : : InvalidateSystemCaches);
810 : :
811 : : /*----------
812 : : * Test code to force cache flushes anytime a flush could happen.
813 : : *
814 : : * This helps detect intermittent faults caused by code that reads a cache
815 : : * entry and then performs an action that could invalidate the entry, but
816 : : * rarely actually does so. This can spot issues that would otherwise
817 : : * only arise with badly timed concurrent DDL, for example.
818 : : *
819 : : * The default debug_discard_caches = 0 does no forced cache flushes.
820 : : *
821 : : * If used with CLOBBER_FREED_MEMORY,
822 : : * debug_discard_caches = 1 (formerly known as CLOBBER_CACHE_ALWAYS)
823 : : * provides a fairly thorough test that the system contains no cache-flush
824 : : * hazards. However, it also makes the system unbelievably slow --- the
825 : : * regression tests take about 100 times longer than normal.
826 : : *
827 : : * If you're a glutton for punishment, try
828 : : * debug_discard_caches = 3 (formerly known as CLOBBER_CACHE_RECURSIVELY).
829 : : * This slows things by at least a factor of 10000, so I wouldn't suggest
830 : : * trying to run the entire regression test suite that way. It's useful to try
831 : : * a few simple tests, to make sure that cache reload isn't subject to
832 : : * internal cache-flush hazards, but after you've done a few thousand
833 : : * recursive reloads it's unlikely you'll learn more.
834 : : *----------
835 : : */
836 : : #ifdef DISCARD_CACHES_ENABLED
837 : : {
838 : : static int recursion_depth = 0;
839 : :
1006 840 [ - + ]: 15916573 : if (recursion_depth < debug_discard_caches)
841 : : {
2046 tgl@sss.pgh.pa.us 842 :UBC 0 : recursion_depth++;
904 noah@leadboat.com 843 : 0 : InvalidateSystemCachesExtended(true);
2046 tgl@sss.pgh.pa.us 844 : 0 : recursion_depth--;
845 : : }
846 : : }
847 : : #endif
10141 scrappy@hub.org 848 :CBC 15916573 : }
849 : :
850 : : /*
851 : : * PostPrepare_Inval
852 : : * Clean up after successful PREPARE.
853 : : *
854 : : * Here, we want to act as though the transaction aborted, so that we will
855 : : * undo any syscache changes it made, thereby bringing us into sync with the
856 : : * outside world, which doesn't believe the transaction committed yet.
857 : : *
858 : : * If the prepared transaction is later aborted, there is nothing more to
859 : : * do; if it commits, we will receive the consequent inval messages just
860 : : * like everyone else.
861 : : */
862 : : void
6876 tgl@sss.pgh.pa.us 863 : 395 : PostPrepare_Inval(void)
864 : : {
865 : 395 : AtEOXact_Inval(false);
866 : 395 : }
867 : :
868 : : /*
869 : : * xactGetCommittedInvalidationMessages() is called by
870 : : * RecordTransactionCommit() to collect invalidation messages to add to the
871 : : * commit record. This applies only to commit message types, never to
872 : : * abort records. Must always run before AtEOXact_Inval(), since that
873 : : * removes the data we need to see.
874 : : *
875 : : * Remember that this runs before we have officially committed, so we
876 : : * must not do anything here to change what might occur *if* we should
877 : : * fail between here and the actual commit.
878 : : *
879 : : * see also xact_redo_commit() and xact_desc_commit()
880 : : */
881 : : int
5230 simon@2ndQuadrant.co 882 : 174608 : xactGetCommittedInvalidationMessages(SharedInvalidationMessage **msgs,
883 : : bool *RelcacheInitFileInval)
884 : : {
885 : : SharedInvalidationMessage *msgarray;
886 : : int nummsgs;
887 : : int nmsgs;
888 : :
889 : : /* Quick exit if we haven't done anything with invalidation messages. */
3455 rhaas@postgresql.org 890 [ + + ]: 174608 : if (transInvalInfo == NULL)
891 : : {
892 : 104970 : *RelcacheInitFileInval = false;
893 : 104970 : *msgs = NULL;
894 : 104970 : return 0;
895 : : }
896 : :
897 : : /* Must be at top of stack */
898 [ + - - + ]: 69638 : Assert(transInvalInfo->my_level == 1 && transInvalInfo->parent == NULL);
899 : :
900 : : /*
901 : : * Relcache init file invalidation requires processing both before and
902 : : * after we send the SI messages. However, we need not do anything unless
903 : : * we committed.
904 : : */
5230 simon@2ndQuadrant.co 905 : 69638 : *RelcacheInitFileInval = transInvalInfo->RelcacheInitFileInval;
906 : :
907 : : /*
908 : : * Collect all the pending messages into a single contiguous array of
909 : : * invalidation messages, to simplify what needs to happen while building
910 : : * the commit WAL message. Maintain the order that they would be
911 : : * processed in by AtEOXact_Inval(), to ensure emulated behaviour in redo
912 : : * is as similar as possible to the original. We want the same bugs, if
913 : : * any, not new ones.
914 : : */
972 tgl@sss.pgh.pa.us 915 : 69638 : nummsgs = NumMessagesInGroup(&transInvalInfo->PriorCmdInvalidMsgs) +
916 : 69638 : NumMessagesInGroup(&transInvalInfo->CurrentCmdInvalidMsgs);
917 : :
918 : 69638 : *msgs = msgarray = (SharedInvalidationMessage *)
919 : 69638 : MemoryContextAlloc(CurTransactionContext,
920 : : nummsgs * sizeof(SharedInvalidationMessage));
921 : :
922 : 69638 : nmsgs = 0;
923 [ + + ]: 69638 : ProcessMessageSubGroupMulti(&transInvalInfo->PriorCmdInvalidMsgs,
924 : : CatCacheMsgs,
925 : : (memcpy(msgarray + nmsgs,
926 : : msgs,
927 : : n * sizeof(SharedInvalidationMessage)),
928 : : nmsgs += n));
929 [ + + ]: 69638 : ProcessMessageSubGroupMulti(&transInvalInfo->CurrentCmdInvalidMsgs,
930 : : CatCacheMsgs,
931 : : (memcpy(msgarray + nmsgs,
932 : : msgs,
933 : : n * sizeof(SharedInvalidationMessage)),
934 : : nmsgs += n));
935 [ + + ]: 69638 : ProcessMessageSubGroupMulti(&transInvalInfo->PriorCmdInvalidMsgs,
936 : : RelCacheMsgs,
937 : : (memcpy(msgarray + nmsgs,
938 : : msgs,
939 : : n * sizeof(SharedInvalidationMessage)),
940 : : nmsgs += n));
941 [ + + ]: 69638 : ProcessMessageSubGroupMulti(&transInvalInfo->CurrentCmdInvalidMsgs,
942 : : RelCacheMsgs,
943 : : (memcpy(msgarray + nmsgs,
944 : : msgs,
945 : : n * sizeof(SharedInvalidationMessage)),
946 : : nmsgs += n));
947 [ - + ]: 69638 : Assert(nmsgs == nummsgs);
948 : :
949 : 69638 : return nmsgs;
950 : : }
951 : :
952 : : /*
953 : : * ProcessCommittedInvalidationMessages is executed by xact_redo_commit() or
954 : : * standby_redo() to process invalidation messages. Currently that happens
955 : : * only at end-of-xact.
956 : : *
957 : : * Relcache init file invalidation requires processing both
958 : : * before and after we send the SI messages. See AtEOXact_Inval()
959 : : */
960 : : void
5209 simon@2ndQuadrant.co 961 : 19320 : ProcessCommittedInvalidationMessages(SharedInvalidationMessage *msgs,
962 : : int nmsgs, bool RelcacheInitFileInval,
963 : : Oid dbid, Oid tsid)
964 : : {
5174 965 [ + + ]: 19320 : if (nmsgs <= 0)
966 : 5120 : return;
967 : :
125 michael@paquier.xyz 968 [ - + - - ]:GNC 14200 : elog(DEBUG4, "replaying commit with %d messages%s", nmsgs,
969 : : (RelcacheInitFileInval ? " and relcache file invalidation" : ""));
970 : :
5209 simon@2ndQuadrant.co 971 [ + + ]:CBC 14200 : if (RelcacheInitFileInval)
972 : : {
125 michael@paquier.xyz 973 [ - + ]:GNC 204 : elog(DEBUG4, "removing relcache init files for database %u", dbid);
974 : :
975 : : /*
976 : : * RelationCacheInitFilePreInvalidate, when the invalidation message
977 : : * is for a specific database, requires DatabasePath to be set, but we
978 : : * should not use SetDatabasePath during recovery, since it is
979 : : * intended to be used only once by normal backends. Hence, a quick
980 : : * hack: set DatabasePath directly then unset after use.
981 : : */
2133 andres@anarazel.de 982 [ + - ]:CBC 204 : if (OidIsValid(dbid))
983 : 204 : DatabasePath = GetDatabasePath(dbid, tsid);
984 : :
4625 tgl@sss.pgh.pa.us 985 : 204 : RelationCacheInitFilePreInvalidate();
986 : :
2133 andres@anarazel.de 987 [ + - ]: 204 : if (OidIsValid(dbid))
988 : : {
989 : 204 : pfree(DatabasePath);
990 : 204 : DatabasePath = NULL;
991 : : }
992 : : }
993 : :
5209 simon@2ndQuadrant.co 994 : 14200 : SendSharedInvalidMessages(msgs, nmsgs);
995 : :
996 [ + + ]: 14200 : if (RelcacheInitFileInval)
4625 tgl@sss.pgh.pa.us 997 : 204 : RelationCacheInitFilePostInvalidate();
998 : : }
999 : :
1000 : : /*
1001 : : * AtEOXact_Inval
1002 : : * Process queued-up invalidation messages at end of main transaction.
1003 : : *
1004 : : * If isCommit, we must send out the messages in our PriorCmdInvalidMsgs list
1005 : : * to the shared invalidation message queue. Note that these will be read
1006 : : * not only by other backends, but also by our own backend at the next
1007 : : * transaction start (via AcceptInvalidationMessages). This means that
1008 : : * we can skip immediate local processing of anything that's still in
1009 : : * CurrentCmdInvalidMsgs, and just send that list out too.
1010 : : *
1011 : : * If not isCommit, we are aborting, and must locally process the messages
1012 : : * in PriorCmdInvalidMsgs. No messages need be sent to other backends,
1013 : : * since they'll not have seen our changed tuples anyway. We can forget
1014 : : * about CurrentCmdInvalidMsgs too, since those changes haven't touched
1015 : : * the caches yet.
1016 : : *
1017 : : * In any case, reset our state to empty. We need not physically
1018 : : * free memory here, since TopTransactionContext is about to be emptied
1019 : : * anyway.
1020 : : *
1021 : : * Note:
1022 : : * This should be called as the last step in processing a transaction.
1023 : : */
1024 : : void
7227 1025 : 433062 : AtEOXact_Inval(bool isCommit)
1026 : : {
1027 : : /* Quick exit if no messages */
3455 rhaas@postgresql.org 1028 [ + + ]: 433062 : if (transInvalInfo == NULL)
1029 : 323135 : return;
1030 : :
1031 : : /* Must be at top of stack */
1032 [ + - - + ]: 109927 : Assert(transInvalInfo->my_level == 1 && transInvalInfo->parent == NULL);
1033 : :
8335 tgl@sss.pgh.pa.us 1034 [ + + ]: 109927 : if (isCommit)
1035 : : {
1036 : : /*
1037 : : * Relcache init file invalidation requires processing both before and
1038 : : * after we send the SI messages. However, we need not do anything
1039 : : * unless we committed.
1040 : : */
7227 1041 [ + + ]: 107938 : if (transInvalInfo->RelcacheInitFileInval)
4625 1042 : 15446 : RelationCacheInitFilePreInvalidate();
1043 : :
7227 1044 : 107938 : AppendInvalidationMessages(&transInvalInfo->PriorCmdInvalidMsgs,
6756 bruce@momjian.us 1045 : 107938 : &transInvalInfo->CurrentCmdInvalidMsgs);
1046 : :
5778 tgl@sss.pgh.pa.us 1047 : 107938 : ProcessInvalidationMessagesMulti(&transInvalInfo->PriorCmdInvalidMsgs,
1048 : : SendSharedInvalidMessages);
1049 : :
7227 1050 [ + + ]: 107938 : if (transInvalInfo->RelcacheInitFileInval)
4625 1051 : 15446 : RelationCacheInitFilePostInvalidate();
1052 : : }
1053 : : else
1054 : : {
7227 1055 : 1989 : ProcessInvalidationMessages(&transInvalInfo->PriorCmdInvalidMsgs,
1056 : : LocalExecuteInvalidationMessage);
1057 : : }
1058 : :
1059 : : /* Need not free anything explicitly */
1060 : 109927 : transInvalInfo = NULL;
1061 : : }
1062 : :
1063 : : /*
1064 : : * AtEOSubXact_Inval
1065 : : * Process queued-up invalidation messages at end of subtransaction.
1066 : : *
1067 : : * If isCommit, process CurrentCmdInvalidMsgs if any (there probably aren't),
1068 : : * and then attach both CurrentCmdInvalidMsgs and PriorCmdInvalidMsgs to the
1069 : : * parent's PriorCmdInvalidMsgs list.
1070 : : *
1071 : : * If not isCommit, we are aborting, and must locally process the messages
1072 : : * in PriorCmdInvalidMsgs. No messages need be sent to other backends.
1073 : : * We can forget about CurrentCmdInvalidMsgs too, since those changes haven't
1074 : : * touched the caches yet.
1075 : : *
1076 : : * In any case, pop the transaction stack. We need not physically free memory
1077 : : * here, since CurTransactionContext is about to be emptied anyway
1078 : : * (if aborting). Beware of the possibility of aborting the same nesting
1079 : : * level twice, though.
1080 : : */
1081 : : void
7200 1082 : 9929 : AtEOSubXact_Inval(bool isCommit)
1083 : : {
1084 : : int my_level;
7227 1085 : 9929 : TransInvalidationInfo *myInfo = transInvalInfo;
1086 : :
1087 : : /* Quick exit if no messages. */
3455 rhaas@postgresql.org 1088 [ + + ]: 9929 : if (myInfo == NULL)
1089 : 9167 : return;
1090 : :
1091 : : /* Also bail out quickly if messages are not for this level. */
1092 : 762 : my_level = GetCurrentTransactionNestLevel();
1093 [ + + ]: 762 : if (myInfo->my_level != my_level)
1094 : : {
1095 [ - + ]: 637 : Assert(myInfo->my_level < my_level);
1096 : 637 : return;
1097 : : }
1098 : :
1099 [ + + ]: 125 : if (isCommit)
1100 : : {
1101 : : /* If CurrentCmdInvalidMsgs still has anything, fix it */
7227 tgl@sss.pgh.pa.us 1102 : 42 : CommandEndInvalidationMessages();
1103 : :
1104 : : /*
1105 : : * We create invalidation stack entries lazily, so the parent might
1106 : : * not have one. Instead of creating one, moving all the data over,
1107 : : * and then freeing our own, we can just adjust the level of our own
1108 : : * entry.
1109 : : */
3455 rhaas@postgresql.org 1110 [ + + - + ]: 42 : if (myInfo->parent == NULL || myInfo->parent->my_level < my_level - 1)
1111 : : {
1112 : 32 : myInfo->my_level--;
1113 : 32 : return;
1114 : : }
1115 : :
1116 : : /*
1117 : : * Pass up my inval messages to parent. Notice that we stick them in
1118 : : * PriorCmdInvalidMsgs, not CurrentCmdInvalidMsgs, since they've
1119 : : * already been locally processed. (This would trigger the Assert in
1120 : : * AppendInvalidationMessageSubGroup if the parent's
1121 : : * CurrentCmdInvalidMsgs isn't empty; but we already checked that in
1122 : : * PrepareInvalidationState.)
1123 : : */
7227 tgl@sss.pgh.pa.us 1124 : 10 : AppendInvalidationMessages(&myInfo->parent->PriorCmdInvalidMsgs,
1125 : : &myInfo->PriorCmdInvalidMsgs);
1126 : :
1127 : : /* Must readjust parent's CurrentCmdInvalidMsgs indexes now */
972 1128 : 10 : SetGroupToFollow(&myInfo->parent->CurrentCmdInvalidMsgs,
1129 : : &myInfo->parent->PriorCmdInvalidMsgs);
1130 : :
1131 : : /* Pending relcache inval becomes parent's problem too */
7227 1132 [ - + ]: 10 : if (myInfo->RelcacheInitFileInval)
7227 tgl@sss.pgh.pa.us 1133 :UBC 0 : myInfo->parent->RelcacheInitFileInval = true;
1134 : :
1135 : : /* Pop the transaction state stack */
7160 tgl@sss.pgh.pa.us 1136 :CBC 10 : transInvalInfo = myInfo->parent;
1137 : :
1138 : : /* Need not free anything else explicitly */
1139 : 10 : pfree(myInfo);
1140 : : }
1141 : : else
1142 : : {
7227 1143 : 83 : ProcessInvalidationMessages(&myInfo->PriorCmdInvalidMsgs,
1144 : : LocalExecuteInvalidationMessage);
1145 : :
1146 : : /* Pop the transaction state stack */
7160 1147 : 83 : transInvalInfo = myInfo->parent;
1148 : :
1149 : : /* Need not free anything else explicitly */
1150 : 83 : pfree(myInfo);
1151 : : }
1152 : : }
1153 : :
1154 : : /*
1155 : : * CommandEndInvalidationMessages
1156 : : * Process queued-up invalidation messages at end of one command
1157 : : * in a transaction.
1158 : : *
1159 : : * Here, we send no messages to the shared queue, since we don't know yet if
1160 : : * we will commit. We do need to locally process the CurrentCmdInvalidMsgs
1161 : : * list, so as to flush our caches of any entries we have outdated in the
1162 : : * current command. We then move the current-cmd list over to become part
1163 : : * of the prior-cmds list.
1164 : : *
1165 : : * Note:
1166 : : * This should be called during CommandCounterIncrement(),
1167 : : * after we have advanced the command ID.
1168 : : */
1169 : : void
7227 1170 : 517334 : CommandEndInvalidationMessages(void)
1171 : : {
1172 : : /*
1173 : : * You might think this shouldn't be called outside any transaction, but
1174 : : * bootstrap does it, and so does ABORT issued when not in a transaction.
1175 : : * So just quietly return if there is no state to work on.
1176 : : */
1177 [ + + ]: 517334 : if (transInvalInfo == NULL)
1178 : 180323 : return;
1179 : :
1180 : 337011 : ProcessInvalidationMessages(&transInvalInfo->CurrentCmdInvalidMsgs,
1181 : : LocalExecuteInvalidationMessage);
1182 : :
1183 : : /* WAL Log per-command invalidation messages for wal_level=logical */
1361 akapila@postgresql.o 1184 [ + + ]: 337008 : if (XLogLogicalInfoActive())
1185 : 3650 : LogLogicalInvalidations();
1186 : :
7227 tgl@sss.pgh.pa.us 1187 : 337008 : AppendInvalidationMessages(&transInvalInfo->PriorCmdInvalidMsgs,
1188 : 337008 : &transInvalInfo->CurrentCmdInvalidMsgs);
1189 : : }
1190 : :
1191 : :
1192 : : /*
1193 : : * CacheInvalidateHeapTuple
1194 : : * Register the given tuple for invalidation at end of command
1195 : : * (ie, current command is creating or outdating this tuple).
1196 : : * Also, detect whether a relcache invalidation is implied.
1197 : : *
1198 : : * For an insert or delete, tuple is the target tuple and newtuple is NULL.
1199 : : * For an update, we are called just once, with tuple being the old tuple
1200 : : * version and newtuple the new version. This allows avoidance of duplicate
1201 : : * effort during an update.
1202 : : */
1203 : : void
4625 1204 : 10187201 : CacheInvalidateHeapTuple(Relation relation,
1205 : : HeapTuple tuple,
1206 : : HeapTuple newtuple)
1207 : : {
1208 : : Oid tupleRelId;
1209 : : Oid databaseId;
1210 : : Oid relationId;
1211 : :
1212 : : /* Do nothing during bootstrap */
1213 [ + + ]: 10187201 : if (IsBootstrapProcessingMode())
1214 : 490542 : return;
1215 : :
1216 : : /*
1217 : : * We only need to worry about invalidation for tuples that are in system
1218 : : * catalogs; user-relation tuples are never in catcaches and can't affect
1219 : : * the relcache either.
1220 : : */
3790 rhaas@postgresql.org 1221 [ + + ]: 9696659 : if (!IsCatalogRelation(relation))
4625 tgl@sss.pgh.pa.us 1222 : 7922005 : return;
1223 : :
1224 : : /*
1225 : : * IsCatalogRelation() will return true for TOAST tables of system
1226 : : * catalogs, but we don't care about those, either.
1227 : : */
1228 [ + + ]: 1774654 : if (IsToastRelation(relation))
1229 : 12203 : return;
1230 : :
1231 : : /*
1232 : : * If we're not prepared to queue invalidation messages for this
1233 : : * subtransaction level, get ready now.
1234 : : */
3455 rhaas@postgresql.org 1235 : 1762451 : PrepareInvalidationState();
1236 : :
1237 : : /*
1238 : : * First let the catcache do its thing
1239 : : */
3939 1240 : 1762451 : tupleRelId = RelationGetRelid(relation);
1241 [ + + ]: 1762451 : if (RelationInvalidatesSnapshotsOnly(tupleRelId))
1242 : : {
1243 [ + + ]: 449400 : databaseId = IsSharedRelation(tupleRelId) ? InvalidOid : MyDatabaseId;
1244 : 449400 : RegisterSnapshotInvalidation(databaseId, tupleRelId);
1245 : : }
1246 : : else
1247 : 1313051 : PrepareToInvalidateCacheTuple(relation, tuple, newtuple,
1248 : : RegisterCatcacheInvalidation);
1249 : :
1250 : : /*
1251 : : * Now, is this tuple one of the primary definers of a relcache entry? See
1252 : : * comments in file header for deeper explanation.
1253 : : *
1254 : : * Note we ignore newtuple here; we assume an update cannot move a tuple
1255 : : * from being part of one relcache entry to being part of another.
1256 : : */
4625 tgl@sss.pgh.pa.us 1257 [ + + ]: 1762451 : if (tupleRelId == RelationRelationId)
1258 : : {
1259 : 191207 : Form_pg_class classtup = (Form_pg_class) GETSTRUCT(tuple);
1260 : :
1972 andres@anarazel.de 1261 : 191207 : relationId = classtup->oid;
4625 tgl@sss.pgh.pa.us 1262 [ + + ]: 191207 : if (classtup->relisshared)
1263 : 6014 : databaseId = InvalidOid;
1264 : : else
1265 : 185193 : databaseId = MyDatabaseId;
1266 : : }
1267 [ + + ]: 1571244 : else if (tupleRelId == AttributeRelationId)
1268 : : {
1269 : 513632 : Form_pg_attribute atttup = (Form_pg_attribute) GETSTRUCT(tuple);
1270 : :
1271 : 513632 : relationId = atttup->attrelid;
1272 : :
1273 : : /*
1274 : : * KLUGE ALERT: we always send the relcache event with MyDatabaseId,
1275 : : * even if the rel in question is shared (which we can't easily tell).
1276 : : * This essentially means that only backends in this same database
1277 : : * will react to the relcache flush request. This is in fact
1278 : : * appropriate, since only those backends could see our pg_attribute
1279 : : * change anyway. It looks a bit ugly though. (In practice, shared
1280 : : * relations can't have schema changes after bootstrap, so we should
1281 : : * never come here for a shared rel anyway.)
1282 : : */
1283 : 513632 : databaseId = MyDatabaseId;
1284 : : }
1285 [ + + ]: 1057612 : else if (tupleRelId == IndexRelationId)
1286 : : {
1287 : 30332 : Form_pg_index indextup = (Form_pg_index) GETSTRUCT(tuple);
1288 : :
1289 : : /*
1290 : : * When a pg_index row is updated, we should send out a relcache inval
1291 : : * for the index relation. As above, we don't know the shared status
1292 : : * of the index, but in practice it doesn't matter since indexes of
1293 : : * shared catalogs can't have such updates.
1294 : : */
1295 : 30332 : relationId = indextup->indexrelid;
1296 : 30332 : databaseId = MyDatabaseId;
1297 : : }
1910 alvherre@alvh.no-ip. 1298 [ + + ]: 1027280 : else if (tupleRelId == ConstraintRelationId)
1299 : : {
1300 : 27337 : Form_pg_constraint constrtup = (Form_pg_constraint) GETSTRUCT(tuple);
1301 : :
1302 : : /*
1303 : : * Foreign keys are part of relcache entries, too, so send out an
1304 : : * inval for the table that the FK applies to.
1305 : : */
1306 [ + + ]: 27337 : if (constrtup->contype == CONSTRAINT_FOREIGN &&
1307 [ + - ]: 3537 : OidIsValid(constrtup->conrelid))
1308 : : {
1309 : 3537 : relationId = constrtup->conrelid;
1310 : 3537 : databaseId = MyDatabaseId;
1311 : : }
1312 : : else
1313 : 23800 : return;
1314 : : }
1315 : : else
4625 tgl@sss.pgh.pa.us 1316 : 999943 : return;
1317 : :
1318 : : /*
1319 : : * Yes. We need to register a relcache invalidation event.
1320 : : */
1321 : 738708 : RegisterRelcacheInvalidation(databaseId, relationId);
1322 : : }
1323 : :
1324 : : /*
1325 : : * CacheInvalidateCatalog
1326 : : * Register invalidation of the whole content of a system catalog.
1327 : : *
1328 : : * This is normally used in VACUUM FULL/CLUSTER, where we haven't so much
1329 : : * changed any tuples as moved them around. Some uses of catcache entries
1330 : : * expect their TIDs to be correct, so we have to blow away the entries.
1331 : : *
1332 : : * Note: we expect caller to verify that the rel actually is a system
1333 : : * catalog. If it isn't, no great harm is done, just a wasted sinval message.
1334 : : */
1335 : : void
5180 1336 : 101 : CacheInvalidateCatalog(Oid catalogId)
1337 : : {
1338 : : Oid databaseId;
1339 : :
3455 rhaas@postgresql.org 1340 : 101 : PrepareInvalidationState();
1341 : :
5180 tgl@sss.pgh.pa.us 1342 [ + + ]: 101 : if (IsSharedRelation(catalogId))
1343 : 18 : databaseId = InvalidOid;
1344 : : else
1345 : 83 : databaseId = MyDatabaseId;
1346 : :
1347 : 101 : RegisterCatalogInvalidation(databaseId, catalogId);
1348 : 101 : }
1349 : :
1350 : : /*
1351 : : * CacheInvalidateRelcache
1352 : : * Register invalidation of the specified relation's relcache entry
1353 : : * at end of command.
1354 : : *
1355 : : * This is used in places that need to force relcache rebuild but aren't
1356 : : * changing any of the tuples recognized as contributors to the relcache
1357 : : * entry by CacheInvalidateHeapTuple. (An example is dropping an index.)
1358 : : */
1359 : : void
7369 1360 : 55016 : CacheInvalidateRelcache(Relation relation)
1361 : : {
1362 : : Oid databaseId;
1363 : : Oid relationId;
1364 : :
3455 rhaas@postgresql.org 1365 : 55016 : PrepareInvalidationState();
1366 : :
7369 tgl@sss.pgh.pa.us 1367 : 55016 : relationId = RelationGetRelid(relation);
1368 [ + + ]: 55016 : if (relation->rd_rel->relisshared)
1369 : 1822 : databaseId = InvalidOid;
1370 : : else
1371 : 53194 : databaseId = MyDatabaseId;
1372 : :
7034 1373 : 55016 : RegisterRelcacheInvalidation(databaseId, relationId);
7369 1374 : 55016 : }
1375 : :
1376 : : /*
1377 : : * CacheInvalidateRelcacheAll
1378 : : * Register invalidation of the whole relcache at the end of command.
1379 : : *
1380 : : * This is used by alter publication as changes in publications may affect
1381 : : * large number of tables.
1382 : : */
1383 : : void
2642 peter_e@gmx.net 1384 : 47 : CacheInvalidateRelcacheAll(void)
1385 : : {
1386 : 47 : PrepareInvalidationState();
1387 : :
1388 : 47 : RegisterRelcacheInvalidation(InvalidOid, InvalidOid);
1389 : 47 : }
1390 : :
1391 : : /*
1392 : : * CacheInvalidateRelcacheByTuple
1393 : : * As above, but relation is identified by passing its pg_class tuple.
1394 : : */
1395 : : void
7369 tgl@sss.pgh.pa.us 1396 : 34977 : CacheInvalidateRelcacheByTuple(HeapTuple classTuple)
1397 : : {
1398 : 34977 : Form_pg_class classtup = (Form_pg_class) GETSTRUCT(classTuple);
1399 : : Oid databaseId;
1400 : : Oid relationId;
1401 : :
3455 rhaas@postgresql.org 1402 : 34977 : PrepareInvalidationState();
1403 : :
1972 andres@anarazel.de 1404 : 34977 : relationId = classtup->oid;
7369 tgl@sss.pgh.pa.us 1405 [ + + ]: 34977 : if (classtup->relisshared)
1406 : 913 : databaseId = InvalidOid;
1407 : : else
1408 : 34064 : databaseId = MyDatabaseId;
7034 1409 : 34977 : RegisterRelcacheInvalidation(databaseId, relationId);
8861 inoue@tpf.co.jp 1410 : 34977 : }
1411 : :
1412 : : /*
1413 : : * CacheInvalidateRelcacheByRelid
1414 : : * As above, but relation is identified by passing its OID.
1415 : : * This is the least efficient of the three options; use one of
1416 : : * the above routines if you have a Relation or pg_class tuple.
1417 : : */
1418 : : void
7283 tgl@sss.pgh.pa.us 1419 : 14900 : CacheInvalidateRelcacheByRelid(Oid relid)
1420 : : {
1421 : : HeapTuple tup;
1422 : :
3455 rhaas@postgresql.org 1423 : 14900 : PrepareInvalidationState();
1424 : :
5173 1425 : 14900 : tup = SearchSysCache1(RELOID, ObjectIdGetDatum(relid));
7283 tgl@sss.pgh.pa.us 1426 [ - + ]: 14900 : if (!HeapTupleIsValid(tup))
7283 tgl@sss.pgh.pa.us 1427 [ # # ]:UBC 0 : elog(ERROR, "cache lookup failed for relation %u", relid);
7283 tgl@sss.pgh.pa.us 1428 :CBC 14900 : CacheInvalidateRelcacheByTuple(tup);
1429 : 14900 : ReleaseSysCache(tup);
1430 : 14900 : }
1431 : :
1432 : :
1433 : : /*
1434 : : * CacheInvalidateSmgr
1435 : : * Register invalidation of smgr references to a physical relation.
1436 : : *
1437 : : * Sending this type of invalidation msg forces other backends to close open
1438 : : * smgr entries for the rel. This should be done to flush dangling open-file
1439 : : * references when the physical rel is being dropped or truncated. Because
1440 : : * these are nontransactional (i.e., not-rollback-able) operations, we just
1441 : : * send the inval message immediately without any queuing.
1442 : : *
1443 : : * Note: in most cases there will have been a relcache flush issued against
1444 : : * the rel at the logical level. We need a separate smgr-level flush because
1445 : : * it is possible for backends to have open smgr entries for rels they don't
1446 : : * have a relcache entry for, e.g. because the only thing they ever did with
1447 : : * the rel is write out dirty shared buffers.
1448 : : *
1449 : : * Note: because these messages are nontransactional, they won't be captured
1450 : : * in commit/abort WAL entries. Instead, calls to CacheInvalidateSmgr()
1451 : : * should happen in low-level smgr.c routines, which are executed while
1452 : : * replaying WAL as well as when creating it.
1453 : : *
1454 : : * Note: In order to avoid bloating SharedInvalidationMessage, we store only
1455 : : * three bytes of the ProcNumber using what would otherwise be padding space.
1456 : : * Thus, the maximum possible ProcNumber is 2^23-1.
1457 : : */
1458 : : void
648 rhaas@postgresql.org 1459 : 45418 : CacheInvalidateSmgr(RelFileLocatorBackend rlocator)
1460 : : {
1461 : : SharedInvalidationMessage msg;
1462 : :
5184 tgl@sss.pgh.pa.us 1463 : 45418 : msg.sm.id = SHAREDINVALSMGR_ID;
648 rhaas@postgresql.org 1464 : 45418 : msg.sm.backend_hi = rlocator.backend >> 16;
1465 : 45418 : msg.sm.backend_lo = rlocator.backend & 0xffff;
564 1466 : 45418 : msg.sm.rlocator = rlocator.locator;
1467 : : /* check AddCatcacheInvalidationMessage() for an explanation */
1468 : : VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));
1469 : :
5184 tgl@sss.pgh.pa.us 1470 : 45418 : SendSharedInvalidMessages(&msg, 1);
1471 : 45418 : }
1472 : :
1473 : : /*
1474 : : * CacheInvalidateRelmap
1475 : : * Register invalidation of the relation mapping for a database,
1476 : : * or for the shared catalogs if databaseId is zero.
1477 : : *
1478 : : * Sending this type of invalidation msg forces other backends to re-read
1479 : : * the indicated relation mapping file. It is also necessary to send a
1480 : : * relcache inval for the specific relations whose mapping has been altered,
1481 : : * else the relcache won't get updated with the new filenode data.
1482 : : *
1483 : : * Note: because these messages are nontransactional, they won't be captured
1484 : : * in commit/abort WAL entries. Instead, calls to CacheInvalidateRelmap()
1485 : : * should happen in low-level relmapper.c routines, which are executed while
1486 : : * replaying WAL as well as when creating it.
1487 : : */
1488 : : void
5180 1489 : 195 : CacheInvalidateRelmap(Oid databaseId)
1490 : : {
1491 : : SharedInvalidationMessage msg;
1492 : :
1493 : 195 : msg.rm.id = SHAREDINVALRELMAP_ID;
1494 : 195 : msg.rm.dbId = databaseId;
1495 : : /* check AddCatcacheInvalidationMessage() for an explanation */
1496 : : VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));
1497 : :
1498 : 195 : SendSharedInvalidMessages(&msg, 1);
1499 : 195 : }
1500 : :
1501 : :
1502 : : /*
1503 : : * CacheRegisterSyscacheCallback
1504 : : * Register the specified function to be called for all future
1505 : : * invalidation events in the specified cache. The cache ID and the
1506 : : * hash value of the tuple being invalidated will be passed to the
1507 : : * function.
1508 : : *
1509 : : * NOTE: Hash value zero will be passed if a cache reset request is received.
1510 : : * In this case the called routines should flush all cached state.
1511 : : * Yes, there's a possibility of a false match to zero, but it doesn't seem
1512 : : * worth troubling over, especially since most of the current callees just
1513 : : * flush all cached state anyway.
1514 : : */
1515 : : void
8021 1516 : 225911 : CacheRegisterSyscacheCallback(int cacheid,
1517 : : SyscacheCallbackFunction func,
1518 : : Datum arg)
1519 : : {
2529 1520 [ + - - + ]: 225911 : if (cacheid < 0 || cacheid >= SysCacheSize)
2529 tgl@sss.pgh.pa.us 1521 [ # # ]:UBC 0 : elog(FATAL, "invalid cache ID: %d", cacheid);
5696 tgl@sss.pgh.pa.us 1522 [ - + ]:CBC 225911 : if (syscache_callback_count >= MAX_SYSCACHE_CALLBACKS)
5696 tgl@sss.pgh.pa.us 1523 [ # # ]:UBC 0 : elog(FATAL, "out of syscache_callback_list slots");
1524 : :
2529 tgl@sss.pgh.pa.us 1525 [ + + ]:CBC 225911 : if (syscache_callback_links[cacheid] == 0)
1526 : : {
1527 : : /* first callback for this cache */
1528 : 181991 : syscache_callback_links[cacheid] = syscache_callback_count + 1;
1529 : : }
1530 : : else
1531 : : {
1532 : : /* add to end of chain, so that older callbacks are called first */
1533 : 43920 : int i = syscache_callback_links[cacheid] - 1;
1534 : :
1535 [ + + ]: 58282 : while (syscache_callback_list[i].link > 0)
1536 : 14362 : i = syscache_callback_list[i].link - 1;
1537 : 43920 : syscache_callback_list[i].link = syscache_callback_count + 1;
1538 : : }
1539 : :
5696 1540 : 225911 : syscache_callback_list[syscache_callback_count].id = cacheid;
2529 1541 : 225911 : syscache_callback_list[syscache_callback_count].link = 0;
5696 1542 : 225911 : syscache_callback_list[syscache_callback_count].function = func;
1543 : 225911 : syscache_callback_list[syscache_callback_count].arg = arg;
1544 : :
1545 : 225911 : ++syscache_callback_count;
8021 1546 : 225911 : }
1547 : :
1548 : : /*
1549 : : * CacheRegisterRelcacheCallback
1550 : : * Register the specified function to be called for all future
1551 : : * relcache invalidation events. The OID of the relation being
1552 : : * invalidated will be passed to the function.
1553 : : *
1554 : : * NOTE: InvalidOid will be passed if a cache reset request is received.
1555 : : * In this case the called routines should flush all cached state.
1556 : : */
1557 : : void
5696 1558 : 20614 : CacheRegisterRelcacheCallback(RelcacheCallbackFunction func,
1559 : : Datum arg)
1560 : : {
1561 [ - + ]: 20614 : if (relcache_callback_count >= MAX_RELCACHE_CALLBACKS)
5696 tgl@sss.pgh.pa.us 1562 [ # # ]:UBC 0 : elog(FATAL, "out of relcache_callback_list slots");
1563 : :
5696 tgl@sss.pgh.pa.us 1564 :CBC 20614 : relcache_callback_list[relcache_callback_count].function = func;
1565 : 20614 : relcache_callback_list[relcache_callback_count].arg = arg;
1566 : :
1567 : 20614 : ++relcache_callback_count;
8021 1568 : 20614 : }
1569 : :
1570 : : /*
1571 : : * CallSyscacheCallbacks
1572 : : *
1573 : : * This is exported so that CatalogCacheFlushCatalog can call it, saving
1574 : : * this module from knowing which catcache IDs correspond to which catalogs.
1575 : : */
1576 : : void
4625 1577 : 10525091 : CallSyscacheCallbacks(int cacheid, uint32 hashvalue)
1578 : : {
1579 : : int i;
1580 : :
2529 1581 [ + - - + ]: 10525091 : if (cacheid < 0 || cacheid >= SysCacheSize)
2529 tgl@sss.pgh.pa.us 1582 [ # # ]:UBC 0 : elog(ERROR, "invalid cache ID: %d", cacheid);
1583 : :
2529 tgl@sss.pgh.pa.us 1584 :CBC 10525091 : i = syscache_callback_links[cacheid] - 1;
1585 [ + + ]: 12000426 : while (i >= 0)
1586 : : {
5180 1587 : 1475335 : struct SYSCACHECALLBACK *ccitem = syscache_callback_list + i;
1588 : :
2529 1589 [ - + ]: 1475335 : Assert(ccitem->id == cacheid);
2411 peter_e@gmx.net 1590 : 1475335 : ccitem->function(ccitem->arg, cacheid, hashvalue);
2529 tgl@sss.pgh.pa.us 1591 : 1475335 : i = ccitem->link - 1;
1592 : : }
5180 1593 : 10525091 : }
1594 : :
1595 : : /*
1596 : : * LogLogicalInvalidations
1597 : : *
1598 : : * Emit WAL for invalidations caused by the current command.
1599 : : *
1600 : : * This is currently only used for logging invalidations at the command end
1601 : : * or at commit time if any invalidations are pending.
1602 : : */
1603 : : void
972 1604 : 15044 : LogLogicalInvalidations(void)
1605 : : {
1606 : : xl_xact_invals xlrec;
1607 : : InvalidationMsgsGroup *group;
1608 : : int nmsgs;
1609 : :
1610 : : /* Quick exit if we haven't done anything with invalidation messages. */
1361 akapila@postgresql.o 1611 [ + + ]: 15044 : if (transInvalInfo == NULL)
1612 : 9834 : return;
1613 : :
972 tgl@sss.pgh.pa.us 1614 : 5210 : group = &transInvalInfo->CurrentCmdInvalidMsgs;
1615 : 5210 : nmsgs = NumMessagesInGroup(group);
1616 : :
1361 akapila@postgresql.o 1617 [ + + ]: 5210 : if (nmsgs > 0)
1618 : : {
1619 : : /* prepare record */
1620 : 4227 : memset(&xlrec, 0, MinSizeOfXactInvals);
1621 : 4227 : xlrec.nmsgs = nmsgs;
1622 : :
1623 : : /* perform insertion */
1624 : 4227 : XLogBeginInsert();
1625 : 4227 : XLogRegisterData((char *) (&xlrec), MinSizeOfXactInvals);
972 tgl@sss.pgh.pa.us 1626 [ + + ]: 4227 : ProcessMessageSubGroupMulti(group, CatCacheMsgs,
1627 : : XLogRegisterData((char *) msgs,
1628 : : n * sizeof(SharedInvalidationMessage)));
1629 [ + + ]: 4227 : ProcessMessageSubGroupMulti(group, RelCacheMsgs,
1630 : : XLogRegisterData((char *) msgs,
1631 : : n * sizeof(SharedInvalidationMessage)));
1361 akapila@postgresql.o 1632 : 4227 : XLogInsert(RM_XACT_ID, XLOG_XACT_INVALIDATIONS);
1633 : : }
1634 : : }
|