> Not quite true (at least starting from 9i). The shared pool is split
> into multiple sub-pools if it is greater than 250MB and/or if your
> cpu_count > 4. In this case the shared pool is covered by multiple
> shared pool latches.
I'll use my chance here to throw in an undocumented parameter, _kghdsidx_count, which you can modify to manually control how many sub-pools (heaps) the shared pool will be divided into.
Each sub-pool has its own freelists, LRU lists, and a latch protecting operations on them. This means you can relieve shared pool latch contention caused by extremely poorly written applications, but you might also introduce unnecessary ORA-4031 problems when most allocations happen to be non-uniformly distributed into one specific sub-pool, resulting in an out-of-memory error there, while other sub-pools might still have enough (but unusable) free space in them.
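As a sketch, here is how you might change the parameter and then check how many sub-pools you actually got. This is hedged: _kghdsidx_count is undocumented and unsupported, and the x$kghlu fixed table (one row per shared pool LRU list, i.e. per sub-pool) is version-dependent, so verify against your own release before relying on it.

```sql
-- Undocumented parameter: don't touch this on a production system without
-- Oracle Support's blessing. Takes effect only after an instance restart.
ALTER SYSTEM SET "_kghdsidx_count" = 2 SCOPE = SPFILE;

-- After restart, one rough way to see how many sub-pools you ended up with
-- is to count the shared pool LRU list rows in x$kghlu (one per sub-pool):
SELECT COUNT(*) AS subpool_count FROM x$kghlu;
```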
About parent vs child latches: there is no fundamental low-level difference between parent and child latches; they are all small regions of memory modified with atomic test-and-set style opcodes.
You can see parent (and solitary) latches in x$ksllt where kslltcnm = 0; child latches have kslltcnm > 0 (their child number is stored there).
V$LATCH_PARENT shows all latches with kslltcnm = 0 and V$LATCH_CHILDREN shows all latches with kslltcnm > 0. V$LATCH just summarizes and groups all the statistics by latch number; it doesn't care about parent vs child latches.
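A quick way to see this split for yourself is to query the fixed table directly. A sketch, assuming the x$ksllt column names of that era (kslltnum, kslltnam, kslltcnm; the fixed tables were reorganized in later releases, so adjust for your version):

```sql
-- Parent/solitary entry has kslltcnm = 0; each child has its child number
-- in kslltcnm. For a solitary latch you'll get just the single row.
SELECT kslltnum AS latch#, kslltcnm AS child#, kslltnam AS name
  FROM x$ksllt
 WHERE kslltnam = 'library cache'
 ORDER BY kslltcnm;
```

The same rows are what you see split across V$LATCH_PARENT and V$LATCH_CHILDREN, and rolled up by latch number in V$LATCH.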
It's up to Oracle how it uses the child and parent latches. Normally, when child latches are in use, the parent latch doesn't get used much (or at all), since all the resources to be protected have already been spread among the child latches.
However, there is an exception with the library cache parent latch (as also mentioned in Steve Adams' book): it doesn't get used during normal operation, but it does get used when you flush the shared pool, for example.
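You can observe that behavior with a simple before/after check. A sketch, assuming a pre-11g version where the library cache is still protected by latches rather than mutexes (and a test instance, since flushing the shared pool hurts performance):

```sql
-- Note the parent latch's gets; on an idle system this normally stays flat.
SELECT gets FROM v$latch_parent WHERE name = 'library cache';

-- Flushing the shared pool is one of the rare operations that takes the
-- parent latch (to cover all children at once).
ALTER SYSTEM FLUSH SHARED_POOL;

-- The gets count on the parent latch should now have gone up.
SELECT gets FROM v$latch_parent WHERE name = 'library cache';
```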