Guides the allowed concurrency among update operations. Used as a hint for internal sizing. The
table is internally partitioned to try to permit the indicated number of concurrent updates
without contention. Because assignment of entries to these partitions is not necessarily
uniform, the actual concurrency observed may vary. Ideally, you should choose a value to
accommodate as many threads as will ever concurrently modify the table. Using a significantly
higher value than you need can waste space and time, and a significantly lower value can lead
to thread contention. But overestimates and underestimates within an order of magnitude do not
usually have much noticeable impact. A value of one permits only one thread to modify the cache
at a time, but since read operations and cache loading computations can proceed concurrently,
this still yields higher concurrency than full synchronization.
Defaults to 4. Note: The default may change in the future. If you care about this
value, you should always choose it explicitly.
The current implementation uses the concurrency level to create a fixed number of hashtable
segments, each governed by its own write lock. The segment lock is taken once for each explicit
write, and twice for each cache loading computation (once prior to loading the new value,
and once after loading completes). Much internal cache management is performed at the segment
granularity. For example, access queues and write queues are kept per segment when they are
required by the selected eviction algorithm. As such, when writing unit tests it is not
uncommon to specify concurrencyLevel(1)
in order to achieve more deterministic eviction behavior.
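The segment-per-lock design described above can be sketched with a minimal lock-striped map. This is an illustrative model, not the library's actual implementation: each key hashes to one of `concurrencyLevel` segments, and writers contend only when their keys land in the same segment.

```scala
import scala.collection.mutable

// Sketch of lock striping (hypothetical, for illustration only):
// the table is split into a fixed number of segments, each guarded
// by its own lock, so updates to different segments never contend.
class StripedCache[K, V](concurrencyLevel: Int) {
  private val segments =
    Array.fill(concurrencyLevel)(mutable.Map.empty[K, V])

  // Non-negative hash, spread across the fixed number of segments.
  private def segmentFor(key: K): mutable.Map[K, V] =
    segments((key.hashCode & Int.MaxValue) % concurrencyLevel)

  def put(key: K, value: V): Unit = {
    val seg = segmentFor(key)
    seg.synchronized { seg.update(key, value) } // one lock take per explicit write
  }

  def get(key: K): Option[V] = {
    val seg = segmentFor(key)
    seg.synchronized { seg.get(key) }
  }
}
```

With `concurrencyLevel = 1` every key maps to the same segment, which is why a single-segment cache behaves deterministically in tests.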
Returns Some(value associated with key in this cache), or None if there is no cached
value for key.
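The Option-returning lookup can be illustrated with any map-backed cache; here is a sketch using the standard library's TrieMap as a stand-in:

```scala
import scala.collection.concurrent.TrieMap

// Stand-in for the cache: a concurrent map keyed by String.
val cache = TrieMap.empty[String, Int]
cache.put("answer", 42)

assert(cache.get("answer") == Some(42)) // cached: Some(value)
assert(cache.get("missing") == None)    // not cached: None
```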
Returns the value associated with key in this cache, obtaining that value from
defaultValue if necessary. No observable state associated with this cache is modified
until loading completes. This method provides a simple substitute for the conventional
"if cached, return; otherwise create, cache and return" pattern.
Warning: defaultValue must not evaluate to null.
Like getWithDefault, but useful when defaultValue is expensive to compute.
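Why a separate variant for expensive defaults matters can be shown with a by-name parameter, which Scala evaluates lazily: the costly computation runs only on a cache miss. The helper below is a sketch of that semantics, not the library's actual signature.

```scala
import scala.collection.concurrent.TrieMap

val cache = TrieMap.empty[String, Int]
var computations = 0

def expensive(): Int = { computations += 1; 42 }

// `default` is by-name, so it is NOT evaluated when the key is cached.
def getWithLazyDefault(key: String)(default: => Int): Int =
  cache.getOrElseUpdate(key, default)

getWithLazyDefault("k")(expensive())
getWithLazyDefault("k")(expensive())
assert(computations == 1) // second call hit the cache; no recomputation
```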
Associates value with key in this cache. If the cache previously contained a value
associated with key, the old value is atomically replaced by value.
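The replace-on-put semantics can be sketched with a concurrent map as the backing store:

```scala
import scala.collection.concurrent.TrieMap

val cache = TrieMap.empty[String, Int]
cache.put("k", 1)
cache.put("k", 2) // the old value is replaced by the new one

assert(cache.get("k") == Some(2))
```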
Like put, but useful when value is expensive to compute.
Like put, but also returns value.
Like putGet, but useful when value is expensive to compute.
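Returning the stored value lets the call sit in expression position. A hypothetical helper mirroring the documented putGet semantics:

```scala
import scala.collection.concurrent.TrieMap

val cache = TrieMap.empty[String, Int]

// Store the value, then hand it straight back to the caller.
def putGet(key: String, value: Int): Int = {
  cache.put(key, value)
  value
}

val stored = putGet("k", 5) // usable inline, e.g. as a return value
assert(stored == 5)
assert(cache.get("k") == Some(5))
```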
Removes key from the underlying cache.
If 0, cache entries never expire; otherwise each entry is automatically removed from the cache once the specified number of minutes has elapsed after the entry's creation, the most recent replacement of its value, or its last access. Access time is reset by all cache read and write operations.
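The expiry rule can be sketched with a timestamped wrapper around each entry. This is a simplified model (the class and field names are hypothetical): 0 disables expiry, and reads refresh the access time as described above.

```scala
import scala.collection.concurrent.TrieMap

// One cached value plus the time it was last written or read.
final case class Entry[V](value: V, var lastTouched: Long)

class ExpiringCache[K, V](expireAfterMinutes: Long,
                          now: () => Long = () => System.currentTimeMillis()) {
  private val entries = TrieMap.empty[K, Entry[V]]
  private val ttlMillis = expireAfterMinutes * 60 * 1000

  def put(key: K, value: V): Unit =
    entries.put(key, Entry(value, now()))

  def get(key: K): Option[V] = entries.get(key).flatMap { e =>
    if (expireAfterMinutes != 0 && now() - e.lastTouched >= ttlMillis) {
      entries.remove(key); None // expired: drop the entry and report a miss
    } else {
      e.lastTouched = now()     // reads reset the access time
      Some(e.value)
    }
  }
}
```

Injecting the clock (`now`) makes the expiry behavior testable without real waiting.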
Features strong values that might expire.