Class model.persistence.AbstractCache

abstract class AbstractCache[Key, Value] extends AnyRef

A cache whose values are held by strong references and may expire after a timeout.

Source
Caches.scala
Linear Supertypes
AnyRef, Any

Instance Constructors

  1. new AbstractCache(concurrencyLevel: Int = 4, timeoutMinutes: Int = 5)(implicit executionContext: CacheExecutionContext)


    concurrencyLevel

    Guides the allowed concurrency among update operations. Used as a hint for internal sizing. The table is internally partitioned to try to permit the indicated number of concurrent updates without contention. Because assignment of entries to these partitions is not necessarily uniform, the actual concurrency observed may vary. Ideally, you should choose a value to accommodate as many threads as will ever concurrently modify the table. Using a significantly higher value than you need can waste space and time, and a significantly lower value can lead to thread contention. But overestimates and underestimates within an order of magnitude do not usually have much noticeable impact. A value of one permits only one thread to modify the cache at a time, but since read operations and cache loading computations can proceed concurrently, this still yields higher concurrency than full synchronization. Defaults to 4.

    Note: The default may change in the future. If you care about this value, you should always choose it explicitly.

    The current implementation uses the concurrency level to create a fixed number of hashtable segments, each governed by its own write lock. The segment lock is taken once for each explicit write, and twice for each cache loading computation (once prior to loading the new value, and once after loading completes). Much internal cache management is performed at the segment granularity. For example, access queues and write queues are kept per segment when they are required by the selected eviction algorithm. As such, when writing unit tests it is not uncommon to specify concurrencyLevel(1) in order to achieve more deterministic eviction behavior.

    timeoutMinutes

    If 0, cache entries never expire; otherwise each entry is automatically removed from the cache once the given number of minutes has elapsed since the entry's creation, the most recent replacement of its value, or its last access. Access time is reset by all cache read and write operations. (See the sketch below.)
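
    A minimal sketch of how these two parameters presumably map onto Guava's builder, assuming the backing cache is configured with com.google.common.cache.CacheBuilder. The mapping and the helper name buildUnderlying are assumptions for illustration; the actual wiring lives in Caches.scala and is not shown on this page.

      import java.util.concurrent.TimeUnit
      import com.google.common.cache.{Cache, CacheBuilder}

      // Assumed correspondence (not confirmed by this page):
      //   concurrencyLevel -> CacheBuilder.concurrencyLevel
      //   timeoutMinutes   -> CacheBuilder.expireAfterAccess, skipped when 0 (never expire)
      def buildUnderlying(concurrencyLevel: Int, timeoutMinutes: Int): Cache[AnyRef, AnyRef] = {
        val builder = CacheBuilder.newBuilder().concurrencyLevel(concurrencyLevel)
        val configured =
          if (timeoutMinutes > 0) builder.expireAfterAccess(timeoutMinutes.toLong, TimeUnit.MINUTES)
          else builder
        configured.build[AnyRef, AnyRef]()
      }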

Abstract Value Members

  1. abstract def underlying: Cache[AnyRef, AnyRef]

    The underlying Google Guava Cache instance; a sketch of a possible concrete subclass follows.
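
    Purely illustrative sketch of a concrete subclass. The names SessionCache and Session, the import path of CacheExecutionContext, and the way underlying is built are assumptions, not part of this API.

      import java.util.concurrent.TimeUnit
      import com.google.common.cache.{Cache, CacheBuilder}
      import model.persistence.{AbstractCache, CacheExecutionContext}

      final case class Session(id: String)

      // Hypothetical subclass: the inherited concurrencyLevel and timeoutMinutes vals
      // configure the Guava cache returned by underlying.
      class SessionCache(implicit ec: CacheExecutionContext)
          extends AbstractCache[String, Session](concurrencyLevel = 1, timeoutMinutes = 10) {

        override val underlying: Cache[AnyRef, AnyRef] =
          CacheBuilder.newBuilder()
            .concurrencyLevel(concurrencyLevel)
            .expireAfterAccess(timeoutMinutes.toLong, TimeUnit.MINUTES)
            .build[AnyRef, AnyRef]()
      }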

Concrete Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  5. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  6. val concurrencyLevel: Int

    Guides the allowed concurrency among update operations.

  7. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  8. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  9. implicit val executionContext: CacheExecutionContext

  10. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  11. def get(key: Key): Option[Value]

    Returns Some(value associated with key in this cache), or None if there is no cached value for key.

    Annotations
    @inline()
  12. def getAll: List[Value]

    Annotations
    @inline()
  13. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  14. def getWithDefault(key: Key, defaultValue: ⇒ Value): Value

    Returns the value associated with key in this cache, obtaining that value from defaultValue if necessary. No observable state associated with this cache is modified until loading completes. This method provides a simple substitute for the conventional "if cached, return; otherwise create, cache and return" pattern. See the usage sketch at the end of this member list.

    Warning: defaultValue must not evaluate to null.

    Annotations
    @inline()
  15. def getWithDefaultAsync(key: Key, defaultValue: ⇒ Value): Future[Value]

    Like getWithDefault, but useful when defaultValue is expensive to compute.

    Annotations
    @inline()
  16. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  17. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  18. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  19. final def notify(): Unit

    Definition Classes
    AnyRef
  20. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  21. def put(key: Key, value: Value): AbstractCache[Key, Value]

    Associates value with key in this cache. If the cache previously contained a value associated with key, the old value is atomically replaced by value.

    Annotations
    @inline()
  22. def putAsync(key: Key, value: ⇒ Value): Future[AbstractCache[Key, Value]]

    Like put, but useful when value is expensive to compute.

    Annotations
    @inline()
  23. def putGet(key: Key, value: Value): AbstractCache[Key, Value]

    Like put, but also returns value.

    Annotations
    @inline()
  24. def putGetAsync(key: Key, value: ⇒ Value): Future[AbstractCache[Key, Value]]

    Like putGet, but useful when value is expensive to compute.

    Annotations
    @inline()
  25. def remove(key: Key): AbstractCache[Key, Value]

    Removes key from the underlying cache.

    Annotations
    @inline()
  26. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  27. val timeoutMinutes: Int

    If 0, cache entries never expire; otherwise each entry is automatically removed from the cache once the given number of minutes has elapsed since the entry's creation, the most recent replacement of its value, or its last access. Access time is reset by all cache read and write operations.

  28. def toString(): String

    Definition Classes
    AnyRef → Any
  29. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  30. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  31. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
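
    Hypothetical usage sketch, assuming the SessionCache subclass sketched under Abstract Value Members above and an implicit CacheExecutionContext obtained from application wiring (its construction is not specified on this page).

      import scala.concurrent.Future

      implicit val cacheEc: CacheExecutionContext = ???   // wire up per your application; not specified here
      val cache = new SessionCache

      cache.put("alice", Session("alice"))                 // associate, replacing any previous value
      val hit:  Option[Session] = cache.get("alice")       // Some(Session("alice")) while the entry is live
      val miss: Option[Session] = cache.get("bob")         // None

      // "if cached, return; otherwise create, cache and return"
      val bob: Session = cache.getWithDefault("bob", Session("bob"))

      // Async variants return a Future; presumably the by-name argument is evaluated
      // via the CacheExecutionContext rather than on the calling thread.
      val carol: Future[Session] = cache.getWithDefaultAsync("carol", Session("carol"))

      val everything: List[Session] = cache.getAll         // all currently cached values
      cache.remove("alice")                                // drop a single key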

Inherited from AnyRef

Inherited from Any
