
# Quickstart

## Installation

```shell
pip install atomic-lru
```

(or the equivalent for your package manager)

## High-level API (with automatic serialization/deserialization)

The main use case is caching your data: you can store any type of value, and it will be automatically serialized to bytes.
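To make "serialized to bytes" concrete, here is a minimal sketch of such a round trip using Python's standard `pickle` module. This is only an illustration chosen for the example; atomic_lru's actual serialization mechanism is internal to the library:

```python
import pickle

# Any picklable value can be turned into bytes and back.
value = {"name": "Alice", "age": 30}

data = pickle.dumps(value)     # serialize to bytes
assert isinstance(data, bytes)

restored = pickle.loads(data)  # deserialize back to a Python object
assert restored == value       # equal content...
assert restored is not value   # ...but a distinct object
```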

```python
from atomic_lru import CACHE_MISS, Cache

# Create a Cache instance with a size limit of 1 MB.
# (This object is thread-safe, so you can use it from multiple threads.)
cache = Cache(size_limit_in_bytes=1_000_000, default_ttl=3600)

# Store something (a dictionary here) in the cache with a custom TTL
cache.set(key="user:123", value={"name": "Alice", "age": 30}, ttl=60)

# ...

# Retrieve it
user = cache.get(key="user:123")

if user is not CACHE_MISS:
    # cache hit
    print(user["name"])

# Always close to stop the background expiration thread
cache.close()
```
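Note that `get` returns the `CACHE_MISS` sentinel rather than `None` on a miss, so that `None` itself remains a valid value to cache. The sentinel pattern can be sketched in plain Python (a standalone illustration with hypothetical names, not atomic_lru's internals):

```python
# A unique sentinel object: no stored value can ever be identical to it.
CACHE_MISS = object()

store = {}
store["key"] = None  # None is a legitimate cached value

# dict.get's default lets us tell "stored None" apart from "absent".
hit = store.get("key", CACHE_MISS)
miss = store.get("other", CACHE_MISS)

assert hit is None         # cache hit; the value just happens to be None
assert miss is CACHE_MISS  # genuine cache miss
```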

## Low-level API (without serialization/deserialization)

You can also use the library at a lower level to store any type of value without serialization. In that case you lose the size_limit_in_bytes feature, but you keep the max_items feature.

```python
from atomic_lru import CACHE_MISS, Storage


class ExpensiveObject:
    """An expensive object that is not serializable."""

    pass


# Create a Storage instance to store ExpensiveObject instances.
# (This object is thread-safe, so you can use it from multiple threads.)
storage = Storage[ExpensiveObject](max_items=100, default_ttl=3600)

# Create and store an ExpensiveObject instance
value = ExpensiveObject()
storage.set("key1", value, ttl=60)

# ...

# Retrieve it
obj = storage.get("key1")

if obj is not CACHE_MISS:
    # cache hit
    assert isinstance(obj, ExpensiveObject)
    assert obj is value  # the very same object instance, not a copy

# Always close to stop the background expiration thread
storage.close()
```
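For intuition, the `max_items` and TTL semantics shown above can be sketched with a plain `OrderedDict`. This is only an illustration of LRU-with-TTL behavior under stated assumptions, not atomic_lru's implementation (which is additionally thread-safe and expires entries in a background thread):

```python
import time
from collections import OrderedDict


class TinyLRU:
    """Minimal single-threaded LRU store with per-item TTL (illustrative)."""

    def __init__(self, max_items, default_ttl):
        self.max_items = max_items
        self.default_ttl = default_ttl
        self._items = OrderedDict()  # key -> (value, expires_at)

    def set(self, key, value, ttl=None):
        ttl = self.default_ttl if ttl is None else ttl
        self._items[key] = (value, time.monotonic() + ttl)
        self._items.move_to_end(key)         # mark as most recently used
        if len(self._items) > self.max_items:
            self._items.popitem(last=False)  # evict least recently used

    def get(self, key):
        item = self._items.get(key)
        if item is None:
            return None  # (atomic_lru returns CACHE_MISS instead)
        value, expires_at = item
        if time.monotonic() >= expires_at:
            del self._items[key]             # lazily expire on read
            return None
        self._items.move_to_end(key)         # refresh recency on hit
        return value


lru = TinyLRU(max_items=2, default_ttl=60)
lru.set("a", 1)
lru.set("b", 2)
lru.get("a")     # "a" becomes the most recently used entry
lru.set("c", 3)  # capacity exceeded: "b" (least recently used) is evicted
assert lru.get("b") is None
assert lru.get("a") == 1
assert lru.get("c") == 3
```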