cached/
lib.rs

/*!
[![Build Status](https://github.com/jaemk/cached/actions/workflows/build.yml/badge.svg)](https://github.com/jaemk/cached/actions/workflows/build.yml)
[![crates.io](https://img.shields.io/crates/v/cached.svg)](https://crates.io/crates/cached)
[![docs](https://docs.rs/cached/badge.svg)](https://docs.rs/cached)

> Caching structures and simplified function memoization

`cached` provides implementations of several caching structures as well as handy macros
for defining memoized functions.

Memoized functions defined using the [`#[cached]`](proc_macro::cached)/[`#[once]`](proc_macro::once)/[`#[io_cached]`](proc_macro::io_cached)/[`cached!`](crate::macros) macros are thread-safe, with the backing
function-cache wrapped in a mutex/rwlock, or externally synchronized in the case of `#[io_cached]`.
By default, the function-cache is **not** locked for the duration of the function's execution, so initial (on an empty cache)
concurrent calls of long-running functions with the same arguments will each execute fully and each overwrite
the memoized value as they complete. This mirrors the behavior of Python's `functools.lru_cache`. To synchronize the execution and caching
of un-cached arguments, specify `#[cached(sync_writes = "default")]` / `#[once(sync_writes = true)]` (not supported by `#[io_cached]`).

- See the [`cached::stores` docs](https://docs.rs/cached/latest/cached/stores/index.html) for the available cache stores.
- See [`proc_macro`](https://docs.rs/cached/latest/cached/proc_macro/index.html) for more procedural macro examples.
- See [`macros`](https://docs.rs/cached/latest/cached/macros/index.html) for more declarative macro examples.

**Features**

- `default`: Include the `proc_macro` and `ahash` features
- `proc_macro`: Include proc macros
- `ahash`: Enable the optional `ahash` hasher as the default hashing algorithm
- `async`: Include support for async functions and async cache stores
- `async_tokio_rt_multi_thread`: Enable `tokio`'s optional `rt-multi-thread` feature
- `redis_store`: Include the Redis cache store
- `redis_async_std`: Include async Redis support using `async-std` and `async-std` tls support; implies `redis_store` and `async`
- `redis_tokio`: Include async Redis support using `tokio` and `tokio` tls support; implies `redis_store` and `async`
- `redis_connection_manager`: Enable the optional `connection-manager` feature of `redis`. Any async Redis caches created
                              will use a connection manager instead of a `MultiplexedConnection`
- `redis_ahash`: Enable the optional `ahash` feature of `redis`
- `disk_store`: Include the disk cache store
- `wasm`: Enable WASM support. Note that this feature is incompatible with `tokio`'s multi-thread
   runtime (`async_tokio_rt_multi_thread`) and all Redis features (`redis_store`, `redis_async_std`, `redis_tokio`, `redis_ahash`)
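
As a sketch, features are selected from `Cargo.toml` as usual (the feature combination below is illustrative, not a recommendation):

```toml
[dependencies]
# `default` already pulls in `proc_macro` and `ahash`;
# add extras such as `async` or `disk_store` as needed.
cached = { version = "*", features = ["async", "disk_store"] }
```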

The procedural macros (`#[cached]`, `#[once]`, `#[io_cached]`) offer more features, including async support.
See the [`proc_macro`](crate::proc_macro) and [`macros`](crate::macros) modules for more samples, and the
[`examples`](https://github.com/jaemk/cached/tree/master/examples) directory for runnable snippets.

Any custom cache that implements `cached::Cached`/`cached::CachedAsync` can be used with the `#[cached]`/`#[once]`/`cached!` macros in place of the built-ins.
Any custom cache that implements `cached::IOCached`/`cached::IOCachedAsync` can be used with the `#[io_cached]` macro.

----

The basic usage looks like:

```rust,no_run
use cached::proc_macro::cached;

/// Defines a function named `fib` that uses a cache implicitly named `FIB`.
/// By default, the cache will be the function's name in all caps.
/// The following line is equivalent to #[cached(name = "FIB", unbound)]
#[cached]
fn fib(n: u64) -> u64 {
    if n == 0 || n == 1 { return n }
    fib(n-1) + fib(n-2)
}
# pub fn main() { }
```

----

```rust,no_run
use std::thread::sleep;
use std::time::Duration;
use cached::proc_macro::cached;
use cached::SizedCache;

/// Use an explicit cache-type with a custom creation block and custom cache-key generating block
#[cached(
    ty = "SizedCache<String, usize>",
    create = "{ SizedCache::with_size(100) }",
    convert = r#"{ format!("{}{}", a, b) }"#
)]
fn keyed(a: &str, b: &str) -> usize {
    let size = a.len() + b.len();
    sleep(Duration::new(size as u64, 0));
    size
}
# pub fn main() { }
```

----

```rust,no_run
use cached::proc_macro::once;

/// Only cache the initial function call.
/// Function will be re-executed after the cache
/// expires (according to `time` seconds).
/// When there is no (or an expired) cached value, concurrent calls
/// will synchronize (`sync_writes`) so the function
/// is only executed once.
#[once(time = 10, option = true, sync_writes = true)]
fn keyed(a: String) -> Option<usize> {
    if a == "a" {
        Some(a.len())
    } else {
        None
    }
}
# pub fn main() { }
```

----

```compile_fail
use cached::proc_macro::cached;

/// Cannot use `sync_writes` and `result_fallback` together
#[cached(
    result = true,
    time = 1,
    sync_writes = "default",
    result_fallback = true
)]
fn doesnt_compile() -> Result<String, ()> {
    Ok("a".to_string())
}
```

----

```rust,no_run,ignore
use cached::proc_macro::io_cached;
use cached::AsyncRedisCache;
use thiserror::Error;

#[derive(Error, Debug, PartialEq, Clone)]
enum ExampleError {
    #[error("error with redis cache `{0}`")]
    RedisError(String),
}

/// Cache the results of an async function in redis. Cache
/// keys will be prefixed with `cached_redis_prefix`.
/// A `map_error` closure must be specified to convert any
/// redis cache errors into the same type of error returned
/// by your function. All `io_cached` functions must return `Result`s.
#[io_cached(
    map_error = r##"|e| ExampleError::RedisError(format!("{:?}", e))"##,
    ty = "AsyncRedisCache<u64, String>",
    create = r##" {
        AsyncRedisCache::new("cached_redis_prefix", 1)
            .set_refresh(true)
            .build()
            .await
            .expect("error building example redis cache")
    } "##
)]
async fn async_cached_sleep_secs(secs: u64) -> Result<String, ExampleError> {
    std::thread::sleep(std::time::Duration::from_secs(secs));
    Ok(secs.to_string())
}
```

----

```rust,no_run,ignore
use cached::proc_macro::io_cached;
use cached::DiskCache;
use thiserror::Error;

#[derive(Error, Debug, PartialEq, Clone)]
enum ExampleError {
    #[error("error with disk cache `{0}`")]
    DiskError(String),
}

/// Cache the results of a function on disk.
/// Cache files will be stored under the system cache dir
/// unless otherwise specified with `disk_dir` or the `create` argument.
/// A `map_error` closure must be specified to convert any
/// disk cache errors into the same type of error returned
/// by your function. All `io_cached` functions must return `Result`s.
#[io_cached(
    map_error = r##"|e| ExampleError::DiskError(format!("{:?}", e))"##,
    disk = true
)]
fn cached_sleep_secs(secs: u64) -> Result<String, ExampleError> {
    std::thread::sleep(std::time::Duration::from_secs(secs));
    Ok(secs.to_string())
}
```

Functions defined via macros will have their results cached using the
function's arguments as a key, a `convert` expression specified on a procedural macro,
or a `Key` block specified on a `cached_key!` declarative macro.

When a macro-defined function is called, the function's cache is first checked for an already
computed (and still valid) value before evaluating the function body.

Due to the requirements of storing arguments and return values in a global cache:

- Function return types:
  - For all store types except Redis, must be owned and implement `Clone`
  - For the Redis store type, must be owned and implement `serde::Serialize + serde::DeserializeOwned`
- Function arguments:
  - For all store types except Redis, must either be owned and implement `Hash + Eq + Clone`, or
    the `cached_key!` macro must be used with a `Key` block specifying key construction, or
    a `convert` expression must be specified on a procedural macro to describe how to construct a key
    of a `Hash + Eq + Clone` type.
  - For the Redis store type, must either be owned and implement `Display`, or a `cached_key!` `Key` block
    or a procedural macro `convert` expression must be used to specify how to construct a key of a `Display` type.
- Arguments and return values will be `cloned` in the process of insertion and retrieval, except for Redis,
  where arguments are formatted into `Strings` and values are de/serialized.
- Macro-defined functions should not be used to produce side-effectual results!
- Macro-defined functions cannot live directly under `impl` blocks since macros expand to a
  `once_cell` initialization and one or more function definitions.
- Macro-defined functions cannot accept `Self` types as a parameter.

*/

#![cfg_attr(docsrs, feature(doc_cfg))]

#[doc(hidden)]
pub extern crate once_cell;

#[cfg(feature = "proc_macro")]
#[cfg_attr(docsrs, doc(cfg(feature = "proc_macro")))]
pub use proc_macro::Return;
#[cfg(any(feature = "redis_async_std", feature = "redis_tokio"))]
#[cfg_attr(
    docsrs,
    doc(cfg(any(feature = "redis_async_std", feature = "redis_tokio")))
)]
pub use stores::AsyncRedisCache;
pub use stores::{
    CanExpire, ExpiringValueCache, SizedCache, TimedCache, TimedSizedCache, UnboundCache,
};
#[cfg(feature = "disk_store")]
#[cfg_attr(docsrs, doc(cfg(feature = "disk_store")))]
pub use stores::{DiskCache, DiskCacheError};
#[cfg(feature = "redis_store")]
#[cfg_attr(docsrs, doc(cfg(feature = "redis_store")))]
pub use stores::{RedisCache, RedisCacheError};
#[cfg(feature = "async")]
#[cfg_attr(docsrs, doc(cfg(feature = "async")))]
use {async_trait::async_trait, futures::Future};

mod lru_list;
pub mod macros;
#[cfg(feature = "proc_macro")]
pub mod proc_macro;
pub mod stores;
#[doc(hidden)]
pub use web_time;

#[cfg(feature = "async")]
#[doc(hidden)]
pub mod async_sync {
    pub use tokio::sync::Mutex;
    pub use tokio::sync::OnceCell;
    pub use tokio::sync::RwLock;
}

/// Cache operations
///
/// ```rust
/// use cached::{Cached, UnboundCache};
///
/// let mut cache: UnboundCache<String, String> = UnboundCache::new();
///
/// // When writing, keys and values are owned:
/// cache.cache_set("key".to_string(), "owned value".to_string());
///
/// // When reading, keys are only borrowed for lookup:
/// let borrowed_cache_value = cache.cache_get("key");
///
/// assert_eq!(borrowed_cache_value, Some(&"owned value".to_string()))
/// ```
pub trait Cached<K, V> {
    /// Attempt to retrieve a cached value
    ///
    /// ```rust
    /// # use cached::{Cached, UnboundCache};
    /// # let mut cache: UnboundCache<String, String> = UnboundCache::new();
    /// # cache.cache_set("key".to_string(), "owned value".to_string());
    /// // You can use borrowed data, or the data's borrowed type:
    /// let borrow_lookup_1 = cache.cache_get("key")
    ///     .map(String::clone);
    /// let borrow_lookup_2 = cache.cache_get(&"key".to_string())
    ///     .map(String::clone); // copy the values for test asserts
    ///
    /// # assert_eq!(borrow_lookup_1, borrow_lookup_2);
    /// ```
    fn cache_get<Q>(&mut self, k: &Q) -> Option<&V>
    where
        K: std::borrow::Borrow<Q>,
        Q: std::hash::Hash + Eq + ?Sized;

    /// Attempt to retrieve a cached value with mutable access
    ///
    /// ```rust
    /// # use cached::{Cached, UnboundCache};
    /// # let mut cache: UnboundCache<String, String> = UnboundCache::new();
    /// # cache.cache_set("key".to_string(), "owned value".to_string());
    /// // You can use borrowed data, or the data's borrowed type:
    /// let borrow_lookup_1 = cache.cache_get_mut("key")
    ///     .map(|value| value.clone());
    /// let borrow_lookup_2 = cache.cache_get_mut(&"key".to_string())
    ///     .map(|value| value.clone()); // copy the values for test asserts
    ///
    /// # assert_eq!(borrow_lookup_1, borrow_lookup_2);
    /// ```
    fn cache_get_mut<Q>(&mut self, k: &Q) -> Option<&mut V>
    where
        K: std::borrow::Borrow<Q>,
        Q: std::hash::Hash + Eq + ?Sized;

    /// Insert a key, value pair and return the previous value
    fn cache_set(&mut self, k: K, v: V) -> Option<V>;

    /// Get or insert a key, value pair
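    ///
    /// A sketch of typical usage:
    ///
    /// ```rust
    /// # use cached::{Cached, UnboundCache};
    /// # let mut cache: UnboundCache<String, usize> = UnboundCache::new();
    /// let value = cache.cache_get_or_set_with("key".to_string(), || 42);
    /// assert_eq!(*value, 42);
    /// // The closure is not run again for an existing key:
    /// let value = cache.cache_get_or_set_with("key".to_string(), || 0);
    /// assert_eq!(*value, 42);
    /// ```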
    fn cache_get_or_set_with<F: FnOnce() -> V>(&mut self, k: K, f: F) -> &mut V;

    /// Get or insert a key, value pair with error handling
    fn cache_try_get_or_set_with<F: FnOnce() -> Result<V, E>, E>(
        &mut self,
        k: K,
        f: F,
    ) -> Result<&mut V, E>;

    /// Remove a cached value
    ///
    /// ```rust
    /// # use cached::{Cached, UnboundCache};
    /// # let mut cache: UnboundCache<String, String> = UnboundCache::new();
    /// # cache.cache_set("key1".to_string(), "owned value 1".to_string());
    /// # cache.cache_set("key2".to_string(), "owned value 2".to_string());
    /// // You can use borrowed data, or the data's borrowed type:
    /// let remove_1 = cache.cache_remove("key1");
    /// let remove_2 = cache.cache_remove(&"key2".to_string());
    ///
    /// # assert_eq!(remove_1, Some("owned value 1".to_string()));
    /// # assert_eq!(remove_2, Some("owned value 2".to_string()));
    /// ```
    fn cache_remove<Q>(&mut self, k: &Q) -> Option<V>
    where
        K: std::borrow::Borrow<Q>,
        Q: std::hash::Hash + Eq + ?Sized;

    /// Remove all cached values. Keeps the allocated memory for reuse.
    fn cache_clear(&mut self);

    /// Remove all cached values. Frees the allocated memory and returns to the initial state.
    fn cache_reset(&mut self);

    /// Reset the misses/hits counters
    fn cache_reset_metrics(&mut self) {}

    /// Return the current cache size (number of elements)
    fn cache_size(&self) -> usize;

    /// Return the number of times a cached value was successfully retrieved
    fn cache_hits(&self) -> Option<u64> {
        None
    }

    /// Return the number of times a cached value was unable to be retrieved
    fn cache_misses(&self) -> Option<u64> {
        None
    }

    /// Return the cache capacity
    fn cache_capacity(&self) -> Option<usize> {
        None
    }

    /// Return the lifespan of cached values (time to eviction)
    fn cache_lifespan(&self) -> Option<u64> {
        None
    }

    /// Set the lifespan of cached values, returning the old value
    fn cache_set_lifespan(&mut self, _seconds: u64) -> Option<u64> {
        None
    }

    /// Remove the lifespan for cached values, returning the old value.
    ///
    /// For cache implementations that don't support retaining values indefinitely, this method is
    /// a no-op.
    fn cache_unset_lifespan(&mut self) -> Option<u64> {
        None
    }
}

/// Extra cache operations for types that implement `Clone`
pub trait CloneCached<K, V> {
    /// Attempt to retrieve a cached value and indicate whether that value was evicted.
    fn cache_get_expired<Q>(&mut self, _key: &Q) -> (Option<V>, bool)
    where
        K: std::borrow::Borrow<Q>,
        Q: std::hash::Hash + Eq + ?Sized;
}

#[cfg(feature = "async")]
#[cfg_attr(docsrs, doc(cfg(feature = "async")))]
#[async_trait]
pub trait CachedAsync<K, V> {
    /// Get the cached value for `k`, or insert the result of awaiting `f()` if it is missing
    async fn get_or_set_with<F, Fut>(&mut self, k: K, f: F) -> &mut V
    where
        V: Send,
        F: FnOnce() -> Fut + Send,
        Fut: Future<Output = V> + Send;

    /// Get the cached value for `k`, or try to insert the result of awaiting `f()`, propagating any error
    async fn try_get_or_set_with<F, Fut, E>(&mut self, k: K, f: F) -> Result<&mut V, E>
    where
        V: Send,
        F: FnOnce() -> Fut + Send,
        Fut: Future<Output = Result<V, E>> + Send;
}

/// Cache operations on an io-connected store
pub trait IOCached<K, V> {
    type Error;

    /// Attempt to retrieve a cached value
    ///
    /// # Errors
    ///
    /// Should return `Self::Error` if the operation fails
    fn cache_get(&self, k: &K) -> Result<Option<V>, Self::Error>;

    /// Insert a key, value pair and return the previous value
    ///
    /// # Errors
    ///
    /// Should return `Self::Error` if the operation fails
    fn cache_set(&self, k: K, v: V) -> Result<Option<V>, Self::Error>;

    /// Remove a cached value
    ///
    /// # Errors
    ///
    /// Should return `Self::Error` if the operation fails
    fn cache_remove(&self, k: &K) -> Result<Option<V>, Self::Error>;

    /// Set the flag controlling whether cache hits refresh the TTL of cached values, returning the old flag value
    fn cache_set_refresh(&mut self, refresh: bool) -> bool;

    /// Return the lifespan of cached values (time to eviction)
    fn cache_lifespan(&self) -> Option<u64> {
        None
    }

    /// Set the lifespan of cached values, returning the old value
    fn cache_set_lifespan(&mut self, _seconds: u64) -> Option<u64> {
        None
    }

    /// Remove the lifespan for cached values, returning the old value.
    ///
    /// For cache implementations that don't support retaining values indefinitely, this method is
    /// a no-op.
    fn cache_unset_lifespan(&mut self) -> Option<u64> {
        None
    }
}

/// Cache operations on an io-connected store, async version
#[cfg(feature = "async")]
#[cfg_attr(docsrs, doc(cfg(feature = "async")))]
#[async_trait]
pub trait IOCachedAsync<K, V> {
    type Error;

    /// Attempt to retrieve a cached value
    async fn cache_get(&self, k: &K) -> Result<Option<V>, Self::Error>;

    /// Insert a key, value pair and return the previous value
    async fn cache_set(&self, k: K, v: V) -> Result<Option<V>, Self::Error>;

    /// Remove a cached value
    async fn cache_remove(&self, k: &K) -> Result<Option<V>, Self::Error>;

    /// Set the flag controlling whether cache hits refresh the TTL of cached values, returning the old flag value
    fn cache_set_refresh(&mut self, refresh: bool) -> bool;

    /// Return the lifespan of cached values (time to eviction)
    fn cache_lifespan(&self) -> Option<u64> {
        None
    }

    /// Set the lifespan of cached values, returning the old value
    fn cache_set_lifespan(&mut self, _seconds: u64) -> Option<u64> {
        None
    }

    /// Remove the lifespan for cached values, returning the old value.
    ///
    /// For cache implementations that don't support retaining values indefinitely, this method is
    /// a no-op.
    fn cache_unset_lifespan(&mut self) -> Option<u64> {
        None
    }
}