Put
The cache functions as a single repository for all application entities. Data goes in through a put() operation and comes out with a get() call.
When data goes in, the cache analyzes each entity and tracks all of its dependencies. For this to happen, each dependent entity must have a unique identifier. Children without a unique id are still cached, but they are not tracked for uniqueness. When you put a changed dependency, the cache updates all references to it, so you don't have to track many versions of the same object. There is only One in the cache.
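The dependency tracking described above can be sketched as a toy cache. This is an illustrative stand-in, not One's actual implementation: the uid property name and the flat (non-array, non-cyclic) entity shape are assumptions, and freezing is omitted here.

```javascript
// Illustrative sketch of uniqueness tracking (not One's real internals).
// Assumes entities carry a unique `uid` property; cycles and arrays of
// children are not handled in this sketch.
function createCache() {
  const byUid = new Map();   // uid -> latest entity version
  const parents = new Map(); // child uid -> Set of parent uids referencing it

  function track(entity) {
    for (const value of Object.values(entity)) {
      if (value && typeof value === "object" && "uid" in value) {
        byUid.set(value.uid, value);
        if (!parents.has(value.uid)) parents.set(value.uid, new Set());
        parents.get(value.uid).add(entity.uid);
        track(value); // walk the whole entity tree
      }
    }
  }

  return {
    put(entity) {
      byUid.set(entity.uid, entity);
      track(entity);
      // A changed dependency replaces the reference inside each parent,
      // so only one version of the entity lives in the cache.
      for (const parentUid of parents.get(entity.uid) || []) {
        const parent = byUid.get(parentUid);
        for (const [key, value] of Object.entries(parent)) {
          if (value && value.uid === entity.uid) parent[key] = entity;
        }
      }
    },
    get(uid) {
      return byUid.get(uid);
    },
  };
}

const cache = createCache();
const author = { uid: 2, name: "Ada" };
cache.put({ uid: 1, title: "Post", author });
cache.put({ uid: 2, name: "Ada Lovelace" }); // put a changed dependency
cache.get(1).author === cache.get(2); // true: the parent's reference was updated
```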
Performance
The cache freezes all data, making it effectively immutable. This means you cannot change an entity after it has entered the cache. This is true even if you kept a reference to the entity from before it entered the cache. To make an edit you must get a specific editable (link needed) version of the entity from the cache. Many versions of the same entity are available through the time travel mechanism.
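The freezing behavior can be illustrated with a recursive Object.freeze. This is a sketch of the concept, not One's implementation, and it assumes plain JSON-like entities:

```javascript
// Recursively freeze an entity and everything it references,
// making the whole tree effectively immutable.
function deepFreeze(obj) {
  for (const value of Object.values(obj)) {
    if (value && typeof value === "object" && !Object.isFrozen(value)) {
      deepFreeze(value);
    }
  }
  return Object.freeze(obj);
}

const entity = deepFreeze({ uid: 1, profile: { city: "Paris" } });
try {
  entity.profile.city = "Rome"; // rejected: the whole tree is frozen
} catch (e) {
  // assignment throws in strict mode, is silently ignored otherwise
}
```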
Advantages:
- The cache always returns the same exact, unmodified version of an entity no matter where in the app it is retrieved.
- Get operations are very fast (a simple key look-up).
- No risk of editing a specific instance by mistake, which helps enforce writing pure functions.
- Fast identity checks. React in particular makes extensive use of this method to decide when to render. Since the entity is immutable, the cache guarantees that all references to it point to the same exact object in memory, and React won't re-render unless the memory pointer changes.
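A minimal illustration of why identity checks suffice once entities are immutable. The hasChanged helper is hypothetical, not part of One's API:

```javascript
// With immutable cached entities, change detection is a reference
// comparison, no deep equality needed. This is the same shallow check
// React.memo and shouldComponentUpdate rely on.
const hasChanged = (prev, next) => prev !== next;

const v1 = Object.freeze({ uid: 1, name: "Ada" });
const v2 = Object.freeze({ uid: 1, name: "Ada Lovelace" });

hasChanged(v1, v1); // false: same reference, skip the re-render
hasChanged(v1, v2); // true: a new version was put, re-render
```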
Time travel
Read vs write
There is a performance penalty in analyzing each entity to track its dependents: walking each object's tree takes linear time. The cache offers two ways to mitigate this: paying the write penalty up front to optimize reads, and queuing.
Read optimized
Most applications perform far more read operations than writes. Because of this the cache is read optimized: all data analysis and tracking is done when data is written into the cache, so once written, all get() operations are very fast (a key lookup). Most write operations are edits and happen a few entities at a time. Moreover, if the data is relatively flat (no deep dependencies), the write cost is even less of an issue.
Queuing
Every once in a while you may need to get a large data set into the cache (although you should use paging to reduce the size of each batch). If a UI operation is waiting for this to complete, the lag can become visible (>100ms). To bypass the write analysis, One can queue() put operations. The queue defers the write analysis until it commits. The same get() operation retrieves queued items, but their dependents are not yet unique. This is fine for read-only display of data; commit the queue before an edit operation to take advantage of object uniqueness.
NOTE: If an item exists both in the cache and on the queue, the cache prevails: a get(uid) operation returns the cached item. To specifically get the queued item, use getQueued(uid), or do a commit() to push the queue into the cache before getting the item.
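The queue semantics above can be sketched as follows. This is a stand-in illustrating the documented behavior of queue(), get(), getQueued() and commit(), not One's implementation; the uid key is an assumption:

```javascript
// Sketch of queue()/commit()/getQueued() semantics. Write analysis is
// deferred: queued items are stored as-is and only processed on commit().
function createQueuedCache() {
  const store = new Map();  // committed, analyzed items
  const queued = new Map(); // pending items, no uniqueness tracking yet

  return {
    put(entity) { store.set(entity.uid, Object.freeze(entity)); },
    queue(entity) { queued.set(entity.uid, entity); },
    // The cache prevails: a committed item shadows a queued one.
    get(uid) { return store.has(uid) ? store.get(uid) : queued.get(uid); },
    getQueued(uid) { return queued.get(uid); },
    commit() {
      for (const entity of queued.values()) {
        store.set(entity.uid, Object.freeze(entity)); // analysis would happen here
      }
      queued.clear();
    },
  };
}

const cache = createQueuedCache();
cache.put({ uid: 1, name: "cached" });
cache.queue({ uid: 1, name: "queued" });
cache.get(1).name;       // "cached": the cache prevails
cache.getQueued(1).name; // "queued"
cache.commit();
cache.get(1).name;       // "queued": the queue was pushed into the cache
```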
API
put(entityOrArray, threadIds)
Puts an item into the cache and updates all existing references to match the entities present in the item's entity tree.
Parameters
- entityOrArray: Object | Array<Object>, object or array of objects to be put into the cache
- threadIds: String | Array<String>, thread id or array of thread ids for the put items to be assigned to
Returns
- history state: Object
Example
Works with arrays
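Both call shapes can be sketched against a stand-in cache following the documented signature. The uid property name and the returned history-state shape are assumptions, not One's documented output:

```javascript
// Stand-in for the documented signature put(entityOrArray, threadIds).
// The `uid` key and the placeholder return value are assumptions.
const store = new Map();
function put(entityOrArray, threadIds) {
  const items = Array.isArray(entityOrArray) ? entityOrArray : [entityOrArray];
  for (const item of items) store.set(item.uid, Object.freeze(item));
  return { itemCount: store.size }; // placeholder for the history state
}
const get = (uid) => store.get(uid);

put({ uid: 1, name: "Ada" }, "thread-1");                // single entity
put([{ uid: 2 }, { uid: 3 }], ["thread-1", "thread-2"]); // array of entities
get(1).name; // "Ada"
```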
No merge
For simplicity, and because data can have complex shapes, One does not offer a merge operation. For example, it is difficult to merge two arrays of arrays without a unique id for each array. If you need to merge: get an editable copy of the item from the cache, do the merge, and put it back into the cache.
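The suggested merge workflow can be sketched as follows. getEdit() is a hypothetical name standing in for One's editable-get operation mentioned earlier; the real method name and the uid key are assumptions:

```javascript
// Merge pattern sketch: read an editable copy, merge, put it back.
// getEdit() is a hypothetical stand-in, not One's documented API.
const store = new Map();
const put = (entity) => store.set(entity.uid, Object.freeze(entity));
const getEdit = (uid) => ({ ...store.get(uid) }); // shallow editable copy

put({ uid: 1, tags: ["a", "b"] });

const editable = getEdit(1);
editable.tags = [...editable.tags, "c"]; // do the merge yourself
put(editable);                           // put it back into the cache

store.get(1).tags; // ["a", "b", "c"]
```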