API Reference¶
mincepy¶
-
class
mincepy.
Archive
[source]¶ An archive provides the persistent storage for the Historian. It is responsible for storing, searching and loading data records and their metadata.
-
class
MetaEntry
(obj_id, meta)¶ Create new instance of MetaEntry(obj_id, meta)
-
meta
¶ Alias for field number 1
-
obj_id
¶ Alias for field number 0
-
-
add_archive_listener
(listener: mincepy.archives.ArchiveListener)[source]¶ Add a listener to be notified of archive events
-
bulk_write
(ops: Sequence[mincepy.operations.Operation])[source]¶ Make a collection of write operations to the database
-
construct_archive_id
(value) → IdT[source]¶ If it’s possible, construct an archive value from the passed value. This is useful as a convenience to the user if, say, the archive id can be constructed from a string. Raise TypeError or ValueError if this is not possible for the given value.
-
count
(obj_id=None, type_id=None, created_by=None, copied_from=None, version=-1, state=None, snapshot_hash=None, meta=None, limit=0)[source]¶ Count the number of entries that match the given query
-
distinct
(key: str, filter: dict = None) → Iterator[T_co][source]¶ Get distinct values of the given record key
Parameters: - key – the key to find distinct values for, see DataRecord for possible keys
- filter – an optional filter to restrict the search to. Should be a dictionary that filters on entries in the DataRecord i.e. the kwargs that can be passed to find().
-
file_store
¶ Get the GridFS file bucket
-
find
(obj_id: Union[IdT, Iterable[IdT], Dict[KT, VT]] = None, type_id=None, created_by: Optional[IdT] = None, copied_from: Optional[IdT] = None, version: int = None, state=None, state_types=None, snapshot_hash=None, meta: dict = None, extras: dict = None, limit=0, sort=None, skip=0) → Iterator[mincepy.records.DataRecord][source]¶ Find records matching the given criteria
Parameters: - type_id – find records with the given type id
- created_by – find records with the given created by id
- copied_from – find records copied from the record with the given id
- version – restrict the search to this version, -1 for latest
- state – find objects with this state filter
- state_types – find objects with this state types filter
- snapshot_hash – find objects with this snapshot hash
- meta – find objects with this meta filter
- extras – the search criteria to apply on the data record extras
- limit – limit the results to this many records
- obj_id – an optional restriction on the object ids to search. This can be either: 1. a single object id, 2. an iterable of object ids, which is treated as {'$in': list(obj_ids)}, or 3. a general query filter to be applied to the object ids
- sort – sort the results by the given criteria
- skip – skip this many entries
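The three accepted forms of the obj_id argument can be sketched as a small normalisation step. This is a hedged illustration: the helper name below is hypothetical and not part of mincepy's API; only the `$in` form follows from the parameter description above.

```python
def normalise_obj_id_filter(obj_id):
    """Hypothetical helper showing the three obj_id forms described above."""
    if isinstance(obj_id, dict):
        return obj_id                     # 3. already a general query filter
    if isinstance(obj_id, (list, tuple, set)):
        return {'$in': list(obj_id)}      # 2. an iterable becomes an $in filter
    return obj_id                         # 1. a single object id, used as-is

print(normalise_obj_id_filter(['a1', 'b2']))  # {'$in': ['a1', 'b2']}
```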
-
get_obj_ref_graph
(*obj_ids, direction=1, max_dist: int = None) → networkx.classes.digraph.DiGraph[source]¶ Given one or more object ids the archive will supply the corresponding reference graph(s). Each graph starts at the given id and contains all object ids that it references, all object ids they reference, and so on.
-
get_snapshot_ids
(obj_id: IdT) → Sequence[mincepy.records.SnapshotId[~IdT][IdT]][source]¶ Returns a list of time ordered snapshot ids
-
get_snapshot_ref_graph
(*snapshot_ids, direction=1, max_dist: int = None) → networkx.classes.digraph.DiGraph[source]¶ Given one or more snapshot ids the archive will supply the corresponding reference graph(s). Each graph starts at the given id and contains all snapshots that it references, all snapshots they reference, and so on.
-
classmethod
get_types
() → Sequence[T_co][source]¶ This method allows the archive to return either types or type helpers that the historian should support. A common example is the type helper for the object id type.
-
history
(obj_id: IdT, idx_or_slice) → Union[mincepy.records.DataRecord, Sequence[mincepy.records.DataRecord]][source]¶ Load the snapshot records for a particular object; can return a single record or multiple records
-
load
(snapshot_id: mincepy.records.SnapshotId[~IdT][IdT]) → mincepy.records.DataRecord[source]¶ Load a snapshot of an object with the given reference
-
meta_create_index
(keys: Union[str, List[Tuple]], unique=False, where_exist=False)[source]¶ Create an index on the metadata. Takes either a single key or list of (key, direction) pairs
Parameters: - keys – the key or keys to create the index on
- unique – if True, create a uniqueness constraint on this index
- where_exist – if True the index only applies for documents where the key(s) exist
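As a sketch, an index specification of the kind meta_create_index accepts might be built like this. The direction constants mirror MongoDB's 1/-1 convention and are defined here for illustration, not imported from mincepy:

```python
# Assumed MongoDB-style direction values (cf. pymongo's constants).
ASCENDING, DESCENDING = 1, -1

# Either a single key...
single_key = 'reference'
# ...or a list of (key, direction) pairs for a compound index.
compound_keys = [('project', ASCENDING), ('year', DESCENDING)]

# Hypothetical call, per the signature above:
# archive.meta_create_index(compound_keys, unique=True, where_exist=True)
print(compound_keys)  # [('project', 1), ('year', -1)]
```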
-
meta_distinct
(key: str, filter: dict = None, obj_id: Union[IdT, Iterable[IdT], Mapping[KT, VT_co]] = None) → Iterator[T_co][source]¶ Yield distinct values found for ‘key’ within metadata documents, optionally matching a search filter.
The search can optionally be restricted to a set of passed object ids.
Parameters: - key – the document key to get distinct values for
- filter – a query filter for the search
- obj_id – an optional restriction on the object ids to search. This can be either: 1. a single object id, 2. an iterable of object ids, which is treated as {'$in': list(obj_ids)}, or 3. a general query filter to be applied to the object ids
-
meta_find
(filter: dict = None, obj_id: Union[IdT, Iterable[IdT], Mapping[KT, VT_co]] = None) → Iterator[mincepy.archives.MetaEntry][source]¶ Yield metadata satisfying the given criteria. The search can optionally be restricted to a set of passed object ids.
Parameters: - filter – a query filter for the search
- obj_id – an optional restriction on the object ids to search. This can be either: 1. a single object id, 2. an iterable of object ids, which is treated as {'$in': list(obj_ids)}, or 3. a general query filter to be applied to the object ids
-
meta_get_many
(obj_ids: Iterable[IdT]) → Dict[KT, VT][source]¶ Get the metadata for multiple objects. Returns a dictionary mapping the object id to the metadata dictionary
-
meta_set
(obj_id: IdT, meta: Optional[Mapping[KT, VT_co]])[source]¶ Set the metadata on the object with the corresponding id
-
meta_set_many
(metas: Mapping[IdT, Optional[Mapping[KT, VT_co]]])[source]¶ Set the metadata on multiple objects. This takes a mapping of the object id to the corresponding (optional) metadata dictionary
-
meta_update
(obj_id: IdT, meta: Mapping[KT, VT_co])[source]¶ Update the metadata on the object with the corresponding id
-
meta_update_many
(metas: Mapping[IdT, Mapping[KT, VT_co]])[source]¶ Update the metadata on multiple objects. This method expects to get a mapping of object id to the mapping to be used to update the metadata for that object
-
objects
¶ Access the objects collection
-
save_many
(data_records: Sequence[mincepy.records.DataRecord])[source]¶ Save many data records to the archive
-
snapshots
¶ Access the snapshots collection
-
-
class
mincepy.
BaseArchive
[source]¶ -
add_archive_listener
(listener: mincepy.archives.ArchiveListener)[source]¶ Add a listener to be notified of archive events
-
construct_archive_id
(value) → IdT[source]¶ If it’s possible, construct an archive value from the passed value. This is useful as a convenience to the user if, say, the archive id can be constructed from a string. Raise TypeError or ValueError if this is not possible for the given value.
-
history
(obj_id: IdT, idx_or_slice) → Union[mincepy.records.DataRecord, Sequence[mincepy.records.DataRecord]][source]¶ Load the snapshot records for a particular object; can return a single record or multiple records
-
meta_get_many
(obj_ids: Iterable[IdT]) → Dict[IdT, dict][source]¶ Get the metadata for multiple objects. Returns a dictionary mapping the object id to the metadata dictionary
-
meta_set_many
(metas: Mapping[IdT, Mapping[KT, VT_co]])[source]¶ Set the metadata on multiple objects. This takes a mapping of the object id to the corresponding (optional) metadata dictionary
-
-
class
mincepy.
ArchiveListener
[source]¶ Archive listener interface
-
on_bulk_write
(archive: mincepy.archives.Archive, ops: Sequence[mincepy.operations.Operation])[source]¶ Called when an archive is about to perform a sequence of write operations but has not performed them yet. The listener must not assume that the operations will be completed as there are a number of reasons why this process could be interrupted.
-
-
class
mincepy.
Saver
(historian)[source]¶ A depositor that knows how to save records to the archive
-
get_snapshot_id
(obj) → mincepy.records.SnapshotId[source]¶ Get a persistent reference for the given object
-
-
class
mincepy.
Loader
(historian)[source]¶ A loader that knows how to load objects from the archive
-
class
mincepy.
SnapshotLoader
(historian)[source]¶ Responsible for loading snapshots. This object should not be reused and only one external call to load should be made. This is because it keeps an internal cache.
-
class
mincepy.
LiveDepositor
(*args, **kwargs)[source]¶ Depositor with the strategy that all objects that get referenced should be saved
-
class
mincepy.
Migrator
(historian)[source]¶ A migrating depositor used to make migrations to database records
-
exception
mincepy.
ModificationError
[source]¶ Raised when a modification of the history encountered a problem
-
exception
mincepy.
ObjectDeleted
[source]¶ Raised when the user tries to interact with a deleted object
-
exception
mincepy.
VersionError
[source]¶ Indicates a version mismatch between the code and the database
-
exception
mincepy.
IntegrityError
[source]¶ Indicates an error that occurred because of an operation that would conflict with a database constraint
-
exception
mincepy.
ReferenceError
(msg, references: set)[source]¶ Raised when there is an operation that causes a problem with references; for example, if you try to delete an object that is referenced by another, this exception will be raised. The object ids being referenced will be found in .references.
-
exception
mincepy.
ConnectionError
[source]¶ Raised when there is an error in connecting to the backend
-
class
mincepy.
Historian
(archive: mincepy.archives.Archive, equators=())[source]¶ The historian acts as a go-between for your Python objects and the archive, which is a persistent store of their records. It will keep track of all live objects (i.e. those that have active references to them) that have been loaded and/or saved, as well as enabling the user to look up objects in the archive.
-
copy
(obj)[source]¶ Create a shallow copy of the object. Using this method allows the historian to inject information about where the object was copied from into the record if saved.
Deprecated since version 0.14.5: This will be removed in 0.16.0. Use mincepy.copy() instead
-
create_file
(filename: str = None, encoding: str = None) → mincepy.files.File[source]¶ Create a new file. The historian will supply a file type compatible with the archive in use.
-
current_transaction
() → Optional[mincepy.transactions.Transaction][source]¶ Get the current transaction if there is one, otherwise returns None
-
delete
(*obj_or_identifier, imperative=True) → mincepy.result_types.DeleteResult[source]¶ Delete objects.
Parameters: imperative – if True, this means that the caller explicitly expects this call to delete the passed objects and it should therefore raise if an object cannot be found or has been deleted already. If False, the function will ignore these cases and continue.
Raises: mincepy.NotFound – if the object cannot be found (potentially because it was already deleted)
-
find
(*filter, obj_type=None, obj_id=None, version: int = -1, state=None, meta: dict = None, sort=None, limit=0, skip=0) → mincepy.frontend.ResultSet[object][object][source]¶ Find objects. This call will search the archive for objects matching the given criteria. In many cases the main arguments of interest will be state and meta which allow you to apply filters on the stored state of the object and metadata respectively. To understand how the state is stored in the database (and therefore how to apply filters to it) it may be necessary to look at the details of the save_instance_state() method for that type. Metadata is always a dictionary containing primitives (strings, dicts, lists, etc).
For the most part, the filter syntax of mincePy conforms to that of MongoDB, with convenience functions located in
mincepy.qops
that can make it easier to build a query. Examples:
Find all mincepy.testing.Car objects that are brown or red:
>>> import mincepy as mpy
>>> historian = mpy.get_historian()
>>> historian.find(mpy.testing.Car.colour.in_('brown', 'red'))
Find all people that are older than 34 and live in Edinburgh:
>>> historian.find(mpy.testing.Person.age > 34, meta=dict(city='Edinburgh'))
Parameters: - obj_type – the object type to look for
- obj_id – an object or multiple object ids to look for
- version – the version of the object to retrieve, -1 means latest
- state (must be subclass of historian.primitive) – the criteria on the state of the object to apply
- meta – the search criteria to apply on the metadata of the object
- sort – the sort criteria
- limit – the maximum number of results to return, 0 means unlimited
- skip – the page to get results from
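Since the filter syntax conforms to MongoDB's, the expression-based examples above can presumably also be written as raw filter dictionaries passed via the state and meta arguments. This is a hedged sketch; the exact filter that an expression like Car.colour.in_('brown', 'red') compiles to is an assumption here:

```python
# Assumed MongoDB-style equivalents of the expression examples above.
# e.g. historian.find(mpy.testing.Car, state=state_filter)
state_filter = {'colour': {'$in': ['brown', 'red']}}

# A comparison such as Person.age > 34 would correspond to a $gt clause:
age_filter = {'age': {'$gt': 34}}

print(state_filter)  # {'colour': {'$in': ['brown', 'red']}}
print(age_filter)    # {'age': {'$gt': 34}}
```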
-
find_distinct
(*args, **kwargs)[source]¶ Get distinct values of the given record key
Has the same signature as mincepy.Records.distinct().
Deprecated since version 0.15.10: This will be removed in 0.17.0. Use mincepy.records.distinct() instead
-
find_records
(*args, **kwargs) → Iterator[mincepy.records.DataRecord][source]¶ Find records
Has the same signature as mincepy.Records.find().
Deprecated since version 0.15.10: This will be removed in 0.17.0. Use mincepy.records.find() instead
-
get_current_record
(obj: object) → mincepy.records.DataRecord[source]¶ Get the current record that the historian has cached for the passed object
-
get_obj_id
(obj: object) → Any[source]¶ Get the object ID for a live object.
Returns: the object id or None if the object is not known to the historian
-
get_snapshot_id
(obj: object) → mincepy.records.SnapshotId[source]¶ Get the current snapshot id for a live object. Will return the id or raise a
mincepy.NotFound
exception.
-
history
(obj_or_obj_id, idx_or_slice='*', as_objects=True) → Union[Sequence[mincepy.historians.ObjectEntry], Sequence[mincepy.records.DataRecord]][source]¶ Get a sequence of object ids and instances from the history of the given object.
Parameters: - obj_or_obj_id – The instance or id of the object to get the history for
- idx_or_slice – The particular index or a slice of which historical versions to get
- as_objects – if True return the object instances, otherwise returns the DataRecords
Example:
>>> import mincepy, mincepy.testing
>>> historian = mincepy.get_historian()
>>> car = mincepy.testing.Car('ferrari', 'white')
>>> car_id = historian.save(car)
>>> car.colour = 'red'
>>> historian.save(car)
>>> history = historian.history(car_id)
>>> len(history)
2
>>> history[0].obj.colour == 'white'
True
>>> history[1].obj.colour == 'red'
True
>>> history[1].obj is car
-
in_transaction
() → Iterator[mincepy.transactions.Transaction][source]¶ This context will either re-use an existing transaction, if one is currently taking place or create a new one if not.
-
is_known
(obj: object) → bool[source]¶ Check if an object has ever been saved and is therefore known to the historian
Returns: True if ever saved, False otherwise
-
is_primitive
(obj) → bool[source]¶ Check if the object is one of the primitives and should be saved by value in the archive
-
is_saved
(obj: object) → bool[source]¶ Test if an object is saved with this historian. This is equivalent to historian.get_obj_id(obj) is not None.
-
classmethod
is_trackable
(obj)[source]¶ Determine if an object is trackable, i.e. we can treat it as a live object and automatically keep track of its history when saving. Ultimately this is determined by whether the type is weakly referenceable or not.
-
merge
(result_set: mincepy.frontend.ResultSet[object][object], *, meta=None, batch_size=1024, progress_callback: Callable[[mincepy.utils.Progress, Optional[mincepy.result_types.MergeResult]], None] = None) → mincepy.result_types.MergeResult[source]¶ Merge a set of objects into this database.
Given a set of results from another archive this will attempt to merge the corresponding records into this historian’s archive.
Parameters: - result_set – the set of records to merge from the source historian
- meta – option for merging metadata; allowed values: None (don't merge metadata), 'update' (perform a dictionary update with any existing metadata), 'overwrite' (in the case of an existing metadata dictionary, overwrite it)
-
meta
¶ Access to functions that operate on the metadata
-
migrations
¶ Access the migration possibilities
-
objects
¶ Access the objects
-
primitives
¶ A tuple of all the primitive types
-
purge
(deleted=True, unreferenced=True, dry_run=True) → mincepy.result_types.PurgeResult[source]¶ Purge the archive of unused snapshots
-
records
¶ Access methods and properties that act on and return data records
-
references
¶ Access the references collection
-
replace
(old: object, new: object)[source]¶ Replace a live object with a new version.
This is especially useful if you have made a copy of an object and modified it but you want to continue the history of the object as the original rather than a brand new object. Then just replace the old object with the new one by calling this function.
-
save
(*objs)[source]¶ Save multiple objects producing corresponding object identifiers. This returns a sequence of ids that is in the same order as the passed objects.
Parameters: objs – the object(s) to save. Can also be a tuple of (obj, meta) to optionally include metadata to be saved with the object(s)
-
save_one
(obj: object, meta: dict = None)[source]¶ Save the object returning an object id. If metadata is supplied it will be set on the object.
Developer note: this is the front-end point of entry for user/client code saving an object. However, subsequent objects saved in this transaction will only go through _save_object, and therefore any code common to all objects being saved should possibly go there.
-
snapshots
¶ Access the snapshots
-
sync
(obj: object) → bool[source]¶ Update an object with the latest state in the database. If there is no new version in the archive then the current version remains unchanged including any modifications.
Returns: True if the object was updated, False otherwise
-
to_obj_id
(obj_or_identifier)[source]¶ This call will try and get an object id from the passed parameter. The possibilities are:
1. Passed an object ID, in which case it will be returned unchanged
2. Passed a snapshot ID, in which case the corresponding object ID will be returned
3. Passed a live object instance, in which case the id of that object will be returned
4. Passed a type that can be understood by the archive as an object id, e.g. a string, in which case the archive will attempt to convert it
Returns None if none of these cases apply.
-
-
class
mincepy.
ObjectEntry
(ref, obj)¶ Create new instance of ObjectEntry(ref, obj)
-
obj
¶ Alias for field number 1
-
ref
¶ Alias for field number 0
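ObjectEntry, like MetaEntry earlier, is a namedtuple, which is what the "alias for field number" notes above refer to: each named field is a positional element of the tuple. A minimal stand-in illustrates the behaviour (this mirrors, but is not, the real mincepy.ObjectEntry):

```python
from collections import namedtuple

# Stand-in for the ObjectEntry container: `ref` aliases position 0,
# `obj` aliases position 1.
ObjectEntry = namedtuple('ObjectEntry', ['ref', 'obj'])

entry = ObjectEntry(ref='snapshot-ref', obj={'colour': 'red'})
assert entry.ref == entry[0]   # alias for field number 0
assert entry.obj == entry[1]   # alias for field number 1
```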
-
-
class
mincepy.
Savable
(*args, **kwargs)[source]¶ Interface for an object that can save and load its instance state
-
class
mincepy.
TypeHelper
[source]¶ This interface provides the basic methods necessary to enable a type to be compatible with the historian.
-
TYPE
= None¶ The type this helper corresponds to
-
ensure_up_to_date
(saved_state, version: Optional[int], loader: mincepy.depositors.Loader)[source]¶ Apply any migrations that are necessary to this saved state. If no migrations are necessary then None is returned
-
get_version
() → Optional[int][source]¶ Gets the version of the latest migration; returns None if there is no migration
-
load_instance_state
(obj, saved_state, loader: mincepy.depositors.Loader)[source]¶ Take the given blank object and load the instance state into it
-
-
class
mincepy.
WrapperHelper
(obj_type: Type[mincepy.types.SavableObject])[source]¶ Wraps up an object type to perform the necessary Historian actions
-
load_instance_state
(obj, saved_state: mincepy.types.Savable, loader)[source]¶ Take the given blank object and load the instance state into it
-
-
class
mincepy.
BaseHelper
[source]¶ A base helper that defaults to yielding hashables directly on the object and testing for equality using == given two objects. This behaviour is fairly standard and therefore most type helpers will want to subclass from this class.
-
mincepy.
connect
(uri: str = '', use_globally=False, timeout=30000) → mincepy.historians.Historian[source]¶ Connect to an archive and return a corresponding historian
Parameters: - uri – the URI of the archive to connect to
- use_globally – if True, sets the newly created historian as the current global historian
- timeout – a connection timeout (in milliseconds)
-
mincepy.
get_historian
(create=True) → Optional[mincepy.historians.Historian][source]¶ Get the currently set global historian. If one doesn’t exist and create is True then this call will attempt to create a new default historian using connect()
-
mincepy.
set_historian
(new_historian: Optional[mincepy.historians.Historian], apply_plugins=True)[source]¶ Set the current global historian. Optionally load all plugins. To reset the historian pass None.
-
mincepy.
archive_uri
() → Optional[str][source]¶ Returns the default archive URI. This is currently taken from the MINCEPY_ARCHIVE environment variable; however, this may change to include a config file in the future.
Deprecated since version 0.15.3: This will be removed in 0.16.0. Use default_archive_uri() instead
-
mincepy.
save
(*objs)[source]¶ Save one or more objects. See
mincepy.Historian.save()
-
mincepy.
default_archive_uri
() → Optional[str][source]¶ Returns the default archive URI. This is currently taken from the MINCEPY_ARCHIVE environment variable; however, this may change to include a config file in the future.
-
mincepy.
find
(*args, **kwargs)[source]¶ Find objects. See
mincepy.Historian.find()
-
mincepy.
delete
(*obj_or_identifier)[source]¶ Delete an object. See
mincepy.Historian.delete()
-
mincepy.
db
(type_id_or_type) → mincepy.helpers.TypeHelper[source]¶ Get the database type helper for a type. See
mincepy.Historian.get_helper()
-
mincepy.
create_archive
(uri: str, connect_timeout=30000)[source]¶ Create an archive type based on a uri string
Parameters: - uri – the specification of where to connect to
- connect_timeout – a connection timeout (in milliseconds)
-
mincepy.
create_historian
(archive_uri: str, apply_plugins=True, connect_timeout=30000) → mincepy.historians.Historian[source]¶ Convenience function to create a standard historian directly from an archive URI
Parameters: - archive_uri – the specification of where to connect to
- apply_plugins – register the plugin types with the new historian
- connect_timeout – a connection timeout (in milliseconds)
-
class
mincepy.
ObjRef
(obj=None)[source]¶ A reference to an object instance
-
load_instance_state
(saved_state, loader)[source]¶ Take the given object and load the instance state into it
-
-
class
mincepy.
DataRecord
[source]¶ An immutable record that describes a snapshot of an object
-
child_builder
(**kwargs) → mincepy.utils.NamedTupleBuilder[mincepy.records.DataRecord][mincepy.records.DataRecord][source]¶ Get a child builder from this DataRecord instance. The following attributes will be copied over:
- obj_id
- type_id
- creation_time
- created_by
and version will be incremented by one.
Deprecated since version 0.15.20: This will be removed in 0.17.0. Use make_child_builder free function instead
-
created_by
¶ Convenience property to get the creator from the extras
-
creation_time
¶
-
classmethod
defaults
() → dict[source]¶ Returns a dictionary of default values, the caller owns the dict and is free to modify it
-
extras
¶
-
get_copied_from
() → Optional[mincepy.records.SnapshotId][source]¶ Get the reference of the data record this object was originally copied from
-
get_extra
(name)[source]¶ Convenience function to get an extra from the record, returns None if the extra doesn’t exist
-
get_files
() → List[Tuple[Sequence[Union[str, int]], dict]][source]¶ Get the state dictionaries for all the files contained in this record (if any)
-
get_references
() → Iterable[Tuple[Sequence[Union[str, int]], mincepy.records.SnapshotId]][source]¶ Get the snapshot ids of all objects referenced by this record
-
get_state_schema
() → Mapping[tuple, mincepy.records.SchemaEntry][source]¶ Get the schema for the state. This contains the types and versions for each member of the state
-
classmethod
new_builder
(**kwargs) → mincepy.utils.NamedTupleBuilder[mincepy.records.DataRecord][mincepy.records.DataRecord][source]¶ Get a builder for a new data record, the version will be set to 0
-
obj_id
¶
-
snapshot_hash
¶
-
snapshot_id
¶ The snapshot id for this record
-
snapshot_time
¶
-
state
¶
-
state_types
¶
-
type_id
¶
-
version
¶
-
-
mincepy.
SnapshotRef
¶ alias of
mincepy.records.SnapshotId
-
class
mincepy.
SnapshotId
(obj_id, version: int)[source]¶ A snapshot id identifies a particular version of an object (and the corresponding record); it is therefore composed of the object id and the version number.
Create a snapshot id by passing an object id and version
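A minimal sketch of the concept, assuming only what the description states: an object id paired with an integer version. The stand-in below is illustrative, not the real mincepy.SnapshotId:

```python
from collections import namedtuple

# Stand-in for the (object id, version) pair that identifies one
# particular version of an object.
SnapshotId = namedtuple('SnapshotId', ['obj_id', 'version'])

first = SnapshotId('abc123', 0)    # the initial snapshot of object 'abc123'
second = SnapshotId('abc123', 1)   # the next version of the same object

assert first.obj_id == second.obj_id
assert second.version == first.version + 1
```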
-
class
mincepy.
BaseSavableObject
(*args, **kwargs)[source]¶ A helper class that makes a class compatible with the historian by flagging certain attributes which will be saved/loaded/hashed and compared in __eq__. This should be an exhaustive list of all the attributes that define this class. If more complex functionality is needed, then the standard SavableComparable interface methods should be overwritten.
-
load_instance_state
(saved_state, loader)[source]¶ Take the given object and load the instance state into it
-
-
class
mincepy.
ConvenienceMixin
[source]¶ A mixin that adds convenience methods to your savable object
-
class
mincepy.
SimpleSavable
(*args, **kwargs)[source]¶ A BaseSavableObject with convenience methods mixed in
-
mincepy.
AsRef
(name: str) → mincepy.base_savable.AttrSpec[source]¶ Create an attribute specification for an attribute that should be stored by reference
-
class
mincepy.
ConvenientSavable
(*args, **kwargs)[source]¶ A savable with convenience methods.
See
ConvenienceMixin
-
class
mincepy.
LiveRefList
(init_list=None)[source]¶ A live list that uses references to store objects
-
class
mincepy.
RefList
(init_list=None)[source]¶ A list that stores all entries as references in the database except primitives
-
DATA_TYPE
¶ alias of
builtins.list
-
-
class
mincepy.
RefDict
(*args, **kwargs)[source]¶ A dictionary that stores all values as references in the database.
-
DATA_TYPE
¶ alias of
builtins.dict
-
-
class
mincepy.
LiveRefDict
(*args, **kwargs)[source]¶ A live dictionary that uses references to refer to contained objects
-
mincepy.
BaseFile
¶ alias of
mincepy.files.File
-
class
mincepy.
File
(file_store, filename: str = None, encoding=None)[source]¶ A mincePy file object. These should not be instantiated directly but rather created using Historian.create_file()
-
load_instance_state
(saved_state, loader)[source]¶ Take the given object and load the instance state into it
-
open
(mode='r', **kwargs) → Union[BinaryIO, TextIO][source]¶ Open the file, returning a file-like object that supports close() and read()
-
read_text
(encoding=None) → str[source]¶ Read the contents of the file as text. This function is named so as to mirror pathlib.Path
-
save_instance_state
(saver)[source]¶ Save the instance state of an object, should return a saved instance
-
-
mincepy.
track
(obj_or_fn)[source]¶ Allows object creation to be tracked. When an object is created within this context, the creator of the object will be saved in the database record.
This can be used either as a decorator to a class method, in which case the object instance will be the creator. Or it can be used as a context in which case the creator should be passed as the argument.
-
mincepy.
copy
(obj)[source]¶ Create a shallow copy of the object. Using this method allows the historian to inject information about where the object was copied from into the record if saved.
-
mincepy.
deepcopy
(obj)[source]¶ Create a deep copy of the object. Using this method allows the historian to inject information about where the object was copied from into the record if saved.
-
class
mincepy.
Meta
(historian, archive)[source]¶ A class for grouping metadata related methods
-
create_index
(keys, unique=False, where_exist=False)[source]¶ Create an index on the metadata. Takes either a single key or list of (key, direction) pairs
Parameters: - keys – the key or keys to create the index on
- unique – if True, create a uniqueness constraint on this index
- where_exist – if True, only apply this index on documents that contain the key(s)
-
distinct
(key: str, filter: dict = None, obj_id=None) → Iterator[T_co][source]¶ Yield distinct values found for ‘key’ within metadata documents, optionally matching a search filter.
The search can optionally be restricted to a set of passed object ids.
Parameters: - key – the document key to get distinct values for
- filter – a query filter for the search
- obj_id – an optional restriction on the object ids to search. This can be either: 1. a single object id, 2. an iterable of object ids, which is treated as {'$in': list(obj_ids)}, or 3. a general query filter to be applied to the object ids
-
find
(filter, obj_id=None) → Iterator[mincepy.archives.MetaEntry][source]¶ Find metadata matching the given criteria. Each returned result is a tuple containing the corresponding object id and the metadata dictionary itself
-
get
(obj_or_identifier) → Optional[dict][source]¶ Get the metadata for an object
Parameters: obj_or_identifier – either the object instance, an object ID or a snapshot reference
-
-
class
mincepy.
References
(historian)[source]¶ A class that can provide reference graph information about objects stored in the archive.
Note
It is deliberately not possible to pass an object directly to methods in this class as what is returned is reference information from the archive and _not_ reference information about the in-memory python object.
-
class
mincepy.
SnapshotsCollection
(historian, archive_collection: mincepy.archives.Collection)[source]¶
-
class
mincepy.
LiveObjectsCollection
(historian, archive_collection: mincepy.archives.Collection)[source]¶
-
mincepy.
field
(attr: str = None, ref=False, default=(), type=None, store_as: str = None, dynamic=False) → mincepy.fields.Field[source]¶ Define a new field
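Conceptually, a field exposes an attribute under one name while optionally storing its state under another (cf. the store_as parameter above). The minimal descriptor below is purely illustrative and is not mincepy's actual Field class:

```python
class Field:
    """Hypothetical sketch of a store_as-style field descriptor."""

    def __init__(self, store_as=None):
        self.store_as = store_as

    def __set_name__(self, owner, name):
        # Store under the short key if given, else under the attribute name.
        self.attr = self.store_as or name

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return obj.__dict__.get(self.attr)

    def __set__(self, obj, value):
        obj.__dict__[self.attr] = value

class Car:
    colour = Field(store_as='c')  # saved state would use the short key 'c'

car = Car()
car.colour = 'red'
assert car.colour == 'red'
assert car.__dict__['c'] == 'red'  # stored under the store_as key
```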
-
class
mincepy.
Expr
[source]¶ The base class for query expressions. Expressions are tuples containing an operator or a field as the first part and a value or expression as the second
-
class
mincepy.
WithListOperand
(operand: List[mincepy.expr.Expr])[source]¶ Mixin for expressions that take an operand that is a list
-
class
mincepy.
Comparison
(field, expr: mincepy.expr.Operator)[source]¶ A comparison expression consists of a field and an operator expression e.g. name == ‘frank’ where name is the field, the operator is ==, and the value is ‘frank’
-
class
mincepy.
Logical
(operand: mincepy.expr.Expr)[source]¶ A logical operation. Consists of an operator applied to an operand which is matched in a particular way
-
class
mincepy.
WithQueryContext
[source]¶ A mixin for Queryable objects that allows a context to be added which is always ‘anded’ with the resulting query condition for any operator
mincepy.hist¶
-
class
mincepy.hist.
Meta
(historian, archive)[source]¶ A class for grouping metadata related methods
-
create_index
(keys, unique=False, where_exist=False)[source]¶ Create an index on the metadata. Takes either a single key or list of (key, direction) pairs
Parameters: - keys – the key or keys to create the index on
- unique – if True, create a uniqueness constraint on this index
- where_exist – if True, only apply this index on documents that contain the key(s)
-
distinct
(key: str, filter: dict = None, obj_id=None) → Iterator[T_co][source]¶ Yield distinct values found for ‘key’ within metadata documents, optionally matching a search filter.
The search can optionally be restricted to a set of passed object ids.
Parameters: - key – the document key to get distinct values for
- filter – a query filter for the search
- obj_id – an optional restriction on the object ids to search. This can be either: 1. a single object id, 2. an iterable of object ids, in which case it is treated as {‘$in’: list(obj_ids)}, or 3. a general query filter to be applied to the object ids
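The three accepted `obj_id` forms can be sketched as a single normalisation step. The helper below is hypothetical, not mincepy code; it just shows how each form might map onto a query filter.

```python
# Sketch: normalise the three accepted obj_id forms into one query
# filter.  Hypothetical helper, not mincepy's implementation.

def obj_id_to_filter(obj_id):
    if obj_id is None:
        return {}                              # no restriction
    if isinstance(obj_id, dict):
        return obj_id                          # 3. general query filter
    if isinstance(obj_id, (list, tuple, set)):
        return {"$in": list(obj_id)}           # 2. iterable of ids
    return obj_id                              # 1. a single object id

print(obj_id_to_filter(["id1", "id2"]))  # {'$in': ['id1', 'id2']}
```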
-
find
(filter, obj_id=None) → Iterator[mincepy.archives.MetaEntry][source]¶ Find metadata matching the given criteria. Each returned result is a tuple containing the corresponding object id and the metadata dictionary itself
-
get
(obj_or_identifier) → Optional[dict][source]¶ Get the metadata for an object
Parameters: obj_or_identifier – either the object instance, an object ID or a snapshot reference
-
-
class
mincepy.hist.
References
(historian)[source]¶ A class that can provide reference graph information about objects stored in the archive.
Note
It is deliberately not possible to pass an object directly to methods in this class as what is returned is reference information from the archive and _not_ reference information about the in-memory python object.
mincepy.mongo¶
-
class
mincepy.mongo.
MongoArchive
(database: pymongo.database.Database)[source]¶ MongoDB implementation of the mincepy archive
-
ID_TYPE
¶ alias of
bson.objectid.ObjectId
-
bulk_write
(ops: Sequence[mincepy.operations.Operation])[source]¶ Make a collection of write operations to the database
-
construct_archive_id
(value) → bson.objectid.ObjectId[source]¶ If it’s possible, construct an archive value from the passed value. This is useful as a convenience to the user if, say, the archive id can be constructed from a string. Raise TypeError or ValueError if this is not possible for the given value.
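The real implementation builds a `bson.ObjectId`, which accepts a 24-character hex string. The toy version below (hypothetical, stdlib-only) shows only the contract stated above: return an id when construction is possible, otherwise raise TypeError or ValueError.

```python
# Sketch of the construct_archive_id contract: accept a value that can
# be turned into an archive id (here, a 24-char hex string, mirroring
# bson.ObjectId), else raise TypeError/ValueError.  Toy code only.
import string

def construct_archive_id(value) -> str:
    if not isinstance(value, str):
        raise TypeError(f"Cannot construct an archive id from {type(value)}")
    if len(value) != 24 or any(c not in string.hexdigits for c in value):
        raise ValueError(f"{value!r} is not a valid 24-character hex id")
    return value.lower()

print(construct_archive_id("0123456789ABCDEF01234567"))
# '0123456789abcdef01234567'
```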
-
count
(obj_id: Optional[bson.objectid.ObjectId] = None, type_id=None, _created_by=None, _copied_from=None, version=-1, state=None, snapshot_hash=None, meta=None, limit=0)[source]¶ Count the number of entries that match the given query
-
distinct
(key: str, filter: dict = None) → Iterator[T_co][source]¶ Get distinct values of the given record key
Parameters: - key – the key to find distinct values for, see DataRecord for possible keys
- filter – an optional filter to restrict the search to. Should be a dictionary that filters on entries in the DataRecord i.e. the kwargs that can be passed to find().
-
file_store
¶ Get the GridFS file bucket
-
find
(obj_id: Union[bson.objectid.ObjectId, Iterable[bson.objectid.ObjectId], Dict[KT, VT]] = None, type_id: Union[bson.objectid.ObjectId, Iterable[bson.objectid.ObjectId], Dict[KT, VT]] = None, _created_by=None, _copied_from=None, version=None, state=None, state_types=None, snapshot_hash=None, meta: dict = None, extras: dict = None, limit=0, sort=None, skip=0)[source]¶ Find records matching the given criteria
Parameters: - type_id – find records with the given type id
- created_by – find records with the given created by id
- copied_from – find records copied from the record with the given id
- version – restrict the search to this version, -1 for latest
- state – find objects with this state filter
- state_types – find objects with this state types filter
- snapshot_hash – find objects with this snapshot hash
- meta – find objects with this meta filter
- extras – the search criteria to apply on the data record extras
- limit – limit the results to this many records
- obj_id – an optional restriction on the object ids to search. This can be either: 1. a single object id, 2. an iterable of object ids, in which case it is treated as {‘$in’: list(obj_ids)}, or 3. a general query filter to be applied to the object ids
- sort – sort the results by the given criteria
- skip – skip this many entries
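The keyword criteria above are ultimately merged into one MongoDB filter document. The sketch below (hypothetical helper, illustrative key names) shows one plausible way such a merge could work for a subset of the parameters.

```python
# Sketch: merge find() keyword criteria into a single MongoDB-style
# filter document.  Key names and helper are illustrative only.

def build_record_filter(type_id=None, version=None, state=None, meta=None):
    query = {}
    if type_id is not None:
        query["type_id"] = type_id
    if version is not None and version != -1:
        query["version"] = version           # -1 means "latest", no clause
    if state is not None:
        query.update({f"state.{k}": v for k, v in state.items()})
    if meta is not None:
        query.update({f"meta.{k}": v for k, v in meta.items()})
    return query

print(build_record_filter(type_id="car", state={"colour": "red"}))
# {'type_id': 'car', 'state.colour': 'red'}
```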
-
get_obj_ref_graph
(*obj_ids, direction=1, max_dist: int = None) → Iterator[networkx.classes.digraph.DiGraph][source]¶ Given one or more object ids the archive will supply the corresponding reference graph(s). The graphs start at the given id and contain all object ids that it references, all object ids they reference and so on.
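The transitive traversal described above can be illustrated with a toy breadth-first search. The real method returns `networkx` DiGraph objects; here a plain edge list and a dict of references stand in, and the helper name is hypothetical.

```python
# Toy illustration of a reference graph: starting from an object id,
# follow outgoing references transitively up to max_dist hops.
# The real API returns networkx DiGraphs; this returns an edge list.
from collections import deque

def ref_graph_edges(refs: dict, start, max_dist=None):
    """BFS over `refs` (id -> list of referenced ids), returning edges."""
    seen, queue, edges = {start}, deque([(start, 0)]), []
    while queue:
        node, dist = queue.popleft()
        if max_dist is not None and dist >= max_dist:
            continue  # do not expand beyond the distance limit
        for target in refs.get(node, []):
            edges.append((node, target))
            if target not in seen:
                seen.add(target)
                queue.append((target, dist + 1))
    return edges

refs = {"a": ["b", "c"], "b": ["c"], "c": []}
print(ref_graph_edges(refs, "a"))  # [('a', 'b'), ('a', 'c'), ('b', 'c')]
```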
-
get_snapshot_ids
(obj_id: bson.objectid.ObjectId)[source]¶ Returns a list of time ordered snapshot ids
-
get_snapshot_ref_graph
(*snapshot_ids, direction=1, max_dist: int = None) → Iterator[networkx.classes.digraph.DiGraph][source]¶ Given one or more snapshot ids the archive will supply the corresponding reference graph(s). The graphs start at the given id and contain all snapshots that it references, all snapshots they reference and so on.
-
classmethod
get_types
() → Sequence[T_co][source]¶ This method allows the archive to return either types or type helpers that the historian should support. A common example is the type helper for the object id type
-
load
(snapshot_id: mincepy.records.SnapshotId) → mincepy.records.DataRecord[source]¶ Load a snapshot of an object with the given reference
-
meta_create_index
(keys, unique=True, where_exist=False)[source]¶ Create an index on the metadata. Takes either a single key or list of (key, direction) pairs
Parameters: - keys – the key or keys to create the index on
- unique – if True, create a uniqueness constraint on this index
- where_exist – if True the index only applies for documents where the key(s) exist
-
meta_distinct
(key: str, filter: dict = None, obj_id: Union[bson.objectid.ObjectId, Iterable[bson.objectid.ObjectId], Mapping[KT, VT_co]] = None) → Iterator[T_co][source]¶ Yield distinct values found for ‘key’ within metadata documents, optionally matching a search filter.
The search can optionally be restricted to a set of passed object ids.
Parameters: - key – the document key to get distinct values for
- filter – a query filter for the search
- obj_id – an optional restriction on the object ids to search. This can be either: 1. a single object id, 2. an iterable of object ids, in which case it is treated as {‘$in’: list(obj_ids)}, or 3. a general query filter to be applied to the object ids
-
meta_find
(filter: dict = None, obj_id: Union[bson.objectid.ObjectId, Iterable[bson.objectid.ObjectId], Dict[KT, VT]] = None) → Iterator[Tuple[bson.objectid.ObjectId, Dict[KT, VT]]][source]¶ Yield metadata satisfying the given criteria. The search can optionally be restricted to a set of passed object ids.
Parameters: - filter – a query filter for the search
- obj_id – an optional restriction on the object ids to search. This can be either: 1. a single object id, 2. an iterable of object ids, in which case it is treated as {‘$in’: list(obj_ids)}, or 3. a general query filter to be applied to the object ids
-
meta_get_many
(obj_ids: Iterable[bson.objectid.ObjectId]) → Dict[bson.objectid.ObjectId, dict][source]¶ Get the metadata for multiple objects. Returns a dictionary mapping the object id to the metadata dictionary
-
meta_set_many
(metas: Mapping[bson.objectid.ObjectId, Optional[dict]])[source]¶ Set the metadata on multiple objects. This takes a mapping of the object id to the corresponding (optional) metadata dictionary
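The bulk-set behaviour can be sketched with a toy in-memory store. Note the treatment of None is an assumption made here for illustration (that a None value clears the stored metadata); the function name mirrors the method above but this is not mincepy's implementation.

```python
# Sketch of meta_set_many over an in-memory store.  ASSUMPTION: a None
# metadata value clears the entry; this semantic is illustrative only.

def meta_set_many(store: dict, metas: dict):
    for obj_id, meta in metas.items():
        if meta is None:
            store.pop(obj_id, None)  # assumed: None removes the metadata
        else:
            store[obj_id] = meta

store = {"obj1": {"group": "a"}}
meta_set_many(store, {"obj1": None, "obj2": {"group": "b"}})
print(store)  # {'obj2': {'group': 'b'}}
```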
-
meta_update
(obj_id, meta: Mapping[KT, VT_co])[source]¶ Update the metadata on the object with the corresponding id
-
objects
¶ Access the objects collection
-
snapshots
¶ Access the snapshots collection
-