mdocument.client.MDocumentAsyncIOMotorCollection

class mdocument.client.MDocumentAsyncIOMotorCollection(database, name, codec_options=None, read_preference=None, write_concern=None, read_concern=None, _delegate=None)
Bases: motor.motor_asyncio.AsyncIOMotorCollection

__init__(database, name, codec_options=None, read_preference=None, write_concern=None, read_concern=None, _delegate=None)
Initialize self. See help(type(self)) for accurate signature.
Methods

__init__(database, name[, codec_options, ...]): Initialize self.
aggregate(pipeline, **kwargs): Execute an aggregation pipeline on this collection.
aggregate_raw_batches(pipeline, **kwargs): Perform an aggregation and retrieve batches of raw BSON.
bulk_write(requests[, ordered, ...]): Send a batch of write operations to the server.
count_documents(filter[, session]): Count the number of documents in this collection.
create_index(keys[, session]): Creates an index on this collection.
create_indexes(indexes[, session]): Create one or more indexes on this collection.
delete_many(documents, *args, **kwargs): Deletes multiple documents from the database.
delete_one(document, *args, **kwargs): Deletes one document from the database.
distinct(key[, filter, session]): Get a list of distinct values for key among all documents in this collection.
drop([session]): Alias for drop_collection.
drop_index(index_or_name[, session]): Drops the specified index on this collection.
drop_indexes([session]): Drops all indexes on this collection.
estimated_document_count(**kwargs): Get an estimate of the number of documents in this collection using collection metadata.
find(document_query, *args, **kwargs): Finds multiple documents and returns them with the provided type.
find_one(document_query, *args, **kwargs): Finds one document and returns it with the provided type.
find_one_and_delete(filter[, projection, ...]): Finds a single document and deletes it, returning the document.
find_one_and_replace(filter, replacement[, ...]): Finds a single document and replaces it, returning either the original or the replaced document.
find_one_and_update(filter, update[, ...]): Finds a single document and updates it, returning either the original or the updated document.
find_raw_batches(*args, **kwargs): Query the database and retrieve batches of raw BSON.
get_io_loop()
index_information([session]): Get information on this collection's indexes.
inline_map_reduce(map, reduce[, ...]): Perform an inline map/reduce operation on this collection.
insert_many(documents, *args, **kwargs): Inserts multiple documents into the database.
insert_one(document, *args, **kwargs): Inserts one document into the database.
list_indexes([session]): Get a cursor over the index documents for this collection.
map_reduce(map, reduce, out[, ...]): Perform a map/reduce operation on this collection.
options([session]): Get the options set on this collection.
reindex([session]): DEPRECATED: Rebuild all indexes on this collection.
rename(new_name[, session]): Rename this collection.
replace_one(filter, replacement[, upsert, ...]): Replace a single document matching the filter.
update_many(documents, *args, **kwargs): Updates multiple documents in the database.
update_one(document, *args, **kwargs): Updates one document in the database.
watch([pipeline, full_document, ...]): Watch changes on this collection.
with_options([codec_options, ...]): Get a clone of this collection changing the specified settings.
wrap(obj)

Attributes

codec_options: Read only access to the CodecOptions of this instance.
full_name: The full name of this Collection.
name: The name of this Collection.
read_concern: Read only access to the ReadConcern of this instance.
read_preference: Read only access to the read preference of this instance.
write_concern: Read only access to the WriteConcern of this instance.
-
aggregate
(pipeline, **kwargs)¶ Execute an aggregation pipeline on this collection.
The aggregation can be run on a secondary if the client is connected to a replica set and its
read_preference
is not PRIMARY.
Parameters
pipeline: a single command or list of aggregation commands
session (optional): a
ClientSession
, created with start_session().
**kwargs: send arbitrary parameters to the aggregate command
Returns a
MotorCommandCursor
that can be iterated like a cursor from find():

async def f():
    pipeline = [{'$project': {'name': {'$toUpper': '$name'}}}]
    async for doc in collection.aggregate(pipeline):
        print(doc)
MotorCommandCursor
does not allow the explain option. To explain MongoDB's query plan for the aggregation, use MotorDatabase.command():

async def f():
    plan = await db.command(
        'aggregate', 'COLLECTION-NAME',
        pipeline=[{'$project': {'x': 1}}],
        explain=True)
    print(plan)
Changed in version 2.1: This collection’s read concern is now applied to pipelines containing the $out stage when connected to MongoDB >= 4.2.
Changed in version 1.0:
aggregate()
now always returns a cursor.
Changed in version 0.5:
aggregate()
now returns a cursor by default, and the cursor is returned immediately without an await. See aggregation changes in Motor 0.5.
Changed in version 0.2: Added cursor support.
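As a mental model for the $project/$toUpper stage used in the example above, the transformation applied to each document can be sketched in plain Python (an illustration only, not how the server evaluates pipelines):

```python
# Pure-Python sketch of {'$project': {'name': {'$toUpper': '$name'}}}:
# keep _id (as $project does by default) and upper-case the 'name' field.
def project_upper_name(docs):
    return [{'_id': d['_id'], 'name': d['name'].upper()} for d in docs]

docs = [{'_id': 1, 'name': 'alice'}, {'_id': 2, 'name': 'bob'}]
print(project_upper_name(docs))
# [{'_id': 1, 'name': 'ALICE'}, {'_id': 2, 'name': 'BOB'}]
```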
-
aggregate_raw_batches
(pipeline, **kwargs)¶ Perform an aggregation and retrieve batches of raw BSON.
Similar to the
aggregate()
method but returns each batch as bytes.
This example demonstrates how to work with raw batches, but in practice raw batches should be passed to an external library that can decode BSON into another data type, rather than used with PyMongo's
bson
module.

async def get_raw():
    cursor = db.test.aggregate_raw_batches()
    async for batch in cursor:
        print(bson.decode_all(batch))
Note that
aggregate_raw_batches
does not support sessions.
New in version 2.0.
-
bulk_write
(requests, ordered=True, bypass_document_validation=False, session=None)¶ Send a batch of write operations to the server.
Requests are passed as a list of write operation instances imported from
pymongo
: InsertOne, UpdateOne, UpdateMany, ReplaceOne, DeleteOne, or DeleteMany.
For example, say we have these documents:
{'x': 1, '_id': ObjectId('54f62e60fba5226811f634ef')}
{'x': 1, '_id': ObjectId('54f62e60fba5226811f634f0')}
We can insert a document, delete one, and replace one like so:
# DeleteMany, UpdateOne, and UpdateMany are also available.
from pymongo import InsertOne, DeleteOne, ReplaceOne

async def modify_data():
    requests = [InsertOne({'y': 1}),
                DeleteOne({'x': 1}),
                ReplaceOne({'w': 1}, {'z': 1}, upsert=True)]
    result = await db.test.bulk_write(requests)
    print("inserted %d, deleted %d, modified %d" % (
        result.inserted_count, result.deleted_count, result.modified_count))
    print("upserted_ids: %s" % result.upserted_ids)
    print("collection:")
    async for doc in db.test.find():
        print(doc)
This will print something like:
inserted 1, deleted 1, modified 0
upserted_ids: {2: ObjectId('54f62ee28891e756a6e1abd5')}
collection:
{'x': 1, '_id': ObjectId('54f62e60fba5226811f634f0')}
{'y': 1, '_id': ObjectId('54f62ee2fba5226811f634f1')}
{'z': 1, '_id': ObjectId('54f62ee28891e756a6e1abd5')}
- Parameters
requests: A list of write operations (see examples above).
ordered (optional): If True (the default) requests will be performed on the server serially, in the order provided. If an error occurs all remaining operations are aborted. If False requests will be performed on the server in arbitrary order, possibly in parallel, and all operations will be attempted.
bypass_document_validation (optional): If True, allows the write to opt out of document level validation. Default is False.
session (optional): a ClientSession, created with start_session().
- Returns
An instance of BulkWriteResult.
See also
writes-and-ids
Note
bypass_document_validation requires server version >= 3.2
Changed in version 1.2: Added session parameter.
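The ordered/unordered distinction above can be illustrated with a toy in-memory runner (plain Python; the names here are invented for illustration and are not Motor APIs):

```python
# Toy illustration of bulk_write's ordered=True semantics: operations run
# serially, and the first error aborts everything that follows.
def run_ordered(ops):
    results, errors = [], []
    for op in ops:
        try:
            results.append(op())
        except Exception as exc:
            errors.append(exc)
            break  # ordered=True: abort the remaining operations
    return results, errors

def fail():
    raise ValueError('duplicate key')  # simulated server-side write error

ops = [lambda: 'insert ok', fail, lambda: 'never runs']
results, errors = run_ordered(ops)
print(results)  # ['insert ok']
```

With ordered=False the server may instead attempt every operation, possibly in parallel, and report all errors at the end.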
-
property
codec_options
¶ Read only access to the
CodecOptions
of this instance.
-
count_documents
(filter, session=None, **kwargs)¶ Count the number of documents in this collection.
Note
For a fast count of the total documents in a collection see
estimated_document_count()
.The
count_documents()
method is supported in a transaction.All optional parameters should be passed as keyword arguments to this method. Valid options include:
skip (int): The number of matching documents to skip before returning results.
limit (int): The maximum number of documents to count. Must be a positive integer. If not provided, no limit is imposed.
maxTimeMS (int): The maximum amount of time to allow this operation to run, in milliseconds.
collation (optional): An instance of
Collation
. This option is only supported on MongoDB 3.4 and above.hint (string or list of tuples): The index to use. Specify either the index name as a string or the index specification as a list of tuples (e.g. [(‘a’, pymongo.ASCENDING), (‘b’, pymongo.ASCENDING)]). This option is only supported on MongoDB 3.6 and above.
The count_documents() method obeys the read_preference of this Collection.
Note
When migrating from count() to count_documents() the following query operators must be replaced:

Operator       Replacement
$where         $expr
$near          $geoWithin with $center
$nearSphere    $geoWithin with $centerSphere

$expr requires MongoDB 3.6+
- Parameters
filter (required): A query document that selects which documents to count in the collection. Can be an empty document to count all documents.
session (optional): a ClientSession.
**kwargs (optional): See list of options above.
New in version 3.7.
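The skip and limit options described above can be pictured with an in-memory stand-in that filters a list of dicts (illustrative only, not a real MongoDB query):

```python
# Illustrative stand-in for count_documents' skip/limit options.
def count_documents(docs, filter, skip=0, limit=None):
    matches = [d for d in docs if all(d.get(k) == v for k, v in filter.items())]
    matches = matches[skip:]          # skip N matching documents first
    if limit is not None:
        matches = matches[:limit]     # then count at most `limit` of the rest
    return len(matches)

docs = [{'x': 1}, {'x': 1}, {'x': 1}, {'x': 2}]
print(count_documents(docs, {'x': 1}))           # 3
print(count_documents(docs, {'x': 1}, skip=1))   # 2
print(count_documents(docs, {'x': 1}, limit=2))  # 2
```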
-
create_index
(keys, session=None, **kwargs)¶ Creates an index on this collection.
Takes either a single key or a list of (key, direction) pairs. The key(s) must be an instance of
basestring (str in Python 3), and the direction(s) must be one of (ASCENDING, DESCENDING, GEO2D, GEOHAYSTACK, GEOSPHERE, HASHED, TEXT).
To create a single key ascending index on the key 'mike' we just use a string argument:

>>> my_collection.create_index("mike")
For a compound index on 'mike' descending and 'eliot' ascending we need to use a list of tuples:

>>> my_collection.create_index([("mike", pymongo.DESCENDING),
...                             ("eliot", pymongo.ASCENDING)])
All optional index creation parameters should be passed as keyword arguments to this method. For example:
>>> my_collection.create_index([("mike", pymongo.DESCENDING)],
...                            background=True)
Valid options include, but are not limited to:
name: custom name to use for this index - if none is given, a name will be generated.
unique: if True, creates a uniqueness constraint on the index.
background: if True, this index should be created in the background.
sparse: if True, omit from the index any documents that lack the indexed field.
bucketSize: for use with geoHaystack indexes. Number of documents to group together within a certain proximity to a given longitude and latitude.
min: minimum value for keys in a GEO2D index.
max: maximum value for keys in a GEO2D index.
expireAfterSeconds: <int> Used to create an expiring (TTL) collection. MongoDB will automatically delete documents from this collection after <int> seconds. The indexed field must be a UTC datetime or the data will not expire.
partialFilterExpression: A document that specifies a filter for a partial index. Requires MongoDB >= 3.2.
collation (optional): An instance of Collation. Requires MongoDB >= 3.4.
wildcardProjection: Allows users to include or exclude specific field paths from a wildcard index using the {"$**": 1} key pattern. Requires MongoDB >= 4.2.
hidden: if True, this index will be hidden from the query planner and will not be evaluated as part of query plan selection. Requires MongoDB >= 4.4.
See the MongoDB documentation for a full list of supported options by server version.
Warning
dropDups is not supported by MongoDB 3.0 or newer. The option is silently ignored by the server and unique index builds using the option will fail if a duplicate value is detected.
Note
The write_concern of this collection is automatically applied to this operation when using MongoDB >= 3.4.
Parameters
keys: a single key or a list of (key, direction) pairs specifying the index to create
session (optional): a ClientSession.
**kwargs (optional): any additional index creation options (see the above list) should be passed as keyword arguments
Changed in version 3.11: Added the hidden option.
Changed in version 3.6: Added session parameter. Added support for passing maxTimeMS in kwargs.
Changed in version 3.4: Apply this collection's write concern automatically to this operation when connected to MongoDB >= 3.4. Support the collation option.
Changed in version 3.2: Added partialFilterExpression to support partial indexes.
Changed in version 3.0: Renamed key_or_list to keys. Removed the cache_for option. create_index() no longer caches index names. Removed support for the drop_dups and bucket_size aliases.
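The string-vs-list-of-tuples convention for the keys argument can be sketched with a small, hypothetical normalization helper (plain Python; not PyMongo's actual code):

```python
# Hypothetical sketch: a bare string such as "mike" is treated as an
# ascending single-key spec, i.e. [("mike", 1)]; a list of (key, direction)
# pairs is passed through unchanged.
ASCENDING = 1   # local stand-in for pymongo.ASCENDING

def normalize_keys(keys):
    if isinstance(keys, str):
        return [(keys, ASCENDING)]
    return list(keys)

print(normalize_keys("mike"))                        # [('mike', 1)]
print(normalize_keys([("mike", -1), ("eliot", 1)]))  # [('mike', -1), ('eliot', 1)]
```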
-
create_indexes
(indexes, session=None, **kwargs)¶ Create one or more indexes on this collection:
from pymongo import IndexModel, ASCENDING, DESCENDING

async def create_two_indexes():
    index1 = IndexModel([("hello", DESCENDING),
                         ("world", ASCENDING)], name="hello_world")
    index2 = IndexModel([("goodbye", DESCENDING)])
    print(await db.test.create_indexes([index1, index2]))
This prints:
['hello_world', 'goodbye_-1']
- Parameters
indexes: A list of IndexModel instances.
session (optional): a ClientSession, created with start_session().
**kwargs (optional): optional arguments to the createIndexes command (like maxTimeMS) can be passed as keyword arguments.
The write_concern of this collection is automatically applied to this operation when using MongoDB >= 3.4.
Changed in version 1.2: Added session parameter.
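The unnamed IndexModel in the example comes back as 'goodbye_-1' because a default index name is derived from the key pattern by joining field and direction with underscores. A sketch of that convention (illustrative, not PyMongo's internal helper):

```python
# Sketch of the default index naming rule: "<field>_<direction>" pairs
# joined by underscores, e.g. [("goodbye", -1)] -> "goodbye_-1".
def default_index_name(keys):
    return "_".join("%s_%s" % (field, direction) for field, direction in keys)

print(default_index_name([("goodbye", -1)]))              # goodbye_-1
print(default_index_name([("hello", -1), ("world", 1)]))  # hello_-1_world_1
```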
-
async
delete_many
(documents: List[mdocument.document.MDocument], *args, **kwargs)¶ Deletes multiple documents from the database. Also updates related documents.
-
async
delete_one
(document: mdocument.document.MDocument, *args, **kwargs)¶ Deletes one document from the database. Also updates related documents.
-
distinct
(key, filter=None, session=None, **kwargs)¶ Get a list of distinct values for key among all documents in this collection.
Raises
TypeError if key is not an instance of basestring (str in Python 3).
All optional distinct parameters should be passed as keyword arguments to this method. Valid options include:
maxTimeMS (int): The maximum amount of time to allow the count command to run, in milliseconds.
collation (optional): An instance of
Collation
. This option is only supported on MongoDB 3.4 and above.
The
distinct()
method obeys the read_preference of this Collection.
Parameters
key: name of the field for which we want to get the distinct values
filter (optional): A query document that specifies the documents from which to retrieve the distinct values.
session (optional): a
ClientSession.
**kwargs (optional): See list of options above.
Changed in version 3.6: Added
session
parameter.
Changed in version 3.4: Support the collation option.
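The behaviour of distinct(key, filter) can be pictured with an in-memory stand-in over a list of dicts (illustrative only):

```python
# Illustrative stand-in for distinct(): collect the unique values of one
# field among documents matching an optional filter.
def distinct(docs, key, filter=None):
    filter = filter or {}
    values = []
    for d in docs:
        if all(d.get(k) == v for k, v in filter.items()) and key in d:
            if d[key] not in values:   # keep first occurrence only
                values.append(d[key])
    return values

docs = [{'x': 1, 'tag': 'a'}, {'x': 1, 'tag': 'b'}, {'x': 2, 'tag': 'a'}]
print(distinct(docs, 'tag'))            # ['a', 'b']
print(distinct(docs, 'tag', {'x': 2}))  # ['a']
```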
-
drop
(session=None)¶ Alias for
drop_collection.
The following two calls are equivalent:
await db.foo.drop()
await db.drop_collection("foo")
-
drop_index
(index_or_name, session=None, **kwargs)¶ Drops the specified index on this collection.
Can be used on non-existent collections or collections with no indexes. Raises OperationFailure on an error (e.g. trying to drop an index that does not exist). index_or_name can be either an index name (as returned by create_index), or an index specifier (as passed to create_index). An index specifier should be a list of (key, direction) pairs. Raises TypeError if index is not an instance of (str, unicode, list).
Warning
if a custom name was used on index creation (by passing the name parameter to
create_index()
or ensure_index()) the index must be dropped by name.
Parameters
index_or_name: index (or name of index) to drop
session (optional): a
ClientSession.
**kwargs (optional): optional arguments to the dropIndexes command (like maxTimeMS) can be passed as keyword arguments.
Note
The
write_concern
of this collection is automatically applied to this operation when using MongoDB >= 3.4.
Changed in version 3.6: Added session parameter. Added support for arbitrary keyword arguments.
Changed in version 3.4: Apply this collection's write concern automatically to this operation when connected to MongoDB >= 3.4.
-
drop_indexes
(session=None, **kwargs)¶ Drops all indexes on this collection.
Can be used on non-existent collections or collections with no indexes. Raises OperationFailure on an error.
- Parameters
session (optional): a
ClientSession.
**kwargs (optional): optional arguments to the dropIndexes command (like maxTimeMS) can be passed as keyword arguments.
Note
The
write_concern
of this collection is automatically applied to this operation when using MongoDB >= 3.4.
Changed in version 3.6: Added session parameter. Added support for arbitrary keyword arguments.
Changed in version 3.4: Apply this collection's write concern automatically to this operation when connected to MongoDB >= 3.4.
-
estimated_document_count
(**kwargs)¶ Get an estimate of the number of documents in this collection using collection metadata.
The
estimated_document_count()
method is not supported in a transaction.
All optional parameters should be passed as keyword arguments to this method. Valid options include:
maxTimeMS (int): The maximum amount of time to allow this operation to run, in milliseconds.
- Parameters
**kwargs (optional): See list of options above.
New in version 3.7.
-
async
find
(document_query: mdocument.document.MDocument, *args, **kwargs)¶ Finds multiple documents and returns them with the provided type.
-
async
find_one
(document_query: mdocument.document.MDocument, *args, **kwargs)¶ Finds one document and returns it with the provided type.
-
find_one_and_delete
(filter, projection=None, sort=None, hint=None, session=None, **kwargs)¶ Finds a single document and deletes it, returning the document.
If we have a collection with 2 documents like
{'x': 1}
, then this code retrieves and deletes one of them:

async def delete_one_document():
    print(await db.test.count_documents({'x': 1}))
    doc = await db.test.find_one_and_delete({'x': 1})
    print(doc)
    print(await db.test.count_documents({'x': 1}))
This outputs something like:
2
{'x': 1, '_id': ObjectId('54f4e12bfba5220aa4d6dee8')}
1
If multiple documents match filter, a sort can be applied. Say we have 3 documents like:
{'x': 1, '_id': 0}
{'x': 1, '_id': 1}
{'x': 1, '_id': 2}
This code retrieves and deletes the document with the largest
_id
:

async def delete_with_largest_id():
    doc = await db.test.find_one_and_delete(
        {'x': 1}, sort=[('_id', pymongo.DESCENDING)])
    print(doc)
This deletes one document and prints it:
{'x': 1, '_id': 2}
The projection option can be used to limit the fields returned:
async def delete_and_return_x():
    print(await db.test.find_one_and_delete(
        {'x': 1}, projection={'_id': False}))
This prints:
{'x': 1}
- Parameters
filter: A query that matches the document to delete.
projection (optional): a list of field names that should be returned in the result document or a mapping specifying the fields to include or exclude. If projection is a list “_id” will always be returned. Use a mapping to exclude fields from the result (e.g. projection={‘_id’: False}).
sort (optional): a list of (key, direction) pairs specifying the sort order for the query. If multiple documents match the query, they are sorted and the first is deleted.
hint (optional): An index used to support the query predicate specified either by its string name, or in the same format as passed to
create_index()
(e.g. [('field', ASCENDING)]). This option is only supported on MongoDB 4.4 and above.
session (optional): a ClientSession, created with start_session().
**kwargs (optional): additional command arguments can be passed as keyword arguments (for example maxTimeMS can be used with recent server versions).
This command uses the
WriteConcern
of this Collection when connected to MongoDB >= 3.2. Note that using an elevated write concern with this command may be slower compared to using the default write concern.
Changed in version 2.2: Added hint parameter.
Changed in version 1.2: Added session parameter.
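The sort-then-delete semantics described above can be modelled with a toy in-memory version (plain Python, not the server protocol; the helper name mirrors the real method for readability only):

```python
# Toy model of find_one_and_delete with a sort: order the matching
# documents, delete the first, and return it.
def find_one_and_delete(docs, filter, sort=None):
    matches = [d for d in docs if all(d.get(k) == v for k, v in filter.items())]
    if not matches:
        return None
    if sort:
        key, direction = sort[0]
        matches.sort(key=lambda d: d[key], reverse=(direction == -1))
    victim = matches[0]
    docs.remove(victim)
    return victim

docs = [{'x': 1, '_id': 0}, {'x': 1, '_id': 1}, {'x': 1, '_id': 2}]
print(find_one_and_delete(docs, {'x': 1}, sort=[('_id', -1)]))
# {'x': 1, '_id': 2}
print(len(docs))  # 2
```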
-
find_one_and_replace
(filter, replacement, projection=None, sort=None, upsert=False, return_document=False, hint=None, session=None, **kwargs)¶ Finds a single document and replaces it, returning either the original or the replaced document.
The
find_one_and_replace()
method differs from find_one_and_update() by replacing the document matched by filter, rather than modifying the existing document.
Say we have 3 documents like:
{'x': 1, '_id': 0}
{'x': 1, '_id': 1}
{'x': 1, '_id': 2}
Replace one of them like so:
async def replace_one_doc():
    original_doc = await db.test.find_one_and_replace({'x': 1}, {'y': 1})
    print("original: %s" % original_doc)
    print("collection:")
    async for doc in db.test.find():
        print(doc)
This will print:
original: {'x': 1, '_id': 0}
collection:
{'y': 1, '_id': 0}
{'x': 1, '_id': 1}
{'x': 1, '_id': 2}
- Parameters
filter: A query that matches the document to replace.
replacement: The replacement document.
projection (optional): A list of field names that should be returned in the result document or a mapping specifying the fields to include or exclude. If projection is a list “_id” will always be returned. Use a mapping to exclude fields from the result (e.g. projection={‘_id’: False}).
sort (optional): a list of (key, direction) pairs specifying the sort order for the query. If multiple documents match the query, they are sorted and the first is replaced.
upsert (optional): When True, inserts a new document if no document matches the query. Defaults to False.
return_document: If ReturnDocument.BEFORE (the default), returns the original document before it was replaced, or None if no document matches. If ReturnDocument.AFTER, returns the replaced or inserted document.
hint (optional): An index to use to support the query predicate specified either by its string name, or in the same format as passed to create_index() (e.g. [('field', ASCENDING)]). This option is only supported on MongoDB 4.4 and above.
session (optional): a ClientSession, created with start_session().
**kwargs (optional): additional command arguments can be passed as keyword arguments (for example maxTimeMS can be used with recent server versions).
This command uses the
WriteConcern
of this Collection when connected to MongoDB >= 3.2. Note that using an elevated write concern with this command may be slower compared to using the default write concern.
Changed in version 2.2: Added hint parameter.
Changed in version 1.2: Added session parameter.
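The replace-versus-update distinction above can be made concrete with plain dicts (an illustration, not MongoDB semantics in full): a replacement swaps the whole document while keeping _id, whereas a $set-style update merges fields into the existing document.

```python
# Sketch: replacement discards all old fields except _id; a $set-style
# update merges the new fields into the existing document.
def replace_doc(doc, replacement):
    return {'_id': doc['_id'], **replacement}

def set_update(doc, changes):
    return {**doc, **changes}

doc = {'x': 1, 'y': 2, '_id': 0}
print(replace_doc(doc, {'y': 9}))  # {'_id': 0, 'y': 9}  -- 'x' is gone
print(set_update(doc, {'y': 9}))   # {'x': 1, 'y': 9, '_id': 0}
```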
-
find_one_and_update
(filter, update, projection=None, sort=None, upsert=False, return_document=False, array_filters=None, hint=None, session=None, **kwargs)¶ Finds a single document and updates it, returning either the original or the updated document. By default
find_one_and_update()
returns the original version of the document before the update was applied:

async def set_done():
    print(await db.test.find_one_and_update(
        {'_id': 665},
        {'$inc': {'count': 1}, '$set': {'done': True}}))
This outputs:
{'_id': 665, 'done': False, 'count': 25}
To return the updated version of the document instead, use the return_document option.
from pymongo import ReturnDocument

async def increment_by_userid():
    print(await db.example.find_one_and_update(
        {'_id': 'userid'},
        {'$inc': {'seq': 1}},
        return_document=ReturnDocument.AFTER))
This prints:
{'_id': 'userid', 'seq': 1}
You can limit the fields returned with the projection option.
async def increment_by_userid():
    print(await db.example.find_one_and_update(
        {'_id': 'userid'},
        {'$inc': {'seq': 1}},
        projection={'seq': True, '_id': False},
        return_document=ReturnDocument.AFTER))
This results in:
{'seq': 2}
The upsert option can be used to create the document if it doesn’t already exist.
async def increment_by_userid():
    print(await db.example.find_one_and_update(
        {'_id': 'userid'},
        {'$inc': {'seq': 1}},
        projection={'seq': True, '_id': False},
        upsert=True,
        return_document=ReturnDocument.AFTER))
The result:
{'seq': 1}
If multiple documents match filter, a sort can be applied. Say we have these two documents:
{'_id': 665, 'done': True, 'result': {'count': 26}}
{'_id': 701, 'done': True, 'result': {'count': 17}}
Then to update the one with the greatest _id:

async def set_done():
    print(await db.test.find_one_and_update(
        {'done': True},
        {'$set': {'final': True}},
        sort=[('_id', pymongo.DESCENDING)]))
This would print:
{'_id': 701, 'done': True, 'result': {'count': 17}}
- Parameters
filter: A query that matches the document to update.
update: The update operations to apply.
projection (optional): A list of field names that should be returned in the result document or a mapping specifying the fields to include or exclude. If projection is a list “_id” will always be returned. Use a dict to exclude fields from the result (e.g. projection={‘_id’: False}).
sort (optional): a list of (key, direction) pairs specifying the sort order for the query. If multiple documents match the query, they are sorted and the first is updated.
upsert (optional): When True, inserts a new document if no document matches the query. Defaults to False.
return_document: If ReturnDocument.BEFORE (the default), returns the original document before it was updated, or None if no document matches. If ReturnDocument.AFTER, returns the updated or inserted document.
array_filters (optional): A list of filters specifying which array elements an update should apply. Requires MongoDB 3.6+.
hint (optional): An index to use to support the query predicate specified either by its string name, or in the same format as passed to create_index() (e.g. [('field', ASCENDING)]). This option is only supported on MongoDB 4.4 and above.
session (optional): a ClientSession, created with start_session().
**kwargs (optional): additional command arguments can be passed as keyword arguments (for example maxTimeMS can be used with recent server versions).
This command uses the
WriteConcern
of this Collection when connected to MongoDB >= 3.2. Note that using an elevated write concern with this command may be slower compared to using the default write concern.
Changed in version 2.2: Added hint parameter.
Changed in version 1.2: Added array_filters and session parameters.
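The return_document option can be illustrated with a toy in-memory store (the BEFORE/AFTER constants below are local stand-ins for pymongo.ReturnDocument, and the helper mirrors the real method's name for readability only):

```python
# Toy illustration of return_document: BEFORE returns the pre-update
# document, AFTER returns the post-update one.
BEFORE, AFTER = False, True

def find_one_and_update(store, key, inc_field, return_document=BEFORE):
    before = dict(store[key])                              # snapshot pre-update state
    store[key][inc_field] = store[key].get(inc_field, 0) + 1  # apply $inc
    return store[key] if return_document == AFTER else before

store = {'userid': {'seq': 0}}
print(find_one_and_update(store, 'userid', 'seq'))                          # {'seq': 0}
print(find_one_and_update(store, 'userid', 'seq', return_document=AFTER))  # {'seq': 2}
```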
-
find_raw_batches
(*args, **kwargs)¶ Query the database and retrieve batches of raw BSON.
Similar to the
find()
method but returns each batch as bytes.
This example demonstrates how to work with raw batches, but in practice raw batches should be passed to an external library that can decode BSON into another data type, rather than used with PyMongo's
bson
module.

async def get_raw():
    cursor = db.test.find_raw_batches()
    async for batch in cursor:
        print(bson.decode_all(batch))
Note that
find_raw_batches
does not support sessions.
New in version 2.0.
-
property
full_name
¶ The full name of this
Collection
.
The full name is of the form database_name.collection_name.
-
index_information
(session=None)¶ Get information on this collection’s indexes.
Returns a dictionary where the keys are index names (as returned by create_index()) and the values are dictionaries containing information about each index. The dictionary is guaranteed to contain at least a single key,
"key"
which is a list of (key, direction) pairs specifying the index (as passed to create_index()). It will also contain any other metadata about the indexes, except for the "ns" and "name" keys, which are cleaned. For example:

async def create_x_index():
    print(await db.test.create_index("x", unique=True))
    print(await db.test.index_information())
This prints:
'x_1'
{'_id_': {'key': [('_id', 1)]}, 'x_1': {'unique': True, 'key': [('x', 1)]}}
Changed in version 1.2: Added session parameter.
-
inline_map_reduce
(map, reduce, full_response=False, session=None, **kwargs)¶ Perform an inline map/reduce operation on this collection.
Perform the map/reduce operation on the server in RAM. A result collection is not created. The result set is returned as a list of documents.
If full_response is
False
(default) returns the result documents in a list. Otherwise, returns the full response from the server to the map reduce command.
The
inline_map_reduce()
method obeys the read_preference of this Collection.
Parameters
map: map function (as a JavaScript string)
reduce: reduce function (as a JavaScript string)
full_response (optional): if
True
, return full response to this command; otherwise just return the result collection
session (optional): a ClientSession.
**kwargs (optional): additional arguments to the map reduce command may be passed as keyword arguments to this helper method, e.g.:
>>> db.test.inline_map_reduce(map, reduce, limit=2)
Changed in version 3.6: Added
session
parameter.
Changed in version 3.4: Added the collation option.
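The map/reduce flow itself is easy to picture in plain Python: map emits (key, value) pairs per document, the pairs are grouped by key, and reduce folds each group. inline_map_reduce does the same thing server-side, entirely in RAM. A sketch:

```python
# Pure-Python sketch of the map/reduce flow: emit, group by key, reduce.
from collections import defaultdict
from functools import reduce

docs = [{'tag': 'a', 'n': 1}, {'tag': 'b', 'n': 2}, {'tag': 'a', 'n': 3}]

def map_fn(doc):
    yield doc['tag'], doc['n']          # emit(key, value)

def reduce_fn(values):
    return reduce(lambda a, b: a + b, values)

groups = defaultdict(list)
for doc in docs:
    for key, value in map_fn(doc):
        groups[key].append(value)

result = {key: reduce_fn(values) for key, values in groups.items()}
print(result)  # {'a': 4, 'b': 2}
```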
-
async
insert_many
(documents: List[mdocument.document.MDocument], *args, **kwargs)¶ Inserts multiple documents into the database.
-
async
insert_one
(document, *args, **kwargs)¶ Inserts one document into the database.
-
list_indexes
(session=None)¶ Get a cursor over the index documents for this collection.
async def print_indexes():
    async for index in db.test.list_indexes():
        print(index)
If the only index is the default index on
_id
, this might print:

SON([('v', 1), ('key', SON([('_id', 1)])), ('name', '_id_')])
-
async
map_reduce
(map, reduce, out, full_response=False, session=None, **kwargs)¶ Perform a map/reduce operation on this collection.
If full_response is
False
(default) returns a MotorCollection instance containing the results of the operation. Otherwise, returns the full response from the server to the map reduce command.
Parameters
map: map function (as a JavaScript string)
reduce: reduce function (as a JavaScript string)
out: output collection name or out object (dict). See the map reduce command documentation for available options. Note: out options are order sensitive.
SON can be used to specify multiple options, e.g. SON([('replace', <collection name>), ('db', <database name>)]).
full_response (optional): if True, return full response to this command; otherwise just return the result collection
session (optional): a ClientSession, created with start_session().
**kwargs (optional): additional arguments to the map reduce command may be passed as keyword arguments to this helper method, e.g.:
result = await db.test.map_reduce(map, reduce, "myresults", limit=2)
Returns a Future.
Note
The
map_reduce()
method does not obey the read_preference of this MotorCollection. To run mapReduce on a secondary use the inline_map_reduce() method instead.
Changed in version 1.2: Added session parameter.
-
property
name
¶ The name of this
Collection
.
-
options
(session=None)¶ Get the options set on this collection.
Returns a dictionary of options and their values - see
create_collection()
for more information on the possible options. Returns an empty dictionary if the collection has not been created yet.
Parameters
session (optional): a
ClientSession
.
Changed in version 3.6: Added
session
parameter.
-
property
read_concern
¶ Read only access to the
ReadConcern
of this instance.New in version 3.2.
-
property
read_preference
¶ Read only access to the read preference of this instance.
Changed in version 3.0: The
read_preference
attribute is now read only.
-
reindex
(session=None, **kwargs)¶ DEPRECATED: Rebuild all indexes on this collection.
Deprecated. Use
command()
to run the reIndex command directly instead:

await db.command({"reIndex": "<collection_name>"})
Note
Starting in MongoDB 4.6, the reIndex command can only be run when connected to a standalone mongod.
- Parameters
session (optional): a
MotorClientSession.
**kwargs (optional): optional arguments to the reIndex command (like maxTimeMS) can be passed as keyword arguments.
Warning
reindex blocks all other operations (indexes are built in the foreground) and will be slow for large collections.
Changed in version 2.2: Deprecated.
-
rename
(new_name, session=None, **kwargs)¶ Rename this collection.
If operating in auth mode, client must be authorized as an admin to perform this operation. Raises
TypeError
if new_name is not an instance of basestring (str in Python 3). Raises InvalidName if new_name is not a valid collection name.
Parameters
new_name: new name for this collection
session (optional): a
ClientSession
.**kwargs (optional): additional arguments to the rename command may be passed as keyword arguments to this helper method (i.e.
dropTarget=True
)
Note
The
write_concern
of this collection is automatically applied to this operation when using MongoDB >= 3.4.Changed in version 3.6: Added
session
parameter.Changed in version 3.4: Apply this collection’s write concern automatically to this operation when connected to MongoDB >= 3.4.
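The dropTarget keyword is the most common extra argument in practice. The helper below is a hypothetical sketch (neither the function nor its name comes from this class); `db` is assumed to be an AsyncIOMotorDatabase, and in auth mode the client must be authorized as an admin.

```python
import asyncio

async def rename_replacing_target(db, old_name, new_name):
    # dropTarget=True drops any existing collection named `new_name`
    # instead of raising an error; without it the rename command fails
    # when the target already exists.
    await db[old_name].rename(new_name, dropTarget=True)
```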
-
replace_one(filter, replacement, upsert=False, bypass_document_validation=False, collation=None, hint=None, session=None)¶
Replace a single document matching the filter.
Say our collection has one document:
{'x': 1, '_id': ObjectId('54f4c5befba5220aa4d6dee7')}
Then to replace it with another:

async def _replace_x_with_y():
    result = await db.test.replace_one({'x': 1}, {'y': 1})
    print('matched %d, modified %d' % (result.matched_count, result.modified_count))
    print('collection:')
    async for doc in db.test.find():
        print(doc)

This prints:

matched 1, modified 1
collection:
{'y': 1, '_id': ObjectId('54f4c5befba5220aa4d6dee7')}

The upsert option can be used to insert a new document if a matching document does not exist:

async def _replace_or_upsert():
    result = await db.test.replace_one({'x': 1}, {'x': 1}, True)
    print('matched %d, modified %d, upserted_id %r' % (result.matched_count, result.modified_count, result.upserted_id))
    print('collection:')
    async for doc in db.test.find():
        print(doc)

This prints:

matched 0, modified 0, upserted_id ObjectId('54f11e5c8891e756a6e1abd4')
collection:
{'y': 1, '_id': ObjectId('54f4c5befba5220aa4d6dee7')}
{'x': 1, '_id': ObjectId('54f11e5c8891e756a6e1abd4')}

- Parameters
filter: A query that matches the document to replace.
replacement: The new document.
upsert (optional): If True, perform an insert if no documents match the filter.
bypass_document_validation: (optional) If True, allows the write to opt-out of document level validation. Default is False.
collation (optional): An instance of Collation. This option is only supported on MongoDB 3.4 and above.
hint (optional): An index to use to support the query predicate specified either by its string name, or in the same format as passed to create_index() (e.g. [('field', ASCENDING)]). This option is only supported on MongoDB 4.2 and above.
session (optional): a ClientSession, created with start_session().
- Returns
An instance of UpdateResult.
Note
bypass_document_validation requires server version >= 3.2
Changed in version 2.2: Added hint parameter.
Changed in version 1.2: Added session parameter.
-
async update_many(documents: List[mdocument.document.MDocument], *args, **kwargs)¶
Updates multiple documents in the database. Also updates related documents.
-
async update_one(document: mdocument.document.MDocument, *args, **kwargs)¶
Updates one document in the database. Also updates related documents.
-
watch(pipeline=None, full_document=None, resume_after=None, max_await_time_ms=None, batch_size=None, collation=None, start_at_operation_time=None, session=None, start_after=None)¶
Watch changes on this collection.
Performs an aggregation with an implicit initial $changeStream stage and returns a MotorChangeStream cursor which iterates over changes on this collection. Introduced in MongoDB 3.6.
A change stream continues waiting indefinitely for matching change events. Code like the following allows a program to cancel the change stream and exit.

change_stream = None

async def watch_collection():
    global change_stream

    # Using the change stream in an "async with" block
    # ensures it is canceled promptly if your code breaks
    # from the loop or throws an exception.
    async with db.collection.watch() as change_stream:
        async for change in change_stream:
            print(change)

# Tornado
from tornado.ioloop import IOLoop

def main():
    loop = IOLoop.current()
    # Start watching collection for changes.
    loop.add_callback(watch_collection)
    try:
        loop.start()
    except KeyboardInterrupt:
        pass
    finally:
        if change_stream is not None:
            change_stream.close()

# asyncio
from asyncio import get_event_loop

def main():
    loop = get_event_loop()
    task = loop.create_task(watch_collection())
    try:
        loop.run_forever()
    except KeyboardInterrupt:
        pass
    finally:
        if change_stream is not None:
            change_stream.close()
        # Prevent "Task was destroyed but it is pending!"
        loop.run_until_complete(task)

The MotorChangeStream async iterable blocks until the next change document is returned or an error is raised. If the next() method encounters a network error when retrieving a batch from the server, it will automatically attempt to recreate the cursor such that no change events are missed. Any error encountered during the resume attempt indicates there may be an outage and will be raised.

try:
    pipeline = [{'$match': {'operationType': 'insert'}}]
    async with db.collection.watch(pipeline) as stream:
        async for change in stream:
            print(change)
except pymongo.errors.PyMongoError:
    # The ChangeStream encountered an unrecoverable error or the
    # resume attempt failed to recreate the cursor.
    logging.error('...')

For a precise description of the resume process see the change streams specification.
- Parameters
pipeline (optional): A list of aggregation pipeline stages to append to an initial $changeStream stage. Not all pipeline stages are valid after a $changeStream stage; see the MongoDB documentation on change streams for the supported stages.
full_document (optional): The fullDocument option to pass to the $changeStream stage. Allowed values: 'updateLookup'. When set to 'updateLookup', the change notification for partial updates will include both a delta describing the changes to the document, as well as a copy of the entire document that was changed from some time after the change occurred.
resume_after (optional): A resume token. If provided, the change stream will start returning changes that occur directly after the operation specified in the resume token. A resume token is the _id value of a change document.
max_await_time_ms (optional): The maximum time in milliseconds for the server to wait for changes before responding to a getMore operation.
batch_size (optional): The maximum number of documents to return per batch.
collation (optional): The Collation to use for the aggregation.
session (optional): a ClientSession.
start_after (optional): The same as resume_after except that start_after can resume notifications after an invalidate event. This option and resume_after are mutually exclusive.
- Returns
A MotorChangeStream.
See the tornado_change_stream_example.
Changed in version 2.1: Added the start_after parameter.
New in version 1.2.
-
with_options(codec_options=None, read_preference=None, write_concern=None, read_concern=None)¶
Get a clone of this collection changing the specified settings.

>>> coll1.read_preference
Primary()
>>> from pymongo import ReadPreference
>>> coll2 = coll1.with_options(read_preference=ReadPreference.SECONDARY)
>>> coll1.read_preference
Primary()
>>> coll2.read_preference
Secondary(tag_sets=None)

- Parameters
codec_options (optional): An instance of CodecOptions. If None (the default) the codec_options of this Collection is used.
read_preference (optional): The read preference to use. If None (the default) the read_preference of this Collection is used. See read_preferences for options.
write_concern (optional): An instance of WriteConcern. If None (the default) the write_concern of this Collection is used.
read_concern (optional): An instance of ReadConcern. If None (the default) the read_concern of this Collection is used.
-
property write_concern¶
Read only access to the WriteConcern of this instance.
Changed in version 3.0: The write_concern attribute is now read only.
-