From atum-stack-backend
MongoDB 7+ pattern library for Node.js and Python backends — schema design (embedded vs referenced documents, polymorphic patterns, attribute patterns, bucket pattern, computed pattern), indexes (single field, compound, multikey, text, geospatial, partial, TTL), aggregation pipeline ($match / $group / $lookup / $unwind / $project / $facet), transactions with session API (ACID multi-document since 4.0), change streams for CDC, official Node.js driver + Mongoose ODM (with TypeScript inference), Python via motor (async) + Beanie ODM, Atlas Search and Vector Search, sharding key selection, replica sets for HA, and backup strategies with mongodump + Atlas PITR. Use when designing or maintaining MongoDB schemas, optimizing slow queries, writing aggregations, implementing transactions, or migrating between relational and document models. Differentiates from postgres-patterns by covering document-oriented design patterns that don't apply to relational databases.
npx claudepluginhub arnwaldn/atum-plugins-collection --plugin atum-stack-backend

This skill uses the workspace's default tool permissions.
MongoDB is optimal for **hierarchical**, **schemaless** data, or data that needs to evolve flexibly. For purely relational data with foreign keys and heavy joins → PostgreSQL.
Embed when: the data is read together with its parent, the sub-document set is bounded in size, and the relationship is 1:1 or 1:few.
Reference when: the related data grows without bound, is shared by many parents, or is queried on its own.
// Embedded — profile in user
{
  _id: ObjectId("..."),
  email: "user@example.com",
  profile: {
    firstName: "Arnaud",
    lastName: "Porcel",
    avatar: "https://..."
  }
}
// Referenced — posts and users
// users collection
{ _id: ObjectId("u1"), email: "author@example.com" }
// posts collection
{ _id: ObjectId("p1"), title: "Hello", authorId: ObjectId("u1") }
Documents of different types in the same collection:
// events collection
{ _id: "e1", type: "login", userId: "u1", at: ISODate() }
{ _id: "e2", type: "purchase", userId: "u1", items: [...], total: 99.99 }
{ _id: "e3", type: "signup", userId: "u2", referrer: "google" }
An index on type speeds up filtering.
For variable attributes without creating N columns:
// Products with variable attributes
{
  _id: "p1",
  name: "Laptop",
  attributes: [
    { k: "ram", v: "16GB" },
    { k: "storage", v: "512GB SSD" },
    { k: "weight", v: "1.5kg" }
  ]
}
// Index on attributes.k
db.products.createIndex({ "attributes.k": 1, "attributes.v": 1 })
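Queries against this pattern must match k and v inside the *same* array element, which takes $elemMatch; a minimal sketch of a filter builder (the helper name is hypothetical, not part of the original examples):

```javascript
// Build an $elemMatch filter for the attribute pattern. A plain
// { "attributes.k": ..., "attributes.v": ... } filter could match k and v
// taken from two different array elements; $elemMatch pins both to one.
function attributeFilter(key, value) {
  return { attributes: { $elemMatch: { k: key, v: value } } }
}

// Usage (hypothetical): db.products.find(attributeFilter('ram', '16GB'))
```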
For time series — group events into time buckets:
// Instead of N documents (1 per measurement),
// 1 document per hour holding 3600 measurements
{
  _id: "sensor1_2026-04-08T10",
  sensorId: "sensor1",
  hour: ISODate("2026-04-08T10:00:00Z"),
  measurements: [
    { ts: ISODate("2026-04-08T10:00:00Z"), value: 22.5 },
    { ts: ISODate("2026-04-08T10:00:01Z"), value: 22.6 },
    // ... 3598 more
  ],
  count: 3600,
  avg: 22.55
}
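Writes into a bucket typically upsert on a deterministic bucket _id; a minimal sketch of deriving that key (the sensor1_2026-04-08T10 format above), assuming UTC hourly buckets:

```javascript
// Derive the hourly bucket _id: "<sensorId>_YYYY-MM-DDTHH" (UTC).
// toISOString() is always UTC; its first 13 characters are "YYYY-MM-DDTHH".
function bucketId(sensorId, ts) {
  return `${sensorId}_${ts.toISOString().slice(0, 13)}`
}
```

Each measurement then becomes one updateOne({ _id: bucketId(...) }, { $push: { measurements: ... }, $inc: { count: 1 } }, { upsert: true }).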
Pre-compute aggregations inside the document:
// Product with pre-computed rating
{
  _id: "p1",
  name: "Book",
  reviewCount: 150,
  ratingAvg: 4.3,
  ratingTotal: 645
}
Atomic update when a new review arrives. A plain update document cannot reference other fields, so computing ratingAvg takes an aggregation-pipeline update (MongoDB 4.2+):
await db.collection('products').updateOne(
  { _id: productId },
  [
    {
      $set: {
        reviewCount: { $add: ['$reviewCount', 1] },
        ratingTotal: { $add: ['$ratingTotal', newRating] },
        ratingAvg: {
          $divide: [
            { $add: ['$ratingTotal', newRating] },
            { $add: ['$reviewCount', 1] }
          ]
        }
      }
    }
  ]
)
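The incremental-average arithmetic can be checked in isolation; a sketch of the same math as a pure function (names are hypothetical):

```javascript
// Same arithmetic as the update above, as a pure function:
// the new average is (ratingTotal + newRating) / (reviewCount + 1).
function applyReview(doc, newRating) {
  const reviewCount = doc.reviewCount + 1
  const ratingTotal = doc.ratingTotal + newRating
  return { ...doc, reviewCount, ratingTotal, ratingAvg: ratingTotal / reviewCount }
}
```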
Store a subset of the sub-documents in the parent to avoid a join:
{
  _id: "p1",
  name: "Laptop",
  // 3 most recent reviews embedded
  recentReviews: [
    { _id: "r100", rating: 5, text: "..." },
    { _id: "r99", rating: 4, text: "..." },
    { _id: "r98", rating: 5, text: "..." }
  ],
  // All reviews live in a separate collection
  totalReviews: 1500
}
// Single field
db.users.createIndex({ email: 1 }, { unique: true })
// Compound — order matters!
db.posts.createIndex({ authorId: 1, publishedAt: -1 })
// Multikey — for arrays
db.products.createIndex({ tags: 1 })
// Text search
db.articles.createIndex({ title: "text", body: "text" })
// Geospatial 2dsphere
db.places.createIndex({ location: "2dsphere" })
// Partial — indexes only matching docs
db.orders.createIndex(
  { userId: 1, status: 1 },
  { partialFilterExpression: { status: "pending" } }
)
// TTL — auto-delete once expiresAt has passed
db.sessions.createIndex({ expiresAt: 1 }, { expireAfterSeconds: 0 })
// Wildcard — indexes every field
db.dynamic.createIndex({ "$**": 1 })
// Full example: top 10 authors by posts published this month
const result = await db.collection('posts').aggregate([
  {
    $match: {
      publishedAt: { $gte: startOfMonth, $lt: endOfMonth },
      status: 'published'
    }
  },
  {
    $group: {
      _id: '$authorId',
      postCount: { $sum: 1 },
      lastPost: { $max: '$publishedAt' }
    }
  },
  { $sort: { postCount: -1 } },
  { $limit: 10 },
  {
    $lookup: {
      from: 'users',
      localField: '_id',
      foreignField: '_id',
      as: 'author'
    }
  },
  { $unwind: '$author' },
  {
    $project: {
      authorName: '$author.name',
      postCount: 1,
      lastPost: 1,
      _id: 0
    }
  }
]).toArray()
| Stage | Usage |
|---|---|
| $match | Filter — place it first to shrink the working set |
| $group | Group by + aggregations |
| $lookup | Left outer join to another collection |
| $unwind | Deconstruct an array into multiple docs |
| $project | Reshape fields |
| $facet | Multiple pipelines in parallel (single pass) |
| $bucket | Group into defined buckets |
| $sortByCount | Simple top-N |
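$facet runs several sub-pipelines over the same input in one pass; a sketch of a pipeline value combining a total count with a top-5 list (field names assume the posts collection shape from the example above):

```javascript
// One $facet stage, two sub-pipelines over the same matched documents:
// "total" counts everything, "topAuthors" keeps the 5 most prolific authors.
const pipeline = [
  { $match: { status: 'published' } },
  {
    $facet: {
      total: [{ $count: 'n' }],
      topAuthors: [
        { $group: { _id: '$authorId', postCount: { $sum: 1 } } },
        { $sort: { postCount: -1 } },
        { $limit: 5 }
      ]
    }
  }
]

// Usage (hypothetical): await db.collection('posts').aggregate(pipeline).toArray()
```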
const session = client.startSession()
try {
  await session.withTransaction(async () => {
    await db.collection('accounts').updateOne(
      { _id: fromAccountId },
      { $inc: { balance: -amount } },
      { session }
    )
    await db.collection('accounts').updateOne(
      { _id: toAccountId },
      { $inc: { balance: amount } },
      { session }
    )
    await db.collection('transactions').insertOne(
      { from: fromAccountId, to: toAccountId, amount, at: new Date() },
      { session }
    )
  })
} finally {
  await session.endSession()
}
Limitations: transactions require a replica set or sharded cluster (not a standalone mongod), abort after 60 seconds by default, and carry a performance cost — prefer single-document atomicity when the schema allows it.
const changeStream = db.collection('orders').watch([
  { $match: { 'fullDocument.status': 'paid' } }
], { fullDocument: 'updateLookup' })

changeStream.on('change', async (change) => {
  if (change.operationType === 'update') {
    await sendOrderConfirmationEmail(change.fullDocument)
  }
})
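A production consumer should persist the resume token after each processed event so the stream can pick up where it left off after a restart; a sketch of the option building only (how the token is stored is up to you, and the helper name is hypothetical):

```javascript
// Build watch() options that resume from a previously stored token, if any.
// Each change event carries its resume token in change._id — persist it
// after processing, and pass it back as resumeAfter on restart.
function watchOptions(storedToken) {
  const options = { fullDocument: 'updateLookup' }
  if (storedToken) options.resumeAfter = storedToken
  return options
}

// Usage (hypothetical):
// db.collection('orders').watch(pipeline, watchOptions(tokenFromDisk))
```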
Use cases: sending emails, syncing to Elasticsearch, audit logs, outbound webhooks.
npm install mongodb
import { MongoClient } from 'mongodb'
const client = new MongoClient(process.env.MONGO_URI!, {
  serverSelectionTimeoutMS: 5000,
})
await client.connect()
const db = client.db('myapp')
const users = db.collection<User>('users')
const user = await users.findOne({ email: 'user@example.com' })
npm install mongoose
import mongoose from 'mongoose'
const UserSchema = new mongoose.Schema({
  email: { type: String, required: true, unique: true, lowercase: true, trim: true },
  name: { type: String, required: true },
  createdAt: { type: Date, default: Date.now }
})

// No separate UserSchema.index({ email: 1 }) call: unique: true already creates it
UserSchema.pre('save', function (next) {
  if (this.isModified('email')) this.email = this.email.toLowerCase()
  next()
})

export const User = mongoose.model('User', UserSchema)
pip install motor
from motor.motor_asyncio import AsyncIOMotorClient
client = AsyncIOMotorClient("mongodb://localhost:27017")
db = client.myapp
async def find_user(email: str):
    return await db.users.find_one({"email": email})
pip install beanie
from datetime import datetime

from beanie import Document, Indexed
from pydantic import EmailStr, Field

class User(Document):
    email: Indexed(EmailStr, unique=True)
    name: str
    created_at: datetime = Field(default_factory=datetime.utcnow)

    class Settings:
        name = "users"
# Query
user = await User.find_one(User.email == "user@example.com")
// Create the index in the Atlas UI, then:
const results = await db.collection('articles').aggregate([
  {
    $search: {
      index: 'default',
      text: {
        query: 'machine learning',
        path: ['title', 'body'],
        fuzzy: {}
      }
    }
  },
  { $limit: 10 }
]).toArray()
const embedding = await getEmbedding('query text')
const results = await db.collection('documents').aggregate([
  {
    $vectorSearch: {
      index: 'vector_index',
      path: 'embedding',
      queryVector: embedding,
      numCandidates: 100,
      limit: 10
    }
  }
]).toArray()
Rules: pick a key with high cardinality and even write distribution, one that appears in your most common queries.
Good candidates: { userId: 1, createdAt: 1 }, { tenantId: 1, _id: 1 }
Bad candidates: _id alone (hotspot on the last shard), a bare timestamp (hotspot on recent writes)
Anti-patterns:
- Unbounded arrays — an orders.items: [...] that grows forever eventually hits the 16 MB document limit
- Unanchored $regex — $regex: "foo" is slow; $regex: "^foo" can use the index
- $lookup everywhere — MongoDB is not relational; past 3 joins, rethink the schema
- Read-modify-write round trips instead of atomic operators ($inc, $push, $set)
- w: 0 — no acknowledgment, guaranteed data loss
- findOneAndUpdate with upsert: true but no unique index → race condition
- skip for deep pagination — O(N) on the skip; use _id > lastSeenId instead

Backups and HA:
- mongodump cron job (plus Atlas PITR)
- w: "majority" for critical writes
- Read preference: primary by default, secondaryPreferred for reporting

See also: postgres-patterns (in this plugin), redis-patterns (in this plugin), database-reviewer, migration-expert
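The _id > lastSeenId pagination mentioned above can be sketched as a small filter builder (names are hypothetical):

```javascript
// Cursor-based pagination: instead of skip (which is O(N) in what it skips),
// filter past the last _id already returned and keep a stable sort on _id.
function pageQuery(lastSeenId, pageSize = 20) {
  return {
    filter: lastSeenId ? { _id: { $gt: lastSeenId } } : {},
    sort: { _id: 1 },
    limit: pageSize
  }
}

// Usage (hypothetical):
// const { filter, sort, limit } = pageQuery(lastId)
// const page = await db.collection('posts').find(filter).sort(sort).limit(limit).toArray()
```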