The v2 client is available on Pro, Team, and Enterprise plans.

Quick Start

from nined.memory import MemoryClientV2

client = MemoryClientV2(
    base_url="https://api.9dlabs.xyz",
    api_key="your-key",
    workspace_id="my-workspace",
)

# Batch ingest
client.ingest([
    {"artifact_type": "document", "raw_payload": {"content": "Deploy runbook..."}},
    {"artifact_type": "note", "raw_payload": {"content": "Meeting notes..."}},
])

# Retrieve with serving profile
pack = client.context_pack("What happened?", profile="high_recall")
for snippet in pack["snippets"]:
    print(snippet["content"])

Constructor

MemoryClientV2(
    base_url: str = "https://api.9dlabs.xyz",
    api_key: str = "",
    workspace_id: str = "",
    timeout: int = 30,
)
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| base_url | str | https://api.9dlabs.xyz | Server URL |
| api_key | str | "" | API key |
| workspace_id | str | "" | Default workspace (used for all methods) |
| timeout | int | 30 | HTTP timeout in seconds |

Key Differences from v1

| Feature | v1 | v2 |
| --- | --- | --- |
| Ingest | Single artifact | Batch (array) |
| Workspace | Per-method parameter | Set once on client |
| Serving profiles | Not available | low_latency, balanced, high_recall |
| Async indexing | Via env var | Per-request via async_index param |
| Job tracking | Not available | job_status() method |

Methods

ingest()

Batch ingest multiple artifacts. Supports async indexing with job tracking.
# Synchronous
result = client.ingest([
    {"artifact_type": "document", "raw_payload": {"content": "..."}},
    {"artifact_type": "chat_turn", "raw_payload": {"role": "user", "content": "..."}},
])

# Async with job tracking
result = client.ingest([...], async_index=True)
for job in result["queued_jobs"]:
    status = client.job_status(job["job_id"])
    print(f"Job {job['job_id']}: {status['status']}")
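
When you queue many artifacts with async_index=True, a small polling loop can wait for indexing to finish. Here is a minimal sketch; only the "done" status appears in the examples on this page, so the "failed" terminal status and the poll/timeout defaults are assumptions to adjust for your deployment.

```python
import time

def wait_for_jobs(client, queued_jobs, poll_interval=1.0, timeout=60.0):
    """Poll client.job_status() until every queued job reaches a terminal state.

    Assumes "done" and "failed" are terminal statuses ("failed" is a
    hypothetical name -- check job_status() responses for your server).
    Returns a dict mapping job_id to its last observed status.
    """
    deadline = time.monotonic() + timeout
    pending = {job["job_id"] for job in queued_jobs}
    statuses = {}
    while pending and time.monotonic() < deadline:
        for job_id in list(pending):
            status = client.job_status(job_id)
            statuses[job_id] = status["status"]
            if status["status"] in ("done", "failed"):
                pending.discard(job_id)
        if pending:
            time.sleep(poll_interval)
    return statuses
```

Usage: `wait_for_jobs(client, result["queued_jobs"])` after an `ingest(..., async_index=True)` call.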

context_pack()

Retrieve context with serving profiles for different latency/recall tradeoffs.
pack = client.context_pack(
    "What is our pricing policy?",
    max_tokens=8192,
    profile="high_recall",
)

job_status()

Track async indexing jobs.
status = client.job_status("job-uuid")
# Returns: {"job_id": "...", "status": "done", "artifact_id": "...", ...}

ready()

Readiness check including storage health.
health = client.ready()
# Returns: {"status": "ok", "storage": "connected", ...}
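
Deployments often gate startup traffic on this endpoint. A small retry wrapper, assuming only the `"status": "ok"` shape shown above; the retry count and delay are arbitrary defaults.

```python
import time

def wait_until_ready(client, retries=5, delay=2.0):
    """Call client.ready() up to `retries` times, returning True once
    the server reports "status": "ok", else False."""
    for _ in range(retries):
        try:
            if client.ready().get("status") == "ok":
                return True
        except Exception:
            pass  # server not reachable yet; retry after a pause
        time.sleep(delay)
    return False
```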

Other Methods

The following methods work the same as v1 but use the workspace set on the client:
  • feedback(action, artifact_id, ...) — Submit correction
  • list_receipts(limit, offset) — List receipts
  • get_receipt(pack_hash) — Get receipt by pack hash
  • artifact_status(artifact_id) — Indexing status
  • ask(query, max_tokens, profile) — LLM-synthesized answer
  • delete_workspace() — Delete workspace
  • health() — Liveness check
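
`list_receipts(limit, offset)` lends itself to a paging helper. A sketch; the `"receipts"` response key is a hypothetical placeholder, so adjust it to the actual payload shape.

```python
def iter_receipts(client, page_size=50):
    """Yield every receipt by paging through client.list_receipts().

    Assumes receipts are returned under a "receipts" key (hypothetical);
    stops when a page comes back short.
    """
    offset = 0
    while True:
        page = client.list_receipts(limit=page_size, offset=offset)
        receipts = page.get("receipts", [])
        yield from receipts
        if len(receipts) < page_size:
            break
        offset += page_size
```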

Serving Profiles

| Profile | Use Case | Tradeoff |
| --- | --- | --- |
| low_latency | Real-time chat, quick lookups | Fastest retrieval, fewer results |
| balanced | General-purpose (default) | Good recall with reasonable latency |
| high_recall | Research, compliance, audits | Maximum evidence, higher latency |
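
One way to keep profile selection out of individual call sites is a small mapping from task type to profile. The task categories here are illustrative, not part of the API; only the three profile names come from the table above.

```python
def pick_profile(task):
    """Map an application task category (illustrative) to a serving profile.

    Unknown tasks fall back to "balanced", the documented default.
    """
    return {"chat": "low_latency", "audit": "high_recall"}.get(task, "balanced")
```

Usage: `client.context_pack(query, profile=pick_profile("audit"))`.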