Main Operations

Core operations include question answering, statement decomposition, enrichment, and structured logging.

Typical workflows include:

  • Remember: store facts or text inputs.
  • Search: query memory with semantic + graph retrieval.
  • Generate Context: compile relevant facts into a prompt-ready block.

Remember

thinker.remember("Paris is the capital of France.")
thinker.remember("France is in Europe.")
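Remembered statements are typically broken into atomic facts before storage (the statement decomposition mentioned above). The sketch below illustrates that idea with a naive pattern-based splitter; it is not Deepthink's actual pipeline, and the `decompose` function and its triple format are hypothetical.

```python
# Toy illustration of statement decomposition: split a remembered
# sentence into (subject, relation, object) triples. This is NOT
# Deepthink's real pipeline -- just a naive pattern-based sketch.
import re

def decompose(statement: str) -> list[tuple[str, str, str]]:
    # Handle the simple "X is the Y of Z" and "X is in Y" patterns.
    m = re.match(r"(.+?) is the (.+?) of (.+?)\.?$", statement)
    if m:
        return [(m.group(1), m.group(2) + "_of", m.group(3))]
    m = re.match(r"(.+?) is in (.+?)\.?$", statement)
    if m:
        return [(m.group(1), "located_in", m.group(2))]
    return []

print(decompose("Paris is the capital of France."))
# [('Paris', 'capital_of', 'France')]
```

Storing triples rather than raw sentences is what makes graph retrieval possible later: each triple becomes an edge between two entity nodes.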

Search

answer = thinker.search("What is the capital of France?")
print(answer)
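Search ranks stored facts against the query before answering. As a stand-in for the semantic half of that retrieval, the sketch below ranks facts by word overlap; real semantic search uses embeddings, and `rank_facts` is a hypothetical helper, not part of Deepthink's API.

```python
# Toy stand-in for semantic retrieval: rank remembered facts by
# word overlap with the query. Real systems use embeddings; this
# only illustrates the retrieve-then-answer shape of search.
def rank_facts(query: str, facts: list[str]) -> list[str]:
    q = set(query.lower().rstrip("?").split())
    return sorted(
        facts,
        key=lambda f: -len(q & set(f.lower().rstrip(".").split())),
    )

facts = ["Paris is the capital of France.", "France is in Europe."]
print(rank_facts("What is the capital of France?", facts)[0])
# Paris is the capital of France.
```

The graph half of retrieval would then expand from the top-ranked facts along shared entities (e.g. from France to Europe), pulling in connected facts that share no words with the query.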

Generate Context

context = thinker.generate_context("Tell me about France.")
# pass context into your own LLM call

Use generate_context when you want to keep your own LLM orchestration while still benefiting from Deepthink's memory layer.
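Assuming generate_context returns a plain-text block of relevant facts (a reasonable guess; check the actual return type), wiring it into your own orchestration might look like the sketch below. The `build_prompt` helper and `call_llm` placeholder are hypothetical, standing in for your own code.

```python
# Sketch of wiring a generated context block into your own prompt.
# `context` stands in for the string generate_context would return;
# replace the commented call_llm with your actual LLM client.
context = "Paris is the capital of France.\nFrance is in Europe."

def build_prompt(context: str, question: str) -> str:
    # Keep the facts clearly separated from the question so the
    # model can be instructed to answer only from the given facts.
    return (
        "Use only the facts below to answer.\n\n"
        f"Facts:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_prompt(context, "Tell me about France.")
# call_llm(prompt)  # your own orchestration here
```

This keeps Deepthink as a pure memory layer: it supplies the facts, and your application controls the model, the prompt template, and any streaming or tool use around the call.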