Embeddings

By default, Deepthink uses a local embedding module; alternatively, you can opt into an API-backed embedding provider for faster application bootstrapping.

To use API embeddings, pass an EmbeddingConfig alongside your LLM configuration:

Example: EmbeddingConfig (API)
from analogai.deepthink.thinker import DeepThinker
from analogai.deepthink.infrastructure.gateways.llm.config import LlmConfig
from analogai.deepthink.infrastructure.gateways.llm.enums.llm_provider import LlmProvider
from analogai.deepthink.infrastructure.gateways.embedding.config import EmbeddingConfig
from analogai.foundation.common.enums.embedding_type import EmbeddingType
from analogai.foundation.common.enums.embedding_provider import EmbeddingProvider

thinker = DeepThinker(
    llm_config=LlmConfig(
        provider=LlmProvider.AZURE,
        model="gpt-4o-mini",
    ),
    embedding_config=EmbeddingConfig(
        embed_type=EmbeddingType.API,
        provider=EmbeddingProvider.AZURE,
        model="text-embedding-3-small",
    ),
)

If you omit embedding_config, Deepthink will use the local embedding module.
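For comparison, here is a minimal sketch of the default local setup. It reuses only the imports and parameters shown in the API example above; no embedding-specific imports are needed since embedding_config is omitted:

```python
from analogai.deepthink.thinker import DeepThinker
from analogai.deepthink.infrastructure.gateways.llm.config import LlmConfig
from analogai.deepthink.infrastructure.gateways.llm.enums.llm_provider import LlmProvider

# No embedding_config given, so Deepthink falls back to the
# local embedding module.
thinker = DeepThinker(
    llm_config=LlmConfig(
        provider=LlmProvider.AZURE,
        model="gpt-4o-mini",
    ),
)
```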