Location: `app/core/providers.py`
This module provides a standardized and extensible interface for interacting with multiple Large Language Model (LLM) APIs. It enables pluggable support for models like DeepSeek and Gemini using a unified abstraction layer.
The module's behavior is controlled by environment variables, primarily for authentication and model selection.
| Variable | Description | Default Value |
|---|---|---|
| `DEEPSEEK_API_KEY` | Your DeepSeek API key (required) | None |
| `GEMINI_API_KEY` | Your Google Gemini API key (required) | None |
| `DEEPSEEK_MODEL_NAME` | Name of the DeepSeek model to use | `"deepseek-chat"` |
| `GEMINI_MODEL_NAME` | Name of the Gemini model to use | `"gemini-1.5-flash-latest"` |
💡 Tip: Set these variables in your `.env` file or environment before running the application.
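
For example, assuming the `python-dotenv` package is used to load a local `.env` file (an assumption; any mechanism that populates the environment works), the variables might be loaded and read like this:

```python
import os

from dotenv import load_dotenv  # assumption: python-dotenv is installed

load_dotenv()  # reads KEY=value pairs from a local .env into the environment

# Defaults mirror the table above; only the API keys are strictly required.
deepseek_model = os.getenv("DEEPSEEK_MODEL_NAME", "deepseek-chat")
gemini_model = os.getenv("GEMINI_MODEL_NAME", "gemini-1.5-flash-latest")
```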
The system uses a provider interface pattern to support multiple LLMs with a consistent API.
`LLMProvider` (Abstract Base Class) defines the required method:

```python
def generate_response(self, prompt: str) -> str
```

All concrete providers must implement this method.
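
A minimal sketch of the base class, assuming the standard-library `abc` module (the method signature comes from this page; the rest is illustrative):

```python
from abc import ABC, abstractmethod


class LLMProvider(ABC):
    """Common interface implemented by every concrete LLM provider."""

    @abstractmethod
    def generate_response(self, prompt: str) -> str:
        """Send `prompt` to the underlying model and return its text reply."""
        raise NotImplementedError
```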
`DeepSeekProvider` implements the `LLMProvider` interface. It authenticates with `DEEPSEEK_API_KEY` and uses the model named by `DEEPSEEK_MODEL_NAME`.
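
A sketch of what this provider might look like, assuming DeepSeek's OpenAI-compatible chat-completions endpoint and the `requests` library (the actual implementation may use a different HTTP client or endpoint):

```python
import os

import requests


class DeepSeekProvider(LLMProvider):
    """Sketch: calls DeepSeek's OpenAI-compatible chat completions API."""

    def __init__(self) -> None:
        self.api_key = os.environ["DEEPSEEK_API_KEY"]  # required, no default
        self.model = os.getenv("DEEPSEEK_MODEL_NAME", "deepseek-chat")

    def generate_response(self, prompt: str) -> str:
        resp = requests.post(
            "https://api.deepseek.com/chat/completions",  # assumed endpoint
            headers={"Authorization": f"Bearer {self.api_key}"},
            json={
                "model": self.model,
                "messages": [{"role": "user", "content": prompt}],
            },
            timeout=60,
        )
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]
```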
`GeminiProvider` implements the `LLMProvider` interface. It authenticates with `GEMINI_API_KEY` and uses the model named by `GEMINI_MODEL_NAME`.
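
Similarly, a sketch assuming the `google-generativeai` SDK (an assumption about the implementation):

```python
import os

import google.generativeai as genai


class GeminiProvider(LLMProvider):
    """Sketch: calls the Gemini API via the google-generativeai SDK."""

    def __init__(self) -> None:
        genai.configure(api_key=os.environ["GEMINI_API_KEY"])  # required
        model_name = os.getenv("GEMINI_MODEL_NAME", "gemini-1.5-flash-latest")
        self.model = genai.GenerativeModel(model_name)

    def generate_response(self, prompt: str) -> str:
        return self.model.generate_content(prompt).text
```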
`get_llm_provider(model_name: str) -> LLMProvider`

```python
provider = get_llm_provider("deepseek")
response = provider.generate_response("Tell me a joke.")
```

- `model_name`: `"deepseek"` or `"gemini"`
- Raises `ValueError` if an unsupported `model_name` is passed.
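
The documented behavior maps onto a simple dispatch; a minimal sketch (the class names come from this page, the body is illustrative):

```python
def get_llm_provider(model_name: str) -> LLMProvider:
    """Return a provider instance for `model_name`, or raise ValueError."""
    providers = {
        "deepseek": DeepSeekProvider,
        "gemini": GeminiProvider,
    }
    try:
        return providers[model_name]()
    except KeyError:
        raise ValueError(f"Unsupported model_name: {model_name!r}") from None
```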
Use the `get_llm_provider()` factory function to get a provider instance, then call `generate_response()` on it:
```python
from app.core.providers import get_llm_provider

# Get provider
llm = get_llm_provider("gemini")

# Generate response
response = llm.generate_response("What is the capital of France?")
print(response)
```
This design lets you swap or extend LLMs (e.g., add OpenAI or Anthropic) by implementing a new provider class and registering it with the factory.
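
For instance, a hypothetical `OpenAIProvider` (name and details invented for illustration) only needs to subclass `LLMProvider` and be added to the factory's dispatch:

```python
class OpenAIProvider(LLMProvider):
    """Hypothetical provider illustrating how the module can be extended."""

    def generate_response(self, prompt: str) -> str:
        # Call the new backend's API here and return its text reply.
        raise NotImplementedError("wire up the real API call here")
```

Adding `"openai": OpenAIProvider` to the factory's mapping would then make `get_llm_provider("openai")` work without touching any calling code.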