diff --git a/ai-hub/app/api/routes/README.md b/ai-hub/app/api/routes/README.md
new file mode 100644
index 0000000..5fb40c6
--- /dev/null
+++ b/ai-hub/app/api/routes/README.md
@@ -0,0 +1,683 @@
+# Invoking the Text-to-Speech (TTS) API Endpoint
+
+This guide explains how a frontend application can interact with the FastAPI `/speech` endpoint for text-to-speech conversion. The endpoint supports both **non-streaming** and **streaming** audio responses.
+
+---
+
+## 1. Endpoint Details
+
+* **HTTP Method:** `POST`
+* **Path:** `/speech`
+* **Purpose:** Convert a given text string into audio.
+
+---
+
+## 2. Request Structure
+
+### 2.1 Request Body
+
+The POST request must include a JSON object matching the `SpeechRequest` schema.
+
+| Field | Type | Description | Example |
+| ----- | ------ | ------------------------------ | ---------------------------------- |
+| text | string | Text to be converted to speech | `"Hello, this is a test message."` |
+
+**Example JSON body:**
+
+```json
+{
+ "text": "The quick brown fox jumps over the lazy dog."
+}
+```
+
+---
+
+### 2.2 Query Parameter
+
+| Parameter | Type | Default | Description |
+| --------- | ------- | ------- | -------------------------------------------------------------------------------------- |
+| stream | boolean | false | If `true`, returns a continuous audio stream. If `false`, returns the full audio file. |
+
+**Example URLs:**
+
+* Non-streaming (Default):
+
+ ```
+ http://[your-api-server]/speech
+ ```
+
+* Streaming:
+
+ ```
+ http://[your-api-server]/speech?stream=true
+ ```
+
+---
+
+## 3. Frontend Implementation (JavaScript)
+
+Below are two implementations using the `fetch` API.
+
+---
+
+### Example 1: Non-Streaming Response
+
+Downloads the complete WAV file before playing. Suitable for short messages.
+
+```javascript
+// Generate and play non-streaming audio
+async function getSpeechAudio(text) {
+ const url = 'http://[your-api-server]/speech'; // Replace with your API URL
+
+ try {
+ const response = await fetch(url, {
+ method: 'POST',
+ headers: { 'Content-Type': 'application/json' },
+ body: JSON.stringify({ text })
+ });
+
+ if (!response.ok) {
+ throw new Error(`HTTP error! status: ${response.status}`);
+ }
+
+ const audioBlob = await response.blob();
+ const audioUrl = URL.createObjectURL(audioBlob);
+
+ const audio = new Audio(audioUrl);
+ audio.play();
+
+ console.log("Audio file received and is now playing.");
+ } catch (error) {
+ console.error("Failed to generate speech:", error);
+ }
+}
+
+// Example:
+// getSpeechAudio("This is an example of a non-streaming response.");
+```
+
+---
+
+### Example 2: Streaming Response
+
+Plays audio as it arrives using the **MediaSource API**. Ideal for long texts.
+
+```javascript
+// Stream audio and play as it arrives
+async function streamSpeechAudio(text) {
+ const url = 'http://[your-api-server]/speech?stream=true'; // Replace with your API URL
+
+ try {
+ const response = await fetch(url, {
+ method: 'POST',
+ headers: { 'Content-Type': 'application/json' },
+ body: JSON.stringify({ text })
+ });
+
+ if (!response.ok || !response.body) {
+ throw new Error(`HTTP error! status: ${response.status}`);
+ }
+
+ const mediaSource = new MediaSource();
+ const audio = new Audio();
+ audio.src = URL.createObjectURL(mediaSource);
+
+    mediaSource.addEventListener('sourceopen', async () => {
+      // Note: MediaSource support for 'audio/wav' varies by browser; a compressed
+      // container (e.g. 'audio/mpeg') may be needed for broad compatibility.
+      const sourceBuffer = mediaSource.addSourceBuffer('audio/wav');
+      const reader = response.body.getReader();
+
+      // appendBuffer() is asynchronous: wait for each append to finish
+      // ('updateend') before appending the next chunk.
+      const appendChunk = (chunk) => new Promise((resolve) => {
+        sourceBuffer.addEventListener('updateend', resolve, { once: true });
+        sourceBuffer.appendBuffer(chunk);
+      });
+
+      while (true) {
+        const { done, value } = await reader.read();
+        if (done) {
+          mediaSource.endOfStream();
+          break;
+        }
+        await appendChunk(value);
+      }
+    });
+
+ audio.play();
+ console.log("Streaming audio is starting...");
+ } catch (error) {
+ console.error("Failed to stream speech:", error);
+ }
+}
+
+// Example:
+// streamSpeechAudio("This is an example of a streaming response, which begins playing before the entire audio file is received.");
+```
+
+# Invoking the Speech-to-Text (STT) API Endpoint
+
+This document explains how a frontend application can interact with the FastAPI `/stt/transcribe` endpoint to transcribe an uploaded audio file into text.
+
+---
+
+## 1. Endpoint Details
+
+* **HTTP Method:** `POST`
+* **Path:** `/stt/transcribe`
+* **Purpose:** Transcribe an uploaded audio file into text.
+* **Content Type:** `multipart/form-data`
+
+---
+
+## 2. Request Structure
+
+### 2.1 Request Body
+
+The POST request must be sent as `multipart/form-data` with a single file field named `audio_file`.
+
+| Field | Type | Description |
+| ----------- | ---- | -------------------------------- |
+| audio\_file | File | The audio file to be transcribed |
+
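+A minimal `fetch` sketch of this request (the name of the transcription field in the JSON response is an assumption; adjust it to the actual schema):
+
+```javascript
+// Send an audio File or Blob to the STT endpoint as multipart/form-data.
+async function transcribeAudio(file) {
+  const formData = new FormData();
+  formData.append('audio_file', file); // field name required by the endpoint
+
+  const response = await fetch('http://[your-api-server]/stt/transcribe', {
+    method: 'POST',
+    body: formData // fetch sets the multipart boundary automatically
+  });
+  if (!response.ok) throw new Error(`HTTP error! status: ${response.status}`);
+
+  const data = await response.json();
+  return data.transcription ?? data; // assumed field name
+}
+```
+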
+---
+
+## 3. Frontend Implementation (JavaScript + HTML)
+
+Below is a complete working example using `fetch` to send the file and display the transcription result.
+
+```html
+<!DOCTYPE html>
+<html lang="en">
+<head>
+    <meta charset="UTF-8">
+    <title>STT API Example</title>
+</head>
+<body>
+    <h1>Speech-to-Text (STT) Transcription</h1>
+
+    <input type="file" id="audio-file-input" accept="audio/*">
+    <button id="transcribe-button">Transcribe</button>
+
+    <p id="status" hidden>Transcribing...</p>
+    <p id="result">Your transcribed text will appear here.</p>
+
+    <script>
+        const API_URL = 'http://[your-api-server]/stt/transcribe'; // Replace with your API URL
+
+        document.getElementById('transcribe-button').addEventListener('click', async () => {
+            const fileInput = document.getElementById('audio-file-input');
+            const status = document.getElementById('status');
+            const result = document.getElementById('result');
+
+            if (!fileInput.files.length) {
+                result.textContent = 'Please choose an audio file first.';
+                return;
+            }
+
+            // The endpoint expects multipart/form-data with a single "audio_file" field.
+            const formData = new FormData();
+            formData.append('audio_file', fileInput.files[0]);
+
+            status.hidden = false;
+            try {
+                const response = await fetch(API_URL, { method: 'POST', body: formData });
+                if (!response.ok) throw new Error(`HTTP error! status: ${response.status}`);
+                const data = await response.json();
+                // The transcription field name depends on the API's response schema.
+                result.textContent = data.transcription ?? JSON.stringify(data);
+            } catch (error) {
+                result.textContent = `Transcription failed: ${error.message}`;
+            } finally {
+                status.hidden = true;
+            }
+        });
+    </script>
+</body>
+</html>
+```
+
+
+---
+
+# Invoking the Chat Sessions API Endpoint
+
+This document describes how a frontend application can interact with the FastAPI `/sessions` endpoints. These endpoints allow you to:
+
+* Create new chat sessions
+* Send messages within a session
+* Retrieve chat history
+
+---
+
+## 1. Endpoint Details
+
+| HTTP Method | Path | Purpose | Request Type |
+| ----------- | --------------------------------- | ------------------------------------------------------------- | ------------------ |
+| **POST** | `/sessions/` | Creates a new chat session | `application/json` |
+| **POST** | `/sessions/{session_id}/chat` | Sends a message and receives a response in a specific session | `application/json` |
+| **GET** | `/sessions/{session_id}/messages` | Retrieves the message history for a given session | N/A |
+
+---
+
+## 2. Request & Response Structures
+
+### 2.1 Create a New Chat Session
+
+**POST** `/sessions/`
+
+**Request Body:**
+
+| Field | Type | Description |
+| -------- | ------ | ----------------------------------- |
+| user\_id | string | ID of the user creating the session |
+| model | string | Model to use for the session |
+
+**Example Request:**
+
+```json
+{
+ "user_id": "user-1234",
+ "model": "gemini"
+}
+```
+
+**Response Body:**
+
+| Field | Type | Description |
+| ----------- | ------- | -------------------------- |
+| id | integer | Session ID |
+| user\_id | string | User ID |
+| created\_at | string | Session creation timestamp |
+| model | string | Model used |
+
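+A minimal `fetch` sketch for creating a session:
+
+```javascript
+// Create a new chat session and return the parsed response.
+async function createSession(userId, model = 'gemini') {
+  const response = await fetch('http://[your-api-server]/sessions/', {
+    method: 'POST',
+    headers: { 'Content-Type': 'application/json' },
+    body: JSON.stringify({ user_id: userId, model })
+  });
+  if (!response.ok) throw new Error(`HTTP error! status: ${response.status}`);
+  return response.json(); // { id, user_id, created_at, model }
+}
+```
+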
+---
+
+### 2.2 Send a Message in a Session
+
+**POST** `/sessions/{session_id}/chat`
+
+**Path Parameter:**
+
+| Name | Type | Description |
+| ----------- | ------- | ----------------- |
+| session\_id | integer | Unique session ID |
+
+**Request Body:**
+
+| Field | Type | Description |
+| ---------------------- | ------- | ----------------------------------------------------- |
+| prompt | string | User message |
+| model | string | Model for this message (can override session default) |
+| load\_faiss\_retriever | boolean | Whether to use FAISS retriever |
+
+**Example Request:**
+
+```json
+{
+ "prompt": "What is the capital of France?",
+ "model": "gemini",
+ "load_faiss_retriever": false
+}
+```
+
+**Response Body:**
+
+| Field | Type | Description |
+| ----------- | ------ | --------------------------- |
+| answer | string | Model's answer |
+| model\_used | string | Model used for the response |
+
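+A minimal `fetch` sketch for sending a message:
+
+```javascript
+// Send a prompt within an existing session and return the model's answer.
+async function sendMessage(sessionId, prompt, model = 'gemini') {
+  const response = await fetch(`http://[your-api-server]/sessions/${sessionId}/chat`, {
+    method: 'POST',
+    headers: { 'Content-Type': 'application/json' },
+    body: JSON.stringify({ prompt, model, load_faiss_retriever: false })
+  });
+  if (!response.ok) throw new Error(`HTTP error! status: ${response.status}`);
+  return response.json(); // { answer, model_used }
+}
+```
+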
+---
+
+### 2.3 Get Session Chat History
+
+**GET** `/sessions/{session_id}/messages`
+
+**Path Parameter:**
+
+| Name | Type | Description |
+| ----------- | ------- | ----------------- |
+| session\_id | integer | Unique session ID |
+
+**Response Body:**
+
+| Field | Type | Description |
+| ----------- | ------- | -------------------------------------------------------- |
+| session\_id | integer | Session ID |
+| messages | array | List of message objects (`role`, `content`, `timestamp`) |
+
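+A minimal `fetch` sketch for retrieving the history:
+
+```javascript
+// Retrieve the full message history for a session.
+async function getSessionHistory(sessionId) {
+  const response = await fetch(`http://[your-api-server]/sessions/${sessionId}/messages`);
+  if (!response.ok) throw new Error(`HTTP error! status: ${response.status}`);
+  return response.json(); // { session_id, messages: [{ role, content, timestamp }, ...] }
+}
+```
+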
+---
+
+## 3. Frontend Implementation (HTML + JavaScript)
+
+Below is a complete example that:
+
+1. Creates a new chat session
+2. Sends a message in the session
+3. Retrieves the chat history
+
+```html
+<!DOCTYPE html>
+<html lang="en">
+<head>
+    <meta charset="UTF-8">
+    <title>Chat Sessions API Example</title>
+</head>
+<body>
+    <h1>Chat Sessions API Example</h1>
+    <p>This page demonstrates creating a session, sending a message, and retrieving the history.</p>
+    <button id="run-workflow">Run Workflow</button>
+
+    <h2>Workflow Log</h2>
+    <pre id="log"></pre>
+
+    <script>
+        const API_BASE = 'http://[your-api-server]'; // Replace with your API URL
+        const log = (msg) => { document.getElementById('log').textContent += msg + '\n'; };
+
+        document.getElementById('run-workflow').addEventListener('click', async () => {
+            try {
+                // 1. Create a new chat session.
+                let res = await fetch(`${API_BASE}/sessions/`, {
+                    method: 'POST',
+                    headers: { 'Content-Type': 'application/json' },
+                    body: JSON.stringify({ user_id: 'user-1234', model: 'gemini' })
+                });
+                const session = await res.json();
+                log(`Created session ${session.id}`);
+
+                // 2. Send a message in the session.
+                res = await fetch(`${API_BASE}/sessions/${session.id}/chat`, {
+                    method: 'POST',
+                    headers: { 'Content-Type': 'application/json' },
+                    body: JSON.stringify({ prompt: 'What is the capital of France?', model: 'gemini', load_faiss_retriever: false })
+                });
+                const chat = await res.json();
+                log(`Answer (${chat.model_used}): ${chat.answer}`);
+
+                // 3. Retrieve the chat history.
+                res = await fetch(`${API_BASE}/sessions/${session.id}/messages`);
+                const history = await res.json();
+                log(`History: ${JSON.stringify(history.messages)}`);
+            } catch (error) {
+                log(`Workflow failed: ${error.message}`);
+            }
+        });
+    </script>
+</body>
+</html>
+```
+
+# **Invoking the Documents API Endpoint**
+
+This guide explains how a frontend application can interact with the FastAPI `/documents` endpoints.
+These endpoints allow you to **add**, **list**, and **delete** documents.
+
+---
+
+## **Endpoint Summary**
+
+| HTTP Method | Path | Purpose | Request Type |
+| ----------- | -------------------------- | ---------------------------------- | ------------------ |
+| **POST** | `/documents/` | Adds a new document. | `application/json` |
+| **GET** | `/documents/` | Lists all documents. | N/A |
+| **DELETE** | `/documents/{document_id}` | Deletes a specific document by ID. | N/A |
+
+---
+
+## **Request & Response Structures**
+
+### **1. Add a New Document**
+
+**POST** `/documents/`
+
+**Request Body** (JSON):
+
+* `title` *(string)* – The title of the document.
+* `content` *(string)* – The content of the document.
+
+**Example Request:**
+
+```json
+{
+ "title": "My First Document",
+ "content": "This is the content of my very first document."
+}
+```
+
+**Response Body**:
+
+* `message` *(string)* – Success message.
+
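+A minimal `fetch` sketch for adding a document:
+
+```javascript
+// Add a new document and log the returned success message.
+async function addDocument(title, content) {
+  const response = await fetch('http://[your-api-server]/documents/', {
+    method: 'POST',
+    headers: { 'Content-Type': 'application/json' },
+    body: JSON.stringify({ title, content })
+  });
+  if (!response.ok) throw new Error(`HTTP error! status: ${response.status}`);
+  const data = await response.json();
+  console.log(data.message);
+}
+```
+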
+---
+
+### **2. List All Documents**
+
+**GET** `/documents/`
+
+**Request Body:** None.
+
+**Response Body**:
+
+* `documents` *(array)* – List of documents. Each object contains:
+
+ * `id` *(integer)*
+ * `title` *(string)*
+ * `content` *(string)*
+ * `created_at` *(timestamp)*
+
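+A minimal `fetch` sketch for listing documents:
+
+```javascript
+// Fetch and return the list of documents.
+async function listDocuments() {
+  const response = await fetch('http://[your-api-server]/documents/');
+  if (!response.ok) throw new Error(`HTTP error! status: ${response.status}`);
+  const data = await response.json();
+  return data.documents; // [{ id, title, content, created_at }, ...]
+}
+```
+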
+---
+
+### **3. Delete a Document**
+
+**DELETE** `/documents/{document_id}`
+
+**Path Parameters:**
+
+* `document_id` *(integer)* – Unique ID of the document to be deleted.
+
+**Response Body**:
+
+* `message` *(string)* – Success message.
+* `document_id` *(integer)* – ID of the deleted document.
+
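+A minimal `fetch` sketch for deleting a document:
+
+```javascript
+// Delete a document by ID and log the confirmation.
+async function deleteDocument(documentId) {
+  const response = await fetch(`http://[your-api-server]/documents/${documentId}`, { method: 'DELETE' });
+  if (!response.ok) throw new Error(`HTTP error! status: ${response.status}`);
+  const data = await response.json();
+  console.log(`${data.message} (id: ${data.document_id})`);
+}
+```
+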
+---
+
+## **Frontend Implementation (JavaScript Example)**
+
+Below is a complete HTML + JavaScript example showing how to **add**, **list**, and **delete** documents using the API.
+
+```html
+<!DOCTYPE html>
+<html lang="en">
+<head>
+    <meta charset="UTF-8">
+    <title>Documents API Example</title>
+</head>
+<body>
+    <h1>Documents API Example</h1>
+    <h2>Add a New Document</h2>
+    <input id="doc-title" placeholder="Title">
+    <input id="doc-content" placeholder="Content">
+    <button id="add-button">Add Document</button>
+
+    <h2>Documents List</h2>
+    <button id="refresh-button">Refresh List</button>
+    <ul id="documents-list"></ul>
+
+    <h2>Log</h2>
+    <pre id="log"></pre>
+
+    <script>
+        const API_BASE = 'http://[your-api-server]'; // Replace with your API URL
+        const log = (msg) => { document.getElementById('log').textContent += msg + '\n'; };
+        // POST /documents/ — add a new document.
+        document.getElementById('add-button').addEventListener('click', async () => {
+            const res = await fetch(`${API_BASE}/documents/`, {
+                method: 'POST',
+                headers: { 'Content-Type': 'application/json' },
+                body: JSON.stringify({
+                    title: document.getElementById('doc-title').value,
+                    content: document.getElementById('doc-content').value
+                })
+            });
+            log((await res.json()).message);
+            listDocuments();
+        });
+        // GET /documents/ — list all documents, each with a delete button.
+        async function listDocuments() {
+            const data = await (await fetch(`${API_BASE}/documents/`)).json();
+            const list = document.getElementById('documents-list');
+            list.innerHTML = '';
+            for (const doc of data.documents) {
+                const item = document.createElement('li');
+                item.textContent = `#${doc.id} ${doc.title} `;
+                const del = Object.assign(document.createElement('button'), { textContent: 'Delete' });
+                // DELETE /documents/{document_id} — delete this document by ID.
+                del.onclick = async () => {
+                    const delData = await (await fetch(`${API_BASE}/documents/${doc.id}`, { method: 'DELETE' })).json();
+                    log(`${delData.message} (id: ${delData.document_id})`);
+                    listDocuments();
+                };
+                item.appendChild(del);
+                list.appendChild(item);
+            }
+        }
+        document.getElementById('refresh-button').addEventListener('click', listDocuments);
+    </script>
+</body>
+</html>
+```
\ No newline at end of file
diff --git a/ai-hub/app/app.py b/ai-hub/app/app.py
index 0a70fb3..150a1ac 100644
--- a/ai-hub/app/app.py
+++ b/ai-hub/app/app.py
@@ -19,6 +19,8 @@
# Note: The llm_clients import and initialization are removed as they
# are not used in RAGService's constructor based on your services.py
# from app.core.llm_clients import DeepSeekClient, GeminiClient
+from fastapi.middleware.cors import CORSMiddleware
+
@asynccontextmanager
async def lifespan(app: FastAPI):
@@ -109,4 +111,12 @@
api_router = create_api_router(services=services)
app.include_router(api_router)
+ app.add_middleware(
+ CORSMiddleware,
+        allow_origins=["*"],  # Wildcard origins are a development convenience and must not be used in production.
+        # In production, set allow_origins to the frontend's origin(s), e.g. ["https://your-frontend.example.com"],
+        # so that only the frontend is allowed to call this API.
+ allow_credentials=True,
+ allow_methods=["*"], # Allows all HTTP methods (GET, POST, PUT, DELETE, etc.)
+ allow_headers=["*"], # Allows all headers
+ )
return app
diff --git a/ai-hub/app/core/services/tts.py b/ai-hub/app/core/services/tts.py
index 3b1e2be..63298e4 100644
--- a/ai-hub/app/core/services/tts.py
+++ b/ai-hub/app/core/services/tts.py
@@ -19,7 +19,7 @@
"""
# Use an environment variable or a default value for the max chunk size
- MAX_CHUNK_SIZE = int(os.getenv("TTS_MAX_CHUNK_SIZE", 2000))
+ MAX_CHUNK_SIZE = int(os.getenv("TTS_MAX_CHUNK_SIZE", 200))
def __init__(self, tts_provider: TTSProvider):
"""
diff --git a/ai-hub/integration_tests/demo/run_server.sh b/ai-hub/integration_tests/demo/run_server.sh
new file mode 100644
index 0000000..1c89f1d
--- /dev/null
+++ b/ai-hub/integration_tests/demo/run_server.sh
@@ -0,0 +1,19 @@
+#!/bin/bash
+
+# ===============================================================================
+# Script Name: run_server.sh
+# Description: Starts the AI Hub FastAPI application using uvicorn.
+#              The server binds to 0.0.0.0:8000 and is reachable locally
+#              at http://127.0.0.1:8000.
+#
+# Usage:       ./run_server.sh
+# ===============================================================================
+
+# Set the host and port for the server
+HOST="0.0.0.0"
+PORT="8000"
+APP_MODULE="app.main:app"
+
+# Start the uvicorn server with auto-reloading for development
+# The --host and --port flags bind the server to the specified address.
+echo "--- Starting AI Hub Server on http://${HOST}:${PORT} ---"
+exec uvicorn "$APP_MODULE" --host "$HOST" --port "$PORT" --reload
diff --git a/ai-hub/integration_tests/demo/voice_chat.html b/ai-hub/integration_tests/demo/voice_chat.html
new file mode 100644
index 0000000..3f53542
--- /dev/null
+++ b/ai-hub/integration_tests/demo/voice_chat.html
@@ -0,0 +1,205 @@
+
+
+
+
+
+ AI Voice Chat
+
+
+
+
+
+