## Chatbot Architecture

This document outlines the chatbot backend architecture, core components, data flows, and external integrations.

### High-level components

- **API Server (Express)**: `src/server.js` wires up routes, CORS, and JSON parsing, and initializes the RAG retriever on boot.
- **Chat Routes**: `src/routes/chat.js` handles `/api/chat-stream` and orchestrates LLM + tools + RAG + sessions.
- **Provider Routes**: `src/routes/providers.js` exposes `/api/providers/search` and aggregates provider sources.
- **Sessions API**: `src/routes/sessions.js` creates and reads chat sessions.
- **RAG Retriever**: `src/rag/retriever.js` builds an in-memory embedding index over the markdown knowledge base using OpenAI embeddings.
- **Tooling**: `src/tools/providerSearchTool.js` is callable by the LLM and hits the internal provider search endpoint.
- **Provider Aggregation**: `src/services/searchAggregator.js` composes multiple sources (DB, NPPES, CMS, Medicaid, 211, ClinicalTrials).
- **Session Storage**: `src/storage/chatStore.js` provides file-based JSON session storage for messages and history.
- **Frontend (demo)**: `chatBotFrontend/app/page.jsx` streams SSE tokens from `/api/chat-stream`.
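
The RAG retriever's core ranking step can be sketched as cosine-similarity top-K over precomputed chunk vectors. This is an illustrative sketch only: the real `src/rag/retriever.js` obtains vectors from the OpenAI embeddings API, while here the vectors (and the `topK`/`cosine` helper names) are assumptions supplied directly for clarity.

```javascript
// Cosine similarity between two equal-length embedding vectors.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the k chunks most similar to the query vector, highest score first.
function topK(queryVec, chunks, k = 3) {
  return chunks
    .map((c) => ({ ...c, score: cosine(queryVec, c.vector) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}
```

Because the corpus is small enough to embed at boot, a linear scan like this avoids any vector-database dependency.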

### Component diagram

```mermaid
graph LR
  %% Groups
  subgraph Frontend
    UI[Next.js Chat UI]
  end

  subgraph "Chatbot Backend (Express)"
    Srv["API Server [server.js]"]
    ChatR["Chat Router [routes/chat.js]<br/>[/api/chat-stream]"]
    ProvR["Providers Router [routes/providers.js]<br/>[/api/providers/search]"]
    SessR["Sessions Router [routes/sessions.js]<br/>[/api/sessions]"]
    RAG["RAG Retriever [rag/retriever.js]"]
    Tool["Provider Search Tool [tools/providerSearchTool.js]"]
    Store["Session Store [storage/chatStore.js]"]
    Agg["Provider Search Aggregator [services/searchAggregator.js]"]
  end

  subgraph "External / Upstream"
    MainAPI["Main Backend API [MAIN_API_BASE]"]
    OpenAI["OpenAI API"]
    NPPES[NPPES]
    CMS[CMS]
    Medicaid[Medicaid]
    In211["211 / Socrata"]
    Trials[ClinicalTrials.gov]
  end

  %% Flows
  UI -->|SSE POST| ChatR
  Srv --> ChatR
  Srv --> ProvR
  Srv --> SessR

  ChatR -->|init on boot| RAG
  ChatR -->|messages| Store
  ChatR -->|LLM call| OpenAI
  ChatR -->|tool call| Tool
  Tool -->|HTTP POST| ProvR
  ProvR --> Agg
  Agg -->|search| MainAPI
  Agg --> NPPES
  Agg --> CMS
  Agg --> Medicaid
  Agg --> In211
  Agg --> Trials

  %% Styling
  class UI ui
  class ChatR,ProvR,SessR router
  class Srv,RAG,Tool,Agg service
  class Store storage
  class MainAPI,OpenAI,NPPES,CMS,Medicaid,In211,Trials external

  classDef ui fill:#E3F2FD,stroke:#1E88E5,color:#0D47A1,stroke-width:2,rx:6,ry:6
  classDef router fill:#FFF3E0,stroke:#FB8C00,color:#E65100,stroke-width:2,rx:6,ry:6
  classDef service fill:#E8F5E9,stroke:#43A047,color:#1B5E20,stroke-width:2,rx:6,ry:6
  classDef storage fill:#F3E5F5,stroke:#8E24AA,color:#4A148C,stroke-width:2,rx:6,ry:6
  classDef external fill:#FCE4EC,stroke:#D81B60,color:#880E4F,stroke-width:2,rx:6,ry:6
```
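
The `search_providers` tool receives `{ zip, date, service, insurance }` from the LLM before POSTing to `/api/providers/search`. Since model-produced arguments are untrusted input, it is prudent to drop empty fields and sanity-check the zip first. The helper below is a hypothetical sketch of that normalization, not the actual tool code; the field names match the flows above.

```javascript
// Normalize LLM-supplied tool arguments: trim strings, drop empty fields,
// and discard a malformed zip rather than failing the whole call.
function normalizeToolArgs(raw = {}) {
  const args = {};
  for (const key of ['zip', 'date', 'service', 'insurance']) {
    const val = typeof raw[key] === 'string' ? raw[key].trim() : raw[key];
    if (val) args[key] = val;
  }
  if (args.zip && !/^\d{5}(-\d{4})?$/.test(args.zip)) {
    delete args.zip;
  }
  return args;
}
```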

### Chat interaction sequence

```mermaid
sequenceDiagram
  autonumber
  box rgba(227,242,253,0.6) Frontend
    participant UI as Chat UI<br/>[chatBotFrontend/app/page.jsx]
  end
  box rgba(232,245,233,0.6) Backend
    participant Chat as Chat Router<br/>[/api/chat-stream]<br/>[routes/chat.js]
    participant RAG as RAG Retriever<br/>[rag/retriever.js]
    participant Store as Session Store<br/>[storage/chatStore.js]
  end
  box rgba(252,228,236,0.6) External
    participant LLM as LLM<br/>[ChatOpenAI]
    participant Tool as Provider Search Tool<br/>[tools/providerSearchTool.js]
    participant Prov as Providers API<br/>[/api/providers/search]<br/>[routes/providers.js]
    participant Agg as Provider Search Aggregator<br/>[services/searchAggregator.js]
    participant Main as Main Backend API<br/>[MAIN_API_BASE]
  end

  UI->>+Chat: POST message (SSE)
  opt Has sessionId
    Chat->>Chat: Load session history
  end

  Chat->>+RAG: getRelevantDocuments(question)
  RAG-->>-Chat: Top-K chunks

  Chat->>+LLM: system + context + history + user
  alt LLM decides to call tool
    LLM-->>Chat: tool_calls: search_providers
    Chat->>+Tool: invoke({zip,date,service,insurance})
    Tool->>+Prov: POST /api/providers/search
    Prov->>+Agg: aggregatedProviderSearch(params)
    Agg->>+Main: POST MAIN_API_BASE/providers/search (Bearer token)
    Main-->>-Agg: results
    Agg-->>-Prov: merged provider list
    Prov-->>-Tool: {count, results}
    Tool-->>-Chat: tool result
    Chat->>LLM: ToolMessage(result)
  else No tool call
    Note over Chat,LLM: Direct completion with RAG context
  end
  LLM-->>-Chat: final answer

  par Stream tokens
    Chat-->>UI: SSE: { token }
    Chat-->>UI: SSE: { token } ...
  and Persist
    Chat->>Store: append user/assistant messages
  end
```
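
The "Stream tokens" step above relies on the SSE wire format: each event is a `data:` line carrying a JSON payload, terminated by a blank line. The sketch below shows that framing; the `{ token }` payload shape follows the sequence diagram, while the `sseFrame` helper name and the `{ done: true }` terminal event are assumptions for illustration.

```javascript
// Serialize one payload as a Server-Sent Events frame:
// a single "data:" line followed by a blank line.
function sseFrame(payload) {
  return `data: ${JSON.stringify(payload)}\n\n`;
}

// Inside the route handler, each LLM token would be written as:
//   res.write(sseFrame({ token }));
// and the stream closed with a terminal event:
//   res.write(sseFrame({ done: true }));
//   res.end();
```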


### Chatbot flow diagram

```mermaid
flowchart LR
  %% Groups
  subgraph FE[Frontend]
    direction TB
    User([User])
    UI[Next.js Chat UI]
    Stream[SSE: stream tokens]
    User --> UI
  end

  subgraph BE[Chatbot Backend]
    direction TB
    Chat[[/api/chat-stream]]
    HasSess{sessionId provided?}
    LoadSess[Load session history]
    RAG[RAG: retrieve relevant documents]
    Prompt[Compose system + context + history + user]
    LLM[LLM: ChatOpenAI]
    ToolNeeded{Tool call needed?}
    Persist[(Persist messages)]
  end

  subgraph EXT[External Services]
    direction TB
    Tool[Provider Search Tool]
    Prov[[/api/providers/search]]
    Agg[Aggregate providers]
    OpenAI[(OpenAI API)]
  end

  %% Flows
  UI -->|POST message| Chat
  Chat --> HasSess
  HasSess -- Yes --> LoadSess
  LoadSess --> RAG
  HasSess -- No --> RAG
  RAG --> Prompt
  Prompt --> LLM
  LLM -->|inference| Chat
  Chat --> ToolNeeded
  Chat -->|LLM call| OpenAI
  OpenAI --> Chat
  ToolNeeded -- Yes --> Tool
  Tool --> Prov
  Prov --> Agg
  Agg --> Tool
  Tool --> LLM
  ToolNeeded -- No --> Chat
  Chat -->|SSE| Stream
  Stream --> UI
  Chat -->|append messages| Persist

  %% Styling
  class UI,User,Stream io
  class Chat,Prompt,RAG,LoadSess,LLM process
  class HasSess,ToolNeeded decision
  class Persist storage
  class Tool,Prov,Agg,OpenAI external

  classDef io fill:#E3F2FD,stroke:#1E88E5,color:#0D47A1,stroke-width:2,rx:6,ry:6
  classDef process fill:#E8F5E9,stroke:#43A047,color:#1B5E20,stroke-width:2,rx:6,ry:6
  classDef decision fill:#FFF3E0,stroke:#FB8C00,color:#E65100,stroke-width:2
  classDef storage fill:#F3E5F5,stroke:#8E24AA,color:#4A148C,stroke-width:2,rx:6,ry:6
  classDef external fill:#FCE4EC,stroke:#D81B60,color:#880E4F,stroke-width:2,rx:6,ry:6
```

### Key endpoints

- `POST /api/chat-stream`: Streams the assistant response as SSE tokens. Body: `{ message, role?, history?, sessionId? }`.
- `POST /api/providers/search`: Aggregated provider search. Body: `{ zip?, date?, service?, insurance? }`.
- `POST /api/sessions`: Create a session. Returns `{ id }`.
- `GET /api/sessions/:id`: Retrieve a persisted session record.
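
On the consuming side, the demo frontend must reassemble SSE events from network chunks that may split mid-event. A hypothetical parser: events are separated by blank lines, each `data:` line carries JSON, and any trailing partial event is returned so the caller can prepend it to the next chunk. The `parseSseBuffer` name and the `{ token }` payload shape are assumptions based on the flows described above.

```javascript
// Split a buffered SSE text chunk into complete JSON events plus the
// trailing partial event (if any) to carry over to the next read.
function parseSseBuffer(buffer) {
  const parts = buffer.split('\n\n');
  const rest = parts.pop(); // possibly incomplete event
  const events = [];
  for (const part of parts) {
    for (const line of part.split('\n')) {
      if (line.startsWith('data: ')) {
        events.push(JSON.parse(line.slice(6)));
      }
    }
  }
  return { events, rest };
}
```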

### Environment variables

- `OPENAI_API_KEY`: Required for embeddings and LLM.
- `PORT`: Chatbot backend port (default 4001).
- `ALLOWED_ORIGINS`: CSV for CORS allowlist.
- `MAIN_API_BASE`: Base URL of the main backend (used for provider DB search and user profile fetch).
- `PROVIDER_SEARCH_TIMEOUT_MS`: Timeout for the provider tool's HTTP call (default 8000 ms).
- `NEXT_PUBLIC_API_BASE_URL` (frontend): URL of this chatbot backend.
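
A sketch of how these variables might be read at boot, with the defaults stated above. The `loadConfig` shape is hypothetical; the point is that `ALLOWED_ORIGINS` is split from CSV into the array form a CORS allowlist expects, and numeric values get explicit defaults.

```javascript
// Read configuration from the environment, applying documented defaults.
function loadConfig(env = process.env) {
  return {
    port: Number(env.PORT || 4001),
    allowedOrigins: (env.ALLOWED_ORIGINS || '')
      .split(',')
      .map((o) => o.trim())
      .filter(Boolean),
    mainApiBase: env.MAIN_API_BASE || '',
    providerSearchTimeoutMs: Number(env.PROVIDER_SEARCH_TIMEOUT_MS || 8000),
  };
}
```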

### Notes

- The RAG corpus lives in `chatBotbackend/kb`. On boot, its contents are embedded in memory; no vector database is required.
- Provider aggregation prefers the platform DB, then enriches results with public sources; results are de-duplicated by name + address.
- Sessions are stored as JSON files under `chatBotbackend/storage/chats` for simplicity.
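
The name+address de-duplication can be sketched as a first-occurrence-wins pass: because platform-DB records are assumed to arrive first in the merged array, the DB-preferred entry survives. The key normalization (lowercasing, collapsed whitespace) and the `dedupeProviders` name are assumptions, not the aggregator's actual code.

```javascript
// Collapse duplicate providers by a normalized "name|address" key,
// keeping the first occurrence (the platform-DB record, by assumption).
function dedupeProviders(providers) {
  const seen = new Set();
  const out = [];
  for (const p of providers) {
    const key = `${p.name}|${p.address}`.toLowerCase().replace(/\s+/g, ' ').trim();
    if (!seen.has(key)) {
      seen.add(key);
      out.push(p);
    }
  }
  return out;
}
```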


