# Build Conversations

Create multi-turn chat experiences with persistent context.
The `chat()` method enables multi-turn conversations in which context is maintained across messages and user preferences are remembered.
## Basic Usage

```ts
const stream = await text2sql.chat(
  [{ role: 'user', content: 'Show me top 10 customers' }],
  {
    chatId: 'chat-123',
    userId: 'user-456',
  },
);

// Stream the response
for await (const chunk of stream) {
  if (chunk.type === 'text-delta') {
    process.stdout.write(chunk.textDelta);
  }
}
```

## How It Works
When you call `chat()`:

1. **Load history** - previous messages for the chat are loaded.
2. **Load user profile** - user preferences and context are injected.
3. **Generate response** - the AI processes the full context.
4. **Save messages** - both the user message and the response are persisted.
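The steps above can be sketched with an in-memory store. Everything here is illustrative, not part of the Text2SQL API: `chatSketch`, `historyStore`, and `profileStore` are hypothetical names, and the model call is stubbed out.

```ts
// Minimal in-memory sketch of the chat() pipeline described above.
// A real implementation would persist messages durably and stream
// the model's response instead of returning a stub.
type Message = { role: 'user' | 'assistant' | 'system'; content: string };

const historyStore = new Map<string, Message[]>(); // chatId -> messages
const profileStore = new Map<string, string>(); // userId -> profile text

function chatSketch(
  incoming: Message[],
  params: { chatId: string; userId: string },
): Message {
  // 1. Load history for this chat
  const history = historyStore.get(params.chatId) ?? [];
  // 2. Load the user profile and inject it as a system message
  const profile = profileStore.get(params.userId) ?? '';
  const context: Message[] = [
    { role: 'system', content: profile },
    ...history,
    ...incoming,
  ];
  // 3. Generate a response from the full context (stubbed here)
  const reply: Message = {
    role: 'assistant',
    content: `Answer based on ${context.length} context messages`,
  };
  // 4. Persist both the incoming messages and the response
  historyStore.set(params.chatId, [...history, ...incoming, reply]);
  return reply;
}
```

Because every call reloads and appends to the same `chatId` history, follow-up questions see the full conversation so far.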
## Message Format

Messages follow the Vercel AI SDK `UIMessage` format:

```ts
interface UIMessage {
  role: 'user' | 'assistant' | 'system';
  content: string;
}
```

```ts
// Send multiple messages
await text2sql.chat(
  [
    { role: 'user', content: 'Show me sales by region' },
    { role: 'assistant', content: 'Here are sales by region...' },
    { role: 'user', content: 'Now show me just California' },
  ],
  { chatId: 'chat-123', userId: 'user-456' },
);
```

## User Profiles
User profiles enable personalization and persistent memory across conversations. Text2SQL uses a `TeachablesStore` to maintain five types of user information:
### Memory Types
| Type | Purpose | Schema Fields |
|---|---|---|
| `identity` | User's name and/or role | `description` |
| `alias` | User-specific term meanings | `term`, `meaning` |
| `preference` | Output and style preferences | `aspect`, `value` |
| `context` | Current working focus | `description` |
| `correction` | Corrections to prior errors | `subject`, `clarification` |
The user profile is stored per `userId` and automatically injected into the chat context.
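The five memory types map naturally onto a discriminated union. The `Memory` type and `summarize` helper below are a sketch derived from the table above, not the library's actual schema:

```ts
// Sketch of the five memory types as a discriminated union.
// Field names follow the table above; the union itself is illustrative.
type Memory =
  | { type: 'identity'; description: string }
  | { type: 'alias'; term: string; meaning: string }
  | { type: 'preference'; aspect: string; value: string }
  | { type: 'context'; description: string }
  | { type: 'correction'; subject: string; clarification: string };

// Narrowing on `type` gives typed access to each variant's fields.
function summarize(memory: Memory): string {
  switch (memory.type) {
    case 'identity':
    case 'context':
      return memory.description;
    case 'alias':
      return `"${memory.term}" means ${memory.meaning}`;
    case 'preference':
      return `${memory.aspect}: ${memory.value}`;
    case 'correction':
      return `${memory.subject}: ${memory.clarification}`;
  }
}
```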
### Automatic Memory Updates
During chat, memory tools are automatically available. The AI recognizes when users share facts, preferences, or context and stores them:

```ts
// User says: "I'm the sales manager for West region"
// AI automatically calls:
remember_memory({
  memory: {
    type: 'identity',
    description: 'Sales manager for West region',
  },
});

// User says: "When I say 'revenue', I mean gross revenue"
// AI calls:
remember_memory({
  memory: {
    type: 'alias',
    term: 'revenue',
    meaning: 'gross revenue before deductions',
  },
});

// User says: "I prefer weekly aggregations"
// AI calls:
remember_memory({
  memory: {
    type: 'preference',
    aspect: 'aggregation',
    value: 'weekly',
  },
});
```

Available memory tools:
| Tool | Description |
|---|---|
| `remember_memory` | Store something about the user for future conversations |
| `recall_memory` | List stored memories for the current user |
| `update_memory` | Update an existing memory |
| `forget_memory` | Remove a specific memory |
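The four tools behave like CRUD operations over a per-user collection. A minimal sketch, assuming an in-memory `Map` (the real `TeachablesStore` is persistent, and these function names only mirror the tool names above):

```ts
// Hypothetical in-memory stand-in for the per-user memory store.
type StoredMemory = { id: number; type: string; data: Record<string, string> };

const memories = new Map<string, StoredMemory[]>(); // userId -> memories
let nextId = 1;

// remember_memory: store something about the user
function rememberMemory(
  userId: string,
  type: string,
  data: Record<string, string>,
): StoredMemory {
  const stored: StoredMemory = { id: nextId++, type, data };
  memories.set(userId, [...(memories.get(userId) ?? []), stored]);
  return stored;
}

// recall_memory: list stored memories for the user
function recallMemory(userId: string): StoredMemory[] {
  return memories.get(userId) ?? [];
}

// update_memory: patch an existing memory's fields
function updateMemory(
  userId: string,
  id: number,
  data: Record<string, string>,
): void {
  const list = memories.get(userId) ?? [];
  memories.set(
    userId,
    list.map((m) => (m.id === id ? { ...m, data: { ...m.data, ...data } } : m)),
  );
}

// forget_memory: remove a specific memory
function forgetMemory(userId: string, id: number): void {
  memories.set(
    userId,
    (memories.get(userId) ?? []).filter((m) => m.id !== id),
  );
}
```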
### Profile Injection

User profiles are injected into the system prompt as teachables:

```xml
<user_profile>
  <identity>
    <description>Sales manager for West region</description>
  </identity>
  <user_vocabulary>
    <alias>
      <term>revenue</term>
      <meaning>gross revenue before deductions</meaning>
    </alias>
  </user_vocabulary>
  <user_preferences>
    <preference>
      <aspect>aggregation</aspect>
      <value>weekly</value>
    </preference>
  </user_preferences>
  <user_context>
    <context>Analyzing Q4 2024 performance</context>
  </user_context>
</user_profile>
```

## Streaming Options
The chat response supports rich streaming:

```ts
const stream = await text2sql.chat(messages, params);

const uiStream = stream.toUIMessageStream({
  sendStart: true, // Stream start event
  sendFinish: true, // Stream finish event
  sendReasoning: true, // AI reasoning steps
  sendSources: true, // Data sources used
});
```

## Error Handling
The stream handles common errors gracefully:

```ts
const stream = await text2sql.chat(messages, {
  chatId: 'chat-123',
  userId: 'user-456',
});

// Errors are converted to user-friendly messages, e.g.:
// - "The model tried to call an unknown tool"
// - "The model called a tool with invalid arguments"
```

## Context Window
All previous messages in the chat are included in context:

```ts
// First message
await text2sql.chat([{ role: 'user', content: 'Show me total revenue' }], {
  chatId: 'chat-123',
  userId: 'user-456',
});

// Second message - has access to the first conversation
await text2sql.chat([{ role: 'user', content: 'Break that down by quarter' }], {
  chatId: 'chat-123',
  userId: 'user-456',
});
// AI knows "that" refers to revenue from the previous message
```

## Integration Example
```ts
// Express/Next.js API route
app.post('/api/chat', async (req, res) => {
  const { messages, chatId } = req.body;
  const userId = req.user.id;

  const stream = await text2sql.chat(messages, { chatId, userId });

  // Stream the response to the client as server-sent events
  res.setHeader('Content-Type', 'text/event-stream');
  for await (const chunk of stream) {
    res.write(`data: ${JSON.stringify(chunk)}\n\n`);
  }
  res.end();
});
```

## Best Practices
- **Use consistent chat IDs** - the same `chatId` maintains conversation context.
- **Use consistent user IDs** - the same `userId` maintains the user profile.
- **Handle streaming properly** - process chunks as they arrive.
- **Clean up old chats** - periodically delete unused conversations.
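The cleanup recommendation can be automated with a retention sweep. This is a sketch only: `ChatRecord`, `listChats`-style input, and the `deleteChat` callback are hypothetical names; substitute whatever deletion API your storage layer actually exposes.

```ts
// Retention sweep: delete chats idle longer than maxAgeMs.
// All names here are illustrative, not part of the Text2SQL API.
type ChatRecord = { chatId: string; lastActiveAt: number };

function sweepOldChats(
  chats: ChatRecord[],
  deleteChat: (chatId: string) => void,
  maxAgeMs: number,
  now: number = Date.now(),
): string[] {
  const expired = chats
    .filter((c) => now - c.lastActiveAt > maxAgeMs)
    .map((c) => c.chatId);
  expired.forEach(deleteChat); // remove each expired chat
  return expired; // report what was swept, e.g. for logging
}
```

Running this on a schedule (cron job, queue worker) keeps storage bounded without touching active conversations.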