# 🆕 Text Generation

This template uses best practices to generate text based on a user's input. The text generation feature is powered by language models like OpenAI's GPT-3.5 Turbo and uses Vercel's AI SDK to stream responses back to the user.

You can generate text using the following API endpoints:

- `/api/chat`: Used to generate chat responses and store conversations in Supabase
- `/api/vector-search`: Used to perform "document search" with retrieval-augmented generation (RAG). Read more on the vector search tutorial page.

3-Min Walkthrough
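As a sketch of how a client might call the chat endpoint, the snippet below POSTs a `messages` array and reads the streamed reply. The payload shape follows the common Vercel AI SDK convention and is an assumption, not this template's exact contract:

```typescript
// Hypothetical client-side sketch for calling /api/chat and reading the
// streamed response. The `messages` payload shape is an assumption based on
// the Vercel AI SDK convention, not this template's exact contract.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// Build the POST request options for the chat endpoint.
function buildChatRequest(messages: ChatMessage[]) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ messages }),
  };
}

// Read the streamed response chunk by chunk into a single string.
async function streamChat(messages: ChatMessage[]): Promise<string> {
  const res = await fetch("/api/chat", buildChatRequest(messages));
  if (!res.ok || !res.body) throw new Error(`Chat request failed: ${res.status}`);
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let text = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    text += decoder.decode(value, { stream: true });
  }
  return text;
}
```

In a React client you would typically let the AI SDK's hooks manage this loop for you; the manual reader above just makes the streaming behavior explicit.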
## 🆕 Generate chat responses with Groq, Claude 3, or Ollama

TemplateAI supports three other language models as alternatives to OpenAI's GPT, so you can choose the model you prefer.
### Using Groq

- Get an API key from the Groq Console and set `GROQ_API_KEY` in your `.env` file.
- When chatting in the `ChatWindow`, select Groq as the language model.
### Using Claude 3

- Get an API key from the Anthropic Console and set `ANTHROPIC_API_KEY` in your `.env` file.
- When chatting in the `ChatWindow`, select Claude 3 as the language model.
### Using Ollama (llama2 running locally)

- Install Ollama and pull the `llama2:chat` model.
- When chatting in the `ChatWindow`, select Ollama as the language model.

If you want to use a different text model, change `OLLAMA_MODEL=llama2:chat` in your `.env` file to your preferred model.
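Concretely, the pull step above can be run from a terminal once Ollama is installed (standard Ollama CLI commands):

```shell
# Download the chat-tuned llama2 model
ollama pull llama2:chat

# Optional: confirm the model is available locally
ollama list
```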
⚠️ The alternative models currently work only with the `/api/chat` route, so you can use them to generate chat responses, but not for vector search.
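Taken together, the alternative-model settings above amount to a few `.env` entries; the values here are placeholders:

```shell
GROQ_API_KEY=...            # from the Groq Console
ANTHROPIC_API_KEY=...       # from the Anthropic Console
OLLAMA_MODEL=llama2:chat    # any locally pulled Ollama model
```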
## Setup with OpenAI

- Set `OPENAI_API_KEY` in your `.env` file.
- Make sure your Supabase configuration is set correctly and that you have run migrations on your database. (Follow the Database setup first if you haven't completed these steps.)
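For reference, the `.env` entries for this step might look like the following. The Supabase variable names are common defaults for Supabase + Next.js projects and are an assumption, not necessarily this template's exact names:

```shell
OPENAI_API_KEY=sk-...                                        # from the OpenAI dashboard
NEXT_PUBLIC_SUPABASE_URL=https://your-project.supabase.co    # assumed variable name
NEXT_PUBLIC_SUPABASE_ANON_KEY=your-anon-key                  # assumed variable name
```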
## Example Usage

⚠️ By default, users must be authenticated to use the text generation features. You can disable this in the `route.ts` files for the endpoints above.
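To illustrate the kind of guard you would relax, here is a minimal sketch; the names `REQUIRE_AUTH` and `isAllowed` are hypothetical and not this template's actual code:

```typescript
// Hypothetical sketch of an auth gate like the one in each route.ts handler.
// REQUIRE_AUTH and isAllowed are illustrative names, not the template's code.
const REQUIRE_AUTH: boolean = true; // flip to false to allow unauthenticated requests

function isAllowed(userId: string | null): boolean {
  if (!REQUIRE_AUTH) return true; // auth disabled: anyone may call the endpoint
  return userId !== null; // otherwise require a signed-in user
}
```

In the real handlers, the user would come from the Supabase session; the point is simply that each endpoint checks for one before generating text.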
Last Updated: June 7