4 AI Features You Can Build in an Hour with TemplateAI
Learn how to build AI chat support, semantic search, writing assistants, and image generation using pre-built NextJS components and integrations
Building AI features from scratch typically takes weeks. You need to integrate multiple APIs, set up database schemas for storing conversations and embeddings, build streaming UI components, handle error states, and deploy everything reliably.
TemplateAI provides production-ready AI components and integrations that you can customize and deploy in hours instead of weeks. The infrastructure is already built—authentication, database schemas, API routes, streaming responses, and UI components. You focus on customization, not reinventing the wheel.
In this guide, we'll walk through 4 real AI features you can implement using code that's already in the template.
Feature 1: AI Chat Support Widget
Time Estimate: 15 minutes
What it is
A floating chat bubble that appears in the bottom-right corner of your application. Users click it to open a chat interface where they can ask questions and receive AI-generated responses in real-time. The responses stream word-by-word for a better user experience, and the conversation history is automatically stored in your database.
This is perfect for customer support, user onboarding, contextual help, or any scenario where users need interactive assistance.
What makes it work
The template includes a ChatButton component with a complete chat interface already built. Under the hood, it uses:
- Vercel AI SDK's `useChat` hook for managing streaming responses
- A pre-configured `/api/chat` endpoint that handles the AI requests
- Supabase database tables (already set up via migrations) to store conversation history
- Support for multiple AI providers: OpenAI, Claude, Groq, or Ollama
Here's the core pattern from src/components/ChatButton.tsx:
```tsx
const { messages, input, handleInputChange, handleSubmit } =
  useChat({ api: '/api/chat' });
```

The `useChat` hook manages the entire conversation state: the messages array, user input, form submission, and streaming responses. You don't need to handle WebSocket connections, streaming protocols, or state management manually.
Where to customize
Change the appearance: The chat button and window styling is in ChatButton.tsx. You can modify the icon, position, colors, and size using Tailwind classes at lines 34-44.
Customize the AI personality: The system prompt that defines how the AI responds lives in your API route. You can make it friendly, professional, technical, or domain-specific.
Add features: You can extend the component to include message timestamps, typing indicators, or conversation history that persists across sessions.
Style the messages: The chat message bubbles are rendered in the ChatMessage component. Customize the layout, colors, and avatars to match your brand.
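To make the system-prompt customization above concrete, here's a minimal sketch of prepending a system message to the conversation before it reaches the model. The prompt text and the `withSystemPrompt` helper are illustrative, not part of the template:

```typescript
type Message = { role: 'system' | 'user' | 'assistant'; content: string };

// Example personality; replace with your own domain-specific instructions.
const SYSTEM_PROMPT =
  'You are a friendly support assistant. Keep answers short and point users to the docs when relevant.';

function withSystemPrompt(messages: Message[]): Message[] {
  // Avoid stacking duplicate system messages on every request
  if (messages[0]?.role === 'system') return messages;
  return [{ role: 'system', content: SYSTEM_PROMPT }, ...messages];
}
```

In practice this logic lives in your API route, so every request the chat widget sends picks up the same personality without any client-side changes.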
What you need
- `OPENAI_API_KEY` set in your `.env` file
- Supabase database configured (the template's migrations create the necessary tables)
How to deploy
Import the component into any page:
```tsx
import ChatButton from '@/components/ChatButton';

export default function YourPage() {
  return (
    <div>
      {/* Your page content */}
      <ChatButton />
    </div>
  );
}
```

The component handles all the complexity: state management, API calls, streaming, error handling, and UI rendering. It works immediately with no additional configuration.
Feature 2: Semantic Search (RAG)
Time Estimate: 20 minutes
What it is
"ChatGPT for your documentation." Users can ask natural language questions about your content, and the AI searches through your documentation to provide accurate, contextual answers. This is called Retrieval-Augmented Generation (RAG).
Instead of hallucinating answers, the AI retrieves relevant sections from your actual documentation and uses that context to generate accurate responses. This is how you build "chat with your docs" features or knowledge base assistants.
What makes it work
The template includes a complete RAG pipeline:
At build time: The generate-embeddings.ts script reads documents from your /docs folder, chunks them into smaller pieces, converts each chunk into a vector embedding using OpenAI's API, and stores everything in Supabase with pgvector.
At runtime: When a user asks a question, the system converts the question into an embedding, performs a vector similarity search to find the most relevant document chunks, injects those chunks as context into the AI prompt, and streams back a response based on actual content from your docs.
The template uses LangChain to handle document loading and chunking, OpenAI's text-embedding-3-small model (5x cheaper than previous embedding models), and Supabase's pgvector extension for fast similarity search.
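The similarity search at the heart of the runtime flow boils down to comparing embedding vectors. pgvector does this in SQL inside Supabase; the standalone function below just shows the underlying math:

```typescript
// Cosine similarity between two embedding vectors: values near 1 mean the
// texts are semantically close, values near 0 mean they are unrelated.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

The system ranks every stored chunk by this score against the question's embedding and feeds the top matches into the prompt as context.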
Here's how you use it in a component (from src/components/ChatModal.tsx):
```tsx
const { completion, input, handleInputChange, handleSubmit } =
  useCompletion({ api: '/api/vector-search' });
```

The ChatModal component provides a CMD+K keyboard shortcut interface (common in modern apps) for quick document search.
Where to customize
Add your documents: Put markdown, text, or other documents in the /docs folder. The template handles text and markdown by default.
Support different file types: To process PDFs, HTML, CSV, or other formats, modify the document loader in src/utils/generate-embeddings.ts. LangChain has loaders for dozens of file types—you just swap out the loader and re-run the embeddings script.
Adjust chunking strategy: The script splits documents into 1000-character chunks with 50-character overlap (lines 18-21 in generate-embeddings.ts). You can tune these numbers based on your content. Smaller chunks work better for precise facts; larger chunks work better for conceptual content.
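The chunking logic can be sketched as a fixed-size splitter with overlap, mirroring the 1000/50 settings above. LangChain's splitter is smarter about sentence and paragraph boundaries; this shows only the core idea:

```typescript
// Split text into overlapping fixed-size chunks. Overlap keeps context
// that straddles a chunk boundary retrievable from either side.
function chunkText(text: string, chunkSize = 1000, overlap = 50): string[] {
  const chunks: string[] = [];
  const step = chunkSize - overlap;
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
  }
  return chunks;
}
```

Tuning `chunkSize` down makes each chunk more precise but loses surrounding context; tuning it up does the reverse, which is why conceptual content tends to favor larger chunks.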
Customize the interface: The ChatModal component opens with CMD+K by default. You can change the keyboard shortcut, replace it with a button, or embed it directly in your page layout.
What you need
- `OPENAI_API_KEY` in your `.env` file
- Supabase configured with the pgvector extension (the migrations enable this automatically)
- Documents placed in the `/docs` folder
How to deploy
- Add your documentation to `/docs`
- Run the embeddings generation script: `npm run embeddings`. This processes all your documents and stores the embeddings in Supabase. You only need to rerun it when you add or update documents.
- Add the component to your page:
```tsx
import ChatModal from '@/components/ChatModal';

export default function YourPage() {
  return (
    <div>
      <ChatModal />
      {/* Your page content */}
    </div>
  );
}
```

Users can now press CMD+K to search your documentation using natural language.
Feature 3: AI Writing Assistant
Time Estimate: 10 minutes
What it is
Real-time text generation with streaming responses. Users input a prompt or partial text, and the AI generates completions instantly—streaming word-by-word as it generates. This is the foundation for email drafters, content generators, code explainers, social media caption writers, or any feature where users need AI-generated text.
The key difference from a simple API call: responses stream in real-time rather than waiting for the full response. This creates a much better user experience, especially for longer outputs.
What makes it work
The template uses Vercel AI SDK's streaming hooks, which handle all the complexity of streaming protocols, chunk parsing, and state management. You get real-time responses with just a few lines of code.
The template supports two patterns:
Chat-style (back-and-forth conversation):
```tsx
const { messages, input, handleSubmit } = useChat({
  api: '/api/chat'
});
```

Completion-style (one-shot generation):

```tsx
const { completion, input, handleSubmit } = useCompletion({
  api: '/api/chat'
});
```

Both patterns use the same API infrastructure, but `useChat` maintains a messages array for conversations, while `useCompletion` gives you a single completion string.
The template includes built-in support for multiple AI providers. You can use OpenAI's GPT models, Anthropic's Claude, Groq for faster inference, or Ollama for local development without API costs. Switching between providers requires zero code changes—just set the appropriate API key in your .env file and select the model in your UI.
Where to customize
Build your UI: Create a form with a textarea for user input. The template provides the hooks—you design the interface that fits your use case.
Specialize the AI: Modify the system prompt to make the AI an expert in your domain. For example, "You are a professional email writer who crafts concise, friendly business emails" or "You are a senior developer who explains code clearly to junior developers."
Add parameters: The API routes support parameters like temperature (creativity level) and max_tokens (response length). Expose these in your UI if you want users to control the output style.
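As an illustration of exposing those parameters, here's a sketch of a request body that passes `temperature` and `max_tokens` through to the chat endpoint. The exact fields the template's route accepts may differ; these two are standard options across providers:

```typescript
// Build a chat request payload with optional generation parameters.
// The defaults here (0.7 temperature, 512 max tokens) are illustrative.
function buildChatRequest(
  prompt: string,
  opts: { temperature?: number; maxTokens?: number } = {}
) {
  return {
    messages: [{ role: 'user', content: prompt }],
    temperature: opts.temperature ?? 0.7, // higher = more creative output
    max_tokens: opts.maxTokens ?? 512,    // cap on response length
  };
}
```

Wire the options to a slider or dropdown in your UI if you want users to trade creativity against consistency.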
Switch providers: Set OPENAI_API_KEY, ANTHROPIC_API_KEY, or GROQ_API_KEY in your .env. For local development, install Ollama and the template will use it automatically.
What you need
At least one of:
- `OPENAI_API_KEY` for GPT models
- `ANTHROPIC_API_KEY` for Claude
- `GROQ_API_KEY` for fast inference
- Ollama installed locally (free, no API key needed)
How to deploy
Build your interface and connect it to the streaming hook:
```tsx
import { useCompletion } from 'ai/react';

export default function WritingAssistant() {
  const { completion, input, handleInputChange, handleSubmit, isLoading } =
    useCompletion({ api: '/api/chat' });

  return (
    <div>
      <form onSubmit={handleSubmit}>
        <textarea
          value={input}
          onChange={handleInputChange}
          placeholder="Describe what you want to write..."
        />
        <button type="submit" disabled={isLoading}>
          Generate
        </button>
      </form>
      {completion && (
        <div>
          <h3>Result:</h3>
          <p>{completion}</p>
        </div>
      )}
    </div>
  );
}
```

As the AI generates text, `completion` updates automatically, giving you real-time streaming output.
Example use cases
- Email draft generator (paste context, get a professional email)
- Social media caption writer (describe the post, get engaging copy)
- Code documentation generator (paste code, get clear explanations)
- Product description writer (list features, get marketing copy)
- Meeting notes summarizer (paste notes, get action items)
Feature 4: Image Generation Gallery
Time Estimate: 20 minutes
What it is
Users enter a text prompt and generate images using AI models like Stable Diffusion XL. The template handles the entire workflow: submitting the generation request, polling for completion, displaying loading states, showing the final image, and storing it in your database for persistence.
This is perfect for avatar generators, design tools, product mockup creators, or any feature where users need AI-generated images.
What makes it work
The template integrates with Replicate's API to run image generation models. Replicate provides on-demand GPU infrastructure—you don't manage servers or model deployments.
The flow works like this:
- Your frontend sends a POST request to `/api/replicate` with the user's prompt
- The API route forwards the request to Replicate's predictions API
- Your frontend polls `/api/replicate/[prediction_id]` to check generation status
- During generation, you show a loading spinner
- When complete, the image URL is returned
- The template automatically stores the image metadata in Supabase (in the `sdxl_images` table)
Here's the polling pattern:
```ts
// Submit the generation request
const response = await fetch('/api/replicate', {
  method: 'POST',
  body: JSON.stringify({ input: { prompt: userPrompt } })
});
let prediction = await response.json();

// Small helper so the loop waits between status checks
const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

// Poll until generation completes
while (prediction.status !== 'succeeded' && prediction.status !== 'failed') {
  await sleep(1000);
  const statusResponse = await fetch(`/api/replicate/${prediction.id}`);
  prediction = await statusResponse.json();
}

// Display the generated image
if (prediction.status === 'succeeded') {
  setImageUrl(prediction.output[0]);
}
```

The template includes additional components for working with images:

- `ImageDropzone` for drag-and-drop image uploads to Supabase storage
- `ImageDiff` for before/after image comparisons (useful for image editing features)
Where to customize
Switch models: Replicate hosts hundreds of models. You can use SDXL, Flux, image editing models, or style transfer models. Change the model by updating the model reference in your API route.
Add advanced parameters: Most image models support parameters like negative prompts (what to avoid), seed values (for reproducible outputs), number of inference steps (quality vs speed tradeoff), and aspect ratios. You can expose these in your UI and pass them through the API.
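As a sketch of what passing those parameters through might look like, here's an illustrative input object for an SDXL-style model. Parameter names vary between models, so check the model's schema on Replicate before relying on any of these fields:

```typescript
// Illustrative Replicate input for an SDXL-style model. Every field
// except `prompt` is an assumption about the model's schema.
function buildImageInput(prompt: string) {
  return {
    prompt,
    negative_prompt: 'blurry, low quality', // what the model should avoid
    seed: 42,                               // fixed seed for reproducible outputs
    num_inference_steps: 30,                // quality vs. speed tradeoff
    width: 1024,
    height: 1024,
  };
}
```

You can expose any of these as form controls and spread them into the `input` object your frontend already sends to `/api/replicate`.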
Build a gallery: The template stores generated images in Supabase. You can query the sdxl_images table to build a gallery showing all of a user's generated images.
Enable downloads: The template includes the file-saver package. Add a download button that lets users save images locally.
What you need
- `REPLICATE_API_TOKEN` in your `.env` file (get this from Replicate's console)
- Supabase storage bucket configured for image uploads
How to deploy
Create a form for prompt input and wire up the generation flow:
```tsx
import { useState, type FormEvent } from 'react';

export default function ImageGenerator() {
  const [prompt, setPrompt] = useState('');
  const [imageUrl, setImageUrl] = useState('');
  const [isGenerating, setIsGenerating] = useState(false);

  const handleGenerate = async (e: FormEvent<HTMLFormElement>) => {
    e.preventDefault();
    setIsGenerating(true);

    // Submit generation request
    const response = await fetch('/api/replicate', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ input: { prompt } })
    });
    let prediction = await response.json();

    // Poll for completion
    while (prediction.status !== 'succeeded' && prediction.status !== 'failed') {
      await new Promise((resolve) => setTimeout(resolve, 1000));
      const statusRes = await fetch(`/api/replicate/${prediction.id}`);
      prediction = await statusRes.json();
    }

    if (prediction.status === 'succeeded') {
      setImageUrl(prediction.output[0]);
    }
    setIsGenerating(false);
  };

  return (
    <div>
      <form onSubmit={handleGenerate}>
        <input
          type="text"
          value={prompt}
          onChange={(e) => setPrompt(e.target.value)}
          placeholder="Describe the image you want to generate..."
        />
        <button type="submit" disabled={isGenerating}>
          {isGenerating ? 'Generating...' : 'Generate Image'}
        </button>
      </form>
      {imageUrl && <img src={imageUrl} alt="Generated" />}
    </div>
  );
}
```

The template's API routes handle authentication, error handling, and database storage automatically.
Conclusion
Each of these features would typically take days or weeks to build from scratch. You'd need to:
- Research and integrate multiple APIs (OpenAI, Replicate, LangChain)
- Set up database schemas for conversations, embeddings, and generated content
- Build streaming infrastructure for real-time AI responses
- Handle edge cases and error states
- Create UI components from scratch
- Write deployment configurations
TemplateAI provides all of this infrastructure out of the box. The integrations are built, the database schemas are defined via migrations, the streaming hooks work reliably, and the UI components are production-ready. You customize and ship, rather than spending weeks on boilerplate.
These four features are just a starting point. The template also includes:
- Authentication with magic links and Google OAuth
- Stripe payments for monetization
- Dashboard layouts and user management
- Deployment configurations for Vercel and Supabase
- Dark mode and 30+ theme options
Ready to build your AI application? Check out the full documentation to see everything included in TemplateAI, or get started now to access the complete template and start shipping.