
AI Search Examples

Examples of AI-powered search implementations that go beyond keyword matching — semantic understanding, personalised ranking, conversational search, and multi-modal discovery.

Semantic Product Search for E-Commerce

intermediate

A product search engine that understands natural language queries ('comfortable shoes for standing all day') and matches against product descriptions, reviews, and attributes using semantic embeddings rather than keyword matching.

// Semantic product search
async function semanticProductSearch(query, filters) {
  const queryEmbedding = await embed(query);

  const results = await vectorStore.search(queryEmbedding, {
    topK: 50,
    filter: {
      inStock: true,
      ...filters,
    },
  });

  // Re-rank with business logic: blend semantic relevance with
  // popularity, margin, and freshness signals
  const reranked = results
    .map(r => ({
      ...r,
      finalScore:
        r.similarityScore * 0.6 +
        r.popularityScore * 0.2 +
        r.marginScore * 0.1 +
        r.freshnessScore * 0.1,
    }))
    .sort((a, b) => b.finalScore - a.finalScore);

  return reranked.slice(0, 20);
}

Key takeaway: Semantic search increases product discovery by 30-40% by matching intent rather than keywords — customers find products they could not describe with exact terms.

Enterprise Knowledge Search with Access Control

advanced

A unified search across company documentation (Confluence, SharePoint, Slack, Google Drive) that respects document permissions, ranks results by relevance and recency, and provides AI-generated answer snippets.

// Enterprise knowledge search
async function enterpriseSearch(query, user) {
  const permissions = await getPermissions(user);

  // Search across all sources with access control
  const [confluenceResults, slackResults, driveResults] = await Promise.all([
    searchConfluence(query, permissions.confluence),
    searchSlack(query, permissions.slackChannels),
    searchDrive(query, permissions.driveAccess),
  ]);

  const allResults = [...confluenceResults, ...slackResults, ...driveResults];

  // Generate AI answer snippet
  const topResults = rerank(allResults, query).slice(0, 5);
  const answerSnippet = await llm.chat({
    messages: [{
      role: "system",
      content: "Generate a concise answer based on these search results. Cite which source each piece of information comes from."
    }, { role: "user", content: `Query: ${query}\nResults: ${JSON.stringify(topResults)}` }]
  });

  return { answer: answerSnippet, results: topResults };
}
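The `rerank` step above is left abstract. A minimal sketch, assuming each result carries a pre-computed `relevanceScore` for the query and a `modifiedAt` timestamp (both hypothetical field names), blends relevance with an exponential recency decay:

```javascript
// Hypothetical re-ranker: blends relevance with a recency decay.
// Half-life of 30 days: a document last touched a month ago contributes
// half the recency weight of one touched today.
function rerank(results, query, now = Date.now()) {
  const HALF_LIFE_MS = 30 * 24 * 60 * 60 * 1000;
  return results
    .map(r => {
      const ageMs = now - r.modifiedAt;
      const recency = Math.pow(0.5, ageMs / HALF_LIFE_MS);
      return { ...r, finalScore: r.relevanceScore * 0.7 + recency * 0.3 };
    })
    .sort((a, b) => b.finalScore - a.finalScore);
}
```

With this blend, a slightly less relevant but much fresher document can outrank a stale one — the behaviour the recency requirement asks for.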

Key takeaway: Unified enterprise search across all platforms eliminates the 'where did I see that?' problem that wastes hours per employee per week.

Conversational Search Interface

intermediate

A search experience where users can refine results through conversation. 'Find project management tools' followed by 'that integrate with Slack' followed by 'under £20 per month'. Each turn narrows and refines the search.

// Conversational search
async function conversationalSearch(message, conversationState) {
  // Understand the refinement
  const intent = await llm.chat({
    messages: [{
      role: "system",
      content: "Given the conversation, extract the cumulative search criteria as structured filters."
    }, ...conversationState.messages, { role: "user", content: message }]
  });

  // Merge with existing filters
  const filters = { ...conversationState.filters, ...intent.newFilters };

  const results = await search(intent.query, filters);

  return {
    results,
    filters,
    clarification: results.length > 20 ? "Would you like to narrow this down further?" : null,
  };
}
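The cumulative filtering above can be exercised without the LLM step. A sketch, assuming the intent extractor returns a plain `newFilters` object each turn (hypothetical shape):

```javascript
// Each conversational turn merges its extracted filters over the previous
// state, so later refinements narrow (or overwrite) earlier ones.
function mergeTurn(state, newFilters) {
  return { ...state, filters: { ...state.filters, ...newFilters } };
}

// The three example turns from the description accumulate like so:
let state = { filters: {} };
state = mergeTurn(state, { category: "project-management" });
state = mergeTurn(state, { integratesWith: "slack" });
state = mergeTurn(state, { maxPricePerMonth: 20 });
// state.filters now holds all three criteria
```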

Key takeaway: Conversational search reduces search-to-find time by letting users progressively refine rather than crafting the perfect query upfront.

Visual Similarity Search

advanced

An image-based search where users upload a photo and find visually similar products. Uses CLIP embeddings to map images and text into the same space, enabling both image-to-image and text-to-image search.

// Visual similarity search
async function visualSearch(image) {
  const imageEmbedding = await clipEmbed(image);

  const similar = await vectorStore.search(imageEmbedding, {
    topK: 20,
    index: "product-images",
  });

  // Group by product to avoid showing multiple angles of same item
  const unique = deduplicateByProduct(similar);

  return unique.map(r => ({
    product: r.product,
    similarity: r.score,
    image: r.thumbnailUrl,
    price: r.price,
  }));
}
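The `deduplicateByProduct` helper is assumed above. One way to sketch it, keeping only the highest-scoring image per product (assuming each hit carries a `product.id`):

```javascript
// Keep the best-scoring hit per product so the results page shows one
// image per item rather than several angles of the same product.
function deduplicateByProduct(results) {
  const seen = new Set();
  return [...results]
    .sort((a, b) => b.score - a.score)
    .filter(r => {
      if (seen.has(r.product.id)) return false;
      seen.add(r.product.id);
      return true;
    });
}
```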

Key takeaway: Visual search converts browsers into buyers — users who search by image have 2x higher purchase intent than text searchers.

Typo-Tolerant Search with Query Understanding

beginner

A search system that handles misspellings, synonyms, and ambiguous queries gracefully. Uses an LLM to understand query intent, correct spelling, expand abbreviations, and generate multiple search interpretations.

// Query understanding pipeline
async function understandQuery(rawQuery) {
  const understood = await llm.chat({
    messages: [{
      role: "system",
      content: `Analyse this search query. Return JSON:
      { correctedQuery, intent, synonymExpansions, filters, isAmbiguous, interpretations }`
    }, { role: "user", content: rawQuery }]
  });

  if (understood.isAmbiguous) {
    return {
      searchQueries: understood.interpretations.map(i => i.query),
      clarificationPrompt: "Did you mean: " + understood.interpretations.map(i => i.description).join(" or "),
    };
  }

  return { searchQueries: [understood.correctedQuery, ...understood.synonymExpansions] };
}

Key takeaway: Query understanding is more impactful than ranking improvements — fixing the query fixes everything downstream.

Patterns

Key patterns to follow

  • Semantic search complements rather than replaces keyword search — use hybrid approaches
  • Query understanding (spelling correction, intent detection) has the highest ROI of any search improvement
  • Access control in enterprise search must be enforced at query time, not by post-filtering results
  • Re-ranking with business signals (popularity, margin, freshness) alongside relevance improves commercial outcomes
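The hybrid pattern in the first bullet is often implemented with reciprocal rank fusion (RRF), which merges keyword and semantic result lists without requiring their scores to be comparable. A sketch, assuming each result has a stable `id`:

```javascript
// Reciprocal rank fusion: score each document by the sum of 1/(k + rank)
// across every ranked list it appears in. k = 60 is the commonly used
// constant; documents ranked highly in either list float to the top.
function reciprocalRankFusion(rankedLists, k = 60) {
  const scores = new Map();
  for (const list of rankedLists) {
    list.forEach((doc, i) => {
      scores.set(doc.id, (scores.get(doc.id) || 0) + 1 / (k + i + 1));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([id, score]) => ({ id, score }));
}
```

Because RRF works on ranks rather than raw scores, the keyword engine and the vector store need no score calibration against each other.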

FAQ

Frequently asked questions

How is AI search different from traditional search?

Traditional search matches keywords. AI search understands intent, handles synonyms and misspellings, supports natural language queries, and can match across modalities (text, images). It finds what users mean, not just what they type.

Do I need a vector database?

For semantic search, yes — you need vector storage for embeddings. However, you can start with PostgreSQL's pgvector extension before moving to dedicated solutions like Pinecone or Weaviate as you scale.

How do I measure search quality?

Use metrics like Mean Reciprocal Rank (MRR), Normalized Discounted Cumulative Gain (NDCG), and click-through rate. Build a test set of queries with known relevant results and measure against it regularly.
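MRR is straightforward to compute against such a test set. A sketch, assuming each test case maps a query to the id of its known relevant document:

```javascript
// Mean Reciprocal Rank: for each query, take 1 / position of the first
// relevant result (0 if it never appears), then average across queries.
function meanReciprocalRank(testSet, searchFn) {
  let total = 0;
  for (const { query, relevantId } of testSet) {
    const rank = searchFn(query).findIndex(r => r.id === relevantId);
    total += rank === -1 ? 0 : 1 / (rank + 1);
  }
  return total / testSet.length;
}
```

Run it on every ranking change; a drop in MRR on the test set is a regression even if aggregate click-through looks flat.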

How much latency does AI search add?

Embedding generation adds 50-200ms. Vector search adds 10-50ms. Total AI search latency is typically 100-300ms, which is acceptable for most applications. Pre-compute embeddings for your corpus so only the query needs embedding at search time.
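On the query side, repeated queries can skip the embedding call entirely with a small cache. A minimal sketch, assuming an async `embed` function; a production cache would bound its size and expire entries:

```javascript
// Memoise query embeddings so repeated queries avoid the 50-200ms
// embedding call on subsequent searches.
function cachedEmbedder(embed) {
  const cache = new Map();
  return async query => {
    if (!cache.has(query)) cache.set(query, await embed(query));
    return cache.get(query);
  };
}
```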

Can I add AI search to my existing keyword search?

Yes. The most common approach is to add semantic search as a complement to your existing keyword search, merge results, and re-rank. This hybrid approach gives you the benefits of AI without replacing proven infrastructure.
