Why Qyver?



Your Users Demand Smarter Search

Search and information retrieval are evolving fast. With AI and large language models redefining expectations, users now expect search systems that handle complex, nuanced queries—far beyond simple keyword matching.

Just consider what Algolia's CTO observed:

"We saw 2x more keyword searches just six months after ChatGPT launched." – Algolia CTO, 2023

With 17,000 customers generating 120 billion searches per month, this shift is undeniable. Across industries, search queries are becoming more sophisticated, blending multiple concepts, contexts, and data types.

Why Traditional Vector Search Falls Short

Vector search—whether text-only or multi-modal—struggles with complex queries because search isn't just about text. Real-world queries incorporate structured data like numbers, categories, and time-based information.

Real-World Examples:

  • E-commerce: "Comfortable running shoes for marathon training under $150"

    • Requires understanding text, numerical data (price), and categorical attributes (product type, use case).

  • Content Platforms: "Popular sci-fi movies from the 80s with strong female leads"

    • Blends text analysis, temporal data, and popularity metrics.

  • Job Search: "Entry-level data science roles in tech startups with good work-life balance"

    • Involves text, categorical data (industry, job level), and even subjective metrics.

To meet these rising expectations, search must go beyond embeddings—it needs structured and unstructured data working together.

Enter Qyver

This is where Qyver comes in—a powerful and flexible framework designed to tackle the complexities of modern search and information retrieval. Qyver provides a vector embedding solution for AI teams working with Retrieval-Augmented Generation (RAG), Search, Recommendations, and Analytics stacks.

Imagine you're building a system that can handle a query like "recent news about crop yield." After gathering your data, you define your schema, ingest the data, and build an index like this:

Schema Definition

class News(sl.Schema):
    id: sl.IdField
    created_at: sl.Timestamp
    like_count: sl.Integer
    moderation_score: sl.Float
    content: sl.String

class User(sl.Schema):
    id: sl.IdField
    interest: sl.String

class Event(sl.EventSchema):
    id: sl.IdField
    news: sl.SchemaReference[News]
    user: sl.SchemaReference[User]
    event_type: sl.String

news = News()
user = User()
event = Event()

Encoder definition

recency_space = sl.RecencySpace(timestamp=news.created_at)
popularity_space = sl.NumberSpace(number=news.like_count, mode=sl.Mode.MAXIMUM)
trust_space = sl.NumberSpace(number=news.moderation_score, mode=sl.Mode.MAXIMUM)
semantic_space = sl.TextSimilaritySpace(
    text=[news.content, user.interest], model="sentence-transformers/all-mpnet-base-v2"
)

Define Indexes

index = sl.Index(
    spaces=[recency_space, popularity_space, trust_space, semantic_space],
    effects=[sl.Effect(semantic_space, event.user, 0.8 * event.news)],
)
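The effects line above can be read as: each interaction nudges the user's vector toward the interacted news item's vector with weight 0.8, so future searches personalize toward that user's behavior. Below is a minimal sketch of that update rule, illustrative only and not Qyver's exact math:

```python
import math

def normalize(v):
    # Scale a vector to unit length (guard against the zero vector)
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def apply_event(user_vec, news_vec, weight=0.8):
    # Nudge the user's vector toward the news item they interacted with,
    # then renormalize so cosine similarity stays well defined
    moved = [u + weight * n for u, n in zip(user_vec, news_vec)]
    return normalize(moved)

user_vec = normalize([1.0, 0.0, 0.0])
news_vec = normalize([0.0, 1.0, 0.0])
updated = apply_event(user_vec, news_vec)
# `updated` is now more similar to the news item than `user_vec` was
```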

You define your queries and parameterize them like this:

Query definition

query = (
    sl.Query(
        index,
        weights={
            recency_space: sl.Param("recency_weight"),
            popularity_space: sl.Param("popularity_weight"),
            trust_space: sl.Param("trust_weight"),
            semantic_space: sl.Param("semantic_weight"),
        },
    )
    .find(news)
    .similar(semantic_space.text, sl.Param("content_query"))
    .with_vector(user, sl.Param("user_id"))
)

Debug in notebook, run as server

sl.OnlineExecutor(
    sources=[sl.RestSource(news), sl.RestSource(user)],
    index=[index],
    query=[query],
    # vector_database = MongoDBVectorDatabase(...),
    # vector_database = RedisVectorDatabase(...),
    # vector_database = QdrantVectorDatabase(...),
)

# SparkExecutor()   <-- Coming soon in Qyver Cloud

Then query the running server over HTTP:

curl -X POST \
    'http://localhost:8000/api/v1/search/query' \
    --header 'Accept: */*' \
    --header 'Content-Type: application/json' \
    --data-raw '{
        "content_query": "crop yields",
        "semantic_weight": 0.5,
        "recency_weight": 0.9,
        "popularity_weight": 0.5,
        "trust_weight": 0.2
    }'
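The same request can be made from Python with the standard library. This is a hypothetical client helper, not part of the Qyver SDK; the URL and payload mirror the curl call above and assume the server is running locally:

```python
import json
import urllib.request

def search(params, url="http://localhost:8000/api/v1/search/query"):
    # POST the query parameters as JSON and decode the JSON response
    req = urllib.request.Request(
        url,
        data=json.dumps(params).encode("utf-8"),
        headers={"Content-Type": "application/json", "Accept": "*/*"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

params = {
    "content_query": "crop yields",
    "semantic_weight": 0.5,
    "recency_weight": 0.9,
    "popularity_weight": 0.5,
    "trust_weight": 0.2,
}
# results = search(params)  # requires a running Qyver server
```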

Handle natural language queries

# In a notebook, like this:

query = (
    sl.Query(...)
    .with_natural_query(sl.Param("natural_language_query"))
)
# As an API call like this:

curl -X POST \
    'http://localhost:8000/api/v1/search/query' \
    --header 'Accept: */*' \
    --header 'Content-Type: application/json' \
    --data-raw '{
        "natural_language_query": "recent news about crop yield"
    }'

But can't I put all the data in JSON, stringify it, and embed it with an LLM?

The stringify-and-embed approach produces unpredictable results. For example (code below):

  • Embed the numbers 0–100 with the OpenAI API

  • Calculate and plot the pairwise cosine similarities

  • Observe the difference between the expected and actual results

from openai import OpenAI
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics.pairwise import cosine_similarity

# Embed the numbers 0..100 as plain strings
response = OpenAI().embeddings.create(
    input=[str(i) for i in range(0, 101)],
    model="text-embedding-3-small",
)
embeddings = np.array([r.embedding for r in response.data])

# Pairwise cosine similarities; ideally these would fall off
# smoothly as the numeric difference grows
scores = cosine_similarity(embeddings, embeddings)
plt.imshow(scores)
plt.show()
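By contrast, an embedding designed for numbers can make similarity fall off monotonically with numeric distance. Below is a minimal sketch of the idea (illustrative only, not Qyver's actual implementation): map each number onto a quarter circle, so the cosine similarity of two embeddings shrinks smoothly as the values move apart:

```python
import math

def number_embed(x, min_val=0.0, max_val=100.0):
    # Map x onto a quarter circle; the result is a unit vector, so a dot
    # product is the cosine similarity, which decreases with |x - y|
    theta = (x - min_val) / (max_val - min_val) * (math.pi / 2)
    return (math.sin(theta), math.cos(theta))

def cos_sim(a, b):
    return a[0] * b[0] + a[1] * b[1]  # both vectors are unit length

# 32 is numerically closer to 50 than 25 is, and the scores agree:
closer = cos_sim(number_embed(32), number_embed(50))
farther = cos_sim(number_embed(25), number_embed(50))
```

Unlike the stringified LLM embeddings above, this construction can never rank 25 as more similar to 50 than 32 is.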

Alright, but can't I...

1. Store and Search Separately?

One common but inefficient approach is to store and search attribute vectors separately, fire multiple searches, and then reconcile the results. However, this method has several drawbacks:

  • Limited in capturing subtle relationships between attributes

  • Less efficient when retrieving objects with multiple simultaneous attributes

A better approach is to store all attribute vectors in the same vector database and perform a single weighted search at query time.
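To see why a single weighted search works, here is an illustrative sketch (not Qyver's internals): normalize each attribute vector, scale it by a weight, and concatenate. One dot product over the concatenated vectors then equals the weighted sum of per-attribute cosine similarities:

```python
import math

def concat_weighted(spaces, weights):
    # Normalize each attribute vector, scale by its weight, concatenate
    out = []
    for vec, w in zip(spaces, weights):
        norm = math.sqrt(sum(x * x for x in vec)) or 1.0
        out.extend(w * x / norm for x in vec)
    return out

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Each object has two attribute vectors: semantic (3-d) and recency (2-d)
doc_spaces = [[0.2, 0.9, 0.1], [1.0, 0.0]]
query_spaces = [[0.1, 0.8, 0.3], [0.9, 0.1]]

doc_vec = concat_weighted(doc_spaces, [1.0, 1.0])
query_vec = concat_weighted(query_spaces, [1.0, 0.5])  # query-time weights

# One dot product == weighted sum of per-space cosine similarities
score = dot(query_vec, doc_vec)
```

Because the weights live in the query vector, they can change per request without re-indexing anything.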


2. Use Metadata Filters or Candidate Re-ranking?

Metadata filters and re-ranking have their own limitations:

  • Converting vague preferences like “recent,” “risky,” or “popular” into filters often results in a binary step function, leading to low resolution.

  • Semantic ranking (e.g., ColBERT) is limited to text.

  • Learn-to-rank models require ML expertise to fine-tune.

  • Broad queries like “popular pants” fail with re-ranking alone due to poor candidate recall.

For high-quality, nuanced retrieval, a multi-attribute vector search approach is often superior.
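The "binary step function" problem is easy to see in code. A toy comparison (not Qyver's RecencySpace implementation): a metadata filter gives every item inside the window the same score, while a smooth decay preserves resolution between fresher and staler items:

```python
import time

NOW = time.time()
DAY = 86_400  # seconds per day

def filter_score(created_at, window_days=7):
    # Metadata filter: a binary step; everything in the window ties at 1.0
    return 1.0 if created_at >= NOW - window_days * DAY else 0.0

def decay_score(created_at, half_life_days=7):
    # Smooth exponential decay: fresher items always score strictly higher
    age_days = (NOW - created_at) / DAY
    return 0.5 ** (age_days / half_life_days)

one_day_old = NOW - 1 * DAY
six_days_old = NOW - 6 * DAY
# filter_score cannot tell these apart; decay_score can
```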

Scaling with Qyver

Why Use Qyver Server?

  • REST API for easy integration with existing applications

  • Built-in Vector Database connectivity for efficient storage and retrieval

  • Eliminates infrastructure overhead, letting developers focus on functionality, not deployment

  • Scales from prototype to production effortlessly

By deploying Qyver Server, you can integrate multi-attribute vector search into your system while maintaining efficiency, flexibility, and scalability.

Where Qyver Fits in the Big Picture

Let’s walk through a quick example. Don’t worry if some concepts are new—we’ll break them down in detail later. For now, this is just a glimpse of how Qyver works.

Keep in mind that multiple kNN searches take longer than a single search using concatenated vectors.

To build with Qyver at scale, you can leverage the Qyver Server. It’s designed to seamlessly integrate Qyver’s advanced search and retrieval capabilities into your application via a RESTful API.
Note on the stringify-and-embed experiment above: OpenAI embeddings produce noisy, non-monotonic cosine similarity scores. For example, CosSim(25, 50) is 0.69 while CosSim(32, 50) is 0.42, meaning 25 appears more similar to 50 than 32 does, which doesn't make sense. Qyver number embeddings avoid such inconsistencies by design.