
Features


1. Qdrant Dashboard

Qdrant provides a built-in web dashboard for managing collections, running queries, and exploring data.

How to Access

  1. Navigate to your database in the TIR portal.
  2. Click Connect to open the Qdrant Dashboard.
  3. Enter your API Key and click Apply.

Dashboard Sections

| Section | Purpose |
| --- | --- |
| Collections | Create, view, and delete collections. Browse points, view collection info, and visualize vectors. |
| Console | Execute Qdrant API commands interactively. Browse available API commands and run them directly. |
| Tutorial | Interactive API tutorial provided by Qdrant to learn how the APIs work. |
| Datasets | Sample data for testing and experimentation. |
> **Tip:** Use the Console section to test API calls before integrating them into your application.


2. Monitoring

Track your cluster's health and performance through the Monitoring tab on the database details page.

Hardware Metrics

| Metric | Description |
| --- | --- |
| CPU Usage | Processor utilization percentage per node |
| Memory Usage | RAM consumption per node |

Service Metrics

| Metric | Description |
| --- | --- |
| REST Request Count | Total HTTP API requests (with failure breakdown) |
| gRPC Request Count | Total gRPC API requests (with failure breakdown) |
| REST Response Duration | Average latency of HTTP API responses per cluster peer |
| gRPC Response Duration | Average latency of gRPC API responses per cluster peer |
| Cluster Pending Operations | Operations waiting in the cluster queue |

Cluster Gauges

| Gauge | Description |
| --- | --- |
| Total Collections | Number of collections in the cluster |
| Total Vectors | Total vector count across all collections |
| Cluster Peers | Number of active nodes in the cluster |

Available time intervals: 5 minutes, 1 hour, 24 hours, 7 days, 30 days.
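
Beyond the dashboard charts, Qdrant itself serves Prometheus-format metrics at its `GET /metrics` endpoint; whether that endpoint is reachable on your TIR deployment is an assumption to verify. A minimal sketch of parsing that text format, using illustrative sample lines rather than live output:

```python
def parse_prometheus(text: str) -> dict:
    """Return {metric_name: float_value}, skipping comments and blank lines."""
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # Prometheus text format: "<name>{labels} <value>"
        name, _, value = line.rpartition(" ")
        try:
            metrics[name] = float(value)
        except ValueError:
            pass  # ignore lines that do not end in a numeric value
    return metrics

# Sample lines in the shape Qdrant emits (values are illustrative):
sample = """\
# HELP collections_total Number of collections
collections_total 3
rest_responses_total{method="GET",status="200"} 1024
"""
print(parse_prometheus(sample)["collections_total"])  # -> 3.0
```

This is only a convenience for ad-hoc inspection; for production monitoring, point a Prometheus scraper at the endpoint instead.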


3. Scaling

TIR deploys Qdrant in distributed mode where multiple nodes form a cluster.

Upscaling Nodes

  1. Click the Actions icon for your database.
  2. Select Scale Up.
  3. Choose the new node count and click Update.
> **Important**
> - You can scale from 3 to 10 nodes. Downscaling is not supported.
> - New nodes start empty. Data is not automatically rebalanced.
> - See Making Use of New Nodes below to distribute data to new nodes.

Resizing Disk

  1. Navigate to your database and open the Resize Disk tab.
  2. Select the new storage size and confirm.
> **Warning:** Disk size can be increased from 10 GB to 1,000 GB but cannot be reduced once increased. Additional storage is charged per node.

Making Use of New Nodes

When you add nodes, they start empty. You have three options to use them:

| Option | When to Use |
| --- | --- |
| Create a new collection | New data automatically distributes across all nodes based on shard count and replication factor. |
| Replicate existing shards | Copy shard data to the new node for redundancy. |
| Move existing shards | Relocate shards without duplicating data. |

Creating a New Collection

```shell
curl -X PUT \
  -H "api-key: <your-api-key>" \
  -H "Content-Type: application/json" \
  -d '{
    "vectors": { "size": 300, "distance": "Cosine" },
    "shard_number": 8,
    "replication_factor": 2
  }' \
  https://<your-endpoint-url>:6333/collections/<collection_name>
```
> **Tip:** Set `shard_number` as a multiple of your node count for even distribution.
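
To see why the multiple matters, here is a toy model of round-robin shard placement (an illustration only; Qdrant's actual placement logic may differ):

```python
from collections import Counter

def round_robin_placement(shard_number: int, node_count: int) -> Counter:
    """Count shards per node under simple round-robin assignment."""
    return Counter(shard % node_count for shard in range(shard_number))

# 8 shards on 4 nodes: every node holds exactly 2 shards
print(round_robin_placement(8, 4))
# 8 shards on 3 nodes: load is uneven (3 / 3 / 2 shards per node)
print(round_robin_placement(8, 3))
```

With an even split, every node serves roughly the same query and storage load; with a remainder, some nodes permanently carry an extra shard.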

Replicating Existing Data

First, get cluster info to find peer IDs and shard assignments:

```shell
curl -X GET \
  -H "api-key: <your-api-key>" \
  https://<your-endpoint-url>:6333/collections/<collection_name>/cluster
```

Replicate a shard to a new node:

```shell
curl -X POST \
  -H "api-key: <your-api-key>" \
  -H "Content-Type: application/json" \
  -d '{
    "replicate_shard": {
      "shard_id": 0,
      "from_peer_id": 123,
      "to_peer_id": 456
    }
  }' \
  https://<your-endpoint-url>:6333/collections/<collection_name>/cluster
```

Drop a replica from a node:

```shell
curl -X POST \
  -H "api-key: <your-api-key>" \
  -H "Content-Type: application/json" \
  -d '{
    "drop_replica": {
      "shard_id": 0,
      "peer_id": 123
    }
  }' \
  https://<your-endpoint-url>:6333/collections/<collection_name>/cluster
```
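
When choosing `shard_id`, `from_peer_id`, and `to_peer_id` values, it can help to summarize the cluster info response first. A sketch assuming the response shape returned by the cluster endpoint above; the sample dict is illustrative, not real output:

```python
def shards_by_peer(cluster_info: dict) -> dict:
    """Map peer_id -> list of shard_ids from a collection cluster response."""
    result = cluster_info["result"]
    placement = {}
    # local_shards live on the peer that answered the request
    local_peer = result["peer_id"]
    for shard in result.get("local_shards", []):
        placement.setdefault(local_peer, []).append(shard["shard_id"])
    for shard in result.get("remote_shards", []):
        placement.setdefault(shard["peer_id"], []).append(shard["shard_id"])
    return placement

# Illustrative response in the documented shape:
sample = {"result": {
    "peer_id": 123,
    "local_shards": [{"shard_id": 0}, {"shard_id": 2}],
    "remote_shards": [{"shard_id": 1, "peer_id": 456}],
}}
print(shards_by_peer(sample))  # -> {123: [0, 2], 456: [1]}
```

A peer that appears in the cluster but owns no shards is a newly added, still-empty node.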

Moving Shards

Move a shard between nodes without duplicating data:

```shell
curl -X POST \
  -H "api-key: <your-api-key>" \
  -H "Content-Type: application/json" \
  -d '{
    "move_shard": {
      "shard_id": 0,
      "from_peer_id": 123,
      "to_peer_id": 456
    }
  }' \
  https://<your-endpoint-url>:6333/collections/<collection_name>/cluster
```

Transfer methods:

| Method | Description |
| --- | --- |
| `stream_records` (default) | Streams records to the target node in batches. |
| `snapshot` | Transfers the shard, including index and quantized data, via a snapshot. |

To use snapshot transfer, add `"method": "snapshot"` to the `move_shard` payload.
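
For example, a `move_shard` payload using snapshot transfer (the peer IDs are placeholders):

```python
import json

# move_shard request body with the snapshot transfer method added
payload = {
    "move_shard": {
        "shard_id": 0,
        "from_peer_id": 123,
        "to_peer_id": 456,
        "method": "snapshot",
    }
}
print(json.dumps(payload, indent=2))
```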


4. Snapshots

Managed Snapshots (TIR)

TIR provides one-click cluster-wide snapshots that back up all data across all nodes.

Create a Snapshot

  1. Click the Take Snapshot icon on the Vector Database list or from the Actions menu.
  2. Monitor the status in the Snapshots tab.
> **Info:** You cannot create another snapshot or scale your cluster while a snapshot is in progress.

Restore a Snapshot

  1. Open the Snapshots tab for your database.
  2. Click the Restore icon next to the snapshot.
  3. Monitor the restoration status.
> **Warning**
> - The database must be in Running state to restore.
> - During restoration, the dashboard and upscaling operations are disabled.
> - Do not perform write operations during restoration.

Delete a Snapshot

  1. Open the Snapshots tab.
  2. Click the Delete icon next to the snapshot.
> **Info:** You cannot delete a snapshot while its restoration is in progress.

API Snapshots (Per-Node, Per-Collection)

Qdrant also provides per-node, per-collection snapshot APIs. These snapshots are stored on the node's local disk.

> **Info:** You must create and restore snapshots for each node per collection separately, using the node-specific URI `https://<endpoint>:6333/node-{node}`, where `node` ranges from 0 to `replicas - 1`.
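
Since the steps below have to be repeated per node, a small helper to enumerate the node-specific base URIs can be handy (the endpoint value below is a placeholder):

```python
def node_uris(endpoint: str, replicas: int) -> list[str]:
    """Build the per-node base URIs, node-0 through node-(replicas - 1)."""
    return [f"https://{endpoint}:6333/node-{n}" for n in range(replicas)]

# A 3-node cluster yields node-0, node-1, and node-2 URIs
print(node_uris("example.tir.internal", 3))
```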

Create a Snapshot

```python
from qdrant_client import QdrantClient

client = QdrantClient(
    host="<endpoint-url>", port=6333, https=True,
    prefix="node-<node>", api_key="<api-key>",
)
client.create_snapshot(collection_name="<collection_name>")
```

List Snapshots

```python
client.list_snapshots(collection_name="<collection_name>")
```

Download a Snapshot File

```shell
curl -X GET \
  -H "api-key: <your-api-key>" \
  --output <snapshot_name> \
  https://<endpoint-url>:6333/node-<node>/collections/<collection_name>/snapshots/<snapshot_name>
```

Restore a Snapshot

```python
client.recover_snapshot(
    "<collection_name>",
    "<file_location>",
)
```

You can also restore from an uploaded file:

```shell
curl -X POST \
  'https://<endpoint-url>:6333/node-<node>/collections/<collection_name>/snapshots/upload?priority=snapshot' \
  -H 'api-key: <your-api-key>' \
  -H 'Content-Type: multipart/form-data' \
  -F 'snapshot=@/path/to/<file_name>.snapshot'
```

Snapshot Recovery Priorities

| Priority | Behavior |
| --- | --- |
| `replica` (default) | Prefer existing data over snapshot data. |
| `snapshot` | Prefer snapshot data over existing data. |
| `no_sync` | Restore without any additional synchronization. |
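
To use a non-default priority with the upload endpoint, pass it as a query parameter. A sketch of building that URL (endpoint and collection names are placeholders):

```python
from urllib.parse import urlencode

def snapshot_upload_url(endpoint: str, node: int, collection: str,
                        priority: str = "snapshot") -> str:
    """Build the per-node snapshot-upload URL with a recovery priority."""
    base = (f"https://{endpoint}:6333/node-{node}"
            f"/collections/{collection}/snapshots/upload")
    return f"{base}?{urlencode({'priority': priority})}"

print(snapshot_upload_url("example.tir.internal", 0, "my_documents"))
```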

Delete a Snapshot

```python
client.delete_snapshot(
    collection_name="<collection_name>",
    snapshot_name="<snapshot_name>",
)
```

5. Access Control

Each Qdrant database has two access keys:

| Key Type | Purpose |
| --- | --- |
| API Key | Full read/write access to all collections and operations. |
| Read-Only API Key | Read-only access for queries and listing operations. |
> **Tip:** Use the read-only key for client applications that only need to query data.


6. Integrations

LangChain

LangChain simplifies building LLM-powered applications. Integrate Qdrant as a vector store for search and retrieval.

Install:

```shell
pip install langchain langchain-community qdrant-client
```

Connect, embed, and search:

```python
from langchain_community.document_loaders import TextLoader
from langchain_text_splitters import CharacterTextSplitter
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import Qdrant

# Load and chunk documents
loader = TextLoader("<your-file>")
documents = loader.load()
docs = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0).split_documents(documents)

# Embed and insert into Qdrant
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")
qdrant = Qdrant.from_documents(
    docs, embeddings,
    host="<your-endpoint-url>", port=6333,
    api_key="<your-api-key>",
    collection_name="my_documents",
)

# Similarity search: returns (document, score) pairs
found_docs = qdrant.similarity_search_with_score("<your-query>")
document, score = found_docs[0]

# MMR search (diverse results)
found_docs = qdrant.max_marginal_relevance_search("<your-query>", k=2, fetch_k=10)
```

For more, see the Qdrant LangChain Documentation.

LlamaIndex

LlamaIndex connects your private data with LLMs. Use Qdrant as a vector store for indexing and retrieval.

Install:

```shell
pip install llama-index llama-index-vector-stores-qdrant llama-index-embeddings-fastembed qdrant-client
```

Connect, embed, and search:

```python
import qdrant_client
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, StorageContext, Settings
from llama_index.vector_stores.qdrant import QdrantVectorStore
from llama_index.embeddings.fastembed import FastEmbedEmbedding

# Configure the embedding model and disable the default LLM
Settings.embed_model = FastEmbedEmbedding(model_name="BAAI/bge-base-en-v1.5")
Settings.llm = None

# Load documents
documents = SimpleDirectoryReader("<your-data-dir>").load_data()

# Connect and insert into Qdrant
client = qdrant_client.QdrantClient(
    host="<your-endpoint-url>", port=6333, api_key="<your-api-key>"
)
vector_store = QdrantVectorStore(client=client, collection_name="<collection_name>")
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)

# Query
query_engine = index.as_query_engine()
response = query_engine.query("<your-query>")
```

For more, see the Qdrant LlamaIndex Documentation.