LangChain Integration with Qdrant

LangChain is a framework for developing applications powered by large language models (LLMs). It simplifies every stage of the LLM application lifecycle: development, productionization, and deployment. You can integrate Qdrant with LangChain to leverage Qdrant’s search and retrieval capabilities within the LangChain ecosystem.

Prerequisites

Installing LangChain and qdrant-client:

  • To install LangChain

pip install langchain
  • To install qdrant-client

pip install qdrant-client
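
  • Depending on your LangChain version, the community integrations used below (TextLoader, the Qdrant vector store) ship in a separate langchain-community package, and HuggingFaceEmbeddings loads its model through the sentence-transformers package. If any import in the next section fails, install them explicitly

pip install langchain-community sentence-transformers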

Importing Libraries

  • First, we import the LangChain components that load the source file, split the text into chunks, embed those chunks, and store them in the Qdrant vector database.

from langchain_community.document_loaders import TextLoader
from langchain_text_splitters import CharacterTextSplitter
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import Qdrant

Reading the File

  • We need to read the contents of a file and store it in a variable. For this we use LangChain’s TextLoader class.

loader = TextLoader("<<FILE_NAME>>")
documents = loader.load()
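
As a quick sanity check, you can inspect what was loaded. TextLoader returns a list of Document objects, each with page_content and metadata:

# documents is a list of Document objects; a single text file yields one entry
print(len(documents))
print(documents[0].page_content[:200])  # preview the first 200 characters
print(documents[0].metadata)            # e.g. the source file path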

Chunking the Text

  • The next step is to split the file’s text into chunks so that, when Qdrant is queried, only the chunks most similar to the query are returned.

text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
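
You can verify the split before moving on; each chunk is itself a Document of roughly chunk_size characters:

print(f"Split {len(documents)} document(s) into {len(docs)} chunks")
print(docs[0].page_content[:100])  # preview the first chunk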

Embedding and Inserting into Qdrant

  • Now, we need to convert these chunks of text into vector embeddings and insert them into Qdrant. To create the embeddings we use a HuggingFace model, which works with LangChain out of the box. You also need to specify the API key and URL used to access your Qdrant cluster in the code below.

Note

You can get your Qdrant URL and API key from the Overview tab in Vector Database on TIR. See the Qdrant Details subsection for more information.

embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-mpnet-base-v2"
)

url = "<<YOUR_QDRANT_ENDPOINT_URL>>"
# HTTP port = 6333
# gRPC port = 6334
port = <<YOUR_PORT>>  # an integer: 6333 for HTTP, 6334 for gRPC
api_key = "<<YOUR_QDRANT_API_KEY>>"
qdrant = Qdrant.from_documents(
    docs,
    embeddings,
    url=url,
    port=port,
    api_key=api_key,
    collection_name="my_documents",
)

Note

You can pass extra parameters such as force_recreate=True to recreate your collection, or prefer_grpc=True to send requests over gRPC, as sketched below.
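
For example, a variant of the call above with both options enabled might look like this (with prefer_grpc=True the client talks to Qdrant on the gRPC port, 6334 by default):

qdrant = Qdrant.from_documents(
    docs,
    embeddings,
    url=url,
    api_key=api_key,
    collection_name="my_documents",
    prefer_grpc=True,     # send requests over gRPC instead of HTTP
    force_recreate=True,  # drop and recreate the collection if it already exists
)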

Maximal Marginal Relevance Search (MMR)

  • If you’d like to look up similar documents but also want diverse results, MMR is a method worth considering. Maximal marginal relevance optimizes for both similarity to the query and diversity among the selected documents.

query = "<<QUERY>>"  # a question related to the text inserted into Qdrant
# fetch_k candidates are retrieved, then re-ranked for diversity; the top k are returned
found_docs = qdrant.max_marginal_relevance_search(query, k=2, fetch_k=10)
for i, doc in enumerate(found_docs):
    print(f"{i + 1}.", doc.page_content, "\n")

Now, you can pass the retrieved text along with the original query to your inference endpoint to get a context-specific response, as in the sketch below.
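
A minimal sketch, assuming a generic LLM client (the llm object here is a placeholder for whatever inference endpoint or LangChain chat model you use):

# stitch the retrieved chunks into a single context string
context = "\n\n".join(doc.page_content for doc in found_docs)
prompt = (
    "Answer the question using only the context below.\n\n"
    f"Context:\n{context}\n\n"
    f"Question: {query}"
)
# response = llm.invoke(prompt)  # hypothetical call; replace with your model / endpoint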

What’s Next?

This quick start guide covered the basics of using Qdrant with LangChain.