
Artificial intelligence (AI) has transformed how humans interact with information in two major ways—search applications and generative AI. Search applications include ecommerce websites, document repository search, customer support call centers, customer relationship management, matchmaking for gaming, and application search. Generative AI use cases include chatbots with Retrieval-Augmented Generation (RAG), intelligent log analysis, code generation, document summarization, and AI assistants. AWS recommends Amazon OpenSearch Service as the vector database for Amazon Bedrock; together, they provide the building blocks you need to power solutions for these workloads.
In this post, you’ll learn how to use OpenSearch Service and Amazon Bedrock to build AI-powered search and generative AI applications. You’ll learn how AI-powered search systems employ foundation models (FMs) to capture and search context and meaning across text, images, audio, and video, delivering more accurate results to users, and how generative AI systems use these search results to create original responses to questions, supporting interactive conversations between humans and machines.
The post addresses common questions such as:
- What is a vector database and how does it support generative AI applications?
- Why is Amazon OpenSearch Service recommended as a vector database for Amazon Bedrock?
- How do vector databases help prevent AI hallucinations?
- How can vector databases improve recommendation systems?
- What are the scaling capabilities of OpenSearch as a vector database?
How vector databases work in the AI workflow
When you’re building for search, FMs and other AI models convert various types of data (text, images, audio, and video) into mathematical representations called vectors. When you use vectors for search, you encode your data as vectors and store those vectors in a vector database. You further convert your query into a vector and then query the vector database to find related items by minimizing the distance between vectors.
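As a minimal sketch of this idea, the following Python example ranks a tiny corpus by cosine distance to a query vector. The embed() function is a hypothetical placeholder for a real embedding model (such as an Amazon Bedrock FM), so the resulting ranking is illustrative only; a vector database performs the same nearest-neighbor ranking at scale.

```python
# Minimal sketch of vector search: rank documents by distance to the query vector.
# embed() is a hypothetical placeholder; a real embedding model produces vectors
# whose distances reflect semantic similarity.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder: a real model maps text to a dense vector (often 1,000+ dimensions).
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=8)
    return v / np.linalg.norm(v)  # unit-normalize so dot product = cosine similarity

docs = ["renew a passport", "file a tax return", "book a flight"]
doc_vectors = np.stack([embed(d) for d in docs])

query_vector = embed("how do I get a new passport?")
# Cosine distance = 1 - cosine similarity; smaller means more similar.
distances = 1.0 - doc_vectors @ query_vector
print(docs[int(np.argmin(distances))])  # with a real model: "renew a passport"
```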
When you’re building for generative AI, you use FMs such as large language models (LLMs), to generate text, video, audio, images, code, and more from a prompt. The prompt might contain text, such as a user’s question, along with other media such as images, audio, or video. However, generative AI models can produce hallucinations—outputs that appear convincing but contain factual errors. To solve for this challenge, you employ vector search to retrieve accurate information from a vector database. You add this information to the prompt in a process called Retrieval-Augmented Generation (RAG).
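At its core, the RAG loop is only three steps: retrieve, augment, generate. The following sketch uses hypothetical stubs for the vector database query and the FM call to show the shape of the pattern:

```python
# Conceptual sketch of the RAG pattern. vector_search() and generate() are
# hypothetical stubs standing in for a vector database query and an FM call.
def vector_search(question: str, k: int = 3) -> list[str]:
    # Stub: a real implementation queries a vector database such as OpenSearch.
    return ["Passports are renewed by mail or at an acceptance facility."][:k]

def generate(prompt: str) -> str:
    # Stub: a real implementation invokes an FM, for example through Amazon Bedrock.
    return f"(FM response grounded in: {prompt[:60]}...)"

def answer_with_rag(question: str) -> str:
    passages = vector_search(question)             # 1. retrieve relevant facts
    prompt = ("Answer using only this context:\n"
              + "\n".join(passages)
              + f"\nQuestion: {question}")         # 2. augment the prompt
    return generate(prompt)                        # 3. generate the response

print(answer_with_rag("How do I renew my passport?"))
```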
Why is Amazon OpenSearch Service the recommended vector database for Amazon Bedrock?
Amazon Bedrock is a fully managed service that provides FMs from leading AI companies, and the tools to customize these FMs with your data to improve their accuracy. With Amazon Bedrock, you get a serverless, no-fuss solution to adopt your selected FM and use it for your generative AI application.
Amazon OpenSearch Service is a fully managed service that you can use to deploy and operate OpenSearch in the AWS Cloud. OpenSearch is an open source search, log analytics, and vector database solution composed of two parts: a search engine with a built-in vector database, and OpenSearch Dashboards, a log analytics, observability, security analytics, and dashboarding solution. OpenSearch Service can help you deploy and operate your search infrastructure with native vector database capabilities, pre-built templates, and simplified setup. API calls and integration templates streamline connectivity with Amazon Bedrock FMs, while the OpenSearch Service vector engine can deliver latencies as low as single-digit milliseconds for searches across billions of vectors, making it ideal for real-time AI applications.
OpenSearch is a specialized database technology originally designed for latency- and throughput-optimized matching and retrieval of large and small blocks of unstructured text with ranked results. OpenSearch ranks results based on a measure of similarity to the search query, returning the most similar results. This similarity matching has evolved over time. Before FMs, search engines used a word-frequency scoring system called term frequency/inverse document frequency (TF/IDF). OpenSearch Service uses TF/IDF to score a document based on the rarity of the search terms across all documents and how often the search terms appear in the document it’s scoring.
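As a simplified sketch of the idea (production engines use refined variants of this formula), a TF/IDF score can be computed like this:

```python
# Simplified sketch of TF/IDF scoring. The intuition: rare terms that occur
# often in a document score that document highly for queries containing them.
import math

corpus = [
    "the quick brown fox",
    "the lazy dog",
    "the quick dog jumps",
]

def tf_idf(term: str, doc: str, docs: list[str]) -> float:
    words = doc.split()
    tf = words.count(term) / len(words)              # how often the term appears here
    df = sum(1 for d in docs if term in d.split())   # how many documents contain it
    idf = math.log(len(docs) / df) if df else 0.0    # rarer terms weigh more
    return tf * idf

for doc in corpus:
    print(f"{tf_idf('quick', doc, corpus):.3f}  {doc}")
```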
With the rise of AI/ML, OpenSearch added the ability to compute a similarity score based on the distance between vectors. To search with vectors, you add vector embeddings produced by FMs and other AI/ML technologies to your documents. To score documents for a query, OpenSearch computes the distance from each document’s vector to a vector derived from the query. OpenSearch also provides field-based filtering and matching, as well as hybrid vector and lexical search, which you use to incorporate terms in your queries. OpenSearch hybrid search performs a lexical and a vector query in parallel, producing a similarity score with built-in score normalization and blending that improves the accuracy of the search result compared with lexical or vector similarity alone.
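As an illustration, the following sketch (using the opensearch-py client; the endpoint, index name, field names, and model ID are assumptions) creates a search pipeline that normalizes and blends the two scores, then runs a hybrid query combining a lexical match clause with a neural vector clause:

```python
# Sketch of an OpenSearch hybrid query, assuming an index "products" with a text
# field "description" and a vector field "description_embedding", plus a deployed
# embedding model referenced by MODEL_ID. Endpoint and auth are placeholders.
from opensearchpy import OpenSearch

client = OpenSearch(hosts=["https://my-domain.example.com:443"])
MODEL_ID = "<your-model-id>"

# Search pipeline: min-max normalize the lexical and vector scores, then blend
# them with a weighted arithmetic mean (30% lexical, 70% vector here).
client.transport.perform_request(
    "PUT", "/_search/pipeline/hybrid-pipeline",
    body={
        "phase_results_processors": [{
            "normalization-processor": {
                "normalization": {"technique": "min_max"},
                "combination": {
                    "technique": "arithmetic_mean",
                    "parameters": {"weights": [0.3, 0.7]},
                },
            }
        }]
    },
)

# Hybrid query: OpenSearch runs both sub-queries in parallel and blends scores.
response = client.transport.perform_request(
    "GET", "/products/_search",
    params={"search_pipeline": "hybrid-pipeline"},
    body={
        "query": {
            "hybrid": {
                "queries": [
                    {"match": {"description": "waterproof hiking boots"}},
                    {"neural": {"description_embedding": {
                        "query_text": "waterproof hiking boots",
                        "model_id": MODEL_ID,
                        "k": 10,
                    }}},
                ]
            }
        }
    },
)
```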
OpenSearch Service supports three vector engines: Facebook AI Similarity Search (FAISS), Non-Metric Space Library (NMSLIB), and Apache Lucene. It supports exact nearest neighbor search as well as approximate nearest neighbor (ANN) search with either the Hierarchical Navigable Small World (HNSW) or Inverted File (IVF) algorithm. OpenSearch Service also supports vector quantization methods, including disk-based vector quantization, so you can optimize cost, latency, and retrieval accuracy for your solution.
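For example, here is a sketch of creating an index with a FAISS-backed HNSW vector field; the index name, field name, dimension, and tuning parameters are illustrative:

```python
# Sketch: create an index with a knn_vector field backed by the FAISS HNSW engine.
# Index name, field name, dimension, and tuning parameters are illustrative.
from opensearchpy import OpenSearch

client = OpenSearch(hosts=["https://my-domain.example.com:443"])

client.indices.create(
    index="my-vector-index",
    body={
        "settings": {"index.knn": True},  # enable ANN search on this index
        "mappings": {
            "properties": {
                "embedding": {
                    "type": "knn_vector",
                    "dimension": 1536,    # must match the embedding model's output
                    "method": {
                        "name": "hnsw",       # graph-based ANN algorithm
                        "engine": "faiss",    # or "nmslib" / "lucene"
                        "space_type": "l2",
                        "parameters": {"m": 16, "ef_construction": 128},
                    },
                }
            }
        },
    },
)
```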
Use case 1: Improve your search results with AI/ML
To improve your search results with AI/ML, you use a vector-generating ML model, most frequently an LLM or multi-modal model that produces embeddings for text and image inputs. You use Amazon OpenSearch Ingestion, or a similar technology, to send your data to OpenSearch Service. The OpenSearch Neural plugin integrates the model, referenced by a model ID, into an OpenSearch ingest pipeline that calls Amazon Bedrock to create vector embeddings for every document during ingestion.
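A sketch of such an ingest pipeline follows, assuming a connected Amazon Bedrock model registered under MODEL_ID and a source field named text; the endpoint, pipeline name, and field names are assumptions:

```python
# Sketch: an ingest pipeline whose text_embedding processor calls the embedding
# model (via the Neural plugin) for every document. MODEL_ID, the endpoint, and
# the field names are assumptions.
from opensearchpy import OpenSearch

client = OpenSearch(hosts=["https://my-domain.example.com:443"])
MODEL_ID = "<your-model-id>"

client.ingest.put_pipeline(
    id="embedding-pipeline",
    body={
        "description": "Generate embeddings with an Amazon Bedrock model",
        "processors": [{
            "text_embedding": {
                "model_id": MODEL_ID,
                # Write the embedding of "text" into "text_embedding".
                "field_map": {"text": "text_embedding"},
            }
        }],
    },
)
```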
To query OpenSearch Service as a vector database, you use an OpenSearch neural query, which calls Amazon Bedrock to create an embedding for the query. The neural query then retrieves the nearest neighbors from the vector database.
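A neural query might look like the following sketch; the endpoint, index, field names, model ID, and k value are illustrative:

```python
# Sketch: a neural query. OpenSearch embeds the query text with the referenced
# model and returns the k nearest documents. All names are illustrative.
from opensearchpy import OpenSearch

client = OpenSearch(hosts=["https://my-domain.example.com:443"])
MODEL_ID = "<your-model-id>"

response = client.search(
    index="my-vector-index",
    body={
        "_source": ["text"],  # return the text, not the raw vectors
        "query": {
            "neural": {
                "text_embedding": {  # the knn_vector field to search
                    "query_text": "How do I renew my passport?",
                    "model_id": MODEL_ID,
                    "k": 10,
                }
            }
        },
    },
)
```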
OpenSearch Service offers pre-built AWS CloudFormation templates that construct integrations connecting OpenSearch Service to Amazon Bedrock foundation models for remote inference. These templates simplify the setup of the connector that OpenSearch Service uses to contact Amazon Bedrock.
After you’ve created the integration, you can refer to the model_id when you set up your ingest and search pipelines, as in the sketches above.
Use case 2: Amazon OpenSearch Serverless as an Amazon Bedrock knowledge base
Amazon OpenSearch Serverless offers an auto-scaled, high-performing vector database that you can use to build RAG applications and AI agents with Amazon Bedrock, without having to manage the vector database infrastructure. When you use OpenSearch Serverless, you create a collection, a grouping of indexes that supports your application’s search, vector, and logging needs. For vector database use cases, you send your vector data to your collection’s indexes, and OpenSearch Serverless creates a vector database that provides fast vector similarity search and retrieval.
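For example, the following sketch creates a vector search collection with boto3. The collection name is an assumption, and encryption, network, and data access policies must also be in place before the collection becomes active:

```python
# Sketch: create an OpenSearch Serverless collection for vector workloads.
# The collection name is an assumption; encryption, network, and data access
# policies must also be configured before the collection is usable.
import boto3

aoss = boto3.client("opensearchserverless")
aoss.create_collection(
    name="bedrock-kb-collection",
    type="VECTORSEARCH",  # collection type optimized for vector similarity
)
```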
When you use OpenSearch Serverless as a vector database, you pay only for storage for your vectors and the compute needed to serve your queries. Serverless compute capacity is measured in OpenSearch Compute Units (OCUs). You can deploy OpenSearch Serverless for development and test workloads starting at just one OCU, for about $175 per month. OpenSearch Serverless scales up and down automatically to accommodate your ingestion and search workloads.
With Amazon OpenSearch Serverless, you get an auto-scaled, performant vector database that is seamlessly integrated with Amazon Bedrock as a knowledge base for your generative AI solution. You use the Amazon Bedrock console to automatically create vectors from your data in up to five data stores, including an Amazon Simple Storage Service (Amazon S3) bucket, and store them in an Amazon OpenSearch Serverless collection.
When you’ve configured your data source and selected a model, select Amazon OpenSearch Serverless as your vector store, and Amazon Bedrock and OpenSearch Serverless take it from there. Amazon Bedrock automatically retrieves source data from your data source, applies the parsing and chunking strategies you have configured, and indexes vector embeddings in OpenSearch Serverless. An API call synchronizes your data source with the OpenSearch Serverless vector store.
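For example, a sketch of that synchronization using boto3, where both IDs are placeholders:

```python
# Sketch: trigger a sync so Amazon Bedrock re-ingests the data source into the
# OpenSearch Serverless vector store. Both IDs are placeholders.
import boto3

bedrock_agent = boto3.client("bedrock-agent")
bedrock_agent.start_ingestion_job(
    knowledgeBaseId="<knowledge-base-id>",
    dataSourceId="<data-source-id>",
)
```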
The Amazon Bedrock retrieve_and_generate() runtime API call makes it straightforward for you to implement RAG with Amazon Bedrock and your OpenSearch Serverless knowledge base.
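A minimal sketch of that call using boto3, where the knowledge base ID and model ARN are placeholders:

```python
# Sketch: one call retrieves relevant chunks from the knowledge base and
# generates a grounded answer. The knowledge base ID and model ARN are
# placeholders for your own values.
import boto3

runtime = boto3.client("bedrock-agent-runtime")
response = runtime.retrieve_and_generate(
    input={"text": "What is our refund policy?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "<knowledge-base-id>",
            "modelArn": "<model-arn>",
        },
    },
)
print(response["output"]["text"])
```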
Conclusion
In this post, you learned how Amazon OpenSearch Service and Amazon Bedrock work together to deliver AI-powered search and generative AI applications and why OpenSearch Service is the AWS recommended vector database for Amazon Bedrock. You learned how to add Amazon Bedrock FMs to generate vector embeddings for OpenSearch Service semantic search to bring meaning and context to your search results. You learned how OpenSearch Serverless provides a tightly integrated knowledge base for Amazon Bedrock that simplifies using foundation models for RAG and other generative AI. Get started with Amazon OpenSearch Service and Amazon Bedrock today to enhance your AI-powered applications with improved search capabilities and more reliable generative AI outputs.
About the author
Jon Handler is Director of Solutions Architecture for Search Services at Amazon Web Services, based in Palo Alto, CA. Jon works closely with OpenSearch and Amazon OpenSearch Service, providing help and guidance to a broad range of customers who have search and log analytics workloads for OpenSearch. Prior to joining AWS, Jon’s career as a software developer included four years of coding a large-scale ecommerce search engine. Jon holds a Bachelor of Arts from the University of Pennsylvania, and a Master of Science and a PhD in Computer Science and Artificial Intelligence from Northwestern University.