Posted On: Jul 10, 2024
Vector search for Amazon MemoryDB, an in-memory database with Multi-AZ durability, is now generally available. This capability lets you store, index, retrieve, and search vectors. Amazon MemoryDB delivers the fastest vector search performance at the highest recall rates among popular vector databases on the Amazon Web Services Cloud. Vector search for MemoryDB supports storing millions of vectors with single-digit millisecond query and update latencies at high throughput with greater than 99% recall. You can generate vector embeddings using AI/ML services, such as Amazon SageMaker, and store them within MemoryDB.
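As a minimal sketch of that workflow, the Python snippet below uses the redis-py client to create a vector index and store one embedding in a MemoryDB cluster. The cluster endpoint, index name, key prefix, and 1536-dimension embedding are illustrative assumptions, and the FT commands are the Redis OSS-compatible search API surface that MemoryDB's vector search documents; check the documentation for the exact options supported in your engine version.

```python
import numpy as np
import redis
from redis.commands.search.field import TagField, VectorField
from redis.commands.search.indexDefinition import IndexDefinition, IndexType

# Hypothetical MemoryDB cluster endpoint; MemoryDB clusters use TLS by default.
r = redis.Redis(
    host="clustercfg.my-vector-cluster.xxxxxx.memorydb.us-east-1.amazonaws.com",
    port=6379,
    ssl=True,
)

DIM = 1536  # embedding dimensionality, e.g. from a SageMaker-hosted embedding model

# Create an HNSW vector index over hashes whose keys start with "doc:".
r.ft("idx:docs").create_index(
    fields=[
        TagField("category"),
        VectorField("embedding", "HNSW", {
            "TYPE": "FLOAT32",
            "DIM": DIM,
            "DISTANCE_METRIC": "COSINE",
        }),
    ],
    definition=IndexDefinition(prefix=["doc:"], index_type=IndexType.HASH),
)

# Store one embedding (a random placeholder here) together with its metadata.
embedding = np.random.rand(DIM).astype(np.float32)
r.hset("doc:1", mapping={"category": "finance", "embedding": embedding.tobytes()})
```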
With vector search for MemoryDB, you can develop real-time machine learning (ML) and generative AI applications that require the highest throughput and recall rates at the lowest latency, using the MemoryDB API or orchestration frameworks such as LangChain. For example, a bank can use vector search for MemoryDB to detect anomalies, such as fraudulent transactions during periods of high transaction volume, with minimal false positives.
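Continuing the sketch above (it reuses the client r and DIM from the indexing example), a similarity query against that index might look like the following. The KNN query syntax follows the Redis OSS search dialect that MemoryDB supports, and the idea of flagging a transaction by its distance to known-fraud embeddings is an illustrative assumption, not a prescribed fraud-detection design.

```python
from redis.commands.search.query import Query

# Query embedding for a new transaction (random placeholder here).
query_vec = np.random.rand(DIM).astype(np.float32)

# Retrieve the 5 nearest stored vectors and their cosine distances.
q = (
    Query("*=>[KNN 5 @embedding $vec AS score]")
    .sort_by("score")
    .return_fields("category", "score")
    .dialect(2)
)
results = r.ft("idx:docs").search(q, query_params={"vec": query_vec.tobytes()})

for doc in results.docs:
    # A small distance to known-fraud embeddings could flag the transaction for review.
    print(doc.id, doc.category, doc.score)
```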
Vector search for MemoryDB is available in all Amazon Web Services Regions in which MemoryDB is available, including the Amazon Web Services China (Beijing) Region, operated by Sinnet, and the Amazon Web Services China (Ningxia) Region, operated by NWCD, at no additional cost.
To get started, create a new MemoryDB cluster using MemoryDB version 7.1 and enable vector search through the Amazon Web Services Management Console or the Amazon Web Services Command Line Interface (CLI). To learn more, check out the vector search for MemoryDB documentation.
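If you prefer to script that step, here is a hedged sketch using the boto3 MemoryDB client. The cluster name, node type, ACL name, and especially the vector-search parameter group name are assumptions to verify against the MemoryDB documentation for your Region.

```python
import boto3

memorydb = boto3.client("memorydb", region_name="us-east-1")

# Create a MemoryDB 7.1 cluster with vector search enabled through its parameter group.
# The parameter group name below is an assumption; confirm it in the MemoryDB docs.
memorydb.create_cluster(
    ClusterName="vector-search-demo",
    NodeType="db.r7g.xlarge",
    EngineVersion="7.1",
    ACLName="open-access",
    NumShards=1,
    NumReplicasPerShard=1,
    ParameterGroupName="default.memorydb-redis7.search",
    TLSEnabled=True,
)
```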