This page explains how to store vectors in hashes, which provide an efficient way to store vector data in Memorystore for Valkey.
Data serialization
Before you store vectors in a hash data type, convert them into a format that Memorystore for Valkey understands: a binary blob whose size equals the data type's byte size (for example, 4 bytes for FLOAT32) multiplied by the vector's number of dimensions. A popular choice for serializing numerical vectors is the Python NumPy library:
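As a quick sanity check, the size relationship above can be verified with NumPy; the variable names here are illustrative:

```python
import numpy as np

# A 3-dimensional FLOAT32 vector: 3 dimensions * 4 bytes each = 12 bytes.
vector = np.array([1.2, 3.5, -0.8], dtype=np.float32)

# Serialize to a binary blob.
blob = vector.tobytes()
print(len(blob))  # 12

# Deserializing with the same dtype recovers the original values.
restored = np.frombuffer(blob, dtype=np.float32)
print(np.array_equal(vector, restored))  # True
```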
Connect to Memorystore for Valkey
Before storing the vector in a hash, establish a connection to your Memorystore for Valkey instance using an OSS Redis-compatible client such as redis-py:
Store the vector in a hash
Hashes are like dictionaries: they store key-value pairs. Use the HSET
command to store your serialized vector:
```python
import numpy as np
import redis

# Sample vector
vector = np.array([1.2, 3.5, -0.8], dtype=np.float32)  # 3-dimensional vector

# Serialize to a binary blob
serialized_vector = vector.tobytes()

redis_client = redis.cluster.RedisCluster(host='your_server_host', port=6379)

# 'vector_key' is a unique identifier
redis_client.hset('vector_storage', 'vector_key', serialized_vector)
```
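To read a vector back, retrieve the blob and deserialize it with the same dtype it was stored with. This is a minimal sketch: it constructs the blob locally so it runs without a server, and the commented `hget` call shows where the bytes would come from in practice (the hash and field names mirror the example above):

```python
import numpy as np

# In practice, the blob would come from the hash:
#   raw = redis_client.hget('vector_storage', 'vector_key')
# Here we build it locally so the sketch is self-contained.
raw = np.array([1.2, 3.5, -0.8], dtype=np.float32).tobytes()

# Deserialize using the same dtype the vector was serialized with.
vector = np.frombuffer(raw, dtype=np.float32)
print(vector.shape)  # (3,)
```

Using a mismatched dtype (for example, FLOAT64 for a blob written as FLOAT32) would silently produce the wrong number of dimensions, so keep the dtype consistent on both sides.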
- For successful indexing, your vector data must adhere to the dimensions and data type set in the index schema.
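One way to catch mismatches before they cause indexing failures is to validate a vector against the expected schema before serializing it. This is an illustrative sketch; the helper name and `expected_dim` parameter are not part of any library API:

```python
import numpy as np

def validate_and_serialize(vector: np.ndarray, expected_dim: int) -> bytes:
    """Check dtype and dimensions against the index schema, then serialize."""
    if vector.dtype != np.float32:
        raise TypeError(f"expected FLOAT32, got {vector.dtype}")
    if vector.shape != (expected_dim,):
        raise ValueError(f"expected {expected_dim} dimensions, got {vector.shape}")
    return vector.tobytes()

# A vector matching a 3-dimensional FLOAT32 schema serializes cleanly.
blob = validate_and_serialize(
    np.array([1.2, 3.5, -0.8], dtype=np.float32), expected_dim=3
)
print(len(blob))  # 12
```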
Backfilling Indexes
Backfilling indexes may occur in one of the following scenarios:
- When an index is created, the backfilling procedure scans the keyspace for entries that meet the index filter criteria.
- Vector indexes and their data are persisted in RDB snapshots. When an RDB file is loaded, an automatic index backfilling process is triggered. This process actively detects and integrates any new or modified entries into the index since the RDB snapshot was created, maintaining index integrity and ensuring current results.