Hierarchical Navigable Small Worlds (HNSW)


Hierarchical Navigable Small World (HNSW) graphs are among the top-performing indexes for vector similarity search[1]. HNSW is a hugely popular technology that time and time again produces state-of-the-art performance with super fast search speeds and fantastic recall.

Yet despite being a popular and robust algorithm for approximate nearest neighbors (ANN) searches, understanding how it works is far from easy.


Note: Pinecone lets you build scalable, performant vector search into applications without knowing anything about HNSW or vector indexing libraries. But we know you like seeing how things work, so enjoy the guide!


This article helps demystify HNSW and explains this intelligent algorithm in an easy-to-understand way. Towards the end of the article, we’ll look at how to implement HNSW using Faiss and which parameter settings give us the performance we need.

Foundations of HNSW

We can split ANN algorithms into three distinct categories: trees, hashes, and graphs. HNSW slots into the graph category. More specifically, it is a proximity graph, in which two vertices are linked based on their proximity (closer vertices are linked) — often measured in Euclidean distance.

There is a significant leap in complexity from a ‘proximity’ graph to a ‘hierarchical navigable small world’ graph. We will describe two fundamental techniques that contributed most heavily to HNSW: the probability skip list, and navigable small world graphs.

Probability Skip List

The probability skip list was introduced way back in 1990 by William Pugh [2]. It allows fast search like a sorted array, while using a linked list structure for easy (and fast) insertion of new elements (something that is not possible with sorted arrays).

Skip lists work by building several layers of linked lists. On the first layer, we find links that skip many intermediate nodes/vertices. As we move down the layers, the number of ‘skips’ by each link is decreased.

To search a skip list, we start at the highest layer with the longest ‘skips’ and move along the edges towards the right (below). If we find that the current node ‘key’ is greater than the key we are searching for — we know we have overshot our target, so we move down to the previous node in the next layer.

A probability skip list structure, we start on the top layer. If our current key is greater than the key we are searching for (or we reach end), we drop to the next layer.
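
For intuition, here is a minimal Python sketch of that search (illustrative only, not part of HNSW itself), assuming a sentinel head node with a key of negative infinity and one forward pointer per layer:

class SkipNode:
    def __init__(self, key, height):
        self.key = key
        self.next = [None] * height      # next[i] is the link on layer i

def skiplist_search(head, target):
    """Start on the top layer of the (sentinel) head node, move right while
    we have not overshot the target, then drop a layer and repeat."""
    node = head
    for layer in reversed(range(len(head.next))):
        while node.next[layer] is not None and node.next[layer].key <= target:
            node = node.next[layer]
    # after layer 0 we are sitting on the largest key <= target
    return node if node.key == target else None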

HNSW inherits the same layered format with longer edges in the highest layers (for fast search) and shorter edges in the lower layers (for accurate search).

Navigable Small World Graphs

Vector search using Navigable Small World (NSW) graphs was introduced over the course of several papers from 2011-14 [4, 5, 6]. The idea is that if we take a proximity graph but build it so that we have both long-range and short-range links, then search times are reduced to (poly/)logarithmic complexity.

Each vertex in the graph connects to several other vertices. We call these connected vertices friends, and each vertex keeps a friend list, creating our graph.

When searching an NSW graph, we begin at a pre-defined entry-point. This entry point connects to several nearby vertices. We identify which of these vertices is the closest to our query vector and move there.

The search process through a NSW graph. Starting at a pre-defined entry point, the algorithm greedily traverses to connected vertices that are nearer to the query vector.

We repeat the greedy-routing search process of moving from vertex to vertex by identifying the nearest neighboring vertices in each friend list. Eventually, we will find no nearer vertices than our current vertex — this is a local minimum and acts as our stopping condition.
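
A rough sketch of this greedy routing (illustrative only, not the Faiss implementation) might look like the following, where vectors holds our vector data and friends maps each vertex to its friend list:

import numpy as np

def greedy_nsw_search(vectors, friends, query, entry_point):
    """Greedy routing: keep moving to the closest friend of the current
    vertex; stop once no friend is nearer (a local minimum)."""
    current = entry_point
    current_dist = np.linalg.norm(vectors[current] - query)
    while True:
        best, best_dist = current, current_dist
        # check every vertex in the current friend list
        for v in friends[current]:
            d = np.linalg.norm(vectors[v] - query)
            if d < best_dist:
                best, best_dist = v, d
        if best == current:      # no nearer friend found -> stop
            return current, current_dist
        current, current_dist = best, best_dist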


Navigable small world models are defined as any network with (poly/)logarithmic complexity using greedy routing. The efficiency of greedy routing breaks down for larger networks (1-10K+ vertices) when a graph is not navigable [7].


The routing (literally the route we take through the graph) consists of two phases. We start with the “zoom-out” phase where we pass through low-degree vertices (degree is the number of links a vertex has) — and the later “zoom-in” phase where we pass through higher-degree vertices [8].

High-degree vertices have many links, whereas low-degree vertices have very few links.

Our stopping condition is finding no nearer vertices in our current vertex’s friend list. Because of this, we are more likely to hit a local minimum and stop too early when in the zoom-out phase (fewer links, less likely to find a nearer vertex).

To minimize the probability of stopping early (and increase recall), we can increase the average degree of vertices, but this increases network complexity (and search time). So we need to balance the average degree of vertices between recall and search speed.

Another approach is to start the search on high-degree vertices (zoom-in first). For NSW, this does improve performance on low-dimensional data. We will see that this is also a significant factor in the structure of HNSW.

Creating HNSW

HNSW is a natural evolution of NSW, borrowing the idea of a hierarchical multi-layer structure from Pugh’s probability skip list.

Adding hierarchy to NSW produces a graph where links are separated across different layers. At the top layer, we have the longest links, and at the bottom layer, we have the shortest.

Layered graph of HNSW, the top layer is our entry point and contains only the longest links, as we move down the layers, the link lengths become shorter and more numerous.

During the search, we enter the top layer, where we find the longest links. These vertices will tend to be higher-degree vertices (with links separated across multiple layers), meaning that we, by default, start in the zoom-in phase described for NSW.

We traverse edges in each layer just as we did for NSW, greedily moving to the nearest vertex until we find a local minimum. Unlike NSW, at this point, we shift to the current vertex in a lower layer and begin searching again. We repeat this process until finding the local minimum of our bottom layer — layer 0.

The search process through the multi-layer structure of an HNSW graph.
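
Reusing the single-layer greedy routine sketched above for NSW (and assuming friends is now a list of per-layer adjacency maps), the layered descent can be summarized as:

def hnsw_greedy_search(vectors, friends, query, entry_point, max_level):
    """Descend from the top layer to layer 0; the local minimum found on
    each layer becomes the entry point for the layer below."""
    current = entry_point
    for layer in range(max_level, -1, -1):
        current, _ = greedy_nsw_search(vectors, friends[layer], query, current)
    return current   # best vertex found on layer 0

In the real algorithm the bottom layer is searched with a wider beam (ef > 1) rather than a single greedy walk — this is what the efSearch parameter controls later on.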

Graph Construction

During graph construction, vectors are iteratively inserted one-by-one. The number of layers is represented by the parameter L. The probability of a vector being inserted at a given layer is given by a probability function normalized by the ‘level multiplier’ m_L; an m_L of ~0 means vectors are inserted at layer 0 only.

The probability function is repeated for each layer (other than layer 0). The vector is added to its insertion layer and every layer below it.

The creators of HNSW found that the best performance is achieved when we minimize the overlap of shared neighbors across layers. Decreasing m_L can help minimize overlap (pushing more vectors to layer 0), but this increases the average number of traversals during search. So, we use an m_L value which balances both. A rule of thumb for this optimal value is 1/ln(M) [1].
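
As a quick worked example of that rule of thumb: with M = 32 we get m_L = 1/ln(32) ≈ 0.29, and the chance of a vector being assigned to a given layer or higher drops by a factor of 1/M per layer:

import numpy as np

M = 32
m_L = 1 / np.log(M)                    # rule-of-thumb level multiplier, ~0.289
for level in range(5):
    # probability that an inserted vector lands on this layer or higher
    print(level, np.exp(-level / m_L))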

Graph construction starts at the top layer. After entering the graph, the algorithm greedily traverses across edges, finding the ef nearest neighbors to our inserted vector q — at this point ef = 1.

After finding the local minimum, it moves down to the next layer (just as is done during search). This process is repeated until reaching our chosen insertion layer. Here begins phase two of construction.

The ef value is increased to efConstruction (a parameter we set), meaning more nearest neighbors will be returned. In phase two, these nearest neighbors are candidates for the links to the new inserted element q and as entry points to the next layer.

M neighbors are added as links from these candidates — the most straightforward selection criterion is to choose the closest vectors.

As insertions accumulate over multiple iterations, two more parameters are considered when adding links: M_max, which defines the maximum number of links a vertex can have, and M_max0, which defines the same limit for vertices in layer 0.

Explanation of the number of links assigned to each vertex and the effect of M, M_max, and M_max0.

The stopping condition for insertion is reaching the local minimum in layer 0.
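
To make the two construction phases concrete, here is a toy sketch (not the Faiss code): search_layer is the ef-bounded best-first search described above, and insert wires a new vertex q into a list of per-layer adjacency maps. The M_max/M_max0 pruning step and the creation of new top layers are omitted for brevity.

import heapq
import numpy as np

def search_layer(vectors, friends, query, entry_point, ef):
    """Best-first search on a single layer; returns up to ef vertex ids,
    nearest first. friends maps vertex id -> set of neighbor ids."""
    d0 = np.linalg.norm(vectors[entry_point] - query)
    visited = {entry_point}
    candidates = [(d0, entry_point)]      # min-heap of vertices to expand
    results = [(-d0, entry_point)]        # max-heap (negated) of the best ef so far
    while candidates:
        d, v = heapq.heappop(candidates)
        if d > -results[0][0]:            # nothing left that can improve the results
            break
        for u in friends.get(v, ()):
            if u in visited:
                continue
            visited.add(u)
            du = np.linalg.norm(vectors[u] - query)
            if len(results) < ef or du < -results[0][0]:
                heapq.heappush(candidates, (du, u))
                heapq.heappush(results, (-du, u))
                if len(results) > ef:
                    heapq.heappop(results)
    return [v for _, v in sorted((-d, v) for d, v in results)]

def insert(vectors, layers, q, level, M, ef_construction, entry_point, max_level):
    """Two-phase insertion of vertex q at its drawn insertion level.
    layers[l] is the adjacency map {vertex id: set of friends} for layer l."""
    ep = entry_point
    # phase 1: greedy descent (ef = 1) through the layers above the insertion level
    for l in range(max_level, level, -1):
        ep = search_layer(vectors, layers[l], vectors[q], ep, ef=1)[0]
    # phase 2: on the insertion level and every layer below, gather ef_construction
    # candidates, link q to the closest M, and carry the best candidate downwards
    for l in range(min(level, max_level), -1, -1):
        cands = search_layer(vectors, layers[l], vectors[q], ep, ef=ef_construction)
        layers[l].setdefault(q, set())
        for n in cands[:M]:
            layers[l][q].add(n)
            layers[l].setdefault(n, set()).add(q)
        ep = cands[0]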

Implementation of HNSW

We will implement HNSW using the Facebook AI Similarity Search (Faiss) library, testing different construction and search parameters to see how they affect index performance.

To initialize the HNSW index we write:

In[2]:

import faiss
import numpy as np

# setup our HNSW parameters
d = 128  # vector size
M = 32
index = faiss.IndexHNSWFlat(d, M)
print(index.hnsw)

Out[2]:

<faiss.swigfaiss.HNSW; proxy of <Swig Object of type 'faiss::HNSW *' at 0x7f91183ef120> >

With this, we have set our M parameter — the number of neighbors we add to each vertex on insertion, but we’re missing M_max and M_max0.

In Faiss, these two parameters are set automatically in the set_default_probas method, called at index initialization. The M_max value is set to M, and M_max0 set to M*2 (find further detail in the notebook).
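
If we want to sanity-check those defaults, the cumulative neighbor budget per level should be readable straight off the index (assuming it is exposed to Python in the same way as levels):

# cumulative neighbor budget per level; consecutive entries should grow by
# M*2 for level 0 and by M for every level above it
cum = faiss.vector_to_array(index.hnsw.cum_nneighbor_per_level)
print(cum, np.diff(cum))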

Before building our index with index.add(xb), we will find that the number of layers (or levels in Faiss) is not set:

In[3]:

# the HNSW index starts with no levels
index.hnsw.max_level

In[4]:

# and levels (or layers) are empty too
levels = faiss.vector_to_array(index.hnsw.levels)
np.bincount(levels)

Out[4]:

array([], dtype=int64)

If we go ahead and build the index, we’ll find that both of these parameters are now set.

In[6]:

# after adding our data we will find that the level
# has been set automatically
index.hnsw.max_level

In[7]:

# and levels (or layers) are now populated
levels = faiss.vector_to_array(index.hnsw.levels)
np.bincount(levels)

Out[7]:

array([     0, 968746,  30276,    951,     26,      1], dtype=int64)

Here we have the number of levels in our graph, 0 -> 4 as described by max_level. And we have levels, which shows the distribution of vertices on each level from 0 to 4 (ignoring the first 0 value). We can even find which vector is our entry point:
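
This is exposed directly on the hnsw object (the exact id returned will depend on the build):

# id of the vertex sitting at the top layer of the graph
index.hnsw.entry_point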

That’s a high-level view of our Faiss-flavored HNSW graph, but before we test the index, let’s dive a little deeper into how Faiss is building this structure.

Graph Structure

When we initialize our index we pass our vector dimensionality d and number of neighbors for each vertex M. This calls the method ‘set_default_probas’, passing M and 1 / log(M) in the place of levelMult (equivalent to m_L above). A Python equivalent of this method looks like:

import numpy as np

def set_default_probas(M: int, m_L: float):
    nn = 0  # set nearest neighbors count = 0
    cum_nneighbor_per_level = []
    level = 0  # we start at level 0
    assign_probas = []
    while True:
        # calculate probability for current level
        proba = np.exp(-level / m_L) * (1 - np.exp(-1 / m_L))
        # once we reach low prob threshold, we've created enough levels
        if proba < 1e-9: break
        assign_probas.append(proba)
        # neighbors is == M on every level except level 0 where == M*2
        nn += M*2 if level == 0 else M
        cum_nneighbor_per_level.append(nn)
        level += 1
    return assign_probas, cum_nneighbor_per_level

Here we are building two vectors — assign_probas, the probability of insertion at a given layer, and cum_nneighbor_per_level, the cumulative total of nearest neighbors assigned to a vertex at different insertion levels.

In[10]:

assign_probas, cum_nneighbor_per_level = set_default_probas(
    32, 1/np.log(32)
)
assign_probas, cum_nneighbor_per_level

Out[10]:

([0.96875,
  0.030273437499999986,
  0.0009460449218749991,
  2.956390380859371e-05,
  9.23871994018553e-07,
  2.887099981307982e-08],
 [64, 96, 128, 160, 192, 224])

From this, we can see the significantly higher probability of inserting a vector at level 0 than higher levels (although, as we will explain below, the probability is not exactly as defined here). This function means higher levels are more sparse, reducing the likelihood of ‘getting stuck’, and ensuring we start with longer range traversals.

Our assign_probas vector is used by another method called random_level — it is in this function that each vertex is assigned an insertion level.

def random_level(assign_probas: list, rng):
    # get random float from 'r'andom 'n'umber 'g'enerator
    f = rng.uniform() 
    for level in range(len(assign_probas)):
        # if the random float is less than level probability...
        if f < assign_probas[level]:
            # ... we insert at this level
            return level
        # otherwise subtract level probability and try again
        f -= assign_probas[level]
    # below happens with very low probability
    return len(assign_probas) - 1

We generate a random float using NumPy’s random number generator rng (initialized below) and store it in f. For each level, we check if f is less than the assigned probability for that level in assign_probas — if so, that is our insertion layer.

If f is too high, we subtract the assign_probas value from f and try again for the next level. The result of this logic is that vectors are most likely to be inserted at level 0. Still, if not, there is a decreasing probability of insertion at each incremental level.

Finally, if no levels satisfy the probability condition, we insert the vector at the highest level with return len(assign_probas) - 1. If we compare the distribution between our Python implementation and that of Faiss, we see very similar results:

In[12]:

chosen_levels = []
rng = np.random.default_rng(12345)
for _ in range(1_000_000):
    chosen_levels.append(random_level(assign_probas, rng))

In[13]:

np.bincount(chosen_levels)

Out[13]:

array([968821,  30170,    985,     23,       1],
      dtype=int64)

Distribution of vertices across layers in both the Faiss implementation (left) and the Python implementation (right).

The Faiss implementation also ensures that we always have at least one vertex in the highest layer to act as the entry point to our graph.

HNSW Performance

Now that we’ve explored all there is to explore on the theory behind HNSW and how this is implemented in Faiss — let’s look at the effect of different parameters on our recall, search and build times, and the memory usage of each.

We will be modifying three parameters: M, efSearch, and efConstruction. And we will be indexing the Sift1M dataset, which you can download and prepare using this script.

As we did before, we initialize our index like so:

index = faiss.IndexHNSWFlat(d, M)

The two other parameters, efConstruction and efSearch, can be modified after we have initialized our index.

index.hnsw.efConstruction = efConstruction
index.add(xb)  # build the index
index.hnsw.efSearch = efSearch
# and now we can search
index.search(xq[:1000], k=1)

Our efConstruction value must be set before we construct the index via index.add(xb), but efSearch can be set anytime before searching.
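
To put numbers on recall@1 for a given parameter combination, one simple approach (a sketch, assuming ground-truth neighbors are computed with a brute-force flat index) is:

# exact ground truth from a brute-force flat index
ground_truth = faiss.IndexFlatL2(d)
ground_truth.add(xb)
_, gt_ids = ground_truth.search(xq[:1000], k=1)

# approximate results from the HNSW index
_, hnsw_ids = index.search(xq[:1000], k=1)

# recall@1 = fraction of queries where HNSW returns the true nearest neighbor
recall_at_1 = (hnsw_ids == gt_ids).mean()
print(recall_at_1)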

Let’s take a look at the recall performance first.

Recall@1 performance for various M, efConstruction, and efSearch parameters.

High M and efSearch values can make a big difference in recall performance — and it’s also evident that a reasonable efConstruction value is needed. We can also increase efConstruction to achieve higher recall at lower M and efSearch values.

However, this performance does not come for free. As always, we have a balancing act between recall and search time — let’s take a look.

Search time in µs for various M, efConstruction, and efSearch parameters when searching for 1000 queries. Note that the y-axis is using a log scale.

Although higher parameter values provide us with better recall, the effect on search times can be dramatic. Here we search for 1000 similar vectors (xq[:1000]), and our recall/search-time can vary from 80%-1ms to 100%-50ms. If we’re happy with a rather terrible recall, search times can even reach 0.1ms.

If you’ve been following our articles on vector similarity search, you may recall that efConstruction has a negligible effect on search-time — but that is not the case here…

When we search using a few queries, it is true that efConstruction has little effect on search time. But with the 1000 queries used here, the small effect of efConstruction becomes much more significant.

If you believe your queries will mostly be low volume, efConstruction is a great parameter to increase. It can improve recall with little effect on search time, particularly when using lower M values.

efConstruction and search time when searching for only one query. When using lower M values, the search time remains almost unchanged for different efConstruction values.

That all looks great, but what about the memory usage of the HNSW index? Here things can get slightly less appealing.

Memory usage with increasing values of M using our Sift1M dataset. efSearch and efConstruction have no effect on the memory usage.

Neither efConstruction nor efSearch affects index memory usage, leaving us only with M. Even with M at a low value of 2, our index size is already above 0.5GB, reaching almost 5GB with an M of 512.

So although HNSW produces incredible performance, we need to weigh that against high memory usage and the inevitable high infrastructure costs that this can produce.

Improving Memory Usage and Search Speeds

HNSW is not the best index in terms of memory utilization. However, if this is important and using another index isn’t an option, we can improve it by compressing our vectors using product quantization (PQ). Using PQ will reduce recall and increase search time — but as always, much of ANN is a case of balancing these three factors.

If, instead, we’d like to improve our search speeds — we can do that too! All we do is add an IVF component to our index. There is plenty to discuss when adding IVF or PQ to our index, so we wrote an entire article on mixing-and-matching of indexes.
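
As a rough sketch of those two directions (the parameter choices here are purely illustrative, and unlike the flat version the PQ variant needs a training pass before adding vectors):

# HNSW built over PQ-compressed vectors (16 sub-vectors is just an example)
pq_index = faiss.IndexHNSWPQ(d, 16, M)
pq_index.train(xb)
pq_index.add(xb)

# IVF index that uses an HNSW graph as its coarse quantizer, built from an
# index factory string
ivf_hnsw = faiss.index_factory(d, "IVF4096_HNSW32,Flat")
ivf_hnsw.train(xb)
ivf_hnsw.add(xb)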

That’s it for this article covering the Hierarchical Navigable Small World graph for vector similarity search! Now that you’ve learned the intuition behind HNSW and how to implement it in Faiss, you’re ready to go ahead and test HNSW indexes in your own vector search applications, or use a managed solution like Pinecone or OpenSearch that has vector search ready-to-go!

If you’d like to continue learning more about vector search and how you can use it to supercharge your own applications, we have a whole set of learning materials aiming to bring you up to speed with the world of vector search.

References

[1] E. Bernhardsson, ANN Benchmarks (2021), GitHub

[2] W. Pugh, Skip lists: a probabilistic alternative to balanced trees (1990), Communications of the ACM, vol. 33, no.6, pp. 668-676

[3] Y. Malkov, D. Yashunin, Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs (2016), IEEE Transactions on Pattern Analysis and Machine Intelligence

[4] Y. Malkov et al., Approximate Nearest Neighbor Search Small World Approach (2011), International Conference on Information and Communication Technologies & Applications

[5] Y. Malkov et al., Scalable Distributed Algorithm for Approximate Nearest Neighbor Search Problem in High Dimensional General Metric Spaces (2012), Similarity Search and Applications, pp. 132-147

[6] Y. Malkov et al., Approximate nearest neighbor algorithm based on navigable small world graphs (2014), Information Systems, vol. 45, pp. 61-68

[7] M. Boguna et al., Navigability of complex networks (2009), Nature Physics, vol. 5, no. 1, pp. 74-80

[8] Y. Malkov, A. Ponomarenko, Growing homophilic networks are natural navigable small worlds (2015), PloS one

Facebook Research, Faiss HNSW Implementation, GitHub
