I'm looking for an explanation of how a hash table works - in plain English for a simpleton like me!

For example, I know it takes the key, calculates the hash (I am looking for an explanation of how), and then performs some kind of modulus to work out where the value lies in the array it is stored in, but that's where my knowledge stops.

Can anyone clarify the process?

Edit: I'm not asking specifically about how hash codes are calculated, but for a general overview of how a hash table works.


How the hash is computed usually doesn't depend on the hashtable, but on the items added to it. In frameworks/base class libraries such as .NET and Java, each object has a GetHashCode() (or similar) method returning a hash code for that object. The ideal hash code algorithm and the exact implementation depend on the data represented in the object.
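
For illustration, here is a minimal Python analogue of that idea; the Point class and its fields are invented for the example, and Python's __hash__ plays the role of GetHashCode():

class Point:
    # A made-up value type whose hash is derived from the data it holds.
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __hash__(self):                  # Python's counterpart to GetHashCode()
        return hash((self.x, self.y))    # combine the fields' hashes

    def __eq__(self, other):             # equal objects must hash equally
        return (self.x, self.y) == (other.x, other.y)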


Here's how I understand it:

Here's an example: picture the whole table as a series of buckets. Suppose you have an implementation with alpha-numeric hash codes, and one bucket for each letter of the alphabet. This implementation puts each item whose hash code begins with a particular letter into the corresponding bucket.

Let's say you have 200 objects, but only 15 of them have hash codes that begin with the letter 'B'. The hash table would only need to look up and search through the 15 objects in the 'B' bucket, rather than all 200 objects.

As far as calculating hash codes goes, there is nothing magical about it. The goal is just to have different objects return different codes and for equal objects to return equal codes. You could write a class that always returns the same integer as the hash code for all instances, but you would essentially destroy the usefulness of the hash table, as it would just become one giant bucket.
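
To make the bucket idea concrete, here is a toy sketch in Python, assuming each item comes with an alphanumeric hash string (the add/find helpers are invented for the example):

from collections import defaultdict

buckets = defaultdict(list)                   # one bucket per leading letter

def add(item, hash_string):
    buckets[hash_string[0]].append(item)      # file under the first letter

def find(item, hash_string):
    return item in buckets[hash_string[0]]    # search just that one bucket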


It's even simpler than that.

A hash table is nothing more than an array (usually sparse) of vectors which contain key/value pairs. The maximum size of this array is typically smaller than the number of items in the set of possible values for the type of data being stored in the hash table.

The hash algorithm is used to generate an index into that array based on the value of the item that will be stored in the array.

This is where storing vectors of key/value pairs in the array comes in. Because the set of values that can be indexes in the array is typically smaller than the number of all possible values that the type can have, it is possible that your hash algorithm is going to generate the same value for two separate keys. A good hash algorithm will prevent this as much as possible (which is why it is usually relegated to the type, because the type has specific information which a general hash algorithm can't possibly know), but it's impossible to prevent entirely.

Because of this, you can have multiple keys that will generate the same hash code. When that happens, the items in the vector are iterated through, and a direct comparison is done between the key in the vector and the key being looked up. If the key is found, the value associated with it is returned; otherwise, nothing is returned.
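
In Python-flavoured code, that lookup might look like the following sketch, where table stands for the (sparse) array and each slot holds a vector of key/value pairs:

def get(table, key):
    slot = hash(key) % len(table)            # index generated from the key
    for stored_key, value in table[slot]:    # iterate the vector of pairs
        if stored_key == key:                # direct comparison of keys
            return value
    return None                              # key not present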


This turns out to be a pretty deep area of theory, but the basic outline is simple.

Essentially, a hash function is just a function that takes things from one space (say strings of arbitrary length) and maps them to a space useful for indexing (say unsigned integers).

If you only have a small space of things to hash, you might get away with just interpreting those things as integers, and you're done (e.g. 4-byte strings).

Usually, though, you've got a much larger space. If the space of things you allow as keys is bigger than the space of things you are using to index (your uint32's or whatever), then you can't have a unique value for each key. When two or more things hash to the same result, you'll have to handle the redundancy in an appropriate way (this is usually referred to as a collision, and how you handle it, or don't, will depend a bit on what you are using the hash for).

This implies that you want it to be unlikely to get the same result, and you would probably also like the hash function to be fast.

Balancing these two properties (and a few others) has kept many people busy!

In practice, you should usually be able to find a function that is known to work well for your application and use that.

Now to make this work as a hashtable: Imagine you didn't care about memory usage. Then you can create an array as long as your indexing set (all uint32's, for example). As you add something to the table, you hash its key and look at the array at that index. If there is nothing there, you put your value there. If there is already something there, you add this new entry to a list of things at that address, along with enough information (your original key, or something clever) to find which entry actually belongs to which key.

So as time goes on, each entry in your hashtable (the array) is either empty, or contains one entry, or a list of entries. Retrieving is as simple as indexing into the array and either returning the value, or walking the list of values and returning the right one.
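
A sketch of that memory-hungry version, shrinking the indexing set to 16 bits so it fits on the page (my_hash stands in for whatever hash function is in use):

table = [None] * 2**16                 # one slot per possible hash value

def insert(key, value):
    i = my_hash(key)                   # my_hash: assumed to yield 0..65535
    if table[i] is None:
        table[i] = []                  # empty slot: start a list of entries
    table[i].append((key, value))      # keep the key to tell entries apart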

Of course in practice you typically can't do this; it wastes too much memory. So you do everything based on a sparse array (where the only entries are the ones you actually use; everything else is implicitly null).

There are lots of schemes and tricks to make this work better, but that's the basics.


You take a bunch of things, and an array.

For each thing, you make up an index for it, called a hash. The important thing about the hash is that it "scatters" a lot; you don't want two similar things to have similar hashes.

You put your things into the array at the position indicated by the hash. More than one thing can wind up at a given hash, so you store the things in arrays or something else appropriate, which we generally call a bucket.

When you're looking things up in the hash, you go through the same steps: you figure out the hash value, then look at what's in the bucket at that position and check whether it's what you're looking for.

When your hashing is working well and your array is big enough, you'll only have a few things at most at any particular index in the array, so you don't have to look at very much.

For bonus points, make it so that when your hash table is accessed, it moves the thing found (if any) to the beginning of the bucket, so that the next time it's the first thing checked.
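
That move-to-front trick could be sketched like this (my_hash is again a stand-in for the hash function):

def lookup(buckets, key):
    bucket = buckets[my_hash(key) % len(buckets)]
    for i, (k, v) in enumerate(bucket):
        if k == key:
            bucket.insert(0, bucket.pop(i))   # move the hit to the front
            return v
    return None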


Here's a layman's explanation.

Let's assume you want to fill up a library with books, and not just stuff them in there, but you want to be able to easily find them again when you need them.

So, you decide that if the person who wants to read a book knows the exact title of the book, then that's all it should take. With the title in hand, and with the aid of the librarian, the reader should be able to find the book easily and quickly.

So, how can you do that? Well, obviously you can keep some kind of list of where you put each book, but then you have the same problem as searching the library: you need to search the list. Granted, the list would be smaller and easier to search, but you still don't want to search sequentially from one end of the library (or list) to the other.

You want something that, given the title of the book, can give you the right spot at once, so all you have to do is stroll over to the right shelf and pick up the book.

But how can that be done? Well, with a bit of foresight when you fill up the library, and a lot of work when you do so.

Instead of just starting to fill up the library from one end to the other, you devise a clever little method. You take the title of the book, run it through a small computer program, and it spits out a shelf number and a slot number on that shelf. This is where you place the book.

The beauty of this program is that later on, when a person comes back to read the book, you feed the title through the program once more and get back the same shelf number and slot number that you were originally given, and this is where the book is located.

The program, as others have already mentioned, is called a hashing algorithm or hash computation, and it usually works by taking the data fed into it (the title of the book in this case) and calculating a number from it.

For simplicity, let's say that it just converts each letter and symbol into a number and sums them all up. In reality it's a lot more complicated than that, but let's leave it at that for now.

The beauty of such an algorithm is that if you feed the same input into it again and again, it will keep spitting out the same number each time.
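
The deliberately naive "sum the letters" scheme, written out (not a real hash function, just the running example):

def title_hash(title):
    return sum(ord(ch) for ch in title)   # same title in, same number out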

Ok, so that's basically how a hash table works.

Technical stuff follows.

First, there's the size of the number. Usually, the output of such a hashing algorithm is inside a range of some large number, typically much larger than the space you have in your table. For instance, let's say that we have room for exactly one million books in the library. The output of the hash calculation could be in the range of 0 to one billion, which is a lot higher.

So, what do we do? We use something called modulus calculation, which basically says that if you counted to the number you wanted (i.e. the one-billion number) but wanted to stay inside a much smaller range, each time you hit the limit of that smaller range you start back at 0, but you have to keep track of how far into the big sequence you've come.

Say that the output of the hashing algorithm is in the range of 0 to 20, and you get the value 17 from a particular title. If the size of the library is only 7 books, you count 1, 2, 3, 4, 5, 6, and when you get to 7, you start back at 0. Since we need to count 17 times, we get 1, 2, 3, 4, 5, 6, 0, 1, 2, 3, 4, 5, 6, 0, 1, 2, 3, and the final number is 3.

Of course modulus calculation isn't done like that; it is done with division and a remainder. The remainder of dividing 17 by 7 is 3 (7 goes 2 times into 17 at 14, and the difference between 17 and 14 is 3).

Thus, you put the book in slot number 3.

This leads to the next problem: collisions. Since the algorithm has no way to space out the books so that they fill the library exactly (or the hash table, if you will), it will invariably end up calculating a number that has been used before. In the library sense, when you get to the shelf and the slot number you wish to put a book in, there's already a book there.

Various collision handling methods exist, including running the data through yet another calculation to get another spot in the table (double hashing), or simply finding a space close to the one you were given (i.e. right next to the previous book, assuming the slot is available, also known as linear probing). This means that you have some digging to do when you try to find the book later, but it's still better than simply starting at one end of the library.

Finally, at some point you might want to put more books into the library than the library allows; in other words, you need to build a bigger library. Since the exact spot in the library was calculated using the exact and current size of the library, it follows that if you resize the library you might end up having to find new spots for all the books, since the calculation done to find their spots has changed.

I hope this explanation was a bit more down to earth than buckets and functions :)


The answers so far are all good and get at different aspects of how a hash table works. Here is a simple example that might be helpful. Let's say we want to store some items with lowercase alphabetic strings as the keys.

As simon explains, the hash function is used to map from a large space to a small space. For our example, a simple hash function implementation could take the first letter of the string and map it to an integer, so "alligator" has a hash code of 0, "bee" has a hash code of 1, "zebra" would be 25, etc.

Next, we have an array of 26 buckets (these could be ArrayLists in Java), and we put each item in the bucket that matches the hash code of its key. If we have more than one item whose key starts with the same letter, those items will have the same hash code and will all go in the bucket for that hash code, so a linear search must be made within the bucket to find a particular item.

In our example, if we just had a few dozen items with keys spanning the alphabet, this would work very well. However, if we had a million items, or all the keys started with 'a' or 'b', then our hash table would not be ideal. To get better performance, we would need a different hash function and/or more buckets.
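
A minimal sketch of this example, assuming lowercase alphabetic keys:

buckets = [[] for _ in range(26)]            # one bucket per letter

def first_letter_hash(key):
    return ord(key[0]) - ord('a')            # "alligator" -> 0, "zebra" -> 25

def put(key, value):
    buckets[first_letter_hash(key)].append((key, value))

def get(key):
    for k, v in buckets[first_letter_hash(key)]:   # linear search in bucket
        if k == key:
            return v
    return None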


Usage and lingo:

- Hash tables are used to quickly store and retrieve data (or records).
- Records are stored in buckets using hash keys.
- Hash keys are calculated by applying a hashing algorithm to a chosen value (the key value) contained within the record. This chosen value must be a value common to all the records.
- Each bucket can have multiple records, organized in a particular order.

Real-world example:

Hash & Co., founded in 1803 and lacking any computer technology, had a total of 300 filing cabinets to keep the detailed information (the records) of its roughly 30,000 clients. Each file folder was clearly identified with its client number, a unique number from 0 to 29,999.

The filing clerks of that time had to quickly fetch and store client records for the working staff. The staff had decided that it would be more efficient to use a hashing methodology to store and retrieve their records.

To file a client record, a filing clerk would use the unique client number written on the folder. Using this client number (the hash key), they would take it modulo 300 to identify the filing cabinet containing it. When they opened the filing cabinet, they would discover that it contained many folders, ordered by client number. After identifying the correct location, they would simply slip the folder in.

To retrieve a client record, a filing clerk would be given a client number on a slip of paper. Using this unique client number (the hash key), they would take it modulo 300 to determine which filing cabinet held the client's folder. When they opened the filing cabinet, they would discover that it contained many folders, ordered by client number. Searching through the records, they would quickly find the client folder and retrieve it.

In our real-world example, the buckets are the filing cabinets and the records are the file folders.
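
The clerks' procedure maps directly onto code; here is a sketch, using Python's standard bisect module to mirror the folders kept in client-number order:

import bisect

cabinets = [[] for _ in range(300)]              # 300 filing cabinets

def file_record(client_number):
    cabinet = cabinets[client_number % 300]      # client number modulo 300
    bisect.insort(cabinet, client_number)        # keep folders ordered

def fetch_record(client_number):
    cabinet = cabinets[client_number % 300]
    i = bisect.bisect_left(cabinet, client_number)
    if i < len(cabinet) and cabinet[i] == client_number:
        return cabinet[i]
    return None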


An important thing to remember is that computers (and their algorithms) deal with numbers much better than with strings. Accessing a large array using an index is significantly faster than accessing it sequentially.

As Simon mentioned, and I believe this to be very important, the hashing part is about transforming a large space (of arbitrary length, usually strings, etc.) and mapping it to a small space (of known size, usually numbers) for indexing. Remembering this is very important!

So in the example above, the roughly 30,000 possible clients are mapped into a smaller space.


The main idea in this is to divide your entire data set into segments so as to speed up the actual searching, which is usually time-consuming. In our example above, each of the 300 filing cabinets would (statistically) contain about 100 records. Searching through 100 records (regardless of their order) is much faster than having to deal with 30,000.

You may have noticed that some people actually already do this. Instead of devising a hashing methodology to generate a hash key, though, they will in most cases simply use the first letter of the last name. So if you have 26 filing cabinets, each assigned a letter from A to Z, you in theory have just segmented your data and enhanced the filing and retrieval process.


Short and sweet:

A hash table wraps up an array, let's call it internalArray. Items are inserted into the array in this way:

let insert key value =
    internalArray[hash(key) % internalArray.Length] <- (key, value)
    //oversimplified for educational purposes

Sometimes two keys will hash to the same index in the array, and you want to keep both values. I like to store both values in the same index, which is simple to code by making internalArray an array of linked lists:

let insert key value =
    internalArray[hash(key) % internalArray.Length].AddLast(key, value)

So, if I wanted to retrieve an item out of my hash table, I could write:

let get key =
    let linkedList = internalArray[hash(key) % internalArray.Length]
    for (testKey, value) in linkedList
        if (testKey = key) then return value
    return null

Delete operations are just as simple to write. As you can tell, inserts, lookups, and removal from our array of linked lists is nearly O(1).

When our internalArray gets too full, maybe at around 85% capacity, we can resize the internal array and move all of the items from the old array into the new array.
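
That resize step, sketched in Python rather than the pseudocode above; every pair has to be re-slotted, because hash(key) % length changes with the new length:

def resize(old_buckets):
    new_buckets = [[] for _ in range(len(old_buckets) * 2)]
    for bucket in old_buckets:
        for key, value in bucket:                          # re-slot each pair
            new_buckets[hash(key) % len(new_buckets)].append((key, value))
    return new_buckets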


Here's another way to look at it.

I assume you understand the concept of an array A. It supports the operation of indexing, where you can get to the Ith element, A[I], in one step, no matter how large A is.

So, for example, if you want to store information about a group of people who all happen to have different ages, a simple way is to have an array that is large enough and to use each person's age as an index into the array. That way, you have one-step access to any person's information.

But of course there could be more than one person with the same age, so what you put in the array at each entry is a list of all the people who have that age. So you can get to an individual person's information in one step plus a little bit of search in that list (called a "bucket"). It only slows down if there are so many people that the buckets get big. Then you need a larger array, and some other way to get more identifying information about the person, like the first few letters of their surname, instead of using age.

That's the basic idea. Instead of using age, any function of the person that produces a good spread of values can be used. That's the hash function. For example, you could take every third bit of the ASCII representation of the person's name, scrambled in some order. All that matters is that you don't want too many people to hash to the same bucket, because the speed depends on the buckets remaining small.


You're all very close to explaining this fully, but missing a couple of things. The hashtable is just an array. The array itself will contain something in each slot. At a minimum you will store the hashvalue or the value itself in this slot. In addition to this you could also store a linked/chained list of values that have collided on this slot, or you could use the open addressing method. You can also store a pointer or pointers to other data you want to retrieve out of this slot.

It's important to note that the hashvalue itself generally does not indicate the slot into which to place the value. For example, a hashvalue might be a negative integer value, and obviously a negative number cannot point to an array location. Additionally, hash values will often be much larger numbers than the number of slots available. Thus another calculation needs to be performed by the hashtable itself to figure out which slot the value should go into. This is done with a modulus math operation like:

uint slotIndex = hashValue % hashTableSize;

This value is the slot the value will go into. In open addressing, if the slot is already filled with another hashvalue and/or other data, the modulus operation will be run once again to find the next slot:

slotIndex = (slotIndex + 1) % hashTableSize;

I imagine there may be other, more advanced ways to determine the slot index, but this is the most common one I've seen... I'd be interested in others that perform better.
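
For what it's worth, that probe-until-free loop might be sketched like this in Python (my_hash and a fixed-size table of slots are assumptions of the sketch):

def insert(table, key, value):
    # assumes the table always has at least one free slot
    slot = my_hash(key) % len(table)
    while table[slot] is not None:        # slot taken: probe the next one
        slot = (slot + 1) % len(table)    # wrap around at the end
    table[slot] = (key, value)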

With the modulus method, if you have a table of say size 1000, any hashvalue that is between 0 and 999 will go straight into the corresponding slot. Negative values, and any values greater than or equal to 1000, will potentially collide with other slot values. The chances of that happening depend both on your hashing method and on how many total items you add to the hash table. Generally, it's best practice to make the size of the hashtable such that the total number of values added to it is only equal to about 70% of its size. If your hash function does a good job of even distribution, you will generally encounter very few to no bucket/slot collisions and it will perform very quickly for both lookup and write operations. If the total number of values to add is not known in advance, make a good guesstimate using whatever means, and then resize your hashtable once the number of elements added to it reaches 70% of capacity.

I hope this helps.

PS - In C# the GetHashCode() method is pretty slow and results in actual value collisions under a lot of conditions I've tested. For some real fun, build your own hashfunction and try to get it to NEVER collide on the specific data you are hashing, run faster than GetHashCode, and have a fairly even distribution. I've done this using long instead of int size hashcode values and it's worked quite well on up to 32 million entries in the hashtable with 0 collisions. Unfortunately I can't share the code as it belongs to my employer... but I can reveal it is possible for certain data domains. When you can achieve this, the hashtable is VERY fast. :)


Lots of answers, but none of them are very visual, and hash tables can easily "click" when visualised.

Hash tables are often implemented as arrays of linked lists. If we imagine a table storing people's names, after a few insertions it might be laid out in memory as below, where ()-enclosed numbers are hash values of the text/name.

bucket#  bucket content / linked list

[0]      --> "sue"(780) --> null
[1]      null
[2]      --> "fred"(42) --> "bill"(9282) --> "jane"(42) --> null
[3]      --> "mary"(73) --> null
[4]      null
[5]      --> "masayuki"(75) --> "sarwar"(105) --> null
[6]      --> "margaret"(2626) --> null
[7]      null
[8]      --> "bob"(308) --> null
[9]      null

A few points:

- each of the array entries (indices [0], [1]...) is known as a bucket, and starts a - possibly empty - linked list of values (aka elements, in this example - people's names)
- each value (e.g. "fred" with hash 42) is linked from bucket [hash % number_of_buckets] e.g. 42 % 10 == [2]; % is the modulo operator - the remainder when divided by the number of buckets
- multiple data values may collide at and be linked from the same bucket, most often because their hash values collide after the modulo operation (e.g. 42 % 10 == [2], and 9282 % 10 == [2]), but occasionally because the hash values are the same (e.g. "fred" and "jane" both shown with hash 42 above)
- most hash tables handle collisions - with slightly reduced performance but no functional confusion - by comparing the full value (here text) of a value being sought or inserted to each value already in the linked list at the hashed-to bucket

Linked list length relates to load factor, not the number of values

If the table size grows, hash tables implemented as above tend to resize themselves (i.e. create a bigger array of buckets, create new/updated linked lists therein, delete the old array) to keep the ratio of values to buckets (aka load factor) somewhere in the 0.5 to 1.0 range.

Hans gives the actual formula for other load factors in a comment below, but for indicative values: with load factor 1 and a cryptographic strength hash function, 1/e (~36.8%) of buckets will tend to be empty, another 1/e (~36.8%) have one element, 1/(2e) or ~18.4% two elements, 1/(3!e) about 6.1% three elements, 1/(4!e) or ~1.5% four elements, 1/(5!e) ~.3% have five etc.. - the average chain length from non-empty buckets is ~1.58 no matter how many elements are in the table (i.e. whether there are 100 elements and 100 buckets, or 100 million elements and 100 million buckets), which is why we say lookup/insert/erase are O(1) constant time operations.

How a hash table associates keys with values

Given a hash table implementation as described above, we can imagine creating a value type such as `struct Value { string name; int age; };`, and equality comparison and hash functions that only look at the `name` field (ignoring age), and then something wonderful happens: we can store `Value` records like `{"sue", 63}` in the table, then later search for "sue" without knowing her age, find the stored value and recover or even update her age - happy birthday Sue - which interestingly doesn't change the hash value so doesn't require that we move Sue's record to another bucket.

When we do that, we're using the hash table as an associative container, aka a map, and the values it stores can be deemed to consist of a key (the name) and one or more other fields, still termed the value (in my example, just the age). A hash table implementation used as a map is known as a hash map.

This contrasts with the earlier example in this answer where we stored discrete values like "sue", which you could think of as being its own key: that kind of usage is known as a hash set.

There are other ways to implement hash tables

Not all hash tables use linked lists (that approach is known as separate chaining), but most general-purpose ones do, as the main alternative, closed hashing (aka open addressing), particularly when erase operations have to be supported, has less stable performance with collision-prone keys/hash functions.


A few words on hash functions

Strong hashing...

The job of a general-purpose, worst-case-collision-minimising hash function is to spray the keys around the hash table buckets effectively at random, while always generating the same hash value for the same key. Ideally, changing even a single bit anywhere in the key would flip about half the bits in the resulting hash value, at random.

This is normally orchestrated with maths too complicated for me to grok. I'll mention one easy-to-understand way - not the most scalable or cache friendly but inherently elegant (like encryption with a one-time pad!) - as I think it helps drive home the desirable qualities mentioned above. Say you were hashing 64-bit doubles - you could create 8 tables each of 256 random numbers (code below), then use each 8-bit/1-byte slice of the double's memory representation to index into a different table, XORing the random numbers you look up. With this approach, it's easy to see that a bit (in the binary digit sense) changing anywhere in the double results in a different random number being looked up in one of the tables, and a totally uncorrelated final value.

// note caveats above: cache unfriendly (SLOW) but strong hashing...
std::size_t random[8][256] = { ...random data... };
auto p = (const unsigned char*)&my_double;   // unsigned char bytes can index the tables
std::size_t hash = random[0][p[0]] ^
                   random[1][p[1]] ^
                   ... ^
                   random[7][p[7]];

Weak but often fast hashing...

Many libraries' hashing functions pass integers through unchanged (known as a trivial or identity hash function); it's the other extreme from the strong hashing described above. An identity hash is extremely collision prone in the worst cases, but the hope is that in the fairly common case of integer keys that tend to be incrementing (perhaps with some gaps), they'll map into successive buckets leaving fewer empty than random hashing leaves (our ~36.8% at load factor 1 mentioned earlier), thereby having fewer collisions and fewer longer linked lists of colliding elements than is achieved by random mappings. It's also great to save the time it takes to generate a strong hash, and if keys are looked up in order they'll be found in buckets nearby in memory, improving cache hits. When the keys don't increment nicely, the hope is they'll be random enough they won't need a strong hash function to totally randomise their placement into buckets.


Hash tables are based entirely on the fact that practical computation follows the random access machine model, i.e. the value at any address in memory can be accessed in O(1), or constant, time.

So, if I have a universe of keys (the set of all possible keys I can use in an application, e.g. roll numbers for students: if a roll number is a 4-digit number, then the universe is the set of numbers from 1 to 9999), and a way to map them to a finite set of numbers of a size for which I can allocate memory in my system, then theoretically my hash table is ready.

Generally, in applications the size of the universe of keys is much larger than the number of elements I want to add to the hash table (I don't want to waste 1 GB of memory to hash, say, 10,000 or 100,000 integer values just because they are 32 bits long in their binary representation). So, we use hashing. It's a sort of mixing "mathematical" operation which maps my large universe to a small set of values that I can accommodate in memory. In practical cases, the space of a hash table is often of the same "order" (big-O) as (number of elements * size of each element), so we don't waste much memory.

Now, when a large set is mapped to a small set, the mapping must be many-to-one. Thus, different keys will sometimes be allotted the same space (which seems unfair!). There are a few ways to handle this; the two most popular that I know of are described below:

- Use the space that was to be allocated to the value as a reference to a linked list. This linked list stores one or more values that come to reside in the same slot under the many-to-one mapping. The linked list also contains the keys, to help anyone who comes searching. It's like many people living in the same apartment: when a delivery man comes, he goes to the room and asks specifically for the guy.
- Use a double hash function in an array, which gives the same sequence of values every time rather than a single value. When I go to store a value, I see whether the required memory location is free or occupied. If it's free, I can store my value there; if it's occupied, I take the next value from the sequence, and so on until I find a free location, where I store my value. When searching or retrieving the value, I go back along the same path given by the sequence, and at each location ask for the value, until I find it or have searched all possible locations in the array.
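
A sketch of the second strategy (double hashing); hash2 is an assumed second hash function whose result fixes the step size of the probe sequence:

def find_slot(table, key):
    # assumes a free slot exists; choosing len(table) prime helps the
    # sequence visit every slot
    n = len(table)
    step = 1 + hash2(key) % (n - 1)        # hash2: assumed second hash
    slot = hash(key) % n
    while table[slot] is not None and table[slot][0] != key:
        slot = (slot + step) % n           # same sequence for the same key
    return slot                            # a free slot, or the key's slot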

Introduction to Algorithms (CLRS) provides very good insight into this topic.


For all those looking for programming parlance, here is how it works. The internal implementation of advanced hashtables has many intricacies and optimisations around storage allocation/deallocation and search, but the top-level idea is very much the same.

(void) addValue : (object) value
{
   int bucket = calculate_bucket_from_val(value);
   if (bucket_exists(bucket))   //pseudocode: is this slot already allocated?
   {
       //do nothing here, just overwrite below
   }
   else   //create bucket
   {
      create_extra_space_for_bucket(bucket);
   }
   put_value_into_bucket(bucket,value);
}

(bool) exists : (object) value
{
   int bucket = calculate_bucket_from_val(value);
   return bucket_exists(bucket);
}

Here calculate_bucket_from_val() is the hashing function, where all the uniqueness magic must happen.

The rule of thumb is: for a given value to be inserted, the bucket must be unique and derived from the value it is supposed to store.

A bucket is any space where the values are stored; here I've kept it as an int, used as an array index, but it could be a memory location as well.


The Basic Idea

Why do people use dressers to store their clothing? Besides looking trendy and stylish, they have the advantage that every article of clothing has a place where it's supposed to be. If you're looking for a pair of socks, you just check the sock drawer. If you're looking for a shirt, you check the drawer that has your shirts in it. It doesn't matter, when you're looking for socks, how many shirts you have or how many pairs of pants you own, since you don't need to look at them. You just look in the sock drawer and expect to find socks there.

At a high level, a hash table is a way of storing things that is a bit like a dresser for clothes. The basic idea is the following:

- You have some locations (drawers) where items can be stored.
- You come up with some rule that tells you which location (drawer) each item belongs in.
- When you need to find something, you use that rule to determine which drawer to look into.

The advantage of a system like this is that, assuming your rule isn't too complicated and you have an appropriate number of drawers, you can find what you're looking for pretty quickly by simply looking in the right place.

When you're storing your clothes, the "rule" you use might be something like "socks go in the top left drawer, shirts go in the big middle drawer, etc." When you're storing more abstract data, we use something called a hash function to do this for us.

A reasonable way to think about a hash function is as a black box. You put data in one side, and a number called the hash code comes out the other. Schematically, it looks something like this:

            +-----------+
   data --> |   hash    | --> hash code
            | function  |
            +-----------+

All hash functions are deterministic: if you put the same data into the function multiple times, you'll always get the same value coming out the other side. And a good hash function should look more or less random: small changes to the input data should give wildly different hash codes. For example, the hash codes for the string "pudu" and for the string "kudu" will likely be wildly different from one another. (Then again, it's possible that they're the same. After all, if a hash function's outputs should look more or less random, there's a chance we get the same hash code twice.)

How do you build a hash function? For now, let's go with "decent people shouldn't think too much about that." Mathematicians have worked out better and worse ways to design hash functions, but for our purposes we really don't need to worry about the internals. It's good enough to think of a hash function as a function that is

- deterministic (equal inputs give equal outputs), but
- apparently random (it's hard to predict one hash code given another).

Once we have a hash function, we can build a very simple hash table. We'll make an array of "buckets," which you can think of as being analogous to drawers in our dresser. To store an item in the hash table, we'll compute the hash code of the object and use it as an index in the table, which is analogous to "pick which drawer this item goes in." Then, we put that data item inside the bucket at that index. If that bucket was empty, great! We can put the item there. If that bucket is full, we have some choices of what we can do. A simple approach (called chained hashing) is to treat each bucket as a list of items, the same way that your sock drawer might store multiple socks, and then just add the item to the list at that index.

To look something up in a hash table, we use essentially the same procedure. We start by computing the hash code of the item to look up, which tells us which bucket (drawer) to search. If the item is in the table, it has to be in that bucket. Then we just look at all the items in the bucket and see whether our item is among them.

What's the advantage of doing things this way? Well, assuming we have a large number of buckets, we'd expect that most buckets won't have too many things in them. After all, our hash function kinda sorta ish looks like it has random outputs, so the items are distributed kinda sorta ish evenly across all the buckets. In fact, if we formalize the notion of "our hash function looks kinda random," we can prove that the expected number of items in each bucket is the ratio of the total number of items to the total number of buckets. Therefore, we can find the items we're looking for without having to do too much work.

The Details

Explaining how "a hash table" works is a bit tricky because there are many flavors of hash tables. This next section covers a few general implementation details common to all hash tables, plus some specifics of how different styles of hash tables work.

A first question that comes up is how you turn a hash code into a table slot index. In the above discussion, I just said "use the hash code as an index," but that's actually not a very good idea. In most programming languages, hash codes work out to 32-bit or 64-bit integers, and you aren't going to be able to use those directly as bucket indices. Instead, a common strategy is to make an array of buckets of some size m, compute the (full 32- or 64-bit) hash codes for your items, then mod them by the size of the table to get an index between 0 and m-1, inclusive. The use of modulus works well here because it's decently fast and does a decent job spreading the full range of hash codes across a smaller range.

(You sometimes see bitwise operators used here. If your table's size is a power of two, say 2^k, then computing the bitwise AND of the hash code with the number 2^k - 1 is equivalent to computing a modulus, and it's significantly faster.)
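
A quick check of that equivalence in Python (any power-of-two size works):

m = 2 ** 4                     # 16 buckets, so m - 1 == 0b1111
h = 0xDEADBEEF                 # any hash code
assert h % m == h & (m - 1)    # masking keeps just the low 4 bits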

The next question is how to choose the right number of buckets. If you pick too many buckets, then most buckets will be empty or have few elements (good for speed: you only have to check a few items per bucket), but you'll use a lot of space simply storing the buckets (not so great, though maybe you can afford it). The reverse holds as well: if you have too few buckets, then each bucket will hold more elements on average, making lookups take longer, but you'll use less memory.

A good compromise is to dynamically change the number of buckets over the lifetime of the hash table. The load factor of a hash table, typically denoted α, is the ratio of the number of elements to the number of buckets. Most hash tables pick some maximum load factor. Once the load factor crosses this limit, the hash table increases its number of slots (say, by doubling), then redistributes the elements from the old table into the new one. This is called rehashing. Assuming the maximum load factor in the table is a constant, this ensures that, assuming you have a good hash function, the expected cost of doing a lookup remains O(1). Insertions now have an amortized expected cost of O(1) because of the cost of periodically rebuilding the table, as is the case with deletions. (Deletions can similarly compact the table if the load factor gets too small.)

Hashing Strategies

Up to this point, we've been talking about chained hashing, which is one of many different strategies for building a hash table. As a reminder, chained hashing is kind of like a clothes dresser: each bucket (drawer) can hold multiple items, and when you do a lookup you check all of those items.

However, this isn't the only way to build a hash table. There's another family of hash tables that use a strategy called open addressing. The basic idea behind open addressing is to store an array of slots, where each slot can either be empty or hold exactly one item.

In open addressing, when you perform an insertion, as before, you jump to some slot whose index depends on the hash code computed. If that slot is free, great! You put the item there, and you're done. But what if the slot is already full? In that case, you use some secondary strategy to find a different free slot in which to store the item. The most common strategy for doing this uses an approach called linear probing. In linear probing, if the slot you want is already full, you simply shift to the next slot in the table. If that slot is empty, great! You can put the item there. But if that slot is full, you then move to the next slot in the table, etc. (If you hit the end of the table, just wrap back around to the beginning).

Linear probing is a surprisingly fast way to build a hash table. CPU caches are optimized for locality of reference, so memory lookups in adjacent memory locations tend to be much faster than memory lookups in scattered locations. Since a linear probing insertion or deletion works by hitting some array slot and then walking linearly forward, it results in few cache misses and ends up being a lot faster than what the theory normally predicts. (And it happens to be the case that the theory predicts it's going to be very fast!)

Another strategy that's become popular recently is cuckoo hashing. I like to think of cuckoo hashing as the "Frozen" of hash tables. Instead of having one hash table and one hash function, we have two hash tables and two hash functions. Each item can be in exactly one of two places - it's either in the location in the first table given by the first hash function, or it's in the location in the second table given by the second hash function. This means that lookups are worst-case efficient, since you only have to check two spots to see if something is in the table.

Insertions in cuckoo hashing use a different strategy than before. We start off by seeing if either of the two slots that could hold the item are free. If so, great! We just put the item there. But if that doesn't work, then we pick one of the slots, put the item there, and kick out the item that used to be there. That item has to go somewhere, so we try putting it in the other table at the appropriate slot. If that works, great! If not, we kick an item out of that table and try inserting it into the other table. This process continues until everything comes to rest, or we find ourselves trapped in a cycle. (That latter case is rare, and if it happens we have a bunch of options, like "put it in a secondary hash table" or "choose new hash functions and rebuild the tables.")
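
That insertion dance, as a compact sketch; the two tables and the hash functions h1 and h2 are supplied by the caller, and after max_kicks displacements we give up and signal that a rebuild is needed:

def cuckoo_insert(t1, t2, h1, h2, key, max_kicks=32):
    for _ in range(max_kicks):
        i = h1(key) % len(t1)
        if t1[i] is None:                 # first table's spot is free
            t1[i] = key
            return True
        t1[i], key = key, t1[i]           # kick out the current occupant
        j = h2(key) % len(t2)
        if t2[j] is None:                 # second table's spot is free
            t2[j] = key
            return True
        t2[j], key = key, t2[j]           # kick again; loop back to table 1
    return False                          # suspected cycle: rebuild/rehash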

There are many possible refinements of cuckoo hashing, such as using multiple tables, letting each slot hold multiple items, and keeping a "stash" to hold items that can't fit anywhere else. This is an active area of research!

Then there are hybrid approaches. Hopscotch hashing is a mix between open addressing and chained hashing that can be thought of as taking a chained hash table and storing each item in each bucket in a slot near where the item wants to go. This strategy plays well with multithreading. The Swiss table uses the fact that some processors can perform multiple operations in parallel with a single instruction to speed up a linear probing table. Extendible hashing is designed for databases and file systems and uses a mix of a trie and a chained hash table to dynamically increase bucket sizes as individual buckets get loaded. Robin Hood hashing is a variant of linear probing in which items can be moved after being inserted to reduce the variance in how far from home each element can live.

Further Reading

For more information about the basics of hash tables, check out these lecture slides on chained hashing and the follow-up slides on linear probing and Robin Hood hashing. You can learn more about cuckoo hashing here, and about the theoretical properties of hash functions.


Direct address table

To understand a hash table, the direct address table is the first concept we should understand.

A direct address table uses the key directly as the index of a slot in an array; the size of the universe of keys equals the size of the array. Accessing a key is very fast, O(1) time, because an array supports random access.
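
A direct address table is almost no code at all; a sketch for keys drawn from 0..9999:

table = [None] * 10000          # one slot per possible key

def store(key, value):
    table[key] = value          # the key itself is the index: O(1)

def lookup(key):
    return table[key]           # O(1) random access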

However, there are four considerations before implementing a direct address table:

- To be a valid array index, the key must be an integer.
- The range of the keys must be fairly small; otherwise we will need a huge array.
- No two different keys can map to the same slot in the array.
- The size of the universe of keys must equal the size of the array.

In fact, not many situations in real life fit the above requirements, so that's where a hash table comes to the rescue.

Hash table

Instead of using the key directly, a hash table first applies a mathematical hash function to consistently convert arbitrary key data to a number, and then uses that hash result as the key.

The universe of keys can be larger than the array, which means two different keys can hash to the same index (this is called a hash collision).

In practice, there are a few different strategies to deal with it. Here is a common solution: instead of storing the actual values in the array, we store a pointer to a linked list holding the values for all the keys that hash to that index.

If you are still interested in knowing how to implement a hashmap from scratch, please read the following post.


A Hashtable internally contains buckets in which it stores the key/value pairs. The Hashtable uses the hashcode to decide which bucket a key pair should map to. The function that gets the bucket location from a key's hashcode is called the hash function. In principle, a hash function is a function which, given a key, generates an address in the table. A hash function always returns a number for an object: two equal objects will always have the same number, while two unequal objects may not always have different numbers. When we put objects into a hashtable, it is possible for different objects to have the same hashcode. This is known as a collision. To resolve collisions, a hashtable uses an array of lists. The pairs mapped to a single array index are stored in a list, and the reference to that list is stored at the index.