

How does Python implement dictionaries?


I was wondering how Python dictionaries work under the hood, particularly the dynamic resizing aspect.

When we create a dictionary, what is its initial size?

If we update it with a lot of elements, I suppose the hash table needs to be enlarged, and the existing entries need to be re-mapped into the new, bigger table in a way that stays consistent with the previous one?

As you can see, I do not fully understand the internals of this structure.

Solution

When we create a dictionary, what is its initial size?

As can be seen in the source code:

/* PyDict_MINSIZE is the starting size for any new dict.
 * 8 allows dicts with no more than 5 active entries; experiments suggested
 * this suffices for the majority of dicts (consisting mostly of usually-small
 * dicts created to pass keyword arguments).
 * Making this 8, rather than 4 reduces the number of resizes for most
 * dictionaries, without any significant extra memory use.
 */
#define PyDict_MINSIZE 8
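You can watch this from the Python side with a small sketch (exact byte counts are a CPython implementation detail and vary between versions): sys.getsizeof jumps whenever the backing table is (re)allocated, and with an 8-slot starting table and its 5-entry budget you would expect a resize around the sixth insertion.

import sys

d = {}
last = sys.getsizeof(d)
print(f"empty dict: {last} bytes")
for i in range(32):
    d[i] = None
    size = sys.getsizeof(d)
    if size != last:
        # The footprint jumped: the hash table was (re)allocated.
        print(f"jump after inserting key #{len(d)}: {last} -> {size} bytes")
        last = size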

Imagine we update it with a lot of key-value pairs; I suppose we need to extend the hash table, and re-map the entries into the new, bigger table while keeping a kind of consistency with the previous one...

CPython checks how full the hash table is every time we add a key. If the table is two-thirds full, it resizes the table according to GROWTH_RATE (currently used*3) and reinserts all existing entries:

/* GROWTH_RATE. Growth rate upon hitting maximum load.
 * Currently set to used*3.
 * This means that dicts double in size when growing without deletions,
 * but have more head room when the number of deletions is on a par with the
 * number of insertions. See also bpo-17563 and bpo-33205.
 *
 * GROWTH_RATE was set to used*4 up to version 3.2.
 * GROWTH_RATE was set to used*2 in version 3.3.0
 * GROWTH_RATE was set to used*2 + capacity/2 in 3.4.0-3.6.0.
 */
#define GROWTH_RATE(d) ((d)->ma_used*3)
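Combining the two constants, the resize target can be modeled in Python. This is a sketch of the logic, not the actual C code: CPython picks the new table size as a power of two no smaller than GROWTH_RATE.

PyDict_MINSIZE = 8

def next_table_size(used):
    # Smallest power of two >= used*3 (GROWTH_RATE), starting from
    # PyDict_MINSIZE. Sketch only; the real computation lives in dictresize().
    target = used * 3
    size = PyDict_MINSIZE
    while size < target:
        size <<= 1
    return size

print(next_table_size(5))   # 16: the 8-slot table doubles, as the comment says
print(next_table_size(11))  # 64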

USABLE_FRACTION is the two-thirds threshold I mentioned above:

/* USABLE_FRACTION is the maximum dictionary load.
 * Increasing this ratio makes dictionaries more dense resulting in more
 * collisions. Decreasing it improves sparseness at the expense of spreading
 * indices over more cache lines and at the cost of total memory consumed.
 *
 * USABLE_FRACTION must obey the following:
 * (0 < USABLE_FRACTION(n) < n) for all n >= 2
 *
 * USABLE_FRACTION should be quick to calculate.
 * Fractions around 1/2 to 2/3 seem to work well in practice.
 */
#define USABLE_FRACTION(n) (((n) << 1)/3)
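The macro is easy to replay in Python; for the default 8-slot table it gives exactly the 5-entry budget mentioned in the PyDict_MINSIZE comment:

def usable_fraction(n):
    # Python transcription of: #define USABLE_FRACTION(n) (((n) << 1)/3)
    return (n << 1) // 3

for n in (8, 16, 32, 64):
    print(n, "slots ->", usable_fraction(n), "usable entries")
# 8 slots -> 5, 16 -> 10, 32 -> 21, 64 -> 42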

Furthermore, the index calculation is:

i = (size_t)hash & mask;

where mask is HASH_TABLE_SIZE-1.
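In Python terms (using small ints, which hash to themselves in CPython and make the example deterministic; string hashes are randomized per process):

mask = 8 - 1  # an 8-slot table: mask = HASH_TABLE_SIZE - 1

for key in (1, 2, 10, 18):
    print(key, "->", hash(key) & mask)
# 1 -> 1, 2 -> 2, 10 -> 2, 18 -> 2
# Keys 2, 10 and 18 all land in slot 2, which is where the probing
# scheme below takes over.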

Here's how hash collisions are dealt with:

perturb >>= PERTURB_SHIFT;
i = (i*5 + perturb + 1) & mask;

As explained in the source code:

The first half of collision resolution is to visit table indices via this recurrence:

j = ((5*j) + 1) mod 2**i

For any initial j in range(2**i), repeating that 2**i times generates each int in range(2**i) exactly once (see any text on random-number generation for proof). By itself, this doesn't help much: like linear probing (setting j += 1, or j -= 1, on each loop trip), it scans the table entries in a fixed order. This would be bad, except that's not the only thing we do, and it's actually *good* in the common cases where hash keys are consecutive. In an example that's really too small to make this entirely clear, for a table of size 2**3 the order of indices is:

0 -> 1 -> 6 -> 7 -> 4 -> 5 -> 2 -> 3 -> 0 [and here it's repeating]

If two things come in at index 5, the first place we look after is index 2, not 6, so if another comes in at index 6 the collision at 5 didn't hurt it. Linear probing is deadly in this case because there the fixed probe order is the *same* as the order consecutive keys are likely to arrive. But it's extremely unlikely hash codes will follow a 5*j+1 recurrence by accident, and certain that consecutive hash codes do not.

The other half of the strategy is to get the other bits of the hash code into play. This is done by initializing a (unsigned) vrbl "perturb" to the full hash code, and changing the recurrence to:

perturb >>= PERTURB_SHIFT;
j = (5*j) + 1 + perturb;
use j % 2**i as the next table index;

Now the probe sequence depends (eventually) on every bit in the hash code, and the pseudo-scrambling property of recurring on 5*j+1 is more valuable, because it quickly magnifies small differences in the bits that didn't affect the initial index. Note that because perturb is unsigned, if the recurrence is executed often enough perturb eventually becomes and remains 0. At that point (very rarely reached) the recurrence is on (just) 5*j+1 again, and that's certain to find an empty slot eventually (since it generates every int in range(2**i), and we make sure there's always at least one empty slot).
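The recurrence is easy to re-create in Python as a sketch (PERTURB_SHIFT is 5 in CPython; this mirrors the quoted snippet and is not the real lookup code). With hash 0, perturb stays 0 and you get exactly the bare 5*j+1 cycle shown above:

PERTURB_SHIFT = 5  # CPython's value

def probe_sequence(h, table_size, limit):
    # Mirrors: perturb >>= PERTURB_SHIFT; i = (i*5 + perturb + 1) & mask
    mask = table_size - 1
    perturb = h  # unsigned in C; a non-negative int works here
    i = h & mask
    for _ in range(limit):
        yield i
        perturb >>= PERTURB_SHIFT
        i = (i * 5 + perturb + 1) & mask

print(list(probe_sequence(0, 8, 9)))
# [0, 1, 6, 7, 4, 5, 2, 3, 0] -- the cycle from the comment above

print(list(probe_sequence(2, 8, 4)))   # [2, 3, 0, 1]
print(list(probe_sequence(66, 8, 4)))  # [2, 5, 2, 3]: same initial slot as
# hash 2, but the higher bits feed in through perturb and the probes diverge.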
