

[JDK 1.6] Java Collections: A Look at the ConcurrentHashMap Source

Published: 2024/3/12

Table of Contents

  • 1. Overview
  • Data structure and architecture of ConcurrentHashMap
  • Fields and constants of ConcurrentHashMap
  • Constructors
  • Storing data: put(K, V)
  • Retrieving data: get(Object)
  • Removing data: remove(Object)
  • The Segment class
    • Storing data: Segment.put()
    • Retrieving data: Segment.get()
    • Removing data: remove()
    • Resizing: Segment.rehash()
    • Segment.containsKey()
    • Segment.containsValue()
    • Segment.clear()
    • Segment.replace(K, int, V, V)
    • Segment.replace(K, int, V)
  • Element count: size()
  • Emptiness check: isEmpty()
  • Key lookup: containsKey()
  • Value lookup: containsValue()
  • Value lookup: contains()
  • Clearing all elements: clear()
  • Key view: keySet()
  • Value view: values()
  • Entry view: entrySet()
  • ConcurrentMap methods
    • putIfAbsent(K key, V value)
    • remove(Object key, Object value)
    • V replace(K key, V value)
    • replace(K key, V oldValue, V newValue)
  • Iterator: HashIterator

The source below is from JDK 1.6.

1. Overview

A hash table supporting full concurrency of retrievals and adjustable expected concurrency for updates. This class obeys the same functional specification as Hashtable, and includes versions of methods corresponding to each method of Hashtable. However, even though all operations are thread-safe, retrieval operations do not entail locking, and there is no support for locking the entire table in a way that prevents all access. This class is fully interoperable with Hashtable in programs that rely on its thread safety but not on its synchronization details.

Retrieval operations (including get) generally do not block, so they may overlap with update operations (including put and remove). Retrievals reflect the results of the most recently completed update operations. For aggregate operations such as putAll and clear, concurrent retrievals may reflect the insertion or removal of only some entries. Similarly, Iterators and Enumerations return elements reflecting the state of the hash table at some point at or since the creation of the iterator/enumeration. They do not throw ConcurrentModificationException. However, an iterator is designed to be used by only one thread at a time.
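The weakly consistent iteration described above can be demonstrated directly (a small sketch using the public API; the key names are arbitrary):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class WeakIterationDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new ConcurrentHashMap<String, Integer>();
        map.put("a", 1);
        map.put("b", 2);
        map.put("c", 3);
        // Removing during iteration would throw ConcurrentModificationException
        // on a plain HashMap; a ConcurrentHashMap iterator is weakly consistent,
        // so the loop completes and may or may not still visit "b".
        for (String key : map.keySet()) {
            if (key.equals("a"))
                map.remove("b");
        }
        System.out.println(map.size()); // 2: the removal took effect, no exception
    }
}
```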

The allowed concurrency among update operations is guided by the optional concurrencyLevel constructor argument (default 16), which is used as a hint for internal sizing. The table is internally partitioned to try to permit the indicated number of concurrent updates without contention. Because placement in hash tables is essentially random, the actual concurrency will vary. Ideally, you should choose a value that accommodates as many threads as will ever concurrently modify the table. Using a significantly higher value than needed can waste space and time, and a significantly lower value can lead to thread contention, but overestimates and underestimates within an order of magnitude do not usually have much noticeable impact. A value of one is appropriate when it is known that only one thread will modify the table and all others will only read. Also, resizing this or any other kind of hash table is a relatively slow operation, so, when possible, it is a good idea to provide estimates of the expected table size to the constructor.

This class and its views and iterators implement all of the optional methods of the Map and Iterator interfaces.

Like Hashtable but unlike HashMap, this class does not allow null to be used as a key or a value.
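A quick check of the null rejection (the class name here is mine; the behavior is as specified — a null value fails the explicit check in put, and a null key fails when key.hashCode() is invoked):

```java
import java.util.concurrent.ConcurrentHashMap;

public class NullRejectionDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, String> map = new ConcurrentHashMap<String, String>();
        boolean valueRejected = false;
        boolean keyRejected = false;
        try {
            map.put("k", null); // null values are rejected
        } catch (NullPointerException e) {
            valueRejected = true;
        }
        try {
            map.put(null, "v"); // and so are null keys
        } catch (NullPointerException e) {
            keyRejected = true;
        }
        System.out.println(valueRejected + " " + keyRejected); // true true
    }
}
```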

此類是 Java Collections Framework 的成員。

Data structure and architecture of ConcurrentHashMap

Segments:
ConcurrentHashMap uses lock striping (segmented locking): an array of segments, where each segment holds an array of buckets and each bucket heads a linked list of entries.
segments --> table --> entry
When storing an element, only the one segment that the key hashes to is locked, so operations on the other segments are not blocked.
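The two-level indexing can be sketched as follows. This is a simplified standalone model, not the JDK source; the shift and mask values mirror the defaults (16 segments):

```java
public class SegmentIndexDemo {
    // Defaults: 16 segments -> sshift = 4, segmentShift = 32 - 4 = 28, segmentMask = 15
    static final int SEGMENT_SHIFT = 28;
    static final int SEGMENT_MASK = 15;

    // The segment is chosen from the high bits of the (spread) hash...
    static int segmentIndex(int hash) {
        return (hash >>> SEGMENT_SHIFT) & SEGMENT_MASK;
    }

    // ...and the bucket within that segment's table from the low bits.
    static int bucketIndex(int hash, int tableLength) {
        return hash & (tableLength - 1);
    }

    public static void main(String[] args) {
        int hash = 0xABCD1234;
        System.out.println(segmentIndex(hash));     // high 4 bits: 0xA = 10
        System.out.println(bucketIndex(hash, 16));  // low 4 bits:  0x4 = 4
    }
}
```

Using different ends of the hash for the two levels keeps keys that collide within one segment from all piling into the same bucket.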

Fields and constants of ConcurrentHashMap

Static constants

/** The default initial capacity of each segment's table. */
static final int DEFAULT_INITIAL_CAPACITY = 16;

/** The default load factor. */
static final float DEFAULT_LOAD_FACTOR = 0.75f;

/** The default concurrency level, i.e. the size of the segments array. */
static final int DEFAULT_CONCURRENCY_LEVEL = 16;

/**
 * The maximum capacity of a segment's HashEntry<K,V>[] table
 * (the maximum number of buckets): 1073741824 [0x40000000].
 */
static final int MAXIMUM_CAPACITY = 1 << 30;

/** The maximum number of segments allowed: 65536 [0x10000]. */
static final int MAX_SEGMENTS = 1 << 16; // slightly conservative

/**
 * Number of unsynchronized retries in the size and containsValue
 * methods before resorting to locking. This avoids unbounded retries
 * when the table undergoes continuous modification, which would make
 * it impossible to obtain an accurate result.
 */
static final int RETRIES_BEFORE_LOCK = 2;

Instance fields

/**
 * Mask value for indexing into segments. The upper bits of a
 * key's hash code are used to choose the segment.
 */
final int segmentMask;

/**
 * Shift value for indexing within segments.
 */
final int segmentShift;

/** The segments, each of which is an independent hash table. */
final Segment<K,V>[] segments;

/* Views */
transient Set<K> keySet;
transient Set<Map.Entry<K,V>> entrySet;
transient Collection<V> values;

Constructors

Creates a new, empty map with the default initial capacity (16), load factor (0.75), and concurrencyLevel (16):

public ConcurrentHashMap() {
    this(DEFAULT_INITIAL_CAPACITY, DEFAULT_LOAD_FACTOR, DEFAULT_CONCURRENCY_LEVEL);
}

Specifies the initial capacity of the hash table:

public ConcurrentHashMap(int initialCapacity) {
    this(initialCapacity, DEFAULT_LOAD_FACTOR, DEFAULT_CONCURRENCY_LEVEL);
}

Specifies the initial capacity and the load factor:

public ConcurrentHashMap(int initialCapacity, float loadFactor) {
    this(initialCapacity, loadFactor, DEFAULT_CONCURRENCY_LEVEL);
}

Specifies the initial capacity, the load factor, and the segmentation level (concurrencyLevel), i.e. the expected number of concurrently updating threads:

public ConcurrentHashMap(int initialCapacity,
                         float loadFactor, int concurrencyLevel) {
    if (!(loadFactor > 0) || initialCapacity < 0 || concurrencyLevel <= 0)
        throw new IllegalArgumentException();

    if (concurrencyLevel > MAX_SEGMENTS)
        concurrencyLevel = MAX_SEGMENTS;

    // Find power-of-two sizes best matching arguments
    int sshift = 0;
    int ssize = 1;
    while (ssize < concurrencyLevel) {
        ++sshift;
        ssize <<= 1;
    }
    segmentShift = 32 - sshift;
    segmentMask = ssize - 1;
    this.segments = Segment.newArray(ssize);

    if (initialCapacity > MAXIMUM_CAPACITY)
        initialCapacity = MAXIMUM_CAPACITY;
    int c = initialCapacity / ssize;
    if (c * ssize < initialCapacity)
        ++c;
    int cap = 1;
    while (cap < c)
        cap <<= 1;

    for (int i = 0; i < this.segments.length; ++i)
        this.segments[i] = new Segment<K,V>(cap, loadFactor);
}
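The power-of-two sizing in this constructor can be traced in isolation. The sketch below re-implements just the arithmetic (it is not the JDK code itself) and returns the segment count, segmentShift, segmentMask, and per-segment capacity:

```java
public class SizingDemo {
    // Mirrors the constructor's sizing: round concurrencyLevel up to a power
    // of two (ssize), then give each segment a power-of-two table large
    // enough that ssize * cap >= initialCapacity.
    static int[] sizing(int initialCapacity, int concurrencyLevel) {
        int sshift = 0;
        int ssize = 1;
        while (ssize < concurrencyLevel) {
            ++sshift;
            ssize <<= 1;
        }
        int c = initialCapacity / ssize;
        if (c * ssize < initialCapacity)
            ++c;                    // round up so capacity is not lost
        int cap = 1;
        while (cap < c)
            cap <<= 1;              // per-segment capacity, power of two
        return new int[] { ssize, 32 - sshift, ssize - 1, cap };
    }

    public static void main(String[] args) {
        // initialCapacity = 100, concurrencyLevel = 16:
        // 16 segments; ceil(100/16) = 7 -> next power of two = 8 buckets each
        int[] r = sizing(100, 16);
        System.out.println(r[0] + " segments, " + r[3] + " buckets each");
    }
}
```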

Creates a new map with the same mappings as the given map. The new map has a capacity of 1.5 times the number of mappings in the given map or 16 (whichever is greater), the default load factor (0.75), and the default concurrencyLevel (16):

public ConcurrentHashMap(Map<? extends K, ? extends V> m) {
    this(Math.max((int) (m.size() / DEFAULT_LOAD_FACTOR) + 1,
                  DEFAULT_INITIAL_CAPACITY),
         DEFAULT_LOAD_FACTOR, DEFAULT_CONCURRENCY_LEVEL);
    putAll(m);
}

Storing data: put(K, V)

public V put(K key, V value) {
    if (value == null)
        throw new NullPointerException();
    int hash = hash(key.hashCode());
    return segmentFor(hash).put(key, hash, value, false);
}

Retrieving data: get(Object)

public V get(Object key) {
    int hash = hash(key.hashCode());
    return segmentFor(hash).get(key, hash);
}

Removing data: remove(Object)

public V remove(Object key) {
    int hash = hash(key.hashCode());
    return segmentFor(hash).remove(key, hash, null);
}

From the source comment: returns the segment that should be used for a key with the given hash.

final Segment<K,V> segmentFor(int hash) {
    return segments[(hash >>> segmentShift) & segmentMask];
}

The Segment class

From the source comment: Segments are specialized versions of hash tables. They subclass ReentrantLock opportunistically, just to simplify some locking and avoid separate construction.

static final class Segment<K,V> extends ReentrantLock implements Serializable {

    transient volatile int count;
    transient int modCount;
    transient int threshold;
    transient volatile HashEntry<K,V>[] table;
    final float loadFactor;

    Segment(int initialCapacity, float lf) {
        loadFactor = lf;
        setTable(HashEntry.<K,V>newArray(initialCapacity));
    }

    static final <K,V> Segment<K,V>[] newArray(int i) {
        return new Segment[i];
    }
}

Methods of Segment

Storing data: Segment.put()

V put(K key, int hash, V value, boolean onlyIfAbsent) {
    lock();
    try {
        int c = count;
        if (c++ > threshold) // ensure capacity
            rehash();
        HashEntry<K,V>[] tab = table;
        int index = hash & (tab.length - 1);
        HashEntry<K,V> first = tab[index];
        HashEntry<K,V> e = first;
        while (e != null && (e.hash != hash || !key.equals(e.key)))
            e = e.next;

        V oldValue;
        if (e != null) {
            oldValue = e.value;
            if (!onlyIfAbsent)
                e.value = value;
        }
        else {
            oldValue = null;
            ++modCount;
            tab[index] = new HashEntry<K,V>(key, hash, first, value);
            count = c; // write-volatile
        }
        return oldValue;
    } finally {
        unlock();
    }
}

Retrieving data: Segment.get()

V get(Object key, int hash) {
    if (count != 0) { // read-volatile
        HashEntry<K,V> e = getFirst(hash);
        while (e != null) {
            if (e.hash == hash && key.equals(e.key)) {
                V v = e.value;
                if (v != null)
                    return v;
                return readValueUnderLock(e); // recheck
            }
            e = e.next;
        }
    }
    return null;
}

HashEntry<K,V> getFirst(int hash) {
    HashEntry<K,V>[] tab = table;
    return tab[hash & (tab.length - 1)];
}

V readValueUnderLock(HashEntry<K,V> e) {
    lock();
    try {
        return e.value;
    } finally {
        unlock();
    }
}

Removing data: remove()

V remove(Object key, int hash, Object value) {
    lock();
    try {
        int c = count - 1;
        HashEntry<K,V>[] tab = table;
        int index = hash & (tab.length - 1);
        HashEntry<K,V> first = tab[index];
        HashEntry<K,V> e = first;
        while (e != null && (e.hash != hash || !key.equals(e.key)))
            e = e.next;

        V oldValue = null;
        if (e != null) {
            V v = e.value;
            if (value == null || value.equals(v)) {
                oldValue = v;
                // All entries following removed node can stay
                // in list, but all preceding ones need to be
                // cloned.
                ++modCount;
                HashEntry<K,V> newFirst = e.next;
                for (HashEntry<K,V> p = first; p != e; p = p.next)
                    newFirst = new HashEntry<K,V>(p.key, p.hash,
                                                  newFirst, p.value);
                tab[index] = newFirst;
                count = c; // write-volatile
            }
        }
        return oldValue;
    } finally {
        unlock();
    }
}
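The "clone the preceding nodes" step exists because HashEntry.next is final: the tail after the removed node is shared as-is, while the nodes in front of it are copied onto that tail. The simplified model below (my own Node class, not HashEntry) shows the effect, including the quirk that the copied prefix ends up in reverse order — harmless, since order within a hash bucket does not matter:

```java
import java.util.ArrayList;
import java.util.List;

public class RemoveCloneDemo {
    static final class Node {
        final String key;
        final Node next;
        Node(String key, Node next) { this.key = key; this.next = next; }
    }

    // Mirrors Segment.remove: the tail after the removed node is reused,
    // and each preceding node is cloned onto it in forward order
    // (which reverses the prefix).
    static Node removeKey(Node first, String key) {
        Node e = first;
        while (e != null && !e.key.equals(key))
            e = e.next;
        if (e == null)
            return first;               // key not present: list unchanged
        Node newFirst = e.next;         // shared tail
        for (Node p = first; p != e; p = p.next)
            newFirst = new Node(p.key, newFirst);
        return newFirst;
    }

    static List<String> keys(Node n) {
        List<String> out = new ArrayList<String>();
        for (; n != null; n = n.next)
            out.add(n.key);
        return out;
    }

    public static void main(String[] args) {
        Node list = new Node("a", new Node("b", new Node("c", new Node("d", null))));
        // Removing "c": "d" is shared, "a" and "b" are cloned in reverse.
        System.out.println(keys(removeKey(list, "c"))); // [b, a, d]
    }
}
```

Because the original nodes are never mutated, a reader traversing the old list concurrently still sees a consistent (pre-removal) chain.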

Resizing: Segment.rehash()

Resizing doubles the table: the new capacity is oldCapacity << 1, and the new threshold is newCapacity * loadFactor.

void rehash() {
    HashEntry<K,V>[] oldTable = table;
    int oldCapacity = oldTable.length;
    if (oldCapacity >= MAXIMUM_CAPACITY)
        return;

    /*
     * Reclassify nodes in each list to new Map. Because we are
     * using power-of-two expansion, the elements from each bin
     * must either stay at same index, or move with a power of two
     * offset. We eliminate unnecessary node creation by catching
     * cases where old nodes can be reused because their next
     * fields won't change. Statistically, at the default
     * threshold, only about one-sixth of them need cloning when
     * a table doubles. The nodes they replace will be garbage
     * collectable as soon as they are no longer referenced by any
     * reader thread that may be in the midst of traversing table
     * right now.
     */

    HashEntry<K,V>[] newTable = HashEntry.newArray(oldCapacity << 1);
    threshold = (int)(newTable.length * loadFactor);
    int sizeMask = newTable.length - 1;
    for (int i = 0; i < oldCapacity; i++) {
        // We need to guarantee that any existing reads of old Map can
        // proceed. So we cannot yet null out each bin.
        HashEntry<K,V> e = oldTable[i];

        if (e != null) {
            HashEntry<K,V> next = e.next;
            int idx = e.hash & sizeMask;

            // Single node on list
            if (next == null)
                newTable[idx] = e;

            else {
                // Reuse trailing consecutive sequence at same slot
                HashEntry<K,V> lastRun = e;
                int lastIdx = idx;
                for (HashEntry<K,V> last = next;
                     last != null;
                     last = last.next) {
                    int k = last.hash & sizeMask;
                    if (k != lastIdx) {
                        lastIdx = k;
                        lastRun = last;
                    }
                }
                newTable[lastIdx] = lastRun;

                // Clone all remaining nodes
                for (HashEntry<K,V> p = e; p != lastRun; p = p.next) {
                    int k = p.hash & sizeMask;
                    HashEntry<K,V> n = newTable[k];
                    newTable[k] = new HashEntry<K,V>(p.key, p.hash,
                                                     n, p.value);
                }
            }
        }
    }
    table = newTable;
}
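The lastRun optimization can be illustrated on just the per-node target indices (a toy model of my own; the real code walks HashEntry nodes rather than an int array):

```java
public class LastRunDemo {
    // Given the new-table index of each node in one old bucket, find the
    // start of the longest trailing run that maps to a single index.
    // Nodes from that position on can be reused wholesale; only the nodes
    // before it must be cloned, exactly as in rehash().
    static int lastRunStart(int[] idx) {
        int lastRun = 0;
        for (int i = 1; i < idx.length; i++)
            if (idx[i] != idx[i - 1])
                lastRun = i;
        return lastRun;
    }

    public static void main(String[] args) {
        // After doubling, the chain maps to slots 0,1,0,1,1,1:
        // positions 3..5 form the trailing run, so cloning starts before index 3.
        System.out.println(lastRunStart(new int[] {0, 1, 0, 1, 1, 1})); // 3
    }
}
```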

Segment.containsKey()

boolean containsKey(Object key, int hash) {
    if (count != 0) { // read-volatile
        HashEntry<K,V> e = getFirst(hash);
        while (e != null) {
            if (e.hash == hash && key.equals(e.key))
                return true;
            e = e.next;
        }
    }
    return false;
}

Segment.containsValue()

boolean containsValue(Object value) {
    if (count != 0) { // read-volatile
        HashEntry<K,V>[] tab = table;
        int len = tab.length;
        for (int i = 0; i < len; i++) {
            for (HashEntry<K,V> e = tab[i]; e != null; e = e.next) {
                V v = e.value;
                if (v == null) // recheck
                    v = readValueUnderLock(e);
                if (value.equals(v))
                    return true;
            }
        }
    }
    return false;
}

Segment.clear()

void clear() {
    if (count != 0) {
        lock();
        try {
            HashEntry<K,V>[] tab = table;
            for (int i = 0; i < tab.length; i++)
                tab[i] = null;
            ++modCount;
            count = 0; // write-volatile
        } finally {
            unlock();
        }
    }
}

Segment.replace(K, int, V, V)

boolean replace(K key, int hash, V oldValue, V newValue) {
    lock();
    try {
        HashEntry<K,V> e = getFirst(hash);
        while (e != null && (e.hash != hash || !key.equals(e.key)))
            e = e.next;

        boolean replaced = false;
        if (e != null && oldValue.equals(e.value)) {
            replaced = true;
            e.value = newValue;
        }
        return replaced;
    } finally {
        unlock();
    }
}

Segment.replace(K, int, V)

V replace(K key, int hash, V newValue) {
    lock();
    try {
        HashEntry<K,V> e = getFirst(hash);
        while (e != null && (e.hash != hash || !key.equals(e.key)))
            e = e.next;

        V oldValue = null;
        if (e != null) {
            oldValue = e.value;
            e.value = newValue;
        }
        return oldValue;
    } finally {
        unlock();
    }
}

Element count: size()

public int size() {
    final Segment<K,V>[] segments = this.segments;
    long sum = 0;
    long check = 0;
    int[] mc = new int[segments.length];
    // Try a few times to get accurate count. On failure due to
    // continuous async changes in table, resort to locking.
    for (int k = 0; k < RETRIES_BEFORE_LOCK; ++k) {
        check = 0;
        sum = 0;
        int mcsum = 0;
        for (int i = 0; i < segments.length; ++i) {
            sum += segments[i].count;
            mcsum += mc[i] = segments[i].modCount;
        }
        if (mcsum != 0) {
            for (int i = 0; i < segments.length; ++i) {
                check += segments[i].count;
                if (mc[i] != segments[i].modCount) {
                    check = -1; // force retry
                    break;
                }
            }
        }
        if (check == sum)
            break;
    }
    if (check != sum) { // Resort to locking all segments
        sum = 0;
        for (int i = 0; i < segments.length; ++i)
            segments[i].lock();
        for (int i = 0; i < segments.length; ++i)
            sum += segments[i].count;
        for (int i = 0; i < segments.length; ++i)
            segments[i].unlock();
    }
    if (sum > Integer.MAX_VALUE)
        return Integer.MAX_VALUE;
    else
        return (int)sum;
}
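The optimistic part of size() can be reduced to a toy model: read every segment's count together with its modCount, then re-read the modCounts; if none changed, the sum is a consistent snapshot, otherwise the caller retries and eventually locks all segments. This sketch is single-threaded (so the re-check trivially passes) and uses plain arrays in place of segments:

```java
public class OptimisticSizeDemo {
    // Returns the summed counts if the modCounts were stable across the
    // read, or -1 to signal "retry / fall back to locking".
    static long trySum(int[] counts, int[] modCounts) {
        long sum = 0;
        int[] mc = new int[counts.length];
        for (int i = 0; i < counts.length; i++) {
            sum += counts[i];
            mc[i] = modCounts[i];       // snapshot each modCount
        }
        for (int i = 0; i < counts.length; i++)
            if (mc[i] != modCounts[i])  // a concurrent writer moved it
                return -1;
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(trySum(new int[] {3, 4}, new int[] {7, 9})); // 7
    }
}
```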

Emptiness check: isEmpty()

public boolean isEmpty() {
    final Segment<K,V>[] segments = this.segments;
    /*
     * We keep track of per-segment modCounts to avoid ABA
     * problems in which an element in one segment was added and
     * in another removed during traversal, in which case the
     * table was never actually empty at any point. Note the
     * similar use of modCounts in the size() and containsValue()
     * methods, which are the only other methods also susceptible
     * to ABA problems.
     */
    int[] mc = new int[segments.length];
    int mcsum = 0;
    for (int i = 0; i < segments.length; ++i) {
        if (segments[i].count != 0)
            return false;
        else
            mcsum += mc[i] = segments[i].modCount;
    }
    // If mcsum happens to be zero, then we know we got a snapshot
    // before any modifications at all were made. This is
    // probably common enough to bother tracking.
    if (mcsum != 0) {
        for (int i = 0; i < segments.length; ++i) {
            if (segments[i].count != 0 ||
                mc[i] != segments[i].modCount)
                return false;
        }
    }
    return true;
}

Key lookup: containsKey()

public boolean containsKey(Object key) {
    int hash = hash(key.hashCode());
    return segmentFor(hash).containsKey(key, hash);
}

Value lookup: containsValue()

public boolean containsValue(Object value) {
    if (value == null)
        throw new NullPointerException();

    // See explanation of modCount use above
    final Segment<K,V>[] segments = this.segments;
    int[] mc = new int[segments.length];

    // Try a few times without locking
    for (int k = 0; k < RETRIES_BEFORE_LOCK; ++k) {
        int sum = 0;
        int mcsum = 0;
        for (int i = 0; i < segments.length; ++i) {
            int c = segments[i].count;
            mcsum += mc[i] = segments[i].modCount;
            if (segments[i].containsValue(value))
                return true;
        }
        boolean cleanSweep = true;
        if (mcsum != 0) {
            for (int i = 0; i < segments.length; ++i) {
                int c = segments[i].count;
                if (mc[i] != segments[i].modCount) {
                    cleanSweep = false;
                    break;
                }
            }
        }
        if (cleanSweep)
            return false;
    }
    // Resort to locking all segments
    for (int i = 0; i < segments.length; ++i)
        segments[i].lock();
    boolean found = false;
    try {
        for (int i = 0; i < segments.length; ++i) {
            if (segments[i].containsValue(value)) {
                found = true;
                break;
            }
        }
    } finally {
        for (int i = 0; i < segments.length; ++i)
            segments[i].unlock();
    }
    return found;
}

Value lookup: contains()

This legacy method is equivalent to containsValue() and exists for compatibility with Hashtable.

public boolean contains(Object value) {
    return containsValue(value);
}

Clearing all elements: clear()

public void clear() {
    for (int i = 0; i < segments.length; ++i)
        segments[i].clear();
}

Key view: keySet()

public Set<K> keySet() {
    Set<K> ks = keySet;
    return (ks != null) ? ks : (keySet = new KeySet());
}

Value view: values()

public Collection<V> values() {
    Collection<V> vs = values;
    return (vs != null) ? vs : (values = new Values());
}

Entry view: entrySet()

public Set<Map.Entry<K,V>> entrySet() {
    Set<Map.Entry<K,V>> es = entrySet;
    return (es != null) ? es : (entrySet = new EntrySet());
}

ConcurrentMap methods

putIfAbsent(K key, V value)

public V putIfAbsent(K key, V value) {
    if (value == null)
        throw new NullPointerException();
    int hash = hash(key.hashCode());
    return segmentFor(hash).put(key, hash, value, true);
}
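A short usage example of the contract (key and values are arbitrary): putIfAbsent returns null when the key was absent and the mapping was inserted, and returns the existing value, leaving the map unchanged, when the key was already present.

```java
import java.util.concurrent.ConcurrentHashMap;

public class PutIfAbsentDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<String, Integer>();
        System.out.println(map.putIfAbsent("k", 1)); // null: key was absent, 1 inserted
        System.out.println(map.putIfAbsent("k", 2)); // 1: key present, map unchanged
        System.out.println(map.get("k"));            // 1
    }
}
```

Because the check and the insert happen under one segment lock, this is atomic, unlike a separate containsKey-then-put sequence.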

remove(Object key, Object value)

public boolean remove(Object key, Object value) {
    int hash = hash(key.hashCode());
    if (value == null)
        return false;
    return segmentFor(hash).remove(key, hash, value) != null;
}

V replace(K key, V value)

public V replace(K key, V value) {
    if (value == null)
        throw new NullPointerException();
    int hash = hash(key.hashCode());
    return segmentFor(hash).replace(key, hash, value);
}

replace(K key, V oldValue, V newValue)

public boolean replace(K key, V oldValue, V newValue) {
    if (oldValue == null || newValue == null)
        throw new NullPointerException();
    int hash = hash(key.hashCode());
    return segmentFor(hash).replace(key, hash, oldValue, newValue);
}

Iterator: HashIterator

abstract class HashIterator {
    int nextSegmentIndex;
    int nextTableIndex;
    HashEntry<K,V>[] currentTable;
    HashEntry<K,V> nextEntry;
    HashEntry<K,V> lastReturned;

    HashIterator() {
        nextSegmentIndex = segments.length - 1;
        nextTableIndex = -1;
        advance();
    }

    public boolean hasMoreElements() { return hasNext(); }

    final void advance() {
        if (nextEntry != null && (nextEntry = nextEntry.next) != null)
            return;

        while (nextTableIndex >= 0) {
            if ( (nextEntry = currentTable[nextTableIndex--]) != null)
                return;
        }

        while (nextSegmentIndex >= 0) {
            Segment<K,V> seg = segments[nextSegmentIndex--];
            if (seg.count != 0) {
                currentTable = seg.table;
                for (int j = currentTable.length - 1; j >= 0; --j) {
                    if ( (nextEntry = currentTable[j]) != null) {
                        nextTableIndex = j - 1;
                        return;
                    }
                }
            }
        }
    }

    public boolean hasNext() { return nextEntry != null; }

    HashEntry<K,V> nextEntry() {
        if (nextEntry == null)
            throw new NoSuchElementException();
        lastReturned = nextEntry;
        advance();
        return lastReturned;
    }

    public void remove() {
        if (lastReturned == null)
            throw new IllegalStateException();
        ConcurrentHashMap.this.remove(lastReturned.key);
        lastReturned = null;
    }
}

Iterator implementation classes

final class KeyIterator
    extends HashIterator
    implements Iterator<K>, Enumeration<K>
{
    public K next()        { return super.nextEntry().key; }
    public K nextElement() { return super.nextEntry().key; }
}

final class ValueIterator
    extends HashIterator
    implements Iterator<V>, Enumeration<V>
{
    public V next()        { return super.nextEntry().value; }
    public V nextElement() { return super.nextEntry().value; }
}

static class SimpleEntry<K,V> implements Entry<K,V> {
    K key;
    V value;

    public SimpleEntry(K key, V value) {
        this.key = key;
        this.value = value;
    }

    public SimpleEntry(Entry<K,V> e) {
        this.key = e.getKey();
        this.value = e.getValue();
    }
}

final class WriteThroughEntry extends AbstractMap.SimpleEntry<K,V> {
    WriteThroughEntry(K k, V v) {
        super(k, v);
    }

    /**
     * Set our entry's value and write through to the map. The
     * value to return is somewhat arbitrary here. Since a
     * WriteThroughEntry does not necessarily track asynchronous
     * changes, the most recent "previous" value could be
     * different from what we return (or could even have been
     * removed in which case the put will re-establish). We do not
     * and cannot guarantee more.
     */
    public V setValue(V value) {
        if (value == null) throw new NullPointerException();
        V v = super.setValue(value);
        ConcurrentHashMap.this.put(getKey(), value);
        return v;
    }
}

final class EntryIterator
    extends HashIterator
    implements Iterator<Entry<K,V>>
{
    public Map.Entry<K,V> next() {
        HashEntry<K,V> e = super.nextEntry();
        return new WriteThroughEntry(e.key, e.value);
    }
}

View implementation classes

final class KeySet extends AbstractSet<K> {
    public Iterator<K> iterator() {
        return new KeyIterator();
    }
    public int size() {
        return ConcurrentHashMap.this.size();
    }
    public boolean contains(Object o) {
        return ConcurrentHashMap.this.containsKey(o);
    }
    public boolean remove(Object o) {
        return ConcurrentHashMap.this.remove(o) != null;
    }
    public void clear() {
        ConcurrentHashMap.this.clear();
    }
}

final class Values extends AbstractCollection<V> {
    public Iterator<V> iterator() {
        return new ValueIterator();
    }
    public int size() {
        return ConcurrentHashMap.this.size();
    }
    public boolean contains(Object o) {
        return ConcurrentHashMap.this.containsValue(o);
    }
    public void clear() {
        ConcurrentHashMap.this.clear();
    }
}

final class EntrySet extends AbstractSet<Map.Entry<K,V>> {
    public Iterator<Map.Entry<K,V>> iterator() {
        return new EntryIterator();
    }
    public boolean contains(Object o) {
        if (!(o instanceof Map.Entry))
            return false;
        Map.Entry<?,?> e = (Map.Entry<?,?>)o;
        V v = ConcurrentHashMap.this.get(e.getKey());
        return v != null && v.equals(e.getValue());
    }
    public boolean remove(Object o) {
        if (!(o instanceof Map.Entry))
            return false;
        Map.Entry<?,?> e = (Map.Entry<?,?>)o;
        return ConcurrentHashMap.this.remove(e.getKey(), e.getValue());
    }
    public int size() {
        return ConcurrentHashMap.this.size();
    }
    public void clear() {
        ConcurrentHashMap.this.clear();
    }
}
