HashMap 核心实现原理分析
主要数据结构
核心字段
transient Node<K,V>[] table; // 哈希表数组
transient int size; // 实际存储的键值对数量
int threshold; // 扩容阈值 (capacity * load factor)
final float loadFactor; // 负载因子,默认0.75f
transient int modCount; // 修改次数,用于fail-fast机制
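modCount 支撑的 fail-fast 机制可以用公开 API 直接演示(以下为示意代码,类名为本文虚构):迭代期间发生结构性修改后,迭代器检测到 modCount 与其记录的 expectedModCount 不一致,在 next() 中抛出 ConcurrentModificationException。

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;

public class FailFastDemo {
    // 迭代期间发生结构性修改,modCount 与迭代器记录的 expectedModCount 不一致,
    // 下一次 next() 调用抛出 ConcurrentModificationException
    static boolean demo() {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("b", 2);
        try {
            for (String k : map.keySet()) {
                map.put("c", 3); // 新增 key 属于结构性修改,会递增 modCount
            }
        } catch (ConcurrentModificationException e) {
            return true; // 快速失败被触发
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // true
    }
}
```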
节点结构
static class Node<K,V> implements Map.Entry<K,V> {
    final int hash;      // 缓存的hash值
    final K key;         // 键
    V value;             // 值
    Node<K,V> next;      // 链表指针
}
初始容量设计
- 默认初始容量:16(2^4)
- 延迟初始化:构造时不创建数组,首次使用时才创建
- 逻辑容量:threshold字段,表示触发扩容的元素数量阈值
从源码可以看到,HashMap的默认初始容量是:
static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16
16是HashMap的默认初始容量,使用位运算 1 << 4 来表示,确保容量是2的幂次方。
查看HashMap的默认构造函数:
public HashMap() {
    this.loadFactor = DEFAULT_LOAD_FACTOR; // all other fields defaulted
}
关键发现:构造函数中没有立即创建数组,只设置了负载因子为0.75,其他字段保持默认值(table = null)。
真正的初始化发生在第一次插入元素时:putVal 检测到 table 为 null,调用 resize() 创建数组(HashMap 没有独立的初始化方法,基于 sizeCtl 和 CAS 的 initTable() 是 ConcurrentHashMap 的实现,不要混淆):

// putVal 中的延迟初始化
if ((tab = table) == null || (n = tab.length) == 0)
    n = (tab = resize()).length;
指定初始容量的构造函数
public HashMap(int initialCapacity) {
    this(initialCapacity, DEFAULT_LOAD_FACTOR);
}

public HashMap(int initialCapacity, float loadFactor) {
    // ... 参数校验 ...
    this.loadFactor = loadFactor;
    this.threshold = tableSizeFor(initialCapacity); // 预计算threshold
}
重要细节:当指定初始容量时,threshold 临时存储了预计算的容量值,在首次初始化(首次 resize())时会被正确设置为真正的扩容阈值。
static final int tableSizeFor(int cap) {
    int n = -1 >>> Integer.numberOfLeadingZeros(cap - 1);
    return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
}
此方法返回大于等于 cap 的最小的 2 的幂。-1 的补码全为 1,无符号右移 numberOfLeadingZeros(cap - 1) 位后,得到恰好覆盖 cap - 1 全部有效位的全 1 二进制数,再加 1 即为 2 的幂(用 cap - 1 而不是 cap,是为了让本身已是 2 的幂的 cap 返回自身)。
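tableSizeFor 的行为可以独立验证。下面是与 JDK 实现等价的示意版本(类名为本文虚构,常量值取自源码):

```java
public class TableSizeForDemo {
    static final int MAXIMUM_CAPACITY = 1 << 30;

    // 与 JDK 等价:返回大于等于 cap 的最小的 2 的幂
    static int tableSizeFor(int cap) {
        int n = -1 >>> Integer.numberOfLeadingZeros(cap - 1);
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }

    public static void main(String[] args) {
        System.out.println(tableSizeFor(1));  // 1
        System.out.println(tableSizeFor(10)); // 16
        System.out.println(tableSizeFor(16)); // 16:已是 2 的幂时返回自身
        System.out.println(tableSizeFor(17)); // 32
    }
}
```

注意 cap = 1 时 numberOfLeadingZeros(0) 返回 32,而 Java 的 >>> 会对移位数取模 32,此时 n 为 -1,三目表达式兜底返回 1。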
哈希机制实现
哈希函数设计
static final int hash(Object key) {
    int h;
    return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}
核心思想:
- 将hashCode的高16位与低16位进行异或运算
- 目的:减少哈希冲突,因为数组索引计算只用低位,通过异或让高位也参与运算
- 优势:保持了原hashCode的分布特性,同时增强了散列性
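这种扰动的效果可以用一个小例子验证(示意代码,类名为本文虚构):两个 hashCode 低 16 位完全相同的值,扰动前落入同一个桶,扰动后被分开。

```java
public class HashSpreadDemo {
    // 与 HashMap.hash 等价的扰动函数(省略 null 检查,直接对 int 操作)
    static int spread(int h) {
        return h ^ (h >>> 16);
    }

    public static void main(String[] args) {
        int n = 16; // 表长度
        int h1 = 0x10000, h2 = 0x20000; // 低 16 位全 0,仅高位不同
        // 不扰动:索引只取低位,两者冲突在 0 号桶
        System.out.println((n - 1) & h1); // 0
        System.out.println((n - 1) & h2); // 0
        // 扰动后:高位信息混入低位,冲突消除
        System.out.println((n - 1) & spread(h1)); // 1
        System.out.println((n - 1) & spread(h2)); // 2
    }
}
```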
索引计算
// 在putVal等方法中使用
int index = (n - 1) & hash; // n为数组长度,必须是2的幂
- 使用位运算代替取模,性能更高
- 要求数组长度必须是2的幂次方
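当 n 为 2 的幂时,(n - 1) & hash 与对 n 取模等价(且对负数 hash 也能得到非负索引)。下面用示意代码验证这一点(类名为本文虚构,用 Math.floorMod 做非负取模的对照):

```java
public class IndexDemo {
    // 当 n 为 2 的幂时,(n - 1) & hash 等价于对 n 的非负取模
    static int indexFor(int n, int hash) {
        return (n - 1) & hash;
    }

    public static void main(String[] args) {
        int n = 16;
        for (int hash : new int[]{3, 19, 27, -5}) {
            // Math.floorMod 保证结果非负,与位运算结果一致
            System.out.println(indexFor(n, hash) == Math.floorMod(hash, n)); // 均为 true
        }
    }
}
```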
resize()方法
resize() 方法是HashMap扩容的核心实现,负责在以下场景下重新调整哈希表大小:
- 初始化:首次创建内部数组
- 扩容:当元素数量超过阈值时,将容量翻倍
核心优化原理:
- 扩容后,元素只可能在原位置或原位置+oldCap
- 通过 hash & oldCap 判断元素应该放在哪个位置
- 避免了重新计算hash值,大大提升性能
/**
 * Initializes or doubles table size.  If null, allocates in
 * accord with initial capacity target held in field threshold.
 * Otherwise, because we are using power-of-two expansion, the
 * elements from each bin must either stay at same index, or move
 * with a power of two offset in the new table.
 *
 * @return the table
 */
final Node<K,V>[] resize() {
    Node<K,V>[] oldTab = table;
    int oldCap = (oldTab == null) ? 0 : oldTab.length;
    int oldThr = threshold;
    int newCap, newThr = 0;
    if (oldCap > 0) {
        if (oldCap >= MAXIMUM_CAPACITY) {
            threshold = Integer.MAX_VALUE;
            return oldTab;
        }
        else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&
                 oldCap >= DEFAULT_INITIAL_CAPACITY)
            newThr = oldThr << 1; // double threshold
    }
    else if (oldThr > 0) // initial capacity was placed in threshold
        newCap = oldThr;
    else {               // zero initial threshold signifies using defaults
        newCap = DEFAULT_INITIAL_CAPACITY;
        newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
    }
    if (newThr == 0) {
        float ft = (float)newCap * loadFactor;
        newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ?
                  (int)ft : Integer.MAX_VALUE);
    }
    threshold = newThr;
    @SuppressWarnings({"rawtypes","unchecked"})
    Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap];
    table = newTab;
    if (oldTab != null) {
        for (int j = 0; j < oldCap; ++j) {
            Node<K,V> e;
            if ((e = oldTab[j]) != null) {
                oldTab[j] = null;
                if (e.next == null)
                    newTab[e.hash & (newCap - 1)] = e;
                else if (e instanceof TreeNode)
                    ((TreeNode<K,V>)e).split(this, newTab, j, oldCap);
                else { // preserve order
                    Node<K,V> loHead = null, loTail = null;
                    Node<K,V> hiHead = null, hiTail = null;
                    Node<K,V> next;
                    do {
                        next = e.next;
                        if ((e.hash & oldCap) == 0) {
                            if (loTail == null)
                                loHead = e;
                            else
                                loTail.next = e;
                            loTail = e;
                        }
                        else {
                            if (hiTail == null)
                                hiHead = e;
                            else
                                hiTail.next = e;
                            hiTail = e;
                        }
                    } while ((e = next) != null);
                    if (loTail != null) {
                        loTail.next = null;
                        newTab[j] = loHead;
                    }
                    if (hiTail != null) {
                        hiTail.next = null;
                        newTab[j + oldCap] = hiHead;
                    }
                }
            }
        }
    }
    return newTab;
}
容量和阈值计算逻辑
三种初始化场景
Node<K,V>[] oldTab = table;
int oldCap = (oldTab == null) ? 0 : oldTab.length;
int oldThr = threshold;
int newCap, newThr = 0;

if (oldCap > 0) {
    // 场景1:已有数组,进行扩容
    if (oldCap >= MAXIMUM_CAPACITY) {
        threshold = Integer.MAX_VALUE;
        return oldTab; // 达到最大容量,不再扩容
    }
    else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&
             oldCap >= DEFAULT_INITIAL_CAPACITY)
        newThr = oldThr << 1; // 容量和阈值都翻倍
}
else if (oldThr > 0) // 场景2:延迟初始化(oldCap == 0),threshold中存储了预计算的容量
    newCap = oldThr;
else { // 场景3:默认初始化
    newCap = DEFAULT_INITIAL_CAPACITY; // 16
    newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY); // 12
}
阈值重新计算
if (newThr == 0) {
    float ft = (float)newCap * loadFactor;
    newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ?
              (int)ft : Integer.MAX_VALUE);
}
关键设计:当 newThr == 0 时(场景2),需要根据新容量重新计算真正的扩容阈值。
元素重新分布算法
核心优化:位运算判断新位置
if ((e.hash & oldCap) == 0) {
    // 保持原位置:新索引 = 原索引
    // loHead/loTail 链表
} else {
    // 移动到新位置:新索引 = 原索引 + oldCap
    // hiHead/hiTail 链表
}
为什么使用 e.hash & oldCap 判断?
- 原索引计算:hash & (oldCap - 1)
- 新索引计算:hash & (newCap - 1) = hash & (2*oldCap - 1)

当容量从 oldCap 扩展到 2*oldCap 时:
- 如果 hash 的第 log2(oldCap) 位为 0 → 新索引 = 原索引
- 如果 hash 的第 log2(oldCap) 位为 1 → 新索引 = 原索引 + oldCap
示例(容量从8扩展到16):

oldCap = 8 (01000), newCap = 16 (10000)

hash = 19 (10011)
原索引:19 & 7  = 10011 & 00111 = 3
新索引:19 & 15 = 10011 & 01111 = 3

hash = 27 (11011)
原索引:27 & 7  = 11011 & 00111 = 3
新索引:27 & 15 = 11011 & 01111 = 11 = 3 + 8
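上面的规律可以穷举验证(示意验证代码,非 JDK 源码):对任意 hash,扩容后的新索引要么等于原索引,要么等于原索引 + oldCap,且由 hash & oldCap 是否为 0 决定。

```java
public class ResizeIndexDemo {
    // 验证:新索引 = (hash & oldCap) == 0 ? 原索引 : 原索引 + oldCap
    static boolean verify(int oldCap, int hashRange) {
        int newCap = oldCap << 1;
        for (int hash = 0; hash < hashRange; hash++) {
            int oldIdx = hash & (oldCap - 1);
            int newIdx = hash & (newCap - 1);
            int predicted = ((hash & oldCap) == 0) ? oldIdx : oldIdx + oldCap;
            if (predicted != newIdx) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(verify(8, 1024));  // true
        System.out.println(verify(16, 1024)); // true
    }
}
```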
三种节点处理策略
1 单节点处理
if (e.next == null)
    newTab[e.hash & (newCap - 1)] = e;
直接重新计算索引位置
2 红黑树节点处理
else if (e instanceof TreeNode)
    ((TreeNode<K,V>)e).split(this, newTab, j, oldCap);
调用TreeNode的split方法,处理树形结构的分裂
3 链表节点处理
Node<K,V> loHead = null, loTail = null; // 低位链表(保持原位置)
Node<K,V> hiHead = null, hiTail = null; // 高位链表(移动到新位置)
do {
    next = e.next;
    if ((e.hash & oldCap) == 0) {
        // 构建低位链表
        if (loTail == null) loHead = e;
        else loTail.next = e;
        loTail = e;
    } else {
        // 构建高位链表
        if (hiTail == null) hiHead = e;
        else hiTail.next = e;
        hiTail = e;
    }
} while ((e = next) != null);
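这段低位/高位拆分逻辑可以抽出来单独演示(示意代码:Node 是本文虚构的简化节点,不是 JDK 的 Node):拆分保持各节点的相对顺序。

```java
public class SplitListDemo {
    // 示意用的简化单链表节点
    static class Node {
        final int hash;
        Node next;
        Node(int hash) { this.hash = hash; }
    }

    // 按 hash & oldCap 把链表拆成低位/高位两条,保持相对顺序
    static Node[] split(Node head, int oldCap) {
        Node loHead = null, loTail = null, hiHead = null, hiTail = null;
        for (Node e = head, next; e != null; e = next) {
            next = e.next;
            e.next = null;
            if ((e.hash & oldCap) == 0) {
                if (loTail == null) loHead = e; else loTail.next = e;
                loTail = e;
            } else {
                if (hiTail == null) hiHead = e; else hiTail.next = e;
                hiTail = e;
            }
        }
        return new Node[]{loHead, hiHead};
    }

    static String hashes(Node n) {
        StringBuilder sb = new StringBuilder();
        for (; n != null; n = n.next) sb.append(n.hash).append(' ');
        return sb.toString().trim();
    }

    public static void main(String[] args) {
        // 桶 3 中的链表:hash 为 3, 19, 27, 11(oldCap = 8 时都落在索引 3)
        Node head = new Node(3);
        head.next = new Node(19);
        head.next.next = new Node(27);
        head.next.next.next = new Node(11);
        Node[] parts = split(head, 8);
        System.out.println(hashes(parts[0])); // 低位链表:3 19
        System.out.println(hashes(parts[1])); // 高位链表:27 11
    }
}
```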
TreeNode.split() 方法
split() 方法是 HashMap 扩容机制中处理红黑树节点分裂的核心方法,专门在 resize() 操作中被调用。当哈希表容量翻倍时,原本在同一个桶中的红黑树节点需要根据新的容量重新分布到两个不同的桶中。
参数说明:
- map:当前 HashMap 实例的引用
- tab:新的哈希表数组
- index:原始桶的索引位置
- bit:分割位,实际上就是 oldCap(原始容量)
/**
 * Splits nodes in a tree bin into lower and upper tree bins,
 * or untreeifies if now too small. Called only from resize;
 * see above discussion about split bits and indices.
 *
 * @param map the map
 * @param tab the table for recording bin heads
 * @param index the index of the table being split
 * @param bit the bit of hash to split on
 */
final void split(HashMap<K,V> map, Node<K,V>[] tab, int index, int bit) {
    TreeNode<K,V> b = this;
    // Relink into lo and hi lists, preserving order
    TreeNode<K,V> loHead = null, loTail = null;
    TreeNode<K,V> hiHead = null, hiTail = null;
    int lc = 0, hc = 0;
    for (TreeNode<K,V> e = b, next; e != null; e = next) {
        next = (TreeNode<K,V>)e.next;
        e.next = null;
        if ((e.hash & bit) == 0) {
            if ((e.prev = loTail) == null)
                loHead = e;
            else
                loTail.next = e;
            loTail = e;
            ++lc;
        }
        else {
            if ((e.prev = hiTail) == null)
                hiHead = e;
            else
                hiTail.next = e;
            hiTail = e;
            ++hc;
        }
    }
    if (loHead != null) {
        if (lc <= UNTREEIFY_THRESHOLD)
            tab[index] = loHead.untreeify(map);
        else {
            tab[index] = loHead;
            if (hiHead != null) // (else is already treeified)
                loHead.treeify(tab);
        }
    }
    if (hiHead != null) {
        if (hc <= UNTREEIFY_THRESHOLD)
            tab[index + bit] = hiHead.untreeify(map);
        else {
            tab[index + bit] = hiHead;
            if (loHead != null)
                hiHead.treeify(tab);
        }
    }
}
核心分裂算法
节点分类逻辑
if ((e.hash & bit) == 0) {
    // 低位链表:保持原索引位置
    // 构建 loHead/loTail 链表
} else {
    // 高位链表:移动到新索引位置(index + bit)
    // 构建 hiHead/hiTail 链表
}
这里的分类原理与 resize() 方法中的链表处理完全一致:
- 通过 hash & oldCap 判断节点在新表中的位置
- 如果结果为 0,节点保持在原索引位置
- 如果结果非 0,节点移动到 原索引 + oldCap 位置
在分裂过程中,方法巧妙地维护了双向链表结构:
if ((e.prev = loTail) == null)
    loHead = e;
else
    loTail.next = e;
loTail = e;
这确保了分裂后的两个链表仍然保持原有的相对顺序,为后续的树化或去树化操作提供基础。
分裂后的数据结构决策
UNTREEIFY_THRESHOLD 阈值判断
分裂完成后,方法根据每个子链表的节点数量决定最终的数据结构:
- 去树化条件:lc <= UNTREEIFY_THRESHOLD(阈值为 6)
- 保持树化条件:节点数量大于 6
当需要去树化时,调用 untreeify() 方法:

final Node<K,V> untreeify(HashMap<K,V> map) {
    Node<K,V> hd = null, tl = null;
    for (Node<K,V> q = this; q != null; q = q.next) {
        Node<K,V> p = map.replacementNode(q, null);
        if (tl == null)
            hd = p;
        else
            tl.next = p;
        tl = p;
    }
    return hd;
}
该方法将 TreeNode 转换为普通的 Node,调用 replacementNode() 创建新节点:

Node<K,V> replacementNode(Node<K,V> p, Node<K,V> next) {
    return new Node<>(p.hash, p.key, p.value, next);
}
当保持树化时,调用 treeify() 方法重新构建红黑树:

final void treeify(Node<K,V>[] tab) {
    TreeNode<K,V> root = null;
    for (TreeNode<K,V> x = this, next; x != null; x = next) {
        next = (TreeNode<K,V>)x.next;
        x.left = x.right = null;
        if (root == null) {
            x.parent = null;
            x.red = false;
            root = x;
        } else {
            // 重新构建红黑树结构
            // 包含平衡插入逻辑
        }
    }
    moveRootToFront(tab, root);
}
性能优化设计
条件树化优化
if (hiHead != null) // (else is already treeified)
    loHead.treeify(tab);
这个条件判断是一个重要优化:只有当两个子链表都存在时,才对低位链表进行树化。如果高位链表为空,说明所有节点都分配到了低位链表,此时无需重新树化(本来就是红黑树)。
在 treeify() 方法的最后调用 moveRootToFront():

static <K,V> void moveRootToFront(Node<K,V>[] tab, TreeNode<K,V> root) {
    int n;
    if (root != null && tab != null && (n = tab.length) > 0) {
        int index = (n - 1) & root.hash;
        TreeNode<K,V> first = (TreeNode<K,V>)tab[index];
        if (root != first) {
            // 调整链表指针,确保根节点位于链表头部
        }
    }
}
此方法确保红黑树的根节点始终是桶中链表的第一个节点,这对于查找性能至关重要。
算法复杂度分析
- 时间复杂度:O(n),其中 n 是原红黑树中的节点数量
- 空间复杂度:O(1),原地重新组织节点
- 树化/去树化开销:
- 去树化:O(n) - 线性遍历转换
- 树化:O(n log n) - 重新构建平衡树
相比于普通链表的分裂,TreeNode 的 split() 方法需要额外处理:
- 双向链表指针的维护(prev/next)
- 树结构的决策(树化/去树化)
- 红黑树的重新平衡
查找操作实现
查找流程
final Node<K,V> getNode(Object key) {
    Node<K,V>[] tab; Node<K,V> first, e; int n, hash; K k;
    if ((tab = table) != null && (n = tab.length) > 0 &&
        (first = tab[(n - 1) & (hash = hash(key))]) != null) {
        // 1. 检查桶的第一个节点
        if (first.hash == hash &&
            ((k = first.key) == key || (key != null && key.equals(k))))
            return first;
        if ((e = first.next) != null) {
            // 2. 红黑树查找
            if (first instanceof TreeNode)
                return ((TreeNode<K,V>)first).getTreeNode(hash, key);
            // 3. 链表遍历查找
            do {
                if (e.hash == hash &&
                    ((k = e.key) == key || (key != null && key.equals(k))))
                    return e;
            } while ((e = e.next) != null);
        }
    }
    return null;
}
查找策略:
- 直接命中:检查桶首节点
- 树查找:O(log n)复杂度
- 链表遍历:O(n)复杂度
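这三种查找路径对调用方完全透明,用公开 API 即可验证边界行为(示意代码,类名为本文虚构),例如 null key 的 hash 固定为 0,总是落在 0 号桶且可以正常命中:

```java
import java.util.HashMap;

public class GetDemo {
    static HashMap<String, Integer> build() {
        HashMap<String, Integer> map = new HashMap<>();
        map.put(null, 0); // null key 的 hash 为 0,落在 0 号桶
        map.put("k", 1);
        return map;
    }

    public static void main(String[] args) {
        HashMap<String, Integer> map = build();
        System.out.println(map.get(null));      // 0
        System.out.println(map.get("k"));       // 1
        System.out.println(map.get("missing")); // null:未命中返回 null
    }
}
```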
putVal方法分析
final V putVal(int hash, K key, V value, boolean onlyIfAbsent, boolean evict) {
    Node<K,V>[] tab; Node<K,V> p; int n, i;
    // 1. 延迟初始化
    if ((tab = table) == null || (n = tab.length) == 0)
        n = (tab = resize()).length;
    // 2. 桶为空,直接插入
    if ((p = tab[i = (n - 1) & hash]) == null)
        tab[i] = newNode(hash, key, value, null);
    else {
        Node<K,V> e; K k;
        // 3. key已存在,准备覆盖
        if (p.hash == hash &&
            ((k = p.key) == key || (key != null && key.equals(k))))
            e = p;
        // 4. 红黑树插入
        else if (p instanceof TreeNode)
            e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value);
        // 5. 链表插入
        else {
            for (int binCount = 0; ; ++binCount) {
                if ((e = p.next) == null) {
                    p.next = newNode(hash, key, value, null);
                    // 达到树化阈值
                    if (binCount >= TREEIFY_THRESHOLD - 1)
                        treeifyBin(tab, hash);
                    break;
                }
                if (e.hash == hash &&
                    ((k = e.key) == key || (key != null && key.equals(k))))
                    break;
                p = e;
            }
        }
        // 6. 处理值覆盖
        if (e != null) {
            V oldValue = e.value;
            if (!onlyIfAbsent || oldValue == null)
                e.value = value;
            afterNodeAccess(e);
            return oldValue;
        }
    }
    // 7. 检查是否需要扩容
    ++modCount;
    if (++size > threshold)
        resize();
    afterNodeInsertion(evict);
    return null;
}
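putVal 各分支的返回值可以通过公开 API 观察(示意代码,类名为本文虚构):首次插入返回 null,覆盖返回旧值,putIfAbsent 对应 onlyIfAbsent = true 的路径。

```java
import java.util.HashMap;

public class PutValDemo {
    // 收集 put/putIfAbsent 各步骤的返回值,便于观察 putVal 的行为
    static Integer[] run() {
        HashMap<String, Integer> map = new HashMap<>();
        Integer first = map.put("a", 1);         // 首次插入:putVal 末尾 return null
        Integer second = map.put("a", 2);        // 覆盖:第 6 步返回旧值 1
        Integer third = map.putIfAbsent("a", 3); // onlyIfAbsent=true:不覆盖,返回当前值 2
        return new Integer[]{first, second, third, map.get("a")};
    }

    public static void main(String[] args) {
        for (Integer v : run())
            System.out.println(v); // 依次输出 null, 1, 2, 2
    }
}
```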