How to Set the Initial Capacity of a HashMap
- 1. The Alibaba coding guideline requirement
- 2. Scanning with the Alibaba plugin
- 3. Source code
- 3.1 Initializing without a capacity
- 3.2 Initializing with a capacity
- 4. Benchmark
- Appendix
1. The Alibaba coding guideline requirement
2. Scanning with the Alibaba plugin
3. Source code
3.1 Initializing without a capacity

```java
Map<Integer, BigDecimal> staffDidiMap = new HashMap<>();
```

3.2 Initializing with a capacity

```java
Map<Integer, BigDecimal> staffDidiMap = new HashMap<>(16);
```
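When a capacity is passed, the Alibaba guideline recommends computing it as (number of elements to store / load factor) + 1 rather than the raw element count. A minimal sketch of that calculation (the helper name `capacityFor` is ours, not from the guideline):

```java
public class CapacityCalc {
    // Alibaba guideline: initialCapacity = (expected elements / load factor) + 1,
    // using HashMap's default load factor of 0.75.
    static int capacityFor(int expectedSize) {
        return (int) (expectedSize / 0.75F + 1F);
    }

    public static void main(String[] args) {
        System.out.println(capacityFor(16));       // 22
        System.out.println(capacityFor(10000000)); // 13333334
    }
}
```

Sized this way, the map can hold all expected entries without its size ever crossing the resize threshold.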
The largest table a HashMap supports is 1 << 30:

```java
@Test
public void caffineTest() {
    // 1 << 30 is HashMap.MAXIMUM_CAPACITY
    System.out.println(1 << 30);
}
```

result: 1073741824
In the HashMap constructor, the requested capacity is rounded up to the next power of two:

```java
this.threshold = tableSizeFor(initialCapacity);
```
The following test replicates tableSizeFor's bit-twiddling for an initial capacity of 16:

```java
@Test
public void caffineTest() {
    int cap = 16;
    int n = cap - 1;
    // Smear the highest set bit of n into every lower bit...
    n |= n >>> 1;
    n |= n >>> 2;
    n |= n >>> 4;
    n |= n >>> 8;
    n |= n >>> 16;
    // ...so n + 1 is the smallest power of two >= cap,
    // clamped to MAXIMUM_CAPACITY (1073741824 = 1 << 30).
    int result = (n < 0) ? 1 : (n >= 1073741824) ? 1073741824 : n + 1;
    System.out.println(result);
}
```

result: 16
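Since 16 is already a power of two, the rounding is invisible above. To see it in action, the same bit-twiddling can be wrapped in a helper and fed values that are not powers of two (a sketch mirroring the JDK's private tableSizeFor, not the JDK code itself):

```java
public class TableSize {
    static final int MAXIMUM_CAPACITY = 1 << 30;

    // Mirrors HashMap.tableSizeFor: smallest power of two >= cap.
    static int tableSizeFor(int cap) {
        int n = cap - 1;
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }

    public static void main(String[] args) {
        System.out.println(tableSizeFor(16));       // 16
        System.out.println(tableSizeFor(17));       // 32
        System.out.println(tableSizeFor(10000000)); // 16777216
    }
}
```

So a requested capacity of 17 actually allocates a table of 32, and 10,000,000 allocates 16,777,216.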
// TODO: the source-code walkthrough is still incomplete
4. Benchmark
```java
@SpringBootTest
public class CaffineTest {

    @Test
    public void caffineTest() throws InterruptedException {
        test1();
        test2();
        test3();
        test4();
        test5();
    }

    // Method 1: no initial capacity (defaults to 16 and resizes as it grows).
    void test1() throws InterruptedException {
        int aHundredMillion = 10000000;
        Map<Integer, Integer> map = new HashMap<>();
        long s1 = System.currentTimeMillis();
        for (int i = 0; i < aHundredMillion; i++) {
            map.put(i, i);
        }
        long s2 = System.currentTimeMillis();
        System.out.println("No initial capacity, elapsed: " + (s2 - s1));
        map.clear();
        Thread.sleep(5000);
    }

    // Method 2: initial capacity of half the element count.
    void test2() throws InterruptedException {
        int aHundredMillion = 10000000;
        Map<Integer, Integer> map1 = new HashMap<>(aHundredMillion / 2);
        long s5 = System.currentTimeMillis();
        for (int i = 0; i < aHundredMillion; i++) {
            map1.put(i, i);
        }
        long s6 = System.currentTimeMillis();
        System.out.println("Initial capacity 5000000, elapsed: " + (s6 - s5));
        map1.clear();
        Thread.sleep(5000);
    }

    // Method 3: initial capacity equal to the element count.
    void test3() throws InterruptedException {
        int aHundredMillion = 10000000;
        Map<Integer, Integer> map2 = new HashMap<>(aHundredMillion);
        long s3 = System.currentTimeMillis();
        for (int i = 0; i < aHundredMillion; i++) {
            map2.put(i, i);
        }
        long s4 = System.currentTimeMillis();
        System.out.println("Initial capacity 10000000, elapsed: " + (s4 - s3));
        map2.clear();
        Thread.sleep(5000);
    }

    // Method 4: Alibaba guideline, initialCapacity = (elements to store / load factor) + 1.
    // Note that the default load factor is 0.75.
    void test4() throws InterruptedException {
        int aHundredMillion = 10000000;
        int initial = (int) (aHundredMillion / 0.75F + 1F);
        Map<Integer, Integer> map3 = new HashMap<>(initial);
        long s7 = System.currentTimeMillis();
        for (int i = 0; i < aHundredMillion; i++) {
            map3.put(i, i);
        }
        long s8 = System.currentTimeMillis();
        System.out.println("Initial capacity " + initial + ", elapsed: " + (s8 - s7));
        map3.clear();
        Thread.sleep(5000);
    }

    // Method 5: initial capacity of 16 (the default).
    void test5() throws InterruptedException {
        int aHundredMillion = 10000000;
        Map<Integer, Integer> map4 = new HashMap<>(16);
        long s9 = System.currentTimeMillis();
        for (int i = 0; i < aHundredMillion; i++) {
            map4.put(i, i);
        }
        long s10 = System.currentTimeMillis();
        System.out.println("Initial capacity 16, elapsed: " + (s10 - s9));
        map4.clear();
        Thread.sleep(5000);
    }
}
```
Results:
Each run inserts ten million entries into the map.
Method 1: no initial capacity, 3995 ms
Method 2: initial capacity 5,000,000, 2345 ms
Method 3: initial capacity 10,000,000, 261 ms
Method 4: Alibaba-guideline capacity, 192 ms
Method 5: initial capacity 16, 182 ms
After repeated runs, the conclusion is that no initial capacity performs worst. The most efficient and stable option follows the Alibaba guideline, initialCapacity = (number of elements to store / load factor) + 1, where the default load factor is 0.75; next comes an initial capacity of 16, and last is setting the capacity equal to the element count.
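The gap between the runs is largely explained by rehashing: with the default load factor of 0.75, a map that starts at capacity 16 must double its table whenever its size exceeds capacity * 0.75, and each doubling rehashes every existing entry. A rough sketch (our own arithmetic, derived from the default 16 / 0.75, not measured from the JDK) counting the doublings needed to hold ten million entries:

```java
public class ResizeCount {
    // Count the table doublings a default-sized HashMap performs
    // before it can hold the given number of entries.
    static int resizesFor(int entries) {
        long capacity = 16;   // default initial capacity
        int resizes = 0;
        // The table doubles whenever size would exceed capacity * 0.75.
        while (capacity * 0.75 < entries) {
            capacity <<= 1;
            resizes++;
        }
        return resizes;
    }

    public static void main(String[] args) {
        System.out.println(resizesFor(10000000)); // 20
    }
}
```

Twenty full rehashes of an ever-growing table is the cost the pre-sized variants avoid.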
Appendix
1. Why does Alibaba recommend specifying an initial capacity when creating a HashMap?