Series Table of Contents
Chapter 1: OkHttp3 Source Code Analysis - Request Flow
Chapter 2: OkHttp3 Source Code Analysis - Interceptors
Chapter 3: OkHttp3 Source Code Analysis - Connection Mechanism and Cache Mechanism
Table of Contents
- Series Table of Contents
- Preface
- 1. Connection Mechanism
- 1.1 Creating a Connection
- 1.2 The Connection Pool
- 2. Cache Mechanism
- 2.1 Cache Strategy
- 2.2 Cache Management
- Bonus
- Acknowledgements
Preface
This article is based on the okhttp 3.12.13 source code.
The previous two articles covered OkHttp's request flow and its interceptors; for details see:
Chapter 1: OkHttp3 Source Code Analysis - Request Flow
Chapter 2: OkHttp3 Source Code Analysis - Interceptors
Now let's look at the connection mechanism and the cache mechanism.
1. Connection Mechanism
Connection creation is coordinated by the StreamAllocation object which, as mentioned earlier, is created as early as RetryAndFollowUpInterceptor. The StreamAllocation object mainly manages two key roles:
- RealConnection: the object that actually establishes the connection, using a Socket underneath.
- ConnectionPool: the connection pool, which manages and reuses connections.
Although the StreamAllocation object is initialized in RetryAndFollowUpInterceptor, no Socket connection is actually established there; the real connection work happens later, in ConnectInterceptor.
1.1 Creating a Connection
As discussed in the ConnectInterceptor analysis in the interceptors chapter, ConnectInterceptor is responsible for establishing the connection. The actual connection is implemented in RealConnection, and connections are managed by the connection pool, ConnectionPool, which keeps at most 5 idle keep-alive connections, each kept alive for 5 minutes, and cleans up stale connections on a background thread.
The work is done mainly by the following two calls:
- HttpCodec httpCodec = streamAllocation.newStream(client, chain, doExtensiveHealthChecks);
- RealConnection connection = streamAllocation.connection();
The detailed call site looks like this:
//ConnectInterceptor.java
public final class ConnectInterceptor implements Interceptor {
@Override public Response intercept(Chain chain) throws IOException {
RealInterceptorChain realChain = (RealInterceptorChain) chain;
Request request = realChain.request();
StreamAllocation streamAllocation = realChain.streamAllocation();
// We need the network to satisfy this request. Possibly for validating a conditional GET.
boolean doExtensiveHealthChecks = !request.method().equals("GET");
HttpCodec httpCodec = streamAllocation.newStream(client, chain, doExtensiveHealthChecks);
RealConnection connection = streamAllocation.connection();
return realChain.proceed(request, streamAllocation, httpCodec, connection);
}
}
StreamAllocation.newStream() calls findHealthyConnection() to establish the connection:
public final class StreamAllocation {
public HttpCodec newStream(
OkHttpClient client, Interceptor.Chain chain, boolean doExtensiveHealthChecks) {
int connectTimeout = chain.connectTimeoutMillis();
int readTimeout = chain.readTimeoutMillis();
int writeTimeout = chain.writeTimeoutMillis();
int pingIntervalMillis = client.pingIntervalMillis();
boolean connectionRetryEnabled = client.retryOnConnectionFailure();
try {
// Call findHealthyConnection to find a usable connection
RealConnection resultConnection = findHealthyConnection(connectTimeout, readTimeout,
writeTimeout, pingIntervalMillis, connectionRetryEnabled, doExtensiveHealthChecks);
HttpCodec resultCodec = resultConnection.newCodec(client, chain, this);
synchronized (connectionPool) {
codec = resultCodec;
return resultCodec;
}
} catch (IOException e) {
throw new RouteException(e);
}
}
}
findHealthyConnection() ultimately calls findConnection() to establish the connection:
public final class StreamAllocation {
/**
* Returns a connection to host a new stream. This prefers the existing connection if it exists,
* then the pool, finally building a new connection.
*/
private RealConnection findConnection(int connectTimeout, int readTimeout, int writeTimeout,
int pingIntervalMillis, boolean connectionRetryEnabled) throws IOException {
boolean foundPooledConnection = false;
RealConnection result = null;
Route selectedRoute = null;
Connection releasedConnection;
Socket toClose;
synchronized (connectionPool) {
if (released) throw new IllegalStateException("released");
if (codec != null) throw new IllegalStateException("codec != null");
if (canceled) throw new IOException("Canceled");
//1 Check whether an existing healthy connection can be reused
releasedConnection = this.connection;
toClose = releaseIfNoNewStreams();
if (this.connection != null) {
// We had an already-allocated connection and it's good.
result = this.connection;
releasedConnection = null;
}
if (!reportedAcquired) {
// If the connection was never reported acquired, don't report it as released!
releasedConnection = null;
}
if (result == null) {
//2 Check whether the connection pool has a usable connection; if so, use it
Internal.instance.get(connectionPool, address, this, null);
if (connection != null) {
foundPooledConnection = true;
result = connection;
} else {
selectedRoute = route;
}
}
}
closeQuietly(toClose);
if (releasedConnection != null) {
eventListener.connectionReleased(call, releasedConnection);
}
if (foundPooledConnection) {
eventListener.connectionAcquired(call, result);
}
if (result != null) {
// If we found an already-allocated or pooled connection, we're done.
route = connection.route();
return result;
}
// If we need a route selection, make one. This is a blocking operation.
boolean newRouteSelection = false;
//Route selection (handles multiple IP addresses)
if (selectedRoute == null && (routeSelection == null || !routeSelection.hasNext())) {
newRouteSelection = true;
routeSelection = routeSelector.next();
}
//3 If no usable connection was found, create a new one
synchronized (connectionPool) {
if (canceled) throw new IOException("Canceled");
if (newRouteSelection) {
// Now that we have a set of IP addresses, make another attempt at getting a connection from
// the pool. This could match due to connection coalescing.
List<Route> routes = routeSelection.getAll();
for (int i = 0, size = routes.size(); i < size; i++) {
Route route = routes.get(i);
Internal.instance.get(connectionPool, address, this, route);
if (connection != null) {
foundPooledConnection = true;
result = connection;
this.route = route;
break;
}
}
}
if (!foundPooledConnection) {
if (selectedRoute == null) {
selectedRoute = routeSelection.next();
}
// Create a connection and assign it to this allocation immediately. This makes it possible
// for an asynchronous cancel() to interrupt the handshake we're about to do.
route = selectedRoute;
refusedStreamCount = 0;
result = new RealConnection(connectionPool, selectedRoute);
acquire(result, false);
}
}
// If we found a pooled connection on the 2nd time around, we're done.
if (foundPooledConnection) {
eventListener.connectionAcquired(call, result);
return result;
}
// Do TCP + TLS handshakes. This is a blocking operation.
//4 Perform the TCP and TLS handshakes
result.connect(connectTimeout, readTimeout, writeTimeout, pingIntervalMillis,
connectionRetryEnabled, call, eventListener);
routeDatabase().connected(result.route());
//5 Put the newly created connection into the connection pool
Socket socket = null;
synchronized (connectionPool) {
reportedAcquired = true;
// Pool the connection.
Internal.instance.put(connectionPool, result);
// If another multiplexed connection to the same address was created concurrently, then
// release this connection and acquire that one.
if (result.isMultiplexed()) {
socket = Internal.instance.deduplicate(connectionPool, address, this);
result = connection;
}
}
closeQuietly(socket);
eventListener.connectionAcquired(call, result);
return result;
}
}
The overall flow is:
- Check whether an existing healthy connection can be reused, i.e.:
  - the Socket has not been closed
  - the input stream has not been closed
  - the output stream has not been closed
  - the HTTP/2 connection has not been shut down
- Check whether the connection pool has a usable connection; if so, use it.
- If there is no usable connection, create a new one.
- Perform the TCP connection and the TLS handshake.
- Add the newly created connection to the connection pool.
Once the steps above are done we have a RealConnection object, and its connect() method is called to establish the connection. Let's look at the implementation of RealConnection.connect().
public final class RealConnection extends Http2Connection.Listener implements Connection {
public void connect(int connectTimeout, int readTimeout, int writeTimeout,
int pingIntervalMillis, boolean connectionRetryEnabled, Call call,
EventListener eventListener) {
if (protocol != null) throw new IllegalStateException("already connected");
//Connection spec selection
RouteException routeException = null;
List<ConnectionSpec> connectionSpecs = route.address().connectionSpecs();
ConnectionSpecSelector connectionSpecSelector = new ConnectionSpecSelector(connectionSpecs);
if (route.address().sslSocketFactory() == null) {
if (!connectionSpecs.contains(ConnectionSpec.CLEARTEXT)) {
throw new RouteException(new UnknownServiceException(
"CLEARTEXT communication not enabled for client"));
}
String host = route.address().url().host();
if (!Platform.get().isCleartextTrafficPermitted(host)) {
throw new RouteException(new UnknownServiceException(
"CLEARTEXT communication to " + host + " not permitted by network security policy"));
}
} else {
if (route.address().protocols().contains(Protocol.H2_PRIOR_KNOWLEDGE)) {
throw new RouteException(new UnknownServiceException(
"H2_PRIOR_KNOWLEDGE cannot be used with HTTPS"));
}
}
//Start connecting
while (true) {
try {
if (route.requiresTunnel()) {//If a tunnel is required, build the tunnel connection
connectTunnel(connectTimeout, readTimeout, writeTimeout, call, eventListener);
if (rawSocket == null) {
// We were unable to connect the tunnel but properly closed down our resources.
break;
}
} else {//Otherwise make a plain Socket connection (the common case) --> see connectSocket below
connectSocket(connectTimeout, readTimeout, call, eventListener);
}
//Establish the protocol (HTTPS/HTTP2 where applicable)
establishProtocol(connectionSpecSelector, pingIntervalMillis, call, eventListener);
eventListener.connectEnd(call, route.socketAddress(), route.proxy(), protocol);
break;
} catch (IOException e) {
closeQuietly(socket);
closeQuietly(rawSocket);
socket = null;
rawSocket = null;
source = null;
sink = null;
handshake = null;
protocol = null;
http2Connection = null;
eventListener.connectFailed(call, route.socketAddress(), route.proxy(), null, e);
if (routeException == null) {
routeException = new RouteException(e);
} else {
routeException.addConnectException(e);
}
if (!connectionRetryEnabled || !connectionSpecSelector.connectionFailed(e)) {
throw routeException;
}
}
}
if (route.requiresTunnel() && rawSocket == null) {
ProtocolException exception = new ProtocolException("Too many tunnel connections attempted: "
+ MAX_TUNNEL_ATTEMPTS);
throw new RouteException(exception);
}
if (http2Connection != null) {
synchronized (connectionPool) {
allocationLimit = http2Connection.maxConcurrentStreams();
}
}
}
/**Does all the work necessary to build a full HTTP or HTTPS connection on a raw socket.*/
private void connectSocket(int connectTimeout, int readTimeout, Call call,
EventListener eventListener) throws IOException {
Proxy proxy = route.proxy();
Address address = route.address();
//Create the socket differently depending on the proxy type
rawSocket = proxy.type() == Proxy.Type.DIRECT || proxy.type() == Proxy.Type.HTTP
? address.socketFactory().createSocket()
: new Socket(proxy);
eventListener.connectStart(call, route.socketAddress(), proxy);
rawSocket.setSoTimeout(readTimeout);
try {
//Establish the Socket connection --> socket.connect
Platform.get().connectSocket(rawSocket, route.socketAddress(), connectTimeout);
} catch (ConnectException e) {
ConnectException ce = new ConnectException("Failed to connect to " + route.socketAddress());
ce.initCause(e);
throw ce;
}
// The following try/catch block is a pseudo hacky way to get around a crash on Android 7.0
// More details:
// https://github.com/square/okhttp/issues/3245
// https://android-review.googlesource.com/#/c/271775/
try {
//Obtain the input/output streams
source = Okio.buffer(Okio.source(rawSocket));
sink = Okio.buffer(Okio.sink(rawSocket));
} catch (NullPointerException npe) {
if (NPE_THROW_WITH_NULL.equals(npe.getMessage())) {
throw new IOException(npe);
}
}
}
}
Ultimately this calls connect() on the standard Java Socket:
public class Platform {
public void connectSocket(Socket socket, InetSocketAddress address, int connectTimeout)
throws IOException {
socket.connect(address, connectTimeout);
}
}
1.2 The Connection Pool
As we know, in a complex network environment, repeatedly setting up Socket connections (TCP three-way handshake) and tearing them down (TCP four-way handshake) wastes both network resources and time. HTTP keep-alive connections therefore play an important role in reducing latency and improving speed.
Reusing connections requires managing them, which is where the connection pool comes in.
By default OkHttp keeps at most 5 idle keep-alive connections, each with a keep-alive duration of 5 minutes (how long an idle connection stays alive before being reclaimed). The pool is implemented by ConnectionPool, which reclaims and manages connections.
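These defaults are simply the values passed along by the no-argument ConnectionPool constructor, so they can be tuned through the public API. Below is a minimal sketch (not taken from the source walkthrough; the class name PoolConfigExample is made up for illustration) of installing a custom pool on a client:
import java.util.concurrent.TimeUnit;
import okhttp3.ConnectionPool;
import okhttp3.OkHttpClient;

public class PoolConfigExample {
    public static void main(String[] args) {
        // maxIdleConnections and keepAliveDuration are the two knobs described above;
        // 5 connections / 5 minutes mirrors OkHttp's defaults.
        ConnectionPool pool = new ConnectionPool(5, 5, TimeUnit.MINUTES);
        OkHttpClient client = new OkHttpClient.Builder()
                .connectionPool(pool)
                .build();
        // Every call created from this client now reuses sockets from the same pool.
        System.out.println("idle=" + pool.idleConnectionCount()
                + ", total=" + pool.connectionCount());
    }
}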
ConnectionPool internally maintains a thread pool to clean up connections, as shown below:
public final class ConnectionPool {
/**
* Background threads are used to cleanup expired connections. There will be at most a single
* thread running per connection pool. The thread pool executor permits the pool itself to be
* garbage collected.
*/
private static final Executor executor = new ThreadPoolExecutor(0 /* corePoolSize */,
Integer.MAX_VALUE /* maximumPoolSize */, 60L /* keepAliveTime */, TimeUnit.SECONDS,
new SynchronousQueue<Runnable>(), Util.threadFactory("OkHttp ConnectionPool", true));
/** The maximum number of idle connections for each address. */
private final int maxIdleConnections;
private final long keepAliveDurationNs;
//Cleans up connections; runs on the executor thread pool.
private final Runnable cleanupRunnable = new Runnable() {
@Override public void run() {
while (true) {
//Perform cleanup and return how long to wait before the next cleanup.
long waitNanos = cleanup(System.nanoTime());
if (waitNanos == -1) return;
if (waitNanos > 0) {
long waitMillis = waitNanos / 1000000L;
waitNanos -= (waitMillis * 1000000L);
synchronized (ConnectionPool.this) {
try {
//wait() releases the lock until the next cleanup is due
ConnectionPool.this.wait(waitMillis, (int) waitNanos);
} catch (InterruptedException ignored) {
}
}
}
}
}
};
}
ConnectionPool maintains this thread pool internally, and the cleanup work is done by the cleanup() method. It is a blocking operation: it first performs a cleanup pass and returns the interval until the next cleanup is due, then calls wait() to release the lock for that long. When the time is up it cleans up again, returns the next interval, and so on in a loop. Let's look at the concrete implementation of cleanup():
//ConnectionPool.java
long cleanup(long now) {
int inUseConnectionCount = 0;
int idleConnectionCount = 0;
RealConnection longestIdleConnection = null;
long longestIdleDurationNs = Long.MIN_VALUE;
// Find either a connection to evict, or the time that the next eviction is due.
synchronized (this) {
//Iterate over all connections, marking the inactive ones.
for (Iterator<RealConnection> i = connections.iterator(); i.hasNext(); ) {
RealConnection connection = i.next();
//1. Query the number of StreamAllocation references held by this connection.
if (pruneAndGetAllocationCount(connection, now) > 0) {
inUseConnectionCount++;
continue;
}
idleConnectionCount++;
//2. Track the longest-idle connection.
long idleDurationNs = now - connection.idleAtNanos;
if (idleDurationNs > longestIdleDurationNs) {
longestIdleDurationNs = idleDurationNs;
longestIdleConnection = connection;
}
}
if (longestIdleDurationNs >= this.keepAliveDurationNs
|| idleConnectionCount > this.maxIdleConnections) {
//3. If the idle time exceeds the keep-alive limit (5 minutes) or there are more than 5 idle connections, evict this connection.
connections.remove(longestIdleConnection);
} else if (idleConnectionCount > 0) {
//4. Otherwise return the time left until this connection expires, for the next cleanup pass.
return keepAliveDurationNs - longestIdleDurationNs;
} else if (inUseConnectionCount > 0) {
//5. All connections are in use; clean up again after the full keep-alive period (5 minutes).
return keepAliveDurationNs;
} else {
//6. There are no connections at all; exit the loop.
cleanupRunning = false;
return -1;
}
}
//7. Close the evicted connection and return 0 so cleanup runs again immediately.
closeQuietly(longestIdleConnection.socket());
// Cleanup again immediately.
return 0;
}
The flow of this method is:
- Query the number of StreamAllocation references held by each connection.
- Track the idle connections (in particular the longest-idle one).
- If the idle time exceeds the keep-alive limit (5 minutes) or there are more than 5 idle connections, evict that connection.
- Otherwise return the time left until that connection expires, for the next cleanup pass.
- If all connections are in use, clean up again after the full keep-alive period (5 minutes).
- If there are no connections at all, exit the loop.
- After evicting a connection, close it and return 0 so that cleanup runs again immediately.
RealConnection keeps a list of weak references to StreamAllocation. Every time a StreamAllocation is created it is added to this list, and when its stream is closed it is removed. This reference-counting scheme is exactly how a connection is judged to be idle:
//Current streams carried by this connection.
public final List<Reference<StreamAllocation>> allocations = new ArrayList<>();
The reference count is computed by pruneAndGetAllocationCount(), implemented as follows:
//ConnectionPool.java
private int pruneAndGetAllocationCount(RealConnection connection, long now) {
//The weak-reference list
List<Reference<StreamAllocation>> references = connection.allocations;
//Iterate over the weak-reference list
for (int i = 0; i < references.size(); ) {
Reference<StreamAllocation> reference = references.get(i);
//If this StreamAllocation is still in use, count it and move on to the next one
if (reference.get() != null) {
//Count the reference
i++;
continue;
}
// We've discovered a leaked allocation. This is an application bug.
StreamAllocation.StreamAllocationReference streamAllocRef =
(StreamAllocation.StreamAllocationReference) reference;
String message = "A connection to " + connection.route().address().url()
+ " was leaked. Did you forget to close a response body?";
Platform.get().logCloseableLeak(message, streamAllocRef.callStackTrace);
//Otherwise remove the leaked StreamAllocation reference
references.remove(i);
connection.noNewStreams = true;
// If no StreamAllocation references remain, return a reference count of 0
if (references.isEmpty()) {
connection.idleAtNanos = now - keepAliveDurationNs;
return 0;
}
}
//Return the size of the reference list as the reference count
return references.size();
}
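Cleanup normally runs on its own, but ConnectionPool also exposes evictAll() for explicit teardown. A small hedged usage sketch (the helper class below is hypothetical, not part of OkHttp):
import okhttp3.OkHttpClient;

public final class ClientShutdown {
    // Closes idle sockets immediately instead of waiting for cleanupRunnable to evict them.
    public static void shutdown(OkHttpClient client) {
        client.dispatcher().executorService().shutdown(); // stop accepting new async calls
        client.connectionPool().evictAll();                // close and remove all idle connections
    }
}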
2. Cache Mechanism
2.1 Cache Strategy
Before analyzing OkHttp's cache mechanism, let's review the HTTP caching basics; they are the foundation OkHttp's implementation is built on.
HTTP caching relies on header fields in the request and the response to decide whether the final response is served from the cache or fetched again from the server.
HTTP caching falls into two categories (forced caching takes precedence over comparison caching):
- Forced cache: the server does not need to be consulted to decide whether the cache can still be used. When the client first requests data, the server returns an expiry time (Expires or Cache-Control); as long as it has not expired, the cached copy is used directly without asking the server again.
- Comparison cache: the server does take part in the decision. When the client first requests data, the server returns cache validators (Last-Modified/If-Modified-Since or ETag/If-None-Match) along with the data, and the client stores both. On the next request the client sends the stored validators to the server, which uses them to decide; a 304 response tells the client it may keep using its cached copy.
1. The two headers used by the forced cache:
- Expires: the expiry time returned by the server; if the next request happens before that time, the cached data is used directly. Since the expiry time is generated by the server, clock skew between client and server can cause errors.
- Cache-Control: because Expires suffers from that clock problem, HTTP/1.1 uses Cache-Control instead.
Cache-Control can take the following values:
- private: only the client may cache.
- public: both the client and proxy servers may cache.
- max-age=xxx: the cached content expires after xxx seconds.
- no-cache: the comparison cache must be used to validate the cached data.
- no-store: nothing is cached at all; neither forced caching nor comparison caching is triggered.
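On the request side, OkHttp models these directives with the CacheControl class. A brief illustrative sketch (the URL is a placeholder, not from the article) of attaching them to a Request:
import java.util.concurrent.TimeUnit;
import okhttp3.CacheControl;
import okhttp3.Request;

public class CacheControlExample {
    public static void main(String[] args) {
        // FORCE_NETWORK sends "Cache-Control: no-cache", skipping the local cache.
        Request networkOnly = new Request.Builder()
                .url("https://example.com/")
                .cacheControl(CacheControl.FORCE_NETWORK)
                .build();
        // FORCE_CACHE sends "only-if-cached" (plus a huge max-stale), never touching the network.
        Request cacheOnly = new Request.Builder()
                .url("https://example.com/")
                .cacheControl(CacheControl.FORCE_CACHE)
                .build();
        // Custom directives, e.g. accept cached responses up to 60 seconds old.
        Request custom = new Request.Builder()
                .url("https://example.com/")
                .cacheControl(new CacheControl.Builder().maxAge(60, TimeUnit.SECONDS).build())
                .build();
        System.out.println(custom.header("Cache-Control")); // "max-age=60"
    }
}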
2. Now the two validators used by the comparison cache:
- Last-Modified: the time the resource was last modified.
- If-Modified-Since: the time the client sends back on subsequent requests.
When the server receives the modification time sent by the client, it compares it with the resource's current modification time. If the resource has been modified since then, it returns 200, meaning the resource must be requested again; otherwise it returns 304, meaning the resource is unchanged and the cache can still be used.
Besides this timestamp approach, a resource identifier, ETag, can also mark whether a resource has changed: if the identifier changes, the resource has been modified. ETag takes priority over Last-Modified.
- ETag: the identifier the server returns with the first response.
- If-None-Match: the identifier the client sends back on subsequent requests.
When the server receives the identifier from the client, it compares it with the resource's current identifier. If they differ, the resource has changed and 200 is returned; if they match, the resource is unchanged, 304 is returned, and the client keeps using its cache.
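With a Cache installed on the client, OkHttp carries out this conditional request automatically, and the result can be observed on the Response. A hedged sketch (assuming a writable cache_dir directory and a placeholder URL; class name is ours):
import java.io.File;
import okhttp3.Cache;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;

public class ConditionalGetExample {
    public static void main(String[] args) throws Exception {
        OkHttpClient client = new OkHttpClient.Builder()
                .cache(new Cache(new File("cache_dir"), 10L * 1024 * 1024)) // 10 MiB disk cache
                .build();
        Request request = new Request.Builder().url("https://example.com/").build();

        try (Response first = client.newCall(request).execute()) {
            first.body().string();
            // First round trip: served by the network, nothing in the cache yet.
            System.out.println("network=" + first.networkResponse() + ", cache=" + first.cacheResponse());
        }
        try (Response second = client.newCall(request).execute()) {
            second.body().string();
            // If the server answered 304, networkResponse() has code 304 and the body
            // comes from cacheResponse(); if the entry was still fresh, networkResponse() is null.
            System.out.println("network=" + second.networkResponse() + ", cache=" + second.cacheResponse());
        }
    }
}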
That covers the HTTP caching theory. OkHttp's cache strategy follows this decision process and is implemented in CacheStrategy, whose constructor takes two parameters:
public final class CacheStrategy {
CacheStrategy(Request networkRequest, Response cacheResponse) {
this.networkRequest = networkRequest;
this.cacheResponse = cacheResponse;
}
}
These two parameters mean the following:
- networkRequest: the network request.
- cacheResponse: the cached response, backed by a DiskLruCache-based file cache; the key is the MD5 of the request URL and the value is the cached entry found in the files (more on this below).
CacheStrategy uses these two parameters to produce the final strategy. It works a bit like a map operation: networkRequest and cacheResponse go in, and after processing the same pair comes out. The possible combinations (consumed as shown in the sketch after this list) are:
- networkRequest null, cacheResponse null: only-if-cached (no network request is allowed and the cache is missing or expired, so a 504 error is guaranteed).
- networkRequest null, cacheResponse non-null: no network request is needed; the cache is usable and is returned directly.
- networkRequest non-null, cacheResponse null: the cache is missing or expired, so the network is accessed directly.
- networkRequest non-null, cacheResponse non-null: the headers contain ETag/Last-Modified validators, so a conditional request is needed; the network is still accessed.
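The consumer of this (networkRequest, cacheResponse) pair is CacheInterceptor. The sketch below is a heavily simplified illustration of that branching, with our own class and method names; the real interceptor also merges headers, updates the cache and strips bodies:
import java.io.IOException;
import okhttp3.Interceptor;
import okhttp3.Protocol;
import okhttp3.Request;
import okhttp3.Response;
import okhttp3.ResponseBody;

final class CacheBranchSketch {
    static Response serve(Interceptor.Chain chain, Request networkRequest,
            Response cacheResponse) throws IOException {
        if (networkRequest == null && cacheResponse == null) {
            // Network forbidden (only-if-cached) and nothing cached: synthesize a 504.
            return new Response.Builder()
                    .request(chain.request())
                    .protocol(Protocol.HTTP_1_1)
                    .code(504)
                    .message("Unsatisfiable Request (only-if-cached)")
                    .body(ResponseBody.create(null, new byte[0]))
                    .sentRequestAtMillis(-1L)
                    .receivedResponseAtMillis(System.currentTimeMillis())
                    .build();
        }
        if (networkRequest == null) {
            // Cache is fresh enough: answer without touching the network.
            return cacheResponse;
        }
        // Otherwise hit the network (possibly a conditional request); a 304 reply
        // means the cached body is still valid and will be reused.
        return chain.proceed(networkRequest);
    }
}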
So how are these four cases decided? Let's take a look.
CacheStrategy is built with a factory pattern. Once a CacheStrategy.Factory object has been constructed, calling its get() method returns the concrete CacheStrategy. Internally, CacheStrategy.Factory.get() calls CacheStrategy.Factory.getCandidate(), which is the core of the implementation, shown below:
//CacheStrategy.Factory.java
public static class Factory {
private CacheStrategy getCandidate() {
//1. If there is no cache hit, go straight to the network.
if (cacheResponse == null) {
return new CacheStrategy(request, null);
}
//2. If the TLS handshake information has been dropped, go to the network directly.
if (request.isHttps() && cacheResponse.handshake() == null) {
return new CacheStrategy(request, null);
}
//3. Based on the response status code, the expiry time and the cache directives, decide whether the response may be served from cache at all; if not, go to the network.
if (!isCacheable(cacheResponse, request)) {
return new CacheStrategy(request, null);
}
//4. If the request has a no-cache directive or already carries conditions (ETag/If-Modified-Since headers), go to the network directly.
CacheControl requestCaching = request.cacheControl();
if (requestCaching.noCache() || hasConditions(request)) {
return new CacheStrategy(request, null);
}
CacheControl responseCaching = cacheResponse.cacheControl();
//Compute the current age of the response: now - sent + age
long ageMillis = cacheResponseAge();
//Freshness lifetime, usually the server's max-age
long freshMillis = computeFreshnessLifetime();
if (requestCaching.maxAgeSeconds() != -1) {
//Usually capped at the request's max-age
freshMillis = Math.min(freshMillis, SECONDS.toMillis(requestCaching.maxAgeSeconds()));
}
long minFreshMillis = 0;
if (requestCaching.minFreshSeconds() != -1) {
//Usually 0
minFreshMillis = SECONDS.toMillis(requestCaching.minFreshSeconds());
}
long maxStaleMillis = 0;
if (!responseCaching.mustRevalidate() && requestCaching.maxStaleSeconds() != -1) {
maxStaleMillis = SECONDS.toMillis(requestCaching.maxStaleSeconds());
}
//5. If the cache is still within its freshness window, return the cached response directly.
if (!responseCaching.noCache() && ageMillis + minFreshMillis < freshMillis + maxStaleMillis) {
Response.Builder builder = cacheResponse.newBuilder();
if (ageMillis + minFreshMillis >= freshMillis) {
builder.addHeader("Warning", "110 HttpURLConnection \"Response is stale\"");
}
long oneDayMillis = 24 * 60 * 60 * 1000L;
if (ageMillis > oneDayMillis && isFreshnessLifetimeHeuristic()) {
builder.addHeader("Warning", "113 HttpURLConnection \"Heuristic expiration\"");
}
return new CacheStrategy(null, builder.build());
}
//6. If the cache has expired but validators such as ETag are present, send a conditional request
//(If-None-Match or If-Modified-Since) and let the server decide.
String conditionName;
String conditionValue;
if (etag != null) {
conditionName = "If-None-Match";
conditionValue = etag;
} else if (lastModified != null) {
conditionName = "If-Modified-Since";
conditionValue = lastModifiedString;
} else if (servedDate != null) {
conditionName = "If-Modified-Since";
conditionValue = servedDateString;
} else {
return new CacheStrategy(request, null); // No condition! Make a regular request.
}
Headers.Builder conditionalRequestHeaders = request.headers().newBuilder();
Internal.instance.addLenient(conditionalRequestHeaders, conditionName, conditionValue);
Request conditionalRequest = request.newBuilder()
.headers(conditionalRequestHeaders.build())
.build();
return new CacheStrategy(conditionalRequest, cacheResponse);
}
}
The logic of this method follows the HTTP cache decision process described above:
- If there is no cache hit, go straight to the network.
- If the TLS handshake information has been dropped, go to the network directly.
- Based on the response status code, the expiry time and the cache directives, decide whether to go straight to the network.
- If the request has a no-cache directive or already carries conditions (ETag/If-Modified-Since headers), go to the network directly.
- If the cache is still within its freshness window, return the cached response directly.
- If the cache has expired but validators such as ETag are present, send a conditional request (If-None-Match or If-Modified-Since) and let the server decide.
That is the whole flow. One more thing worth noting: OkHttp's caching is driven automatically by the server's response headers, and the entire process follows the RFC; the client does not need to control it manually.
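The only manual step on the client side is installing a Cache on the OkHttpClient (as in the conditional-GET sketch earlier). After that, Cache's public counters let you verify the automatic behavior. A small illustrative helper (our own name, assuming the client was built with .cache(...)):
import java.io.IOException;
import okhttp3.Cache;
import okhttp3.OkHttpClient;

public class CacheStatsExample {
    static void printStats(OkHttpClient client) throws IOException {
        Cache cache = client.cache(); // null if no cache was installed
        System.out.println("requests=" + cache.requestCount()   // all requests seen by the cache
                + ", network=" + cache.networkCount()           // requests that went to the network
                + ", hits=" + cache.hitCount()                  // requests answered from the cache
                + ", size=" + cache.size() + "/" + cache.maxSize() + " bytes");
    }
}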
With the cache strategy understood, let's look at how the cache is managed on disk.
2.2 Cache Management
In this section we analyze how OkHttp manages the cache, which is built on top of DiskLruCache. The Cache class encapsulates the cache implementation, and the InternalCache interface describes the operations it exposes:
public interface InternalCache {
//Read a cached response
Response get(Request request) throws IOException;
//Store a response in the cache
CacheRequest put(Response response) throws IOException;
//Remove a cached entry
void remove(Request request) throws IOException;
//Update a cached entry
void update(Response cached, Response network);
//Track a conditional GET that was satisfied by the cache
void trackConditionalCacheHit();
//Track how a response matched the cache strategy (CacheStrategy)
void trackResponse(CacheStrategy cacheStrategy);
}
Now let's look at the implementation.
Cache does not implement the InternalCache interface directly; instead it holds an anonymous InternalCache whose methods delegate to the corresponding methods of Cache, as shown below:
//Cache.java
final InternalCache internalCache = new InternalCache() {
@Override public Response get(Request request) throws IOException {
return Cache.this.get(request);
}
@Override public CacheRequest put(Response response) throws IOException {
return Cache.this.put(response);
}
@Override public void remove(Request request) throws IOException {
Cache.this.remove(request);
}
@Override public void update(Response cached, Response network) {
Cache.this.update(cached, network);
}
@Override public void trackConditionalCacheHit() {
Cache.this.trackConditionalCacheHit();
}
@Override public void trackResponse(CacheStrategy cacheStrategy) {
Cache.this.trackResponse(cacheStrategy);
}
};
The Cache class also defines some inner classes that encapsulate request and response information:
- Cache.Entry: encapsulates request/response data, including url, varyHeaders, protocol, code, message, responseHeaders, handshake, sentRequestMillis and receivedResponseMillis.
- Cache.CacheResponseBody: extends ResponseBody and wraps the cache snapshot, the response body source bodySource, the content type contentType and the content length contentLength.
Besides these two classes, OkHttp also provides a FileSystem class, which uses the Okio library to wrap Java's File operations and simplify I/O. With these in place, what remains are DiskLruCache's insert, read and delete operations.
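For reference, the key that the disk cache uses for each entry is derived from the request URL alone; in okhttp 3.12, Cache.key() computes the MD5 of the URL string in hex. A short sketch reproducing the same value with Okio (the URL is a placeholder):
import okhttp3.HttpUrl;
import okio.ByteString;

public class CacheKeyExample {
    public static void main(String[] args) {
        HttpUrl url = HttpUrl.parse("https://example.com/index.html");
        String key = ByteString.encodeUtf8(url.toString()).md5().hex();
        // On disk, "<key>.0" stores the entry metadata (headers) and "<key>.1" stores the body.
        System.out.println(key);
    }
}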
Bonus
That wraps up the core of OkHttp. It is a very well-designed library, and there is a lot worth learning from it.
As for reading the source of open-source libraries, diving straight into a complex codebase can be overwhelming. A few tips for newcomers:
- Search the internet first to get the overall picture.
- Compare different versions yourself to work out how things fit together.
- When you get stuck on a tricky detail, go back to the exact version you are studying and dig into it to deepen your understanding.
Acknowledgements
- Android开源框架源码鉴赏:Okhttp
- 官方推荐使用的OkHttp4网络请求库全面解析