Mirror of https://github.com/fatedier/frp.git (synced 2025-12-08 17:08:13 +00:00)

Compare commits

61 Commits

| SHA1 |
|---|
| ae08811636 |
| b657c0fe09 |
| 84df71047c |
| abc6d720d0 |
| 80154639e3 |
| f2117d8331 |
| 261be6a7b7 |
| b53a2c1ed9 |
| ee0df07a3c |
| 4e363eca2b |
| 4277405c0e |
| 6a99f0caf7 |
| 394af08561 |
| 6451583e60 |
| 30cb0a3ab0 |
| 5680a88267 |
| 6b089858db |
| b3ed863021 |
| 5796c27ed5 |
| 310e8dd768 |
| 0b40ac2dbc |
| f22c8e0882 |
| a388bb2c95 |
| e611c44dea |
| 8e36e2bb67 |
| 541ad8d899 |
| 17cc0735d1 |
| fd336a5503 |
| 802d1c1861 |
| 65fe0a1179 |
| 2d24879fa3 |
| 75383a95b3 |
| 95444ea46b |
| 9f9c01b520 |
| 285d1eba0d |
| 0dfd3a421c |
| 6a1f15b25e |
| 9f47c324b7 |
| f0df6084af |
| 879ca47590 |
| 6a7efc81c9 |
| 12c5c553c3 |
| 988e9b1de3 |
| db6bbc5187 |
| c67b4e7b94 |
| b7a73d3469 |
| 7f9d88c10a |
| 79237d2b94 |
| 9c4ec56491 |
| 74a8752570 |
| a8ab4c5003 |
| 9cee263c91 |
| c6bf6f59e6 |
| 4b7aef2196 |
| f6d0046b5a |
| 84363266d2 |
| 9ac8f2a047 |
| b2b55533b8 |
| a4cfab689a |
| c7df39074c |
| fdcdccb0c2 |
.github/ISSUE_TEMPLATE (vendored, 2 changes)

@@ -1,5 +1,7 @@
Issue is only used for submiting bug report and documents typo. If there are same issues or answers can be found in documents, we will close it directly.
(为了节约时间,提高处理问题的效率,不按照格式填写的 issue 将会直接关闭。)
(请不要在 issue 评论中出现无意义的 **加1**,**我也是** 等内容,将会被直接删除。)
(由于个人精力有限,和系统环境,网络环境等相关的求助问题请转至其他论坛或社交平台。)

Use the commands below to provide key information from your environment:
You do NOT have to include this information if this is a FEATURE REQUEST
@@ -2,7 +2,6 @@ sudo: false
language: go

go:
  - 1.11.x
  - 1.12.x

install:
README.md (62 changes)

@@ -22,6 +22,7 @@ Now it also try to support p2p connect.
* [Forward DNS query request](#forward-dns-query-request)
* [Forward unix domain socket](#forward-unix-domain-socket)
* [Expose a simple http file server](#expose-a-simple-http-file-server)
+ * [Enable HTTPS for local HTTP service](#enable-https-for-local-http-service)
* [Expose your service in security](#expose-your-service-in-security)
* [P2P Mode](#p2p-mode)
* [Features](#features)

@@ -44,6 +45,8 @@ Now it also try to support p2p connect.
* [Rewriting the Host Header](#rewriting-the-host-header)
* [Set Headers In HTTP Request](#set-headers-in-http-request)
* [Get Real IP](#get-real-ip)
+ * [HTTP X-Forwarded-For](#http-x-forwarded-for)
+ * [Proxy Protocol](#proxy-protocol)
* [Password protecting your web service](#password-protecting-your-web-service)
* [Custom subdomain names](#custom-subdomain-names)
* [URL routing](#url-routing)

@@ -243,11 +246,34 @@ Configure frps same as above.

2. Visit `http://x.x.x.x:6000/static/` by your browser, set correct user and password, so you can see files in `/tmp/file`.

+ ### Enable HTTPS for local HTTP service
+
+ 1. Start frpc with configurations:
+
+ ```ini
+ # frpc.ini
+ [common]
+ server_addr = x.x.x.x
+ server_port = 7000
+
+ [test_htts2http]
+ type = https
+ custom_domains = test.yourdomain.com
+
+ plugin = https2http
+ plugin_local_addr = 127.0.0.1:80
+ plugin_crt_path = ./server.crt
+ plugin_key_path = ./server.key
+ plugin_host_header_rewrite = 127.0.0.1
+ ```
+
+ 2. Visit `https://test.yourdomain.com`.
+
### Expose your service in security

For some services, if expose them to the public network directly will be a security risk.

- **stcp(secret tcp)** help you create a proxy avoiding any one can access it.
+ **stcp(secret tcp)** helps you create a proxy avoiding any one can access it.

Configure frps same as above.

@@ -484,8 +510,6 @@ tcp_mux = false

### Support KCP Protocol

- frp support kcp protocol since v0.12.0.
-
KCP is a fast and reliable protocol that can achieve the transmission effect of a reduction of the average latency by 30% to 40% and reduction of the maximum delay by a factor of three, at the cost of 10% to 20% more bandwidth wasted than TCP.

Using kcp in frp:

@@ -536,7 +560,8 @@ This feature is fit for a large number of short connections.
### Load balancing

Load balancing is supported by `group`.
- This feature is available only for type `tcp` now.
+ This feature is available only for type `tcp` and `http` now.

```ini
# frpc.ini

@@ -559,6 +584,10 @@ group_key = 123

Proxies in same group will accept connections from port 80 randomly.

+ For `tcp` type, `remote_port` in one group shoud be same.
+
+ For `http` type, `custom_domains, subdomain, locations` shoud be same.
+
### Health Check

Health check feature can help you achieve high availability with load balancing.

@@ -639,9 +668,32 @@ In this example, it will set header `X-From-Where: frp` to http request.

### Get Real IP

+ #### HTTP X-Forwarded-For
+
Features for http proxy only.

- You can get user's real IP from http request header `X-Forwarded-For` and `X-Real-IP`.
+ You can get user's real IP from HTTP request header `X-Forwarded-For` and `X-Real-IP`.

+ #### Proxy Protocol
+
+ frp support Proxy Protocol to send user's real IP to local service. It support all types without UDP.
+
+ Here is an example for https service:
+
+ ```ini
+ # frpc.ini
+ [web]
+ type = https
+ local_port = 443
+ custom_domains = test.yourdomain.com
+
+ # now v1 and v2 is supported
+ proxy_protocol_version = v2
+ ```
+
+ You can enable Proxy Protocol support in nginx to parse user's real IP to http header `X-Real-IP`.
+
+ Then you can get it from HTTP request header in your local service.

### Password protecting your web service
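The Proxy Protocol section above only covers the frpc side. As a rough illustration of what the local service receives when `proxy_protocol_version = v1` is set, here is a minimal, stdlib-only Go sketch (not part of this change set; addresses and ports are placeholders) that reads the text header line before handling the rest of the connection. The binary v2 format needs a real parser such as the go-proxyproto library frp itself uses.

```go
package main

import (
	"bufio"
	"fmt"
	"net"
	"strings"
)

// handleConn expects a PROXY protocol v1 line such as
// "PROXY TCP4 198.51.100.7 10.0.0.2 51000 443\r\n" as the first line,
// logs the advertised client address, then continues with the payload.
func handleConn(c net.Conn) {
	defer c.Close()
	r := bufio.NewReader(c)

	line, err := r.ReadString('\n')
	if err != nil {
		return
	}
	fields := strings.Fields(strings.TrimSpace(line))
	if len(fields) == 6 && fields[0] == "PROXY" {
		fmt.Printf("real client: %s:%s (work conn from %s)\n", fields[2], fields[4], c.RemoteAddr())
	}
	// ... read the application protocol from r from here on ...
}

func main() {
	ln, err := net.Listen("tcp", "127.0.0.1:8080")
	if err != nil {
		panic(err)
	}
	for {
		c, err := ln.Accept()
		if err != nil {
			return
		}
		go handleConn(c)
	}
}
```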
72
README_zh.md
72
README_zh.md
@@ -16,8 +16,9 @@ frp 是一个可用于内网穿透的高性能的反向代理应用,支持 tcp
|
||||
* [通过 ssh 访问公司内网机器](#通过-ssh-访问公司内网机器)
|
||||
* [通过自定义域名访问部署于内网的 web 服务](#通过自定义域名访问部署于内网的-web-服务)
|
||||
* [转发 DNS 查询请求](#转发-dns-查询请求)
|
||||
* [转发 Unix域套接字](#转发-unix域套接字)
|
||||
* [转发 Unix 域套接字](#转发-unix-域套接字)
|
||||
* [对外提供简单的文件访问服务](#对外提供简单的文件访问服务)
|
||||
* [为本地 HTTP 服务启用 HTTPS](#为本地-http-服务启用-https)
|
||||
* [安全地暴露内网服务](#安全地暴露内网服务)
|
||||
* [点对点内网穿透](#点对点内网穿透)
|
||||
* [功能说明](#功能说明)
|
||||
@@ -40,6 +41,8 @@ frp 是一个可用于内网穿透的高性能的反向代理应用,支持 tcp
|
||||
* [修改 Host Header](#修改-host-header)
|
||||
* [设置 HTTP 请求的 header](#设置-http-请求的-header)
|
||||
* [获取用户真实 IP](#获取用户真实-ip)
|
||||
* [HTTP X-Forwarded-For](#http-x-forwarded-for)
|
||||
* [Proxy Protocol](#proxy-protocol)
|
||||
* [通过密码保护你的 web 服务](#通过密码保护你的-web-服务)
|
||||
* [自定义二级域名](#自定义二级域名)
|
||||
* [URL 路由](#url-路由)
|
||||
@@ -191,7 +194,7 @@ DNS 查询请求通常使用 UDP 协议,frp 支持对内网 UDP 服务的穿
|
||||
|
||||
`dig @x.x.x.x -p 6000 www.google.com`
|
||||
|
||||
### 转发 Unix域套接字
|
||||
### 转发 Unix 域套接字
|
||||
|
||||
通过 tcp 端口访问内网的 unix域套接字(例如和 docker daemon 通信)。
|
||||
|
||||
@@ -244,6 +247,33 @@ frps 的部署步骤同上。
|
||||
|
||||
2. 通过浏览器访问 `http://x.x.x.x:6000/static/` 来查看位于 `/tmp/file` 目录下的文件,会要求输入已设置好的用户名和密码。
|
||||
|
||||
### 为本地 HTTP 服务启用 HTTPS
|
||||
|
||||
通过 `https2http` 插件可以让本地 HTTP 服务转换成 HTTPS 服务对外提供。
|
||||
|
||||
1. 启用 frpc,启用 `https2http` 插件,配置如下:
|
||||
|
||||
```ini
|
||||
# frpc.ini
|
||||
[common]
|
||||
server_addr = x.x.x.x
|
||||
server_port = 7000
|
||||
|
||||
[test_htts2http]
|
||||
type = https
|
||||
custom_domains = test.yourdomain.com
|
||||
|
||||
plugin = https2http
|
||||
plugin_local_addr = 127.0.0.1:80
|
||||
|
||||
# HTTPS 证书相关的配置
|
||||
plugin_crt_path = ./server.crt
|
||||
plugin_key_path = ./server.key
|
||||
plugin_host_header_rewrite = 127.0.0.1
|
||||
```
|
||||
|
||||
2. 通过浏览器访问 `https://test.yourdomain.com` 即可。
|
||||
|
||||
### 安全地暴露内网服务
|
||||
|
||||
对于某些服务来说如果直接暴露于公网上将会存在安全隐患。
|
||||
@@ -514,7 +544,7 @@ tcp_mux = false
|
||||
|
||||
### 底层通信可选 kcp 协议
|
||||
|
||||
从 v0.12.0 版本开始,底层通信协议支持选择 kcp 协议,在弱网环境下传输效率提升明显,但是会有一些额外的流量消耗。
|
||||
底层通信协议支持选择 kcp 协议,在弱网环境下传输效率提升明显,但是会有一些额外的流量消耗。
|
||||
|
||||
开启 kcp 协议支持:
|
||||
|
||||
@@ -566,7 +596,8 @@ tcp_mux = false
|
||||
### 负载均衡
|
||||
|
||||
可以将多个相同类型的 proxy 加入到同一个 group 中,从而实现负载均衡的功能。
|
||||
目前只支持 tcp 类型的 proxy。
|
||||
|
||||
目前只支持 TCP 和 HTTP 类型的 proxy。
|
||||
|
||||
```ini
|
||||
# frpc.ini
|
||||
@@ -587,7 +618,9 @@ group_key = 123
|
||||
|
||||
用户连接 frps 服务器的 80 端口,frps 会将接收到的用户连接随机分发给其中一个存活的 proxy。这样可以在一台 frpc 机器挂掉后仍然有其他节点能够提供服务。
|
||||
|
||||
要求 `group_key` 相同,做权限验证,且 `remote_port` 相同。
|
||||
TCP 类型代理要求 `group_key` 相同,做权限验证,且 `remote_port` 相同。
|
||||
|
||||
HTTP 类型代理要求 `group_key, custom_domains 或 subdomain 和 locations` 相同。
|
||||
|
||||
### 健康检查
|
||||
|
||||
@@ -668,7 +701,34 @@ header_X-From-Where = frp
|
||||
|
||||
### 获取用户真实 IP
|
||||
|
||||
目前只有 **http** 类型的代理支持这一功能,可以通过用户请求的 header 中的 `X-Forwarded-For` 和 `X-Real-IP` 来获取用户真实 IP。
|
||||
#### HTTP X-Forwarded-For
|
||||
|
||||
目前只有 **http** 类型的代理支持这一功能,可以通过用户请求的 header 中的 `X-Forwarded-For` 来获取用户真实 IP,默认启用。
|
||||
|
||||
#### Proxy Protocol
|
||||
|
||||
frp 支持通过 **Proxy Protocol** 协议来传递经过 frp 代理的请求的真实 IP,此功能支持所有以 TCP 为底层协议的类型,不支持 UDP。
|
||||
|
||||
**Proxy Protocol** 功能启用后,frpc 在和本地服务建立连接后,会先发送一段 **Proxy Protocol** 的协议内容给本地服务,本地服务通过解析这一内容可以获得访问用户的真实 IP。所以不仅仅是 HTTP 服务,任何的 TCP 服务,只要支持这一协议,都可以获得用户的真实 IP 地址。
|
||||
|
||||
需要注意的是,在代理配置中如果要启用此功能,需要本地的服务能够支持 **Proxy Protocol** 这一协议,目前 nginx 和 haproxy 都能够很好的支持。
|
||||
|
||||
这里以 https 类型为例:
|
||||
|
||||
```ini
|
||||
# frpc.ini
|
||||
[web]
|
||||
type = https
|
||||
local_port = 443
|
||||
custom_domains = test.yourdomain.com
|
||||
|
||||
# 目前支持 v1 和 v2 两个版本的 proxy protocol 协议。
|
||||
proxy_protocol_version = v2
|
||||
```
|
||||
|
||||
只需要在代理配置中增加一行 `proxy_protocol_version = v2` 即可开启此功能。
|
||||
|
||||
本地的 https 服务可以通过在 nginx 的配置中启用 **Proxy Protocol** 的解析并将结果设置在 `X-Real-IP` 这个 Header 中就可以在自己的 Web 服务中通过 `X-Real-IP` 获取到用户的真实 IP。
|
||||
|
||||
### 通过密码保护你的 web 服务
|
||||
|
||||
|
||||
@@ -131,7 +131,7 @@ func (ctl *Control) HandleReqWorkConn(inMsg *msg.ReqWorkConn) {
	workConn.AddLogPrefix(startMsg.ProxyName)

	// dispatch this work connection to related proxy
-	ctl.pm.HandleWorkConn(startMsg.ProxyName, workConn)
+	ctl.pm.HandleWorkConn(startMsg.ProxyName, workConn, &startMsg)
}

func (ctl *Control) HandleNewProxyResp(inMsg *msg.NewProxyResp) {

@@ -148,6 +148,9 @@ func (ctl *Control) HandleNewProxyResp(inMsg *msg.NewProxyResp) {
func (ctl *Control) Close() error {
	ctl.pm.Close()
	ctl.conn.Close()
+	if ctl.session != nil {
+		ctl.session.Close()
+	}
	return nil
}

@@ -202,6 +205,7 @@ func (ctl *Control) reader() {
			return
		} else {
			ctl.Warn("read error: %v", err)
+			ctl.conn.Close()
			return
		}
	} else {

@@ -300,6 +304,9 @@ func (ctl *Control) worker() {
		ctl.vm.Close()

		close(ctl.closedDoneCh)
+		if ctl.session != nil {
+			ctl.session.Close()
+		}
		return
	}
}
@@ -18,6 +18,8 @@ import (
	"context"
	"errors"
	"fmt"
+	"io"
+	"io/ioutil"
	"net"
	"net/http"
	"time"

@@ -170,6 +172,8 @@ func (monitor *HealthCheckMonitor) doHttpCheck(ctx context.Context) error {
	if err != nil {
		return err
	}
+	defer resp.Body.Close()
+	io.Copy(ioutil.Discard, resp.Body)

	if resp.StatusCode/100 != 2 {
		return fmt.Errorf("do http health check, StatusCode is [%d] not 2xx", resp.StatusCode)
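The two added lines above drain and close the response body so the health-check connection can be reused. A standalone sketch of the same check pattern, using only the standard library (URL, timeout, and function names here are illustrative, not frp's API):

```go
package main

import (
	"context"
	"fmt"
	"io"
	"io/ioutil"
	"net/http"
	"time"
)

// httpHealthCheck issues a GET and treats any 2xx status as healthy.
// Draining and closing the body lets the underlying connection be reused.
func httpHealthCheck(ctx context.Context, url string) error {
	ctx, cancel := context.WithTimeout(ctx, 3*time.Second)
	defer cancel()

	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return err
	}
	resp, err := http.DefaultClient.Do(req.WithContext(ctx))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	io.Copy(ioutil.Discard, resp.Body)

	if resp.StatusCode/100 != 2 {
		return fmt.Errorf("health check: status [%d] is not 2xx", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := httpHealthCheck(context.Background(), "http://127.0.0.1:8080/healthz"); err != nil {
		fmt.Println("unhealthy:", err)
	}
}
```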
@@ -37,6 +37,7 @@ import (
	frpIo "github.com/fatedier/golib/io"
	"github.com/fatedier/golib/pool"
	fmux "github.com/hashicorp/yamux"
+	pp "github.com/pires/go-proxyproto"
)

// Proxy defines how to handle work connections for different proxy type.

@@ -44,7 +45,7 @@ type Proxy interface {
	Run() error

	// InWorkConn accept work connections registered to server.
-	InWorkConn(conn frpNet.Conn)
+	InWorkConn(frpNet.Conn, *msg.StartWorkConn)

	Close()
	log.Logger

@@ -119,9 +120,9 @@ func (pxy *TcpProxy) Close() {
	}
}

-func (pxy *TcpProxy) InWorkConn(conn frpNet.Conn) {
+func (pxy *TcpProxy) InWorkConn(conn frpNet.Conn, m *msg.StartWorkConn) {
	HandleTcpWorkConnection(&pxy.cfg.LocalSvrConf, pxy.proxyPlugin, &pxy.cfg.BaseProxyConf, conn,
-		[]byte(g.GlbClientCfg.Token))
+		[]byte(g.GlbClientCfg.Token), m)
}

// HTTP

@@ -148,9 +149,9 @@ func (pxy *HttpProxy) Close() {
	}
}

-func (pxy *HttpProxy) InWorkConn(conn frpNet.Conn) {
+func (pxy *HttpProxy) InWorkConn(conn frpNet.Conn, m *msg.StartWorkConn) {
	HandleTcpWorkConnection(&pxy.cfg.LocalSvrConf, pxy.proxyPlugin, &pxy.cfg.BaseProxyConf, conn,
-		[]byte(g.GlbClientCfg.Token))
+		[]byte(g.GlbClientCfg.Token), m)
}

// HTTPS

@@ -177,9 +178,9 @@ func (pxy *HttpsProxy) Close() {
	}
}

-func (pxy *HttpsProxy) InWorkConn(conn frpNet.Conn) {
+func (pxy *HttpsProxy) InWorkConn(conn frpNet.Conn, m *msg.StartWorkConn) {
	HandleTcpWorkConnection(&pxy.cfg.LocalSvrConf, pxy.proxyPlugin, &pxy.cfg.BaseProxyConf, conn,
-		[]byte(g.GlbClientCfg.Token))
+		[]byte(g.GlbClientCfg.Token), m)
}

// STCP

@@ -206,9 +207,9 @@ func (pxy *StcpProxy) Close() {
	}
}

-func (pxy *StcpProxy) InWorkConn(conn frpNet.Conn) {
+func (pxy *StcpProxy) InWorkConn(conn frpNet.Conn, m *msg.StartWorkConn) {
	HandleTcpWorkConnection(&pxy.cfg.LocalSvrConf, pxy.proxyPlugin, &pxy.cfg.BaseProxyConf, conn,
-		[]byte(g.GlbClientCfg.Token))
+		[]byte(g.GlbClientCfg.Token), m)
}

// XTCP

@@ -235,7 +236,7 @@ func (pxy *XtcpProxy) Close() {
	}
}

-func (pxy *XtcpProxy) InWorkConn(conn frpNet.Conn) {
+func (pxy *XtcpProxy) InWorkConn(conn frpNet.Conn, m *msg.StartWorkConn) {
	defer conn.Close()
	var natHoleSidMsg msg.NatHoleSid
	err := msg.ReadMsgInto(conn, &natHoleSidMsg)

@@ -353,7 +354,7 @@ func (pxy *XtcpProxy) InWorkConn(conn frpNet.Conn) {
	}

	HandleTcpWorkConnection(&pxy.cfg.LocalSvrConf, pxy.proxyPlugin, &pxy.cfg.BaseProxyConf,
-		frpNet.WrapConn(muxConn), []byte(pxy.cfg.Sk))
+		frpNet.WrapConn(muxConn), []byte(pxy.cfg.Sk), m)
}

func (pxy *XtcpProxy) sendDetectMsg(addr string, port int, laddr *net.UDPAddr, content []byte) (err error) {

@@ -415,7 +416,7 @@ func (pxy *UdpProxy) Close() {
	}
}

-func (pxy *UdpProxy) InWorkConn(conn frpNet.Conn) {
+func (pxy *UdpProxy) InWorkConn(conn frpNet.Conn, m *msg.StartWorkConn) {
	pxy.Info("incoming a new work connection for udp proxy, %s", conn.RemoteAddr().String())
	// close resources releated with old workConn
	pxy.Close()

@@ -482,7 +483,7 @@ func (pxy *UdpProxy) InWorkConn(conn frpNet.Conn) {

// Common handler for tcp work connections.
func HandleTcpWorkConnection(localInfo *config.LocalSvrConf, proxyPlugin plugin.Plugin,
-	baseInfo *config.BaseProxyConf, workConn frpNet.Conn, encKey []byte) {
+	baseInfo *config.BaseProxyConf, workConn frpNet.Conn, encKey []byte, m *msg.StartWorkConn) {

	var (
		remote io.ReadWriteCloser

@@ -502,10 +503,43 @@ func HandleTcpWorkConnection(localInfo *config.LocalSvrConf, proxyPlugin plugin.
		remote = frpIo.WithCompression(remote)
	}

+	// check if we need to send proxy protocol info
+	var extraInfo []byte
+	if baseInfo.ProxyProtocolVersion != "" {
+		if m.SrcAddr != "" && m.SrcPort != 0 {
+			if m.DstAddr == "" {
+				m.DstAddr = "127.0.0.1"
+			}
+			h := &pp.Header{
+				Command:            pp.PROXY,
+				SourceAddress:      net.ParseIP(m.SrcAddr),
+				SourcePort:         m.SrcPort,
+				DestinationAddress: net.ParseIP(m.DstAddr),
+				DestinationPort:    m.DstPort,
+			}
+
+			if h.SourceAddress.To16() == nil {
+				h.TransportProtocol = pp.TCPv4
+			} else {
+				h.TransportProtocol = pp.TCPv6
+			}
+
+			if baseInfo.ProxyProtocolVersion == "v1" {
+				h.Version = 1
+			} else if baseInfo.ProxyProtocolVersion == "v2" {
+				h.Version = 2
+			}
+
+			buf := bytes.NewBuffer(nil)
+			h.WriteTo(buf)
+			extraInfo = buf.Bytes()
+		}
+	}
+
	if proxyPlugin != nil {
		// if plugin is set, let plugin handle connections first
		workConn.Debug("handle by plugin: %s", proxyPlugin.Name())
-		proxyPlugin.Handle(remote, workConn)
+		proxyPlugin.Handle(remote, workConn, extraInfo)
		workConn.Debug("handle by plugin finished")
		return
	} else {

@@ -518,6 +552,11 @@ func HandleTcpWorkConnection(localInfo *config.LocalSvrConf, proxyPlugin plugin.

	workConn.Debug("join connections, localConn(l[%s] r[%s]) workConn(l[%s] r[%s])", localConn.LocalAddr().String(),
		localConn.RemoteAddr().String(), workConn.LocalAddr().String(), workConn.RemoteAddr().String())

+	if len(extraInfo) > 0 {
+		localConn.Write(extraInfo)
+	}
+
	frpIo.Join(localConn, remote)
	workConn.Debug("join connections closed")
}
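For a concrete sense of what `extraInfo` contains, here is a small standalone program that uses the same go-proxyproto calls as the diff above (the library version is whatever go.mod below pins; the addresses and ports are placeholders). With version 1 the buffer is a single text line; with version 2 it is a binary header.

```go
package main

import (
	"bytes"
	"fmt"
	"net"

	pp "github.com/pires/go-proxyproto"
)

func main() {
	// Build the same header frpc would derive from msg.StartWorkConn.
	h := &pp.Header{
		Version:            1,
		Command:            pp.PROXY,
		TransportProtocol:  pp.TCPv4,
		SourceAddress:      net.ParseIP("198.51.100.7"),
		SourcePort:         51000,
		DestinationAddress: net.ParseIP("127.0.0.1"),
		DestinationPort:    80,
	}

	buf := bytes.NewBuffer(nil)
	h.WriteTo(buf)

	// With Version 1 this is roughly:
	// "PROXY TCP4 198.51.100.7 127.0.0.1 51000 80\r\n"
	fmt.Printf("%q\n", buf.String())
}
```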
@@ -58,12 +58,12 @@ func (pm *ProxyManager) Close() {
	pm.proxies = make(map[string]*ProxyWrapper)
}

-func (pm *ProxyManager) HandleWorkConn(name string, workConn frpNet.Conn) {
+func (pm *ProxyManager) HandleWorkConn(name string, workConn frpNet.Conn, m *msg.StartWorkConn) {
	pm.mu.RLock()
	pw, ok := pm.proxies[name]
	pm.mu.RUnlock()
	if ok {
-		pw.InWorkConn(workConn)
+		pw.InWorkConn(workConn, m)
	} else {
		workConn.Close()
	}

@@ -217,13 +217,13 @@ func (pw *ProxyWrapper) statusFailedCallback() {
	pw.Info("health check failed")
}

-func (pw *ProxyWrapper) InWorkConn(workConn frpNet.Conn) {
+func (pw *ProxyWrapper) InWorkConn(workConn frpNet.Conn, m *msg.StartWorkConn) {
	pw.mu.RLock()
	pxy := pw.pxy
	pw.mu.RUnlock()
	if pxy != nil {
		workConn.Debug("start a new work connection, localAddr: %s remoteAddr: %s", workConn.LocalAddr().String(), workConn.RemoteAddr().String())
-		go pxy.InWorkConn(workConn)
+		go pxy.InWorkConn(workConn, m)
	} else {
		workConn.Close()
	}
@@ -167,6 +167,9 @@ func (svr *Service) login() (conn frpNet.Conn, session *fmux.Session, err error)
	defer func() {
		if err != nil {
			conn.Close()
+			if session != nil {
+				session.Close()
+			}
		}
	}()
@@ -76,17 +76,16 @@ func reload() error {
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
-	} else {
-		if resp.StatusCode == 200 {
-			return nil
-		}
-
-		defer resp.Body.Close()
-		body, err := ioutil.ReadAll(resp.Body)
-		if err != nil {
-			return err
-		}
-		return fmt.Errorf("code [%d], %s", resp.StatusCode, strings.TrimSpace(string(body)))
	}
-	return nil
+	defer resp.Body.Close()
+
+	if resp.StatusCode == 200 {
+		return nil
+	}
+
+	body, err := ioutil.ReadAll(resp.Body)
+	if err != nil {
+		return err
+	}
+	return fmt.Errorf("code [%d], %s", resp.StatusCode, strings.TrimSpace(string(body)))
}
@@ -73,7 +73,7 @@ var (
)

func init() {
-	rootCmd.PersistentFlags().StringVarP(&cfgFile, "", "c", "./frpc.ini", "config file of frpc")
+	rootCmd.PersistentFlags().StringVarP(&cfgFile, "config", "c", "./frpc.ini", "config file of frpc")
	rootCmd.PersistentFlags().BoolVarP(&showVersion, "version", "v", false, "version of frpc")

	kcpDoneCh = make(chan struct{})
@@ -78,76 +78,78 @@ func status() error {
|
||||
resp, err := http.DefaultClient.Do(req)
|
||||
if err != nil {
|
||||
return err
|
||||
} else {
|
||||
if resp.StatusCode != 200 {
|
||||
return fmt.Errorf("admin api status code [%d]", resp.StatusCode)
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
body, err := ioutil.ReadAll(resp.Body)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
res := &client.StatusResp{}
|
||||
err = json.Unmarshal(body, &res)
|
||||
if err != nil {
|
||||
return fmt.Errorf("unmarshal http response error: %s", strings.TrimSpace(string(body)))
|
||||
}
|
||||
|
||||
fmt.Println("Proxy Status...")
|
||||
if len(res.Tcp) > 0 {
|
||||
fmt.Printf("TCP")
|
||||
tbl := table.New("Name", "Status", "LocalAddr", "Plugin", "RemoteAddr", "Error")
|
||||
for _, ps := range res.Tcp {
|
||||
tbl.AddRow(ps.Name, ps.Status, ps.LocalAddr, ps.Plugin, ps.RemoteAddr, ps.Err)
|
||||
}
|
||||
tbl.Print()
|
||||
fmt.Println("")
|
||||
}
|
||||
if len(res.Udp) > 0 {
|
||||
fmt.Printf("UDP")
|
||||
tbl := table.New("Name", "Status", "LocalAddr", "Plugin", "RemoteAddr", "Error")
|
||||
for _, ps := range res.Udp {
|
||||
tbl.AddRow(ps.Name, ps.Status, ps.LocalAddr, ps.Plugin, ps.RemoteAddr, ps.Err)
|
||||
}
|
||||
tbl.Print()
|
||||
fmt.Println("")
|
||||
}
|
||||
if len(res.Http) > 0 {
|
||||
fmt.Printf("HTTP")
|
||||
tbl := table.New("Name", "Status", "LocalAddr", "Plugin", "RemoteAddr", "Error")
|
||||
for _, ps := range res.Http {
|
||||
tbl.AddRow(ps.Name, ps.Status, ps.LocalAddr, ps.Plugin, ps.RemoteAddr, ps.Err)
|
||||
}
|
||||
tbl.Print()
|
||||
fmt.Println("")
|
||||
}
|
||||
if len(res.Https) > 0 {
|
||||
fmt.Printf("HTTPS")
|
||||
tbl := table.New("Name", "Status", "LocalAddr", "Plugin", "RemoteAddr", "Error")
|
||||
for _, ps := range res.Https {
|
||||
tbl.AddRow(ps.Name, ps.Status, ps.LocalAddr, ps.Plugin, ps.RemoteAddr, ps.Err)
|
||||
}
|
||||
tbl.Print()
|
||||
fmt.Println("")
|
||||
}
|
||||
if len(res.Stcp) > 0 {
|
||||
fmt.Printf("STCP")
|
||||
tbl := table.New("Name", "Status", "LocalAddr", "Plugin", "RemoteAddr", "Error")
|
||||
for _, ps := range res.Stcp {
|
||||
tbl.AddRow(ps.Name, ps.Status, ps.LocalAddr, ps.Plugin, ps.RemoteAddr, ps.Err)
|
||||
}
|
||||
tbl.Print()
|
||||
fmt.Println("")
|
||||
}
|
||||
if len(res.Xtcp) > 0 {
|
||||
fmt.Printf("XTCP")
|
||||
tbl := table.New("Name", "Status", "LocalAddr", "Plugin", "RemoteAddr", "Error")
|
||||
for _, ps := range res.Xtcp {
|
||||
tbl.AddRow(ps.Name, ps.Status, ps.LocalAddr, ps.Plugin, ps.RemoteAddr, ps.Err)
|
||||
}
|
||||
tbl.Print()
|
||||
fmt.Println("")
|
||||
}
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
|
||||
if resp.StatusCode != 200 {
|
||||
return fmt.Errorf("admin api status code [%d]", resp.StatusCode)
|
||||
}
|
||||
|
||||
body, err := ioutil.ReadAll(resp.Body)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
res := &client.StatusResp{}
|
||||
err = json.Unmarshal(body, &res)
|
||||
if err != nil {
|
||||
return fmt.Errorf("unmarshal http response error: %s", strings.TrimSpace(string(body)))
|
||||
}
|
||||
|
||||
fmt.Println("Proxy Status...")
|
||||
if len(res.Tcp) > 0 {
|
||||
fmt.Printf("TCP")
|
||||
tbl := table.New("Name", "Status", "LocalAddr", "Plugin", "RemoteAddr", "Error")
|
||||
for _, ps := range res.Tcp {
|
||||
tbl.AddRow(ps.Name, ps.Status, ps.LocalAddr, ps.Plugin, ps.RemoteAddr, ps.Err)
|
||||
}
|
||||
tbl.Print()
|
||||
fmt.Println("")
|
||||
}
|
||||
if len(res.Udp) > 0 {
|
||||
fmt.Printf("UDP")
|
||||
tbl := table.New("Name", "Status", "LocalAddr", "Plugin", "RemoteAddr", "Error")
|
||||
for _, ps := range res.Udp {
|
||||
tbl.AddRow(ps.Name, ps.Status, ps.LocalAddr, ps.Plugin, ps.RemoteAddr, ps.Err)
|
||||
}
|
||||
tbl.Print()
|
||||
fmt.Println("")
|
||||
}
|
||||
if len(res.Http) > 0 {
|
||||
fmt.Printf("HTTP")
|
||||
tbl := table.New("Name", "Status", "LocalAddr", "Plugin", "RemoteAddr", "Error")
|
||||
for _, ps := range res.Http {
|
||||
tbl.AddRow(ps.Name, ps.Status, ps.LocalAddr, ps.Plugin, ps.RemoteAddr, ps.Err)
|
||||
}
|
||||
tbl.Print()
|
||||
fmt.Println("")
|
||||
}
|
||||
if len(res.Https) > 0 {
|
||||
fmt.Printf("HTTPS")
|
||||
tbl := table.New("Name", "Status", "LocalAddr", "Plugin", "RemoteAddr", "Error")
|
||||
for _, ps := range res.Https {
|
||||
tbl.AddRow(ps.Name, ps.Status, ps.LocalAddr, ps.Plugin, ps.RemoteAddr, ps.Err)
|
||||
}
|
||||
tbl.Print()
|
||||
fmt.Println("")
|
||||
}
|
||||
if len(res.Stcp) > 0 {
|
||||
fmt.Printf("STCP")
|
||||
tbl := table.New("Name", "Status", "LocalAddr", "Plugin", "RemoteAddr", "Error")
|
||||
for _, ps := range res.Stcp {
|
||||
tbl.AddRow(ps.Name, ps.Status, ps.LocalAddr, ps.Plugin, ps.RemoteAddr, ps.Err)
|
||||
}
|
||||
tbl.Print()
|
||||
fmt.Println("")
|
||||
}
|
||||
if len(res.Xtcp) > 0 {
|
||||
fmt.Printf("XTCP")
|
||||
tbl := table.New("Name", "Status", "LocalAddr", "Plugin", "RemoteAddr", "Error")
|
||||
for _, ps := range res.Xtcp {
|
||||
tbl.AddRow(ps.Name, ps.Status, ps.LocalAddr, ps.Plugin, ps.RemoteAddr, ps.Err)
|
||||
}
|
||||
tbl.Print()
|
||||
fmt.Println("")
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
@@ -62,7 +62,7 @@ var (
)

func init() {
-	rootCmd.PersistentFlags().StringVarP(&cfgFile, "", "c", "", "config file of frps")
+	rootCmd.PersistentFlags().StringVarP(&cfgFile, "config", "c", "", "config file of frps")
	rootCmd.PersistentFlags().BoolVarP(&showVersion, "version", "v", false, "version of frpc")

	rootCmd.PersistentFlags().StringVarP(&bindAddr, "bind_addr", "", "0.0.0.0", "bind address")

@@ -79,7 +79,7 @@ func init() {
	rootCmd.PersistentFlags().StringVarP(&dashboardPwd, "dashboard_pwd", "", "admin", "dashboard password")
	rootCmd.PersistentFlags().StringVarP(&logFile, "log_file", "", "console", "log file")
	rootCmd.PersistentFlags().StringVarP(&logLevel, "log_level", "", "info", "log level")
-	rootCmd.PersistentFlags().Int64VarP(&logMaxDays, "log_max_days", "", 3, "log_max_days")
+	rootCmd.PersistentFlags().Int64VarP(&logMaxDays, "log_max_days", "", 3, "log max days")
	rootCmd.PersistentFlags().StringVarP(&token, "token", "t", "", "auth token")
	rootCmd.PersistentFlags().StringVarP(&subDomainHost, "subdomain_host", "", "", "subdomain host")
	rootCmd.PersistentFlags().StringVarP(&allowPorts, "allow_ports", "", "", "allow ports")
@@ -50,7 +50,7 @@ tls_enable = true
# specify a dns server, so frpc will use this instead of default one
# dns_server = 8.8.8.8

- # proxy names you want to start divided by ','
+ # proxy names you want to start seperated by ','
# default is empty, means all proxies
# start = ssh,dns

@@ -154,6 +154,9 @@ use_encryption = false
use_compression = false
subdomain = web01
custom_domains = web02.yourdomain.com
+ # if not empty, frpc will use proxy protocol to transfer connection info to your local service
+ # v1 or v2 or empty
+ proxy_protocol_version = v2

[plugin_unix_domain_socket]
type = tcp

@@ -187,6 +190,15 @@ plugin_strip_prefix = static
plugin_http_user = abc
plugin_http_passwd = abc

+ [plugin_https2http]
+ type = https
+ custom_domains = test.yourdomain.com
+ plugin = https2http
+ plugin_local_addr = 127.0.0.1:80
+ plugin_crt_path = ./server.crt
+ plugin_key_path = ./server.key
+ plugin_host_header_rewrite = 127.0.0.1
+
[secret_tcp]
# If the type is secret tcp, remote_port is useless
# Who want to connect local port should deploy another frpc with stcp proxy and role is visitor
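The `[plugin_https2http]` entry above makes frpc terminate TLS itself and forward plain HTTP to `plugin_local_addr`. A condensed, standard-library-only Go sketch of that idea (the real implementation is models/plugin/https2http.go further down in this diff; the addresses and file paths here mirror the sample config and are placeholders):

```go
package main

import (
	"crypto/tls"
	"net/http"
	"net/http/httputil"
)

func main() {
	localAddr := "127.0.0.1:80"      // where the plain HTTP service listens
	hostHeaderRewrite := "127.0.0.1" // optional Host header override

	// Rewrite each incoming request so it targets the local HTTP service.
	rp := &httputil.ReverseProxy{
		Director: func(req *http.Request) {
			req.URL.Scheme = "http"
			req.URL.Host = localAddr
			if hostHeaderRewrite != "" {
				req.Host = hostHeaderRewrite
			}
		},
	}

	cert, err := tls.LoadX509KeyPair("./server.crt", "./server.key")
	if err != nil {
		panic(err)
	}

	srv := &http.Server{
		Addr:      ":443",
		Handler:   rp,
		TLSConfig: &tls.Config{Certificates: []tls.Certificate{cert}},
	}
	// Terminate TLS here and proxy every request to the local HTTP service.
	panic(srv.ListenAndServeTLS("", ""))
}
```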
@@ -65,3 +65,6 @@ subdomain_host = frps.com

# if tcp stream multiplexing is used, default is true
tcp_mux = true
+
+ # custom 404 page for HTTP requests
+ # custom_404_page = /path/to/404.html
conf/systemd/frpc.service (new file, 14 lines)

@@ -0,0 +1,14 @@
[Unit]
Description=Frp Client Service
After=network.target

[Service]
Type=simple
User=nobody
Restart=on-failure
RestartSec=5s
ExecStart=/usr/bin/frpc -c /etc/frp/frpc.ini
ExecReload=/usr/bin/frpc reload -c /etc/frp/frpc.ini

[Install]
WantedBy=multi-user.target

conf/systemd/frpc@.service (new file, 14 lines)

@@ -0,0 +1,14 @@
[Unit]
Description=Frp Client Service
After=network.target

[Service]
Type=idle
User=nobody
Restart=on-failure
RestartSec=5s
ExecStart=/usr/bin/frpc -c /etc/frp/%i.ini
ExecReload=/usr/bin/frpc reload -c /etc/frp/%i.ini

[Install]
WantedBy=multi-user.target

conf/systemd/frps.service (new file, 13 lines)

@@ -0,0 +1,13 @@
[Unit]
Description=Frp Server Service
After=network.target

[Service]
Type=simple
User=nobody
Restart=on-failure
RestartSec=5s
ExecStart=/usr/bin/frps -c /etc/frp/frps.ini

[Install]
WantedBy=multi-user.target

conf/systemd/frps@.service (new file, 13 lines)

@@ -0,0 +1,13 @@
[Unit]
Description=Frp Server Service
After=network.target

[Service]
Type=simple
User=nobody
Restart=on-failure
RestartSec=5s
ExecStart=/usr/bin/frps -c /etc/frp/%i.ini

[Install]
WantedBy=multi-user.target
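The two templated units (frpc@.service and frps@.service) use the systemd `%i` instance specifier, so they are presumably meant to be enabled once per configuration file: an instance named `myproxy` would run `/usr/bin/frpc -c /etc/frp/myproxy.ini`, while the plain units always read `frpc.ini` or `frps.ini`.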
go.mod (20 changes)

@@ -4,29 +4,29 @@ go 1.12

require (
	github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5
	github.com/davecgh/go-spew v1.1.0 // indirect
	github.com/fatedier/beego v0.0.0-20171024143340-6c6a4f5bd5eb
	github.com/fatedier/golib v0.0.0-20181107124048-ff8cd814b049
-	github.com/fatedier/kcp-go v0.0.0-20171023144637-cd167d2f15f4
+	github.com/fatedier/kcp-go v2.0.4-0.20190803094908-fe8645b0a904+incompatible
	github.com/golang/snappy v0.0.0-20170215233205-553a64147049 // indirect
	github.com/gorilla/context v1.1.1 // indirect
-	github.com/gorilla/mux v1.6.2
-	github.com/gorilla/websocket v1.2.0
+	github.com/gorilla/mux v1.7.3
+	github.com/gorilla/websocket v1.4.0
	github.com/hashicorp/yamux v0.0.0-20181012175058-2f1d1f20f75d
	github.com/inconshreveable/mousetrap v1.0.0 // indirect
	github.com/klauspost/cpuid v1.2.0 // indirect
	github.com/klauspost/reedsolomon v1.9.1 // indirect
	github.com/mattn/go-runewidth v0.0.4 // indirect
	github.com/pires/go-proxyproto v0.0.0-20190111085350-4d51b51e3bfc
	github.com/pkg/errors v0.8.0 // indirect
	github.com/pmezard/go-difflib v1.0.0 // indirect
	github.com/rakyll/statik v0.1.1
	github.com/rodaine/table v1.0.0
	github.com/spf13/cobra v0.0.3
	github.com/spf13/pflag v1.0.1 // indirect
-	github.com/stretchr/testify v1.2.1
+	github.com/stretchr/testify v1.3.0
	github.com/templexxx/cpufeat v0.0.0-20170927014610-3794dfbfb047 // indirect
	github.com/templexxx/reedsolomon v0.0.0-20170926020725-5e06b81a1c76 // indirect
	github.com/templexxx/xor v0.0.0-20170926022130-0af8e873c554 // indirect
	github.com/tjfoc/gmsm v0.0.0-20171124023159-98aa888b79d8 // indirect
	github.com/vaughan0/go-ini v0.0.0-20130923145212-a98ad7ee00ec
-	golang.org/x/crypto v0.0.0-20180505025534-4ec37c66abab // indirect
-	golang.org/x/net v0.0.0-20180524181706-dfa909b99c79
+	github.com/xtaci/lossyconn v0.0.0-20190602105132-8df528c0c9ae // indirect
+	golang.org/x/net v0.0.0-20190724013045-ca1201d0de80
	golang.org/x/text v0.3.2 // indirect
)
48
go.sum
48
go.sum
@@ -1,31 +1,61 @@
|
||||
github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5 h1:0CwZNZbxp69SHPdPJAN/hZIm0C4OItdklCFmMRWYpio=
|
||||
github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5/go.mod h1:wHh0iHkYZB8zMSxRWpUBQtwG5a7fFgvEO+odwuTv2gs=
|
||||
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
||||
github.com/fatedier/beego v0.0.0-20171024143340-6c6a4f5bd5eb h1:wCrNShQidLmvVWn/0PikGmpdP0vtQmnvyRg3ZBEhczw=
|
||||
github.com/fatedier/beego v0.0.0-20171024143340-6c6a4f5bd5eb/go.mod h1:wx3gB6dbIfBRcucp94PI9Bt3I0F2c/MyNEWuhzpWiwk=
|
||||
github.com/fatedier/golib v0.0.0-20181107124048-ff8cd814b049 h1:teH578mf2ii42NHhIp3PhgvjU5bv+NFMq9fSQR8NaG8=
|
||||
github.com/fatedier/golib v0.0.0-20181107124048-ff8cd814b049/go.mod h1:DqIrnl0rp3Zybg9zbJmozTy1n8fYJoX+QoAj9slIkKM=
|
||||
github.com/fatedier/kcp-go v0.0.0-20171023144637-cd167d2f15f4/go.mod h1:YpCOaxj7vvMThhIQ9AfTOPW2sfztQR5WDfs7AflSy4s=
|
||||
github.com/fatedier/kcp-go v2.0.4-0.20190803094908-fe8645b0a904+incompatible h1:ssXat9YXFvigNge/IkkZvFMn8yeYKFX+uI6wn2mLJ74=
|
||||
github.com/fatedier/kcp-go v2.0.4-0.20190803094908-fe8645b0a904+incompatible/go.mod h1:YpCOaxj7vvMThhIQ9AfTOPW2sfztQR5WDfs7AflSy4s=
|
||||
github.com/golang/snappy v0.0.0-20170215233205-553a64147049 h1:K9KHZbXKpGydfDN0aZrsoHpLJlZsBrGMFWbgLDGnPZk=
|
||||
github.com/golang/snappy v0.0.0-20170215233205-553a64147049/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
|
||||
github.com/gorilla/context v1.1.1 h1:AWwleXJkX/nhcU9bZSnZoi3h/qGYqQAGhq6zZe/aQW8=
|
||||
github.com/gorilla/context v1.1.1/go.mod h1:kBGZzfjB9CEq2AlWe17Uuf7NDRt0dE0s8S51q0aT7Yg=
|
||||
github.com/gorilla/mux v1.6.2/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
|
||||
github.com/gorilla/websocket v1.2.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
|
||||
github.com/gorilla/mux v1.7.3 h1:gnP5JzjVOuiZD07fKKToCAOjS0yOpj/qPETTXCCS6hw=
|
||||
github.com/gorilla/mux v1.7.3/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
|
||||
github.com/gorilla/websocket v1.4.0 h1:WDFjx/TMzVgy9VdMMQi2K2Emtwi2QcUQsztZ/zLaH/Q=
|
||||
github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
|
||||
github.com/hashicorp/yamux v0.0.0-20181012175058-2f1d1f20f75d h1:kJCB4vdITiW1eC1vq2e6IsrXKrZit1bv/TDYFGMp4BQ=
|
||||
github.com/hashicorp/yamux v0.0.0-20181012175058-2f1d1f20f75d/go.mod h1:+NfK9FKeTrX5uv1uIXGdwYDTeHna2qgaIlx54MXqjAM=
|
||||
github.com/inconshreveable/mousetrap v1.0.0 h1:Z8tu5sraLXCXIcARxBp/8cbvlwVa7Z1NHg9XEKhtSvM=
|
||||
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
|
||||
github.com/klauspost/cpuid v1.2.0 h1:NMpwD2G9JSFOE1/TJjGSo5zG7Yb2bTe7eq1jH+irmeE=
|
||||
github.com/klauspost/cpuid v1.2.0/go.mod h1:Pj4uuM528wm8OyEC2QMXAi2YiTZ96dNQPGgoMS4s3ek=
|
||||
github.com/klauspost/reedsolomon v1.9.1 h1:kYrT1MlR4JH6PqOpC+okdb9CDTcwEC/BqpzK4WFyXL8=
|
||||
github.com/klauspost/reedsolomon v1.9.1/go.mod h1:CwCi+NUr9pqSVktrkN+Ondf06rkhYZ/pcNv7fu+8Un4=
|
||||
github.com/mattn/go-runewidth v0.0.4 h1:2BvfKmzob6Bmd4YsL0zygOqfdFnK7GR4QL06Do4/p7Y=
|
||||
github.com/mattn/go-runewidth v0.0.4/go.mod h1:LwmH8dsx7+W8Uxz3IHJYH5QSwggIsqBzpuz5H//U1FU=
|
||||
github.com/pires/go-proxyproto v0.0.0-20190111085350-4d51b51e3bfc h1:lNOt1SMsgHXTdpuGw+RpnJtzUcCb/oRKZP65pBy9pr8=
|
||||
github.com/pires/go-proxyproto v0.0.0-20190111085350-4d51b51e3bfc/go.mod h1:6/gX3+E/IYGa0wMORlSMla999awQFdbaeQCHjSMKIzY=
|
||||
github.com/pkg/errors v0.8.0 h1:WdK/asTD0HN+q6hsWO3/vpuAkAr+tw6aNJNDFFf0+qw=
|
||||
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
|
||||
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
|
||||
github.com/rakyll/statik v0.1.1 h1:fCLHsIMajHqD5RKigbFXpvX3dN7c80Pm12+NCrI3kvg=
|
||||
github.com/rakyll/statik v0.1.1/go.mod h1:OEi9wJV/fMUAGx1eNjq75DKDsJVuEv1U0oYdX6GX8Zs=
|
||||
github.com/rodaine/table v1.0.0 h1:UaCJG5Axc/cNXVGXqnCrffm1KxP0OfYLe1HuJLf5sFY=
|
||||
github.com/rodaine/table v1.0.0/go.mod h1:YAUzwPOji0DUJNEvggdxyQcUAl4g3hDRcFlyjnnR51I=
|
||||
github.com/spf13/cobra v0.0.3 h1:ZlrZ4XsMRm04Fr5pSFxBgfND2EBVa1nLpiy1stUsX/8=
|
||||
github.com/spf13/cobra v0.0.3/go.mod h1:1l0Ry5zgKvJasoi3XT1TypsSe7PqH0Sj9dhYf7v3XqQ=
|
||||
github.com/spf13/pflag v1.0.1 h1:aCvUg6QPl3ibpQUxyLkrEkCHtPqYJL4x9AuhqVqFis4=
|
||||
github.com/spf13/pflag v1.0.1/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
|
||||
github.com/stretchr/testify v1.2.1/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
|
||||
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
|
||||
github.com/stretchr/testify v1.3.0 h1:TivCn/peBQ7UY8ooIcPgZFpTNSz0Q2U6UrFlUfqbe0Q=
|
||||
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
|
||||
github.com/templexxx/cpufeat v0.0.0-20170927014610-3794dfbfb047 h1:K+jtWCOuZgCra7eXZ/VWn2FbJmrA/D058mTXhh2rq+8=
|
||||
github.com/templexxx/cpufeat v0.0.0-20170927014610-3794dfbfb047/go.mod h1:wM7WEvslTq+iOEAMDLSzhVuOt5BRZ05WirO+b09GHQU=
|
||||
github.com/templexxx/reedsolomon v0.0.0-20170926020725-5e06b81a1c76/go.mod h1:ToWcj2sZ6xHl14JjZiVDktYpFtrFZJXBlsu7TV23lNg=
|
||||
github.com/templexxx/xor v0.0.0-20170926022130-0af8e873c554 h1:pexgSe+JCFuxG+uoMZLO+ce8KHtdHGhst4cs6rw3gmk=
|
||||
github.com/templexxx/xor v0.0.0-20170926022130-0af8e873c554/go.mod h1:5XA7W9S6mni3h5uvOC75dA3m9CCCaS83lltmc0ukdi4=
|
||||
github.com/tjfoc/gmsm v0.0.0-20171124023159-98aa888b79d8 h1:6CNSDqI1wiE+JqyOy5Qt/yo/DoNI2/QmmOZeiCid2Nw=
|
||||
github.com/tjfoc/gmsm v0.0.0-20171124023159-98aa888b79d8/go.mod h1:XxO4hdhhrzAd+G4CjDqaOkd0hUzmtPR/d3EiBBMn/wc=
|
||||
github.com/vaughan0/go-ini v0.0.0-20130923145212-a98ad7ee00ec h1:DGmKwyZwEB8dI7tbLt/I/gQuP559o/0FrAkHKlQM/Ks=
|
||||
github.com/vaughan0/go-ini v0.0.0-20130923145212-a98ad7ee00ec/go.mod h1:owBmyHYMLkxyrugmfwE/DLJyW8Ro9mkphwuVErQ0iUw=
|
||||
golang.org/x/crypto v0.0.0-20180505025534-4ec37c66abab/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
|
||||
golang.org/x/net v0.0.0-20180524181706-dfa909b99c79/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
github.com/xtaci/lossyconn v0.0.0-20190602105132-8df528c0c9ae h1:J0GxkO96kL4WF+AIT3M4mfUVinOCPgf2uUWYFUzN0sM=
|
||||
github.com/xtaci/lossyconn v0.0.0-20190602105132-8df528c0c9ae/go.mod h1:gXtu8J62kEgmN++bm9BVICuT/e8yiLI2KFobd/TRFsE=
|
||||
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2 h1:VklqNMn3ovrHsnt90PveolxSbWFaJdECFbxSq0Mqo2M=
|
||||
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
|
||||
golang.org/x/net v0.0.0-20190724013045-ca1201d0de80 h1:Ao/3l156eZf2AW5wK8a7/smtodRU+gha3+BeqJ69lRk=
|
||||
golang.org/x/net v0.0.0-20190724013045-ca1201d0de80/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a h1:1BGLXjeY4akVXGgbC9HugT3Jv3hCI0z56oJR5vAMgBU=
|
||||
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
|
||||
golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs=
|
||||
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
|
||||
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||
|
||||
@@ -107,8 +107,10 @@ type BaseProxyConf struct {
	Group    string `json:"group"`
	GroupKey string `json:"group_key"`

+	// only used for client
+	ProxyProtocolVersion string `json:"proxy_protocol_version"`
	LocalSvrConf
-	HealthCheckConf // only used for client
+	HealthCheckConf
}

func (cfg *BaseProxyConf) GetBaseInfo() *BaseProxyConf {

@@ -121,7 +123,8 @@ func (cfg *BaseProxyConf) compare(cmp *BaseProxyConf) bool {
		cfg.UseEncryption != cmp.UseEncryption ||
		cfg.UseCompression != cmp.UseCompression ||
		cfg.Group != cmp.Group ||
-		cfg.GroupKey != cmp.GroupKey {
+		cfg.GroupKey != cmp.GroupKey ||
+		cfg.ProxyProtocolVersion != cmp.ProxyProtocolVersion {
		return false
	}
	if !cfg.LocalSvrConf.compare(&cmp.LocalSvrConf) {

@@ -162,6 +165,7 @@ func (cfg *BaseProxyConf) UnmarshalFromIni(prefix string, name string, section i

	cfg.Group = section["group"]
	cfg.GroupKey = section["group_key"]
+	cfg.ProxyProtocolVersion = section["proxy_protocol_version"]

	if err := cfg.LocalSvrConf.UnmarshalFromIni(prefix, name, section); err != nil {
		return err

@@ -194,6 +198,12 @@ func (cfg *BaseProxyConf) MarshalToMsg(pMsg *msg.NewProxy) {
}

func (cfg *BaseProxyConf) checkForCli() (err error) {
+	if cfg.ProxyProtocolVersion != "" {
+		if cfg.ProxyProtocolVersion != "v1" && cfg.ProxyProtocolVersion != "v2" {
+			return fmt.Errorf("no support proxy protocol version: %s", cfg.ProxyProtocolVersion)
+		}
+	}
+
	if err = cfg.LocalSvrConf.checkForCli(); err != nil {
		return
	}
@@ -69,6 +69,7 @@ type ServerCommonConf struct {
	Token         string `json:"token"`
	SubDomainHost string `json:"subdomain_host"`
	TcpMux        bool   `json:"tcp_mux"`
+	Custom404Page string `json:"custom_404_page"`

	AllowPorts   map[int]struct{}
	MaxPoolCount int64 `json:"max_pool_count"`

@@ -104,6 +105,7 @@ func GetDefaultServerConf() *ServerCommonConf {
		MaxPortsPerClient: 0,
		HeartBeatTimeout:  90,
		UserConnTimeout:   10,
+		Custom404Page:     "",
	}
}

@@ -293,6 +295,10 @@ func UnmarshalServerConfFromIni(defaultCfg *ServerCommonConf, content string) (c
		cfg.TcpMux = true
	}

+	if tmpStr, ok = conf.Get("common", "custom_404_page"); ok {
+		cfg.Custom404Page = tmpStr
+	}
+
	if tmpStr, ok = conf.Get("common", "heartbeat_timeout"); ok {
		v, errRet := strconv.ParseInt(tmpStr, 10, 64)
		if errRet != nil {
@@ -126,6 +126,10 @@ type ReqWorkConn struct {

type StartWorkConn struct {
	ProxyName string `json:"proxy_name"`
+	SrcAddr   string `json:"src_addr"`
+	DstAddr   string `json:"dst_addr"`
+	SrcPort   uint16 `json:"src_port"`
+	DstPort   uint16 `json:"dst_port"`
}

type NewVisitorConn struct {
@@ -64,7 +64,7 @@ func (hp *HttpProxy) Name() string {
	return PluginHttpProxy
}

-func (hp *HttpProxy) Handle(conn io.ReadWriteCloser, realConn frpNet.Conn) {
+func (hp *HttpProxy) Handle(conn io.ReadWriteCloser, realConn frpNet.Conn, extraBufToLocal []byte) {
	wrapConn := frpNet.WrapReadWriteCloserToConn(conn, realConn)

	sc, rd := gnet.NewSharedConn(wrapConn)
114
models/plugin/https2http.go
Normal file
114
models/plugin/https2http.go
Normal file
@@ -0,0 +1,114 @@
|
||||
// Copyright 2019 fatedier, fatedier@gmail.com
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package plugin
|
||||
|
||||
import (
|
||||
"crypto/tls"
|
||||
"fmt"
|
||||
"io"
|
||||
"net/http"
|
||||
"net/http/httputil"
|
||||
|
||||
frpNet "github.com/fatedier/frp/utils/net"
|
||||
)
|
||||
|
||||
const PluginHTTPS2HTTP = "https2http"
|
||||
|
||||
func init() {
|
||||
Register(PluginHTTPS2HTTP, NewHTTPS2HTTPPlugin)
|
||||
}
|
||||
|
||||
type HTTPS2HTTPPlugin struct {
|
||||
crtPath string
|
||||
keyPath string
|
||||
hostHeaderRewrite string
|
||||
localAddr string
|
||||
|
||||
l *Listener
|
||||
s *http.Server
|
||||
}
|
||||
|
||||
func NewHTTPS2HTTPPlugin(params map[string]string) (Plugin, error) {
|
||||
crtPath := params["plugin_crt_path"]
|
||||
keyPath := params["plugin_key_path"]
|
||||
localAddr := params["plugin_local_addr"]
|
||||
hostHeaderRewrite := params["plugin_host_header_rewrite"]
|
||||
|
||||
if crtPath == "" {
|
||||
return nil, fmt.Errorf("plugin_crt_path is required")
|
||||
}
|
||||
if keyPath == "" {
|
||||
return nil, fmt.Errorf("plugin_key_path is required")
|
||||
}
|
||||
if localAddr == "" {
|
||||
return nil, fmt.Errorf("plugin_local_addr is required")
|
||||
}
|
||||
|
||||
listener := NewProxyListener()
|
||||
|
||||
p := &HTTPS2HTTPPlugin{
|
||||
crtPath: crtPath,
|
||||
keyPath: keyPath,
|
||||
localAddr: localAddr,
|
||||
hostHeaderRewrite: hostHeaderRewrite,
|
||||
l: listener,
|
||||
}
|
||||
|
||||
rp := &httputil.ReverseProxy{
|
||||
Director: func(req *http.Request) {
|
||||
req.URL.Scheme = "http"
|
||||
req.URL.Host = p.localAddr
|
||||
if p.hostHeaderRewrite != "" {
|
||||
req.Host = p.hostHeaderRewrite
|
||||
}
|
||||
},
|
||||
}
|
||||
|
||||
p.s = &http.Server{
|
||||
Handler: rp,
|
||||
}
|
||||
|
||||
tlsConfig, err := p.genTLSConfig()
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("gen TLS config error: %v", err)
|
||||
}
|
||||
ln := tls.NewListener(listener, tlsConfig)
|
||||
|
||||
go p.s.Serve(ln)
|
||||
return p, nil
|
||||
}
|
||||
|
||||
func (p *HTTPS2HTTPPlugin) genTLSConfig() (*tls.Config, error) {
|
||||
cert, err := tls.LoadX509KeyPair(p.crtPath, p.keyPath)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
config := &tls.Config{Certificates: []tls.Certificate{cert}}
|
||||
return config, nil
|
||||
}
|
||||
|
||||
func (p *HTTPS2HTTPPlugin) Handle(conn io.ReadWriteCloser, realConn frpNet.Conn, extraBufToLocal []byte) {
|
||||
wrapConn := frpNet.WrapReadWriteCloserToConn(conn, realConn)
|
||||
p.l.PutConn(wrapConn)
|
||||
}
|
||||
|
||||
func (p *HTTPS2HTTPPlugin) Name() string {
|
||||
return PluginHTTPS2HTTP
|
||||
}
|
||||
|
||||
func (p *HTTPS2HTTPPlugin) Close() error {
|
||||
return nil
|
||||
}
|
||||
@@ -46,7 +46,7 @@ func Create(name string, params map[string]string) (p Plugin, err error) {

type Plugin interface {
	Name() string
-	Handle(conn io.ReadWriteCloser, realConn frpNet.Conn)
+	Handle(conn io.ReadWriteCloser, realConn frpNet.Conn, extraBufToLocal []byte)
	Close() error
}

@@ -53,7 +53,7 @@ func NewSocks5Plugin(params map[string]string) (p Plugin, err error) {
	return
}

-func (sp *Socks5Plugin) Handle(conn io.ReadWriteCloser, realConn frpNet.Conn) {
+func (sp *Socks5Plugin) Handle(conn io.ReadWriteCloser, realConn frpNet.Conn, extraBufToLocal []byte) {
	defer conn.Close()
	wrapConn := frpNet.WrapReadWriteCloserToConn(conn, realConn)
	sp.Server.ServeConn(wrapConn)

@@ -72,7 +72,7 @@ func NewStaticFilePlugin(params map[string]string) (Plugin, error) {
	return sp, nil
}

-func (sp *StaticFilePlugin) Handle(conn io.ReadWriteCloser, realConn frpNet.Conn) {
+func (sp *StaticFilePlugin) Handle(conn io.ReadWriteCloser, realConn frpNet.Conn, extraBufToLocal []byte) {
	wrapConn := frpNet.WrapReadWriteCloserToConn(conn, realConn)
	sp.l.PutConn(wrapConn)
}

@@ -53,11 +53,14 @@ func NewUnixDomainSocketPlugin(params map[string]string) (p Plugin, err error) {
	return
}

-func (uds *UnixDomainSocketPlugin) Handle(conn io.ReadWriteCloser, realConn frpNet.Conn) {
+func (uds *UnixDomainSocketPlugin) Handle(conn io.ReadWriteCloser, realConn frpNet.Conn, extraBufToLocal []byte) {
	localConn, err := net.DialUnix("unix", nil, uds.UnixAddr)
	if err != nil {
		return
	}
+	if len(extraBufToLocal) > 0 {
+		localConn.Write(extraBufToLocal)
+	}

	frpIo.Join(localConn, conn)
}
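Every plugin now receives `extraBufToLocal`, which carries the serialized proxy protocol header when `proxy_protocol_version` is set; the unix domain socket plugin above shows the intended use. A generic, standard-library-only sketch of the same pattern for a hypothetical plugin that dials a local TCP address (function and variable names are illustrative, not frp's API):

```go
package main

import (
	"io"
	"net"
)

// handleWithExtraBuf dials the local service, forwards any proxy
// protocol bytes first, then splices the two streams together,
// roughly what frpIo.Join does in the plugin above.
func handleWithExtraBuf(remote io.ReadWriteCloser, localAddr string, extraBufToLocal []byte) {
	localConn, err := net.Dial("tcp", localAddr)
	if err != nil {
		remote.Close()
		return
	}
	if len(extraBufToLocal) > 0 {
		// The local service sees the PROXY header before any payload.
		localConn.Write(extraBufToLocal)
	}

	go func() {
		io.Copy(localConn, remote)
		localConn.Close()
	}()
	io.Copy(remote, localConn)
	remote.Close()
}

func main() {
	ln, err := net.Listen("tcp", "127.0.0.1:9000")
	if err != nil {
		panic(err)
	}
	for {
		c, err := ln.Accept()
		if err != nil {
			return
		}
		go handleWithExtraBuf(c, "127.0.0.1:80", nil)
	}
}
```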
@@ -44,7 +44,7 @@ for os in $os_all; do
		mv ./frps_${os}_${arch} ${frp_path}/frps
	fi
	cp ./LICENSE ${frp_path}
-	cp ./conf/* ${frp_path}
+	cp -rf ./conf/* ${frp_path}

	# packages
	cd ./packages
@@ -301,6 +301,7 @@ func (ctl *Control) reader() {
|
||||
return
|
||||
} else {
|
||||
ctl.conn.Warn("read error: %v", err)
|
||||
ctl.conn.Close()
|
||||
return
|
||||
}
|
||||
} else {
|
||||
|
||||
@@ -29,6 +29,9 @@ type ResourceController struct {
|
||||
// Tcp Group Controller
|
||||
TcpGroupCtl *group.TcpGroupCtl
|
||||
|
||||
// HTTP Group Controller
|
||||
HTTPGroupCtl *group.HTTPGroupController
|
||||
|
||||
// Manage all tcp ports
|
||||
TcpPortManager *ports.PortManager
|
||||
|
||||
@@ -38,7 +41,7 @@ type ResourceController struct {
|
||||
// For http proxies, forwarding http requests
|
||||
HttpReverseProxy *vhost.HttpReverseProxy
|
||||
|
||||
// For https proxies, route requests to different clients by hostname and other infomation
|
||||
// For https proxies, route requests to different clients by hostname and other information
|
||||
VhostHttpsMuxer *vhost.HttpsMuxer
|
||||
|
||||
// Controller for nat hole connections
|
||||
|
||||
@@ -279,6 +279,7 @@ func (svr *Service) getProxyStatsByTypeAndName(proxyType string, proxyName strin
|
||||
proxyInfo.CurConns = ps.CurConns
|
||||
proxyInfo.LastStartTime = ps.LastStartTime
|
||||
proxyInfo.LastCloseTime = ps.LastCloseTime
|
||||
code = 200
|
||||
}
|
||||
|
||||
return
|
||||
|
||||
@@ -23,4 +23,5 @@ var (
|
||||
ErrGroupParamsInvalid = errors.New("group params invalid")
|
||||
ErrListenerClosed = errors.New("group listener closed")
|
||||
ErrGroupDifferentPort = errors.New("group should have same remote port")
|
||||
ErrProxyRepeated = errors.New("group proxy repeated")
|
||||
)
|
||||
|
||||
157
server/group/http.go
Normal file
157
server/group/http.go
Normal file
@@ -0,0 +1,157 @@
|
||||
package group
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"sync"
|
||||
"sync/atomic"
|
||||
|
||||
frpNet "github.com/fatedier/frp/utils/net"
|
||||
|
||||
"github.com/fatedier/frp/utils/vhost"
|
||||
)
|
||||
|
||||
type HTTPGroupController struct {
|
||||
groups map[string]*HTTPGroup
|
||||
|
||||
vhostRouter *vhost.VhostRouters
|
||||
|
||||
mu sync.Mutex
|
||||
}
|
||||
|
||||
func NewHTTPGroupController(vhostRouter *vhost.VhostRouters) *HTTPGroupController {
|
||||
return &HTTPGroupController{
|
||||
groups: make(map[string]*HTTPGroup),
|
||||
vhostRouter: vhostRouter,
|
||||
}
|
||||
}
|
||||
|
||||
func (ctl *HTTPGroupController) Register(proxyName, group, groupKey string,
|
||||
routeConfig vhost.VhostRouteConfig) (err error) {
|
||||
|
||||
indexKey := httpGroupIndex(group, routeConfig.Domain, routeConfig.Location)
|
||||
ctl.mu.Lock()
|
||||
g, ok := ctl.groups[indexKey]
|
||||
if !ok {
|
||||
g = NewHTTPGroup(ctl)
|
||||
ctl.groups[indexKey] = g
|
||||
}
|
||||
ctl.mu.Unlock()
|
||||
|
||||
return g.Register(proxyName, group, groupKey, routeConfig)
|
||||
}
|
||||
|
||||
func (ctl *HTTPGroupController) UnRegister(proxyName, group, domain, location string) {
|
||||
indexKey := httpGroupIndex(group, domain, location)
|
||||
ctl.mu.Lock()
|
||||
defer ctl.mu.Unlock()
|
||||
g, ok := ctl.groups[indexKey]
|
||||
if !ok {
|
||||
return
|
||||
}
|
||||
|
||||
isEmpty := g.UnRegister(proxyName)
|
||||
if isEmpty {
|
||||
delete(ctl.groups, indexKey)
|
||||
}
|
||||
}
|
||||
|
||||
type HTTPGroup struct {
|
||||
group string
|
||||
groupKey string
|
||||
domain string
|
||||
location string
|
||||
|
||||
createFuncs map[string]vhost.CreateConnFunc
|
||||
pxyNames []string
|
||||
index uint64
|
||||
ctl *HTTPGroupController
|
||||
mu sync.RWMutex
|
||||
}
|
||||
|
||||
func NewHTTPGroup(ctl *HTTPGroupController) *HTTPGroup {
|
||||
return &HTTPGroup{
|
||||
createFuncs: make(map[string]vhost.CreateConnFunc),
|
||||
pxyNames: make([]string, 0),
|
||||
ctl: ctl,
|
||||
}
|
||||
}
|
||||
|
||||
func (g *HTTPGroup) Register(proxyName, group, groupKey string,
|
||||
routeConfig vhost.VhostRouteConfig) (err error) {
|
||||
|
||||
g.mu.Lock()
|
||||
defer g.mu.Unlock()
|
||||
if len(g.createFuncs) == 0 {
|
||||
// the first proxy in this group
|
||||
tmp := routeConfig // copy object
|
||||
tmp.CreateConnFn = g.createConn
|
||||
err = g.ctl.vhostRouter.Add(routeConfig.Domain, routeConfig.Location, &tmp)
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
g.group = group
|
||||
g.groupKey = groupKey
|
||||
g.domain = routeConfig.Domain
|
||||
g.location = routeConfig.Location
|
||||
} else {
|
||||
if g.group != group || g.domain != routeConfig.Domain || g.location != routeConfig.Location {
|
||||
err = ErrGroupParamsInvalid
|
||||
return
|
||||
}
|
||||
if g.groupKey != groupKey {
|
||||
err = ErrGroupAuthFailed
|
||||
return
|
||||
}
|
||||
}
|
||||
if _, ok := g.createFuncs[proxyName]; ok {
|
||||
err = ErrProxyRepeated
|
||||
return
|
||||
}
|
||||
g.createFuncs[proxyName] = routeConfig.CreateConnFn
|
||||
g.pxyNames = append(g.pxyNames, proxyName)
|
||||
return nil
|
||||
}
|
||||
|
||||
func (g *HTTPGroup) UnRegister(proxyName string) (isEmpty bool) {
|
||||
g.mu.Lock()
|
||||
defer g.mu.Unlock()
|
||||
delete(g.createFuncs, proxyName)
|
||||
for i, name := range g.pxyNames {
|
||||
if name == proxyName {
|
||||
g.pxyNames = append(g.pxyNames[:i], g.pxyNames[i+1:]...)
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
if len(g.createFuncs) == 0 {
|
||||
isEmpty = true
|
||||
g.ctl.vhostRouter.Del(g.domain, g.location)
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
func (g *HTTPGroup) createConn(remoteAddr string) (frpNet.Conn, error) {
|
||||
var f vhost.CreateConnFunc
|
||||
newIndex := atomic.AddUint64(&g.index, 1)
|
||||
|
||||
g.mu.RLock()
|
||||
group := g.group
|
||||
domain := g.domain
|
||||
location := g.location
|
||||
if len(g.pxyNames) > 0 {
|
||||
name := g.pxyNames[int(newIndex)%len(g.pxyNames)]
|
||||
f, _ = g.createFuncs[name]
|
||||
}
|
||||
g.mu.RUnlock()
|
||||
|
||||
if f == nil {
|
||||
return nil, fmt.Errorf("no CreateConnFunc for http group [%s], domain [%s], location [%s]", group, domain, location)
|
||||
}
|
||||
|
||||
return f(remoteAddr)
|
||||
}
|
||||
|
||||
func httpGroupIndex(group, domain, location string) string {
|
||||
return fmt.Sprintf("%s_%s_%s", group, domain, location)
|
||||
}
|
||||
@@ -24,46 +24,47 @@ import (
gerr "github.com/fatedier/golib/errors"
)

type TcpGroupListener struct {
groupName string
group *TcpGroup
// TcpGroupCtl manage all TcpGroups
type TcpGroupCtl struct {
groups map[string]*TcpGroup

addr net.Addr
closeCh chan struct{}
// portManager is used to manage port
portManager *ports.PortManager
mu sync.Mutex
}

func newTcpGroupListener(name string, group *TcpGroup, addr net.Addr) *TcpGroupListener {
return &TcpGroupListener{
groupName: name,
group: group,
addr: addr,
closeCh: make(chan struct{}),
// NewTcpGroupCtl return a new TcpGroupCtl
func NewTcpGroupCtl(portManager *ports.PortManager) *TcpGroupCtl {
return &TcpGroupCtl{
groups: make(map[string]*TcpGroup),
portManager: portManager,
}
}

func (ln *TcpGroupListener) Accept() (c net.Conn, err error) {
var ok bool
select {
case <-ln.closeCh:
return nil, ErrListenerClosed
case c, ok = <-ln.group.Accept():
if !ok {
return nil, ErrListenerClosed
}
return c, nil
// Listen is the wrapper for TcpGroup's Listen
// If there are no group, we will create one here
func (tgc *TcpGroupCtl) Listen(proxyName string, group string, groupKey string,
addr string, port int) (l net.Listener, realPort int, err error) {

tgc.mu.Lock()
tcpGroup, ok := tgc.groups[group]
if !ok {
tcpGroup = NewTcpGroup(tgc)
tgc.groups[group] = tcpGroup
}
tgc.mu.Unlock()

return tcpGroup.Listen(proxyName, group, groupKey, addr, port)
}

func (ln *TcpGroupListener) Addr() net.Addr {
return ln.addr
}

func (ln *TcpGroupListener) Close() (err error) {
close(ln.closeCh)
ln.group.CloseListener(ln)
return
// RemoveGroup remove TcpGroup from controller
func (tgc *TcpGroupCtl) RemoveGroup(group string) {
tgc.mu.Lock()
defer tgc.mu.Unlock()
delete(tgc.groups, group)
}

// TcpGroup route connections to different proxies
type TcpGroup struct {
group string
groupKey string
@@ -79,6 +80,7 @@ type TcpGroup struct {
mu sync.Mutex
}

// NewTcpGroup return a new TcpGroup
func NewTcpGroup(ctl *TcpGroupCtl) *TcpGroup {
return &TcpGroup{
lns: make([]*TcpGroupListener, 0),
@@ -87,10 +89,14 @@ func NewTcpGroup(ctl *TcpGroupCtl) *TcpGroup {
}
}

// Listen will return a new TcpGroupListener
// if TcpGroup already has a listener, just add a new TcpGroupListener to the queues
// otherwise, listen on the real address
func (tg *TcpGroup) Listen(proxyName string, group string, groupKey string, addr string, port int) (ln *TcpGroupListener, realPort int, err error) {
tg.mu.Lock()
defer tg.mu.Unlock()
if len(tg.lns) == 0 {
// the first listener, listen on the real address
realPort, err = tg.ctl.portManager.Acquire(proxyName, port)
if err != nil {
return
@@ -114,6 +120,7 @@ func (tg *TcpGroup) Listen(proxyName string, group string, groupKey string, addr
}
go tg.worker()
} else {
// address and port in the same group must be equal
if tg.group != group || tg.addr != addr {
err = ErrGroupParamsInvalid
return
@@ -133,6 +140,7 @@ func (tg *TcpGroup) Listen(proxyName string, group string, groupKey string, addr
return
}

// worker is called when the real tcp listener has been created
func (tg *TcpGroup) worker() {
for {
c, err := tg.tcpLn.Accept()
@@ -152,6 +160,7 @@ func (tg *TcpGroup) Accept() <-chan net.Conn {
return tg.acceptCh
}

// CloseListener remove the TcpGroupListener from the TcpGroup
func (tg *TcpGroup) CloseListener(ln *TcpGroupListener) {
tg.mu.Lock()
defer tg.mu.Unlock()
@@ -169,36 +178,47 @@ func (tg *TcpGroup) CloseListener(ln *TcpGroupListener) {
}
}

type TcpGroupCtl struct {
groups map[string]*TcpGroup
// TcpGroupListener
type TcpGroupListener struct {
groupName string
group *TcpGroup

portManager *ports.PortManager
mu sync.Mutex
addr net.Addr
closeCh chan struct{}
}

func NewTcpGroupCtl(portManager *ports.PortManager) *TcpGroupCtl {
return &TcpGroupCtl{
groups: make(map[string]*TcpGroup),
portManager: portManager,
func newTcpGroupListener(name string, group *TcpGroup, addr net.Addr) *TcpGroupListener {
return &TcpGroupListener{
groupName: name,
group: group,
addr: addr,
closeCh: make(chan struct{}),
}
}

func (tgc *TcpGroupCtl) Listen(proxyNanme string, group string, groupKey string,
addr string, port int) (l net.Listener, realPort int, err error) {

tgc.mu.Lock()
defer tgc.mu.Unlock()
if tcpGroup, ok := tgc.groups[group]; ok {
return tcpGroup.Listen(proxyNanme, group, groupKey, addr, port)
} else {
tcpGroup = NewTcpGroup(tgc)
tgc.groups[group] = tcpGroup
return tcpGroup.Listen(proxyNanme, group, groupKey, addr, port)
// Accept will accept connections from TcpGroup
func (ln *TcpGroupListener) Accept() (c net.Conn, err error) {
var ok bool
select {
case <-ln.closeCh:
return nil, ErrListenerClosed
case c, ok = <-ln.group.Accept():
if !ok {
return nil, ErrListenerClosed
}
return c, nil
}
}

func (tgc *TcpGroupCtl) RemoveGroup(group string) {
tgc.mu.Lock()
defer tgc.mu.Unlock()
delete(tgc.groups, group)
func (ln *TcpGroupListener) Addr() net.Addr {
return ln.addr
}

// Close close the listener
func (ln *TcpGroupListener) Close() (err error) {
close(ln.closeCh)

// remove self from TcpGroup
ln.group.CloseListener(ln)
return
}
@@ -16,6 +16,7 @@ package proxy

import (
"io"
"net"
"strings"

"github.com/fatedier/frp/g"
@@ -49,22 +50,46 @@ func (pxy *HttpProxy) Run() (remoteAddr string, err error) {
locations = []string{""}
}

defer func() {
if err != nil {
pxy.Close()
}
}()

addrs := make([]string, 0)
for _, domain := range pxy.cfg.CustomDomains {
if domain == "" {
continue
}

routeConfig.Domain = domain
for _, location := range locations {
routeConfig.Location = location
err = pxy.rc.HttpReverseProxy.Register(routeConfig)
if err != nil {
return
}
tmpDomain := routeConfig.Domain
tmpLocation := routeConfig.Location
addrs = append(addrs, util.CanonicalAddr(tmpDomain, int(g.GlbServerCfg.VhostHttpPort)))
pxy.closeFuncs = append(pxy.closeFuncs, func() {
pxy.rc.HttpReverseProxy.UnRegister(tmpDomain, tmpLocation)
})
pxy.Info("http proxy listen for host [%s] location [%s]", routeConfig.Domain, routeConfig.Location)

// handle group
if pxy.cfg.Group != "" {
err = pxy.rc.HTTPGroupCtl.Register(pxy.name, pxy.cfg.Group, pxy.cfg.GroupKey, routeConfig)
if err != nil {
return
}

pxy.closeFuncs = append(pxy.closeFuncs, func() {
pxy.rc.HTTPGroupCtl.UnRegister(pxy.name, pxy.cfg.Group, tmpDomain, tmpLocation)
})
} else {
// no group
err = pxy.rc.HttpReverseProxy.Register(routeConfig)
if err != nil {
return
}
pxy.closeFuncs = append(pxy.closeFuncs, func() {
pxy.rc.HttpReverseProxy.UnRegister(tmpDomain, tmpLocation)
})
}
addrs = append(addrs, util.CanonicalAddr(routeConfig.Domain, int(g.GlbServerCfg.VhostHttpPort)))
pxy.Info("http proxy listen for host [%s] location [%s] group [%s]", routeConfig.Domain, routeConfig.Location, pxy.cfg.Group)
}
}

@@ -72,17 +97,31 @@ func (pxy *HttpProxy) Run() (remoteAddr string, err error) {
routeConfig.Domain = pxy.cfg.SubDomain + "." + g.GlbServerCfg.SubDomainHost
for _, location := range locations {
routeConfig.Location = location
err = pxy.rc.HttpReverseProxy.Register(routeConfig)
if err != nil {
return
}
tmpDomain := routeConfig.Domain
tmpLocation := routeConfig.Location

// handle group
if pxy.cfg.Group != "" {
err = pxy.rc.HTTPGroupCtl.Register(pxy.name, pxy.cfg.Group, pxy.cfg.GroupKey, routeConfig)
if err != nil {
return
}

pxy.closeFuncs = append(pxy.closeFuncs, func() {
pxy.rc.HTTPGroupCtl.UnRegister(pxy.name, pxy.cfg.Group, tmpDomain, tmpLocation)
})
} else {
err = pxy.rc.HttpReverseProxy.Register(routeConfig)
if err != nil {
return
}
pxy.closeFuncs = append(pxy.closeFuncs, func() {
pxy.rc.HttpReverseProxy.UnRegister(tmpDomain, tmpLocation)
})
}
addrs = append(addrs, util.CanonicalAddr(tmpDomain, g.GlbServerCfg.VhostHttpPort))
pxy.closeFuncs = append(pxy.closeFuncs, func() {
pxy.rc.HttpReverseProxy.UnRegister(tmpDomain, tmpLocation)
})
pxy.Info("http proxy listen for host [%s] location [%s]", routeConfig.Domain, routeConfig.Location)

pxy.Info("http proxy listen for host [%s] location [%s] group [%s]", routeConfig.Domain, routeConfig.Location, pxy.cfg.Group)
}
}
remoteAddr = strings.Join(addrs, ",")
@@ -93,8 +132,14 @@ func (pxy *HttpProxy) GetConf() config.ProxyConf {
return pxy.cfg
}

func (pxy *HttpProxy) GetRealConn() (workConn frpNet.Conn, err error) {
tmpConn, errRet := pxy.GetWorkConnFromPool()
func (pxy *HttpProxy) GetRealConn(remoteAddr string) (workConn frpNet.Conn, err error) {
rAddr, errRet := net.ResolveTCPAddr("tcp", remoteAddr)
if errRet != nil {
pxy.Warn("resolve TCP addr [%s] error: %v", remoteAddr, errRet)
// we do not return error here since remoteAddr is not necessary for proxies without proxy protocol enabled
}

tmpConn, errRet := pxy.GetWorkConnFromPool(rAddr, nil)
if errRet != nil {
err = errRet
return
@@ -31,8 +31,17 @@ type HttpsProxy struct {
func (pxy *HttpsProxy) Run() (remoteAddr string, err error) {
routeConfig := &vhost.VhostRouteConfig{}

defer func() {
if err != nil {
pxy.Close()
}
}()
addrs := make([]string, 0)
for _, domain := range pxy.cfg.CustomDomains {
if domain == "" {
continue
}

routeConfig.Domain = domain
l, errRet := pxy.rc.VhostHttpsMuxer.Listen(routeConfig)
if errRet != nil {
@@ -17,6 +17,8 @@ package proxy
import (
"fmt"
"io"
"net"
"strconv"
"sync"

"github.com/fatedier/frp/g"
@@ -36,7 +38,7 @@ type Proxy interface {
Run() (remoteAddr string, err error)
GetName() string
GetConf() config.ProxyConf
GetWorkConnFromPool() (workConn frpNet.Conn, err error)
GetWorkConnFromPool(src, dst net.Addr) (workConn frpNet.Conn, err error)
GetUsedPortsNum() int
Close()
log.Logger
@@ -70,7 +72,9 @@ func (pxy *BaseProxy) Close() {
}
}

func (pxy *BaseProxy) GetWorkConnFromPool() (workConn frpNet.Conn, err error) {
// GetWorkConnFromPool try to get a new work connections from pool
// for quickly response, we immediately send the StartWorkConn message to frpc after take out one from pool
func (pxy *BaseProxy) GetWorkConnFromPool(src, dst net.Addr) (workConn frpNet.Conn, err error) {
// try all connections from the pool
for i := 0; i < pxy.poolCount+1; i++ {
if workConn, err = pxy.getWorkConnFn(); err != nil {
@@ -80,8 +84,29 @@ func (pxy *BaseProxy) GetWorkConnFromPool() (workConn frpNet.Conn, err error) {
pxy.Info("get a new work connection: [%s]", workConn.RemoteAddr().String())
workConn.AddLogPrefix(pxy.GetName())

var (
srcAddr string
dstAddr string
srcPortStr string
dstPortStr string
srcPort int
dstPort int
)

if src != nil {
srcAddr, srcPortStr, _ = net.SplitHostPort(src.String())
srcPort, _ = strconv.Atoi(srcPortStr)
}
if dst != nil {
dstAddr, dstPortStr, _ = net.SplitHostPort(dst.String())
dstPort, _ = strconv.Atoi(dstPortStr)
}
err := msg.WriteMsg(workConn, &msg.StartWorkConn{
ProxyName: pxy.GetName(),
SrcAddr: srcAddr,
SrcPort: uint16(srcPort),
DstAddr: dstAddr,
DstPort: uint16(dstPort),
})
if err != nil {
workConn.Warn("failed to send message to work connection from pool: %v, times: %d", err, i)
@@ -177,7 +202,7 @@ func HandleUserTcpConnection(pxy Proxy, userConn frpNet.Conn, statsCollector sta
defer userConn.Close()

// try all connections from the pool
workConn, err := pxy.GetWorkConnFromPool()
workConn, err := pxy.GetWorkConnFromPool(userConn.RemoteAddr(), userConn.LocalAddr())
if err != nil {
return
}
@@ -160,7 +160,7 @@ func (pxy *UdpProxy) Run() (remoteAddr string, err error) {
// Sleep a while for waiting control send the NewProxyResp to client.
time.Sleep(500 * time.Millisecond)
for {
workConn, err := pxy.GetWorkConnFromPool()
workConn, err := pxy.GetWorkConnFromPool(nil, nil)
if err != nil {
time.Sleep(1 * time.Second)
// check if proxy is closed

@@ -44,7 +44,7 @@ func (pxy *XtcpProxy) Run() (remoteAddr string, err error) {
break
case sidRequest := <-sidCh:
sr := sidRequest
workConn, errRet := pxy.GetWorkConnFromPool()
workConn, errRet := pxy.GetWorkConnFromPool(nil, nil)
if errRet != nil {
continue
}
@@ -76,6 +76,9 @@ type Service struct {
// Manage all proxies
pxyManager *proxy.ProxyManager

// HTTP vhost router
httpVhostRouter *vhost.VhostRouters

// All resource managers and controllers
rc *controller.ResourceController

@@ -95,12 +98,16 @@ func NewService() (svr *Service, err error) {
TcpPortManager: ports.NewPortManager("tcp", cfg.ProxyBindAddr, cfg.AllowPorts),
UdpPortManager: ports.NewPortManager("udp", cfg.ProxyBindAddr, cfg.AllowPorts),
},
tlsConfig: generateTLSConfig(),
httpVhostRouter: vhost.NewVhostRouters(),
tlsConfig: generateTLSConfig(),
}

// Init group controller
svr.rc.TcpGroupCtl = group.NewTcpGroupCtl(svr.rc.TcpPortManager)

// Init HTTP group controller
svr.rc.HTTPGroupCtl = group.NewHTTPGroupController(svr.httpVhostRouter)

// Init assets
err = assets.Load(cfg.AssetsDir)
if err != nil {
@@ -108,6 +115,9 @@ func NewService() (svr *Service, err error) {
return
}

// Init 404 not found page
vhost.NotFoundPagePath = cfg.Custom404Page

var (
httpMuxOn bool
httpsMuxOn bool
@@ -156,7 +166,7 @@ func NewService() (svr *Service, err error) {
if cfg.VhostHttpPort > 0 {
rp := vhost.NewHttpReverseProxy(vhost.HttpReverseProxyOptions{
ResponseHeaderTimeoutS: cfg.VhostHttpTimeout,
})
}, svr.httpVhostRouter)
svr.rc.HttpReverseProxy = rp

address := fmt.Sprintf("%s:%d", cfg.ProxyBindAddr, cfg.VhostHttpPort)
@@ -256,7 +266,16 @@ func (svr *Service) HandleListener(l frpNet.Listener) {
log.Warn("Listener for incoming connections from client closed")
return
}
c = frpNet.CheckAndEnableTLSServerConn(c, svr.tlsConfig)

log.Trace("start check TLS connection...")
originConn := c
c, err = frpNet.CheckAndEnableTLSServerConnWithTimeout(c, svr.tlsConfig, connReadTimeout)
if err != nil {
log.Warn("CheckAndEnableTLSServerConnWithTimeout error: %v", err)
originConn.Close()
continue
}
log.Trace("success check TLS connection")

// Start a new goroutine for dealing connections.
go func(frpConn frpNet.Conn) {
@@ -72,7 +72,7 @@ health_check_url = /health
func TestHealthCheck(t *testing.T) {
assert := assert.New(t)

// ****** start backgroud services ******
// ****** start background services ******
echoSvc1 := mock.NewEchoServer(15001, 1, "echo1")
err := echoSvc1.Start()
if assert.NoError(err) {
@@ -28,51 +28,51 @@ func GetProxyStatus(statusAddr string, user string, passwd string, name string)
resp, err := http.DefaultClient.Do(req)
if err != nil {
return status, err
} else {
if resp.StatusCode != 200 {
return status, fmt.Errorf("admin api status code [%d]", resp.StatusCode)
}
defer resp.Body.Close()
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
return status, err
}
allStatus := &client.StatusResp{}
err = json.Unmarshal(body, &allStatus)
if err != nil {
return status, fmt.Errorf("unmarshal http response error: %s", strings.TrimSpace(string(body)))
}
for _, s := range allStatus.Tcp {
if s.Name == name {
return &s, nil
}
}
for _, s := range allStatus.Udp {
if s.Name == name {
return &s, nil
}
}
for _, s := range allStatus.Http {
if s.Name == name {
return &s, nil
}
}
for _, s := range allStatus.Https {
if s.Name == name {
return &s, nil
}
}
for _, s := range allStatus.Stcp {
if s.Name == name {
return &s, nil
}
}
for _, s := range allStatus.Xtcp {
if s.Name == name {
return &s, nil
}
}
defer resp.Body.Close()
if resp.StatusCode != 200 {
return status, fmt.Errorf("admin api status code [%d]", resp.StatusCode)
}
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
return status, err
}
allStatus := &client.StatusResp{}
err = json.Unmarshal(body, &allStatus)
if err != nil {
return status, fmt.Errorf("unmarshal http response error: %s", strings.TrimSpace(string(body)))
}
for _, s := range allStatus.Tcp {
if s.Name == name {
return &s, nil
}
}
for _, s := range allStatus.Udp {
if s.Name == name {
return &s, nil
}
}
for _, s := range allStatus.Http {
if s.Name == name {
return &s, nil
}
}
for _, s := range allStatus.Https {
if s.Name == name {
return &s, nil
}
}
for _, s := range allStatus.Stcp {
if s.Name == name {
return &s, nil
}
}
for _, s := range allStatus.Xtcp {
if s.Name == name {
return &s, nil
}
}

return status, errors.New("no proxy status found")
}
@@ -87,13 +87,13 @@ func ReloadConf(reloadAddr string, user string, passwd string) error {
resp, err := http.DefaultClient.Do(req)
if err != nil {
return err
} else {
if resp.StatusCode != 200 {
return fmt.Errorf("admin api status code [%d]", resp.StatusCode)
}
defer resp.Body.Close()
io.Copy(ioutil.Discard, resp.Body)
}
defer resp.Body.Close()

if resp.StatusCode != 200 {
return fmt.Errorf("admin api status code [%d]", resp.StatusCode)
}
io.Copy(ioutil.Discard, resp.Body)
return nil
}

@@ -211,6 +211,10 @@ func ConnectServerByProxy(proxyUrl string, protocol string, addr string) (c Conn

func ConnectServerByProxyWithTLS(proxyUrl string, protocol string, addr string, tlsConfig *tls.Config) (c Conn, err error) {
c, err = ConnectServerByProxy(proxyUrl, protocol, addr)
if err != nil {
return
}

if tlsConfig == nil {
return
}
@@ -17,6 +17,7 @@ package net
import (
"crypto/tls"
"net"
"time"

gnet "github.com/fatedier/golib/net"
)
@@ -31,10 +32,17 @@ func WrapTLSClientConn(c net.Conn, tlsConfig *tls.Config) (out Conn) {
return
}

func CheckAndEnableTLSServerConn(c net.Conn, tlsConfig *tls.Config) (out Conn) {
sc, r := gnet.NewSharedConnSize(c, 1)
func CheckAndEnableTLSServerConnWithTimeout(c net.Conn, tlsConfig *tls.Config, timeout time.Duration) (out Conn, err error) {
sc, r := gnet.NewSharedConnSize(c, 2)
buf := make([]byte, 1)
n, _ := r.Read(buf)
var n int
c.SetReadDeadline(time.Now().Add(timeout))
n, err = r.Read(buf)
c.SetReadDeadline(time.Time{})
if err != nil {
return
}

if n == 1 && int(buf[0]) == FRP_TLS_HEAD_BYTE {
out = WrapConn(tls.Server(c, tlsConfig))
} else {
@@ -19,7 +19,7 @@ import (
"strings"
)

var version string = "0.25.1"
var version string = "0.28.1"

func Full() string {
return version
@@ -23,7 +23,6 @@ import (
"net"
"net/http"
"strings"
"sync"
"time"

frpLog "github.com/fatedier/frp/utils/log"
@@ -32,8 +31,7 @@ import (
)

var (
ErrRouterConfigConflict = errors.New("router config conflict")
ErrNoDomain = errors.New("no such domain")
ErrNoDomain = errors.New("no such domain")
)

func getHostFromAddr(addr string) (host string) {
@@ -51,21 +49,19 @@ type HttpReverseProxyOptions struct {
}

type HttpReverseProxy struct {
proxy *ReverseProxy

proxy *ReverseProxy
vhostRouter *VhostRouters

responseHeaderTimeout time.Duration
cfgMu sync.RWMutex
}

func NewHttpReverseProxy(option HttpReverseProxyOptions) *HttpReverseProxy {
func NewHttpReverseProxy(option HttpReverseProxyOptions, vhostRouter *VhostRouters) *HttpReverseProxy {
if option.ResponseHeaderTimeoutS <= 0 {
option.ResponseHeaderTimeoutS = 60
}
rp := &HttpReverseProxy{
responseHeaderTimeout: time.Duration(option.ResponseHeaderTimeoutS) * time.Second,
vhostRouter: NewVhostRouters(),
vhostRouter: vhostRouter,
}
proxy := &ReverseProxy{
Director: func(req *http.Request) {
@@ -89,36 +85,34 @@ func NewHttpReverseProxy(option HttpReverseProxyOptions) *HttpReverseProxy {
DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
url := ctx.Value("url").(string)
host := getHostFromAddr(ctx.Value("host").(string))
return rp.CreateConnection(host, url)
remote := ctx.Value("remote").(string)
return rp.CreateConnection(host, url, remote)
},
},
WebSocketDialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
url := ctx.Value("url").(string)
host := getHostFromAddr(ctx.Value("host").(string))
return rp.CreateConnection(host, url)
},
BufferPool: newWrapPool(),
ErrorLog: log.New(newWrapLogger(), "", 0),
ErrorHandler: func(rw http.ResponseWriter, req *http.Request, err error) {
frpLog.Warn("do http proxy request error: %v", err)
rw.WriteHeader(http.StatusNotFound)
rw.Write(getNotFoundPageContent())
},
}
rp.proxy = proxy
return rp
}

// Register register the route config to reverse proxy
// reverse proxy will use CreateConnFn from routeCfg to create a connection to the remote service
func (rp *HttpReverseProxy) Register(routeCfg VhostRouteConfig) error {
rp.cfgMu.Lock()
defer rp.cfgMu.Unlock()
_, ok := rp.vhostRouter.Exist(routeCfg.Domain, routeCfg.Location)
if ok {
return ErrRouterConfigConflict
} else {
rp.vhostRouter.Add(routeCfg.Domain, routeCfg.Location, &routeCfg)
err := rp.vhostRouter.Add(routeCfg.Domain, routeCfg.Location, &routeCfg)
if err != nil {
return err
}
return nil
}

// UnRegister unregister route config by domain and location
func (rp *HttpReverseProxy) UnRegister(domain string, location string) {
rp.cfgMu.Lock()
defer rp.cfgMu.Unlock()
rp.vhostRouter.Del(domain, location)
}

@@ -138,12 +132,13 @@ func (rp *HttpReverseProxy) GetHeaders(domain string, location string) (headers
return
}

func (rp *HttpReverseProxy) CreateConnection(domain string, location string) (net.Conn, error) {
// CreateConnection create a new connection by route config
func (rp *HttpReverseProxy) CreateConnection(domain string, location string, remoteAddr string) (net.Conn, error) {
vr, ok := rp.getVhost(domain, location)
if ok {
fn := vr.payload.(*VhostRouteConfig).CreateConnFn
if fn != nil {
return fn()
return fn(remoteAddr)
}
}
return nil, fmt.Errorf("%v: %s %s", ErrNoDomain, domain, location)
@@ -161,10 +156,8 @@ func (rp *HttpReverseProxy) CheckAuth(domain, location, user, passwd string) boo
return true
}

// getVhost get vhost router by domain and location
func (rp *HttpReverseProxy) getVhost(domain string, location string) (vr *VhostRouter, ok bool) {
rp.cfgMu.RLock()
defer rp.cfgMu.RUnlock()

// first we check the full hostname
// if not exist, then check the wildcard_domain such as *.example.com
vr, ok = rp.vhostRouter.Get(domain, location)
@@ -15,13 +15,18 @@
package vhost

import (
"bytes"
"io/ioutil"
"net/http"
"strings"

frpLog "github.com/fatedier/frp/utils/log"
"github.com/fatedier/frp/utils/version"
)

var (
NotFoundPagePath = ""
)

const (
NotFound = `<!DOCTYPE html>
<html>
@@ -46,10 +51,28 @@ Please try again later.</p>
`
)

func getNotFoundPageContent() []byte {
var (
buf []byte
err error
)
if NotFoundPagePath != "" {
buf, err = ioutil.ReadFile(NotFoundPagePath)
if err != nil {
frpLog.Warn("read custom 404 page error: %v", err)
buf = []byte(NotFound)
}
} else {
buf = []byte(NotFound)
}
return buf
}

func notFoundResponse() *http.Response {
header := make(http.Header)
header.Set("server", "frp/"+version.Full())
header.Set("Content-Type", "text/html")

res := &http.Response{
Status: "Not Found",
StatusCode: 404,
@@ -57,7 +80,7 @@ func notFoundResponse() *http.Response {
ProtoMajor: 1,
ProtoMinor: 0,
Header: header,
Body: ioutil.NopCloser(strings.NewReader(NotFound)),
Body: ioutil.NopCloser(bytes.NewReader(getNotFoundPageContent())),
}
return res
}
@@ -8,6 +8,7 @@ package vhost

import (
"context"
"fmt"
"io"
"log"
"net"
@@ -17,13 +18,9 @@ import (
"sync"
"time"

frpIo "github.com/fatedier/golib/io"
"golang.org/x/net/http/httpguts"
)

// onExitFlushLoop is a callback set by tests to detect the state of the
// flushLoop() goroutine.
var onExitFlushLoop func()

// ReverseProxy is an HTTP Handler that takes an incoming request and
// sends it to another server, proxying the response back to the
// client.
@@ -44,12 +41,17 @@ type ReverseProxy struct {
// to flush to the client while copying the
// response body.
// If zero, no periodic flushing is done.
// A negative value means to flush immediately
// after each write to the client.
// The FlushInterval is ignored when ReverseProxy
// recognizes a response as a streaming response;
// for such responses, writes are flushed to the client
// immediately.
FlushInterval time.Duration

// ErrorLog specifies an optional logger for errors
// that occur when attempting to proxy the request.
// If nil, logging goes to os.Stderr via the log package's
// standard logger.
// If nil, logging is done via the log package's standard logger.
ErrorLog *log.Logger

// BufferPool optionally specifies a buffer pool to
@@ -57,12 +59,23 @@ type ReverseProxy struct {
// copying HTTP response bodies.
BufferPool BufferPool

// ModifyResponse is an optional function that
// modifies the Response from the backend.
// If it returns an error, the proxy returns a StatusBadGateway error.
// ModifyResponse is an optional function that modifies the
// Response from the backend. It is called if the backend
// returns a response at all, with any HTTP status code.
// If the backend is unreachable, the optional ErrorHandler is
// called without any call to ModifyResponse.
//
// If ModifyResponse returns an error, ErrorHandler is called
// with its error value. If ErrorHandler is nil, its default
// implementation is used.
ModifyResponse func(*http.Response) error

WebSocketDialContext func(ctx context.Context, network, addr string) (net.Conn, error)
// ErrorHandler is an optional function that handles errors
// reaching the backend or errors from ModifyResponse.
//
// If nil, the default is to log the provided error and return
// a 502 Status Bad Gateway response.
ErrorHandler func(http.ResponseWriter, *http.Request, error)
}

// A BufferPool is an interface for getting and returning temporary
@@ -118,18 +131,11 @@ func copyHeader(dst, src http.Header) {
}
}

func cloneHeader(h http.Header) http.Header {
h2 := make(http.Header, len(h))
for k, vv := range h {
vv2 := make([]string, len(vv))
copy(vv2, vv)
h2[k] = vv2
}
return h2
}

// Hop-by-hop headers. These are removed when sent to the backend.
// http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html
// As of RFC 7230, hop-by-hop headers are required to appear in the
// Connection header field. These are the headers defined by the
// obsoleted RFC 2616 (section 13.5.1) and are used for backward
// compatibility.
var hopHeaders = []string{
"Connection",
"Proxy-Connection", // non-standard but still sent by libcurl and rejected by e.g. google
@@ -137,54 +143,38 @@ var hopHeaders = []string{
"Proxy-Authenticate",
"Proxy-Authorization",
"Te", // canonicalized version of "TE"
"Trailer", // not Trailers per URL above; http://www.rfc-editor.org/errata_search.php?eid=4522
"Trailer", // not Trailers per URL above; https://www.rfc-editor.org/errata_search.php?eid=4522
"Transfer-Encoding",
"Upgrade",
}

func (p *ReverseProxy) defaultErrorHandler(rw http.ResponseWriter, req *http.Request, err error) {
p.logf("http: proxy error: %v", err)
rw.WriteHeader(http.StatusBadGateway)
}

func (p *ReverseProxy) getErrorHandler() func(http.ResponseWriter, *http.Request, error) {
if p.ErrorHandler != nil {
return p.ErrorHandler
}
return p.defaultErrorHandler
}

// modifyResponse conditionally runs the optional ModifyResponse hook
// and reports whether the request should proceed.
func (p *ReverseProxy) modifyResponse(rw http.ResponseWriter, res *http.Response, req *http.Request) bool {
if p.ModifyResponse == nil {
return true
}
if err := p.ModifyResponse(res); err != nil {
res.Body.Close()
p.getErrorHandler()(rw, req, err)
return false
}
return true
}

func (p *ReverseProxy) ServeHTTP(rw http.ResponseWriter, req *http.Request) {
if IsWebsocketRequest(req) {
p.serveWebSocket(rw, req)
} else {
p.serveHTTP(rw, req)
}
}

func (p *ReverseProxy) serveWebSocket(rw http.ResponseWriter, req *http.Request) {
if p.WebSocketDialContext == nil {
rw.WriteHeader(500)
return
}

req = req.WithContext(context.WithValue(req.Context(), "url", req.URL.Path))
req = req.WithContext(context.WithValue(req.Context(), "host", req.Host))

targetConn, err := p.WebSocketDialContext(req.Context(), "tcp", "")
if err != nil {
rw.WriteHeader(501)
return
}
defer targetConn.Close()

p.Director(req)

hijacker, ok := rw.(http.Hijacker)
if !ok {
rw.WriteHeader(500)
return
}
conn, _, errHijack := hijacker.Hijack()
if errHijack != nil {
rw.WriteHeader(500)
return
}
defer conn.Close()

req.Write(targetConn)
frpIo.Join(conn, targetConn)
}

func (p *ReverseProxy) serveHTTP(rw http.ResponseWriter, req *http.Request) {
transport := p.Transport
if transport == nil {
transport = http.DefaultTransport
@@ -205,37 +195,49 @@ func (p *ReverseProxy) serveHTTP(rw http.ResponseWriter, req *http.Request) {
}()
}

outreq := req.WithContext(ctx) // includes shallow copies of maps, but okay
outreq := req.WithContext(ctx)
if req.ContentLength == 0 {
outreq.Body = nil // Issue 16036: nil Body for http.Transport retries
}

outreq.Header = cloneHeader(req.Header)

// Modify for frp
// =============================
// Modified for frp
outreq = outreq.WithContext(context.WithValue(outreq.Context(), "url", req.URL.Path))
outreq = outreq.WithContext(context.WithValue(outreq.Context(), "host", req.Host))
outreq = outreq.WithContext(context.WithValue(outreq.Context(), "remote", req.RemoteAddr))
// =============================

p.Director(outreq)
outreq.Close = false

// Remove hop-by-hop headers listed in the "Connection" header.
// See RFC 2616, section 14.10.
if c := outreq.Header.Get("Connection"); c != "" {
for _, f := range strings.Split(c, ",") {
if f = strings.TrimSpace(f); f != "" {
outreq.Header.Del(f)
}
}
}
reqUpType := upgradeType(outreq.Header)
removeConnectionHeaders(outreq.Header)

// Remove hop-by-hop headers to the backend. Especially
// important is "Connection" because we want a persistent
// connection, regardless of what the client sent to us.
for _, h := range hopHeaders {
if outreq.Header.Get(h) != "" {
outreq.Header.Del(h)
hv := outreq.Header.Get(h)
if hv == "" {
continue
}
if h == "Te" && hv == "trailers" {
// Issue 21096: tell backend applications that
// care about trailer support that we support
// trailers. (We do, but we don't go out of
// our way to advertise that unless the
// incoming client request thought it was
// worth mentioning)
continue
}
outreq.Header.Del(h)
}

// After stripping all the hop-by-hop connection headers above, add back any
// necessary for protocol upgrades, such as for websockets.
if reqUpType != "" {
outreq.Header.Set("Connection", "Upgrade")
outreq.Header.Set("Upgrade", reqUpType)
}

if clientIP, _, err := net.SplitHostPort(req.RemoteAddr); err == nil {
@@ -250,32 +252,27 @@ func (p *ReverseProxy) serveHTTP(rw http.ResponseWriter, req *http.Request) {

res, err := transport.RoundTrip(outreq)
if err != nil {
p.logf("http: proxy error: %v", err)
rw.WriteHeader(http.StatusNotFound)
rw.Write([]byte(NotFound))
p.getErrorHandler()(rw, outreq, err)
return
}

// Remove hop-by-hop headers listed in the
// "Connection" header of the response.
if c := res.Header.Get("Connection"); c != "" {
for _, f := range strings.Split(c, ",") {
if f = strings.TrimSpace(f); f != "" {
res.Header.Del(f)
}
// Deal with 101 Switching Protocols responses: (WebSocket, h2c, etc)
if res.StatusCode == http.StatusSwitchingProtocols {
if !p.modifyResponse(rw, res, outreq) {
return
}
p.handleUpgradeResponse(rw, outreq, res)
return
}

removeConnectionHeaders(res.Header)

for _, h := range hopHeaders {
res.Header.Del(h)
}

if p.ModifyResponse != nil {
if err := p.ModifyResponse(res); err != nil {
p.logf("http: proxy error: %v", err)
rw.WriteHeader(http.StatusBadGateway)
return
}
if !p.modifyResponse(rw, res, outreq) {
return
}

copyHeader(rw.Header(), res.Header)
@@ -292,6 +289,21 @@ func (p *ReverseProxy) serveHTTP(rw http.ResponseWriter, req *http.Request) {
}

rw.WriteHeader(res.StatusCode)

err = p.copyResponse(rw, res.Body, p.flushInterval(req, res))
if err != nil {
defer res.Body.Close()
// Since we're streaming the response, if we run into an error all we can do
// is abort the request. Issue 23643: ReverseProxy should use ErrAbortHandler
// on read error while copying body.
if !shouldPanicOnCopyError(req) {
p.logf("suppressing panic for copyResponse error in test; copy error: %v", err)
return
}
panic(http.ErrAbortHandler)
}
res.Body.Close() // close now, instead of defer, to populate res.Trailer

if len(res.Trailer) > 0 {
// Force chunking if we saw a response trailer.
// This prevents net/http from calculating the length for short
@@ -300,8 +312,6 @@ func (p *ReverseProxy) serveHTTP(rw http.ResponseWriter, req *http.Request) {
fl.Flush()
}
}
p.copyResponse(rw, res.Body)
res.Body.Close() // close now, instead of defer, to populate res.Trailer

if len(res.Trailer) == announcedTrailers {
copyHeader(rw.Header(), res.Trailer)
@@ -316,16 +326,68 @@ func (p *ReverseProxy) serveHTTP(rw http.ResponseWriter, req *http.Request) {
}
}

func (p *ReverseProxy) copyResponse(dst io.Writer, src io.Reader) {
if p.FlushInterval != 0 {
var inOurTests bool // whether we're in our own tests

// shouldPanicOnCopyError reports whether the reverse proxy should
// panic with http.ErrAbortHandler. This is the right thing to do by
// default, but Go 1.10 and earlier did not, so existing unit tests
// weren't expecting panics. Only panic in our own tests, or when
// running under the HTTP server.
func shouldPanicOnCopyError(req *http.Request) bool {
if inOurTests {
// Our tests know to handle this panic.
return true
}
if req.Context().Value(http.ServerContextKey) != nil {
// We seem to be running under an HTTP server, so
// it'll recover the panic.
return true
}
// Otherwise act like Go 1.10 and earlier to not break
// existing tests.
return false
}

// removeConnectionHeaders removes hop-by-hop headers listed in the "Connection" header of h.
// See RFC 7230, section 6.1
func removeConnectionHeaders(h http.Header) {
for _, f := range h["Connection"] {
for _, sf := range strings.Split(f, ",") {
if sf = strings.TrimSpace(sf); sf != "" {
h.Del(sf)
}
}
}
}

// flushInterval returns the p.FlushInterval value, conditionally
// overriding its value for a specific request/response.
func (p *ReverseProxy) flushInterval(req *http.Request, res *http.Response) time.Duration {
resCT := res.Header.Get("Content-Type")

// For Server-Sent Events responses, flush immediately.
// The MIME type is defined in https://www.w3.org/TR/eventsource/#text-event-stream
if resCT == "text/event-stream" {
return -1 // negative means immediately
}

// TODO: more specific cases? e.g. res.ContentLength == -1?
return p.FlushInterval
}

func (p *ReverseProxy) copyResponse(dst io.Writer, src io.Reader, flushInterval time.Duration) error {
if flushInterval != 0 {
if wf, ok := dst.(writeFlusher); ok {
mlw := &maxLatencyWriter{
dst: wf,
latency: p.FlushInterval,
done: make(chan bool),
latency: flushInterval,
}
go mlw.flushLoop()
defer mlw.stop()

// set up initial timer so headers get flushed even if body writes are delayed
mlw.flushPending = true
mlw.t = time.AfterFunc(flushInterval, mlw.delayedFlush)

dst = mlw
}
}
@@ -333,13 +395,14 @@ func (p *ReverseProxy) copyResponse(dst io.Writer, src io.Reader) {
var buf []byte
if p.BufferPool != nil {
buf = p.BufferPool.Get()
defer p.BufferPool.Put(buf)
}
p.copyBuffer(dst, src, buf)
if p.BufferPool != nil {
p.BufferPool.Put(buf)
}
_, err := p.copyBuffer(dst, src, buf)
return err
}

// copyBuffer returns any write errors or non-EOF read errors, and the amount
// of bytes written.
func (p *ReverseProxy) copyBuffer(dst io.Writer, src io.Reader, buf []byte) (int64, error) {
if len(buf) == 0 {
buf = make([]byte, 32*1024)
@@ -363,6 +426,9 @@ func (p *ReverseProxy) copyBuffer(dst io.Writer, src io.Reader, buf []byte) (int
}
}
if rerr != nil {
if rerr == io.EOF {
rerr = nil
}
return written, rerr
}
}
@@ -383,47 +449,115 @@ type writeFlusher interface {

type maxLatencyWriter struct {
dst writeFlusher
latency time.Duration
latency time.Duration // non-zero; negative means to flush immediately

mu sync.Mutex // protects Write + Flush
done chan bool
mu sync.Mutex // protects t, flushPending, and dst.Flush
t *time.Timer
flushPending bool
}

func (m *maxLatencyWriter) Write(p []byte) (int, error) {
func (m *maxLatencyWriter) Write(p []byte) (n int, err error) {
m.mu.Lock()
defer m.mu.Unlock()
return m.dst.Write(p)
n, err = m.dst.Write(p)
if m.latency < 0 {
m.dst.Flush()
return
}
if m.flushPending {
return
}
if m.t == nil {
m.t = time.AfterFunc(m.latency, m.delayedFlush)
} else {
m.t.Reset(m.latency)
}
m.flushPending = true
return
}

func (m *maxLatencyWriter) flushLoop() {
t := time.NewTicker(m.latency)
defer t.Stop()
for {
select {
case <-m.done:
if onExitFlushLoop != nil {
onExitFlushLoop()
}
return
case <-t.C:
m.mu.Lock()
m.dst.Flush()
m.mu.Unlock()
}
func (m *maxLatencyWriter) delayedFlush() {
m.mu.Lock()
defer m.mu.Unlock()
if !m.flushPending { // if stop was called but AfterFunc already started this goroutine
return
}
m.dst.Flush()
m.flushPending = false
}

func (m *maxLatencyWriter) stop() {
m.mu.Lock()
defer m.mu.Unlock()
m.flushPending = false
if m.t != nil {
m.t.Stop()
}
}

func (m *maxLatencyWriter) stop() { m.done <- true }

func IsWebsocketRequest(req *http.Request) bool {
containsHeader := func(name, value string) bool {
items := strings.Split(req.Header.Get(name), ",")
for _, item := range items {
if value == strings.ToLower(strings.TrimSpace(item)) {
return true
}
}
return false
func upgradeType(h http.Header) string {
if !httpguts.HeaderValuesContainsToken(h["Connection"], "Upgrade") {
return ""
}
return containsHeader("Connection", "upgrade") && containsHeader("Upgrade", "websocket")
return strings.ToLower(h.Get("Upgrade"))
}

func (p *ReverseProxy) handleUpgradeResponse(rw http.ResponseWriter, req *http.Request, res *http.Response) {
reqUpType := upgradeType(req.Header)
resUpType := upgradeType(res.Header)
if reqUpType != resUpType {
p.getErrorHandler()(rw, req, fmt.Errorf("backend tried to switch protocol %q when %q was requested", resUpType, reqUpType))
return
}

copyHeader(res.Header, rw.Header())

hj, ok := rw.(http.Hijacker)
if !ok {
p.getErrorHandler()(rw, req, fmt.Errorf("can't switch protocols using non-Hijacker ResponseWriter type %T", rw))
return
}
backConn, ok := res.Body.(io.ReadWriteCloser)
if !ok {
p.getErrorHandler()(rw, req, fmt.Errorf("internal error: 101 switching protocols response with non-writable body"))
return
}
defer backConn.Close()
conn, brw, err := hj.Hijack()
if err != nil {
p.getErrorHandler()(rw, req, fmt.Errorf("Hijack failed on protocol switch: %v", err))
return
}
defer conn.Close()
res.Body = nil // so res.Write only writes the headers; we have res.Body in backConn above
if err := res.Write(brw); err != nil {
p.getErrorHandler()(rw, req, fmt.Errorf("response write: %v", err))
return
}
if err := brw.Flush(); err != nil {
p.getErrorHandler()(rw, req, fmt.Errorf("response flush: %v", err))
return
}
errc := make(chan error, 1)
spc := switchProtocolCopier{user: conn, backend: backConn}
go spc.copyToBackend(errc)
go spc.copyFromBackend(errc)
<-errc
return
}

// switchProtocolCopier exists so goroutines proxying data back and
// forth have nice names in stacks.
type switchProtocolCopier struct {
user, backend io.ReadWriter
}

func (c switchProtocolCopier) copyFromBackend(errc chan<- error) {
_, err := io.Copy(c.user, c.backend)
errc <- err
}

func (c switchProtocolCopier) copyToBackend(errc chan<- error) {
_, err := io.Copy(c.backend, c.user)
errc <- err
}
@@ -1,11 +1,16 @@
package vhost

import (
"errors"
"sort"
"strings"
"sync"
)

var (
ErrRouterConfigConflict = errors.New("router config conflict")
)

type VhostRouters struct {
RouterByDomain map[string][]*VhostRouter
mutex sync.RWMutex
@@ -24,10 +29,14 @@ func NewVhostRouters() *VhostRouters {
}
}

func (r *VhostRouters) Add(domain, location string, payload interface{}) {
func (r *VhostRouters) Add(domain, location string, payload interface{}) error {
r.mutex.Lock()
defer r.mutex.Unlock()

if _, exist := r.exist(domain, location); exist {
return ErrRouterConfigConflict
}

vrs, found := r.RouterByDomain[domain]
if !found {
vrs = make([]*VhostRouter, 0, 1)
@@ -42,6 +51,7 @@ func (r *VhostRouters) Add(domain, location string, payload interface{}) {

sort.Sort(sort.Reverse(ByLocation(vrs)))
r.RouterByDomain[domain] = vrs
return nil
}

func (r *VhostRouters) Del(domain, location string) {
@@ -80,10 +90,7 @@ func (r *VhostRouters) Get(host, path string) (vr *VhostRouter, exist bool) {
return
}

func (r *VhostRouters) Exist(host, path string) (vr *VhostRouter, exist bool) {
r.mutex.RLock()
defer r.mutex.RUnlock()

func (r *VhostRouters) exist(host, path string) (vr *VhostRouter, exist bool) {
vrs, found := r.RouterByDomain[host]
if !found {
return
@@ -15,7 +15,6 @@ package vhost
import (
"fmt"
"strings"
"sync"
"time"

"github.com/fatedier/frp/utils/log"
@@ -35,7 +34,6 @@ type VhostMuxer struct {
authFunc httpAuthFunc
rewriteFunc hostRewriteFunc
registryRouter *VhostRouters
mutex sync.RWMutex
}

func NewVhostMuxer(listener frpNet.Listener, vhostFunc muxFunc, authFunc httpAuthFunc, rewriteFunc hostRewriteFunc, timeout time.Duration) (mux *VhostMuxer, err error) {
@@ -51,8 +49,9 @@ func NewVhostMuxer(listener frpNet.Listener, vhostFunc muxFunc, authFunc httpAut
return mux, nil
}

type CreateConnFunc func() (frpNet.Conn, error)
type CreateConnFunc func(remoteAddr string) (frpNet.Conn, error)

// VhostRouteConfig is the params used to match HTTP requests
type VhostRouteConfig struct {
Domain string
Location string
@@ -67,14 +66,6 @@ type VhostRouteConfig struct {
// listen for a new domain name, if rewriteHost is not empty and rewriteFunc is not nil
// then rewrite the host header to rewriteHost
func (v *VhostMuxer) Listen(cfg *VhostRouteConfig) (l *Listener, err error) {
v.mutex.Lock()
defer v.mutex.Unlock()

_, ok := v.registryRouter.Exist(cfg.Domain, cfg.Location)
if ok {
return nil, fmt.Errorf("hostname [%s] location [%s] is already registered", cfg.Domain, cfg.Location)
}

l = &Listener{
name: cfg.Domain,
location: cfg.Location,
@@ -85,14 +76,14 @@ func (v *VhostMuxer) Listen(cfg *VhostRouteConfig) (l *Listener, err error) {
accept: make(chan frpNet.Conn),
Logger: log.NewPrefixLogger(""),
}
v.registryRouter.Add(cfg.Domain, cfg.Location, l)
err = v.registryRouter.Add(cfg.Domain, cfg.Location, l)
if err != nil {
return
}
return l, nil
}

func (v *VhostMuxer) getListener(name, path string) (l *Listener, exist bool) {
v.mutex.RLock()
defer v.mutex.RUnlock()

// first we check the full hostname
// if not exist, then check the wildcard_domain such as *.example.com
vr, found := v.registryRouter.Get(name, path)
4 vendor/github.com/fatedier/kcp-go/.travis.yml generated vendored
@@ -1,6 +1,8 @@
language: go
go:
- 1.9
- 1.9.x
- 1.10.x
- 1.11.x

before_install:
- go get -t -v ./...
154 vendor/github.com/fatedier/kcp-go/README.md generated vendored
@@ -20,24 +20,21 @@

**kcp-go** is a **Production-Grade Reliable-UDP** library for [golang](https://golang.org/).

It provides **fast, ordered and error-checked** delivery of streams over **UDP** packets, has been well tested with opensource project [kcptun](https://github.com/xtaci/kcptun). Millions of devices(from low-end MIPS routers to high-end servers) are running with **kcp-go** at present, including applications like **online games, live broadcasting, file synchronization and network acceleration**.
This library intents to provide a **smooth, resilient, ordered, error-checked and anonymous** delivery of streams over **UDP** packets, it has been battle-tested with opensource project [kcptun](https://github.com/xtaci/kcptun). Millions of devices(from low-end MIPS routers to high-end servers) have deployed **kcp-go** powered program in a variety of forms like **online games, live broadcasting, file synchronization and network acceleration**.

[Lastest Release](https://github.com/xtaci/kcp-go/releases)

## Features

1. Optimized for **Realtime Online Games, Audio/Video Streaming and Latency-Sensitive Distributed Consensus**.
1. Compatible with [skywind3000's](https://github.com/skywind3000) C version with language specific optimizations.
1. Designed for **Latency-sensitive** scenarios.
1. **Cache friendly** and **Memory optimized** design, offers extremely **High Performance** core.
1. Handles **>5K concurrent connections** on a single commodity server.
1. Compatible with [net.Conn](https://golang.org/pkg/net/#Conn) and [net.Listener](https://golang.org/pkg/net/#Listener), a drop-in replacement for [net.TCPConn](https://golang.org/pkg/net/#TCPConn).
1. [FEC(Forward Error Correction)](https://en.wikipedia.org/wiki/Forward_error_correction) Support with [Reed-Solomon Codes](https://en.wikipedia.org/wiki/Reed%E2%80%93Solomon_error_correction)
1. Packet level encryption support with [AES](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard), [TEA](https://en.wikipedia.org/wiki/Tiny_Encryption_Algorithm), [3DES](https://en.wikipedia.org/wiki/Triple_DES), [Blowfish](https://en.wikipedia.org/wiki/Blowfish_(cipher)), [Cast5](https://en.wikipedia.org/wiki/CAST-128), [Salsa20]( https://en.wikipedia.org/wiki/Salsa20), etc. in [CFB](https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation#Cipher_Feedback_.28CFB.29) mode.
1. **Fixed number of goroutines** created for the entire server application, minimized goroutine context switch.

## Conventions

Control messages like **SYN/FIN/RST** in TCP **are not defined** in KCP, you need some **keepalive/heartbeat mechanism** in the application-level. A real world example is to use some **multiplexing** protocol over session, such as [smux](https://github.com/xtaci/smux)(with embedded keepalive mechanism), see [kcptun](https://github.com/xtaci/kcptun) for example.
1. Packet level encryption support with [AES](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard), [TEA](https://en.wikipedia.org/wiki/Tiny_Encryption_Algorithm), [3DES](https://en.wikipedia.org/wiki/Triple_DES), [Blowfish](https://en.wikipedia.org/wiki/Blowfish_(cipher)), [Cast5](https://en.wikipedia.org/wiki/CAST-128), [Salsa20]( https://en.wikipedia.org/wiki/Salsa20), etc. in [CFB](https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation#Cipher_Feedback_.28CFB.29) mode, which generates completely anonymous packet.
1. Only **A fixed number of goroutines** will be created for the entire server application, costs in **context switch** between goroutines have been taken into consideration.
1. Compatible with [skywind3000's](https://github.com/skywind3000) C version with various improvements.
1. Platform-dependent optimizations: [sendmmsg](http://man7.org/linux/man-pages/man2/sendmmsg.2.html) and [recvmmsg](http://man7.org/linux/man-pages/man2/recvmmsg.2.html) were expoloited for linux.

## Documentation

@@ -47,6 +44,24 @@ For complete documentation, see the associated [Godoc](https://godoc.org/github.

<img src="frame.png" alt="Frame Format" height="109px" />

```
NONCE:
16bytes cryptographically secure random number, nonce changes for every packet.

CRC32:
CRC-32 checksum of data using the IEEE polynomial

FEC TYPE:
typeData = 0xF1
typeParity = 0xF2

FEC SEQID:
monotonically increasing in range: [0, (0xffffffff/shardSize) * shardSize - 1]

SIZE:
The size of KCP frame plus 2
```

```
+-----------------+
| SESSION |
@@ -69,58 +84,69 @@ For complete documentation, see the associated [Godoc](https://godoc.org/github.
```

## Usage
## Examples

Client: [full demo](https://github.com/xtaci/kcptun/blob/master/client/main.go)
```go
kcpconn, err := kcp.DialWithOptions("192.168.0.1:10000", nil, 10, 3)
```
Server: [full demo](https://github.com/xtaci/kcptun/blob/master/server/main.go)
```go
lis, err := kcp.ListenWithOptions(":10000", nil, 10, 3)
```
1. [simple examples](https://github.com/xtaci/kcp-go/tree/master/examples)
2. [kcptun client](https://github.com/xtaci/kcptun/blob/master/client/main.go)
3. [kcptun server](https://github.com/xtaci/kcptun/blob/master/server/main.go)

## Performance
## Benchmark
```
Model Name: MacBook Pro
Model Identifier: MacBookPro12,1
Model Identifier: MacBookPro14,1
Processor Name: Intel Core i5
Processor Speed: 2.7 GHz
Processor Speed: 3.1 GHz
Number of Processors: 1
Total Number of Cores: 2
L2 Cache (per Core): 256 KB
L3 Cache: 3 MB
L3 Cache: 4 MB
Memory: 8 GB
```
```
|
||||
$ go test -v -run=^$ -bench .
|
||||
beginning tests, encryption:salsa20, fec:10/3
|
||||
BenchmarkAES128-4 200000 8256 ns/op 363.33 MB/s 0 B/op 0 allocs/op
|
||||
BenchmarkAES192-4 200000 9153 ns/op 327.74 MB/s 0 B/op 0 allocs/op
|
||||
BenchmarkAES256-4 200000 10079 ns/op 297.64 MB/s 0 B/op 0 allocs/op
|
||||
BenchmarkTEA-4 100000 18643 ns/op 160.91 MB/s 0 B/op 0 allocs/op
|
||||
BenchmarkXOR-4 5000000 316 ns/op 9486.46 MB/s 0 B/op 0 allocs/op
|
||||
BenchmarkBlowfish-4 50000 35643 ns/op 84.17 MB/s 0 B/op 0 allocs/op
|
||||
BenchmarkNone-4 30000000 56.2 ns/op 53371.83 MB/s 0 B/op 0 allocs/op
|
||||
BenchmarkCast5-4 30000 44744 ns/op 67.05 MB/s 0 B/op 0 allocs/op
|
||||
Benchmark3DES-4 2000 639839 ns/op 4.69 MB/s 2 B/op 0 allocs/op
|
||||
BenchmarkTwofish-4 30000 43368 ns/op 69.17 MB/s 0 B/op 0 allocs/op
|
||||
BenchmarkXTEA-4 30000 57673 ns/op 52.02 MB/s 0 B/op 0 allocs/op
|
||||
BenchmarkSalsa20-4 300000 3917 ns/op 765.80 MB/s 0 B/op 0 allocs/op
|
||||
BenchmarkFlush-4 10000000 226 ns/op 0 B/op 0 allocs/op
|
||||
BenchmarkEchoSpeed4K-4 5000 300030 ns/op 13.65 MB/s 5672 B/op 177 allocs/op
|
||||
BenchmarkEchoSpeed64K-4 500 3202335 ns/op 20.47 MB/s 73295 B/op 2198 allocs/op
|
||||
BenchmarkEchoSpeed512K-4 50 24926924 ns/op 21.03 MB/s 659339 B/op 17602 allocs/op
|
||||
BenchmarkEchoSpeed1M-4 20 64857821 ns/op 16.17 MB/s 1772437 B/op 42869 allocs/op
|
||||
BenchmarkSinkSpeed4K-4 30000 50230 ns/op 81.54 MB/s 2058 B/op 48 allocs/op
|
||||
BenchmarkSinkSpeed64K-4 2000 648718 ns/op 101.02 MB/s 31165 B/op 687 allocs/op
|
||||
BenchmarkSinkSpeed256K-4 300 4635905 ns/op 113.09 MB/s 286229 B/op 5516 allocs/op
|
||||
BenchmarkSinkSpeed1M-4 200 9566933 ns/op 109.60 MB/s 463771 B/op 10701 allocs/op
|
||||
goos: darwin
|
||||
goarch: amd64
|
||||
pkg: github.com/xtaci/kcp-go
|
||||
BenchmarkSM4-4 50000 32180 ns/op 93.23 MB/s 0 B/op 0 allocs/op
|
||||
BenchmarkAES128-4 500000 3285 ns/op 913.21 MB/s 0 B/op 0 allocs/op
|
||||
BenchmarkAES192-4 300000 3623 ns/op 827.85 MB/s 0 B/op 0 allocs/op
|
||||
BenchmarkAES256-4 300000 3874 ns/op 774.20 MB/s 0 B/op 0 allocs/op
|
||||
BenchmarkTEA-4 100000 15384 ns/op 195.00 MB/s 0 B/op 0 allocs/op
|
||||
BenchmarkXOR-4 20000000 89.9 ns/op 33372.00 MB/s 0 B/op 0 allocs/op
|
||||
BenchmarkBlowfish-4 50000 26927 ns/op 111.41 MB/s 0 B/op 0 allocs/op
|
||||
BenchmarkNone-4 30000000 45.7 ns/op 65597.94 MB/s 0 B/op 0 allocs/op
|
||||
BenchmarkCast5-4 50000 34258 ns/op 87.57 MB/s 0 B/op 0 allocs/op
|
||||
Benchmark3DES-4 10000 117149 ns/op 25.61 MB/s 0 B/op 0 allocs/op
|
||||
BenchmarkTwofish-4 50000 33538 ns/op 89.45 MB/s 0 B/op 0 allocs/op
|
||||
BenchmarkXTEA-4 30000 45666 ns/op 65.69 MB/s 0 B/op 0 allocs/op
|
||||
BenchmarkSalsa20-4 500000 3308 ns/op 906.76 MB/s 0 B/op 0 allocs/op
|
||||
BenchmarkCRC32-4 20000000 65.2 ns/op 15712.43 MB/s
|
||||
BenchmarkCsprngSystem-4 1000000 1150 ns/op 13.91 MB/s
|
||||
BenchmarkCsprngMD5-4 10000000 145 ns/op 110.26 MB/s
|
||||
BenchmarkCsprngSHA1-4 10000000 158 ns/op 126.54 MB/s
|
||||
BenchmarkCsprngNonceMD5-4 10000000 153 ns/op 104.22 MB/s
|
||||
BenchmarkCsprngNonceAES128-4 100000000 19.1 ns/op 837.81 MB/s
|
||||
BenchmarkFECDecode-4 1000000 1119 ns/op 1339.61 MB/s 1606 B/op 2 allocs/op
|
||||
BenchmarkFECEncode-4 2000000 832 ns/op 1801.83 MB/s 17 B/op 0 allocs/op
|
||||
BenchmarkFlush-4 5000000 272 ns/op 0 B/op 0 allocs/op
|
||||
BenchmarkEchoSpeed4K-4 5000 259617 ns/op 15.78 MB/s 5451 B/op 149 allocs/op
|
||||
BenchmarkEchoSpeed64K-4 1000 1706084 ns/op 38.41 MB/s 56002 B/op 1604 allocs/op
|
||||
BenchmarkEchoSpeed512K-4 100 14345505 ns/op 36.55 MB/s 482597 B/op 13045 allocs/op
|
||||
BenchmarkEchoSpeed1M-4 30 34859104 ns/op 30.08 MB/s 1143773 B/op 27186 allocs/op
|
||||
BenchmarkSinkSpeed4K-4 50000 31369 ns/op 130.57 MB/s 1566 B/op 30 allocs/op
|
||||
BenchmarkSinkSpeed64K-4 5000 329065 ns/op 199.16 MB/s 21529 B/op 453 allocs/op
|
||||
BenchmarkSinkSpeed256K-4 500 2373354 ns/op 220.91 MB/s 166332 B/op 3554 allocs/op
|
||||
BenchmarkSinkSpeed1M-4 300 5117927 ns/op 204.88 MB/s 310378 B/op 6988 allocs/op
|
||||
PASS
|
||||
ok _/Users/xtaci/.godeps/src/github.com/xtaci/kcp-go 39.689s
|
||||
ok github.com/xtaci/kcp-go 50.349s
|
||||
```
|
||||
|
||||
## Design Considerations
|
||||
|
||||
## Typical Flame Graph
|
||||

|
||||
|
||||
## Key Design Considerations
|
||||
|
||||
1. slice vs. container/list
|
||||
|
||||
@@ -139,7 +165,9 @@ List structure introduces **heavy cache misses** compared to slice which owns be
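A toy benchmark sketching that locality argument (illustrative only, not code from kcp-go; save it as a `_test.go` file and run `go test -bench .`):

```go
package locality

import (
	"container/list"
	"testing"
)

const n = 1 << 16

// BenchmarkSlice sums a contiguous slice: sequential memory access.
func BenchmarkSlice(b *testing.B) {
	s := make([]int, n)
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		sum := 0
		for _, v := range s {
			sum += v
		}
		_ = sum
	}
}

// BenchmarkList sums a linked list: pointer chasing, poor cache locality.
func BenchmarkList(b *testing.B) {
	l := list.New()
	for i := 0; i < n; i++ {
		l.PushBack(i)
	}
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		sum := 0
		for e := l.Front(); e != nil; e = e.Next() {
			sum += e.Value.(int)
		}
		_ = sum
	}
}
```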
|
||||
|
||||
2. Timing accuracy vs. syscall clock_gettime
|
||||
|
||||
Timing is **critical** to **RTT estimator**, inaccurate timing introduces false retransmissions in KCP, but calling `time.Now()` costs 42 cycles(10.5ns on 4GHz CPU, 15.6ns on my MacBook Pro 2.7GHz), the benchmark for time.Now():
|
||||
Timing is **critical** to the **RTT estimator**; inaccurate timing leads to false retransmissions in KCP, but calling `time.Now()` costs 42 cycles (10.5ns on a 4GHz CPU, 15.6ns on my MacBook Pro 2.7GHz).
|
||||
|
||||
The benchmark for time.Now() lies here:
|
||||
|
||||
https://github.com/xtaci/notes/blob/master/golang/benchmark2/syscall_test.go
|
||||
|
||||
@@ -147,14 +175,37 @@ https://github.com/xtaci/notes/blob/master/golang/benchmark2/syscall_test.go
|
||||
BenchmarkNow-4 100000000 15.6 ns/op
|
||||
```
|
||||
|
||||
In kcp-go, after each `kcp.output()` function call, current time will be updated upon return, and each `kcp.flush()` will get current time once. For most of the time, 5000 connections costs 5000 * 15.6ns = 78us(no packet needs to be sent by `kcp.output()`), as for 10MB/s data transfering with 1400 MTU, `kcp.output()` will be called around 7500 times and costs 117us for `time.Now()` in **every second**.
|
||||
In kcp-go, after each `kcp.output()` function call, the current clock time is updated upon return, and for a single `kcp.flush()` operation the current time is queried from the system only once. Most of the time, 5000 connections cost 5000 * 15.6ns = 78us (a fixed cost while no packets need to be sent); for 10MB/s data transfer with a 1400-byte MTU, `kcp.output()` will be called around 7500 times and `time.Now()` costs 117us in **every second**.
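The arithmetic above, spelled out (the 15.6ns figure is the `time.Now()` cost measured in the linked benchmark):

```go
package main

import "fmt"

func main() {
	const perCallNs = 15.6 // measured cost of one time.Now() call

	idle := 5000 * perCallNs / 1000 // 5000 idle connections, one query per flush
	busy := 7500 * perCallNs / 1000 // ~7500 kcp.output() calls/s at 10MB/s with 1400 MTU
	fmt.Printf("idle: %.0fus, busy: %.0fus of time.Now() per second\n", idle, busy)
}
```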
|
||||
|
||||
3. Memory management
|
||||
|
||||
## Tuning
|
||||
Primary memory allocation is done from a global buffer pool `xmitBuf`: in kcp-go, when we need to allocate some bytes, we get them from that pool, and a fixed-capacity 1500-byte slice (mtuLimit) is returned. The rx queue, tx queue and fec queue all receive bytes from there, and they return the bytes to the pool after use to prevent unnecessary zeroing of bytes. The pool mechanism maintains a high watermark of slice objects; these in-flight objects from the pool survive periodic garbage collection, while the pool keeps the ability to return memory to the runtime when idle.
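A minimal sketch of such a pool built on `sync.Pool` (the `xmitBuf` name and the 1500-byte `mtuLimit` follow the description above; this is an illustration, not the vendored implementation):

```go
package main

import (
	"fmt"
	"sync"
)

const mtuLimit = 1500

// xmitBuf hands out fixed-capacity 1500-byte slices.
var xmitBuf = sync.Pool{
	New: func() interface{} { return make([]byte, mtuLimit) },
}

func main() {
	buf := xmitBuf.Get().([]byte)[:512] // borrow, then trim to the needed length
	copy(buf, "payload")
	// ... hand buf to the rx/tx/fec queues ...
	fmt.Println("borrowed buffer with cap", cap(buf))
	xmitBuf.Put(buf) // capacity is still 1500, so the next Get can re-slice it
}
```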
|
||||
|
||||
Q: I'm handling >5K connections on my server. the CPU utilization is high.
|
||||
4. Information security
|
||||
|
||||
A: A standalone `agent` or `gate` server for kcp-go is suggested, not only for CPU utilization, but also important to the **precision** of RTT measurements which indirectly affects retransmission. By increasing update `interval` with `SetNoDelay` like `conn.SetNoDelay(1, 40, 1, 1)` will dramatically reduce system load.
|
||||
kcp-go ships with built-in packet encryption powered by various block encryption algorithms and works in [Cipher Feedback Mode](https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation#Cipher_Feedback_(CFB)); for each packet to be sent, the encryption process starts by encrypting a [nonce](https://en.wikipedia.org/wiki/Cryptographic_nonce) drawn from the [system entropy](https://en.wikipedia.org/wiki//dev/random), so encrypting the same plaintext never leads to the same ciphertext.
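The gist of that scheme can be sketched with the standard library alone (kcp-go uses its own optimized block-level CFB, shown in `crypt.go` below; this only illustrates the "fresh nonce per packet" idea):

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
	"io"
)

// sealPacket prepends a fresh 16-byte nonce and CFB-encrypts the payload with it.
func sealPacket(block cipher.Block, payload []byte) []byte {
	pkt := make([]byte, 16+len(payload))
	io.ReadFull(rand.Reader, pkt[:16]) // nonce from system entropy
	stream := cipher.NewCFBEncrypter(block, pkt[:16])
	stream.XORKeyStream(pkt[16:], payload)
	return pkt
}

func main() {
	key := make([]byte, 16)
	io.ReadFull(rand.Reader, key)
	block, _ := aes.NewCipher(key)
	a := sealPacket(block, []byte("hello"))
	b := sealPacket(block, []byte("hello"))
	fmt.Println(string(a) == string(b)) // false: same plaintext, different ciphertext
}
```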
|
||||
|
||||
The contents of the packets are completely anonymous with encryption, including the headers (FEC, KCP), checksums and contents. Note that no matter which encryption method you choose in your upper layer, if you disable encryption here the transport will still be somewhat insecure: since the header is ***PLAINTEXT*** to everyone, it is susceptible to header tampering, such as jamming the *sliding window size*, *round-trip time*, *FEC properties* and *checksums*. ```AES-128``` is suggested as minimal encryption, since modern CPUs ship with [AES-NI](https://en.wikipedia.org/wiki/AES_instruction_set) instructions and it performs even better than `salsa20` (check the table above).
|
||||
|
||||
Other possible attacks on kcp-go include: a) [traffic analysis](https://en.wikipedia.org/wiki/Traffic_analysis): dataflow to specific websites may show patterns while exchanging data, but this type of eavesdropping has been mitigated by adapting [smux](https://github.com/xtaci/smux) to mix data streams so as to introduce noise; a perfect solution has not appeared yet, though theoretically shuffling/mixing messages on a larger-scale network may mitigate the problem. b) [replay attack](https://en.wikipedia.org/wiki/Replay_attack): since asymmetric encryption has not been introduced into kcp-go for some reason, capturing packets and replaying them on a different machine is possible (notice: hijacking the session and decrypting the contents is still *impossible*), so upper layers should contain an asymmetric encryption system to guarantee the authenticity of each message (to process each message exactly once), such as HTTPS/OpenSSL/LibreSSL; only signing the requests with private keys can eliminate this type of attack.
|
||||
|
||||
## Connection Termination
|
||||
|
||||
Control messages like **SYN/FIN/RST** in TCP **are not defined** in KCP, you need some **keepalive/heartbeat mechanism** at the application level. A real-world example is to use some **multiplexing** protocol over the session, such as [smux](https://github.com/xtaci/smux) (with an embedded keepalive mechanism); see [kcptun](https://github.com/xtaci/kcptun) for an example, or the sketch below.
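A hedged sketch of that layering (assuming the upstream `github.com/xtaci/kcp-go` and `github.com/xtaci/smux` APIs; smux's keepalive settings live in its `Config`):

```go
package main

import (
	kcp "github.com/xtaci/kcp-go"
	"github.com/xtaci/smux"
)

func main() {
	conn, err := kcp.DialWithOptions("192.168.0.1:10000", nil, 10, 3)
	if err != nil {
		panic(err)
	}
	// smux multiplexes streams over the single KCP session and sends its own
	// keepalive frames, standing in for TCP's missing FIN/RST semantics.
	session, err := smux.Client(conn, smux.DefaultConfig())
	if err != nil {
		panic(err)
	}
	stream, err := session.OpenStream()
	if err != nil {
		panic(err)
	}
	stream.Write([]byte("ping"))
	stream.Close()
	session.Close()
}
```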
|
||||
|
||||
## FAQ
|
||||
|
||||
Q: I'm handling >5K connections on my server and the CPU utilization is very high.
|
||||
|
||||
A: A standalone `agent` or `gate` server for running kcp-go is suggested, not only for CPU utilization, but also for the **precision** of RTT measurements (timing), which indirectly affects retransmission. Increasing the update `interval` with `SetNoDelay`, e.g. `conn.SetNoDelay(1, 40, 1, 1)`, will dramatically reduce system load, but lower the performance.
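For example (a sketch assuming the upstream `github.com/xtaci/kcp-go` import path; the four arguments are `nodelay, interval, resend, nc`):

```go
package main

import kcp "github.com/xtaci/kcp-go"

func main() {
	conn, err := kcp.DialWithOptions("192.168.0.1:10000", nil, 10, 3)
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	// interval=40ms means fewer flush cycles per second: lower CPU load,
	// at the cost of some latency and throughput.
	conn.SetNoDelay(1, 40, 1, 1)
}
```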
|
||||
|
||||
Q: When should I enable FEC?
|
||||
|
||||
A: Forward error correction is critical to long-distance transmission, because a packet loss leads to a huge time penalty. And in the complicated packet-routing networks of the modern world, a round-trip-time-based loss check will not always be efficient: the large deviation of RTT samples over a long path usually leads to a larger RTO value in a typical RTT estimator, which, in other words, slows down the transmission.
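To see why parity helps, here is a small sketch using the `github.com/klauspost/reedsolomon` codec that the vendored `fec.go` below imports: with 10 data shards and 3 parity shards, any 3 lost packets out of each group of 13 can be rebuilt locally instead of waiting a round trip for retransmission.

```go
package main

import (
	"fmt"

	"github.com/klauspost/reedsolomon"
)

func main() {
	enc, err := reedsolomon.New(10, 3) // 10 data shards + 3 parity shards
	if err != nil {
		panic(err)
	}
	shards, _ := enc.Split(make([]byte, 10*1024)) // 13 equally sized shards
	if err := enc.Encode(shards); err != nil {
		panic(err)
	}
	shards[0], shards[4], shards[12] = nil, nil, nil // simulate 3 lost packets
	if err := enc.Reconstruct(shards); err != nil {
		panic(err)
	}
	ok, _ := enc.Verify(shards)
	fmt.Println("recovered:", ok) // true
}
```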
|
||||
|
||||
Q: Should I enable encryption?
|
||||
|
||||
A: Yes, for the safety of the protocol, even if the upper layer is already encrypted.
|
||||
|
||||
## Who is using this?
|
||||
|
||||
@@ -163,10 +214,9 @@ A: A standalone `agent` or `gate` server for kcp-go is suggested, not only for C
|
||||
3. https://github.com/smallnest/rpcx -- A RPC service framework based on net/rpc like alibaba Dubbo and weibo Motan.
|
||||
4. https://github.com/gonet2/agent -- A gateway for games with stream multiplexing.
|
||||
5. https://github.com/syncthing/syncthing -- Open Source Continuous File Synchronization.
|
||||
6. https://play.google.com/store/apps/details?id=com.k17game.k3 -- Battle Zone - Earth 2048, a world-wide strategy game.
|
||||
|
||||
## Links
|
||||
|
||||
1. https://github.com/xtaci/libkcp -- FEC enhanced KCP session library for iOS/Android in C++
|
||||
2. https://github.com/skywind3000/kcp -- A Fast and Reliable ARQ Protocol
|
||||
3. https://github.com/templexxx/reedsolomon -- Reed-Solomon Erasure Coding in Go
|
||||
3. https://github.com/klauspost/reedsolomon -- Reed-Solomon Erasure Coding in Go
|
||||
|
||||
12
vendor/github.com/fatedier/kcp-go/batchconn.go
generated
vendored
Normal file
@@ -0,0 +1,12 @@
|
||||
package kcp
|
||||
|
||||
import "golang.org/x/net/ipv4"
|
||||
|
||||
const (
|
||||
batchSize = 16
|
||||
)
|
||||
|
||||
type batchConn interface {
|
||||
WriteBatch(ms []ipv4.Message, flags int) (int, error)
|
||||
ReadBatch(ms []ipv4.Message, flags int) (int, error)
|
||||
}
|
||||
601
vendor/github.com/fatedier/kcp-go/crypt.go
generated
vendored
@@ -57,8 +57,8 @@ func (c *salsa20BlockCrypt) Decrypt(dst, src []byte) {
|
||||
}
|
||||
|
||||
type sm4BlockCrypt struct {
|
||||
encbuf []byte
|
||||
decbuf []byte
|
||||
encbuf [sm4.BlockSize]byte
|
||||
decbuf [2 * sm4.BlockSize]byte
|
||||
block cipher.Block
|
||||
}
|
||||
|
||||
@@ -70,17 +70,15 @@ func NewSM4BlockCrypt(key []byte) (BlockCrypt, error) {
|
||||
return nil, err
|
||||
}
|
||||
c.block = block
|
||||
c.encbuf = make([]byte, sm4.BlockSize)
|
||||
c.decbuf = make([]byte, 2*sm4.BlockSize)
|
||||
return c, nil
|
||||
}
|
||||
|
||||
func (c *sm4BlockCrypt) Encrypt(dst, src []byte) { encrypt(c.block, dst, src, c.encbuf) }
|
||||
func (c *sm4BlockCrypt) Decrypt(dst, src []byte) { decrypt(c.block, dst, src, c.decbuf) }
|
||||
func (c *sm4BlockCrypt) Encrypt(dst, src []byte) { encrypt(c.block, dst, src, c.encbuf[:]) }
|
||||
func (c *sm4BlockCrypt) Decrypt(dst, src []byte) { decrypt(c.block, dst, src, c.decbuf[:]) }
|
||||
|
||||
type twofishBlockCrypt struct {
|
||||
encbuf []byte
|
||||
decbuf []byte
|
||||
encbuf [twofish.BlockSize]byte
|
||||
decbuf [2 * twofish.BlockSize]byte
|
||||
block cipher.Block
|
||||
}
|
||||
|
||||
@@ -92,17 +90,15 @@ func NewTwofishBlockCrypt(key []byte) (BlockCrypt, error) {
|
||||
return nil, err
|
||||
}
|
||||
c.block = block
|
||||
c.encbuf = make([]byte, twofish.BlockSize)
|
||||
c.decbuf = make([]byte, 2*twofish.BlockSize)
|
||||
return c, nil
|
||||
}
|
||||
|
||||
func (c *twofishBlockCrypt) Encrypt(dst, src []byte) { encrypt(c.block, dst, src, c.encbuf) }
|
||||
func (c *twofishBlockCrypt) Decrypt(dst, src []byte) { decrypt(c.block, dst, src, c.decbuf) }
|
||||
func (c *twofishBlockCrypt) Encrypt(dst, src []byte) { encrypt(c.block, dst, src, c.encbuf[:]) }
|
||||
func (c *twofishBlockCrypt) Decrypt(dst, src []byte) { decrypt(c.block, dst, src, c.decbuf[:]) }
|
||||
|
||||
type tripleDESBlockCrypt struct {
|
||||
encbuf []byte
|
||||
decbuf []byte
|
||||
encbuf [des.BlockSize]byte
|
||||
decbuf [2 * des.BlockSize]byte
|
||||
block cipher.Block
|
||||
}
|
||||
|
||||
@@ -114,17 +110,15 @@ func NewTripleDESBlockCrypt(key []byte) (BlockCrypt, error) {
|
||||
return nil, err
|
||||
}
|
||||
c.block = block
|
||||
c.encbuf = make([]byte, des.BlockSize)
|
||||
c.decbuf = make([]byte, 2*des.BlockSize)
|
||||
return c, nil
|
||||
}
|
||||
|
||||
func (c *tripleDESBlockCrypt) Encrypt(dst, src []byte) { encrypt(c.block, dst, src, c.encbuf) }
|
||||
func (c *tripleDESBlockCrypt) Decrypt(dst, src []byte) { decrypt(c.block, dst, src, c.decbuf) }
|
||||
func (c *tripleDESBlockCrypt) Encrypt(dst, src []byte) { encrypt(c.block, dst, src, c.encbuf[:]) }
|
||||
func (c *tripleDESBlockCrypt) Decrypt(dst, src []byte) { decrypt(c.block, dst, src, c.decbuf[:]) }
|
||||
|
||||
type cast5BlockCrypt struct {
|
||||
encbuf []byte
|
||||
decbuf []byte
|
||||
encbuf [cast5.BlockSize]byte
|
||||
decbuf [2 * cast5.BlockSize]byte
|
||||
block cipher.Block
|
||||
}
|
||||
|
||||
@@ -136,17 +130,15 @@ func NewCast5BlockCrypt(key []byte) (BlockCrypt, error) {
|
||||
return nil, err
|
||||
}
|
||||
c.block = block
|
||||
c.encbuf = make([]byte, cast5.BlockSize)
|
||||
c.decbuf = make([]byte, 2*cast5.BlockSize)
|
||||
return c, nil
|
||||
}
|
||||
|
||||
func (c *cast5BlockCrypt) Encrypt(dst, src []byte) { encrypt(c.block, dst, src, c.encbuf) }
|
||||
func (c *cast5BlockCrypt) Decrypt(dst, src []byte) { decrypt(c.block, dst, src, c.decbuf) }
|
||||
func (c *cast5BlockCrypt) Encrypt(dst, src []byte) { encrypt(c.block, dst, src, c.encbuf[:]) }
|
||||
func (c *cast5BlockCrypt) Decrypt(dst, src []byte) { decrypt(c.block, dst, src, c.decbuf[:]) }
|
||||
|
||||
type blowfishBlockCrypt struct {
|
||||
encbuf []byte
|
||||
decbuf []byte
|
||||
encbuf [blowfish.BlockSize]byte
|
||||
decbuf [2 * blowfish.BlockSize]byte
|
||||
block cipher.Block
|
||||
}
|
||||
|
||||
@@ -158,17 +150,15 @@ func NewBlowfishBlockCrypt(key []byte) (BlockCrypt, error) {
|
||||
return nil, err
|
||||
}
|
||||
c.block = block
|
||||
c.encbuf = make([]byte, blowfish.BlockSize)
|
||||
c.decbuf = make([]byte, 2*blowfish.BlockSize)
|
||||
return c, nil
|
||||
}
|
||||
|
||||
func (c *blowfishBlockCrypt) Encrypt(dst, src []byte) { encrypt(c.block, dst, src, c.encbuf) }
|
||||
func (c *blowfishBlockCrypt) Decrypt(dst, src []byte) { decrypt(c.block, dst, src, c.decbuf) }
|
||||
func (c *blowfishBlockCrypt) Encrypt(dst, src []byte) { encrypt(c.block, dst, src, c.encbuf[:]) }
|
||||
func (c *blowfishBlockCrypt) Decrypt(dst, src []byte) { decrypt(c.block, dst, src, c.decbuf[:]) }
|
||||
|
||||
type aesBlockCrypt struct {
|
||||
encbuf []byte
|
||||
decbuf []byte
|
||||
encbuf [aes.BlockSize]byte
|
||||
decbuf [2 * aes.BlockSize]byte
|
||||
block cipher.Block
|
||||
}
|
||||
|
||||
@@ -180,17 +170,15 @@ func NewAESBlockCrypt(key []byte) (BlockCrypt, error) {
|
||||
return nil, err
|
||||
}
|
||||
c.block = block
|
||||
c.encbuf = make([]byte, aes.BlockSize)
|
||||
c.decbuf = make([]byte, 2*aes.BlockSize)
|
||||
return c, nil
|
||||
}
|
||||
|
||||
func (c *aesBlockCrypt) Encrypt(dst, src []byte) { encrypt(c.block, dst, src, c.encbuf) }
|
||||
func (c *aesBlockCrypt) Decrypt(dst, src []byte) { decrypt(c.block, dst, src, c.decbuf) }
|
||||
func (c *aesBlockCrypt) Encrypt(dst, src []byte) { encrypt(c.block, dst, src, c.encbuf[:]) }
|
||||
func (c *aesBlockCrypt) Decrypt(dst, src []byte) { decrypt(c.block, dst, src, c.decbuf[:]) }
|
||||
|
||||
type teaBlockCrypt struct {
|
||||
encbuf []byte
|
||||
decbuf []byte
|
||||
encbuf [tea.BlockSize]byte
|
||||
decbuf [2 * tea.BlockSize]byte
|
||||
block cipher.Block
|
||||
}
|
||||
|
||||
@@ -202,17 +190,15 @@ func NewTEABlockCrypt(key []byte) (BlockCrypt, error) {
|
||||
return nil, err
|
||||
}
|
||||
c.block = block
|
||||
c.encbuf = make([]byte, tea.BlockSize)
|
||||
c.decbuf = make([]byte, 2*tea.BlockSize)
|
||||
return c, nil
|
||||
}
|
||||
|
||||
func (c *teaBlockCrypt) Encrypt(dst, src []byte) { encrypt(c.block, dst, src, c.encbuf) }
|
||||
func (c *teaBlockCrypt) Decrypt(dst, src []byte) { decrypt(c.block, dst, src, c.decbuf) }
|
||||
func (c *teaBlockCrypt) Encrypt(dst, src []byte) { encrypt(c.block, dst, src, c.encbuf[:]) }
|
||||
func (c *teaBlockCrypt) Decrypt(dst, src []byte) { decrypt(c.block, dst, src, c.decbuf[:]) }
|
||||
|
||||
type xteaBlockCrypt struct {
|
||||
encbuf []byte
|
||||
decbuf []byte
|
||||
encbuf [xtea.BlockSize]byte
|
||||
decbuf [2 * xtea.BlockSize]byte
|
||||
block cipher.Block
|
||||
}
|
||||
|
||||
@@ -224,13 +210,11 @@ func NewXTEABlockCrypt(key []byte) (BlockCrypt, error) {
|
||||
return nil, err
|
||||
}
|
||||
c.block = block
|
||||
c.encbuf = make([]byte, xtea.BlockSize)
|
||||
c.decbuf = make([]byte, 2*xtea.BlockSize)
|
||||
return c, nil
|
||||
}
|
||||
|
||||
func (c *xteaBlockCrypt) Encrypt(dst, src []byte) { encrypt(c.block, dst, src, c.encbuf) }
|
||||
func (c *xteaBlockCrypt) Decrypt(dst, src []byte) { decrypt(c.block, dst, src, c.decbuf) }
|
||||
func (c *xteaBlockCrypt) Encrypt(dst, src []byte) { encrypt(c.block, dst, src, c.encbuf[:]) }
|
||||
func (c *xteaBlockCrypt) Decrypt(dst, src []byte) { decrypt(c.block, dst, src, c.decbuf[:]) }
|
||||
|
||||
type simpleXORBlockCrypt struct {
|
||||
xortbl []byte
|
||||
@@ -258,31 +242,544 @@ func (c *noneBlockCrypt) Decrypt(dst, src []byte) { copy(dst, src) }
|
||||
|
||||
// packet encryption with local CFB mode
|
||||
func encrypt(block cipher.Block, dst, src, buf []byte) {
|
||||
switch block.BlockSize() {
|
||||
case 8:
|
||||
encrypt8(block, dst, src, buf)
|
||||
case 16:
|
||||
encrypt16(block, dst, src, buf)
|
||||
default:
|
||||
encryptVariant(block, dst, src, buf)
|
||||
}
|
||||
}
|
||||
|
||||
// optimized encryption for the ciphers which works in 8-bytes
|
||||
func encrypt8(block cipher.Block, dst, src, buf []byte) {
|
||||
tbl := buf[:8]
|
||||
block.Encrypt(tbl, initialVector)
|
||||
n := len(src) / 8
|
||||
base := 0
|
||||
repeat := n / 8
|
||||
left := n % 8
|
||||
for i := 0; i < repeat; i++ {
|
||||
s := src[base:][0:64]
|
||||
d := dst[base:][0:64]
|
||||
// 1
|
||||
xor.BytesSrc1(d[0:8], s[0:8], tbl)
|
||||
block.Encrypt(tbl, d[0:8])
|
||||
// 2
|
||||
xor.BytesSrc1(d[8:16], s[8:16], tbl)
|
||||
block.Encrypt(tbl, d[8:16])
|
||||
// 3
|
||||
xor.BytesSrc1(d[16:24], s[16:24], tbl)
|
||||
block.Encrypt(tbl, d[16:24])
|
||||
// 4
|
||||
xor.BytesSrc1(d[24:32], s[24:32], tbl)
|
||||
block.Encrypt(tbl, d[24:32])
|
||||
// 5
|
||||
xor.BytesSrc1(d[32:40], s[32:40], tbl)
|
||||
block.Encrypt(tbl, d[32:40])
|
||||
// 6
|
||||
xor.BytesSrc1(d[40:48], s[40:48], tbl)
|
||||
block.Encrypt(tbl, d[40:48])
|
||||
// 7
|
||||
xor.BytesSrc1(d[48:56], s[48:56], tbl)
|
||||
block.Encrypt(tbl, d[48:56])
|
||||
// 8
|
||||
xor.BytesSrc1(d[56:64], s[56:64], tbl)
|
||||
block.Encrypt(tbl, d[56:64])
|
||||
base += 64
|
||||
}
|
||||
|
||||
switch left {
|
||||
case 7:
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
block.Encrypt(tbl, dst[base:])
|
||||
base += 8
|
||||
fallthrough
|
||||
case 6:
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
block.Encrypt(tbl, dst[base:])
|
||||
base += 8
|
||||
fallthrough
|
||||
case 5:
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
block.Encrypt(tbl, dst[base:])
|
||||
base += 8
|
||||
fallthrough
|
||||
case 4:
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
block.Encrypt(tbl, dst[base:])
|
||||
base += 8
|
||||
fallthrough
|
||||
case 3:
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
block.Encrypt(tbl, dst[base:])
|
||||
base += 8
|
||||
fallthrough
|
||||
case 2:
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
block.Encrypt(tbl, dst[base:])
|
||||
base += 8
|
||||
fallthrough
|
||||
case 1:
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
block.Encrypt(tbl, dst[base:])
|
||||
base += 8
|
||||
fallthrough
|
||||
case 0:
|
||||
xor.BytesSrc0(dst[base:], src[base:], tbl)
|
||||
}
|
||||
}
|
||||
|
||||
// optimized encryption for the ciphers which works in 16-bytes
|
||||
func encrypt16(block cipher.Block, dst, src, buf []byte) {
|
||||
tbl := buf[:16]
|
||||
block.Encrypt(tbl, initialVector)
|
||||
n := len(src) / 16
|
||||
base := 0
|
||||
repeat := n / 8
|
||||
left := n % 8
|
||||
for i := 0; i < repeat; i++ {
|
||||
s := src[base:][0:128]
|
||||
d := dst[base:][0:128]
|
||||
// 1
|
||||
xor.BytesSrc1(d[0:16], s[0:16], tbl)
|
||||
block.Encrypt(tbl, d[0:16])
|
||||
// 2
|
||||
xor.BytesSrc1(d[16:32], s[16:32], tbl)
|
||||
block.Encrypt(tbl, d[16:32])
|
||||
// 3
|
||||
xor.BytesSrc1(d[32:48], s[32:48], tbl)
|
||||
block.Encrypt(tbl, d[32:48])
|
||||
// 4
|
||||
xor.BytesSrc1(d[48:64], s[48:64], tbl)
|
||||
block.Encrypt(tbl, d[48:64])
|
||||
// 5
|
||||
xor.BytesSrc1(d[64:80], s[64:80], tbl)
|
||||
block.Encrypt(tbl, d[64:80])
|
||||
// 6
|
||||
xor.BytesSrc1(d[80:96], s[80:96], tbl)
|
||||
block.Encrypt(tbl, d[80:96])
|
||||
// 7
|
||||
xor.BytesSrc1(d[96:112], s[96:112], tbl)
|
||||
block.Encrypt(tbl, d[96:112])
|
||||
// 8
|
||||
xor.BytesSrc1(d[112:128], s[112:128], tbl)
|
||||
block.Encrypt(tbl, d[112:128])
|
||||
base += 128
|
||||
}
|
||||
|
||||
switch left {
|
||||
case 7:
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
block.Encrypt(tbl, dst[base:])
|
||||
base += 16
|
||||
fallthrough
|
||||
case 6:
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
block.Encrypt(tbl, dst[base:])
|
||||
base += 16
|
||||
fallthrough
|
||||
case 5:
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
block.Encrypt(tbl, dst[base:])
|
||||
base += 16
|
||||
fallthrough
|
||||
case 4:
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
block.Encrypt(tbl, dst[base:])
|
||||
base += 16
|
||||
fallthrough
|
||||
case 3:
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
block.Encrypt(tbl, dst[base:])
|
||||
base += 16
|
||||
fallthrough
|
||||
case 2:
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
block.Encrypt(tbl, dst[base:])
|
||||
base += 16
|
||||
fallthrough
|
||||
case 1:
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
block.Encrypt(tbl, dst[base:])
|
||||
base += 16
|
||||
fallthrough
|
||||
case 0:
|
||||
xor.BytesSrc0(dst[base:], src[base:], tbl)
|
||||
}
|
||||
}
|
||||
|
||||
func encryptVariant(block cipher.Block, dst, src, buf []byte) {
|
||||
blocksize := block.BlockSize()
|
||||
tbl := buf[:blocksize]
|
||||
block.Encrypt(tbl, initialVector)
|
||||
n := len(src) / blocksize
|
||||
base := 0
|
||||
for i := 0; i < n; i++ {
|
||||
repeat := n / 8
|
||||
left := n % 8
|
||||
for i := 0; i < repeat; i++ {
|
||||
// 1
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
block.Encrypt(tbl, dst[base:])
|
||||
base += blocksize
|
||||
|
||||
// 2
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
block.Encrypt(tbl, dst[base:])
|
||||
base += blocksize
|
||||
|
||||
// 3
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
block.Encrypt(tbl, dst[base:])
|
||||
base += blocksize
|
||||
|
||||
// 4
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
block.Encrypt(tbl, dst[base:])
|
||||
base += blocksize
|
||||
|
||||
// 5
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
block.Encrypt(tbl, dst[base:])
|
||||
base += blocksize
|
||||
|
||||
// 6
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
block.Encrypt(tbl, dst[base:])
|
||||
base += blocksize
|
||||
|
||||
// 7
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
block.Encrypt(tbl, dst[base:])
|
||||
base += blocksize
|
||||
|
||||
// 8
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
block.Encrypt(tbl, dst[base:])
|
||||
base += blocksize
|
||||
}
|
||||
xor.BytesSrc0(dst[base:], src[base:], tbl)
|
||||
|
||||
switch left {
|
||||
case 7:
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
block.Encrypt(tbl, dst[base:])
|
||||
base += blocksize
|
||||
fallthrough
|
||||
case 6:
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
block.Encrypt(tbl, dst[base:])
|
||||
base += blocksize
|
||||
fallthrough
|
||||
case 5:
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
block.Encrypt(tbl, dst[base:])
|
||||
base += blocksize
|
||||
fallthrough
|
||||
case 4:
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
block.Encrypt(tbl, dst[base:])
|
||||
base += blocksize
|
||||
fallthrough
|
||||
case 3:
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
block.Encrypt(tbl, dst[base:])
|
||||
base += blocksize
|
||||
fallthrough
|
||||
case 2:
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
block.Encrypt(tbl, dst[base:])
|
||||
base += blocksize
|
||||
fallthrough
|
||||
case 1:
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
block.Encrypt(tbl, dst[base:])
|
||||
base += blocksize
|
||||
fallthrough
|
||||
case 0:
|
||||
xor.BytesSrc0(dst[base:], src[base:], tbl)
|
||||
}
|
||||
}
|
||||
|
||||
// decryption
|
||||
func decrypt(block cipher.Block, dst, src, buf []byte) {
|
||||
switch block.BlockSize() {
|
||||
case 8:
|
||||
decrypt8(block, dst, src, buf)
|
||||
case 16:
|
||||
decrypt16(block, dst, src, buf)
|
||||
default:
|
||||
decryptVariant(block, dst, src, buf)
|
||||
}
|
||||
}
|
||||
|
||||
func decrypt8(block cipher.Block, dst, src, buf []byte) {
|
||||
tbl := buf[0:8]
|
||||
next := buf[8:16]
|
||||
block.Encrypt(tbl, initialVector)
|
||||
n := len(src) / 8
|
||||
base := 0
|
||||
repeat := n / 8
|
||||
left := n % 8
|
||||
for i := 0; i < repeat; i++ {
|
||||
s := src[base:][0:64]
|
||||
d := dst[base:][0:64]
|
||||
// 1
|
||||
block.Encrypt(next, s[0:8])
|
||||
xor.BytesSrc1(d[0:8], s[0:8], tbl)
|
||||
// 2
|
||||
block.Encrypt(tbl, s[8:16])
|
||||
xor.BytesSrc1(d[8:16], s[8:16], next)
|
||||
// 3
|
||||
block.Encrypt(next, s[16:24])
|
||||
xor.BytesSrc1(d[16:24], s[16:24], tbl)
|
||||
// 4
|
||||
block.Encrypt(tbl, s[24:32])
|
||||
xor.BytesSrc1(d[24:32], s[24:32], next)
|
||||
// 5
|
||||
block.Encrypt(next, s[32:40])
|
||||
xor.BytesSrc1(d[32:40], s[32:40], tbl)
|
||||
// 6
|
||||
block.Encrypt(tbl, s[40:48])
|
||||
xor.BytesSrc1(d[40:48], s[40:48], next)
|
||||
// 7
|
||||
block.Encrypt(next, s[48:56])
|
||||
xor.BytesSrc1(d[48:56], s[48:56], tbl)
|
||||
// 8
|
||||
block.Encrypt(tbl, s[56:64])
|
||||
xor.BytesSrc1(d[56:64], s[56:64], next)
|
||||
base += 64
|
||||
}
|
||||
|
||||
switch left {
|
||||
case 7:
|
||||
block.Encrypt(next, src[base:])
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
tbl, next = next, tbl
|
||||
base += 8
|
||||
fallthrough
|
||||
case 6:
|
||||
block.Encrypt(next, src[base:])
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
tbl, next = next, tbl
|
||||
base += 8
|
||||
fallthrough
|
||||
case 5:
|
||||
block.Encrypt(next, src[base:])
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
tbl, next = next, tbl
|
||||
base += 8
|
||||
fallthrough
|
||||
case 4:
|
||||
block.Encrypt(next, src[base:])
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
tbl, next = next, tbl
|
||||
base += 8
|
||||
fallthrough
|
||||
case 3:
|
||||
block.Encrypt(next, src[base:])
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
tbl, next = next, tbl
|
||||
base += 8
|
||||
fallthrough
|
||||
case 2:
|
||||
block.Encrypt(next, src[base:])
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
tbl, next = next, tbl
|
||||
base += 8
|
||||
fallthrough
|
||||
case 1:
|
||||
block.Encrypt(next, src[base:])
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
tbl, next = next, tbl
|
||||
base += 8
|
||||
fallthrough
|
||||
case 0:
|
||||
xor.BytesSrc0(dst[base:], src[base:], tbl)
|
||||
}
|
||||
}
|
||||
|
||||
func decrypt16(block cipher.Block, dst, src, buf []byte) {
|
||||
tbl := buf[0:16]
|
||||
next := buf[16:32]
|
||||
block.Encrypt(tbl, initialVector)
|
||||
n := len(src) / 16
|
||||
base := 0
|
||||
repeat := n / 8
|
||||
left := n % 8
|
||||
for i := 0; i < repeat; i++ {
|
||||
s := src[base:][0:128]
|
||||
d := dst[base:][0:128]
|
||||
// 1
|
||||
block.Encrypt(next, s[0:16])
|
||||
xor.BytesSrc1(d[0:16], s[0:16], tbl)
|
||||
// 2
|
||||
block.Encrypt(tbl, s[16:32])
|
||||
xor.BytesSrc1(d[16:32], s[16:32], next)
|
||||
// 3
|
||||
block.Encrypt(next, s[32:48])
|
||||
xor.BytesSrc1(d[32:48], s[32:48], tbl)
|
||||
// 4
|
||||
block.Encrypt(tbl, s[48:64])
|
||||
xor.BytesSrc1(d[48:64], s[48:64], next)
|
||||
// 5
|
||||
block.Encrypt(next, s[64:80])
|
||||
xor.BytesSrc1(d[64:80], s[64:80], tbl)
|
||||
// 6
|
||||
block.Encrypt(tbl, s[80:96])
|
||||
xor.BytesSrc1(d[80:96], s[80:96], next)
|
||||
// 7
|
||||
block.Encrypt(next, s[96:112])
|
||||
xor.BytesSrc1(d[96:112], s[96:112], tbl)
|
||||
// 8
|
||||
block.Encrypt(tbl, s[112:128])
|
||||
xor.BytesSrc1(d[112:128], s[112:128], next)
|
||||
base += 128
|
||||
}
|
||||
|
||||
switch left {
|
||||
case 7:
|
||||
block.Encrypt(next, src[base:])
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
tbl, next = next, tbl
|
||||
base += 16
|
||||
fallthrough
|
||||
case 6:
|
||||
block.Encrypt(next, src[base:])
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
tbl, next = next, tbl
|
||||
base += 16
|
||||
fallthrough
|
||||
case 5:
|
||||
block.Encrypt(next, src[base:])
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
tbl, next = next, tbl
|
||||
base += 16
|
||||
fallthrough
|
||||
case 4:
|
||||
block.Encrypt(next, src[base:])
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
tbl, next = next, tbl
|
||||
base += 16
|
||||
fallthrough
|
||||
case 3:
|
||||
block.Encrypt(next, src[base:])
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
tbl, next = next, tbl
|
||||
base += 16
|
||||
fallthrough
|
||||
case 2:
|
||||
block.Encrypt(next, src[base:])
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
tbl, next = next, tbl
|
||||
base += 16
|
||||
fallthrough
|
||||
case 1:
|
||||
block.Encrypt(next, src[base:])
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
tbl, next = next, tbl
|
||||
base += 16
|
||||
fallthrough
|
||||
case 0:
|
||||
xor.BytesSrc0(dst[base:], src[base:], tbl)
|
||||
}
|
||||
}
|
||||
|
||||
func decryptVariant(block cipher.Block, dst, src, buf []byte) {
|
||||
blocksize := block.BlockSize()
|
||||
tbl := buf[:blocksize]
|
||||
next := buf[blocksize:]
|
||||
block.Encrypt(tbl, initialVector)
|
||||
n := len(src) / blocksize
|
||||
base := 0
|
||||
for i := 0; i < n; i++ {
|
||||
repeat := n / 8
|
||||
left := n % 8
|
||||
for i := 0; i < repeat; i++ {
|
||||
// 1
|
||||
block.Encrypt(next, src[base:])
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
base += blocksize
|
||||
|
||||
// 2
|
||||
block.Encrypt(tbl, src[base:])
|
||||
xor.BytesSrc1(dst[base:], src[base:], next)
|
||||
base += blocksize
|
||||
|
||||
// 3
|
||||
block.Encrypt(next, src[base:])
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
base += blocksize
|
||||
|
||||
// 4
|
||||
block.Encrypt(tbl, src[base:])
|
||||
xor.BytesSrc1(dst[base:], src[base:], next)
|
||||
base += blocksize
|
||||
|
||||
// 5
|
||||
block.Encrypt(next, src[base:])
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
base += blocksize
|
||||
|
||||
// 6
|
||||
block.Encrypt(tbl, src[base:])
|
||||
xor.BytesSrc1(dst[base:], src[base:], next)
|
||||
base += blocksize
|
||||
|
||||
// 7
|
||||
block.Encrypt(next, src[base:])
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
base += blocksize
|
||||
|
||||
// 8
|
||||
block.Encrypt(tbl, src[base:])
|
||||
xor.BytesSrc1(dst[base:], src[base:], next)
|
||||
base += blocksize
|
||||
}
|
||||
|
||||
switch left {
|
||||
case 7:
|
||||
block.Encrypt(next, src[base:])
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
tbl, next = next, tbl
|
||||
base += blocksize
|
||||
fallthrough
|
||||
case 6:
|
||||
block.Encrypt(next, src[base:])
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
tbl, next = next, tbl
|
||||
base += blocksize
|
||||
fallthrough
|
||||
case 5:
|
||||
block.Encrypt(next, src[base:])
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
tbl, next = next, tbl
|
||||
base += blocksize
|
||||
fallthrough
|
||||
case 4:
|
||||
block.Encrypt(next, src[base:])
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
tbl, next = next, tbl
|
||||
base += blocksize
|
||||
fallthrough
|
||||
case 3:
|
||||
block.Encrypt(next, src[base:])
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
tbl, next = next, tbl
|
||||
base += blocksize
|
||||
fallthrough
|
||||
case 2:
|
||||
block.Encrypt(next, src[base:])
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
tbl, next = next, tbl
|
||||
base += blocksize
|
||||
fallthrough
|
||||
case 1:
|
||||
block.Encrypt(next, src[base:])
|
||||
xor.BytesSrc1(dst[base:], src[base:], tbl)
|
||||
tbl, next = next, tbl
|
||||
base += blocksize
|
||||
fallthrough
|
||||
case 0:
|
||||
xor.BytesSrc0(dst[base:], src[base:], tbl)
|
||||
}
|
||||
xor.BytesSrc0(dst[base:], src[base:], tbl)
|
||||
}
|
||||
|
||||
52
vendor/github.com/fatedier/kcp-go/entropy.go
generated
vendored
Normal file
@@ -0,0 +1,52 @@
|
||||
package kcp
|
||||
|
||||
import (
|
||||
"crypto/aes"
|
||||
"crypto/cipher"
|
||||
"crypto/md5"
|
||||
"crypto/rand"
|
||||
"io"
|
||||
)
|
||||
|
||||
// Entropy defines a entropy source
|
||||
type Entropy interface {
|
||||
Init()
|
||||
Fill(nonce []byte)
|
||||
}
|
||||
|
||||
// nonceMD5 nonce generator for packet header
|
||||
type nonceMD5 struct {
|
||||
seed [md5.Size]byte
|
||||
}
|
||||
|
||||
func (n *nonceMD5) Init() { /*nothing required*/ }
|
||||
|
||||
func (n *nonceMD5) Fill(nonce []byte) {
|
||||
if n.seed[0] == 0 { // entropy update
|
||||
io.ReadFull(rand.Reader, n.seed[:])
|
||||
}
|
||||
n.seed = md5.Sum(n.seed[:])
|
||||
copy(nonce, n.seed[:])
|
||||
}
|
||||
|
||||
// nonceAES128 nonce generator for packet headers
|
||||
type nonceAES128 struct {
|
||||
seed [aes.BlockSize]byte
|
||||
block cipher.Block
|
||||
}
|
||||
|
||||
func (n *nonceAES128) Init() {
|
||||
var key [16]byte //aes-128
|
||||
io.ReadFull(rand.Reader, key[:])
|
||||
io.ReadFull(rand.Reader, n.seed[:])
|
||||
block, _ := aes.NewCipher(key[:])
|
||||
n.block = block
|
||||
}
|
||||
|
||||
func (n *nonceAES128) Fill(nonce []byte) {
|
||||
if n.seed[0] == 0 { // entropy update
|
||||
io.ReadFull(rand.Reader, n.seed[:])
|
||||
}
|
||||
n.block.Encrypt(n.seed[:], n.seed[:])
|
||||
copy(nonce, n.seed[:])
|
||||
}
|
||||
195
vendor/github.com/fatedier/kcp-go/fec.go
generated
vendored
@@ -4,40 +4,41 @@ import (
|
||||
"encoding/binary"
|
||||
"sync/atomic"
|
||||
|
||||
"github.com/templexxx/reedsolomon"
|
||||
"github.com/klauspost/reedsolomon"
|
||||
)
|
||||
|
||||
const (
|
||||
fecHeaderSize = 6
|
||||
fecHeaderSizePlus2 = fecHeaderSize + 2 // plus 2B data size
|
||||
typeData = 0xf1
|
||||
typeFEC = 0xf2
|
||||
typeParity = 0xf2
|
||||
)
|
||||
|
||||
type (
|
||||
// fecPacket is a decoded FEC packet
|
||||
fecPacket struct {
|
||||
seqid uint32
|
||||
flag uint16
|
||||
data []byte
|
||||
}
|
||||
// fecPacket is a decoded FEC packet
|
||||
type fecPacket []byte
|
||||
|
||||
// fecDecoder for decoding incoming packets
|
||||
fecDecoder struct {
|
||||
rxlimit int // queue size limit
|
||||
dataShards int
|
||||
parityShards int
|
||||
shardSize int
|
||||
rx []fecPacket // ordered receive queue
|
||||
func (bts fecPacket) seqid() uint32 { return binary.LittleEndian.Uint32(bts) }
|
||||
func (bts fecPacket) flag() uint16 { return binary.LittleEndian.Uint16(bts[4:]) }
|
||||
func (bts fecPacket) data() []byte { return bts[6:] }
|
||||
|
||||
// caches
|
||||
decodeCache [][]byte
|
||||
flagCache []bool
|
||||
// fecDecoder for decoding incoming packets
|
||||
type fecDecoder struct {
|
||||
rxlimit int // queue size limit
|
||||
dataShards int
|
||||
parityShards int
|
||||
shardSize int
|
||||
rx []fecPacket // ordered receive queue
|
||||
|
||||
// RS decoder
|
||||
codec reedsolomon.Encoder
|
||||
}
|
||||
)
|
||||
// caches
|
||||
decodeCache [][]byte
|
||||
flagCache []bool
|
||||
|
||||
// zeros
|
||||
zeros []byte
|
||||
|
||||
// RS decoder
|
||||
codec reedsolomon.Encoder
|
||||
}
|
||||
|
||||
func newFECDecoder(rxlimit, dataShards, parityShards int) *fecDecoder {
|
||||
if dataShards <= 0 || parityShards <= 0 {
|
||||
@@ -47,48 +48,40 @@ func newFECDecoder(rxlimit, dataShards, parityShards int) *fecDecoder {
|
||||
return nil
|
||||
}
|
||||
|
||||
fec := new(fecDecoder)
|
||||
fec.rxlimit = rxlimit
|
||||
fec.dataShards = dataShards
|
||||
fec.parityShards = parityShards
|
||||
fec.shardSize = dataShards + parityShards
|
||||
enc, err := reedsolomon.New(dataShards, parityShards)
|
||||
dec := new(fecDecoder)
|
||||
dec.rxlimit = rxlimit
|
||||
dec.dataShards = dataShards
|
||||
dec.parityShards = parityShards
|
||||
dec.shardSize = dataShards + parityShards
|
||||
codec, err := reedsolomon.New(dataShards, parityShards)
|
||||
if err != nil {
|
||||
return nil
|
||||
}
|
||||
fec.codec = enc
|
||||
fec.decodeCache = make([][]byte, fec.shardSize)
|
||||
fec.flagCache = make([]bool, fec.shardSize)
|
||||
return fec
|
||||
}
|
||||
|
||||
// decodeBytes a fec packet
|
||||
func (dec *fecDecoder) decodeBytes(data []byte) fecPacket {
|
||||
var pkt fecPacket
|
||||
pkt.seqid = binary.LittleEndian.Uint32(data)
|
||||
pkt.flag = binary.LittleEndian.Uint16(data[4:])
|
||||
// allocate memory & copy
|
||||
buf := xmitBuf.Get().([]byte)[:len(data)-6]
|
||||
copy(buf, data[6:])
|
||||
pkt.data = buf
|
||||
return pkt
|
||||
dec.codec = codec
|
||||
dec.decodeCache = make([][]byte, dec.shardSize)
|
||||
dec.flagCache = make([]bool, dec.shardSize)
|
||||
dec.zeros = make([]byte, mtuLimit)
|
||||
return dec
|
||||
}
|
||||
|
||||
// decode a fec packet
|
||||
func (dec *fecDecoder) decode(pkt fecPacket) (recovered [][]byte) {
|
||||
func (dec *fecDecoder) decode(in fecPacket) (recovered [][]byte) {
|
||||
// insertion
|
||||
n := len(dec.rx) - 1
|
||||
insertIdx := 0
|
||||
for i := n; i >= 0; i-- {
|
||||
if pkt.seqid == dec.rx[i].seqid { // de-duplicate
|
||||
xmitBuf.Put(pkt.data)
|
||||
if in.seqid() == dec.rx[i].seqid() { // de-duplicate
|
||||
return nil
|
||||
} else if _itimediff(pkt.seqid, dec.rx[i].seqid) > 0 { // insertion
|
||||
} else if _itimediff(in.seqid(), dec.rx[i].seqid()) > 0 { // insertion
|
||||
insertIdx = i + 1
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
// make a copy
|
||||
pkt := fecPacket(xmitBuf.Get().([]byte)[:len(in)])
|
||||
copy(pkt, in)
|
||||
|
||||
// insert into ordered rx queue
|
||||
if insertIdx == n+1 {
|
||||
dec.rx = append(dec.rx, pkt)
|
||||
@@ -99,11 +92,11 @@ func (dec *fecDecoder) decode(pkt fecPacket) (recovered [][]byte) {
|
||||
}
|
||||
|
||||
// shard range for current packet
|
||||
shardBegin := pkt.seqid - pkt.seqid%uint32(dec.shardSize)
|
||||
shardBegin := pkt.seqid() - pkt.seqid()%uint32(dec.shardSize)
|
||||
shardEnd := shardBegin + uint32(dec.shardSize) - 1
|
||||
|
||||
// max search range in ordered queue for current shard
|
||||
searchBegin := insertIdx - int(pkt.seqid%uint32(dec.shardSize))
|
||||
searchBegin := insertIdx - int(pkt.seqid()%uint32(dec.shardSize))
|
||||
if searchBegin < 0 {
|
||||
searchBegin = 0
|
||||
}
|
||||
@@ -116,7 +109,7 @@ func (dec *fecDecoder) decode(pkt fecPacket) (recovered [][]byte) {
|
||||
if searchEnd-searchBegin+1 >= dec.dataShards {
|
||||
var numshard, numDataShard, first, maxlen int
|
||||
|
||||
// zero cache
|
||||
// zero caches
|
||||
shards := dec.decodeCache
|
||||
shardsflag := dec.flagCache
|
||||
for k := range dec.decodeCache {
|
||||
@@ -126,40 +119,43 @@ func (dec *fecDecoder) decode(pkt fecPacket) (recovered [][]byte) {
|
||||
|
||||
// shard assembly
|
||||
for i := searchBegin; i <= searchEnd; i++ {
|
||||
seqid := dec.rx[i].seqid
|
||||
seqid := dec.rx[i].seqid()
|
||||
if _itimediff(seqid, shardEnd) > 0 {
|
||||
break
|
||||
} else if _itimediff(seqid, shardBegin) >= 0 {
|
||||
shards[seqid%uint32(dec.shardSize)] = dec.rx[i].data
|
||||
shards[seqid%uint32(dec.shardSize)] = dec.rx[i].data()
|
||||
shardsflag[seqid%uint32(dec.shardSize)] = true
|
||||
numshard++
|
||||
if dec.rx[i].flag == typeData {
|
||||
if dec.rx[i].flag() == typeData {
|
||||
numDataShard++
|
||||
}
|
||||
if numshard == 1 {
|
||||
first = i
|
||||
}
|
||||
if len(dec.rx[i].data) > maxlen {
|
||||
maxlen = len(dec.rx[i].data)
|
||||
if len(dec.rx[i].data()) > maxlen {
|
||||
maxlen = len(dec.rx[i].data())
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if numDataShard == dec.dataShards {
|
||||
// case 1: no lost data shards
|
||||
// case 1: no loss on data shards
|
||||
dec.rx = dec.freeRange(first, numshard, dec.rx)
|
||||
} else if numshard >= dec.dataShards {
|
||||
// case 2: data shard lost, but recoverable from parity shard
|
||||
// case 2: loss on data shards, but it's recoverable from parity shards
|
||||
for k := range shards {
|
||||
if shards[k] != nil {
|
||||
dlen := len(shards[k])
|
||||
shards[k] = shards[k][:maxlen]
|
||||
xorBytes(shards[k][dlen:], shards[k][dlen:], shards[k][dlen:])
|
||||
copy(shards[k][dlen:], dec.zeros)
|
||||
} else {
|
||||
shards[k] = xmitBuf.Get().([]byte)[:0]
|
||||
}
|
||||
}
|
||||
if err := dec.codec.ReconstructData(shards); err == nil {
|
||||
for k := range shards[:dec.dataShards] {
|
||||
if !shardsflag[k] {
|
||||
// recovered data should be recycled
|
||||
recovered = append(recovered, shards[k])
|
||||
}
|
||||
}
|
||||
@@ -170,7 +166,7 @@ func (dec *fecDecoder) decode(pkt fecPacket) (recovered [][]byte) {
|
||||
|
||||
// keep rxlimit
|
||||
if len(dec.rx) > dec.rxlimit {
|
||||
if dec.rx[0].flag == typeData { // record unrecoverable data
|
||||
if dec.rx[0].flag() == typeData { // track the unrecoverable data
|
||||
atomic.AddUint64(&DefaultSnmp.FECShortShards, 1)
|
||||
}
|
||||
dec.rx = dec.freeRange(0, 1, dec.rx)
|
||||
@@ -178,15 +174,16 @@ func (dec *fecDecoder) decode(pkt fecPacket) (recovered [][]byte) {
|
||||
return
|
||||
}
|
||||
|
||||
// free a range of fecPacket, and zero for GC recycling
|
||||
// free a range of fecPacket
|
||||
func (dec *fecDecoder) freeRange(first, n int, q []fecPacket) []fecPacket {
|
||||
for i := first; i < first+n; i++ { // free
|
||||
xmitBuf.Put(q[i].data)
|
||||
for i := first; i < first+n; i++ { // recycle buffer
|
||||
xmitBuf.Put([]byte(q[i]))
|
||||
}
|
||||
|
||||
if first == 0 && n < cap(q)/2 {
|
||||
return q[n:]
|
||||
}
|
||||
copy(q[first:], q[first+n:])
|
||||
for i := 0; i < n; i++ { // dereference data
|
||||
q[len(q)-1-i].data = nil
|
||||
}
|
||||
return q[:len(q)-n]
|
||||
}
|
||||
|
||||
@@ -200,7 +197,7 @@ type (
|
||||
next uint32 // next seqid
|
||||
|
||||
shardCount int // count the number of datashards collected
|
||||
maxSize int // record maximum data length in datashard
|
||||
maxSize int // track maximum data length in datashard
|
||||
|
||||
headerOffset int // FEC header offset
|
||||
payloadOffset int // FEC payload offset
|
||||
@@ -209,6 +206,9 @@ type (
|
||||
shardCache [][]byte
|
||||
encodeCache [][]byte
|
||||
|
||||
// zeros
|
||||
zeros []byte
|
||||
|
||||
// RS encoder
|
||||
codec reedsolomon.Encoder
|
||||
}
|
||||
@@ -218,53 +218,57 @@ func newFECEncoder(dataShards, parityShards, offset int) *fecEncoder {
|
||||
if dataShards <= 0 || parityShards <= 0 {
|
||||
return nil
|
||||
}
|
||||
fec := new(fecEncoder)
|
||||
fec.dataShards = dataShards
|
||||
fec.parityShards = parityShards
|
||||
fec.shardSize = dataShards + parityShards
|
||||
fec.paws = (0xffffffff/uint32(fec.shardSize) - 1) * uint32(fec.shardSize)
|
||||
fec.headerOffset = offset
|
||||
fec.payloadOffset = fec.headerOffset + fecHeaderSize
|
||||
enc := new(fecEncoder)
|
||||
enc.dataShards = dataShards
|
||||
enc.parityShards = parityShards
|
||||
enc.shardSize = dataShards + parityShards
|
||||
enc.paws = 0xffffffff / uint32(enc.shardSize) * uint32(enc.shardSize)
|
||||
enc.headerOffset = offset
|
||||
enc.payloadOffset = enc.headerOffset + fecHeaderSize
|
||||
|
||||
enc, err := reedsolomon.New(dataShards, parityShards)
|
||||
codec, err := reedsolomon.New(dataShards, parityShards)
|
||||
if err != nil {
|
||||
return nil
|
||||
}
|
||||
fec.codec = enc
|
||||
enc.codec = codec
|
||||
|
||||
// caches
|
||||
fec.encodeCache = make([][]byte, fec.shardSize)
|
||||
fec.shardCache = make([][]byte, fec.shardSize)
|
||||
for k := range fec.shardCache {
|
||||
fec.shardCache[k] = make([]byte, mtuLimit)
|
||||
enc.encodeCache = make([][]byte, enc.shardSize)
|
||||
enc.shardCache = make([][]byte, enc.shardSize)
|
||||
for k := range enc.shardCache {
|
||||
enc.shardCache[k] = make([]byte, mtuLimit)
|
||||
}
|
||||
return fec
|
||||
enc.zeros = make([]byte, mtuLimit)
|
||||
return enc
|
||||
}
|
||||
|
||||
// encode the packet, output parity shards if we have enough datashards
|
||||
// the content of returned parityshards will change in next encode
|
||||
// encodes the packet, outputs parity shards if we have collected quorum datashards
|
||||
// notice: the contents of 'ps' will be re-written in successive calling
|
||||
func (enc *fecEncoder) encode(b []byte) (ps [][]byte) {
|
||||
// The header format:
|
||||
// | FEC SEQID(4B) | FEC TYPE(2B) | SIZE (2B) | PAYLOAD(SIZE-2) |
|
||||
// |<-headerOffset |<-payloadOffset
|
||||
enc.markData(b[enc.headerOffset:])
|
||||
binary.LittleEndian.PutUint16(b[enc.payloadOffset:], uint16(len(b[enc.payloadOffset:])))
|
||||
|
||||
// copy data to fec datashards
|
||||
// copy data from payloadOffset to fec shard cache
|
||||
sz := len(b)
|
||||
enc.shardCache[enc.shardCount] = enc.shardCache[enc.shardCount][:sz]
|
||||
copy(enc.shardCache[enc.shardCount], b)
|
||||
copy(enc.shardCache[enc.shardCount][enc.payloadOffset:], b[enc.payloadOffset:])
|
||||
enc.shardCount++
|
||||
|
||||
// record max datashard length
|
||||
// track max datashard length
|
||||
if sz > enc.maxSize {
|
||||
enc.maxSize = sz
|
||||
}
|
||||
|
||||
// calculate Reed-Solomon Erasure Code
|
||||
// Generation of Reed-Solomon Erasure Code
|
||||
if enc.shardCount == enc.dataShards {
|
||||
// bzero each datashard's tail
|
||||
// fill '0' into the tail of each datashard
|
||||
for i := 0; i < enc.dataShards; i++ {
|
||||
shard := enc.shardCache[i]
|
||||
slen := len(shard)
|
||||
xorBytes(shard[slen:enc.maxSize], shard[slen:enc.maxSize], shard[slen:enc.maxSize])
|
||||
copy(shard[slen:enc.maxSize], enc.zeros)
|
||||
}
|
||||
|
||||
// construct equal-sized slice with stripped header
|
||||
@@ -273,16 +277,16 @@ func (enc *fecEncoder) encode(b []byte) (ps [][]byte) {
|
||||
cache[k] = enc.shardCache[k][enc.payloadOffset:enc.maxSize]
|
||||
}
|
||||
|
||||
// rs encode
|
||||
// encoding
|
||||
if err := enc.codec.Encode(cache); err == nil {
|
||||
ps = enc.shardCache[enc.dataShards:]
|
||||
for k := range ps {
|
||||
enc.markFEC(ps[k][enc.headerOffset:])
|
||||
enc.markParity(ps[k][enc.headerOffset:])
|
||||
ps[k] = ps[k][:enc.maxSize]
|
||||
}
|
||||
}
|
||||
|
||||
// reset counters to zero
|
||||
// counters resetting
|
||||
enc.shardCount = 0
|
||||
enc.maxSize = 0
|
||||
}
|
||||
@@ -296,8 +300,9 @@ func (enc *fecEncoder) markData(data []byte) {
|
||||
enc.next++
|
||||
}
|
||||
|
||||
func (enc *fecEncoder) markFEC(data []byte) {
|
||||
func (enc *fecEncoder) markParity(data []byte) {
|
||||
binary.LittleEndian.PutUint32(data, enc.next)
|
||||
binary.LittleEndian.PutUint16(data[4:], typeFEC)
|
||||
binary.LittleEndian.PutUint16(data[4:], typeParity)
|
||||
// sequence wrap will only happen at parity shard
|
||||
enc.next = (enc.next + 1) % enc.paws
|
||||
}
|
||||
|
||||
BIN
vendor/github.com/fatedier/kcp-go/flame.png
generated
vendored
Normal file
Binary file not shown.
|
After Width: | Height: | Size: 56 KiB |
329
vendor/github.com/fatedier/kcp-go/kcp.go
generated
vendored
@@ -1,9 +1,9 @@
|
||||
// Package kcp - A Fast and Reliable ARQ Protocol
|
||||
package kcp
|
||||
|
||||
import (
|
||||
"encoding/binary"
|
||||
"sync/atomic"
|
||||
"time"
|
||||
)
|
||||
|
||||
const (
|
||||
@@ -30,6 +30,12 @@ const (
|
||||
IKCP_PROBE_LIMIT = 120000 // up to 120 secs to probe window
|
||||
)
|
||||
|
||||
// monotonic reference time point
|
||||
var refTime time.Time = time.Now()
|
||||
|
||||
// currentMs returns current elasped monotonic milliseconds since program startup
|
||||
func currentMs() uint32 { return uint32(time.Now().Sub(refTime) / time.Millisecond) }
|
||||
|
||||
// output_callback is a prototype which ought capture conn and call conn.Write
|
||||
type output_callback func(buf []byte, size int)
|
||||
|
||||
@@ -104,6 +110,7 @@ type segment struct {
|
||||
xmit uint32
|
||||
resendts uint32
|
||||
fastack uint32
|
||||
acked uint32 // mark if the seg has acked
|
||||
data []byte
|
||||
}
|
||||
|
||||
@@ -144,8 +151,9 @@ type KCP struct {
|
||||
|
||||
acklist []ackItem
|
||||
|
||||
buffer []byte
|
||||
output output_callback
|
||||
buffer []byte
|
||||
reserved int
|
||||
output output_callback
|
||||
}
|
||||
|
||||
type ackItem struct {
|
||||
@@ -153,8 +161,11 @@ type ackItem struct {
|
||||
ts uint32
|
||||
}
|
||||
|
||||
// NewKCP create a new kcp control object, 'conv' must equal in two endpoint
|
||||
// from the same connection.
|
||||
// NewKCP create a new kcp state machine
|
||||
//
|
||||
// 'conv' must be equal in the connection peers, or else data will be silently rejected.
|
||||
//
|
||||
// 'output' function will be called whenever these is data to be sent on wire.
|
||||
func NewKCP(conv uint32, output output_callback) *KCP {
|
||||
kcp := new(KCP)
|
||||
kcp.conv = conv
|
||||
@@ -163,7 +174,7 @@ func NewKCP(conv uint32, output output_callback) *KCP {
|
||||
kcp.rmt_wnd = IKCP_WND_RCV
|
||||
kcp.mtu = IKCP_MTU_DEF
|
||||
kcp.mss = kcp.mtu - IKCP_OVERHEAD
|
||||
kcp.buffer = make([]byte, (kcp.mtu+IKCP_OVERHEAD)*3)
|
||||
kcp.buffer = make([]byte, kcp.mtu)
|
||||
kcp.rx_rto = IKCP_RTO_DEF
|
||||
kcp.rx_minrto = IKCP_RTO_MIN
|
||||
kcp.interval = IKCP_INTERVAL
|
||||
@@ -181,8 +192,24 @@ func (kcp *KCP) newSegment(size int) (seg segment) {
|
||||
}
|
||||
|
||||
// delSegment recycles a KCP segment
|
||||
func (kcp *KCP) delSegment(seg segment) {
|
||||
xmitBuf.Put(seg.data)
|
||||
func (kcp *KCP) delSegment(seg *segment) {
|
||||
if seg.data != nil {
|
||||
xmitBuf.Put(seg.data)
|
||||
seg.data = nil
|
||||
}
|
||||
}
|
||||
|
||||
// ReserveBytes keeps n bytes untouched from the beginning of the buffer,
|
||||
// the output_callback function should be aware of this.
|
||||
//
|
||||
// Return false if n >= mss
|
||||
func (kcp *KCP) ReserveBytes(n int) bool {
|
||||
if n >= int(kcp.mtu-IKCP_OVERHEAD) || n < 0 {
|
||||
return false
|
||||
}
|
||||
kcp.reserved = n
|
||||
kcp.mss = kcp.mtu - IKCP_OVERHEAD - uint32(n)
|
||||
return true
|
||||
}
|
||||
|
||||
// PeekSize checks the size of next message in the recv queue
|
||||
@@ -210,19 +237,21 @@ func (kcp *KCP) PeekSize() (length int) {
|
||||
return
|
||||
}
|
||||
|
||||
// Recv is user/upper level recv: returns size, returns below zero for EAGAIN
|
||||
// Receive data from kcp state machine
|
||||
//
|
||||
// Return number of bytes read.
|
||||
//
|
||||
// Return -1 when there is no readable data.
|
||||
//
|
||||
// Return -2 if len(buffer) is smaller than kcp.PeekSize().
|
||||
func (kcp *KCP) Recv(buffer []byte) (n int) {
|
||||
if len(kcp.rcv_queue) == 0 {
|
||||
peeksize := kcp.PeekSize()
|
||||
if peeksize < 0 {
|
||||
return -1
|
||||
}
|
||||
|
||||
peeksize := kcp.PeekSize()
|
||||
if peeksize < 0 {
|
||||
return -2
|
||||
}
|
||||
|
||||
if peeksize > len(buffer) {
|
||||
return -3
|
||||
return -2
|
||||
}
|
||||
|
||||
var fast_recover bool
|
||||
@@ -238,7 +267,7 @@ func (kcp *KCP) Recv(buffer []byte) (n int) {
|
||||
buffer = buffer[len(seg.data):]
|
||||
n += len(seg.data)
|
||||
count++
|
||||
kcp.delSegment(*seg)
|
||||
kcp.delSegment(seg)
|
||||
if seg.frg == 0 {
|
||||
break
|
||||
}
|
||||
@@ -251,7 +280,7 @@ func (kcp *KCP) Recv(buffer []byte) (n int) {
|
||||
count = 0
|
||||
for k := range kcp.rcv_buf {
|
||||
seg := &kcp.rcv_buf[k]
|
||||
if seg.sn == kcp.rcv_nxt && len(kcp.rcv_queue) < int(kcp.rcv_wnd) {
|
||||
if seg.sn == kcp.rcv_nxt && len(kcp.rcv_queue)+count < int(kcp.rcv_wnd) {
|
||||
kcp.rcv_nxt++
|
||||
count++
|
||||
} else {
|
||||
@@ -382,10 +411,12 @@ func (kcp *KCP) parse_ack(sn uint32) {
|
||||
for k := range kcp.snd_buf {
|
||||
seg := &kcp.snd_buf[k]
|
||||
if sn == seg.sn {
|
||||
kcp.delSegment(*seg)
|
||||
copy(kcp.snd_buf[k:], kcp.snd_buf[k+1:])
|
||||
kcp.snd_buf[len(kcp.snd_buf)-1] = segment{}
|
||||
kcp.snd_buf = kcp.snd_buf[:len(kcp.snd_buf)-1]
|
||||
// mark and free space, but leave the segment here,
|
||||
// and wait until `una` to delete this, then we don't
|
||||
// have to shift the segments behind forward,
|
||||
// which is an expensive operation for large window
|
||||
seg.acked = 1
|
||||
kcp.delSegment(seg)
|
||||
break
|
||||
}
|
||||
if _itimediff(sn, seg.sn) < 0 {
|
||||
@@ -394,7 +425,7 @@ func (kcp *KCP) parse_ack(sn uint32) {
|
||||
}
|
||||
}
|
||||
|
||||
func (kcp *KCP) parse_fastack(sn uint32) {
|
||||
func (kcp *KCP) parse_fastack(sn, ts uint32) {
|
||||
if _itimediff(sn, kcp.snd_una) < 0 || _itimediff(sn, kcp.snd_nxt) >= 0 {
|
||||
return
|
||||
}
|
||||
@@ -403,7 +434,7 @@ func (kcp *KCP) parse_fastack(sn uint32) {
|
||||
seg := &kcp.snd_buf[k]
|
||||
if _itimediff(sn, seg.sn) < 0 {
|
||||
break
|
||||
} else if sn != seg.sn {
|
||||
} else if sn != seg.sn && _itimediff(seg.ts, ts) <= 0 {
|
||||
seg.fastack++
|
||||
}
|
||||
}
|
||||
@@ -414,7 +445,7 @@ func (kcp *KCP) parse_una(una uint32) {
|
||||
for k := range kcp.snd_buf {
|
||||
seg := &kcp.snd_buf[k]
|
||||
if _itimediff(una, seg.sn) > 0 {
|
||||
kcp.delSegment(*seg)
|
||||
kcp.delSegment(seg)
|
||||
count++
|
||||
} else {
|
||||
break
|
||||
@@ -430,12 +461,12 @@ func (kcp *KCP) ack_push(sn, ts uint32) {
|
||||
kcp.acklist = append(kcp.acklist, ackItem{sn, ts})
|
||||
}
|
||||
|
||||
func (kcp *KCP) parse_data(newseg segment) {
|
||||
// returns true if data has repeated
|
||||
func (kcp *KCP) parse_data(newseg segment) bool {
|
||||
sn := newseg.sn
|
||||
if _itimediff(sn, kcp.rcv_nxt+kcp.rcv_wnd) >= 0 ||
|
||||
_itimediff(sn, kcp.rcv_nxt) < 0 {
|
||||
kcp.delSegment(newseg)
|
||||
return
|
||||
return true
|
||||
}
|
||||
|
||||
n := len(kcp.rcv_buf) - 1
|
||||
@@ -445,7 +476,6 @@ func (kcp *KCP) parse_data(newseg segment) {
|
||||
seg := &kcp.rcv_buf[i]
|
||||
if seg.sn == sn {
|
||||
repeat = true
|
||||
atomic.AddUint64(&DefaultSnmp.RepeatSegs, 1)
|
||||
break
|
||||
}
|
||||
if _itimediff(sn, seg.sn) > 0 {
|
||||
@@ -455,6 +485,11 @@ func (kcp *KCP) parse_data(newseg segment) {
|
||||
}
|
||||
|
||||
if !repeat {
|
||||
// replicate the content if it's new
|
||||
dataCopy := xmitBuf.Get().([]byte)[:len(newseg.data)]
|
||||
copy(dataCopy, newseg.data)
|
||||
newseg.data = dataCopy
|
||||
|
||||
if insert_idx == n+1 {
|
||||
kcp.rcv_buf = append(kcp.rcv_buf, newseg)
|
||||
} else {
|
||||
@@ -462,15 +497,13 @@ func (kcp *KCP) parse_data(newseg segment) {
|
||||
copy(kcp.rcv_buf[insert_idx+1:], kcp.rcv_buf[insert_idx:])
|
||||
kcp.rcv_buf[insert_idx] = newseg
|
||||
}
|
||||
} else {
|
||||
kcp.delSegment(newseg)
|
||||
}
|
||||
|
||||
// move available data from rcv_buf -> rcv_queue
|
||||
count := 0
|
||||
for k := range kcp.rcv_buf {
|
||||
seg := &kcp.rcv_buf[k]
|
||||
if seg.sn == kcp.rcv_nxt && len(kcp.rcv_queue) < int(kcp.rcv_wnd) {
|
||||
if seg.sn == kcp.rcv_nxt && len(kcp.rcv_queue)+count < int(kcp.rcv_wnd) {
|
||||
kcp.rcv_nxt++
|
||||
count++
|
||||
} else {
|
||||
@@ -481,18 +514,23 @@ func (kcp *KCP) parse_data(newseg segment) {
|
||||
kcp.rcv_queue = append(kcp.rcv_queue, kcp.rcv_buf[:count]...)
|
||||
kcp.rcv_buf = kcp.remove_front(kcp.rcv_buf, count)
|
||||
}
|
||||
|
||||
return repeat
|
||||
}
|
||||
|
||||
// Input when you received a low level packet (eg. UDP packet), call it
|
||||
// regular indicates a regular packet has received(not from FEC)
|
||||
// Input a packet into kcp state machine.
|
||||
//
|
||||
// 'regular' indicates it's a real data packet from remote, and it means it's not generated from ReedSolomon
|
||||
// codecs.
|
||||
//
|
||||
// 'ackNoDelay' will trigger immediate ACK, but surely it will not be efficient in bandwidth
|
||||
func (kcp *KCP) Input(data []byte, regular, ackNoDelay bool) int {
|
||||
una := kcp.snd_una
|
||||
snd_una := kcp.snd_una
|
||||
if len(data) < IKCP_OVERHEAD {
|
||||
return -1
|
||||
}
|
||||
|
||||
var maxack uint32
|
||||
var lastackts uint32
|
||||
var latest uint32 // the latest ack packet
|
||||
var flag int
|
||||
var inSegs uint64
|
||||
|
||||
@@ -535,19 +573,15 @@ func (kcp *KCP) Input(data []byte, regular, ackNoDelay bool) int {
|
||||
|
||||
if cmd == IKCP_CMD_ACK {
|
||||
kcp.parse_ack(sn)
|
||||
kcp.shrink_buf()
|
||||
if flag == 0 {
|
||||
flag = 1
|
||||
maxack = sn
|
||||
} else if _itimediff(sn, maxack) > 0 {
|
||||
maxack = sn
|
||||
}
|
||||
lastackts = ts
|
||||
kcp.parse_fastack(sn, ts)
|
||||
flag |= 1
|
||||
latest = ts
|
||||
} else if cmd == IKCP_CMD_PUSH {
|
||||
repeat := true
|
||||
if _itimediff(sn, kcp.rcv_nxt+kcp.rcv_wnd) < 0 {
|
||||
kcp.ack_push(sn, ts)
|
||||
if _itimediff(sn, kcp.rcv_nxt) >= 0 {
|
||||
seg := kcp.newSegment(int(length))
|
||||
var seg segment
|
||||
seg.conv = conv
|
||||
seg.cmd = cmd
|
||||
seg.frg = frg
|
||||
@@ -555,12 +589,11 @@ func (kcp *KCP) Input(data []byte, regular, ackNoDelay bool) int {
|
||||
seg.ts = ts
|
||||
seg.sn = sn
|
||||
seg.una = una
|
||||
copy(seg.data, data[:length])
|
||||
kcp.parse_data(seg)
|
||||
} else {
|
||||
atomic.AddUint64(&DefaultSnmp.RepeatSegs, 1)
|
||||
seg.data = data[:length] // delayed data copying
|
||||
repeat = kcp.parse_data(seg)
|
||||
}
|
||||
} else {
|
||||
}
|
||||
if regular && repeat {
|
||||
atomic.AddUint64(&DefaultSnmp.RepeatSegs, 1)
|
||||
}
|
||||
} else if cmd == IKCP_CMD_WASK {
|
||||
@@ -578,40 +611,42 @@ func (kcp *KCP) Input(data []byte, regular, ackNoDelay bool) int {
|
||||
}
|
||||
atomic.AddUint64(&DefaultSnmp.InSegs, inSegs)
|
||||
|
||||
// update rtt with the latest ts
|
||||
// ignore the FEC packet
|
||||
if flag != 0 && regular {
|
||||
kcp.parse_fastack(maxack)
|
||||
current := currentMs()
|
||||
if _itimediff(current, lastackts) >= 0 {
|
||||
kcp.update_ack(_itimediff(current, lastackts))
|
||||
if _itimediff(current, latest) >= 0 {
|
||||
kcp.update_ack(_itimediff(current, latest))
|
||||
}
|
||||
}
|
||||
|
||||
if _itimediff(kcp.snd_una, una) > 0 {
|
||||
if kcp.cwnd < kcp.rmt_wnd {
|
||||
mss := kcp.mss
|
||||
if kcp.cwnd < kcp.ssthresh {
|
||||
kcp.cwnd++
|
||||
kcp.incr += mss
|
||||
} else {
|
||||
if kcp.incr < mss {
|
||||
kcp.incr = mss
|
||||
}
|
||||
kcp.incr += (mss*mss)/kcp.incr + (mss / 16)
|
||||
if (kcp.cwnd+1)*mss <= kcp.incr {
|
||||
// cwnd update when packet arrived
|
||||
if kcp.nocwnd == 0 {
|
||||
if _itimediff(kcp.snd_una, snd_una) > 0 {
|
||||
if kcp.cwnd < kcp.rmt_wnd {
|
||||
mss := kcp.mss
|
||||
if kcp.cwnd < kcp.ssthresh {
|
||||
kcp.cwnd++
|
||||
kcp.incr += mss
|
||||
} else {
|
||||
if kcp.incr < mss {
|
||||
kcp.incr = mss
|
||||
}
|
||||
kcp.incr += (mss*mss)/kcp.incr + (mss / 16)
|
||||
if (kcp.cwnd+1)*mss <= kcp.incr {
|
||||
kcp.cwnd++
|
||||
}
|
||||
}
|
||||
if kcp.cwnd > kcp.rmt_wnd {
|
||||
kcp.cwnd = kcp.rmt_wnd
|
||||
kcp.incr = kcp.rmt_wnd * mss
|
||||
}
|
||||
}
|
||||
if kcp.cwnd > kcp.rmt_wnd {
|
||||
kcp.cwnd = kcp.rmt_wnd
|
||||
kcp.incr = kcp.rmt_wnd * mss
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if ackNoDelay && len(kcp.acklist) > 0 { // ack immediately
|
||||
kcp.flush(true)
|
||||
} else if kcp.rmt_wnd == 0 && len(kcp.acklist) > 0 { // window zero
|
||||
kcp.flush(true)
|
||||
}
|
||||
return 0
|
||||
}
|
||||
@@ -624,7 +659,7 @@ func (kcp *KCP) wnd_unused() uint16 {
|
||||
}
|
||||
|
||||
// flush pending data
|
||||
func (kcp *KCP) flush(ackOnly bool) {
|
||||
func (kcp *KCP) flush(ackOnly bool) uint32 {
|
||||
var seg segment
|
||||
seg.conv = kcp.conv
|
||||
seg.cmd = IKCP_CMD_ACK
|
||||
@@ -632,14 +667,28 @@ func (kcp *KCP) flush(ackOnly bool) {
|
||||
seg.una = kcp.rcv_nxt
|
||||
|
||||
buffer := kcp.buffer
|
||||
// flush acknowledges
|
||||
ptr := buffer
|
||||
for i, ack := range kcp.acklist {
|
||||
ptr := buffer[kcp.reserved:] // keep n bytes untouched
|
||||
|
||||
// makeSpace makes room for writing
|
||||
makeSpace := func(space int) {
|
||||
size := len(buffer) - len(ptr)
|
||||
if size+IKCP_OVERHEAD > int(kcp.mtu) {
|
||||
if size+space > int(kcp.mtu) {
|
||||
kcp.output(buffer, size)
|
||||
ptr = buffer
|
||||
ptr = buffer[kcp.reserved:]
|
||||
}
|
||||
}
|
||||
|
||||
// flush bytes in buffer if there is any
|
||||
flushBuffer := func() {
|
||||
size := len(buffer) - len(ptr)
|
||||
if size > kcp.reserved {
|
||||
kcp.output(buffer, size)
|
||||
}
|
||||
}
|
||||
|
||||
// flush acknowledges
|
||||
for i, ack := range kcp.acklist {
|
||||
makeSpace(IKCP_OVERHEAD)
|
||||
// filter jitters caused by bufferbloat
|
||||
if ack.sn >= kcp.rcv_nxt || len(kcp.acklist)-1 == i {
|
||||
seg.sn, seg.ts = ack.sn, ack.ts
|
||||
@@ -649,11 +698,8 @@ func (kcp *KCP) flush(ackOnly bool) {
|
||||
kcp.acklist = kcp.acklist[0:0]
|
||||
|
||||
if ackOnly { // flash remain ack segments
|
||||
size := len(buffer) - len(ptr)
|
||||
if size > 0 {
|
||||
kcp.output(buffer, size)
|
||||
}
|
||||
return
|
||||
flushBuffer()
|
||||
return kcp.interval
|
||||
}
|
||||
|
||||
// probe window size (if remote window size equals zero)
|
||||
@@ -683,22 +729,14 @@ func (kcp *KCP) flush(ackOnly bool) {
|
||||
// flush window probing commands
|
||||
if (kcp.probe & IKCP_ASK_SEND) != 0 {
|
||||
seg.cmd = IKCP_CMD_WASK
|
||||
size := len(buffer) - len(ptr)
|
||||
if size+IKCP_OVERHEAD > int(kcp.mtu) {
|
||||
kcp.output(buffer, size)
|
||||
ptr = buffer
|
||||
}
|
||||
makeSpace(IKCP_OVERHEAD)
|
||||
ptr = seg.encode(ptr)
|
||||
}
|
||||
|
||||
// flush window probing commands
|
||||
if (kcp.probe & IKCP_ASK_TELL) != 0 {
|
||||
seg.cmd = IKCP_CMD_WINS
|
||||
size := len(buffer) - len(ptr)
|
||||
if size+IKCP_OVERHEAD > int(kcp.mtu) {
|
||||
kcp.output(buffer, size)
|
||||
ptr = buffer
|
||||
}
|
||||
makeSpace(IKCP_OVERHEAD)
|
||||
ptr = seg.encode(ptr)
|
||||
}
|
||||
|
||||
@@ -723,7 +761,6 @@ func (kcp *KCP) flush(ackOnly bool) {
|
||||
kcp.snd_buf = append(kcp.snd_buf, newseg)
|
||||
kcp.snd_nxt++
|
||||
newSegsCount++
|
||||
kcp.snd_queue[k].data = nil
|
||||
}
|
||||
if newSegsCount > 0 {
|
||||
kcp.snd_queue = kcp.remove_front(kcp.snd_queue, newSegsCount)
|
||||
@@ -738,9 +775,15 @@ func (kcp *KCP) flush(ackOnly bool) {
|
||||
// check for retransmissions
|
||||
current := currentMs()
|
||||
var change, lost, lostSegs, fastRetransSegs, earlyRetransSegs uint64
|
||||
for k := range kcp.snd_buf {
|
||||
segment := &kcp.snd_buf[k]
|
||||
minrto := int32(kcp.interval)
|
||||
|
||||
ref := kcp.snd_buf[:len(kcp.snd_buf)] // for bounds check elimination
|
||||
for k := range ref {
|
||||
segment := &ref[k]
|
||||
needsend := false
|
||||
if segment.acked == 1 {
|
||||
continue
|
||||
}
|
||||
if segment.xmit == 0 { // initial transmit
|
||||
needsend = true
|
||||
segment.rto = kcp.rx_rto
|
||||
@@ -772,20 +815,14 @@ func (kcp *KCP) flush(ackOnly bool) {
|
||||
}
|
||||
|
||||
if needsend {
|
||||
current = currentMs()
|
||||
segment.xmit++
|
||||
segment.ts = current
|
||||
segment.wnd = seg.wnd
|
||||
segment.una = seg.una
|
||||
|
||||
size := len(buffer) - len(ptr)
|
||||
need := IKCP_OVERHEAD + len(segment.data)
|
||||
|
||||
if size+need > int(kcp.mtu) {
|
||||
kcp.output(buffer, size)
|
||||
current = currentMs() // time update for a blocking call
|
||||
ptr = buffer
|
||||
}
|
||||
|
||||
makeSpace(need)
|
||||
ptr = segment.encode(ptr)
|
||||
copy(ptr, segment.data)
|
||||
ptr = ptr[len(segment.data):]
|
||||
@@ -794,13 +831,15 @@ func (kcp *KCP) flush(ackOnly bool) {
|
||||
kcp.state = 0xFFFFFFFF
|
||||
}
|
||||
}
|
||||
|
||||
// get the nearest rto
|
||||
if rto := _itimediff(segment.resendts, current); rto > 0 && rto < minrto {
|
||||
minrto = rto
|
||||
}
|
||||
}
|
||||
|
||||
// flash remain segments
|
||||
size := len(buffer) - len(ptr)
|
||||
if size > 0 {
|
||||
kcp.output(buffer, size)
|
||||
}
|
||||
flushBuffer()
|
||||
|
||||
// counter updates
|
||||
sum := lostSegs
|
||||
@@ -819,34 +858,41 @@ func (kcp *KCP) flush(ackOnly bool) {
|
||||
atomic.AddUint64(&DefaultSnmp.RetransSegs, sum)
|
||||
}
|
||||
|
||||
// update ssthresh
|
||||
// rate halving, https://tools.ietf.org/html/rfc6937
|
||||
if change > 0 {
|
||||
inflight := kcp.snd_nxt - kcp.snd_una
|
||||
kcp.ssthresh = inflight / 2
|
||||
if kcp.ssthresh < IKCP_THRESH_MIN {
|
||||
kcp.ssthresh = IKCP_THRESH_MIN
|
||||
// cwnd update
|
||||
if kcp.nocwnd == 0 {
|
||||
// update ssthresh
|
||||
// rate halving, https://tools.ietf.org/html/rfc6937
|
||||
if change > 0 {
|
||||
inflight := kcp.snd_nxt - kcp.snd_una
|
||||
kcp.ssthresh = inflight / 2
|
||||
if kcp.ssthresh < IKCP_THRESH_MIN {
|
||||
kcp.ssthresh = IKCP_THRESH_MIN
|
||||
}
|
||||
kcp.cwnd = kcp.ssthresh + resent
|
||||
kcp.incr = kcp.cwnd * kcp.mss
|
||||
}
|
||||
|
||||
// congestion control, https://tools.ietf.org/html/rfc5681
|
||||
if lost > 0 {
|
||||
kcp.ssthresh = cwnd / 2
|
||||
if kcp.ssthresh < IKCP_THRESH_MIN {
|
||||
kcp.ssthresh = IKCP_THRESH_MIN
|
||||
}
|
||||
kcp.cwnd = 1
|
||||
kcp.incr = kcp.mss
|
||||
}
|
||||
|
||||
if kcp.cwnd < 1 {
|
||||
kcp.cwnd = 1
|
||||
kcp.incr = kcp.mss
|
||||
}
|
||||
kcp.cwnd = kcp.ssthresh + resent
|
||||
kcp.incr = kcp.cwnd * kcp.mss
|
||||
}
|
||||
|
||||
// congestion control, https://tools.ietf.org/html/rfc5681
|
||||
if lost > 0 {
|
||||
kcp.ssthresh = cwnd / 2
|
||||
if kcp.ssthresh < IKCP_THRESH_MIN {
|
||||
kcp.ssthresh = IKCP_THRESH_MIN
|
||||
}
|
||||
kcp.cwnd = 1
|
||||
kcp.incr = kcp.mss
|
||||
}
|
||||
|
||||
if kcp.cwnd < 1 {
|
||||
kcp.cwnd = 1
|
||||
kcp.incr = kcp.mss
|
||||
}
|
||||
return uint32(minrto)
|
||||
}
|
||||
|
||||
// (deprecated)
|
||||
//
|
||||
// Update updates state (call it repeatedly, every 10ms-100ms), or you can ask
|
||||
// ikcp_check when to call it again (without ikcp_input/_send calling).
|
||||
// 'current' - current timestamp in millisec.
|
||||
@@ -875,6 +921,8 @@ func (kcp *KCP) Update() {
|
||||
}
|
||||
}
|
||||
|
||||
// (deprecated)
|
||||
//
|
||||
// Check determines when should you invoke ikcp_update:
|
||||
// returns when you should invoke ikcp_update in millisec, if there
|
||||
// is no ikcp_input/_send calling. you can call ikcp_update in that
|
||||
@@ -930,12 +978,16 @@ func (kcp *KCP) SetMtu(mtu int) int {
|
||||
if mtu < 50 || mtu < IKCP_OVERHEAD {
|
||||
return -1
|
||||
}
|
||||
buffer := make([]byte, (mtu+IKCP_OVERHEAD)*3)
|
||||
if kcp.reserved >= int(kcp.mtu-IKCP_OVERHEAD) || kcp.reserved < 0 {
|
||||
return -1
|
||||
}
|
||||
|
||||
buffer := make([]byte, mtu)
|
||||
if buffer == nil {
|
||||
return -2
|
||||
}
|
||||
kcp.mtu = uint32(mtu)
|
||||
kcp.mss = kcp.mtu - IKCP_OVERHEAD
|
||||
kcp.mss = kcp.mtu - IKCP_OVERHEAD - uint32(kcp.reserved)
|
||||
kcp.buffer = buffer
|
||||
return 0
|
||||
}
|
||||
@@ -989,10 +1041,13 @@ func (kcp *KCP) WaitSnd() int {
}

// remove front n elements from queue
// if the number of elements to remove is more than half of the size.
// just shift the rear elements to front, otherwise just reslice q to q[n:]
// then the cost of runtime.growslice can always be less than n/2
func (kcp *KCP) remove_front(q []segment, n int) []segment {
    newn := copy(q, q[n:])
    for i := newn; i < len(q); i++ {
        q[i] = segment{} // manual set nil for GC
    if n > cap(q)/2 {
        newn := copy(q, q[n:])
        return q[:newn]
    }
    return q[:newn]
    return q[n:]
}
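The comment above describes a capacity-based trade-off: reslicing is O(1) but keeps the front of the backing array pinned, while shifting the tail forward reuses the array so a later append is less likely to trigger runtime.growslice. A standalone sketch of the same idea on a plain `[]int` (the `removeFront` name and the int slice are illustrative, not the package's `segment` type):

```go
package main

import "fmt"

// removeFront drops the first n elements of q. When n is more than half the
// capacity it shifts the tail to the front so the backing array is reused;
// otherwise it just reslices, which costs nothing.
func removeFront(q []int, n int) []int {
	if n > cap(q)/2 {
		newn := copy(q, q[n:])
		return q[:newn]
	}
	return q[n:]
}

func main() {
	fmt.Println(removeFront([]int{1, 2, 3, 4, 5, 6}, 2)) // [3 4 5 6] (resliced: 2 <= cap/2)
	fmt.Println(removeFront([]int{1, 2, 3, 4, 5, 6}, 5)) // [6] (tail shifted: 5 > cap/2)
}
```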
|
||||
|
||||
48
vendor/github.com/fatedier/kcp-go/readloop.go
generated
vendored
Normal file
@@ -0,0 +1,48 @@
|
||||
package kcp
|
||||
|
||||
import (
|
||||
"sync/atomic"
|
||||
|
||||
"github.com/pkg/errors"
|
||||
)
|
||||
|
||||
func (s *UDPSession) defaultReadLoop() {
|
||||
buf := make([]byte, mtuLimit)
|
||||
var src string
|
||||
for {
|
||||
if n, addr, err := s.conn.ReadFrom(buf); err == nil {
|
||||
// make sure the packet is from the same source
|
||||
if src == "" { // set source address
|
||||
src = addr.String()
|
||||
} else if addr.String() != src {
|
||||
atomic.AddUint64(&DefaultSnmp.InErrs, 1)
|
||||
continue
|
||||
}
|
||||
|
||||
if n >= s.headerSize+IKCP_OVERHEAD {
|
||||
s.packetInput(buf[:n])
|
||||
} else {
|
||||
atomic.AddUint64(&DefaultSnmp.InErrs, 1)
|
||||
}
|
||||
} else {
|
||||
s.notifyReadError(errors.WithStack(err))
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (l *Listener) defaultMonitor() {
|
||||
buf := make([]byte, mtuLimit)
|
||||
for {
|
||||
if n, from, err := l.conn.ReadFrom(buf); err == nil {
|
||||
if n >= l.headerSize+IKCP_OVERHEAD {
|
||||
l.packetInput(buf[:n], from)
|
||||
} else {
|
||||
atomic.AddUint64(&DefaultSnmp.InErrs, 1)
|
||||
}
|
||||
} else {
|
||||
l.notifyReadError(errors.WithStack(err))
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
11
vendor/github.com/fatedier/kcp-go/readloop_generic.go
generated
vendored
Normal file
@@ -0,0 +1,11 @@
|
||||
// +build !linux
|
||||
|
||||
package kcp
|
||||
|
||||
func (s *UDPSession) readLoop() {
|
||||
s.defaultReadLoop()
|
||||
}
|
||||
|
||||
func (l *Listener) monitor() {
|
||||
l.defaultMonitor()
|
||||
}
|
||||
120
vendor/github.com/fatedier/kcp-go/readloop_linux.go
generated
vendored
Normal file
@@ -0,0 +1,120 @@
|
||||
// +build linux
|
||||
|
||||
package kcp
|
||||
|
||||
import (
|
||||
"net"
|
||||
"os"
|
||||
"sync/atomic"
|
||||
|
||||
"github.com/pkg/errors"
|
||||
"golang.org/x/net/ipv4"
|
||||
"golang.org/x/net/ipv6"
|
||||
)
|
||||
|
||||
// the read loop for a client session
|
||||
func (s *UDPSession) readLoop() {
|
||||
// default version
|
||||
if s.xconn == nil {
|
||||
s.defaultReadLoop()
|
||||
return
|
||||
}
|
||||
|
||||
// x/net version
|
||||
var src string
|
||||
msgs := make([]ipv4.Message, batchSize)
|
||||
for k := range msgs {
|
||||
msgs[k].Buffers = [][]byte{make([]byte, mtuLimit)}
|
||||
}
|
||||
|
||||
for {
|
||||
if count, err := s.xconn.ReadBatch(msgs, 0); err == nil {
|
||||
for i := 0; i < count; i++ {
|
||||
msg := &msgs[i]
|
||||
// make sure the packet is from the same source
|
||||
if src == "" { // set source address if nil
|
||||
src = msg.Addr.String()
|
||||
} else if msg.Addr.String() != src {
|
||||
atomic.AddUint64(&DefaultSnmp.InErrs, 1)
|
||||
continue
|
||||
}
|
||||
|
||||
if msg.N < s.headerSize+IKCP_OVERHEAD {
|
||||
atomic.AddUint64(&DefaultSnmp.InErrs, 1)
|
||||
continue
|
||||
}
|
||||
|
||||
// source and size has validated
|
||||
s.packetInput(msg.Buffers[0][:msg.N])
|
||||
}
|
||||
} else {
|
||||
// compatibility issue:
|
||||
// for linux kernel<=2.6.32, support for sendmmsg is not available
|
||||
// an error of type os.SyscallError will be returned
|
||||
if operr, ok := err.(*net.OpError); ok {
|
||||
if se, ok := operr.Err.(*os.SyscallError); ok {
|
||||
if se.Syscall == "recvmmsg" {
|
||||
s.defaultReadLoop()
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
s.notifyReadError(errors.WithStack(err))
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// monitor incoming data for all connections of server
|
||||
func (l *Listener) monitor() {
|
||||
var xconn batchConn
|
||||
if _, ok := l.conn.(*net.UDPConn); ok {
|
||||
addr, err := net.ResolveUDPAddr("udp", l.conn.LocalAddr().String())
|
||||
if err == nil {
|
||||
if addr.IP.To4() != nil {
|
||||
xconn = ipv4.NewPacketConn(l.conn)
|
||||
} else {
|
||||
xconn = ipv6.NewPacketConn(l.conn)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// default version
|
||||
if xconn == nil {
|
||||
l.defaultMonitor()
|
||||
return
|
||||
}
|
||||
|
||||
// x/net version
|
||||
msgs := make([]ipv4.Message, batchSize)
|
||||
for k := range msgs {
|
||||
msgs[k].Buffers = [][]byte{make([]byte, mtuLimit)}
|
||||
}
|
||||
|
||||
for {
|
||||
if count, err := xconn.ReadBatch(msgs, 0); err == nil {
|
||||
for i := 0; i < count; i++ {
|
||||
msg := &msgs[i]
|
||||
if msg.N >= l.headerSize+IKCP_OVERHEAD {
|
||||
l.packetInput(msg.Buffers[0][:msg.N], msg.Addr)
|
||||
} else {
|
||||
atomic.AddUint64(&DefaultSnmp.InErrs, 1)
|
||||
}
|
||||
}
|
||||
} else {
|
||||
// compatibility issue:
|
||||
// for linux kernel<=2.6.32, support for sendmmsg is not available
|
||||
// an error of type os.SyscallError will be returned
|
||||
if operr, ok := err.(*net.OpError); ok {
|
||||
if se, ok := operr.Err.(*os.SyscallError); ok {
|
||||
if se.Syscall == "recvmmsg" {
|
||||
l.defaultMonitor()
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
l.notifyReadError(errors.WithStack(err))
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
780
vendor/github.com/fatedier/kcp-go/sess.go
generated
vendored
File diff suppressed because it is too large
25
vendor/github.com/fatedier/kcp-go/tx.go
generated
vendored
Normal file
@@ -0,0 +1,25 @@
|
||||
package kcp
|
||||
|
||||
import (
|
||||
"sync/atomic"
|
||||
|
||||
"github.com/pkg/errors"
|
||||
"golang.org/x/net/ipv4"
|
||||
)
|
||||
|
||||
func (s *UDPSession) defaultTx(txqueue []ipv4.Message) {
|
||||
nbytes := 0
|
||||
npkts := 0
|
||||
for k := range txqueue {
|
||||
if n, err := s.conn.WriteTo(txqueue[k].Buffers[0], txqueue[k].Addr); err == nil {
|
||||
nbytes += n
|
||||
npkts++
|
||||
xmitBuf.Put(txqueue[k].Buffers[0])
|
||||
} else {
|
||||
s.notifyWriteError(errors.WithStack(err))
|
||||
break
|
||||
}
|
||||
}
|
||||
atomic.AddUint64(&DefaultSnmp.OutPkts, uint64(npkts))
|
||||
atomic.AddUint64(&DefaultSnmp.OutBytes, uint64(nbytes))
|
||||
}
|
||||
11
vendor/github.com/fatedier/kcp-go/tx_generic.go
generated
vendored
Normal file
@@ -0,0 +1,11 @@
|
||||
// +build !linux
|
||||
|
||||
package kcp
|
||||
|
||||
import (
|
||||
"golang.org/x/net/ipv4"
|
||||
)
|
||||
|
||||
func (s *UDPSession) tx(txqueue []ipv4.Message) {
|
||||
s.defaultTx(txqueue)
|
||||
}
|
||||
52
vendor/github.com/fatedier/kcp-go/tx_linux.go
generated
vendored
Normal file
@@ -0,0 +1,52 @@
|
||||
// +build linux
|
||||
|
||||
package kcp
|
||||
|
||||
import (
|
||||
"net"
|
||||
"os"
|
||||
"sync/atomic"
|
||||
|
||||
"github.com/pkg/errors"
|
||||
"golang.org/x/net/ipv4"
|
||||
)
|
||||
|
||||
func (s *UDPSession) tx(txqueue []ipv4.Message) {
|
||||
// default version
|
||||
if s.xconn == nil || s.xconnWriteError != nil {
|
||||
s.defaultTx(txqueue)
|
||||
return
|
||||
}
|
||||
|
||||
// x/net version
|
||||
nbytes := 0
|
||||
npkts := 0
|
||||
for len(txqueue) > 0 {
|
||||
if n, err := s.xconn.WriteBatch(txqueue, 0); err == nil {
|
||||
for k := range txqueue[:n] {
|
||||
nbytes += len(txqueue[k].Buffers[0])
|
||||
xmitBuf.Put(txqueue[k].Buffers[0])
|
||||
}
|
||||
npkts += n
|
||||
txqueue = txqueue[n:]
|
||||
} else {
|
||||
// compatibility issue:
|
||||
// for linux kernel<=2.6.32, support for sendmmsg is not available
|
||||
// an error of type os.SyscallError will be returned
|
||||
if operr, ok := err.(*net.OpError); ok {
|
||||
if se, ok := operr.Err.(*os.SyscallError); ok {
|
||||
if se.Syscall == "sendmmsg" {
|
||||
s.xconnWriteError = se
|
||||
s.defaultTx(txqueue)
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
s.notifyWriteError(errors.WithStack(err))
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
atomic.AddUint64(&DefaultSnmp.OutPkts, uint64(npkts))
|
||||
atomic.AddUint64(&DefaultSnmp.OutBytes, uint64(nbytes))
|
||||
}
|
||||
17
vendor/github.com/fatedier/kcp-go/updater.go
generated
vendored
@@ -76,29 +76,28 @@ func (h *updateHeap) wakeup() {
}

func (h *updateHeap) updateTask() {
    var timer <-chan time.Time
    timer := time.NewTimer(0)
    for {
        select {
        case <-timer:
        case <-timer.C:
        case <-h.chWakeUp:
        }

        h.mu.Lock()
        hlen := h.Len()
        now := time.Now()
        for i := 0; i < hlen; i++ {
            entry := heap.Pop(h).(entry)
            if now.After(entry.ts) {
                entry.ts = now.Add(entry.s.update())
                heap.Push(h, entry)
            entry := &h.entries[0]
            if !time.Now().Before(entry.ts) {
                interval := entry.s.update()
                entry.ts = time.Now().Add(interval)
                heap.Fix(h, 0)
            } else {
                heap.Push(h, entry)
                break
            }
        }

        if hlen > 0 {
            timer = time.After(h.entries[0].ts.Sub(now))
            timer.Reset(h.entries[0].ts.Sub(time.Now()))
        }
        h.mu.Unlock()
    }
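The rewritten loop above keeps one reusable `time.Timer` (reset instead of re-created) and repairs the heap root in place with `heap.Fix` rather than popping and re-pushing every entry. A self-contained sketch of that pattern; the `item` and `timedHeap` names are illustrative, not the package's types:

```go
package main

import (
	"container/heap"
	"fmt"
	"time"
)

// item is a scheduled task: run update() when ts is due, then reschedule.
type item struct {
	ts     time.Time
	update func() time.Duration
}

type timedHeap []item

func (h timedHeap) Len() int            { return len(h) }
func (h timedHeap) Less(i, j int) bool  { return h[i].ts.Before(h[j].ts) }
func (h timedHeap) Swap(i, j int)       { h[i], h[j] = h[j], h[i] }
func (h *timedHeap) Push(x interface{}) { *h = append(*h, x.(item)) }
func (h *timedHeap) Pop() interface{} {
	old := *h
	x := old[len(old)-1]
	*h = old[:len(old)-1]
	return x
}

func main() {
	h := &timedHeap{}
	heap.Init(h)
	heap.Push(h, item{
		ts:     time.Now(),
		update: func() time.Duration { fmt.Println("tick"); return 20 * time.Millisecond },
	})

	timer := time.NewTimer(0) // one reusable timer, Reset()ed each round
	for i := 0; i < 3; i++ {
		<-timer.C
		// process due entries at the root of the min-heap; fix in place instead of pop+push
		for h.Len() > 0 && !time.Now().Before((*h)[0].ts) {
			e := &(*h)[0]
			e.ts = time.Now().Add(e.update())
			heap.Fix(h, 0)
		}
		if h.Len() > 0 {
			timer.Reset(time.Until((*h)[0].ts))
		}
	}
}
```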
|
||||
|
||||
110
vendor/github.com/fatedier/kcp-go/xor.go
generated
vendored
@@ -1,110 +0,0 @@
|
||||
// Copyright 2013 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package kcp
|
||||
|
||||
import (
|
||||
"runtime"
|
||||
"unsafe"
|
||||
)
|
||||
|
||||
const wordSize = int(unsafe.Sizeof(uintptr(0)))
|
||||
const supportsUnaligned = runtime.GOARCH == "386" || runtime.GOARCH == "amd64" || runtime.GOARCH == "ppc64" || runtime.GOARCH == "ppc64le" || runtime.GOARCH == "s390x"
|
||||
|
||||
// fastXORBytes xors in bulk. It only works on architectures that
|
||||
// support unaligned read/writes.
|
||||
func fastXORBytes(dst, a, b []byte) int {
|
||||
n := len(a)
|
||||
if len(b) < n {
|
||||
n = len(b)
|
||||
}
|
||||
|
||||
w := n / wordSize
|
||||
if w > 0 {
|
||||
wordBytes := w * wordSize
|
||||
fastXORWords(dst[:wordBytes], a[:wordBytes], b[:wordBytes])
|
||||
}
|
||||
|
||||
for i := (n - n%wordSize); i < n; i++ {
|
||||
dst[i] = a[i] ^ b[i]
|
||||
}
|
||||
|
||||
return n
|
||||
}
|
||||
|
||||
func safeXORBytes(dst, a, b []byte) int {
|
||||
n := len(a)
|
||||
if len(b) < n {
|
||||
n = len(b)
|
||||
}
|
||||
ex := n % 8
|
||||
for i := 0; i < ex; i++ {
|
||||
dst[i] = a[i] ^ b[i]
|
||||
}
|
||||
|
||||
for i := ex; i < n; i += 8 {
|
||||
_dst := dst[i : i+8]
|
||||
_a := a[i : i+8]
|
||||
_b := b[i : i+8]
|
||||
_dst[0] = _a[0] ^ _b[0]
|
||||
_dst[1] = _a[1] ^ _b[1]
|
||||
_dst[2] = _a[2] ^ _b[2]
|
||||
_dst[3] = _a[3] ^ _b[3]
|
||||
|
||||
_dst[4] = _a[4] ^ _b[4]
|
||||
_dst[5] = _a[5] ^ _b[5]
|
||||
_dst[6] = _a[6] ^ _b[6]
|
||||
_dst[7] = _a[7] ^ _b[7]
|
||||
}
|
||||
return n
|
||||
}
|
||||
|
||||
// xorBytes xors the bytes in a and b. The destination is assumed to have enough
|
||||
// space. Returns the number of bytes xor'd.
|
||||
func xorBytes(dst, a, b []byte) int {
|
||||
if supportsUnaligned {
|
||||
return fastXORBytes(dst, a, b)
|
||||
}
|
||||
// TODO(hanwen): if (dst, a, b) have common alignment
|
||||
// we could still try fastXORBytes. It is not clear
|
||||
// how often this happens, and it's only worth it if
|
||||
// the block encryption itself is hardware
|
||||
// accelerated.
|
||||
return safeXORBytes(dst, a, b)
|
||||
}
|
||||
|
||||
// fastXORWords XORs multiples of 4 or 8 bytes (depending on architecture.)
|
||||
// The arguments are assumed to be of equal length.
|
||||
func fastXORWords(dst, a, b []byte) {
|
||||
dw := *(*[]uintptr)(unsafe.Pointer(&dst))
|
||||
aw := *(*[]uintptr)(unsafe.Pointer(&a))
|
||||
bw := *(*[]uintptr)(unsafe.Pointer(&b))
|
||||
n := len(b) / wordSize
|
||||
ex := n % 8
|
||||
for i := 0; i < ex; i++ {
|
||||
dw[i] = aw[i] ^ bw[i]
|
||||
}
|
||||
|
||||
for i := ex; i < n; i += 8 {
|
||||
_dw := dw[i : i+8]
|
||||
_aw := aw[i : i+8]
|
||||
_bw := bw[i : i+8]
|
||||
_dw[0] = _aw[0] ^ _bw[0]
|
||||
_dw[1] = _aw[1] ^ _bw[1]
|
||||
_dw[2] = _aw[2] ^ _bw[2]
|
||||
_dw[3] = _aw[3] ^ _bw[3]
|
||||
_dw[4] = _aw[4] ^ _bw[4]
|
||||
_dw[5] = _aw[5] ^ _bw[5]
|
||||
_dw[6] = _aw[6] ^ _bw[6]
|
||||
_dw[7] = _aw[7] ^ _bw[7]
|
||||
}
|
||||
}
|
||||
|
||||
func xorWords(dst, a, b []byte) {
|
||||
if supportsUnaligned {
|
||||
fastXORWords(dst, a, b)
|
||||
} else {
|
||||
safeXORBytes(dst, a, b)
|
||||
}
|
||||
}
|
||||
19
vendor/github.com/gorilla/context/.travis.yml
generated
vendored
@@ -1,19 +0,0 @@
|
||||
language: go
|
||||
sudo: false
|
||||
|
||||
matrix:
|
||||
include:
|
||||
- go: 1.3
|
||||
- go: 1.4
|
||||
- go: 1.5
|
||||
- go: 1.6
|
||||
- go: 1.7
|
||||
- go: tip
|
||||
allow_failures:
|
||||
- go: tip
|
||||
|
||||
script:
|
||||
- go get -t -v ./...
|
||||
- diff -u <(echo -n) <(gofmt -d .)
|
||||
- go vet $(go list ./... | grep -v /vendor/)
|
||||
- go test -v -race ./...
|
||||
10
vendor/github.com/gorilla/context/README.md
generated
vendored
@@ -1,10 +0,0 @@
|
||||
context
|
||||
=======
|
||||
[](https://travis-ci.org/gorilla/context)
|
||||
|
||||
gorilla/context is a general purpose registry for global request variables.
|
||||
|
||||
> Note: gorilla/context, having been born well before `context.Context` existed, does not play well
|
||||
> with the shallow copying of the request that [`http.Request.WithContext`](https://golang.org/pkg/net/http/#Request.WithContext) (added to net/http Go 1.7 onwards) performs. You should either use *just* gorilla/context, or moving forward, the new `http.Request.Context()`.
|
||||
|
||||
Read the full documentation here: http://www.gorillatoolkit.org/pkg/context
|
||||
143
vendor/github.com/gorilla/context/context.go
generated
vendored
@@ -1,143 +0,0 @@
|
||||
// Copyright 2012 The Gorilla Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package context
|
||||
|
||||
import (
|
||||
"net/http"
|
||||
"sync"
|
||||
"time"
|
||||
)
|
||||
|
||||
var (
|
||||
mutex sync.RWMutex
|
||||
data = make(map[*http.Request]map[interface{}]interface{})
|
||||
datat = make(map[*http.Request]int64)
|
||||
)
|
||||
|
||||
// Set stores a value for a given key in a given request.
|
||||
func Set(r *http.Request, key, val interface{}) {
|
||||
mutex.Lock()
|
||||
if data[r] == nil {
|
||||
data[r] = make(map[interface{}]interface{})
|
||||
datat[r] = time.Now().Unix()
|
||||
}
|
||||
data[r][key] = val
|
||||
mutex.Unlock()
|
||||
}
|
||||
|
||||
// Get returns a value stored for a given key in a given request.
|
||||
func Get(r *http.Request, key interface{}) interface{} {
|
||||
mutex.RLock()
|
||||
if ctx := data[r]; ctx != nil {
|
||||
value := ctx[key]
|
||||
mutex.RUnlock()
|
||||
return value
|
||||
}
|
||||
mutex.RUnlock()
|
||||
return nil
|
||||
}
|
||||
|
||||
// GetOk returns stored value and presence state like multi-value return of map access.
|
||||
func GetOk(r *http.Request, key interface{}) (interface{}, bool) {
|
||||
mutex.RLock()
|
||||
if _, ok := data[r]; ok {
|
||||
value, ok := data[r][key]
|
||||
mutex.RUnlock()
|
||||
return value, ok
|
||||
}
|
||||
mutex.RUnlock()
|
||||
return nil, false
|
||||
}
|
||||
|
||||
// GetAll returns all stored values for the request as a map. Nil is returned for invalid requests.
|
||||
func GetAll(r *http.Request) map[interface{}]interface{} {
|
||||
mutex.RLock()
|
||||
if context, ok := data[r]; ok {
|
||||
result := make(map[interface{}]interface{}, len(context))
|
||||
for k, v := range context {
|
||||
result[k] = v
|
||||
}
|
||||
mutex.RUnlock()
|
||||
return result
|
||||
}
|
||||
mutex.RUnlock()
|
||||
return nil
|
||||
}
|
||||
|
||||
// GetAllOk returns all stored values for the request as a map and a boolean value that indicates if
|
||||
// the request was registered.
|
||||
func GetAllOk(r *http.Request) (map[interface{}]interface{}, bool) {
|
||||
mutex.RLock()
|
||||
context, ok := data[r]
|
||||
result := make(map[interface{}]interface{}, len(context))
|
||||
for k, v := range context {
|
||||
result[k] = v
|
||||
}
|
||||
mutex.RUnlock()
|
||||
return result, ok
|
||||
}
|
||||
|
||||
// Delete removes a value stored for a given key in a given request.
|
||||
func Delete(r *http.Request, key interface{}) {
|
||||
mutex.Lock()
|
||||
if data[r] != nil {
|
||||
delete(data[r], key)
|
||||
}
|
||||
mutex.Unlock()
|
||||
}
|
||||
|
||||
// Clear removes all values stored for a given request.
|
||||
//
|
||||
// This is usually called by a handler wrapper to clean up request
|
||||
// variables at the end of a request lifetime. See ClearHandler().
|
||||
func Clear(r *http.Request) {
|
||||
mutex.Lock()
|
||||
clear(r)
|
||||
mutex.Unlock()
|
||||
}
|
||||
|
||||
// clear is Clear without the lock.
|
||||
func clear(r *http.Request) {
|
||||
delete(data, r)
|
||||
delete(datat, r)
|
||||
}
|
||||
|
||||
// Purge removes request data stored for longer than maxAge, in seconds.
|
||||
// It returns the amount of requests removed.
|
||||
//
|
||||
// If maxAge <= 0, all request data is removed.
|
||||
//
|
||||
// This is only used for sanity check: in case context cleaning was not
|
||||
// properly set some request data can be kept forever, consuming an increasing
|
||||
// amount of memory. In case this is detected, Purge() must be called
|
||||
// periodically until the problem is fixed.
|
||||
func Purge(maxAge int) int {
|
||||
mutex.Lock()
|
||||
count := 0
|
||||
if maxAge <= 0 {
|
||||
count = len(data)
|
||||
data = make(map[*http.Request]map[interface{}]interface{})
|
||||
datat = make(map[*http.Request]int64)
|
||||
} else {
|
||||
min := time.Now().Unix() - int64(maxAge)
|
||||
for r := range data {
|
||||
if datat[r] < min {
|
||||
clear(r)
|
||||
count++
|
||||
}
|
||||
}
|
||||
}
|
||||
mutex.Unlock()
|
||||
return count
|
||||
}
|
||||
|
||||
// ClearHandler wraps an http.Handler and clears request values at the end
|
||||
// of a request lifetime.
|
||||
func ClearHandler(h http.Handler) http.Handler {
|
||||
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
||||
defer Clear(r)
|
||||
h.ServeHTTP(w, r)
|
||||
})
|
||||
}
|
||||
88
vendor/github.com/gorilla/context/doc.go
generated
vendored
@@ -1,88 +0,0 @@
|
||||
// Copyright 2012 The Gorilla Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
/*
|
||||
Package context stores values shared during a request lifetime.
|
||||
|
||||
Note: gorilla/context, having been born well before `context.Context` existed,
|
||||
does not play well > with the shallow copying of the request that
|
||||
[`http.Request.WithContext`](https://golang.org/pkg/net/http/#Request.WithContext)
|
||||
(added to net/http Go 1.7 onwards) performs. You should either use *just*
|
||||
gorilla/context, or moving forward, the new `http.Request.Context()`.
|
||||
|
||||
For example, a router can set variables extracted from the URL and later
|
||||
application handlers can access those values, or it can be used to store
|
||||
sessions values to be saved at the end of a request. There are several
|
||||
others common uses.
|
||||
|
||||
The idea was posted by Brad Fitzpatrick to the go-nuts mailing list:
|
||||
|
||||
http://groups.google.com/group/golang-nuts/msg/e2d679d303aa5d53
|
||||
|
||||
Here's the basic usage: first define the keys that you will need. The key
|
||||
type is interface{} so a key can be of any type that supports equality.
|
||||
Here we define a key using a custom int type to avoid name collisions:
|
||||
|
||||
package foo
|
||||
|
||||
import (
|
||||
"github.com/gorilla/context"
|
||||
)
|
||||
|
||||
type key int
|
||||
|
||||
const MyKey key = 0
|
||||
|
||||
Then set a variable. Variables are bound to an http.Request object, so you
|
||||
need a request instance to set a value:
|
||||
|
||||
context.Set(r, MyKey, "bar")
|
||||
|
||||
The application can later access the variable using the same key you provided:
|
||||
|
||||
func MyHandler(w http.ResponseWriter, r *http.Request) {
|
||||
// val is "bar".
|
||||
val := context.Get(r, foo.MyKey)
|
||||
|
||||
// returns ("bar", true)
|
||||
val, ok := context.GetOk(r, foo.MyKey)
|
||||
// ...
|
||||
}
|
||||
|
||||
And that's all about the basic usage. We discuss some other ideas below.
|
||||
|
||||
Any type can be stored in the context. To enforce a given type, make the key
|
||||
private and wrap Get() and Set() to accept and return values of a specific
|
||||
type:
|
||||
|
||||
type key int
|
||||
|
||||
const mykey key = 0
|
||||
|
||||
// GetMyKey returns a value for this package from the request values.
|
||||
func GetMyKey(r *http.Request) SomeType {
|
||||
if rv := context.Get(r, mykey); rv != nil {
|
||||
return rv.(SomeType)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// SetMyKey sets a value for this package in the request values.
|
||||
func SetMyKey(r *http.Request, val SomeType) {
|
||||
context.Set(r, mykey, val)
|
||||
}
|
||||
|
||||
Variables must be cleared at the end of a request, to remove all values
|
||||
that were stored. This can be done in an http.Handler, after a request was
|
||||
served. Just call Clear() passing the request:
|
||||
|
||||
context.Clear(r)
|
||||
|
||||
...or use ClearHandler(), which conveniently wraps an http.Handler to clear
|
||||
variables at the end of a request lifetime.
|
||||
|
||||
The Routers from the packages gorilla/mux and gorilla/pat call Clear()
|
||||
so if you are using either of them you don't need to clear the context manually.
|
||||
*/
|
||||
package context
|
||||
23
vendor/github.com/gorilla/mux/.travis.yml
generated
vendored
@@ -1,23 +0,0 @@
|
||||
language: go
|
||||
sudo: false
|
||||
|
||||
matrix:
|
||||
include:
|
||||
- go: 1.5.x
|
||||
- go: 1.6.x
|
||||
- go: 1.7.x
|
||||
- go: 1.8.x
|
||||
- go: 1.9.x
|
||||
- go: 1.10.x
|
||||
- go: tip
|
||||
allow_failures:
|
||||
- go: tip
|
||||
|
||||
install:
|
||||
- # Skip
|
||||
|
||||
script:
|
||||
- go get -t -v ./...
|
||||
- diff -u <(echo -n) <(gofmt -d .)
|
||||
- go tool vet .
|
||||
- go test -v -race ./...
|
||||
8
vendor/github.com/gorilla/mux/AUTHORS
generated
vendored
Normal file
@@ -0,0 +1,8 @@
|
||||
# This is the official list of gorilla/mux authors for copyright purposes.
|
||||
#
|
||||
# Please keep the list sorted.
|
||||
|
||||
Google LLC (https://opensource.google.com/)
|
||||
Kamil Kisielk <kamil@kamilkisiel.net>
|
||||
Matt Silverlock <matt@eatsleeprepeat.net>
|
||||
Rodrigo Moraes (https://github.com/moraes)
|
||||
11
vendor/github.com/gorilla/mux/ISSUE_TEMPLATE.md
generated
vendored
@@ -1,11 +0,0 @@
|
||||
**What version of Go are you running?** (Paste the output of `go version`)
|
||||
|
||||
|
||||
**What version of gorilla/mux are you at?** (Paste the output of `git rev-parse HEAD` inside `$GOPATH/src/github.com/gorilla/mux`)
|
||||
|
||||
|
||||
**Describe your problem** (and what you have tried so far)
|
||||
|
||||
|
||||
**Paste a minimal, runnable, reproduction of your issue below** (use backticks to format it)
|
||||
|
||||
2
vendor/github.com/gorilla/mux/LICENSE
generated
vendored
@@ -1,4 +1,4 @@
|
||||
Copyright (c) 2012 Rodrigo Moraes. All rights reserved.
|
||||
Copyright (c) 2012-2018 The Gorilla Authors. All rights reserved.
|
||||
|
||||
Redistribution and use in source and binary forms, with or without
|
||||
modification, are permitted provided that the following conditions are
|
||||
|
||||
85
vendor/github.com/gorilla/mux/README.md
generated
vendored
@@ -2,11 +2,12 @@
|
||||
|
||||
[](https://godoc.org/github.com/gorilla/mux)
|
||||
[](https://travis-ci.org/gorilla/mux)
|
||||
[](https://circleci.com/gh/gorilla/mux)
|
||||
[](https://sourcegraph.com/github.com/gorilla/mux?badge)
|
||||
|
||||

|
||||
|
||||
http://www.gorillatoolkit.org/pkg/mux
|
||||
https://www.gorillatoolkit.org/pkg/mux
|
||||
|
||||
Package `gorilla/mux` implements a request router and dispatcher for matching incoming requests to
|
||||
their respective handler.
|
||||
@@ -29,6 +30,7 @@ The name mux stands for "HTTP request multiplexer". Like the standard `http.Serv
|
||||
* [Walking Routes](#walking-routes)
|
||||
* [Graceful Shutdown](#graceful-shutdown)
|
||||
* [Middleware](#middleware)
|
||||
* [Handling CORS Requests](#handling-cors-requests)
|
||||
* [Testing Handlers](#testing-handlers)
|
||||
* [Full Example](#full-example)
|
||||
|
||||
@@ -88,7 +90,7 @@ r := mux.NewRouter()
|
||||
// Only matches if domain is "www.example.com".
|
||||
r.Host("www.example.com")
|
||||
// Matches a dynamic subdomain.
|
||||
r.Host("{subdomain:[a-z]+}.domain.com")
|
||||
r.Host("{subdomain:[a-z]+}.example.com")
|
||||
```
|
||||
|
||||
There are several other matchers that can be added. To match path prefixes:
|
||||
@@ -238,13 +240,13 @@ This also works for host and query value variables:
|
||||
|
||||
```go
|
||||
r := mux.NewRouter()
|
||||
r.Host("{subdomain}.domain.com").
|
||||
r.Host("{subdomain}.example.com").
|
||||
Path("/articles/{category}/{id:[0-9]+}").
|
||||
Queries("filter", "{filter}").
|
||||
HandlerFunc(ArticleHandler).
|
||||
Name("article")
|
||||
|
||||
// url.String() will be "http://news.domain.com/articles/technology/42?filter=gorilla"
|
||||
// url.String() will be "http://news.example.com/articles/technology/42?filter=gorilla"
|
||||
url, err := r.Get("article").URL("subdomain", "news",
|
||||
"category", "technology",
|
||||
"id", "42",
|
||||
@@ -264,7 +266,7 @@ r.HeadersRegexp("Content-Type", "application/(text|json)")
|
||||
There's also a way to build only the URL host or path for a route: use the methods `URLHost()` or `URLPath()` instead. For the previous route, we would do:
|
||||
|
||||
```go
|
||||
// "http://news.domain.com/"
|
||||
// "http://news.example.com/"
|
||||
host, err := r.Get("article").URLHost("subdomain", "news")
|
||||
|
||||
// "/articles/technology/42"
|
||||
@@ -275,12 +277,12 @@ And if you use subrouters, host and path defined separately can be built as well
|
||||
|
||||
```go
|
||||
r := mux.NewRouter()
|
||||
s := r.Host("{subdomain}.domain.com").Subrouter()
|
||||
s := r.Host("{subdomain}.example.com").Subrouter()
|
||||
s.Path("/articles/{category}/{id:[0-9]+}").
|
||||
HandlerFunc(ArticleHandler).
|
||||
Name("article")
|
||||
|
||||
// "http://news.domain.com/articles/technology/42"
|
||||
// "http://news.example.com/articles/technology/42"
|
||||
url, err := r.Get("article").URL("subdomain", "news",
|
||||
"category", "technology",
|
||||
"id", "42")
|
||||
@@ -491,6 +493,73 @@ r.Use(amw.Middleware)
|
||||
|
||||
Note: The handler chain will be stopped if your middleware doesn't call `next.ServeHTTP()` with the corresponding parameters. This can be used to abort a request if the middleware writer wants to. Middlewares _should_ write to `ResponseWriter` if they _are_ going to terminate the request, and they _should not_ write to `ResponseWriter` if they _are not_ going to terminate it.
|
||||
|
||||
### Handling CORS Requests
|
||||
|
||||
[CORSMethodMiddleware](https://godoc.org/github.com/gorilla/mux#CORSMethodMiddleware) intends to make it easier to strictly set the `Access-Control-Allow-Methods` response header.
|
||||
|
||||
* You will still need to use your own CORS handler to set the other CORS headers such as `Access-Control-Allow-Origin`
|
||||
* The middleware will set the `Access-Control-Allow-Methods` header to all the method matchers (e.g. `r.Methods(http.MethodGet, http.MethodPut, http.MethodOptions)` -> `Access-Control-Allow-Methods: GET,PUT,OPTIONS`) on a route
|
||||
* If you do not specify any methods, then:
|
||||
> _Important_: there must be an `OPTIONS` method matcher for the middleware to set the headers.
|
||||
|
||||
Here is an example of using `CORSMethodMiddleware` along with a custom `OPTIONS` handler to set all the required CORS headers:
|
||||
|
||||
```go
|
||||
package main
|
||||
|
||||
import (
|
||||
"net/http"
|
||||
"github.com/gorilla/mux"
|
||||
)
|
||||
|
||||
func main() {
|
||||
r := mux.NewRouter()
|
||||
|
||||
// IMPORTANT: you must specify an OPTIONS method matcher for the middleware to set CORS headers
|
||||
r.HandleFunc("/foo", fooHandler).Methods(http.MethodGet, http.MethodPut, http.MethodPatch, http.MethodOptions)
|
||||
r.Use(mux.CORSMethodMiddleware(r))
|
||||
|
||||
http.ListenAndServe(":8080", r)
|
||||
}
|
||||
|
||||
func fooHandler(w http.ResponseWriter, r *http.Request) {
|
||||
w.Header().Set("Access-Control-Allow-Origin", "*")
|
||||
if r.Method == http.MethodOptions {
|
||||
return
|
||||
}
|
||||
|
||||
w.Write([]byte("foo"))
|
||||
}
|
||||
```
|
||||
|
||||
And a request to `/foo` using something like:
|
||||
|
||||
```bash
|
||||
curl localhost:8080/foo -v
|
||||
```
|
||||
|
||||
Would look like:
|
||||
|
||||
```bash
|
||||
* Trying ::1...
|
||||
* TCP_NODELAY set
|
||||
* Connected to localhost (::1) port 8080 (#0)
|
||||
> GET /foo HTTP/1.1
|
||||
> Host: localhost:8080
|
||||
> User-Agent: curl/7.59.0
|
||||
> Accept: */*
|
||||
>
|
||||
< HTTP/1.1 200 OK
|
||||
< Access-Control-Allow-Methods: GET,PUT,PATCH,OPTIONS
|
||||
< Access-Control-Allow-Origin: *
|
||||
< Date: Fri, 28 Jun 2019 20:13:30 GMT
|
||||
< Content-Length: 3
|
||||
< Content-Type: text/plain; charset=utf-8
|
||||
<
|
||||
* Connection #0 to host localhost left intact
|
||||
foo
|
||||
```
|
||||
|
||||
### Testing Handlers
|
||||
|
||||
Testing handlers in a Go web application is straightforward, and _mux_ doesn't complicate this any further. Given two files: `endpoints.go` and `endpoints_test.go`, here's how we'd test an application using _mux_.
|
||||
@@ -503,8 +572,8 @@ package main
|
||||
|
||||
func HealthCheckHandler(w http.ResponseWriter, r *http.Request) {
|
||||
// A very simple health check.
|
||||
w.WriteHeader(http.StatusOK)
|
||||
w.Header().Set("Content-Type", "application/json")
|
||||
w.WriteHeader(http.StatusOK)
|
||||
|
||||
// In the future we could report back on the status of our DB, or our cache
|
||||
// (e.g. Redis) by performing a simple PING, and include them in the response.
|
||||
|
||||
@@ -1,5 +1,3 @@
|
||||
// +build go1.7
|
||||
|
||||
package mux
|
||||
|
||||
import (
|
||||
@@ -18,7 +16,3 @@ func contextSet(r *http.Request, key, val interface{}) *http.Request {
|
||||
|
||||
return r.WithContext(context.WithValue(r.Context(), key, val))
|
||||
}
|
||||
|
||||
func contextClear(r *http.Request) {
|
||||
return
|
||||
}
|
||||
26
vendor/github.com/gorilla/mux/context_gorilla.go
generated
vendored
@@ -1,26 +0,0 @@
|
||||
// +build !go1.7
|
||||
|
||||
package mux
|
||||
|
||||
import (
|
||||
"net/http"
|
||||
|
||||
"github.com/gorilla/context"
|
||||
)
|
||||
|
||||
func contextGet(r *http.Request, key interface{}) interface{} {
|
||||
return context.Get(r, key)
|
||||
}
|
||||
|
||||
func contextSet(r *http.Request, key, val interface{}) *http.Request {
|
||||
if val == nil {
|
||||
return r
|
||||
}
|
||||
|
||||
context.Set(r, key, val)
|
||||
return r
|
||||
}
|
||||
|
||||
func contextClear(r *http.Request) {
|
||||
context.Clear(r)
|
||||
}
|
||||
2
vendor/github.com/gorilla/mux/doc.go
generated
vendored
@@ -295,7 +295,7 @@ A more complex authentication middleware, which maps session token to users, cou
|
||||
r := mux.NewRouter()
|
||||
r.HandleFunc("/", handler)
|
||||
|
||||
amw := authenticationMiddleware{}
|
||||
amw := authenticationMiddleware{tokenUsers: make(map[string]string)}
|
||||
amw.Populate()
|
||||
|
||||
r.Use(amw.Middleware)
|
||||
|
||||
1
vendor/github.com/gorilla/mux/go.mod
generated
vendored
Normal file
@@ -0,0 +1 @@
|
||||
module github.com/gorilla/mux
|
||||
61
vendor/github.com/gorilla/mux/middleware.go
generated
vendored
@@ -32,37 +32,19 @@ func (r *Router) useInterface(mw middleware) {
|
||||
r.middlewares = append(r.middlewares, mw)
|
||||
}
|
||||
|
||||
// CORSMethodMiddleware sets the Access-Control-Allow-Methods response header
|
||||
// on a request, by matching routes based only on paths. It also handles
|
||||
// OPTIONS requests, by settings Access-Control-Allow-Methods, and then
|
||||
// returning without calling the next http handler.
|
||||
// CORSMethodMiddleware automatically sets the Access-Control-Allow-Methods response header
|
||||
// on requests for routes that have an OPTIONS method matcher to all the method matchers on
|
||||
// the route. Routes that do not explicitly handle OPTIONS requests will not be processed
|
||||
// by the middleware. See examples for usage.
|
||||
func CORSMethodMiddleware(r *Router) MiddlewareFunc {
|
||||
return func(next http.Handler) http.Handler {
|
||||
return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
|
||||
var allMethods []string
|
||||
|
||||
err := r.Walk(func(route *Route, _ *Router, _ []*Route) error {
|
||||
for _, m := range route.matchers {
|
||||
if _, ok := m.(*routeRegexp); ok {
|
||||
if m.Match(req, &RouteMatch{}) {
|
||||
methods, err := route.GetMethods()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
allMethods = append(allMethods, methods...)
|
||||
}
|
||||
break
|
||||
}
|
||||
}
|
||||
return nil
|
||||
})
|
||||
|
||||
allMethods, err := getAllMethodsForRoute(r, req)
|
||||
if err == nil {
|
||||
w.Header().Set("Access-Control-Allow-Methods", strings.Join(append(allMethods, "OPTIONS"), ","))
|
||||
|
||||
if req.Method == "OPTIONS" {
|
||||
return
|
||||
for _, v := range allMethods {
|
||||
if v == http.MethodOptions {
|
||||
w.Header().Set("Access-Control-Allow-Methods", strings.Join(allMethods, ","))
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@@ -70,3 +52,28 @@ func CORSMethodMiddleware(r *Router) MiddlewareFunc {
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
// getAllMethodsForRoute returns all the methods from method matchers matching a given
|
||||
// request.
|
||||
func getAllMethodsForRoute(r *Router, req *http.Request) ([]string, error) {
|
||||
var allMethods []string
|
||||
|
||||
err := r.Walk(func(route *Route, _ *Router, _ []*Route) error {
|
||||
for _, m := range route.matchers {
|
||||
if _, ok := m.(*routeRegexp); ok {
|
||||
if m.Match(req, &RouteMatch{}) {
|
||||
methods, err := route.GetMethods()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
allMethods = append(allMethods, methods...)
|
||||
}
|
||||
break
|
||||
}
|
||||
}
|
||||
return nil
|
||||
})
|
||||
|
||||
return allMethods, err
|
||||
}
|
||||
|
||||
129
vendor/github.com/gorilla/mux/mux.go
generated
vendored
@@ -22,7 +22,7 @@ var (
|
||||
|
||||
// NewRouter returns a new router instance.
|
||||
func NewRouter() *Router {
|
||||
return &Router{namedRoutes: make(map[string]*Route), KeepContext: false}
|
||||
return &Router{namedRoutes: make(map[string]*Route)}
|
||||
}
|
||||
|
||||
// Router registers routes to be matched and dispatches a handler.
|
||||
@@ -50,24 +50,78 @@ type Router struct {
|
||||
// Configurable Handler to be used when the request method does not match the route.
|
||||
MethodNotAllowedHandler http.Handler
|
||||
|
||||
// Parent route, if this is a subrouter.
|
||||
parent parentRoute
|
||||
// Routes to be matched, in order.
|
||||
routes []*Route
|
||||
|
||||
// Routes by name for URL building.
|
||||
namedRoutes map[string]*Route
|
||||
// See Router.StrictSlash(). This defines the flag for new routes.
|
||||
strictSlash bool
|
||||
// See Router.SkipClean(). This defines the flag for new routes.
|
||||
skipClean bool
|
||||
|
||||
// If true, do not clear the request context after handling the request.
|
||||
// This has no effect when go1.7+ is used, since the context is stored
|
||||
//
|
||||
// Deprecated: No effect when go1.7+ is used, since the context is stored
|
||||
// on the request itself.
|
||||
KeepContext bool
|
||||
// see Router.UseEncodedPath(). This defines a flag for all routes.
|
||||
useEncodedPath bool
|
||||
|
||||
// Slice of middlewares to be called after a match is found
|
||||
middlewares []middleware
|
||||
|
||||
// configuration shared with `Route`
|
||||
routeConf
|
||||
}
|
||||
|
||||
// common route configuration shared between `Router` and `Route`
|
||||
type routeConf struct {
|
||||
// If true, "/path/foo%2Fbar/to" will match the path "/path/{var}/to"
|
||||
useEncodedPath bool
|
||||
|
||||
// If true, when the path pattern is "/path/", accessing "/path" will
|
||||
// redirect to the former and vice versa.
|
||||
strictSlash bool
|
||||
|
||||
// If true, when the path pattern is "/path//to", accessing "/path//to"
|
||||
// will not redirect
|
||||
skipClean bool
|
||||
|
||||
// Manager for the variables from host and path.
|
||||
regexp routeRegexpGroup
|
||||
|
||||
// List of matchers.
|
||||
matchers []matcher
|
||||
|
||||
// The scheme used when building URLs.
|
||||
buildScheme string
|
||||
|
||||
buildVarsFunc BuildVarsFunc
|
||||
}
|
||||
|
||||
// returns an effective deep copy of `routeConf`
|
||||
func copyRouteConf(r routeConf) routeConf {
|
||||
c := r
|
||||
|
||||
if r.regexp.path != nil {
|
||||
c.regexp.path = copyRouteRegexp(r.regexp.path)
|
||||
}
|
||||
|
||||
if r.regexp.host != nil {
|
||||
c.regexp.host = copyRouteRegexp(r.regexp.host)
|
||||
}
|
||||
|
||||
c.regexp.queries = make([]*routeRegexp, 0, len(r.regexp.queries))
|
||||
for _, q := range r.regexp.queries {
|
||||
c.regexp.queries = append(c.regexp.queries, copyRouteRegexp(q))
|
||||
}
|
||||
|
||||
c.matchers = make([]matcher, 0, len(r.matchers))
|
||||
for _, m := range r.matchers {
|
||||
c.matchers = append(c.matchers, m)
|
||||
}
|
||||
|
||||
return c
|
||||
}
|
||||
|
||||
func copyRouteRegexp(r *routeRegexp) *routeRegexp {
|
||||
c := *r
|
||||
return &c
|
||||
}
|
||||
|
||||
// Match attempts to match the given request against the router's registered routes.
|
||||
@@ -155,22 +209,18 @@ func (r *Router) ServeHTTP(w http.ResponseWriter, req *http.Request) {
|
||||
handler = http.NotFoundHandler()
|
||||
}
|
||||
|
||||
if !r.KeepContext {
|
||||
defer contextClear(req)
|
||||
}
|
||||
|
||||
handler.ServeHTTP(w, req)
|
||||
}
|
||||
|
||||
// Get returns a route registered with the given name.
|
||||
func (r *Router) Get(name string) *Route {
|
||||
return r.getNamedRoutes()[name]
|
||||
return r.namedRoutes[name]
|
||||
}
|
||||
|
||||
// GetRoute returns a route registered with the given name. This method
|
||||
// was renamed to Get() and remains here for backwards compatibility.
|
||||
func (r *Router) GetRoute(name string) *Route {
|
||||
return r.getNamedRoutes()[name]
|
||||
return r.namedRoutes[name]
|
||||
}
|
||||
|
||||
// StrictSlash defines the trailing slash behavior for new routes. The initial
|
||||
@@ -221,55 +271,24 @@ func (r *Router) UseEncodedPath() *Router {
|
||||
return r
|
||||
}
|
||||
|
||||
// ----------------------------------------------------------------------------
|
||||
// parentRoute
|
||||
// ----------------------------------------------------------------------------
|
||||
|
||||
func (r *Router) getBuildScheme() string {
|
||||
if r.parent != nil {
|
||||
return r.parent.getBuildScheme()
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
// getNamedRoutes returns the map where named routes are registered.
|
||||
func (r *Router) getNamedRoutes() map[string]*Route {
|
||||
if r.namedRoutes == nil {
|
||||
if r.parent != nil {
|
||||
r.namedRoutes = r.parent.getNamedRoutes()
|
||||
} else {
|
||||
r.namedRoutes = make(map[string]*Route)
|
||||
}
|
||||
}
|
||||
return r.namedRoutes
|
||||
}
|
||||
|
||||
// getRegexpGroup returns regexp definitions from the parent route, if any.
|
||||
func (r *Router) getRegexpGroup() *routeRegexpGroup {
|
||||
if r.parent != nil {
|
||||
return r.parent.getRegexpGroup()
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (r *Router) buildVars(m map[string]string) map[string]string {
|
||||
if r.parent != nil {
|
||||
m = r.parent.buildVars(m)
|
||||
}
|
||||
return m
|
||||
}
|
||||
|
||||
// ----------------------------------------------------------------------------
|
||||
// Route factories
|
||||
// ----------------------------------------------------------------------------
|
||||
|
||||
// NewRoute registers an empty route.
|
||||
func (r *Router) NewRoute() *Route {
|
||||
route := &Route{parent: r, strictSlash: r.strictSlash, skipClean: r.skipClean, useEncodedPath: r.useEncodedPath}
|
||||
// initialize a route with a copy of the parent router's configuration
|
||||
route := &Route{routeConf: copyRouteConf(r.routeConf), namedRoutes: r.namedRoutes}
|
||||
r.routes = append(r.routes, route)
|
||||
return route
|
||||
}
|
||||
|
||||
// Name registers a new route with a name.
|
||||
// See Route.Name().
|
||||
func (r *Router) Name(name string) *Route {
|
||||
return r.NewRoute().Name(name)
|
||||
}
|
||||
|
||||
// Handle registers a new route with a matcher for the URL path.
|
||||
// See Route.Path() and Route.Handler().
|
||||
func (r *Router) Handle(path string, handler http.Handler) *Route {
|
||||
|
||||
51
vendor/github.com/gorilla/mux/regexp.go
generated
vendored
@@ -113,6 +113,13 @@ func newRouteRegexp(tpl string, typ regexpType, options routeRegexpOptions) (*ro
|
||||
if typ != regexpTypePrefix {
|
||||
pattern.WriteByte('$')
|
||||
}
|
||||
|
||||
var wildcardHostPort bool
|
||||
if typ == regexpTypeHost {
|
||||
if !strings.Contains(pattern.String(), ":") {
|
||||
wildcardHostPort = true
|
||||
}
|
||||
}
|
||||
reverse.WriteString(raw)
|
||||
if endSlash {
|
||||
reverse.WriteByte('/')
|
||||
@@ -131,13 +138,14 @@ func newRouteRegexp(tpl string, typ regexpType, options routeRegexpOptions) (*ro
|
||||
|
||||
// Done!
|
||||
return &routeRegexp{
|
||||
template: template,
|
||||
regexpType: typ,
|
||||
options: options,
|
||||
regexp: reg,
|
||||
reverse: reverse.String(),
|
||||
varsN: varsN,
|
||||
varsR: varsR,
|
||||
template: template,
|
||||
regexpType: typ,
|
||||
options: options,
|
||||
regexp: reg,
|
||||
reverse: reverse.String(),
|
||||
varsN: varsN,
|
||||
varsR: varsR,
|
||||
wildcardHostPort: wildcardHostPort,
|
||||
}, nil
|
||||
}
|
||||
|
||||
@@ -158,11 +166,22 @@ type routeRegexp struct {
|
||||
varsN []string
|
||||
// Variable regexps (validators).
|
||||
varsR []*regexp.Regexp
|
||||
// Wildcard host-port (no strict port match in hostname)
|
||||
wildcardHostPort bool
|
||||
}
|
||||
|
||||
// Match matches the regexp against the URL host or path.
|
||||
func (r *routeRegexp) Match(req *http.Request, match *RouteMatch) bool {
|
||||
if r.regexpType != regexpTypeHost {
|
||||
if r.regexpType == regexpTypeHost {
|
||||
host := getHost(req)
|
||||
if r.wildcardHostPort {
|
||||
// Don't be strict on the port match
|
||||
if i := strings.Index(host, ":"); i != -1 {
|
||||
host = host[:i]
|
||||
}
|
||||
}
|
||||
return r.regexp.MatchString(host)
|
||||
} else {
|
||||
if r.regexpType == regexpTypeQuery {
|
||||
return r.matchQueryString(req)
|
||||
}
|
||||
@@ -172,8 +191,6 @@ func (r *routeRegexp) Match(req *http.Request, match *RouteMatch) bool {
|
||||
}
|
||||
return r.regexp.MatchString(path)
|
||||
}
|
||||
|
||||
return r.regexp.MatchString(getHost(req))
|
||||
}
|
||||
|
||||
// url builds a URL part using the given values.
|
||||
@@ -267,7 +284,7 @@ type routeRegexpGroup struct {
|
||||
}
|
||||
|
||||
// setMatch extracts the variables from the URL once a route matches.
|
||||
func (v *routeRegexpGroup) setMatch(req *http.Request, m *RouteMatch, r *Route) {
|
||||
func (v routeRegexpGroup) setMatch(req *http.Request, m *RouteMatch, r *Route) {
|
||||
// Store host variables.
|
||||
if v.host != nil {
|
||||
host := getHost(req)
|
||||
@@ -296,7 +313,7 @@ func (v *routeRegexpGroup) setMatch(req *http.Request, m *RouteMatch, r *Route)
|
||||
} else {
|
||||
u.Path += "/"
|
||||
}
|
||||
m.Handler = http.RedirectHandler(u.String(), 301)
|
||||
m.Handler = http.RedirectHandler(u.String(), http.StatusMovedPermanently)
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -312,17 +329,13 @@ func (v *routeRegexpGroup) setMatch(req *http.Request, m *RouteMatch, r *Route)
|
||||
}
|
||||
|
||||
// getHost tries its best to return the request host.
|
||||
// According to section 14.23 of RFC 2616 the Host header
|
||||
// can include the port number if the default value of 80 is not used.
|
||||
func getHost(r *http.Request) string {
|
||||
if r.URL.IsAbs() {
|
||||
return r.URL.Host
|
||||
}
|
||||
host := r.Host
|
||||
// Slice off any port information.
|
||||
if i := strings.Index(host, ":"); i != -1 {
|
||||
host = host[:i]
|
||||
}
|
||||
return host
|
||||
|
||||
return r.Host
|
||||
}
|
||||
|
||||
func extractVars(input string, matches []int, names []string, output map[string]string) {
|
||||
|
||||
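A small illustration of the host-matching behavior this hunk changes: with wildcardHostPort set, a Host() template that names no port matches the request host with any port stripped. The hostnames and port below are placeholders only:

```go
package main

import (
	"fmt"
	"net/http"

	"github.com/gorilla/mux"
)

func main() {
	r := mux.NewRouter()
	// No port in the template, so (after this change) a request to
	// tenant1.example.com:8080 matches just like tenant1.example.com.
	r.Host("{subdomain}.example.com").HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
		fmt.Fprintln(w, "subdomain:", mux.Vars(req)["subdomain"])
	})
	http.ListenAndServe(":8080", r)
}
```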
141  vendor/github.com/gorilla/mux/route.go  (generated, vendored)
The Route struct drops its parent, strictSlash, skipClean, useEncodedPath and buildScheme fields in favour of an embedded routeConf plus a shared namedRoutes map. Match() now clears ErrNotFound errors coming from subrouter matches so middleware on later matching subrouters still runs, and calls r.regexp.setMatch unconditionally. Name() registers the route in r.namedRoutes directly and its comment now notes that calling Name more than once is an error, and a doc typo ("It the value" -> "If the value") is fixed. Schemes() drops the buildScheme == "" guard before recording the first scheme, BuildVarsFunc() composes a new function with any previously set one instead of replacing it, and Subrouter() initializes the child router with copyRouteConf(r.routeConf) and the shared namedRoutes map. URL, URLHost, URLPath, GetPathTemplate, GetPathRegexp, GetQueriesRegexp, GetQueriesTemplates and GetHostTemplate lose their "r.regexp == nil" checks and use r.buildScheme directly, buildVars no longer climbs a parent chain, and the parentRoute interface together with Route's getBuildScheme, getNamedRoutes and getRegexpGroup implementations is removed.
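As a reference for the Subrouter change above (the child router now copies the parent route's configuration instead of keeping a parent pointer), here is a typical subrouter setup; the paths and port are illustrative only:

```go
package main

import (
	"fmt"
	"net/http"

	"github.com/gorilla/mux"
)

func main() {
	r := mux.NewRouter().StrictSlash(true)

	// The subrouter inherits the parent's configuration (StrictSlash here)
	// via the copied routeConf.
	api := r.PathPrefix("/api").Subrouter()
	api.HandleFunc("/status", func(w http.ResponseWriter, req *http.Request) {
		fmt.Fprintln(w, "ok")
	}).Methods("GET")

	http.ListenAndServe(":8080", r)
}
```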
2  vendor/github.com/gorilla/websocket/.gitignore  (generated, vendored)
Adds .idea/ alongside the existing *.iml entry.
10  vendor/github.com/gorilla/websocket/.travis.yml  (generated, vendored)
The build matrix moves from Go 1.4-1.8 to Go 1.7.x-1.11.x plus tip (tip is still allowed to fail).
1  vendor/github.com/gorilla/websocket/AUTHORS  (generated, vendored)
Adds "Google LLC (https://opensource.google.com/)" to the sorted contributor list.
2  vendor/github.com/gorilla/websocket/README.md  (generated, vendored)
Whitespace-only change to the "Notes:" line after the feature comparison table.
253  vendor/github.com/gorilla/websocket/client.go  (generated, vendored)
Dialer gains NetDialContext and WriteBufferPool fields, and Dial becomes a thin wrapper around the new DialContext, which threads a context.Context through the request, the handshake timeout and the network dial. The hand-rolled parseURL helper is replaced by url.Parse, DefaultDialer now sets a 45-second HandshakeTimeout, and a package-level nilDialer is used when the receiver is nil. Proxy handling switches from writing a CONNECT request (with optional Proxy-Authorization) by hand to wrapping the dial function with proxy_FromURL, httptrace hooks (GetConn, GotConn, GotFirstResponseByte) fire when a ClientTrace is attached to the context, and the TLS handshake moves into doHandshake/doHandshakeWithTrace, which still verify the hostname unless InsecureSkipVerify is set. newConn is now called with the write buffer pool arguments.
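A hedged sketch of the client-side API this file introduces (DialContext driven by a context with a timeout); the echo URL is a placeholder, not an endpoint from this repository:

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/gorilla/websocket"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// DialContext cancels the dial and handshake when the context expires.
	conn, resp, err := websocket.DefaultDialer.DialContext(ctx, "wss://echo.example.com/ws", nil)
	if err != nil {
		log.Fatalf("dial failed (response %v): %v", resp, err)
	}
	defer conn.Close()

	if err := conn.WriteMessage(websocket.TextMessage, []byte("ping")); err != nil {
		log.Fatal(err)
	}
	_, msg, err := conn.ReadMessage()
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("got: %s", msg)
}
```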
174  vendor/github.com/gorilla/websocket/conn.go  (generated, vendored)
Adds the BufferPool interface (satisfied by *sync.Pool) and an internal writePoolData wrapper, plus writePool and writeBufSize fields on Conn. newConn replaces newConnBRW/writeHook and now accepts a write buffer pool, an optional *bufio.Reader and an optional write buffer; prepWrite fetches a buffer from the pool when the connection has none, and flushFrame returns it to the pool after the final frame of a message. write() takes two explicit buffers and sends them through the new writeBufs helper, the read() helper moves into this file, and the default close handler always formats a close message (FormatCloseMessage itself now returns an empty payload for CloseNoStatusReceived). The doc comments for PongMessage, CloseError, Close, SetReadLimit and the close/ping/pong handlers are reworded from "frames" to "messages" and note that the handlers are called from NextReader, ReadMessage and the message reader's Read.
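For context, a minimal sketch of how the new WriteBufferPool option can be used on the client side, with many dialed connections sharing one pool (the BufferPool interface added in this file is satisfied by *sync.Pool). The server URL and timeouts are illustrative assumptions:

```go
package main

import (
	"log"
	"sync"
	"time"

	"github.com/gorilla/websocket"
)

func main() {
	// One pool shared by every connection created with this dialer.
	dialer := &websocket.Dialer{
		HandshakeTimeout: 10 * time.Second,
		WriteBufferPool:  &sync.Pool{},
	}

	for i := 0; i < 3; i++ {
		conn, _, err := dialer.Dial("ws://127.0.0.1:8080/ws", nil)
		if err != nil {
			log.Printf("dial %d: %v", i, err)
			continue
		}
		conn.WriteMessage(websocket.TextMessage, []byte("hello"))
		conn.Close()
	}
}
```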
21  vendor/github.com/gorilla/websocket/conn_read_legacy.go  (generated, vendored)
The pre-Go 1.5 read fallback file is deleted. A follow-on hunk whose file header was lost in this capture (apparently the old conn_read.go becoming conn_write.go) switches its build tag from go1.5 to go1.8, swaps `import "io"` for `import "net"`, drops the duplicated read helper and adds a writeBufs method built on net.Buffers.
18  vendor/github.com/gorilla/websocket/conn_write_legacy.go  (generated, vendored, new file)
New pre-Go 1.8 fallback that implements writeBufs as a simple loop of c.conn.Write calls.
56  vendor/github.com/gorilla/websocket/doc.go  (generated, vendored)
The package documentation is updated: the overview now points at Upgrader.Upgrade rather than the old Upgrade function, the echo example logs and returns on write errors, and the control-message section is rewritten. It now explains that received close, ping and pong messages are handled by the handlers set with SetCloseHandler, SetPingHandler and SetPongHandler (defaults: send a close message back, send a pong, do nothing), that these handlers are called from NextReader, ReadMessage and the message reader's Read and can briefly block those methods while writing to the connection, and that the application must keep reading the connection to process them. The origin-checking note is clarified: the default CheckOrigin fails the handshake when the Origin host differs from the Host header, and the deprecated package-level Upgrade function performs no origin checking at all.
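The guidance above (the application must keep reading the connection for control messages to be processed) is commonly implemented with a read loop plus a pong handler. A hedged sketch, with the deadline and URL chosen arbitrarily:

```go
package main

import (
	"log"
	"time"

	"github.com/gorilla/websocket"
)

const pongWait = 60 * time.Second

// readPump keeps reading so the close, ping and pong handlers get a chance to run.
func readPump(conn *websocket.Conn) {
	conn.SetReadDeadline(time.Now().Add(pongWait))
	conn.SetPongHandler(func(string) error {
		// A pong arrived; push the read deadline forward.
		return conn.SetReadDeadline(time.Now().Add(pongWait))
	})
	for {
		_, msg, err := conn.ReadMessage()
		if err != nil {
			log.Println("read:", err)
			return
		}
		log.Printf("message: %s", msg)
	}
}

func main() {
	conn, _, err := websocket.DefaultDialer.Dial("ws://127.0.0.1:8080/ws", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	readPump(conn)
}
```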
11  vendor/github.com/gorilla/websocket/json.go  (generated, vendored)
The package-level WriteJSON and ReadJSON helpers get proper doc comments with "Deprecated: Use c.WriteJSON / c.ReadJSON instead" notices, and Conn.WriteJSON's comment now says it writes the JSON encoding of v "as a message".
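A small usage sketch for the connection-level JSON helpers that the deprecation notices point to; the event struct and URL are made up for illustration:

```go
package main

import (
	"log"

	"github.com/gorilla/websocket"
)

type event struct {
	Kind string `json:"kind"`
	Body string `json:"body"`
}

func main() {
	conn, _, err := websocket.DefaultDialer.Dial("ws://127.0.0.1:8080/ws", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Prefer the methods on *Conn over the deprecated package-level helpers.
	if err := conn.WriteJSON(event{Kind: "greeting", Body: "hello"}); err != nil {
		log.Fatal(err)
	}
	var reply event
	if err := conn.ReadJSON(&reply); err != nil {
		log.Fatal(err)
	}
	log.Printf("reply: %+v", reply)
}
```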
1  vendor/github.com/gorilla/websocket/mask.go  (generated, vendored)
Removes a blank line at the top of maskBytes; the rest of the hunk is unchanged context.
Some files were not shown because too many files have changed in this diff.