Mirror of https://github.com/superseriousbusiness/gotosocial.git (synced 2025-10-28 11:22:25 -05:00)

Compare commits: 4 commits, 11f39bead0 ... e81bcb5171

Commits: e81bcb5171, 6801ce299a, 247733aef4, 5533fbc1f8

45 changed files with 485 additions and 1708 deletions
@ -439,7 +439,6 @@ By default the standalone testrig uses an in-memory SQLite database, which is fi

- `GTS_DB_ADDRESS` - this is set to `:memory:` by default. You can change this to use an sqlite.db file somewhere, or set it to a Postgres address.
- `GTS_DB_PORT`, `GTS_DB_USER`, `GTS_DB_PASSWORD`, `GTS_DB_DATABASE`, `GTS_DB_TLS_MODE`, `GTS_DB_TLS_CA_CERT` - you can set these if you change `GTS_DB_ADDRESS` to `postgres` and don't use `GTS_DB_POSTGRES_CONNECTION_STRING`.
- `GTS_DB_POSTGRES_CONNECTION_STRING` - use this to provide a Postgres connection string if you don't want to set all the db env variables mentioned in the previous point.
- `GTS_ADVANCED_SCRAPER_DETERRENCE_ENABLED`, `GTS_ADVANCED_SCRAPER_DETERRENCE_DIFFICULTY` - set these if you want to try out the PoW scraper deterrence locally.

Using these variables you can also (albeit awkwardly) test migrations from one schema to another.
@ -10,6 +10,6 @@ You can allow or disallow crawlers from collecting stats about your instance fro

The AI scrapers come from a [community maintained repository][airobots]. It's manually kept in sync for the time being. If you know of any missing robots, please send them a PR!

A number of AI scrapers are known to ignore entries in `robots.txt` even if it explicitly matches their User-Agent. This means the `robots.txt` file is not a foolproof way of ensuring AI scrapers don't grab your content. In addition to this you might want to look into blocking User-Agents via [requester header filtering](request_filtering_modes.md), and enabling a proof-of-work [scraper deterrence](../advanced/scraper_deterrence.md).
A number of AI scrapers are known to ignore entries in `robots.txt` even if it explicitly matches their User-Agent. This means the `robots.txt` file is not a foolproof way of ensuring AI scrapers don't grab your content. In addition to this you might want to look into blocking User-Agents via [requester header filtering](request_filtering_modes.md).

[airobots]: https://github.com/ai-robots-txt/ai.robots.txt/
@ -1,19 +0,0 @@

# Scraper Deterrence

GoToSocial provides an optional proof-of-work based scraper and automated HTTP client deterrence that can be enabled on profile and status web views. It works by generating a unique but deterministic challenge for each incoming HTTP request, based on client information and the current time: a hex-encoded SHA256 hash. The client is then asked to find an integer which, appended to a portion of that hash, produces an expected encoded hash result. This is served to the client as a minimal holding page with a single JavaScript worker that computes a solution.

The number of hash-encode rounds the client is required to complete may be configured: higher values will take the client longer to find a solution, and vice-versa. We also add a certain amount of jitter to make it harder for scrapers to "game" the algorithm. If your challenges take too long to solve, you may deter users from accessing your web pages. Conversely, the longer it takes for a solution to be found, the more you'll be incurring costs for scrapers (and in some cases, causing their operation to time out). That balance is up to you to configure, hence why this is an advanced feature.

Once a solution to this challenge has been provided, by refreshing the page with the solution in a query parameter, GoToSocial will verify the solution and, on success, return the expected profile / status page along with a cookie that provides challenge-less access to the instance for up to the next hour.

The outcome of this (when enabled) is that it should make scraping of your instance's profile / status pages economically unviable for automated data gathering (e.g. by AI companies or search engines). The only downside is that it requires JavaScript to be enabled for people to access your profile / status web views.

This was heavily inspired by the great project that is [anubis], but ultimately we determined we could implement it ourselves with only the features we require, minimal code, and more granularity with our existing authorization / authentication procedures.

The GoToSocial implementation of this scraper deterrence is still incredibly minimal, so if you're looking for more features or fine-grained control over your deterrence measures then by all means keep ours disabled and stand up a service like [anubis] in front of your instance!

!!! warning
    This proof-of-work scraper deterrence does not protect user profile RSS feeds due to the extra complexity involved. If you rely on your RSS feed being exposed, this is one such case where [anubis] may be a better fit!

[anubis]: https://github.com/TecharoHQ/anubis
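For a concrete sense of the client-side work being described, here is a minimal Go sketch of the brute-force loop, mirroring the `computeSolution` helper from the deleted middleware test further down in this diff: it counts upward until `hex(sha256(seed + counter))` equals the posed challenge. The toy seed and solution used in `main` are invented purely for illustration.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strconv"
)

// computeSolution brute-forces the proof-of-work: it looks for the
// smallest non-negative integer whose decimal string, appended to the
// seed, hashes (SHA256, hex-encoded) to the posed challenge.
func computeSolution(seed, challenge string) string {
	for i := 0; ; i++ {
		solution := strconv.Itoa(i)
		sum := sha256.Sum256([]byte(seed + solution))
		if hex.EncodeToString(sum[:]) == challenge {
			return solution
		}
	}
}

func main() {
	// Build a toy challenge the same way the server does:
	// challenge = hex(sha256(seed + solution)), here with solution "4242".
	seed := "deadbeef"
	want := sha256.Sum256([]byte(seed + "4242"))
	challenge := hex.EncodeToString(want[:])

	fmt.Println(computeSolution(seed, challenge)) // prints 4242
}
```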
@ -182,23 +182,4 @@ advanced-csp-extra-uris: []

# Options: ["block", "allow", ""]
# Default: ""
advanced-header-filter-mode: ""

# Bool. Enables a proof-of-work based deterrence against scrapers
# on profile and status web pages. This will generate a unique but
# deterministic challenge for each HTTP client to complete before
# accessing the above mentioned endpoints, on success being given
# a cookie that permits challenge-less access within a 1hr window.
#
# The outcome of this is that it should make scraping of these
# endpoints economically unfeasible, while having a negligible
# performance impact on your own instance.
#
# The downside is that it requires javascript to be enabled.
#
# For more details please check the documentation at:
# https://docs.gotosocial.org/en/latest/admin/scraper_deterrence
#
# Options: [true, false]
# Default: true
advanced-scraper-deterrence: false
```
@ -10,6 +10,6 @@ GoToSocial serves a `robots.txt` file on the main domain. This file contains entries that try

The AI scrapers come from a [community maintained repository][airobots]. It is manually kept in sync for the time being. If you know of any missing robots, please send them a PR!

Many AI scrapers are known to ignore the relevant rules and keep scraping content even when `robots.txt` disallows their User-Agent. This means the `robots.txt` file is not a foolproof way of ensuring AI scrapers don't grab your content. Beyond this, you may also want to consider blocking the relevant User-Agents via [request header filtering](request_filtering_modes.md), and enabling the proof-of-work based [scraper deterrence](../advanced/scraper_deterrence.md).
Many AI scrapers are known to ignore the relevant rules and keep scraping content even when `robots.txt` disallows their User-Agent. This means the `robots.txt` file is not a foolproof way of ensuring AI scrapers don't grab your content. Beyond this, you may also want to consider blocking the relevant User-Agents via [request header filtering](request_filtering_modes.md).

[airobots]: https://github.com/ai-robots-txt/ai.robots.txt/
@ -1,18 +0,0 @@

# Scraper Deterrence

GoToSocial provides an optional, proof-of-work based deterrence against scrapers and automated HTTP clients, which can be enabled on profile and status web views.

It works like this: for each incoming HTTP request, the system generates a unique challenge (a hex-encoded SHA256 hash) based on client information and the current time. The client is then asked to find an additional value for a portion of this challenge such that the new SHA256 hash computed from (value + challenge portion), again hex-encoded, contains at least 4 leading '0' characters. The challenge is presented to the client as a minimal holding page containing a single JavaScript worker that computes the solution.

Once the client has provided a solution to this challenge, by refreshing the page with the solution in a query parameter, GoToSocial will verify it. On success, the server returns the profile or status page the user wanted to access and sets a cookie. That cookie allows the user to access the instance without further challenges for up to the next hour.

The purpose of enabling this feature is to make automated data gathering (e.g. by AI companies or search engines) against your instance's profile and status pages economically unviable. The only downside is that users need JavaScript enabled to access your profile and status web views.

This feature was heavily inspired by the excellent [anubis] project, but we ultimately decided to implement it ourselves, with only the features we need, minimal code, and tighter integration with our existing authorization / authentication flow.

GoToSocial's implementation of this scraper deterrence is still extremely minimal. If you need more features or finer-grained control over your deterrence measures, feel free to keep our built-in feature disabled and deploy a service like [anubis] in front of your instance!

!!! warning "Warning"
    This proof-of-work based scraper deterrence does not protect the RSS feeds of user profile pages, as that would add extra complexity. If you need your RSS feeds to remain accessible, [anubis] may be a better fit in that case!

[anubis]: https://github.com/TecharoHQ/anubis
@ -149,21 +149,4 @@ advanced-csp-extra-uris: []

# Options: ["block", "allow", ""]
# Default: ""
advanced-header-filter-mode: ""

# Bool. Enables a proof-of-work based scraper deterrence on profile
# and status web pages. This generates a unique, deterministic
# challenge that each HTTP client must complete before accessing
# the above-mentioned endpoints.
# On completion, the client receives a cookie allowing challenge-less
# access within a 1 hour window.
#
# The result is that scraping these endpoints should, in theory,
# become economically unviable, while the performance impact on
# your own instance is negligible.
#
# The downside is that it requires the client to have JavaScript enabled.
#
# For more details please check the documentation at:
# https://docs.gotosocial.org/zh-cn/latest/admin/scraper_deterrence
#
# Options: [true, false]
# Default: true
advanced-scraper-deterrence: false
```
@ -80,7 +80,6 @@ nav:

- "advanced/tracing.md"
- "advanced/metrics.md"
- "advanced/replicating-sqlite.md"
- "advanced/scraper_deterrence.md"
- "advanced/sqlite-networked-storage.md"
- "Builds for advanced scenarios":
- "advanced/builds/nowasm.md"
@ -1338,40 +1338,3 @@ advanced-csp-extra-uris: []

# Options: ["block", "allow", ""]
# Default: ""
advanced-header-filter-mode: ""

# Bool. Enables a proof-of-work based deterrence against scrapers
# on profile and status web pages. This will generate a unique but
# deterministic challenge for each HTTP client to complete before
# accessing the above mentioned endpoints, on success being given
# a cookie that permits challenge-less access within a 1hr window.
#
# The outcome of this is that it should make scraping of these
# endpoints economically unfeasible, while having a negligible
# performance impact on your own instance.
#
# The downside is that it requires javascript to be enabled.
#
# For more details please check the documentation at:
# https://docs.gotosocial.org/en/latest/advanced/scraper_deterrence
#
# Options: [true, false]
# Default: true
advanced-scraper-deterrence-enabled: false

# Uint. Allows tweaking the difficulty of the proof-of-work algorithm
# used in the scraper deterrence. This determines roughly how many hash
# encode rounds we require the client to complete to find a solution.
# Higher values will take longer to find solutions for, and vice-versa.
#
# The downside is that if your deterrence takes too long to solve,
# it may deter some users from viewing your web status / profile page.
# And conversely, the longer it takes for a solution to be found, the
# more you'll be incurring increased CPU usage for scrapers, and possibly
# even cause their operation to time out before completion.
#
# For more details please check the documentation at:
# https://docs.gotosocial.org/en/latest/advanced/scraper_deterrence
#
# Examples: [50000, 100000, 500000]
# Default: 100000
advanced-scraper-deterrence-difficulty: 100000
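As a rough illustration of what the example difficulty values above (50000 / 100000 / 500000) cost a client, the sketch below simply times that many SHA256-plus-hex rounds. This is not code from the repository, just a back-of-the-envelope way to get a feel for choosing a difficulty.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strconv"
	"time"
)

// estimate times roughly `difficulty` SHA256+hex rounds, which is the
// order of work a client must do to solve one challenge.
func estimate(difficulty int) time.Duration {
	seed := "0123456789abcdef"
	start := time.Now()
	for i := 0; i < difficulty; i++ {
		sum := sha256.Sum256([]byte(seed + strconv.Itoa(i)))
		_ = hex.EncodeToString(sum[:])
	}
	return time.Since(start)
}

func main() {
	for _, d := range []int{50000, 100000, 500000} {
		fmt.Printf("difficulty %6d: ~%v\n", d, estimate(d))
	}
}
```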
go.mod (6 lines changed)

@ -6,7 +6,7 @@ go 1.24.6

replace github.com/go-swagger/go-swagger => codeberg.org/superseriousbusiness/go-swagger v0.32.3-gts-go1.23-fix

// Replace modernc/sqlite with our version that fixes the concurrency INTERRUPT issue
replace modernc.org/sqlite => gitlab.com/NyaaaWhatsUpDoc/sqlite v1.38.2-concurrency-workaround
replace modernc.org/sqlite => gitlab.com/NyaaaWhatsUpDoc/sqlite v1.39.0-concurrency-workaround

require (
	code.superseriousbusiness.org/activity v1.17.0
@ -85,13 +85,13 @@ require (

	go.uber.org/automaxprocs v1.6.0
	golang.org/x/crypto v0.42.0
	golang.org/x/image v0.31.0
	golang.org/x/net v0.43.0
	golang.org/x/net v0.44.0
	golang.org/x/oauth2 v0.31.0
	golang.org/x/sys v0.36.0
	golang.org/x/text v0.29.0
	gopkg.in/mcuadros/go-syslog.v2 v2.3.0
	gopkg.in/yaml.v3 v3.0.1
	modernc.org/sqlite v1.38.2
	modernc.org/sqlite v1.39.0
	mvdan.cc/xurls/v2 v2.6.0
)
go.sum (generated, 8 lines changed)

@ -510,8 +510,8 @@ github.com/yudai/golcs v0.0.0-20170316035057-ecda9a501e82/go.mod h1:lgjkn3NuSvDf

github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
github.com/yuin/goldmark v1.7.13 h1:GPddIs617DnBLFFVJFgpo1aBfe/4xcvMc3SB5t/D0pA=
github.com/yuin/goldmark v1.7.13/go.mod h1:ip/1k0VRfGynBgxOz0yCqHrbZXhcjxyuS66Brc7iBKg=
gitlab.com/NyaaaWhatsUpDoc/sqlite v1.38.2-concurrency-workaround h1:v1DRiquV7HAI68FED0VopIYIWXtuaYYZxj3fL71Zz+s=
gitlab.com/NyaaaWhatsUpDoc/sqlite v1.38.2-concurrency-workaround/go.mod h1:cPTJYSlgg3Sfg046yBShXENNtPrWrDX8bsbAQBzgQ5E=
gitlab.com/NyaaaWhatsUpDoc/sqlite v1.39.0-concurrency-workaround h1:+e2m4Ycgsri3YaePPAuYcTawQxklz0q/3CbKEbqxhOM=
gitlab.com/NyaaaWhatsUpDoc/sqlite v1.39.0-concurrency-workaround/go.mod h1:cPTJYSlgg3Sfg046yBShXENNtPrWrDX8bsbAQBzgQ5E=
go.mongodb.org/mongo-driver v1.17.3 h1:TQyXhnsWfWtgAhMtOgtYHMTkZIfBTpMTsMnd9ZBeHxQ=
go.mongodb.org/mongo-driver v1.17.3/go.mod h1:Hy04i7O2kC4RS06ZrhPRqj/u4DTYkFDAAccj+rVKqgQ=
go.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJySYA=
@ -605,8 +605,8 @@ golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=

golang.org/x/net v0.15.0/go.mod h1:idbUs1IY1+zTqbi8yxTbhexhEEk5ur9LInksu6HrEpk=
golang.org/x/net v0.21.0/go.mod h1:bIjVDfnllIU7BJ2DNgfnXvpSvtn8VRwhlsaeUTyUS44=
golang.org/x/net v0.25.0/go.mod h1:JkAGAh7GEvH74S6FOH42FLoXpXbE/aqXSrIQjXgsiwM=
golang.org/x/net v0.43.0 h1:lat02VYK2j4aLzMzecihNvTlJNQUq316m2Mr9rnM6YE=
golang.org/x/net v0.43.0/go.mod h1:vhO1fvI4dGsIjh73sWfUVjj3N7CA9WkKJNQm2svM6Jg=
golang.org/x/net v0.44.0 h1:evd8IRDyfNBMBTTY5XRF1vaZlD+EmWx6x8PkhR04H/I=
golang.org/x/net v0.44.0/go.mod h1:ECOoLqd5U3Lhyeyo/QDCEVQ4sNgYsqvCZ722XogGieY=
golang.org/x/oauth2 v0.31.0 h1:8Fq0yVZLh4j4YA47vHKFTa9Ew5XIrCP8LC6UeNZnLxo=
golang.org/x/oauth2 v0.31.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@ -279,13 +279,12 @@ type CacheConfiguration struct {

}

type AdvancedConfig struct {
	CookiesSamesite   string                  `name:"cookies-samesite" usage:"'strict' or 'lax', see https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie/SameSite"`
	SenderMultiplier  int                     `name:"sender-multiplier" usage:"Multiplier to use per cpu for batching outgoing fedi messages. 0 or less turns batching off (not recommended)."`
	CSPExtraURIs      []string                `name:"csp-extra-uris" usage:"Additional URIs to allow when building content-security-policy for media + images."`
	HeaderFilterMode  string                  `name:"header-filter-mode" usage:"Set incoming request header filtering mode."`
	RateLimit         RateLimitConfig         `name:"rate-limit"`
	Throttling        ThrottlingConfig        `name:"throttling"`
	ScraperDeterrence ScraperDeterrenceConfig `name:"scraper-deterrence"`
	CookiesSamesite  string           `name:"cookies-samesite" usage:"'strict' or 'lax', see https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie/SameSite"`
	SenderMultiplier int              `name:"sender-multiplier" usage:"Multiplier to use per cpu for batching outgoing fedi messages. 0 or less turns batching off (not recommended)."`
	CSPExtraURIs     []string         `name:"csp-extra-uris" usage:"Additional URIs to allow when building content-security-policy for media + images."`
	HeaderFilterMode string           `name:"header-filter-mode" usage:"Set incoming request header filtering mode."`
	RateLimit        RateLimitConfig  `name:"rate-limit"`
	Throttling       ThrottlingConfig `name:"throttling"`
}

type RateLimitConfig struct {

@ -297,8 +296,3 @@ type ThrottlingConfig struct {

	Multiplier int           `name:"multiplier" usage:"Multiplier to use per cpu for http request throttling. 0 or less turns throttling off."`
	RetryAfter time.Duration `name:"retry-after" usage:"Retry-After duration response to send for throttled requests."`
}

type ScraperDeterrenceConfig struct {
	Enabled    bool   `name:"enabled" usage:"Enable proof-of-work based scraper deterrence on profile / status pages"`
	Difficulty uint32 `name:"difficulty" usage:"The proof-of-work difficulty, which determines roughly how many hash-encode rounds required of each client."`
}
@ -175,7 +175,6 @@ func TestCLIParsing(t *testing.T) {

			"--config-path", "testdata/test3.yaml",
		},
		expected: expectedKV(
			kv.Field{"advanced-scraper-deterrence-enabled", true},
			kv.Field{"advanced-rate-limit-requests", 5000},
		),
	},
@ -154,11 +154,6 @@ var Defaults = Configuration{

			Multiplier: 8, // 8 open requests per CPU
			RetryAfter: 30 * time.Second,
		},

		ScraperDeterrence: ScraperDeterrenceConfig{
			Enabled:    false,
			Difficulty: 100000,
		},
	},

	Cache: CacheConfiguration{
File diff suppressed because it is too large.

internal/config/testdata/test3.yaml (vendored, 2 lines changed)

@ -1,5 +1,3 @@

advanced:
  scraper-deterrence:
    enabled: true
  rate-limit:
    requests: 5000
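The two test files above show the same option in its nested YAML form (`advanced: scraper-deterrence: enabled:`) and its flat key form (`advanced-scraper-deterrence-enabled`, env var `GTS_ADVANCED_SCRAPER_DETERRENCE_ENABLED`). The helper below only illustrates that naming convention; it is not the repository's actual config loader.

```go
package main

import (
	"fmt"
	"strings"
)

// flatKey composes nested `name` tag segments into the flat key used on
// the CLI, and the corresponding env var: the key upper-cased, with '-'
// replaced by '_' and a GTS_ prefix. Illustrative sketch only.
func flatKey(segments ...string) (flag, env string) {
	flag = strings.Join(segments, "-")
	env = "GTS_" + strings.ReplaceAll(strings.ToUpper(flag), "-", "_")
	return flag, env
}

func main() {
	flag, env := flatKey("advanced", "scraper-deterrence", "enabled")
	fmt.Println(flag) // advanced-scraper-deterrence-enabled
	fmt.Println(env)  // GTS_ADVANCED_SCRAPER_DETERRENCE_ENABLED
}
```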
@ -1,385 +0,0 @@
|
|||
// GoToSocial
|
||||
// Copyright (C) GoToSocial Authors admin@gotosocial.org
|
||||
// SPDX-License-Identifier: AGPL-3.0-or-later
|
||||
//
|
||||
// This program is free software: you can redistribute it and/or modify
|
||||
// it under the terms of the GNU Affero General Public License as published by
|
||||
// the Free Software Foundation, either version 3 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// This program is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU Affero General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU Affero General Public License
|
||||
// along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
package middleware
|
||||
|
||||
import (
|
||||
"context"
|
||||
"crypto/rand"
|
||||
"crypto/sha256"
|
||||
"crypto/subtle"
|
||||
"encoding/hex"
|
||||
"hash"
|
||||
"io"
|
||||
"net/http"
|
||||
"strconv"
|
||||
"time"
|
||||
|
||||
apimodel "code.superseriousbusiness.org/gotosocial/internal/api/model"
|
||||
apiutil "code.superseriousbusiness.org/gotosocial/internal/api/util"
|
||||
"code.superseriousbusiness.org/gotosocial/internal/config"
|
||||
"code.superseriousbusiness.org/gotosocial/internal/gtscontext"
|
||||
"code.superseriousbusiness.org/gotosocial/internal/gtserror"
|
||||
"code.superseriousbusiness.org/gotosocial/internal/log"
|
||||
"code.superseriousbusiness.org/gotosocial/internal/oauth"
|
||||
"codeberg.org/gruf/go-bitutil"
|
||||
"codeberg.org/gruf/go-byteutil"
|
||||
"github.com/gin-gonic/gin"
|
||||
)
|
||||
|
||||
// NoLLaMas returns a piece of HTTP middleware that provides a deterrence
|
||||
// on routes it is applied to, against bots and scrapers. It generates a
|
||||
// unique but deterministic challenge for each HTTP client within an hour
|
||||
// TTL that requires a proof-of-work solution to pass onto the next handler.
|
||||
// On successful solution, the client is provided a cookie that allows them
|
||||
// to bypass this check within that hour TTL. The outcome of this is that it
|
||||
// should make scraping of these endpoints economically unfeasible, when enabled,
|
||||
// and with an absurdly minimal performance impact. The downside is that it
|
||||
// requires javascript to be enabled on the client to pass the middleware check.
|
||||
//
|
||||
// Heavily inspired by: https://github.com/TecharoHQ/anubis
|
||||
func NoLLaMas(
|
||||
cookiePolicy apiutil.CookiePolicy,
|
||||
getInstanceV1 func(context.Context) (*apimodel.InstanceV1, gtserror.WithCode),
|
||||
) gin.HandlerFunc {
|
||||
|
||||
if !config.GetAdvancedScraperDeterrenceEnabled() {
|
||||
// NoLLaMas middleware disabled.
|
||||
return func(*gin.Context) {}
|
||||
}
|
||||
|
||||
var seed [32]byte
|
||||
|
||||
// Read random data for the token seed.
|
||||
_, err := io.ReadFull(rand.Reader, seed[:])
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
|
||||
// Configure nollamas.
|
||||
var nollamas nollamas
|
||||
nollamas.entropy = seed
|
||||
nollamas.ttl = time.Hour
|
||||
nollamas.rounds = config.GetAdvancedScraperDeterrenceDifficulty()
|
||||
nollamas.getInstanceV1 = getInstanceV1
|
||||
nollamas.policy = cookiePolicy
|
||||
return nollamas.Serve
|
||||
}
|
||||
|
||||
// i.e. hash slice length.
|
||||
const hashLen = sha256.Size
|
||||
|
||||
// i.e. hex.EncodedLen(hashLen).
|
||||
const encodedHashLen = 2 * hashLen
|
||||
|
||||
// hashWithBufs encompasses a hash along
|
||||
// with the necessary buffers to generate
|
||||
// a hashsum and then encode that sum.
|
||||
type hashWithBufs struct {
|
||||
hash hash.Hash
|
||||
hbuf [hashLen]byte
|
||||
ebuf [encodedHashLen]byte
|
||||
}
|
||||
|
||||
// write is a passthrough to hash.Hash{}.Write().
|
||||
func (h *hashWithBufs) write(b []byte) {
|
||||
_, _ = h.hash.Write(b)
|
||||
}
|
||||
|
||||
// writeString is a passthrough to hash.Hash{}.Write([]byte(s)).
|
||||
func (h *hashWithBufs) writeString(s string) {
|
||||
_, _ = h.hash.Write(byteutil.S2B(s))
|
||||
}
|
||||
|
||||
// EncodedSum returns the hex encoded sum of hash.Sum().
|
||||
func (h *hashWithBufs) EncodedSum() string {
|
||||
_ = h.hash.Sum(h.hbuf[:0])
|
||||
hex.Encode(h.ebuf[:], h.hbuf[:])
|
||||
return string(h.ebuf[:])
|
||||
}
|
||||
|
||||
// Reset will reset hash and buffers.
|
||||
func (h *hashWithBufs) Reset() {
|
||||
h.ebuf = [encodedHashLen]byte{}
|
||||
h.hbuf = [hashLen]byte{}
|
||||
h.hash.Reset()
|
||||
}
|
||||
|
||||
type nollamas struct {
|
||||
// our instance cookie policy.
|
||||
policy apiutil.CookiePolicy
|
||||
|
||||
// unique entropy
|
||||
// to prevent hashes
|
||||
// being guessable
|
||||
entropy [32]byte
|
||||
|
||||
// success cookie TTL
|
||||
ttl time.Duration
|
||||
|
||||
// rounds determines roughly how
|
||||
// many hash-encode rounds each
|
||||
// client is required to complete.
|
||||
rounds uint32
|
||||
|
||||
// extra fields required for
|
||||
// our template rendering.
|
||||
getInstanceV1 func(ctx context.Context) (*apimodel.InstanceV1, gtserror.WithCode)
|
||||
}
|
||||
|
||||
func (m *nollamas) Serve(c *gin.Context) {
|
||||
if c.Request.Method != http.MethodGet {
|
||||
// Only interested in protecting
|
||||
// crawlable 'GET' endpoints.
|
||||
c.Next()
|
||||
return
|
||||
}
|
||||
|
||||
// Extract request context.
|
||||
ctx := c.Request.Context()
|
||||
|
||||
if ctx.Value(oauth.SessionAuthorizedToken) != nil {
|
||||
// Don't guard against requests
|
||||
// providing valid OAuth tokens.
|
||||
c.Next()
|
||||
return
|
||||
}
|
||||
|
||||
if gtscontext.HTTPSignature(ctx) != "" {
|
||||
// Don't guard against requests
|
||||
// providing HTTP signatures.
|
||||
c.Next()
|
||||
return
|
||||
}
|
||||
|
||||
// Prepare new hash with buffers.
|
||||
hash := hashWithBufs{hash: sha256.New()}
|
||||
|
||||
// Extract client fingerprint data.
|
||||
userAgent := c.GetHeader("User-Agent")
|
||||
clientIP := c.ClientIP()
|
||||
|
||||
// Generate a unique token for this request,
|
||||
// only valid for a period of now +- m.ttl.
|
||||
token := m.getToken(&hash, userAgent, clientIP)
|
||||
|
||||
// Check for a provided success token.
|
||||
cookie, _ := c.Cookie("gts-nollamas")
|
||||
|
||||
// Check whether passed cookie
|
||||
// is the expected success token.
|
||||
if subtle.ConstantTimeCompare(
|
||||
byteutil.S2B(cookie),
|
||||
byteutil.S2B(token),
|
||||
) == 1 {
|
||||
|
||||
// They passed us a valid, expected
|
||||
// token. They already passed checks.
|
||||
c.Next()
|
||||
return
|
||||
}
|
||||
|
||||
// From here-on out, all
|
||||
// possibilities are handled
|
||||
// by us. Prevent further http
|
||||
// handlers from being called.
|
||||
c.Abort()
|
||||
|
||||
// Generate challenge for this unique (yet deterministic) token,
|
||||
// returning seed, wanted 'challenge' result and expected solution.
|
||||
seed, challenge, solution := m.getChallenge(&hash, token)
|
||||
|
||||
// Prepare new log entry.
|
||||
l := log.WithContext(ctx).
|
||||
WithField("userAgent", userAgent).
|
||||
WithField("seed", seed).
|
||||
WithField("rounds", solution)
|
||||
|
||||
// Extract and parse query.
|
||||
query := c.Request.URL.Query()
|
||||
|
||||
// Check query to see if an in-progress
|
||||
// challenge solution has been provided.
|
||||
nonce := query.Get("nollamas_solution")
|
||||
if nonce == "" {
|
||||
|
||||
// No solution given, likely new client!
|
||||
// Simply present them with challenge.
|
||||
m.renderChallenge(c, seed, challenge)
|
||||
l.Info("posing new challenge")
|
||||
return
|
||||
}
|
||||
|
||||
// Check nonce matches expected.
|
||||
if subtle.ConstantTimeCompare(
|
||||
byteutil.S2B(solution),
|
||||
byteutil.S2B(nonce),
|
||||
) != 1 {
|
||||
|
||||
// Their nonce failed, re-challenge them.
|
||||
m.renderChallenge(c, challenge, solution)
|
||||
l.Infof("invalid solution provided: %s", nonce)
|
||||
return
|
||||
}
|
||||
|
||||
l.Info("challenge passed")
|
||||
|
||||
// Drop solution query and encode.
|
||||
query.Del("nollamas_solution")
|
||||
c.Request.URL.RawQuery = query.Encode()
|
||||
|
||||
// They passed the challenge! Set success token
|
||||
// cookie and allow them to continue to next handlers.
|
||||
m.policy.SetCookie(c, "gts-nollamas", token, int(m.ttl/time.Second), "/")
|
||||
c.Redirect(http.StatusTemporaryRedirect, c.Request.URL.RequestURI())
|
||||
}
|
||||
|
||||
func (m *nollamas) renderChallenge(c *gin.Context, seed, challenge string) {
|
||||
// Fetch current instance information for templating vars.
|
||||
instance, errWithCode := m.getInstanceV1(c.Request.Context())
|
||||
if errWithCode != nil {
|
||||
apiutil.ErrorHandler(c, errWithCode, m.getInstanceV1)
|
||||
return
|
||||
}
|
||||
|
||||
// Write templated challenge response to client.
|
||||
apiutil.TemplateWebPage(c, apiutil.WebPage{
|
||||
Template: "nollamas.tmpl",
|
||||
Instance: instance,
|
||||
Stylesheets: []string{
|
||||
"/assets/dist/nollamas.css",
|
||||
// Include fork-awesome stylesheet
|
||||
// to get nice loading spinner.
|
||||
"/assets/Fork-Awesome/css/fork-awesome.min.css",
|
||||
},
|
||||
Extra: map[string]any{
|
||||
"seed": seed,
|
||||
"challenge": challenge,
|
||||
},
|
||||
Javascript: []apiutil.JavascriptEntry{
|
||||
{
|
||||
Src: "/assets/dist/nollamas.js",
|
||||
Defer: true,
|
||||
},
|
||||
},
|
||||
})
|
||||
}
|
||||
|
||||
// getToken generates a unique yet deterministic token for given HTTP request
|
||||
// details, seeded by runtime generated entropy data and ttl rounded timestamp.
|
||||
func (m *nollamas) getToken(hash *hashWithBufs, userAgent, clientIP string) string {
|
||||
|
||||
// Reset before
|
||||
// using hash.
|
||||
hash.Reset()
|
||||
|
||||
// Use our unique entropy to seed hash,
|
||||
// to ensure we have cryptographically
|
||||
// unique, yet deterministic, tokens
|
||||
// generated for a given http client.
|
||||
hash.write(m.entropy[:])
|
||||
|
||||
// Also seed the generated input with
|
||||
// current time rounded to TTL, so our
|
||||
// single comparison handles expiries.
|
||||
now := time.Now().Round(m.ttl).Unix()
|
||||
hash.write([]byte{
|
||||
byte(now >> 56),
|
||||
byte(now >> 48),
|
||||
byte(now >> 40),
|
||||
byte(now >> 32),
|
||||
byte(now >> 24),
|
||||
byte(now >> 16),
|
||||
byte(now >> 8),
|
||||
byte(now),
|
||||
})
|
||||
|
||||
// Append client request data.
|
||||
hash.writeString(userAgent)
|
||||
hash.writeString(clientIP)
|
||||
|
||||
// Return hex encoded hash.
|
||||
return hash.EncodedSum()
|
||||
}
|
||||
|
||||
// getChallenge prepares a new challenge given the deterministic input token for this request.
|
||||
// it will return an input seed string, a challenge string which is the end result the client
|
||||
// should be looking for, and the solution for this such that challenge = hex(sha256(seed + solution)).
|
||||
// the solution will always be a string-encoded 64bit integer calculated from m.rounds + random jitter.
|
||||
func (m *nollamas) getChallenge(hash *hashWithBufs, token string) (seed, challenge, solution string) {
|
||||
|
||||
// For their unique seed string just use a
|
||||
// single portion of their 'success' token.
|
||||
// SHA256 is not yet cracked, this is not an
|
||||
// application of a hash requiring serious
|
||||
// cryptographic security and it rotates on
|
||||
// a TTL basis, so it should be fine.
|
||||
seed = token[:len(token)/4]
|
||||
|
||||
// BEFORE resetting the hash, get the last
|
||||
// two bytes of NON-hex-encoded data from
|
||||
// token generation to use for random jitter.
|
||||
// This is taken from the end of the hash as
|
||||
// this is the "unseen" end part of token.
|
||||
//
|
||||
// (if we used hex-encoded data it would
|
||||
// only ever be '0-9' or 'a-z' ASCII chars).
|
||||
//
|
||||
// Security-wise, same applies as-above.
|
||||
jitter := int16(hash.hbuf[len(hash.hbuf)-2]) |
|
||||
int16(hash.hbuf[len(hash.hbuf)-1])<<8
|
||||
|
||||
var rounds int64
|
||||
switch {
|
||||
// For some small percentage of
|
||||
// clients we purposely low-ball
|
||||
// their rounds required, to make
|
||||
// it so gaming it with a starting
|
||||
// nonce value may suddenly fail.
|
||||
case jitter%37 == 0:
|
||||
rounds = int64(m.rounds/10) + int64(jitter/10)
|
||||
case jitter%31 == 0:
|
||||
rounds = int64(m.rounds/5) + int64(jitter/5)
|
||||
case jitter%29 == 0:
|
||||
rounds = int64(m.rounds/3) + int64(jitter/3)
|
||||
case jitter%13 == 0:
|
||||
rounds = int64(m.rounds/2) + int64(jitter/2)
|
||||
|
||||
// Determine an appropriate number of hash rounds
|
||||
// we want the client to perform on input seed. This
|
||||
// is determined as configured m.rounds +- jitter.
|
||||
// This will be the 'solution' to create 'challenge'.
|
||||
default:
|
||||
rounds = int64(m.rounds) + int64(jitter) //nolint:gosec
|
||||
}
|
||||
|
||||
// Encode (positive) determined hash rounds as string.
|
||||
solution = strconv.FormatInt(bitutil.Abs64(rounds), 10)
|
||||
|
||||
// Reset before
|
||||
// using hash.
|
||||
hash.Reset()
|
||||
|
||||
// Calculate the expected result
|
||||
// of hex(sha256(seed + solution)),
|
||||
// i.e. the proposed 'challenge'.
|
||||
hash.writeString(seed)
|
||||
hash.writeString(solution)
|
||||
challenge = hash.EncodedSum()
|
||||
|
||||
return
|
||||
}
|
||||
|
|
@ -1,178 +0,0 @@
|
|||
// GoToSocial
|
||||
// Copyright (C) GoToSocial Authors admin@gotosocial.org
|
||||
// SPDX-License-Identifier: AGPL-3.0-or-later
|
||||
//
|
||||
// This program is free software: you can redistribute it and/or modify
|
||||
// it under the terms of the GNU Affero General Public License as published by
|
||||
// the Free Software Foundation, either version 3 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// This program is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU Affero General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU Affero General Public License
|
||||
// along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
package middleware_test
|
||||
|
||||
import (
|
||||
"context"
|
||||
"crypto/sha256"
|
||||
"encoding/hex"
|
||||
"io"
|
||||
"net/http"
|
||||
"net/http/httptest"
|
||||
"slices"
|
||||
"strconv"
|
||||
"strings"
|
||||
"testing"
|
||||
|
||||
"code.superseriousbusiness.org/gotosocial/internal/api/model"
|
||||
apiutil "code.superseriousbusiness.org/gotosocial/internal/api/util"
|
||||
"code.superseriousbusiness.org/gotosocial/internal/config"
|
||||
"code.superseriousbusiness.org/gotosocial/internal/gtserror"
|
||||
"code.superseriousbusiness.org/gotosocial/internal/middleware"
|
||||
"code.superseriousbusiness.org/gotosocial/internal/router"
|
||||
"codeberg.org/gruf/go-byteutil"
|
||||
"github.com/gin-gonic/gin"
|
||||
"github.com/stretchr/testify/assert"
|
||||
)
|
||||
|
||||
func TestNoLLaMasMiddleware(t *testing.T) {
|
||||
// Gin test engine.
|
||||
e := gin.New()
|
||||
|
||||
// Setup necessary configuration variables.
|
||||
config.SetAdvancedScraperDeterrenceEnabled(true)
|
||||
config.SetWebTemplateBaseDir("../../web/template")
|
||||
|
||||
// Load templates into engine.
|
||||
err := router.LoadTemplates(e)
|
||||
assert.NoError(t, err)
|
||||
|
||||
// Add middleware to the gin engine handler stack.
|
||||
middleware := middleware.NoLLaMas(apiutil.CookiePolicy{}, getInstanceV1)
|
||||
e.Use(middleware)
|
||||
|
||||
// Set test handler we can
|
||||
// easily check if was used.
|
||||
e.Handle("GET", "/", testHandler)
|
||||
|
||||
// Test with differing user-agents.
|
||||
for _, userAgent := range []string{
|
||||
"CURL",
|
||||
"Mozilla FireSox",
|
||||
"Google Gnome",
|
||||
} {
|
||||
testNoLLaMasMiddleware(t, e, userAgent)
|
||||
}
|
||||
}
|
||||
|
||||
func testNoLLaMasMiddleware(t *testing.T, e *gin.Engine, userAgent string) {
|
||||
// Prepare a test request for gin engine.
|
||||
r := httptest.NewRequest("GET", "/", nil)
|
||||
r.Header.Set("User-Agent", userAgent)
|
||||
rw := httptest.NewRecorder()
|
||||
|
||||
// Pass req through
|
||||
// engine handler.
|
||||
e.ServeHTTP(rw, r)
|
||||
|
||||
// Get http result.
|
||||
res := rw.Result()
|
||||
|
||||
// It should have been stopped
|
||||
// by middleware and NOT used
|
||||
// the expected test handler.
|
||||
ok := usedTestHandler(res)
|
||||
assert.False(t, ok)
|
||||
|
||||
// Read entire response body.
|
||||
b, err := io.ReadAll(res.Body)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
|
||||
var seed string
|
||||
var challenge string
|
||||
|
||||
// Parse output body and find the challenge / difficulty.
|
||||
for _, line := range strings.Split(string(b), "\n") {
|
||||
line = strings.TrimSpace(line)
|
||||
switch {
|
||||
case strings.HasPrefix(line, "data-nollamas-seed=\""):
|
||||
line = line[20:]
|
||||
line = line[:len(line)-1]
|
||||
seed = line
|
||||
case strings.HasPrefix(line, "data-nollamas-challenge=\""):
|
||||
line = line[25:]
|
||||
line = line[:len(line)-1]
|
||||
challenge = line
|
||||
}
|
||||
}
|
||||
|
||||
// Ensure valid posed challenge.
|
||||
assert.NotEmpty(t, challenge)
|
||||
assert.NotEmpty(t, seed)
|
||||
|
||||
// Prepare a test request for gin engine.
|
||||
r = httptest.NewRequest("GET", "/", nil)
|
||||
r.Header.Set("User-Agent", userAgent)
|
||||
rw = httptest.NewRecorder()
|
||||
|
||||
t.Logf("seed=%s", seed)
|
||||
t.Logf("challenge=%s", challenge)
|
||||
|
||||
// Now compute and set solution query paramater.
|
||||
solution := computeSolution(seed, challenge)
|
||||
r.URL.RawQuery = "nollamas_solution=" + solution
|
||||
t.Logf("solution=%s", solution)
|
||||
|
||||
// Pass req through
|
||||
// engine handler.
|
||||
e.ServeHTTP(rw, r)
|
||||
|
||||
// Get http result.
|
||||
res = rw.Result()
|
||||
|
||||
// Should have received redirect.
|
||||
uri, err := res.Location()
|
||||
assert.NoError(t, err)
|
||||
assert.Equal(t, uri.String(), "/")
|
||||
|
||||
// Ensure our expected solution cookie (to bypass challenge) was set.
|
||||
ok = slices.ContainsFunc(res.Cookies(), func(c *http.Cookie) bool {
|
||||
return c.Name == "gts-nollamas"
|
||||
})
|
||||
assert.True(t, ok)
|
||||
}
|
||||
|
||||
// computeSolution does the functional equivalent of our nollamas workerTask.js.
|
||||
func computeSolution(seed, challenge string) string {
|
||||
for i := 0; ; i++ {
|
||||
solution := strconv.Itoa(i)
|
||||
combined := seed + solution
|
||||
hash := sha256.Sum256(byteutil.S2B(combined))
|
||||
encoded := hex.EncodeToString(hash[:])
|
||||
if encoded != challenge {
|
||||
continue
|
||||
}
|
||||
return solution
|
||||
}
|
||||
}
|
||||
|
||||
// usedTestHandler returns whether testHandler() was used.
|
||||
func usedTestHandler(res *http.Response) bool {
|
||||
return res.Header.Get("test-handler") == "ok"
|
||||
}
|
||||
|
||||
func testHandler(c *gin.Context) {
|
||||
c.Writer.Header().Set("test-handler", "ok")
|
||||
c.Writer.WriteHeader(http.StatusOK)
|
||||
}
|
||||
|
||||
func getInstanceV1(context.Context) (*model.InstanceV1, gtserror.WithCode) {
|
||||
return &model.InstanceV1{}, nil
|
||||
}
|
||||
|
|
@ -101,16 +101,12 @@ func (m *Module) Route(r *router.Router, mi ...gin.HandlerFunc) {

	// Handlers that serve profiles and statuses should use
	// the SignatureCheck middleware, so that requests with
	// content-type application/activity+json can be served,
	// and (if enabled) the nollamas middleware, to protect
	// against scraping by shitty LLM bullshit.
	// content-type application/activity+json can be served.
	profileGroup := r.AttachGroup(profileGroupPath)
	profileGroup.Use(mi...)
	profileGroup.Use(middleware.SignatureCheck(m.isURIBlocked), middleware.CacheControl(middleware.CacheControlConfig{
		Directives: []string{"no-store"},
	}))
	nollamas := middleware.NoLLaMas(m.cookiePolicy, m.processor.InstanceGetV1)
	profileGroup.Use(nollamas)
	profileGroup.Handle(http.MethodGet, "", m.profileGETHandler) // use empty path here since it's the base of the group
	profileGroup.Handle(http.MethodGet, statusPath, m.threadGETHandler)
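The hunk above removes the pattern of conditionally attaching the nollamas middleware to the profile route group. As a generic, self-contained sketch of that pattern (a stand-in `enabled` flag replaces GoToSocial's real config lookup, and the challenge logic is elided):

```go
package main

import (
	"net/http"

	"github.com/gin-gonic/gin"
)

// maybeDeterrence returns a no-op middleware when disabled, mirroring
// how NoLLaMas returned func(*gin.Context) {} when the feature was off.
func maybeDeterrence(enabled bool) gin.HandlerFunc {
	if !enabled {
		return func(*gin.Context) {}
	}
	return func(c *gin.Context) {
		// ... challenge / cookie checks would live here ...
		c.Next()
	}
}

func main() {
	r := gin.New()
	profileGroup := r.Group("/@user")
	profileGroup.Use(maybeDeterrence(false))
	profileGroup.GET("", func(c *gin.Context) { c.String(http.StatusOK, "profile") })
	_ = r.Run(":8080")
}
```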
@ -131,7 +131,6 @@ nav:

- "advanced/tracing.md"
- "advanced/metrics.md"
- "advanced/replicating-sqlite.md"
- "advanced/scraper_deterrence.md"
- "advanced/sqlite-networked-storage.md"
- "Advanced builds":
- "advanced/builds/nowasm.md"
@ -33,8 +33,7 @@ for file in $(find ./web/source | license_filter); do

done

# Copy over misc other licenses
for file in ./LICENSE \
    ./web/source/nollamasworker/sha256.js; do
for file in ./LICENSE; do
    echo "----------------------------------------------------------" >> "$OUTPUT"
    echo >> "$OUTPUT"
    echo "${file}:" >> "$OUTPUT"
@ -20,8 +20,6 @@ EXPECT=$(cat << "EOF"

    "127.0.0.1/32"
  ],
  "advanced-rate-limit-requests": 6969,
  "advanced-scraper-deterrence-difficulty": 500000,
  "advanced-scraper-deterrence-enabled": true,
  "advanced-sender-multiplier": -1,
  "advanced-throttling-multiplier": -1,
  "advanced-throttling-retry-after": 10000000000,

@ -325,8 +323,6 @@ GTS_SYSLOG_ADDRESS='127.0.0.1:6969' \

GTS_ADVANCED_COOKIES_SAMESITE='strict' \
GTS_ADVANCED_RATE_LIMIT_EXCEPTIONS="192.0.2.0/24,127.0.0.1/32" \
GTS_ADVANCED_RATE_LIMIT_REQUESTS=6969 \
GTS_ADVANCED_SCRAPER_DETERRENCE_DIFFICULTY=500000 \
GTS_ADVANCED_SCRAPER_DETERRENCE_ENABLED=true \
GTS_ADVANCED_SENDER_MULTIPLIER=-1 \
GTS_ADVANCED_THROTTLING_MULTIPLIER=-1 \
GTS_ADVANCED_THROTTLING_RETRY_AFTER='10s' \
@ -184,11 +184,6 @@ func testDefaults() config.Configuration {

		Throttling: config.ThrottlingConfig{
			Multiplier: 0, // disabled
		},

		ScraperDeterrence: config.ScraperDeterrenceConfig{
			Enabled:    envBool("GTS_ADVANCED_SCRAPER_DETERRENCE_ENABLED", false),
			Difficulty: uint32(envInt("GTS_ADVANCED_SCRAPER_DETERRENCE_DIFFICULTY", 100000)), //nolint
		},
	},

	SoftwareVersion: "0.0.0-testrig",
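The testrig defaults above read overrides through `envBool` / `envInt` helpers. Their exact implementations are not shown in this diff; the sketch below is one plausible version, shown only to make the fallback behaviour explicit (unset or unparsable values fall back to the default).

```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

// envBool reads a boolean env var, falling back to def when it is
// unset or cannot be parsed. Illustrative sketch only.
func envBool(key string, def bool) bool {
	v, ok := os.LookupEnv(key)
	if !ok {
		return def
	}
	b, err := strconv.ParseBool(v)
	if err != nil {
		return def
	}
	return b
}

// envInt reads an integer env var with the same fallback behaviour.
func envInt(key string, def int) int {
	v, ok := os.LookupEnv(key)
	if !ok {
		return def
	}
	i, err := strconv.Atoi(v)
	if err != nil {
		return def
	}
	return i
}

func main() {
	fmt.Println(envBool("GTS_ADVANCED_SCRAPER_DETERRENCE_ENABLED", false))
	fmt.Println(envInt("GTS_ADVANCED_SCRAPER_DETERRENCE_DIFFICULTY", 100000))
}
```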
vendor/golang.org/x/net/context/context.go (generated, vendored; 35 lines changed)
|
|
@ -6,7 +6,7 @@
|
|||
// cancellation signals, and other request-scoped values across API boundaries
|
||||
// and between processes.
|
||||
// As of Go 1.7 this package is available in the standard library under the
|
||||
// name [context], and migrating to it can be done automatically with [go fix].
|
||||
// name [context].
|
||||
//
|
||||
// Incoming requests to a server should create a [Context], and outgoing
|
||||
// calls to servers should accept a Context. The chain of function
|
||||
|
|
@ -38,8 +38,6 @@
|
|||
//
|
||||
// See https://go.dev/blog/context for example code for a server that uses
|
||||
// Contexts.
|
||||
//
|
||||
// [go fix]: https://go.dev/cmd/go#hdr-Update_packages_to_use_new_APIs
|
||||
package context
|
||||
|
||||
import (
|
||||
|
|
@ -51,36 +49,37 @@ import (
|
|||
// API boundaries.
|
||||
//
|
||||
// Context's methods may be called by multiple goroutines simultaneously.
|
||||
//
|
||||
//go:fix inline
|
||||
type Context = context.Context
|
||||
|
||||
// Canceled is the error returned by [Context.Err] when the context is canceled
|
||||
// for some reason other than its deadline passing.
|
||||
//
|
||||
//go:fix inline
|
||||
var Canceled = context.Canceled
|
||||
|
||||
// DeadlineExceeded is the error returned by [Context.Err] when the context is canceled
|
||||
// due to its deadline passing.
|
||||
//
|
||||
//go:fix inline
|
||||
var DeadlineExceeded = context.DeadlineExceeded
|
||||
|
||||
// Background returns a non-nil, empty Context. It is never canceled, has no
|
||||
// values, and has no deadline. It is typically used by the main function,
|
||||
// initialization, and tests, and as the top-level Context for incoming
|
||||
// requests.
|
||||
func Background() Context {
|
||||
return background
|
||||
}
|
||||
//
|
||||
//go:fix inline
|
||||
func Background() Context { return context.Background() }
|
||||
|
||||
// TODO returns a non-nil, empty Context. Code should use context.TODO when
|
||||
// it's unclear which Context to use or it is not yet available (because the
|
||||
// surrounding function has not yet been extended to accept a Context
|
||||
// parameter).
|
||||
func TODO() Context {
|
||||
return todo
|
||||
}
|
||||
|
||||
var (
|
||||
background = context.Background()
|
||||
todo = context.TODO()
|
||||
)
|
||||
//
|
||||
//go:fix inline
|
||||
func TODO() Context { return context.TODO() }
|
||||
|
||||
// A CancelFunc tells an operation to abandon its work.
|
||||
// A CancelFunc does not wait for the work to stop.
|
||||
|
|
@ -95,6 +94,8 @@ type CancelFunc = context.CancelFunc
|
|||
//
|
||||
// Canceling this context releases resources associated with it, so code should
|
||||
// call cancel as soon as the operations running in this [Context] complete.
|
||||
//
|
||||
//go:fix inline
|
||||
func WithCancel(parent Context) (ctx Context, cancel CancelFunc) {
|
||||
return context.WithCancel(parent)
|
||||
}
|
||||
|
|
@ -108,6 +109,8 @@ func WithCancel(parent Context) (ctx Context, cancel CancelFunc) {
|
|||
//
|
||||
// Canceling this context releases resources associated with it, so code should
|
||||
// call cancel as soon as the operations running in this [Context] complete.
|
||||
//
|
||||
//go:fix inline
|
||||
func WithDeadline(parent Context, d time.Time) (Context, CancelFunc) {
|
||||
return context.WithDeadline(parent, d)
|
||||
}
|
||||
|
|
@ -122,6 +125,8 @@ func WithDeadline(parent Context, d time.Time) (Context, CancelFunc) {
|
|||
// defer cancel() // releases resources if slowOperation completes before timeout elapses
|
||||
// return slowOperation(ctx)
|
||||
// }
|
||||
//
|
||||
//go:fix inline
|
||||
func WithTimeout(parent Context, timeout time.Duration) (Context, CancelFunc) {
|
||||
return context.WithTimeout(parent, timeout)
|
||||
}
|
||||
|
|
@ -139,6 +144,8 @@ func WithTimeout(parent Context, timeout time.Duration) (Context, CancelFunc) {
|
|||
// interface{}, context keys often have concrete type
|
||||
// struct{}. Alternatively, exported context key variables' static
|
||||
// type should be a pointer or interface.
|
||||
//
|
||||
//go:fix inline
|
||||
func WithValue(parent Context, key, val interface{}) Context {
|
||||
return context.WithValue(parent, key, val)
|
||||
}
|
||||
|
|
|
|||
vendor/golang.org/x/net/http2/config.go (generated, vendored; 46 lines changed)
|
|
@ -55,7 +55,7 @@ func configFromServer(h1 *http.Server, h2 *Server) http2Config {
|
|||
PermitProhibitedCipherSuites: h2.PermitProhibitedCipherSuites,
|
||||
CountError: h2.CountError,
|
||||
}
|
||||
fillNetHTTPServerConfig(&conf, h1)
|
||||
fillNetHTTPConfig(&conf, h1.HTTP2)
|
||||
setConfigDefaults(&conf, true)
|
||||
return conf
|
||||
}
|
||||
|
|
@ -81,7 +81,7 @@ func configFromTransport(h2 *Transport) http2Config {
|
|||
}
|
||||
|
||||
if h2.t1 != nil {
|
||||
fillNetHTTPTransportConfig(&conf, h2.t1)
|
||||
fillNetHTTPConfig(&conf, h2.t1.HTTP2)
|
||||
}
|
||||
setConfigDefaults(&conf, false)
|
||||
return conf
|
||||
|
|
@ -120,3 +120,45 @@ func adjustHTTP1MaxHeaderSize(n int64) int64 {
|
|||
const typicalHeaders = 10 // conservative
|
||||
return n + typicalHeaders*perFieldOverhead
|
||||
}
|
||||
|
||||
func fillNetHTTPConfig(conf *http2Config, h2 *http.HTTP2Config) {
|
||||
if h2 == nil {
|
||||
return
|
||||
}
|
||||
if h2.MaxConcurrentStreams != 0 {
|
||||
conf.MaxConcurrentStreams = uint32(h2.MaxConcurrentStreams)
|
||||
}
|
||||
if h2.MaxEncoderHeaderTableSize != 0 {
|
||||
conf.MaxEncoderHeaderTableSize = uint32(h2.MaxEncoderHeaderTableSize)
|
||||
}
|
||||
if h2.MaxDecoderHeaderTableSize != 0 {
|
||||
conf.MaxDecoderHeaderTableSize = uint32(h2.MaxDecoderHeaderTableSize)
|
||||
}
|
||||
if h2.MaxConcurrentStreams != 0 {
|
||||
conf.MaxConcurrentStreams = uint32(h2.MaxConcurrentStreams)
|
||||
}
|
||||
if h2.MaxReadFrameSize != 0 {
|
||||
conf.MaxReadFrameSize = uint32(h2.MaxReadFrameSize)
|
||||
}
|
||||
if h2.MaxReceiveBufferPerConnection != 0 {
|
||||
conf.MaxUploadBufferPerConnection = int32(h2.MaxReceiveBufferPerConnection)
|
||||
}
|
||||
if h2.MaxReceiveBufferPerStream != 0 {
|
||||
conf.MaxUploadBufferPerStream = int32(h2.MaxReceiveBufferPerStream)
|
||||
}
|
||||
if h2.SendPingTimeout != 0 {
|
||||
conf.SendPingTimeout = h2.SendPingTimeout
|
||||
}
|
||||
if h2.PingTimeout != 0 {
|
||||
conf.PingTimeout = h2.PingTimeout
|
||||
}
|
||||
if h2.WriteByteTimeout != 0 {
|
||||
conf.WriteByteTimeout = h2.WriteByteTimeout
|
||||
}
|
||||
if h2.PermitProhibitedCipherSuites {
|
||||
conf.PermitProhibitedCipherSuites = true
|
||||
}
|
||||
if h2.CountError != nil {
|
||||
conf.CountError = h2.CountError
|
||||
}
|
||||
}
|
||||
|
|
|
|||
vendor/golang.org/x/net/http2/config_go124.go (generated, vendored; 61 lines changed)
|
|
@ -1,61 +0,0 @@
|
|||
// Copyright 2024 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
//go:build go1.24
|
||||
|
||||
package http2
|
||||
|
||||
import "net/http"
|
||||
|
||||
// fillNetHTTPServerConfig sets fields in conf from srv.HTTP2.
|
||||
func fillNetHTTPServerConfig(conf *http2Config, srv *http.Server) {
|
||||
fillNetHTTPConfig(conf, srv.HTTP2)
|
||||
}
|
||||
|
||||
// fillNetHTTPTransportConfig sets fields in conf from tr.HTTP2.
|
||||
func fillNetHTTPTransportConfig(conf *http2Config, tr *http.Transport) {
|
||||
fillNetHTTPConfig(conf, tr.HTTP2)
|
||||
}
|
||||
|
||||
func fillNetHTTPConfig(conf *http2Config, h2 *http.HTTP2Config) {
|
||||
if h2 == nil {
|
||||
return
|
||||
}
|
||||
if h2.MaxConcurrentStreams != 0 {
|
||||
conf.MaxConcurrentStreams = uint32(h2.MaxConcurrentStreams)
|
||||
}
|
||||
if h2.MaxEncoderHeaderTableSize != 0 {
|
||||
conf.MaxEncoderHeaderTableSize = uint32(h2.MaxEncoderHeaderTableSize)
|
||||
}
|
||||
if h2.MaxDecoderHeaderTableSize != 0 {
|
||||
conf.MaxDecoderHeaderTableSize = uint32(h2.MaxDecoderHeaderTableSize)
|
||||
}
|
||||
if h2.MaxConcurrentStreams != 0 {
|
||||
conf.MaxConcurrentStreams = uint32(h2.MaxConcurrentStreams)
|
||||
}
|
||||
if h2.MaxReadFrameSize != 0 {
|
||||
conf.MaxReadFrameSize = uint32(h2.MaxReadFrameSize)
|
||||
}
|
||||
if h2.MaxReceiveBufferPerConnection != 0 {
|
||||
conf.MaxUploadBufferPerConnection = int32(h2.MaxReceiveBufferPerConnection)
|
||||
}
|
||||
if h2.MaxReceiveBufferPerStream != 0 {
|
||||
conf.MaxUploadBufferPerStream = int32(h2.MaxReceiveBufferPerStream)
|
||||
}
|
||||
if h2.SendPingTimeout != 0 {
|
||||
conf.SendPingTimeout = h2.SendPingTimeout
|
||||
}
|
||||
if h2.PingTimeout != 0 {
|
||||
conf.PingTimeout = h2.PingTimeout
|
||||
}
|
||||
if h2.WriteByteTimeout != 0 {
|
||||
conf.WriteByteTimeout = h2.WriteByteTimeout
|
||||
}
|
||||
if h2.PermitProhibitedCipherSuites {
|
||||
conf.PermitProhibitedCipherSuites = true
|
||||
}
|
||||
if h2.CountError != nil {
|
||||
conf.CountError = h2.CountError
|
||||
}
|
||||
}
|
||||
vendor/golang.org/x/net/http2/config_pre_go124.go (generated, vendored; 16 lines changed)
|
|
@ -1,16 +0,0 @@
|
|||
// Copyright 2024 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
//go:build !go1.24
|
||||
|
||||
package http2
|
||||
|
||||
import "net/http"
|
||||
|
||||
// Pre-Go 1.24 fallback.
|
||||
// The Server.HTTP2 and Transport.HTTP2 config fields were added in Go 1.24.
|
||||
|
||||
func fillNetHTTPServerConfig(conf *http2Config, srv *http.Server) {}
|
||||
|
||||
func fillNetHTTPTransportConfig(conf *http2Config, tr *http.Transport) {}
|
||||
vendor/golang.org/x/net/http2/gotrack.go (generated, vendored; 17 lines changed)
|
|
@ -15,21 +15,32 @@ import (
|
|||
"runtime"
|
||||
"strconv"
|
||||
"sync"
|
||||
"sync/atomic"
|
||||
)
|
||||
|
||||
var DebugGoroutines = os.Getenv("DEBUG_HTTP2_GOROUTINES") == "1"
|
||||
|
||||
// Setting DebugGoroutines to false during a test to disable goroutine debugging
|
||||
// results in race detector complaints when a test leaves goroutines running before
|
||||
// returning. Tests shouldn't do this, of course, but when they do it generally shows
|
||||
// up as infrequent, hard-to-debug flakes. (See #66519.)
|
||||
//
|
||||
// Disable goroutine debugging during individual tests with an atomic bool.
|
||||
// (Note that it's safe to enable/disable debugging mid-test, so the actual race condition
|
||||
// here is harmless.)
|
||||
var disableDebugGoroutines atomic.Bool
|
||||
|
||||
type goroutineLock uint64
|
||||
|
||||
func newGoroutineLock() goroutineLock {
|
||||
if !DebugGoroutines {
|
||||
if !DebugGoroutines || disableDebugGoroutines.Load() {
|
||||
return 0
|
||||
}
|
||||
return goroutineLock(curGoroutineID())
|
||||
}
|
||||
|
||||
func (g goroutineLock) check() {
|
||||
if !DebugGoroutines {
|
||||
if !DebugGoroutines || disableDebugGoroutines.Load() {
|
||||
return
|
||||
}
|
||||
if curGoroutineID() != uint64(g) {
|
||||
|
|
@ -38,7 +49,7 @@ func (g goroutineLock) check() {
|
|||
}
|
||||
|
||||
func (g goroutineLock) checkNotOn() {
|
||||
if !DebugGoroutines {
|
||||
if !DebugGoroutines || disableDebugGoroutines.Load() {
|
||||
return
|
||||
}
|
||||
if curGoroutineID() == uint64(g) {
|
||||
|
|
|
|||
vendor/golang.org/x/net/http2/http2.go (generated, vendored; 34 lines changed)
|
|
@ -15,7 +15,6 @@ package http2 // import "golang.org/x/net/http2"
|
|||
|
||||
import (
|
||||
"bufio"
|
||||
"context"
|
||||
"crypto/tls"
|
||||
"errors"
|
||||
"fmt"
|
||||
|
|
@ -255,15 +254,13 @@ func (cw closeWaiter) Wait() {
|
|||
// idle memory usage with many connections.
|
||||
type bufferedWriter struct {
|
||||
_ incomparable
|
||||
group synctestGroupInterface // immutable
|
||||
conn net.Conn // immutable
|
||||
bw *bufio.Writer // non-nil when data is buffered
|
||||
byteTimeout time.Duration // immutable, WriteByteTimeout
|
||||
conn net.Conn // immutable
|
||||
bw *bufio.Writer // non-nil when data is buffered
|
||||
byteTimeout time.Duration // immutable, WriteByteTimeout
|
||||
}
|
||||
|
||||
func newBufferedWriter(group synctestGroupInterface, conn net.Conn, timeout time.Duration) *bufferedWriter {
|
||||
func newBufferedWriter(conn net.Conn, timeout time.Duration) *bufferedWriter {
|
||||
return &bufferedWriter{
|
||||
group: group,
|
||||
conn: conn,
|
||||
byteTimeout: timeout,
|
||||
}
|
||||
|
|
@ -314,24 +311,18 @@ func (w *bufferedWriter) Flush() error {
|
|||
type bufferedWriterTimeoutWriter bufferedWriter
|
||||
|
||||
func (w *bufferedWriterTimeoutWriter) Write(p []byte) (n int, err error) {
|
||||
return writeWithByteTimeout(w.group, w.conn, w.byteTimeout, p)
|
||||
return writeWithByteTimeout(w.conn, w.byteTimeout, p)
|
||||
}
|
||||
|
||||
// writeWithByteTimeout writes to conn.
|
||||
// If more than timeout passes without any bytes being written to the connection,
|
||||
// the write fails.
|
||||
func writeWithByteTimeout(group synctestGroupInterface, conn net.Conn, timeout time.Duration, p []byte) (n int, err error) {
|
||||
func writeWithByteTimeout(conn net.Conn, timeout time.Duration, p []byte) (n int, err error) {
|
||||
if timeout <= 0 {
|
||||
return conn.Write(p)
|
||||
}
|
||||
for {
|
||||
var now time.Time
|
||||
if group == nil {
|
||||
now = time.Now()
|
||||
} else {
|
||||
now = group.Now()
|
||||
}
|
||||
conn.SetWriteDeadline(now.Add(timeout))
|
||||
conn.SetWriteDeadline(time.Now().Add(timeout))
|
||||
nn, err := conn.Write(p[n:])
|
||||
n += nn
|
||||
if n == len(p) || nn == 0 || !errors.Is(err, os.ErrDeadlineExceeded) {
|
||||
|
|
@ -417,14 +408,3 @@ func (s *sorter) SortStrings(ss []string) {
|
|||
// makes that struct also non-comparable, and generally doesn't add
|
||||
// any size (as long as it's first).
|
||||
type incomparable [0]func()
|
||||
|
||||
// synctestGroupInterface is the methods of synctestGroup used by Server and Transport.
|
||||
// It's defined as an interface here to let us keep synctestGroup entirely test-only
|
||||
// and not a part of non-test builds.
|
||||
type synctestGroupInterface interface {
|
||||
Join()
|
||||
Now() time.Time
|
||||
NewTimer(d time.Duration) timer
|
||||
AfterFunc(d time.Duration, f func()) timer
|
||||
ContextWithTimeout(ctx context.Context, d time.Duration) (context.Context, context.CancelFunc)
|
||||
}
|
||||
|
|
|
|||
vendor/golang.org/x/net/http2/server.go (generated, vendored; 124 lines changed)
|
|
@ -176,39 +176,6 @@ type Server struct {
|
|||
// so that we don't embed a Mutex in this struct, which will make the
|
||||
// struct non-copyable, which might break some callers.
|
||||
state *serverInternalState
|
||||
|
||||
// Synchronization group used for testing.
|
||||
// Outside of tests, this is nil.
|
||||
group synctestGroupInterface
|
||||
}
|
||||
|
||||
func (s *Server) markNewGoroutine() {
|
||||
if s.group != nil {
|
||||
s.group.Join()
|
||||
}
|
||||
}
|
||||
|
||||
func (s *Server) now() time.Time {
|
||||
if s.group != nil {
|
||||
return s.group.Now()
|
||||
}
|
||||
return time.Now()
|
||||
}
|
||||
|
||||
// newTimer creates a new time.Timer, or a synthetic timer in tests.
|
||||
func (s *Server) newTimer(d time.Duration) timer {
|
||||
if s.group != nil {
|
||||
return s.group.NewTimer(d)
|
||||
}
|
||||
return timeTimer{time.NewTimer(d)}
|
||||
}
|
||||
|
||||
// afterFunc creates a new time.AfterFunc timer, or a synthetic timer in tests.
|
||||
func (s *Server) afterFunc(d time.Duration, f func()) timer {
|
||||
if s.group != nil {
|
||||
return s.group.AfterFunc(d, f)
|
||||
}
|
||||
return timeTimer{time.AfterFunc(d, f)}
|
||||
}
|
||||
|
||||
type serverInternalState struct {
|
||||
|
|
@ -423,6 +390,9 @@ func (o *ServeConnOpts) handler() http.Handler {
|
|||
//
|
||||
// The opts parameter is optional. If nil, default values are used.
|
||||
func (s *Server) ServeConn(c net.Conn, opts *ServeConnOpts) {
|
||||
if opts == nil {
|
||||
opts = &ServeConnOpts{}
|
||||
}
|
||||
s.serveConn(c, opts, nil)
|
||||
}
|
||||
|
||||
|
|
@ -438,7 +408,7 @@ func (s *Server) serveConn(c net.Conn, opts *ServeConnOpts, newf func(*serverCon
|
|||
conn: c,
|
||||
baseCtx: baseCtx,
|
||||
remoteAddrStr: c.RemoteAddr().String(),
|
||||
bw: newBufferedWriter(s.group, c, conf.WriteByteTimeout),
|
||||
bw: newBufferedWriter(c, conf.WriteByteTimeout),
|
||||
handler: opts.handler(),
|
||||
streams: make(map[uint32]*stream),
|
||||
readFrameCh: make(chan readFrameResult),
|
||||
|
|
@ -638,11 +608,11 @@ type serverConn struct {
|
|||
pingSent bool
|
||||
sentPingData [8]byte
|
||||
goAwayCode ErrCode
|
||||
shutdownTimer timer // nil until used
|
||||
idleTimer timer // nil if unused
|
||||
shutdownTimer *time.Timer // nil until used
|
||||
idleTimer *time.Timer // nil if unused
|
||||
readIdleTimeout time.Duration
|
||||
pingTimeout time.Duration
|
||||
readIdleTimer timer // nil if unused
|
||||
readIdleTimer *time.Timer // nil if unused
|
||||
|
||||
// Owned by the writeFrameAsync goroutine:
|
||||
headerWriteBuf bytes.Buffer
|
||||
|
|
@ -687,12 +657,12 @@ type stream struct {
|
|||
flow outflow // limits writing from Handler to client
|
||||
inflow inflow // what the client is allowed to POST/etc to us
|
||||
state streamState
|
||||
resetQueued bool // RST_STREAM queued for write; set by sc.resetStream
|
||||
gotTrailerHeader bool // HEADER frame for trailers was seen
|
||||
wroteHeaders bool // whether we wrote headers (not status 100)
|
||||
readDeadline timer // nil if unused
|
||||
writeDeadline timer // nil if unused
|
||||
closeErr error // set before cw is closed
|
||||
resetQueued bool // RST_STREAM queued for write; set by sc.resetStream
|
||||
gotTrailerHeader bool // HEADER frame for trailers was seen
|
||||
wroteHeaders bool // whether we wrote headers (not status 100)
|
||||
readDeadline *time.Timer // nil if unused
|
||||
writeDeadline *time.Timer // nil if unused
|
||||
closeErr error // set before cw is closed
|
||||
|
||||
trailer http.Header // accumulated trailers
|
||||
reqTrailer http.Header // handler's Request.Trailer
|
||||
|
|
@ -848,7 +818,6 @@ type readFrameResult struct {
|
|||
// consumer is done with the frame.
|
||||
// It's run on its own goroutine.
|
||||
func (sc *serverConn) readFrames() {
|
||||
sc.srv.markNewGoroutine()
|
||||
gate := make(chan struct{})
|
||||
gateDone := func() { gate <- struct{}{} }
|
||||
for {
|
||||
|
|
@ -881,7 +850,6 @@ type frameWriteResult struct {
|
|||
// At most one goroutine can be running writeFrameAsync at a time per
|
||||
// serverConn.
|
||||
func (sc *serverConn) writeFrameAsync(wr FrameWriteRequest, wd *writeData) {
|
||||
sc.srv.markNewGoroutine()
|
||||
var err error
|
||||
if wd == nil {
|
||||
err = wr.write.writeFrame(sc)
|
||||
|
|
@ -965,22 +933,22 @@ func (sc *serverConn) serve(conf http2Config) {
|
|||
sc.setConnState(http.StateIdle)
|
||||
|
||||
if sc.srv.IdleTimeout > 0 {
|
||||
sc.idleTimer = sc.srv.afterFunc(sc.srv.IdleTimeout, sc.onIdleTimer)
|
||||
sc.idleTimer = time.AfterFunc(sc.srv.IdleTimeout, sc.onIdleTimer)
|
||||
defer sc.idleTimer.Stop()
|
||||
}
|
||||
|
||||
if conf.SendPingTimeout > 0 {
|
||||
sc.readIdleTimeout = conf.SendPingTimeout
|
||||
sc.readIdleTimer = sc.srv.afterFunc(conf.SendPingTimeout, sc.onReadIdleTimer)
|
||||
sc.readIdleTimer = time.AfterFunc(conf.SendPingTimeout, sc.onReadIdleTimer)
|
||||
defer sc.readIdleTimer.Stop()
|
||||
}
|
||||
|
||||
go sc.readFrames() // closed by defer sc.conn.Close above
|
||||
|
||||
settingsTimer := sc.srv.afterFunc(firstSettingsTimeout, sc.onSettingsTimer)
|
||||
settingsTimer := time.AfterFunc(firstSettingsTimeout, sc.onSettingsTimer)
|
||||
defer settingsTimer.Stop()
|
||||
|
||||
lastFrameTime := sc.srv.now()
|
||||
lastFrameTime := time.Now()
|
||||
loopNum := 0
|
||||
for {
|
||||
loopNum++
|
||||
|
|
@ -994,7 +962,7 @@ func (sc *serverConn) serve(conf http2Config) {
|
|||
case res := <-sc.wroteFrameCh:
|
||||
sc.wroteFrame(res)
|
||||
case res := <-sc.readFrameCh:
|
||||
lastFrameTime = sc.srv.now()
|
||||
lastFrameTime = time.Now()
|
||||
// Process any written frames before reading new frames from the client since a
|
||||
// written frame could have triggered a new stream to be started.
|
||||
if sc.writingFrameAsync {
|
||||
|
|
@ -1077,7 +1045,7 @@ func (sc *serverConn) handlePingTimer(lastFrameReadTime time.Time) {
|
|||
}
|
||||
|
||||
pingAt := lastFrameReadTime.Add(sc.readIdleTimeout)
|
||||
now := sc.srv.now()
|
||||
now := time.Now()
|
||||
if pingAt.After(now) {
|
||||
// We received frames since arming the ping timer.
|
||||
// Reset it for the next possible timeout.
|
||||
|
|
@ -1141,10 +1109,10 @@ func (sc *serverConn) readPreface() error {
|
|||
errc <- nil
|
||||
}
|
||||
}()
|
||||
timer := sc.srv.newTimer(prefaceTimeout) // TODO: configurable on *Server?
|
||||
timer := time.NewTimer(prefaceTimeout) // TODO: configurable on *Server?
|
||||
defer timer.Stop()
|
||||
select {
|
||||
case <-timer.C():
|
||||
case <-timer.C:
|
||||
return errPrefaceTimeout
|
||||
case err := <-errc:
|
||||
if err == nil {
|
||||
|
|
@@ -1160,6 +1128,21 @@ var errChanPool = sync.Pool{
 	New: func() interface{} { return make(chan error, 1) },
 }
 
+func getErrChan() chan error {
+	if inTests {
+		// Channels cannot be reused across synctest tests.
+		return make(chan error, 1)
+	} else {
+		return errChanPool.Get().(chan error)
+	}
+}
+
+func putErrChan(ch chan error) {
+	if !inTests {
+		errChanPool.Put(ch)
+	}
+}
+
 var writeDataPool = sync.Pool{
 	New: func() interface{} { return new(writeData) },
 }
|
||||
|
|
@ -1167,7 +1150,7 @@ var writeDataPool = sync.Pool{
|
|||
// writeDataFromHandler writes DATA response frames from a handler on
|
||||
// the given stream.
|
||||
func (sc *serverConn) writeDataFromHandler(stream *stream, data []byte, endStream bool) error {
|
||||
ch := errChanPool.Get().(chan error)
|
||||
ch := getErrChan()
|
||||
writeArg := writeDataPool.Get().(*writeData)
|
||||
*writeArg = writeData{stream.id, data, endStream}
|
||||
err := sc.writeFrameFromHandler(FrameWriteRequest{
|
||||
|
|
@ -1199,7 +1182,7 @@ func (sc *serverConn) writeDataFromHandler(stream *stream, data []byte, endStrea
|
|||
return errStreamClosed
|
||||
}
|
||||
}
|
||||
errChanPool.Put(ch)
|
||||
putErrChan(ch)
|
||||
if frameWriteDone {
|
||||
writeDataPool.Put(writeArg)
|
||||
}
|
||||
|
|
@ -1513,7 +1496,7 @@ func (sc *serverConn) goAway(code ErrCode) {
|
|||
|
||||
func (sc *serverConn) shutDownIn(d time.Duration) {
|
||||
sc.serveG.check()
|
||||
sc.shutdownTimer = sc.srv.afterFunc(d, sc.onShutdownTimer)
|
||||
sc.shutdownTimer = time.AfterFunc(d, sc.onShutdownTimer)
|
||||
}
|
||||
|
||||
func (sc *serverConn) resetStream(se StreamError) {
|
||||
|
|
@ -2118,7 +2101,7 @@ func (sc *serverConn) processHeaders(f *MetaHeadersFrame) error {
|
|||
// (in Go 1.8), though. That's a more sane option anyway.
|
||||
if sc.hs.ReadTimeout > 0 {
|
||||
sc.conn.SetReadDeadline(time.Time{})
|
||||
st.readDeadline = sc.srv.afterFunc(sc.hs.ReadTimeout, st.onReadTimeout)
|
||||
st.readDeadline = time.AfterFunc(sc.hs.ReadTimeout, st.onReadTimeout)
|
||||
}
|
||||
|
||||
return sc.scheduleHandler(id, rw, req, handler)
|
||||
|
|
@ -2216,7 +2199,7 @@ func (sc *serverConn) newStream(id, pusherID uint32, state streamState) *stream
|
|||
st.flow.add(sc.initialStreamSendWindowSize)
|
||||
st.inflow.init(sc.initialStreamRecvWindowSize)
|
||||
if sc.hs.WriteTimeout > 0 {
|
||||
st.writeDeadline = sc.srv.afterFunc(sc.hs.WriteTimeout, st.onWriteTimeout)
|
||||
st.writeDeadline = time.AfterFunc(sc.hs.WriteTimeout, st.onWriteTimeout)
|
||||
}
|
||||
|
||||
sc.streams[id] = st
|
||||
|
|
@ -2405,7 +2388,6 @@ func (sc *serverConn) handlerDone() {
|
|||
|
||||
// Run on its own goroutine.
|
||||
func (sc *serverConn) runHandler(rw *responseWriter, req *http.Request, handler func(http.ResponseWriter, *http.Request)) {
|
||||
sc.srv.markNewGoroutine()
|
||||
defer sc.sendServeMsg(handlerDoneMsg)
|
||||
didPanic := true
|
||||
defer func() {
|
||||
|
|
@ -2454,7 +2436,7 @@ func (sc *serverConn) writeHeaders(st *stream, headerData *writeResHeaders) erro
|
|||
// waiting for this frame to be written, so an http.Flush mid-handler
|
||||
// writes out the correct value of keys, before a handler later potentially
|
||||
// mutates it.
|
||||
errc = errChanPool.Get().(chan error)
|
||||
errc = getErrChan()
|
||||
}
|
||||
if err := sc.writeFrameFromHandler(FrameWriteRequest{
|
||||
write: headerData,
|
||||
|
|
@ -2466,7 +2448,7 @@ func (sc *serverConn) writeHeaders(st *stream, headerData *writeResHeaders) erro
|
|||
if errc != nil {
|
||||
select {
|
||||
case err := <-errc:
|
||||
errChanPool.Put(errc)
|
||||
putErrChan(errc)
|
||||
return err
|
||||
case <-sc.doneServing:
|
||||
return errClientDisconnected
|
||||
|
|
@ -2573,7 +2555,7 @@ func (b *requestBody) Read(p []byte) (n int, err error) {
|
|||
if err == io.EOF {
|
||||
b.sawEOF = true
|
||||
}
|
||||
if b.conn == nil && inTests {
|
||||
if b.conn == nil {
|
||||
return
|
||||
}
|
||||
b.conn.noteBodyReadFromHandler(b.stream, n, err)
|
||||
|
|
@ -2702,7 +2684,7 @@ func (rws *responseWriterState) writeChunk(p []byte) (n int, err error) {
|
|||
var date string
|
||||
if _, ok := rws.snapHeader["Date"]; !ok {
|
||||
// TODO(bradfitz): be faster here, like net/http? measure.
|
||||
date = rws.conn.srv.now().UTC().Format(http.TimeFormat)
|
||||
date = time.Now().UTC().Format(http.TimeFormat)
|
||||
}
|
||||
|
||||
for _, v := range rws.snapHeader["Trailer"] {
|
||||
|
|
@ -2824,7 +2806,7 @@ func (rws *responseWriterState) promoteUndeclaredTrailers() {
|
|||
|
||||
func (w *responseWriter) SetReadDeadline(deadline time.Time) error {
|
||||
st := w.rws.stream
|
||||
if !deadline.IsZero() && deadline.Before(w.rws.conn.srv.now()) {
|
||||
if !deadline.IsZero() && deadline.Before(time.Now()) {
|
||||
// If we're setting a deadline in the past, reset the stream immediately
|
||||
// so writes after SetWriteDeadline returns will fail.
|
||||
st.onReadTimeout()
|
||||
|
|
@ -2840,9 +2822,9 @@ func (w *responseWriter) SetReadDeadline(deadline time.Time) error {
|
|||
if deadline.IsZero() {
|
||||
st.readDeadline = nil
|
||||
} else if st.readDeadline == nil {
|
||||
st.readDeadline = sc.srv.afterFunc(deadline.Sub(sc.srv.now()), st.onReadTimeout)
|
||||
st.readDeadline = time.AfterFunc(deadline.Sub(time.Now()), st.onReadTimeout)
|
||||
} else {
|
||||
st.readDeadline.Reset(deadline.Sub(sc.srv.now()))
|
||||
st.readDeadline.Reset(deadline.Sub(time.Now()))
|
||||
}
|
||||
})
|
||||
return nil
|
||||
|
|
@ -2850,7 +2832,7 @@ func (w *responseWriter) SetReadDeadline(deadline time.Time) error {
|
|||
|
||||
func (w *responseWriter) SetWriteDeadline(deadline time.Time) error {
|
||||
st := w.rws.stream
|
||||
if !deadline.IsZero() && deadline.Before(w.rws.conn.srv.now()) {
|
||||
if !deadline.IsZero() && deadline.Before(time.Now()) {
|
||||
// If we're setting a deadline in the past, reset the stream immediately
|
||||
// so writes after SetWriteDeadline returns will fail.
|
||||
st.onWriteTimeout()
|
||||
|
|
@ -2866,9 +2848,9 @@ func (w *responseWriter) SetWriteDeadline(deadline time.Time) error {
|
|||
if deadline.IsZero() {
|
||||
st.writeDeadline = nil
|
||||
} else if st.writeDeadline == nil {
|
||||
st.writeDeadline = sc.srv.afterFunc(deadline.Sub(sc.srv.now()), st.onWriteTimeout)
|
||||
st.writeDeadline = time.AfterFunc(deadline.Sub(time.Now()), st.onWriteTimeout)
|
||||
} else {
|
||||
st.writeDeadline.Reset(deadline.Sub(sc.srv.now()))
|
||||
st.writeDeadline.Reset(deadline.Sub(time.Now()))
|
||||
}
|
||||
})
|
||||
return nil
|
||||
|
|
@ -3147,7 +3129,7 @@ func (w *responseWriter) Push(target string, opts *http.PushOptions) error {
|
|||
method: opts.Method,
|
||||
url: u,
|
||||
header: cloneHeader(opts.Header),
|
||||
done: errChanPool.Get().(chan error),
|
||||
done: getErrChan(),
|
||||
}
|
||||
|
||||
select {
|
||||
|
|
@ -3164,7 +3146,7 @@ func (w *responseWriter) Push(target string, opts *http.PushOptions) error {
|
|||
case <-st.cw:
|
||||
return errStreamClosed
|
||||
case err := <-msg.done:
|
||||
errChanPool.Put(msg.done)
|
||||
putErrChan(msg.done)
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
|
|
|||
20 vendor/golang.org/x/net/http2/timer.go (generated, vendored)

@@ -1,20 +0,0 @@
-// Copyright 2024 The Go Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style
-// license that can be found in the LICENSE file.
-package http2
-
-import "time"
-
-// A timer is a time.Timer, as an interface which can be replaced in tests.
-type timer = interface {
-	C() <-chan time.Time
-	Reset(d time.Duration) bool
-	Stop() bool
-}
-
-// timeTimer adapts a time.Timer to the timer interface.
-type timeTimer struct {
-	*time.Timer
-}
-
-func (t timeTimer) C() <-chan time.Time { return t.Timer.C }

94 vendor/golang.org/x/net/http2/transport.go (generated, vendored)
|
@ -193,50 +193,6 @@ type Transport struct {
|
|||
|
||||
type transportTestHooks struct {
|
||||
newclientconn func(*ClientConn)
|
||||
group synctestGroupInterface
|
||||
}
|
||||
|
||||
func (t *Transport) markNewGoroutine() {
|
||||
if t != nil && t.transportTestHooks != nil {
|
||||
t.transportTestHooks.group.Join()
|
||||
}
|
||||
}
|
||||
|
||||
func (t *Transport) now() time.Time {
|
||||
if t != nil && t.transportTestHooks != nil {
|
||||
return t.transportTestHooks.group.Now()
|
||||
}
|
||||
return time.Now()
|
||||
}
|
||||
|
||||
func (t *Transport) timeSince(when time.Time) time.Duration {
|
||||
if t != nil && t.transportTestHooks != nil {
|
||||
return t.now().Sub(when)
|
||||
}
|
||||
return time.Since(when)
|
||||
}
|
||||
|
||||
// newTimer creates a new time.Timer, or a synthetic timer in tests.
|
||||
func (t *Transport) newTimer(d time.Duration) timer {
|
||||
if t.transportTestHooks != nil {
|
||||
return t.transportTestHooks.group.NewTimer(d)
|
||||
}
|
||||
return timeTimer{time.NewTimer(d)}
|
||||
}
|
||||
|
||||
// afterFunc creates a new time.AfterFunc timer, or a synthetic timer in tests.
|
||||
func (t *Transport) afterFunc(d time.Duration, f func()) timer {
|
||||
if t.transportTestHooks != nil {
|
||||
return t.transportTestHooks.group.AfterFunc(d, f)
|
||||
}
|
||||
return timeTimer{time.AfterFunc(d, f)}
|
||||
}
|
||||
|
||||
func (t *Transport) contextWithTimeout(ctx context.Context, d time.Duration) (context.Context, context.CancelFunc) {
|
||||
if t.transportTestHooks != nil {
|
||||
return t.transportTestHooks.group.ContextWithTimeout(ctx, d)
|
||||
}
|
||||
return context.WithTimeout(ctx, d)
|
||||
}
|
||||
|
||||
func (t *Transport) maxHeaderListSize() uint32 {
|
||||
|
|
@ -366,7 +322,7 @@ type ClientConn struct {
|
|||
readerErr error // set before readerDone is closed
|
||||
|
||||
idleTimeout time.Duration // or 0 for never
|
||||
idleTimer timer
|
||||
idleTimer *time.Timer
|
||||
|
||||
mu sync.Mutex // guards following
|
||||
cond *sync.Cond // hold mu; broadcast on flow/closed changes
|
||||
|
|
@ -534,14 +490,12 @@ func (cs *clientStream) closeReqBodyLocked() {
|
|||
cs.reqBodyClosed = make(chan struct{})
|
||||
reqBodyClosed := cs.reqBodyClosed
|
||||
go func() {
|
||||
cs.cc.t.markNewGoroutine()
|
||||
cs.reqBody.Close()
|
||||
close(reqBodyClosed)
|
||||
}()
|
||||
}
|
||||
|
||||
type stickyErrWriter struct {
|
||||
group synctestGroupInterface
|
||||
conn net.Conn
|
||||
timeout time.Duration
|
||||
err *error
|
||||
|
|
@ -551,7 +505,7 @@ func (sew stickyErrWriter) Write(p []byte) (n int, err error) {
|
|||
if *sew.err != nil {
|
||||
return 0, *sew.err
|
||||
}
|
||||
n, err = writeWithByteTimeout(sew.group, sew.conn, sew.timeout, p)
|
||||
n, err = writeWithByteTimeout(sew.conn, sew.timeout, p)
|
||||
*sew.err = err
|
||||
return n, err
|
||||
}
|
||||
|
|
@ -650,9 +604,9 @@ func (t *Transport) RoundTripOpt(req *http.Request, opt RoundTripOpt) (*http.Res
|
|||
backoff := float64(uint(1) << (uint(retry) - 1))
|
||||
backoff += backoff * (0.1 * mathrand.Float64())
|
||||
d := time.Second * time.Duration(backoff)
|
||||
tm := t.newTimer(d)
|
||||
tm := time.NewTimer(d)
|
||||
select {
|
||||
case <-tm.C():
|
||||
case <-tm.C:
|
||||
t.vlogf("RoundTrip retrying after failure: %v", roundTripErr)
|
||||
continue
|
||||
case <-req.Context().Done():
|
||||
|
|
@ -699,6 +653,7 @@ var (
|
|||
errClientConnUnusable = errors.New("http2: client conn not usable")
|
||||
errClientConnNotEstablished = errors.New("http2: client conn could not be established")
|
||||
errClientConnGotGoAway = errors.New("http2: Transport received Server's graceful shutdown GOAWAY")
|
||||
errClientConnForceClosed = errors.New("http2: client connection force closed via ClientConn.Close")
|
||||
)
|
||||
|
||||
// shouldRetryRequest is called by RoundTrip when a request fails to get
|
||||
|
|
@ -838,14 +793,11 @@ func (t *Transport) newClientConn(c net.Conn, singleUse bool) (*ClientConn, erro
|
|||
pingTimeout: conf.PingTimeout,
|
||||
pings: make(map[[8]byte]chan struct{}),
|
||||
reqHeaderMu: make(chan struct{}, 1),
|
||||
lastActive: t.now(),
|
||||
lastActive: time.Now(),
|
||||
}
|
||||
var group synctestGroupInterface
|
||||
if t.transportTestHooks != nil {
|
||||
t.markNewGoroutine()
|
||||
t.transportTestHooks.newclientconn(cc)
|
||||
c = cc.tconn
|
||||
group = t.group
|
||||
}
|
||||
if VerboseLogs {
|
||||
t.vlogf("http2: Transport creating client conn %p to %v", cc, c.RemoteAddr())
|
||||
|
|
@ -857,7 +809,6 @@ func (t *Transport) newClientConn(c net.Conn, singleUse bool) (*ClientConn, erro
|
|||
// TODO: adjust this writer size to account for frame size +
|
||||
// MTU + crypto/tls record padding.
|
||||
cc.bw = bufio.NewWriter(stickyErrWriter{
|
||||
group: group,
|
||||
conn: c,
|
||||
timeout: conf.WriteByteTimeout,
|
||||
err: &cc.werr,
|
||||
|
|
@ -906,7 +857,7 @@ func (t *Transport) newClientConn(c net.Conn, singleUse bool) (*ClientConn, erro
|
|||
// Start the idle timer after the connection is fully initialized.
|
||||
if d := t.idleConnTimeout(); d != 0 {
|
||||
cc.idleTimeout = d
|
||||
cc.idleTimer = t.afterFunc(d, cc.onIdleTimeout)
|
||||
cc.idleTimer = time.AfterFunc(d, cc.onIdleTimeout)
|
||||
}
|
||||
|
||||
go cc.readLoop()
|
||||
|
|
@ -917,7 +868,7 @@ func (cc *ClientConn) healthCheck() {
|
|||
pingTimeout := cc.pingTimeout
|
||||
// We don't need to periodically ping in the health check, because the readLoop of ClientConn will
|
||||
// trigger the healthCheck again if there is no frame received.
|
||||
ctx, cancel := cc.t.contextWithTimeout(context.Background(), pingTimeout)
|
||||
ctx, cancel := context.WithTimeout(context.Background(), pingTimeout)
|
||||
defer cancel()
|
||||
cc.vlogf("http2: Transport sending health check")
|
||||
err := cc.Ping(ctx)
|
||||
|
|
@ -1120,7 +1071,7 @@ func (cc *ClientConn) tooIdleLocked() bool {
|
|||
// times are compared based on their wall time. We don't want
|
||||
// to reuse a connection that's been sitting idle during
|
||||
// VM/laptop suspend if monotonic time was also frozen.
|
||||
return cc.idleTimeout != 0 && !cc.lastIdle.IsZero() && cc.t.timeSince(cc.lastIdle.Round(0)) > cc.idleTimeout
|
||||
return cc.idleTimeout != 0 && !cc.lastIdle.IsZero() && time.Since(cc.lastIdle.Round(0)) > cc.idleTimeout
|
||||
}
|
||||
|
||||
// onIdleTimeout is called from a time.AfterFunc goroutine. It will
|
||||
|
|
@ -1186,7 +1137,6 @@ func (cc *ClientConn) Shutdown(ctx context.Context) error {
|
|||
done := make(chan struct{})
|
||||
cancelled := false // guarded by cc.mu
|
||||
go func() {
|
||||
cc.t.markNewGoroutine()
|
||||
cc.mu.Lock()
|
||||
defer cc.mu.Unlock()
|
||||
for {
|
||||
|
|
@ -1257,8 +1207,7 @@ func (cc *ClientConn) closeForError(err error) {
|
|||
//
|
||||
// In-flight requests are interrupted. For a graceful shutdown, use Shutdown instead.
|
||||
func (cc *ClientConn) Close() error {
|
||||
err := errors.New("http2: client connection force closed via ClientConn.Close")
|
||||
cc.closeForError(err)
|
||||
cc.closeForError(errClientConnForceClosed)
|
||||
return nil
|
||||
}
|
||||
|
||||
|
|
@ -1427,7 +1376,6 @@ func (cc *ClientConn) roundTrip(req *http.Request, streamf func(*clientStream))
|
|||
//
|
||||
// It sends the request and performs post-request cleanup (closing Request.Body, etc.).
|
||||
func (cs *clientStream) doRequest(req *http.Request, streamf func(*clientStream)) {
|
||||
cs.cc.t.markNewGoroutine()
|
||||
err := cs.writeRequest(req, streamf)
|
||||
cs.cleanupWriteRequest(err)
|
||||
}
|
||||
|
|
@ -1558,9 +1506,9 @@ func (cs *clientStream) writeRequest(req *http.Request, streamf func(*clientStre
|
|||
var respHeaderTimer <-chan time.Time
|
||||
var respHeaderRecv chan struct{}
|
||||
if d := cc.responseHeaderTimeout(); d != 0 {
|
||||
timer := cc.t.newTimer(d)
|
||||
timer := time.NewTimer(d)
|
||||
defer timer.Stop()
|
||||
respHeaderTimer = timer.C()
|
||||
respHeaderTimer = timer.C
|
||||
respHeaderRecv = cs.respHeaderRecv
|
||||
}
|
||||
// Wait until the peer half-closes its end of the stream,
|
||||
|
|
@ -1753,7 +1701,7 @@ func (cc *ClientConn) awaitOpenSlotForStreamLocked(cs *clientStream) error {
|
|||
// Return a fatal error which aborts the retry loop.
|
||||
return errClientConnNotEstablished
|
||||
}
|
||||
cc.lastActive = cc.t.now()
|
||||
cc.lastActive = time.Now()
|
||||
if cc.closed || !cc.canTakeNewRequestLocked() {
|
||||
return errClientConnUnusable
|
||||
}
|
||||
|
|
@ -2092,10 +2040,10 @@ func (cc *ClientConn) forgetStreamID(id uint32) {
|
|||
if len(cc.streams) != slen-1 {
|
||||
panic("forgetting unknown stream id")
|
||||
}
|
||||
cc.lastActive = cc.t.now()
|
||||
cc.lastActive = time.Now()
|
||||
if len(cc.streams) == 0 && cc.idleTimer != nil {
|
||||
cc.idleTimer.Reset(cc.idleTimeout)
|
||||
cc.lastIdle = cc.t.now()
|
||||
cc.lastIdle = time.Now()
|
||||
}
|
||||
// Wake up writeRequestBody via clientStream.awaitFlowControl and
|
||||
// wake up RoundTrip if there is a pending request.
|
||||
|
|
@ -2121,7 +2069,6 @@ type clientConnReadLoop struct {
|
|||
|
||||
// readLoop runs in its own goroutine and reads and dispatches frames.
|
||||
func (cc *ClientConn) readLoop() {
|
||||
cc.t.markNewGoroutine()
|
||||
rl := &clientConnReadLoop{cc: cc}
|
||||
defer rl.cleanup()
|
||||
cc.readerErr = rl.run()
|
||||
|
|
@ -2188,9 +2135,9 @@ func (rl *clientConnReadLoop) cleanup() {
|
|||
if cc.idleTimeout > 0 && unusedWaitTime > cc.idleTimeout {
|
||||
unusedWaitTime = cc.idleTimeout
|
||||
}
|
||||
idleTime := cc.t.now().Sub(cc.lastActive)
|
||||
idleTime := time.Now().Sub(cc.lastActive)
|
||||
if atomic.LoadUint32(&cc.atomicReused) == 0 && idleTime < unusedWaitTime && !cc.closedOnIdle {
|
||||
cc.idleTimer = cc.t.afterFunc(unusedWaitTime-idleTime, func() {
|
||||
cc.idleTimer = time.AfterFunc(unusedWaitTime-idleTime, func() {
|
||||
cc.t.connPool().MarkDead(cc)
|
||||
})
|
||||
} else {
|
||||
|
|
@ -2250,9 +2197,9 @@ func (rl *clientConnReadLoop) run() error {
|
|||
cc := rl.cc
|
||||
gotSettings := false
|
||||
readIdleTimeout := cc.readIdleTimeout
|
||||
var t timer
|
||||
var t *time.Timer
|
||||
if readIdleTimeout != 0 {
|
||||
t = cc.t.afterFunc(readIdleTimeout, cc.healthCheck)
|
||||
t = time.AfterFunc(readIdleTimeout, cc.healthCheck)
|
||||
}
|
||||
for {
|
||||
f, err := cc.fr.ReadFrame()
|
||||
|
|
@ -2998,7 +2945,6 @@ func (cc *ClientConn) Ping(ctx context.Context) error {
|
|||
var pingError error
|
||||
errc := make(chan struct{})
|
||||
go func() {
|
||||
cc.t.markNewGoroutine()
|
||||
cc.wmu.Lock()
|
||||
defer cc.wmu.Unlock()
|
||||
if pingError = cc.fr.WritePing(false, p); pingError != nil {
|
||||
|
|
@ -3228,7 +3174,7 @@ func traceGotConn(req *http.Request, cc *ClientConn, reused bool) {
|
|||
cc.mu.Lock()
|
||||
ci.WasIdle = len(cc.streams) == 0 && reused
|
||||
if ci.WasIdle && !cc.lastActive.IsZero() {
|
||||
ci.IdleTime = cc.t.timeSince(cc.lastActive)
|
||||
ci.IdleTime = time.Since(cc.lastActive)
|
||||
}
|
||||
cc.mu.Unlock()
|
||||
|
||||
|
|
|
|||
1
vendor/modernc.org/sqlite/AUTHORS
generated
vendored
1
vendor/modernc.org/sqlite/AUTHORS
generated
vendored
|
|
@ -15,6 +15,7 @@ Dan Peterson <danp@danp.net>
|
|||
David Walton <david@davidwalton.com>
|
||||
Davsk Ltd Co <skinner.david@gmail.com>
|
||||
FerretDB Inc.
|
||||
Harald Albrecht <thediveo@gmx.eu>
|
||||
Jaap Aarts <jaap.aarts1@gmail.com>
|
||||
Jan Mercl <0xjnml@gmail.com>
|
||||
Josh Bleecher Snyder <josharian@gmail.com>
|
||||
|
|
|
|||
2
vendor/modernc.org/sqlite/CONTRIBUTORS
generated
vendored
2
vendor/modernc.org/sqlite/CONTRIBUTORS
generated
vendored
|
|
@ -17,6 +17,8 @@ David Walton <david@davidwalton.com>
|
|||
Elle Mouton <elle.mouton@gmail.com>
|
||||
FlyingOnion <731677080@qq.com>
|
||||
Gleb Sakhnov <gleb.sakhnov@gmail.com>
|
||||
Guénaël Muller <inkey@inkey-art.net>
|
||||
Harald Albrecht <thediveo@gmx.eu>
|
||||
Jaap Aarts <jaap.aarts1@gmail.com>
|
||||
Jan Mercl <0xjnml@gmail.com>
|
||||
Josh Bleecher Snyder <josharian@gmail.com>
|
||||
|
|
|
|||
88
vendor/modernc.org/sqlite/sqlite.go
generated
vendored
88
vendor/modernc.org/sqlite/sqlite.go
generated
vendored
|
|
@ -254,7 +254,30 @@ func (r *rows) Next(dest []driver.Value) (err error) {
|
|||
return err
|
||||
}
|
||||
|
||||
dest[i] = v
|
||||
if !r.c.intToTime {
|
||||
dest[i] = v
|
||||
} else {
|
||||
// Inspired by mattn/go-sqlite3:
|
||||
// https://github.com/mattn/go-sqlite3/blob/f76bae4b0044cbba8fb2c72b8e4559e8fbcffd86/sqlite3.go#L2254-L2262
|
||||
// but we put make this compatibility optional behind a DSN
|
||||
// query parameter, because this changes API behavior, so an
|
||||
// opt-in is needed.
|
||||
switch r.ColumnTypeDatabaseTypeName(i) {
|
||||
case "DATE", "DATETIME", "TIMESTAMP":
|
||||
// Is it a seconds timestamp or a milliseconds
|
||||
// timestamp?
|
||||
if v > 1e12 || v < -1e12 {
|
||||
// time.Unix expects nanoseconds, but this is a
|
||||
// milliseconds timestamp, so convert ms->ns.
|
||||
v *= int64(time.Millisecond)
|
||||
dest[i] = time.Unix(0, v).UTC()
|
||||
} else {
|
||||
dest[i] = time.Unix(v, 0)
|
||||
}
|
||||
default:
|
||||
dest[i] = v
|
||||
}
|
||||
}
|
||||
case sqlite3.SQLITE_FLOAT:
|
||||
v, err := r.c.columnDouble(r.pstmt, i)
|
||||
if err != nil {
|
||||
|
|
@ -752,8 +775,10 @@ type conn struct {
|
|||
db uintptr // *sqlite3.Xsqlite3
|
||||
tls *libc.TLS
|
||||
|
||||
writeTimeFormat string
|
||||
beginMode string
|
||||
writeTimeFormat string
|
||||
beginMode string
|
||||
intToTime bool
|
||||
integerTimeFormat string
|
||||
}
|
||||
|
||||
func newConn(dsn string) (*conn, error) {
|
||||
|
|
@ -858,6 +883,17 @@ func applyQueryParams(c *conn, query string) error {
|
|||
}
|
||||
c.writeTimeFormat = f
|
||||
}
|
||||
if v := q.Get("_time_integer_format"); v != "" {
|
||||
switch v {
|
||||
case "unix":
|
||||
case "unix_milli":
|
||||
case "unix_micro":
|
||||
case "unix_nano":
|
||||
default:
|
||||
return fmt.Errorf("unknown _time_integer_format %q", v)
|
||||
}
|
||||
c.integerTimeFormat = v
|
||||
}
|
||||
|
||||
if v := q.Get("_txlock"); v != "" {
|
||||
lower := strings.ToLower(v)
|
||||
|
|
@ -867,6 +903,15 @@ func applyQueryParams(c *conn, query string) error {
|
|||
c.beginMode = v
|
||||
}
|
||||
|
||||
if v := q.Get("_inttotime"); v != "" {
|
||||
onoff, err := strconv.ParseBool(v)
|
||||
if err != nil {
|
||||
return fmt.Errorf("unknown _inttotime %q, must be 1, t, T, TRUE, true, True, 0, f, F, FALSE, false, False",
|
||||
v)
|
||||
}
|
||||
c.intToTime = onoff
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
|
|
@ -1117,9 +1162,29 @@ func (c *conn) bind(pstmt uintptr, n int, args []driver.NamedValue) (allocs []ui
|
|||
return allocs, err
|
||||
}
|
||||
case time.Time:
|
||||
if p, err = c.bindText(pstmt, i, c.formatTime(x)); err != nil {
|
||||
return allocs, err
|
||||
switch c.integerTimeFormat {
|
||||
case "unix":
|
||||
if err := c.bindInt64(pstmt, i, x.Unix()); err != nil {
|
||||
return allocs, err
|
||||
}
|
||||
case "unix_milli":
|
||||
if err := c.bindInt64(pstmt, i, x.UnixMilli()); err != nil {
|
||||
return allocs, err
|
||||
}
|
||||
case "unix_micro":
|
||||
if err := c.bindInt64(pstmt, i, x.UnixMicro()); err != nil {
|
||||
return allocs, err
|
||||
}
|
||||
case "unix_nano":
|
||||
if err := c.bindInt64(pstmt, i, x.UnixNano()); err != nil {
|
||||
return allocs, err
|
||||
}
|
||||
default:
|
||||
if p, err = c.bindText(pstmt, i, c.formatTime(x)); err != nil {
|
||||
return allocs, err
|
||||
}
|
||||
}
|
||||
|
||||
case nil:
|
||||
if p, err = c.bindNull(pstmt, i); err != nil {
|
||||
return allocs, err
|
||||
|
|
@@ -1913,6 +1978,19 @@ func newDriver() *Driver { return d }
 // including the timezone specifier. If this parameter is not specified, then
 // the default String() format will be used.
 //
+// _time_integer_format: The name of a integer format to use when writing time values.
+// By default, the time is stored as string and the format can be set with _time_format
+// parameter. If _time_integer_format is set, the time will be stored as an integer and
+// the integer value will depend on the integer format.
+// If you decide to set both _time_format and _time_integer_format, the time will be
+// converted as integer and the _time_format value will be ignored.
+// Currently the supported value are "unix","unix_milli", "unix_micro" and "unix_nano",
+// which corresponds to seconds, milliseconds, microseconds or nanoseconds
+// since unixepoch (1 January 1970 00:00:00 UTC).
+//
+// _inttotime: Enable conversion of time column (DATE, DATETIME,TIMESTAMP) from integer
+// to time if the field contain integer (int64).
+//
 // _txlock: The locking behavior to use when beginning a transaction. May be
 // "deferred" (the default), "immediate", or "exclusive" (case insensitive). See:
 // https://www.sqlite.org/lang_transaction.html#deferred_immediate_and_exclusive_transactions
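The two DSN parameters documented above can be combined. Below is a minimal sketch of how that might look from application code; the driver name "sqlite" and the `file:...?param=value` DSN form follow the driver's usual conventions, while the database file, table, and column names are purely illustrative:

```go
package main

import (
	"database/sql"
	"fmt"
	"time"

	_ "modernc.org/sqlite"
)

func main() {
	// Assumed DSN: store time.Time values as unix-millisecond integers
	// (_time_integer_format) and convert integer DATE/DATETIME/TIMESTAMP
	// columns back into time.Time when reading (_inttotime).
	db, err := sql.Open("sqlite", "file:demo.db?_time_integer_format=unix_milli&_inttotime=true")
	if err != nil {
		panic(err)
	}
	defer db.Close()

	if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, created_at DATETIME)`); err != nil {
		panic(err)
	}
	if _, err := db.Exec(`INSERT INTO events (created_at) VALUES (?)`, time.Now()); err != nil {
		panic(err)
	}

	var created time.Time
	if err := db.QueryRow(`SELECT created_at FROM events LIMIT 1`).Scan(&created); err != nil {
		panic(err)
	}
	fmt.Println("created_at:", created)
}
```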
|
||||
|
|
|
|||
8 vendor/modules.txt (vendored)

@@ -1197,8 +1197,8 @@ golang.org/x/image/webp
 golang.org/x/mod/internal/lazyregexp
 golang.org/x/mod/module
 golang.org/x/mod/semver
-# golang.org/x/net v0.43.0
-## explicit; go 1.23.0
+# golang.org/x/net v0.44.0
+## explicit; go 1.24.0
 golang.org/x/net/bpf
 golang.org/x/net/context
 golang.org/x/net/html

@@ -1433,7 +1433,7 @@ modernc.org/mathutil
 # modernc.org/memory v1.11.0
 ## explicit; go 1.23.0
 modernc.org/memory
-# modernc.org/sqlite v1.38.2 => gitlab.com/NyaaaWhatsUpDoc/sqlite v1.38.2-concurrency-workaround
+# modernc.org/sqlite v1.39.0 => gitlab.com/NyaaaWhatsUpDoc/sqlite v1.39.0-concurrency-workaround
 ## explicit; go 1.23.0
 modernc.org/sqlite
 modernc.org/sqlite/lib

@@ -1441,4 +1441,4 @@ modernc.org/sqlite/lib
 ## explicit; go 1.22.0
 mvdan.cc/xurls/v2
 # github.com/go-swagger/go-swagger => codeberg.org/superseriousbusiness/go-swagger v0.32.3-gts-go1.23-fix
-# modernc.org/sqlite => gitlab.com/NyaaaWhatsUpDoc/sqlite v1.38.2-concurrency-workaround
+# modernc.org/sqlite => gitlab.com/NyaaaWhatsUpDoc/sqlite v1.39.0-concurrency-workaround
|
||||
|
|
|
|||
|
|
@@ -1,4 +1,3 @@
 node_modules
 prism.js
 prism.css
-nollamasworker/sha256.js
|
||||
|
|
@ -390,7 +390,7 @@
|
|||
in "normal" width when caption
|
||||
is rendered on the right side.
|
||||
*/
|
||||
padding: 0;
|
||||
padding: 0 0 0 1rem;
|
||||
|
||||
/*
|
||||
Let it have a bit more width if it
|
||||
|
|
|
|||
|
|
@ -1,30 +0,0 @@
|
|||
/*
|
||||
GoToSocial
|
||||
Copyright (C) GoToSocial Authors admin@gotosocial.org
|
||||
SPDX-License-Identifier: AGPL-3.0-or-later
|
||||
|
||||
This program is free software: you can redistribute it and/or modify
|
||||
it under the terms of the GNU Affero General Public License as published by
|
||||
the Free Software Foundation, either version 3 of the License, or
|
||||
(at your option) any later version.
|
||||
|
||||
This program is distributed in the hope that it will be useful,
|
||||
but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
GNU Affero General Public License for more details.
|
||||
|
||||
You should have received a copy of the GNU Affero General Public License
|
||||
along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
|
||||
.nollamas {
|
||||
display: flex;
|
||||
flex-direction: column;
|
||||
|
||||
.nollamas-status {
|
||||
display: flex;
|
||||
flex-direction: column;
|
||||
align-self: center;
|
||||
align-items: center;
|
||||
}
|
||||
}
|
||||
|
|
@ -73,24 +73,6 @@ skulk({
|
|||
["babelify", { global: true }]
|
||||
],
|
||||
},
|
||||
nollamas: {
|
||||
entryFile: "nollamas",
|
||||
outputFile: "nollamas.js",
|
||||
preset: ["js"],
|
||||
prodCfg: prodCfg,
|
||||
transform: [
|
||||
["babelify", { global: true }]
|
||||
],
|
||||
},
|
||||
nollamasworker: {
|
||||
entryFile: "nollamasworker",
|
||||
outputFile: "nollamasworker.js",
|
||||
preset: ["js"],
|
||||
prodCfg: prodCfg,
|
||||
transform: [
|
||||
["babelify", { global: true }]
|
||||
],
|
||||
},
|
||||
settings: {
|
||||
entryFile: "settings",
|
||||
outputFile: "settings.js",
|
||||
|
|
|
|||
|
|
@ -1,129 +0,0 @@
|
|||
/*
|
||||
GoToSocial
|
||||
Copyright (C) GoToSocial Authors admin@gotosocial.org
|
||||
SPDX-License-Identifier: AGPL-3.0-or-later
|
||||
|
||||
This program is free software: you can redistribute it and/or modify
|
||||
it under the terms of the GNU Affero General Public License as published by
|
||||
the Free Software Foundation, either version 3 of the License, or
|
||||
(at your option) any later version.
|
||||
|
||||
This program is distributed in the hope that it will be useful,
|
||||
but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
GNU Affero General Public License for more details.
|
||||
|
||||
You should have received a copy of the GNU Affero General Public License
|
||||
along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
|
||||
const explanation = "Your browser is currently solving a proof-of-work challenge designed to deter \"ai\" scrapers. This should take no more than a few seconds...";
|
||||
|
||||
document.addEventListener("DOMContentLoaded", function() {
|
||||
// Get the nollamas section container.
|
||||
const nollamas = document.querySelector(".nollamas");
|
||||
|
||||
// Add some "loading" text to show that
|
||||
// a proof-of-work captcha is being done.
|
||||
const p = this.createElement("p");
|
||||
p.className = "nollamas-explanation";
|
||||
p.appendChild(document.createTextNode(explanation));
|
||||
nollamas.appendChild(p);
|
||||
|
||||
// Add a loading spinner, but only
|
||||
// animate it if motion is allowed.
|
||||
const spinner = document.createElement("i");
|
||||
spinner.className = "fa fa-spinner fa-2x fa-fw nollamas-status";
|
||||
if (!window.matchMedia('(prefers-reduced-motion: reduce)').matches) {
|
||||
spinner.className += " fa-pulse";
|
||||
}
|
||||
spinner.setAttribute("title","Solving...");
|
||||
spinner.setAttribute("aria-label", "Solving");
|
||||
nollamas.appendChild(spinner);
|
||||
|
||||
// Read the challenge and difficulty from
|
||||
// data attributes on the nollamas section.
|
||||
const seed = nollamas.dataset.nollamasSeed;
|
||||
const challenge = nollamas.dataset.nollamasChallenge;
|
||||
const threads = navigator.hardwareConcurrency;
|
||||
if (typeof(threads) != "number" || threads < 1) { threads = 1; }
|
||||
|
||||
console.log("seed:", seed); // eslint-disable-line no-console
|
||||
console.log("challenge:", challenge); // eslint-disable-line no-console
|
||||
console.log("threads:", threads); // eslint-disable-line no-console
|
||||
|
||||
// Create an array to track
|
||||
// all workers such that we
|
||||
// can terminate them all.
|
||||
const workers = [];
|
||||
const terminateAll = () => { workers.forEach((worker) => worker.terminate() ); };
|
||||
|
||||
// Get time before task completion.
|
||||
const startTime = performance.now();
|
||||
|
||||
// Prepare and start each worker,
|
||||
// adding them to the workers array.
|
||||
for (let i = 0; i < threads; i++) {
|
||||
const worker = new Worker("/assets/dist/nollamasworker.js");
|
||||
workers.push(worker);
|
||||
|
||||
// On any error terminate.
|
||||
worker.onerror = (ev) => {
|
||||
console.error("worker error:", ev); // eslint-disable-line no-console
|
||||
terminateAll();
|
||||
};
|
||||
|
||||
// Post worker message, where each
|
||||
// worker will compute nonces in range:
|
||||
// $threadNumber + $totalThreads * n
|
||||
worker.postMessage({
|
||||
challenge: challenge,
|
||||
threads: threads,
|
||||
thread: i,
|
||||
seed: seed,
|
||||
});
|
||||
|
||||
// Set main on-success function.
|
||||
worker.onmessage = function (e) {
|
||||
if (e.data.done) {
|
||||
// Stop workers.
|
||||
terminateAll();
|
||||
|
||||
// Log which thread found the solution.
|
||||
console.log("solution from:", e.data.thread); // eslint-disable-line no-console
|
||||
|
||||
// Get total computation duration.
|
||||
const endTime = performance.now();
|
||||
const duration = endTime - startTime;
|
||||
|
||||
// Remove spinner and replace it with a tick
|
||||
// and info about how long it took to solve.
|
||||
nollamas.removeChild(spinner);
|
||||
const solutionWrapper = document.createElement("div");
|
||||
solutionWrapper.className = "nollamas-status";
|
||||
|
||||
const tick = document.createElement("i");
|
||||
tick.className = "fa fa-check fa-2x fa-fw";
|
||||
tick.setAttribute("title","Solved!");
|
||||
tick.setAttribute("aria-label", "Solved!");
|
||||
solutionWrapper.appendChild(tick);
|
||||
|
||||
const took = document.createElement("span");
|
||||
const solvedText = `Solved after ${e.data.nonce} iterations by worker ${e.data.thread} of ${threads}, in ${duration.toString()}ms!`;
|
||||
took.appendChild(document.createTextNode(solvedText));
|
||||
solutionWrapper.appendChild(took);
|
||||
|
||||
nollamas.appendChild(solutionWrapper);
|
||||
|
||||
// Wait for 500ms before redirecting, to give
|
||||
// visitors a shot at reading the message, but
|
||||
// not so long that they have to wait unduly.
|
||||
setTimeout(() => {
|
||||
let url = new URL(window.location.href);
|
||||
url.searchParams.set("nollamas_solution", e.data.nonce);
|
||||
window.location.replace(url.toString());
|
||||
}, 500);
|
||||
}
|
||||
};
|
||||
}
|
||||
});
|
||||
|
|
@ -1,56 +0,0 @@
|
|||
/*
|
||||
GoToSocial
|
||||
Copyright (C) GoToSocial Authors admin@gotosocial.org
|
||||
SPDX-License-Identifier: AGPL-3.0-or-later
|
||||
|
||||
This program is free software: you can redistribute it and/or modify
|
||||
it under the terms of the GNU Affero General Public License as published by
|
||||
the Free Software Foundation, either version 3 of the License, or
|
||||
(at your option) any later version.
|
||||
|
||||
This program is distributed in the hope that it will be useful,
|
||||
but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
GNU Affero General Public License for more details.
|
||||
|
||||
You should have received a copy of the GNU Affero General Public License
|
||||
along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
|
||||
import sha256 from "./sha256";
|
||||
|
||||
let compute = async function(seedStr, challengeStr, start, iter) {
|
||||
const textEncoder = new TextEncoder();
|
||||
|
||||
let nonce = start;
|
||||
while (true) { // eslint-disable-line no-constant-condition
|
||||
|
||||
// Create possible solution string from challenge string + nonce.
|
||||
const solution = textEncoder.encode(seedStr + nonce.toString());
|
||||
|
||||
// Generate hex encoded SHA256 hashsum of solution.
|
||||
const hashArray = Array.from(sha256(solution));
|
||||
const hashAsHex = hashArray.map(b => b.toString(16).padStart(2, "0")).join("");
|
||||
|
||||
// Check whether hex encoded
|
||||
// solution matches challenge.
|
||||
if (hashAsHex == challengeStr) {
|
||||
return nonce;
|
||||
}
|
||||
|
||||
// Iter nonce.
|
||||
nonce += iter;
|
||||
}
|
||||
};
|
||||
|
||||
onmessage = async function(e) {
|
||||
const thread = e.data.thread;
|
||||
const threads = e.data.threads;
|
||||
console.log("worker started:", thread); // eslint-disable-line no-console
|
||||
|
||||
// Compute nonce value that produces 'challenge', for our allotted thread.
|
||||
let nonce = await compute(e.data.seed, e.data.challenge, thread, threads);
|
||||
|
||||
// Post the solution nonce back to caller with thread no.
|
||||
postMessage({ nonce: nonce, done: true, thread: thread });
|
||||
};
|
||||
|
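For context on what this (now removed) worker computed: each thread searched for a nonce such that the hex-encoded SHA-256 of seed + nonce equals the served challenge, and the winning nonce was reported back to the page, which reloaded with it in the `nollamas_solution` query parameter. A rough sketch of the matching server-side check, assuming the same seed and challenge strings (this is illustrative only, not the actual GoToSocial verification code):

```go
package nollamas

import (
	"crypto/sha256"
	"encoding/hex"
)

// verifySolution reports whether hex(SHA-256(seed + nonce)) matches the
// challenge that was embedded in the holding page's data attributes.
func verifySolution(seed, challenge, nonce string) bool {
	sum := sha256.Sum256([]byte(seed + nonce))
	return hex.EncodeToString(sum[:]) == challenge
}
```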
|
@ -1,113 +0,0 @@
|
|||
/*
|
||||
Copyright 2022 Andrea Griffini
|
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining
|
||||
a copy of this software and associated documentation files (the
|
||||
"Software"), to deal in the Software without restriction, including
|
||||
without limitation the rights to use, copy, modify, merge, publish,
|
||||
distribute, sublicense, and/or sell copies of the Software, and to
|
||||
permit persons to whom the Software is furnished to do so, subject to
|
||||
the following conditions:
|
||||
|
||||
The above copyright notice and this permission notice shall be
|
||||
included in all copies or substantial portions of the Software.
|
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
|
||||
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
|
||||
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
|
||||
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
|
||||
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
|
||||
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
|
||||
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
||||
|
||||
*/
|
||||
|
||||
// sha256(data) returns the digest of an input piece of data.
|
||||
// sha256(none) returns an object you can call .add(data), and .digest() at the end.
|
||||
// the returned digest is a 32-byte Uint8Array instance with an added .hex() function.
|
||||
// input should be string (that will be encoded as UTF-8) or an array-like with values 0..255.
|
||||
// source: https://github.com/6502/sha256
|
||||
export default function sha256(data) {
|
||||
let h0 = 0x6a09e667, h1 = 0xbb67ae85, h2 = 0x3c6ef372, h3 = 0xa54ff53a,
|
||||
h4 = 0x510e527f, h5 = 0x9b05688c, h6 = 0x1f83d9ab, h7 = 0x5be0cd19,
|
||||
tsz = 0, bp = 0;
|
||||
const k = [0x428a2f98, 0x71374491, 0xb5c0fbcf, 0xe9b5dba5, 0x3956c25b, 0x59f111f1, 0x923f82a4, 0xab1c5ed5,
|
||||
0xd807aa98, 0x12835b01, 0x243185be, 0x550c7dc3, 0x72be5d74, 0x80deb1fe, 0x9bdc06a7, 0xc19bf174,
|
||||
0xe49b69c1, 0xefbe4786, 0x0fc19dc6, 0x240ca1cc, 0x2de92c6f, 0x4a7484aa, 0x5cb0a9dc, 0x76f988da,
|
||||
0x983e5152, 0xa831c66d, 0xb00327c8, 0xbf597fc7, 0xc6e00bf3, 0xd5a79147, 0x06ca6351, 0x14292967,
|
||||
0x27b70a85, 0x2e1b2138, 0x4d2c6dfc, 0x53380d13, 0x650a7354, 0x766a0abb, 0x81c2c92e, 0x92722c85,
|
||||
0xa2bfe8a1, 0xa81a664b, 0xc24b8b70, 0xc76c51a3, 0xd192e819, 0xd6990624, 0xf40e3585, 0x106aa070,
|
||||
0x19a4c116, 0x1e376c08, 0x2748774c, 0x34b0bcb5, 0x391c0cb3, 0x4ed8aa4a, 0x5b9cca4f, 0x682e6ff3,
|
||||
0x748f82ee, 0x78a5636f, 0x84c87814, 0x8cc70208, 0x90befffa, 0xa4506ceb, 0xbef9a3f7, 0xc67178f2],
|
||||
rrot = (x, n) => (x >>> n) | (x << (32-n)),
|
||||
w = new Uint32Array(64),
|
||||
buf = new Uint8Array(64),
|
||||
process = () => {
|
||||
for (let j=0,r=0; j<16; j++,r+=4) {
|
||||
w[j] = (buf[r]<<24) | (buf[r+1]<<16) | (buf[r+2]<<8) | buf[r+3];
|
||||
}
|
||||
for (let j=16; j<64; j++) {
|
||||
let s0 = rrot(w[j-15], 7) ^ rrot(w[j-15], 18) ^ (w[j-15] >>> 3);
|
||||
let s1 = rrot(w[j-2], 17) ^ rrot(w[j-2], 19) ^ (w[j-2] >>> 10);
|
||||
w[j] = (w[j-16] + s0 + w[j-7] + s1) | 0;
|
||||
}
|
||||
let a = h0, b = h1, c = h2, d = h3, e = h4, f = h5, g = h6, h = h7;
|
||||
for (let j=0; j<64; j++) {
|
||||
let S1 = rrot(e, 6) ^ rrot(e, 11) ^ rrot(e, 25),
|
||||
ch = (e & f) ^ ((~e) & g),
|
||||
t1 = (h + S1 + ch + k[j] + w[j]) | 0,
|
||||
S0 = rrot(a, 2) ^ rrot(a, 13) ^ rrot(a, 22),
|
||||
maj = (a & b) ^ (a & c) ^ (b & c),
|
||||
t2 = (S0 + maj) | 0;
|
||||
h = g; g = f; f = e; e = (d + t1)|0; d = c; c = b; b = a; a = (t1 + t2)|0;
|
||||
}
|
||||
h0 = (h0 + a)|0; h1 = (h1 + b)|0; h2 = (h2 + c)|0; h3 = (h3 + d)|0;
|
||||
h4 = (h4 + e)|0; h5 = (h5 + f)|0; h6 = (h6 + g)|0; h7 = (h7 + h)|0;
|
||||
bp = 0;
|
||||
},
|
||||
add = data => {
|
||||
if (typeof data === "string") {
|
||||
data = typeof TextEncoder === "undefined" ? Buffer.from(data) : (new TextEncoder).encode(data);
|
||||
}
|
||||
for (let i=0; i<data.length; i++) {
|
||||
buf[bp++] = data[i];
|
||||
if (bp === 64) {process();}
|
||||
}
|
||||
tsz += data.length;
|
||||
},
|
||||
digest = () => {
|
||||
buf[bp++] = 0x80; if (bp == 64) {process();}
|
||||
if (bp + 8 > 64) {
|
||||
while (bp < 64) {buf[bp++] = 0x00;}
|
||||
process();
|
||||
}
|
||||
while (bp < 58) {buf[bp++] = 0x00;}
|
||||
// Max number of bytes is 35,184,372,088,831
|
||||
let L = tsz * 8;
|
||||
buf[bp++] = (L / 1099511627776.) & 255;
|
||||
buf[bp++] = (L / 4294967296.) & 255;
|
||||
buf[bp++] = L >>> 24;
|
||||
buf[bp++] = (L >>> 16) & 255;
|
||||
buf[bp++] = (L >>> 8) & 255;
|
||||
buf[bp++] = L & 255;
|
||||
process();
|
||||
let reply = new Uint8Array(32);
|
||||
reply[ 0] = h0 >>> 24; reply[ 1] = (h0 >>> 16) & 255; reply[ 2] = (h0 >>> 8) & 255; reply[ 3] = h0 & 255;
|
||||
reply[ 4] = h1 >>> 24; reply[ 5] = (h1 >>> 16) & 255; reply[ 6] = (h1 >>> 8) & 255; reply[ 7] = h1 & 255;
|
||||
reply[ 8] = h2 >>> 24; reply[ 9] = (h2 >>> 16) & 255; reply[10] = (h2 >>> 8) & 255; reply[11] = h2 & 255;
|
||||
reply[12] = h3 >>> 24; reply[13] = (h3 >>> 16) & 255; reply[14] = (h3 >>> 8) & 255; reply[15] = h3 & 255;
|
||||
reply[16] = h4 >>> 24; reply[17] = (h4 >>> 16) & 255; reply[18] = (h4 >>> 8) & 255; reply[19] = h4 & 255;
|
||||
reply[20] = h5 >>> 24; reply[21] = (h5 >>> 16) & 255; reply[22] = (h5 >>> 8) & 255; reply[23] = h5 & 255;
|
||||
reply[24] = h6 >>> 24; reply[25] = (h6 >>> 16) & 255; reply[26] = (h6 >>> 8) & 255; reply[27] = h6 & 255;
|
||||
reply[28] = h7 >>> 24; reply[29] = (h7 >>> 16) & 255; reply[30] = (h7 >>> 8) & 255; reply[31] = h7 & 255;
|
||||
reply.hex = () => {
|
||||
let res = "";
|
||||
reply.forEach(x => res += ("0" + x.toString(16)).slice(-2)); // eslint-disable-line no-return-assign
|
||||
return res;
|
||||
};
|
||||
return reply;
|
||||
};
|
||||
if (data === undefined) {return {add, digest};}
|
||||
add(data);
|
||||
return digest();
|
||||
}
|
||||
|
|
@ -29,6 +29,8 @@ import MutationButton from "../../../components/form/mutation-button";
|
|||
import { useAliasAccountMutation, useMoveAccountMutation } from "../../../lib/query/user";
|
||||
import { FormContext, useWithFormContext } from "../../../lib/form/context";
|
||||
import { store } from "../../../redux/store";
|
||||
import { useInstanceV1Query } from "../../../lib/query/gts-api";
|
||||
import Loading from "../../../components/loading";
|
||||
|
||||
export default function Migration() {
|
||||
return (
|
||||
|
|
@ -142,9 +144,7 @@ function AlsoKnownAsURI({ index, data }) {
|
|||
}
|
||||
|
||||
function MoveForm({ data: profile }) {
|
||||
let urlStr = store.getState().login.instanceUrl ?? "";
|
||||
let url = new URL(urlStr);
|
||||
|
||||
const instanceURL = store.getState().login.instanceUrl ?? "";
|
||||
const form = {
|
||||
movedToURI: useTextInput("moved_to_uri", {
|
||||
source: profile,
|
||||
|
|
@ -153,9 +153,22 @@ function MoveForm({ data: profile }) {
|
|||
password: useTextInput("password"),
|
||||
};
|
||||
|
||||
const [submitForm, result] = useFormSubmit(form, useMoveAccountMutation(), {
|
||||
changedOnly: false,
|
||||
});
|
||||
const [submitForm, result] = useFormSubmit(
|
||||
form,
|
||||
useMoveAccountMutation(),
|
||||
{ changedOnly: false },
|
||||
);
|
||||
|
||||
// Load instance data to know the correct
|
||||
// account domain to provide in form below.
|
||||
const {
|
||||
data: instance,
|
||||
isFetching: isFetchingInstance,
|
||||
isLoading: isLoadingInstance
|
||||
} = useInstanceV1Query();
|
||||
if (isFetchingInstance || isLoadingInstance) {
|
||||
return <Loading />;
|
||||
}
|
||||
|
||||
return (
|
||||
<form className="user-migration-move" onSubmit={submitForm}>
|
||||
|
|
@ -170,11 +183,11 @@ function MoveForm({ data: profile }) {
|
|||
<dl className="migration-details">
|
||||
<div>
|
||||
<dt>Account handle/username:</dt>
|
||||
<dd>@{profile.acct}@{url.host}</dd>
|
||||
<dd>@{profile.acct}@{instance?.account_domain}</dd>
|
||||
</div>
|
||||
<div>
|
||||
<dt>Account URI:</dt>
|
||||
<dd>{urlStr}/users/{profile.username}</dd>
|
||||
<dd>{instanceURL}/users/{profile.username}</dd>
|
||||
</div>
|
||||
</dl>
|
||||
<br/>
|
||||
|
|
|
|||
|
|
@ -1,43 +0,0 @@
|
|||
{{- /*
|
||||
// GoToSocial
|
||||
// Copyright (C) GoToSocial Authors admin@gotosocial.org
|
||||
// SPDX-License-Identifier: AGPL-3.0-or-later
|
||||
//
|
||||
// This program is free software: you can redistribute it and/or modify
|
||||
// it under the terms of the GNU Affero General Public License as published by
|
||||
// the Free Software Foundation, either version 3 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// This program is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU Affero General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU Affero General Public License
|
||||
// along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/ -}}
|
||||
|
||||
{{- with . }}
|
||||
<main>
|
||||
<section class="nollamas"
|
||||
data-nollamas-seed="{{ .seed }}"
|
||||
data-nollamas-challenge="{{ .challenge }}"
|
||||
>
|
||||
<h1>Checking you're not a creepy crawler...</h1>
|
||||
<noscript>
|
||||
<p>
|
||||
The page you're visiting is guarded from "ai" scrapers
|
||||
and other crawlers by a proof-of-work challenge.
|
||||
</p>
|
||||
<p>
|
||||
Unfortunately, this means that Javascript is required.
|
||||
To see the page, <b>please enable Javascript and try again</b>.
|
||||
</p>
|
||||
<aside>
|
||||
Once your browser has completed the challenge, you can turn
|
||||
Javascript off again if you like. Revalidation is done once per hour.
|
||||
</aside>
|
||||
</noscript>
|
||||
</section>
|
||||
</main>
|
||||
{{- end }}
|
||||