[chore] Update gin to v1.9.0 (#1553)

Daenney 2023-02-25 13:12:40 +01:00 committed by GitHub
commit ecdc8379fa
GPG key ID: 4AEE18F83AFDEB23
347 changed files with 166814 additions and 3671 deletions


@@ -6,13 +6,13 @@ codec/encoding library for binc, msgpack, cbor, json.
Supported Serialization formats are:
- msgpack: https://github.com/msgpack/msgpack
- binc: http://github.com/ugorji/binc
- cbor: http://cbor.io http://tools.ietf.org/html/rfc7049
- json: http://json.org http://tools.ietf.org/html/rfc7159
- simple:
This package will carefully use 'package unsafe' for performance reasons
in specific places. You can build without unsafe use by passing the safe or
appengine tag i.e. 'go install -tags=codec.safe ...'.
This library works with both the standard `gc` and the `gccgo` compilers.
@@ -20,67 +20,70 @@ This library works with both the standard `gc` and the `gccgo` compilers.
For detailed usage information, read the primer at
http://ugorji.net/blog/go-codec-primer .
The idiomatic Go support is as seen in other encoding packages in the standard
library (i.e. json, xml, gob, etc).
Rich Feature Set includes:
- Simple but extremely powerful and feature-rich API
- Support for go 1.4 and above, while selectively using newer APIs for later
releases
- Excellent code coverage ( > 90% )
- Very High Performance. Our extensive benchmarks show us outperforming Gob,
Json, Bson, etc by 2-4X.
- Carefully selected use of 'unsafe' for targeted performance gains.
- 100% safe mode supported, where 'unsafe' is not used at all.
- Lock-free (sans mutex) concurrency for scaling to 100's of cores
- In-place updates during decode, with option to zero value in maps and slices
prior to decode
- Coerce types where appropriate e.g. decode an int in the stream into a
float, decode numbers from formatted strings, etc
- Corner Cases: Overflows, nil maps/slices, nil values in streams are handled
correctly
- Standard field renaming via tags
- Support for omitting empty fields during encoding
- Encoding from any value and decoding into pointer to any value (struct,
slice, map, primitives, pointers, interface{}, etc)
- Extensions to support efficient encoding/decoding of any named types
- Support encoding.(Binary|Text)(M|Unm)arshaler interfaces
- Support using existence of `IsZero() bool` to determine if a value is a zero
value. Analogous to time.Time.IsZero() bool.
- Decoding without a schema (into an interface{}). Includes Options to
configure what specific map or slice type to use when decoding an encoded
list or map into a nil interface{}
- Mapping a non-interface type to an interface, so we can decode appropriately
into any interface type with a correctly configured non-interface value.
- Encode a struct as an array, and decode struct from an array in the data
stream
- Option to encode struct keys as numbers (instead of strings) (to support
structured streams with fields encoded as numeric codes)
- Comprehensive support for anonymous fields
- Fast (no-reflection) encoding/decoding of common maps and slices
- Code-generation for faster performance, supported in go 1.6+
- Support binary (e.g. messagepack, cbor) and text (e.g. json) formats
- Support indefinite-length formats to enable true streaming (for formats
which support it e.g. json, cbor)
- Support canonical encoding, where a value is ALWAYS encoded as the same
sequence of bytes. This mostly applies to maps, where iteration order is
non-deterministic (see the sketch after this list).
- NIL in data stream decoded as zero value
- Never silently skip data when decoding. User decides whether to return an
error or silently skip data when keys or indexes in the data stream do not
map to fields in the struct.
- Detect and error when encoding a cyclic reference (instead of stack overflow
shutdown)
- Encode/Decode from/to chan types (for iterative streaming support)
- Drop-in replacement for encoding/json. `json:` key in struct tag supported.
- Provides a RPC Server and Client Codec for net/rpc communication protocol.
- Handle unique idiosyncrasies of codecs, e.g. for messagepack: configure how
ambiguities in handling raw bytes are resolved, and provide an rpc
server/client codec to support the msgpack-rpc protocol defined at:
https://github.com/msgpack-rpc/msgpack-rpc/blob/master/spec.md
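For instance, canonical output is enabled via the Canonical option on a handle. A minimal sketch (the MyRecord type and its tags are illustrative, not part of the library):
```go
package main

import (
	"fmt"

	"github.com/ugorji/go/codec"
)

// MyRecord is an illustrative type; the codec tags rename fields and omit
// empty values, as described in the feature list above.
type MyRecord struct {
	ID   uint64            `codec:"id"`
	Tags map[string]string `codec:"tags,omitempty"`
}

func main() {
	var h codec.JsonHandle
	h.Canonical = true // encode maps with sorted keys so output is deterministic

	var out []byte
	enc := codec.NewEncoderBytes(&out, &h)
	if err := enc.Encode(MyRecord{ID: 7, Tags: map[string]string{"b": "2", "a": "1"}}); err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", out) // map keys appear in sorted order, e.g. {"id":7,"tags":{"a":"1","b":"2"}}
}
```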
## Extension Support
Users can register a function to handle the encoding or decoding of their custom
types.
There are no restrictions on what the custom type can be. Some examples:
@@ -92,43 +95,44 @@ There are no restrictions on what the custom type can be. Some examples:
type GifImage struct { ... }
```
As an illustration, MyStructWithUnexportedFields would normally be encoded as
an empty map because it has no exported fields, while UUID would be encoded as a
string. However, with extension support, you can encode any of these however you
like.
There is also seamless support provided for registering an extension (with a
tag) but letting the encoding mechanism default to the standard way.
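A minimal sketch of such a registration, assuming the SetInterfaceExt/InterfaceExt registration API (see the package godoc); the YearMonth type, its string form, and the tag value 78 are illustrative:
```go
package main

import (
	"fmt"
	"reflect"

	"github.com/ugorji/go/codec"
)

// YearMonth is an illustrative custom type; yearMonthExt encodes it as a
// "YYYY-MM" string and rebuilds it on decode.
type YearMonth struct{ Year, Month int }

type yearMonthExt struct{}

// ConvertExt converts the custom value into something the codec already
// knows how to encode (a plain string here).
func (yearMonthExt) ConvertExt(v interface{}) interface{} {
	switch ym := v.(type) {
	case YearMonth:
		return fmt.Sprintf("%04d-%02d", ym.Year, ym.Month)
	case *YearMonth:
		return fmt.Sprintf("%04d-%02d", ym.Year, ym.Month)
	default:
		panic(fmt.Sprintf("unsupported type %T", v))
	}
}

// UpdateExt rebuilds the custom value from the decoded intermediate string.
func (yearMonthExt) UpdateExt(dest interface{}, v interface{}) {
	ym := dest.(*YearMonth)
	if _, err := fmt.Sscanf(v.(string), "%d-%d", &ym.Year, &ym.Month); err != nil {
		panic(err)
	}
}

func main() {
	var h codec.JsonHandle
	// The tag (78) is arbitrary here; it matters mostly for binary formats.
	if err := h.SetInterfaceExt(reflect.TypeOf(YearMonth{}), 78, yearMonthExt{}); err != nil {
		panic(err)
	}
	var out []byte
	codec.NewEncoderBytes(&out, &h).MustEncode(YearMonth{2023, 2}) // encodes as "2023-02"
	var ym YearMonth
	codec.NewDecoderBytes(out, &h).MustDecode(&ym) // ym == YearMonth{2023, 2}
	fmt.Printf("%s %v\n", out, ym)
}
```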
## Custom Encoding and Decoding
This package maintains symmetry in the encoding and decoding halves. We determine
how to encode or decode by walking this decision tree:
- is there an extension registered for the type?
- is type a codec.Selfer?
- is format binary, and is type an encoding.BinaryMarshaler and
BinaryUnmarshaler?
- is format specifically json, and is type an encoding/json.Marshaler and
Unmarshaler?
- is format text-based, and is type an encoding.TextMarshaler and
TextUnmarshaler?
- else we use a pair of functions based on the "kind" of the type e.g. map,
slice, int64, etc
This symmetry is important to reduce chances of issues happening because the
encoding and decoding sides are out of sync e.g. decoded via very specific
encoding.TextUnmarshaler but encoded via kind-specific generalized mode.
Consequently, if a type only defines one-half of the symmetry (e.g.
it implements UnmarshalJSON() but not MarshalJSON() ), then that type doesn't
satisfy the check and we will continue walking down the decision tree.
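As an example of the codec.Selfer branch, a minimal sketch that defines both halves (the Celsius type is illustrative; the method names follow the Selfer interface in the package documentation):
```go
package main

import "github.com/ugorji/go/codec"

// Celsius encodes itself as a plain float64, keeping encode and decode symmetric.
type Celsius float64

var _ codec.Selfer = (*Celsius)(nil) // compile-time check that both halves exist

func (c *Celsius) CodecEncodeSelf(e *codec.Encoder) {
	e.MustEncode(float64(*c))
}

func (c *Celsius) CodecDecodeSelf(d *codec.Decoder) {
	var f float64
	d.MustDecode(&f)
	*c = Celsius(f)
}

func main() {
	var h codec.CborHandle
	var out []byte
	c := Celsius(21.5)
	codec.NewEncoderBytes(&out, &h).MustEncode(&c)
	var back Celsius
	codec.NewDecoderBytes(out, &h).MustDecode(&back) // back == 21.5
	_ = back
}
```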
## RPC
RPC Client and Server Codecs are implemented, so the codecs can be used with
the standard net/rpc package.
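A minimal sketch of that wiring (the connection handling is illustrative; the fuller flow is in the sample usage model below):
```go
package main

import (
	"net"
	"net/rpc"

	"github.com/ugorji/go/codec"
)

// serveConn serves net/rpc requests on one connection using a codec ServerCodec.
func serveConn(conn net.Conn, h codec.Handle) {
	rpc.ServeCodec(codec.GoRpc.ServerCodec(conn, h))
}

// dial returns an rpc.Client that speaks the same codec over TCP.
func dial(addr string, h codec.Handle) (*rpc.Client, error) {
	conn, err := net.Dial("tcp", addr)
	if err != nil {
		return nil, err
	}
	return rpc.NewClientWithCodec(codec.GoRpc.ClientCodec(conn, h)), nil
}

func main() {} // a real program would register services and call dial/serveConn
```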
## Usage
The Handle is SAFE for concurrent READ, but NOT SAFE for concurrent
modification.
@@ -137,13 +141,13 @@ The Encoder and Decoder are NOT safe for concurrent use.
Consequently, the usage model is basically:
- Create and initialize the Handle before any use. Once created, DO NOT modify
it.
- Multiple Encoders or Decoders can now use the Handle concurrently. They only
read information off the Handle (never write).
- However, each Encoder or Decoder MUST not be used concurrently
- To re-use an Encoder/Decoder, call Reset(...) on it first. This allows you
to use state maintained on the Encoder/Decoder (see the sketch after the sample below).
Sample usage model:
@@ -194,8 +198,7 @@ Sample usage model:
client := rpc.NewClientWithCodec(rpcCodec)
```
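A minimal sketch of that model for plain encoding/decoding, complementing the sample above (names like msgpackHandle, roundTrip and encodeMany are illustrative):
```go
package main

import "github.com/ugorji/go/codec"

// One long-lived handle, created and configured once, then only read.
var msgpackHandle codec.MsgpackHandle

// roundTrip encodes in and decodes the bytes back into out.
func roundTrip(in, out interface{}) error {
	var buf []byte
	if err := codec.NewEncoderBytes(&buf, &msgpackHandle).Encode(in); err != nil {
		return err
	}
	return codec.NewDecoderBytes(buf, &msgpackHandle).Decode(out)
}

// encodeMany re-uses one Encoder across values by resetting its output buffer.
func encodeMany(values []interface{}) ([][]byte, error) {
	var scratch []byte
	enc := codec.NewEncoderBytes(&scratch, &msgpackHandle)
	outs := make([][]byte, 0, len(values))
	for _, v := range values {
		scratch = scratch[:0]
		enc.ResetBytes(&scratch)
		if err := enc.Encode(v); err != nil {
			return nil, err
		}
		outs = append(outs, append([]byte(nil), scratch...))
	}
	return outs, nil
}

func main() {
	var m map[string]int
	if err := roundTrip(map[string]int{"a": 1}, &m); err != nil {
		panic(err)
	}
}
```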
## Running Tests
To run tests, use the following:
@@ -216,7 +219,7 @@ You can run the tag 'codec.safe' to run tests or build in safe mode. e.g.
go test -tags "alltests codec.safe" -run Suite
```
## Running Benchmarks
```
cd bench
```
@@ -225,11 +228,9 @@ You can run the tag 'codec.safe' to run tests or build in safe mode. e.g.
Please see http://github.com/ugorji/go-codec-bench .
## Caveats
Struct fields matching the following are ignored during encoding and decoding (illustrated in the sketch at the end of this section):
- struct tag value set to -
- func, complex numbers, unsafe pointers
@@ -239,14 +240,14 @@ decoding
Every other field in a struct will be encoded/decoded.
Embedded fields are encoded as if they exist in the top-level struct, with some
caveats. See Encode documentation.
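A quick illustration of those rules (the Account type and its fields are made up for this example):
```go
package main

// Account shows which fields the caveats above exclude, and how tags rename
// or omit the fields that are encoded.
type Account struct {
	Name     string  `codec:"name"`            // encoded, renamed via tag
	Password string  `codec:"-"`               // tag value '-': ignored
	Notify   func()                            // func type: ignored
	Spectrum complex128                        // complex number: ignored
	Owner    *string `codec:"owner,omitempty"` // encoded, omitted when nil
	internal int                               // unexported, so never encoded
}

func main() { _ = Account{} }
```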
## Exported Package API
```go
const CborStreamBytes byte = 0x5f ...
const GenVersion = 26
var SelfExt = &extFailWrapper{}
var GoRpc goRpc
var MsgpackSpecRpc msgpackSpecRpc