Mirror of https://github.com/superseriousbusiness/gotosocial.git
(synced 2025-12-07 10:18:08 -06:00)
[feature] support processing of (many) more media types (#3090)
* initial work replacing our media decoding / encoding pipeline with ffprobe + ffmpeg
* specify the video codec to use when generating static image from emoji
* update go-storage library (fixes incompatibility after updating go-iotools)
* maintain image aspect ratio when generating a thumbnail for it
* update readme to show go-ffmpreg
* fix a bunch of media tests, move filesize checking to callers of media manager for more flexibility
* remove extra debug from error message
* fix up incorrect function signatures
* update PutFile to just use regular file copy, as chances are the file is on a separate partition
* fix remaining tests, remove some unneeded tests now that we're working with ffmpeg/ffprobe
* update more tests, add more code comments
* add utilities to generate processed emoji / media outputs
* fix remaining tests
* add test for opus media file, add license header to utility cmds
* limit the number of concurrently available ffmpeg / ffprobe instances
* reduce number of instances
* further reduce number of instances
* fix envparsing test with configuration variables
* update docs and configuration with new media-{local,remote}-max-size variables
This commit is contained in:
parent 5bc567196b
commit cde2fb6244

376 changed files with 8026 additions and 54091 deletions
@@ -1,21 +1,23 @@
-                    GNU AFFERO GENERAL PUBLIC LICENSE
-                       Version 3, 19 November 2007
+                    GNU GENERAL PUBLIC LICENSE
+                       Version 3, 29 June 2007
 
- Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
+ Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
  Everyone is permitted to copy and distribute verbatim copies
  of this license document, but changing it is not allowed.
 
                             Preamble
 
-  The GNU Affero General Public License is a free, copyleft license for
-software and other kinds of works, specifically designed to ensure
-cooperation with the community in the case of network server software.
+  The GNU General Public License is a free, copyleft license for
+software and other kinds of works.
 
   The licenses for most software and other practical works are designed
 to take away your freedom to share and change the works.  By contrast,
-our General Public Licenses are intended to guarantee your freedom to
+the GNU General Public License is intended to guarantee your freedom to
 share and change all versions of a program--to make sure it remains free
-software for all its users.
+software for all its users.  We, the Free Software Foundation, use the
+GNU General Public License for most of our software; it applies also to
+any other work released this way by its authors.  You can apply it to
+your programs, too.
 
   When we speak of free software, we are referring to freedom, not
 price.  Our General Public Licenses are designed to make sure that you
@@ -24,34 +26,44 @@ them if you wish), that you receive source code or can get it if you
 want it, that you can change the software or use pieces of it in new
 free programs, and that you know you can do these things.
 
-  Developers that use our General Public Licenses protect your rights
-with two steps: (1) assert copyright on the software, and (2) offer
-you this License which gives you legal permission to copy, distribute
-and/or modify the software.
+  To protect your rights, we need to prevent others from denying you
+these rights or asking you to surrender the rights.  Therefore, you have
+certain responsibilities if you distribute copies of the software, or if
+you modify it: responsibilities to respect the freedom of others.
 
-  A secondary benefit of defending all users' freedom is that
-improvements made in alternate versions of the program, if they
-receive widespread use, become available for other developers to
-incorporate.  Many developers of free software are heartened and
-encouraged by the resulting cooperation.  However, in the case of
-software used on network servers, this result may fail to come about.
-The GNU General Public License permits making a modified version and
-letting the public access it on a server without ever releasing its
-source code to the public.
+  For example, if you distribute copies of such a program, whether
+gratis or for a fee, you must pass on to the recipients the same
+freedoms that you received.  You must make sure that they, too, receive
+or can get the source code.  And you must show them these terms so they
+know their rights.
 
-  The GNU Affero General Public License is designed specifically to
-ensure that, in such cases, the modified source code becomes available
-to the community.  It requires the operator of a network server to
-provide the source code of the modified version running there to the
-users of that server.  Therefore, public use of a modified version, on
-a publicly accessible server, gives the public access to the source
-code of the modified version.
+  Developers that use the GNU GPL protect your rights with two steps:
+(1) assert copyright on the software, and (2) offer you this License
+giving you legal permission to copy, distribute and/or modify it.
 
-  An older license, called the Affero General Public License and
-published by Affero, was designed to accomplish similar goals.  This is
-a different license, not a version of the Affero GPL, but Affero has
-released a new version of the Affero GPL which permits relicensing under
-this license.
+  For the developers' and authors' protection, the GPL clearly explains
+that there is no warranty for this free software.  For both users' and
+authors' sake, the GPL requires that modified versions be marked as
+changed, so that their problems will not be attributed erroneously to
+authors of previous versions.
+
+  Some devices are designed to deny users access to install or run
+modified versions of the software inside them, although the manufacturer
+can do so.  This is fundamentally incompatible with the aim of
+protecting users' freedom to change the software.  The systematic
+pattern of such abuse occurs in the area of products for individuals to
+use, which is precisely where it is most unacceptable.  Therefore, we
+have designed this version of the GPL to prohibit the practice for those
+products.  If such problems arise substantially in other domains, we
+stand ready to extend this provision to those domains in future versions
+of the GPL, as needed to protect the freedom of users.
+
+  Finally, every program is threatened constantly by software patents.
+States should not allow patents to restrict development and use of
+software on general-purpose computers, but in those that do, we wish to
+avoid the special danger that patents applied to a free program could
+make it effectively proprietary.  To prevent this, the GPL assures that
+patents cannot be used to render the program non-free.
 
   The precise terms and conditions for copying, distribution and
 modification follow.
@@ -60,7 +72,7 @@ modification follow.
 
   0. Definitions.
 
-  "This License" refers to version 3 of the GNU Affero General Public License.
+  "This License" refers to version 3 of the GNU General Public License.
 
   "Copyright" also means copyright-like laws that apply to other kinds of
 works, such as semiconductor masks.
@@ -537,45 +549,35 @@ to collect a royalty for further conveying from those to whom you convey
 the Program, the only way you could satisfy both those terms and this
 License would be to refrain entirely from conveying the Program.
 
-  13. Remote Network Interaction; Use with the GNU General Public License.
-
-  Notwithstanding any other provision of this License, if you modify the
-Program, your modified version must prominently offer all users
-interacting with it remotely through a computer network (if your version
-supports such interaction) an opportunity to receive the Corresponding
-Source of your version by providing access to the Corresponding Source
-from a network server at no charge, through some standard or customary
-means of facilitating copying of software.  This Corresponding Source
-shall include the Corresponding Source for any work covered by version 3
-of the GNU General Public License that is incorporated pursuant to the
-following paragraph.
+  13. Use with the GNU Affero General Public License.
 
   Notwithstanding any other provision of this License, you have
 permission to link or combine any covered work with a work licensed
-under version 3 of the GNU General Public License into a single
+under version 3 of the GNU Affero General Public License into a single
 combined work, and to convey the resulting work.  The terms of this
 License will continue to apply to the part which is the covered work,
-but the work with which it is combined will remain governed by version
-3 of the GNU General Public License.
+but the special requirements of the GNU Affero General Public License,
+section 13, concerning interaction through a network will apply to the
+combination as such.
 
   14. Revised Versions of this License.
 
   The Free Software Foundation may publish revised and/or new versions of
-the GNU Affero General Public License from time to time.  Such new versions
-will be similar in spirit to the present version, but may differ in detail to
+the GNU General Public License from time to time.  Such new versions will
+be similar in spirit to the present version, but may differ in detail to
 address new problems or concerns.
 
   Each version is given a distinguishing version number.  If the
-Program specifies that a certain numbered version of the GNU Affero General
+Program specifies that a certain numbered version of the GNU General
 Public License "or any later version" applies to it, you have the
 option of following the terms and conditions either of that numbered
 version or of any later version published by the Free Software
 Foundation.  If the Program does not specify a version number of the
-GNU Affero General Public License, you may choose any version ever published
+GNU General Public License, you may choose any version ever published
 by the Free Software Foundation.
 
   If the Program specifies that a proxy can decide which future
-versions of the GNU Affero General Public License can be used, that proxy's
+versions of the GNU General Public License can be used, that proxy's
 public statement of acceptance of a version permanently authorizes you
 to choose that version for the Program.
 
@@ -633,29 +635,40 @@ the "copyright" line and a pointer to where the full notice is found.
     Copyright (C) <year>  <name of author>
 
     This program is free software: you can redistribute it and/or modify
-    it under the terms of the GNU Affero General Public License as published by
+    it under the terms of the GNU General Public License as published by
     the Free Software Foundation, either version 3 of the License, or
     (at your option) any later version.
 
     This program is distributed in the hope that it will be useful,
     but WITHOUT ANY WARRANTY; without even the implied warranty of
     MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-    GNU Affero General Public License for more details.
+    GNU General Public License for more details.
 
-    You should have received a copy of the GNU Affero General Public License
-    along with this program.  If not, see <http://www.gnu.org/licenses/>.
+    You should have received a copy of the GNU General Public License
+    along with this program.  If not, see <https://www.gnu.org/licenses/>.
 
 Also add information on how to contact you by electronic and paper mail.
 
-  If your software can interact with users remotely through a computer
-network, you should also make sure that it provides a way for users to
-get its source.  For example, if your program is a web application, its
-interface could display a "Source" link that leads users to an archive
-of the code.  There are many ways you could offer source, and different
-solutions will be better for different programs; see section 13 for the
-specific requirements.
+  If the program does terminal interaction, make it output a short
+notice like this when it starts in an interactive mode:
+
+    <program>  Copyright (C) <year>  <name of author>
+    This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
+    This is free software, and you are welcome to redistribute it
+    under certain conditions; type `show c' for details.
+
+The hypothetical commands `show w' and `show c' should show the appropriate
+parts of the General Public License.  Of course, your program's commands
+might be different; for a GUI interface, you would use an "about box".
 
   You should also get your employer (if you work as a programmer) or school,
 if any, to sign a "copyright disclaimer" for the program, if necessary.
-For more information on this, and how to apply and follow the GNU AGPL, see
-<http://www.gnu.org/licenses/>.
+For more information on this, and how to apply and follow the GNU GPL, see
+<https://www.gnu.org/licenses/>.
+
+  The GNU General Public License does not permit incorporating your program
+into proprietary programs.  If your program is a subroutine library, you
+may consider it more useful to permit linking proprietary applications with
+the library.  If this is what you want to do, use the GNU Lesser General
+Public License instead of this License.  But first, please read
+<https://www.gnu.org/licenses/why-not-lgpl.html>.
BIN  vendor/codeberg.org/gruf/go-ffmpreg/embed/ffmpeg/ffmpeg.wasm (generated, vendored, new file)
Binary file not shown.
38  vendor/codeberg.org/gruf/go-ffmpreg/embed/ffmpeg/lib.go (generated, vendored, new file)
@@ -0,0 +1,38 @@
package ffmpeg

import (
	_ "embed"
	"os"

	"github.com/tetratelabs/wazero/api"
	"github.com/tetratelabs/wazero/experimental"
)

func init() {
	// Check for WASM source file path.
	path := os.Getenv("FFMPEG_WASM")
	if path == "" {
		return
	}

	var err error

	// Read file into memory.
	B, err = os.ReadFile(path)
	if err != nil {
		panic(err)
	}
}

// CoreFeatures is the WebAssembly Core specification
// features this embedded binary was compiled with.
const CoreFeatures = api.CoreFeatureSIMD |
	api.CoreFeatureBulkMemoryOperations |
	api.CoreFeatureNonTrappingFloatToIntConversion |
	api.CoreFeatureMutableGlobal |
	api.CoreFeatureReferenceTypes |
	api.CoreFeatureSignExtensionOps |
	experimental.CoreFeaturesThreads

//go:embed ffmpeg.wasm
var B []byte
BIN  vendor/codeberg.org/gruf/go-ffmpreg/embed/ffprobe/ffprobe.wasm (generated, vendored, new file)
Binary file not shown.
38  vendor/codeberg.org/gruf/go-ffmpreg/embed/ffprobe/lib.go (generated, vendored, new file)
@@ -0,0 +1,38 @@
package ffprobe

import (
	_ "embed"
	"os"

	"github.com/tetratelabs/wazero/api"
	"github.com/tetratelabs/wazero/experimental"
)

func init() {
	// Check for WASM source file path.
	path := os.Getenv("FFPROBE_WASM")
	if path == "" {
		return
	}

	var err error

	// Read file into memory.
	B, err = os.ReadFile(path)
	if err != nil {
		panic(err)
	}
}

// CoreFeatures is the WebAssembly Core specification
// features this embedded binary was compiled with.
const CoreFeatures = api.CoreFeatureSIMD |
	api.CoreFeatureBulkMemoryOperations |
	api.CoreFeatureNonTrappingFloatToIntConversion |
	api.CoreFeatureMutableGlobal |
	api.CoreFeatureReferenceTypes |
	api.CoreFeatureSignExtensionOps |
	experimental.CoreFeaturesThreads

//go:embed ffprobe.wasm
var B []byte
109  vendor/codeberg.org/gruf/go-ffmpreg/ffmpeg/ffmpeg.go (generated, vendored, new file)
@@ -0,0 +1,109 @@
package ffmpeg

import (
	"context"

	"codeberg.org/gruf/go-ffmpreg/embed/ffmpeg"
	"codeberg.org/gruf/go-ffmpreg/internal"
	"codeberg.org/gruf/go-ffmpreg/util"
	"codeberg.org/gruf/go-ffmpreg/wasm"
	"github.com/tetratelabs/wazero"
	"github.com/tetratelabs/wazero/api"
	"github.com/tetratelabs/wazero/imports/wasi_snapshot_preview1"
)

// pool of WASM module instances.
var pool = wasm.InstancePool{
	Instantiator: wasm.Instantiator{

		// WASM module name.
		Module: "ffmpeg",

		// Per-instance WebAssembly runtime (with shared cache).
		Runtime: func(ctx context.Context) wazero.Runtime {

			// Prepare config with cache.
			cfg := wazero.NewRuntimeConfig()
			cfg = cfg.WithCoreFeatures(ffmpeg.CoreFeatures)
			cfg = cfg.WithCompilationCache(internal.Cache)

			// Instantiate runtime with our config.
			rt := wazero.NewRuntimeWithConfig(ctx, cfg)

			// Prepare default "env" host module.
			env := rt.NewHostModuleBuilder("env")
			env = env.NewFunctionBuilder().
				WithGoModuleFunction(
					api.GoModuleFunc(util.Wasm_Tempnam),
					[]api.ValueType{api.ValueTypeI32, api.ValueTypeI32},
					[]api.ValueType{api.ValueTypeI32},
				).
				Export("tempnam")

			// Instantiate "env" module in our runtime.
			_, err := env.Instantiate(context.Background())
			if err != nil {
				panic(err)
			}

			// Instantiate the wasi snapshot preview 1 in runtime.
			_, err = wasi_snapshot_preview1.Instantiate(ctx, rt)
			if err != nil {
				panic(err)
			}

			return rt
		},

		// Per-run module configuration.
		Config: wazero.NewModuleConfig,

		// Embedded WASM.
		Source: ffmpeg.B,
	},
}

// Precompile ensures at least one compiled ffmpeg
// instance is available in the global pool.
func Precompile(ctx context.Context) error {
	inst, err := pool.Get(ctx)
	if err != nil {
		return err
	}
	pool.Put(inst)
	return nil
}

// Get fetches a new ffmpeg instance from the pool, preferring cached if available.
func Get(ctx context.Context) (*wasm.Instance, error) { return pool.Get(ctx) }

// Put places the given ffmpeg instance in the pool.
func Put(inst *wasm.Instance) { pool.Put(inst) }

// Run will run the given args against an ffmpeg instance from the pool.
func Run(ctx context.Context, args wasm.Args) (uint32, error) {
	inst, err := pool.Get(ctx)
	if err != nil {
		return 0, err
	}
	rc, err := inst.Run(ctx, args)
	pool.Put(inst)
	return rc, err
}

// Cached returns a cached instance (if any) from the pool.
func Cached() *wasm.Instance { return pool.Cached() }

// Free drops all instances
// cached in the instance pool.
func Free() {
	ctx := context.Background()
	for {
		inst := pool.Cached()
		if inst == nil {
			return
		}
		_ = inst.Close(ctx)
	}
}
108  vendor/codeberg.org/gruf/go-ffmpreg/ffprobe/ffprobe.go (generated, vendored, new file)
@@ -0,0 +1,108 @@
package ffprobe

import (
	"context"

	"codeberg.org/gruf/go-ffmpreg/embed/ffprobe"
	"codeberg.org/gruf/go-ffmpreg/internal"
	"codeberg.org/gruf/go-ffmpreg/util"
	"codeberg.org/gruf/go-ffmpreg/wasm"
	"github.com/tetratelabs/wazero"
	"github.com/tetratelabs/wazero/api"
	"github.com/tetratelabs/wazero/imports/wasi_snapshot_preview1"
)

// pool of WASM module instances.
var pool = wasm.InstancePool{
	Instantiator: wasm.Instantiator{

		// WASM module name.
		Module: "ffprobe",

		// Per-instance WebAssembly runtime (with shared cache).
		Runtime: func(ctx context.Context) wazero.Runtime {

			// Prepare config with cache.
			cfg := wazero.NewRuntimeConfig()
			cfg = cfg.WithCoreFeatures(ffprobe.CoreFeatures)
			cfg = cfg.WithCompilationCache(internal.Cache)

			// Instantiate runtime with our config.
			rt := wazero.NewRuntimeWithConfig(ctx, cfg)

			// Prepare default "env" host module.
			env := rt.NewHostModuleBuilder("env")
			env = env.NewFunctionBuilder().
				WithGoModuleFunction(
					api.GoModuleFunc(util.Wasm_Tempnam),
					[]api.ValueType{api.ValueTypeI32, api.ValueTypeI32},
					[]api.ValueType{api.ValueTypeI32},
				).
				Export("tempnam")

			// Instantiate "env" module in our runtime.
			_, err := env.Instantiate(context.Background())
			if err != nil {
				panic(err)
			}

			// Instantiate the wasi snapshot preview 1 in runtime.
			_, err = wasi_snapshot_preview1.Instantiate(ctx, rt)
			if err != nil {
				panic(err)
			}

			return rt
		},

		// Per-run module configuration.
		Config: wazero.NewModuleConfig,

		// Embedded WASM.
		Source: ffprobe.B,
	},
}

// Precompile ensures at least one compiled ffprobe
// instance is available in the global pool.
func Precompile(ctx context.Context) error {
	inst, err := pool.Get(ctx)
	if err != nil {
		return err
	}
	pool.Put(inst)
	return nil
}

// Get fetches a new ffprobe instance from the pool, preferring cached if available.
func Get(ctx context.Context) (*wasm.Instance, error) { return pool.Get(ctx) }

// Put places the given ffprobe instance in the pool.
func Put(inst *wasm.Instance) { pool.Put(inst) }

// Run will run the given args against an ffprobe instance from the pool.
func Run(ctx context.Context, args wasm.Args) (uint32, error) {
	inst, err := pool.Get(ctx)
	if err != nil {
		return 0, err
	}
	rc, err := inst.Run(ctx, args)
	pool.Put(inst)
	return rc, err
}

// Cached returns a cached instance (if any) from the pool.
func Cached() *wasm.Instance { return pool.Cached() }

// Free drops all instances
// cached in the instance pool.
func Free() {
	ctx := context.Background()
	for {
		inst := pool.Cached()
		if inst == nil {
			return
		}
		_ = inst.Close(ctx)
	}
}
25  vendor/codeberg.org/gruf/go-ffmpreg/internal/wasm.go (generated, vendored, new file)
@@ -0,0 +1,25 @@
package internal

import (
	"os"

	"github.com/tetratelabs/wazero"
)

func init() {
	var err error

	if dir := os.Getenv("WAZERO_COMPILATION_CACHE"); dir != "" {
		// Use on-filesystem compilation cache given by env.
		Cache, err = wazero.NewCompilationCacheWithDir(dir)
		if err != nil {
			panic(err)
		}
	} else {
		// Use in-memory compilation cache.
		Cache = wazero.NewCompilationCache()
	}
}

// Shared WASM compilation cache.
var Cache wazero.CompilationCache
65  vendor/codeberg.org/gruf/go-ffmpreg/util/funcs.go (generated, vendored, new file)
@@ -0,0 +1,65 @@
package util

import (
	"context"
	"os"
	"path"
	"strconv"
	"time"

	"github.com/tetratelabs/wazero/api"
)

// Wasm_Tempnam wraps Go_Tempnam to fulfill wazero's api.GoModuleFunc;
// the argument definition is (i32, i32) and the return definition is (i32).
// NOTE: the calling module MUST have access to exported malloc / free.
func Wasm_Tempnam(ctx context.Context, mod api.Module, stack []uint64) {
	dirptr := api.DecodeU32(stack[0])
	pfxptr := api.DecodeU32(stack[1])
	dir := readString(ctx, mod, dirptr, 0)
	pfx := readString(ctx, mod, pfxptr, 0)
	tmpstr := Go_Tempnam(dir, pfx)
	tmpptr := writeString(ctx, mod, tmpstr)
	stack[0] = api.EncodeU32(tmpptr)
}

// Go_Tempnam is functionally similar to C's tempnam.
func Go_Tempnam(dir, prefix string) string {
	now := time.Now().Unix()
	prefix = path.Join(dir, prefix)
	for i := 0; i < 1000; i++ {
		n := murmur2(uint32(now + int64(i)))
		name := prefix + strconv.FormatUint(uint64(n), 10)
		_, err := os.Stat(name)
		if err == nil {
			continue
		} else if os.IsNotExist(err) {
			return name
		} else {
			panic(err)
		}
	}
	panic("too many attempts")
}

// murmur2 is a simple uint32 murmur2 hash
// impl with fixed seed and input size.
func murmur2(k uint32) (h uint32) {
	const (
		// seed ^ bitlen
		s = uint32(2147483647) ^ 8

		M = 0x5bd1e995
		R = 24
	)
	h = s
	k *= M
	k ^= k >> R
	k *= M
	h *= M
	h ^= k
	h ^= h >> 13
	h *= M
	h ^= h >> 15
	return
}
81  vendor/codeberg.org/gruf/go-ffmpreg/util/wasm.go (generated, vendored, new file)
@@ -0,0 +1,81 @@
package util

import (
	"bytes"
	"context"

	"github.com/tetratelabs/wazero/api"
)

// NOTE:
// the below functions are not very well optimized
// for repeated calls. this is relying on the fact
// that the only place they get used (tempnam) is
// not called very often (should only be once per run),
// so calls to ExportedFunction() and Call() instead
// of caching api.Function and using CallWithStack()
// will work out the same (if only called once).

// maxaddr is the maximum
// wasm32 memory address.
const maxaddr = ^uint32(0)

func malloc(ctx context.Context, mod api.Module, sz uint32) uint32 {
	stack, err := mod.ExportedFunction("malloc").Call(ctx, uint64(sz))
	if err != nil {
		panic(err)
	}
	ptr := api.DecodeU32(stack[0])
	if ptr == 0 {
		panic("out of memory")
	}
	return ptr
}

func free(ctx context.Context, mod api.Module, ptr uint32) {
	if ptr != 0 {
		mod.ExportedFunction("free").Call(ctx, uint64(ptr))
	}
}

func view(ctx context.Context, mod api.Module, ptr uint32, n uint32) []byte {
	if n == 0 {
		n = maxaddr - ptr
	}
	mem := mod.Memory()
	b, ok := mem.Read(ptr, n)
	if !ok {
		panic("out of range")
	}
	return b
}

func read(ctx context.Context, mod api.Module, ptr, n uint32) []byte {
	return bytes.Clone(view(ctx, mod, ptr, n))
}

func readString(ctx context.Context, mod api.Module, ptr, n uint32) string {
	return string(view(ctx, mod, ptr, n))
}

func write(ctx context.Context, mod api.Module, b []byte) uint32 {
	mem := mod.Memory()
	len := uint32(len(b))
	ptr := malloc(ctx, mod, len)
	ok := mem.Write(ptr, b)
	if !ok {
		panic("out of range")
	}
	return ptr
}

func writeString(ctx context.Context, mod api.Module, str string) uint32 {
	mem := mod.Memory()
	len := uint32(len(str) + 1)
	ptr := malloc(ctx, mod, len)
	ok := mem.WriteString(ptr, str)
	if !ok {
		panic("out of range")
	}
	return ptr
}
181
vendor/codeberg.org/gruf/go-ffmpreg/wasm/instance.go
generated
vendored
Normal file
181
vendor/codeberg.org/gruf/go-ffmpreg/wasm/instance.go
generated
vendored
Normal file
|
|
@ -0,0 +1,181 @@
|
|||
package wasm
|
||||
|
||||
import (
|
||||
"context"
|
||||
"errors"
|
||||
"io"
|
||||
"sync"
|
||||
|
||||
"github.com/tetratelabs/wazero"
|
||||
"github.com/tetratelabs/wazero/sys"
|
||||
)
|
||||
|
||||
type Args struct {
|
||||
// Standard FDs.
|
||||
Stdin io.Reader
|
||||
Stdout io.Writer
|
||||
Stderr io.Writer
|
||||
|
||||
// CLI args.
|
||||
Args []string
|
||||
|
||||
// Optional further module configuration function.
|
||||
// (e.g. to mount filesystem dir, set env vars, etc).
|
||||
Config func(wazero.ModuleConfig) wazero.ModuleConfig
|
||||
}
|
||||
|
||||
type Instantiator struct {
|
||||
// Module ...
|
||||
Module string
|
||||
|
||||
// Runtime ...
|
||||
Runtime func(context.Context) wazero.Runtime
|
||||
|
||||
// Config ...
|
||||
Config func() wazero.ModuleConfig
|
||||
|
||||
// Source ...
|
||||
Source []byte
|
||||
}
|
||||
func (inst *Instantiator) New(ctx context.Context) (*Instance, error) {
    switch {
    case inst.Module == "":
        panic("missing module name")
    case inst.Runtime == nil:
        panic("missing runtime instantiator")
    case inst.Config == nil:
        panic("missing module configuration")
    case len(inst.Source) == 0:
        panic("missing module source")
    }

    // Create new host runtime.
    rt := inst.Runtime(ctx)

    // Compile guest module from WebAssembly source.
    mod, err := rt.CompileModule(ctx, inst.Source)
    if err != nil {
        return nil, err
    }

    return &Instance{
        inst: inst,
        wzrt: rt,
        cmod: mod,
    }, nil
}

type InstancePool struct {
    Instantiator

    pool []*Instance
    lock sync.Mutex
}

func (p *InstancePool) Get(ctx context.Context) (*Instance, error) {
    for {
        // Check for cached.
        inst := p.Cached()
        if inst == nil {
            break
        }

        // Check if closed.
        if inst.IsClosed() {
            continue
        }

        return inst, nil
    }

    // Must create new instance.
    return p.Instantiator.New(ctx)
}

func (p *InstancePool) Put(inst *Instance) {
    if inst.inst != &p.Instantiator {
        panic("instance and pool instantiators do not match")
    }
    p.lock.Lock()
    p.pool = append(p.pool, inst)
    p.lock.Unlock()
}

func (p *InstancePool) Cached() *Instance {
    var inst *Instance
    p.lock.Lock()
    if len(p.pool) > 0 {
        inst = p.pool[len(p.pool)-1]
        p.pool = p.pool[:len(p.pool)-1]
    }
    p.lock.Unlock()
    return inst
}

// Instance wraps a compiled WebAssembly
// module with the runtime needed to run it.
//
// NOTE: Instance is NOT concurrency
// safe. One at a time please!!
type Instance struct {
    inst *Instantiator
    wzrt wazero.Runtime
    cmod wazero.CompiledModule
}

func (inst *Instance) Run(ctx context.Context, args Args) (uint32, error) {
    if inst.inst == nil {
        panic("not initialized")
    }

    // Check instance open.
    if inst.IsClosed() {
        return 0, errors.New("instance closed")
    }

    // Prefix binary name as argv0 to args.
    cargs := make([]string, len(args.Args)+1)
    copy(cargs[1:], args.Args)
    cargs[0] = inst.inst.Module

    // Create base module config.
    modcfg := inst.inst.Config()
    modcfg = modcfg.WithName(inst.inst.Module)
    modcfg = modcfg.WithArgs(cargs...)
    modcfg = modcfg.WithStdin(args.Stdin)
    modcfg = modcfg.WithStdout(args.Stdout)
    modcfg = modcfg.WithStderr(args.Stderr)

    if args.Config != nil {
        // Pass through config fn.
        modcfg = args.Config(modcfg)
    }

    // Instantiate the module from precompiled wasm module data.
    mod, err := inst.wzrt.InstantiateModule(ctx, inst.cmod, modcfg)

    if mod != nil {
        // Close module.
        mod.Close(ctx)
    }

    // Check for a returned exit code error.
    if err, ok := err.(*sys.ExitError); ok {
        return err.ExitCode(), nil
    }

    return 0, err
}

func (inst *Instance) IsClosed() bool {
    return (inst.wzrt == nil || inst.cmod == nil)
}

func (inst *Instance) Close(ctx context.Context) error {
    if inst.IsClosed() {
        return nil
    }
    err1 := inst.cmod.Close(ctx)
    err2 := inst.wzrt.Close(ctx)
    return errors.Join(err1, err2)
}
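The pooling logic above (LIFO reuse, skipping instances that were closed while cached) can be sketched standalone; the `instance` and `pool` names here are illustrative stand-ins for the real `Instance`/`InstancePool` types, not the library's code:

```go
package main

import (
	"fmt"
	"sync"
)

// instance stands in for the wasm Instance type;
// closed mirrors its IsClosed() state.
type instance struct{ closed bool }

func (i *instance) IsClosed() bool { return i.closed }

// pool mirrors InstancePool: a mutex-guarded LIFO
// stack of reusable instances.
type pool struct {
	lock sync.Mutex
	pool []*instance
}

func (p *pool) cached() *instance {
	p.lock.Lock()
	defer p.lock.Unlock()
	var inst *instance
	if len(p.pool) > 0 {
		inst = p.pool[len(p.pool)-1]
		p.pool = p.pool[:len(p.pool)-1]
	}
	return inst
}

func (p *pool) get() *instance {
	for {
		inst := p.cached()
		if inst == nil {
			break
		}
		if inst.IsClosed() {
			continue // drop closed instances
		}
		return inst
	}
	return &instance{} // must create new
}

func (p *pool) put(inst *instance) {
	p.lock.Lock()
	p.pool = append(p.pool, inst)
	p.lock.Unlock()
}

func main() {
	p := &pool{}
	a := p.get() // freshly created
	p.put(a)

	b := p.get() // reused from the pool
	fmt.Println(a == b) // true

	b.closed = true
	p.put(b)
	c := p.get() // closed instance skipped, new one made
	fmt.Println(b == c) // false
}
```

This LIFO shape is what lets the commit cap the number of concurrently available ffmpeg/ffprobe instances: a bounded set of instances cycles through the pool instead of being recreated per call.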
9
vendor/codeberg.org/gruf/go-iotools/close.go
generated
vendored
@@ -2,6 +2,13 @@ package iotools

import "io"

// NopCloser is an empty
// implementation of io.Closer,
// that simply does nothing!
type NopCloser struct{}

func (NopCloser) Close() error { return nil }

// CloserFunc is a function signature which allows
// a function to implement the io.Closer type.
type CloserFunc func() error

@@ -10,6 +17,7 @@ func (c CloserFunc) Close() error {
    return c()
}

// CloserCallback wraps io.Closer to add a callback deferred to call just after Close().
func CloserCallback(c io.Closer, cb func()) io.Closer {
    return CloserFunc(func() error {
        defer cb()

@@ -17,6 +25,7 @@ func CloserCallback(c io.Closer, cb func()) io.Closer {
    })
}

// CloserAfterCallback wraps io.Closer to add a callback called just before Close().
func CloserAfterCallback(c io.Closer, cb func()) io.Closer {
    return CloserFunc(func() (err error) {
        defer func() { err = c.Close() }()
85
vendor/codeberg.org/gruf/go-iotools/helpers.go
generated
vendored
Normal file
@@ -0,0 +1,85 @@
package iotools

import "io"

// AtEOF returns true when the reader is at EOF;
// this is checked with a zero-length read.
func AtEOF(r io.Reader) bool {
    _, err := r.Read(nil)
    return (err == io.EOF)
}

// GetReaderLimit attempts to cast io.Reader to access its io.LimitedReader with limit.
func GetReaderLimit(r io.Reader) (*io.LimitedReader, int64) {
    lr, ok := r.(*io.LimitedReader)
    if !ok {
        return nil, -1
    }
    return lr, lr.N
}

// UpdateReaderLimit attempts to update the limit of an existing io.LimitedReader, newly wrapping the reader if necessary.
func UpdateReaderLimit(r io.Reader, limit int64) (*io.LimitedReader, int64) {
    lr, ok := r.(*io.LimitedReader)
    if !ok {
        lr = &io.LimitedReader{R: r, N: limit}
        return lr, limit
    }

    if limit < lr.N {
        // Update existing.
        lr.N = limit
    }

    return lr, lr.N
}

// GetReadCloserLimit attempts to unwrap io.ReadCloser to access its io.LimitedReader with limit.
func GetReadCloserLimit(rc io.ReadCloser) (*io.LimitedReader, int64) {
    rct, ok := rc.(*ReadCloserType)
    if !ok {
        return nil, -1
    }
    lr, ok := rct.Reader.(*io.LimitedReader)
    if !ok {
        return nil, -1
    }
    return lr, lr.N
}

// UpdateReadCloserLimit attempts to update the limit of an existing limited io.ReadCloser, newly wrapping it if necessary.
func UpdateReadCloserLimit(rc io.ReadCloser, limit int64) (io.ReadCloser, *io.LimitedReader, int64) {

    // Check for our wrapped ReadCloserType.
    if rct, ok := rc.(*ReadCloserType); ok {

        // Attempt to update existing wrapped limit reader.
        if lr, ok := rct.Reader.(*io.LimitedReader); ok {

            if limit < lr.N {
                // Update existing.
                lr.N = limit
            }

            return rct, lr, lr.N
        }

        // Wrap the reader type with new limit.
        lr := &io.LimitedReader{R: rct.Reader, N: limit}
        rct.Reader = lr

        return rct, lr, lr.N
    }

    // Wrap separated types.
    rct := &ReadCloserType{
        Reader: rc,
        Closer: rc,
    }

    // Wrap separated reader part with limit.
    lr := &io.LimitedReader{R: rct.Reader, N: limit}
    rct.Reader = lr

    return rct, lr, lr.N
}
21
vendor/codeberg.org/gruf/go-iotools/read.go
generated
vendored
@@ -4,6 +4,16 @@ import (
    "io"
)

+// ReadCloserType implements io.ReadCloser
+// by combining the two underlying interfaces,
+// while providing an exported type to still
+// access the underlying original io.Reader or
+// io.Closer separately (e.g. without wrapping).
+type ReadCloserType struct {
+    io.Reader
+    io.Closer
+}
+
// ReaderFunc is a function signature which allows
// a function to implement the io.Reader type.
type ReaderFunc func([]byte) (int, error)

@@ -22,15 +32,10 @@ func (rf ReaderFromFunc) ReadFrom(r io.Reader) (int64, error) {

// ReadCloser wraps an io.Reader and io.Closer in order to implement io.ReadCloser.
func ReadCloser(r io.Reader, c io.Closer) io.ReadCloser {
-    return &struct {
-        io.Reader
-        io.Closer
-    }{r, c}
+    return &ReadCloserType{r, c}
}

-// NopReadCloser wraps an io.Reader to implement io.ReadCloser with empty io.Closer implementation.
+// NopReadCloser wraps io.Reader with NopCloser{} in ReadCloserType.
func NopReadCloser(r io.Reader) io.ReadCloser {
-    return ReadCloser(r, CloserFunc(func() error {
-        return nil
-    }))
+    return &ReadCloserType{r, NopCloser{}}
}

25
vendor/codeberg.org/gruf/go-iotools/size.go
generated
vendored
Normal file
@@ -0,0 +1,25 @@
package iotools

type Sizer interface {
    Size() int64
}

// SizerFunc is a function signature which allows
// a function to implement the Sizer type.
type SizerFunc func() int64

func (s SizerFunc) Size() int64 {
    return s()
}

type Lengther interface {
    Len() int
}

// LengthFunc is a function signature which allows
// a function to implement the Lengther type.
type LengthFunc func() int

func (l LengthFunc) Len() int {
    return l()
}
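The `SizerFunc`/`LengthFunc` adapters above follow the same pattern as `http.HandlerFunc`: a named function type with a method that calls itself, so any closure can satisfy the interface. A minimal sketch (local stand-in names, not the vendored package):

```go
package main

import "fmt"

// sizer mirrors iotools.Sizer.
type sizer interface{ Size() int64 }

// sizerFunc mirrors iotools.SizerFunc: an adapter letting
// a plain func() int64 satisfy the sizer interface.
type sizerFunc func() int64

func (s sizerFunc) Size() int64 { return s() }

func main() {
	data := []byte("hello")
	var s sizer = sizerFunc(func() int64 { return int64(len(data)) })
	fmt.Println(s.Size()) // 5
}
```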
9
vendor/codeberg.org/gruf/go-iotools/write.go
generated
vendored
@@ -28,7 +28,10 @@ func WriteCloser(w io.Writer, c io.Closer) io.WriteCloser {

// NopWriteCloser wraps an io.Writer to implement io.WriteCloser with empty io.Closer implementation.
func NopWriteCloser(w io.Writer) io.WriteCloser {
-    return WriteCloser(w, CloserFunc(func() error {
-        return nil
-    }))
+    return &nopWriteCloser{w}
}
+
+// nopWriteCloser implements io.WriteCloser with a no-op Close().
+type nopWriteCloser struct{ io.Writer }
+
+func (wc *nopWriteCloser) Close() error { return nil }

5
vendor/codeberg.org/gruf/go-mimetypes/README.md
generated
vendored
Normal file
@@ -0,0 +1,5 @@
# go-mimetypes

A generated lookup map of file extensions to mimetypes, from data provided at: https://raw.githubusercontent.com/micnic/mime.json/master/index.json

This allows determining mimetype without relying on OS mimetype lookups.
42
vendor/codeberg.org/gruf/go-mimetypes/get-mime-types.sh
generated
vendored
Normal file
@@ -0,0 +1,42 @@
#!/bin/sh

# Mime types JSON source
URL='https://raw.githubusercontent.com/micnic/mime.json/master/index.json'

# Define intro to file
FILE='
// This is an automatically generated file, do not edit
package mimetypes


// MimeTypes is a map of file extensions to mime types.
var MimeTypes = map[string]string{
'

# Set break on new-line
IFS='
'

for line in $(curl -fL "$URL" | grep -E '".+"\s*:\s*".+"'); do
    # Trim final whitespace
    line=$(echo "$line" | sed -e 's|\s*$||')

    # Ensure it ends in a comma
    [ "${line%,}" = "$line" ] && line="${line},"

    # Add to file
    FILE="${FILE}${line}
"
done

# Add final statement to file
FILE="${FILE}
}

"

# Write to file
echo "$FILE" > 'mime.gen.go'

# Check for valid go
gofumpt -w 'mime.gen.go'
1207
vendor/codeberg.org/gruf/go-mimetypes/mime.gen.go
generated
vendored
Normal file
File diff suppressed because it is too large
47
vendor/codeberg.org/gruf/go-mimetypes/mime.go
generated
vendored
Normal file
@@ -0,0 +1,47 @@
package mimetypes

import "path"

// PreferredExts defines preferred file
// extensions for input mime types (as there
// can be multiple extensions per mime type).
var PreferredExts = map[string]string{
    MimeTypes["mp3"]:  "mp3",  // audio/mpeg
    MimeTypes["mpeg"]: "mpeg", // video/mpeg
}

// GetForFilename returns mimetype for given filename.
func GetForFilename(filename string) (string, bool) {
    ext := path.Ext(filename)
    if len(ext) < 1 {
        return "", false
    }
    mime, ok := MimeTypes[ext[1:]]
    return mime, ok
}

// GetFileExt returns the file extension to use for mimetype, relying first
// upon the 'PreferredExts' map, and otherwise simply returning the first
// match found in 'MimeTypes' (of which there may be multiple).
func GetFileExt(mimeType string) (string, bool) {
    ext, ok := PreferredExts[mimeType]
    if ok {
        return ext, true
    }
    for ext, mime := range MimeTypes {
        if mime == mimeType {
            return ext, true
        }
    }
    return "", false
}

// GetFileExts returns known file extensions used for mimetype.
func GetFileExts(mimeType string) []string {
    var exts []string
    for ext, mime := range MimeTypes {
        if mime == mimeType {
            exts = append(exts, ext)
        }
    }
    return exts
}
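`GetForFilename` boils down to `path.Ext` plus a map lookup, with no OS mimetype database involved. A standalone sketch with a tiny stand-in map (the real generated `MimeTypes` in `mime.gen.go` has on the order of a thousand entries):

```go
package main

import (
	"fmt"
	"path"
)

// mimeTypes is a tiny stand-in for the
// generated mimetypes.MimeTypes map.
var mimeTypes = map[string]string{
	"mp3":  "audio/mpeg",
	"mpeg": "video/mpeg",
	"png":  "image/png",
}

// getForFilename mirrors mimetypes.GetForFilename: strip the
// leading dot from path.Ext, then look the extension up directly.
func getForFilename(filename string) (string, bool) {
	ext := path.Ext(filename)
	if len(ext) < 1 {
		return "", false
	}
	mime, ok := mimeTypes[ext[1:]]
	return mime, ok
}

func main() {
	fmt.Println(getForFilename("song.mp3")) // audio/mpeg true
	fmt.Println(getForFilename("README"))   // no extension: ok == false
}
```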
3
vendor/codeberg.org/gruf/go-storage/memory/memory.go
generated
vendored
@@ -7,7 +7,6 @@ import (
    "strings"
    "sync"

-    "codeberg.org/gruf/go-iotools"
    "codeberg.org/gruf/go-storage"

    "codeberg.org/gruf/go-storage/internal"

@@ -93,7 +92,7 @@ func (st *MemoryStorage) ReadStream(ctx context.Context, key string) (io.ReadClo

    // Wrap in readcloser.
    r := bytes.NewReader(b)
-    return iotools.NopReadCloser(r), nil
+    return io.NopCloser(r), nil
}

// WriteBytes: implements Storage.WriteBytes().

122
vendor/codeberg.org/superseriousbusiness/exif-terminator/README.md
generated
vendored
@@ -1,122 +0,0 @@
# exif-terminator

`exif-terminator` removes exif data from images (jpeg and png currently supported) in a streaming manner. All you need to do is provide a reader of the image in, and exif-terminator will provide a reader of the image out.

Hasta la vista, baby!

```text
.,lddxococ.
..',lxO0Oo;'.
. .. .,coodO0klc:.
.,. ..','. .. .,..'. .':llxKXk'
.;c:cc;;,... .''.,l:cc. .....:l:,,:oo:..
.,:ll'. .,;cox0OxOKKXX0kOOxlcld0X0d;,,,'.
.:xkl. .':cdKNWWWWMMMMMMMMMMWWNXK0KWNd.
.coxo,..:ollk0KKXNWMMMMMMMMMMWWXXXOoOM0;
,oc,. .;cloxOKXXWWMMMMMMMMMMMWNXk;;OWO'
. ..;cdOKXNNWWMMMMMMMMMMMMWO,,ONO'
...... ....;okOO000XWWMMMMMMMMMWXx;,ONNx.
.;c;. .:l'ckl. ..';looooolldolloooodolcc:;'.;oo:.
.oxl. ;:..OO. .. .. .,' .;.
.oko. .cc.'Ok. .:; .:,..';.
.cdc. .;;lc.,Ox. . .',,'..','. .dN0; .. .c:,,':.
.:oc. ,dxkl.,0x. . .. . .oNMMKc.. ...:l.
.:o:. cKXKl.,Ox. .. .lKWMMMXo,. ...''.
.:l; c0KKo.,0x. ...........';:lk0OKNNXKkl,..,;cxd'
.::' ;k00l.;0d. .. .,cloooddddxxddol;:ddloxdc,:odOWNc
.;,. ,ONKc.;0d. 'l,.. .:clllllllokKOl::cllclkKx'.lolxx'
.,. '0W0:.;0d. .:l,. .,:ccc:::oOXNXOkxdook0NWNx,,;c;.
... .kX0c.;0d. .loc' .,::;;;;lk0kddoooooddooO0o',ld;
.. .oOkk:cKd. .... .;:,',;cxK0o::ldkOkkOkxod:';oKx.
.. :dlOolKO, '::'.';:oOK0xdddoollooxOx::ccOx.
.. ';:o,.xKo. .,;'...';lddolooodkkkdol:,::lc.
.. ...:..oOl. ........';:codxxOXKKKk;':;:kl
.. .,..lOc. .. ....,codxkxxxxxo:,,;lKO. .,;'..
... .. ck: ';,'. .;:cllloc,;;;colOK; .;odxxoc;.
...,.... . :x; .;:cc;'. .,;::c:'..,kXk:xNc .':oook00x:.
. cKx. .'.. ':clllc,...'';:::cc:;.,kOo:xNx. .'codddoox
.. ,xxl;',col:;. .:cccccc;;;:lxkkOOkdc,,lolcxWO' ;kNKc.'
.,. .c' ':dkO0O; .. .;ccccccc:::cldxkxoll:;oolcdN0:.. .xWNk;
.:' .c',xXNKkOXo .,. .,:cccccllc::lloooolc:;lo:;oXKc,::. .kWWX
,' .cONMWMWkco, ', .';::ccclolc:llolollcccodo;:KXl..cl,. ;KWN
'. .xWWWWMKc;; ....;' ',;::::coolclloooollc:,:o;;0Xx, .,:;... ,0Ko
. ,kKNWWXd,cdd0NXKk:,;;;'';::::coollllllllllc;;ccl0Nkc. ..';loOx'
'lxXWMXOOXNMMMMWWNNNWXkc;;;;;:cllccccccccc::lllkNWXd,. .cxO0Ol'
,xKNWWXkkXWM0dxKNWWWMWNX0OOkl;;:c::cccc:,...:oONMMXOo;. :kOkOkl;
.;,;:;...,::. .;lokXKKNMMMWNOc,;;;,::;'...lOKNWNKkol:,..cKdcO0do
.:;... .. .,:okO0KNN0:.',,''''. ':xNMWKkxxOKXd,.cNk,:l:o
```

## Why?

Exif removal is a pain in the arse. Most other libraries seem to parse the whole image into memory, then remove the exif data, then encode the image again.

`exif-terminator` differs in that it removes exif data *while scanning through the image bytes*, and it doesn't do any reencoding of the image. Bytes of exif data are simply all set to 0, and the image data is piped back out again into the returned reader.

The only exception is orientation data: if an image contains orientation data, this and only this data will be preserved since it's *actually useful*.

## Example

You can run the following example with `go run ./example/main.go`:

```go
package main

import (
	"io"
	"os"

	terminator "codeberg.org/superseriousbusiness/exif-terminator"
)

func main() {
	// open a file
	sloth, err := os.Open("./images/sloth.jpg")
	if err != nil {
		panic(err)
	}
	defer sloth.Close()

	// get the length of the file
	stat, err := sloth.Stat()
	if err != nil {
		panic(err)
	}

	// terminate!
	out, err := terminator.Terminate(sloth, int(stat.Size()), "jpeg")
	if err != nil {
		panic(err)
	}

	// read the bytes from the reader
	b, err := io.ReadAll(out)
	if err != nil {
		panic(err)
	}

	// save the file somewhere
	if err := os.WriteFile("./images/sloth-clean.jpg", b, 0666); err != nil {
		panic(err)
	}
}
```

## Credits

### Libraries

`exif-terminator` borrows heavily from the two [`dsoprea`](https://github.com/dsoprea) libraries credited below. In fact, it's basically a hack on top of those libraries. Thanks `dsoprea`!

- [dsoprea/go-exif](https://github.com/dsoprea/go-exif): exif header reconstruction. [MIT License](https://spdx.org/licenses/MIT.html).
- [dsoprea/go-jpeg-image-structure](https://github.com/dsoprea/go-jpeg-image-structure): jpeg structure parsing. [MIT License](https://spdx.org/licenses/MIT.html).
- [dsoprea/go-png-image-structure](https://github.com/dsoprea/go-png-image-structure): png structure parsing. [MIT License](https://spdx.org/licenses/MIT.html).
- [stretchr/testify](https://github.com/stretchr/testify): test framework. [MIT License](https://spdx.org/licenses/MIT.html).

## License

![the gnu AGPL logo](https://www.gnu.org/graphics/agplv3-155x51.png)

`exif-terminator` is free software, licensed under the [GNU AGPL v3 LICENSE](LICENSE).

Copyright (C) 2022-2024 SuperSeriousBusiness.
295
vendor/codeberg.org/superseriousbusiness/exif-terminator/jpeg.go
generated
vendored
@@ -1,295 +0,0 @@
/*
   exif-terminator
   Copyright (C) 2022 SuperSeriousBusiness admin@gotosocial.org

   This program is free software: you can redistribute it and/or modify
   it under the terms of the GNU Affero General Public License as published by
   the Free Software Foundation, either version 3 of the License, or
   (at your option) any later version.

   This program is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
   GNU Affero General Public License for more details.

   You should have received a copy of the GNU Affero General Public License
   along with this program. If not, see <http://www.gnu.org/licenses/>.
*/

package terminator

import (
    "bytes"
    "encoding/binary"
    "fmt"
    "io"

    exif "github.com/dsoprea/go-exif/v3"
    jpegstructure "github.com/superseriousbusiness/go-jpeg-image-structure/v2"
)

var markerLen = map[byte]int{
    0x00: 0,
    0x01: 0,
    0xd0: 0,
    0xd1: 0,
    0xd2: 0,
    0xd3: 0,
    0xd4: 0,
    0xd5: 0,
    0xd6: 0,
    0xd7: 0,
    0xd8: 0,
    0xd9: 0,
    0xda: 0,

    // J2C
    0x30: 0,
    0x31: 0,
    0x32: 0,
    0x33: 0,
    0x34: 0,
    0x35: 0,
    0x36: 0,
    0x37: 0,
    0x38: 0,
    0x39: 0,
    0x3a: 0,
    0x3b: 0,
    0x3c: 0,
    0x3d: 0,
    0x3e: 0,
    0x3f: 0,
    0x4f: 0,
    0x92: 0,
    0x93: 0,

    // J2C extensions
    0x74: 4,
    0x75: 4,
    0x77: 4,
}

type jpegVisitor struct {
    js                *jpegstructure.JpegSplitter
    writer            io.Writer
    expectedFileSize  int
    writtenTotalBytes int
}

// HandleSegment satisfies the visitor interface{} of the jpegstructure library.
//
// We don't really care about many of the parameters, since all we're interested
// in here is the very last segment that was scanned.
func (v *jpegVisitor) HandleSegment(segmentMarker byte, _ string, _ int, _ bool) error {
    // get the most recent segment scanned (ie., last in the segments list)
    segmentList := v.js.Segments()
    segments := segmentList.Segments()
    mostRecentSegment := segments[len(segments)-1]

    // check if we've written the expected number of bytes by EOI
    if segmentMarker == jpegstructure.MARKER_EOI {
        // take account of the last 2 bytes taken up by the EOI
        eoiLength := 2

        // this is the total file size we will
        // have written including the EOI
        willHaveWritten := v.writtenTotalBytes + eoiLength

        if willHaveWritten < v.expectedFileSize {
            // if we won't have written enough,
            // pad the final segment before EOI
            // so that we meet expected file size
            missingBytes := make([]byte, v.expectedFileSize-willHaveWritten)
            if _, err := v.writer.Write(missingBytes); err != nil {
                return err
            }
        }
    }

    // process the segment
    return v.writeSegment(mostRecentSegment)
}

func (v *jpegVisitor) writeSegment(s *jpegstructure.Segment) error {
    var writtenSegmentData int
    w := v.writer

    defer func() {
        // whatever happens, when we finished then evict data from the segment;
        // once we've written it we don't want it in memory anymore
        s.Data = s.Data[:0]
    }()

    // The scan-data will have a marker-ID of (0) because it doesn't have a marker-ID or length.
    if s.MarkerId != 0 {
        markerIDWritten, err := w.Write([]byte{0xff, s.MarkerId})
        if err != nil {
            return err
        }
        writtenSegmentData += markerIDWritten

        sizeLen, found := markerLen[s.MarkerId]
        if !found || sizeLen == 2 {
            sizeLen = 2
            l := uint16(len(s.Data) + sizeLen)

            if err := binary.Write(w, binary.BigEndian, &l); err != nil {
                return err
            }

            writtenSegmentData += 2
        } else if sizeLen == 4 {
            l := uint32(len(s.Data) + sizeLen)

            if err := binary.Write(w, binary.BigEndian, &l); err != nil {
                return err
            }

            writtenSegmentData += 4
        } else if sizeLen != 0 {
            return fmt.Errorf("not a supported marker-size: MARKER-ID=(0x%02x) MARKER-SIZE-LEN=(%d)", s.MarkerId, sizeLen)
        }
    }

    if !s.IsExif() {
        // if this isn't exif data just copy it over and bail
        writtenNormalData, err := w.Write(s.Data)
        if err != nil {
            return err
        }

        writtenSegmentData += writtenNormalData
        v.writtenTotalBytes += writtenSegmentData
        return nil
    }

    ifd, _, err := s.Exif()
    if err != nil {
        return err
    }

    // amount of bytes we've written into the exif body, we'll update this as we go
    var writtenExifData int

    if orientationEntries, err := ifd.FindTagWithName("Orientation"); err == nil && len(orientationEntries) == 1 {
        // If we have an orientation entry, we don't want to completely obliterate the exif data.
        // Instead, we want to surgically obliterate everything *except* the orientation tag, so
        // that the image will still be rotated correctly when shown in client applications etc.
        //
        // To accomplish this, we're going to extract just the bytes that we need and write them
        // in according to the exif specification, then fill in the rest of the space with empty
        // bytes.
        //
        // First we need to write the exif prefix for this segment.
        //
        // Then we write the exif header which contains the byte order and offset of the first ifd.
        //
        // Then we write the ifd0 entry which contains the orientation data.
        //
        // After that we just fill.

        newExifData := &bytes.Buffer{}
        byteOrder := ifd.ByteOrder()

        // 1. Write exif prefix.
        // https://www.ozhiker.com/electronics/pjmt/jpeg_info/app_segments.html
        prefix := []byte{'E', 'x', 'i', 'f', 0, 0}
        if err := binary.Write(newExifData, byteOrder, &prefix); err != nil {
            return err
        }
        writtenExifData += len(prefix)

        // 2. Write exif header, taking the existing byte order.
        exifHeader, err := exif.BuildExifHeader(byteOrder, exif.ExifDefaultFirstIfdOffset)
        if err != nil {
            return err
        }
        hWritten, err := newExifData.Write(exifHeader)
        if err != nil {
            return err
        }
        writtenExifData += hWritten

        // 3. Write in the new ifd
        //
        // An ifd with one orientation entry is structured like this:
        //   2 bytes: the number of entries in the ifd    uint16(1)
        //   2 bytes: the tag id                          uint16(274)
        //   2 bytes: the tag type                        uint16(3)
        //   4 bytes: the tag count                       uint32(1)
        //   4 bytes: the tag value offset: uint32(one of the below with padding on the end)
        //     1 = Horizontal (normal)
        //     2 = Mirror horizontal
        //     3 = Rotate 180
        //     4 = Mirror vertical
        //     5 = Mirror horizontal and rotate 270 CW
        //     6 = Rotate 90 CW
        //     7 = Mirror horizontal and rotate 90 CW
        //     8 = Rotate 270 CW
        //
        // see https://web.archive.org/web/20190624045241if_/http://www.cipa.jp:80/std/documents/e/DC-008-Translation-2019-E.pdf - p24-25
        orientationEntry := orientationEntries[0]

        ifdCount := uint16(1) // we're only adding one entry into the ifd
        if err := binary.Write(newExifData, byteOrder, &ifdCount); err != nil {
            return err
        }
        writtenExifData += 2

        tagID := orientationEntry.TagId()
        if err := binary.Write(newExifData, byteOrder, &tagID); err != nil {
            return err
        }
        writtenExifData += 2

        tagType := uint16(orientationEntry.TagType())
        if err := binary.Write(newExifData, byteOrder, &tagType); err != nil {
            return err
        }
        writtenExifData += 2

        tagCount := orientationEntry.UnitCount()
        if err := binary.Write(newExifData, byteOrder, &tagCount); err != nil {
            return err
        }
        writtenExifData += 4

        valueOffset, err := orientationEntry.GetRawBytes()
        if err != nil {
            return err
        }

        vWritten, err := newExifData.Write(valueOffset)
        if err != nil {
            return err
        }
        writtenExifData += vWritten

        valuePad := make([]byte, 4-vWritten)
        pWritten, err := newExifData.Write(valuePad)
        if err != nil {
            return err
        }
        writtenExifData += pWritten

        // write all the new data into the writer from the segment
        writtenNewExifData, err := io.Copy(w, newExifData)
        if err != nil {
            return err
        }

        writtenSegmentData += int(writtenNewExifData)
    }

    // fill in any remaining exif body with blank bytes
    blank := make([]byte, len(s.Data)-writtenExifData)
    writtenPadding, err := w.Write(blank)
    if err != nil {
        return err
    }

    writtenSegmentData += writtenPadding
    v.writtenTotalBytes += writtenSegmentData
    return nil
}
93
vendor/codeberg.org/superseriousbusiness/exif-terminator/png.go
generated
vendored
|
|
@ -1,93 +0,0 @@
|
|||
/*
|
||||
exif-terminator
|
||||
Copyright (C) 2022 SuperSeriousBusiness admin@gotosocial.org
|
||||
|
||||
This program is free software: you can redistribute it and/or modify
|
||||
it under the terms of the GNU Affero General Public License as published by
|
||||
the Free Software Foundation, either version 3 of the License, or
|
||||
(at your option) any later version.
|
||||
|
||||
This program is distributed in the hope that it will be useful,
|
||||
but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
GNU Affero General Public License for more details.
|
||||
|
||||
You should have received a copy of the GNU Affero General Public License
|
||||
along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
|
||||
package terminator
|
||||
|
||||
import (
|
||||
"io"
|
||||
|
||||
	pngstructure "github.com/superseriousbusiness/go-png-image-structure/v2"
)

type pngVisitor struct {
	ps               *pngstructure.PngSplitter
	writer           io.Writer
	lastWrittenChunk int
}

func (v *pngVisitor) split(data []byte, atEOF bool) (int, []byte, error) {
	// execute the ps split function to read in data
	advance, token, err := v.ps.Split(data, atEOF)
	if err != nil {
		return advance, token, err
	}

	// if we haven't written anything at all yet, then write the png header back into the writer first
	if v.lastWrittenChunk == -1 {
		if _, err := v.writer.Write(pngstructure.PngSignature[:]); err != nil {
			return advance, token, err
		}
	}

	// Check if the splitter now has
	// any new chunks in it for us.
	chunkSlice, err := v.ps.Chunks()
	if err != nil {
		return advance, token, err
	}

	// Write each chunk by passing it
	// through our custom write func,
	// which strips out exif and fixes
	// the CRC of each chunk.
	chunks := chunkSlice.Chunks()
	for i, chunk := range chunks {
		if i <= v.lastWrittenChunk {
			// Skip already
			// written chunks.
			continue
		}

		// Write this new chunk.
		if err := v.writeChunk(chunk); err != nil {
			return advance, token, err
		}
		v.lastWrittenChunk = i

		// Zero data; here you
		// go garbage collector.
		chunk.Data = nil
	}

	return advance, token, err
}

func (v *pngVisitor) writeChunk(chunk *pngstructure.Chunk) error {
	if chunk.Type == pngstructure.EXifChunkType {
		// Replace exif data
		// with zero bytes.
		clear(chunk.Data)
	}

	// Fix CRC of each chunk.
	chunk.UpdateCrc32()

	// finally, write chunk to writer.
	_, err := chunk.WriteTo(v.writer)
	return err
}
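The writeChunk method above recomputes each chunk's CRC after blanking eXIf data. As a minimal stdlib-only sketch (the helper name is ours, not the library's): per the PNG specification, a chunk's CRC is CRC-32 (IEEE polynomial) over the 4-byte chunk type followed by the chunk data.

```go
package main

import (
	"fmt"
	"hash/crc32"
)

// pngChunkCRC computes the checksum stored at the end of a PNG chunk:
// CRC-32 (IEEE polynomial) over the chunk type bytes followed by the
// chunk data, per the PNG specification.
func pngChunkCRC(chunkType string, data []byte) uint32 {
	crc := crc32.NewIEEE()
	crc.Write([]byte(chunkType))
	crc.Write(data)
	return crc.Sum32()
}

func main() {
	// The IEND chunk carries no data; its well-known CRC is 0xae426082.
	fmt.Printf("%#08x\n", pngChunkCRC("IEND", nil))
}
```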
158 vendor/codeberg.org/superseriousbusiness/exif-terminator/terminator.go generated vendored
@@ -1,158 +0,0 @@
/*
   exif-terminator
   Copyright (C) 2022 SuperSeriousBusiness admin@gotosocial.org

   This program is free software: you can redistribute it and/or modify
   it under the terms of the GNU Affero General Public License as published by
   the Free Software Foundation, either version 3 of the License, or
   (at your option) any later version.

   This program is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
   GNU Affero General Public License for more details.

   You should have received a copy of the GNU Affero General Public License
   along with this program. If not, see <http://www.gnu.org/licenses/>.
*/

package terminator

import (
	"bufio"
	"bytes"
	"errors"
	"fmt"
	"io"

	jpegstructure "github.com/superseriousbusiness/go-jpeg-image-structure/v2"
	pngstructure "github.com/superseriousbusiness/go-png-image-structure/v2"
)

func Terminate(in io.Reader, fileSize int, mediaType string) (io.Reader, error) {
	// To avoid keeping too much stuff
	// in memory we want to pipe data
	// directly to the reader.
	pipeReader, pipeWriter := io.Pipe()

	// We don't know ahead of time how long
	// segments might be: they could be as
	// large as the file itself, so we need
	// a buffer with generous overhead.
	scanner := bufio.NewScanner(in)
	scanner.Buffer([]byte{}, fileSize)

	var err error
	switch mediaType {
	case "image/jpeg", "jpeg", "jpg":
		err = terminateJpeg(scanner, pipeWriter, fileSize)

	case "image/webp", "webp":
		err = terminateWebp(scanner, pipeWriter)

	case "image/png", "png":
		// For pngs we need to skip the header bytes, so read
		// them in and check we're really dealing with a png.
		header := make([]byte, len(pngstructure.PngSignature))
		if _, headerError := in.Read(header); headerError != nil {
			err = headerError
			break
		}

		if !bytes.Equal(header, pngstructure.PngSignature[:]) {
			err = errors.New("could not decode png: invalid header")
			break
		}

		err = terminatePng(scanner, pipeWriter)
	default:
		err = fmt.Errorf("mediaType %s cannot be processed", mediaType)
	}

	return pipeReader, err
}

func terminateJpeg(scanner *bufio.Scanner, writer *io.PipeWriter, expectedFileSize int) error {
	v := &jpegVisitor{
		writer:           writer,
		expectedFileSize: expectedFileSize,
	}

	// Provide the visitor to the splitter so
	// that it triggers on every section scan.
	js := jpegstructure.NewJpegSplitter(v)

	// The visitor also needs to read back the
	// list of segments: for this it needs to
	// know what jpeg splitter it's attached to,
	// so give it a pointer to the splitter.
	v.js = js

	// Jpeg visitor's 'split' function
	// satisfies bufio.SplitFunc{}.
	scanner.Split(js.Split)

	go scanAndClose(scanner, writer)
	return nil
}

func terminateWebp(scanner *bufio.Scanner, writer *io.PipeWriter) error {
	v := &webpVisitor{
		writer: writer,
	}

	// Webp visitor's 'split' function
	// satisfies bufio.SplitFunc{}.
	scanner.Split(v.split)

	go scanAndClose(scanner, writer)
	return nil
}

func terminatePng(scanner *bufio.Scanner, writer *io.PipeWriter) error {
	ps := pngstructure.NewPngSplitter()

	// Don't bother checking CRC;
	// we're overwriting it anyway.
	ps.DoCheckCrc(false)

	v := &pngVisitor{
		ps:               ps,
		writer:           writer,
		lastWrittenChunk: -1,
	}

	// Png visitor's 'split' function
	// satisfies bufio.SplitFunc{}.
	scanner.Split(v.split)

	go scanAndClose(scanner, writer)
	return nil
}

// scanAndClose scans through the given scanner until there's
// nothing left to scan, and then closes the writer so that the
// reader on the other side of the pipe knows that we're done.
//
// Any error encountered when scanning will be logged by terminator.
//
// Due to the nature of io.Pipe, writing won't actually work
// until the pipeReader starts being read by the caller, which
// is why this function should always be called asynchronously.
func scanAndClose(scanner *bufio.Scanner, writer *io.PipeWriter) {
	var err error

	defer func() {
		// Always close writer, using returned
		// scanner error (if any). If err is nil
		// then the standard io.EOF will be used.
		// (this will not overwrite existing).
		writer.CloseWithError(err)
	}()

	for scanner.Scan() {
	}

	// Set error on return.
	err = scanner.Err()
}
101 vendor/codeberg.org/superseriousbusiness/exif-terminator/webp.go generated vendored
@@ -1,101 +0,0 @@
/*
   exif-terminator
   Copyright (C) 2022 SuperSeriousBusiness admin@gotosocial.org

   This program is free software: you can redistribute it and/or modify
   it under the terms of the GNU Affero General Public License as published by
   the Free Software Foundation, either version 3 of the License, or
   (at your option) any later version.

   This program is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
   GNU Affero General Public License for more details.

   You should have received a copy of the GNU Affero General Public License
   along with this program. If not, see <http://www.gnu.org/licenses/>.
*/

package terminator

import (
	"encoding/binary"
	"errors"
	"io"
)

const (
	riffHeaderSize = 4 * 3
)

var (
	riffHeader = [4]byte{'R', 'I', 'F', 'F'}
	webpHeader = [4]byte{'W', 'E', 'B', 'P'}
	exifFourcc = [4]byte{'E', 'X', 'I', 'F'}
	xmpFourcc  = [4]byte{'X', 'M', 'P', ' '}

	errNoRiffHeader = errors.New("no RIFF header")
	errNoWebpHeader = errors.New("not a WEBP file")
)

type webpVisitor struct {
	writer     io.Writer
	doneHeader bool
}

func fourCC(b []byte) [4]byte {
	return [4]byte{b[0], b[1], b[2], b[3]}
}

func (v *webpVisitor) split(data []byte, atEOF bool) (advance int, token []byte, err error) {
	// parse/write the header first
	if !v.doneHeader {
		if len(data) < riffHeaderSize {
			// need the full header
			return
		}
		if fourCC(data) != riffHeader {
			err = errNoRiffHeader
			return
		}
		if fourCC(data[8:]) != webpHeader {
			err = errNoWebpHeader
			return
		}
		if _, err = v.writer.Write(data[:riffHeaderSize]); err != nil {
			return
		}
		advance += riffHeaderSize
		data = data[riffHeaderSize:]
		v.doneHeader = true
	}

	// need enough for fourcc and size
	if len(data) < 8 {
		return
	}
	size := int64(binary.LittleEndian.Uint32(data[4:]))
	if (size & 1) != 0 {
		// odd chunk size - extra padding byte
		size++
	}
	// wait until there is enough
	if int64(len(data)-8) < size {
		return
	}

	fourcc := fourCC(data)
	rawChunkData := data[8 : 8+size]
	if fourcc == exifFourcc || fourcc == xmpFourcc {
		// replace exif/xmp with blank
		rawChunkData = make([]byte, size)
	}

	if _, err = v.writer.Write(data[:8]); err == nil {
		if _, err = v.writer.Write(rawChunkData); err == nil {
			advance += 8 + int(size)
		}
	}

	return
}
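The webp visitor above walks RIFF chunks: a 4-byte FourCC, a little-endian uint32 payload size, then the payload, padded to an even length. A stdlib sketch of decoding one such chunk header (the buffer contents below are fabricated for illustration):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// riffChunk reads one RIFF chunk header from b, returning the FourCC,
// the declared payload size, and the padded size the payload actually
// occupies in the stream (odd sizes gain one padding byte).
func riffChunk(b []byte) (fourcc string, size, padded uint32) {
	fourcc = string(b[:4])
	size = binary.LittleEndian.Uint32(b[4:8])
	padded = size
	if padded&1 != 0 {
		padded++ // odd chunk size: one extra padding byte
	}
	return fourcc, size, padded
}

func main() {
	// A fabricated 3-byte "EXIF" chunk: fourcc + size + payload + pad.
	buf := []byte{'E', 'X', 'I', 'F', 3, 0, 0, 0, 1, 2, 3, 0}
	fourcc, size, padded := riffChunk(buf)
	fmt.Println(fourcc, size, padded)
}
```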
1 vendor/github.com/abema/go-mp4/.gitignore generated vendored
@@ -1 +0,0 @@
vendor
21 vendor/github.com/abema/go-mp4/LICENSE generated vendored
@@ -1,21 +0,0 @@
MIT License

Copyright (c) 2020 AbemaTV

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
159 vendor/github.com/abema/go-mp4/README.md generated vendored
@@ -1,159 +0,0 @@
go-mp4
------

[](https://pkg.go.dev/github.com/abema/go-mp4)

[](https://coveralls.io/github/abema/go-mp4)
[](https://goreportcard.com/report/github.com/abema/go-mp4)

go-mp4 is a Go library which provides low-level I/O interfaces for MP4.
This library lets you parse or build any MP4 boxes (atoms) directly.

go-mp4 provides very flexible interfaces for reading boxes.
If you want to read only specific parts of an MP4 file, this library extracts those boxes via the io.ReadSeeker interface.

On the other hand, this library is not suitable for complex data conversions.

## Integration with your Go application

### Reading

You can parse an MP4 file as follows:

```go
// expand all boxes
_, err := mp4.ReadBoxStructure(file, func(h *mp4.ReadHandle) (interface{}, error) {
	fmt.Println("depth", len(h.Path))

	// Box Type (e.g. "mdhd", "tfdt", "mdat")
	fmt.Println("type", h.BoxInfo.Type.String())

	// Box Size
	fmt.Println("size", h.BoxInfo.Size)

	if h.BoxInfo.IsSupportedType() {
		// Payload
		box, _, err := h.ReadPayload()
		if err != nil {
			return nil, err
		}
		str, err := mp4.Stringify(box, h.BoxInfo.Context)
		if err != nil {
			return nil, err
		}
		fmt.Println("payload", str)

		// Expands children
		return h.Expand()
	}
	return nil, nil
})
```

```go
// extract specific boxes
boxes, err := mp4.ExtractBoxWithPayload(file, nil, mp4.BoxPath{mp4.BoxTypeMoov(), mp4.BoxTypeTrak(), mp4.BoxTypeTkhd()})
if err != nil {
	:
}
for _, box := range boxes {
	tkhd := box.Payload.(*mp4.Tkhd)
	fmt.Println("track ID:", tkhd.TrackID)
}
```

```go
// get basic information
info, err := mp4.Probe(bufseekio.NewReadSeeker(file, 1024, 4))
if err != nil {
	:
}
fmt.Println("track num:", len(info.Tracks))
```

### Writing

Writer helps you to write a box tree.
The following sample code edits an emsg box and writes it to another file.

```go
r := bufseekio.NewReadSeeker(inputFile, 128*1024, 4)
w := mp4.NewWriter(outputFile)
_, err = mp4.ReadBoxStructure(r, func(h *mp4.ReadHandle) (interface{}, error) {
	switch h.BoxInfo.Type {
	case mp4.BoxTypeEmsg():
		// write box size and box type
		_, err := w.StartBox(&h.BoxInfo)
		if err != nil {
			return nil, err
		}
		// read payload
		box, _, err := h.ReadPayload()
		if err != nil {
			return nil, err
		}
		// update MessageData
		emsg := box.(*mp4.Emsg)
		emsg.MessageData = []byte("hello world")
		// write box payload
		if _, err := mp4.Marshal(w, emsg, h.BoxInfo.Context); err != nil {
			return nil, err
		}
		// rewrite box size
		_, err = w.EndBox()
		return nil, err
	default:
		// copy all
		return nil, w.CopyBox(r, &h.BoxInfo)
	}
})
```

### User-defined Boxes

You can create additional box definitions as follows:

```go
func BoxTypeXxxx() BoxType { return mp4.StrToBoxType("xxxx") }

func init() {
	mp4.AddBoxDef(&Xxxx{}, 0)
}

type Xxxx struct {
	FullBox   `mp4:"0,extend"`
	UI32      uint32 `mp4:"1,size=32"`
	ByteArray []byte `mp4:"2,size=8,len=dynamic"`
}

func (*Xxxx) GetType() BoxType {
	return BoxTypeXxxx()
}
```

### Buffering

go-mp4 has no buffering feature for I/O.
If you need to reduce Read function calls, you can wrap the io.ReadSeeker with [bufseekio](https://github.com/sunfish-shogi/bufseekio).

## Command Line Tool

Install mp4tool as follows:

```sh
go install github.com/abema/go-mp4/cmd/mp4tool@latest

mp4tool -help
```

For example, the `mp4tool dump MP4_FILE_NAME` command prints the MP4 box tree as follows:

```
[moof] Size=504
  [mfhd] Size=16 Version=0 Flags=0x000000 SequenceNumber=1
  [traf] Size=480
    [tfhd] Size=28 Version=0 Flags=0x020038 TrackID=1 DefaultSampleDuration=9000 DefaultSampleSize=33550 DefaultSampleFlags=0x1010000
    [tfdt] Size=20 Version=1 Flags=0x000000 BaseMediaDecodeTimeV1=0
    [trun] Size=424 ... (use -a option to show all)
  [mdat] Size=44569 Data=[...] (use -mdat option to expand)
```
19 vendor/github.com/abema/go-mp4/anytype.go generated vendored
@@ -1,19 +0,0 @@
package mp4

type IAnyType interface {
	IBox
	SetType(BoxType)
}

type AnyTypeBox struct {
	Box
	Type BoxType
}

func (e *AnyTypeBox) GetType() BoxType {
	return e.Type
}

func (e *AnyTypeBox) SetType(boxType BoxType) {
	e.Type = boxType
}
188 vendor/github.com/abema/go-mp4/box.go generated vendored
@@ -1,188 +0,0 @@
package mp4

import (
	"errors"
	"io"
	"math"

	"github.com/abema/go-mp4/internal/bitio"
)

const LengthUnlimited = math.MaxUint32

type ICustomFieldObject interface {
	// GetFieldSize returns the size of a dynamic field
	GetFieldSize(name string, ctx Context) uint

	// GetFieldLength returns the length of a dynamic field
	GetFieldLength(name string, ctx Context) uint

	// IsOptFieldEnabled checks whether the optional field is enabled
	IsOptFieldEnabled(name string, ctx Context) bool

	// StringifyField returns the field value as a string
	StringifyField(name string, indent string, depth int, ctx Context) (string, bool)

	IsPString(name string, bytes []byte, remainingSize uint64, ctx Context) bool

	BeforeUnmarshal(r io.ReadSeeker, size uint64, ctx Context) (n uint64, override bool, err error)

	OnReadField(name string, r bitio.ReadSeeker, leftBits uint64, ctx Context) (rbits uint64, override bool, err error)

	OnWriteField(name string, w bitio.Writer, ctx Context) (wbits uint64, override bool, err error)
}

type BaseCustomFieldObject struct{}

// GetFieldSize returns the size of a dynamic field
func (box *BaseCustomFieldObject) GetFieldSize(string, Context) uint {
	panic(errors.New("GetFieldSize not implemented"))
}

// GetFieldLength returns the length of a dynamic field
func (box *BaseCustomFieldObject) GetFieldLength(string, Context) uint {
	panic(errors.New("GetFieldLength not implemented"))
}

// IsOptFieldEnabled checks whether the optional field is enabled
func (box *BaseCustomFieldObject) IsOptFieldEnabled(string, Context) bool {
	return false
}

// StringifyField returns the field value as a string
func (box *BaseCustomFieldObject) StringifyField(string, string, int, Context) (string, bool) {
	return "", false
}

func (*BaseCustomFieldObject) IsPString(name string, bytes []byte, remainingSize uint64, ctx Context) bool {
	return true
}

func (*BaseCustomFieldObject) BeforeUnmarshal(io.ReadSeeker, uint64, Context) (uint64, bool, error) {
	return 0, false, nil
}

func (*BaseCustomFieldObject) OnReadField(string, bitio.ReadSeeker, uint64, Context) (uint64, bool, error) {
	return 0, false, nil
}

func (*BaseCustomFieldObject) OnWriteField(string, bitio.Writer, Context) (uint64, bool, error) {
	return 0, false, nil
}

// IImmutableBox is the common interface of boxes
type IImmutableBox interface {
	ICustomFieldObject

	// GetVersion returns the box version
	GetVersion() uint8

	// GetFlags returns the flags
	GetFlags() uint32

	// CheckFlag checks the flag status
	CheckFlag(uint32) bool

	// GetType returns the BoxType
	GetType() BoxType
}

// IBox is the common interface of boxes
type IBox interface {
	IImmutableBox

	// SetVersion sets the box version
	SetVersion(uint8)

	// SetFlags sets the flags
	SetFlags(uint32)

	// AddFlag adds the flag
	AddFlag(uint32)

	// RemoveFlag removes the flag
	RemoveFlag(uint32)
}

type Box struct {
	BaseCustomFieldObject
}

// GetVersion returns the box version
func (box *Box) GetVersion() uint8 {
	return 0
}

// SetVersion sets the box version
func (box *Box) SetVersion(uint8) {
}

// GetFlags returns the flags
func (box *Box) GetFlags() uint32 {
	return 0x000000
}

// CheckFlag checks the flag status
func (box *Box) CheckFlag(flag uint32) bool {
	return true
}

// SetFlags sets the flags
func (box *Box) SetFlags(uint32) {
}

// AddFlag adds the flag
func (box *Box) AddFlag(flag uint32) {
}

// RemoveFlag removes the flag
func (box *Box) RemoveFlag(flag uint32) {
}

// FullBox is the ISOBMFF FullBox
type FullBox struct {
	BaseCustomFieldObject
	Version uint8   `mp4:"0,size=8"`
	Flags   [3]byte `mp4:"1,size=8"`
}

// GetVersion returns the box version
func (box *FullBox) GetVersion() uint8 {
	return box.Version
}

// SetVersion sets the box version
func (box *FullBox) SetVersion(version uint8) {
	box.Version = version
}

// GetFlags returns the flags
func (box *FullBox) GetFlags() uint32 {
	flag := uint32(box.Flags[0]) << 16
	flag ^= uint32(box.Flags[1]) << 8
	flag ^= uint32(box.Flags[2])
	return flag
}

// CheckFlag checks the flag status
func (box *FullBox) CheckFlag(flag uint32) bool {
	return box.GetFlags()&flag != 0
}

// SetFlags sets the flags
func (box *FullBox) SetFlags(flags uint32) {
	box.Flags[0] = byte(flags >> 16)
	box.Flags[1] = byte(flags >> 8)
	box.Flags[2] = byte(flags)
}

// AddFlag adds the flag
func (box *FullBox) AddFlag(flag uint32) {
	box.SetFlags(box.GetFlags() | flag)
}

// RemoveFlag removes the flag
func (box *FullBox) RemoveFlag(flag uint32) {
	box.SetFlags(box.GetFlags() & (^flag))
}
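FullBox stores its 24-bit flags word as three big-endian bytes; the GetFlags/SetFlags methods above shift them in and out of a uint32. A stdlib-only round-trip sketch of that packing (helper names ours):

```go
package main

import "fmt"

// packFlags splits a 24-bit flags word into the three bytes a FullBox
// carries on the wire, most significant byte first.
func packFlags(flags uint32) [3]byte {
	return [3]byte{byte(flags >> 16), byte(flags >> 8), byte(flags)}
}

// unpackFlags reassembles the uint32 from those three bytes.
func unpackFlags(b [3]byte) uint32 {
	return uint32(b[0])<<16 | uint32(b[1])<<8 | uint32(b[2])
}

func main() {
	b := packFlags(0x020038) // e.g. a tfhd flags value from the README dump
	fmt.Printf("%v %#06x\n", b, unpackFlags(b))
}
```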
162 vendor/github.com/abema/go-mp4/box_info.go generated vendored
@@ -1,162 +0,0 @@
package mp4

import (
	"bytes"
	"encoding/binary"
	"fmt"
	"io"
	"math"
)

type Context struct {
	// IsQuickTimeCompatible represents whether ftyp.compatible_brands contains "qt ".
	IsQuickTimeCompatible bool

	// QuickTimeKeysMetaEntryCount is the expected number of items under the ilst box, as observed from the keys box.
	QuickTimeKeysMetaEntryCount int

	// UnderWave represents whether the current box is under the wave box.
	UnderWave bool

	// UnderIlst represents whether the current box is under the ilst box.
	UnderIlst bool

	// UnderIlstMeta represents whether the current box is under the metadata box under the ilst box.
	UnderIlstMeta bool

	// UnderIlstFreeMeta represents whether the current box is under the "----" box.
	UnderIlstFreeMeta bool

	// UnderUdta represents whether the current box is under the udta box.
	UnderUdta bool
}

// BoxInfo has common information of a box
type BoxInfo struct {
	// Offset specifies an offset of the box in a file.
	Offset uint64

	// Size specifies the size (bytes) of the box.
	Size uint64

	// HeaderSize specifies the size (bytes) of the common fields which are defined as "Box" class members in ISO/IEC 14496-12.
	HeaderSize uint64

	// Type specifies the box type, which is represented by 4 characters.
	Type BoxType

	// ExtendToEOF is set true when Box.size is zero. It means that the end of the box equals the end of the file.
	ExtendToEOF bool

	// Context would be set by ReadBoxStructure, not ReadBoxInfo.
	Context
}

func (bi *BoxInfo) IsSupportedType() bool {
	return bi.Type.IsSupported(bi.Context)
}

const (
	SmallHeaderSize = 8
	LargeHeaderSize = 16
)

// WriteBoxInfo writes the common fields which are defined as "Box" class members in ISO/IEC 14496-12.
// This function ignores bi.Offset and returns a BoxInfo which contains the real Offset and recalculated Size/HeaderSize.
func WriteBoxInfo(w io.WriteSeeker, bi *BoxInfo) (*BoxInfo, error) {
	offset, err := w.Seek(0, io.SeekCurrent)
	if err != nil {
		return nil, err
	}

	var data []byte
	if bi.ExtendToEOF {
		data = make([]byte, SmallHeaderSize)
	} else if bi.Size <= math.MaxUint32 && bi.HeaderSize != LargeHeaderSize {
		data = make([]byte, SmallHeaderSize)
		binary.BigEndian.PutUint32(data, uint32(bi.Size))
	} else {
		data = make([]byte, LargeHeaderSize)
		binary.BigEndian.PutUint32(data, 1)
		binary.BigEndian.PutUint64(data[SmallHeaderSize:], bi.Size)
	}
	data[4] = bi.Type[0]
	data[5] = bi.Type[1]
	data[6] = bi.Type[2]
	data[7] = bi.Type[3]

	if _, err := w.Write(data); err != nil {
		return nil, err
	}

	return &BoxInfo{
		Offset:      uint64(offset),
		Size:        bi.Size - bi.HeaderSize + uint64(len(data)),
		HeaderSize:  uint64(len(data)),
		Type:        bi.Type,
		ExtendToEOF: bi.ExtendToEOF,
	}, nil
}

// ReadBoxInfo reads the common fields which are defined as "Box" class members in ISO/IEC 14496-12.
func ReadBoxInfo(r io.ReadSeeker) (*BoxInfo, error) {
	offset, err := r.Seek(0, io.SeekCurrent)
	if err != nil {
		return nil, err
	}

	bi := &BoxInfo{
		Offset: uint64(offset),
	}

	// read 8 bytes
	buf := bytes.NewBuffer(make([]byte, 0, SmallHeaderSize))
	if _, err := io.CopyN(buf, r, SmallHeaderSize); err != nil {
		return nil, err
	}
	bi.HeaderSize += SmallHeaderSize

	// pick size and type
	data := buf.Bytes()
	bi.Size = uint64(binary.BigEndian.Uint32(data))
	bi.Type = BoxType{data[4], data[5], data[6], data[7]}

	if bi.Size == 0 {
		// box extends to end of file
		offsetEOF, err := r.Seek(0, io.SeekEnd)
		if err != nil {
			return nil, err
		}
		bi.Size = uint64(offsetEOF) - bi.Offset
		bi.ExtendToEOF = true
		if _, err := bi.SeekToPayload(r); err != nil {
			return nil, err
		}
	} else if bi.Size == 1 {
		// read 8 more bytes
		buf.Reset()
		if _, err := io.CopyN(buf, r, LargeHeaderSize-SmallHeaderSize); err != nil {
			return nil, err
		}
		bi.HeaderSize += LargeHeaderSize - SmallHeaderSize
		bi.Size = binary.BigEndian.Uint64(buf.Bytes())
	}

	if bi.Size == 0 {
		return nil, fmt.Errorf("invalid size")
	}

	return bi, nil
}

func (bi *BoxInfo) SeekToStart(s io.Seeker) (int64, error) {
	return s.Seek(int64(bi.Offset), io.SeekStart)
}

func (bi *BoxInfo) SeekToPayload(s io.Seeker) (int64, error) {
	return s.Seek(int64(bi.Offset+bi.HeaderSize), io.SeekStart)
}

func (bi *BoxInfo) SeekToEnd(s io.Seeker) (int64, error) {
	return s.Seek(int64(bi.Offset+bi.Size), io.SeekStart)
}
24 vendor/github.com/abema/go-mp4/box_types_3gpp.go generated vendored
@@ -1,24 +0,0 @@
package mp4

var udta3GppMetaBoxTypes = []BoxType{
	StrToBoxType("titl"),
	StrToBoxType("dscp"),
	StrToBoxType("cprt"),
	StrToBoxType("perf"),
	StrToBoxType("auth"),
	StrToBoxType("gnre"),
}

func init() {
	for _, bt := range udta3GppMetaBoxTypes {
		AddAnyTypeBoxDefEx(&Udta3GppString{}, bt, isUnderUdta, 0)
	}
}

type Udta3GppString struct {
	AnyTypeBox
	FullBox  `mp4:"0,extend"`
	Pad      bool    `mp4:"1,size=1,hidden"`
	Language [3]byte `mp4:"2,size=5,iso639-2"` // ISO-639-2/T language code
	Data     []byte  `mp4:"3,size=8,string"`
}
44 vendor/github.com/abema/go-mp4/box_types_av1.go generated vendored
@@ -1,44 +0,0 @@
package mp4

/*************************** av01 ****************************/

// https://aomediacodec.github.io/av1-isobmff

func BoxTypeAv01() BoxType { return StrToBoxType("av01") }

func init() {
	AddAnyTypeBoxDef(&VisualSampleEntry{}, BoxTypeAv01())
}

/*************************** av1C ****************************/

// https://aomediacodec.github.io/av1-isobmff

func BoxTypeAv1C() BoxType { return StrToBoxType("av1C") }

func init() {
	AddBoxDef(&Av1C{})
}

type Av1C struct {
	Box
	Marker                           uint8   `mp4:"0,size=1,const=1"`
	Version                          uint8   `mp4:"1,size=7,const=1"`
	SeqProfile                       uint8   `mp4:"2,size=3"`
	SeqLevelIdx0                     uint8   `mp4:"3,size=5"`
	SeqTier0                         uint8   `mp4:"4,size=1"`
	HighBitdepth                     uint8   `mp4:"5,size=1"`
	TwelveBit                        uint8   `mp4:"6,size=1"`
	Monochrome                       uint8   `mp4:"7,size=1"`
	ChromaSubsamplingX               uint8   `mp4:"8,size=1"`
	ChromaSubsamplingY               uint8   `mp4:"9,size=1"`
	ChromaSamplePosition             uint8   `mp4:"10,size=2"`
	Reserved                         uint8   `mp4:"11,size=3,const=0"`
	InitialPresentationDelayPresent  uint8   `mp4:"12,size=1"`
	InitialPresentationDelayMinusOne uint8   `mp4:"13,size=4"`
	ConfigOBUs                       []uint8 `mp4:"14,size=8"`
}

func (Av1C) GetType() BoxType {
	return BoxTypeAv1C()
}
36 vendor/github.com/abema/go-mp4/box_types_etsi_ts_102_366.go generated vendored
@@ -1,36 +0,0 @@
package mp4

/*************************** ac-3 ****************************/

// https://www.etsi.org/deliver/etsi_ts/102300_102399/102366/01.04.01_60/ts_102366v010401p.pdf

func BoxTypeAC3() BoxType { return StrToBoxType("ac-3") }

func init() {
	AddAnyTypeBoxDef(&AudioSampleEntry{}, BoxTypeAC3())
}

/*************************** dac3 ****************************/

// https://www.etsi.org/deliver/etsi_ts/102300_102399/102366/01.04.01_60/ts_102366v010401p.pdf

func BoxTypeDAC3() BoxType { return StrToBoxType("dac3") }

func init() {
	AddBoxDef(&Dac3{})
}

type Dac3 struct {
	Box
	Fscod       uint8 `mp4:"0,size=2"`
	Bsid        uint8 `mp4:"1,size=5"`
	Bsmod       uint8 `mp4:"2,size=3"`
	Acmod       uint8 `mp4:"3,size=3"`
	LfeOn       uint8 `mp4:"4,size=1"`
	BitRateCode uint8 `mp4:"5,size=5"`
	Reserved    uint8 `mp4:"6,size=5,const=0"`
}

func (Dac3) GetType() BoxType {
	return BoxTypeDAC3()
}
2460 vendor/github.com/abema/go-mp4/box_types_iso14496_12.go generated vendored
File diff suppressed because it is too large
126 vendor/github.com/abema/go-mp4/box_types_iso14496_14.go generated vendored
@@ -1,126 +0,0 @@
package mp4

import "fmt"

/*************************** esds ****************************/

// https://developer.apple.com/library/content/documentation/QuickTime/QTFF/QTFFChap3/qtff3.html

func BoxTypeEsds() BoxType { return StrToBoxType("esds") }

func init() {
	AddBoxDef(&Esds{}, 0)
}

const (
	ESDescrTag            = 0x03
	DecoderConfigDescrTag = 0x04
	DecSpecificInfoTag    = 0x05
	SLConfigDescrTag      = 0x06
)

// Esds is the ES descriptor box
type Esds struct {
	FullBox     `mp4:"0,extend"`
	Descriptors []Descriptor `mp4:"1,array"`
}

// GetType returns the BoxType
func (*Esds) GetType() BoxType {
	return BoxTypeEsds()
}

type Descriptor struct {
	BaseCustomFieldObject
	Tag                     int8                     `mp4:"0,size=8"` // must be 0x03
	Size                    uint32                   `mp4:"1,varint"`
	ESDescriptor            *ESDescriptor            `mp4:"2,extend,opt=dynamic"`
	DecoderConfigDescriptor *DecoderConfigDescriptor `mp4:"3,extend,opt=dynamic"`
	Data                    []byte                   `mp4:"4,size=8,opt=dynamic,len=dynamic"`
}

// GetFieldLength returns the length of a dynamic field
func (ds *Descriptor) GetFieldLength(name string, ctx Context) uint {
	switch name {
	case "Data":
		return uint(ds.Size)
	}
	panic(fmt.Errorf("invalid name of dynamic-length field: boxType=esds fieldName=%s", name))
}

func (ds *Descriptor) IsOptFieldEnabled(name string, ctx Context) bool {
	switch ds.Tag {
	case ESDescrTag:
		return name == "ESDescriptor"
	case DecoderConfigDescrTag:
		return name == "DecoderConfigDescriptor"
	default:
		return name == "Data"
	}
}

// StringifyField returns the field value as a string
func (ds *Descriptor) StringifyField(name string, indent string, depth int, ctx Context) (string, bool) {
	switch name {
	case "Tag":
		switch ds.Tag {
		case ESDescrTag:
			return "ESDescr", true
		case DecoderConfigDescrTag:
			return "DecoderConfigDescr", true
		case DecSpecificInfoTag:
			return "DecSpecificInfo", true
		case SLConfigDescrTag:
			return "SLConfigDescr", true
		default:
			return "", false
		}
	default:
		return "", false
	}
}

type ESDescriptor struct {
	BaseCustomFieldObject
	ESID                 uint16 `mp4:"0,size=16"`
	StreamDependenceFlag bool   `mp4:"1,size=1"`
	UrlFlag              bool   `mp4:"2,size=1"`
	OcrStreamFlag        bool   `mp4:"3,size=1"`
	StreamPriority       int8   `mp4:"4,size=5"`
	DependsOnESID        uint16 `mp4:"5,size=16,opt=dynamic"`
	URLLength            uint8  `mp4:"6,size=8,opt=dynamic"`
	URLString            []byte `mp4:"7,size=8,len=dynamic,opt=dynamic,string"`
	OCRESID              uint16 `mp4:"8,size=16,opt=dynamic"`
}

func (esds *ESDescriptor) GetFieldLength(name string, ctx Context) uint {
	switch name {
	case "URLString":
		return uint(esds.URLLength)
	}
	panic(fmt.Errorf("invalid name of dynamic-length field: boxType=ESDescriptor fieldName=%s", name))
}

func (esds *ESDescriptor) IsOptFieldEnabled(name string, ctx Context) bool {
	switch name {
	case "DependsOnESID":
		return esds.StreamDependenceFlag
	case "URLLength", "URLString":
		return esds.UrlFlag
	case "OCRESID":
		return esds.OcrStreamFlag
	default:
		return false
	}
}

type DecoderConfigDescriptor struct {
	BaseCustomFieldObject
	ObjectTypeIndication byte   `mp4:"0,size=8"`
	StreamType           int8   `mp4:"1,size=6"`
	UpStream             bool   `mp4:"2,size=1"`
	Reserved             bool   `mp4:"3,size=1"`
	BufferSizeDB         uint32 `mp4:"4,size=24"`
	MaxBitrate           uint32 `mp4:"5,size=32"`
	AvgBitrate           uint32 `mp4:"6,size=32"`
}
35
vendor/github.com/abema/go-mp4/box_types_iso23001_5.go
generated
vendored
@@ -1,35 +0,0 @@
package mp4

/*************************** ipcm ****************************/

func BoxTypeIpcm() BoxType { return StrToBoxType("ipcm") }

func init() {
	AddAnyTypeBoxDef(&AudioSampleEntry{}, BoxTypeIpcm())
}

/*************************** fpcm ****************************/

func BoxTypeFpcm() BoxType { return StrToBoxType("fpcm") }

func init() {
	AddAnyTypeBoxDef(&AudioSampleEntry{}, BoxTypeFpcm())
}

/*************************** pcmC ****************************/

func BoxTypePcmC() BoxType { return StrToBoxType("pcmC") }

func init() {
	AddBoxDef(&PcmC{}, 0, 1)
}

type PcmC struct {
	FullBox       `mp4:"0,extend"`
	FormatFlags   uint8 `mp4:"1,size=8"`
	PCMSampleSize uint8 `mp4:"2,size=8"`
}

func (PcmC) GetType() BoxType {
	return BoxTypePcmC()
}
108
vendor/github.com/abema/go-mp4/box_types_iso23001_7.go
generated
vendored
@@ -1,108 +0,0 @@
package mp4

import (
	"bytes"
	"fmt"

	"github.com/google/uuid"
)

/*************************** pssh ****************************/

func BoxTypePssh() BoxType { return StrToBoxType("pssh") }

func init() {
	AddBoxDef(&Pssh{}, 0, 1)
}

// Pssh is ISOBMFF pssh box type
type Pssh struct {
	FullBox  `mp4:"0,extend"`
	SystemID [16]byte  `mp4:"1,size=8,uuid"`
	KIDCount uint32    `mp4:"2,size=32,nver=0"`
	KIDs     []PsshKID `mp4:"3,nver=0,len=dynamic,size=128"`
	DataSize int32     `mp4:"4,size=32"`
	Data     []byte    `mp4:"5,size=8,len=dynamic"`
}

type PsshKID struct {
	KID [16]byte `mp4:"0,size=8,uuid"`
}

// GetFieldLength returns the length of a dynamic field
func (pssh *Pssh) GetFieldLength(name string, ctx Context) uint {
	switch name {
	case "KIDs":
		return uint(pssh.KIDCount)
	case "Data":
		return uint(pssh.DataSize)
	}
	panic(fmt.Errorf("invalid name of dynamic-length field: boxType=pssh fieldName=%s", name))
}

// StringifyField returns the field value as a string
func (pssh *Pssh) StringifyField(name string, indent string, depth int, ctx Context) (string, bool) {
	switch name {
	case "KIDs":
		buf := bytes.NewBuffer(nil)
		buf.WriteString("[")
		for i, e := range pssh.KIDs {
			if i != 0 {
				buf.WriteString(", ")
			}
			buf.WriteString(uuid.UUID(e.KID).String())
		}
		buf.WriteString("]")
		return buf.String(), true
	default:
		return "", false
	}
}

// GetType returns the BoxType
func (*Pssh) GetType() BoxType {
	return BoxTypePssh()
}

/*************************** tenc ****************************/

func BoxTypeTenc() BoxType { return StrToBoxType("tenc") }

func init() {
	AddBoxDef(&Tenc{}, 0, 1)
}

// Tenc is ISOBMFF tenc box type
type Tenc struct {
	FullBox                `mp4:"0,extend"`
	Reserved               uint8    `mp4:"1,size=8,dec"`
	DefaultCryptByteBlock  uint8    `mp4:"2,size=4,dec"` // always 0 on version 0
	DefaultSkipByteBlock   uint8    `mp4:"3,size=4,dec"` // always 0 on version 0
	DefaultIsProtected     uint8    `mp4:"4,size=8,dec"`
	DefaultPerSampleIVSize uint8    `mp4:"5,size=8,dec"`
	DefaultKID             [16]byte `mp4:"6,size=8,uuid"`
	DefaultConstantIVSize  uint8    `mp4:"7,size=8,opt=dynamic,dec"`
	DefaultConstantIV      []byte   `mp4:"8,size=8,opt=dynamic,len=dynamic"`
}

func (tenc *Tenc) IsOptFieldEnabled(name string, ctx Context) bool {
	switch name {
	case "DefaultConstantIVSize", "DefaultConstantIV":
		return tenc.DefaultIsProtected == 1 && tenc.DefaultPerSampleIVSize == 0
	}
	return false
}

func (tenc *Tenc) GetFieldLength(name string, ctx Context) uint {
	switch name {
	case "DefaultConstantIV":
		return uint(tenc.DefaultConstantIVSize)
	}
	panic(fmt.Errorf("invalid name of dynamic-length field: boxType=tenc fieldName=%s", name))
}

// GetType returns the BoxType
func (*Tenc) GetType() BoxType {
	return BoxTypeTenc()
}
257
vendor/github.com/abema/go-mp4/box_types_metadata.go
generated
vendored
@@ -1,257 +0,0 @@
package mp4

import (
	"fmt"

	"github.com/abema/go-mp4/internal/util"
)

/*************************** ilst ****************************/

func BoxTypeIlst() BoxType { return StrToBoxType("ilst") }
func BoxTypeData() BoxType { return StrToBoxType("data") }

var ilstMetaBoxTypes = []BoxType{
	StrToBoxType("----"),
	StrToBoxType("aART"),
	StrToBoxType("akID"),
	StrToBoxType("apID"),
	StrToBoxType("atID"),
	StrToBoxType("cmID"),
	StrToBoxType("cnID"),
	StrToBoxType("covr"),
	StrToBoxType("cpil"),
	StrToBoxType("cprt"),
	StrToBoxType("desc"),
	StrToBoxType("disk"),
	StrToBoxType("egid"),
	StrToBoxType("geID"),
	StrToBoxType("gnre"),
	StrToBoxType("pcst"),
	StrToBoxType("pgap"),
	StrToBoxType("plID"),
	StrToBoxType("purd"),
	StrToBoxType("purl"),
	StrToBoxType("rtng"),
	StrToBoxType("sfID"),
	StrToBoxType("soaa"),
	StrToBoxType("soal"),
	StrToBoxType("soar"),
	StrToBoxType("soco"),
	StrToBoxType("sonm"),
	StrToBoxType("sosn"),
	StrToBoxType("stik"),
	StrToBoxType("tmpo"),
	StrToBoxType("trkn"),
	StrToBoxType("tven"),
	StrToBoxType("tves"),
	StrToBoxType("tvnn"),
	StrToBoxType("tvsh"),
	StrToBoxType("tvsn"),
	{0xA9, 'A', 'R', 'T'},
	{0xA9, 'a', 'l', 'b'},
	{0xA9, 'c', 'm', 't'},
	{0xA9, 'c', 'o', 'm'},
	{0xA9, 'd', 'a', 'y'},
	{0xA9, 'g', 'e', 'n'},
	{0xA9, 'g', 'r', 'p'},
	{0xA9, 'n', 'a', 'm'},
	{0xA9, 't', 'o', 'o'},
	{0xA9, 'w', 'r', 't'},
}

func IsIlstMetaBoxType(boxType BoxType) bool {
	for _, bt := range ilstMetaBoxTypes {
		if boxType == bt {
			return true
		}
	}
	return false
}

func init() {
	AddBoxDef(&Ilst{})
	AddBoxDefEx(&Data{}, isUnderIlstMeta)
	for _, bt := range ilstMetaBoxTypes {
		AddAnyTypeBoxDefEx(&IlstMetaContainer{}, bt, isIlstMetaContainer)
	}
	AddAnyTypeBoxDefEx(&StringData{}, StrToBoxType("mean"), isUnderIlstFreeFormat)
	AddAnyTypeBoxDefEx(&StringData{}, StrToBoxType("name"), isUnderIlstFreeFormat)
}

type Ilst struct {
	Box
}

// GetType returns the BoxType
func (*Ilst) GetType() BoxType {
	return BoxTypeIlst()
}

type IlstMetaContainer struct {
	AnyTypeBox
}

func isIlstMetaContainer(ctx Context) bool {
	return ctx.UnderIlst && !ctx.UnderIlstMeta
}

const (
	DataTypeBinary             = 0
	DataTypeStringUTF8         = 1
	DataTypeStringUTF16        = 2
	DataTypeStringMac          = 3
	DataTypeStringJPEG         = 14
	DataTypeSignedIntBigEndian = 21
	DataTypeFloat32BigEndian   = 22
	DataTypeFloat64BigEndian   = 23
)

// Data is a Value BoxType
// https://developer.apple.com/documentation/quicktime-file-format/value_atom
type Data struct {
	Box
	DataType uint32 `mp4:"0,size=32"`
	DataLang uint32 `mp4:"1,size=32"`
	Data     []byte `mp4:"2,size=8"`
}

// GetType returns the BoxType
func (*Data) GetType() BoxType {
	return BoxTypeData()
}

func isUnderIlstMeta(ctx Context) bool {
	return ctx.UnderIlstMeta
}

// StringifyField returns the field value as a string
func (data *Data) StringifyField(name string, indent string, depth int, ctx Context) (string, bool) {
	switch name {
	case "DataType":
		switch data.DataType {
		case DataTypeBinary:
			return "BINARY", true
		case DataTypeStringUTF8:
			return "UTF8", true
		case DataTypeStringUTF16:
			return "UTF16", true
		case DataTypeStringMac:
			return "MAC_STR", true
		case DataTypeStringJPEG:
			return "JPEG", true
		case DataTypeSignedIntBigEndian:
			return "INT", true
		case DataTypeFloat32BigEndian:
			return "FLOAT32", true
		case DataTypeFloat64BigEndian:
			return "FLOAT64", true
		}
	case "Data":
		switch data.DataType {
		case DataTypeStringUTF8:
			return fmt.Sprintf("\"%s\"", util.EscapeUnprintables(string(data.Data))), true
		}
	}
	return "", false
}

type StringData struct {
	AnyTypeBox
	Data []byte `mp4:"0,size=8"`
}

// StringifyField returns the field value as a string
func (sd *StringData) StringifyField(name string, indent string, depth int, ctx Context) (string, bool) {
	if name == "Data" {
		return fmt.Sprintf("\"%s\"", util.EscapeUnprintables(string(sd.Data))), true
	}
	return "", false
}

/*************************** numbered items ****************************/

// Item is a numbered item under an item list atom
// https://developer.apple.com/documentation/quicktime-file-format/metadata_item_list_atom/item_list
type Item struct {
	AnyTypeBox
	Version  uint8   `mp4:"0,size=8"`
	Flags    [3]byte `mp4:"1,size=8"`
	ItemName []byte  `mp4:"2,size=8,len=4"`
	Data     Data    `mp4:"3"`
}

// StringifyField returns the field value as a string
func (i *Item) StringifyField(name string, indent string, depth int, ctx Context) (string, bool) {
	switch name {
	case "ItemName":
		return fmt.Sprintf("\"%s\"", util.EscapeUnprintables(string(i.ItemName))), true
	}
	return "", false
}

func isUnderIlstFreeFormat(ctx Context) bool {
	return ctx.UnderIlstFreeMeta
}

func BoxTypeKeys() BoxType { return StrToBoxType("keys") }

func init() {
	AddBoxDef(&Keys{})
}

/*************************** keys ****************************/

// Keys is the Keys BoxType
// https://developer.apple.com/documentation/quicktime-file-format/metadata_item_keys_atom
type Keys struct {
	FullBox    `mp4:"0,extend"`
	EntryCount int32 `mp4:"1,size=32"`
	Entries    []Key `mp4:"2,len=dynamic"`
}

// GetType implements the IBox interface and returns the BoxType
func (*Keys) GetType() BoxType {
	return BoxTypeKeys()
}

// GetFieldLength implements the ICustomFieldObject interface and returns the length of dynamic fields
func (k *Keys) GetFieldLength(name string, ctx Context) uint {
	switch name {
	case "Entries":
		return uint(k.EntryCount)
	}
	panic(fmt.Errorf("invalid name of dynamic-length field: boxType=keys fieldName=%s", name))
}

/*************************** key ****************************/

// Key is a key value field in the Keys BoxType
// https://developer.apple.com/documentation/quicktime-file-format/metadata_item_keys_atom/key_value_key_size-8
type Key struct {
	BaseCustomFieldObject
	KeySize      int32  `mp4:"0,size=32"`
	KeyNamespace []byte `mp4:"1,size=8,len=4"`
	KeyValue     []byte `mp4:"2,size=8,len=dynamic"`
}

// GetFieldLength implements the ICustomFieldObject interface and returns the length of dynamic fields
func (k *Key) GetFieldLength(name string, ctx Context) uint {
	switch name {
	case "KeyValue":
		// sizeOf(KeySize)+sizeOf(KeyNamespace) = 8 bytes
		return uint(k.KeySize) - 8
	}
	panic(fmt.Errorf("invalid name of dynamic-length field: boxType=key fieldName=%s", name))
}

// StringifyField returns the field value as a string
func (k *Key) StringifyField(name string, indent string, depth int, ctx Context) (string, bool) {
	switch name {
	case "KeyNamespace":
		return fmt.Sprintf("\"%s\"", util.EscapeUnprintables(string(k.KeyNamespace))), true
	case "KeyValue":
		return fmt.Sprintf("\"%s\"", util.EscapeUnprintables(string(k.KeyValue))), true
	}
	return "", false
}
54
vendor/github.com/abema/go-mp4/box_types_opus.go
generated
vendored
@@ -1,54 +0,0 @@
package mp4

/*************************** Opus ****************************/

// https://opus-codec.org/docs/opus_in_isobmff.html

func BoxTypeOpus() BoxType { return StrToBoxType("Opus") }

func init() {
	AddAnyTypeBoxDef(&AudioSampleEntry{}, BoxTypeOpus())
}

/*************************** dOps ****************************/

// https://opus-codec.org/docs/opus_in_isobmff.html

func BoxTypeDOps() BoxType { return StrToBoxType("dOps") }

func init() {
	AddBoxDef(&DOps{})
}

type DOps struct {
	Box
	Version              uint8   `mp4:"0,size=8"`
	OutputChannelCount   uint8   `mp4:"1,size=8"`
	PreSkip              uint16  `mp4:"2,size=16"`
	InputSampleRate      uint32  `mp4:"3,size=32"`
	OutputGain           int16   `mp4:"4,size=16"`
	ChannelMappingFamily uint8   `mp4:"5,size=8"`
	StreamCount          uint8   `mp4:"6,opt=dynamic,size=8"`
	CoupledCount         uint8   `mp4:"7,opt=dynamic,size=8"`
	ChannelMapping       []uint8 `mp4:"8,opt=dynamic,size=8,len=dynamic"`
}

func (DOps) GetType() BoxType {
	return BoxTypeDOps()
}

func (dops DOps) IsOptFieldEnabled(name string, ctx Context) bool {
	switch name {
	case "StreamCount", "CoupledCount", "ChannelMapping":
		return dops.ChannelMappingFamily != 0
	}
	return false
}

func (ops DOps) GetFieldLength(name string, ctx Context) uint {
	switch name {
	case "ChannelMapping":
		return uint(ops.OutputChannelCount)
	}
	return 0
}
53
vendor/github.com/abema/go-mp4/box_types_vp.go
generated
vendored
@@ -1,53 +0,0 @@
package mp4

// https://www.webmproject.org/vp9/mp4/

/*************************** vp08 ****************************/

func BoxTypeVp08() BoxType { return StrToBoxType("vp08") }

func init() {
	AddAnyTypeBoxDef(&VisualSampleEntry{}, BoxTypeVp08())
}

/*************************** vp09 ****************************/

func BoxTypeVp09() BoxType { return StrToBoxType("vp09") }

func init() {
	AddAnyTypeBoxDef(&VisualSampleEntry{}, BoxTypeVp09())
}

/*************************** VpcC ****************************/

func BoxTypeVpcC() BoxType { return StrToBoxType("vpcC") }

func init() {
	AddBoxDef(&VpcC{})
}

type VpcC struct {
	FullBox                     `mp4:"0,extend"`
	Profile                     uint8   `mp4:"1,size=8"`
	Level                       uint8   `mp4:"2,size=8"`
	BitDepth                    uint8   `mp4:"3,size=4"`
	ChromaSubsampling           uint8   `mp4:"4,size=3"`
	VideoFullRangeFlag          uint8   `mp4:"5,size=1"`
	ColourPrimaries             uint8   `mp4:"6,size=8"`
	TransferCharacteristics     uint8   `mp4:"7,size=8"`
	MatrixCoefficients          uint8   `mp4:"8,size=8"`
	CodecInitializationDataSize uint16  `mp4:"9,size=16"`
	CodecInitializationData     []uint8 `mp4:"10,size=8,len=dynamic"`
}

func (VpcC) GetType() BoxType {
	return BoxTypeVpcC()
}

func (vpcc VpcC) GetFieldLength(name string, ctx Context) uint {
	switch name {
	case "CodecInitializationData":
		return uint(vpcc.CodecInitializationDataSize)
	}
	return 0
}
98
vendor/github.com/abema/go-mp4/extract.go
generated
vendored
@@ -1,98 +0,0 @@
package mp4

import (
	"errors"
	"io"
)

type BoxInfoWithPayload struct {
	Info    BoxInfo
	Payload IBox
}

func ExtractBoxWithPayload(r io.ReadSeeker, parent *BoxInfo, path BoxPath) ([]*BoxInfoWithPayload, error) {
	return ExtractBoxesWithPayload(r, parent, []BoxPath{path})
}

func ExtractBoxesWithPayload(r io.ReadSeeker, parent *BoxInfo, paths []BoxPath) ([]*BoxInfoWithPayload, error) {
	bis, err := ExtractBoxes(r, parent, paths)
	if err != nil {
		return nil, err
	}

	bs := make([]*BoxInfoWithPayload, 0, len(bis))
	for _, bi := range bis {
		if _, err := bi.SeekToPayload(r); err != nil {
			return nil, err
		}

		var ctx Context
		if parent != nil {
			ctx = parent.Context
		}
		box, _, err := UnmarshalAny(r, bi.Type, bi.Size-bi.HeaderSize, ctx)
		if err != nil {
			return nil, err
		}
		bs = append(bs, &BoxInfoWithPayload{
			Info:    *bi,
			Payload: box,
		})
	}
	return bs, nil
}

func ExtractBox(r io.ReadSeeker, parent *BoxInfo, path BoxPath) ([]*BoxInfo, error) {
	return ExtractBoxes(r, parent, []BoxPath{path})
}

func ExtractBoxes(r io.ReadSeeker, parent *BoxInfo, paths []BoxPath) ([]*BoxInfo, error) {
	if len(paths) == 0 {
		return nil, nil
	}

	for i := range paths {
		if len(paths[i]) == 0 {
			return nil, errors.New("box path must not be empty")
		}
	}

	boxes := make([]*BoxInfo, 0, 8)

	handler := func(handle *ReadHandle) (interface{}, error) {
		path := handle.Path
		if parent != nil {
			path = path[1:]
		}
		if handle.BoxInfo.Type == BoxTypeAny() {
			return nil, nil
		}
		fm, m := matchPath(paths, path)
		if m {
			boxes = append(boxes, &handle.BoxInfo)
		}

		if fm {
			if _, err := handle.Expand(); err != nil {
				return nil, err
			}
		}
		return nil, nil
	}

	if parent != nil {
		_, err := ReadBoxStructureFromInternal(r, parent, handler)
		return boxes, err
	}
	_, err := ReadBoxStructure(r, handler)
	return boxes, err
}

func matchPath(paths []BoxPath, path BoxPath) (forwardMatch bool, match bool) {
	for i := range paths {
		fm, m := path.compareWith(paths[i])
		forwardMatch = forwardMatch || fm
		match = match || m
	}
	return
}
290
vendor/github.com/abema/go-mp4/field.go
generated
vendored
@@ -1,290 +0,0 @@
package mp4

import (
	"fmt"
	"os"
	"reflect"
	"sort"
	"strconv"
	"strings"
)

type (
	stringType uint8
	fieldFlag  uint16
)

const (
	stringType_C stringType = iota
	stringType_C_P

	fieldString fieldFlag = 1 << iota // 0
	fieldExtend                       // 1
	fieldDec                          // 2
	fieldHex                          // 3
	fieldISO639_2                     // 4
	fieldUUID                         // 5
	fieldHidden                       // 6
	fieldOptDynamic                   // 7
	fieldVarint                       // 8
	fieldSizeDynamic                  // 9
	fieldLengthDynamic                // 10
)

type field struct {
	children []*field
	name     string
	cnst     string
	order    int
	optFlag  uint32
	nOptFlag uint32
	size     uint
	length   uint
	flags    fieldFlag
	strType  stringType
	version  uint8
	nVersion uint8
}

func (f *field) set(flag fieldFlag) {
	f.flags |= flag
}

func (f *field) is(flag fieldFlag) bool {
	return f.flags&flag != 0
}

func buildFields(box IImmutableBox) []*field {
	t := reflect.TypeOf(box).Elem()
	return buildFieldsStruct(t)
}

func buildFieldsStruct(t reflect.Type) []*field {
	fs := make([]*field, 0, 8)
	for i := 0; i < t.NumField(); i++ {
		ft := t.Field(i).Type
		tag, ok := t.Field(i).Tag.Lookup("mp4")
		if !ok {
			continue
		}
		f := buildField(t.Field(i).Name, tag)
		f.children = buildFieldsAny(ft)
		fs = append(fs, f)
	}
	sort.SliceStable(fs, func(i, j int) bool {
		return fs[i].order < fs[j].order
	})
	return fs
}

func buildFieldsAny(t reflect.Type) []*field {
	switch t.Kind() {
	case reflect.Struct:
		return buildFieldsStruct(t)
	case reflect.Ptr, reflect.Array, reflect.Slice:
		return buildFieldsAny(t.Elem())
	default:
		return nil
	}
}

func buildField(fieldName string, tag string) *field {
	f := &field{
		name: fieldName,
	}
	tagMap := parseFieldTag(tag)
	for key, val := range tagMap {
		if val != "" {
			continue
		}
		if order, err := strconv.Atoi(key); err == nil {
			f.order = order
			break
		}
	}

	if val, contained := tagMap["string"]; contained {
		f.set(fieldString)
		if val == "c_p" {
			f.strType = stringType_C_P
			fmt.Fprint(os.Stderr, "go-mp4: string=c_p tag is deprecated!! See https://github.com/abema/go-mp4/issues/76\n")
		}
	}

	if _, contained := tagMap["varint"]; contained {
		f.set(fieldVarint)
	}

	if val, contained := tagMap["opt"]; contained {
		if val == "dynamic" {
			f.set(fieldOptDynamic)
		} else {
			base := 10
			if strings.HasPrefix(val, "0x") {
				val = val[2:]
				base = 16
			}
			opt, err := strconv.ParseUint(val, base, 32)
			if err != nil {
				panic(err)
			}
			f.optFlag = uint32(opt)
		}
	}

	if val, contained := tagMap["nopt"]; contained {
		base := 10
		if strings.HasPrefix(val, "0x") {
			val = val[2:]
			base = 16
		}
		nopt, err := strconv.ParseUint(val, base, 32)
		if err != nil {
			panic(err)
		}
		f.nOptFlag = uint32(nopt)
	}

	if _, contained := tagMap["extend"]; contained {
		f.set(fieldExtend)
	}

	if _, contained := tagMap["dec"]; contained {
		f.set(fieldDec)
	}

	if _, contained := tagMap["hex"]; contained {
		f.set(fieldHex)
	}

	if _, contained := tagMap["iso639-2"]; contained {
		f.set(fieldISO639_2)
	}

	if _, contained := tagMap["uuid"]; contained {
		f.set(fieldUUID)
	}

	if _, contained := tagMap["hidden"]; contained {
		f.set(fieldHidden)
	}

	if val, contained := tagMap["const"]; contained {
		f.cnst = val
	}

	f.version = anyVersion
	if val, contained := tagMap["ver"]; contained {
		ver, err := strconv.Atoi(val)
		if err != nil {
			panic(err)
		}
		f.version = uint8(ver)
	}

	f.nVersion = anyVersion
	if val, contained := tagMap["nver"]; contained {
		ver, err := strconv.Atoi(val)
		if err != nil {
			panic(err)
		}
		f.nVersion = uint8(ver)
	}

	if val, contained := tagMap["size"]; contained {
		if val == "dynamic" {
			f.set(fieldSizeDynamic)
		} else {
			size, err := strconv.ParseUint(val, 10, 32)
			if err != nil {
				panic(err)
			}
			f.size = uint(size)
		}
	}

	f.length = LengthUnlimited
	if val, contained := tagMap["len"]; contained {
		if val == "dynamic" {
			f.set(fieldLengthDynamic)
		} else {
			l, err := strconv.ParseUint(val, 10, 32)
			if err != nil {
				panic(err)
			}
			f.length = uint(l)
		}
	}

	return f
}

func parseFieldTag(str string) map[string]string {
	tag := make(map[string]string, 8)

	list := strings.Split(str, ",")
	for _, e := range list {
		kv := strings.SplitN(e, "=", 2)
		if len(kv) == 2 {
			tag[strings.Trim(kv[0], " ")] = strings.Trim(kv[1], " ")
		} else {
			tag[strings.Trim(kv[0], " ")] = ""
		}
	}

	return tag
}

type fieldInstance struct {
	field
	cfo ICustomFieldObject
}

func resolveFieldInstance(f *field, box IImmutableBox, parent reflect.Value, ctx Context) *fieldInstance {
	fi := fieldInstance{
		field: *f,
	}

	cfo, ok := parent.Addr().Interface().(ICustomFieldObject)
	if ok {
		fi.cfo = cfo
	} else {
		fi.cfo = box
	}

	if fi.is(fieldSizeDynamic) {
		fi.size = fi.cfo.GetFieldSize(f.name, ctx)
	}

	if fi.is(fieldLengthDynamic) {
		fi.length = fi.cfo.GetFieldLength(f.name, ctx)
	}

	return &fi
}

func isTargetField(box IImmutableBox, fi *fieldInstance, ctx Context) bool {
	if box.GetVersion() != anyVersion {
		if fi.version != anyVersion && box.GetVersion() != fi.version {
			return false
		}

		if fi.nVersion != anyVersion && box.GetVersion() == fi.nVersion {
			return false
		}
	}

	if fi.optFlag != 0 && box.GetFlags()&fi.optFlag == 0 {
		return false
	}

	if fi.nOptFlag != 0 && box.GetFlags()&fi.nOptFlag != 0 {
		return false
	}

	if fi.is(fieldOptDynamic) && !fi.cfo.IsOptFieldEnabled(fi.name, ctx) {
		return false
	}

	return true
}
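The `mp4` struct tags in the boxes above (order, `size`, `len`, `opt`, `ver`, …) are parsed into a key/value map by `parseFieldTag`, which splits on commas and then on an optional `=`. A minimal, self-contained sketch of that parsing, with no dependency on the vendored package (the function name `parseTag` here is illustrative, not the library's API):

```go
package main

import (
	"fmt"
	"strings"
)

// parseTag splits an `mp4:"..."` tag value into a key/value map,
// mirroring the comma-separated, optional "key=value" form used above.
// Bare entries (like the leading field order "2") map to an empty string.
func parseTag(s string) map[string]string {
	tag := make(map[string]string, 8)
	for _, e := range strings.Split(s, ",") {
		kv := strings.SplitN(e, "=", 2)
		if len(kv) == 2 {
			tag[strings.TrimSpace(kv[0])] = strings.TrimSpace(kv[1])
		} else {
			tag[strings.TrimSpace(kv[0])] = ""
		}
	}
	return tag
}

func main() {
	m := parseTag("2,size=8,opt=dynamic,len=dynamic")
	fmt.Println(m["size"], m["opt"], m["len"]) // prints: 8 dynamic dynamic
}
```

In the real code, `buildField` then scans this map for a bare numeric key to recover the field order and sets the corresponding `fieldFlag` bits for the named options.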
8
vendor/github.com/abema/go-mp4/internal/bitio/bitio.go
generated
vendored
@@ -1,8 +0,0 @@
package bitio

import "errors"

var (
	ErrInvalidAlignment  = errors.New("invalid alignment")
	ErrDiscouragedReader = errors.New("discouraged reader implementation")
)
97
vendor/github.com/abema/go-mp4/internal/bitio/read.go
generated
vendored
@@ -1,97 +0,0 @@
package bitio

import "io"

type Reader interface {
	io.Reader

	// alignment:
	// |-1-byte-block-|--------------|--------------|--------------|
	// |<-offset->|<-------------------width---------------------->|
	ReadBits(width uint) (data []byte, err error)

	ReadBit() (bit bool, err error)
}

type ReadSeeker interface {
	Reader
	io.Seeker
}

type reader struct {
	reader io.Reader
	octet  byte
	width  uint
}

func NewReader(r io.Reader) Reader {
	return &reader{reader: r}
}

func (r *reader) Read(p []byte) (n int, err error) {
	if r.width != 0 {
		return 0, ErrInvalidAlignment
	}
	return r.reader.Read(p)
}

func (r *reader) ReadBits(size uint) ([]byte, error) {
	bytes := (size + 7) / 8
	data := make([]byte, bytes)
	offset := (bytes * 8) - (size)

	for i := uint(0); i < size; i++ {
		bit, err := r.ReadBit()
		if err != nil {
			return nil, err
		}

		byteIdx := (offset + i) / 8
		bitIdx := 7 - (offset+i)%8
		if bit {
			data[byteIdx] |= 0x1 << bitIdx
		}
	}

	return data, nil
}

func (r *reader) ReadBit() (bool, error) {
	if r.width == 0 {
		buf := make([]byte, 1)
		if n, err := r.reader.Read(buf); err != nil {
			return false, err
		} else if n != 1 {
			return false, ErrDiscouragedReader
		}
		r.octet = buf[0]
		r.width = 8
	}

	r.width--
	return (r.octet>>r.width)&0x01 != 0, nil
}

type readSeeker struct {
	reader
	seeker io.Seeker
}

func NewReadSeeker(r io.ReadSeeker) ReadSeeker {
	return &readSeeker{
		reader: reader{reader: r},
		seeker: r,
	}
}

func (r *readSeeker) Seek(offset int64, whence int) (int64, error) {
	if whence == io.SeekCurrent && r.reader.width != 0 {
		return 0, ErrInvalidAlignment
	}
	n, err := r.seeker.Seek(offset, whence)
	if err != nil {
		return n, err
	}
	r.reader.width = 0
	return n, nil
}
61
vendor/github.com/abema/go-mp4/internal/bitio/write.go
generated
vendored
@ -1,61 +0,0 @@
package bitio

import (
	"io"
)

type Writer interface {
	io.Writer

	// alignment:
	// |-1-byte-block-|--------------|--------------|--------------|
	// |<-offset->|<-------------------width---------------------->|
	WriteBits(data []byte, width uint) error

	WriteBit(bit bool) error
}

type writer struct {
	writer io.Writer
	octet  byte
	width  uint
}

func NewWriter(w io.Writer) Writer {
	return &writer{writer: w}
}

func (w *writer) Write(p []byte) (n int, err error) {
	if w.width != 0 {
		return 0, ErrInvalidAlignment
	}
	return w.writer.Write(p)
}

func (w *writer) WriteBits(data []byte, width uint) error {
	length := uint(len(data)) * 8
	offset := length - width
	for i := offset; i < length; i++ {
		oi := i / 8
		if err := w.WriteBit((data[oi]>>(7-i%8))&0x01 != 0); err != nil {
			return err
		}
	}
	return nil
}

func (w *writer) WriteBit(bit bool) error {
	if bit {
		w.octet |= 0x1 << (7 - w.width)
	}
	w.width++

	if w.width == 8 {
		if _, err := w.writer.Write([]byte{w.octet}); err != nil {
			return err
		}
		w.octet = 0x00
		w.width = 0
	}
	return nil
}
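The bitio reader/writer pair above packs and unpacks values MSB-first, buffering one octet at a time. A minimal standalone sketch of the same packing idea (illustrative names, not the removed library's API):

```go
package main

import "fmt"

// bitWriter packs bits MSB-first into bytes, mirroring the idea behind
// the removed bitio.Writer; names here are hypothetical.
type bitWriter struct {
	out   []byte
	octet byte
	width uint
}

func (w *bitWriter) writeBit(bit bool) {
	if bit {
		w.octet |= 1 << (7 - w.width)
	}
	w.width++
	if w.width == 8 { // flush a full octet
		w.out = append(w.out, w.octet)
		w.octet, w.width = 0, 0
	}
}

// writeBits writes the low `width` bits of val, most significant bit first.
func (w *bitWriter) writeBits(val uint64, width uint) {
	for i := width; i > 0; i-- {
		w.writeBit(val>>(i-1)&1 != 0)
	}
}

func main() {
	var w bitWriter
	w.writeBits(0b101, 3)   // three bits
	w.writeBits(0b10111, 5) // five more bits complete one byte
	fmt.Printf("%08b\n", w.out[0]) // prints 10110111
}
```

Two sub-byte writes land in a single output byte only once eight bits have accumulated, which is why the library's Read/Write/Seek methods refuse to run while `width != 0` (the stream would be misaligned).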
30
vendor/github.com/abema/go-mp4/internal/util/io.go
generated
vendored
@ -1,30 +0,0 @@
package util

import (
	"bytes"
	"io"
)

func ReadString(r io.Reader) (string, error) {
	b := make([]byte, 1)
	buf := bytes.NewBuffer(nil)
	for {
		if _, err := r.Read(b); err != nil {
			return "", err
		}
		if b[0] == 0 {
			return buf.String(), nil
		}
		buf.Write(b)
	}
}

func WriteString(w io.Writer, s string) error {
	if _, err := w.Write([]byte(s)); err != nil {
		return err
	}
	if _, err := w.Write([]byte{0}); err != nil {
		return err
	}
	return nil
}
42
vendor/github.com/abema/go-mp4/internal/util/string.go
generated
vendored
@ -1,42 +0,0 @@
package util

import (
	"strconv"
	"strings"
	"unicode"
)

func FormatSignedFixedFloat1616(val int32) string {
	if val&0xffff == 0 {
		return strconv.Itoa(int(val >> 16))
	} else {
		return strconv.FormatFloat(float64(val)/(1<<16), 'f', 5, 64)
	}
}

func FormatUnsignedFixedFloat1616(val uint32) string {
	if val&0xffff == 0 {
		return strconv.Itoa(int(val >> 16))
	} else {
		return strconv.FormatFloat(float64(val)/(1<<16), 'f', 5, 64)
	}
}

func FormatSignedFixedFloat88(val int16) string {
	if val&0xff == 0 {
		return strconv.Itoa(int(val >> 8))
	} else {
		return strconv.FormatFloat(float64(val)/(1<<8), 'f', 3, 32)
	}
}

func EscapeUnprintable(r rune) rune {
	if unicode.IsGraphic(r) {
		return r
	}
	return rune('.')
}

func EscapeUnprintables(src string) string {
	return strings.Map(EscapeUnprintable, src)
}
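The fixed-float helpers render MP4's 16.16 and 8.8 fixed-point fields: the high bits hold the integer part, the low bits the fraction, so dividing by 2^16 (or 2^8) recovers the real value. A minimal sketch of the 16.16 case, mirroring FormatSignedFixedFloat1616:

```go
package main

import (
	"fmt"
	"strconv"
)

// formatFixed1616 renders a signed 16.16 fixed-point value: an exact integer
// (zero fractional bits) prints without decimals, anything else with five.
func formatFixed1616(val int32) string {
	if val&0xffff == 0 {
		return strconv.Itoa(int(val >> 16))
	}
	return strconv.FormatFloat(float64(val)/(1<<16), 'f', 5, 64)
}

func main() {
	fmt.Println(formatFixed1616(2 << 16))       // prints 2
	fmt.Println(formatFixed1616(1<<16 | 1<<15)) // prints 1.50000
}
```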
663
vendor/github.com/abema/go-mp4/marshaller.go
generated
vendored
@ -1,663 +0,0 @@
package mp4

import (
	"bytes"
	"errors"
	"fmt"
	"io"
	"math"
	"reflect"

	"github.com/abema/go-mp4/internal/bitio"
)

const (
	anyVersion = math.MaxUint8
)

var ErrUnsupportedBoxVersion = errors.New("unsupported box version")

func readerHasSize(reader bitio.ReadSeeker, size uint64) bool {
	pre, err := reader.Seek(0, io.SeekCurrent)
	if err != nil {
		return false
	}

	end, err := reader.Seek(0, io.SeekEnd)
	if err != nil {
		return false
	}

	if uint64(end-pre) < size {
		return false
	}

	_, err = reader.Seek(pre, io.SeekStart)
	if err != nil {
		return false
	}

	return true
}

type marshaller struct {
	writer bitio.Writer
	wbits  uint64
	src    IImmutableBox
	ctx    Context
}

func Marshal(w io.Writer, src IImmutableBox, ctx Context) (n uint64, err error) {
	boxDef := src.GetType().getBoxDef(ctx)
	if boxDef == nil {
		return 0, ErrBoxInfoNotFound
	}

	v := reflect.ValueOf(src).Elem()

	m := &marshaller{
		writer: bitio.NewWriter(w),
		src:    src,
		ctx:    ctx,
	}

	if err := m.marshalStruct(v, boxDef.fields); err != nil {
		return 0, err
	}

	if m.wbits%8 != 0 {
		return 0, fmt.Errorf("box size is not multiple of 8 bits: type=%s, bits=%d", src.GetType().String(), m.wbits)
	}

	return m.wbits / 8, nil
}

func (m *marshaller) marshal(v reflect.Value, fi *fieldInstance) error {
	switch v.Type().Kind() {
	case reflect.Ptr:
		return m.marshalPtr(v, fi)
	case reflect.Struct:
		return m.marshalStruct(v, fi.children)
	case reflect.Array:
		return m.marshalArray(v, fi)
	case reflect.Slice:
		return m.marshalSlice(v, fi)
	case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
		return m.marshalInt(v, fi)
	case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
		return m.marshalUint(v, fi)
	case reflect.Bool:
		return m.marshalBool(v, fi)
	case reflect.String:
		return m.marshalString(v)
	default:
		return fmt.Errorf("unsupported type: %s", v.Type().Kind())
	}
}

func (m *marshaller) marshalPtr(v reflect.Value, fi *fieldInstance) error {
	return m.marshal(v.Elem(), fi)
}

func (m *marshaller) marshalStruct(v reflect.Value, fs []*field) error {
	for _, f := range fs {
		fi := resolveFieldInstance(f, m.src, v, m.ctx)

		if !isTargetField(m.src, fi, m.ctx) {
			continue
		}

		wbits, override, err := fi.cfo.OnWriteField(f.name, m.writer, m.ctx)
		if err != nil {
			return err
		}
		m.wbits += wbits
		if override {
			continue
		}

		err = m.marshal(v.FieldByName(f.name), fi)
		if err != nil {
			return err
		}
	}

	return nil
}

func (m *marshaller) marshalArray(v reflect.Value, fi *fieldInstance) error {
	size := v.Type().Size()
	for i := 0; i < int(size)/int(v.Type().Elem().Size()); i++ {
		var err error
		err = m.marshal(v.Index(i), fi)
		if err != nil {
			return err
		}
	}
	return nil
}

func (m *marshaller) marshalSlice(v reflect.Value, fi *fieldInstance) error {
	length := uint64(v.Len())
	if fi.length != LengthUnlimited {
		if length < uint64(fi.length) {
			return fmt.Errorf("the slice has too few elements: required=%d actual=%d", fi.length, length)
		}
		length = uint64(fi.length)
	}

	elemType := v.Type().Elem()
	if elemType.Kind() == reflect.Uint8 && fi.size == 8 && m.wbits%8 == 0 {
		if _, err := io.CopyN(m.writer, bytes.NewBuffer(v.Bytes()), int64(length)); err != nil {
			return err
		}
		m.wbits += length * 8
		return nil
	}

	for i := 0; i < int(length); i++ {
		m.marshal(v.Index(i), fi)
	}
	return nil
}

func (m *marshaller) marshalInt(v reflect.Value, fi *fieldInstance) error {
	signed := v.Int()

	if fi.is(fieldVarint) {
		return errors.New("signed varint is unsupported")
	}

	signBit := signed < 0
	val := uint64(signed)
	for i := uint(0); i < fi.size; i += 8 {
		v := val
		size := uint(8)
		if fi.size > i+8 {
			v = v >> (fi.size - (i + 8))
		} else if fi.size < i+8 {
			size = fi.size - i
		}

		// set sign bit
		if i == 0 {
			if signBit {
				v |= 0x1 << (size - 1)
			} else {
				v &= 0x1<<(size-1) - 1
			}
		}

		if err := m.writer.WriteBits([]byte{byte(v)}, size); err != nil {
			return err
		}
		m.wbits += uint64(size)
	}

	return nil
}

func (m *marshaller) marshalUint(v reflect.Value, fi *fieldInstance) error {
	val := v.Uint()

	if fi.is(fieldVarint) {
		m.writeUvarint(val)
		return nil
	}

	for i := uint(0); i < fi.size; i += 8 {
		v := val
		size := uint(8)
		if fi.size > i+8 {
			v = v >> (fi.size - (i + 8))
		} else if fi.size < i+8 {
			size = fi.size - i
		}
		if err := m.writer.WriteBits([]byte{byte(v)}, size); err != nil {
			return err
		}
		m.wbits += uint64(size)
	}

	return nil
}

func (m *marshaller) marshalBool(v reflect.Value, fi *fieldInstance) error {
	var val byte
	if v.Bool() {
		val = 0xff
	} else {
		val = 0x00
	}
	if err := m.writer.WriteBits([]byte{val}, fi.size); err != nil {
		return err
	}
	m.wbits += uint64(fi.size)
	return nil
}

func (m *marshaller) marshalString(v reflect.Value) error {
	data := []byte(v.String())
	for _, b := range data {
		if err := m.writer.WriteBits([]byte{b}, 8); err != nil {
			return err
		}
		m.wbits += 8
	}
	// null character
	if err := m.writer.WriteBits([]byte{0x00}, 8); err != nil {
		return err
	}
	m.wbits += 8
	return nil
}

func (m *marshaller) writeUvarint(u uint64) error {
	for i := 21; i > 0; i -= 7 {
		if err := m.writer.WriteBits([]byte{(byte(u >> uint(i))) | 0x80}, 8); err != nil {
			return err
		}
		m.wbits += 8
	}

	if err := m.writer.WriteBits([]byte{byte(u) & 0x7f}, 8); err != nil {
		return err
	}
	m.wbits += 8

	return nil
}

type unmarshaller struct {
	reader bitio.ReadSeeker
	dst    IBox
	size   uint64
	rbits  uint64
	ctx    Context
}

func UnmarshalAny(r io.ReadSeeker, boxType BoxType, payloadSize uint64, ctx Context) (box IBox, n uint64, err error) {
	dst, err := boxType.New(ctx)
	if err != nil {
		return nil, 0, err
	}
	n, err = Unmarshal(r, payloadSize, dst, ctx)
	return dst, n, err
}

func Unmarshal(r io.ReadSeeker, payloadSize uint64, dst IBox, ctx Context) (n uint64, err error) {
	boxDef := dst.GetType().getBoxDef(ctx)
	if boxDef == nil {
		return 0, ErrBoxInfoNotFound
	}

	v := reflect.ValueOf(dst).Elem()

	dst.SetVersion(anyVersion)

	u := &unmarshaller{
		reader: bitio.NewReadSeeker(r),
		dst:    dst,
		size:   payloadSize,
		ctx:    ctx,
	}

	if n, override, err := dst.BeforeUnmarshal(r, payloadSize, u.ctx); err != nil {
		return 0, err
	} else if override {
		return n, nil
	} else {
		u.rbits = n * 8
	}

	sn, err := r.Seek(0, io.SeekCurrent)
	if err != nil {
		return 0, err
	}

	if err := u.unmarshalStruct(v, boxDef.fields); err != nil {
		if err == ErrUnsupportedBoxVersion {
			r.Seek(sn, io.SeekStart)
		}
		return 0, err
	}

	if u.rbits%8 != 0 {
		return 0, fmt.Errorf("box size is not multiple of 8 bits: type=%s, size=%d, bits=%d", dst.GetType().String(), u.size, u.rbits)
	}

	if u.rbits > u.size*8 {
		return 0, fmt.Errorf("overrun error: type=%s, size=%d, bits=%d", dst.GetType().String(), u.size, u.rbits)
	}

	return u.rbits / 8, nil
}

func (u *unmarshaller) unmarshal(v reflect.Value, fi *fieldInstance) error {
	var err error
	switch v.Type().Kind() {
	case reflect.Ptr:
		err = u.unmarshalPtr(v, fi)
	case reflect.Struct:
		err = u.unmarshalStructInternal(v, fi)
	case reflect.Array:
		err = u.unmarshalArray(v, fi)
	case reflect.Slice:
		err = u.unmarshalSlice(v, fi)
	case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
		err = u.unmarshalInt(v, fi)
	case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
		err = u.unmarshalUint(v, fi)
	case reflect.Bool:
		err = u.unmarshalBool(v, fi)
	case reflect.String:
		err = u.unmarshalString(v, fi)
	default:
		return fmt.Errorf("unsupported type: %s", v.Type().Kind())
	}
	return err
}

func (u *unmarshaller) unmarshalPtr(v reflect.Value, fi *fieldInstance) error {
	v.Set(reflect.New(v.Type().Elem()))
	return u.unmarshal(v.Elem(), fi)
}

func (u *unmarshaller) unmarshalStructInternal(v reflect.Value, fi *fieldInstance) error {
	if fi.size != 0 && fi.size%8 == 0 {
		u2 := *u
		u2.size = uint64(fi.size / 8)
		u2.rbits = 0
		if err := u2.unmarshalStruct(v, fi.children); err != nil {
			return err
		}
		u.rbits += u2.rbits
		if u2.rbits != uint64(fi.size) {
			return errors.New("invalid alignment")
		}
		return nil
	}

	return u.unmarshalStruct(v, fi.children)
}

func (u *unmarshaller) unmarshalStruct(v reflect.Value, fs []*field) error {
	for _, f := range fs {
		fi := resolveFieldInstance(f, u.dst, v, u.ctx)

		if !isTargetField(u.dst, fi, u.ctx) {
			continue
		}

		rbits, override, err := fi.cfo.OnReadField(f.name, u.reader, u.size*8-u.rbits, u.ctx)
		if err != nil {
			return err
		}
		u.rbits += rbits
		if override {
			continue
		}

		err = u.unmarshal(v.FieldByName(f.name), fi)
		if err != nil {
			return err
		}

		if v.FieldByName(f.name).Type() == reflect.TypeOf(FullBox{}) && !u.dst.GetType().IsSupportedVersion(u.dst.GetVersion(), u.ctx) {
			return ErrUnsupportedBoxVersion
		}
	}

	return nil
}

func (u *unmarshaller) unmarshalArray(v reflect.Value, fi *fieldInstance) error {
	size := v.Type().Size()
	for i := 0; i < int(size)/int(v.Type().Elem().Size()); i++ {
		var err error
		err = u.unmarshal(v.Index(i), fi)
		if err != nil {
			return err
		}
	}
	return nil
}

func (u *unmarshaller) unmarshalSlice(v reflect.Value, fi *fieldInstance) error {
	var slice reflect.Value
	elemType := v.Type().Elem()

	length := uint64(fi.length)
	if fi.length == LengthUnlimited {
		if fi.size != 0 {
			left := (u.size)*8 - u.rbits
			if left%uint64(fi.size) != 0 {
				return errors.New("invalid alignment")
			}
			length = left / uint64(fi.size)
		} else {
			length = 0
		}
	}

	if u.rbits%8 == 0 && elemType.Kind() == reflect.Uint8 && fi.size == 8 {
		totalSize := length * uint64(fi.size) / 8

		if !readerHasSize(u.reader, totalSize) {
			return fmt.Errorf("not enough bits")
		}

		buf := bytes.NewBuffer(make([]byte, 0, totalSize))
		if _, err := io.CopyN(buf, u.reader, int64(totalSize)); err != nil {
			return err
		}
		slice = reflect.ValueOf(buf.Bytes())
		u.rbits += uint64(totalSize) * 8

	} else {
		slice = reflect.MakeSlice(v.Type(), 0, 0)
		for i := 0; ; i++ {
			if fi.length != LengthUnlimited && uint(i) >= fi.length {
				break
			}
			if fi.length == LengthUnlimited && u.rbits >= u.size*8 {
				break
			}
			slice = reflect.Append(slice, reflect.Zero(elemType))
			if err := u.unmarshal(slice.Index(i), fi); err != nil {
				return err
			}
			if u.rbits > u.size*8 {
				return fmt.Errorf("failed to read array completely: fieldName=\"%s\"", fi.name)
			}
		}
	}

	v.Set(slice)
	return nil
}

func (u *unmarshaller) unmarshalInt(v reflect.Value, fi *fieldInstance) error {
	if fi.is(fieldVarint) {
		return errors.New("signed varint is unsupported")
	}

	if fi.size == 0 {
		return fmt.Errorf("size must not be zero: %s", fi.name)
	}

	data, err := u.reader.ReadBits(fi.size)
	if err != nil {
		return err
	}
	u.rbits += uint64(fi.size)

	signBit := false
	if len(data) > 0 {
		signMask := byte(0x01) << ((fi.size - 1) % 8)
		signBit = data[0]&signMask != 0
		if signBit {
			data[0] |= ^(signMask - 1)
		}
	}

	var val uint64
	if signBit {
		val = ^uint64(0)
	}
	for i := range data {
		val <<= 8
		val |= uint64(data[i])
	}
	v.SetInt(int64(val))
	return nil
}

func (u *unmarshaller) unmarshalUint(v reflect.Value, fi *fieldInstance) error {
	if fi.is(fieldVarint) {
		val, err := u.readUvarint()
		if err != nil {
			return err
		}
		v.SetUint(val)
		return nil
	}

	if fi.size == 0 {
		return fmt.Errorf("size must not be zero: %s", fi.name)
	}

	data, err := u.reader.ReadBits(fi.size)
	if err != nil {
		return err
	}
	u.rbits += uint64(fi.size)

	val := uint64(0)
	for i := range data {
		val <<= 8
		val |= uint64(data[i])
	}
	v.SetUint(val)

	return nil
}

func (u *unmarshaller) unmarshalBool(v reflect.Value, fi *fieldInstance) error {
	if fi.size == 0 {
		return fmt.Errorf("size must not be zero: %s", fi.name)
	}

	data, err := u.reader.ReadBits(fi.size)
	if err != nil {
		return err
	}
	u.rbits += uint64(fi.size)

	val := false
	for _, b := range data {
		val = val || (b != byte(0))
	}
	v.SetBool(val)

	return nil
}

func (u *unmarshaller) unmarshalString(v reflect.Value, fi *fieldInstance) error {
	switch fi.strType {
	case stringType_C:
		return u.unmarshalStringC(v)
	case stringType_C_P:
		return u.unmarshalStringCP(v, fi)
	default:
		return fmt.Errorf("unknown string type: %d", fi.strType)
	}
}

func (u *unmarshaller) unmarshalStringC(v reflect.Value) error {
	data := make([]byte, 0, 16)
	for {
		if u.rbits >= u.size*8 {
			break
		}

		c, err := u.reader.ReadBits(8)
		if err != nil {
			return err
		}
		u.rbits += 8

		if c[0] == 0 {
			break // null character
		}

		data = append(data, c[0])
	}
	v.SetString(string(data))

	return nil
}

func (u *unmarshaller) unmarshalStringCP(v reflect.Value, fi *fieldInstance) error {
	if ok, err := u.tryReadPString(v, fi); err != nil {
		return err
	} else if ok {
		return nil
	}
	return u.unmarshalStringC(v)
}

func (u *unmarshaller) tryReadPString(v reflect.Value, fi *fieldInstance) (ok bool, err error) {
	remainingSize := (u.size*8 - u.rbits) / 8
	if remainingSize < 2 {
		return false, nil
	}

	offset, err := u.reader.Seek(0, io.SeekCurrent)
	if err != nil {
		return false, err
	}
	defer func() {
		if err == nil && !ok {
			_, err = u.reader.Seek(offset, io.SeekStart)
		}
	}()

	buf0 := make([]byte, 1)
	if _, err := io.ReadFull(u.reader, buf0); err != nil {
		return false, err
	}
	remainingSize--
	plen := buf0[0]
	if uint64(plen) > remainingSize {
		return false, nil
	}
	buf := make([]byte, int(plen))
	if _, err := io.ReadFull(u.reader, buf); err != nil {
		return false, err
	}
	remainingSize -= uint64(plen)
	if fi.cfo.IsPString(fi.name, buf, remainingSize, u.ctx) {
		u.rbits += uint64(len(buf)+1) * 8
		v.SetString(string(buf))
		return true, nil
	}
	return false, nil
}

func (u *unmarshaller) readUvarint() (uint64, error) {
	var val uint64
	for {
		octet, err := u.reader.ReadBits(8)
		if err != nil {
			return 0, err
		}
		u.rbits += 8

		val = (val << 7) + uint64(octet[0]&0x7f)

		if octet[0]&0x80 == 0 {
			return val, nil
		}
	}
}
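The writeUvarint/readUvarint pair above uses the MP4 7-bits-per-octet encoding: each octet carries seven payload bits and a 0x80 continuation flag on every octet except the last. A standalone sketch of that scheme, matching the removed code's fixed four-octet encoder (names are illustrative):

```go
package main

import "fmt"

// encodeUvarint28 emits the fixed four-octet encoding the removed
// writeUvarint produces: seven payload bits per octet, continuation
// bit 0x80 set on all but the final octet. Valid for values < 2^28.
func encodeUvarint28(u uint64) []byte {
	out := make([]byte, 0, 4)
	for i := 21; i > 0; i -= 7 {
		out = append(out, byte(u>>uint(i))|0x80)
	}
	return append(out, byte(u)&0x7f)
}

// decodeUvarint is the matching general decoder: accumulate seven bits
// per octet until an octet without the continuation flag arrives.
func decodeUvarint(data []byte) uint64 {
	var val uint64
	for _, octet := range data {
		val = val<<7 | uint64(octet&0x7f)
		if octet&0x80 == 0 {
			break
		}
	}
	return val
}

func main() {
	enc := encodeUvarint28(1234567)
	fmt.Println(len(enc), decodeUvarint(enc)) // prints 4 1234567
}
```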
171
vendor/github.com/abema/go-mp4/mp4.go
generated
vendored
@ -1,171 +0,0 @@
package mp4

import (
	"encoding/binary"
	"errors"
	"fmt"
	"reflect"
	"strings"
)

var ErrBoxInfoNotFound = errors.New("box info not found")

// BoxType is mpeg box type
type BoxType [4]byte

func StrToBoxType(code string) BoxType {
	if len(code) != 4 {
		panic(fmt.Errorf("invalid box type id length: [%s]", code))
	}
	return BoxType{code[0], code[1], code[2], code[3]}
}

// Uint32ToBoxType returns a new BoxType from the provided uint32
func Uint32ToBoxType(i uint32) BoxType {
	b := make([]byte, 4)
	binary.BigEndian.PutUint32(b, i)
	return BoxType{b[0], b[1], b[2], b[3]}
}

func (boxType BoxType) String() string {
	if isPrintable(boxType[0]) && isPrintable(boxType[1]) && isPrintable(boxType[2]) && isPrintable(boxType[3]) {
		s := string([]byte{boxType[0], boxType[1], boxType[2], boxType[3]})
		s = strings.ReplaceAll(s, string([]byte{0xa9}), "(c)")
		return s
	}
	return fmt.Sprintf("0x%02x%02x%02x%02x", boxType[0], boxType[1], boxType[2], boxType[3])
}

func isASCII(c byte) bool {
	return c >= 0x20 && c <= 0x7e
}

func isPrintable(c byte) bool {
	return isASCII(c) || c == 0xa9
}

func (lhs BoxType) MatchWith(rhs BoxType) bool {
	if lhs == boxTypeAny || rhs == boxTypeAny {
		return true
	}
	return lhs == rhs
}

var boxTypeAny = BoxType{0x00, 0x00, 0x00, 0x00}

func BoxTypeAny() BoxType {
	return boxTypeAny
}

type boxDef struct {
	dataType reflect.Type
	versions []uint8
	isTarget func(Context) bool
	fields   []*field
}

var boxMap = make(map[BoxType][]boxDef, 64)

func AddBoxDef(payload IBox, versions ...uint8) {
	boxMap[payload.GetType()] = append(boxMap[payload.GetType()], boxDef{
		dataType: reflect.TypeOf(payload).Elem(),
		versions: versions,
		fields:   buildFields(payload),
	})
}

func AddBoxDefEx(payload IBox, isTarget func(Context) bool, versions ...uint8) {
	boxMap[payload.GetType()] = append(boxMap[payload.GetType()], boxDef{
		dataType: reflect.TypeOf(payload).Elem(),
		versions: versions,
		isTarget: isTarget,
		fields:   buildFields(payload),
	})
}

func AddAnyTypeBoxDef(payload IAnyType, boxType BoxType, versions ...uint8) {
	boxMap[boxType] = append(boxMap[boxType], boxDef{
		dataType: reflect.TypeOf(payload).Elem(),
		versions: versions,
		fields:   buildFields(payload),
	})
}

func AddAnyTypeBoxDefEx(payload IAnyType, boxType BoxType, isTarget func(Context) bool, versions ...uint8) {
	boxMap[boxType] = append(boxMap[boxType], boxDef{
		dataType: reflect.TypeOf(payload).Elem(),
		versions: versions,
		isTarget: isTarget,
		fields:   buildFields(payload),
	})
}

var itemBoxFields = buildFields(&Item{})

func (boxType BoxType) getBoxDef(ctx Context) *boxDef {
	boxDefs := boxMap[boxType]
	for i := len(boxDefs) - 1; i >= 0; i-- {
		boxDef := &boxDefs[i]
		if boxDef.isTarget == nil || boxDef.isTarget(ctx) {
			return boxDef
		}
	}
	if ctx.UnderIlst {
		typeID := int(binary.BigEndian.Uint32(boxType[:]))
		if typeID >= 1 && typeID <= ctx.QuickTimeKeysMetaEntryCount {
			return &boxDef{
				dataType: reflect.TypeOf(Item{}),
				isTarget: isIlstMetaContainer,
				fields:   itemBoxFields,
			}
		}
	}
	return nil
}

func (boxType BoxType) IsSupported(ctx Context) bool {
	return boxType.getBoxDef(ctx) != nil
}

func (boxType BoxType) New(ctx Context) (IBox, error) {
	boxDef := boxType.getBoxDef(ctx)
	if boxDef == nil {
		return nil, ErrBoxInfoNotFound
	}

	box, ok := reflect.New(boxDef.dataType).Interface().(IBox)
	if !ok {
		return nil, fmt.Errorf("box type not implements IBox interface: %s", boxType.String())
	}

	anyTypeBox, ok := box.(IAnyType)
	if ok {
		anyTypeBox.SetType(boxType)
	}

	return box, nil
}

func (boxType BoxType) GetSupportedVersions(ctx Context) ([]uint8, error) {
	boxDef := boxType.getBoxDef(ctx)
	if boxDef == nil {
		return nil, ErrBoxInfoNotFound
	}
	return boxDef.versions, nil
}

func (boxType BoxType) IsSupportedVersion(ver uint8, ctx Context) bool {
	boxDef := boxType.getBoxDef(ctx)
	if boxDef == nil {
		return false
	}
	if len(boxDef.versions) == 0 {
		return true
	}
	for _, sver := range boxDef.versions {
		if ver == sver {
			return true
		}
	}
	return false
}
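BoxType above is a four-character code (fourcc) stored as four bytes, interchangeable with a big-endian uint32. A minimal standalone sketch of that conversion; note this simplified String falls back to hex for any non-ASCII byte, whereas the library additionally treats 0xa9 as printable and renders it as "(c)":

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// fourCC mirrors the BoxType idea: a four-byte box code convertible
// to and from a big-endian uint32. Names here are illustrative.
type fourCC [4]byte

func fourCCFromUint32(i uint32) fourCC {
	var b [4]byte
	binary.BigEndian.PutUint32(b[:], i)
	return b
}

func (c fourCC) String() string {
	for _, b := range c {
		if b < 0x20 || b > 0x7e { // non-printable ASCII: fall back to hex
			return fmt.Sprintf("0x%02x%02x%02x%02x", c[0], c[1], c[2], c[3])
		}
	}
	return string(c[:])
}

func main() {
	moov := fourCC{'m', 'o', 'o', 'v'}
	fmt.Println(moov.String())                         // prints moov
	fmt.Println(fourCCFromUint32(0x6d6f6f76).String()) // prints moov
}
```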
684
vendor/github.com/abema/go-mp4/probe.go
generated
vendored
@ -1,684 +0,0 @@
package mp4

import (
	"bytes"
	"errors"
	"io"

	"github.com/abema/go-mp4/internal/bitio"
)

type ProbeInfo struct {
	MajorBrand       [4]byte
	MinorVersion     uint32
	CompatibleBrands [][4]byte
	FastStart        bool
	Timescale        uint32
	Duration         uint64
	Tracks           Tracks
	Segments         Segments
}

// Deprecated: replace with ProbeInfo
type FraProbeInfo = ProbeInfo

type Tracks []*Track

// Deprecated: replace with Track
type TrackInfo = Track

type Track struct {
	TrackID   uint32
	Timescale uint32
	Duration  uint64
	Codec     Codec
	Encrypted bool
	EditList  EditList
	Samples   Samples
	Chunks    Chunks
	AVC       *AVCDecConfigInfo
	MP4A      *MP4AInfo
}

type Codec int

const (
	CodecUnknown Codec = iota
	CodecAVC1
	CodecMP4A
)

type EditList []*EditListEntry

type EditListEntry struct {
	MediaTime       int64
	SegmentDuration uint64
}

type Samples []*Sample

type Sample struct {
	Size                  uint32
	TimeDelta             uint32
	CompositionTimeOffset int64
}

type Chunks []*Chunk

type Chunk struct {
	DataOffset      uint64
	SamplesPerChunk uint32
}

type AVCDecConfigInfo struct {
	ConfigurationVersion uint8
	Profile              uint8
	ProfileCompatibility uint8
	Level                uint8
	LengthSize           uint16
	Width                uint16
	Height               uint16
}

type MP4AInfo struct {
	OTI          uint8
	AudOTI       uint8
	ChannelCount uint16
}

type Segments []*Segment

// Deprecated: replace with Segment
type SegmentInfo = Segment

type Segment struct {
	TrackID               uint32
	MoofOffset            uint64
	BaseMediaDecodeTime   uint64
	DefaultSampleDuration uint32
	SampleCount           uint32
	Duration              uint32
	CompositionTimeOffset int32
	Size                  uint32
}

// Probe probes MP4 file
func Probe(r io.ReadSeeker) (*ProbeInfo, error) {
	probeInfo := &ProbeInfo{
		Tracks:   make([]*Track, 0, 8),
		Segments: make([]*Segment, 0, 8),
	}
	bis, err := ExtractBoxes(r, nil, []BoxPath{
		{BoxTypeFtyp()},
		{BoxTypeMoov()},
		{BoxTypeMoov(), BoxTypeMvhd()},
		{BoxTypeMoov(), BoxTypeTrak()},
		{BoxTypeMoof()},
		{BoxTypeMdat()},
	})
	if err != nil {
		return nil, err
	}
	var mdatAppeared bool
	for _, bi := range bis {
		switch bi.Type {
		case BoxTypeFtyp():
			var ftyp Ftyp
			if _, err := bi.SeekToPayload(r); err != nil {
				return nil, err
			}
			if _, err := Unmarshal(r, bi.Size-bi.HeaderSize, &ftyp, bi.Context); err != nil {
				return nil, err
			}
			probeInfo.MajorBrand = ftyp.MajorBrand
			probeInfo.MinorVersion = ftyp.MinorVersion
			probeInfo.CompatibleBrands = make([][4]byte, 0, len(ftyp.CompatibleBrands))
			for _, entry := range ftyp.CompatibleBrands {
				probeInfo.CompatibleBrands = append(probeInfo.CompatibleBrands, entry.CompatibleBrand)
			}
		case BoxTypeMoov():
			probeInfo.FastStart = !mdatAppeared
		case BoxTypeMvhd():
			var mvhd Mvhd
			if _, err := bi.SeekToPayload(r); err != nil {
				return nil, err
			}
			if _, err := Unmarshal(r, bi.Size-bi.HeaderSize, &mvhd, bi.Context); err != nil {
				return nil, err
			}
			probeInfo.Timescale = mvhd.Timescale
			if mvhd.GetVersion() == 0 {
				probeInfo.Duration = uint64(mvhd.DurationV0)
			} else {
				probeInfo.Duration = mvhd.DurationV1
			}
		case BoxTypeTrak():
			track, err := probeTrak(r, bi)
			if err != nil {
				return nil, err
			}
			probeInfo.Tracks = append(probeInfo.Tracks, track)
		case BoxTypeMoof():
			segment, err := probeMoof(r, bi)
			if err != nil {
				return nil, err
			}
			probeInfo.Segments = append(probeInfo.Segments, segment)
		case BoxTypeMdat():
			mdatAppeared = true
		}
	}
	return probeInfo, nil
}

// ProbeFra probes fragmented MP4 file
// Deprecated: replace with Probe
func ProbeFra(r io.ReadSeeker) (*FraProbeInfo, error) {
	probeInfo, err := Probe(r)
	return (*FraProbeInfo)(probeInfo), err
}

func probeTrak(r io.ReadSeeker, bi *BoxInfo) (*Track, error) {
	track := new(Track)

	bips, err := ExtractBoxesWithPayload(r, bi, []BoxPath{
		{BoxTypeTkhd()},
		{BoxTypeEdts(), BoxTypeElst()},
		{BoxTypeMdia(), BoxTypeMdhd()},
		{BoxTypeMdia(), BoxTypeMinf(), BoxTypeStbl(), BoxTypeStsd(), BoxTypeAvc1()},
		{BoxTypeMdia(), BoxTypeMinf(), BoxTypeStbl(), BoxTypeStsd(), BoxTypeAvc1(), BoxTypeAvcC()},
		{BoxTypeMdia(), BoxTypeMinf(), BoxTypeStbl(), BoxTypeStsd(), BoxTypeEncv()},
		{BoxTypeMdia(), BoxTypeMinf(), BoxTypeStbl(), BoxTypeStsd(), BoxTypeEncv(), BoxTypeAvcC()},
		{BoxTypeMdia(), BoxTypeMinf(), BoxTypeStbl(), BoxTypeStsd(), BoxTypeMp4a()},
		{BoxTypeMdia(), BoxTypeMinf(), BoxTypeStbl(), BoxTypeStsd(), BoxTypeMp4a(), BoxTypeEsds()},
		{BoxTypeMdia(), BoxTypeMinf(), BoxTypeStbl(), BoxTypeStsd(), BoxTypeMp4a(), BoxTypeWave(), BoxTypeEsds()},
		{BoxTypeMdia(), BoxTypeMinf(), BoxTypeStbl(), BoxTypeStsd(), BoxTypeEnca()},
		{BoxTypeMdia(), BoxTypeMinf(), BoxTypeStbl(), BoxTypeStsd(), BoxTypeEnca(), BoxTypeEsds()},
		{BoxTypeMdia(), BoxTypeMinf(), BoxTypeStbl(), BoxTypeStco()},
		{BoxTypeMdia(), BoxTypeMinf(), BoxTypeStbl(), BoxTypeCo64()},
		{BoxTypeMdia(), BoxTypeMinf(), BoxTypeStbl(), BoxTypeStts()},
		{BoxTypeMdia(), BoxTypeMinf(), BoxTypeStbl(), BoxTypeCtts()},
		{BoxTypeMdia(), BoxTypeMinf(), BoxTypeStbl(), BoxTypeStsc()},
		{BoxTypeMdia(), BoxTypeMinf(), BoxTypeStbl(), BoxTypeStsz()},
	})
	if err != nil {
		return nil, err
	}
	var tkhd *Tkhd
	var elst *Elst
	var mdhd *Mdhd
	var avc1 *VisualSampleEntry
	var avcC *AVCDecoderConfiguration
	var audioSampleEntry *AudioSampleEntry
	var esds *Esds
|
||||
var stco *Stco
|
||||
var stts *Stts
|
||||
var stsc *Stsc
|
||||
var ctts *Ctts
|
||||
var stsz *Stsz
|
||||
var co64 *Co64
|
||||
for _, bip := range bips {
|
||||
switch bip.Info.Type {
|
||||
case BoxTypeTkhd():
|
||||
tkhd = bip.Payload.(*Tkhd)
|
||||
case BoxTypeElst():
|
||||
elst = bip.Payload.(*Elst)
|
||||
case BoxTypeMdhd():
|
||||
mdhd = bip.Payload.(*Mdhd)
|
||||
case BoxTypeAvc1():
|
||||
track.Codec = CodecAVC1
|
||||
avc1 = bip.Payload.(*VisualSampleEntry)
|
||||
case BoxTypeAvcC():
|
||||
avcC = bip.Payload.(*AVCDecoderConfiguration)
|
||||
case BoxTypeEncv():
|
||||
track.Codec = CodecAVC1
|
||||
track.Encrypted = true
|
||||
case BoxTypeMp4a():
|
||||
track.Codec = CodecMP4A
|
||||
audioSampleEntry = bip.Payload.(*AudioSampleEntry)
|
||||
case BoxTypeEnca():
|
||||
track.Codec = CodecMP4A
|
||||
track.Encrypted = true
|
||||
audioSampleEntry = bip.Payload.(*AudioSampleEntry)
|
||||
case BoxTypeEsds():
|
||||
esds = bip.Payload.(*Esds)
|
||||
case BoxTypeStco():
|
||||
stco = bip.Payload.(*Stco)
|
||||
case BoxTypeStts():
|
||||
stts = bip.Payload.(*Stts)
|
||||
case BoxTypeStsc():
|
||||
stsc = bip.Payload.(*Stsc)
|
||||
case BoxTypeCtts():
|
||||
ctts = bip.Payload.(*Ctts)
|
||||
case BoxTypeStsz():
|
||||
stsz = bip.Payload.(*Stsz)
|
||||
case BoxTypeCo64():
|
||||
co64 = bip.Payload.(*Co64)
|
||||
}
|
||||
}
|
||||
|
||||
if tkhd == nil {
|
||||
return nil, errors.New("tkhd box not found")
|
||||
}
|
||||
track.TrackID = tkhd.TrackID
|
||||
|
||||
if elst != nil {
|
||||
editList := make([]*EditListEntry, 0, len(elst.Entries))
|
||||
for i := range elst.Entries {
|
||||
editList = append(editList, &EditListEntry{
|
||||
MediaTime: elst.GetMediaTime(i),
|
||||
SegmentDuration: elst.GetSegmentDuration(i),
|
||||
})
|
||||
}
|
||||
track.EditList = editList
|
||||
}
|
||||
|
||||
if mdhd == nil {
|
||||
return nil, errors.New("mdhd box not found")
|
||||
}
|
||||
track.Timescale = mdhd.Timescale
|
||||
track.Duration = mdhd.GetDuration()
|
||||
|
||||
if avc1 != nil && avcC != nil {
|
||||
track.AVC = &AVCDecConfigInfo{
|
||||
ConfigurationVersion: avcC.ConfigurationVersion,
|
||||
Profile: avcC.Profile,
|
||||
ProfileCompatibility: avcC.ProfileCompatibility,
|
||||
Level: avcC.Level,
|
||||
LengthSize: uint16(avcC.LengthSizeMinusOne) + 1,
|
||||
Width: avc1.Width,
|
||||
Height: avc1.Height,
|
||||
}
|
||||
}
|
||||
|
||||
if audioSampleEntry != nil && esds != nil {
|
||||
oti, audOTI, err := detectAACProfile(esds)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
track.MP4A = &MP4AInfo{
|
||||
OTI: oti,
|
||||
AudOTI: audOTI,
|
||||
ChannelCount: audioSampleEntry.ChannelCount,
|
||||
}
|
||||
}
|
||||
|
||||
track.Chunks = make([]*Chunk, 0)
|
||||
if stco != nil {
|
||||
for _, offset := range stco.ChunkOffset {
|
||||
track.Chunks = append(track.Chunks, &Chunk{
|
||||
DataOffset: uint64(offset),
|
||||
})
|
||||
}
|
||||
} else if co64 != nil {
|
||||
for _, offset := range co64.ChunkOffset {
|
||||
track.Chunks = append(track.Chunks, &Chunk{
|
||||
DataOffset: offset,
|
||||
})
|
||||
}
|
||||
} else {
|
||||
return nil, errors.New("stco/co64 box not found")
|
||||
}
|
||||
|
||||
if stts == nil {
|
||||
return nil, errors.New("stts box not found")
|
||||
}
|
||||
track.Samples = make([]*Sample, 0)
|
||||
for _, entry := range stts.Entries {
|
||||
for i := uint32(0); i < entry.SampleCount; i++ {
|
||||
track.Samples = append(track.Samples, &Sample{
|
||||
TimeDelta: entry.SampleDelta,
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
if stsc == nil {
|
||||
return nil, errors.New("stsc box not found")
|
||||
}
|
||||
for si, entry := range stsc.Entries {
|
||||
end := uint32(len(track.Chunks))
|
||||
if si != len(stsc.Entries)-1 && stsc.Entries[si+1].FirstChunk-1 < end {
|
||||
end = stsc.Entries[si+1].FirstChunk - 1
|
||||
}
|
||||
for ci := entry.FirstChunk - 1; ci < end; ci++ {
|
||||
track.Chunks[ci].SamplesPerChunk = entry.SamplesPerChunk
|
||||
}
|
||||
}
|
||||
|
||||
if ctts != nil {
|
||||
var si uint32
|
||||
for ci, entry := range ctts.Entries {
|
||||
for i := uint32(0); i < entry.SampleCount; i++ {
|
||||
if si >= uint32(len(track.Samples)) {
|
||||
break
|
||||
}
|
||||
track.Samples[si].CompositionTimeOffset = ctts.GetSampleOffset(ci)
|
||||
si++
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if stsz != nil {
|
||||
for i := 0; i < len(stsz.EntrySize) && i < len(track.Samples); i++ {
|
||||
track.Samples[i].Size = stsz.EntrySize[i]
|
||||
}
|
||||
}
|
||||
|
||||
return track, nil
|
||||
}
|
||||
|
||||
func detectAACProfile(esds *Esds) (oti, audOTI uint8, err error) {
|
||||
configDscr := findDescriptorByTag(esds.Descriptors, DecoderConfigDescrTag)
|
||||
if configDscr == nil || configDscr.DecoderConfigDescriptor == nil {
|
||||
return 0, 0, nil
|
||||
}
|
||||
if configDscr.DecoderConfigDescriptor.ObjectTypeIndication != 0x40 {
|
||||
return configDscr.DecoderConfigDescriptor.ObjectTypeIndication, 0, nil
|
||||
}
|
||||
|
||||
specificDscr := findDescriptorByTag(esds.Descriptors, DecSpecificInfoTag)
|
||||
if specificDscr == nil {
|
||||
return 0, 0, errors.New("DecoderSpecificationInfoDescriptor not found")
|
||||
}
|
||||
|
||||
r := bitio.NewReader(bytes.NewReader(specificDscr.Data))
|
||||
remaining := len(specificDscr.Data) * 8
|
||||
|
||||
// audio object type
|
||||
audioObjectType, read, err := getAudioObjectType(r)
|
||||
if err != nil {
|
||||
return 0, 0, err
|
||||
}
|
||||
remaining -= read
|
||||
|
||||
// sampling frequency index
|
||||
samplingFrequencyIndex, err := r.ReadBits(4)
|
||||
if err != nil {
|
||||
return 0, 0, err
|
||||
}
|
||||
remaining -= 4
|
||||
if samplingFrequencyIndex[0] == 0x0f {
|
||||
if _, err = r.ReadBits(24); err != nil {
|
||||
return 0, 0, err
|
||||
}
|
||||
remaining -= 24
|
||||
}
|
||||
|
||||
if audioObjectType == 2 && remaining >= 20 {
|
||||
if _, err = r.ReadBits(4); err != nil {
|
||||
return 0, 0, err
|
||||
}
|
||||
remaining -= 4
|
||||
syncExtensionType, err := r.ReadBits(11)
|
||||
if err != nil {
|
||||
return 0, 0, err
|
||||
}
|
||||
remaining -= 11
|
||||
if syncExtensionType[0] == 0x2 && syncExtensionType[1] == 0xb7 {
|
||||
extAudioObjectType, _, err := getAudioObjectType(r)
|
||||
if err != nil {
|
||||
return 0, 0, err
|
||||
}
|
||||
if extAudioObjectType == 5 || extAudioObjectType == 22 {
|
||||
sbr, err := r.ReadBits(1)
|
||||
if err != nil {
|
||||
return 0, 0, err
|
||||
}
|
||||
remaining--
|
||||
if sbr[0] != 0 {
|
||||
if extAudioObjectType == 5 {
|
||||
sfi, err := r.ReadBits(4)
|
||||
if err != nil {
|
||||
return 0, 0, err
|
||||
}
|
||||
remaining -= 4
|
||||
if sfi[0] == 0xf {
|
||||
if _, err := r.ReadBits(24); err != nil {
|
||||
return 0, 0, err
|
||||
}
|
||||
remaining -= 24
|
||||
}
|
||||
if remaining >= 12 {
|
||||
syncExtensionType, err := r.ReadBits(11)
|
||||
if err != nil {
|
||||
return 0, 0, err
|
||||
}
|
||||
if syncExtensionType[0] == 0x5 && syncExtensionType[1] == 0x48 {
|
||||
ps, err := r.ReadBits(1)
|
||||
if err != nil {
|
||||
return 0, 0, err
|
||||
}
|
||||
if ps[0] != 0 {
|
||||
return 0x40, 29, nil
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
return 0x40, 5, nil
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
return 0x40, audioObjectType, nil
|
||||
}
|
||||
|
||||
func findDescriptorByTag(dscrs []Descriptor, tag int8) *Descriptor {
|
||||
for _, dscr := range dscrs {
|
||||
if dscr.Tag == tag {
|
||||
return &dscr
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func getAudioObjectType(r bitio.Reader) (byte, int, error) {
|
||||
audioObjectType, err := r.ReadBits(5)
|
||||
if err != nil {
|
||||
return 0, 0, err
|
||||
}
|
||||
if audioObjectType[0] != 0x1f {
|
||||
return audioObjectType[0], 5, nil
|
||||
}
|
||||
audioObjectType, err = r.ReadBits(6)
|
||||
if err != nil {
|
||||
return 0, 0, err
|
||||
}
|
||||
return audioObjectType[0] + 32, 11, nil
|
||||
}
|
||||
|
||||
func probeMoof(r io.ReadSeeker, bi *BoxInfo) (*Segment, error) {
|
||||
bips, err := ExtractBoxesWithPayload(r, bi, []BoxPath{
|
||||
{BoxTypeTraf(), BoxTypeTfhd()},
|
||||
{BoxTypeTraf(), BoxTypeTfdt()},
|
||||
{BoxTypeTraf(), BoxTypeTrun()},
|
||||
})
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
var tfhd *Tfhd
|
||||
var tfdt *Tfdt
|
||||
var trun *Trun
|
||||
|
||||
segment := &Segment{
|
||||
MoofOffset: bi.Offset,
|
||||
}
|
||||
for _, bip := range bips {
|
||||
switch bip.Info.Type {
|
||||
case BoxTypeTfhd():
|
||||
tfhd = bip.Payload.(*Tfhd)
|
||||
case BoxTypeTfdt():
|
||||
tfdt = bip.Payload.(*Tfdt)
|
||||
case BoxTypeTrun():
|
||||
trun = bip.Payload.(*Trun)
|
||||
}
|
||||
}
|
||||
|
||||
if tfhd == nil {
|
||||
return nil, errors.New("tfhd not found")
|
||||
}
|
||||
segment.TrackID = tfhd.TrackID
|
||||
segment.DefaultSampleDuration = tfhd.DefaultSampleDuration
|
||||
|
||||
if tfdt != nil {
|
||||
if tfdt.Version == 0 {
|
||||
segment.BaseMediaDecodeTime = uint64(tfdt.BaseMediaDecodeTimeV0)
|
||||
} else {
|
||||
segment.BaseMediaDecodeTime = tfdt.BaseMediaDecodeTimeV1
|
||||
}
|
||||
}
|
||||
|
||||
if trun != nil {
|
||||
segment.SampleCount = trun.SampleCount
|
||||
|
||||
if trun.CheckFlag(0x000100) {
|
||||
segment.Duration = 0
|
||||
for ei := range trun.Entries {
|
||||
segment.Duration += trun.Entries[ei].SampleDuration
|
||||
}
|
||||
} else {
|
||||
segment.Duration = tfhd.DefaultSampleDuration * segment.SampleCount
|
||||
}
|
||||
|
||||
if trun.CheckFlag(0x000200) {
|
||||
segment.Size = 0
|
||||
for ei := range trun.Entries {
|
||||
segment.Size += trun.Entries[ei].SampleSize
|
||||
}
|
||||
} else {
|
||||
segment.Size = tfhd.DefaultSampleSize * segment.SampleCount
|
||||
}
|
||||
|
||||
var duration uint32
|
||||
for ei := range trun.Entries {
|
||||
offset := int32(duration) + int32(trun.GetSampleCompositionTimeOffset(ei))
|
||||
if ei == 0 || offset < segment.CompositionTimeOffset {
|
||||
segment.CompositionTimeOffset = offset
|
||||
}
|
||||
if trun.CheckFlag(0x000100) {
|
||||
duration += trun.Entries[ei].SampleDuration
|
||||
} else {
|
||||
duration += tfhd.DefaultSampleDuration
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return segment, nil
|
||||
}
|
||||
|
||||
func FindIDRFrames(r io.ReadSeeker, trackInfo *TrackInfo) ([]int, error) {
|
||||
if trackInfo.AVC == nil {
|
||||
return nil, nil
|
||||
}
|
||||
lengthSize := uint32(trackInfo.AVC.LengthSize)
|
||||
|
||||
var si int
|
||||
idxs := make([]int, 0, 8)
|
||||
for _, chunk := range trackInfo.Chunks {
|
||||
end := si + int(chunk.SamplesPerChunk)
|
||||
dataOffset := chunk.DataOffset
|
||||
for ; si < end && si < len(trackInfo.Samples); si++ {
|
||||
sample := trackInfo.Samples[si]
|
||||
if sample.Size == 0 {
|
||||
continue
|
||||
}
|
||||
for nalOffset := uint32(0); nalOffset+lengthSize+1 <= sample.Size; {
|
||||
if _, err := r.Seek(int64(dataOffset+uint64(nalOffset)), io.SeekStart); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
data := make([]byte, lengthSize+1)
|
||||
if _, err := io.ReadFull(r, data); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
var length uint32
|
||||
for i := 0; i < int(lengthSize); i++ {
|
||||
length = (length << 8) + uint32(data[i])
|
||||
}
|
||||
nalHeader := data[lengthSize]
|
||||
nalType := nalHeader & 0x1f
|
||||
if nalType == 5 {
|
||||
idxs = append(idxs, si)
|
||||
break
|
||||
}
|
||||
nalOffset += lengthSize + length
|
||||
}
|
||||
dataOffset += uint64(sample.Size)
|
||||
}
|
||||
}
|
||||
return idxs, nil
|
||||
}
|
||||
|
||||
func (samples Samples) GetBitrate(timescale uint32) uint64 {
|
||||
var totalSize uint64
|
||||
var totalDuration uint64
|
||||
for _, sample := range samples {
|
||||
totalSize += uint64(sample.Size)
|
||||
totalDuration += uint64(sample.TimeDelta)
|
||||
}
|
||||
if totalDuration == 0 {
|
||||
return 0
|
||||
}
|
||||
return 8 * totalSize * uint64(timescale) / totalDuration
|
||||
}
|
||||
|
||||
func (samples Samples) GetMaxBitrate(timescale uint32, timeDelta uint64) uint64 {
|
||||
if timeDelta == 0 {
|
||||
return 0
|
||||
}
|
||||
var maxBitrate uint64
|
||||
var size uint64
|
||||
var duration uint64
|
||||
var begin int
|
||||
var end int
|
||||
for end < len(samples) {
|
||||
for {
|
||||
size += uint64(samples[end].Size)
|
||||
duration += uint64(samples[end].TimeDelta)
|
||||
end++
|
||||
if duration >= timeDelta || end == len(samples) {
|
||||
break
|
||||
}
|
||||
}
|
||||
bitrate := 8 * size * uint64(timescale) / duration
|
||||
if bitrate > maxBitrate {
|
||||
maxBitrate = bitrate
|
||||
}
|
||||
for {
|
||||
size -= uint64(samples[begin].Size)
|
||||
duration -= uint64(samples[begin].TimeDelta)
|
||||
begin++
|
||||
if duration < timeDelta {
|
||||
break
|
||||
}
|
||||
}
|
||||
}
|
||||
return maxBitrate
|
||||
}
|
||||
|
||||
func (segments Segments) GetBitrate(trackID uint32, timescale uint32) uint64 {
|
||||
var totalSize uint64
|
||||
var totalDuration uint64
|
||||
for _, segment := range segments {
|
||||
if segment.TrackID == trackID {
|
||||
totalSize += uint64(segment.Size)
|
||||
totalDuration += uint64(segment.Duration)
|
||||
}
|
||||
}
|
||||
if totalDuration == 0 {
|
||||
return 0
|
||||
}
|
||||
return 8 * totalSize * uint64(timescale) / totalDuration
|
||||
}
|
||||
|
||||
func (segments Segments) GetMaxBitrate(trackID uint32, timescale uint32) uint64 {
|
||||
var maxBitrate uint64
|
||||
for _, segment := range segments {
|
||||
if segment.TrackID == trackID && segment.Duration != 0 {
|
||||
bitrate := 8 * uint64(segment.Size) * uint64(timescale) / uint64(segment.Duration)
|
||||
if bitrate > maxBitrate {
|
||||
maxBitrate = bitrate
|
||||
}
|
||||
}
|
||||
}
|
||||
return maxBitrate
|
||||
}
|
||||
199
vendor/github.com/abema/go-mp4/read.go
generated
vendored
@@ -1,199 +0,0 @@
package mp4

import (
	"errors"
	"fmt"
	"io"
)

type BoxPath []BoxType

func (lhs BoxPath) compareWith(rhs BoxPath) (forwardMatch bool, match bool) {
	if len(lhs) > len(rhs) {
		return false, false
	}
	for i := 0; i < len(lhs); i++ {
		if !lhs[i].MatchWith(rhs[i]) {
			return false, false
		}
	}
	if len(lhs) < len(rhs) {
		return true, false
	}
	return false, true
}

type ReadHandle struct {
	Params      []interface{}
	BoxInfo     BoxInfo
	Path        BoxPath
	ReadPayload func() (box IBox, n uint64, err error)
	ReadData    func(io.Writer) (n uint64, err error)
	Expand      func(params ...interface{}) (vals []interface{}, err error)
}

type ReadHandler func(handle *ReadHandle) (val interface{}, err error)

func ReadBoxStructure(r io.ReadSeeker, handler ReadHandler, params ...interface{}) ([]interface{}, error) {
	if _, err := r.Seek(0, io.SeekStart); err != nil {
		return nil, err
	}
	return readBoxStructure(r, 0, true, nil, Context{}, handler, params)
}

func ReadBoxStructureFromInternal(r io.ReadSeeker, bi *BoxInfo, handler ReadHandler, params ...interface{}) (interface{}, error) {
	return readBoxStructureFromInternal(r, bi, nil, handler, params)
}

func readBoxStructureFromInternal(r io.ReadSeeker, bi *BoxInfo, path BoxPath, handler ReadHandler, params []interface{}) (interface{}, error) {
	if _, err := bi.SeekToPayload(r); err != nil {
		return nil, err
	}

	// check comatible-brands
	if len(path) == 0 && bi.Type == BoxTypeFtyp() {
		var ftyp Ftyp
		if _, err := Unmarshal(r, bi.Size-bi.HeaderSize, &ftyp, bi.Context); err != nil {
			return nil, err
		}
		if ftyp.HasCompatibleBrand(BrandQT()) {
			bi.IsQuickTimeCompatible = true
		}
		if _, err := bi.SeekToPayload(r); err != nil {
			return nil, err
		}
	}

	// parse numbered ilst items after keys box by saving EntryCount field to context
	if bi.Type == BoxTypeKeys() {
		var keys Keys
		if _, err := Unmarshal(r, bi.Size-bi.HeaderSize, &keys, bi.Context); err != nil {
			return nil, err
		}
		bi.QuickTimeKeysMetaEntryCount = int(keys.EntryCount)
		if _, err := bi.SeekToPayload(r); err != nil {
			return nil, err
		}
	}

	ctx := bi.Context
	if bi.Type == BoxTypeWave() {
		ctx.UnderWave = true
	} else if bi.Type == BoxTypeIlst() {
		ctx.UnderIlst = true
	} else if bi.UnderIlst && !bi.UnderIlstMeta && IsIlstMetaBoxType(bi.Type) {
		ctx.UnderIlstMeta = true
		if bi.Type == StrToBoxType("----") {
			ctx.UnderIlstFreeMeta = true
		}
	} else if bi.Type == BoxTypeUdta() {
		ctx.UnderUdta = true
	}

	newPath := make(BoxPath, len(path)+1)
	copy(newPath, path)
	newPath[len(path)] = bi.Type

	h := &ReadHandle{
		Params:  params,
		BoxInfo: *bi,
		Path:    newPath,
	}

	var childrenOffset uint64

	h.ReadPayload = func() (IBox, uint64, error) {
		if _, err := bi.SeekToPayload(r); err != nil {
			return nil, 0, err
		}

		box, n, err := UnmarshalAny(r, bi.Type, bi.Size-bi.HeaderSize, bi.Context)
		if err != nil {
			return nil, 0, err
		}
		childrenOffset = bi.Offset + bi.HeaderSize + n
		return box, n, nil
	}

	h.ReadData = func(w io.Writer) (uint64, error) {
		if _, err := bi.SeekToPayload(r); err != nil {
			return 0, err
		}

		size := bi.Size - bi.HeaderSize
		if _, err := io.CopyN(w, r, int64(size)); err != nil {
			return 0, err
		}
		return size, nil
	}

	h.Expand = func(params ...interface{}) ([]interface{}, error) {
		if childrenOffset == 0 {
			if _, err := bi.SeekToPayload(r); err != nil {
				return nil, err
			}

			_, n, err := UnmarshalAny(r, bi.Type, bi.Size-bi.HeaderSize, bi.Context)
			if err != nil {
				return nil, err
			}
			childrenOffset = bi.Offset + bi.HeaderSize + n
		} else {
			if _, err := r.Seek(int64(childrenOffset), io.SeekStart); err != nil {
				return nil, err
			}
		}

		childrenSize := bi.Offset + bi.Size - childrenOffset
		return readBoxStructure(r, childrenSize, false, newPath, ctx, handler, params)
	}

	if val, err := handler(h); err != nil {
		return nil, err
	} else if _, err := bi.SeekToEnd(r); err != nil {
		return nil, err
	} else {
		return val, nil
	}
}

func readBoxStructure(r io.ReadSeeker, totalSize uint64, isRoot bool, path BoxPath, ctx Context, handler ReadHandler, params []interface{}) ([]interface{}, error) {
	vals := make([]interface{}, 0, 8)

	for isRoot || totalSize >= SmallHeaderSize {
		bi, err := ReadBoxInfo(r)
		if isRoot && err == io.EOF {
			return vals, nil
		} else if err != nil {
			return nil, err
		}

		if !isRoot && bi.Size > totalSize {
			return nil, fmt.Errorf("too large box size: type=%s, size=%d, actualBufSize=%d", bi.Type.String(), bi.Size, totalSize)
		}
		totalSize -= bi.Size

		bi.Context = ctx

		val, err := readBoxStructureFromInternal(r, bi, path, handler, params)
		if err != nil {
			return nil, err
		}
		vals = append(vals, val)

		if bi.IsQuickTimeCompatible {
			ctx.IsQuickTimeCompatible = true
		}

		// preserve keys entry count on context for subsequent ilst number item box
		if bi.Type == BoxTypeKeys() {
			ctx.QuickTimeKeysMetaEntryCount = bi.QuickTimeKeysMetaEntryCount
		}
	}

	if totalSize != 0 && !ctx.IsQuickTimeCompatible {
		return nil, errors.New("Unexpected EOF")
	}

	return vals, nil
}
261
vendor/github.com/abema/go-mp4/string.go
generated
vendored
@@ -1,261 +0,0 @@
package mp4

import (
	"bytes"
	"fmt"
	"io"
	"reflect"
	"strconv"

	"github.com/abema/go-mp4/internal/util"
)

type stringifier struct {
	buf    *bytes.Buffer
	src    IImmutableBox
	indent string
	ctx    Context
}

func Stringify(src IImmutableBox, ctx Context) (string, error) {
	return StringifyWithIndent(src, "", ctx)
}

func StringifyWithIndent(src IImmutableBox, indent string, ctx Context) (string, error) {
	boxDef := src.GetType().getBoxDef(ctx)
	if boxDef == nil {
		return "", ErrBoxInfoNotFound
	}

	v := reflect.ValueOf(src).Elem()

	m := &stringifier{
		buf:    bytes.NewBuffer(nil),
		src:    src,
		indent: indent,
		ctx:    ctx,
	}

	err := m.stringifyStruct(v, boxDef.fields, 0, true)
	if err != nil {
		return "", err
	}

	return m.buf.String(), nil
}

func (m *stringifier) stringify(v reflect.Value, fi *fieldInstance, depth int) error {
	switch v.Type().Kind() {
	case reflect.Ptr:
		return m.stringifyPtr(v, fi, depth)
	case reflect.Struct:
		return m.stringifyStruct(v, fi.children, depth, fi.is(fieldExtend))
	case reflect.Array:
		return m.stringifyArray(v, fi, depth)
	case reflect.Slice:
		return m.stringifySlice(v, fi, depth)
	case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
		return m.stringifyInt(v, fi, depth)
	case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
		return m.stringifyUint(v, fi, depth)
	case reflect.Bool:
		return m.stringifyBool(v, depth)
	case reflect.String:
		return m.stringifyString(v, depth)
	default:
		return fmt.Errorf("unsupported type: %s", v.Type().Kind())
	}
}

func (m *stringifier) stringifyPtr(v reflect.Value, fi *fieldInstance, depth int) error {
	return m.stringify(v.Elem(), fi, depth)
}

func (m *stringifier) stringifyStruct(v reflect.Value, fs []*field, depth int, extended bool) error {
	if !extended {
		m.buf.WriteString("{")
		if m.indent != "" {
			m.buf.WriteString("\n")
		}
		depth++
	}

	for _, f := range fs {
		fi := resolveFieldInstance(f, m.src, v, m.ctx)

		if !isTargetField(m.src, fi, m.ctx) {
			continue
		}

		if f.cnst != "" || f.is(fieldHidden) {
			continue
		}

		if !f.is(fieldExtend) {
			if m.indent != "" {
				writeIndent(m.buf, m.indent, depth+1)
			} else if m.buf.Len() != 0 && m.buf.Bytes()[m.buf.Len()-1] != '{' {
				m.buf.WriteString(" ")
			}
			m.buf.WriteString(f.name)
			m.buf.WriteString("=")
		}

		str, ok := fi.cfo.StringifyField(f.name, m.indent, depth+1, m.ctx)
		if ok {
			m.buf.WriteString(str)
			if !f.is(fieldExtend) && m.indent != "" {
				m.buf.WriteString("\n")
			}
			continue
		}

		if f.name == "Version" {
			m.buf.WriteString(strconv.Itoa(int(m.src.GetVersion())))
		} else if f.name == "Flags" {
			fmt.Fprintf(m.buf, "0x%06x", m.src.GetFlags())
		} else {
			err := m.stringify(v.FieldByName(f.name), fi, depth)
			if err != nil {
				return err
			}
		}

		if !f.is(fieldExtend) && m.indent != "" {
			m.buf.WriteString("\n")
		}
	}

	if !extended {
		if m.indent != "" {
			writeIndent(m.buf, m.indent, depth)
		}
		m.buf.WriteString("}")
	}

	return nil
}

func (m *stringifier) stringifyArray(v reflect.Value, fi *fieldInstance, depth int) error {
	begin, sep, end := "[", ", ", "]"
	if fi.is(fieldString) || fi.is(fieldISO639_2) {
		begin, sep, end = "\"", "", "\""
	} else if fi.is(fieldUUID) {
		begin, sep, end = "", "", ""
	}

	m.buf.WriteString(begin)

	m2 := *m
	if fi.is(fieldString) {
		m2.buf = bytes.NewBuffer(nil)
	}
	size := v.Type().Size()
	for i := 0; i < int(size)/int(v.Type().Elem().Size()); i++ {
		if i != 0 {
			m2.buf.WriteString(sep)
		}

		if err := m2.stringify(v.Index(i), fi, depth+1); err != nil {
			return err
		}

		if fi.is(fieldUUID) && (i == 3 || i == 5 || i == 7 || i == 9) {
			m.buf.WriteString("-")
		}
	}
	if fi.is(fieldString) {
		m.buf.WriteString(util.EscapeUnprintables(m2.buf.String()))
	}

	m.buf.WriteString(end)

	return nil
}

func (m *stringifier) stringifySlice(v reflect.Value, fi *fieldInstance, depth int) error {
	begin, sep, end := "[", ", ", "]"
	if fi.is(fieldString) || fi.is(fieldISO639_2) {
		begin, sep, end = "\"", "", "\""
	}

	m.buf.WriteString(begin)

	m2 := *m
	if fi.is(fieldString) {
		m2.buf = bytes.NewBuffer(nil)
	}
	for i := 0; i < v.Len(); i++ {
		if fi.length != LengthUnlimited && uint(i) >= fi.length {
			break
		}

		if i != 0 {
			m2.buf.WriteString(sep)
		}

		if err := m2.stringify(v.Index(i), fi, depth+1); err != nil {
			return err
		}
	}
	if fi.is(fieldString) {
		m.buf.WriteString(util.EscapeUnprintables(m2.buf.String()))
	}

	m.buf.WriteString(end)

	return nil
}

func (m *stringifier) stringifyInt(v reflect.Value, fi *fieldInstance, depth int) error {
	if fi.is(fieldHex) {
		val := v.Int()
		if val >= 0 {
			m.buf.WriteString("0x")
			m.buf.WriteString(strconv.FormatInt(val, 16))
		} else {
			m.buf.WriteString("-0x")
			m.buf.WriteString(strconv.FormatInt(-val, 16))
		}
	} else {
		m.buf.WriteString(strconv.FormatInt(v.Int(), 10))
	}
	return nil
}

func (m *stringifier) stringifyUint(v reflect.Value, fi *fieldInstance, depth int) error {
	if fi.is(fieldISO639_2) {
		m.buf.WriteString(string([]byte{byte(v.Uint() + 0x60)}))
	} else if fi.is(fieldUUID) {
		fmt.Fprintf(m.buf, "%02x", v.Uint())
	} else if fi.is(fieldString) {
		m.buf.WriteString(string([]byte{byte(v.Uint())}))
	} else if fi.is(fieldHex) || (!fi.is(fieldDec) && v.Type().Kind() == reflect.Uint8) || v.Type().Kind() == reflect.Uintptr {
		m.buf.WriteString("0x")
		m.buf.WriteString(strconv.FormatUint(v.Uint(), 16))
	} else {
		m.buf.WriteString(strconv.FormatUint(v.Uint(), 10))
	}

	return nil
}

func (m *stringifier) stringifyBool(v reflect.Value, depth int) error {
	m.buf.WriteString(strconv.FormatBool(v.Bool()))

	return nil
}

func (m *stringifier) stringifyString(v reflect.Value, depth int) error {
	m.buf.WriteString("\"")
	m.buf.WriteString(util.EscapeUnprintables(v.String()))
	m.buf.WriteString("\"")

	return nil
}

func writeIndent(w io.Writer, indent string, depth int) {
	for i := 0; i < depth; i++ {
		io.WriteString(w, indent)
	}
}
68
vendor/github.com/abema/go-mp4/write.go
generated
vendored
@@ -1,68 +0,0 @@
package mp4

import (
	"errors"
	"io"
)

type Writer struct {
	writer  io.WriteSeeker
	biStack []*BoxInfo
}

func NewWriter(w io.WriteSeeker) *Writer {
	return &Writer{
		writer: w,
	}
}

func (w *Writer) Write(p []byte) (int, error) {
	return w.writer.Write(p)
}

func (w *Writer) Seek(offset int64, whence int) (int64, error) {
	return w.writer.Seek(offset, whence)
}

func (w *Writer) StartBox(bi *BoxInfo) (*BoxInfo, error) {
	bi, err := WriteBoxInfo(w.writer, bi)
	if err != nil {
		return nil, err
	}
	w.biStack = append(w.biStack, bi)
	return bi, nil
}

func (w *Writer) EndBox() (*BoxInfo, error) {
	bi := w.biStack[len(w.biStack)-1]
	w.biStack = w.biStack[:len(w.biStack)-1]
	end, err := w.writer.Seek(0, io.SeekCurrent)
	if err != nil {
		return nil, err
	}
	bi.Size = uint64(end) - bi.Offset
	if _, err = bi.SeekToStart(w.writer); err != nil {
		return nil, err
	}
	if bi2, err := WriteBoxInfo(w.writer, bi); err != nil {
		return nil, err
	} else if bi.HeaderSize != bi2.HeaderSize {
		return nil, errors.New("header size changed")
	}
	if _, err := w.writer.Seek(end, io.SeekStart); err != nil {
		return nil, err
	}
	return bi, nil
}

func (w *Writer) CopyBox(r io.ReadSeeker, bi *BoxInfo) error {
	if _, err := bi.SeekToStart(r); err != nil {
		return err
	}
	if n, err := io.CopyN(w, r, int64(bi.Size)); err != nil {
		return err
	} else if n != int64(bi.Size) {
		return errors.New("failed to copy box")
	}
	return nil
}
0
vendor/github.com/dsoprea/go-exif/v3/.MODULE_ROOT
generated
vendored
9
vendor/github.com/dsoprea/go-exif/v3/LICENSE
generated
vendored
@@ -1,9 +0,0 @@
MIT LICENSE

Copyright 2019 Dustin Oprea

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
651	vendor/github.com/dsoprea/go-exif/v3/common/ifd.go	(generated, vendored)

@@ -1,651 +0,0 @@
package exifcommon

import (
	"errors"
	"fmt"
	"strings"

	"github.com/dsoprea/go-logging"
)

var (
	ifdLogger = log.NewLogger("exifcommon.ifd")
)

var (
	ErrChildIfdNotMapped = errors.New("no child-IFD for that tag-ID under parent")
)

// MappedIfd is one node in the IFD-mapping.
type MappedIfd struct {
	ParentTagId uint16
	Placement   []uint16
	Path        []string

	Name     string
	TagId    uint16
	Children map[uint16]*MappedIfd
}

// String returns a descriptive string.
func (mi *MappedIfd) String() string {
	pathPhrase := mi.PathPhrase()
	return fmt.Sprintf("MappedIfd<(0x%04X) [%s] PATH=[%s]>", mi.TagId, mi.Name, pathPhrase)
}

// PathPhrase returns a non-fully-qualified IFD path.
func (mi *MappedIfd) PathPhrase() string {
	return strings.Join(mi.Path, "/")
}

// TODO(dustin): Refactor this to use IfdIdentity structs.

// IfdMapping describes all of the IFDs that we currently recognize.
type IfdMapping struct {
	rootNode *MappedIfd
}

// NewIfdMapping returns a new IfdMapping struct.
func NewIfdMapping() (ifdMapping *IfdMapping) {
	rootNode := &MappedIfd{
		Path:     make([]string, 0),
		Children: make(map[uint16]*MappedIfd),
	}

	return &IfdMapping{
		rootNode: rootNode,
	}
}

// NewIfdMappingWithStandard returns a new IfdMapping struct preloaded with the
// standard IFDs.
func NewIfdMappingWithStandard() (ifdMapping *IfdMapping, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	im := NewIfdMapping()

	err = LoadStandardIfds(im)
	log.PanicIf(err)

	return im, nil
}

// Get returns the node given the path slice.
func (im *IfdMapping) Get(parentPlacement []uint16) (childIfd *MappedIfd, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	ptr := im.rootNode
	for _, tagId := range parentPlacement {
		if descendantPtr, found := ptr.Children[tagId]; found == false {
			log.Panicf("ifd child with tag-ID (%04x) not registered: [%s]", tagId, ptr.PathPhrase())
		} else {
			ptr = descendantPtr
		}
	}

	return ptr, nil
}

// GetWithPath returns the node given the path string.
func (im *IfdMapping) GetWithPath(pathPhrase string) (mi *MappedIfd, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	if pathPhrase == "" {
		log.Panicf("path-phrase is empty")
	}

	path := strings.Split(pathPhrase, "/")
	ptr := im.rootNode

	for _, name := range path {
		var hit *MappedIfd
		for _, mi := range ptr.Children {
			if mi.Name == name {
				hit = mi
				break
			}
		}

		if hit == nil {
			log.Panicf("ifd child with name [%s] not registered: [%s]", name, ptr.PathPhrase())
		}

		ptr = hit
	}

	return ptr, nil
}

// GetChild is a convenience function to get the child path for a given parent
// placement and child tag-ID.
func (im *IfdMapping) GetChild(parentPathPhrase string, tagId uint16) (mi *MappedIfd, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	mi, err = im.GetWithPath(parentPathPhrase)
	log.PanicIf(err)

	for _, childMi := range mi.Children {
		if childMi.TagId == tagId {
			return childMi, nil
		}
	}

	// Whether or not an IFD is defined in data, such an IFD is not registered
	// and would be unknown.
	log.Panic(ErrChildIfdNotMapped)
	return nil, nil
}

// IfdTagIdAndIndex represents a specific part of the IFD path.
//
// This is a legacy type.
type IfdTagIdAndIndex struct {
	Name  string
	TagId uint16
	Index int
}

// String returns a descriptive string.
func (itii IfdTagIdAndIndex) String() string {
	return fmt.Sprintf("IfdTagIdAndIndex<NAME=[%s] ID=(%04x) INDEX=(%d)>", itii.Name, itii.TagId, itii.Index)
}

// ResolvePath takes a list of names, which can also be suffixed with indices
// (to identify the second, third, etc. sibling IFD) and returns a list of
// tag-IDs and those indices.
//
// Example:
//
// - IFD/Exif/Iop
// - IFD0/Exif/Iop
//
// This is the only call that supports adding the numeric indices.
func (im *IfdMapping) ResolvePath(pathPhrase string) (lineage []IfdTagIdAndIndex, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	pathPhrase = strings.TrimSpace(pathPhrase)

	if pathPhrase == "" {
		log.Panicf("can not resolve empty path-phrase")
	}

	path := strings.Split(pathPhrase, "/")
	lineage = make([]IfdTagIdAndIndex, len(path))

	ptr := im.rootNode
	empty := IfdTagIdAndIndex{}
	for i, name := range path {
		indexByte := name[len(name)-1]
		index := 0
		if indexByte >= '0' && indexByte <= '9' {
			index = int(indexByte - '0')
			name = name[:len(name)-1]
		}

		itii := IfdTagIdAndIndex{}
		for _, mi := range ptr.Children {
			if mi.Name != name {
				continue
			}

			itii.Name = name
			itii.TagId = mi.TagId
			itii.Index = index

			ptr = mi

			break
		}

		if itii == empty {
			log.Panicf("ifd child with name [%s] not registered: [%s]", name, pathPhrase)
		}

		lineage[i] = itii
	}

	return lineage, nil
}
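ResolvePath strips a single trailing digit from each path component to recover the sibling index, so "IFD1" becomes name "IFD" with index 1. That suffix-splitting step can be exercised standalone (the `splitIndex` helper below is a hypothetical re-statement for illustration, not part of the package):

```go
package main

import "fmt"

// splitIndex mirrors ResolvePath's suffix handling: if the last byte of a
// path component is a digit, it is the sibling index; otherwise the index
// is 0. Note this supports only a single digit, exactly like the code above.
func splitIndex(name string) (string, int) {
	if len(name) == 0 {
		return name, 0
	}
	last := name[len(name)-1]
	if last >= '0' && last <= '9' {
		return name[:len(name)-1], int(last - '0')
	}
	return name, 0
}

func main() {
	for _, part := range []string{"IFD", "IFD1", "Exif", "SomeChildIFD6"} {
		name, index := splitIndex(part)
		fmt.Printf("%s -> name=%s index=%d\n", part, name, index)
	}
}
```

Because only the final byte is inspected, sibling indices above 9 would not round-trip; the library's standard paths never need more than one digit.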
// FqPathPhraseFromLineage returns the fully-qualified IFD path from the slice.
func (im *IfdMapping) FqPathPhraseFromLineage(lineage []IfdTagIdAndIndex) (fqPathPhrase string) {
	fqPathParts := make([]string, len(lineage))
	for i, itii := range lineage {
		if itii.Index > 0 {
			fqPathParts[i] = fmt.Sprintf("%s%d", itii.Name, itii.Index)
		} else {
			fqPathParts[i] = itii.Name
		}
	}

	return strings.Join(fqPathParts, "/")
}

// PathPhraseFromLineage returns the non-fully-qualified IFD path from the
// slice.
func (im *IfdMapping) PathPhraseFromLineage(lineage []IfdTagIdAndIndex) (pathPhrase string) {
	pathParts := make([]string, len(lineage))
	for i, itii := range lineage {
		pathParts[i] = itii.Name
	}

	return strings.Join(pathParts, "/")
}

// StripPathPhraseIndices returns a non-fully-qualified path-phrase (no
// indices).
func (im *IfdMapping) StripPathPhraseIndices(pathPhrase string) (strippedPathPhrase string, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	lineage, err := im.ResolvePath(pathPhrase)
	log.PanicIf(err)

	strippedPathPhrase = im.PathPhraseFromLineage(lineage)
	return strippedPathPhrase, nil
}

// Add puts the given IFD at the given position of the tree. The position of the
// tree is referred to as the placement and is represented by a set of tag-IDs,
// where the leftmost is the root tag and the tags going to the right are
// progressive descendants.
func (im *IfdMapping) Add(parentPlacement []uint16, tagId uint16, name string) (err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	// TODO(dustin): !! It would be nicer to provide a list of names in the placement rather than tag-IDs.

	ptr, err := im.Get(parentPlacement)
	log.PanicIf(err)

	path := make([]string, len(parentPlacement)+1)
	if len(parentPlacement) > 0 {
		copy(path, ptr.Path)
	}

	path[len(path)-1] = name

	placement := make([]uint16, len(parentPlacement)+1)
	if len(placement) > 0 {
		copy(placement, ptr.Placement)
	}

	placement[len(placement)-1] = tagId

	childIfd := &MappedIfd{
		ParentTagId: ptr.TagId,
		Path:        path,
		Placement:   placement,
		Name:        name,
		TagId:       tagId,
		Children:    make(map[uint16]*MappedIfd),
	}

	if _, found := ptr.Children[tagId]; found == true {
		log.Panicf("child IFD with tag-ID (%04x) already registered under IFD [%s] with tag-ID (%04x)", tagId, ptr.Name, ptr.TagId)
	}

	ptr.Children[tagId] = childIfd

	return nil
}

func (im *IfdMapping) dumpLineages(stack []*MappedIfd, input []string) (output []string, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	currentIfd := stack[len(stack)-1]

	output = input
	for _, childIfd := range currentIfd.Children {
		stackCopy := make([]*MappedIfd, len(stack)+1)

		copy(stackCopy, stack)
		stackCopy[len(stack)] = childIfd

		// Add to output, but don't include the obligatory root node.
		parts := make([]string, len(stackCopy)-1)
		for i, mi := range stackCopy[1:] {
			parts[i] = mi.Name
		}

		output = append(output, strings.Join(parts, "/"))

		output, err = im.dumpLineages(stackCopy, output)
		log.PanicIf(err)
	}

	return output, nil
}

// DumpLineages returns a slice of strings representing all mappings.
func (im *IfdMapping) DumpLineages() (output []string, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	stack := []*MappedIfd{im.rootNode}
	output = make([]string, 0)

	output, err = im.dumpLineages(stack, output)
	log.PanicIf(err)

	return output, nil
}

// LoadStandardIfds loads the standard IFDs into the mapping.
func LoadStandardIfds(im *IfdMapping) (err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	err = im.Add(
		[]uint16{},
		IfdStandardIfdIdentity.TagId(), IfdStandardIfdIdentity.Name())

	log.PanicIf(err)

	err = im.Add(
		[]uint16{IfdStandardIfdIdentity.TagId()},
		IfdExifStandardIfdIdentity.TagId(), IfdExifStandardIfdIdentity.Name())

	log.PanicIf(err)

	err = im.Add(
		[]uint16{IfdStandardIfdIdentity.TagId(), IfdExifStandardIfdIdentity.TagId()},
		IfdExifIopStandardIfdIdentity.TagId(), IfdExifIopStandardIfdIdentity.Name())

	log.PanicIf(err)

	err = im.Add(
		[]uint16{IfdStandardIfdIdentity.TagId()},
		IfdGpsInfoStandardIfdIdentity.TagId(), IfdGpsInfoStandardIfdIdentity.Name())

	log.PanicIf(err)

	return nil
}

// IfdTag describes a single IFD tag and its parent (if any).
type IfdTag struct {
	parentIfdTag *IfdTag
	tagId        uint16
	name         string
}

func NewIfdTag(parentIfdTag *IfdTag, tagId uint16, name string) IfdTag {
	return IfdTag{
		parentIfdTag: parentIfdTag,
		tagId:        tagId,
		name:         name,
	}
}

// ParentIfd returns the IfdTag of this IFD's parent.
func (it IfdTag) ParentIfd() *IfdTag {
	return it.parentIfdTag
}

// TagId returns the tag-ID of this IFD.
func (it IfdTag) TagId() uint16 {
	return it.tagId
}

// Name returns the simple name of this IFD.
func (it IfdTag) Name() string {
	return it.name
}

// String returns a descriptive string.
func (it IfdTag) String() string {
	parentIfdPhrase := ""
	if it.parentIfdTag != nil {
		parentIfdPhrase = fmt.Sprintf(" PARENT=(0x%04x)[%s]", it.parentIfdTag.tagId, it.parentIfdTag.name)
	}

	return fmt.Sprintf("IfdTag<TAG-ID=(0x%04x) NAME=[%s]%s>", it.tagId, it.name, parentIfdPhrase)
}

var (
	// rootStandardIfd is the standard root IFD.
	rootStandardIfd = NewIfdTag(nil, 0x0000, "IFD") // IFD

	// exifStandardIfd is the standard "Exif" IFD.
	exifStandardIfd = NewIfdTag(&rootStandardIfd, 0x8769, "Exif") // IFD/Exif

	// iopStandardIfd is the standard "Iop" IFD.
	iopStandardIfd = NewIfdTag(&exifStandardIfd, 0xA005, "Iop") // IFD/Exif/Iop

	// gpsInfoStandardIfd is the standard "GPS" IFD.
	gpsInfoStandardIfd = NewIfdTag(&rootStandardIfd, 0x8825, "GPSInfo") // IFD/GPSInfo
)

// IfdIdentityPart represents one component in an IFD path.
type IfdIdentityPart struct {
	Name  string
	Index int
}

// String returns a fully-qualified IFD path.
func (iip IfdIdentityPart) String() string {
	if iip.Index > 0 {
		return fmt.Sprintf("%s%d", iip.Name, iip.Index)
	} else {
		return iip.Name
	}
}

// UnindexedString returns a non-fully-qualified IFD path.
func (iip IfdIdentityPart) UnindexedString() string {
	return iip.Name
}

// IfdIdentity represents a single IFD path and provides access to various
// information and representations.
//
// Only global instances can be used for equality checks.
type IfdIdentity struct {
	ifdTag    IfdTag
	parts     []IfdIdentityPart
	ifdPath   string
	fqIfdPath string
}

// NewIfdIdentity returns a new IfdIdentity struct.
func NewIfdIdentity(ifdTag IfdTag, parts ...IfdIdentityPart) (ii *IfdIdentity) {
	ii = &IfdIdentity{
		ifdTag: ifdTag,
		parts:  parts,
	}

	ii.ifdPath = ii.getIfdPath()
	ii.fqIfdPath = ii.getFqIfdPath()

	return ii
}

// NewIfdIdentityFromString parses a string like "IFD/Exif" or "IFD1" or
// something more exotic with custom IFDs ("SomeIFD4/SomeChildIFD6"). Note that
// this will validate the unindexed IFD structure (because the standard tags
// from the specification are unindexed), but not, obviously, any indices (e.g.
// the numbers in "IFD0", "IFD1", "SomeIFD4/SomeChildIFD6"). It is up to the
// caller to check whether these specific instances were actually parsed out
// of the stream.
func NewIfdIdentityFromString(im *IfdMapping, fqIfdPath string) (ii *IfdIdentity, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	lineage, err := im.ResolvePath(fqIfdPath)
	log.PanicIf(err)

	var lastIt *IfdTag
	identityParts := make([]IfdIdentityPart, len(lineage))
	for i, itii := range lineage {
		// Build out the tag that will eventually point to the IFD represented
		// by the right-most part in the IFD path.

		it := &IfdTag{
			parentIfdTag: lastIt,
			tagId:        itii.TagId,
			name:         itii.Name,
		}

		lastIt = it

		// Create the next IfdIdentity part.

		iip := IfdIdentityPart{
			Name:  itii.Name,
			Index: itii.Index,
		}

		identityParts[i] = iip
	}

	ii = NewIfdIdentity(*lastIt, identityParts...)
	return ii, nil
}

func (ii *IfdIdentity) getFqIfdPath() string {
	partPhrases := make([]string, len(ii.parts))
	for i, iip := range ii.parts {
		partPhrases[i] = iip.String()
	}

	return strings.Join(partPhrases, "/")
}

func (ii *IfdIdentity) getIfdPath() string {
	partPhrases := make([]string, len(ii.parts))
	for i, iip := range ii.parts {
		partPhrases[i] = iip.UnindexedString()
	}

	return strings.Join(partPhrases, "/")
}

// String returns a fully-qualified IFD path.
func (ii *IfdIdentity) String() string {
	return ii.fqIfdPath
}

// UnindexedString returns a non-fully-qualified IFD path.
func (ii *IfdIdentity) UnindexedString() string {
	return ii.ifdPath
}

// IfdTag returns the tag struct behind this IFD.
func (ii *IfdIdentity) IfdTag() IfdTag {
	return ii.ifdTag
}

// TagId returns the tag-ID of the IFD.
func (ii *IfdIdentity) TagId() uint16 {
	return ii.ifdTag.TagId()
}

// LeafPathPart returns the last right-most path-part, which represents the
// current IFD.
func (ii *IfdIdentity) LeafPathPart() IfdIdentityPart {
	return ii.parts[len(ii.parts)-1]
}

// Name returns the simple name of this IFD.
func (ii *IfdIdentity) Name() string {
	return ii.LeafPathPart().Name
}

// Index returns the index of this IFD (more than one IFD under a parent IFD
// will be numbered [0..n]).
func (ii *IfdIdentity) Index() int {
	return ii.LeafPathPart().Index
}

// Equals returns true if the two IfdIdentity instances are effectively
// identical.
//
// Since there's no way to get a specific fully-qualified IFD path without a
// certain slice of parts and all other fields are also derived from this,
// checking that the fully-qualified IFD paths are equal is sufficient.
func (ii *IfdIdentity) Equals(ii2 *IfdIdentity) bool {
	return ii.String() == ii2.String()
}

// NewChild creates an IfdIdentity for an IFD that is a child of the current
// IFD.
func (ii *IfdIdentity) NewChild(childIfdTag IfdTag, index int) (iiChild *IfdIdentity) {
	if *childIfdTag.parentIfdTag != ii.ifdTag {
		log.Panicf("can not add child; we are not the parent:\nUS=%v\nCHILD=%v", ii.ifdTag, childIfdTag)
	}

	childPart := IfdIdentityPart{childIfdTag.name, index}
	childParts := append(ii.parts, childPart)

	iiChild = NewIfdIdentity(childIfdTag, childParts...)
	return iiChild
}

// NewSibling creates an IfdIdentity for an IFD that is a sibling to the current
// one.
func (ii *IfdIdentity) NewSibling(index int) (iiSibling *IfdIdentity) {
	parts := make([]IfdIdentityPart, len(ii.parts))

	copy(parts, ii.parts)
	parts[len(parts)-1].Index = index

	iiSibling = NewIfdIdentity(ii.ifdTag, parts...)
	return iiSibling
}

var (
	// IfdStandardIfdIdentity represents the IFD path for IFD0.
	IfdStandardIfdIdentity = NewIfdIdentity(rootStandardIfd, IfdIdentityPart{"IFD", 0})

	// IfdExifStandardIfdIdentity represents the IFD path for IFD0/Exif0.
	IfdExifStandardIfdIdentity = IfdStandardIfdIdentity.NewChild(exifStandardIfd, 0)

	// IfdExifIopStandardIfdIdentity represents the IFD path for IFD0/Exif0/Iop0.
	IfdExifIopStandardIfdIdentity = IfdExifStandardIfdIdentity.NewChild(iopStandardIfd, 0)

	// IfdGpsInfoStandardIfdIdentity represents the IFD path for IFD0/GPSInfo0.
	IfdGpsInfoStandardIfdIdentity = IfdStandardIfdIdentity.NewChild(gpsInfoStandardIfd, 0)

	// Ifd1StandardIfdIdentity represents the IFD path for IFD1.
	Ifd1StandardIfdIdentity = NewIfdIdentity(rootStandardIfd, IfdIdentityPart{"IFD", 1})
)
280	vendor/github.com/dsoprea/go-exif/v3/common/parser.go	(generated, vendored)

@@ -1,280 +0,0 @@
package exifcommon

import (
	"bytes"
	"errors"
	"math"

	"encoding/binary"

	"github.com/dsoprea/go-logging"
)

var (
	parserLogger = log.NewLogger("exifcommon.parser")
)

var (
	ErrParseFail = errors.New("parse failure")
)

// Parser knows how to parse all well-defined, encoded EXIF types.
type Parser struct {
}

// ParseBytes knows how to parse a byte-type value.
func (p *Parser) ParseBytes(data []byte, unitCount uint32) (value []uint8, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	// TODO(dustin): Add test

	count := int(unitCount)

	if len(data) < (TypeByte.Size() * count) {
		log.Panic(ErrNotEnoughData)
	}

	value = []uint8(data[:count])

	return value, nil
}

// ParseAscii returns a string and auto-strips the trailing NUL character that
// should be at the end of the encoding.
func (p *Parser) ParseAscii(data []byte, unitCount uint32) (value string, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	// TODO(dustin): Add test

	count := int(unitCount)

	if len(data) < (TypeAscii.Size() * count) {
		log.Panic(ErrNotEnoughData)
	}

	if len(data) == 0 || data[count-1] != 0 {
		s := string(data[:count])
		parserLogger.Warningf(nil, "ASCII not terminated with NUL as expected: [%v]", s)

		for i, c := range s {
			if c > 127 {
				// Binary

				t := s[:i]
				parserLogger.Warningf(nil, "ASCII also had binary characters. Truncating: [%v]->[%s]", s, t)

				return t, nil
			}
		}

		return s, nil
	}

	// Auto-strip the NUL from the end. It serves no purpose outside of
	// encoding semantics.

	return string(data[:count-1]), nil
}

// ParseAsciiNoNul returns a string without any consideration for a trailing NUL
// character.
func (p *Parser) ParseAsciiNoNul(data []byte, unitCount uint32) (value string, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	// TODO(dustin): Add test

	count := int(unitCount)

	if len(data) < (TypeAscii.Size() * count) {
		log.Panic(ErrNotEnoughData)
	}

	return string(data[:count]), nil
}

// ParseShorts knows how to parse an encoded list of shorts.
func (p *Parser) ParseShorts(data []byte, unitCount uint32, byteOrder binary.ByteOrder) (value []uint16, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	// TODO(dustin): Add test

	count := int(unitCount)

	if len(data) < (TypeShort.Size() * count) {
		log.Panic(ErrNotEnoughData)
	}

	value = make([]uint16, count)
	for i := 0; i < count; i++ {
		value[i] = byteOrder.Uint16(data[i*2:])
	}

	return value, nil
}
// ParseLongs knows how to parse an encoded list of unsigned longs.
func (p *Parser) ParseLongs(data []byte, unitCount uint32, byteOrder binary.ByteOrder) (value []uint32, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	// TODO(dustin): Add test

	count := int(unitCount)

	if len(data) < (TypeLong.Size() * count) {
		log.Panic(ErrNotEnoughData)
	}

	value = make([]uint32, count)
	for i := 0; i < count; i++ {
		value[i] = byteOrder.Uint32(data[i*4:])
	}

	return value, nil
}

// ParseFloats knows how to parse an encoded list of floats.
func (p *Parser) ParseFloats(data []byte, unitCount uint32, byteOrder binary.ByteOrder) (value []float32, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	count := int(unitCount)

	if len(data) != (TypeFloat.Size() * count) {
		log.Panic(ErrNotEnoughData)
	}

	value = make([]float32, count)
	for i := 0; i < count; i++ {
		value[i] = math.Float32frombits(byteOrder.Uint32(data[i*4 : (i+1)*4]))
	}

	return value, nil
}

// ParseDoubles knows how to parse an encoded list of doubles.
func (p *Parser) ParseDoubles(data []byte, unitCount uint32, byteOrder binary.ByteOrder) (value []float64, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	count := int(unitCount)

	if len(data) != (TypeDouble.Size() * count) {
		log.Panic(ErrNotEnoughData)
	}

	value = make([]float64, count)
	for i := 0; i < count; i++ {
		value[i] = math.Float64frombits(byteOrder.Uint64(data[i*8 : (i+1)*8]))
	}

	return value, nil
}

// ParseRationals knows how to parse an encoded list of unsigned rationals.
func (p *Parser) ParseRationals(data []byte, unitCount uint32, byteOrder binary.ByteOrder) (value []Rational, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	// TODO(dustin): Add test

	count := int(unitCount)

	if len(data) < (TypeRational.Size() * count) {
		log.Panic(ErrNotEnoughData)
	}

	value = make([]Rational, count)
	for i := 0; i < count; i++ {
		value[i].Numerator = byteOrder.Uint32(data[i*8:])
		value[i].Denominator = byteOrder.Uint32(data[i*8+4:])
	}

	return value, nil
}

// ParseSignedLongs knows how to parse an encoded list of signed longs.
func (p *Parser) ParseSignedLongs(data []byte, unitCount uint32, byteOrder binary.ByteOrder) (value []int32, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	// TODO(dustin): Add test

	count := int(unitCount)

	if len(data) < (TypeSignedLong.Size() * count) {
		log.Panic(ErrNotEnoughData)
	}

	b := bytes.NewBuffer(data)

	value = make([]int32, count)
	for i := 0; i < count; i++ {
		err := binary.Read(b, byteOrder, &value[i])
		log.PanicIf(err)
	}

	return value, nil
}

// ParseSignedRationals knows how to parse an encoded list of signed
// rationals.
func (p *Parser) ParseSignedRationals(data []byte, unitCount uint32, byteOrder binary.ByteOrder) (value []SignedRational, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	// TODO(dustin): Add test

	count := int(unitCount)

	if len(data) < (TypeSignedRational.Size() * count) {
		log.Panic(ErrNotEnoughData)
	}

	b := bytes.NewBuffer(data)

	value = make([]SignedRational, count)
	for i := 0; i < count; i++ {
		err = binary.Read(b, byteOrder, &value[i].Numerator)
		log.PanicIf(err)

		err = binary.Read(b, byteOrder, &value[i].Denominator)
		log.PanicIf(err)
	}

	return value, nil
}
88	vendor/github.com/dsoprea/go-exif/v3/common/testing_common.go	(generated, vendored)

@@ -1,88 +0,0 @@
package exifcommon

import (
	"os"
	"path"

	"encoding/binary"
	"io/ioutil"

	"github.com/dsoprea/go-logging"
)

var (
	moduleRootPath = ""

	testExifData []byte = nil

	// EncodeDefaultByteOrder is the default byte-order for encoding operations.
	EncodeDefaultByteOrder = binary.BigEndian

	// Default byte order for tests.
	TestDefaultByteOrder = binary.BigEndian
)

func GetModuleRootPath() string {
	if moduleRootPath == "" {
		moduleRootPath = os.Getenv("EXIF_MODULE_ROOT_PATH")
		if moduleRootPath != "" {
			return moduleRootPath
		}

		currentWd, err := os.Getwd()
		log.PanicIf(err)

		currentPath := currentWd

		visited := make([]string, 0)

		for {
			tryStampFilepath := path.Join(currentPath, ".MODULE_ROOT")

			_, err := os.Stat(tryStampFilepath)
			if err != nil && os.IsNotExist(err) != true {
				log.Panic(err)
			} else if err == nil {
				break
			}

			visited = append(visited, tryStampFilepath)

			currentPath = path.Dir(currentPath)
			if currentPath == "/" {
				log.Panicf("could not find module-root: %v", visited)
			}
		}

		moduleRootPath = currentPath
	}

	return moduleRootPath
}

func GetTestAssetsPath() string {
	moduleRootPath := GetModuleRootPath()
	assetsPath := path.Join(moduleRootPath, "assets")

	return assetsPath
}

func getTestImageFilepath() string {
	assetsPath := GetTestAssetsPath()
	testImageFilepath := path.Join(assetsPath, "NDM_8901.jpg")
	return testImageFilepath
}

func getTestExifData() []byte {
	if testExifData == nil {
		assetsPath := GetTestAssetsPath()
		filepath := path.Join(assetsPath, "NDM_8901.jpg.exif")

		var err error

		testExifData, err = ioutil.ReadFile(filepath)
		log.PanicIf(err)
	}

	return testExifData
}
482	vendor/github.com/dsoprea/go-exif/v3/common/type.go	(generated, vendored)

@@ -1,482 +0,0 @@
package exifcommon

import (
	"errors"
	"fmt"
	"reflect"
	"strconv"
	"strings"
	"unicode"

	"encoding/binary"

	"github.com/dsoprea/go-logging"
)

var (
	typeLogger = log.NewLogger("exif.type")
)

var (
	// ErrNotEnoughData is used when there isn't enough data to accommodate what
	// we're trying to parse (sizeof(type) * unit_count).
	ErrNotEnoughData = errors.New("not enough data for type")

	// ErrWrongType is used when we try to parse anything other than the
	// current type.
	ErrWrongType = errors.New("wrong type, can not parse")

	// ErrUnhandledUndefinedTypedTag is used when we try to parse a tag that's
	// recorded as an "unknown" type but not a documented tag (therefore
	// leaving us not knowing how to read it).
	ErrUnhandledUndefinedTypedTag = errors.New("not a standard unknown-typed tag")
)

// TagTypePrimitive is a type-alias that lets us easily look up type properties.
type TagTypePrimitive uint16

const (
	// TypeByte describes an encoded list of bytes.
	TypeByte TagTypePrimitive = 1

	// TypeAscii describes an encoded list of characters that is terminated
	// with a NUL in its encoded form.
	TypeAscii TagTypePrimitive = 2

	// TypeShort describes an encoded list of shorts.
	TypeShort TagTypePrimitive = 3

	// TypeLong describes an encoded list of longs.
	TypeLong TagTypePrimitive = 4

	// TypeRational describes an encoded list of rationals.
	TypeRational TagTypePrimitive = 5

	// TypeUndefined describes an encoded value that has a complex/non-clearcut
	// interpretation.
	TypeUndefined TagTypePrimitive = 7

	// We've seen type-8, but have no documentation on it.

	// TypeSignedLong describes an encoded list of signed longs.
	TypeSignedLong TagTypePrimitive = 9

	// TypeSignedRational describes an encoded list of signed rationals.
	TypeSignedRational TagTypePrimitive = 10

	// TypeFloat describes an encoded list of floats.
	TypeFloat TagTypePrimitive = 11

	// TypeDouble describes an encoded list of doubles.
	TypeDouble TagTypePrimitive = 12

	// TypeAsciiNoNul is just a pseudo-type, for our own purposes.
	TypeAsciiNoNul TagTypePrimitive = 0xf0
)

// String returns the name of the type.
func (typeType TagTypePrimitive) String() string {
	return TypeNames[typeType]
}

// Size returns the size of one atomic unit of the type.
func (tagType TagTypePrimitive) Size() int {
	switch tagType {
	case TypeByte, TypeAscii, TypeAsciiNoNul:
		return 1
	case TypeShort:
		return 2
	case TypeLong, TypeSignedLong, TypeFloat:
		return 4
	case TypeRational, TypeSignedRational, TypeDouble:
		return 8
	default:
		log.Panicf("can not determine tag-value size for type (%d): [%s]",
			tagType,
			TypeNames[tagType])

		// Never called.
		return 0
	}
}

// IsValid returns true if tagType is a valid type.
func (tagType TagTypePrimitive) IsValid() bool {

	// TODO(dustin): Add test

	return tagType == TypeByte ||
		tagType == TypeAscii ||
		tagType == TypeAsciiNoNul ||
		tagType == TypeShort ||
		tagType == TypeLong ||
		tagType == TypeRational ||
		tagType == TypeSignedLong ||
		tagType == TypeSignedRational ||
		tagType == TypeFloat ||
		tagType == TypeDouble ||
		tagType == TypeUndefined
}

var (
	// TODO(dustin): Rename TypeNames() to typeNames() and add getter.
	TypeNames = map[TagTypePrimitive]string{
		TypeByte:           "BYTE",
		TypeAscii:          "ASCII",
		TypeShort:          "SHORT",
		TypeLong:           "LONG",
		TypeRational:       "RATIONAL",
		TypeUndefined:      "UNDEFINED",
		TypeSignedLong:     "SLONG",
		TypeSignedRational: "SRATIONAL",
		TypeFloat:          "FLOAT",
		TypeDouble:         "DOUBLE",

		TypeAsciiNoNul: "_ASCII_NO_NUL",
	}

	typeNamesR = map[string]TagTypePrimitive{}
)

// Rational describes an unsigned rational value.
type Rational struct {
	// Numerator is the numerator of the rational value.
	Numerator uint32

	// Denominator is the denominator of the rational value.
	Denominator uint32
}

// SignedRational describes a signed rational value.
type SignedRational struct {
	// Numerator is the numerator of the rational value.
	Numerator int32

	// Denominator is the denominator of the rational value.
	Denominator int32
}

func isPrintableText(s string) bool {
	for _, c := range s {
		// unicode.IsPrint() returns false for newline characters.
		if c == 0x0d || c == 0x0a {
			continue
		} else if unicode.IsPrint(rune(c)) == false {
			return false
		}
	}

	return true
}

// FormatFromType returns a stringified value for the given value. Automatically
// parses. Automatically calculates count based on type size. This function
// also supports undefined-type values (the ones that we support, anyway) by
// way of the String() method that they all require. We can't be more specific
// because we're a base package and we can't refer to it.
func FormatFromType(value interface{}, justFirst bool) (phrase string, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	// TODO(dustin): !! Add test

	switch t := value.(type) {
	case []byte:
		return DumpBytesToString(t), nil
	case string:
		for i, c := range t {
			if c == 0 {
				t = t[:i]
				break
			}
		}

		if isPrintableText(t) == false {
			phrase = fmt.Sprintf("string with binary data (%d bytes)", len(t))
			return phrase, nil
		}

		return t, nil
	case []uint16, []uint32, []int32, []float64, []float32:
		val := reflect.ValueOf(t)

		if val.Len() == 0 {
			return "", nil
		}

		if justFirst == true {
			var valueSuffix string
			if val.Len() > 1 {
				valueSuffix = "..."
			}

			return fmt.Sprintf("%v%s", val.Index(0), valueSuffix), nil
		}

		return fmt.Sprintf("%v", val), nil
	case []Rational:
		if len(t) == 0 {
			return "", nil
		}

		parts := make([]string, len(t))
		for i, r := range t {
			parts[i] = fmt.Sprintf("%d/%d", r.Numerator, r.Denominator)

			if justFirst == true {
				break
			}
		}

		if justFirst == true {
			var valueSuffix string
			if len(t) > 1 {
				valueSuffix = "..."
			}

			return fmt.Sprintf("%v%s", parts[0], valueSuffix), nil
		}

		return fmt.Sprintf("%v", parts), nil
	case []SignedRational:
		if len(t) == 0 {
			return "", nil
		}

		parts := make([]string, len(t))
		for i, r := range t {
			parts[i] = fmt.Sprintf("%d/%d", r.Numerator, r.Denominator)

			if justFirst == true {
				break
			}
		}

		if justFirst == true {
			var valueSuffix string
			if len(t) > 1 {
				valueSuffix = "..."
			}

			return fmt.Sprintf("%v%s", parts[0], valueSuffix), nil
		}

		return fmt.Sprintf("%v", parts), nil
	case fmt.Stringer:
		s := t.String()
		if isPrintableText(s) == false {
			phrase = fmt.Sprintf("stringable with binary data (%d bytes)", len(s))
			return phrase, nil
		}

		// An undefined value that is documented (or that we otherwise support).
		return s, nil
	default:
		// Affects only "unknown" values, in general.
		log.Panicf("type can not be formatted into string: %v", reflect.TypeOf(value).Name())

		// Never called.
		return "", nil
	}
}

// FormatFromBytes returns a stringified value for the given encoding.
// Automatically parses. Automatically calculates count based on type size.
func FormatFromBytes(rawBytes []byte, tagType TagTypePrimitive, justFirst bool, byteOrder binary.ByteOrder) (phrase string, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	// TODO(dustin): !! Add test

	typeSize := tagType.Size()

	if len(rawBytes)%typeSize != 0 {
		log.Panicf("byte-count (%d) does not align for [%s] type with a size of (%d) bytes", len(rawBytes), TypeNames[tagType], typeSize)
	}

	// unitCount is the calculated unit-count. This should equal the original
	// value from the tag (pre-resolution).
	unitCount := uint32(len(rawBytes) / typeSize)

	// Truncate the items if it's not bytes or a string and we just want the first.

	var value interface{}

	switch tagType {
	case TypeByte:
		var err error

		value, err = parser.ParseBytes(rawBytes, unitCount)
		log.PanicIf(err)
	case TypeAscii:
		var err error

		value, err = parser.ParseAscii(rawBytes, unitCount)
		log.PanicIf(err)
	case TypeAsciiNoNul:
		var err error

		value, err = parser.ParseAsciiNoNul(rawBytes, unitCount)
		log.PanicIf(err)
	case TypeShort:
		var err error

		value, err = parser.ParseShorts(rawBytes, unitCount, byteOrder)
		log.PanicIf(err)
	case TypeLong:
		var err error

		value, err = parser.ParseLongs(rawBytes, unitCount, byteOrder)
		log.PanicIf(err)
	case TypeFloat:
		var err error

		value, err = parser.ParseFloats(rawBytes, unitCount, byteOrder)
		log.PanicIf(err)
	case TypeDouble:
		var err error

		value, err = parser.ParseDoubles(rawBytes, unitCount, byteOrder)
		log.PanicIf(err)
	case TypeRational:
		var err error

		value, err = parser.ParseRationals(rawBytes, unitCount, byteOrder)
		log.PanicIf(err)
	case TypeSignedLong:
		var err error

		value, err = parser.ParseSignedLongs(rawBytes, unitCount, byteOrder)
		log.PanicIf(err)
	case TypeSignedRational:
		var err error

		value, err = parser.ParseSignedRationals(rawBytes, unitCount, byteOrder)
		log.PanicIf(err)
	default:
		// Affects only "unknown" values, in general.
		log.Panicf("value of type [%s] can not be formatted into string", tagType.String())

		// Never called.
		return "", nil
	}

	phrase, err = FormatFromType(value, justFirst)
	log.PanicIf(err)

	return phrase, nil
}

// TranslateStringToType converts user-provided strings to properly-typed
// values. If a string, returns a string. Else, assumes that it's a single
// number. If a list needs to be processed, it is the caller's responsibility to
// split it (according to whichever convention has been established).
func TranslateStringToType(tagType TagTypePrimitive, valueString string) (value interface{}, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	if tagType == TypeUndefined {
		// The caller should just call String() on the decoded type.
		log.Panicf("undefined-type values are not supported")
	}

	if tagType == TypeByte {
		wide, err := strconv.ParseInt(valueString, 16, 8)
		log.PanicIf(err)

		return byte(wide), nil
	} else if tagType == TypeAscii || tagType == TypeAsciiNoNul {
		// Whether or not we're putting an NUL on the end is only relevant for
		// byte-level encoding. This function really just supports a user
		// interface.

		return valueString, nil
	} else if tagType == TypeShort {
		n, err := strconv.ParseUint(valueString, 10, 16)
		log.PanicIf(err)

		return uint16(n), nil
	} else if tagType == TypeLong {
		n, err := strconv.ParseUint(valueString, 10, 32)
		log.PanicIf(err)

		return uint32(n), nil
	} else if tagType == TypeRational {
		parts := strings.SplitN(valueString, "/", 2)

		numerator, err := strconv.ParseUint(parts[0], 10, 32)
		log.PanicIf(err)

		denominator, err := strconv.ParseUint(parts[1], 10, 32)
		log.PanicIf(err)

		return Rational{
			Numerator:   uint32(numerator),
			Denominator: uint32(denominator),
		}, nil
	} else if tagType == TypeSignedLong {
		n, err := strconv.ParseInt(valueString, 10, 32)
		log.PanicIf(err)

		return int32(n), nil
	} else if tagType == TypeFloat {
		n, err := strconv.ParseFloat(valueString, 32)
		log.PanicIf(err)

		return float32(n), nil
	} else if tagType == TypeDouble {
		n, err := strconv.ParseFloat(valueString, 64)
		log.PanicIf(err)

		return float64(n), nil
	} else if tagType == TypeSignedRational {
		parts := strings.SplitN(valueString, "/", 2)

		numerator, err := strconv.ParseInt(parts[0], 10, 32)
		log.PanicIf(err)

		denominator, err := strconv.ParseInt(parts[1], 10, 32)
		log.PanicIf(err)

		return SignedRational{
			Numerator:   int32(numerator),
			Denominator: int32(denominator),
		}, nil
	}

	log.Panicf("from-string encoding for type not supported; this shouldn't happen: [%s]", tagType.String())
	return nil, nil
}

// GetTypeByName returns the `TagTypePrimitive` for the given type name.
// Returns (0) if not valid.
func GetTypeByName(typeName string) (tagType TagTypePrimitive, found bool) {
	tagType, found = typeNamesR[typeName]
	return tagType, found
}

// BasicTag describes a single tag for any purpose.
type BasicTag struct {
	// FqIfdPath is the fully-qualified IFD-path.
	FqIfdPath string

	// IfdPath is the unindexed IFD-path.
	IfdPath string

	// TagId is the tag-ID.
	TagId uint16
}

func init() {
	for typeId, typeName := range TypeNames {
		typeNamesR[typeName] = typeId
	}
}
148
vendor/github.com/dsoprea/go-exif/v3/common/utility.go
generated
vendored
@@ -1,148 +0,0 @@
package exifcommon

import (
	"bytes"
	"fmt"
	"reflect"
	"strconv"
	"strings"
	"time"

	"github.com/dsoprea/go-logging"
)

var (
	timeType = reflect.TypeOf(time.Time{})
)

// DumpBytes prints a list of hex-encoded bytes.
func DumpBytes(data []byte) {
	fmt.Printf("DUMP: ")
	for _, x := range data {
		fmt.Printf("%02x ", x)
	}

	fmt.Printf("\n")
}

// DumpBytesClause prints a list like DumpBytes(), but encapsulated in
// "[]byte { ... }".
func DumpBytesClause(data []byte) {
	fmt.Printf("DUMP: ")

	fmt.Printf("[]byte { ")

	for i, x := range data {
		fmt.Printf("0x%02x", x)

		if i < len(data)-1 {
			fmt.Printf(", ")
		}
	}

	fmt.Printf(" }\n")
}

// DumpBytesToString returns a stringified list of hex-encoded bytes.
func DumpBytesToString(data []byte) string {
	b := new(bytes.Buffer)

	for i, x := range data {
		_, err := b.WriteString(fmt.Sprintf("%02x", x))
		log.PanicIf(err)

		if i < len(data)-1 {
			_, err := b.WriteRune(' ')
			log.PanicIf(err)
		}
	}

	return b.String()
}

// DumpBytesClauseToString returns a comma-separated list of hex-encoded bytes.
func DumpBytesClauseToString(data []byte) string {
	b := new(bytes.Buffer)

	for i, x := range data {
		_, err := b.WriteString(fmt.Sprintf("0x%02x", x))
		log.PanicIf(err)

		if i < len(data)-1 {
			_, err := b.WriteString(", ")
			log.PanicIf(err)
		}
	}

	return b.String()
}

// ExifFullTimestampString produces a string like "2018:11:30 13:01:49" from a
// `time.Time` struct. It will attempt to convert to UTC first.
func ExifFullTimestampString(t time.Time) (fullTimestampPhrase string) {
	t = t.UTC()

	return fmt.Sprintf("%04d:%02d:%02d %02d:%02d:%02d", t.Year(), t.Month(), t.Day(), t.Hour(), t.Minute(), t.Second())
}

// ParseExifFullTimestamp parses dates like "2018:11:30 13:01:49" into a UTC
// `time.Time` struct.
func ParseExifFullTimestamp(fullTimestampPhrase string) (timestamp time.Time, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	parts := strings.Split(fullTimestampPhrase, " ")
	datestampValue, timestampValue := parts[0], parts[1]

	// Normalize the separators.
	datestampValue = strings.ReplaceAll(datestampValue, "-", ":")
	timestampValue = strings.ReplaceAll(timestampValue, "-", ":")

	dateParts := strings.Split(datestampValue, ":")

	year, err := strconv.ParseUint(dateParts[0], 10, 16)
	if err != nil {
		log.Panicf("could not parse year")
	}

	month, err := strconv.ParseUint(dateParts[1], 10, 8)
	if err != nil {
		log.Panicf("could not parse month")
	}

	day, err := strconv.ParseUint(dateParts[2], 10, 8)
	if err != nil {
		log.Panicf("could not parse day")
	}

	timeParts := strings.Split(timestampValue, ":")

	hour, err := strconv.ParseUint(timeParts[0], 10, 8)
	if err != nil {
		log.Panicf("could not parse hour")
	}

	minute, err := strconv.ParseUint(timeParts[1], 10, 8)
	if err != nil {
		log.Panicf("could not parse minute")
	}

	second, err := strconv.ParseUint(timeParts[2], 10, 8)
	if err != nil {
		log.Panicf("could not parse second")
	}

	timestamp = time.Date(int(year), time.Month(month), int(day), int(hour), int(minute), int(second), 0, time.UTC)
	return timestamp, nil
}

// IsTime returns true if the value is a `time.Time`.
func IsTime(v interface{}) bool {

	// TODO(dustin): Add test

	return reflect.TypeOf(v) == timeType
}
464
vendor/github.com/dsoprea/go-exif/v3/common/value_context.go
generated
vendored
@@ -1,464 +0,0 @@
package exifcommon

import (
	"errors"
	"io"

	"encoding/binary"

	"github.com/dsoprea/go-logging"
)

var (
	parser *Parser
)

var (
	// ErrNotFarValue indicates that an offset-based lookup was attempted for a
	// non-offset-based (embedded) value.
	ErrNotFarValue = errors.New("not a far value")
)

// ValueContext embeds all of the parameters required to find and extract the
// actual tag value.
type ValueContext struct {
	unitCount      uint32
	valueOffset    uint32
	rawValueOffset []byte
	rs             io.ReadSeeker

	tagType   TagTypePrimitive
	byteOrder binary.ByteOrder

	// undefinedValueTagType is the effective type to use if this is an
	// "undefined" value.
	undefinedValueTagType TagTypePrimitive

	ifdPath string
	tagId   uint16
}

// TODO(dustin): We can update newValueContext() to derive `valueOffset` itself (from `rawValueOffset`).

// NewValueContext returns a new ValueContext struct.
func NewValueContext(ifdPath string, tagId uint16, unitCount, valueOffset uint32, rawValueOffset []byte, rs io.ReadSeeker, tagType TagTypePrimitive, byteOrder binary.ByteOrder) *ValueContext {
	return &ValueContext{
		unitCount:      unitCount,
		valueOffset:    valueOffset,
		rawValueOffset: rawValueOffset,
		rs:             rs,

		tagType:   tagType,
		byteOrder: byteOrder,

		ifdPath: ifdPath,
		tagId:   tagId,
	}
}

// SetUndefinedValueType sets the effective type if this is an unknown-type tag.
func (vc *ValueContext) SetUndefinedValueType(tagType TagTypePrimitive) {
	if vc.tagType != TypeUndefined {
		log.Panicf("can not set effective type for unknown-type tag because this is *not* an unknown-type tag")
	}

	vc.undefinedValueTagType = tagType
}

// UnitCount returns the embedded unit-count.
func (vc *ValueContext) UnitCount() uint32 {
	return vc.unitCount
}

// ValueOffset returns the value-offset decoded as a `uint32`.
func (vc *ValueContext) ValueOffset() uint32 {
	return vc.valueOffset
}

// RawValueOffset returns the uninterpreted value-offset. This is used for
// embedded values (values small enough to fit within the offset bytes rather
// than needing to be stored elsewhere and referred to by an actual offset).
func (vc *ValueContext) RawValueOffset() []byte {
	return vc.rawValueOffset
}

// AddressableData returns the block of data that we can dereference into.
func (vc *ValueContext) AddressableData() io.ReadSeeker {

	// RELEASE(dustin): Rename from AddressableData() to ReadSeeker()

	return vc.rs
}

// ByteOrder returns the byte-order of numbers.
func (vc *ValueContext) ByteOrder() binary.ByteOrder {
	return vc.byteOrder
}

// IfdPath returns the path of the IFD containing this tag.
func (vc *ValueContext) IfdPath() string {
	return vc.ifdPath
}

// TagId returns the ID of the tag that we represent.
func (vc *ValueContext) TagId() uint16 {
	return vc.tagId
}

// isEmbedded returns whether the value is embedded or a reference. This can't
// be precalculated since the size is not defined for all types (namely the
// "undefined" types).
func (vc *ValueContext) isEmbedded() bool {
	tagType := vc.effectiveValueType()

	return (tagType.Size() * int(vc.unitCount)) <= 4
}

// SizeInBytes returns the number of bytes that this value requires. The
// underlying call will panic if the type is UNDEFINED. It is the
// responsibility of the caller to preemptively check that.
func (vc *ValueContext) SizeInBytes() int {
	tagType := vc.effectiveValueType()

	return tagType.Size() * int(vc.unitCount)
}

// effectiveValueType returns the effective type of the unknown-type tag or, if
// not unknown, the actual type.
func (vc *ValueContext) effectiveValueType() (tagType TagTypePrimitive) {
	if vc.tagType == TypeUndefined {
		tagType = vc.undefinedValueTagType

		if tagType == 0 {
			log.Panicf("undefined-value type not set")
		}
	} else {
		tagType = vc.tagType
	}

	return tagType
}

// readRawEncoded returns the encoded bytes for the value that we represent.
func (vc *ValueContext) readRawEncoded() (rawBytes []byte, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	tagType := vc.effectiveValueType()

	unitSizeRaw := uint32(tagType.Size())

	if vc.isEmbedded() == true {
		byteLength := unitSizeRaw * vc.unitCount
		return vc.rawValueOffset[:byteLength], nil
	}

	_, err = vc.rs.Seek(int64(vc.valueOffset), io.SeekStart)
	log.PanicIf(err)

	rawBytes = make([]byte, vc.unitCount*unitSizeRaw)

	_, err = io.ReadFull(vc.rs, rawBytes)
	log.PanicIf(err)

	return rawBytes, nil
}

// GetFarOffset returns the offset if the value is not embedded [within the
// pointer itself] or an error if an embedded value.
func (vc *ValueContext) GetFarOffset() (offset uint32, err error) {
	if vc.isEmbedded() == true {
		return 0, ErrNotFarValue
	}

	return vc.valueOffset, nil
}

// ReadRawEncoded returns the encoded bytes for the value that we represent.
func (vc *ValueContext) ReadRawEncoded() (rawBytes []byte, err error) {

	// TODO(dustin): Remove this method and rename readRawEncoded in its place.

	return vc.readRawEncoded()
}

// Format returns a string representation for the value.
//
// Where the type is not ASCII, `justFirst` indicates whether to just stringify
// the first item in the slice (or return an empty string if the slice is
// empty).
//
// Since this method lacks the information to process undefined-type tags (e.g.
// byte-order, tag-ID, IFD type), it will return an error if attempted. See
// `Undefined()`.
func (vc *ValueContext) Format() (value string, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	rawBytes, err := vc.readRawEncoded()
	log.PanicIf(err)

	phrase, err := FormatFromBytes(rawBytes, vc.effectiveValueType(), false, vc.byteOrder)
	log.PanicIf(err)

	return phrase, nil
}

// FormatFirst is similar to `Format` but only gets and stringifies the first
// item.
func (vc *ValueContext) FormatFirst() (value string, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	rawBytes, err := vc.readRawEncoded()
	log.PanicIf(err)

	phrase, err := FormatFromBytes(rawBytes, vc.tagType, true, vc.byteOrder)
	log.PanicIf(err)

	return phrase, nil
}

// ReadBytes parses the encoded byte-array from the value-context.
func (vc *ValueContext) ReadBytes() (value []byte, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	rawValue, err := vc.readRawEncoded()
	log.PanicIf(err)

	value, err = parser.ParseBytes(rawValue, vc.unitCount)
	log.PanicIf(err)

	return value, nil
}

// ReadAscii parses the encoded NUL-terminated ASCII string from the value-
// context.
func (vc *ValueContext) ReadAscii() (value string, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	rawValue, err := vc.readRawEncoded()
	log.PanicIf(err)

	value, err = parser.ParseAscii(rawValue, vc.unitCount)
	log.PanicIf(err)

	return value, nil
}

// ReadAsciiNoNul parses the non-NUL-terminated encoded ASCII string from the
// value-context.
func (vc *ValueContext) ReadAsciiNoNul() (value string, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	rawValue, err := vc.readRawEncoded()
	log.PanicIf(err)

	value, err = parser.ParseAsciiNoNul(rawValue, vc.unitCount)
	log.PanicIf(err)

	return value, nil
}

// ReadShorts parses the list of encoded shorts from the value-context.
func (vc *ValueContext) ReadShorts() (value []uint16, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	rawValue, err := vc.readRawEncoded()
	log.PanicIf(err)

	value, err = parser.ParseShorts(rawValue, vc.unitCount, vc.byteOrder)
	log.PanicIf(err)

	return value, nil
}

// ReadLongs parses the list of encoded, unsigned longs from the value-context.
func (vc *ValueContext) ReadLongs() (value []uint32, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	rawValue, err := vc.readRawEncoded()
	log.PanicIf(err)

	value, err = parser.ParseLongs(rawValue, vc.unitCount, vc.byteOrder)
	log.PanicIf(err)

	return value, nil
}

// ReadFloats parses the list of encoded floats from the value-context.
func (vc *ValueContext) ReadFloats() (value []float32, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	rawValue, err := vc.readRawEncoded()
	log.PanicIf(err)

	value, err = parser.ParseFloats(rawValue, vc.unitCount, vc.byteOrder)
	log.PanicIf(err)

	return value, nil
}

// ReadDoubles parses the list of encoded doubles from the value-context.
func (vc *ValueContext) ReadDoubles() (value []float64, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	rawValue, err := vc.readRawEncoded()
	log.PanicIf(err)

	value, err = parser.ParseDoubles(rawValue, vc.unitCount, vc.byteOrder)
	log.PanicIf(err)

	return value, nil
}

// ReadRationals parses the list of encoded, unsigned rationals from the value-
// context.
func (vc *ValueContext) ReadRationals() (value []Rational, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	rawValue, err := vc.readRawEncoded()
	log.PanicIf(err)

	value, err = parser.ParseRationals(rawValue, vc.unitCount, vc.byteOrder)
	log.PanicIf(err)

	return value, nil
}

// ReadSignedLongs parses the list of encoded, signed longs from the value-context.
func (vc *ValueContext) ReadSignedLongs() (value []int32, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	rawValue, err := vc.readRawEncoded()
	log.PanicIf(err)

	value, err = parser.ParseSignedLongs(rawValue, vc.unitCount, vc.byteOrder)
	log.PanicIf(err)

	return value, nil
}

// ReadSignedRationals parses the list of encoded, signed rationals from the
// value-context.
func (vc *ValueContext) ReadSignedRationals() (value []SignedRational, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	rawValue, err := vc.readRawEncoded()
	log.PanicIf(err)

	value, err = parser.ParseSignedRationals(rawValue, vc.unitCount, vc.byteOrder)
	log.PanicIf(err)

	return value, nil
}

// Values knows how to resolve the given value. This value is always a list
// (undefined-values aside), so we're named accordingly.
//
// Since this method lacks the information to process unknown-type tags (e.g.
// byte-order, tag-ID, IFD type), it will return an error if attempted. See
// `Undefined()`.
func (vc *ValueContext) Values() (values interface{}, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	if vc.tagType == TypeByte {
		values, err = vc.ReadBytes()
		log.PanicIf(err)
	} else if vc.tagType == TypeAscii {
		values, err = vc.ReadAscii()
		log.PanicIf(err)
	} else if vc.tagType == TypeAsciiNoNul {
		values, err = vc.ReadAsciiNoNul()
		log.PanicIf(err)
	} else if vc.tagType == TypeShort {
		values, err = vc.ReadShorts()
		log.PanicIf(err)
	} else if vc.tagType == TypeLong {
		values, err = vc.ReadLongs()
		log.PanicIf(err)
	} else if vc.tagType == TypeRational {
		values, err = vc.ReadRationals()
		log.PanicIf(err)
	} else if vc.tagType == TypeSignedLong {
		values, err = vc.ReadSignedLongs()
		log.PanicIf(err)
	} else if vc.tagType == TypeSignedRational {
		values, err = vc.ReadSignedRationals()
		log.PanicIf(err)
	} else if vc.tagType == TypeFloat {
		values, err = vc.ReadFloats()
		log.PanicIf(err)
	} else if vc.tagType == TypeDouble {
		values, err = vc.ReadDoubles()
		log.PanicIf(err)
	} else if vc.tagType == TypeUndefined {
		log.Panicf("will not parse undefined-type value")
|
||||
|
||||
// Never called.
|
||||
return nil, nil
|
||||
} else {
|
||||
log.Panicf("value of type [%s] is unparseable", vc.tagType)
|
||||
// Never called.
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
return values, nil
|
||||
}
|
||||
|
||||
func init() {
|
||||
parser = new(Parser)
|
||||
}
|
||||
273 vendor/github.com/dsoprea/go-exif/v3/common/value_encoder.go generated vendored
@@ -1,273 +0,0 @@
package exifcommon

import (
	"bytes"
	"math"
	"reflect"
	"time"

	"encoding/binary"

	"github.com/dsoprea/go-logging"
)

var (
	typeEncodeLogger = log.NewLogger("exif.type_encode")
)

// EncodedData encapsulates the compound output of an encoding operation.
type EncodedData struct {
	Type    TagTypePrimitive
	Encoded []byte

	// TODO(dustin): Is this really necessary? We might have this just to correlate to the incoming stream format (raw bytes and a unit-count both for incoming and outgoing).
	UnitCount uint32
}

// ValueEncoder knows how to encode values of every type to bytes.
type ValueEncoder struct {
	byteOrder binary.ByteOrder
}

// NewValueEncoder returns a new ValueEncoder.
func NewValueEncoder(byteOrder binary.ByteOrder) *ValueEncoder {
	return &ValueEncoder{
		byteOrder: byteOrder,
	}
}

func (ve *ValueEncoder) encodeBytes(value []uint8) (ed EncodedData, err error) {
	ed.Type = TypeByte
	ed.Encoded = []byte(value)
	ed.UnitCount = uint32(len(value))

	return ed, nil
}

func (ve *ValueEncoder) encodeAscii(value string) (ed EncodedData, err error) {
	ed.Type = TypeAscii

	ed.Encoded = []byte(value)
	ed.Encoded = append(ed.Encoded, 0)

	ed.UnitCount = uint32(len(ed.Encoded))

	return ed, nil
}

// encodeAsciiNoNul returns a string encoded as a byte-string without a trailing
// NUL byte.
//
// Note that:
//
// 1. This type can not be automatically encoded using `Encode()`. The default
//    mode is to encode *with* a trailing NUL byte using `encodeAscii`. Only
//    certain undefined-type tags use an unterminated ASCII string; these are
//    exceptional in nature.
//
// 2. The presence of this method allows us to completely test the
//    complementary no-nul parser.
func (ve *ValueEncoder) encodeAsciiNoNul(value string) (ed EncodedData, err error) {
	ed.Type = TypeAsciiNoNul
	ed.Encoded = []byte(value)
	ed.UnitCount = uint32(len(ed.Encoded))

	return ed, nil
}

func (ve *ValueEncoder) encodeShorts(value []uint16) (ed EncodedData, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	ed.UnitCount = uint32(len(value))
	ed.Encoded = make([]byte, ed.UnitCount*2)

	for i := uint32(0); i < ed.UnitCount; i++ {
		ve.byteOrder.PutUint16(ed.Encoded[i*2:(i+1)*2], value[i])
	}

	ed.Type = TypeShort

	return ed, nil
}

func (ve *ValueEncoder) encodeLongs(value []uint32) (ed EncodedData, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	ed.UnitCount = uint32(len(value))
	ed.Encoded = make([]byte, ed.UnitCount*4)

	for i := uint32(0); i < ed.UnitCount; i++ {
		ve.byteOrder.PutUint32(ed.Encoded[i*4:(i+1)*4], value[i])
	}

	ed.Type = TypeLong

	return ed, nil
}

func (ve *ValueEncoder) encodeFloats(value []float32) (ed EncodedData, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	ed.UnitCount = uint32(len(value))
	ed.Encoded = make([]byte, ed.UnitCount*4)

	for i := uint32(0); i < ed.UnitCount; i++ {
		ve.byteOrder.PutUint32(ed.Encoded[i*4:(i+1)*4], math.Float32bits(value[i]))
	}

	ed.Type = TypeFloat

	return ed, nil
}

func (ve *ValueEncoder) encodeDoubles(value []float64) (ed EncodedData, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	ed.UnitCount = uint32(len(value))
	ed.Encoded = make([]byte, ed.UnitCount*8)

	for i := uint32(0); i < ed.UnitCount; i++ {
		ve.byteOrder.PutUint64(ed.Encoded[i*8:(i+1)*8], math.Float64bits(value[i]))
	}

	ed.Type = TypeDouble

	return ed, nil
}

func (ve *ValueEncoder) encodeRationals(value []Rational) (ed EncodedData, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	ed.UnitCount = uint32(len(value))
	ed.Encoded = make([]byte, ed.UnitCount*8)

	for i := uint32(0); i < ed.UnitCount; i++ {
		ve.byteOrder.PutUint32(ed.Encoded[i*8+0:i*8+4], value[i].Numerator)
		ve.byteOrder.PutUint32(ed.Encoded[i*8+4:i*8+8], value[i].Denominator)
	}

	ed.Type = TypeRational

	return ed, nil
}

func (ve *ValueEncoder) encodeSignedLongs(value []int32) (ed EncodedData, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	ed.UnitCount = uint32(len(value))

	b := bytes.NewBuffer(make([]byte, 0, 8*ed.UnitCount))

	for i := uint32(0); i < ed.UnitCount; i++ {
		err := binary.Write(b, ve.byteOrder, value[i])
		log.PanicIf(err)
	}

	ed.Type = TypeSignedLong
	ed.Encoded = b.Bytes()

	return ed, nil
}

func (ve *ValueEncoder) encodeSignedRationals(value []SignedRational) (ed EncodedData, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	ed.UnitCount = uint32(len(value))

	b := bytes.NewBuffer(make([]byte, 0, 8*ed.UnitCount))

	for i := uint32(0); i < ed.UnitCount; i++ {
		err := binary.Write(b, ve.byteOrder, value[i].Numerator)
		log.PanicIf(err)

		err = binary.Write(b, ve.byteOrder, value[i].Denominator)
		log.PanicIf(err)
	}

	ed.Type = TypeSignedRational
	ed.Encoded = b.Bytes()

	return ed, nil
}

// Encode returns bytes for the given value, inferring the type from the actual
// value. This does not support `TypeAsciiNoNul` (all strings are encoded as
// `TypeAscii`).
func (ve *ValueEncoder) Encode(value interface{}) (ed EncodedData, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	switch t := value.(type) {
	case []byte:
		ed, err = ve.encodeBytes(t)
		log.PanicIf(err)
	case string:
		ed, err = ve.encodeAscii(t)
		log.PanicIf(err)
	case []uint16:
		ed, err = ve.encodeShorts(t)
		log.PanicIf(err)
	case []uint32:
		ed, err = ve.encodeLongs(t)
		log.PanicIf(err)
	case []float32:
		ed, err = ve.encodeFloats(t)
		log.PanicIf(err)
	case []float64:
		ed, err = ve.encodeDoubles(t)
		log.PanicIf(err)
	case []Rational:
		ed, err = ve.encodeRationals(t)
		log.PanicIf(err)
	case []int32:
		ed, err = ve.encodeSignedLongs(t)
		log.PanicIf(err)
	case []SignedRational:
		ed, err = ve.encodeSignedRationals(t)
		log.PanicIf(err)
	case time.Time:
		// For convenience, if the user doesn't want to deal with translation
		// semantics with timestamps.

		s := ExifFullTimestampString(t)

		ed, err = ve.encodeAscii(s)
		log.PanicIf(err)
	default:
		log.Panicf("value not encodable: [%s] [%v]", reflect.TypeOf(value), value)
	}

	return ed, nil
}
50 vendor/github.com/dsoprea/go-exif/v3/data_layer.go generated vendored
@@ -1,50 +0,0 @@
package exif

import (
	"io"

	"github.com/dsoprea/go-logging"
	"github.com/dsoprea/go-utility/v2/filesystem"
)

type ExifBlobSeeker interface {
	GetReadSeeker(initialOffset int64) (rs io.ReadSeeker, err error)
}

// ExifReadSeeker knows how to retrieve data from the EXIF blob relative to the
// beginning of the blob (so, absolute position (0) is the first byte of the
// EXIF data).
type ExifReadSeeker struct {
	rs io.ReadSeeker
}

func NewExifReadSeeker(rs io.ReadSeeker) *ExifReadSeeker {
	return &ExifReadSeeker{
		rs: rs,
	}
}

func NewExifReadSeekerWithBytes(exifData []byte) *ExifReadSeeker {
	sb := rifs.NewSeekableBufferWithBytes(exifData)
	edbs := NewExifReadSeeker(sb)

	return edbs
}

// GetReadSeeker forks a new ReadSeeker that wraps a BouncebackReader in order
// to maintain its own position in the stream.
func (edbs *ExifReadSeeker) GetReadSeeker(initialOffset int64) (rs io.ReadSeeker, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	br, err := rifs.NewBouncebackReader(edbs.rs)
	log.PanicIf(err)

	_, err = br.Seek(initialOffset, io.SeekStart)
	log.PanicIf(err)

	return br, nil
}
14 vendor/github.com/dsoprea/go-exif/v3/error.go generated vendored
@@ -1,14 +0,0 @@
package exif

import (
	"errors"
)

var (
	// ErrTagNotFound indicates that the tag was not found.
	ErrTagNotFound = errors.New("tag not found")

	// ErrTagNotKnown indicates that the tag is not registered with us as a
	// known tag.
	ErrTagNotKnown = errors.New("tag is not known")
)
333 vendor/github.com/dsoprea/go-exif/v3/exif.go generated vendored
@@ -1,333 +0,0 @@
package exif

import (
	"bufio"
	"bytes"
	"errors"
	"fmt"
	"io"
	"os"

	"encoding/binary"
	"io/ioutil"

	"github.com/dsoprea/go-logging"

	"github.com/dsoprea/go-exif/v3/common"
)

const (
	// ExifAddressableAreaStart is the absolute offset in the file that all
	// offsets are relative to.
	ExifAddressableAreaStart = uint32(0x0)

	// ExifDefaultFirstIfdOffset is essentially the number of bytes in addition
	// to `ExifAddressableAreaStart` that you have to move in order to escape
	// the rest of the header and get to the earliest point where we can put
	// stuff (which has to be the first IFD). This is the size of the header
	// sequence containing the two-character byte-order, two-character fixed-
	// bytes, and the four bytes describing the first-IFD offset.
	ExifDefaultFirstIfdOffset = uint32(2 + 2 + 4)
)

const (
	// ExifSignatureLength is the number of bytes in the EXIF signature (which
	// customarily includes the first IFD offset).
	ExifSignatureLength = 8
)

var (
	exifLogger = log.NewLogger("exif.exif")

	ExifBigEndianSignature    = [4]byte{'M', 'M', 0x00, 0x2a}
	ExifLittleEndianSignature = [4]byte{'I', 'I', 0x2a, 0x00}
)

var (
	ErrNoExif          = errors.New("no exif data")
	ErrExifHeaderError = errors.New("exif header error")
)

// SearchAndExtractExif searches for an EXIF blob in the byte-slice.
func SearchAndExtractExif(data []byte) (rawExif []byte, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	b := bytes.NewBuffer(data)

	rawExif, err = SearchAndExtractExifWithReader(b)
	if err != nil {
		if err == ErrNoExif {
			return nil, err
		}

		log.Panic(err)
	}

	return rawExif, nil
}

// SearchAndExtractExifN searches for an EXIF blob in the byte-slice, but skips
// the given number of EXIF blocks first. This is a forensics tool that helps
// identify multiple EXIF blocks in a file.
func SearchAndExtractExifN(data []byte, n int) (rawExif []byte, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	skips := 0
	totalDiscarded := 0
	for {
		b := bytes.NewBuffer(data)

		var discarded int

		rawExif, discarded, err = searchAndExtractExifWithReaderWithDiscarded(b)
		if err != nil {
			if err == ErrNoExif {
				return nil, err
			}

			log.Panic(err)
		}

		exifLogger.Debugf(nil, "Read EXIF block (%d).", skips)

		totalDiscarded += discarded

		if skips >= n {
			exifLogger.Debugf(nil, "Reached requested EXIF block (%d).", n)
			break
		}

		nextOffset := discarded + 1
		exifLogger.Debugf(nil, "Skipping EXIF block (%d) by seeking to position (%d).", skips, nextOffset)

		data = data[nextOffset:]
		skips++
	}

	exifLogger.Debugf(nil, "Found EXIF blob (%d) bytes from initial position.", totalDiscarded)
	return rawExif, nil
}

// searchAndExtractExifWithReaderWithDiscarded searches for an EXIF blob using
// an `io.Reader`. We can't know how long the EXIF data is without parsing it,
// so this will likely grab a lot of the image-data, too.
//
// This function also returns the count of preceding bytes that were discarded.
func searchAndExtractExifWithReaderWithDiscarded(r io.Reader) (rawExif []byte, discarded int, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	// Search for the beginning of the EXIF information. The EXIF is near the
	// beginning of most JPEGs, so this likely doesn't have a high cost (at
	// least, again, with JPEGs).

	br := bufio.NewReader(r)

	for {
		window, err := br.Peek(ExifSignatureLength)
		if err != nil {
			if err == io.EOF {
				return nil, 0, ErrNoExif
			}

			log.Panic(err)
		}

		_, err = ParseExifHeader(window)
		if err != nil {
			if log.Is(err, ErrNoExif) == true {
				// No EXIF. Move forward by one byte.

				_, err := br.Discard(1)
				log.PanicIf(err)

				discarded++

				continue
			}

			// Some other error.
			log.Panic(err)
		}

		break
	}

	exifLogger.Debugf(nil, "Found EXIF blob (%d) bytes from initial position.", discarded)

	rawExif, err = ioutil.ReadAll(br)
	log.PanicIf(err)

	return rawExif, discarded, nil
}

// RELEASE(dustin): We should replace the implementation of SearchAndExtractExifWithReader with searchAndExtractExifWithReaderWithDiscarded and drop the latter.

// SearchAndExtractExifWithReader searches for an EXIF blob using an
// `io.Reader`. We can't know how long the EXIF data is without parsing it, so
// this will likely grab a lot of the image-data, too.
func SearchAndExtractExifWithReader(r io.Reader) (rawExif []byte, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	rawExif, _, err = searchAndExtractExifWithReaderWithDiscarded(r)
	if err != nil {
		if err == ErrNoExif {
			return nil, err
		}

		log.Panic(err)
	}

	return rawExif, nil
}

// SearchFileAndExtractExif returns a slice from the beginning of the EXIF data
// to the end of the file (it's not practical to try and calculate where the
// data actually ends).
func SearchFileAndExtractExif(filepath string) (rawExif []byte, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	// Open the file.

	f, err := os.Open(filepath)
	log.PanicIf(err)

	defer f.Close()

	rawExif, err = SearchAndExtractExifWithReader(f)
	log.PanicIf(err)

	return rawExif, nil
}

type ExifHeader struct {
	ByteOrder      binary.ByteOrder
	FirstIfdOffset uint32
}

func (eh ExifHeader) String() string {
	return fmt.Sprintf("ExifHeader<BYTE-ORDER=[%v] FIRST-IFD-OFFSET=(0x%02x)>", eh.ByteOrder, eh.FirstIfdOffset)
}

// ParseExifHeader parses the bytes at the very top of the header.
//
// This will panic with ErrNoExif on any data errors so that we can double as
// an EXIF-detection routine.
func ParseExifHeader(data []byte) (eh ExifHeader, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	// Good reference:
	//
	//      CIPA DC-008-2016; JEITA CP-3451D
	//      -> http://www.cipa.jp/std/documents/e/DC-008-Translation-2016-E.pdf

	if len(data) < ExifSignatureLength {
		exifLogger.Warningf(nil, "Not enough data for EXIF header: (%d)", len(data))
		return eh, ErrNoExif
	}

	if bytes.Equal(data[:4], ExifBigEndianSignature[:]) == true {
		exifLogger.Debugf(nil, "Byte-order is big-endian.")
		eh.ByteOrder = binary.BigEndian
	} else if bytes.Equal(data[:4], ExifLittleEndianSignature[:]) == true {
		eh.ByteOrder = binary.LittleEndian
		exifLogger.Debugf(nil, "Byte-order is little-endian.")
	} else {
		return eh, ErrNoExif
	}

	eh.FirstIfdOffset = eh.ByteOrder.Uint32(data[4:8])

	return eh, nil
}

// Visit recursively invokes a callback for every tag.
func Visit(rootIfdIdentity *exifcommon.IfdIdentity, ifdMapping *exifcommon.IfdMapping, tagIndex *TagIndex, exifData []byte, visitor TagVisitorFn, so *ScanOptions) (eh ExifHeader, furthestOffset uint32, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	eh, err = ParseExifHeader(exifData)
	log.PanicIf(err)

	ebs := NewExifReadSeekerWithBytes(exifData)
	ie := NewIfdEnumerate(ifdMapping, tagIndex, ebs, eh.ByteOrder)

	_, err = ie.Scan(rootIfdIdentity, eh.FirstIfdOffset, visitor, so)
	log.PanicIf(err)

	furthestOffset = ie.FurthestOffset()

	return eh, furthestOffset, nil
}

// Collect recursively builds a static structure of all IFDs and tags.
func Collect(ifdMapping *exifcommon.IfdMapping, tagIndex *TagIndex, exifData []byte) (eh ExifHeader, index IfdIndex, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	eh, err = ParseExifHeader(exifData)
	log.PanicIf(err)

	ebs := NewExifReadSeekerWithBytes(exifData)
	ie := NewIfdEnumerate(ifdMapping, tagIndex, ebs, eh.ByteOrder)

	index, err = ie.Collect(eh.FirstIfdOffset)
	log.PanicIf(err)

	return eh, index, nil
}

// BuildExifHeader constructs the bytes that go at the front of the stream.
func BuildExifHeader(byteOrder binary.ByteOrder, firstIfdOffset uint32) (headerBytes []byte, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	b := new(bytes.Buffer)

	var signatureBytes []byte
	if byteOrder == binary.BigEndian {
		signatureBytes = ExifBigEndianSignature[:]
	} else {
		signatureBytes = ExifLittleEndianSignature[:]
	}

	_, err = b.Write(signatureBytes)
	log.PanicIf(err)

	err = binary.Write(b, byteOrder, firstIfdOffset)
	log.PanicIf(err)

	return b.Bytes(), nil
}
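The 8-byte preamble that `ParseExifHeader` inspects is the standard TIFF header: a 4-byte byte-order signature ("MM"/0x2a for big-endian, "II"/0x2a for little-endian) followed by the 4-byte offset of the first IFD. A stripped-down, stdlib-only sketch of the same check (`parseHeader` is a hypothetical stand-in, not the library function):

```go
package main

import (
	"bytes"
	"encoding/binary"
	"errors"
	"fmt"
)

// parseHeader detects the byte order from the TIFF/EXIF signature and
// reads the first-IFD offset that follows it.
func parseHeader(data []byte) (binary.ByteOrder, uint32, error) {
	if len(data) < 8 {
		return nil, 0, errors.New("no exif data")
	}

	var byteOrder binary.ByteOrder
	switch {
	case bytes.Equal(data[:4], []byte{'M', 'M', 0x00, 0x2a}):
		byteOrder = binary.BigEndian
	case bytes.Equal(data[:4], []byte{'I', 'I', 0x2a, 0x00}):
		byteOrder = binary.LittleEndian
	default:
		return nil, 0, errors.New("no exif data")
	}

	return byteOrder, byteOrder.Uint32(data[4:8]), nil
}

func main() {
	// "MM" header with the first IFD immediately after the 8-byte header.
	header := []byte{'M', 'M', 0x00, 0x2a, 0x00, 0x00, 0x00, 0x08}
	byteOrder, firstIfdOffset, err := parseHeader(header)
	if err != nil {
		panic(err)
	}
	fmt.Println(byteOrder, firstIfdOffset) // BigEndian 8
}
```

The offset 8 in the example is the common case captured by `ExifDefaultFirstIfdOffset` above: the first IFD starts right after the header.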
117 vendor/github.com/dsoprea/go-exif/v3/gps.go generated vendored
@@ -1,117 +0,0 @@
package exif

import (
	"errors"
	"fmt"
	"time"

	"github.com/dsoprea/go-logging"
	"github.com/golang/geo/s2"

	"github.com/dsoprea/go-exif/v3/common"
)

var (
	// ErrGpsCoordinatesNotValid means that some part of the geographic data was
	// unparseable.
	ErrGpsCoordinatesNotValid = errors.New("GPS coordinates not valid")
)

// GpsDegrees is a high-level struct representing geographic data.
type GpsDegrees struct {
	// Orientation describes the N/E/S/W direction that this position is
	// relative to.
	Orientation byte

	// Degrees is a simple float representing the underlying rational degrees
	// amount.
	Degrees float64

	// Minutes is a simple float representing the underlying rational minutes
	// amount.
	Minutes float64

	// Seconds is a simple float representing the underlying rational seconds
	// amount.
	Seconds float64
}

// NewGpsDegreesFromRationals returns a GpsDegrees struct given the EXIF-encoded
// information. The refValue is the N/E/S/W direction that this position is
// relative to.
func NewGpsDegreesFromRationals(refValue string, rawCoordinate []exifcommon.Rational) (gd GpsDegrees, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	if len(rawCoordinate) != 3 {
		log.Panicf("new GpsDegrees struct requires a raw-coordinate with exactly three rationals")
	}

	gd = GpsDegrees{
		Orientation: refValue[0],
		Degrees:     float64(rawCoordinate[0].Numerator) / float64(rawCoordinate[0].Denominator),
		Minutes:     float64(rawCoordinate[1].Numerator) / float64(rawCoordinate[1].Denominator),
		Seconds:     float64(rawCoordinate[2].Numerator) / float64(rawCoordinate[2].Denominator),
	}

	return gd, nil
}

// String returns a descriptive string.
func (d GpsDegrees) String() string {
	return fmt.Sprintf("Degrees<O=[%s] D=(%g) M=(%g) S=(%g)>", string([]byte{d.Orientation}), d.Degrees, d.Minutes, d.Seconds)
}

// Decimal calculates and returns the simplified float representation of the
// component degrees.
func (d GpsDegrees) Decimal() float64 {
	decimal := float64(d.Degrees) + float64(d.Minutes)/60.0 + float64(d.Seconds)/3600.0

	if d.Orientation == 'S' || d.Orientation == 'W' {
		return -decimal
	}

	return decimal
}

// Raw returns a Rational struct that can be used to *write* coordinates. In
// practice, the denominators are typically (1) in the original EXIF data and,
// that being the case, this will best preserve precision.
func (d GpsDegrees) Raw() []exifcommon.Rational {
	return []exifcommon.Rational{
		{Numerator: uint32(d.Degrees), Denominator: 1},
		{Numerator: uint32(d.Minutes), Denominator: 1},
		{Numerator: uint32(d.Seconds), Denominator: 1},
	}
}

// GpsInfo encapsulates all of the geographic information in one place.
type GpsInfo struct {
	Latitude, Longitude GpsDegrees
	Altitude            int
	Timestamp           time.Time
}

// String returns a descriptive string.
func (gi *GpsInfo) String() string {
	return fmt.Sprintf("GpsInfo<LAT=(%.05f) LON=(%.05f) ALT=(%d) TIME=[%s]>",
		gi.Latitude.Decimal(), gi.Longitude.Decimal(), gi.Altitude, gi.Timestamp)
}

// S2CellId returns the cell-ID of the geographic location on the earth.
func (gi *GpsInfo) S2CellId() s2.CellID {
	latitude := gi.Latitude.Decimal()
	longitude := gi.Longitude.Decimal()

	ll := s2.LatLngFromDegrees(latitude, longitude)
	cellId := s2.CellIDFromLatLng(ll)

	if cellId.IsValid() == false {
		panic(ErrGpsCoordinatesNotValid)
	}

	return cellId
}
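The degrees/minutes/seconds conversion in `Decimal` above is just `D + M/60 + S/3600`, negated for southern and western orientations. A tiny self-contained sketch of the same arithmetic (the `decimal` helper is illustrative, not the library's method):

```go
package main

import "fmt"

// decimal converts degrees/minutes/seconds plus an N/E/S/W orientation
// byte into signed decimal degrees.
func decimal(orientation byte, degrees, minutes, seconds float64) float64 {
	d := degrees + minutes/60.0 + seconds/3600.0
	if orientation == 'S' || orientation == 'W' {
		return -d
	}
	return d
}

func main() {
	// 40° 26' 46" S
	fmt.Printf("%.5f\n", decimal('S', 40, 26, 46)) // -40.44611
}
```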
1199 vendor/github.com/dsoprea/go-exif/v3/ifd_builder.go generated vendored
File diff suppressed because it is too large
532 vendor/github.com/dsoprea/go-exif/v3/ifd_builder_encode.go generated vendored
@@ -1,532 +0,0 @@
package exif
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
"strings"
|
||||
|
||||
"encoding/binary"
|
||||
|
||||
"github.com/dsoprea/go-logging"
|
||||
|
||||
"github.com/dsoprea/go-exif/v3/common"
|
||||
)
|
||||
|
||||
const (
|
||||
// Tag-ID + Tag-Type + Unit-Count + Value/Offset.
|
||||
IfdTagEntrySize = uint32(2 + 2 + 4 + 4)
|
||||
)
|
||||
|
||||
type ByteWriter struct {
|
||||
b *bytes.Buffer
|
||||
byteOrder binary.ByteOrder
|
||||
}
|
||||
|
||||
func NewByteWriter(b *bytes.Buffer, byteOrder binary.ByteOrder) (bw *ByteWriter) {
|
||||
return &ByteWriter{
|
||||
b: b,
|
||||
byteOrder: byteOrder,
|
||||
}
|
||||
}
|
||||
|
||||
func (bw ByteWriter) writeAsBytes(value interface{}) (err error) {
|
||||
defer func() {
|
||||
if state := recover(); state != nil {
|
||||
err = log.Wrap(state.(error))
|
||||
}
|
||||
}()
|
||||
|
||||
err = binary.Write(bw.b, bw.byteOrder, value)
|
||||
log.PanicIf(err)
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (bw ByteWriter) WriteUint32(value uint32) (err error) {
|
||||
defer func() {
|
||||
if state := recover(); state != nil {
|
||||
err = log.Wrap(state.(error))
|
||||
}
|
||||
}()
|
||||
|
||||
err = bw.writeAsBytes(value)
|
||||
log.PanicIf(err)
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (bw ByteWriter) WriteUint16(value uint16) (err error) {
|
||||
defer func() {
|
||||
if state := recover(); state != nil {
|
||||
err = log.Wrap(state.(error))
|
||||
}
|
||||
}()
|
||||
|
||||
err = bw.writeAsBytes(value)
|
||||
log.PanicIf(err)
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (bw ByteWriter) WriteFourBytes(value []byte) (err error) {
|
||||
defer func() {
|
||||
if state := recover(); state != nil {
|
||||
err = log.Wrap(state.(error))
|
||||
}
|
||||
}()
|
||||
|
||||
len_ := len(value)
|
||||
if len_ != 4 {
|
||||
log.Panicf("value is not four-bytes: (%d)", len_)
|
||||
}
|
||||
|
||||
_, err = bw.b.Write(value)
|
||||
log.PanicIf(err)
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// ifdOffsetIterator keeps track of where the next IFD should be written by
|
||||
// keeping track of where the offsets start, the data that has been added, and
|
||||
// bumping the offset *when* the data is added.
|
||||
type ifdDataAllocator struct {
|
||||
offset uint32
|
||||
b bytes.Buffer
|
||||
}
|
||||
|
||||
func newIfdDataAllocator(ifdDataAddressableOffset uint32) *ifdDataAllocator {
|
||||
return &ifdDataAllocator{
|
||||
offset: ifdDataAddressableOffset,
|
||||
}
|
||||
}
|
||||
|
||||
func (ida *ifdDataAllocator) Allocate(value []byte) (offset uint32, err error) {
|
||||
_, err = ida.b.Write(value)
|
||||
log.PanicIf(err)
|
||||
|
||||
offset = ida.offset
|
||||
ida.offset += uint32(len(value))
|
||||
|
||||
return offset, nil
|
||||
}
|
||||
|
||||
func (ida *ifdDataAllocator) NextOffset() uint32 {
|
||||
return ida.offset
|
||||
}
|
||||
|
||||
func (ida *ifdDataAllocator) Bytes() []byte {
|
||||
return ida.b.Bytes()
|
||||
}
|
||||
|
||||
// IfdByteEncoder converts an IB to raw bytes (for writing) while also figuring
|
||||
// out all of the allocations and indirection that is required for extended
|
||||
// data.
|
||||
type IfdByteEncoder struct {
|
||||
// journal holds a list of actions taken while encoding.
|
||||
journal [][3]string
|
||||
}
|
||||
|
||||
func NewIfdByteEncoder() (ibe *IfdByteEncoder) {
|
||||
return &IfdByteEncoder{
|
||||
journal: make([][3]string, 0),
|
||||
}
|
||||
}
|
||||
|
||||
func (ibe *IfdByteEncoder) Journal() [][3]string {
|
||||
return ibe.journal
|
||||
}
|
||||
|
||||
func (ibe *IfdByteEncoder) TableSize(entryCount int) uint32 {
|
||||
// Tag-Count + (Entry-Size * Entry-Count) + Next-IFD-Offset.
|
||||
return uint32(2) + (IfdTagEntrySize * uint32(entryCount)) + uint32(4)
|
||||
}
|
||||

func (ibe *IfdByteEncoder) pushToJournal(where, direction, format string, args ...interface{}) {
	event := [3]string{
		direction,
		where,
		fmt.Sprintf(format, args...),
	}

	ibe.journal = append(ibe.journal, event)
}
// PrintJournal prints a hierarchical representation of the steps taken during
// encoding.
func (ibe *IfdByteEncoder) PrintJournal() {
	maxWhereLength := 0
	for _, event := range ibe.journal {
		where := event[1]

		len_ := len(where)
		if len_ > maxWhereLength {
			maxWhereLength = len_
		}
	}

	level := 0
	for i, event := range ibe.journal {
		direction := event[0]
		where := event[1]
		message := event[2]

		if direction != ">" && direction != "<" && direction != "-" {
			log.Panicf("journal operation not valid: [%s]", direction)
		}

		if direction == "<" {
			if level <= 0 {
				log.Panicf("journal operations unbalanced (too many closes)")
			}

			level--
		}

		indent := strings.Repeat(" ", level)

		fmt.Printf("%3d %s%s %s: %s\n", i, indent, direction, where, message)

		if direction == ">" {
			level++
		}
	}

	if level != 0 {
		log.Panicf("journal operations unbalanced (too many opens)")
	}
}

// encodeTagToBytes encodes the given tag to a byte stream. If
// `nextIfdOffsetToWrite` is more than (0), we will also recurse into any
// child IFDs (`nextIfdOffsetToWrite` is required so they know where their
// IFD data will be written, and therefore the offset at which their
// allocated-data block will start, which follows right behind).
func (ibe *IfdByteEncoder) encodeTagToBytes(ib *IfdBuilder, bt *BuilderTag, bw *ByteWriter, ida *ifdDataAllocator, nextIfdOffsetToWrite uint32) (childIfdBlock []byte, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	// Write tag-ID.
	err = bw.WriteUint16(bt.tagId)
	log.PanicIf(err)

	// Works for both values and child IFDs (which have an official size of
	// LONG).
	err = bw.WriteUint16(uint16(bt.typeId))
	log.PanicIf(err)

	// Write unit-count.

	if bt.value.IsBytes() == true {
		effectiveType := bt.typeId
		if bt.typeId == exifcommon.TypeUndefined {
			effectiveType = exifcommon.TypeByte
		}

		// It's a non-unknown value. Calculate the count of values of
		// the type that we're writing and the raw bytes for the whole list.

		typeSize := uint32(effectiveType.Size())

		valueBytes := bt.value.Bytes()

		len_ := len(valueBytes)
		unitCount := uint32(len_) / typeSize

		if _, found := tagsWithoutAlignment[bt.tagId]; found == false {
			remainder := uint32(len_) % typeSize

			if remainder > 0 {
				log.Panicf("tag (0x%04x) value of (%d) bytes not evenly divisible by type-size (%d)", bt.tagId, len_, typeSize)
			}
		}

		err = bw.WriteUint32(unitCount)
		log.PanicIf(err)

		// Write four-byte value/offset.

		if len_ > 4 {
			offset, err := ida.Allocate(valueBytes)
			log.PanicIf(err)

			err = bw.WriteUint32(offset)
			log.PanicIf(err)
		} else {
			fourBytes := make([]byte, 4)
			copy(fourBytes, valueBytes)

			err = bw.WriteFourBytes(fourBytes)
			log.PanicIf(err)
		}
	} else {
		if bt.value.IsIb() == false {
			log.Panicf("tag value is not a byte-slice but also not a child IB: %v", bt)
		}

		// Write unit-count (one LONG representing one offset).
		err = bw.WriteUint32(1)
		log.PanicIf(err)

		if nextIfdOffsetToWrite > 0 {
			var err error

			ibe.pushToJournal("encodeTagToBytes", ">", "[%s]->[%s]", ib.IfdIdentity().UnindexedString(), bt.value.Ib().IfdIdentity().UnindexedString())

			// Create the block of IFD data and everything it requires.
			childIfdBlock, err = ibe.encodeAndAttachIfd(bt.value.Ib(), nextIfdOffsetToWrite)
			log.PanicIf(err)

			ibe.pushToJournal("encodeTagToBytes", "<", "[%s]->[%s]", bt.value.Ib().IfdIdentity().UnindexedString(), ib.IfdIdentity().UnindexedString())

			// Use the next-IFD offset for it. The IFD will actually get
			// attached after we return.
			err = bw.WriteUint32(nextIfdOffsetToWrite)
			log.PanicIf(err)
		} else {
			// No child-IFDs are to be allocated. Finish the entry with a NULL
			// pointer.

			ibe.pushToJournal("encodeTagToBytes", "-", "*Not* descending to child: [%s]", bt.value.Ib().IfdIdentity().UnindexedString())

			err = bw.WriteUint32(0)
			log.PanicIf(err)
		}
	}

	return childIfdBlock, nil
}

// encodeIfdToBytes encodes the given IB to a byte-slice. We are given the
// offset at which this IFD will be written. This method is called both to
// pre-determine how big the table is going to be (so that we can calculate the
// address to allocate data at) as well as to write the final table.
//
// It is necessary to fully realize the table in order to predetermine its size
// because it is not enough to know the size of the table: If there are child
// IFDs, we will not be able to allocate them without first knowing how much
// data we need to allocate for the current IFD.
func (ibe *IfdByteEncoder) encodeIfdToBytes(ib *IfdBuilder, ifdAddressableOffset uint32, nextIfdOffsetToWrite uint32, setNextIb bool) (data []byte, tableSize uint32, dataSize uint32, childIfdSizes []uint32, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	ibe.pushToJournal("encodeIfdToBytes", ">", "%s", ib)

	tableSize = ibe.TableSize(len(ib.tags))

	b := new(bytes.Buffer)
	bw := NewByteWriter(b, ib.byteOrder)

	// Write tag count.
	err = bw.WriteUint16(uint16(len(ib.tags)))
	log.PanicIf(err)

	ida := newIfdDataAllocator(ifdAddressableOffset)

	childIfdBlocks := make([][]byte, 0)

	// Write raw bytes for each tag entry. Allocate larger data to be referred
	// to in the follow-up data-block as required. Any "unknown"-byte tags that
	// we can't parse will not be present here (using AddTagsFromExisting(), at
	// least).
	for _, bt := range ib.tags {
		childIfdBlock, err := ibe.encodeTagToBytes(ib, bt, bw, ida, nextIfdOffsetToWrite)
		log.PanicIf(err)

		if childIfdBlock != nil {
			// We aren't allowed to have non-nil child IFDs if we're just
			// sizing things up.
			if nextIfdOffsetToWrite == 0 {
				log.Panicf("no IFD offset provided for child-IFDs; no new child-IFDs permitted")
			}

			nextIfdOffsetToWrite += uint32(len(childIfdBlock))
			childIfdBlocks = append(childIfdBlocks, childIfdBlock)
		}
	}

	dataBytes := ida.Bytes()
	dataSize = uint32(len(dataBytes))

	childIfdSizes = make([]uint32, len(childIfdBlocks))
	childIfdsTotalSize := uint32(0)
	for i, childIfdBlock := range childIfdBlocks {
		len_ := uint32(len(childIfdBlock))
		childIfdSizes[i] = len_
		childIfdsTotalSize += len_
	}

	// Now set the link from this IFD to the next IFD that will be written in
	// the next cycle.
	if setNextIb == true {
		// Write address of next IFD in chain. This will be the original
		// allocation offset plus the size of everything we have allocated for
		// this IFD and its child-IFDs.
		//
		// It is critical that this number is stepped properly. We experienced
		// an issue whereby it first looked like we were duplicating the IFD and
		// then that we were duplicating the tags in the wrong IFD, and then
		// finally we determined that the next-IFD offset for the first IFD was
		// accidentally pointing back to the EXIF IFD, so we were visiting it
		// twice when visiting through the tags after decoding. It was an
		// expensive bug to find.

		ibe.pushToJournal("encodeIfdToBytes", "-", "Setting 'next' IFD to (0x%08x).", nextIfdOffsetToWrite)

		err := bw.WriteUint32(nextIfdOffsetToWrite)
		log.PanicIf(err)
	} else {
		err := bw.WriteUint32(0)
		log.PanicIf(err)
	}

	_, err = b.Write(dataBytes)
	log.PanicIf(err)

	// Append any child IFD blocks after our table and data blocks. These IFDs
	// were equipped with the appropriate offset information so it's expected
	// that all offsets referred to by these will be correct.
	//
	// Note that child-IFDs are appended after the current IFD and before the
	// next IFD, as opposed to the root IFDs, which are chained together but
	// will be interrupted by these child-IFDs (which is expected, per the
	// standard).

	for _, childIfdBlock := range childIfdBlocks {
		_, err = b.Write(childIfdBlock)
		log.PanicIf(err)
	}

	ibe.pushToJournal("encodeIfdToBytes", "<", "%s", ib)

	return b.Bytes(), tableSize, dataSize, childIfdSizes, nil
}

// encodeAndAttachIfd is a reentrant function that processes the IFD chain.
func (ibe *IfdByteEncoder) encodeAndAttachIfd(ib *IfdBuilder, ifdAddressableOffset uint32) (data []byte, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	ibe.pushToJournal("encodeAndAttachIfd", ">", "%s", ib)

	b := new(bytes.Buffer)

	i := 0

	for thisIb := ib; thisIb != nil; thisIb = thisIb.nextIb {

		// Do a dry-run in order to pre-determine its size requirement.

		ibe.pushToJournal("encodeAndAttachIfd", ">", "Beginning encoding process: (%d) [%s]", i, thisIb.IfdIdentity().UnindexedString())

		ibe.pushToJournal("encodeAndAttachIfd", ">", "Calculating size: (%d) [%s]", i, thisIb.IfdIdentity().UnindexedString())

		_, tableSize, allocatedDataSize, _, err := ibe.encodeIfdToBytes(thisIb, ifdAddressableOffset, 0, false)
		log.PanicIf(err)

		ibe.pushToJournal("encodeAndAttachIfd", "<", "Finished calculating size: (%d) [%s]", i, thisIb.IfdIdentity().UnindexedString())

		ifdAddressableOffset += tableSize
		nextIfdOffsetToWrite := ifdAddressableOffset + allocatedDataSize

		ibe.pushToJournal("encodeAndAttachIfd", ">", "Next IFD will be written at offset (0x%08x)", nextIfdOffsetToWrite)

		// Write our IFD as well as any child-IFDs (now that we know the offset
		// where new IFDs and their data will be allocated).

		setNextIb := thisIb.nextIb != nil

		ibe.pushToJournal("encodeAndAttachIfd", ">", "Encoding starting: (%d) [%s] NEXT-IFD-OFFSET-TO-WRITE=(0x%08x)", i, thisIb.IfdIdentity().UnindexedString(), nextIfdOffsetToWrite)

		tableAndAllocated, effectiveTableSize, effectiveAllocatedDataSize, childIfdSizes, err :=
			ibe.encodeIfdToBytes(thisIb, ifdAddressableOffset, nextIfdOffsetToWrite, setNextIb)

		log.PanicIf(err)

		if effectiveTableSize != tableSize {
			log.Panicf("written table size does not match the pre-calculated table size: (%d) != (%d) %s", effectiveTableSize, tableSize, ib)
		} else if effectiveAllocatedDataSize != allocatedDataSize {
			log.Panicf("written allocated-data size does not match the pre-calculated allocated-data size: (%d) != (%d) %s", effectiveAllocatedDataSize, allocatedDataSize, ib)
		}

		ibe.pushToJournal("encodeAndAttachIfd", "<", "Encoding done: (%d) [%s]", i, thisIb.IfdIdentity().UnindexedString())

		totalChildIfdSize := uint32(0)
		for _, childIfdSize := range childIfdSizes {
			totalChildIfdSize += childIfdSize
		}

		if len(tableAndAllocated) != int(tableSize+allocatedDataSize+totalChildIfdSize) {
			log.Panicf("IFD table and data is not a consistent size: (%d) != (%d)", len(tableAndAllocated), tableSize+allocatedDataSize+totalChildIfdSize)
		}

		// TODO(dustin): We might want to verify the original tableAndAllocated length, too.

		_, err = b.Write(tableAndAllocated)
		log.PanicIf(err)

		// Advance past what we've allocated, thus far.

		ifdAddressableOffset += allocatedDataSize + totalChildIfdSize

		ibe.pushToJournal("encodeAndAttachIfd", "<", "Finishing encoding process: (%d) [%s] [FINAL:] NEXT-IFD-OFFSET-TO-WRITE=(0x%08x)", i, ib.IfdIdentity().UnindexedString(), nextIfdOffsetToWrite)

		i++
	}

	ibe.pushToJournal("encodeAndAttachIfd", "<", "%s", ib)

	return b.Bytes(), nil
}
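The offset stepping in the loop above (dry-run sizing first, then deriving where the next IFD must land before anything is written) reduces to simple arithmetic. A minimal sketch, using invented sizes purely for illustration:

```go
package main

import "fmt"

// nextIfdOffset mirrors the stepping in encodeAndAttachIfd: after a dry run
// yields an IFD's table size and allocated-data size, the next IFD begins at
// the current addressable offset plus both (child-IFDs are ignored here).
func nextIfdOffset(addressable, tableSize, dataSize uint32) uint32 {
	return addressable + tableSize + dataSize
}

func main() {
	// Hypothetical sizes: the first IFD starts at offset 8, right after the
	// EXIF header; its table is 30 bytes and it allocates 100 bytes of data.
	offset := nextIfdOffset(8, 30, 100)
	fmt.Println(offset) // 138

	// The second IFD begins there and steps the offset again.
	fmt.Println(nextIfdOffset(offset, 18, 40)) // 196
}
```

The important property, which the code above asserts at runtime, is that the dry run and the real write must report identical sizes; otherwise every derived offset downstream is wrong.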

// EncodeToExifPayload is the base encoding step that transcribes the entire IB
// structure to its on-disk layout.
func (ibe *IfdByteEncoder) EncodeToExifPayload(ib *IfdBuilder) (data []byte, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	data, err = ibe.encodeAndAttachIfd(ib, ExifDefaultFirstIfdOffset)
	log.PanicIf(err)

	return data, nil
}

// EncodeToExif calls EncodeToExifPayload and then packages the result into a
// complete EXIF block.
func (ibe *IfdByteEncoder) EncodeToExif(ib *IfdBuilder) (data []byte, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	encodedIfds, err := ibe.EncodeToExifPayload(ib)
	log.PanicIf(err)

	// Wrap the IFD in a formal EXIF block.

	b := new(bytes.Buffer)

	headerBytes, err := BuildExifHeader(ib.byteOrder, ExifDefaultFirstIfdOffset)
	log.PanicIf(err)

	_, err = b.Write(headerBytes)
	log.PanicIf(err)

	_, err = b.Write(encodedIfds)
	log.PanicIf(err)

	return b.Bytes(), nil
}

1672	vendor/github.com/dsoprea/go-exif/v3/ifd_enumerate.go (generated, vendored)
	File diff suppressed because it is too large.

298	vendor/github.com/dsoprea/go-exif/v3/ifd_tag_entry.go (generated, vendored)
@@ -1,298 +0,0 @@
package exif

import (
	"fmt"
	"io"

	"encoding/binary"

	"github.com/dsoprea/go-logging"

	"github.com/dsoprea/go-exif/v3/common"
	"github.com/dsoprea/go-exif/v3/undefined"
)

var (
	iteLogger = log.NewLogger("exif.ifd_tag_entry")
)

// IfdTagEntry refers to a tag in the loaded EXIF block.
type IfdTagEntry struct {
	tagId          uint16
	tagIndex       int
	tagType        exifcommon.TagTypePrimitive
	unitCount      uint32
	valueOffset    uint32
	rawValueOffset []byte

	// childIfdName is the right-most atom in the IFD-path. We need this to
	// construct the fully-qualified IFD-path.
	childIfdName string

	// childIfdPath is the IFD-path of the child if this tag represents a child
	// IFD.
	childIfdPath string

	// childFqIfdPath is the IFD-path of the child if this tag represents a
	// child IFD. Includes indices.
	childFqIfdPath string

	// TODO(dustin): !! IBs host the child-IBs directly in the tag, but that's not the case here. Refactor to accommodate it for a consistent experience.

	ifdIdentity *exifcommon.IfdIdentity

	isUnhandledUnknown bool

	rs        io.ReadSeeker
	byteOrder binary.ByteOrder

	tagName string
}

func newIfdTagEntry(ii *exifcommon.IfdIdentity, tagId uint16, tagIndex int, tagType exifcommon.TagTypePrimitive, unitCount uint32, valueOffset uint32, rawValueOffset []byte, rs io.ReadSeeker, byteOrder binary.ByteOrder) *IfdTagEntry {
	return &IfdTagEntry{
		ifdIdentity:    ii,
		tagId:          tagId,
		tagIndex:       tagIndex,
		tagType:        tagType,
		unitCount:      unitCount,
		valueOffset:    valueOffset,
		rawValueOffset: rawValueOffset,
		rs:             rs,
		byteOrder:      byteOrder,
	}
}

// String returns a stringified representation of the struct.
func (ite *IfdTagEntry) String() string {
	return fmt.Sprintf("IfdTagEntry<TAG-IFD-PATH=[%s] TAG-ID=(0x%04x) TAG-TYPE=[%s] UNIT-COUNT=(%d)>", ite.ifdIdentity.String(), ite.tagId, ite.tagType.String(), ite.unitCount)
}

// TagName returns the name of the tag. This is determined elsewhere and set
// after the parse (since it's not actually stored in the stream). If it's
// empty, it is because it is an unknown tag (nonstandard or otherwise
// unavailable in the tag-index).
func (ite *IfdTagEntry) TagName() string {
	return ite.tagName
}

// setTagName sets the tag-name. This provides the name for convenience and
// efficiency by determining it when most efficient while we're parsing rather
// than delegating it to the caller (or, worse, the user).
func (ite *IfdTagEntry) setTagName(tagName string) {
	ite.tagName = tagName
}

// IfdPath returns the fully-qualified path of the IFD that owns this tag.
func (ite *IfdTagEntry) IfdPath() string {
	return ite.ifdIdentity.String()
}

// TagId returns the ID of the tag that we represent. The combination of
// (IfdPath(), TagId()) is unique.
func (ite *IfdTagEntry) TagId() uint16 {
	return ite.tagId
}

// IsThumbnailOffset returns true if the tag has the IFD and tag-ID of a
// thumbnail offset.
func (ite *IfdTagEntry) IsThumbnailOffset() bool {
	return ite.tagId == ThumbnailOffsetTagId && ite.ifdIdentity.String() == ThumbnailFqIfdPath
}

// IsThumbnailSize returns true if the tag has the IFD and tag-ID of a
// thumbnail size.
func (ite *IfdTagEntry) IsThumbnailSize() bool {
	return ite.tagId == ThumbnailSizeTagId && ite.ifdIdentity.String() == ThumbnailFqIfdPath
}

// TagType is the type of value for this tag.
func (ite *IfdTagEntry) TagType() exifcommon.TagTypePrimitive {
	return ite.tagType
}

// updateTagType sets an alternatively interpreted tag-type.
func (ite *IfdTagEntry) updateTagType(tagType exifcommon.TagTypePrimitive) {
	ite.tagType = tagType
}

// UnitCount returns the unit-count of the tag's value.
func (ite *IfdTagEntry) UnitCount() uint32 {
	return ite.unitCount
}

// updateUnitCount sets an alternatively interpreted unit-count.
func (ite *IfdTagEntry) updateUnitCount(unitCount uint32) {
	ite.unitCount = unitCount
}

// getValueOffset returns the four-byte offset converted to an integer that
// points to the location of the value in the EXIF block. The "get" prefix is
// used in order to differentiate the name of the method from the field.
func (ite *IfdTagEntry) getValueOffset() uint32 {
	return ite.valueOffset
}

// GetRawBytes renders a specific list of bytes from the value in this tag.
func (ite *IfdTagEntry) GetRawBytes() (rawBytes []byte, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	valueContext := ite.getValueContext()

	if ite.tagType == exifcommon.TypeUndefined {
		value, err := exifundefined.Decode(valueContext)
		if err != nil {
			if err == exifcommon.ErrUnhandledUndefinedTypedTag {
				ite.setIsUnhandledUnknown(true)
				return nil, exifundefined.ErrUnparseableValue
			} else if err == exifundefined.ErrUnparseableValue {
				return nil, err
			} else {
				log.Panic(err)
			}
		}

		// Encode it back, in order to get the raw bytes. This is the best,
		// general way to do it with an undefined tag.

		rawBytes, _, err := exifundefined.Encode(value, ite.byteOrder)
		log.PanicIf(err)

		return rawBytes, nil
	}

	rawBytes, err = valueContext.ReadRawEncoded()
	log.PanicIf(err)

	return rawBytes, nil
}

// Value returns the specific, parsed, typed value from the tag.
func (ite *IfdTagEntry) Value() (value interface{}, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	valueContext := ite.getValueContext()

	if ite.tagType == exifcommon.TypeUndefined {
		var err error

		value, err = exifundefined.Decode(valueContext)
		if err != nil {
			if err == exifcommon.ErrUnhandledUndefinedTypedTag || err == exifundefined.ErrUnparseableValue {
				return nil, err
			}

			log.Panic(err)
		}
	} else {
		var err error

		value, err = valueContext.Values()
		log.PanicIf(err)
	}

	return value, nil
}

// Format returns the tag's value as a string.
func (ite *IfdTagEntry) Format() (phrase string, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	value, err := ite.Value()
	if err != nil {
		if err == exifcommon.ErrUnhandledUndefinedTypedTag {
			return exifundefined.UnparseableUnknownTagValuePlaceholder, nil
		} else if err == exifundefined.ErrUnparseableValue {
			return exifundefined.UnparseableHandledTagValuePlaceholder, nil
		}

		log.Panic(err)
	}

	phrase, err = exifcommon.FormatFromType(value, false)
	log.PanicIf(err)

	return phrase, nil
}

// FormatFirst returns the same as Format() but only the first item.
func (ite *IfdTagEntry) FormatFirst() (phrase string, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	// TODO(dustin): We should add a convenience type "timestamp", to simplify translating to and from the physical ASCII and provide validation.

	value, err := ite.Value()
	if err != nil {
		if err == exifcommon.ErrUnhandledUndefinedTypedTag {
			return exifundefined.UnparseableUnknownTagValuePlaceholder, nil
		}

		log.Panic(err)
	}

	phrase, err = exifcommon.FormatFromType(value, true)
	log.PanicIf(err)

	return phrase, nil
}

func (ite *IfdTagEntry) setIsUnhandledUnknown(isUnhandledUnknown bool) {
	ite.isUnhandledUnknown = isUnhandledUnknown
}

// SetChildIfd sets child-IFD information (if we represent a child IFD).
func (ite *IfdTagEntry) SetChildIfd(ii *exifcommon.IfdIdentity) {
	ite.childFqIfdPath = ii.String()
	ite.childIfdPath = ii.UnindexedString()
	ite.childIfdName = ii.Name()
}

// ChildIfdName returns the name of the child IFD.
func (ite *IfdTagEntry) ChildIfdName() string {
	return ite.childIfdName
}

// ChildIfdPath returns the path of the child IFD.
func (ite *IfdTagEntry) ChildIfdPath() string {
	return ite.childIfdPath
}

// ChildFqIfdPath returns the complete path of the child IFD along with the
// numeric suffixes differentiating sibling occurrences of the same type. "0"
// indices are omitted.
func (ite *IfdTagEntry) ChildFqIfdPath() string {
	return ite.childFqIfdPath
}

// IfdIdentity returns the IfdIdentity associated with this tag.
func (ite *IfdTagEntry) IfdIdentity() *exifcommon.IfdIdentity {
	return ite.ifdIdentity
}

func (ite *IfdTagEntry) getValueContext() *exifcommon.ValueContext {
	return exifcommon.NewValueContext(
		ite.ifdIdentity.String(),
		ite.tagId,
		ite.unitCount,
		ite.valueOffset,
		ite.rawValueOffset,
		ite.rs,
		ite.tagType,
		ite.byteOrder)
}
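The fields backing `IfdTagEntry` (tag-ID, type, unit-count, value-offset) map directly onto the 12-byte on-disk IFD entry. A self-contained sketch of decoding one such entry with the standard library; `rawEntry` and `parseEntry` are invented names, not part of go-exif:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// rawEntry is a hypothetical mirror of the on-disk fields that back an
// IfdTagEntry: tag-ID, type, unit-count, and the four-byte value/offset.
type rawEntry struct {
	TagId       uint16
	TagType     uint16
	UnitCount   uint32
	ValueOffset uint32
}

// parseEntry decodes one 12-byte big-endian IFD entry.
func parseEntry(b []byte) rawEntry {
	return rawEntry{
		TagId:       binary.BigEndian.Uint16(b[0:2]),
		TagType:     binary.BigEndian.Uint16(b[2:4]),
		UnitCount:   binary.BigEndian.Uint32(b[4:8]),
		ValueOffset: binary.BigEndian.Uint32(b[8:12]),
	}
}

func main() {
	// 0x0201 (thumbnail offset), type 4 (LONG), one unit, offset 0x1000.
	raw := []byte{0x02, 0x01, 0x00, 0x04, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x10, 0x00}
	e := parseEntry(raw)
	fmt.Printf("0x%04x %d %d 0x%08x\n", e.TagId, e.TagType, e.UnitCount, e.ValueOffset)
}
```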
8	vendor/github.com/dsoprea/go-exif/v3/package.go (generated, vendored)
@@ -1,8 +0,0 @@
// Package exif parses raw EXIF information given a block of raw EXIF data. It
// can also construct new EXIF information, and provides tools for doing so.
// This package is not involved with the parsing of particular file-formats.
//
// The EXIF data must first be extracted and then provided to us. Conversely,
// when constructing new EXIF data, the caller is responsible for packaging
// this in whichever format they require.
package exif

475	vendor/github.com/dsoprea/go-exif/v3/tags.go (generated, vendored)
@@ -1,475 +0,0 @@
package exif

import (
	"fmt"
	"sync"

	"github.com/dsoprea/go-logging"
	"gopkg.in/yaml.v2"

	"github.com/dsoprea/go-exif/v3/common"
)

const (
	// IFD1

	// ThumbnailFqIfdPath is the fully-qualified IFD path that the thumbnail
	// must be found in.
	ThumbnailFqIfdPath = "IFD1"

	// ThumbnailOffsetTagId is the tag-ID of the thumbnail offset.
	ThumbnailOffsetTagId = 0x0201

	// ThumbnailSizeTagId is the tag-ID of the thumbnail size.
	ThumbnailSizeTagId = 0x0202
)

const (
	// GPS

	// TagGpsVersionId is the ID of the GPS version tag.
	TagGpsVersionId = 0x0000

	// TagLatitudeId is the ID of the GPS latitude tag.
	TagLatitudeId = 0x0002

	// TagLatitudeRefId is the ID of the GPS latitude-orientation tag.
	TagLatitudeRefId = 0x0001

	// TagLongitudeId is the ID of the GPS longitude tag.
	TagLongitudeId = 0x0004

	// TagLongitudeRefId is the ID of the GPS longitude-orientation tag.
	TagLongitudeRefId = 0x0003

	// TagTimestampId is the ID of the GPS time tag.
	TagTimestampId = 0x0007

	// TagDatestampId is the ID of the GPS date tag.
	TagDatestampId = 0x001d

	// TagAltitudeId is the ID of the GPS altitude tag.
	TagAltitudeId = 0x0006

	// TagAltitudeRefId is the ID of the GPS altitude-orientation tag.
	TagAltitudeRefId = 0x0005
)

var (
	// tagsWithoutAlignment is a tag-lookup for tags whose value size won't
	// necessarily be a multiple of its tag-type.
	tagsWithoutAlignment = map[uint16]struct{}{
		// The thumbnail offset is stored as a long, but its data is a binary
		// blob (not a slice of longs).
		ThumbnailOffsetTagId: {},
	}
)

var (
	tagsLogger = log.NewLogger("exif.tags")
)

// File structures.

type encodedTag struct {
	// id is signed, here, because YAML doesn't have enough information to
	// support unsigned.
	Id        int      `yaml:"id"`
	Name      string   `yaml:"name"`
	TypeName  string   `yaml:"type_name"`
	TypeNames []string `yaml:"type_names"`
}

// Indexing structures.

// IndexedTag describes one index lookup result.
type IndexedTag struct {
	// Id is the tag-ID.
	Id uint16

	// Name is the tag name.
	Name string

	// IfdPath is the proper IFD path of this tag. This is not fully-qualified.
	IfdPath string

	// SupportedTypes is an unsorted list of allowed tag-types.
	SupportedTypes []exifcommon.TagTypePrimitive
}

// String returns a descriptive string.
func (it *IndexedTag) String() string {
	return fmt.Sprintf("TAG<ID=(0x%04x) NAME=[%s] IFD=[%s]>", it.Id, it.Name, it.IfdPath)
}

// IsName returns true if this tag matches the given tag name.
func (it *IndexedTag) IsName(ifdPath, name string) bool {
	return it.Name == name && it.IfdPath == ifdPath
}

// Is returns true if this tag matches the given tag ID.
func (it *IndexedTag) Is(ifdPath string, id uint16) bool {
	return it.Id == id && it.IfdPath == ifdPath
}

// GetEncodingType returns the largest type that this tag's value can occupy.
func (it *IndexedTag) GetEncodingType(value interface{}) exifcommon.TagTypePrimitive {
	// For convenience, we handle encoding a `time.Time` directly.
	if exifcommon.IsTime(value) == true {
		// Timestamps are encoded as ASCII.
		value = ""
	}

	if len(it.SupportedTypes) == 0 {
		log.Panicf("IndexedTag [%s] (%d) has no supported types.", it.IfdPath, it.Id)
	} else if len(it.SupportedTypes) == 1 {
		return it.SupportedTypes[0]
	}

	supportsLong := false
	supportsShort := false
	supportsRational := false
	supportsSignedRational := false
	for _, supportedType := range it.SupportedTypes {
		if supportedType == exifcommon.TypeLong {
			supportsLong = true
		} else if supportedType == exifcommon.TypeShort {
			supportsShort = true
		} else if supportedType == exifcommon.TypeRational {
			supportsRational = true
		} else if supportedType == exifcommon.TypeSignedRational {
			supportsSignedRational = true
		}
	}

	// We specifically check for the cases that we know to expect.

	if supportsLong == true && supportsShort == true {
		return exifcommon.TypeLong
	} else if supportsRational == true && supportsSignedRational == true {
		if value == nil {
			log.Panicf("GetEncodingType: require value to be given")
		}

		if _, ok := value.(exifcommon.SignedRational); ok == true {
			return exifcommon.TypeSignedRational
		}

		return exifcommon.TypeRational
	}

	log.Panicf("GetEncodingType() case is not handled for tag [%s] (0x%04x): %v", it.IfdPath, it.Id, it.SupportedTypes)
	return 0
}

// DoesSupportType returns true if this tag can be found/decoded with this type.
func (it *IndexedTag) DoesSupportType(tagType exifcommon.TagTypePrimitive) bool {
	// This is always a very small collection. So, we keep it unsorted.
	for _, thisTagType := range it.SupportedTypes {
		if thisTagType == tagType {
			return true
		}
	}

	return false
}

// TagIndex is a tag-lookup facility.
type TagIndex struct {
	tagsByIfd  map[string]map[uint16]*IndexedTag
	tagsByIfdR map[string]map[string]*IndexedTag

	mutex sync.Mutex

	doUniversalSearch bool
}

// NewTagIndex returns a new TagIndex struct.
func NewTagIndex() *TagIndex {
	ti := new(TagIndex)

	ti.tagsByIfd = make(map[string]map[uint16]*IndexedTag)
	ti.tagsByIfdR = make(map[string]map[string]*IndexedTag)

	return ti
}

// SetUniversalSearch enables a fallback to matching tags under *any* IFD.
func (ti *TagIndex) SetUniversalSearch(flag bool) {
	ti.doUniversalSearch = flag
}

// UniversalSearch returns whether a fallback to matching tags under *any* IFD
// is enabled.
func (ti *TagIndex) UniversalSearch() bool {
	return ti.doUniversalSearch
}
|
||||
|
||||
// Add registers a new tag to be recognized during the parse.
|
||||
func (ti *TagIndex) Add(it *IndexedTag) (err error) {
|
||||
defer func() {
|
||||
if state := recover(); state != nil {
|
||||
err = log.Wrap(state.(error))
|
||||
}
|
||||
}()
|
||||
|
||||
ti.mutex.Lock()
|
||||
defer ti.mutex.Unlock()
|
||||
|
||||
// Store by ID.
|
||||
|
||||
family, found := ti.tagsByIfd[it.IfdPath]
|
||||
if found == false {
|
||||
family = make(map[uint16]*IndexedTag)
|
||||
ti.tagsByIfd[it.IfdPath] = family
|
||||
}
|
||||
|
||||
if _, found := family[it.Id]; found == true {
|
||||
log.Panicf("tag-ID defined more than once for IFD [%s]: (%02x)", it.IfdPath, it.Id)
|
||||
}
|
||||
|
||||
family[it.Id] = it
|
||||
|
||||
// Store by name.
|
||||
|
||||
familyR, found := ti.tagsByIfdR[it.IfdPath]
|
||||
if found == false {
|
||||
familyR = make(map[string]*IndexedTag)
|
||||
ti.tagsByIfdR[it.IfdPath] = familyR
|
||||
}
|
||||
|
||||
if _, found := familyR[it.Name]; found == true {
|
||||
log.Panicf("tag-name defined more than once for IFD [%s]: (%s)", it.IfdPath, it.Name)
|
||||
}
|
||||
|
||||
familyR[it.Name] = it
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (ti *TagIndex) getOne(ifdPath string, id uint16) (it *IndexedTag, err error) {
|
||||
defer func() {
|
||||
if state := recover(); state != nil {
|
||||
err = log.Wrap(state.(error))
|
||||
}
|
||||
}()
|
||||
|
||||
if len(ti.tagsByIfd) == 0 {
|
||||
err := LoadStandardTags(ti)
|
||||
log.PanicIf(err)
|
||||
}
|
||||
|
||||
ti.mutex.Lock()
|
||||
defer ti.mutex.Unlock()
|
||||
|
||||
family, found := ti.tagsByIfd[ifdPath]
|
||||
if found == false {
|
||||
return nil, ErrTagNotFound
|
||||
}
|
||||
|
||||
it, found = family[id]
|
||||
if found == false {
|
||||
return nil, ErrTagNotFound
|
||||
}
|
||||
|
||||
return it, nil
|
||||
}
|
||||
|
||||
// Get returns information about the non-IFD tag given a tag ID. `ifdPath` must
|
||||
// not be fully-qualified.
|
||||
func (ti *TagIndex) Get(ii *exifcommon.IfdIdentity, id uint16) (it *IndexedTag, err error) {
|
||||
defer func() {
|
||||
if state := recover(); state != nil {
|
||||
err = log.Wrap(state.(error))
|
||||
}
|
||||
}()
|
||||
|
||||
ifdPath := ii.UnindexedString()
|
||||
|
||||
it, err = ti.getOne(ifdPath, id)
|
||||
if err == nil {
|
||||
return it, nil
|
||||
} else if err != ErrTagNotFound {
|
||||
log.Panic(err)
|
||||
}
|
||||
|
||||
if ti.doUniversalSearch == false {
|
||||
return nil, ErrTagNotFound
|
||||
}
|
||||
|
||||
// We've been told to fallback to look for the tag in other IFDs.
|
||||
|
||||
skipIfdPath := ii.UnindexedString()
|
||||
|
||||
for currentIfdPath, _ := range ti.tagsByIfd {
|
||||
if currentIfdPath == skipIfdPath {
|
||||
// Skip the primary IFD, which has already been checked.
|
||||
continue
|
||||
}
|
||||
|
||||
it, err = ti.getOne(currentIfdPath, id)
|
||||
if err == nil {
|
||||
tagsLogger.Warningf(nil,
|
||||
"Found tag (0x%02x) in the wrong IFD: [%s] != [%s]",
|
||||
id, currentIfdPath, ifdPath)
|
||||
|
||||
return it, nil
|
||||
} else if err != ErrTagNotFound {
|
||||
log.Panic(err)
|
||||
}
|
||||
}
|
||||
|
||||
return nil, ErrTagNotFound
|
||||
}
|
||||
|
||||
var (
|
||||
// tagGuessDefaultIfdIdentities describes which IFDs we'll look for a given
|
||||
// tag-ID in, if it's not found where it's supposed to be. We suppose that
|
||||
// Exif-IFD tags might be found in IFD0 or IFD1, or IFD0/IFD1 tags might be
|
||||
// found in the Exif IFD. This is the only thing we've seen so far. So, this
|
||||
// is the limit of our guessing.
|
||||
tagGuessDefaultIfdIdentities = []*exifcommon.IfdIdentity{
|
||||
exifcommon.IfdExifStandardIfdIdentity,
|
||||
exifcommon.IfdStandardIfdIdentity,
|
||||
}
|
||||
)
|
||||
|
||||
// FindFirst looks for the given tag-ID in each of the given IFDs in the given
|
||||
// order. If `fqIfdPaths` is `nil` then use a default search order. This defies
|
||||
// the standard, which requires each tag to exist in certain IFDs. This is a
|
||||
// contingency to make recommendations for malformed data.
|
||||
//
|
||||
// Things *can* end badly here, in that the same tag-ID in different IFDs might
|
||||
// describe different data and different ata-types, and our decode might then
|
||||
// produce binary and non-printable data.
|
||||
func (ti *TagIndex) FindFirst(id uint16, typeId exifcommon.TagTypePrimitive, ifdIdentities []*exifcommon.IfdIdentity) (it *IndexedTag, err error) {
|
||||
defer func() {
|
||||
if state := recover(); state != nil {
|
||||
err = log.Wrap(state.(error))
|
||||
}
|
||||
}()
|
||||
|
||||
if ifdIdentities == nil {
|
||||
ifdIdentities = tagGuessDefaultIfdIdentities
|
||||
}
|
||||
|
||||
for _, ii := range ifdIdentities {
|
||||
it, err := ti.Get(ii, id)
|
||||
if err != nil {
|
||||
if err == ErrTagNotFound {
|
||||
continue
|
||||
}
|
||||
|
||||
log.Panic(err)
|
||||
}
|
||||
|
||||
// Even though the tag might be mislocated, the type should still be the
|
||||
// same. Check this so we don't accidentally end-up on a complete
|
||||
// irrelevant tag with a totally different data type. This attempts to
|
||||
// mitigate producing garbage.
|
||||
for _, supportedType := range it.SupportedTypes {
|
||||
if supportedType == typeId {
|
||||
return it, nil
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return nil, ErrTagNotFound
|
||||
}
|
||||
|
||||
// GetWithName returns information about the non-IFD tag given a tag name.
|
||||
func (ti *TagIndex) GetWithName(ii *exifcommon.IfdIdentity, name string) (it *IndexedTag, err error) {
|
||||
defer func() {
|
||||
if state := recover(); state != nil {
|
||||
err = log.Wrap(state.(error))
|
||||
}
|
||||
}()
|
||||
|
||||
if len(ti.tagsByIfdR) == 0 {
|
||||
err := LoadStandardTags(ti)
|
||||
log.PanicIf(err)
|
||||
}
|
||||
|
||||
ifdPath := ii.UnindexedString()
|
||||
|
||||
it, found := ti.tagsByIfdR[ifdPath][name]
|
||||
if found != true {
|
||||
log.Panic(ErrTagNotFound)
|
||||
}
|
||||
|
||||
return it, nil
|
||||
}
|
||||
|
||||
// LoadStandardTags registers the tags that all devices/applications should
|
||||
// support.
|
||||
func LoadStandardTags(ti *TagIndex) (err error) {
|
||||
defer func() {
|
||||
if state := recover(); state != nil {
|
||||
err = log.Wrap(state.(error))
|
||||
}
|
||||
}()
|
||||
|
||||
// Read static data.
|
||||
|
||||
encodedIfds := make(map[string][]encodedTag)
|
||||
|
||||
err = yaml.Unmarshal([]byte(tagsYaml), encodedIfds)
|
||||
log.PanicIf(err)
|
||||
|
||||
// Load structure.
|
||||
|
||||
count := 0
|
||||
for ifdPath, tags := range encodedIfds {
|
||||
for _, tagInfo := range tags {
|
||||
tagId := uint16(tagInfo.Id)
|
||||
tagName := tagInfo.Name
|
||||
tagTypeName := tagInfo.TypeName
|
||||
tagTypeNames := tagInfo.TypeNames
|
||||
|
||||
if tagTypeNames == nil {
|
||||
if tagTypeName == "" {
|
||||
log.Panicf("no tag-types were given when registering standard tag [%s] (0x%04x) [%s]", ifdPath, tagId, tagName)
|
||||
}
|
||||
|
||||
tagTypeNames = []string{
|
||||
tagTypeName,
|
||||
}
|
||||
} else if tagTypeName != "" {
|
||||
log.Panicf("both 'type_names' and 'type_name' were given when registering standard tag [%s] (0x%04x) [%s]", ifdPath, tagId, tagName)
|
||||
}
|
||||
|
||||
tagTypes := make([]exifcommon.TagTypePrimitive, 0)
|
||||
for _, tagTypeName := range tagTypeNames {
|
||||
|
||||
// TODO(dustin): Discard unsupported types. This helps us with non-standard types that have actually been found in real data, that we ignore for right now. e.g. SSHORT, FLOAT, DOUBLE
|
||||
tagTypeId, found := exifcommon.GetTypeByName(tagTypeName)
|
||||
if found == false {
|
||||
tagsLogger.Warningf(nil, "Type [%s] for tag [%s] being loaded is not valid and is being ignored.", tagTypeName, tagName)
|
||||
continue
|
||||
}
|
||||
|
||||
tagTypes = append(tagTypes, tagTypeId)
|
||||
}
|
||||
|
||||
if len(tagTypes) == 0 {
|
||||
tagsLogger.Warningf(nil, "Tag [%s] (0x%04x) [%s] being loaded does not have any supported types and will not be registered.", ifdPath, tagId, tagName)
|
||||
continue
|
||||
}
|
||||
|
||||
it := &IndexedTag{
|
||||
IfdPath: ifdPath,
|
||||
Id: tagId,
|
||||
Name: tagName,
|
||||
SupportedTypes: tagTypes,
|
||||
}
|
||||
|
||||
err = ti.Add(it)
|
||||
log.PanicIf(err)
|
||||
|
||||
count++
|
||||
}
|
||||
}
|
||||
|
||||
tagsLogger.Debugf(nil, "(%d) tags loaded.", count)
|
||||
|
||||
return nil
|
||||
}
968
vendor/github.com/dsoprea/go-exif/v3/tags_data.go
generated
vendored
@@ -1,968 +0,0 @@
package exif

var (
	// From assets/tags.yaml . Needs to be here so it's embedded in the binary.
	tagsYaml = `
# Notes:
#
# This file was produced from http://www.exiv2.org/tags.html, using the included
# tool, though that document appears to have some duplicates when all IDs are
# supposed to be unique (EXIF information only has IDs, not IFDs; IFDs are
# determined by our pre-existing knowledge of those tags).
#
# The webpage that we've produced this file from appears to indicate that
# ImageWidth is represented by both 0x0100 and 0x0001 depending on whether the
# encoding is RGB or YCbCr.
IFD/Exif:
- id: 0x829a
  name: ExposureTime
  type_name: RATIONAL
- id: 0x829d
  name: FNumber
  type_name: RATIONAL
- id: 0x8822
  name: ExposureProgram
  type_name: SHORT
- id: 0x8824
  name: SpectralSensitivity
  type_name: ASCII
- id: 0x8827
  name: ISOSpeedRatings
  type_name: SHORT
- id: 0x8828
  name: OECF
  type_name: UNDEFINED
- id: 0x8830
  name: SensitivityType
  type_name: SHORT
- id: 0x8831
  name: StandardOutputSensitivity
  type_name: LONG
- id: 0x8832
  name: RecommendedExposureIndex
  type_name: LONG
- id: 0x8833
  name: ISOSpeed
  type_name: LONG
- id: 0x8834
  name: ISOSpeedLatitudeyyy
  type_name: LONG
- id: 0x8835
  name: ISOSpeedLatitudezzz
  type_name: LONG
- id: 0x9000
  name: ExifVersion
  type_name: UNDEFINED
- id: 0x9003
  name: DateTimeOriginal
  type_name: ASCII
- id: 0x9004
  name: DateTimeDigitized
  type_name: ASCII
- id: 0x9010
  name: OffsetTime
  type_name: ASCII
- id: 0x9011
  name: OffsetTimeOriginal
  type_name: ASCII
- id: 0x9012
  name: OffsetTimeDigitized
  type_name: ASCII
- id: 0x9101
  name: ComponentsConfiguration
  type_name: UNDEFINED
- id: 0x9102
  name: CompressedBitsPerPixel
  type_name: RATIONAL
- id: 0x9201
  name: ShutterSpeedValue
  type_name: SRATIONAL
- id: 0x9202
  name: ApertureValue
  type_name: RATIONAL
- id: 0x9203
  name: BrightnessValue
  type_name: SRATIONAL
- id: 0x9204
  name: ExposureBiasValue
  type_name: SRATIONAL
- id: 0x9205
  name: MaxApertureValue
  type_name: RATIONAL
- id: 0x9206
  name: SubjectDistance
  type_name: RATIONAL
- id: 0x9207
  name: MeteringMode
  type_name: SHORT
- id: 0x9208
  name: LightSource
  type_name: SHORT
- id: 0x9209
  name: Flash
  type_name: SHORT
- id: 0x920a
  name: FocalLength
  type_name: RATIONAL
- id: 0x9214
  name: SubjectArea
  type_name: SHORT
- id: 0x927c
  name: MakerNote
  type_name: UNDEFINED
- id: 0x9286
  name: UserComment
  type_name: UNDEFINED
- id: 0x9290
  name: SubSecTime
  type_name: ASCII
- id: 0x9291
  name: SubSecTimeOriginal
  type_name: ASCII
- id: 0x9292
  name: SubSecTimeDigitized
  type_name: ASCII
- id: 0xa000
  name: FlashpixVersion
  type_name: UNDEFINED
- id: 0xa001
  name: ColorSpace
  type_name: SHORT
- id: 0xa002
  name: PixelXDimension
  type_names: [LONG, SHORT]
- id: 0xa003
  name: PixelYDimension
  type_names: [LONG, SHORT]
- id: 0xa004
  name: RelatedSoundFile
  type_name: ASCII
- id: 0xa005
  name: InteroperabilityTag
  type_name: LONG
- id: 0xa20b
  name: FlashEnergy
  type_name: RATIONAL
- id: 0xa20c
  name: SpatialFrequencyResponse
  type_name: UNDEFINED
- id: 0xa20e
  name: FocalPlaneXResolution
  type_name: RATIONAL
- id: 0xa20f
  name: FocalPlaneYResolution
  type_name: RATIONAL
- id: 0xa210
  name: FocalPlaneResolutionUnit
  type_name: SHORT
- id: 0xa214
  name: SubjectLocation
  type_name: SHORT
- id: 0xa215
  name: ExposureIndex
  type_name: RATIONAL
- id: 0xa217
  name: SensingMethod
  type_name: SHORT
- id: 0xa300
  name: FileSource
  type_name: UNDEFINED
- id: 0xa301
  name: SceneType
  type_name: UNDEFINED
- id: 0xa302
  name: CFAPattern
  type_name: UNDEFINED
- id: 0xa401
  name: CustomRendered
  type_name: SHORT
- id: 0xa402
  name: ExposureMode
  type_name: SHORT
- id: 0xa403
  name: WhiteBalance
  type_name: SHORT
- id: 0xa404
  name: DigitalZoomRatio
  type_name: RATIONAL
- id: 0xa405
  name: FocalLengthIn35mmFilm
  type_name: SHORT
- id: 0xa406
  name: SceneCaptureType
  type_name: SHORT
- id: 0xa407
  name: GainControl
  type_name: SHORT
- id: 0xa408
  name: Contrast
  type_name: SHORT
- id: 0xa409
  name: Saturation
  type_name: SHORT
- id: 0xa40a
  name: Sharpness
  type_name: SHORT
- id: 0xa40b
  name: DeviceSettingDescription
  type_name: UNDEFINED
- id: 0xa40c
  name: SubjectDistanceRange
  type_name: SHORT
- id: 0xa420
  name: ImageUniqueID
  type_name: ASCII
- id: 0xa430
  name: CameraOwnerName
  type_name: ASCII
- id: 0xa431
  name: BodySerialNumber
  type_name: ASCII
- id: 0xa432
  name: LensSpecification
  type_name: RATIONAL
- id: 0xa433
  name: LensMake
  type_name: ASCII
- id: 0xa434
  name: LensModel
  type_name: ASCII
- id: 0xa435
  name: LensSerialNumber
  type_name: ASCII
IFD/GPSInfo:
- id: 0x0000
  name: GPSVersionID
  type_name: BYTE
- id: 0x0001
  name: GPSLatitudeRef
  type_name: ASCII
- id: 0x0002
  name: GPSLatitude
  type_name: RATIONAL
- id: 0x0003
  name: GPSLongitudeRef
  type_name: ASCII
- id: 0x0004
  name: GPSLongitude
  type_name: RATIONAL
- id: 0x0005
  name: GPSAltitudeRef
  type_name: BYTE
- id: 0x0006
  name: GPSAltitude
  type_name: RATIONAL
- id: 0x0007
  name: GPSTimeStamp
  type_name: RATIONAL
- id: 0x0008
  name: GPSSatellites
  type_name: ASCII
- id: 0x0009
  name: GPSStatus
  type_name: ASCII
- id: 0x000a
  name: GPSMeasureMode
  type_name: ASCII
- id: 0x000b
  name: GPSDOP
  type_name: RATIONAL
- id: 0x000c
  name: GPSSpeedRef
  type_name: ASCII
- id: 0x000d
  name: GPSSpeed
  type_name: RATIONAL
- id: 0x000e
  name: GPSTrackRef
  type_name: ASCII
- id: 0x000f
  name: GPSTrack
  type_name: RATIONAL
- id: 0x0010
  name: GPSImgDirectionRef
  type_name: ASCII
- id: 0x0011
  name: GPSImgDirection
  type_name: RATIONAL
- id: 0x0012
  name: GPSMapDatum
  type_name: ASCII
- id: 0x0013
  name: GPSDestLatitudeRef
  type_name: ASCII
- id: 0x0014
  name: GPSDestLatitude
  type_name: RATIONAL
- id: 0x0015
  name: GPSDestLongitudeRef
  type_name: ASCII
- id: 0x0016
  name: GPSDestLongitude
  type_name: RATIONAL
- id: 0x0017
  name: GPSDestBearingRef
  type_name: ASCII
- id: 0x0018
  name: GPSDestBearing
  type_name: RATIONAL
- id: 0x0019
  name: GPSDestDistanceRef
  type_name: ASCII
- id: 0x001a
  name: GPSDestDistance
  type_name: RATIONAL
- id: 0x001b
  name: GPSProcessingMethod
  type_name: UNDEFINED
- id: 0x001c
  name: GPSAreaInformation
  type_name: UNDEFINED
- id: 0x001d
  name: GPSDateStamp
  type_name: ASCII
- id: 0x001e
  name: GPSDifferential
  type_name: SHORT
IFD:
- id: 0x000b
  name: ProcessingSoftware
  type_name: ASCII
- id: 0x00fe
  name: NewSubfileType
  type_name: LONG
- id: 0x00ff
  name: SubfileType
  type_name: SHORT
- id: 0x0100
  name: ImageWidth
  type_names: [LONG, SHORT]
- id: 0x0101
  name: ImageLength
  type_names: [LONG, SHORT]
- id: 0x0102
  name: BitsPerSample
  type_name: SHORT
- id: 0x0103
  name: Compression
  type_name: SHORT
- id: 0x0106
  name: PhotometricInterpretation
  type_name: SHORT
- id: 0x0107
  name: Thresholding
  type_name: SHORT
- id: 0x0108
  name: CellWidth
  type_name: SHORT
- id: 0x0109
  name: CellLength
  type_name: SHORT
- id: 0x010a
  name: FillOrder
  type_name: SHORT
- id: 0x010d
  name: DocumentName
  type_name: ASCII
- id: 0x010e
  name: ImageDescription
  type_name: ASCII
- id: 0x010f
  name: Make
  type_name: ASCII
- id: 0x0110
  name: Model
  type_name: ASCII
- id: 0x0111
  name: StripOffsets
  type_names: [LONG, SHORT]
- id: 0x0112
  name: Orientation
  type_name: SHORT
- id: 0x0115
  name: SamplesPerPixel
  type_name: SHORT
- id: 0x0116
  name: RowsPerStrip
  type_names: [LONG, SHORT]
- id: 0x0117
  name: StripByteCounts
  type_names: [LONG, SHORT]
- id: 0x011a
  name: XResolution
  type_name: RATIONAL
- id: 0x011b
  name: YResolution
  type_name: RATIONAL
- id: 0x011c
  name: PlanarConfiguration
  type_name: SHORT
- id: 0x0122
  name: GrayResponseUnit
  type_name: SHORT
- id: 0x0123
  name: GrayResponseCurve
  type_name: SHORT
- id: 0x0124
  name: T4Options
  type_name: LONG
- id: 0x0125
  name: T6Options
  type_name: LONG
- id: 0x0128
  name: ResolutionUnit
  type_name: SHORT
- id: 0x0129
  name: PageNumber
  type_name: SHORT
- id: 0x012d
  name: TransferFunction
  type_name: SHORT
- id: 0x0131
  name: Software
  type_name: ASCII
- id: 0x0132
  name: DateTime
  type_name: ASCII
- id: 0x013b
  name: Artist
  type_name: ASCII
- id: 0x013c
  name: HostComputer
  type_name: ASCII
- id: 0x013d
  name: Predictor
  type_name: SHORT
- id: 0x013e
  name: WhitePoint
  type_name: RATIONAL
- id: 0x013f
  name: PrimaryChromaticities
  type_name: RATIONAL
- id: 0x0140
  name: ColorMap
  type_name: SHORT
- id: 0x0141
  name: HalftoneHints
  type_name: SHORT
- id: 0x0142
  name: TileWidth
  type_name: SHORT
- id: 0x0143
  name: TileLength
  type_name: SHORT
- id: 0x0144
  name: TileOffsets
  type_name: SHORT
- id: 0x0145
  name: TileByteCounts
  type_name: SHORT
- id: 0x014a
  name: SubIFDs
  type_name: LONG
- id: 0x014c
  name: InkSet
  type_name: SHORT
- id: 0x014d
  name: InkNames
  type_name: ASCII
- id: 0x014e
  name: NumberOfInks
  type_name: SHORT
- id: 0x0150
  name: DotRange
  type_name: BYTE
- id: 0x0151
  name: TargetPrinter
  type_name: ASCII
- id: 0x0152
  name: ExtraSamples
  type_name: SHORT
- id: 0x0153
  name: SampleFormat
  type_name: SHORT
- id: 0x0154
  name: SMinSampleValue
  type_name: SHORT
- id: 0x0155
  name: SMaxSampleValue
  type_name: SHORT
- id: 0x0156
  name: TransferRange
  type_name: SHORT
- id: 0x0157
  name: ClipPath
  type_name: BYTE
- id: 0x015a
  name: Indexed
  type_name: SHORT
- id: 0x015b
  name: JPEGTables
  type_name: UNDEFINED
- id: 0x015f
  name: OPIProxy
  type_name: SHORT
- id: 0x0200
  name: JPEGProc
  type_name: LONG
- id: 0x0201
  name: JPEGInterchangeFormat
  type_name: LONG
- id: 0x0202
  name: JPEGInterchangeFormatLength
  type_name: LONG
- id: 0x0203
  name: JPEGRestartInterval
  type_name: SHORT
- id: 0x0205
  name: JPEGLosslessPredictors
  type_name: SHORT
- id: 0x0206
  name: JPEGPointTransforms
  type_name: SHORT
- id: 0x0207
  name: JPEGQTables
  type_name: LONG
- id: 0x0208
  name: JPEGDCTables
  type_name: LONG
- id: 0x0209
  name: JPEGACTables
  type_name: LONG
- id: 0x0211
  name: YCbCrCoefficients
  type_name: RATIONAL
- id: 0x0212
  name: YCbCrSubSampling
  type_name: SHORT
- id: 0x0213
  name: YCbCrPositioning
  type_name: SHORT
- id: 0x0214
  name: ReferenceBlackWhite
  type_name: RATIONAL
- id: 0x02bc
  name: XMLPacket
  type_name: BYTE
- id: 0x4746
  name: Rating
  type_name: SHORT
- id: 0x4749
  name: RatingPercent
  type_name: SHORT
- id: 0x800d
  name: ImageID
  type_name: ASCII
- id: 0x828d
  name: CFARepeatPatternDim
  type_name: SHORT
- id: 0x828e
  name: CFAPattern
  type_name: BYTE
- id: 0x828f
  name: BatteryLevel
  type_name: RATIONAL
- id: 0x8298
  name: Copyright
  type_name: ASCII
- id: 0x829a
  name: ExposureTime
  # NOTE(dustin): SRATIONAL isn't mentioned in the standard, but we have seen it in real data.
  type_names: [RATIONAL, SRATIONAL]
- id: 0x829d
  name: FNumber
  # NOTE(dustin): SRATIONAL isn't mentioned in the standard, but we have seen it in real data.
  type_names: [RATIONAL, SRATIONAL]
- id: 0x83bb
  name: IPTCNAA
  type_name: LONG
- id: 0x8649
  name: ImageResources
  type_name: BYTE
- id: 0x8769
  name: ExifTag
  type_name: LONG
- id: 0x8773
  name: InterColorProfile
  type_name: UNDEFINED
- id: 0x8822
  name: ExposureProgram
  type_name: SHORT
- id: 0x8824
  name: SpectralSensitivity
  type_name: ASCII
- id: 0x8825
  name: GPSTag
  type_name: LONG
- id: 0x8827
  name: ISOSpeedRatings
  type_name: SHORT
- id: 0x8828
  name: OECF
  type_name: UNDEFINED
- id: 0x8829
  name: Interlace
  type_name: SHORT
- id: 0x882b
  name: SelfTimerMode
  type_name: SHORT
- id: 0x9003
  name: DateTimeOriginal
  type_name: ASCII
- id: 0x9102
  name: CompressedBitsPerPixel
  type_name: RATIONAL
- id: 0x9201
  name: ShutterSpeedValue
  type_name: SRATIONAL
- id: 0x9202
  name: ApertureValue
  type_name: RATIONAL
- id: 0x9203
  name: BrightnessValue
  type_name: SRATIONAL
- id: 0x9204
  name: ExposureBiasValue
  type_name: SRATIONAL
- id: 0x9205
  name: MaxApertureValue
  type_name: RATIONAL
- id: 0x9206
  name: SubjectDistance
  type_name: SRATIONAL
- id: 0x9207
  name: MeteringMode
  type_name: SHORT
- id: 0x9208
  name: LightSource
  type_name: SHORT
- id: 0x9209
  name: Flash
  type_name: SHORT
- id: 0x920a
  name: FocalLength
  type_name: RATIONAL
- id: 0x920b
  name: FlashEnergy
  type_name: RATIONAL
- id: 0x920c
  name: SpatialFrequencyResponse
  type_name: UNDEFINED
- id: 0x920d
  name: Noise
  type_name: UNDEFINED
- id: 0x920e
  name: FocalPlaneXResolution
  type_name: RATIONAL
- id: 0x920f
  name: FocalPlaneYResolution
  type_name: RATIONAL
- id: 0x9210
  name: FocalPlaneResolutionUnit
  type_name: SHORT
- id: 0x9211
  name: ImageNumber
  type_name: LONG
- id: 0x9212
  name: SecurityClassification
  type_name: ASCII
- id: 0x9213
  name: ImageHistory
  type_name: ASCII
- id: 0x9214
  name: SubjectLocation
  type_name: SHORT
- id: 0x9215
  name: ExposureIndex
  type_name: RATIONAL
- id: 0x9216
  name: TIFFEPStandardID
  type_name: BYTE
- id: 0x9217
  name: SensingMethod
  type_name: SHORT
- id: 0x9c9b
  name: XPTitle
  type_name: BYTE
- id: 0x9c9c
  name: XPComment
  type_name: BYTE
- id: 0x9c9d
  name: XPAuthor
  type_name: BYTE
- id: 0x9c9e
  name: XPKeywords
  type_name: BYTE
- id: 0x9c9f
  name: XPSubject
  type_name: BYTE
- id: 0xc4a5
  name: PrintImageMatching
  type_name: UNDEFINED
- id: 0xc612
  name: DNGVersion
  type_name: BYTE
- id: 0xc613
  name: DNGBackwardVersion
  type_name: BYTE
- id: 0xc614
  name: UniqueCameraModel
  type_name: ASCII
- id: 0xc615
  name: LocalizedCameraModel
  type_name: BYTE
- id: 0xc616
  name: CFAPlaneColor
  type_name: BYTE
- id: 0xc617
  name: CFALayout
  type_name: SHORT
- id: 0xc618
  name: LinearizationTable
  type_name: SHORT
- id: 0xc619
  name: BlackLevelRepeatDim
  type_name: SHORT
- id: 0xc61a
  name: BlackLevel
  type_name: RATIONAL
- id: 0xc61b
  name: BlackLevelDeltaH
  type_name: SRATIONAL
- id: 0xc61c
  name: BlackLevelDeltaV
  type_name: SRATIONAL
- id: 0xc61d
  name: WhiteLevel
  type_name: SHORT
- id: 0xc61e
  name: DefaultScale
  type_name: RATIONAL
- id: 0xc61f
  name: DefaultCropOrigin
  type_name: SHORT
- id: 0xc620
  name: DefaultCropSize
  type_name: SHORT
- id: 0xc621
  name: ColorMatrix1
  type_name: SRATIONAL
- id: 0xc622
  name: ColorMatrix2
  type_name: SRATIONAL
- id: 0xc623
  name: CameraCalibration1
  type_name: SRATIONAL
- id: 0xc624
  name: CameraCalibration2
  type_name: SRATIONAL
- id: 0xc625
  name: ReductionMatrix1
  type_name: SRATIONAL
- id: 0xc626
  name: ReductionMatrix2
  type_name: SRATIONAL
- id: 0xc627
  name: AnalogBalance
  type_name: RATIONAL
- id: 0xc628
  name: AsShotNeutral
  type_name: SHORT
- id: 0xc629
  name: AsShotWhiteXY
  type_name: RATIONAL
- id: 0xc62a
  name: BaselineExposure
  type_name: SRATIONAL
- id: 0xc62b
  name: BaselineNoise
  type_name: RATIONAL
- id: 0xc62c
  name: BaselineSharpness
  type_name: RATIONAL
- id: 0xc62d
  name: BayerGreenSplit
  type_name: LONG
- id: 0xc62e
  name: LinearResponseLimit
  type_name: RATIONAL
- id: 0xc62f
  name: CameraSerialNumber
  type_name: ASCII
- id: 0xc630
  name: LensInfo
  type_name: RATIONAL
- id: 0xc631
  name: ChromaBlurRadius
  type_name: RATIONAL
- id: 0xc632
  name: AntiAliasStrength
  type_name: RATIONAL
- id: 0xc633
  name: ShadowScale
  type_name: SRATIONAL
- id: 0xc634
  name: DNGPrivateData
  type_name: BYTE
- id: 0xc635
  name: MakerNoteSafety
  type_name: SHORT
- id: 0xc65a
  name: CalibrationIlluminant1
  type_name: SHORT
- id: 0xc65b
  name: CalibrationIlluminant2
  type_name: SHORT
- id: 0xc65c
  name: BestQualityScale
  type_name: RATIONAL
- id: 0xc65d
  name: RawDataUniqueID
  type_name: BYTE
- id: 0xc68b
  name: OriginalRawFileName
  type_name: BYTE
- id: 0xc68c
  name: OriginalRawFileData
  type_name: UNDEFINED
- id: 0xc68d
  name: ActiveArea
  type_name: SHORT
- id: 0xc68e
  name: MaskedAreas
  type_name: SHORT
- id: 0xc68f
  name: AsShotICCProfile
  type_name: UNDEFINED
- id: 0xc690
  name: AsShotPreProfileMatrix
  type_name: SRATIONAL
- id: 0xc691
  name: CurrentICCProfile
  type_name: UNDEFINED
- id: 0xc692
  name: CurrentPreProfileMatrix
  type_name: SRATIONAL
- id: 0xc6bf
  name: ColorimetricReference
  type_name: SHORT
- id: 0xc6f3
  name: CameraCalibrationSignature
  type_name: BYTE
- id: 0xc6f4
  name: ProfileCalibrationSignature
  type_name: BYTE
- id: 0xc6f6
  name: AsShotProfileName
  type_name: BYTE
- id: 0xc6f7
  name: NoiseReductionApplied
  type_name: RATIONAL
- id: 0xc6f8
  name: ProfileName
  type_name: BYTE
- id: 0xc6f9
  name: ProfileHueSatMapDims
  type_name: LONG
- id: 0xc6fd
  name: ProfileEmbedPolicy
  type_name: LONG
- id: 0xc6fe
  name: ProfileCopyright
  type_name: BYTE
- id: 0xc714
  name: ForwardMatrix1
  type_name: SRATIONAL
- id: 0xc715
  name: ForwardMatrix2
  type_name: SRATIONAL
- id: 0xc716
  name: PreviewApplicationName
  type_name: BYTE
- id: 0xc717
  name: PreviewApplicationVersion
  type_name: BYTE
- id: 0xc718
  name: PreviewSettingsName
  type_name: BYTE
- id: 0xc719
  name: PreviewSettingsDigest
  type_name: BYTE
- id: 0xc71a
  name: PreviewColorSpace
  type_name: LONG
- id: 0xc71b
  name: PreviewDateTime
  type_name: ASCII
- id: 0xc71c
  name: RawImageDigest
  type_name: UNDEFINED
- id: 0xc71d
  name: OriginalRawFileDigest
  type_name: UNDEFINED
- id: 0xc71e
  name: SubTileBlockSize
  type_name: LONG
- id: 0xc71f
  name: RowInterleaveFactor
  type_name: LONG
- id: 0xc725
  name: ProfileLookTableDims
  type_name: LONG
- id: 0xc740
  name: OpcodeList1
  type_name: UNDEFINED
- id: 0xc741
  name: OpcodeList2
  type_name: UNDEFINED
- id: 0xc74e
  name: OpcodeList3
  type_name: UNDEFINED
# This tag may be used to specify the size of raster pixel spacing in the
# model space units, when the raster space can be embedded in the model space
# coordinate system without rotation, and consists of the following 3 values:
# ModelPixelScaleTag = (ScaleX, ScaleY, ScaleZ)
# where ScaleX and ScaleY give the horizontal and vertical spacing of raster
# pixels. The ScaleZ is primarily used to map the pixel value of a digital
# elevation model into the correct Z-scale, and so for most other purposes
# this value should be zero (since most model spaces are 2-D, with Z=0).
# Source: http://geotiff.maptools.org/spec/geotiff2.6.html#2.6.1
- id: 0x830e
  name: ModelPixelScaleTag
  type_name: DOUBLE
# This tag stores raster->model tiepoint pairs in the order
# ModelTiepointTag = (...,I,J,K, X,Y,Z...),
# where (I,J,K) is the point at location (I,J) in raster space with
# pixel-value K, and (X,Y,Z) is a vector in model space. In most cases the
# model space is only two-dimensional, in which case both K and Z should be
# set to zero; this third dimension is provided in anticipation of future
# support for 3D digital elevation models and vertical coordinate systems.
# Source: http://geotiff.maptools.org/spec/geotiff2.6.html#2.6.1
- id: 0x8482
  name: ModelTiepointTag
  type_name: DOUBLE
# This tag may be used to specify the transformation matrix between the
# raster space (and its dependent pixel-value space) and the (possibly 3D)
# model space.
# Source: http://geotiff.maptools.org/spec/geotiff2.6.html#2.6.1
- id: 0x85d8
  name: ModelTransformationTag
  type_name: DOUBLE
IFD/Exif/Iop:
- id: 0x0001
  name: InteroperabilityIndex
  type_name: ASCII
- id: 0x0002
  name: InteroperabilityVersion
  type_name: UNDEFINED
- id: 0x1000
  name: RelatedImageFileFormat
|
||||
type_name: ASCII
|
||||
- id: 0x1001
|
||||
name: RelatedImageWidth
|
||||
type_name: LONG
|
||||
- id: 0x1002
|
||||
name: RelatedImageLength
|
||||
type_name: LONG
|
||||
`
|
||||
)
|
||||
188 vendor/github.com/dsoprea/go-exif/v3/testing_common.go generated vendored
@@ -1,188 +0,0 @@
package exif

import (
	"path"
	"reflect"
	"testing"

	"io/ioutil"

	"github.com/dsoprea/go-logging"

	"github.com/dsoprea/go-exif/v3/common"
)

var (
	testExifData []byte
)

func getExifSimpleTestIb() *IfdBuilder {
	defer func() {
		if state := recover(); state != nil {
			err := log.Wrap(state.(error))
			log.Panic(err)
		}
	}()

	im := exifcommon.NewIfdMapping()

	err := exifcommon.LoadStandardIfds(im)
	log.PanicIf(err)

	ti := NewTagIndex()
	ib := NewIfdBuilder(im, ti, exifcommon.IfdStandardIfdIdentity, exifcommon.TestDefaultByteOrder)

	err = ib.AddStandard(0x000b, "asciivalue")
	log.PanicIf(err)

	err = ib.AddStandard(0x00ff, []uint16{0x1122})
	log.PanicIf(err)

	err = ib.AddStandard(0x0100, []uint32{0x33445566})
	log.PanicIf(err)

	err = ib.AddStandard(0x013e, []exifcommon.Rational{{Numerator: 0x11112222, Denominator: 0x33334444}})
	log.PanicIf(err)

	return ib
}

func getExifSimpleTestIbBytes() []byte {
	defer func() {
		if state := recover(); state != nil {
			err := log.Wrap(state.(error))
			log.Panic(err)
		}
	}()

	im := exifcommon.NewIfdMapping()

	err := exifcommon.LoadStandardIfds(im)
	log.PanicIf(err)

	ti := NewTagIndex()
	ib := NewIfdBuilder(im, ti, exifcommon.IfdStandardIfdIdentity, exifcommon.TestDefaultByteOrder)

	err = ib.AddStandard(0x000b, "asciivalue")
	log.PanicIf(err)

	err = ib.AddStandard(0x00ff, []uint16{0x1122})
	log.PanicIf(err)

	err = ib.AddStandard(0x0100, []uint32{0x33445566})
	log.PanicIf(err)

	err = ib.AddStandard(0x013e, []exifcommon.Rational{{Numerator: 0x11112222, Denominator: 0x33334444}})
	log.PanicIf(err)

	ibe := NewIfdByteEncoder()

	exifData, err := ibe.EncodeToExif(ib)
	log.PanicIf(err)

	return exifData
}

func validateExifSimpleTestIb(exifData []byte, t *testing.T) {
	defer func() {
		if state := recover(); state != nil {
			err := log.Wrap(state.(error))
			log.Panic(err)
		}
	}()

	im := exifcommon.NewIfdMapping()

	err := exifcommon.LoadStandardIfds(im)
	log.PanicIf(err)

	ti := NewTagIndex()

	eh, index, err := Collect(im, ti, exifData)
	log.PanicIf(err)

	if eh.ByteOrder != exifcommon.TestDefaultByteOrder {
		t.Fatalf("EXIF byte-order is not correct: %v", eh.ByteOrder)
	} else if eh.FirstIfdOffset != ExifDefaultFirstIfdOffset {
		t.Fatalf("EXIF first IFD-offset not correct: (0x%02x)", eh.FirstIfdOffset)
	}

	if len(index.Ifds) != 1 {
		t.Fatalf("There wasn't exactly one IFD decoded: (%d)", len(index.Ifds))
	}

	ifd := index.RootIfd

	if ifd.ByteOrder() != exifcommon.TestDefaultByteOrder {
		t.Fatalf("IFD byte-order not correct.")
	} else if ifd.ifdIdentity.UnindexedString() != exifcommon.IfdStandardIfdIdentity.UnindexedString() {
		t.Fatalf("IFD name not correct.")
	} else if ifd.ifdIdentity.Index() != 0 {
		t.Fatalf("IFD index not zero: (%d)", ifd.ifdIdentity.Index())
	} else if ifd.Offset() != uint32(0x0008) {
		t.Fatalf("IFD offset not correct.")
	} else if len(ifd.Entries()) != 4 {
		t.Fatalf("IFD number of entries not correct: (%d)", len(ifd.Entries()))
	} else if ifd.nextIfdOffset != uint32(0) {
		t.Fatalf("Next-IFD offset is non-zero.")
	} else if ifd.nextIfd != nil {
		t.Fatalf("Next-IFD pointer is non-nil.")
	}

	// Verify the values by using the actual, original types (this is awesome).

	expected := []struct {
		tagId uint16
		value interface{}
	}{
		{tagId: 0x000b, value: "asciivalue"},
		{tagId: 0x00ff, value: []uint16{0x1122}},
		{tagId: 0x0100, value: []uint32{0x33445566}},
		{tagId: 0x013e, value: []exifcommon.Rational{{Numerator: 0x11112222, Denominator: 0x33334444}}},
	}

	for i, ite := range ifd.Entries() {
		if ite.TagId() != expected[i].tagId {
			t.Fatalf("Tag-ID for entry (%d) not correct: (0x%02x) != (0x%02x)", i, ite.TagId(), expected[i].tagId)
		}

		value, err := ite.Value()
		log.PanicIf(err)

		if reflect.DeepEqual(value, expected[i].value) != true {
			t.Fatalf("Value for entry (%d) not correct: [%v] != [%v]", i, value, expected[i].value)
		}
	}
}

func getTestImageFilepath() string {
	assetsPath := exifcommon.GetTestAssetsPath()
	testImageFilepath := path.Join(assetsPath, "NDM_8901.jpg")
	return testImageFilepath
}

func getTestExifData() []byte {
	if testExifData == nil {
		assetsPath := exifcommon.GetTestAssetsPath()
		filepath := path.Join(assetsPath, "NDM_8901.jpg.exif")

		var err error

		testExifData, err = ioutil.ReadFile(filepath)
		log.PanicIf(err)
	}

	return testExifData
}

func getTestGpsImageFilepath() string {
	assetsPath := exifcommon.GetTestAssetsPath()
	testGpsImageFilepath := path.Join(assetsPath, "gps.jpg")
	return testGpsImageFilepath
}

func getTestGeotiffFilepath() string {
	assetsPath := exifcommon.GetTestAssetsPath()
	testGeotiffFilepath := path.Join(assetsPath, "geotiff_example.tif")
	return testGeotiffFilepath
}
4 vendor/github.com/dsoprea/go-exif/v3/undefined/README.md generated vendored
@@ -1,4 +0,0 @@
## 0xa40b

The specification is not specific/clear enough to be handled. Without a working example, we're deferring until some point in the future when either we or someone else has a better understanding.
62 vendor/github.com/dsoprea/go-exif/v3/undefined/accessor.go generated vendored
@@ -1,62 +0,0 @@
package exifundefined

import (
	"encoding/binary"

	"github.com/dsoprea/go-logging"

	"github.com/dsoprea/go-exif/v3/common"
)

// Encode encodes the given encodeable undefined value to bytes.
func Encode(value EncodeableValue, byteOrder binary.ByteOrder) (encoded []byte, unitCount uint32, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	encoderName := value.EncoderName()

	encoder, found := encoders[encoderName]
	if found == false {
		log.Panicf("no encoder registered for type [%s]", encoderName)
	}

	encoded, unitCount, err = encoder.Encode(value, byteOrder)
	log.PanicIf(err)

	return encoded, unitCount, nil
}

// Decode constructs a value from raw encoded bytes.
func Decode(valueContext *exifcommon.ValueContext) (value EncodeableValue, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	uth := UndefinedTagHandle{
		IfdPath: valueContext.IfdPath(),
		TagId:   valueContext.TagId(),
	}

	decoder, found := decoders[uth]
	if found == false {
		// We have no choice but to return the error. We have no way of knowing how
		// much data there is without already knowing what data-type this tag is.
		return nil, exifcommon.ErrUnhandledUndefinedTypedTag
	}

	value, err = decoder.Decode(valueContext)
	if err != nil {
		if err == ErrUnparseableValue {
			return nil, err
		}

		log.Panic(err)
	}

	return value, nil
}
148 vendor/github.com/dsoprea/go-exif/v3/undefined/exif_8828_oecf.go generated vendored
@@ -1,148 +0,0 @@
package exifundefined

import (
	"bytes"
	"fmt"

	"encoding/binary"

	"github.com/dsoprea/go-logging"

	"github.com/dsoprea/go-exif/v3/common"
)

type Tag8828Oecf struct {
	Columns     uint16
	Rows        uint16
	ColumnNames []string
	Values      []exifcommon.SignedRational
}

func (oecf Tag8828Oecf) String() string {
	return fmt.Sprintf("Tag8828Oecf<COLUMNS=(%d) ROWS=(%d)>", oecf.Columns, oecf.Rows)
}

func (oecf Tag8828Oecf) EncoderName() string {
	return "Codec8828Oecf"
}

type Codec8828Oecf struct {
}

func (Codec8828Oecf) Encode(value interface{}, byteOrder binary.ByteOrder) (encoded []byte, unitCount uint32, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	// TODO(dustin): Add test

	oecf, ok := value.(Tag8828Oecf)
	if ok == false {
		log.Panicf("can only encode a Tag8828Oecf")
	}

	b := new(bytes.Buffer)

	err = binary.Write(b, byteOrder, oecf.Columns)
	log.PanicIf(err)

	err = binary.Write(b, byteOrder, oecf.Rows)
	log.PanicIf(err)

	for _, name := range oecf.ColumnNames {
		_, err := b.Write([]byte(name))
		log.PanicIf(err)

		_, err = b.Write([]byte{0})
		log.PanicIf(err)
	}

	ve := exifcommon.NewValueEncoder(byteOrder)

	ed, err := ve.Encode(oecf.Values)
	log.PanicIf(err)

	_, err = b.Write(ed.Encoded)
	log.PanicIf(err)

	return b.Bytes(), uint32(b.Len()), nil
}

func (Codec8828Oecf) Decode(valueContext *exifcommon.ValueContext) (value EncodeableValue, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	// TODO(dustin): Add test using known good data.

	valueContext.SetUndefinedValueType(exifcommon.TypeByte)

	valueBytes, err := valueContext.ReadBytes()
	log.PanicIf(err)

	oecf := Tag8828Oecf{}

	oecf.Columns = valueContext.ByteOrder().Uint16(valueBytes[0:2])
	oecf.Rows = valueContext.ByteOrder().Uint16(valueBytes[2:4])

	columnNames := make([]string, oecf.Columns)

	// startAt is where the current column name starts.
	startAt := 4

	// offset is our current position.
	offset := startAt

	currentColumnNumber := uint16(0)

	for currentColumnNumber < oecf.Columns {
		if valueBytes[offset] == 0 {
			columnName := string(valueBytes[startAt:offset])
			if len(columnName) == 0 {
				log.Panicf("SFR column (%d) has zero length", currentColumnNumber)
			}

			columnNames[currentColumnNumber] = columnName
			currentColumnNumber++

			offset++
			startAt = offset
			continue
		}

		offset++
	}

	oecf.ColumnNames = columnNames

	rawRationalBytes := valueBytes[offset:]

	rationalSize := exifcommon.TypeSignedRational.Size()
	if len(rawRationalBytes)%rationalSize > 0 {
		log.Panicf("OECF signed-rationals not aligned: (%d) %% (%d) > 0", len(rawRationalBytes), rationalSize)
	}

	rationalCount := len(rawRationalBytes) / rationalSize

	parser := new(exifcommon.Parser)

	byteOrder := valueContext.ByteOrder()

	items, err := parser.ParseSignedRationals(rawRationalBytes, uint32(rationalCount), byteOrder)
	log.PanicIf(err)

	oecf.Values = items

	return oecf, nil
}

func init() {
	registerDecoder(
		exifcommon.IfdExifStandardIfdIdentity.UnindexedString(),
		0x8828,
		Codec8828Oecf{})
}
69 vendor/github.com/dsoprea/go-exif/v3/undefined/exif_9000_exif_version.go generated vendored
@@ -1,69 +0,0 @@
package exifundefined

import (
	"encoding/binary"

	"github.com/dsoprea/go-logging"

	"github.com/dsoprea/go-exif/v3/common"
)

type Tag9000ExifVersion struct {
	ExifVersion string
}

func (Tag9000ExifVersion) EncoderName() string {
	return "Codec9000ExifVersion"
}

func (ev Tag9000ExifVersion) String() string {
	return ev.ExifVersion
}

type Codec9000ExifVersion struct {
}

func (Codec9000ExifVersion) Encode(value interface{}, byteOrder binary.ByteOrder) (encoded []byte, unitCount uint32, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	s, ok := value.(Tag9000ExifVersion)
	if ok == false {
		log.Panicf("can only encode a Tag9000ExifVersion")
	}

	return []byte(s.ExifVersion), uint32(len(s.ExifVersion)), nil
}

func (Codec9000ExifVersion) Decode(valueContext *exifcommon.ValueContext) (value EncodeableValue, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	valueContext.SetUndefinedValueType(exifcommon.TypeAsciiNoNul)

	valueString, err := valueContext.ReadAsciiNoNul()
	log.PanicIf(err)

	ev := Tag9000ExifVersion{
		ExifVersion: valueString,
	}

	return ev, nil
}

func init() {
	registerEncoder(
		Tag9000ExifVersion{},
		Codec9000ExifVersion{})

	registerDecoder(
		exifcommon.IfdExifStandardIfdIdentity.UnindexedString(),
		0x9000,
		Codec9000ExifVersion{})
}
124 vendor/github.com/dsoprea/go-exif/v3/undefined/exif_9101_components_configuration.go generated vendored
@@ -1,124 +0,0 @@
package exifundefined

import (
	"bytes"
	"fmt"

	"encoding/binary"

	"github.com/dsoprea/go-logging"

	"github.com/dsoprea/go-exif/v3/common"
)

const (
	TagUndefinedType_9101_ComponentsConfiguration_Channel_Y  = 0x1
	TagUndefinedType_9101_ComponentsConfiguration_Channel_Cb = 0x2
	TagUndefinedType_9101_ComponentsConfiguration_Channel_Cr = 0x3
	TagUndefinedType_9101_ComponentsConfiguration_Channel_R  = 0x4
	TagUndefinedType_9101_ComponentsConfiguration_Channel_G  = 0x5
	TagUndefinedType_9101_ComponentsConfiguration_Channel_B  = 0x6
)

const (
	TagUndefinedType_9101_ComponentsConfiguration_OTHER = iota
	TagUndefinedType_9101_ComponentsConfiguration_RGB   = iota
	TagUndefinedType_9101_ComponentsConfiguration_YCBCR = iota
)

var (
	TagUndefinedType_9101_ComponentsConfiguration_Names = map[int]string{
		TagUndefinedType_9101_ComponentsConfiguration_OTHER: "OTHER",
		TagUndefinedType_9101_ComponentsConfiguration_RGB:   "RGB",
		TagUndefinedType_9101_ComponentsConfiguration_YCBCR: "YCBCR",
	}

	TagUndefinedType_9101_ComponentsConfiguration_Configurations = map[int][]byte{
		TagUndefinedType_9101_ComponentsConfiguration_RGB: {
			TagUndefinedType_9101_ComponentsConfiguration_Channel_R,
			TagUndefinedType_9101_ComponentsConfiguration_Channel_G,
			TagUndefinedType_9101_ComponentsConfiguration_Channel_B,
			0,
		},

		TagUndefinedType_9101_ComponentsConfiguration_YCBCR: {
			TagUndefinedType_9101_ComponentsConfiguration_Channel_Y,
			TagUndefinedType_9101_ComponentsConfiguration_Channel_Cb,
			TagUndefinedType_9101_ComponentsConfiguration_Channel_Cr,
			0,
		},
	}
)

type TagExif9101ComponentsConfiguration struct {
	ConfigurationId    int
	ConfigurationBytes []byte
}

func (TagExif9101ComponentsConfiguration) EncoderName() string {
	return "CodecExif9101ComponentsConfiguration"
}

func (cc TagExif9101ComponentsConfiguration) String() string {
	return fmt.Sprintf("Exif9101ComponentsConfiguration<ID=[%s] BYTES=%v>", TagUndefinedType_9101_ComponentsConfiguration_Names[cc.ConfigurationId], cc.ConfigurationBytes)
}

type CodecExif9101ComponentsConfiguration struct {
}

func (CodecExif9101ComponentsConfiguration) Encode(value interface{}, byteOrder binary.ByteOrder) (encoded []byte, unitCount uint32, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	cc, ok := value.(TagExif9101ComponentsConfiguration)
	if ok == false {
		log.Panicf("can only encode a TagExif9101ComponentsConfiguration")
	}

	return cc.ConfigurationBytes, uint32(len(cc.ConfigurationBytes)), nil
}

func (CodecExif9101ComponentsConfiguration) Decode(valueContext *exifcommon.ValueContext) (value EncodeableValue, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	valueContext.SetUndefinedValueType(exifcommon.TypeByte)

	valueBytes, err := valueContext.ReadBytes()
	log.PanicIf(err)

	for configurationId, configurationBytes := range TagUndefinedType_9101_ComponentsConfiguration_Configurations {
		if bytes.Equal(configurationBytes, valueBytes) == true {
			cc := TagExif9101ComponentsConfiguration{
				ConfigurationId:    configurationId,
				ConfigurationBytes: valueBytes,
			}

			return cc, nil
		}
	}

	cc := TagExif9101ComponentsConfiguration{
		ConfigurationId:    TagUndefinedType_9101_ComponentsConfiguration_OTHER,
		ConfigurationBytes: valueBytes,
	}

	return cc, nil
}

func init() {
	registerEncoder(
		TagExif9101ComponentsConfiguration{},
		CodecExif9101ComponentsConfiguration{})

	registerDecoder(
		exifcommon.IfdExifStandardIfdIdentity.UnindexedString(),
		0x9101,
		CodecExif9101ComponentsConfiguration{})
}
114 vendor/github.com/dsoprea/go-exif/v3/undefined/exif_927C_maker_note.go generated vendored
@@ -1,114 +0,0 @@
package exifundefined

import (
	"fmt"
	"strings"

	"crypto/sha1"
	"encoding/binary"

	"github.com/dsoprea/go-logging"

	"github.com/dsoprea/go-exif/v3/common"
)

type Tag927CMakerNote struct {
	MakerNoteType  []byte
	MakerNoteBytes []byte
}

func (Tag927CMakerNote) EncoderName() string {
	return "Codec927CMakerNote"
}

func (mn Tag927CMakerNote) String() string {
	parts := make([]string, len(mn.MakerNoteType))

	for i, c := range mn.MakerNoteType {
		parts[i] = fmt.Sprintf("%02x", c)
	}

	h := sha1.New()

	_, err := h.Write(mn.MakerNoteBytes)
	log.PanicIf(err)

	digest := h.Sum(nil)

	return fmt.Sprintf("MakerNote<TYPE-ID=[%s] LEN=(%d) SHA1=[%020x]>", strings.Join(parts, " "), len(mn.MakerNoteBytes), digest)
}

type Codec927CMakerNote struct {
}

func (Codec927CMakerNote) Encode(value interface{}, byteOrder binary.ByteOrder) (encoded []byte, unitCount uint32, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	mn, ok := value.(Tag927CMakerNote)
	if ok == false {
		log.Panicf("can only encode a Tag927CMakerNote")
	}

	// TODO(dustin): Confirm this size against the specification.

	return mn.MakerNoteBytes, uint32(len(mn.MakerNoteBytes)), nil
}

func (Codec927CMakerNote) Decode(valueContext *exifcommon.ValueContext) (value EncodeableValue, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	// MakerNote
	// TODO(dustin): !! This is the Wild Wild West. This very well might be a
	// child IFD, but any and all OEMs define their own formats. If we're going
	// to be writing changes and this is complete EXIF (which may not have the
	// first eight bytes), it might be fine. However, if these are just IFDs
	// they'll be relative to the main EXIF, and this will invalidate the
	// MakerNote data for IFDs and any other implementations that use offsets
	// unless we can interpret them all. It'd be best to return to this later
	// and just exclude this from being written for now, though that means a
	// loss of a wealth of image metadata.
	// -> We can also just blindly try to interpret it as an IFD and validate
	// that it looks good (maybe it will even have a 'next ifd' pointer that we
	// can validate is 0x0).

	valueContext.SetUndefinedValueType(exifcommon.TypeByte)

	valueBytes, err := valueContext.ReadBytes()
	log.PanicIf(err)

	// TODO(dustin): Doesn't work, but here as an example.
	// ie := NewIfdEnumerate(valueBytes, byteOrder)

	// // TODO(dustin): !! Validate types (might have proprietary types, but it
	// // might be worth splitting the list between valid and not valid; maybe
	// // fail if a certain proportion are invalid, or aren't less than a
	// // certain small integer)?
	// ii, err := ie.Collect(0x0)

	// for _, entry := range ii.RootIfd.Entries {
	//     fmt.Printf("ENTRY: 0x%02x %d\n", entry.TagId, entry.TagType)
	// }

	var makerNoteType []byte
	if len(valueBytes) >= 20 {
		makerNoteType = valueBytes[:20]
	} else {
		makerNoteType = valueBytes
	}

	mn := Tag927CMakerNote{
		MakerNoteType: makerNoteType,

		// MakerNoteBytes has the whole length of bytes. There's always
		// the chance that the first 20 bytes includes actual data.
		MakerNoteBytes: valueBytes,
	}

	return mn, nil
}

func init() {
	registerEncoder(
		Tag927CMakerNote{},
		Codec927CMakerNote{})

	registerDecoder(
		exifcommon.IfdExifStandardIfdIdentity.UnindexedString(),
		0x927c,
		Codec927CMakerNote{})
}
142 vendor/github.com/dsoprea/go-exif/v3/undefined/exif_9286_user_comment.go generated vendored
@@ -1,142 +0,0 @@
package exifundefined

import (
	"bytes"
	"fmt"

	"encoding/binary"

	"github.com/dsoprea/go-logging"

	"github.com/dsoprea/go-exif/v3/common"
)

var (
	exif9286Logger = log.NewLogger("exifundefined.exif_9286_user_comment")
)

const (
	TagUndefinedType_9286_UserComment_Encoding_ASCII     = iota
	TagUndefinedType_9286_UserComment_Encoding_JIS       = iota
	TagUndefinedType_9286_UserComment_Encoding_UNICODE   = iota
	TagUndefinedType_9286_UserComment_Encoding_UNDEFINED = iota
)

var (
	TagUndefinedType_9286_UserComment_Encoding_Names = map[int]string{
		TagUndefinedType_9286_UserComment_Encoding_ASCII:     "ASCII",
		TagUndefinedType_9286_UserComment_Encoding_JIS:       "JIS",
		TagUndefinedType_9286_UserComment_Encoding_UNICODE:   "UNICODE",
		TagUndefinedType_9286_UserComment_Encoding_UNDEFINED: "UNDEFINED",
	}

	TagUndefinedType_9286_UserComment_Encodings = map[int][]byte{
		TagUndefinedType_9286_UserComment_Encoding_ASCII:     {'A', 'S', 'C', 'I', 'I', 0, 0, 0},
		TagUndefinedType_9286_UserComment_Encoding_JIS:       {'J', 'I', 'S', 0, 0, 0, 0, 0},
		TagUndefinedType_9286_UserComment_Encoding_UNICODE:   {'U', 'n', 'i', 'c', 'o', 'd', 'e', 0},
		TagUndefinedType_9286_UserComment_Encoding_UNDEFINED: {0, 0, 0, 0, 0, 0, 0, 0},
	}
)

type Tag9286UserComment struct {
	EncodingType  int
	EncodingBytes []byte
}

func (Tag9286UserComment) EncoderName() string {
	return "Codec9286UserComment"
}

func (uc Tag9286UserComment) String() string {
	var valuePhrase string

	if uc.EncodingType == TagUndefinedType_9286_UserComment_Encoding_ASCII {
		return fmt.Sprintf("[ASCII] %s", string(uc.EncodingBytes))
	} else {
		if len(uc.EncodingBytes) <= 8 {
			valuePhrase = fmt.Sprintf("%v", uc.EncodingBytes)
		} else {
			valuePhrase = fmt.Sprintf("%v...", uc.EncodingBytes[:8])
		}
	}

	return fmt.Sprintf("UserComment<SIZE=(%d) ENCODING=[%s] V=%v LEN=(%d)>", len(uc.EncodingBytes), TagUndefinedType_9286_UserComment_Encoding_Names[uc.EncodingType], valuePhrase, len(uc.EncodingBytes))
}

type Codec9286UserComment struct {
}

func (Codec9286UserComment) Encode(value interface{}, byteOrder binary.ByteOrder) (encoded []byte, unitCount uint32, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	uc, ok := value.(Tag9286UserComment)
	if ok == false {
		log.Panicf("can only encode a Tag9286UserComment")
	}

	encodingTypeBytes, found := TagUndefinedType_9286_UserComment_Encodings[uc.EncodingType]
	if found == false {
		log.Panicf("encoding-type not valid for unknown-type tag 9286 (UserComment): (%d)", uc.EncodingType)
	}

	encoded = make([]byte, len(uc.EncodingBytes)+8)

	copy(encoded[:8], encodingTypeBytes)
	copy(encoded[8:], uc.EncodingBytes)

	// TODO(dustin): Confirm this size against the specification.

	return encoded, uint32(len(encoded)), nil
}

func (Codec9286UserComment) Decode(valueContext *exifcommon.ValueContext) (value EncodeableValue, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	valueContext.SetUndefinedValueType(exifcommon.TypeByte)

	valueBytes, err := valueContext.ReadBytes()
	log.PanicIf(err)

	if len(valueBytes) < 8 {
		return nil, ErrUnparseableValue
	}

	unknownUc := Tag9286UserComment{
		EncodingType:  TagUndefinedType_9286_UserComment_Encoding_UNDEFINED,
		EncodingBytes: []byte{},
	}

	encoding := valueBytes[:8]
	for encodingIndex, encodingBytes := range TagUndefinedType_9286_UserComment_Encodings {
		if bytes.Compare(encoding, encodingBytes) == 0 {
			uc := Tag9286UserComment{
				EncodingType:  encodingIndex,
				EncodingBytes: valueBytes[8:],
			}

			return uc, nil
		}
	}

	exif9286Logger.Warningf(nil, "User-comment encoding not valid. Returning 'unknown' type (the default).")
	return unknownUc, nil
}

func init() {
	registerEncoder(
		Tag9286UserComment{},
		Codec9286UserComment{})

	registerDecoder(
		exifcommon.IfdExifStandardIfdIdentity.UnindexedString(),
		0x9286,
		Codec9286UserComment{})
}
69 vendor/github.com/dsoprea/go-exif/v3/undefined/exif_A000_flashpix_version.go generated vendored
@@ -1,69 +0,0 @@
package exifundefined

import (
	"encoding/binary"

	"github.com/dsoprea/go-logging"

	"github.com/dsoprea/go-exif/v3/common"
)

type TagA000FlashpixVersion struct {
	FlashpixVersion string
}

func (TagA000FlashpixVersion) EncoderName() string {
	return "CodecA000FlashpixVersion"
}

func (fv TagA000FlashpixVersion) String() string {
	return fv.FlashpixVersion
}

type CodecA000FlashpixVersion struct {
}

func (CodecA000FlashpixVersion) Encode(value interface{}, byteOrder binary.ByteOrder) (encoded []byte, unitCount uint32, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	s, ok := value.(TagA000FlashpixVersion)
	if ok == false {
		log.Panicf("can only encode a TagA000FlashpixVersion")
	}

	return []byte(s.FlashpixVersion), uint32(len(s.FlashpixVersion)), nil
}

func (CodecA000FlashpixVersion) Decode(valueContext *exifcommon.ValueContext) (value EncodeableValue, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	valueContext.SetUndefinedValueType(exifcommon.TypeAsciiNoNul)

	valueString, err := valueContext.ReadAsciiNoNul()
	log.PanicIf(err)

	fv := TagA000FlashpixVersion{
		FlashpixVersion: valueString,
	}

	return fv, nil
}

func init() {
	registerEncoder(
		TagA000FlashpixVersion{},
		CodecA000FlashpixVersion{})

	registerDecoder(
		exifcommon.IfdExifStandardIfdIdentity.UnindexedString(),
		0xa000,
		CodecA000FlashpixVersion{})
}
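Every codec in these files converts panics back into a returned error at the API boundary with a deferred recover, so internal helpers can simply panic. The following standalone sketch (the `decode` helper and its error message are hypothetical, not part of go-exif) illustrates that same panic-to-error convention without the go-logging dependency:

```go
package main

import (
	"errors"
	"fmt"
)

// decode panics internally on bad input; the deferred recover converts
// the panic back into a plain returned error, mirroring the codecs above.
func decode(raw []byte) (s string, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = state.(error)
		}
	}()

	if len(raw) == 0 {
		panic(errors.New("empty value"))
	}

	return string(raw), nil
}

func main() {
	_, err := decode(nil)
	fmt.Println(err) // empty value

	s, err := decode([]byte("0100"))
	fmt.Println(s, err) // 0100 <nil>
}
```

Note that the type assertion `state.(error)` assumes only error values are ever passed to panic, which is the contract the go-exif codecs rely on as well.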
160 vendor/github.com/dsoprea/go-exif/v3/undefined/exif_A20C_spatial_frequency_response.go (generated, vendored)
@@ -1,160 +0,0 @@
package exifundefined

import (
	"bytes"
	"fmt"

	"encoding/binary"

	"github.com/dsoprea/go-logging"

	"github.com/dsoprea/go-exif/v3/common"
)

type TagA20CSpatialFrequencyResponse struct {
	Columns     uint16
	Rows        uint16
	ColumnNames []string
	Values      []exifcommon.Rational
}

func (TagA20CSpatialFrequencyResponse) EncoderName() string {
	return "CodecA20CSpatialFrequencyResponse"
}

func (sfr TagA20CSpatialFrequencyResponse) String() string {
	return fmt.Sprintf("CodecA20CSpatialFrequencyResponse<COLUMNS=(%d) ROWS=(%d)>", sfr.Columns, sfr.Rows)
}

type CodecA20CSpatialFrequencyResponse struct {
}

func (CodecA20CSpatialFrequencyResponse) Encode(value interface{}, byteOrder binary.ByteOrder) (encoded []byte, unitCount uint32, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	// TODO(dustin): Add test.

	sfr, ok := value.(TagA20CSpatialFrequencyResponse)
	if ok == false {
		log.Panicf("can only encode a TagA20CSpatialFrequencyResponse")
	}

	b := new(bytes.Buffer)

	err = binary.Write(b, byteOrder, sfr.Columns)
	log.PanicIf(err)

	err = binary.Write(b, byteOrder, sfr.Rows)
	log.PanicIf(err)

	// Write columns.

	for _, name := range sfr.ColumnNames {
		_, err := b.WriteString(name)
		log.PanicIf(err)

		err = b.WriteByte(0)
		log.PanicIf(err)
	}

	// Write values.

	ve := exifcommon.NewValueEncoder(byteOrder)

	ed, err := ve.Encode(sfr.Values)
	log.PanicIf(err)

	_, err = b.Write(ed.Encoded)
	log.PanicIf(err)

	encoded = b.Bytes()

	// TODO(dustin): Confirm this size against the specification.

	return encoded, uint32(len(encoded)), nil
}

func (CodecA20CSpatialFrequencyResponse) Decode(valueContext *exifcommon.ValueContext) (value EncodeableValue, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	// TODO(dustin): Add test using known good data.

	byteOrder := valueContext.ByteOrder()

	valueContext.SetUndefinedValueType(exifcommon.TypeByte)

	valueBytes, err := valueContext.ReadBytes()
	log.PanicIf(err)

	sfr := TagA20CSpatialFrequencyResponse{}

	sfr.Columns = byteOrder.Uint16(valueBytes[0:2])
	sfr.Rows = byteOrder.Uint16(valueBytes[2:4])

	columnNames := make([]string, sfr.Columns)

	// startAt is where the current column name starts.
	startAt := 4

	// offset is our current position.
	offset := 4

	currentColumnNumber := uint16(0)

	for currentColumnNumber < sfr.Columns {
		if valueBytes[offset] == 0 {
			columnName := string(valueBytes[startAt:offset])
			if len(columnName) == 0 {
				log.Panicf("SFR column (%d) has zero length", currentColumnNumber)
			}

			columnNames[currentColumnNumber] = columnName
			currentColumnNumber++

			offset++
			startAt = offset
			continue
		}

		offset++
	}

	sfr.ColumnNames = columnNames

	rawRationalBytes := valueBytes[offset:]

	rationalSize := exifcommon.TypeRational.Size()
	if len(rawRationalBytes)%rationalSize > 0 {
		log.Panicf("SFR rationals not aligned: (%d) %% (%d) > 0", len(rawRationalBytes), rationalSize)
	}

	rationalCount := len(rawRationalBytes) / rationalSize

	parser := new(exifcommon.Parser)

	items, err := parser.ParseRationals(rawRationalBytes, uint32(rationalCount), byteOrder)
	log.PanicIf(err)

	sfr.Values = items

	return sfr, nil
}

func init() {
	registerEncoder(
		TagA20CSpatialFrequencyResponse{},
		CodecA20CSpatialFrequencyResponse{})

	registerDecoder(
		exifcommon.IfdExifStandardIfdIdentity.UnindexedString(),
		0xa20c,
		CodecA20CSpatialFrequencyResponse{})
}
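The column-name walk in the A20C decoder above is a generic scan for NUL-terminated strings packed into a byte buffer. A minimal standalone sketch of that walk (the `splitNulTerminated` helper name and sample data are illustrative, not from go-exif):

```go
package main

import "fmt"

// splitNulTerminated extracts count NUL-terminated strings from data and
// returns them with the offset of the first byte after the last terminator,
// the same startAt/offset walk the A20C decoder performs over column names.
func splitNulTerminated(data []byte, count int) ([]string, int) {
	names := make([]string, 0, count)
	startAt := 0
	offset := 0

	for len(names) < count {
		if data[offset] == 0 {
			// Terminator found: the name spans [startAt, offset).
			names = append(names, string(data[startAt:offset]))
			offset++
			startAt = offset
			continue
		}

		offset++
	}

	return names, offset
}

func main() {
	// Two names followed by payload bytes, as in an SFR value.
	data := []byte("horizontal\x00vertical\x00\x01\x02")

	names, next := splitNulTerminated(data, 2)
	fmt.Println(names, next) // [horizontal vertical] 20
}
```

Like the original, this panics with an index-out-of-range error if the buffer ends before `count` terminators are seen, so callers must trust the declared column count.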
79 vendor/github.com/dsoprea/go-exif/v3/undefined/exif_A300_file_source.go (generated, vendored)
@@ -1,79 +0,0 @@
package exifundefined

import (
	"fmt"

	"encoding/binary"

	"github.com/dsoprea/go-logging"

	"github.com/dsoprea/go-exif/v3/common"
)

type TagExifA300FileSource uint32

func (TagExifA300FileSource) EncoderName() string {
	return "CodecExifA300FileSource"
}

func (af TagExifA300FileSource) String() string {
	return fmt.Sprintf("0x%08x", uint32(af))
}

const (
	TagUndefinedType_A300_SceneType_Others                   TagExifA300FileSource = 0
	TagUndefinedType_A300_SceneType_ScannerOfTransparentType TagExifA300FileSource = 1
	TagUndefinedType_A300_SceneType_ScannerOfReflexType      TagExifA300FileSource = 2
	TagUndefinedType_A300_SceneType_Dsc                      TagExifA300FileSource = 3
)

type CodecExifA300FileSource struct {
}

func (CodecExifA300FileSource) Encode(value interface{}, byteOrder binary.ByteOrder) (encoded []byte, unitCount uint32, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	st, ok := value.(TagExifA300FileSource)
	if ok == false {
		log.Panicf("can only encode a TagExifA300FileSource")
	}

	ve := exifcommon.NewValueEncoder(byteOrder)

	ed, err := ve.Encode([]uint32{uint32(st)})
	log.PanicIf(err)

	// TODO(dustin): Confirm this size against the specification. It's non-specific about what type it is, but it looks to be no more than a single integer scalar. So, we're assuming it's a LONG.

	return ed.Encoded, 1, nil
}

func (CodecExifA300FileSource) Decode(valueContext *exifcommon.ValueContext) (value EncodeableValue, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	valueContext.SetUndefinedValueType(exifcommon.TypeLong)

	valueLongs, err := valueContext.ReadLongs()
	log.PanicIf(err)

	return TagExifA300FileSource(valueLongs[0]), nil
}

func init() {
	registerEncoder(
		TagExifA300FileSource(0),
		CodecExifA300FileSource{})

	registerDecoder(
		exifcommon.IfdExifStandardIfdIdentity.UnindexedString(),
		0xa300,
		CodecExifA300FileSource{})
}
76 vendor/github.com/dsoprea/go-exif/v3/undefined/exif_A301_scene_type.go (generated, vendored)
@@ -1,76 +0,0 @@
package exifundefined

import (
	"fmt"

	"encoding/binary"

	"github.com/dsoprea/go-logging"

	"github.com/dsoprea/go-exif/v3/common"
)

type TagExifA301SceneType uint32

func (TagExifA301SceneType) EncoderName() string {
	return "CodecExifA301SceneType"
}

func (st TagExifA301SceneType) String() string {
	return fmt.Sprintf("0x%08x", uint32(st))
}

const (
	TagUndefinedType_A301_SceneType_DirectlyPhotographedImage TagExifA301SceneType = 1
)

type CodecExifA301SceneType struct {
}

func (CodecExifA301SceneType) Encode(value interface{}, byteOrder binary.ByteOrder) (encoded []byte, unitCount uint32, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	st, ok := value.(TagExifA301SceneType)
	if ok == false {
		log.Panicf("can only encode a TagExif9101ComponentsConfiguration")
	}

	ve := exifcommon.NewValueEncoder(byteOrder)

	ed, err := ve.Encode([]uint32{uint32(st)})
	log.PanicIf(err)

	// TODO(dustin): Confirm this size against the specification. It's non-specific about what type it is, but it looks to be no more than a single integer scalar. So, we're assuming it's a LONG.

	return ed.Encoded, uint32(int(ed.UnitCount)), nil
}

func (CodecExifA301SceneType) Decode(valueContext *exifcommon.ValueContext) (value EncodeableValue, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	valueContext.SetUndefinedValueType(exifcommon.TypeLong)

	valueLongs, err := valueContext.ReadLongs()
	log.PanicIf(err)

	return TagExifA301SceneType(valueLongs[0]), nil
}

func init() {
	registerEncoder(
		TagExifA301SceneType(0),
		CodecExifA301SceneType{})

	registerDecoder(
		exifcommon.IfdExifStandardIfdIdentity.UnindexedString(),
		0xa301,
		CodecExifA301SceneType{})
}
97 vendor/github.com/dsoprea/go-exif/v3/undefined/exif_A302_cfa_pattern.go (generated, vendored)
@@ -1,97 +0,0 @@
package exifundefined

import (
	"bytes"
	"fmt"

	"encoding/binary"

	"github.com/dsoprea/go-logging"

	"github.com/dsoprea/go-exif/v3/common"
)

type TagA302CfaPattern struct {
	HorizontalRepeat uint16
	VerticalRepeat   uint16
	CfaValue         []byte
}

func (TagA302CfaPattern) EncoderName() string {
	return "CodecA302CfaPattern"
}

func (cp TagA302CfaPattern) String() string {
	return fmt.Sprintf("TagA302CfaPattern<HORZ-REPEAT=(%d) VERT-REPEAT=(%d) CFA-VALUE=(%d)>", cp.HorizontalRepeat, cp.VerticalRepeat, len(cp.CfaValue))
}

type CodecA302CfaPattern struct {
}

func (CodecA302CfaPattern) Encode(value interface{}, byteOrder binary.ByteOrder) (encoded []byte, unitCount uint32, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	// TODO(dustin): Add test.

	cp, ok := value.(TagA302CfaPattern)
	if ok == false {
		log.Panicf("can only encode a TagA302CfaPattern")
	}

	b := new(bytes.Buffer)

	err = binary.Write(b, byteOrder, cp.HorizontalRepeat)
	log.PanicIf(err)

	err = binary.Write(b, byteOrder, cp.VerticalRepeat)
	log.PanicIf(err)

	_, err = b.Write(cp.CfaValue)
	log.PanicIf(err)

	encoded = b.Bytes()

	// TODO(dustin): Confirm this size against the specification.

	return encoded, uint32(len(encoded)), nil
}

func (CodecA302CfaPattern) Decode(valueContext *exifcommon.ValueContext) (value EncodeableValue, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	// TODO(dustin): Add test using known good data.

	valueContext.SetUndefinedValueType(exifcommon.TypeByte)

	valueBytes, err := valueContext.ReadBytes()
	log.PanicIf(err)

	cp := TagA302CfaPattern{}

	cp.HorizontalRepeat = valueContext.ByteOrder().Uint16(valueBytes[0:2])
	cp.VerticalRepeat = valueContext.ByteOrder().Uint16(valueBytes[2:4])

	expectedLength := int(cp.HorizontalRepeat * cp.VerticalRepeat)
	cp.CfaValue = valueBytes[4 : 4+expectedLength]

	return cp, nil
}

func init() {
	registerEncoder(
		TagA302CfaPattern{},
		CodecA302CfaPattern{})

	registerDecoder(
		exifcommon.IfdExifStandardIfdIdentity.UnindexedString(),
		0xa302,
		CodecA302CfaPattern{})
}
69 vendor/github.com/dsoprea/go-exif/v3/undefined/exif_iop_0002_interop_version.go (generated, vendored)
@@ -1,69 +0,0 @@
package exifundefined

import (
	"encoding/binary"

	"github.com/dsoprea/go-logging"

	"github.com/dsoprea/go-exif/v3/common"
)

type Tag0002InteropVersion struct {
	InteropVersion string
}

func (Tag0002InteropVersion) EncoderName() string {
	return "Codec0002InteropVersion"
}

func (iv Tag0002InteropVersion) String() string {
	return iv.InteropVersion
}

type Codec0002InteropVersion struct {
}

func (Codec0002InteropVersion) Encode(value interface{}, byteOrder binary.ByteOrder) (encoded []byte, unitCount uint32, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	s, ok := value.(Tag0002InteropVersion)
	if ok == false {
		log.Panicf("can only encode a Tag0002InteropVersion")
	}

	return []byte(s.InteropVersion), uint32(len(s.InteropVersion)), nil
}

func (Codec0002InteropVersion) Decode(valueContext *exifcommon.ValueContext) (value EncodeableValue, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	valueContext.SetUndefinedValueType(exifcommon.TypeAsciiNoNul)

	valueString, err := valueContext.ReadAsciiNoNul()
	log.PanicIf(err)

	iv := Tag0002InteropVersion{
		InteropVersion: valueString,
	}

	return iv, nil
}

func init() {
	registerEncoder(
		Tag0002InteropVersion{},
		Codec0002InteropVersion{})

	registerDecoder(
		exifcommon.IfdExifIopStandardIfdIdentity.UnindexedString(),
		0x0002,
		Codec0002InteropVersion{})
}
65 vendor/github.com/dsoprea/go-exif/v3/undefined/gps_001B_gps_processing_method.go (generated, vendored)
@@ -1,65 +0,0 @@
package exifundefined

import (
	"encoding/binary"

	"github.com/dsoprea/go-logging"

	"github.com/dsoprea/go-exif/v3/common"
)

type Tag001BGPSProcessingMethod struct {
	string
}

func (Tag001BGPSProcessingMethod) EncoderName() string {
	return "Codec001BGPSProcessingMethod"
}

func (gpm Tag001BGPSProcessingMethod) String() string {
	return gpm.string
}

type Codec001BGPSProcessingMethod struct {
}

func (Codec001BGPSProcessingMethod) Encode(value interface{}, byteOrder binary.ByteOrder) (encoded []byte, unitCount uint32, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	s, ok := value.(Tag001BGPSProcessingMethod)
	if ok == false {
		log.Panicf("can only encode a Tag001BGPSProcessingMethod")
	}

	return []byte(s.string), uint32(len(s.string)), nil
}

func (Codec001BGPSProcessingMethod) Decode(valueContext *exifcommon.ValueContext) (value EncodeableValue, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	valueContext.SetUndefinedValueType(exifcommon.TypeAsciiNoNul)

	valueString, err := valueContext.ReadAsciiNoNul()
	log.PanicIf(err)

	return Tag001BGPSProcessingMethod{valueString}, nil
}

func init() {
	registerEncoder(
		Tag001BGPSProcessingMethod{},
		Codec001BGPSProcessingMethod{})

	registerDecoder(
		exifcommon.IfdGpsInfoStandardIfdIdentity.UnindexedString(),
		0x001b,
		Codec001BGPSProcessingMethod{})
}
65 vendor/github.com/dsoprea/go-exif/v3/undefined/gps_001C_gps_area_information.go (generated, vendored)
@@ -1,65 +0,0 @@
package exifundefined

import (
	"encoding/binary"

	"github.com/dsoprea/go-logging"

	"github.com/dsoprea/go-exif/v3/common"
)

type Tag001CGPSAreaInformation struct {
	string
}

func (Tag001CGPSAreaInformation) EncoderName() string {
	return "Codec001CGPSAreaInformation"
}

func (gai Tag001CGPSAreaInformation) String() string {
	return gai.string
}

type Codec001CGPSAreaInformation struct {
}

func (Codec001CGPSAreaInformation) Encode(value interface{}, byteOrder binary.ByteOrder) (encoded []byte, unitCount uint32, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	s, ok := value.(Tag001CGPSAreaInformation)
	if ok == false {
		log.Panicf("can only encode a Tag001CGPSAreaInformation")
	}

	return []byte(s.string), uint32(len(s.string)), nil
}

func (Codec001CGPSAreaInformation) Decode(valueContext *exifcommon.ValueContext) (value EncodeableValue, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	valueContext.SetUndefinedValueType(exifcommon.TypeAsciiNoNul)

	valueString, err := valueContext.ReadAsciiNoNul()
	log.PanicIf(err)

	return Tag001CGPSAreaInformation{valueString}, nil
}

func init() {
	registerEncoder(
		Tag001CGPSAreaInformation{},
		Codec001CGPSAreaInformation{})

	registerDecoder(
		exifcommon.IfdGpsInfoStandardIfdIdentity.UnindexedString(),
		0x001c,
		Codec001CGPSAreaInformation{})
}
42 vendor/github.com/dsoprea/go-exif/v3/undefined/registration.go (generated, vendored)
@@ -1,42 +0,0 @@
package exifundefined

import (
	"github.com/dsoprea/go-logging"
)

// UndefinedTagHandle defines one undefined-type tag with a corresponding
// decoder.
type UndefinedTagHandle struct {
	IfdPath string
	TagId   uint16
}

func registerEncoder(entity EncodeableValue, encoder UndefinedValueEncoder) {
	typeName := entity.EncoderName()

	_, found := encoders[typeName]
	if found == true {
		log.Panicf("encoder already registered: %v", typeName)
	}

	encoders[typeName] = encoder
}

func registerDecoder(ifdPath string, tagId uint16, decoder UndefinedValueDecoder) {
	uth := UndefinedTagHandle{
		IfdPath: ifdPath,
		TagId:   tagId,
	}

	_, found := decoders[uth]
	if found == true {
		log.Panicf("decoder already registered: %v", uth)
	}

	decoders[uth] = decoder
}

var (
	encoders = make(map[string]UndefinedValueEncoder)
	decoders = make(map[UndefinedTagHandle]UndefinedValueDecoder)
)
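The registration scheme above lets each codec file self-register from its `init` function into package-level maps, keyed by (IFD path, tag ID) for decoders. A self-contained sketch of the same pattern (the `tagHandle`, `asciiDecoder`, and sample IFD path are hypothetical stand-ins, not go-exif types):

```go
package main

import "fmt"

// tagHandle mirrors the (IFD path, tag ID) key that registerDecoder uses.
type tagHandle struct {
	ifdPath string
	tagID   uint16
}

type decoder interface {
	Decode(raw []byte) (string, error)
}

var decoders = map[tagHandle]decoder{}

// registerDecoder panics on double registration, like the original.
func registerDecoder(ifdPath string, tagID uint16, d decoder) {
	h := tagHandle{ifdPath, tagID}
	if _, found := decoders[h]; found {
		panic(fmt.Sprintf("decoder already registered: %v", h))
	}

	decoders[h] = d
}

// asciiDecoder is a hypothetical decoder for an ASCII-valued undefined tag.
type asciiDecoder struct{}

func (asciiDecoder) Decode(raw []byte) (string, error) {
	return string(raw), nil
}

func main() {
	registerDecoder("IFD/Exif", 0x9286, asciiDecoder{})

	d := decoders[tagHandle{"IFD/Exif", 0x9286}]
	s, _ := d.Decode([]byte("ASCII comment"))
	fmt.Println(s) // ASCII comment
}
```

Panicking on duplicate keys keeps the registry honest at startup: a copy-pasted `init` that reuses a tag ID fails immediately rather than silently shadowing an existing decoder.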
44 vendor/github.com/dsoprea/go-exif/v3/undefined/type.go (generated, vendored)
@@ -1,44 +0,0 @@
package exifundefined

import (
	"errors"

	"encoding/binary"

	"github.com/dsoprea/go-exif/v3/common"
)

const (
	// UnparseableUnknownTagValuePlaceholder is the string to use for an unknown
	// undefined tag.
	UnparseableUnknownTagValuePlaceholder = "!UNKNOWN"

	// UnparseableHandledTagValuePlaceholder is the string to use for a known
	// value that is not parseable.
	UnparseableHandledTagValuePlaceholder = "!MALFORMED"
)

var (
	// ErrUnparseableValue is the error for a value that we should have been
	// able to parse but were not able to.
	ErrUnparseableValue = errors.New("unparseable undefined tag")
)

// UndefinedValueEncoder knows how to encode an undefined-type tag's value to
// bytes.
type UndefinedValueEncoder interface {
	Encode(value interface{}, byteOrder binary.ByteOrder) (encoded []byte, unitCount uint32, err error)
}

// EncodeableValue wraps a value with the information that will be needed to re-
// encode it later.
type EncodeableValue interface {
	EncoderName() string
	String() string
}

// UndefinedValueDecoder knows how to decode an undefined-type tag's value from
// bytes.
type UndefinedValueDecoder interface {
	Decode(valueContext *exifcommon.ValueContext) (value EncodeableValue, err error)
}
237 vendor/github.com/dsoprea/go-exif/v3/utility.go (generated, vendored)
@@ -1,237 +0,0 @@
package exif

import (
	"fmt"
	"io"
	"math"

	"github.com/dsoprea/go-logging"
	"github.com/dsoprea/go-utility/v2/filesystem"

	"github.com/dsoprea/go-exif/v3/common"
	"github.com/dsoprea/go-exif/v3/undefined"
)

var (
	utilityLogger = log.NewLogger("exif.utility")
)

// ExifTag is one simple representation of a tag in a flat list of all of them.
type ExifTag struct {
	// IfdPath is the fully-qualified IFD path (even though it is not named as
	// such).
	IfdPath string `json:"ifd_path"`

	// TagId is the tag-ID.
	TagId uint16 `json:"id"`

	// TagName is the tag-name. This is never empty.
	TagName string `json:"name"`

	// UnitCount is the recorded number of units constituting the value.
	UnitCount uint32 `json:"unit_count"`

	// TagTypeId is the type-ID.
	TagTypeId exifcommon.TagTypePrimitive `json:"type_id"`

	// TagTypeName is the type name.
	TagTypeName string `json:"type_name"`

	// Value is the decoded value.
	Value interface{} `json:"value"`

	// ValueBytes is the raw, encoded value.
	ValueBytes []byte `json:"value_bytes"`

	// FormattedFirst is the human representation of the first value (tag
	// values are always an array).
	FormattedFirst string `json:"formatted_first"`

	// Formatted is the human representation of the complete value.
	Formatted string `json:"formatted"`

	// ChildIfdPath is the name of the child IFD this tag represents (if it
	// represents any). Otherwise, this is empty.
	ChildIfdPath string `json:"child_ifd_path"`
}

// String returns a string representation.
func (et ExifTag) String() string {
	return fmt.Sprintf(
		"ExifTag<"+
			"IFD-PATH=[%s] "+
			"TAG-ID=(0x%02x) "+
			"TAG-NAME=[%s] "+
			"TAG-TYPE=[%s] "+
			"VALUE=[%v] "+
			"VALUE-BYTES=(%d) "+
			"CHILD-IFD-PATH=[%s]",
		et.IfdPath, et.TagId, et.TagName, et.TagTypeName, et.FormattedFirst,
		len(et.ValueBytes), et.ChildIfdPath)
}

// GetFlatExifData returns a simple, flat representation of all tags.
func GetFlatExifData(exifData []byte, so *ScanOptions) (exifTags []ExifTag, med *MiscellaneousExifData, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	sb := rifs.NewSeekableBufferWithBytes(exifData)

	exifTags, med, err = getFlatExifDataUniversalSearchWithReadSeeker(sb, so, false)
	log.PanicIf(err)

	return exifTags, med, nil
}

// RELEASE(dustin): GetFlatExifDataUniversalSearch is a kludge to allow universal tag searching in a backwards-compatible manner. For the next release, undo this and simply add the flag to GetFlatExifData.

// GetFlatExifDataUniversalSearch returns a simple, flat representation of all
// tags.
func GetFlatExifDataUniversalSearch(exifData []byte, so *ScanOptions, doUniversalSearch bool) (exifTags []ExifTag, med *MiscellaneousExifData, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	sb := rifs.NewSeekableBufferWithBytes(exifData)

	exifTags, med, err = getFlatExifDataUniversalSearchWithReadSeeker(sb, so, doUniversalSearch)
	log.PanicIf(err)

	return exifTags, med, nil
}

// RELEASE(dustin): GetFlatExifDataUniversalSearchWithReadSeeker is a kludge to allow using a ReadSeeker in a backwards-compatible manner. For the next release, drop this and refactor GetFlatExifDataUniversalSearch to take a ReadSeeker.

// GetFlatExifDataUniversalSearchWithReadSeeker returns a simple, flat
// representation of all tags given a ReadSeeker.
func GetFlatExifDataUniversalSearchWithReadSeeker(rs io.ReadSeeker, so *ScanOptions, doUniversalSearch bool) (exifTags []ExifTag, med *MiscellaneousExifData, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	exifTags, med, err = getFlatExifDataUniversalSearchWithReadSeeker(rs, so, doUniversalSearch)
	log.PanicIf(err)

	return exifTags, med, nil
}

// getFlatExifDataUniversalSearchWithReadSeeker returns a simple, flat
// representation of all tags given a ReadSeeker.
func getFlatExifDataUniversalSearchWithReadSeeker(rs io.ReadSeeker, so *ScanOptions, doUniversalSearch bool) (exifTags []ExifTag, med *MiscellaneousExifData, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	headerData := make([]byte, ExifSignatureLength)
	if _, err = io.ReadFull(rs, headerData); err != nil {
		if err == io.EOF {
			return nil, nil, err
		}

		log.Panic(err)
	}

	eh, err := ParseExifHeader(headerData)
	log.PanicIf(err)

	im, err := exifcommon.NewIfdMappingWithStandard()
	log.PanicIf(err)

	ti := NewTagIndex()

	if doUniversalSearch == true {
		ti.SetUniversalSearch(true)
	}

	ebs := NewExifReadSeeker(rs)
	ie := NewIfdEnumerate(im, ti, ebs, eh.ByteOrder)

	exifTags = make([]ExifTag, 0)

	visitor := func(ite *IfdTagEntry) (err error) {
		// This encodes down to base64. Since this is an example tool and we do
		// not expect to ever decode the output, we are not worried about
		// specifically base64-encoding it in order to have a measure of
		// control.
		valueBytes, err := ite.GetRawBytes()
		if err != nil {
			if err == exifundefined.ErrUnparseableValue {
				return nil
			}

			log.Panic(err)
		}

		value, err := ite.Value()
		if err != nil {
			if err == exifcommon.ErrUnhandledUndefinedTypedTag {
				value = exifundefined.UnparseableUnknownTagValuePlaceholder
			} else if log.Is(err, exifcommon.ErrParseFail) == true {
				utilityLogger.Warningf(nil,
					"Could not parse value for tag [%s] (%04x) [%s].",
					ite.IfdPath(), ite.TagId(), ite.TagName())

				return nil
			} else {
				log.Panic(err)
			}
		}

		et := ExifTag{
			IfdPath:      ite.IfdPath(),
			TagId:        ite.TagId(),
			TagName:      ite.TagName(),
			UnitCount:    ite.UnitCount(),
			TagTypeId:    ite.TagType(),
			TagTypeName:  ite.TagType().String(),
			Value:        value,
			ValueBytes:   valueBytes,
			ChildIfdPath: ite.ChildIfdPath(),
		}

		et.Formatted, err = ite.Format()
		log.PanicIf(err)

		et.FormattedFirst, err = ite.FormatFirst()
		log.PanicIf(err)

		exifTags = append(exifTags, et)

		return nil
	}

	med, err = ie.Scan(exifcommon.IfdStandardIfdIdentity, eh.FirstIfdOffset, visitor, nil)
	log.PanicIf(err)

	return exifTags, med, nil
}

// GpsDegreesEquals returns true if the two `GpsDegrees` are identical.
func GpsDegreesEquals(gi1, gi2 GpsDegrees) bool {
	if gi2.Orientation != gi1.Orientation {
		return false
	}

	degreesRightBound := math.Nextafter(gi1.Degrees, gi1.Degrees+1)
	minutesRightBound := math.Nextafter(gi1.Minutes, gi1.Minutes+1)
	secondsRightBound := math.Nextafter(gi1.Seconds, gi1.Seconds+1)

	if gi2.Degrees < gi1.Degrees || gi2.Degrees >= degreesRightBound {
		return false
	} else if gi2.Minutes < gi1.Minutes || gi2.Minutes >= minutesRightBound {
		return false
	} else if gi2.Seconds < gi1.Seconds || gi2.Seconds >= secondsRightBound {
		return false
	}

	return true
}
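GpsDegreesEquals above bounds each component to the half-open interval from the value up to (but excluding) the next representable float64, via math.Nextafter. That technique can be shown in isolation; the `almostEqual` helper below is an illustrative stand-in, not a go-exif function:

```go
package main

import (
	"fmt"
	"math"
)

// almostEqual reports whether b lies in [a, nextafter(a)), the same
// per-component bound GpsDegreesEquals applies. Since nextafter(a) is the
// very next representable float64 above a, this effectively demands
// bit-exact equality while tolerating no drift at all.
func almostEqual(a, b float64) bool {
	rightBound := math.Nextafter(a, a+1)
	return b >= a && b < rightBound
}

func main() {
	d := 12.345
	fmt.Println(almostEqual(d, d))         // true
	fmt.Println(almostEqual(d, d+0.00001)) // false
}
```

The half-open bound avoids comparing floats with `==` directly (which some linters flag) while still rejecting any value that differs by even one representable step.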
0 vendor/github.com/dsoprea/go-iptc/.MODULE_ROOT (generated, vendored)
14 vendor/github.com/dsoprea/go-iptc/.travis.yml (generated, vendored)
@@ -1,14 +0,0 @@
language: go
go:
- master
- stable
- "1.13"
- "1.12"
env:
- GO111MODULE=on
install:
- go get -t ./...
- go get github.com/mattn/goveralls
script:
- go test -v ./...
- goveralls -v -service=travis-ci
21 vendor/github.com/dsoprea/go-iptc/LICENSE (generated, vendored)
@@ -1,21 +0,0 @@
MIT License

Copyright (c) 2020 Dustin Oprea

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
8
vendor/github.com/dsoprea/go-iptc/README.md
generated
vendored
@@ -1,8 +0,0 @@
[](https://travis-ci.org/dsoprea/go-iptc)
[](https://coveralls.io/github/dsoprea/go-iptc?branch=master)
[](https://goreportcard.com/report/github.com/dsoprea/go-iptc)
[](https://godoc.org/github.com/dsoprea/go-iptc)

# Overview

This project provides functionality to parse a series of IPTC records/datasets. It also provides name resolution, but other constraints/validation is not yet implemented (though there is structure present that can accommodate this when desired/required).
101
vendor/github.com/dsoprea/go-iptc/standard.go
generated
vendored
@@ -1,101 +0,0 @@
package iptc

import (
	"errors"
)

// StreamTagInfo encapsulates the properties of each tag.
type StreamTagInfo struct {
	// Description is the human-readable description of the tag.
	Description string
}

var (
	standardTags = map[StreamTagKey]StreamTagInfo{
		{1, 120}: {"ARM Identifier"},

		{1, 122}: {"ARM Version"},
		{2, 0}:   {"Record Version"},
		{2, 3}:   {"Object Type Reference"},
		{2, 4}:   {"Object Attribute Reference"},
		{2, 5}:   {"Object Name"},
		{2, 7}:   {"Edit Status"},
		{2, 8}:   {"Editorial Update"},
		{2, 10}:  {"Urgency"},
		{2, 12}:  {"Subject Reference"},
		{2, 15}:  {"Category"},
		{2, 20}:  {"Supplemental Category"},
		{2, 22}:  {"Fixture Identifier"},
		{2, 25}:  {"Keywords"},
		{2, 26}:  {"Content Location Code"},
		{2, 27}:  {"Content Location Name"},
		{2, 30}:  {"Release Date"},
		{2, 35}:  {"Release Time"},
		{2, 37}:  {"Expiration Date"},
		{2, 38}:  {"Expiration Time"},
		{2, 40}:  {"Special Instructions"},
		{2, 42}:  {"Action Advised"},
		{2, 45}:  {"Reference Service"},
		{2, 47}:  {"Reference Date"},
		{2, 50}:  {"Reference Number"},
		{2, 55}:  {"Date Created"},
		{2, 60}:  {"Time Created"},
		{2, 62}:  {"Digital Creation Date"},
		{2, 63}:  {"Digital Creation Time"},
		{2, 65}:  {"Originating Program"},
		{2, 70}:  {"Program Version"},
		{2, 75}:  {"Object Cycle"},
		{2, 80}:  {"By-line"},
		{2, 85}:  {"By-line Title"},
		{2, 90}:  {"City"},
		{2, 92}:  {"Sublocation"},
		{2, 95}:  {"Province/State"},
		{2, 100}: {"Country/Primary Location Code"},
		{2, 101}: {"Country/Primary Location Name"},
		{2, 103}: {"Original Transmission Reference"},
		{2, 105}: {"Headline"},
		{2, 110}: {"Credit"},
		{2, 115}: {"Source"},
		{2, 116}: {"Copyright Notice"},
		{2, 118}: {"Contact"},
		{2, 120}: {"Caption/Abstract"},
		{2, 122}: {"Writer/Editor"},
		{2, 125}: {"Rasterized Caption"},
		{2, 130}: {"Image Type"},
		{2, 131}: {"Image Orientation"},
		{2, 135}: {"Language Identifier"},
		{2, 150}: {"Audio Type"},
		{2, 151}: {"Audio Sampling Rate"},
		{2, 152}: {"Audio Sampling Resolution"},
		{2, 153}: {"Audio Duration"},
		{2, 154}: {"Audio Outcue"},
		{2, 200}: {"ObjectData Preview File Format"},
		{2, 201}: {"ObjectData Preview File Format Version"},
		{2, 202}: {"ObjectData Preview Data"},
		{7, 10}:  {"Size Mode"},
		{7, 20}:  {"Max Subfile Size"},
		{7, 90}:  {"ObjectData Size Announced"},
		{7, 95}:  {"Maximum ObjectData Size"},
		{8, 10}:  {"Subfile"},
		{9, 10}:  {"Confirmed ObjectData Size"},
	}
)

var (
	// ErrTagNotStandard indicates that the given tag is not known among the
	// documented standard set.
	ErrTagNotStandard = errors.New("not a standard tag")
)

// GetTagInfo return the info for the given tag. Returns ErrTagNotStandard if
// not known.
func GetTagInfo(recordNumber, datasetNumber int) (sti StreamTagInfo, err error) {
	stk := StreamTagKey{uint8(recordNumber), uint8(datasetNumber)}

	sti, found := standardTags[stk]
	if found == false {
		return sti, ErrTagNotStandard
	}

	return sti, nil
}
277
vendor/github.com/dsoprea/go-iptc/tag.go
generated
vendored
@@ -1,277 +0,0 @@
package iptc

import (
	"errors"
	"fmt"
	"io"
	"strings"
	"unicode"

	"encoding/binary"

	"github.com/dsoprea/go-logging"
)

var (
	// TODO(dustin): We're still not sure if this is the right endianness. No search to IPTC or IIM seems to state one or the other.

	// DefaultEncoding is the standard encoding for the IPTC format.
	defaultEncoding = binary.BigEndian
)

var (
	// ErrInvalidTagMarker indicates that the tag can not be parsed because the
	// tag boundary marker is not the expected value.
	ErrInvalidTagMarker = errors.New("invalid tag marker")
)

// Tag describes one tag read from the stream.
type Tag struct {
	recordNumber  uint8
	datasetNumber uint8
	dataSize      uint64
}

// String expresses state as a string.
func (tag *Tag) String() string {
	return fmt.Sprintf(
		"Tag<DATASET=(%d:%d) DATA-SIZE=(%d)>",
		tag.recordNumber, tag.datasetNumber, tag.dataSize)
}

// DecodeTag parses one tag from the stream.
func DecodeTag(r io.Reader) (tag Tag, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	tagMarker := uint8(0)
	err = binary.Read(r, defaultEncoding, &tagMarker)
	if err != nil {
		if err == io.EOF {
			return tag, err
		}

		log.Panic(err)
	}

	if tagMarker != 0x1c {
		return tag, ErrInvalidTagMarker
	}

	recordNumber := uint8(0)
	err = binary.Read(r, defaultEncoding, &recordNumber)
	log.PanicIf(err)

	datasetNumber := uint8(0)
	err = binary.Read(r, defaultEncoding, &datasetNumber)
	log.PanicIf(err)

	dataSize16Raw := uint16(0)
	err = binary.Read(r, defaultEncoding, &dataSize16Raw)
	log.PanicIf(err)

	var dataSize uint64

	if dataSize16Raw < 32768 {
		// We only had 16-bits (has the MSB set to (0)).
		dataSize = uint64(dataSize16Raw)
	} else {
		// This field is just the length of the length (has the MSB set to (1)).

		// Clear the MSB.
		lengthLength := dataSize16Raw & 32767

		if lengthLength == 4 {
			dataSize32Raw := uint32(0)
			err := binary.Read(r, defaultEncoding, &dataSize32Raw)
			log.PanicIf(err)

			dataSize = uint64(dataSize32Raw)
		} else if lengthLength == 8 {
			err := binary.Read(r, defaultEncoding, &dataSize)
			log.PanicIf(err)
		} else {
			// No specific sizes or limits are specified in the specification
			// so we need to impose our own limits in order to implement.

			log.Panicf("extended data-set tag size is not supported: (%d)", lengthLength)
		}
	}

	tag = Tag{
		recordNumber:  recordNumber,
		datasetNumber: datasetNumber,
		dataSize:      dataSize,
	}

	return tag, nil
}

// StreamTagKey is a convenience type that lets us key our index with a high-
// level type.
type StreamTagKey struct {
	// RecordNumber is the major classification of the dataset.
	RecordNumber uint8

	// DatasetNumber is the minor classification of the dataset.
	DatasetNumber uint8
}

// String returns a descriptive string.
func (stk StreamTagKey) String() string {
	return fmt.Sprintf("%d:%d", stk.RecordNumber, stk.DatasetNumber)
}

// TagData is a convenience wrapper around a byte-slice.
type TagData []byte

// IsPrintable returns true if all characters are printable.
func (tg TagData) IsPrintable() bool {
	for _, b := range tg {
		r := rune(b)

		// Newline characters aren't considered printable.
		if r == 0x0d || r == 0x0a {
			continue
		}

		if unicode.IsGraphic(r) == false || unicode.IsPrint(r) == false {
			return false
		}
	}

	return true
}

// String returns a descriptive string. If the data doesn't include any non-
// printable characters, it will include the value itself.
func (tg TagData) String() string {
	if tg.IsPrintable() == true {
		return string(tg)
	}

	return fmt.Sprintf("BINARY<(%d) bytes>", len(tg))
}

// ParsedTags is the complete, unordered set of tags parsed from the stream.
type ParsedTags map[StreamTagKey][]TagData

// ParseStream parses a serial sequence of tags and tag data out of the stream.
func ParseStream(r io.Reader) (tags map[StreamTagKey][]TagData, err error) {
	defer func() {
		if state := recover(); state != nil {
			err = log.Wrap(state.(error))
		}
	}()

	tags = make(ParsedTags)

	for {
		tag, err := DecodeTag(r)
		if err != nil {
			if err == io.EOF {
				break
			}

			log.Panic(err)
		}

		raw := make([]byte, tag.dataSize)

		_, err = io.ReadFull(r, raw)
		log.PanicIf(err)

		data := TagData(raw)

		stk := StreamTagKey{
			RecordNumber:  tag.recordNumber,
			DatasetNumber: tag.datasetNumber,
		}

		if existing, found := tags[stk]; found == true {
			tags[stk] = append(existing, data)
		} else {
			tags[stk] = []TagData{data}
		}
	}

	return tags, nil
}

// GetSimpleDictionaryFromParsedTags returns a dictionary of tag names to tag
// values, where all values are strings and any tag that had a non-printable
// value is omitted. We will also only return the first value, therefore
// dropping any follow-up values for repeatable tags. This will ignore non-
// standard tags. This will trim whitespace from the ends of strings.
//
// This is a convenience function for quickly displaying only the summary IPTC
// metadata that a user might actually be interested in at first glance.
func GetSimpleDictionaryFromParsedTags(pt ParsedTags) (distilled map[string]string) {
	distilled = make(map[string]string)

	for stk, dataSlice := range pt {
		sti, err := GetTagInfo(int(stk.RecordNumber), int(stk.DatasetNumber))
		if err != nil {
			if err == ErrTagNotStandard {
				continue
			} else {
				log.Panic(err)
			}
		}

		data := dataSlice[0]

		if data.IsPrintable() == false {
			continue
		}

		// TODO(dustin): Trim leading whitespace, too.
		distilled[sti.Description] = strings.Trim(string(data), "\r\n")
	}

	return distilled
}

// GetDictionaryFromParsedTags returns all tags. It will keep non-printable
// values, though will not print a placeholder instead. This will keep non-
// standard tags (and print the fully-qualified dataset ID rather than the
// name). It will keep repeated values (with the counter value appended to the
// end).
func GetDictionaryFromParsedTags(pt ParsedTags) (distilled map[string]string) {
	distilled = make(map[string]string)
	for stk, dataSlice := range pt {
		var keyPhrase string

		sti, err := GetTagInfo(int(stk.RecordNumber), int(stk.DatasetNumber))
		if err != nil {
			if err == ErrTagNotStandard {
				keyPhrase = fmt.Sprintf("%s (not a standard tag)", stk.String())
			} else {
				log.Panic(err)
			}
		} else {
			keyPhrase = sti.Description
		}

		for i, data := range dataSlice {
			currentKeyPhrase := keyPhrase
			if len(dataSlice) > 1 {
				currentKeyPhrase = fmt.Sprintf("%s (%d)", currentKeyPhrase, i+1)
			}

			var presentable string
			if data.IsPrintable() == false {
				presentable = fmt.Sprintf("[BINARY] %s", DumpBytesToString(data))
			} else {
				presentable = string(data)
			}

			distilled[currentKeyPhrase] = presentable
		}
	}

	return distilled
}
Some files were not shown because too many files have changed in this diff.