Add generated file

This PR adds generated files under pkg/client and the vendor folder.
This commit is contained in:
xing-yang
2018-07-12 10:55:15 -07:00
parent 36b1de0341
commit e213d1890d
17729 changed files with 5090889 additions and 0 deletions

vendor/golang.org/x/tools/go/packages/doc.go generated vendored Normal file

@@ -0,0 +1,311 @@
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
/*
Package packages provides information about Go packages,
such as their path, source files, and imports.
It can optionally load, parse, and type-check the source files of a
package, and obtain type information for their dependencies either by
loading export data files produced by the Go compiler or by
recursively loading dependencies from source code.
THIS INTERFACE IS EXPERIMENTAL AND IS LIKELY TO CHANGE.
This package is intended to replace golang.org/x/tools/go/loader.
It provides a simpler interface to the same functionality and serves
as a foundation for analysis tools that work with 'go build',
including its support for versioned packages,
and also with alternative build systems such as Bazel and Blaze.
Its primary operation is to load packages through
the Metadata, TypeCheck, and WholeProgram functions,
which accept a list of string arguments that denote
one or more packages according to the conventions
of the underlying build system.
For example, in a 'go build' workspace,
they may be a list of package names,
or relative directory names,
or even an ad-hoc list of source files:
fmt
encoding/json
./json
a.go b.go
For a Bazel project, the arguments use Bazel's package notation:
@repo//project:target
//project:target
:target
target
An application that loads packages can thus pass its command-line
arguments directly to the loading functions and it will integrate with the
usual conventions for that project.
The result of a call to a loading function is a set of Package
objects describing the packages denoted by the arguments.
These "initial" packages are in fact the roots of a graph of Packages,
the import graph, that includes complete transitive dependencies.
Clients may traverse the import graph by following the edges in the
Package.Imports map, which relates the import paths that appear in the
package's source files to the packages they import.
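For example, given the initial packages pkgs returned by one of the
loading functions, a client could print every package in the graph
like this (a minimal sketch; the package-level All function builds a
similar map keyed by ID):
	seen := make(map[*packages.Package]bool)
	var visit func(p *packages.Package)
	visit = func(p *packages.Package) {
		if seen[p] {
			return
		}
		seen[p] = true
		fmt.Println(p.ID)
		for _, imp := range p.Imports {
			visit(imp)
		}
	}
	for _, p := range pkgs {
		visit(p)
	}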
Each package has three kinds of name: ID, PkgPath, and Name.
A package's ID is an unspecified identifier that uniquely
identifies it throughout the workspace, and thus may be used as a key in
a map of packages. Clients should not interpret this string, no matter
how intelligible it looks, as its structure varies across build systems.
A package's PkgPath is the name by which the package is known to the
compiler, linker, and runtime: it is the string returned by
reflect.Type.PkgPath or fmt.Sprintf("%T", x). The PkgPath is not
necessarily unique throughout the workspace; for example, an in-package
test has the same PkgPath as the package under test.
A package's Name is the identifier that appears in the "package"
declaration at the start of each of its source files,
and is the name declared when importing it into another file.
A package whose Name is "main" is linked as an executable.
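For example, under 'go build', the in-package test of fmt might be reported as:
	ID      "fmt [fmt.test]"   (illustrative only; ID structure varies by build system)
	PkgPath "fmt"
	Name    "fmt"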
The loader's three entry points, Metadata, TypeCheck, and
WholeProgram, provide increasing levels of detail.
Metadata returns only a description of each package,
its source files and imports.
Some build systems permit build steps to generate
Go source files that are then compiled.
The Packages describing such a program report
the locations of the generated files.
The process of loading packages invokes the
underlying build system to ensure that these
files are present and up-to-date.
Although 'go build' does not in general allow code generation,
it does in a limited form in its support for cgo.
For a package whose source files import "C", subjecting them to cgo
preprocessing, the loader reports the location of the pure-Go source
files generated by cgo. This too may entail a partial build.
Cgo processing is disabled for Metadata queries,
or when the DisableCgo option is set.
TypeCheck additionally loads, parses, and type-checks
the source files of the initial packages,
and exposes their syntax trees and type information.
Type information for dependencies of the initial
packages is obtained not from Go source code but from
compiler-generated export data files.
Again, loading invokes the underlying build system to
ensure that these files are present and up-to-date.
WholeProgram loads complete type information about
the initial packages and all of their transitive dependencies.
Example:
pkgs, err := packages.TypeCheck(nil, flag.Args()...)
if err != nil { ... }
for _, pkg := range pkgs {
...
}
*/
package packages // import "golang.org/x/tools/go/packages"
/*
Motivation and design considerations
The new package's design solves problems addressed by two existing
packages: go/build, which locates and describes packages, and
golang.org/x/tools/go/loader, which loads, parses and type-checks them.
The go/build.Package structure encodes too much of the 'go build' way
of organizing projects, leaving us in need of a data type that describes a
package of Go source code independent of the underlying build system.
We wanted something that works equally well with go build and vgo, and
also other build systems such as Bazel and Blaze, making it possible to
construct analysis tools that work in all these environments.
Tools such as errcheck and staticcheck were essentially unavailable to
the Go community at Google, and some of Google's internal tools for Go
are unavailable externally.
This new package provides a uniform way to obtain package metadata by
querying each of these build systems, optionally supporting their
preferred command-line notations for packages, so that tools integrate
neatly with users' build environments. The Metadata query function
executes an external query tool appropriate to the current workspace.
Loading packages always returns the complete import graph "all the way down",
even if all you want is information about a single package, because the query
mechanisms of all the build systems we currently support ({go,vgo} list, and
blaze/bazel aspect-based query) cannot provide detailed information
about one package without visiting all its dependencies too, so there is
no additional asymptotic cost to providing transitive information.
(This property might not be true of a hypothetical 5th build system.)
This package provides no parse-but-don't-typecheck operation because most tools
that need only untyped syntax (such as gofmt, goimports, and golint)
seem not to care about any files other than the ones they are directly
instructed to look at. Also, it is trivial for a client to supplement
this functionality on top of a Metadata query.
In calls to TypeCheck, all initial packages, and any package that
transitively depends on one of them, must be loaded from source.
Consider A->B->C->D->E: if A,C are initial, A,B,C must be loaded from
source; D may be loaded from export data, and E may not be loaded at all
(though it's possible that D's export data mentions it, so a
types.Package may be created for it and exposed.)
The old loader had a feature to suppress type-checking of function
bodies on a per-package basis, primarily intended to reduce the work of
obtaining type information for imported packages. Now that imports are
satisfied by export data, the optimization no longer seems necessary.
Despite some early attempts, the old loader did not exploit export data,
instead always using the equivalent of WholeProgram mode. This was due
to the complexity of mixing source and export data packages (now
resolved by the upward traversal mentioned above), and because export data
files were nearly always missing or stale. Now that 'go build' supports
caching, all the underlying build systems can guarantee to produce
export data in a reasonable (amortized) time.
Packages that are part of a test are marked IsTest=true.
Such packages include in-package tests, external tests,
and the test "main" packages synthesized by the build system.
The latter packages are reported as first-class packages,
avoiding the need for clients (such as go/ssa) to reinvent this
generation logic.
One way in which go/packages is simpler than the old loader is in its
treatment of in-package tests. In-package tests are packages that
consist of all the files of the library under test, plus the test files.
The old loader constructed in-package tests by a two-phase process of
mutation called "augmentation": first it would construct and type check
all the ordinary library packages and type-check the packages that
depend on them; then it would add more (test) files to the package and
type-check again. This two-phase approach had four major problems:
1) in processing the tests, the loader modified the library package,
leaving no way for a client application to see both the test
package and the library package; one would mutate into the other.
2) because test files can declare additional methods on types defined in
the library portion of the package, the dispatch of method calls in
the library portion was affected by the presence of the test files.
This should have been a clue that the packages were logically
different.
3) this model of "augmentation" assumed at most one in-package test
per library package, which is true of projects using 'go build',
but not other build systems.
4) because of the two-phase nature of test processing, all packages that
import the library package had to be processed before augmentation,
forcing a "one-shot" API and preventing the client from calling Load
several times in sequence as is now possible in WholeProgram mode.
(TypeCheck mode has a similar one-shot restriction for a different reason.)
Early drafts of this package supported "multi-shot" operation
in the Metadata and WholeProgram modes, although this feature is not exposed
through the API and will likely be removed.
Although it allowed clients to make a sequence of calls (or concurrent
calls) to Load, building up the graph of Packages incrementally,
it was of marginal value: it complicated the API
(since it allowed some options to vary across calls but not others),
it complicated the implementation,
it cannot be made to work in TypeCheck mode, as explained above,
and it was less efficient than making one combined call (when this is possible).
Among the clients we have inspected, none made multiple calls to Load
that could not be easily and satisfactorily modified to make only a single call.
However, application changes may be required.
For example, the ssadump command loads the user-specified packages
and in addition the runtime package. It is tempting to simply append
"runtime" to the user-provided list, but that does not work if the user
specified an ad-hoc package such as [a.go b.go].
Instead, ssadump no longer requests the runtime package,
but seeks it among the dependencies of the user-specified packages,
and emits an error if it is not found.
Overlays: the ParseFile hook in the API permits clients to vary the way
in which ASTs are obtained from filenames; the default implementation is
based on parser.ParseFile. This feature enables editor-integrated tools
that analyze the contents of modified but unsaved buffers: rather than
read from the file system, a tool can read from an archive of modified
buffers provided by the editor.
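A minimal sketch of such a hook, assuming the editor supplies a
map[string]string (here called buffers, a hypothetical name) from
filename to the contents of its unsaved buffers:
	opts.ParseFile = func(fset *token.FileSet, filename string) (*ast.File, error) {
		var src interface{}
		if content, ok := buffers[filename]; ok { // buffers is hypothetical
			src = content // parse the unsaved editor buffer instead of the file
		}
		return parser.ParseFile(fset, filename, src, parser.AllErrors|parser.ParseComments)
	}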
This approach has its limits. Because package metadata is obtained by
fork/execing an external query command for each build system, we can
fake only the file contents seen by the parser, type-checker, and
application, but not by the metadata query, so, for example:
- additional imports in the fake file will not be described by the
metadata, so the type checker will fail to load imports that create
new dependencies.
- in TypeCheck mode, because export data is produced by the query
command, it will not reflect the fake file contents.
- this mechanism cannot add files to a package without first saving them.
Questions & Tasks
- Add pass-through options for the underlying query tool:
Dir string
Environ []string
Flags []string
Do away with GOROOT and don't add GOARCH/GOOS:
they are not portable concepts.
The goal is to allow users to express themselves using the conventions
of the underlying build system: if the build system honors GOARCH
during a build and during a metadata query, then so should
applications built atop that query mechanism.
Conversely, if the target architecture of the build is determined by
command-line flags, the application must pass the relevant
flags through to the build system using a command such as:
myapp -query_flag="--cpu=amd64" -query_flag="--os=darwin"
- Build tags: where do they fit in? How does Bazel/Blaze handle them?
- Add an 'IncludeTests bool' option to include tests among the results.
This flag is needed to avoid unnecessary dependencies (and, for vgo, downloads).
Should it include/skip implied tests? (all tests are implied in go build)
Or include/skip all tests?
- How should we handle partial failures such as a mixture of good and
malformed patterns, existing and non-existent packages, successful and
failed builds, import failures, import cycles, and so on, in a call to
Load?
- Do we need a GeneratedBy map that maps the name of each generated Go
source file in Srcs to that of the original file, if known, or "" otherwise?
Or are //line directives and "Generated" comments in those files enough?
- Support bazel/blaze, not just "go list".
- Support a "contains" query: a boolean option would cause the the
pattern words to be interpreted as filenames, and the query would
return the package(s) to which the file(s) belong.
- Handle (and test) various partial success cases, e.g.
a mixture of good packages and:
invalid patterns
nonexistent packages
empty packages
packages with malformed package or import declarations
unreadable files
import cycles
other parse errors
type errors
Make sure we record errors at the correct place in the graph.
- Missing packages among initial arguments are not reported.
Return bogus packages for them, like golist does.
- "undeclared name" errors (for example) are reported out of source file
order. I suspect this is due to the breadth-first resolution now used
by go/types. Is that a bug? Discuss with gri.
- https://github.com/golang/go/issues/25980 causes these commands to crash:
$ GOPATH=/none ./gopackages -all all
due to:
$ GOPATH=/none go list -e -test -json all
and:
$ go list -e -test ./relative/path
- Modify stringer to use go/packages, perhaps initially under flag control.
- Bug: "gopackages fmt a.go" doesn't produce an error.
*/

vendor/golang.org/x/tools/go/packages/golist.go generated vendored Normal file

@@ -0,0 +1,234 @@
package packages
// This file defines the "go list" implementation of the Packages metadata query.
import (
"bytes"
"context"
"encoding/json"
"fmt"
"os"
"os/exec"
"path/filepath"
"strings"
)
// golistPackages uses the "go list" command to expand the
// pattern words and return metadata for the specified packages.
func golistPackages(ctx context.Context, gopath string, cgo, export bool, words []string) ([]*Package, error) {
// Fields must match go list;
// see $GOROOT/src/cmd/go/internal/load/pkg.go.
type jsonPackage struct {
ImportPath string
Dir string
Name string
Export string
GoFiles []string
CFiles []string
CgoFiles []string
SFiles []string
Imports []string
ImportMap map[string]string
Deps []string
TestGoFiles []string
TestImports []string
XTestGoFiles []string
XTestImports []string
ForTest string // q in a "p [q.test]" package, else ""
DepOnly bool
}
// go list uses the following identifiers in ImportPath and Imports:
//
// "p" -- importable package or main (command)
// "q.test" -- q's test executable
// "p [q.test]" -- variant of p as built for q's test executable
// "q_test [q.test]" -- q's external test package
//
// The packages p that are built differently for a test q.test
// are q itself, plus any helpers used by the external test q_test,
// typically including "testing" and all its dependencies.
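// For example, a jsonPackage whose ImportPath is "foo [foo.test]" and
// whose ForTest is "foo" becomes, in the loop below, a Package with
// ID "foo [foo.test]", PkgPath "foo", and IsTest true.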
// Run "go list" for complete
// information on the specified packages.
buf, err := golist(ctx, gopath, cgo, export, words)
if err != nil {
return nil, err
}
// Decode the JSON and convert it to Package form.
var result []*Package
for dec := json.NewDecoder(buf); dec.More(); {
p := new(jsonPackage)
if err := dec.Decode(p); err != nil {
return nil, fmt.Errorf("JSON decoding failed: %v", err)
}
// Bad package?
if p.Name == "" {
// This could be due to:
// - no such package
// - package directory contains no Go source files
// - all package declarations are mangled
// - and possibly other things.
//
// For now, we throw it away and let later
// stages rediscover the problem, but this
// discards the error message computed by go list
// and computes a new one---by different logic:
// if only one of the package declarations is
// bad, for example, should we report an error
// in Metadata mode?
// Unless we parse and typecheck, we might not
// notice there's a problem.
//
// Perhaps we should save a map of PackageID to
// errors for such cases.
continue
}
id := p.ImportPath
// Extract the PkgPath from the package's ID.
pkgpath := id
if i := strings.IndexByte(id, ' '); i >= 0 {
pkgpath = id[:i]
}
// Is this a test?
// ("foo [foo.test]" package or "foo.test" command)
isTest := p.ForTest != "" || strings.HasSuffix(pkgpath, ".test")
if pkgpath == "unsafe" {
p.GoFiles = nil // ignore fake unsafe.go file
}
export := p.Export
if export != "" && !filepath.IsAbs(export) {
export = filepath.Join(p.Dir, export)
}
// imports
//
// Imports contains the IDs of all imported packages.
// ImportMap records (path, ID) only where they differ.
ids := make(map[string]bool)
for _, id := range p.Imports {
ids[id] = true
}
imports := make(map[string]string)
for path, id := range p.ImportMap {
imports[path] = id // non-identity import
delete(ids, id)
}
for id := range ids {
// Go issue 26136: go list omits imports in cgo-generated files.
if id == "C" && cgo {
imports["unsafe"] = "unsafe"
imports["syscall"] = "syscall"
if pkgpath != "runtime/cgo" {
imports["runtime/cgo"] = "runtime/cgo"
}
continue
}
imports[id] = id // identity import
}
pkg := &Package{
ID: id,
Name: p.Name,
PkgPath: pkgpath,
IsTest: isTest,
Srcs: absJoin(p.Dir, p.GoFiles, p.CgoFiles),
OtherSrcs: absJoin(p.Dir, p.SFiles, p.CFiles),
imports: imports,
export: export,
indirect: p.DepOnly,
}
result = append(result, pkg)
}
return result, nil
}
// absJoin absolutizes and flattens the lists of files.
func absJoin(dir string, fileses ...[]string) (res []string) {
for _, files := range fileses {
for _, file := range files {
if !filepath.IsAbs(file) {
file = filepath.Join(dir, file)
}
res = append(res, file)
}
}
return res
}
// golist returns the JSON-encoded result of a "go list args..." query.
func golist(ctx context.Context, gopath string, cgo, export bool, args []string) (*bytes.Buffer, error) {
out := new(bytes.Buffer)
if len(args) == 0 {
return out, nil
}
const test = true // TODO(adonovan): expose a flag for this.
cmd := exec.CommandContext(ctx, "go", append([]string{
"list",
"-e",
fmt.Sprintf("-cgo=%t", cgo),
fmt.Sprintf("-test=%t", test),
fmt.Sprintf("-export=%t", export),
"-deps",
"-json",
"--",
}, args...)...)
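// For example, with cgo=true and export=false, the words ["fmt"]
// produce a command equivalent to:
//   go list -e -cgo=true -test=true -export=false -deps -json -- fmt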
cmd.Env = append(append([]string(nil), os.Environ()...), "GOPATH="+gopath)
if !cgo {
cmd.Env = append(cmd.Env, "CGO_ENABLED=0")
}
cmd.Stdout = out
cmd.Stderr = new(bytes.Buffer)
if err := cmd.Run(); err != nil {
exitErr, ok := err.(*exec.ExitError)
if !ok {
// Catastrophic error:
// - executable not found
// - context cancellation
return nil, fmt.Errorf("couldn't exec 'go list': %s %T", err, err)
}
// Old go list?
if strings.Contains(fmt.Sprint(cmd.Stderr), "flag provided but not defined") {
return nil, fmt.Errorf("unsupported version of go list: %s: %s", exitErr, cmd.Stderr)
}
// Export mode entails a build.
// If that build fails, errors appear on stderr
// (despite the -e flag) and the Export field is blank.
// Do not fail in that case.
if !export {
return nil, fmt.Errorf("go list: %s: %s", exitErr, cmd.Stderr)
}
}
// Print standard error output from "go list".
// Due to the -e flag, this should be empty.
// However, in -export mode it contains build errors.
// Should go list save build errors in the Package.Error JSON field?
// See https://github.com/golang/go/issues/26319.
// If so, then we should continue to print stderr as go list
// will be silent unless something unexpected happened.
// If not, perhaps we should suppress it to reduce noise.
if stderr := fmt.Sprint(cmd.Stderr); stderr != "" {
fmt.Fprintf(os.Stderr, "go list stderr <<%s>>\n", stderr)
}
// debugging
if false {
fmt.Fprintln(os.Stderr, out)
}
return out, nil
}


@@ -0,0 +1,241 @@
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// The gopackages command is a diagnostic tool that demonstrates
// how to use golang.org/x/tools/go/packages to load, parse,
// type-check, and print one or more Go packages.
// Its precise output is unspecified and may change.
package main
import (
"flag"
"fmt"
"go/types"
"log"
"os"
"runtime"
"runtime/pprof"
"runtime/trace"
"sort"
"strings"
"golang.org/x/tools/go/packages"
"golang.org/x/tools/go/types/typeutil"
)
// flags
var (
depsFlag = flag.Bool("deps", false, "show dependencies too")
cgoFlag = flag.Bool("cgo", true, "process cgo files")
mode = flag.String("mode", "metadata", "mode (one of metadata, typecheck, wholeprogram)")
private = flag.Bool("private", false, "show non-exported declarations too")
cpuprofile = flag.String("cpuprofile", "", "write CPU profile to this file")
memprofile = flag.String("memprofile", "", "write memory profile to this file")
traceFlag = flag.String("trace", "", "write trace log to this file")
)
func usage() {
fmt.Fprintln(os.Stderr, `Usage: gopackages [-deps] [-cgo] [-mode=...] [-private] package...
The gopackages command loads, parses, type-checks,
and prints one or more Go packages.
Packages are specified using the notation of "go list",
or that of another underlying build system.
Flags:`)
flag.PrintDefaults()
}
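// For example (hypothetical invocation):
//
//   gopackages -mode=typecheck -deps -private fmt
//
// type-checks fmt, prints it and its dependencies, and includes
// unexported declarations in the output.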
func main() {
log.SetPrefix("gopackages: ")
log.SetFlags(0)
flag.Usage = usage
flag.Parse()
if len(flag.Args()) == 0 {
usage()
os.Exit(1)
}
if *cpuprofile != "" {
f, err := os.Create(*cpuprofile)
if err != nil {
log.Fatal(err)
}
if err := pprof.StartCPUProfile(f); err != nil {
log.Fatal(err)
}
// NB: profile won't be written in case of error.
defer pprof.StopCPUProfile()
}
if *traceFlag != "" {
f, err := os.Create(*traceFlag)
if err != nil {
log.Fatal(err)
}
if err := trace.Start(f); err != nil {
log.Fatal(err)
}
// NB: trace log won't be written in case of error.
defer func() {
trace.Stop()
log.Printf("To view the trace, run:\n$ go tool trace view %s", *traceFlag)
}()
}
if *memprofile != "" {
f, err := os.Create(*memprofile)
if err != nil {
log.Fatal(err)
}
// NB: memprofile won't be written in case of error.
defer func() {
runtime.GC() // get up-to-date statistics
if err := pprof.WriteHeapProfile(f); err != nil {
log.Fatalf("Writing memory profile: %v", err)
}
f.Close()
}()
}
// -mode flag
load := packages.TypeCheck
switch strings.ToLower(*mode) {
case "metadata":
load = packages.Metadata
case "typecheck":
load = packages.TypeCheck
case "wholeprogram":
load = packages.WholeProgram
default:
log.Fatalf("invalid mode: %s", *mode)
}
// Load, parse, and type-check the packages named on the command line.
opts := &packages.Options{
Error: func(error) {}, // we'll take responsibility for printing errors
DisableCgo: !*cgoFlag,
}
lpkgs, err := load(opts, flag.Args()...)
if err != nil {
log.Fatal(err)
}
// -deps: print dependencies too.
if *depsFlag {
// We can't use packages.All because
// we need an ordered traversal.
var all []*packages.Package // postorder
seen := make(map[*packages.Package]bool)
var visit func(*packages.Package)
visit = func(lpkg *packages.Package) {
if !seen[lpkg] {
seen[lpkg] = true
// visit imports
var importPaths []string
for path := range lpkg.Imports {
importPaths = append(importPaths, path)
}
sort.Strings(importPaths) // for determinism
for _, path := range importPaths {
visit(lpkg.Imports[path])
}
all = append(all, lpkg)
}
}
for _, lpkg := range lpkgs {
visit(lpkg)
}
lpkgs = all
}
for _, lpkg := range lpkgs {
print(lpkg)
}
}
func print(lpkg *packages.Package) {
// title
var kind string
if lpkg.IsTest {
kind = "test "
}
if lpkg.Name == "main" {
kind += "command"
} else {
kind += "package"
}
fmt.Printf("Go %s %q:\n", kind, lpkg.ID) // unique ID
fmt.Printf("\tpackage %s\n", lpkg.Name)
fmt.Printf("\treflect.Type.PkgPath %q\n", lpkg.PkgPath)
// characterize type info
if lpkg.Type == nil {
fmt.Printf("\thas no exported type info\n")
} else if !lpkg.Type.Complete() {
fmt.Printf("\thas incomplete exported type info\n")
} else if len(lpkg.Files) == 0 {
fmt.Printf("\thas complete exported type info\n")
} else {
fmt.Printf("\thas complete exported type info and typed ASTs\n")
}
if lpkg.Type != nil && lpkg.IllTyped && len(lpkg.Errors) == 0 {
fmt.Printf("\thas an error among its dependencies\n")
}
// source files
for _, src := range lpkg.Srcs {
fmt.Printf("\tfile %s\n", src)
}
// imports
var lines []string
for importPath, imp := range lpkg.Imports {
var line string
if imp.ID == importPath {
line = fmt.Sprintf("\timport %q", importPath)
} else {
line = fmt.Sprintf("\timport %q => %q", importPath, imp.ID)
}
lines = append(lines, line)
}
sort.Strings(lines)
for _, line := range lines {
fmt.Println(line)
}
// errors
for _, err := range lpkg.Errors {
fmt.Printf("\t%s\n", err)
}
// package members (TypeCheck or WholeProgram mode)
if lpkg.Type != nil {
qual := types.RelativeTo(lpkg.Type)
scope := lpkg.Type.Scope()
for _, name := range scope.Names() {
obj := scope.Lookup(name)
if !obj.Exported() && !*private {
continue // skip unexported names
}
fmt.Printf("\t%s\n", types.ObjectString(obj, qual))
if _, ok := obj.(*types.TypeName); ok {
for _, meth := range typeutil.IntuitiveMethodSet(obj.Type(), nil) {
if !meth.Obj().Exported() && !*private {
continue // skip unexported names
}
fmt.Printf("\t%s\n", types.SelectionString(meth, qual))
}
}
}
}
fmt.Println()
}

vendor/golang.org/x/tools/go/packages/packages.go generated vendored Normal file

@@ -0,0 +1,744 @@
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package packages
// See doc.go for package documentation and implementation notes.
import (
"context"
"fmt"
"go/ast"
"go/parser"
"go/token"
"go/types"
"log"
"os"
"sync"
"golang.org/x/tools/go/gcexportdata"
)
// An Options holds the options for a call to Metadata, TypeCheck
// or WholeProgram to load Go packages from source code.
type Options struct {
// Fset is the file set for the parser
// to use when loading the program.
Fset *token.FileSet
// Context may be used to cancel a pending call.
// Context is optional; the default behavior
// is equivalent to context.Background().
Context context.Context
// GOPATH is the effective value of the GOPATH environment variable.
// If unset, the default is Getenv("GOPATH").
//
// TODO(adonovan): this is primarily needed for testing, but it
// is not a build-system portable concept.
// Replace with flags/cwd/environ pass-through.
GOPATH string
// DisableCgo disables cgo-processing of files that import "C",
// and removes the 'cgo' build tag, which may affect source file selection.
// By default, TypeCheck and WholeProgram queries process such
// files, and the resulting Package.Srcs describes the generated
// files seen by the compiler.
DisableCgo bool
// TypeChecker contains options relating to the type checker,
// such as the Sizes function.
//
// The following fields of TypeChecker are ignored:
// - Import: the Loader provides the import machinery.
// - Error: errors are reported to the Error function, below.
TypeChecker types.Config
// Error is called for each error encountered during package loading.
// Implementations must be concurrency-safe.
// If nil, the default implementation prints errors to os.Stderr.
// Errors are additionally recorded in each Package.
// Error is not used in Metadata mode.
Error func(error)
// ParseFile is called to read and parse each file.
// Implementations must be concurrency-safe.
// If nil, the default implementation uses parser.ParseFile.
// A client may supply a custom implementation to,
// for example, provide alternative contents for files
// modified in a text editor but unsaved,
// or to selectively eliminate unwanted function
// bodies to reduce the load on the type-checker.
// ParseFile is not used in Metadata mode.
ParseFile func(fset *token.FileSet, filename string) (*ast.File, error)
}
// Metadata returns the metadata for a set of Go packages,
// but does not parse or type-check their source files.
// The returned packages are the roots of a directed acyclic graph,
// the "import graph", whose edges are represented by Package.Imports
// and whose transitive closure includes all dependencies of the
// initial packages.
//
// The packages are denoted by patterns, using the usual notation of the
// build system (currently "go build", but in future others such as
// Bazel). Clients should not attempt to infer the relationship between
// patterns and the packages they denote, as in general it is complex
// and many-to-many. Metadata reports an error if the patterns denote no
// packages.
//
// If Metadata was unable to expand the specified patterns to a set of
// packages, or if there was a cycle in the dependency graph, it returns
// an error. Otherwise it returns a set of loaded Packages, even if
// errors were encountered while loading some of them; such errors are
// recorded in each Package.
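//
// A minimal usage sketch (hypothetical client; error handling elided):
//
//	pkgs, err := packages.Metadata(nil, "fmt", "encoding/...")
//	if err != nil { ... }
//	for _, pkg := range pkgs {
//		fmt.Println(pkg.ID, pkg.Srcs)
//	}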
//
func Metadata(o *Options, patterns ...string) ([]*Package, error) {
l := &loader{mode: metadata}
if o != nil {
l.Options = *o
}
return l.load(patterns...)
}
// TypeCheck returns metadata, syntax trees, and type information
// for a set of Go packages.
//
// In addition to the information returned by the Metadata function,
// TypeCheck loads, parses, and type-checks each of the requested packages.
// These packages are "source packages", and the resulting Package
// structure provides complete syntax and type information.
// Due to limitations of the type checker, any package that transitively
// depends on a source package must also be loaded from source.
//
// For each immediate dependency of a source package that is not itself
// a source package, type information is obtained from export data
// files produced by the Go compiler; this mode may entail a partial build.
// The Package for these dependencies provides complete package-level type
// information (types.Package), but no syntax trees.
//
// The remaining packages, comprising the indirect dependencies of the
// packages with complete export data, may have partial package-level type
// information or perhaps none at all.
//
// For example, consider the import graph A->B->C->D->E.
// If the requested packages are A and C,
// then packages A, B, C are source packages,
// D is a complete export data package,
// and E is a partial export data package.
// (B must be a source package because it
// transitively depends on C, a source package.)
//
// Each package bears a flag, IllTyped, indicating whether it
// or one of its transitive dependencies contains an error.
// A package that is not IllTyped is buildable.
//
// Use this mode for compiler-like tools
// that analyze one package at a time.
//
func TypeCheck(o *Options, patterns ...string) ([]*Package, error) {
l := &loader{mode: typeCheck}
if o != nil {
l.Options = *o
}
return l.load(patterns...)
}
// WholeProgram returns metadata, complete syntax trees, and complete
// type information for a set of Go packages and their entire transitive
// closure of dependencies.
// Every package in the returned import graph is a source package,
// as defined by the documentation for TypeCheck.
//
// Use this mode for whole-program analysis tools.
//
func WholeProgram(o *Options, patterns ...string) ([]*Package, error) {
l := &loader{mode: wholeProgram}
if o != nil {
l.Options = *o
}
return l.load(patterns...)
}
// Package holds the metadata, and optionally syntax trees
// and type information, for a single Go package.
//
// The import graph, Imports, forms a directed acyclic graph over Packages.
// (Cycle-forming edges are not inserted into the map.)
//
// A Package is not mutated once returned.
type Package struct {
// ID is a unique, opaque identifier for a package,
// as determined by the underlying workspace.
//
// IDs distinguish packages that have the same PkgPath, such as
// a regular package and the variant of that package built
// during testing. (IDs also distinguish packages that would be
// lumped together by the go/build API, such as a regular
// package and its external tests.)
//
// Clients should not interpret the ID string as its
// structure varies from one build system to another.
ID string
// PkgPath is the path of the package as understood
// by the Go compiler and by reflect.Type.PkgPath.
//
// PkgPaths are unique for each package in a given executable
// program, but are not necessarily unique within a workspace.
// For example, an importable package (fmt) and its in-package
// tests (fmt·test) may have the same PkgPath, but those
// two packages are never linked together.
PkgPath string
// Name is the identifier appearing in the package declaration
// at the start of each source file in this package.
// The name of an executable is "main".
Name string
// IsTest indicates whether this package is a test.
IsTest bool
// Srcs is the list of names of this package's Go
// source files as presented to the compiler.
// Names aren't guaranteed to be absolute,
// but they are openable.
//
// In Metadata queries, or if DisableCgo is set,
// Srcs includes the unmodified source files even
// if they use cgo (import "C").
// In all other queries, Srcs contains the files
// resulting from cgo processing.
Srcs []string
// OtherSrcs is the list of names of non-Go source files that the package
// contains. This includes assembly and C source files. The names are
// "openable" in the same sense as are Srcs above.
OtherSrcs []string
// Imports maps each import path to its package.
// The keys are import paths as they appear in the source files.
Imports map[string]*Package
// syntax and type information (only in TypeCheck and WholeProgram modes)
Fset *token.FileSet // source position information
Files []*ast.File // syntax trees for the package's Srcs files
Errors []error // non-nil if the package had errors
Type *types.Package // type information about the package
Info *types.Info // type-checker deductions
IllTyped bool // this package or a dependency has a parse or type error
// ---- temporary state ----
// export holds the path to the export data file
// for this package, if mode == TypeCheck.
// The export data file contains the package's type information
// in a compiler-specific format; see
// golang.org/x/tools/go/{gc,gccgo}exportdata.
// May be the empty string if the build failed.
export string
indirect bool // package is a dependency, not explicitly requested
imports map[string]string // nominal form of Imports graph
importErrors map[string]error // maps each bad import to its error
loadOnce sync.Once
color uint8 // for cycle detection
mark, needsrc bool // used in TypeCheck mode only
}
func (lpkg *Package) String() string { return lpkg.ID }
// loader holds the working state of a single call to load.
type loader struct {
mode mode
cgo bool
Options
exportMu sync.Mutex // enforces mutual exclusion of exportdata operations
}
// The mode determines which packages are visited
// and the level of information reported about each one.
// Modes are ordered by increasing detail.
type mode uint8
const (
metadata = iota
typeCheck
wholeProgram
)
func (ld *loader) load(patterns ...string) ([]*Package, error) {
if ld.Context == nil {
ld.Context = context.Background()
}
if ld.mode > metadata {
if ld.Fset == nil {
ld.Fset = token.NewFileSet()
}
ld.cgo = !ld.DisableCgo
if ld.Error == nil {
ld.Error = func(e error) {
fmt.Fprintln(os.Stderr, e)
}
}
if ld.ParseFile == nil {
ld.ParseFile = func(fset *token.FileSet, filename string) (*ast.File, error) {
const mode = parser.AllErrors | parser.ParseComments
return parser.ParseFile(fset, filename, nil, mode)
}
}
}
if ld.GOPATH == "" {
ld.GOPATH = os.Getenv("GOPATH")
}
// Do the metadata query and partial build.
// TODO(adonovan): support alternative build systems at this seam.
list, err := golistPackages(ld.Context, ld.GOPATH, ld.cgo, ld.mode == typeCheck, patterns)
if err != nil {
return nil, err
}
pkgs := make(map[string]*Package)
var initial []*Package
for _, pkg := range list {
pkgs[pkg.ID] = pkg
// Record the set of initial packages
// corresponding to the patterns.
if !pkg.indirect {
initial = append(initial, pkg)
if ld.mode == typeCheck {
pkg.needsrc = true
}
}
}
if len(pkgs) == 0 {
return nil, fmt.Errorf("no packages to load")
}
// Materialize the import graph.
const (
white = 0 // new
grey = 1 // in progress
black = 2 // complete
)
// visit traverses the import graph, depth-first,
// and materializes the graph as Package.Imports.
//
// Valid imports are saved in the Package.Imports map.
// Invalid imports (cycles and missing nodes) are saved in the importErrors map.
// Thus, even in the presence of both kinds of errors, the Import graph remains a DAG.
//
// visit returns whether the package is initial or has a transitive
// dependency on an initial package. These are the only packages
// for which we load source code in typeCheck mode.
var stack []*Package
var visit func(lpkg *Package) bool
visit = func(lpkg *Package) bool {
switch lpkg.color {
case black:
return lpkg.needsrc
case grey:
panic("internal error: grey node")
}
lpkg.color = grey
stack = append(stack, lpkg) // push
imports := make(map[string]*Package)
for importPath, id := range lpkg.imports {
var importErr error
imp := pkgs[id]
if imp == nil {
// (includes package "C" when DisableCgo)
importErr = fmt.Errorf("missing package: %q", id)
} else if imp.color == grey {
importErr = fmt.Errorf("import cycle: %s", stack)
}
if importErr != nil {
if lpkg.importErrors == nil {
lpkg.importErrors = make(map[string]error)
}
lpkg.importErrors[importPath] = importErr
continue
}
if visit(imp) {
lpkg.needsrc = true
}
imports[importPath] = imp
}
lpkg.imports = nil // no longer needed
lpkg.Imports = imports
stack = stack[:len(stack)-1] // pop
lpkg.color = black
return lpkg.needsrc
}
// For each initial package, create its import DAG.
for _, lpkg := range initial {
visit(lpkg)
}
// Load some/all packages from source, starting at
// the initial packages (roots of the import DAG).
if ld.mode != metadata {
var wg sync.WaitGroup
for _, lpkg := range initial {
wg.Add(1)
go func(lpkg *Package) {
ld.loadRecursive(lpkg)
wg.Done()
}(lpkg)
}
wg.Wait()
}
return initial, nil
}
// loadRecursive loads, parses, and type-checks the specified package and its
// dependencies, recursively, in parallel, in topological order.
// It is atomic and idempotent.
// Precondition: ld.mode != Metadata.
// In typeCheck mode, only needsrc packages are loaded.
func (ld *loader) loadRecursive(lpkg *Package) {
lpkg.loadOnce.Do(func() {
// Load the direct dependencies, in parallel.
var wg sync.WaitGroup
for _, imp := range lpkg.Imports {
wg.Add(1)
go func(imp *Package) {
ld.loadRecursive(imp)
wg.Done()
}(imp)
}
wg.Wait()
ld.loadPackage(lpkg)
})
}
// loadPackage loads, parses, and type-checks the
// files of the specified package, if needed.
// It must be called only once per Package,
// after immediate dependencies are loaded.
// Precondition: ld.mode != Metadata.
func (ld *loader) loadPackage(lpkg *Package) {
if lpkg.PkgPath == "unsafe" {
// Fill in the blanks to avoid surprises.
lpkg.Type = types.Unsafe
lpkg.Fset = ld.Fset
lpkg.Files = []*ast.File{}
lpkg.Info = new(types.Info)
return
}
if ld.mode == typeCheck && !lpkg.needsrc {
return // not a source package
}
hardErrors := false
appendError := func(err error) {
if terr, ok := err.(types.Error); ok && terr.Soft {
// Don't mark the package as bad.
} else {
hardErrors = true
}
ld.Error(err)
lpkg.Errors = append(lpkg.Errors, err)
}
files, errs := ld.parseFiles(lpkg.Srcs)
for _, err := range errs {
appendError(err)
}
lpkg.Fset = ld.Fset
lpkg.Files = files
// Call NewPackage directly with explicit name.
// This avoids skew between golist and go/types when the files'
// package declarations are inconsistent.
lpkg.Type = types.NewPackage(lpkg.PkgPath, lpkg.Name)
lpkg.Info = &types.Info{
Types: make(map[ast.Expr]types.TypeAndValue),
Defs: make(map[*ast.Ident]types.Object),
Uses: make(map[*ast.Ident]types.Object),
Implicits: make(map[ast.Node]types.Object),
Scopes: make(map[ast.Node]*types.Scope),
Selections: make(map[*ast.SelectorExpr]*types.Selection),
}
// Copy the prototype types.Config as it must vary across Packages.
tc := ld.TypeChecker // copy
if !ld.cgo {
tc.FakeImportC = true
}
tc.Importer = importerFunc(func(path string) (*types.Package, error) {
if path == "unsafe" {
return types.Unsafe, nil
}
// The imports map is keyed by import path.
imp := lpkg.Imports[path]
if imp == nil {
if err := lpkg.importErrors[path]; err != nil {
return nil, err
}
// There was skew between the metadata and the
// import declarations, likely due to an edit
// race, or because the ParseFile feature was
// used to supply alternative file contents.
return nil, fmt.Errorf("no metadata for %s", path)
}
if imp.Type != nil && imp.Type.Complete() {
return imp.Type, nil
}
if ld.mode == typeCheck && !imp.needsrc {
return ld.loadFromExportData(imp)
}
log.Fatalf("internal error: nil Pkg importing %q from %q", path, lpkg)
panic("unreachable")
})
tc.Error = appendError
// type-check
types.NewChecker(&tc, ld.Fset, lpkg.Type, lpkg.Info).Files(lpkg.Files)
lpkg.importErrors = nil // no longer needed
// If !Cgo, the type-checker uses FakeImportC mode, so
// it doesn't invoke the importer for import "C",
// nor report an error for the import,
// or for any undefined C.f reference.
// We must detect this explicitly and correctly
// mark the package as IllTyped (by reporting an error).
// TODO(adonovan): if these errors are annoying,
// we could just set IllTyped quietly.
if tc.FakeImportC {
outer:
for _, f := range lpkg.Files {
for _, imp := range f.Imports {
if imp.Path.Value == `"C"` {
appendError(fmt.Errorf(`%s: import "C" ignored`,
lpkg.Fset.Position(imp.Pos())))
break outer
}
}
}
}
// Record accumulated errors.
for _, imp := range lpkg.Imports {
if imp.IllTyped {
hardErrors = true
break
}
}
lpkg.IllTyped = hardErrors
}
// An importFunc is an implementation of the single-method
// types.Importer interface based on a function value.
type importerFunc func(path string) (*types.Package, error)
func (f importerFunc) Import(path string) (*types.Package, error) { return f(path) }
// We use a counting semaphore to limit
// the number of parallel I/O calls per process.
var ioLimit = make(chan bool, 20)
// parseFiles reads and parses the Go source files and returns the ASTs
// of the ones that could be at least partially parsed, along with a
// list of I/O and parse errors encountered.
//
// Because files are scanned in parallel, the token.Pos
// positions of the resulting ast.Files are not ordered.
//
func (ld *loader) parseFiles(filenames []string) ([]*ast.File, []error) {
var wg sync.WaitGroup
n := len(filenames)
parsed := make([]*ast.File, n)
errors := make([]error, n)
for i, file := range filenames {
wg.Add(1)
go func(i int, filename string) {
ioLimit <- true // wait
// ParseFile may return both an AST and an error.
parsed[i], errors[i] = ld.ParseFile(ld.Fset, filename)
<-ioLimit // signal
wg.Done()
}(i, file)
}
wg.Wait()
// Eliminate nils, preserving order.
var o int
for _, f := range parsed {
if f != nil {
parsed[o] = f
o++
}
}
parsed = parsed[:o]
o = 0
for _, err := range errors {
if err != nil {
errors[o] = err
o++
}
}
errors = errors[:o]
return parsed, errors
}
// loadFromExportData returns type information for the specified
// package, loading it from an export data file on the first request.
func (ld *loader) loadFromExportData(lpkg *Package) (*types.Package, error) {
if lpkg.PkgPath == "" {
log.Fatalf("internal error: Package %s has no PkgPath", lpkg)
}
// Because gcexportdata.Read has the potential to create or
// modify the types.Package for each node in the transitive
// closure of dependencies of lpkg, all exportdata operations
// must be sequential. (Finer-grained locking would require
// changes to the gcexportdata API.)
//
// The exportMu lock guards the Package.Type field and the
// types.Package it points to, for each Package in the graph.
//
// Not all accesses to Package.Type need to be protected by exportMu:
// graph ordering ensures that direct dependencies of source
// packages are fully loaded before the importer reads their Type field.
ld.exportMu.Lock()
defer ld.exportMu.Unlock()
if tpkg := lpkg.Type; tpkg != nil && tpkg.Complete() {
return tpkg, nil // cache hit
}
lpkg.IllTyped = true // fail safe
if lpkg.export == "" {
// Errors while building export data will have been printed to stderr.
return nil, fmt.Errorf("no export data file")
}
f, err := os.Open(lpkg.export)
if err != nil {
return nil, err
}
defer f.Close()
// Read gc export data.
//
// We don't currently support gccgo export data because all
// underlying workspaces use the gc toolchain. (Even build
// systems that support gccgo don't use it for workspace
// queries.)
r, err := gcexportdata.NewReader(f)
if err != nil {
return nil, fmt.Errorf("reading %s: %v", lpkg.export, err)
}
// Build the view.
//
// The gcexportdata machinery has no concept of package ID.
// It identifies packages by their PkgPath, which although not
// globally unique is unique within the scope of one invocation
// of the linker, type-checker, or gcexportdata.
//
// So, we must build a PkgPath-keyed view of the global
// (conceptually ID-keyed) cache of packages and pass it to
// gcexportdata, then copy back to the global cache any newly
// created entries in the view map. The view must contain every
// existing package that might possibly be mentioned by the
// current package---its reflexive transitive closure.
//
// (Yes, reflexive: although loadRecursive processes source
// packages in topological order, export data packages are
// processed only lazily within Importer calls. In the graph
// A->B->C, A->C where A is a source package and B and C are
// export data packages, processing of the A->B and A->C import
// edges may occur in either order, depending on the sequence
// of imports within A. If B is processed first, and its export
// data mentions C, an incomplete package for C will be created
// before processing of C.)
// We could do export data processing in topological order using
// loadRecursive, but there's no parallelism to be gained.
//
// TODO(adonovan): it would be simpler and more efficient
// if the export data machinery invoked a callback to
// get-or-create a package instead of a map.
//
view := make(map[string]*types.Package) // view seen by gcexportdata
seen := make(map[*Package]bool) // all visited packages
var copyback []*Package // candidates for copying back to global cache
var visit func(p *Package)
visit = func(p *Package) {
if !seen[p] {
seen[p] = true
if p.Type != nil {
view[p.PkgPath] = p.Type
} else {
copyback = append(copyback, p)
}
for _, p := range p.Imports {
visit(p)
}
}
}
visit(lpkg)
// Parse the export data.
// (May create/modify packages in view.)
tpkg, err := gcexportdata.Read(r, ld.Fset, view, lpkg.PkgPath)
if err != nil {
return nil, fmt.Errorf("reading %s: %v", lpkg.export, err)
}
// For each newly created types.Package in the view,
// save it in the main graph.
for _, p := range copyback {
p.Type = view[p.PkgPath] // may still be nil
}
lpkg.Type = tpkg
lpkg.IllTyped = false
return tpkg, nil
}
// All returns a new map containing all the transitive dependencies of
// the specified initial packages, keyed by ID.
func All(initial []*Package) map[string]*Package {
all := make(map[string]*Package)
var visit func(p *Package)
visit = func(p *Package) {
if all[p.ID] == nil {
all[p.ID] = p
for _, imp := range p.Imports {
visit(imp)
}
}
}
for _, p := range initial {
visit(p)
}
return all
}

vendor/golang.org/x/tools/go/packages/packages_test.go generated vendored Normal file

@@ -0,0 +1,599 @@
package packages_test
import (
"bytes"
"fmt"
"go/ast"
"go/parser"
"go/token"
"go/types"
"io/ioutil"
"os"
"path/filepath"
"reflect"
"sort"
"strings"
"sync"
"testing"
"golang.org/x/tools/go/packages"
)
// TODO(adonovan): more test cases to write:
//
// - When the tests fail, make them print a 'cd & load' command
// that will allow the maintainer to interact with the failing scenario.
// - vendoring
// - errors in go-list metadata
// - all returned file names should be openable
// - a foo.test package that cannot be built for some reason (e.g.
// import error) will result in a JSON blob with no name and a
// nonexistent testmain file in GoFiles. Test that we handle this
// gracefully.
// - import graph for synthetic testmain and "p [t.test]" packages.
// - IsTest boolean
//
// TypeCheck & WholeProgram modes:
// - Fset may be user-supplied or not.
// - Packages.Info is correctly set.
// - typechecker configuration is honored
// - import cycles are gracefully handled in type checker.
func TestMetadataImportGraph(t *testing.T) {
tmp, cleanup := enterTree(t, map[string]string{
"src/a/a.go": `package a; const A = 1`,
"src/b/b.go": `package b; import ("a"; _ "errors"); var B = a.A`,
"src/c/c.go": `package c; import (_ "b"; _ "unsafe")`,
"src/c/c2.go": "//+build ignore\n\n" + `package c; import _ "fmt"`,
"src/subdir/d/d.go": `package d`,
"src/subdir/d/d_test.go": `package d; import _ "math/bits"`,
"src/subdir/d/x_test.go": `package d_test; import _ "subdir/d"`, // TODO(adonovan): test bad import here
"src/subdir/e/d.go": `package e`,
"src/e/e.go": `package main; import _ "b"`,
"src/e/e2.go": `package main; import _ "c"`,
"src/f/f.go": `package f`,
})
defer cleanup()
// -- tmp is now the current directory --
opts := &packages.Options{GOPATH: tmp}
initial, err := packages.Metadata(opts, "c", "subdir/d", "e")
if err != nil {
t.Fatal(err)
}
// Check graph topology.
graph, all := importGraph(initial)
wantGraph := `
a
b
* c
* e
errors
math/bits
* subdir/d
subdir/d [subdir/d.test]
* subdir/d.test
subdir/d_test [subdir/d.test]
unsafe
b -> a
b -> errors
c -> b
c -> unsafe
e -> b
e -> c
subdir/d [subdir/d.test] -> math/bits
subdir/d.test -> os (pruned)
subdir/d.test -> subdir/d [subdir/d.test]
subdir/d.test -> subdir/d_test [subdir/d.test]
subdir/d.test -> testing (pruned)
subdir/d.test -> testing/internal/testdeps (pruned)
subdir/d_test [subdir/d.test] -> subdir/d [subdir/d.test]
`[1:]
if graph != wantGraph {
t.Errorf("wrong import graph: got <<%s>>, want <<%s>>", graph, wantGraph)
}
// Check node information: kind, name, srcs.
for _, test := range []struct {
id string
wantName string
wantKind string
wantSrcs string
}{
{"a", "a", "package", "a.go"},
{"b", "b", "package", "b.go"},
{"c", "c", "package", "c.go"}, // c2.go is ignored
{"e", "main", "command", "e.go e2.go"},
{"errors", "errors", "package", "errors.go"},
{"subdir/d", "d", "package", "d.go"},
// {"subdir/d.test", "main", "test command", "<hideous generated file name>"},
{"unsafe", "unsafe", "package", ""},
} {
p, ok := all[test.id]
if !ok {
t.Errorf("no package %s", test.id)
continue
}
if p.Name != test.wantName {
t.Errorf("%s.Name = %q, want %q", test.id, p.Name, test.wantName)
}
// kind
var kind string
if p.IsTest {
kind = "test "
}
if p.Name == "main" {
kind += "command"
} else {
kind += "package"
}
if kind != test.wantKind {
t.Errorf("%s.Kind = %q, want %q", test.id, kind, test.wantKind)
}
if srcs := strings.Join(srcs(p), " "); srcs != test.wantSrcs {
t.Errorf("%s.Srcs = [%s], want [%s]", test.id, srcs, test.wantSrcs)
}
}
// Test an ad-hoc package, analogous to "go run hello.go".
if initial, err := packages.Metadata(opts, "src/c/c.go"); len(initial) == 0 {
t.Errorf("failed to obtain metadata for ad-hoc package (err=%v)", err)
} else {
got := fmt.Sprintf("%s %s", initial[0].ID, srcs(initial[0]))
if want := "command-line-arguments [c.go]"; got != want {
t.Errorf("oops: got %s, want %s", got, want)
}
}
// Wildcards
// See StdlibTest for effective test of "std" wildcard.
// TODO(adonovan): test "all" returns everything in the current module.
{
// "..." (subdirectory)
initial, err = packages.Metadata(opts, "subdir/...")
if err != nil {
t.Fatal(err)
}
const want = "[subdir/d subdir/e subdir/d.test]"
if fmt.Sprint(initial) != want {
t.Errorf("for subdir/... wildcard, got %s, want %s", initial, want)
}
}
}
type errCollector struct {
mu sync.Mutex
errors []error
}
func (ec *errCollector) add(err error) {
ec.mu.Lock()
ec.errors = append(ec.errors, err)
ec.mu.Unlock()
}
func TestTypeCheckOK(t *testing.T) {
tmp, cleanup := enterTree(t, map[string]string{
"src/a/a.go": `package a; import "b"; const A = "a" + b.B`,
"src/b/b.go": `package b; import "c"; const B = "b" + c.C`,
"src/c/c.go": `package c; import "d"; const C = "c" + d.D`,
"src/d/d.go": `package d; import "e"; const D = "d" + e.E`,
"src/e/e.go": `package e; const E = "e"`,
})
defer cleanup()
// -- tmp is now the current directory --
opts := &packages.Options{GOPATH: tmp, Error: func(error) {}}
initial, err := packages.TypeCheck(opts, "a", "c")
if err != nil {
t.Fatal(err)
}
graph, all := importGraph(initial)
wantGraph := `
* a
b
* c
d
e
a -> b
b -> c
c -> d
d -> e
`[1:]
if graph != wantGraph {
t.Errorf("wrong import graph: got <<%s>>, want <<%s>>", graph, wantGraph)
}
for _, test := range []struct {
id string
wantType bool
wantFiles bool
}{
{"a", true, true}, // source package
{"b", true, true}, // source package
{"c", true, true}, // source package
{"d", true, false}, // export data package
{"e", false, false}, // no package
} {
p := all[test.id]
if p == nil {
t.Errorf("missing package: %s", test.id)
continue
}
if (p.Type != nil) != test.wantType {
if test.wantType {
t.Errorf("missing types.Package for %s", p)
} else {
t.Errorf("unexpected types.Package for %s", p)
}
}
if (p.Files != nil) != test.wantFiles {
if test.wantFiles {
t.Errorf("missing ast.Files for %s", p)
} else {
t.Errorf("unexpected ast.Files for for %s", p)
}
}
if p.Errors != nil {
t.Errorf("errors in package: %s: %s", p, p.Errors)
}
}
// Check value of constant.
aA := all["a"].Type.Scope().Lookup("A").(*types.Const)
if got, want := fmt.Sprintf("%v %v", aA, aA.Val()), `const a.A untyped string "abcde"`; got != want {
t.Errorf("a.A: got %s, want %s", got, want)
}
}
func TestTypeCheckError(t *testing.T) {
// A type error in a lower-level package (e) prevents go list
// from producing export data for all packages that depend on it
// [a-e]. Export data is only required for package d, so package
// c, which imports d, gets an error, and all packages above d
// are IllTyped. Package e is not ill-typed, because the user
// did not demand its type information (despite it actually
// containing a type error).
tmp, cleanup := enterTree(t, map[string]string{
"src/a/a.go": `package a; import "b"; const A = "a" + b.B`,
"src/b/b.go": `package b; import "c"; const B = "b" + c.C`,
"src/c/c.go": `package c; import "d"; const C = "c" + d.D`,
"src/d/d.go": `package d; import "e"; const D = "d" + e.E`,
"src/e/e.go": `package e; const E = "e" + 1`, // type error
})
defer cleanup()
// -- tmp is now the current directory --
opts := &packages.Options{GOPATH: tmp, Error: func(error) {}}
initial, err := packages.TypeCheck(opts, "a", "c")
if err != nil {
t.Fatal(err)
}
all := packages.All(initial)
for _, test := range []struct {
id string
wantType bool
wantFiles bool
wantIllTyped bool
wantErrs []string
}{
{"a", true, true, true, nil},
{"b", true, true, true, nil},
{"c", true, true, true, []string{"could not import d (no export data file)"}},
{"d", false, false, true, nil}, // missing export data
{"e", false, false, false, nil}, // type info not requested (despite type error)
} {
p := all[test.id]
if p == nil {
t.Errorf("missing package: %s", test.id)
continue
}
if (p.Type != nil) != test.wantType {
if test.wantType {
t.Errorf("missing types.Package for %s", test.id)
} else {
t.Errorf("unexpected types.Package for %s", test.id)
}
}
if (p.Files != nil) != test.wantFiles {
if test.wantFiles {
t.Errorf("missing ast.Files for %s", test.id)
} else {
t.Errorf("unexpected ast.Files for for %s", test.id)
}
}
if p.IllTyped != test.wantIllTyped {
t.Errorf("IllTyped was %t for %s", p.IllTyped, test.id)
}
if errs := errorMessages(p.Errors); !reflect.DeepEqual(errs, test.wantErrs) {
t.Errorf("in package %s, got errors %s, want %s", p, errs, test.wantErrs)
}
}
// Check value of constant.
aA := all["a"].Type.Scope().Lookup("A").(*types.Const)
if got, want := aA.String(), `const a.A invalid type`; got != want {
t.Errorf("a.A: got %s, want %s", got, want)
}
}
// This function tests use of the ParseFile hook to supply
// alternative file contents to the parser and type-checker.
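// For example (taken from the table below), an overlay entry maps an
// on-disk filename to replacement source:
//
//	filepath.Join(tmp, "src/c/c.go"): `package c; const C = "C"`
//
// Files not present in the overlay are parsed from disk as usual.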
func TestWholeProgramOverlay(t *testing.T) {
type M = map[string]string
tmp, cleanup := enterTree(t, M{
"src/a/a.go": `package a; import "b"; const A = "a" + b.B`,
"src/b/b.go": `package b; import "c"; const B = "b" + c.C`,
"src/c/c.go": `package c; const C = "c"`,
"src/d/d.go": `package d; const D = "d"`,
})
defer cleanup()
// -- tmp is now the current directory --
for i, test := range []struct {
overlay M
want string // expected value of a.A
wantErrs []string
}{
{nil, `"abc"`, nil}, // default
{M{}, `"abc"`, nil}, // empty overlay
{M{filepath.Join(tmp, "src/c/c.go"): `package c; const C = "C"`}, `"abC"`, nil},
{M{filepath.Join(tmp, "src/b/b.go"): `package b; import "c"; const B = "B" + c.C`}, `"aBc"`, nil},
{M{filepath.Join(tmp, "src/b/b.go"): `package b; import "d"; const B = "B" + d.D`}, `unknown`,
[]string{`could not import d (no metadata for d)`}},
} {
var parseFile func(fset *token.FileSet, filename string) (*ast.File, error)
if test.overlay != nil {
parseFile = func(fset *token.FileSet, filename string) (*ast.File, error) {
var src interface{}
if content, ok := test.overlay[filename]; ok {
src = content
}
const mode = parser.AllErrors | parser.ParseComments
return parser.ParseFile(fset, filename, src, mode)
}
}
var errs errCollector
opts := &packages.Options{
GOPATH: tmp,
Error: errs.add,
ParseFile: parseFile,
}
initial, err := packages.WholeProgram(opts, "a")
if err != nil {
t.Error(err)
continue
}
// Check value of a.A.
a := initial[0]
got := a.Type.Scope().Lookup("A").(*types.Const).Val().String()
if got != test.want {
t.Errorf("%d. a.A: got %s, want %s", i, got, test.want)
}
if errs := errorMessages(errs.errors); !reflect.DeepEqual(errs, test.wantErrs) {
t.Errorf("%d. got errors %s, want %s", i, errs, test.wantErrs)
}
}
}
func TestWholeProgramImportErrors(t *testing.T) {
tmp, cleanup := enterTree(t, map[string]string{
"src/unicycle/unicycle.go": `package unicycle; import _ "unicycle"`,
"src/bicycle1/bicycle1.go": `package bicycle1; import _ "bicycle2"`,
"src/bicycle2/bicycle2.go": `package bicycle2; import _ "bicycle1"`,
"src/bad/bad.go": `not a package declaration`,
"src/root/root.go": `package root
import (
_ "bicycle1"
_ "unicycle"
_ "nonesuch"
_ "empty"
_ "bad"
)`,
})
defer cleanup()
// -- tmp is now the current directory --
os.Mkdir("src/empty", 0777) // create an existing but empty package
var errs2 errCollector
opts := &packages.Options{GOPATH: tmp, Error: errs2.add}
initial, err := packages.WholeProgram(opts, "root")
if err != nil {
t.Fatal(err)
}
// Cycle-forming edges are removed from the graph:
// bicycle2 -> bicycle1
// unicycle -> unicycle
graph, all := importGraph(initial)
wantGraph := `
bicycle1
bicycle2
* root
unicycle
bicycle1 -> bicycle2
root -> bicycle1
root -> unicycle
`[1:]
if graph != wantGraph {
t.Errorf("wrong import graph: got <<%s>>, want <<%s>>", graph, wantGraph)
}
for _, test := range []struct {
id string
wantErrs []string
}{
{"bicycle1", nil},
{"bicycle2", []string{
"could not import bicycle1 (import cycle: [root bicycle1 bicycle2])",
}},
{"unicycle", []string{
"could not import unicycle (import cycle: [root unicycle])",
}},
{"root", []string{
`could not import bad (missing package: "bad")`,
`could not import empty (missing package: "empty")`,
`could not import nonesuch (missing package: "nonesuch")`,
}},
} {
p := all[test.id]
if p == nil {
t.Errorf("missing package: %s", test.id)
continue
}
if p.Type == nil {
t.Errorf("missing types.Package for %s", test.id)
}
if p.Files == nil {
t.Errorf("missing ast.Files for %s", test.id)
}
if !p.IllTyped {
t.Errorf("IllTyped was false for %s", test.id)
}
if errs := errorMessages(p.Errors); !reflect.DeepEqual(errs, test.wantErrs) {
t.Errorf("in package %s, got errors %s, want %s", p, errs, test.wantErrs)
}
}
}
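// errorMessages strips the leading file/position prefix from each
// error and returns the remaining messages, sorted, so that tests can
// compare them with reflect.DeepEqual regardless of order.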
func errorMessages(errors []error) []string {
var msgs []string
for _, err := range errors {
msg := err.Error()
// Strip off /tmp filename.
if i := strings.Index(msg, ": "); i >= 0 {
msg = msg[i+len(": "):]
}
msgs = append(msgs, msg)
}
sort.Strings(msgs)
return msgs
}
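// srcs returns the base names of the package's source files.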
func srcs(p *packages.Package) (basenames []string) {
// Ideally we would show the root-relative portion (e.g. after
// src/) but vgo doesn't necessarily have a src/ dir.
for _, src := range p.Srcs {
basenames = append(basenames, filepath.Base(src))
}
return basenames
}
// importGraph returns the import graph as a user-friendly string,
// and a map containing all packages keyed by ID.
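// Each node appears on its own line, with initial packages marked by
// a leading '*', followed by indented "p -> q" edge lines; this is
// the format expected by the wantGraph fixtures above.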
func importGraph(initial []*packages.Package) (string, map[string]*packages.Package) {
out := new(bytes.Buffer)
initialSet := make(map[*packages.Package]bool)
for _, p := range initial {
initialSet[p] = true
}
// We can't use packages.All because
// we need to prune the traversal.
var nodes, edges []string
res := make(map[string]*packages.Package)
seen := make(map[*packages.Package]bool)
var visit func(p *packages.Package)
visit = func(p *packages.Package) {
if !seen[p] {
seen[p] = true
if res[p.ID] != nil {
panic("duplicate ID: " + p.ID)
}
res[p.ID] = p
star := ' ' // mark initial packages with a star
if initialSet[p] {
star = '*'
}
nodes = append(nodes, fmt.Sprintf("%c %s", star, p.ID))
// To avoid a lot of noise,
// we prune uninteresting dependencies of testmain packages,
// which we identify by this import:
isTestMain := p.Imports["testing/internal/testdeps"] != nil
for _, imp := range p.Imports {
if isTestMain {
switch imp.ID {
case "os", "testing", "testing/internal/testdeps":
edges = append(edges, fmt.Sprintf("%s -> %s (pruned)", p, imp))
continue
}
}
edges = append(edges, fmt.Sprintf("%s -> %s", p, imp))
visit(imp)
}
}
}
for _, p := range initial {
visit(p)
}
// Sort, ignoring leading optional star prefix.
sort.Slice(nodes, func(i, j int) bool { return nodes[i][2:] < nodes[j][2:] })
for _, node := range nodes {
fmt.Fprintf(out, "%s\n", node)
}
sort.Strings(edges)
for _, edge := range edges {
fmt.Fprintf(out, " %s\n", edge)
}
return out.String(), res
}
const skipCleanup = false // for debugging; don't commit 'true'!
// enterTree creates a new temporary directory containing the specified
// file tree, and chdirs to it. Call the cleanup function to restore the
// cwd and delete the tree.
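// Typical usage (file contents illustrative):
//
//	tmp, cleanup := enterTree(t, map[string]string{
//		"src/a/a.go": `package a; const A = "a"`,
//	})
//	defer cleanup()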
func enterTree(t *testing.T, tree map[string]string) (dir string, cleanup func()) {
oldcwd, err := os.Getwd()
if err != nil {
t.Fatal(err)
}
dir, err = ioutil.TempDir("", "")
if err != nil {
t.Fatal(err)
}
cleanup = func() {
if err := os.Chdir(oldcwd); err != nil {
t.Errorf("cannot restore cwd: %v", err)
}
if skipCleanup {
t.Logf("Skipping cleanup of temp dir: %s", dir)
} else {
os.RemoveAll(dir) // ignore errors
}
}
if err := os.Chdir(dir); err != nil {
t.Fatalf("chdir: %v", err)
}
for name, content := range tree {
if err := os.MkdirAll(filepath.Dir(name), 0777); err != nil {
cleanup()
t.Fatal(err)
}
if err := ioutil.WriteFile(name, []byte(content), 0666); err != nil {
cleanup()
t.Fatal(err)
}
}
return dir, cleanup
}

138
vendor/golang.org/x/tools/go/packages/stdlib_test.go generated vendored Normal file

@@ -0,0 +1,138 @@
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package packages_test
import (
"bytes"
"io/ioutil"
"path/filepath"
"runtime"
"strings"
"testing"
"time"
"golang.org/x/tools/go/packages"
)
// This test loads the metadata for the standard library.
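// ("std" is the go list pattern that matches every standard
// library package.)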
func TestStdlibMetadata(t *testing.T) {
// TODO(adonovan): see if we can get away without this hack.
// if runtime.GOOS == "android" {
// t.Skipf("incomplete std lib on %s", runtime.GOOS)
// }
runtime.GC()
t0 := time.Now()
var memstats runtime.MemStats
runtime.ReadMemStats(&memstats)
alloc := memstats.Alloc
// Load the metadata for the standard library (no parsing or type-checking).
pkgs, err := packages.Metadata(nil, "std")
if err != nil {
t.Fatalf("failed to load metadata: %v", err)
}
t1 := time.Now()
runtime.GC()
runtime.ReadMemStats(&memstats)
runtime.KeepAlive(pkgs)
t.Logf("Loaded %d packages", len(pkgs))
numPkgs := len(pkgs)
if want := 340; numPkgs < want {
t.Errorf("Loaded only %d packages, want at least %d", numPkgs, want)
}
t.Log("GOMAXPROCS: ", runtime.GOMAXPROCS(0))
t.Log("Metadata: ", t1.Sub(t0)) // ~800ms on 12 threads
t.Log("#MB: ", int64(memstats.Alloc-alloc)/1000000) // ~1MB
}
func TestCgoOption(t *testing.T) {
if testing.Short() {
t.Skip("skipping in short mode; uses tons of memory (golang.org/issue/14113)")
}
// TODO(adonovan): see if we can get away without these old
// go/loader hacks now that we use the go list command.
//
// switch runtime.GOOS {
// // On these systems, the net and os/user packages don't use cgo
// // or the std library is incomplete (Android).
// case "android", "plan9", "solaris", "windows":
// t.Skipf("no cgo or incomplete std lib on %s", runtime.GOOS)
// }
// // In nocgo builds (e.g. linux-amd64-nocgo),
// // there is no "runtime/cgo" package,
// // so cgo-generated Go files will have a failing import.
// if !build.Default.CgoEnabled {
// return
// }
// Test that we can load cgo-using packages with
// DisableCgo=true/false, which, among other things, causes go
// list to select pure Go/native implementations, respectively,
// based on build tags.
//
// Each entry specifies a package-level object and the generic
// file expected to define it when cgo is disabled.
// When cgo is enabled, the exact file is not specified (since
// it varies by platform), but must differ from the generic one.
//
// The test also loads the actual file to verify that the
// object is indeed defined at that location.
for _, test := range []struct {
pkg, name, genericFile string
}{
{"net", "cgoLookupHost", "cgo_stub.go"},
{"os/user", "current", "lookup_stubs.go"},
} {
for i := 0; i < 2; i++ { // !cgo, cgo
opts := &packages.Options{
DisableCgo: i == 0,
Error: func(error) {},
}
pkgs, err := packages.TypeCheck(opts, test.pkg)
if err != nil {
t.Errorf("Load failed: %v", err)
continue
}
pkg := pkgs[0]
obj := pkg.Type.Scope().Lookup(test.name)
if obj == nil {
t.Errorf("no object %s.%s", test.pkg, test.name)
continue
}
posn := pkg.Fset.Position(obj.Pos())
if false {
t.Logf("DisableCgo=%t, obj=%s, posn=%s", opts.DisableCgo, obj, posn)
}
gotFile := filepath.Base(posn.Filename)
filesMatch := gotFile == test.genericFile
if !opts.DisableCgo && filesMatch {
t.Errorf("!DisableCgo: %s found in %s, want native file",
obj, gotFile)
} else if opts.DisableCgo && !filesMatch {
t.Errorf("DisableCgo: %s found in %s, want %s",
obj, gotFile, test.genericFile)
}
// Load the file and check the object is declared at the right place.
b, err := ioutil.ReadFile(posn.Filename)
if err != nil {
t.Errorf("can't read %s: %s", posn.Filename, err)
continue
}
line := string(bytes.Split(b, []byte("\n"))[posn.Line-1])
// Don't assume posn.Column is accurate.
if !strings.Contains(line, "func "+test.name) {
t.Errorf("%s: %s not declared here (looking at %q)", posn, obj, line)
}
}
}
}