
Update to v2 SDK
updated dependencies
2022-08-06 16:21:18 +02:00
parent 989e7079a5
commit e1266ebf64
1909 changed files with 122367 additions and 279095 deletions


@@ -1,9 +0,0 @@
y.output
# ignore intellij files
.idea
*.iml
*.ipr
*.iws
*.test


@@ -1,12 +0,0 @@
sudo: false

language: go

go:
  - 1.8

branches:
  only:
    - master

script: make test


@@ -1,354 +0,0 @@
Mozilla Public License, version 2.0
1. Definitions
1.1. “Contributor”
means each individual or legal entity that creates, contributes to the
creation of, or owns Covered Software.
1.2. “Contributor Version”
means the combination of the Contributions of others (if any) used by a
Contributor and that particular Contributor's Contribution.
1.3. “Contribution”
means Covered Software of a particular Contributor.
1.4. “Covered Software”
means Source Code Form to which the initial Contributor has attached the
notice in Exhibit A, the Executable Form of such Source Code Form, and
Modifications of such Source Code Form, in each case including portions
thereof.
1.5. “Incompatible With Secondary Licenses”
means
a. that the initial Contributor has attached the notice described in
Exhibit B to the Covered Software; or
b. that the Covered Software was made available under the terms of version
1.1 or earlier of the License, but not also under the terms of a
Secondary License.
1.6. “Executable Form”
means any form of the work other than Source Code Form.
1.7. “Larger Work”
means a work that combines Covered Software with other material, in a separate
file or files, that is not Covered Software.
1.8. “License”
means this document.
1.9. “Licensable”
means having the right to grant, to the maximum extent possible, whether at the
time of the initial grant or subsequently, any and all of the rights conveyed by
this License.
1.10. “Modifications”
means any of the following:
a. any file in Source Code Form that results from an addition to, deletion
from, or modification of the contents of Covered Software; or
b. any new file in Source Code Form that contains any Covered Software.
1.11. “Patent Claims” of a Contributor
means any patent claim(s), including without limitation, method, process,
and apparatus claims, in any patent Licensable by such Contributor that
would be infringed, but for the grant of the License, by the making,
using, selling, offering for sale, having made, import, or transfer of
either its Contributions or its Contributor Version.
1.12. “Secondary License”
means either the GNU General Public License, Version 2.0, the GNU Lesser
General Public License, Version 2.1, the GNU Affero General Public
License, Version 3.0, or any later versions of those licenses.
1.13. “Source Code Form”
means the form of the work preferred for making modifications.
1.14. “You” (or “Your”)
means an individual or a legal entity exercising rights under this
License. For legal entities, “You” includes any entity that controls, is
controlled by, or is under common control with You. For purposes of this
definition, “control” means (a) the power, direct or indirect, to cause
the direction or management of such entity, whether by contract or
otherwise, or (b) ownership of more than fifty percent (50%) of the
outstanding shares or beneficial ownership of such entity.
2. License Grants and Conditions
2.1. Grants
Each Contributor hereby grants You a world-wide, royalty-free,
non-exclusive license:
a. under intellectual property rights (other than patent or trademark)
Licensable by such Contributor to use, reproduce, make available,
modify, display, perform, distribute, and otherwise exploit its
Contributions, either on an unmodified basis, with Modifications, or as
part of a Larger Work; and
b. under Patent Claims of such Contributor to make, use, sell, offer for
sale, have made, import, and otherwise transfer either its Contributions
or its Contributor Version.
2.2. Effective Date
The licenses granted in Section 2.1 with respect to any Contribution become
effective for each Contribution on the date the Contributor first distributes
such Contribution.
2.3. Limitations on Grant Scope
The licenses granted in this Section 2 are the only rights granted under this
License. No additional rights or licenses will be implied from the distribution
or licensing of Covered Software under this License. Notwithstanding Section
2.1(b) above, no patent license is granted by a Contributor:
a. for any code that a Contributor has removed from Covered Software; or
b. for infringements caused by: (i) Your and any other third party's
modifications of Covered Software, or (ii) the combination of its
Contributions with other software (except as part of its Contributor
Version); or
c. under Patent Claims infringed by Covered Software in the absence of its
Contributions.
This License does not grant any rights in the trademarks, service marks, or
logos of any Contributor (except as may be necessary to comply with the
notice requirements in Section 3.4).
2.4. Subsequent Licenses
No Contributor makes additional grants as a result of Your choice to
distribute the Covered Software under a subsequent version of this License
(see Section 10.2) or under the terms of a Secondary License (if permitted
under the terms of Section 3.3).
2.5. Representation
Each Contributor represents that the Contributor believes its Contributions
are its original creation(s) or it has sufficient rights to grant the
rights to its Contributions conveyed by this License.
2.6. Fair Use
This License is not intended to limit any rights You have under applicable
copyright doctrines of fair use, fair dealing, or other equivalents.
2.7. Conditions
Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in
Section 2.1.
3. Responsibilities
3.1. Distribution of Source Form
All distribution of Covered Software in Source Code Form, including any
Modifications that You create or to which You contribute, must be under the
terms of this License. You must inform recipients that the Source Code Form
of the Covered Software is governed by the terms of this License, and how
they can obtain a copy of this License. You may not attempt to alter or
restrict the recipients' rights in the Source Code Form.
3.2. Distribution of Executable Form
If You distribute Covered Software in Executable Form then:
a. such Covered Software must also be made available in Source Code Form,
as described in Section 3.1, and You must inform recipients of the
Executable Form how they can obtain a copy of such Source Code Form by
reasonable means in a timely manner, at a charge no more than the cost
of distribution to the recipient; and
b. You may distribute such Executable Form under the terms of this License,
or sublicense it under different terms, provided that the license for
the Executable Form does not attempt to limit or alter the recipient's
rights in the Source Code Form under this License.
3.3. Distribution of a Larger Work
You may create and distribute a Larger Work under terms of Your choice,
provided that You also comply with the requirements of this License for the
Covered Software. If the Larger Work is a combination of Covered Software
with a work governed by one or more Secondary Licenses, and the Covered
Software is not Incompatible With Secondary Licenses, this License permits
You to additionally distribute such Covered Software under the terms of
such Secondary License(s), so that the recipient of the Larger Work may, at
their option, further distribute the Covered Software under the terms of
either this License or such Secondary License(s).
3.4. Notices
You may not remove or alter the substance of any license notices (including
copyright notices, patent notices, disclaimers of warranty, or limitations
of liability) contained within the Source Code Form of the Covered
Software, except that You may alter any license notices to the extent
required to remedy known factual inaccuracies.
3.5. Application of Additional Terms
You may choose to offer, and to charge a fee for, warranty, support,
indemnity or liability obligations to one or more recipients of Covered
Software. However, You may do so only on Your own behalf, and not on behalf
of any Contributor. You must make it absolutely clear that any such
warranty, support, indemnity, or liability obligation is offered by You
alone, and You hereby agree to indemnify every Contributor for any
liability incurred by such Contributor as a result of warranty, support,
indemnity or liability terms You offer. You may include additional
disclaimers of warranty and limitations of liability specific to any
jurisdiction.
4. Inability to Comply Due to Statute or Regulation
If it is impossible for You to comply with any of the terms of this License
with respect to some or all of the Covered Software due to statute, judicial
order, or regulation then You must: (a) comply with the terms of this License
to the maximum extent possible; and (b) describe the limitations and the code
they affect. Such description must be placed in a text file included with all
distributions of the Covered Software under this License. Except to the
extent prohibited by statute or regulation, such description must be
sufficiently detailed for a recipient of ordinary skill to be able to
understand it.
5. Termination
5.1. The rights granted under this License will terminate automatically if You
fail to comply with any of its terms. However, if You become compliant,
then the rights granted under this License from a particular Contributor
are reinstated (a) provisionally, unless and until such Contributor
explicitly and finally terminates Your grants, and (b) on an ongoing basis,
if such Contributor fails to notify You of the non-compliance by some
reasonable means prior to 60 days after You have come back into compliance.
Moreover, Your grants from a particular Contributor are reinstated on an
ongoing basis if such Contributor notifies You of the non-compliance by
some reasonable means, this is the first time You have received notice of
non-compliance with this License from such Contributor, and You become
compliant prior to 30 days after Your receipt of the notice.
5.2. If You initiate litigation against any entity by asserting a patent
infringement claim (excluding declaratory judgment actions, counter-claims,
and cross-claims) alleging that a Contributor Version directly or
indirectly infringes any patent, then the rights granted to You by any and
all Contributors for the Covered Software under Section 2.1 of this License
shall terminate.
5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user
license agreements (excluding distributors and resellers) which have been
validly granted by You or Your distributors under this License prior to
termination shall survive termination.
6. Disclaimer of Warranty
Covered Software is provided under this License on an “as is” basis, without
warranty of any kind, either expressed, implied, or statutory, including,
without limitation, warranties that the Covered Software is free of defects,
merchantable, fit for a particular purpose or non-infringing. The entire
risk as to the quality and performance of the Covered Software is with You.
Should any Covered Software prove defective in any respect, You (not any
Contributor) assume the cost of any necessary servicing, repair, or
correction. This disclaimer of warranty constitutes an essential part of this
License. No use of any Covered Software is authorized under this License
except under this disclaimer.
7. Limitation of Liability
Under no circumstances and under no legal theory, whether tort (including
negligence), contract, or otherwise, shall any Contributor, or anyone who
distributes Covered Software as permitted above, be liable to You for any
direct, indirect, special, incidental, or consequential damages of any
character including, without limitation, damages for lost profits, loss of
goodwill, work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses, even if such party shall have been
informed of the possibility of such damages. This limitation of liability
shall not apply to liability for death or personal injury resulting from such
party's negligence to the extent applicable law prohibits such limitation.
Some jurisdictions do not allow the exclusion or limitation of incidental or
consequential damages, so this exclusion and limitation may not apply to You.
8. Litigation
Any litigation relating to this License may be brought only in the courts of
a jurisdiction where the defendant maintains its principal place of business
and such litigation shall be governed by laws of that jurisdiction, without
reference to its conflict-of-law provisions. Nothing in this Section shall
prevent a party's ability to bring cross-claims or counter-claims.
9. Miscellaneous
This License represents the complete agreement concerning the subject matter
hereof. If any provision of this License is held to be unenforceable, such
provision shall be reformed only to the extent necessary to make it
enforceable. Any law or regulation which provides that the language of a
contract shall be construed against the drafter shall not be used to construe
this License against a Contributor.
10. Versions of the License
10.1. New Versions
Mozilla Foundation is the license steward. Except as provided in Section
10.3, no one other than the license steward has the right to modify or
publish new versions of this License. Each version will be given a
distinguishing version number.
10.2. Effect of New Versions
You may distribute the Covered Software under the terms of the version of
the License under which You originally received the Covered Software, or
under the terms of any subsequent version published by the license
steward.
10.3. Modified Versions
If you create software not governed by this License, and you want to
create a new license for such software, you may create and use a modified
version of this License if you rename the license and remove any
references to the name of the license steward (except to note that such
modified license differs from this License).
10.4. Distributing Source Code Form that is Incompatible With Secondary Licenses
If You choose to distribute Source Code Form that is Incompatible With
Secondary Licenses under the terms of this version of the License, the
notice described in Exhibit B of this License must be attached.
Exhibit A - Source Code Form License Notice
This Source Code Form is subject to the
terms of the Mozilla Public License, v.
2.0. If a copy of the MPL was not
distributed with this file, You can
obtain one at
http://mozilla.org/MPL/2.0/.
If it is not possible or desirable to put the notice in a particular file, then
You may include the notice in a location (such as a LICENSE file in a relevant
directory) where a recipient would be likely to look for such a notice.
You may add additional accurate notices of copyright ownership.
Exhibit B - “Incompatible With Secondary Licenses” Notice
This Source Code Form is “Incompatible
With Secondary Licenses”, as defined by
the Mozilla Public License, v. 2.0.


@@ -1,18 +0,0 @@
TEST?=./...

default: test

fmt: generate
	go fmt ./...

test: generate
	go get -t ./...
	go test $(TEST) $(TESTARGS)

generate:
	go generate ./...

updatedeps:
	go get -u golang.org/x/tools/cmd/stringer

.PHONY: default generate test updatedeps


@@ -1,125 +0,0 @@
# HCL
[![GoDoc](https://godoc.org/github.com/hashicorp/hcl?status.png)](https://godoc.org/github.com/hashicorp/hcl) [![Build Status](https://travis-ci.org/hashicorp/hcl.svg?branch=master)](https://travis-ci.org/hashicorp/hcl)
HCL (HashiCorp Configuration Language) is a configuration language built
by HashiCorp. The goal of HCL is to build a structured configuration language
that is both human and machine friendly for use with command-line tools, but
specifically targeted towards DevOps tools, servers, etc.
HCL is also fully JSON compatible. That is, JSON can be used as completely
valid input to a system expecting HCL. This helps make HCL-based systems
interoperable with other systems.
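For illustration, a small sketch decoding the same value from either syntax
with this package's `Decode` function (the `port` key is an arbitrary example):

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl"
)

func main() {
	// The same document in HCL and in JSON; both are valid decoder input.
	var fromHCL, fromJSON map[string]interface{}
	if err := hcl.Decode(&fromHCL, `port = 8080`); err != nil {
		panic(err)
	}
	if err := hcl.Decode(&fromJSON, `{"port": 8080}`); err != nil {
		panic(err)
	}
	fmt.Println(fromHCL["port"], fromJSON["port"]) // 8080 8080
}
```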
HCL is heavily inspired by
[libucl](https://github.com/vstakhov/libucl),
nginx configuration, and other similar languages.
## Why?
A common question on first seeing HCL is: why not JSON, YAML, etc.?
Prior to HCL, the tools we built at [HashiCorp](http://www.hashicorp.com)
used a variety of configuration languages from full programming languages
such as Ruby to complete data structure languages such as JSON. What we
learned is that some people wanted human-friendly configuration languages
and some people wanted machine-friendly languages.
JSON strikes a nice balance here, but is fairly verbose and, most
importantly, doesn't support comments. With YAML, we found that beginners
had a really hard time determining what the actual structure was, and
more often than not ended up guessing whether a hyphen, colon, etc. was
needed to represent some configuration key.
Full programming languages such as Ruby enable complex behavior that a
configuration language shouldn't usually allow, and they also force
people to learn some subset of Ruby.
Because of this, we decided to create our own configuration language
that is JSON-compatible. Our configuration language (HCL) is designed
to be written and modified by humans. The API for HCL allows JSON
as an input so that it is also machine-friendly (machines can generate
JSON instead of trying to generate HCL).
Our goal with HCL is not to alienate other configuration languages.
It is instead to provide HCL as a specialized language for our tools,
and JSON as the interoperability layer.
## Syntax
For a complete grammar, please see the parser itself. A high-level overview
of the syntax and grammar is listed here.
* Single line comments start with `#` or `//`
* Multi-line comments are wrapped in `/*` and `*/`. Nested block comments
are not allowed. A multi-line comment (also known as a block comment)
terminates at the first `*/` found.
* Values are assigned with the syntax `key = value` (whitespace doesn't
matter). The value can be any primitive: a string, number, boolean,
object, or list.
* Strings are double-quoted and can contain any UTF-8 characters.
Example: `"Hello, World"`
* Multi-line strings start with `<<EOF` at the end of a line, and end
with `EOF` on its own line ([here documents](https://en.wikipedia.org/wiki/Here_document)).
Any text may be used in place of `EOF`. Example:
```
<<FOO
hello
world
FOO
```
* Numbers are assumed to be base 10. If you prefix a number with `0x`,
it is treated as hexadecimal. If it is prefixed with `0`, it is
treated as octal. Numbers can also be in scientific notation: `1e10`.
A short example follows this list.
* Boolean values: `true`, `false`
* Arrays are made by wrapping values in `[]`. Example:
`["foo", "bar", 42]`. Arrays can contain primitives,
other arrays, and objects. As an alternative, lists
of objects can be created with repeated blocks, using
this structure:
```hcl
service {
key = "value"
}
service {
key = "value"
}
```
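A small sketch of the number forms described above (the keys are arbitrary):
```hcl
count = 10    # base 10
mask  = 0x1f  # hexadecimal (decimal 31)
mode  = 0755  # octal (decimal 493)
big   = 1e10  # scientific notation
```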
Objects and nested objects are created using the structure shown below:
```
variable "ami" {
description = "the AMI to use"
}
```
This would be equivalent to the following JSON:
``` json
{
"variable": {
"ami": {
"description": "the AMI to use"
}
}
}
```
## Thanks
Thanks to:
* [@vstakhov](https://github.com/vstakhov) - The original libucl parser
and syntax that HCL was based on.
* [@fatih](https://github.com/fatih) - The rewritten HCL parser
in pure Go (no goyacc) and support for a printer.


@@ -1,19 +0,0 @@
version: "build-{branch}-{build}"
image: Visual Studio 2015
clone_folder: c:\gopath\src\github.com\hashicorp\hcl
environment:
  GOPATH: c:\gopath
init:
  - git config --global core.autocrlf false
install:
  - cmd: >-
      echo %Path%

      go version

      go env

      go get -t ./...
build_script:
  - cmd: go test -v ./...


@@ -1,724 +0,0 @@
package hcl
import (
"errors"
"fmt"
"reflect"
"sort"
"strconv"
"strings"
"github.com/hashicorp/hcl/hcl/ast"
"github.com/hashicorp/hcl/hcl/parser"
"github.com/hashicorp/hcl/hcl/token"
)
// This is the tag to use with structures to have settings for HCL
const tagName = "hcl"
var (
// nodeType holds a reference to the type of ast.Node
nodeType reflect.Type = findNodeType()
)
// Unmarshal accepts a byte slice as input and writes the
// data to the value pointed to by v.
func Unmarshal(bs []byte, v interface{}) error {
root, err := parse(bs)
if err != nil {
return err
}
return DecodeObject(v, root)
}
// Decode reads the given input and decodes it into the structure
// given by `out`.
func Decode(out interface{}, in string) error {
obj, err := Parse(in)
if err != nil {
return err
}
return DecodeObject(out, obj)
}
// DecodeObject is a lower-level version of Decode. It decodes a
// raw Object into the given output.
func DecodeObject(out interface{}, n ast.Node) error {
val := reflect.ValueOf(out)
if val.Kind() != reflect.Ptr {
return errors.New("result must be a pointer")
}
// If we have the file, we really decode the root node
if f, ok := n.(*ast.File); ok {
n = f.Node
}
var d decoder
return d.decode("root", n, val.Elem())
}
type decoder struct {
stack []reflect.Kind
}
func (d *decoder) decode(name string, node ast.Node, result reflect.Value) error {
k := result
// If we have an interface with a valid value, we use that
// for the check.
if result.Kind() == reflect.Interface {
elem := result.Elem()
if elem.IsValid() {
k = elem
}
}
// Push current onto stack unless it is an interface.
if k.Kind() != reflect.Interface {
d.stack = append(d.stack, k.Kind())
// Schedule a pop
defer func() {
d.stack = d.stack[:len(d.stack)-1]
}()
}
switch k.Kind() {
case reflect.Bool:
return d.decodeBool(name, node, result)
case reflect.Float64:
return d.decodeFloat(name, node, result)
case reflect.Int, reflect.Int32, reflect.Int64:
return d.decodeInt(name, node, result)
case reflect.Interface:
// When we see an interface, we make our own thing
return d.decodeInterface(name, node, result)
case reflect.Map:
return d.decodeMap(name, node, result)
case reflect.Ptr:
return d.decodePtr(name, node, result)
case reflect.Slice:
return d.decodeSlice(name, node, result)
case reflect.String:
return d.decodeString(name, node, result)
case reflect.Struct:
return d.decodeStruct(name, node, result)
default:
return &parser.PosError{
Pos: node.Pos(),
Err: fmt.Errorf("%s: unknown kind to decode into: %s", name, k.Kind()),
}
}
}
func (d *decoder) decodeBool(name string, node ast.Node, result reflect.Value) error {
switch n := node.(type) {
case *ast.LiteralType:
if n.Token.Type == token.BOOL {
v, err := strconv.ParseBool(n.Token.Text)
if err != nil {
return err
}
result.Set(reflect.ValueOf(v))
return nil
}
}
return &parser.PosError{
Pos: node.Pos(),
Err: fmt.Errorf("%s: unknown type %T", name, node),
}
}
func (d *decoder) decodeFloat(name string, node ast.Node, result reflect.Value) error {
switch n := node.(type) {
case *ast.LiteralType:
if n.Token.Type == token.FLOAT {
v, err := strconv.ParseFloat(n.Token.Text, 64)
if err != nil {
return err
}
result.Set(reflect.ValueOf(v))
return nil
}
}
return &parser.PosError{
Pos: node.Pos(),
Err: fmt.Errorf("%s: unknown type %T", name, node),
}
}
func (d *decoder) decodeInt(name string, node ast.Node, result reflect.Value) error {
switch n := node.(type) {
case *ast.LiteralType:
switch n.Token.Type {
case token.NUMBER:
v, err := strconv.ParseInt(n.Token.Text, 0, 0)
if err != nil {
return err
}
if result.Kind() == reflect.Interface {
result.Set(reflect.ValueOf(int(v)))
} else {
result.SetInt(v)
}
return nil
case token.STRING:
v, err := strconv.ParseInt(n.Token.Value().(string), 0, 0)
if err != nil {
return err
}
if result.Kind() == reflect.Interface {
result.Set(reflect.ValueOf(int(v)))
} else {
result.SetInt(v)
}
return nil
}
}
return &parser.PosError{
Pos: node.Pos(),
Err: fmt.Errorf("%s: unknown type %T", name, node),
}
}
func (d *decoder) decodeInterface(name string, node ast.Node, result reflect.Value) error {
// When we see an ast.Node, we retain the value to enable deferred decoding.
// Very useful in situations where we want to preserve ast.Node information
// like Pos
if result.Type() == nodeType && result.CanSet() {
result.Set(reflect.ValueOf(node))
return nil
}
var set reflect.Value
redecode := true
// For testing types, ObjectType should just be treated as a list. We
// set this to a temporary var because we want to pass in the real node.
testNode := node
if ot, ok := node.(*ast.ObjectType); ok {
testNode = ot.List
}
switch n := testNode.(type) {
case *ast.ObjectList:
// If we're at the root or we're directly within a slice, then we
// decode objects into map[string]interface{}, otherwise we decode
// them into lists.
if len(d.stack) == 0 || d.stack[len(d.stack)-1] == reflect.Slice {
var temp map[string]interface{}
tempVal := reflect.ValueOf(temp)
result := reflect.MakeMap(
reflect.MapOf(
reflect.TypeOf(""),
tempVal.Type().Elem()))
set = result
} else {
var temp []map[string]interface{}
tempVal := reflect.ValueOf(temp)
result := reflect.MakeSlice(
reflect.SliceOf(tempVal.Type().Elem()), 0, len(n.Items))
set = result
}
case *ast.ObjectType:
// If we're at the root or we're directly within a slice, then we
// decode objects into map[string]interface{}, otherwise we decode
// them into lists.
if len(d.stack) == 0 || d.stack[len(d.stack)-1] == reflect.Slice {
var temp map[string]interface{}
tempVal := reflect.ValueOf(temp)
result := reflect.MakeMap(
reflect.MapOf(
reflect.TypeOf(""),
tempVal.Type().Elem()))
set = result
} else {
var temp []map[string]interface{}
tempVal := reflect.ValueOf(temp)
result := reflect.MakeSlice(
reflect.SliceOf(tempVal.Type().Elem()), 0, 1)
set = result
}
case *ast.ListType:
var temp []interface{}
tempVal := reflect.ValueOf(temp)
result := reflect.MakeSlice(
reflect.SliceOf(tempVal.Type().Elem()), 0, 0)
set = result
case *ast.LiteralType:
switch n.Token.Type {
case token.BOOL:
var result bool
set = reflect.Indirect(reflect.New(reflect.TypeOf(result)))
case token.FLOAT:
var result float64
set = reflect.Indirect(reflect.New(reflect.TypeOf(result)))
case token.NUMBER:
var result int
set = reflect.Indirect(reflect.New(reflect.TypeOf(result)))
case token.STRING, token.HEREDOC:
set = reflect.Indirect(reflect.New(reflect.TypeOf("")))
default:
return &parser.PosError{
Pos: node.Pos(),
Err: fmt.Errorf("%s: cannot decode into interface: %T", name, node),
}
}
default:
return fmt.Errorf(
"%s: cannot decode into interface: %T",
name, node)
}
// Set the result to what it's supposed to be, then reset
// result so we don't reflect into this method anymore.
result.Set(set)
if redecode {
// Revisit the node so that we can use the newly instantiated
// thing and populate it.
if err := d.decode(name, node, result); err != nil {
return err
}
}
return nil
}
func (d *decoder) decodeMap(name string, node ast.Node, result reflect.Value) error {
if item, ok := node.(*ast.ObjectItem); ok {
node = &ast.ObjectList{Items: []*ast.ObjectItem{item}}
}
if ot, ok := node.(*ast.ObjectType); ok {
node = ot.List
}
n, ok := node.(*ast.ObjectList)
if !ok {
return &parser.PosError{
Pos: node.Pos(),
Err: fmt.Errorf("%s: not an object type for map (%T)", name, node),
}
}
// If we have an interface, then we can address the interface,
// but not the map itself, so get the element but set the interface
set := result
if result.Kind() == reflect.Interface {
result = result.Elem()
}
resultType := result.Type()
resultElemType := resultType.Elem()
resultKeyType := resultType.Key()
if resultKeyType.Kind() != reflect.String {
return &parser.PosError{
Pos: node.Pos(),
Err: fmt.Errorf("%s: map must have string keys", name),
}
}
// Make a map if it is nil
resultMap := result
if result.IsNil() {
resultMap = reflect.MakeMap(
reflect.MapOf(resultKeyType, resultElemType))
}
// Go through each element and decode it.
done := make(map[string]struct{})
for _, item := range n.Items {
if item.Val == nil {
continue
}
// github.com/hashicorp/terraform/issue/5740
if len(item.Keys) == 0 {
return &parser.PosError{
Pos: node.Pos(),
Err: fmt.Errorf("%s: map must have string keys", name),
}
}
// Get the key we're dealing with, which is the first item
keyStr := item.Keys[0].Token.Value().(string)
// If we've already processed this key, then ignore it
if _, ok := done[keyStr]; ok {
continue
}
// Determine the value. If we have more than one key, then we
// get the objectlist of only these keys.
itemVal := item.Val
if len(item.Keys) > 1 {
itemVal = n.Filter(keyStr)
done[keyStr] = struct{}{}
}
// Make the field name
fieldName := fmt.Sprintf("%s.%s", name, keyStr)
// Get the key/value as reflection values
key := reflect.ValueOf(keyStr)
val := reflect.Indirect(reflect.New(resultElemType))
// If we have a pre-existing value in the map, use that
oldVal := resultMap.MapIndex(key)
if oldVal.IsValid() {
val.Set(oldVal)
}
// Decode!
if err := d.decode(fieldName, itemVal, val); err != nil {
return err
}
// Set the value on the map
resultMap.SetMapIndex(key, val)
}
// Set the final map if we can
set.Set(resultMap)
return nil
}
func (d *decoder) decodePtr(name string, node ast.Node, result reflect.Value) error {
// Create an element of the concrete (non pointer) type and decode
// into that. Then set the value of the pointer to this type.
resultType := result.Type()
resultElemType := resultType.Elem()
val := reflect.New(resultElemType)
if err := d.decode(name, node, reflect.Indirect(val)); err != nil {
return err
}
result.Set(val)
return nil
}
func (d *decoder) decodeSlice(name string, node ast.Node, result reflect.Value) error {
// If we have an interface, then we can address the interface,
// but not the slice itself, so get the element but set the interface
set := result
if result.Kind() == reflect.Interface {
result = result.Elem()
}
// Create the slice if it is nil
resultType := result.Type()
resultElemType := resultType.Elem()
if result.IsNil() {
resultSliceType := reflect.SliceOf(resultElemType)
result = reflect.MakeSlice(
resultSliceType, 0, 0)
}
// Figure out the items we'll be copying into the slice
var items []ast.Node
switch n := node.(type) {
case *ast.ObjectList:
items = make([]ast.Node, len(n.Items))
for i, item := range n.Items {
items[i] = item
}
case *ast.ObjectType:
items = []ast.Node{n}
case *ast.ListType:
items = n.List
default:
return &parser.PosError{
Pos: node.Pos(),
Err: fmt.Errorf("unknown slice type: %T", node),
}
}
for i, item := range items {
fieldName := fmt.Sprintf("%s[%d]", name, i)
// Decode
val := reflect.Indirect(reflect.New(resultElemType))
// if item is an object that was decoded from ambiguous JSON and
// flattened, make sure it's expanded if it needs to decode into a
// defined structure.
item := expandObject(item, val)
if err := d.decode(fieldName, item, val); err != nil {
return err
}
// Append it onto the slice
result = reflect.Append(result, val)
}
set.Set(result)
return nil
}
// expandObject detects if an ambiguous JSON object was flattened to a List which
// should be decoded into a struct, and expands the AST to decode properly.
func expandObject(node ast.Node, result reflect.Value) ast.Node {
item, ok := node.(*ast.ObjectItem)
if !ok {
return node
}
elemType := result.Type()
// our target type must be a struct
switch elemType.Kind() {
case reflect.Ptr:
switch elemType.Elem().Kind() {
case reflect.Struct:
//OK
default:
return node
}
case reflect.Struct:
//OK
default:
return node
}
// A list value will have a key and field name. If it had more fields,
// it wouldn't have been flattened.
if len(item.Keys) != 2 {
return node
}
keyToken := item.Keys[0].Token
item.Keys = item.Keys[1:]
// we need to un-flatten the ast enough to decode
newNode := &ast.ObjectItem{
Keys: []*ast.ObjectKey{
&ast.ObjectKey{
Token: keyToken,
},
},
Val: &ast.ObjectType{
List: &ast.ObjectList{
Items: []*ast.ObjectItem{item},
},
},
}
return newNode
}
func (d *decoder) decodeString(name string, node ast.Node, result reflect.Value) error {
switch n := node.(type) {
case *ast.LiteralType:
switch n.Token.Type {
case token.NUMBER:
result.Set(reflect.ValueOf(n.Token.Text).Convert(result.Type()))
return nil
case token.STRING, token.HEREDOC:
result.Set(reflect.ValueOf(n.Token.Value()).Convert(result.Type()))
return nil
}
}
return &parser.PosError{
Pos: node.Pos(),
Err: fmt.Errorf("%s: unknown type for string %T", name, node),
}
}
func (d *decoder) decodeStruct(name string, node ast.Node, result reflect.Value) error {
var item *ast.ObjectItem
if it, ok := node.(*ast.ObjectItem); ok {
item = it
node = it.Val
}
if ot, ok := node.(*ast.ObjectType); ok {
node = ot.List
}
// Handle the special case where the object itself is a literal. Previously
// the yacc parser would always ensure top-level elements were arrays. The new
// parser does not make the same guarantees, thus we need to convert any
// top-level literal elements into a list.
if _, ok := node.(*ast.LiteralType); ok && item != nil {
node = &ast.ObjectList{Items: []*ast.ObjectItem{item}}
}
list, ok := node.(*ast.ObjectList)
if !ok {
return &parser.PosError{
Pos: node.Pos(),
Err: fmt.Errorf("%s: not an object type for struct (%T)", name, node),
}
}
// This slice will keep track of all the structs we'll be decoding.
// There can be more than one struct if there are embedded structs
// that are squashed.
structs := make([]reflect.Value, 1, 5)
structs[0] = result
// Compile the list of all the fields that we're going to be decoding
// from all the structs.
fields := make(map[*reflect.StructField]reflect.Value)
for len(structs) > 0 {
structVal := structs[0]
structs = structs[1:]
structType := structVal.Type()
for i := 0; i < structType.NumField(); i++ {
fieldType := structType.Field(i)
tagParts := strings.Split(fieldType.Tag.Get(tagName), ",")
// Ignore fields with tag name "-"
if tagParts[0] == "-" {
continue
}
if fieldType.Anonymous {
fieldKind := fieldType.Type.Kind()
if fieldKind != reflect.Struct {
return &parser.PosError{
Pos: node.Pos(),
Err: fmt.Errorf("%s: unsupported type to struct: %s",
fieldType.Name, fieldKind),
}
}
// We have an embedded field. We "squash" the fields down
// if specified in the tag.
squash := false
for _, tag := range tagParts[1:] {
if tag == "squash" {
squash = true
break
}
}
if squash {
structs = append(
structs, result.FieldByName(fieldType.Name))
continue
}
}
// Normal struct field, store it away
fields[&fieldType] = structVal.Field(i)
}
}
usedKeys := make(map[string]struct{})
decodedFields := make([]string, 0, len(fields))
decodedFieldsVal := make([]reflect.Value, 0)
unusedKeysVal := make([]reflect.Value, 0)
for fieldType, field := range fields {
if !field.IsValid() {
// This should never happen
panic("field is not valid")
}
// If we can't set the field, then it is unexported or something,
// and we just continue onwards.
if !field.CanSet() {
continue
}
fieldName := fieldType.Name
tagValue := fieldType.Tag.Get(tagName)
tagParts := strings.SplitN(tagValue, ",", 2)
if len(tagParts) >= 2 {
switch tagParts[1] {
case "decodedFields":
decodedFieldsVal = append(decodedFieldsVal, field)
continue
case "key":
if item == nil {
return &parser.PosError{
Pos: node.Pos(),
Err: fmt.Errorf("%s: %s asked for 'key', impossible",
name, fieldName),
}
}
field.SetString(item.Keys[0].Token.Value().(string))
continue
case "unusedKeys":
unusedKeysVal = append(unusedKeysVal, field)
continue
}
}
if tagParts[0] != "" {
fieldName = tagParts[0]
}
// Determine the element we'll use to decode. If it is a single
// match (only object with the field), then we decode it exactly.
// If it is a prefix match, then we decode the matches.
filter := list.Filter(fieldName)
prefixMatches := filter.Children()
matches := filter.Elem()
if len(matches.Items) == 0 && len(prefixMatches.Items) == 0 {
continue
}
// Track the used key
usedKeys[fieldName] = struct{}{}
// Create the field name and decode. We range over the elements
// because we actually want the value.
fieldName = fmt.Sprintf("%s.%s", name, fieldName)
if len(prefixMatches.Items) > 0 {
if err := d.decode(fieldName, prefixMatches, field); err != nil {
return err
}
}
for _, match := range matches.Items {
var decodeNode ast.Node = match.Val
if ot, ok := decodeNode.(*ast.ObjectType); ok {
decodeNode = &ast.ObjectList{Items: ot.List.Items}
}
if err := d.decode(fieldName, decodeNode, field); err != nil {
return err
}
}
decodedFields = append(decodedFields, fieldType.Name)
}
if len(decodedFieldsVal) > 0 {
// Sort it so that it is deterministic
sort.Strings(decodedFields)
for _, v := range decodedFieldsVal {
v.Set(reflect.ValueOf(decodedFields))
}
}
return nil
}
// findNodeType returns the type of ast.Node
func findNodeType() reflect.Type {
var nodeContainer struct {
Node ast.Node
}
value := reflect.ValueOf(nodeContainer).FieldByName("Node")
return value.Type()
}
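As a usage sketch of the struct decoding above (the `Service` and `Config`
types and the input are invented for illustration; the `hcl:",key"` tag
exercises the `"key"` case in `decodeStruct`):

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/hcl"
)

// Service mirrors one `service "<name>" { ... }` block; the ",key" tag
// stores the block key into Name.
type Service struct {
	Name string `hcl:",key"`
	Port int    `hcl:"port"`
}

// Config is the top-level structure; repeated blocks decode into a slice.
type Config struct {
	Region   string    `hcl:"region"`
	Services []Service `hcl:"service"`
}

func main() {
	const input = `
region = "us-east-1"

service "web" { port = 80 }
service "api" { port = 8080 }
`
	var cfg Config
	if err := hcl.Decode(&cfg, input); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%+v\n", cfg)
	// {Region:us-east-1 Services:[{Name:web Port:80} {Name:api Port:8080}]}
}
```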


@@ -1,11 +0,0 @@
// Package hcl decodes HCL into usable Go structures.
//
// hcl input can come in either pure HCL format or JSON format.
// It can be parsed into an AST, and then decoded into a structure,
// or it can be decoded directly from a string into a structure.
//
// If you choose to parse HCL into a raw AST, the benefit is that you
// can write custom visitor implementations to implement custom
// semantic checks. By default, HCL does not perform any semantic
// checks.
package hcl
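A minimal sketch of the two approaches this comment describes, direct decoding
versus parse-then-decode (the `name` key and the map shape are arbitrary
choices):

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/hcl"
)

func main() {
	src := `name = "example"`

	// Path 1: decode directly from a string into a structure.
	var direct map[string]string
	if err := hcl.Decode(&direct, src); err != nil {
		log.Fatal(err)
	}

	// Path 2: parse into a raw AST first (the point where custom semantic
	// checks could walk the tree), then decode the AST into a structure.
	file, err := hcl.Parse(src)
	if err != nil {
		log.Fatal(err)
	}
	var viaAST map[string]string
	if err := hcl.DecodeObject(&viaAST, file); err != nil {
		log.Fatal(err)
	}

	fmt.Println(direct["name"], viaAST["name"]) // example example
}
```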


@@ -1,219 +0,0 @@
// Package ast declares the types used to represent syntax trees for HCL
// (HashiCorp Configuration Language)
package ast
import (
"fmt"
"strings"
"github.com/hashicorp/hcl/hcl/token"
)
// Node is an element in the abstract syntax tree.
type Node interface {
node()
Pos() token.Pos
}
func (File) node() {}
func (ObjectList) node() {}
func (ObjectKey) node() {}
func (ObjectItem) node() {}
func (Comment) node() {}
func (CommentGroup) node() {}
func (ObjectType) node() {}
func (LiteralType) node() {}
func (ListType) node() {}
// File represents a single HCL file
type File struct {
Node Node // usually a *ObjectList
Comments []*CommentGroup // list of all comments in the source
}
func (f *File) Pos() token.Pos {
return f.Node.Pos()
}
// ObjectList represents a list of ObjectItems. An HCL file itself is an
// ObjectList.
type ObjectList struct {
Items []*ObjectItem
}
func (o *ObjectList) Add(item *ObjectItem) {
o.Items = append(o.Items, item)
}
// Filter filters out the objects with the given key list as a prefix.
//
// The returned list of objects contain ObjectItems where the keys have
// this prefix already stripped off. This might result in objects with
// zero-length key lists if they have no children.
//
// If no matches are found, an empty ObjectList (non-nil) is returned.
func (o *ObjectList) Filter(keys ...string) *ObjectList {
var result ObjectList
for _, item := range o.Items {
// If there aren't enough keys, then ignore this
if len(item.Keys) < len(keys) {
continue
}
match := true
for i, key := range item.Keys[:len(keys)] {
key := key.Token.Value().(string)
if key != keys[i] && !strings.EqualFold(key, keys[i]) {
match = false
break
}
}
if !match {
continue
}
// Strip off the prefix from the children
newItem := *item
newItem.Keys = newItem.Keys[len(keys):]
result.Add(&newItem)
}
return &result
}
// Children returns further nested objects (key length > 0) within this
// ObjectList. This should be used with Filter to get at child items.
func (o *ObjectList) Children() *ObjectList {
var result ObjectList
for _, item := range o.Items {
if len(item.Keys) > 0 {
result.Add(item)
}
}
return &result
}
// Elem returns items in the list that are direct element assignments
// (key length == 0). This should be used with Filter to get at elements.
func (o *ObjectList) Elem() *ObjectList {
var result ObjectList
for _, item := range o.Items {
if len(item.Keys) == 0 {
result.Add(item)
}
}
return &result
}
func (o *ObjectList) Pos() token.Pos {
// returns the position of the first item
return o.Items[0].Pos()
}
// ObjectItem represents an HCL Object Item. An item is represented with a key
// (or keys). It can be an assignment or an object (both normal and nested).
type ObjectItem struct {
// Keys is only one element long if the item is an assignment. If it's a
// nested object it can be longer than one. In that case Assign is
// invalid, as there is no assignment for a nested object.
Keys []*ObjectKey
// assign contains the position of "=", if any
Assign token.Pos
// Val is the item itself. It can be an object, list, number, bool or a
// string. If the key length is larger than one, Val can only be of type
// Object.
Val Node
LeadComment *CommentGroup // associated lead comment
LineComment *CommentGroup // associated line comment
}
func (o *ObjectItem) Pos() token.Pos {
// I'm not entirely sure what causes this, but removing this causes
// a test failure. We should investigate at some point.
if len(o.Keys) == 0 {
return token.Pos{}
}
return o.Keys[0].Pos()
}
// ObjectKey represents an HCL object key, either an identifier or a string.
type ObjectKey struct {
Token token.Token
}
func (o *ObjectKey) Pos() token.Pos {
return o.Token.Pos
}
// LiteralType represents a literal of basic type. Valid types are:
// token.NUMBER, token.FLOAT, token.BOOL and token.STRING
type LiteralType struct {
Token token.Token
// comment types, only used when in a list
LeadComment *CommentGroup
LineComment *CommentGroup
}
func (l *LiteralType) Pos() token.Pos {
return l.Token.Pos
}
// ListType represents an HCL list type
type ListType struct {
Lbrack token.Pos // position of "["
Rbrack token.Pos // position of "]"
List []Node // the elements in lexical order
}
func (l *ListType) Pos() token.Pos {
return l.Lbrack
}
func (l *ListType) Add(node Node) {
l.List = append(l.List, node)
}
// ObjectType represents an HCL object type
type ObjectType struct {
Lbrace token.Pos // position of "{"
Rbrace token.Pos // position of "}"
List *ObjectList // the nodes in lexical order
}
func (o *ObjectType) Pos() token.Pos {
return o.Lbrace
}
// Comment node represents a single //, #-style or /*-style comment
type Comment struct {
Start token.Pos // position of / or #
Text string
}
func (c *Comment) Pos() token.Pos {
return c.Start
}
// CommentGroup node represents a sequence of comments with no other tokens and
// no empty lines between.
type CommentGroup struct {
List []*Comment // len(List) > 0
}
func (c *CommentGroup) Pos() token.Pos {
return c.List[0].Pos()
}
//-------------------------------------------------------------------
// GoStringer
//-------------------------------------------------------------------
func (o *ObjectKey) GoString() string { return fmt.Sprintf("*%#v", *o) }
func (o *ObjectList) GoString() string { return fmt.Sprintf("*%#v", *o) }
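A small sketch of how `Filter`, `Children`, and `Elem` combine (the
configuration keys are invented; the root-node type assertion follows the
"usually a *ObjectList" note above):

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/hcl/hcl/ast"
	"github.com/hashicorp/hcl/hcl/parser"
)

func main() {
	file, err := parser.Parse([]byte(`
region = "us-east-1"
service "web" { port = 80 }
service "api" { port = 8080 }
`))
	if err != nil {
		log.Fatal(err)
	}
	list := file.Node.(*ast.ObjectList)

	// Filter strips the matched prefix, so each result keeps its block key.
	services := list.Filter("service")
	fmt.Println(len(services.Items))            // 2
	fmt.Println(len(services.Children().Items)) // 2 (keys "web"/"api" remain)

	// A plain assignment has no keys left after filtering, so Elem finds it.
	fmt.Println(len(list.Filter("region").Elem().Items)) // 1
}
```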


@@ -1,52 +0,0 @@
package ast
import "fmt"
// WalkFunc describes a function to be called for each node during a Walk. The
// returned node can be used to rewrite the AST. Walking stops if the returned
// bool is false.
type WalkFunc func(Node) (Node, bool)
// Walk traverses an AST in depth-first order: it starts by calling fn(node);
// node must not be nil. If fn returns true, Walk invokes fn recursively for
// each of the non-nil children of node, followed by a call of fn(nil). The
// node returned by fn can be used to rewrite the node that was passed in.
func Walk(node Node, fn WalkFunc) Node {
rewritten, ok := fn(node)
if !ok {
return rewritten
}
switch n := node.(type) {
case *File:
n.Node = Walk(n.Node, fn)
case *ObjectList:
for i, item := range n.Items {
n.Items[i] = Walk(item, fn).(*ObjectItem)
}
case *ObjectKey:
// nothing to do
case *ObjectItem:
for i, k := range n.Keys {
n.Keys[i] = Walk(k, fn).(*ObjectKey)
}
if n.Val != nil {
n.Val = Walk(n.Val, fn)
}
case *LiteralType:
// nothing to do
case *ListType:
for i, l := range n.List {
n.List[i] = Walk(l, fn)
}
case *ObjectType:
n.List = Walk(n.List, fn).(*ObjectList)
default:
// should we panic here?
fmt.Printf("unknown type: %T\n", n)
}
fn(nil)
return rewritten
}
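A brief sketch of `Walk` in use (the input and the key-counting goal are
invented; note that the callback must tolerate the trailing `fn(nil)` call):

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/hcl/hcl/ast"
	"github.com/hashicorp/hcl/hcl/parser"
)

func main() {
	file, err := parser.Parse([]byte("a = 1\nb { c = 2 }"))
	if err != nil {
		log.Fatal(err)
	}

	// Count every object key; returning the node unchanged and true keeps
	// walking without rewriting anything.
	keys := 0
	ast.Walk(file.Node, func(n ast.Node) (ast.Node, bool) {
		if n == nil { // Walk calls fn(nil) after visiting children
			return nil, false
		}
		if _, ok := n.(*ast.ObjectKey); ok {
			keys++
		}
		return n, true
	})
	fmt.Println("keys:", keys) // keys: 3
}
```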


@@ -1,17 +0,0 @@
package parser
import (
"fmt"
"github.com/hashicorp/hcl/hcl/token"
)
// PosError is a parse error that contains a position.
type PosError struct {
Pos token.Pos
Err error
}
func (e *PosError) Error() string {
return fmt.Sprintf("At %s: %s", e.Pos, e.Err)
}


@@ -1,520 +0,0 @@
// Package parser implements a parser for HCL (HashiCorp Configuration
// Language)
package parser
import (
"bytes"
"errors"
"fmt"
"strings"
"github.com/hashicorp/hcl/hcl/ast"
"github.com/hashicorp/hcl/hcl/scanner"
"github.com/hashicorp/hcl/hcl/token"
)
type Parser struct {
sc *scanner.Scanner
// Last read token
tok token.Token
commaPrev token.Token
comments []*ast.CommentGroup
leadComment *ast.CommentGroup // last lead comment
lineComment *ast.CommentGroup // last line comment
enableTrace bool
indent int
n int // buffer size (max = 1)
}
func newParser(src []byte) *Parser {
return &Parser{
sc: scanner.New(src),
}
}
// Parse parses the given source and returns the abstract syntax tree.
func Parse(src []byte) (*ast.File, error) {
// normalize all line endings
// since the scanner and output only work with "\n" line endings, we may
// end up with dangling "\r" characters in the parsed data.
src = bytes.Replace(src, []byte("\r\n"), []byte("\n"), -1)
p := newParser(src)
return p.Parse()
}
var errEofToken = errors.New("EOF token found")
// Parse parses the parser's source and returns the abstract syntax tree.
func (p *Parser) Parse() (*ast.File, error) {
f := &ast.File{}
var err, scerr error
p.sc.Error = func(pos token.Pos, msg string) {
scerr = &PosError{Pos: pos, Err: errors.New(msg)}
}
f.Node, err = p.objectList(false)
if scerr != nil {
return nil, scerr
}
if err != nil {
return nil, err
}
f.Comments = p.comments
return f, nil
}
// objectList parses a list of items within an object (generally k/v pairs).
// The parameter "obj" tells the method whether we are within an object (braces:
// '{', '}') or just at the top level. If we're within an object, we end
// at an RBRACE.
func (p *Parser) objectList(obj bool) (*ast.ObjectList, error) {
defer un(trace(p, "ParseObjectList"))
node := &ast.ObjectList{}
for {
if obj {
tok := p.scan()
p.unscan()
if tok.Type == token.RBRACE {
break
}
}
n, err := p.objectItem()
if err == errEofToken {
break // we are finished
}
// We don't return a nil node, because callers might want to use the
// already-collected items.
if err != nil {
return node, err
}
node.Add(n)
// object lists can be optionally comma-delimited e.g. when a list of maps
// is being expressed, so a comma is allowed here - it's simply consumed
tok := p.scan()
if tok.Type != token.COMMA {
p.unscan()
}
}
return node, nil
}
func (p *Parser) consumeComment() (comment *ast.Comment, endline int) {
endline = p.tok.Pos.Line
// count the endline if it's a multiline comment, i.e. starting with /*
if len(p.tok.Text) > 1 && p.tok.Text[1] == '*' {
// don't use range here - no need to decode Unicode code points
for i := 0; i < len(p.tok.Text); i++ {
if p.tok.Text[i] == '\n' {
endline++
}
}
}
comment = &ast.Comment{Start: p.tok.Pos, Text: p.tok.Text}
p.tok = p.sc.Scan()
return
}
func (p *Parser) consumeCommentGroup(n int) (comments *ast.CommentGroup, endline int) {
var list []*ast.Comment
endline = p.tok.Pos.Line
for p.tok.Type == token.COMMENT && p.tok.Pos.Line <= endline+n {
var comment *ast.Comment
comment, endline = p.consumeComment()
list = append(list, comment)
}
// add comment group to the comments list
comments = &ast.CommentGroup{List: list}
p.comments = append(p.comments, comments)
return
}
// objectItem parses a single object item
func (p *Parser) objectItem() (*ast.ObjectItem, error) {
defer un(trace(p, "ParseObjectItem"))
keys, err := p.objectKey()
if len(keys) > 0 && err == errEofToken {
// We ignore eof token here since it is an error if we didn't
// receive a value (but we did receive a key) for the item.
err = nil
}
if len(keys) > 0 && err != nil && p.tok.Type == token.RBRACE {
// This is a strange boolean statement, but what it means is:
// We have keys with no value, and we're likely in an object
// (since RBrace ends an object). For this, we set err to nil so
// we continue and get the error below of having the wrong value
// type.
err = nil
// Reset the token type so we don't think it completed fine. See
// objectType which uses p.tok.Type to check if we're done with
// the object.
p.tok.Type = token.EOF
}
if err != nil {
return nil, err
}
o := &ast.ObjectItem{
Keys: keys,
}
if p.leadComment != nil {
o.LeadComment = p.leadComment
p.leadComment = nil
}
switch p.tok.Type {
case token.ASSIGN:
o.Assign = p.tok.Pos
o.Val, err = p.object()
if err != nil {
return nil, err
}
case token.LBRACE:
o.Val, err = p.objectType()
if err != nil {
return nil, err
}
default:
keyStr := make([]string, 0, len(keys))
for _, k := range keys {
keyStr = append(keyStr, k.Token.Text)
}
return nil, fmt.Errorf(
"key '%s' expected start of object ('{') or assignment ('=')",
strings.Join(keyStr, " "))
}
// do a look-ahead for line comment
p.scan()
if len(keys) > 0 && o.Val.Pos().Line == keys[0].Pos().Line && p.lineComment != nil {
o.LineComment = p.lineComment
p.lineComment = nil
}
p.unscan()
return o, nil
}
// objectKey parses an object key and returns an ObjectKey AST
func (p *Parser) objectKey() ([]*ast.ObjectKey, error) {
keyCount := 0
keys := make([]*ast.ObjectKey, 0)
for {
tok := p.scan()
switch tok.Type {
case token.EOF:
// It is very important to also return the keys here as well as
// the error. This is because we need to be able to tell if we
// did parse keys prior to finding the EOF, or if we just found
// a bare EOF.
return keys, errEofToken
case token.ASSIGN:
// assignment or object only, but not nested objects. this is not
// allowed: `foo bar = {}`
if keyCount > 1 {
return nil, &PosError{
Pos: p.tok.Pos,
Err: fmt.Errorf("nested object expected: LBRACE got: %s", p.tok.Type),
}
}
if keyCount == 0 {
return nil, &PosError{
Pos: p.tok.Pos,
Err: errors.New("no object keys found!"),
}
}
return keys, nil
case token.LBRACE:
var err error
// If we have no keys, then it is a syntax error. i.e. {{}} is not
// allowed.
if len(keys) == 0 {
err = &PosError{
Pos: p.tok.Pos,
Err: fmt.Errorf("expected: IDENT | STRING got: %s", p.tok.Type),
}
}
// object
return keys, err
case token.IDENT, token.STRING:
keyCount++
keys = append(keys, &ast.ObjectKey{Token: p.tok})
case token.ILLEGAL:
return keys, &PosError{
Pos: p.tok.Pos,
Err: fmt.Errorf("illegal character"),
}
default:
return keys, &PosError{
Pos: p.tok.Pos,
Err: fmt.Errorf("expected: IDENT | STRING | ASSIGN | LBRACE got: %s", p.tok.Type),
}
}
}
}
// object parses any type of object, such as number, bool, string, object or
// list.
func (p *Parser) object() (ast.Node, error) {
defer un(trace(p, "ParseType"))
tok := p.scan()
switch tok.Type {
case token.NUMBER, token.FLOAT, token.BOOL, token.STRING, token.HEREDOC:
return p.literalType()
case token.LBRACE:
return p.objectType()
case token.LBRACK:
return p.listType()
case token.COMMENT:
// implement comment
case token.EOF:
return nil, errEofToken
}
return nil, &PosError{
Pos: tok.Pos,
Err: fmt.Errorf("Unknown token: %+v", tok),
}
}
// objectType parses an object type and returns an ObjectType AST
func (p *Parser) objectType() (*ast.ObjectType, error) {
defer un(trace(p, "ParseObjectType"))
// we assume that the currently scanned token is a LBRACE
o := &ast.ObjectType{
Lbrace: p.tok.Pos,
}
l, err := p.objectList(true)
// If we hit an RBRACE, we are good to go (it means we parsed all items); if
// it's not an RBRACE, it's a syntax error and we just return it.
if err != nil && p.tok.Type != token.RBRACE {
return nil, err
}
// No error, scan and expect the ending to be a brace
if tok := p.scan(); tok.Type != token.RBRACE {
return nil, fmt.Errorf("object expected closing RBRACE got: %s", tok.Type)
}
o.List = l
o.Rbrace = p.tok.Pos // advanced via parseObjectList
return o, nil
}
// listType parses a list type and returns a ListType AST
func (p *Parser) listType() (*ast.ListType, error) {
defer un(trace(p, "ParseListType"))
// we assume that the currently scanned token is a LBRACK
l := &ast.ListType{
Lbrack: p.tok.Pos,
}
needComma := false
for {
tok := p.scan()
if needComma {
switch tok.Type {
case token.COMMA, token.RBRACK:
default:
return nil, &PosError{
Pos: tok.Pos,
Err: fmt.Errorf(
"error parsing list, expected comma or list end, got: %s",
tok.Type),
}
}
}
switch tok.Type {
case token.BOOL, token.NUMBER, token.FLOAT, token.STRING, token.HEREDOC:
node, err := p.literalType()
if err != nil {
return nil, err
}
// If there is a lead comment, apply it
if p.leadComment != nil {
node.LeadComment = p.leadComment
p.leadComment = nil
}
l.Add(node)
needComma = true
case token.COMMA:
// get next list item or we are at the end
// do a look-ahead for line comment
p.scan()
if p.lineComment != nil && len(l.List) > 0 {
lit, ok := l.List[len(l.List)-1].(*ast.LiteralType)
if ok {
lit.LineComment = p.lineComment
l.List[len(l.List)-1] = lit
p.lineComment = nil
}
}
p.unscan()
needComma = false
continue
case token.LBRACE:
// Looks like a nested object, so parse it out
node, err := p.objectType()
if err != nil {
return nil, &PosError{
Pos: tok.Pos,
Err: fmt.Errorf(
"error while trying to parse object within list: %s", err),
}
}
l.Add(node)
needComma = true
case token.LBRACK:
node, err := p.listType()
if err != nil {
return nil, &PosError{
Pos: tok.Pos,
Err: fmt.Errorf(
"error while trying to parse list within list: %s", err),
}
}
l.Add(node)
case token.RBRACK:
// finished
l.Rbrack = p.tok.Pos
return l, nil
default:
return nil, &PosError{
Pos: tok.Pos,
Err: fmt.Errorf("unexpected token while parsing list: %s", tok.Type),
}
}
}
}
// literalType parses a literal type and returns a LiteralType AST
func (p *Parser) literalType() (*ast.LiteralType, error) {
defer un(trace(p, "ParseLiteral"))
return &ast.LiteralType{
Token: p.tok,
}, nil
}
// scan returns the next token from the underlying scanner. If a token has
// been unscanned then read that instead. In the process, it collects any
// comment groups encountered, and remembers the last lead and line comments.
func (p *Parser) scan() token.Token {
// If we have a token on the buffer, then return it.
if p.n != 0 {
p.n = 0
return p.tok
}
// Otherwise read the next token from the scanner and save it to the buffer
// in case we unscan later.
prev := p.tok
p.tok = p.sc.Scan()
if p.tok.Type == token.COMMENT {
var comment *ast.CommentGroup
var endline int
// fmt.Printf("p.tok.Pos.Line = %+v prev: %d endline %d \n",
// p.tok.Pos.Line, prev.Pos.Line, endline)
if p.tok.Pos.Line == prev.Pos.Line {
// The comment is on same line as the previous token; it
// cannot be a lead comment but may be a line comment.
comment, endline = p.consumeCommentGroup(0)
if p.tok.Pos.Line != endline {
// The next token is on a different line, thus
// the last comment group is a line comment.
p.lineComment = comment
}
}
// consume successor comments, if any
endline = -1
for p.tok.Type == token.COMMENT {
comment, endline = p.consumeCommentGroup(1)
}
if endline+1 == p.tok.Pos.Line && p.tok.Type != token.RBRACE {
switch p.tok.Type {
case token.RBRACE, token.RBRACK:
// Do not count for these cases
default:
// The next token is following on the line immediately after the
// comment group, thus the last comment group is a lead comment.
p.leadComment = comment
}
}
}
return p.tok
}
// unscan pushes the previously read token back onto the buffer.
func (p *Parser) unscan() {
p.n = 1
}
// ----------------------------------------------------------------------------
// Parsing support
func (p *Parser) printTrace(a ...interface{}) {
if !p.enableTrace {
return
}
const dots = ". . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . "
const n = len(dots)
fmt.Printf("%5d:%3d: ", p.tok.Pos.Line, p.tok.Pos.Column)
i := 2 * p.indent
for i > n {
fmt.Print(dots)
i -= n
}
// i <= n
fmt.Print(dots[0:i])
fmt.Println(a...)
}
func trace(p *Parser, msg string) *Parser {
p.printTrace(msg, "(")
p.indent++
return p
}
// Usage pattern: defer un(trace(p, "..."))
func un(p *Parser) {
p.indent--
p.printTrace(")")
}


@@ -1,651 +0,0 @@
// Package scanner implements a scanner for HCL (HashiCorp Configuration
// Language) source text.
package scanner
import (
"bytes"
"fmt"
"os"
"regexp"
"unicode"
"unicode/utf8"
"github.com/hashicorp/hcl/hcl/token"
)
// eof represents a marker rune for the end of the reader.
const eof = rune(0)
// Scanner defines a lexical scanner
type Scanner struct {
buf *bytes.Buffer // Source buffer for advancing and scanning
src []byte // Source buffer for immutable access
// Source Position
srcPos token.Pos // current position
prevPos token.Pos // previous position, used for peek() method
lastCharLen int // length of last character in bytes
lastLineLen int // length of last line in characters (for correct column reporting)
tokStart int // token text start position
tokEnd int // token text end position
// Error is called for each error encountered. If no Error
// function is set, the error is reported to os.Stderr.
Error func(pos token.Pos, msg string)
// ErrorCount is incremented by one for each error encountered.
ErrorCount int
// tokPos is the start position of most recently scanned token; set by
// Scan. The Filename field is always left untouched by the Scanner. If
// an error is reported (via Error) and Position is invalid, the scanner is
// not inside a token.
tokPos token.Pos
}
// New creates and initializes a new instance of Scanner using src as
// its source content.
func New(src []byte) *Scanner {
// even though we accept a src, we read from an io.Reader-compatible type
// (*bytes.Buffer). So in the future we might easily change it to streaming
// read.
b := bytes.NewBuffer(src)
s := &Scanner{
buf: b,
src: src,
}
// srcPosition always starts with 1
s.srcPos.Line = 1
return s
}
// next reads the next rune from the buffered reader. Returns rune(0) if
// an error occurs (or io.EOF is returned).
func (s *Scanner) next() rune {
ch, size, err := s.buf.ReadRune()
if err != nil {
// advance for error reporting
s.srcPos.Column++
s.srcPos.Offset += size
s.lastCharLen = size
return eof
}
if ch == utf8.RuneError && size == 1 {
s.srcPos.Column++
s.srcPos.Offset += size
s.lastCharLen = size
s.err("illegal UTF-8 encoding")
return ch
}
// remember last position
s.prevPos = s.srcPos
s.srcPos.Column++
s.lastCharLen = size
s.srcPos.Offset += size
if ch == '\n' {
s.srcPos.Line++
s.lastLineLen = s.srcPos.Column
s.srcPos.Column = 0
}
// If we see a null character with data left, then that is an error
if ch == '\x00' && s.buf.Len() > 0 {
s.err("unexpected null character (0x00)")
return eof
}
// debug
// fmt.Printf("ch: %q, offset:column: %d:%d\n", ch, s.srcPos.Offset, s.srcPos.Column)
return ch
}
// unread unreads the previously read rune and updates the source position
func (s *Scanner) unread() {
if err := s.buf.UnreadRune(); err != nil {
panic(err) // this is user fault, we should catch it
}
s.srcPos = s.prevPos // put back last position
}
// peek returns the next rune without advancing the reader.
func (s *Scanner) peek() rune {
peek, _, err := s.buf.ReadRune()
if err != nil {
return eof
}
s.buf.UnreadRune()
return peek
}
// Scan scans the next token and returns the token.
func (s *Scanner) Scan() token.Token {
ch := s.next()
// skip white space
for isWhitespace(ch) {
ch = s.next()
}
var tok token.Type
// token text markings
s.tokStart = s.srcPos.Offset - s.lastCharLen
// token position: the initial next() has already advanced the offset by the
// size of one rune, but we are interested in the starting point
s.tokPos.Offset = s.srcPos.Offset - s.lastCharLen
if s.srcPos.Column > 0 {
// common case: last character was not a '\n'
s.tokPos.Line = s.srcPos.Line
s.tokPos.Column = s.srcPos.Column
} else {
// last character was a '\n'
// (we cannot be at the beginning of the source
// since we have called next() at least once)
s.tokPos.Line = s.srcPos.Line - 1
s.tokPos.Column = s.lastLineLen
}
switch {
case isLetter(ch):
tok = token.IDENT
lit := s.scanIdentifier()
if lit == "true" || lit == "false" {
tok = token.BOOL
}
case isDecimal(ch):
tok = s.scanNumber(ch)
default:
switch ch {
case eof:
tok = token.EOF
case '"':
tok = token.STRING
s.scanString()
case '#', '/':
tok = token.COMMENT
s.scanComment(ch)
case '.':
tok = token.PERIOD
ch = s.peek()
if isDecimal(ch) {
tok = token.FLOAT
ch = s.scanMantissa(ch)
ch = s.scanExponent(ch)
}
case '<':
tok = token.HEREDOC
s.scanHeredoc()
case '[':
tok = token.LBRACK
case ']':
tok = token.RBRACK
case '{':
tok = token.LBRACE
case '}':
tok = token.RBRACE
case ',':
tok = token.COMMA
case '=':
tok = token.ASSIGN
case '+':
tok = token.ADD
case '-':
if isDecimal(s.peek()) {
ch := s.next()
tok = s.scanNumber(ch)
} else {
tok = token.SUB
}
default:
s.err("illegal char")
}
}
// finish token ending
s.tokEnd = s.srcPos.Offset
// create token literal
var tokenText string
if s.tokStart >= 0 {
tokenText = string(s.src[s.tokStart:s.tokEnd])
}
s.tokStart = s.tokEnd // ensure idempotency of tokenText() call
return token.Token{
Type: tok,
Pos: s.tokPos,
Text: tokenText,
}
}
func (s *Scanner) scanComment(ch rune) {
// single line comments
if ch == '#' || (ch == '/' && s.peek() != '*') {
if ch == '/' && s.peek() != '/' {
s.err("expected '/' for comment")
return
}
ch = s.next()
for ch != '\n' && ch >= 0 && ch != eof {
ch = s.next()
}
if ch != eof && ch >= 0 {
s.unread()
}
return
}
// be sure we get the character after /*. This allows us to find comments
// that are not terminated
if ch == '/' {
s.next()
ch = s.next() // read character after "/*"
}
// look for /* - style comments
for {
if ch < 0 || ch == eof {
s.err("comment not terminated")
break
}
ch0 := ch
ch = s.next()
if ch0 == '*' && ch == '/' {
break
}
}
}
// scanNumber scans an HCL number definition starting with the given rune
func (s *Scanner) scanNumber(ch rune) token.Type {
if ch == '0' {
// check for hexadecimal, octal or float
ch = s.next()
if ch == 'x' || ch == 'X' {
// hexadecimal
ch = s.next()
found := false
for isHexadecimal(ch) {
ch = s.next()
found = true
}
if !found {
s.err("illegal hexadecimal number")
}
if ch != eof {
s.unread()
}
return token.NUMBER
}
// now it's either something like 0421 (octal) or 0.1231 (float)
illegalOctal := false
for isDecimal(ch) {
ch = s.next()
if ch == '8' || ch == '9' {
// this is just a possibility. For example 0159 is illegal, but
// 0159.23 is valid. So we mark a possible illegal octal. If
// the next character is not a period, we'll print the error.
illegalOctal = true
}
}
if ch == 'e' || ch == 'E' {
ch = s.scanExponent(ch)
return token.FLOAT
}
if ch == '.' {
ch = s.scanFraction(ch)
if ch == 'e' || ch == 'E' {
ch = s.next()
ch = s.scanExponent(ch)
}
return token.FLOAT
}
if illegalOctal {
s.err("illegal octal number")
}
if ch != eof {
s.unread()
}
return token.NUMBER
}
s.scanMantissa(ch)
ch = s.next() // seek forward
if ch == 'e' || ch == 'E' {
ch = s.scanExponent(ch)
return token.FLOAT
}
if ch == '.' {
ch = s.scanFraction(ch)
if ch == 'e' || ch == 'E' {
ch = s.next()
ch = s.scanExponent(ch)
}
return token.FLOAT
}
if ch != eof {
s.unread()
}
return token.NUMBER
}
// scanMantissa scans the mantissa beginning from the rune. It returns the next
// non-decimal rune. It's used to determine whether it's a fraction or exponent.
func (s *Scanner) scanMantissa(ch rune) rune {
scanned := false
for isDecimal(ch) {
ch = s.next()
scanned = true
}
if scanned && ch != eof {
s.unread()
}
return ch
}
// scanFraction scans the fraction after the '.' rune
func (s *Scanner) scanFraction(ch rune) rune {
if ch == '.' {
ch = s.peek() // we peek just to see if we can move forward
ch = s.scanMantissa(ch)
}
return ch
}
// scanExponent scans the remaining parts of an exponent after the 'e' or 'E'
// rune.
func (s *Scanner) scanExponent(ch rune) rune {
if ch == 'e' || ch == 'E' {
ch = s.next()
if ch == '-' || ch == '+' {
ch = s.next()
}
ch = s.scanMantissa(ch)
}
return ch
}
// scanHeredoc scans a heredoc string
func (s *Scanner) scanHeredoc() {
// Scan the second '<' in example: '<<EOF'
if s.next() != '<' {
s.err("heredoc expected second '<', didn't see it")
return
}
// Get the original offset so we can read just the heredoc ident
offs := s.srcPos.Offset
// Scan the identifier
ch := s.next()
// Indented heredoc syntax
if ch == '-' {
ch = s.next()
}
for isLetter(ch) || isDigit(ch) {
ch = s.next()
}
// If we reached an EOF then that is not good
if ch == eof {
s.err("heredoc not terminated")
return
}
// Ignore the '\r' in Windows line endings
if ch == '\r' {
if s.peek() == '\n' {
ch = s.next()
}
}
// If we didn't reach a newline then that is also not good
if ch != '\n' {
s.err("invalid characters in heredoc anchor")
return
}
// Read the identifier
identBytes := s.src[offs : s.srcPos.Offset-s.lastCharLen]
if len(identBytes) == 0 {
s.err("zero-length heredoc anchor")
return
}
var identRegexp *regexp.Regexp
if identBytes[0] == '-' {
identRegexp = regexp.MustCompile(fmt.Sprintf(`[[:space:]]*%s\z`, identBytes[1:]))
} else {
identRegexp = regexp.MustCompile(fmt.Sprintf(`[[:space:]]*%s\z`, identBytes))
}
// Read the actual string value
lineStart := s.srcPos.Offset
for {
ch := s.next()
// Special newline handling.
if ch == '\n' {
// Math is fast, so we first compare the byte counts to see if we have a chance
// of seeing the same identifier - if the length is less than the number of bytes
// in the identifier, this cannot be a valid terminator.
lineBytesLen := s.srcPos.Offset - s.lastCharLen - lineStart
if lineBytesLen >= len(identBytes) && identRegexp.Match(s.src[lineStart:s.srcPos.Offset-s.lastCharLen]) {
break
}
// Not an anchor match, record the start of a new line
lineStart = s.srcPos.Offset
}
if ch == eof {
s.err("heredoc not terminated")
return
}
}
return
}
// scanString scans a quoted string
func (s *Scanner) scanString() {
braces := 0
for {
// '"' opening already consumed
// read character after quote
ch := s.next()
if (ch == '\n' && braces == 0) || ch < 0 || ch == eof {
s.err("literal not terminated")
return
}
if ch == '"' && braces == 0 {
break
}
// If we're going into a ${} then we can ignore quotes for a while
if braces == 0 && ch == '$' && s.peek() == '{' {
braces++
s.next()
} else if braces > 0 && ch == '{' {
braces++
}
if braces > 0 && ch == '}' {
braces--
}
if ch == '\\' {
s.scanEscape()
}
}
return
}
// scanEscape scans an escape sequence
func (s *Scanner) scanEscape() rune {
// http://en.cppreference.com/w/cpp/language/escape
ch := s.next() // read character after '\'
switch ch {
case 'a', 'b', 'f', 'n', 'r', 't', 'v', '\\', '"':
// nothing to do
case '0', '1', '2', '3', '4', '5', '6', '7':
// octal notation
ch = s.scanDigits(ch, 8, 3)
case 'x':
// hexadecimal notation
ch = s.scanDigits(s.next(), 16, 2)
case 'u':
// universal character name
ch = s.scanDigits(s.next(), 16, 4)
case 'U':
// universal character name
ch = s.scanDigits(s.next(), 16, 8)
default:
s.err("illegal char escape")
}
return ch
}
// scanDigits scans up to n runes in the given base. For example, an octal
// escape such as \141 would be scanned with scanDigits(ch, 8, 3)
func (s *Scanner) scanDigits(ch rune, base, n int) rune {
start := n
for n > 0 && digitVal(ch) < base {
ch = s.next()
if ch == eof {
// If we see an EOF, we halt any more scanning of digits
// immediately.
break
}
n--
}
if n > 0 {
s.err("illegal char escape")
}
if n != start {
// we scanned all digits; put the last non-digit char back,
// but only if we read anything at all
s.unread()
}
return ch
}
// scanIdentifier scans an identifier and returns the literal string
func (s *Scanner) scanIdentifier() string {
offs := s.srcPos.Offset - s.lastCharLen
ch := s.next()
for isLetter(ch) || isDigit(ch) || ch == '-' || ch == '.' {
ch = s.next()
}
if ch != eof {
s.unread() // we got identifier, put back latest char
}
return string(s.src[offs:s.srcPos.Offset])
}
// recentPosition returns the position of the character immediately after the
// character or token returned by the last call to Scan.
func (s *Scanner) recentPosition() (pos token.Pos) {
pos.Offset = s.srcPos.Offset - s.lastCharLen
switch {
case s.srcPos.Column > 0:
// common case: last character was not a '\n'
pos.Line = s.srcPos.Line
pos.Column = s.srcPos.Column
case s.lastLineLen > 0:
// last character was a '\n'
// (we cannot be at the beginning of the source
// since we have called next() at least once)
pos.Line = s.srcPos.Line - 1
pos.Column = s.lastLineLen
default:
// at the beginning of the source
pos.Line = 1
pos.Column = 1
}
return
}
// err reports a scanning error via the s.Error function. If no Error
// function is set, it prints the error to os.Stderr by default
func (s *Scanner) err(msg string) {
s.ErrorCount++
pos := s.recentPosition()
if s.Error != nil {
s.Error(pos, msg)
return
}
fmt.Fprintf(os.Stderr, "%s: %s\n", pos, msg)
}
// isLetter returns true if the given rune is a letter
func isLetter(ch rune) bool {
return 'a' <= ch && ch <= 'z' || 'A' <= ch && ch <= 'Z' || ch == '_' || ch >= 0x80 && unicode.IsLetter(ch)
}
// isDigit returns true if the given rune is a decimal digit
func isDigit(ch rune) bool {
return '0' <= ch && ch <= '9' || ch >= 0x80 && unicode.IsDigit(ch)
}
// isDecimal returns true if the given rune is a decimal digit
func isDecimal(ch rune) bool {
return '0' <= ch && ch <= '9'
}
// isHexadecimal returns true if the given rune is a hexadecimal digit
func isHexadecimal(ch rune) bool {
return '0' <= ch && ch <= '9' || 'a' <= ch && ch <= 'f' || 'A' <= ch && ch <= 'F'
}
// isWhitespace returns true if the rune is a space, tab, newline or carriage return
func isWhitespace(ch rune) bool {
return ch == ' ' || ch == '\t' || ch == '\n' || ch == '\r'
}
// digitVal returns the integer value of a given octal, decimal, or hexadecimal rune
func digitVal(ch rune) int {
switch {
case '0' <= ch && ch <= '9':
return int(ch - '0')
case 'a' <= ch && ch <= 'f':
return int(ch - 'a' + 10)
case 'A' <= ch && ch <= 'F':
return int(ch - 'A' + 10)
}
return 16 // larger than any legal digit val
}

View File

@ -1,241 +0,0 @@
package strconv
import (
"errors"
"unicode/utf8"
)
// ErrSyntax indicates that a value does not have the right syntax for the target type.
var ErrSyntax = errors.New("invalid syntax")
// Unquote interprets s as a single-quoted, double-quoted,
// or backquoted Go string literal, returning the string value
// that s quotes. (If s is single-quoted, it would be a Go
// character literal; Unquote returns the corresponding
// one-character string.)
func Unquote(s string) (t string, err error) {
n := len(s)
if n < 2 {
return "", ErrSyntax
}
quote := s[0]
if quote != s[n-1] {
return "", ErrSyntax
}
s = s[1 : n-1]
if quote != '"' {
return "", ErrSyntax
}
if !contains(s, '$') && !contains(s, '{') && contains(s, '\n') {
return "", ErrSyntax
}
// Is it trivial? Avoid allocation.
if !contains(s, '\\') && !contains(s, quote) && !contains(s, '$') {
switch quote {
case '"':
return s, nil
case '\'':
r, size := utf8.DecodeRuneInString(s)
if size == len(s) && (r != utf8.RuneError || size != 1) {
return s, nil
}
}
}
var runeTmp [utf8.UTFMax]byte
buf := make([]byte, 0, 3*len(s)/2) // Try to avoid more allocations.
for len(s) > 0 {
// If we're starting a '${}' then let it through un-unquoted.
// Specifically: we don't unquote any characters within the `${}`
// section.
if s[0] == '$' && len(s) > 1 && s[1] == '{' {
buf = append(buf, '$', '{')
s = s[2:]
// Continue reading until we find the closing brace, copying as-is
braces := 1
for len(s) > 0 && braces > 0 {
r, size := utf8.DecodeRuneInString(s)
if r == utf8.RuneError {
return "", ErrSyntax
}
s = s[size:]
n := utf8.EncodeRune(runeTmp[:], r)
buf = append(buf, runeTmp[:n]...)
switch r {
case '{':
braces++
case '}':
braces--
}
}
if braces != 0 {
return "", ErrSyntax
}
if len(s) == 0 {
// If there's no string left, we're done!
break
} else {
// If there's more left, we need to pop back up to the top of the loop
// in case there's another interpolation in this string.
continue
}
}
if s[0] == '\n' {
return "", ErrSyntax
}
c, multibyte, ss, err := unquoteChar(s, quote)
if err != nil {
return "", err
}
s = ss
if c < utf8.RuneSelf || !multibyte {
buf = append(buf, byte(c))
} else {
n := utf8.EncodeRune(runeTmp[:], c)
buf = append(buf, runeTmp[:n]...)
}
if quote == '\'' && len(s) != 0 {
// single-quoted must be single character
return "", ErrSyntax
}
}
return string(buf), nil
}
// contains reports whether the string contains the byte c.
func contains(s string, c byte) bool {
for i := 0; i < len(s); i++ {
if s[i] == c {
return true
}
}
return false
}
func unhex(b byte) (v rune, ok bool) {
c := rune(b)
switch {
case '0' <= c && c <= '9':
return c - '0', true
case 'a' <= c && c <= 'f':
return c - 'a' + 10, true
case 'A' <= c && c <= 'F':
return c - 'A' + 10, true
}
return
}
func unquoteChar(s string, quote byte) (value rune, multibyte bool, tail string, err error) {
// easy cases
switch c := s[0]; {
case c == quote && (quote == '\'' || quote == '"'):
err = ErrSyntax
return
case c >= utf8.RuneSelf:
r, size := utf8.DecodeRuneInString(s)
return r, true, s[size:], nil
case c != '\\':
return rune(s[0]), false, s[1:], nil
}
// hard case: c is backslash
if len(s) <= 1 {
err = ErrSyntax
return
}
c := s[1]
s = s[2:]
switch c {
case 'a':
value = '\a'
case 'b':
value = '\b'
case 'f':
value = '\f'
case 'n':
value = '\n'
case 'r':
value = '\r'
case 't':
value = '\t'
case 'v':
value = '\v'
case 'x', 'u', 'U':
n := 0
switch c {
case 'x':
n = 2
case 'u':
n = 4
case 'U':
n = 8
}
var v rune
if len(s) < n {
err = ErrSyntax
return
}
for j := 0; j < n; j++ {
x, ok := unhex(s[j])
if !ok {
err = ErrSyntax
return
}
v = v<<4 | x
}
s = s[n:]
if c == 'x' {
// single-byte string, possibly not UTF-8
value = v
break
}
if v > utf8.MaxRune {
err = ErrSyntax
return
}
value = v
multibyte = true
case '0', '1', '2', '3', '4', '5', '6', '7':
v := rune(c) - '0'
if len(s) < 2 {
err = ErrSyntax
return
}
for j := 0; j < 2; j++ { // one digit already; two more
x := rune(s[j]) - '0'
if x < 0 || x > 7 {
err = ErrSyntax
return
}
v = (v << 3) | x
}
s = s[2:]
if v > 255 {
err = ErrSyntax
return
}
value = v
case '\\':
value = '\\'
case '\'', '"':
if c != quote {
err = ErrSyntax
return
}
value = rune(c)
default:
err = ErrSyntax
return
}
tail = s
return
}

View File

@ -1,46 +0,0 @@
package token
import "fmt"
// Pos describes an arbitrary source position
// including the file, line, and column location.
// A Position is valid if the line number is > 0.
type Pos struct {
Filename string // filename, if any
Offset int // offset, starting at 0
Line int // line number, starting at 1
Column int // column number, starting at 1 (character count)
}
// IsValid returns true if the position is valid.
func (p *Pos) IsValid() bool { return p.Line > 0 }
// String returns a string in one of several forms:
//
// file:line:column valid position with file name
// line:column valid position without file name
// file invalid position with file name
// - invalid position without file name
func (p Pos) String() string {
s := p.Filename
if p.IsValid() {
if s != "" {
s += ":"
}
s += fmt.Sprintf("%d:%d", p.Line, p.Column)
}
if s == "" {
s = "-"
}
return s
}
// Before reports whether the position p is before u.
func (p Pos) Before(u Pos) bool {
return u.Offset > p.Offset || u.Line > p.Line
}
// After reports whether the position p is after u.
func (p Pos) After(u Pos) bool {
return u.Offset < p.Offset || u.Line < p.Line
}

View File

@ -1,219 +0,0 @@
// Package token defines constants representing the lexical tokens for HCL
// (HashiCorp Configuration Language)
package token
import (
"fmt"
"strconv"
"strings"
hclstrconv "github.com/hashicorp/hcl/hcl/strconv"
)
// Token defines a single HCL token which can be obtained via the Scanner
type Token struct {
Type Type
Pos Pos
Text string
JSON bool
}
// Type is the set of lexical tokens of the HCL (HashiCorp Configuration Language)
type Type int
const (
// Special tokens
ILLEGAL Type = iota
EOF
COMMENT
identifier_beg
IDENT // literals
literal_beg
NUMBER // 12345
FLOAT // 123.45
BOOL // true,false
STRING // "abc"
HEREDOC // <<FOO\nbar\nFOO
literal_end
identifier_end
operator_beg
LBRACK // [
LBRACE // {
COMMA // ,
PERIOD // .
RBRACK // ]
RBRACE // }
ASSIGN // =
ADD // +
SUB // -
operator_end
)
var tokens = [...]string{
ILLEGAL: "ILLEGAL",
EOF: "EOF",
COMMENT: "COMMENT",
IDENT: "IDENT",
NUMBER: "NUMBER",
FLOAT: "FLOAT",
BOOL: "BOOL",
STRING: "STRING",
LBRACK: "LBRACK",
LBRACE: "LBRACE",
COMMA: "COMMA",
PERIOD: "PERIOD",
HEREDOC: "HEREDOC",
RBRACK: "RBRACK",
RBRACE: "RBRACE",
ASSIGN: "ASSIGN",
ADD: "ADD",
SUB: "SUB",
}
// String returns the string corresponding to the token tok.
func (t Type) String() string {
s := ""
if 0 <= t && t < Type(len(tokens)) {
s = tokens[t]
}
if s == "" {
s = "token(" + strconv.Itoa(int(t)) + ")"
}
return s
}
// IsIdentifier returns true for tokens corresponding to identifiers and basic
// type literals; it returns false otherwise.
func (t Type) IsIdentifier() bool { return identifier_beg < t && t < identifier_end }
// IsLiteral returns true for tokens corresponding to basic type literals; it
// returns false otherwise.
func (t Type) IsLiteral() bool { return literal_beg < t && t < literal_end }
// IsOperator returns true for tokens corresponding to operators and
// delimiters; it returns false otherwise.
func (t Type) IsOperator() bool { return operator_beg < t && t < operator_end }
// String returns a human-readable representation of the token, combining
// its position, type, and literal text.
func (t Token) String() string {
return fmt.Sprintf("%s %s %s", t.Pos.String(), t.Type.String(), t.Text)
}
// Value returns the properly typed value for this token. The type of
// the returned interface{} is guaranteed based on the Type field.
//
// This can only be called for literal types. If it is called for any other
// type, this will panic.
func (t Token) Value() interface{} {
switch t.Type {
case BOOL:
if t.Text == "true" {
return true
} else if t.Text == "false" {
return false
}
panic("unknown bool value: " + t.Text)
case FLOAT:
v, err := strconv.ParseFloat(t.Text, 64)
if err != nil {
panic(err)
}
return float64(v)
case NUMBER:
v, err := strconv.ParseInt(t.Text, 0, 64)
if err != nil {
panic(err)
}
return int64(v)
case IDENT:
return t.Text
case HEREDOC:
return unindentHeredoc(t.Text)
case STRING:
// Determine the Unquote method to use. If it came from JSON,
// then we need to use the built-in unquote since we have to
// escape interpolations there.
f := hclstrconv.Unquote
if t.JSON {
f = strconv.Unquote
}
// This case occurs if json null is used
if t.Text == "" {
return ""
}
v, err := f(t.Text)
if err != nil {
panic(fmt.Sprintf("unquote %s err: %s", t.Text, err))
}
return v
default:
panic(fmt.Sprintf("unimplemented Value for type: %s", t.Type))
}
}
// unindentHeredoc returns the string content of a heredoc: as-is if it was
// started with <<, or with the hanging indent removed if it was started with
// <<- and every line is at least as indented as the terminating marker.
func unindentHeredoc(heredoc string) string {
// We need to find the end of the marker
idx := strings.IndexByte(heredoc, '\n')
if idx == -1 {
panic("heredoc doesn't contain newline")
}
unindent := heredoc[2] == '-'
// We can optimize if the heredoc isn't marked for indentation
if !unindent {
return string(heredoc[idx+1 : len(heredoc)-idx+1])
}
// We need to unindent each line based on the indentation level of the marker
lines := strings.Split(string(heredoc[idx+1:len(heredoc)-idx+2]), "\n")
whitespacePrefix := lines[len(lines)-1]
isIndented := true
for _, v := range lines {
if strings.HasPrefix(v, whitespacePrefix) {
continue
}
isIndented = false
break
}
// If all lines are not at least as indented as the terminating mark, return the
// heredoc as is, but trim the leading space from the marker on the final line.
if !isIndented {
return strings.TrimRight(string(heredoc[idx+1:len(heredoc)-idx+1]), " \t")
}
unindentedLines := make([]string, len(lines))
for k, v := range lines {
if k == len(lines)-1 {
unindentedLines[k] = ""
break
}
unindentedLines[k] = strings.TrimPrefix(v, whitespacePrefix)
}
return strings.Join(unindentedLines, "\n")
}

View File

@ -1,117 +0,0 @@
package parser
import "github.com/hashicorp/hcl/hcl/ast"
// flattenObjects takes an AST node, walks it, and flattens nested object and
// list values into their containing object lists
func flattenObjects(node ast.Node) {
ast.Walk(node, func(n ast.Node) (ast.Node, bool) {
// We only care about lists, because this is what we modify
list, ok := n.(*ast.ObjectList)
if !ok {
return n, true
}
// Rebuild the item list
items := make([]*ast.ObjectItem, 0, len(list.Items))
frontier := make([]*ast.ObjectItem, len(list.Items))
copy(frontier, list.Items)
for len(frontier) > 0 {
// Pop the current item
n := len(frontier)
item := frontier[n-1]
frontier = frontier[:n-1]
switch v := item.Val.(type) {
case *ast.ObjectType:
items, frontier = flattenObjectType(v, item, items, frontier)
case *ast.ListType:
items, frontier = flattenListType(v, item, items, frontier)
default:
items = append(items, item)
}
}
// Reverse the list since the frontier model runs things backwards
for i := len(items)/2 - 1; i >= 0; i-- {
opp := len(items) - 1 - i
items[i], items[opp] = items[opp], items[i]
}
// Done! Set the original items
list.Items = items
return n, true
})
}
func flattenListType(
ot *ast.ListType,
item *ast.ObjectItem,
items []*ast.ObjectItem,
frontier []*ast.ObjectItem) ([]*ast.ObjectItem, []*ast.ObjectItem) {
// If the list is empty, keep the original list
if len(ot.List) == 0 {
items = append(items, item)
return items, frontier
}
// All the elements of this object must also be objects!
for _, subitem := range ot.List {
if _, ok := subitem.(*ast.ObjectType); !ok {
items = append(items, item)
return items, frontier
}
}
// Great! We have a match. Go through all the items and flatten
for _, elem := range ot.List {
// Add it to the frontier so that we can recurse
frontier = append(frontier, &ast.ObjectItem{
Keys: item.Keys,
Assign: item.Assign,
Val: elem,
LeadComment: item.LeadComment,
LineComment: item.LineComment,
})
}
return items, frontier
}
func flattenObjectType(
ot *ast.ObjectType,
item *ast.ObjectItem,
items []*ast.ObjectItem,
frontier []*ast.ObjectItem) ([]*ast.ObjectItem, []*ast.ObjectItem) {
// If the list has no items we do not have to flatten anything
if ot.List.Items == nil {
items = append(items, item)
return items, frontier
}
// All the elements of this object must also be objects!
for _, subitem := range ot.List.Items {
if _, ok := subitem.Val.(*ast.ObjectType); !ok {
items = append(items, item)
return items, frontier
}
}
// Great! We have a match. Go through all the items and flatten
for _, subitem := range ot.List.Items {
// Copy the new key
keys := make([]*ast.ObjectKey, len(item.Keys)+len(subitem.Keys))
copy(keys, item.Keys)
copy(keys[len(item.Keys):], subitem.Keys)
// Add it to the frontier so that we can recurse
frontier = append(frontier, &ast.ObjectItem{
Keys: keys,
Assign: item.Assign,
Val: subitem.Val,
LeadComment: item.LeadComment,
LineComment: item.LineComment,
})
}
return items, frontier
}

View File

@ -1,313 +0,0 @@
package parser
import (
"errors"
"fmt"
"github.com/hashicorp/hcl/hcl/ast"
hcltoken "github.com/hashicorp/hcl/hcl/token"
"github.com/hashicorp/hcl/json/scanner"
"github.com/hashicorp/hcl/json/token"
)
type Parser struct {
sc *scanner.Scanner
// Last read token
tok token.Token
commaPrev token.Token
enableTrace bool
indent int
n int // buffer size (max = 1)
}
func newParser(src []byte) *Parser {
return &Parser{
sc: scanner.New(src),
}
}
// Parse parses the given source and returns the abstract syntax tree.
func Parse(src []byte) (*ast.File, error) {
p := newParser(src)
return p.Parse()
}
var errEofToken = errors.New("EOF token found")
// Parse parses the given source and returns the abstract syntax tree.
func (p *Parser) Parse() (*ast.File, error) {
f := &ast.File{}
var err, scerr error
p.sc.Error = func(pos token.Pos, msg string) {
scerr = fmt.Errorf("%s: %s", pos, msg)
}
// The root must be an object in JSON
object, err := p.object()
if scerr != nil {
return nil, scerr
}
if err != nil {
return nil, err
}
// We make our final node an object list so it is more HCL compatible
f.Node = object.List
// Flatten it, which finds patterns and turns them into more HCL-like
// AST trees.
flattenObjects(f.Node)
return f, nil
}
func (p *Parser) objectList() (*ast.ObjectList, error) {
defer un(trace(p, "ParseObjectList"))
node := &ast.ObjectList{}
for {
n, err := p.objectItem()
if err == errEofToken {
break // we are finished
}
// we don't return a nil node, because the caller might want to use the
// already collected items.
if err != nil {
return node, err
}
node.Add(n)
// Check for a followup comma. If it isn't a comma, then we're done
if tok := p.scan(); tok.Type != token.COMMA {
break
}
}
return node, nil
}
// objectItem parses a single object item
func (p *Parser) objectItem() (*ast.ObjectItem, error) {
defer un(trace(p, "ParseObjectItem"))
keys, err := p.objectKey()
if err != nil {
return nil, err
}
o := &ast.ObjectItem{
Keys: keys,
}
switch p.tok.Type {
case token.COLON:
pos := p.tok.Pos
o.Assign = hcltoken.Pos{
Filename: pos.Filename,
Offset: pos.Offset,
Line: pos.Line,
Column: pos.Column,
}
o.Val, err = p.objectValue()
if err != nil {
return nil, err
}
}
return o, nil
}
// objectKey parses an object key and returns an ObjectKey AST
func (p *Parser) objectKey() ([]*ast.ObjectKey, error) {
keyCount := 0
keys := make([]*ast.ObjectKey, 0)
for {
tok := p.scan()
switch tok.Type {
case token.EOF:
return nil, errEofToken
case token.STRING:
keyCount++
keys = append(keys, &ast.ObjectKey{
Token: p.tok.HCLToken(),
})
case token.COLON:
// If we have a zero keycount it means that we never got
// an object key, i.e. `{ :`. This is a syntax error.
if keyCount == 0 {
return nil, fmt.Errorf("expected: STRING got: %s", p.tok.Type)
}
// Done
return keys, nil
case token.ILLEGAL:
return nil, errors.New("illegal")
default:
return nil, fmt.Errorf("expected: STRING got: %s", p.tok.Type)
}
}
}
// objectValue parses any type of value, such as number, bool, string, object,
// or list.
func (p *Parser) objectValue() (ast.Node, error) {
defer un(trace(p, "ParseObjectValue"))
tok := p.scan()
switch tok.Type {
case token.NUMBER, token.FLOAT, token.BOOL, token.NULL, token.STRING:
return p.literalType()
case token.LBRACE:
return p.objectType()
case token.LBRACK:
return p.listType()
case token.EOF:
return nil, errEofToken
}
return nil, fmt.Errorf("Expected object value, got unknown token: %+v", tok)
}
// object parses the root value of a JSON document, which must itself be an
// object.
func (p *Parser) object() (*ast.ObjectType, error) {
defer un(trace(p, "ParseType"))
tok := p.scan()
switch tok.Type {
case token.LBRACE:
return p.objectType()
case token.EOF:
return nil, errEofToken
}
return nil, fmt.Errorf("Expected object, got unknown token: %+v", tok)
}
// objectType parses an object type and returns an ObjectType AST
func (p *Parser) objectType() (*ast.ObjectType, error) {
defer un(trace(p, "ParseObjectType"))
// we assume that the currently scanned token is a LBRACE
o := &ast.ObjectType{}
l, err := p.objectList()
// if we hit RBRACE, we are good to go (it means we parsed all items); if it's
// not an RBRACE, it's a syntax error and we just return it.
if err != nil && p.tok.Type != token.RBRACE {
return nil, err
}
o.List = l
return o, nil
}
// listType parses a list type and returns a ListType AST
func (p *Parser) listType() (*ast.ListType, error) {
defer un(trace(p, "ParseListType"))
// we assume that the currently scanned token is a LBRACK
l := &ast.ListType{}
for {
tok := p.scan()
switch tok.Type {
case token.NUMBER, token.FLOAT, token.STRING:
node, err := p.literalType()
if err != nil {
return nil, err
}
l.Add(node)
case token.COMMA:
continue
case token.LBRACE:
node, err := p.objectType()
if err != nil {
return nil, err
}
l.Add(node)
case token.BOOL:
// TODO(arslan) should we support this? It is not supported by HCL yet
case token.LBRACK:
// TODO(arslan) should we support nested lists? Even though it's
// mentioned in the README of HCL, it's not a part of the grammar
// (not defined in parse.y)
case token.RBRACK:
// finished
return l, nil
default:
return nil, fmt.Errorf("unexpected token while parsing list: %s", tok.Type)
}
}
}
// literalType parses a literal type and returns a LiteralType AST
func (p *Parser) literalType() (*ast.LiteralType, error) {
defer un(trace(p, "ParseLiteral"))
return &ast.LiteralType{
Token: p.tok.HCLToken(),
}, nil
}
// scan returns the next token from the underlying scanner. If a token has
// been unscanned then read that instead.
func (p *Parser) scan() token.Token {
// If we have a token on the buffer, then return it.
if p.n != 0 {
p.n = 0
return p.tok
}
p.tok = p.sc.Scan()
return p.tok
}
// unscan pushes the previously read token back onto the buffer.
func (p *Parser) unscan() {
p.n = 1
}
// ----------------------------------------------------------------------------
// Parsing support
func (p *Parser) printTrace(a ...interface{}) {
if !p.enableTrace {
return
}
const dots = ". . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . "
const n = len(dots)
fmt.Printf("%5d:%3d: ", p.tok.Pos.Line, p.tok.Pos.Column)
i := 2 * p.indent
for i > n {
fmt.Print(dots)
i -= n
}
// i <= n
fmt.Print(dots[0:i])
fmt.Println(a...)
}
func trace(p *Parser, msg string) *Parser {
p.printTrace(msg, "(")
p.indent++
return p
}
// Usage pattern: defer un(trace(p, "..."))
func un(p *Parser) {
p.indent--
p.printTrace(")")
}

View File

@ -1,451 +0,0 @@
package scanner
import (
"bytes"
"fmt"
"os"
"unicode"
"unicode/utf8"
"github.com/hashicorp/hcl/json/token"
)
// eof represents a marker rune for the end of the reader.
const eof = rune(0)
// Scanner defines a lexical scanner
type Scanner struct {
buf *bytes.Buffer // Source buffer for advancing and scanning
src []byte // Source buffer for immutable access
// Source Position
srcPos token.Pos // current position
prevPos token.Pos // previous position, used for peek() method
lastCharLen int // length of last character in bytes
lastLineLen int // length of last line in characters (for correct column reporting)
tokStart int // token text start position
tokEnd int // token text end position
// Error is called for each error encountered. If no Error
// function is set, the error is reported to os.Stderr.
Error func(pos token.Pos, msg string)
// ErrorCount is incremented by one for each error encountered.
ErrorCount int
// tokPos is the start position of most recently scanned token; set by
// Scan. The Filename field is always left untouched by the Scanner. If
// an error is reported (via Error) and Position is invalid, the scanner is
// not inside a token.
tokPos token.Pos
}
// New creates and initializes a new instance of Scanner using src as
// its source content.
func New(src []byte) *Scanner {
// even though we accept a src, we read from an io.Reader-compatible type
// (*bytes.Buffer). So in the future we might easily change it to streaming
// read.
b := bytes.NewBuffer(src)
s := &Scanner{
buf: b,
src: src,
}
// srcPosition always starts with 1
s.srcPos.Line = 1
return s
}
// next reads the next rune from the buffered reader. Returns rune(0) if
// an error occurs (or io.EOF is returned).
func (s *Scanner) next() rune {
ch, size, err := s.buf.ReadRune()
if err != nil {
// advance for error reporting
s.srcPos.Column++
s.srcPos.Offset += size
s.lastCharLen = size
return eof
}
if ch == utf8.RuneError && size == 1 {
s.srcPos.Column++
s.srcPos.Offset += size
s.lastCharLen = size
s.err("illegal UTF-8 encoding")
return ch
}
// remember last position
s.prevPos = s.srcPos
s.srcPos.Column++
s.lastCharLen = size
s.srcPos.Offset += size
if ch == '\n' {
s.srcPos.Line++
s.lastLineLen = s.srcPos.Column
s.srcPos.Column = 0
}
// debug
// fmt.Printf("ch: %q, offset:column: %d:%d\n", ch, s.srcPos.Offset, s.srcPos.Column)
return ch
}
// unread unreads the previously read rune and updates the source position
func (s *Scanner) unread() {
if err := s.buf.UnreadRune(); err != nil {
panic(err) // this is user fault, we should catch it
}
s.srcPos = s.prevPos // put back last position
}
// peek returns the next rune without advancing the reader.
func (s *Scanner) peek() rune {
peek, _, err := s.buf.ReadRune()
if err != nil {
return eof
}
s.buf.UnreadRune()
return peek
}
// Scan scans the next token and returns the token.
func (s *Scanner) Scan() token.Token {
ch := s.next()
// skip white space
for isWhitespace(ch) {
ch = s.next()
}
var tok token.Type
// token text markings
s.tokStart = s.srcPos.Offset - s.lastCharLen
// token position: the initial next() has already advanced the offset by the
// size of one rune, but we are interested in the starting point
s.tokPos.Offset = s.srcPos.Offset - s.lastCharLen
if s.srcPos.Column > 0 {
// common case: last character was not a '\n'
s.tokPos.Line = s.srcPos.Line
s.tokPos.Column = s.srcPos.Column
} else {
// last character was a '\n'
// (we cannot be at the beginning of the source
// since we have called next() at least once)
s.tokPos.Line = s.srcPos.Line - 1
s.tokPos.Column = s.lastLineLen
}
switch {
case isLetter(ch):
lit := s.scanIdentifier()
if lit == "true" || lit == "false" {
tok = token.BOOL
} else if lit == "null" {
tok = token.NULL
} else {
s.err("illegal char")
}
case isDecimal(ch):
tok = s.scanNumber(ch)
default:
switch ch {
case eof:
tok = token.EOF
case '"':
tok = token.STRING
s.scanString()
case '.':
tok = token.PERIOD
ch = s.peek()
if isDecimal(ch) {
tok = token.FLOAT
ch = s.scanMantissa(ch)
ch = s.scanExponent(ch)
}
case '[':
tok = token.LBRACK
case ']':
tok = token.RBRACK
case '{':
tok = token.LBRACE
case '}':
tok = token.RBRACE
case ',':
tok = token.COMMA
case ':':
tok = token.COLON
case '-':
if isDecimal(s.peek()) {
ch := s.next()
tok = s.scanNumber(ch)
} else {
s.err("illegal char")
}
default:
s.err("illegal char: " + string(ch))
}
}
// finish token ending
s.tokEnd = s.srcPos.Offset
// create token literal
var tokenText string
if s.tokStart >= 0 {
tokenText = string(s.src[s.tokStart:s.tokEnd])
}
s.tokStart = s.tokEnd // ensure idempotency of tokenText() call
return token.Token{
Type: tok,
Pos: s.tokPos,
Text: tokenText,
}
}
// scanNumber scans a JSON number definition starting with the given rune
func (s *Scanner) scanNumber(ch rune) token.Type {
zero := ch == '0'
pos := s.srcPos
s.scanMantissa(ch)
ch = s.next() // seek forward
if ch == 'e' || ch == 'E' {
ch = s.scanExponent(ch)
return token.FLOAT
}
if ch == '.' {
ch = s.scanFraction(ch)
if ch == 'e' || ch == 'E' {
ch = s.next()
ch = s.scanExponent(ch)
}
return token.FLOAT
}
if ch != eof {
s.unread()
}
// If the number started with a zero and has more digits, error
if zero && pos != s.srcPos {
s.err("numbers cannot start with 0")
}
return token.NUMBER
}
// scanMantissa scans the mantissa beginning from the rune. It returns the next
// non-decimal rune. It's used to determine whether it's a fraction or exponent.
func (s *Scanner) scanMantissa(ch rune) rune {
scanned := false
for isDecimal(ch) {
ch = s.next()
scanned = true
}
if scanned && ch != eof {
s.unread()
}
return ch
}
// scanFraction scans the fraction after the '.' rune
func (s *Scanner) scanFraction(ch rune) rune {
if ch == '.' {
ch = s.peek() // we peek just to see if we can move forward
ch = s.scanMantissa(ch)
}
return ch
}
// scanExponent scans the remaining parts of an exponent after the 'e' or 'E'
// rune.
func (s *Scanner) scanExponent(ch rune) rune {
if ch == 'e' || ch == 'E' {
ch = s.next()
if ch == '-' || ch == '+' {
ch = s.next()
}
ch = s.scanMantissa(ch)
}
return ch
}
// scanString scans a quoted string
func (s *Scanner) scanString() {
braces := 0
for {
// '"' opening already consumed
// read character after quote
ch := s.next()
if ch == '\n' || ch < 0 || ch == eof {
s.err("literal not terminated")
return
}
if ch == '"' {
break
}
// If we're going into a ${} then we can ignore quotes for a while
if braces == 0 && ch == '$' && s.peek() == '{' {
braces++
s.next()
} else if braces > 0 && ch == '{' {
braces++
}
if braces > 0 && ch == '}' {
braces--
}
if ch == '\\' {
s.scanEscape()
}
}
return
}
// scanEscape scans an escape sequence
func (s *Scanner) scanEscape() rune {
// http://en.cppreference.com/w/cpp/language/escape
ch := s.next() // read character after '\'
switch ch {
case 'a', 'b', 'f', 'n', 'r', 't', 'v', '\\', '"':
// nothing to do
case '0', '1', '2', '3', '4', '5', '6', '7':
// octal notation
ch = s.scanDigits(ch, 8, 3)
case 'x':
// hexadecimal notation
ch = s.scanDigits(s.next(), 16, 2)
case 'u':
// universal character name
ch = s.scanDigits(s.next(), 16, 4)
case 'U':
// universal character name
ch = s.scanDigits(s.next(), 16, 8)
default:
s.err("illegal char escape")
}
return ch
}
// scanDigits scans up to n runes in the given base. For example, an octal
// escape such as \141 would be scanned with scanDigits(ch, 8, 3)
func (s *Scanner) scanDigits(ch rune, base, n int) rune {
for n > 0 && digitVal(ch) < base {
ch = s.next()
n--
}
if n > 0 {
s.err("illegal char escape")
}
// we scanned all digits; put the last non-digit char back
s.unread()
return ch
}
// scanIdentifier scans an identifier and returns the literal string
func (s *Scanner) scanIdentifier() string {
offs := s.srcPos.Offset - s.lastCharLen
ch := s.next()
for isLetter(ch) || isDigit(ch) || ch == '-' {
ch = s.next()
}
if ch != eof {
s.unread() // we got identifier, put back latest char
}
return string(s.src[offs:s.srcPos.Offset])
}
// recentPosition returns the position of the character immediately after the
// character or token returned by the last call to Scan.
func (s *Scanner) recentPosition() (pos token.Pos) {
pos.Offset = s.srcPos.Offset - s.lastCharLen
switch {
case s.srcPos.Column > 0:
// common case: last character was not a '\n'
pos.Line = s.srcPos.Line
pos.Column = s.srcPos.Column
case s.lastLineLen > 0:
// last character was a '\n'
// (we cannot be at the beginning of the source
// since we have called next() at least once)
pos.Line = s.srcPos.Line - 1
pos.Column = s.lastLineLen
default:
// at the beginning of the source
pos.Line = 1
pos.Column = 1
}
return
}
// err reports a scanning error via the s.Error function. If no Error
// function is set, it prints the error to os.Stderr by default
func (s *Scanner) err(msg string) {
s.ErrorCount++
pos := s.recentPosition()
if s.Error != nil {
s.Error(pos, msg)
return
}
fmt.Fprintf(os.Stderr, "%s: %s\n", pos, msg)
}
// isLetter returns true if the given rune is a letter
func isLetter(ch rune) bool {
return 'a' <= ch && ch <= 'z' || 'A' <= ch && ch <= 'Z' || ch == '_' || ch >= 0x80 && unicode.IsLetter(ch)
}
// isDigit returns true if the given rune is a decimal digit
func isDigit(ch rune) bool {
return '0' <= ch && ch <= '9' || ch >= 0x80 && unicode.IsDigit(ch)
}
// isDecimal returns true if the given rune is a decimal digit
func isDecimal(ch rune) bool {
return '0' <= ch && ch <= '9'
}
// isHexadecimal returns true if the given rune is a hexadecimal digit
func isHexadecimal(ch rune) bool {
return '0' <= ch && ch <= '9' || 'a' <= ch && ch <= 'f' || 'A' <= ch && ch <= 'F'
}
// isWhitespace returns true if the rune is a space, tab, newline or carriage return
func isWhitespace(ch rune) bool {
return ch == ' ' || ch == '\t' || ch == '\n' || ch == '\r'
}
// digitVal returns the integer value of a given octal, decimal, or hexadecimal rune
func digitVal(ch rune) int {
switch {
case '0' <= ch && ch <= '9':
return int(ch - '0')
case 'a' <= ch && ch <= 'f':
return int(ch - 'a' + 10)
case 'A' <= ch && ch <= 'F':
return int(ch - 'A' + 10)
}
return 16 // larger than any legal digit val
}

View File

@ -1,46 +0,0 @@
package token
import "fmt"
// Pos describes an arbitrary source position
// including the file, line, and column location.
// A Position is valid if the line number is > 0.
type Pos struct {
Filename string // filename, if any
Offset int // offset, starting at 0
Line int // line number, starting at 1
Column int // column number, starting at 1 (character count)
}
// IsValid returns true if the position is valid.
func (p *Pos) IsValid() bool { return p.Line > 0 }
// String returns a string in one of several forms:
//
// file:line:column valid position with file name
// line:column valid position without file name
// file invalid position with file name
// - invalid position without file name
func (p Pos) String() string {
s := p.Filename
if p.IsValid() {
if s != "" {
s += ":"
}
s += fmt.Sprintf("%d:%d", p.Line, p.Column)
}
if s == "" {
s = "-"
}
return s
}
// Before reports whether the position p is before u.
func (p Pos) Before(u Pos) bool {
return u.Offset > p.Offset || u.Line > p.Line
}
// After reports whether the position p is after u.
func (p Pos) After(u Pos) bool {
return u.Offset < p.Offset || u.Line < p.Line
}

View File

@ -1,118 +0,0 @@
package token
import (
"fmt"
"strconv"
hcltoken "github.com/hashicorp/hcl/hcl/token"
)
// Token defines a single HCL token which can be obtained via the Scanner
type Token struct {
Type Type
Pos Pos
Text string
}
// Type is the set of lexical tokens of the HCL (HashiCorp Configuration Language)
type Type int
const (
// Special tokens
ILLEGAL Type = iota
EOF
identifier_beg
literal_beg
NUMBER // 12345
FLOAT // 123.45
BOOL // true,false
STRING // "abc"
NULL // null
literal_end
identifier_end
operator_beg
LBRACK // [
LBRACE // {
COMMA // ,
PERIOD // .
COLON // :
RBRACK // ]
RBRACE // }
operator_end
)
var tokens = [...]string{
ILLEGAL: "ILLEGAL",
EOF: "EOF",
NUMBER: "NUMBER",
FLOAT: "FLOAT",
BOOL: "BOOL",
STRING: "STRING",
NULL: "NULL",
LBRACK: "LBRACK",
LBRACE: "LBRACE",
COMMA: "COMMA",
PERIOD: "PERIOD",
COLON: "COLON",
RBRACK: "RBRACK",
RBRACE: "RBRACE",
}
// String returns the string corresponding to the token tok.
func (t Type) String() string {
s := ""
if 0 <= t && t < Type(len(tokens)) {
s = tokens[t]
}
if s == "" {
s = "token(" + strconv.Itoa(int(t)) + ")"
}
return s
}
// IsIdentifier returns true for tokens corresponding to identifiers and basic
// type literals; it returns false otherwise.
func (t Type) IsIdentifier() bool { return identifier_beg < t && t < identifier_end }
// IsLiteral returns true for tokens corresponding to basic type literals; it
// returns false otherwise.
func (t Type) IsLiteral() bool { return literal_beg < t && t < literal_end }
// IsOperator returns true for tokens corresponding to operators and
// delimiters; it returns false otherwise.
func (t Type) IsOperator() bool { return operator_beg < t && t < operator_end }
// String returns the token's literal text. Note that this is only
// applicable for certain token types, such as token.IDENT,
// token.STRING, etc..
func (t Token) String() string {
return fmt.Sprintf("%s %s %s", t.Pos.String(), t.Type.String(), t.Text)
}
// HCLToken converts this token to an HCL token.
//
// The token type must be a literal type or this will panic.
func (t Token) HCLToken() hcltoken.Token {
switch t.Type {
case BOOL:
return hcltoken.Token{Type: hcltoken.BOOL, Text: t.Text}
case FLOAT:
return hcltoken.Token{Type: hcltoken.FLOAT, Text: t.Text}
case NULL:
return hcltoken.Token{Type: hcltoken.STRING, Text: ""}
case NUMBER:
return hcltoken.Token{Type: hcltoken.NUMBER, Text: t.Text}
case STRING:
return hcltoken.Token{Type: hcltoken.STRING, Text: t.Text, JSON: true}
default:
panic(fmt.Sprintf("unimplemented HCLToken for type: %s", t.Type))
}
}
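A small sketch of the HCLToken bridge above. The JSON flag it sets on strings is what later routes Token.Value through the standard strconv.Unquote rather than the HCL-aware variant:

package main

import (
	"fmt"

	jsontoken "github.com/hashicorp/hcl/json/token"
)

func main() {
	str := jsontoken.Token{Type: jsontoken.STRING, Text: `"web"`}
	ht := str.HCLToken()
	fmt.Println(ht.Type, ht.JSON) // STRING true

	// NULL carries no text: it converts to an empty HCL string token.
	fmt.Printf("%q\n", jsontoken.Token{Type: jsontoken.NULL}.HCLToken().Text) // ""
}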

View File

@ -1,38 +0,0 @@
package hcl
import (
"unicode"
"unicode/utf8"
)
type lexModeValue byte
const (
lexModeUnknown lexModeValue = iota
lexModeHcl
lexModeJson
)
// lexMode returns whether we're going to be parsing in JSON
// mode or HCL mode.
func lexMode(v []byte) lexModeValue {
var (
r rune
w int
offset int
)
for {
r, w = utf8.DecodeRune(v[offset:])
offset += w
if unicode.IsSpace(r) {
continue
}
if r == '{' {
return lexModeJson
}
break
}
return lexModeHcl
}

View File

@ -1,39 +0,0 @@
package hcl
import (
"fmt"
"github.com/hashicorp/hcl/hcl/ast"
hclParser "github.com/hashicorp/hcl/hcl/parser"
jsonParser "github.com/hashicorp/hcl/json/parser"
)
// ParseBytes accepts a byte slice as input and returns the AST tree.
//
// Input can be either JSON or HCL
func ParseBytes(in []byte) (*ast.File, error) {
return parse(in)
}
// ParseString accepts a string as input and returns the AST tree.
func ParseString(input string) (*ast.File, error) {
return parse([]byte(input))
}
func parse(in []byte) (*ast.File, error) {
switch lexMode(in) {
case lexModeHcl:
return hclParser.Parse(in)
case lexModeJson:
return jsonParser.Parse(in)
}
return nil, fmt.Errorf("unknown config format")
}
// Parse parses the given input and returns the root object.
//
// The input format can be either HCL or JSON.
func Parse(input string) (*ast.File, error) {
return parse([]byte(input))
}

View File

@ -1,5 +1,194 @@
# HCL Changelog
## v2.13.0 (June 22, 2022)
### Enhancements
* hcl: `hcl.Diagnostic` now has an additional field `Extra` which is intended for carrying arbitrary supporting data ("extra information") related to the diagnostic message, so that diagnostic renderers can optionally tailor the presentation of messages for particular situations. A usage sketch follows below. ([#539](https://github.com/hashicorp/hcl/pull/539))
* hclsyntax: When an error occurs during a function call, the returned diagnostics will include _extra information_ (as described in the previous point) about which function was being called and, if the message is about an error returned by the function itself, that raw `error` value without any post-processing. ([#539](https://github.com/hashicorp/hcl/pull/539))
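A sketch of how an application might use `Extra`; the `retryable` marker type is hypothetical and would be defined by the calling application:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl/v2"
)

// retryable is a hypothetical application-defined marker carried via Extra.
type retryable struct{ after string }

func main() {
	diag := &hcl.Diagnostic{
		Severity: hcl.DiagError,
		Summary:  "Backend unreachable",
		Extra:    retryable{after: "30s"},
	}
	// A diagnostic renderer can sniff Extra to tailor its presentation.
	if r, ok := diag.Extra.(retryable); ok {
		fmt.Printf("%s (retry after %s)\n", diag.Summary, r.after)
	}
}
```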
### Bugs Fixed
* hclwrite: Fixed a potential data race for any situation where `hclwrite.Format` runs concurrently with itself. ([#534](https://github.com/hashicorp/hcl/pull/534))
## v2.12.0 (April 22, 2022)
### Enhancements
* hclsyntax: Evaluation of conditional expressions will now produce more precise error messages about inconsistencies between the types of the true and false result expressions, particularly in cases where both are of the same structural type kind but differ in their nested elements. ([#530](https://github.com/hashicorp/hcl/pull/530))
* hclsyntax: The lexer will no longer allocate a small object on the heap for each token. Instead, in that situation it will allocate only when needed to return a diagnostic message with source location information. ([#490](https://github.com/hashicorp/hcl/pull/490))
* hclwrite: New functions `TokensForTuple`, `TokensForObject`, and `TokensForFunctionCall` allow for more easily constructing the three constructs which are supported for static analysis and which HCL-based languages typically use in contexts where an expression is used only for its syntax, and not evaluated to produce a real value. For example, these new functions together are sufficient to construct all valid type constraint expressions from [the Type Expressions Extension](./ext/typeexpr/), which is the basis of variable type constraints in the Terraform language at the time of writing. A sketch of these constructors follows below. ([#502](https://github.com/hashicorp/hcl/pull/502))
* json: New functions `IsJSONExpression` and `IsJSONBody` to determine if a given expression or body was created by the JSON syntax parser. In normal situations it's better not to worry about what syntax a particular expression/body originated in, but this can be useful in some trickier cases where an application needs to shim for backwards-compatibility or for static analysis that needs to have special handling of the JSON syntax's embedded expression/template conventions. ([#524](https://github.com/hashicorp/hcl/pull/524))
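A sketch of the new `hclwrite` constructors, assembling the type constraint expression `object({ name = string })` purely as syntax, with no evaluation involved; exact whitespace in the output may differ:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl/v2/hclwrite"
)

func main() {
	tokens := hclwrite.TokensForFunctionCall(
		"object",
		hclwrite.TokensForObject([]hclwrite.ObjectAttrTokens{
			{
				Name:  hclwrite.TokensForIdentifier("name"),
				Value: hclwrite.TokensForIdentifier("string"),
			},
		}),
	)
	fmt.Println(string(tokens.Bytes())) // prints something like object({ name = string })
}
```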
### Bugs Fixed
* gohcl: Fix docs about supported types for blocks. ([#507](https://github.com/hashicorp/hcl/pull/507))
## v2.11.1 (December 1, 2021)
### Bugs Fixed
* hclsyntax: The type for an upgraded unknown value with a splat expression cannot be known ([#495](https://github.com/hashicorp/hcl/pull/495))
## v2.11.0 (December 1, 2021)
### Enhancements
* hclsyntax: Various error messages related to unexpectedly reaching end of file while parsing a delimited subtree will now return specialized messages describing the opening tokens as "unclosed", instead of returning a generic diagnostic that just happens to refer to the empty source range at the end of the file. This gives better feedback when error messages are being presented alongside a source code snippet, as is common in HCL-based applications, because it shows which innermost container the parser was working on when it encountered the error. ([#492](https://github.com/hashicorp/hcl/pull/492))
### Bugs Fixed
* hclsyntax: Upgrading an unknown single value to a list using a splat expression must return unknown ([#493](https://github.com/hashicorp/hcl/pull/493))
## v2.10.1 (July 21, 2021)
### Bugs Fixed
* dynblock: Decode unknown dynamic blocks in order to obtain any diagnostics even though the decoded value is not used ([#476](https://github.com/hashicorp/hcl/pull/476))
* hclsyntax: Calling functions is now more robust in the face of an incorrectly-implemented function which returns a `function.ArgError` whose argument index is out of range for the length of the arguments. Previously this would often lead to a panic, but now it'll return a less-precise error message instead. Functions that return out-of-bounds argument indices still ought to be fixed so that the resulting error diagnostics can be as precise as possible. ([#472](https://github.com/hashicorp/hcl/pull/472))
* hclsyntax: Ensure marks on unknown values are maintained when processing string templates. ([#478](https://github.com/hashicorp/hcl/pull/478))
* hcl: Improved error messages for various common error situations in `hcl.Index` and `hcl.GetAttr`. These are part of the implementation of indexing and attribute lookup in the native syntax expression language too, so the new error messages will apply to problems using those operators. ([#474](https://github.com/hashicorp/hcl/pull/474))
## v2.10.0 (April 20, 2021)
### Enhancements
* dynblock,hcldec: Using dynblock in conjunction with hcldec can now decode blocks with unknown dynamic for_each arguments as entirely unknown values ([#461](https://github.com/hashicorp/hcl/pull/461))
* hclsyntax: Some syntax errors during parsing of the inside of `${` ... `}` template interpolation sequences will now produce an extra hint message about the need to escape as `$${` when trying to include interpolation syntax for other languages like shell scripting, AWS IAM policies, etc. ([#462](https://github.com/hashicorp/hcl/pull/462))
## v2.9.1 (March 10, 2021)
### Bugs Fixed
* hclsyntax: Fix panic for marked index value. ([#451](https://github.com/hashicorp/hcl/pull/451))
## v2.9.0 (February 23, 2021)
### Enhancements
* HCL's native syntax and JSON scanners -- and thus all of the other parsing components that build on top of them -- are now using Unicode 13 rules for text segmentation when counting text characters for the purpose of reporting source location columns. Previously HCL was using Unicode 12. Unicode 13 still uses the same algorithm but includes some additions to the character tables the algorithm is defined in terms of, to properly categorize new characters defined in Unicode 13.
## v2.8.2 (January 6, 2021)
### Bugs Fixed
* hclsyntax: Fix panic for marked collection splat. ([#436](https://github.com/hashicorp/hcl/pull/436))
* hclsyntax: Fix panic for marked template loops. ([#437](https://github.com/hashicorp/hcl/pull/437))
* hclsyntax: Fix `for` expression marked conditional. ([#438](https://github.com/hashicorp/hcl/pull/438))
* hclsyntax: Mark objects with keys that are sensitive. ([#440](https://github.com/hashicorp/hcl/pull/440))
## v2.8.1 (December 17, 2020)
### Bugs Fixed
* hclsyntax: Fix panic when expanding marked function arguments. ([#429](https://github.com/hashicorp/hcl/pull/429))
* hclsyntax: Error when attempting to use a marked value as an object key. ([#434](https://github.com/hashicorp/hcl/pull/434))
* hclsyntax: Error when attempting to use a marked value as an object key in expressions. ([#433](https://github.com/hashicorp/hcl/pull/433))
## v2.8.0 (December 7, 2020)
### Enhancements
* hclsyntax: Expression grouping parentheses will now be reflected by an explicit node in the AST, whereas before they were only considered during parsing. ([#426](https://github.com/hashicorp/hcl/pull/426))
### Bugs Fixed
* hclwrite: The parser will now correctly include the `(` and `)` tokens when an expression is surrounded by parentheses. Previously it would incorrectly recognize those tokens as being extraneous tokens outside of the expression. ([#426](https://github.com/hashicorp/hcl/pull/426))
* hclwrite: The formatter will now remove (rather than insert) spaces between the `!` (unary boolean "not") operator and its subsequent operand. ([#403](https://github.com/hashicorp/hcl/pull/403))
* hclsyntax: Unmark conditional values in expressions before checking their truthfulness ([#427](https://github.com/hashicorp/hcl/pull/427))
## v2.7.2 (November 30, 2020)
### Bugs Fixed
* gohcl: Fix panic when decoding into type containing value slices. ([#335](https://github.com/hashicorp/hcl/pull/335))
* hclsyntax: The unusual expression `null[*]` was previously always returning an unknown value, even though the rules for `[*]` normally call for it to return an empty tuple when applied to a null. As well as being a surprising result, it was particularly problematic because it violated the rule that a calling application may assume that an expression result will always be known unless the application itself introduces unknown values via the evaluation context. `null[*]` will now produce an empty tuple. ([#416](https://github.com/hashicorp/hcl/pull/416))
* hclsyntax: Fix panic when traversing a list, tuple, or map with cty "marks" ([#424](https://github.com/hashicorp/hcl/pull/424))
## v2.7.1 (November 18, 2020)
### Bugs Fixed
* hclwrite: Correctly handle blank quoted string block labels, instead of dropping them ([#422](https://github.com/hashicorp/hcl/pull/422))
## v2.7.0 (October 14, 2020)
### Enhancements
* json: There is a new function `ParseWithStartPos`, which allows overriding the starting position for parsing in case the given JSON bytes are a fragment of a larger document, such as might happen when decoding with `encoding/json` into a `json.RawMessage`. A sketch follows below. ([#389](https://github.com/hashicorp/hcl/pull/389))
* json: There is a new function `ParseExpression`, which allows parsing a JSON string directly in expression mode, whereas previously it was only possible to parse a JSON string in body mode. ([#381](https://github.com/hashicorp/hcl/pull/381))
* hclwrite: `Block` type now supports `SetType` and `SetLabels`, allowing surgical changes to the type and labels of an existing block without having to reconstruct the entire block. ([#340](https://github.com/hashicorp/hcl/pull/340))
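A rough sketch of both new `json` package entry points (illustrative only: the `fragment` bytes and the position values are invented, and imports of `hcl` and `json` from this module are assumed):

```go
// Parse a JSON fragment that began at line 10, column 5 (byte 120) of a
// larger document, so that diagnostics report positions in the outer file.
fragment := []byte(`{"io_mode": "async"}`)
f, diags := json.ParseWithStartPos(fragment, "config.json", hcl.Pos{Line: 10, Column: 5, Byte: 120})

// Parse a JSON string directly in expression mode.
expr, exprDiags := json.ParseExpression([]byte(`"${1 + 2}"`), "inline.json")
```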
### Bugs Fixed
* hclsyntax: Fix confusing error message for bitwise OR operator ([#380](https://github.com/hashicorp/hcl/pull/380))
* hclsyntax: Several bug fixes for using HCL with values containing cty "marks" ([#404](https://github.com/hashicorp/hcl/pull/404), [#406](https://github.com/hashicorp/hcl/pull/406), [#407](https://github.com/hashicorp/hcl/pull/407))
## v2.6.0 (June 4, 2020)
### Enhancements
* hcldec: Add a new `Spec`, `ValidateSpec`, which allows custom validation of values at decode-time. ([#387](https://github.com/hashicorp/hcl/pull/387))
### Bugs Fixed
* hclsyntax: Fix panic with combination of sequences and null arguments ([#386](https://github.com/hashicorp/hcl/pull/386))
* hclsyntax: Fix handling of unknown values and sequences ([#386](https://github.com/hashicorp/hcl/pull/386))
## v2.5.1 (May 14, 2020)
### Bugs Fixed
* hclwrite: Handle legacy dot access of numeric indexes. ([#369](https://github.com/hashicorp/hcl/pull/369))
* hclwrite: Fix panic for dotted full splat (`foo.*`) ([#374](https://github.com/hashicorp/hcl/pull/374))
## v2.5.0 (May 6, 2020)
### Enhancements
* hclwrite: Generate multi-line objects and maps. ([#372](https://github.com/hashicorp/hcl/pull/372))
## v2.4.0 (Apr 13, 2020)
### Enhancements
* The Unicode data tables that HCL uses to produce user-perceived "column" positions in diagnostics and other source ranges are now updated to Unicode 12.0.0, which will cause HCL to produce more accurate column numbers for combining characters introduced to Unicode since Unicode 9.0.0.
### Bugs Fixed
* json: Fix panic when parsing malformed JSON. ([#358](https://github.com/hashicorp/hcl/pull/358))
## v2.3.0 (Jan 3, 2020)
### Enhancements
* ext/tryfunc: Optional functions `try` and `can` to include in your `hcl.EvalContext` when evaluating expressions, which allow users to make decisions based on the success of expressions. See the sketch after this list. ([#330](https://github.com/hashicorp/hcl/pull/330))
* ext/typeexpr: Now has an optional function `convert` which you can include in your `hcl.EvalContext` when evaluating expressions, allowing users to convert values to specific type constraints using the type constraint expression syntax. ([#330](https://github.com/hashicorp/hcl/pull/330))
* ext/typeexpr: A new `cty` capsule type `typeexpr.TypeConstraintType` which, when used as either a type constraint for a function parameter or as a type constraint for a `hcldec` attribute specification will cause the given expression to be interpreted as a type constraint expression rather than a value expression. ([#330](https://github.com/hashicorp/hcl/pull/330))
* ext/customdecode: An optional extension that allows overriding the static decoding behavior for expressions either in function arguments or `hcldec` attribute specifications. ([#330](https://github.com/hashicorp/hcl/pull/330))
* ext/customdecode: New `cty` capsule types `customdecode.ExpressionType` and `customdecode.ExpressionClosureType` which, when used as either a type constraint for a function parameter or as a type constraint for a `hcldec` attribute specification will cause the given expression (and, for the closure type, also the `hcl.EvalContext` it was evaluated in) to be captured for later analysis, rather than immediately evaluated. ([#330](https://github.com/hashicorp/hcl/pull/330))
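For example, an application might wire the new functions into its evaluation context roughly as follows (a sketch, assuming the exported names `tryfunc.TryFunc`, `tryfunc.CanFunc`, and `typeexpr.ConvertFunc`):

```go
import (
	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/ext/tryfunc"
	"github.com/hashicorp/hcl/v2/ext/typeexpr"
	"github.com/zclconf/go-cty/cty/function"
)

// evalCtx makes try, can, and convert available to user expressions.
var evalCtx = &hcl.EvalContext{
	Functions: map[string]function.Function{
		"try":     tryfunc.TryFunc,
		"can":     tryfunc.CanFunc,
		"convert": typeexpr.ConvertFunc,
	},
}
```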
## v2.2.0 (Dec 11, 2019)
### Enhancements
* hcldec: Attribute evaluation (as part of `AttrSpec` or `BlockAttrsSpec`) now captures expression evaluation metadata in any errors it produces during type conversions, allowing for better feedback in calling applications that are able to make use of this metadata when printing diagnostic messages. ([#329](https://github.com/hashicorp/hcl/pull/329))
### Bugs Fixed
* hclsyntax: `IndexExpr`, `SplatExpr`, and `RelativeTraversalExpr` will now report a source range that covers all of their child expression nodes. Previously they would report only the operator part, such as `["foo"]`, `[*]`, or `.foo`, which was problematic for callers using source ranges for code analysis. ([#328](https://github.com/hashicorp/hcl/pull/328))
* hclwrite: Parser will no longer panic when the input includes index, splat, or relative traversal syntax. ([#328](https://github.com/hashicorp/hcl/pull/328))
## v2.1.0 (Nov 19, 2019)
### Enhancements
* gohcl: When decoding into a struct value with some fields already populated, those values will be retained if not explicitly overwritten in the given HCL body, with overriding/merging behavior similar to `json.Unmarshal` in the Go standard library.
* hclwrite: New interface to set the expression for an attribute to be a raw token sequence, with no special processing. This has some caveats, so if you intend to use it please refer to the godoc comments. ([#320](https://github.com/hashicorp/hcl/pull/320))
### Bugs Fixed
* hclwrite: The `Body.Blocks` method was returning the blocks in an undefined order, rather than preserving the order of declaration in the source input. ([#313](https://github.com/hashicorp/hcl/pull/313))
* hclwrite: The `TokensForTraversal` function (and thus in turn the `Body.SetAttributeTraversal` method) was not correctly handling index steps in traversals, and thus producing invalid results. ([#319](https://github.com/hashicorp/hcl/pull/319))
## v2.0.0 (Oct 2, 2019)
Initial release of HCL 2, which is a new implementation combining the HCL 1 language with the HIL expression language.

View File

@ -8,7 +8,7 @@ towards devops tools, servers, etc.
> **NOTE:** This is major version 2 of HCL, whose Go API is incompatible with
> major version 1. Both versions are available for selection in Go Modules
> projects. HCL 2 _cannot_ be imported from Go projects that are not using Go Modules. For more information, see
> [our version selection guide](https://github.com/golang/go/wiki/Version-Selection).
> [our version selection guide](https://github.com/hashicorp/hcl/wiki/Version-Selection).
HCL has both a _native syntax_, intended to be pleasant to read and write for
humans, and a JSON-based variant that is easier for machines to generate
@ -33,11 +33,25 @@ package main
import (
"log"
"github.com/hashicorp/hcl/v2/hclsimple"
)
type Config struct {
LogLevel string `hcl:"log_level"`
IOMode string `hcl:"io_mode"`
Service ServiceConfig `hcl:"service,block"`
}
type ServiceConfig struct {
Protocol string `hcl:"protocol,label"`
Type string `hcl:"type,label"`
ListenAddr string `hcl:"listen_addr"`
Processes []ProcessConfig `hcl:"process,block"`
}
type ProcessConfig struct {
Type string `hcl:"type,label"`
Command []string `hcl:"command"`
}
func main() {
@ -51,7 +65,8 @@ func main() {
```
A lower-level API is available for applications that need more control over
the parsing, decoding, and evaluation of configuration.
the parsing, decoding, and evaluation of configuration. For more information,
see [the package documentation](https://pkg.go.dev/github.com/hashicorp/hcl/v2).
## Why?
@ -156,9 +171,9 @@ syntax allows use of arbitrary expressions within JSON strings:
For more information, see the detailed specifications:
* [Syntax-agnostic Information Model](hcl/spec.md)
* [HCL Native Syntax](hcl/hclsyntax/spec.md)
* [JSON Representation](hcl/json/spec.md)
* [Syntax-agnostic Information Model](spec.md)
* [HCL Native Syntax](hclsyntax/spec.md)
* [JSON Representation](json/spec.md)
## Changes in 2.0

View File

@ -22,14 +22,14 @@ const (
)
// Diagnostic represents information to be presented to a user about an
// error or anomoly in parsing or evaluating configuration.
// error or anomaly in parsing or evaluating configuration.
type Diagnostic struct {
Severity DiagnosticSeverity
// Summary and Detail contain the English-language description of the
// problem. Summary is a terse description of the general problem and
// detail is a more elaborate, often-multi-sentence description of
// the probem and what might be done to solve it.
// the problem and what might be done to solve it.
Summary string
Detail string
@ -63,6 +63,28 @@ type Diagnostic struct {
// case of colliding names.
Expression Expression
EvalContext *EvalContext
// Extra is an extension point for additional machine-readable information
// about this problem.
//
// Recipients of diagnostic objects may type-assert this value with
// specific interface types they know about to discover if any additional
// information is available that is interesting for their use-case.
//
// Extra is always considered to be optional extra information and so a
// diagnostic message should still always be fully described (from the
// perspective of a human who understands the language the messages are
// written in) by the other fields, in case a particular recipient cannot
// make use of the Extra value.
//
// Functions that return diagnostics with Extra populated should typically
// document that they place values implementing a particular interface,
// rather than a concrete type, and define that interface such that its
// methods can dynamically indicate a lack of support at runtime even
// if the interface happens to be statically available. An Extra
// type that wraps other Extra values should additionally implement
// interface DiagnosticExtraUnwrapper to return the value they are wrapping
// so that callers can access inner values to type-assert against.
Extra interface{}
}
// Diagnostics is a list of Diagnostic instances.
@ -141,3 +163,24 @@ type DiagnosticWriter interface {
WriteDiagnostic(*Diagnostic) error
WriteDiagnostics(Diagnostics) error
}
// DiagnosticExtraUnwrapper is an interface implemented by values in the
// Extra field of Diagnostic when they are wrapping another "Extra" value that
// was generated downstream.
//
// Diagnostic recipients which want to examine "Extra" values to sniff for
// particular types of extra data can either type-assert this interface
// directly and repeatedly unwrap until they receive nil, or can use the
// helper function DiagnosticExtra.
type DiagnosticExtraUnwrapper interface {
// If the receiver is wrapping another "diagnostic extra" value, returns
// that value. Otherwise returns nil to indicate dynamically that nothing
// is wrapped.
//
// The "nothing is wrapped" condition can be signalled either by this
// method returning nil or by a type not implementing this interface at all.
//
// Implementers should never create unwrap "cycles" where a nested extra
// value returns a value that was also wrapping it.
UnwrapDiagnosticExtra() interface{}
}

View File

@ -0,0 +1,39 @@
//go:build go1.18
// +build go1.18
package hcl
// This file contains additional diagnostics-related symbols that use the
// Go 1.18 type parameters syntax and would therefore be incompatible with
// Go 1.17 and earlier.
// DiagnosticExtra attempts to retrieve an "extra value" of type T from the
// given diagnostic, if either the diag.Extra field directly contains a value
// of that type or the value implements DiagnosticExtraUnwrapper and directly
// or indirectly returns a value of that type.
//
// Type T should typically be an interface type, so that code which generates
// diagnostics can potentially return different implementations of the same
// interface dynamically as needed.
//
// If a value of type T is found, returns that value and true to indicate
// success. Otherwise, returns the zero value of T and false to indicate
// failure.
func DiagnosticExtra[T any](diag *Diagnostic) (T, bool) {
extra := diag.Extra
var zero T
for {
if ret, ok := extra.(T); ok {
return ret, true
}
if unwrap, ok := extra.(DiagnosticExtraUnwrapper); ok {
// If our "extra" implements DiagnosticExtraUnwrapper then we'll
// unwrap one level and try this again.
extra = unwrap.UnwrapDiagnosticExtra()
} else {
return zero, false
}
}
}
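// Hypothetical usage sketch (illustration only, not part of this file): a
// calling application probing a diagnostic for an assumed extra interface.
//
//	type hasExitCode interface{ ExitCode() int }
//
//	if extra, ok := hcl.DiagnosticExtra[hasExitCode](diag); ok {
//		os.Exit(extra.ExitCode())
//	}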

View File

@ -0,0 +1,209 @@
# HCL Custom Static Decoding Extension
This HCL extension provides a mechanism for defining arguments in an HCL-based
language whose values are derived using custom decoding rules against the
HCL expression syntax, overriding the usual behavior of normal expression
evaluation.
"Arguments", for the purpose of this extension, currently includes the
following two contexts:
* For applications using `hcldec` for dynamic decoding, a `hcldec.AttrSpec`
or `hcldec.BlockAttrsSpec` can be given a special type constraint that
opts in to custom decoding behavior for the attribute(s) that are selected
by that specification.
* When working with the HCL native expression syntax, a function given in
the `hcl.EvalContext` during evaluation can have parameters with special
type constraints that opt in to custom decoding behavior for the argument
expression associated with that parameter in any call.
The above use-cases are rather abstract, so we'll consider a motivating
real-world example: sometimes we (language designers) need to allow users
to specify type constraints directly in the language itself, such as in
[Terraform's Input Variables](https://www.terraform.io/docs/configuration/variables.html).
Terraform's `variable` blocks include an argument called `type` which takes
a type constraint given using HCL expression building-blocks as defined by
[the HCL `typeexpr` extension](../typeexpr/README.md).
A "type constraint expression" of that sort is not an expression intended to
be evaluated in the usual way. Instead, the physical expression is
deconstructed using [the static analysis operations](../../spec.md#static-analysis)
to produce a `cty.Type` as the result, rather than a `cty.Value`.
The purpose of this Custom Static Decoding Extension, then, is to provide a
bridge to allow that sort of custom decoding to be used via mechanisms that
normally deal in `cty.Value`, such as `hcldec` and native syntax function
calls as listed above.
(Note: [`gohcl`](https://pkg.go.dev/github.com/hashicorp/hcl/v2/gohcl) has
its own mechanism to support this use case, exploiting the fact that it is
working directly with "normal" Go types. Decoding into a struct field of
type `hcl.Expression` obtains the expression directly without evaluating it
first. The Custom Static Decoding Extension is not necessary for that `gohcl`
technique. You can also implement custom decoding by working directly with
the lowest-level HCL API, which separates extraction of and evaluation of
expressions into two steps.)
## Custom Decoding Types
This extension relies on a convention implemented in terms of
[_Capsule Types_ in the underlying `cty` type system](https://github.com/zclconf/go-cty/blob/master/docs/types.md#capsule-types). `cty` allows a capsule type to carry arbitrary
extension metadata values as an aid to creating higher-level abstractions like
this extension.
A custom argument decoding mode, then, is implemented by creating a new `cty`
capsule type that implements the `ExtensionData` custom operation to return
a decoding function when requested. For example:
```go
var keywordType cty.Type
keywordType = cty.CapsuleWithOps("keyword", reflect.TypeOf(""), &cty.CapsuleOps{
ExtensionData: func(key interface{}) interface{} {
switch key {
case customdecode.CustomExpressionDecoder:
return customdecode.CustomExpressionDecoderFunc(
func(expr hcl.Expression, ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) {
var diags hcl.Diagnostics
kw := hcl.ExprAsKeyword(expr)
if kw == "" {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid keyword",
Detail: "A keyword is required",
Subject: expr.Range().Ptr(),
})
return cty.UnknownVal(keywordType), diags
}
return cty.CapsuleVal(keywordType, &kw), nil
},
)
default:
return nil
}
},
})
```
The boilerplate here is a bit fussy, but the important part for our purposes
is the `case customdecode.CustomExpressionDecoder:` clause, which uses
a custom extension key type defined in this package to recognize when a
component implementing this extension is checking to see if a target type
has a custom decode implementation.
In the above case we've defined a type that decodes expressions as static
keywords, so a keyword like `foo` would decode as an encapsulated `"foo"`
string, while any other sort of expression like `"baz"` or `1 + 1` would
return an error.
We could then use `keywordType` as a type constraint either for a function
parameter or a `hcldec` attribute specification, which would require the
argument for that function parameter or the expression for the matching
attributes to be a static keyword, rather than an arbitrary expression.
For example, in a `hcldec.AttrSpec`:
```go
keywordSpec := &hcldec.AttrSpec{
Name: "keyword",
Type: keywordType,
}
```
The above would accept input like the following and would set its result to
a `cty.Value` of `keywordType`, after decoding:
```hcl
keyword = foo
```
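Tying these pieces together, decoding might look like this (a sketch assuming `body` is an `hcl.Body` containing the attribute above, with `keywordType` and `keywordSpec` as defined earlier):

```go
val, diags := hcldec.Decode(body, keywordSpec, nil)
if !diags.HasErrors() {
	kw := *(val.EncapsulatedValue().(*string)) // keywordType encapsulates a Go string
	fmt.Printf("keyword is %q\n", kw)
}
```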
## The Expression and Expression Closure `cty` types
Building on the above, this package also includes two capsule types that use
the above mechanism to allow calling applications to capture expressions
directly and thus defer analysis to a later step, after initial decoding.
The `customdecode.ExpressionType` type encapsulates an `hcl.Expression` alone,
for situations like our type constraint expression example above where it's
the static structure of the expression we want to inspect, and thus any
variables and functions defined in the evaluation context are irrelevant.
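For example, an application could capture a raw expression through `hcldec` like this (a sketch assuming a hypothetical attribute named `check` and an `hcl.Body` called `body`):

```go
exprSpec := &hcldec.AttrSpec{
	Name: "check",
	Type: customdecode.ExpressionType,
}

val, diags := hcldec.Decode(body, exprSpec, nil)
if !diags.HasErrors() {
	expr := customdecode.ExpressionFromVal(val)
	_ = expr // suitable for static analysis only; no EvalContext was captured
}
```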
The `customdecode.ExpressionClosureType` type encapsulates a
`*customdecode.ExpressionClosure` value, which binds the given expression to
the `hcl.EvalContext` it was asked to evaluate against and thus allows the
receiver of that result to later perform normal evaluation of the expression
with all the same variables and functions that would've been available to it
naturally.
Both of these types can be used as type constraints either for `hcldec`
attribute specifications or for function arguments. Here's an example of
`ExpressionClosureType` to implement a function that can evaluate
an expression with some additional variables defined locally, which we'll
call the `with(...)` function:
```go
var WithFunc = function.New(&function.Spec{
Params: []function.Parameter{
{
Name: "variables",
Type: cty.DynamicPseudoType,
},
{
Name: "expression",
Type: customdecode.ExpressionClosureType,
},
},
Type: func(args []cty.Value) (cty.Type, error) {
varsVal := args[0]
exprVal := args[1]
if !varsVal.Type().IsObjectType() {
return cty.NilVal, function.NewArgErrorf(0, "must be an object defining local variables")
}
if !varsVal.IsKnown() {
// We can't predict our result type until the variables object
// is known.
return cty.DynamicPseudoType, nil
}
vars := varsVal.AsValueMap()
closure := customdecode.ExpressionClosureFromVal(exprVal)
result, err := evalWithLocals(vars, closure)
if err != nil {
return cty.NilVal, err
}
return result.Type(), nil
},
Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
varsVal := args[0]
exprVal := args[1]
vars := varsVal.AsValueMap()
closure := customdecode.ExpressionClosureFromVal(exprVal)
return evalWithLocals(vars, closure)
},
})
func evalWithLocals(locals map[string]cty.Value, closure *customdecode.ExpressionClosure) (cty.Value, error) {
childCtx := closure.EvalContext.NewChild()
childCtx.Variables = locals
val, diags := closure.Expression.Value(childCtx)
if diags.HasErrors() {
return cty.NilVal, function.NewArgErrorf(1, "couldn't evaluate expression: %s", diags.Error())
}
return val, nil
}
```
If the above function were placed into an `hcl.EvalContext` as `with`, it
could be used in a native syntax call to that function as follows:
```hcl
foo = with({name = "Cory"}, "${greeting}, ${name}!")
```
The above assumes a variable in the main context called `greeting`, to which
the `with` function adds `name` before evaluating the expression given in
its second argument. This makes that second argument context-sensitive -- it
would behave differently if the user wrote the same thing somewhere else -- so
this capability should be used with care to make sure it doesn't cause confusion
for the end-users of your language.
There are some other examples of this capability to evaluate expressions in
unusual ways in the `tryfunc` directory that is a sibling of this one.

View File

@ -0,0 +1,56 @@
// Package customdecode contains an HCL extension that allows, in certain
// contexts, expression evaluation to be overridden by custom static analysis.
//
// This mechanism is only supported in certain specific contexts where
// expressions are decoded with a specific target type in mind. For more
// information, see the documentation on CustomExpressionDecoder.
package customdecode
import (
"github.com/hashicorp/hcl/v2"
"github.com/zclconf/go-cty/cty"
)
type customDecoderImpl int
// CustomExpressionDecoder is a value intended to be used as a cty capsule
// type ExtensionData key for capsule types whose values are to be obtained
// by static analysis of an expression rather than normal evaluation of that
// expression.
//
// When a cooperating capsule type is asked for ExtensionData with this key,
// it must return a non-nil CustomExpressionDecoderFunc value.
//
// This mechanism is not universally supported; instead, it's handled in a few
// specific places where expressions are evaluated with the intent of producing
// a cty.Value of a type given by the calling application.
//
// Specifically, this currently works for type constraints given in
// hcldec.AttrSpec and hcldec.BlockAttrsSpec, and it works for arguments to
// function calls in the HCL native syntax. HCL extensions implemented outside
// of the main HCL module may also implement this; consult their own
// documentation for details.
const CustomExpressionDecoder = customDecoderImpl(1)
// CustomExpressionDecoderFunc is the type of value that must be returned by
// a capsule type handling the key CustomExpressionDecoder in its ExtensionData
// implementation.
//
// If no error diagnostics are returned, the result value MUST be of the
// capsule type that the decoder function was derived from. If the returned
// error diagnostics prevent producing a value at all, return cty.NilVal.
type CustomExpressionDecoderFunc func(expr hcl.Expression, ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics)
// CustomExpressionDecoderForType takes any cty type and returns its
// custom expression decoder implementation if it has one. If it is not a
// capsule type or it does not implement a custom expression decoder, this
// function returns nil.
func CustomExpressionDecoderForType(ty cty.Type) CustomExpressionDecoderFunc {
if !ty.IsCapsuleType() {
return nil
}
if fn, ok := ty.CapsuleExtensionData(CustomExpressionDecoder).(CustomExpressionDecoderFunc); ok {
return fn
}
return nil
}
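// Hypothetical usage sketch (illustration only, not part of this file): a
// component that prefers custom decoding whenever the target type opts in.
//
//	if fn := customdecode.CustomExpressionDecoderForType(targetType); fn != nil {
//		return fn(expr, ctx) // custom static decoding
//	}
//	return expr.Value(ctx) // otherwise, normal evaluation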

View File

@ -0,0 +1,146 @@
package customdecode
import (
"fmt"
"reflect"
"github.com/hashicorp/hcl/v2"
"github.com/zclconf/go-cty/cty"
)
// ExpressionType is a cty capsule type that carries hcl.Expression values.
//
// This type implements custom decoding in the most general way possible: it
// just captures whatever expression is given to it, with no further processing
// whatsoever. It could therefore be useful in situations where an application
// must defer processing of the expression content until a later step.
//
// ExpressionType only captures the expression, not the evaluation context it
// was destined to be evaluated in. That means this type can be fine for
// situations where the recipient of the value only intends to do static
// analysis, but ExpressionClosureType is more appropriate in situations where
// the recipient will eventually evaluate the given expression.
var ExpressionType cty.Type
// ExpressionVal returns a new cty value of type ExpressionType, wrapping the
// given expression.
func ExpressionVal(expr hcl.Expression) cty.Value {
return cty.CapsuleVal(ExpressionType, &expr)
}
// ExpressionFromVal returns the expression encapsulated in the given value, or
// panics if the value is not a known value of ExpressionType.
func ExpressionFromVal(v cty.Value) hcl.Expression {
if !v.Type().Equals(ExpressionType) {
panic("value is not of ExpressionType")
}
ptr := v.EncapsulatedValue().(*hcl.Expression)
return *ptr
}
// ExpressionClosureType is a cty capsule type that carries hcl.Expression
// values along with their original evaluation contexts.
//
// This is similar to ExpressionType except that during custom decoding it
// also captures the hcl.EvalContext that was provided, allowing callers to
// evaluate the expression later in the same context where it would originally
// have been evaluated, or a context derived from that one.
var ExpressionClosureType cty.Type
// ExpressionClosure is the type encapsulated in ExpressionClosureType
type ExpressionClosure struct {
Expression hcl.Expression
EvalContext *hcl.EvalContext
}
// ExpressionClosureVal returns a new cty value of type ExpressionClosureType,
// wrapping the given expression closure.
func ExpressionClosureVal(closure *ExpressionClosure) cty.Value {
return cty.CapsuleVal(ExpressionClosureType, closure)
}
// Value evaluates the closure's expression using the closure's EvalContext,
// returning the result.
func (c *ExpressionClosure) Value() (cty.Value, hcl.Diagnostics) {
return c.Expression.Value(c.EvalContext)
}
// ExpressionClosureFromVal returns the expression closure encapsulated in the
// given value, or panics if the value is not a known value of
// ExpressionClosureType.
//
// The caller MUST NOT modify the returned closure or the EvalContext inside
// it. To derive a new EvalContext, either create a child context or make
// a copy.
func ExpressionClosureFromVal(v cty.Value) *ExpressionClosure {
if !v.Type().Equals(ExpressionClosureType) {
panic("value is not of ExpressionClosureType")
}
return v.EncapsulatedValue().(*ExpressionClosure)
}
func init() {
// Getting hold of a reflect.Type for hcl.Expression is a bit tricky because
// it's an interface type, but we can do it with some indirection.
goExpressionType := reflect.TypeOf((*hcl.Expression)(nil)).Elem()
ExpressionType = cty.CapsuleWithOps("expression", goExpressionType, &cty.CapsuleOps{
ExtensionData: func(key interface{}) interface{} {
switch key {
case CustomExpressionDecoder:
return CustomExpressionDecoderFunc(
func(expr hcl.Expression, ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) {
return ExpressionVal(expr), nil
},
)
default:
return nil
}
},
TypeGoString: func(_ reflect.Type) string {
return "customdecode.ExpressionType"
},
GoString: func(raw interface{}) string {
exprPtr := raw.(*hcl.Expression)
return fmt.Sprintf("customdecode.ExpressionVal(%#v)", *exprPtr)
},
RawEquals: func(a, b interface{}) bool {
aPtr := a.(*hcl.Expression)
bPtr := b.(*hcl.Expression)
return reflect.DeepEqual(*aPtr, *bPtr)
},
})
ExpressionClosureType = cty.CapsuleWithOps("expression closure", reflect.TypeOf(ExpressionClosure{}), &cty.CapsuleOps{
ExtensionData: func(key interface{}) interface{} {
switch key {
case CustomExpressionDecoder:
return CustomExpressionDecoderFunc(
func(expr hcl.Expression, ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) {
return ExpressionClosureVal(&ExpressionClosure{
Expression: expr,
EvalContext: ctx,
}), nil
},
)
default:
return nil
}
},
TypeGoString: func(_ reflect.Type) string {
return "customdecode.ExpressionClosureType"
},
GoString: func(raw interface{}) string {
closure := raw.(*ExpressionClosure)
return fmt.Sprintf("customdecode.ExpressionClosureVal(%#v)", closure)
},
RawEquals: func(a, b interface{}) bool {
closureA := a.(*ExpressionClosure)
closureB := b.(*ExpressionClosure)
// The expression itself compares by deep equality, but EvalContexts
// conventionally compare by pointer identity, so we'll comply
// with both conventions here by testing them separately.
return closureA.EvalContext == closureB.EvalContext &&
reflect.DeepEqual(closureA.Expression, closureB.Expression)
},
})
}

View File

@ -1,184 +0,0 @@
# HCL Dynamic Blocks Extension
This HCL extension implements a special block type named "dynamic" that can
be used to dynamically generate blocks of other types by iterating over
collection values.
Normally the block structure in an HCL configuration file is rigid, even
though dynamic expressions can be used within attribute values. This is
convenient for most applications since it allows the overall structure of
the document to be decoded easily, but in some applications it is desirable
to allow dynamic block generation within certain portions of the configuration.
Dynamic block generation is performed using the `dynamic` block type:
```hcl
toplevel {
nested {
foo = "static block 1"
}
dynamic "nested" {
for_each = ["a", "b", "c"]
iterator = nested
content {
foo = "dynamic block ${nested.value}"
}
}
nested {
foo = "static block 2"
}
}
```
The above is interpreted as if it were written as follows:
```hcl
toplevel {
nested {
foo = "static block 1"
}
nested {
foo = "dynamic block a"
}
nested {
foo = "dynamic block b"
}
nested {
foo = "dynamic block c"
}
nested {
foo = "static block 2"
}
}
```
Since HCL block syntax is not normally exposed to the possibility of unknown
values, this extension must make some compromises when asked to iterate over
an unknown collection. If the length of the collection cannot be statically
recognized (because it is an unknown value of list, map, or set type) then
the `dynamic` construct will generate a _single_ dynamic block whose iterator
key and value are both unknown values of the dynamic pseudo-type, thus causing
any attribute values derived from iteration to appear as unknown values. There
is no explicit representation of the fact that the length of the collection may
eventually be different than one.
## Usage
Pass a body to function `Expand` to obtain a new body that will, on access
to its content, evaluate and expand any nested `dynamic` blocks.
Dynamic block processing is also automatically propagated into any nested
blocks that are returned, allowing users to nest dynamic blocks inside
one another and to nest dynamic blocks inside other static blocks.
HCL structural decoding does not normally have access to an `EvalContext`, so
any variables and functions that should be available to the `for_each`
and `labels` expressions must be passed in when calling `Expand`. Expressions
within the `content` block are evaluated separately and so can be passed a
separate `EvalContext` if desired, during normal attribute expression
evaluation.
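A minimal sketch of that workflow (assuming the application already has `body` and `schema` values, and that its `for_each` expressions need only a hypothetical `regions` variable):

```go
expandCtx := &hcl.EvalContext{
	Variables: map[string]cty.Value{
		"regions": cty.ListVal([]cty.Value{
			cty.StringVal("us-east-1"),
			cty.StringVal("eu-west-1"),
		}),
	},
}

expanded := dynblock.Expand(body, expandCtx)

// No blocks are actually expanded until content is requested.
content, diags := expanded.Content(schema)
```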
## Detecting Variables
Some applications dynamically generate an `EvalContext` by analyzing which
variables are referenced by an expression before evaluating it.
This unfortunately requires some extra effort when this analysis is required
for the context passed to `Expand`: the HCL API requires a schema to be
provided in order to do any analysis of the blocks in a body, but the low-level
schema model provides a description of only one level of nested blocks at
a time, and thus a new schema must be provided for each additional level of
nesting.
To make this arduous process as convenient as possible, this package provides
a helper function `WalkForEachVariables`, which returns a `WalkVariablesNode`
instance that can be used to find variables directly in a given body and also
determine which nested blocks require recursive calls. Using this mechanism
requires that the caller be able to look up a schema given a nested block type.
For _simple_ formats where a specific block type name always has the same schema
regardless of context, a walk can be implemented as follows:
```go
func walkVariables(node dynblock.WalkVariablesNode, schema *hcl.BodySchema) []hcl.Traversal {
vars, children := node.Visit(schema)
for _, child := range children {
var childSchema *hcl.BodySchema
switch child.BlockTypeName {
case "a":
childSchema = &hcl.BodySchema{
Blocks: []hcl.BlockHeaderSchema{
{
Type: "b",
LabelNames: []string{"key"},
},
},
}
case "b":
childSchema = &hcl.BodySchema{
Attributes: []hcl.AttributeSchema{
{
Name: "val",
Required: true,
},
},
}
default:
// Should never happen, because the above cases should be exhaustive
// for the application's configuration format.
panic(fmt.Errorf("can't find schema for unknown block type %q", child.BlockTypeName))
}
vars = append(vars, walkVariables(child.Node, childSchema)...)
}
return vars
}
```
### Detecting Variables with `hcldec` Specifications
For applications that use the higher-level `hcldec` package to decode nested
configuration structures into `cty` values, the same specification can be used
to automatically drive the recursive variable-detection walk described above.
The helper function `ForEachVariablesHCLDec` allows an entire recursive
configuration structure to be analyzed in a single call given a `hcldec.Spec`
that describes the nested block structure. This means a `hcldec`-based
application can support dynamic blocks with only a little additional effort:
```go
func decodeBody(body hcl.Body, spec hcldec.Spec) (cty.Value, hcl.Diagnostics) {
// Determine which variables are needed to expand dynamic blocks
neededForDynamic := dynblock.ForEachVariablesHCLDec(body, spec)
// Build a suitable EvalContext and expand dynamic blocks
dynCtx := buildEvalContext(neededForDynamic)
dynBody := dynblock.Expand(body, dynCtx)
// Determine which variables are needed to fully decode the expanded body
// This will analyze expressions that came both from static blocks in the
// original body and from blocks that were dynamically added by Expand.
neededForDecode := hcldec.Variables(dynBody, spec)
// Build a suitable EvalContext and then fully decode the body as per the
// hcldec specification.
decCtx := buildEvalContext(neededForDecode)
return hcldec.Decode(dynBody, spec, decCtx)
}
func buildEvalContext(needed []hcl.Traversal) *hcl.EvalContext {
// (to be implemented by your application)
}
```
# Performance
This extension is going quite harshly against the grain of the HCL API, and
so it uses lots of wrapping objects and temporary data structures to get its
work done. HCL in general is not suitable for use in high-performance situations
or situations sensitive to memory pressure, but that is _especially_ true for
this extension.

View File

@ -1,262 +0,0 @@
package dynblock
import (
"fmt"
"github.com/hashicorp/hcl/v2"
"github.com/zclconf/go-cty/cty"
)
// expandBody wraps another hcl.Body and expands any "dynamic" blocks found
// inside whenever Content or PartialContent is called.
type expandBody struct {
original hcl.Body
forEachCtx *hcl.EvalContext
iteration *iteration // non-nil if we're nested inside another "dynamic" block
// These are used with PartialContent to produce a "remaining items"
// body to return. They are nil on all bodies fresh out of the transformer.
//
// Note that this is re-implemented here rather than delegating to the
// existing support required by the underlying body because we need to
// retain access to the entire original body on subsequent decode operations
// so we can retain any "dynamic" blocks for types we didn't take consume
// on the first pass.
hiddenAttrs map[string]struct{}
hiddenBlocks map[string]hcl.BlockHeaderSchema
}
func (b *expandBody) Content(schema *hcl.BodySchema) (*hcl.BodyContent, hcl.Diagnostics) {
extSchema := b.extendSchema(schema)
rawContent, diags := b.original.Content(extSchema)
blocks, blockDiags := b.expandBlocks(schema, rawContent.Blocks, false)
diags = append(diags, blockDiags...)
attrs := b.prepareAttributes(rawContent.Attributes)
content := &hcl.BodyContent{
Attributes: attrs,
Blocks: blocks,
MissingItemRange: b.original.MissingItemRange(),
}
return content, diags
}
func (b *expandBody) PartialContent(schema *hcl.BodySchema) (*hcl.BodyContent, hcl.Body, hcl.Diagnostics) {
extSchema := b.extendSchema(schema)
rawContent, _, diags := b.original.PartialContent(extSchema)
// We discard the "remain" argument above because we're going to construct
// our own remain that also takes into account remaining "dynamic" blocks.
blocks, blockDiags := b.expandBlocks(schema, rawContent.Blocks, true)
diags = append(diags, blockDiags...)
attrs := b.prepareAttributes(rawContent.Attributes)
content := &hcl.BodyContent{
Attributes: attrs,
Blocks: blocks,
MissingItemRange: b.original.MissingItemRange(),
}
remain := &expandBody{
original: b.original,
forEachCtx: b.forEachCtx,
iteration: b.iteration,
hiddenAttrs: make(map[string]struct{}),
hiddenBlocks: make(map[string]hcl.BlockHeaderSchema),
}
for name := range b.hiddenAttrs {
remain.hiddenAttrs[name] = struct{}{}
}
for typeName, blockS := range b.hiddenBlocks {
remain.hiddenBlocks[typeName] = blockS
}
for _, attrS := range schema.Attributes {
remain.hiddenAttrs[attrS.Name] = struct{}{}
}
for _, blockS := range schema.Blocks {
remain.hiddenBlocks[blockS.Type] = blockS
}
return content, remain, diags
}
func (b *expandBody) extendSchema(schema *hcl.BodySchema) *hcl.BodySchema {
// We augment the requested schema to also include our special "dynamic"
// block type, since then we'll get instances of it interleaved with
// all of the literal child blocks we must also include.
extSchema := &hcl.BodySchema{
Attributes: schema.Attributes,
Blocks: make([]hcl.BlockHeaderSchema, len(schema.Blocks), len(schema.Blocks)+len(b.hiddenBlocks)+1),
}
copy(extSchema.Blocks, schema.Blocks)
extSchema.Blocks = append(extSchema.Blocks, dynamicBlockHeaderSchema)
// If we have any hiddenBlocks then we also need to register those here
// so that a call to "Content" on the underlying body won't fail.
// (We'll filter these out again once we process the result of either
// Content or PartialContent.)
for _, blockS := range b.hiddenBlocks {
extSchema.Blocks = append(extSchema.Blocks, blockS)
}
// If we have any hiddenAttrs then we also need to register these, for
// the same reason as we deal with hiddenBlocks above.
if len(b.hiddenAttrs) != 0 {
newAttrs := make([]hcl.AttributeSchema, len(schema.Attributes), len(schema.Attributes)+len(b.hiddenAttrs))
copy(newAttrs, extSchema.Attributes)
for name := range b.hiddenAttrs {
newAttrs = append(newAttrs, hcl.AttributeSchema{
Name: name,
Required: false,
})
}
extSchema.Attributes = newAttrs
}
return extSchema
}
func (b *expandBody) prepareAttributes(rawAttrs hcl.Attributes) hcl.Attributes {
if len(b.hiddenAttrs) == 0 && b.iteration == nil {
// Easy path: just pass through the attrs from the original body verbatim
return rawAttrs
}
// Otherwise we have some work to do: we must filter out any attributes
// that are hidden (since a previous PartialContent call already saw these)
// and wrap the expressions of the inner attributes so that they will
// have access to our iteration variables.
attrs := make(hcl.Attributes, len(rawAttrs))
for name, rawAttr := range rawAttrs {
if _, hidden := b.hiddenAttrs[name]; hidden {
continue
}
if b.iteration != nil {
attr := *rawAttr // shallow copy so we can mutate it
attr.Expr = exprWrap{
Expression: attr.Expr,
i: b.iteration,
}
attrs[name] = &attr
} else {
// If we have no active iteration then no wrapping is required.
attrs[name] = rawAttr
}
}
return attrs
}
func (b *expandBody) expandBlocks(schema *hcl.BodySchema, rawBlocks hcl.Blocks, partial bool) (hcl.Blocks, hcl.Diagnostics) {
var blocks hcl.Blocks
var diags hcl.Diagnostics
for _, rawBlock := range rawBlocks {
switch rawBlock.Type {
case "dynamic":
realBlockType := rawBlock.Labels[0]
if _, hidden := b.hiddenBlocks[realBlockType]; hidden {
continue
}
var blockS *hcl.BlockHeaderSchema
for _, candidate := range schema.Blocks {
if candidate.Type == realBlockType {
blockS = &candidate
break
}
}
if blockS == nil {
// Not a block type that the caller requested.
if !partial {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Unsupported block type",
Detail: fmt.Sprintf("Blocks of type %q are not expected here.", realBlockType),
Subject: &rawBlock.LabelRanges[0],
})
}
continue
}
spec, specDiags := b.decodeSpec(blockS, rawBlock)
diags = append(diags, specDiags...)
if specDiags.HasErrors() {
continue
}
if spec.forEachVal.IsKnown() {
for it := spec.forEachVal.ElementIterator(); it.Next(); {
key, value := it.Element()
i := b.iteration.MakeChild(spec.iteratorName, key, value)
block, blockDiags := spec.newBlock(i, b.forEachCtx)
diags = append(diags, blockDiags...)
if block != nil {
// Attach our new iteration context so that attributes
// and other nested blocks can refer to our iterator.
block.Body = b.expandChild(block.Body, i)
blocks = append(blocks, block)
}
}
} else {
// If our top-level iteration value isn't known then we're forced
// to compromise since HCL doesn't have any concept of an
// "unknown block". In this case then, we'll produce a single
// dynamic block with the iterator values set to DynamicVal,
// which at least makes the potential for a block visible
// in our result, even though it's not represented in a fully-accurate
// way.
i := b.iteration.MakeChild(spec.iteratorName, cty.DynamicVal, cty.DynamicVal)
block, blockDiags := spec.newBlock(i, b.forEachCtx)
diags = append(diags, blockDiags...)
if block != nil {
block.Body = b.expandChild(block.Body, i)
// We additionally force all of the leaf attribute values
// in the result to be unknown so the calling application
// can, if necessary, use that as a heuristic to detect
// when a single nested block might be standing in for
// multiple blocks yet to be expanded. This retains the
// structure of the generated body but forces all of its
// leaf attribute values to be unknown.
block.Body = unknownBody{block.Body}
blocks = append(blocks, block)
}
}
default:
if _, hidden := b.hiddenBlocks[rawBlock.Type]; !hidden {
// A static block doesn't create a new iteration context, but
// it does need to inherit _our own_ iteration context in
// case it contains expressions that refer to our inherited
// iterators, or nested "dynamic" blocks.
expandedBlock := *rawBlock // shallow copy
expandedBlock.Body = b.expandChild(rawBlock.Body, b.iteration)
blocks = append(blocks, &expandedBlock)
}
}
}
return blocks, diags
}
func (b *expandBody) expandChild(child hcl.Body, i *iteration) hcl.Body {
chiCtx := i.EvalContext(b.forEachCtx)
ret := Expand(child, chiCtx)
ret.(*expandBody).iteration = i
return ret
}
func (b *expandBody) JustAttributes() (hcl.Attributes, hcl.Diagnostics) {
// blocks aren't allowed in JustAttributes mode and this body can
// only produce blocks, so we'll just pass straight through to our
// underlying body here.
return b.original.JustAttributes()
}
func (b *expandBody) MissingItemRange() hcl.Range {
return b.original.MissingItemRange()
}

View File

@ -1,215 +0,0 @@
package dynblock
import (
"fmt"
"github.com/hashicorp/hcl/v2"
"github.com/zclconf/go-cty/cty"
"github.com/zclconf/go-cty/cty/convert"
)
type expandSpec struct {
blockType string
blockTypeRange hcl.Range
defRange hcl.Range
forEachVal cty.Value
iteratorName string
labelExprs []hcl.Expression
contentBody hcl.Body
inherited map[string]*iteration
}
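// decodeSpec decodes and validates the body of a single "dynamic" block,
// evaluating its for_each expression and checking its iterator, labels, and
// content parts. It returns nil along with error diagnostics if the block
// is invalid.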
func (b *expandBody) decodeSpec(blockS *hcl.BlockHeaderSchema, rawSpec *hcl.Block) (*expandSpec, hcl.Diagnostics) {
var diags hcl.Diagnostics
var schema *hcl.BodySchema
if len(blockS.LabelNames) != 0 {
schema = dynamicBlockBodySchemaLabels
} else {
schema = dynamicBlockBodySchemaNoLabels
}
specContent, specDiags := rawSpec.Body.Content(schema)
diags = append(diags, specDiags...)
if specDiags.HasErrors() {
return nil, diags
}
//// for_each attribute
eachAttr := specContent.Attributes["for_each"]
eachVal, eachDiags := eachAttr.Expr.Value(b.forEachCtx)
diags = append(diags, eachDiags...)
if !eachVal.CanIterateElements() && eachVal.Type() != cty.DynamicPseudoType {
// We skip this error for DynamicPseudoType because that means we either
// have a null (which is checked immediately below) or an unknown
// (which is handled in the expandBody Content methods).
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid dynamic for_each value",
Detail: fmt.Sprintf("Cannot use a %s value in for_each. An iterable collection is required.", eachVal.Type().FriendlyName()),
Subject: eachAttr.Expr.Range().Ptr(),
Expression: eachAttr.Expr,
EvalContext: b.forEachCtx,
})
return nil, diags
}
if eachVal.IsNull() {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid dynamic for_each value",
Detail: "Cannot use a null value in for_each.",
Subject: eachAttr.Expr.Range().Ptr(),
Expression: eachAttr.Expr,
EvalContext: b.forEachCtx,
})
return nil, diags
}
//// iterator attribute
iteratorName := blockS.Type
if iteratorAttr := specContent.Attributes["iterator"]; iteratorAttr != nil {
itTraversal, itDiags := hcl.AbsTraversalForExpr(iteratorAttr.Expr)
diags = append(diags, itDiags...)
if itDiags.HasErrors() {
return nil, diags
}
if len(itTraversal) != 1 {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid dynamic iterator name",
Detail: "Dynamic iterator must be a single variable name.",
Subject: itTraversal.SourceRange().Ptr(),
})
return nil, diags
}
iteratorName = itTraversal.RootName()
}
var labelExprs []hcl.Expression
if labelsAttr := specContent.Attributes["labels"]; labelsAttr != nil {
var labelDiags hcl.Diagnostics
labelExprs, labelDiags = hcl.ExprList(labelsAttr.Expr)
diags = append(diags, labelDiags...)
if labelDiags.HasErrors() {
return nil, diags
}
if len(labelExprs) > len(blockS.LabelNames) {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Extraneous dynamic block label",
Detail: fmt.Sprintf("Blocks of type %q require %d label(s).", blockS.Type, len(blockS.LabelNames)),
Subject: labelExprs[len(blockS.LabelNames)].Range().Ptr(),
})
return nil, diags
} else if len(labelExprs) < len(blockS.LabelNames) {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Insufficient dynamic block labels",
Detail: fmt.Sprintf("Blocks of type %q require %d label(s).", blockS.Type, len(blockS.LabelNames)),
Subject: labelsAttr.Expr.Range().Ptr(),
})
return nil, diags
}
}
// Since our schema requests only blocks of type "content", we can assume
// that all entries in specContent.Blocks are content blocks.
if len(specContent.Blocks) == 0 {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Missing dynamic content block",
Detail: "A dynamic block must have a nested block of type \"content\" to describe the body of each generated block.",
Subject: &specContent.MissingItemRange,
})
return nil, diags
}
if len(specContent.Blocks) > 1 {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Extraneous dynamic content block",
Detail: "Only one nested content block is allowed for each dynamic block.",
Subject: &specContent.Blocks[1].DefRange,
})
return nil, diags
}
return &expandSpec{
blockType: blockS.Type,
blockTypeRange: rawSpec.LabelRanges[0],
defRange: rawSpec.DefRange,
forEachVal: eachVal,
iteratorName: iteratorName,
labelExprs: labelExprs,
contentBody: specContent.Blocks[0].Body,
}, diags
}
func (s *expandSpec) newBlock(i *iteration, ctx *hcl.EvalContext) (*hcl.Block, hcl.Diagnostics) {
var diags hcl.Diagnostics
var labels []string
var labelRanges []hcl.Range
lCtx := i.EvalContext(ctx)
for _, labelExpr := range s.labelExprs {
labelVal, labelDiags := labelExpr.Value(lCtx)
diags = append(diags, labelDiags...)
if labelDiags.HasErrors() {
return nil, diags
}
var convErr error
labelVal, convErr = convert.Convert(labelVal, cty.String)
if convErr != nil {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid dynamic block label",
Detail: fmt.Sprintf("Cannot use this value as a dynamic block label: %s.", convErr),
Subject: labelExpr.Range().Ptr(),
Expression: labelExpr,
EvalContext: lCtx,
})
return nil, diags
}
if labelVal.IsNull() {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid dynamic block label",
Detail: "Cannot use a null value as a dynamic block label.",
Subject: labelExpr.Range().Ptr(),
Expression: labelExpr,
EvalContext: lCtx,
})
return nil, diags
}
if !labelVal.IsKnown() {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid dynamic block label",
Detail: "This value is not yet known. Dynamic block labels must be immediately-known values.",
Subject: labelExpr.Range().Ptr(),
Expression: labelExpr,
EvalContext: lCtx,
})
return nil, diags
}
labels = append(labels, labelVal.AsString())
labelRanges = append(labelRanges, labelExpr.Range())
}
block := &hcl.Block{
Type: s.blockType,
TypeRange: s.blockTypeRange,
Labels: labels,
LabelRanges: labelRanges,
DefRange: s.defRange,
Body: s.contentBody,
}
return block, diags
}

View File

@ -1,42 +0,0 @@
package dynblock
import (
"github.com/hashicorp/hcl/v2"
"github.com/zclconf/go-cty/cty"
)
type exprWrap struct {
hcl.Expression
i *iteration
}
func (e exprWrap) Variables() []hcl.Traversal {
raw := e.Expression.Variables()
ret := make([]hcl.Traversal, 0, len(raw))
// Filter out traversals that refer to our iterator name or any
// iterator we've inherited; we're going to provide those in
// our Value wrapper, so the caller doesn't need to know about them.
for _, traversal := range raw {
rootName := traversal.RootName()
if rootName == e.i.IteratorName {
continue
}
if _, inherited := e.i.Inherited[rootName]; inherited {
continue
}
ret = append(ret, traversal)
}
return ret
}
func (e exprWrap) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) {
extCtx := e.i.EvalContext(ctx)
return e.Expression.Value(extCtx)
}
// UnwrapExpression returns the expression being wrapped by this instance.
// This allows the original expression to be recovered by hcl.UnwrapExpression.
func (e exprWrap) UnwrapExpression() hcl.Expression {
return e.Expression
}

View File

@ -1,66 +0,0 @@
package dynblock
import (
"github.com/hashicorp/hcl/v2"
"github.com/zclconf/go-cty/cty"
)
type iteration struct {
IteratorName string
Key cty.Value
Value cty.Value
Inherited map[string]*iteration
}
func (s *expandSpec) MakeIteration(key, value cty.Value) *iteration {
return &iteration{
IteratorName: s.iteratorName,
Key: key,
Value: value,
Inherited: s.inherited,
}
}
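// Object returns the iteration's key and value wrapped together as a single
// object value, which is how the iterator variable is exposed to expressions
// (e.g. nested.key and nested.value).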
func (i *iteration) Object() cty.Value {
return cty.ObjectVal(map[string]cty.Value{
"key": i.Key,
"value": i.Value,
})
}
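// EvalContext returns a child of the given base context that additionally
// defines this iteration's own iterator variable along with those of any
// inherited iterations.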
func (i *iteration) EvalContext(base *hcl.EvalContext) *hcl.EvalContext {
new := base.NewChild()
if i != nil {
new.Variables = map[string]cty.Value{}
for name, otherIt := range i.Inherited {
new.Variables[name] = otherIt.Object()
}
new.Variables[i.IteratorName] = i.Object()
}
return new
}
func (i *iteration) MakeChild(iteratorName string, key, value cty.Value) *iteration {
if i == nil {
// Create an entirely new root iteration.
return &iteration{
IteratorName: iteratorName,
Key: key,
Value: value,
}
}
inherited := map[string]*iteration{}
for name, otherIt := range i.Inherited {
inherited[name] = otherIt
}
inherited[i.IteratorName] = i
return &iteration{
IteratorName: iteratorName,
Key: key,
Value: value,
Inherited: inherited,
}
}

View File

@ -1,47 +0,0 @@
// Package dynblock provides an extension to HCL that allows dynamic
// declaration of nested blocks in certain contexts via a special block type
// named "dynamic".
package dynblock
import (
"github.com/hashicorp/hcl/v2"
)
// Expand "dynamic" blocks in the given body, returning a new body that
// has those blocks expanded.
//
// The given EvalContext is used when evaluating "for_each" and "labels"
// attributes within dynamic blocks, allowing those expressions access to
// variables and functions beyond the iterator variable created by the
// iteration.
//
// Expand returns no diagnostics because no blocks are actually expanded
// until a call to Content or PartialContent on the returned body, which
// will then expand only the blocks selected by the schema.
//
// "dynamic" blocks are also expanded automatically within nested blocks
// in the given body, including within other dynamic blocks, thus allowing
// multi-dimensional iteration. However, it is not possible to
// dynamically generate the "dynamic" blocks themselves except through nesting.
//
// parent {
// dynamic "child" {
// for_each = child_objs
// content {
// dynamic "grandchild" {
// for_each = child.value.children
// labels = [grandchild.key]
// content {
// parent_key = child.key
// value = grandchild.value
// }
// }
// }
// }
// }
func Expand(body hcl.Body, ctx *hcl.EvalContext) hcl.Body {
return &expandBody{
original: body,
forEachCtx: ctx,
}
}

View File

@ -1,50 +0,0 @@
package dynblock
import "github.com/hashicorp/hcl/v2"
var dynamicBlockHeaderSchema = hcl.BlockHeaderSchema{
Type: "dynamic",
LabelNames: []string{"type"},
}
var dynamicBlockBodySchemaLabels = &hcl.BodySchema{
Attributes: []hcl.AttributeSchema{
{
Name: "for_each",
Required: true,
},
{
Name: "iterator",
Required: false,
},
{
Name: "labels",
Required: true,
},
},
Blocks: []hcl.BlockHeaderSchema{
{
Type: "content",
LabelNames: nil,
},
},
}
var dynamicBlockBodySchemaNoLabels = &hcl.BodySchema{
Attributes: []hcl.AttributeSchema{
{
Name: "for_each",
Required: true,
},
{
Name: "iterator",
Required: false,
},
},
Blocks: []hcl.BlockHeaderSchema{
{
Type: "content",
LabelNames: nil,
},
},
}

View File

@ -1,84 +0,0 @@
package dynblock
import (
"github.com/hashicorp/hcl/v2"
"github.com/zclconf/go-cty/cty"
)
// unknownBody is a funny body that just reports everything inside it as
// unknown. It uses a given other body as a sort of template for what attributes
// and blocks are inside -- including source location information -- but
// substitutes unknown values of unknown type for all attributes.
//
// This rather odd process is used to handle expansion of dynamic blocks whose
// for_each expression is unknown. Since a block cannot itself be unknown,
// we instead arrange for everything _inside_ the block to be unknown instead,
// to give the best possible approximation.
type unknownBody struct {
template hcl.Body
}
var _ hcl.Body = unknownBody{}
func (b unknownBody) Content(schema *hcl.BodySchema) (*hcl.BodyContent, hcl.Diagnostics) {
content, diags := b.template.Content(schema)
content = b.fixupContent(content)
// We're intentionally preserving the diagnostics reported from the
// inner body so that we can still report where the template body doesn't
// match the requested schema.
return content, diags
}
func (b unknownBody) PartialContent(schema *hcl.BodySchema) (*hcl.BodyContent, hcl.Body, hcl.Diagnostics) {
content, remain, diags := b.template.PartialContent(schema)
content = b.fixupContent(content)
remain = unknownBody{remain} // remaining content must also be wrapped
// We're intentionally preserving the diagnostics reported from the
// inner body so that we can still report where the template body doesn't
// match the requested schema.
return content, remain, diags
}
func (b unknownBody) JustAttributes() (hcl.Attributes, hcl.Diagnostics) {
attrs, diags := b.template.JustAttributes()
attrs = b.fixupAttrs(attrs)
// We're intentionally preserving the diagnostics reported from the
// inner body so that we can still report where the template body doesn't
// match the requested schema.
return attrs, diags
}
func (b unknownBody) MissingItemRange() hcl.Range {
return b.template.MissingItemRange()
}
func (b unknownBody) fixupContent(got *hcl.BodyContent) *hcl.BodyContent {
ret := &hcl.BodyContent{}
ret.Attributes = b.fixupAttrs(got.Attributes)
if len(got.Blocks) > 0 {
ret.Blocks = make(hcl.Blocks, 0, len(got.Blocks))
for _, gotBlock := range got.Blocks {
new := *gotBlock // shallow copy
new.Body = unknownBody{gotBlock.Body} // nested content must also be marked unknown
ret.Blocks = append(ret.Blocks, &new)
}
}
return ret
}
func (b unknownBody) fixupAttrs(got hcl.Attributes) hcl.Attributes {
if len(got) == 0 {
return nil
}
ret := make(hcl.Attributes, len(got))
for name, gotAttr := range got {
new := *gotAttr // shallow copy
new.Expr = hcl.StaticExpr(cty.DynamicVal, gotAttr.Expr.Range())
ret[name] = &new
}
return ret
}

View File

@ -1,209 +0,0 @@
package dynblock
import (
"github.com/hashicorp/hcl/v2"
"github.com/zclconf/go-cty/cty"
)
// WalkVariables begins the recursive process of walking all expressions and
// nested blocks in the given body and its child bodies while taking into
// account any "dynamic" blocks.
//
// This function requires that the caller walk through the nested block
// structure in the given body level-by-level so that an appropriate schema
// can be provided at each level to inform further processing. This workflow
// is thus easiest to use for calling applications that have some higher-level
// schema representation available with which to drive this multi-step
// process. If your application uses the hcldec package, you may be able to
// use VariablesHCLDec instead for a more automatic approach.
func WalkVariables(body hcl.Body) WalkVariablesNode {
return WalkVariablesNode{
body: body,
includeContent: true,
}
}
// WalkExpandVariables is like WalkVariables but it includes only the variables
// required for successful block expansion, ignoring any variables referenced
// inside block contents. The result is the minimal set of all variables
// required for a call to Expand, excluding variables that would only be
// needed to subsequently call Content or PartialContent on the expanded
// body.
func WalkExpandVariables(body hcl.Body) WalkVariablesNode {
return WalkVariablesNode{
body: body,
}
}
type WalkVariablesNode struct {
body hcl.Body
it *iteration
includeContent bool
}
type WalkVariablesChild struct {
BlockTypeName string
Node WalkVariablesNode
}
// Body returns the HCL Body associated with the child node, in case the caller
// wants to do some sort of inspection of it in order to decide what schema
// to pass to Visit.
//
// Most implementations should just fetch a fixed schema based on the
// BlockTypeName field and not access this. Deciding on a schema dynamically
// based on the body is a strange thing to do and generally necessary only if
// your caller is already doing other bizarre things with HCL bodies.
func (c WalkVariablesChild) Body() hcl.Body {
return c.Node.body
}
// Visit returns the variable traversals required for any "dynamic" blocks
// directly in the body associated with this node, and also returns any child
// nodes that must be visited in order to continue the walk.
//
// Each child node has its associated block type name given in its BlockTypeName
// field, which the calling application should use to determine the appropriate
// schema for the content of each child node and pass it to the child node's
// own Visit method to continue the walk recursively.
func (n WalkVariablesNode) Visit(schema *hcl.BodySchema) (vars []hcl.Traversal, children []WalkVariablesChild) {
extSchema := n.extendSchema(schema)
container, _, _ := n.body.PartialContent(extSchema)
if container == nil {
return vars, children
}
children = make([]WalkVariablesChild, 0, len(container.Blocks))
if n.includeContent {
for _, attr := range container.Attributes {
for _, traversal := range attr.Expr.Variables() {
var ours, inherited bool
if n.it != nil {
ours = traversal.RootName() == n.it.IteratorName
_, inherited = n.it.Inherited[traversal.RootName()]
}
if !(ours || inherited) {
vars = append(vars, traversal)
}
}
}
}
for _, block := range container.Blocks {
switch block.Type {
case "dynamic":
blockTypeName := block.Labels[0]
inner, _, _ := block.Body.PartialContent(variableDetectionInnerSchema)
if inner == nil {
continue
}
iteratorName := blockTypeName
if attr, exists := inner.Attributes["iterator"]; exists {
iterTraversal, _ := hcl.AbsTraversalForExpr(attr.Expr)
if len(iterTraversal) == 0 {
// Ignore this invalid dynamic block, since it'll produce
// an error if someone tries to extract content from it
// later anyway.
continue
}
iteratorName = iterTraversal.RootName()
}
blockIt := n.it.MakeChild(iteratorName, cty.DynamicVal, cty.DynamicVal)
if attr, exists := inner.Attributes["for_each"]; exists {
// Filter out iterator names inherited from parent blocks
for _, traversal := range attr.Expr.Variables() {
if _, inherited := blockIt.Inherited[traversal.RootName()]; !inherited {
vars = append(vars, traversal)
}
}
}
if attr, exists := inner.Attributes["labels"]; exists {
// Filter out both our own iterator name _and_ those inherited
// from parent blocks, since we provide _both_ of these to the
// label expressions.
for _, traversal := range attr.Expr.Variables() {
ours := traversal.RootName() == iteratorName
_, inherited := blockIt.Inherited[traversal.RootName()]
if !(ours || inherited) {
vars = append(vars, traversal)
}
}
}
for _, contentBlock := range inner.Blocks {
// We only request "content" blocks in our schema, so we know
// any blocks we find here will be content blocks. We require
// exactly one content block for actual expansion, but we'll
// be more liberal here so that callers can still collect
// variables from erroneous "dynamic" blocks.
children = append(children, WalkVariablesChild{
BlockTypeName: blockTypeName,
Node: WalkVariablesNode{
body: contentBlock.Body,
it: blockIt,
includeContent: n.includeContent,
},
})
}
default:
children = append(children, WalkVariablesChild{
BlockTypeName: block.Type,
Node: WalkVariablesNode{
body: block.Body,
it: n.it,
includeContent: n.includeContent,
},
})
}
}
return vars, children
}
func (n WalkVariablesNode) extendSchema(schema *hcl.BodySchema) *hcl.BodySchema {
// We augment the requested schema to also include our special "dynamic"
// block type, since then we'll get instances of it interleaved with
// all of the literal child blocks we must also include.
extSchema := &hcl.BodySchema{
Attributes: schema.Attributes,
Blocks: make([]hcl.BlockHeaderSchema, len(schema.Blocks), len(schema.Blocks)+1),
}
copy(extSchema.Blocks, schema.Blocks)
extSchema.Blocks = append(extSchema.Blocks, dynamicBlockHeaderSchema)
return extSchema
}
// This is a more relaxed schema than what's in schema.go, since we
// want to maximize the amount of variables we can find even if there
// are erroneous blocks.
var variableDetectionInnerSchema = &hcl.BodySchema{
Attributes: []hcl.AttributeSchema{
{
Name: "for_each",
Required: false,
},
{
Name: "labels",
Required: false,
},
{
Name: "iterator",
Required: false,
},
},
Blocks: []hcl.BlockHeaderSchema{
{
Type: "content",
},
},
}
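
As a sketch of the level-by-level walk described above (the configuration and schemas are invented for the example), the caller supplies a schema for each nesting level and recurses into the returned children:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/ext/dynblock"
	"github.com/hashicorp/hcl/v2/hclsyntax"
)

const src = `
dynamic "setting" {
  for_each = var.settings
  content {
    value = "${var.prefix}-${setting.value}"
  }
}
`

func main() {
	f, diags := hclsyntax.ParseConfig([]byte(src), "example.hcl", hcl.InitialPos)
	if diags.HasErrors() {
		panic(diags.Error())
	}

	// One schema per nesting level; a real application would derive these
	// from its own higher-level schema representation.
	rootSchema := &hcl.BodySchema{
		Blocks: []hcl.BlockHeaderSchema{{Type: "setting"}},
	}
	settingSchema := &hcl.BodySchema{
		Attributes: []hcl.AttributeSchema{{Name: "value"}},
	}

	var walk func(node dynblock.WalkVariablesNode, schema *hcl.BodySchema)
	walk = func(node dynblock.WalkVariablesNode, schema *hcl.BodySchema) {
		vars, children := node.Visit(schema)
		for _, v := range vars {
			// Reports var.settings (from for_each) and var.prefix (from
			// the content); setting.value is filtered out as the iterator.
			fmt.Println(v.RootName())
		}
		for _, child := range children {
			if child.BlockTypeName == "setting" {
				walk(child.Node, settingSchema)
			}
		}
	}
	walk(dynblock.WalkVariables(f.Body), rootSchema)
}
```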

View File

@ -1,43 +0,0 @@
package dynblock
import (
"github.com/hashicorp/hcl/v2"
"github.com/hashicorp/hcl/v2/hcldec"
)
// VariablesHCLDec is a wrapper around WalkVariables that uses the given hcldec
// specification to automatically drive the recursive walk through nested
// blocks in the given body.
//
// This is a drop-in replacement for hcldec.Variables which is able to treat
// blocks of type "dynamic" in the same special way that dynblock.Expand would,
// exposing both the variables referenced in the "for_each" and "labels"
// arguments and variables used in the nested "content" block.
func VariablesHCLDec(body hcl.Body, spec hcldec.Spec) []hcl.Traversal {
rootNode := WalkVariables(body)
return walkVariablesWithHCLDec(rootNode, spec)
}
// ExpandVariablesHCLDec is like VariablesHCLDec but it includes only the
// minimal set of variables required to call Expand, ignoring variables that
// are referenced only inside normal block contents. See WalkExpandVariables
// for more information.
func ExpandVariablesHCLDec(body hcl.Body, spec hcldec.Spec) []hcl.Traversal {
rootNode := WalkExpandVariables(body)
return walkVariablesWithHCLDec(rootNode, spec)
}
func walkVariablesWithHCLDec(node WalkVariablesNode, spec hcldec.Spec) []hcl.Traversal {
vars, children := node.Visit(hcldec.ImpliedSchema(spec))
if len(children) > 0 {
childSpecs := hcldec.ChildBlockTypes(spec)
for _, child := range children {
if childSpec, exists := childSpecs[child.BlockTypeName]; exists {
vars = append(vars, walkVariablesWithHCLDec(child.Node, childSpec)...)
}
}
}
return vars
}
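
A minimal sketch of the hcldec-driven variant (spec and configuration invented for the example); the spec supplies the per-level schemas that the manual walk above required:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/ext/dynblock"
	"github.com/hashicorp/hcl/v2/hcldec"
	"github.com/hashicorp/hcl/v2/hclsyntax"
	"github.com/zclconf/go-cty/cty"
)

const src = `
dynamic "setting" {
  for_each = var.settings
  content {
    value = var.prefix
  }
}
`

func main() {
	f, diags := hclsyntax.ParseConfig([]byte(src), "example.hcl", hcl.InitialPos)
	if diags.HasErrors() {
		panic(diags.Error())
	}
	spec := &hcldec.BlockListSpec{
		TypeName: "setting",
		Nested: &hcldec.AttrSpec{
			Name: "value",
			Type: cty.String,
		},
	}
	// The spec drives the recursive walk, so no manual schema plumbing
	// is needed at each nesting level.
	for _, traversal := range dynblock.VariablesHCLDec(f.Body, spec) {
		fmt.Println(traversal.RootName()) // "var", for var.settings and var.prefix
	}
}
```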

View File

@ -1,67 +0,0 @@
# HCL Type Expressions Extension
This HCL extension defines a convention for describing HCL types using function
call and variable reference syntax, allowing configuration formats to include
type information provided by users.
The type syntax is processed statically from an hcl.Expression, so it cannot
use any of the usual language operators. This is similar to type expressions
in statically-typed programming languages.
```hcl
variable "example" {
type = list(string)
}
```
The extension is built using the `hcl.ExprAsKeyword` and `hcl.ExprCall`
functions, and so it relies on the underlying syntax to define how "keyword"
and "call" are interpreted. The above shows how they are interpreted in
the HCL native syntax, while the following shows the same information
expressed in JSON:
```json
{
"variable": {
"example": {
"type": "list(string)"
}
}
}
```
Notice that because we have the additional contextual information that only
calls and keywords are allowed here, the JSON syntax is able to parse the given
string directly as an expression, rather than as a template as would be the
case for normal expression evaluation.
For more information, see [the godoc reference](http://godoc.org/github.com/hashicorp/hcl/v2/ext/typeexpr).
## Type Expression Syntax
When expressed in the native syntax, the following expressions are permitted
in a type expression:
* `string` - string
* `bool` - boolean
* `number` - number
* `any` - `cty.DynamicPseudoType` (in function `TypeConstraint` only)
* `list(<type_expr>)` - list of the type given as an argument
* `set(<type_expr>)` - set of the type given as an argument
* `map(<type_expr>)` - map of the type given as an argument
* `tuple([<type_exprs...>])` - tuple with the element types given in the single list argument
* `object({<attr_name>=<type_expr>, ...})` - object with the attributes and corresponding types given in the single map argument
For example:
* `list(string)`
* `object({name=string,age=number})`
* `map(object({name=string,age=number}))`
Note that the object constructor syntax is not fully-general for all possible
object types because it requires the attribute names to be valid identifiers.
In practice it is expected that any time an object type is being fixed for
type checking it will be one that has identifiers as its attributes; object
types with weird attributes generally show up only from arbitrary object
constructors in configuration files, which are usually treated either as maps
or as the dynamic pseudo-type.
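
As a quick illustration of consuming this extension from Go (the file name and expression are invented for the example):

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/ext/typeexpr"
	"github.com/hashicorp/hcl/v2/hclsyntax"
)

func main() {
	// Parse the type expression statically, without evaluating it.
	expr, diags := hclsyntax.ParseExpression([]byte(`list(string)`), "type.hcl", hcl.InitialPos)
	if diags.HasErrors() {
		panic(diags.Error())
	}
	ty, diags := typeexpr.TypeConstraint(expr)
	if diags.HasErrors() {
		panic(diags.Error())
	}
	fmt.Println(typeexpr.TypeString(ty)) // list(string)
}
```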

View File

@ -1,11 +0,0 @@
// Package typeexpr extends HCL with a convention for describing HCL types
// within configuration files.
//
// The type syntax is processed statically from an hcl.Expression, so it cannot
// use any of the usual language operators. This is similar to type expressions
// in statically-typed programming languages.
//
// variable "example" {
// type = list(string)
// }
package typeexpr

View File

@ -1,196 +0,0 @@
package typeexpr
import (
"fmt"
"github.com/hashicorp/hcl/v2"
"github.com/zclconf/go-cty/cty"
)
const invalidTypeSummary = "Invalid type specification"
// getType is the internal implementation of both Type and TypeConstraint,
// using the passed flag to distinguish. When constraint is false, the "any"
// keyword will produce an error.
func getType(expr hcl.Expression, constraint bool) (cty.Type, hcl.Diagnostics) {
// First we'll try for one of our keywords
kw := hcl.ExprAsKeyword(expr)
switch kw {
case "bool":
return cty.Bool, nil
case "string":
return cty.String, nil
case "number":
return cty.Number, nil
case "any":
if constraint {
return cty.DynamicPseudoType, nil
}
return cty.DynamicPseudoType, hcl.Diagnostics{{
Severity: hcl.DiagError,
Summary: invalidTypeSummary,
Detail: fmt.Sprintf("The keyword %q cannot be used in this type specification: an exact type is required.", kw),
Subject: expr.Range().Ptr(),
}}
case "list", "map", "set":
return cty.DynamicPseudoType, hcl.Diagnostics{{
Severity: hcl.DiagError,
Summary: invalidTypeSummary,
Detail: fmt.Sprintf("The %s type constructor requires one argument specifying the element type.", kw),
Subject: expr.Range().Ptr(),
}}
case "object":
return cty.DynamicPseudoType, hcl.Diagnostics{{
Severity: hcl.DiagError,
Summary: invalidTypeSummary,
Detail: "The object type constructor requires one argument specifying the attribute types and values as a map.",
Subject: expr.Range().Ptr(),
}}
case "tuple":
return cty.DynamicPseudoType, hcl.Diagnostics{{
Severity: hcl.DiagError,
Summary: invalidTypeSummary,
Detail: "The tuple type constructor requires one argument specifying the element types as a list.",
Subject: expr.Range().Ptr(),
}}
case "":
// okay! we'll fall through and try processing as a call, then.
default:
return cty.DynamicPseudoType, hcl.Diagnostics{{
Severity: hcl.DiagError,
Summary: invalidTypeSummary,
Detail: fmt.Sprintf("The keyword %q is not a valid type specification.", kw),
Subject: expr.Range().Ptr(),
}}
}
// If we get down here then our expression isn't just a keyword, so we'll
// try to process it as a call instead.
call, diags := hcl.ExprCall(expr)
if diags.HasErrors() {
return cty.DynamicPseudoType, hcl.Diagnostics{{
Severity: hcl.DiagError,
Summary: invalidTypeSummary,
Detail: "A type specification is either a primitive type keyword (bool, number, string) or a complex type constructor call, like list(string).",
Subject: expr.Range().Ptr(),
}}
}
switch call.Name {
case "bool", "string", "number", "any":
return cty.DynamicPseudoType, hcl.Diagnostics{{
Severity: hcl.DiagError,
Summary: invalidTypeSummary,
Detail: fmt.Sprintf("Primitive type keyword %q does not expect arguments.", call.Name),
Subject: &call.ArgsRange,
}}
}
if len(call.Arguments) != 1 {
contextRange := call.ArgsRange
subjectRange := call.ArgsRange
if len(call.Arguments) > 1 {
// If we have too many arguments (as opposed to too _few_) then
// we'll highlight the extraneous arguments as the diagnostic
// subject.
subjectRange = hcl.RangeBetween(call.Arguments[1].Range(), call.Arguments[len(call.Arguments)-1].Range())
}
switch call.Name {
case "list", "set", "map":
return cty.DynamicPseudoType, hcl.Diagnostics{{
Severity: hcl.DiagError,
Summary: invalidTypeSummary,
Detail: fmt.Sprintf("The %s type constructor requires one argument specifying the element type.", call.Name),
Subject: &subjectRange,
Context: &contextRange,
}}
case "object":
return cty.DynamicPseudoType, hcl.Diagnostics{{
Severity: hcl.DiagError,
Summary: invalidTypeSummary,
Detail: "The object type constructor requires one argument specifying the attribute types and values as a map.",
Subject: &subjectRange,
Context: &contextRange,
}}
case "tuple":
return cty.DynamicPseudoType, hcl.Diagnostics{{
Severity: hcl.DiagError,
Summary: invalidTypeSummary,
Detail: "The tuple type constructor requires one argument specifying the element types as a list.",
Subject: &subjectRange,
Context: &contextRange,
}}
}
}
switch call.Name {
case "list":
ety, diags := getType(call.Arguments[0], constraint)
return cty.List(ety), diags
case "set":
ety, diags := getType(call.Arguments[0], constraint)
return cty.Set(ety), diags
case "map":
ety, diags := getType(call.Arguments[0], constraint)
return cty.Map(ety), diags
case "object":
attrDefs, diags := hcl.ExprMap(call.Arguments[0])
if diags.HasErrors() {
return cty.DynamicPseudoType, hcl.Diagnostics{{
Severity: hcl.DiagError,
Summary: invalidTypeSummary,
Detail: "Object type constructor requires a map whose keys are attribute names and whose values are the corresponding attribute types.",
Subject: call.Arguments[0].Range().Ptr(),
Context: expr.Range().Ptr(),
}}
}
atys := make(map[string]cty.Type)
for _, attrDef := range attrDefs {
attrName := hcl.ExprAsKeyword(attrDef.Key)
if attrName == "" {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: invalidTypeSummary,
Detail: "Object constructor map keys must be attribute names.",
Subject: attrDef.Key.Range().Ptr(),
Context: expr.Range().Ptr(),
})
continue
}
aty, attrDiags := getType(attrDef.Value, constraint)
diags = append(diags, attrDiags...)
atys[attrName] = aty
}
return cty.Object(atys), diags
case "tuple":
elemDefs, diags := hcl.ExprList(call.Arguments[0])
if diags.HasErrors() {
return cty.DynamicPseudoType, hcl.Diagnostics{{
Severity: hcl.DiagError,
Summary: invalidTypeSummary,
Detail: "Tuple type constructor requires a list of element types.",
Subject: call.Arguments[0].Range().Ptr(),
Context: expr.Range().Ptr(),
}}
}
etys := make([]cty.Type, len(elemDefs))
for i, defExpr := range elemDefs {
ety, elemDiags := getType(defExpr, constraint)
diags = append(diags, elemDiags...)
etys[i] = ety
}
return cty.Tuple(etys), diags
default:
// Can't access call.Arguments in this path because we've not validated
// that it contains exactly one expression here.
return cty.DynamicPseudoType, hcl.Diagnostics{{
Severity: hcl.DiagError,
Summary: invalidTypeSummary,
Detail: fmt.Sprintf("Keyword %q is not a valid type constructor.", call.Name),
Subject: expr.Range().Ptr(),
}}
}
}

View File

@ -1,129 +0,0 @@
package typeexpr
import (
"bytes"
"fmt"
"sort"
"github.com/hashicorp/hcl/v2/hclsyntax"
"github.com/hashicorp/hcl/v2"
"github.com/zclconf/go-cty/cty"
)
// Type attempts to process the given expression as a type expression and, if
// successful, returns the resulting type. If unsuccessful, error diagnostics
// are returned.
func Type(expr hcl.Expression) (cty.Type, hcl.Diagnostics) {
return getType(expr, false)
}
// TypeConstraint attempts to parse the given expression as a type constraint
// and, if successful, returns the resulting type. If unsuccessful, error
// diagnostics are returned.
//
// A type constraint has the same structure as a type, but it additionally
// allows the keyword "any" to represent cty.DynamicPseudoType, which is often
// used as a wildcard in type checking and type conversion operations.
func TypeConstraint(expr hcl.Expression) (cty.Type, hcl.Diagnostics) {
return getType(expr, true)
}
// TypeString returns a string rendering of the given type as it would be
// expected to appear in the HCL native syntax.
//
// This is primarily intended for showing types to the user in an application
// that uses typeexpr, where the user can be assumed to be familiar with the
// type expression syntax. In applications that do not use typeexpr these
// results may be confusing to the user, and so cty's Type.FriendlyName method
// may be preferable, even though it's less precise.
//
// TypeString produces reasonable results only for types like what would be
// produced by the Type and TypeConstraint functions. In particular, it cannot
// support capsule types.
func TypeString(ty cty.Type) string {
// Easy cases first
switch ty {
case cty.String:
return "string"
case cty.Bool:
return "bool"
case cty.Number:
return "number"
case cty.DynamicPseudoType:
return "any"
}
if ty.IsCapsuleType() {
panic("TypeString does not support capsule types")
}
if ty.IsCollectionType() {
ety := ty.ElementType()
etyString := TypeString(ety)
switch {
case ty.IsListType():
return fmt.Sprintf("list(%s)", etyString)
case ty.IsSetType():
return fmt.Sprintf("set(%s)", etyString)
case ty.IsMapType():
return fmt.Sprintf("map(%s)", etyString)
default:
// Should never happen because the above is exhaustive
panic("unsupported collection type")
}
}
if ty.IsObjectType() {
var buf bytes.Buffer
buf.WriteString("object({")
atys := ty.AttributeTypes()
names := make([]string, 0, len(atys))
for name := range atys {
names = append(names, name)
}
sort.Strings(names)
first := true
for _, name := range names {
aty := atys[name]
if !first {
buf.WriteByte(',')
}
if !hclsyntax.ValidIdentifier(name) {
// Should never happen for any type produced by this package,
// but we'll do something reasonable here just so we don't
// produce garbage if someone gives us a hand-assembled object
// type that has weird attribute names.
// Using Go-style quoting here isn't perfect, since it doesn't
// exactly match HCL syntax, but it's fine for an edge-case.
buf.WriteString(fmt.Sprintf("%q", name))
} else {
buf.WriteString(name)
}
buf.WriteByte('=')
buf.WriteString(TypeString(aty))
first = false
}
buf.WriteString("})")
return buf.String()
}
if ty.IsTupleType() {
var buf bytes.Buffer
buf.WriteString("tuple([")
etys := ty.TupleElementTypes()
first := true
for _, ety := range etys {
if !first {
buf.WriteByte(',')
}
buf.WriteString(TypeString(ety))
first = false
}
buf.WriteString("])")
return buf.String()
}
// Should never happen because we covered all cases above.
panic(fmt.Errorf("unsupported type %#v", ty))
}
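
For example, a minimal use of TypeString on a programmatically-constructed object type (attribute names invented):

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl/v2/ext/typeexpr"
	"github.com/zclconf/go-cty/cty"
)

func main() {
	ty := cty.Object(map[string]cty.Type{
		"name": cty.String,
		"age":  cty.Number,
	})
	// Attributes are rendered in sorted order.
	fmt.Println(typeexpr.TypeString(ty)) // object({age=number,name=string})
}
```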

View File

@ -1,304 +0,0 @@
package gohcl
import (
"fmt"
"reflect"
"github.com/zclconf/go-cty/cty"
"github.com/hashicorp/hcl/v2"
"github.com/zclconf/go-cty/cty/convert"
"github.com/zclconf/go-cty/cty/gocty"
)
// DecodeBody extracts the configuration within the given body into the given
// value. This value must be a non-nil pointer to either a struct or
// a map, where in the former case the configuration will be decoded using
// struct tags and in the latter case only attributes are allowed and their
// values are decoded into the map.
//
// The given EvalContext is used to resolve any variables or functions in
// expressions encountered while decoding. This may be nil to require only
// constant values, for simple applications that do not support variables or
// functions.
//
// The returned diagnostics should be inspected with its HasErrors method to
// determine if the populated value is valid and complete. If error diagnostics
// are returned then the given value may have been partially-populated but
// may still be accessed by a careful caller for static analysis and editor
// integration use-cases.
func DecodeBody(body hcl.Body, ctx *hcl.EvalContext, val interface{}) hcl.Diagnostics {
rv := reflect.ValueOf(val)
if rv.Kind() != reflect.Ptr {
panic(fmt.Sprintf("target value must be a pointer, not %s", rv.Type().String()))
}
return decodeBodyToValue(body, ctx, rv.Elem())
}
func decodeBodyToValue(body hcl.Body, ctx *hcl.EvalContext, val reflect.Value) hcl.Diagnostics {
et := val.Type()
switch et.Kind() {
case reflect.Struct:
return decodeBodyToStruct(body, ctx, val)
case reflect.Map:
return decodeBodyToMap(body, ctx, val)
default:
panic(fmt.Sprintf("target value must be pointer to struct or map, not %s", et.String()))
}
}
func decodeBodyToStruct(body hcl.Body, ctx *hcl.EvalContext, val reflect.Value) hcl.Diagnostics {
schema, partial := ImpliedBodySchema(val.Interface())
var content *hcl.BodyContent
var leftovers hcl.Body
var diags hcl.Diagnostics
if partial {
content, leftovers, diags = body.PartialContent(schema)
} else {
content, diags = body.Content(schema)
}
if content == nil {
return diags
}
tags := getFieldTags(val.Type())
if tags.Remain != nil {
fieldIdx := *tags.Remain
field := val.Type().Field(fieldIdx)
fieldV := val.Field(fieldIdx)
switch {
case bodyType.AssignableTo(field.Type):
fieldV.Set(reflect.ValueOf(leftovers))
case attrsType.AssignableTo(field.Type):
attrs, attrsDiags := leftovers.JustAttributes()
if len(attrsDiags) > 0 {
diags = append(diags, attrsDiags...)
}
fieldV.Set(reflect.ValueOf(attrs))
default:
diags = append(diags, decodeBodyToValue(leftovers, ctx, fieldV)...)
}
}
for name, fieldIdx := range tags.Attributes {
attr := content.Attributes[name]
field := val.Type().Field(fieldIdx)
fieldV := val.Field(fieldIdx)
if attr == nil {
if !exprType.AssignableTo(field.Type) {
continue
}
// As a special case, if the target is of type hcl.Expression then
// we'll assign an actual expression that evaluates to a cty null,
// so the caller can deal with it within the cty realm rather
// than within the Go realm.
synthExpr := hcl.StaticExpr(cty.NullVal(cty.DynamicPseudoType), body.MissingItemRange())
fieldV.Set(reflect.ValueOf(synthExpr))
continue
}
switch {
case attrType.AssignableTo(field.Type):
fieldV.Set(reflect.ValueOf(attr))
case exprType.AssignableTo(field.Type):
fieldV.Set(reflect.ValueOf(attr.Expr))
default:
diags = append(diags, DecodeExpression(
attr.Expr, ctx, fieldV.Addr().Interface(),
)...)
}
}
blocksByType := content.Blocks.ByType()
for typeName, fieldIdx := range tags.Blocks {
blocks := blocksByType[typeName]
field := val.Type().Field(fieldIdx)
ty := field.Type
isSlice := false
isPtr := false
if ty.Kind() == reflect.Slice {
isSlice = true
ty = ty.Elem()
}
if ty.Kind() == reflect.Ptr {
isPtr = true
ty = ty.Elem()
}
if len(blocks) > 1 && !isSlice {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: fmt.Sprintf("Duplicate %s block", typeName),
Detail: fmt.Sprintf(
"Only one %s block is allowed. Another was defined at %s.",
typeName, blocks[0].DefRange.String(),
),
Subject: &blocks[1].DefRange,
})
continue
}
if len(blocks) == 0 {
if isSlice || isPtr {
val.Field(fieldIdx).Set(reflect.Zero(field.Type))
} else {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: fmt.Sprintf("Missing %s block", typeName),
Detail: fmt.Sprintf("A %s block is required.", typeName),
Subject: body.MissingItemRange().Ptr(),
})
}
continue
}
switch {
case isSlice:
elemType := ty
if isPtr {
elemType = reflect.PtrTo(ty)
}
sli := reflect.MakeSlice(reflect.SliceOf(elemType), len(blocks), len(blocks))
for i, block := range blocks {
if isPtr {
v := reflect.New(ty)
diags = append(diags, decodeBlockToValue(block, ctx, v.Elem())...)
sli.Index(i).Set(v)
} else {
diags = append(diags, decodeBlockToValue(block, ctx, sli.Index(i))...)
}
}
val.Field(fieldIdx).Set(sli)
default:
block := blocks[0]
if isPtr {
v := reflect.New(ty)
diags = append(diags, decodeBlockToValue(block, ctx, v.Elem())...)
val.Field(fieldIdx).Set(v)
} else {
diags = append(diags, decodeBlockToValue(block, ctx, val.Field(fieldIdx))...)
}
}
}
return diags
}
func decodeBodyToMap(body hcl.Body, ctx *hcl.EvalContext, v reflect.Value) hcl.Diagnostics {
attrs, diags := body.JustAttributes()
if attrs == nil {
return diags
}
mv := reflect.MakeMap(v.Type())
for k, attr := range attrs {
switch {
case attrType.AssignableTo(v.Type().Elem()):
mv.SetMapIndex(reflect.ValueOf(k), reflect.ValueOf(attr))
case exprType.AssignableTo(v.Type().Elem()):
mv.SetMapIndex(reflect.ValueOf(k), reflect.ValueOf(attr.Expr))
default:
ev := reflect.New(v.Type().Elem())
diags = append(diags, DecodeExpression(attr.Expr, ctx, ev.Interface())...)
mv.SetMapIndex(reflect.ValueOf(k), ev.Elem())
}
}
v.Set(mv)
return diags
}
func decodeBlockToValue(block *hcl.Block, ctx *hcl.EvalContext, v reflect.Value) hcl.Diagnostics {
var diags hcl.Diagnostics
ty := v.Type()
switch {
case blockType.AssignableTo(ty):
v.Elem().Set(reflect.ValueOf(block))
case bodyType.AssignableTo(ty):
v.Elem().Set(reflect.ValueOf(block.Body))
case attrsType.AssignableTo(ty):
attrs, attrsDiags := block.Body.JustAttributes()
if len(attrsDiags) > 0 {
diags = append(diags, attrsDiags...)
}
v.Elem().Set(reflect.ValueOf(attrs))
default:
diags = append(diags, decodeBodyToValue(block.Body, ctx, v)...)
if len(block.Labels) > 0 {
blockTags := getFieldTags(ty)
for li, lv := range block.Labels {
lfieldIdx := blockTags.Labels[li].FieldIndex
v.Field(lfieldIdx).Set(reflect.ValueOf(lv))
}
}
}
return diags
}
// DecodeExpression extracts the value of the given expression into the given
// value. This value must be something that gocty is able to decode into,
// since the final decoding is delegated to that package.
//
// The given EvalContext is used to resolve any variables or functions in
// expressions encountered while decoding. This may be nil to require only
// constant values, for simple applications that do not support variables or
// functions.
//
// The returned diagnostics should be inspected with its HasErrors method to
// determine if the populated value is valid and complete. If error diagnostics
// are returned then the given value may have been partially-populated but
// may still be accessed by a careful caller for static analysis and editor
// integration use-cases.
func DecodeExpression(expr hcl.Expression, ctx *hcl.EvalContext, val interface{}) hcl.Diagnostics {
srcVal, diags := expr.Value(ctx)
convTy, err := gocty.ImpliedType(val)
if err != nil {
panic(fmt.Sprintf("unsuitable DecodeExpression target: %s", err))
}
srcVal, err = convert.Convert(srcVal, convTy)
if err != nil {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Unsuitable value type",
Detail: fmt.Sprintf("Unsuitable value: %s", err.Error()),
Subject: expr.StartRange().Ptr(),
Context: expr.Range().Ptr(),
})
return diags
}
err = gocty.FromCtyValue(srcVal, val)
if err != nil {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Unsuitable value type",
Detail: fmt.Sprintf("Unsuitable value: %s", err.Error()),
Subject: expr.StartRange().Ptr(),
Context: expr.Range().Ptr(),
})
}
return diags
}
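
A small sketch of DecodeExpression in isolation (expression and target type invented for the example), showing the delegation to gocty for the final conversion:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/gohcl"
	"github.com/hashicorp/hcl/v2/hclsyntax"
)

func main() {
	expr, diags := hclsyntax.ParseExpression([]byte(`[1, 2, 3]`), "nums.hcl", hcl.InitialPos)
	if diags.HasErrors() {
		panic(diags.Error())
	}
	// gocty implies cty.List(cty.Number) from []int and converts the
	// tuple value accordingly.
	var nums []int
	if diags := gohcl.DecodeExpression(expr, nil, &nums); diags.HasErrors() {
		panic(diags.Error())
	}
	fmt.Println(nums) // [1 2 3]
}
```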

View File

@ -1,53 +0,0 @@
// Package gohcl allows decoding HCL configurations into Go data structures.
//
// It provides a convenient and concise way of describing the schema for
// configuration and then accessing the resulting data via native Go
// types.
//
// A struct field tag scheme is used, similar to other decoding and
// unmarshalling libraries. The tags are formatted as in the following example:
//
// ThingType string `hcl:"thing_type,attr"`
//
// Within each tag there are two comma-separated tokens. The first is the
// name of the corresponding construct in configuration, while the second
// is a keyword giving the kind of construct expected. The following
// kind keywords are supported:
//
// attr (the default) indicates that the value is to be populated from an attribute
// block indicates that the value is to be populated from a block
// label indicates that the value is to be populated from a block label
// remain indicates that the value is to be populated from the remaining body after populating other fields
//
// "attr" fields may either be of type *hcl.Expression, in which case the raw
// expression is assigned, or of any type accepted by gocty, in which case
// gocty will be used to assign the value to a native Go type.
//
// "block" fields may be of type *hcl.Block or hcl.Body, in which case the
// corresponding raw value is assigned, or may be a struct that recursively
// uses the same tags. Block fields may also be slices of any of these types,
// in which case multiple blocks of the corresponding type are decoded into
// the slice.
//
// "label" fields are considered only in a struct used as the type of a field
// marked as "block", and are used sequentially to capture the labels of
// the blocks being decoded. In this case, the name token is used only as
// an identifier for the label in diagnostic messages.
//
// "remain" can be placed on a single field that may be either of type
// hcl.Body or hcl.Attributes, in which case any remaining body content is
// placed into this field for delayed processing. If no "remain" field is
// present then any attributes or blocks not matched by another valid tag
// will cause an error diagnostic.
//
// Only a subset of this tagging/typing vocabulary is supported for the
// "Encode" family of functions. See the EncodeIntoBody docs for full details
// on the constraints there.
//
// Broadly-speaking this package deals with two types of error. The first is
// errors in the configuration itself, which are returned as diagnostics
// written with the configuration author as the target audience. The second
// is bugs in the calling program, such as invalid struct tags, which are
// surfaced via panics since there can be no useful runtime handling of such
// errors and they should certainly not be returned to the user as diagnostics.
package gohcl
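
Putting the tag vocabulary above together, a minimal sketch (struct shapes and configuration invented for the example):

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/gohcl"
	"github.com/hashicorp/hcl/v2/hclsyntax"
)

type Service struct {
	Name    string   `hcl:"name,label"`
	Command []string `hcl:"command"`
}

type Config struct {
	LogLevel string    `hcl:"log_level,optional"`
	Services []Service `hcl:"service,block"`
}

const src = `
log_level = "info"

service "web" {
  command = ["./server", "--port=8080"]
}
`

func main() {
	f, diags := hclsyntax.ParseConfig([]byte(src), "config.hcl", hcl.InitialPos)
	if diags.HasErrors() {
		panic(diags.Error())
	}
	var cfg Config
	// nil EvalContext: only constant expressions are allowed.
	if diags := gohcl.DecodeBody(f.Body, nil, &cfg); diags.HasErrors() {
		panic(diags.Error())
	}
	fmt.Println(cfg.LogLevel, cfg.Services[0].Name) // info web
}
```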

View File

@ -1,191 +0,0 @@
package gohcl
import (
"fmt"
"reflect"
"sort"
"github.com/hashicorp/hcl/v2/hclwrite"
"github.com/zclconf/go-cty/cty/gocty"
)
// EncodeIntoBody replaces the contents of the given hclwrite Body with
// attributes and blocks derived from the given value, which must be a
// struct value or a pointer to a struct value with the struct tags defined
// in this package.
//
// This function can work only with fully-decoded data. It will ignore any
// fields tagged as "remain", any fields that decode attributes into either
// hcl.Attribute or hcl.Expression values, and any fields that decode blocks
// into hcl.Attributes values. This function does not have enough information
// to complete the decoding of these types.
//
// Any fields tagged as "label" are ignored by this function. Use EncodeAsBlock
// to produce a whole hclwrite.Block including block labels.
//
// As long as a suitable value is given to encode and the destination body
// is non-nil, this function will always complete. It will panic in case of
// any errors in the calling program, such as passing an inappropriate type
// or a nil body.
//
// The layout of the resulting HCL source is derived from the ordering of
// the struct fields, with blank lines around nested blocks of different types.
// Fields representing attributes should usually precede those representing
// blocks so that the attributes can group together in the result. For more
// control, use the hclwrite API directly.
func EncodeIntoBody(val interface{}, dst *hclwrite.Body) {
rv := reflect.ValueOf(val)
ty := rv.Type()
if ty.Kind() == reflect.Ptr {
rv = rv.Elem()
ty = rv.Type()
}
if ty.Kind() != reflect.Struct {
panic(fmt.Sprintf("value is %s, not struct", ty.Kind()))
}
tags := getFieldTags(ty)
populateBody(rv, ty, tags, dst)
}
// EncodeAsBlock creates a new hclwrite.Block populated with the data from
// the given value, which must be a struct or pointer to struct with the
// struct tags defined in this package.
//
// If the given struct type has fields tagged with "label" tags then they
// will be used in order to annotate the created block with labels.
//
// This function has the same constraints as EncodeIntoBody and will panic
// if they are violated.
func EncodeAsBlock(val interface{}, blockType string) *hclwrite.Block {
rv := reflect.ValueOf(val)
ty := rv.Type()
if ty.Kind() == reflect.Ptr {
rv = rv.Elem()
ty = rv.Type()
}
if ty.Kind() != reflect.Struct {
panic(fmt.Sprintf("value is %s, not struct", ty.Kind()))
}
tags := getFieldTags(ty)
labels := make([]string, len(tags.Labels))
for i, lf := range tags.Labels {
lv := rv.Field(lf.FieldIndex)
// We just stringify whatever we find. It should always be a string
// but if not then we'll still do something reasonable.
labels[i] = fmt.Sprintf("%s", lv.Interface())
}
block := hclwrite.NewBlock(blockType, labels)
populateBody(rv, ty, tags, block.Body())
return block
}
func populateBody(rv reflect.Value, ty reflect.Type, tags *fieldTags, dst *hclwrite.Body) {
nameIdxs := make(map[string]int, len(tags.Attributes)+len(tags.Blocks))
namesOrder := make([]string, 0, len(tags.Attributes)+len(tags.Blocks))
for n, i := range tags.Attributes {
nameIdxs[n] = i
namesOrder = append(namesOrder, n)
}
for n, i := range tags.Blocks {
nameIdxs[n] = i
namesOrder = append(namesOrder, n)
}
sort.SliceStable(namesOrder, func(i, j int) bool {
ni, nj := namesOrder[i], namesOrder[j]
return nameIdxs[ni] < nameIdxs[nj]
})
dst.Clear()
prevWasBlock := false
for _, name := range namesOrder {
fieldIdx := nameIdxs[name]
field := ty.Field(fieldIdx)
fieldTy := field.Type
fieldVal := rv.Field(fieldIdx)
if fieldTy.Kind() == reflect.Ptr {
fieldTy = fieldTy.Elem()
fieldVal = fieldVal.Elem()
}
if _, isAttr := tags.Attributes[name]; isAttr {
if exprType.AssignableTo(fieldTy) || attrType.AssignableTo(fieldTy) {
continue // ignore undecoded fields
}
if !fieldVal.IsValid() {
continue // ignore (field value is nil pointer)
}
if fieldTy.Kind() == reflect.Ptr && fieldVal.IsNil() {
continue // ignore
}
if prevWasBlock {
dst.AppendNewline()
prevWasBlock = false
}
valTy, err := gocty.ImpliedType(fieldVal.Interface())
if err != nil {
panic(fmt.Sprintf("cannot encode %T as HCL expression: %s", fieldVal.Interface(), err))
}
val, err := gocty.ToCtyValue(fieldVal.Interface(), valTy)
if err != nil {
// This should never happen, since we should always be able
// to decode into the implied type.
panic(fmt.Sprintf("failed to encode %T as %#v: %s", fieldVal.Interface(), valTy, err))
}
dst.SetAttributeValue(name, val)
} else { // must be a block, then
elemTy := fieldTy
isSeq := false
if elemTy.Kind() == reflect.Slice || elemTy.Kind() == reflect.Array {
isSeq = true
elemTy = elemTy.Elem()
}
if bodyType.AssignableTo(elemTy) || attrsType.AssignableTo(elemTy) {
continue // ignore undecoded fields
}
prevWasBlock = false
if isSeq {
l := fieldVal.Len()
for i := 0; i < l; i++ {
elemVal := fieldVal.Index(i)
if !elemVal.IsValid() {
continue // ignore (elem value is nil pointer)
}
if elemTy.Kind() == reflect.Ptr && elemVal.IsNil() {
continue // ignore
}
block := EncodeAsBlock(elemVal.Interface(), name)
if !prevWasBlock {
dst.AppendNewline()
prevWasBlock = true
}
dst.AppendBlock(block)
}
} else {
if !fieldVal.IsValid() {
continue // ignore (field value is nil pointer)
}
if elemTy.Kind() == reflect.Ptr && fieldVal.IsNil() {
continue // ignore
}
block := EncodeAsBlock(fieldVal.Interface(), name)
if !prevWasBlock {
dst.AppendNewline()
prevWasBlock = true
}
dst.AppendBlock(block)
}
}
}
}
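
A minimal sketch of EncodeAsBlock (struct and block type invented for the example); EncodeIntoBody works the same way but writes into an existing body:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl/v2/gohcl"
	"github.com/hashicorp/hcl/v2/hclwrite"
)

type Service struct {
	Name    string `hcl:"name,label"`
	Command string `hcl:"command"`
}

func main() {
	f := hclwrite.NewEmptyFile()
	svc := Service{Name: "web", Command: "./server"}
	// The "label" field becomes the block label; other tagged fields
	// become attributes in struct-field order.
	f.Body().AppendBlock(gohcl.EncodeAsBlock(&svc, "service"))
	fmt.Printf("%s", f.Bytes())
	// service "web" {
	//   command = "./server"
	// }
}
```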

View File

@ -1,174 +0,0 @@
package gohcl
import (
"fmt"
"reflect"
"sort"
"strings"
"github.com/hashicorp/hcl/v2"
)
// ImpliedBodySchema produces a hcl.BodySchema derived from the type of the
// given value, which must be a struct value or a pointer to one. If an
// inappropriate value is passed, this function will panic.
//
// The second return argument indicates whether the given struct includes
// a "remain" field, and thus the returned schema is non-exhaustive.
//
// This uses the tags on the fields of the struct to discover how each
// field's value should be expressed within configuration. If an invalid
// mapping is attempted, this function will panic.
func ImpliedBodySchema(val interface{}) (schema *hcl.BodySchema, partial bool) {
ty := reflect.TypeOf(val)
if ty.Kind() == reflect.Ptr {
ty = ty.Elem()
}
if ty.Kind() != reflect.Struct {
panic(fmt.Sprintf("given value must be struct, not %T", val))
}
var attrSchemas []hcl.AttributeSchema
var blockSchemas []hcl.BlockHeaderSchema
tags := getFieldTags(ty)
attrNames := make([]string, 0, len(tags.Attributes))
for n := range tags.Attributes {
attrNames = append(attrNames, n)
}
sort.Strings(attrNames)
for _, n := range attrNames {
idx := tags.Attributes[n]
optional := tags.Optional[n]
field := ty.Field(idx)
var required bool
switch {
case field.Type.AssignableTo(exprType):
// If we're decoding to hcl.Expression then absence can be
// indicated via a null value, so we don't specify that
// the field is required during decoding.
required = false
case field.Type.Kind() != reflect.Ptr && !optional:
required = true
default:
required = false
}
attrSchemas = append(attrSchemas, hcl.AttributeSchema{
Name: n,
Required: required,
})
}
blockNames := make([]string, 0, len(tags.Blocks))
for n := range tags.Blocks {
blockNames = append(blockNames, n)
}
sort.Strings(blockNames)
for _, n := range blockNames {
idx := tags.Blocks[n]
field := ty.Field(idx)
fty := field.Type
if fty.Kind() == reflect.Slice {
fty = fty.Elem()
}
if fty.Kind() == reflect.Ptr {
fty = fty.Elem()
}
if fty.Kind() != reflect.Struct {
panic(fmt.Sprintf(
"hcl 'block' tag kind cannot be applied to %s field %s: struct required", field.Type.String(), field.Name,
))
}
ftags := getFieldTags(fty)
var labelNames []string
if len(ftags.Labels) > 0 {
labelNames = make([]string, len(ftags.Labels))
for i, l := range ftags.Labels {
labelNames[i] = l.Name
}
}
blockSchemas = append(blockSchemas, hcl.BlockHeaderSchema{
Type: n,
LabelNames: labelNames,
})
}
partial = tags.Remain != nil
schema = &hcl.BodySchema{
Attributes: attrSchemas,
Blocks: blockSchemas,
}
return schema, partial
}
type fieldTags struct {
Attributes map[string]int
Blocks map[string]int
Labels []labelField
Remain *int
Optional map[string]bool
}
type labelField struct {
FieldIndex int
Name string
}
func getFieldTags(ty reflect.Type) *fieldTags {
ret := &fieldTags{
Attributes: map[string]int{},
Blocks: map[string]int{},
Optional: map[string]bool{},
}
ct := ty.NumField()
for i := 0; i < ct; i++ {
field := ty.Field(i)
tag := field.Tag.Get("hcl")
if tag == "" {
continue
}
comma := strings.Index(tag, ",")
var name, kind string
if comma != -1 {
name = tag[:comma]
kind = tag[comma+1:]
} else {
name = tag
kind = "attr"
}
switch kind {
case "attr":
ret.Attributes[name] = i
case "block":
ret.Blocks[name] = i
case "label":
ret.Labels = append(ret.Labels, labelField{
FieldIndex: i,
Name: name,
})
case "remain":
if ret.Remain != nil {
panic("only one 'remain' tag is permitted")
}
idx := i // copy, because this loop will continue assigning to i
ret.Remain = &idx
case "optional":
ret.Attributes[name] = i
ret.Optional[name] = true
default:
panic(fmt.Sprintf("invalid hcl field tag kind %q on %s %q", kind, field.Type.String(), field.Name))
}
}
return ret
}
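
For example, a small sketch of the schema implied by a struct (field names invented); note how a pointer-typed field becomes an optional attribute:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl/v2/gohcl"
)

type Listener struct {
	Address string `hcl:"address"`
	TLS     *bool  `hcl:"tls"` // pointer type makes the attribute optional
}

func main() {
	schema, partial := gohcl.ImpliedBodySchema(Listener{})
	for _, a := range schema.Attributes {
		fmt.Printf("%s required=%v\n", a.Name, a.Required)
	}
	// address required=true
	// tls required=false
	fmt.Println("partial:", partial) // false, since there is no "remain" field
}
```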

View File

@ -1,16 +0,0 @@
package gohcl
import (
"reflect"
"github.com/hashicorp/hcl/v2"
)
var victimExpr hcl.Expression
var victimBody hcl.Body
var exprType = reflect.TypeOf(&victimExpr).Elem()
var bodyType = reflect.TypeOf(&victimBody).Elem()
var blockType = reflect.TypeOf((*hcl.Block)(nil))
var attrType = reflect.TypeOf((*hcl.Attribute)(nil))
var attrsType = reflect.TypeOf(hcl.Attributes(nil))

View File

@ -1,21 +0,0 @@
package hcldec
import (
"github.com/hashicorp/hcl/v2"
)
type blockLabel struct {
Value string
Range hcl.Range
}
func labelsForBlock(block *hcl.Block) []blockLabel {
ret := make([]blockLabel, len(block.Labels))
for i := range block.Labels {
ret[i] = blockLabel{
Value: block.Labels[i],
Range: block.LabelRanges[i],
}
}
return ret
}

View File

@ -1,36 +0,0 @@
package hcldec
import (
"github.com/hashicorp/hcl/v2"
"github.com/zclconf/go-cty/cty"
)
func decode(body hcl.Body, blockLabels []blockLabel, ctx *hcl.EvalContext, spec Spec, partial bool) (cty.Value, hcl.Body, hcl.Diagnostics) {
schema := ImpliedSchema(spec)
var content *hcl.BodyContent
var diags hcl.Diagnostics
var leftovers hcl.Body
if partial {
content, leftovers, diags = body.PartialContent(schema)
} else {
content, diags = body.Content(schema)
}
val, valDiags := spec.decode(content, blockLabels, ctx)
diags = append(diags, valDiags...)
return val, leftovers, diags
}
func impliedType(spec Spec) cty.Type {
return spec.impliedType()
}
func sourceRange(body hcl.Body, blockLabels []blockLabel, spec Spec) hcl.Range {
schema := ImpliedSchema(spec)
content, _, _ := body.PartialContent(schema)
return spec.sourceRange(content, blockLabels)
}

View File

@ -1,12 +0,0 @@
// Package hcldec provides a higher-level API for unpacking the content of
// HCL bodies, implemented in terms of the low-level "Content" API exposed
// by the bodies themselves.
//
// It allows decoding an entire nested configuration in a single operation
// by providing a description of the intended structure.
//
// For some applications it may be more convenient to use the "gohcl"
// package, which has a similar purpose but decodes directly into native
// Go data types. hcldec instead targets the cty type system, and thus allows
// a cty-driven application to remain within that type system.
package hcldec

View File

@ -1,23 +0,0 @@
package hcldec
import (
"encoding/gob"
)
func init() {
// Every Spec implementation should be registered with gob, so that
// specs can be sent over gob channels, such as using
// github.com/hashicorp/go-plugin with plugins that need to describe
// what shape of configuration they are expecting.
gob.Register(ObjectSpec(nil))
gob.Register(TupleSpec(nil))
gob.Register((*AttrSpec)(nil))
gob.Register((*LiteralSpec)(nil))
gob.Register((*ExprSpec)(nil))
gob.Register((*BlockSpec)(nil))
gob.Register((*BlockListSpec)(nil))
gob.Register((*BlockSetSpec)(nil))
gob.Register((*BlockMapSpec)(nil))
gob.Register((*BlockLabelSpec)(nil))
gob.Register((*DefaultSpec)(nil))
}

View File

@ -1,81 +0,0 @@
package hcldec
import (
"github.com/hashicorp/hcl/v2"
"github.com/zclconf/go-cty/cty"
)
// Decode interprets the given body using the given specification and returns
// the resulting value. If the given body is not valid per the spec, error
// diagnostics are returned and the returned value is likely to be incomplete.
//
// The ctx argument may be nil, in which case any references to variables or
// functions will produce error diagnostics.
func Decode(body hcl.Body, spec Spec, ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) {
val, _, diags := decode(body, nil, ctx, spec, false)
return val, diags
}
// PartialDecode is like Decode except that it permits "leftover" items in
// the top-level body, which are returned as a new body to allow for
// further processing.
//
// Any descendent block bodies are _not_ decoded partially and thus must
// be fully described by the given specification.
func PartialDecode(body hcl.Body, spec Spec, ctx *hcl.EvalContext) (cty.Value, hcl.Body, hcl.Diagnostics) {
return decode(body, nil, ctx, spec, true)
}
// ImpliedType returns the value type that should result from decoding the
// given spec.
func ImpliedType(spec Spec) cty.Type {
return impliedType(spec)
}
// SourceRange interprets the given body using the given specification and
// then returns the source range of the value that would be used to
// fulfill the spec.
//
// This can be used if application-level validation detects value errors, to
// obtain a reasonable SourceRange to use for generated diagnostics. It works
// best when applied to specific body items (e.g. using AttrSpec, BlockSpec, ...)
// as opposed to entire bodies using ObjectSpec, TupleSpec. The result will
// be less useful the broader the specification, so e.g. a spec that returns
// the entirety of all of the blocks of a given type is likely to be
// _particularly_ arbitrary and useless.
//
// If the given body is not valid per the given spec, the result is best-effort
// and may not actually be something ideal. It's expected that an application
// will already have used Decode or PartialDecode earlier and thus had an
// opportunity to detect and report spec violations.
func SourceRange(body hcl.Body, spec Spec) hcl.Range {
return sourceRange(body, nil, spec)
}
// ChildBlockTypes returns a map of all of the child block types declared
// by the given spec, with block type names as keys and the associated
// nested body specs as values.
func ChildBlockTypes(spec Spec) map[string]Spec {
ret := map[string]Spec{}
// visitSameBodyChildren walks through the spec structure, calling
// the given callback for each descendent spec encountered. We are
// interested in the specs that reference attributes and blocks.
var visit visitFunc
visit = func(s Spec) {
if bs, ok := s.(blockSpec); ok {
for _, blockS := range bs.blockHeaderSchemata() {
nested := bs.nestedSpec()
if nested != nil { // nil can be returned to dynamically opt out of this interface
ret[blockS.Type] = nested
}
}
}
s.visitSameBodyChildren(visit)
}
visit(spec)
return ret
}
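
A minimal sketch of Decode with an invented spec and configuration; an ObjectSpec maps result attribute names to nested specs:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/hcldec"
	"github.com/hashicorp/hcl/v2/hclsyntax"
	"github.com/zclconf/go-cty/cty"
)

const src = `
name = "example"
port = 8080
`

func main() {
	f, diags := hclsyntax.ParseConfig([]byte(src), "config.hcl", hcl.InitialPos)
	if diags.HasErrors() {
		panic(diags.Error())
	}
	spec := hcldec.ObjectSpec{
		"name": &hcldec.AttrSpec{Name: "name", Type: cty.String, Required: true},
		"port": &hcldec.AttrSpec{Name: "port", Type: cty.Number},
	}
	val, diags := hcldec.Decode(f.Body, spec, nil)
	if diags.HasErrors() {
		panic(diags.Error())
	}
	// The result stays in the cty type system as a single object value.
	fmt.Println(val.GetAttr("name").AsString()) // example
}
```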

View File

@ -1,36 +0,0 @@
package hcldec
import (
"github.com/hashicorp/hcl/v2"
)
// ImpliedSchema returns the *hcl.BodySchema implied by the given specification.
// This is the schema that the Decode function will use internally to
// access the content of a given body.
func ImpliedSchema(spec Spec) *hcl.BodySchema {
var attrs []hcl.AttributeSchema
var blocks []hcl.BlockHeaderSchema
// visitSameBodyChildren walks through the spec structure, calling
// the given callback for each descendent spec encountered. We are
// interested in the specs that reference attributes and blocks.
var visit visitFunc
visit = func(s Spec) {
if as, ok := s.(attrSpec); ok {
attrs = append(attrs, as.attrSchemata()...)
}
if bs, ok := s.(blockSpec); ok {
blocks = append(blocks, bs.blockHeaderSchemata()...)
}
s.visitSameBodyChildren(visit)
}
visit(spec)
return &hcl.BodySchema{
Attributes: attrs,
Blocks: blocks,
}
}

File diff suppressed because it is too large

View File

@ -1,36 +0,0 @@
package hcldec
import (
"github.com/hashicorp/hcl/v2"
)
// Variables processes the given body with the given spec and returns a
// list of the variable traversals that would be required to decode
// the same pairing of body and spec.
//
// This can be used to conditionally populate the variables in the EvalContext
// passed to Decode, for applications where a static scope is insufficient.
//
// If the given body is not compliant with the given schema, the result may
// be incomplete, but that's assumed to be okay because the eventual call
// to Decode will produce error diagnostics anyway.
func Variables(body hcl.Body, spec Spec) []hcl.Traversal {
var vars []hcl.Traversal
schema := ImpliedSchema(spec)
content, _, _ := body.PartialContent(schema)
if vs, ok := spec.(specNeedingVariables); ok {
vars = append(vars, vs.variablesNeeded(content)...)
}
var visitFn visitFunc
visitFn = func(s Spec) {
if vs, ok := s.(specNeedingVariables); ok {
vars = append(vars, vs.variablesNeeded(content)...)
}
s.visitSameBodyChildren(visitFn)
}
spec.visitSameBodyChildren(visitFn)
return vars
}
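
For example (spec and configuration invented), collecting the variables a body would need before building the EvalContext:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/hcldec"
	"github.com/hashicorp/hcl/v2/hclsyntax"
	"github.com/zclconf/go-cty/cty"
)

func main() {
	src := []byte(`greeting = "hello, ${var.name}"`)
	f, diags := hclsyntax.ParseConfig(src, "config.hcl", hcl.InitialPos)
	if diags.HasErrors() {
		panic(diags.Error())
	}
	spec := &hcldec.AttrSpec{Name: "greeting", Type: cty.String}
	for _, traversal := range hcldec.Variables(f.Body, spec) {
		fmt.Println(traversal.RootName()) // var
	}
}
```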

View File

@ -1,4 +0,0 @@
// Package hcled provides functionality intended to help an application
// that embeds HCL to deliver relevant information to a text editor or IDE
// for navigating around and analyzing configuration files.
package hcled

View File

@ -1,34 +0,0 @@
package hcled
import (
"github.com/hashicorp/hcl/v2"
)
type contextStringer interface {
ContextString(offset int) string
}
// ContextString returns a string describing the context of the given byte
// offset, if available. An empty string is returned if no such information
// is available, or otherwise the returned string is in a form that depends
// on the language used to write the referenced file.
func ContextString(file *hcl.File, offset int) string {
if cser, ok := file.Nav.(contextStringer); ok {
return cser.ContextString(offset)
}
return ""
}
type contextDefRanger interface {
ContextDefRange(offset int) hcl.Range
}
func ContextDefRange(file *hcl.File, offset int) hcl.Range {
if cser, ok := file.Nav.(contextDefRanger); ok {
defRange := cser.ContextDefRange(offset)
if !defRange.Empty() {
return defRange
}
}
return file.Body.MissingItemRange()
}
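
A small sketch (configuration and offset invented for the example); hclsyntax files populate hcl.File.Nav, so ContextString can describe the enclosing block:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/hcled"
	"github.com/hashicorp/hcl/v2/hclsyntax"
)

func main() {
	src := []byte("service \"web\" {\n  command = \"run\"\n}\n")
	f, _ := hclsyntax.ParseConfig(src, "config.hcl", hcl.InitialPos)
	// A byte offset inside the block body; a real editor integration
	// would derive this from the cursor position.
	offset := 20
	fmt.Println(hcled.ContextString(f, offset)) // e.g. service "web"
}
```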

View File

@ -1,135 +0,0 @@
// Package hclparse has the main API entry point for parsing both HCL native
// syntax and HCL JSON.
//
// The main HCL package also includes SimpleParse and SimpleParseFile which
// can be a simpler interface for the common case where an application just
// needs to parse a single file. The gohcl package simplifies that further
// in its SimpleDecode function, which combines hcl.SimpleParse with decoding
// into Go struct values
//
// Package hclparse, then, is useful for applications that require more fine
// control over parsing or which need to load many separate files and keep
// track of them for possible error reporting or other analysis.
package hclparse
import (
"fmt"
"io/ioutil"
"github.com/hashicorp/hcl/v2"
"github.com/hashicorp/hcl/v2/hclsyntax"
"github.com/hashicorp/hcl/v2/json"
)
// NOTE: This is the public interface for parsing. The actual parsers are
// in other packages alongside this one, with this package just wrapping them
// to provide a unified interface for the caller across all supported formats.
// Parser is the main interface for parsing configuration files. As well as
// parsing files, a parser also retains a registry of all of the files it
// has parsed so that multiple attempts to parse the same file will return
// the same object and so the collected files can be used when printing
// diagnostics.
//
// Any diagnostics for parsing a file are only returned once on the first
// call to parse that file. Callers are expected to collect up diagnostics
// and present them together, so returning diagnostics for the same file
// multiple times would create a confusing result.
type Parser struct {
files map[string]*hcl.File
}
// NewParser creates a new parser, ready to parse configuration files.
func NewParser() *Parser {
return &Parser{
files: map[string]*hcl.File{},
}
}
// ParseHCL parses the given buffer (which is assumed to have been loaded from
// the given filename) as a native-syntax configuration file and returns the
// hcl.File object representing it.
func (p *Parser) ParseHCL(src []byte, filename string) (*hcl.File, hcl.Diagnostics) {
if existing := p.files[filename]; existing != nil {
return existing, nil
}
file, diags := hclsyntax.ParseConfig(src, filename, hcl.Pos{Byte: 0, Line: 1, Column: 1})
p.files[filename] = file
return file, diags
}
// ParseHCLFile reads the given filename and parses it as a native-syntax HCL
// configuration file. An error diagnostic is returned if the given file
// cannot be read.
func (p *Parser) ParseHCLFile(filename string) (*hcl.File, hcl.Diagnostics) {
if existing := p.files[filename]; existing != nil {
return existing, nil
}
src, err := ioutil.ReadFile(filename)
if err != nil {
return nil, hcl.Diagnostics{
{
Severity: hcl.DiagError,
Summary: "Failed to read file",
Detail: fmt.Sprintf("The configuration file %q could not be read.", filename),
},
}
}
return p.ParseHCL(src, filename)
}
// ParseJSON parses the given JSON buffer (which is assumed to have been loaded
// from the given filename) and returns the hcl.File object representing it.
func (p *Parser) ParseJSON(src []byte, filename string) (*hcl.File, hcl.Diagnostics) {
if existing := p.files[filename]; existing != nil {
return existing, nil
}
file, diags := json.Parse(src, filename)
p.files[filename] = file
return file, diags
}
// ParseJSONFile reads the given filename and parses it as JSON, similarly to
// ParseJSON. An error diagnostic is returned if the given file cannot be read.
func (p *Parser) ParseJSONFile(filename string) (*hcl.File, hcl.Diagnostics) {
if existing := p.files[filename]; existing != nil {
return existing, nil
}
file, diags := json.ParseFile(filename)
p.files[filename] = file
return file, diags
}
// AddFile allows a caller to record in a parser a file that was parsed some
// other way, thus allowing it to be included in the registry of sources.
func (p *Parser) AddFile(filename string, file *hcl.File) {
p.files[filename] = file
}
// Sources returns a map from filenames to the raw source code that was
// read from them. This is intended to be used, for example, to print
// diagnostics with contextual information.
//
// The arrays underlying the returned slices should not be modified.
func (p *Parser) Sources() map[string][]byte {
ret := make(map[string][]byte)
for fn, f := range p.files {
ret[fn] = f.Bytes
}
return ret
}
// Files returns a map from filenames to the File objects produced from them.
// This is intended to be used, for example, to print diagnostics with
// contextual information.
//
// The returned map and all of the objects it refers to directly or indirectly
// must not be modified.
func (p *Parser) Files() map[string]*hcl.File {
return p.files
}
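
A minimal sketch of the parser's file registry feeding diagnostic output (file name and source invented); hcl.NewDiagnosticTextWriter uses the registered files to print source snippets alongside each diagnostic:

```go
package main

import (
	"os"

	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/hclparse"
)

func main() {
	parser := hclparse.NewParser()
	// Deliberately malformed input to produce a parse error.
	_, diags := parser.ParseHCL([]byte(`listener "web" {`), "bad.hcl")
	if diags.HasErrors() {
		wr := hcl.NewDiagnosticTextWriter(os.Stderr, parser.Files(), 80, false)
		wr.WriteDiagnostics(diags)
	}
}
```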

View File

@ -2,9 +2,11 @@ package hclsyntax
import (
"fmt"
"sort"
"sync"
"github.com/hashicorp/hcl/v2"
"github.com/hashicorp/hcl/v2/ext/customdecode"
"github.com/zclconf/go-cty/cty"
"github.com/zclconf/go-cty/cty/convert"
"github.com/zclconf/go-cty/cty/function"
@ -24,7 +26,33 @@ type Expression interface {
}
// Assert that Expression implements hcl.Expression
var assertExprImplExpr hcl.Expression = Expression(nil)
var _ hcl.Expression = Expression(nil)
// ParenthesesExpr represents an expression written in grouping
// parentheses.
//
// The parser takes care of the precedence effect of the parentheses, so the
// only purpose of this separate expression node is to capture the source range
// of the parentheses themselves, rather than the source range of the
// expression within. All of the other expression operations just pass through
// to the underlying expression.
type ParenthesesExpr struct {
Expression
SrcRange hcl.Range
}
var _ hcl.Expression = (*ParenthesesExpr)(nil)
func (e *ParenthesesExpr) Range() hcl.Range {
return e.SrcRange
}
func (e *ParenthesesExpr) walkChildNodes(w internalWalkFunc) {
// We override the walkChildNodes from the embedded Expression to
// ensure that both the parentheses _and_ the content are visible
// in a walk.
w(e.Expression)
}
// LiteralValueExpr is an expression that just always returns a given value.
type LiteralValueExpr struct {
@ -242,6 +270,10 @@ func (e *FunctionCallExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnosti
}
}
diagExtra := functionCallDiagExtra{
calledFunctionName: e.Name,
}
params := f.Params()
varParam := f.VarParam()
@ -259,6 +291,21 @@ func (e *FunctionCallExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnosti
}
switch {
case expandVal.Type().Equals(cty.DynamicPseudoType):
if expandVal.IsNull() {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid expanding argument value",
Detail: "The expanding argument (indicated by ...) must not be null.",
Subject: expandExpr.Range().Ptr(),
Context: e.Range().Ptr(),
Expression: expandExpr,
EvalContext: ctx,
Extra: &diagExtra,
})
return cty.DynamicVal, diags
}
return cty.DynamicVal, diags
case expandVal.Type().IsTupleType() || expandVal.Type().IsListType() || expandVal.Type().IsSetType():
if expandVal.IsNull() {
diags = append(diags, &hcl.Diagnostic{
@ -269,6 +316,7 @@ func (e *FunctionCallExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnosti
Context: e.Range().Ptr(),
Expression: expandExpr,
EvalContext: ctx,
Extra: &diagExtra,
})
return cty.DynamicVal, diags
}
@ -276,13 +324,17 @@ func (e *FunctionCallExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnosti
return cty.DynamicVal, diags
}
// When expanding arguments from a collection, we must first unmark
// the collection itself, and apply any marks directly to the
// elements. This ensures that marks propagate correctly.
expandVal, marks := expandVal.Unmark()
newArgs := make([]Expression, 0, (len(args)-1)+expandVal.LengthInt())
newArgs = append(newArgs, args[:len(args)-1]...)
it := expandVal.ElementIterator()
for it.Next() {
_, val := it.Element()
newArgs = append(newArgs, &LiteralValueExpr{
Val: val,
Val: val.WithMarks(marks),
SrcRange: expandExpr.Range(),
})
}
@ -296,6 +348,7 @@ func (e *FunctionCallExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnosti
Context: e.Range().Ptr(),
Expression: expandExpr,
EvalContext: ctx,
Extra: &diagExtra,
})
return cty.DynamicVal, diags
}
@ -319,6 +372,7 @@ func (e *FunctionCallExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnosti
Context: e.Range().Ptr(),
Expression: e,
EvalContext: ctx,
Extra: &diagExtra,
},
}
}
@ -336,6 +390,7 @@ func (e *FunctionCallExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnosti
Context: e.Range().Ptr(),
Expression: e,
EvalContext: ctx,
Extra: &diagExtra,
},
}
}
@ -350,26 +405,39 @@ func (e *FunctionCallExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnosti
param = varParam
}
val, argDiags := argExpr.Value(ctx)
if len(argDiags) > 0 {
var val cty.Value
if decodeFn := customdecode.CustomExpressionDecoderForType(param.Type); decodeFn != nil {
var argDiags hcl.Diagnostics
val, argDiags = decodeFn(argExpr, ctx)
diags = append(diags, argDiags...)
}
if val == cty.NilVal {
val = cty.UnknownVal(param.Type)
}
} else {
var argDiags hcl.Diagnostics
val, argDiags = argExpr.Value(ctx)
if len(argDiags) > 0 {
diags = append(diags, argDiags...)
}
// Try to convert our value to the parameter type
val, err := convert.Convert(val, param.Type)
if err != nil {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid function argument",
Detail: fmt.Sprintf(
"Invalid value for %q parameter: %s.",
param.Name, err,
),
Subject: argExpr.StartRange().Ptr(),
Context: e.Range().Ptr(),
Expression: argExpr,
EvalContext: ctx,
})
// Try to convert our value to the parameter type
var err error
val, err = convert.Convert(val, param.Type)
if err != nil {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid function argument",
Detail: fmt.Sprintf(
"Invalid value for %q parameter: %s.",
param.Name, err,
),
Subject: argExpr.StartRange().Ptr(),
Context: e.Range().Ptr(),
Expression: argExpr,
EvalContext: ctx,
Extra: &diagExtra,
})
}
}
argVals[i] = val
@ -384,6 +452,10 @@ func (e *FunctionCallExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnosti
resultVal, err := f.Call(argVals)
if err != nil {
// For errors in the underlying call itself we also return the raw
// call error via an extra method on our "diagnostic extra" value.
diagExtra.functionCallError = err
switch terr := err.(type) {
case function.ArgError:
i := terr.Index
@ -393,22 +465,75 @@ func (e *FunctionCallExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnosti
} else {
param = varParam
}
argExpr := e.Args[i]
// TODO: we should also unpick a PathError here and show the
// path to the deep value where the error was detected.
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid function argument",
Detail: fmt.Sprintf(
"Invalid value for %q parameter: %s.",
param.Name, err,
),
Subject: argExpr.StartRange().Ptr(),
Context: e.Range().Ptr(),
Expression: argExpr,
EvalContext: ctx,
})
if param == nil || i > len(args)-1 {
// Getting here means that the function we called has a bug:
// it returned an arg error that refers to an argument index
// that wasn't present in the call. For that situation
// we'll degrade to a less specific error just to give
// some sort of answer, but best to still fix the buggy
// function so that it only returns argument indices that
// are in range.
switch {
case param != nil:
// In this case we'll assume that the function was trying
// to talk about a final variadic parameter but the caller
// didn't actually provide any arguments for it. That means
// we can at least still name the parameter in the
// error message, but our source range will be the call
// as a whole because we don't have an argument expression
// to highlight specifically.
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid function argument",
Detail: fmt.Sprintf(
"Invalid value for %q parameter: %s.",
param.Name, err,
),
Subject: e.Range().Ptr(),
Expression: e,
EvalContext: ctx,
Extra: &diagExtra,
})
default:
// This is the most degenerate case of all, where the
// index is out of range even for the declared parameters,
// and so we can't tell which parameter the function is
// trying to report an error for. Just a generic error
// report in that case.
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Error in function call",
Detail: fmt.Sprintf(
"Call to function %q failed: %s.",
e.Name, err,
),
Subject: e.StartRange().Ptr(),
Context: e.Range().Ptr(),
Expression: e,
EvalContext: ctx,
Extra: &diagExtra,
})
}
} else {
argExpr := args[i]
// TODO: we should also unpick a PathError here and show the
// path to the deep value where the error was detected.
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid function argument",
Detail: fmt.Sprintf(
"Invalid value for %q parameter: %s.",
param.Name, err,
),
Subject: argExpr.StartRange().Ptr(),
Context: e.Range().Ptr(),
Expression: argExpr,
EvalContext: ctx,
Extra: &diagExtra,
})
}
default:
diags = append(diags, &hcl.Diagnostic{
@ -422,6 +547,7 @@ func (e *FunctionCallExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnosti
Context: e.Range().Ptr(),
Expression: e,
EvalContext: ctx,
Extra: &diagExtra,
})
}
@ -454,6 +580,39 @@ func (e *FunctionCallExpr) ExprCall() *hcl.StaticCall {
return ret
}
// FunctionCallDiagExtra is an interface implemented by the value in the "Extra"
// field of some diagnostics returned by FunctionCallExpr.Value, giving
// cooperating callers access to some machine-readable information about the
// call that a diagnostic relates to.
type FunctionCallDiagExtra interface {
// CalledFunctionName returns the name of the function being called at
// the time the diagnostic was generated, if any. Returns an empty string
// if there is no known called function.
CalledFunctionName() string
// FunctionCallError returns the error value returned by the implementation
// of the function being called, if any. Returns nil if the diagnostic was
// not returned in response to a call error.
//
// Some errors related to calling functions are generated by HCL itself
// rather than by the underlying function, in which case this method
// will return nil.
FunctionCallError() error
}
type functionCallDiagExtra struct {
calledFunctionName string
functionCallError error
}
func (e *functionCallDiagExtra) CalledFunctionName() string {
return e.calledFunctionName
}
func (e *functionCallDiagExtra) FunctionCallError() error {
return e.functionCallError
}
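
A hedged consumer-side sketch of the new `Extra` machinery (the helper name `reportFunctionCallFailures` is hypothetical): a cooperating caller can type-assert a diagnostic's `Extra` field against `FunctionCallDiagExtra` to recover the failing function's name and its raw error.

```go
package main

import (
	"log"

	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/hclsyntax"
)

// reportFunctionCallFailures scans diagnostics for FunctionCallDiagExtra
// values and logs the raw error returned by the failing function, if any.
func reportFunctionCallFailures(diags hcl.Diagnostics) {
	for _, diag := range diags {
		extra, ok := diag.Extra.(hclsyntax.FunctionCallDiagExtra)
		if !ok {
			continue // not a function-call diagnostic
		}
		if err := extra.FunctionCallError(); err != nil {
			log.Printf("call to %q failed: %s", extra.CalledFunctionName(), err)
		}
	}
}
```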
type ConditionalExpr struct {
Condition Expression
TrueResult Expression
@ -508,12 +667,8 @@ func (e *ConditionalExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostic
Severity: hcl.DiagError,
Summary: "Inconsistent conditional result types",
Detail: fmt.Sprintf(
// FIXME: Need a helper function for showing natural-language type diffs,
// since this will generate some useless messages in some cases, like
// "These expressions are object and object respectively" if the
// object types don't exactly match.
"The true and false result expressions must have consistent types. The given expressions are %s and %s, respectively.",
trueResult.Type().FriendlyName(), falseResult.Type().FriendlyName(),
"The true and false result expressions must have consistent types. %s.",
describeConditionalTypeMismatch(trueResult.Type(), falseResult.Type()),
),
Subject: hcl.RangeBetween(e.TrueResult.Range(), e.FalseResult.Range()).Ptr(),
Context: &e.SrcRange,
@ -545,7 +700,7 @@ func (e *ConditionalExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostic
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Incorrect condition type",
Detail: fmt.Sprintf("The condition expression must be of type bool."),
Detail: "The condition expression must be of type bool.",
Subject: e.Condition.Range().Ptr(),
Context: &e.SrcRange,
Expression: e.Condition,
@ -554,6 +709,8 @@ func (e *ConditionalExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostic
return cty.UnknownVal(resultType), diags
}
// Unmark result before testing for truthiness
condResult, _ = condResult.UnmarkDeep()
if condResult.True() {
diags = append(diags, trueDiags...)
if convs[0] != nil {
@ -603,6 +760,144 @@ func (e *ConditionalExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostic
}
}
// describeConditionalTypeMismatch makes a best effort to describe the
// difference between types in the true and false arms of a conditional
// expression in a way that would be useful to someone trying to understand
// why their conditional expression isn't valid.
//
// NOTE: This function is only designed to deal with situations
// where trueTy and falseTy are different. Calling it with two equal
// types will produce a nonsense result. This function also only really
// deals with situations that type unification can't resolve, so we should
// call this function only after trying type unification first.
func describeConditionalTypeMismatch(trueTy, falseTy cty.Type) string {
// The main tricky cases here are when both trueTy and falseTy are
// of the same structural type kind, such as both being object types
// or both being tuple types. In that case the "FriendlyName" method
// returns only "object" or "tuple" and so we need to do some more
// work to describe what's different inside them.
switch {
case trueTy.IsObjectType() && falseTy.IsObjectType():
// We'll first gather up the attribute names and sort them. In the
// event that there are multiple attributes that disagree across
// the two types, we'll prefer to report the one that sorts lexically
// least just so that our error message is consistent between
// evaluations.
var trueAttrs, falseAttrs []string
for name := range trueTy.AttributeTypes() {
trueAttrs = append(trueAttrs, name)
}
sort.Strings(trueAttrs)
for name := range falseTy.AttributeTypes() {
falseAttrs = append(falseAttrs, name)
}
sort.Strings(falseAttrs)
for _, name := range trueAttrs {
if !falseTy.HasAttribute(name) {
return fmt.Sprintf("The 'true' value includes object attribute %q, which is absent in the 'false' value", name)
}
trueAty := trueTy.AttributeType(name)
falseAty := falseTy.AttributeType(name)
if !trueAty.Equals(falseAty) {
// For deeply-nested differences this will likely get very
// clunky quickly by nesting these messages inside one another,
// but we'll accept that for now in the interests of producing
// _some_ useful feedback, even if it isn't as concise as
// we'd prefer it to be. Deeply-nested structures in
// conditionals are thankfully not super common.
return fmt.Sprintf(
"Type mismatch for object attribute %q: %s",
name, describeConditionalTypeMismatch(trueAty, falseAty),
)
}
}
for _, name := range falseAttrs {
if !trueTy.HasAttribute(name) {
return fmt.Sprintf("The 'false' value includes object attribute %q, which is absent in the 'true' value", name)
}
// NOTE: We don't need to check the attribute types again, because
// any attribute that both types have in common would already have
// been checked in the previous loop.
}
case trueTy.IsTupleType() && falseTy.IsTupleType():
trueEtys := trueTy.TupleElementTypes()
falseEtys := falseTy.TupleElementTypes()
if trueCount, falseCount := len(trueEtys), len(falseEtys); trueCount != falseCount {
return fmt.Sprintf("The 'true' tuple has length %d, but the 'false' tuple has length %d", trueCount, falseCount)
}
// NOTE: Thanks to the condition above, we know that both tuples are
// of the same length and so they must have some differing types
// instead.
for i := range trueEtys {
trueEty := trueEtys[i]
falseEty := falseEtys[i]
if !trueEty.Equals(falseEty) {
// For deeply-nested differences this will likely get very
// clunky quickly by nesting these messages inside one another,
// but we'll accept that for now in the interests of producing
// _some_ useful feedback, even if it isn't as concise as
// we'd prefer it to be. Deeply-nested structures in
// conditionals are thankfully not super common.
return fmt.Sprintf(
"Type mismatch for tuple element %d: %s",
i, describeConditionalTypeMismatch(trueEty, falseEty),
)
}
}
case trueTy.IsCollectionType() && falseTy.IsCollectionType():
// For this case we're specifically interested in the situation where:
// - both collections are of the same kind, AND
// - the element types of both are either object or tuple types.
// This is just to avoid writing a useless statement like
// "The 'true' value is list of object, but the 'false' value is list of object".
// This still doesn't account for more awkward cases like collections
// of collections of structural types, but we won't let perfect be
// the enemy of the good.
trueEty := trueTy.ElementType()
falseEty := falseTy.ElementType()
if (trueTy.IsListType() && falseTy.IsListType()) || (trueTy.IsMapType() && falseTy.IsMapType()) || (trueTy.IsSetType() && falseTy.IsSetType()) {
if (trueEty.IsObjectType() && falseEty.IsObjectType()) || (trueEty.IsTupleType() && falseEty.IsTupleType()) {
noun := "collection"
switch { // NOTE: We now know that trueTy and falseTy have the same collection kind
case trueTy.IsListType():
noun = "list"
case trueTy.IsSetType():
noun = "set"
case trueTy.IsMapType():
noun = "map"
}
return fmt.Sprintf(
"Mismatched %s element types: %s",
noun, describeConditionalTypeMismatch(trueEty, falseEty),
)
}
}
}
// If we don't manage any more specialized message, we'll just report
// what the two types are.
trueName := trueTy.FriendlyName()
falseName := falseTy.FriendlyName()
if trueName == falseName {
// Absolute last resort for when we have no special rule above but
// we have two types with the same friendly name anyway. This is
// the most vague of all possible messages but is reserved for
// particularly awkward cases, like lists of lists of differing tuple
// types.
return "At least one deeply-nested attribute or element is not compatible across both the 'true' and the 'false' value"
}
return fmt.Sprintf(
"The 'true' value is %s, but the 'false' value is %s",
trueTy.FriendlyName(), falseTy.FriendlyName(),
)
}
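
A rough in-package test sketch (test name and expected string are my reading of the code above) showing the message shape for a single-attribute mismatch:

```go
package hclsyntax

import (
	"testing"

	"github.com/zclconf/go-cty/cty"
)

func TestDescribeConditionalTypeMismatch(t *testing.T) {
	trueTy := cty.Object(map[string]cty.Type{"a": cty.String})
	falseTy := cty.Object(map[string]cty.Type{"a": cty.Number})

	// The attribute types differ, so we expect the nested description.
	got := describeConditionalTypeMismatch(trueTy, falseTy)
	want := `Type mismatch for object attribute "a": The 'true' value is string, but the 'false' value is number`
	if got != want {
		t.Errorf("wrong message\ngot:  %s\nwant: %s", got, want)
	}
}
```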
func (e *ConditionalExpr) Range() hcl.Range {
return e.SrcRange
}
@ -615,8 +910,9 @@ type IndexExpr struct {
Collection Expression
Key Expression
SrcRange hcl.Range
OpenRange hcl.Range
SrcRange hcl.Range
OpenRange hcl.Range
BracketRange hcl.Range
}
func (e *IndexExpr) walkChildNodes(w internalWalkFunc) {
@ -631,7 +927,7 @@ func (e *IndexExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) {
diags = append(diags, collDiags...)
diags = append(diags, keyDiags...)
val, indexDiags := hcl.Index(coll, key, &e.SrcRange)
val, indexDiags := hcl.Index(coll, key, &e.BracketRange)
setDiagEvalContext(indexDiags, e, ctx)
diags = append(diags, indexDiags...)
return val, diags
@ -711,6 +1007,7 @@ func (e *ObjectConsExpr) walkChildNodes(w internalWalkFunc) {
func (e *ObjectConsExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) {
var vals map[string]cty.Value
var diags hcl.Diagnostics
var marks []cty.ValueMarks
// This will get set to true if we fail to produce any of our keys,
// either because they are actually unknown or if the evaluation produces
@ -748,6 +1045,9 @@ func (e *ObjectConsExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics
continue
}
key, keyMarks := key.Unmark()
marks = append(marks, keyMarks)
var err error
key, err = convert.Convert(key, cty.String)
if err != nil {
@ -777,7 +1077,7 @@ func (e *ObjectConsExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics
return cty.DynamicVal, diags
}
return cty.ObjectVal(vals), diags
return cty.ObjectVal(vals).WithMarks(marks...), diags
}
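
A small end-to-end sketch of the key-mark propagation added above (the `"sensitive"` mark name is arbitrary): marking an object key now marks the constructed object as a whole.

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/hclsyntax"
	"github.com/zclconf/go-cty/cty"
)

func main() {
	expr, diags := hclsyntax.ParseExpression([]byte(`{ (k) = 1 }`), "inline", hcl.InitialPos)
	if diags.HasErrors() {
		panic(diags.Error())
	}
	ctx := &hcl.EvalContext{Variables: map[string]cty.Value{
		"k": cty.StringVal("name").Mark("sensitive"),
	}}
	val, diags := expr.Value(ctx)
	if diags.HasErrors() {
		panic(diags.Error())
	}
	fmt.Println(val.HasMark("sensitive")) // true: the key's mark moved to the object
}
```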
func (e *ObjectConsExpr) Range() hcl.Range {
@ -907,6 +1207,7 @@ type ForExpr struct {
func (e *ForExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) {
var diags hcl.Diagnostics
var marks []cty.ValueMarks
collVal, collDiags := e.CollExpr.Value(ctx)
diags = append(diags, collDiags...)
@ -926,6 +1227,10 @@ func (e *ForExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) {
if collVal.Type() == cty.DynamicPseudoType {
return cty.DynamicVal, diags
}
// Unmark collection before checking for iterability, because marked
// values cannot be iterated
collVal, collMarks := collVal.Unmark()
marks = append(marks, collMarks)
if !collVal.CanIterateElements() {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
@ -1050,7 +1355,11 @@ func (e *ForExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) {
continue
}
if include.False() {
// Extract and merge marks from the include expression into the
// main set of marks
includeUnmarked, includeMarks := include.Unmark()
marks = append(marks, includeMarks)
if includeUnmarked.False() {
// Skip this element
continue
}
@ -1095,6 +1404,9 @@ func (e *ForExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) {
continue
}
key, keyMarks := key.Unmark()
marks = append(marks, keyMarks)
val, valDiags := e.ValExpr.Value(childCtx)
diags = append(diags, valDiags...)
@ -1133,7 +1445,7 @@ func (e *ForExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) {
}
}
return cty.ObjectVal(vals), diags
return cty.ObjectVal(vals).WithMarks(marks...), diags
} else {
// Producing a tuple
@ -1194,7 +1506,11 @@ func (e *ForExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) {
continue
}
if include.False() {
// Extract and merge marks from the include expression into the
// main set of marks
includeUnmarked, includeMarks := include.Unmark()
marks = append(marks, includeMarks)
if includeUnmarked.False() {
// Skip this element
continue
}
@ -1209,7 +1525,7 @@ func (e *ForExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) {
return cty.DynamicVal, diags
}
return cty.TupleVal(vals), diags
return cty.TupleVal(vals).WithMarks(marks...), diags
}
}
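
The same idea for `for` expressions, as a sketch (mark name arbitrary): a mark on the source collection carries through to the result tuple.

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/hclsyntax"
	"github.com/zclconf/go-cty/cty"
)

func main() {
	expr, diags := hclsyntax.ParseExpression([]byte(`[for v in xs : v]`), "inline", hcl.InitialPos)
	if diags.HasErrors() {
		panic(diags.Error())
	}
	ctx := &hcl.EvalContext{Variables: map[string]cty.Value{
		"xs": cty.ListVal([]cty.Value{cty.StringVal("a"), cty.StringVal("b")}).Mark("private"),
	}}
	val, diags := expr.Value(ctx)
	if diags.HasErrors() {
		panic(diags.Error())
	}
	fmt.Println(val.HasMark("private")) // true: the collection's mark survives iteration
}
```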
@ -1272,12 +1588,6 @@ func (e *SplatExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) {
}
sourceTy := sourceVal.Type()
if sourceTy == cty.DynamicPseudoType {
// If we don't even know the _type_ of our source value yet then
// we'll need to defer all processing, since we can't decide our
// result type either.
return cty.DynamicVal, diags
}
// A "special power" of splat expressions is that they can be applied
// both to tuples/lists and to other values, and in the latter case
@ -1301,9 +1611,29 @@ func (e *SplatExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) {
return cty.DynamicVal, diags
}
if sourceTy == cty.DynamicPseudoType {
// If we don't even know the _type_ of our source value yet then
// we'll need to defer all processing, since we can't decide our
// result type either.
return cty.DynamicVal, diags
}
upgradedUnknown := false
if autoUpgrade {
// If we're upgrading an unknown value to a tuple/list, the result
// cannot be known. Otherwise a tuple containing an unknown value will
// upgrade to a different number of elements depending on whether
// sourceVal becomes null or not.
// We record this condition here so we can process any remaining
// expression after the * to verify the result of the traversal. For
// example, it is valid to use a splat on a single object to retrieve a
// list of a single attribute, but we still need to check if that
// attribute actually exists.
upgradedUnknown = !sourceVal.IsKnown()
sourceVal = cty.TupleVal([]cty.Value{sourceVal})
sourceTy = sourceVal.Type()
}
// We'll compute our result type lazily if we need it. In the normal case
@ -1345,6 +1675,9 @@ func (e *SplatExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) {
return cty.UnknownVal(ty), diags
}
// Unmark the collection, and save the marks to apply to the returned
// collection result
sourceVal, marks := sourceVal.Unmark()
vals := make([]cty.Value, 0, sourceVal.LengthInt())
it := sourceVal.ElementIterator()
if ctx == nil {
@ -1365,6 +1698,10 @@ func (e *SplatExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) {
}
e.Item.clearValue(ctx) // clean up our temporary value
if upgradedUnknown {
return cty.DynamicVal, diags
}
if !isKnown {
// We'll ignore the resultTy diagnostics in this case since they
// will just be the same errors we saw while iterating above.
@ -1379,9 +1716,9 @@ func (e *SplatExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) {
diags = append(diags, tyDiags...)
return cty.ListValEmpty(ty.ElementType()), diags
}
return cty.ListVal(vals), diags
return cty.ListVal(vals).WithMarks(marks), diags
default:
return cty.TupleVal(vals), diags
return cty.TupleVal(vals).WithMarks(marks), diags
}
}
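
A sketch of the splat "auto-upgrade" behavior this code preserves (variable names are illustrative): applying a splat to a single non-collection value wraps it in a one-element tuple before traversing.

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/hclsyntax"
	"github.com/zclconf/go-cty/cty"
)

func main() {
	expr, diags := hclsyntax.ParseExpression([]byte(`obj.*.name`), "inline", hcl.InitialPos)
	if diags.HasErrors() {
		panic(diags.Error())
	}
	ctx := &hcl.EvalContext{Variables: map[string]cty.Value{
		"obj": cty.ObjectVal(map[string]cty.Value{"name": cty.StringVal("a")}),
	}}
	val, diags := expr.Value(ctx)
	if diags.HasErrors() {
		panic(diags.Error())
	}
	fmt.Printf("%#v\n", val) // cty.TupleVal([]cty.Value{cty.StringVal("a")})
}
```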

View File

@ -26,6 +26,9 @@ func (e *TemplateExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics)
var diags hcl.Diagnostics
isKnown := true
// Maintain a set of marks for values used in the template
marks := make(cty.ValueMarks)
for _, part := range e.Parts {
partVal, partDiags := part.Value(ctx)
diags = append(diags, partDiags...)
@ -45,6 +48,12 @@ func (e *TemplateExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics)
continue
}
// Unmark the part and merge its marks into the set
unmarkedVal, partMarks := partVal.Unmark()
for k, v := range partMarks {
marks[k] = v
}
if !partVal.IsKnown() {
// If any part is unknown then the result as a whole must be
// unknown too. We'll keep on processing the rest of the parts
@ -54,7 +63,7 @@ func (e *TemplateExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics)
continue
}
strVal, err := convert.Convert(partVal, cty.String)
strVal, err := convert.Convert(unmarkedVal, cty.String)
if err != nil {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
@ -74,11 +83,15 @@ func (e *TemplateExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics)
buf.WriteString(strVal.AsString())
}
var ret cty.Value
if !isKnown {
return cty.UnknownVal(cty.String), diags
ret = cty.UnknownVal(cty.String)
} else {
ret = cty.StringVal(buf.String())
}
return cty.StringVal(buf.String()), diags
// Apply the full set of marks to the returned value
return ret.WithMarks(marks), diags
}
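
A minimal sketch of the template mark propagation above (mark name arbitrary): a mark on any interpolated part ends up on the rendered string.

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/hclsyntax"
	"github.com/zclconf/go-cty/cty"
)

func main() {
	expr, diags := hclsyntax.ParseTemplate([]byte(`hello ${name}`), "inline", hcl.InitialPos)
	if diags.HasErrors() {
		panic(diags.Error())
	}
	ctx := &hcl.EvalContext{Variables: map[string]cty.Value{
		"name": cty.StringVal("world").Mark("sensitive"),
	}}
	val, diags := expr.Value(ctx)
	if diags.HasErrors() {
		panic(diags.Error())
	}
	fmt.Println(val.HasMark("sensitive")) // true: the part's mark is applied to the result
}
```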
func (e *TemplateExpr) Range() hcl.Range {
@ -139,6 +152,8 @@ func (e *TemplateJoinExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnosti
return cty.UnknownVal(cty.String), diags
}
tuple, marks := tuple.Unmark()
allMarks := []cty.ValueMarks{marks}
buf := &bytes.Buffer{}
it := tuple.ElementIterator()
for it.Next() {
@ -158,7 +173,7 @@ func (e *TemplateJoinExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnosti
continue
}
if val.Type() == cty.DynamicPseudoType {
return cty.UnknownVal(cty.String), diags
return cty.UnknownVal(cty.String).WithMarks(marks), diags
}
strVal, err := convert.Convert(val, cty.String)
if err != nil {
@ -176,13 +191,17 @@ func (e *TemplateJoinExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnosti
continue
}
if !val.IsKnown() {
return cty.UnknownVal(cty.String), diags
return cty.UnknownVal(cty.String).WithMarks(marks), diags
}
strVal, strValMarks := strVal.Unmark()
if len(strValMarks) > 0 {
allMarks = append(allMarks, strValMarks)
}
buf.WriteString(strVal.AsString())
}
return cty.StringVal(buf.String()), diags
return cty.StringVal(buf.String()).WithMarks(allMarks...), diags
}
func (e *TemplateJoinExpr) Range() hcl.Range {

View File

@ -6,7 +6,7 @@ import (
"strconv"
"unicode/utf8"
"github.com/apparentlymart/go-textseg/textseg"
"github.com/apparentlymart/go-textseg/v13/textseg"
"github.com/hashicorp/hcl/v2"
"github.com/zclconf/go-cty/cty"
)
@ -76,14 +76,37 @@ Token:
default:
bad := p.Read()
if !p.recovery {
if bad.Type == TokenOQuote {
switch bad.Type {
case TokenOQuote:
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid argument name",
Detail: "Argument names must not be quoted.",
Subject: &bad.Range,
})
} else {
case TokenEOF:
switch end {
case TokenCBrace:
// If we're looking for a closing brace then we're parsing a block
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Unclosed configuration block",
Detail: "There is no closing brace for this block before the end of the file. This may be caused by incorrect brace nesting elsewhere in this file.",
Subject: &startRange,
})
default:
// The only other "end" should itself be TokenEOF (for
// the top-level body) and so we shouldn't get here,
// but we'll return a generic error message anyway to
// be resilient.
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Unclosed configuration body",
Detail: "Found end of file before the end of this configuration body.",
Subject: &startRange,
})
}
default:
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Argument or block definition required",
@ -144,8 +167,6 @@ func (p *parser) ParseBodyItem() (Node, hcl.Diagnostics) {
},
}
}
return nil, nil
}
// parseSingleAttrBody is a weird variant of ParseBody that deals with the
@ -388,12 +409,23 @@ Token:
// user intent for this one, we'll skip it if we're already in
// recovery mode.
if !p.recovery {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid single-argument block definition",
Detail: "A single-line block definition must end with a closing brace immediately after its single argument definition.",
Subject: p.Peek().Range.Ptr(),
})
switch p.Peek().Type {
case TokenEOF:
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Unclosed configuration block",
Detail: "There is no closing brace for this block before the end of the file. This may be caused by incorrect brace nesting elsewhere in this file.",
Subject: oBrace.Range.Ptr(),
Context: hcl.RangeBetween(ident.Range, oBrace.Range).Ptr(),
})
default:
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid single-argument block definition",
Detail: "A single-line block definition must end with a closing brace immediately after its single argument definition.",
Subject: p.Peek().Range.Ptr(),
})
}
}
p.recover(TokenCBrace)
}
@ -670,6 +702,7 @@ Traversal:
trav := make(hcl.Traversal, 0, 1)
var firstRange, lastRange hcl.Range
firstRange = p.NextRange()
lastRange = marker.Range
for p.Peek().Type == TokenDot {
dot := p.Read()
@ -760,7 +793,7 @@ Traversal:
Each: travExpr,
Item: itemExpr,
SrcRange: hcl.RangeBetween(dot.Range, lastRange),
SrcRange: hcl.RangeBetween(from.Range(), lastRange),
MarkerRange: hcl.RangeBetween(dot.Range, marker.Range),
}
@ -819,7 +852,7 @@ Traversal:
Each: travExpr,
Item: itemExpr,
SrcRange: hcl.RangeBetween(open.Range, travExpr.Range()),
SrcRange: hcl.RangeBetween(from.Range(), travExpr.Range()),
MarkerRange: hcl.RangeBetween(open.Range, close.Range),
}
@ -867,8 +900,9 @@ Traversal:
Collection: ret,
Key: keyExpr,
SrcRange: rng,
OpenRange: open.Range,
SrcRange: hcl.RangeBetween(from.Range(), rng),
OpenRange: open.Range,
BracketRange: rng,
}
}
}
@ -899,7 +933,7 @@ func makeRelativeTraversal(expr Expression, next hcl.Traverser, rng hcl.Range) E
return &RelativeTraversalExpr{
Source: expr,
Traversal: hcl.Traversal{next},
SrcRange: rng,
SrcRange: hcl.RangeBetween(expr.Range(), rng),
}
}
}
@ -909,7 +943,7 @@ func (p *parser) parseExpressionTerm() (Expression, hcl.Diagnostics) {
switch start.Type {
case TokenOParen:
p.Read() // eat open paren
oParen := p.Read() // eat open paren
p.PushIncludeNewlines(false)
@ -935,9 +969,19 @@ func (p *parser) parseExpressionTerm() (Expression, hcl.Diagnostics) {
p.setRecovery()
}
p.Read() // eat closing paren
cParen := p.Read() // eat closing paren
p.PopIncludeNewlines()
// Our parser's already taken care of the precedence effect of the
// parentheses by considering them to be a kind of "term", but we
// still need to include the parentheses in our AST so we can give
// an accurate representation of the source range that includes the
// open and closing parentheses.
expr = &ParenthesesExpr{
Expression: expr,
SrcRange: hcl.RangeBetween(oParen.Range, cParen.Range),
}
return expr, diags
case TokenNumberLit:
@ -1047,12 +1091,22 @@ func (p *parser) parseExpressionTerm() (Expression, hcl.Diagnostics) {
default:
var diags hcl.Diagnostics
if !p.recovery {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid expression",
Detail: "Expected the start of an expression, but found an invalid expression token.",
Subject: &start.Range,
})
switch start.Type {
case TokenEOF:
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Missing expression",
Detail: "Expected the start of an expression, but found the end of the file.",
Subject: &start.Range,
})
default:
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid expression",
Detail: "Expected the start of an expression, but found an invalid expression token.",
Subject: &start.Range,
})
}
}
p.setRecovery()
@ -1151,13 +1205,23 @@ Token:
}
if sep.Type != TokenComma {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Missing argument separator",
Detail: "A comma is required to separate each function argument from the next.",
Subject: &sep.Range,
Context: hcl.RangeBetween(name.Range, sep.Range).Ptr(),
})
switch sep.Type {
case TokenEOF:
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Unterminated function call",
Detail: "There is no closing parenthesis for this function call before the end of the file. This may be caused by incorrect parethesis nesting elsewhere in this file.",
Subject: hcl.RangeBetween(name.Range, openTok.Range).Ptr(),
})
default:
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Missing argument separator",
Detail: "A comma is required to separate each function argument from the next.",
Subject: &sep.Range,
Context: hcl.RangeBetween(name.Range, sep.Range).Ptr(),
})
}
closeTok = p.recover(TokenCParen)
break Token
}
@ -1230,13 +1294,23 @@ func (p *parser) parseTupleCons() (Expression, hcl.Diagnostics) {
if next.Type != TokenComma {
if !p.recovery {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Missing item separator",
Detail: "Expected a comma to mark the beginning of the next item.",
Subject: &next.Range,
Context: hcl.RangeBetween(open.Range, next.Range).Ptr(),
})
switch next.Type {
case TokenEOF:
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Unterminated tuple constructor expression",
Detail: "There is no corresponding closing bracket before the end of the file. This may be caused by incorrect bracket nesting elsewhere in this file.",
Subject: open.Range.Ptr(),
})
default:
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Missing item separator",
Detail: "Expected a comma to mark the beginning of the next item.",
Subject: &next.Range,
Context: hcl.RangeBetween(open.Range, next.Range).Ptr(),
})
}
}
close = p.recover(TokenCBrack)
break
@ -1347,6 +1421,13 @@ func (p *parser) parseObjectCons() (Expression, hcl.Diagnostics) {
Subject: &next.Range,
Context: hcl.RangeBetween(open.Range, next.Range).Ptr(),
})
case TokenEOF:
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Unterminated object constructor expression",
Detail: "There is no corresponding closing brace before the end of the file. This may be caused by incorrect brace nesting elsewhere in this file.",
Subject: open.Range.Ptr(),
})
default:
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
@ -1387,13 +1468,23 @@ func (p *parser) parseObjectCons() (Expression, hcl.Diagnostics) {
if next.Type != TokenComma && next.Type != TokenNewline {
if !p.recovery {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Missing attribute separator",
Detail: "Expected a newline or comma to mark the beginning of the next attribute.",
Subject: &next.Range,
Context: hcl.RangeBetween(open.Range, next.Range).Ptr(),
})
switch next.Type {
case TokenEOF:
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Unterminated object constructor expression",
Detail: "There is no corresponding closing brace before the end of the file. This may be caused by incorrect brace nesting elsewhere in this file.",
Subject: open.Range.Ptr(),
})
default:
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Missing attribute separator",
Detail: "Expected a newline or comma to mark the beginning of the next attribute.",
Subject: &next.Range,
Context: hcl.RangeBetween(open.Range, next.Range).Ptr(),
})
}
}
close = p.recover(TokenCBrace)
break
@ -1649,7 +1740,7 @@ func (p *parser) parseQuotedStringLiteral() (string, hcl.Range, hcl.Diagnostics)
var diags hcl.Diagnostics
ret := &bytes.Buffer{}
var cQuote Token
var endRange hcl.Range
Token:
for {
@ -1657,7 +1748,7 @@ Token:
switch tok.Type {
case TokenCQuote:
cQuote = tok
endRange = tok.Range
break Token
case TokenQuotedLit:
@ -1700,6 +1791,7 @@ Token:
Subject: &tok.Range,
Context: hcl.RangeBetween(oQuote.Range, tok.Range).Ptr(),
})
endRange = tok.Range
break Token
default:
@ -1712,13 +1804,14 @@ Token:
Context: hcl.RangeBetween(oQuote.Range, tok.Range).Ptr(),
})
p.recover(TokenCQuote)
endRange = tok.Range
break Token
}
}
return ret.String(), hcl.RangeBetween(oQuote.Range, cQuote.Range), diags
return ret.String(), hcl.RangeBetween(oQuote.Range, endRange), diags
}
// ParseStringLiteralToken processes the given token, which must be either a

View File

@ -5,7 +5,7 @@ import (
"strings"
"unicode"
"github.com/apparentlymart/go-textseg/textseg"
"github.com/apparentlymart/go-textseg/v13/textseg"
"github.com/hashicorp/hcl/v2"
"github.com/zclconf/go-cty/cty"
)
@ -413,13 +413,44 @@ Token:
close := p.Peek()
if close.Type != TokenTemplateSeqEnd {
if !p.recovery {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Extra characters after interpolation expression",
Detail: "Expected a closing brace to end the interpolation expression, but found extra characters.",
Subject: &close.Range,
Context: hcl.RangeBetween(startRange, close.Range).Ptr(),
})
switch close.Type {
case TokenEOF:
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Unclosed template interpolation sequence",
Detail: "There is no closing brace for this interpolation sequence before the end of the file. This might be caused by incorrect nesting inside the given expression.",
Subject: &startRange,
})
case TokenColon:
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Extra characters after interpolation expression",
Detail: "Template interpolation doesn't expect a colon at this location. Did you intend this to be a literal sequence to be processed as part of another language? If so, you can escape it by starting with \"$${\" instead of just \"${\".",
Subject: &close.Range,
Context: hcl.RangeBetween(startRange, close.Range).Ptr(),
})
default:
if (close.Type == TokenCQuote || close.Type == TokenOQuote) && end == TokenCQuote {
// We'll get here if we're processing a _quoted_
// template and we find an errant quote inside an
// interpolation sequence, which suggests that
// the interpolation sequence is missing its terminator.
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Unclosed template interpolation sequence",
Detail: "There is no closing brace for this interpolation sequence before the end of the quoted template. This might be caused by incorrect nesting inside the given expression.",
Subject: &startRange,
})
} else {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Extra characters after interpolation expression",
Detail: "Expected a closing brace to end the interpolation expression, but found extra characters.\n\nThis can happen when you include interpolation syntax for another language, such as shell scripting, but forget to escape the interpolation start token. If this is an embedded sequence for another language, escape it by starting with \"$${\" instead of just \"${\".",
Subject: &close.Range,
Context: hcl.RangeBetween(startRange, close.Range).Ptr(),
})
}
}
}
p.recover(TokenTemplateSeqEnd)
} else {

View File

@ -273,7 +273,7 @@ tuple = "[" (
object = "{" (
(objectelem ("," objectelem)* ","?)?
) "}";
objectelem = (Identifier | Expression) "=" Expression;
objectelem = (Identifier | Expression) ("=" | ":") Expression;
```
Only tuple and object values can be directly constructed via native syntax.
@ -293,18 +293,20 @@ Between the open and closing delimiters of these sequences, newline sequences
are ignored as whitespace.
There is a syntax ambiguity between _for expressions_ and collection values
whose first element is a reference to a variable named `for`. The
_for expression_ interpretation has priority, so to produce a tuple whose
first element is the value of a variable named `for`, or an object with a
key named `for`, use parentheses to disambiguate:
whose first element starts with an identifier named `for`. The _for expression_
interpretation has priority, so to write a key literally named `for`
or an expression derived from a variable named `for` you must use parentheses
or quotes to disambiguate:
- `[for, foo, baz]` is a syntax error.
- `[(for), foo, baz]` is a tuple whose first element is the value of variable
`for`.
- `{for: 1, baz: 2}` is a syntax error.
- `{(for): 1, baz: 2}` is an object with an attribute literally named `for`.
- `{baz: 2, for: 1}` is equivalent to the previous example, and resolves the
- `{for = 1, baz = 2}` is a syntax error.
- `{"for" = 1, baz = 2}` is an object with an attribute literally named `for`.
- `{baz = 2, for = 1}` is equivalent to the previous example, and resolves the
ambiguity by reordering.
- `{(for) = 1, baz = 2}` is an object with a key with the same value as the
variable `for`.
### Template Expressions
@ -489,7 +491,7 @@ that were produced against each distinct key.
- `[for v in ["a", "b"]: v]` returns `["a", "b"]`.
- `[for i, v in ["a", "b"]: i]` returns `[0, 1]`.
- `{for i, v in ["a", "b"]: v => i}` returns `{a = 0, b = 1}`.
- `{for i, v in ["a", "a", "b"]: k => v}` produces an error, because attribute
- `{for i, v in ["a", "a", "b"]: v => i}` produces an error, because attribute
`a` is defined twice.
- `{for i, v in ["a", "a", "b"]: v => i...}` returns `{a = [0, 1], b = [2]}`.
@ -888,7 +890,7 @@ as templates.
- `hello ${true}` produces the string `"hello true"`
- `${""}${true}` produces the string `"true"` because there are two
interpolation sequences, even though one produces an empty result.
- `%{ for v in [true] }${v}%{ endif }` produces the string `true` because
- `%{ for v in [true] }${v}%{ endfor }` produces the string `true` because
the presence of the `for` directive circumvents the unwrapping even though
the final result is a single value.

View File

@ -13,17 +13,12 @@ func (b *Block) AsHCLBlock() *hcl.Block {
return nil
}
lastHeaderRange := b.TypeRange
if len(b.LabelRanges) > 0 {
lastHeaderRange = b.LabelRanges[len(b.LabelRanges)-1]
}
return &hcl.Block{
Type: b.Type,
Labels: b.Labels,
Body: b.Body,
DefRange: hcl.RangeBetween(b.TypeRange, lastHeaderRange),
DefRange: b.DefRange(),
TypeRange: b.TypeRange,
LabelRanges: b.LabelRanges,
}
@ -40,7 +35,7 @@ type Body struct {
hiddenBlocks map[string]struct{}
SrcRange hcl.Range
EndRange hcl.Range // Final token of the body, for reporting missing items
EndRange hcl.Range // Final token of the body (zero-length range)
}
// Assert that *Body implements hcl.Body
@ -390,5 +385,9 @@ func (b *Block) Range() hcl.Range {
}
func (b *Block) DefRange() hcl.Range {
return hcl.RangeBetween(b.TypeRange, b.OpenBraceRange)
lastHeaderRange := b.TypeRange
if len(b.LabelRanges) > 0 {
lastHeaderRange = b.LabelRanges[len(b.LabelRanges)-1]
}
return hcl.RangeBetween(b.TypeRange, lastHeaderRange)
}

View File

@ -4,7 +4,7 @@ import (
"bytes"
"fmt"
"github.com/apparentlymart/go-textseg/textseg"
"github.com/apparentlymart/go-textseg/v13/textseg"
"github.com/hashicorp/hcl/v2"
)
@ -191,8 +191,10 @@ func checkInvalidTokens(tokens Tokens) hcl.Diagnostics {
toldBadUTF8 := 0
for _, tok := range tokens {
// copy token so it's safe to point to it
tok := tok
tokRange := func() *hcl.Range {
r := tok.Range
return &r
}
switch tok.Type {
case TokenBitwiseAnd, TokenBitwiseOr, TokenBitwiseXor, TokenBitwiseNot:
@ -202,7 +204,7 @@ func checkInvalidTokens(tokens Tokens) hcl.Diagnostics {
case TokenBitwiseAnd:
suggestion = " Did you mean boolean AND (\"&&\")?"
case TokenBitwiseOr:
suggestion = " Did you mean boolean OR (\"&&\")?"
suggestion = " Did you mean boolean OR (\"||\")?"
case TokenBitwiseNot:
suggestion = " Did you mean boolean NOT (\"!\")?"
}
@ -211,7 +213,7 @@ func checkInvalidTokens(tokens Tokens) hcl.Diagnostics {
Severity: hcl.DiagError,
Summary: "Unsupported operator",
Detail: fmt.Sprintf("Bitwise operators are not supported.%s", suggestion),
Subject: &tok.Range,
Subject: tokRange(),
})
toldBitwise++
}
@ -221,7 +223,7 @@ func checkInvalidTokens(tokens Tokens) hcl.Diagnostics {
Severity: hcl.DiagError,
Summary: "Unsupported operator",
Detail: "\"**\" is not a supported operator. Exponentiation is not supported as an operator.",
Subject: &tok.Range,
Subject: tokRange(),
})
toldExponent++
@ -234,7 +236,7 @@ func checkInvalidTokens(tokens Tokens) hcl.Diagnostics {
Severity: hcl.DiagError,
Summary: "Invalid character",
Detail: "The \"`\" character is not valid. To create a multi-line string, use the \"heredoc\" syntax, like \"<<EOT\".",
Subject: &tok.Range,
Subject: tokRange(),
})
}
if toldBacktick <= 2 {
@ -246,7 +248,7 @@ func checkInvalidTokens(tokens Tokens) hcl.Diagnostics {
Severity: hcl.DiagError,
Summary: "Invalid character",
Detail: "Single quotes are not valid. Use double quotes (\") to enclose strings.",
Subject: &tok.Range,
Subject: tokRange(),
}
diags = append(diags, newDiag)
}
@ -259,7 +261,7 @@ func checkInvalidTokens(tokens Tokens) hcl.Diagnostics {
Severity: hcl.DiagError,
Summary: "Invalid character",
Detail: "The \";\" character is not valid. Use newlines to separate arguments and blocks, and commas to separate items in collection values.",
Subject: &tok.Range,
Subject: tokRange(),
})
toldSemicolon++
@ -270,7 +272,7 @@ func checkInvalidTokens(tokens Tokens) hcl.Diagnostics {
Severity: hcl.DiagError,
Summary: "Invalid character",
Detail: "Tab characters may not be used. The recommended indentation style is two spaces per indent.",
Subject: &tok.Range,
Subject: tokRange(),
})
toldTabs++
@ -281,7 +283,7 @@ func checkInvalidTokens(tokens Tokens) hcl.Diagnostics {
Severity: hcl.DiagError,
Summary: "Invalid character encoding",
Detail: "All input files must be UTF-8 encoded. Ensure that UTF-8 encoding is selected in your editor.",
Subject: &tok.Range,
Subject: tokRange(),
})
toldBadUTF8++
@ -291,15 +293,26 @@ func checkInvalidTokens(tokens Tokens) hcl.Diagnostics {
Severity: hcl.DiagError,
Summary: "Invalid multi-line string",
Detail: "Quoted strings may not be split over multiple lines. To produce a multi-line string, either use the \\n escape to represent a newline character or use the \"heredoc\" multi-line template syntax.",
Subject: &tok.Range,
Subject: tokRange(),
})
case TokenInvalid:
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid character",
Detail: "This character is not used within the language.",
Subject: &tok.Range,
})
chars := string(tok.Bytes)
switch chars {
case "“", "”":
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid character",
Detail: "\"Curly quotes\" are not valid here. These can sometimes be inadvertently introduced when sharing code via documents or discussion forums. It might help to replace the character with a \"straight quote\".",
Subject: tokRange(),
})
default:
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid character",
Detail: "This character is not used within the language.",
Subject: tokRange(),
})
}
}
}
return diags
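
For readers unfamiliar with the `tok := tok` copy near the top of this function, a minimal illustration of the Go range-variable aliasing pitfall it guards against (unrelated toy names; the aliasing behavior described applies to Go versions before 1.22, which made loop variables per-iteration):

```go
package main

import "fmt"

func main() {
	nums := []int{1, 2, 3}

	var bad []*int
	for _, v := range nums {
		bad = append(bad, &v) // pre-1.22: every element aliases the same v
	}

	var good []*int
	for _, v := range nums {
		v := v // per-iteration copy, same trick as "tok := tok" above
		good = append(good, &v)
	}

	fmt.Println(*bad[0], *good[0]) // pre-1.22 prints "3 1"; Go 1.22+ prints "1 1"
}
```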

View File

@ -1,121 +0,0 @@
package hclwrite
import (
"bytes"
"io"
)
type File struct {
inTree
srcBytes []byte
body *node
}
// NewEmptyFile constructs a new file with no content, ready to be mutated
// by other calls that append to its body.
func NewEmptyFile() *File {
f := &File{
inTree: newInTree(),
}
body := newBody()
f.body = f.children.Append(body)
return f
}
// Body returns the root body of the file, which contains the top-level
// attributes and blocks.
func (f *File) Body() *Body {
return f.body.content.(*Body)
}
// WriteTo writes the tokens underlying the receiving file to the given writer.
//
// The tokens first have a simple formatting pass applied that adjusts only
// the spaces between them.
func (f *File) WriteTo(wr io.Writer) (int64, error) {
tokens := f.inTree.children.BuildTokens(nil)
format(tokens)
return tokens.WriteTo(wr)
}
// Bytes returns a buffer containing the source code resulting from the
// tokens underlying the receiving file. If any updates have been made via
// the AST API, these will be reflected in the result.
func (f *File) Bytes() []byte {
buf := &bytes.Buffer{}
f.WriteTo(buf)
return buf.Bytes()
}
type comments struct {
leafNode
parent *node
tokens Tokens
}
func newComments(tokens Tokens) *comments {
return &comments{
tokens: tokens,
}
}
func (c *comments) BuildTokens(to Tokens) Tokens {
return c.tokens.BuildTokens(to)
}
type identifier struct {
leafNode
parent *node
token *Token
}
func newIdentifier(token *Token) *identifier {
return &identifier{
token: token,
}
}
func (i *identifier) BuildTokens(to Tokens) Tokens {
return append(to, i.token)
}
func (i *identifier) hasName(name string) bool {
return name == string(i.token.Bytes)
}
type number struct {
leafNode
parent *node
token *Token
}
func newNumber(token *Token) *number {
return &number{
token: token,
}
}
func (n *number) BuildTokens(to Tokens) Tokens {
return append(to, n.token)
}
type quoted struct {
leafNode
parent *node
tokens Tokens
}
func newQuoted(tokens Tokens) *quoted {
return &quoted{
tokens: tokens,
}
}
func (q *quoted) BuildTokens(to Tokens) Tokens {
return q.tokens.BuildTokens(to)
}

View File

@ -1,48 +0,0 @@
package hclwrite
import (
"github.com/hashicorp/hcl/v2/hclsyntax"
)
type Attribute struct {
inTree
leadComments *node
name *node
expr *node
lineComments *node
}
func newAttribute() *Attribute {
return &Attribute{
inTree: newInTree(),
}
}
func (a *Attribute) init(name string, expr *Expression) {
expr.assertUnattached()
nameTok := newIdentToken(name)
nameObj := newIdentifier(nameTok)
a.leadComments = a.children.Append(newComments(nil))
a.name = a.children.Append(nameObj)
a.children.AppendUnstructuredTokens(Tokens{
{
Type: hclsyntax.TokenEqual,
Bytes: []byte{'='},
},
})
a.expr = a.children.Append(expr)
a.expr.list = a.children
a.lineComments = a.children.Append(newComments(nil))
a.children.AppendUnstructuredTokens(Tokens{
{
Type: hclsyntax.TokenNewline,
Bytes: []byte{'\n'},
},
})
}
func (a *Attribute) Expr() *Expression {
return a.expr.content.(*Expression)
}

View File

@ -1,118 +0,0 @@
package hclwrite
import (
"github.com/hashicorp/hcl/v2/hclsyntax"
"github.com/zclconf/go-cty/cty"
)
type Block struct {
inTree
leadComments *node
typeName *node
labels nodeSet
open *node
body *node
close *node
}
func newBlock() *Block {
return &Block{
inTree: newInTree(),
labels: newNodeSet(),
}
}
// NewBlock constructs a new, empty block with the given type name and labels.
func NewBlock(typeName string, labels []string) *Block {
block := newBlock()
block.init(typeName, labels)
return block
}
func (b *Block) init(typeName string, labels []string) {
nameTok := newIdentToken(typeName)
nameObj := newIdentifier(nameTok)
b.leadComments = b.children.Append(newComments(nil))
b.typeName = b.children.Append(nameObj)
for _, label := range labels {
labelToks := TokensForValue(cty.StringVal(label))
labelObj := newQuoted(labelToks)
labelNode := b.children.Append(labelObj)
b.labels.Add(labelNode)
}
b.open = b.children.AppendUnstructuredTokens(Tokens{
{
Type: hclsyntax.TokenOBrace,
Bytes: []byte{'{'},
},
{
Type: hclsyntax.TokenNewline,
Bytes: []byte{'\n'},
},
})
body := newBody() // initially totally empty; caller can append to it subsequently
b.body = b.children.Append(body)
b.close = b.children.AppendUnstructuredTokens(Tokens{
{
Type: hclsyntax.TokenCBrace,
Bytes: []byte{'}'},
},
{
Type: hclsyntax.TokenNewline,
Bytes: []byte{'\n'},
},
})
}
// Body returns the body that represents the content of the receiving block.
//
// Appending to or otherwise modifying this body will make changes to the
// tokens that are generated between the block's open and close braces.
func (b *Block) Body() *Body {
return b.body.content.(*Body)
}
// Type returns the type name of the block.
func (b *Block) Type() string {
typeNameObj := b.typeName.content.(*identifier)
return string(typeNameObj.token.Bytes)
}
// Labels returns the labels of the block.
func (b *Block) Labels() []string {
labelNames := make([]string, 0, len(b.labels))
list := b.labels.List()
for _, label := range list {
switch labelObj := label.content.(type) {
case *identifier:
if labelObj.token.Type == hclsyntax.TokenIdent {
labelString := string(labelObj.token.Bytes)
labelNames = append(labelNames, labelString)
}
case *quoted:
tokens := labelObj.tokens
if len(tokens) == 3 &&
tokens[0].Type == hclsyntax.TokenOQuote &&
tokens[1].Type == hclsyntax.TokenQuotedLit &&
tokens[2].Type == hclsyntax.TokenCQuote {
// Note that TokenQuotedLit may contain escape sequences.
labelString, diags := hclsyntax.ParseStringLiteralToken(tokens[1].asHCLSyntax())
// If parsing the string literal returns error diagnostics
// then we can just assume the label doesn't match, because it's invalid in some way.
if !diags.HasErrors() {
labelNames = append(labelNames, labelString)
}
}
default:
// If neither of the previous cases are true (should be impossible)
// then we can just ignore it, because it's invalid too.
}
}
return labelNames
}

View File

@ -1,219 +0,0 @@
package hclwrite
import (
"reflect"
"github.com/hashicorp/hcl/v2"
"github.com/hashicorp/hcl/v2/hclsyntax"
"github.com/zclconf/go-cty/cty"
)
type Body struct {
inTree
items nodeSet
}
func newBody() *Body {
return &Body{
inTree: newInTree(),
items: newNodeSet(),
}
}
func (b *Body) appendItem(c nodeContent) *node {
nn := b.children.Append(c)
b.items.Add(nn)
return nn
}
func (b *Body) appendItemNode(nn *node) *node {
nn.assertUnattached()
b.children.AppendNode(nn)
b.items.Add(nn)
return nn
}
// Clear removes all of the items from the body, making it empty.
func (b *Body) Clear() {
b.children.Clear()
}
func (b *Body) AppendUnstructuredTokens(ts Tokens) {
b.inTree.children.Append(ts)
}
// Attributes returns a new map of all of the attributes in the body, with
// the attribute names as the keys.
func (b *Body) Attributes() map[string]*Attribute {
ret := make(map[string]*Attribute)
for n := range b.items {
if attr, isAttr := n.content.(*Attribute); isAttr {
nameObj := attr.name.content.(*identifier)
name := string(nameObj.token.Bytes)
ret[name] = attr
}
}
return ret
}
// Blocks returns a new slice of all the blocks in the body.
func (b *Body) Blocks() []*Block {
ret := make([]*Block, 0, len(b.items))
for n := range b.items {
if block, isBlock := n.content.(*Block); isBlock {
ret = append(ret, block)
}
}
return ret
}
// GetAttribute returns the attribute from the body that has the given name,
// or returns nil if there is currently no matching attribute.
func (b *Body) GetAttribute(name string) *Attribute {
for n := range b.items {
if attr, isAttr := n.content.(*Attribute); isAttr {
nameObj := attr.name.content.(*identifier)
if nameObj.hasName(name) {
// We've found it!
return attr
}
}
}
return nil
}
// getAttributeNode is like GetAttribute but it returns the node containing
// the selected attribute (if one is found) rather than the attribute itself.
func (b *Body) getAttributeNode(name string) *node {
for n := range b.items {
if attr, isAttr := n.content.(*Attribute); isAttr {
nameObj := attr.name.content.(*identifier)
if nameObj.hasName(name) {
// We've found it!
return n
}
}
}
return nil
}
// FirstMatchingBlock returns a first matching block from the body that has the
// given name and labels or returns nil if there is currently no matching
// block.
func (b *Body) FirstMatchingBlock(typeName string, labels []string) *Block {
for _, block := range b.Blocks() {
if typeName == block.Type() {
labelNames := block.Labels()
if len(labels) == 0 && len(labelNames) == 0 {
return block
}
if reflect.DeepEqual(labels, labelNames) {
return block
}
}
}
return nil
}
// RemoveBlock removes the given block from the body, if it's in that body.
// If it isn't present, this is a no-op.
//
// Returns true if it removed something, or false otherwise.
func (b *Body) RemoveBlock(block *Block) bool {
for n := range b.items {
if n.content == block {
n.Detach()
b.items.Remove(n)
return true
}
}
return false
}
// SetAttributeValue either replaces the expression of an existing attribute
// of the given name or adds a new attribute definition to the end of the block.
//
// The value is given as a cty.Value, and must therefore be a literal. To set
// a variable reference or other traversal, use SetAttributeTraversal.
//
// The return value is the attribute that was either modified in-place or
// created.
func (b *Body) SetAttributeValue(name string, val cty.Value) *Attribute {
attr := b.GetAttribute(name)
expr := NewExpressionLiteral(val)
if attr != nil {
attr.expr = attr.expr.ReplaceWith(expr)
} else {
attr := newAttribute()
attr.init(name, expr)
b.appendItem(attr)
}
return attr
}
// SetAttributeTraversal either replaces the expression of an existing attribute
// of the given name or adds a new attribute definition to the end of the body.
//
// The new expression is given as a hcl.Traversal, which must be an absolute
// traversal. To set a literal value, use SetAttributeValue.
//
// The return value is the attribute that was either modified in-place or
// created.
func (b *Body) SetAttributeTraversal(name string, traversal hcl.Traversal) *Attribute {
attr := b.GetAttribute(name)
expr := NewExpressionAbsTraversal(traversal)
if attr != nil {
attr.expr = attr.expr.ReplaceWith(expr)
} else {
attr := newAttribute()
attr.init(name, expr)
b.appendItem(attr)
}
return attr
}
// RemoveAttribute removes the attribute with the given name from the body.
//
// The return value is the attribute that was removed, or nil if there was
// no such attribute (in which case the call was a no-op).
func (b *Body) RemoveAttribute(name string) *Attribute {
node := b.getAttributeNode(name)
if node == nil {
return nil
}
node.Detach()
b.items.Remove(node)
return node.content.(*Attribute)
}
// AppendBlock appends an existing block (which must not be already attached
// to a body) to the end of the receiving body.
func (b *Body) AppendBlock(block *Block) *Block {
b.appendItem(block)
return block
}
// AppendNewBlock appends a new nested block to the end of the receiving body
// with the given type name and labels.
func (b *Body) AppendNewBlock(typeName string, labels []string) *Block {
block := newBlock()
block.init(typeName, labels)
b.appendItem(block)
return block
}
// AppendNewline appends a newline token to the end of the receiving body,
// which generally serves as a separator between different sets of body
// contents.
func (b *Body) AppendNewline() {
b.AppendUnstructuredTokens(Tokens{
{
Type: hclsyntax.TokenNewline,
Bytes: []byte{'\n'},
},
})
}

View File

@ -1,201 +0,0 @@
package hclwrite
import (
"fmt"
"github.com/hashicorp/hcl/v2"
"github.com/hashicorp/hcl/v2/hclsyntax"
"github.com/zclconf/go-cty/cty"
)
type Expression struct {
inTree
absTraversals nodeSet
}
func newExpression() *Expression {
return &Expression{
inTree: newInTree(),
absTraversals: newNodeSet(),
}
}
// NewExpressionLiteral constructs an expression that represents the given
// literal value.
//
// Since an unknown value cannot be represented in source code, this function
// will panic if the given value is unknown or contains a nested unknown value.
// Use val.IsWhollyKnown before calling to be sure.
//
// HCL native syntax does not directly represent lists, maps, and sets, and
// instead relies on the automatic conversions to those collection types from
// either list or tuple constructor syntax. Therefore converting collection
// values to source code and re-reading them will lose type information, and
// the reader must provide a suitable type at decode time to recover the
// original value.
func NewExpressionLiteral(val cty.Value) *Expression {
toks := TokensForValue(val)
expr := newExpression()
expr.children.AppendUnstructuredTokens(toks)
return expr
}
// NewExpressionAbsTraversal constructs an expression that represents the
// given traversal, which must be absolute or this function will panic.
func NewExpressionAbsTraversal(traversal hcl.Traversal) *Expression {
if traversal.IsRelative() {
panic("can't construct expression from relative traversal")
}
physT := newTraversal()
rootName := traversal.RootName()
steps := traversal[1:]
{
tn := newTraverseName()
tn.name = tn.children.Append(newIdentifier(&Token{
Type: hclsyntax.TokenIdent,
Bytes: []byte(rootName),
}))
physT.steps.Add(physT.children.Append(tn))
}
for _, step := range steps {
switch ts := step.(type) {
case hcl.TraverseAttr:
tn := newTraverseName()
tn.children.AppendUnstructuredTokens(Tokens{
{
Type: hclsyntax.TokenDot,
Bytes: []byte{'.'},
},
})
tn.name = tn.children.Append(newIdentifier(&Token{
Type: hclsyntax.TokenIdent,
Bytes: []byte(ts.Name),
}))
physT.steps.Add(physT.children.Append(tn))
case hcl.TraverseIndex:
ti := newTraverseIndex()
ti.children.AppendUnstructuredTokens(Tokens{
{
Type: hclsyntax.TokenOBrack,
Bytes: []byte{'['},
},
})
indexExpr := NewExpressionLiteral(ts.Key)
ti.key = ti.children.Append(indexExpr)
ti.children.AppendUnstructuredTokens(Tokens{
{
Type: hclsyntax.TokenCBrack,
Bytes: []byte{']'},
},
})
physT.steps.Add(physT.children.Append(ti))
}
}
expr := newExpression()
expr.absTraversals.Add(expr.children.Append(physT))
return expr
}
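// As an illustrative sketch, the expression tokens for a reference such as
// foo.bar[0] can be built from an absolute traversal; the names here are
// hypothetical and cty.NumberIntVal is the standard go-cty constructor.
func exampleAbsTraversalExpr() *Expression {
	return NewExpressionAbsTraversal(hcl.Traversal{
		hcl.TraverseRoot{Name: "foo"},
		hcl.TraverseAttr{Name: "bar"},
		hcl.TraverseIndex{Key: cty.NumberIntVal(0)},
	})
}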
// Variables returns the absolute traversals that exist within the receiving
// expression.
func (e *Expression) Variables() []*Traversal {
nodes := e.absTraversals.List()
ret := make([]*Traversal, len(nodes))
for i, node := range nodes {
ret[i] = node.content.(*Traversal)
}
return ret
}
// RenameVariablePrefix examines each of the absolute traversals in the
// receiving expression to see if they have the given sequence of names as
// a prefix. If so, they are updated in place to have the given
// replacement names instead of that prefix.
//
// This can be used to implement symbol renaming. The calling application can
// visit all relevant expressions in its input and apply the same renaming
// to implement a global symbol rename.
//
// The search and replacement name sequences must be the same length, or this
// method will panic. Only attribute access operations can be matched and
// replaced. Index steps never match the prefix.
func (e *Expression) RenameVariablePrefix(search, replacement []string) {
if len(search) != len(replacement) {
panic(fmt.Sprintf("search and replacement length mismatch (%d and %d)", len(search), len(replacement)))
}
Traversals:
for node := range e.absTraversals {
traversal := node.content.(*Traversal)
if len(traversal.steps) < len(search) {
// If it's shorter then it can't have our prefix
continue
}
stepNodes := traversal.steps.List()
for i, name := range search {
step, isName := stepNodes[i].content.(*TraverseName)
if !isName {
continue Traversals // only name nodes can match
}
foundNameBytes := step.name.content.(*identifier).token.Bytes
if len(foundNameBytes) != len(name) {
continue Traversals
}
if string(foundNameBytes) != name {
continue Traversals
}
}
// If we get here then the prefix matched, so now we'll swap in
// the replacement strings.
for i, name := range replacement {
step := stepNodes[i].content.(*TraverseName)
token := step.name.content.(*identifier).token
token.Bytes = []byte(name)
}
}
}
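// A minimal sketch of the rename rules above, with hypothetical symbol
// names: every traversal beginning data.example is rewritten in place to
// begin data.renamed, while shorter traversals and index steps are left
// untouched.
func exampleRenamePrefix(e *Expression) {
	e.RenameVariablePrefix(
		[]string{"data", "example"},
		[]string{"data", "renamed"},
	)
}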
// Traversal represents a sequence of variable, attribute, and/or index
// operations.
type Traversal struct {
inTree
steps nodeSet
}
func newTraversal() *Traversal {
return &Traversal{
inTree: newInTree(),
steps: newNodeSet(),
}
}
type TraverseName struct {
inTree
name *node
}
func newTraverseName() *TraverseName {
return &TraverseName{
inTree: newInTree(),
}
}
type TraverseIndex struct {
inTree
key *node
}
func newTraverseIndex() *TraverseIndex {
return &TraverseIndex{
inTree: newInTree(),
}
}

View File

@ -1,11 +0,0 @@
// Package hclwrite deals with the problem of generating HCL configuration
// and of making specific surgical changes to existing HCL configurations.
//
// It operates at a different level of abstraction than the main HCL parser
// and AST, since details such as the placement of comments and newlines
// are preserved when unchanged.
//
// The hclwrite API follows a similar principle to XML/HTML DOM, allowing nodes
// to be read out, created and inserted, etc. Nodes represent syntax constructs
// rather than semantic concepts.
package hclwrite

View File

@ -1,463 +0,0 @@
package hclwrite
import (
"github.com/hashicorp/hcl/v2/hclsyntax"
)
var inKeyword = hclsyntax.Keyword([]byte{'i', 'n'})
// placeholder token used when we don't have a token but we don't want
// to pass a real "nil" and complicate things with nil pointer checks
var nilToken = &Token{
Type: hclsyntax.TokenNil,
Bytes: []byte{},
SpacesBefore: 0,
}
// format rewrites tokens within the given sequence, in-place, to adjust the
// whitespace around their content to achieve canonical formatting.
func format(tokens Tokens) {
// Formatting is a multi-pass process. More details on the passes below,
// but this is the overview:
// - adjust the leading space on each line to create appropriate
// indentation
// - adjust spaces between tokens in a single cell using a set of rules
// - adjust the leading space in the "assign" and "comment" cells on each
// line to vertically align with neighboring lines.
// All of these steps operate in-place on the given tokens, so a caller
// may collect a flat sequence of all of the tokens underlying an AST
// and pass it here and we will then indirectly modify the AST itself.
// Formatting must change only whitespace. Specifically, that means
// changing the SpacesBefore attribute on a token while leaving the
// other token attributes unchanged.
lines := linesForFormat(tokens)
formatIndent(lines)
formatSpaces(lines)
formatCells(lines)
}
func formatIndent(lines []formatLine) {
// Our methodology for indents is to take the input one line at a time
// and count the bracketing delimiters on each line. If a line has a net
// increase in open brackets, we increase the indent level by one and
// remember how many new openers we had. If the line has a net _decrease_,
// we'll compare it to the most recent number of openers and decrease the
// indent level by one each time we pass an indent level remembered
// earlier.
// The "indent stack" used here allows us to recognize degenerate
// input where brackets are not symmetrical within lines and avoid
// pushing things too far left or right, creating confusion.
// We'll start our indent stack at a reasonable capacity to minimize the
// chance of us needing to grow it; 10 here means 10 levels of indent,
// which should be more than enough for reasonable HCL uses.
indents := make([]int, 0, 10)
for i := range lines {
line := &lines[i]
if len(line.lead) == 0 {
continue
}
if line.lead[0].Type == hclsyntax.TokenNewline {
// Never place spaces before a newline
line.lead[0].SpacesBefore = 0
continue
}
netBrackets := 0
for _, token := range line.lead {
netBrackets += tokenBracketChange(token)
if token.Type == hclsyntax.TokenOHeredoc {
break
}
}
for _, token := range line.assign {
netBrackets += tokenBracketChange(token)
}
switch {
case netBrackets > 0:
line.lead[0].SpacesBefore = 2 * len(indents)
indents = append(indents, netBrackets)
case netBrackets < 0:
closed := -netBrackets
for closed > 0 && len(indents) > 0 {
switch {
case closed > indents[len(indents)-1]:
closed -= indents[len(indents)-1]
indents = indents[:len(indents)-1]
case closed < indents[len(indents)-1]:
indents[len(indents)-1] -= closed
closed = 0
default:
indents = indents[:len(indents)-1]
closed = 0
}
}
line.lead[0].SpacesBefore = 2 * len(indents)
default:
line.lead[0].SpacesBefore = 2 * len(indents)
}
}
}
func formatSpaces(lines []formatLine) {
for _, line := range lines {
for i, token := range line.lead {
var before, after *Token
if i > 0 {
before = line.lead[i-1]
} else {
before = nilToken
}
if i < (len(line.lead) - 1) {
after = line.lead[i+1]
} else {
after = nilToken
}
if spaceAfterToken(token, before, after) {
after.SpacesBefore = 1
} else {
after.SpacesBefore = 0
}
}
for i, token := range line.assign {
if i == 0 {
// first token in "assign" always has one space before to
// separate the equals sign from what it's assigning.
token.SpacesBefore = 1
}
var before, after *Token
if i > 0 {
before = line.assign[i-1]
} else {
before = nilToken
}
if i < (len(line.assign) - 1) {
after = line.assign[i+1]
} else {
after = nilToken
}
if spaceAfterToken(token, before, after) {
after.SpacesBefore = 1
} else {
after.SpacesBefore = 0
}
}
}
}
func formatCells(lines []formatLine) {
chainStart := -1
maxColumns := 0
// We'll deal with the "assign" cell first, since moving that will
// also impact the "comment" cell.
closeAssignChain := func(i int) {
for _, chainLine := range lines[chainStart:i] {
columns := chainLine.lead.Columns()
spaces := (maxColumns - columns) + 1
chainLine.assign[0].SpacesBefore = spaces
}
chainStart = -1
maxColumns = 0
}
for i, line := range lines {
if line.assign == nil {
if chainStart != -1 {
closeAssignChain(i)
}
} else {
if chainStart == -1 {
chainStart = i
}
columns := line.lead.Columns()
if columns > maxColumns {
maxColumns = columns
}
}
}
if chainStart != -1 {
closeAssignChain(len(lines))
}
// Now we'll deal with the comments
closeCommentChain := func(i int) {
for _, chainLine := range lines[chainStart:i] {
columns := chainLine.lead.Columns() + chainLine.assign.Columns()
spaces := (maxColumns - columns) + 1
chainLine.comment[0].SpacesBefore = spaces
}
chainStart = -1
maxColumns = 0
}
for i, line := range lines {
if line.comment == nil {
if chainStart != -1 {
closeCommentChain(i)
}
} else {
if chainStart == -1 {
chainStart = i
}
columns := line.lead.Columns() + line.assign.Columns()
if columns > maxColumns {
maxColumns = columns
}
}
}
if chainStart != -1 {
closeCommentChain(len(lines))
}
}
// spaceAfterToken decides whether a particular subject token should have a
// space after it when surrounded by the given before and after tokens.
// "before" can be TokenNil, if the subject token is at the start of a sequence.
func spaceAfterToken(subject, before, after *Token) bool {
switch {
case after.Type == hclsyntax.TokenNewline || after.Type == hclsyntax.TokenNil:
// Never add spaces before a newline
return false
case subject.Type == hclsyntax.TokenIdent && after.Type == hclsyntax.TokenOParen:
// Don't split a function name from open paren in a call
return false
case subject.Type == hclsyntax.TokenDot || after.Type == hclsyntax.TokenDot:
// Don't use spaces around attribute access dots
return false
case after.Type == hclsyntax.TokenComma || after.Type == hclsyntax.TokenEllipsis:
// No space right before a comma or ... in an argument list
return false
case subject.Type == hclsyntax.TokenComma:
// Always a space after a comma
return true
case subject.Type == hclsyntax.TokenQuotedLit || subject.Type == hclsyntax.TokenStringLit || subject.Type == hclsyntax.TokenOQuote || subject.Type == hclsyntax.TokenOHeredoc || after.Type == hclsyntax.TokenQuotedLit || after.Type == hclsyntax.TokenStringLit || after.Type == hclsyntax.TokenCQuote || after.Type == hclsyntax.TokenCHeredoc:
// No extra spaces within templates
return false
case inKeyword.TokenMatches(subject.asHCLSyntax()) && before.Type == hclsyntax.TokenIdent:
// This is a special case for inside for expressions where a user
// might want to use a literal tuple constructor:
// [for x in [foo]: x]
// ... in that case, we would normally produce in[foo] thinking that
// in is a reference, but we'll recognize it as a keyword here instead
// to make the result less confusing.
return true
case after.Type == hclsyntax.TokenOBrack && (subject.Type == hclsyntax.TokenIdent || subject.Type == hclsyntax.TokenNumberLit || tokenBracketChange(subject) < 0):
return false
case subject.Type == hclsyntax.TokenMinus:
// Since a minus can either be subtraction or negation, and the latter
// should _not_ have a space after it, we need to use some heuristics
// to decide which case this is.
// We guess that we have a negation if the token before doesn't look
// like it could be the end of an expression.
switch before.Type {
case hclsyntax.TokenNil:
// Minus at the start of input must be a negation
return false
case hclsyntax.TokenOParen, hclsyntax.TokenOBrace, hclsyntax.TokenOBrack, hclsyntax.TokenEqual, hclsyntax.TokenColon, hclsyntax.TokenComma, hclsyntax.TokenQuestion:
// Minus immediately after an opening bracket or separator must be a negation.
return false
case hclsyntax.TokenPlus, hclsyntax.TokenStar, hclsyntax.TokenSlash, hclsyntax.TokenPercent, hclsyntax.TokenMinus:
// Minus immediately after another arithmetic operator must be negation.
return false
case hclsyntax.TokenEqualOp, hclsyntax.TokenNotEqual, hclsyntax.TokenGreaterThan, hclsyntax.TokenGreaterThanEq, hclsyntax.TokenLessThan, hclsyntax.TokenLessThanEq:
// Minus immediately after another comparison operator must be negation.
return false
case hclsyntax.TokenAnd, hclsyntax.TokenOr, hclsyntax.TokenBang:
// Minus immediately after logical operator doesn't make sense but probably intended as negation.
return false
default:
return true
}
case subject.Type == hclsyntax.TokenOBrace || after.Type == hclsyntax.TokenCBrace:
// Unlike other bracket types, braces have spaces on both sides of them,
// both in single-line nested blocks foo { bar = baz } and in object
// constructor expressions foo = { bar = baz }.
if subject.Type == hclsyntax.TokenOBrace && after.Type == hclsyntax.TokenCBrace {
// An open brace followed by a close brace is an exception, however.
// e.g. foo {} rather than foo { }
return false
}
return true
// In the unlikely event that an interpolation expression is just
// a single object constructor, we'll put a space between the ${ and
// the following { to make this more obvious, and then the same
// thing for the two braces at the end.
case (subject.Type == hclsyntax.TokenTemplateInterp || subject.Type == hclsyntax.TokenTemplateControl) && after.Type == hclsyntax.TokenOBrace:
return true
case subject.Type == hclsyntax.TokenCBrace && after.Type == hclsyntax.TokenTemplateSeqEnd:
return true
// Don't add spaces between interpolated items
case subject.Type == hclsyntax.TokenTemplateSeqEnd && (after.Type == hclsyntax.TokenTemplateInterp || after.Type == hclsyntax.TokenTemplateControl):
return false
case tokenBracketChange(subject) > 0:
// No spaces after open brackets
return false
case tokenBracketChange(after) < 0:
// No spaces before close brackets
return false
default:
// Most tokens are space-separated
return true
}
}
func linesForFormat(tokens Tokens) []formatLine {
if len(tokens) == 0 {
return make([]formatLine, 0)
}
// first we'll count our lines, so we can allocate the array for them in
// a single block. (We want to minimize memory pressure in this codepath,
// so it can be run somewhat-frequently by editor integrations.)
lineCount := 1 // if there are zero newlines then there is one line
for _, tok := range tokens {
if tokenIsNewline(tok) {
lineCount++
}
}
// To start, we'll just put everything in the "lead" cell on each line,
// and then do another pass over the lines afterwards to adjust.
lines := make([]formatLine, lineCount)
li := 0
lineStart := 0
for i, tok := range tokens {
if tok.Type == hclsyntax.TokenEOF {
// The EOF token doesn't belong to any line, and terminates the
// token sequence.
lines[li].lead = tokens[lineStart:i]
break
}
if tokenIsNewline(tok) {
lines[li].lead = tokens[lineStart : i+1]
lineStart = i + 1
li++
}
}
// If a set of tokens doesn't end in TokenEOF (e.g. because it's a
// fragment of tokens from the middle of a file) then we might fall
// out here with a line still pending.
if lineStart < len(tokens) {
lines[li].lead = tokens[lineStart:]
if lines[li].lead[len(lines[li].lead)-1].Type == hclsyntax.TokenEOF {
lines[li].lead = lines[li].lead[:len(lines[li].lead)-1]
}
}
// Now we'll pick off any trailing comments and attribute assignments
// to shuffle off into the "comment" and "assign" cells.
for i := range lines {
line := &lines[i]
if len(line.lead) == 0 {
// if the line is empty then there's nothing for us to do
// (this should happen only for the final line, because all other
// lines would have a newline token of some kind)
continue
}
if len(line.lead) > 1 && line.lead[len(line.lead)-1].Type == hclsyntax.TokenComment {
line.comment = line.lead[len(line.lead)-1:]
line.lead = line.lead[:len(line.lead)-1]
}
for i, tok := range line.lead {
if i > 0 && tok.Type == hclsyntax.TokenEqual {
// We only move the tokens into "assign" if the RHS seems to
// be a whole expression, which we determine by counting
// brackets. If there's a net positive number of brackets
// then that suggests we're introducing a multi-line expression.
netBrackets := 0
for _, token := range line.lead[i:] {
netBrackets += tokenBracketChange(token)
}
if netBrackets == 0 {
line.assign = line.lead[i:]
line.lead = line.lead[:i]
}
break
}
}
}
return lines
}
func tokenIsNewline(tok *Token) bool {
if tok.Type == hclsyntax.TokenNewline {
return true
} else if tok.Type == hclsyntax.TokenComment {
// Single-line comment tokens (# and //) consume their terminating newline,
// so we need to treat them as newline tokens as well.
if len(tok.Bytes) > 0 && tok.Bytes[len(tok.Bytes)-1] == '\n' {
return true
}
}
return false
}
func tokenBracketChange(tok *Token) int {
switch tok.Type {
case hclsyntax.TokenOBrace, hclsyntax.TokenOBrack, hclsyntax.TokenOParen, hclsyntax.TokenTemplateControl, hclsyntax.TokenTemplateInterp:
return 1
case hclsyntax.TokenCBrace, hclsyntax.TokenCBrack, hclsyntax.TokenCParen, hclsyntax.TokenTemplateSeqEnd:
return -1
default:
return 0
}
}
// formatLine represents a single line of source code for formatting purposes,
// splitting its tokens into up to three "cells":
//
// lead: always present, representing everything up to one of the others
// assign: if line contains an attribute assignment, represents the tokens
// starting at (and including) the equals symbol
// comment: if line contains any non-comment tokens and ends with a
// single-line comment token, represents the comment.
//
// When formatting, the leading spaces of the first token in each of these
// cells are adjusted to vertically align their occurrences on consecutive
// rows.
type formatLine struct {
lead Tokens
assign Tokens
comment Tokens
}
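// As a worked sketch of the three passes above, using the public Format
// entry point from this package on hypothetical input:
//
//	in := []byte("block {\nanswer = 42\nlonger_name =\"x\"\n}\n")
//	out := Format(in)
//
// The expected result indents the body by one level (two spaces) and
// aligns the "assign" cells of the two consecutive lines:
//
//	block {
//	  answer      = 42
//	  longer_name = "x"
//	}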

View File

@ -1,250 +0,0 @@
package hclwrite
import (
"fmt"
"unicode"
"unicode/utf8"
"github.com/hashicorp/hcl/v2"
"github.com/hashicorp/hcl/v2/hclsyntax"
"github.com/zclconf/go-cty/cty"
)
// TokensForValue returns a sequence of tokens that represents the given
// constant value.
//
// This function only supports types that are used by HCL. In particular, it
// does not support capsule types and will panic if given one.
//
// It is not possible to express an unknown value in source code, so this
// function will panic if the given value is unknown or contains any unknown
// values. A caller can call the value's IsWhollyKnown method to verify that
// no unknown values are present before calling TokensForValue.
func TokensForValue(val cty.Value) Tokens {
toks := appendTokensForValue(val, nil)
format(toks) // fiddle with the SpacesBefore field to get canonical spacing
return toks
}
// TokensForTraversal returns a sequence of tokens that represents the given
// traversal.
//
// If the traversal is absolute then the result is a self-contained, valid
// reference expression. If the traversal is relative then the returned tokens
// could be appended to some other expression tokens to traverse into the
// represented expression.
func TokensForTraversal(traversal hcl.Traversal) Tokens {
toks := appendTokensForTraversal(traversal, nil)
format(toks) // fiddle with the SpacesBefore field to get canonical spacing
return toks
}
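// An illustrative sketch with a hypothetical value: Tokens implements
// Bytes (see tokens.go), so generated tokens can be serialized directly.
func exampleTokensForValue() []byte {
	val := cty.ObjectVal(map[string]cty.Value{
		"enabled": cty.True,
		"count":   cty.NumberIntVal(3),
	})
	return TokensForValue(val).Bytes()
	// Expected to render as { count = 3, enabled = true }, since cty
	// iterates object attributes in lexical order.
}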
func appendTokensForValue(val cty.Value, toks Tokens) Tokens {
switch {
case !val.IsKnown():
panic("cannot produce tokens for unknown value")
case val.IsNull():
toks = append(toks, &Token{
Type: hclsyntax.TokenIdent,
Bytes: []byte(`null`),
})
case val.Type() == cty.Bool:
var src []byte
if val.True() {
src = []byte(`true`)
} else {
src = []byte(`false`)
}
toks = append(toks, &Token{
Type: hclsyntax.TokenIdent,
Bytes: src,
})
case val.Type() == cty.Number:
bf := val.AsBigFloat()
srcStr := bf.Text('f', -1)
toks = append(toks, &Token{
Type: hclsyntax.TokenNumberLit,
Bytes: []byte(srcStr),
})
case val.Type() == cty.String:
// TODO: If it's a multi-line string ending in a newline, format
// it as a HEREDOC instead.
src := escapeQuotedStringLit(val.AsString())
toks = append(toks, &Token{
Type: hclsyntax.TokenOQuote,
Bytes: []byte{'"'},
})
if len(src) > 0 {
toks = append(toks, &Token{
Type: hclsyntax.TokenQuotedLit,
Bytes: src,
})
}
toks = append(toks, &Token{
Type: hclsyntax.TokenCQuote,
Bytes: []byte{'"'},
})
case val.Type().IsListType() || val.Type().IsSetType() || val.Type().IsTupleType():
toks = append(toks, &Token{
Type: hclsyntax.TokenOBrack,
Bytes: []byte{'['},
})
i := 0
for it := val.ElementIterator(); it.Next(); {
if i > 0 {
toks = append(toks, &Token{
Type: hclsyntax.TokenComma,
Bytes: []byte{','},
})
}
_, eVal := it.Element()
toks = appendTokensForValue(eVal, toks)
i++
}
toks = append(toks, &Token{
Type: hclsyntax.TokenCBrack,
Bytes: []byte{']'},
})
case val.Type().IsMapType() || val.Type().IsObjectType():
toks = append(toks, &Token{
Type: hclsyntax.TokenOBrace,
Bytes: []byte{'{'},
})
i := 0
for it := val.ElementIterator(); it.Next(); {
if i > 0 {
toks = append(toks, &Token{
Type: hclsyntax.TokenComma,
Bytes: []byte{','},
})
}
eKey, eVal := it.Element()
if hclsyntax.ValidIdentifier(eKey.AsString()) {
toks = append(toks, &Token{
Type: hclsyntax.TokenIdent,
Bytes: []byte(eKey.AsString()),
})
} else {
toks = appendTokensForValue(eKey, toks)
}
toks = append(toks, &Token{
Type: hclsyntax.TokenEqual,
Bytes: []byte{'='},
})
toks = appendTokensForValue(eVal, toks)
i++
}
toks = append(toks, &Token{
Type: hclsyntax.TokenCBrace,
Bytes: []byte{'}'},
})
default:
panic(fmt.Sprintf("cannot produce tokens for %#v", val))
}
return toks
}
func appendTokensForTraversal(traversal hcl.Traversal, toks Tokens) Tokens {
	for _, step := range traversal {
		toks = appendTokensForTraversalStep(step, toks)
	}
	return toks
}
func appendTokensForTraversalStep(step hcl.Traverser, toks Tokens) Tokens {
	switch ts := step.(type) {
	case hcl.TraverseRoot:
		toks = append(toks, &Token{
			Type: hclsyntax.TokenIdent,
			Bytes: []byte(ts.Name),
		})
	case hcl.TraverseAttr:
		toks = append(
			toks,
			&Token{
				Type: hclsyntax.TokenDot,
				Bytes: []byte{'.'},
			},
			&Token{
				Type: hclsyntax.TokenIdent,
				Bytes: []byte(ts.Name),
			},
		)
	case hcl.TraverseIndex:
		toks = append(toks, &Token{
			Type: hclsyntax.TokenOBrack,
			Bytes: []byte{'['},
		})
		toks = appendTokensForValue(ts.Key, toks)
		toks = append(toks, &Token{
			Type: hclsyntax.TokenCBrack,
			Bytes: []byte{']'},
		})
	default:
		panic(fmt.Sprintf("unsupported traversal step type %T", step))
	}
	return toks
}
func escapeQuotedStringLit(s string) []byte {
if len(s) == 0 {
return nil
}
buf := make([]byte, 0, len(s))
for i, r := range s {
switch r {
case '\n':
buf = append(buf, '\\', 'n')
case '\r':
buf = append(buf, '\\', 'r')
case '\t':
buf = append(buf, '\\', 't')
case '"':
buf = append(buf, '\\', '"')
case '\\':
buf = append(buf, '\\', '\\')
case '$', '%':
buf = appendRune(buf, r)
remain := s[i+1:]
if len(remain) > 0 && remain[0] == '{' {
// Double up our template introducer symbol to escape it.
buf = appendRune(buf, r)
}
default:
if !unicode.IsPrint(r) {
var fmted string
if r < 65536 {
fmted = fmt.Sprintf("\\u%04x", r)
} else {
fmted = fmt.Sprintf("\\U%08x", r)
}
buf = append(buf, fmted...)
} else {
buf = appendRune(buf, r)
}
}
}
return buf
}
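// A minimal sketch of the escaping rules above: control characters become
// backslash escapes, and a literal ${ or %{ sequence has its introducer
// doubled so HCL will not parse it as a template.
func exampleEscapes() {
	_ = escapeQuotedStringLit("say \"hi\"\n")    // say \"hi\"\n
	_ = escapeQuotedStringLit("cost is ${high}") // cost is $${high}
}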
func appendRune(b []byte, r rune) []byte {
l := utf8.RuneLen(r)
for i := 0; i < l; i++ {
b = append(b, 0) // make room at the end of our buffer
}
ch := b[len(b)-l:]
utf8.EncodeRune(ch, r)
return b
}

View File

@ -1,23 +0,0 @@
package hclwrite
import (
"github.com/hashicorp/hcl/v2/hclsyntax"
)
type nativeNodeSorter struct {
Nodes []hclsyntax.Node
}
func (s nativeNodeSorter) Len() int {
return len(s.Nodes)
}
func (s nativeNodeSorter) Less(i, j int) bool {
rangeI := s.Nodes[i].Range()
rangeJ := s.Nodes[j].Range()
return rangeI.Start.Byte < rangeJ.Start.Byte
}
func (s nativeNodeSorter) Swap(i, j int) {
s.Nodes[i], s.Nodes[j] = s.Nodes[j], s.Nodes[i]
}

View File

@ -1,260 +0,0 @@
package hclwrite
import (
"fmt"
"github.com/google/go-cmp/cmp"
)
// node represents a node in the AST.
type node struct {
content nodeContent
list *nodes
before, after *node
}
func newNode(c nodeContent) *node {
return &node{
content: c,
}
}
func (n *node) Equal(other *node) bool {
return cmp.Equal(n.content, other.content)
}
func (n *node) BuildTokens(to Tokens) Tokens {
return n.content.BuildTokens(to)
}
// Detach removes the receiver from the list it currently belongs to. If the
// node is not currently in a list, this is a no-op.
func (n *node) Detach() {
if n.list == nil {
return
}
if n.before != nil {
n.before.after = n.after
}
if n.after != nil {
n.after.before = n.before
}
if n.list.first == n {
n.list.first = n.after
}
if n.list.last == n {
n.list.last = n.before
}
n.list = nil
n.before = nil
n.after = nil
}
// ReplaceWith removes the receiver from the list it currently belongs to and
// inserts a new node with the given content in its place. If the node is not
// currently in a list, this function will panic.
//
// The return value is the newly-constructed node, containing the given content.
// After this function returns, the receiver is no longer attached to a list.
func (n *node) ReplaceWith(c nodeContent) *node {
if n.list == nil {
panic("can't replace node that is not in a list")
}
before := n.before
after := n.after
list := n.list
n.before, n.after, n.list = nil, nil, nil
nn := newNode(c)
nn.before = before
nn.after = after
nn.list = list
if before != nil {
before.after = nn
}
if after != nil {
after.before = nn
}
return nn
}
func (n *node) assertUnattached() {
if n.list != nil {
panic(fmt.Sprintf("attempt to attach already-attached node %#v", n))
}
}
// nodeContent is the interface type implemented by all AST content types.
type nodeContent interface {
walkChildNodes(w internalWalkFunc)
BuildTokens(to Tokens) Tokens
}
// nodes is a list of nodes.
type nodes struct {
first, last *node
}
func (ns *nodes) BuildTokens(to Tokens) Tokens {
for n := ns.first; n != nil; n = n.after {
to = n.BuildTokens(to)
}
return to
}
func (ns *nodes) Clear() {
ns.first = nil
ns.last = nil
}
func (ns *nodes) Append(c nodeContent) *node {
n := &node{
content: c,
}
ns.AppendNode(n)
n.list = ns
return n
}
func (ns *nodes) AppendNode(n *node) {
if ns.last != nil {
n.before = ns.last
ns.last.after = n
}
n.list = ns
ns.last = n
if ns.first == nil {
ns.first = n
}
}
func (ns *nodes) AppendUnstructuredTokens(tokens Tokens) *node {
if len(tokens) == 0 {
return nil
}
n := newNode(tokens)
ns.AppendNode(n)
n.list = ns
return n
}
// FindNodeWithContent searches the nodes for a node whose content equals
// the given content. If it finds one then it returns it. Otherwise it returns
// nil.
func (ns *nodes) FindNodeWithContent(content nodeContent) *node {
for n := ns.first; n != nil; n = n.after {
if n.content == content {
return n
}
}
return nil
}
// nodeSet is an unordered set of nodes. It is used to describe a set of nodes
// that all belong to the same list that have some role or characteristic
// in common.
type nodeSet map[*node]struct{}
func newNodeSet() nodeSet {
return make(nodeSet)
}
func (ns nodeSet) Has(n *node) bool {
if ns == nil {
return false
}
_, exists := ns[n]
return exists
}
func (ns nodeSet) Add(n *node) {
ns[n] = struct{}{}
}
func (ns nodeSet) Remove(n *node) {
delete(ns, n)
}
func (ns nodeSet) List() []*node {
if len(ns) == 0 {
return nil
}
ret := make([]*node, 0, len(ns))
// Determine which list we are working with. We assume here that all of
// the nodes belong to the same list, since that is part of the contract
// for nodeSet.
var list *nodes
for n := range ns {
list = n.list
break
}
// We recover the order by iterating over the whole list. This is not
// the most efficient way to do it, but our node lists should always be
// small, so it's not worth making things more complex.
for n := list.first; n != nil; n = n.after {
if ns.Has(n) {
ret = append(ret, n)
}
}
return ret
}
// FindNodeWithContent searches the nodes for a node whose content equals
// the given content. If it finds one then it returns it. Otherwise it returns
// nil.
func (ns nodeSet) FindNodeWithContent(content nodeContent) *node {
for n := range ns {
if n.content == content {
return n
}
}
return nil
}
type internalWalkFunc func(*node)
// inTree can be embedded into a content struct that has child nodes to get
// a standard implementation of the NodeContent interface and a record of
// a potential parent node.
type inTree struct {
parent *node
children *nodes
}
func newInTree() inTree {
return inTree{
children: &nodes{},
}
}
func (it *inTree) assertUnattached() {
if it.parent != nil {
panic(fmt.Sprintf("node is already attached to %T", it.parent.content))
}
}
func (it *inTree) walkChildNodes(w internalWalkFunc) {
for n := it.children.first; n != nil; n = n.after {
w(n)
}
}
func (it *inTree) BuildTokens(to Tokens) Tokens {
for n := it.children.first; n != nil; n = n.after {
to = n.BuildTokens(to)
}
return to
}
// leafNode can be embedded into a content struct to give it a do-nothing
// implementation of walkChildNodes
type leafNode struct {
}
func (n *leafNode) walkChildNodes(w internalWalkFunc) {
}
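// An in-package sketch of the list semantics above, using Tokens (which
// implements nodeContent; see tokens.go) as simple content:
func exampleNodeList() {
	var ns nodes
	first := ns.Append(Tokens{newIdentToken("a")})
	second := ns.Append(Tokens{newIdentToken("b")})
	first.Detach()                                 // list now holds only "b"
	second.ReplaceWith(Tokens{newIdentToken("c")}) // "b" swapped for "c"
}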

View File

@ -1,599 +0,0 @@
package hclwrite
import (
"fmt"
"sort"
"github.com/hashicorp/hcl/v2"
"github.com/hashicorp/hcl/v2/hclsyntax"
"github.com/zclconf/go-cty/cty"
)
// Our "parser" here is actually not doing any parsing of its own. Instead,
// it leans on the native parser in hclsyntax, and then uses the source ranges
// from the AST to partition the raw token sequence to match the raw tokens
// up to AST nodes.
//
// This strategy feels somewhat counter-intuitive, since most of the work the
// parser does is thrown away here, but it is chosen because the normal
// parsing work done by hclsyntax is considered to be the "main case", while
// modifying and re-printing source is more of an edge case used only in
// ancillary tools. It's therefore good to keep all the main parsing logic
// with the main case, and to keep the extra complexity of token wrangling
// out of the main parser, which is already rather complex just serving the
// use-cases it already serves.
//
// If the parsing step produces any errors, the returned File is nil because
// we can't reliably extract tokens from the partial AST produced by an
// erroneous parse.
func parse(src []byte, filename string, start hcl.Pos) (*File, hcl.Diagnostics) {
file, diags := hclsyntax.ParseConfig(src, filename, start)
if diags.HasErrors() {
return nil, diags
}
// To do our work here, we use the "native" tokens (those from hclsyntax)
// to match against source ranges in the AST, but ultimately produce
// slices from our sequence of "writer" tokens, which contain only
// *relative* position information that is more appropriate for
// transformation/writing use-cases.
nativeTokens, diags := hclsyntax.LexConfig(src, filename, start)
if diags.HasErrors() {
// should never happen, since we would've caught these diags in
// the first call above.
return nil, diags
}
writerTokens := writerTokens(nativeTokens)
from := inputTokens{
nativeTokens: nativeTokens,
writerTokens: writerTokens,
}
before, root, after := parseBody(file.Body.(*hclsyntax.Body), from)
ret := &File{
inTree: newInTree(),
srcBytes: src,
body: root,
}
nodes := ret.inTree.children
nodes.Append(before.Tokens())
nodes.AppendNode(root)
nodes.Append(after.Tokens())
return ret, diags
}
type inputTokens struct {
nativeTokens hclsyntax.Tokens
writerTokens Tokens
}
func (it inputTokens) Partition(rng hcl.Range) (before, within, after inputTokens) {
start, end := partitionTokens(it.nativeTokens, rng)
before = it.Slice(0, start)
within = it.Slice(start, end)
after = it.Slice(end, len(it.nativeTokens))
return
}
func (it inputTokens) PartitionType(ty hclsyntax.TokenType) (before, within, after inputTokens) {
for i, t := range it.writerTokens {
if t.Type == ty {
return it.Slice(0, i), it.Slice(i, i+1), it.Slice(i+1, len(it.nativeTokens))
}
}
panic(fmt.Sprintf("didn't find any token of type %s", ty))
}
func (it inputTokens) PartitionTypeSingle(ty hclsyntax.TokenType) (before inputTokens, found *Token, after inputTokens) {
before, within, after := it.PartitionType(ty)
if within.Len() != 1 {
panic("PartitionType found more than one token")
}
return before, within.Tokens()[0], after
}
// PartitionIncludingComments is like Partition except the returned "within"
// range includes any lead and line comments associated with the range.
func (it inputTokens) PartitionIncludingComments(rng hcl.Range) (before, within, after inputTokens) {
start, end := partitionTokens(it.nativeTokens, rng)
start = partitionLeadCommentTokens(it.nativeTokens[:start])
_, afterNewline := partitionLineEndTokens(it.nativeTokens[end:])
end += afterNewline
before = it.Slice(0, start)
within = it.Slice(start, end)
after = it.Slice(end, len(it.nativeTokens))
return
}
// PartitionBlockItem is similar to PartitionIncludingComments but it returns
// the comments as separate token sequences so that they can be captured into
// AST attributes. It makes assumptions that apply only to block items, so
// should not be used for other constructs.
func (it inputTokens) PartitionBlockItem(rng hcl.Range) (before, leadComments, within, lineComments, newline, after inputTokens) {
before, within, after = it.Partition(rng)
before, leadComments = before.PartitionLeadComments()
lineComments, newline, after = after.PartitionLineEndTokens()
return
}
func (it inputTokens) PartitionLeadComments() (before, within inputTokens) {
start := partitionLeadCommentTokens(it.nativeTokens)
before = it.Slice(0, start)
within = it.Slice(start, len(it.nativeTokens))
return
}
func (it inputTokens) PartitionLineEndTokens() (comments, newline, after inputTokens) {
afterComments, afterNewline := partitionLineEndTokens(it.nativeTokens)
comments = it.Slice(0, afterComments)
newline = it.Slice(afterComments, afterNewline)
after = it.Slice(afterNewline, len(it.nativeTokens))
return
}
func (it inputTokens) Slice(start, end int) inputTokens {
// When we slice, we create a new slice with no additional capacity because
// we expect that these slices will be mutated in order to insert
// new code into the AST, and we want to ensure that a new underlying
// array gets allocated in that case, rather than writing into some
// following slice and corrupting it.
return inputTokens{
nativeTokens: it.nativeTokens[start:end:end],
writerTokens: it.writerTokens[start:end:end],
}
}
func (it inputTokens) Len() int {
return len(it.nativeTokens)
}
func (it inputTokens) Tokens() Tokens {
return it.writerTokens
}
func (it inputTokens) Types() []hclsyntax.TokenType {
ret := make([]hclsyntax.TokenType, len(it.nativeTokens))
for i, tok := range it.nativeTokens {
ret[i] = tok.Type
}
return ret
}
// parseBody locates the given body within the given input tokens and returns
// the resulting *Body object as well as the tokens that appeared before and
// after it.
func parseBody(nativeBody *hclsyntax.Body, from inputTokens) (inputTokens, *node, inputTokens) {
before, within, after := from.PartitionIncludingComments(nativeBody.SrcRange)
// The main AST doesn't retain the original source ordering of the
// body items, so we need to reconstruct that ordering by inspecting
// their source ranges.
nativeItems := make([]hclsyntax.Node, 0, len(nativeBody.Attributes)+len(nativeBody.Blocks))
for _, nativeAttr := range nativeBody.Attributes {
nativeItems = append(nativeItems, nativeAttr)
}
for _, nativeBlock := range nativeBody.Blocks {
nativeItems = append(nativeItems, nativeBlock)
}
sort.Sort(nativeNodeSorter{nativeItems})
body := &Body{
inTree: newInTree(),
items: newNodeSet(),
}
remain := within
for _, nativeItem := range nativeItems {
beforeItem, item, afterItem := parseBodyItem(nativeItem, remain)
if beforeItem.Len() > 0 {
body.AppendUnstructuredTokens(beforeItem.Tokens())
}
body.appendItemNode(item)
remain = afterItem
}
if remain.Len() > 0 {
body.AppendUnstructuredTokens(remain.Tokens())
}
return before, newNode(body), after
}
func parseBodyItem(nativeItem hclsyntax.Node, from inputTokens) (inputTokens, *node, inputTokens) {
before, leadComments, within, lineComments, newline, after := from.PartitionBlockItem(nativeItem.Range())
var item *node
switch tItem := nativeItem.(type) {
case *hclsyntax.Attribute:
item = parseAttribute(tItem, within, leadComments, lineComments, newline)
case *hclsyntax.Block:
item = parseBlock(tItem, within, leadComments, lineComments, newline)
default:
// should never happen if caller is behaving
panic("unsupported native item type")
}
return before, item, after
}
func parseAttribute(nativeAttr *hclsyntax.Attribute, from, leadComments, lineComments, newline inputTokens) *node {
attr := &Attribute{
inTree: newInTree(),
}
children := attr.inTree.children
{
cn := newNode(newComments(leadComments.Tokens()))
attr.leadComments = cn
children.AppendNode(cn)
}
before, nameTokens, from := from.Partition(nativeAttr.NameRange)
{
children.AppendUnstructuredTokens(before.Tokens())
if nameTokens.Len() != 1 {
// Should never happen with valid input
panic("attribute name is not exactly one token")
}
token := nameTokens.Tokens()[0]
in := newNode(newIdentifier(token))
attr.name = in
children.AppendNode(in)
}
before, equalsTokens, from := from.Partition(nativeAttr.EqualsRange)
children.AppendUnstructuredTokens(before.Tokens())
children.AppendUnstructuredTokens(equalsTokens.Tokens())
before, exprTokens, from := from.Partition(nativeAttr.Expr.Range())
{
children.AppendUnstructuredTokens(before.Tokens())
exprNode := parseExpression(nativeAttr.Expr, exprTokens)
attr.expr = exprNode
children.AppendNode(exprNode)
}
{
cn := newNode(newComments(lineComments.Tokens()))
attr.lineComments = cn
children.AppendNode(cn)
}
children.AppendUnstructuredTokens(newline.Tokens())
// Collect any stragglers, though there shouldn't be any
children.AppendUnstructuredTokens(from.Tokens())
return newNode(attr)
}
func parseBlock(nativeBlock *hclsyntax.Block, from, leadComments, lineComments, newline inputTokens) *node {
block := &Block{
inTree: newInTree(),
labels: newNodeSet(),
}
children := block.inTree.children
{
cn := newNode(newComments(leadComments.Tokens()))
block.leadComments = cn
children.AppendNode(cn)
}
before, typeTokens, from := from.Partition(nativeBlock.TypeRange)
{
children.AppendUnstructuredTokens(before.Tokens())
if typeTokens.Len() != 1 {
// Should never happen with valid input
panic("block type name is not exactly one token")
}
token := typeTokens.Tokens()[0]
in := newNode(newIdentifier(token))
block.typeName = in
children.AppendNode(in)
}
for _, rng := range nativeBlock.LabelRanges {
var labelTokens inputTokens
before, labelTokens, from = from.Partition(rng)
children.AppendUnstructuredTokens(before.Tokens())
tokens := labelTokens.Tokens()
var ln *node
if len(tokens) == 1 && tokens[0].Type == hclsyntax.TokenIdent {
ln = newNode(newIdentifier(tokens[0]))
} else {
ln = newNode(newQuoted(tokens))
}
block.labels.Add(ln)
children.AppendNode(ln)
}
before, oBrace, from := from.Partition(nativeBlock.OpenBraceRange)
children.AppendUnstructuredTokens(before.Tokens())
children.AppendUnstructuredTokens(oBrace.Tokens())
// We go a bit out of order here: we go hunting for the closing brace
// so that we have a delimited body, but then we'll deal with the body
// before we actually append the closing brace and any straggling tokens
// that appear after it.
bodyTokens, cBrace, from := from.Partition(nativeBlock.CloseBraceRange)
before, body, after := parseBody(nativeBlock.Body, bodyTokens)
children.AppendUnstructuredTokens(before.Tokens())
block.body = body
children.AppendNode(body)
children.AppendUnstructuredTokens(after.Tokens())
children.AppendUnstructuredTokens(cBrace.Tokens())
// stragglers
children.AppendUnstructuredTokens(from.Tokens())
if lineComments.Len() > 0 {
// blocks don't actually have line comments, so we'll just treat
// them as extra stragglers
children.AppendUnstructuredTokens(lineComments.Tokens())
}
children.AppendUnstructuredTokens(newline.Tokens())
return newNode(block)
}
func parseExpression(nativeExpr hclsyntax.Expression, from inputTokens) *node {
expr := newExpression()
children := expr.inTree.children
nativeVars := nativeExpr.Variables()
for _, nativeTraversal := range nativeVars {
before, traversal, after := parseTraversal(nativeTraversal, from)
children.AppendUnstructuredTokens(before.Tokens())
children.AppendNode(traversal)
expr.absTraversals.Add(traversal)
from = after
}
// Attach any stragglers that don't belong to a traversal to the expression
// itself. In an expression with no traversals at all, this is just the
// entirety of "from".
children.AppendUnstructuredTokens(from.Tokens())
return newNode(expr)
}
func parseTraversal(nativeTraversal hcl.Traversal, from inputTokens) (before inputTokens, n *node, after inputTokens) {
traversal := newTraversal()
children := traversal.inTree.children
before, from, after = from.Partition(nativeTraversal.SourceRange())
stepAfter := from
for _, nativeStep := range nativeTraversal {
before, step, after := parseTraversalStep(nativeStep, stepAfter)
children.AppendUnstructuredTokens(before.Tokens())
children.AppendNode(step)
traversal.steps.Add(step)
stepAfter = after
}
return before, newNode(traversal), after
}
func parseTraversalStep(nativeStep hcl.Traverser, from inputTokens) (before inputTokens, n *node, after inputTokens) {
var children *nodes
switch tNativeStep := nativeStep.(type) {
case hcl.TraverseRoot, hcl.TraverseAttr:
step := newTraverseName()
children = step.inTree.children
before, from, after = from.Partition(nativeStep.SourceRange())
inBefore, token, inAfter := from.PartitionTypeSingle(hclsyntax.TokenIdent)
name := newIdentifier(token)
children.AppendUnstructuredTokens(inBefore.Tokens())
step.name = children.Append(name)
children.AppendUnstructuredTokens(inAfter.Tokens())
return before, newNode(step), after
case hcl.TraverseIndex:
step := newTraverseIndex()
children = step.inTree.children
before, from, after = from.Partition(nativeStep.SourceRange())
var inBefore, oBrack, keyTokens, cBrack inputTokens
inBefore, oBrack, from = from.PartitionType(hclsyntax.TokenOBrack)
children.AppendUnstructuredTokens(inBefore.Tokens())
children.AppendUnstructuredTokens(oBrack.Tokens())
keyTokens, cBrack, from = from.PartitionType(hclsyntax.TokenCBrack)
keyVal := tNativeStep.Key
switch keyVal.Type() {
case cty.String:
key := newQuoted(keyTokens.Tokens())
step.key = children.Append(key)
case cty.Number:
valBefore, valToken, valAfter := keyTokens.PartitionTypeSingle(hclsyntax.TokenNumberLit)
children.AppendUnstructuredTokens(valBefore.Tokens())
key := newNumber(valToken)
step.key = children.Append(key)
children.AppendUnstructuredTokens(valAfter.Tokens())
}
children.AppendUnstructuredTokens(cBrack.Tokens())
children.AppendUnstructuredTokens(from.Tokens())
return before, newNode(step), after
default:
panic(fmt.Sprintf("unsupported traversal step type %T", nativeStep))
}
}
// writerTokens takes a sequence of tokens as produced by the main hclsyntax
// package and transforms it into an equivalent sequence of tokens using
// this package's own token model.
//
// The resulting list contains the same number of tokens and uses the same
// indices as the input, allowing the two sets of tokens to be correlated
// by index.
func writerTokens(nativeTokens hclsyntax.Tokens) Tokens {
// Ultimately we want a slice of token _pointers_, but since we can
// predict how much memory we're going to devote to tokens we'll allocate
// it all as a single flat buffer and thus give the GC less work to do.
tokBuf := make([]Token, len(nativeTokens))
var lastByteOffset int
for i, mainToken := range nativeTokens {
// Create a copy of the bytes so that we can mutate without
// corrupting the original token stream.
bytes := make([]byte, len(mainToken.Bytes))
copy(bytes, mainToken.Bytes)
tokBuf[i] = Token{
Type: mainToken.Type,
Bytes: bytes,
// We assume here that spaces are always ASCII spaces, since
// that's what the scanner also assumes, and thus the number
// of bytes skipped is also the number of space characters.
SpacesBefore: mainToken.Range.Start.Byte - lastByteOffset,
}
lastByteOffset = mainToken.Range.End.Byte
}
// Now make a slice of pointers into the previous slice.
ret := make(Tokens, len(tokBuf))
for i := range ret {
ret[i] = &tokBuf[i]
}
return ret
}
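// A small sketch of the correlation above, with hypothetical source: for
// "a = 1" the SpacesBefore values are expected to be 0 for `a`, 1 for `=`,
// and 1 for `1`, each being the gap between the previous token's end
// offset and this token's start offset.
func exampleWriterTokens() Tokens {
	native, _ := hclsyntax.LexConfig([]byte("a = 1"), "", hcl.Pos{Byte: 0, Line: 1, Column: 1})
	return writerTokens(native)
}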
// partitionTokens takes a sequence of tokens and a hcl.Range and returns
// two indices within the token sequence that correspond with the range
// boundaries, such that the slice operator could be used to produce
// three token sequences for before, within, and after respectively:
//
// start, end := partitionTokens(toks, rng)
// before := toks[:start]
// within := toks[start:end]
// after := toks[end:]
//
// This works best when the range is aligned with token boundaries (e.g.
// because it was produced in terms of the scanner's result) but if that isn't
// true then it will make a best effort that may produce strange results at
// the boundaries.
//
// Native hclsyntax tokens are used here, because they contain the necessary
// absolute position information. However, since writerTokens produces a
// correlatable sequence of writer tokens, the resulting indices can be
// used also to index into its result, allowing the partitioning of writer
// tokens to be driven by the partitioning of native tokens.
//
// The tokens are assumed to be in source order and non-overlapping, which
// will be true if the token sequence from the scanner is used directly.
func partitionTokens(toks hclsyntax.Tokens, rng hcl.Range) (start, end int) {
// We use a linear search here because we assume that in most cases our
// target range is close to the beginning of the sequence, and the sequences
// are generally small for most reasonable files anyway.
for i := 0; ; i++ {
if i >= len(toks) {
// No tokens for the given range at all!
return len(toks), len(toks)
}
if toks[i].Range.Start.Byte >= rng.Start.Byte {
start = i
break
}
}
for i := start; ; i++ {
if i >= len(toks) {
// The range "hangs off" the end of the token sequence
return start, len(toks)
}
if toks[i].Range.Start.Byte >= rng.End.Byte {
end = i // end marker is exclusive
break
}
}
return start, end
}
// partitionLeadCommentTokens takes a sequence of tokens that is assumed
// to immediately precede a construct that can have lead comment tokens,
// and returns the index into that sequence where the lead comments begin.
//
// Lead comments are defined as whole lines containing only comment tokens
// with no blank lines between. If no such lines are found, the returned
// index will be len(toks).
func partitionLeadCommentTokens(toks hclsyntax.Tokens) int {
// single-line comments (which is what we're interested in here)
// consume their trailing newline, so we can just walk backwards
// until we stop seeing comment tokens.
for i := len(toks) - 1; i >= 0; i-- {
if toks[i].Type != hclsyntax.TokenComment {
return i + 1
}
}
return 0
}
// partitionLineEndTokens takes a sequence of tokens that is assumed
// to immediately follow a construct that can have a line comment, and
// returns first the index where any line comments end and then second
// the index immediately after the trailing newline.
//
// Line comments are defined as comments that appear immediately after
// a construct on the same line where its significant tokens ended.
//
// Since single-line comment tokens (# and //) include the newline that
// terminates them, in the presence of these the two returned indices
// will be the same since the comment itself serves as the line end.
func partitionLineEndTokens(toks hclsyntax.Tokens) (afterComment, afterNewline int) {
for i := 0; i < len(toks); i++ {
tok := toks[i]
if tok.Type != hclsyntax.TokenComment {
switch tok.Type {
case hclsyntax.TokenNewline:
return i, i + 1
case hclsyntax.TokenEOF:
// Although this is valid, we mustn't include the EOF
// itself as our "newline" or else strange things will
// happen when we try to append new items.
return i, i
default:
// If we have well-formed input here then nothing else should be
// possible. This path should never happen, because we only try
// to extract tokens from the sequence if the parser succeeded,
// and it should catch this problem itself.
panic("malformed line trailers: expected only comments and newlines")
}
}
if len(tok.Bytes) > 0 && tok.Bytes[len(tok.Bytes)-1] == '\n' {
// Newline at the end of a single-line comment serves both as
// the end of comments *and* the end of the line.
return i + 1, i + 1
}
}
return len(toks), len(toks)
}
// lexConfig uses the hclsyntax scanner to get a token stream and then
// rewrites it into this package's token model.
//
// Any errors produced during scanning are ignored, so the results of this
// function should be used with care.
func lexConfig(src []byte) Tokens {
mainTokens, _ := hclsyntax.LexConfig(src, "", hcl.Pos{Byte: 0, Line: 1, Column: 1})
return writerTokens(mainTokens)
}

View File

@ -1,44 +0,0 @@
package hclwrite
import (
"bytes"
"github.com/hashicorp/hcl/v2"
)
// NewFile creates a new file object that is empty and ready to have constructs
// added to it.
func NewFile() *File {
body := &Body{
inTree: newInTree(),
items: newNodeSet(),
}
file := &File{
inTree: newInTree(),
}
file.body = file.inTree.children.Append(body)
return file
}
// ParseConfig interprets the given source bytes into a *hclwrite.File. The
// resulting AST can be used to perform surgical edits on the source code
// before turning it back into bytes again.
func ParseConfig(src []byte, filename string, start hcl.Pos) (*File, hcl.Diagnostics) {
return parse(src, filename, start)
}
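// An illustrative round-trip sketch with a hypothetical attribute name:
// parse, surgically edit, and re-serialize. Only the edited attribute's
// tokens change, so comments and layout elsewhere survive unchanged.
func exampleParseEdit(src []byte) ([]byte, hcl.Diagnostics) {
	f, diags := ParseConfig(src, "input.hcl", hcl.Pos{Byte: 0, Line: 1, Column: 1})
	if diags.HasErrors() {
		return nil, diags
	}
	f.Body().SetAttributeTraversal("provider", hcl.Traversal{
		hcl.TraverseRoot{Name: "aws"},
		hcl.TraverseAttr{Name: "west"},
	})
	return f.Bytes(), nil
}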
// Format takes source code and performs simple whitespace changes to transform
// it to a canonical layout style.
//
// Format skips constructing an AST and works directly with tokens, so it
// is less expensive than formatting via the AST for situations where no other
// changes will be made. It also ignores syntax errors and can thus be applied
// to partial source code, although the result in that case may not be
// desirable.
func Format(src []byte) []byte {
tokens := lexConfig(src)
format(tokens)
buf := &bytes.Buffer{}
tokens.WriteTo(buf)
return buf.Bytes()
}
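// A minimal sketch of that property: because Format works on raw tokens,
// it can normalize a fragment that would not parse as a whole file.
func exampleFormatFragment() []byte {
	return Format([]byte("  name=\"example\"\n"))
	// Expected output: name = "example"\n -- leading indent dropped and
	// spacing around the equals sign normalized, with no AST involved.
}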

View File

@ -1,122 +0,0 @@
package hclwrite
import (
"bytes"
"io"
"github.com/apparentlymart/go-textseg/textseg"
"github.com/hashicorp/hcl/v2"
"github.com/hashicorp/hcl/v2/hclsyntax"
)
// Token is a single sequence of bytes annotated with a type. It is similar
// in purpose to hclsyntax.Token, but discards the source position information
// since that is not useful in code generation.
type Token struct {
Type hclsyntax.TokenType
Bytes []byte
// We record the number of spaces before each token so that we can
// reproduce the exact layout of the original file when we're making
// surgical changes in-place. When _new_ code is created it will always
// be in the canonical style, but we preserve layout of existing code.
SpacesBefore int
}
// asHCLSyntax returns the receiver expressed as an incomplete hclsyntax.Token.
// A complete token is not possible since we don't have source location
// information here, and so this method is unexported so we can be sure it will
// only be used for internal purposes where we know the range isn't important.
//
// This is primarily intended to allow us to re-use certain functionality from
// hclsyntax rather than re-implementing it against our own token type here.
func (t *Token) asHCLSyntax() hclsyntax.Token {
return hclsyntax.Token{
Type: t.Type,
Bytes: t.Bytes,
Range: hcl.Range{
Filename: "<invalid>",
},
}
}
// Tokens is a flat list of tokens.
type Tokens []*Token
func (ts Tokens) Bytes() []byte {
buf := &bytes.Buffer{}
ts.WriteTo(buf)
return buf.Bytes()
}
func (ts Tokens) testValue() string {
return string(ts.Bytes())
}
// Columns returns the number of columns (grapheme clusters) the token sequence
// occupies. The result is not meaningful if there are newline or single-line
// comment tokens in the sequence.
func (ts Tokens) Columns() int {
ret := 0
for _, token := range ts {
ret += token.SpacesBefore // spaces are always worth one column each
ct, _ := textseg.TokenCount(token.Bytes, textseg.ScanGraphemeClusters)
ret += ct
}
return ret
}
// WriteTo takes an io.Writer and writes the bytes for each token to it,
// along with the spacing that separates each token. In other words, this
// allows serializing the tokens to a file or other such byte stream.
func (ts Tokens) WriteTo(wr io.Writer) (int64, error) {
// We know we're going to be writing a lot of small chunks of repeated
// space characters, so we'll prepare a buffer of these that we can
// easily pass to wr.Write without any further allocation.
spaces := make([]byte, 40)
for i := range spaces {
spaces[i] = ' '
}
var n int64
var err error
for _, token := range ts {
if err != nil {
return n, err
}
for spacesBefore := token.SpacesBefore; spacesBefore > 0; spacesBefore -= len(spaces) {
thisChunk := spacesBefore
if thisChunk > len(spaces) {
thisChunk = len(spaces)
}
var thisN int
thisN, err = wr.Write(spaces[:thisChunk])
n += int64(thisN)
if err != nil {
return n, err
}
}
var thisN int
thisN, err = wr.Write(token.Bytes)
n += int64(thisN)
}
return n, err
}
func (ts Tokens) walkChildNodes(w internalWalkFunc) {
// Unstructured tokens have no child nodes
}
func (ts Tokens) BuildTokens(to Tokens) Tokens {
return append(to, ts...)
}
func newIdentToken(name string) *Token {
return &Token{
Type: hclsyntax.TokenIdent,
Bytes: []byte(name),
}
}

View File

@ -1,121 +0,0 @@
package json
import (
"math/big"
"github.com/hashicorp/hcl/v2"
)
type node interface {
Range() hcl.Range
StartRange() hcl.Range
}
type objectVal struct {
Attrs []*objectAttr
SrcRange hcl.Range // range of the entire object, brace-to-brace
OpenRange hcl.Range // range of the opening brace
CloseRange hcl.Range // range of the closing brace
}
func (n *objectVal) Range() hcl.Range {
return n.SrcRange
}
func (n *objectVal) StartRange() hcl.Range {
return n.OpenRange
}
type objectAttr struct {
Name string
Value node
NameRange hcl.Range // range of the name string
}
func (n *objectAttr) Range() hcl.Range {
return n.NameRange
}
func (n *objectAttr) StartRange() hcl.Range {
return n.NameRange
}
type arrayVal struct {
Values []node
SrcRange hcl.Range // range of the entire array, bracket-to-bracket
OpenRange hcl.Range // range of the opening bracket
}
func (n *arrayVal) Range() hcl.Range {
return n.SrcRange
}
func (n *arrayVal) StartRange() hcl.Range {
return n.OpenRange
}
type booleanVal struct {
Value bool
SrcRange hcl.Range
}
func (n *booleanVal) Range() hcl.Range {
return n.SrcRange
}
func (n *booleanVal) StartRange() hcl.Range {
return n.SrcRange
}
type numberVal struct {
Value *big.Float
SrcRange hcl.Range
}
func (n *numberVal) Range() hcl.Range {
return n.SrcRange
}
func (n *numberVal) StartRange() hcl.Range {
return n.SrcRange
}
type stringVal struct {
Value string
SrcRange hcl.Range
}
func (n *stringVal) Range() hcl.Range {
return n.SrcRange
}
func (n *stringVal) StartRange() hcl.Range {
return n.SrcRange
}
type nullVal struct {
SrcRange hcl.Range
}
func (n *nullVal) Range() hcl.Range {
return n.SrcRange
}
func (n *nullVal) StartRange() hcl.Range {
return n.SrcRange
}
// invalidVal is used as a placeholder where a value is needed for a valid
// parse tree but the input was invalid enough to prevent one from being
// created.
type invalidVal struct {
SrcRange hcl.Range
}
func (n invalidVal) Range() hcl.Range {
return n.SrcRange
}
func (n invalidVal) StartRange() hcl.Range {
return n.SrcRange
}

View File

@ -1,33 +0,0 @@
package json
import (
"github.com/agext/levenshtein"
)
var keywords = []string{"false", "true", "null"}
// keywordSuggestion tries to find a valid JSON keyword that is close to the
// given string and returns it if found. If no keyword is close enough, returns
// the empty string.
func keywordSuggestion(given string) string {
return nameSuggestion(given, keywords)
}
// nameSuggestion tries to find a name from the given slice of suggested names
// that is close to the given name and returns it if found. If no suggestion
// is close enough, returns the empty string.
//
// The suggestions are tried in order, so earlier suggestions take precedence
// if the given string is similar to two or more suggestions.
//
// This function is intended to be used with a relatively-small number of
// suggestions. It's not optimized for hundreds or thousands of them.
func nameSuggestion(given string, suggestions []string) string {
for _, suggestion := range suggestions {
dist := levenshtein.Distance(given, suggestion, nil)
if dist < 3 { // threshold determined experimentally
return suggestion
}
}
return ""
}
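// A minimal sketch of the thresholding: a typo within Levenshtein
// distance 3 of a keyword is corrected, anything further away is not.
func exampleSuggestion() {
	_ = keywordSuggestion("flase")  // "false" (distance 2)
	_ = keywordSuggestion("nill")   // "null" (distance 1)
	_ = keywordSuggestion("object") // "" -- nothing close enough
}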

View File

@ -1,12 +0,0 @@
// Package json is the JSON parser for HCL. It parses JSON files and returns
// implementations of the core HCL structural interfaces in terms of the
// JSON data inside.
//
// This is not a generic JSON parser. Instead, it deals with the mapping from
// the JSON information model to the HCL information model, using a number
// of hard-coded structural conventions.
//
// In most cases applications will not import this package directly, but will
// instead access its functionality indirectly through functions in the main
// "hcl" package and in the "hclparse" package.
package json

View File

@ -1,70 +0,0 @@
package json
import (
"fmt"
"strings"
)
type navigation struct {
root node
}
// Implementation of hcled.ContextString
func (n navigation) ContextString(offset int) string {
steps := navigationStepsRev(n.root, offset)
if steps == nil {
return ""
}
// We built our slice backwards, so we'll reverse it in-place now.
half := len(steps) / 2 // integer division
for i := 0; i < half; i++ {
steps[i], steps[len(steps)-1-i] = steps[len(steps)-1-i], steps[i]
}
ret := strings.Join(steps, "")
if len(ret) > 0 && ret[0] == '.' {
ret = ret[1:]
}
return ret
}
func navigationStepsRev(v node, offset int) []string {
switch tv := v.(type) {
case *objectVal:
// Do any of our properties have an object that contains the target
// offset?
for _, attr := range tv.Attrs {
k := attr.Name
av := attr.Value
switch av.(type) {
case *objectVal, *arrayVal:
// okay
default:
continue
}
if av.Range().ContainsOffset(offset) {
return append(navigationStepsRev(av, offset), "."+k)
}
}
case *arrayVal:
// Do any of our elements contain the target offset?
for i, elem := range tv.Values {
switch elem.(type) {
case *objectVal, *arrayVal:
// okay
default:
continue
}
if elem.Range().ContainsOffset(offset) {
return append(navigationStepsRev(elem, offset), fmt.Sprintf("[%d]", i))
}
}
}
return nil
}

View File

@ -1,496 +0,0 @@
package json
import (
"encoding/json"
"fmt"
"github.com/hashicorp/hcl/v2"
"github.com/zclconf/go-cty/cty"
)
func parseFileContent(buf []byte, filename string) (node, hcl.Diagnostics) {
tokens := scan(buf, pos{
Filename: filename,
Pos: hcl.Pos{
Byte: 0,
Line: 1,
Column: 1,
},
})
p := newPeeker(tokens)
node, diags := parseValue(p)
if len(diags) == 0 && p.Peek().Type != tokenEOF {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Extraneous data after value",
Detail: "Extra characters appear after the JSON value.",
Subject: p.Peek().Range.Ptr(),
})
}
return node, diags
}
func parseValue(p *peeker) (node, hcl.Diagnostics) {
tok := p.Peek()
wrapInvalid := func(n node, diags hcl.Diagnostics) (node, hcl.Diagnostics) {
if n != nil {
return n, diags
}
return invalidVal{tok.Range}, diags
}
switch tok.Type {
case tokenBraceO:
return wrapInvalid(parseObject(p))
case tokenBrackO:
return wrapInvalid(parseArray(p))
case tokenNumber:
return wrapInvalid(parseNumber(p))
case tokenString:
return wrapInvalid(parseString(p))
case tokenKeyword:
return wrapInvalid(parseKeyword(p))
case tokenBraceC:
return wrapInvalid(nil, hcl.Diagnostics{
{
Severity: hcl.DiagError,
Summary: "Missing JSON value",
Detail: "A JSON value must start with a brace, a bracket, a number, a string, or a keyword.",
Subject: &tok.Range,
},
})
case tokenBrackC:
return wrapInvalid(nil, hcl.Diagnostics{
{
Severity: hcl.DiagError,
Summary: "Missing array element value",
Detail: "A JSON value must start with a brace, a bracket, a number, a string, or a keyword.",
Subject: &tok.Range,
},
})
case tokenEOF:
return wrapInvalid(nil, hcl.Diagnostics{
{
Severity: hcl.DiagError,
Summary: "Missing value",
Detail: "The JSON data ends prematurely.",
Subject: &tok.Range,
},
})
default:
return wrapInvalid(nil, hcl.Diagnostics{
{
Severity: hcl.DiagError,
Summary: "Invalid start of value",
Detail: "A JSON value must start with a brace, a bracket, a number, a string, or a keyword.",
Subject: &tok.Range,
},
})
}
}
func tokenCanStartValue(tok token) bool {
switch tok.Type {
case tokenBraceO, tokenBrackO, tokenNumber, tokenString, tokenKeyword:
return true
default:
return false
}
}
func parseObject(p *peeker) (node, hcl.Diagnostics) {
var diags hcl.Diagnostics
open := p.Read()
attrs := []*objectAttr{}
// recover is used to shift the peeker to what seems to be the end of
// our object, so that when we encounter an error we leave the peeker
// at a reasonable point in the token stream to continue parsing.
recover := func(tok token) {
open := 1
for {
switch tok.Type {
case tokenBraceO:
open++
case tokenBraceC:
open--
if open <= 1 {
return
}
case tokenEOF:
// Ran out of source before we were able to recover,
// so we'll bail here and let the caller deal with it.
return
}
tok = p.Read()
}
}
Token:
for {
if p.Peek().Type == tokenBraceC {
break Token
}
keyNode, keyDiags := parseValue(p)
diags = diags.Extend(keyDiags)
if keyNode == nil {
return nil, diags
}
keyStrNode, ok := keyNode.(*stringVal)
if !ok {
return nil, diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid object property name",
Detail: "A JSON object property name must be a string",
Subject: keyNode.StartRange().Ptr(),
})
}
key := keyStrNode.Value
colon := p.Read()
if colon.Type != tokenColon {
recover(colon)
if colon.Type == tokenBraceC || colon.Type == tokenComma {
// Catch the common mistake of using braces (an object)
// where brackets (an array) were intended.
return nil, diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Missing object value",
Detail: "A JSON object attribute must have a value, introduced by a colon.",
Subject: &colon.Range,
})
}
if colon.Type == tokenEquals {
// Possible confusion with native HCL syntax.
return nil, diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Missing property value colon",
Detail: "JSON uses a colon as its name/value delimiter, not an equals sign.",
Subject: &colon.Range,
})
}
return nil, diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Missing property value colon",
Detail: "A colon must appear between an object property's name and its value.",
Subject: &colon.Range,
})
}
valNode, valDiags := parseValue(p)
diags = diags.Extend(valDiags)
if valNode == nil {
return nil, diags
}
attrs = append(attrs, &objectAttr{
Name: key,
Value: valNode,
NameRange: keyStrNode.SrcRange,
})
switch p.Peek().Type {
case tokenComma:
comma := p.Read()
if p.Peek().Type == tokenBraceC {
// Special error message for this common mistake
return nil, diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Trailing comma in object",
Detail: "JSON does not permit a trailing comma after the final property in an object.",
Subject: &comma.Range,
})
}
continue Token
case tokenEOF:
return nil, diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Unclosed object",
Detail: "No closing brace was found for this JSON object.",
Subject: &open.Range,
})
case tokenBrackC:
// Consume the bracket anyway, so that we don't return with the peeker
// at a strange place.
p.Read()
return nil, diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Mismatched braces",
Detail: "A JSON object must be closed with a brace, not a bracket.",
Subject: p.Peek().Range.Ptr(),
})
case tokenBraceC:
break Token
default:
recover(p.Read())
return nil, diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Missing attribute seperator comma",
Detail: "A comma must appear between each property definition in an object.",
Subject: p.Peek().Range.Ptr(),
})
}
}
close := p.Read()
return &objectVal{
Attrs: attrs,
SrcRange: hcl.RangeBetween(open.Range, close.Range),
OpenRange: open.Range,
CloseRange: close.Range,
}, diags
}
func parseArray(p *peeker) (node, hcl.Diagnostics) {
var diags hcl.Diagnostics
open := p.Read()
vals := []node{}
// recover is used to shift the peeker to what seems to be the end of
// our array, so that when we encounter an error we leave the peeker
// at a reasonable point in the token stream to continue parsing.
recover := func(tok token) {
open := 1
for {
switch tok.Type {
case tokenBrackO:
open++
case tokenBrackC:
open--
if open <= 1 {
return
}
case tokenEOF:
// Ran out of source before we were able to recover,
// so we'll bail here and let the caller deal with it.
return
}
tok = p.Read()
}
}
Token:
for {
if p.Peek().Type == tokenBrackC {
break Token
}
valNode, valDiags := parseValue(p)
diags = diags.Extend(valDiags)
if valNode == nil {
return nil, diags
}
vals = append(vals, valNode)
switch p.Peek().Type {
case tokenComma:
comma := p.Read()
if p.Peek().Type == tokenBrackC {
// Special error message for this common mistake
return nil, diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Trailing comma in array",
Detail: "JSON does not permit a trailing comma after the final value in an array.",
Subject: &comma.Range,
})
}
continue Token
case tokenColon:
recover(p.Read())
return nil, diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid array value",
Detail: "A colon is not used to introduce values in a JSON array.",
Subject: p.Peek().Range.Ptr(),
})
case tokenEOF:
recover(p.Read())
return nil, diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Unclosed object",
Detail: "No closing bracket was found for this JSON array.",
Subject: &open.Range,
})
case tokenBraceC:
recover(p.Read())
return nil, diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Mismatched brackets",
Detail: "A JSON array must be closed with a bracket, not a brace.",
Subject: p.Peek().Range.Ptr(),
})
case tokenBrackC:
break Token
default:
recover(p.Read())
return nil, diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Missing attribute seperator comma",
Detail: "A comma must appear between each value in an array.",
Subject: p.Peek().Range.Ptr(),
})
}
}
close := p.Read()
return &arrayVal{
Values: vals,
SrcRange: hcl.RangeBetween(open.Range, close.Range),
OpenRange: open.Range,
}, diags
}
func parseNumber(p *peeker) (node, hcl.Diagnostics) {
tok := p.Read()
// Use encoding/json to validate the number syntax.
// TODO: Do this more directly to produce better diagnostics.
var num json.Number
err := json.Unmarshal(tok.Bytes, &num)
if err != nil {
return nil, hcl.Diagnostics{
{
Severity: hcl.DiagError,
Summary: "Invalid JSON number",
Detail: fmt.Sprintf("There is a syntax error in the given JSON number."),
Subject: &tok.Range,
},
}
}
// We want to guarantee that we parse numbers the same way as cty (and thus
// native syntax HCL) would here, so we'll use the cty parser even though
// in most other cases we don't actually introduce cty concepts until
// decoding time. We'll unwrap the parsed float immediately afterwards, so
// the cty value is just a temporary helper.
nv, err := cty.ParseNumberVal(string(num))
if err != nil {
// Should never happen if above passed, since JSON numbers are a subset
// of what cty can parse...
return nil, hcl.Diagnostics{
{
Severity: hcl.DiagError,
Summary: "Invalid JSON number",
Detail: fmt.Sprintf("There is a syntax error in the given JSON number."),
Subject: &tok.Range,
},
}
}
return &numberVal{
Value: nv.AsBigFloat(),
SrcRange: tok.Range,
}, nil
}
func parseString(p *peeker) (node, hcl.Diagnostics) {
tok := p.Read()
var str string
err := json.Unmarshal(tok.Bytes, &str)
if err != nil {
var errRange hcl.Range
if serr, ok := err.(*json.SyntaxError); ok {
errOfs := serr.Offset
errPos := tok.Range.Start
errPos.Byte += int(errOfs)
// TODO: Use the byte offset to properly count unicode
// characters for the column, and mark the whole of the
// character that was wrong as part of our range.
errPos.Column += int(errOfs)
errEndPos := errPos
errEndPos.Byte++
errEndPos.Column++
errRange = hcl.Range{
Filename: tok.Range.Filename,
Start: errPos,
End: errEndPos,
}
} else {
errRange = tok.Range
}
var contextRange *hcl.Range
if errRange != tok.Range {
contextRange = &tok.Range
}
// FIXME: Eventually we should parse strings directly here so
// we can produce a more useful error message in the face of things
// such as invalid escapes, etc.
return nil, hcl.Diagnostics{
{
Severity: hcl.DiagError,
Summary: "Invalid JSON string",
Detail: fmt.Sprintf("There is a syntax error in the given JSON string."),
Subject: &errRange,
Context: contextRange,
},
}
}
return &stringVal{
Value: str,
SrcRange: tok.Range,
}, nil
}
func parseKeyword(p *peeker) (node, hcl.Diagnostics) {
tok := p.Read()
s := string(tok.Bytes)
switch s {
case "true":
return &booleanVal{
Value: true,
SrcRange: tok.Range,
}, nil
case "false":
return &booleanVal{
Value: false,
SrcRange: tok.Range,
}, nil
case "null":
return &nullVal{
SrcRange: tok.Range,
}, nil
case "undefined", "NaN", "Infinity":
return nil, hcl.Diagnostics{
{
Severity: hcl.DiagError,
Summary: "Invalid JSON keyword",
Detail: fmt.Sprintf("The JavaScript identifier %q cannot be used in JSON.", s),
Subject: &tok.Range,
},
}
default:
var dym string
if suggest := keywordSuggestion(s); suggest != "" {
dym = fmt.Sprintf(" Did you mean %q?", suggest)
}
return nil, hcl.Diagnostics{
{
Severity: hcl.DiagError,
Summary: "Invalid JSON keyword",
Detail: fmt.Sprintf("%q is not a valid JSON keyword.%s", s, dym),
Subject: &tok.Range,
},
}
}
}

View File

@ -1,25 +0,0 @@
package json
type peeker struct {
tokens []token
pos int
}
func newPeeker(tokens []token) *peeker {
return &peeker{
tokens: tokens,
pos: 0,
}
}
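// Peek returns the token at the current position without consuming it.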
func (p *peeker) Peek() token {
return p.tokens[p.pos]
}
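// Read returns the token at the current position and advances past it,
// except when it is the EOF token, which is returned again on every
// subsequent call so callers cannot move past the end of the stream.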
func (p *peeker) Read() token {
ret := p.tokens[p.pos]
if ret.Type != tokenEOF {
p.pos++
}
return ret
}

View File

@ -1,94 +0,0 @@
package json
import (
"fmt"
"io/ioutil"
"os"
"github.com/hashicorp/hcl/v2"
)
// Parse attempts to parse the given buffer as JSON and, if successful, returns
// a hcl.File for the HCL configuration represented by it.
//
// This is not a generic JSON parser. Instead, it deals only with the profile
// of JSON used to express HCL configuration.
//
// The returned file is valid only if the returned diagnostics' HasErrors
// method returns false. If HasErrors returns true, the file represents the
// subset of data that was able to be parsed, which may be none.
func Parse(src []byte, filename string) (*hcl.File, hcl.Diagnostics) {
rootNode, diags := parseFileContent(src, filename)
switch rootNode.(type) {
case *objectVal, *arrayVal:
// okay
default:
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Root value must be object",
Detail: "The root value in a JSON-based configuration must be either a JSON object or a JSON array of objects.",
Subject: rootNode.StartRange().Ptr(),
})
// Since we've already produced an error message for this being
// invalid, we'll return an empty placeholder here so that trying to
// extract content from our root body won't produce a redundant
// error saying the same thing again in more general terms.
fakePos := hcl.Pos{
Byte: 0,
Line: 1,
Column: 1,
}
fakeRange := hcl.Range{
Filename: filename,
Start: fakePos,
End: fakePos,
}
rootNode = &objectVal{
Attrs: []*objectAttr{},
SrcRange: fakeRange,
OpenRange: fakeRange,
}
}
file := &hcl.File{
Body: &body{
val: rootNode,
},
Bytes: src,
Nav: navigation{rootNode},
}
return file, diags
}
// ParseFile is a convenience wrapper around Parse that first attempts to load
// data from the given filename, passing the result to Parse if successful.
//
// If the file cannot be read, an error diagnostic with nil context is returned.
func ParseFile(filename string) (*hcl.File, hcl.Diagnostics) {
f, err := os.Open(filename)
if err != nil {
return nil, hcl.Diagnostics{
{
Severity: hcl.DiagError,
Summary: "Failed to open file",
Detail: fmt.Sprintf("The file %q could not be opened.", filename),
},
}
}
defer f.Close()
src, err := ioutil.ReadAll(f)
if err != nil {
return nil, hcl.Diagnostics{
{
Severity: hcl.DiagError,
Summary: "Failed to read file",
Detail: fmt.Sprintf("The file %q was opened, but an error occured while reading it.", filename),
},
}
}
return Parse(src, filename)
}
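A minimal usage sketch (not part of this file; it assumes this module's github.com/hashicorp/hcl/v2 import path), showing a caller parsing a JSON buffer directly and reading its properties through the standard hcl.Body interface:

```go
package main

import (
	"fmt"

	hcljson "github.com/hashicorp/hcl/v2/json"
)

func main() {
	src := []byte(`{"io_mode": "async"}`)

	// Parse returns an *hcl.File whose Body is backed by this package.
	f, diags := hcljson.Parse(src, "example.hcl.json")
	if diags.HasErrors() {
		fmt.Println(diags.Error())
		return
	}

	// JustAttributes treats every object property as an attribute.
	attrs, diags := f.Body.JustAttributes()
	if diags.HasErrors() {
		fmt.Println(diags.Error())
		return
	}
	for name := range attrs {
		fmt.Println(name) // prints: io_mode
	}
}
```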

View File

@ -1,297 +0,0 @@
package json
import (
"fmt"
"github.com/apparentlymart/go-textseg/textseg"
"github.com/hashicorp/hcl/v2"
)
//go:generate stringer -type tokenType scanner.go
type tokenType rune
const (
tokenBraceO tokenType = '{'
tokenBraceC tokenType = '}'
tokenBrackO tokenType = '['
tokenBrackC tokenType = ']'
tokenComma tokenType = ','
tokenColon tokenType = ':'
tokenKeyword tokenType = 'K'
tokenString tokenType = 'S'
tokenNumber tokenType = 'N'
tokenEOF tokenType = '␄'
tokenInvalid tokenType = 0
tokenEquals tokenType = '=' // used only for reminding the user of JSON syntax
)
type token struct {
Type tokenType
Bytes []byte
Range hcl.Range
}
// scan returns the primary tokens for the given JSON buffer in sequence.
//
// The responsibility of this pass is to just mark the slices of the buffer
// as being of various types. It is lax in how it interprets the multi-byte
// token types keyword, string and number, preferring to capture erroneous
// extra bytes that we presume the user intended to be part of the token
// so that we can generate more helpful diagnostics in the parser.
func scan(buf []byte, start pos) []token {
var tokens []token
p := start
for {
if len(buf) == 0 {
tokens = append(tokens, token{
Type: tokenEOF,
Bytes: nil,
Range: posRange(p, p),
})
return tokens
}
buf, p = skipWhitespace(buf, p)
if len(buf) == 0 {
tokens = append(tokens, token{
Type: tokenEOF,
Bytes: nil,
Range: posRange(p, p),
})
return tokens
}
start = p
first := buf[0]
switch {
case first == '{' || first == '}' || first == '[' || first == ']' || first == ',' || first == ':' || first == '=':
p.Pos.Column++
p.Pos.Byte++
tokens = append(tokens, token{
Type: tokenType(first),
Bytes: buf[0:1],
Range: posRange(start, p),
})
buf = buf[1:]
case first == '"':
var tokBuf []byte
tokBuf, buf, p = scanString(buf, p)
tokens = append(tokens, token{
Type: tokenString,
Bytes: tokBuf,
Range: posRange(start, p),
})
case byteCanStartNumber(first):
var tokBuf []byte
tokBuf, buf, p = scanNumber(buf, p)
tokens = append(tokens, token{
Type: tokenNumber,
Bytes: tokBuf,
Range: posRange(start, p),
})
case byteCanStartKeyword(first):
var tokBuf []byte
tokBuf, buf, p = scanKeyword(buf, p)
tokens = append(tokens, token{
Type: tokenKeyword,
Bytes: tokBuf,
Range: posRange(start, p),
})
default:
tokens = append(tokens, token{
Type: tokenInvalid,
Bytes: buf[:1],
Range: start.Range(1, 1),
})
// If we've encountered an invalid then we might as well stop
// scanning since the parser won't proceed beyond this point.
return tokens
}
}
}
func byteCanStartNumber(b byte) bool {
switch b {
// We are slightly more tolerant than JSON requires here since we
// expect the parser will make a stricter interpretation of the
// number bytes, but we specifically don't allow 'e' or 'E' here
// since we want the scanner to treat that as the start of an
// invalid keyword instead, to produce more intelligible error messages.
case '-', '+', '.', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9':
return true
default:
return false
}
}
func scanNumber(buf []byte, start pos) ([]byte, []byte, pos) {
// The scanner doesn't check that the sequence of digit-ish bytes is
// in a valid order. The parser must do this when decoding a number
// token.
var i int
p := start
Byte:
for i = 0; i < len(buf); i++ {
switch buf[i] {
case '-', '+', '.', 'e', 'E', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9':
p.Pos.Byte++
p.Pos.Column++
default:
break Byte
}
}
return buf[:i], buf[i:], p
}
func byteCanStartKeyword(b byte) bool {
switch {
// We allow any sequence of alphabetical characters here, even though
// JSON is more constrained, so that we can collect what we presume
// the user intended to be a single keyword and then check its validity
// in the parser, where we can generate better diagnostics.
// So e.g. we want to be able to say:
// unrecognized keyword "True". Did you mean "true"?
case isAlphabetical(b):
return true
default:
return false
}
}
func scanKeyword(buf []byte, start pos) ([]byte, []byte, pos) {
var i int
p := start
Byte:
for i = 0; i < len(buf); i++ {
b := buf[i]
switch {
case isAlphabetical(b) || b == '_':
p.Pos.Byte++
p.Pos.Column++
default:
break Byte
}
}
return buf[:i], buf[i:], p
}
func scanString(buf []byte, start pos) ([]byte, []byte, pos) {
// The scanner doesn't validate correct use of escapes, etc. It pays
// attention to escapes only for the purpose of identifying the closing
// quote character. It's the parser's responsibility to do proper
// validation.
//
// The scanner also doesn't specifically detect unterminated string
// literals, though they can be identified in the parser by checking if
// the final byte in a string token is the double-quote character.
// Skip the opening quote symbol
i := 1
p := start
p.Pos.Byte++
p.Pos.Column++
escaping := false
Byte:
for i < len(buf) {
b := buf[i]
switch {
case b == '\\':
escaping = !escaping
p.Pos.Byte++
p.Pos.Column++
i++
case b == '"':
p.Pos.Byte++
p.Pos.Column++
i++
if !escaping {
break Byte
}
escaping = false
case b < 32:
break Byte
default:
// Advance by one grapheme cluster, so that we consider each
// grapheme to be a "column".
// Ignoring error because this scanner cannot produce errors.
advance, _, _ := textseg.ScanGraphemeClusters(buf[i:], true)
p.Pos.Byte += advance
p.Pos.Column++
i += advance
escaping = false
}
}
return buf[:i], buf[i:], p
}
func skipWhitespace(buf []byte, start pos) ([]byte, pos) {
var i int
p := start
Byte:
for i = 0; i < len(buf); i++ {
switch buf[i] {
case ' ':
p.Pos.Byte++
p.Pos.Column++
case '\n':
p.Pos.Byte++
p.Pos.Column = 1
p.Pos.Line++
case '\r':
// For the purpose of line/column counting we consider a
// carriage return to take up no space, assuming that it will
// be paired up with a newline (on Windows, for example) that
// will account for both of them.
p.Pos.Byte++
case '\t':
// We arbitrarily count a tab as if it were two spaces, because
// we need to choose _some_ number here. This means any system
// that renders code on-screen with markers must itself treat
// tabs as a pair of spaces for rendering purposes, or instead
// use the byte offset and back into its own column position.
p.Pos.Byte++
p.Pos.Column += 2
default:
break Byte
}
}
return buf[i:], p
}
type pos struct {
Filename string
Pos hcl.Pos
}
func (p *pos) Range(byteLen, charLen int) hcl.Range {
start := p.Pos
end := p.Pos
end.Byte += byteLen
end.Column += charLen
return hcl.Range{
Filename: p.Filename,
Start: start,
End: end,
}
}
func posRange(start, end pos) hcl.Range {
return hcl.Range{
Filename: start.Filename,
Start: start.Pos,
End: end.Pos,
}
}
func (t token) GoString() string {
return fmt.Sprintf("json.token{json.%s, []byte(%q), %#v}", t.Type, t.Bytes, t.Range)
}
func isAlphabetical(b byte) bool {
return (b >= 'a' && b <= 'z') || (b >= 'A' && b <= 'Z')
}

View File

@ -1,405 +0,0 @@
# HCL JSON Syntax Specification
This is the specification for the JSON serialization for HCL. HCL is a system
for defining configuration languages for applications. The HCL information
model is designed to support multiple concrete syntaxes for configuration,
and this JSON-based format complements [the native syntax](../hclsyntax/spec.md)
by being easy to machine-generate, whereas the native syntax is oriented
towards human authoring and maintenance.
This syntax is defined in terms of JSON as defined in
[RFC7159](https://tools.ietf.org/html/rfc7159). As such it inherits the JSON
grammar as-is, and merely defines a specific methodology for interpreting
JSON constructs into HCL structural elements and expressions.
This mapping is defined such that valid JSON-serialized HCL input can be
_produced_ using standard JSON implementations in various programming languages.
_Parsing_ such JSON has some additional constraints not normally imposed by
JSON parsers, so a specialized parser may be required that is able to:
- Preserve the relative ordering of properties defined in an object.
- Preserve multiple definitions of the same property name.
- Preserve numeric values to the precision required by the number type
in [the HCL syntax-agnostic information model](../spec.md).
- Retain source location information for parsed tokens/constructs in order
to produce good error messages.
## Structural Elements
[The HCL syntax-agnostic information model](../spec.md) defines a _body_ as an
abstract container for attribute definitions and child blocks. A body is
represented in JSON as either a single JSON object or a JSON array of objects.
Body processing is in terms of JSON object properties, visited in the order
they appear in the input. Where a body is represented by a single JSON object,
the properties of that object are visited in order. Where a body is
represented by a JSON array, each of its elements is visited in order and
each element has its properties visited in order. If any element of the array
is not a JSON object then the input is erroneous.
When a body is being processed in the _dynamic attributes_ mode, the allowance
of a JSON array in the previous paragraph does not apply and instead a single
JSON object is always required.
As defined in the language-agnostic model, body processing is in terms
of a schema which provides context for interpreting the body's content. For
JSON bodies, the schema is crucial to allow differentiation of attribute
definitions and block definitions, both of which are represented via object
properties.
The special property name `"//"`, when used in an object representing a HCL
body, is parsed and ignored. A property with this name can be used to
include human-readable comments. (This special property name is _not_
processed in this way for any _other_ HCL constructs that are represented as
JSON objects.)
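For example, given a hypothetical attribute named `io_mode`, a body object can
carry a comment alongside its real content:

```json
{
  "//": "This property is ignored, so it can hold a human-readable note.",
  "io_mode": "async"
}
```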
### Attributes
Where the given schema describes an attribute with a given name, the object
property with the matching name — if present — serves as the attribute's
definition.
When a body is being processed in the _dynamic attributes_ mode, each object
property serves as an attribute definition for the attribute whose name
matches the property name.
The value of an attribute definition property is interpreted as an _expression_,
as described in a later section.
Given a schema that calls for an attribute named "foo", a JSON object like
the following provides a definition for that attribute:
```json
{
"foo": "bar baz"
}
```
### Blocks
Where the given schema describes a block with a given type name, each object
property with the matching name serves as a definition of zero or more blocks
of that type.
Processing of child blocks is in terms of nested JSON objects and arrays.
If the schema defines one or more _labels_ for the block type, a nested JSON
object or JSON array of objects is required for each labelling level. These
are flattened to a single ordered sequence of object properties using the
same algorithm as for body content as defined above. Each object property
serves as a label value at the corresponding level.
After any labelling levels, the next nested value is either a JSON object
representing a single block body, or a JSON array of JSON objects that each
represent a single block body. Use of an array accommodates the definition
of multiple blocks that have identical type and labels.
Given a schema that calls for a block type named "foo" with no labels, the
following JSON objects are all valid definitions of zero or more blocks of this
type:
```json
{
"foo": {
"child_attr": "baz"
}
}
```
```json
{
"foo": [
{
"child_attr": "baz"
},
{
"child_attr": "boz"
}
]
}
```
```json
{
"foo": []
}
```
The first of these defines a single child block of type "foo". The second
defines _two_ such blocks. The final example shows a degenerate definition
of zero blocks, though generators should prefer to omit the property entirely
in this scenario.
Given a schema that calls for a block type named "foo" with _two_ labels, the
extra label levels must be represented as objects or arrays of objects as in
the following examples:
```json
{
"foo": {
"bar": {
"baz": {
"child_attr": "baz"
},
"boz": {
"child_attr": "baz"
}
},
"boz": {
"baz": {
"child_attr": "baz"
}
}
}
}
```
```json
{
"foo": {
"bar": {
"baz": {
"child_attr": "baz"
},
"boz": {
"child_attr": "baz"
}
},
"boz": {
"baz": [
{
"child_attr": "baz"
},
{
"child_attr": "boz"
}
]
}
}
}
```
```json
{
"foo": [
{
"bar": {
"baz": {
"child_attr": "baz"
},
"boz": {
"child_attr": "baz"
}
}
},
{
"bar": {
"baz": [
{
"child_attr": "baz"
},
{
"child_attr": "boz"
}
]
}
}
]
}
```
```json
{
"foo": {
"bar": {
"baz": {
"child_attr": "baz"
},
"boz": {
"child_attr": "baz"
}
},
"bar": {
"baz": [
{
"child_attr": "baz"
},
{
"child_attr": "boz"
}
]
}
}
}
```
Arrays can be introduced at either the label definition or block body
definition levels to define multiple definitions of the same block type
or labels while preserving order.
A JSON HCL parser _must_ support duplicate definitions of the same property
name within a single object, preserving all of them and the relative ordering
between them. The array-based forms are also required so that JSON HCL
configurations can be produced with JSON producing libraries that are not
able to preserve property definition order and multiple definitions of
the same property.
## Expressions
JSON lacks a native expression syntax, so the HCL JSON syntax instead defines
a mapping for each of the JSON value types, including a special mapping for
strings that allows optional use of arbitrary expressions.
### Objects
When interpreted as an expression, a JSON object represents a value of a HCL
object type.
Each property of the JSON object represents an attribute of the HCL object type.
The property name string given in the JSON input is interpreted as a string
expression as described below, and its result is converted to string as defined
by the syntax-agnostic information model. If such a conversion is not possible,
an error is produced and evaluation fails.
An instance of the constructed object type is then created, whose values
are interpreted by again recursively applying the mapping rules defined in
this section to each of the property values.
If any evaluated property name strings produce null values, an error is
produced and evaluation fails. If any produce _unknown_ values, the _entire
object's_ result is an unknown value of the dynamic pseudo-type, signalling
that the type of the object cannot be determined.
It is an error to define the same property name multiple times within a single
JSON object interpreted as an expression. In full expression mode, this
constraint applies to the name expression results after conversion to string,
rather than the raw string that may contain interpolation expressions.
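For example, in full expression mode the following hypothetical object
(assuming a string variable `env` is in scope) constructs an attribute whose
name is computed from `env`:

```json
{
  "${env}_port": 8080
}
```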
### Arrays
When interpreted as an expression, a JSON array represents a value of a HCL
tuple type.
Each element of the JSON array represents an element of the HCL tuple type.
The tuple type is constructed by enumerating the JSON array elements, creating
for each an element whose type is the result of recursively applying the
expression mapping rules. Correspondence is preserved between the array element
indices and the tuple element indices.
An instance of the constructed tuple type is then created, whose values are
interpreted by again recursively applying the mapping rules defined in this
section.
### Numbers
When interpreted as an expression, a JSON number represents a HCL number value.
HCL numbers are arbitrary-precision decimal values, so a JSON HCL parser must
be able to translate exactly the value given to a number of corresponding
precision, within the constraints set by the HCL syntax-agnostic information
model.
In practice, off-the-shelf JSON serializers often do not support customizing the
processing of numbers, and instead force processing as 32-bit or 64-bit
floating point values.
A _producer_ of JSON HCL that uses such a serializer can provide numeric values
as JSON strings where they have precision too great for representation in the
serializer's chosen numeric type in situations where the result will be
converted to number (using the standard conversion rules) by a calling
application.
Alternatively, for expressions that are evaluated in full expression mode an
embedded template interpolation can be used to faithfully represent a number,
such as `"${1e150}"`, which will then be evaluated by the underlying HCL native
syntax expression evaluator.
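For example, a producer restricted to IEEE 754 double precision might emit a
large integer as a JSON string (the property name here is hypothetical) and
rely on the calling application to convert the result to number:

```json
{
  "sequence_number": "9223372036854775807"
}
```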
### Boolean Values
The JSON boolean values `true` and `false`, when interpreted as expressions,
represent the corresponding HCL boolean values.
### The Null Value
The JSON value `null`, when interpreted as an expression, represents a
HCL null value of the dynamic pseudo-type.
### Strings
When interpreted as an expression, a JSON string may be interpreted in one of
two ways depending on the evaluation mode.
If evaluating in literal-only mode (as defined by the syntax-agnostic
information model) the literal string is interpreted directly as a HCL string
value, by directly using the exact sequence of unicode characters represented.
Template interpolations and directives MUST NOT be processed in this mode,
allowing any characters that appear as introduction sequences to pass through
literally:
```json
"Hello world! Template sequences like ${ are not intepreted here."
```
When evaluating in full expression mode (again, as defined by the syntax-
agnostic information model) the literal string is instead interpreted as a
_standalone template_ in the HCL Native Syntax. The expression evaluation
result is then the direct result of evaluating that template with the current
variable scope and function table.
```json
"Hello, ${name}! Template sequences are interpreted in full expression mode."
```
In particular the _Template Interpolation Unwrapping_ requirement from the
HCL native syntax specification must be implemented, allowing the use of
single-interpolation templates to represent expressions that would not
otherwise be representable in JSON, such as the following example where
the result must be a number, rather than a string representation of a number:
```json
"${ a + b }"
```
## Static Analysis
The HCL static analysis operations are implemented for JSON values that
represent expressions, as described in the following sections.
Because the JSON syntax alone has limited expressive power, a caller's choice
to use one of these static analysis functions rather than normal expression
evaluation serves as additional context for how a JSON value is to be
interpreted. A static analysis can therefore produce a different
interpretation of a given expression than normal evaluation would.
### Static List
An expression interpreted as a static list must be a JSON array. Each of the
values in the array is interpreted as an expression and returned.
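For example, a hypothetical `depends_on` attribute interpreted as a static
list would be given as a JSON array, and each element would be returned as an
expression:

```json
{
  "depends_on": ["aws_instance.example", "aws_eip.example"]
}
```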
### Static Map
An expression interpreted as a static map must be a JSON object. Each of the
key/value pairs in the object is presented as a pair of expressions. Since
object property names are always strings, evaluating the key expression with
a non-`nil` evaluation context will evaluate any template sequences given
in the property name.
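For example, a hypothetical `tags` attribute interpreted as a static map
yields one key/value expression pair per property; with a non-`nil` context,
the template in the second key below would be evaluated:

```json
{
  "tags": {
    "Name": "example",
    "${extra_key}": "example"
  }
}
```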
### Static Call
An expression interpreted as a static call must be a string. The content of
the string is interpreted as a native syntax expression (not a _template_,
unlike normal evaluation) and then the static call analysis is delegated to
that expression.
If the original expression is not a string or its contents cannot be parsed
as a native syntax expression then static call analysis is not supported.
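For example, the following hypothetical attribute value would be parsed as a
native syntax expression and analyzed as a call to `jsonencode` with the
single argument `config`:

```json
{
  "default": "jsonencode(config)"
}
```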
### Static Traversal
An expression interpreted as a static traversal must be a string. The content
of the string is interpreted as a native syntax expression (not a _template_,
unlike normal evaluation) and then static traversal analysis is delegated
to that expression.
If the original expression is not a string or its contents cannot be parsed
as a native syntax expression then static traversal analysis is not supported.
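For example, the following hypothetical attribute value would be parsed as a
native syntax expression and analyzed as an absolute traversal through
`aws_instance`, `example`, and `id`:

```json
{
  "replace_triggered_by": "aws_instance.example.id"
}
```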

View File

@ -1,637 +0,0 @@
package json
import (
"fmt"
"github.com/hashicorp/hcl/v2"
"github.com/hashicorp/hcl/v2/hclsyntax"
"github.com/zclconf/go-cty/cty"
"github.com/zclconf/go-cty/cty/convert"
)
// body is the implementation of "Body" used for files processed with the JSON
// parser.
type body struct {
val node
// If non-nil, the keys of this map cause the corresponding attributes to
// be treated as non-existing. This is used when Body.PartialContent is
// called, to produce the "remaining content" Body.
hiddenAttrs map[string]struct{}
}
// expression is the implementation of "Expression" used for files processed
// with the JSON parser.
type expression struct {
src node
}
func (b *body) Content(schema *hcl.BodySchema) (*hcl.BodyContent, hcl.Diagnostics) {
content, newBody, diags := b.PartialContent(schema)
hiddenAttrs := newBody.(*body).hiddenAttrs
var nameSuggestions []string
for _, attrS := range schema.Attributes {
if _, ok := hiddenAttrs[attrS.Name]; !ok {
// Only suggest an attribute name if we didn't use it already.
nameSuggestions = append(nameSuggestions, attrS.Name)
}
}
for _, blockS := range schema.Blocks {
// Blocks can appear multiple times, so we'll suggest their type
// names regardless of whether they've already been used.
nameSuggestions = append(nameSuggestions, blockS.Type)
}
jsonAttrs, attrDiags := b.collectDeepAttrs(b.val, nil)
diags = append(diags, attrDiags...)
for _, attr := range jsonAttrs {
k := attr.Name
if k == "//" {
// Ignore "//" keys in objects representing bodies, to allow
// their use as comments.
continue
}
if _, ok := hiddenAttrs[k]; !ok {
suggestion := nameSuggestion(k, nameSuggestions)
if suggestion != "" {
suggestion = fmt.Sprintf(" Did you mean %q?", suggestion)
}
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Extraneous JSON object property",
Detail: fmt.Sprintf("No argument or block type is named %q.%s", k, suggestion),
Subject: &attr.NameRange,
Context: attr.Range().Ptr(),
})
}
}
return content, diags
}
func (b *body) PartialContent(schema *hcl.BodySchema) (*hcl.BodyContent, hcl.Body, hcl.Diagnostics) {
var diags hcl.Diagnostics
jsonAttrs, attrDiags := b.collectDeepAttrs(b.val, nil)
diags = append(diags, attrDiags...)
usedNames := map[string]struct{}{}
if b.hiddenAttrs != nil {
for k := range b.hiddenAttrs {
usedNames[k] = struct{}{}
}
}
content := &hcl.BodyContent{
Attributes: map[string]*hcl.Attribute{},
Blocks: nil,
MissingItemRange: b.MissingItemRange(),
}
// Create some more convenient data structures for our work below.
attrSchemas := map[string]hcl.AttributeSchema{}
blockSchemas := map[string]hcl.BlockHeaderSchema{}
for _, attrS := range schema.Attributes {
attrSchemas[attrS.Name] = attrS
}
for _, blockS := range schema.Blocks {
blockSchemas[blockS.Type] = blockS
}
for _, jsonAttr := range jsonAttrs {
attrName := jsonAttr.Name
if _, used := b.hiddenAttrs[attrName]; used {
continue
}
if attrS, defined := attrSchemas[attrName]; defined {
if existing, exists := content.Attributes[attrName]; exists {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Duplicate argument",
Detail: fmt.Sprintf("The argument %q was already set at %s.", attrName, existing.Range),
Subject: &jsonAttr.NameRange,
Context: jsonAttr.Range().Ptr(),
})
continue
}
content.Attributes[attrS.Name] = &hcl.Attribute{
Name: attrS.Name,
Expr: &expression{src: jsonAttr.Value},
Range: hcl.RangeBetween(jsonAttr.NameRange, jsonAttr.Value.Range()),
NameRange: jsonAttr.NameRange,
}
usedNames[attrName] = struct{}{}
} else if blockS, defined := blockSchemas[attrName]; defined {
bv := jsonAttr.Value
blockDiags := b.unpackBlock(bv, blockS.Type, &jsonAttr.NameRange, blockS.LabelNames, nil, nil, &content.Blocks)
diags = append(diags, blockDiags...)
usedNames[attrName] = struct{}{}
}
// We ignore anything that isn't defined because that's the
// PartialContent contract. The Content method will catch leftovers.
}
// Make sure we got all the required attributes.
for _, attrS := range schema.Attributes {
if !attrS.Required {
continue
}
if _, defined := content.Attributes[attrS.Name]; !defined {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Missing required argument",
Detail: fmt.Sprintf("The argument %q is required, but no definition was found.", attrS.Name),
Subject: b.MissingItemRange().Ptr(),
})
}
}
unusedBody := &body{
val: b.val,
hiddenAttrs: usedNames,
}
return content, unusedBody, diags
}
// JustAttributes for JSON bodies interprets all properties of the wrapped
// JSON object as attributes and returns them.
func (b *body) JustAttributes() (hcl.Attributes, hcl.Diagnostics) {
var diags hcl.Diagnostics
attrs := make(map[string]*hcl.Attribute)
obj, ok := b.val.(*objectVal)
if !ok {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Incorrect JSON value type",
Detail: "A JSON object is required here, setting the arguments for this block.",
Subject: b.val.StartRange().Ptr(),
})
return attrs, diags
}
for _, jsonAttr := range obj.Attrs {
name := jsonAttr.Name
if name == "//" {
// Ignore "//" keys in objects representing bodies, to allow
// their use as comments.
continue
}
if _, hidden := b.hiddenAttrs[name]; hidden {
continue
}
if existing, exists := attrs[name]; exists {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Duplicate attribute definition",
Detail: fmt.Sprintf("The argument %q was already set at %s.", name, existing.Range),
Subject: &jsonAttr.NameRange,
})
continue
}
attrs[name] = &hcl.Attribute{
Name: name,
Expr: &expression{src: jsonAttr.Value},
Range: hcl.RangeBetween(jsonAttr.NameRange, jsonAttr.Value.Range()),
NameRange: jsonAttr.NameRange,
}
}
// No diagnostics possible here, since the parser already took care of
// finding duplicates and every JSON value can be a valid attribute value.
return attrs, diags
}
func (b *body) MissingItemRange() hcl.Range {
switch tv := b.val.(type) {
case *objectVal:
return tv.CloseRange
case *arrayVal:
return tv.OpenRange
default:
// Should not happen in correct operation, but might show up if the
// input is invalid and we are producing partial results.
return tv.StartRange()
}
}
func (b *body) unpackBlock(v node, typeName string, typeRange *hcl.Range, labelsLeft []string, labelsUsed []string, labelRanges []hcl.Range, blocks *hcl.Blocks) (diags hcl.Diagnostics) {
if len(labelsLeft) > 0 {
labelName := labelsLeft[0]
jsonAttrs, attrDiags := b.collectDeepAttrs(v, &labelName)
diags = append(diags, attrDiags...)
if len(jsonAttrs) == 0 {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Missing block label",
Detail: fmt.Sprintf("At least one object property is required, whose name represents the %s block's %s.", typeName, labelName),
Subject: v.StartRange().Ptr(),
})
return
}
labelsUsed := append(labelsUsed, "")
labelRanges := append(labelRanges, hcl.Range{})
for _, p := range jsonAttrs {
pk := p.Name
labelsUsed[len(labelsUsed)-1] = pk
labelRanges[len(labelRanges)-1] = p.NameRange
diags = append(diags, b.unpackBlock(p.Value, typeName, typeRange, labelsLeft[1:], labelsUsed, labelRanges, blocks)...)
}
return
}
// By the time we get here, we've peeled off all the labels and we're ready
// to deal with the block's actual content.
// need to copy the label slices because their underlying arrays will
// continue to be mutated after we return.
labels := make([]string, len(labelsUsed))
copy(labels, labelsUsed)
labelR := make([]hcl.Range, len(labelRanges))
copy(labelR, labelRanges)
switch tv := v.(type) {
case *nullVal:
// There is no block content, e.g. the value is null.
return
case *objectVal:
// Single instance of the block
*blocks = append(*blocks, &hcl.Block{
Type: typeName,
Labels: labels,
Body: &body{
val: tv,
},
DefRange: tv.OpenRange,
TypeRange: *typeRange,
LabelRanges: labelR,
})
case *arrayVal:
// Multiple instances of the block
for _, av := range tv.Values {
*blocks = append(*blocks, &hcl.Block{
Type: typeName,
Labels: labels,
Body: &body{
val: av, // might be mistyped; we'll find out when content is requested for this body
},
DefRange: tv.OpenRange,
TypeRange: *typeRange,
LabelRanges: labelR,
})
}
default:
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Incorrect JSON value type",
Detail: fmt.Sprintf("Either a JSON object or a JSON array is required, representing the contents of one or more %q blocks.", typeName),
Subject: v.StartRange().Ptr(),
})
}
return
}
// collectDeepAttrs takes either a single object or an array of objects and
// flattens it into a list of object attributes, collecting attributes from
// all of the objects in a given array.
//
// Ordering is preserved, so a list of objects that each have one property
// will result in those properties being returned in the same order as the
// objects appeared in the array.
//
// This is appropriate for use only for objects representing bodies or labels
// within a block.
//
// The labelName argument, if non-null, is used to tailor returned error
// messages to refer to block labels rather than attributes and child blocks.
// It has no other effect.
func (b *body) collectDeepAttrs(v node, labelName *string) ([]*objectAttr, hcl.Diagnostics) {
var diags hcl.Diagnostics
var attrs []*objectAttr
switch tv := v.(type) {
case *nullVal:
// If the value is null then we return neither attributes nor an error.
case *objectVal:
attrs = append(attrs, tv.Attrs...)
case *arrayVal:
for _, ev := range tv.Values {
switch tev := ev.(type) {
case *objectVal:
attrs = append(attrs, tev.Attrs...)
default:
if labelName != nil {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Incorrect JSON value type",
Detail: fmt.Sprintf("A JSON object is required here, to specify %s labels for this block.", *labelName),
Subject: ev.StartRange().Ptr(),
})
} else {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Incorrect JSON value type",
Detail: "A JSON object is required here, to define arguments and child blocks.",
Subject: ev.StartRange().Ptr(),
})
}
}
}
default:
if labelName != nil {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Incorrect JSON value type",
Detail: fmt.Sprintf("Either a JSON object or JSON array of objects is required here, to specify %s labels for this block.", *labelName),
Subject: v.StartRange().Ptr(),
})
} else {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Incorrect JSON value type",
Detail: "Either a JSON object or JSON array of objects is required here, to define arguments and child blocks.",
Subject: v.StartRange().Ptr(),
})
}
}
return attrs, diags
}
func (e *expression) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) {
switch v := e.src.(type) {
case *stringVal:
if ctx != nil {
// Parse string contents as a HCL native language expression.
// We only do this if we have a context, so passing a nil context
// is how the caller specifies that interpolations are not allowed
// and that the string should just be returned verbatim.
templateSrc := v.Value
expr, diags := hclsyntax.ParseTemplate(
[]byte(templateSrc),
v.SrcRange.Filename,
// This won't produce _exactly_ the right result, since
// the hclsyntax parser can't "see" any escapes we removed
// while parsing JSON, but it's better than nothing.
hcl.Pos{
Line: v.SrcRange.Start.Line,
// skip over the opening quote mark
Byte: v.SrcRange.Start.Byte + 1,
Column: v.SrcRange.Start.Column + 1,
},
)
if diags.HasErrors() {
return cty.DynamicVal, diags
}
val, evalDiags := expr.Value(ctx)
diags = append(diags, evalDiags...)
return val, diags
}
return cty.StringVal(v.Value), nil
case *numberVal:
return cty.NumberVal(v.Value), nil
case *booleanVal:
return cty.BoolVal(v.Value), nil
case *arrayVal:
var diags hcl.Diagnostics
vals := []cty.Value{}
for _, jsonVal := range v.Values {
val, valDiags := (&expression{src: jsonVal}).Value(ctx)
vals = append(vals, val)
diags = append(diags, valDiags...)
}
return cty.TupleVal(vals), diags
case *objectVal:
var diags hcl.Diagnostics
attrs := map[string]cty.Value{}
attrRanges := map[string]hcl.Range{}
known := true
for _, jsonAttr := range v.Attrs {
// In this one context we allow keys to contain interpolation
// expressions too, assuming we're evaluating in interpolation
// mode. This achieves parity with the native syntax where
// object expressions can have dynamic keys, while block contents
// may not.
name, nameDiags := (&expression{src: &stringVal{
Value: jsonAttr.Name,
SrcRange: jsonAttr.NameRange,
}}).Value(ctx)
valExpr := &expression{src: jsonAttr.Value}
val, valDiags := valExpr.Value(ctx)
diags = append(diags, nameDiags...)
diags = append(diags, valDiags...)
var err error
name, err = convert.Convert(name, cty.String)
if err != nil {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid object key expression",
Detail: fmt.Sprintf("Cannot use this expression as an object key: %s.", err),
Subject: &jsonAttr.NameRange,
Expression: valExpr,
EvalContext: ctx,
})
continue
}
if name.IsNull() {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid object key expression",
Detail: "Cannot use null value as an object key.",
Subject: &jsonAttr.NameRange,
Expression: valExpr,
EvalContext: ctx,
})
continue
}
if !name.IsKnown() {
// This is a bit of a weird case, since our usual rules require
// us to tolerate unknowns and just represent the result as
// best we can but if we don't know the key then we can't
// know the type of our object at all, and thus we must turn
// the whole thing into cty.DynamicVal. This is consistent with
// how this situation is handled in the native syntax.
// We'll keep iterating so we can collect other errors in
// subsequent attributes.
known = false
continue
}
nameStr := name.AsString()
if _, defined := attrs[nameStr]; defined {
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Duplicate object attribute",
Detail: fmt.Sprintf("An attribute named %q was already defined at %s.", nameStr, attrRanges[nameStr]),
Subject: &jsonAttr.NameRange,
Expression: e,
EvalContext: ctx,
})
continue
}
attrs[nameStr] = val
attrRanges[nameStr] = jsonAttr.NameRange
}
if !known {
// We encountered an unknown key somewhere along the way, so
// we can't know what our type will eventually be.
return cty.DynamicVal, diags
}
return cty.ObjectVal(attrs), diags
case *nullVal:
return cty.NullVal(cty.DynamicPseudoType), nil
default:
// Default to DynamicVal so that ASTs containing invalid nodes can
// still be partially-evaluated.
return cty.DynamicVal, nil
}
}
func (e *expression) Variables() []hcl.Traversal {
var vars []hcl.Traversal
switch v := e.src.(type) {
case *stringVal:
templateSrc := v.Value
expr, diags := hclsyntax.ParseTemplate(
[]byte(templateSrc),
v.SrcRange.Filename,
// This won't produce _exactly_ the right result, since
// the hclsyntax parser can't "see" any escapes we removed
// while parsing JSON, but it's better than nothing.
hcl.Pos{
Line: v.SrcRange.Start.Line,
// skip over the opening quote mark
Byte: v.SrcRange.Start.Byte + 1,
Column: v.SrcRange.Start.Column + 1,
},
)
if diags.HasErrors() {
return vars
}
return expr.Variables()
case *arrayVal:
for _, jsonVal := range v.Values {
vars = append(vars, (&expression{src: jsonVal}).Variables()...)
}
case *objectVal:
for _, jsonAttr := range v.Attrs {
keyExpr := &stringVal{ // we're going to treat key as an expression in this context
Value: jsonAttr.Name,
SrcRange: jsonAttr.NameRange,
}
vars = append(vars, (&expression{src: keyExpr}).Variables()...)
vars = append(vars, (&expression{src: jsonAttr.Value}).Variables()...)
}
}
return vars
}
func (e *expression) Range() hcl.Range {
return e.src.Range()
}
func (e *expression) StartRange() hcl.Range {
return e.src.StartRange()
}
// Implementation for hcl.AbsTraversalForExpr.
func (e *expression) AsTraversal() hcl.Traversal {
// In JSON-based syntax a traversal is given as a string containing
// traversal syntax as defined by hclsyntax.ParseTraversalAbs.
switch v := e.src.(type) {
case *stringVal:
traversal, diags := hclsyntax.ParseTraversalAbs([]byte(v.Value), v.SrcRange.Filename, v.SrcRange.Start)
if diags.HasErrors() {
return nil
}
return traversal
default:
return nil
}
}
// Implementation for hcl.ExprCall.
func (e *expression) ExprCall() *hcl.StaticCall {
// In JSON-based syntax a static call is given as a string containing
// an expression in the native syntax that also supports ExprCall.
switch v := e.src.(type) {
case *stringVal:
expr, diags := hclsyntax.ParseExpression([]byte(v.Value), v.SrcRange.Filename, v.SrcRange.Start)
if diags.HasErrors() {
return nil
}
call, diags := hcl.ExprCall(expr)
if diags.HasErrors() {
return nil
}
return call
default:
return nil
}
}
// Implementation for hcl.ExprList.
func (e *expression) ExprList() []hcl.Expression {
switch v := e.src.(type) {
case *arrayVal:
ret := make([]hcl.Expression, len(v.Values))
for i, node := range v.Values {
ret[i] = &expression{src: node}
}
return ret
default:
return nil
}
}
// Implementation for hcl.ExprMap.
func (e *expression) ExprMap() []hcl.KeyValuePair {
switch v := e.src.(type) {
case *objectVal:
ret := make([]hcl.KeyValuePair, len(v.Attrs))
for i, jsonAttr := range v.Attrs {
ret[i] = hcl.KeyValuePair{
Key: &expression{src: &stringVal{
Value: jsonAttr.Name,
SrcRange: jsonAttr.NameRange,
}},
Value: &expression{src: jsonAttr.Value},
}
}
return ret
default:
return nil
}
}
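A minimal sketch (not part of this file; it assumes the import paths shown) of the behavior documented above: string attributes are evaluated as native syntax templates only when an EvalContext is supplied, and are returned verbatim otherwise:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl/v2"
	hcljson "github.com/hashicorp/hcl/v2/json"
	"github.com/zclconf/go-cty/cty"
)

func main() {
	src := []byte(`{"greeting": "Hello, ${name}!"}`)
	f, _ := hcljson.Parse(src, "example.hcl.json")
	attrs, _ := f.Body.JustAttributes()

	// With a context, the string is parsed as a template and evaluated.
	ctx := &hcl.EvalContext{
		Variables: map[string]cty.Value{
			"name": cty.StringVal("world"),
		},
	}
	val, _ := attrs["greeting"].Expr.Value(ctx)
	fmt.Println(val.AsString()) // Hello, world!

	// With a nil context, the string is returned verbatim.
	lit, _ := attrs["greeting"].Expr.Value(nil)
	fmt.Println(lit.AsString()) // Hello, ${name}!
}
```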

View File

@ -1,29 +0,0 @@
// Code generated by "stringer -type tokenType scanner.go"; DO NOT EDIT.
package json
import "strconv"
const _tokenType_name = "tokenInvalidtokenCommatokenColontokenEqualstokenKeywordtokenNumbertokenStringtokenBrackOtokenBrackCtokenBraceOtokenBraceCtokenEOF"
var _tokenType_map = map[tokenType]string{
0: _tokenType_name[0:12],
44: _tokenType_name[12:22],
58: _tokenType_name[22:32],
61: _tokenType_name[32:43],
75: _tokenType_name[43:55],
78: _tokenType_name[55:66],
83: _tokenType_name[66:77],
91: _tokenType_name[77:88],
93: _tokenType_name[88:99],
123: _tokenType_name[99:110],
125: _tokenType_name[110:121],
9220: _tokenType_name[121:129],
}
func (i tokenType) String() string {
if str, ok := _tokenType_map[i]; ok {
return str
}
return "tokenType(" + strconv.FormatInt(int64(i), 10) + ")"
}

View File

@ -21,6 +21,8 @@ import (
// though nil can be provided if the calling application is going to
// ignore the subject of the returned diagnostics anyway.
func Index(collection, key cty.Value, srcRange *Range) (cty.Value, Diagnostics) {
const invalidIndex = "Invalid index"
if collection.IsNull() {
return cty.DynamicVal, Diagnostics{
{
@ -35,7 +37,7 @@ func Index(collection, key cty.Value, srcRange *Range) (cty.Value, Diagnostics)
return cty.DynamicVal, Diagnostics{
{
Severity: DiagError,
Summary: "Invalid index",
Summary: invalidIndex,
Detail: "Can't use a null value as an indexing key.",
Subject: srcRange,
},
@ -66,7 +68,7 @@ func Index(collection, key cty.Value, srcRange *Range) (cty.Value, Diagnostics)
return cty.DynamicVal, Diagnostics{
{
Severity: DiagError,
Summary: "Invalid index",
Summary: invalidIndex,
Detail: fmt.Sprintf(
"The given key does not identify an element in this collection value: %s.",
keyErr.Error(),
@ -76,7 +78,10 @@ func Index(collection, key cty.Value, srcRange *Range) (cty.Value, Diagnostics)
}
}
has := collection.HasIndex(key)
// Here we drop marks from HasIndex result, in order to allow basic
// traversal of a marked list, tuple, or map in the same way we can
// traverse a marked object
has, _ := collection.HasIndex(key).Unmark()
if !has.IsKnown() {
if ty.IsTupleType() {
return cty.DynamicVal, nil
@ -85,31 +90,84 @@ func Index(collection, key cty.Value, srcRange *Range) (cty.Value, Diagnostics)
}
}
if has.False() {
// We have a more specialized error message for the situation of
// using a fractional number to index into a sequence, because
// that will tend to happen if the user is trying to use division
// to calculate an index and not realizing that HCL does float
// division rather than integer division.
if (ty.IsListType() || ty.IsTupleType()) && key.Type().Equals(cty.Number) {
if key.IsKnown() && !key.IsNull() {
// NOTE: we don't know what any marks might've represented
// up at the calling application layer, so we must avoid
// showing the literal number value in these error messages
// in case the mark represents something important, such as
// a value being "sensitive".
key, _ := key.Unmark()
bf := key.AsBigFloat()
if _, acc := bf.Int(nil); acc != big.Exact {
// We have a more specialized error message for the
// situation of using a fractional number to index into
// a sequence, because that will tend to happen if the
// user is trying to use division to calculate an index
// and not realizing that HCL does float division
// rather than integer division.
return cty.DynamicVal, Diagnostics{
{
Severity: DiagError,
Summary: "Invalid index",
Detail: fmt.Sprintf("The given key does not identify an element in this collection value: indexing a sequence requires a whole number, but the given index (%g) has a fractional part.", bf),
Summary: invalidIndex,
Detail: "The given key does not identify an element in this collection value: indexing a sequence requires a whole number, but the given index has a fractional part.",
Subject: srcRange,
},
}
}
if bf.Sign() < 0 {
// Some other languages allow negative indices to
// select "backwards" from the end of the sequence,
// but HCL doesn't do that in order to give better
// feedback if a dynamic index is calculated
// incorrectly.
return cty.DynamicVal, Diagnostics{
{
Severity: DiagError,
Summary: invalidIndex,
Detail: "The given key does not identify an element in this collection value: a negative number is not a valid index for a sequence.",
Subject: srcRange,
},
}
}
if lenVal := collection.Length(); lenVal.IsKnown() && !lenVal.IsMarked() {
// Length always returns a number, and we already
// checked that it's a known number, so this is safe.
lenBF := lenVal.AsBigFloat()
var result big.Float
result.Sub(bf, lenBF)
if result.Sign() < 1 {
if lenBF.Sign() == 0 {
return cty.DynamicVal, Diagnostics{
{
Severity: DiagError,
Summary: invalidIndex,
Detail: "The given key does not identify an element in this collection value: the collection has no elements.",
Subject: srcRange,
},
}
} else {
return cty.DynamicVal, Diagnostics{
{
Severity: DiagError,
Summary: invalidIndex,
Detail: "The given key does not identify an element in this collection value: the given index is greater than or equal to the length of the collection.",
Subject: srcRange,
},
}
}
}
}
}
}
// If this is not one of the special situations we handled above
// then we'll fall back on a very generic message.
return cty.DynamicVal, Diagnostics{
{
Severity: DiagError,
Summary: "Invalid index",
Summary: invalidIndex,
Detail: "The given key does not identify an element in this collection value.",
Subject: srcRange,
},
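All three of the new checks lean on math/big. A standalone sketch of the same classification order (a hypothetical helper, not part of this change; it assumes the lookup already failed, bf is the unmarked numeric key, and length is the known collection length):

// Assumes: import "math/big"
func classifyBadIndex(bf, length *big.Float) string {
	if _, acc := bf.Int(nil); acc != big.Exact {
		// HCL's / operator is float division, so length(x)/2 can yield 1.5.
		return "fractional index"
	}
	if bf.Sign() < 0 {
		// No Python-style "from the end" negative indexing in HCL.
		return "negative index"
	}
	if length.Sign() == 0 {
		return "empty collection"
	}
	return "index >= length"
}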
@ -119,12 +177,13 @@ func Index(collection, key cty.Value, srcRange *Range) (cty.Value, Diagnostics)
return collection.Index(key), nil
case ty.IsObjectType():
wasNumber := key.Type() == cty.Number
key, keyErr := convert.Convert(key, cty.String)
if keyErr != nil {
return cty.DynamicVal, Diagnostics{
{
Severity: DiagError,
Summary: "Invalid index",
Summary: invalidIndex,
Detail: fmt.Sprintf(
"The given key does not identify an element in this collection value: %s.",
keyErr.Error(),
@ -140,14 +199,24 @@ func Index(collection, key cty.Value, srcRange *Range) (cty.Value, Diagnostics)
return cty.DynamicVal, nil
}
key, _ = key.Unmark()
attrName := key.AsString()
if !ty.HasAttribute(attrName) {
var suggestion string
if wasNumber {
// We note this only as an addendum to an error we would've
// already returned anyway, because it is valid (albeit weird)
// to have an attribute whose name is just decimal digits
// and then access that attribute using a number whose
// decimal representation is the same digits.
suggestion = " An object only supports looking up attributes by name, not by numeric index."
}
return cty.DynamicVal, Diagnostics{
{
Severity: DiagError,
Summary: "Invalid index",
Detail: "The given key does not identify an element in this collection value.",
Summary: invalidIndex,
Detail: fmt.Sprintf("The given key does not identify an element in this collection value.%s", suggestion),
Subject: srcRange,
},
}
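The wasNumber hint is only a hint because a numeric key isn't always a mistake: convert.Convert renders a cty number as its decimal string, so an object attribute whose name is digits can be fetched by number. A sketch of that corner case (go-cty and its convert package assumed):

// Assumes: import ( "github.com/zclconf/go-cty/cty"; "github.com/zclconf/go-cty/cty/convert" )
func numericAttrLookup() cty.Value {
	obj := cty.ObjectVal(map[string]cty.Value{
		"0": cty.True, // unusual but legal: an attribute named "0"
	})
	key, _ := convert.Convert(cty.NumberIntVal(0), cty.String) // cty.StringVal("0")
	return obj.GetAttr(key.AsString())                         // cty.True
}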
@ -155,11 +224,21 @@ func Index(collection, key cty.Value, srcRange *Range) (cty.Value, Diagnostics)
return collection.GetAttr(attrName), nil
case ty.IsSetType():
return cty.DynamicVal, Diagnostics{
{
Severity: DiagError,
Summary: invalidIndex,
Detail: "Elements of a set are identified only by their value and don't have any separate index or key to select with, so it's only possible to perform operations across all elements of the set.",
Subject: srcRange,
},
}
default:
return cty.DynamicVal, Diagnostics{
{
Severity: DiagError,
Summary: "Invalid index",
Summary: invalidIndex,
Detail: "This value does not have any indices.",
Subject: srcRange,
},
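The set branch fires for any key, even one whose value is in the set, because sets have no index to select by. For example (a sketch calling the package-level Index from outside the package; a nil range just leaves the diagnostic without a Subject):

// Assumes: import "github.com/zclconf/go-cty/cty" plus this hcl package.
func setIndexAlwaysFails() hcl.Diagnostics {
	s := cty.SetVal([]cty.Value{cty.StringVal("a")})
	_, diags := hcl.Index(s, cty.StringVal("a"), nil)
	// diags carries the "Elements of a set are identified only by their
	// value..." error even though "a" is an element of the set.
	return diags
}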
@ -192,6 +271,8 @@ func GetAttr(obj cty.Value, attrName string, srcRange *Range) (cty.Value, Diagno
}
}
const unsupportedAttr = "Unsupported attribute"
ty := obj.Type()
switch {
case ty.IsObjectType():
@ -199,7 +280,7 @@ func GetAttr(obj cty.Value, attrName string, srcRange *Range) (cty.Value, Diagno
return cty.DynamicVal, Diagnostics{
{
Severity: DiagError,
Summary: "Unsupported attribute",
Summary: unsupportedAttr,
Detail: fmt.Sprintf("This object does not have an attribute named %q.", attrName),
Subject: srcRange,
},
@ -217,7 +298,12 @@ func GetAttr(obj cty.Value, attrName string, srcRange *Range) (cty.Value, Diagno
}
idx := cty.StringVal(attrName)
if obj.HasIndex(idx).False() {
// Here we drop marks from HasIndex result, in order to allow basic
// traversal of a marked map in the same way we can traverse a marked
// object
hasIndex, _ := obj.HasIndex(idx).Unmark()
if hasIndex.False() {
return cty.DynamicVal, Diagnostics{
{
Severity: DiagError,
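The rationale is the same as the Unmark in Index above: a map lookup through GetAttr consults HasIndex first, and the marked boolean it returns for a marked map can't be checked directly. A sketch of the intended after-the-change behavior (my reading of the hunk, not output from the test suite):

// Assumes: import "github.com/zclconf/go-cty/cty" plus this hcl package.
func markedMapLookup() (cty.Value, hcl.Diagnostics) {
	m := cty.MapVal(map[string]cty.Value{"a": cty.StringVal("x")}).Mark("sensitive")
	return hcl.GetAttr(m, "a", nil)
	// Expected: cty.StringVal("x") still carrying the "sensitive" mark, and
	// no diagnostics; without the Unmark above, the marked HasIndex result
	// could not be consulted safely.
}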
@ -231,11 +317,69 @@ func GetAttr(obj cty.Value, attrName string, srcRange *Range) (cty.Value, Diagno
return obj.Index(idx), nil
case ty == cty.DynamicPseudoType:
return cty.DynamicVal, nil
case ty.IsListType() && ty.ElementType().IsObjectType():
// It seems a common mistake to try to access attributes on a whole
// list of objects rather than on a specific individual element, so
// we have some extra hints for that case.
switch {
case ty.ElementType().HasAttribute(attrName):
// This is a very strong indication that the user mistook the list
// of objects for a single object, so we can be a little more
// direct in our suggestion here.
return cty.DynamicVal, Diagnostics{
{
Severity: DiagError,
Summary: unsupportedAttr,
Detail: fmt.Sprintf("Can't access attributes on a list of objects. Did you mean to access attribute %q for a specific element of the list, or across all elements of the list?", attrName),
Subject: srcRange,
},
}
default:
return cty.DynamicVal, Diagnostics{
{
Severity: DiagError,
Summary: unsupportedAttr,
Detail: "Can't access attributes on a list of objects. Did you mean to access an attribute for a specific element of the list, or across all elements of the list?",
Subject: srcRange,
},
}
}
case ty.IsSetType() && ty.ElementType().IsObjectType():
// This is similar to the previous case, but we can't give such a
// direct suggestion because there is no mechanism to select a single
// item from a set.
// We could potentially suggest using a for expression or splat
// operator here, but we typically don't get into syntax specifics
// in hcl.GetAttr suggestions because it's a general function used in
// various other situations, such as in application-specific operations
// that might have a more constrained set of alternative approaches.
return cty.DynamicVal, Diagnostics{
{
Severity: DiagError,
Summary: unsupportedAttr,
Detail: "Can't access attributes on a set of objects. Did you mean to access an attribute across all elements of the set?",
Subject: srcRange,
},
}
case ty.IsPrimitiveType():
return cty.DynamicVal, Diagnostics{
{
Severity: DiagError,
Summary: unsupportedAttr,
Detail: fmt.Sprintf("Can't access attributes on a primitive-typed value (%s).", ty.FriendlyName()),
Subject: srcRange,
},
}
default:
return cty.DynamicVal, Diagnostics{
{
Severity: DiagError,
Summary: "Unsupported attribute",
Summary: unsupportedAttr,
Detail: "This value does not have any attributes.",
Subject: srcRange,
},
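The new list-of-objects branch produces the targeted hint; roughly (sketch, again calling GetAttr from outside the package):

// Assumes: import "github.com/zclconf/go-cty/cty" plus this hcl package.
func listOfObjectsHint() hcl.Diagnostics {
	people := cty.ListVal([]cty.Value{
		cty.ObjectVal(map[string]cty.Value{"name": cty.StringVal("ada")}),
	})
	_, diags := hcl.GetAttr(people, "name", nil)
	// diags: Can't access attributes on a list of objects. Did you mean to
	// access attribute "name" for a specific element of the list, or across
	// all elements of the list?
	return diags
}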

View File

@ -4,7 +4,7 @@ import (
"bufio"
"bytes"
"github.com/apparentlymart/go-textseg/textseg"
"github.com/apparentlymart/go-textseg/v13/textseg"
)
// RangeScanner is a helper that will scan over a buffer using a bufio.SplitFunc
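The only change in this file is the textseg major-version bump; the scanner's behavior is unchanged. A short usage sketch with the v13 grapheme-cluster splitter (assuming NewRangeScanner and the Scan/Range/Bytes methods this file defines):

// Assumes: import ( "fmt"; "github.com/apparentlymart/go-textseg/v13/textseg" )
// plus this hcl package.
func printGraphemes(src []byte) {
	sc := hcl.NewRangeScanner(src, "example.hcl", textseg.ScanGraphemeClusters)
	for sc.Scan() {
		fmt.Printf("%s: %q\n", sc.Range(), sc.Bytes())
	}
}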