
cue test support #209

Open
cueckoo opened this issue Jul 3, 2021 · 8 comments
Labels
FeatureRequest (New feature or request), FeedbackWanted (Further information is requested), roadmap/cli (Specific tag for roadmap issue #337)

Comments

@cueckoo
Collaborator

cueckoo commented Jul 3, 2021

Originally opened by @rudolph9 in cuelang/cue#209

Cuelang reserves *_test.cue files, similar to golang, but does not yet have support for actually running tests.

Unless there is a good reason to follow a different path, it seems reasonable to model cuelang testing after golang testing.

Ideally we would "eat our own dog food" for testing and not create new builtin functions, but rather express the logic of the tests using cuelang.

An example of a testing package is https://github.com/ipcf/t.
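
For illustration only: a minimal sketch of what "expressing the test logic in cuelang itself" could look like, assuming a hypothetical foo_test.cue file and nothing beyond cue eval/vet; all names here are made up.

package foo

// The value under test: a simple regex constraint.
Bar: =~"^([foo]|[bar])+$"

// Plain CUE "tests": evaluating the package fails if any of
// these unifications produces an error.
test: matchesFoo:    Bar & "foo"
test: matchesBarFoo: Bar & "barfoo"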

@cueckoo added the FeatureRequest, FeedbackWanted and roadmap/cli labels Jul 3, 2021
@cueckoo
Collaborator Author

cueckoo commented Jul 3, 2021

Original reply by @rudolph9 in cuelang/cue#209 (comment)

PLEASE NOTE: I am in the middle of reworking ipcf/t to align more nicely with the way cuelang tests naturally get written. The current setup is a little cumbersome.

I briefly introduced list support for defining asserts but reverted it: the use of disjunctions made the output of cue eval --ignore totally unreadable when looking for the test that caused the error. That is not an issue with the list support itself, but rather with how the formatter renders an error occurring in a disjunction (maybe that needs a ticket); given the available tools, it seemed best to just revert the pull.

I am also in the process of moving away from the rspec describe syntax. I've found it to be cumbersome and out of place in the context of cuelang. I'm working toward an API similar to node-tap, specifically the test and assert APIs.

@cueckoo
Collaborator Author

cueckoo commented Jul 3, 2021

Original reply by @mpvl in cuelang/cue#209 (comment)

Here is my take.
Background/disclosure: I'm the creator of subtests in Go, so I have some of my own biases. :)
I think the testing framework can benefit from directly incorporating some of the best practices learned over the years from Go, which also means it will be quite different from frameworks found for other languages:

  • don't write your own golden files, we have computers for that.
  • did I say golden files? We can modify code, so we should do that instead.
  • show compact and/or meaningful diffs
  • allow for comparing only certain parts of structs (go-cmp); see the sketch after this list
  • fuzz if it makes sense

Some of this should be provided natively by the cue tool; other parts can be provided by a library.
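
As an aside on the go-cmp point above: in CUE, comparing only certain parts of a struct falls out of unification itself, since an expected partial struct constrains only the fields it mentions. A rough sketch with made-up values:

actual: {
	name: "server"
	port: 8080
	meta: {generation: 42}
}

// Only name and port are asserted; meta is left unconstrained,
// much like a go-cmp option that ignores certain fields.
check: actual & {
	name: "server"
	port: 8080
}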

In CUE we can step it up a notch by using CUE itself:

  • we know the precise space of possible values, which is great for fuzzing
  • tooling layer: as execution is expressed as data, things get easier:
    • manually specifying inputs of tasks will trigger them without the need for more machinery.
    • there could be a --record flag or similar that converts results from a normal run into test data.

Anyway, let's start simple by specifying a data representation that is future-proof.

I was thinking more along the lines of allowing hierarchical tests, where you assign a value to a $test field to trigger a certain test, such as:

testRegexp: {
    re :: =~"^([foo]|[bar])+$"

    [X=string]: { $test: re & X  }
    foo: _
    foobar: _
    baz: _
}

and have the testing framework automatically expand this by filling out the test results and running trim as follows:

testRegexp: {
    re :: =~"^([foo]|[bar])+$"

    [X=string]: { $test: re & X  }
    foo: $expect: "foo"
    foobar: $expect: "foobar"
    baz: $expect: _|_
}

or something like that. Note that this is fairly primitive (Go-style) and would allow other frameworks to be mapped on top. Not sure whether you would need to import a testing package or not.

A fuzzer could even automatically populate the test cases, either statically or dynamically. For instance (rough sketch):

testFuzz: {
    [string]: { $test: value, $fuzz: {...} }  // $fuzz holds the fuzzer params
}

or something like

fuzzMyValue: {
    $test: value
}

could tell the test tool to automatically create instances of the enclosing template. This may be useful for testing roundtrips between converters for different APIs. Testing those then becomes a matter of testRoundtrip: { $test: v1tov1beta1 & v1beta1tov1, $fuzz: { set: v1tov1beta1 | v1beta1tov1 } } or something like that. The idea is that valid values of either API should result in non-conflicting values in the combination of the two.

I used the $foo-style fields to avoid name clashes with other names in the hierarchy and to allow for extension later on. One could imagine using $msg for error messages, as well as other primitives to allow for different styles. Opinions vary too much on that front to pick one. :)
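
Purely as an illustration of that $msg idea (it is not an existing feature), the earlier regexp example might grow an error message like this:

testRegexp: {
	re :: =~"^([foo]|[bar])+$"

	[X=string]: {
		$test: re & X
		$msg:  "\(X) does not match re"
	}
	baz: _
}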

Fuzzing may generate large test sets, so it may make sense either not to write them out (but reproduce them on the fly), or to put them in a separate file or in cue.mod, depending on the type of fuzzer and the use case.

Many open questions remain. One I don't have a clear picture of is how to represent tool tests, for instance.

@cueckoo
Collaborator Author

cueckoo commented Jul 3, 2021

Original reply by @mpvl in cuelang/cue#209 (comment)

I think a good first question is, what kind of things do people want to test?

@cueckoo
Collaborator Author

cueckoo commented Jul 3, 2021

Original reply by @rudolph9 in cuelang/cue#209 (comment)

I think a good first question is, what kind of things do people want to test?

It would be good to get feedback from others besides me, since my use case for cue is somewhat unique, but the kinds of things I've been testing are mostly just sanity checks: does a regex do what I expect it to, is a closed struct actually a closed struct, is a number actually a number, and so on.

The nature of cue likely doesn't warrant exotic test suites, but it would be good to have a mechanism for some basic checks.
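
For what it's worth, checks of that kind can already be written as plain CUE today; a rough sketch with made-up names, where evaluation fails if any field is an error:

checks: {
	// the regex does what I expect
	regexAccepts: =~"^ab+c$" & "abbc"

	// the closed struct accepts a valid value (asserting that an
	// extra field is rejected would still need tool support)
	#Config: {name: string}
	configOK: #Config & {name: "test"}

	// the value is actually a number
	portIsNumber: number & 8080
}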

@cueckoo
Collaborator Author

cueckoo commented Jul 3, 2021

Original reply by @rudolph9 in cuelang/cue#209 (comment)

@mpvl Here is my latest update to ipcf/testing (formerly ipcf/t)

It makes for a pretty clean workflow and feels similar to golang testing:

github.com/ipcf/foo/foo.cue:

package foo

Bar: =~ "^([foo]|[bar])+$"

github.com/ipcf/foo/test/foo.cue:

package test

import "github.com/ipcf/testing"

import "github.com/ipcf/foo"

testing.T & {
	test: "foo.Bar": {
		[testing.NumDot]: subject: foo.Bar
		"0": assert: ok:     "foo"
		"1": assert: notOk:  "foobar" // will fail
		"2": assert: ok:     "bar"
		"3": assert: ok:     "barfoo"
		"4": assert: ok:     "barfoo"
		"5": assert: ok:     "barfoofoobarfoo"
		"6": assert: notOk:  ""
		"7": assert: notOk:  "bar1"
		"8": assert: notOk:  "1bar"
		"9": assert: notOk:  int
		"10": assert: notOk: null
		"11": assert: notOk: {}
	}
}
foo/test [master] » cue eval --expression FAIL
BarBaz: {
    "1": {
        subject: =~"^([foo]|[bar])+$"
        assert: {
            pass:  false
            notOk: "foobar"
        }
    }
}

I didn't use the $foo style to avoid clashes, since what you're testing exists under what are basically reserved paths in your tests (subject: _ and assert: {ok: _, notOk: _}).

Although, I could see using $assert and $subject. It might make it a bit clearer that those are API fields.
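
For comparison, the same test written with those $-prefixed fields (which ipcf/testing does not currently define; this is only a sketch) might read:

package test

import "github.com/ipcf/testing"

import "github.com/ipcf/foo"

testing.T & {
	test: "foo.Bar": {
		[testing.NumDot]: $subject: foo.Bar
		"0": $assert: ok:    "foo"
		"1": $assert: notOk: "foobar" // will fail
	}
}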

@cueckoo
Collaborator Author

cueckoo commented Jul 3, 2021

Original reply by @xinau in cuelang/cue#209 (comment)

I like the idea of having a "simple" testing setup as @mpvl mentioned. My test use cases are similar to those of @rudolph9. But I think a more powerful approach to this problem is to implement a framework similar to Haskell's QuickCheck (not sure if this falls into the fuzzer category), instead of describing valid and invalid cases.
Other than that, I think more tooling around diffing would be helpful.

ptMcGit pushed a commit to ptMcGit/cue that referenced this issue Jan 22, 2023
@buzzdan

buzzdan commented May 17, 2023

Any updates on cue test?

@myitcv myitcv added the zGarden label Jun 15, 2023
@mpvl
Member

mpvl commented Aug 1, 2023

We are experimenting with different best practices for what a test framework should look like.

In the CUE project itself we often use tests in .txtar files that are automatically updated. This has been a big improvement in productivity. However, the auto-updating has a downside. Most notably, we occasionally introduce a regression by accident, because it is too easy to update a test and not notice that a bad change was made. See, for instance, https://review.gerrithub.io/c/cue-lang/cue/+/551413/5/cue/testdata/eval/let.txtar#454.

We like the auto-updating and don't want to give that up. But this shows that there is merit in having some kind of invariants, specified with each test, that are not auto-updated. We want cue test to have a design that allows both auto-updating and specifying such invariants.

One CL that may point in that general direction, and another experiment we are conducting, is the refactoring of the builtin tests in https://review.gerrithub.io/c/cue-lang/cue/+/556198. This CL organizes tests so that they are easier to write. More importantly, it generates test output in which input and output are collated together. A common nuisance in the current approach to generated test output is that one needs to jump back and forth to compare test input and output; this mitigates that issue considerably. Moreover, one can see how this approach is amenable to invariant injection.
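
To make that combination concrete, here is a purely hypothetical sketch of how an auto-updated expectation and a hand-written invariant could sit side by side in CUE; none of these field names or this workflow exist today:

testAddition: {
	// written by hand once
	$input: "a: 1 + 2"

	// rewritten by the tool on an update run
	$expect: "a: 3"

	// hand-written invariant the updater never touches: whatever
	// the regenerated output becomes, it must not contain "error".
	$expect: !~"error"
}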
