Running tests
You can run the tests individually:
alias Examples.Schemas.Basic.Tester

test "first version" do
  Tester.validate(:ok)
  Tester.validate(:bad_date)
end
In case of failure, you get the usual line number, code, and actual-versus-expected information, though the stack trace is rather deep.

You can run all the tests in a module:
defmodule App.Schemas.Basic.ValidationTest do
  use ExUnit.Case, async: true

  alias Examples.Schemas.Basic, as: Basic
  import TransformerTestSupport.Runner

  check_examples_with(Basic.Tester)
end
check_examples_with creates a distinct ExUnit test for each example, so you'll get multiple failures if multiple examples are wrong.
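For intuition, here is roughly that effect written out by hand. This is only a sketch, not the library's implementation: the @example_names list is invented for the illustration, whereas check_examples_with discovers the examples from the tester module itself.
defmodule App.Schemas.Basic.SketchTest do
  use ExUnit.Case, async: true

  alias Examples.Schemas.Basic, as: Basic

  # Invented for the sketch; the real macro finds the example names for you.
  @example_names [:ok, :bad_date]

  for name <- @example_names do
    @example_name name
    # One ExUnit test per example, so each broken example fails on its own.
    test "example #{name}" do
      Basic.Tester.validate(@example_name)
    end
  end
end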
The error messages don't give you a line number, but they give you location information. The first line below shows that you're given the module name (Examples.Schemas.Basic.Validation) and the example within it (:ok):

The third line shows you which field was wrong. To provide more context for the failure, changeset validation shows the changeset that failed (line 4).
Finally, you can check all the examples in a set of files:
defmodule App.Schemas.AllSchemasTest do
  use ExUnit.Case, async: true

  import TransformerTestSupport.Runner

  check_examples_in_files("test/*example.ex")
end
Errors are reported as with check_examples_with.
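Under the hood, something has to expand that wildcard and load each matching file before any examples can run. Purely as an illustration (not the library's code), the file-gathering step might look like this, using Path.wildcard/1 and Code.require_file/1:
# Illustration only: expand the glob, load each example file, and collect
# the modules it defines. (Code.require_file/1 returns nil for files that
# were already loaded.)
example_modules =
  "test/*example.ex"
  |> Path.wildcard()
  |> Enum.flat_map(fn path ->
    for {module, _bytecode} <- Code.require_file(path) || [], do: module
  end)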
A single test-runner file is compatible with test-driven design. Instead of writing a new test that fails, you add (or update) an example. You still get a single test failure amongst many test successes.