Running tests

You can run examples individually, from inside an ordinary ExUnit test:

```elixir
# Inside an ordinary ExUnit test module (one that does `use ExUnit.Case`):
alias Examples.Schemas.Basic.Tester

test "first version" do
  Tester.validate(:ok)
  Tester.validate(:bad_date)
end
```

In case of failure, you get the usual line number, code, and actual-versus-expected information, though the stack trace is rather deep.
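Because both examples share a single test, and a failing `validate` appears to surface as a raised assertion error (that's where the deep stack trace comes from), the first failure aborts the test and the second example is never checked. A minimal way to keep the examples independent, sketched here using nothing beyond plain ExUnit and the `Tester.validate/1` call already shown:

```elixir
# Still inside the same test module: one hand-written test per example,
# so each example passes or fails on its own.
test "the :ok example" do
  Tester.validate(:ok)
end

test "the :bad_date example" do
  Tester.validate(:bad_date)
end
```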

Rather than writing those tests by hand, you can run all of a tester's examples from a single test module:

```elixir
defmodule App.Schemas.Basic.ValidationTest do
  use ExUnit.Case, async: true
  alias Examples.Schemas.Basic, as: Basic
  import TransformerTestSupport.Runner

  check_examples_with(Basic.Tester)
end
```

`check_examples_with` creates a distinct ExUnit test for each example, so you'll get multiple failures if multiple examples are wrong.
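Conceptually, that's the hand-written pair of tests from earlier, but generated for every example in the tester. A rough sketch of the effect (not the library's implementation; the module name is made up, and the example names are written out here rather than read from the tester):

```elixir
# Sketch only: one generated ExUnit test per example name.
defmodule App.Schemas.Basic.HandRolledTest do
  use ExUnit.Case, async: true
  alias Examples.Schemas.Basic.Tester

  for example_name <- [:ok, :bad_date] do
    @example_name example_name
    test "example #{inspect(example_name)}" do
      Tester.validate(@example_name)
    end
  end
end
```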

The error messages don't give you a line number, but they do give you location information: the failure reports the module name (`Examples.Schemas.Basic.Validation`) and the example within it (`:ok`). It also tells you which field was wrong and, to provide more context, changeset validation shows the changeset that failed.

Finally, you can check all the examples in a set of files:

```elixir
defmodule App.Schemas.AllSchemasTest do
  use ExUnit.Case, async: true
  import TransformerTestSupport.Runner

  check_examples_in_files("test/*example.ex")
end
```

Errors are reported as with `check_examples_with`.

A single test-runner file is compatible with test-driven design. Instead of writing a new test that fails, you add (or update) an example. You still get a single test failure amongst many test successes.
