Automated testing

🚧

Reference for writing automated integration tests with the Assembly Line testing framework. This is under very active development.

Intro

The ALKiln (Assembly Line Kiln) framework runs tests on your docassemble interviews through GitHub, making sure your interviews run the way you want.

ALKiln works with any docassemble interview, though it is being developed as part of the Document Assembly Line project.

Docacon 2021, 10-minute intro presentation (video).

Start

  1. Prepare your repository or organization for testing using https://apps-dev.suffolklitlab.org/start/test-setup. Follow the instructions there to add new code to your repository.
  2. In Docassemble, make a new Project and pull in the updated code.
  3. Write tests in your Sources folder. You should already have an example test there to start with, created by the test setup interview.

How does it work?

Whenever you push to GitHub, GitHub automatically runs the tests: a bot goes to each interview you named in your tests and fills in the fields. You can see the tests running on your repository's GitHub Actions page.

At the end, you can see a report and logs right in the workflow's "job" page or download the report artifact to your computer.

Quick reminders

  1. You write and edit .feature test files in your Sources folder.
  2. By default, each Step or field may only take 30 seconds. You can change that with the "maximum seconds" Step described in the Steps section.
  3. Tests are run in GitHub when you commit.
  4. Tests can download files, but humans have to review them to see if they've come out right.
  5. You will be able to download screenshots of pages that errored. They're in the Action's artifact section.
  6. ALKiln also creates test reports that you can download in the same place.

Give us feedback and ideas by making issues at https://github.com/plocket/docassemble-cucumber/issues.

Example

The tests use the Gherkin language and syntax. Here's a complex example for a quick refresher on some of our features:

@children
Feature: I have children

  @fast @child_benefits
  Scenario: child has benefits
    Given I start the interview at "some_file_name.yml"
    And I get to the question id "benefits" with this data:
      | var | value | trigger |
      | x[i].name.first | Cashmere | children[0].name.first |
      | x[i].name.last | Davis | children[0].name.first |
      | x[i].name.first | Casey | children[1].name.first |
      | x[i].name.last | Davis | children[1].name.first |
      | x.there_are_any | True | children.there_are_any |
      | x.target_number | 2 | children.there_is_another |
    When I set the var "benefits['SSI']" to "True"
    And I tap to continue
    Then the question id should be "download"
    And I download "some_motion.pdf"

First test

You can write a really simple test right away that just makes sure your YAML file runs. Write a Scenario for each file you want to test.

Feature: Interviews load

  Scenario: The 209A loads
    Given I start the interview at "ma_209a_package.yml"

  Scenario: The Plaintiff's Motion to Modify loads
    Given I start the interview at "plaintiffs_motion_to_modify_209a.yml"

You can wait to write more complex tests until your code is more stable.

Story tables

In our opinion, story table Steps are the most effective and flexible way to set the values of fields in ALKiln in most cases. With them, it doesn't matter what order you list your fields in or what order your pages come in, and it doesn't matter if you accidentally include extra fields. A story table is a snapshot of the user who is filling out the form for that test.

Example:

    And I get to the question id "has sink" with this data:
| var | value | trigger |
| last_haircut_date | today - 730 | last_haircut_date |
| wants_virtual_haircut | True | wants_virtual_haircut |
| scissors[i].length | 2 | scissors[0].length |
| scissors[i].length | 7 | scissors[1].length |

You can write a story table that goes all the way through your interview, or a story table that only goes part way.

Very basically, you tell the story table Step what question you want to get to and the variables and values it will need to get there. Whenever the test gets to a page, it checks your story table for any variables that match a variable on the page. When it finds a match, it sets the value of the field. When it's done with a page, it continues to the next page, repeating this until it reaches the question id you named.

You can have multiple tables in one Scenario and you can put other steps between story table Steps.
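For example, a Scenario might use two story tables with other Steps in between. This is a hypothetical sketch - the file name, question ids, variables, and phrase are all made up:

    Scenario: Two story tables in one Scenario
      Given I start the interview at "some_interview.yml"
      And I get to the question id "user info" with this data:
        | var | value | trigger |
        | user.name.first | Maru | user.name.first |
        | user.name.last | Mapmaker | user.name.first |
      Then I should see the phrase "Thanks, Maru"
      And I get to the question id "final info" with this data:
        | var | value | trigger |
        | wants_reminders | True | wants_reminders |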

warning

A story table Step must not be the first step in your Scenario. The interview Step must come before it.

Generate a story table

You can use the story table generator to generate a Scenario draft. Depending on your interview's code you might need to edit the table for it to work properly, but it can give you a good start.

Follow these instructions to use the generator:

  1. If you don't have one already, add a new test file. You can leave out the Scenario.
  2. Ensure your server config is set up to show debug info.
  3. Run your interview manually until you reach the page you want the story table to get to.
  4. Open the "source" display of the interview. Currently, that looks like angle brackets, </>, in the header of the page.
  5. Note the id of the page.
  6. Tap the "Show variables and values" button. It will open a new tab showing a big JSON object.
  7. Copy all the text on that page.
  8. Go to the story table generator.
  9. Paste the JSON into the text area there, as instructed.
  10. Use the other input fields to help finalize your Scenario, including the page id.
  11. Copy the Scenario that has been generated for you.
  12. Paste that into the already prepared test file.

This works best with interviews that don't need index variables or generic objects.

Step description

The Step that triggers a story table is

    And I get to the question id "some id!" with this data:

question id: The story table needs to know the id of the page this story table should get to. You can find the id in the question block in the YAML, or using the </> button in the header of an open interview.

Rows

Indented under the description, put the header row of the table:

      | var | value | trigger |
  • var lists the variable the field sets exactly as it appears in the code of the question.
  • value is the value you want the test to fill in.
  • trigger lists the variable that triggers that variable's page. We describe that more in a section below.

Under that, add a row for each field that you want the test to interact with during the interview:

      |  |  |  |

You must include a row for every variable that needs to be set in order to get to the page with the id you chose.

var

In the var column, write the name of the variable that a field sets, exactly as it appears in the question block. Most of the time you can see that name in the YAML question block. If code: is used to create the field's variable name, you may have to ask the developers who wrote that code which variable or variables it generates.

Examples:

    court_date
    users[0].name.first
    users[i].children[j].benefits['SSI']
    x.favorite_color

value

In the value column, write what you want the field to be set to. For checkboxes, True means 'checked' and False means 'unchecked'.

One special value you can include is today. That will insert the date on which the test is being run. You can also subtract from, or add days to, today. Examples:

      | signature_date | today |  |
      | court_date | today + 20 |  |
      | minors_birth_date | today - 3650 |  |

The last example makes sure that the date is 10 years (3650 days) in the past, ensuring that a minor always stays a minor for that test.

trigger

trigger is an optional value in most cases. It is mandatory for rows that list index variables, like i, j, or k, or generic objects (x). Your interview must include some special HTML for the trigger column to work properly; see the errors and warnings section below. If you leave that HTML out, you will get a warning in the report.

In the trigger column, write the name of the variable that triggers the page on which the field appears.

For the below, the trigger is users[0].hair.how_much.

    ---
    id: interview order
    mandatory: True
    code: |
      users[0].hair.how_much
    ---
    id: hair
    question: |
      Tell us about your hair
    fields:
      - How much hair do you have?: users[i].hair.how_much
      - What color is your hair?: users[i].hair.color

Your story table rows to set those values would look like this:

      | var | value | trigger |
      | users[i].hair.how_much | Enough | users[0].hair.how_much |
      | users[i].hair.color | Sea green | users[0].hair.how_much |

Even though the var columns were different, both trigger columns listed users[0].hair.how_much. That's because when docassemble asks for users[0].hair.how_much, both fields are on that page and both variables have to be set.

There are some rare cases where no trigger exists. For example, question blocks with the mandatory specifier:

    mandatory: True
    question: |
      Do you like mandatory questions?
    yesno: likes_mandatory_questions

In those cases, leave the trigger column empty.
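For example, a row for the mandatory question above would leave the trigger column empty:

      | likes_mandatory_questions | True |  |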

Story table examples

Simple field types with their values.

The 'yes' choice of yesno buttons or yesno fields like yesno checkboxes and yesnoradios.

      | has_hair | True | has_hair |

The 'maybe' choice in yesnomaybe buttons and datatype: yesnomaybe fields.

      | has_hair | None | has_hair |

Checkboxes with multiple choices. The value 'True' means to check the checkbox and 'False' means to uncheck it.

      | benefits['SSI'] | True | benefits |

Radio or dropdown choices.

      | favorite_color | green | favorite_color |

Text field or textarea. Even if the answer has multiple lines, a table row can only use one line. Where a new line should appear, use \n instead. See below:

      | favorite_color | Blue.\nNo, green!\nAaah... | favorite_color |

A generic object with an index variable.

      | x[i].name.first | Umi | users[1].name.first |

.there_is_another loop

The .there_is_another loop in a story table is more complicated than you might expect.

The story table handles setting the .there_is_another attribute automatically. You, as the developer, must act as if you use the .target_number attribute instead, whether you actually use it or not.

In your var column, replace any .there_is_another rows for a particular variable with one .target_number row. In the value column, put the number of items of that type.

The trigger column should have the name of the page's trigger variable, as usual. Example:

      | x[i].name.first | Jose | users[0].name.first |
      | x[i].name.first | Sam | users[1].name.first |
      | x[i].name.first | Umi | users[2].name.first |
      | x.target_number | 3 | users.there_is_another |

Story table signature

The value for a row setting a signature doesn't matter. All signatures will be a single dot.

      | user.signature |  | user.signature |

Avoid taking a screenshot of a signature page. The screenshot will erase the signature.

Other story table notes

Don't worry about accidentally including variables that won't show up during the test. Extra rows will be ignored.

Steps

Steps must be written one after the other in the order they should happen, a bit as if you were the user clicking through the form. They let you do things like download a file or make sure an input validation message appears. If you change the order of the questions, even if you don't change any variable names, you may have to update these types of Steps to match the new order of the screens.

Note: When, Then, And, and Given at the beginning of sentences can all be used interchangeably. It doesn't matter which you use.
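For example, these two Steps do exactly the same thing:

    When I tap to continue
    And I tap to continue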

Starting Steps

These are establishing Steps that you might use as the first few lines of a "Scenario" - a test. They can also be used at any other time.

warning

You must include the interview Step in each Scenario before setting any fields.

Use an interview's filename in the interview Step to open the interview you want to test.

    Given I start the interview at "yaml_file_name.yml"

This Step must always be included in each Scenario before setting the values of any fields. There is no other way for the tests to know what website to go to.


The wait Step can be a way to pause before the test tries to go to the interview's first page.

    Given I wait 120 seconds
    When I start the interview at "yaml_file_name.yml"

This Step can also be used anywhere in your scenario to wait between Steps.


You can also start by making sure the test will give the interview's first page time to load once the test goes there. The default maximum time is 30 seconds. This Step can be useful if you know that your interview's first page takes longer to load.

    Given the maximum seconds for each Step is 200
    When I start the interview at "yaml_file_name.yml"

This Step can also be used anywhere else in your Scenario to give Steps more time to finish.


You can use the log in Step to sign into your docassemble server before going to the interview:

    Given I log in with the email "USER_EMAIL" and the password "USER_PASSWORD"
    When I start the interview at "yaml_file_name.yml"

This is a complex Step to use. To make it work, you have to add GitHub secrets and edit the YAML file in your repository's .github/workflows/ folder.

GitHub secrets are a way to store sensitive information. If you don't encrypt the login information, others will be able to see it in your code.

  1. Follow the GitHub instructions in the link above to set two GitHub secrets - one for the email of the account you want to sign in with and one for the password of that account.

USER_EMAIL and USER_PASSWORD are just placeholders in our example. You can name them whatever you want; these are the names we'll use to refer to them here. You can add these secrets to one repository or to your whole organization, whichever is right for you.

  2. Go to the home page of your repository. Tap on the .github folder, then on workflows, then on the YAML file in there that runs the ALKiln tests.

It should include lines that look like this:

         with:
           SERVER_URL: "${{ secrets.SERVER_URL }}"
           PLAYGROUND_EMAIL: "${{ secrets.PLAYGROUND_EMAIL }}"
           PLAYGROUND_PASSWORD: "${{ secrets.PLAYGROUND_PASSWORD }}"
           PLAYGROUND_ID: "${{ secrets.PLAYGROUND_ID }}"
           EXTRA_LANGUAGES: "${{ secrets.EXTRA_LANGUAGES }}"
  3. Add two more lines under those:

           USER_EMAIL: "${{ secrets.USER_EMAIL }}"
           USER_PASSWORD: "${{ secrets.USER_PASSWORD }}"

  4. Make sure you use the same names as the GitHub secrets you made.
  5. Write the log in Step and use the names of these secrets as the values.

Observe things about the page

The question id Step will make sure the page's question id is right. This Step can help humans keep track of what page the tests are on. It will also show up in the logs of the tests and can help you see where things went wrong.

Copy the id value from the YAML question block of the screen you want to test.

    Then the question id should be "some yaml block id!"

The invalid answers Step can check that the user was prevented from continuing.

    Then I will be told an answer is invalid
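For example, a test might tap to continue without filling in a required field, then check that the interview stopped the user:

    When I tap to continue
    Then I will be told an answer is invalid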

The screenshot Step will take a picture of the screen that will be put in the GitHub action's artifacts.

    Then I take a screenshot

The link Step can make sure a link appears on the page. For example, a link to quickly leave the page for forms that deal with domestic abuse.

    Then I should see the link to "a-url.com"

The phrase Steps can check for text on the page. Checking phrases will be language specific.

warning

Sometimes the characters in your code and the characters on screen aren't the same. In our code, we often use apostrophes as quotes (') and docassemble changes them to actual opening and closing quote characters (‘ and ’). It's best to copy the text straight from the screen.

    Then I SHOULD see the phrase "some phrase"
    Then I should NOT see the phrase "some phrase"

Set fields / Interact with page

The continue Step will tap the button to continue to the next page. The text on the button itself doesn't matter.

    When I tap to continue

You can also tap or click on specific elements, like buttons on a page. You can use any valid CSS Selector to get an element on the page, and can add any additional wait time after tapping the element.

    When I tap the "#element-id" element
And I tap the "#other-element" element and wait for 5 seconds

If you are using the ALToolbox tabs, you can say that you're tapping the tab, and ALKiln will tap and wait until the tab is fully visible.

    When I tap the "TabGroup-specific_tab_name-tab" tab

Use the set variable Step to set the values of fields.

Comparing this to a story table, as described above, the first quotes contain the equivalent of the var column and the second quotes contain the value you want to set.

    When I set the variable "users[i].hair_color" to "blue"

You can use the special value today here too, to set dates relative to the day the test runs.
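A hypothetical example, using a made-up variable name:

    When I set the variable "court_date" to "today + 20"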


Sign on a signature page. All signatures are the same - one dot.

    When I sign

The name Step is specifically for the Document Assembly Line 4-part name questions.

Avoid punctuation. We recommend you just use 2 names - the first name and last name - but you can have all these formats:

  • Firstname Lastname
  • Firstname Middlename Lastname
  • Firstname Middlename Lastname Suffix (where suffix is one of the dropdown suffix choices, like II)
    When I set the name of "x[i]" to "Ulli User"
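For instance, with made-up names, the three formats might look like this:

    When I set the name of "users[0]" to "Ulli User"
    When I set the name of "users[0]" to "Ulli Middle User"
    When I set the name of "users[0]" to "Ulli Middle User II"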

The address Step is specifically for the Document Assembly Line 4-part address questions.

It expects a US address format, but the address can otherwise be anything you want that matches the format of the example below. Remember the commas.

    When I set the address of "users[0]" to "120 Tremont Street, Unit 1, Boston, MA 02108"

Use the story table Step to make sure the test reaches a particular screen given a set of fields with their values. See a better description in sections above.

    And I get to the question id "some question block id" with this data:

Other actions

Use the download Step to download files so that humans can check that they are correct. The files will be in the GitHub action's artifacts. If you think this step could take more than 30 seconds, use the "maximum seconds for each Step" Step to give the file more time to download.

    Then I download "file-name.pdf"

Leave out the other parts of the file's URL.


Use the upload step to upload one or more files. You must store files that you plan to upload in your "Sources" folder along with your tests.

As you can see in the examples, if you want to upload more than one file you must separate their names with a comma.

    And I upload "irrefutable_evidence.jpg, refutable_evidence.pdf" to "evidence_files"

In a story table, use the name of the variable as usual and use the name of the file or files in the value column.

      | evidence_files | irrefutable_evidence.jpg, refutable_evidence.pdf |  |

Use the custom timeout Step to give your pages or Steps more time to finish. The default maximum time is 30 seconds. This Step can be useful if you know that a page or an interaction with a field will take longer. You can also use it to shorten the time to let tests fail faster. If you need, you can use it in multiple places in each Scenario.

    Then the maximum seconds for each Step is 200

Use the wait Step to pause once a page has loaded. It will wait for the number of seconds you give it. The time must be shorter than the maximum amount of time for each Step. By default, that's 30 seconds, but you can increase it with the "maximum seconds for each Step" Step.

    When I wait 10 seconds

This Step can be used multiple times.

Waiting can help in some situations where you run into problems with timing. The situations that need this are pretty rare, but here's an example: You navigate to a new page and set a field. Sometimes the test passes, but sometimes the test says an element on this page does not exist. The problem is probably that the page sometimes needs an extra few seconds to load. Add this step in to give it that time.

Example:

    And I tap to continue
    When I wait 10 seconds
    And I set the variable "favorite_color" to "puce"

Tips

Some of these are just good practices to follow when coding your interviews.

In questions with choices, give each label a value. See docassemble's documentation on buttons to read about key-value pairs.

Not great with just labels:

    question: Tell me about yourself
    fields:
      - Favorite color

Better with values as well:

    fields:
      - Favorite color: user_favorite_color

It's always possible to use the labels alone, but giving a value as well ensures your tests will work for translated versions of your interview. It also helps your code be more translatable in general.


Add a unique id to each question block of your interview. This also helps your team communicate with each other more easily.


Avoid noyes type fields. For one thing, the story table generator code will need less editing. For another, we've found that humans tend to find them confusing too.


If your package does not specifically import al_package.yml from the styled Assembly Line package, make sure to add the trigger variable code to your interview.


You can write tests that just go part-way through an interview. That way, you can work on adding more content and yet also be sure that the work you've already done isn't affected by the new changes.
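For example, a partial test might use a story table that stops at a page in the middle of the interview. A hypothetical sketch, with made-up ids and variables:

    Scenario: The intake section works
      Given I start the interview at "restraining_order.yml"
      And I get to the question id "intake complete" with this data:
        | var | value | trigger |
        | user.name.first | Maru | user.name.first |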


Use old Scenarios or story tables to help you make new ones. You don't have to make everything from scratch.

Test results

To see the list of past tests or running tests, go to your repository's GitHub Actions page.

One of the rows should have the text of the commit you just made. The test may have a yellow dot next to it. That means it's still running. When the dot has turned into a red 'x' or a green checkmark, tap on the name to go to the test's Summary page.

To see the full output text of the test run, its logs, follow these GitHub instructions.

Summary page

ALKiln automatically creates a report for each set of tests. If there are errors, it creates screenshots of those errors. You can download these GitHub "artifacts" at the bottom of the Summary page.

Error screenshots' artifacts

When a test fails on an interview page, ALKiln will take a screenshot of the page. ALKiln saves these files in the "errors" GitHub artifact. The screenshot's name will be made of the scenario description and a timestamp of when the screenshot was taken.

There you might see that the test was unable to continue to the next page because required fields weren't filled, or that a variable wasn't defined.

Reports

We're always trying to understand what people would find helpful in these reports. Tell us about your experiences at https://github.com/SuffolkLITLab/ALKiln/issues.

ALKiln saves the report in the "report" GitHub artifact. A report might look something like this:

Assembly Line Kiln Automated Testing Report - Wed, 29 Dec 2021 17:49:00 GMT


===============================
===============================
Failed scenarios:

---------------
Scenario: I get to the download page
---------------

ERROR: The question id was supposed to be "download", but it's actually "agree-to-terms".
**-- Scenario Failed --**


===============================
===============================
Passed scenarios:

---------------
Scenario: I fill in my name
---------------
screen id: user-name
| user.name.first | Maru | |
| user.name.last | Plaintiff | |

A report has a title with the date and time. It also has two main sections - the failed Scenarios and the Scenarios that passed.

Within each of those, every Scenario will have its own section. In the Scenario's section, ALKiln will list the id of each screen where fields were set in the order in which they appeared. Under each screen id will be the names of the variables whose fields were set and the values they were set to. We're still working out some issues here.


If you used a story table Step, a Scenario might look more like this:

---------------
Scenario: I fill in my name
---------------
screen id: user-name
| user.name.first | Maru | |
| user.name.last | Plaintiff | |

Rows that got set:
And I get to the question id "child information" with this data:
| var | value | trigger |
| user.name.first | Maru | |
| user.name.last | Plaintiff | |
Unused rows:
| defendant.name.first | Sam | |
| defendant.name.last | Defo | |

Since story table Steps don't care about having extra unused rows, the report lets you know which rows did or did not get used. If rows are listed under "Unused rows", ALKiln couldn't find the fields for those variables during the test. Despite that, it was still able to get to the desired question id.

Rows are listed in alphabetical order. If you have thoughts on pros and cons, we'd love to hear from you.

If everything looks right to you there, you can copy and paste the text under "Rows that got set" into your test to get rid of the extra rows you've got hanging around.


If a screen loaded with an error message, ALKiln will try to reload a few times, and will try to log the error message that it saw:

---------------
Scenario: I opened the interview
---------------

ERROR: On final attempt to load interview, got "Reference to invalid playground path"

ERROR: On final attempt to load interview, got "Reference to invalid playground path"

ERROR: On final attempt to load interview, got "Reference to invalid playground path"

ERROR: Failed to load "a-great-interview" after 3 tries. Each try gave the page 30 seconds to load.
**-- Scenario Failed --**

You will probably find a screenshot of the page in your error artifacts.

Also watch the errors and warnings section for updates on similar information.

Your screenshots' artifacts

You can choose to take a screenshot of a page with the Then I take a screenshot Step. ALKiln saves these files in the "screenshots" GitHub artifact.

Your downloaded files' artifacts

You can choose to download a file with the Then I download "file-name.pdf" Step. ALKiln saves these in the "downloads" GitHub artifact. You can read more about that Step in the Other actions section.

Errors and warnings

This section will be filled out as we go.

A missing trigger variable

This warning only matters for story tables that use index variables or generic objects.

The warning isn't a bug. If the above doesn't apply to you, you can ignore it. A future goal of ours is to remove the warning from Steps that don't need it.

If you are using a story table with index variables or generic objects, you need to add some code to the interview file where you set your metadata block - the block that controls items like the title and authors.

Add this code to your metadata block to insert an invisible element in all your screens:

    post: |
      <div data-variable="${ encode_name(str( user_info().variable )) }" id="trigger" aria-hidden="true" style="display: none;"></div>

If you already have something in your post: metadata, just add that code anywhere inside it. There's a chance it can interfere with your CSS styles, so putting it at the end may be the best choice.
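A minimal sketch of how that might sit in a metadata block (the title is made up; your block will have its own contents):

    metadata:
      title: A hypothetical interview
      post: |
        <div data-variable="${ encode_name(str( user_info().variable )) }" id="trigger" aria-hidden="true" style="display: none;"></div>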

If you want to see some very technical details about why we need it in the first place, you can go to https://github.com/SuffolkLITLab/ALKiln/issues/256, where we've tried to summarize the problem this is solving. Unfortunately, we haven't found another way to solve this particular problem.

Timeout or "took too long" error

Different problems can cause the report to say that something "took too long" or cause a "timeout" error to show up in the logs of the workflow's "job" page.

This error can happen when the test is trying to go to the wrong url, usually because:

  1. The SERVER_URL GitHub secret has a typo or the server address is wrong. This secret is usually created by the setup interview. It is supposed to be the address of the docassemble server where the docassemble testing account is located. Find the right server name and edit the secret to contain that server name. Exclude a trailing / (see the example after this list).
  2. The Given I start the interview at... Step is naming an interview that doesn't exist. Check for a typo in the file name.
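For example, a SERVER_URL value might look like this hypothetical address, with no trailing slash:

    https://my-test-server.example.com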

A "timeout" error can also happen when a page took too long to load at some point in setup, when running tests, or during test cleanup. This can be because:

  1. The page was trying to load a big file.
  2. The server was busy for too long.
  3. The server is down.

If a page was taking too long because it was loading a big file, use the "custom timeout" Step to give the page more time to load.

If the server was busy, try re-running the tests. As of 01/2022, you have to navigate to the newly running tests manually - for example, by going to the Actions page again.

You can download and look at the screenshots in your "errors" artifact to check for more details.

Invalid playground path error

If you see the text "invalid playground path" in the report, that means the Given I start the interview at... Step for that scenario is naming an interview that doesn't exist. Check for a typo.

UnhandledPromiseRejection error

This is a misleading error. You need to read the text of the whole paragraph to see what the actual error is.

Security

Using a third-party library or package is always a risk. That said, we take measures to help secure our code, such as protecting our release branches and requiring reviews before merging any new code.

In addition, here are some actions you can take to manage the security of the tests, as well as general guidelines for server security.

Disable the tests

If you become worried about the tests, there are different ways you can stop the tests from running.

To run the tests, the test setup interview added a "workflow" file to your repository. GitHub sometimes calls that an "action". That's what triggers the tests. You can manage that workflow, and your actions in general, in GitHub.

Disabling tests in one repository

GitHub lets you disable workflow files like these. See their instructions at https://docs.github.com/en/actions/managing-workflow-runs/disabling-and-enabling-a-workflow.

You can also delete the file from your repository completely. If you go to the front page of your repository, the file is in the workflows folder of the .github folder. It's called run_form_tests.yml. GitHub's instructions about how to delete a file are at https://docs.github.com/en/repositories/working-with-files/managing-files/deleting-files-in-a-repository.

Another option is to disable or limit all tests, all actions, in your repository. GitHub's documentation for managing repository actions is at https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/enabling-features-for-your-repository/managing-github-actions-settings-for-a-repository#managing-github-actions-permissions-for-your-repository.

Disabling tests for the whole organization

You can disable these tests, or any actions, for a whole organization. GitHub's documentation for managing organization actions is at https://docs.github.com/en/organizations/managing-organization-settings/disabling-or-limiting-github-actions-for-your-organization#managing-github-actions-permissions-for-your-organization.

Use a separate server just for testing

Keep the test server completely separate from your production server so that no sensitive information can be revealed to potential malicious actors.

In addition, some general good practices are:

  • Never share API keys or passwords between servers.
  • Periodically clear out the test server and start a new docker container from scratch.
  • Occasionally check the test server to make sure it's not running resource-stealing code (blockchain miners, etc.).

See GitHub's security docs

GitHub has documentation on some best practices as well: https://docs.github.com/en/actions/security-guides/security-hardening-for-github-actions#using-third-party-actions

Use ALKiln's commit sha

This one requires prior technical knowledge. To summarize, you can freeze the version of the ALKiln code your repository's tests use by referencing a specific ALKiln commit sha in your workflow file.

  1. Go to ALKiln's repository. For example, for the v4 branch, you can go to https://github.com/SuffolkLITLab/ALKiln/commits/releases/v4.
  2. Find the sha of a commit you like.
  3. In your repository's directory, go to .github/workflows and find the file running the tests. There's a line in there that looks something like this:

         uses: suffolkLITLab/ALKiln@releases/v4

  4. Change releases/v4 to the commit sha.
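The result might look something like this, with a made-up sha:

    uses: suffolkLITLab/ALKiln@d4c5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b0c1d2e3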

When you want to update to a new version of the ALKiln, update that sha manually.

Customizations

Make a GitHub issue when tests fail

  1. Go to your GitHub repository.
  2. Tap on the .github folder, then on workflows, then on the YAML file in there that runs the ALKiln tests.
  3. Tap to edit the file.
  4. Add the below code under the last line of text in the file.
  5. Avoid adding any new GitHub secrets to your repository for this.
      - name: If tests failed create an issue
        if: ${{ failure() }}
        uses: actions-ecosystem/action-create-issue@v1
        with:
          github_token: ${{ secrets.github_token }}
          title: ALKiln tests failed
          body: |
            An ALKiln test failed. See the action at ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}.
          labels: |
            bug
warning

Avoid changing the value of github_token and avoid creating a new secret for it. The variable secrets.github_token is a value that your repository has by default.

If you use the code above, the GitHub issue will contain a link to the workflow's action page itself.

You can edit the values of title, body, and labels to customize the issue.

If you've run the Setup interview more recently, you will already have this code in your file, though it will be inactive. You just have to remove the comment symbols (#) from the lines of code.

Schedule test runs

You can decide to run these tests daily, weekly, monthly, or on any other interval. To run the tests on a schedule, you must add code to your workflow file.

  1. Go to your GitHub repository.
  2. Tap on the .github folder, then on workflows, then on the YAML file in there that runs the ALKiln tests.
  3. Tap to edit the file.
  4. Near the top of the code, you will see something like this:

         on:
           push:

  5. Between those two lines, add code, like this:

           schedule:
             - cron: '0 1 * * TUE'

The GitHub docs can tell you more about triggering workflows on a schedule. If you want to change the interval, these examples of cron syntax can help a lot.
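Put together, the top of the workflow file might then look like this sketch (the cron line above runs the tests at 01:00 UTC every Tuesday):

    on:
      schedule:
        - cron: '0 1 * * TUE'
      push: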

If you've run the Setup interview more recently, you will already have this code in your file, though it will be inactive. You just have to remove the comment symbols (#) from the lines of code.

FAQ

I have a private GitHub repository. Can I use this testing framework?

Yes, you can use ALKiln with a private repository, though you have to do a bit of extra work.

  1. Pick a GitHub account that has permission to change the private repository.
  2. Make sure the account on your docassemble server that you linked to the tests is integrated with the GitHub account. See docassemble's documentation on integrating a GitHub account.

As that documentation explains, no two accounts on a docassemble server can be connected to the same GitHub account.

Also, there are some limits on the amount of time private repositories can run workflows: https://docs.github.com/en/billing/managing-billing-for-github-actions/about-billing-for-github-actions

How do I add a new test file?

Go to your Playground > the dropdown Folders menu > Sources.

Add a new file that ends in the extension .feature. Ex: has_children.feature

Add this to the blank file:

Feature: A description of the category of the tests you'll write in this file

  Scenario: The specific situation that this test is written for
    Given I start the interview at "name_of_the_interview_file_to_test.yml"

Make sure that

  1. Feature: and its description are on the first line.
  2. Each test starts with a Scenario: and its description.
  3. Given I start the interview... is the first line under Scenario.

After that, you can add the story table or other Steps that will test your code. Add the file to the files you commit to GitHub. From then on, GitHub will run those tests whenever you commit, or push, to GitHub.

An example for the start of two separate tests for a restraining order:

Feature: I have children

  Scenario: I need visits to be supervised
    Given I start the interview at "restraining_order.yml"

  Scenario: I allow unsupervised visitation
    Given I start the interview at "restraining_order.yml"

The Steps under each scenario will be a bit different because they each test a different path for the user.

How do I add a new test to an existing test file?

To add a new test to the existing file you need:

  1. The keyword Scenario with the Scenario description.
  2. The step that loads the interview's page: Given I start the interview at. You must use it before you fill out any fields:

Example:

Scenario: I allow unsupervised visitation
  Given I start the interview at "restraining_order.yml"

Make sure to leave the Feature line at the very top of the file.

After the Given step, you can add the story table or other Steps that will test your interview.

ALKiln uses the Scenario description to label test results. Try to use something you'll recognize later.

When do tests run?

Tests run when you commit your files to GitHub. That might be when you hit the 'Commit' button on the Packages page. It can also happen when you edit, add, or delete files in GitHub itself.

If you know how to use GitHub Actions, you can also run the tests manually from the Actions page with some more options.

Why should I write a Scenario description?

Scenario descriptions affect the names of error screenshot files and report headings, so try to write something you will recognize later.

Built with

ALKiln uses cucumberjs, puppeteerjs, and cheerio, and runs its assertions using the mocha and chai libraries.

Even though it is built using cucumberjs, this framework has a different, less lofty purpose. cucumber focuses on BDD (behavior-driven development), while this framework mostly handles regression testing and other conveniences.

Repositories

ALKiln's repository is at https://github.com/SuffolkLITLab/ALKiln.

The developer test setup interview's repo is at https://github.com/plocket/docassemble-ALAutomatedTestingTests.

ALKiln also tests itself using some of the interviews at https://github.com/plocket/docassemble-ALAutomatedTestingTests.