Server test guidelines

Handling Flaky Tests 

A flaky test is one that exhibits both passing and failing results when run multiple times without any code changes. When our automation detects a flaky test on your PR:

  1. Check if the Test is Newly Introduced

    • Review your PR changes to determine if the flaky test was introduced by your changes
    • If the test is new, fix the flakiness in your PR before merging
  2. For Existing Flaky Tests

    • Create a JIRA ticket titled “Flaky Test: {TestName}”, e.g. “Flaky Test: TestGetMattermostLog”

    • Copy the test failure message into the JIRA ticket description

    • Add the flaky-test and triage-global labels

    • Create a PR to skip the test by adding:

      t.Skip("https://mattermost.atlassian.net/browse/MM-XXXXX")
      

      where MM-XXXXX is your JIRA ticket number

    • Link the JIRA ticket in the skip message for tracking
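As a sketch, the skipped test would look like this (the test name is illustrative, and MM-XXXXX stands for your actual ticket number):

func TestGetMattermostLog(t *testing.T) {
  // Skipped pending a fix; see the linked JIRA ticket for status.
  t.Skip("https://mattermost.atlassian.net/browse/MM-XXXXX")

  ...
}

Placing t.Skip as the first statement ensures the rest of the test body never runs until the Skip line is removed.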

This process helps us track and systematically address flaky tests while preventing them from blocking development work.

Writing Parallel Tests 

Leveraging parallel tests can drastically reduce execution time for entire test packages, such as api4 and app, which are notably heavy with hundreds of tests. However, careful implementation is essential to ensure reliability and prevent flakiness. Follow these guidelines when writing parallel tests:

Enabling Parallel Tests 

In the api4, app, platform, email, and jobs packages, use one of the following patterns:

func TestExample(t *testing.T) {
  mainHelper.Parallel(t)

  ...
}

// OR

func TestExample(t *testing.T) {
  th := Setup(t)
  th.Parallel(t)

  ...
}

// OR

func TestExample(t *testing.T) {
  if mainHelper.Options.RunParallel {
    t.Parallel()
  }

  ...
}

In the sqlstore package:

func TestExample(t *testing.T) {
  if enableFullyParallelTests {
    t.Parallel()
  }

  ...
}

To enable parallel execution, set the ENABLE_FULLY_PARALLEL_TESTS environment variable. For example:

ENABLE_FULLY_PARALLEL_TESTS=true go test -v ./api4/...
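Assuming a standard Go toolchain, you can also combine this with the race detector and repeated runs to shake out ordering issues locally:

ENABLE_FULLY_PARALLEL_TESTS=true go test -race -count=10 -run TestExample ./api4/...

Passing -count disables test result caching, so each of the ten runs actually executes the test.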

When to Use Parallel Tests 

  • Generally Safe: Tests with dedicated setup functions that ensure independence from other tests.
  • Subtests: Only safe if each subtest features its own setup function, ensuring they are decoupled and independent of execution order.
  • Unsafe: When a subtest depends on state changes made by another subtest, thus coupling their execution order.
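A sketch of the safe subtest shape (subtest names and helpers are illustrative): each t.Run closure builds its own helper, so no subtest observes another's state:

func TestExample(t *testing.T) {
  t.Run("first case", func(t *testing.T) {
    th := Setup(t)
    defer th.TearDown()
    ...
  })

  t.Run("second case", func(t *testing.T) {
    th := Setup(t)
    defer th.TearDown()
    ...
  })
}

If both subtests instead shared a single helper created at the top of TestExample, and one subtest mutated server state that the other reads, their execution order would matter and parallel execution would be unsafe.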

Common Issues That Break Parallel Safety 

Global State 

Avoid reliance on global variables and registrations such as:

  • LicenseValidator
  • platform.RegisterMetricsInterface
  • platform.PurgeLinkCache
  • model.BuildEnterpriseReady
  • jobs.DefaultWatcherPollingInterval

Filesystem Operations 

Avoid using os.Chdir (or t.Chdir) and relative paths tied to the test executable, as they may introduce inconsistencies when tests run in parallel. When possible, rely on temporary directories dedicated to the test, such as th.tempWorkspace.
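For instance, instead of changing the working directory, a parallel-safe test can write into a directory it owns (t.TempDir is the standard-library equivalent of a per-test workspace; the file name here is illustrative):

func TestExample(t *testing.T) {
  mainHelper.Parallel(t)

  // Each call to t.TempDir returns a unique directory that the
  // testing framework removes automatically when the test ends.
  dir := t.TempDir()
  path := filepath.Join(dir, "config.json")

  ...
}

Because every parallel test gets a distinct directory, there is no shared path for two tests to race on.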

Environment Variables 

Using os.Setenv for feature flags and other settings can cause interference between parallel tests, and Go's t.Setenv is no safer: it panics if the test has called t.Parallel. Instead, use the configuration API:

// UNSAFE for parallel tests:
os.Setenv("MM_FEATUREFLAGS_CUSTOMFEATURE", "true")
defer os.Unsetenv("MM_FEATUREFLAGS_CUSTOMFEATURE")

// SAFE for parallel tests:
th.App.UpdateConfig(func(cfg *model.Config) {
    cfg.FeatureFlags.CustomFeature = true
})

Process-Level Methods 

Be cautious with methods affecting the entire process, such as pprof.StartCPUProfile, which can introduce contention between tests.
