How To Automatically Retry Failed Tests in NUnit


Everyone wants reliable tests, but no matter how hard you fight the flaky tests, they keep haunting you, and you need a way to retry failed tests in NUnit.

A flaky test is a test that can fail or pass under the same configuration. Flaky tests are harmful to developers because their failures do not always indicate bugs in the code.

On one of the C# Selenium projects I've worked on, I had the same issue you have now, and I tried to use the Retry attribute from NUnit.

But guess what? It re-runs failed tests, but only those that failed because of an assertion.

It didn't really help me, because many of my tests were failing due to an environment that kept going down and coming back up.
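To illustrate the limitation: the built-in [Retry] attribute re-runs the first test below but not the second, because an unexpected exception produces ResultState.Error rather than ResultState.Failure (the test names and fixture are illustrative):

```csharp
using System;
using NUnit.Framework;

[TestFixture]
public class RetryLimitationExample
{
    // Retried: Assert.Fail produces ResultState.Failure,
    // which the built-in RetryAttribute handles.
    [Test, Retry(3)]
    public void FailsOnAssertion() => Assert.Fail("flaky assertion");

    // NOT retried: an unexpected exception produces ResultState.Error,
    // which the built-in RetryAttribute ignores.
    [Test, Retry(3)]
    public void FailsOnException() =>
        throw new InvalidOperationException("environment went down");
}
```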

The Solution

The solution for me was to rewrite the NUnit Retry attribute in a way that would actually help.

If you have a look at the NUnit RetryAttribute class, you'll see it doesn't handle all of the possible result states.

public override TestResult Execute(TestExecutionContext context)
{
    int count = _retryCount;

    while (count-- > 0)
    {
        context.CurrentResult = innerCommand.Execute(context);

        if (context.CurrentResult.ResultState != ResultState.Failure)
        {
            break;
        }
    }

    return context.CurrentResult;
}
NUnit RetryAttribute (excerpt)

So, inside your helpers folder, create a new class named "CustomRetry.cs"; it will look something like this:

using System;
using NUnit.Framework;
using NUnit.Framework.Interfaces;
using NUnit.Framework.Internal;
using NUnit.Framework.Internal.Commands;


namespace Project.Attributes
{
    [AttributeUsage(AttributeTargets.Method, AllowMultiple = false, Inherited = false)]
    public class CustomRetry : PropertyAttribute, IWrapSetUpTearDown
    {
        private int _count;

        /// <summary>
        /// Construct a CustomRetry attribute
        /// </summary>
        /// <param name="count">The number of times to run the test</param>
        public CustomRetry(int count) : base(count)
        {
            _count = count;
        }

        #region IWrapSetUpTearDown Members

        /// <summary>
        /// Wrap a command and return the result.
        /// </summary>
        /// <param name="command">The command to be wrapped</param>
        /// <returns>The wrapped command</returns>
        public TestCommand Wrap(TestCommand command)
        {
            return new CustomRetryCommand(command, _count);
        }

        #endregion

        #region Nested CustomRetry Class

        /// <summary>
        /// The test command for the CustomRetry attribute
        /// </summary>
        public class CustomRetryCommand : DelegatingTestCommand
        {
            private int _retryCount;

            /// <summary>
            /// Initializes a new instance of the <see cref="CustomRetryCommand"/> class.
            /// </summary>
            /// <param name="innerCommand">The inner command.</param>
            /// <param name="retryCount">The number of repetitions</param>
            public CustomRetryCommand(TestCommand innerCommand, int retryCount)
                : base(innerCommand)
            {
                _retryCount = retryCount;
            }

            /// <summary>
            /// Runs the test, saving a TestResult in the supplied TestExecutionContext.
            /// </summary>
            /// <param name="context">The context in which the test should run.</param>
            /// <returns>A TestResult</returns>
            public override TestResult Execute(TestExecutionContext context)
            {
                int count = _retryCount;

                while (count-- > 0)
                {
                    context.CurrentResult = innerCommand.Execute(context);
                    var results = context.CurrentResult.ResultState;

                    if (results != ResultState.Error
                        && results != ResultState.Failure
                        && results != ResultState.SetUpError 
                        && results != ResultState.SetUpFailure
                        && results != ResultState.TearDownError
                        && results != ResultState.ChildFailure)
                    {
                        break;
                    }
                }

                return context.CurrentResult;
            }
        }

        #endregion
    }
}
CustomRetry.cs
You are almost done; the last step is to tag your test with [CustomRetry(int count)].
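For example, a Selenium-style test using the attribute might look like this (the fixture, namespace, and test body are illustrative):

```csharp
using NUnit.Framework;
using Project.Attributes;

namespace Project.Tests
{
    [TestFixture]
    public class LoginTests
    {
        // Run up to 3 times on any failure or error state,
        // including environment-related exceptions.
        [Test]
        [CustomRetry(3)]
        public void UserCanLogIn()
        {
            // ... driver interactions ...
            Assert.That(true, Is.True);
        }
    }
}
```

Note that in this implementation the count is the total number of executions: [CustomRetry(1)] runs the test once with no retry, so pass 2 or more to get actual retries.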

If you have any questions, please do not hesitate to add a comment or email hello@testingrepository.com.

P.S. If you want to share your story with us, you can use the ADD POST button at the top. :)

Happy testing!

Comments (18)


  1. This is exactly what I needed, as the Repeat and Retry attributes were not working on NUnit 3.5. I am also unable to use NUnit-Retry due to its requirement of NUnit 2.6.3. I did have to change int count = _retryCount; to int count = _retryCount + 1;, as it wasn't executing the test again when I added the [CustomRetry(1)] attribute to the test method.

    What I really like about this CustomRetry class is that I can also intercept the retry based on whether there are any exceptions in the server logs, which would be a reason not to retry and to "truly" fail the test, as opposed to treating it as a flaky test.

    1. Thanks for the feedback, and sorry about the NUnit versions; I should have mentioned that I rewrote it for NUnit 3.x. It can be extended further, but this gives people what they need.
      I'm glad you find it useful.

      1. I guess I worded it wrong: there is a NuGet/GitHub NUnit-Retry package that does something similar to CustomRetry, but it only works on 2.6.3. CustomRetry works on 3.5, which is a good thing!

  2. We are trying to use your example. We are finding that once there is a failure, even if the next run of the test passes, the test context status still contains a fail.

  3. Hi, I am currently using your code and it is very helpful. But I have a problem: could you show me how to tweak the code so that before retrying, it calls the OneTimeSetUp method? Thanks!

  4. Thanks a lot. How can I make it work at the assembly level? (I'm using SpecFlow and the tests are auto-generated, so I can't add this attribute to every test because it would be overridden.)

    1. Hi Liraz,

      You can do that; just have a look at this:

      https://stackoverflow.com/questions/14811012/set-assembly-level-attribute-conditionally

      Attributes are static metadata. Assemblies, modules, types, members, parameters, and return values aren't first-class objects in C# (e.g., the System.Type class is merely a reflected representation of a type). You can get an instance of an attribute for a type and change the properties if they're writable but that won't affect the attribute as it is applied to the type.

  5. At the end of the execution, I am checking the current test context result outcome. NUnit holds the assert failure count, so the test result status shows as failed. How can I overcome this problem?

    1. Even with the implementation from the article? That should not happen if it is done correctly; NUnit doesn't store the results until the whole execution is finished, and the results from a previous execution should be overwritten.

      e.g.: Test case-1 runs and fails > NUnit automatically reruns it > Test case-1 passes on the re-run -> the end result should be Test case-1 passed

      1. TestContext.CurrentContext.Result.Outcome.Status is holding the previous result. It holds the assertion failure count.

        1. If you create the custom attribute, that should not happen: you override the Execute method and return a new set of results via
          return context.CurrentResult;
          If you share just a basic project setup with one test, I can say more, but this piece should work like a charm.
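As a footnote to the thread above: the Execute loop can also be extended with a pause between attempts, which helps when failures come from an environment flapping up and down. This is a sketch assuming the CustomRetryCommand class from the article; the one-second delay is an arbitrary choice:

```csharp
// Extended Execute for CustomRetryCommand: same result-state checks
// as the article's version, plus a short wait before each retry.
public override TestResult Execute(TestExecutionContext context)
{
    int count = _retryCount;

    while (count-- > 0)
    {
        context.CurrentResult = innerCommand.Execute(context);
        var results = context.CurrentResult.ResultState;

        if (results != ResultState.Error
            && results != ResultState.Failure
            && results != ResultState.SetUpError
            && results != ResultState.SetUpFailure
            && results != ResultState.TearDownError
            && results != ResultState.ChildFailure)
        {
            break;
        }

        // Give a flaky environment time to recover before the next try.
        if (count > 0)
            System.Threading.Thread.Sleep(TimeSpan.FromSeconds(1));
    }

    return context.CurrentResult;
}
```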
