How To Automatically Retry Failed Tests in NUnit


Everyone wants reliable tests, but no matter how hard you fight flaky tests, they keep haunting you, and you need a way to retry failed tests in NUnit.

A flaky test is one that can either pass or fail for the same configuration. Flaky tests are harmful for developers because their failures do not always indicate bugs in the code.

On one of the C# Selenium projects I worked on, I had the same issue you have now, and I tried to use the Retry attribute from NUnit.

But guess what? It re-runs failed tests, but only those that failed because of an assertion failure.

It didn’t really help me, because many of my tests were failing due to an environment that kept going down and up.

The Solution

The solution for me was to rewrite the NUnit retry attribute in a way that suited my needs.

If you have a look at NUnit’s RetryAttribute class, you’ll see it doesn’t handle all the possible result states:

public override TestResult Execute(TestExecutionContext context)
{
    int count = _retryCount;

    while (count-- > 0)
    {
        context.CurrentResult = innerCommand.Execute(context);

        if (context.CurrentResult.ResultState != ResultState.Failure)
        {
            break;
        }
    }

    return context.CurrentResult;
}

So what you can do is create a new class named “CustomRetry.cs” inside your helpers folder; it will look something like this:

using System;
using NUnit.Framework;
using NUnit.Framework.Interfaces;
using NUnit.Framework.Internal;
using NUnit.Framework.Internal.Commands;

namespace Project.Attributes
{
    [AttributeUsage(AttributeTargets.Method, AllowMultiple = false, Inherited = false)]
    public class CustomRetry : PropertyAttribute, IWrapSetUpTearDown
    {
        private int _count;

        /// <summary>
        /// Construct a CustomRetry attribute
        /// </summary>
        /// <param name="count">The number of times to run the test</param>
        public CustomRetry(int count) : base(count)
        {
            _count = count;
        }

        #region IWrapSetUpTearDown Members

        /// <summary>
        /// Wrap a command and return the result.
        /// </summary>
        /// <param name="command">The command to be wrapped</param>
        /// <returns>The wrapped command</returns>
        public TestCommand Wrap(TestCommand command)
        {
            return new CustomRetryCommand(command, _count);
        }

        #endregion

        #region Nested CustomRetryCommand Class

        /// <summary>
        /// The test command for the CustomRetry attribute
        /// </summary>
        public class CustomRetryCommand : DelegatingTestCommand
        {
            private int _retryCount;

            /// <summary>
            /// Initializes a new instance of the <see cref="CustomRetryCommand"/> class.
            /// </summary>
            /// <param name="innerCommand">The inner command.</param>
            /// <param name="retryCount">The number of repetitions</param>
            public CustomRetryCommand(TestCommand innerCommand, int retryCount)
                : base(innerCommand)
            {
                _retryCount = retryCount;
            }

            /// <summary>
            /// Runs the test, saving a TestResult in the supplied TestExecutionContext.
            /// </summary>
            /// <param name="context">The context in which the test should run.</param>
            /// <returns>A TestResult</returns>
            public override TestResult Execute(TestExecutionContext context)
            {
                int count = _retryCount;

                while (count-- > 0)
                {
                    context.CurrentResult = innerCommand.Execute(context);
                    var results = context.CurrentResult.ResultState;

                    if (results != ResultState.Error
                        && results != ResultState.Failure
                        && results != ResultState.SetUpError
                        && results != ResultState.SetUpFailure
                        && results != ResultState.TearDownError
                        && results != ResultState.ChildFailure)
                    {
                        break;
                    }
                }

                return context.CurrentResult;
            }
        }

        #endregion
    }
}
You are almost done; the next step is to tag your test with [CustomRetry(count)].
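For illustration, a tagged test might look like this (a sketch; the fixture and test names below are made up, not from the article):

```csharp
using NUnit.Framework;
using Project.Attributes;

[TestFixture]
public class LoginTests
{
    // Hypothetical example: if this test errors or fails for any of the
    // handled result states, CustomRetry re-runs it, up to 3 attempts.
    [Test]
    [CustomRetry(3)]
    public void UserCanLogIn()
    {
        // ... drive the browser, then assert on the outcome ...
    }
}
```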

If you have any questions, please do not hesitate to add a comment.

P.S. If you want to share your story with us, you can use the ADD POST button at the top :).

Happy testing!




  1. This is exactly what I needed, as the Repeat and Retry attributes were not working on NUnit 3.5. I am also unable to use NUnit-Retry because it requires NUnit 2.63. I did have to update line 63 from int count = _retryCount; to int count = _retryCount+1;, as it wasn’t executing the test again when I added the [CustomRetry(1)] attribute to the test method.

    What I really like about this CustomRetry class is that I can also intercept the retry based on whether any exceptions are found in the server logs, which would be a reason not to retry and to “truly” fail the test, as opposed to treating it as a flaky test.

    1. Thanks for the feedback, and sorry about the NUnit versions; I should have mentioned that I rewrote it for NUnit 3.x. It can be extended further, but this gives people what they need.
      I’m glad you find it useful.

      1. I guess I worded it wrong: there is a NuGet/GitHub NUnit-Retry package that does something similar to CustomRetry, but it only works on 2.63. CustomRetry works on 3.5, which is a good thing!
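For clarity, the off-by-one adjustment described in the first comment above can be sketched as follows (a sketch only; it assumes you want [CustomRetry(n)] to mean n retries in addition to the initial run, and the result-state checks are abbreviated relative to the article's full CustomRetryCommand):

```csharp
public override TestResult Execute(TestExecutionContext context)
{
    // +1 so that [CustomRetry(1)] means one initial run plus one retry;
    // without it, [CustomRetry(1)] runs the test only once.
    int count = _retryCount + 1;

    while (count-- > 0)
    {
        context.CurrentResult = innerCommand.Execute(context);

        // Abbreviated: the article's version also checks Error, SetUpError,
        // SetUpFailure, TearDownError, and ChildFailure here.
        if (context.CurrentResult.ResultState != ResultState.Failure)
            break;
    }

    return context.CurrentResult;
}
```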

  2. We are trying to use your example. We are finding that once there is a failure, even if the next run of the test passes, the test context status still contains a failure.

        1. I had a look at the NUnit library, and it seems the latest version changed the implementation, so we will have to update our custom attribute. I will look into it over the next few days and come up with an update to the post.

  3. Hi, I am currently using your code and it is very helpful. But I have a problem: could you show me how to tweak the code so that before retrying, it calls the OneTimeSetUp method? Thanks!

  4. Thanks a lot. How can I make it work at the assembly level? (I’m using SpecFlow and the tests are auto-generated, so I can’t add this attribute to every test because it will be overridden.)

    1. Hi Liraz ,

      You can do that; just have a look at this:

      Attributes are static metadata. Assemblies, modules, types, members, parameters, and return values aren't first-class objects in C# (e.g., the System.Type class is merely a reflected representation of a type). You can get an instance of an attribute for a type and change the properties if they're writable but that won't affect the attribute as it is applied to the type.

  5. At the end of execution, I am checking the current test context’s result outcome. NUnit holds the assert failure count, so the test result status shows as failed. How can I overcome this problem?

    1. Even with the implementation from the article? That should not happen if it is done correctly. NUnit doesn’t store the results until the whole execution is finished, and the results from a previous execution should be overwritten.

      e.g.: Test case-1 runs and fails > NUnit automatically reruns it > Test case-1 re-runs and passes > the end result should be: Test case-1 passed

        1. If you create the custom attribute, that should not happen: you override the test command’s Execute method and return a new set of results with
          return context.CurrentResult;
          If you share a basic project setup with just one test, I can say more, but this piece should work like a charm.

  6. Could we put the CustomRetry attribute on a TestFixture so that it applies to all the tests in that file, or does it have to go on the individual tests?

      1. I tried that and VS gave me an error saying “Attribute ‘CustomRetry’ is not valid on this declaration type. It is only valid on the ‘method’ declarations.”

        1. That should not be the case. Can you push a project with one test and the custom attribute class, and paste the repository link here? I will help you out; you are definitely missing something. I implemented this custom attribute, exactly as you see it, on a framework serving 8 other Selenium projects, and it worked just fine.

  7. Hi, it’s just perfect!! The only issue I’m experiencing is that I’m using the ExtentReports API, and if a test runs 3 times I see 3 “tests”, of which two failed and one passed. Any suggestion for keeping only the last run? Thanks!

    1. That’s reporting-related; I know what you mean. NUnit’s basic reports weren’t showing that. The only thing I can say, from my current experience in Jenkins (but with Selenium in Java): I export the reports to disk, and on re-run the results for a specific test case ID are overwritten, so when I gather all the results into an artifact at the end, I get only the last one. What I can suggest is to try ReportPortal; it’s free and you can easily deploy it using Docker. In the end you can manipulate those duplicated results, or maybe ReportPortal already does that; I haven’t checked its documentation for NUnit. Have a look; the tool is really powerful.

    2. I posted example code previously, but I don’t know whether that is allowed. ExtentReports has a method to remove an ExtentTest object from the ExtentReports object (i.e. ExtentReports.Remove(ExtentTest)). You can wrap this in logic that checks the number of iterations the test has gone through and only report the very last one.

  8. Hello. This has been working well for me, but I noticed that when I run tests in parallel, the number of retries varies from test to test. If I set the retry count to 3, some failing tests retry 3 times, some fewer, and some more. I can only think this is because the tests read the current global retry attribute count. Is there a way to get each test to use the retry class as if it were new, so each test would initially see the count as 3 in this example? Thanks

  9. This is great. Thanks so much!

    Also, to those with issues adding it to classes or assemblies instead of methods: make sure you’ve set AttributeTargets to All instead of Method in the sample code above.
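In code, that comment's suggestion is a one-line change to the attribute declaration from the article (a sketch; only the AttributeUsage line differs, and the body stays as in CustomRetry.cs):

```csharp
// Allow the attribute on classes and assemblies as well,
// instead of AttributeTargets.Method only.
[AttributeUsage(AttributeTargets.All, AllowMultiple = false, Inherited = false)]
public class CustomRetry : PropertyAttribute, IWrapSetUpTearDown
{
    // ... unchanged from the article's CustomRetry.cs ...
}
```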

  10. I love this! Unfortunately, recently I am noticing that previous NUnit failures from a test will be reported as a failure, even if the test has re-run and passed a subsequent time. Have you come across this issue? I am currently using ExtentReports v3.0.2 for reporting (with logic implemented that will remove the test from the entire suite of tests if it fails and has not retried) with NUnit v3.9.0. If I am running from within Visual Studio, I notice all of the NUnit failures are displayed in the test output (for instance, if I have my retry count set to 3 and all three tries failed, it will show all three exceptions in the test explorer output). It’s reporting failures where it should report passes, and I am at a bit of a loss in how to debug this issue.

      1. It appears it might be an issue with newer versions of NUnit. I was using NUnit v3.9.0, but when I downgraded to NUnit v3.4.0, the issue no longer occurred.

        1. Yes, it’s the version. Have a look at the latest implementation:

          You can adjust my code accordingly; give it a try.

          The only place that needs to be changed is here:

          while (count-- > 0)
          {
              try
              {
                  context.CurrentResult = innerCommand.Execute(context);
              }
              // Commands are supposed to catch exceptions, but some don't
              // and we want to look at restructuring the API in the future.
              catch (Exception ex)
              {
                  if (context.CurrentResult == null)
                      context.CurrentResult = context.CurrentTest.MakeTestResult();

                  context.CurrentResult.RecordException(ex);
              }

              if (context.CurrentResult.ResultState != ResultState.Failure)
                  break;

              // Clear result for retry
              if (count > 0)
              {
                  context.CurrentResult = context.CurrentTest.MakeTestResult();
                  context.CurrentRepeatCount++; // increment Retry count for next iteration; will only happen if we are guaranteed another iteration
              }
          }

          That’s the bit that will make the difference.

          1. Hi,

            I do not have the option for CurrentRepeatCount. Does this need to be defined somewhere else?


  11. Hello,

    I have just noticed that this doesn’t seem to work for me for some tests where the failure is due to an assertion. If the test fails the first time, it reports as a failure as expected, but then the second retry fails as well, even though it should pass. For some reason, the retry keeps the initial failure status through the subsequent retries.



    1. Hey,

      I’m going to quote an answer first:

      The purpose of Specflow scenarios is to assert that a system is behaving as expected.

      If some temporal issue causes the test to fail, then getting the test to re-run and “hoping for the best” is not going to resolve the problem! Having a test fail occasionally should not be expected behaviour. A test should give a consistent result every time it is executed.

      But if you want to work around all this, you can check this out:

      You just need to integrate it into your SpecFlow project.

      Hope it helps!

      1. I have read that already, but our infrastructure is kind of unstable as it faces multiple deployments, so I chose to proceed with the retry mechanism. I have also tried the SpecFlow retry, but there is an open issue in SpecFlow that prevents the plugin from working.
