Why does programmatic process creation of the docker run command cause docker to exit with a 125?

I am trying to get an integration test working. In the test initialization phase I attempt to spin up a Redis server from a docker image.

var p = new Process { StartInfo = new ProcessStartInfo("docker", "run --name redistest -p 6379:6379 redis") };
p.Start();

When I do that, the process exits with exit code 125. If I comment out those lines, break in the test before the test code executes, and instead run from the command line

docker run --name redistest -p 6379:6379 redis

the test runs as expected when continuing from the breakpoint. The 125 exit code just means that docker run itself failed, so there's not much more information to go on.
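For what it's worth, the docker CLI writes the underlying error to stderr before exiting with 125, so capturing the child process's standard error surfaces it. A minimal diagnostic sketch using the same command (the class name is illustrative):

using System;
using System.Diagnostics;

class DockerDiagnostics
{
    static void Main()
    {
        // Same docker run command, but with stderr redirected so the
        // actual error message is visible rather than just exit code 125.
        var psi = new ProcessStartInfo("docker", "run --name redistest -p 6379:6379 redis")
        {
            UseShellExecute = false,       // required for stream redirection
            RedirectStandardError = true
        };

        var p = Process.Start(psi);
        // A 125 failure exits almost immediately; note that a successful
        // foreground run would block here until the container stops.
        string stderr = p.StandardError.ReadToEnd();
        p.WaitForExit();
        Console.WriteLine("docker exited with {0}: {1}", p.ExitCode, stderr);
    }
}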

Prior to either the command line invocation or the C# invocation, I made sure there was no container named redistest with

docker stop redistest
docker rm redistest

Yet the difference in behavior remains. All of these attempts to run docker programmatically fail:

  • adding -d
  • running as a normal user
  • running with elevated privileges
  • running from within a test
  • running from a .NET Framework console app
It works programmatically just fine for some images but not others.

    I know this is an old question and I'm obviously way too late to help the original poster, but since it appears near the top of the search results for this error for both docker and podman, I thought I'd add one more workaround that I didn't see mentioned (it should work for either). Posting in case it helps someone else.

    Credit to the answers above, especially TheECanyon's, for pointing me in the right direction by noting that the error is related to passing --name. That said, it's nothing revolutionary; it's just a variation on Kit's answer that delegates the removal to docker/podman instead of requiring additional handling in your code or script. Then again, his answer or a hybrid approach might be more resilient for oddball conditions. Like the others, I was getting this error while using that parameter, but I wasn't seeing the usual message about a container with that name already existing, possibly because I was running it from a script.

    What I hadn't seen mentioned, and what allowed me to keep the --name parameter, was simply to remove the existing container manually (e.g. [docker|podman] rm <container-name>) and then add the --rm flag to the run command (e.g. [docker|podman] run --rm), which removes the container on exit. In effect, you re-create the container on every run and docker/podman cleans it up when it stops. This won't work if you need the exact same container instance on every run, but in my case all of the config/data/persistent bits lived on mounted volumes and were re-acquired on the next run anyway, so it worked great, including in rootless containers.
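    For the Redis container from the question, that would look like this (the container name comes from the question):

        docker rm redistest
        docker run --rm --name redistest -p 6379:6379 redis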

    This works better for me than simply removing the name, because removing the name generates a new container instance on each and every run without cleaning up the old ones. That's not a big deal if you only need tens of runs on a local dev box, but in theory it could lead to storage concerns over time, especially with hundreds or thousands of runs that each create a new container instance.
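    (The accumulation is easy to see with docker ps -a, which lists exited containers alongside running ones.)

        docker ps -a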

    Thanks for the explanation and ideas. I've switched my acceptance from my own answer to yours, as it supersedes mine and the others and seems to have wider applicability. – Kit Sep 7, 2021 at 12:03

    Even older now :-) Just wanted to add, for anyone coming in from Google etc., that removing --name from the run command will probably work fine for one-offs, debugging, or low numbers of runs. But keep in mind that this generates a new container instance each time unless you use [docker|podman] run --rm to ensure the container is removed on exit (at which point there is no longer any reason to remove the name). This is mostly a concern with a large number of runs or low storage space to start with. – zpangwin Sep 5, 2021 at 15:35

    I was unable to find out why I couldn't run the container, and in fact the same thing happened with a few other images. I'm still looking for the answer, but here is a workaround in case someone needs it.

    The blog post Integration testing using a docker container describes the code you need to write to use the Docker.DotNet NuGet package, a client library for the Docker Engine API.
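    For reference, creating and starting a container through Docker.DotNet looks roughly like this. This is a sketch of what such a helper might wrap, not the blog post's exact code; the endpoint URI and container name are assumptions:

        using System;
        using System.Collections.Generic;
        using System.Threading.Tasks;
        using Docker.DotNet;
        using Docker.DotNet.Models;

        static class RedisContainer
        {
            public static async Task StartAsync()
            {
                // The npipe URI is for Docker Desktop on Windows; use
                // unix:///var/run/docker.sock on Linux.
                var client = new DockerClientConfiguration(
                    new Uri("npipe://./pipe/docker_engine")).CreateClient();

                // Assumes the redis:4.0.5-alpine image has already been pulled.
                var container = await client.Containers.CreateContainerAsync(new CreateContainerParameters
                {
                    Image = "redis:4.0.5-alpine",
                    Name = "redistest",
                    ExposedPorts = new Dictionary<string, EmptyStruct> { ["6379/tcp"] = default(EmptyStruct) },
                    HostConfig = new HostConfig
                    {
                        // Equivalent of -p 6379:6379 on the command line.
                        PortBindings = new Dictionary<string, IList<PortBinding>>
                        {
                            ["6379/tcp"] = new List<PortBinding> { new PortBinding { HostPort = "6379" } }
                        }
                    }
                });

                await client.Containers.StartContainerAsync(container.ID, new ContainerStartParameters());
            }
        }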

    Now I'm able to put this code in the integration test fixture:

        [AssemblyInitialize]
        public static async Task InitializeDockerContainers(TestContext testContext)
        {
            await DockerSupport.StartDockerContainer("my container", "redis", "4.0.5-alpine", 6379);
        }
    

    and combine it with (my already working) stop and rm commands to allow for clean creation and removal of containers during integration testing.
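    The stop and rm side can stay as plain docker commands shelled out from the fixture; a minimal sketch, assuming the container is named redistest as in the question:

        using System.Diagnostics;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [AssemblyCleanup]
        public static void CleanupDockerContainers()
        {
            // Stop and remove the container so the next test run starts clean;
            // failures are ignored (e.g. the container may not exist yet).
            foreach (var args in new[] { "stop redistest", "rm redistest" })
            {
                using (var p = Process.Start(new ProcessStartInfo("docker", args) { UseShellExecute = false }))
                {
                    p.WaitForExit();
                }
            }
        }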

    That didn't help unfortunately. The command isn't wrong; it just fails on that image. It worked fine for a RabbitMQ image. – Kit Dec 20, 2018 at 22:25
