docker stop redistest
docker rm redistest
Yet the difference in behavior remains. All of these attempts to run docker programmatically fail:
- adding -d
- running as a normal user
- running with elevated privileges
- running from within a test
- running from a .NET Framework console app
Why does programmatic process creation of the docker run command cause docker to exit with a 125?
It works programmatically just fine for some images but not others.
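For reference, the kind of programmatic launch being described looks roughly like the sketch below, using System.Diagnostics.Process. The class name, image, container name, and port here are only illustrative (taken from the redis example above), not the asker's exact code:

using System;
using System.Diagnostics;

class DockerRunner
{
    static int Main()
    {
        var psi = new ProcessStartInfo
        {
            FileName = "docker",
            // -d so docker run returns once the container has started
            Arguments = "run -d --name redistest -p 6379:6379 redis:4.0.5-alpine",
            UseShellExecute = false,
            RedirectStandardOutput = true,
            RedirectStandardError = true
        };

        using (var process = Process.Start(psi))
        {
            // Capture output so the error message behind a 125 is visible
            string stdout = process.StandardOutput.ReadToEnd();
            string stderr = process.StandardError.ReadToEnd();
            process.WaitForExit();

            Console.WriteLine("exit code: " + process.ExitCode);
            Console.WriteLine(stderr);
            return process.ExitCode;
        }
    }
}

An exit status of 125 means the docker run command itself failed (for example a daemon error or a name conflict) rather than the containerized process, so whatever docker writes to stderr is the place to look.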
I know this is an old question and I'm obviously way too late to help the original poster, but since it appears near the top of the search results for this error with either docker or podman, I thought I would add one other workaround that I didn't see mentioned (it should work for either podman or docker). Posting in case it helps someone else.
I have to give credit to the answers above, especially TheECanyon's, for pointing me in the right direction by noting that the error is related to passing --name. That said, it's nothing revolutionary; it's just a variation on Kit's answer that delegates the removal to docker/podman instead of requiring additional handling in your code/script. Then again, his answer or a hybrid approach might be more resilient for handling oddball conditions. Like the others, I was getting this error while using that parameter, but I wasn't seeing any of the usual messages about not being able to run a container because a container with that name already existed, possibly because I was running it from a script.
What I didn't see mentioned thus far, and what allowed me to keep the --name parameter, was to simply remove the existing container manually (e.g. [docker|podman] rm <container-name>) and then add the --rm parameter to the run command (e.g. [docker|podman] run --rm ...), which causes the container to be removed on exit. In effect, you re-create the container on every run and docker/podman removes it when it stops/exits. This probably won't work if there's a reason you need the exact same container on every run, but in my case all of the config/data/persistent bits were stored on mounted volumes and re-acquired on the next run anyway, so it worked great, including in rootless containers.
This solution works better for me than simply removing the name, because removing the name causes a new container instance to be generated on every run without cleaning up the old ones. That's not a big deal if you just need tens of runs on a local dev box, but in theory it could lead to storage concerns over time, especially if you are doing hundreds or thousands of runs that each create a new container instance.
I was unable to find out why I couldn't run the container, and in fact the same thing happened with a few other images. I'm still looking for the answer, but here is a workaround in case someone needs it.
The blog post Integration testing using a docker container describes the code you need to write to use the Docker.DotNet NuGet package, which is a .NET client library for the Docker Engine API.
Now I'm able to put this code in the integration test fixture:
[AssemblyInitialize]
public static async Task InitializeDockerContainers(TestContext testContext)
{
    await DockerSupport.StartDockerContainer("my container", "redis", "4.0.5-alpine", 6379);
}
and combine it with (my already working) stop and rm commands to allow for clean creation and removal of containers during integration testing.
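For completeness, here is a rough sketch of what a DockerSupport.StartDockerContainer helper could look like on top of Docker.DotNet. The method name and signature match the snippet above, but everything else (the connection URI, the port mapping, the lack of error handling) is my assumption, not necessarily what the blog post actually does:

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Docker.DotNet;
using Docker.DotNet.Models;

public static class DockerSupport
{
    public static async Task StartDockerContainer(string name, string image, string tag, int port)
    {
        // Connect to the local engine (use "unix:///var/run/docker.sock" on Linux)
        var client = new DockerClientConfiguration(
            new Uri("npipe://./pipe/docker_engine")).CreateClient();

        // Pull the image if it isn't already available locally
        await client.Images.CreateImageAsync(
            new ImagesCreateParameters { FromImage = image, Tag = tag },
            null,
            new Progress<JSONMessage>());

        // Create the container and publish the requested port on the same host port
        var created = await client.Containers.CreateContainerAsync(new CreateContainerParameters
        {
            Name = name,
            Image = image + ":" + tag,
            ExposedPorts = new Dictionary<string, EmptyStruct> { { port + "/tcp", default(EmptyStruct) } },
            HostConfig = new HostConfig
            {
                PortBindings = new Dictionary<string, IList<PortBinding>>
                {
                    { port + "/tcp", new List<PortBinding> { new PortBinding { HostPort = port.ToString() } } }
                }
            }
        });

        // Start it; failures surface as a DockerApiException carrying the engine's message,
        // which is easier to diagnose than a bare 125 exit code from the CLI
        await client.Containers.StartContainerAsync(created.ID, new ContainerStartParameters());
    }
}

A matching [AssemblyCleanup] method can then call StopContainerAsync and RemoveContainerAsync on the same client to replace the shell stop and rm commands.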