
I want to run a tail -f logfile command on a remote machine using python's paramiko module. I've been attempting it so far in the following fashion:

import paramiko

interface = paramiko.SSHClient()
# snip the connection setup portion
stdin, stdout, stderr = interface.exec_command("tail -f logfile")
# snip into threaded loop
print(stdout.readline())

I'd like the command to run as long as necessary, but I have 2 problems:

  • How do I stop this cleanly? I thought of making a Channel and then using the shutdown() command on the channel when I'm through with it, but that seems messy. Is it possible to do something like send Ctrl-C to the channel's stdin? (A sketch of one possibility follows this list.)
  • readline() blocks, and I could avoid threads if I had a non-blocking method of getting output. Any thoughts?
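One possible answer to the Ctrl-C question, sketched under the assumption that a pseudo-terminal is allocated for the command: the remote terminal driver interprets the ETX byte (b'\x03') as Ctrl-C and sends SIGINT to the foreground process. Host and file names below are placeholders.

import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()
client.connect('host.example.com')

channel = client.get_transport().open_session()
channel.get_pty()  # a PTY is needed for ^C to be interpreted on the remote side
channel.exec_command("tail -f logfile")

print(channel.recv(4096).decode("utf-8", "replace"), end="")  # read the initial chunk of output
channel.send(b'\x03')  # then interrupt the remote tail with Ctrl-C
channel.close()
client.close()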
    Instead of calling exec_command on the client, get hold of the transport and generate your own channel. The channel can be used to execute a command, and you can use it in a select statement to find out when data can be read:

    #!/usr/bin/env python
    import paramiko
    import select
    client = paramiko.SSHClient()
    client.load_system_host_keys()
    client.connect('host.example.com')
    transport = client.get_transport()
    channel = transport.open_session()
    channel.exec_command("tail -f /var/log/everything/current")
    while True:
        rl, wl, xl = select.select([channel], [], [], 0.0)
        if len(rl) > 0:
            # Must be stdout
            print(channel.recv(1024).decode("utf-8", "replace"))
    

    The channel object can be read from and written to, connecting with stdout and stdin of the remote command. You can get at stderr by calling channel.makefile_stderr(...).

    I've set the timeout to 0.0 seconds because a non-blocking solution was requested. Depending on your needs, you might want to block with a non-zero timeout.
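    As a minimal sketch of the makefile_stderr(...) route, assuming the same channel as above (this read blocks, so it suits the blocking variant):

    # Assumes `channel` was set up as in the snippet above.
    err = channel.makefile_stderr("r", -1)  # file-like view of the remote stderr
    for line in err:                        # blocks until stderr data arrives or the stream closes
        print("STDERR:", line, end="")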

    You can't select on the stdout object, because it lacks a fileno attribute; goathens isn't using a channel object. – JimB May 7, 2009 at 17:33

    @Vivek: you'd still need to look at rl, that's the list of sockets that can be read. Take a look at the documentation for channel.recv_stderr() (and channel.recv_stderr_ready()) to see how to read the remote stderr. – Andrew Aylett Jun 27, 2012 at 13:17

    1) You can just close the client if you wish. The server on the other end will kill the tail process.

    2) If you need to do this in a non-blocking way, you will have to use the channel object directly. You can then watch for both stdout and stderr with channel.recv_ready() and channel.recv_stderr_ready(), or use select.select.

    On some newer servers, your processes won't be killed even after you terminate your client. You have to set get_pty=True in the exec_command() in order for the processes to be cleaned up after exiting the client. – nlsun Jul 13, 2016 at 22:54
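    A hedged sketch of what that comment describes, with placeholder host and log path: requesting a PTY ties the remote process to the session, so closing the client also ends the tail.

    import paramiko

    client = paramiko.SSHClient()
    client.load_system_host_keys()
    client.connect('host.example.com')

    # get_pty=True attaches the command to a pseudo-terminal; closing the client
    # (and thus the channel) hangs up that terminal and the remote tail exits.
    stdin, stdout, stderr = client.exec_command(
        "tail -f /var/log/everything/current", get_pty=True)
    try:
        for line in stdout:  # blocking, line-by-line read
            print(line, end="")
    finally:
        client.close()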

    Just a small update to the solution by Andrew Aylett. The following code actually breaks the loop and quits when the external process finishes:

    import paramiko
    import select
    client = paramiko.SSHClient()
    client.load_system_host_keys()
    client.connect('host.example.com')
    channel = client.get_transport().open_session()
    channel.exec_command("tail -f /var/log/everything/current")
    while True:
        if channel.exit_status_ready():
            break
        rl, wl, xl = select.select([channel], [], [], 0.0)
        if len(rl) > 0:
            print(channel.recv(1024).decode("utf-8", "replace"))
    @azmeuk: Both solutions are slightly incorrect, because you don't want to stop receiving output as soon as the exit status is ready. You want to stop when there is no output left to be received AND the exit status is ready. Otherwise you may end up quitting before receiving all output. – user7610 Aug 10, 2016 at 15:35

    @Jiri, you are correct; I am facing the same issue you mentioned. Can you please let me know if there is any workaround? Some of my output is skipped from tail -f <file>. – user5154816 Dec 15, 2016 at 2:55
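    A minimal sketch of the fix those comments point to, using the same placeholder host and log path as above: break only once the command has exited and the channel has nothing left to deliver.

    import paramiko
    import select

    client = paramiko.SSHClient()
    client.load_system_host_keys()
    client.connect('host.example.com')
    channel = client.get_transport().open_session()
    channel.exec_command("tail -f /var/log/everything/current")

    while True:
        # Drain whatever is ready before looking at the exit status.
        rl, wl, xl = select.select([channel], [], [], 0.5)
        if rl and channel.recv_ready():
            print(channel.recv(1024).decode("utf-8", "replace"), end="")
        # Stop only when the process has exited AND nothing is left to read.
        if channel.exit_status_ready() and not channel.recv_ready():
            break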
    

    The way I've solved this is with a context manager. This will make sure my long-running commands are aborted. The key logic is a wrapper that mimics SSHClient.exec_command but captures the created channel and uses a Timer to close that channel if the command runs for too long.

    import paramiko
    import threading

    class TimeoutChannel:
        def __init__(self, client: paramiko.SSHClient, timeout):
            self.expired = False
            self._channel: paramiko.Channel = None
            self.client = client
            self.timeout = timeout

        def __enter__(self):
            self.timer = threading.Timer(self.timeout, self.kill_client)
            self.timer.start()
            return self

        def __exit__(self, exc_type, exc_val, exc_tb):
            print("Exited Timeout. Timed out:", self.expired)
            self.timer.cancel()
            if exc_val:
                return False  # Make sure the exceptions are re-raised
            if self.expired:
                raise TimeoutError("Command timed out")

        def kill_client(self):
            self.expired = True
            print("Should kill client")
            if self._channel:
                print("We have a channel")
                self._channel.close()

        def exec(self, command, bufsize=-1, timeout=None, get_pty=False, environment=None):
            self._channel = self.client.get_transport().open_session(timeout=timeout)
            if get_pty:
                self._channel.get_pty()
            self._channel.settimeout(timeout)
            if environment:
                self._channel.update_environment(environment)
            self._channel.exec_command(command)
            stdin = self._channel.makefile_stdin("wb", bufsize)
            stdout = self._channel.makefile("r", bufsize)
            stderr = self._channel.makefile_stderr("r", bufsize)
            return stdin, stdout, stderr
    

    Using the code is pretty simple now; the first example will throw a TimeoutError:

    ssh = paramiko.SSHClient()
    ssh.connect('hostname', username='user', password='pass')
    with TimeoutChannel(ssh, 3) as c:
        ssh_stdin, ssh_stdout, ssh_stderr = c.exec("cat")    # non-blocking
        exit_status = ssh_stdout.channel.recv_exit_status()  # block til done, will never complete because cat wants input
    

    This code will work fine (unless the host is under insane load!)

    ssh = paramiko.SSHClient()
    ssh.connect('hostname', username='user', password='pass')
    with TimeoutChannel(ssh, 3) as c:
        ssh_stdin, ssh_stdout, ssh_stderr = c.exec("uptime")    # non-blocking
        exit_status = ssh_stdout.channel.recv_exit_status()     # block til done, will complete quickly
        print(ssh_stdout.read().decode("utf8"))                 # Show results
            
