When displaying the contents of a remote directory containing a very large number of files (8,283), the time taken to display it is very long (~125 seconds) and local CPU utilisation is very high (~97%).
The time to generate the same directory listing on the remote host is relatively short (shown below):
bash-2.03$ time ls -l > /tmp/zz
The time taken to transfer the contents of the directory listing (/tmp/zz) to my local machine is about 7 seconds (using compression during the file transfer).
Changing directory (to, say, the root directory) when the current directory contains a very large number of files is also quite slow (~25 seconds).
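For anyone wanting to reproduce the server-side half of this comparison without access to my host, here is a minimal local sketch: it creates a throwaway directory full of small files and times a long listing of it. The file count and paths are illustrative, not the originals from my report.

```shell
#!/bin/sh
# Create a throwaway directory holding many small empty files.
# (The count here is illustrative; my real directory held 8,283 files.)
dir=$(mktemp -d)
i=0
while [ "$i" -lt 2000 ]; do
    : > "$dir/file$i.txt"
    i=$((i + 1))
done

# Time the long listing, redirected to a file as in my original test.
time ls -l "$dir" > /tmp/zz

# Clean up.
rm -rf "$dir"
```

On the remote host the plain `ls -l` finishes quickly even at this scale, which is what makes the GUI's ~125-second display time stand out.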
None of these problems is earth-shattering, but it would be great if they were resolved (optimised) in the future.
In the meantime, I'm forced to implement a sensible archiving plan to ensure I don't have so many files in any one directory.
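For what it's worth, one way to implement such an archiving plan is to shuffle files into subdirectories by modification month. This is just a sketch of my own approach, assuming GNU `date` (whose `-r` option reads a file's modification time); the directory naming scheme is my own choice, not anything WinSCP requires.

```shell
#!/bin/sh
# Move each regular file in the current directory into a subdirectory
# named after its modification month, e.g. 2004-07/.
for f in *; do
    [ -f "$f" ] || continue
    # GNU date: -r FILE prints FILE's modification time.
    month=$(date -r "$f" +%Y-%m)
    mkdir -p "$month"
    mv "$f" "$month/"
done
```

Run from inside the overfull directory, this caps each subdirectory at one month's worth of files, which keeps listings fast on both ends.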
Thanks for a great product!
I also have the same problem (ten years later). For directories with many files (hundreds to thousands of text files varying in size from 4KB to several MBs each), it takes a very long time before I can even see the directory's contents, while with PuTTY, a text-based SSH utility, I can see the directory's contents almost instantaneously. I've always had this problem with the Windows-installed version of WinSCP, but it has recently become debilitating because I need to browse the contents of these directories frequently. I'm not sure what more information I could provide, but I hope you can replicate and fix the problem.
Otherwise, I've loved WinSCP for years! I hope I don't need to use another program.