-
Heyitsme
Guest
Multipart downloads
I would like to see multipart downloads added to the program. And a simpler interface would be nice. It's kind of convoluted figuring out what the options do. I found something that seemed like multipart downloads from its description, but it turned out to be something completely different.
-
martin◆
Site Admin
Re: Multipart downloads
> I would like to see multipart downloads added to the program.
What's that?

> And a simpler interface would be nice. It's kind of convoluted figuring out what the options do.
Any suggestions? :-)

> I found something that seemed like multipart downloads from its description, but it turned out to be something completely different.
What was that? :-)
-
HeyItsMe
Guest
Multipart downloads use multiple connections to the server to download the same file in several segments simultaneously, allowing you to maximize your bandwidth throughput.
As for the interface: more clearly define the options and functions.
-
martin◆
Site Admin
> More clearly define the options and functions.
I've got that. But obviously it is hard for me to tell which options and functions are not clearly defined. Can you tell me?
-
HeyItsMe
Guest
@martin: To be honest, all of them. You present so many options in each of the options windows that it's confusing. It's good to have so many features, but they need to be clearly and easily understandable. One example, though certainly not the only one, is the download file menu: when you copy or move a file, there is just an insane number of options, none clearly explained. Sure, you have the "More >>" and "<< Less" buttons, but it gets to be a real challenge. I'd also like multi-part downloads added there, with an option for how many simultaneous transfer threads are allowed to a single file.
So if I were to download backup-%TIMESTAMP%.tar.gz (with %TIMESTAMP% replaced by a real date/time at generation), it would have 3 or 4 threads each downloading a different segment of that file; when they are all done, the segments get recombined on my end and the transfer finishes up to 4 times quicker. Basically, what a download manager like DownThemAll for Firefox or FlashGet does. SmartFTP also has it, but I'm utterly sick of the nag dialogue, so I have stopped using it. Every day I have to download 2 gigabytes of website backups in .tar.gz files, and it would go much faster with the multi-part downloads that WinSCP doesn't support. I'm torn, because I *LOVE* using my SSH keyring in WinSCP to auto-connect and log in.
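Roughly, this kind of Python sketch is what I'm picturing; it assumes the third-party paramiko library for the SFTP side and key-based auth via the agent, and the host, user, and file names are only placeholders, not anything WinSCP actually does:

```python
# Hypothetical multi-part SFTP download: N connections, each fetching one
# byte range of the same remote file into a pre-allocated local file.
import threading
import paramiko

HOST, USER = "example.com", "user"            # placeholders
REMOTE, LOCAL, PARTS = "/backups/backup.tar.gz", "backup.tar.gz", 4

def connect():
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(HOST, username=USER)       # key-based auth via the agent
    return client

def fetch_segment(start, end):
    # Each segment gets its own SSH session, i.e. a separate connection.
    client = connect()
    sftp = client.open_sftp()
    with sftp.open(REMOTE, "rb") as rf, open(LOCAL, "r+b") as lf:
        rf.seek(start)
        lf.seek(start)
        remaining = end - start
        while remaining > 0:
            chunk = rf.read(min(32768, remaining))
            if not chunk:
                break
            lf.write(chunk)
            remaining -= len(chunk)
    client.close()

client = connect()
size = client.open_sftp().stat(REMOTE).st_size
client.close()
with open(LOCAL, "wb") as f:                  # pre-allocate the local file
    f.truncate(size)

bounds = [(i * size // PARTS, (i + 1) * size // PARTS) for i in range(PARTS)]
threads = [threading.Thread(target=fetch_segment, args=b) for b in bounds]
for t in threads:
    t.start()
for t in threads:
    t.join()                                  # all segments done: file complete
```

Because each thread writes straight into its own region of the pre-allocated file, the "recombining" step comes for free.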
-
HeyItsMe
Guest
Did the idea just fall through? :cry:
-
martin◆
Site Admin
@HeyItsMe: No, it did not. I just forgot to answer. It's on the TODO list, but it does not have a high priority as of now.
- Guest
I'd like to see this feature too...
-
linuxamp
Guest
+1
I'd also like to see multipart transfers. Additionally, I believe that if you're going to do multipart, it also helps if the app is multithreaded.
Cheers
- Guest
Re: Multipart downloads
I'd like to place my vote for this feature as well. My current Internet line allows a max of 250 KiB/s for a single file, yet I can download 10 different files from the same server and each one gets 250 KiB/s.
If I could do multi-part downloading and fetch, say, bytes 0–1 MiB and 1–2 MiB in parallel, I could download twice as much in the same time.
I believe FileZilla does this for FTP downloads, but not SFTP downloads.
Also, I'm not sure whether the SFTP protocol supports it, but being able to do this for uploads would be very helpful too.
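To illustrate, a toy Python snippet (sizes made up) that carves a file into the fixed 1 MiB ranges I mean:

```python
# Split a file of a given size into 1 MiB byte ranges that separate
# connections could fetch in parallel.
MIB = 1024 * 1024

def ranges(size, chunk=MIB):
    """Yield (start, end) byte ranges, end exclusive."""
    for start in range(0, size, chunk):
        yield start, min(start + chunk, size)

print(list(ranges(3 * MIB + 512)))
# [(0, 1048576), (1048576, 2097152), (2097152, 3145728), (3145728, 3146240)]
```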
-
martin◆
Site Admin
Re: Multipart downloads
> I believe FileZilla does this for FTP downloads, but not SFTP downloads.
I do not think so.

> Also, I'm not sure whether the SFTP protocol supports it, but being able to do this for uploads would be very helpful too.
Yes, the SFTP protocol supports that.
-
Prometheus00
Guest
Add me to the list of people who want this!
Pipelined transfers are a related concept that is also very valuable. I once had a program called LeechFTP that did this, and I fell in love.
Pipelined transfers are for moving many small files quickly. They also give the advantage of multi-part transfers, as long as you have several files to transfer, but they avoid the complexity of handling partial files.
In a pipelined transfer, you have only one single download queue, but you have several "transfer slots" (a configurable number of slots; in LeechFTP, the current number of slots could be altered during operation by +/- buttons).
A pipelined transfer works like this: an empty transfer slot takes the first file off the transfer queue. It connects and starts transferring. Then the next empty slot takes the next file from the queue and starts transferring, and so on until all transfer slots have an active file. As soon as a transfer slot finishes a file, it opens the next file from the queue and starts transferring that file.
The point of pipelined transfers is that for transfers of many small files, the time to set up and finish a transfer doesn't waste bandwidth. If you only have a single session, the bandwidth is idle during transfer start and finish. But with pipelined transfers, there is always one (or more) other file using the bandwidth while a transfer is being set up.
With pipelined transfers alone, you usually get the advantage of multi-part transfers, as long as you transfer more than a single large file at a time. But pipelined transfers are a bit easier to handle, since you are working with full files only, not file parts.
Pipelined transfers don't help at all if you are transferring only one single large file. For that you need full multi-part transfers. But the framework for pipelined transfers can help a lot with multi-part transfers: when the top file in the transfer queue is large, you simply split it into several partial transfers, and then use pipelined transfers on those parts as if they were different complete files, filling up the transfer slots as usual. You still have to handle the pre-allocation of the file, filling in the data at the right place for each part, and so on.
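A minimal Python sketch of that slot scheme, with transfer_file() standing in for whatever actually moves one file:

```python
# One shared queue of files, a fixed number of worker "slots"; each slot
# pulls the next file as soon as it finishes the previous one.
import queue
import threading

def transfer_file(name):
    print(f"transferring {name}")  # placeholder for the real per-file transfer

def slot_worker(q):
    while True:
        try:
            name = q.get_nowait()  # take the next file off the shared queue
        except queue.Empty:
            return                 # queue drained: this slot goes idle
        transfer_file(name)
        q.task_done()

SLOTS = 3
q = queue.Queue()
for name in ["a.txt", "b.txt", "c.txt", "d.txt", "e.txt"]:
    q.put(name)

workers = [threading.Thread(target=slot_worker, args=(q,)) for _ in range(SLOTS)]
for w in workers:
    w.start()
q.join()  # block until every enqueued file has been handled
```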
-
Prometheus00
Guest
I just wanted to add a quick bit to my above post:
I actually want pipelined transfers more than multi-part transfers. They partly fill each other's roles, but not fully.
Multi-part transfers do not help at all when transferring many small files (which I often do at work). Multi-part actually makes the problem worse if you split small files (which you shouldn't do). So for many small files, you really need pipelined transfers.
And for a single large file, pipelined transfers do not help, as I explained in my post.
Anyhow, please increase the priority on this; both multi-part and pipelined transfers make a huge difference in real throughput, and I am certain that many people would like it!
-
martin◆
Site Admin
> Anyhow, please increase the priority on this; both multi-part and pipelined transfers make a huge difference in real throughput, and I am certain that many people would like it!
I'm not against that. It is just a huge amount of work.
-
Prometheus00
Guest
I've been playing around with the download queues that already exist, and you can actually almost do pipelined transfers as it is! What you have to do is manually select each file for downloading separately. That creates as many separate queues as there are files to download. Since you can limit the maximum number of queues that can be active, you can use the queues as download slots for one file at a time by making sure each queue only contains a single file.
What I am trying to say is that the current queue-concept is almost the same as pipelined downloads. It's just phrased in the user interface in a way that doesn't make its usefulness clear.
To make this simple to use and understand, you just need to change one behaviour: When multiple files are added as a queue (either through multi-selection, or through downloading a directory with many files), every file is added as a separate queue, each file in a queue of its own.
Then there are some phrasings in the user interface to change. The concept of multiple queues is redundant: since each queue now always contains only a single file, you could simply say that there is only one queue. What you earlier called queues you would now call download slots, or something to that effect (calling something that always holds a single item a queue is confusing). And you could simplify the code by ripping out all the multi-file queue handling; every "queue" will now always be simply a single enqueued file.
I think the old queue concept is a leftover from when there was only a single queue. When that became multiple queues, it got confusing, since you then had queueing in two dimensions (the number of different queues, and the number of elements in each particular queue). Making this change reduces it back down to a single dimension again, though along the "number of different queues" dimension rather than the original "number of files in a queue" dimension.
Anyway, I imagine that it could be reasonably simple to implement (basically, you always add every file in a queue of its own). The rest is user-interface text clarification.
Now, if that gets done, there are a number of user-interface enhancements that could be done, mainly regarding queue maintenance.
It would be great if you could multi-select in the queue, and change the download ordering of multiple files at once.
A function that lets you "cut" some selected enqueued files and "paste" them back in elsewhere in the queue would also be really helpful, since moving a file up/down one position at a time is burdensome for large queues (100s or 1000s of small files).
A "Place first"/"Place at end" button-pair would also be really helpful, in particular coupled with multiple selection.
The suspend/resume queue functions could be simplified by making the suspend function simply swap the file in the selected download slot with the top file in the queue. The file can then be given a different priority by moving it manually to wherever in the queue you want it.
An increase/decrease number-of-download-slots button pair would easily let you experiment to find the perfect number of download slots for current conditions (particularly good if you move around with a laptop, but even the time of day changes conditions). When a download slot is removed, the file in that slot is pushed back to the top of the queue again (a rough sketch of this follows below).
Anyway, though the explanation is long and detailed, perhaps all that careful thinking-through makes the implementation quick and simple, yes? ;-)
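To make the +/- slot idea concrete, a rough Python sketch; SlotPool and transfer_file are made-up names, and a real client would also have to abort an in-flight transfer when its slot is removed, which this toy version skips:

```python
# One shared queue, a runtime-adjustable number of slots. Removing a slot
# pushes any file it has claimed (but not started) back to the top.
import collections
import threading
import time

def transfer_file(name):
    time.sleep(0.1)  # placeholder for the real per-file transfer
    print(f"done: {name}")

class SlotPool:
    def __init__(self, files):
        self.queue = collections.deque(files)  # the single download queue
        self.lock = threading.Lock()
        self.flags = []                        # one stop flag per live slot

    def add_slot(self):                        # the "+" button
        flag = threading.Event()
        self.flags.append(flag)
        t = threading.Thread(target=self._run, args=(flag,))
        t.start()
        return t

    def remove_slot(self):                     # the "-" button
        if self.flags:
            self.flags.pop().set()             # ask the newest slot to stop

    def _run(self, stop):
        while True:
            with self.lock:
                if stop.is_set() or not self.queue:
                    return
                name = self.queue.popleft()
            if stop.is_set():
                # Slot was removed after claiming a file: requeue it on top.
                with self.lock:
                    self.queue.appendleft(name)
                return
            transfer_file(name)

pool = SlotPool([f"file{i}.txt" for i in range(10)])
threads = [pool.add_slot() for _ in range(3)]
pool.remove_slot()  # drop back to two slots; a claimed file is requeued
for t in threads:
    t.join()
```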
-
martin◆
Site Admin
What you ask for is of course much simpler to implement than multipart downloads. I'm aware of that.
It is tracked here:
Issue 97 – Enqueue each file of batch transfer individually
- Guest
@martin: Great! Thank you very much! I see that others have been asking about this too, perhaps a priority raise? :P
Anyway, I am impressed with your very organised and professional handling of suggestions from your fan-base. And obviously, thank you very much for a great product in general.
-
martin◆
Site Admin
> Great! Thank you very much! I see that others have been asking about this too, perhaps a priority raise? :P
OK :-)
-
riffy
Guest
Any news on this? I'd KILL for this! I have to use CuteFTP when I have a lot of files. If this had multi-part threading, it would be the most perfect thing in the world!
-
another user
Guest
Still waiting for multi-part
Is it there yet?
-
martin◆
Site Admin
Re: Still waiting for multi-part