I really wish people would have all the facts before making categorical statements.
So do I.
We have 1MB/s of bandwidth available on our server. That allows 100 simultaneous video downloads at 100kb/s, which under any normal circumstance is more than enough.
That is not even close to being true. I'll remind you that I have several years of practical experience designing and implementing protocols, servers, and clients to perform high-volume data transfers, including servers that could efficiently handle large numbers of clients (in the thousands).
IP protocols have an inherently high overhead, mostly in the form of packet headers, but in other forms as well. HTTP, not being designed for large data transfers, is among the most inefficient protocols used for downloading files. FTP is better, but still not good.
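To put rough numbers on the header cost (these are the standard sizes with no options, not measurements from your server):

    # Per-packet overhead for a full-size TCP segment over Ethernet.
    # Header sizes are the common defaults (no IP/TCP options, no VLAN tag);
    # the 8-byte preamble and 12-byte inter-frame gap shave off a bit more.
    ETHERNET_OVERHEAD = 14 + 4   # header plus frame check sequence
    IP_HEADER = 20               # IPv4, no options
    TCP_HEADER = 20              # no options
    MTU = 1500                   # typical Ethernet MTU

    payload = MTU - IP_HEADER - TCP_HEADER   # 1460 bytes of actual data
    on_wire = MTU + ETHERNET_OVERHEAD        # 1518 bytes transmitted
    print(f"payload fraction: {payload / on_wire:.1%}")  # about 96%

And that is the best case, with every packet full; HTTP's own request and response headers, ACK traffic, and small packets push the real figure lower still.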
On top of that, IP networks carry a large overhead of their own, particularly when run over Ethernet (which is likely the connection between your server and the gateway to your service provider). Ethernet is based on collision detection and recovery: a machine listens for an idle wire before transmitting, but two machines that both see it idle can still start blasting their packets out at nearly the same time. When that happens, the traffic from both machines is corrupted, the collision is detected, and each machine retransmits after a random timeout. Using switches in place of hubs helps reduce these collisions, but does not prevent them.
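For reference, that random timeout follows a binary exponential backoff: each successive collision doubles the window a machine may wait. A small sketch (the slot time is the classic 10Mb/s Ethernet value, purely illustrative):

    import random

    SLOT_TIME_US = 51.2  # slot time for 10Mb/s Ethernet, in microseconds

    def backoff_delay(collisions: int) -> float:
        """After the nth collision, wait a random number of slot times
        drawn from [0, 2**min(n, 10) - 1]."""
        slots = random.randint(0, 2 ** min(collisions, 10) - 1)
        return slots * SLOT_TIME_US

    # Each successive collision widens the window, so a congested segment
    # spends more and more of its time waiting instead of transmitting.
    for n in range(1, 6):
        print(f"collision {n}: waiting {backoff_delay(n):.1f} us")

The busier the segment, the more collisions, and the more time every machine spends backing off rather than moving data.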
On a moderate-sized network, Ethernet reaches about 40% efficiency (meaning a 1MB/s connection becomes 400KB/s). When you then remove the typical 20% overhead of the IP stack, your 400KB/s becomes 320KB/s. From that, you can deduct the overhead of the HTTP protocol (which I unfortunately don't remember off the top of my head, but it is fairly high). By now you should be able to see that your 1MB/s is nowhere near what you actually deliver to your customers.
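Chain those deductions together with your own numbers (the HTTP figure below is a stand-in, since I don't have the real number in front of me):

    link = 1000                 # KB/s -- your advertised 1MB/s
    ethernet_efficiency = 0.40  # moderate-sized shared segment, per above
    ip_overhead = 0.20          # typical IP overhead
    http_overhead = 0.10        # stand-in; the real figure is fairly high

    effective = link * ethernet_efficiency * (1 - ip_overhead) * (1 - http_overhead)
    print(f"{effective:.0f} KB/s usable")            # ~288 KB/s for actual file data
    print(f"{effective / 100:.1f} KB/s per client")  # under 3 KB/s at 100 downloads

At 100 simultaneous downloads, that leaves each customer a tiny fraction of what you quoted.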
This is a worst-case scenario (well, not really, but probably worse than your situation). Your server is probably (hopefully) on a relatively small network, so the network efficiency will be more in the 60-70% range. But even if you were to connect two machines to a private network by themselves and transfer a large file between them, you would still not get near the full bandwidth of your network connection; more likely 80-85% of it.
The first two days the print tutorial was available saw as many as 200-250 simultaneous download requests. This was caused in large part by people trying to download multiple files at the same time, some as many as 10.
To prevent the server from crashing, we throttled each download channel to 50kb/s, and sometimes even lower when the traffic really got congested. This was unsatisfactory for everyone, but necessary to keep the server up. Unfortunately, it meant that some downloads timed out, or the zips got corrupted.
This would be largely ineffective for at least two reasons. First, all the timeouts are going to result in your customers retrying, creating even more traffic on your server. Second, bandwidth is only part of the problem.
Virtually all HTTP and FTP servers spawn off child processes to handle their clients, resulting in a very large number of processes running on your server. This creates CPU thrashing, where the processor spends more time switching between all these processes than it does actually executing them. It also causes process starvation, where the time between a process being kicked off the CPU and getting to run again grows too long. Starvation leads to buffer overruns and other such nasty problems, since buffers are not being serviced by the owning process.
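To illustrate the design I'm describing, here is roughly what the classic process-per-client server looks like (a minimal sketch in Python; the file name and port are made up, and this is obviously not your server's actual code):

    import socketserver

    class DownloadHandler(socketserver.BaseRequestHandler):
        def handle(self):
            # Each connected client is serviced by its own child process.
            with open("tutorial.zip", "rb") as f:  # hypothetical file name
                while chunk := f.read(64 * 1024):
                    self.request.sendall(chunk)

    class DownloadServer(socketserver.ForkingTCPServer):
        allow_reuse_address = True

    # With N simultaneous clients you get N child processes, all competing
    # for the CPU and all issuing reads against different parts of the disk.
    if __name__ == "__main__":
        DownloadServer(("0.0.0.0", 8080), DownloadHandler).serve_forever()

With 200-250 clients, that is 200-250 processes fighting over one CPU and one disk.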
Large numbers of running processes also thrash your I/O resources (primarily, your hard drive). All of these processes are trying to access different areas of the disk, sending the disk heads thrashing all over the place. If you look at your server state, you'll likely see a very large amount of I/O gridlock as the drive attempts to service all the read requests coming from all these different processes. (This will also lead to premature failure of your hard drive, BTW.)
When you first announced the move to downloads instead of DVD, I remember asking if your server could handle it. You assured me it had been upgraded, and would not be a problem. I thought you were mistaken then; I know you were mistaken now.
I know you're wishing you had heard the last from me, and with this, you have. It's obvious now that your infrastructure is not capable of handling the demand for video downloads, as I suspected all along. You are pushing the problems and frustrations of delivering your product onto your customers, and IMO, that is not good business.
...Mike