Do we need to fix every problem, every time? My answer is no. I also believe in a good solution today over an ideal solution tomorrow. Let me show this on a real case with one of my clients. The client has Checkpoint, lots of Checkpoint, just heaps of it. All of their work relies on site-to-site VPN communication between a myriad of remote branches and the central office. Everything is VPNed. One of the services running inside those endless tunnels is plain old FTP, or to be more precise, scripted, scheduled transfer of files.
It had been working like that for years until it started causing trouble. Not all sites at once, just one site a day or a week, but multiplied by the sheer number of branches it became an avalanche. Following the usual path, I tried to fix things by myself, and it worked at the beginning. But then the number of troublesome sites grew, and at some point I brought Checkpoint into the process. They didn't see any major problem, just many seemingly unrelated local ones.
The FTP problems also differed from site to site:
- downloads of small files went fine, but transfers of files larger than 1 MB got stuck;
- downloading any single file was fine, but downloading multiple files got stuck;
- files got transferred, but with a size of 0.
And none of this had an obvious cause: just FTP drops here and there. Little by little I found myself fighting windmills. Could it be solved? I guess so. How much time would it take? Months.
Then I solved the problem quite simply: the client didn't care a bit which file transfer protocol was used, as long as it was scriptable and Windows-friendly. So I ran a test and offered to use SSH/SCP inside the VPN tunnels instead of FTP.
The results of the tests were funny: from the same remote server, with everything else unchanged, moving files with SCP (pscp.exe) eliminated all the problems seen with FTP. That is it.
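For readers who want to script the same switch, here is a minimal sketch of how a scheduled pscp.exe pull could be wrapped for unattended runs. All paths, hostnames, and account names below are hypothetical placeholders, not the client's real setup; only the pscp flags (`-batch` to suppress interactive prompts, `-i` to select a private key) are standard PuTTY options.

```python
import subprocess
from typing import List

def build_pscp_command(pscp_path: str, key_file: str, user: str,
                       host: str, remote_path: str, local_dir: str) -> List[str]:
    """Build a non-interactive pscp.exe pull command.

    -batch aborts instead of prompting (essential for scheduled tasks),
    -i points at a PuTTY private key so no password is needed.
    """
    return [
        pscp_path,
        "-batch",
        "-i", key_file,
        f"{user}@{host}:{remote_path}",
        local_dir,
    ]

def run_transfer(cmd: List[str]) -> bool:
    """Run the transfer; True means pscp exited with code 0."""
    return subprocess.run(cmd).returncode == 0

# Hypothetical branch settings; in practice these would come from a config.
cmd = build_pscp_command(
    r"C:\tools\pscp.exe", r"C:\keys\branch.ppk",
    "xfer", "10.0.0.5", "/outgoing/*.dat", r"C:\inbox",
)
```

Scheduling `run_transfer(cmd)` from Windows Task Scheduler reproduces the old FTP workflow, just over SSH.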