Linked by Thom Holwerda on Thu 12th Apr 2012 08:59 UTC
I would honestly serve at the altar of the person that did this. Keep the debugging information, but for the love of god, make your email client do something pretty and useful with it.
Thread beginning with comment 513856
Laurence Member since:
2007-03-26

"The thing about FTP, though, is the standard is so simple, and for the vast majority of servers it Just Works. I don't see what needs to change, it's dead simple."

It doesn't, though - there's a whole series of hacks involved, from your router (e.g. FTP doesn't natively work behind NAT or firewalls without an FTP-aware helper rewriting the traffic) through to the client itself. (Sorry about the rant I'm about to launch into - it's nothing personal ;) )

Every FTP server (read: OS, not daemon) returns directory listings in its own format - the output of LIST is basically whatever the local dir/ls command produces - so FTP clients have to be programmed to recognise every server's flavour (an utterly brain-dead part of the standard!)
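
To make that concrete, here's a rough Python sketch of the per-format guesswork every FTP client ends up carrying around just to parse LIST output. The listing strings and filenames below are made up for illustration, not captured from any real server:

    import re

    # Two common flavours of LIST output for the same file -- the strings
    # below are illustrative, not taken from any particular server.
    unix_line = "-rw-r--r--   1 ftp      ftp        104857 Apr 12  2012 report.pdf"
    msdos_line = "04-12-12  02:01PM               104857 report.pdf"

    def parse_list_line(line):
        """Guess the listing format and pull out (name, size) -- the kind of
        heuristic every FTP client has to ship."""
        # Unix ls-style: permissions, links, owner, group, size, date, name
        m = re.match(r"^[\-dlrwxsStT]{10}\s+\d+\s+\S+\s+\S+\s+(\d+)\s+\w+\s+\d+\s+[\d:]+\s+(.+)$", line)
        if m:
            return m.group(2), int(m.group(1))
        # DOS/IIS-style: date, time, size (or <DIR>), name
        m = re.match(r"^\d{2}-\d{2}-\d{2}\s+\d{2}:\d{2}[AP]M\s+(\d+|<DIR>)\s+(.+)$", line)
        if m:
            size = 0 if m.group(1) == "<DIR>" else int(m.group(1))
            return m.group(2), size
        raise ValueError("Unrecognised LIST format: " + line)

    for line in (unix_line, msdos_line):
        print(parse_list_line(line))

And that only covers two formats; VMS, mainframe and embedded servers each have their own variants.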

What's even worse is that FTP doesn't have a true client/server relationship. The client connects to the server and then tells the server which port to connect back to the client on (the PORT command). This means that firewalls have to inspect the packets on every outgoing port 21 connection just to work out which incoming connection requests to forward. It's completely mental! It also means that the moment you add any kind of SSL encryption (which itself isn't fully standardised, and data-channel encryption isn't always enabled even when the authentication channel is) the firewall can no longer see those PORT commands, so you can completely break FTP.
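
As a rough illustration of what that looks like from the client side (host and credentials below are placeholders), Python's standard ftplib lets you toggle between active mode, where the client sends PORT and the server connects back in, and passive mode, where every connection is outbound from the client:

    from ftplib import FTP

    # Placeholder host/credentials -- purely illustrative.
    ftp = FTP("ftp.example.com")
    ftp.login("anonymous", "guest@example.com")

    # Active mode: the client advertises an address with PORT and the *server*
    # connects back to us for every data transfer (listing, download, ...).
    # Behind NAT this inbound connection never arrives unless the firewall
    # rewrites the PORT command or forwards the advertised port.
    ftp.set_pasv(False)
    try:
        listing = ftp.nlst()
    except Exception as exc:
        print("Active-mode data connection failed (typical behind NAT):", exc)

    # Passive mode: the server opens a listening port and reports it in the
    # PASV reply, so all connections are outbound from the client.
    ftp.set_pasv(True)
    print(ftp.nlst())

    ftp.quit()

Passive mode only moves the problem, though: now it's the server's firewall that has to cope with a range of arbitrary high ports.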

Add to that the lack of compression (no compression support in a protocol actually named "file transfer protocol" - I mean, seriously) and the clumsy handling of binary vs ASCII transfer modes, and you're left with an utterly broken specification.
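
The ASCII/binary split leaks all the way up into client code, too. A minimal ftplib sketch (placeholder host and filenames) - pick the wrong transfer type and a binary file gets its bytes rewritten in transit:

    from ftplib import FTP

    ftp = FTP("ftp.example.com")          # placeholder host
    ftp.login("anonymous", "guest@example.com")

    # Binary (TYPE I): bytes go over the wire untouched -- what you want for
    # archives, images, executables.
    with open("backup.tar.gz", "wb") as f:
        ftp.retrbinary("RETR backup.tar.gz", f.write)

    # ASCII (TYPE A): the server is allowed to rewrite line endings in transit.
    # Fine for plain text; applied to a binary file it silently corrupts it.
    with open("README.txt", "w") as f:
        ftp.retrlines("RETR README.txt", lambda line: f.write(line + "\n"))

    ftp.quit()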

I will grant you that FTP is older than the WWW. FTP harks back to the ARPANET days, and its design actually made some sense back then (all clients were also servers, all machines were known and trusted, so servers could happily sit in the DMZ and your incoming connections already knew what OS they were connecting to, etc.)

However, these days FTP is completely inappropriate. SFTP at least fixes some of these things by tunnelling over SSH (compression, guaranteed encryption of both the data and authentication channels, no NAT woes, etc.), but that in itself can cause other issues (many public/work networks firewall port 22, SFTP servers can be a pain to chroot if you're not an experienced sysadmin, etc.).
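
For comparison, a minimal SFTP sketch using the third-party paramiko library (host, credentials and paths below are placeholders): a single outbound connection to port 22 carries authentication, data and optional compression, with nothing extra for a firewall to track:

    import paramiko

    # Placeholder host/credentials/paths -- purely illustrative.
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

    # Everything -- authentication, data, even the optional compression --
    # rides over the one SSH connection on port 22.
    client.connect("sftp.example.com", username="alice",
                   password="secret", compress=True)

    sftp = client.open_sftp()
    sftp.get("/remote/backup.tar.gz", "backup.tar.gz")   # download
    sftp.put("report.pdf", "/remote/report.pdf")         # upload
    sftp.close()
    client.close()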

It just seems silly that FTP has never undergone a formal rewrite, particularly when HTTP has seen massive upgrades over the years and there's been a whole plethora of significantly more advanced data-transfer protocols - from P2P to syncing tools (even rsync is more sophisticated than FTP), from cloud storage through to network file systems. FTP really is the bastard child that should have been aborted 10 years ago (sorry for the crude analogy, but I can't believe people still advocate such an antiquated specification).

</rant>

Edited 2012-04-12 14:01 UTC

Reply Parent Score: 6

bitwelder Member since:
2010-04-27

"What's even worse is that FTP doesn't have a true client/server relationship. The client connects to the server and then tells the server which port to connect back to the client on (the PORT command). This means that firewalls have to inspect the packets on every outgoing port 21 connection just to work out which incoming connection requests to forward."

What today is a headache for network security admins was a useful feature back when FTP was created: a user who had only a low-speed connection to two hosts could drive a data transfer directly between them over their high-speed link.
This feature is sometimes called FXP, but it's really just part of the FTP protocol itself (RFC 959).
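
Roughly, the FXP dance looks like this when driven through Python's ftplib with raw FTP commands (hosts, credentials and the filename are placeholders, IPv4 is assumed, and many modern servers refuse third-party PORT addresses for exactly the security reasons above):

    from ftplib import FTP

    # Placeholder hosts/credentials -- purely illustrative.
    src = FTP("ftp-source.example.com")
    src.login("alice", "secret")
    dst = FTP("ftp-dest.example.com")
    dst.login("bob", "secret")

    # Ask the source server to open a passive data port...
    host, port = src.makepasv()

    # ...and tell the destination server to connect out to that address.
    # PORT wants the address as six comma-separated byte values.
    hbytes = host.split(".")
    pbytes = [str(port // 256), str(port % 256)]
    dst.sendcmd("PORT " + ",".join(hbytes + pbytes))

    # Kick off both halves of the transfer; the file flows server-to-server
    # and never touches the client's slow link.
    src.voidcmd("TYPE I")
    dst.voidcmd("TYPE I")
    dst.sendcmd("STOR bigfile.iso")   # destination waits for incoming data
    src.sendcmd("RETR bigfile.iso")   # source pushes the file to it

    # Wait for both servers to report completion (226).
    src.voidresp()
    dst.voidresp()

    src.quit()
    dst.quit()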

Reply Parent Score: 3