In reply to Josef "Jeff" Sipek's reply to my post from a couple of years ago, entitled SMTP -- time to chuck it.
This is a (long overdue) reply to Ilya's post: SMTP -- Time to chuck it.
[...]
There are two apparent problems at the root of the SMTP protocol which allow for easy manipulation: lack of authentication and sender validation, and lack of user interaction. It would not be difficult to design a more flexible protocol which would let us keep the functionality we are familiar with while addressing some, if not all, of the problems within SMTP.
To allow for greater flexibility in the protocol, it would first be broken from a server-server model into a client-server model.
This is the first point I 100% disagree with...
That is, traditionally when one would send mail, it would be sent to a local SMTP server which would then relay the message on to the next server until the email reached its destination. This approach allowed for email caching and delayed send: when a receiving mail server was off-line for hours or even days on end, messages could still trickle through, as the sending server would periodically try to resend them. Today's mail servers have very high uptimes and many are redundant, so caching email for delayed delivery is not very important.
"Delayed delivery is not very important"?! What? What happened to the whole "better late than never" idiom?
It is not just about uptime of the server. There are other variables one must consider when thinking about the whole system of delivering email. Here's a short list; I'm sure I'm forgetting something:
- server uptime
- server reliability
- network connection (all the routers between the server and the "source") uptime
- network connection reliability
It does little to no good if the network connection is flaky. Ilya is arguing that that's rarely the case, and while I must agree that it isn't as bad as it used to be back in the '80s, I also know from experience that networks are very fragile and it doesn't take much to break them.
A couple of times over the past few years, I noticed that my ISP's routing tables got screwed up. Within two hours of such a screwup, things returned to normal, but that's 2 hours of "downtime."
Another instance of a network going haywire: one day, at Stony Brook University, the internet connection stopped working. Apparently, a compromised machine on the university campus caused a campus edge device to become overwhelmed. This eventually led to a complete failure of the device. It took almost a day until the compromised machine was disconnected, the failed device reset, and the backlog of all the traffic on both sides of the router settled down.
Failures happen. Network failures happen frequently. More frequently than I would like them to, more frequently than the network admins would like them to. Failures happen near the user and far away from the user. One can hope that dynamic routing tables keep the internet as a whole functioning, but even those can fail. Want an example? Sure. Not that long ago, the well-known video repository YouTube disappeared off the face of the Earth...well, to some degree. As this RIPE NCC RIS case study shows, on February 24, 2008, Pakistan Telecom decided to announce BGP routes for YouTube's IP range. The result was that if you tried to access any of YouTube's servers on the 208.65.152.0/22 subnet, your packets were directed to Pakistan. That was the case for about an hour and twenty minutes. Then YouTube started announcing more granular subnets, diverting some of the traffic back to itself. Eleven minutes later, YouTube announced even more granular subnets, diverting the large bulk of the traffic back to itself. A few dozen minutes later, PCCW Global (Pakistan Telecom's provider, responsible for forwarding the "offending" BGP announcements to the rest of the world) stopped forwarding the incorrect routing information.
So, networks are fragile, which is why having an email transfer protocol that allows for retransmission is a good idea.
No touch! I have not conducted extensive surveys of mail server configurations, but from personal experience most mail servers give up on sending email a lot sooner than recommended: RFC 2821 calls for a give-up period of 4-5 days. This is a reflection of the times; email is expected to deliver messages almost instantaneously (just ask Ted Stevens!).
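To pick one concrete example: Postfix's give-up timers both default to five days, roughly in line with the RFC, but nothing stops an admin from turning them way down. The four-hour values below are purely illustrative, not a claim about any particular site:

    # /etc/postfix/main.cf -- both parameters default to 5d, in line with the
    # RFC; the 4h values are made up to show how easily a site can give up
    # much sooner.
    maximal_queue_lifetime = 4h
    bounce_queue_lifetime = 4h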
As you are well aware, I am not suggesting that networks are anywhere near perfect; it just does not matter. If you send a message and it does not get delivered immediately, your mail client would be able to tell you so. This allows you to react: had the message been urgent, you can use other forms of communication to try to get it through (the phone, say). The client can also queue the message (assuming no CAPTCHA system, more on that later) and try to deliver it later. Granted, machines which run clients have significantly shorter uptimes than servers, but is it really that big of a deal, especially now that servers give up on delivery just a few hours after the first attempt?
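A minimal sketch of the kind of client-side queue I have in mind; the retry schedule and the deliver/notify callbacks are made up for illustration, not part of any existing protocol:

    import time

    # Illustrative only: a real client would persist this queue to disk and
    # retry in the background instead of sleeping in the foreground.
    RETRY_SCHEDULE = [60, 5 * 60, 30 * 60, 2 * 3600, 8 * 3600]  # seconds

    def send_with_retries(message, deliver, notify_user):
        """deliver() raises ConnectionError on failure; notify_user() keeps the
        sender informed so they can fall back to the phone if it is urgent."""
        for delay in [0] + RETRY_SCHEDULE:
            time.sleep(delay)
            try:
                deliver(message)
                notify_user("message delivered")
                return True
            except ConnectionError as err:
                notify_user("delivery failed (%s); will retry" % err)
        notify_user("giving up; try another channel if this is urgent")
        return False

The important part is not the schedule but the fact that the sender is told right away when delivery fails, instead of finding out days later from a bounce.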
I, for one, am looking forward to the day when I no longer have to ask my potential recipient whether or not they have received my message.
Instead, having direct communication between the sender-client and the receiver-server has many advantages: it opens up the possibility for CAPTCHA systems, makes the send portion of the protocol easier to upgrade, and allows for new functionality in the protocol.
Wow. So much to disagree with!
- CAPTCHA doesn't work
- What about mailing lists? How does the mailing list server answer the CAPTCHAs?
- How does eliminating server-to-server communication make the protocol easier to upgrade?
- New functionality is a nice thing in theory, but what do you want from your mail transfer protocol? I, personally, want it to transfer my email between where I send it from and where it is supposed to be delivered to.
- If anything eliminating the server-to-server communication would cause the MUAs to be "in charge" of the protocols. This means that at first there would be many competing protocols, until one takes over - not necessarily the better one (Betamax vs. VHS comes to mind).
- What happens in the case of overzealous firewall admins? What if I really want to send email to bob@example.com, but the firewall (for whatever reason) is blocking all traffic to example.com?
- Touché! I have to admit CAPTCHAs are a bit ridiculous in this application.
- See above
- By creating more work for admins. It allows users to complain more directly to the admins that the new protocol feature does not work. Yes, I know admins want less work and fewer complaining users, but there are benefits. It really comes down to the fact that with more interactivity it is easier to react to changes; servers do not have brains, but the people behind the clients do.
- Hopefully that will still happen.
- Well, the worse protocol is already winning: SMTP. dMTP (dot Mail Transfer Protocol) is so much better, even if it is quite vague. MUAs will not be in charge; if they do not play ball, then mail will not be delivered.
- Now you are just getting ahead of yourself. Stop making up problems. The solution to overzealous admins is their removal. [...]
And so this brings us to the next point, authentication: how do you know that the email actually did originate from the sender? This is one of the largest problems with SMTP, as it is so easy to fake one's outgoing email address. The whitelist has to rely on a verifiable and consistent flag in the email. A sample implementation of such a control could work similarly to the current hack to the email system, SPF, in which a special entry is made in the domain's DNS record saying where mail may originate from. While this approach is quite effective in a server-server architecture, it would not work in a client-server architecture. Part of the protocol could require the sending client to send a cryptographic hash of the email to its own receiving mail server, so that the receiving party's mail server could verify the authenticity of the source of the email. In essence this creates a 3-way handshake between the sender's client, the sender's (receiving) mail server, and the receiver's mail server.
I tend to stay away from making custom authentication protocols.
In this scheme, what guarantees that the client and his "home server" aren't both trying to convince the receiving server that the email is really from whom they say it is? In Kerberos, you have a key for each system and a password for each user. The Kerberos server knows it all, and this central authority is why things work. With SSL certificates, you rely on the strength of the crypto used, as well as blind faith in the certificate authority.
They might; the point is not so much to authenticate the user as to link him to a server. If the server he is linked to is dirty, well, you can blacklist it. Much of the spam today is sent from botnets; in this scheme all the individual botnet senders would have to link themselves to a server. Obviously, a clever spammer would run a server on each of the zombie machines to authenticate for itself. The catch is that he would have to ensure that the firewalls/NATs are open and that there is a (sub)domain pointing back at the server. This is all costly for the spammer, and for the good guys it will be easy to trace down the dirty domains.
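To make the shape of the scheme concrete, here is a rough sketch. The HTTPS endpoint, URL layout, and function names are entirely made up (a real protocol would need its own wire format), and finding the home server would be done through a DNS record on the sender's domain, much like SPF does today:

    import hashlib
    import urllib.error
    import urllib.request

    def message_digest(raw_message: bytes) -> str:
        return hashlib.sha256(raw_message).hexdigest()

    # Sender's client: after composing a message, register its digest with
    # your own home (receiving) mail server.
    def register_with_home_server(home_server: str, raw_message: bytes) -> None:
        digest = message_digest(raw_message)
        urllib.request.urlopen("https://%s/outgoing/%s" % (home_server, digest),
                               data=b"")

    # Receiver's server: find the claimed sender's home server through a DNS
    # record on the sender's domain (not shown) and ask whether it has seen
    # this digest.  A server that vouches for spam gets blacklisted wholesale.
    def verify_with_home_server(home_server: str, raw_message: bytes) -> bool:
        digest = message_digest(raw_message)
        try:
            url = "https://%s/outgoing/%s" % (home_server, digest)
            with urllib.request.urlopen(url) as response:
                return response.status == 200
        except urllib.error.URLError:
            return False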
At first it might seem that this process uses up more bandwidth and increases the delay of sending mail, but one has to remember that in the usual configuration of sending email, using IMAP or POP for mail storage, one undergoes a similar process,
Umm...while possible, I believe that the very large majority of email is sent via SMTP (and I'm not even counting all the spam).
Yellow card! I addressed that issue in my original posting, just 2 sentences below this one. Excessive lobotomy is not appreciated.
first the email is sent for storage (over IMAP or POP) to the sender's mail server, and then it is sent over SMTP to the sender's outgoing mail server for redirection to the receiver's mail server. It is even feasible to implement hooks in the IMAP and POP stacks to talk to the mail-sending daemon directly, eliminating an additional socket connection by the client.
Why would you want to stick with IMAP and POP? They do share certain ideas with SMTP.
Red card! I said nothing about sticking to IMAP/POP. The point is that the system can be streamlined somewhat.
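For reference, the two separate connections a typical client makes today look roughly like this (the hostnames, addresses, and credentials are placeholders). The copy-to-Sent step and the submission step usually already talk to the same machine, which is exactly what could be folded together:

    import imaplib
    import smtplib
    import time
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "alice@example.org"   # placeholder addresses and credentials
    msg["To"] = "bob@example.com"
    msg["Subject"] = "hello"
    msg.set_content("testing")

    # Connection 1: submit the message over SMTP.
    with smtplib.SMTP("mail.example.org", 587) as smtp:
        smtp.starttls()
        smtp.login("alice", "secret")
        smtp.send_message(msg)

    # Connection 2: store a copy in the Sent folder over IMAP -- a second
    # socket to what is usually the very same machine.
    with imaplib.IMAP4_SSL("mail.example.org") as imap:
        imap.login("alice", "secret")
        imap.append("Sent", None, imaplib.Time2Internaldate(time.time()),
                    msg.as_bytes())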
For legitimate mass mail this process would not encumber the sending procedure, as in this case the sending server would be located on the same machine as the sender's receiving mail server (which would store the hash for authentication), and the two could even be streamlined into one monolithic process.
Not necessarily. There are entire businesses that specialize in mailing list maintenance. You pay them, and they give you an account with software that maintains your mailing list. Actually, it's amusing how similar it is to what spammers do. The major difference is that in the legitimate case, the customer supplies their own list of email addresses to mail. Anyway, my point is, in these cases (and they are more common than you think) the sending host is a different computer than the one the "from" domain's MX record points to.
I do not think that increasing the burden on mass mailers, even good ones, is such a bad thing.
[...]
I really can't help but read that as "If we use this magical protocol that will make things better, things will get better!" Sorry, but unless I see some protocol which would be a good candidate, I will remain sceptical.
And I cannot help but read this as "We should not think about improving protocols because it is impossible to do better." In any case, I appreciate the parry, even a failed one. The discussion is important, as letting protocols rot is not a good idea.