Flash -- still crawling

Flash is an absolute resource drain; you've probably noticed how it completely hogs the machine when you watch a video. Ever wonder how much more power is consumed watching Flash compared to a regular video file? Is it a significant amount? For those too lazy to read the rest, the short answer is yes. And now to the details.

Recently I was watching a Hulu video on a 1080p monitor and noticed it was a little choppy. I decided to conduct an experiment and actually measure the difference in resource and power utilization between Flash and H.264 (in MPlayer). Not having the desire to make a video and encode it for both Flash and H.264, I went looking for a trailer that was sufficiently long and widely available in multiple formats. Tron Legacy, conveniently available in 1080p through YouTube and The Pirate Bay. Excellent.

In a more or less idle state my laptop draws around 1500mA of current (according to ACPI), CPU utilization is around 3%, and the clock, averaged across both cores, sits somewhere around 1.5GHz (1GHz min, 2GHz max, 0.25GHz steps, using the on-demand CPU frequency governor). Firing up the video through YouTube in windowed mode (which scales the video to around 800 pixels wide), CPU utilization jumps to around 85%, current draw to around 4400mA, and the clock stays pegged at 2GHz on both cores. Setting the movie to full screen (1920 pixels wide) decreases CPU usage to 70% and current draw to 3500mA. This might sound counterintuitive, but it makes perfect sense: at 1920 wide the video is at its native resolution and does not need to be scaled. (This also demonstrates that Flash does not make good use of hardware scaling, AKA Xv.) Viewing the same 1080p trailer in MPlayer reduces both CPU load and current draw. The size of the video window does not matter much: scaling it to about 800 pixels or viewing it at the native 1920 pixels wide yields the same numbers, thanks to MPlayer's Xv support. CPU utilization is around 40%, the CPU quite frequently clocks down to reduce power consumption, and current draw is around 3000mA.
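
If you want to reproduce these readings, something like the quick Python sketch below will do. The sysfs paths are for a reasonably modern kernel and vary by hardware; older kernels expose the same data under /proc/acpi/battery, and some batteries report power_now (in microwatts) instead of current_now:

    import time

    def read_int(path):
        with open(path) as f:
            return int(f.read().strip())

    while True:
        # current_now is reported in microamps on most batteries
        ma = read_int("/sys/class/power_supply/BAT0/current_now") / 1000
        # per-core clock in kHz, as set by the on-demand governor
        ghz = [read_int("/sys/devices/system/cpu/cpu%d/cpufreq/scaling_cur_freq" % c) / 1e6
               for c in (0, 1)]
        print("draw: %4.0f mA  cpu0: %.2f GHz  cpu1: %.2f GHz" % (ma, ghz[0], ghz[1]))
        time.sleep(1)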

So what does all of this mean? Assuming the voltage at the built-in ACPI ammeter equals the battery voltage (11.1V), the difference in power consumption between playing a video in Flash and in MPlayer with H.264 is about that of a medium-strength CFL light bulb (4400mA - 3000mA = 1.4A; 1.4A * 11.1V ≈ 15 watts). Now, this experiment is completely unscientific and has many flaws, perhaps the biggest being that I used the Linux 64-bit Flash player (10,0,42,34); the vast majority of Flash users are obviously on Windows, and it's possible that it runs better there, but I wouldn't bet money on that.

It makes me wonder: if Google is supposedly so concerned about being green, maybe they should think about switching the default video format for YouTube. We can do some interesting estimations. Let's assume the average YouTube user watches 10 minutes worth of content a week in the default Flash format. That user consumes about 0.13 kilowatt-hours per year more than they would with a more efficient format (10 minutes / 60 minutes per hour * 15 watts * 52 weeks per year / 1000 watt-hours per kilowatt-hour). This does not sound like all that much, but assuming that 5% of the world population fits into this category, it adds up to about 40,000,000 kilowatt-hours per year that could be saved. What does this number really mean? I invite you to go to the EPA greenhouse gas calculator and plug it in. You'll see it's equivalent to the annual emissions of 5500 cars. Again, the numbers are completely unscientific, but even if they are off by a factor of 3 it is still a significant number. It would be nice for someone to conduct a more thorough investigation.
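
Here is the same back-of-the-envelope math in code form, so you can plug in your own assumptions:

    # Back-of-the-envelope estimate, using the numbers from above.
    # Every input here is an assumption, not a measurement.
    extra_watts = 15.0           # Flash vs. MPlayer/H.264 power delta
    minutes_per_week = 10.0      # assumed average Flash viewing
    weeks_per_year = 52

    kwh_per_user = extra_watts * (minutes_per_week / 60) * weeks_per_year / 1000
    users = 0.05 * 6.7e9         # 5% of the world population, circa 2010

    print("per user: %.2f kWh/year" % kwh_per_user)                       # 0.13
    print("total: %.0f million kWh/year" % (kwh_per_user * users / 1e6))  # ~44, i.e. roughly 40 million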

While conducting this experiment I noticed something interesting: playing the 1080p video on YouTube would work fine for the first 1.5 minutes, but then it would get choppy. The full trailer had already downloaded, so that didn't make much sense. Firing up the KDE System Monitor, I was quickly able to figure out the problem. As the video got choppy, the CPU clock would drop while usage remained high; clearly the problem was cooling. System Monitor was reporting a CPU temperature of about 100C and a current draw of almost 6000mA. It had been a while since I cleaned the inside of my laptop, so I stripped it apart and took out a nice chunk of dust that was sitting between the radiator and the fan. After this, the CPU temperature never went above 85C and current draw stayed at a much more reasonable 4400mA while playing the Flash video. Hopefully this will resolve my choppy Hulu problem.

The graphs from this experiment are available. In the Flash graph, the scaled trailer was played first, followed by full screen. For the MPlayer graph the inverse was done, first full screen and then scaled, but it doesn't matter much for MPlayer.

LILUG  WWTS  software  2010-04-07T22:42:19-04:00
En garde? Où est le salut?

In reply to Josef "Jeff" Sipek's reply to my post entitled SMPT -- Time to chuck it from a couple of years ago.

This is a (long overdue) reply to Ilya's post: SMPT -- Time to chuck it.

[...]

There are two apparent problems at the root of the SMTP protocol which allow for easy manipulation: lack of authentication and sender validation, and lack of user interaction. It would not be difficult to design a more flexible protocol which would allow us to enjoy the functionality we are familiar with while addressing some, if not all, of the problems within SMTP.

To allow for greater flexibility in the protocol, it would first be broken from a server-server model into a client-server model.

This is the first point I 100% disagree with...

That is, traditionally when one would send mail, it would be sent to a local SMTP server which would then relay the message on to the next server until the email reached its destination. This approach allowed for email caching and delayed send (when a (receiving) mail server was off-line for hours (or even days) on end, messages could still trickle through as the sending server would periodically try to resend them). Today's mail servers have very high uptimes and many are redundant, so caching email for delayed delivery is not very important.

"Delayed delivery is not very important"?! What? What happened to the whole "better late than never" idiom?

It is not just about uptime of the server. There are other variables one must consider when thinking about the whole system of delivering email. Here's a short list; I'm sure I'm forgetting something:

  • server uptime
  • server reliability
  • network connection (all the routers between the server and the "source") uptime
  • network connection reliability

It does little to no good if the network connection is flaky. Ilya is arguing that that's rarely the case, and while I must agree that it isn't as bad as it used to be back in the 80's, I also know from experience that networks are very fragile and it doesn't take much to break them.

A couple of times over the past few years, I noticed that my ISP's routing tables got screwed up. Within two hours of such a screwup, things returned to normal, but that's 2 hours of "downtime."

Another instance of a network going haywire: one day, at Stony Brook University, the internet connection stopped working. Apparently, a compromised machine on the university campus caused a campus edge device to become overwhelmed. This eventually led to a complete failure of the device. It took almost a day until the compromised machine was disconnected, the failed device reset, and the backlog of all the traffic on both sides of the router settled down.

Failures happen. Network failures happen frequently: more frequently than I would like them to, and more frequently than the network admins would like them to. Failures happen near the user and far away from the user. One can hope that dynamic routing tables keep the internet as a whole functioning, but even those can fail. Want an example? Sure. Not that long ago, the well-known video repository YouTube disappeared off the face of the Earth... well, to some degree. As this RIPE NCC RIS case study shows, on February 24, 2008, Pakistan Telecom decided to announce BGP routes for YouTube's IP range. The result was that if you tried to access any of YouTube's servers on the 208.65.152.0/22 subnet, your packets were directed to Pakistan. For about an hour and twenty minutes that was the case. Then YouTube started announcing more granular subnets, diverting some of the traffic back to itself. Eleven minutes later, YouTube announced even more granular subnets, diverting the bulk of the traffic back to itself. A few dozen minutes later, PCCW Global (Pakistan Telecom's provider, responsible for forwarding the "offending" BGP announcements to the rest of the world) stopped forwarding the incorrect routing information.
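
A quick illustration of why the more granular announcements pulled traffic back: routers prefer the most specific matching prefix. Here is a toy routing table with plain longest-prefix match in Python (the /24 merely stands in for YouTube's more granular announcements; the exact prefixes are in the RIPE study):

    from ipaddress import ip_address, ip_network

    routes = {
        ip_network("208.65.152.0/22"): "Pakistan Telecom (hijack)",
        ip_network("208.65.153.0/24"): "YouTube (more granular re-announcement)",
    }

    def best_route(dst):
        matches = [net for net in routes if ip_address(dst) in net]
        return routes[max(matches, key=lambda net: net.prefixlen)]

    print(best_route("208.65.153.10"))  # the /24 wins: back to YouTube
    print(best_route("208.65.155.10"))  # only the /22 matches: off to Pakistan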

So, networks are fragile, which is why having an email transfer protocol that allows for retransmission is a good idea.

Pas touche! I have not conducted extensive surveys of mail server configurations, but from personal experience, most mail servers give up on sending email a lot sooner than recommended: RFC 2821 calls for a 4-5 day retry period. This is a reflection of the times; email is expected to deliver messages almost instantaneously (just ask Ted Stevens!).

As you are well aware, I am not implying that networks are anywhere near perfect; it just does not matter. If you send a message and it does not get delivered immediately, your mail client can tell you so. This allows you to react: if the message was urgent, you can use other forms of communication to try to get it through (the phone, say). The client can also queue the message (assuming no CAPTCHA system, more on that later) and try to deliver it later. Granted, machines that run clients have significantly shorter uptimes than servers, but is that really such a big deal, especially now that servers give up on delivery just a few hours after the first attempt?
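
Something like this minimal sketch is all I have in mind; deliver() is a stand-in for whatever the new protocol's send operation ends up being:

    import time

    RETRY_DELAYS = [60, 300, 3600, 6 * 3600]  # back off: 1m, 5m, 1h, 6h

    def send_with_retry(message, deliver):
        for delay in RETRY_DELAYS:
            if deliver(message):
                return True
            # the sender finds out immediately and can grab the phone
            # instead of waiting; meanwhile the client keeps trying
            print("delivery failed, retrying in %d seconds" % delay)
            time.sleep(delay)
        return deliver(message)  # one last try before giving up for good

    send_with_retry("urgent note", lambda m: True)  # delivered on first try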

I, for one, am looking forward to the day when I no longer have to ask my potential recipient whether or not they have received my message.

Instead, having direct communication between the sender-client and the receiver-server has many advantages: it opens up the possibility of CAPTCHA systems, makes the send portion of the protocol easier to upgrade, and allows for new functionality in the protocol.

Wow. So much to disagree with!

  1. CAPTCHA doesn't work
  2. What about mailing lists? How does the mailing list server answer the CAPTCHAs?
  3. How does eliminating server-to-server communication make the protocol easier to upgrade?
  4. New functionality is a nice thing in theory, but what do you want from your mail transfer protocol? I, personally, want it to transfer my email between where I send it from and where it is supposed to be delivered to.
  5. If anything eliminating the server-to-server communication would cause the MUAs to be "in charge" of the protocols. This means that at first there would be many competing protocols, until one takes over - not necessarily the better one (Betamax vs. VHS comes to mind).
  6. What happens in the case of overzealous firewall admins? What if I really want to send email to bob@example.com, but the firewall (for whatever reason) is blocking all traffic to example.com?

Point by point:

  1. Touché! I have to admit CAPTCHAs are a bit ridiculous in this application.
  2. See above.
  3. By creating more work for admins. It allows users to complain more directly to the admins when a new protocol feature does not work. Yes, I know admins want less work and fewer complaining users, but there are benefits. It really comes down to the fact that with more interactivity it is easier to react to changes; servers do not have brains, but the people behind their clients do.
  4. Hopefully that will still happen.
  5. Well, the worse protocol is already winning: SMTP. dMTP (dot Mail Transfer Protocol) is so much better, even if it is quite vague. MUAs will not be in charge; if they do not play ball then mail will not be delivered.
  6. Now you are just getting ahead of yourself. Stop making up problems. The solution to overzealous admins is their removal. [...]

And so this brings us to the next point, authentication: how do you know that the email actually did originate from the sender? This is one of the largest problems with SMTP, as it is so easy to fake one's outgoing email address. The white list has to rely on a verifiable and consistent flag in the email. A sample implementation of such a control could work similarly to a current hack on the email system, SPF, in which a special DNS entry says where mail can originate from. While this approach is quite effective in a server-server architecture, it would not work in a client-server architecture. Part of the protocol could require the sending client to send a cryptographic hash of the email to his own receiving mail server, so that the receiving party's mail server could verify the authenticity of the source of the email. In essence this creates a 3-way handshake between the sender's client, the sender's (receiving) mail server, and the receiver's mail server.

I tend to stay away from making custom authentication protocols.

In this scheme, what guarantees that the client and his "home server" aren't both trying to convince the receiving server that the email is really from whom they say it is? In Kerberos, you have a key for each system and a password for each user. The Kerberos server knows it all, and this central authority is why things work. With SSL certificates, you rely on the strength of the crypto used, as well as blind faith in the certificate authority.

They might; the point is not so much to authenticate the user but to link him to a server. If the server he is linked to is dirty, well, you can blacklist it. Much of the spam today is sent from botnets; in this scheme all the individual botnet senders would have to link themselves to a server. Obviously, a clever spammer would run a server on each of the zombie machines to auth for itself. The catch is that he would have to ensure that the firewalls/NATs are open and that there is a (sub-)domain pointing back at the server. This is all costly for the spammer, and for the good guys it'll be easy to trace down the dirty domains.

At first it might seem that this process uses more bandwidth and increases the delay of sending mail, but one has to remember that in the usual configuration of sending email, using IMAP or POP for mail storage, one undergoes a similar process,

Umm... while possible, I believe that a very, very large majority of email is sent via SMTP (and I'm not even counting all the spam).

Carton jaune! I addressed that issue in my original posting, just two sentences below this one. Excessive lobotomy is not appreciated.

first email is sent for storage (over IMAP or POP) to the sender's mail server, and then it is sent over SMTP to the sender's mail server for relaying to the receiver's mail server. It is even feasible to implement hooks in the IMAP and POP stacks to talk to the mail-sending daemon directly, eliminating an additional socket connection by the client.

Why would you want to stick with IMAP and POP? They do share certain ideas with SMTP.

Carton rouge, I said nothing about sticking to IMAP/POP. The point is that the system can be streamlined somewhat.

For legitimate mass mail this process would not encumber the sending procedure, as in this case the sending server would be located on the same machine as the sender's receiving mail server (which would store the hash for authentication), and they could even be streamlined into one monolithic process.

Not necessarily. There are entire businesses that specialize in mailing list maintenance. You pay them, and they give you an account with software that maintains your mailing list. Actually, it's amusing how similar it is to what spammers do. The major difference is that in the legitimate case, the customer supplies their own list of email addresses to mail. Anyway, my point is that in these cases (and they are more common than you think) the mail sender is on a different computer than the "from" domain's MX record.

I do not think that increasing the burden on mass mailers, even good ones, is such a bad thing.

[...]

I really can't help but read that as "If we use this magical protocol that will make things better, things will get better!" Sorry, but unless I see some protocol which would be a good candidate, I will remain sceptical.

And I cannot help but read this as "We should not think about improving protocols because it is impossible to do better." In any case, I appreciate your mal-pare. The discussion is important, as letting protocols rot is not a good idea.

[...]

LILUG  WWTS  news  software  2009-04-22T10:47:24-04:00
SMPT -- Time to chuck it.

E-mail, in particular SMTP (Simple Mail Transfer Protocol), has become an integral part of our lives; people routinely rely on it to send files and messages. At the inception of SMTP, the Internet was only accessible to a relatively small, close-knit community, and as a result the architects of SMTP did not envision problems such as SPAM and sender spoofing. Today, as the Internet has become more accessible, unscrupulous people are making use of flaws in SMTP for their profit, at the expense of the average Internet user.

There have been several attempts to bring this ancient protocol in line with current society, but the problem of spam keeps creeping in. At first people implemented simple filters to get rid of SPAM, but as the sheer volume of SPAM increased, mere filtering became impractical, and so we saw the advent of adaptive SPAM filters which automatically learned to identify and differentiate legitimate email from SPAM. Soon enough the spammers caught on and started embedding their ads into images, where they could not be easily parsed by spam filters. AOL (America On Line) flirted with other ideas to control spam, imposing an email tax on all email delivered to its users. It seems like such a system might work, but it stands in the way of the open principles which have been so important to the flourishing of the internet.

There are two apparent problems at the root of the SMTP protocol which allow for easy manipulation: lack of authentication and sender validation, and lack of user interaction. It would not be difficult to design a more flexible protocol which would allow us to enjoy the functionality we are familiar with while addressing some, if not all, of the problems within SMTP.

To allow for greater flexibility in the protocol, it would first be broken from a server-server model into a client-server model. That is, traditionally when one would send mail, it would be sent to a local SMTP server which would then relay the message on to the next server until the email reached its destination. This approach allowed for email caching and delayed send (when a (receiving) mail server was off-line for hours (or even days) on end, messages could still trickle through as the sending server would periodically try to resend them). Today's mail servers have very high uptimes and many are redundant, so caching email for delayed delivery is not very important. Instead, having direct communication between the sender-client and the receiver-server has many advantages: it opens up the possibility of CAPTCHA systems, makes the send portion of the protocol easier to upgrade, and allows for new functionality in the protocol.

Spam is driven by profit; spammers exploit the fact that it is cheap to send email, and even the smallest returns on spam amount to good money. By making it more expensive to send spam, spam would be phased out as the returns become negative. Charging money, like AOL tried, would work, but it is not a good approach: not only does it not allow for sender anonymity, it also rewards mail administrators for doing a bad job (the more spam we deliver, the more money we make). Another approach is to make the sender interact with the recipient's mail server through some kind of challenge which is hard for a machine to compute but easy for a human, a Turing test. For example, the recipient can ask the sender's client to verify what is written in an obfuscated image (a CAPTCHA) or what is being said in an audio clip, or both, so as to minimize the effect on people with handicaps. It would be essential to also white-list senders so that they do not have to perform a user-interactive challenge to send email, such that mail from legitimate automated mass senders would get through (current implementations of sieve scripts could be used for that).
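
A minimal sketch of the receiving server's gate under this scheme; the challenge functions here are made-up stand-ins for the image/audio test:

    whitelist = {"alice@example.com", "statements@bank.example"}

    def issue_challenge():
        return "type the letters in the obfuscated image"  # stand-in CAPTCHA

    def verify_answer(challenge, answer):
        return answer == "xkcd"                            # stand-in check

    def accept_message(sender, answer=None):
        if sender in whitelist:
            return True                  # white-listed senders skip the test
        challenge = issue_challenge()    # a human has to answer this
        if verify_answer(challenge, answer):
            whitelist.add(sender)        # a human answered; remember them
            return True
        return False                     # no human on the other end: refuse

    print(accept_message("alice@example.com"))   # True, white-listed
    print(accept_message("bot@zombie.example"))  # False, failed the challenge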

In this system, if users were to make wide use of filters, we would soon see a problem. If nearly everyone has a white-list entry for Bank of America, what is to prevent a spammer from impersonating that bank? And so this brings us to the next point, authentication: how do you know that the email actually did originate from the sender? This is one of the largest problems with SMTP, as it is so easy to fake one's outgoing email address. The white list has to rely on a verifiable and consistent flag in the email. A sample implementation of such a control could work similarly to a current hack on the email system, SPF, in which a special DNS entry says where mail can originate from. While this approach is quite effective in a server-server architecture, it would not work in a client-server architecture. Part of the protocol could require the sending client to send a cryptographic hash of the email to his own receiving mail server, so that the receiving party's mail server could verify the authenticity of the source of the email. In essence this creates a 3-way handshake between the sender's client, the sender's (receiving) mail server, and the receiver's mail server. At first it might seem that this process uses more bandwidth and increases the delay of sending mail, but one has to remember that in the usual configuration of sending email, using IMAP or POP for mail storage, one undergoes a similar process: first email is sent for storage (over IMAP or POP) to the sender's mail server, and then it is sent over SMTP to the sender's mail server for relaying to the receiver's mail server. It is even feasible to implement hooks in the IMAP and POP stacks to talk to the mail-sending daemon directly, eliminating an additional socket connection by the client.
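
Roughly, the handshake could look like the sketch below. No wire format is specified here, so all the names are invented for illustration; the point is only that the receiver checks the hash against the sender's home server:

    import hashlib

    home_server_hashes = set()  # state kept by the sender's (receiving) server

    def client_send(message):
        digest = hashlib.sha256(message).hexdigest()
        home_server_hashes.add(digest)    # step 1: deposit hash at home server
        return digest                     # step 2: send the mail directly

    def receiver_verify(message):
        # step 3: the receiver asks the sender's home server (found via DNS,
        # much like an MX lookup) whether it has seen this exact message
        digest = hashlib.sha256(message).hexdigest()
        return digest in home_server_hashes  # mismatch => likely spoofed

    msg = b"From: ilya\nTo: jeff\n\nEn garde!"
    client_send(msg)
    print(receiver_verify(msg))  # True: the sender is linked to a server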

For legitimate mass mail this process would not encumber the sending procedure, as in this case the sending server would be located on the same machine as the sender's receiving mail server (which would store the hash for authentication), and they could even be streamlined into one monolithic process.

Some might argue that phasing out SMTP is an extremely radical idea; it has been an essential part of the internet for 25 years. But then, when is the right time to phase out this archaic and obsolete protocol, or do we commit to using it for the foreseeable future? The longer we wait to phase it out, the longer adoption of something new will take. This protocol should be designed with a way to coexist with SMTP to get over the adoption curve, id est, make it possible for a client to check for the recipient's functionality: if the recipient can accept email by the new protocol, send it that way rather than over SMTP.
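
The coexistence logic is simple enough to sketch. The capability probe below is hypothetical; a DNS record, akin to an MX lookup, would be one way to implement it:

    def supports_new_protocol(domain):
        # stand-in for a DNS lookup advertising the new protocol
        return domain in {"gmail.com", "yahoo.com"}  # imagined early adopters

    def send_mail(domain, message):
        if supports_new_protocol(domain):
            return "delivered via new protocol to " + domain
        return "fell back to SMTP for " + domain     # old servers keep working

    print(send_mail("gmail.com", "hello"))
    print(send_mail("example.org", "hello"))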

The implementation of such a protocol would take very little time; the biggest problem would be adoption. The best approach to this problem is to entice several large mail providers (such as Gmail or Yahoo) to switch over. Since these providers handle a large fraction of all mail, the smaller guys (like myself) would have to follow suit. There is even an incentive for mail providers to re-implement the mail protocol: it would save them many CPU cycles, since Bayesian spam filters would no longer be that important.

By creating this new protocol we would dramatically improve the end user's experience online, as there would be fewer annoyances to deal with. Hopefully the alleviation of these annoyances would bring faster adoption of the protocol.

LILUG  News  WWTS  2008-03-16T22:12:08-04:00
Browsers -- I hate them

I hate browsers, every single one that I've used. Every browser out there is a pathetic failure when it comes to user interface. Right now my favourite browser is Iceweasel/Firefox, but in my book it doesn't have much going for it.

Dialogues

Browsers have a love for pop-up dialogues. It's getting a little better, but not good enough. I remember when, in Firefox, if you mistyped a URL it would pop up a dialog box: "Server not found." So you'd have to take your hands off the keyboard, hit OK, and then put the cursor back in the address bar and try again. Why does the browser need to confirm with me that I mistyped something? Now this is no longer a problem; when you go to a non-existent page you get a message inside the browser pane saying that the server cannot be found. This is great, but I believe that NOTHING should pop up without the user's intent.

Say, for example, you search for something on Google and get a link to a mailing list. I've seen a few mailing-list archives that use self-signed certificates (https), so you get a pop-up dialogue saying that the page is not kosher. WHY?! It's not a page I care about for security; in fact, for most pages I visit I don't care much if anyone spies on what I read. I think this warning should be presented where it can be ignored without any user interaction, for example a drop-down bar with a message (like the pop-up-blocked notice). Heck, you can even turn the whole browser's panels RED so even the most senile users will notice something strange is up. And maybe the first time the user comes across this error it should pop a dialogue explaining why the browser miraculously turned red.

Users hate dialogues: if one has more than 200 or so characters in the message, a majority of users won't even read it; they will, in robotic fashion, click on some button until the dialogue disappears. So just stop with the pop-up dialogue boxes; they are annoying and not useful. If your program needs to constantly pop things up for the user to select, then you have failed at user interface design.

Fonts

Iceweasel/Firefox has this awesome feature where you can scale the page fonts. It's incredibly handy when you come across a web 2.0 website with 2-point font (fucking web designers: readability first, style second!! STOP IT!!). Now this is all fine, but I am tired of always manually adjusting the fonts per website. Fortunately there is another great feature (Edit > Preferences > Content > Fonts & Colors > Advanced > Minimum font size) where you can set the minimum font size. You'd think this is the best thing since sliced bread (figure of speech, I hate sliced bread too, but that's for another day), but there is a tremendous flaw in this feature. When you select a minimum font size of, say, 8, every font smaller than size 8 is bumped up to 8, while larger fonts are not affected. This sounds great in theory but is horrible in practice: on a heavily stylized page your setting will push a lot of fonts out of their boundaries, so you get overflowing menus, notices and all that other jazz. It's so annoying that it's not usable. What the browser should do instead is scale all the fonts on the page. Say the smallest font on the page is size 5; then 8 - 5 = 3, so increase EVERY font on the page by 3 points, kind of like what happens when you manually adjust the font size (View > Text Size > Increase).
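
In code, the difference between what Firefox does and what it should do is tiny; a sketch:

    def rescale_fonts(sizes, minimum=8):
        # proposed: shift everything up by one offset, keeping relative sizes
        offset = max(0, minimum - min(sizes))  # smallest font 5 -> offset 3
        return [s + offset for s in sizes]

    page = [5, 7, 12, 24]              # font sizes found on some page
    print(rescale_fonts(page))         # [8, 10, 15, 27], layout intact
    # Firefox's current clamp would give [8, 8, 12, 24], squashing the
    # difference between the 5- and 7-point fonts and breaking the layout.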

Menu bars

Stupid menu bars. Every browser is full of them. You have the status bar on the bottom, and the menu bar, search bar, tab bar and bookmark bar on the top. WTF?! When I use the browser I want to see the webpage, not the static content of the browser. STOP stealing my real estate. So I suggest you disable the bookmark and status bars. And you'll scream, "BUT I want the functionality of my status bar; I want to know where the link I am about to visit points." Well, so do I. I hate the bar but like the functionality it provides, and there is nothing to say that the functionality can't be moved. Say, when you move your mouse over a link, the address bar displays the address of the link, and as soon as you move away from the link the address bar goes back to displaying the page address. As for the load status, I've found this great plugin called Fission which borrows a Safari feature: it shows the page load progress in the background of the address bar.

The great menu bar is immune from any customization. It just sits there, doing nothing most of the time. Face it, how often do you use it? While it's very useful, it's not needed all that often (maybe once a week), so why is there no feature where it can collapse into an expandable menu (kind of like the start button on Windows or the kmenu in KDE)? When you click this monster, the full menu would just appear. Now allow this menu button to be placed into any other panel, and forget about it. What a real estate saver.

Cookies

I love cookies, just not the internet kind. I think cookies are a sign of a lazy developer. Yes, in some instances cookies are the only way to go (such as persistent user tracking), but they are often misused; where plain in-URL session tracking would suffice, developers still use cookies (SHAME SHAME SHAME ON YOU). Now, I have cookies disabled by default and use a cool plugin called Cookie Button which allows me, with one click, to enable cookies for a particular page, such as my banking web page or a forum which I regularly visit. It's a great approach to cookie management, with one exception. I wish Firefox had a feature where you could accept any cookies for some length of time, for those truly stupid websites like eBay. When you log in to eBay you get forwarded through a lot of pages, each with its own third-level domain name. The cookie management in Firefox does not have any features to help you deal with this dilemma. This is where "accept all cookies for the next 30 seconds... and add the pages to the white-list" would come in extremely handy. For the more advanced users there should be a way to add cookie exceptions with wildcards, for example *.ebay.com. If the cookie management features were properly implemented, then the Firefox developers should consider disabling cookies by default, thus weaning web developers off cookies.
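
A sketch of what that cookie policy could look like; fnmatch gives the *.ebay.com style wildcards, and the 30-second window is the one suggested above:

    import time
    from fnmatch import fnmatch

    whitelist = ["*.ebay.com", "bank.example.com"]
    accept_all_until = 0.0           # epoch time; 0 means the window is shut

    def accept_all_for(seconds):
        global accept_all_until
        accept_all_until = time.time() + seconds  # the "next 30 seconds" button

    def should_accept_cookie(host):
        if time.time() < accept_all_until:
            whitelist.append(host)   # remember pages seen during the window
            return True
        return any(fnmatch(host, pattern) for pattern in whitelist)

    accept_all_for(30)
    print(should_accept_cookie("signin.ebay.com"))  # True, window is open
    print(should_accept_cookie("tracker.example"))  # also True, and now white-listed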

LILUG  Software  WWTS  2007-06-03T11:58:03-04:00
Who Wrote This Shit

Portmap by default listens to all IP addresses. However, if you are not providing network RPC services to remote clients (you are if you are setting up a NFS or NIS server) you can safely bind it to the loopback IP address (127.0.0.1)
<Yes> OR <No>

Maybe I'm slow or something, but I really hate this prompt in Debian, which accompanies the installation of portmap. It seems like you need a degree in English logic to figure out what you need to select. If you run NFS or NIS and are confused as hell by this prompt, just select No.

UPDATE: Just because you select No doesn't mean that Debian will actually refrain from binding portmap to loopback. You might want to run dpkg-reconfigure portmap again and make sure it did the right thing. I got a nasty surprise the day after, when two of the NFS servers stopped mounting. Filed a bug report.
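
A quick sanity check that portmap is listening where you think it is; port 111 is the portmapper, and the LAN address below is a made-up example to substitute with your server's real one:

    import socket

    def can_connect(host, port=111, timeout=2):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print("loopback:", can_connect("127.0.0.1"))
    print("LAN:     ", can_connect("192.168.1.10"))  # your server's address here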

Debian  LILUG  Software  WWTS  2007-05-25T21:21:52-04:00
