Friday, March 30, 2007

Random Aside:

If you've never listened to the radio communication for an airport's tower, I highly recommend it. The experience gives you a realistic appreciation for the people into whose hands we put our lives every time we fly. All without the dramatic tension. It also makes for great white noise while you're working.

Tuesday, March 27, 2007


This isn't news. It's barely a blog post. But if you can't rant about it in the blogosphere, where can you rant about it?

This particular "it" being basic arithmetic.

The folks over at Petco run a charity to help get pets adopted. They used to just ask for a charitable contribution of $1 at checkout, but I'm guessing too many people said no so they now offer to round your purchase up to the next dollar instead. Simple enough. As a supporter of this particular charity, I always say yes.

This change has apparently thrown the checkout folks for a loop. Since they couldn't do the round up in their heads, Petco has placed a cheat sheet next to the cash register. It gets better... the young lady running the register I was at was intimidated by the cheat sheet. I'd make this up if I could (and I have a minor in creative writing!) but my imagination hurts when I think about it too much.

Apologies for being so crass, but WTF?

I'm not suggesting that everyone in society have wizardry status with their local mathematics department. God knows I haven't busted out sin²θ + cos²θ = 1 or anything I learned in differential equations since college. I am, however, suggesting that everyone needs a command of basic arithmetic. If you can't do 100 - 18 in your head (the round up necessary for my last purchase at Petco), then I'm confident that a little practice won't hurt.

Netflix just delivered Idiocracy by Mike Judge of Office Space fame. Talk about timing...

(ps. the answer is 82.)

Friday, March 23, 2007

Random Aside: The Amen Break

The folks over at Coradiant sent me a link to this gem: A video explaining the world's most important 6 second drum loop. It's a tall claim, but this is one of those rare moments when the story lives up to the hyperbole.

Thursday, March 22, 2007

The Era of Instability

I had the privilege of hanging out with the founder of Transmeta on Tuesday. Over a tasty ravioli, we discussed the state of networking and its evolution from just a decade ago.

In the late 90s there were some big changes afoot in networking. For most vendors in the space, doing a new product meant doing a new ASIC. Building new features on a given ASIC meant either expensive respins of the chip or tossing a processor core into the ASIC. (Early Alteon hardware did just that.) The problem with the CPU-on-the-ASIC approach is that there tend to be limits on what can be done in terms of performance, available memory, and of course third-party software. There is some room for improvement here with Linux running on PowerPC and MIPS processors, but it's hard to beat Linux's flexibility on x86.

And we noticed that back in the late 90s.

The result was a wave of new appliance-based products. My product at the time, the iSD SSL Accelerator, was an early entrant in appliance-based networking, and while it offered nowhere near the flexibility of traditional networking products, it did prove that x86 networking was here to stay.

Fast forward a few years. The appliance market is booming and most new networking boxes are x86 server platforms running Linux. A few shops use FreeBSD. All the networking guys have moved over and started "fixing" all the code that was initially written by people like me -- people with application-space Linux expertise. We apps guys of course learned in the process, and we watched our code migrate from multi-threaded beasts to single-threaded state machines built around classic networking ideology.

As the CPU megahertz game really picked up in the middle of the decade, we watched the single-threaded beasts make stunning -- and linear -- performance improvements. A networking engine that could perform n units of work in 1 second at 1GHz was able to do 2n units of work at 2GHz. Feature sets were built around these engines, and a significant amount of production infrastructure came to rely on these appliances.

As 2005 started winding down and 2006 was on the horizon, we had a small problem on our hands. The megahertz game was coming to a close and multi-core computing was becoming a reality. As the media around us buzzed in anticipation of dual-core and quad-core machines coming out, a lot of engineers huddled around their cubes and asked, "What now?"

At first glance multi-core doesn't seem like a big deal. Classic networking has long done systems with multiple CPUs. Except for one detail -- classic networking was doing a lot of stateless processing that required little to no inter-processor communication. When work did need to move from CPU to CPU, circular queues could be used to provide lockless data structures -- computer science jibber for saying that multiple CPUs could work well together for the same reason an assembly line works: workers don't need to touch the same thing at the same time.
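To make the assembly-line analogy concrete, here's a minimal sketch of a single-producer/single-consumer circular queue in C11. It's lockless because one CPU only ever writes `head` and the other only ever writes `tail`; the queue size and names are my own, not from any particular product:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

#define QSIZE 8  /* capacity; head and tail are free-running counters */

struct spsc_queue {
    void *slots[QSIZE];
    _Atomic size_t head;  /* next slot to write; producer-owned */
    _Atomic size_t tail;  /* next slot to read; consumer-owned */
};

/* Producer side: returns false if the queue is full. */
bool spsc_push(struct spsc_queue *q, void *item) {
    size_t head = atomic_load_explicit(&q->head, memory_order_relaxed);
    size_t tail = atomic_load_explicit(&q->tail, memory_order_acquire);
    if (head - tail == QSIZE)
        return false;  /* full */
    q->slots[head % QSIZE] = item;
    /* Release ensures the slot write is visible before the new head. */
    atomic_store_explicit(&q->head, head + 1, memory_order_release);
    return true;
}

/* Consumer side: returns false if the queue is empty. */
bool spsc_pop(struct spsc_queue *q, void **item) {
    size_t tail = atomic_load_explicit(&q->tail, memory_order_relaxed);
    size_t head = atomic_load_explicit(&q->head, memory_order_acquire);
    if (tail == head)
        return false;  /* empty */
    *item = q->slots[tail % QSIZE];
    atomic_store_explicit(&q->tail, tail + 1, memory_order_release);
    return true;
}
```

Two workers, zero locks -- exactly the trick that let classic stateless designs scale across CPUs, and exactly the trick that stops working once both workers need to touch the same state.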

For classic networking, the nature of the problems being solved allowed them to avoid a lot of messiness. But things had changed. The power and flexibility of being on a real Unix-like server platform meant that new and far more complex problems could be tackled. These problems are unlike older networking problems in that they are not easily divisible.

So what's next? If appliance vendors are going to build the next generation of networking hardware, they are going to have to figure out how to take advantage of multi-core CPUs without losing existing features. And that means taking the time to carefully divvy up work that is not easily divisible. When changes this drastic come down the pike, drastic risks typically come with them.

The next two years are going to be both an opportunity for startups and a curse for IT managers who have to deploy and support networking appliances. Startups that have the luxury of designing support for multi-core platforms into their architecture are going to be able to ship stable products from day 1 that scale as the number of cores goes up. By comparison, existing players are likely to ship a version or two of products that are simply not nearly as stable as their predecessors. Either way, IT managers are going to have their hands full as they are forced to deal with performance challenges posed by their end users and stability challenges posed by their existing vendors.

Time to look at some of these startups, eh?

Monday, March 12, 2007

Security Needs Speed

The long-standing rule amongst security heads is that security trumps performance requirements no matter what. And I've had a long-standing belief that ignoring performance requirements in security is flawed security.

Here's the problem: End users are (generally) not graded by how secure they are. Rather, they are graded by how effective they are at their jobs, regardless of security. For example, if an employee forwards company confidential email to his personal Gmail box so that he can work on a document over the weekend, chances are that he'll be praised. He may very well have exposed all kinds of intellectual property to the public Internet, on another company's server, and then made edits on his virus infested home computer, but he'll still be praised. Security issues be damned, the document was completed in time for the Big Meeting™.

This is where speed starts to matter. When accessing a resource is painfully slow, users will come up with solutions of their own to circumvent the problem. Period. Email is the most common problem with the most available solution (public web mail), but the scope does not end there. Homebrew, un-backed-up, security-audit-failing wikis get set up when IT administrators force sluggish installations of SharePoint on users. Google Desktop search gets installed when end users can't search document repositories fast enough.

The most ambitious effort I have seen was at a branch office that got tired of corporate IT's refusal to get them a faster connection. The local manager approved a DSL line to be put in and a $50 Netgear firewall to be installed to replace a 256Kbps WAN link. Users accessed the corporate network via their VPN connections for internal web and Oracle applications and external web access no longer had to go through the centralized proxy server. The "computer guy" has no idea if updates are pushed out or pulled down and doesn't appreciate how the Netgear's NAT can completely break that. To him, it's moot. "The damn things work."

As consumer tech brings increasingly complex and powerful tools to the masses, the number of workarounds to "make stuff work" is going to increase. As infrastructure professionals, we either make sure that our secure methods are better or we risk losing the battle.