Thursday, March 22, 2007

The Era of Instability

I had the privilege of hanging out with the founder of Transmeta on Tuesday. Over a tasty ravioli, we discussed the state of networking and its evolution from just a decade ago.

In the late 90s there were some big changes afoot in networking. For most vendors in the space, doing a new product meant doing a new ASIC. Building new features on a given ASIC meant either expensive respins of the chip or tossing a processor core into the ASIC. (Early Alteon hardware did just that.) The problem with the CPU-on-the-ASIC approach is that there tend to be hard limits on performance, available memory, and of course third-party software. There is some room for improvement here with Linux running on PowerPC and MIPS processors, but it's hard to beat the flexibility of Linux on x86.

And we noticed that back in the late 90s.

The result was a wave of new appliance-based products. My product at the time, the iSD SSL Accelerator, was an early entrant in appliance-based networking, and while it offered nowhere near the flexibility of traditional networking products, it did prove that x86 networking was here to stay.

Fast forward a few years. The appliance market is booming and most new networking boxes are x86 server platforms running Linux. A few shops use FreeBSD. All the networking guys have moved over and started "fixing" all the code that was initially written by people like me -- people with application space Linux expertise. We apps guys of course learned in the process, and we watched our code migrate from multi-threaded beasts to single-threaded state machines built around classic networking ideology.
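
For the curious, here's roughly what that single-threaded style looks like. This is a bare-bones sketch with purely illustrative names, assuming a Linux epoll loop (which is where most of these engines ended up) -- not anyone's actual shipping code:

    /* Sketch of the single-threaded state-machine style.
     * Names are hypothetical; real engines carried far more state. */
    #include <sys/epoll.h>

    enum conn_state { ST_READING, ST_PROCESSING, ST_WRITING };

    struct conn {
        int fd;
        enum conn_state state;
    };

    /* Advance one connection through its state machine.
     * No threads, no locks: one CPU owns all the state. */
    static void step(struct conn *c)
    {
        switch (c->state) {
        case ST_READING:    /* read request bytes */ c->state = ST_PROCESSING; break;
        case ST_PROCESSING: /* parse and act      */ c->state = ST_WRITING;    break;
        case ST_WRITING:    /* flush the response */ c->state = ST_READING;    break;
        }
    }

    int main(void)
    {
        int epfd = epoll_create(64);      /* connections get registered here */
        struct epoll_event events[64];
        for (;;) {
            int n = epoll_wait(epfd, events, 64, -1);
            for (int i = 0; i < n; i++)
                step(events[i].data.ptr); /* one event at a time, one thread */
        }
    }

The appeal is obvious: no locking, no races, and every cycle the CPU gets faster, the whole engine gets faster.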

As the CPU megahertz game really picked up in the middle of the decade, we watched the single-threaded beasts make stunning -- and linear -- performance improvements. A networking engine that could perform n units of work in 1 second at 1GHz could do 2n units of work at 2GHz. Feature sets were built around these engines, and a significant amount of production infrastructure came to rely on these appliances.

As 2005 started winding down and 2006 was on the horizon, we had a small problem on our hands. The megahertz game was coming to a close and multicore computing was becoming a reality. As the media around us buzzed in anticipation of dual-core and quad-core machines coming out, a lot of engineers huddled around their cubes and asked: what now?

At first glance multi-core doesn't seem like a big deal. Classic networking has long built systems with multiple CPUs. Except for one detail -- classic networking was doing a lot of stateless processing that required little to no inter-processor communication. When work did need to move from CPU to CPU, circular queues could be used to provide lockless data structures -- computer science jibber for saying that multiple CPUs could work well together for the same reason an assembly line works: workers don't need to touch the same thing at the same time.
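
Here's a sketch of that assembly-line idea, illustrative rather than production code. A single-producer/single-consumer ring is lockless precisely because the two CPUs never write the same variable: only the producer writes head, only the consumer writes tail. (I've used C11 atomics for the memory ordering; the 90s versions mostly faked it with volatile and luck.)

    #include <stdatomic.h>
    #include <stdbool.h>

    #define RING_SIZE 1024  /* power of two so we can mask instead of mod */

    struct ring {
        void *slot[RING_SIZE];
        _Atomic unsigned head;  /* written only by the producer CPU */
        _Atomic unsigned tail;  /* written only by the consumer CPU */
    };

    /* Producer side: enqueue one packet, or fail if the ring is full. */
    static bool ring_put(struct ring *r, void *pkt)
    {
        unsigned head = atomic_load_explicit(&r->head, memory_order_relaxed);
        unsigned tail = atomic_load_explicit(&r->tail, memory_order_acquire);
        if (head - tail == RING_SIZE)
            return false;               /* full: back-pressure */
        r->slot[head & (RING_SIZE - 1)] = pkt;
        atomic_store_explicit(&r->head, head + 1, memory_order_release);
        return true;
    }

    /* Consumer side: dequeue one packet, or fail if the ring is empty. */
    static bool ring_get(struct ring *r, void **pkt)
    {
        unsigned tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
        unsigned head = atomic_load_explicit(&r->head, memory_order_acquire);
        if (head == tail)
            return false;               /* empty */
        *pkt = r->slot[tail & (RING_SIZE - 1)];
        atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
        return true;
    }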

For classic networking, the nature of the problems being solved allowed vendors to avoid a lot of messiness. But things had changed. The power and flexibility of being on a real Unix-like server platform meant that new and far more complex problems could be tackled. These problems are unlike older networking problems in that they are not easily divisible.

So what's next? If appliance vendors are going to build the next generation of networking hardware, they are going to have to figure out how to take advantage of multi-core CPUs without losing existing features. And this means taking the time to carefully divvy up work that is not easily divisible. When changes this drastic come down the pike, drastic risks typically come with them.
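
One common way to start divvying things up -- sketched here with purely illustrative names, not any vendor's actual design -- is to hash each flow to a fixed worker core, so per-flow state stays core-local and packets only cross cores via queues like the ring above:

    #include <stdint.h>

    #define NWORKERS 4  /* e.g., one worker thread pinned per core */

    struct flow_key {
        uint32_t src_ip, dst_ip;
        uint16_t src_port, dst_port;
    };

    /* Pick a worker for a flow. Same flow -> same core, every time,
     * so per-flow state never needs a lock. */
    static unsigned pick_worker(const struct flow_key *k)
    {
        uint32_t h = k->src_ip ^ k->dst_ip
                   ^ ((uint32_t)k->src_port << 16 | k->dst_port);
        h ^= h >> 16;  /* cheap mix */
        return h % NWORKERS;
    }

The hard part, of course, is everything that doesn't hash cleanly: shared session tables, global counters, anything where two flows genuinely need to touch the same state. That's exactly the "not easily divisible" work.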

The next two years are going to be both an opportunity for startups and a curse for IT managers who have to deploy and support networking appliances. Startups that have the luxury of designing for multi-core platforms from the start are going to be able to ship stable products from day 1 that scale as the number of cores goes up. By comparison, existing players are likely to ship a version or two of products that are simply not nearly as stable as their predecessors. Either way, IT managers are going to have their hands full as they deal with performance challenges posed by their end users and stability challenges posed by their existing vendors.

Time to look at some of these startups, eh?

1 Comment:

Anonymous said...

Interesting idea, Steve. Is there any room for ASIC-accelerated appliances (SSL, compression, security, regex processing), or do you think that economically the general-purpose x86 will always win out? Sounds like you are advocating the use of multiple cores to help address the "feature concurrency" problem... too many features running on shared general-purpose resources. I agree this is interesting, as is being able to "partition" and allocate other hardware resources. How about a followup post on the implications of virtualization on the appliance space?

11:08 PM  
