Zach Shepherd's WordPress Blog


Sunday, April 20, 2008

Virtualization Benchmarking

At the beginning of March, I had conversations with a few COSI members about possible research projects I could work on in my “free time”. What resulted from these conversations was the idea of a “Broad Spectrum Comparison of Virtualization Technologies” and the concept of “Broad Spectrum Virtualization Benchmarking”, a term later shortened to “BSVB”.

The goal was simple: compare a variety of virtualization solutions using a variety of metrics across a diverse set of hardware and software configurations. The reason for wanting to perform BSVB was that many virtualization benchmarks are performed on a limited set of configurations, with the results assumed to be valid for all possible configurations, which is a rather unscientific approach (it completely lacks rigor). The main issue with my plan was that, in order to test just a few different options for each category (hardware, domain0 operating system, domainU operating system, virtualization technology), the test set would have to be performed on several hundred configurations, which is likely one of the reasons broad comparisons are currently uncommon*.
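To make the combinatorics concrete, here is a small illustration. The option counts below are made up purely for the sake of the example (they are not an actual test matrix), but with just four options in each of the four categories, the full cross product already reaches several hundred configurations:

```python
from itertools import product

# Illustrative option counts only -- not a real test matrix.
categories = {
    "hardware": ["machine_a", "machine_b", "machine_c", "machine_d"],
    "domain0_os": ["os_a", "os_b", "os_c", "os_d"],
    "domainU_os": ["os_a", "os_b", "os_c", "os_d"],
    "virtualization": ["tech_a", "tech_b", "tech_c", "tech_d"],
}

# Every combination of one choice per category.
configs = list(product(*categories.values()))
print(len(configs))  # 4 * 4 * 4 * 4 = 256
```

Add a fifth option to each category and the count jumps to 625; the matrix grows multiplicatively with every new choice, which is exactly why running the tests by hand is unreasonable.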

Because the number of tests was completely unreasonable to do by hand, the idea of a virtualization benchmarking suite was formed. One goal of the project became to make the suite modular enough that anyone who had something they felt was important to test could add it without much difficulty. Another goal was that the tests be repeatable; that some exportable format exist to pass on the information necessary to re-run the tests. The benchmarking suite was named “Benchvm” in an effort to come up with an easy-to-type and memorable name.
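As a rough sketch of what “modular” and “repeatable” might look like in practice (this is my own illustrative code, not benchvm's actual interface; every name here is hypothetical), a suite could let tests be registered as plugins and export a run description so the same run can be reproduced elsewhere:

```python
import json
import time

class BenchmarkSuite:
    """Hypothetical modular suite: pluggable tests plus an exportable run."""

    def __init__(self):
        self.tests = {}  # test name -> callable taking a config dict

    def register(self, name, func):
        # Anyone with something they feel is important to test
        # can plug in a new metric this way.
        self.tests[name] = func

    def run(self, config):
        # Run every registered test against one configuration,
        # recording the measured value and wall-clock time.
        results = {}
        for name, func in self.tests.items():
            start = time.time()
            value = func(config)
            results[name] = {"value": value, "seconds": time.time() - start}
        return results

    def export_run(self, config, path):
        # Write out everything needed to re-run the tests identically.
        with open(path, "w") as f:
            json.dump({"config": config, "tests": sorted(self.tests)}, f, indent=2)

suite = BenchmarkSuite()
suite.register("cpu_loop", lambda cfg: sum(i * i for i in range(cfg["iterations"])))
config = {"virtualization": "xen", "iterations": 100000}
results = suite.run(config)
suite.export_run(config, "run.json")
```

The exported JSON file stands in for the “exportable format” mentioned above: handed to another machine with the same suite installed, it describes exactly which tests to run and under which configuration.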

To achieve both goals, benchvm was designed to be completely modular. See the draft of The Woes of the Art of Virtualization Benchmarking for more information on the issues associated with virtualization benchmarking and the proposed solution: benchvm. Hopefully, once fully implemented, benchvm could be used by anyone** doing virtualization benchmarking, providing a mechanism for the tests to be run in an identical way across a variety of systems, with the added bonus of making the tests completely repeatable.

Currently, an effort to implement benchvm, as outlined in the paper, is underway. The goal is to have a working beta ready to perform tests comparing Xen and KVM, and to present benchvm and the preliminary set of results at XenSummit and KvmForum this year. The benchmarking side of things includes not only students and faculty at Clarkson, but a variety of other researchers (more information on this as plans become clearer).

* – By “uncommon”, I mean “unheard of”.
** – Of course, to be truly scientific, one comparison that would need to be performed is between two initial sets of tests, run with and without benchvm, to ensure that benchvm itself has no impact on the results.

posted by Zach at 8:06 pm  
