How do you benchmark a cloud? There are many aspects you must evaluate.
As a provider
- service availability and redundancy.
- IOPS over time.
- fragmentation of your storage solution.
- responsibility / restore / failover on malfunctions (hardware, electrical).
- default cache behavior and cache overflow under "massive random access" vs. "sequential access".
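The last point above can be probed with a crude micro-benchmark. This is a minimal sketch (file size and block size are arbitrary assumptions, far too small for real measurements) that times sequential versus random reads of the same file:

```python
import os
import random
import tempfile
import time

BLOCK = 4096    # read unit in bytes (arbitrary choice)
BLOCKS = 2048   # 8 MiB test file -- tiny, for illustration only

def make_test_file():
    """Create a throwaway file filled with random bytes."""
    fd, path = tempfile.mkstemp()
    with os.fdopen(fd, "wb") as f:
        f.write(os.urandom(BLOCK * BLOCKS))
    return path

def timed_read(path, offsets):
    """Read BLOCK bytes at each offset, return elapsed seconds."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)
            f.read(BLOCK)
    return time.perf_counter() - start

path = make_test_file()
sequential = [i * BLOCK for i in range(BLOCKS)]
shuffled = sequential[:]
random.shuffle(shuffled)   # the "massive random access" pattern

t_seq = timed_read(path, sequential)
t_rnd = timed_read(path, shuffled)
print(f"sequential: {t_seq:.4f}s  random: {t_rnd:.4f}s")
os.remove(path)
```

On a real system you would use a file much larger than the page cache, drop caches between runs, and repeat many times; dedicated tools like fio do all of this properly.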
For all of these areas there are dedicated tools / APIs / controls. Sometimes they are closely tied to your equipment, sometimes less so. But the coupling between hardware and software creates measurement specifics and integration problems. Figuring out what a benchmark really measures, or tracing an end-to-end request from the object storage API down to the disks, can be maddening. If your goal is a benchmark (at a higher API level) whose results can actually improve your system, then only complete control (and understanding) of your cloud stack will do.
Nagios, as a tool, is not suited to this kind of testing. You need a CMDB and data-collection tools feeding a large data warehouse. You must understand that all benchmark results are primary data, and a cloud can be very complex, so there is a lot of data. What you get out of your data is not just some graphs, but also a sense of how to ask your questions. Just getting the questions right will keep you busy.
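A minimal sketch of what "benchmark results are primary data" implies in practice: keep every raw sample with its context in a queryable store, and aggregate only at query time. Here sqlite stands in for a real warehouse, and the table and column names are my own invention:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")   # a real setup would use a warehouse
conn.execute("""
    CREATE TABLE samples (
        ts      REAL,    -- unix timestamp of the sample
        host    TEXT,    -- which machine produced it
        metric  TEXT,    -- e.g. 'iops', 'latency_ms'
        value   REAL
    )
""")

def record(host, metric, value):
    """Store one raw sample; never pre-aggregate at collection time."""
    conn.execute("INSERT INTO samples VALUES (?, ?, ?, ?)",
                 (time.time(), host, metric, value))

# a few fake samples for illustration
for v in (1200, 1350, 900):
    record("node-01", "iops", v)

# aggregation happens at query time, so new questions can always be asked
avg, = conn.execute(
    "SELECT AVG(value) FROM samples WHERE metric = 'iops'").fetchone()
print(f"avg iops: {avg:.0f}")
```

Keeping raw samples rather than pre-computed averages is what lets you ask a question you had not thought of when the data was collected.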
As I said in my first short answer, we use VMware VMmark for this kind of test, but that is only a small part. There are so many tools (just for real-time monitoring and benchmarking) that one person cannot know them all. I am working on some AI programs (Bayesian networks for fault detection, evolutionary algorithms for workload redistribution ...) to get better control over all of this.
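To make the Bayesian-network idea concrete, here is a toy naive-Bayes fault ranking: given observed symptoms, rank fault hypotheses by posterior probability. All priors and conditional probabilities below are invented numbers for illustration, not from any real model:

```python
# Prior probability of each fault hypothesis (illustrative assumptions)
priors = {"disk_failure": 0.01, "network_issue": 0.05, "healthy": 0.94}

# P(symptom observed | fault), for two boolean symptoms
likelihood = {
    "disk_failure":  {"high_latency": 0.9,  "packet_loss": 0.1},
    "network_issue": {"high_latency": 0.6,  "packet_loss": 0.8},
    "healthy":       {"high_latency": 0.05, "packet_loss": 0.01},
}

def posterior(observed):
    """observed: dict symptom -> bool. Returns normalized P(fault | evidence)."""
    scores = {}
    for fault, prior in priors.items():
        p = prior
        for symptom, seen in observed.items():
            ps = likelihood[fault][symptom]
            p *= ps if seen else (1 - ps)   # naive independence assumption
        scores[fault] = p
    total = sum(scores.values())
    return {f: s / total for f, s in scores.items()}

post = posterior({"high_latency": True, "packet_loss": True})
best = max(post, key=post.get)
print(best, round(post[best], 3))
```

A real fault-detection network would have many more nodes and learned (not guessed) probabilities, but the ranking mechanism is the same.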
Just to tease you: do you expect your benchmark to cover the moment when you install a new client, migrate the storage of two others, and execute the latter's emergency plan, all at the same time?
A proper benchmark should cover that many cases. Today the cloud must absorb real-world complexity, every chaotic event; nothing should disrupt the service. So simply stating "this is the benchmark" is quite difficult.
(feeding the CMDB alone is a problem in itself)
As a customer
Yep :-) I am also a client of cloud providers, as everyone will be in the near future. A little background: OpenStack was initially released by organizations with very specific needs (just consider that the "Compute" part of the OpenStack API has nothing to do with the shared / cluster processing that something like LHC computing consumes). So what counts as a regular website? YouTube? Amazon? Even something as simple as "a fully static HTML site" can hardly be used to compare cloud solutions.
This week I am also working on translating the vCloud API to OpenStack (for fun); vCloud is clearly specified, with a lot of exposed objects, but even so we cover only a small fraction of application-management needs.
So how could a client compare two cloud solutions? In fact, before trying them with his own workload, he cannot. That is why customers come to us and ask what we use and how, about our processes ... In the end, the sales teams work for months, for free, just to onboard the client and figure out how we need to reconfigure our clouds for his applications. Very few clients know how much CPU / RAM / disk / IOPS they use; some of them buy dedicated resources (which, being dedicated, we cannot share with another client) that they will never use.
For a regular website, then, any comparison tool should do the job. If you want to play, you can use "internal" tools like SwiftStack and Tempest to get some kind of feedback, but you must first define what normal use of the website looks like. If you look at OpenStack products, you should also look at the wiki. But proving "more than A, faster than B" under conditions you set yourself will be almost impossible as a client.
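For the "regular website" case, a comparison ultimately boils down to issuing a realistic request mix against the site hosted on each cloud and comparing latency distributions. A stdlib-only sketch (the URL and concurrency level are placeholders; real tools like ab, wrk, or Tempest do this far more rigorously):

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from statistics import median

def fetch(url):
    """Return the latency in seconds of one GET request."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

def measure(url, requests=50, concurrency=8):
    """Fire `requests` GETs with `concurrency` workers; return all latencies."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return list(pool.map(fetch, [url] * requests))

# Example usage (point at the site hosted on the cloud under test):
#   latencies = measure("http://example.test/", requests=50)
#   print(f"median latency: {median(latencies) * 1000:.1f} ms")
```

Comparing the two clouds then means comparing the two latency distributions (median, tail percentiles), not a single number; and the request mix has to reflect what "normal use" of your site actually is.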
I hope this explains why no single "client" has had an answer to your question so far, even though the question is vital for many aspects of advertising / industry / ecology.