Management resists, the guerrilla planner retreats.
Management dithers, the guerrilla planner proposes.
Management relents, the guerrilla planner promotes.
Management retreats, the guerrilla planner pursues.
Copying someone else's apparent success is like cheating on a test. You may make the grade but how far is the bluff going to take you?
Translation: Vendors, listen up. We need backdoors and peepholes so we can determine how resources are actually being consumed. Corollary: It's better for business if we can manage it properly.
Then explain to me the multi-billion-dollar dietary-supplements industry! It's not what you sell, but how you sell it.
Capacity planning techniques, such as the universal scalability model (in Sect. 3), help us to describe and predict these nonlinearities.
"I can understand people being worked up about safety and quality with the welds," said Steve Heminger, executive director ... "But we're concerned about being on schedule because we are racing against the next earthquake."Although this is not an IT manager, the point still applies. It is a quote from an executive manager for the new Bay Bridge currently being constructed between Oakland and San Francisco. Management threw out the independent assessement of the welds in order to stay on schedule.
Until you read and heed this statement, you will probably have a very frustrating time getting your perforance management ideas across to managment.
Capacity management can rightly be regarded as just a subset of systems management, but the infrastructure requirements for successful capacity planning (both the tools and knowledgeable humans to use them) are necessarily out of proportion with the requirements for simpler systems management tasks like software distribution, security, backup, etc. It's self-defeating to try doing capacity planning on the cheap.
Think about it. Performance analysis is a lot like a medical examination, and medical Expert Systems were heavily touted in the mid-1980s. You don't hear about them anymore. And you know that if it worked, HMOs would be all over it. It's a laudable goal, but if you lose your job, it won't be because of some expert performance robot.
Today, there is a serious need to squeeze more out of your current capital equipment.
Planning means making predictions. Even a wrong prediction is useful. It means either (i) the understanding behind your prediction is wrong and needs to be corrected, or (ii) the measurement process is broken somewhere and needs to be fixed. Start with a SWAG. Next time, try a G. If you aren't making iterative predictions throughout a project life-cycle, you will only know things are amiss when it's too late!
My response to the oft-heard platitude: "We don't need no stinkin' capacity planning. We'll just throw more cheap iron at it!" The capacity part is easy. It's the planning part that's subtle.
If the network is out of bandwidth or has interminable latencies, fix it! Then we'll talk performance of your application.
In case you're wondering, those are REAL data and the axes are correctly labeled. I'll let you ponder why these measurements are so broken that they're not even wrong! Only if you don't understand basic queueing theory would you press on regardless (which the original engineer did).
Most people remain blissfully unaware of the fact that ALL measurements come with errors, both systematic and random. An important capacity planning task is to determine and track the magnitude of the errors in your performance data. Every datum should come with a "±" attached (which will then force you to put a number after it).
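As a minimal illustration of attaching a "±" to a datum (the throughput numbers here are hypothetical), the following Python sketch reports the sample mean of repeated measurements together with its standard error, which captures the random-error component:

```python
import statistics

# Hypothetical repeated throughput measurements (requests/sec) of the
# same workload; the spread reflects random measurement error.
throughput = [486.2, 491.7, 478.9, 495.3, 482.1, 489.6]

mean = statistics.mean(throughput)
# Standard error of the mean estimates the random-error magnitude.
stderr = statistics.stdev(throughput) / len(throughput) ** 0.5

# Report the datum with its '±' attached, as the rule demands.
print(f"X = {mean:.1f} ± {stderr:.1f} requests/sec")
```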
Data comes from the Devil, only models come from God.
Western culture too often glorifies hours clocked as productive work. If you don't take time off to come up for air and reflect on what you're doing, how are you going to know when you're wrong?
I use it almost daily to cross-check that throughput and delay data are consistent, no matter whether those data come from measurements or models. More details about Little's law can be found in Chap. 2 of Analyzing Computer System Performance with Perl::PDQ. Another use of Little's law is calculating service times, which are notoriously difficult to measure directly. See the Rules of Thumb in Sect. 2.
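Here is a minimal sketch of that daily cross-check, in Python with made-up numbers: Little's law Q = X R says the measured throughput, residence time, and mean number in system must be mutually consistent, so any independent measurement of Q can be checked against the product X R.

```python
def littles_law_check(X, R, Q_measured, tolerance=0.10):
    """Cross-check a measured queue length against Little's law Q = X * R.

    X: throughput (requests/sec), R: residence time (sec),
    Q_measured: independently measured mean number in the system.
    Returns True if the data are consistent within the given tolerance.
    """
    Q_expected = X * R
    return abs(Q_expected - Q_measured) <= tolerance * Q_expected

# Hypothetical monitoring data: 120 req/s at 0.25 s each should imply
# about 30 requests in flight on average.
print(littles_law_check(X=120.0, R=0.25, Q_measured=31.0))  # True
print(littles_law_check(X=120.0, R=0.25, Q_measured=60.0))  # False: suspect data
```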
The bigger the symmetric multiprocessor (SMP) configuration you purchase, the busier you need to run it. But only to the point where the average run-queue begins to grow. Any busier and the user's response time will rapidly start to climb through the roof.
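One back-of-the-envelope way to see where that knee sits for an m-way SMP is the standard M/M/m queue. The Python sketch below (textbook Erlang-C formula, hypothetical 10 ms service time, 16 processors) shows response time hugging the service time until utilization approaches the knee, then climbing steeply as the run-queue grows:

```python
from math import factorial

def mmm_response_time(m, rho, S):
    """Mean response time of an M/M/m queue.

    m: number of processors, rho: per-processor utilization (0 < rho < 1),
    S: mean service time. Uses the standard Erlang-C waiting probability.
    """
    a = m * rho  # offered load in Erlangs
    # Erlang C: probability that an arriving request has to wait.
    denom = sum(a**k / factorial(k) for k in range(m)) + \
            a**m / (factorial(m) * (1 - rho))
    erlang_c = (a**m / (factorial(m) * (1 - rho))) / denom
    return S + erlang_c * S / (m * (1 - rho))

S = 0.010  # hypothetical 10 ms service time
for rho in (0.5, 0.7, 0.8, 0.9, 0.95):
    print(f"rho={rho:.2f}  R={1000 * mmm_response_time(16, rho, S):.2f} ms")
# R stays close to S until the run-queue starts to grow, then takes off.
```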
If your audience does not get the point, or things go into the weeds because you didn't expend enough thought on a visual, you just wasted a lot more than your presentation time-slot.
Control over the performance of hardware resources, e.g., CPUs and disks, is progressively being eroded as these things simply become commodity black boxes, viz. multicore processors and disk arrays. This situation will only be exacerbated with the advent of Internet-based application services. Software developers will therefore have to understand more about the performance and capacity planning implications of their designs running on these black boxes. (See Sect. 3.)
A performance model should be as simple as possible, but no simpler! Someone else said:
"A designer knows that he has achieved perfection not when there is nothing left to add, but when there is nothing left to take away." -Antoine de Saint-Exupéry
I now tell people in my Guerrilla classes that, despite the fact that I repeat this rule of thumb several times, they will still throw the kitchen sink into their performance models; at least early on, as they first learn how to create them. It's almost axiomatic: the more you know about the system architecture, the more detail you will try to throw into the model. The goal, in fact, is the opposite.
The Tube map is pure abstraction that has very little to do with the physical railway system. It encodes only sufficient detail to enable transit on the underground from point A to point B. It does not include a lot of irrelevant details such as altitude of the stations, or even their actual geographical proximity. A performance model is a similar kind of abstraction.
Despite several attempts, the original Tube map has hardly been improved upon since its conception in 1933. Apparently, it already met the requirement of being as simple as possible, but no simpler. The fact that it was designed by an electrical draughtsman probably helped.
As an example, the principle of operation for a time-share computer system can be stated as: Time-share gives every user the illusion that they are the ONLY user active on the system. All the thousands of lines of code in the operating system, which support time-slicing, priority queues, etc., are there merely to support that illusion.
You, as the performance analyst or planner, only have to shine the light in the right place and then stand back while others flock to fix it.
One place to start constructing a PDQ model is by drawing a functional block diagram. The objective is to identify where time is spent at each stage in processing the workload of interest. Ultimately, each functional block is converted to a queueing subsystem like those shown above. This includes the ability to distinguish sequential and parallel processing. Other diagrammatic techniques, e.g., UML diagrams, may also be useful, but I don't understand that stuff and have never tried it. See Chap. 6 "Pretty Damn Quick (PDQ) - A Slow Introduction" of Analyzing Computer System Performance with Perl::PDQ.
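To make the block-diagram-to-queueing-model step concrete without requiring the PDQ library itself, here is a self-contained Python sketch: each functional block becomes an open M/M/1 queueing stage, and the per-stage residence times are summed to give the end-to-end response time. The stage names, service times, and arrival rate are purely hypothetical; in practice you would express the same structure as PDQ nodes.

```python
# Hypothetical functional blocks mapped to open M/M/1 queueing stages:
# (name, mean service time in seconds).
stages = [
    ("web_server", 0.002),
    ("app_server", 0.010),
    ("database",   0.005),
]

arrival_rate = 50.0  # hypothetical requests/sec flowing through every stage

total_R = 0.0
for name, S in stages:
    rho = arrival_rate * S          # utilization law: rho = lambda * S
    assert rho < 1, f"{name} is saturated"
    R = S / (1 - rho)               # M/M/1 residence time at this stage
    total_R += R
    print(f"{name:11s} rho={rho:.2f}  R={1000 * R:.2f} ms")

print(f"end-to-end response time: {1000 * total_R:.2f} ms")
```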
Take Little's law Q = X R, for example. It is a performance model; albeit a simple equation or operational law, it is a model nonetheless. All the variables on the RIGHT side of the equation (X and R) are INPUTS, and the single variable on the LEFT is the OUTPUT. A more detailed discussion of this point is presented in Chap. 6 "Pretty Damn Quick (PDQ) - A Slow Introduction" of Analyzing Computer System Performance with Perl::PDQ.
If the measurements of the real system do not include the service time for a queueing node that you think ought to be in your PDQ model, then that PDQ node cannot be defined.
Suppose, for example, you had requests coming into an HTTP server and you could measure its CPU utilization with some UNIX tool like vmstat, and you would like to know the service time of the HTTP Gets. UNIX won't tell you, but you can use Little's law (U = X S) to figure it out. If you can measure the arrival rate of requests in Gets/sec (X) and the CPU %utilization (U), then the average service time (S) for a Get is easily calculated from the quotient U/X.
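As a worked example with made-up numbers, the calculation is a one-liner in Python:

```python
# Hypothetical measurements for an HTTP server:
X = 120.0   # Gets/sec, e.g. counted from the access log
U = 0.30    # CPU utilization (30%), e.g. read from vmstat

# Utilization form of Little's law: U = X * S, therefore S = U / X.
S = U / X
print(f"average CPU service time per Get: {1000 * S:.2f} ms")  # 2.50 ms
```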
An open queueing model assumes an infinite population of requesters initiating requests at an arrival rate λ (lambda). In a closed model, λ is approximated by the ratio N/Z. Treat the think time Z as a free parameter, and choose a value (by trial and error) that keeps N/Z constant as you make N larger in your PDQ model. Eventually, at some value of N, the OUTPUTS of both the closed and open models will agree to some reasonable approximation.
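The following self-contained Python sketch illustrates the idea using exact MVA for a closed interactive model with a single queueing center (all numbers hypothetical, not PDQ output): holding N/Z fixed while increasing N drives the closed model's throughput toward the open model's arrival rate λ = N/Z.

```python
def closed_throughput(N, Z, S):
    """Exact MVA for a closed interactive model with one queueing center.

    N: number of users, Z: think time (sec), S: service time (sec).
    Returns the system throughput X(N).
    """
    Q = 0.0
    for n in range(1, N + 1):
        R = S * (1 + Q)        # residence time at the queueing center
        X = n / (R + Z)        # response-time law
        Q = X * R              # Little's law at the center
    return X

S = 0.01          # hypothetical 10 ms service time
ratio = 50.0      # hold lambda = N/Z fixed at 50 requests/sec
for N in (10, 50, 200, 1000):
    Z = N / ratio
    print(f"N={N:5d}  Z={Z:6.1f}s  X={closed_throughput(N, Z, S):.2f}/s")
# As N grows with N/Z held constant, X(N) approaches the open-model
# arrival rate of 50/s, so the two models' outputs converge.
```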
Apart from making an unwieldy PDQ report to read, generally you are only interested in the interaction of 2 workloads (pairwise comparison). Everything else goes in the third (AKA "the background"). If you can't see how to do this, you're probably not ready to create the PDQ model.
[Figure: the four scalability regimes]
Equal bang for the buck: α = 0, β = 0
Cost of sharing resources: α > 0, β = 0
Diminishing returns at higher loads: α > 0, β = 0
Negative return on investment: α > 0, β > 0
NOTE: The objective of using eqn.(1) is NOT to produce a curve that passes through every data point. That's called curve fitting and that's what graphics artists do with splines. As von Neumann said, "Give me 4 parameters and I'll fit an elephant. Give me 5 and I'll make its trunk wiggle!" (At least I only have 2)
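For reference, eqn.(1) is the universal scalability law C(N) = N / (1 + α(N−1) + βN(N−1)). The Python sketch below simply evaluates it for assumed parameter values (not fitted to any real data) to show the regimes in the table above, including the retrograde turn-over when β > 0:

```python
def usl_capacity(N, alpha, beta):
    """Universal scalability law, eqn.(1): relative capacity C(N).

    alpha: contention (serialization) parameter,
    beta:  coherency (crosstalk) parameter.
    """
    return N / (1 + alpha * (N - 1) + beta * N * (N - 1))

# Hypothetical parameter values standing in for a regression on measured data.
alpha, beta = 0.03, 0.0005
for N in (1, 4, 16, 32, 64, 128):
    print(f"N={N:4d}  C(N)={usl_capacity(N, alpha, beta):6.2f}")
# With beta > 0 the curve eventually peaks and turns over: the
# "negative return on investment" regime in the table above.
```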
Amdahl's law for parallel speedup is equivalent to the synchronous queueing bound on throughput in the repairman model of a multiprocessor. That result was first published on arXiv in 2002. Both Amdahl's law and my Universal Scalability Law belong to a class of mathematical functions, called Rational Functions, which I have been able to show mathematically to possess intimate connections with a load-dependent repairman model in queueing theory. More recently, this result has also been confirmed "experimentally" using simulations developed by my colleague Jim Holtman. So the whole approach to quantifying scalability is now placed on a fundamentally sound physical footing.
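A quick numeric check of the algebraic half of this connection (the queueing-bound equivalence itself is in the references above): with β = 0, eqn.(1) reduces to Amdahl's law written in terms of the serial fraction σ, i.e. speedup S(p) = p / (1 + σ(p−1)). The sketch below, with a hypothetical σ, confirms the two expressions coincide.

```python
def amdahl_speedup(p, sigma):
    """Amdahl's law with serial fraction sigma: S(p) = 1 / (sigma + (1 - sigma)/p)."""
    return 1.0 / (sigma + (1.0 - sigma) / p)

def usl_capacity(p, alpha, beta):
    """Universal scalability law, eqn.(1)."""
    return p / (1 + alpha * (p - 1) + beta * p * (p - 1))

sigma = 0.05  # hypothetical serial fraction
for p in (2, 8, 32, 128):
    a = amdahl_speedup(p, sigma)
    u = usl_capacity(p, sigma, 0.0)   # beta = 0, alpha = sigma
    print(f"p={p:4d}  Amdahl={a:7.3f}  USL(beta=0)={u:7.3f}")
# The two columns are identical: Amdahl's law is the zero-coherency
# special case of eqn.(1).
```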