### Data tiering: a simple performance analysis

A certain transaction uses 1 ms of CPU time and 20 ms of I/O time. With a think time of 10 s the system supports 550 users with a response time R below 1 s. Now the same system is equipped with faster storage, on which part of the data is placed. When this data […]
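
The 550-user figure can be reproduced with the interactive response time law, R = N/X − Z, assuming the 20 ms I/O device is the bottleneck (a sketch with the excerpt's numbers, not the post's full derivation):

```python
# Interactive response time law: R = N/X - Z, so at the response-time
# limit the supported population is N = X * (R + Z).
# Assumption: the 20 ms I/O device is the bottleneck, capping throughput
# at X_max = 1 / 0.020 = 50 transactions per second.

cpu_demand = 0.001   # s of CPU time per transaction
io_demand = 0.020    # s of I/O time per transaction
think_time = 10.0    # s, think time Z
r_limit = 1.0        # s, response time target

x_max = 1.0 / max(cpu_demand, io_demand)   # bottleneck throughput (tx/s)
n_max = x_max * (r_limit + think_time)     # users supported at R = 1 s

print(x_max)  # 50.0
print(n_max)  # 550.0
```

The bound matches the excerpt: 50 tx/s times (1 s + 10 s) gives 550 users.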

### The scalability of the software

The development team at the ABC company has just built a five-star transaction program. The program code contains a critical region. Basic performance tests with a few users show a total execution time of 1 s, with a residence time in the critical region of 0.05 s. These numbers are considered […]
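
A critical region that every transaction must traverse serially caps throughput no matter how many workers run in parallel. A minimal sketch of that serialization bound, using the excerpt's numbers:

```python
# Serialization bound: if each transaction must hold a critical region
# for 0.05 s, at most 1 / 0.05 = 20 transactions per second can ever
# complete, regardless of the number of parallel workers.

total_time = 1.0       # s, single-user total execution time
critical_time = 0.05   # s spent inside the critical region

x_serial_cap = 1.0 / critical_time   # hard throughput ceiling (tx/s)
x_single_user = 1.0 / total_time     # throughput with one user (tx/s)

# At low load the ceiling looks far away: 1 tx/s versus a 20 tx/s cap.
print(x_serial_cap)   # 20.0
print(x_single_user)  # 1.0
```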

### Phases of the SAPS Benchmark

The SAPS benchmark fits very closely the model analyzed in the post “Phases of the Response Time”. In essence, the benchmark is performed by progressively increasing the customer population and monitoring the response time. When the response time reaches 1 second, the measured throughput, expressed in dialog steps per minute, is the SAPS value. […]
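
The measurement loop can be sketched as a thought experiment: grow the population, estimate the response time with single-queue Mean Value Analysis (MVA), and stop at the 1-second criterion. The service demand and think time below are illustrative assumptions, not SAP's actual figures:

```python
# Thought-experiment version of the benchmark loop: increase the user
# population N and evaluate R with closed single-queue MVA until R >= 1 s,
# then report the throughput in benchmark style (per minute).

def mva(n_users, demand, think):
    """Closed MVA: one queueing station plus a think-time (delay) station."""
    queue = 0.0
    for n in range(1, n_users + 1):
        resp = demand * (1.0 + queue)   # response time at population n
        thru = n / (resp + think)       # throughput (tx/s)
        queue = thru * resp             # mean queue length (Little's law)
    return resp, thru

demand, think = 0.1, 10.0               # s (assumed, illustrative values)
n = 1
while True:
    resp, thru = mva(n, demand, think)
    if resp >= 1.0:                     # the 1-second criterion
        break
    n += 1

steps_per_minute = thru * 60.0          # throughput expressed per minute
```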

### Phases of the Response Time

Let us consider a fixed (closed) population of users interacting with a service center, as shown in the picture below. At t=0, a user arrives at the service center. From t=0 to t=W the user waits in the queue if all the servers (or workers) are busy at t=0; W is the wait time. […]
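
The decomposition described above gives R = W + S: a customer first queues for W, then is served for S. A minimal sketch at a single FIFO server with a fixed service time and an assumed arrival pattern:

```python
# R = W + S at a single FIFO server with a fixed service time: each
# arrival waits until the server frees up (W), then is served for S.

service = 2.0                       # s, fixed service time S
arrivals = [0.0, 0.5, 1.0, 6.0]     # arrival instants (assumed workload)

server_free = 0.0                   # time at which the server next idles
for t in arrivals:
    start = max(t, server_free)     # queue if the server is still busy
    wait = start - t                # W: time spent in the queue
    server_free = start + service
    response = wait + service       # R = W + S
    print(t, wait, response)        # first line: 0.0 0.0 2.0
```

The first and last customers find the server idle (W = 0, so R = S); the middle two queue behind their predecessors and see R > S.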

### More or Faster?

The More or Faster dilemma. What would you choose: MORE but slower workers, or fewer but FASTER workers? Let’s explore a little bit to find an answer. Hairdresser’s: you have the opportunity to choose between two hairdresser’s shops that have advertised, and it’s true indeed, that they are capable of 4 haircuts […]
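
The dilemma can be framed as an M/M/c comparison (an illustrative model, not necessarily the post's): one fast server versus several slow ones with the same total capacity of 4 haircuts per hour, at an assumed arrival rate:

```python
# Mean response time of an M/M/c queue via the Erlang C formula, used to
# compare one fast server against four slow ones of equal total capacity.

from math import factorial

def mmc_response_time(arrival, service_rate, servers):
    """Mean response time R = W + S of an M/M/c queue."""
    a = arrival / service_rate                      # offered load (erlangs)
    erlang_c_top = (a ** servers / factorial(servers)) * (
        servers / (servers - a))
    denom = sum(a ** k / factorial(k) for k in range(servers)) + erlang_c_top
    p_wait = erlang_c_top / denom                   # probability of queueing
    wait = p_wait / (servers * service_rate - arrival)
    return wait + 1.0 / service_rate                # R = W + S

arrival = 2.0                                       # customers per hour
fast = mmc_response_time(arrival, 4.0, 1)           # one 4-cuts/hour barber
slow = mmc_response_time(arrival, 1.0, 4)           # four 1-cut/hour barbers
print(fast < slow)  # True: the single fast server responds sooner here
```

At this load the fast shop wins (0.5 h versus roughly 1.09 h), mostly because its service time alone is a quarter of the slow shops'.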

### How much do I have to wait?

Wait time is lost time. I suppose everyone agrees with that sentence: nobody likes or wants to wait. From the customer’s (requester’s) point of view, wait time is, in general, badly tolerated. But the service provider’s point of view is different: in general it would not provide ample capacity […]

### Introducing the Response Time

The response time is the loser in the set of all performance metrics. It is systematically ignored in almost every sizing, it is neither measured nor accounted for in most cases, and it is only marginally referenced in certain benchmark definitions. Clearly, the throughput is the winner: SAPS, IOPS, and transactions per second are throughput metrics, […]

### How much capacity does a virtual CPU guarantee?

The quick answer to the question “how much capacity does a virtual CPU guarantee?” is: as much as one core can deliver (PowerVM VP), or as much as one thread can deliver (ESX vCPU). This is the best case, and so it has been in the two previous entries (“Don’t put in the same bag Xeon and […]

### More on ESX vCPU versus PowerVM VP

Let’s explore further the singularities of virtual CPUs (vCPUs in ESX parlance, virtual processors in PowerVM parlance). In particular, we will try to determine the relationship between the throughput delivered by a single, solitary virtual machine and the number of assigned vCPUs/VPs. Once more we will run a thought SAPS benchmark with our already […]