Tuesday, July 29, 2008

Old approach

Many organizations have used a "fix-it-later" approach to performance. This approach advocated concentrating on correctness and deferring consideration of performance until the testing phase. Performance problems detected at that stage were then corrected by adding hardware, tuning the software (usually in crisis mode), or both.

Because it is based on several performance myths, this approach can be dangerous. These myths include:

* Performance problems are rare: The reality is that the number, size, and complexity of systems have increased dramatically, and today's developers are less expert at dealing with performance than their predecessors. As a result, performance problems are all too common.
* Hardware is fast and inexpensive: The reality is that processor speeds have increased dramatically, but networks are far slower. Furthermore, software threading issues cause performance problems despite the availability of hardware resources. No one has an unlimited hardware budget, and some software may require more resources than the hardware technology can provide.
* Responsive software costs too much to build: This is no longer true thanks to software performance engineering (SPE) methods and tools. In fact, the "fix-it-later" approach is likely to have higher costs.
* You can tune it later: This myth is based on the erroneous assumption that performance problems are due to inefficient coding rather than fundamental architectural or design problems. Re-doing a design late in the process is very expensive.

Good definition

Software Performance Engineering (SPE) is a systematic, quantitative approach to the cost-effective development of software systems to meet performance requirements. SPE, a software-oriented approach, focuses on architecture, design, and implementation choices.

SPE gives you the information you need to build software that meets performance requirements on time and within budget.

SOME OF THE BASIC TERMS USED IN PERFORMANCE ENGINEERING

1. Bottleneck

A bottleneck is a point in an application where congestion and delay occur, slowing down the processing of requests and causing users to experience unacceptable service delays.
2. Capacity

The capacity of a system is the total workload it can handle without violating predetermined key performance acceptance criteria.
3. Capacity test

A capacity test complements load testing by determining your server’s ultimate failure point, whereas load testing monitors results at various levels of load and traffic patterns. You perform capacity testing in conjunction with capacity planning, which you use to plan for future growth, such as an increased user base or an increased volume of data. For example, to accommodate future loads, you need to know how many additional resources (such as processor capacity, memory, disk capacity, or network bandwidth) are necessary to support future usage levels. Capacity testing helps you identify a scaling strategy and determine whether you should scale up or scale out.
4. Continuous Integration

Continuous integration is a software engineering term describing a process that completely rebuilds and tests an application frequently. Generally it takes the form of a server process or daemon that monitors a file system or version control system (such as CVS) for changes, automatically runs the build process (e.g., a make script or Ant-style build script), and then runs the test scripts. Other approaches use nightly or hourly builds.
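
To make the process concrete, here is a minimal sketch in Java (an illustration, not a production CI server) of the daemon style described above: it polls a working copy and reruns the build-and-test script whenever something changes. The directory path and the "ant test" command are assumptions for the example; a real setup would watch the version control system itself.

    import java.io.BufferedReader;
    import java.io.File;
    import java.io.InputStreamReader;

    public class TinyCiDaemon {
        public static void main(String[] args) throws Exception {
            File workingCopy = new File("/path/to/working/copy"); // hypothetical path
            long lastSeen = 0;
            while (true) {
                long modified = workingCopy.lastModified();
                if (modified > lastSeen) { // something changed since the last build
                    lastSeen = modified;
                    // Rebuild and run the tests, e.g. via an Ant build script.
                    ProcessBuilder pb = new ProcessBuilder("ant", "test");
                    pb.directory(workingCopy);
                    pb.redirectErrorStream(true);
                    Process build = pb.start();
                    BufferedReader out = new BufferedReader(
                            new InputStreamReader(build.getInputStream()));
                    String line;
                    while ((line = out.readLine()) != null) {
                        System.out.println(line); // relay the build output
                    }
                    build.waitFor();
                }
                Thread.sleep(60 * 1000); // poll once a minute
            }
        }
    }
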
5. Endurance test

An endurance test is a type of performance test focused on determining or validating performance characteristics of the product under test when subjected to workload models and load volumes anticipated during production operations over an extended period of time. Endurance testing is a subset of load testing.
6. Horizontal scaling

Adding more computer systems (servers) to the environment. See also: Vertical scaling
7. Instrumentation

Inserting (either statically or dynamically) code around program events in order to capture performance data. See also: Profiling
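
Below is a minimal sketch of static instrumentation in Java: timing probes inserted by hand around the event of interest. The processOrder() method is hypothetical, standing in for whatever program event you want to measure.

    public class InstrumentationExample {
        public static void main(String[] args) {
            long start = System.nanoTime();                // probe before the event
            processOrder();                                // the instrumented event
            long elapsedNanos = System.nanoTime() - start; // probe after the event
            System.out.println("processOrder took " + (elapsedNanos / 1000000) + " ms");
        }

        private static void processOrder() {
            // Placeholder for real work.
            try { Thread.sleep(50); } catch (InterruptedException ignored) { }
        }
    }

Dynamic instrumentation achieves the same effect, but the probes are injected at load time or run time (for example, by a Java agent) rather than written into the source.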

8. Latency

In general terms, latency is the time delay between the moment something is initiated and the moment one of its effects begins. In software performance, latency is most often discussed in the contexts of server latency, network latency, and disk latency.
9. Load Test

A load test is a performance test focused on determining or validating performance characteristics of the product under test when subjected to workload models and load volumes anticipated during production operations.
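
As an illustration, the following Java sketch generates load with a fixed pool of "virtual users" that issue HTTP requests concurrently and reports the average response time. The URL, user count, and request volume are made-up assumptions; a real load test models production workloads and typically uses a dedicated tool.

    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicLong;

    public class TinyLoadTest {
        public static void main(String[] args) throws Exception {
            final int users = 10;            // hypothetical virtual-user count
            final int requestsPerUser = 100; // hypothetical volume per user
            final AtomicLong totalMillis = new AtomicLong();
            ExecutorService pool = Executors.newFixedThreadPool(users);
            for (int u = 0; u < users; u++) {
                pool.execute(new Runnable() {
                    public void run() {
                        for (int i = 0; i < requestsPerUser; i++) {
                            try {
                                long start = System.currentTimeMillis();
                                HttpURLConnection conn = (HttpURLConnection)
                                        new URL("http://localhost:8080/").openConnection();
                                conn.getResponseCode(); // block until the server responds
                                conn.disconnect();
                                totalMillis.addAndGet(System.currentTimeMillis() - start);
                            } catch (Exception e) {
                                e.printStackTrace();
                            }
                        }
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(10, TimeUnit.MINUTES);
            System.out.println("average response time: "
                    + totalMillis.get() / (users * requestsPerUser) + " ms");
        }
    }
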
10. Performance

Performance refers to information regarding your application’s response times, throughput, and resource utilization levels.
11. Performance test

A performance test is a technical investigation done to determine or validate the speed, scalability, and/or stability characteristics of the product under test. Performance testing is the superset that contains all of the other test subcategories described here, such as load, endurance, spike, stress, and capacity testing.
12. Profiling

The insertion of tooling into program code that traces events and captures metrics as the program is running. Examples of Java profiling tools are JProbe, JXInsight, and dynaTrace. See also: Instrumentation
13. Response time

Response time is the amount of time that it takes for a server to respond to a request.
14. Saturation

Saturation refers to the point at which a resource has reached full utilization.
15. Scalability

Scalability refers to the ability to handle additional workload, without adversely affecting performance, by adding resources such as CPU, memory, and storage capacity.
16. SLA

Service Level Agreement. In the context of performance testing, an SLA often specifies the required level of performance (response time), availability (uptime), or both.
17. Spike test

A spike test is a type of performance test focused on determining or validating performance characteristics of the product under test when subjected to workload models and load volumes that repeatedly increase beyond anticipated production operations for short periods of time. Spike testing is a subset of stress testing.
18. Stress test

A stress test is a type of performance test designed to evaluate an application’s behavior when it is pushed beyond normal or peak load conditions. The goal of stress testing is to reveal application bugs that surface only under high load conditions. These bugs can include such things as synchronization issues, race conditions, and memory leaks. Stress testing enables you to identify your application’s weak points, and shows how the application behaves under extreme load conditions.
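
As a tiny illustration of such a bug, the following Java sketch (a made-up example, not from any particular application) shows a race condition that stays hidden at low load: an unsynchronized counter loses updates once many threads increment it at the same time.

    public class RaceConditionDemo {
        private static int counter = 0; // not thread-safe: ++ is read-modify-write

        public static void main(String[] args) throws InterruptedException {
            Thread[] threads = new Thread[50];
            for (int i = 0; i < threads.length; i++) {
                threads[i] = new Thread(new Runnable() {
                    public void run() {
                        for (int j = 0; j < 100000; j++) {
                            counter++; // lost updates occur when threads interleave here
                        }
                    }
                });
                threads[i].start();
            }
            for (Thread t : threads) {
                t.join();
            }
            // Expected 5,000,000 (50 threads x 100,000); under concurrent load
            // the printed value is usually lower because updates are lost.
            System.out.println("counter = " + counter);
        }
    }

A single-user functional test would almost always print the expected value; only stress-level concurrency makes the lost updates visible.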
19. Throughput

Throughput is the number of units of work that can be handled per unit of time; for instance, requests per second, calls per day, hits per second, reports per year, etc.
20. Utilization

In the context of performance testing, utilization is the percentage of time that a resource is busy servicing user requests. The remaining percentage of time is considered idle time.
21. Vertical scaling

Adding more (or faster) CPUs within the same computer system. See also: Horizontal scaling

I work as a performance engineer at PI Corp (www.piworx.com). I will use this blog as a platform to share my views, learning, and contributions toward performance engineering.

Here we go