Monday, 14 April 2014

MEMORY LEAKS IN PERFORMANCE TESTING (LoadRunner Tool)

What is a Memory Leak:

        In computer science, a memory leak (or leakage, in this context) occurs when a computer program consumes memory but is unable to release it back to the operating system. A memory leak has symptoms similar to a number of other problems and generally can only be diagnosed by a programmer with access to the program source code; however, many people refer to any unwanted increase in memory usage as a memory leak, though this is not strictly accurate.
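As a hypothetical illustration (the class and method names below are invented, not taken from any real application), the following Java sketch shows the managed-code version of the problem: objects are added to a long-lived static collection and never removed, so the garbage collector can never reclaim them even though the program no longer needs them.

    import java.util.ArrayList;
    import java.util.List;

    public class LeakDemo {
        // Long-lived static collection: everything added here stays reachable
        // for the life of the process, so the GC can never reclaim it.
        private static final List<byte[]> processed = new ArrayList<byte[]>();

        static void handleRequest() {
            byte[] buffer = new byte[64 * 1024];   // work buffer for one request
            // ... use the buffer for this request ...
            processed.add(buffer);                 // bookkeeping that is never cleaned up
            // The buffer is no longer needed, but it is still referenced from
            // the static list, so its memory is never released: a leak.
        }

        public static void main(String[] args) throws InterruptedException {
            while (true) {
                handleRequest();
                Thread.sleep(10);                  // memory use climbs steadily over time
            }
        }
    }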

During a test run we monitor memory for any consistent increase, and also watch for any degradation in CPU performance. Is it a memory leak?
       Note that constantly increasing memory usage is not necessarily evidence of a memory leak. Some applications will store ever-increasing amounts of information in memory (e.g. as a cache). If the cache can grow so large as to cause problems, this may be a programming or design error, but it is not a memory leak because the information remains nominally in use. In other cases, programs may require an unreasonably large amount of memory because the programmer has assumed memory is always sufficient for a particular task; for example, a graphics file processor might start by reading the entire contents of an image file and storing it all in memory, which is not viable when a very large image exceeds the available memory.
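By contrast, here is a hypothetical unbounded cache (again, the names are invented for illustration) that shows the same "constantly increasing memory" symptom without being a leak in the strict sense: every entry remains reachable and nominally useful, and the real fix is a size bound or eviction policy rather than plugging a leak.

    import java.util.HashMap;
    import java.util.Map;

    public class UnboundedCache {
        // Results are cached forever. Each entry is still nominally in use
        // (it may be requested again), so this is not strictly a leak, but
        // without a size limit or eviction policy memory use grows without bound.
        private static final Map<String, String> cache = new HashMap<String, String>();

        static String lookup(String key) {
            String value = cache.get(key);
            if (value == null) {
                value = expensiveComputation(key);
                cache.put(key, value);             // stored for the life of the process
            }
            return value;
        }

        private static String expensiveComputation(String key) {
            return key.toUpperCase();              // stand-in for real work
        }
    }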

      To put it another way, a memory leak arises from a particular kind of programming error, and without access to the program code, someone seeing symptoms can only guess that there might be a memory leak. It would be better to use terms such as "constantly increasing memory use" where no such inside knowledge exists.

      The term "memory leak" is evocative and non-programmers especially can become so attached to the term as to use it for completely unrelated memory issues such as buffer overrun.

Checking for Leaks:

There are a number of telltale signs that an application is leaking memory.
  • Maybe it's throwing an OutOfMemoryException.
  • Maybe its responsiveness is growing very sluggish because it started swapping virtual memory to disk.
  • Maybe memory use is gradually (or not so gradually) increasing in Task Manager.
When a memory leak is suspected, you must first determine what kind of memory is leaking, as that will allow you to focus your debugging efforts in the correct area.

Use PerfMon to examine the following performance counters for the application:

Process/Private Bytes:
      The Process/Private Bytes counter reports all memory that is exclusively allocated for a process and can't be shared with other processes on the system.

Test: If Process/Private Bytes is increasing, but # Bytes in All Heaps remains stable, unmanaged memory is leaking.

 .NET CLR LocksAndThreads/# of current logical Threads:
The .NET CLR LocksAndThreads/# of current logical Threads counter reports the number of logical threads in an AppDomain.

Test:
If an application's logical thread count is increasing unexpectedly, thread stacks are leaking.


 .NET CLR Memory/# Bytes in All Heaps:
      The .NET CLR Memory/# Bytes in All Heaps counter reports the combined total size of the Gen0, Gen1, Gen2, and large object heaps.

Test:
If both Process/Private Bytes and .NET CLR Memory/# Bytes in All Heaps are increasing, memory in the managed heaps is building up.

Test:
By default, the stack size on modern desktop and server versions of Windows is 1 MB. So if an application's Process/Private Bytes is periodically jumping in 1 MB increments with a corresponding increase in .NET CLR LocksAndThreads/# of current logical Threads, a thread stack leak is very likely the culprit.

Test:
If total memory use is increasing, but the logical thread count and # Bytes in All Heaps (managed heap memory) are not increasing, there is a leak in the unmanaged heap.
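The counters above are specific to Windows PerfMon and the .NET CLR. On the JVM side, which the alternative method below deals with, a rough analogue of watching managed heap size and thread count can be sampled with the standard java.lang.management API; this is only a monitoring sketch, and the class name is invented for illustration.

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.ThreadMXBean;

    public class HeapAndThreadSampler {
        public static void main(String[] args) throws InterruptedException {
            MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
            ThreadMXBean threads = ManagementFactory.getThreadMXBean();

            // Sample every 10 seconds during a soak test. A heap-used figure
            // and/or thread count that rises steadily across the whole run is
            // the JVM equivalent of the counter trends described above.
            while (true) {
                long heapUsedMb = memory.getHeapMemoryUsage().getUsed() / (1024 * 1024);
                int threadCount = threads.getThreadCount();
                System.out.println("heap used: " + heapUsedMb + " MB, live threads: " + threadCount);
                Thread.sleep(10000);
            }
        }
    }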


Alternative method:
Start by monitoring response times, throughput, total TPS, and so on. Even if you are not monitoring the runtime environment or system resources in the first instance, you should see the impact of a problem here. At this point it may or may not be a memory leak.

Next, look at the memory profile of the server hosting the runtime environment and at the application server logs. If out-of-memory errors are recorded in the logs, it may or may not be a memory leak. Check heap usage and the GC logs: it is likely a memory leak if the heap is full and no memory is being released after GC cycles. If there is enough heap but the JVM is still kicking off GCs to free memory, the PermGen space might be full, or there could be some other reason.

If it is a memory leak, the JVM will be thrashing and hogging the CPU, and you won't see any load on the downstream systems. Plotting a graph from the GC logs will show the heap "troughs" (the memory still in use after each collection) steadily rising.
The above is just one example, and there can be many variations on it. You can simulate a memory leak yourself; a quick search will turn up code to both induce and fix one.
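A minimal, self-contained sketch of such a simulation is below (all names are made up for illustration). The loop keeps adding buffers to a long-lived static list, periodically requests a collection, and prints how much heap survives it; with the leak in place the post-GC figure (the "trough" mentioned above) keeps climbing until OutOfMemoryError, while commenting out the add call keeps it flat. On HotSpot JVMs the GC logs themselves are typically enabled with flags such as -verbose:gc or -Xloggc.

    import java.util.ArrayList;
    import java.util.List;

    public class InducedLeak {
        private static final List<byte[]> leaked = new ArrayList<byte[]>();

        public static void main(String[] args) throws InterruptedException {
            Runtime rt = Runtime.getRuntime();
            for (int i = 0; ; i++) {
                leaked.add(new byte[256 * 1024]);      // leak roughly 256 KB per iteration
                if (i % 20 == 0) {
                    System.gc();                       // request a collection (best effort)
                    long usedAfterGc = rt.totalMemory() - rt.freeMemory();
                    // With the leak, this post-GC figure keeps rising until the
                    // JVM eventually throws OutOfMemoryError.
                    System.out.println("post-GC heap used: " + (usedAfterGc / (1024 * 1024)) + " MB");
                }
                Thread.sleep(50);
            }
        }
    }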

As you might guess, a memory leak, if left unattended and uncorrected, can prove fatal to the application. Memory leaks can be found by running tests for a long duration (say, about an hour) and continuously checking memory usage.

For a standalone Windows application, the issues caused by a memory leak essentially depend on two variables:
 1) Frequency of usage
 2) Size of the memory leak

 If either one or both are very high, the computer may reach a point where no memory is available for other applications, which can lead to a crash. If it is a network-based application, you also have to consider network traffic: if each network transaction causes a memory leak, then a high volume of network transactions can prove just as dangerous.
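To put hypothetical numbers on it: if each transaction leaked about 2 KB and the application handled 50 transactions per second, that would be roughly 100 KB per second, or about 350 MB over a one-hour soak test. A climb of that size is easy to spot in the counters described above, and at the same rate the leak could exhaust the memory of a modest server within a day or two of continuous load.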
