Why nGrinder3.0 performed better than Grinder3.11


DuBan
I did some comparison work between Grinder 3.11 and nGrinder 3.0, using 1:5/1:15/1:4/1:16 configurations on PC/2-core/4-core/8-core machines.

[Conclusion] nGrinder 3.0 performed better than Grinder 3.11 on every indicator, including TPS, mean time, and peak TPS.

I modified grinder.properties and the JVM parameters, which may affect the performance results. Mavlarn helped double-check them.
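For context, both the process/thread split and the worker JVM arguments are controlled from grinder.properties. The snippet below only illustrates the relevant keys; the values are invented for illustration, not the settings used in this benchmark, and it assumes the 1:N figures above denote processes:threads.

    # grinder.properties -- keys relevant to this comparison.
    # Values are illustrative only, not the benchmark settings.
    grinder.script = grinder.py
    # worker processes per agent
    grinder.processes = 1
    # threads per worker process (e.g. a 1:15 configuration)
    grinder.threads = 15
    # extra JVM arguments for the worker processes
    grinder.jvm.arguments = -Xms256m -Xmx512m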

So I can't explain why Grinder 3.11 performed noticeably worse than nGrinder 3.0.

Is there something different in how nGrinder 3.0 collects and accumulates data compared to Grinder 3.11?

See nGrinder_Compare_Grinder.xlsx
RE: Why nGrinder3.0 performed better than Grinder3.11

junoyoon
Administrator

The conclusion from what I have observed in real deployments here is that we cannot determine nGrinder's performance in advance.


It seems that the threads of a single nGrinder process run on a single core, possibly because of some synchronization issue (I'm not sure, but it looks that way).


Under this theory, if we specify only one process, the remaining cores are wasted.


This explains a lot: your trial shows a significant performance difference between 1 and 2 cores, but not between 2 and 4 cores.

nGrinder uses one process (the ngrinder agent controller) when it is not performing a test. Once a test runs, at least two processes are running (the controller plus the worker).


So the work can be distributed across two cores without the processes interfering with each other. This may explain the result of your performance test, which used only one worker process.

Please use at least (core count - 1) processes for the test if you would like to confirm this.
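A minimal standalone sketch of that rule of thumb (purely illustrative, not part of nGrinder itself):

    import os

    # Rule of thumb from above: keep one core free for the agent
    # controller and give each remaining core its own worker process.
    cores = os.cpu_count() or 1
    worker_processes = max(1, cores - 1)
    print("cores=%d -> grinder.processes=%d" % (cores, worker_processes))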


However, the following are the issues we should consider.


* If a perf test uses only one process, all the threads belonging to it share the same memory area.

We don't yet specify the -Xms and -Xmx sizes when launching the test Java process, so it uses the JVM defaults.

I observed that the defaults are 64 MB for -Xms and 1 GB for -Xmx on 64-bit CentOS 5.3 with JDK 1.6 update 24.

When the threads share this memory, there is always a possibility of an OutOfMemoryError.

If too many threads share one process, memory usage goes up and the process eventually crashes (the rough arithmetic after this section illustrates why).

I have seen this many times. Whenever I see it, I recommend that the user increase the process count and decrease the thread count.

However, I cannot determine this in advance, because it depends completely on what the script does.

Some scripts use more memory than others; in such cases, we should consider a lower thread count.

What I want to say is that the process count cannot be determined from the core count alone.
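To make the memory pressure concrete, here is a rough back-of-the-envelope sketch; the per-thread figure is a made-up assumption, since it depends entirely on the script:

    # Rough arithmetic: many threads sharing one worker's default 1 GB heap.
    heap_mb = 1024        # the default -Xmx observed above
    per_thread_mb = 15    # hypothetical per-thread working set; script-dependent
    max_threads = heap_mb // per_thread_mb
    print("roughly %d such threads fit before OutOfMemoryError risk" % max_threads)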


* If we increase the process count, there is also the possibility of swapping.

If the physical memory size is 4 GB and we run more than 4 processes (each of which can grow to the default 1 GB maximum heap), total memory usage can go beyond the real memory size.

That causes memory swapping, which really decreases agent performance and can eventually hang the machine.

So we should cap the maximum process count based on the real memory size.

This tells us that we cannot size the process count by core count alone; a combined sizing sketch follows below.
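Putting the two constraints together, a sizing heuristic might look like the sketch below. It is standalone and illustrative only; the 4 GB machine and the 1 GB per-process heap are just the example figures used above.

    import os

    physical_mem_mb = 4096         # example: a 4 GB agent machine
    per_process_heap_mb = 1024     # the default -Xmx observed earlier
    cores = os.cpu_count() or 1

    # Take the smaller of the core-based and the memory-based limit, so we
    # neither leave cores idle nor push the machine into swapping.
    by_cores = max(1, cores - 1)
    by_memory = max(1, physical_mem_mb // per_process_heap_mb)
    worker_processes = min(by_cores, by_memory)
    print("suggested grinder.processes = %d" % worker_processes)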


* Each agent has a different amount of free memory and a different number of CPU cores.

Even if we finally get an optimal value based on the observations above, we cannot apply it to all agents, because each agent has a different hardware spec.

Moreover, some agents might have less free memory than others because other running processes may already occupy part of it.


There are a lot of dynamics behind the optimal process and thread counts. We cannot decide in advance which values are the most appropriate for the best performance.


The only thing we can do is make the nGrinder agents find out which values show the best performance through real trials.

However, it takes time to determine this, because nGrinder agents are not very responsive for such a job. Each trial takes more than 10 seconds, so with 10 trials it would take more than 100 seconds to find the optimal value.
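As a sketch only (none of this exists in nGrinder; measure_tps is a hypothetical hook that would run one short trial and report its TPS), such a search could look like:

    import itertools

    def measure_tps(processes, threads):
        """Hypothetical hook: run one short trial (>10 s) with the given
        process/thread configuration on an agent and return its TPS."""
        raise NotImplementedError

    def find_best(process_candidates, thread_candidates):
        # Try every combination and keep the one with the highest TPS.
        best = None
        for p, t in itertools.product(process_candidates, thread_candidates):
            tps = measure_tps(p, t)
            if best is None or tps > best[0]:
                best = (tps, p, t)
        return best

    # e.g. 10 combinations at >10 s per trial => more than 100 s in total:
    # find_best([1, 2], [10, 20, 30, 40, 50])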


Turning to another aspect, your test trials have some flaws.

Performance test results are highly affected by the current network conditions.

If someone downloads a YouTube video, it decreases the overall TPS. You didn't control the network conditions when you ran the benchmark.

Actually, these kinds of differences are not very important to users; they just need to be kept in mind in advance.


Comparing the performance of a previous nGrinder version with the current one is not that important either.

Users mostly care about comparing consecutive test results, not comparing a result from one month ago with the current one.

Moreover, there is one big difference: the nGrinder agent's message loop runs in -server mode (which does more bytecode optimization), while Grinder's might not.


The following are my conclusions.


1. Don't try performance comparisons between nGrinder versions. It's not worthwhile.

2. Don't try to recommend fixed process and thread counts.

3. We can provide a default memory size when launching the worker processes later.

4. We can consider -server as an nGrinder test process JVM parameter for better performance (see the example after this list).

5. We can provide a best-performance run mode for when users really want to maximize agent performance.

    It will take more than 100 seconds, but it does the best job. Users would only select this option when they are ready for it.
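For conclusions 3 and 4, the worker JVM flags would go through grinder.properties. A possible default could look like the line below; this is only an illustration, and the heap sizes are examples rather than decided values.

    # Illustrative only -- a possible default for the worker JVM arguments,
    # covering conclusions 3 and 4 above:
    grinder.jvm.arguments = -server -Xms256m -Xmx512m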






JunHo Yoon
Global Platform Development Lab
/ Senior Engineer

13th FL., Bundang First Tower, 266-1, Seohyeon-dong, Bundang-gu, Seongnam-si, Gyeonggi-do, 463-824, KOREA
Tel 031-600-9071   Fax --   Mobile 010-6255-0559
Email  [hidden email]

NHN Business & Platform NAVER HANGAME 쥬니어네이버 해피빈 미투데이

