Agent crashed with OutOfMemory error even with 64GB of RAM


Agent crashed with OutOfMemory error even with 64GB of RAM

Puneet Monga
I recorded a script which logs in to a website, browses, and logs out. Fortunately, I have access to a high-performance CentOS server with around 64 GB of RAM.

However, when I tried running the script with around 1,000 virtual users (20 processes and 50 threads), the agent crashed with the following error:

java.lang.OutOfMemoryError: unable to create new native thread
        at java.lang.Thread.start0(Native Method) [na:1.7.0_17]
        at java.lang.Thread.start(Thread.java:691) [na:1.7.0_17]
        at HTTPClient.HTTPConnection.getSocket(HTTPConnection.java:3436) ~[grinder-httpclient-3.9.1.jar:na]
        at HTTPClient.HTTPConnection.sendRequest(HTTPConnection.java:3089) ~[grinder-httpclient-3.9.1.jar:na]
        at HTTPClient.HTTPConnection.handleRequest(HTTPConnection.java:2883) ~[grinder-httpclient-3.9.1.jar:na]
        at HTTPClient.HTTPConnection.setupRequest(HTTPConnection.java:2675) ~[grinder-httpclient-3.9.1.jar:na]
        at HTTPClient.HTTPConnection.Get(HTTPConnection.java:986) ~[grinder-httpclient-3.9.1.jar:na]
        at net.grinder.plugin.http.HTTPRequest$2.doRequest(HTTPRequest.java:504) ~[grinder-http-3.9.1.jar:na]
        at net.grinder.plugin.http.HTTPRequest$AbstractRequest.getHTTPResponse(HTTPRequest.java:1276) ~[grinder-http-3.9.1.jar:na]


I have set useXmxLimit to false. I found in the logs that there was around 60 GB of RAM free, so nGrinder divided that 60 GB equally among the 20 processes, which comes to 3 GB per process. So I'm not sure why the agent crashed even with that much memory allocated.

Any pointers on how I can improve the scalability would help. Thanks in advance.
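
For reference, this particular OutOfMemoryError ("unable to create new native thread") is thrown when the JVM cannot obtain another operating-system thread, so it can appear no matter how much heap (-Xmx) each process gets. The following is a minimal stand-alone sketch (plain Java, unrelated to the recorded script or to nGrinder; the class name NativeThreadOom is made up) that reproduces the same message by starting idle threads until the OS refuses to create more:

    // Hypothetical reproduction, not nGrinder code. Each started thread needs a
    // native OS thread plus stack space, so on a host with a low "max user
    // processes" limit this fails with "unable to create new native thread"
    // long before the heap is exhausted.
    public class NativeThreadOom {
        public static void main(String[] args) {
            int started = 0;
            try {
                while (true) {
                    Thread t = new Thread(new Runnable() {
                        public void run() {
                            try {
                                Thread.sleep(Long.MAX_VALUE); // keep the thread alive
                            } catch (InterruptedException ignored) {
                            }
                        }
                    });
                    t.setDaemon(true);
                    t.start();
                    started++;
                }
            } catch (OutOfMemoryError e) {
                System.out.println("Failed after " + started + " threads: " + e.getMessage());
            }
        }
    }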

RE: Agent crashed with OutOfMemory error even with 64GB of RAM

junoyoon
Administrator

I've tried 1,000 virtual users per agent before, and it was successful.

The difference might be that I used a very simple script, while you may have used a very complex one generated by TCPProxy.

Your error message suggests it's not related to the Xmx setting.

Instead, it may be related to the PermGen size.

The easiest way might be to increase the processes and decrease the threads, for example:

- Processes: 50, Threads: 20

(50 × 20 still gives 1,000 virtual users, just with fewer threads per process.) Please set useXmxLimit=true in advance.

Please try the above.
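
To check whether the PermGen limit on the agent JVM really is small, a rough stand-alone diagnostic like the one below could be run with the same Java installation the agent uses. This is only a sketch built on the standard java.lang.management API; the class name AgentJvmCheck is made up and is not part of nGrinder:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryPoolMXBean;
    import java.lang.management.MemoryUsage;

    // Hypothetical diagnostic, not nGrinder code. Prints the number of live JVM
    // threads and the configured maximum of each memory pool; on a Java 7
    // HotSpot JVM one of the pools is typically named "Perm Gen" or "PS Perm Gen".
    public class AgentJvmCheck {
        public static void main(String[] args) {
            System.out.println("Live threads: "
                    + ManagementFactory.getThreadMXBean().getThreadCount());
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                MemoryUsage usage = pool.getUsage();
                long max = usage.getMax(); // -1 means no maximum is defined
                System.out.println(pool.getName() + " max: "
                        + (max < 0 ? "undefined" : (max / (1024 * 1024)) + " MB"));
            }
        }
    }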



RE: Agent crashed with OutOfMemory error even with 64GB of RAM

Puneet Monga
I tried 50 processes and 20 threads with useXmxLimit set to true. It crashed pretty quickly.

I was going through this link: http://www.cubrid.org/wiki_ngrinder/entry/virtual-user. It mentions that on an 8 GB machine with 4 cores, the virtual user count can reach 1,000. However, I couldn't reach 1,000 vusers even with 64 GB of RAM.

Appreciate your help.

RE: Agent crashed with OutOfMemory error even with 64GB of RAM

junoyoon
Administrator

Which Java version are you using?
