Agent caching of JARs, resources, etc. - in file system

colinmain
Hi,
I'm running nGrinder 3.2.1 with some tests that have a significant amount of resources (many MB).
This means that every time a test starts, the resources are transferred to the agent servers, even though they are frequently still there from the previous run (in the "file-store" directory; we do not use a database).

Is there a means of caching these resources?
Failing that, can nGrinder check whether they are the same as in the previous test run (via modification timestamps, MD5, or something similar) and reuse them where possible?

(This would be particularly useful during early stages of test development when small tests are run, modified/tuned and re-run frequently.)

Many thanks,

Colin

junoyoon
Administrator
The Grinder (which nGrinder uses as a base library) continuously synchronizes files to the agents while they are connected to the console, so that a user can start a test very quickly without explicit file distribution latency.

However, nGrinder cannot use this approach, because each console is only activated when a test is started and the agents connect to that console at that point. The files only start to be synchronized after a test has begun, which can increase the test start time (maybe 1~50 seconds, to guarantee that the file distribution succeeds). We felt this start-up latency reduced nGrinder's usability (prior to 3.1, nGrinder took a long time to start a test), so we decided to modify the distribution logic: when a test is started, nGrinder just invokes an asynchronous distribution, "believes" that the file distribution will succeed, and starts the test immediately.

With this approach there is some possibility of file corruption or late file transfer, so the test might not execute correctly. To reduce this problem, we don't delete the previously distributed files (per user) and use them as a cache as much as possible. The file synchronization logic on the agent side sends fingerprints of the previously distributed files, so that only changed files are distributed.
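As a minimal sketch of the idea in Java (the class and method names below are illustrative only, not nGrinder's actual distribution code), the check boils down to hashing the cached copy and re-sending a file only when its fingerprint differs from the fingerprint of the copy to be distributed:

import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;

// Illustrative sketch only, not nGrinder's real distribution classes.
// Idea: compute an MD5 fingerprint of the locally cached copy and
// re-distribute a file only when its fingerprint differs from the
// fingerprint of the new copy on the console side.
public class DistributionFingerprint {

    static String md5Hex(Path file) throws Exception {
        byte[] digest = MessageDigest.getInstance("MD5").digest(Files.readAllBytes(file));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    // True when the file is missing from the agent cache or its content has changed.
    static boolean needsRedistribution(Path cachedCopy, String newCopyFingerprint) throws Exception {
        return !Files.exists(cachedCopy) || !md5Hex(cachedCopy).equals(newCopyFingerprint);
    }
}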

The other feature nGrinder has is "Safe distribution": it sends the files one by one with an explicit check. It slows down the whole test execution, but the file distribution is guaranteed.
By the way, safe distribution is automatically activated when a test distributes more than 10MB of files, even if you did not select the "Safe distribution" option.
 
junoyoon
Administrator
In reply to this post by colinmain
You can configure the threshold size at which safe distribution is activated by providing:

ngrinder.dist.safe.threshold=10000000

(Prior to 3.1 the property name was actually misspelled as "threshhold" or "threashhold" due to a typo bug.)

This will modify the safe file distribution threshold value. Please increase it.
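For example, assuming the property goes in the controller's configuration file (system.conf is quoted later in this thread), a setting like the following would keep resources of up to roughly 100MB on the fast, cached distribution path:

# Assumed example: raise the safe-distribution threshold to ~100MB (value in bytes)
# so multi-MB test resources stay on the fast, cached distribution path.
ngrinder.dist.safe.threshold=100000000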

colinmain
I'll give this a try.

As always, thanks for the prompt response.
Colin

colinmain
This has made a significant improvement. Thank you once more!
Colin

Brian Brown
In reply to this post by junoyoon
So, I'm trying to get the agent to actually use the cache and not re-download everything. I set ngrinder.dist.safe.threshold=10000000000

and it has no effect. Every time, the agent clears the file store. I'm distributing a lot of files, so this makes the test runs very long. Is there anything else I can do?

Also, in the system.conf, the comment:
# From 3.1.1, nGrinder doesn't check the file distribution result to speed up the test execution.
# If your agent is located in the far places or you distribute big files everyday, you'd better to change this to true.
controller.safe_dist=false

I don't understand the comment versus the value: if I am distributing big files every day, why would I want safe_dist = true?

Thanks!

- Brian

junoyoon
Administrator
Which version do you use?

junoyoon
Administrator
In reply to this post by Brian Brown
The value of controller.safe_dist_threshold should be within int range.

Therefore, use the following:

ngrinder.dist.safe.threshold=1000000000     # 9 zeros, for nGrinder 3.2.X
or
controller.safe_dist_threshold=1000000000   # 9 zeros, for nGrinder 3.3

Your current setting, 10000000000 (10 zeros), is over the integer range; in that case nGrinder uses 0 instead of emitting an error.

In addition, controller.safe_dist should be true when there is a real possibility that the transmission between the agent and controller degrades while sending big files. If your agent and controller are close enough, you'd better set it to false.
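As a quick illustration of the int-range limit (plain Java, not nGrinder's own configuration parsing):

// Plain-Java illustration of the limit, not nGrinder's config code.
public class ThresholdRange {
    public static void main(String[] args) {
        System.out.println(Integer.MAX_VALUE);               // 2147483647 (about 2.1 GB if the value is bytes)
        System.out.println(Integer.parseInt("1000000000"));  // 9 zeros: parses fine
        try {
            Integer.parseInt("10000000000");                 // 10 zeros: exceeds int range
        } catch (NumberFormatException e) {
            System.out.println("10 zeros does not fit in an int: " + e.getMessage());
        }
    }
}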

Brian Brown
In reply to this post by junoyoon
3.3 is my version.

Thanks!

Brian Brown
In reply to this post by junoyoon
OK, in my case the agent still cleared out the incoming directory and downloaded all the JAR files again.

junoyoon
Administrator
OK. To make it clear: are you using a Groovy Maven project?

I'll do my best to fix this issue next week.