
Forum Discussion

Greg_Dostatni_1
Apr 27, 2015

Splitting traffic to test environment

Hello,

 

Is it possible to split off a copy of production traffic to a test environment? What I envision: I have two environments, test and production. I would like, temporarily (maybe for a week) or permanently, to send an exact copy of the production traffic to both prod and test. Users would receive the output from production (the test output would be ignored), while the test environment could be used to evaluate performance setting changes, etc.

 

I've heard that it is possible, but I've been unable to find any reference information about it. On the other hand, I'm not an F5 expert, so I may just not be looking for the correct terms.
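For reference, on BIG-IP LTM this kind of mirroring is usually done with a clone pool, which sends a copy of each packet for a virtual server to a second pool. A minimal tmsh sketch, assuming a hypothetical existing virtual server named prod_vs and a test server at 10.0.2.10 (both names/addresses are placeholders):

```shell
# Assumption: BIG-IP LTM, with an existing virtual server "prod_vs".
# Create a pool of test servers to receive the cloned traffic.
tmsh create ltm pool test_pool members add { 10.0.2.10:80 }

# Attach it as a clone pool; client-side traffic is copied to test_pool,
# and responses from the clone members are discarded.
tmsh modify ltm virtual prod_vs clone-pools add { test_pool { context clientside } }
```

Note that a clone pool copies traffic at the packet level, so the clone target is typically a passive capture or IDS-style device; for full HTTP request replay against a live test stack, a separate capture-and-replay tool may be a better fit.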

 

Thanks,

 

Greg

 

3 Replies

  • Greg,

     

    That's an interesting idea. If you don't mind me asking, what kind of metrics/tools would you use to determine that performance is indeed being optimized?

     

  • Hi Sheigh,

     

    We're using Splunk here, but I'm hoping to use this setup for both performance and functionality testing.

     

    Basic procedure:

     

    1. Configure TEST with the same memory/CPU as production (most services are in our VM environment).
    2. Clone the production backup to test.
    3. Begin the test (some differences will be inevitable, but that should be OK).

    Performance testing: Track the usual CPU / IO / memory within both environments to see whether they respond in the same way. For web servers we build a table of response times, broken down by logged response size, with average, max, 90th percentile, 95th percentile, and 98th percentile across the top.
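The response-time summary described above can be sketched as a small Python helper; the nearest-rank percentile rule and the function name are my own choices, not anything specific to Splunk:

```python
import statistics

def summarize(response_times):
    """Summarize a sample of response times (ms): avg, max, and high percentiles."""
    ordered = sorted(response_times)

    def pct(p):
        # Nearest-rank percentile on the sorted sample.
        idx = max(0, int(round(p / 100 * len(ordered))) - 1)
        return ordered[idx]

    return {
        "avg": statistics.mean(ordered),
        "max": ordered[-1],
        "p90": pct(90),
        "p95": pct(95),
        "p98": pct(98),
    }

# Example: 100 samples of 1..100 ms
print(summarize(list(range(1, 101))))
```

Running the same summary over prod and test samples, bucketed by response size, gives directly comparable rows for the table.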

     

    Functionality testing: Basically, track the response codes between production and test. Allow a certain number of 403s above what production sees. Then look at each transition (200->403 likely means a file is missing on test; 200->500 means something is not working correctly).
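The transition tracking above amounts to counting (prod status, test status) pairs for the same replayed requests; a minimal sketch, with the function name and sample data being illustrative only:

```python
from collections import Counter

def transition_counts(prod_codes, test_codes):
    """Count (prod_status, test_status) pairs for the same requests, in order.

    A (200, 403) pair suggests a file missing on test;
    a (200, 500) pair suggests something is genuinely broken.
    """
    return Counter(zip(prod_codes, test_codes))

prod = [200, 200, 200, 404, 200]
test = [200, 403, 500, 404, 200]
counts = transition_counts(prod, test)
print(counts[(200, 403)])  # requests fine in prod but forbidden on test
```

A threshold on counts[(200, 403)] then implements the "allow a certain number of 403s" rule, while any nonzero counts[(200, 500)] is worth investigating.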

     

    Obviously the results will vary greatly depending on what we've changed, but for many changes this gets us close to a decent generic functional and performance testing procedure.

     

    Cheers,

     

    Greg