Performance testing with Postman

Postman can be used to send requests in a semi-automated, reproducible way and to collect information on payloads, request times, and statuses. This is useful for testing PRs which have an impact on performance.

The free version of Postman has some interesting features which can ease the load of manual testing when the aim is to assess the impact a given PR has on performance.

In these cases, manual testing can be time consuming. With the Collections feature, we can run a given request a given number of times, with a set request delay, while gathering information from all requests in a JSON file. This file can then be processed by simple scripts or a spreadsheet to plot the results and make them easier to interpret. This section shows how to achieve these goals.

Please take notice: currently, we're only considering this type of testing on staging servers. Sending this type of (repeated!) request to production servers could lead to downtime and affect customer experience! When in doubt, just don't do it :-)

So, let's see how we can create a valid Postman request (1), test it (2), run it within a Collection, gathering the results (3), and plot them (4).

  1. Create a valid request. Depending on the case at hand, testers may wish to test a specific API request, or check the response from an HTTP request. These cases are slightly different.

    • For API requests, have a look at the OFN API Handbook. A good example is using the reports API, like the example below for packing reports (valid at the time of writing, and after inserting the user token below):

      <insert_user_token_here>&q%5Border_completed_at_gt%5D=2024-01-01+00%3A00&q%5Border_completed_at_lt%5D=2024-04-22+00%3A00&q%5Border_distributor_id_in%5D%5B%5D=&report_subtype=customer&report_format=&display_summary_row=true&fields_to_show%5B%5D=hub&fields_to_show%5B%5D=supplier&fields_to_show%5B%5D=first_name&fields_to_show%5B%5D=last_name&fields_to_show%5B%5D=product&fields_to_show%5B%5D=variant&fields_to_show%5B%5D=quantity&fields_to_show%5B%5D=report_row_type

    • (WIP) For HTTP requests, depending on the type of request, you may need to set cookies to enable authentication, for example, if you wish to access content which requires logging in. This section will be extended in the future. To access public pages, all you need to do is create a request with the required URL. A good example of this is the shopfront page of a given producer. Here's an example (working at the time of writing):
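As an aside on the API example above: the URL-encoded query parameters (`%5B`, `%5D`, `+`) are hard to read at a glance. A short Node.js sketch can decode a fragment of that query string into readable report filters (the fragment below is copied from the example; the decoding logic is standard JavaScript):

```javascript
// Decode a URL-encoded query fragment into readable key/value pairs.
// This fragment is taken from the packing-report example above.
const query = "q%5Border_completed_at_gt%5D=2024-01-01+00%3A00&report_subtype=customer";

for (const pair of query.split("&")) {
  const [key, value] = pair.split("=");
  // decodeURIComponent turns %5B/%5D back into [ and ]; '+' encodes a space
  console.log(decodeURIComponent(key), "=", decodeURIComponent(value.replace(/\+/g, " ")));
}
// q[order_completed_at_gt] = 2024-01-01 00:00
// report_subtype = customer
```

This makes it easier to see that the request filters orders completed between two dates and selects which report fields to show.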

  2. Taking one of the two requests above, open your local Postman app, and after logging in, create a Workspace. Then, select that workspace and click on the + sign, to create a request:

    Type in one of the requests above (or another valid one) and hit Send. You should see the response payload and the parameters related to that request. Now that you have a valid request and can check that it works, hit Save (or ctrl+s).

  3. Next, we need to create a Collection and add our request to it. Within your workspace, create a Collection (top left) by clicking the + sign - don't forget to name it; then add your request:

Type in the previously tested request and save it to the collection. Now, click on the three dots next to your Collection name and select the Run Collection option from the dropdown. Select the desired options and hit the Run button at the bottom right, which is labeled with your collection name (Run Products page, in our case). You should get something like this:

Hitting Export Results will generate a file which contains information on the request times. These can be used for comparison before and after staging a PR. This should enable testers to easily answer the question "does the PR improve, worsen, or have no effect on performance?"
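The exact schema of the exported run JSON has varied between Postman versions, so treat the key names below (`results`, `times`) as assumptions to check against your own file. A small Node.js sketch that reduces per-iteration response times to summary statistics for before/after comparison:

```javascript
// Summarise response times (in ms) from an exported Collection run.
// NOTE: the field names "results" and "times" are assumptions based on
// older Postman Runner exports -- inspect your own JSON and adjust.
function summarize(times) {
  const sorted = [...times].sort((a, b) => a - b);
  const mean = sorted.reduce((sum, t) => sum + t, 0) / sorted.length;
  const median = sorted[Math.floor(sorted.length / 2)];
  return { min: sorted[0], max: sorted[sorted.length - 1], mean, median };
}

// Hypothetical export: one request run five times.
const run = { results: [{ name: "Products page", times: [210, 190, 250, 230, 205] }] };

for (const r of run.results) {
  console.log(r.name, summarize(r.times));
}
```

Running this once on the export from the current `master` and once on the export from the PR branch gives two sets of numbers to compare directly.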

  1. Bonus points, the first: plotting the results is especially useful if your tests require running a large number of repetitions. In these cases, a plot gives a better picture of the behavior of the app as a function of the tested PR.
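One low-effort way to get a plot is to convert the response times to CSV and paste them into a spreadsheet. A sketch of such a conversion (the `label` column is a hypothetical convention for distinguishing before/after runs, not something Postman produces):

```javascript
// Convert per-iteration response times to CSV rows, ready to paste into a
// spreadsheet for plotting before/after comparisons.
function toCsv(times, label) {
  const header = "iteration,label,response_time_ms";
  const rows = times.map((t, i) => `${i + 1},${label},${t}`);
  return [header, ...rows].join("\n");
}

console.log(toCsv([210, 190, 250], "before-PR"));
// iteration,label,response_time_ms
// 1,before-PR,210
// 2,before-PR,190
// 3,before-PR,250
```

With both a "before-PR" and an "after-PR" series in the same sheet, a simple line or scatter chart makes trends and outliers obvious.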

  2. Bonus points, the second: add tests to your Postman request. Go back to your request and select the Tests tab. Notice there are some pre-defined tests on the right side. Try it out! Select some, change the values if you will, and save the request. Now, re-running the Collection should take these into account and display the test results for each run.
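Postman's pre-defined snippets use its `pm` scripting API, and the two below (status check, response-time check) are particularly handy for performance testing. Inside the Tests tab you would paste only the two `pm.test(...)` calls; the mock object at the top is there only so this sketch runs outside Postman, which provides `pm` for you:

```javascript
// Minimal mock of Postman's `pm` sandbox, ONLY so this sketch runs outside
// Postman. In the app itself, `pm` is provided -- paste just the tests below.
const pm = {
  response: {
    responseTime: 180, // ms; a made-up value for illustration
    code: 200,
    to: { have: { status(c) { if (c !== pm.response.code) throw new Error("unexpected status"); } } },
  },
  expect: (actual) => ({
    to: { be: { below(limit) { if (!(actual < limit)) throw new Error(`${actual} >= ${limit}`); } } },
  }),
  test: (name, fn) => { fn(); console.log("PASS:", name); },
};

// These two tests mirror Postman's built-in snippets:
pm.test("Status code is 200", function () {
  pm.response.to.have.status(200);
});

pm.test("Response time is below 1000 ms", function () {
  pm.expect(pm.response.responseTime).to.be.below(1000);
});
```

With the response-time test in place, the Collection Runner flags every iteration that exceeds the threshold, which gives a quick pass/fail view on top of the raw timings.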
