The Good Tech Companies - HARmageddon is cancelled: how we taught Playwright to replay HAR with dynamic parameters
Episode Date: January 15, 2026. This story was originally published on HackerNoon at: https://hackernoon.com/harmageddon-is-cancelled-how-we-taught-playwright-to-replay-har-with-dynamic-parameters. We taught Playwright to find the correct HAR entry even when query/body values change and prevented reusing entities with dynamic identifiers. Check more stories related to programming at: https://hackernoon.com/c/programming. You can also check exclusive content about #cicd, #playwright, #har, #ci-cd-solutions, #e2e, #e2e-testing, #correct-har-entry, #good-company, and more. This story was written by: @socialdiscoverygroup. Learn more about this writer by checking @socialdiscoverygroup's about page, and for more stories, please visit hackernoon.com. Playwright is a tool for mocking the network using a HAR file. HAR is a file that contains all page requests, request parameters, and server responses. HAR files can be used to test the network state without starting the backend.
Transcript
This audio is presented by Hacker Noon, where anyone can learn anything about any technology.
HARmageddon is cancelled: how we taught Playwright to replay HAR with dynamic parameters.
By Social Discovery Group. Problem: regular HAR mocks quickly stop working; dynamic parameters break request matching, and the content contains sensitive data that can't be committed.
Approach: we introduced automatic HAR normalization, replacing users, tokens, and IDs, and a logic layer on top of Playwright that correctly handles dynamic parameters.
What we did: we taught Playwright to find the correct HAR entry even when query and body values change, and prevented reusing entities with dynamic identifiers.
Result: E2E tests became environment independent, run stably in CI/CD, don't need the back end, and don't contain personal data. Why mock the network in E2E tests at all?
Even though E2E tests are meant to test the application as a whole, in practice you almost always need to isolate the UI from real network calls. Main reasons.
1. Test stability. A real API can be unavailable, be slow, or return unpredictable data. Mocks eliminate randomness and make tests reproducible.
2. Speed, even with sequential test runs. Mock responses are returned instantly, without real network calls. This makes tests several times faster compared to hitting the real back end.
3. Reducing load on the back end. When E2E tests run in parallel, they generate a lot of concurrent requests. This can create load spikes on the test server and lead to rate limit errors, basically DDoSing your own back end. Mocks completely remove network load; the back end doesn't participate in the test run at all.
4. Independence from external services: Stripe, S3, geocoders, OpenAI. Anything that can fail will eventually fail. Mocks turn E2E tests into a fully autonomous layer that doesn't depend on third-party APIs.
But there's a downside: if an external service changes its contract or goes down, a mocked test will never know and will happily stay green.
How Playwright mocks the network. Playwright has a low-level interface for intercepting and substituting requests.
Any request matching the pattern, in this example ending with /api/users, will be handled locally without going to the internet, and the data passed to fulfill will be returned to the client as the response for this request.
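A minimal sketch of this kind of interception (the route pattern, page URL, and payload here are illustrative, not the article's original snippet):

```ts
import { test, expect } from '@playwright/test';

test('users list is rendered from a mocked response', async ({ page }) => {
  // Any request whose URL ends with /api/users is handled locally:
  // the data passed to route.fulfill() is returned to the client as the response.
  await page.route('**/api/users', (route) =>
    route.fulfill({
      status: 200,
      contentType: 'application/json',
      body: JSON.stringify([{ id: 'test-user-id', email: 'user@test.local' }]),
    }),
  );

  await page.goto('https://example.com/users');
  await expect(page.getByText('user@test.local')).toBeVisible();
});
```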
What is a HAR file and why is it convenient for testing?
HAR (HTTP Archive) is a JSON file that contains all page requests, request parameters, and server responses. In other words, HAR is a recording of real network activity.
If you create a HAR once, for example log in, open a list, load a product card, you can then use this file as a source of mocks that completely reproduce real API behavior.
That's why HAR is perfect when you need to fix the network state, want to test the UI without starting the back end, or need a deterministic scenario that is as close to real as possible.
How Playwright uses HAR files for mocking. Playwright can: 1. Record a HAR. 2. Replay a HAR without real network calls. Both operations are done via the routeFromHAR method on the browser context.
1. Recording a HAR. Playwright automatically intercepts everything happening in the browser (unless exclusions are configured in options) and saves it into .har files.
2. Replaying a HAR. Now, if a test makes a request that exists in the HAR, Playwright immediately returns the saved response without going to the network.
You can read more about recording and replaying HAR in the Playwright docs.
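A sketch of both modes with routeFromHAR; the HAR path, URL filter, and the environment flag that toggles recording are assumptions for illustration:

```ts
import { test } from '@playwright/test';

test('products page works from a recorded HAR', async ({ context, page }) => {
  // With update: true Playwright hits the real back end and records/refreshes the HAR;
  // with update: false (the default) it replays the saved responses instead.
  await context.routeFromHAR('./hars/products.har', {
    url: '**/api/**',
    update: process.env.UPDATE_HAR === '1',
  });

  await page.goto('https://example.com/products');
});
```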
Advantages of HAR mocking.
1. Realistic data. The HAR contains real server responses, and the UI sees them exactly as they were at the moment of recording.
2. Full isolation. No server needed; tests run even on a local machine without a back end.
3. High speed. HAR responses are returned instantly, and tests run at maximum speed.
4. Ideal for complex flows. For example authentication, complex filters, request chains, pages with dozens of API calls. Generating such mocks by hand is hard, and HAR solves it automatically.
Why HAR mocks can unexpectedly break.
A recorded HAR file is a snapshot of network requests
made while hitting the real back end during test execution.
As a result, the file may contain dynamic data (user IDs in URLs) and sensitive information (auth tokens, emails, etc.).
In our project we ran into four problems with data in HAR files: it is tied to the user under which the HAR was recorded; dynamic parts of URLs; dynamic request bodies; sensitive data that can't be committed.
1. Tied to the user under which the HAR was recorded.
In practice, developers recorded HAR files while logged in as different users, which led to
the URLs with user IDs becoming a problem right away.
Example.
In our app, a user can use chat and also view a list of received emails on a separate page.
Tests for the chat page use mocks from chats.har, which contains a user list request for a user with ID unique-user-id-1: GET /unique-user-id-1/users/list.
Tests for the emails page use mocks from emails.har, which also contains a user list request, but it was recorded under a user with ID unique-user-id-2: GET /unique-user-id-2/users/list.
Consequences. Playwright, when replaying, matches requests strictly by URL, method, and body, so even a small mismatch breaks the mock.
If we run all tests logged in as the user with ID unique-user-id-1, Playwright won't be able to find the needed entry in emails.har when running the emails page tests, because the endpoint GET /unique-user-id-2/users/list was recorded for user unique-user-id-2, while Playwright will look for /unique-user-id-1/users/list.
2. Dynamic parts of URLs. Example. The HAR file may contain an entry with a request id generated on the client side.
Consequences. The request id will be unique each time: one value at recording time, another at replay time.
So Playwright will never find this entry on subsequent test runs, again because Playwright strictly compares URL data when searching.
3. Dynamic request bodies. Dynamic parameters can appear not only in the URL but also in the request body. If the body contains values that must match data obtained at a previous step, for example a request id from a generate endpoint, Playwright will only be able to find the correct HAR entry if these values match exactly. Although such parameters are unique by nature, their value must be the same across all related requests in the test.
If the request id passed to generate doesn't match request_id in the subsequent request body, Playwright won't be able to find a matching entry, and the mock just won't work.
Example: the previously generated request id is passed in the body of a subsequent request.
Consequences. If request_id differs from what was in the generate request, Playwright won't find the corresponding entry in the HAR.
4. Sensitive data that can't be committed. Example. Consequences. Real user data can end up in the repository, because HAR files must be in the repo for CI/CD test runs.
And hopefully your auth endpoint doesn't accept such data in plain text in real life.
We ended up with these tasks.
1. HAR files must not depend on the user they were recorded under and must not contain sensitive data.
It shouldn't matter where we recorded them.
We want to use them without depending on users and their data.
Every test should run with the same test credentials.
The file must not contain real personal user data, only controlled test values.
2. We need to teach our tests to work with dynamic data. Endpoints with dynamic data must be found in the HAR and used in the test only once, just like with real back-end calls: a specific request, a specific response. If test logic needs to call an endpoint with data from another endpoint, the strictly corresponding HAR entry must be used. How to approach the solution.
1. HAR normalization: a separate script that finds original user data in the HAR and replaces it with test values. It can normalize previously created HARs so we don't have to re-record them (some tests require special conditions for the user in order to reproduce them for recording).
2. Dynamic data interception: requests to URLs with dynamic data are intercepted, and the correct HAR entry is returned in the response.
HAR normalization. Since we need to normalize previously generated HARs, the task comes down to automatically detecting user credentials (user ID, email, token) and replacing them with controlled test values.
We split the process into two major steps.
1. Extract real user data from the auth request, because this is the only place where we can reliably find the correct user data: user ID, email, authentication token.
2. Walk through all HAR entries and replace user-sensitive data with test data everywhere: in all URLs, in request and response headers, in request and response bodies.
Below is a simplified version of how such
normalization can be implemented, already split into logical parts.
Step 1. Extract user data from HAR. First, we find a successful auth request (an auth call with a 200 status) and pull out the user ID, email, and token. These data will be the originals we'll replace. You can see the HAR file type in more detail here: https://github.com/DefinitelyTyped/DefinitelyTyped/blob/master/types/har-format/index.d.ts
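A sketch of this step, assuming har-format types and an auth response body shaped like { userId, email, token }; the '/auth' URL fragment and field names are assumptions:

```ts
import type { Entry, Har } from 'har-format';

interface OriginalUserData {
  userId: string;
  email: string;
  token: string;
}

// Find a successful auth entry and pull the real user data out of its response body.
function extractUserData(har: Har): OriginalUserData | null {
  const authEntry: Entry | undefined = har.log.entries.find(
    (entry) => entry.request.url.includes('/auth') && entry.response.status === 200,
  );
  if (!authEntry?.response.content.text) return null;

  const body = JSON.parse(authEntry.response.content.text);
  return { userId: body.userId, email: body.email, token: body.token };
}
```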
Step 2. Normalize URLs. Next, we need to get rid of the dependency on a specific user ID in URLs.
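A possible implementation, where TEST_USER_ID is the controlled test value substituted everywhere (a hypothetical constant for this sketch):

```ts
import type { Har } from 'har-format';

const TEST_USER_ID = 'test-user-id'; // controlled test value (assumption of this sketch)

// Replace the real user ID wherever it appears in entry URLs.
function normalizeUrls(har: Har, original: { userId: string }): void {
  for (const entry of har.log.entries) {
    entry.request.url = entry.request.url.split(original.userId).join(TEST_USER_ID);
  }
}
```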
Step 3. Normalize headers. Headers often contain tokens and other things that shouldn't appear in the repository.
If needed, you can also add replacement logic for Authorization, X-User-Id, and other custom headers here.
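A sketch of this header pass; the test values are assumptions of the sketch:

```ts
import type { Har } from 'har-format';

const TEST_TOKEN = 'test-token';       // controlled test values (assumptions of this sketch)
const TEST_EMAIL = 'user@test.local';

// Scrub tokens and other user-identifying values from request and response headers.
function normalizeHeaders(har: Har, original: { token: string; email: string }): void {
  for (const entry of har.log.entries) {
    for (const header of [...entry.request.headers, ...entry.response.headers]) {
      header.value = header.value
        .split(original.token).join(TEST_TOKEN)
        .split(original.email).join(TEST_EMAIL);
      // If needed, add replacement logic for Authorization, X-User-Id and other custom headers here.
    }
  }
}
```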
Step 4. Normalize request and response bodies.
Result: putting it all together in normalizeHarFile. Now that we have all the helper functions, the final step is to walk through all entries and apply normalization to all parts of the requests and responses.
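A sketch of the body pass and the final walk-through; extractUserData, normalizeUrls, and normalizeHeaders come from the sketches above, and the test values remain assumptions:

```ts
import { promises as fs } from 'fs';
import type { Har } from 'har-format';

const TEST_USER_ID = 'test-user-id';   // controlled test values (assumptions of this sketch)
const TEST_EMAIL = 'user@test.local';
const TEST_TOKEN = 'test-token';

// Step 4 (sketch): replace user data inside request and response bodies.
function normalizeBodies(har: Har, original: { userId: string; email: string; token: string }): void {
  const scrub = (text: string) =>
    text
      .split(original.userId).join(TEST_USER_ID)
      .split(original.email).join(TEST_EMAIL)
      .split(original.token).join(TEST_TOKEN);

  for (const entry of har.log.entries) {
    if (entry.request.postData?.text) entry.request.postData.text = scrub(entry.request.postData.text);
    if (entry.response.content.text) entry.response.content.text = scrub(entry.response.content.text);
  }
}

// Result (sketch): walk through all entries and apply every normalization step.
async function normalizeHarFile(path: string): Promise<void> {
  const har: Har = JSON.parse(await fs.readFile(path, 'utf-8'));

  const original = extractUserData(har);
  if (!original) throw new Error(`No successful auth entry found in ${path}`);

  normalizeUrls(har, original);
  normalizeHeaders(har, original);
  normalizeBodies(har, original);

  await fs.writeFile(path, JSON.stringify(har, null, 2));
}
```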
Dynamic data interception. In our system there are several related endpoints: one receives a request id in its query parameters, generated on the client, and later the same request id must appear in the body of another POST request. When replaying the HAR, Playwright strictly matches by URL plus method plus body and has no idea that two different requests are logically connected by the same request id. So we had to add our own layer on top of routeFromHAR.
Sequence of steps. We split the work with dynamic data into several steps.
1. Basic HAR wiring through routeFromHAR: Playwright still replays everything it can strictly match.
2. Add a HarMocks wrapper that loads the HAR into memory, can search for the correct entry taking dynamics into account (removing query params, searching in the body, etc.), and stores already used request id values to avoid reusing the same entry.
3. Intercept problematic URLs via context.route. First request: the request id comes in the query. Second request: the same request id is in the body, as request_id. Additional case: a static URL (/api/*) plus a dynamic request_id only in the body of the POST.
4. Every time Playwright can't match a HAR entry out of the box, we intercept the request, find the correct entry manually, use it once via route.fulfill, and mark the request id as used.
Step 1. Basic HAR setup via routeFromHAR. Let's start with a simple class that can load a HAR file and attach it to Playwright. At this point Playwright can already replay the HAR, but it still falls over on dynamic parameters. Next, we'll extend HarMocks.
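A minimal sketch of such a class; the class shape and option values are illustrative, not the article's exact code:

```ts
import { readFileSync } from 'fs';
import type { BrowserContext } from '@playwright/test';
import type { Entry, Har } from 'har-format';

// Minimal wrapper: load the HAR into memory and let Playwright replay
// everything it can match strictly via routeFromHAR.
export class HarMocks {
  private har: Har;

  constructor(private harPath: string) {
    this.har = JSON.parse(readFileSync(harPath, 'utf-8'));
  }

  get entries(): Entry[] {
    return this.har.log.entries;
  }

  async attach(context: BrowserContext): Promise<void> {
    await context.routeFromHAR(this.harPath, { notFound: 'abort' });
  }
}
```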
Step 2. State for working with dynamic parameters.
We need to store already used request id values and remember request id values found in one request so we can use them in another.
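A sketch of that state, with field names taken from the prose (usedRequestIds, foundSearchParams); how they are embedded in the wrapper is an assumption:

```ts
// State for dynamic parameters (sketch); field names follow the prose above.
export class HarMocksDynamicState {
  // Request ids that have already been matched to a HAR entry, so each entry is used only once.
  readonly usedRequestIds = new Set<string>();
  // Request id values captured from one request's query params, reused when matching later request bodies.
  readonly foundSearchParams = new Map<string, string>();
}
```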
Step 3. Helper methods for searching entries in HAR.
3.1. URL normalization: removing dynamic query params.
3.2. Cache key for found params.
3.3. Easy access to query params.
3.4. Finding a HAR entry by URL and method, with normalization and an extra check.
3.5. Finding a HAR entry where a parameter is inside the body.
3.6. Extracting request_id from the request body.
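A few of these helpers sketched in simplified form (3.1, 3.4, and 3.6); the names, signatures, and the default 'requestId' parameter are assumptions:

```ts
import type { Entry } from 'har-format';

// 3.1. URL normalization: drop dynamic query params so URLs can be compared.
function normalizeUrl(rawUrl: string, dynamicParams: string[] = ['requestId']): string {
  const url = new URL(rawUrl);
  for (const param of dynamicParams) url.searchParams.delete(param);
  return url.toString();
}

// 3.4. Find a HAR entry by URL + method, with normalization and an extra check
// (for example, "this entry's request id has not been used yet").
function findEntry(
  entries: Entry[],
  targetUrl: string,
  method: string,
  extraCheck: (entry: Entry) => boolean = () => true,
): Entry | undefined {
  const normalizedTarget = normalizeUrl(targetUrl);
  return entries.find(
    (entry) =>
      entry.request.method === method &&
      normalizeUrl(entry.request.url) === normalizedTarget &&
      extraCheck(entry),
  );
}

// 3.6. Extract request_id from a JSON request body, if present.
function extractRequestIdFromBody(entry: Entry): string | undefined {
  const text = entry.request.postData?.text;
  if (!text) return undefined;
  try {
    return JSON.parse(text).request_id;
  } catch {
    return undefined;
  }
}
```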
Step 4. Intercepting a request with request id in the query.
The first endpoint receives the request id in the query string of a URL with a dynamic request id. Playwright itself won't find the HAR entry because of strict URL comparison, so we do it for Playwright.
What's happening here? We intercept the request by URL pattern. We remove the dynamic request id from the URL to find the matching entry in the HAR. Via a custom check we ensure this request id hasn't been used before. We save the request id in foundSearchParams and usedRequestIds. We return the response from the HAR via route.fulfill instead of hitting the real back end.
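A sketch of this interception, assuming the dynamic value travels in a 'requestId' query parameter and reusing the hypothetical findEntry helper and state object from the sketches above:

```ts
import type { BrowserContext } from '@playwright/test';
import type { Entry } from 'har-format';

async function interceptRequestIdInQuery(
  context: BrowserContext,
  entries: Entry[],
  state: { usedRequestIds: Set<string>; foundSearchParams: Map<string, string> },
) {
  // Match by predicate so the query string is taken into account.
  await context.route((url) => url.searchParams.has('requestId'), async (route) => {
    const request = route.request();

    // Find a HAR entry for this URL ignoring the dynamic requestId value,
    // and make sure the entry's own recorded id hasn't been served yet.
    const entry = findEntry(entries, request.url(), request.method(), (candidate) => {
      const recordedId = new URL(candidate.request.url).searchParams.get('requestId');
      return recordedId !== null && !state.usedRequestIds.has(recordedId);
    });
    if (!entry) return route.abort();

    // Remember the id recorded in the HAR so the body-based lookup
    // in the next step can find the logically connected entry.
    const recordedId = new URL(entry.request.url).searchParams.get('requestId')!;
    state.usedRequestIds.add(recordedId);
    state.foundSearchParams.set('requestId', recordedId);

    await route.fulfill({
      status: entry.response.status,
      contentType: entry.response.content.mimeType,
      body: entry.response.content.text ?? '',
    });
  });
}
```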
Step 5.
Intercepting a request where request id is in the body.
The second endpoint should use the same request id,
but this time inside the request body as request_id.
Logic.
We take the previously saved request id from foundSearchParams.
We search for a HAR entry where this request id is in the body under the request_id key.
If found, we return that entry to the test.
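A sketch of this lookup; the '**/api/submit' endpoint, the request_id key, and the state object are assumptions carried over from the previous sketches:

```ts
import type { BrowserContext } from '@playwright/test';
import type { Entry } from 'har-format';

async function interceptRequestIdInBody(
  context: BrowserContext,
  entries: Entry[],
  state: { foundSearchParams: Map<string, string> },
) {
  // Safely read request_id from a JSON request body.
  const requestIdOf = (entry: Entry): string | undefined => {
    try {
      return entry.request.postData?.text ? JSON.parse(entry.request.postData.text).request_id : undefined;
    } catch {
      return undefined;
    }
  };

  await context.route('**/api/submit', async (route) => {
    // The request id remembered while intercepting the first endpoint:
    const recordedId = state.foundSearchParams.get('requestId');

    // Find the HAR entry whose body carries the same id under the request_id key.
    const entry = entries.find((candidate) => recordedId !== undefined && requestIdOf(candidate) === recordedId);
    if (!entry) return route.abort();

    await route.fulfill({
      status: entry.response.status,
      contentType: entry.response.content.mimeType,
      body: entry.response.content.text ?? '',
    });
  });
}
```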
Step 6.
Static URL plus dynamic request_id in the body. An additional case: the URL doesn't change, but the request body contains a unique request_id, and we want to use each such entry only once.
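A sketch of this case, again with a hypothetical '**/api/*' pattern and request_id key, and a fallback so requests Playwright can match strictly still go through routeFromHAR:

```ts
import type { BrowserContext } from '@playwright/test';
import type { Entry } from 'har-format';

async function interceptStaticUrlDynamicBody(
  context: BrowserContext,
  entries: Entry[],
  state: { usedRequestIds: Set<string> },
) {
  // Safely read request_id from a JSON request body.
  const requestIdOf = (entry: Entry): string | undefined => {
    try {
      return entry.request.postData?.text ? JSON.parse(entry.request.postData.text).request_id : undefined;
    } catch {
      return undefined;
    }
  };

  await context.route('**/api/*', async (route) => {
    // Pick an entry for the same URL whose body request_id hasn't been served yet.
    const entry = entries.find((candidate) => {
      if (candidate.request.url !== route.request().url()) return false;
      const bodyId = requestIdOf(candidate);
      return bodyId !== undefined && !state.usedRequestIds.has(bodyId);
    });
    // Let routeFromHAR (or the next handler) try when nothing matches here.
    if (!entry) return route.fallback();

    state.usedRequestIds.add(requestIdOf(entry)!);
    await route.fulfill({
      status: entry.response.status,
      contentType: entry.response.content.mimeType,
      body: entry.response.content.text ?? '',
    });
  });
}
```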
Putting it all together. To make test logic behave the same way as with a real back end, we need to: let Playwright use the HAR wherever strict matching works; for dynamic endpoints, intercept requests manually, normalize the URL and body, pick the correct entry, and ensure each request id is used only once. The code above implements exactly this layer on top of routeFromHAR.
What does Playwright think about this?
At the moment, Playwright remains a simple HAR player, with no normalization layer.
Strict matching by URL, method, body.
No cross-environment, base URL-aware HAR support.
Playwright doesn't know your domain logic.
That's why separate solutions appear in the ecosystem on top of Playwright: Playwright Advanced HAR, Playwright Network Cache, or custom solutions like Hacking Playwright Network Recordings or The Hidden Cost of Playwright's API Mocking, and our custom solution.
Conclusion. Instead of a bunch of fragile mocks, we ended up with a unified system where
HAR is recorded under any user, automatically normalized, used with completely fake data,
and works correctly even with dynamic parameters. This approach is especially useful in large
projects with many API calls and parallel development. E2E tests remain fast, deterministic,
and independent of real users.
Written by Sergey Levkavik,
senior front-end developer at Social Discovery Group.
Thank you for listening to this Hackernoon story,
read by artificial intelligence.
Visit Hackernoon.com to read, write, learn and publish.
