[SAP BTP Chronicles #5] CAP tests - speed and approaches
🔔 This is the 5th part of a series about BTP-related topics. See the others here.
One day I came across a blog post and was intrigued by its approach to writing tests in CAP: the server is started once and then reused by all test modules. I was particularly interested in the performance aspect, because making it work requires deviating a bit from the standard approach.
Sounds enigmatic? Let's break it down step by step.
Testing - Standard Way
If we look at the example tests for the bookshop project, in custom-handlers.test.js we have:

const { GET, POST, expect } = cds.test("@capire/bookshop");

In the odata.test.js file:

const { GET, expect, axios } = cds.test("@capire/bookshop");

And so on. In other words, if we run npx jest, each test module will start its own test server using cds.test.
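To make the per-module behavior concrete, here is a tiny self-contained stub - plain Node, not the real cds.test API; fakeCdsTest, serverId, and bootCount are invented for illustration - showing that two test modules each end up with their own server instance:

```javascript
// Illustrative stub (not the real cds API): each test module calling the
// start function gets its own independent server instance.
let bootCount = 0;
function fakeCdsTest(project) {
  bootCount += 1; // in cds.test, a real server boot happens here
  return { project, serverId: bootCount, GET: (p) => ({ status: 200, path: p }) };
}

// two "test modules", as in custom-handlers.test.js and odata.test.js
const moduleA = fakeCdsTest("@capire/bookshop");
const moduleB = fakeCdsTest("@capire/bookshop");

console.log(bootCount);                             // 2: one server per module
console.log(moduleA.serverId !== moduleB.serverId); // true: independent instances
```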
Or a single server?
In the approach from the referenced post, the server is started only once, because there is a single main file that aggregates the tests (here test.js):
// ...
const VehicleService = require("../test/vehicle-service/vehicle-service-test");
// ...
const { GET, POST, PATCH, DELETE, test, expect, axios } = cds.test(
  "serve",
  __dirname + "/../srv",
  "--in-memory"
);
// ...
// run tests
const oTest = new TestClass(GET, POST, PATCH, DELETE, test, expect, axios, cds);
VehicleService.test(oTest);
…and this test method wraps the actual tests and runs them using the GET, POST,
etc. functions provided by the previously started server:
vehicle-service-test.js
// ...
module.exports = {
  test: function (oTestClass) {
    describe("Vehicle Service", () => {
      const { GET, POST, test, expect } = oTestClass;
      beforeAll(async () => {});
      beforeEach(async () => {
        await test.data.reset();
      });
      it("Create Vehicle", async () => {
        // ...
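Stripped of the CAP specifics, the wiring above boils down to: boot once, then hand the helpers to each sub-module's exported test function. A minimal self-contained sketch of that pattern - startServerOnce and vehicleTests are illustrative stand-ins, not project code:

```javascript
// Sketch of the single-server aggregation pattern with plain functions.
// In the real project the helpers come from cds.test("serve", ...).
function startServerOnce() {
  // pretend this is the expensive server boot, done exactly once
  return {
    GET: (path) => ({ status: 200, path }),
    POST: (path, body) => ({ status: 201, path, body }),
  };
}

// sub-module pattern: export a test(helpers) function instead of calling
// cds.test itself, so it can run against the shared server
const vehicleTests = {
  test({ GET, POST }) {
    const read = GET("/Vehicles");
    const created = POST("/Vehicles", { name: "Truck" });
    return { read, created };
  },
};

const helpers = startServerOnce();          // one boot for all modules
const results = vehicleTests.test(helpers); // each module reuses the helpers
```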
Let's compare
I was curious about the speed improvement in a larger project with this approach.
In one of our projects, we have over 100 test suites, so I focused on just a portion - 19 of them, with 76 tests. Each test suite operates normally, starting its own test server through cds.test, as in the bookshop example.
Local execution time:
After switching to the single test-server approach:
The gain is rather small - less than 2 seconds.
But that's locally - on a powerful computer with lots of cores, processors, RAM, and all the bells and whistles.
Let's see how it performed in CICD, where we typically get a container with rather limited resources.
Results from SAP CICD:
Here, there's already a difference of one minute with a total time of 6 minutes.
So, Should We Switch to a Single Server?
Well, I won't, because the gain is still relatively small. After wrapping the describe/it skeleton in the test method, you can't run a single test using the Jest Runner plugin in VSCode - which is quite important for me, as I use it frequently:
Additionally, a single test server is more problematic
when you're mocking calls using prepend.
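Why prepend is awkward with a shared server can be shown with a small stand-in - StubService is illustrative, not the CAP API: a mock registered via a prepend-style hook on the one shared service instance is still there when the next test module runs, unless it is explicitly cleaned up; with one server per module, the next module simply boots a fresh instance.

```javascript
// Illustrative stub (not the CAP API): shows why prepend-style mocks on a
// single shared server can leak into later test modules.
class StubService {
  constructor() { this.handlers = []; }
  on(event, fn) { this.handlers.push({ event, fn }); }
  prepend(register) {
    // register new handlers so they run before the existing ones
    const existing = this.handlers;
    this.handlers = [];
    register(this);
    this.handlers.push(...existing);
  }
  emit(event, req) {
    for (const h of this.handlers) if (h.event === event) return h.fn(req);
  }
}

const shared = new StubService();
shared.on("READ", () => "real data");

// test module A mocks READ via prepend...
shared.prepend((srv) => srv.on("READ", () => "mocked data"));
const seenByA = shared.emit("READ");

// ...and test module B, reusing the same server, still sees the mock
const seenByB = shared.emit("READ"); // "mocked data", not "real data"
```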
"But we gained a minute in CICD!" No worries - tests with their own test server are independent, so in GitHub Actions we have Jest sharding set up.
Currently, our 100 test suites and 597 tests on 4 shards complete in 3 minutes.
UPDATE: You can also use Node test runner sharding; we switched to it because it works faster than Jest:
- run: node --test --test-shard=${{ matrix.shard }}/${{ strategy.job-total }}
Speaking of Node test runner...
cds test
The latest addition in CAP is the cds test command, which uses Node's native test runner under the hood.
Locally, for our sample project, it's faster than Jest (98s vs 106s).
More about cds test:
- CAP Tools - What’s New and Hot by Christian Georgi - https://broadcast.sap.com/replay/250709_recap (at 52:44)
- Quiet cds test output - two ways by DJ Adams
- Testing SAP CAP Node.js apps with cds.test by Mauricio Lauffer
Summary
Although starting a single server gives some gains, the trade-offs were too big for me. The setup remained unchanged:
- locally: cds test, and Jest Runner in VSCode for quick single-test execution
- GitHub Actions: sharding with node --test
- SAP CICD: cds test, because sharding doesn't work here. But in our case, it's already the last stage with deployment: we get fast feedback from the 2 previous stages.