[SAP BTP Chronicles #2] Simple queries for CAP logs in OpenSearch
🔔This is the 2nd part about BTP-related topics. See others here.
In the previous part, we examined how our application logs (built with the CAP framework - SAP Cloud Application Programming Model) look at various endpoints - in the terminal, in BTP logs, and in OpenSearch. In this episode, we will focus only on the latter and see how to make some helpful queries.
Sample application
As a base, I am using the same simple application that I prepared in the first part. I am making just one modification that will be useful for learning purposes.
Currently, in our simulation of a business logic error, we use `req.reject`. We will also add the ability to trigger `req.error` when attempting to retrieve the second entry. This way, we can easily trigger such errors by simply making calls in the browser.
srv\service.js

```javascript
const cds = require("@sap/cds");

const LOG = cds.log("my-service");

class NamesService extends cds.ApplicationService {
  init() {
    const { MyNames } = this.entities;

    this.before("READ", MyNames, (req) => {
      LOG.info("👽 this is my info");
      if (req.data.ID === "a7aa1b2d-7514-41dc-82bf-91af2ba67cc1") {
        LOG.error("👻 service logic error");
        req.reject("you can't read this entity");
      }
      if (req.data.ID === "842fe729-4077-4190-940b-54e33cfdd77d") {
        req.error(400, "this entity is also not for you");
      }
    });

    return super.init();
  }
}

module.exports = NamesService;
```
Additionally, I am deploying the application to two spaces in my account: `dev` and `sandbox`, so I have the application deployed twice.
Searching for a specific log
Let's start by trying to find our logging entry from the `server.js` file, which is:

```javascript
...
const LOG = cds.log('my-server');
...
LOG.info('🕺served!');
...
```

We have two pieces of information useful for the query - the logger name `my-server` and the log message itself - `served`.
On my `dev` space, I go to my service instance, click on the Logs tab, and then select Open Kibana Dashboard. Then, in OpenSearch, I navigate to Discover.
Note: The successor to Application Logging is Cloud Logging, which is also based on OpenSearch. So, what I'm demonstrating here will also be useful for handling logs in the new service. However, the logs themselves and the way they are collected are slightly different, but that's a topic for another post.
We will see a screen with fields to choose from (left side), an input for queries (top), a chart (middle), and a list of logs (bottom). By default, all fields of each log entry (stored as a JSON document in OpenSearch) are displayed.
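To make the queries that follow easier to read, here is a minimal sketch of what one such JSON document might look like. The field names match the ones used in the queries below (`component_name`, `msg`, `logger`, `level`, `channel`, `space_name`); the values and the overall shape are illustrative only - a real entry contains many more fields.

```javascript
// Illustrative shape of a single log entry stored in OpenSearch.
// A real document contains many more fields (timestamps, container info, etc.).
const logEntry = {
  component_name: "sample-logging-srv", // the application that produced the log
  organization_name: "my-org",          // placeholder account (organization) name
  space_name: "dev",                    // Cloud Foundry space
  logger: "my-server",                  // the name passed to cds.log(...)
  level: "INFO",                        // log level
  channel: "OUT",                       // stdout (OUT) vs. stderr (ERR)
  msg: "🕺served!",                     // the log message itself
};

// A query like `logger: my-* and level: INFO` conceptually filters like this:
const matches =
  logEntry.logger.startsWith("my-") && logEntry.level === "INFO";
console.log(matches); // → true
```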
Let's query our log - we are only interested in our CAP service and a specific text value - so we can use the fields `component_name` and `msg` for the query.
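Such a query could look like this (the application name `sample-logging-srv` matches the one used in the queries later in this post):

```
component_name: sample-logging-srv and msg: served
```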
Hint: by clicking on DQL, you can go to the help page for queries (OpenSearch Dashboards Query Language).
In the field on the right, we can change the time range for the query, keeping in mind that we have access to logs for a maximum of 7 days - this is not a configuration setting, but rather how the Application Log service in BTP works.
We get a strange result... 2 entries - but this log should appear only once, when the CAP service starts.
Let's create a clearer view - from the fields on the left (Available Fields), click on "+" for `msg` and `space_name`. These fields will appear in Selected Fields.
And at the bottom of the screen, the situation became a bit clearer - since I deployed the application to two spaces, I have 2 logs - one from each space. Logstash, which is the component that collects logs, puts everything here - if I had the application on another account, I would get a third log. That's why in queries or column selection, we need to use `organization_name` for the desired account and `space_name` (or, if we prefer, `organization_id` and `space_id`) for the spaces.
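For example, to restrict results to a single deployment, we can combine both fields in the query (here `my-org` is just a placeholder for your actual organization name):

```
organization_name: my-org and space_name: dev and component_name: sample-logging-srv
```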
Searching for specific loggers
Let's remind ourselves how we defined our loggers in the application:

server.js

```javascript
const LOG = cds.log("my-server");
```

srv\service.js

```javascript
const LOG = cds.log("my-service");
```
Let's assume that we want to see all logs added by loggers with our ID, only from the `dev` space.
```
component_name: sample-logging-srv and logger: my-* and space_name: dev
```
Result (after starting the service, I triggered the error logic 2 times):
For clarity, I have selected only some fields. Here's what we can observe:
- `LOG.info(...)` - has a `level` of INFO and goes through the `channel` OUT
- `LOG.error(...)` - has a `level` of ERROR and goes through the `channel` ERR
This gives us additional possibilities for queries if we are interested in aggregating specific information (e.g., only error logs).
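For instance, a query that aggregates only the error logs from our own loggers could look like this (a sketch based on the field values observed above):

```
component_name: sample-logging-srv and space_name: dev and logger: my-* and level: ERROR
```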
Logs for different errors
OK, let's now see what the JSON documents look like for errors of these types:

- `req.reject(...)`
- `req.error(...)`
- a validation error - for example, the user did not provide a value for a field marked as `@mandatory`
- a typical JavaScript error in the application - for example, trying to read an `undefined` value
First, I searched for my `req.reject`:
```
component_name: sample-logging-srv and space_name: dev and msg: "you can't read this entity"
```
As you can see, the logger is the framework's `cds` - you can find more about standard loggers and their IDs in the documentation. The value for `level` and `channel` is ERR.
Now let's take a look at `req.error`:
```
component_name: sample-logging-srv and space_name: dev and msg: "this entity is also not for you"
```
What has changed? This time, `level` is WARN.
Moving on - we are trying to create a new entry without a required field. We then search in the logs:
```
component_name: sample-logging-srv and space_name: dev and msg: "value is required"
```
Similarly to `req.error`, the `level` is WARN (under the hood, CAP also uses `req.error` to report errors during validation of `@mandatory` fields).
Finally, a JavaScript error:
```
component_name: sample-logging-srv and space_name: dev and msg: "Cannot read properties of undefined"
```
`level` and `channel` have the value ERR.
So we can see that depending on the type of event, we have different values, and skillful query control can help us aggregate only the information we want.
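For example, to aggregate only "hard" errors while excluding the WARN-level entries produced by `req.error`, a query along these lines could work (a sketch; verify the exact level values in your own logs):

```
component_name: sample-logging-srv and space_name: dev and channel: ERR and not level: WARN
```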
Summary
- For our own loggers (`LOG...`), it is worth giving them meaningful names if that helps us aggregate logs more efficiently.
- When preparing queries for errors, pay attention to whether we really want to include errors from standard/custom logic (`req.error`); in that case, in queries with `channel` = ERR, we should also control the values for `level`.