While working in Splunk, you may have faced an issue where logs stopped being indexed, and after investigation you found that a forwarder had gone down and many events were lost. This problem can be handled easily with an alert that triggers whenever a forwarder goes down, so that the necessary actions can be taken quickly. In this blog we are going to show you how to create an alert for down forwarders in your environment.

Step 1 – Setting up the query to find down forwarders.

Run this search to get metadata about all the hosts present in the environment:

| metadata type=hosts index=os index=_internal


Use the following query to calculate how long each forwarder has been inactive:

| metadata type=hosts index=os index=_internal
| eval age = now() - recentTime
| eval status = case(age <= 1800, "Running", age > 1800, "DOWN")
| convert ctime(recentTime) AS LastActiveOn
| eval age=tostring(age,"duration")
| eval host = upper(host)
| table host age LastActiveOn status
| rename host as Forwarder, age as "Time Since Last Heartbeat (hh:mm:ss)", LastActiveOn as "Last Active On", status as Status

Note: Here we use age = 1800 seconds (30 minutes) to decide the status of a forwarder; adjust this threshold as per your requirement.
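Putting the pieces together, here is an alert-ready variant of the same search (a sketch; the 1800-second threshold and the os/_internal indexes are the assumptions from Step 1) that keeps only the hosts that have been silent past the threshold:

```spl
| metadata type=hosts index=os index=_internal
| eval age = now() - recentTime
| where age > 1800
| convert ctime(recentTime) AS LastActiveOn
| eval age = tostring(age, "duration")
| eval host = upper(host)
| table host age LastActiveOn
| rename host as Forwarder, age as "Time Since Last Heartbeat (hh:mm:ss)", LastActiveOn as "Last Active On"
```

Saving this variant as the alert search removes the need for a custom trigger condition: every row it returns is already a down forwarder, so the alert can simply trigger when the number of results is greater than zero.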

Step 2 – Setting the condition to trigger the alert when a forwarder goes down.

To alert when a forwarder goes down, save the search above as an alert. In the Trigger Conditions, select the Custom option and add the condition | where Status="DOWN"
to get results only for down forwarders.
Also set Throttle to avoid repeated alerts for the same result over the selected time window.
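For reference, the custom trigger condition is itself a small search fragment that Splunk applies to the alert's results; a minimal form, matching the exact Status value produced by the eval in Step 1 (note the uppercase "DOWN"), is:

```spl
| where Status="DOWN"
```

A form like `search Status="DOWN"` should behave the same; either way, the alert fires only when at least one row survives the condition.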

If you are still facing issues creating an alert for down forwarders in your environment, feel free to ask your doubts in the comment box below, and don't forget to follow us on 👍 social networks. Happy Splunking! 😉