
EJBCA Dashboards and Reporting

EJBCA is one of the best software products for managing your public key infrastructure. Because it is open source and flexible, you can cover basically any use case you need. However, it lacks a few features, such as reporting and visualisation of data, that would let you promptly see an overview and the status of activities.

Some of the most basic requirements for reporting and dashboarding are:

  • Overall count of activity on EJBCA, providing information about peaks and unusual increases or decreases of activity
  • Activities and events that happened over time, together with the identification of the administrator
  • Number of issued or revoked certificates over time
  • Algorithms used, in order to quickly identify and resolve non-compliant certificates and private keys

Such information can be used to define and automate rules that protect the public key infrastructure and optimise its operations.

This article focuses on how to ship logs from EJBCA to the ELK stack to achieve the above-mentioned goals.

Infrastructure setup

First, we need to be aware of the underlying infrastructure. For our purposes, we used the following simple setup, which can be further extended with additional EJBCA nodes and components:

  • EJBCA with incoming connections blocked
  • Wildfly set up to ship the logs through syslog to the ELK stack
  • ELK stack consisting of Logstash to parse the syslog data, Elasticsearch to store the indexed data, and Kibana to provide visualisation and dashboarding

EJBCA logging

Typically, EJBCA logs have the following format:

%d{HH:mm:ss,SSS} %-5p [%c] (%t) %s%E%n

For example, let’s parse the following log entry:

08:37:44,619 INFO [org.cesecore.audit.impl.log4j.Log4jDevice] (default task-5) 2020-03-15 08:37:44+01:00; ACCESS_CONTROL; SUCCESS; ACCESSCONTROL; CORE; CN=SuperAdmin,O=3Key Company s.r.o.; ; ; ; resource0=/administrator; resource1=/ra_functionality/view_end_entity

Part              Value
%d{HH:mm:ss,SSS}  08:37:44,619
%-5p              INFO
[%c]              [org.cesecore.audit.impl.log4j.Log4jDevice]
(%t)              (default task-5)
%s%E%n            2020-03-15 08:37:44+01:00; ACCESS_CONTROL; SUCCESS; ACCESSCONTROL; CORE; CN=SuperAdmin,O=3Key Company s.r.o.; ; ; ; resource0=/administrator; resource1=/ra_functionality/view_end_entity

The %s%E%n part of the message can be a Security Audit Log entry containing the fields timestamp, eventType, eventStatus, module, service, authToken, customId, searchDetail1, searchDetail2, and additionalDetails.
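
Mapping the example entry above onto these fields gives:

timestamp          2020-03-15 08:37:44+01:00
eventType          ACCESS_CONTROL
eventStatus        SUCCESS
module             ACCESSCONTROL
service            CORE
authToken          CN=SuperAdmin,O=3Key Company s.r.o.
customId           (empty)
searchDetail1      (empty)
searchDetail2      (empty)
additionalDetails  resource0=/administrator; resource1=/ra_functionality/view_end_entity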

See the EJBCA Logging documentation for more information.

Overview of the setup

In order to set up log forwarding and provide the desired visualisations, we need to perform the following steps:

  • Set up the Logstash syslog input, a filter to parse the logs, and an output to index the data into Elasticsearch
  • Configure a Wildfly syslog appender to ship the EJBCA logs to Logstash
  • Prepare the index mapping and create visualisations and a dashboard to see the data

This is the simple setup. Additionally, we can parse the data further to see more details in the logs, prepare procedures for including historical data in the index, prepare for format modifications, or create custom log entries in order to ship custom data we would like to see and monitor.

Logstash pipeline setup

First, we need to prepare the Logstash pipeline that will parse the EJBCA logs. We will use the syslog input for that purpose.

The input part contains the basic information about the protocol and port, in this case syslog on port 5143. We are using the multiline codec in order to correctly receive and parse log entries that span multiple lines, such as Java exceptions:

input {
  syslog {
    host => "0.0.0.0"
    port => 5143

    codec => multiline {
      pattern => "<%{POSINT:priority}>%{SYSLOGLINE}"
      negate => "true"
      what => "previous"

      auto_flush_interval => 1
      max_lines => 500
      charset => "UTF-8"
    }
  }
}
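
Before wiring up Wildfly, it can be useful to verify that the Logstash input is listening. One quick way (an assumption, not part of the original setup) is to send a test line with the util-linux logger utility, pointing it at the Logstash host and port defined above:

logger --server lab04.3key.company --port 5143 --tcp "test message for the ejbca pipeline"

If the message shows up in the pipeline output (or in Elasticsearch), the syslog input and the network path are working.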

The most important part of the pipeline definition is the filter. Here we define how the logs should be parsed and organised before we send them to the index. The filter should parse the data according to its format, and if we would like to extract more details, we can add custom patterns:

filter {

  mutate {
    add_field => {
      "logstash_node" => "lab04.3key.company"
      "[@metadata][index]" => "ejbca-log"
    }

    rename => {
      "message" => "original_message"
      "logsource" => "[hostname]"
      "facility" => "[syslog][facility]"
      "facility_label" => "[syslog][facility_label]"
      "severity" => "[syslog][severity]"
      "severity_label" => "[syslog][severity_label]"
      "priority" => "[syslog][priority]"
      "pid" => "[syslog][pid]"
      "program" => "[syslog][program]"
    }
  }

  # parse syslog and ejbca message
  grok {
    match => { "original_message" => "" }
  }

  if "_grokparsefailure" not in [tags]{
    mutate {
      gsub => [
        # Remove the spaces around the dash in strings like "default-threads - 24"
        "[ejbca][thread]", " - ", "-",
        # Replace any remaining spaces with a dash
        "[ejbca][thread]", " ", "-"
      ]
      remove_field => [ "original_message", "host", "timestamp" ]

    }

    # message like:
    #  2018-10-29 16:55:14+01:00;ADMINWEB_ADMINISTRATORLOGGEDIN;SUCCESS;ADMINWEB;EJBCA;CN=SuperAdmin
    #  ;-1537224577;31557AE2AFC1994A;;remoteip=192.168.0.40
    #
    if [ejbca][message] =~ /^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\+\d{2}:\d{2};/ {
      grok {
        match => { "[ejbca][message]" => "" }
      }

      mutate {
        replace => { "[@metadata][index]" => "ejbca-audit" }
      }
    }

    if [ejbca][additional_details] and [ejbca][event_type]{

      kv {
        allow_duplicate_values => false
        field_split => ";"
        source => "[ejbca][additional_details]"
        target => "[ejbca][additional]"
        transform_key => "lowercase"
      }
    }
  }
}
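
The two grok match expressions above are empty in this listing; the exact patterns are not shown in the article. As a rough sketch, assuming the syslog payload carries the %d{HH:mm:ss,SSS} %-5p [%c] (%t) %s%E%n format shown earlier, and using snake_case names for the Security Audit Log fields (only event_type, additional_details, thread, and message are confirmed by the filter itself), the two patterns could look like this:

# first grok: split the Wildfly log line into the [ejbca][*] fields
grok {
  match => {
    "original_message" => "%{TIME:[ejbca][time]}\s+%{LOGLEVEL:[ejbca][loglevel]}\s+\[%{JAVACLASS:[ejbca][class]}\] \(%{DATA:[ejbca][thread]}\) %{GREEDYDATA:[ejbca][message]}"
  }
}

# second grok: split the Security Audit Log message into its individual fields
grok {
  match => {
    "[ejbca][message]" => "%{TIMESTAMP_ISO8601:[ejbca][timestamp]};\s*%{DATA:[ejbca][event_type]};\s*%{DATA:[ejbca][event_status]};\s*%{DATA:[ejbca][module]};\s*%{DATA:[ejbca][service]};\s*%{DATA:[ejbca][auth_token]};\s*%{DATA:[ejbca][custom_id]};\s*%{DATA:[ejbca][search_detail1]};\s*%{DATA:[ejbca][search_detail2]};\s*%{GREEDYDATA:[ejbca][additional_details]}"
  }
}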

The last part is the definition of the output, where we index the data into an Elasticsearch index with a date suffix, so that each day's log data ends up in a separate index:

output {
    elasticsearch {
      hosts => ["http://127.0.0.1:9200"]
      index => "%{[@metadata][index]}-%{+YYYY.MM.dd}"

      user => "logstash_ejbca_user"
      password => ""
    }
}
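
The password value is left empty in the listing. If Elasticsearch security is enabled, the referenced logstash_ejbca_user must exist and be allowed to create and write the ejbca-* indices. A minimal sketch using the Elasticsearch security API follows; the role name and the chosen privileges are assumptions, not part of the original setup:

PUT _security/role/logstash_ejbca_writer
{
  "indices": [
    {
      "names": [ "ejbca-log-*", "ejbca-audit-*" ],
      "privileges": [ "create_index", "create", "write" ]
    }
  ]
}

PUT _security/user/logstash_ejbca_user
{
  "password": "<choose-a-password>",
  "roles": [ "logstash_ejbca_writer" ]
}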

Wildfly configuration

In order to enable Wildfly to ship the EJBCA logs to Logstash, we need to include the following configuration in standalone.xml:

<custom-handler name="SYSLOG" class="org.jboss.logmanager.handlers.SyslogHandler" module="org.jboss.logmanager">
    <level name="INFO"/>
    <properties>
        <property name="serverHostname" value="lab04.3key.company"/>
        <property name="hostname" value="lab01.3key.company"/>
        <property name="port" value="5143"/>
        <property name="protocol" value="TCP"/>
        <property name="appName" value="ejbca"/>
        <property name="facility" value="LOCAL_USE_7"/>
        <property name="encoding" value="US-ASCII"/>
        <property name="syslogType" value="RFC3164"/>
        <property name="maxLength" value="65000"/>
    </properties>
</custom-handler>

<logger category="org.ejbca">
    <level name="INFO"/>
    <handlers>
        <handler name="SYSLOG"/>
    </handlers>
</logger>

<logger category="org.cesecore">
    <level name="INFO"/>
    <handlers>
        <handler name="SYSLOG"/>
    </handlers>
</logger>

We can do the same through the Wildfly CLI:

/subsystem=logging/custom-handler=SYSLOG:add(class=org.jboss.logmanager.handlers.SyslogHandler, module=org.jboss.logmanager, properties={serverHostname="lab04.3key.company", hostname="lab01.3key.company", port="5143", protocol="TCP", appName="ejbca", facility="LOCAL_USE_7", encoding="US-ASCII", syslogType="RFC3164", maxLength="65000"})

/subsystem=logging/custom-handler=SYSLOG:write-attribute(name=level, value=INFO) 
 
/subsystem=logging/logger=org.ejbca:add
/subsystem=logging/logger=org.ejbca:write-attribute(name=level, value=INFO)
/subsystem=logging/logger=org.ejbca:assign-handler(name="SYSLOG")
 
/subsystem=logging/logger=org.cesecore:add
/subsystem=logging/logger=org.cesecore:write-attribute(name=level, value=INFO)
/subsystem=logging/logger=org.cesecore:assign-handler(name="SYSLOG")

Index mapping and visualisation

When the logs are shipped and indexed in Elasticsearch, we can see the result on the Discover tab in Kibana.
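
Individual events can also be queried directly in Discover. For example, a KQL filter for administrator logins in the audit index, using the event type shown in the filter comment earlier, could be:

ejbca.event_type : "ADMINWEB_ADMINISTRATORLOGGEDIN"

The dotted field name assumes the [ejbca][event_type] field produced by the pipeline.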

The mapping of the ejbca indices should be adjusted to allow proper indexing and searching of the data; a sketch of a suitable index template is shown further below. When the mapping is ready, we can create the following visualisations:

  • Total number of logs in time with the option to filter
  • Event module distribution
  • Event service distribution
  • Event status distribution
  • Event type distribution
  • List of most active EJBCA administrators
  • List of most popular End Entities

There is, of course, much more you can do and see.
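
As noted above, the mapping needs to be in place before fields such as ejbca.service can be aggregated in visualisations. The mapping itself is not part of this article; a rough sketch of a legacy index template, assuming the ejbca.* field names used in the pipeline sketch above, could look like this:

PUT _template/ejbca
{
  "index_patterns": [ "ejbca-log-*", "ejbca-audit-*" ],
  "mappings": {
    "properties": {
      "@timestamp": { "type": "date" },
      "ejbca": {
        "properties": {
          "event_type":   { "type": "keyword" },
          "event_status": { "type": "keyword" },
          "module":       { "type": "keyword" },
          "service":      { "type": "keyword" },
          "auth_token":   { "type": "keyword" },
          "message":      { "type": "text" }
        }
      }
    }
  }
}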

This is a sample ndjson object representing the visualisation configuration for the event service distribution:

{"attributes":{"description":"","kibanaSavedObjectMeta":{"searchSourceJSON":"{\"filter\":[],\"query\":{\"query\":\"\",\"language\":\"lucene\"},\"indexRefName\":\"kibanaSavedObjectMeta.searchSourceJSON.index\"}"},"title":"EJBCA - Event Service distribution","uiStateJSON":"{}","version":1,"visState":"{\"title\":\"EJBCA - Event Service distribution\",\"type\":\"pie\",\"params\":{\"type\":\"pie\",\"addTooltip\":true,\"addLegend\":true,\"legendPosition\":\"right\",\"isDonut\":true,\"labels\":{\"show\":false,\"values\":true,\"last_level\":true,\"truncate\":100}},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"segment\",\"params\":{\"field\":\"ejbca.service\",\"size\":5,\"order\":\"desc\",\"orderBy\":\"1\"}}]}"},"id":"a81d4520-c88b-11e8-a2fc-c3fe502d7417","migrationVersion":{"visualization":"7.4.2"},"references":[{"id":"bb0596f0-661a-11ea-ae38-217801d097f7","name":"kibanaSavedObjectMeta.searchSourceJSON.index","type":"index-pattern"}],"type":"visualization","updated_at":"2020-03-15T07:17:05.922Z","version":"WzgyLDJd"}

Get in touch with us to know more!

You can find the configuration and setup code on the GitHub page.

We are also on LinkedIn, follow us!

If you like what we are doing, then do not hesitate to contact us for more information! We believe that our experience and knowledge can benefit your business.