Watchdog Server User Guide

  • Overview

    This guide explains how the watchdog server works in simple terms. It focuses on how the server checks a controller's license and provides the correct settings based on which department it belongs to. It also includes a new section on the key expiry feature and instructions on accessing the Grafana dashboard for data analytics.

  • Workflow

    • Step 1: Sending Information from the Controller

    The controller gathers details about itself, such as:

    - Unique ID (GUID)
    - Hostname (name of the computer)
    - Version (software version)
    - IP addresses
    - MAC address (unique identifier for network interfaces)
    - Operating system (OS)
    - Label (a tag to identify the machine)
    

    This information is sent to the /api/check-license/ endpoint on the watchdog server.
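The payload above might be assembled as follows (a minimal Python sketch; the field names shown and the commented-out request are illustrative assumptions, not the exact wire format):

```python
import platform
import socket
import uuid

def build_license_payload():
    """Gather the controller details listed above.

    Field names are illustrative assumptions, not the exact wire format.
    """
    try:
        ip_addresses = [socket.gethostbyname(socket.gethostname())]
    except OSError:
        ip_addresses = ["127.0.0.1"]
    node = uuid.getnode()  # 48-bit hardware address as an integer
    return {
        "guid": str(uuid.uuid4()),         # unique ID (GUID)
        "hostname": socket.gethostname(),  # name of the computer
        "version": "3.0",                  # software version
        "ip_addresses": ip_addresses,
        "mac_address": ":".join(f"{(node >> shift) & 0xff:02x}"
                                for shift in range(40, -8, -8)),
        "os": platform.system(),           # operating system
        "label": "DefaultLabel",           # tag identifying the machine
    }

# The controller would then POST this to the watchdog server, e.g.:
# requests.post(f"http://{watchdog_ip}:{watchdog_port}/api/check-license/",
#               json=build_license_payload())
```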

    • Step 2: Checking the License

    The server receives the information and checks if the license is valid. It ensures all required details are present and that the GUID is formatted correctly. If everything checks out and the GUID is not already in use, it registers the GUID. If the number of active devices exceeds the allowed limit, the request is denied.
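The checks described above can be sketched like this (the return values, field set, and device limit are assumptions for illustration; the real server's responses and storage will differ):

```python
import uuid

REQUIRED_FIELDS = {"guid", "hostname", "version", "ip_addresses",
                   "mac_address", "os", "label"}
MAX_ACTIVE_DEVICES = 10   # assumed limit; the real value comes from the license
registered_guids = set()  # stands in for the server's persistent registry

def check_license(payload):
    """Sketch of the Step 2 checks (return strings are illustrative)."""
    # All required details must be present.
    if not REQUIRED_FIELDS <= set(payload):
        return "denied: missing fields"
    # The GUID must be formatted correctly.
    try:
        uuid.UUID(payload["guid"])
    except ValueError:
        return "denied: malformed GUID"
    # An already-registered GUID stays valid; a new one is registered only
    # while the active-device limit has not been reached.
    if payload["guid"] in registered_guids:
        return "valid"
    if len(registered_guids) >= MAX_ACTIVE_DEVICES:
        return "denied: device limit reached"
    registered_guids.add(payload["guid"])
    return "valid"
```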

    • Step 3: Matching Information to Configuration Settings

    The server reads the configuration and mapping files. It uses the details from the controller to find the best matching department based on criteria like label, IP address, hostname, MAC address, and operating system.
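A best-match lookup of this kind might look like the following sketch (the one-point-per-criterion scoring is an assumption for illustration, not the server's exact algorithm):

```python
def match_department(controller, mapping):
    """Return the department whose criteria best match the controller details.

    `mapping` maps department names to criteria dicts (label, ip, hostname,
    mac, os, ...); each matching criterion scores one point. The scoring
    scheme is an illustrative assumption.
    """
    best_department, best_score = None, 0
    for department, criteria in mapping.items():
        score = sum(1 for key, value in criteria.items()
                    if controller.get(key) == value)
        if score > best_score:
            best_department, best_score = department, score
    return best_department
```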

    • Step 4: Selecting and Customizing the Configuration

    Once the right department is identified, the corresponding settings from the browsermon-watchdog.conf file are selected. These settings are customized based on the controller's operating system (e.g., setting the log directory path differently for Windows and Linux).

    • Step 5: Sending the Config to the Controller

    The new configuration from browsermon-watchdog.conf is sent to the controller along with the valid-license message.


  • Key Expiry Feature

    • Overview

      The key expiry feature allows administrators to set an expiry date for the keys used by controllers. This ensures controllers must periodically check in with the watchdog server to renew their keys, enhancing security and control over access.

    • Workflow

      • Step 1: Generating and Assigning Keys

      When a controller initially registers with the watchdog server or when a key expires, the server generates a new key with an associated expiry date. This key is then assigned to the controller.

      • Step 2: Checking Expiry

      Each time the controller communicates with the watchdog server, the server checks the expiry date of the assigned key.

      • Step 3: Logging Remaining Days

      Upon successful communication, the watchdog server logs the number of days remaining until the key expires, giving administrators visibility for monitoring.

      • Step 4: Key Expiry

      Once the expiry date is reached, the key is considered expired. The controller will no longer be able to receive configurations or updates from the watchdog server until a new key is generated and assigned.
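The workflow above can be sketched in a few lines (function names and return shapes are illustrative assumptions, not the server's actual API):

```python
from datetime import datetime, timedelta

def new_expiry(duration_days, now=None):
    """Step 1: derive a fresh key's expiry date from the configured duration."""
    now = now or datetime.utcnow()
    return now + timedelta(days=duration_days)

def key_status(expiry_date, now=None):
    """Steps 2-4: check expiry and report the remaining days.

    Returns ("valid", remaining_days) or ("expired", 0).
    """
    now = now or datetime.utcnow()
    if now >= expiry_date:
        return "expired", 0
    # Step 3: the server logs this remaining-days figure on each check-in.
    return "valid", (expiry_date - now).days
```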


  • Implementation Details

    • Configuration

      Administrators can configure the expiry duration for keys in the watchdog.conf file.
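      Example (the option name key_expiry_days is an assumed illustration; check the watchdog.conf shipped with your installation for the exact key):

      [default]
      key_expiry_days=30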

    • Logging

      The remaining days until key expiry are logged in the watchdog server's log files. Administrators can monitor these logs to ensure timely renewal of keys.

    • Grafana Dashboard

      The watchdog server integrates with Grafana to provide a comprehensive interface for viewing history and data analytics.

    • Accessing the Grafana Dashboard

      Users can view the Grafana dashboard by navigating to http://localhost:1514 in their web browser. This dashboard presents the history and analytics data collected by the watchdog server in an intuitive and visual format.

  • Benefits of the Grafana Dashboard

    • Data Visualization: The Grafana dashboard offers various charts, graphs, and tables to help users visualize historical data and trends.
    • Customizable Views: Users can customize the dashboard to display the most relevant metrics and information for their needs.
    • Real-Time Monitoring: The dashboard allows for real-time monitoring of data, providing up-to-date insights into the performance and status of the watchdog server.
  • EUNOMATIX Threat Intel (ETI)

    If ETI mode was enabled during installation, the watchdog automatically creates a threat collector that gathers fresh threat intelligence from PhishTank and URLHaus. This data is stored in a centralized Elasticsearch database running on port 9200, allowing BrowserMon endpoints to classify browsed URLs into threat categories. The threat collector runs daily at midnight to keep the database up to date. In watchdog.conf you can set eti_index_ttl, which represents the time-to-live (TTL) in days for the Elasticsearch threat index before deletion. For ETI to function, the following domains must be accessible from the network where your watchdog is deployed.

    • PhishTank : data.phishtank.com

    • URLHaus : urlhaus.abuse.ch
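    For example, watchdog.conf might contain an entry like the following (eti_index_ttl is the documented option; the section name shown is an assumption):

    [default]
    eti_index_ttl=30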

  • EUNOMATIX URL Classification Service (UCS)

    To receive daily classification updates, set ucs_updates to true in watchdog.conf before installation. If UCS mode was set during installation, the watchdog automatically creates a ucs_client that collects daily classification updates from the EUNOMATIX UCS API. If ucs_updates is set to false in watchdog.conf, no API calls are made and the local snapshot is restored automatically for UCS classification. To receive daily UCS updates, the following cloud URL (https://ucs.eunomatix.com:8000) must be accessible to the centralized watchdog instance.

    • EUNOMATIX Cloud : ucs.eunomatix.com:8000
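    Example watchdog.conf entry (ucs_updates is the documented option; the section name shown is an assumption):

    [default]
    ucs_updates=true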

BrowserMon User Guide

  • Introduction

    In today's digital landscape, effective monitoring of web activities is crucial to safeguarding enterprise security. BrowserMon 3.0, developed by EUNOMATIX, is a cutting-edge solution designed to provide detailed insights and control over browsing activities within organizations. This guide will walk you through the key features and setup of BrowserMon 3.0, helping you understand how to utilize it effectively.

  • Key Features Explained

    • Centralized Logging with Watchdog

      BrowserMon 3.0 collects browsing history data from various devices and sends it to a central server in real time. This feature enables comprehensive security analysis and facilitates easy searchability without relying on external tools such as Splunk or ELK. It also helps meet data retention and regulatory compliance requirements.

    • Kafka Integration for a Central History Database

      BrowserMon 3.0 integrates with Apache Kafka and MongoDB to store browser history data in a central database. This setup provides scalable data management and real-time data streaming through Kafka, ensuring up-to-date and consistent logging and analytics. This feature is optional and can be disabled if other solutions such as Splunk or ELK are preferred.

    • Health Checker Implementation

      BrowserMon 3.0 includes a Health Checker that verifies the licensing status by sending periodic requests to the Watchdog server. It attempts multiple retries until confirmation is received, ensuring that controllers and the BrowserMon service remain operational.

    • Enhanced Dashboards with Grafana

      BrowserMon 3.0 includes Grafana-based dashboards that offer clear visibility into:

      • Controller counts
      • Operational issues

      These dashboards help administrators quickly derive actionable insights from browsing data.

      • Accessing the Grafana Dashboard

        Users can view the Grafana dashboard by navigating to http://localhost:1514 in their web browser. This dashboard presents the history and analytics data collected by the watchdog server in an intuitive and visual format.

    • EUNOMATIX Threat Intel (ETI)

      To identify malicious URLs in real time, the Browsermon controller interacts with the ETI service, leveraging its intelligence for threat assessment. Additionally, Browsermon maintains a local URL cache to minimize redundant ETI queries, optimizing performance and reducing unnecessary requests. The cache has a configurable time-to-live (TTL) with a default of 30 days, allowing customization in minutes, hours, or days as needed. Furthermore, the cache is capped at a customizable maximum size with default of 1000 URLs, ensuring controlled memory usage and preventing excessive growth beyond the specified limit.
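The cache behaviour described above (TTL plus a size cap) can be sketched as follows; this is a simplified illustration, not the controller's actual implementation:

```python
import time
from collections import OrderedDict

class URLCache:
    """TTL- and size-bounded URL cache sketching the behaviour described
    above; the controller's real implementation may differ."""

    def __init__(self, ttl_seconds=30 * 24 * 3600, max_size=1000):
        self.ttl = ttl_seconds         # default TTL: 30 days
        self.max_size = max_size       # default cap: 1000 URLs
        self._entries = OrderedDict()  # url -> (verdict, inserted_at)

    def put(self, url, verdict, now=None):
        now = time.time() if now is None else now
        self._entries[url] = (verdict, now)
        self._entries.move_to_end(url)
        # Evict the oldest entries once the cap is exceeded.
        while len(self._entries) > self.max_size:
            self._entries.popitem(last=False)

    def get(self, url, now=None):
        now = time.time() if now is None else now
        entry = self._entries.get(url)
        if entry is None:
            return None               # miss: the controller queries ETI
        verdict, inserted_at = entry
        if now - inserted_at > self.ttl:
            del self._entries[url]    # expired: force a fresh ETI lookup
            return None
        return verdict
```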

      For endpoints to use the EUNOMATIX ETI service, the username, password, host, and port need to be written in the browsermon.conf file under the [elastic] section.

      Example:

      [elastic]
      host=localhost
      port=9200
      username=Browsermon
      password=BrowsermonElasticUser
      eti_index=threat_index
      ucs_index=eunomatix_ucs
      
    • EUNOMATIX URL Classification Service (UCS)

      The EUNOMATIX URL Classification Service (UCS) is a free cloud-based service available to all watchdog servers. It provides deep insights into web traffic by categorizing the websites accessed within an organization. It is an optional service that customers can selectively enable. The EUNOMATIX UCS service classifies URLs into more than 80 diverse categories.

      For endpoints to use the EUNOMATIX UCS service, the username, password, host, and port need to be written in the browsermon.conf file under the [elastic] section.

      Example:

      [elastic]
      host=localhost
      port=9200
      username=Browsermon
      password=BrowsermonElasticUser
      eti_index=threat_index
      ucs_index=eunomatix_ucs
      
  • Types of Configurations

    BrowserMon 3.0 supports different types of configurations to tailor settings based on organizational needs:

    1. Default Configuration (browsermon.conf): Provides baseline settings for all deployments.

    2. Central Watchdog Configuration (browsermon-watchdog.conf): Overrides the default configuration for centralized management. This configuration is received from the watchdog server and written to the browsermon-watchdog.conf file in your installation directory.

    3. Local Configuration (browsermon-local.conf): Allows local administrators to customize settings for specific requirements, overriding both default and watchdog configurations.
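The precedence between these three files amounts to a per-key merge, which can be sketched as follows (a simplified illustration of the override order, not BrowserMon's actual loader):

```python
def effective_config(default_conf, watchdog_conf, local_conf):
    """Per-key merge of the three layers: browsermon.conf is the baseline,
    browsermon-watchdog.conf overrides it, and browsermon-local.conf
    overrides both (simplified sketch of the precedence rules)."""
    merged = dict(default_conf)
    merged.update(watchdog_conf)
    merged.update(local_conf)
    return merged
```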

  • Example Configuration File (browsermon.conf)

[server]
watchdog_ip=0.0.0.0
watchdog_port=8900
[installation]
install_dir=C:\\browsermon
[elastic]
host=localhost
port=9200
username=Browsermon
password=BrowsermonElasticUser
eti_index=threat_index
ucs_index=eunomatix_ucs
[default]
browser=firefox
mode=scheduled
schedule_window=1m
logdir=C:\\browsermon\\history
logmode=csv
rotation=1h
backup_count=0
log_level=DEBUG
kafka_mode=true
kafka_server_url=localhost:9092
eti_mode=false
ucs_mode=false
cache_ttl=30d
cache_max_size=1000
machine_label=DefaultLabel
  • Configuration Explanation

    • watchdog_ip: IP address of the Watchdog server.
    • watchdog_port: Port number where Watchdog service listens for connections.
    • install_dir: Directory where BrowserMon will be installed.
    • host: ETI/UCS server hostname or IP address.
    • port: Port number on which the ETI/UCS server is listening.
    • username: Username for ETI/UCS authentication.
    • password: Password for ETI/UCS authentication.
    • eti_index: Name of the ETI index from which threat intel data is fetched.
    • ucs_index: Name of the UCS index from which classification data is fetched.
    • browser: Specifies the browser(s) to monitor, such as Firefox, Chrome, or Edge.
    • mode: Determines whether BrowserMon operates in scheduled mode (default) or real-time mode.
    • schedule_window: Sets the interval between each browser data collection iteration.
    • logdir: Defines the directory where browser history log files are stored.
    • logmode: Specifies the format of the history log files (CSV or JSON).
    • rotation: Sets the interval for rotating history log files.
    • backup_count: Defines the number of backup copies of history log files to retain.
    • log_level: Specifies the logging level (INFO or DEBUG).
    • kafka_mode: Enables (true) or disables (false) Kafka integration for centralized logging.
    • kafka_server_url: URL of the bootstrap Kafka server.
    • eti_mode: Enables (true) or disables (false) EUNOMATIX Threat Intel service.
    • ucs_mode: Enables (true) or disables (false) EUNOMATIX URL Classification Service.
    • cache_ttl: Amount of time a URL remains in the cache.
    • cache_max_size: Maximum number of URLs the cache can hold (e.g., a value of 100 means at most 100 URLs are cached).
    • machine_label: The label sent as part of the payload to the watchdog server (this can be set by the controller in the browsermon-local.conf and browsermon.conf files).

    *All of these configuration variables can be changed in the browsermon-local.conf file, except watchdog_ip and watchdog_port.

  • Getting Started

    1. Install BrowserMon: Deploy the lightweight agent on devices running Linux, Mac, or Windows.

    2. Configure Watchdog: Set up the Watchdog server to centrally manage and monitor all BrowserMon controllers.

    3. Define Configurations: Use mapping.conf and browsermon-watchdog.conf to customize configurations based on organizational requirements. This is done on the watchdog server.

    4. Enable Central Logging: Optionally, configure a central history database for real-time logging and analytics.

    By following this guide, you'll be equipped to effectively deploy and utilize BrowserMon 3.0 to enhance your organization's web monitoring and security capabilities.


Watchdog Deployment Guide (with Kafka and Elasticsearch)

  • Introduction

    This guide explains how to install and configure Watchdog using the watchdog-installer Python script. Watchdog can optionally integrate with Kafka (for data ingestion) and Elasticsearch (for data storage and searching).

    The installer supports:

    1. Interactive prompts for Docker registry authentication (optional).
    2. Enabling/disabling Kafka mode and/or Elasticsearch mode.
    3. Automatic creation of necessary directories under /opt/watchdog.
    4. File-by-file copy of important Watchdog files (prompts only for /opt/watchdog/watchdog/ overwrites).
    5. Automatic generation of a .env file in your current directory, containing the environment variables Docker Compose will need.
    6. A final Docker Compose deployment that launches the selected services.


  • Prerequisites

    1. Root/Sudo Access

      The installer must be run as root (or with sudo). It manages system directories (e.g., /opt/watchdog) and sets ownership of data directories.

    2. Docker and Docker Compose

      • Docker installed and running (docker ps should work).
      • Docker Compose plugin or Docker Compose CLI installed.
      • Optionally, Docker registry credentials if you plan to pull images from a private Docker registry.

    3. Local Files/Directories

      • A local deps/ directory that contains:
        • deps/connect-jars/ (Kafka connector JARs).
        • deps/watchdog/ (Watchdog source files).
        • deps/init-kafka-connect.sh (initialization script).
      • Docker Compose YAML files in the same directory from which you run the installer:
        • docker-compose.base.yml (required).
        • docker-compose.kafka.yml (if enabling Kafka).
        • docker-compose.elastic.yml (if enabling Elasticsearch).
        • docker-compose.eti.yml (if enabling ETI).
        • docker-compose.ucs.yml (if enabling UCS).
      • Optional config files (if needed for custom setups):
        • eti.yml (if elastic_mode=true and you want to override the default ES config).
        • Any custom .conf files for Watchdog (placed in deps/watchdog before running the script).

  • Installation Steps

    1. Clone or place the watchdog-installer script in the same directory where your docker-compose.*.yml files exist (because it writes a .env file locally and references the compose files in the current directory).

    2. Ensure the script is executable:

       chmod +x watchdog-installer

      If you're using the Python file directly, you can run python watchdog-installer install without chmod +x.

    3. Run the installer (as root):

       sudo ./watchdog-installer install

    4. The script will:

      1. Prompt you for Docker registry authentication (optional).
      2. Prompt whether to enable Kafka/ETI/UCS modes.
      3. If Kafka mode is enabled, prompt for a KAFKA_EXTERNAL_IP.
      4. If Elasticsearch mode is enabled, prompt for username, passwords, etc.
      5. Create /opt/watchdog, /opt/watchdog/kafka_data, and /opt/watchdog/elasticsearch_data as needed.
      6. Copy files from deps/ into /opt/watchdog.
        • connect-jars and init-kafka-connect.sh are forced overwrites (no prompt).
        • The watchdog directory is copied file-by-file with a prompt for each existing file.
      7. Generate a .env file in your current directory (where Docker Compose can see it).
      8. Finally, run docker compose up -d using docker-compose.base.yml, plus the Kafka and/or Elastic Compose files if those modes were selected.
    5. Verify the installation:

      • Check running containers: docker ps
      • If Kafka was enabled: kafka, zookeeper, and kafka-connect containers should be running.
      • If Elasticsearch was enabled: an elasticsearch container (and possibly kibana) should be running, depending on your compose files.

  • Environment Variables and .env File

    The script automatically writes environment variables to a .env file in the current working directory, which Docker Compose loads automatically. If Kafka/Elasticsearch is enabled, you'll see lines like:

    KAFKA_EXTERNAL_IP=your.machine.ip
    ELASTIC_HOST=elasticsearch
    ELASTIC_PORT=9200
    ELASTIC_PASSWORD=BrowsermonElasticAdmin
    ELASTIC_USERNAME=Browsermon
    ELASTIC_USER_PASSWORD=BrowsermonElasticUser
    ELASTIC_SCHEME=https

    You can modify these directly if needed (though re-running the script may overwrite them).


  • Configuration Files

    Depending on your Watchdog setup, you might need additional configuration files within deps/watchdog (which eventually lands in /opt/watchdog/watchdog):

    1. watchdog.conf
    2. ssl-config.ini
    3. mapping.conf
    4. browsermon-watchdog.conf
    5. elasticsearch.yml (if ETI/UCS=true and you want to override defaults)

    Make sure to place these files in deps/watchdog before running the installer if you want them copied to /opt/watchdog/watchdog.


  • Service Configuration

    1. Kafka

      • Typically uses port 8092 (or whatever is in your docker-compose.kafka.yml).
      • Uses Kafka Connect to push data to MongoDB (or other sinks).
      • The init-kafka-connect.sh script is placed in /opt/watchdog, but you typically don't need to run it manually unless your setup requires it.

    2. MongoDB

      • Often deployed alongside Kafka (depending on your docker-compose.kafka.yml).
      • The sink connector is configured to push Watchdog data to MongoDB.

    3. Elasticsearch

      • Typically listens on 9200 for HTTP/HTTPS calls.
      • The default scheme is https (from the script prompt) but can be changed if you have a custom ES config.
      • If using elasticsearch.yml, it should be placed in deps/watchdog or your custom location and referenced by docker-compose.elastic.yml.

  • Updating the Installation

    If you re-run the installer and /opt/watchdog is detected, the script enters Update Mode:

    • You are prompted only before overwriting files inside /opt/watchdog/watchdog.
    • Other files (like init-kafka-connect.sh or connect-jars) are overwritten automatically.
    • The script then re-runs Docker Compose to update the containers.

    Example:

    sudo ./watchdog-installer install

    If it sees an existing installation, you'll be asked:

    Existing installation detected at /opt/watchdog
    Do you want to proceed with the update? (y/n)


  • Uninstalling / Cleaning Up

    To stop and remove the Watchdog containers (Kafka/Elasticsearch included), run:

    sudo ./watchdog-installer clean

    This will:

    1. Look for docker-compose.base.yml, docker-compose.kafka.yml, and docker-compose.elastic.yml in your current directory.
    2. Run docker compose down -v with whichever files are found, removing containers and volumes.

    Note: This does not delete /opt/watchdog or the data directories. If you want to remove them entirely, you can do so manually:

    sudo rm -rf /opt/watchdog


  • Troubleshooting

    • Checking Logs

    View logs for a specific container:

    docker logs <container-name>

    Examples:

    • docker logs kafka-connect
    • docker logs elasticsearch

    • Verifying Kafka Connect

    Inside the Kafka Connect container:

    docker exec -it kafka-connect /bin/bash

    Then check connector status:

    curl -X GET http://kafka-connect:8083/connectors/mongo-sink-connector/status

    A valid Mongo Sink Connector shows:

    ```json
    {
      "name": "mongo-sink-connector",
      "connector": {
        "state": "RUNNING",
        "worker_id": "connect-worker-1"
      },
      "tasks": [
        {
          "id": 0,
          "state": "RUNNING",
          "worker_id": "connect-worker-1"
        }
      ],
      "type": "sink"
    }
    ```
    
    • Checking Elasticsearch

    If Elasticsearch is running with HTTPS and basic auth:

    For the default configuration:

    curl -k -u elastic:BrowsermonElasticAdmin https://localhost:9200/_cluster/health

    - `-k` ignores self-signed certificate errors.
    - Adjust the user/password as you configured them during installation prompts.
    
    • Internet Access

    Important: For the Elasticsearch-based URL classification to function, the following domains must be accessible from the network where your watchdog is deployed.

    For ETI:

    • PhishTank : data.phishtank.com
    • URLHaus : urlhaus.abuse.ch

    For UCS (if ucs_updates=true in watchdog.conf):

    • EUNOMATIX Cloud : ucs.eunomatix.com:8000

    If the watchdog is behind a proxy server, the relevant proxy settings must be enabled in the watchdog.conf file before installation.

    Example:

    [proxy]
    proxy_mode=false
    http_proxy=http://10.10.10.10:1234
    https_proxy=https://10.10.10.10:1234
    

    If the proxy server requires authentication, enter the URL accordingly, e.g. http://username:[email protected]:8080


  • Offline Image Deployment (Optional)

    If you have Docker images saved locally (e.g., .tar files) for offline deployment:

    1. Load them:

       docker load -i your_offline_watchdog_image.tar

    2. Skip the Docker Hub login during the script's prompts.
    3. Ensure the Docker Compose files reference the images you loaded (matching tags).