Guides
Watchdog Server User Guide
-
Overview
This guide explains how the watchdog server works in simple terms. It focuses on how the server checks a controller's license and provides the correct settings based on which department it belongs to. It also includes a new section on the key expiry feature and instructions on accessing the Grafana dashboard for data analytics.
-
Workflow
- Step 1: Sending Information from the Controller
The controller gathers details about itself, such as:
- Unique ID (GUID)
- Hostname (name of the computer)
- Version (software version)
- IP addresses
- MAC address (unique identifier for network interfaces)
- Operating system (OS)
- Label (a tag to identify the machine)
This information is sent to the
/api/check-license/
endpoint on the watchdog server.
- Step 2: Checking the License
The server receives the information and checks if the license is valid. It ensures all required details are present and that the GUID is formatted correctly. If everything checks out and the GUID is not already in use, it registers the GUID. If the number of active devices exceeds the allowed limit, the request is denied.
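A minimal sketch of these Step 2 checks, assuming a GUID in standard UUID format; the field names, the device limit, and the `check_license` helper are illustrative, not the server's actual implementation:

```python
import uuid

REQUIRED_FIELDS = {"guid", "hostname", "version", "ip_addresses",
                   "mac_address", "os", "label"}
MAX_ACTIVE_DEVICES = 5  # assumption: the real limit comes from the license

def check_license(payload, registered_guids):
    """Return (ok, reason); registers the GUID on first successful contact."""
    missing = REQUIRED_FIELDS - payload.keys()
    if missing:
        return False, f"missing fields: {sorted(missing)}"
    try:
        uuid.UUID(payload["guid"])              # GUID must be well-formed
    except ValueError:
        return False, "malformed GUID"
    if payload["guid"] not in registered_guids:
        if len(registered_guids) >= MAX_ACTIVE_DEVICES:
            return False, "active device limit exceeded"
        registered_guids.add(payload["guid"])   # not in use yet: register it
    return True, "license valid"
```

Already-registered GUIDs pass straight through without counting against the limit again.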
- Step 3: Matching Information to Configuration Settings
The server reads the configuration and mapping files. It uses the details from the controller to find the best matching department based on criteria like label, IP address, hostname, MAC address, and operating system.
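The matching step can be sketched as a scoring pass over the mapping rules; the rule format and scoring scheme here are assumptions for illustration, not the server's actual mapping.conf logic:

```python
def best_department(controller, mappings):
    """Pick the department whose rules match the most controller details.

    mappings: department name -> dict of expected values
    (keys such as label, ip, hostname, mac_address, os).
    """
    def score(rules):
        # one point per criterion that matches the controller's details
        return sum(1 for key, expected in rules.items()
                   if controller.get(key) == expected)
    return max(mappings, key=lambda dept: score(mappings[dept]))
```

A department with no rules scores zero, so it only wins when nothing more specific matches.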
- Step 4: Sending Configuration to the Controller
Once the right department is identified, the corresponding settings from the
browsermon-watchdog.conf
file are selected. These settings are customized based on the controller's operating system (e.g., setting the log directory path differently for Windows and Linux).
- Step 5: Sending the Config to the Controller
The new config from
browsermon-watchdog.conf
is sent to the controller along with the valid license message.
-
Key Expiry Feature
-
Overview
The key expiry feature allows administrators to set an expiry date for the keys used by controllers. This ensures controllers must periodically check in with the watchdog server to renew their keys, enhancing security and control over access.
-
Workflow
- Step 1: Generating and Assigning Keys
When a controller initially registers with the watchdog server or when a key expires, the server generates a new key with an associated expiry date. This key is then assigned to the controller.
- Step 2: Checking Expiry
Each time the controller communicates with the watchdog server, the server checks the expiry date of the assigned key.
- Step 3: Logging Remaining Days
Upon successful communication, the watchdog server logs the number of days remaining until the key expires, for monitoring purposes.
- Step 4: Key Expiry
Once the expiry date is reached, the key is considered expired. The controller will no longer be able to receive configurations or updates from the watchdog server until a new key is generated and assigned.
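The workflow above can be sketched as follows; the function names and the 30-day default are assumptions (the real duration is configured in watchdog.conf):

```python
from datetime import datetime, timedelta

KEY_LIFETIME_DAYS = 30  # assumption: actual value comes from watchdog.conf

def issue_key(now):
    """Step 1: generate a key with an associated expiry date."""
    return {"expires_at": now + timedelta(days=KEY_LIFETIME_DAYS)}

def remaining_days(key, now):
    """Steps 2-3: days left before the key expires; <= 0 means expired."""
    return (key["expires_at"] - now).days

def is_expired(key, now):
    """Step 4: an expired key blocks further configurations/updates."""
    return remaining_days(key, now) <= 0
```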
-
-
Implementation Details
-
Configuration
Administrators can configure the expiry duration for keys in the
watchdog.conf
file.
-
Logging
The remaining days until key expiry are logged in the watchdog server's log files. Administrators can monitor these logs to ensure timely renewal of keys.
-
Grafana Dashboard
The watchdog server integrates with Grafana to provide a comprehensive interface for viewing history and data analytics.
-
Accessing the Grafana Dashboard
Users can view the Grafana dashboard by navigating to
http://localhost:1514
in their web browser. This dashboard presents the history and analytics data collected by the watchdog server in an intuitive and visual format.
-
-
Benefits of the Grafana Dashboard
- Data Visualization: The Grafana dashboard offers various charts, graphs, and tables to help users visualize historical data and trends.
- Customizable Views: Users can customize the dashboard to display the most relevant metrics and information for their needs.
- Real-Time Monitoring: The dashboard allows for real-time monitoring of data, providing up-to-date insights into the performance and status of the watchdog server.
-
EUNOMATIX Threat Intel (ETI)
If ETI mode was set during installation, the watchdog automatically creates a threat collector that gathers fresh threat intel from PhishTank and URLhaus. This data is stored in a centralized Elasticsearch database running on port 9200. BrowserMon endpoints can then classify browsed URLs into threat categories. The threat collector runs daily at midnight to keep the database freshly updated. In watchdog.conf you can set the value of eti_index_ttl, which represents the time-to-live (TTL) in days for the Elasticsearch threat index before deletion. For ETI to function, the following domains must be accessible from the network where your watchdog is deployed.
-
PhishTank :
data.phishtank.com
-
URLHaus :
urlhaus.abuse.ch
-
-
EUNOMATIX URL Classification Service (UCS)
If you want daily classification updates, set ucs_updates to true in watchdog.conf before installation. If UCS mode was set during installation, the watchdog automatically creates a ucs_client to collect daily classification updates from the EUNOMATIX UCS API. If ucs_updates is set to false in watchdog.conf, no API calls are made and the local snapshot is restored automatically for UCS classification. To get UCS daily updates, the following cloud URL (https://ucs.eunomatix.com:8000) must be accessible to the centralized watchdog instance.
- EUNOMATIX Cloud :
ucs.eunomatix.com:8000
BrowserMon User Guide
-
Introduction
In today's digital landscape, effective monitoring of web activities is crucial to safeguarding enterprise security. BrowserMon 3.0, developed by EUNOMATIX, is a cutting-edge solution designed to provide detailed insights and control over browsing activities within organizations. This guide will walk you through the key features and setup of BrowserMon 3.0, helping you understand how to utilize it effectively.
-
Key Features Explained
-
Centralized Logging with Watchdog BrowserMon 3.0 collects browsing history data from various devices and sends it to a central server in real time. This feature enables comprehensive security analysis and facilitates easy searchability without relying on external tools like Splunk or ETI. It also helps meet data retention and regulatory compliance requirements.
-
Kafka Integration for Central History Database BrowserMon 3.0 integrates with Apache Kafka and MongoDB to store browser history data in a central database. This setup provides scalable data management and real-time data streaming through Kafka, ensuring up-to-date and consistent logging and analytics. This feature is optional and can be disabled if other solutions like Splunk or ETI are preferred.
-
Health Checker Implementation BrowserMon 3.0 includes a Health Checker that verifies the licensing status by sending periodic requests to the Watchdog server. It attempts multiple retries until confirmation is received, ensuring that controllers and the BrowserMon service remain operational.
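The retry behavior described above can be sketched as a simple loop; the retry count, backoff delays, and the injected `check_license` callable are illustrative assumptions, not the actual BrowserMon implementation:

```python
import time

def confirm_license(check_license, max_retries=5, base_delay=1.0):
    """Call check_license() until it succeeds or retries are exhausted."""
    for attempt in range(max_retries):
        if check_license():
            return True                          # confirmation received
        time.sleep(base_delay * (2 ** attempt))  # exponential backoff
    return False                                 # still unconfirmed
```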
-
Enhanced Dashboards with Grafana
BrowserMon 3.0 includes Grafana-based dashboards that offer clear visibility into:
- Controller counts
- Operational issues
These dashboards help administrators quickly derive actionable insights from browsing data.
-
Accessing the Grafana Dashboard
Users can view the Grafana dashboard by navigating to
http://localhost:1514
in their web browser. This dashboard presents the history and analytics data collected by the watchdog server in an intuitive and visual format.
-
EUNOMATIX Threat Intel (ETI)
To identify malicious URLs in real time, the Browsermon controller interacts with the ETI service, leveraging its intelligence for threat assessment. Additionally, Browsermon maintains a local URL cache to minimize redundant ETI queries, optimizing performance and reducing unnecessary requests. The cache has a configurable time-to-live (TTL) with a default of 30 days, allowing customization in minutes, hours, or days as needed. Furthermore, the cache is capped at a customizable maximum size with default of 1000 URLs, ensuring controlled memory usage and preventing excessive growth beyond the specified limit.
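The cache behavior described above (TTL-based expiry plus a hard size cap) can be sketched as follows; the class and method names are illustrative, not Browsermon's actual cache implementation:

```python
import time
from collections import OrderedDict

class URLCache:
    def __init__(self, ttl_seconds=30 * 24 * 3600, max_size=1000):
        self.ttl = ttl_seconds          # default mirrors the 30-day TTL
        self.max_size = max_size        # default mirrors the 1000-URL cap
        self._entries = OrderedDict()   # url -> (verdict, inserted_at)

    def put(self, url, verdict, now=None):
        now = time.time() if now is None else now
        if url in self._entries:
            self._entries.pop(url)
        elif len(self._entries) >= self.max_size:
            self._entries.popitem(last=False)   # evict the oldest entry
        self._entries[url] = (verdict, now)

    def get(self, url, now=None):
        now = time.time() if now is None else now
        entry = self._entries.get(url)
        if entry is None:
            return None
        verdict, inserted_at = entry
        if now - inserted_at > self.ttl:
            self._entries.pop(url)              # expired: drop and miss
            return None
        return verdict
```

A cache hit avoids a redundant ETI query; an expired or evicted entry forces a fresh lookup.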
For endpoints to use the EUNOMATIX ETI service, the username, password, host, and port need to be written in the browsermon.conf file under the elastic section.
Example:
[elastic]
host=localhost
port=9200
username=Browsermon
password=BrowsermonElasticUser
eti_index=threat_index
ucs_index=eunomatix_ucs
-
EUNOMATIX URL Classification Service (UCS)
The EUNOMATIX URL Classification Service (UCS) is a free cloud-based service that any watchdog server can use to gain deep insight into web traffic by categorizing websites accessed within an organization. It is an optional service that customers can selectively enable. The EUNOMATIX UCS service classifies URLs into more than 80 diverse categories.
For endpoints to use the EUNOMATIX UCS service, the username, password, host, and port need to be written in the browsermon.conf file under the elastic section.
Example:
[elastic]
host=localhost
port=9200
username=Browsermon
password=BrowsermonElasticUser
eti_index=threat_index
ucs_index=eunomatix_ucs
-
-
Types of Configurations
BrowserMon 3.0 supports different types of configurations to tailor settings based on organizational needs:
-
Default Configuration (browsermon.conf): Provides baseline settings for all deployments.
-
Central Watchdog Configuration (browsermon-watchdog.conf): Overrides default configurations for centralized management. This configuration is received from the watchdog server and written to the
browsermon-watchdog.conf
file in your installation directory.
-
Local Configuration (browsermon-local.conf): Allows local administrators to customize settings for specific requirements, overriding both default and watchdog configurations.
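The precedence above (local overrides watchdog, which overrides default) can be sketched with Python's configparser, where later reads override earlier values; the `merge_configs` helper is illustrative, not BrowserMon's actual loader:

```python
from configparser import ConfigParser

def merge_configs(*layers):
    """Each layer is an INI string; later layers override earlier ones."""
    config = ConfigParser()
    for layer in layers:
        config.read_string(layer)   # merges into existing sections
    return config
```

Calling it with the contents of browsermon.conf, then browsermon-watchdog.conf, then browsermon-local.conf yields the effective settings.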
-
-
Example Configuration File (
browsermon.conf
)
[server]
watchdog_ip=0.0.0.0
watchdog_port=8900
[installation]
install_dir=C:\\browsermon
[elastic]
host=localhost
port=9200
username=Browsermon
password=BrowsermonElasticUser
eti_index=threat_index
ucs_index=eunomatix_ucs
[default]
browser=firefox
mode=scheduled
schedule_window=1m
logdir=C:\\browsermon\\history
logmode=csv
rotation=1h
backup_count=0
log_level=DEBUG
kafka_mode=true
kafka_server_url=localhost:9092
eti_mode=false
ucs_mode=false
cache_ttl=30d
cache_max_size=1000
machine_label=DefaultLabel
-
Configuration Explanation
- watchdog_ip: IP address of the Watchdog server.
- watchdog_port: Port number where Watchdog service listens for connections.
- install_dir: Directory where BrowserMon will be installed.
- host: ETI/UCS server hostname or IP address.
- port: Port number on which the ETI/UCS service is listening.
- username: Username for ETI/UCS authentication.
- password: Password for ETI/UCS authentication.
- eti_index: Name of the ETI index from which threat intel data will be fetched.
- ucs_index: Name of the UCS index from which classification data will be fetched.
- browser: Specifies the browser(s) to monitor, such as Firefox, Chrome, or Edge.
- mode: Determines whether BrowserMon operates in scheduled mode (default) or real-time mode.
- schedule_window: Sets the interval between each browser data collection iteration.
- logdir: Defines the directory where browser history log files are stored.
- logmode: Specifies the format of the history log files (CSV or JSON).
- rotation: Sets the interval for rotating history log files.
- backup_count: Defines the number of backup copies of history log files to retain.
- log_level: Specifies the logging level (INFO or DEBUG).
- kafka_mode: Enables (true) or disables (false) Kafka integration for centralized logging.
- kafka_server_url: URL of the bootstrap Kafka server.
- eti_mode: Enables (true) or disables (false) EUNOMATIX Threat Intel service.
- ucs_mode: Enables (true) or disables (false) EUNOMATIX URL Classification Service.
- cache_ttl: How long a URL remains in the cache before expiring.
- cache_max_size: Upper bound on the number of cached URLs (e.g., if set to 100, at most 100 URLs will be cached).
- machine_label: The label sent as part of the payload to the watchdog server. (This can be set by the controller in the
browsermon-local.conf
and
browsermon.conf
files.)
*All these config variables can be changed in the
browsermon-local.conf
file except
watchdog_ip
and
watchdog_port
.
-
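The duration-style values above (schedule_window=1m, rotation=1h, cache_ttl=30d) can be parsed with a small helper; this parser is a hypothetical sketch, and the real BrowserMon parser may accept different units or syntax:

```python
import re
from datetime import timedelta

_UNITS = {"m": "minutes", "h": "hours", "d": "days"}

def parse_duration(value):
    """Parse strings like '15m', '1h', or '30d' into a timedelta."""
    match = re.fullmatch(r"(\d+)([mhd])", value.strip())
    if not match:
        raise ValueError(f"bad duration: {value!r}")
    amount, unit = match.groups()
    return timedelta(**{_UNITS[unit]: int(amount)})
```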
Getting Started
-
Install BrowserMon: Deploy the lightweight agent on devices running Linux, Mac, or Windows.
-
Configure Watchdog: Set up the Watchdog server to centrally manage and monitor all BrowserMon controllers.
-
Define Configurations: Use
mapping.conf
and
browsermon-watchdog.conf
to customize configurations based on organizational requirements. This is done on the watchdog server.
-
Enable Central Logging: Optionally, configure a central history database for real-time logging and analytics.
By following this guide, you'll be equipped to effectively deploy and utilize BrowserMon 3.0 to enhance your organization's web monitoring and security capabilities.
-
Watchdog Deployment Guide (with Kafka and Elasticsearch)
-
Introduction
This guide explains how to install and configure Watchdog using the
watchdog-installer
Python script. Watchdog can optionally integrate with Kafka (for data ingestion) and Elasticsearch (for data storage and searching).
The installer supports:
1. Interactive prompts for Docker registry authentication (optional).
2. Enabling/disabling Kafka mode and/or Elasticsearch mode.
3. Automatic creation of necessary directories under /opt/watchdog.
4. File-by-file copy of important Watchdog files (prompts only for /opt/watchdog/watchdog/ overwrites).
5. Automatic generation of a .env file in your current directory, containing the environment variables Docker Compose will need.
6. A final Docker Compose deployment that launches the selected services.
-
Prerequisites
-
Root/Sudo Access
The installer must be run as root (or with sudo). It manages system directories (e.g., /opt/watchdog) and sets ownership of data directories.
-
Docker and Docker Compose
- Docker installed and running (docker ps should work).
- Docker Compose plugin or Docker Compose CLI installed.
-
Optionally, Docker registry credentials if you plan to pull images from a private Docker registry.
-
Local Files/Directories
- A local
deps/
directory that contains:
  - deps/connect-jars/ (Kafka connector JARs)
  - deps/watchdog/ (Watchdog source files)
  - deps/init-kafka-connect.sh (initialization script)
- Docker Compose YAML files in the same directory from which you run the installer:
  - docker-compose.base.yml (required)
  - docker-compose.kafka.yml (if enabling Kafka)
  - docker-compose.elastic.yml (if enabling Elasticsearch)
  - docker-compose.eti.yml (if enabling ETI)
  - docker-compose.ucs.yml (if enabling UCS)
- Optional config files (if needed for custom setups):
  - eti.yml (if elastic_mode=true and you want to override the default ES config)
  - Any custom .conf files for Watchdog (placed in deps/watchdog before running the script)
-
-
Installation Steps
-
Clone or place the
watchdog-installer
script in the same directory where your docker-compose.*.yml files exist (because it writes a .env file locally and references the compose files in the current directory).
-
Ensure the script is executable:
```bash
chmod +x watchdog-installer
```
If you’re using the Python file directly, you can just run
python watchdog-installer install
without chmod +x.
Run the installer (as root):
```bash
sudo ./watchdog-installer install
```
-
The script will:
- Prompt you for Docker registry authentication (optional).
- Prompt whether to enable Kafka/ETI/UCS modes.
- If Kafka mode is enabled, prompt for a
KAFKA_EXTERNAL_IP
.
- If Elasticsearch mode is enabled, prompt for username, passwords, etc.
- Create /opt/watchdog, /opt/watchdog/kafka_data, and /opt/watchdog/elasticsearch_data as needed.
- Copy files from
deps/
into /opt/watchdog.
  - connect-jars and init-kafka-connect.sh are forced overwrites (no prompt).
  - The
watchdog
directory is copied file-by-file with a prompt for each existing file.
- Generate a
.env
file in your current directory (where Docker Compose can see it). - Finally, run
docker compose up -d
usingdocker-compose.base.yml
, plus the Kafka and/or Elastic Compose files if those modes were selected.
-
Verify installation:
- Check running containers:
```bash
docker ps
```
- If Kafka was enabled:
kafka, zookeeper, and kafka-connect containers should be running.
- If Elasticsearch was enabled:
- An elasticsearch container (and possibly kibana) should be running (depending on your compose files).
-
-
Environment Variables and the
.env
File
The script automatically writes environment variables to a
.env
file in the current working directory. Docker Compose will automatically load them. If Kafka/Elasticsearch is enabled, you’ll see lines like:

```bash
KAFKA_EXTERNAL_IP=your.machine.ip
ELASTIC_HOST=elasticsearch
ELASTIC_PORT=9200
ELASTIC_PASSWORD=BrowsermonElasticAdmin
ELASTIC_USERNAME=Browsermon
ELASTIC_USER_PASSWORD=BrowsermonElasticUser
ELASTIC_SCHEME=https
```
You can modify these directly if needed (though re-running the script may overwrite them).
-
Configuration Files
Depending on your Watchdog setup, you might need additional configuration files within
deps/watchdog
(which eventually lands in /opt/watchdog/watchdog):
1. watchdog.conf
2. ssl-config.ini
3. mapping.conf
4. browsermon-watchdog.conf
5. elasticsearch.yml (if ETI/UCS=true and you want to override defaults)
Make sure to place these files in
deps/watchdog
before running the installer if you want them copied to /opt/watchdog/watchdog.
-
Service Configuration
- Kafka
- Typically uses port
9092
(or whatever is in your docker-compose.kafka.yml).
- Uses Kafka Connect to push data to MongoDB (or other sinks).
-
The
init-kafka-connect.sh
script is placed in /opt/watchdog, but you typically don’t need to run it manually unless your setup requires it.
-
MongoDB
- Often deployed alongside Kafka (depending on your
docker-compose.kafka.yml
). -
The sink connector is configured to push Watchdog data to MongoDB.
-
Elasticsearch
- Typically listens on
9200
for HTTP/HTTPS calls.
- The default scheme is
https
(from the script prompt) but can be changed if you have a custom ES config.
- If using
elasticsearch.yml
, it should be placed in deps/watchdog or your custom location and referenced by docker-compose.elastic.yml.
-
Updating the Installation
If you re-run the installer and
/opt/watchdog
is detected, the script enters Update Mode. You will be:
- Prompted only for overwriting files inside /opt/watchdog/watchdog.
- Other files (like init-kafka-connect.sh or connect-jars) are overwritten automatically.
- The script will then re-run Docker Compose to update containers.
Example:
```bash
sudo ./watchdog-installer install
```
If it sees an existing installation, you’ll be asked:
Existing installation detected at /opt/watchdog
Do you want to proceed with the update? (y/n)
-
Uninstalling / Cleaning Up
To stop and remove the Watchdog containers (Kafka/Elasticsearch included), run:
```bash
sudo ./watchdog-installer clean
```
This will:
1. Look for docker-compose.base.yml, docker-compose.kafka.yml, and docker-compose.elastic.yml in your current directory.
2. Run docker compose down -v with whichever files are found, removing containers and volumes.
Note: This does not delete
/opt/watchdog
or the data directories. If you want to remove them entirely, you can do so manually:

```bash
sudo rm -rf /opt/watchdog
```
-
Troubleshooting
- Checking Logs
View logs for a specific container:
```bash
docker logs <container-name>
```
Examples:
- docker logs kafka-connect
- docker logs elasticsearch
-
Verifying Kafka Connect
Inside the Kafka Connect container:
```bash
docker exec -it kafka-connect /bin/bash
```
Then check connector status:

```bash
curl -X GET http://kafka-connect:8083/connectors/mongo-sink-connector/status
```
A valid Mongo Sink Connector shows:

```json
{
  "name": "mongo-sink-connector",
  "connector": {
    "state": "RUNNING",
    "worker_id": "connect-worker-1"
  },
  "tasks": [
    {
      "id": 0,
      "state": "RUNNING",
      "worker_id": "connect-worker-1"
    }
  ],
  "type": "sink"
}
```
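When scripting this check, the status payload can be inspected programmatically; the `connector_healthy` helper below is an illustrative sketch, not part of the Watchdog tooling:

```python
def connector_healthy(status):
    """True if the connector and all of its tasks report state RUNNING."""
    if status["connector"]["state"] != "RUNNING":
        return False
    return all(task["state"] == "RUNNING" for task in status["tasks"])
```

Feed it the parsed JSON from the curl command above to decide whether the sink needs attention.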
- Checking Elasticsearch
If Elasticsearch is running with HTTPS and basic auth (the default configuration):

```bash
curl -k -u elastic:BrowsermonElasticAdmin https://localhost:9200/_cluster/health
```
- `-k` ignores self-signed certificate errors.
- Adjust the user/password as you configured them during installation prompts.
- Internet Access: For the functioning of the Elasticsearch-based URL classification, the following domains must be accessible from the network where your watchdog is deployed.
For ETI:
- PhishTank :
data.phishtank.com
- URLHaus :
urlhaus.abuse.ch
For UCS (if ucs_updates=true in watchdog.conf):
- EUNOMATIX Cloud :
ucs.eunomatix.com:8000
If watchdog is behind a proxy server, relevant proxy settings must be enabled in
watchdog.conf
file before installation.
Example:
[proxy]
proxy_mode=false
http_proxy=http://10.10.10.10:1234
https_proxy=https://10.10.10.10:1234
If the proxy server requires authentication, the user should enter the URL accordingly, e.g.,
http://username:[email protected]:8080
-
Offline Image Deployment (Optional)
If you have Docker images saved locally (e.g.,
.tar
files) for offline deployment:
- Load them:
```bash
docker load -i your_offline_watchdog_image.tar
```
- Skip Docker Hub Login during the script’s prompts.
- Ensure the Docker Compose files reference the images you loaded (matching tags).