The Haveno monitor node collects a set of metrics which are of interest to developers and users alike. These metrics are then made available through reporters.

The *Settled* release features these metrics:
- Tor Startup Time: The time it takes to get Tor up and running on a clean system, from unpacking the shipped Tor binaries and firing up Tor until Tor is connected to the Tor network and ready to use.
- Tor Roundtrip Time: Given a bootstrapped Tor, the roundtrip time of connecting to a hidden service is measured.
- Tor Hidden Service Startup Time: Given a bootstrapped Tor, the time it takes to create and announce a freshly created hidden service.
- P2P Round Trip Time: A metric hitchhiking the Ping/Pong messages of the Keep-Alive-Mechanism to determine the Round Trip Time when issuing a Ping to a seed node.
- P2P Seed Node Message Snapshot: Gets the absolute number and constellation of messages a fresh Haveno client receives on startup. Also reports diffs between seed nodes on a per-message-type basis.
- P2P Network Load: Listens to the P2P network and its broadcast messages and reports every X seconds.
- P2P Market Statistics: A demonstration metric which extracts market information from broadcast messages. This demo implementation reports the number of open offers per market.
The *Settled* release features these reporters:
- A reporter that simply writes the findings to `System.err`
- A reporter that reports the findings to a Graphite/Carbon instance using the [plaintext protocol](https://graphite.readthedocs.io/en/latest/feeding-carbon.html#the-plaintext-protocol)
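The plaintext protocol is simple enough to try by hand; a quick sketch (metric name, value and target host are made up for illustration):

```
# send a single data point (format: <metric path> <value> <unix timestamp>) to a local Carbon instance
echo "haveno.test 42 $(date +%s)" | nc localhost 2003
```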
The *Haveno Network Monitor Node* is to be configured via a Java properties file. There is a default configuration file shipped with the monitor which reports to the one monitoring service currently up and running.
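For illustration, a hypothetical excerpt of such a properties file (all key names below are made up; consult the shipped default configuration for the actual ones):

```
# hypothetical example keys – not taken from the shipped configuration file
TorStartupTime.enabled=true
TorStartupTime.run.interval=300
GraphiteReporter.serviceUrl=http://localhost:2003
```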
The distribution ships with a systemd .service file. Validate/change the executable/config paths within the shipped `haveno-monitor.service` file and copy/move the file to your systemd directory (something along the lines of `/usr/lib/systemd/system/`). Now you can control your *Monitor Node* via the usual systemd start/stop commands.
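For example:

```
# control the monitor via the usual systemd commands
sudo systemctl start haveno-monitor.service
sudo systemctl status haveno-monitor.service
sudo systemctl stop haveno-monitor.service
```

## Graphite

### Installation

Graphite can be run as a docker container. A minimal sketch (the official `graphiteapp/graphite-statsd` image and its standard port mappings are assumptions, not taken from this distribution):

```
# port 2003 accepts the Carbon plaintext protocol, port 8080 serves the Graphite web API
docker run -d --name graphite --restart=always -p 2003:2003 -p 8080:8080 graphiteapp/graphite-statsd
```

### Configuration

Graphite's data retention can be fine-tuned by editing `storage-schemas.conf`. Insert an entry along these lines (the retention string is reconstructed from the description that follows; the `31d` window in particular is an assumption)

```
# pattern/retentions reconstructed from the surrounding text – verify against your setup
[haveno]
pattern = ^haveno.*
retentions = 10s:1h,5m:31d,30m:2y
```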
before the `[default...` blocks of the file. This basically says that, once the first hour of 10-second samples has passed, every incoming set of data reflects 5 minutes of the time series. Furthermore, the data is later compressed to 30-minute intervals and thus takes less memory as it is kept for 2 years.
Further, edit `storage-aggregation.conf` to configure how your data is compressed. For example, insert something along the lines of the following (the exact values are assumptions inferred from the percentages discussed below)
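```
# values assumed to match the percentages discussed below
[haveno]
pattern = ^haveno.*
xFilesFactor = 0.4
aggregationMethod = average
```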
before the `[default...` blocks of the file. With this configuration, whenever data is aggregated, the `average` data is made available given that at least `40%` of the data points (i.e. floor(30 / 5 * 40%) = 2 data points) exist. Otherwise, the aggregated data is dropped. Since we start the first hour with a frequency of 10s but only supply data every 4 to 6 minutes, our aggregated values would get dropped.
*Please note that I have not been able to get the whole thing to work without the `10s:1h` part yet.*
Finally, update the database. To do that, go to the storage directory of Graphite, the `Source` directory stated in the container's mount information, and resize the existing Whisper files to the new retention scheme.
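One way of doing this, assuming the docker setup sketched above (`whisper-resize.py` ships with Graphite, so run it wherever the Whisper tools are available, e.g. inside the container):

```
# locate the host directory backing the container's storage ("Source" in the mount info)
docker inspect -f '{{ json .Mounts }}' graphite
# from within that directory, resize every existing database to the retention scheme configured above
find . -name '*.wsp' -exec whisper-resize.py --nobackup {} 10s:1h 5m:31d 30m:2y \;
```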
Other than that, no further configuration is necessary. However, you might want to adjust your iptables/firewall rules so that no one can access your Graphite instance from the outside.
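If only local services need to reach Graphite, a simple alternative is to publish its ports on the loopback interface only (same docker setup as assumed above):

```
# bind the published ports to 127.0.0.1 so other hosts cannot reach them
docker run -d --name graphite --restart=always -p 127.0.0.1:2003:2003 -p 127.0.0.1:8080:8080 graphiteapp/graphite-statsd
```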
### Backup your data
The metric data is kept in the `Source` directory stated in the container's mount information (see `docker inspect graphite`). Simply back up this directory.
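A minimal sketch (replace the path with your actual `Source` directory):

```
# archive the Whisper storage directory
tar czf graphite-backup.tar.gz -C /path/to/source-directory .
```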
## Grafana

### Installation

Grafana is also available as a docker container:

```
docker run -d --name=grafana -p 3000:3000 grafana/grafana
```
- Port 3000 offers the web interface
More information can be found [here](https://grafana.com/grafana/download?platform=docker).
### Configuration
- Once you have Grafana up and running, go to the *Data Source* configuration tab.
- Once there click *Add data source* and select *Graphite*.
- In the HTTP section enter the IP address of your Graphite docker container and the port `8080` (as we have configured before). E.g. `http://172.17.0.1:8080`
- Select `Server (default)` as an *Access* method and hit *Save & Test*.
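Alternatively, the data source can be provisioned from a file instead of the UI. A sketch, assuming the file path and the URL from above:

```
# /etc/grafana/provisioning/datasources/graphite.yaml (path and URL are assumptions)
apiVersion: 1
datasources:
  - name: Graphite
    type: graphite
    access: proxy
    url: http://172.17.0.1:8080
```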
You should be all set. You can now proceed to add Dashboards, Panels and finally display the prettiest Graphs you can think of.
A working connection to Graphite should let you add your data series in a *Graph*'s *Metrics* tab in a pretty intuitive way.
- Optional: hide your Grafana instance behind a reverse proxy like nginx and add some TLS.
- Optional: make your Grafana instance accessible via a Tor hidden service.
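For the hidden service option, a minimal `torrc` sketch (directory path assumed; Grafana listening on its default port 3000):

```
# expose the local Grafana instance as a hidden service on virtual port 80
HiddenServiceDir /var/lib/tor/grafana/
HiddenServicePort 80 127.0.0.1:3000
```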
### Backup your data
Grafana stores every dashboard as a JSON model. This model can be accessed (copied/restored) within the dashboard's settings and its *JSON Model* tab. Do with the data whatever you want.
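The JSON models can also be fetched in a scripted fashion via Grafana's HTTP API; a sketch (host, credentials and dashboard UID are placeholders):

```
# fetch a dashboard's JSON model by its UID
curl -s http://admin:admin@localhost:3000/api/dashboards/uid/<dashboard-uid>
```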