Saturday, January 9, 2021

LibreNMS Distributed Poller - how I did it

I recently found an application for distributed polling in LibreNMS, but the official documentation seemed a little light, so at first I was hesitant to try it out. Upon further searching I found a Reddit thread that provided a little more detail, but I still had to try it out myself to see if I could get it working. Now I'm going to attempt to share my knowledge so someone else can do it in the future! It's not that hard; what's most important is to install all of the extra packages needed and to configure both those packages AND LibreNMS properly.

Note that in this particular scenario, I have two hosts: the first is the LibreNMS web server, database server, RRDCached server, Redis server...basically all of the server functions. The second host exists only as a poller. Both of my hosts run Debian; Ubuntu should be more or less the same, and other systems should be similar, though obviously the package manager will be different.

Also note that in my case, I did the server install, the poller install, then kind of bounced back to the server to configure the additional packages. The order of the steps below is what I would do if I knew what I know now, i.e. complete the server install FIRST, then move on to the poller.

Step 1: Server install

The first step is really the easiest. Simply follow the instructions on how to install LibreNMS from their website. This includes running through the web installer. No special steps here!

Step 2: Install additional packages on the server

apt install memcached rrdcached redis-server python3-memcache php-memcached

Step 3: Configure memcached on the server

Edit /etc/memcached.conf and set the following values. Most of these are already in the file, so you simply need to edit the existing values and make sure they're uncommented.

-d
-m 64
-p 11211
-u memcache
-l 0.0.0.0

Note that in my case, I chose to listen on 0.0.0.0; you should only do this if you implement a firewall and make sure that memcached isn't available to the open internet!
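If you're using ufw, a minimal sketch of that kind of firewall rule might look like the following; the address 192.0.2.10 is just a placeholder for your poller's IP, and the same pattern applies to the other ports we open later (3306, 6379, 42217):

# allow memcached only from the poller, deny it from everywhere else
sudo ufw allow from 192.0.2.10 to any port 11211 proto tcp
sudo ufw deny 11211/tcp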

Step 4: Configure rrdcached on the server

Edit /etc/default/rrdcached and add the following to the bottom of the file:

BASE_OPTIONS="-l 0:42217"
BASE_OPTIONS="$BASE_OPTIONS -R -j /var/lib/rrdcached/journal/ -F"
BASE_OPTIONS="$BASE_OPTIONS -b /opt/librenms/rrd -B"
BASE_OPTIONS="$BASE_OPTIONS -w 1800 -z 900"

Step 5: Configure LibreNMS on the server

Edit the following values in /opt/librenms/config.php or add them if they don't already exist:

$config['rrdcached']    = "<ip of your server>:42217";

# Enable distributed polling
$config['distributed_poller'] = true;
$config['distributed_poller_group']                      = 0;
$config['distributed_poller_name']           = php_uname('n');

Then, edit or add the following values in /opt/librenms/.env:

CACHE_DRIVER=redis
REDIS_HOST=localhost
REDIS_PORT=6379
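One thing worth noting: the ss output in Step 8 shows Redis listening on 0.0.0.0, but a stock Debian redis-server binds only to 127.0.0.1, so if your poller will be talking to Redis you'll most likely need to loosen the bind directive (and possibly protected-mode, depending on your Redis version) in /etc/redis/redis.conf; the same firewall caveat from Step 3 applies. A quick sanity check with the bundled redis-cli client looks like this:

# in /etc/redis/redis.conf (only if the poller needs to reach Redis):
#   bind 0.0.0.0
#   protected-mode no
sudo systemctl restart redis-server
redis-cli -h localhost ping     # should answer PONG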

Step 6: Enable php-memcached on the server:

phpenmod memcached
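For the newly enabled module to actually load, the PHP process serving LibreNMS needs a restart. The exact service name depends on your PHP version and web server, so treat these as examples rather than the exact commands:

sudo systemctl restart php7.3-fpm     # PHP-FPM; adjust the version number to yours
sudo systemctl restart apache2        # or restart Apache instead if you use mod_php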

Step 7: Allow poller access to the MariaDB instance on your server:

sudo mysql -u root -p     # enter the database password you configured during install when prompted
GRANT ALL PRIVILEGES ON librenms.* TO 'librenms'@'<ip of your poller>' IDENTIFIED BY '<the librenms database password you configured during install>' WITH GRANT OPTION;
exit
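Two notes here. First, Debian's MariaDB packages typically bind to 127.0.0.1 out of the box (the bind-address line in /etc/mysql/mariadb.conf.d/50-server.cnf), so you may need to change that to 0.0.0.0 for the poller to reach the database (and firewall it accordingly). Second, once the poller is built in Step 9, you can test the grant from the poller itself, assuming a mysql client is installed there:

# run this FROM the poller; enter the librenms database password when prompted
mysql -h <ip of your server> -u librenms -p librenms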

Step 8: Reboot!

At this point, it may be wise to reboot your server to make sure all the services initialized properly, or if you're feeling savvy you can restart each of them individually.
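If you'd rather restart the individual services, something like this covers the pieces we just configured (service names as they appear on Debian):

sudo systemctl restart memcached rrdcached redis-server mariadb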

Once the server is back up, check to make sure all your services are listening on all interfaces using the following command. (NOTE: on some older systems, you may need to replace 'ss' with 'netstat'.)

ss -ant4

You should see something like the following:

State      Recv-Q   Send-Q      Local Address:Port        Peer Address:Port         
LISTEN     0        128         0.0.0.0:42217             0.0.0.0:*            
LISTEN     0        80          0.0.0.0:3306              0.0.0.0:*            
LISTEN     0        128         0.0.0.0:11211             0.0.0.0:*            
LISTEN     0        128         0.0.0.0:6379              0.0.0.0:*            

This means that memcached, MariaDB, rrdcached, and redis are all ready and listening on your server.

Step 9: Install LibreNMS on the poller(s)

Follow the same LibreNMS install instructions as in Step 1, but this time DO NOT RUN THE WEB INSTALLER. Additionally, in the initial steps you *may not* need to install MariaDB on the poller, since it won't be used there; however, I didn't try that myself. I simply installed all of the packages, then went back later and removed MariaDB from my poller.

Step 10: Configure LibreNMS on the poller(s)

Edit the following values in /opt/librenms/config.php or add them if they don't already exist:

$config['rrdcached']    = "<ip of your rrdcached server>:42217";

# Enable distributed polling
$config['distributed_poller'] = true;
$config['distributed_poller_group']                      = 1;
$config['distributed_poller_name']           = php_uname('n');

Note that in this case, you should use the IP address of the server that's running the "full install" of LibreNMS, on which you installed rrdcached. Also, we're putting this poller in a new group, 1; otherwise the main server's poller and this poller will "work together" to poll the same devices at the same time.

Then edit or add the following values to /opt/librenms/.env:

DB_HOST=<ip of your main librenms server>
DB_DATABASE=librenms
DB_USERNAME=librenms
DB_PASSWORD=<database password you configured during install>
DB_PORT=3306
REDIS_HOST=<ip of your main librenms server>
REDIS_DB=0
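Before rebooting, it doesn't hurt to let LibreNMS itself confirm that the poller can reach the database; validate.php is standard LibreNMS tooling. The redis-cli line assumes you've installed the optional redis-tools package on the poller:

cd /opt/librenms
sudo -u librenms ./validate.php                         # the database checks, at minimum, should come back clean
redis-cli -h <ip of your main librenms server> ping     # should answer PONG (requires redis-tools)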

Step 11: Reboot the poller(s)!

Again, it's probably easiest to just reboot. I'll admit I'm not 100% sure how you "restart" LibreNMS so I find that sometimes rebooting is the best option. I think it picks up these changes dynamically, but a fresh instance is always best.

Step 12: Add the new poller group on the server

Navigate to http://<ip of your main librenms server>/poller/groups, then click the "Create new poller group" button. Name it whatever you want; this just makes the server "aware" of the new poller group.

At this point, your poller should show up in the main LibreNMS system: click the "settings" icon in the top right and go to "Pollers", or browse to http://<ip of your main librenms server>/poller. You should also be able to run "ss -ant4" on the server again and see a connection from the "peer address" of your poller to the "local address" of your server on port 3306, which proves that the poller connected to the database successfully.
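If the full ss output is noisy, you can narrow it to just the database connections with a quick grep on the same command:

ss -ant4 | grep ':3306'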

 

Hope that helps someone in the future. Please leave a comment if I missed something; I got mine working a couple of days before I actually sat down to do this write-up, so I'm hoping I didn't forget anything!

Friday, October 23, 2020

DOSBox with serial port via Digi PortServer on Raspberry Pi

Whew...that's a lot.

The goal here was to connect a DOS application that uses a serial port for communication to a physical serial device, using a Raspberry Pi and a Digi PortServer (because I already had one). Obviously it would be a bit simpler just to use a USB-to-serial adapter, but that's not the point here.

Components required:

  • Digi PortServer with port configured for "TCP Sockets"
  • Raspberry Pi (probably any variant) with DOSBox 0.74.2, vnc4server, and socat installed

This is more of a "notes to myself" post than a detailed how-to, so I'm probably going to forget some things. Post a comment if you get totally lost.

OK, so first, on the Pi, make sure your user (likely pi) is a member of the 'dialout' group. Pretty simple to google.

Next, install dosbox, vnc4server, and socat.

Now, when you use socat, ideally you'll want DOSBox to connect to a "virtual" serial port device, like /dev/virtualcom0 in my example. For some reason, though, that device always ends up owned by root no matter what I do, so instead I had to cheat a little bit in my DOSBox serial configuration.

Namely:

[serial]

serial1=directserial realport:pts/2

 Note that I used pts/2 instead of /dev/virtualcom0. When I run socat, I still create /dev/virtualcom0, but I just don't use it. I could probably omit that step, but eventually I'm hoping to resolve the situation entirely and use virtualcom0 instead of pts/2 in DOSBox.

 So, on to the socat command:

sudo socat pty,link=/dev/virtualcom0,raw,user=pi,group=tty,ospeed=b1200,ispeed=b1200 tcp:<digi portserver ip>:<digi portserver RAW TCP port>

Note that 'b1200' refers to the baud rate for my application; if you require a different baud rate, substitute 'b<your baud rate>'. Also note the 'user=pi' option; if the user running DOSBox has a different name, use that name instead.
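If you're not sure which pts socat allocated on your system, the /dev/virtualcom0 symlink it creates tells you; mine pointed at /dev/pts/2, which is why pts/2 appears in the DOSBox config above:

ls -l /dev/virtualcom0     # e.g. /dev/virtualcom0 -> /dev/pts/2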

The VNC server part is pretty simple: you manually start the VNC server, creating display :13 (you can use any other number, really). Then, by 'export'ing that display in your shell, you're telling DOSBox to use that screen. Note that subsequent GUI applications started from that shell will also use display :13 (VNC), so you may want to use the 'screen' application to run all of this like I do.

Here's the actual script I use to get my environment going:

#!/bin/sh

# create the virtual serial port and keep socat running in the background
sudo socat pty,link=/dev/virtualcom0,raw,user=pi,group=tty,ospeed=b1200,ispeed=b1200 tcp:<digi portserver ip>:<digi portserver RAW TCP port> &
# start the VNC server on display :13 (no authentication)
Xvnc -SecurityTypes=None :13 &
# make display :13 the default for anything started from this shell
export DISPLAY=:13
# launch DOSBox, which now appears inside the VNC session
dosbox &

 So, I create the virtual serial port, start the VNC server, set the default display to #13, then actually start DOSBox, which starts DOSBox in a VNC session. From there I can connect to my Pi via VNC using port 5913 and voila, I have DOS!
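For reference, connecting from another machine just means pointing a VNC client at display 13 (5900 + 13 = port 5913); with TigerVNC's vncviewer, for example, that would look something like:

vncviewer <pi ip>:13     # same as connecting to port 5913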

I know some of you will notice that I didn't use any security for VNC. This is generally not advisable, but in my particular case it's really not important; I do plan to get it working down the road, but I had some issues initially and didn't feel like farting around with it for now.

 Hope this helps someone else doing something equally esoteric.

 

Friday, November 1, 2019

PFSense logs in Graylog and Grafana using Elasticsearch

I recently felt the need to experiment with various "stacks" after seeing a Medium article on setting up these components (among others) on a Rock64 board (basically a souped-up Raspberry Pi). In the process I stumbled across a great video on YouTube showing what you can do when Grafana is added to the mix, and I was hooked.

Initially, I tried to follow the instructions from the Medium article to build the Elasticsearch+Graylog part of the stack myself, but ran into some issues and decided it wasn't worth the pain. Instead, I used the prebuilt OVA image from Graylog, which uses an Ubuntu server base OS and has a basic Elasticsearch+Graylog system preconfigured.

From there, I still needed to configure the various customizations in Graylog and get Grafana going. I found various resources, but even the most helpful site I came across was still not all that helpful: the information wasn't quite complete, and it wasn't entirely in order. However, the fork of the original pfsense-graylog implementation by opc40772 on GitHub, linked from that blog, was critical to getting everything going.

So, here are the steps I would take if I were to do this again:

Note: I know this is lacking screenshots. I'll add some as I have more time; I really wanted to get the content down before I forgot everything I knew.

  1. Install and configure the Graylog OVA image on my VM server. This includes setting the hostname and static IP as I pleased.
  2. Change the port for Graylog to port 9400, as Cerebro uses port 9000, which is the default for Graylog. (I later found that you can also change the default port for Cerebro, it really doesn't matter what you do.)
    1. Edit /etc/graylog/server/server.conf
    2. Find "http_bind_address"
    3. Change the value to "0.0.0.0:9400". This will allow external connections to Graylog, which is useful if you're not configuring all of this stuff on a server with a GUI like Ubuntu Server. :)
  3. Download the pfsense-graylog content pack: "wget https://github.com/devopstales/pfsense-graylog/archive/master.zip".
  4. Unzip the content pack.
  5. Download and install Cerebro.
    1. Cerebro releases can be found at https://github.com/lmenezes/cerebro/releases/; for Ubuntu or Debian, you can simply run "wget https://github.com/lmenezes/cerebro/releases/download/v0.8.4/cerebro_0.8.4_all.deb", then "dpkg -i cerebro_0.8.4_all.deb". For other operating systems, the download target and install command will change, and of course Cerebro could release v0.8.5 tomorrow, so make sure you check the original link.
  6. Open a web browser at your server's IP port 9000; you should hopefully see the Cerebro dashboard.
  7. Import the index template provided by pfsense-graylog into Elasticsearch using Cerebro. This template provides the fields needed for parsing and using the PFSense data in Grafana.
    1. In the Cerebro dashboard, navigate to "more" > "index templates" (image needed here)
    2. On the right-hand side under "create new template", provide the name "pfsense-custom".
    3. For the template data in the new template, copy and paste the contents of the file at "pfsense-graylog/Elasticsearch_pfsense_custom_template/pfsense_custom_template_es6.json" (downloaded in step 3).
  8. Download the GeoIP database using "wget -t0 -c http://geolite.maxmind.com/download/geoip/database/GeoLite2-City.tar.gz".
  9. Extract the downloaded GeoIP database using "tar -xvf GeoLite2-City.tar.gz"
  10. Copy the .mmdb file to the Graylog server directory using "cp GeoLite2-City_*/GeoLite2-City.mmdb /etc/graylog/server"
  11. Also copy the service-names-port-numbers file to the Graylog server directory using "cp service-names-port-numbers.csv /etc/graylog/server/"
  12. Restart Graylog: "systemctl restart graylog-server" on Ubuntu/Debian.
  13. Open Graylog in a web browser at your server's IP port 9400.
  14. Navigate to "System" > "Content Packs", then click "Upload".
    1. Choose file "pfsense-graylog/pfsense_content_pack/graylog3/3-pfsense-analysis.json"
    2. A new content pack should appear entitled "3 pfsense analysis"; click the Install button.
  15. Navigate to "System" > "Inputs", a new input should appear entitled "pfsense".
  16. Navigate to "System" > "Indices", a new index should appear entitled "pfsense-logs".
  17. Navigate to "Streams", a new stream should appear entitled "pfsense".
  18. Navigate to "System" > "Configurations".
    1. Under "Geo-Location Processor", click "Update", then check the box to enable the processor.
    2. Under "Message Processors Configuration", click "Update", then change the order of the processors as follows:
      1. AWS Instance Name Lookup
      2. Message Filter Chain
      3. Pipeline Processor
      4. GeoIP Resolver
  19. Here's where things may diverge a bit for you. In my particular setup, my PFSense box operates in my local timezone while my Ubuntu server uses UTC, so the logs I see in Graylog all have timestamps in my local timezone; for some reason, though, that seems to break the "last x minutes" search function in Graylog. When I got to setting up Grafana, I then had to configure it to always show times as UTC, even though they're not really UTC. I messed with this a bit but ultimately haven't resolved it entirely to my liking; it "works" as-is for now.
    1. Nevertheless, navigate to "System" > "Pipelines", click the "pfsense" pipeline, then click "timestamp_pfsense_for_grafana".
    2. On line 6, edit the timezone to indicate your local timezone. For example, I entered "America/Detroit".
    3. Click "Save".
  20. Download Grafana using "wget <package url>"
  21. Install Grafana. On Ubuntu/Debian: 
    1. "apt install -y adduser libfontconfig1"
    2. "dpkg -i grafana_<version>.deb"
  22. Install the Grafana plugins you'll want for the PFSense dashboard, then restart Grafana:
    1. grafana-cli plugins install grafana-piechart-panel
      grafana-cli plugins install grafana-worldmap-panel
      grafana-cli plugins install savantly-heatmap-panel
      systemctl restart grafana-server
  23. Configure PFSense to push logs to the Graylog server:
    1. Log into PFSense
    2. Under "Status" > "System Logs" > "Settings":
      1. Check the box for "Enable Remote Logging"
      2. Set Source Address as needed for your particular system (default should be fine).
      3. Set Remote log server to the IP of your Graylog server, port 5442. For example, if the server IP was 192.168.1.1, the field should show "192.168.1.1:5442".
      4. Under Remote Syslog Contents, check the "Everything" box.
      5. Click "Save". (Once logs start flowing, see the quick sanity check after this list.)
  24. Open Grafana in your web browser, http://<server ip>:3000
  25. On the left-hand side, hover over the gear icon to get to "Configuration", then click "Data Sources".
    1. Configure a new Elasticsearch data source:
      1. Name: PFSense Graylog
      2. Type: Elasticsearch
      3. URL: <server ip>:9200
      4. Access: Browser
      5. Index Name: pfsense_*
      6. Time Field Name: again I diverge; I used "timestamp", but the original instructions say to use "real_timestamp".
      7. Click "Save & Test". If you see a red message, you have a problem. If not, yay!
  26. If Grafana failed to connect in step 25, you may have an issue with the CORS (cross-origin request) restrictions in Elasticsearch. This seems to be an issue primarily when you have all of these servers on the same machine. Adding the following to /etc/elasticsearch/elasticsearch.yml resolved the problem for me, but it's ultimately not a security "best practice".
    1. # Leave CORS enabled, but allow requests from any origin (which basically disables the protection)
      http.cors.enabled : true
      http.cors.allow-origin : "*"
      http.cors.allow-methods : OPTIONS, HEAD, GET, POST, PUT, DELETE
      http.cors.allow-headers : X-Requested-With,X-Auth-Token,Content-Type, Content-Length
  27. If you're new to Grafana, I would suggest starting with a prebuilt dashboard and tinkering around. Click the + icon on the left, then select "Import". Use dashboard ID "5420" to download a pretty interesting and useful basic dashboard. I started with this one and HEAVILY customized it, I can provide a link to mine later once I get to registering with Grafana. Dashboard configuration could be another post all in itself, I think that topic is best left to other sources.
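As a quick sanity check once PFSense is sending logs (step 23), you can ask Elasticsearch whether the pfsense indices exist and are growing. This uses the stock _cat API, run from the Graylog/Elasticsearch box; the index pattern matches the pfsense_* prefix used by the content pack and the Grafana data source:

curl -s 'http://localhost:9200/_cat/indices/pfsense*?v'     # should list at least one pfsense index with a growing docs.count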
That's it! 27 steps later and you have a working GEG stack (not quite as catchy as ELK, huh?).