Fri, 9 Jul 2021

Virtual Apache SSL Hosts in a Docker Container

— SjG @ 1:40 pm

You might want to “containerize” your development hosting environment so you can easily migrate it from machine to machine. As a Docker noob, I had a bunch of issues getting this set up the first time, and wanted to share a working configuration. This example assumes you have Docker installed and operating. You can also skip reading this, and just download the files at GitHub.

First, we’ll need to create some directories. I create one for the Apache configurations, and one for the code projects I’ll be working on.
mkdir php-apache
mkdir project

Within project, you can check out the code for your various projects into subdirectories. For simplicity, I’ve created project1 and project2 in the sample code. The Apache web server within the container will serve content from these directories.

We’re going to use a fictional top-level domain (TLD) for our development environment. This way, the URL you use to access your sites will be the same every time you spin up a new dev environment, without having to worry about name servers. You do, however, have to worry about your /etc/hosts file (or your platform’s equivalent). Choose a TLD that will be easy to remember. For my example, I’m using “mylocal”. A couple of things to avoid: don’t start the TLD with a number (begin it with a letter), and don’t include characters like hyphens. Please learn from my mistakes.

Edit your /etc/hosts, and add the lines:
127.0.0.1 project1.mylocal
127.0.0.1 project2.mylocal
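
To confirm the names resolve to the loopback address before going further, a quick ping will do (you should see 127.0.0.1 in the output; the exact format varies by platform):

ping -c 1 project1.mylocal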

Next, we’ll want to create an SSL certificate for use in development. The easiest way to do this is with mkcert. Once you have mkcert installed and working, you’ll create a wildcard certificate for your TLD:
cd php-apache
mkcert -install mylocal "*.mylocal"
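
mkcert names its output after the first domain plus the number of additional names, so you should end up with something like the two .pem files the Dockerfile below expects; if your filenames differ, adjust the COPY lines to match:

ls *.pem
mylocal+1-key.pem  mylocal+1.pem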

Next, we create a compose.yaml file:

version: "3.9"
services:
  php:
    container_name: ApachePHPVirtual
    networks:
      - apache
    build:
      context: .
      dockerfile: PhpApacheDockerfile
    volumes:
      - "./project:/var/www"
    ports:
      - 80:80
      - 443:443
    extra_hosts:
      - "project1.mylocal:127.0.0.1" # remember to add "127.0.0.1 project1.mylocal" to your /etc/hosts file or equivalent
      - "project2.mylocal:127.0.0.1" # remember to add "127.0.0.1 project2.mylocal" to your /etc/hosts file or equivalent
    hostname: project1.mylocal # default
    domainname: mylocal
    tty: true # if you want to debug
networks:
  apache:

This is pretty straightforward. We’re creating a container we’ll call “ApachePHPVirtual,” attaching it to a network we call “apache” in case we want to connect other containers to it, and linking our top-level project directory to /var/www in the container. We map ports 80 and 443 on our host machine to those same ports in the container. The extra_hosts directive adds our project names to the container’s /etc/hosts. We set the container’s hostname to match our first project, and set the default domain to our “mylocal” TLD.
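
Before building anything, it doesn’t hurt to have Compose validate the file; docker-compose config parses the YAML and prints the resolved configuration, so indentation mistakes show up right away:

docker-compose config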

We then want to create configurations for each of the Apache virtual hosts. In the php-apache directory, we create config files for each project. These are just standard virtual host declarations, e.g.:

<VirtualHost *:80>
    ServerName project1.mylocal
    Redirect permanent / https://project1.mylocal/
</VirtualHost>
<VirtualHost *:443>
    ServerName project1.mylocal
    DocumentRoot /var/www/project1
    ErrorLog ${APACHE_LOG_DIR}/project1-error.log
    CustomLog ${APACHE_LOG_DIR}/project1-access.log combined
    DirectoryIndex index.php

    <Directory "/var/www/project1">
        Options -Indexes +FollowSymLinks
        AllowOverride all
        Require all granted
    </Directory>

    SSLEngine On
    SSLCertificateFile    /etc/apache2/ssl/cert.pem
    SSLCertificateKeyFile /etc/apache2/ssl/cert-key.pem
</VirtualHost>

You’ll need to create a similar configuration for each project. Note that the Document Root points at the mapped host directory. That means you won’t need to rebuild the container to see project changes.

The actual image for the Apache/PHP container is created and configured in our next file, “PhpApacheDockerfile”. So we create that:

FROM php:8.0.8-apache-buster

# add some packages
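# note: gd, soap, and zip typically need system libraries (e.g. libpng-dev, libxml2-dev, libzip-dev)
# apt-installed first on the stock php:8.0 Apache image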
RUN docker-php-ext-install curl gd iconv pdo pdo_mysql soap zip

# Apache Config
COPY php-apache/project1.conf /etc/apache2/sites-available/project1.conf
COPY php-apache/project2.conf /etc/apache2/sites-available/project2.conf
COPY php-apache/mylocal+1-key.pem /etc/apache2/ssl/cert-key.pem
COPY php-apache/mylocal+1.pem /etc/apache2/ssl/cert.pem

# mod rewrite! SSL!
RUN a2enmod rewrite
RUN a2enmod ssl

# enable sites
RUN a2ensite project1.conf
RUN a2ensite project2.conf
RUN service apache2 restart

This pulls the PHP 8.0.8 Apache image from Docker Hub, adds in some PHP extensions, copies over our SSL certificate and key along with our virtual host configuration files, enables the sites, and restarts the Apache server.

Now, all that remains to do is build it and power it up:

docker-compose build && docker-compose up -d

You can now visit the project URLs in your browser, e.g., https://project1.mylocal/
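
If you’d rather sanity-check from the terminal, a few optional commands will confirm things are wired up (these assume the container and service names from the files above; -k tells curl to skip certificate verification in case your shell doesn’t trust the mkcert CA):

curl -kI https://project1.mylocal/
docker exec ApachePHPVirtual apache2ctl configtest
docker-compose logs php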


Tue, 7 Apr 2020

One-liner to get a directory’s worth of video times

— SjG @ 10:58 am

The ffmpeg family of programs is incredibly arcane and powerful for handling video and video information. I needed to get the run times of a collection of videos. Here’s a handy one-liner that creates output suitable for import into a spreadsheet:

for i in *.mp4; do q=$(ffprobe -i "$i" -show_entries format=duration -v quiet -of csv="p=0"); echo "$i, $q"; done

Sample run:
$ cd ~/work/training_videos
$ for i in *.mp4; do q=$(ffprobe -i "$i" -show_entries format=duration -v quiet -of csv="p=0"); echo "$i, $q"; done
First_steps.mp4, 70.868000
Create_a_project.mp4, 134.320000
Loading_libraries.mp4, 45.442000
...
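
Since the output is already comma-separated, you can redirect the whole loop to a file and open that directly in your spreadsheet (durations.csv is just an example name):

$ for i in *.mp4; do q=$(ffprobe -i "$i" -show_entries format=duration -v quiet -of csv="p=0"); echo "$i, $q"; done > durations.csv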


Wed, 8 Aug 2018

Building Kannel 1.4.5 under CentOS 7.5

— SjG @ 1:05 pm

Well, it’s been a few years since I’ve had to build Kannel (see Compiling Kannel under CentOS 7.0), and a server migration required that I figure it out again.

You can find discussions on why Bison v.3.0 or later, standard with modern versions of Linux, prevents compilation from succeeding. Some discussions provide workarounds that I couldn’t make work.

Here’s how I downgraded Bison and built a working Kannel 1.4.5 binary on CentOS 7.5:


$ cd /usr/local/kannel
$ wget --no-check-certificate https://redmine.kannel.org/attachments/download/322/gateway-1.4.5.tar.gz
$ tar xzvf gateway-1.4.5.tar.gz
$ cd gateway-1.4.5
$ sudo yum downgrade bison
$ sudo ./configure --prefix=/usr/local/kannel --disable-wap --enable-start-stop-daemon
$ sudo make
$ sudo make install

In that downgrade step, Bison was rolled back to version 2.7-4.el7. If you’re doing this at some indeterminate time in the future, the downgrade may be to a 3.x version, in which case you’ll want to downgrade until you’re at a 2.7.x version (also, if you’re at some indeterminate time in the future, simply check your watch and it will be roughly determinate again).
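
If you want to confirm what you ended up with before running configure, check the Bison version; anything in the 2.7.x line should do:

$ bison --version | head -1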


Sat, 7 Oct 2017

Simple file monitor

— SjG @ 11:59 am

Say you host a few web sites for various folks, and you give them write access to a directory on your server. Well, then, my friend, you’re as big a fool as I am.

Maybe you want to mitigate this foolhardiness by keeping an eye on what these folks upload. For example, when I see a user uploading SuperBulletinBoardThatIsTotallyNotASpamTool.php or SuperWordPressPasswordSharingPlugin.php, I can call them and explain why I’m deleting it. I can be a slightly-less-bastard operator from heck.

So here’s a quick bash script that I use. It’ll also help to alert you if somehow one of the WordPress sites gets compromised, and rogue php files get installed. It ignores commonly changing files or things we’re not interested in like images. It shouldn’t be considered an intrusion detection system, or a robust security auditing tool — this wouldn’t really help in the case of an actual hacker with any l33t skillz at all. It’s just a quick information source.


#!/bin/bash

rm -f /tmp/fcl.txt
rm -f /tmp/fcld.txt

# find files changed in the past day, ignoring VCS metadata, images, PDFs, and cache directories
/usr/bin/find /var/www/ -type f -ctime -1 | /bin/egrep -v "\.git|\.svn|\.jpg$|\.gif$|\.pdf$|wp-content/cache|files/cache/zend_cache" > /tmp/fcl.txt

# get a detailed listing of each changed file (NUL-delimited so filenames with spaces survive)
xargs -0 -n 1 ls -l < <(tr '\n' '\0' < /tmp/fcl.txt) > /tmp/fcld.txt

# if anything changed, mail the listing
[ -s /tmp/fcld.txt ] && /usr/bin/mail -aFrom:account@mydomain.com -s "MYDOMAIN.COM FILES UPDATED" you@youremail.com < /tmp/fcld.txt

Throw it into a crontab, and there you have it. You'll get an email with a list of files changed in the past day.
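
If once a day is often enough, a crontab line along these lines does the trick (the script path is just a placeholder for wherever you saved it):

0 6 * * * /usr/local/bin/file-monitor.sh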


Thu, 31 Aug 2017

Getting nagios back up and running… again.

— SjG @ 12:42 pm

Nagios monitoring on one CentOS 6.9 server seemed to have stopped working after an upgrade. All the tests showed status OK, but they hadn’t actually run in days.

Looking at the service details was weird, because the next scheduled check was about a minute in the past.

The Nagios help page wasn’t. And we’re running Core 4.3.x anyway, without a MySQL database.

The first clue was a bunch of lines in the event log:
Error: Could not open check result queue directory '/var/log/nagios/spool/checkresults' for reading.

Turns out we didn’t even have a /var/log/nagios/spool directory. Creating those directories helped. But Nagios still wouldn’t start from the usual startup scripts. Nothing in the main log. But then, another clue.
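
Creating them is just a couple of commands (the nagios:nagios ownership is an assumption; match whatever user your Nagios runs as):

# mkdir -p /var/log/nagios/spool/checkresults
# chown -R nagios:nagios /var/log/nagios/spool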

Who doesn’t love to see shit like this:

$ cat /var/log/nagios/nagios.configtest
ERROR: Errors in config files - see log for details: /var/log/nagios/nagios.configtest
$

So the startup script /etc/init.d/nagios searches for warnings, and aborts if they exist. It’s supposed to log them. For some reason it didn’t.

You can manually get those warnings and errors yourself by running the following (adjust paths as appropriate):

/usr/sbin/nagios -v /etc/nagios/nagios.cfg

I ended up with a bunch of warnings for deprecated parameters. So I went in and edited my config files to remove them or update them to the new equivalents. Oh yes. Software authors, please keep in mind: nothing pleases your users more than changing the names of variables in config files. We users live for this shit. When, oh when, will the author of our software next change “retry_check_interval” to “retry_interval”?

Fixing all of the warnings was not enough, though. The startup script gave the message “Starting nagios:” and then silently died. Well, sort of. It actually was starting now, but brokenly:

# ps aux | grep -i nagios
nagios 12610 0.0 0.0 12296 1220 ? Ss 16:36 0:00 /usr/sbin/nagios -d /etc/nagios/nagios.cfg
nagios 12611 0.0 0.0 0 0 ? Z 16:36 0:00 [nagios] <defunct>
nagios 12612 0.0 0.0 0 0 ? Z 16:36 0:00 [nagios] <defunct>
nagios 12613 0.0 0.0 0 0 ? Z 16:36 0:00 [nagios] <defunct>
nagios 12614 0.0 0.0 0 0 ? Z 16:36 0:00 [nagios] <defunct>
nagios 12616 0.0 0.0 11780 520 ? S 16:36 0:00 /usr/sbin/nagios -d /etc/nagios/nagios.cfg
root 12744 0.0 0.0 103328 876 pts/0 S+ 16:44 0:00 grep -i nagios

Starting directly from the command line worked:

# /usr/sbin/nagios -d /etc/nagios/nagios.cfg
# ps aux | grep nagios
nrpe 7282 0.0 0.0 41380 1340 ? Ss 16:51 0:00 /usr/sbin/nrpe -c /etc/nagios/nrpe.cfg -d
nagios 8010 0.0 0.0 16404 1280 ? Ss 17:31 0:00 /usr/sbin/nagios -d /etc/nagios/nagios.cfg
nagios 8011 0.0 0.0 10052 920 ? S 17:31 0:00 /usr/sbin/nagios --worker /var/spool/nagios/cmd/nagios.qh
nagios 8012 0.0 0.0 10052 920 ? S 17:31 0:00 /usr/sbin/nagios --worker /var/spool/nagios/cmd/nagios.qh
nagios 8013 0.0 0.0 10052 920 ? S 17:31 0:00 /usr/sbin/nagios --worker /var/spool/nagios/cmd/nagios.qh
nagios 8014 0.0 0.0 10052 920 ? S 17:31 0:00 /usr/sbin/nagios --worker /var/spool/nagios/cmd/nagios.qh
nagios 8015 0.0 0.0 15888 552 ? S 17:31 0:00 /usr/sbin/nagios -d /etc/nagios/nagios.cfg
root 8018 0.0 0.0 100956 616 pts/0 S+ 17:31 0:00 tail -f /var/log/nagios/nagios.log
root 8037 0.0 0.0 103328 856 pts/1 S+ 17:32 0:00 grep nagios

When things get this weird, there are only two options. Well, three, if you include the “rm -rf /” option. But the other two are: 1) reboot and see if stuff magically starts working, or 2) see what SELinux is breaking.

# tail -f /var/log/audit/audit.log
type=AVC msg=audit(1504137329.209:41): avc: denied { execute_no_trans } for pid=7731 comm="nagios" path="/usr/sbin/nagios" dev=dm-0 ino=1201464 scontext=unconfined_u:system_r:nagios_t:s0 tcontext=system_u:object_r:nagios_exec_t:s0 tclass=file

Yup. As expected, SELinux breaking stuff.

So, my preferred way to solve this kind of problem now is to snip out all the relevant AVC "denied" sections from the log into a single file (which I called audit.log), and then use audit2allow to create a new module. Since there’s already a nagios module (containing insufficient privileges), I created a nagios2 module:

# audit2allow -M nagios2 < audit.log
# semodule -i nagios2.pp

Hooray! After a few iterations of this process (discovering other blocked operations, granting them permission, restarting nagios), everything was working except check_disk_smb, which was returning "results from smbclient not suitable" even though it worked fine when tested from the command line as follows:

# su - nagios -s /bin/bash -c "/usr/lib64/nagios/plugins/check_disk_smb -H SMBHOST -s share -a 10.X.X.X -u nagios -p \"password\" -w 90 -c 95"
Disk ok - 16.71G (11%) free on \\SMBHOST\share | 'share'=134108676096B;136850492620.8;144453297766.4;0;152056102912

Diving in and editing check_disk_smb to throw the actual error message, I found Nagios getting an "ERROR: Could not determine network interfaces, you must use a interfaces config line" from smbclient. So I edited /etc/samba/smb.conf, and explicitly told Samba which interfaces it had available:

interfaces = lo eth0 10.X.X.X/24

Le sigh. Now this error went away, and I got to go for another fun and challenging round of “find all the SMB operations that SELinux is breaking.” This time, I got tripped up by the “dontaudits” — there were operations being blocked, but not logged. I was saved by TrevorH and sfix, helpful people in Freenode’s #centos IRC channel:

TrevorH: semodule -DB to disable dontaudit rules, stay permissive, recreate, use the audit log to generate a policy as per the wiki
13:14 TrevorH: @selinux
13:14 centbot: Useful resources for SELinux: http://wiki.centos.org/HowTos/SELinux | http://wiki.centos.org/TipsAndTricks/SelinuxBooleans | http://docs.fedoraproject.org/en-US/Fedora/13/html/Security-Enhanced_Linux/ | http://www.youtube.com/watch?v=bQqX3RWn0Yw | http://opensource.com/business/13/11/selinux-policy-guide
13:15 _SjG_: Thanks
13:16 TrevorH: semodule -B when done (as well as setenforce 1)
13:25 _SjG_: TrevorH: thanks, that resolved it.
13:25 _SjG_: so what I was missing is that there can be donaudit rules that were preventing specific operations from showing up in the audit log?
13:28 TrevorH: yes
13:28 sfix: _SjG_: yep, there’s permissions in the policy that we know are requested but don’t want to allow for whatever reason. dontaudits are our way of preventing them from cluttering the audit log.
13:29 sfix: dontaudits tend to be a bit over-eager though
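
Pulling that advice together, the whole loop looks roughly like this. It’s a sketch; the nagios3 module name and the grep filter are just examples:

# setenforce 0                 # stay permissive while collecting denials
# semodule -DB                 # rebuild policy with dontaudit rules disabled, so everything gets logged
# ...restart nagios and let the checks run...
# grep nagios /var/log/audit/audit.log | audit2allow -M nagios3
# semodule -i nagios3.pp
# semodule -B                  # re-enable the dontaudit rules
# setenforce 1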

So there you have it.

I was finally back to where I had been mere days before.