Way back in 2005, I converted my website (and its predecessor) over to Drupal. Drupal has served me well for the last 13 years, but due to the direction in which Drupal as a product has moved, I do not feel it is the right choice for me anymore.
So I checked out WordPress instead, and was rather happy with it. It does one thing (blogging) really, really well, instead of trying to be the “kitchen sink” like Drupal. As of this writing, I’ve ported over just about all of the content I wanted to, and have since switched www.dmuth.org to point to this WordPress install.
Along the way, I learned some things about how to set up and configure WordPress; let me share them with you:
TL;DR If you are comfortable with Docker and Docker Compose, you can go straight to the GitHub repo and get started. For everyone else, read on…
When I stood up this website, I wanted to do so in Docker, but I ran into an issue: the official WordPress Docker image runs Apache. Apache is a nice webserver for small amounts of traffic, but it does not scale well. As more concurrent connections come into a server running Apache, more copies of the httpd process are forked, and each of those processes consumes its own chunk of RAM. Under peak traffic, that can exhaust memory and push the server into swap.
Fortunately, there is a better way. The Nginx webserver, combined with PHP running in FPM mode, scales much better: a pool of PHP worker processes keeps memory usage far more predictable, so peak loads won’t cause the server to thrash the swapfile. I also wanted encryption, so SSL was on the list as well.
I couldn’t find any existing solutions, so I built one! In this post, I’m going to walk through each piece of the puzzle.
I’m a big fan of Amazon S3 for storage. I like it so much that I use Odrive to sync folders from my hard drive into S3, keeping copies of all of my files from Dropbox there as a form of backup. I only have about 20 GB of data that I truly care about, so that should be less than a dollar per month for hosting, right? Well…
Close to 250 GB billed for last month. How did that happen?
I’m a big fan of the Discord Musicbot, and run it on some Discord servers that I admin. Wanting to run it on a server, I first created an Ansible playbook and launched a server on Digital Ocean. But after a few months, I noticed that the server was sitting over 90% idle. Surely there had to be a better way.
So I next tried Docker, and created a Dockerized version of the Musicbot. I was quite happy with how much easier it was to spin up the bot, but I still didn’t want to run it on a dedicated server on Digital Ocean. Aside from the unused capacity, if that machine were to go down, I’d have to intervene manually to bring the bot back up.
I thought about running it in some sort of hosted Docker cluster, and came across Amazon’s Elastic Container Service (ECS). So this post is about creating your own cluster in ECS and hosting a Docker container in it. I found the process slightly confusing the first time I did it, and wanted to share my experience here.
With the release of SEPTA’s new app, I’ve suddenly been flooded with questions about their API. People wanted to know how stable it was.
Well, I don’t work for SEPTA, which means I don’t have insight into their operations, but I can perform some analytics based on what I have, which is approximately 18 months of Regional Rail train data, read every minute by SEPTA Stats.
This is all of the data that I have in SEPTA Stats currently:
Events Since Inception: 26,924,887 events
First Event: Mar 1, 2016 12:00:01 AM
Last Event: Nov 16, 2017 10:33:53 PM
That’s way more events than minutes in that timeframe, and the reason for that is each API query is split into a separate event for each train. So if an API call returns status for 20 trains, that gets split into 20 different events. This is done because Splunk has a much easier time working with JSON that isn’t a giant array. 🙂
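That splitting step can be sketched in Python. Note that the field names (`trainno`, `late`) and the `queried_at` key below are illustrative assumptions, not SEPTA’s actual schema:

```python
import json

def split_into_events(api_response):
    """Split one API response into one event per train, so the indexer
    (Splunk, in this case) sees flat JSON objects instead of one giant array."""
    events = []
    for train in api_response.get("trains", []):
        event = dict(train)                           # one event per train
        event["queried_at"] = api_response.get("queried_at")
        events.append(json.dumps(event))              # one JSON object per event
    return events

# Hypothetical two-train response; field names are made up for illustration.
response = {
    "queried_at": "2017-11-16T22:33:53",
    "trains": [
        {"trainno": "450", "late": 3},
        {"trainno": "517", "late": 0},
    ],
}

for line in split_into_events(response):
    print(line)
```

Emitting one self-contained JSON object per event is what lets Splunk index each train’s status independently.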
I’ve been living in a one-bedroom apartment for the last 15 years. It has mostly suited my needs — I don’t have any hobbies which require lots of “stuff”, and having a smaller apartment means that I can live closer to the city which makes for a shorter commute. In short: my apartment is a good fit for me.
However, there was one thing that got steadily worse over the years: clutter. While I cleaned regularly and could make my way around the apartment just fine, it was the little things that got me: the overflowing bookshelf, the ironing board with clean clothes sitting on it (because I had no room in my dresser), etc.
Things reached a breaking point a few months ago, when I realized that I needed to do some serious decluttering of my apartment. With the help of my Amazon Prime subscription, I started to order organizing products by the boxful and was able to make my apartment much more inhabitable than before.
That’s not to say I didn’t throw things out — I threw out a bunch of things, donated others, and put a few more into my storage unit. If you are trying to declutter your home, you are very likely going to have to throw something out. Be prepared for that. If you must, take pictures of the things you’re discarding, but understand that the key to decluttering is getting rid of the things you no longer need.
I’m going to go through the various things I used for organizing. I’ll start with plastic Rubbermaid/Tupperware containers, then move on to trash bags and shelving. Finally, I’ll wrap up with some additional organizing tips.
While S3 is a great storage platform, what happens if you accidentally delete some important files? Well, S3 has a mechanism to recover deleted files, and I’d like to go into that in this post.
First, make sure you have versioning enabled on your bucket. This can be done via the API, or via the UI in the “properties” tab for your bucket. Versioning saves every change to an object (including deletions) as a separate version, with the most recent version taking precedence. In fact, a deletion is itself a version: a zero-byte “delete marker”. Recovering a deleted file is therefore just a matter of removing the latest version, the one with the delete marker set.
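To make that concrete, here is a simplified Python sketch of the undelete logic. The flattened listing below is hypothetical; the real `ListObjectVersions` API returns delete markers and regular versions in separate lists, but the idea is the same:

```python
def find_delete_markers(versions):
    """Given entries from a (simplified) object-versions listing, return the
    (key, version_id) pairs of delete markers that are the latest version.
    Deleting those versions is what 'undeletes' the objects."""
    return [
        (v["Key"], v["VersionId"])
        for v in versions
        if v.get("IsDeleteMarker") and v.get("IsLatest")
    ]

# Hypothetical listing: one deleted object, one live object.
listing = [
    {"Key": "photos/cat.jpg", "VersionId": "v2", "IsDeleteMarker": True,  "IsLatest": True},
    {"Key": "photos/cat.jpg", "VersionId": "v1", "IsDeleteMarker": False, "IsLatest": False},
    {"Key": "docs/notes.txt", "VersionId": "v9", "IsDeleteMarker": False, "IsLatest": True},
]

for key, version_id in find_delete_markers(listing):
    # With boto3, the actual removal would be:
    #   s3.delete_object(Bucket=bucket, Key=key, VersionId=version_id)
    print(key, version_id)
```

Once the delete marker is removed, the previous version (v1 above) becomes the latest version again, and the object reappears.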
This is what that would look like in the UI:
To undelete these files, we’ll use a script I created called s3-undelete.sh, which can be found over on GitHub:
At my day job, I get to write a bit of code. I’m fortunate that my employer is pretty cool about letting us open source what we write, so I’m happy to announce that two of my projects have been open sourced!
The first project is an app I wrote in PHP which compares an arbitrary number of .ini files on a logical basis. What this means is that if you have .ini files with similar contents, but the stanzas and key/value pairs are in different orders, this utility will read in all of the .ini files that you specify, put the stanzas and their keys and values into well-defined data structures, perform comparisons, and let you know what the differences are (if any). In production, we used this to compare Splunk configuration files from several different installations that we wanted to consolidate. Given that we had dozens of files, some with hundreds of lines, this utility saved us hours of effort and eliminated the possibility of human error. It can be found at:
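My utility is written in PHP, but the core idea of a “logical” comparison can be sketched in Python with the standard `configparser` module. This is a simplified sketch, not the actual tool, and the stanza and key names are made up for illustration:

```python
import configparser

def ini_to_dict(text):
    """Parse .ini text into {section: {key: value}}, discarding ordering."""
    parser = configparser.ConfigParser()
    parser.read_string(text)
    return {s: dict(parser.items(s)) for s in parser.sections()}

def diff_inis(text_a, text_b):
    """Compare two .ini files logically: report keys whose values differ,
    ignoring the order of sections and keys. None means 'key missing'."""
    a, b = ini_to_dict(text_a), ini_to_dict(text_b)
    diffs = []
    for section in sorted(set(a) | set(b)):
        keys_a = a.get(section, {})
        keys_b = b.get(section, {})
        for key in sorted(set(keys_a) | set(keys_b)):
            va, vb = keys_a.get(key), keys_b.get(key)
            if va != vb:
                diffs.append((section, key, va, vb))
    return diffs

# Two files with the same stanza but keys in a different order
# and one differing value.
file_a = "[indexer]\nmaxmem = 200\nthreads = 4\n"
file_b = "[indexer]\nthreads = 4\nmaxmem = 500\n"

print(diff_inis(file_a, file_b))
```

Because everything is normalized into dictionaries before comparing, the ordering of stanzas and keys in the original files becomes irrelevant; only genuine value differences are reported.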