Welcome to Dmuth dot org

[Photo: Ferris Wheel at Funtown Pier]

Welcome to my website! I've been on the web in some form or another since 1996. Even though social media is all the rage these days, I feel it is important to have one's own presence on the web. So I continue to write here as often as I can.

If you'd like to get in touch with me, feel free to reach out! All of my contact info can be found here.

-- Doug


Getting Amazon S3 To Work With Odrive

I've become a huge fan of odrive lately. Odrive is like the Dropbox client, but it lets you sync to just about any cloud service. Examples include Dropbox, Box, Amazon Drive, Slack(!), and my personal favorite: Amazon S3.

In other words, you can have a directory on your hard drive mirrored into S3, so that any changes made are uploaded straight to S3. From there, you can do things like enabling access logging or encryption. All very neat stuff.

When connecting to Amazon S3, you'll need an Access Key and a Secret Access Key. You do NOT want to use the default ones that came with your account, as they have full access to everything on Amazon Web Services. Instead, you want to create a key whose access is limited to just your S3 bucket. This blog post will explain exactly how to do that.
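The heart of the approach is an IAM policy scoped to a single bucket. A minimal sketch of such a policy might look like the following (the bucket name `my-odrive-bucket` is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::my-odrive-bucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::my-odrive-bucket/*"
    }
  ]
}
```

Note that listing applies to the bucket ARN itself, while the object-level operations apply to `/*` underneath it--both statements are needed for a sync client to work.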


ssh-to: Easily manage dozens or hundreds of machines with SSH

Hey software engineers! Do you manage servers? Lots of servers? Hate copying and pasting IP addresses? Need a way to execute a command on each of a group of servers that you manage?

I developed an app which can help with those things, and my employer has graciously given me permission to open source it.

First, here's the link: https://github.com/Comcast/ssh-to

And here's how to download a copy:

git clone https://github.com/Comcast/ssh-to.git

Getting Started

In order to use this script, you'll need to create a servers.json file:

./ssh-to --create

This will create a sample servers.json file in the current directory. You may want to then move that file to $HOME/.ssh-to-servers.json so that ssh-to can be run from any directory in the filesystem.

Next, to add hosts and hostgroups to the servers.json file:

./ssh-to --edit

This will bring up $EDITOR to edit the file. After you are done editing, exit the editor, and the validity of the JSON will be checked. If the JSON fails to validate, you'll be prompted to hit ENTER to go back into the editor or ctrl-C to abort.

Using ssh-to

Going forward, here's the syntax for using ssh-to in regular operation:

ssh-to [ --dump | --loop ] group [ host_number ] [ ssh_args ]

--dump prints out the hostname/IP address (or all of them, if you specify just a group), which can be used in other scripts like so:

for HOST in $(ssh-to --dump hadoop)
do
   scp file.txt ${HOST}:/path/to/destination/
done

"But wait, there's more!"

The --loop switch paves the way for other cool automations! Here's an example:

ssh-to splunk --loop "yum install -y splunk; /opt/splunk/bin/splunk stop; /opt/splunk/bin/splunk start --answer-yes --no-prompt"

This would perform a rolling upgrade of Splunk on each machine in an existing cluster.

"Operators are standing by!"

Please, try out the tool and let me know what you think via email or in the comments below. Feedback, bugfixes, and suggestions for features are always welcome!

-- Doug


Western PA Furry Weekend 2016 Pictures and Report

A few weekends ago, I had the pleasure of attending Western PA Furry Weekend, an outdoor event held the weekend of October 7th-9th at North Park Lodge in Pittsburgh, PA. It was my first WPAFW since 2010.

Even though Pittsburgh is 300 miles from Philadelphia, getting there was easy--I took that Friday off, rented a car, and then just drove west along the PA Turnpike for about 5 hours. I did it in the middle of the day so as to avoid traffic. Then, once I got to the hotel and got checked in, the lodge was just a 15 minute drive away.

The weather was great and I had a good time. I got to spend most of Friday night and Saturday catching up with folks I knew. Attendance this year was a record-breaking 270, and we raised $5,666 (also record-breaking!) for our charity, Going Home Greyhounds.

I also discovered--completely by accident--that putting a scoop of chocolate ice cream on top of pumpkin pie is AMAZING. Seriously, you should try it sometime.

I have many more pictures! The full albums can be found on:

Next year's event will be held from October 20-22, 2017 in the same location. I hope to make it there!


I Built a Facebook Group Leaderboard

This is a Node.js app which uses the Facebook Graph API to download recent posts from one or more groups and display the top posters, top commenters, and their stats in a leaderboard-style format. In production, I use this app to keep track of some groups I admin with thousands of users each, and to make sure that no one is spamming the group.
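To illustrate the core idea, here's a toy sketch (not the app's actual code) of the tally behind a leaderboard: given Graph-API-style post objects, count posts per author and sort descending. The `from.name` field is where the Graph API reported a post's author at the time:

```javascript
// Tally posts per author and return the top `limit` posters.
function topPosters(posts, limit) {
  const counts = new Map();
  for (const post of posts) {
    const name = post.from.name; // Graph API puts the author under "from"
    counts.set(name, (counts.get(name) || 0) + 1);
  }
  // Sort by count, highest first, and keep only the leaders.
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, limit)
    .map(([name, count]) => ({ name, count }));
}
```

The same tally, run over comment objects instead of posts, gives the top-commenters board.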

Live Demo: http://www.dmuth.org/facebook.

The source can be downloaded from here:

Give it a try, and let me know what you think!


Two New Open Source Projects

At my day job, I get to write a bit of code. I'm fortunate that my employer is pretty cool about letting us open source what we write, so I'm happy to announce that two of my projects have been open sourced!

The first project is an app, written in PHP, that compares an arbitrary number of .ini files on a logical basis. That means that if you have .ini files with similar contents, but the stanzas and key/value pairs are all mixed up, this utility will read in all of the .ini files you specify, load the stanzas and their keys and values into well-defined data structures, compare them, and report any differences.

In production, we used this to compare Splunk configuration files from several different installations that we wanted to consolidate. Given that we had dozens of files, some hundreds of lines long, this utility saved us hours of effort and eliminated the possibility of human error. It can be found at:
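To illustrate the "logical comparison" idea (a toy sketch in JavaScript, not the PHP app's actual code): once each file is parsed into a stanza-to-key/value structure, the diff is just a walk over the union of stanzas and keys, so ordering within the files no longer matters:

```javascript
// Diff two parsed .ini structures of the form { stanza: { key: value } }.
// Returns one entry per key whose value differs (or exists on one side only).
function diffIni(a, b) {
  const diffs = [];
  const stanzas = new Set([...Object.keys(a), ...Object.keys(b)]);
  for (const stanza of stanzas) {
    const ka = a[stanza] || {};
    const kb = b[stanza] || {};
    const keys = new Set([...Object.keys(ka), ...Object.keys(kb)]);
    for (const key of keys) {
      if (ka[key] !== kb[key]) {
        diffs.push({ stanza, key, left: ka[key], right: kb[key] });
      }
    }
  }
  return diffs;
}
```

Because the comparison runs over the parsed structures, two files whose stanzas appear in completely different orders but carry identical settings produce an empty diff.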

The next app I developed was written in Node.js and is intended for use in a high-availability environment. In most HA environments, you will have multiple servers running behind a load balancer. In order to check the health of its servers, the load balancer will usually issue an HTTP GET request to a pre-defined endpoint to make sure each server is healthy. But what if... the server didn't have any GET endpoints? This is actually the case with Apache NiFi, which only provides HTTP POST endpoints. What now?

That's where this utility comes in--it starts an HTTP server on the port of your choice, turns each incoming GET request into a POST request (with a zero-byte payload), sends it to a target port on the same server, and relays back the HTTP response. In effect, it proxies a GET as a POST and returns the result. It's a bit of an odd way to go about it, but it let us use Apache NiFi more effectively in a high-availability environment without breaking any workflows, so we're calling that a win. That app can be found at:

I hope these are of use to anyone who stumbles across them. If you have any feedback or comments, feel free to leave them below or on GitHub!


Anthrocon 2016 Pictures

Another Anthrocon has come and gone, and it's been an amazing year!

We had 7,310 attendees this year, a considerable jump up from our previous number of 6,348 attendees. Our Fursuit parade had 2,100 fursuiters in it, an even bigger relative jump from the previous year's number of 1,460.

We raised $31,880 for our charity: The Pittsburgh Zoo & PPG Aquarium.



Announcing Real-time SEPTA Train Stats!

I am pleased to announce the launch of the website Septa Stats! This website provides real-time data on all Regional Rail train lines. The following stats and metrics are supported:

That website is at:



The underlying technology stack consists of PHP, Slim (a microframework for PHP), Redis (for query result caching), and Splunk for data storage and reporting.

My source code is also available!



Extracting Session IDs from Websocket Requests in Express.js

I decided to learn websockets recently, and figured I'd use the excellent Socket.io library along with Express. The tutorials on their website made sense; however, I ran into an issue with the express-session module--cookies are not normally parsed on websocket connections, so I could not access the session data in the usual way.

I then spent several hours reading blog posts and Stack Overflow to figure out how to manually parse the cookie string and verify the signed session cookie, which I thought I'd share here!

This assumes that you are using Express 4.x and have installed the following modules:

  • cookie - Used to parse the cookie string
  • cookie-signature - Used to verify the signed cookie
  • debug - Used to display debug messages. Replace with Winston or similar if you like.
  • express-session - Used to interface with sessions
  • session-file-store - If you're using a different data store, replace accordingly

The resulting code boils down to three steps: parse the cookie header, verify the signed connect.sid cookie, and fetch the session from the store.


How to Copy Uploads From AWS S3 Automatically

The problem: you write files to an S3 bucket on Amazon Web Services. Maybe a single user/process does this, maybe multiple users or processes do. But you want to keep a particular process from going rogue and deleting your data. What do you do?

The answer: You write a function in AWS Lambda that is fired whenever something is uploaded to the S3 bucket in question. It then calls the copyObject() method and makes a copy of the file to another bucket--one that only it (and your admin account, presumably) have access to write to.

The GitHub repository is at:

It's a quick and dirty thing that I put together mostly as a demo of how to integrate AWS Lambda with S3.

Feel free to give it a try--AWS has a free tier for all new accounts. If you like it, let me know. If you don't like it, let me know that, too!
