Behold! A QR Code Generator That Doesn’t Suck!

A while ago, I got into QR Codes. I've found them increasingly handy because I could make QR Codes based on documents I had stored in Google Docs, then invite people to scan those QR Codes in person if I wanted to share what was in the doc. The QR Codes themselves could be printed out on paper, saved to my phone and scanned from there, or even put on my Apple Watch.

But I ran into a challenge: creating QR Codes. When I searched for QR Code generators, most of the ones I found online either generated tiny QR Codes, had ads, required money, or all three! It was very much Not Fun, and I felt it was a minor injustice that creating something as useful as QR Codes was off limits to so many people.

So I decided to create my own.

After reading some tutorials, I found one that covered how to make QR Codes in Python. I then remembered that I had an app with a bunch of API endpoints available, and it was written in Python! So I figured, why not add a QR Code generator, expose it as another endpoint, and create a form that submits to that API?
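For the curious, the core of the approach looks something like the rough sketch below, using the qrcode library together with FastAPI. This is just an illustration, not the production code; the parameter name and details are made up.

# A minimal sketch: generate a QR Code with the "qrcode" library (which
# needs Pillow installed) and serve it as a PNG from a FastAPI endpoint.
# The parameter name "text" is made up for illustration.
import io

import qrcode
from fastapi import FastAPI, Response

app = FastAPI()

@app.get("/qrcode/")
def make_qrcode(text: str):
    img = qrcode.make(text)   # Build the QR Code image
    buf = io.BytesIO()
    img.save(buf)             # Write it out as a PNG
    return Response(content=buf.getvalue(), media_type="image/png")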

And that’s exactly what I did. You can find the QR Code generator here:

https://httpbin.dmuth.org/qrcode/

This is 100% free with no ads, no tracking, and no spam. Just QR Codes when you want them.

Enjoy!

Mirror on Medium.

Introducing FastAPI Httpbin

I write a lot of back-end code and sometimes have to make use of HTTP endpoints. Sometimes I want to test those endpoints. I used to use httpbin.org for my endpoint testing, but over time I noticed that some of the endpoints were returning HTTP 5xx errors. Some investigation revealed that the project seems to be abandoned, with open issues going all the way back to 2017. That’s not so good.

So I decided to build my own.

In this post I’d like to talk about FastAPI Httpbin. But before I can, I need to talk about FastAPI itself. FastAPI is a framework for building high-performance APIs in Python, based on Python type hints. The really neat thing about FastAPI is that your function definition for each endpoint is the source of truth: FastAPI handles argument validation for any calls to that endpoint, and generates the appropriate Swagger documentation for that endpoint.
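As a toy example of what that looks like (my own illustration, not an endpoint from FastAPI Httpbin):

# A type-hinted FastAPI endpoint. The endpoint and parameter names here
# are made up for illustration.
from fastapi import FastAPI

app = FastAPI()

@app.get("/repeat/{count}")
def repeat(count: int, message: str = "hello"):
    # Because "count" is declared as an int, FastAPI rejects a call like
    # /repeat/abc with a 422 error before this function ever runs, and
    # both parameters automatically show up in the Swagger docs at /docs.
    return {"result": [message] * count}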

Continue reading “Introducing FastAPI Httpbin”

Tarsplit: A Utility to Split Tarballs Into Multiple Parts

Tarsplit is a utility I wrote which can split UNIX tarfiles into multiple parts while keeping the files inside the tarballs intact. I specifically wrote it because the other ways I found of splitting up tarballs didn’t keep the individual files intact, which would not play nice with Docker.

But what does Docker have to do with tar?

“Good tea. Nice house.”

While building the Docker images for my Splunk Lab project, I noticed that one of the layers was something like a Gigabyte in size! While Docker can handle large layers, the issue becomes the time it takes to push or pull an image. Docker validates each layer as it is received, and validating a 1 GB layer took 20-30 wall-clock seconds. Not fun.

It occurred to me that if I could split up that layer into, say, 10 layers of 100 Megabytes each, then Docker would transfer about 3 layers in parallel, and while one layer is being validated post-transfer, the next layer would start transferring. The end result is fewer wall-clock seconds to transfer the entire Docker image.

But there was an issue: Splunk and its applications are big. A few of the tarballs are hundreds of Megabytes in size, and I needed a way to split those tarballs into smaller tarballs without corrupting the files in them. This led to Tarsplit being written.
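Conceptually, the splitting logic boils down to something like this rough Python sketch (this is not the actual Tarsplit source, and it skips error handling and compression):

# Split a tarball into roughly equal-sized parts, always copying whole
# files, so every part is itself a valid tarball.
import tarfile

def split_tarball(path, num_parts):
    with tarfile.open(path) as src:
        members = src.getmembers()
        total = sum(m.size for m in members)
        target = total / num_parts          # Rough size for each part

        part, written = 1, 0
        dest = tarfile.open(f"{path}.part{part}", "w")
        for m in members:
            # Copy the whole file into the current part; files are never
            # cut in half.
            dest.addfile(m, src.extractfile(m) if m.isfile() else None)
            written += m.size
            if written >= target and part < num_parts:
                dest.close()
                part, written = part + 1, 0
                dest = tarfile.open(f"{path}.part{part}", "w")
        dest.close()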

Continue reading “Tarsplit: A Utility to Split Tarballs Into Multiple Parts”

Doing Rollups of AWS S3 Server Access Logs

If you are storing files in Amazon S3, you absolutely, positively should enable AWS S3 Server Access Logging. This causes every single access to a bucket to be written to a logfile in another S3 bucket, and it is super useful for tracking down bucket usage, especially if you have any publicly hosted content in your buckets.
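For reference, server access logging can be turned on with a few lines of boto3; the bucket names below are placeholders, and the logging bucket also needs a policy that lets S3's log delivery service write to it.

# Enable server access logging on a bucket (placeholder bucket names).
import boto3

s3 = boto3.client("s3")
s3.put_bucket_logging(
    Bucket="my-content-bucket",
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "my-logging-bucket",
            "TargetPrefix": "s3/my-content-bucket/",
        }
    },
)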

But there’s a problem: AWS goes absolutely bonkers when it comes to writing logs in S3. Multiple files are written per minute, each with as few as one event in them. It comes out looking like this:

2019-09-14 13:26:38        835 s3/www.pa-furry.org/2019-09-14-17-26-37-5B75705EA0D67AF7
2019-09-14 13:26:46        333 s3/www.pa-furry.org/2019-09-14-17-26-45-C8553CA61B663D7A
2019-09-14 13:26:55        333 s3/www.pa-furry.org/2019-09-14-17-26-54-F613777CE621F257
2019-09-14 13:26:56        333 s3/www.pa-furry.org/2019-09-14-17-26-55-99D355F57F3FABA9

At that rate, you will easily wind up with tens of thousands of logfiles per day. Yikes.

Dealing With So Many Logfiles

Wouldn’t it be nice if there was a way to perform rollups on those files so they could be condensed into fewer, bigger files?

Well, I wrote an app for that. Here’s how to get started: the first step is to clone the repo and install Serverless:

git clone git@github.com:dmuth/aws-s3-server-access-logging-rollup.git
npm install -g serverless

Serverless is a framework which lets you deploy applications on AWS and other cloud providers without actually spinning up virtual servers. In our case, we’ll use Serverless to create a Lambda function which executes periodically and performs rollups of logfiles.
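To give a feel for what that Lambda does, here is a simplified Python sketch of the rollup idea; the bucket, prefix, and key names are placeholders, and the real app handles batching, scheduling, and error cases more carefully.

# Roll up many tiny S3 access logfiles for one day into a single object,
# then delete the originals. Names here are placeholders.
import boto3

s3 = boto3.client("s3")

def rollup(bucket, prefix, date):
    combined = []
    keys = []
    paginator = s3.get_paginator("list_objects_v2")
    # Gather every tiny logfile written under this prefix for this day...
    for page in paginator.paginate(Bucket=bucket, Prefix=f"{prefix}{date}"):
        for obj in page.get("Contents", []):
            body = s3.get_object(Bucket=bucket, Key=obj["Key"])["Body"].read()
            combined.append(body)
            keys.append(obj["Key"])

    # ...write them back out as one big file, then remove the originals.
    s3.put_object(Bucket=bucket, Key=f"rollup/{prefix}{date}.log", Body=b"".join(combined))
    for key in keys:
        s3.delete_object(Bucket=bucket, Key=key)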

So once you have the code, here’s how to deploy it:

cp serverless.yml.example serverless.yml
vim serverless.yml # Vim is Best Editor
serverless deploy # Deploy the app. This will take some time.

Continue reading “Doing Rollups of AWS S3 Server Access Logs”

The Joy of Using Docker

I’ve written about Docker before, as I am a big fan of it. In this post, I’m going to talk about some practical situations in which I’ve used Docker in real life, both for testing and for software development!

But first, let’s recap what Docker IS and IS NOT:

  • Docker containers DO spin up quickly (1-2 seconds or less)
  • Docker containers DO have separate process trees and filesystems
  • Docker containers ARE NOT virtual machines
  • Docker containers ARE intended to be ephemeral (short-lived)
  • You CAN, however, mount filesystems from the host machine into Docker, so those files can live on after the container shuts down (or is killed).
  • You SHOULD only run one service per Docker container.

Everybody got that? Good. Now, let’s get into some real life things I’ve used Docker for.

Experimenting in Linux

Want to test out some commands or maybe a shell script that you’re worried might be destructive? No worries, try it in a Docker container, and if you nuke the filesystem, there will be no long-term consequences.

#
# Start a container with Alpine Linux
#
$ docker run -it alpine

#
# Let's do something dumb
#
$ rm /bin/ls 
$ ls -l
/bin/sh: ls: not found

#
# Just exit the container, restart it, and our filesystem is back!
#
$ exit 
[unifi:~/tmp ] $ docker run -it alpine
$ ls /
bin    dev    etc    home   lib    media  mnt    proc   root   run    sbin   srv    sys    tmp    usr    var

And all of the above takes just a couple of seconds! This works with other Linux distros as well, such as CentOS and Ubuntu; just change your Docker command accordingly:

docker run -it centos
docker run -it ubuntu

Yes, that means you could run CentOS in a container under Ubuntu or vice-versa. Docker doesn’t care. 🙂

Continue reading “The Joy of Using Docker”