If you’re a Mac user, you have a few options for running Docker. Aside from Docker’s official client, there are also Rancher Desktop and Podman. I’ve used them all, and they’re all decent implementations. However, I ran into limitations in each platform (beyond the scope of this post) that nonetheless prompted me to try building out my own Docker offering.
Having used VirtualBox and Vagrant before, I found myself wondering if I could use Vagrant to stand up an instance of Docker, proxy connections to Docker over SSH, and mount directories on the host machine’s filesystem.
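To make that concrete, here’s a rough sketch of the workflow I had in mind; the context name is arbitrary, and the VM is assumed to get Docker installed by a Vagrant provisioner:

```bash
# Boot the Linux VM defined in the Vagrantfile
vagrant up

# Append the VM's SSH settings (host "default", key, port) to the SSH config
vagrant ssh-config >> ~/.ssh/config

# Point the local docker CLI at the VM's Docker daemon over SSH
docker context create vagrant-docker --docker "host=ssh://default"
docker context use vagrant-docker
docker info   # should now report the VM's engine
```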
If you have a MacBook with a touch bar, you’re probably familiar with it. It’s a neat little display strip above the top row of your keyboard that macOS uses to show context-sensitive widgets. However, you don’t have to settle for the widgets that Apple provides–you can in fact customize the touch bar however you like.
“But why would I want to do this?”, I hear you ask. Well, maybe you want a custom status of some kind displayed on your touch bar. For me, it was… wireless networks.
MTMR is my new favorite utility.
That sounds confusing, but hear me out. Sometimes when I am traveling, I get kicked off of whatever wireless network I’m on. I wanted a way to easily determine what network I was on, without having to keep clicking on the wireless icon in my menu bar. I found that the touch bar was a convenient way to do that, and in this post, I will show you how I did it.
First, download an app called MTMR. MTMR stands for “My Touchbar. My rules.” Installation instructions are on that page, but most users will want the dmg file.
Once that’s installed, you can edit the file $HOME/Library/Application Support/MTMR/items.json to change what appears in the touch bar. The contents of the file are JSON, and you can edit it in whichever editor you like. Furthermore, once you save your changes, they take effect immediately–no restart of the MTMR app is necessary!
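MTMR includes widget types that can display the output of a script, which is how a wireless-network readout can work: the script just needs to print the current SSID. Here’s a minimal sketch, assuming en0 is your Wi-Fi interface (it usually is on a MacBook):

```bash
# Print the current Wi-Fi network name; run `networksetup -listallhardwareports`
# if your Wi-Fi interface isn't en0.
networksetup -getairportnetwork en0 | sed 's/^Current Wi-Fi Network: //'
```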
Living 20 minutes from downtown Philadelphia, I’m a big fan of our hockey mascot, Gritty. Recently I’ve been playing around with a website called character.ai, and one of the neat things that site lets you do is create bots based on characters, real or imaginary. For example, there is one character based on Albert Einstein, and another based on Darth Vader. So I decided I would create a character based on Gritty.
I immediately regretted that.
The AI powering that site is… frightfully good, to say the least. After seeding the character with just a handful of tweets from Gritty’s Twitter feed, the bot quickly took on a life of its own and said things that I would absolutely expect the real Gritty to say.
For example, let’s start with the no-fly list:
Well then.
Next I asked Gritty about his diet, and the answers the bot gave were concerning, to say the least:
I’m a big fan of Amazon S3 for storage–I like it so much that I use Odrive to sync folders from my hard drive into S3, storing copies of all of my files from Dropbox as a form of backup. I only have about 20 GB of data that I truly care about, so that should be less than a dollar per month for hosting, right? Well…
“You are not your job or how much data you have in S3!”
Close to 250 GB billed for last month. How did that happen?
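If you want to sanity-check what a bucket is actually holding, this is one quick way to do it (the bucket name is a placeholder):

```bash
# Summarize object count and total size; note this counts current objects only,
# not old versions.
aws s3 ls s3://my-backup-bucket --recursive --summarize --human-readable | tail -2
```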
I’m a big fan of the Discord Musicbot, and run it on some Discord servers that I admin. Wanting to run it on a server, I first created an Ansible playbook and launched a server on Digital Ocean. But after a few months, I noticed that the server was sitting over 90% idle. Surely there had to be a better way.
So I next tried Docker, and created a Dockerized version of the Musicbot. I was quite happy with how much easier it was to spin up the bot, but I still didn’t want to run it on a dedicated Digital Ocean server. Aside from the unused capacity, if that machine were to go down, I’d have to intervene manually.
I thought about running it in some sort of hosted Docker cluster, and came across Amazon’s Elastic Container Service (ECS). So this post is about creating your own ECS cluster and hosting a Docker container in it. I found the process slightly confusing the first time through, and I wanted to share my experience here.
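Before diving in, here’s a rough sketch of the moving pieces expressed as AWS CLI calls; the names and the task-definition file are illustrative:

```bash
# Create an empty ECS cluster
aws ecs create-cluster --cluster-name musicbot-cluster

# Register a task definition describing the Musicbot container
aws ecs register-task-definition --cli-input-json file://musicbot-task.json

# Run the task as a long-lived service with a single copy
aws ecs create-service \
    --cluster musicbot-cluster \
    --service-name musicbot \
    --task-definition musicbot \
    --desired-count 1
```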
While S3 is a great storage platform, what happens if you accidentally delete some important files? Well, S3 has a mechanism to recover deleted files, and I’d like to go into that in this post.
First, make sure you have versioning enabled on your bucket. This can be done via the API, or via the UI in the “Properties” tab for your bucket. Versioning saves every change to an object (including deletions) as a separate version, with the most recent version taking precedence. In fact, a deletion is also a version! It is a zero-byte version known as a “delete marker”. And the essence of recovering deleted files simply involves removing that latest delete-marker version.
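To make that concrete, here’s roughly what removing a delete marker looks like with the AWS CLI. This is just an illustration (bucket and key are placeholders), not the s3-undelete.sh script mentioned below:

```bash
BUCKET=my-bucket
KEY=path/to/file.txt

# Show the delete marker(s) currently sitting on top of the object
aws s3api list-object-versions --bucket "$BUCKET" --prefix "$KEY" \
    --query 'DeleteMarkers[?IsLatest==`true`].[Key,VersionId]' --output text

# Deleting the delete marker itself brings the previous version back
aws s3api delete-object --bucket "$BUCKET" --key "$KEY" --version-id "VERSION_ID_FROM_ABOVE"
```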
This is what that would look like in the UI:
To undelete these files, we’ll use a script I created called s3-undelete.sh, which can be found over on GitHub:
Hey software engineers! Do you manage servers? Lots of servers? Hate copying and pasting hostnames and IP addresses? Need a way to execute a command on each of a group of servers that you manage?
I developed an app which can help with those things, and my employer has graciously given me permission to open source it.
At my day job, I get to write a bit of code. I’m fortunate that my employer is pretty cool about letting us open source what we write, so I’m happy to announce that two of my projects have been open sourced!
The first project is an app I wrote in PHP that can be used to compare an arbitrary number of .ini files on a logical basis. What this means is that if you have .ini files with similar contents, but the stanzas and key/value pairs are all mixed up, this utility will read in all of the .ini files that you specify, put the stanzas and their keys and values into well-defined data structures, perform comparisons, and let you know what the differences (if any) are. In production, we used this to compare configuration files for Splunk from several different installations that we wanted to consolidate. Given that we had dozens of files, some with hundreds of lines, this utility saved us hours of effort and eliminated the possibility of human error. It can be found at:
As a service to the Philly tech community (and because folks asked), I took notes at tonight’s presentation, called “Security Practices for DevOps Teams”. It was presented by Chris Merrick, VP of Engineering at RJMetrics.
Security is a “cursed role”
…in the sense that if you’re doing a really good job as a security engineer, no one knows you exist.
It isn’t sexy
It’s hard to quantify
It’s never done
As DevOps engineers, we are all de facto security engineers
Some tips to avoid ending up like this [Picture of a dismembered C3PO]
Sense. This picture makes none.
Security Principles
Obscurity is not Security
“A secret endpoint on your website is not security”
“Don’t rely on randomness to secure things”
Least Privilege
Do not give more privileges than are needed
Weakest Link
If you talk to an insecure system, you’re at risk
Inevitability
Security Types
Physical
Stealing laptops
Breaking into datacenters
Application
Any vector that comes through an application you developed
XSS
Network*
Systems*
Applications you didn’t write
Human
Phishing, social engineering
Server Auth
Reminder:
Authentication is who you are
Authorization is what you can access
Don’t access production directly
Good news: this is our job anyways
Don’t spread private keys around
Don’t put it in your Dropbox
Don’t let it leave the machine you generated it on
Use SSH agent forwarding
ssh-add (load your key into the local agent)
ssh -A you@remote (connect with agent forwarding enabled)
ssh-add -l (list the keys the agent is holding)
Don’t use shared accounts
Especially root
Be able to revoke access quickly
Time yourself. Go.
We use Amazon OpsWorks to help us achieve these goals
Chef+AWS, with some neat tricks: simple autoscaling, application deployment, and SSH user management
Logging
“Logs are your lifeline”
When you get into a high pressure security investigation, you start with your logs
Capture all authentication events, privilege escalations, and state changes.
From your OS and all running applications
Make sure you can trust your logs
Remember – they’re your lifeline
Have a retention policy
We keep 30 days “hot”, 90 days “cold”
Logging – ELK
We use ELK for hot log searching
Kibana visualizes those logs and lets you monitor your application in real time
Deployment
Keep unencrypted secrets out of code
Otherwise, a MongoLab exploit becomes your exploit
Don’t keep old code around
Make deployment and rollback easy
More good news: this is our job anyways
When dealing with a security issue, the last thing we need is a “hard last step” in order to get the fix out
IAM
Don’t use your root account, ever.
Set a long password and lock it away
Set a strong password policy and require MFA
Don’t create API keys where API access isn’t needed
Same goes for a console password
Use Managed Policies
To make management easier
Use Roles to grant access to other systems
No need to deploy keys; credentials rotate automatically
IAM Policy Pro Tips
Don’t use explicit DENY policies
Keep in mind that everything is denied by default
Don’t assume your custom policy is correct just because it saves – the interface only confirms the JSON is valid
Use the policy simulator
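For reference, the simulator can also be driven from the CLI; the ARNs here are made up:

```bash
aws iam simulate-principal-policy \
    --policy-source-arn arn:aws:iam::123456789012:user/deploy-bot \
    --action-names s3:GetObject \
    --resource-arns "arn:aws:s3:::example-bucket/*"
```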
Know Thy Enemy
People are out there scanning for AWS keys – treat your AWS secret key like a private SSH key
In the last post, I talked about how to create a Git repository and upload it to GitHub. In this post, I’m going to talk about how to resolve Git conflicts.
Setting Up Our Environment
First, we’re going to create a Git repository for the user Doug. Since I already covered that in the last post, I’m going to breeze through those steps below:
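Condensed, those steps look roughly like this (the directory, name, email, and file contents are placeholders):

```bash
# Create a repository as the user Doug
mkdir doug-project && cd doug-project
git init
git config user.name  "Doug"
git config user.email "doug@example.com"

# Add an initial file and commit it
echo "Hello from Doug" > README.md
git add README.md
git commit -m "Initial commit"
```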