Notes from February 2015 Philly DevOps Meetup: Security Practices for DevOps Teams

As a service to the Philly tech community (and because folks asked), I took notes at tonight’s presentation, called “Security Practices for DevOps Teams”. It was presented by Chris Merrick, VP of Engineering at RJMetrics.

Security is a “cursed role”

  • …in the sense that if you’re doing a really good job as a security engineer, no one knows you exist.
    • It isn’t sexy
    • It’s hard to quantify
    • It’s never done

As DevOps engineers, we are all de facto security engineers

Some tips to avoid ending up like this [Picture of a dismembered C3PO]

Sense. This picture makes none. 


  • Security Principles
    • Obscurity is not Security
      • “A secret endpoint on your website is not security”
      • “Don’t rely on randomness to secure things”
    • Least Privilege
      • Do not give more privileges than are needed
    • Weakest Link
      • If you talk to an insecure system, you’re at risk
    • Inevitability

Security Types

  • Physical
    • Stealing laptops
    • Breaking into datacenters
  • Application
    • Any vector that comes through an application you developed
      • XSS
  • Network*
  • Systems*
    • Applications you didn’t write
  • Human
    • Phishing, social engineering

Server Auth

  • Reminder:
    • Authentication is who you are
    • Authorization is what you can access
  • Don’t access production directly
    • Good news: this is our job anyways
  • Don’t spread private keys around
    • Don’t put it in your Dropbox
    • Don’t let it leave the machine you generated it on
    • Use SSH agent forwarding (see the sketch after this list)
      • ssh-add
      • ssh -A you@remote
      • ssh-add -l
  • Don’t use shared accounts
    • Especially root
  • Be able to revoke access quickly
    • Time yourself. Go.
  • We use Amazon OpsWorks to help us achieve these goals
    • Chef+AWS, with some neat tricks: simple autoscaling, application deployment, and SSH user management
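
For reference, here’s roughly what the agent forwarding workflow looks like from a workstation. The key path and hostname below are placeholders:

# load your key into the local ssh-agent (you’ll be prompted for its passphrase once)
$ ssh-add ~/.ssh/id_rsa

# connect with agent forwarding enabled, so the remote host can ask your local agent to authenticate
$ ssh -A you@bastion.example.com

# list the keys the agent is currently holding
$ ssh-add -l

The private key itself never leaves your workstation; only the agent socket is forwarded, which is why this pairs well with “don’t let it leave the machine you generated it on”. Only forward your agent to hosts you trust.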

Logging

  • “Logs are your lifeline”
  • When you get into a high pressure security investigation, you start with your logs
  • Capture all authentication events, privilege escalations, and state changes.
    • From your OS and all running applications
  • Make sure you can trust your logs
    • Remember – they’re your lifeline
  • Have a retention policy
    • We keep 30 days “hot”, 90 days “cold” (one way to enforce this is sketched after this list)
  • Logging – ELK
    • We use ELK for hot log searching
    • Kibana provides the search and dashboard layer on top of those logs, and lets you monitor your application in real time
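
As a rough illustration of that 30/90-day split, here’s one way a nightly cron job could enforce it. The hot/cold directory names are placeholders, and this assumes logs are written out as dated files:

# move anything older than 30 days out of “hot” storage into a “cold” archive directory
$ find /var/log/hot -type f -mtime +30 -exec mv {} /var/log/cold/ \;

# drop cold copies once they pass the 90-day mark
$ find /var/log/cold -type f -mtime +90 -delete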

Deployment

  • Keep unencrypted secrets out of code
    • Otherwise, a MongoLab exploit becomes your exploit
  • Don’t keep old code around
  • Make deployment and rollback easy
    • More good news: this is our job anyways
    • When dealing with a security issue, the last thing we need is a “hard last step” in order to get the fix out
  • IAM
    • Don’t use your root account, ever.
      • Set a long password and lock it away
  • Set a strong password policy and require MFA
  • Don’t create API keys where API access isn’t needed
    • Same goes for a console password
  • Use Managed Policies
    • To make management easier
  • Use Roles to grant access to other systems
    • No need to deploy keys; the credentials rotate automatically
  • IAM Policy Pro Tips
    • Don’t use explicit DENY policies
      • Keep in mind that everything is denied by default
    • Don’t assume your custom policy is correct just because it saves – the interface only confirms the JSON is valid
    • Use the policy simulator (a CLI version is sketched after this list)
  • Know Thy Enemy
    • People are out there scanning for AWS keys – treat your AWS secret key like a private SSH key
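
A couple of these can be set or checked straight from the AWS CLI. The account ID, user, and action names below are placeholders, and this is only a sketch of the idea:

# enforce a strong console password policy for the whole account
$ aws iam update-account-password-policy --minimum-password-length 16 \
    --require-symbols --require-numbers \
    --require-uppercase-characters --require-lowercase-characters

# simulate what a principal can actually do before trusting a hand-written policy
$ aws iam simulate-principal-policy \
    --policy-source-arn arn:aws:iam::123456789012:user/deploy-bot \
    --action-names s3:GetObject ec2:TerminateInstances

The second command is the CLI counterpart of the web-based policy simulator mentioned above.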

Bonus Tips

  • Set up a security page
  • Sign up for the US-CERT email list, and the security notification list for your OS
  • Other resources
    • OWASP – owasp.org
    • SecLists.org
    • Common Weakness Enumeration – cwe.mitre.org

Conclusion

  • What else should we be doing to keep our work secure? [Picture of C3PO in a field full of flowers]

Setting up custom short domains in Bit.ly

What is a URL shortener?

A URL shortener is a service which takes a long URL and creates a much shorter URL which then forwards you to the original URL when loaded. For example, the URL http://bit.ly/18lZ8Ns, if clicked on, will redirect your browser to http://en.wikipedia.org/wiki/Cheetah instead.
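
You can watch this happen with curl by asking for just the headers; the Location header in the response is the long URL the shortener redirects you to:

# fetch only the response headers and look for the Location: header
$ curl -I http://bit.ly/18lZ8Ns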

Short URLs and the services that create them have grown quite popular in recent years, as microblogging services such as Twitter limit you to 140 characters per message. In fact, Twitter offers its own URL shortener built into the service for the benefit of its users.

The most popular of the URL shorteners is Bitly, so this post will be about setting up a custom short domain with Bitly.

Continue reading “Setting up custom short domains in Bit.ly”

Git 101: How to Handle Merge Conflicts

In the last post, I talked about how to create a Git repository and upload it to GitHub. In this post, I’m going to talk about how to resolve Git conflicts.

Setting Up Our Environment

First, we’re going to create a Git repository for the user Doug. Since I already covered that in the last post, I’m going to breeze through those steps below:

$ mkdir doug
$ cd doug
$ git init
Initialized empty Git repository in /path/to/doug/.git/
$ touch README.md
$ git add README.md 
$ git commit -m "First commit"
[master (root-commit) d86a7e2] First commit
 1 file changed, 0 insertions(+), 0 deletions(-)
 create mode 100644 README.md

At this point, we have a repository created for the user Doug. Now I’m going to clone that repository for the user Andrew:
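
The full walkthrough is in the post, but the clone step is presumably along these lines, using a plain local path purely for illustration:

# from the directory that contains doug/, make a second working copy for Andrew
$ cd ..
$ git clone doug andrew
$ cd andrew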

Continue reading “Git 101: How to Handle Merge Conflicts”

Git 101: Creating a Git Repo and Uploading It To GitHub

In this post, I’m going to discuss how to create a Git repo and upload (or “push”) it to GitHub, a popular service for hosting Git repositories.

What is revision control and why do I need it?

Revision control is a system that tracks changes to files. In programming, that is usually program code, but documents and text files can also be tracked. Using revision control gives you the following benefits:

  • You will know what was changed, when it was changed, and who changed it
  • Multiple people can collaborate on a project without fear of overwriting each others’ changes.
  • Protection against accidentally deleting a critical file (revision history is usually read-only)

In Git, we store revisions in “repositories”, or “repos” for short. As of this writing, the #1 service for hosting Git repositories is GitHub, which offers free hosting.
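
As a preview of where this is headed, the upload itself boils down to two commands once an empty repository exists on GitHub. The remote URL below is a placeholder:

# point the local repository at the GitHub repository
$ git remote add origin git@github.com:yourname/yourrepo.git

# push the master branch and remember the remote for future pushes
$ git push -u origin master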

Continue reading “Git 101: Creating a Git Repo and Uploading It To GitHub”

Notes from “Scaling on AWS for the First 10 Million Users”

Earlier tonight, I had the pleasure of attending a presentation from Chris Munns of Amazon at the offices of First Round Capital about scaling your software on AWS past the first 10 million users. I already had some experience with AWS, but I learned quite a few new things about how to leverage AWS, so I decided to write up my notes in a blog post for future reference, and as a service to other members of the Philadelphia tech community.

Without further preamble, here are my notes from the presentation:

Continue reading “Notes from “Scaling on AWS for the First 10 Million Users””

An American’s Visit to Sweden and Denmark

I found myself between jobs a short while ago, and suddenly had a few weeks of downtime. Since I had an open invitation to visit Sweden and Denmark from friends living there, I figured I would take the opportunity and head over. I took a few pictures, and noted some observations about those countries.

First, Know Some Locals

If you want to take a package tour and see the parts of the country that are tourist attractions, feel free. But you’ll get much more out of the experience if you know locals who can show you things that they think are interesting. I was fortunate to know both Pinky Fennec and Joel Fox, who were more than happy to host me while I visited.

Public Transit is Amazing

[Picture of a subway station on the Hagsätra line]

The train system in Stockholm is pretty extensive. Trains and buses run pretty much everywhere you’d want to go. Both systems used a wireless card that you could buy with a week’s worth of fare, and just swipe at the turnstile or at the front of the bus. Subway stations had displays above the tracks stating how much time until the next train would arrive. When bus schedules said that buses arrived on the quarter hour, they meant it.

It was also pretty much the same in Copenhagen, except I spent more time on buses. The bus service there was simply phenomenal. They had buses that would arrive every 8 minutes, and they meant it. I regularly commuted from a friend’s place on the outskirts of Copenhagen to the central part of the city every single day.

Getting from city to city was also fun. I rode on what amounted to the Swedish version of Amtrak from Stockholm to Copenhagen. It was a single train ride, about 400 miles in 5 hours. That’s an average of 80 mph, but with the stops we made I’d say we did somewhere between 90 and 100 mph most of the time. The train ride was exceptionally smooth, and several times when we started moving I thought that the train next to us was moving. Naturally, these trains were also on time.

Continue reading “An American’s Visit to Sweden and Denmark”

Multi Core CPU Performance in GoLang

I’ve been playing around with Google’s programming language known simply as Go lately, and have learned quite a bit about concurrency, parallelism, and writing code to effectively use multiple CPU cores in the process.

An Overview of Go

[Picture captioned “Not A Serial Killer”]

If you are used to programming at the systems level, Go is effectively a replacement for C, but also has higher level functionality.

  • It is strongly typed
  • It is compiled
  • It has its own unit testing framework
  • It has its own benchmarking framework (see the benchmarking sketch after this list)
  • Doesn’t support classes, but does support structures and attaching methods to those structure types
  • Doesn’t expose OS threads directly; instead, lightweight “goroutines” communicate over a construct called “channels” between different parts of a Go program. More on channels shortly.
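
One quick way to see the multi-core angle in practice, assuming you’ve written benchmarks with the built-in framework: run them with different GOMAXPROCS values and compare the results.

# run the package’s benchmarks limited to one core, then again with four
$ GOMAXPROCS=1 go test -bench=.
$ GOMAXPROCS=4 go test -bench=.
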
Continue reading “Multi Core CPU Performance in GoLang”

Running Dorsai Thing 2013: Lessons Learned

[Picture of the Liberty Bell]

Each year, The Dorsai Irregulars have their own annual convention known as “Dorsai Thing”. It is a weekend-long event, held in a different city each year, where we all gather, socialize, and hold our semi-annual business meeting. This year, it was held in Philadelphia, and it fell on my shoulders to run the event, dubbed “Liberty Thing”. While I’ve worked at plenty of conventions before, this was the first one where I was effectively the Con Chair. This was an entirely new thing for me, and I wanted to write a blog post about some of the things I learned during the experience.

Continue reading “Running Dorsai Thing 2013: Lessons Learned”

Setting up IPv6 on Linode with nginx

So Linode has gotten onto the IPv6 bandwagon. In addition to each Linode getting an IPv6 address, they will also gladly assign an entire /64 netblock to your specific node on request. That gives you all sorts of flexibility for bringing up IPv6 services. And this blog post is going to be all about how to make use of multiple IPv6 addresses.

As of this writing, everything I mention will work on a Linode running Ubuntu 12.04 LTS and Nginx 1.1. There’s no reason that these instructions shouldn’t work on other distros or versions of Ubuntu with some modifications.
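
As a taste of what using one extra address from that /64 looks like, here’s a minimal sketch; 2001:db8::/64 is a documentation prefix standing in for your real allocation:

# bring up an additional IPv6 address from your routed /64 on the public interface
$ sudo ip -6 addr add 2001:db8::10/64 dev eth0

# then have an nginx server block listen on that address explicitly:
#   listen [2001:db8::10]:80;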

Continue reading “Setting up IPv6 on Linode with nginx”

Scaling Anthrocon’s Website to Handle 1,400 Simultaneous Connections

The Challenge

When hotel reservations open, that is the single busiest time of the year for Anthrocon’s webserver. In fact, it even caused us performance problems last year. That was not so good.

So this year, I decided to try something different. Instead of leaving the regular Drupal-based website up and running, I replaced the entire page with a relatively static “countdown” page, which displayed a countdown timer and automatically revealed the hotel link at 11 AM on the opening day.
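
For a sense of scale, that kind of concurrency is easy to approximate against a static page with ApacheBench, which is one way to sanity-check a setup like this before the real rush. The URL is a placeholder:

# 100,000 requests, 1,400 at a time, against the countdown page
$ ab -n 100000 -c 1400 http://www.example.com/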

First, some stats for the Anthrocon website:

  • Peak bandwidth: 1.6 Megabits/sec
  • Peak connections: 1,400 concurrent connections

And now some stats for Passkey, which handled most of the traffic:

  • Peak bandwidth: 190 Megabits/sec
  • Peak connections: 4,000 concurrent connections

Continue reading “Scaling Anthrocon’s Website to Handle 1,400 Simultaneous Connections”