With this whole COVID-19 pandemic going on, we all have to socially isolate ourselves in our homes in order to flatten the curve of cases, and this is likely to continue for some time.
I’ve looked into ways to watch Netflix with friends online, and after trying several different “solutions”, I found one which works quite well: Netflix Party
In order to use Netflix Party, each person will need a desktop or laptop running the Google Chrome web browser. If you don’t have Chrome installed, ask your child to install it on your computer for you. If you don’t have any children, text your neighbor’s kid for installation instructions.
Welcome to my website and blog! It has been around in one form or another for well over a decade. It’s changed purposes a few times, and at this point is now a mostly tech blog where I share things I’ve learned or made. Feel free to have a look around, or consider checking out some of the more popular posts I’ve made over the years:
If you manage Linux servers over the Internet, you use SSH to connect to them. SSH gives you a remote shell on a host over an encrypted channel, so that an attacker cannot watch what you are doing over the network. In this blog post, I’m going to talk about using SSH at scale across thousands of hosts.
Phase 0: Passwords
When you get started with SSH for the first time, you likely won’t have keys set up and will instead use passwords to authenticate to your servers. It will look something like this:
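(The hostname and username below are just placeholders; your session will look slightly different.)

$ ssh admin@web01.example.com
admin@web01.example.com's password:
Last login: Thu Jan  2 10:15:23 2020 from 192.0.2.10
admin@web01:~$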
You use SSH to connect to the server, type in your password, and you’re good to go. That’s fine for small scale, such as managing a single server, but it doesn’t come without downsides. Specifically, you won’t be able to easily use a tool such as Ansible, or easily do code check-ins with Git.
And that’s actually a bigger problem than it sounds, because if you make a tool harder to use, that tool will be used far less often. This can lead to things such as configuration drift because Ansible is run less often, or giant code pushes happening once a day because Git is used less. And giant code pushes are a particular problem: if other engineers have written code, you’ll have to do a merge, and if a bug presents itself, you’ll now have to think back to what you did 8 hours ago, not 8 minutes ago. Having to type in a password every single time will also slow down the rate of deployment, which in turn slows down the rate of product releases. Not good.
Seriously, don’t use SSH with a password for any reason other than as a stepping stone to using keys. And that brings us to…
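As a very rough preview, switching to key-based authentication only takes a couple of commands (the key type and hostname below are just examples):

$ ssh-keygen -t ed25519                  # Generate a key pair (stored under ~/.ssh/)
$ ssh-copy-id admin@web01.example.com    # Copy the public key to the server
$ ssh admin@web01.example.com            # Future logins no longer prompt for the account password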
Perhaps you’re worried about being doxxed, perhaps you’ve received some specific threats, maybe you just want to increase your security. No matter the reason, this article is for you! Below I will list a collection of good practices to keep you and your accounts safe online. I fully expect to update this post as things change in the future.
I have tried to put things in a logical order, with some later steps depending on earlier steps, and some things that may be considered “controversial” towards the end.
This post was last updated on Jan 2, 2020.
Let’s start with passwords. I shouldn’t have to say this, but I will do so anyway: do not reuse passwords. Reusing passwords means that if a single account provider is breached and your plaintext password is recovered, you now have additional accounts at risk of compromise. This has happened before.
I recommend using a password manager such as LastPass to keep track of your passwords. While having your passwords stored in an app that uploads them somewhere increases your risk slightly, I feel it is outweighed by using a different password for each service. For passwords themselves, you can use random characters or a system such as Diceware to create long passwords that are easier to remember. While the latter is slightly less secure, a password that can be remembered is one less password to store into a password manager.
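For example, one rough way to generate a Diceware-style passphrase on the command line (this assumes you’ve downloaded the EFF long wordlist, and strictly speaking real Diceware uses physical dice rather than shuf):

awk '{print $2}' eff_large_wordlist.txt | shuf -n 6 | paste -sd '-'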
Anyway, once you have enabled Untrusted Shortcuts, you can click on any of these links to add shortcuts to Siri that will query the website’s API and report back on the status of Regional Rail, Buses, or both:
As much as I love using Docker, one of the frustrations I have is when I try to remove an image which other images are based on, only to get this error:
$ docker rmi b171179240df
Error response from daemon: conflict: unable to delete b171179240df (cannot be forced) - image has dependent child images
I did some searches on Google, and most of the advice centered around the heavy-handed approach of removing all Docker images and basically starting over with a clean slate. That approach didn’t sit well with me because it doesn’t strike me as all that efficient, and also causes me to have to spend more time waiting for unrelated containers to rebuild.
That prompted me to write a script which, when provided with the ID of an image to remove, will recurse through all child images and delete them first.
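The core idea is roughly the following (a simplified sketch, not the actual script; it walks the parent/child relationships that docker inspect reports and deletes the deepest children first):

#!/bin/bash
# Simplified sketch: remove an image and any images built on top of it, children first.

remove_with_children() {
    local image="$1"

    # Look at every image and recurse into any whose recorded parent matches this one.
    for id in $(docker images -a -q | sort -u); do
        # Skip images already deleted by a deeper recursion
        parent=$(docker inspect --format '{{.Parent}}' "$id" 2>/dev/null) || continue
        case "$parent" in
            *"$image"*) remove_with_children "$id" ;;
        esac
    done

    docker rmi "$image"
}

remove_with_children "$1"    # e.g. ./delete-image.sh b171179240df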
If you are storing files in Amazon S3, you absolutely positively should enable AWS S3 Access Logging. This will cause every single access in a bucket to be written to a logfile in another S3 bucket, and is super useful for tracking down bucket usage, especially if you have any publicly hosted content in your buckets.
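If you prefer the CLI over the console, turning it on looks something like this (both bucket names are placeholders, and the target bucket needs to allow writes from the S3 log delivery group):

aws s3api put-bucket-logging --bucket my-content-bucket \
    --bucket-logging-status '{"LoggingEnabled": {"TargetBucket": "my-logging-bucket", "TargetPrefix": "logs/"}}'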
But there’s a problem: AWS goes absolutely bonkers when it comes to writing logs in S3. Multiple files will be written per minute, each with as few as one event in them. It comes out looking like this:
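(These object names are made up, but they show the pattern: one timestamped name per file, with new files appearing every few seconds.)

logs/2020-01-02-03-04-05-0123456789ABCDEF
logs/2020-01-02-03-04-17-89ABCDEF01234567
logs/2020-01-02-03-04-49-FEDCBA9876543210
logs/2020-01-02-03-05-02-0F1E2D3C4B5A6978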
Serverless is a framework which lets you deploy applications on AWS and other cloud providers without actually spinning up virtual servers. In our case, we’ll use Serverless to create a Lambda function which executes periodically and performs rollup of logfiles.
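The periodic execution is driven by a schedule entry in serverless.yml, along these lines (the service, runtime, and handler names here are hypothetical, not the project’s actual configuration):

service: s3-log-rollup            # hypothetical service name

provider:
  name: aws
  runtime: python3.8              # hypothetical runtime

functions:
  rollup:
    handler: handler.rollup       # hypothetical handler
    events:
      - schedule: rate(1 hour)    # run the rollup periodically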
So once you have the code, here’s how to deploy it:
cp serverless.yml.example serverless.yml # Copy the example config
vim serverless.yml # Vim is Best Editor
serverless deploy # Deploy the app. This will take some time.
If you’re like me, you write a fair bit of code, which means you have to interact with many Git repositories. If you’re also like me, chances are you have them in a directory called development/ or similar. It might even have some nested directories, something like this:
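(Project names here are just placeholders.)

development/
    personal/
        blog/
        dotfiles/
    work/
        api-server/
        frontend/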
So that’s cool, but let’s say that you get a new machine and you want to replicate your development/ directory structure onto it. One way is to check out everything by hand, but that’s laborious and time-consuming. A second way is to keep backups (and you should absolutely do this), but aside from the challenge of restoring a single directory out of an entire archive, what if that backup doesn’t have the latest commits in it?
I can now offer a third way. I recently wrote a couple of scripts, available on GitHub, that can be used to extract the Git remote from each repo in an entire directory structure, and save those remotes and the directories they belong in to a file. Given the above example, it might look something like this:
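(These paths and remotes are placeholders, but the saved file has this general shape: one line per repo, with its path and remote URL.)

development/personal/blog git@github.com:example/blog.git
development/personal/dotfiles git@github.com:example/dotfiles.git
development/work/api-server git@github.com:example/api-server.git
development/work/frontend git@github.com:example/frontend.git

And the extraction step itself boils down to something like this (a simplified sketch, not the actual script):

#!/bin/bash
# Simplified sketch: walk development/ and record each repo's origin URL next to its path.
find development/ -type d -name .git | sort | while read gitdir; do
    repo=$(dirname "$gitdir")
    url=$(git -C "$repo" remote get-url origin)
    echo "$repo $url"
done > repos.txt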