Splunk Lab News and Updates

Hey everyone! I’ve been hard at work on Splunk Lab these last few months, and I wanted to share what I’ve done with it.

Splunk> Knowledge is Power. Power Corrupts. Yield to Temptation.

The first thing is that I baked several Splunk apps into Splunk Lab so that they’re all available at launch! That list includes:

I’ve also written (or, in one case, re-written) apps using Splunk Lab as a jumping-off point. Here’s what I have so far:

  • Splunk Yelp Reviews – Lets you pull down Yelp reviews for venues and view visualizations and word clouds of positive/negative reviews in a Splunk dashboard.
  • Splunk Telegram – Lets you run Splunk against messages from Telegram groups and generate graphs and word clouds based on the activity in them.
  • Splunk Network Health Check – Pings one or more hosts and graphs the results in Splunk so you can monitor network connectivity over time.
  • …plus a few other things that I’m not quite ready to release yet. 🙂
Continue reading “Splunk Lab News and Updates”

Splunking Yelp Reviews

A while ago, I found myself trying to decide which of several restaurants to eat at. They were all highly rated on Yelp, but surely there were more insights I could pull from their reviews. So I decided to Splunk them!

TL;DR If you want to get straight to the code, go to https://github.com/dmuth/splunk-yelp-reviews to get started.

Downloading the reviews

Splunk> See your world. Maybe wish you hadn’t.

Yelp has an API, but I am sorry to say that it is awful. It will only let you download 3 reviews for any venue. That’s it! What a crime.

So… I had to crawl Yelp venue pages to get reviews. I am not proud of this, but I was left with no other option.

Python has been my go-to language lately, so I decided to solve the problem of review acquisition with Python. I used the Requests module to fetch each page’s HTML, and the Beautiful Soup module to extract reviews and pagination links from that HTML.
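Here’s a rough sketch of that approach (the real code lives in the GitHub repo above). Note that the venue URL and the CSS selector below are hypothetical; Yelp’s markup changes over time, so the actual selectors would need to be verified against a live page:

# Sketch: fetch a Yelp venue page and pull out review text.
import requests
from bs4 import BeautifulSoup

URL = "https://www.yelp.com/biz/some-venue"  # Hypothetical venue URL

response = requests.get(URL)
response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")

# "p.comment" is a placeholder selector for review bodies.
for review in soup.select("p.comment"):
    print(review.get_text(strip=True))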

Continue reading “Splunking Yelp Reviews”

Monitoring RAM Usage on OS X

I recently noticed that something was using up lots of RAM on my Mac, as it would periodically slow down. I had some suspects, but rather than regularly checking Activity Monitor, I thought it would be more helpful to have a way to monitor RAM usage by various processes over time.

Due to previous success with my Splunk Lab app, I decided to use it as the basis for building out a RAM monitoring app. The data acquisition part, however, was trickier. The output of the UNIX ps command isn’t very structured, and I had problems parsing that data, especially when filenames and command arguments contained spaces.

So I wrote a replacement for ps. It turns out that Python has a module called psutil, which lets you programmatically examine the process tree on your Mac. I ended up writing an app called Better PS, which writes highly structured data on each current process to disk, where it is then ingested by Splunk.
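To give you the general idea, here’s a simplified sketch of what Better PS does. Keep in mind that the field names below are illustrative, not Better PS’s actual schema:

# Sketch: walk the process table with psutil and write one JSON line
# per process, which Splunk can then ingest.
import json
import time

import psutil

with open("ps.json", "a") as fp:
    for proc in psutil.process_iter(["pid", "name", "memory_info", "cmdline"]):
        info = proc.info
        event = {
            "time": time.time(),
            "pid": info["pid"],
            "name": info["name"],
            # Resident set size in bytes; this can be None for processes
            # we lack permission to inspect.
            "rss": info["memory_info"].rss if info["memory_info"] else None,
            # Keeping the arguments as a JSON array sidesteps the
            # spaces-in-filenames problem entirely.
            "cmdline": info["cmdline"],
        }
        fp.write(json.dumps(event) + "\n")

Because each line is a self-contained JSON object, Splunk can extract every field automatically at search time.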

Continue reading “Monitoring RAM Usage on OS X”

Hotel Opening: Anthrocon 2019 Report

One of my activities outside the office is staffing furry conventions. One of those conventions is Anthrocon, a furry convention held in downtown Pittsburgh every June/July. At that particular convention, I manage the website and social media properties.

Yesterday, we opened general hotel reservations, which resulted in a huge rush of members booking hotel rooms: 1,000 rooms were booked in the first 15 minutes! This was completely expected. We kept track of how things played out on social media and also surveyed members who booked hotel rooms to see how things went. In this post, we’re going to share what we learned from those survey results and from Twitter activity.

HOTEL BOOKINGS

First, did people who booked a hotel room get the hotel that they wanted?

Did you get the hotel you wanted?

For nearly 70% of you, the answer is yes. This makes us happy, but we would like to see that number higher; ideally, 100% of our attendees would get a room in the hotel of their choice. This is something we continue to work on each year by adding new hotels and securing bigger room blocks in existing hotels.

Continue reading “Hotel Opening: Anthrocon 2019 Report”

Using Splunk on Hotel Internet

Splunk> Finding your faults, just like Mom.

In a previous post, I wrote about using Splunk to monitor network health. While useful at home and in the office, this app has another valuable use: traveling.

In my case, over my Christmas vacation, I checked into a mom-and-pop hotel, or rather a motel! It was about 24 rooms all in a row, occupying a single floor. Since they were on a budget, their Internet offering consisted of what appeared to be 5 or 6 Linksys routers set up every few rooms. You’d simply connect to the closest access point and have Internet.

But there was a problem: determining which access point was closest to me! The signal strength indicator on my computer showed several of them at 3/3 bars, so that wasn’t much help. I tried connecting to the first one, but had virtually no Internet connectivity.

That’s when I fired up Splunk:

SPLUNK_START_ARGS=--accept-license \
   TARGETS=google.com,1.1.1.1,8.8.8.8,192.168.1.1 \
   bash <(curl -s https://raw.githubusercontent.com/dmuth/splunk-network-health-check/master/go.sh)

Running that command will print a confirmation screen so that you can back out and change any options (such as the hosts to ping); when you’re ready, just hit <ENTER> to start the container.

In the above example, I added the TARGETS environment variable and made sure to include 192.168.1.1, which was the IP of each router (they all used the same address). Then I set Splunk to real-time mode and periodically checked that tab as I was working. This is what I saw:

Testing 3 separate hotel Access Points with Splunk
Continue reading “Using Splunk on Hotel Internet”

Introducing: Splunk Lab!

Splunk> Australian for grep.

In a previous post, I wrote about using Splunk to monitor network health and connectivity. While building that project, I thought it would be nice to have a more generic application that could perform ad hoc analysis of pre-existing data without a complicated setup process each time I wanted to do some analytics.

So I built Splunk Lab! It is a Dockerized version of Splunk which, when started, will automatically ingest entire directories of logs. Furthermore, if started with the proper configuration, any dashboards or field extractions you create will persist after the container is terminated, so they can be used again in the future.

A typical use case for me has been to run this on my webserver to go through my logs on a particularly busy day and see which hosts or pages are generating the most traffic. I’ve also used it when a spambot started hitting my website with requests for invalid URLs.

So let’s just jump right in with an example:

SPLUNK_START_ARGS=--accept-license \
   bash <(curl -s https://raw.githubusercontent.com/dmuth/splunk-lab/master/go.sh)

This will print a confirmation screen where you can back out to modify options. By default, logs are read from logs/, config files and dashboards are stored in app/, and data that Splunk ingests is written to data/.

Once the container is running, you will be able to access it at https://localhost:8000/ with the username “admin” and the password that you specified at startup.

First things first, let’s verify our data was loaded and do some field extractions!

Continue reading “Introducing: Splunk Lab!”

Using Splunk to Monitor Network Health

Splunk> Winning the War on Error

I’ve been using Splunk professionally over the last several years, and I’ve become a big fan of using it for my data processing needs. Splunk is very, very good at ingesting just about any kind of event data, optionally extracting fields at search time, and providing tools to graph that data, find trends, and see what is really happening on your platform. This is important when your platform consists of thousands of servers, as it does at my day job!

While Splunk can handle events in timestamp key=value key2=value2 format, it also has support for dozens of standardized formats such as syslog, Apache logs, etc. If your data is in a customized format, no problem! Splunk can extract that data at either index or search time! Finally, there’s the Search Processing Language (SPL), which is like SQL but for your event data. With SPL, you can run queries, generate graphs, and combine them all programmatically.

So yeah, I’m a huge fan of Splunk. One thing I use it for outside the office is to graph the health of my Internet connection. This is useful both when I’m at home and when I’m traveling: I just feed the output of ping into Splunk, and then I can get graphs of packet loss and network latency.
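If you’re curious what that looks like in practice, here’s a minimal sketch (not the actual code from my network health check app) of turning ping output into timestamped key=value events that Splunk ingests happily:

# Sketch: parse ping replies into Splunk-friendly key=value events.
import re
import subprocess
from datetime import datetime

TARGET = "1.1.1.1"  # Hypothetical target host

# Matches lines like: "64 bytes from 1.1.1.1: icmp_seq=0 ttl=57 time=12.3 ms"
PATTERN = re.compile(r"icmp_seq=(\d+) ttl=(\d+) time=([\d.]+) ms")

proc = subprocess.Popen(["ping", TARGET], stdout=subprocess.PIPE, text=True)

for line in proc.stdout:
    match = PATTERN.search(line)
    if not match:
        continue
    seq, ttl, latency = match.groups()
    # One event per reply; gaps in the sequence numbers indicate packet loss.
    print(f"{datetime.now().isoformat()} host={TARGET} "
          f"seq={seq} ttl={ttl} latency_ms={latency}", flush=True)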

Let’s just jump into an example screen. Here’s what I saw when I was at a friend’s place and uploaded a video to YouTube:

Continue reading “Using Splunk to Monitor Network Health”

Examining 12 Months of SEPTA API Queries

With the release of SEPTA’s new app, I’ve suddenly been flooded with questions about their API. People wanted to know how stable it was.

Well, I don’t work for SEPTA, which means I don’t have insight into their operations, but I can perform some analytics based on what I do have: approximately 18 months of Regional Rail train data, read every minute by SEPTA Stats.

Overall Stats

This is all of the data that I currently have in SEPTA Stats:

  • Events Since Inception: 26,924,887 events
  • First Event: Mar 1, 2016 12:00:01 AM
  • Last Event: Nov 16, 2017 10:33:53 PM

That’s way more events than minutes in that timeframe, and the reason is that each API query is split into a separate event for each train. So if an API call returns status for 20 trains, that gets split into 20 different events. This is done because Splunk has a much easier time working with JSON that isn’t one giant array. 🙂
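The splitting step itself is trivial. Here’s the gist, using a made-up response shape (the real SEPTA API fields differ):

# Sketch: split one API response into one JSON event per train.
import json

api_response = {
    "trains": [
        {"trainno": "517", "late": 4},
        {"trainno": "1234", "late": 0},
    ]
}

# One JSON object per line means one event per train in Splunk,
# rather than one giant array per API call.
for train in api_response["trains"]:
    print(json.dumps(train))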

Continue reading “Examining 12 Months of SEPTA API Queries”

Introducing the SEPTA Regional Rail System Dashboard!

…and 2 new API endpoints, too. But more on those later.

I’m proud to say that there is now a dashboard for the entire Regional Rail system. It is present on both the front page and the “SEPTA System Stats” page:

This new dashboard makes it straightforward to determine the status of the entire Regional Rail system at a glance.

Continue reading “Introducing the SEPTA Regional Rail System Dashboard!”