Better Podcasting Through Linux

Note: This post is taken from my reddit post. You might find the comments there helpful if this post isn’t.

I recently noticed that a podcast I listen to regularly has sound issues; specifically, one of the hosts tends to talk more softly than the other. This leads to me constantly fiddling with the volume, either to hear what’s being said or to avoid being blown out by other sounds and music. I thought there had to be a solution, and I found one. Actually, I made one. And so can you! I wanted to share with all of you what I did. I’ll post links to the scripts in their entirety so you can customize them as needed, but I’ll also go over the main points here.

High-Level Overview

So I want to fix my podcasts. Let’s break the idea down into its general parts. The basic process looks something like this:

  • Download new episodes of my podcasts.
  • Normalize the audio.
  • Create an RSS feed that points to the new audio files.
  • Create a script to automatically pull this all together for us so we can put our feet up and relax to a good podcast.

Luckily, there exist open source, self-hosted solutions for each of these steps.

  • podfox: a Linux CLI podcasting client.
  • ffmpeg: the audio/video Swiss Army knife.
  • dir2cast: a podcast feed generator.

Scripting — ffmpeg

This is where the magic really happens. Let’s start with ffmpeg, because normalizing audio is a complicated feat in and of itself. We’ll be using the loudnorm filter to even out the audio volume. You can use loudnorm in a single-line command, but for the best normalization it’s suggested that you use a two-pass process. I actually put this in its own script called normalize.sh. For the first pass, we’ll want to get a few specific values from loudnorm:

input=[generic stand-in for input file name]
tempFile=$(mktemp)
ffmpeg -i "$input" -af loudnorm=I=-16:TP=-1.5:LRA=11:print_format=summary -f null - 2> "$tempFile"

This command takes the measurements gathered by the loudnorm filter, which will look something like this, and puts them into a temporary file:

[Parsed_loudnorm_0 @ 0x7fffb8cef180]
Input Integrated:    -15.1 LUFS
Input True Peak:      -2.7 dBTP
Input LRA:            16.5 LU
Input Threshold:     -28.2 LUFS

Output Integrated:   -16.8 LUFS
Output True Peak:     -5.7 dBTP
Output LRA:           12.7 LU
Output Threshold:    -29.5 LUFS

Normalization Type:   Dynamic
Target Offset:        +0.8 LU

I used a temporary file to make things easier on myself. We’ll need to extract four values from this output: Input Integrated, Input True Peak, Input LRA, and Input Threshold. I use grep four times. Is there a better way to do this? Probably. But I’m only so-so with bash, and this works.

output=[generic stand-in for output file name]
integrated="$(grep 'Input Integrated:' "$tempFile" | grep -oP '[-+]?[0-9]+\.[0-9]+')"
truepeak="$(grep 'Input True Peak:' "$tempFile" | grep -oP '[-+]?[0-9]+\.[0-9]+')"
lra="$(grep 'Input LRA:' "$tempFile" | grep -oP '[-+]?[0-9]+\.[0-9]+')"
threshold="$(grep 'Input Threshold:' "$tempFile" | grep -oP '[-+]?[0-9]+\.[0-9]+')"

ffmpeg -i "$input" -loglevel panic -af loudnorm=I=-16:TP=-1.5:LRA=11:measured_I=$integrated:measured_TP=$truepeak:measured_LRA=$lra:measured_thresh=$threshold:offset=-0.3:linear=true:print_format=summary "$output"
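
Putting the two passes together, normalize.sh ends up looking something like this (a minimal sketch assembled from the commands above; error handling is omitted, and the full script is linked at the top of the post):

#!/bin/bash
# normalize.sh: two-pass loudnorm normalization.
# Usage: ./normalize.sh input.mp3 output.mp3
input="$1"
output="$2"
tempFile=$(mktemp)

# Pass 1: measure the input's loudness.
ffmpeg -i "$input" -af loudnorm=I=-16:TP=-1.5:LRA=11:print_format=summary -f null - 2> "$tempFile"

# Pull the four measured values out of the summary.
integrated="$(grep 'Input Integrated:' "$tempFile" | grep -oP '[-+]?[0-9]+\.[0-9]+')"
truepeak="$(grep 'Input True Peak:' "$tempFile" | grep -oP '[-+]?[0-9]+\.[0-9]+')"
lra="$(grep 'Input LRA:' "$tempFile" | grep -oP '[-+]?[0-9]+\.[0-9]+')"
threshold="$(grep 'Input Threshold:' "$tempFile" | grep -oP '[-+]?[0-9]+\.[0-9]+')"

# Pass 2: normalize using the measured values.
ffmpeg -i "$input" -loglevel panic -af loudnorm=I=-16:TP=-1.5:LRA=11:measured_I=$integrated:measured_TP=$truepeak:measured_LRA=$lra:measured_thresh=$threshold:offset=-0.3:linear=true:print_format=summary "$output"

rm -f "$tempFile"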

Great. Normalization complete. Next, I tackled the podcasting setup.

Scripting — podfox

Per the instructions, I installed and set up podfox, and added some feeds. As an example, we’ll use one of the podcasts I subscribe to, Opening Arguments.

podfox import https://openargs.com/feed/podcast OA

This will create a directory using the short name you give the podcast; in our case, /home/username/podcasts/OA.

Our automated process, which will go into podcast.sh, will want to do a few things:

  • Update the podcast feeds.
  • Download new episodes for each podcast that haven’t already been downloaded.
  • Normalize the newly downloaded files.
  • Move the files to a target directory (that our webserver can find).

Of note: we’ll be running the entire script as root so we can move the final mp3 files into /var/www/html/podcasts, but we’ll run some commands (like podfox) as our own user.

declare -a arr=("OA" "And" "other" "podcast" "shortnames")
username=[your username here]
source=/home/$username/podcasts
target=/var/www/html/podcasts
tempdir=$(mktemp -d)

Make sure to change into your temporary directory, because it is the easiest way to move the files once they’re done processing.

cd "$tempdir"

Next we update the podcast feeds.

sudo -u $username podfox update

Now loop through your array of podcast short names.

for podcast in "${arr[@]}"; do
[work per podcast]
done

For each podcast, we’ll need to check the list of episodes, determine if any of the latest episodes haven’t been downloaded yet, download them if need be, then normalize the file.

tempfile=$(mktemp)
sudo -u $username podfox episodes $podcast > "$tempfile"
# Count how many of the three most recent episodes are still undownloaded.
undownloaded=$(head -3 "$tempfile" | grep -c "Not Downloaded")
sudo -u $username podfox download $podcast --how-many=$undownloaded

# Pipe find into a while-read loop so file names with spaces survive.
find "$source/$podcast" -iname "*.mp3" | while read -r file; do
[work per file]
done

For each file, we’ll extract the file name and extension, and check whether the file was already normalized. If not, we’ll normalize it into our temporary directory.

filename=$(basename -- "$file")
extension="${filename##*.}"
filename="${filename%.*}"
normalized="$filename.normalized.mp3"
foundfile=$(find "$target/$podcast" -iname "$normalized")
if [ -z "$foundfile" ]; then
    # Not yet normalized: normalize into the temp directory...
    /home/$username/normalize.sh "$file" "$tempdir/$normalized"
    # ...then move the processed files out of $tempdir (our current directory).
    mv * "$target/$podcast"
fi
# Otherwise the file was already normalized, and there's nothing to do.

Other than some cleaning up, we’re done with the podcasting automation.
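
The "cleaning up" is nothing fancy; roughly this (a sketch, matching the variables above):

# Inside the podcast loop, when we're done with a feed:
rm -f "$tempfile"

# At the very end of the script:
cd /
rm -rf "$tempdir"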

dir2cast

dir2cast is a nice little PHP script that takes a directory (or directories) of mp3 files and generates a valid RSS feed. It has pretty excellent documentation on its GitHub page, but honestly there isn’t much to configure. Assuming you have apache (or nginx) up and running, it’s mostly drag-and-drop.

I created subdirectories within the directory holding our script (/var/www/html/podcasts) for each podcast feed. These are our $target/$podcast directories referenced in our podcast automation script.

Example Directory Structure

  • /var/www/html/podcasts/
  • /var/www/html/podcasts/dir2cast.php
  • /var/www/html/podcasts/OA/
  • /var/www/html/podcasts/OA/episode01.mp3
  • /var/www/html/podcasts/OA/episode02.mp3

I suggest you take the time to copy the dir2cast.ini to each podcast subdirectory and edit it with the information from the original podcast feed. This will make your experience in your podcast listening apps a little nicer.
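
For the OA feed, that setup would look something like this (paths match the example structure above; depending on your dir2cast version, the ini may ship as an example file, so check its README):

mkdir -p /var/www/html/podcasts/OA
cp /var/www/html/podcasts/dir2cast.ini /var/www/html/podcasts/OA/dir2cast.ini
# Edit OA/dir2cast.ini to set the title, link, and description
# from the original podcast feed.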

cron Automation

Of course, we don’t want to have to run this ourselves every time a new podcast comes out. That defeats the automatic nature of podcast feeds. So I run this script using cron every 15 minutes. This ensures that fairly quickly after a podcast releases, I’ll have the updated files in my feed. The problem, however, is that if you’re running this on a slower server, or if you’re downloading multiple podcasts, the script may still be running 15 minutes later. To account for this, I created yet another script: cronpod.sh, which makes sure a new run doesn’t start while an old one is still going. It runs as a cron job with the following configuration:

*/15 * * * * /home/[your username]/cronpod.sh
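
All cronpod.sh really has to do is make sure podcast.sh isn’t still running before starting a new run. Here is a minimal sketch using flock (the lock file path is an arbitrary choice):

#!/bin/bash
# cronpod.sh: run podcast.sh only if the previous run has finished.
lockfile=/tmp/cronpod.lock
(
    # Take the lock without blocking; if another run holds it, just exit.
    flock -n 9 || exit 0
    /home/[your username]/podcast.sh
) 9> "$lockfile"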

Listening to Your Podcasts

All that’s left is to listen to your podcasts in your favorite podcasting app. And maybe to add a reverse proxy with a Let’s Encrypt SSL cert, but that’s outside the scope of this post. 🙂

Per the documentation for dir2cast, you should be able to access your podcast feeds in combined form or as individual subfeeds.

  • http://[yourserver]/dir2cast.php
  • http://[yourserver]/dir2cast.php?dir=OA
  • http://[yourserver]/dir2cast.php?dir=[any podfox short name]

Although I didn’t see anything about it in the documentation, I have successfully renamed dir2cast.php to index.php. This makes your URLs a little prettier. If it breaks something for you, just revert the name.
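
If you want to try the rename (paths assume the example layout above):

mv /var/www/html/podcasts/dir2cast.php /var/www/html/podcasts/index.php

The feed URLs should then work without the script name, e.g. http://[yourserver]/?dir=OA.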

Summary or TL;DR

Use podfox, ffmpeg, dir2cast, normalize.sh, podcast.sh, cronpod.sh, and a little elbow grease to improve your podcast listening experience.

Setting Up a Blender Render Farm at Home


Blender is one of the coolest pieces of open source software available because it puts a professional, full production workflow in your hands for 3D modeling, animation, video editing, post-processing, and even game development. And if you have ever used it for anything non-trivial, you will have discovered that rendering complex scenes can take a long, long time. Render farms split up large processing jobs among many computers, allowing everything to move much faster.

There is a solution for remote rendering in Blender that is, at the same time, both easier and harder than you’d expect. I’ve recently run into long render times myself, so I constructed a makeshift personal render farm, utilizing both spare computing power I have around the house and virtualized servers on Microsoft’s Azure cloud service. This has decreased my render times considerably.

How did I do it?

First, let’s look at how Blender handles networked rendering. Blender comes with an add-on called Netrender. You don’t want to use it. You want to use an enhanced fork of the add-on that you can find on GitHub. Once installed, you can enable it in Blender under File > User Preferences > Add-ons.

[Image: Blender User Preferences]

Once enabled, you’ll find another option in the render engine selection drop down: Network Render.

[Image: Render Engine Selection List]

Netrender runs within a Blender instance, meaning that you have to have Blender open in order for it to work. It works in one of three modes.

Client

Client mode is used by the machine you’re using to create your 3D scene. The client in this setup is the customer, requesting that its work be done by someone else.

[Image: Client Mode Options]

Master

Master mode is used by the coordinating server. The master is in charge of accepting work requests from clients, processing the files, and dividing up the work among the slaves connected to it.

[Image: Master Mode Options]

Slave

Slave mode is used by computers that get assigned work from the master.

[Image: Slave Mode Options]

When setting up my personal render farm, my daily-use desktop computer is my client (and a worker!), a computer within my LAN acts as my master server (and a worker!), and all spare computers in the house are connected as slaves. In addition, I have created several virtual machines in the Azure cloud, which all connect back to my internal network in order to receive work orders.

[Image: Network Topology]

Setup

How easy is this going to be? Just plug in the proper IP addresses (or DNS entries for the external slaves) and port numbers and we’re all set, right? Wrongo bongo. With Blender and Netrender, the devil is in the details. As with most complex software, Blender has what I would call quirks. Pair that with being an open source project whose documentation can sometimes be lacking for lesser-used features, and you’ve got the potential for some odd problems where your Google-fu may not be of much use. My render farm came into being through a lot of experimentation and failure. Don’t get discouraged if yours does as well! It’s all just part of the learning process.

Masters

Luckily you don’t need to put much thought into the master node. You may or may not want to force your clients to upload dependencies. In my experience, it hasn’t mattered, but your mileage may vary there. You do want to pay attention to the maximum timeout setting, though. You’ll want to set it to something sufficiently high so that slower machines (or machines that you have computing on both the CPU and the GPU) are not abandoned by the master if they take longer to render a frame.

Slaves

Always be aware of whether your slave has a GPU. GPU rendering, in general, is way faster for the Cycles engine. If the machine has one, I highly recommend using two instances of Blender, one with the override set to GPU and one set to CPU. With both services running, you’ll be able to get more use out of a single machine.

Also be aware of the tile size you want to process on the slave. CPU and GPU render times can be greatly affected by the tile size. Consider overriding the tile size, especially for GPUs. CPU times tend to be better when working on smaller tiles, around 32 pixels or 64 pixels. GPUs tend to do better with large tiles, around 128 pixels or 256 pixels.

Clients

The client setup is pretty straightforward, as it will just use the settings from the render engine you used while designing the .blend file. Just make sure that the Engine selection is set to the rendering engine of the .blend file. So if you designed everything in Cycles but leave the render engine set to Blender Render in the network rendering options, you’re not going to get anything back.

Headless Slaves

You may have computers available to you (either on site or in the cloud) that do not have graphics devices attached. Blender needs a GPU in order to draw its 3D viewports. If your machine doesn’t have one, Blender will refuse to open. However, Blender doesn’t need a graphics device to render final images. Luckily, Blender has an extensive command line interface that you can use.

This kind of setup requires you to create a .blend file on a computer that does have a graphics device, like the client computer where you do your design work. Change the render engine to Network Render and configure it like you would a slave, keeping in mind that the output folder specified in the file will be expected to exist on the remote machine. If this machine lives outside your LAN, you will want to set up port forwarding on your router and use dynamic DNS (DDNS). Since this is outside of the scope of this post, you should check out this explanation about port forwarding and this explanation about DDNS.

Once you have your .blend slave file created, you’ll want to put it on the remote machine. Make sure Blender is installed. Replace the netrender plugin just as before. Make sure the output folder from the file exists on the machine. Blender can be started in slave mode with the following command:

blender -b --addons netrender [FILE LOCATION] -E NET_RENDER -a

What do these arguments mean?

  • blender – The Blender executable
  • -b – Opens Blender without a UI
  • --addons netrender – Ensures that the netrender add-on is loaded
  • [FILE LOCATION] – The location of the slave .blend file
  • -E NET_RENDER – Specifies the network render engine as the engine to use
  • -a – Starts rendering the animation

Note: The -a flag tells Blender to start rendering the animation. This in turn tells the netrender add-on to begin its service and connect to the master.
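
Worth guarding against: if the headless slave exits for any reason (a crash, a dropped connection to the master), the machine just sits idle. A simple wrapper loop can relaunch it automatically; a minimal sketch, where [FILE LOCATION] is the slave .blend file from above:

#!/bin/bash
# Keep a headless slave alive: relaunch Blender whenever it exits.
while true; do
    blender -b --addons netrender [FILE LOCATION] -E NET_RENDER -a
    # Short pause before reconnecting to the master.
    sleep 10
done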

Final Thoughts

You’ll probably run into unique issues setting up your render farm. Don’t let it get you down, because the result is well worth it. Reducing render times by multiple factors will far outweigh the time and effort the setup takes.

I’m still learning myself. I’d love to hear if this helps you out at all, or if you have any tips that could make my farm even better! If you have any comments, email me at jordan@jordanwages.com.

How SmartThings Saved My Stuff and My Dog

[Image: Figment saying Hi!]

I’ve had a SmartThings home automation system for about a year now. I liked the idea of being able to control my house from my phone, but I loved the idea of my house reacting to us and proactively letting us know what’s going on. My then-fiancée (now wife) and I bought smart outlets, door sensors, motion sensors, moisture detectors, smoke alarms, and even a siren. It was great knowing who was at the house and whether the garage door had been left open, being greeted by lights that turned on when you arrived home at night, and being able to turn those lights off remotely when you remembered you had left them on after you had already laid down in bed.

SmartThings made my life easier. A few weeks ago, SmartThings went beyond being helpful into being a necessity to protect my home and loved ones.

I was at lunch with a coworker when I received an alert on my phone that my back door was open. It was the middle of the day, so the only “person” home was my dog, Figment. I immediately called my wife to see if she had gone home early that day. She was still at work. I quickly used the SmartThings app to turn on the siren in the house and my wife called the police. The police arrived quickly and let us know through our neighbors that everything was OK in the house.

I arrived home shortly after to see what damage had been done. Thanks to the alert we received, it was minimal. It appears the burglar had just made it to the bedroom (where Figment was put up in her kennel!) when the siren went off and they fled. They made off with only a single jewelry box (holding just a few pieces) and a wallet grabbed on the way out the door.

Yes, we lost some things. But without our SmartThings system, who knows what might have happened? The perpetrator could have hurt our helpless Figment. They could have taken Figment! We could have lost all of our valuable possessions. I shudder to think what could have happened.

I know this sounds like an advertisement for SmartThings, and in a way it is. I’m a true believer in this company. If not before, I definitely am now.

Asynchronous WCF Calls – Always Check Your State

After beating my head against the wall for a few days, I feel like I should share my experience with the world. Maybe you won’t have the headache I’ve had.

I have a WCF service reference. I wanted to make a call asynchronously: wcfClient.ProcessTransaction(arguments, null) /* no callback method */. The synchronous call worked just fine, but the asynchronous call failed silently. After much wailing and gnashing of teeth, I discovered that the channel state on the wcfClient never made it past Opening. The solution was just to manually call wcfClient.Open() before making the asynchronous call.

Hopefully this will help someone avoid my three day headache.

Smoothie Lessons: Episode 4

Ran into the same problem as before tonight, but with some careful coaxing, I was able to get the smoothie to turn out beautifully. I really should be more careful with the ice and frozen fruits, though.

Ingredients

  • 1 cup non-fat Greek yogurt
  • ½ cup skim milk
  • 1 small orange
  • 1 cup frozen pineapple chunks
  • 4 strawberries
  • 3 tablespoons of Splenda

Steps

  • Add yogurt.
  • Peel orange.
  • Add orange.
  • Add frozen pineapple chunks*.
  • Add ice*.
  • Blend*.
  • Add strawberries.
  • Add milk.
  • Add Splenda.
  • Blend until desired texture.

We will call this one: Strawberry Fields Forever.


* Please note that just like last time, I allowed the blender to get jammed by adding too many frozen things at once. Original order is maintained here for accuracy, but please be careful.

Smoothie Lessons: Episode 3

Thank goodness my wife was around tonight. I was convinced I destroyed my blender. Luckily, she showed me I was only panicking and the blender was fine. I did learn a valuable lesson though: Don’t allow your frozen fruit to wedge underneath the blender blades before turning it on.

But with disaster averted, a brilliant new recipe was born!

Ingredients

  • 1 cup non-fat vanilla yogurt
  • a “little” skim milk
  • 1 serving vanilla protein powder
  • 12 frozen peach slices
  • 2 strawberries
  • 6 ice cubes

Steps

  • Add yogurt.
  • Add peaches*.
  • Add strawberries.
  • Add protein powder.
  • Add ice.
  • Add milk.
  • Blend until desired texture.

We will call this one: Doin’ Peachy.

The Wife’s Smoothie

The wife also made a smoothie tonight, which turned out great. No disasters for her, though.

Ingredients

  • 1 cup non-fat vanilla yogurt
  • 4 strawberries
  • 4 blueberries
  • 4 raspberries
  • 4 ice cubes

Steps

  • Add yogurt.
  • Add strawberries.
  • Add raspberries.
  • Add blueberries.
  • Add ice.
  • Blend until desired texture.

We will call this one: Fantastic Four Berry Blast.


* Please note that I added peaches right after the yogurt. These frozen slices jammed the blades, which caused the whole fiasco. Be very careful when making this. I maintained order just for accuracy’s sake, but it would probably be a good idea to adjust the order or solidness of the frozen peaches.

Smoothie Lessons: Episode 2

Tried something new tonight. It was pretty good so I’ll share my recipe/steps.

Ingredients

  • 1 can of sliced pineapple
  • 2 apple slices
  • 2 strawberries
  • ice

Steps

  • Pour can of pineapple slices into blender.
  • Liquify pineapple slices.
  • Strain some of the pineapple froth and discard.
  • Add half ice.
  • Add apples.
  • Add strawberries.
  • Blend till smooth.
  • Add remaining ice.
  • Blend until desired texture.

Let’s call this one: Pineapple Strawberry Blast.

Smoothie Lessons: Episode 1

I’ve recently started making smoothies. As I’m wont to do, I bought a blender with pretty good reviews and jumped headlong into the process, figuring I would wing it. So after a few smoothies, I’ve learned a few lessons the hard way. I imagine I’ll be learning more lessons the hard way, but at least I can document them here. This way maybe I won’t make the same mistakes twice. And maybe you won’t either.

  • If you need ice and you do not have a functioning ice maker, do not put water in a plastic bag and put it on a shelf in your freezer, where the bag will deform and wedge itself into cracks, making it impossible to get out without a hammer.
  • Watermelon + Grapes tastes like cucumbers.
  • The line between not enough honey and too much honey is very, very thin.
  • Fruit containers get very sticky.
  • Do not attempt to freeze a smoothie for the next day; drink it immediately.
  • Milk is pretty good in smoothies. Use it.

Trying to install NVidia Drivers to Linux Mint

I was recently trying to install the latest NVidia video drivers on a laptop running Linux Mint. The process was quite painful, as the NVidia installer requires absolute and exclusive access to X. All the information I could find came from posts dated 2012 at the latest, and most of them from 2009/2010. Most of these posts involved modifying the boot records to start into the console rather than the login client. Many suggested the CTRL + ALT + F1 shortcut to open a terminal. I really didn’t want to modify my boot records. The terminal shortcut wouldn’t work either, as there was still another terminal running X.

If you find another program wanting exclusive access to X like the NVidia graphics installer, remember sudo telinit 1 can be your friend.
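
For reference, the whole dance looks something like this (the installer filename will be whatever you downloaded from NVidia):

sudo telinit 1                 # drop to single-user mode; X is stopped
sudo sh ./NVIDIA-Linux-*.run   # run the NVidia installer
sudo reboot                    # boot back up normally when it's done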

Of course this is all moot anyway because the NVidia installer destroyed my X configuration so that the machine could no longer boot into X. I decided to just reinstall Windows 7 in the end.

Fund-raising in the Street is Dangerous

I don’t know if this is particularly popular elsewhere, but here in the Louisville area it seems to be very popular to fund-raise in the street. A group trying to support some cause will wear reflective vests and solicit donations from drivers waiting on red lights. I have nothing against charity, but this is incredibly dangerous. I’ve finally had enough of it after a potentially dangerous situation I encountered today. Luckily nothing happened, but it serves as a perfect example of what could happen if someone had gone just a little faster or a little farther.

I contacted all of my local representatives today, starting with my metro council member. If you live in the Louisville area and want to find your own representatives, Louisville actually has some great tools for doing just that. Just put in your address and it will return tons of great information.

I’m going to include the message that I sent today, which goes into a little more detail about why street fund-raising is dangerous, as well as a video that shows what happened today and illustrates, with little imagination needed, what could happen should fund-raisers share the road with two-ton vehicles.


It is quite popular in this area for fund-raising efforts to be conducted at intersections around the city. Participants wear bright clothes and put cones along the lines of the lanes and walk car to car soliciting donations from drivers waiting at red lights.

While I support local fund-raising efforts, I find this practice to be highly dangerous and irresponsible. Not only are the participants risking their own lives by placing themselves in traffic’s way, but they are providing a distraction to drivers, as well as weighing on the conscience of any driver unlucky enough to be unable to avoid them.

Today I was driving at the intersection of the 264 Watterson Expressway and Brownsboro Rd in Louisville, where fund-raising activities were taking place. I encountered a driver exiting the expressway at high speed as I was passing the off-ramp. Fortunately, I was able to slow down and only had to make minor adjustments to avoid a collision. However, it is easy to imagine a scenario in which I would be forced to swerve quickly into another lane, or be pushed into another lane or the median by an impact.

I keep a dashcam mounted in my car and I was able to record the incident for your review. Due to the wide angle of the camera’s lens, the red Jeep appears farther away than it was in reality. Even so, it is clear to see the dangers posed at these intersections, especially to bystanders who purposely place themselves in harm’s way.

You can review the video here on Google Drive: https://drive.google.com/file/d/0B7-exy2-V2-yeUFoTnRYcEYzMGM/edit?usp=sharing

I hope you give careful consideration to my concerns. I strongly feel that these activities shouldn’t be permitted on Louisville’s streets. If these fund-raising efforts continue at our intersections, it is only a matter of time until tragedy strikes.