git rebase -WHYYYYYYYYYYYYYYYYYY: Squashing to a single commit

I was involved in a code review for a coworker who is a bit green with git and noticed that a PR they had opened consisted of about 40 commits. While I did commend them for sticking to the “commit early, commit often” mantra, this was definitely something we didn’t want kicking around the codebase. I politely asked them to perform a rebase on the PR in order to squash all that cruft into a single coherent patch.

There was a tiny problem, though: I believe there had been some dirty rebases that completely hosed the history, so getting that rebase done was going to be tricky. They got in contact with me asking for some help since the rebase kept running into merge conflicts. I thought at first we could just fire up a Screenhero session (they’re in Ottawa, I’m in Toronto) and figure it out together. Things weren’t going too well, and after about 30 minutes of aborting rebases I figured it might be a better use of our time if I pulled back and did a bit of research into how I could cleanly apply a new patch.

After another attempt at doing the rebase locally, I figured that approach was a lost cause. If we were to merge the code in, everything would be fine, but we didn’t want to do that because of the grossness. I ran git diff master and figured that if I could just export that to a diff and apply it onto master, I’d be golden. So that’s effectively what I did.


git checkout master
git pull origin master
git checkout branch-that-does-not-rebase
git diff master > ~/Desktop/summary-of-my-changes.diff

With the patch resting nicely on my Desktop, I went back to master and started a new branch. This is where I ran into a few snags. When trying to apply the patch I’d get a really nasty message saying that the patch couldn’t apply cleanly, and it would abort. The problem appeared to be isolated to a single file, and it would be awesome if I could just not care and apply the patch anyway. After a bit of digging around I came across a post on Stack Overflow (which, now that I no longer have the problem, I can’t find) that mentioned an argument you can pass when applying a patch called --reject.


git apply --reject --whitespace=fix ~/Desktop/summary-of-my-changes.diff

What it allowed me to do was apply the patch, with anything that wouldn’t apply cleanly getting thrown into a filename.rej file for me to deal with manually. After a few tweaks and figuring out the rejected patch, I opened a new PR on GitHub letting my coworker know about the changes so they could verify I hadn’t completely hosed something.
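
For the curious, the whole recovery looked roughly like this – the branch name is made up for illustration; the real magic is the --reject flag and the *.rej files it leaves behind:

git checkout master
git checkout -b branch-that-actually-applies
git apply --reject --whitespace=fix ~/Desktop/summary-of-my-changes.diff
# hunks that failed land next to their file as *.rej
find . -name '*.rej'
# hand-merge each reject into its file, delete the .rej, then commit
git add .
git commit -m 'Summary of my changes as a single patch'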

So if you’re ever stuck in interactive rebase hell when you’re just trying to create a single patch, give this a shot. I’m sure git has better ways of handling this, but in a pinch this will definitely get you moving forward.


Apprentice to Journeyman: The Importance of Mentorship

I've been seeing a lot of employers looking at prospective interns and rejecting them because, even though they show potential, they are still "too green". This has always bothered me because it's the problem youth always face when trying to break into the workforce: they need experience, but nobody will give them a chance. I believe we can look to what tradesmen have been doing for centuries as inspiration.

Coming from an industry that takes a lot of pride in mastering its craft, I find it odd when an employer won't bring on an intern due to a lack of experience. In order for an apprentice to reach journeyman status, they need to go into the workplace and gain that vital real-world experience. With an elementary understanding of how things work, the apprentice is brought into a work environment and shown how to apply that knowledge and build up their skills.

Several countries in the EU have adopted a system which allows students to become apprentices in various trades. The fabulous Mr. Lütke even wrote about the apprenticeship program in Germany and how valuable it was to him. Systems such as Germany's Realschule get apprentices into the workplace much more quickly, while still covering a bit of theory. In our current system things might not work out perfectly, since most interns are at the college or university level and already have some hefty expenses. This doesn't mean we shouldn't strive for a system like those in EU countries; it'll just take a bit of time to transition.

Although hiring does have its risks, I feel they aren't as big a deal when it comes to hiring an intern. Typically an intern is brought on for four months, and it's a given that they'll be leaving to return to their educational institution afterwards. There is a clear line as to when their contract will be over, and all that is left for the employer to do is find a mentor for them, given the project they'll be working on.

I am personally a strong believer in mentorship. It's had a major effect on my own career and probably even helped me keep my job. It wasn't long ago that things were going pretty rough for me. I was pretty junior at the time (I think I still am) and had a few projects to work on without much senior leadership, so getting guidance was tricky. A major product had also been sent my way because a few major bugs had come to light and needed to be fixed. After a few months of those projects I was getting burnt out and needed a change, or I knew I'd be gone – which was the last thing I wanted. I moved over to a new team and was told that another developer would be my point of contact for any questions as well as for my growth. Things started out pretty simple: reading some Rails guides, reading a few books on Ruby and Rails, and setting some goals for where I wanted to be in a couple of weeks. After a few months I was feeling confident in my current work and upcoming projects.

A mentor provides juniors a single point of contact to obtain professional guidance. This doesn't mean that a mentor will have all the answers, but they have a good idea of who will. If the mentor is motivated to see their apprentice succeed, there are plenty of tools they can use to help drive that path. Another key aspect of mentorship is to clearly express expectations and have a short enough feedback loop such that adjustments can be made promptly.

Goals are an excellent method of setting lines in the dirt to give the apprentice something to shoot for. They need to be actionable, attainable, realistic and preferably chosen by the student. Of course, if a student lacks confidence, internal self-reflection can be extremely difficult, so providing a few goals also helps. It is important to keep in mind that a lack of confidence does not imply that the student has no desire to succeed.

With the right tools and a strong mentor, an apprentice can go from good to great in a matter of months. It only takes setting a few small goals to build up the confidence and skills to tackle new and difficult problems they'd never have thought possible. There isn't much required in hiring the best – that's just a process of filtering out the chaff – but being a place that builds the best takes far more skill.


  1. DocZone: Generation Jobless has a piece that covers the apprenticeship systems in Germany and Switzerland and how that has helped curb the problems we are having in North America. Video available to Canadian IPs only :(
  2. Objectives and Key Results is the system I used with my mentor at Shopify to drive my professional growth

Hacking Homebrewing — Part 2

Brewwatcher Web in Action
Last week I told you about the Arduino side of my homebrew hardware project. Now, I'd like to talk about the Raspberry Pi, Ruby and JavaScript side of things.

The Pi was really just a deployment platform that made it easy to get a low-power Linux box running that I could easily configure. I wanted it to be headless, so I went the Arch Linux route since that would give me full control over the system and which parts were installed.

As I mentioned in the previous post, I'm using the serialport gem to do all the interaction with my Arduino. The gem takes care of all the business required to properly connect to the device, and helps me reach my goal – reading and writing data. The only awkward part of the library is establishing a connection to the device; after that it's just IO.

# Assuming bundler
require 'serialport'

baud      = 9600
data_bits = 8
stop_bits = 1
arduino   = '/dev/arduino.tty' # totally incorrect!
serial    = SerialPort.new(arduino, baud, data_bits, stop_bits)

serial.puts "help"
puts serial.gets # blocking call!!

The biggest lesson I learned was that gets calls to the device are blocking, which threw a wrench into things for a bit. One approach I took was to build up a message queue and just read from that, but for whatever reason I messed up my Mutex, and the Thread responsible for stuffing messages into the actual queue could never get priority. Things were also complicated a bit by the reboot-on-connection issue I mentioned in the last post.
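
If I were to revisit it, a minimal sketch of that queue approach might look like the following. The device path is hypothetical, and Ruby's built-in Queue is already thread-safe, which sidesteps the Mutex juggling entirely:

require 'serialport'
require 'timeout'

serial   = SerialPort.new('/dev/ttyACM0', 9600, 8, 1) # hypothetical path
messages = Queue.new

# A background thread owns the blocking gets so the main thread never has to.
Thread.new do
  while (line = serial.gets)
    messages << line.chomp
  end
end

serial.puts "help"

# Wait up to five seconds for a response instead of blocking forever.
begin
  Timeout.timeout(5) { puts messages.pop }
rescue Timeout::Error
  puts "no response from the device"
end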

As for packages, I really only needed a handful: Ruby 2.0, SQLite plus its development dependencies, and Nginx. I took the approach I've seen in several other places: a Thin server runs my web app, and Nginx acts as a reverse proxy in front of it. The solution works out fairly well and required little additional setup since I'm using plain old Nginx. I thought of using Passenger, but lack of experience and no desire to compile things on the Pi made me change my mind.
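
The proxy configuration for something like this can be tiny. Here's a minimal sketch – the server name and Thin's port are assumptions on my part:

server {
    listen 80;
    server_name brewwatcher.local; # hypothetical hostname

    location / {
        # Thin serving the Sinatra app on localhost
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
    }
}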

I didn't want to rely on my web server running in order to collect data from the Arduino, so I set up a cron job that would run my data collection Ruby script. The script does the fewest things possible: connect to the Arduino, grab some data and store it in my SQLite database. Since I'm using SQLite there is a bit of a race condition: if the script is writing Arduino data, my Sinatra app can fail because it cannot establish a connection to the database. For now I'm not going to worry about it, since my script only runs every hour and I don't see the contention being an issue for me.
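
A minimal sketch of that collection script might look like this – the device path, database file, response format and table layout are all made up for illustration:

# collect_reading.rb
require 'serialport'
require 'sqlite3'

serial = SerialPort.new('/dev/ttyACM0', 9600, 8, 1)
serial.puts "temp"

# Assumes the firmware replies with "timestamp,celsius" on one line
timestamp, temperature = serial.gets.chomp.split(',')

db = SQLite3::Database.new('brewwatcher.db')
db.execute('INSERT INTO readings (recorded_at, temperature) VALUES (?, ?)',
           [timestamp, temperature.to_f])

# And a crontab entry to run it hourly:
# 0 * * * * /usr/bin/ruby /home/pi/brewwatcher/collect_reading.rb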

I had my base system running and collecting data, but without any way of interacting with or seeing the data, it's just a bunch of numbers in a table. This is really where Sinatra came in. Its primary goal was to make it easy to do some basic CRUD: creating new brews, setting them as active, configuring the TTY to connect to, and so on. I also used it to provide a very simple JSON API for accessing a brew's readings during fermentation.
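
The readings endpoint barely needs any code. A minimal sketch, assuming the hypothetical database from above plus a brew_id column on the readings table:

require 'sinatra'
require 'sqlite3'
require 'json'

DB = SQLite3::Database.new('brewwatcher.db')

get '/brews/:id/readings.json' do |id|
  content_type :json
  rows = DB.execute(
    'SELECT recorded_at, temperature FROM readings WHERE brew_id = ?', [id]
  )
  rows.map { |at, temp| { recorded_at: at, temperature: temp } }.to_json
end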

Charting out the results of a test run
The reason I needed a JSON endpoint is that I'm using the d3 graphing library for visualization, and I based most of my charting code off of Mike Bostock's line chart example that was linked from the d3 gallery page. I added a few minor changes, such as some zones to easily see whether a beer is within its normal fermentation range or not. I'll admit the charts are far from pretty, but they met my requirements, and once I go back to it I'll look into making them a bit sexier. One really nice aspect of d3 is that it's just building SVGs, which can have CSS classes attached to them, so they are fully styleable.

The Arduino device testing/selection screen
A somewhat tricky part of the application was coming up with a way to change which device I was connected to without having to change code. One thing I hate is configuration living in my code, where it's complicated to change – especially when I'll be deploying to a different or unpredictable environment. I got around this by making a few slight changes to the system. I assumed that the user the application runs as would have read and write access through the tty group on the system. With that assumption made, all I had to do was list the contents of /dev and let the user choose which device to connect to.

Now, if we run ls /dev on any UNIX-like platform, we're going to see a boatload of items, and that wasn't going to be acceptable. Based on a few observations I was able to slim that list down to a handful of devices. From prior knowledge I somewhat knew which device to connect to, but I'm forgetful, so I added some tooling to help with that: a little "test connection" button in my Sinatra app that would try to open the connection, and if it succeeded, I'd know I was probably connected to the right device. I'm indulging some bad habits here since exceptions effectively control my program flow, but I'm still a little green when it comes to the right way to do these kinds of things in UNIX.
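
Roughly, the device listing and test-connection check come down to something like this. The tty glob and the blanket rescue are assumptions on my part:

require 'serialport'

# Slim the /dev listing down to serial-looking devices we can write to.
def candidate_devices
  Dir.glob('/dev/tty*').select { |dev| File.writable?(dev) }
end

# If opening the port raises, we're probably not looking at the Arduino.
def device_responds?(path)
  serial = SerialPort.new(path, 9600, 8, 1)
  serial.close
  true
rescue StandardError
  false
end

candidate_devices.each do |dev|
  puts "#{dev}: #{device_responds?(dev) ? 'opened OK' : 'failed'}"
end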

Two weeks ago I brewed my first batch of beer in the new apartment, and the system was ready to roll! I was pretty excited to have my little hardware project actually be put to the test, and set everything up – while being extremely sanitary and ensuring everything was sprayed or soaked in Star San! Though after a day or two I noticed that my airlock wasn't bubbling, despite obviously active fermentation, and got a little concerned. After looking around a bit I noticed that the wire from the temperature probe was preventing my bung from getting a perfect seal. Defeated, I removed the sensor and turned everything off before returning the bung to its home. About fourteen hours later, the airlock was bubbling like a champ. With that in mind, I'm planning on taking one of my rubber corks and drilling another hole in it that'll be large enough to get the probe through, but will keep my beer's environment safe and tasty.


My charting code on GitHub — it's gross! You've been warned
Brewwatcher on GitHub


A Pain Problem – Dave Rooney

Dave Rooney wrote up a pretty awesome article about the rush of fighting software fires and pulling all-nighters, and I figured some people might be interested in reading it as well.

A Pain Problem


Hacking Homebrewing — Part 1

Over the past few months I've been playing around with Arduino to build out a system that would help me improve and track my brewing. The goal of the system was to aid in the collection of historical data so I could work on making more consistent beers.

Due to the number of things I want to talk about for this project, I'm going to break this out into a two-part series, consisting of:

1) Putting together the Arduino hardware to read and generate log data, and fetching that data via Ruby;
2) A Sinatra web app for displaying and managing my brews and devices, and the Raspberry Pi that I used for deployment.

I started with building out the hardware because I figured that was going to be the hardest part, though also the most fun. Since this was my first hardware hacking project I was missing some really basic tools — such as a soldering iron, a third hand and all the extra electrical components needed to safely build electronics. Based on some suggestions from a friend I went over to Adafruit for all my components. A few pieces of hardware became prohibitively expensive after shipping, so I bought the rest from RobotShop. Once that was all out of the way I was ready to start prototyping and building out my project.

In essence, the hardware is really simple; it's just one component to read temperature and another to read time. I wanted to go with training wheels, so the temperature probe I bought was the food-safe DS18B20, as I'd be submerging the probe into fermenting beer and the last thing I want is PVC or something else leaching chemicals into my hard work. For time tracking I went with a DS1307 real-time clock, which would give me accurate timestamps even if I have no internet connectivity or the device loses power. This ended up being extremely vital to the operation of my system, which I'll talk about once we get to deployment.

Putting everything together is very straightforward. The hardware community has built a library for the DS18B20's protocol called OneWire that makes integrating with the device quite simple. All you need to do is wire up the sensor with a pull-up resistor so you can get readings, and you're off to the races. The library provides you with floating point numbers when fetching readings, which makes working with the sensor on the Arduino even easier. The convenience did come at a cost, though, because the probe is somewhat expensive.

When it came to working with the DS1307, things got a little confusing. This device uses an interface called the I2C bus, which allows many devices to be connected to the same wires. Trying to understand how it worked threw me off for a bit, mainly because hardware was (and still is, somewhat) foreign to me. The bus consists of two lines, a Serial Data line (SDA) and a Serial Clock line (SCL), which are responsible for passing data between the devices on the bus – in my case, the DS1307 and the Arduino. Arduino comes with support for I2C through the analog pins: analog pin 4 is used for SDA and analog pin 5 is used for SCL. After that it was simply a matter of finding a library for the DS1307 to do all the heavy lifting for me.

With everything wired up, all that was left was to build an interface for communicating with the Arduino so I could actually do something useful with the data. I decided that I was going to deploy my application code to a Raspberry Pi, and since it has USB ports I would just communicate over the serial connection. Basically all I did was build out a really simple command interface that would take messages and, based on their content, return responses. The most important message was the one that asked for a time-stamped temperature: the code would read from the RTC, grab a reading from the temperature probe, and write the result out onto the serial connection.

The Arduino documentation pointed me to a Ruby gem called serialport, which took care of setting up the serial connection for me. The library conforms to Ruby's IO interface, so if you're used to reading and writing to a file, you have nothing to worry about. With this in hand, I wrote a wrapper so that sending and receiving messages to the Arduino could be done with a few simple commands.
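
The wrapper amounts to very little code. Here's a minimal sketch – the "temp" command, the response format and the device path are all hypothetical stand-ins for what the real firmware speaks:

require 'serialport'

class Brewwatcher
  def initialize(device, baud = 9600)
    @serial = SerialPort.new(device, baud, 8, 1)
  end

  # Send a command and return the device's one-line response.
  def command(message)
    @serial.puts(message)
    @serial.gets.chomp
  end

  # e.g. "2013-08-01T20:15:00,19.25" => ["2013-08-01T20:15:00", "19.25"]
  def timestamped_temperature
    command('temp').split(',')
  end
end

watcher = Brewwatcher.new('/dev/ttyACM0')
timestamp, celsius = watcher.timestamped_temperature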

For most things during development the interface worked great! When it came to building automated scripts that I could run via cron, things got a little awkward. The vanilla setup for an Arduino is to reboot whenever a serial connection is established with the device. This can of course be circumvented in many ways; the easiest and least permanent is adding a resistor between the reset and 5V pins on the board. Since I didn't have a 120 ohm resistor on hand, I figured I'd hack around the problem for the time being. What I did was add a "warmup" function to my Ruby script, which essentially just wrote a command to the serial connection until I knew the device was ready. Once it was, I needed to consume any garbage data on the serial connection so that I'd be ready for actual output from the device based on my commands. In essence this was a really hacky way of synchronizing my code with the Arduino. Once I had everything going, though, things worked out quite well.
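
The warmup hack, roughly sketched – it assumes a hypothetical no-op "ping" command that the firmware answers once its setup loop is running:

require 'serialport'
require 'timeout'

def warmup(serial, attempts = 10)
  attempts.times do
    serial.puts "ping"
    begin
      # Any response at all means the board has finished rebooting.
      Timeout.timeout(1) { return true if serial.gets }
    rescue Timeout::Error
      sleep 0.5 # probably still rebooting from the connection reset
    end
  end
  false
end

serial = SerialPort.new('/dev/ttyACM0', 9600, 8, 1) # hypothetical path
raise 'Arduino never came up' unless warmup(serial)

# Drain any leftover boot chatter so the next gets returns real output.
begin
  Timeout.timeout(1) { loop { serial.gets } }
rescue Timeout::Error
  # the line is quiet; ready for real commands
end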

With that I had my hardware integration completed. I was ready to start taking that information and turning it into something useful I could visualize. I'll be covering that in another post at a slightly later date.


  1. Check out Brewwatcher on GitHub
  2. Check out Brewwatcher-Arduino on Upverter

Remembering Breakfast

In the past several months I’ve been involved with a rather intense project that touched a lot of different parts. Often while I was going through code reviews and analyzing feature requests, something would come up that I could swear had been done previously. Unfortunately, I couldn’t remember if it was directly related to the application at hand or something else I worked on that might’ve had a similar problem.

Often when an issue was raised about a specific piece of functionality, I would get a sense that it had been addressed in the past and promptly reverted because of customer feedback. The problem was I couldn’t pull up any historical data to refute the ticket and explain why the task was a fool’s errand. Another problem was that the lack of history led to repeating the same mistakes again.

After this happened to me two or three times, I’d had enough. There needed to be a better way to keep track of what I’d been working on and why. The goal was to be able to search for a phrase, or perhaps some tags, and pull up everything relevant. Aside from preventing repetition, this also has the added benefit of making status updates easy.

When I dove onto the internet in a quest to find journaling software that would meet my needs, I made one tiny mistake: including “Mac OS” in my query. This pulled up several applications for Mac that would solve my problem, but it left out several good alternatives. I effectively wanted something that would let me write journal entries with tags and perhaps send me a nag near the end of the day, something that doesn’t need to be tied to a single platform.

Although in hindsight I could’ve done a better job of looking for software, I also could’ve done a lot worse. The application I chose was Day One, which met all of my logging needs and, as an additional benefit, is pleasing to the eye.

After having used Day One for a week, I asked on Twitter what everyone else uses. For some it was social media; others use iDoneThis (which I completely forgot about!); others still use more rudimentary tools like vim. It was an interesting insight into how people keep track of what they have been doing throughout their day.

I could see social media as a note-taking or task-tracking system perhaps working if you are involved in a more open and transparent environment, like an open source project. For myself, I’m working on some pretty secret sauce and can’t really use that system for keeping track of what I’ve been working on. Another red mark for me is that I tend to use social media for “entertainment”, so I might end up with some false or misleading data in my search results. That avenue wouldn’t work for me.

iDoneThis is a tool I’d actually used in the past as an alternative to my team’s daily stand-ups. To be honest, this tool is excellent, and I’m a bit of a tool for having forgotten about it while searching for diary software. It supports hashtags, which provide the necessary buckets for relating tasks. It is also a separate tool, so I can keep the signal-to-noise ratio quite high. Another cool feature is that it integrates right with your mailbox, so you can get a message at the end of every day asking what you’ve done. I’m not too sure how their email integration works with multiple daily updates, but I do know their web interface supports logging multiple things in a day.

One problem that I had with iDoneThis was that I am simply terrible at email. Actionable or important emails I’ll star, and stuff that I need to get to soon I just leave as unread. Unfortunately I put iDoneThis too low on my list of priorities, so my daily updates would get missed. Aside from some workflow issues though, iDoneThis is an excellent tool that comes with the added benefit of being multi-platform.

The last tool I’ll talk about is vim. Well, actually it isn’t, because I’m not a sucker.1 Instead I’m going to cover vim’s bigger brother, org-mode. This is another tool I’ve used in the past and have even blogged about using. Org-mode is a fabulous tool with some nice functionality built on top of plain text files with some special formatting. The list of features in org-mode is almost endless, and I barely scratched the surface back when I was using it. I mainly used it as a task tracker rather than a diary, so I can’t say much about its diary features. Considering the scope of the project, though, I would be shocked if diaries and tagging weren’t in there. As for why I don’t use it anymore, the answer is quite simple: I’ve sadly moved away from using Emacs as my primary editor. I never got good enough at using Emacs effectively when working on a large codebase. I often had difficulty navigating through various parts of the code, so I switched over to Sublime Text, which doesn’t have the same level of org-mode support as the real McCoy.

In conclusion, if you ever run into a memory-loss problem and are looking for a technical solution, any of the tools mentioned might be worth considering. Of course, this always depends on your workflow and self-discipline.


  1. I’m not serious when I say shit about vim. It’s just not a tool I use, and it’s always a tiny bit fun to get a vimmer all riled up over superficial things like choice of tool.

Viewing PRISM from Another Angle

Another item has been leaked about the US government's involvement in data collection, and once again the internet is furious. I want to step back from the sensationalism and look at this through a different lens. While I am not fully against what the American government has done, I also don't support it.

First there was the leak by Edward Snowden about the NSA's data collection program, also known as PRISM. The goal of the program is to allow the government to perform massive amounts of data collection — unbeknownst to anyone, of course.

The second leak involved another of the NSA's programs, their XKeyscore system, which effectively amounts to racial profiling of people based on email contents.

The US government's applications of these two technologies are alarming, but we shouldn't let that get in the way of finding practical uses for the research. Other technologies the NSA has built — while also used for surveillance — have greatly benefited everyone by other means. For example, the agency was involved in hardening the cryptography in what would become known as the Data Encryption Standard, or DES, which is used basically everywhere in some form.

This organization has extremely intelligent researchers at its disposal, and the problem domain created by the PRISM and XKeyscore technologies is extremely interesting. In the worst case, this would mean a tool has been built to archive every single communication that goes over the internet. If we want to be a tiny bit more optimistic, let's say it's only doing data collection through the American internet.

This collected data is effectively chaos. It will contain so much generated noise, it makes you wonder how they are going to figure out what is relevant and what isn't. I'm personally really interested in knowing how they've built out tools that can intelligently differentiate between what would be a legitimate data point and noise.

Also since the NSA has built technologies that integrate more than just text communication, there's so much more involved. People have different accents, different sayings and, depending on native tongue, may even say words with a different intonation. Again, having a tool that can intelligently differentiate signal from noise could be useful for everyone, for applications other than surveillance.

One thing I'd love to see as part of the aftermath would be for the government to release the tools (or at least part thereof) to the public — especially because of the NSA's public funding. The public has paid for the technology to be built — whether it wanted it or not — so it should get to see what it's made of.

Another reason why I'd love to know about the guts of these tools is because it would be interesting to see what pre-existing technology they are using that is publicly available or perhaps even open source. I'm sure everyone can think of some pretty simple business uses for these tools, even uses that could integrate into a pre-existing business, which could help provide better analysis.

While these tools built by the NSA are being used for nefarious purposes, that doesn't mean the technology or the research is inherently evil. Pulling funding would be a step in the wrong direction. The issue lies in a lack of regulation and transparency within the government. If a government shares as much information as possible, its citizens should have no reason to fear it.


Farewell Ottawa

The capital region has been a place I’ve called home for the last 10 years. I can’t believe that this much time has passed since I moved here to start my science program in CEGEP. Since then I’ve built an amazing network of friends and colleagues and it’s a bit scary having to move away from all of that.

While moving is a bit scary, it's also an excellent opportunity to see what other developer communities are like and hopefully become involved with them, just as I was in Ottawa.


Building extensions for the newly minted Minitest 5.0

A few weeks ago, Ryan Davis rolled out a new major release of Minitest. It was a complete overhaul which came with a code cleanup and breaking API changes.

There are a large number of plugins for Minitest that have broken with this most recent change. One of those affected was minitest-reporters, a plugin I recently integrated into the Shopify codebase. This framework provides an easy way to change the output and statistical information generated by your test suite. Some of the simpler examples were things like colourized output, which is cool, but whatever, it’s not that useful.

A big problem I have with the Shopify codebase is how long it takes to get feedback from the test suite. The default reporters for Minitest/Test::Unit require the entire suite to run before you get any output. Considering how massive the Shopify test suite is, it could sometimes take 20 minutes to get results. When a test fails early on, it’s a bit frustrating to have to wait that long to see it. Luckily, minitest-reporters provided the functionality we needed!

The only problem was we don’t want to get stuck on old versions of various libraries. There are some we are currently locked to, and it’s kind of annoying and makes things even more complicated when you do need to upgrade.

So I decided I’d start digging into what could be done to upgrade minitest-reporters to Minitest 5.0, as well as what makes up a Minitest plugin.

The Minitest Extension Framework

Up until recently, many aspects of Minitest revolved around a bunch of globally available variables accessible from the root Minitest object – things like the Runner. In pre-5.0 Minitest, the easiest way to create a custom runner was to subclass MiniTest::Unit, which gave you immediate access to the test life-cycle. While this worked, it made adding some modules a bit tricky since there was a risk of stepping on the toes of other libraries. You could end up with two conflicting extensions that, when both included, rendered one useless in the best case and unpredictable in the worst.
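
For context, a pre-5.0 custom runner looked roughly like this. This is a sketch from memory against the old MiniTest 4 API, so treat the hook name as approximate:

# minitest 4.x
require 'minitest/unit'

class FabulousRunner < MiniTest::Unit
  def _run_suite(suite, type)
    # hook into the test life-cycle around each suite
    puts "running #{suite}"
    super
  end
end

MiniTest::Unit.runner = FabulousRunner.new

Every library that wanted a hook had to fight over that single runner slot, which is exactly the conflict described above.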

I can’t speak for Mr. Davis, but I think he saw these discrepancies in the older versions of Minitest and wanted to improve upon them. He did that by adding an expected interface for extension implementers to follow.

Let’s say I am building a FabulousPlugin

# fabulous_plugin.rb
module Minitest

  # opts is an OptionParser – register any command-line flags here
  def self.plugin_fabulous_options(opts, options); end

  def self.plugin_fabulous_init(options)
    # Do some things to initialize your plugin
    puts "Hello from the Fabulous Plugin!!!"
  end
end

# Now to include it all you need to do is include the module
# test_helper.rb

require 'fabulous_plugin'
require 'minitest/autorun'

class SomeTest < Minitest::Test

  def test_that_it_passes
    assert true
  end

end

And now, if things have gone right (which they should have), you’ll see Hello from the Fabulous Plugin!!! show up in your terminal window. Congratulations, you’ve successfully written your first Minitest plugin!

Now what’s so nice about this? Well, first things first, we no longer need to override the Test class in order to add our own functionality. That was probably the biggest problem before, since it made it really tricky to combine plugins if they each overrode some of the class variables.

Even in the current implementation you do still need to worry about namespace conflicts, since more than one person could build a FabulousPlugin, though this should be the exception and not the rule.

Writing extensions for test result generators

Unfortunately, the only thing I’ve been using the Minitest plugin system for is getting minitest-reporters working, so the only part I’ve dug into is the Reporters system and how it works. Pre-Minitest 5.0 there was a single runner, and that was it; if we wanted to generate multiple report types, such as showing slow tests and displaying a progress bar, we’d have to write our own composite reporter. Luckily, Minitest 5.0 comes with one, and actually uses it by default with a basic Reporter as the single item in it.

The added benefit is that we can piggyback off that effort to build our own Reporters. All we really need to do is implement two methods: record and report. Everything else is pretty much taken care of for us, which is fabulous! Another nice thing is that we can write our reporters outside of our actual plugin management: we can lock everything we want to load to a specific namespace, say Fabulous::Extensions, and with a tiny bit of Ruby reflection magic load the extensions we want.

module Fabulous
  module Extensions
    class SimpleReporter < Minitest::Reporter
      def initialize(options={})
        super(options.delete(:io) || $stdout, options)
        @all_tests = 0
        @passed_tests = 0
      end

      def record(result)
        @passed_tests += 1 if result.passed?
        @all_tests += 1
      end

      def report
        io.puts("#{passed_tests} out of #{all tests} passed")
        io.puts("You made a kitten cry") unless @passed_tests == @all_tests
      end
    end
  end
end

# fabulous_plugin.rb
module Minitest
  def self.plugin_fabulous_init(options)
    reporters = Fabulous::Extensions.constants
    self.reporters.clear
    reporters.each do |reporter|
      klass = Fabulous::Extensions.const_get(reporter)
      self.reporters << klass.new(options)
    end
  end
end

And with that, our plugin is tight enough to be simple to understand, and it loads our dependencies automatically based on which modules we decided to load. How would you make this code work, you ask?

require 'fabulous/extensions/simple_reporter'
require 'fabulous_plugin'
require 'minitest/autorun'

Congratulations! You’ve just written a Minitest module that will dynamically load its code, and you know how to include it in your test helpers or wherever you may need it.

You can always read through some of the code I wrote for this. It’s a bit rough around the edges still, but has been a great experiment in discovering how the innards of Minitest and minitest-reporters work.


The benefits of Conferencing by Yourself

This is something I’ve been wanting to write about for a while. In addition to my post on why you should stay in a hostel for your next conference, here’s another point that I think is pretty important.

Conferences serve a few different goals, the first of course being to learn about new things. The other is to build up our network of contacts for various reasons: finding new gigs, helping the company you’re at hire awesome talent, and so on. The problem, though, is that if we go to conferences with a bunch of co-workers or buddies and just hang out with them, we won’t meet nearly as many people.

The reason is pretty simple: when you’re in a clique of people you already know, it’s drastically easier to converse. Sure, you’ll still meet new people, but you don’t need to overcome the fear of going out to talk – it’s going to be the other way around. If you’re hardcore, you’re probably wearing something identifiable from the place you work, and someone will come up and try to talk. They’ve mustered a lot of courage to effectively throw themselves to the wolves, because that’s how large groups appear: scary.

Now, let’s try to flip that on its head. Instead, why not blend in with the crowd, and during the hallway track go out and try to join other people’s conversations without your buddies or co-workers in tow? You’ll have a better chance of meeting new people, and you’ve put yourself on an even playing field. Don’t get me wrong, it’s pretty scary trying to walk up to new people, especially Internet-famous ones, to strike up a conversation. From my experience, though, almost everyone is extremely chill and loves talking to new people. You already share a common interest; try to find out how they solve a problem you’ve been having, or what their thoughts are on that new technology you’ve heard about.

So if you can, next time you’re looking into conferences, see if you can pull off going to one that nobody else you know is attending. Besides, what’s the value in everyone going to the same conference anyway?