Shop Product Sink: Simplify your Shopify Product Synchronization

Down the Drain

A little while ago I mentioned that I was working on a Rails engine to simplify keeping products in sync with Shopify. I'll admit that the post was pretty sparse on details, so I should clarify what I meant. The tools currently available for working with Shopify are few and far between and mainly cover the most basic requirements. Many aspects of the platform still require a bunch of boilerplate code if you want to do anything substantial.

Gotta Fetch 'em All

A major aspect of many applications is keeping products in sync with what is available on Shopify. Let's say you are building something that catches order creation webhooks. Let's also assume that you don't want to deal with implementing synchronization, so you'll just pull in the product details as you need them from the API.

# webhooks_controller.rb
def create
  order = ShopifyAPI::Order.new(webhook_data)
  products = order.line_items.map do |line|
    ShopifyAPI::Product.find(line.product_id)
  end
  process_order_with_products(order, products)
end

With the newly released leaky bucket API you're at risk of running out of API calls far more frequently. The bucket refills at about 2 requests every second, so let's say your "best customer" has orders with an average cart size of 2 products coming in every second. You shouldn't really have to worry about anything, because you'll rarely get close to the bucket's limit of 40 requests.

Now let's say your average cart size were to increase by 1. After processing each order you are 1 request in deficit, and after about 40 seconds of constant orders you'll have burnt through your API requests. Now you have an extremely frustrating edge case: your application doesn't work exactly when you need it to. Either you handle this case yourself or you add a caching layer for products.
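To make that arithmetic concrete, here's a quick simulation of the bucket draining. The 40-call bucket and 2-calls-per-second leak come from Shopify's published limits; the 3-calls-per-order workload and the method name are just my illustration:

```ruby
# Simulate the leaky bucket: it refills at leak_per_sec while each order
# spends calls_per_order requests; count the seconds until an order no
# longer fits in the bucket.
def seconds_until_throttled(bucket_size: 40, leak_per_sec: 2, calls_per_order: 3)
  bucket = bucket_size
  seconds = 0
  loop do
    bucket = [bucket + leak_per_sec, bucket_size].min # calls leak out, capacity returns
    break if bucket < calls_per_order                 # this order's lookups won't fit
    bucket -= calls_per_order                         # spend one call per line item
    seconds += 1
  end
  seconds
end

puts seconds_until_throttled # => 38, out of calls in well under a minute
```

Net one request in deficit per second, so back-to-back orders empty the bucket in roughly 40 seconds.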

Cache, Cache, Cache, Cache, Cache, Cache. Everybody!

While handling the edge cases might seem like a solution, it's a hack and doesn't really result in easy to understand code. Handling an edge case means adding a layer of indirection that makes your code murky. Using a caching layer allows you to cleanly abstract that part out and keeps things simple, primarily because you shouldn't experience those edge cases as frequently.

# webhooks_controller.rb
def create
  order = ShopifyAPI::Order.new(order_params)
  products = order.line_items.map do |line|
    ProductCache.find(line.product_id)
  end
  process_order_with_products(order, products)
end

# product_cache.rb
def find(id)
  if product = fetch(id)
    product
  else
    # Note: This is a blocking operation
    remote = ShopifyAPI::Product.find(id)
    store(remote)
  end
end

def fetch(id)
  # CachedProduct is an ActiveRecord model; find_by returns nil on a
  # cache miss rather than raising like find does
  CachedProduct.find_by(id: id)
end

def store(api_product)
  CachedProduct.create(api_product.attributes)
end

Now, this isn't even the best approach. It will cover the majority of the popular products, which is pretty decent, but you still run the risk of some kind of sale where you get hammered by a bunch of orders for different products. Another thing to keep in mind is that this implementation runs the risk of going stale, which means either running jobs to re-fetch and update products or draining your cache and rebuilding it again.

Call me when something interesting happens. Here's my number

So instead of trying to fetch products during the request, which runs the risk of timing out, let's move things around a bit. We really want that cache because it keeps things simple, but we also want it to use as few requests as possible and stay as fresh as possible. You might have noticed that I mentioned we were handling order webhooks; awesomely, we can handle product webhooks too!

There is one thing that sucks about this, though: now you're responsible for building out an entire infrastructure to handle those products and the events related to them. We all have more important things to do, and this is where a tool I've been working on, shopProductSink, comes in. It's a Rails engine (sorry, Sinatrans) that you can easily glue into your Rails app to get full product synchronization working. I'll apologize in advance that there's a bit of configuration involved, but I couldn't think of a better way around it :(

Configuration!?!?!

Yes, indeed there is a bit of work that needs to be done before the engine can work its magic. The engine doesn't have any way of knowing where your application is hosted, so you'll need to set up your default_url_options; otherwise registering webhooks will fail, since we don't know what host to report. Say you were to only set this in config/production.rb; then it would look something like this:

# Somewhere in config/production.rb
app_routes = Rails.application.routes
app_routes.default_url_options[:host] = 'your.domain.tld'

Until further notice, another thing you need to ensure is that your application secret is set in the environment for the user that will be running your app in production. This is required for the webhooks controller to validate incoming requests. If you were deploying to Heroku, all you'd need to do is the following:

heroku config:add SHOPIFY_API_SECRET=abracadabra

Jacked up and Good to go

Once you're all configured and running, all you need to do is fire a few commands and you'll have an active product cache, as well as your application set up for receiving webhooks for your shop.

def post_shop_install
  installation_params = {
    shop: newly_installed_shop,
    token: newly_installed_token
  }
  import_products(installation_params)
  setup_webhooks(installation_params)
end
Let's look at our import_products method in detail and explain what's going on here.

def import_products(options)
  importer = ShopProductSink::Importers::Product.new(options)
  importer.import
end

What we're doing here is grabbing the product importer that ships with the shopProductSink engine and getting it to import the shop's products (using the credentials provided in options/installation_params). Keep in mind that, depending on the shop, this can be a long-running task, so it's probably in your best interest to run it in the background using a tool such as delayed_job or resque.

Now let's look into our setup_webhooks method!

def setup_webhooks(options)
  ShopProductSink::WebhookRegistration.register(options)
end

I took a slightly different approach to how this works, and you can see it in action on github. The tool tells Shopify that it wants to be informed about product creation, deletion and updates in order to keep the local cache in sync. Now, whenever a user of your application modifies anything to do with products, you'll receive a message from Shopify about it. The engine comes with a default controller that takes care of validating all incoming webhooks.

Caveats

The importer is rather aggressive and doesn't handle the edge cases quite yet, though I am hoping it's slow enough, and grabbing large enough pages, for this not to be an issue. The default is 250 products per page, which should be sufficient, but you never know; there are some shops out there with crazy numbers of products in them.
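For a rough sense of scale, the page arithmetic looks like this (the 250-per-page maximum is the API's; the helper is mine, purely for illustration):

```ruby
# Number of paged API requests a full product import needs at a given
# page size (250 is the maximum the API allows).
def import_request_count(product_count, per_page = 250)
  (product_count / per_page.to_f).ceil
end

puts import_request_count(10_000) # => 40, a 10,000-product shop costs a full bucket of requests
```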

There are a few cleanup tasks I need to do for the engine, but it's ready for general usage by the public and I look forward to your feedback!

Some of you might have noticed that I didn't mention sidekiq as a background worker. This was primarily because the Shopify API gem wasn't thread-safe, though there has been some work done on a fork of ActiveResource that makes the gem thread-safe, which you should be able to sub in.

Special Thanks

I'd like to thank Stephan Hagemann for helping review this article.


An Approach to Token Based Key Distribution for Mobile Clients

Do not duplicate

Back during Rails Rumble 2013 the team I was on was building an application meant to be used from people's phones. We were deliberating on how we could easily get the application data from the server to the phone without resorting to poor solutions like username/password combinations or some convoluted OAuth flow that requires embedding application secrets in the binary.

Token-based authentication is an approach I've heard of before, and it's used in popular applications such as Campfire. The one problem is that key distribution isn't the nicest thing in the world. With Campfire the user is required to find their token, copy it, then paste it into the correct location of their API consumer in order to interface with the product. While this might work alright for a desktop client, once you venture into mobile, data entry can be a little tedious.

Our assumption for the application was that a user would use the app from their desktop first. That means none of their session data is available on their mobile device, so getting the data they need would require logging in via their phone's web browser and going through the same steps described previously.

I wanted an easy way to get the information to our mobile clients that required very little in terms of technology. Our approach was to provide a link embedded in a QR code that could be scanned by your phone's barcode scanning application. Once you've scanned the code, you know where to go to grab the token you need for API access.

This Message will Self-Destruct in 5 Seconds

inspector gadget and stuff

There were a few things that needed to be taken care of when creating these URLs. Firstly, we couldn't lock the endpoints behind sessions since that would make it impossible to fetch the key. Secondly, it needed to be very hard for anyone to arbitrarily guess what the URL to get access to a token would be. Finally, we needed to be able to prevent "replay attacks" to the resource that contains the API token information.

In order to prevent people from guessing what the URL to get a token would be, we simply used UUIDs as the resource location. We had an object that held a reference to a token and would provide the token details when requested via JSON. This meant that whenever we needed to provide access to a token we could just create one-off objects and link off to their UUID identifiers. Another added benefit is that if we wanted to, we could regenerate another object referencing the same token.

When it came to replay attacks, the main concern here was having someone watching traffic as it went by and clueing in that an endpoint is how keys were distributed. So what could possibly happen is I request my token, while this evil third party (let's call him Carçon) who is watching my requests also makes a request to get the same token. If we aren't careful we could accidentally give the API token to Carçon as well! The way we took care of this was by making it such that by looking at the token you destroy the external reference to it. In this case it would mean that when the attacker makes the request, instead of getting an API token they'd get a 404 instead since the object providing access was destroyed at the end of the initial request!
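Here's the one-shot reference idea sketched in plain Ruby. The class name and the in-memory store are hypothetical stand-ins for our actual model and database:

```ruby
require 'securerandom'

# A grant points at an API token via a random UUID; redeeming the grant
# destroys the reference, so a replayed request finds nothing and the
# app can respond with a 404.
class TokenGrant
  STORE = {} # stand-in for the database table, keyed by UUID

  def self.create(api_token)
    uuid = SecureRandom.uuid
    STORE[uuid] = api_token
    uuid
  end

  def self.redeem(uuid)
    STORE.delete(uuid) # returns the token the first time, nil afterwards
  end
end

uuid = TokenGrant.create('my-secret-api-token')
TokenGrant.redeem(uuid) # => "my-secret-api-token"
TokenGrant.redeem(uuid) # => nil, Carçon is out of luck
```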

While I'm certain there are still some problems with this approach, it felt like a novel way of solving the key distribution problem. It also made things much simpler for the developer writing our Android app, since they didn't need to store user credentials or do the OAuth dance.


Thoughts and Experiences from my first Global Game Jam

SpellSword

January 24–26 was the Global Game Jam, and I was determined to actually work on a game for once. After having missed my opportunity for Ludum Dare in December, this had to happen. While there were a few last-minute changes and I ended up working solo, I was still going to build something. I decided to grab a few tools, some of which I already knew, others that I didn't, and roll with them. I had 48 hours to get something done, and messing around with environments or learning something crazy new like Unity would've been a waste of time.

Knowing that I was by myself I made a few decisions about what I'd need to do. The big thing was I didn't have time for assets so I jumped onto Open Game Art and started looking for retro/8-bit style art that would match my roguelike dungeon crawler. This site ended up becoming the go-to for everything I needed.

There were a few issues with the assets, though, that needed some minor tweaking on my side. 8-bit and 16-bit style pixel art is somewhat popular, but one problem is that the images are often extremely tiny. I fired up my image editor of choice and started making things a bit more realistic in size. There are various kinds of scaling algorithms, but the one that works best for pixel art is most definitely nearest neighbour. Other forms of scaling would leave the sprites looking alright but a tiny bit fuzzy, which was definitely less than ideal.

Tiled Map Editor

With my re-sized sprite-sheet and sprites I was ready to start building out my dungeons. This is where my game stopped being a true roguelike, since I wasn't procedurally creating rooms but building them in a wonderful tool called Tiled. The purpose of a tool like Tiled is to make it easier to build out maps your game can use while also helping keep size down. Say you were to use a PNG to represent a very large level: there's a good chance the image would be huge and might even hurt performance. Tile maps, on the other hand, provide a bunch of meta-data about how your level looks, and then you're able to render only what you need. Part of this includes the sprite-sheet or map-sheet your software uses to know what images to render on the screen. Another benefit, of Tiled at least, is that I was able to add objects, which can be used for whatever you like. In my case I used them to represent the bounding boxes of my rooms, where the player would start and where the exit of the dungeon was.

I'd experimented with Löve2D in the past and had learned enough Lua to at least be able to build something. I was still jumping into the Lua documentation to answer questions, but at least I was able to effectively put my ideas into code. Much of the game was built with a lot of write-and-run debugging, which worked for most things but started to get a little annoying. I'd run into the typical scripting language problems like "variable not defined" or "you used . instead of : on your function call", which was wearing me down a bit near the middle of day two. I normally build systems and pretty much always drive my design by unit tests, and as I started to work on my map bounds I knew that skimping on testing because I'm making a game was going to cost more than finding a test framework, setting it up and using it. It didn't take too long before I found a tool that made the Rubyist in me pretty happy. There's a Löve2D testing framework called lovetest that gave me a really easy way to set up a test suite and run it. There was a simple naming convention to follow, and then everything else was like running unit tests in something like Ruby or Java. I was able to get immediate feedback and iterate on the errors much more quickly than I would've otherwise.

Another big tool I used during the jam was sleep. I knew going into it that I couldn't afford to run myself into the ground since I had a full time job to get back to on Monday. Getting sick because I pushed my body too far wasn't something I wanted, and besides I had a skiing trip the week after! While I didn't participate the full 48 or so hours, I think I ended up putting in about 27 or so. I noticed that there'd be a point during the day where things just weren't making sense anymore and I was making way too many mistakes. Sleep helped solve so many problems, it cleared my mind and kept me refreshed. I was also able to focus really well with very few distractions. I kinda feel that while I only spent about 26 hours or so building the game, they were all spent on building and not anything else like perusing Twitter or Reddit.

While my game worked, I wouldn't say I finished it. Killing monsters doesn't quite work right, there are some collision detection issues, and you can't die or beat a level. It does give me targets to keep working on, though, and I was really in it for building stuff and learning new things. For that I'd say it was a great success, and I'm looking forward to participating in more to come!


SpellSword on Global Game Jam submissions site


Testing request-based modules and understanding Rails/Rack header munging

bike rack

I'm working on a tool to help make it easier to keep a local cache of product data from Shopify, and one part of that means keeping up to date with changes. Thankfully the API provides an easy way to subscribe to webhooks, which is quite useful. Looking at the problem, though, I know that just building another webhook controller would only help me in the short term. It would help me build out another part of the project, but it wouldn't really be usable outside of it.

One big part of this project is that I want to be able to tell someone "yeah, take this and add it to your project" with confidence. I don't really feel confident letting anyone use my code unless I've tested it at least enough to know the important parts work the way they should. When working with models and such it's pretty simple, but when you start working with requests things get a bit more complicated. The last thing we want our test suite to do is make external calls; they're slow, and slow tests make us reluctant to run them.

Using the documentation, I started building out some really simple tests to ensure that when a request comes in with the X-Shopify-Hmac-SHA256 header, I'm reading it in and calculating the HMAC correctly. I looked around for examples of mocking request objects and only found material on using tools like mocha or rspec to mock out the methods being called. I've been burned too many times by not understanding APIs fully, so I figured it would be better to use something real. The most obvious choice is to work with the Rails request object, since that's what I'm going to be using anyway. It's Rails, so as far as I'm concerned it's stable; not standard library stable, but still pretty darn stable.
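For reference, the expected value in that header is computed the way Shopify documents for webhooks: an HMAC-SHA256 of the raw request body with your shared secret, base64-encoded. A minimal sketch:

```ruby
require 'openssl'
require 'base64'

# Recompute the signature Shopify puts in X-Shopify-Hmac-SHA256:
# HMAC-SHA256 over the raw body, keyed by the shared secret, then
# base64-encoded.
def shopify_hmac(secret, body)
  digest = OpenSSL::HMAC.digest(OpenSSL::Digest.new('sha256'), secret, body)
  Base64.strict_encode64(digest)
end
```

Verifying a webhook then boils down to comparing this against the header on the incoming request.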

After giving the initializer a quick glance over and realizing I needed to work with a Rack env I dug a bit deeper into figuring out how to easily create the environment. This brought me to Rack::MockRequest which gave me exactly what I needed in order to build up that request environment.

With my test setup everything should be peachy and I'll be able to pull out that header and have me some passing tests.

class MyTest < ActiveSupport::TestCase
  test "authenticating a request" do
    controller('abracadabra')
    hmac = "u8aZR7htZKE6uWRg6M7+hTZJXZpcRmh5P4syND1EM24="
    controller.request = request('a message from shopify', hmac)
    assert controller.valid_webhook?
  end

  def request(data, hmac=nil)
    options = {
      :method => 'POST',
      :input => data,
      'X-Shopify-Hmac-SHA256' => hmac
    }
    ActionDispatch::Request.new(Rack::MockRequest.env_for("", options))
  end
end

I'd fire up my test and… it failed. I was getting a little frustrated and confused, since this should work, right? After a bit of digging around I got into ActionDispatch::Http::Headers#env_name, which was doing some kinda weird stuff. Whenever you're dealing with a header that isn't one of the "normal" ones (Content-Type, Content-Length, etc.), it gets munged a bit. This comes from the CGI convention that the Rack env follows. Anyway, let's say that along with your request to a Rails app you include the header Needs-More-Taters; well, it'll go through your middlewares and all that business, and by the time you actually look at it from your controller it's HTTP_NEEDS_MORE_TATERS!
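The transformation itself is easy to sketch. Aside from the special CGI fields (Content-Type, Content-Length), a header name gets upcased, its dashes become underscores, and HTTP_ is prefixed; rack_env_name here is my name for it, not Rack's:

```ruby
# My-Header-Name becomes HTTP_MY_HEADER_NAME in the env hash, per the CGI
# convention Rack follows (Content-Type and Content-Length skip the prefix).
def rack_env_name(header)
  "HTTP_" + header.upcase.tr("-", "_")
end

puts rack_env_name("Needs-More-Taters")     # => HTTP_NEEDS_MORE_TATERS
puts rack_env_name("X-Shopify-Hmac-SHA256") # => HTTP_X_SHOPIFY_HMAC_SHA256
```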

What does this mean for my tests? Well, my module isn't actually going through the whole dance for controller tests, so my environment isn't getting set up correctly. Instead, I need to pass in the environment in the "proper" state. In my case that means my options hash needs the fully qualified names Rails expects to be there.

def request(data, hmac=nil)
  options = {
    :method => 'POST',
    :input => data,
-   'X-Shopify-Hmac-SHA256' => hmac
+   'HTTP_X_SHOPIFY_HMAC_SHA256' => hmac
  }
  ActionDispatch::Request.new(Rack::MockRequest.env_for("", options))
end

With this discovered, all my tests are passing! There was a bit of a yak shave involved (I'm notoriously good at falling into those), but it's cool, because it was also a chance to learn a bit more about the innards of Rails and how I can use plain Rack for testing.


Reducing the suck with Rails Engines

train engine

Mid last year I worked on adding a feature to the forum software at Shopify because I wanted a bit of context before clicking on a link in my emails. Since adding that feature I feel like I've become far more active on our developer forums and have helped answer a number of questions. It's also given me a lot of insight into the kinds of things that make software suck.

One thing I'd often see is people asking about getting products along with order information. The problem is that if an order comes in with 4 line items, they need to make another 4 API calls before they can do anything, since the line items only contain product IDs. This is pretty shitty, and my response would typically end up being "you'll need to cache that locally and keep it up to date". Then I started thinking about the problem a bit more and realized that I'd be pretty pissed off if someone told me that.

On top of solving the business problem, I now need to do some boring bullshit in order to save on some API calls. Of course, if I want to passively keep the data up to date I need to add Webhook support and now this minor detail has exploded in scope if I want to be able to scale this at all.

Looking into it a bit more I realized that this is a problem that only needs to be solved once (per language/framework). Being a Rails developer and hearing so much about these engines that are all the rage I figured this is the perfect use case for them. The goal is to give Rails developers a drop in module that makes it super easy to get product detail caching working in your projects with little to no effort.

Currently the engine is in a state that with a little bit of code you can get a bunch of products imported into your application. For example, I have a controller with an import action that does the following:

def import
  api_products = ShopifyAPI::Product.find(:all)
  ShopProductSink::Product.create_from_resources(api_products)
  redirect_to root_path
end

And here is accessing that route in action:

Invoking /import for a logged in user

So there's still a bit of work on the developer's side that requires some insight, and the API is still a little bit in flux. I'm currently not planning on doing any magic (I'm not smart enough for that yet), though I figure once I get it out there and get feedback I'll be able to make it even better.

If you're interested you can keep track of the project on github


Should we use JavaScript as a teaching language?

A few days ago Ashe Dryden mentioned she was looking into JavaScript as a possible language for teaching. Some argue that it’s a terrible language to start people off with. This got me thinking about the language a bit and whether or not it’s truly terrible.

I started learning programming with Java, which is a pretty obtuse language. Even the most basic "Hello World" is about 15 lines long. There's the whole public static void main(String[] args), which makes no sense when you're just beginning programming. This is where scripting languages really win: you write stuff and you get results right away.

One thing I think we need to realize though is that building basic command line applications isn’t the most interesting. “Oh cool I made a computer say ‘Hello Chris'” but there are better things we can do to teach. For one, seeing something visual instead of a bunch of characters in a prompt is a good start.

This is where a language like JavaScript starts to make sense. When bundled with a browser, it comes with a complete widgeting library that learners can use to build some interactive applications. Using plain HTML and CSS they could build a text adventure game, though by adding JavaScript to the mix it can become far more interesting.

Creating games is a good way for people to learn. For one, it's interactive, which makes people more inclined to play with it; playing may even uncover bugs, pushing them to fix them and make their games better. If you were to give a group of students a basic introduction to JavaScript, variables and functions, you could probably have them rolling out a Noughts and Crosses game by the end of the day.

There are a number of positives that come with using JavaScript as a teaching language, the big one being that the student doesn’t really need to install any runtime. As long as they have a modern browser they should be good to go. Chrome, Safari and Firefox all come with excellent DOM inspection and debugging tools as well as a JavaScript REPL. I’ve been to a few classes where I was a Coach/Teaching Assistant, and getting an environment set up can be a massive time sink and it can be really discouraging for people just looking to learn. The language also comes with a lot of functionality that makes teaching advanced computer science constructs (such as closures) very simple.

It’s easy to dismiss the language because of the way it’s been built. Variable hoisting is kinda weird, the fact that if you aren’t careful in the way you create your closures can result in the same variable being “assigned” everywhere and double-equal comparison are a few. Though, as teachers we can help steer our students away from the gross bits of the language and if they encounter them try explaining them. If their eyes start to glaze over simply tell them that it’s the language being weird and show them how to get around it.

Sometimes starting off with a "shitty" language can be the way to go. In the end, if they become interested in programming they can keep rolling with JavaScript, or maybe they'll be tempted to try out something else. Learning to program should be the goal, and helping students do it as quickly as possible is the best way to get them there.


Learning to Love2D


I decided to spend the evening learning another game making framework. I'm still quite happy with Game Maker, though getting to see how Lua works in a fun and interesting way sounded pretty neat. Initially I was going to try making some kind of Scorched Earth clone, but I got all hung up on procedurally building height maps, so I took it down to something super simple: Pong.

I saw that Love2D is integrated with Box2D, so I figured I could probably leverage that to make a Pong game. It would take care of all the deflections for me, so I'd really just need to switch things around based on some collision information. So really, all that's happening when you hit a paddle is the gravity reverses. While it's nothing super exciting, it does add something kinda cool to the game. Depending on how you angle your shot you can either make a really hard-to-hit bouncy one or even slow it down.

I also noticed that there's a bug which *may* cause the ball to keep its gravity and just come back towards you. Feel free to give it a shot, and make sure you download Love2D; with that installed you can even start making your own Love games too!

Download Pong

Controls


Fourty Three


Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 2.5 Canada License.
Permissions beyond the scope of this license may be available at http://christophersaunders.ca.

Deploy a Subdirectory to Heroku as an app

Branches

This is mainly to share a discovery a coworker and I made while trying to figure out how to deploy a project they were working on after a transition to the single git repo for this project.

Normally when working with heroku you just execute the following and bam, you’re up and serving your code on a dyno.

git push heroku master

But with the way things have been setup now, our git repository now looks like this:

my-shared-project
↳ README.md
↳ abracadabra-app
↳ fabulous-project
↳ spiderman-project

And we have a problem. Doing the deployment as before would push all these directories and our apps won’t show up the way we want. They might not even deploy nicely, which would be terrible!

Git is a pretty powerful tool and the things you can do with it are pretty much limitless. Newer versions of git (at least 1.8.x) have a feature called subtrees. They evolved as an alternative to git submodules, so they can be used for things like including project dependencies.

But git is just a tool, and while the regular use case for a subtree might be managing dependencies, we can "trick" (if you want to call it that) heroku into treating a subdirectory as a git repo. By specifying where the subtree is, we can treat it as a repository and push it up to heroku.

One advantage of this single repo for the team I'm on is that it gives us one place to look at all our projects in progress, open pull requests and get feedback, and the subtree strategy lets each collaborator deploy their respective project to heroku as easily as deploying a normal Rails app, albeit with a few changes to the deployment incantation.

So, how do I deploy? Essentially, you deploy that prefixed subtree to heroku as master. This is achieved by composing a few discrete git commands:

git push heroku `git subtree split --prefix fabulous-project master`:master

Aside from the code inside the backticks, deployments look almost the same. All we've done is change where our master branch comes from, sending that off to our remote (heroku, in this case).

If you’re looking to learn more and perhaps get a stronger understanding of what’s going on when using git subtree you can check out some of the following links, that I semi-butchered:


Erosion

Erosion

Dec 12, 2013 – It should be noted that things have gotten a lot better for me since I wrote this post. I've been doing a lot to improve how I've been feeling, and the feedback from the community has been excellent. I want to thank everyone who reached out to me and shared similar stories.

I’m going to preface this with the following: This is an extremely difficult article to write and I’m also pretty sure it’s going to be a bit of a rant. I’m afraid of what people will think of me, the implications it will have on my career and  by simply exposing a vulnerability.

I’ve been living with depression for at least the last five years and I am constantly in fear of being proven a fraud.  I normally don’t talk to people about how I’m really feeling and I believe I’ve been sort of described as having two emotions – ‘angry’ and not ‘angry’.

I don’t come from what I’d call the traditional programming background or ever did anything I’d consider amazing. When I was a teenager I was really into Chemistry, I loved it! I was going to get a degree in Chemical Engineering and do whatever… chemical engineers do. During college things changed a bit, while I still really enjoyed Chemistry I wasn’t as thrilled for it. During my last semester a friend and I were building an educational tool that would help people learn chemistry. It was really just a glorified multiple choice test that was written in Visual Basic. I wrote up the questions, my friend wrote the code. While he was working on it, I was watching and thought that was pretty neat. Building stuff with a computer is kinda cool, and with a tiny bit of convincing I had enrolled to join the Computer Science program at a local university.

I had never programmed before in my life. I never thought too much of it at the time, but to this day it haunts me. The first year was hard. Recursion was confusing, I was never really good at theoretical mathematics such as proofs and formalized logic, and the list goes on. I made it through my first year with decent grades and lost my scholarship, but who doesn’t? Those were really just tools to lure suckers into the University cash machine, right?

Early in my second year I realized that I needed to get into the industry and actually build software. Until that point I had spent my summers on contracts/training as a cook in the military. I was afraid that if I stuck with that path I’d get out of school with no skills – someone with a degree and nothing useful to show for it. I’m pretty sure my depression started around my first work term, or perhaps my second.

I looked at the work of other interns who were both younger and newer to school than I was, yet performing drastically better, and this really started to bother me. I started trying to learn new things, like reading technical books, only to put them down. They weren’t the most interesting things to read, which made it really difficult to stay engaged. After working for a few years I can now consume some of the harder books, though at a fairly slow rate. I often question whether I’m applying anything I’ve “learned” from them correctly, and still often doubt myself.

Often I find myself with the following logic looping through my mind:

My coworkers are working on cooler things than I am, and they are definitely more interested in the things they’re working on than I am in mine. The things I’m interested in have no inherent value, so what’s the point of working on this stuff? Am I even interested in the kind of problems the company I’m at is trying to solve? I’m not a top performer and I’m a fraud… why am I still even on the payroll? I can’t consistently find a project I want to work on; I have no commitment.

Those thoughts really get to me, and it got to the point where I had to see a doctor about it. I was prescribed anti-depressants, and I’m pretty sure they worked. There was a “noticeable change in my performance” and I’m confident it was just because of the medication. With the move to a new city and my laziness when it comes to seeing health care professionals, I simply stopped taking the medication. The withdrawal was interesting, but after it was over things seemed pretty good. I was confident I didn’t need them, or would be able to cope without them. Besides, they felt like a crutch; I’d need to face these problems for real and couldn’t hide behind induced chemical reactions. Considering that almost monthly I find myself thinking I’m shitty at my job and should probably just leave, I’m starting to think otherwise.

A couple of months ago I had a bad breakdown. It was after a tightly crunched project, and things were rough enough that I knew something had to be done. That week I started seeing a therapist, and things have been going alright. We don’t meet frequently enough, but I’ve been getting something out of the sessions. It was nice to find out that suicidal thoughts are a pretty normal thing. While we’ve only covered a few things, it’s been nice to have an unbiased third party to talk to about what’s been going on, and even still I find this difficult.

I’ve been spending a lot of time asking myself what I want out of my life and career. I’ve also been trying to figure out what I truly enjoy and love doing. It’s been difficult because I’ve always had a hard time with self-reflection and personal goal setting. Goals feel so far away, and the path there feels like a smooth vertical rock face. I’ve tried breaking them down and figuring out how I’d get there. If things are going well it’s not too bad, but then a trigger will happen and I’m back to dwelling and depression.

From my reflections I’ve been able to figure out the following things:

As such, I chose a few and have tried to do more of them.

I’ve been lurking in IRC channels and trying to help out wherever I can. I can usually only answer the simple questions, though I look at it as taking the burden of common questions off the smarter people’s shoulders. I do know that in the past I’ve always liked it when some random person even tried to help me.

I’ve done some volunteering for a local exposition and have started learning how to make games. I’ve been looking into virtual and physical game jams to participate in to test myself and see what kind of game I can get done in 48 hours.

Finally, I’ve been trying to get more involved in pairing and, in a sense, teaching. I’ve been inviting people to do remote pairing with me, though I’d love it even more if I could pair with someone far smarter than I am on something that’s actually cool. I’ve also been pairing with some co-workers to fix bugs in various systems.

All that being said, I’m hoping that things get better in the future, and that I’m able either to not have depression anymore or to have it under control, such that I’m actually able to enjoy my work, my life and everything else those encompass.

If you’re curious what depression is like, check out Zoe Quinn’s game Depression Quest which gives a pretty good example of what it’s like. I didn’t play all of it, but after a few minutes in the game it really resonated with me.

Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 2.5 Canada License.
Permissions beyond the scope of this license may be available at http://christophersaunders.ca.