@awead

Technical notes and explications of code

Surveying the JavaScript Landscape

Taking a break from my usual Ruby work, I did a “recovery sprint” and explored some JavaScript. My skills are weak at best in this area, and it was my intention to improve them by learning some new technologies.

Node.js

I’ve been hearing a lot about Node, and vaguely knew what it was. There’s a lot out there about it and even more about what you can do with it. In short, it’s a platform for building network applications, built on Chrome’s V8 JavaScript engine. It runs as a single-process, event-driven application, which seems weird at first, but that’s the way it works and it works well.

There are tons of great tutorials on Node at Node School. The one I did was “learnyounode” which is a series of 13 exercises. It’s pretty simple, although I got annoyed a few times and grabbed the answers from learnyounode-solutions. Hey, I’m not getting graded on this or anything. It’s easy to get started:

brew install npm
npm install -g learnyounode
learnyounode

And you’re off and running. I posted my solutions as well if you’re interested. However, I didn’t go very deep into Node because I wanted to get a broader picture of the JavaScript development landscape.

Express.js

Coming from a Rails background, I was more interested in how you can leverage Node to create web applications. This led me to Express.js which is pretty much exactly what a Rails developer might expect if they were looking for Node on “Rails.”

Express has an app generator that will build a complete application structure in a directory. From there, you can start the application and begin customizing routes and content. It features a routing mechanism and a template language called Jade, although you can supply others if you wish. In short, to start a new Express app:

npm install express
npm install express-generator -g
express my-app
cd my-app
npm install
npm start

It even starts the web server on the usual port 3000. The similarities with Rails were actually a little eerie. Express isn’t a full MVC framework, though; it’s more like VC, or just V for that matter. As-is, Express gives you views and routes, so it’s very easy to start creating simple static pages or pages rendered from JSON data objects.

For example, if I wanted to add a simple “About me” page to my base Express application, I create a route:

routes/about.js
var express = require('express');
var router = express.Router();

/* GET about page */
router.get('/', function(req, res, next) {
  res.render('about', { title: 'About' });
});

module.exports = router;

Because the route file is about.js, it’s going to respond to GET requests at /about, which is why the path given to router.get is only “/”. After that, it’s just passing an object of local variables to the view template, which looks like:

views/about.jade
extends layout

block content
  h1= title
  p About me

In order to have the page render, we’ll need to wire it up to our application. Add these lines in the relevant locations of your app.js file:

var about = require('./routes/about');

app.use('/about', about);

Now you can restart the server and view the page. Not much there, but you get the idea. I left it very basic because I was more interested in…

Testing

Coming from Rails, I’ve had TDD/BDD beaten into my head, so it was very hard for me to do anything without asking myself “Where is this tested?” To answer that, you have to look at some testing frameworks. I started with expect.js, which has the syntax you might expect (heh) from RSpec. However, getting it wired up in Express required a few additional bits: Mocha and superagent. Expect.js provides the assertion language, while Mocha provides the test framework, which is a lot like RSpec and uses “it” and “describe” blocks. Superagent is an HTTP request library that makes the actual calls to your app.

Here’s how I put it all together:

npm install expect.js superagent
npm install -g mocha
mkdir test
mocha

After executing that last command, Mocha runs and reports “0 passing” tests. So, let’s add one for our about page:

test/about.js
var expect = require('expect.js');
var request = require('superagent');

describe('the about page', function() {
  it('returns information about me', function(done) {
    request
      .get('http://localhost:3000/about') // superagent needs a full URL; the app must be running
      .end(function(err, res){
        expect(res.text).to.contain('About me');
        done(); // signal Mocha only after the response arrives
      });
  });
});

Superagent makes the GET request, and with expect.js we can check the body of the response for the expected content. Note: I was getting the warning “double callback!”, which may be a bug, but I’m not sure. Everywhere on the net this seems to be the accepted syntax, so take it for what it’s worth.

Jasmine

Another testing method is to use Jasmine, which does basically the same as the above, even with some of the same dependencies, but slightly differently. To set up, let’s use our package.json the way it’s meant to be used and specify our dependencies. Add these lines to the dependencies key:

"jasmine-node": "~>1.14",
"request": "~2.56"

Then run:

npm install
jasmine init

Jasmine follows RSpec a little more closely: it creates a spec directory and assumes you’ll name your tests with a _spec suffix. To test our about page using Jasmine:

spec/about_spec.js
var request = require('request');

describe("the about page", function() {
  it("renders the about page", function(done) {
    request("http://localhost:3000/about", function(error, response, body){
      expect(body).toContain("About me");
      done();
    });
  });
});

One thing to note: this test will only pass if your app is actually running. There are ways to wrap this so that Jasmine starts up the app before testing.

M is for Model

The last bit to add to this mix is getting Express to model data in an ORM kind of way, like Rails does. Again, there is a lot to choose from here: Bookshelf, Backbone, Mongoose, Persistence, Sequelize, and the list goes on.

Many of these cater to one particular database, such as Mongoose connecting to MongoDB. There’s a specific platform that wraps these together: MEAN which stands for MongoDB, Express, Angular.js, and Node.js. It’s reminiscent of LAMP.

Backbone.js seems to be the most data-agnostic, while others, like Mongoose, focus on specific sources. For example, Sequelize supports PostgreSQL and MySQL.

Too Many Choices?

Angular.js is a popular choice as a fully-fledged MVC framework for JavaScript, and it can be pulled into the mix in Express apps, as MEAN shows above. Lastly, Express isn’t the only choice of platform. React.js is another “V” option in the MVC of the JavaScript world.

RSpec: Testing Inputs

After trying to do this the other day, I found there are a lot of different approaches. Here’s mine:

Let’s say you have an edit form that has a text input with a value already entered into it.

<input name="name_field" value="Adam Wead" type="text" id="document[name_field]" />

You want to write a test that verifies the content is already in the input field. Seems easy, but it turns out it’s not. A lot of the answers out there resort to using XPath, which works fine, but you can leverage Capybara’s own finders from within RSpec to do this too:

expect(find_field("document[name_field]").value).to eql "Adam Wead"

It avoids XPath, if that’s not your thing, and it’s slightly easier to read. I would have expected (no pun intended) have_value to work, but it doesn’t look like it responds to has_value?.

Watching Your Test Application

When I run tests with a sample Rails app, for example when I’m working on an engine gem, it’s sometimes difficult to see what’s going on if a test is failing. I’m a big proponent of having a clean database before every test. This means running a database cleaner and other tools that wipe out the data before and after each run. That can present problems when you’re trying to nail down a particular failure, because the data gets wiped once the test is over.

You can use tricks like the Byebug gem or calling save_and_open_page if you’re using Capybara, and these will often help. But, what if you want to open the actual test application at the exact moment prior to the failure? Here’s a trick I use:

Let’s say you’ve built up a test Rails application under spec/internal. This implies that you’re using RSpec, so if not, translate accordingly. Go into your test file and put a byebug call right before the failure. The test will run and stop at the breakpoint. Leave the byebug prompt open and, in another terminal:

$ cd spec/internal
$ bundle exec rails server -e test

Voilà. There’s your test application at exactly the point prior to the failure. Explore and poke around, but when you’re done, don’t forget to pop back over to the byebug prompt and enter:

(byebug) continue

The test will continue, and so will the cleanup process. If you forget this, the test application will not be in a clean state the next time you run your tests.

Filters, Not Overrides

When refactoring some controllers, I realized it’s much nicer to filter actions on controllers instead of overriding them.

Take for example a controller that exists somewhere in your application or in a gem:

class BaseController < ApplicationController

  def index
    @terms = Terms.all
    render_all_terms
  end

end

I need to use that controller in my application, but have to add some additional stuff to it to use in my views:

class MyController < BaseController

  def index
    @my_terms = MyTerms.all
    super
  end

end

This will work, and I’ll have both @terms and @my_terms in my views. However, I find it’s nicer, and a little bit less invasive, if I can work around BaseController without having to override it:

class MyController < BaseController

  before_filter :get_my_terms, only: :index

  def get_my_terms
    @my_terms = MyTerms.all
  end

end

The end result is the same, but I’ve accomplished it without having to change BaseController at all, thereby leaving its public interface untouched.
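The mechanics can be seen in a tiny plain-Ruby sketch, outside of Rails, with all names hypothetical: the hook runs ahead of the action, and the base class’s method body is never touched.

```ruby
# Minimal sketch of the before-filter idea in plain Ruby (not Rails itself).
class Base
  def self.before_hooks
    @before_hooks ||= []
  end

  # Register a method to run before the action.
  def self.before_hook(name)
    before_hooks << name
  end

  def run_index
    self.class.before_hooks.each { |hook| send(hook) }
    index
  end

  def index
    @terms = ["base terms"]
  end
end

class Mine < Base
  before_hook :get_my_terms

  def get_my_terms
    @my_terms = ["my terms"]
  end
end

controller = Mine.new
controller.run_index
controller.instance_variable_get(:@terms)    # => ["base terms"]
controller.instance_variable_get(:@my_terms) # => ["my terms"]
```

Both instance variables end up populated, yet Base#index stays exactly as it was, which is the whole point of filtering instead of overriding.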

Git

This is my usual workflow when submitting patches, or even working on my own applications. There’s been a lot of focus on Git branching models as of late. This is a good one, for example. Mine isn’t always that complicated, but no matter how many topic branches, my workflow is pretty much the same.

Start with a topic branch, and make sure you’re current:

git checkout master
git pull
git checkout -b fixing-a-bug

Start working. Take a break… oh, it’s the weekend? Okay, I’ll try again next week. Make a few stabs at it over the weekend and finally get it fixed next week. Now your git log probably looks like:

cdf5389ff978e4ba87150ca599fe6c4dcb6674b3 yay, done!
445cbfcf9ae8b18667474675213612c24a75190d ugh, no.
4e79c46f3e2410a09ddf609d442741e2cf1e8266 got it fixed!
f4f18f3117498f9e0735095fc7f44d63dc3557fe uh oh, this broke something
2a6ed6e963248a138996381f482ef500dea29bcf stashing changes
480ec8c17cb4ba535a97782167d73ba14528730e first stab

If you’re anything like me, some fixes are a journey and you sometimes end up going places you didn’t intend. Your git log may reflect this. Let’s clean this up:

git rebase -i HEAD~6

Now we can squash all those commits down to one, well-written and polished commit that makes it look like we knew exactly what we were doing all along. Of course, you don’t have to, it’s just easier. Sometimes I do squash to more than one commit if there are two different issues at hand and I want the comments in the commit log to reflect that; otherwise, it’s just one big one.
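For reference, the interactive rebase opens a todo list in your editor, with the commits listed oldest-first; changing pick to squash folds a commit into the one above it. Squashing the log above into a single commit looks something like this:

```
pick   480ec8c first stab
squash 2a6ed6e stashing changes
squash f4f18f3 uh oh, this broke something
squash 4e79c46 got it fixed!
squash 445cbfc ugh, no.
squash cdf5389 yay, done!
```

Save and quit, and git will then prompt you to write the one polished commit message.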

If we’ve been pushing up to Github along the way, we’ll need to force update origin since our commits have now diverged:

git push origin +fixing-a-bug

If you’re the only person working on this repo, you’re probably fine unless you’re Tyler Durden. Otherwise, check on any commits from origin or upstream if you’re using a forked repo. Then, get this into master and rebase against your topic branch:

git checkout master
git pull
git checkout fixing-a-bug
git rebase master

Since you’ve squashed down to one commit, you only have to resolve any conflicts once during the rebase, as opposed to once for every commit. Push any changes back up to Github and let Travis run the tests (if you’re using continuous integration):

git push origin fixing-a-bug

Once everything is green, merge your changes to master:

git checkout master
git merge fixing-a-bug

Since you’re merging one commit from the branch, this will be fast-forwarded, and the commit you’ve pulled in will appear with all the other commits in master as if there was never any branch. For small projects, this is probably fine. Alternatively, you can leave your branch in Github and submit a pull request. I’ve even done this on repos where I’m the only one working. It feels odd submitting a PR to yourself, but it helps document the branch and merge process.

If you want to preserve the branch/merge process and you don’t want to submit PRs, when you merge into master:

git checkout master
git merge fixing-a-bug --no-ff

The --no-ff is short for “no fast-forward”. This essentially preserves the merge as a separate commit in the log. Yes, it makes the log a bit longer, but it’s good for clarity because you’ll see the history of the branch and merge process.

Finally, clean up!

git push origin :fixing-a-bug
git remote prune origin

This deletes the branch in Github and cleans up your local clone.

A Passenger in a Passenger

Here’s something I didn’t know you could do, until today…

I have a public website deployed under Passenger, and I wanted to deploy a beta version under the same FQDN. The problem is, the site wasn’t set up to use sub-URIs. Essentially, I want:

http://foo.com/
http://foo.com/beta

To be independently deployed Rails apps under Passenger. I thought I was going to need to do some magic with the Passenger config and some URL rewrites, but after tinkering with the Passenger config files—and the ubiquitous Googling—I discovered I can do it very easily with Passenger alone.

The key is, I deploy the beta site within the public folder of the main site. Here’s what the Passenger config looks like:

/etc/httpd/conf.d/foo.conf
<VirtualHost *:80>

  ServerName foo.com
  SetEnv GEM_HOME .bundle
  DocumentRoot /var/www/rails/foo/public

  <Directory /var/www/rails/foo/public>
    AllowOverride all
    Options -MultiViews
  </Directory>

  RailsBaseURI /beta
  <Directory /var/www/rails/foo/public/beta>
    AllowOverride all
    Options -MultiViews
  </Directory>

</VirtualHost>

This assumes that you have one server deploying one web app. Apache looks to the /var/www/html directory to serve files, so I’ve symlinked it to the public folder of foo:

/var/www/html -> /var/www/rails/foo/public

The master branch of the git repo is located at /var/www/rails/foo. If you had deployed multiple apps on one server, your html directory probably has multiple symlinks to the different public folders of all your Rails apps.

In order to deploy the beta version, I clone a new version of the same github repo and pull down the relevant beta branch. I can then symlink this repo inside the current public folder and Passenger will serve out the new site from there. Here’s a quick synopsis:

mkdir /var/www/rails/beta
cd /var/www/rails/beta
git clone http://github.com/you/foo
cd foo
git checkout -t origin/beta
[run your normal install procedures]
cd /var/www/rails/foo/public
ln -s /var/www/rails/beta/foo/public beta

And that’s it. It’s a bit convoluted, but it got me out of a tight spot where I was stuck with only one server name and no sub-URIs.

No Breaks

New year, yes I know, it’s late. I’ve been doing lots of front-end and web design work which I readily admit is not my strong suit. However, I stick to using the Bootstrap framework and try to work within its defaults as much as possible. I find this helps force me to be consistent and maintain control over layout.

Recently I had a sort of mini-realization that the break tag isn’t really that good to use. First of all, it has that weird “no closing tag so we tack one on at the end” thing, which just bothers me. There may be instances when you have to use a break tag, or when it makes the most sense, but what I’ve been doing lately is asking myself: why am I using it in the first place?

Let’s say you have some plain text that you need to break up into sections:

Lorem ipsum dolor sit amet, <br/>
consectetur adipisicing elit, <br/>
sed do eiusmod tempor <br/>
incididunt ut labore <br/>
et dolore magna aliqua. <br/>

Nothing wrong with that. But, you might ask yourself what is it that’s significant about these strings that merits their breakage? Maybe these are lines of a poem, or some other structure. So why not reflect that in the code:

<div id="first-stanza">
  <span class="line">Lorem ipsum dolor sit amet,</span>
  <span class="line">consectetur adipisicing elit,</span>
  <span class="line">sed do eiusmod tempor</span>
  <span class="line">incididunt ut labore</span>
  <span class="line">et dolore magna aliqua.</span>
</div>

Then we just add a little css to take care of the line breaks:

.line {
  display: block;
}

Of course, you could use paragraph tags here as well. There are a multitude of options, but my point is that if you’re reaching for that break tag a lot, chances are there’s a more elegant solution that will better reflect the structure of your document. That has broader implications when we think about how our pages get linked to other pages and how we want the web to make sense of them. That’s easier when we’ve created a clear structure to interpret.

Rails Manifesto: Views

I’ve been developing with Ruby on Rails for about three years now, and while that’s not as long as some other folks, it’s long enough for me to have formulated some of my own personal programming maxims. One of these is about views. This past week, I was rewriting view code to remove all the Ruby logic so that it was, as much as possible, solely HTML. While you’re allowed to do lots of things in Rails views, I prefer to keep views what they’re supposed to be: just about display. To that end, I use lots of helper methods to handle the logic, and leave the view code as simple nested HTML blocks.

Views view, while Helpers help

Rails views allow you to embed any Ruby code you like directly in the HTML, so you can have if/else logic mixed in with markup all on the same page. Take, for example, this view code from the Blacklight plugin that displays a list of saved searches:

index.html.erb
<div id="content" class="span9">
<h1><%= t('blacklight.saved_searches.title') %></h1>

<%- if current_or_guest_user.blank? -%>
  
  <h2><%= t('blacklight.saved_searches.need_login') %></h2>

<%- elsif @searches.blank? -%>
  
  <h2><%= t('blacklight.saved_searches.no_searches') %></h2>
  
<%- else -%>
  <p>
  <%= link_to t('blacklight.saved_searches.clear.action_title'), clear_saved_searches_path, :method => :delete, :data => { :confirm => t('blacklight.saved_searches.clear.action_confirm') } %>
  </p>

  <h2><%= t('blacklight.saved_searches.list_title') %></h2>
  <table class="table table-striped">
  <%- @searches.each do |search| -%>
    <tr>
      <td><%= link_to_previous_search(search.query_params) %></td>
      <td><%= button_to t('blacklight.saved_searches.delete'), forget_search_path(search.id) %></td>
    </tr>
  <%- end -%>
  </table>

<%- end -%>

</div>

There are three cases, each with some view code associated with it: first, if no user is logged in, display some text stating that the user should log in; second, if a user is logged in but there are no saved searches in the @searches variable, display some text stating that fact; finally, if we do have some searches, display them in a tabular format. There is nothing wrong with this code; it works just fine. If you’re happy with the way it looks and you write code like that, I think that’s great and you can stop reading. However, I personally prefer a different way, and decided to refactor it.

I found the code a little hard to follow, and wanted a cleaner separation of Ruby logic from the actual HTML code so I could understand it better. If the view just expressed the appearance and the content of the page, it would make a lot more sense to me at first glance. To do this, I identified the primary function of the page: rendering the table of search results. I then separated the logic controlling that and gave it a method name defining it as clearly as possible:

searches_helper.rb
module SearchesHelper

  def render_saved_searches_table
    if current_or_guest_user.blank?
      # you need to login
    elsif @searches.blank?
      # you have no searches
    else
      # display the table
    end
  end

end

With the logic sketched, we can add back some of the view code where appropriate. In this case, the helper method can return a single HTML statement, but if it is more than that, the content should be rendered by a new partial:

searches_helper.rb
module SearchesHelper

  def render_saved_searches_table
    if current_or_guest_user.blank?
      content_tag :h2, t('blacklight.saved_searches.need_login')
    elsif @searches.blank?
      content_tag :h2, t('blacklight.saved_searches.no_searches')
    else
      render "searches_table"
    end
  end

end

The index view is now much more concise and can be re-written to take advantage of Rails’ content_tag blocks:

index.html.erb
<%= content_tag :div, :id => "saved_searches", :class => "span9" do %>
  <%= content_tag :h1, t('blacklight.saved_searches.title') %>
  <%= render_saved_searches_table %> 
<% end %>

Now, we create a new partial called by the helper method to display the searches in a table format:

_searches_table.html.erb
<%= content_tag :p, link_to(t('blacklight.saved_searches.clear.action_title'), clear_saved_searches_path, :method => :delete, :data => { :confirm => t('blacklight.saved_searches.clear.action_confirm') }) %>

<%= content_tag :h2, t('blacklight.saved_searches.list_title') %>

<%= content_tag :table, :class => "table table-striped" do %>
  <% @searches.each do |search| %>
    <%= content_tag :tr do %>
      <%= content_tag :td, link_to_previous_search(search.query_params) %>
      <%= content_tag :td, button_to(t('blacklight.saved_searches.delete'), forget_search_path(search.id)) %>
    <% end %>
  <% end %>
<% end %>

Personally, I find the first line a bit too long. There are a lot of options that are passed to the link_to method, and I chose to isolate that using a helper method:

searches_helper.rb
  def render_clear_searches_link
    link_to t('blacklight.saved_searches.clear.action_title'),
      clear_saved_searches_path, :method => :delete,
      :data => { :confirm => t('blacklight.saved_searches.clear.action_confirm') }
  end

Then, the final view code for the table looks a little more manageable to me:

_searches_table.html.erb
<%= content_tag :p, render_clear_searches_link %>

<%= content_tag :h2, t('blacklight.saved_searches.list_title') %>

<%= content_tag :table, :class => "table table-striped" do %>
  <% @searches.each do |search| %>
    <%= content_tag :tr do %>
      <%= content_tag :td, link_to_previous_search(search.query_params) %>
      <%= content_tag :td, button_to(t('blacklight.saved_searches.delete'), forget_search_path(search.id)) %>
    <% end %>
  <% end %>
<% end %>

OCD: Obsessive, Compulsive Design

To some, the above may seem like overkill, and I do concede that point. For me, it’s a matter of personal taste and also a nice feeling of satisfaction when looking at the finished product. It also satisfies a creative component that I feel is very important in programming. Writing in any kind of programming language is a creative process and Ruby is an expressive language. The refactoring process allows us to indulge a bit in these aspects.

I started down this path recently when I read this post about using Sandi Metz’s Rules for Developers. Following these rules is somewhat of a challenge, and it’s been a gradual process to get myself to abide by them. While I don’t always follow them, even attempting to has helped my refactoring process immensely. As a result, they’ve played a large part in how I’ve changed my thinking about views in general. The ideas that I’ve tried to apply in this example are making methods as concise and descriptive as possible, as well as crafting your modules and methods to be self-explanatory, which I think showcases Ruby’s expressive potential.

Glob-ins

A couple of months ago, I spent a harrowing Friday afternoon deploying one of my Rails apps. All the tests were passing on my laptop and on the development server. However, once I got the same code onto the production box, none of the spec tests would pass. Everything was failing with the same error:

NameError:
   undefined local variable or method `id' for main:Object

Not a very helpful error. So I did some initial digging around, but that didn’t yield any explanations. What was even more exasperating was that the code and environment were exactly the same in both development and production. I know, because I spent a lot of time checking.

I ended up really digging into the Rails source and tracked the problem down to a line in my RSpec config:

spec_helper.rb
RSpec.configure do |config|

  [...]

  config.global_fixtures = :all

  [...]

end

When specifying ActiveRecord fixtures, RSpec looks in spec/fixtures by default. If you load all of the fixtures in that folder, as I had specified above, it globs the directory to build the list of fixtures.

In most cases this probably doesn’t make a difference, but in my case it did, because the records in one fixture file depended on the other. I had two fixture files: one for the users table and another for activities. Activities needed to be loaded first, which was happening on my laptop and on my development system. On my production server, however, the users fixture was loaded first. This at least explained the error message: Rails was complaining that it couldn’t find the “id” method because the activities records hadn’t been loaded yet.

The Goblin in the Glob

This left me puzzling over why files would load in a different order on two identical systems. It’s true that file globbing can return different results on different operating systems, but the development and production systems were both CentOS 6, both updated with the same patches, Ruby versions, and gems.

On the development system, I got:

irb(main):002:0> Dir["./*"]
=> ["./activities.yml", "./users.yml"]

But production was:

irb(main):002:0> Dir["./*"]
=> ["./users.yml", "./activities.yml"]

I had assumed they would load alphabetically, but this was not the case. Ideally, you would just call .sort on the results and Ruby would sort them alphabetically, but here I needed to instruct RSpec to load the fixtures in a specific order. Fortunately, this was a one-line change:

spec_helper.rb
RSpec.configure do |config|

  [...]

  config.global_fixtures = :activities, :users

  [...]

end

Another problem remained with Cucumber, which utilized the same fixtures and needed them specified in the same order. Again, a simple fix using ActiveRecord’s newer FixtureSet methods:

ActiveRecord::FixtureSet.create_fixtures(File.join(Rails.root, 'spec', 'fixtures'), [:activities, :users])

No Answer

I still do not have an answer as to why the globbing order would be different on the exact same operating system. Perhaps that will be the topic of another post.
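Whatever the cause, the underlying behavior is easy to demonstrate in plain Ruby. On the Rubies of this era, Dir.glob made no guarantee about ordering, returning entries in whatever order the filesystem reported them (newer Rubies sort by default); calling .sort makes the order deterministic either way. A small sketch with throwaway files:

```ruby
require 'tmpdir'

# Create two throwaway fixture files and glob them; .sort guarantees
# an alphabetical, filesystem-independent order.
dir = Dir.mktmpdir
["users.yml", "activities.yml"].each do |name|
  File.write(File.join(dir, name), "")
end

fixtures = Dir.glob(File.join(dir, "*")).sort.map { |f| File.basename(f) }
puts fixtures.inspect # => ["activities.yml", "users.yml"]
```

With the sort in place, both machines would have loaded activities before users, which is exactly the order the fixtures needed.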

Testing Engines Under Rails 4

Recently, I’ve been using engines a lot, and have run into problems with testing them. The current guide on creating Rails engines covers the basics of creating the engine and touches a bit on testing them. The gist is that there is a dummy application under test/dummy and the code you write in the engine is tested against it.

This works, but what if there’s an update to Rails? Your dummy app is still stuck at the same version it was when you first created it. A problem I run into is that I need to create generators that add code into the dummy app, to mimic that same process real users will go through when they are installing the engine. I need a way to recreate the dummy application each time and run a complete set of tests.

My solution, which I’ve borrowed from my colleagues in the Hydra community, is to create rake tasks that build the dummy app from scratch, go through the steps required to install the engine, and then run all the tests.

Generate the Engine

When creating the engine, I do the same as the guide, but I exclude the default testing framework because I prefer RSpec:

rails plugin new blorgh --mountable -T

Go into your new engine and add rspec and rspec-rails as development dependencies:

s.add_development_dependency "rspec"
s.add_development_dependency "rspec-rails"

And then get rspec ready to use:

bundle install
rspec --init

This creates the initial spec directory, but now we need to create the dummy app inside it. Unfortunately, we can’t just run “rails new” and be done. Rails knows when we’re inside an engine and actually prevents us from running the “new” command. So we have to hack around this. There are two options: 1) delete the bin directory; 2) rename the bin directory.

I choose to rename my bin directory to sbin:

mv bin/ sbin/

I can then call any generators or other commands inside my engine using sbin/rails, and when I run the plain rails command, I get it as if I weren’t inside my engine. So now I can create my new dummy app inside spec:

rails new spec/dummy

And you now have a brand-new rails app inside spec, ready for testing.

Using Rake Tasks

At this point, you’ll need to add the engine to your dummy app and go through the process of initializing it to run your tests. You’re going to have to repeat the process a lot, so why not automate it with a rake task.

First, add this to your engine’s Rakefile to include any tasks we create in our tasks directory:

Dir.glob('tasks/*.rake').each { |r| import r }

Since these are tasks associated only with developing and testing your engine, as opposed to tasks that users will run once they’ve installed it, I put them in their own file, such as tasks/blorgh-dev.rake:

tasks/blorgh-dev.rake
desc "Create the test rails app"
task :generate do
  unless File.exists?("spec/dummy")
    puts "Generating rails app"
    system "rails new spec/dummy"
  end
  puts "Done generating test app"
end

This does the job of creating our test application, but it doesn’t actually hook our engine up to it. In order to do that, we have to add the gem to the dummy app’s Gemfile:

echo "gem 'blorgh', :path=>'../../../blorgh'" >> spec/dummy/Gemfile

This adds our engine to the dummy app, relative to the location of the dummy app itself.

Next, we’ll need to do the usual bits of running bundle install and any migrations within the dummy app. Here we need to expand our rake task to not only perform things within the dummy app, but also do them within a clean bundle environment. This is because we don’t want the dummy app to use any of the bundler settings we might be using in our dev environment.

Lastly, we also need to be able to delete the dummy app and regenerate it if we update any of the engine’s code or dependencies.

Taking all of this into account, our updated rake file looks something like this:

tasks/blorgh-dev.rake
desc "Create the test rails app"
task :generate do
  unless File.exists?("spec/dummy")
    puts "Generating test app"
    system "rails new spec/dummy"
    `echo "gem 'blorgh', :path=>'../../../blorgh'" >> spec/dummy/Gemfile`
    Bundler.with_clean_env do
      within_test_app do
        puts "Bundle install"
        system "bundle install"
        puts "running migrations"
        system "rake blorgh:install:migrations db:migrate"
      end
    end
  end
  puts "Done generating test app"
end

desc "Delete test app and generate a new one"
task :regenerate do
  puts "Deleting test app"
  system "rm -Rf spec/dummy"
  Rake::Task["generate"].invoke
end

def within_test_app
  return unless File.exists?("spec/dummy")
  FileUtils.cd("spec/dummy")
  yield
  FileUtils.cd("../..")
end

Next Steps

This gets us started, but you’ll probably need to add more in order to get your tests working, and that largely depends on what your engine is going to do. Fortunately, you’re in a good position if you have to test any functionality within your engine.

Generators

Using generators is a good way to create any necessary config files or code within the dummy app. The solution I’m using places the generators inside the spec directory and then copies them to spec/dummy/lib/generators. Then we can run the generators within the dummy app and perform the same actions users will take when installing the engine in their own applications.
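As a sketch of that copying step (the copy_generators name and both paths are my own assumptions, not from the original setup), here is a plain-Ruby helper a rake task could call:

```ruby
require 'fileutils'

# Hypothetical helper: copy the engine's generators out of spec/generators
# and into the dummy app's lib/generators so they can be run from there.
def copy_generators(engine_root)
  source = File.join(engine_root, "spec", "generators")
  target = File.join(engine_root, "spec", "dummy", "lib", "generators")
  FileUtils.mkdir_p(target)
  FileUtils.cp_r(Dir.glob(File.join(source, "*")), target)
  target
end
```

A :generators rake task could invoke this after the dummy app is built, and the dummy app’s sbin/rails (or bin/rails) would then see the generators as its own.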

Acknowledgments

Props and shoutouts to @jcoyne, who came up with this technique.