Testing without Fixtures (?)

I'm responsible for writing unit tests for a rails app whose behavior depends on the relationships of tens of thousands of items in a DB. I need this data in the test DB for a lot of tests. Storing this data as fixtures, keeping it in sync with the production DB, and waiting HOURS for the fixtures to load whenever the test suite is run are all untenable options.

Here is what I think I want to do:

1. Create a reference archive of the test DB once all the data is loaded. This will be checked into the repo so anyone can load it (quickly and easily with a rake or capistrano task) prior to running the test suite.

2. Trick Rails into believing the data was loaded from fixtures, so tests will behave normally.

Presumably this means overriding the standard procedure of deleting, re-inserting, and instantiating test data that happens before each test method. I believe this could probably be done with a test_helper method without having to modify ActiveRecord.
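
Here is the rough shape of what I'm imagining in test_helper (a sketch only, untested, assuming a standard Test::Unit setup):

    # test/test_helper.rb -- rough, untested sketch
    ENV["RAILS_ENV"] = "test"
    require File.expand_path(File.dirname(__FILE__) + "/../config/environment")
    require 'test_help'

    class Test::Unit::TestCase
      # Roll each test back in a transaction so tests can't mutate the
      # pre-loaded reference data.
      self.use_transactional_fixtures = true

      # Deliberately declare no fixtures (no `fixtures :all`), so Rails
      # never deletes and re-inserts table contents before each test;
      # the reference archive loaded beforehand is simply left in place.
    end

With transactional tests and no declared fixtures, Rails would have nothing to delete or re-insert, so the archived data should survive untouched from test to test.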

Concerns:

  • This seems to be in violation of the Rails philosophy of "make the right things easy and the wrong things hard."

  • I think this has to be a fairly common problem, but after lots of Google searching, I have found no instances of someone else trying to do this sort of thing. This makes me think there is a better way, or at least a different way that someone has already implemented.

What I am looking for:

  • A reality check. Is this a good way to handle this problem?

  • Technical guidance. I am a relative novice (read: n00b) with Ruby development, so any input on things I should be aware of as I try to figure this out would be helpful...

  • Or sample code. If someone has already solved this problem and talked about it somewhere, just point me in their direction.

Thanks, John

John,

I was recently working on a Rails-based data warehouse solution, running RSpec tests against test databases of ~2 million rows for the stress/performance-type tests, while running the bulk of the "logical correctness" tests off of fixtures. For the big databases, I disabled the destroy, recreate, and load of the test data and structured my tests not to depend on very specific values. The best place for me to get 2 million rows of data that reflected production was, well, production. So I put together a rake task to grab production's nightly backup and restore it into the test database prior to running those specific tests.
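
Roughly, something like this (a sketch only; the database name, dump path, and PostgreSQL commands are assumptions, so adjust for your DBMS):

    # lib/tasks/test_data.rake -- illustrative sketch, not a drop-in
    # solution; assumes PostgreSQL and a dump file you control.
    namespace :db do
      namespace :test do
        desc "Restore the reference snapshot into the test database"
        task :load_reference do
          db   = "myapp_test"                    # your test DB name
          dump = ENV["DUMP"] || "db/reference.dump"

          # Rebuild an empty test database, then restore the snapshot.
          sh "dropdb #{db}"
          sh "createdb #{db}"
          sh "pg_restore --no-owner -d #{db} #{dump}"
        end
      end
    end

Then it's just rake db:test:load_reference DUMP=path/to/backup.dump before kicking off the big tests.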

But even so, I also had a bit of a jury-rigged data generator built around the Faker gem to generate gobs of data that was, in my mind, a fair approximation of what I could expect in production's dataset, and could thus initialize a 500k, 2 million, or even 15 or 25 million row database. Those took a crazy long time to load with Ruby, so I resorted to generating SQL INSERT scripts and pushing them through the DBMS's bulk loader.
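
The generator boiled down to something of this shape (again a sketch; the items table and its columns are invented for the example):

    # generate_items.rb -- sketch; table/column names are made up.
    require 'rubygems'
    require 'faker'

    File.open("items.sql", "w") do |f|
      1_000_000.times do |i|
        name  = Faker::Company.name.gsub("'", "''")  # escape quotes for SQL
        price = rand(10_000) / 100.0
        f.puts "INSERT INTO items (id, name, price) " +
               "VALUES (#{i + 1}, '#{name}', #{price});"
      end
    end

From there, the INSERT script (or a COPY / LOAD DATA variant of it) goes through the DBMS's bulk loading path rather than through ActiveRecord, which is where the speedup comes from.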

At the same time, we knew our data and model had to conform to certain rules and validations, so we had fixtures that we loaded and ran tests against where the data scenario was known and the desired answer was also known. This is how we confirmed the logical correctness of our models.

In other words: use the right tools and data set for the specific job and set of tests, and have a variety of tests to exercise different aspects of the system.

Your comments don't seem to fall into "amount of data" so much as "variety of inter-dependent data," so I may be commenting on the wrong question. But, having tackled such a huge volume of data, I get where you are coming from in trying to ensure that the data across tables and rows is what we expect it to be!

This was actually something I encountered with the large volume of data coming in, which was fed to our data warehouse from another system (so we didn't really know what the quality of the data was going to be). For this sort of situation, I took two approaches. When the user said, "this data report is wrong because...", I would look at the specific scenario and then either a) change the code to handle the incoming data stream with on-the-fly corrections, or b) flag the record somehow as potentially wrong and provide a status page of flagged data for the end user to review and edit so that it would then process. (Sometimes the user would review it and say, "but this is right, we just need to handle it like so...")

One thing about related data in a database: if you know a key column should never have values unless they exist in another table/column, then set up your foreign keys and enforce the relationship, rather than trying to set up validations and unit tests to check for it! (Or do all of these things...) Outright enforcement beats hypothetical testing on test data; you can't hope to write every conceivable test case. No matter how crazy the incoming data stream was, we found that there were relatively few rules the data had to conform to, or the reports the application generated simply weren't valid. So the "logical correctness" tests were built to ensure our DSL / relationship rules were indeed catching invalid data situations. Often, solving one uncovered issue brought up "what if?" and "how likely?" questions about different, but similar, potential data inconsistencies, and we'd elect to document user stories for those scenarios and build them in as well. Thus our system "grew up" very quickly and became trustworthy, along with end-user understanding of their data (as well as ours!).
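
For instance, a migration along these lines (the table and column names here are placeholders, and it's raw SQL so it doesn't depend on any particular foreign-key plugin):

    # Illustrative migration; line_items/items are placeholder names.
    class AddItemForeignKeyToLineItems < ActiveRecord::Migration
      def self.up
        # Let the database itself reject orphaned rows, instead of relying
        # solely on model validations and unit tests to catch them.
        execute <<-SQL
          ALTER TABLE line_items
            ADD CONSTRAINT fk_line_items_item
            FOREIGN KEY (item_id) REFERENCES items (id)
        SQL
      end

      def self.down
        execute "ALTER TABLE line_items DROP CONSTRAINT fk_line_items_item"
      end
    end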

That's a bit of rambling on my part (sorry!), but maybe some of the ideas/perspectives shared will trigger some good ideas for you to pursue.

Michael

Hi Michael, thanks for taking the time to write such a detailed response.

"I disabled the destroy and recreate and load of the test data and structured my tests not to depend on very specific values in this scenario of testing."

This is basically what I think I should do. I just want a sanity check before diving into what, for my experience level, is a pretty substantial modification of Rails' behavior. It is just surprising to me that the need to do this isn't more commonly encountered, and that there is no discussion I can find that talks about doing it.

Before I get too invested, though, I'll try to give a better picture of the scenario I am dealing with. Because of an NDA, I can't really talk about what is in the DB or why it all needs to be there with any specificity, but I will try to explain by analogy.

Suppose I have an app that tries to smartly generate a grocery shopping list. The DB has a table for all the items sold by the grocery store. There is also a table for categories of groceries, and a join table that associates items with categories. Cheddar, for instance, could be joined with the categories cheeses, dairy, and items_that_require_refrigeration. We also have a table for different scenarios we might be shopping for, e.g. general_grocery, camping_trip, dinner_party, christmas_baking, etc., and items are joined with certain scenarios.
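
In ActiveRecord terms, the analogy's schema looks roughly like this (names straight from the analogy, not the real app):

    # Sketch of the analogy's models; the habtm join tables
    # (categories_items, items_scenarios) are implied.
    class Item < ActiveRecord::Base
      has_and_belongs_to_many :categories
      has_and_belongs_to_many :scenarios
    end

    class Category < ActiveRecord::Base
      has_and_belongs_to_many :items    # e.g. cheeses, dairy
    end

    class Scenario < ActiveRecord::Base
      has_and_belongs_to_many :items    # e.g. camping_trip, dinner_party
    end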

The way this might play out in our app: We select dinner_party as our scenario and indicate that we need to buy some cheese. The app provides a list of cheeses to pick from. If we add camembert, perhaps the list suggests a bottle of white wine that pairs well with the cheese, but if we add parmesan, the list gives us the option of adding spaghetti, pasta sauce, and/or all the raw ingredients to make our own pasta sauce.

Suppose the scenario is camping_trip, and we add graham crackers. Chocolate and marshmallows are added to the list automatically. However, if we add graham crackers while shopping for general_grocery, those items are not added.

Some of that behavior may sound a bit obnoxious for generating a shopping list, but remember this is an analogy, and in the real app, that sort of thing is more appropriate. The point is that the behavior of the application is heavily dependent on the grouping relationships of a large data set. So really, our tests need to verify the data relationships as much as the code. I can't really do that if I use a different data set, or restrict the data to a smaller, more manageable load.

Again, thanks for the input.

John

Hi, John

Sorry for the delay in replying as I don't log on here very often. What you're describing is basically facet searching. Have you looked at SOLR?

http://lucene.apache.org/solr/
http://acts-as-solr.rubyforge.org/

At any rate, I am curious myself that the topic of tailoring one's test environment doesn't get much attention. Or perhaps it's just that the whole testing approach utilized in Rails is new ground for many developers coming into the field, so much of the discussion is simply getting folks on board with the whole TDD and BDD concept altogether. Although TDD/BDD have been around for quite a while, a surprisingly large percentage of developers have never been exposed to either, so that's where the energy is expended.

With a system that takes hours to load, you should definitely just snapshot a database you consider "in a known state" and roll from there as needed.

Michael

Thanks Michael, that's helpful. Someone on another forum also linked me to this utility, which looks quite promising:

http://jailer.sourceforge.net/
