Test your information architecture using C-Inspector

Promo: Also try our product Naview, which helps you design and build navigation prototypes quickly and test the usability of your navigation with users. Naview is made for people like you – and it’s available on a free plan, too!

It is important that users can easily find the information they are looking for on your website – otherwise they will leave. To make sure they stay, it’s a good idea to test your new information architecture (website structure) with real users before building the site.

Update 26 Sep 2009: I received an email from Steffen clarifying how to access detailed results for individual tasks, and I have amended this article accordingly. Thanks Steffen!


  1. Software tools for IA testing
  2. Running IA testing
  3. Setting up a study using C-Inspector
  4. Uploading your taxonomy
  5. Setting up tasks
  6. Publishing your survey
  7. Completing the study
  8. Reviewing survey findings
  9. Pricing
  10. Strengths and weaknesses
  11. In summary

Software tools for IA testing

I wrote about using Treejack from Optimal Workshop to test your website structure earlier this year. Treejack is a web-based tool that lets you upload your sitemap and define a number of tasks that you want your users to complete. You can then invite people by email to take part in your survey, and review the results easily online.

C-Inspector - Home

I recently heard about another new IA testing tool called C-Inspector (see above) from its creator Steffen Schilb, a freelance IA and user experience consultant based in Cologne, Germany. I took the trial version for a quick spin and summarise my experiences below.

Running IA testing

The overall process of running IA testing is the same regardless of whether you use C-Inspector or Treejack:

  1. Design your website structure (perhaps using a tool like Volkside Naview)
  2. Decide on the objectives for your testing
  3. Prepare user tasks
  4. Prepare participant profile and recruit suitable users
  5. Set up your study/survey using the IA testing tool
  6. Run test sessions
  7. Review survey findings
  8. Revise website structure and retest (steps 5-7)

In fact, the process is pretty much the same even if you do it by hand, although in that case you’ll need to run the testing in person. Read the previous article for more details on how to prepare for IA testing.

Setting up a study using C-Inspector

Setting up a survey using C-Inspector involves four steps:

  1. Set up categories, i.e. the structure you want to test
  2. Set up the tasks you want your users to complete using that structure
  3. Decide on study settings such as the name and the number of attempts per task
  4. Write content for the welcome page, instructions page and thank-you page

Uploading your taxonomy

You can set up the structure you want to test either using the built-in editor, or by uploading a CSV file (“comma-separated values”), typically exported from a spreadsheet document.

The built-in editor is quite clunky, and if your structure is complex enough to warrant testing, it is likely that you will design it in a spreadsheet and then upload it to C-Inspector. The first screenshot below shows the Volkside website structure uploaded to the tool:

C-Inspector Setup - The uploaded taxonomy C-Inspector Setup - Setting up tasks C-Inspector Setup - Study overview
The uploaded taxonomy Setting up tasks Study overview
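C-Inspector’s exact CSV layout isn’t documented in this article, but a common convention for tree-testing uploads is one row per node with the label placed in the column matching its depth. The sketch below shows how you might generate such a file programmatically; the taxonomy and the column-per-level layout are illustrative assumptions, so check the tool’s own documentation for the format it actually expects.

```python
import csv
import io

# Hypothetical taxonomy: each tuple is the full path from the root to a node.
# The column-per-level CSV layout below is an assumption, not C-Inspector's
# documented format.
taxonomy = [
    ("Home",),
    ("Home", "Services"),
    ("Home", "Services", "IA testing"),
    ("Home", "Services", "Usability reviews"),
    ("Home", "About"),
]

buffer = io.StringIO()
writer = csv.writer(buffer)
for path in taxonomy:
    depth = len(path) - 1
    # Indent the label into the column matching its depth in the tree
    writer.writerow([""] * depth + [path[-1]])

print(buffer.getvalue())
```

Generating the file this way (rather than hand-editing it) makes it less painful to regenerate the CSV after every structure change, which is the main friction point discussed later in this review.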

Setting up tasks

You set up user tasks on a dedicated tab (see above). The way you define the correct answers here will directly affect the results you get from your survey, so make sure you spend the time to think them through properly.

In C-Inspector selecting correct answers is done using the “add navigation path” interface at the bottom of the page. If your structure is deep you’ll be clicking the “Next level” button a lot – I feel Treejack’s interface is a lot quicker in this regard.

Publishing your survey

The study overview page (see above) gives a nice visual summary of the study and ensures that you have typed in the required details on each tab.

Before publishing your survey preview it several times and make sure all tasks can be completed using the proposed website structure. Completing the survey in preview mode will often reveal issues with the IA and help you improve the structure even before you test it with users.

Completing the study

Once you are happy with the survey you can publish it and send a link to the study participants. Here’s what the example survey looks like from the user’s point of view:

C-Inspector Survey - Introduction C-Inspector Survey - One of the tasks C-Inspector Survey - Choosing your answer
Introduction One of the tasks Choosing your answer

I found that the end user experience of C-Inspector is not quite as streamlined as that of Treejack. Navigating around in the structure is less intuitive, and some of the terminology is industry-specific (e.g. “subcategories” and “go back to a superior category”).

On the plus side, C-Inspector allows users to give feedback when they want to skip a task (to describe why they are skipping it) and at the end of the survey (for general remarks) – this is not possible using Treejack.

Reviewing survey findings

Once users have completed the survey C-Inspector gives you a results overview (first screenshot below):

C-Inspector Results - Overview C-Inspector Results - Task results C-Inspector Results - Task details
Results overview Task results Task details

You can also drill deeper and have a look at the results for individual tasks (see above). Update: You can access detailed results for each task by clicking on it. These results include detailed click-through statistics that help you work out where users go off the intended path and where they end up in the structure.

C-Inspector also allows you to download the task vs. taxonomy results matrix as an HTML file, which you can then open in a spreadsheet application:

C-Inspector Results - Results matrix
Results matrix
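If your spreadsheet application doesn’t open the HTML export cleanly, a table like this can be converted to CSV with a few lines of standard-library Python. The markup below is a made-up example of what such a matrix might contain; C-Inspector’s actual export may use a different structure.

```python
import csv
import io
from html.parser import HTMLParser

# Sketch: flatten an HTML results table into CSV rows so any spreadsheet
# can open it. The sample markup is hypothetical, not C-Inspector's export.
class TableToRows(HTMLParser):
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._in_cell = [], [], False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self._in_cell = True
            self._row.append("")

    def handle_endtag(self, tag):
        if tag == "tr" and self._row:
            self.rows.append(self._row)
        elif tag in ("td", "th"):
            self._in_cell = False

    def handle_data(self, data):
        if self._in_cell:
            self._row[-1] += data.strip()

html_matrix = ("<table><tr><th>Task</th><th>Home</th><th>Services</th></tr>"
               "<tr><td>Find pricing</td><td>2</td><td>7</td></tr></table>")

parser = TableToRows()
parser.feed(html_matrix)

out = io.StringIO()
csv.writer(out).writerows(parser.rows)
print(out.getvalue())
```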

The reporting in C-Inspector is impressive and includes details that are not available in Treejack. However, its ease of use doesn’t match Treejack’s: Treejack presents survey results visually, so you can instantly tell which tasks tested well and which need further refinement. The three values Treejack produces – success, speed and directness – are extremely useful in pinpointing where the issues lie.
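To make the three values concrete, here is a simplified calculation over some made-up test sessions. The formulas are my own illustrative versions, not Treejack’s exact definitions: success is the share of participants who reached a correct destination, speed is the average time taken, and directness is the share of successful attempts that took the shortest path.

```python
# Illustrative per-task metrics; the session data and formulas are
# assumptions for demonstration, not Treejack's actual definitions.
sessions = [
    # (reached_correct_destination, seconds_taken, clicks_made, minimum_clicks)
    (True, 12.0, 3, 3),   # direct success: no detours
    (True, 25.0, 6, 3),   # success after backtracking
    (False, 40.0, 5, 3),  # gave up or chose the wrong node
]

success = sum(1 for ok, *_ in sessions if ok) / len(sessions)
speed = sum(t for _, t, *_ in sessions) / len(sessions)

# Directness: of the successful attempts, how many took the shortest path?
direct = [clicks == minimum for ok, _, clicks, minimum in sessions if ok]
directness = sum(direct) / len(direct) if direct else 0.0

print(f"success {success:.0%}, avg speed {speed:.1f}s, directness {directness:.0%}")
```

A task can score high on success but low on directness, which usually signals that users get there eventually but the labels send them down wrong branches first – exactly the kind of issue the click-through detail in C-Inspector helps diagnose.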


Pricing

The pricing for C-Inspector is currently as follows:

  • $99 single study, unlimited duration
  • $219 unlimited studies, 3 month subscription
  • $599 unlimited studies, 1 year subscription

It’s comparable to, if slightly more expensive than, Treejack’s:

  • $109 unlimited surveys/tasks, 30 day subscription
  • $559 unlimited surveys/tasks, 1 year subscription

All prices are in US dollars. Both products offer a free trial plan so you can “try before you buy”.

Strengths and weaknesses

Here’s what I liked about C-Inspector, compared to Treejack:

  • Ability to collect feedback from users when they skip a task and at the end of the survey
  • Ability to select a branch or leaf node of the structure as your answer (Treejack only allows selecting leaf nodes)
  • Detailed results for individual tasks, including click-throughs
  • Good documentation including ‘why’ and ‘how’ to do IA testing
  • Available in two languages (English and German)

Unlike with Treejack, I haven’t completed real-world IA testing using C-Inspector. Even so, based on my trial there appears to be plenty of room for improvement:

  • The CSV format for taxonomy upload is not ideal: every time you want to change the structure you need to edit the file, save it and re-upload it. Compare this with the quick copy-and-paste experience of Treejack and tools like Naview.
  • Taxonomy editing is difficult. The web-based interface allows for minor adjustments only, and any major changes need to be done outside the tool and the file re-uploaded. As above, I feel the text-based approach in Treejack is better.
  • Selecting correct answers is slow. You need to click the “Next level” button a lot, whereas in Treejack the whole structure is displayed at once and you can select multiple correct answers in this single view.
  • Once a study is closed it cannot be reopened or cloned and you need to recreate it from scratch. This means a lot of work if you want to re-run your study or use it as a basis for another one.
  • C-Inspector doesn’t allow randomising tasks or siblings like Treejack does. Randomising helps reduce learning bias when completing longer surveys.

It also seems that there are some early-version teething problems; for instance, I lost the instructions page text a couple of times even though I clicked the respective Save button.

In summary

C-Inspector is a welcome addition to the expanding IA testing toolbox. Like Treejack, C-Inspector helps you test your information architecture independent of the actual website interface and identify potential issues with the classification and labelling before designing a single wireframe.

C-Inspector is a direct competitor to Treejack, and from a functional and pricing point of view they are pretty much on par. However, in practice Treejack feels like the more mature product. It’s all in the details: smoother interaction for survey setup and completion, sleeker visual design (subjective, of course), better copywriting.

Given the above, my current pick would be Treejack. C-Inspector is a great start though, and I’m expecting future revisions to close the gap to Treejack fairly quickly. In fact, Steffen is already working on some of the improvements listed above.

Don’t take my word for it – try out both C-Inspector and Treejack in your next project. Regardless of the tools, conducting even a little bit of IA testing is infinitely better than doing none!

The author is not affiliated with C-Inspector or Optimal Workshop in any way.

Plug: Try our rapid IA prototyping tool, Naview

Author: Jussi

Jussi Pasanen is the founder and principal at Volkside

One thought on “Test your information architecture using C-Inspector”

  1. As mentioned above I received an email from Steffen clarifying how to access detailed results for individual tasks, and I have amended the article accordingly. Have a look at the following screenshots, too:

    Here you can see all user click-throughs aggregated for an individual task:
    This gives you an impression of all user clicks
    You can also see the detailed correct click-through for all answer items
    This gives you a sense of at which step you lose most of your users (you can also check backtracking etc. here)

    Thanks Steffen!
