
Huddle 4

After a night of drinks and food at Capitol Yard Golf Lounge, day 2 kicked off a little later than day 1, and the first huddle was 'Iterating or creating? Risk vs reward', led by David Leese.

David has a wealth of experience from user testing at Dell (he has a great blog here) and so provided lots of use cases and guidelines for running user tests. For anyone who isn't familiar with user testing, it generally means displaying different versions of a page or page section to users and measuring which one is more effective at driving users to complete a goal on the website.

Most people in the huddle relied on web analytics insights to initiate their user testing plans, usually starting with an analysis of the funnel (the key pages in the lead-up to a sale or lead). This is one of the easier usability tests to run, given that the funnel is one of the most important parts of an ecommerce site and its impact is clearly trackable back to ROI.
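
As a rough illustration of the kind of funnel analysis that typically kicks off a testing plan, the sketch below (Python, with invented page names and visit counts) calculates the drop-off between each funnel step; the steepest drop-off is usually the first candidate for a test.

# A minimal funnel drop-off sketch; page names and counts are invented.
funnel = [
    ("product page", 10000),
    ("basket", 3200),
    ("checkout", 1400),
    ("confirmation", 900),
]

for (step, visits), (next_step, next_visits) in zip(funnel, funnel[1:]):
    kept = next_visits / visits
    print(f"{step} -> {next_step}: {kept:.1%} continue, "
          f"{visits - next_visits:,} drop off")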

We also discussed testing on pages that aren't as close to the final outcome, for example the homepage, product pages or services pages. Creating user tests on these pages is still effective, but because they are further from the end goal it can be harder to track impact. In cases like this, it was suggested that defining a website navigation path is key to tracking effectively. For example, if you hope that changing a design element like a homepage banner will drive down bounce rate by getting users to click through on the banner, then use path analysis as a KPI for the user test, complementary to tracking a final outcome like a sale.
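
As a sketch of that path-based KPI (the session export, field names and numbers here are all hypothetical, not anything specified in the huddle), you could compare the bounce rate of sessions that clicked the banner against those that didn't:

# Hypothetical session export: one dict per session from your analytics tool.
sessions = [
    {"clicked_banner": True, "bounced": False},
    {"clicked_banner": True, "bounced": True},
    {"clicked_banner": False, "bounced": True},
    {"clicked_banner": False, "bounced": False},
    # ... thousands more rows in a real export
]

def bounce_rate(group):
    return sum(s["bounced"] for s in group) / len(group)

clickers = [s for s in sessions if s["clicked_banner"]]
others = [s for s in sessions if not s["clicked_banner"]]
print(f"banner clickers: {bounce_rate(clickers):.0%} bounce")
print(f"non-clickers:    {bounce_rate(others):.0%} bounce")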

Another testing initiative that was discussed was testing a site before it goes live. Some participants in the huddle had even served two versions of a site to visitors for three months before they were confident the new version would work better than the old. Even if you don't have three months to test the site, it was stressed that any company deciding to go live with a new version of a site should test it first. But what if the test tells you something you don't want to hear? People generally don't like change, so it is important to give the new version a chance to build enough data for an informed decision: let visitors get familiar with the new version of the site before you compare its performance to the old. However, if the stats still tell you that the new version isn't working, it's time to go back to the drawing board.
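
To put a number on how long such a pre-launch comparison needs to run, a common rule of thumb (not something prescribed in the huddle; the baseline rate, detectable lift and traffic figures below are invented) estimates the required sample per version from the baseline conversion rate and the smallest lift you care about:

import math

# Rough test sizing: ~80% power at the 95% level needs roughly
# n = 16 * p * (1 - p) / delta^2 visitors per version.
baseline = 0.04        # assumed current conversion rate
min_lift = 0.10        # smallest relative lift worth detecting (10%)
daily_visitors = 2000  # assumed traffic per version per day

delta = baseline * min_lift
n = 16 * baseline * (1 - baseline) / delta ** 2
print(f"~{math.ceil(n):,} visitors per version, "
      f"about {math.ceil(n / daily_visitors)} days")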

Day 1 Wrap-up Meeting at Capitol Yard Golf Lounge: Picture courtesy of @semphonic

Which leads us on to expectation management… this is certainly something that needs to be in place before a test is conducted. If you carry out a test, it's important not to assume it will give you the answer you want to hear; that's the beauty of measurement: it will sometimes challenge your intuition. If you don't collect enough valid data, or the test shows that the flashy new version of the site doesn't work as well as the boring old one, don't push ahead with the new version just to please everyone involved.
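
'Enough valid data' can be made concrete with a standard two-proportion z-test on the conversion rates of the two versions. This is just one common sanity check, not a method endorsed in the huddle, and the counts below are invented:

import math

# Two-proportion z-test; |z| > 1.96 is roughly significant at the 95% level.
def z_score(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Invented counts: old version (a) vs new version (b).
z = z_score(conv_a=480, n_a=12000, conv_b=560, n_b=12000)
print(f"z = {z:.2f}")  # ~2.54 here, so the difference is unlikely to be chance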

Which usability products were deemed to be good tools for testing?

Lastly, David left us with some great tips:

  • Using green instead of red on website buttons generally works better.
  • Try not to say 'continue' on buttons; instead use 'proceed to… (e.g. the shopping cart)'.
  • Document every test you do so you can build a library of insights and find out where the 'switch box' is.
  • Put a value on how much a company will lose if they don't run the test, to help buy-in (see the sketch after this list).
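
On that last tip, a back-of-envelope calculation is often enough; the figures below are entirely made up, but they show the shape of the argument:

# Hypothetical "cost of not testing": if an untested redesign quietly
# converts 0.2 points worse, what revenue is at risk each month?
monthly_visitors = 100_000
conversion_drop = 0.002  # 4.0% -> 3.8%, the kind of dip a test would catch
avg_order_value = 60     # EUR

monthly_loss = monthly_visitors * conversion_drop * avg_order_value
print(f"~EUR {monthly_loss:,.0f} per month at risk")  # ~EUR 12,000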

Huddle 5

The last session before lunch was 'Core elements of a successful consumer analytics capability'.

This session was led by Ulla Kruhse-Lehtonen and kicked off with a discussion on big data collection and its use cases.

The use cases listed ranged from collecting data for targeting and personalisation to using it for KPI tracking and building data-driven insights.

Pulling from everyone's experience, there was also a discussion around who within an organisation needs to be involved in building and utilising data sets, with quite a few roles listed: business analysts, IT, data warehouse specialists, strategic planners, data scientists, sponsors for the program, people working in creative, management and, last but not least, someone to ensure data is being collected correctly within local privacy guidelines.

Collecting big data sets, bringing them together and reaping correlated insights is no easy task, and something a lot of organisations still struggle with, but if you are looking for products to support this then Hadoop might help. Amazon Redshift was also praised as an economically efficient data warehouse service.

Finally, a requirement for most businesses to make consumer analytics a success is investment in this area. Similar to what was said in a number of sessions I attended, some of the most effective routes to buy-in are the 'this is what you would lose in € if you DON'T invest in this' argument, or 'don't you know most of your competitors are already investing in this to expand market share…'. Based on experience, these types of arguments for investment, backed up with stats, tend to move approval pretty quickly.
