In this report, I evaluate the design I created in Coursework 1 (University Robot Hurdles Championship), an event to be held in 2018 in which robots designed by students from different universities compete against each other. The report includes a detailed description of the user-test set-up, the participants and how I recruited them, the tasks they were asked to complete, what I used to record the sessions, the data that were collected, and the additional materials used for the evaluation, e.g. questionnaires.
In addition, the report summarises the main findings of the evaluation, both positive and negative, discusses the usability problems that were found, and makes recommendations for improving my design.
Participants and recruitment process
The assessment was set up in a quiet area, with the moderator (me) sitting beside the participant. The environment was kept consistent to avoid anomalies that might influence the participant's behaviour. Each user carried out a different set of tasks, although a few tasks were shared between users (see Appendix 1).
The methods I used were the guerrilla approach and a moderated traditional test. As stated in Lecture 7, 'guerrilla' refers to an 'in the wild' style of user test, as it can be conducted anywhere with lots of footfall, e.g. in a café or library, whereas in a moderated test the 'moderator observes an actual user of a system interacting with it to carry out a set of tasks'. For my user testing I was required to recruit five participants, and I offered each of them a free drink at the City Bar as an incentive. I successfully recruited four participants from City, University of London and one from outside the university.
Participants:
o User 1 – Sophie Scribbins (City University Library)
o User 2 – Firoza Patel (City University Library)
o User 3 – Aimal Hederzada
o User 4 – Martin Ivanov
o User 5 – Fazale Haq
When recruiting the participants, I assured them that their privacy would not be jeopardised in the summary of findings.
Before starting the test, I greeted the participant and explained that I wanted to improve the usability of a system, telling them: 'This is a University Robot Hurdles Championship website, which allows users to view information about the aforementioned event. The purpose of the website is to provide information about the universities (and designers) whose robots are competing and about the robots themselves, and to provide updates on the results of the races and media content, such as videos of the races or interviews with the designers. In addition, to keep users up to date with the latest news about the championship, the website would feature live tweets about it.' I thoroughly emphasised that the test was of the system and not of the user, to ensure that the user behaved in a normal way.
During each session, I recorded what the user did, as this is a vital part of usability testing. I made the recordings with my phone, as this captured the user's responses. As moderator, I reviewed the recordings and took notes only after each user had completed the test, since taking notes while users were present might have intimidated them. There are other methods of recording a usability test, e.g. capturing screen activity, but I used my phone because video recording likely allows more detail to be captured than other methods.
Furthermore, to validate my findings and strengthen the analysis, I gave each user two different types of questionnaire to fill out. The first questionnaire was provided before the test and the second after it. The purpose of the pre-test questionnaire was to find out more about the participant, such as their name, age, occupation, whether they visit sports websites, and whether they had taken part in usability testing before (see Appendix 2). The purpose of the post-test questionnaire was to get feedback about the usability of the website. This included questions such as asking the user to rate each task they undertook according to its ease of completion, and asking for their overall perception of the website (see Appendix 4). This activity was carried out off-camera to save file size. The questionnaires contained quantifiable questions; having both quantitative and qualitative data helped give a clear picture, which in turn highlighted the severity of the issues.
The task itself
The wireframes were printed out, and the users were asked to complete different tasks (the independent variable) for the sake of fairness. The tasks to be completed are listed in Appendix 1.
A detailed description of the user test
To describe the user test, I will break it down into three stages: before the test, during the test and after the test.
Before the test:
1. Printed out the consent form
2. Printed out the questionnaires
3. Organised the set-up: video equipment, wireframe prototype, pens and questionnaires.
4. Ensured participants were taken to the quiet area
During the test: (off camera to save file size)
1. Greeted the tester
2. Explained the purpose of the test to the participant
3. Explained how the test was to be conducted
4. Filled out the consent form (see appendix 6)
5. Explained the tasks which the participant was required to complete
6. Presented some questions to the user about the system
7. Gave the participant a questionnaire to fill out
After the test (off camera), I asked the participant the following questions:
o Was the system easy to navigate through?
o Was the text easily readable?
o Were some of the pages unnecessary? If yes, which ones?
o Do you feel like any of the fields were unnecessary?
o What would you suggest adding to the website?
Summary of main findings
At the start of the testing session, I asked the participants what their first impression of the website was (off camera). Their responses included:
o The design is user-friendly and the elements are easy to find
o Professional, clear and user-friendly
o Menu bar, header, footer and context placed in correct positions
Evaluation of Questionnaire findings
Through analysis of the pre-test questionnaire, I identified the different types of user who might use the website. The participants I recruited had different preferences for using such websites, which was an advantage of the research, as it demonstrated how different people may use the website for different purposes (see Appendix 3). The first chart shows that 4 out of 5 participants sometimes use websites (in general) for various purposes. The second chart illustrates the purposes for which the participants visit such websites. Finally, the third chart shows how many users had taken part in a usability test in the past (all three charts can be found in Appendix 3).
From the post-test questionnaires, I found that participants attempted to complete the tasks as shown in the graph in Appendix 5. Most participants found the system easy to navigate, whereas a few suggested it could be improved. All participants successfully completed the given tasks; however, there were a few issues with the website, which are explained below.
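The per-task ease-of-completion ratings from the post-test questionnaire can be summarised with a short script like the one below. The participant names, task labels and rating values here are hypothetical placeholders (the real data are in Appendix 5); the sketch only illustrates how average ratings per task could be computed.

```python
# Summarise post-test ease-of-completion ratings (1 = very hard, 5 = very easy).
# All values below are hypothetical placeholders, not the real Appendix 5 data.
ratings = {
    "User 1": {"Task A": 5, "Task B": 3},
    "User 2": {"Task A": 4, "Task B": 2},
    "User 3": {"Task A": 5, "Task B": 4},
}

# Collect every task mentioned by any participant.
tasks = {task for user in ratings.values() for task in user}

# Average rating per task, so the hardest tasks stand out.
averages = {
    task: sum(user[task] for user in ratings.values() if task in user)
          / sum(1 for user in ratings.values() if task in user)
    for task in sorted(tasks)
}

# Print tasks from lowest (hardest) to highest (easiest) average rating.
for task, avg in sorted(averages.items(), key=lambda kv: kv[1]):
    print(f"{task}: average rating {avg:.1f}")
```

Sorting by the average surfaces the tasks that gave participants the most trouble, which is useful when deciding which usability fixes to prioritise.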
Evaluation of the sampling technique
The sampling technique I used was the guerrilla method (described above). This method was the most appropriate given the time I had available and the difficulty of finding the right people. However, if I had had more time I would have used techniques such as random or cluster sampling, as they would have provided a fairer test and therefore increased the validity of the outcomes.
Analysis of the video recording (main issues)
Severity scale: 1 = High, 2 = Medium, 3 = Low
A common issue highlighted while analysing the videos was that nearly all users spent a significant amount of time on Task 2, which was 'to indicate the differences between the Gallery and Images pages'. This suggests that the content of these pages did not highlight significant differences clearly. Almost all users stated in the video that there is not much difference between the two pages, as their contents look very similar. Taking this into consideration, action needs to be taken to improve these two pages so that users can identify clear differences between them. The severity rating for this problem is 3 (low): it does not need to be fixed at high or medium priority, because these two pages do not contain vital content.
Another common problem arose when users were asked 'to find the designer profile on the Robots Designers page'. They took some time to find the designer's profile, which suggests that the Robots Designers page is badly designed and not very user-friendly. One user pointed out that the photo of the robot draws the eye because it is larger than the photo of the designer. The same user also suggested that they would like to see a list of all designers and their robots on one page, rather than pressing the left and right arrow keys to view other designer profiles. Taking this into consideration, action needs to be taken to improve this page, as it is one of the main pages where users can find all the designers and their robots, and to ensure that the designer profile is the focal point rather than other content. The severity rating for this problem is 1 (high): it is crucial that this issue is fixed at high priority, as users may lose interest if they cannot find information about a designer right away when directed to this page.
Another issue was highlighted in the task 'click on View Maps'. The user was not sure where to click, as next to 'View Maps' there was a right arrow that served a different purpose, and it took them a while to work out where to click. This again suggests that the design of the page is not right and can be improved. The user also stated in the post-test questionnaire's feedback section that 'The button/link should be better placed'. This means I must improve my design to avoid this confusion by placing 'View Maps' in a more suitable position. The severity rating for this problem is 2 (medium): it should be given priority when fixing usability problems, as it confuses users about where to click. Links and buttons should look clickable, to avoid confusing functionality.
Further issues were gathered from the feedback. I have listed the most serious issues, which caused a negative experience for users.
Issue 1
Description of the issue: Images were not labelled clearly on the home page or on the designer profile.
Severity: Medium – users struggled to distinguish what the different images represented.
How to overcome it: Give the images different sizes, with the most important images larger than the less important ones.

Issue 2
Description of the issue: The newsletter sign-up on the home page was not noticed by users.
Severity: Medium – the page is full of content, so the sign-up is not the first thing a user will see.
How to overcome it: Make the sign-up slightly larger and underline it so it is noticeable.

Issue 3
Description of the issue: Labelling issue – users were not sure why numbers were included on the pages.
Severity: High – confuses users.
How to overcome it: Remove the labelling numbers from each page so it is more user-friendly.
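The severity scale used throughout this analysis (1 = High, 2 = Medium, 3 = Low) naturally gives an order in which to fix the issues. The sketch below sorts the issues found above by that scale; the one-line descriptions are my own paraphrases of the findings, not wording from the test materials.

```python
# Order usability issues by the severity scale used in this report:
# 1 = High, 2 = Medium, 3 = Low (lower number = fix first).
issues = [
    ("Designer profile hard to find on Robots Designers page", 1),
    ("Page-label numbers confuse users", 1),
    ("'View Maps' link placement is confusing", 2),
    ("Image sizes do not signal importance", 2),
    ("Newsletter sign-up not noticeable", 2),
    ("Gallery and Images pages look too similar", 3),
]

# Sort so the highest-severity issues appear first in the fix list.
fix_order = sorted(issues, key=lambda issue: issue[1])

for description, severity in fix_order:
    label = {1: "High", 2: "Medium", 3: "Low"}[severity]
    print(f"[{label}] {description}")
```

Because Python's sort is stable, issues that share a severity keep their original relative order, so ties can be broken simply by listing issues in the order they were discovered.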
Assessing my prototype enabled me to see both the user's and the designer's perspective. Furthermore, finding the usability issues within the design made me reconsider the wireframes and how I would develop such a system in future. When I attempted the same tasks the users had attempted, I was certain that everybody would understand the tasks and find them straightforward. However, I found that the simpler and more user-friendly a website is, the more users will appreciate using it.